2019-11-05 #opensuse-admin - heroes meeting
[20:00:26] cboltz: time to start, i guess ;-)
[20:00:33] yes ;-)
[20:00:37] Hi everybody, and welcome to the heroes meeting!
[20:01:17] Here listening
[20:01:24] Hi
[20:01:29] the topics are on https://progress.opensuse.org/issues/57602
[20:01:57] besides the usual topics, we have the planning for the meeting in Nuremberg, and the disk space on rsync.o.o
[20:01:58] Hi All
[20:02:20] hi mcaj_away
[20:02:32] can you please explain how you can type and be away at the same time? ;-)
[20:03:08] and while you think about that -
[20:03:08] well, multitasking you know ^^
[20:03:09] * kbabioch always knew that martin is a bot :-)
[20:03:12] cboltz: mcaj_away is doing magic all day long - that's nothing special
[20:03:17] does someone from the community have any questions?
[20:03:30] hi, I have a question
[20:03:41] agraul: ask ;-)
[20:03:48] although this might fit the "review old tickets" phase better
[20:04:05] It is about upgrading software-o-o's VM from 42.3 to 15.1
[20:05:01] I am a bit afraid to just zypper dup "on my own", could someone that knows both openSUSE's and SUSE's networks stand by when I do it?
[20:05:41] in case something breaks and a new VM is needed (and the proxy in front might need to be reconfigured)
[20:05:57] agraul: speaking from experience, this will break something and might need attention / changes ...
[20:06:26] agraul: in 99% of cases zypper dup works, and in the worst case, if it does not boot, we need to look at the cluster
[20:06:39] hello again!
[20:06:46] welcome thomic ;-)
[20:06:50] thomic: Hi !
[20:06:57] agraul: I did that last month. It broke after the reboot. I cannot do sudo. :D
[20:06:58] Good evening
[20:07:05] hi everybody who just joined ;-)
[20:07:12] cboltz help me out. :)
[20:07:14] all my ssh keys expired and this IRC VM will die after 10 years .. my provider told me
[20:07:17] :D
[20:07:30] so yes... I have some fun to catch up on before being active-active again
[20:07:36] hi @thomic
[20:07:49] at least I can listen and give clever hints from my old white beard :D
[20:08:32] hello everyone
[20:08:34] hi
[20:09:18] tuanpembual: sounds like the usual "sssd needs a restart". Done, please try again ;-)
[20:10:18] agraul: I know it's more work, but I'd recommend avoiding doing the upgrade on the live instance
[20:10:38] I know how to fix the "sssd needs a restart" problem, but we need to edit all sssd services and add a dependency on the network there...
[20:11:06] mcaj_away: could that be done as a package update? or with salt?
[20:11:12] mcaj_away: should the sssd config not come from salt?
[20:11:13] =)
[20:11:13] mcaj_away: maybe submitting the fix as a maintenance update would make more sense?
[20:11:18] iirc
[20:11:25] cboltz: what about creating a snapshot before the live upgrade? it will most likely work, and if it does not, we can easily rewind
[20:11:36] well ... settings in /etc yes, but not for systemd
[20:11:54] cboltz: that is fine for me. how should I proceed once a new VM is ready and the openSUSE reverse proxy in front needs to be adjusted?
[20:12:33] cboltz: it looks almost like a bug, but I did not find time to report it yet
[20:12:59] mcaj_away: then please do that - or "report" it with a SR ;-)
[20:13:09] that's much better than deploying a workaround with salt
[20:14:03] ok... but back to 42.3: we should also update all 15.0 to 15.1 ... since November the distro is EoL
[20:14:41] no objections on that (and I know that I still have to do that update for the wiki)
[20:14:59] agraul: we can give you something like $host-test on the haproxy for your testing and then just switch it as soon as you say you are fine with the result
[20:15:50] Well, there are some VMs with *special* packages where 42.3 is the last working distro ... and there we have a problem ...
[20:16:16] 15.0 still has 3 weeks of life left
[20:16:23] mcaj_away: are the details documented somewhere?
[20:16:33] mcaj_away: oh, good point - please tell Tomas that he's either blind or lazy ;-)
[20:16:47] you could even keep software-test.o.o for the future, agraul (like redeploying the old VM after the switchover), in case you'd like to do something crazy like "staging" before killing the software.o.o live instance with a new software patch =)
[20:17:31] thomic: where is the fun in that :D, you are right though, that would be a better setup than what we have now
[20:18:22] agraul: yup, it saves our friends from being pinged by cboltz on sunday afternoon because "something is so slow" :D
[20:18:29] I wonder if it would make sense to use btrfs+snapper for some of these
[20:18:46] bmwiedemann2: as soon as it is stable =)
[20:19:08] I thought it is just 50% slower - and takes 4x as much disk space
[20:19:49] you can do that, as I don't need to discuss a higher budget for more disk space anymore :D but be prepared that you might run out of disk
[20:20:13] it would be mostly for OS+conf - not data
[20:20:52] bmwiedemann2: our default VM root disk was usually 10GB, or 20GB max
[20:20:59] if it fits there :) alright
[20:21:10] but if it fills this up very quickly, I wouldn't recommend changing this
[20:21:26] as that small block storage is what keeps the backend fast and smooth
[20:21:43] thomic: get bmwiedemann2 a $host-test instance too, so he can create a POC machine
[20:21:43] would indeed be tight.
[20:22:10] jdsn: don't get me wrong - VM creation is on the EngInfra todo list
[20:22:25] as far as I'm correctly informed?
[20:22:36] ok :)
[20:22:39] kbabioch mcaj_away ^^ is that still the case?
[20:23:03] it is, no self-service unfortunately ... maybe some openstack cloud in the future ;-)
[20:23:33] kbabioch: haha, good one. RedHat offers some products there, I heard :P
[20:23:46] lol
[20:24:28] we got the warez under the counter :-P
[20:24:30] mcaj_away: to explain my sarcastic note - I found an open pull request for helios (a *big* one) to make it work with django 1.11, added it as a patch, and now have the web interface working on 15.1 on a local test VM. I still need some time for celeryd (for background jobs), but that should be the boring part.
[20:24:47] well, there were plans of splitting up the atrejus into openSUSE (or whatever the future name of the project with the green geeko will be) and SUSE
[20:25:16] I don't know if this is still the plan?
[20:26:47] no such plan and/or capacity for such a plan short term
[20:27:03] we could start pushing the tyres for some public-cloud sponsors
[20:27:08] like hetzner or something?
[20:27:19] would that be an idea at least?
[20:27:38] * kbabioch is NOT objecting, but not going to drive this
[20:28:00] yup, sure... well, I would need to check if my time allows that minor "side project"
[20:28:31] but anyways, we need some kind of "public cloud" to move forward i guess
[20:29:07] any ideas / objections from the crowd^^?
[20:29:07] thomic: does it have to be public cloud? or is a VM self-service sufficient?
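An illustration of the upgrade workflow discussed above, assuming a btrfs root with snapper available (the snapshot idea jdsn raises) plus a systemd drop-in for the sssd-after-network ordering that mcaj mentions. This is a sketch only; the real fix was meant to go in as an SR / maintenance update, and the exact repo layout of the software-o-o VM is not shown in the log:

    # create a pre-upgrade snapshot so a broken "zypper dup" can be rolled back
    snapper create --type pre --print-number --description "before 42.3 -> 15.1 dup"

    # point the repos at 15.1 (skip the sed if the repo files already use $releasever)
    sed -i 's/42\.3/15.1/g' /etc/zypp/repos.d/*.repo
    zypper refresh
    zypper dup

    # work around the "sssd needs a restart after boot" symptom by ordering sssd
    # after the network is actually online (illustrative drop-in, not the submitted fix)
    mkdir -p /etc/systemd/system/sssd.service.d
    cat > /etc/systemd/system/sssd.service.d/network-online.conf <<'EOF'
    [Unit]
    Wants=network-online.target
    After=network-online.target
    EOF
    systemctl daemon-reload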
[20:29:29] but maybe we should get back to the agenda and be more specific ... because we're talking mostly about long-term goals / philosophical stuff ;-)
[20:29:32] I think we can collect some ideas about the future, but back to the present... we have a big problem with (big) data on widehat ATM... and we need to:
[20:29:32] a] fix it for now
[20:29:33] b] plan for the future, because the 19 TB of disks will be full again in less than a month ...
[20:30:05] jdsn: well, let's face the truth, the cluster in the NUE basement has limited capacity, with growing needs from SUSE public services I guess...
[20:30:31] kbabioch: sorry =) go on and moderate :P
[20:30:40] cboltz is moderating ;-)
[20:30:49] thomic: OTOH hardware gets more compact + powerful every year
[20:31:01] thomic: let's discuss that later
[20:31:22] bmwiedemann2: what jdsn says - I can explain myself later
[20:31:52] mcaj_away: ... good point ... just to let everybody know, there is no free slot anymore, all 8 disk slots are filled
[20:31:59] 2x 1.5TB system disks
[20:32:14] 6x 4TB RAID5 for download.o.o
[20:32:20] iirc?
[20:32:36] s/download.o.o/rsync.o.o/g
[20:33:32] there should be a monitoring node in the datacenter now - right kbabioch?
[20:33:43] right
[20:33:48] 6x4 is about the 19 TB size we see.
[20:33:58] yes, it is
[20:33:59] well, for now there is another machine that is not doing much and can be used as an ipmi backdoor ... but not used for much more (currently)
[20:34:16] ok, just thinking, is this 1U?
[20:34:20] 1u
[20:34:23] not much disk space
[20:34:24] :/
[20:34:51] as QSC does not monitor what we put there, and we can ask for as many ports as we like I guess, we could put some storage there
[20:35:11] if SUSE would sponsor something like an old Quantum/DotHill
[20:35:34] what we maybe need is:
[20:35:35] a] new machine(s) with a lot of disks
[20:35:35] b] a backend storage with 100TB RAW capacity ...
[20:35:35] and then we are fine
[20:35:35] well, actually i've registered it and they did check up on it ... so we cannot just put anything there
[20:35:47] mcaj_away: why 100TB?
[20:35:58] "ready for the future"
[20:36:03] ah ok
[20:36:06] =)
[20:36:20] yes... and not fight with disk space every half year
[20:36:30] I know of a machine with 12 3.5" slots that is not doing that much...
[20:36:32] well kbabioch, at least they are not "too strict" with their own rules
[20:36:52] bmwiedemann2: is it still in support?
[20:37:06] because we fixed it earlier this year, quickfixed it again and again
[20:37:08] maybe not. but also not that old
[20:37:24] and it would be time to have a final solution for rsync.o.o
[20:37:44] bmwiedemann2: but it would be out of support sooner than if we ask for sponsoring of a new one
[20:37:49] I mean, a lot of production depends on that... afaik
[20:38:11] oh, as a temporary fix - I agree
[20:38:13] I would like to see a device like this https://www.thomas-krenn.com/en/products/rack-server/4u-server/intel-dual-cpu/4u-intel-dual-cpu-ri2424-scalable.html with all 24 disks ...
[20:38:25] (at www.thomas-krenn.com)
[20:38:30] mcaj_away: I disagree a bit...
[20:38:36] I'm not going to install anything unsupported there ... also, putting a lot of disks there might be a challenge ... I don't want to go there regularly to replace disks ..
[20:39:02] if you invest in new machines now, you would want something like 2 servers at least, and maybe two hosts syncing
[20:39:13] so that if one dies, rsync.o.o is not completely down =)
[20:39:27] kbabioch: well, other ideas?
[20:39:41] HA is not trivial either.
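For reference, a quick sanity check of the capacities quoted above (RAID5 usable space is (disks - 1) x size, RAID6 is (disks - 2) x size); the 24-bay numbers are purely hypothetical:

    # current widehat data array: 6x 4 TB in RAID5
    echo "$(( (6 - 1) * 4 )) TB usable"            # -> 20 TB (decimal)
    echo "scale=1; (6-1)*4*10^12 / 2^40" | bc      # -> ~18.2 TiB, matching the "about 19 TB" seen on the box

    # hypothetical 24-bay machine: 22 data disks in RAID6 + 2 hot spares, 4 TB each
    echo "$(( (22 - 2) * 4 )) TB usable"           # -> 80 TB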
[20:40:15] kbabioch: and you usually keep some spare disks in the machine - to activate them on demand - so you only drive there if like 4 disks are dead
[20:40:34] always keep 2 hot-standby drives
[20:40:55] in the 24-bay machine we can even keep 6 ;)
[20:41:04] well, not sure if we're realistic here ... up until now we had a budget of essentially 0 eur/usd for this hardware ...
[20:41:17] we used some old / out-of-support suse hardware
[20:41:30] what about just adding a U2 backend storage there?
[20:41:32] so we can talk all day long about some 24-bay machines and spare disks ... but not sure if it is going to happen
[20:42:01] kbabioch: but we should form an idea of what we need to ask for, right?
[20:42:10] yes
[20:42:21] so I see value in the current discussion
[20:42:34] what if the heroes request it? and maybe not EngInfra?
[20:42:39] like via Gerald etc.
[20:42:40] but then we should also consider paying qsc for their service ... and then we don't have to hope for qsc to be nice enough to take another server of ours
[20:42:45] maybe there is budget then
[20:43:13] because right now we are really relying on "qsc being nice guys" whenever we change anything there
[20:43:25] kbabioch: as long as they appear as a sponsor on our page, we have a deal
[20:43:46] well, they are happy to have us kbabioch =)
[20:44:07] where exactly are they listed -> https://en.opensuse.org/Sponsors
[20:44:08] :-)?
[20:44:48] but to get some progress here ... let's agree on what we need / want to have ... and then see how we can get there ...
[20:45:18] https://mirrors.opensuse.org/ search for QSC kbabioch
[20:45:27] I think we as Heroes should send an email / message to the board that the disk space situation is critical and that we need a] a new machine, b] storage, c] an agreement with QSC and so on
[20:45:29] so, basically, any objections to having 1 (or 2) nodes for http/rsync/ftp/whatever ... and a storage backend?
[20:46:18] https://www.opensuse.org/ kbabioch, and there at the bottom we still have the "old" ipexchange logo
[20:46:48] how reliable is that storage backend compared to the nodes? (just wondering, I never needed such big hardware - and want to avoid creating a SPOF)
[20:47:21] cboltz: the current machine is also a SPOF :)
[20:47:33] I know, but we want to improve things, right? ;-)
[20:47:46] yes, but the immediate issue is the space
[20:47:49] not the SPOF
[20:47:50] if you have a support contract, it works well...
[20:47:56] if we can fix both, great
[20:48:33] speaking of immediate / short term ... is there anything we can do?
[20:48:49] because we also have a problem right now ... and finding budget / ordering hardware ... will take months
[20:48:53] cboltz: so reading between the lines: would you feel better with standard hardware that we can, in the worst case, fix ourselves?
[20:48:54] (if at all)
[20:49:53] jdsn: no need to read between the lines - I don't have experience with big servers or storage hardware, so it was just a "silly question" ;-)
[20:50:13] I'd also really appreciate having our data stored on disks that we can replace with normal things (e.g. SATA or SAS HDDs) without needing a support contract for 100KEUR
[20:50:16] or we ask for something like a CDN sponsor? but the costs are not manageable
[20:50:30] if the experts tell me that they feel comfortable with the storage, then everything is fine
[20:50:34] that would be like 2500-5000 USD for a CDN - i just checked
[20:50:44] per month
[20:51:19] cboltz: well, the contract is expensive, so I would also second bmwiedemann2's view
[20:51:31] ok, wait a second
[20:51:39] thomic: sounds like we could instead buy a new server and several disks each year...
[20:51:40] but if there is a sponsor for storage, I would be fine with that as well
[20:52:09] 1st of all - if you put a storage machine there (like a Quantum QXS) you always have more than 1 controller, you always have 2 controllers connected to the disks
[20:52:39] second, storage systems always have redundant (failover) disks, so you don't need to go there, and they usually take "normal SATA disks"
[20:52:50] We are not talking about NetApp here...
[20:53:11] and if we get a storage + support contract + QSC datacenter support to change disks every time one fails?
[20:53:38] the big advantage I see with having 2x 1U virtualization servers + 1x 2/3U storage system is that you can cross-connect via fibrechannel or iSCSI, and if one of the virt hosts fails, the other one can take over
[20:53:49] alternative suggestion - move widehat to NUE, and upgrade or get a 2nd uplink?
[20:54:12] pjessen: that is not a possibility, as there is no second fibre afaik
[20:54:25] get another one? it's only 3K/annum
[20:54:28] and that is an even longer discussion you would start
[20:54:44] pjessen: you know, in the building, opening streets, etc...
[20:54:56] and I guess fibre is managed by SUSE-IT now
[20:55:01] thomic: I like your idea with 2x 1U plus 1x 2/3U
[20:55:05] so even with that discussion you go down a long road
[20:55:28] surely not - even here in the darkest of Switzerland, I can have a new 1Gbit fibre in less than a week.
[20:55:39] klein: you would not need a QSC hands-on service, as to be honest, we changed disks in the DotHill 2-3 times a year maximum, I guess we can afford that time
[20:55:56] anyway, it was just meant as an alternative.
[20:55:57] pjessen: yes, switzerland
[20:56:02] welcome to germany
[20:56:03] pjessen: I'd guess that in germany it will take you a week to fill out the needed forms ;-)
[20:56:06] (we recently changed multiple disks within a couple of days in our dothill :-))
[20:56:11] haha
[20:56:22] offtopic )))
[20:56:39] so let's have something like a vote, cboltz?
[20:57:02] kbabioch: yes, ok, maybe that was needed, because nobody checked it for months ;) - mcaj_away how are your experiences?
[20:57:21] mine are: we are changing 2 spare/hot-standby disks every 4-6 months?
[20:57:32] ok, cboltz is sleeping
[20:57:43] so let's have a vote on this?
[20:57:44] just out of curiosity, would that be a temporary solution, or even a long-term alternative? https://www.hetzner.de/dedicated-rootserver/sx132
[20:57:56] (at www.hetzner.de)
[20:58:01] I'm not sure if we need a vote ;-)
[20:58:04] depends on the luck ... I had 2 broken disks in 4 years on the dothill for example, and like 4 disks on the netapp in the same time frame
[20:58:15] jdsn: I suggested that more than once :)
[20:58:38] Option 1: Set up one big machine with 24 disks and a lot of spare disks?
[20:58:39] thomic: and....?
[20:58:52] jdsn: technically it's a perfect solution ... but we need to find someone to pay for it ;-)
[20:58:54] Option 2: Set up 2 small virt machines and a storage machine?
[20:59:07] kbabioch: I can ask ;)
[20:59:09] Option 3: Rent hardware somewhere like Hetzner?
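To make the "RAID6 plus hot spares" idea from the thread above concrete, a minimal mdadm sketch; device names and disk counts are made up and this is not the layout of any existing heroes machine:

    # 8 disks: 6 active members in RAID6 + 2 hot spares, so two drives can fail
    # before anyone has to drive to the datacenter
    mdadm --create /dev/md0 --level=6 --raid-devices=6 --spare-devices=2 /dev/sd[b-i]
    mkfs.xfs /dev/md0

    # a failed member is rebuilt onto a spare automatically; swapping it out later:
    mdadm --manage /dev/md0 --fail /dev/sdc --remove /dev/sdc
    mdadm --manage /dev/md0 --add /dev/sdj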
[20:59:29] or offer to become a sponsor ;)
[20:59:38] kbabioch: We will rent a server and we make Melissa pay for it!
[20:59:40] :D
[20:59:48] * kbabioch is all in
[20:59:52] thomic: +1
[21:00:06] so Option 3 is our favourite?
[21:00:12] thomic: please define the "make"
[21:00:25] the nice thing here would be - we could even have 2 servers at some point in the future
[21:00:30] i like option 3 the most, yes ...
[21:00:31] maybe we can check Serverbörse
[21:00:35] I vote for the Hetzner option.
[21:00:35] or 3b) (with a sponsoring offer)
[21:00:37] and have a better price
[21:00:49] jdsn: Hetzner is not our best friend, let's say it like this
[21:00:59] oh.
[21:01:03] jdsn: option 3 only has a 1gbit network - 10GBit might be available at extra cost / per TB used
[21:01:08] so let's change it
[21:01:21] bmwiedemann2: this is what we have now as well :D
[21:01:22] widehat only has 1gbit anyway?
[21:01:26] yeah, right
[21:01:58] I see
[21:02:06] looking at the price tag, the Hetzner option looks quite good - maybe even cheaper than buying the needed hardware
[21:02:27] yeah, the price is really good
[21:02:33] Last time hetzner "sponsored us" was back when $somebody was pointing download.o.o's mirrorbrain directly at their publicly visible customer mirrors without telling them :D - I guess that caused "a bit of traffic on their side"
[21:02:42] and as a sponsor the price might even be perfect :)
[21:02:44] and we have hands-on support
[21:03:01] that's why the name opensuse is a bit burned in-house there
[21:03:30] well, if they run a public mirror... ;-)
[21:03:43] cboltz: it was in the wiki, as a mirror for customers... to be fair here
[21:03:47] cboltz: they changed their mirror to private
[21:03:52] yes...
[21:03:57] anyone willing to ask nicely and/or has some contacts there?
[21:04:04] kbabioch: yes
[21:04:07] both
[21:04:22] are we asking hetzner to sponsor us?
[21:04:36] asking doesn't hurt, i guess ;-)
[21:04:40] pjessen: yes
[21:04:45] ok, got it.
[21:05:01] just for the record
[21:05:03] https://www.hetzner.de/dedicated-rootserver/matrix-sx
[21:05:15] (at www.hetzner.de)
[21:05:21] we could ask for two sx62 instead of 1 sx132
[21:05:23] for now
[21:05:30] as 40TB is enough for us for now
[21:05:38] and upgrade later
[21:05:46] enough for how long?
[21:06:02] furthermore, we could finally have widehat.o.o and rsync.o.o on two different hosts to split http and rsync traffic
[21:06:04] ok, yea, if they let us upgrade any time - sure
[21:06:10] jdsn: well, we have 20TB now
[21:06:18] let's say 20TB is +2 years
[21:06:20] also, 4x 10TB would become 30TB with RAID5
[21:06:39] at least we can upgrade later to an SX132
[21:06:46] so it's like 913.92 EUR per year ...
[21:06:46] with those sizes, I would seriously recommend raid6.
[21:06:46] ok, I have yet to see the statistics about the growth rate
[21:06:47] if we deploy the machine from salt
[21:06:52] that should be easy
[21:07:07] jdsn: ask rudi
[21:07:13] he can provide you with clear numbers
[21:07:19] it's like 500GB per month
[21:07:23] rough number
[21:07:28] pjessen: only for you
[21:08:01] thomic: that would be 6TB/y - I think it is less
[21:08:32] ok, it's 21:10 already and we have more topics ... can we discuss what we want to do short-term ... and for the long term we will wait for jdsn to get in touch with hetzner ...
[21:09:25] I would like to have a short-term plan + one option for the long term + an ActionItem with someone responsible.
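A back-of-the-envelope check of the "20TB is +2 years" estimate, using only the rough numbers quoted in the discussion (about 20 TB used today, about 500 GB growth per month); these are not measured figures:

    used=20                     # TB in use today (rough)
    growth_per_year=6           # 0.5 TB/month * 12
    for capacity in 30 40; do   # the 30 TB (4x 10TB RAID5) and 40 TB figures mentioned above
        echo "$capacity TB: $(( (capacity - used) / growth_per_year )) full years of headroom"
    done
    # -> 30 TB: 1 full year, 40 TB: 3 full years (integer arithmetic, so rough)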
[21:09:28] cboltz: ^^
[21:09:59] i think we have one possible option for the long term, with action item / responsibility (jdsn) ...
[21:10:09] for the (very) short term, would it be possible to abuse a part of the system disk? 2x 1.5 TB should have some space left for packages ;-)
[21:10:11] what about the option to replace widehat's 6x 4TB disks with 6x 12TB?
[21:10:38] unless you can quickly get the budget for it, that's also not really short term :-/
[21:10:39] cboltz: with the risk that you shred your system disks to death
[21:10:45] ~2KEUR
[21:10:53] keep in mind, they are not new like the 4TB disks I put in
[21:11:38] yes, I know
[21:12:01] cboltz: if we are brave, we could also run with a degraded RAID5 and gain an extra 4TB :)
[21:12:07] you need to check if the controller supports 12TB disks, bmwiedemann2
[21:12:17] is there anything we can delete (like we did in the past) ... not really good, but the only way out of this with the current setup?
[21:12:48] klein: what feature does the controller need for that? We don't even need to boot off these
[21:12:54] jdsn: well, the data there is not critical in the sense of "doesn't exist anywhere else"
[21:13:28] but running without any redundancy might be asking for too much :-/
[21:13:29] kbabioch: sure, that's why I mentioned it, but still it's risky, because if one disk then dies, the whole service dies
[21:13:57] from the past I see on the server that we deleted home repos ...
[21:13:59] coming back to my question: anything (home projects, etc. pp.) we can delete for the time being?
[21:14:13] I would highly!!! recommend not to exchange anything in the existing machine
[21:14:26] better to break up the RAID instead of exchanging something
[21:14:45] the system is old, I brought it back to life with a lot of fun
[21:14:57] and I wouldn't recommend putting new disks in there, it is wasted money
[21:15:05] as the 800 euros for the 4TB disks was
[21:15:13] but back then, we had a dead machine
[21:15:18] at least now we have a machine
[21:16:30] kbabioch: you can smash the home repos
[21:16:44] and the resync will take at least 1 month or so via the slow lines
[21:16:56] so you win 1 month to get the hetzner thing running
[21:17:01] if not, delete the home repos again
[21:17:11] not the best solution, but it helps
[21:17:12] yeah, this will be taking more time / iterations i guess :-/
[21:17:25] but remember, people will start complaining
[21:17:36] as on their rsync targets, the home repos will be deleted as well
[21:17:37] shouldn't we also add these deleted dirs/repos to an --exclude line in the rsync that copies the files over there?
[21:17:37] we can have a monthly cron job for that :D
[21:17:42] thomic: there will be more complaints if the server dies
[21:18:05] there are customers building in OBS... syncing to their private mirror via rsync.o.o, specifically their home repo
[21:18:21] and they always complain when it goes down
[21:18:28] (i mean the home repos)
[21:18:34] as they use them for production
[21:18:37] well, more people will complain if nothing works anymore :-)
[21:18:42] yes... i see
[21:18:47] i did it in the past
[21:18:49] it works
[21:18:56] but be prepared for whining people
[21:19:07] klein: 21:17:37 < klein> shouldn't we also add these deleted dirs/repos to an --exclude line in the rsync that copies the files over there?
[21:19:10] nope
[21:19:21] you don't want to touch the rsync fun now :D
[21:19:41] ok :-)
[21:19:50] don't destroy two things at the same time
[21:20:11] mcaj_away: well, it's very bad behaviour
[21:20:15] we could also be more subtle with home: and do find -mtime +30 -delete or such
[21:20:18] as scanner.o.o is seeing home repos this second
[21:20:24] and the next second they're gone
[21:20:36] wild idea: instead of deleting repos, freeze them. Until a solution is implemented. :-?
[21:20:59] robin_listas: nay, people expect to get the latest updates from mirrors
[21:21:03] removing home will give us back 6tb: 6.1T home:/
[21:21:03] robin_listas: how exactly do you want to freeze the OBS?
[21:21:13] kbabioch: you can do it!
[21:21:18] asking...
[21:21:24] the problem is that we're just mirroring what obs is doing
[21:21:28] we cannot freeze obs
[21:21:32] sure, we just tell the guys to freeze OBS
[21:21:43] until somebody pays for the rsync.o.o replacement
[21:21:49] that might give that project some new drive
[21:21:50] :D
[21:22:07] Or ask devs to brake deving as much as they can
[21:22:07] indeed ;-)
[21:22:44] what about a *really bad* idea ... take an external disk with like 10TB capacity, plug it into a USB 3.0 port and move the home repos there?
[21:22:58] thomic: yes
[21:23:04] well, actually a good thing to do is to talk to the devops guys about OBS repos that waste a lot of space
[21:23:13] * cboltz wonders if that old server has USB 3.0
[21:23:17] they do a clean-up round from time to time
[21:23:21] if you ask for it
[21:23:27] mcaj_away: even this would only be a mid-term solution ... i.e. we have to buy and order it ...
[21:23:39] mcaj_away: AHHHHHHHHHHHHHHHHHHHHHHHHHH!
[21:23:59] it's like it's there, but just slow ....
[21:24:20] "hey what's that USB disk in your data center for?" - "ah never mind, just holds some very important data"
[21:24:33] no usb 3 controller on that machine
[21:24:38] i guess so^^^
[21:24:42] it is freaking old
[21:24:48] just checked, there isn't
[21:24:49] BTW what about excluding 42.3 repos ...
[21:24:51] :D
[21:24:57] and have fun transferring 6tb over usb 2.0 :-)
[21:25:08] free pcie slot? *duckandrun*
[21:25:13] thomic: we could attach two disks, and make a RAID1 *g,d&r*
[21:25:51] ok, but on a serious note ... let's do: a.) delete home (for now, as we are at 100% capacity) ... and b.) ask hetzner for sponsoring ... otherwise we won't finish with this topic
[21:26:10] any (strong) objections / pragmatic suggestions?
[21:26:10] jdsn: https://www.seedhost.eu/dedicated-seedboxes.php
[21:26:16] maybe you can ask there as well^^
[21:26:40] yes, let's do it like that
[21:27:00] yes, and c) if we don't get it sponsored, find someone at SUSE to pay for it
[21:27:59] kbabioch: but we do not need to delete all of home
[21:27:59] I would force them to pay for it
[21:28:05] just to get a bit of pain as well
[21:28:13] after years of ignoring the topic
[21:28:16] but hey :D
[21:28:19] just my 2 cents
[21:28:29] bmwiedemann2: not sure what will happen if we remove files "randomly" (i.e. only old and/or new ones)
[21:28:49] jdsn: ? are you writing a letter somewhere in etherpad or so? i would contribute and send it to seeboxes.eu?
[21:28:51] but i'm happy with whatever buys us some time and gives us back some of the 6tb
[21:29:26] AFAIK mirrorbrain should handle it even if you delete just a single file somewhere
[21:29:29] thomic: I am using my contact to talk to them
[21:29:38] can you selectively delete 42.3 and older home repos, for instance?
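A sketch of what such a selective cleanup could look like; the find commands that were actually used appear a few lines further down, while the mirror root, the rsync source, and the exclude pattern here are illustrative only (and, as noted above, the rsync setup itself was deliberately left alone for now):

    MIRROR=/srv/pub/opensuse/repositories

    # dry run: list the 42.3 home repos and sum up how much space they hold
    find "$MIRROR/home:" -type d -name 'openSUSE_Leap_42.3' -prune -print0 \
      | du -ch --files0-from=- | tail -n1

    # the actual removal
    find "$MIRROR/home:" -type d -name 'openSUSE_Leap_42.3' -prune -exec rm -rf {} +

    # klein's idea: keep the deleted repos from being re-synced (hypothetical rsync invocation)
    rsync -aH --delete --exclude='/home:/*/openSUSE_Leap_42.3/' rsync://stage.example.org/module/ "$MIRROR/"

    # bmwiedemann2's gentler variant: only drop home: files untouched for a month
    find "$MIRROR/home:" -type f -mtime +30 -delete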
[21:30:14] yes, that should be possible
[21:30:26] thomic: seedhost? seedboxes?
[21:32:19] robin_listas: yes, find home: -path \*/openSUSE_Leap_42.\?/\* -delete or so
[21:32:27] jdsn: yes =)
[21:32:30] linode.com has a datacenter in Frankfurt, maybe they want to sponsor us if hetzner doesn't
[21:32:44] klein: do you have a contact there?
[21:33:16] I have had a VM on US linode for... maybe 5+ years... but have no contact
[21:33:17] -> find /srv/pub/opensuse/repositories/home\:/ -name 'openSUSE_Leap_42.3'
[21:33:20] we can ask
[21:33:23] that's what i can offer
[21:33:30] kbabioch: go for it :D
[21:33:32] maybe I can open a ticket, and see what happens
[21:33:38] I have a contact at https://vpsfree.cz/
[21:33:42] or even the version from bmwiedemann2 to get rid of everything 42
[21:33:48] well, we should coordinate our sponsoring efforts ...
[21:33:55] not run around in headless chicken mode
[21:34:00] yeah, I don't like to do that
[21:34:08] kbabioch: I wonder how many openSUSE_1* are left in there
[21:34:31] malaka! just delete the home repos for now... people will complain anyways.
[21:34:35] maybe in the end we can have more cloudhat servers from different sponsors ^^
[21:34:53] those who use home repos for production have 42.3 in their production as well as windows xp
[21:34:58] you can't change people
[21:35:31] hey, no Greek swearing plz
[21:35:43] https://etherpad.opensuse.org/p/rsyncsponsors
[21:36:36] pjessen: :-D
[21:36:47] ok, removing repos now ... takes a while
[21:37:17] it will buy us some time, but let's make sure to investigate the other options ...
[21:37:26] when it's done, please report back how much disk space those 42.x repos took ;-)
[21:37:27] we should *really* move on to other topics, though
[21:37:37] agree
[21:37:42] like the heroes meeting
[21:37:43] :-)
[21:37:47] exactly ;-)
[21:38:46] I have little to report, I have fixed the repopusher, but otherwise october has been very busy with other stuff
[21:39:06] forums - no progress. I think I need someone to push MFIT.
[21:40:02] I know this joke is getting old, but - maybe ask TSP for a flight to Provo, and take some safety boots with you?
[21:40:36] cboltz: oooh, nice idea! I'll file a TSP request right away. :-)
[21:41:46] RobertW and Renato are onsite in Provo - so maybe they could use their safety boots?
[21:42:30] sounds good, can you please ask them?
[21:42:50] I don't know those names, can someone send me their addresses please?
[21:43:58] I have a little update too. Testing the ichain-plugin on https://progress-test.opensuse.org. Need to test the login status. I log in on connect and open progress-test.o.o. My status is logged in, but I cannot access any page. Still using the old backup db (201904xx), running the db on the same machine as progress-test.o.o. Will try another test case.
[21:45:02] (intermediate report: we've gained 0.6 TB by removing all of the 42.3 home repos)
[21:50:16] tuanpembual: progress-test.o.o looks quite good to me - I can access overview pages (like the user list or ticket list), but trying to view a ticket gives an internal error
[21:50:51] so even if there are still problems, you are making progress :-)
[21:51:42] maybe the ticket links are still using the original url: progress.o.o
[21:52:06] I'll debug it by looking at the DB dump.
[21:52:17] no, it links to progress-test
[21:52:43] the log should (hopefully) tell you why it errors out
[21:52:58] sure.
[21:54:14] hmm, do we have some breakage in the VPN?
[21:54:30] it worked for me ~2 hours ago, but now I can't connect anymore :-(
[21:54:52] same for me.
[21:56:00] the most relevant message is probably AUTH: Received control message: AUTH_FAILED
[21:56:10] The same for me :(
[21:56:18] sudo: unknown uid 1366800077: who are you ?
[21:56:46] sounds like we might have a problem with FreeIPA...
[21:57:12] (the initial connection and cert validation work, the failing part is probably the username/password check)
[21:57:37] I will check the VM on atreju ...
[21:57:39] just a second
[21:57:48] cboltz: ah, I was wondering about that
[21:59:41] yes ... the sssd service is down: Active: failed (Result: exit-code)
[21:59:58] great :-/
[22:00:08] does the log indicate why it failed?
[22:00:29] fixed, but there was no reboot ... so I'm not sure why that happened
[22:01:01] feel free to investigate it ...
[22:02:50] hmm, /var/log/sssd/sssd_infra.opensuse.org.log looks interesting[tm]
[22:03:16] it's 22:00 CET for me, time for dog beer ...
[22:03:52] I know it's late, but - should we do some planning for the offsite meeting?
[22:04:17] like topics to discuss, maybe workshops etc.?
[22:04:20] how many people will join?
[22:04:28] what i need to know: where did you go the last time for dinner on friday? and when?
[22:04:32] i need to make some reservations
[22:04:42] we are 10-15 people as of now
[22:05:54] what about Zeit und Raum?
[22:05:58] kbabioch: you mean, you don't want to book the same location twice?
[22:06:10] i'm also fine with booking the same location
[22:06:14] i was not part of this event last time
[22:06:18] so i'm asking for some guidance here
[22:06:39] then maybe ask for recommendations/wishes :)
[22:07:20] I can probably look up where we went in my mail archive, but OTOH - if you know a nice place, just make a reservation there
[22:07:36] or one of the top 10 ^^ ? ( https://theculturetrip.com/europe/germany/articles/the-top-10-bars-in-nuremberg-germany/ )
[22:07:38] ok ... and time-wise?
[22:07:48] (at theculturetrip.com)
[22:07:50] pjessen and oreinert will arrive at ~18:00 (or so)
[22:08:07] what about dinner at 19:00?
[22:08:29] fine with me ...
[22:08:59] should be fine.
[22:09:16] okay, will let you know via mail
[22:09:21] I'm also fine with 19:00
[22:09:30] is there anything else? has everyone (outside of suse) been contacted and is set up?
[22:09:54] for the suse employees ... you'll need to speak to your manager and book a hotel via bcd when you are remote
[22:10:18] otherwise we have a dinner on friday ... and two days of sessions ...
[22:10:39] any topics that we definitely want to discuss and that are not on the agenda yet?
[22:10:48] also, is any preparation needed / recommended?
[22:11:38] is the SUSE guest network still restricted to http and https? If yes, I can recommend a nice VPN on port 443 somewhere ;-)
[22:11:52] i'll add some stuff for the agenda, via email or the list
[22:12:19] cboltz: to be honest ... not sure, I'm not using it much :-/
[22:13:10] I need to go ... So CU next week on Friday in Nuremberg ;)
[22:13:42] when I was last there (in May), my VPN tunnel was quite helpful ;-) and I'd be surprised if it changed since then
[22:14:47] but maybe that's only a technical detail - traditionally, we rarely use computers during the heroes meeting ;-)
[22:17:24] ok ... seems like there is nothing else for the heroes meeting
[22:17:29] i will make a reservation for friday
[22:18:21] maybe also for saturday?
[22:18:52] yup, makes sense. and during the day ... we will have to order something (e.g. pizza)
[22:19:24] sounds like a plan :-)
[22:22:39] so - thanks everybody for joining the meeting today, and see you in Nuremberg soon!
[22:23:43] thanks all, and good morning.
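As a closing aside: the sssd/VPN outage near the end of the meeting is the kind of thing that can usually be triaged with standard systemd tooling. A generic checklist, assuming a systemd host and the log path mentioned above; the user name is a placeholder, and this is not a record of what was actually run on the atreju VM:

    systemctl status sssd.service                    # shows the "Active: failed (Result: exit-code)" state
    journalctl -u sssd.service --since "2 hours ago" # why it died
    tail -n 50 /var/log/sssd/sssd_infra.opensuse.org.log

    systemctl restart sssd.service                   # the quick fix applied during the meeting
    getent passwd 1366800077                         # the unknown uid should resolve via FreeIPA again
    id some-hero-user                                # placeholder account; sudo and VPN auth should follow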