communication #27424 » 2017-11-07-heroes-meeting.txt

IRC log - cboltz, 2017-11-07 21:12

 
2017-11-07 heroes meeting

[20:05:04] <cboltz> so - welcome everybody to the monthly Heroes meeting!
[20:05:19] <cboltz> does someone from the community have questions?
[20:05:33] <tampakrap> I'm moving the topics from last meeting
[20:06:29] <tampakrap> added
[20:07:02] <cboltz> looks like nobody has questions, so let's move to the status reports
[20:07:38] <kl_eisbaer> who should start?
[20:08:11] <cboltz> what about you? ;-)
[20:08:17] <kl_eisbaer> :D
[20:08:40] <kl_eisbaer> Monitoring: we are now at 40 VMs and >900 monitored services
[20:08:41] <Ada_Lovelace> You can tell us a lot. :)
[20:08:59] <tampakrap> wow
[20:09:20] <kl_eisbaer> as I wrote in the news, not only the checks are interesting, the information provided via the graphs is also helpful
[20:09:39] <kl_eisbaer> ...so - for example - to see how many people are logged in via VPN at any given time ;-)
[20:10:07] <kl_eisbaer> I plan to get more information also from our nginx and lighttpd instances, but this seems to be a bit tricky
[20:10:49] <kl_eisbaer> JFYI: I disabled notifications for the Updates check, as this turned out to be the service with the most notifications ;-)
[20:11:17] <kl_eisbaer> ..and as we update on Thursday anyway, I think it is ok to disable notifications for this check
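
Disabling notifications for a single service in Icinga 1.x is one directive in the service definition; a minimal sketch (template, host and command names are placeholders, not the actual heroes config):

    define service {
        use                    generic-service
        host_name              some-vm.infra.opensuse.org   ; placeholder
        service_description    Updates
        check_command          check_zypper_updates         ; placeholder
        notifications_enabled  0   ; the check keeps running, alerts stay silent
    }
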
[20:12:03] <kl_eisbaer> I also wrote a short "run_zypper_up" script, which is in /root/bin/ on monitor.infra.opensuse.org
[20:12:14] <pjessen> I'm back
[20:12:17] <cboltz> would it be possible to run the update check _after_ the cronjob that auto-installs most updates?
[20:12:40] <kl_eisbaer> if you log in to this machine via SSH and have agent forwarding enabled, running this script helps to automate the maintenance on Thursday
[20:12:55] <kl_eisbaer> cboltz: the update check *is* running after that, yes
[20:13:40] <cboltz> :-)
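
The script itself is not quoted in the log; a minimal Python sketch of what a run_zypper_up-style helper might do, assuming it simply loops over the hosts and runs zypper via ssh (the host list is a placeholder, authentication comes from the forwarded agent):

    #!/usr/bin/env python3
    """Hypothetical run_zypper_up-style helper: ssh to each host and
    apply updates, relying on ssh agent forwarding for authentication."""
    import subprocess

    HOSTS = ["anna.infra.opensuse.org", "elsa.infra.opensuse.org"]  # placeholder list

    for host in HOSTS:
        print(f"== {host} ==")
        # -t allocates a tty so zypper output is shown live
        subprocess.run(["ssh", "-t", f"root@{host}",
                        "zypper", "--non-interactive", "up"], check=False)
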
[20:13:47] <kl_eisbaer> Other topic:
[20:13:57] <kl_eisbaer> The Provo mirror now supports the http2 protocol
[20:14:18] <kl_eisbaer> ...but we need to investigate if libcurl uses that protocol automatically or not
[20:15:15] <kl_eisbaer> I talked with the zypper maintainer, and he told me that it might be possible to add a configurable feature, if needed. So zypper could make use of the http2 protocol, too
[20:15:30] <kl_eisbaer> tcp fast open is enabled anyway already
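
Whether libcurl actually negotiates HTTP/2 can be probed from the curl CLI; a small Python wrapper, assuming curl >= 7.50 for the %{http_version} write-out variable (the mirror URL is a placeholder):

    #!/usr/bin/env python3
    """Check which HTTP version curl/libcurl negotiates with a mirror."""
    import subprocess

    URL = "https://provo-mirror.opensuse.org/"  # placeholder URL

    out = subprocess.run(
        ["curl", "-s", "-o", "/dev/null",
         "--http2",                  # offer HTTP/2, fall back to 1.1 if refused
         "-w", "%{http_version}",    # print the negotiated version
         URL],
        capture_output=True, text=True, check=True)
    print("negotiated HTTP version:", out.stdout)  # e.g. "2" or "1.1"
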
[20:16:07] <kl_eisbaer> The Nuremberg mirror (aka download.opensuse.org) is getting closer to a state where I can open it up to the heroes:
[20:16:23] <kl_eisbaer> SUSE specific stuff has been migrated to another machine already
[20:16:47] <kl_eisbaer> so what's left is to have the machine re-installed with openSUSE and place it in the heroes network
[20:17:05] <pjessen> sounds good
[20:17:14] <kl_eisbaer> we might need a special network interface for the push from the OBS, but that should not be a big problem
[20:17:24] <cboltz> will that re-install happen with salt? ;-)
[20:17:34] <kl_eisbaer> I hope to get this done next week
[20:17:49] <kl_eisbaer> cboltz: probably not - as long as I do it
[20:18:12] <kl_eisbaer> Next mirror under our control: widehat, aka rsync.opensuse.org
[20:18:24] <kl_eisbaer> this host currently has a major problem: disk full
[20:18:51] <kl_eisbaer> as we cannot extend the storage of this machine, I'm currently thinking of declaring our mirror in Provo the new rsync.opensuse.org
[20:19:19] <kl_eisbaer> as this mirror in Provo is currently the only one that has everything that is also available on download.opensuse.org
[20:19:36] <kl_eisbaer> => in the end it's just about the DNS name
[20:19:40] <Ada_Lovelace> Is this mirror working correctly now?
[20:20:00] <kl_eisbaer> no problem reports so far (other than one bug report about the used style sheet)
[20:20:06] <Ada_Lovelace> I know about problems with the provo mirror in the past...
[20:20:11] <tampakrap> it doesn't have ipv6 though, would that be a problem?
[20:20:17] <kl_eisbaer> Ada_Lovelace: I guess the problem in the past was just a missing redirector setup
[20:20:41] <kl_eisbaer> tampakrap: right, but latest rumor has it that MF-IT is able now to assign IPv6 addresses in Provo, too
[20:20:50] <tampakrap> ah nice
[20:20:53] <Ada_Lovelace> nice
[20:21:15] <kl_eisbaer> tampakrap: so let's wait for my request for IPv6 addresses, check that the rsync modules (and the "knapsack" stuff) work
[20:21:24] <kl_eisbaer> and then switch over
[20:22:05] <kl_eisbaer> I still need to get some approval because of the additional bandwidth used, but currently I don't really see any other way to provide a fully equipped rsync.opensuse.org
[20:22:41] <kl_eisbaer> ...or we find a mirror as "sponsor" that hosts all the stuff, including the knapsack module, for us
[20:23:16] <kl_eisbaer> JFYI: the python-knapsack scripts fill up the 80g, 160g, 320g, ... modules with content based on the access logs
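
The python-knapsack scripts themselves are not shown here; as a rough illustration of the idea - fill a size-capped module with the most-requested content first - a greedy Python sketch over made-up access-log numbers:

    #!/usr/bin/env python3
    """Greedy knapsack sketch: pick the hottest paths from access-log
    stats until the module's size cap is reached. Inputs are made up."""

    CAP = 80 * 1024**3  # the "80g" module, in bytes

    # (path, size in bytes, hits) as it might be derived from access logs
    stats = [
        ("/distribution/leap/42.3/iso/dvd.iso", 4 * 1024**3,  9000),
        ("/update/leap/42.3/oss/pkg.rpm",       20 * 1024**2, 7000),
    ]

    used, chosen = 0, []
    # most-requested first; ties broken by smaller size
    for path, size, hits in sorted(stats, key=lambda s: (-s[2], s[1])):
        if used + size <= CAP:
            chosen.append(path)
            used += size
    print(f"{len(chosen)} paths, {used / 1024**3:.1f} GiB used")
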
[20:23:43] <cboltz> what exactly does " we can not extend the storage" mean? All disks in NBG full, or just a restriction on this VM?
[20:23:59] <kl_eisbaer> The widehat machine is not running inside the Nuremberg office
[20:24:30] <kl_eisbaer> cboltz: widehat's place is sponsored by QSC - and it is an old machine with "just 7TB" capacity
[20:24:59] <kl_eisbaer> cboltz: does that answer your question?
[20:25:02] <Ada_Lovelace> I know the new hosting boss at QSC. He changed from 1&1 to them after my election.
[20:25:05] <cboltz> yes, thanks
[20:25:08] <Ada_Lovelace> Should I write him?
[20:25:25] <kl_eisbaer> Ada_Lovelace: in the past, our problem was that they want to end the sponsoring ...
[20:25:28] <Ada_Lovelace> That's the ex 1&1 hosting boss.
[20:25:35] <Ada_Lovelace> Oh...
[20:25:45] <kl_eisbaer> ...so we tried to avoid any requests ...
[20:26:00] <pjessen> I guess it's more about bandwidth than the actual disk space?
[20:26:02] <kl_eisbaer> as long as the machine is up and running, everything is ok for us
[20:26:35] <kl_eisbaer> pjessen: in the past, widehat was indeed a bandwidth saver for the NUE office
[20:26:48] <kl_eisbaer> especially as we had a dark fiber to their data center
[20:27:09] <kl_eisbaer> so we could push stuff from download.o.o to widehat via that dark fiber
[20:27:31] <kl_eisbaer> ...and everybody else was downloading from widehat using the bandwidth from QSC
[20:27:39] <pjessen> yeah, I get the picture.
[20:27:56] <kl_eisbaer> the machine is an old system with a local RAID controller
[20:28:11] <kl_eisbaer> the maximum amount of disks is installed
[20:28:20] <kl_eisbaer> ...and it sadly supports only 1TB disks
[20:28:28] <kl_eisbaer> so extending it is not possible
[20:28:54] <kl_eisbaer> we might be able to replace the machine silently with a new one with bigger disks, but so far I don't see that happen
[20:29:45] <pjessen> how about a bigger raid controller? I have some spare that will take 2TB disks.
[20:30:04] <kl_eisbaer> might be possible, but I need to check the hardware first
[20:30:23] <Ada_Lovelace> You don't have access to the data center?
[20:30:35] <kl_eisbaer> Ada_Lovelace: I have access, yes.
[20:30:53] <kl_eisbaer> But I need to fire up the next VPN to get access to the machine for checking dmidecode ;-)
[20:31:40] <kl_eisbaer> => what about moving the discussion about widehat to "later", or rather the mailing list, once I have the hardware details ?
[20:32:07] <cboltz> sounds like a good idea
[20:32:09] <pjessen> yep.
[20:32:16] <kl_eisbaer> Next topic: logs from #opensuse-admin
[20:32:36] <kl_eisbaer> It's true, I'm lazy ;-)
[20:32:56] <kl_eisbaer> ...and I'm not running my IRC client all day, but our bot is there...
[20:33:23] <kl_eisbaer> ...so I - in my glory ;-) decided to let it log for me, but I guess this might be useful for others, too?
[20:33:31] <kl_eisbaer> => https://monitor.opensuse.org/heroes/
[20:33:55] <kl_eisbaer> I could put the URL behind LDAP auth, but at the moment, I think it might be useful also for others.
[20:34:01] <kl_eisbaer> => what do you think?
[20:34:11] <tampakrap> leave it public, it's a public channel after all
[20:34:18] <Ada_Lovelace> It should be public...
[20:34:26] <cboltz> yes, keep it public
[20:34:41] <kl_eisbaer> fine with me ;-)
[20:34:47] <cboltz> but maybe add a note to /topic to make people aware that the channel is logged
[20:34:58] <kl_eisbaer> cboltz: up to you ;-)
[20:35:15] <kl_eisbaer> we might also put a link to the log in our wiki page ?
[20:35:24] <Ada_Lovelace> But we should talk about how long the logs should be kept, and when they get cleaned up.
[20:35:33] <cboltz> I doubt I have permissions to update /topic
[20:35:55] <kl_eisbaer> cboltz: IMHO /op should work in this channel for everyone
[20:36:00] <tampakrap> +1 for the link to the wiki page
[20:36:13] <kl_eisbaer> Ada_Lovelace: any suggestions for the time frame?
[20:36:13] [Error] You have to be operator in #opensuse-admin to do that.
[20:36:52] <cboltz> kl_eisbaer: no, /op gives me "you have to be operator in #opensuse-admin to do that"
[20:36:53] <Ada_Lovelace> 1 - 2 months (depends on the size of the disk)
[20:37:19] <kl_eisbaer> cboltz: sorry, I mean you should ask chanserv to become op
[20:37:41] <kl_eisbaer> cboltz: if you have a suggestion for the topic header, I can do that later for you ;-)
[20:37:53] <kl_eisbaer> Ada_Lovelace: the size does not really matter
[20:38:35] <kl_eisbaer> Ada_Lovelace: we are speaking about 472k at the moment
[20:38:45] <cboltz> I'd vote to keep the logs forever like we do with ML archives
[20:38:46] <Ada_Lovelace> Then 3 months should be more than enough... It could be that somebody wants to refer back to a task in the chat.
[20:39:09] <cboltz> deleting something from the internet doesn't work anyway ;-)
[20:39:26] <kl_eisbaer> cboltz: +1 from my side
[20:39:32] <Ada_Lovelace> Ok
[20:39:41] <tampakrap> restrict the search engines maybe?
[20:39:41] <kl_eisbaer> if we need to censor something, we can do it on the monitoring host anyway
[20:39:51] <kl_eisbaer> tampakrap: fine with me
[20:40:21] <tampakrap> okay
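
Restricting search engines would usually just mean a robots.txt on monitor.opensuse.org; a minimal sketch, assuming the logs stay under the /heroes/ path linked above (this only keeps well-behaved crawlers out, of course):

    User-agent: *
    Disallow: /heroes/
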
[20:40:33] <tampakrap> the bot is amazing, thanks a lot for it!
[20:40:54] <kl_eisbaer> tampakrap: you can even enhance it to a "meet bot" ;-)
[20:41:14] <tampakrap> where is it running btw? on scar?
[20:41:24] <kl_eisbaer> tampakrap: ...or connect it to the rabbitmq queue and get information pushed here once a package in openSUSE:infrastructure is built
[20:41:34] <kl_eisbaer> tampakrap: it's running on the monitor machine ;-)
[20:41:35] <tampakrap> yeah that would be amazing
[20:41:55] <tampakrap> or to send new merge requests and commits in the salt repository
[20:42:22] <kl_eisbaer> tampakrap: ...and all you need for this is netcat ;-)
[20:42:57] <kl_eisbaer> I've placed an example script in /root/bin/send_irc_message on the monitor machine
[20:43:09] <kl_eisbaer> ...if you want to try it out.
[20:43:33] <kl_eisbaer> At the moment, the port is bound to localhost, but I can open it up to the whole infra.opensuse.org network, if this is needed
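
The send_irc_message script is only referenced, not quoted; since netcat is all that is needed, a minimal Python equivalent (the port number and the newline-terminated plain-text protocol are assumptions):

    #!/usr/bin/env python3
    """Hypothetical send_irc_message equivalent: push one line to the
    bot's TCP socket on the monitor machine, which relays it to IRC."""
    import socket
    import sys

    HOST, PORT = "localhost", 12345   # port is a placeholder
    msg = " ".join(sys.argv[1:]) or "hello from the monitor host"

    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.sendall(msg.encode() + b"\n")   # assumed: one plain-text line per message
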
[20:44:27] <kl_eisbaer> Other topic: galera cluster ?
[20:44:28] <heroes-bot> PROBLEM: MySQL WSREP recv on galera2.infra.opensuse.org - CRIT wsrep_local_recv_queue_avg = 1.184783 ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=galera2.infra.opensuse.org&service=MySQL%20WSREP%20recv
[20:44:44] <kl_eisbaer> ...no, that ^^ was not planned ;-)
[20:44:54] <cboltz> I already wondered ;-)
[20:44:56] <tampakrap> hahahaha
[20:45:06] <kl_eisbaer> but as you can see, the cluster still needs some fine tuning
[20:45:30] <kl_eisbaer> I just took some default values and combined them with some "good practices" from our internal cluster at SUSE
[20:45:56] <kl_eisbaer> At the moment, I have 3 interesting things:
[20:46:36] <kl_eisbaer> * mysql-tuning-scripts is updated and now contains the latest mysqltuner.pl script, which analyzes a running mysql instance and (new) also gives some hints for galera clusters
[20:47:30] <kl_eisbaer> * I will update/change the VM definition for the machines, to allow more features from the hypervisor CPUs - sadly this requires a complete power cycle of the VM
[20:47:46] <kl_eisbaer> ^^^ that should hopefully give a bit more performance
[20:48:24] <kl_eisbaer> * I think we don't need to migrate as many databases as I initially thought: most of the DBs hosted on the old cluster are meanwhile obsolete
[20:48:51] <kl_eisbaer> The biggest DB will be the one from beans.o.o aka piwik
[20:49:32] <kl_eisbaer> I'm currently thinking of adding the new cluster as a slave for this DB on the old cluster, to avoid a big downtime when we migrate piwik
[20:49:53] <kl_eisbaer> but I haven't done that yet, so I need to do some more testing
[20:50:24] <kl_eisbaer> the other databases (incl. the wiki ones) should be "ready to migrate" in one or two weeks, IMHO
[20:51:02] <kl_eisbaer> if someone is interested and wants to become a "DB-Admin", I'm happy to hand over ;-)
[20:51:15] <cboltz> just tell me when you want to migrate the wiki DBs, so that we can make the wikis read-only during that time
[20:51:26] <kl_eisbaer> cboltz: will do, of course
[20:51:50] <cboltz> well, ideally I'd like to hand over creating database users and databases to salt ;-)
[20:51:59] <kl_eisbaer> I hope to have just minimal impact for all DBs other than the piwik one
[20:52:21] <kl_eisbaer> cboltz: but even then, you definitely want to have some DB-Admin who cares about your profile ;-)
[20:52:45] <kl_eisbaer> otherwise, your wikis would suddenly become very, very slow ....
[20:53:17] <kl_eisbaer> ...another topic: svn/kernel.opensuse.org
[20:53:21] <cboltz> I know salt can't do automated performance monitoring and tuning ;-)
[20:53:44] <kl_eisbaer> the current plan is to migrate the machine hosting the two services on Thursday to the new network
[20:54:26] <kl_eisbaer> ...that's all I have as "status report" so far
[20:55:23] <tampakrap> one quick question
[20:55:33] <tampakrap> the mysql servers are using the official opensuse packages?
[20:55:42] <kl_eisbaer> no
[20:55:55] <tampakrap> okay
[20:56:02] <kl_eisbaer> they are using the ones from server:database, as the latest galera features are very fresh
[20:56:22] <kl_eisbaer> the ones for 42.3 don't support galera so well (yet)
[20:56:53] <kl_eisbaer> with the latest packages from server:database, it's more or less just adding the galera packages and a configuration snippet
[20:56:59] <tampakrap> got it
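
For reference, the configuration snippet for a Galera node typically boils down to a handful of wsrep settings; a minimal sketch (file path, library path and node list are placeholders, details depend on the server:database packages):

    # /etc/my.cnf.d/galera.cnf -- minimal sketch, values are placeholders
    [mysqld]
    binlog_format            = ROW
    default_storage_engine   = InnoDB
    innodb_autoinc_lock_mode = 2
    wsrep_on                 = ON
    wsrep_provider           = /usr/lib64/galera-3/libgalera_smm.so
    wsrep_cluster_name       = opensuse-galera
    wsrep_cluster_address    = gcomm://galera1.infra.opensuse.org,galera2.infra.opensuse.org
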
[20:57:18] <kl_eisbaer> any other status reports ?
[20:57:36] <pjessen> not much from me, haven't had time since the summer holidays
[20:57:41] <tampakrap> about salt
[20:57:51] <tampakrap> we support now gpg encrypted pillars
[20:57:55] <kl_eisbaer> pjessen: is the "science" list online already ?
[20:58:05] <pjessen> it should be, yes.
[20:58:12] <pjessen> opensuse-science@o.o
[20:58:21] <kl_eisbaer> ok, thanks
[20:58:45] <kl_eisbaer> tampakrap: what does this mean ?
[20:59:10] <tampakrap> this means that we can put passwords in pillars
[20:59:30] <kl_eisbaer> nice :-)
[20:59:42] <tampakrap> there is a MR open that has the passwords for the keepalived config on anna/elsa encrypted
[21:00:03] <tampakrap> it needs documentation though on how to use it and about the structure
[21:00:09] <tampakrap> and then I'll proceed with that MR
[21:00:43] <kl_eisbaer> :D
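
Salt's stock gpg renderer covers this: the value is encrypted for the master's key (e.g. echo -n 's3cr3t' | gpg --armor --encrypt -r 'master key id') and the armored blob is pasted into a pillar whose first line adds the gpg render step. A minimal sketch (the pillar layout is hypothetical, not the actual MR):

    #!yaml|gpg
    # hypothetical pillar/keepalived.sls -- the |gpg step lets the master
    # decrypt armored values before handing the pillar to the minions
    keepalived:
      auth_pass: |
        -----BEGIN PGP MESSAGE-----
        ...armored ciphertext...
        -----END PGP MESSAGE-----
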
[21:01:13] <kl_eisbaer> BTW: daffy (aka login2) is prepared for the 2nd daffy already - I set up the keepalived there already
[21:02:03] <tampakrap> cool
[21:02:08] <kl_eisbaer> BTW2: something for cboltz (-: https://monitor.opensuse.org/pnp4nagios/index.php/graph?host=redmine.infra.opensuse.org&srv=Heroes+tickets&view=2
[21:02:46] <cboltz> yes, I already noticed that :-)
[21:02:52] <kl_eisbaer> ^^ should give you a quick overview about the current tickets on progress.opensuse.org
[21:03:24] <kl_eisbaer> I also defined some warning/critical levels - just to create some fun here in the channel ;-)
[21:03:30] <pjessen> hehe
[21:04:46] <kl_eisbaer> but while I was creating that check, I was wondering if we shouldn't define a maximum lifetime for a ticket
[21:05:25] <kl_eisbaer> I was just wondering if tickets that are older than a year really interest anyone any longer?
[21:05:26] <Ada_Lovelace> We had the same in Bugzilla...
[21:06:09] <kl_eisbaer> Ada_Lovelace: ...and was there a solution ?
[21:06:11] <Ada_Lovelace> Christian and I found some tickets from the past which were interesting for us.
[21:06:17] <pjessen> in bugzilla, things take time.
[21:06:29] <Ada_Lovelace> Reviewing and pinging was the solution.
[21:07:24] <kl_eisbaer> maybe we should schedule a "progress" cleanup event ?
[21:07:28] <cboltz> right, let's handle tickets quickly instead of inventing an auto-close (which is more or less a motivation to be lazy IMHO)
[21:07:47] <pjessen> agree.
[21:07:49] <cboltz> yes, such a cleanup would make sense
[21:08:20] <kl_eisbaer> maybe a good transfer to the next topic: " offsite meeting? " :-)
[21:08:28] <Ada_Lovelace> I would be available.
[21:09:05] <cboltz> kl_eisbaer: indeed, pinging people in the same room is much easier ;-)
[21:09:24] <kl_eisbaer> any suggestions ?
[21:09:41] <kl_eisbaer> ...I hope everyone knows that SUSE will have the yearly hackweek starting on Friday ?
[21:09:41] <pjessen> somewhere in Zurich?
[21:09:47] <pjessen> :-)
[21:09:55] <kl_eisbaer> pjessen: fine with me :-)
[21:09:58] <Ada_Lovelace> I saw it today.
[21:10:14] <kl_eisbaer> pjessen: but I guess you need to organize the "where" in Zurich for us :-)
[21:10:29] <tampakrap> zurich is fine for me, my brother lives there
[21:10:50] <kl_eisbaer> heya: so we have a Party location already :-)
[21:12:20] <Ada_Lovelace> But wait... Are trains from Germany to Zurich running at the moment? A lot was damaged, if I think back.
[21:12:39] <kl_eisbaer> pjessen: can you try to organize something ?
[21:12:43] <pjessen> I think that tunnel was fixed already.
[21:12:54] <Ada_Lovelace> Really? Then all is ok.
[21:13:14] <pjessen> I'll have to say no - at least not on this side of Christmas
[21:13:17] <kl_eisbaer> pjessen: I guess the main need is a reliable internet connection
[21:13:41] <kl_eisbaer> pjessen: next year should not be a big problem ;-)
[21:14:22] <pjessen> What time frame do we have in mind? Roughly.
[21:14:23] <Ada_Lovelace> February should be the best for me because of semester holidays. ;-)
[21:14:43] <pjessen> Yeah, we have Sportferien in Feb too.
[21:15:04] <tampakrap> fine by me
[21:15:09] <cboltz> as long as you avoid the carnival and FOSDEM weekends, February sounds good
[21:15:20] <kl_eisbaer> I guess 2-3 days (a weekend?) should really be enough
[21:15:44] <Ada_Lovelace> The weekend after FOSDEM is good.
[21:16:01] <pjessen> Is this something we should continue on the mailing list later?
[21:16:18] <kl_eisbaer> pjessen: jip, good idea
[21:16:24] <cboltz> no, the weekend after FOSDEM is carnival
[21:16:27] <tampakrap> perfect
[21:16:40] <Ada_Lovelace> Then we can have a party. ;-)
[21:16:42] <cboltz> and I know ~40 people who would hate me if I refuse to drive the carnival float ;-)
[21:17:19] <Ada_Lovelace> Let's speak about it on the mailinglist.
[21:17:21] <kl_eisbaer> So it looks like everybody is looking forward to a meeting in Zurich :-)
[21:17:34] <pjessen> Wow, I should have kept my mouth shut ....
[21:17:51] <kl_eisbaer> otherwise we could always schedule a meeting here in Nuremberg again. That should not really be a problem.
[21:18:04] * kl_eisbaer now waits for Theo to invite everyone to Prague ;-)
[21:18:11] <tampakrap> prague yey!
[21:18:19] <tampakrap> we have the conference as well here
[21:18:31] <Ada_Lovelace> Yes. The oSC
[21:18:37] <tampakrap> or actually, let's go to greece!
[21:18:38] <pjessen> I don't mind setting something up, but Nuernberg is within range for me too.
[21:18:53] <pjessen> Too cold there in Feb :-)
[21:19:18] <kl_eisbaer> pjessen: I just didn't want to put pressure on you, that's why I offered NUE again ;-)
[21:19:56] <kl_eisbaer> pjessen: but we can of course include a survival training in the offsite meeting
[21:20:16] <kl_eisbaer> ^ => let's move the discussion to the mailing list ;-)
[21:20:21] <pjessen> thanks - I have a lot on my plate, to be honest.
[21:20:41] <pjessen> ok
[21:20:53] <kl_eisbaer> Next topic: enhance infra.opensuse.org domain ?
[21:21:05] <kl_eisbaer> that one's from me
[21:21:18] <pjessen> "enhance" ?
[21:21:38] <kl_eisbaer> in short: are there any objections if I add the "service" names as aliases to our hosts with their special names?
[21:21:58] <kl_eisbaer> boosters.infra.opensuse.org would have an additional alias connect.infra.opensuse.org, as an example
[21:22:14] <Ada_Lovelace> infra.opensuse.org isn't the right thing for real domainnames.
[21:22:24] <Ada_Lovelace> That's something for hostnames.
[21:22:26] <kl_eisbaer> that would make it a bit easier - at least for me - to get to the "right" machine directly
[21:22:37] <tampakrap> I would like to have them, and I created some as well already
[21:22:38] <pjessen> Sounds good, that's what we do with all services locally. A service can always move.
[21:22:40] <kl_eisbaer> Ada_Lovelace: ?
[21:23:12] <Ada_Lovelace> You want to offer easy domainnames for users without infra in it.
[21:23:26] <kl_eisbaer> Ada_Lovelace: no, sorry
[21:23:28] <Ada_Lovelace> I know such names only as hostnames.
[21:23:39] <kl_eisbaer> Ada_Lovelace: I just want to make my life as admin easier
[21:23:41] <pjessen> it's just a dns cname
[21:23:56] <tampakrap> no, the point is to be able to do `ssh connect` instead of going to the machine list to find out where you need to ssh to check what is broken on connect
[21:24:08] <Ada_Lovelace> If we have the cname additionally to this name, then ok.
[21:24:09] <tampakrap> CNAMEs are cheap
[21:24:10] <kl_eisbaer> tampakrap: exactly
[21:24:15] <tampakrap> are for free actually
[21:24:22] <Ada_Lovelace> I don't like the situation with gitlab.
[21:24:27] <tampakrap> totally desired imho
[21:24:48] <tampakrap> gitlab moved from gitlab.o.o to gitlab.infra.o.o
[21:24:51] <kl_eisbaer> Ada_Lovelace: that's not related to my request, sorry
[21:25:00] <Ada_Lovelace> ok
[21:25:05] <kl_eisbaer> I'm speaking about all the machines like scar.infra.opensuse.org
[21:25:24] <kl_eisbaer> not everyone knows what scar.infra.opensuse.org or mickey.infra.opensuse.org is doing
[21:25:58] <pjessen> adding service names is a good thing.
[21:26:02] <kl_eisbaer> but if they become aliases like "vpn.infra.opensuse.org" or "salt.infra.opensuse.org", most people might instantly know which services they can find on those machines
[21:26:40] <kl_eisbaer> Ada_Lovelace: does this make it a bit clearer to you?
[21:27:26] <Ada_Lovelace> Yes
[21:27:33] <kl_eisbaer> still objections ?
[21:28:25] <Ada_Lovelace> I like service names as additional names.
[21:28:47] <kl_eisbaer> ok - I take this as a "yes" from everyone ;-)
[21:28:52] <kl_eisbaer> thanks!
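
In zone-file terms the proposal is just one CNAME per service; together with a matching searchlist entry in /etc/resolv.conf on the heroes machines, that is what makes the short "ssh connect" form resolve. A sketch using the example from above:

    ; infra.opensuse.org zone -- service alias pointing at the real host
    connect.infra.opensuse.org.  IN  CNAME  boosters.infra.opensuse.org.

    # /etc/resolv.conf on a heroes machine, so "ssh connect" expands correctly
    search infra.opensuse.org
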
[21:29:10] <kl_eisbaer> Next topic: sponsoring offer from cPanal (see Doug's mail on the ML)
[21:29:24] <tampakrap> typo, I'll fix it
[21:30:03] <kl_eisbaer> I've heard no news about this topic - anyone else?
[21:30:26] <pjessen> nothing
[21:30:45] <kl_eisbaer> anyone who wants to drive this ?
[21:31:07] <tampakrap> I will ask Max tomorrow, I *think* he communicated something about this already
[21:31:10] <tampakrap> so AI for me
[21:31:21] <kl_eisbaer> thanks
[21:31:50] <kl_eisbaer> Next topic: transfer opensuse.cz domain (another mail from Doug)
[21:32:01] <tampakrap> I object on this
[21:32:11] <kl_eisbaer> tampakrap: ok
[21:32:42] <tampakrap> I told Petr (the original requestor) already that I don't like the idea that the official opensuse DNS will handle domains from other community teams
[21:32:54] <tampakrap> their request is simple though, they want a redirect to the wiki
[21:33:11] <tampakrap> but we will have to accept other domains in the future as well, and maintain them
[21:33:16] <tampakrap> do we want to do that?
[21:33:39] <Ada_Lovelace> Why not?
[21:33:45] <pjessen> Who is behind opensuse.cz ?
[21:33:48] <cboltz> if it's just a redirect, it's easy to answer 'yes'
[21:34:17] <cboltz> besides that, having control over opensuse.* domains can't hurt
[21:34:17] <kl_eisbaer> tampakrap: just to understand you right: Petr offered to make us the owner of the opensuse.cz domain?
[21:35:02] <pjessen> can "we" even act as the owner?
[21:35:10] <tampakrap> correct, I said that I will take the topic to our meeting, but he also went to Doug, who sent the mail to our ml first
[21:35:47] <kl_eisbaer> from a technical standpoint, I see no issues - but the legal point needs to be clarified by the board IMHO
[21:36:08] <cboltz> actually it's already on the board's radar
[21:36:35] <cboltz> last I heard is that Richard waits for response from Ciaran who should be in the whois data
[21:37:04] <tampakrap> okay mind responding to Petr and Doug then?
[21:37:11] <pjessen> the whois data will accept anything.
[21:37:28] <kl_eisbaer> pjessen: :-)
[21:37:47] <cboltz> pjessen: you are technically right, but legally Ciaran might have a different opinion ;-)
[21:37:47] <kl_eisbaer> tampakrap: I would say: go ahead
[21:38:04] <kl_eisbaer> ...as we are just the technical part of the story
[21:38:15] <kl_eisbaer> tampakrap: but what about their website?
[21:38:43] <tampakrap> last thing they told me, they want to get rid of it
[21:38:47] <kl_eisbaer> I guess they want to leave that stuff as it is and just want to get the opensuse.cz domain (DNS) under openSUSE control?
[21:39:25] <pjessen> there is presumably also an issue of cost?
[21:39:30] <kl_eisbaer> tampakrap: maybe put this (together with the Email question) in your answer email?
[21:39:39] <tampakrap> ack
[21:40:22] <kl_eisbaer> means: is it ok for them that openSUSE takes over the domain, redirects anything opensuse.cz related to the CZ wiki and skips the email part?
[21:40:31] <kl_eisbaer> ...something like that
[21:41:08] <kl_eisbaer> ok for everyone ?
[21:41:26] <cboltz> yes
[21:41:29] <pjessen> So we would just be the DNS admins?
[21:42:00] <Ada_Lovelace> yes
[21:42:20] <kl_eisbaer> pjessen: IMHO yes. That would be my understanding
[21:42:37] <pjessen> I have no issue with that. Especially as I don't do any DNS admin ...
[21:42:53] <kl_eisbaer> pjessen: not yet... ;-)
[21:43:16] <kl_eisbaer> I guess the next 2 topics were handled already: monitoring/mirror status ?
[21:43:33] <Ada_Lovelace> yes. You told us everything. ;)
[21:43:39] <tampakrap> and the salt status
[21:44:01] <kl_eisbaer> tampakrap: I'm happy to hear more ;-)
[21:45:16] * cboltz wonders how the 40 VMs match the 28 pillar/id/* files
[21:45:40] <kl_eisbaer> cboltz: now you know how many machines are administered by me ;-)
[21:46:00] <cboltz> lol
[21:46:15] <tampakrap> yep let's fix that
[21:46:18] <kl_eisbaer> at least the galera machines are currently completely unmanaged
[21:46:18] <cboltz> but seriously - why don't you use salt?
[21:46:43] <kl_eisbaer> cboltz: fear? not enough knowledge? using ansible ?
[21:46:51] <kl_eisbaer> cboltz: choose one and you might be right
[21:47:51] <cboltz> I can help you to fix #2, which might then also fix #1 ;-)
[21:48:08] <kl_eisbaer> cboltz: thanks - I will definitely come back to that :-)
[21:48:23] <kl_eisbaer> tampakrap: btw, I've one question
[21:48:33] <tampakrap> shoot
[21:48:46] <kl_eisbaer> tampakrap: is the salt master in the heroes network only serving for the heroes machines ?
[21:48:59] <tampakrap> yes
[21:49:06] <kl_eisbaer> I was just wondering as I was on the svn machine in the other network
[21:49:26] <kl_eisbaer> but that might be just a "flash back", as I saw some repos there pointing to "nowhere"
[21:49:39] <tampakrap> we used to have a separate master for the suse-dmz but it is currently broken since the network split
[21:49:49] <kl_eisbaer> ok
[21:49:57] <tampakrap> hopefully it will be back during hackweek
[21:50:13] <kl_eisbaer> about the ntp stuff - I've two notes from what I saw so far...
[21:50:22] <Ada_Lovelace> That's your hackweek project? :D
[21:50:26] <kl_eisbaer> 1) any objections using chrony instead of ntpd ?
[21:50:46] <pjessen> Any reason why?
[21:50:58] <kl_eisbaer> 2) any objections to use just "ntp1", "ntp2" instead of the full DNS name?
[21:51:05] <tampakrap> no objections, I saw the configs are quite similar, but I'll need to disable it in salt as well, as ntp is currently managed in salt
[21:51:24] <kl_eisbaer> chrony is a bit more secure, as some tests from secint showed
[21:51:44] <tampakrap> for 2 it will work if the machines have the searchlist properly set up, right?
[21:51:46] <kl_eisbaer> the chrony maintainers did not implement all features, so they are not fully RFC compatible
[21:51:52] <cboltz> tampakrap: sounds like kl_eisbaer found a nice task to practise salt ;-)
[21:52:05] <kl_eisbaer> but they implemented enough for having everything that a client needs
[21:52:27] <pjessen> I'm a long-time fan of ntp, but that's personal.
[21:52:59] <kl_eisbaer> one example: chrony only binds to localhost and no other interface by default
[21:53:15] <kl_eisbaer> pjessen: I was, too, but times are changing ;-)
[21:53:56] <kl_eisbaer> tampakrap: yes, with just the hostname (ntp1), they will always just try their local domain
[21:54:37] <kl_eisbaer> pjessen: I just found one more or less important feature that chrony handles differently than ntpd
[21:55:20] <kl_eisbaer> the "tinker panic 0" (which is btw missing in the salt profile) is translated there to something like "makestep 1.0 3"
[21:55:50] <kl_eisbaer> tampakrap: ...and IMHO "disable monitor" is also missing in the salt profile, but I'm not sure here
[21:56:11] <tampakrap> I'll check
[21:56:27] <kl_eisbaer> cboltz: I've already some other quick and easy things to add to salt
[21:57:02] <tampakrap> disable monitor is there
[21:57:02] <kl_eisbaer> but my last merge requests were long ago - so I probably forgot how to do it properly
[21:57:10] <kl_eisbaer> tampakrap: ah, sorry
[21:57:18] <tampakrap> tinker panic 0 is not
[21:57:22] <tampakrap> add it to all machines?
[21:57:33] <kl_eisbaer> tampakrap: I guess the tinker panic 0 was left out because the initial formula did not support it
[21:57:40] <kl_eisbaer> but all virtual machines should have it
[21:57:52] <tampakrap> only virtual? no physical?
[21:58:03] <kl_eisbaer> because it allows the clock to be stepped
[21:58:04] <tampakrap> ntp servers as well?
[21:58:24] <kl_eisbaer> yes: for virtual machines this might be important during live migration or when they are paused
[21:58:46] <tampakrap> ack
[21:58:59] <kl_eisbaer> some docs also mention that the virtual machines might have problems right after boot, when their "virtual hw clock" is not synced with the hypervisor
[21:59:13] <kl_eisbaer> so it's definitely a setting you want to have for virtual machines
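
Side by side, the two settings being compared (as noted above, the chrony form only steps the clock during the first updates after startup):

    # /etc/ntp.conf -- disable the panic threshold so ntpd does not exit
    # when it sees a large offset (e.g. after a VM pause or live migration)
    tinker panic 0

    # /etc/chrony.conf -- rough equivalent: step (rather than slew) if the
    # offset exceeds 1.0 s, but only within the first 3 clock updates
    makestep 1.0 3
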
[22:00:07] <pjessen> will have to ask you about that chrony feature tomorrow.
[22:00:11] <kl_eisbaer> the funny part with the "restrict" settings: you can skip all of them with chrony, if you do not plan to use the VM as time server
[22:00:35] <pjessen> ah, better defaults?
[22:00:35] <kl_eisbaer> pjessen: https://chrony.tuxfamily.org/
[22:00:44] <kl_eisbaer> pjessen: more secure defaults, yes
[22:02:00] <kl_eisbaer> but I would say: 2 hours is enough for a meeting ;-)
[22:02:20] <pjessen> yeah, I have to go. Good meeting though.
[22:02:31] <kl_eisbaer> pjessen: CU
[22:03:06] <pjessen> see ya all.
[22:03:07] <kl_eisbaer> ...and bye, bye to everyone else :-)
[22:03:09] <pjessen> bye
[22:03:20] <Ada_Lovelace> bye
[22:03:21] <cboltz> bye
[22:03:44] <tampakrap> yeah let's finish it