|
2019-12-03 heroes meeting
|
|
|
|
[20:01:20] <cboltz> hi everybody, and welcome to the heroes meeting ;-)
|
|
[20:01:34] <cboltz> our usual topics are listed on https://progress.opensuse.org/issues/59121
|
|
[20:01:40] <oreinert> good evening
|
|
[20:02:08] <cboltz> does someone from the community have any questions?
|
|
[20:03:47] <cboltz> doesn't look so, so let's continue with the status reports
|
|
[20:04:09] <cboltz> as discussed in Nuremberg, let's try to limit this to the reports, and have discussions afterwards
|
|
[20:04:16] <cboltz> who wants to start?
|
|
[20:04:25] <jdsn> I can share some information about widehat
|
|
[20:04:40] <jdsn> we got a 'verbal' approval from SUSE that they will "most likely" find the budget for a new widehat machine
|
|
[20:04:47] <jdsn> so we can replace the old machine soon
|
|
[20:05:10] <jdsn> we also have a configuration that should fit for the next few years
|
|
[20:05:44] <jdsn> an interim solution with the move to a hetzner server is not really needed as a short downtime for the replacement should not be an issue
|
|
[20:06:22] <jdsn> questions?
|
|
[20:06:53] <cboltz> no, just a thank you ;-)
|
|
[20:07:18] <kl_eisbaer> Is Hetzner still an option for an additional mirror?
|
|
[20:07:43] <jdsn> if they offer to sponsor a server long term, why not
|
|
[20:07:51] <jdsn> but until now I did not hear back from them
|
|
[20:07:58] <kl_eisbaer> ok, thanks
|
|
[20:08:04] <jdsn> I had my wish forwarded to Martin Hetzner himself
|
|
[20:08:23] <jdsn> s/my/our/
|
|
[20:08:47] <oreinert> Nothing much from me, it's a busy time of year. Spent time setting myself up, and familiarising myself with the way things work. No real accomplishments, just some wiki edits. And I will continue with that until next time.
|
|
[20:09:30] <kl_eisbaer> While I'm also familiarising myself (again) with the setup, I already have something...
|
|
[20:09:40] <kl_eisbaer> = Duplicate IP addresses in infra.opensuse.org network: =
|
|
[20:09:40] <kl_eisbaer> caasp-worker1.infra.opensuse.org. 300 IN A 192.168.47.47
|
|
[20:09:40] <kl_eisbaer> helloworld.infra.opensuse.org. 300 IN A 192.168.47.47
|
|
[20:09:40] <kl_eisbaer> aedir1.infra.opensuse.org. 300 IN A 192.168.47.57
|
|
[20:09:40] <kl_eisbaer> osc-collab-future.infra.opensuse.org. 300 IN A 192.168.47.57
|
|
[20:09:40] <kl_eisbaer> aedir2.infra.opensuse.org. 300 IN A 192.168.47.58
|
|
[20:09:40] <kl_eisbaer> mailman-test.infra.opensuse.org. 300 IN A 192.168.47.58
|
|
[20:09:41] <kl_eisbaer> Someone should fix this...
|
|
[20:10:14] <kl_eisbaer> I did not check if those affected machines are currently online - but IF they are, someone (probably the one who set them up) should change their IPs
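A duplicate check like this is easy to script. A minimal sketch - the zone data is hard-coded sample output here so the pipeline can be shown offline; in practice it would come from the zone transfer (e.g. dig @127.0.0.1 infra.opensuse.org AXFR):

```shell
# List every IP address that backs more than one A record.
# Sample zone data for illustration; replace with real AXFR output.
zone='caasp-worker1.infra.opensuse.org. 300 IN A 192.168.47.47
helloworld.infra.opensuse.org. 300 IN A 192.168.47.47
aedir1.infra.opensuse.org. 300 IN A 192.168.47.57
osc-collab-future.infra.opensuse.org. 300 IN A 192.168.47.57
status.infra.opensuse.org. 300 IN A 192.168.47.99'

# keep only the address column of A records, then report repeated addresses
duplicate_ips=$(printf '%s\n' "$zone" | awk '$4 == "A" { print $5 }' | sort | uniq -d)
echo "$duplicate_ips"
```

Run against the sample above, this prints 192.168.47.47 and 192.168.47.57 but not 192.168.47.99, which only has one name.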
|
|
[20:10:32] <kl_eisbaer> = status.opensuse.org =
|
|
[20:10:46] <kl_eisbaer> Both machines are now running 15.1 and the latest stable Cachet code
|
|
[20:10:54] <kl_eisbaer> https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Statusopensuseorg
|
|
[20:10:54] <kl_eisbaer> https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Status1opensuseorg
|
|
[20:10:54] <kl_eisbaer> https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Status2opensuseorg
|
|
[20:11:02] <kl_eisbaer> that is the (updated) documentation
|
|
[20:11:10] <kl_eisbaer> = Documentation in general =
|
|
[20:11:16] <kl_eisbaer> https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Machines
|
|
[20:11:17] <kl_eisbaer> => currently lists ~90 (!) machines
|
|
[20:11:40] <kl_eisbaer> This brings me to a topic we might need to discuss / decide on...
|
|
[20:11:48] <kl_eisbaer> Q: FreeIPA allows defining Hosts. This would currently help to get a short overview of the available machines and their functions. In addition, it allows storing machines' MAC addresses and public SSH keys, and defining roles (functional roles as well as sudoers, for example) and grouping them.
|
|
[20:11:48] <kl_eisbaer> This would not make the wiki obsolete (as the description field in FreeIPA does not allow wiki syntax), but could give a good first overview.
|
|
[20:11:48] <kl_eisbaer> Downside: people need to add/maintain information in at least 3 different systems:
|
|
[20:11:48] <kl_eisbaer> * FreeIPA
|
|
[20:11:48] <kl_eisbaer> * Admin-Wiki
|
|
[20:11:49] <kl_eisbaer> * Salt
|
|
[20:11:49] <kl_eisbaer> Fact: there is currently not one single page which gives an overview of the machines and their functions.
|
|
[20:12:17] <kl_eisbaer> So I would love to have this discussed either here or via mailing list.
|
|
[20:12:23] <kl_eisbaer> (but I have more... ;-)
|
|
[20:12:32] <kl_eisbaer> = Monitoring cleanup =
|
|
[20:12:32] <kl_eisbaer> Removed/ fixed some machines.
|
|
[20:12:32] <kl_eisbaer> Question: what about the machines that are currently NOT monitored at all?
|
|
[20:12:32] <kl_eisbaer> * aedir{1,2}
|
|
[20:12:32] <kl_eisbaer> * caasp*/kubic (16 machines)
|
|
[20:12:33] <kl_eisbaer> * ci-opensuse
|
|
[20:12:33] <kl_eisbaer> * narwal (6 machines)
|
|
[20:12:34] <kl_eisbaer> * pinot
|
|
[20:12:34] <kl_eisbaer> * ses-admin
|
|
[20:12:35] <kl_eisbaer> What about test machines in general?
|
|
[20:13:10] <kl_eisbaer> As I have no access to those machines, I cannot really do anything here regarding monitoring, other than monitoring the available services
|
|
[20:13:31] <kl_eisbaer> So if some admin of those machines feels now trapped: please ping me
|
|
[20:13:34] <kl_eisbaer> = openSUSE:infrastructure repo =
|
|
[20:13:34] <kl_eisbaer> * Started with cleanup - and fixing packages
|
|
[20:13:34] <kl_eisbaer> + updated etherpad-lite to 1.7.5 (waiting for someone to deploy)
|
|
[20:13:34] <kl_eisbaer> + abuild-online-update is replaced with suse-online-update -> this requires adaptations on machines with the old package
|
|
[20:13:34] <kl_eisbaer> + adjusted repositories (enabled 15.2 and removed some old repos like SLE_12_SP3) -> might affect some machines that should either see an update or a migration
|
|
[20:13:34] <kl_eisbaer>
|
|
[20:13:34] <kl_eisbaer> * started to work on Leap 15.2 images
|
|
[20:13:35] <kl_eisbaer> * Leap 15.1 image deployment is currently challenging:
|
|
[20:13:35] <kl_eisbaer> + need to wait for dracut to run into timeout
|
|
[20:13:36] <kl_eisbaer> + chroot into installed system
|
|
[20:13:36] <kl_eisbaer> + run grub2-mkconfig -o /boot/grub2/grub.cfg
|
|
[20:13:37] <kl_eisbaer> + reboot
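Once dracut has run into its timeout, the remaining steps could be scripted roughly like this - a sketch only: the root device and mount point are placeholders, not taken from the real images, and it obviously needs root on the affected VM:

```shell
# Hypothetical sketch of the Leap 15.1 deployment workaround above.
recover_boot() {
    root_dev=$1                            # e.g. /dev/vda2 (placeholder)
    mount "$root_dev" /mnt
    for fs in proc sys dev; do
        mount --bind "/$fs" "/mnt/$fs"     # make the chroot usable
    done
    chroot /mnt grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot
}
```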
|
|
[20:13:54] <kl_eisbaer> = Security issues popped up during scan =
|
|
[20:13:54] <kl_eisbaer> * most obvious problems fixed
|
|
[20:13:54] <kl_eisbaer> + SSL ciphers enhanced
|
|
[20:13:54] <kl_eisbaer> + TLS 1.2 enforced
|
|
[20:13:54] <kl_eisbaer> * status1&2 upgraded
|
|
[20:13:54] <kl_eisbaer> * daffy1&2 upgraded
|
|
[20:13:54] <kl_eisbaer> Real old machines (SLE11):
|
|
[20:13:55] <kl_eisbaer> * boosters
|
|
[20:13:55] <kl_eisbaer> * narwal{,2}
|
|
[20:13:56] <kl_eisbaer> * redmine
|
|
[20:13:56] <kl_eisbaer> * community
|
|
[20:13:57] <kl_eisbaer> Still some 42.3 machines online:
|
|
[20:14:09] <kl_eisbaer> = Salt =
|
|
[20:14:09] <kl_eisbaer> What is the common procedure for Salt?
|
|
[20:14:09] <kl_eisbaer> I'm asking because I see some merge requests hanging for a long time. Wouldn't it be a good idea to have some arrangements like:
|
|
[20:14:09] <kl_eisbaer> * emergency updates fixing something that is already broken => direct
|
|
[20:14:09] <kl_eisbaer> * stuff that is interesting only for machines that the requester maintains => direct
|
|
[20:14:09] <kl_eisbaer> * stuff that nobody was able to review for more than 2 months => direct
|
|
[20:14:09] <kl_eisbaer> And in turn:
|
|
[20:14:10] <kl_eisbaer> * stuff that tends to break existing stuff => request
|
|
[20:14:10] <kl_eisbaer> * stuff that affects other machines where the submitter != machine-admin => request
|
|
[20:14:11] <mstroeder> aedir{1,2} are VMs for Æ-DIR PoC (see progress #39872)
|
|
[20:14:28] * kl_eisbaer is done with status report
|
|
[20:15:03] <kl_eisbaer> mstroeder: that was my guess :-) - but I want to know if those machines should be monitored?
|
|
[20:15:27] <kl_eisbaer> IMHO every machine which provides an externally visible service should be monitored. But this is just my personal wish.
|
|
[20:15:42] <cboltz> wow, that was a lot! - and I have a feeling that we can fill the meeting with discussing your questions ;-)
|
|
[20:15:50] <cboltz> but before we do that - more status reports?
|
|
[20:15:51] <kl_eisbaer> But even this "wish" leaves room for questions, as some test-instances are visible externally.
|
|
[20:16:17] <mstroeder> aedir{1,2} are still not in production use. But feel free to monitor them. Because of conflicts with Python3 modules I had to disable salt on aedir1 though.
|
|
[20:16:23] <tuanpembual> I have an update
|
|
[20:16:26] <tuanpembual> :D
|
|
[20:16:41] <cboltz> go ahead ;-)
|
|
[20:16:57] <tuanpembual> progress-test.o.o got some plugins fixed. Some pages that were broken with error 500 have been fixed.
|
|
[20:17:05] <tuanpembual> https://progress-test.opensuse.org/
|
|
[20:17:12] <kl_eisbaer> mstroeder: I would leave the final decision up to you (especially as you should also provide the information of "what to monitor")
|
|
[20:17:50] <tuanpembual> please take a look at this. Next plan, if this is acceptable: move to the next step, using the real db.
|
|
[20:18:00] <kl_eisbaer> tuanpembual: one word: wow :D
|
|
[20:18:41] <tuanpembual> One drawback: it still uses a manual installation. No salt stuff yet.
|
|
[20:19:13] <tuanpembual> for now, it's still using a local mariadb. More details at https://progress.opensuse.org/issues/27720
|
|
[20:19:21] <tuanpembual> thanks @kl_eisbaer
|
|
[20:19:37] <kl_eisbaer> tuanpembual: well, first it would be good to have a secure and up-to-date installation. Salt (and other stuff) can IMHO follow later...
|
|
[20:20:17] <kl_eisbaer> tuanpembual: did you test the ticket system as well? That is: if you send emails, do they end up in the right queue?
|
|
[20:20:38] <tuanpembual> mail is working.
|
|
[20:20:49] <tuanpembual> but I will test make new ticket now
|
|
[20:20:51] <tuanpembual> :D
|
|
[20:20:51] <kl_eisbaer> perfect! :-)
|
|
[20:21:59] <kl_eisbaer> https://progress-test.opensuse.org/projects/opensuse-admin/files => the images are missing (looks like they are stored locally somewhere)
|
|
[20:22:49] <kl_eisbaer> tuanpembual: would you mind creating a project in gitlab, where you can put scripts and other stuff - and which can be used to file issues?
|
|
[20:23:27] <cboltz> I'd argue that scripts should be hosted in the salt repo and listed as "file.managed" ;-)
|
|
[20:23:38] <tuanpembual> sure. I have some notes about installation and other stuff
|
|
[20:23:56] <kl_eisbaer> cboltz: is Salt unable to get sources / files from more than one repo?
|
|
[20:24:42] <cboltz> we can use "git.cloned", but IMHO that only makes sense for repos with lots of files
|
|
[20:25:09] <cboltz> if we are talking about a few scripts (which get managed by us anyway), using an external repo sounds like superfluous overhead IMHO
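For reference, a salt state along the lines cboltz suggests might look like this - the script name and paths are made up for illustration:

```yaml
# hypothetical state file, e.g. salt/progress/scripts.sls
/usr/local/bin/progress-helper:
  file.managed:
    - source: salt://progress/files/progress-helper.sh
    - user: root
    - group: root
    - mode: '0755'
```

git.cloned, mentioned above, would instead check out a whole external repository onto the minion - heavier, which is why it only pays off for repos with many files.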
|
|
[20:25:24] <tuanpembual> I successfully created a new ticket, and an email arrived in my inbox
|
|
[20:25:43] <jdsn> kl_eisbaer: what's the goal of having multiple repos?
|
|
[20:26:02] <kl_eisbaer> jdsn: for me the goal is to keep things separated that are separated
|
|
[20:26:20] <kl_eisbaer> pushing everything into one single repo ends up in a mess sooner or later.
|
|
[20:26:35] <kl_eisbaer> does not matter if it is openSUSE:infrastructure or any git/svn repo
|
|
[20:27:00] <kl_eisbaer> What - for example - has my issue report above to do with our salt repository?
|
|
[20:27:29] <jdsn> ok, I am unsure about the level of separateness - but salt even works without git, so the source does not matter - it's just a matter of taste how to integrate the other files
|
|
[20:27:54] <kl_eisbaer> instead, if a repo is clearly defined to host one special tool / machine's scripts, I see the benefit for the maintainer to work independently
|
|
[20:28:21] <kl_eisbaer> jdsn: yep: salt can work with plain files - it doesn't matter where they come from.
|
|
[20:28:27] <jdsn> kl_eisbaer: ... which can potentially break stuff if the maintainer does not think about the other repo :)
|
|
[20:28:41] <kl_eisbaer> right
|
|
[20:28:54] <cboltz> kl_eisbaer: I understand your reasons, but OTOH I'd like to avoid having 100 repos - keeping an overview of everything would be a nightmare
|
|
[20:29:07] <kl_eisbaer> On the other side, I would love to give people as much freedom as possible. - and the one who breaks stuff should be able to fix it as well ;-)
|
|
[20:29:31] <tuanpembual> so, what next plan to do for new redmine?
|
|
[20:29:34] <kl_eisbaer> in good times, "he" breaks only stuff "he" maintains anyway
|
|
[20:29:45] <jdsn> kl_eisbaer: what freedom do we take away if the files are in the same repo? you can still work independently
|
|
[20:29:51] <jdsn> or what am I missing
|
|
[20:29:52] <jdsn> ?
|
|
[20:30:02] <kl_eisbaer> cboltz: do you really have an overview right now?
|
|
[20:30:28] <kl_eisbaer> jdsn: everyone has to work under the conditions the whole team has defined for this repo
|
|
[20:30:36] <kl_eisbaer> see my question about the infra/salt repo above.
|
|
[20:30:39] <cboltz> maybe not 100% (because for example I'm not a LDAP expert), but in general I'd say that I have a quite good overview
|
|
[20:30:50] <jdsn> kl_eisbaer: I think this topic deserves to be detailed e.g. in an etherpad so we all are on the same page
|
|
[20:30:50] <kl_eisbaer> Do you really want to have a merge request hanging for more than a year?
|
|
[20:30:58] <jdsn> you obviously know more than we do
|
|
[20:31:26] <kl_eisbaer> jdsn: no, I do not know. I just have my personal feelings and my personal experience - like everyone of us has
|
|
[20:31:51] <kl_eisbaer> I just see it very often that a very restricted master branch tends to move contributors away
|
|
[20:32:13] <jdsn> kl_eisbaer: what are the reasons? let's address them!
|
|
[20:32:15] <kl_eisbaer> ...and I posted the solution that my team is running above
|
|
[20:32:34] <kl_eisbaer> The main reason is that it takes ages before a merge request gets reviewed - or even accepted
|
|
[20:32:51] <jdsn> but that is independent of the number of repos
|
|
[20:33:12] <cboltz> kl_eisbaer: IMHO we "just" need to adjust our policy how merge requests get handled - allow to self-merge simple and/or urgent things
|
|
[20:33:16] <jdsn> even more, one shared repo is less work to look at, so the MRs get reviewed faster
|
|
[20:33:22] <kl_eisbaer> If I fix a typo somewhere, if I change stuff that clearly is maintained only by myself - why do I need to wait weeks or months before my changes get into the master branch?
|
|
[20:33:38] <jdsn> but that's a topic for the MR policy that you already proposed - easier merging
|
|
[20:33:39] <cboltz> and also define a "timeout" which allows to merge without a formal review
|
|
[20:33:58] <kl_eisbaer> yep. This is my proposal for "team repos" like the Salt one
|
|
[20:34:40] <kl_eisbaer> but if tuanpembual has written some scripts to make the usage of progress more convenient for each of us - why should he be required to push them into the salt repo?
|
|
[20:34:41] <jdsn> shall we try that, and see how we get along with it for a few months?
|
|
[20:34:58] <kl_eisbaer> (sorry, tuanpembual, just taking you as example here)
|
|
[20:35:06] <jdsn> kl_eisbaer: because they may be part of the system configuration
|
|
[20:35:15] <jdsn> without these scripts the host is incomplete
|
|
[20:35:45] <jdsn> but if we find a nice way to reference other sources we can try that too
|
|
[20:36:04] <oreinert> if it's for a single project or machine, we could also package the scripts and just install that
|
|
[20:36:05] <jdsn> I just see no problem to require someone to move some scripts to an existing repo
|
|
[20:36:13] <kl_eisbaer> jdsn: you are right. But if I am currently working on those scripts, I will not push them into a repository where I have to wait for days (or even hours) before I can proceed
|
|
[20:36:17] <jdsn> if the merging is easy, it should not matter
|
|
[20:36:39] <jdsn> but that's the same topic again -> easier merging
|
|
[20:36:50] <kl_eisbaer> If we can agree that those scripts (or stuff that clearly belongs only into a dedicated area) can be directly pushed, I'm in :-)
|
|
[20:37:02] <jdsn> again: shall we try Lars' proposal of easier merging?
|
|
[20:37:07] <kl_eisbaer> ...and similarly for issue tracking and wiki usage.
|
|
[20:38:05] <cboltz> we'll "only" need to give more people write access to master - not really a problem, and indeed worth a try
|
|
[20:38:23] <jdsn> are there other opinions? will we have a voting on that?
|
|
[20:38:27] <cboltz> I'd still propose to handle everything via merge requests (even if you self-accept them)
|
|
[20:38:28] <kl_eisbaer> I'm even happy to enhance the README with the rules posted above ;-)
|
|
[20:39:03] <kl_eisbaer> cboltz: looks like the OBS approach ;-)
|
|
[20:39:04] <jdsn> cboltz: yea, that creates some more visibility, ok
|
|
[20:39:14] <cboltz> reason: a MR sends out a mail to everybody (who subscribed), so you might get reviews "for free"
|
|
[20:39:27] <cboltz> (in worst case, you'll have to do another MR with the proposed improvements ;-)
|
|
[20:39:46] <kl_eisbaer> "MR + self-accept in special cases" => +1
|
|
[20:39:51] <oreinert> don't forget that an important part of PRs is to allow others to keep track of what's happening
|
|
[20:40:38] <oreinert> besides, isn't it possible to get salt to run a change without committing it first? (possible noob question)
|
|
[20:40:44] <jdsn> +1 as long as you define special cases :)
|
|
[20:41:45] <oreinert> also, Google commits *all* of their software in a single repository, so why can't we do that, too?
|
|
[20:41:54] <cboltz> oreinert: IIRC there's a way to somehow specify the git branch to use, but I'd have to look it up
|
|
[20:42:34] <cboltz> (obviously you'll first need to commit to that branch ;-)
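The branch mechanism cboltz refers to is typically gitfs on the salt master, where every git branch of the state repo becomes a salt environment. A sketch, assuming a gitfs setup - our master may well be configured differently:

```yaml
# /etc/salt/master.d/gitfs.conf (hypothetical)
fileserver_backend:
  - gitfs
gitfs_remotes:
  - https://gitlab.infra.opensuse.org/infra/salt.git
```

A change pushed to a branch named e.g. "testing" could then be tried on a single minion with something like "salt 'progress-test*' state.apply redmine saltenv=testing test=True" before merging it to master.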
|
|
[20:42:37] * kl_eisbaer is normally "trying out" things directly from the saltmaster. ;-)
|
|
[20:42:47] <kl_eisbaer> but this depends on the setup
|
|
[20:42:58] <cboltz> personally, I have some test VMs on my laptop and can test things on them
|
|
[20:43:51] <oreinert> to me it sounds like kl_eisbaer wants to fire off a rapid succession of commits/PRs while developing, and that's not really what you're supposed to do. PRs are for the final thing (or as close to it as you can get), also to reduce load on reviewers.
|
|
[20:44:18] <kl_eisbaer> well: I'm a fan of "release often"....
|
|
[20:44:39] <oreinert> sure - but that's not the same as "release during development"
|
|
[20:44:51] <kl_eisbaer> oreinert: so I have to admit that you are probably right with this
|
|
[20:45:08] <cboltz> as long as you don't have one MR followed by two "fix previous MR" MRs ;-) I'm fine with "release often"
|
|
[20:45:32] <oreinert> if we can't make (local) salt changes and run/test them without committing and pushing to the repo (maybe also via a PR) then the process is wrong, I'd argue.
|
|
[20:45:46] <kl_eisbaer> I am more from the DevOps approach - and YES, this sometimes breaks things. But on the other side, this gives some possibility for fast development
|
|
[20:46:04] <oreinert> sure, "fix my previous mistake" PRs are normal. :-)
|
|
[20:47:02] <cboltz> yeah, no problem as long as we have (on average) more "$foo" MRs than "fix previous MR for $foo" ;-)
|
|
[20:47:07] <kl_eisbaer> My experience with this is just that people tend to hold their changes back (because they need some love/beautify) - and suddenly notice that others already did "quick and dirty" what they wanted to achieve
|
|
[20:48:05] <cboltz> you shouldn't be that shy ;-)
|
|
[20:48:42] <cboltz> improvements in small steps are always welcome (and maybe even easier to review than one big MR including 20 of those steps)
|
|
[20:48:49] <oreinert> +1
|
|
[20:48:56] <kl_eisbaer> My current feeling is just that we sometimes hold ourselves back, when we wait for "someone" who clicks on the "merge" button - more than one year later....
|
|
[20:49:37] <cboltz> I agree completely
|
|
[20:49:51] <oreinert> i assume you mean it feels like a year waiting for approval? :-)
|
|
[20:49:57] <kl_eisbaer> If I find the time to work on openSUSE stuff, I just don't want to get stopped because some rules require that someone reviews my commits at 03:00 at night
|
|
[20:50:39] <kl_eisbaer> oreinert: well - there are indeed merge requests that were started over a year ago - just in the salt repo
|
|
[20:51:05] <kl_eisbaer> ...and this is something that I do not understand
|
|
[20:51:18] <cboltz> if the change is a) small and trivial or b) only affecting "your" VM, I see no problem with self-accepting the MR
|
|
[20:51:29] <kl_eisbaer> cboltz: thanks.
|
|
[20:51:48] <kl_eisbaer> cboltz: and I would only extend this rule for "emergency updates"
|
|
[20:51:48] <cboltz> the obvious disadvantage is that you don't have someone to blame for not noticing the breakage it causes in the review, but that will be your choice ;-)
|
|
[20:51:57] <oreinert> kl_eisbaer: I remember we talked about them in Nürnberg. They are special, if I remember correctly - potentially harmful, and no one quite seems to know what the impact of merging is. I assume most PRs by far will not be like that.
|
|
[20:52:09] <kl_eisbaer> example: the given NTP servers are down and all hosts should get a replacement immediately
|
|
[20:52:13] <cboltz> agreed, emergency updates are another obvious category for self-merge
|
|
[20:52:42] <oreinert> i don't really mind direct commits for small changes without PR either
|
|
[20:53:10] <kl_eisbaer> Once I figured out what changed in the notify mechanism of the IRC bot, we could even think about pushing merge request topics here
|
|
[20:53:13] <oreinert> as long as it's tracked in a VCS, I'm fine (instead of hacking directly on the box)
|
|
[20:53:54] <kl_eisbaer> oreinert: me as well (especially as a VCS has this nice "way-back-machine" interface ;-)
|
|
[20:54:05] <cboltz> I'd prefer MRs for everything - even if you self-merge within seconds, it will still send out some mails (which pushing to production directly doesn't)
|
|
[20:54:19] <jdsn> cboltz: +1
|
|
[20:54:23] <kl_eisbaer> cboltz: ...and this is IMHO a good compromise
|
|
[20:54:54] <tuanpembual> cboltz: +1
|
|
[20:56:01] <cboltz> ok, so on the technical side, we'll just need to give more people permissions to (self)accept MRs ;-)
|
|
[20:56:46] <cboltz> and on the practical side, I'm sure everybody has enough common sense to judge if a MR qualifies for one of the self-merge categories
|
|
[20:57:50] <cboltz> anything else on this topic, or can we switch to the next one? (+ define "next one" - any preferences?)
|
|
[20:57:51] <jdsn> kl_eisbaer: please define these categories, because IMHO intrusive changes should get more than one vote
|
|
[20:58:14] <kl_eisbaer> jdsn: I'm on it...
|
|
[20:58:22] <jdsn> define them in the README I mean
|
|
[20:58:23] <jdsn> ok thanks
|
|
[20:58:42] <jdsn> but don't self-merge these changes :)
|
|
[20:58:51] <jdsn> give us a chance to review :)
|
|
[20:59:52] <kl_eisbaer> jdsn: argh! now you have me :-)
|
|
[21:00:58] <cboltz> 3... 2... 1... merged, you had your chance *g,d&r*
|
|
[21:02:15] <cboltz> should we switch to the next topic?
|
|
[21:02:23] <cboltz> I'd propose documentation / machine list etc. which Lars brought up
|
|
[21:03:45] <cboltz> kl_eisbaer: I noticed some of the machines you added in the wiki are not in the heroes network - was adding them intentional?
|
|
[21:04:16] <kl_eisbaer> cboltz: it was just a DNS dump from FreeIPA: "dig @127.0.0.1 infra.opensuse.org AXFR"
|
|
[21:04:44] <kl_eisbaer> cboltz: as this DNS domain (and the opensuse.org one) is maintained by the heroes, I see no reason to hide something ;-)
|
|
[21:05:07] <kl_eisbaer> Instead, I see it as requirement that the heroes KNOW what is running inside these domains
|
|
[21:05:37] <cboltz> agreed
|
|
[21:05:50] <cboltz> maybe we should add a comment saying "SUSE network" to the machines not in the heroes network?
|
|
[21:06:28] <kl_eisbaer> cboltz: that's the problem I described above...
|
|
[21:06:41] <kl_eisbaer> IMHO we need such documentation - but I'm unsure WHERE...
|
|
[21:06:59] <cboltz> there's nothing like a "wrong place for documentation"
|
|
[21:07:01] <kl_eisbaer> jdsn: https://gitlab.infra.opensuse.org/infra/salt/merge_requests/287 - fire away! :-)
|
|
[21:07:09] <cboltz> the typical problem is "no documentation at all"
|
|
[21:07:15] <kl_eisbaer> cboltz: yes, but there is "too many places for outdated documentation"
|
|
[21:07:21] <kl_eisbaer> So we have:
|
|
[21:07:37] <kl_eisbaer> * FreeIPA (where we can add the machines and do other, crazy things with them)
|
|
[21:07:41] <kl_eisbaer> * progress wiki
|
|
[21:07:44] <kl_eisbaer> * Salt
|
|
[21:08:26] <cboltz> in general, I'd like to have the "quick overview" in salt (pillar/id/*) - which obviously only works for machines we have in salt
|
|
[21:08:27] <kl_eisbaer> If you look into FreeIPA, you will notice that there are currently 33 hosts listed
|
|
[21:08:50] <cboltz> for a) more details and b) machines not in salt (because they are in the SUSE network), the wiki is fine
|
|
[21:09:03] <kl_eisbaer> each machine with an IP address assigned, sometimes even MAC addresses or SSL/SSH certs
|
|
[21:09:17] <kl_eisbaer> ...and the possibility to define (for example) sudoer roles...
|
|
[21:10:07] <cboltz> I'm not sure if I like to have more things in FreeIPA - I try to avoid logging in there whenever possible ;-)
|
|
[21:10:08] <kl_eisbaer> If there are no objections, I am fine if we go with the wiki for now
|
|
[21:10:31] <cboltz> so - managing membership of a "$whatever-admins" group in FreeIPA is fine
|
|
[21:10:39] <kl_eisbaer> Maybe the new redmine allows to use some kind of API to update the list automatically
|
|
[21:10:46] <cboltz> but deploying the actual sudo permissions for this group should IMHO stay in salt
|
|
[21:11:17] <cboltz> ideally we should replace that list with ls pillar/id/ ;-)
|
|
[21:11:18] <kl_eisbaer> cboltz: ...and where is the documentation about this? :-)
|
|
[21:11:33] <cboltz> /dev/brain ;-)
|
|
[21:11:41] <kl_eisbaer> cboltz: don't get me wrong: I'm fine with your approach of avoiding FreeIPA
|
|
[21:12:10] <kl_eisbaer> but we should write this down (at least) in the wiki, to avoid that people start using FreeIPA for things that "we" did not agree upon
|
|
[21:12:39] <cboltz> that indeed makes sense
|
|
[21:13:23] <cboltz> we probably have some more things which are only documented in /dev/brain and missing in the wiki
|
|
[21:13:44] <cboltz> whenever you miss something in the wiki, feel free to document it
|
|
[21:13:52] <kl_eisbaer> I know that this is kind of a German approach to ask for more written guidance, but I guess it helps newbies here
|
|
[21:14:29] <cboltz> even if what you write is wrong - I monitor the wiki changes, and will help to get those things fixed
|
|
[21:14:59] <cboltz> but adding them myself is sometimes hard because I'm "betriebsblind" (too used to it all to notice the gaps) ;-)
|
|
[21:15:09] <kl_eisbaer> cboltz: you are giving me more and more the impression that we don't need any additional monitoring at all, as long as we have you :-)
|
|
[21:15:25] <oreinert> even if it *only* helps newbies, it helps attract newbies
|
|
[21:15:28] <cboltz> lol
|
|
[21:15:54] <kl_eisbaer> I hope at least that the https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Machines list will get corrected during the next weeks
|
|
[21:16:09] <cboltz> yeah, sounds doable
|
|
[21:16:29] <cboltz> question: do we want to have a) all machines there or b) only machines we don't have in salt?
|
|
[21:16:42] <kl_eisbaer> cboltz: btw: I do not see any benefit from your "see also pillar/id/" comments there, especially as you do not link to the correct pillar/id/ directly
|
|
[21:16:59] <kl_eisbaer> cboltz: I would vote for a "quick overview" page in the wiki
|
|
[21:17:12] <cboltz> I know that b) means we'll have to look at two places, but OTOH it avoids having an outdated copy in the wiki
|
|
[21:17:29] <kl_eisbaer> as not everyone is familiar with all this crazy IT stuff we always mention (like Salt, Pillars, Git and so on)
|
|
[21:18:08] <kl_eisbaer> I think we should find a way to attract newbies - and to get a quick overview if we want to know something about any machine
|
|
[21:18:11] <cboltz> well, we should document how to clone the salt repo, and that people should look at the files in pillar/id/
|
|
[21:18:28] <cboltz> even if you don't understand the detailed structure of those files, I'm sure the machine info is human-readable
|
|
[21:18:36] <kl_eisbaer> I even link to the progress wiki pages about a machine from monitor.opensuse.org
|
|
[21:19:38] <cboltz> good point - should we host a copy of the salt git repo on monitor.o.o, alias pillar/id/ into the docroot and link to the pillar/id/ files instead?
|
|
[21:19:44] <kl_eisbaer> cboltz: what about a simple table (like now), just extended with a short description of the machine and a link to the Salt Pillar, if it exists?
|
|
[21:20:20] <kl_eisbaer> cboltz: I'm fine with that - but this might open security problems, as the webserver is reachable from the outside
|
|
[21:21:00] <kl_eisbaer> So - if we want to link to the pillars in monitoring, we can even think about opening gitlab to the outside
|
|
[21:21:37] <kl_eisbaer> The wiki pages are also public - but people have to log in to redmine and get the correct access rights there
|
|
[21:21:44] <cboltz> that, or only rsync pillar/id/* to monitor.o.o to limit the possible damage
|
|
[21:22:07] <kl_eisbaer> cboltz: what about my approach, leaving the wiki where it is now?
|
|
[21:22:19] <kl_eisbaer> just with the two small extensions
|
|
[21:22:29] <kl_eisbaer> this would result in "everything on one page"
|
|
[21:22:58] <kl_eisbaer> ...and if the machine is in Salt, you (hehe) can add links to the salt pillars
|
|
[21:23:13] <kl_eisbaer> ...and if not, we can use additional wiki pages to provide a bit more information about the machine
|
|
[21:23:23] <cboltz> that's indeed an option
|
|
[21:23:51] <cboltz> can you do that for a few machines so that we see a practical example?
|
|
[21:23:52] <kl_eisbaer> Otherwise we could also add each and every machine (even if not reachable for us) into Salt
|
|
[21:24:17] <kl_eisbaer> cboltz: ok - taken as action item for me: enhance https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Machines
|
|
[21:24:35] <cboltz> I like that idea - grepping salt is easier than reading the wiki (at least for me ;-)
|
|
[21:24:52] <kl_eisbaer> cboltz: I don't want to store much information in the wiki
|
|
[21:25:11] <kl_eisbaer> but I want to have one central point - and from there correct links to additional information
|
|
[21:25:14] <oreinert> +1
|
|
[21:25:48] <kl_eisbaer> This might help, for example, if an important machine is down - and you want to get quick information about it.
|
|
[21:26:16] * kl_eisbaer just thinks about "helloworld.infra.opensuse.org" or "login3.infra.opensuse.org"
|
|
[21:27:00] <cboltz> sounds like we have different definitions of getting quick information (hey, grep $whatever pillar/id/* _is_ fast!) ;-)
|
|
[21:27:25] <kl_eisbaer> cboltz: for this, you need to have access to YOUR checked out repo. This is some luxury I do not always have...
|
|
[21:27:57] <cboltz> ok, good argument
|
|
[21:28:03] <cboltz> as I already said, I'm not against your way - just start with a few machines so that we see how it will look, and can give feedback
|
|
[21:28:03] <kl_eisbaer> so my alternative would be to visit gitlab, behind a firewall, reachable only via vpn ...
|
|
[21:28:35] <cboltz> ... which sounds even more problematic if you don't even have the git checkout available...
|
|
[21:28:50] <kl_eisbaer> But it's already 21:30 - so what about another topic? :-D
|
|
[21:29:30] <cboltz> I think we now know what we need to do for the documentation, so - yes ;-)
|
|
[21:29:54] <cboltz> monitoring cleanup sounds like an easier one to me
|
|
[21:30:01] <kl_eisbaer> I like to get rid of the old SLE11 machines as soon as possible. So these might be my next targets... But there are even some 42.3 machines, that should see some "zypper dup"
|
|
[21:30:27] <kl_eisbaer> ...and I'm wondering if we really need 6 narwal machines?
|
|
[21:30:51] <cboltz> only narwal{5,6,7} are used, all setup with salt
|
|
[21:31:07] <kl_eisbaer> so we can shut down the old narwal machines, perfect!
|
|
[21:31:09] <cboltz> narwal{,2,3} are old machines and waiting for someone to shut them down ;-)
|
|
[21:31:21] <kl_eisbaer> jdsn?
|
|
[21:31:32] <kl_eisbaer> or should I get the honor to pull the plug?
|
|
[21:31:41] <tuanpembual> :D
|
|
[21:32:03] <cboltz> whoever is faster ;-)
|
|
[21:32:27] <kl_eisbaer> cboltz: can I add the new machines into monitoring?
|
|
[21:32:30] <cboltz> however, please (I'm afraid: manually) sync haproxy.cfg from anna to elsa - elsa might still reference the old narwals :-/
|
|
[21:32:50] <kl_eisbaer> ok
|
|
[21:32:51] <cboltz> yes, please - the interesting thing to monitor is obviously port 80
|
|
[21:33:15] <cboltz> static.o.o is the most important domain to monitor
|
|
[21:33:38] <cboltz> if you want to monitor all domains narwal* serve, see pillar/id/narwal{5,6,7}* for the domain list
|
|
[21:33:41] <kl_eisbaer> that will not change - but a full / filesystem is something I'd like to have pro-actively monitored...
|
|
[21:34:13] <cboltz> right, that's something we should monitor on all machines
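|
A minimal stand-alone version of such a check (the 90% threshold is arbitrary; real monitoring would use a proper plugin such as check_disk):

```shell
# Warn when the root filesystem crosses a usage threshold
usage=$(df -P / | awk 'NR==2 { gsub(/%/, ""); print $5 }')
if [ "$usage" -ge 90 ]; then
  echo "WARNING: / is ${usage}% full"
else
  echo "OK: / is ${usage}% full"
fi
```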
|
|
[21:34:24] <kl_eisbaer> cboltz: that's something we have haproxy for ;-)
|
|
[21:35:06] <cboltz> ;-)
|
|
[21:35:19] <kl_eisbaer> FYI: narwal{,2,3} are gone from haproxy
|
|
[21:35:36] <kl_eisbaer> so it's really just the decommissioning
|
|
[21:36:11] <cboltz> did you also sync the haproxy.cfg to elsa?
|
|
[21:36:49] <kl_eisbaer> cboltz: "csync2 -xv" is your friend - even running a haproxy config test before reloading the haproxy on elsa
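|
The flow described here, as a sketch only - csync2 group names and the exact paths depend on the actual setup, so this is not meant to run as-is:

```shell
# Sketch: sync haproxy.cfg to the standby, validate it, then reload gracefully
sync_and_reload() {
  csync2 -xv || return 1                              # push changed files to the peers
  haproxy -c -f /etc/haproxy/haproxy.cfg || return 1  # syntax check before reload
  systemctl reload haproxy                            # graceful reload
}
```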
|
|
[21:37:04] <cboltz> good to know, thanks
|
|
[21:37:20] <kl_eisbaer> I know, old school - but I managed nearly all my HA setups with this simple tool
|
|
[21:37:28] <cboltz> I'll happily salt haproxy.cfg - as soon as the keepalived MRs are merged and we get rid of that "salt timebomb"
|
|
[21:37:52] <cboltz> (currently we have a hand-modified keepalived config, and salt highstate would revert those changes)
|
|
[21:38:11] <kl_eisbaer> well... ;-)
|
|
[21:38:40] <kl_eisbaer> but this leaves us IMHO just with boosters, redmine (progress) and community running SLE11
|
|
[21:38:51] <kl_eisbaer> redmine is WIP
|
|
[21:39:08] <kl_eisbaer> some people started to work on community stuff (doc.o.o) as well
|
|
[21:39:20] <kl_eisbaer> so the only machine currently left seems to be boosters
|
|
[21:39:40] <kl_eisbaer> which is running ...
|
|
[21:40:03] <kl_eisbaer> *grmbl*: CONNECT
|
|
[21:40:16] <kl_eisbaer> ...ok: here the solution is simple: "poweroff"
|
|
[21:40:22] <cboltz> yeah, but at least "only" connect (to my knowledge)
|
|
[21:40:41] <kl_eisbaer> there is a vhost for travel-support-program - but I don't know if this one is still used
|
|
[21:41:10] <cboltz> at the moment, travel support is on connect.o.o/travel-support/
|
|
[21:42:01] <kl_eisbaer> seems so. But that just means the idea of putting that stuff behind a .htaccess file should generate the attention needed for people to start reacting
|
|
[21:42:19] <cboltz> we already have a new VM for travel support, I "just" need some time to move it there
|
|
[21:42:34] <kl_eisbaer> ok - so no real road-blocker for the .htaccess file
|
|
[21:42:52] <kl_eisbaer> I will get in contact with ancor about the travel-support stuff
|
|
[21:43:23] <kl_eisbaer> cboltz: can you ping the membership committee and tell them that we will restrict access to connect in a few days ?
|
|
[21:43:53] <cboltz> I already was in contact with ancor and forced ;-) him to do quite some things (like updating to the latest gems etc.)
|
|
[21:43:58] <kl_eisbaer> They should work as before - they just need to know a common username/password to get past the .htaccess protection
|
|
[21:44:13] <cboltz> so deploying and moving the (AFAIK sqlite) database are the only steps left
|
|
[21:44:35] <kl_eisbaer> cboltz: ^^ can you inform them?
|
|
[21:44:53] <kl_eisbaer> I can meanwhile prepare an announcement for the community
|
|
[21:45:12] <cboltz> good idea, maybe people want to move some things to their wiki user page
|
|
[21:46:01] <cboltz> and yes, I can send the membership committee a mail with a quick summary, and tell them that you'll send a public announcement
|
|
[21:46:38] <cboltz> how will applying for membership work?
|
|
[21:46:51] <cboltz> a) ask someone for the .htaccess password, and continue as usual
|
|
[21:46:54] <kl_eisbaer> Should we use a single .htaccess account or create one for every MC member?
|
|
[21:47:11] <cboltz> b) membership committee can apply for membership in someone's name?
|
|
[21:47:21] <kl_eisbaer> well: I would say: send an email to a mailing list with your application
|
|
[21:47:23] <cboltz> (not sure if the software allows b) )
|
|
[21:47:32] <kl_eisbaer> and for the committee, it's just one additional log-in
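|
Either variant boils down to an htpasswd file plus a small Apache snippet. A sketch, with openssl standing in for the usual htpasswd tool - the account name and the file path are made up:

```shell
# Create a shared account; repeat the printf line per MC member for the other variant
printf 'mc-shared:%s\n' "$(openssl passwd -apr1 'change-me')" > /tmp/htpasswd-demo

# Matching .htaccess fragment (AuthUserFile path is an assumption)
cat <<'EOF'
AuthType Basic
AuthName "openSUSE Connect (restricted)"
AuthUserFile /etc/apache2/connect.htpasswd
Require valid-user
EOF
```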
|
|
[21:47:58] <kl_eisbaer> need to check, but I guess they can just set a checkbox
|
|
[21:48:11] <kl_eisbaer> worst case would be that we get some more ELGG admins :-))
|
|
[21:48:34] <kl_eisbaer> as admins can set every button - even the "b" one
|
|
[21:49:29] <kl_eisbaer> any other questions/topics?
|
|
[21:49:31] <cboltz> ok, then please check if admins can add membership without someone having clicked the "request membership" button ;-)
|
|
[21:49:41] <kl_eisbaer> will do
|
|
[21:50:00] <cboltz> well, another machine for the monitoring - pinot
|
|
[21:50:12] <cboltz> it runs apache for countdown.o.o
|
|
[21:50:17] <kl_eisbaer> just open a ticket, please
|
|
[21:50:28] <cboltz> and I consider to also move doc.o.o there (unless someone thinks it should be a separate VM)
|
|
[21:50:41] <cboltz> ok, will do
|
|
[21:51:33] <kl_eisbaer> Last topic on my list is the openSUSE:infrastructure repository.
|
|
[21:51:52] <kl_eisbaer> In general, there is not much to say about - it just needs some love ;-)
|
|
[21:52:03] <cboltz> agreed ;-)
|
|
[21:52:07] <kl_eisbaer> But I did 2 interesting changes, I like to share
|
|
[21:52:18] <kl_eisbaer> 1) upgrade of etherpad-lite to 1.7.5
|
|
[21:52:30] <kl_eisbaer> here I'm looking for the admin of the current etherpad.opensuse.org machine
|
|
[21:52:43] <kl_eisbaer> 2) I replaced abuild-online-update with suse-online-update
|
|
[21:53:08] <kl_eisbaer> which might need some changes on some machines - but those should show up in the monitoring
|
|
[21:53:10] <cboltz> for 1), search /dev/null for that admin ;-)
|
|
[21:53:18] <cboltz> (in other words: you just volunteered ;-)
|
|
[21:53:26] <kl_eisbaer> perfect :-/
|
|
[21:53:39] <kl_eisbaer> "you just won another machine..."
|
|
[21:54:17] <cboltz> ;-)
|
|
[21:54:34] <cboltz> well, at least you know etherpad, and probably know how to fix it if the upgrade breaks something
|
|
[21:54:42] <kl_eisbaer> not really.
|
|
[21:54:50] <kl_eisbaer> I just packaged the current version :-)
|
|
[21:55:18] <kl_eisbaer> but this machine is really just running etherpad, so it seems
|
|
[21:55:32] <cboltz> right
|
|
[21:55:35] <kl_eisbaer> might be a perfect candidate for consolidation (pinot? har, har)
|
|
[21:56:02] <kl_eisbaer> But let me gather some experience before we do this.
|
|
[21:56:25] <cboltz> doc.o.o fits better there ;-) (needs apache for MultiViews, while we use nginx for most other things)
|
|
[21:56:29] <kl_eisbaer> ...or even move that into a kubernetes/caasp cluster, which already needs half the amount of machines in the infra.opensuse.org network
|
|
[21:56:49] <kl_eisbaer> for etherpad, you just need the haproxy in front
|
|
[21:57:09] <kl_eisbaer> btw: who - besides Theo - is maintaining these clusters?
|
|
[21:57:33] <cboltz> check the open MR for caasp, you'll find some names there ;-)
|
|
[21:57:57] <cboltz> or check pillar/id/caasp*
|
|
[21:57:58] <kl_eisbaer> you mean those other, ancient MRs?
|
|
[21:58:18] <kl_eisbaer> I was even wondering why they have dedicated (but empty) projects in gitlab
|
|
[21:58:24] <cboltz> the MR for caasp is "only" a few weeks old ;-)
|
|
[21:59:17] <kl_eisbaer> once I get the Leap 15.1 image to work (thanks, dracut), I was already wondering if we want to build some docker/pod stuff as well.
|
|
[21:59:24] <kl_eisbaer> ...but this is something for Christmas time...
|
|
[21:59:52] <cboltz> I wonder what's wrong with the 15.1 image - I'm sure it built successfully in the past
|
|
[22:00:08] <kl_eisbaer> it builds and works in general
|
|
[22:00:31] <kl_eisbaer> just after the initial deployment, dracut hangs, as it still wants to use /dev/loop0
|
|
[22:01:12] <kl_eisbaer> doing some "recovery" in a chroot via grub2 brings the machine up permanently - but I think this should be fixed....
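|
The manual recovery described here, roughly - device names and mount points are assumptions, so this is a sketch only, not meant to run blindly:

```shell
# Sketch: from a rescue/grub shell, chroot into the installed system
# and rebuild the initrd so dracut stops looking for /dev/loop0
recover_initrd() {
  mount /dev/vda2 /mnt                    # root partition, name assumed
  for fs in dev proc sys; do
    mount --bind "/$fs" "/mnt/$fs"
  done
  chroot /mnt dracut -f                   # regenerate the initrd
  umount -R /mnt
  reboot
}
```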
|
|
[22:01:38] <cboltz> hmm, I never had this problem in my test VMs (but I'm using an "old" copy of the image, not a recently downloaded one)
|
|
[22:01:58] <kl_eisbaer> I guess I will start from scratch with the 15.1 template as base
|
|
[22:02:30] <cboltz> I won't stop you ;-)
|
|
[22:03:19] <kl_eisbaer> That's all I have so far.
|
|
[22:04:37] <kl_eisbaer> There are only some minor things left from the security scan. But the only thing we should check is the setup of the mail servers running inside the internal LAN
|
|
[22:04:49] <kl_eisbaer> the default configuration is very open...
|
|
[22:05:18] <kl_eisbaer> that's something that should get salted anyway
|
|
[22:05:43] <cboltz> it is already ;-)
|
|
[22:06:09] <kl_eisbaer> ok - so it's just some additional tuning of the setup.
|
|
[22:06:17] <cboltz> (IIRC "only" the package install and the relayhost setting, not the whole main.cf)
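|
Tightening such an open default could look like this - the relay hostname is an assumption, and this is a sketch rather than the actual salt state:

```shell
# Sketch: make an internal postfix listen only on loopback and relay outbound mail
harden_internal_postfix() {
  postconf -e 'inet_interfaces = loopback-only'       # stop accepting mail from the LAN
  postconf -e 'relayhost = [mx.infra.opensuse.org]'   # relay name assumed
  systemctl reload postfix
}
```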
|
|
[22:06:51] <kl_eisbaer> JFYI: I plan to run some scans via openVAS again next year, so we have a good overview
|
|
[22:08:52] <kl_eisbaer> 1.7GB sqlite database for etherpad...! Looks like we should do some cleanup ;-)
|
|
[22:09:12] <cboltz> looks like people actually use it ;-)
|
|
[22:10:35] <cboltz> if cleanup means "under the hood" (like "optimize table"), go ahead
|
|
[22:10:52] <cboltz> but I wouldn't delete old pads
|
|
[22:11:08] <kl_eisbaer> I would put this into a real database ...
|
|
[22:11:26] <cboltz> good idea
|
|
[22:11:35] <kl_eisbaer> but first: migrate to current version
|
|
[22:11:47] <kl_eisbaer> a copy command is way easier than a DB dump ;-)
|
|
[22:11:56] <cboltz> ;-)
|
|
[22:13:25] <cboltz> another topic - you mentioned some duplicate IPs
|
|
[22:13:32] <kl_eisbaer> yes
|
|
[22:13:36] <cboltz> .57 and .58 are aedir1 and aedir2 (just logged in to verify)
|
|
[22:13:49] <cboltz> this also means you should change (or drop?) osc-collab-future and mailman-test
|
|
[22:14:15] <cboltz> (actually dropping them shouldn't be a problem - the fact that you end up on aedir VMs, and nobody complained, shows that these names aren't used in practice)
|
|
[22:14:22] <kl_eisbaer> yes, but I have currently no idea if those machines exist (at least as templates)
|
|
[22:14:35] <kl_eisbaer> cboltz: feel free to do so :-)
|
|
[22:14:48] <cboltz> AFAIK I don't have permissions to change DNS entries
|
|
[22:15:15] <mstroeder> Hmm, could these IP conflicts be the cause of my problems with zypper repos on aedir1/2?
|
|
[22:16:53] <cboltz> I'm quite sure we don't have two _running_ VMs with the same IP (that would cause other problems, for example my ssh login wouldn't have ended up on the aedir* VMs)
|
|
[22:17:18] <cboltz> so the conflict is "just" a superfluous A record with a strange name pointing to the aedir* VMs
|
|
[22:17:24] <cboltz> which shouldn't do any harm
|
|
[22:17:30] <kl_eisbaer> well: I just don't know if the other two VMs are currently just off
|
|
[22:18:27] <cboltz> that's something you'll probably need to check on the atreju bare metal level
|
|
[22:18:28] <kl_eisbaer> cboltz: should be in "Network Services" tab
|
|
[22:18:40] <kl_eisbaer> JFYI: etherpad updated
|
|
[22:18:51] <cboltz> I can access and read it, but don't have write permission
|
|
[22:19:01] <kl_eisbaer> hm...
|
|
[22:19:12] <kl_eisbaer> can you please log out and in again?
|
|
[22:19:28] <cboltz> so you just gave me additional permissions on the ldap level?
|
|
[22:19:48] <kl_eisbaer> Let's say I found an "add" button :-)
|
|
[22:21:47] <cboltz> seems to work, thanks!
|
|
[22:22:02] <kl_eisbaer> that's what I call a "quick fix" :-)
|
|
[22:22:13] <cboltz> actually - damn, now I also have to do DNS changes!
|
|
[22:23:26] * kl_eisbaer thought you were doing them via Salt already...
|
|
[22:23:51] <cboltz> no, sadly DNS is managed in LDAP instead of plaintext zone files ;-)
|
|
[22:24:10] <kl_eisbaer> well, you can edit them on the command line
|
|
[22:24:51] <cboltz> I know (darix showed me an example recently), but for now I prefer the web interface
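|
For reference, the command-line route might look roughly like this, assuming a dNSZone-style schema - the actual DIT layout, DNs, and attributes on the openSUSE LDAP server are guesses:

```shell
# Sketch: delete a stale DNS entry via ldapmodify (the DN is a guess)
delete_stale_record() {
  ldapmodify -x -D "$ADMIN_DN" -W <<'EOF'
dn: relativeDomainName=osc-collab-future,zoneName=infra.opensuse.org,ou=dns,dc=opensuse,dc=org
changetype: delete
EOF
}
```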
|
|
[22:25:03] <kl_eisbaer> "Mausschubser!" (German for "mouse pusher!")
|
|
[22:25:11] <cboltz> much easier to learn ;-)
|
|
[22:25:35] <tuanpembual> I need to go now.
|
|
[22:25:51] <tuanpembual> cboltz: I will ping you later about the migration
|
|
[22:25:53] <cboltz> and since we want to change the DNS setup anyway, there's not really a point to learn the soon-old commandline syntax ;-)
|
|
[22:26:03] <kl_eisbaer> tuanpembual: thanks for your work! Much appreciated!
|
|
[22:26:28] <cboltz> tuanpembual: whenever you see me online ;-)
|
|
[22:26:32] <tuanpembual> sure kl_eisbaer, glad I can do some help for openSUSE
|
|
[22:26:37] <tuanpembual> good morning
|
|
[22:27:18] <kl_eisbaer> very welcome!
|
|
[22:27:22] <cboltz> it's close to "good night" here, but that's timezone fun ;-)
|
|
[22:28:51] <cboltz> FYI: I deleted osc-collab-future.infra.o.o and mailman-test.infra.o.o from DNS
|
|
[22:29:20] <cboltz> that only leaves caasp-worker1 vs. helloworld who have the same IP
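|
Duplicates like this can be spotted mechanically from a zone dump; a self-contained sketch using the records mentioned in this meeting:

```shell
# Sample zone dump (the two conflicting records from this meeting, plus one clean one)
cat > /tmp/zone-demo.txt <<'EOF'
caasp-worker1.infra.opensuse.org. 300 IN A 192.168.47.47
helloworld.infra.opensuse.org. 300 IN A 192.168.47.47
aedir1.infra.opensuse.org. 300 IN A 192.168.47.57
EOF
# Print every IP that appears in more than one A record
awk '$4 == "A" { print $5 }' /tmp/zone-demo.txt | sort | uniq -d
# → 192.168.47.47
```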
|
|
[22:29:43] <kl_eisbaer> Maybe jdsn can have a look
|
|
[22:29:57] <cboltz> that would be welcome, because I could only guess
|
|
[22:30:15] <jdsn> I am off tomorrow
|
|
[22:30:23] <jdsn> so on Thu I can check it
|
|
[22:30:39] <kl_eisbaer> perfect. A day more or less won't hurt
|
|
[22:31:49] <cboltz> any other topic?
|
|
[22:32:05] * kl_eisbaer don't things so
|
|
[22:32:17] <kl_eisbaer> s/think/
|
|
[22:32:53] <cboltz> ok, then let's close the meeting
|
|
[22:32:58] <cboltz> thanks everybody for joining
|
|
[22:33:26] <kl_eisbaer> cboltz: thanks for leading!
|
|
[22:33:44] <jdsn> thanks
|
|
[22:33:49] <cboltz> also thanks for all the things you all did since we met in Nuremberg - I haven't seen that much activity for a while :-)
|
|
[22:34:54] <kl_eisbaer> cboltz: don't worry, I will cool down. Just want to get into it again... ;-)
|
|
[22:35:16] <cboltz> you don't _have to_ cool down ;-)
|
|
[22:36:50] <kl_eisbaer> cboltz: hey, it's getting cold outside :-)
|
|
[22:37:19] <cboltz> I know, I'm outside several hours per day ;-)
|
|
[22:52:43] <kl_eisbaer> ok - time to say good night here!
|
|
[22:52:45] <kl_eisbaer> CU!
|
|
[22:52:51] <cboltz> good night!
|
|
[22:52:54] <kl_eisbaer> ...and enjoy the new etherpad ;-)
|
|
[22:53:52] <oreinert> bye - it was quite a ride
|
|
[22:56:29] <cboltz> kl_eisbaer: looks like we skipped a few ;-) versions - the new version looks quite different, and much better :-)
|
|
|