IRC meeting log - cboltz, 2019-04-02 21:42

 
2019-04-02 heroes meeting

[20:00:37] <pjessen> i'll join later, only just about to eat some pizza.
[20:00:50] <pjessen> will probably be 15min
[20:02:11] <cboltz> no problem ;-)
[20:02:11] <tampakrap> okay
[20:02:14] <tampakrap> so can we start?
[20:02:33] <cboltz> yes
[20:02:47] <cboltz> the (usual) topics are on https://progress.opensuse.org/issues/48728
[20:03:13] <cboltz> does someone from the community dare to ask any question? ;-)
[20:03:50] <tampakrap> I wanted to ask tuanpembual if there is any progress with progress :)
[20:04:54] <tampakrap> apparently he's not here
[20:05:40] <cboltz> some days ago he mentioned http://progress.infra.opensuse.org:3000 which is currently a redmine without plugins
[20:05:54] <tampakrap> ah cool
[20:06:11] <tampakrap> did he mention if the db is local or if he is using our cluster?
[20:06:29] <cboltz> no idea
[20:06:36] <tampakrap> okay
[20:08:01] <cboltz> since we already slipped into the status reports, let's continue with them ;-)
[20:08:10] <cboltz> tampakrap: anything from you?
[20:08:25] <tampakrap> so thomic was in Prague last week
[20:08:37] <tampakrap> we did a huge writeup of all the services that are in the atreju cluster
[20:08:43] <tampakrap> opensuse plus suse-dmz
[20:09:08] <tampakrap> we even found VMs that are idle, so we removed them after getting approval from the maintainers
[20:09:11] <tampakrap> and more are coming
[20:09:42] <tampakrap> and also we wrote down a lot of notes and procedures regarding services
[20:09:55] <tampakrap> so we reduced the bus factor :)
[20:10:28] <cboltz> Idle VMs are a good point - narwal, narwal2 and narwal3 should be idle since narwal[5-7] are online (assuming they don't do anything I'm not aware of, so please double-check)
[20:10:28] <tampakrap> most of the VMs are in salt, we wrote down the few ones that are not
[20:11:05] <tampakrap> file a ticket plz so we don't forget
[20:11:08] <cboltz> ok
[20:11:16] <tampakrap> that's it pretty much
[20:12:53] <cboltz> will you put the openSUSE part of those notes into a somewhat public space (for example the heroes wiki or into pillar/id/*)?
[20:13:18] <tampakrap> sure, some of them were already added to the wiki
[20:14:02] <tampakrap> e.g. how to add a vpn account
[20:14:22] <pjessen> i'm back
[20:15:09] <cboltz> yes, I've seen that
[20:15:23] <cboltz> for the machine-specific information, I'd prefer pillar/id/* to avoid spreading stuff over multiple places (and to force people to have a checkout of our salt repo *g*)
[20:15:56] <cboltz> (and yes, I know that some VMs already have a (probably outdated) wiki page)
[20:16:05] <tampakrap> yes we agree
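As a rough illustration of the pillar/id/* idea discussed above, a per-machine entry could look something like the sketch below; the file name and keys are made up for the example, not the repo's actual schema:

    # pillar/id/narwal5.sls (hypothetical sketch)
    machine:
      description: 'serves static.opensuse.org and other static domains'
      maintainer: 'cboltz'
      notes:
        - 'fully salted; content comes from a git checkout'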
[20:18:34] <cboltz> pjessen: any status updates from you?
[20:18:48] <pjessen> i'm trying to think if I have anything in particular.
[20:19:20] <cboltz> well, AFAIK you had some "fun" with rsync.o.o in the last days. Is it in sync again?
[20:19:24] <pjessen> no updates. I seem to be spending a lot of time on mirroring lately.
[20:19:37] <pjessen> thomic is really taking care of widehat
[20:19:50] <cboltz> ah, ok
[20:19:57] <pjessen> I think it is up to date now.
[20:20:26] <cboltz> :-)
[20:20:55] <pjessen> it was quite slow, and the 15.0 update issue caused a lot more traffic
[20:21:09] <pjessen> which slowed things down even more.
[20:21:59] <cboltz> indeed, syncing the whole 15.0 update repo to all mirrors (because of the path change) is a good way to generate traffic ;-)
[20:22:22] <pjessen> someone has to test the internet
[20:22:43] <cboltz> lol
[20:23:18] <pjessen> I've also put some of the push mirrors on ipv6, that's working well.
[20:23:25] <pjessen> s/well/fast/
[20:23:46] <cboltz> does v6 have bigger cables? ;-)
[20:24:06] <pjessen> i know it sounds silly, but sometimes you do wonder.
[20:24:38] <cboltz> I've seen strange things more than once, so I'm not too surprised ;-)
[20:25:08] <cboltz> I also have some status updates:
[20:25:45] <cboltz> static.opensuse.org and some other static domains are now served by narwal[5-7] which are fully salted
[20:25:59] <cboltz> there's also a cronjob that pulls the content from github hourly
[20:26:21] <cboltz> (well, as soon as someone reviews my merge request to get the script working ;-)
[20:26:42] <tampakrap> which merge request?
[20:27:00] <tampakrap> it's in my todo to go through all of them this month either way
[20:27:10] <cboltz> I'd have to check, but I'm quite sure you'll find it in the list ;-)
[20:27:15] <cboltz> (it's one of the newest ones)
[20:27:15] <tampakrap> okay
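The hourly pull cboltz mentions could be as simple as a cron entry like the following; the user, path and repo location are placeholders rather than the actual setup:

    # /etc/cron.d/static-content (hypothetical example)
    # pull the static site content from GitHub once per hour
    0 * * * * wwwrun git -C /srv/www/static pull --quiet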
[20:28:03] <cboltz> today I installed kernel updates on the machines I maintain, and noticed that several other machines also have pending kernel updates
[20:28:11] <cboltz> so everybody please install the latest kernel ;-)
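On a zypper-managed Leap VM, installing the pending kernel update boils down to something like this (a generic sketch, not a prescribed procedure):

    # install pending patches (including the kernel), then reboot into the new kernel
    zypper refresh
    zypper --non-interactive patch
    reboot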
[20:29:10] <cboltz> oh, and you might want to have a look at https://build.opensuse.org/project/show/home:cboltz:infra (especially the second page)
[20:29:41] <tampakrap> checking
[20:29:56] <tampakrap> the 1_31 packages?
[20:29:59] <cboltz> yes
[20:30:15] <cboltz> they are the base for updating the wiki
[20:31:24] <cboltz> currently I'm blocked by elasticsearch because MediaWiki expects a specific version, but I'm in contact with Klaus who will help me to package the version we need
[20:32:13] <cboltz> when that is done, I'll need a new VM (water3) because AFAIK you can't run two elasticsearch versions in parallel
[20:32:34] <tampakrap> I see
[20:33:37] <cboltz> maybe we should try to create a 15.1 beta JeOS image so that we don't need to update it in a few weeks ;-)
[20:34:07] <tampakrap> also true
[20:34:22] <cboltz> ... which reminds me that we have lots of 42.3 VMs that need an upgrade to 15.x
[20:34:36] <cboltz> let's hope we don't have too many Requires: php5 on them ;-)
[20:34:52] <tampakrap> which ones are left?
[20:35:21] <cboltz> basically all VMs with a pending kernel update ;-)
[20:36:07] <cboltz> + sarabi + riesling + status* + water (but updating water is superfluous since it will be replaced with water3)
[20:37:45] <cboltz> we also have some SLE11 left, which also need an update
[20:37:50] <cboltz> or better a salted replacement
[20:38:29] <cboltz> (have fun finding out what exactly they do, I'm quite sure their documentation is as old as SLE11, or they are undocumented)
[20:38:36] <tampakrap> yes we have them also written down with thomic
[20:38:46] <tampakrap> all the SLE11 and all the 42.3 that need upgrade or replacement
[20:39:18] <cboltz> we'll see if you found out all the tasks they run ;-)
[20:41:06] * cboltz expects quite some "oh, that VM also does that?" surprises
[20:41:43] <cboltz> I just did some statistics (thanks to salt grains.get osrelease)
[20:42:10] <cboltz> we have 6 SLE11 left (5 if you ignore the to-be-shutdown narwal)
[20:42:15] <cboltz> 20 VMs run 42.3
[20:42:20] <cboltz> and 13 run 15.0
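The per-release counts above can be reproduced with the grains query cboltz mentions; the one-liner below is just one way to summarize it and assumes the plain text outputter:

    # list the release of every salted minion
    salt '*' grains.get osrelease
    # count minions per release (illustrative one-liner)
    salt --out=txt '*' grains.get osrelease | awk '{print $2}' | sort | uniq -c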
[20:42:57] <cboltz> does someone else have a status report?
[20:43:26] <pjessen> is anybody else here?
[20:43:45] <tampakrap> there are a few more
[20:44:06] <cboltz> the question is if they are online or if they are near their computer ;-)
[20:44:47] <tampakrap> that are not on salt
[20:45:23] <cboltz> that makes counting them with grains.get hard ;-)
[20:45:57] <cboltz> oh, speaking about grains, here's a crazy idea:
[20:46:14] <cboltz> can/should we introduce a grain to indicate that a VM is "reboot-safe"?
[20:46:33] <cboltz> in practice, this would allow everybody to install kernel updates and to reboot the VM
[20:46:44] <cboltz> without a (big) risk of breaking something
[20:46:54] <pjessen> not a bad idea.
[20:47:56] <tampakrap> yes I like it
[20:48:09] <tampakrap> because not everyone knows all services that are running in a VM
[20:49:13] <cboltz> ok, so what should we call that grain?
[20:49:28] <cboltz> I'd propose something like "reboot_allowed" or "may_reboot"
[20:49:54] <cboltz> with a value of "yes" or "no"
[20:49:59] <pjessen> reboot_adhoc ?
[20:51:14] <tampakrap> all of those names are fine to me, better discuss it in an MR though :)
[20:51:31] <cboltz> no
[20:51:35] <cboltz> let's decide on the name _now_
[20:51:42] <cboltz> discussing it in the MR will delay it too much
[20:52:29] <mstroeder> My vote: reboot_safe
[20:53:25] <pjessen> nice one
[20:53:44] <cboltz> pjessen, mstroeder - your proposals look better than mine ;-)
[20:53:51] <tampakrap> reboot_safe for me as well then
[20:53:57] <cboltz> I'd tend to reboot_safe because it best describes what we want
[20:54:14] <mstroeder> actually it was Christian's suggestion. I only replaced - by _ in "reboot-safe".
[20:54:55] <tuanpembual> hi all
[20:54:59] <cboltz> :-)
[20:55:04] <tuanpembual> sorry for late
[20:55:18] <cboltz> hi tuanpembual
[20:55:19] <tuanpembual> mixed up the time difference
[20:55:21] <cboltz> no problem ;-)
[20:55:30] <tuanpembual> *scroll up
[20:55:57] <cboltz> europe will probably stop switching summer/winter time in 2021 ;-)
[20:56:27] <cboltz> back to the grains for a second - looks like we'll use reboot_safe: yes :-)
[20:56:58] <tuanpembual> update from me.
[20:56:59] <cboltz> you can also add reboot_safe: no if needed, but a) I'd call that a bug and b) please add a comment about the reason
[20:57:35] <tampakrap> I also like the comment
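A minimal sketch of how the agreed reboot_safe grain could be set and used; placing it in /etc/salt/grains is only one option (it could also live in the minion config), and the quoting keeps the value a plain string:

    # /etc/salt/grains on the minion
    reboot_safe: 'yes'
    # reboot_safe: 'no'   # if set to no, add a comment explaining why (as agreed above)

    # target only the machines flagged as safe, e.g. for a reboot after kernel updates
    salt -G 'reboot_safe:yes' system.reboot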
[20:58:33] <tuanpembual> I use local db on progress.
[20:58:59] <pjessen> gotta take a break, the cat's brought in a mouse. my wife doesn't like mice.
[20:59:28] <cboltz> pjessen: as long as she likes the mouse next to your computer... ;-)
[21:00:54] <pjessen> trackball .....
[21:01:00] <thomic> moin
[21:01:18] <cboltz> hi thomic
[21:01:27] <thomic> sorry. worked like 14h today
[21:01:31] <thomic> and i'm really tired
[21:01:37] <thomic> any questions for me?
[21:01:48] <cboltz> do you have any status updates?
[21:02:00] <cboltz> (for example, is rsync.o.o in sync again?)
[21:02:32] <thomic> well... let's say it like this - as good as it can be
[21:02:42] <thomic> it gets push from stage.o.o again since last week
[21:02:45] <thomic> which is a major step
[21:02:55] <thomic> but I'm still running the full sync over it from stage.o.o
[21:02:55] <thomic> but
[21:03:07] <thomic> BuildOPS people kill rsync processes from time-to-time atm
[21:03:18] <thomic> i'm watching and caring about rsync.o.o almost every day
[21:03:25] <thomic> since it failed
[21:03:32] <thomic> as we had several outages since then
[21:03:53] <thomic> now I consider it being in a sane state, but the problem is, that it only has 2x1GBit Ethernet
[21:04:02] <thomic> and verrry slow spinning disks
[21:04:11] <thomic> most of the time it tells me STAT D :D
[21:04:29] <cboltz> ;-)
[21:04:33] <cboltz> what caused the outages?
[21:04:38] <thomic> but still we are receiving ~150MB/s and sending out ~200-300MB/s
[21:04:44] <thomic> well ... sometimes disk full
[21:04:50] <thomic> back then when I exchanged disks
[21:05:01] <thomic> we had 14GB on pontifex
[21:05:04] <thomic> now it's around 17
[21:05:09] <thomic> we have 19TB
[21:05:14] <thomic> s/GB/TB/g
[21:05:20] <thomic> so
[21:05:36] <thomic> we obviously need a solution for this whole mess in first-line monitoring
[21:05:50] <thomic> If I had the time at the moment I would try to address this
[21:06:04] <thomic> but the best would be to have 4-8 servers world wide
[21:06:08] <thomic> which allow push
[21:06:11] <thomic> which are managed by us
[21:06:20] <thomic> and have around 25TB each of storage for now
[21:06:30] <thomic> which would cost some money
[21:06:38] <thomic> but would build a reliable network of mirrors
[21:06:53] <thomic> even if "rsync.o.o" would be down, not the whole world would start crying
[21:07:04] <cboltz> indeed
[21:07:18] <thomic> if only we had somebody on the board who could push for that :D
[21:07:50] <thomic> cboltz: ^^
[21:07:59] <cboltz> actually I just wanted to ask if you can provide a hardware "inventory" and a wishlist of machines that need replacement or upgrades
[21:08:09] <thomic> the inventory is just waste
[21:08:18] <thomic> it's 5-10 yrs old hardware out of the suse stock
[21:08:29] <thomic> but hardware (like RAM and CPU) is not that important
[21:08:41] <thomic> SSDs instead of HDDs would be nice
[21:08:45] <thomic> and maybe 10GBit of Ethernet
[21:08:55] <pjessen> bandwidth is the real issue
[21:08:57] <thomic> but where to get it with transit costs included ;)
[21:09:11] <thomic> pjessen: well... atm the disks don't even fill up 2x1GBit/s
[21:09:23] <pjessen> really??
[21:09:25] <thomic> because they are 4TB spinning disks in a RAID5
[21:09:40] <thomic> I had to go low budget ... for the repair
[21:09:43] <thomic> as always
[21:09:45] <pjessen> thought - run it from stage.o.o, but with bandwidth restriction.
[21:10:01] <thomic> wait .. stage.o.o has a proper storage backend :)
[21:10:10] <pjessen> use traffic control to limit bandwidth used by rsync.o.o
[21:10:21] <thomic> I have one Interface for rsync.o.o
[21:10:31] <thomic> and an IP which is not shown public where stage.o.o pushes
[21:10:49] <thomic> I already thought about the "public IP only for download"
[21:11:04] <thomic> but still by the read/write we get on the disk, we can try to optimize
[21:11:14] <thomic> but nginx and rsync are fighting against each other
[21:11:20] <thomic> plus writes from stage.o.o
[21:11:48] <thomic> so we are peaking around 550MBit/s I guess - which is not that bad for the mixed usage of spinning disks
[21:11:48] <cboltz> yeah, random access on spinning rust doesn't perform well
[21:11:57] <pjessen> syncing from rsync.o.o can be slow, use "spare" bandwidth
[21:12:41] <thomic> well we could think of disabling rsyncd pub modules for a while on rsync.o.o .. with a proper announcement
[21:12:49] <thomic> and maybe moving rsyncd to another machine
[21:13:00] <thomic> but all of this would be "intermediate" solutions
[21:13:13] <thomic> i would prefer to build a cool solution for the problem
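pjessen's traffic-control suggestion would amount to something like the tbf qdisc below; the interface name and rate are placeholders, and a real setup would probably shape only the rsync/nginx traffic rather than the whole interface:

    # cap outbound traffic on the public interface (example values only)
    tc qdisc add dev eth0 root tbf rate 500mbit burst 256kb latency 50ms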
[21:13:43] <pjessen> another thought - why aren't these non-public mirrors just rsync'ing from the public mirrors?
[21:15:07] <pjessen> the big ones usually have rsync too, on big pipes.
[21:15:29] <pjessen> s/big/biiiig/
[21:16:00] <thomic> pjessen: well maybe we should update the wikipage on this
[21:16:04] <thomic> but in general -
[21:16:10] <thomic> if we write on our wikipage
[21:16:16] <thomic> please rsync from $freesponsor
[21:16:26] <thomic> the free sponsors are not so happy
[21:16:32] <thomic> with us redirecting the traffic
[21:17:41] <cboltz> obviously we should ask them first ;-) but I'm not sure if http traffic via mirrorbrain differs that much from rsync traffic
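For a non-public mirror pulling from one of the big public mirrors instead of rsync.o.o, the job is an ordinary rsync pull along these lines; the mirror host and module path are placeholders:

    # periodic pull from a public mirror (host and module are examples)
    rsync -rlpt --delete rsync://mirror.example.org/opensuse/ /srv/mirror/opensuse/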
[21:18:01] <thomic> I'm thinking of having https://www.hetzner.de/dedicated-rootserver/sx62
[21:18:05] <thomic> 4 times those
[21:18:07] <thomic> 2 in Finland
[21:18:11] <thomic> 2 in Germany
[21:18:19] <thomic> with 4x10TB just as RAID0
[21:18:24] <thomic> if one of them dies - no problem
[21:18:31] <thomic> each of them has 1 GBit Uplink
[21:18:48] <thomic> enough to have some load balancing handled by mirrorbrain
[21:20:06] <pjessen> Just my 2 cents - let's also look at which problem we are *actually* solving. It's about moving those 200-300MB/sec off to somewhere else, or just reducing the usage to something that can be handled by stage.o.o
[21:20:28] <thomic> so 1st stage.o.o can't handle it
[21:20:47] <thomic> after the oss-update disaster
[21:20:57] <thomic> stage.o.o was not even able to get the push traffic out
[21:21:12] <thomic> not speaking about rsync-pulls ...
[21:21:14] <pjessen> we can reduce the rsync.o.o traffic to even less.
[21:21:33] <thomic> well the idea of having rsync.o.o
[21:21:38] <thomic> is having a first line mirror
[21:21:49] <thomic> which does not hit downloadcontent (pontifex) directly
[21:21:52] <pjessen> for non-public mirrors
[21:22:05] <thomic> the main traffic I guess is HTTP
[21:22:13] <thomic> that ends up on widehat
[21:22:19] <thomic> so having 4 mirrors
[21:22:26] <thomic> that can handle this european traffic
[21:22:33] <thomic> and maybe 2 more in US and Asia
[21:22:36] <thomic> would be awesome
[21:22:42] <thomic> because even in disaster situations
[21:22:46] <thomic> we can fill those first
[21:22:58] <thomic> and move away traffic from stage.o.o and pontifex again
[21:23:11] <thomic> that is the most critical situation we have atm
[21:23:19] <thomic> if we release bigger chunks of updates
[21:23:32] <thomic> people complain about getting it too late, because widehat is not yet updated
[21:23:39] <thomic> or fully loaded
[21:23:46] <thomic> waiting with STAT D
[21:24:11] <lcp> huh, I missed the majority of the meeting again :/
[21:24:28] <thomic> any thoughts on pushing something like this forward?
[21:25:09] <pjessen> Agree, it would be a cool setup, but I can't help thinking "over-engineering". For the monthly cost of renting those 4 boxes at Hetzner, I can get a 10Gbit link, maybe two.
[21:25:48] <thomic> well
[21:25:53] <thomic> then let's have two and 10G
[21:25:55] <pjessen> okay, consumer SLA, not business
[21:25:59] <thomic> :) but we need them somewhere
[21:26:06] <thomic> like with a lot of traffic
[21:26:18] <thomic> hetzner unfortunately only allows unlimited traffic
[21:26:19] <thomic> on 1G
[21:30:01] <cboltz> how much bandwidth do we currently use for pontifex and rsync.o.o?
[21:30:18] <cboltz> (a rough number is good enough, no need for details)
[21:30:36] <thomic> cboltz: i can deliver those numbers tomorrow
[21:30:39] <thomic> if somebody reminds me
[21:30:40] <thomic> :)
[21:30:46] <cboltz> ok
[21:30:48] <pjessen> I would have to take a look, I don't have any snmp on pontifex
[21:31:00] <thomic> well .. we have there mrtg
[21:31:04] <thomic> on those ports
[21:31:05] <pjessen> ah
[21:31:07] <pjessen> cool
[21:31:15] <thomic> ** inclusive monthly traffic for servers 10G uplink is 20TB. There is no bandwidth limitation. Overusage will be charged with € 1/TB.
[21:31:20] <thomic> says hetzner
[21:31:27] <thomic> just for discussion
[21:31:30] <thomic> or gimme 5min
[21:31:35] <thomic> will start my other laptop
[21:32:23] <cboltz> 20 TB/month isn't much when hosting 19 TB
[21:32:30] <thomic> yay
[21:32:39] <thomic> i will provide numbers for this as well ...
[21:40:19] <pjessen> guys, if nobody minds, it's enough for me for tonight.
[21:40:32] <thomic> i will send the links on the ml
[21:40:38] <thomic> as soon as i published them
[21:40:42] <pjessen> Thanks, I'd like that.
[21:40:56] <tampakrap> pjessen: sure, have a nice evening
[21:41:37] <pjessen> alright, nice discussion, talk later.
[21:42:41] <tampakrap> so, anything else?
[21:42:59] <tuanpembual> hi tampakrap
[21:43:04] <cboltz> well, in theory "review of old tickets", but I'd say it's late enough to skip that ;-)
[21:43:13] <tuanpembual> any question for me?
[21:43:31] <tampakrap> hello tuanpembual
[21:43:41] <cboltz> just as a note - we also have some (mostly old) tickets at https://bugzilla.opensuse.org/buglist.cgi?component=Infrastructure&list_id=11646919&product=openSUSE.org&resolution=---
[21:43:44] <tampakrap> tuanpembual: did you use local mysql or our cluster?
[21:44:02] <tuanpembual> I use local mysql
[21:44:14] <tuanpembual> don't have access to the cluster.
[21:44:29] <tampakrap> tuanpembual: okay I'll create a db for you and I'll put the credentials on the VM
[21:44:43] <tuanpembual> noted.
[21:44:51] <tampakrap> cboltz: we should close them and redirect the people to the appropriate ticketing system (admin@o.o or github)
[21:45:38] <cboltz> tampakrap: define "we" ;-)
[21:45:46] <mstroeder> What software is used as auth DNS server for opensuse.org?
[21:45:46] * cboltz already closed some of them a few months ago
[21:45:53] <tampakrap> you, me, anyone that would like to do it
[21:46:10] <tampakrap> I also closed a few last month
[21:46:15] <tampakrap> at least the ones assigned to me
[21:46:37] <tuanpembual> can I ask a question about the new progress.o.o?
[21:46:47] <cboltz> yes, of course
[21:47:08] <tampakrap> mstroeder: I don't remember the name, maybe thomic does
[21:47:13] <tampakrap> tuanpembual: shoot
[21:47:17] <tuanpembual> I need a suggestion, which we use for smtp?
[21:47:25] <tuanpembual> *for mail sender.
[21:47:56] <tuanpembual> *which one.
[21:48:34] <tampakrap> postfix, and we have relay.infra.opensuse.org to do the relay
[21:49:04] <tuanpembual> can I see the config?
[21:49:43] <tuanpembual> or a wiki page I can read.
[21:49:50] <lcp> cboltz: I would close the complaints about font size too, everything that is important is moving into chameleon theme >:T
[21:50:03] <thomic> mstroeder: infoblox
[21:50:04] <cboltz> tuanpembual: you should be able to ssh to relay.infra.opensuse.org as your user and read the postfix config there
[21:50:10] <lcp> cboltz: and fonts there are too big
[21:50:12] <thomic> is what we push against with powerdns
[21:50:33] <cboltz> basically just send the mails to relay.infra.o.o port 25, no auth needed
[21:50:43] <tuanpembual> noted cboltz
[21:51:09] <mstroeder> thomic: So you're using pdns as authoritative DNS server or you plan to use it?
[21:51:15] <cboltz> actually - IIRC our default setup includes postfix which relays to relay.infra.o.o, so sending to localhost 25 should also work
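For the new progress (Redmine) instance, this would translate into roughly the following in config/configuration.yml; the values only reflect the relay discussed above and are not a verified config:

    # config/configuration.yml (sketch)
    production:
      email_delivery:
        delivery_method: :smtp
        smtp_settings:
          address: "relay.infra.opensuse.org"   # or "localhost" if the local postfix relays
          port: 25
          domain: "opensuse.org"                # assumed HELO domain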
[21:51:54] <cboltz> lcp: since you are the design expert, feel free to close these bugreports ;-)
[21:52:22] <lcp> cboltz: well, from my pov the complaint is that fonts are too black and too big >:D
[21:52:58] <cboltz> I also remember complaints about fonts not being big and black enough ;-)
[21:53:00] <lcp> there is a rule of thumb that to make something visible, you should make it overly visible so it doesn't strain the eyes
[21:53:26] <lcp> but on the other hand there are people that are so blind they might need that strain to read anything
[21:54:05] <lcp> you can't win
[21:54:40] <cboltz> I'd argue that you win if you are somewhere in the middle
[21:54:55] <cboltz> and since we got complaints from both directions... ;-)
[21:55:13] <thomic> mstroeder: pdns has our wrapper scripts.. infoblox is what holds the external zones atm
[21:55:13] <lcp> nope
[21:55:23] <thomic> there are no plans to change this yet
[21:55:27] <lcp> people believe black fonts are the magic cure to all their issues
[21:57:53] <tampakrap> tuanpembual: do you also want a dump of the production db?
[22:00:13] <mstroeder> thomic: I wondered where to eventually do DNSSEC zone signing. pdns is a hidden primary and infoblox gets updated via zone transfer?
[22:00:53] <thomic> mstroeder: correct... this infoblox network is run by our "internet provider" atm
[22:00:59] <tuanpembual> tampakrap: sure
[22:01:02] <thomic> who owns and administers the domain
[22:01:06] <thomic> called "microfocus"
[22:01:16] <thomic> this will change.. but not now
[22:01:20] <thomic> more like in mid-future
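On the PowerDNS side, a hidden-primary setup feeding Infoblox via zone transfer could look roughly like this in pdns.conf; 192.0.2.53 is a placeholder for the Infoblox address:

    # pdns.conf (sketch)
    master=yes                    # act as primary and send NOTIFYs
    allow-axfr-ips=192.0.2.53     # allow the Infoblox secondaries to AXFR the zones
    also-notify=192.0.2.53        # notify them when a zone changes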
[22:01:54] <tuanpembual> It will help me more, thanks tampakrap
[22:02:13] <tampakrap> sure, tomorrow you'll have it
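Handing over the production data would boil down to a dump and import along these lines; database and file names are placeholders:

    # on the cluster: consistent dump of the production database
    mysqldump --single-transaction redmine_production > redmine_production.sql
    # on the new progress VM: import into the freshly created database
    mysql redmine < redmine_production.sql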
[22:06:21] <lcp> thomic: mid-future, huh?
[22:07:16] <lcp> cboltz: hm, piwik should be used for forums instead of google analytics
[22:07:43] <lcp> also would be nice to update piwik to its new name (I don't remember what it was atm :/)
[22:07:54] <lcp> matomo
[22:08:21] <cboltz> agreed, but getting the forum change done while the forums are hosted in Provo will be funny[tm]
[22:08:31] <tampakrap> anything else or can we close the meeting?
[22:09:24] <lcp> cboltz: yeeeah, I hope moving to discourse and out of provo could happen at the same time
[22:09:43] <lcp> speaking of which https://github.com/openSUSE/chameleon-discourse/tree/master
[22:10:10] <cboltz> I'm looking forward to the day we do this move :-)
[22:11:09] <lcp> cboltz: like with everything moved out of provo
[22:12:30] <tuanpembual> nope from me.
[22:16:30] <tampakrap> cboltz / lcp: shall we close the meeting?
[22:16:41] <lcp> yeah, it's everything I got
[22:18:22] <Stiopa> staying up to date, thank you guys
[22:19:38] <tuanpembual> thank you guys.
[22:19:51] <tampakrap> thank you all for joining
[22:20:04] <tampakrap> that was my last heroes meeting, I'll be active till the end of the month
[22:20:07] <tampakrap> so keep on rocking!
[22:20:14] <thomic> :'(
[22:20:52] <cboltz> you'll still be allowed to join the meetings ;-)
[22:21:59] <tampakrap> true!
[22:22:33] <lcp> and hopefully Stiopa will too >:D
[22:22:46] <lcp> polonization of the openSUSE forces >:DDD
[22:23:08] <Stiopa> lcp, old times ;-)
[22:28:44] <thomic> for the last meeting of tampakrap some Greek trivia I learned today by accident: Aristoteles Onassis was such an extrovert guy that he had his own tie knot, http://www.101knots.com/onassis-knot.html :D
[22:28:49] <thomic> named after him
[22:29:03] <thomic> not bad...
[22:30:33] <tampakrap> didn't like it
[22:31:41] <thomic> ok
[22:31:42] <thomic> gn8
[22:31:44] <thomic> see ya all
[22:31:56] <cboltz> good night!
[22:32:06] <tampakrap> hahaha
[22:32:10] <tampakrap> good night thomic!
