2017-11-07 heroes meeting
[20:05:04] so - welcome everybody to the monthly Heroes meeting!
[20:05:19] does someone from the community have questions?
[20:05:33] I'm moving the topics from last meeting
[20:06:29] added
[20:07:02] looks like nobody has questions, so let's move to the status reports
[20:07:38] who should start?
[20:08:11] what about you? ;-)
[20:08:17] :D
[20:08:40] Monitoring: we are now at 40 VMs and >900 monitored services
[20:08:41] You can tell us a lot. :)
[20:08:59] wow
[20:09:20] as I wrote in the news, not only the checks are interesting - the information provided via the graphs is also helpful
[20:09:39] ...so - as an example - to see how many people are logged in via VPN all the time ;-)
[20:10:07] I plan to get more information also from our nginx and lighttpd instances, but this seems to be a bit tricky
[20:10:49] JFYI: I disabled notifications for the Updates check, as this turned out to be the service with the most notifications ;-)
[20:11:17] ...and as we update on Thursday anyway, I think it is ok to disable notifications for this check
[20:12:03] I also wrote a short "run_zypper_up" script, which is in /root/bin/ on monitor.infra.opensuse.org
[20:12:14] I'm back
[20:12:17] would it be possible to run the update check _after_ the cronjob that auto-installs most updates?
[20:12:40] if you log in to this machine via SSH and have agent forwarding enabled, running this script helps to automate the maintenance on Thursday
[20:12:55] cboltz: the update check *is* running after that, yes
[20:13:40] :-)
[20:13:47] Other topic:
[20:13:57] The Provo mirror now supports the http2 protocol
[20:14:18] ...but we need to investigate if libcurl uses that protocol automatically or not
[20:15:15] I talked with the zypper maintainer, and he told me that it might be possible to add a configurable feature, if needed. So zypper could make use of the http2 protocol, too
[20:15:30] tcp fast open is enabled anyway already
[20:16:07] The Nuremberg mirror (aka download.opensuse.org) is getting closer to a state where I can open it up to the heroes:
[20:16:23] SUSE specific stuff has been migrated to another machine already
[20:16:47] so what's left is to have the machine re-installed with openSUSE and place it in the heroes network
[20:17:05] sounds good
[20:17:14] we might need a special network interface for the push from the OBS, but that should not be a big problem
[20:17:24] will that re-install happen with salt? ;-)
[20:17:34] I hope to get this done next week
[20:17:49] cboltz: probably not - as long as I do it
[20:18:12] Next mirror under our control: widehat, aka rsync.opensuse.org
[20:18:24] this host currently has a major problem: disk full
[20:18:51] as we can not extend the storage of this machine, I'm currently thinking about declaring our mirror in Provo the new rsync.opensuse.org
[20:19:19] as this mirror in Provo is currently the only one that has everything that is also available on download.opensuse.org
[20:19:36] => in the end it's just about the DNS name
[20:19:40] Is this mirror working correctly now?
[20:20:00] no problem reports so far (other than one bug report about the used style sheet)
[20:20:06] I know about problems with the Provo mirror in the past...
[20:20:11] it doesn't have ipv6 though, would that be a problem?
[20:20:17] Ada_Lovelace: I guess the problem in the past was just a missing redirector setup
[20:20:41] tampakrap: right, but the latest rumor has it that MF-IT is now able to assign IPv6 addresses in Provo, too
[20:20:50] ah nice
[20:20:53] nice
[20:21:15] tampakrap: so let's wait for my request for IPv6 addresses and for the check of the rsync modules (and the "knapsack" stuff) to work
[20:21:24] and then switch over
[20:22:05] I still need to get some approval because of the additional bandwidth used, but currently I don't really see any other chance to provide a fully equipped rsync.opensuse.org
[20:22:41] ...or we find a mirror as "sponsor" that hosts all the stuff, including the knapsack module, for us
[20:23:16] JFYI: the python-knapsack scripts fill up the 80g, 160g, 320g, ... modules with content based on the access logs
[20:23:43] what exactly does "we can not extend the storage" mean? All disks in NBG full, or just a restriction on this VM?
[20:23:59] The widehat machine is not running inside the Nuremberg office
[20:24:30] cboltz: widehat's place is sponsored by QSC - and it is an old machine with "just 7TB" capacity
[20:24:59] cboltz: does that answer your question?
[20:25:02] I know the new hosting boss at QSC. He changed from 1&1 to them after my election.
[20:25:05] yes, thanks
[20:25:08] Should I write him?
[20:25:25] Ada_Lovelace: in the past, our problem was that they wanted to end the sponsoring ...
[20:25:28] That's the ex 1&1 hosting boss.
[20:25:35] Oh...
[20:25:45] ...so we tried to avoid any requests ...
[20:26:00] I guess it's more about bandwidth than the actual disk space?
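[Editor's sketch] The python-knapsack scripts mentioned above fill fixed-size rsync modules (80g, 160g, 320g, ...) with content based on access logs. The real scripts were not shown in the meeting; the greedy strategy, function name, and package data below are illustrative assumptions only:

```python
# Sketch: fill a fixed-size mirror module with the most-requested
# content first, based on access-log hit counts.
# This is a guess at the idea behind the python-knapsack scripts,
# not the actual implementation.

def fill_module(packages, capacity_bytes):
    """packages: list of (name, size_bytes, hits); returns chosen names."""
    # Greedy: best hits-per-byte ratio first, until the module is full.
    ranked = sorted(packages, key=lambda p: p[2] / p[1], reverse=True)
    chosen, used = [], 0
    for name, size, hits in ranked:
        if used + size <= capacity_bytes:
            chosen.append(name)
            used += size
    return chosen

# Example: building a hypothetical "80g" module from three candidates
packages = [
    ("openSUSE-Leap-DVD.iso", 4_700_000_000, 9000),
    ("Tumbleweed-Snapshot.iso", 4_300_000_000, 12000),
    ("rare-debuginfo.rpm", 90_000_000_000, 3),
]
print(fill_module(packages, 80_000_000_000))
# → ['Tumbleweed-Snapshot.iso', 'openSUSE-Leap-DVD.iso']
```

The hot ISOs fit; the rarely requested oversized file is left to the bigger modules.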
[20:26:02] as long as the machine is up and running, everything is ok for us
[20:26:35] pjessen: in the past, widehat was indeed a bandwidth saver for the NUE office
[20:26:48] especially as we had a dark fiber to their data center
[20:27:09] so we could push stuff from download.o.o to widehat via that dark fiber
[20:27:31] ...and everybody else was downloading from widehat using the bandwidth from QSC
[20:27:39] yeah, I get the picture.
[20:27:56] the machine is an old system with a local RAID controller
[20:28:11] the maximum number of disks is installed
[20:28:20] ...and it sadly supports only 1TB disks
[20:28:28] so extending it is not possible
[20:28:54] we might be able to replace the machine silently with a new one with bigger disks, but so far I don't see that happening
[20:29:45] how about a bigger raid controller? I have some spares that will take 2TB disks.
[20:30:04] might be possible, but I need to check the hardware first
[20:30:23] You don't have access to the data center?
[20:30:35] Ada_Lovelace: I have access, yes.
[20:30:53] But I need to fire up the next VPN to get access to the machine for checking dmidecode ;-)
[20:31:40] => what about moving the discussion about widehat to "later", resp. the mailing list, once I have the hardware details?
[20:32:07] sounds like a good idea
[20:32:09] yep.
[20:32:16] Next topic: logs from #opensuse-admin
[20:32:36] It's true, I'm lazy ;-)
[20:32:56] ...and I'm not running my IRC client all day, but our bot is there...
[20:33:23] ...so I - in my glory ;-) - decided to let it log for me, but I guess this might be useful for others, too?
[20:33:31] => https://monitor.opensuse.org/heroes/
[20:33:55] I could put the URL behind LDAP auth, but at the moment, I think it might be useful also for others.
[20:34:01] => what do you think?
[20:34:11] leave it public, it's a public channel after all
[20:34:18] It should be public...
[20:34:26] yes, keep it public
[20:34:41] fine with me ;-)
[20:34:47] but maybe add a note to /topic to make people aware that the channel is logged
[20:34:58] cboltz: up to you ;-)
[20:35:15] we might also put a link to the log on our wiki page?
[20:35:24] But we should talk about how long the logs should be kept and when they get cleaned up.
[20:35:33] I doubt I have permissions to update /topic
[20:35:55] cboltz: IMHO /op should work in this channel for everyone
[20:36:00] +1 for the link on the wiki page
[20:36:13] Ada_Lovelace: any suggestions for the time frame?
[20:36:13] [Error] You have to be an operator in #opensuse-admin to do that.
[20:36:52] kl_eisbaer: no, /op gives me "you have to be operator in #opensuse-admin to do that"
[20:36:53] 1 - 2 months (depends on the size of the disk)
[20:37:19] cboltz: sorry, I meant you should ask chanserv to become op
[20:37:41] cboltz: if you have a suggestion for the topic header, I can do that later for you ;-)
[20:37:53] Ada_Lovelace: the size does not really matter
[20:38:35] Ada_Lovelace: we are speaking about 472k at the moment
[20:38:45] I'd vote to keep the logs forever like we do with ML archives
[20:38:46] Then 3 months should be more than enough... It can happen that somebody wants to refer to some task in the chat.
[20:39:09] deleting something from the internet doesn't work anyway ;-)
[20:39:26] cboltz: +1 from my side
[20:39:32] Ok
[20:39:41] restrict the search engines maybe?
[20:39:41] if we need to censor something, we can do it on the monitoring host anyway
[20:39:51] tampakrap: fine with me
[20:40:21] okay
[20:40:33] the bot is amazing, thanks a lot for it!
[20:40:54] tampakrap: you can even enhance it to a "meet bot" ;-)
[20:41:14] where is it running btw? on scar?
[20:41:24] tampakrap: ...or connect it to the rabbitmq queue and get information pushed here once a package in openSUSE:infrastructure is built
[20:41:34] tampakrap: it's running on the monitor machine ;-)
[20:41:35] yeah that would be amazing
[20:41:55] or to send new merge requests and commits in the salt repository
[20:42:22] tampakrap: ...and all you need for this is netcat ;-)
[20:42:57] I've placed an example script in /root/bin/send_irc_message on the monitor machine
[20:43:09] ...if you want to try it out.
[20:43:33] At the moment, the port is bound to localhost, but I can open it up to the whole infra.opensuse.org network, if this is needed
[20:44:27] Other topic: galera cluster?
[20:44:28] PROBLEM: MySQL WSREP recv on galera2.infra.opensuse.org - CRIT wsrep_local_recv_queue_avg = 1.184783 ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=galera2.infra.opensuse.org&service=MySQL%20WSREP%20recv
[20:44:44] ...no, that ^^ was not planned ;-)
[20:44:54] I already wondered ;-)
[20:44:56] hahahaha
[20:45:06] but as you can see, the cluster still needs some fine tuning
[20:45:30] I just took some default values and combined them with some "good practices" from our internal cluster at SUSE
[20:45:56] At the moment, I have 3 interesting things:
[20:46:36] * mysql-tuning-scripts is updated and now contains the latest mysqltuner.pl script, which analyzes a running mysql instance and (new) also gives some hints for galera clusters
[20:47:30] * I will update/change the VM definition for the machines, to allow more features from the hypervisor CPUs - sadly this requires a complete power cycle of the VMs
[20:47:46] ^^^ that should hopefully give a bit more performance
[20:48:24] * I think we don't need to migrate as many databases as I initially thought: most of the DBs hosted on the old cluster are meanwhile obsolete
[20:48:51] The biggest DB will be the one from beans.o.o aka piwik
[20:49:32] I'm currently thinking about adding the new cluster as slave for this DB to the old cluster, to avoid a big downtime when we migrate piwik
[20:49:53] but I haven't done that, so I need to do some more testing
[20:50:24] the other databases (incl. the wiki ones) should be "ready to migrate" in one or two weeks, IMHO
[20:51:02] if someone is interested and wants to become a "DB-Admin", I'm happy to hand over ;-)
[20:51:15] just tell me when you want to migrate the wiki DBs, so that we can make the wikis read-only during that time
[20:51:26] cboltz: will do, of course
[20:51:50] well, ideally I'd like to hand over creating database users and databases to salt ;-)
[20:51:59] I hope to have just minimal impact for all DBs other than the piwik one
[20:52:21] cboltz: but even then, you definitely want to have some DB-Admin who cares about your profile ;-)
[20:52:45] otherwise, your wikis would suddenly become very, very slow ....
[20:53:17] ...another topic: svn/kernel.opensuse.org
[20:53:21] I know salt can't do automated performance monitoring and tuning ;-)
[20:53:44] the current plan is to migrate the machine hosting the two services to the new network on Thursday
[20:54:26] ...that's all I have as "status report" so far
[20:55:23] one quick question
[20:55:33] the mysql servers are using the official opensuse packages?
[20:55:42] no
[20:55:55] okay
[20:56:02] they are using the ones from server:database, as the latest galera features are very fresh
[20:56:22] the ones for 42.3 don't support galera so well (yet)
[20:56:53] with the latest packages from server:database, it's more or less just adding the galera packages and a configuration snippet
[20:56:59] got it
[20:57:18] any other status reports?
[20:57:36] not much from me, haven't had time since the summer holidays
[20:57:41] about salt
[20:57:51] we now support gpg encrypted pillars
[20:57:55] pjessen: is the "science" list online already?
[20:58:05] it should be, yes.
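[Editor's sketch] The "galera packages plus a configuration snippet" step mentioned above usually comes down to a handful of wsrep settings in the MariaDB/MySQL config. The snippet below is an illustrative assumption - node names, cluster name, and provider path are not from the meeting:

```ini
# /etc/my.cnf.d/galera.cnf - illustrative sketch, not the heroes config
[mysqld]
# Galera requires row-based replication and InnoDB
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2

wsrep_on              = ON
wsrep_provider        = /usr/lib64/galera-3/libgalera_smm.so
wsrep_cluster_name    = "infra-galera"
# assumed node list for a three-node cluster
wsrep_cluster_address = "gcomm://galera1.infra.opensuse.org,galera2.infra.opensuse.org,galera3.infra.opensuse.org"
wsrep_sst_method      = rsync
```

The wsrep_local_recv_queue_avg value from the Icinga alert above is one of the metrics such a cluster exposes for exactly this kind of tuning.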
[20:58:12] opensuse-science@o.o
[20:58:21] ok, thanks
[20:58:45] tampakrap: what does this mean?
[20:59:10] this means that we can put passwords in pillars
[20:59:30] nice :-)
[20:59:42] there is an MR open that has the passwords for the keepalived config on anna/elsa encrypted
[21:00:03] it needs documentation though, on how to use it and about the structure
[21:00:09] and then I'll proceed with that MR
[21:00:43] :D
[21:01:13] BTW: daffy (aka login2) is prepared for the 2nd daffy already - I set up keepalived there already
[21:02:03] cool
[21:02:08] BTW2: something for cboltz (-: https://monitor.opensuse.org/pnp4nagios/index.php/graph?host=redmine.infra.opensuse.org&srv=Heroes+tickets&view=2
[21:02:46] yes, I already noticed that :-)
[21:02:52] ^^ should give you a quick overview of the current tickets on progress.opensuse.org
[21:03:24] I also defined some warning/critical levels - just to create some fun here in the channel ;-)
[21:03:30] hehe
[21:04:46] but while I was creating that check, I was wondering if we shouldn't define a maximum lifetime for a ticket
[21:05:25] I was just wondering if tickets that are older than a year really interest anyone any longer
[21:05:26] We had the same in Bugzilla...
[21:06:09] Ada_Lovelace: ...and was there a solution?
[21:06:11] Christian and I found some tickets from the past which were interesting for us.
[21:06:17] in bugzilla, things take time.
[21:06:29] Reviewing and pinging was the solution.
[21:07:24] maybe we should schedule a "progress" cleanup event?
[21:07:28] right, let's handle tickets quickly instead of inventing an auto-close (which is more or less a motivation to be lazy IMHO)
[21:07:47] agree.
[21:07:49] yes, such a cleanup would make sense
[21:08:20] maybe a good transfer to the next topic: "offsite meeting?" :-)
[21:08:28] I would be available.
[21:09:05] kl_eisbaer: indeed, pinging people in the same room is much easier ;-)
[21:09:24] any suggestions?
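[Editor's sketch] The GPG-encrypted pillars discussed above use Salt's gpg renderer: the pillar file declares the renderer chain in its first line, and the salt master decrypts the ASCII-armored values with its own keyring before handing plain pillar data to the minion. The key name and keepalived layout below are assumptions, not the actual MR:

```yaml
#!yaml|gpg
# Sketch of a GPG-encrypted pillar file (Salt gpg renderer).
# The armored block is decrypted on the master; only the resulting
# plaintext ever appears in the minion's pillar data.
keepalived:
  auth_pass: |
    -----BEGIN PGP MESSAGE-----
    hQEMA9a2...ciphertext elided for the sketch...
    -----END PGP MESSAGE-----
```

The ciphertext is produced with something like `gpg --armor --encrypt -r <master-key>` and pasted into the pillar, so secrets can live in the git repository without being readable there.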
[21:09:41] ...I hope everyone knows that SUSE will have the yearly hackweek starting on Friday?
[21:09:41] somewhere in Zurich?
[21:09:47] :-)
[21:09:55] pjessen: fine with me :-)
[21:09:58] I saw it today.
[21:10:14] pjessen: but I guess you need to organize the "where" in Zurich for us :-)
[21:10:29] zurich is fine for me, my brother lives there
[21:10:50] heya: so we have a party location already :-)
[21:12:20] But wait... Are trains from Germany to Zurich running at the moment? There was a lot of damage, if I think back.
[21:12:39] pjessen: can you try to organize something?
[21:12:43] I think that tunnel was fixed already.
[21:12:54] Really? Then all is ok.
[21:13:14] I'll have to say no - at least not on this side of Christmas
[21:13:17] pjessen: I guess the main need is a reliable internet connection
[21:13:41] pjessen: next year should not be a big problem ;-)
[21:14:22] What time frame do we have in mind? Roughly.
[21:14:23] February would be the best for me because of semester holidays. ;-)
[21:14:43] Yeah, we have Sportferien in Feb too.
[21:15:04] fine by me
[21:15:09] as long as you avoid the carnival and FOSDEM weekends, February sounds good
[21:15:20] I guess 2-3 days (a weekend?) should really be enough
[21:15:44] The weekend after FOSDEM is good.
[21:16:01] Is this something we should continue on the mailing list later?
[21:16:18] pjessen: yep, good idea
[21:16:24] no, the weekend after FOSDEM is carnival
[21:16:27] perfect
[21:16:40] Then we can have a party. ;-)
[21:16:42] and I know ~40 people who would hate me if I refuse to drive the carnival float ;-)
[21:17:19] Let's speak about it on the mailing list.
[21:17:21] So it looks like everybody is looking forward to a meeting in Zurich :-)
[21:17:34] Wow, I should have kept my mouth shut ....
[21:17:51] otherwise we could always schedule a meeting here in Nuremberg again. That should not really be a problem.
[21:18:04] * kl_eisbaer now waits for Theo to invite everyone to Prague ;-)
[21:18:11] prague yey!
[21:18:19] we have the conference here as well
[21:18:31] Yes. The oSC
[21:18:37] or actually, let's go to greece!
[21:18:38] I don't mind setting something up, but Nuernberg is within range for me too.
[21:18:53] Too cold there in Feb :-)
[21:19:18] pjessen: I just didn't want to put pressure on you, that's why I offered NUE again ;-)
[21:19:56] pjessen: but we can of course include a survival training in the offsite meeting
[21:20:16] ^ => let's move the discussion to the mailing list ;-)
[21:20:21] thanks - I have a lot on my plate, to be honest.
[21:20:41] ok
[21:20:53] Next topic: enhance the infra.opensuse.org domain?
[21:21:05] that one's from me
[21:21:18] "enhance"?
[21:21:38] in short: are there any objections if I add the "service" names as aliases to our hosts with their special names?
[21:21:58] boosters.infra.opensuse.org would have an additional alias connect.infra.opensuse.org, as an example
[21:22:14] infra.opensuse.org isn't the right thing for real domain names.
[21:22:24] That's something for hostnames.
[21:22:26] that would make it a bit easier - at least for me - to get to the "right" machine directly
[21:22:37] I would like to have them, and I created some as well already
[21:22:38] Sounds good, that's what we do with all services locally. A service can always move.
[21:22:40] Ada_Lovelace: ?
[21:23:12] You want to offer easy domain names for users, without infra in them.
[21:23:26] Ada_Lovelace: no, sorry
[21:23:28] I know such names only as hostnames.
[21:23:39] Ada_Lovelace: I just want to make my life as admin easier
[21:23:41] it's just a dns cname
[21:23:56] no, the point is to be able to do `ssh connect` instead of going to the machine list to find out where you need to ssh to check what is broken on connect
[21:24:08] If we have the cname additionally to this name, then ok.
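[Editor's sketch] The aliasing discussed here is plain CNAME records. In a BIND-style zone file for infra.opensuse.org it could look like the fragment below; only the connect → boosters mapping is stated in the meeting, the other two pairings are illustrative guesses:

```
; sketch of service aliases in the infra.opensuse.org zone (BIND syntax)
; connect -> boosters is from the meeting; the other mappings are
; invented here purely as examples of the scheme
connect   IN  CNAME  boosters.infra.opensuse.org.
vpn       IN  CNAME  scar.infra.opensuse.org.
salt      IN  CNAME  mickey.infra.opensuse.org.
```

With such records in place, `ssh connect.infra.opensuse.org` (or just `ssh connect` with a suitable search list) lands on the right machine without consulting the machine list.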
[21:24:09] CNAMEs are cheap
[21:24:10] tampakrap: exactly
[21:24:15] they are free, actually
[21:24:22] I don't like the situation with gitlab.
[21:24:27] totally desired imho
[21:24:48] gitlab moved from gitlab.o.o to gitlab.infra.o.o
[21:24:51] Ada_Lovelace: that's not related to my request, sorry
[21:25:00] ok
[21:25:05] I'm speaking about all the machines like scar.infra.opensuse.org
[21:25:24] not everyone knows what scar.infra.opensuse.org or mickey.infra.opensuse.org is doing
[21:25:58] adding service names is a good thing.
[21:26:02] but if they get aliases like "vpn.infra.opensuse.org" or "salt.infra.opensuse.org", most people might instantly know which services they can find on those machines
[21:26:40] Ada_Lovelace: does this make it a bit clearer to you?
[21:27:26] Yes
[21:27:33] still objections?
[21:28:25] I like service names as additional names.
[21:28:47] ok - I take this as a "yes" from everyone ;-)
[21:28:52] thanks!
[21:29:10] Next topic: sponsoring offer from cPanal (see Doug's mail on the ML)
[21:29:24] typo, I'll fix it
[21:30:03] I've heard no news about this topic - anyone else?
[21:30:26] nothing
[21:30:45] anyone who wants to drive this?
[21:31:07] I will ask Max tomorrow, I *think* he communicated something about this already
[21:31:10] so AI for me
[21:31:21] thanks
[21:31:50] Next topic: transfer of the opensuse.cz domain (another mail from Doug)
[21:32:01] I object to this
[21:32:11] tampakrap: ok
[21:32:42] I told Petr (the original requestor) already that I don't like the idea that the official opensuse DNS will handle domains from other community teams
[21:32:54] their request is simple though, they want a redirect to the wiki
[21:33:11] but we will have to accept other domains in the future as well, and maintain them
[21:33:16] do we want to do that?
[21:33:39] Why not?
[21:33:45] Who is behind opensuse.cz?
[21:33:48] if it's just a redirect, it's easy to answer 'yes'
[21:34:17] besides that, having control over opensuse.* domains can't hurt
[21:34:17] tampakrap: just to understand you right: Petr offered that we become the owner of the opensuse.cz domain?
[21:35:02] can "we" even act as the owner?
[21:35:10] correct, I said that I will take the topic to our meeting, but he went as well to doug, who sent the mail first to our ml
[21:35:47] from a technical standpoint, I see no issues - but the legal point needs to be clarified by the board IMHO
[21:36:08] actually it's already on the board's radar
[21:36:35] last I heard is that Richard is waiting for a response from Ciaran, who should be in the whois data
[21:37:04] okay, mind responding to Petr and Doug then?
[21:37:11] the whois data will accept anything.
[21:37:28] pjessen: :-)
[21:37:47] pjessen: you are technically right, but legally Ciaran might have a different opinion ;-)
[21:37:47] tampakrap: I would say: go ahead
[21:38:04] ...as we are just the technical part of the story
[21:38:15] tampakrap: but what about their website?
[21:38:43] last thing they told me, they want to get rid of it
[21:38:47] I guess they want to leave that stuff as it is and just want to get the opensuse.cz domain (DNS) under openSUSE control?
[21:39:25] there is presumably also an issue of cost?
[21:39:30] tampakrap: maybe put this (together with the email question) in your answer email?
[21:39:39] ack
[21:40:22] means: is it ok for them that openSUSE takes over the domain, redirects anything opensuse.cz related to the CZ wiki and skips the email part?
[21:40:31] ...something like that
[21:41:08] ok for everyone?
[21:41:26] yes
[21:41:29] So we would just be the DNS admins?
[21:42:00] yes
[21:42:20] pjessen: IMHO yes. That would be my understanding
[21:42:37] I have no issue with that. Especially as I don't do any DNS admin ...
[21:42:53] pjessen: not yet... ;-)
[21:43:16] I guess the next 2 topics were handled already: monitoring/mirror status?
[21:43:33] yes. You told all. ;)
[21:43:39] and the salt status
[21:44:01] tampakrap: I'm happy to hear more ;-)
[21:45:16] * cboltz wonders how the 40 VMs match the 28 pillar/id/* files
[21:45:40] cboltz: now you know how many machines are administrated by me ;-)
[21:46:00] lol
[21:46:15] yep, let's fix that
[21:46:18] at least the galera machines are currently completely unmanaged
[21:46:18] but seriously - why don't you use salt?
[21:46:43] cboltz: fear? not enough knowledge? using ansible?
[21:46:51] cboltz: choose one and you might be right
[21:47:51] I can help you to fix #2, which might then also fix #1 ;-)
[21:48:08] cboltz: thanks - I will definitely come back to that :-)
[21:48:23] tampakrap: btw, I have one question
[21:48:33] shoot
[21:48:46] tampakrap: is the salt master in the heroes network only serving the heroes machines?
[21:48:59] yes
[21:49:06] I was just wondering as I was on the svn machine in the other network
[21:49:26] but that might be just a "flashback", as I saw some repos there pointing to "nowhere"
[21:49:39] we used to have a separate master for the suse-dmz but it is currently broken since the network split
[21:49:49] ok
[21:49:57] hopefully it will be back during hackweek
[21:50:13] about the ntp stuff - I have two notes from what I saw so far...
[21:50:22] That's your hackweek project? :D
[21:50:26] 1) any objections to using chrony instead of ntpd?
[21:50:46] Any reason why?
[21:50:58] 2) any objections to using just "ntp1", "ntp2" instead of the full DNS name?
[21:51:05] no objections, I saw the configs are quite similar, but I'll need to disable it in salt as well, as ntp is currently managed in salt
[21:51:24] chrony is a bit more secure, as some tests from secint showed
[21:51:44] for 2 it will work if the machines have the searchlist properly set up, right?
[21:51:46] the chrony maintainers did not implement all features, so it is not fully RFC compatible
[21:51:52] tampakrap: sounds like kl_eisbaer found a nice task to practise salt ;-)
[21:52:05] but they implemented enough to have everything that a client needs
[21:52:27] I'm a long-time fan of ntp, but that's personal.
[21:52:59] one example: chrony only binds to localhost and no other interface per default
[21:53:15] pjessen: I was, too, but times are changing ;-)
[21:53:56] tampakrap: yes, with just the hostname (ntp1), they will always just try their local domain
[21:54:37] pjessen: I just found one more or less important feature that chrony handles differently than ntpd
[21:55:20] the "tinker panic 0" (which is btw. missing in the salt profile) translates there to something like "makestep 1.0 3"
[21:55:50] tampakrap: ...and IMHO "disable monitor" is also missing in the salt profile, but I'm not sure here
[21:56:11] I'll check
[21:56:27] cboltz: I already have some other quick and easy things to add to salt
[21:57:02] disable monitor is there
[21:57:02] but my last merge requests were long ago - so I probably forgot how to do it properly
[21:57:10] tampakrap: ah, sorry
[21:57:18] tinker panic 0 is not
[21:57:22] add it to all machines?
[21:57:33] tampakrap: I guess the tinker panic 0 was left out because the initial formula did not support it
[21:57:40] but all virtual machines should have it
[21:57:52] only virtual? no physical?
[21:58:03] because it allows the clock to be stepped
[21:58:04] ntp servers as well?
[21:58:24] yes: for virtual machines this might be important during live migration or when they are paused
[21:58:46] ack
[21:58:59] some docs also mention that virtual machines might have problems right after boot, when their "virtual hw clock" is not synced with the hypervisor
[21:59:13] so it's definitely a setting you want to have for virtual machines
[22:00:07] I will have to ask you about that chrony feature tomorrow.
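[Editor's sketch] The chrony discussion above can be illustrated with a minimal client-side chrony.conf; the short server names follow the "ntp1"/"ntp2" scheme proposed in the meeting, and the exact values are assumptions rather than the heroes configuration:

```
# /etc/chrony.conf - minimal client sketch, values are illustrative
# short hostnames resolve via the machines' search list, as proposed
server ntp1 iburst
server ntp2 iburst

# step the clock instead of slewing if the offset exceeds 1 second,
# but only during the first 3 clock updates - this is the setting the
# meeting compared to ntpd's "tinker panic 0", and what makes VMs
# recover after live migration or a pause
makestep 1.0 3

driftfile /var/lib/chrony/drift
```

Note that "makestep 1.0 3" is not an exact equivalent of "tinker panic 0" (which removes the panic threshold entirely); it covers the common case of a VM waking up with a badly wrong clock.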
[22:00:11] the funny part with the "restrict" settings: you can skip all of them with chrony, if you do not plan to use the VM as a time server
[22:00:35] ah, better defaults?
[22:00:35] pjessen: https://chrony.tuxfamily.org/
[22:00:44] pjessen: more secure defaults, yes
[22:02:00] but I would say: 2 hours is enough for a meeting ;-)
[22:02:20] yeah, I have to go. Good meeting though.
[22:02:23] good
[22:02:31] pjessen: CU
[22:03:06] see ya all.
[22:03:07] ...and bye, bye to everyone else :-)
[22:03:09] bye
[22:03:20] bye
[22:03:21] bye
[22:03:44] yeah let's finish it