action #109635
Check grafana monitoring host performance size:M
Description
Observation
Some graphs have a lot of data points and can be very slow to load. In the worst case Grafana says "No data".
Example: https://monitor.qa.suse.de/d/WebuiDb/webui-summary?viewPanel=80&orgId=1&from=now-30d&to=now
30 days seems to be ok for this graph (if there are no other expensive queries running).
There are other graphs using conditions on fields which take even more time:
https://monitor.qa.suse.de/d/1pHb56Lnk/tinas-dashboard?viewPanel=10&orgId=1&from=now-7d&to=now
Looking at htop, at least influxdb seems to be able to make use of all the CPUs for showing one graph, so maybe we could ask for more CPUs to improve the situation a bit.
It could also be that influxdb has too much data to be able to act efficiently. In #94492 we already worked on this topic and ended up with a database of 101GB, which is rather big but at least better than before. Now we are at 117GB again. I suggest reducing the size of the database. For that I suggest researching online and, if nothing is found, actively seeking help from the influxdb community.
The problem is happening for certain graphs only, which have a lot of data points.
Why should the total size of the DB be responsible for graphs showing no data if you select 90 days?
Maybe because with many data points the problem of a fragmented database becomes more severe. Or maybe the heavy graphs themselves are simply the problematic ones; that could also be.
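To tell whether a heavy panel is slow because of the number of points it scans rather than the total database size, the panel's underlying query can be profiled in the influx CLI. A sketch, assuming InfluxDB 1.7+ (where `EXPLAIN ANALYZE` is available); the measurement and field names are placeholders, not taken from the actual dashboards:

```sql
-- Run the panel's query under EXPLAIN ANALYZE to see how many blocks and
-- points are decoded and where the execution time goes.
-- "apache_log" / "response_time" are placeholder names.
EXPLAIN ANALYZE SELECT mean("response_time")
FROM "apache_log"
WHERE time > now() - 30d
GROUP BY time(1h)
```

If the planner reports a huge number of decoded points for a wide time range, the panel (or the data behind it) is the problem, independent of total DB size.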
Acceptance criteria
- AC1: It is known what VM size is required for our monitoring needs
- AC2: All panels load in a reasonable time frame
Suggestions
- Compare current influxdb size with community recommendations (https://docs.influxdata.com/influxdb/v1.8/guides/hardware_sizing/)
- Look into older tickets where we already looked into sizes of tables and downsampling
- Look up sizes of tables to find the biggest size contributors
- If we find individual measurements that contribute significantly more than others then handle these specific measurements. E.g. if "apache response times" account for 80% of size then handle that measurement either with downsampling or deleting it completely, etc.
- See https://progress.opensuse.org/issues/94492#note-15 for how to determine the size of certain measurements
- See https://progress.opensuse.org/issues/94492#note-17 for how to delete old data from a certain measurement
- That note confirms that "apache_log" is the worst offender, already at 60GB (at least before the last cleanup). Can we just delete that measurement and start from scratch? So in influxdb
DROP MEASUREMENT apache_log
or something more elaborate like https://stackoverflow.com/a/49022924, then check the sizes of the tables again and repeat for the next worst offender
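The lookup-and-cleanup loop from the suggestions above could look like this in the influx CLI. A sketch, assuming InfluxDB 1.8; "telegraf" is an assumed database name and the retention window is an example, not a value from this ticket:

```sql
-- List measurements and series cardinality to spot candidates
-- ("telegraf" is an assumed database name, substitute the real one).
SHOW MEASUREMENTS ON "telegraf";
SHOW SERIES CARDINALITY ON "telegraf";

-- Trim old data from a single heavy measurement instead of dropping it.
-- Note: DELETE in InfluxQL 1.x only supports time and tag predicates.
DELETE FROM "apache_log" WHERE time < now() - 90d;

-- Or drop the measurement entirely and start from scratch:
DROP MEASUREMENT "apache_log";
```

After each deletion, re-check the on-disk size and repeat for the next biggest contributor, as suggested above.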
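For the downsampling option, the usual InfluxDB 1.x pattern is a short retention policy on the raw data plus a continuous query that writes aggregates into a longer-lived policy. A sketch; all names, durations and intervals here are assumptions for illustration:

```sql
-- Keep raw data for 14 days only ("telegraf" is an assumed database name).
CREATE RETENTION POLICY "raw_14d" ON "telegraf" DURATION 14d REPLICATION 1 DEFAULT;

-- Keep hourly aggregates for a year in a separate retention policy.
CREATE RETENTION POLICY "agg_1y" ON "telegraf" DURATION 52w REPLICATION 1;

-- Continuously downsample response times into the long-term policy.
CREATE CONTINUOUS QUERY "cq_apache_1h" ON "telegraf"
BEGIN
  SELECT mean("response_time") AS "response_time"
  INTO "telegraf"."agg_1y"."apache_log_1h"
  FROM "apache_log"
  GROUP BY time(1h), *
END
```

Grafana panels for long time ranges would then query the downsampled measurement, which keeps point counts (and query times) bounded regardless of the raw data volume.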
Updated by tinita over 2 years ago
- Related to action #107881: [retro] Conduct a zombie scrum team survey added
Updated by okurz over 2 years ago
- Priority changed from Normal to Low
- Target version set to Ready
I am a bit worried about adding that ticket to our backlog. Yesterday we identified "too many infrastructure issues" as a problem for us, and now we are adding more ourselves :)
Updated by tinita over 2 years ago
Well, we need grafana to help us identify infrastructure issues.
So this will hopefully make grafana a bit more useful.
Updated by okurz over 2 years ago
tinita wrote:
Looking at htop, at least influxdb seems to be able to make use of all the CPUs for showing one graph, so maybe we could ask for more CPUs to improve the situation a bit.
It could also be that influxdb has too much data to be able to act efficiently. In #94492 we already worked on this topic and ended up with a database of 101GB, which is rather big but at least better than before. Now we are at 117GB again. I suggest reducing the size of the database. For that I suggest researching online and, if nothing is found, actively seeking help from the influxdb community.
Updated by tinita over 2 years ago
The problem is happening for certain graphs only, which have a lot of data points.
Why should the total size of the DB be responsible for graphs showing no data if you select 90 days?
Updated by okurz over 2 years ago
tinita wrote:
The problem is happening for certain graphs only, which have a lot of data points.
Why should the total size of the DB be responsible for graphs showing no data if you select 90 days?
Maybe because with many data points the problem of a fragmented database becomes more severe. Or maybe the heavy graphs themselves are simply the problematic ones; that could also be.
Updated by okurz over 2 years ago
- Subject changed from Check grafana monitoring host performance to Check grafana monitoring host performance size:M
- Description updated (diff)
- Status changed from New to Workable