action #16166: Log per test
Status: closed
0% done
Description
All the logs are in one file called autoinst-log.txt. This single file contains the logs of all the tests that ran during a build. As you can probably imagine, this file is rather big, sometimes up to 20K lines long. Inside of it you can find all sorts of messages, no doubt about it. But... I would like to propose creating a log per test (i.e. per *.pm file), which is imho much more helpful and meaningful.
This can be done by extracting the text between the "||| starting" and "||| finished" placeholders.
For example:
10:44:49.1221 26680 ||| starting boot_to_desktop tests/boot/boot_to_desktop.pm at 2016-11-18 10:44:49
... (extract)
... (this)
... (part)
10:45:21.5714 26680 ||| finished boot_to_desktop boot at 2016-11-18 10:45:21 (32 s)
As soon as you have successfully extracted the correct portion of the log file, you can save it as $test.log (in this case: boot_to_desktop.log).
Then repeat the same procedure for all the tests that ran during the build and save each log separately.
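To find the names of all the test modules that appear in the log, one could (again, only a sketch that relies on the line format shown above, where the module name is the fifth field) do something like:

# Hypothetical helper: list every test module name that has a "starting" placeholder,
# so the extraction above can be repeated for each of them.
grep '||| starting ' autoinst-log.txt | awk '{print $5}' | sort -u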
Attention: when a test fails, the placeholder signaling the end of the test is not the same. The expected keywords are "failed" and "died".
For example:
# $file is the autoinst-log.txt to scan, $testname is the name of the test module.
lineno=1  # current line number while scanning the log
while IFS='' read -r line || [[ -n "$line" ]]; do
    if [[ $line == *"starting $testname "* ]]; then
        line_start=$lineno   # line where the test module log starts
    fi
    if [[ $line == *"finished $testname "* ]] || [[ $line == *"$testname died"* ]] || [[ $line == *"$testname failed"* ]]; then
        line_finish=$lineno  # line where the test module log ends
    fi
    lineno=$((lineno + 1))   # $[ ] arithmetic is deprecated, use $(( ))
done < "$file"
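The loop above only records the two line numbers; a minimal way to complete it (assuming both markers were actually found) would be to write that slice of the log to a per-module file right after the loop:

# Assuming line_start and line_finish were set by the loop above,
# save that portion of the log as <testname>.log.
if [[ -n ${line_start:-} && -n ${line_finish:-} ]]; then
    sed -n "${line_start},${line_finish}p" "$file" > "${testname}.log"
fi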
PS: Most probably the back-end developers have another (better) way of doing this, without looking for placeholders, since this kind of wording can be changed at any time.
Updated by szarate almost 8 years ago
- Follows coordination #14972: [tools][epic] Improvements on backend to improve better handling of stalls added
Updated by szarate almost 8 years ago
- Follows action #16180: Better log viewer added
Updated by szarate almost 8 years ago
- Follows deleted (action #16180: Better log viewer)
Updated by szarate almost 8 years ago
- Blocks action #16180: Better log viewer added
Updated by okurz almost 8 years ago
please keep in mind the glossary of openQA as defined here: https://progress.opensuse.org/projects/openqav3/wiki#Glossary
You are talking about a "test module", not a test. In general, what one wants to conduct as a "test case" is either a "test module", which is executed as a "test step", or a "test suite", which is conducted as a scenario.
Please think about if you can restructure your feature request based on the template proposal in https://progress.opensuse.org/projects/openqav3/wiki#feature-requests
I for myself have no problems searching for specific keywords within the big logfile. Splitting the logfile per module would not provide a big benefit, but it would make understanding the full flow of a scenario way harder, especially because the test steps are not independent.
I suggest closing this as "rejected", but I leave that to you for consideration for now. Thank you for understanding.
Updated by pgeorgiadis almost 8 years ago
@okurz thanks for the links! From now on I will remember to use the correct terminology ;) FYI: I tried to edit the current feature request, but I see no 'edit' button :/
Please try to see this from a tester's point of view. As a tester, I do not have the luxury of searching through a big log file, plus there is also irrelevant information in it (internal engine logs, backend mechanisms). On the other hand, this data is relevant to you as the developer of the tool. So, since there are two different perspectives, I propose to keep both, so everyone is satisfied: one big file, and isolated logs per test. If I may, a good place to put them would be under the name of each test module, but that's just a stylistic preference.
As for the flow of the scenarios, I don't see any impact on that. If someone would like to see the test plan, they could use the webui to see which test modules are included and which job groups are related to them. Looking at the logs to understand the test plan and the full flow of a scenario is the wrong approach. Furthermore, the vast majority of test steps for QAM test modules are completely independent -- apart from the installation.
PS: The implementation of this feature request is a requirement for #16180 and #16184, both of which lead to bigger improvements ;)
Updated by szarate almost 8 years ago
- Priority changed from Normal to Low
- Target version set to future
Updated by okurz@suse.de almost 8 years ago
On Tuesday, 24 January 2017 11:03:26 CET you wrote:
@okurz thanks for the links! From now on I will remember to use the correct
terminology ;) FYI: I tried to edit the current feature request, but I see
no 'edit' button :/
try the little pencil icon
Updated by pgeorgiadis almost 8 years ago
okurz@suse.de wrote:
On Tuesday, 24 January 2017 11:03:26 CET you wrote:
@okurz thanks for the links! From now on I will remember to use the correct
terminology ;) FYI: I tried to edit the current feature request, but I see
no 'edit' button :/
try the little pencil icon
I've tried but it prompts me to a new message, while I want to edit the already posted one.
The link of the little pencil icon is this: https://progress.opensuse.org/issues/16166/edit
I can edit only my replies to that issue, not the original (first) message that describes the issue :/
Updated by szarate almost 8 years ago
There's a field called update description :)
EDIT (okurz): delete nonsense email quote
Updated by okurz@suse.de almost 8 years ago
On Tuesday, 24 January 2017 16:41:31 CET you wrote:
[…]
I've tried but it prompts me to a new message, while I want to edit the
already posted one. The link of the little pencil icon is this:
https://progress.opensuse.org/issues/16166/edit
I can edit only my replies to that issue, not the original (first)
message that describes the issue :/
That's because you were obviously never added as a member of the project. I thought your QAM PM should do that? It might make sense to check with your colleagues whether they are also already members. If not, ask to be added, e.g. by me.
Updated by okurz almost 8 years ago
- Status changed from New to Rejected
Feel free to reopen with an updated description that describes the feature with a proper goal and user story. Otherwise, I fear we might just get stuck in a discussion about implementation details when we do not even understand the "why".