Improvements to the release regression testing

Hello everyone!
During the latest RC testing, it became clear that we should look for ways to speed it up, as it takes a lot of time given the small size of our team. The rest of the Mind the Parrot team and I came up with a few ideas to improve this process.

  1. Some of the test cases with the CoreRegression label (the ones that we execute during RC testing) can be removed from the test cycle, i.e. the CoreRegression label can be removed from them. To that end, Joanna Szymańska reviewed the test cases with this label. There are currently 91 of them, and executing all of them takes several days with the current team size, which is a long time, especially when there is more than one Release Candidate.
    In our view, the label should be removed from test cases whose scope is very similar or identical to that of other CoreRegression test cases. This way we will stop duplicating work, and the duration of release regression testing will decrease. Each test case from which the label is removed will have to be linked to the CoreRegression test case(s) it duplicates, and a comment will have to be added stating that the label was removed because the test case’s scope overlaps with another CoreRegression test case.
    As part of the review, it was also checked whether any test cases concern non-essential features; the label should be removed from those as well. The same goes for test cases for which all executions have passed, unless they concern essential features of the application. All in all, manual RC testing should amount to so-called smoke testing.
    We have already discussed the results of Joanna’s CoreRegression test case review at a meeting and identified various test cases that are not in fact CoreRegression ones; some also turned out to be redundant duplicates and will be moved to Dead.
  2. We need to resume writing automated tests (mainly functional ones) based on the CoreRegression test cases, because automated tests reduce the number of manual test cases and frequently also reduce the number of steps to execute in the remaining ones.
    All epics containing tickets concerning the creation of automated tests based on the CoreRegression test cases will have to be reviewed, and a ticket for this review has to be created. The epics are the following: Functional tests - UI components, Release Stock Management faster with more confidence, and Release CCE faster with more confidence.
    We can consider creating separate epics for all other services, containing tickets for the creation of automated tests based on CoreRegression test cases (suitable tickets will, of course, need to be created for the test cases). Alternatively, we could create one epic for all automated tests based on CoreRegression test cases and move the existing tickets from the above-mentioned epics there. Personally, I think this would be the preferable solution.
  3. The whole release process could be quickened if the ticket from the Release faster – processes and global improvements epic was done. Regressions in the application would be found earlier, as it would be easier to identify why the functional tests build failed. RC testing would be quicker because fewer bugs would be found, and thus there would be fewer Release Candidates.
  4. There should be only one phase of release regression testing, and testing the Reporting Stack should be included in it. Only one person at a time should execute a test case concerning the Superset reports. They should announce on the qa Slack channel that they have started executing the test case, that they have restarted the processor groups in NiFi, and that they have finished, so that another person can start executing the next test case related to the Superset reports.
  5. Performance tests should also be executed in the first phase. Exploratory and translation testing should be performed in the meantime, i.e. while waiting for bug fixes or when someone has no other testing (manual or performance) to perform. Exploratory and translation tests could also be performed instead of the code freeze tasks.
  6. Testing could be conducted on one browser only: Firefox. There are virtually no situations in which a bug exists only on Chrome, while the opposite happens once in a while, although still not frequently. For this reason, it doesn’t make much sense to execute each test case on both browsers; one execution, on Firefox, is enough. Currently, there are 6 open bugs that occur only on Firefox and none that occur only on Chrome; all of them are minor UI issues. The most recent is OLMIS-6543, created on 30 August. The application’s performance is also worse on Firefox, so potential performance issues are more noticeable on this browser. I found only two bugs with the Done status that occurred only on Chrome. They are very old (they concern the 3.2 and 2.0.1 versions), and one of them concerns Mac users: OLMIS-2780 and OLMIS-578.
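To make the clean-up rule from point 1 concrete, here is a minimal sketch in Python. The data model (test-case IDs, `duplicate_of`, `essential` fields) is purely hypothetical; in practice, the labels, links, and comments would be updated through the test-management tool itself.

```python
# Hypothetical sketch of the CoreRegression label clean-up described in point 1.
# The in-memory representation is illustrative, not a real tool's API.

CORE_LABEL = "CoreRegression"

def clean_up_labels(test_cases):
    """Remove the CoreRegression label from duplicates and non-essential cases.

    `test_cases` maps a test-case ID to a dict with:
      - "labels": set of labels
      - "duplicate_of": ID of the CoreRegression test case it overlaps with
        (None if its scope is unique)
      - "essential": whether it covers an essential feature
    """
    for tc_id, tc in test_cases.items():
        if CORE_LABEL not in tc["labels"]:
            continue
        if tc["duplicate_of"] is not None:
            # Duplicate scope: remove the label, link to the original,
            # and leave a comment explaining why.
            tc["labels"].discard(CORE_LABEL)
            tc["linked_to"] = tc["duplicate_of"]
            tc["comment"] = (
                f"{CORE_LABEL} removed: scope overlaps with {tc['duplicate_of']}"
            )
        elif not tc["essential"]:
            tc["labels"].discard(CORE_LABEL)
            tc["comment"] = f"{CORE_LABEL} removed: feature is not essential"
    return test_cases

# Illustrative review outcome: TC-2 duplicates TC-1, TC-3 covers a
# non-essential feature, so both lose the label; TC-1 keeps it.
cases = {
    "TC-1": {"labels": {"CoreRegression"}, "duplicate_of": None, "essential": True},
    "TC-2": {"labels": {"CoreRegression"}, "duplicate_of": "TC-1", "essential": True},
    "TC-3": {"labels": {"CoreRegression"}, "duplicate_of": None, "essential": False},
}
clean_up_labels(cases)
```

After the run, only TC-1 still carries the label, and TC-2 records both the link to TC-1 and the explanatory comment, which matches the bookkeeping the review calls for.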
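The Superset coordination protocol from point 4 boils down to three status announcements per test case. A small helper like the sketch below could keep the wording consistent; the message templates and parameter names are assumptions of mine, and actually posting to the qa Slack channel (e.g. via a webhook) is out of scope here.

```python
# Hypothetical helper for the Superset coordination protocol in point 4.
# Message wording is illustrative; posting to Slack itself is not shown.

def superset_status_message(tester, test_case, event):
    """Build the status line a tester would post on the qa Slack channel."""
    templates = {
        "started": "{tester} started executing {tc} (Superset reports).",
        "nifi_restarted": "{tester} restarted the processor groups in NiFi for {tc}.",
        "finished": "{tester} finished {tc}; the next Superset test case is free to start.",
    }
    return templates[event].format(tester=tester, tc=test_case)

# The three announcements for one (hypothetical) test case TC-101:
for event in ("started", "nifi_restarted", "finished"):
    print(superset_status_message("Joanna", "TC-101", event))
```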
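The argument in point 6 can be stated as a coverage question: which browsers do we have to test so that every open browser-specific bug would be caught? The bug counts below mirror the ones quoted in the post (6 Firefox-only, 0 Chrome-only); the function is just a worked example of the reasoning, not part of any real tooling.

```python
# Worked example of the single-browser argument in point 6: if every open
# browser-specific bug reproduces on Firefox, one Firefox run covers them all.

def browsers_needed(open_bugs_by_browser):
    """Return the browsers that must be tested to hit every browser-specific bug."""
    return sorted(b for b, count in open_bugs_by_browser.items() if count > 0)

# As of this post: 6 open bugs occur only on Firefox, none only on Chrome.
print(browsers_needed({"Firefox": 6, "Chrome": 0}))  # -> ['Firefox']
```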

Please let me know what you think about the ideas presented. I will be very grateful for your feedback.


Hi Joanna,

Thank you for the above list of ideas!
With reference to the first point, here is a list of test cases that no longer need to be executed during release regression testing:
Can be moved to DEAD:

Remove the CoreRegression label:

Merge them with other test cases, then move them to DEAD:


Hi, Joanna!
Thank you for reviewing the test cases and for preparing the list of those that we can remove from the release regression testing. I looked at them again and think that nothing needs to be changed; all of them are in the correct categories in my view. But let us see what others say about it, as well as about the other proposed ideas.

Heya. Thanks for this writeup. Here are my thoughts.

I’d say that maybe they should just be removed from the regression testing, rather than removed completely.

Yes, always. But first, we should focus on making them more stable. It’s much better than it used to be, but in my opinion it’s still not reliable enough to throw in another couple hundred new tests.

I see it more as a nice-to-have; I’m not sure it will actually make the release process faster.

Yes to that! I didn’t understand why we needed to wait with the reporting stack and exploratory testing until all of the bugs from phase 1 were fixed.

Agreed! In the first (and only) phase.

Maybe instead of committing to one and only one browser, we could execute some tests on Firefox and some on Chrome? E.g. perform all manual test cases on Firefox, but the performance and exploratory testing on Chrome?

Best regards,

Thank you for the feedback, Sebastian!
As for the four test cases, on second thought I, too, think that we can keep them and only remove the “CoreRegression” label.
As for the automated tests, I agree that it’s essential that we make them more stable first, then we should think about creating new ones.
As for #3, I also don’t think that moving this ticket to Done is a high priority, but it would be really helpful to do so when there is time.
It’s nice to hear that you agree with the idea of only one phase of tests. Reporting and all other types of tests were moved to the second phase because this is how it’s been done in the past, and there wasn’t enough time to analyze it more thoroughly and to discuss other solutions, especially taking into account the fact that the start of release testing was delayed.
Thank you for the suggestion on using Chrome for performance and exploratory tests, and Firefox for manual test cases! I think it’s a great idea and the best possible solution.

As there were no objections, I will start working on removing the CoreRegression label from the above-mentioned test cases and merging the others.

Best regards,


Hello everyone!
I would like to inform you that if we don’t hear any objections to the approved ideas (i.e. #4, #5 and #6, as #1 is already implemented, and #2 and #3 were not approved) in 2-3 days, we will start implementing them. Please let me know if any of you has any suggestions or objections.

Since there were no objections, I’ll start implementing those of the proposed improvements that gained approval.