During the latest RC testing, it became clear that we should look for ways to speed the process up, as it takes a long time with a team of our size. The rest of the Mind the Parrot team and I came up with a few ideas to improve it.
- The CoreRegression label (which marks the test cases we execute during RC testing) can be removed from some test cases, taking them out of the test cycle. To that end, Joanna Szymańska reviewed the test cases with this label. There are currently 91 of them, and executing all of them takes several days with the current team size – a long time, especially when there is more than one Release Candidate. In our view, the label should be removed from test cases whose scope is very similar or identical to that of other CoreRegression test cases; this will eliminate duplicated work and shorten release regression testing. Each test case from which the label is removed will have to be linked to the CoreRegression test case(s) it duplicates, and a comment stating that the label was removed because the test case's scope overlaps with another CoreRegression test case(s) will have to be added to it. As part of the review, it was also checked whether any test cases concern non-essential features – the label should be removed from those as well – and whether there are test cases for which all executions have passed; if such a test case does not concern an essential feature of the application, the label should likewise be removed. All in all, manual RC testing should amount to so-called smoke testing. We have already discussed the results of Joanna's CoreRegression test case review at a meeting and identified various test cases that are not in fact CoreRegression ones; some also turned out to be redundant duplicates and will be moved to Dead.
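A review like this could be partly assisted by a script that flags pairs of CoreRegression test cases with heavily overlapping scopes. The sketch below is purely illustrative: the test-case keys and step texts are hypothetical, and Jaccard similarity over step keywords is just one possible overlap measure, not how the review was actually done.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def tokenize(steps):
    """Lowercased words from all steps of a test case."""
    return {word for step in steps for word in step.lower().split()}

def find_overlaps(test_cases, threshold=0.8):
    """Return pairs of test-case keys whose step keywords overlap at least
    `threshold` -- candidates for removing the label from one of the pair."""
    keys = sorted(test_cases)
    pairs = []
    for i, k1 in enumerate(keys):
        for k2 in keys[i + 1:]:
            if jaccard(tokenize(test_cases[k1]), tokenize(test_cases[k2])) >= threshold:
                pairs.append((k1, k2))
    return pairs

# Hypothetical test cases: key -> list of step descriptions.
cases = {
    "TC-101": ["Log in as administrator", "Open the Requisitions page", "Approve a requisition"],
    "TC-102": ["Log in as administrator", "Open the Requisitions page", "Approve a requisition"],
    "TC-203": ["Log in as storeroom clerk", "Create a physical inventory"],
}
print(find_overlaps(cases))  # -> [('TC-101', 'TC-102')]
```

Flagged pairs would still need a human decision on which test case keeps the label; the script only narrows down what to look at.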
- We need to resume writing automated tests (mainly functional ones) based on the CoreRegression test cases, because each automated test reduces the number of manual CoreRegression test cases – and frequently also the number of steps to execute in the remaining ones.
All epics containing tickets for the creation of automated tests based on the CoreRegression test cases will have to be reviewed; a ticket for this review needs to be created. The epics are: Functional tests - UI components (https://openlmis.atlassian.net/browse/OLMIS-5716), Release Stock Management faster with more confidence (https://openlmis.atlassian.net/browse/OLMIS-5497), and Release CCE faster with more confidence (https://openlmis.atlassian.net/browse/OLMIS-5495).
We can consider creating separate epics for all other services, containing tickets for the creation of automated tests based on CoreRegression test cases (suitable tickets will, of course, need to be created for the test cases). Alternatively, we could create a single epic for all such automated tests and move the existing tickets from the above-mentioned epics into it. Personally, I think the single-epic option is preferable.
- The whole release process could be sped up if the https://openlmis.atlassian.net/browse/OLMIS-5638 ticket from the Release faster – processes and global improvements epic (https://openlmis.atlassian.net/browse/OLMIS-5651) were completed. Regressions in the application would be found earlier, as it would be easier to identify why the functional tests build failed. RC testing would then be quicker, because fewer bugs would be found and thus there would be fewer Release Candidates.
- There should be only one phase of release regression testing, and testing the Reporting Stack should be included in it. Only one person at a time should execute a test case concerning the Superset reports. They should announce on the qa Slack channel when they start executing the test case, when they restart the processor groups in NiFi, and when they finish – so that someone else can start executing another test case related to the Superset reports.
- Performance tests should also be executed in the first phase. Exploratory and translation testing should be performed in the meantime – while waiting for bug fixes, or when someone has no other testing (manual or performance) to perform. Exploratory and translation tests could also be performed instead of the code freeze tasks.
- Testing could be conducted on only one browser: Firefox. There are virtually no situations in which a bug exists only on Chrome, while the other way round does happen once in a while (though still not frequently), so it doesn't make much sense to execute each test case on both browsers – a single execution, on Firefox, is enough. Currently there are 6 open bugs occurring only on Firefox and none occurring only on Chrome; all of them are minor UI issues, the most recent being OLMIS-6543 (https://openlmis.atlassian.net/browse/OLMIS-6543), created on 30 August. The application's performance is also worse on Firefox, so potential performance issues are easier to notice on this browser. I found only two Done bugs that occurred only on Chrome. They are very old (they concern the 3.2 and 2.0.1 versions), and one of them concerns Mac users: OLMIS-2780 (https://openlmis.atlassian.net/browse/OLMIS-2780) and OLMIS-578 (https://openlmis.atlassian.net/browse/OLMIS-578).
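To put a number on the saving: with the 91 CoreRegression test cases mentioned above, dropping the second browser halves the number of manual executions per Release Candidate, and the saving multiplies with each additional RC. A trivial sketch of the arithmetic (the case count is from this proposal; the rest is generic):

```python
CORE_REGRESSION_CASES = 91  # current number of CoreRegression test cases

def executions(cases, browsers, release_candidates=1):
    """Total manual test executions for a release."""
    return cases * len(browsers) * release_candidates

both_browsers = executions(CORE_REGRESSION_CASES, ["Firefox", "Chrome"])
firefox_only = executions(CORE_REGRESSION_CASES, ["Firefox"])
print(both_browsers, firefox_only)  # -> 182 91

# With two Release Candidates, the saving doubles:
print(executions(CORE_REGRESSION_CASES, ["Firefox", "Chrome"], 2)
      - executions(CORE_REGRESSION_CASES, ["Firefox"], 2))  # -> 182
```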
Please let me know what you think about the ideas presented above. I would be very grateful for your feedback.