End-user performance tests

Hi all,

We have recently started work on improving our end-user performance testing. Until now, these tests were performed manually for each release candidate. The manual runs proved neither reliable nor stable, so we decided to introduce automated end-user performance tests.

We have already verified that this can be done using our functional tests. We plan to update the existing steps so they map to the previously used manual steps. Furthermore, we would like to switch the functional test server's dataset to Malawi's. Currently, tests are executed against a program with ~30 products, which is not enough to measure performance. Moreover, Malawi's data was also used for the manual testing, so it would be easy to compare results. Sensitive data would of course be removed or replaced.
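To illustrate the removal/replacement of sensitive data, here is a minimal sketch of the kind of sanitization that could be applied to records from the Malawi dataset before loading them into the test server. The field names (`phone`, `email`) and placeholder format are purely illustrative assumptions, not taken from the actual schema:

```python
# Hypothetical set of sensitive fields to scrub; the real list would be
# derived from the actual Malawi dataset schema.
SENSITIVE_FIELDS = {"phone", "email", "contact_name"}

def sanitize(record, index):
    """Replace values of sensitive fields with deterministic placeholders.

    Deterministic placeholders (rather than random ones) keep repeated
    snapshot rebuilds reproducible, which matters when comparing
    performance runs against each other.
    """
    return {
        key: f"redacted-{key}-{index}" if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }

records = [
    {"facility": "Lilongwe DHO", "phone": "+265 1 234 567", "email": "a@b.mw"},
]
clean = [sanitize(record, i) for i, record in enumerate(records)]
```

Non-sensitive fields such as the facility name pass through unchanged, so the dataset keeps its realistic shape and size for performance measurement.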

After that, our functional tests would measure performance and warn us much earlier than RC testing does when some important action has become slower. The only disadvantage is that functional test execution will take longer, because of the larger number of products supported by the program.
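The warning mechanism described above could be as simple as timing each important action against a per-action budget. A minimal sketch, assuming hypothetical action names and thresholds (the real budgets would come from the manual baseline measurements):

```python
import time

# Illustrative per-action budgets in seconds; these names and limits are
# assumptions, not our actual thresholds.
BUDGETS = {"load_requisition": 2.0, "submit_order": 3.0}

def timed(action_name, func, *args, **kwargs):
    """Run an action, measure wall-clock time, and warn if over budget."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    limit = BUDGETS.get(action_name)
    if limit is not None and elapsed > limit:
        print(f"WARN: {action_name} took {elapsed:.2f}s "
              f"(budget {limit:.2f}s)")
    return result, elapsed

# Wrapping an existing functional-test step is then a one-line change:
result, elapsed = timed("load_requisition", lambda: sum(range(1000)))
```

Emitting a warning rather than failing the build outright would let the team collect timing data for a while before deciding on hard limits.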

Please let us know if you have any concerns. Otherwise, we will proceed with the described solution.


Thanks for the summary, Klaudia.

I think this is fine to start with, as introducing a new dataset would also mean tackling additional problems and questions, like: how does the new dataset map to the previous one? How does it affect the performance results? An obvious advantage is that we would be operating on data generated in a real production environment. On the downside, though, altering this performance dataset is more difficult, as we don't have it defined in a JSON/CSV file or another place that allows easy modification - the snapshot needs to be loaded, the data modified, and a new snapshot created and swapped in.

Replacing this dataset can be considered under the [OLMIS-4658] "Demo-data for fun and profit" OpenLMIS JIRA epic if needed.