During performance testing of version 3.9, the core team ran into several issues with the testing process:
- Testers running the tests on different hardware got different results (each tester had to test both the current and the previous system version)
- Results were not comparable between testers
- Measuring an API request is reliable, but it does not include the time the UI needs to process the result
- Results collected with a stopwatch were misinterpreted as 100% accurate
- The cached and not cached variants of a test were conducted slightly differently by each tester, because there was no common definition of either
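To illustrate the gap between API-only measurement and what a user actually waits for, here is a minimal sketch. The step functions and sleep durations are made up for illustration; in a real test the API call and the UI's processing of its result would each be timed separately.

```python
import time

def measure(step):
    """Time a single step with a monotonic clock (not wall-clock)."""
    start = time.perf_counter()
    step()
    return time.perf_counter() - start

# Hypothetical stand-ins for real steps: the sleeps simulate work.
def fake_api_request():
    time.sleep(0.05)   # network + server time

def fake_ui_processing():
    time.sleep(0.02)   # UI rendering the result

api_time = measure(fake_api_request)
ui_time = measure(fake_ui_processing)
total = api_time + ui_time

# The user-perceived time is the total, not just the API time.
print(f"API: {api_time:.3f}s, UI: {ui_time:.3f}s, total: {total:.3f}s")
```

Reporting only `api_time` would make this step look faster than it feels to the user.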
We’ve gathered all our recent experiences with performance testing and came up with some solutions:
- Use a virtual machine with a shared image, so every tester runs on identical hardware
- Add definitions of the ‘cached’ and ‘not cached’ versions of the tests
- Define steps for the whole user story; coarse actions like ‘Authorize Proceed’ or ‘Initiate Proceed’ should be broken down into more granular steps
- Add a checklist to help start each test (checkboxes with obligatory preconditions)
- Make it clear when a result from the performance spreadsheet was measured manually
- We could use Lighthouse, which can audit a page and produce a report listing results, issues, and even possible improvements. It seems perfect for actions like ‘Initial UI Load’, but it won’t measure actions like ‘Authorize’ or ‘Sync with server’. In such cases loading the next page is only a part of the action, but the audit can still be useful.
- The summary currently contains only manually measured results; it should contain the overall time from the user’s perspective and the time of all API requests (optionally also Lighthouse results for certain UI pages)
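The proposed summary could be sketched as follows. The step names are taken from the post; the numbers, field names, and the `measured_manually` flag are illustrative assumptions, not an existing format.

```python
from dataclasses import dataclass, field

@dataclass
class StepResult:
    name: str
    overall_seconds: float   # end-to-end time from the user's perspective
    api_seconds: list = field(default_factory=list)  # per-request times
    measured_manually: bool = False  # flag manual (stopwatch) results

    @property
    def api_total(self):
        return sum(self.api_seconds)

def summarize(steps):
    """Render one summary line per step, marking manual measurements."""
    lines = []
    for s in steps:
        flag = " (manual)" if s.measured_manually else ""
        lines.append(
            f"{s.name}: overall {s.overall_seconds:.2f}s, "
            f"API {s.api_total:.2f}s{flag}"
        )
    return "\n".join(lines)

results = [
    StepResult("Authorize", 1.40, [0.35, 0.20]),
    StepResult("Sync with server", 3.10, [2.60], measured_manually=True),
]
print(summarize(results))
```

Keeping the overall time and the API times side by side makes the UI-processing share of each step visible, and the explicit manual flag addresses the stopwatch-accuracy problem above.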
Please let us know what you think; any suggestions or feedback would be helpful.