Thank you for your feedback. We will keep working on making the issues you mentioned in the document more specific.
To answer some of your questions from the document:
We should modify the parts of test cases that check the exact text on the UI, making them more general so that no one fails a manual test case just because the wording is not identical. I can create a ticket to ensure these tests are modified.
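The same principle would apply if we automate any of these checks: assert on stable keywords rather than the exact sentence. Below is a minimal sketch assuming a Selenium-based UI test; the URL, element id, and expected keyword are hypothetical placeholders, not taken from the actual system.

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class StatusMessageUiTest {

  @Test
  public void shouldShowConfirmationAfterSaving() {
    // The URL and element id below are hypothetical placeholders.
    WebDriver driver = new FirefoxDriver();
    try {
      driver.get("http://localhost:8080/#!/home");
      String message = driver.findElement(By.id("statusMessage")).getText();

      // Assert on a stable keyword instead of the exact sentence, so a
      // minor rewording of the UI text does not fail the test.
      assertTrue(message.toLowerCase().contains("saved"));
    } finally {
      driver.quit();
    }
  }
}
```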
However, we should still check whether the UI information makes sense, regardless of whether it exactly matches the text presented in a test case. Similarly, we should check whether a report was generated with the right content. That is why we thought these parts of the system should be checked manually, as this requires some human judgment.
Currently, a new manual test is created when new functionality is added to the system and no suitable test cases exist yet. When an existing test case partly covers the functionality, that test case is updated instead - we do not create a new manual test in that situation. Moreover, new manual test cases are no longer created after a ticket has landed in the QA column.
Finally, regarding automated tests for edge cases - we are open to giving it a try. For example, in the CCE service there is one edge case that we thought could be covered by an automated test: adding a solar device with a battery. There is a ticket for writing this test, OLMIS-5422.
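As a rough starting point, such a test could look like the sketch below. All class, field, and method names here are hypothetical placeholders rather than the actual CCE domain model; the real shape of the test will be decided in OLMIS-5422.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class AddSolarDeviceWithBatteryTest {

  // All classes and fields below are hypothetical placeholders,
  // not the actual CCE domain model.
  enum EnergySource { ELECTRIC, SOLAR, GASOLINE }

  static class Device {
    final EnergySource energySource;
    final boolean hasBattery;

    Device(EnergySource energySource, boolean hasBattery) {
      this.energySource = energySource;
      this.hasBattery = hasBattery;
    }
  }

  static class DeviceInventory {
    private final List<Device> devices = new ArrayList<>();

    void add(Device device) {
      devices.add(device);
    }

    int count() {
      return devices.size();
    }
  }

  @Test
  public void shouldAddSolarDeviceWithBattery() {
    DeviceInventory inventory = new DeviceInventory();

    // The edge case from the discussion: a solar-powered device that
    // also has a battery should be accepted like any other device.
    inventory.add(new Device(EnergySource.SOLAR, true));

    assertEquals(1, inventory.count());
  }
}
```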