Merging demo and performance data sets

Hello everyone,

  a (long) while ago we started to load performance data into our test and UAT servers to help create an environment that is closer to the actual, production instance. This was meant to expose potential performance issues faster so they could be prioritized and worked on. We have, however, given up on that approach, because at that point there were too many places with poor performance, which slowed down or totally blocked testing. Another issue was that the data wasn't very meaningful. We generate performance data sets with Mockaroo, which in the end produces random numbers/strings/assignments.

  Since then, however, we have made good progress improving performance on both the backend and the frontend. During the last technical committee call, a question emerged about whether we want to go back to using the performance data sets as our demo data. There are some obvious pros, like the one mentioned earlier about discovering performance issues faster. In addition, we would no longer need to maintain two separate data sets - demo and performance. On the other hand, we have to consider that testing with performance data is still a little slower. Additionally, we would likely need to work on making the performance data more meaningful.

  What do people think? You can try testing on our perf test server - https://perftest.openlmis.org (although it currently has Malawi data loaded, not performance data from Mockaroo).

Best regards,

  Sebastian.

Sebastian Brudziński
Senior Software Developer / Team Leader
sbrudzinski@soldevelo.com

SolDevelo Sp. z o.o. [LLC] / www.soldevelo.com
Al. Zwycięstwa 96/98, 81-451, Gdynia, Poland
Phone: +48 58 782 45 40 / Fax: +48 58 782 45 41

Hi,

We currently have performance data loaded on https://test.openlmis.org/ for stock card line items. It is helpful for testing purposes to have a lot of data that makes sense. I agree that much of our performance data is not meaningful - some of it is even completely broken (I believe we will fix that as part of OLMIS-4485). I think we should take a different approach than last time and merge performance data with demo data one service at a time. That way, we won't block the whole system. If we start with Referencedata, we wouldn't need to worry much about e.g. Requisition while we change the orderables performance data often to get it right. Once the data for one service is proven useful and sensible, we can enable performance data in the next service.
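To make the service-by-service idea concrete, here is a minimal sketch of how a data loader could gate the performance data set behind a per-service flag. The `PERF_DATA_SERVICES` variable, the service names, and `data_set_for` are hypothetical illustrations for the rollout approach, not existing OpenLMIS configuration:

```python
import os

# Hypothetical per-service toggle (an assumption, not actual OpenLMIS
# tooling): a comma-separated env var listing the services that should
# load the performance data set instead of the demo data set,
# e.g. PERF_DATA_SERVICES="referencedata".
PERF_SERVICES = {
    s.strip()
    for s in os.environ.get("PERF_DATA_SERVICES", "").split(",")
    if s.strip()
}

def data_set_for(service: str) -> str:
    """Pick which data set a given service should load."""
    return "performance" if service in PERF_SERVICES else "demo"

# Every service not listed in the toggle keeps loading demo data, so
# enabling performance data in Referencedata alone would not block
# testing in Requisition or Stock Management.
for service in ("referencedata", "requisition", "stockmanagement"):
    print(service, "->", data_set_for(service))
```

Flipping one service at a time this way would let us validate each data set in isolation before widening the rollout.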

Regards,

Paweł


···

On Tue, Apr 17, 2018 at 11:27 AM, Sebastian Brudziński sbrudzinski@soldevelo.com wrote:



Paweł Albecki
Software Developer
palbecki@soldevelo.com