OpenLMIS Demo Data

Hello,

I’ve recently been working on #847 - Demo Data.

So, the main objective for the whole standard data set is to keep it in a clean format: friendly to non-developers and, ideally, usable for API calls.

To accomplish this, I came up with the idea of storing all the data as JSON files and using a Node.js script to transform them into a SQL input file. (A set of API calls would involve extra effort to retrieve object IDs, because we cannot force a specific ID for an entity through an API call, which makes it much more difficult to express the relations between objects in static files.) My current idea is to store every model as a separate JSON file, which would look something like this (for GeographicZone):

```json
[{
  "id": "00000000-0000-0000-0000-000000000001",
  "code": "CND",
  "name": "Canada",
  "level": "/api/geographicLevels/00000000-0000-0000-0000-000000000001"
}]
```

The first thing to notice is that each object isn’t exactly what an API call would look like: we need to set an explicit ID to connect related objects together, and the “foreign key” values do not include a host, since that could differ depending on the setup (although we use “localhost” everywhere at the moment). The ID itself is ignored when the JSON is sent through the API, so it does not disturb proper API calls.
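Since each reference path ends with the target object’s UUID, the script can resolve such a “foreign key” locally, without any API lookup. A tiny sketch of that step (the helper name is mine, not from the actual script):

```javascript
// "/api/geographicLevels/00000000-0000-0000-0000-000000000001"
//   -> "00000000-0000-0000-0000-000000000001"
function referenceToId(referencePath) {
  return referencePath.split('/').pop();
}
```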

The script transforms the JSON arrays and outputs a SQL input file, which can then be used to populate the database. (Attaching this to the build process would currently be difficult, because we do not set up the database explicitly before running the application with bootRun, and once it’s up, bootRun blocks any other tasks.) We can use something like:

```shell
docker exec -i openlmisrequisition_db_1 psql -U postgres open_lmis < input.sql
```
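Stepping back to the transformation itself: as a rough illustration, the core of the script could look something like the sketch below. This is my own minimal reconstruction following the naming conventions described in this thread, not the actual implementation, and it skips details such as SQL quote escaping.

```javascript
const fs = require('fs');

// Build one INSERT statement from a JSON record. Values that look like API
// reference paths ("/api/...") are reduced to their trailing UUID and mapped
// to an "<field>id" column (e.g. "level" -> "levelid"); everything else is
// written as a plain column value.
function toInsert(table, record) {
  const columns = [];
  const values = [];
  for (const [field, value] of Object.entries(record)) {
    if (typeof value === 'string' && value.startsWith('/api/')) {
      columns.push(field.toLowerCase() + 'id');
      values.push(`'${value.split('/').pop()}'`);
    } else {
      columns.push(field.toLowerCase());
      values.push(typeof value === 'number' ? value : `'${value}'`);
    }
  }
  return `INSERT INTO ${table} (${columns.join(', ')}) VALUES (${values.join(', ')});`;
}

// Example: the GeographicZone file from above (file and table names assumed).
const records = JSON.parse(
  fs.readFileSync('demo-data/referencedata.geographic_zones.json', 'utf8'));
const statements = records.map(r => toInsert('referencedata.geographic_zones', r));
fs.writeFileSync('input.sql', statements.join('\n') + '\n');
```

For the example record above, this would produce a statement along the lines of `INSERT INTO referencedata.geographic_zones (id, code, name, levelid) VALUES ('00000000-0000-0000-0000-000000000001', 'CND', 'Canada', '00000000-0000-0000-0000-000000000001');`.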

All of the JSON files would be stored in, for example, /demo-data, with the naming strategy {schema}.{table}.json, e.g. “referencedata.geographic_levels.json”. If we stick with our current naming conventions (table names pluralized, foreign-key columns with an “id” suffix, all column names lowercase), then transforming the files into SQL format involves very little work (as it does now).
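One nice property of that naming strategy is that the target table falls straight out of the file name. For example (a hypothetical helper, using Node’s built-in path module):

```javascript
const path = require('path');

// "demo-data/referencedata.geographic_levels.json"
//   -> "referencedata.geographic_levels"
function tableFromFileName(fileName) {
  const [schema, table] = path.basename(fileName, '.json').split('.');
  return `${schema}.${table}`;
}
```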

Cheers,

Paweł

Thanks Pawel, great start to the discussion.

How would the full lifecycle work? By lifecycle, I mean the following:

  • Create: Non-technical users (e.g. our community manager) will occasionally need to set up scenarios for a demonstration. They could do this by taking a base demo installation and using the UI to add more programs, users, etc.
  • Save: These users will likely want to save their customized demos for later use, either as-is or as a new baseline from which to customize another demo. So we’d like non-technical users to be able to save/tag their own demo data.
  • Build/Deploy: Eventually we will build up a small library of demos from which users should be able to deploy. To start, let’s assume a technical user (not a non-technical user) would handle this task, so passing the build a demo data tag could be an option (see the sketch after this list).

Pengfei, since you are working on the CI/CD process, we would welcome your thoughts.
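To make the “pass the build a demo data tag” idea concrete, here is one purely hypothetical shape it could take on top of Paweł’s transform script: the tag simply selects which saved snapshot of JSON files gets converted to SQL. (The DEMO_DATA_TAG variable and the directory layout are my assumptions, not anything that exists today.)

```javascript
const fs = require('fs');
const path = require('path');

// Hypothetical layout: each saved/tagged demo is a directory of JSON files,
//   demo-data/base/referencedata.geographic_zones.json
//   demo-data/summer-2016-demo/referencedata.geographic_zones.json
// The build passes the tag in, and the script transforms only that snapshot.
const tag = process.env.DEMO_DATA_TAG || 'base';
const dataDir = path.join('demo-data', tag);

for (const file of fs.readdirSync(dataDir)) {
  if (file.endsWith('.json')) {
    console.log(`transforming ${path.join(dataDir, file)}`);
    // ... generate INSERTs for this file, as in the transform script ...
  }
}
```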

Hi Pawel,

Having demo/seed data certainly would benefit the contract testing that I am working on.

But I did not fully understand the solution; is it on GitHub somewhere?

If so, I think I can get a better understanding by trying it out in action, and I will try to use it to set up data before running the contract tests.

Thanks and regards,

Pengfei


Hi Pengfei,

This is currently contained within the requisition service.

I’ve updated the README with instructions.
