Requisition data feed

Hi everyone,

at the latest Technical Committee meeting (you can find the meeting notes here), I proposed the idea of introducing a new batch fetch endpoint to the Requisition service, which would help us implement the new batch approval screen for SELV v3. According to @joshzamor's comments, we should avoid introducing a new endpoint to the core Requisition service, as was done for Malawi, since it goes against our microservice architecture.
As an action item, I have to describe how the current batch fetch endpoint behaves and propose a new design that would serve both the old Malawi batch screen and the new one that is going to be introduced for SELV. Unfortunately, the two screens need data aggregated in very different ways: one simply uses a list of regular requisitions, while the other needs data aggregated by geographic zones. As I see it, making both use the same endpoint would mean either returning a completely different response based on some parameter (which is essentially squeezing two different endpoints into one, and not very RESTful) or reworking the whole existing batch approval screen, for which we probably won't have enough resources.

@joshzamor mentioned that another possibility would be to introduce a data feed mechanism to the Requisition service, and I think that is a good solution for this issue. As SELV, we could introduce a new microservice that would subscribe to it and aggregate data to our needs. I'm not entirely sure how this mechanism should ideally be designed, but as the SELV team, we could try to build a simple working proof of concept. After exchanging a couple of thoughts with @Sebastian_Brudzinski, I think it could work something like this:

  • we would have a simple endpoint that you subscribe to, specifying how the Requisition service should notify you when a change in any requisition occurs
  • upon any status change or save of a requisition, the Requisition service would run a new thread that goes through the list of subscribers and notifies each of them by sending an HTTP request
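To make the idea a bit more concrete, here is a minimal sketch (in Python, just for illustration) of the subscribe-and-notify flow described above; all of the names here are hypothetical, not actual OpenLMIS APIs, and the `send` callback stands in for the HTTP request the background thread would make:

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    # where the Requisition service would POST notifications
    callback_url: str

class SubscriptionRegistry:
    """Hypothetical in-memory registry backing the subscribe endpoint."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback_url):
        sub = Subscriber(callback_url)
        self._subscribers.append(sub)
        return sub

    def notify_all(self, requisition_id, new_status, send):
        # called on every status change / save of a requisition;
        # 'send' stands in for the HTTP POST made from a background thread
        for sub in self._subscribers:
            send(sub.callback_url, {"requisitionId": requisition_id,
                                    "status": new_status})
```

In a real implementation the registry would of course be persisted and the notifications retried on failure, but the shape of the interaction would be roughly this.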

What do you think about this approach? I assumed the simplest solution for now; maybe we could also include some logic around requisitions changing statuses? I would welcome any ideas and comments you have.

Best Regards,
Mateusz Kwiatkowski

Hi @mkwiatkowski,

Reworking the existing batch approval screen would probably be much more time-consuming than a data feed mechanism. Also, I don't like the idea of returning different responses based on params, so I agree that the second solution is better in this case.

Your plan sounds good to me overall, but I'm not sure what you meant by "some logic around requisitions changing statuses"?

Best,
Klaudia

I think the feed solution seems to be the easiest path SELV can take to build the batch approvals solution they need. For starters, I think it would be sufficient if the subscribers received the id of the requisition that was modified. Then they can use the ID to fetch the requisition and perform any logic they want.
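A rough sketch of what the subscriber side of this could look like (hypothetical names throughout; `fetch_requisition` stands in for an authenticated call back to the Requisition service, and the zone-keyed aggregate reflects the SELV use case mentioned earlier):

```python
def handle_notification(payload, fetch_requisition, aggregate_by_zone):
    """Hypothetical subscriber handler: the notification carries only the
    id of the modified requisition; the subscriber fetches the full record
    and applies whatever aggregation logic it needs."""
    requisition = fetch_requisition(payload["requisitionId"])
    zone = requisition["geographicZone"]
    aggregate_by_zone.setdefault(zone, []).append(requisition["id"])
    return requisition
```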

I suppose requisitions still don’t support service-level tokens, so you might need to include this in estimations as well.

Best,
Sebastian

Hi everyone,

thanks for your responses.
@Klaudia_Palkowska I was just thinking that, in the future, such a mechanism could allow other services to subscribe to the Requisition service and specify which status changes they want to be notified about, e.g. one service might want to be notified only when requisitions are released, while another might want to be informed about approvable requisitions (those going into the authorized and in-approval statuses).
I agree with you, @Sebastian_Brudzinski. We could introduce both service-level tokens and this data feed mechanism to the Requisition service.

Best Regards,
Mateusz Kwiatkowski

Thanks @mkwiatkowski and @Sebastian_Brudzinski,

This sounds good.

Let's unwrap this. When I say feed, I think of such things as RSS or ATOM. When you describe subscribers, that implies the more complex messaging patterns.

I'd step back from messaging and focus on feeds instead; after all, OpenLMIS v1 and v2 had (underutilized) feeds for RequisitionStatus and RequisitionDetails. As a starting point, we can look at a couple of libraries that provide ATOM feeds: atomfeed and simplefeed, used in Bahmni and others. There's work to create the feed, secure it, and build contract tests in the Requisition Service, and the new microservice would need to properly identify itself, read the feed (including pagination and the since-last-read concept) on a schedule, and provide some visibility into this activity to detect if something isn't going right or is delayed.
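The consumer side of the pagination and since-last-read behavior could be sketched roughly like this (the page shape, monotonic entry ids, and `get_page` are assumptions for illustration, not the atomfeed/simplefeed API):

```python
def read_new_entries(get_page, last_read_id):
    """Collect feed entries newer than last_read_id, oldest first.
    get_page(n) stands in for an HTTP GET of one feed page, returning
    e.g. {"entries": [...], "last": bool}; entry ids are assumed to be
    monotonically increasing."""
    new_entries, page_no = [], 0
    while True:
        page = get_page(page_no)
        for entry in page["entries"]:
            if entry["id"] > last_read_id:
                new_entries.append(entry)
        if page["last"]:
            break
        page_no += 1
    return new_entries
```

After a successful run, the consumer would persist the highest id it processed as the new since-last-read marker, so a crash or delay only causes re-reads, not losses.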

That's a good start, though we've experimented with other approaches; notably, Project Casper experimented with Change Data Capture using Debezium (which needs Kafka). This made it very easy to get data out of a database (no feed generation, no service-level tokens); however, we currently only include Kafka in our separate reporting stack. To sketch out this architecture, it'd look roughly like: Requisition db (with CDC) -> Kafka topic -> Requisition Batch (batch on write). Heavier components of the reporting stack, notably NiFi, aren't required. There are pros and cons here, perhaps the biggest pro being that Kafka middleware could be a stepping stone to more performant inter-service communication and caching. I'll leave it at that for now and invite @Chongsun_Ahn to comment, especially with thoughts on how working with Debezium went in Project Casper. If there's interest in this, I'd support investigating moving Kafka and Debezium into OpenLMIS proper.
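For a rough idea of the "batch on write" end of that pipeline, here is a sketch of how the Requisition Batch consumer might apply a Debezium-style change event from the Kafka topic. The envelope fields ("op", "before", "after") follow Debezium's general change-event format; the table columns and the in-memory view are illustrative only:

```python
import json

def apply_change_event(raw_event, batch_view):
    """Apply one Debezium-style change event to a materialized view.
    op "c"/"u" = create/update (use the "after" row image);
    op "d" = delete (use the "before" row image)."""
    payload = json.loads(raw_event)["payload"]
    if payload["op"] in ("c", "u"):
        row = payload["after"]
        batch_view[row["id"]] = row      # "batch on write": keep view current
    elif payload["op"] == "d":
        batch_view.pop(payload["before"]["id"], None)
    return batch_view
```

In practice this logic would live in a Kafka consumer loop and write to the Requisition Batch database rather than a dict, but the event-handling shape would be similar.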

Best,
Josh

Hey everyone,

For Project Casper, I would say the CDC/Debezium into Kafka part of the project was pretty straightforward and fairly easy to set up. The main effort would be in the database design for the Requisition Batch component/service and how to connect it to a corresponding Kafka topic.

If there's interest, I can explain the technical design and details of Project Casper in a meeting, so that we can better understand how to adapt it for a Requisition batch data sync.

Shalom,
Chongsun

Hi everyone,

I apologize for the late response. Thank you, @joshzamor and @Chongsun_Ahn, for all the ideas and advice you've given. I'm happy to hear that we already have some experience with Kafka and Debezium; we've used them in other projects, so they should be relatively easy to set up for the Requisition service. Whichever path we take, I was aware that most of the work would go into the new Batch Requisition service, and I hope that building what is essentially a single view out of the requisition data should not be very complicated.
I'm interested in a meeting on a similarly designed system. If you have some free time this week or next, @Chongsun_Ahn, please feel free to pick a time and date; I'm pretty flexible with my schedule.

Best Regards,
Mateusz

Hey @mkwiatkowski,

I’ve been out on vacation this week, so perhaps we can try for early next week. How about 8am Seattle time (17:00 Gdansk time) on Monday the 24th? That should give me enough time to prepare.

Shalom,
Chongsun

Hi @Chongsun_Ahn,

thank you very much for the response. Monday next week works for me; let me send out some invites.

Best
Mateusz

@mkwiatkowski Please include me on the invite. I am interested in hearing how the overhead and complexity of Debezium/Kafka compare to a feed approach. Once you started mentioning the feed idea, my thinking went to Bahmni as well (as Josh mentioned) and their use of atomfeed to provide decoupled communication between, in their case, separate systems.

Hi @ibewes,
sure, I've sent you an invite; you should have received an email.