OLMIS Server Service Discovery

Hello everybody.

Recently I have been working on server-side service discovery (#841). We have chosen Consul to achieve this functionality. In addition, I included Registrator, so we don’t have to take care of service registration (or deregistration) ourselves. The good news is that they seem to work flawlessly with our current configuration, so we can hopefully keep jwilder’s nginx-proxy.

Consul and Registrator services have been added to the docker-compose files for the requisition and example services, so whenever you run those, the Consul UI (and API) should be accessible on port 8500.
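Roughly, the compose additions look like this (a simplified sketch, not the exact PoC file; image tags and service names are illustrative):

```yaml
# Hypothetical excerpt; the actual compose file on the branch may differ.
consul:
  image: consul
  command: agent -dev -client 0.0.0.0 -ui
  ports:
    - "8500:8500"   # Consul UI and HTTP API

registrator:
  image: gliderlabs/registrator
  command: consul://consul:8500
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock  # watch container start/stop events
  links:
    - consul
```

Registrator watches the Docker socket and registers/deregisters containers in Consul automatically as they come and go.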

I have also added an example environment configuration that allows multiple services to be accessed over the same host and port. It is available here: https://github.com/OpenLMIS/openlmis-blue/tree/poc-service-discovery

A little explanation of what’s happening there:

In order to access services over a single host and port, I had to override the nginx configuration template so that we can point to multiple upstreams.

I have introduced a VIRTUAL_LOCATION variable, which represents the location of the service, as in http://host/VIRTUAL_LOCATION. (Nested locations are not supported yet, but it wouldn’t be a huge effort to introduce them if needed.) It needs to be provided for each service that also has VIRTUAL_HOST set (again, we could make it optional and/or fall back to container names, but I didn’t feel we really need that kind of feature in a PoC). Nginx-proxy also gets passed one more volume, which contains our modified template.

The template itself is a copy of the original, with different upstream generation (upstreams are divided by VIRTUAL_LOCATION) and adjusted routing.
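The generated output looks roughly like this (a simplified sketch of what the modified template emits, with illustrative container addresses):

```nginx
# One upstream per VIRTUAL_LOCATION value, generated from container metadata.
upstream requisition {
    server 172.17.0.5:8080;
}
upstream example {
    server 172.17.0.6:8080;
}

server {
    listen 80;
    server_name localhost;

    # Each VIRTUAL_LOCATION becomes a prefix location routed to its upstream.
    location /requisition {
        proxy_pass http://requisition;
    }
    location /example {
        proxy_pass http://example;
    }
}
```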

So, running with the provided example configuration, we would access requisition and example services under localhost/requisition and localhost/example respectively.

Let me know what you think about this.

Cheers,

Paweł

Hi Pawel,

This sounds great, and I’m looking forward to seeing it in action. I do think we’d like “nested locations”: in API discussions we’ve talked about having an endpoint such as /facility/{id} map to the reference data service and /facility/{id}/requisitions map to the requisition service. Allowing ourselves this flexibility also helps hide from the client which endpoints exist today, which is good practice should the back-end service demarcations shift in the future. You mentioned that this shouldn’t be a huge effort, so yes please, let’s do that. Thanks.
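For reference, nginx’s location matching should handle this kind of nesting; sketched with illustrative upstream names, it could look like:

```nginx
location /facility/ {
    # /facility/{id} and anything else under /facility/ goes to reference data.
    proxy_pass http://referencedata;
}
location ~ ^/facility/[^/]+/requisitions {
    # A matching regex location takes precedence over the prefix match above,
    # so /facility/{id}/requisitions is routed to the requisition service.
    proxy_pass http://requisition;
}
```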

Best,
Josh


Hi Josh,

Thanks for the input! It looks like we will need to enhance the nginx Consul template to handle "nested locations/endpoints". I've taken the liberty of creating a ticket for that ([OLMIS-982](https://openlmis.atlassian.net/browse/OLMIS-982)). Feel free to add any info you deem necessary.

To have a "real life" situation, I think it would be best to tackle this after extracting the reference data service from the requisition service, which should happen soon.

Regards,

Chris

Hello Josh,

I am wondering how this should apply given our current Reference Data Service development. I got a little confused, so please clarify the concept some more, so we can make sure I understand everything correctly.

Is our final objective to have all services mapped under the same base path, with our proxy server redirecting each request to the proper service based directly on its API endpoints? (What if some API endpoints had the same paths in various services?) In that case, for inter-service communication, each service could just send a request to our host, like a user normally would, and have it routed to the proper recipient. And if we decide not to expose some services’ APIs publicly, will we have a separate way of communicating to handle those cases?

Regards,

Pawel


Hi Pawel,

It sounds like you have it right. The knowledge of where a service is (IP, port, etc.) should lie within this service discovery component. A client, whether that’s a web browser or another peer microservice, shouldn’t have any knowledge about where a service is. Instead, any client has a URI, and it’s the job of service discovery + proxy to keep track of how to route the URI to a particular service. This decouples the client from having to know where a service is, or even that there are different services fulfilling its request(s). Allowing “hierarchical” URI mappings to different services helps achieve this decoupling. Otherwise, as a client, I might start relying on the knowledge that /requisitions always maps to the requisition service and /facilities always maps to the reference data service, an assumption I shouldn’t make, for the sake of flexibility.

If we have two services both attempting to fulfill requests at the same URI, this feels like something that Consul, or a tool at its level, should be able to report on at boot. Can Consul do this for us?

It sounds like you’re on the right track conceptually, do you think we’re close to achieving this on openlmis-blue?

Thanks,
Josh


Hello Josh.

Two things:

  1. I have implemented querying Consul records and configuring nginx based on the received response. It is published here (on a separate branch for now): https://github.com/OpenLMIS/openlmis-blue/commit/2b0dcb0b9e7a4e34cb9c6012c211827efe3418b0

Quick explanation over the changes:

I replaced jwilder’s nginx-proxy image with the official nginx image. To contact our service registry, we use Consul Template, a tool made by the creators of Consul that queries its API and lets us easily configure nginx based on the response. It is worth mentioning that currently (I mean on the branch) we’re using a consul-template binary that was committed to our repository. I’m aware we might want to have this downloaded automatically instead, but that would mean we either need to derive the nginx container to do this during build, or put these operations into our startup script (so we’d download the binary, or at least check for it, on container startup). I’m not sure if it is worth it, so I’d like to know your opinion on that.

Until we introduce those nested locations, I have kept Registrator, slightly refactoring the docker-compose file. For the moment, the resulting routing is exactly the same as before.

  2. Yet another question, or maybe clarification, about putting all of those endpoints in one place:

The most obvious (or maybe only) place to keep metadata about service routes is Consul’s KV store. I think that if we move service registration from Registrator into each service, we can effectively use those values to configure our routing. However, there’s a problem with conflicting paths: a user is able to check whether a route is already taken and decide to leave it untouched, but we’re not really able to prevent that on our side, since Consul would just accept any value overrides a user makes. We could possibly work around this by creating infinite locks on those KV nodes (if that’s even possible; otherwise we could renew them all the time), but I think this could create more problems than it solves.

The second, somewhat connected concern is about automatic deregistration: if any service failed to deregister itself, the KV store would be left with stale data. We could define some watches to check for this over time, but that would also require cooperation from the services. Are we okay with leaving this as it is, or should we accept one of the possible workarounds, flaws and all?
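Going back to point 1, the consul-template input that generates the nginx configuration looks roughly like this (a sketch; the actual template on the branch may differ):

```
{{range services}}
upstream {{.Name}} {
  {{range service .Name}}server {{.Address}}:{{.Port}};
  {{end}}
}
{{end}}

server {
  listen 80;
{{range services}}
  location /{{.Name}} {
    proxy_pass http://{{.Name}};
  }
{{end}}
}
```

consul-template re-renders this whenever the service catalog changes and can run a command (e.g. an nginx reload) after each render.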

Best,

Pawel


Hi Pawel,

Great to hear you’re making good progress on this.

For your first question, I’m not sure I understand what the different options for consul-template are here. It sounds, though, like this should be a Docker-based service (i.e. image and container), and so it wouldn’t be in Blue (the reference distribution) at all. Blue would simply use whichever Docker image had consul-template in its docker-compose file. Is this what we’re talking about?

On question #2, I think the answer is going to evolve. Right now, if someone on a team overwrites someone else’s endpoint, it’s something to be concerned about, but it’s not a typical scenario; it’s more of an edge case. I’d think we could write the code that actually registers an endpoint in one service and copy it to all the other services, so that we all agree to do the same thing: crash early if a service attempts to register an endpoint that’s already registered. We can move this into a shared library instead of copying it later. This feels similar to improperly deregistering a service, in that it should be an edge case. Let’s consider solving that problem, or potentially incorporating a technology that solves it for us, when we need to handle this edge case gracefully. I’m thinking how this is solved might change should we adopt Docker Swarm mode in the future.
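To make the crash-early idea concrete, a minimal sketch of the check each service would run at startup (all names here are hypothetical; `registered_routes` stands in for whatever is read back from Consul’s KV store):

```python
class RouteConflictError(RuntimeError):
    """Raised when a service tries to claim a route another service owns."""


def register_route(registered_routes, route, service_name):
    """Crash early if `route` is already claimed by a different service.

    `registered_routes` is a dict of route -> owning service, e.g. the
    decoded contents of Consul's KV store at startup.
    """
    owner = registered_routes.get(route)
    if owner is not None and owner != service_name:
        raise RouteConflictError(
            f"route {route!r} is already registered to {owner!r}"
        )
    registered_routes[route] = service_name
    return registered_routes


# Example: a second service claiming /facilities fails fast at startup.
routes = {}
register_route(routes, "/requisitions", "requisition")
register_route(routes, "/facilities", "referencedata")
try:
    register_route(routes, "/facilities", "requisition")
except RouteConflictError as err:
    print("startup aborted:", err)
```

Copied into each service (and later moved to a shared library), this makes a route conflict a loud boot failure rather than a silent override.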

Does this answer your questions?

I think last time we talked about this, we said we’d review the overall service discovery plan (such as potentially ditching nginx-proxy) together on a call. It sounds like we’re ready to do that, right?

Best,
Josh


Hi Josh,

Thanks for your response.

For the first question - yes, we’re talking about how to handle those additional files, such as the consul-template binary and related tooling.

Consul-template is essentially an automation for pulling Consul’s data into the nginx container. In a nutshell, it is an executable that periodically queries Consul for available services; whenever a change is detected, it generates a new proxy configuration file based on that data and then reloads the proxy settings. It is designed to live inside the nginx container rather than run as a standalone service. I think we could possibly configure it that way to achieve some kind of separation, but it is not a typical use case - the nginx container would still have to retrieve the generated configuration file from the consul-template container somehow, so we’d also need to implement additional logic to handle that scenario.
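To make that mechanism concrete, here is a minimal sketch of what such a consul-template template for nginx could look like. This is purely illustrative, not the actual template from the branch; it assumes one upstream and one location per Consul service name, and a real template would also need to filter out non-HTTP services such as Consul itself.

```
# nginx.ctmpl sketch (illustrative). consul-template's `services` function
# lists registered services; `service <name>` lists healthy instances.
{{range services}}
upstream {{.Name}} {
  {{range service .Name}}server {{.Address}}:{{.Port}};
  {{end}}
}
{{end}}

server {
  listen 80;
  {{range services}}
  location /{{.Name}}/ {
    proxy_pass http://{{.Name}}/;
  }
  {{end}}
}
```

consul-template would render this to a real nginx config file and run a reload command (e.g. `nginx -s reload`) whenever the rendered output changes.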

Either way, if we don’t want to store all those files in Blue, we’d at least need our own derived nginx image. It would either contain consul-template itself, or implement the logic to communicate with a consul-template container, if we decide to move it outside. So I’d suggest going with the first option; if we later decide we really need the separation, we can split these into separate containers then. What do you think?
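For the first option, the derived image could be roughly as simple as the following sketch. Everything here is an assumption for illustration: the consul-template version, the flag names (which vary across consul-template releases), and the paths; a real image would also need an entrypoint that starts nginx alongside consul-template.

```dockerfile
# Hypothetical derived nginx image bundling consul-template (a sketch).
FROM nginx:1.11

ENV CT_VERSION=0.18.5
ADD https://releases.hashicorp.com/consul-template/${CT_VERSION}/consul-template_${CT_VERSION}_linux_amd64.zip /tmp/ct.zip
RUN apt-get update && apt-get install -y unzip \
 && unzip /tmp/ct.zip -d /usr/local/bin \
 && rm /tmp/ct.zip

COPY nginx.ctmpl /etc/consul-templates/nginx.ctmpl

# Render nginx config from Consul and reload nginx when services change.
# NOTE: a real entrypoint must also start the nginx master process, or
# `nginx -s reload` has nothing to signal.
CMD ["consul-template", \
     "-consul-addr", "consul:8500", \
     "-template", \
     "/etc/consul-templates/nginx.ctmpl:/etc/nginx/conf.d/default.conf:nginx -s reload"]
```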

On the last part, I think we can absolutely discuss the suggested service discovery setup. Just let me know when works for you.

Regards,

Paweł

···

On Monday, 21 November 2016 09:22:12 UTC+1, Josh Zamor wrote:

Hi Pawel,

Great to hear you’re making good progress on this.

For your first question, I’m not sure I understand what the different options for consul-template are here. It sounds, though, like this should be a docker-based service (i.e. an image and container), and so it wouldn’t be in Blue (the reference distribution) at all. Blue would simply use whichever docker image had consul-template in its docker-compose file. Is this what we’re talking about?

On question #2, I think the answer is going to evolve. Right now, if someone on a team overwrites someone else’s endpoint, it’s something to be concerned about, but it’s not a typical scenario - it’s more of an edge case. I’d think we could write the code that actually registers an endpoint in a service, and copy it to all the other services so that we all agree to do the same thing: crash early if a service attempts to register an endpoint that’s already registered. We can move this into a shared library, instead of copying it, later. This feels similar to improperly deregistering a service, in that it should be an edge case. Let’s consider solving that problem, or potentially incorporating a technology that can solve it for us, when we need to handle this edge case gracefully. I’m thinking how this is solved might change should we adopt Docker Swarm Mode in the future.

Does this answer your questions?

I think last time we talked about this, we said we’d review the overall service discovery plan, such as potentially ditching nginx-proxy, together on a call. It sounds like we’re ready to do that, right?

Best,
Josh

On Friday, November 18, 2016 at 10:44:56 AM UTC-8, Paweł Nawrocki wrote:

Hello Josh.

Two things:

  1. I have implemented querying Consul records and configuring nginx based on the received response. It is published here (on a separate branch for now): https://github.com/OpenLMIS/openlmis-blue/commit/2b0dcb0b9e7a4e34cb9c6012c211827efe3418b0

Quick explanation of the changes:

I replaced jwilder’s nginx-proxy image with the official nginx image. To contact our service registry, we use Consul Template, a tool made by the creators of Consul that queries its API and lets us easily configure nginx based on the response. It is worth mentioning that currently (I mean on the branch) we’re using a consul-template binary that was committed to our repository. I’m aware we might want to download it automatically instead, but that would mean we either need to derive our own nginx image that does this during the build, or put these operations into our startup script (so we’d download the binary, or at least check for it, on container startup). I’m not sure if it is worth it, so I’d like to know your opinion on that.

Until we introduce those nested locations, I have kept Registrator, slightly refactoring the docker-compose file. For the moment, the resulting routing is exactly the same as before.

  2. Yet another question, or maybe a clarification, about putting all of those endpoints in one place:

The most obvious (or maybe only) place to keep metadata about service routes is Consul’s KV store. I think that if we move service registration from Registrator into each service, we can effectively use those values to configure our routing. However, there’s a problem with conflicting paths - a user is able to check whether a route is already taken and decide to leave it untouched, but we’re not really able to prevent overrides on our side - Consul will simply accept any value override a user makes. We could possibly work around this by creating infinite locks on those KV nodes (if that’s even possible - otherwise we could renew them continuously), but I think this could cause more problems than it solves.

The second, somewhat related concern is about automatic deregistration - if any service fails to deregister itself, the KV store is left with stale data. We could define watches that check for this over time, but that would also require cooperation from the services. Are we okay with leaving this as it is, or should we accept one of the possible workarounds despite its flaws?
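One way to get crash-early registration without locks is Consul’s check-and-set write: a KV PUT with `cas=0` (PUT /v1/kv/&lt;key&gt;?cas=0) succeeds only if the key does not already exist. Below is a minimal Python sketch of that idea; the helper names are made up for illustration, and the in-memory `fake_kv_put` merely stands in for a real Consul client.

```python
class RouteConflictError(Exception):
    """Raised when another service already owns the requested route."""


def register_route(kv_put, route, service_name):
    """Register `route` for `service_name`, crashing early on conflict.

    `kv_put(key, value, cas)` mimics Consul's KV check-and-set write:
    with cas=0 it returns True only when the key did not already exist,
    so a second registration of the same route fails loudly instead of
    silently overwriting the first one.
    """
    key = "routes/" + route.strip("/")
    if not kv_put(key, service_name, cas=0):
        raise RouteConflictError("route %r is already registered" % route)
    return key


# In-memory stand-in for Consul's KV store, for demonstration only.
_store = {}

def fake_kv_put(key, value, cas=None):
    if cas == 0 and key in _store:
        return False
    _store[key] = value
    return True


print(register_route(fake_kv_put, "/requisition", "requisition"))  # prints routes/requisition
```

A service would call this once at startup and exit if it raises, which is the “crash early” behavior discussed above; it does not solve stale entries from failed deregistration, only conflicting writes.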

Best,

Pawel

On Tuesday, 13 September 2016 08:01:03 UTC+2, Josh Zamor wrote:

Hi Pawel,

It sounds like you have it right. The knowledge of where a service is (IP, port, etc.) should lie within this service discovery component. A client, whether that’s a web browser or another peer microservice, shouldn’t have any knowledge of where a service is. Instead, a client has a URI, and it’s the job of the service discovery + proxy to keep track of how to route that URI to a particular service. This decouples the client from having to know where a service is, or even that there are different services fulfilling its request(s). Allowing “hierarchical” URI mappings to different services helps achieve this decoupling. Otherwise, as a client I might start relying on the knowledge that /requisitions always maps to the requisition service and /facilities always maps to the reference data service - an assumption I shouldn’t make, for the sake of flexibility.
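As a sketch of how such nested locations could be expressed in plain nginx configuration (upstream names here are illustrative, not from the branch): nginx checks regex locations after prefix locations and lets a matching regex win, so a more specific nested path can route to a different service than its parent prefix.

```nginx
# Illustrative only: route the nested path to the requisition service,
# everything else under /facility/ to the reference data service.
location ~ ^/facility/[^/]+/requisitions {
    proxy_pass http://requisition;
}

location /facility/ {
    proxy_pass http://referencedata;
}
```

With this, GET /facility/123 lands on the reference data upstream while GET /facility/123/requisitions lands on the requisition upstream, which hides the service demarcation from the client.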

If we have two services both attempting to fulfill requests at the same URI, this feels like something that Consul, or a tool at its level, should be able to report on at boot. Can Consul do this for us?

It sounds like you’re on the right track conceptually; do you think we’re close to achieving this on openlmis-blue?

Thanks,
Josh

On Monday, September 5, 2016 at 5:28:45 AM UTC-7, Paweł Nawrocki wrote:

Hello Josh,

I am wondering how this should apply, keeping in mind our current Reference Data Service development, and I got a little confused, so please clarify the concept some more so we can make sure I understand everything correctly.

Is our final objective to have all the services mapped under the same base path, with our proxy server redirecting requests to the proper service based directly on their API endpoints (what if some API endpoints had the same paths in different services?), so that for inter-service communication each service could just send a request to our host, like a user normally would, and have it routed to the proper recipient? And this way, if we decide not to expose some services’ APIs to the public, will we have a separate way of communicating to handle those cases?

Regards,

Pawel


Then it sounds like we’re on the same page - one or two images for nginx and consul-template in their own repos (not Blue). If we can discuss this in the showcase, let’s do that, or perhaps directly after.

Thanks

