Jenkins pipeline idea for review

Hi all community members,

I have made a prototype of a Jenkins pipeline, using the Requisition service as an example.

Other services could follow the same idea.

First, the happy path looks like this:

As the image shows:

  1. A code change is pushed to GitHub, which triggers the “requisition-service” job. This job runs unit/integration tests and generates a new Docker image, test reports, etc.

  2. When the first job finishes, it triggers 3 downstream jobs concurrently. They run contract tests, ERD generation, and Sonar checks. Each of them generates reports, docs, etc.

  3. If all 3 of those jobs succeed, the deploy-to-test-env job is triggered. It deploys the Requisition service to an AWS server. QAs can look at this pipeline and know when they can start testing a newly pushed feature.

  4. After the deploy to test is done, the deploy-to-UAT-env job becomes available for manual trigger.
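For reference, the four steps above could be sketched as a single Jenkinsfile in Declarative Pipeline syntax (which may postdate the current freestyle-job setup). The shell script names below are placeholders, not the actual OpenLMIS build commands:

```groovy
pipeline {
    agent any
    stages {
        // Step 1: triggered by a GitHub push; build, test, package
        stage('Build requisition-service') {
            steps {
                sh './gradlew clean build'                   // unit + integration tests
                sh 'docker build -t openlmis/requisition .'  // new Docker image
            }
        }
        // Step 2: three downstream checks run concurrently
        stage('Downstream checks') {
            parallel {
                stage('Contract tests') { steps { sh './run-contract-tests.sh' } }
                stage('ERD generation') { steps { sh './generate-erd.sh' } }
                stage('Sonar checks')   { steps { sh './gradlew sonarqube' } }
            }
        }
        // Step 3: runs only if all three checks above succeeded
        stage('Deploy to test env') {
            steps { sh './deploy.sh test' }
        }
        // Step 4: manual gate before UAT
        stage('Deploy to UAT env') {
            input { message 'Deploy to UAT?' }
            steps { sh './deploy.sh uat' }
        }
    }
}
```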

And then the not-so-happy path:

The above image shows that if any upstream job is unsuccessful, the downstream job will not be triggered automatically.

But the downstream job will still be available for manual trigger, to allow some flexibility.
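With freestyle jobs, this behavior falls out of the standard upstream trigger: automatic triggering is gated on the upstream result, while the job itself can always be started by hand via “Build Now”. A sketch using the Jenkins Job DSL plugin (job names are illustrative):

```groovy
// Deploy job is triggered automatically only when the upstream
// checks finish with result SUCCESS; on failure it is skipped,
// but it remains available for manual trigger at any time.
job('requisition-deploy-to-test') {
    triggers {
        upstream('requisition-contract-tests', 'SUCCESS')
    }
    steps {
        shell('./deploy.sh test')   // placeholder deploy step
    }
}
```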

This takes the Requisition service as an example; other services can follow a similar idea.

Please let me know if you have questions or suggestions.

Thanks and regards,

Pengfei

Hey everyone,

I have set up the contract test job in Jenkins: http://build.openlmis.org/view/Requisitoin/

The deploy job is just a placeholder for now.


Hello,

The pipeline looks fine to me at first glance. One thing to note is that we should also consider triggering the requisition build after changes to either auth or notification (and other service dependencies when those are added).
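If we went that way, one possible sketch (Job DSL, with hypothetical job names) would be to add auth and notification as upstream triggers of the requisition build:

```groovy
// Rebuild requisition-service whenever one of its service
// dependencies publishes a successful build.
job('requisition-service') {
    triggers {
        upstream('auth-service, notification-service', 'SUCCESS')
    }
}
```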

Regards,

Paweł


  You received this message because you are subscribed to the Google Groups "OpenLMIS Dev" group.

  To unsubscribe from this group and stop receiving emails from it, send an email to openlmis-dev+unsubscribe@googlegroups.com.

  To post to this group, send email to openlmis-dev@googlegroups.com.

  To view this discussion on the web visit [https://groups.google.com/d/msgid/openlmis-dev/eb36081a-6b84-4907-a05e-2615f7798d32%40googlegroups.com](https://groups.google.com/d/msgid/openlmis-dev/eb36081a-6b84-4907-a05e-2615f7798d32%40googlegroups.com?utm_medium=email&utm_source=footer).

  For more options, visit [https://groups.google.com/d/optout](https://groups.google.com/d/optout).


Hi Paweł,

I think your suggestion is based on the intention to detect service integration issues early, correct?

I have addressed that in the current design:

Basically, suppose Service A and Service B are both involved in a business scenario called X, and there is a set of contract tests written for X.

When a code change is pushed to A, its whole pipeline starts churning, which includes running the contract tests for X.

When a code change is pushed to B, the same thing happens for B's pipeline.

So we can make sure that the contract tests for X are run whenever either A or B has a code change.

I think your suggestion would achieve the same purpose, but it also means triggering multiple pipelines when one service changes, which adds a bit of overhead to Jenkins.
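In other words, both service pipelines fan in to the same contract-test job, so the contracts for X run on any change to either side. A minimal Job DSL sketch (job and script names are made up for illustration):

```groovy
// Each service's main job triggers the shared contract-test job
// for scenario X, so a change to either A or B runs the contracts.
['service-a', 'service-b'].each { serviceJob ->
    job(serviceJob) {
        publishers {
            downstream('contract-tests-x', 'SUCCESS')
        }
    }
}

job('contract-tests-x') {
    steps {
        shell('./run-contract-tests.sh x')   // placeholder test runner
    }
}
```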

