Code quality and enforcing automated checks

To date we’ve added the basics to the Services for automated checking of style rules (Checkstyle) as well as code smells (PMD). As we’ve all seen, we have collected a list of warnings that we fix as we can. I’d like to propose that we break the build (gradle build) if a Checkstyle or PMD rule is broken. While this would entail fixing these issues all at once, longer term it would ensure that we’re paying attention to the rulesets we have in place, and it would help speed the adoption of peer code review, since developers and reviewers alike could start to rely on automated checks for maintaining basic standards.

The proposal is this:

  • break the build if any check (gradle clean check) fails - currently PMD & Checkstyle (see the sketch after this list)
  • fix all code issues with broken rules across all Services as one concerted push
  • hold a single discussion thread here to debate which PMD rules we’d like to disable or adopt as a starting point, in conjunction with the workload of the point above
  • Ongoing discussion of appropriate checks is expected, of course. Ideally, as we formalize our code-review process by adopting a code-review checklist, we’d continually move as many trivial items to automated checks as possible.
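
For reference, a minimal sketch of what the Gradle side of this could look like (Groovy DSL; the plugin setup and config file paths below are assumptions, not necessarily how our builds are currently laid out):

    apply plugin: 'checkstyle'
    apply plugin: 'pmd'

    checkstyle {
        configFile = file('config/checkstyle/checkstyle.xml')  // assumed path
        ignoreFailures = false  // violations at error severity fail gradle check
    }

    pmd {
        ruleSetFiles = files('config/pmd/ruleset.xml')  // assumed path
        ignoreFailures = false  // any PMD violation fails the build
    }

With ignoreFailures set to false on both plugins, a violation makes gradle clean check (and therefore gradle build) fail.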

Best,
Josh

+1 to doing this. It’s still early enough that it’s feasible to get on the right track here. (And it becomes exponentially harder later.)

-Darius

I think we already do fail the build in case of a PMD error. Concerning Checkstyle - each rule has its own severity level (info, warning, or error), and the build currently fails only for violations at error severity. The severity of every single rule can be adjusted in our ruleset if we don’t like the default value. It looks like we currently have Checkstyle globally set to report all problems as warnings (thus no build failures). So we can either globally change the severity to error for all rules, or select which rules should actually fail the build and which should only generate a warning.
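
To illustrate the global option, here is a sketch of how it could be wired from the build script (the property name is hypothetical, and it assumes the ruleset reads its default severity from a property):

    checkstyle {
        // Severity itself lives in checkstyle.xml. If the top-level Checker module
        // reads it from a property, e.g.
        //   <property name="severity" value="${checkstyleSeverity}"/>
        // then the build script can flip it globally, while individual rules can
        // still override it with their own severity setting. The property name
        // checkstyleSeverity is hypothetical, not something already in our ruleset.
        configProperties = [checkstyleSeverity: 'error']
    }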

By the way, since we are on the Checkstyle/PMD topic: we currently run the plugin checks on both the production and test code. While there may be rules that make sense for the production code, they may not be appropriate for the test code. If we want to keep running the checks on the test code, I’ve proposed separating the rulesets used for the production and test code (https://openlmis.atlassian.net/browse/OLMIS-827). A perfect example is the “Too Many Methods in Class” PMD rule. It makes sense for the production code, to avoid the “God object” anti-pattern, but for the test code it probably makes no sense to separate the code testing the same component into several test files.
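
Roughly what that separation could look like on the Gradle side (a sketch only; the test ruleset file names are assumptions and don’t exist yet):

    // The checkstyle and pmd plugins create one task per source set,
    // so main and test code can each point at their own ruleset.
    checkstyleMain.configFile = file('config/checkstyle/checkstyle.xml')
    checkstyleTest.configFile = file('config/checkstyle/checkstyle-test.xml')

    pmdMain.ruleSetFiles = files('config/pmd/ruleset.xml')
    pmdTest.ruleSetFiles = files('config/pmd/ruleset-test.xml')  // e.g. a copy without the TooManyMethods rule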

Best regards,
Sebastian Brudzinski.

+1 for setting the severity to error by default.
It could be annoying for the first couple of days, but as time goes by the rules that don’t fit very well will be ruled out.

I think this “strict at first, loosen as needed” approach could produce a more desirable result than a “loose at first, strengthen as needed” approach.

As for the “Too Many Methods” PMD rule, I think it’s a valid point that some test files have a legitimate need for more methods than production classes.
On the other hand, when a test file contains many test cases, it can be beneficial to group them by scenario and put them in separate, smaller files.
