Improving your source code quality

One reason I work on Open Source projects such as OpenCover is so that I can try things out, experiment if you wish; sometimes it's TDD techniques or a new mocking framework, and sometimes it's tooling. Some of these experiments were successes and some were successful failures; my experiment in using SpecFlow for unit testing was interesting, but I'll never do that again, even though my knowledge of what I can do in SpecFlow has greatly improved.

Tools help us produce better software: the better our tools, and the more we know about how to use them, the better developers we become. But even if you have the best tools and splashed out many $$$ to access them, if you don't learn how to use them properly, or at least grasp the basics, you might as well save your money. The thing is, lack of money is a real problem in the Open Source world. We devote our own time and usurp our work laptops to access those expensive IDEs and additional tooling, but can we really tell our partners we have personally spent thousands on software or infrastructure to help make better code that we are giving away for free? Thankfully we .NET developers now have access to Visual Studio Community Edition, and we may be able to get away with the odd $100 here and there, but code quality tools are probably at the more expensive end of the scale and out of reach for most individuals. There are, however, a few good code quality tools available to us open source developers, free in most cases, and they are relatively easy to set up.


Coverity was the first quality metric tool we integrated into the OpenCover pipeline; it handles C#, C++ and many other languages. This tool was suggested some time ago by one of the OpenCover contributors, and we addressed the more serious issues he found at the time, but it took a little while before we got round to integrating it into the pipeline. Integration took about a day of tinkering locally and then building the scripts so that it would run on AppVeyor; I now ask myself why we waited. AppVeyor had already preinstalled the Coverity package onto their images and added it to the path, so it was quite a simple task. Once we had finally succeeded in uploading our first scan for analysis, we got a clean dashboard and the ability to configure code exclusions and manage the issues.

As you can see, a number of issues were found and the team rallied around to fix them. Only a few defects were dismissed as false positives, and on only one occasion did fixing one defect introduce another; we are, after all, only human. We now have it scheduled to run once a week to keep us honest.

You can also run Coverity locally on your machine and upload the results to their site (open source projects only). I'll describe the steps I used; there are some NuGet packages around to help you if you wish to use them, but I didn't really feel the need.

Installation steps:

  • create an account on Coverity and provide some details for your project. You can't view the results for free anywhere other than the portal, so you might as well do this.
  • download the package for your platform, unblock and unpack.


Running the build:

  • this is very simple: you just use the supplied tool cov-build.exe to run your build, which in our case is build.bat build-release-platforms-x64, e.g. from the build script
    <exec program="${coverity.exe}" commandline="--dir cov-int --encoding=UTF-8 build.bat build-release-platforms-x64" />
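Outside of NAnt, the same capture step can be sketched as a small shell script. The tool and build script names come from the step above, but the dry-run wrapper itself is my own illustration; it only prints the command it would run:

```shell
#!/bin/sh
# Dry-run sketch of the Coverity capture step: assembles the cov-build
# command line from the step above but prints it rather than executing it.
COV_BUILD="cov-build.exe"                         # assumed to be on PATH (AppVeyor preinstalls it)
INT_DIR="cov-int"                                 # Coverity's intermediate results directory
BUILD_CMD="build.bat build-release-platforms-x64" # the project's own build entry point

CAPTURE="$COV_BUILD --dir $INT_DIR --encoding=UTF-8 $BUILD_CMD"
printf '%s\n' "$CAPTURE"
```

Drop the final printf and run `$CAPTURE` directly if you want the real thing; cov-build intercepts the compiler calls made by the wrapped build and writes its findings into the --dir directory.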

Viewing the results:

  • you will need to upload your results; you can use a NuGet package for this, but I used curl instead, e.g. from my build script
    <exec program="${tools.folder}/7-Zip/7za.exe">
      <arg value="a" />
      <arg value="" />
      <arg value="cov-int" />
    </exec>

    <exec program="${curl.exe}" 
      commandline='--form token=${coverity.token} --insecure --form email=${} --form [email protected] --form version="${ci.buildNumber}" --form description="${ci.buildNumber}"' />
  • wait... sometimes your code gets analysed really quickly and sometimes it doesn't; there are a few restrictions on open source projects, such as frequency of submission and code size.
  • play...
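Pulled together, the pack-and-upload steps look roughly like the following dry-run script. The token, email, project name and version are placeholders, not values from the OpenCover build, and the scan.coverity.com endpoint is the one documented by Coverity Scan rather than something taken from the post:

```shell
#!/bin/sh
# Dry-run sketch of the Coverity Scan upload: prints the commands instead
# of running them. TOKEN, EMAIL, PROJECT and VERSION are placeholders.
TOKEN="your-coverity-token"
EMAIL="you@example.com"
PROJECT="YourProject"
VERSION="1.0.0"
ARCHIVE="cov-int.zip"

# Pack the cov-int directory produced by cov-build (7-Zip in the build script; zip also works).
PACK_CMD="7za.exe a $ARCHIVE cov-int"
printf '%s\n' "$PACK_CMD"

# Upload the archive to Coverity Scan for analysis.
UPLOAD_CMD="curl --form token=$TOKEN --form email=$EMAIL --form file=@$ARCHIVE --form version=$VERSION https://scan.coverity.com/builds?project=$PROJECT"
printf '%s\n' "$UPLOAD_CMD"
```

Remove the printf wrappers to run the commands for real; the multipart form fields mirror the ones in the NAnt snippet above.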

SonarSource has an Open Source offering called SonarQube and even offers integration with their own online dashboard. This integration is not currently available to those who require a Windows build platform, so until they have implemented their push feature there is probably going to be some sort of hosting outlay to make your results publicly accessible.

SonarQube is a bit more verbose/pedantic than Coverity and found 13 critical defects. All of these were OWASP-related issues due to the Console.WriteLine statements, but since OpenCover is a console application they will all be resolved as 'won't fix' or 'false positive'; I'm still trying to work out which is the best approach. In fact it would be easy to dismiss many of the issues found, as some of them are a matter of style; SonarQube does allow you to review all the rules and change their usage depending on how the team wishes to treat each rule. In hindsight, before we started fixing the defects found by Coverity, it might have been better to get both products working, to compare the output and see whether they found the same issues.

One rule that I habitually turn off is the "Literal suffixes should be upper case" rule. This is a 'minor' rule that tries to insist that I write my decimals and doubles as 0M and 0F rather than my preferred 0m and 0f; I just think the latter is easier to read. They do at least provide a reason for each rule, which in this case is, 'Using upper case literal suffixes removes the potential ambiguity between "1" (digit 1) and "l" (letter el) for declaring literals.' i.e. is it 0l or 01? That is fair enough for that case, but to then blanket-apply the rule is a bit excessive. There are some alternative developer fonts, e.g. top-10-programming-fonts, that you can use in your development environment that make it easier to distinguish 1 and l, if it is such an issue for you and you, like me, don't want to apply the rule.

I did find setting up SonarQube a lot trickier than Coverity, as I also had to self-host it; steps were available, just not in any one place that I could find. For initial testing I just used the in-memory database, but it has some caveats, and I have since experimented with a MySQL+SonarQube setup on Windows and Linux. The install steps I used for my initial Windows-hosted experiment follow.

Installation Steps:


*change to match your setup

  • run sonarqube-5.3\bin\windows-x86-32\startsonar.bat
  • open a Visual Studio 2013/2015 Developer Prompt and run the following
    \Projects\sonarqube-runner\MSBuild.SonarQube.Runner.exe begin /k:"opencover" /n:"opencover" /v:""
    msbuild main\OpenCover.sln /t:rebuild
    \Projects\sonarqube-runner\MSBuild.SonarQube.Runner.exe end
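The three runner commands above can be wrapped into a single dry-run script so the begin/build/end cycle is easy to see in order. The runner path and the "opencover" project key come from the steps above; the version string is a placeholder of my own:

```shell
#!/bin/sh
# Dry-run sketch of the SonarQube MSBuild runner cycle: begin, build, end.
# Only prints the commands; VERSION is a placeholder, the rest is from the post.
RUNNER='\Projects\sonarqube-runner\MSBuild.SonarQube.Runner.exe'
VERSION="1.0"

BEGIN_CMD="$RUNNER begin /k:opencover /n:opencover /v:$VERSION"  # start analysis session
BUILD_CMD='msbuild main\OpenCover.sln /t:rebuild'                # build under the runner's watch
END_CMD="$RUNNER end"                                            # finish and upload to the server

printf '%s\n' "$BEGIN_CMD" "$BUILD_CMD" "$END_CMD"
```

The begin step hooks the analysers into MSBuild, the rebuild produces the data, and the end step pushes the results to the SonarQube server configured for the runner.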

Access Results:

  • access the site at http://localhost:9000 using admin/admin
  • change the admin password if the setup is publicly accessible.
  • play...

We are still looking at how we host and integrate SonarQube into our pipeline, and we may look into using the C++ community plugin.


ReSharper is, IMO, one of the best productivity tools about for a .NET developer, and it also has some built-in code quality rules. We've been using ReSharper for some time now and often try to get to the ReSharper "green tick of approval" on our files; in doing so we probably preemptively reduced the number of issues Coverity and SonarQube detected when we ran those tools.

Update (25/01/2016)

I eventually got round to integrating SonarQube into the build pipeline; I used a headless Ubuntu VM and followed these instructions, remembering of course not to keep the default password. The results of these efforts can be found here.

Update (17/07/2016)

I have since migrated the information to; or nemo as it used to be known.