Integration testing with Gradle

Unit testing works out of the box with Gradle, but if you would like to have a separate set of integration tests, you need to do a small exercise. Actually, they don’t have to be integration tests at all. This guide shows you how to configure Gradle to take any kind of tests and run them independently from the others. I will use the Scala language here, but the same works for any JVM language.

The goal

We are about to define a new Gradle task named itest which will run only the tests implemented in a specific folder, “src/itest/scala”. The standard built-in test task will keep working without any change, running only the tests in the “src/test/scala” directory.

Standard Java/Scala Project

We will start with a standard Gradle Java or Scala project. The programming language doesn’t matter here. Typically the directory structure looks like this:

<project root>
  + src
    + main
      + scala
    + test
      + scala
  - build.gradle

Main source code (being tested) resides in “src/main/scala” and all unit tests are in “src/test/scala”.

Where to put integration test classes and how to name them?

We already know where our unit tests are. A good habit is to name them after the class they test, followed by a “Test” or “Spec” suffix. For example, if the tested class is named “Miracle”, then its unit tests should go to a class named “MiracleSpec” (or MiracleTest if you like). It’s just a convention, nothing more.

We will use the same principle for integration tests but we will put them inside “src/itest/scala” directory and use “ITest” or “ISpec” suffix. This is also a convention, but it allows us to run them separately from unit tests.

Why a special directory and also a special name suffix?

I recommend putting integration tests in a physically different directory and also using a different naming pattern, so that you can distinguish these tests from the rest of your code in many other situations.

For example, say you package the whole application into one big fat JAR and you want to run integration tests only. How would you do that? Some test runners support filtering by class/file name only, so you would use a “*ISpec” name pattern to achieve it.

Another example: it is very convenient to right-click a directory in your favourite IDE (IntelliJ IDEA, for example) and run all tests inside it. IDEA also allows you to run tests by providing a class name pattern, which is the reason why I like to use different suffixes for integration and unit tests.

Example project structure

Imagine a Scala project with one implementation class named Fujara (an awesome Slovak musical instrument). Its unit tests are in the FujaraSpec class and its integration tests in FujaraISpec. Often we need some data for integration tests (itest-data.xml) or a logging configuration (logback-test.xml) different from the main application logging configuration.

<project root>
  + src
    + itest
      + resources
        + com
          + buransky
            - itest-data.xml
      + scala
        + com
          + buransky
            - FujaraISpec.scala
    + main
      + resources
        - logback.xml
      + scala
        + com
          + buransky
            - Fujara.scala
    + test
      + scala
        + com
          + buransky
            - FujaraSpec.scala
  - build.gradle
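
For illustration, FujaraISpec could look something like the following minimal sketch. It assumes ScalaTest is on the test compile classpath (which itestCompile extends, see the build script below); the assertion itself is just an example.

package com.buransky

import org.scalatest.{FlatSpec, Matchers}

// Hypothetical integration test; the "ISpec" suffix and the package
// follow the conventions described above (assumes ScalaTest).
class FujaraISpec extends FlatSpec with Matchers {
  "Fujara" should "find its test data on the integration test classpath" in {
    // itest-data.xml comes from src/itest/resources/com/buransky
    val data = getClass.getResourceAsStream("/com/buransky/itest-data.xml")
    data should not be null
  }
}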

The build.gradle

I am using Gradle 2.4, but this solution has worked for older versions too. I am not going to provide a complete build script, only the parts relevant to this topic.

configurations {
  itestCompile.extendsFrom testCompile
  itestRuntime.extendsFrom testRuntime
}

sourceSets {
  itest {
    compileClasspath += main.output + test.output
    runtimeClasspath += main.output + test.output

    // You can add other directories to the classpath like this:
    //runtimeClasspath += files('src/itest/resources/com/buransky')

    // Use "java" if you don't use Scala as a programming language
    scala.srcDir file('src/itest/scala')
  }

  // This is just to trick IntelliJ IDEA into adding integration test
  // resources to the classpath when running integration tests from
  // the IDE. It is not a good solution but I don't know about
  // a better one.
  test {
    resources.srcDir file('src/itest/resources')
  }
}

task itest(type: Test) {
  testClassesDir = sourceSets.itest.output.classesDir
  classpath = sourceSets.itest.runtimeClasspath

  // This is not needed, but I like to see which tests have run
  testLogging {
    events "passed", "skipped", "failed"
  }
}

Run integration tests

Now we should be able to run the integration tests simply by running the “gradle itest” task. In our example it should run FujaraISpec only. To run the unit tests in FujaraSpec, execute “gradle test”.

Define other test types

If you would like to use the same principle for functional tests, performance tests, acceptance tests, or whatever tests, just copy&paste the code above and replace “itest” with “ftest”, “ptest”, “atest”, “xtest”, …

Build and release Scala/Java Gradle project in GitLab using Jenkins to Artifactory

I am going to show in detail how to regularly build your project and then how to make a release build. It involves the cooperation of a number of tools which I found tricky to set up properly; that’s why I wrote this.

The goal

I am about to show you how to achieve the following two scenarios. The first one is how to make a regular development (non-release) build:

  1. Implement something, commit and push it to GitLab.
  2. Trigger Jenkins build by a web hook from GitLab.
  3. Build, test, assemble and then publish binary JAR to Artifactory repository.

The second and more interesting goal is when you want to build a release version:

  1. Run a parametric Jenkins build that uses the Gradle release plugin to:
    1. Verify that the project meets certain criteria to be released.
    2. Create Git tag with the release version number.
    3. Modify Gradle project version to allow further development.
    4. Commit this change and push it to GitLab.
  2. Trigger another generic parametric Jenkins build to publish release artifact(s) to Artifactory.

The situation

I will demonstrate the process on a real Scala project which I build using Gradle. The build server is Jenkins. Binary artifacts are published to a server running the free version of Artifactory. The version control system is the free community edition of GitLab. I am sure that you can follow this guide for any Java application as well. For the clarity of this guide, let’s assume that your URLs are the following:

  • GitLab repository (SSH) = git@gitlab.local:com.buransky/release-example.git
  • Jenkins server = http://jenkins/
  • Artifactory server = http://artifactory/

Project structure

Nothing special is needed. I use a common directory structure:

<project root>
  + build (build output)
  + gradle (Gradle wrapper)
  + src (source code)
    + main
      + scala
    + test
      + scala
  - build.gradle
  - gradlew
  - gradlew.bat
  - settings.gradle

Gradle project

I use the Gradle wrapper, which is just a convenient tool to download and install Gradle itself if it is not installed on the machine. It is not required. But you do need to have these three files:

  • settings.gradle – common Gradle settings for multi-projects, not really required for us
  • gradle.properties – contains the group name, project name and version

build.gradle – the main Gradle project definition:

buildscript {
  repositories {
    maven { url '' }
  }
}

plugins {
  id 'scala'
  id 'maven'
  id 'net.researchgate.release' version '2.1.2'
}

group = group
version = version

release {
  preTagCommitMessage = '[Release]: '
  tagCommitMessage = '[Release]: creating tag '
  newVersionCommitMessage = '[Release]: new snapshot version '
  tagTemplate = 'v${version}'
}

Add the following to generate a JAR file with sources too:

task sourcesJar(type: Jar, dependsOn: classes) {
  classifier = 'sources'
  from sourceSets.main.allSource
}

artifacts {
  archives sourcesJar
  archives jar
}

Let’s test it. Run this from shell:

$ gradle assemble


Now you should have two JAR files in build/libs directory:

  • release-example-1.0.0-SNAPSHOT.jar
  • release-example-1.0.0-SNAPSHOT-sources.jar

Ok, so if this is working, let’s try to release it:

$ gradle release
> Building 0% > :release > :release-example:confirmReleaseVersion
??> This release version: [1.0.0]
:release-example:beforeReleaseBuild UP-TO-DATE
:release-example:compileJava UP-TO-DATE
:release-example:processResources UP-TO-DATE
:release-example:compileTestJava UP-TO-DATE
:release-example:afterReleaseBuild UP-TO-DATE
> Building 0% > :release > :release-example:updateVersion
??> Enter the next version (current one released as [1.0.0]): [1.0.1-SNAPSHOT]


Because I haven’t run the release task with the required parameters, the build is interactive and first asks me to enter (or confirm) the release version, which is 1.0.0. Then it asks me to enter the next working version, which the plugin automatically proposes to be 1.0.1-SNAPSHOT. I haven’t entered anything; I just confirmed the default values by pressing Enter.

Take a look at the Git history and you should see a tag named v1.0.0 in your local repository and also in GitLab. Also open the gradle.properties file and you should see that the version has been changed to version=1.0.1-SNAPSHOT.

The release task has a number of preconditions. For example, your working directory must not contain uncommitted changes, all your project dependencies must be release versions (they cannot be snapshots), and your current branch must be master. You must also have permission to push to the master branch in GitLab, because the release plugin does a git push.

Setup Artifactory

There is nothing special required on the Artifactory side. I assume that it is up and running at, let’s say, http://artifactory/. Of course your URL is probably different. The default installation already has two repositories that we will publish to:

  • libs-release-local
  • libs-snapshot-local

Jenkins Artifactory plugin

This plugin integrates Jenkins with Artifactory, which enables publishing artifacts from Jenkins builds. Install the plugin, go to the Jenkins configuration, add a new Artifactory server in the Artifactory section and set up the following:

  • URL = http://artifactory/ (yours is different)
  • Default Deployer Credentials
    • provide user name and password for an existing Artifactory user who has permissions to deploy

Click the Test connection button to be sure that this part is working.

Continuous integration Jenkins build

This is the build which runs after every single commit to the master branch pushed to GitLab. Create it as a new freestyle project and give it any name you fancy. Here is the list of steps and settings for this build:

  • Source Code Management – Git
    • Repository URL = git@gitlab.local:com.buransky/release-example.git (yours is different)
    • Credentials = none (at least I don’t need it)
    • Branches to build, branch specifier = */master
  • Build Triggers
    • Poll SCM (this is required so that the webhook from GitLab works)
  • Build Environment
    • Gradle-Artifactory integration (requires Artifactory plugin)
  • Artifactory Configuration
    • Artifactory server = http://artifactory/ (yours is different)
    • Publishing repository = libs-snapshot-local (we are going to publish snapshots)
    • Capture and publish build info
    • Publish artifacts to Artifactory
      • Publish Maven descriptors
    • Use Maven compatible patterns
      • Ivy pattern = [organisation]/[module]/ivy-[revision].xml
      • Artifact pattern = [organisation]/[module]/[revision]/[artifact]-[revision](-[classifier]).[ext]
  • Build – Invoke Gradle script
    • Use Gradle wrapper
    • From Root Build Script Dir
    • Tasks = clean test

Run the build and then go to Artifactory to check whether the snapshot has been successfully published. I use the tree browser to navigate to libs-snapshot-local / com / buransky / release-example / 1.0.1-SNAPSHOT. There you should find:

  • binary JARs
  • source JARs
  • POM files

Every time you run this build, three new files are added here. You can configure Artifactory to delete old snapshots to save space. I keep only the 5 latest snapshots.

Trigger Jenkins build from GitLab

We are too lazy to manually run the continuous integration Jenkins build that we have just created. We can configure GitLab to do it for us automatically after each push. Go to your GitLab project settings, Web Hooks section. Enter the following and then click the Add Web Hook button:

  • URL = http://jenkins/git/notifyCommit?url=git@gitlab.local:com.buransky/release-example.git
    • Hey! Think. Your URL is different, but the pattern should be the same.
  • Trigger = Push events

If you try to test this hook by clicking the Test Hook button, you may be surprised that no build is triggered. The reason (very often) is that the mechanism is intelligent: if there are no new commits, the build is not run. So make a change in your source code, commit it, push it, and then the Jenkins build should be triggered.

Have a break, make yourself a coffee

This has already been a lot of work. We are able to do a lot of stuff now. The servers work and talk to each other. You may also need to set up SSH between the individual machines, but that’s out of the scope of this rant. Ready to continue? Let’s release this sh*t.

Generic Jenkins build to publish a release to Artifactory

We are about to create a parametric Jenkins build which checks out the release revision from Git, builds it and deploys the artifacts to Artifactory. This build is generic so that it can be reused for individual projects. Let’s start with a new freestyle Jenkins project and then set the following:

  • Project name = Publish release to Artifactory
  • This build is parameterized
    • String parameter
      • Name = GIT_REPOSITORY_URL
    • Git parameter
      • Name = GIT_RELEASE_TAG
      • Parameter type = Tag
      • Tag filter = *
    • String parameter
      • Name = GRADLE_TASKS
      • Default value = clean assemble
  • Source Code Management – Git
    • Repository URL = $GIT_REPOSITORY_URL
    • Branches to build, Branch Specifier = */tags/${GIT_RELEASE_TAG}
  • Build Environment
    • Delete workspace before build starts
    • Gradle-Artifactory Integration
  • Artifactory Configuration
    • Artifactory server = http://artifactory/ (yours is different)
    • Publishing repository = libs-release-local (we are going to publish a release)
    • Capture and publish build info
    • Publish artifacts to Artifactory
      • Publish Maven descriptors
    • Use Maven compatible patterns
      • Ivy pattern = [organisation]/[module]/ivy-[revision].xml
      • Artifact pattern = [organisation]/[module]/[revision]/[artifact]-[revision](-[classifier]).[ext]
  • Build – Invoke Gradle script
    • Use Gradle wrapper
    • From Root Build Script Dir
    • Tasks = $GRADLE_TASKS

Generic Jenkins build to release a Gradle project

We also need a reusable parametric Jenkins build which runs the Gradle release plugin with the provided parameters and then triggers the generic publish Jenkins build we have already created.

  • Project name = Release Gradle project
  • This build is parameterized
    • String parameter
      • Name = GIT_REPOSITORY_URL
    • String parameter
      • Name = RELEASE_VERSION
    • String parameter
      • Name = NEW_VERSION
  • Source Code Management – Git
    • Repository URL = $GIT_REPOSITORY_URL
    • Branches to build, Branch Specifier = */master
  • Additional Behaviours
    • Check out to specific local branch
      • Branch name = master
  • Build – Invoke Gradle script
    • Use Gradle wrapper
    • From Root Build Script Dir
    • Switches = -Prelease.useAutomaticVersion=true -PreleaseVersion=$RELEASE_VERSION -PnewVersion=$NEW_VERSION
    • Tasks = release
  • Trigger/call builds on another project (requires Parameterized Trigger plugin)
    • Projects to build = Publish release to Artifactory
    • Predefined parameters

Final release build

Now we are finally ready to create a build for our project which will create a release. It will do nothing but call the previously created generic builds. For the last time, create a new freestyle Jenkins project and then:

  • Project name = Example release
  • This build is parameterized
    • String parameter
      • Name = RELEASE_VERSION
    • String parameter
      • Name = NEW_VERSION
  • Prepare an environment for the run
    • Keep Jenkins Environment Variables
    • Keep Jenkins Build Variables
    • Properties Content
      • GIT_REPOSITORY_URL=git@gitlab.local:com.buransky/release-example.git
  • Source Code Management – Git
    • Use SCM from another project
      • Template Project = Release Gradle project
  • Build Environment
    • Delete workspace before build starts
  • Build
    • Use builders from another project
      • Template Project = Release Gradle project


Let’s try to release our example project. If you followed my steps, the project should currently be at version 1.0.1-SNAPSHOT. We will release version 1.0.1 and advance the current project version to the next development version, which will be 1.0.2-SNAPSHOT. So simply run the Example release build and set:
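
  • RELEASE_VERSION = 1.0.1
  • NEW_VERSION = 1.0.2-SNAPSHOT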


Tools used
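
  • Gradle with the net.researchgate.release plugin 2.1.2
  • Jenkins with the Artifactory and Parameterized Trigger plugins
  • Artifactory (the free version)
  • GitLab Community Edition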


I am sure there must be some mistakes in this guide, and maybe I also forgot to mention a critical step. Let me know if you experience any problems and I’ll try to fix it. It works on my machine, so there must be a way to make it work on yours.

Publish JAR artifact using Gradle to Artifactory

So I have wasted (invested) a day or two just to find out how to publish a JAR using Gradle to a locally running Artifactory server. I used the Gradle Artifactory plugin to do the publishing. I was lost in an endless loop of including various versions of various plugins and executing all sorts of tasks. Yes, I had read the documentation before. It’s just wrong. Perhaps it has got better in the meantime.

Executing the following uploaded the build info only. No artifact (JAR) was published.

$ gradle artifactoryPublish
Deploying build info to: http://localhost:8081/artifactory/api/build
Build successfully deployed. Browse it in Artifactory under http://localhost:8081/artifactory/webapp/builds/scala-gradle-artifactory/1408198981123/2014-08-16T16:23:00.927+0200/


Total time: 4.681 secs

This guy has saved me, I wanted to kiss him: StackOverflow – upload artifact to artifactory using gradle

I assume that you already have Gradle and Artifactory installed. I had a Scala project, but that doesn’t matter; Java should be just fine. I ran Artifactory locally on port 8081. I have also created a new user named devuser who has permissions to deploy artifacts.

Long story short, this is my final build.gradle script file:

buildscript {
    repositories {
        maven {
            url 'http://localhost:8081/artifactory/plugins-release'
            credentials {
                username = "${artifactory_user}"
                password = "${artifactory_password}"
            }
            name = "maven-main-cache"
        }
    }
    dependencies {
        classpath "org.jfrog.buildinfo:build-info-extractor-gradle:3.0.1"
    }
}

apply plugin: 'scala'
apply plugin: 'maven-publish'
apply plugin: "com.jfrog.artifactory"

version = '1.0.0-SNAPSHOT'
group = 'com.buransky'

repositories {
    add buildscript.repositories.getByName("maven-main-cache")
}

dependencies {
    compile 'org.scala-lang:scala-library:2.11.2'
}

tasks.withType(ScalaCompile) {
    scalaCompileOptions.useAnt = false
}

artifactory {
    contextUrl = "${artifactory_contextUrl}"
    publish {
        repository {
            repoKey = 'libs-snapshot-local'
            username = "${artifactory_user}"
            password = "${artifactory_password}"
            maven = true
        }
        defaults {
            // Publish the publication defined below
            publications('mavenJava')
        }
    }
}

publishing {
    publications {
        mavenJava(MavenPublication) {
            // Publish the JAR produced by the java/scala component
            from components.java
        }
    }
}

I have stored the Artifactory context URL and credentials in the ~/.gradle/gradle.properties file, and it looks something like this (the password is just a placeholder):
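
# example values – use your own Artifactory user and password
artifactory_contextUrl=http://localhost:8081/artifactory
artifactory_user=devuser
artifactory_password=secret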


Now when I run the same task again, it does what I wanted. Both the Maven POM file and the JAR archive are deployed to Artifactory:

$ gradle artifactoryPublish
:compileJava UP-TO-DATE
:compileScala UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
Deploying artifact: http://localhost:8081/artifactory/libs-snapshot-local/com/buransky/scala-gradle-artifactory/1.0.0-SNAPSHOT/scala-gradle-artifactory-1.0.0-SNAPSHOT.pom
Deploying artifact: http://localhost:8081/artifactory/libs-snapshot-local/com/buransky/scala-gradle-artifactory/1.0.0-SNAPSHOT/scala-gradle-artifactory-1.0.0-SNAPSHOT.jar
Deploying build info to: http://localhost:8081/artifactory/api/build
Build successfully deployed. Browse it in Artifactory under http://localhost:8081/artifactory/webapp/builds/scala-gradle-artifactory/1408199196550/2014-08-16T16:26:36.232+0200/


Total time: 5.807 secs


Scala for-comprehension with concurrently running futures

Can you tell what the difference is between the following two? If yes, then you’re great and you don’t need to read further.

Version 1:

val milkFuture = future { getMilk() }
val flourFuture = future { getFlour() }

for {
  milk <- milkFuture
  flour <- flourFuture
} yield (milk + flour)

Version 2:

for {
  milk <- future { getMilk() }
  flour <- future { getFlour() }
} yield (milk + flour)

If you got here, you are at least curious. The difference is that the two futures in version 1 can (possibly) run in parallel, but in version 2 they cannot. The getFlour() function is executed only after getMilk() has completed.

In the first version both futures are created before they are used in the for-comprehension. Once they exist, it's only up to the execution context when they run, but nothing prevents them from being executed. I am deliberately not saying that they will run in parallel for sure, because that depends on many factors like thread pool size, execution time, etc. But the point is that they can run in parallel.

The second version looks very similar, but the problem is that the "getFlour()" future is created only once the "getMilk()" future has already completed. Therefore the two futures can never run concurrently, no matter what. Don't forget that a for-comprehension is just syntactic sugar for the methods "map", "flatMap" and "withFilter". There's no magic behind it.
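
To see why, it helps to desugar version 2 by hand. The following sketch is roughly what the compiler produces (getMilk and getFlour are the same functions as in the snippets above):

import scala.concurrent._
import ExecutionContext.Implicits.global

// Version 2 desugared: the second future is created inside flatMap,
// i.e. only after the first one has completed.
future { getMilk() }.flatMap { milk =>
  future { getFlour() }.map { flour =>
    milk + flour
  }
}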

That's all folks. Happy futures to you.

Init.d shell script for Play framework distributed applications

I wrote a shell script to control Play framework applications packaged using the built-in dist command. Applications packaged this way are zipped standalone distributions without any need to have Play framework installed on the machine they are supposed to run on. Everything needed is inside the package. Inside the zip, in the bin directory, there is an executable shell script named just like your application. You can start your application by running this script. That’s all it does, but I want more.

Script setup

Download the script from GitHub and make it executable:
chmod +x ./dist-play-app-initd

Before you run the script, you have to set values of NAME, PORT and APP_DIR variables.

  1. NAME – name of the application; must be the same as the name of the shell script generated by Play framework to run the app
  2. PORT – port number at which the app should run
  3. APP_DIR – path to directory where you have unzipped the packaged app

Let’s take my side project Jugjane as an example. I ran “play dist” and it generated the “jugjane-1.1-SNAPSHOT.zip” file. If I unzip it, I get a single directory named “jugjane-1.1-SNAPSHOT”, which I move to “/home/rado/bin/jugjane-1.1-SNAPSHOT“. The shell script generated by Play framework is “/home/rado/bin/jugjane-1.1-SNAPSHOT/bin/jugjane“. I would like to run the application on port 9000. My values would be:
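
NAME=jugjane
PORT=9000
APP_DIR=/home/rado/bin/jugjane-1.1-SNAPSHOT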


Start, stop, restart and check status

Now I can conveniently run my Play application as a daemon. Let’s run it.


To start my Jugjane application, I simply run the following:

$ ./dist-play-app-initd start
Starting jugjane at port 9000... OK [PID=6564]


$ ./dist-play-app-initd restart
Stopping jugjane... OK [PID=6564 stopped]
Starting jugjane at port 9000... OK [PID=6677]


$ ./dist-play-app-initd status
Checking jugjane at port 9000... OK [PID=6677 running]


$ ./dist-play-app-initd stop
Stopping jugjane... OK [PID=6677 stopped]

Start your application when machine starts

This depends on your operating system, but typically you need to move this script to /etc/init.d directory.

Implementation details

The script uses the RUNNING_PID file generated by Play framework, which contains the ID of the application server process.

Safe start

After starting the application, the script checks whether the RUNNING_PID file has been created and whether the process is really running. After that it uses the wget utility to issue an HTTP GET request for the root document as yet another check that the server is alive. Of course this assumes that your application serves this document. If you don’t like (or don’t have) wget, I have provided a curl version for your convenience as well.

Safe stop

The stop command checks whether the process whose ID is in the RUNNING_PID file really belongs to your application. This is an important check so that we don’t kill an innocent process by accident. Then it sends termination signals to the process, starting with the most gentle ones, until the process dies.


I thank my employer Dominion Marine Media for allowing me to share this joy with you. Feel free to contribute.

The best code coverage for Scala

The best code coverage metric for Scala is statement coverage. Simple as that. It suits the typical programming style in Scala best. Scala is a chameleon and it can look like anything you wish, but very often several statements are written on a single line and conditional “if” statements are used rarely. In other words, line coverage and branch coverage metrics are not helpful.

Java tools

Scala runs on JVM and therefore many existing tools for Java can be used for Scala as well. But for code coverage it’s a mistake to do so.

One wrong option is to use tools that measure coverage looking at bytecode like JaCoCo. Even though it gives you a coverage rate number, JaCoCo knows nothing about Scala and therefore it doesn’t tell you which piece of code you forgot to cover.

Another misfortune is tools that natively support only line and branch coverage metrics. Cobertura is a standard in the Java world, and the XML coverage report it generates is supported by many tools. Some Scala code coverage tools decided to use the Cobertura XML report format because of its popularity. Sadly, it doesn’t support statement coverage.

Statement coverage

Why? Because a typical Scala statement looks like this (a single line of code):
def f(l: List[Int]) = l.filter(_ > 0).filter(_ < 42).takeWhile(_ != 3).foreach(println(_))

Neither line nor branch coverage works here. When would you consider this single line to be covered by a test? If at least one statement on that line has been called? Maybe. Or all of them? Also maybe.

Where is a branch? Yes, there are statements that are executed conditionally, but the decision logic is hidden in the internal implementation of List. Branch coverage tools are helpless, because they don't see this kind of conditional execution.

What we need to know instead is whether individual statements like _ > 0, _ < 42 or println(_) have been executed by an automated test. This is statement coverage.

Scoverage to the rescue!

Luckily there is a tool named Scoverage. It is a plugin for the Scala compiler, and there is also a plugin for SBT. It does exactly what we need: it generates an HTML report and also its own XML report containing detailed information about covered statements.

Scoverage plugin for SonarQube

Recently I have implemented a plugin for Sonar 4 so that statement coverage measurement can become an integral part of your team's continuous integration process and a required quality standard. It allows you to review overall project statement coverage as well as dig deeper into sub-modules, directories and source code files to see uncovered statements.

Project dashboard with Scoverage plugin:

Multi-module project overview:

Columns with statement coverage, total number of statements and number of covered statements:

Source code markup with covered and uncovered lines:

Await without waiting

Scala has recently introduced the async and await features. They allow you to write clean and easy-to-understand code in cases where a complex composition of futures would otherwise be needed. The same thing has existed in C# for quite a while. But I always had the feeling that I didn’t really know how it works. I tried to look at it from my old-school C++ thread point of view. Which thread runs which piece of code, and where is some kind of synchronization between them? Let’s take a look at the following example in Scala:

async {
  ... some code A ...
  await { ... some code B ... }
  ... some code C ...
}

I don’t want to go into disgusting details here, but the point is to stop looking at the “async” block as a monolithic sequence of statements. In fact it gets split into several blocks of code that can be executed independently, but in a well defined order. Try to imagine that each block becomes a “work item” for a thread. Code is also just a piece of data, a data structure. It can be an item in a queue. When a thread from the thread pool is available, it picks up a work item from the top of the queue and executes it. Executing a work item can produce more work items.

I am sure you have started asking how many of these queues we have, how many worker threads for each queue and what about their priorities. These are details that you can google out. But back to the original question. Where is the awaiting?

Technically speaking, there is none. Threads don’t wait for a specific piece of code to finish. Threads are just monkeys. They execute whatever is at the top of the queue. The “await” statement causes the code to be split into separate work items and defines the order in which they must be executed. The block of code C is chained to the execution of block B. Once B is done, C can be executed. Eventually, by an arbitrary thread. So the thread executing the body of the async block:

  1. Calls block A
  2. Fires off execution of block B (possibly executed by another thread)
  3. Done. Free to do something else. Go for a beer.
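
In terms of plain futures, this roughly corresponds to the following sketch (codeA, codeB and codeC are hypothetical stand-ins for blocks A, B and C above):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical stand-ins for the blocks of the async example above.
def codeA(): Unit = ???
def codeB(): Int = ???
def codeC(b: Int): Int = ???

// The calling thread runs block A, fires off block B as a work item and
// returns immediately; block C is chained to run whenever B completes,
// possibly on a different thread.
def asyncBlock(): Future[Int] = {
  codeA()
  Future { codeB() }.map(b => codeC(b))
}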

The result is that no thread is blocked waiting for another thread to complete. A thread is either executing code or waiting for a work item to be queued. This is really cool. This way you can run a highly parallel application with just a few threads behind it – usually the number of CPU cores. Play Framework works like this. Quite the opposite approach compared to Apache Tomcat, where the default thread pool size is 200. There’s no need to have a thread per HTTP request.

This is greatly oversimplified. The truth is just plain, boring computer science:
SIP-22 – Async
Scala Async Project

With a little help from our friends

“How many bugs have your unit tests found? And why didn’t they find the one that’s currently killing our production? See? This proves that unit testing doesn’t work. It’s just a waste of money. My money,” said the boss. Of course not my boss.

That’s actually a pretty valid point. How do I prove that the unit tests I have written have avoided a lot of problems? Nonexistence is hard to see. Management has to be a little bit religious here. Defects found by testers are measurable, because they are officially reported. Everyone can see the issues chart, and you hear about them during meetings.

But who has ever reported how many bugs he has avoided thanks to unit tests?

I am not a very religious type. Quite the opposite. That’s why I’m not feeling comfortable when advocating unit tests. I just can’t find any measures, numbers, graphs to show that would clearly visualize the benefits. The more I think of it, the more it gives me the impression that we should start a movement against unit tests.

Let all bugs rise and ruin the production. We will count them and put them into glass jars with a little help from unit tests. Add salt, oil, sergeant pepper and serve it to the management with a colorful defect burndown chart. Their oak tables full of canned bugs are the best evidence they can imagine. When you tell them that it will never happen again if we first write unit tests and then go to production, they will make you the employee of the week. Maybe even of the month.

Don’t worry. They will forget and it will come back again. Decreasing budgets, missed deadlines and always-more-important tasks will keep unit tests in the waiting line. Then you know what to do. Corkboard misses your photo. Let them out again! Get high with a little help from our friends.

Scala Wonderland: Case classes and pattern matching

Pattern matching is usually associated with text search. In Scala it has a much more sophisticated usage. You can write exciting decision logic when it is used together with case classes. Even after understanding what the two things mean, I wasn’t able to use them as they deserve. It takes a while to really grasp them. Long and winding road.

Case classes allow easy pattern matching where otherwise complicated code would have to be written. See the official documentation for an introduction. Let’s look at some more interesting examples.

Exceptions are case classes

case class MyException(msg: String) extends Exception(msg)

The reason is that exception catching is in fact pattern matching. The catch block contains patterns, and if there is a match, the related piece of code is executed. The following code demonstrates this. The second case matches when the exception is either SQLException or IOException.

try { ...
} catch {
  case ex: MyException => Logger.error(ex.toString)
  case ex @ (_ : SQLException | _ : IOException) => println(ex)
}

Plain old data holders
If a class is designed to be just a data holder without any methods, it is recommended to make it a case class. It is syntactically easier to construct a new instance, and the constructor parameters of a case class are by definition accessible from the outside. They can also be structurally decomposed using pattern matching. This is very handy.

Structural decomposition
A pattern is used not only to specify conditions, but also to decompose the object being matched. The following example tries to find a tuple in a map based on the provided key. If it finds one, it returns the second item of the tuple, which is a string. That’s the decomposition. In case it doesn’t find anything, it returns “N/A”. If you are curious why there are double brackets ((…)), the reason is that the outer brackets belong to the Some extractor and the inner brackets represent a tuple of two items.

def getValue(key: Int, data: Map[Int, (Int, String)]): String = {
  data.get(key) match {
    case Some((num, text)) => text
    case _ => "N/A"
  }
}
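
For example, with some hypothetical data it behaves like this:

val data = Map(1 -> (42, "forty-two"))
getValue(1, data)  // returns "forty-two"
getValue(2, data)  // returns "N/A"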

These two creatures occur more and more in my code. And not to forget: if you check the previous code, you can see that the function should return a string, but there is no return statement. In Scala the resulting value is the value of the last expression. Here we have several possible last expressions depending on the pattern matching, but we cover all possible execution paths and always return a string. The compiler is happy and so are we.

Seduced by the West

I was born and lived for 30 years in Bratislava, the capital of Slovakia. After my studies I started working for IBM as a C#/Java developer and stayed there for 5 years. Nice years. I learned a lot, met great people, traveled around the world. Not to forget, I earned some money. Nice money.

IBM has a pretty huge centre in Bratislava with about 3500 employees. Vast majority of them are busy with direct cardiopulmonary resuscitation to keep processes of the global monster alive. Sweat, blood and tears everywhere. But they also get some money. Nice money.

Even though negativism and complaining are typical Slovakian features, it hurts to see such a huge crowd of desperate young people. They express meaninglessness of their jobs in doses I just can’t absorb. Typically they are freshly graduated, can speak at least English, often German. They are full of potential. But they need money to live.

The average salary in my beautiful country is 789 EUR. A western company can beat it easily and still be profitable. I don’t want to blame IBM. Not at all. It’s perfectly reasonable what they do and very convenient for us. But we must know when to get off. Otherwise the monster will eat us alive. It will suck the life out of our bodies and let our corpses float in the grey zone of endless legacy bullshit.

I am happy to have IBM in Bratislava. Don’t get me wrong. And this is not just about IBM. We have Accenture, HP, Dell, SAP and other stuff like that. Whenever something bad happens, I can work there and get the bloody money we all need. Nice money. But I’ll fight till my last penny to stay away.