Publish JAR artifact using Gradle to Artifactory

So I have wasted (invested) a day or two just to find out how to publish a JAR to a locally running Artifactory server using Gradle. I used the Gradle Artifactory plugin to do the publishing. I was lost in an endless loop of including various versions of various plugins and executing all sorts of tasks. Yes, I’ve read the documentation first. It’s just wrong. Perhaps it has gotten better in the meantime.

Executing the following uploaded the build info only. No artifact (JAR) was published.

$ gradle artifactoryPublish
:artifactoryPublish
Deploying build info to: http://localhost:8081/artifactory/api/build
Build successfully deployed. Browse it in Artifactory under http://localhost:8081/artifactory/webapp/builds/scala-gradle-artifactory/1408198981123/2014-08-16T16:23:00.927+0200/

BUILD SUCCESSFUL

Total time: 4.681 secs

This guy saved me, I wanted to kiss him: StackOverflow – upload artifact to artifactory using gradle

I assume that you already have Gradle and Artifactory installed. I had a Scala project, but that doesn’t matter; Java should work just as well. I ran Artifactory locally on port 8081 and created a new user named devuser with permissions to deploy artifacts.

Long story short, this is my final build.gradle script file:

buildscript {
    repositories {
        maven {
            url 'http://localhost:8081/artifactory/plugins-release'
            credentials {
                username = "${artifactory_user}"
                password = "${artifactory_password}"
            }
            name = "maven-main-cache"
        }
    }
    dependencies {
        classpath "org.jfrog.buildinfo:build-info-extractor-gradle:3.0.1"
    }
}

apply plugin: 'scala'
apply plugin: 'maven-publish'
apply plugin: "com.jfrog.artifactory"

version = '1.0.0-SNAPSHOT'
group = 'com.buransky'

repositories {
    add buildscript.repositories.getByName("maven-main-cache")
}

dependencies {
    compile 'org.scala-lang:scala-library:2.11.2'
}

tasks.withType(ScalaCompile) {
    scalaCompileOptions.useAnt = false
}

artifactory {
    contextUrl = "${artifactory_contextUrl}"
    publish {
        repository {
            repoKey = 'libs-snapshot-local'
            username = "${artifactory_user}"
            password = "${artifactory_password}"
            maven = true
        }
        defaults {
            publications('mavenJava')
        }
    }
}

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
}

I have stored the Artifactory context URL and credentials in the ~/.gradle/gradle.properties file, which looks like this:

artifactory_user=devuser
artifactory_password=devuser
artifactory_contextUrl=http://localhost:8081/artifactory

Now when I run the same task again, I get what I wanted. Both the Maven POM file and the JAR archive are deployed to Artifactory:

$ gradle artifactoryPublish
:generatePomFileForMavenJavaPublication
:compileJava UP-TO-DATE
:compileScala UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:jar UP-TO-DATE
:artifactoryPublish
Deploying artifact: http://localhost:8081/artifactory/libs-snapshot-local/com/buransky/scala-gradle-artifactory/1.0.0-SNAPSHOT/scala-gradle-artifactory-1.0.0-SNAPSHOT.pom
Deploying artifact: http://localhost:8081/artifactory/libs-snapshot-local/com/buransky/scala-gradle-artifactory/1.0.0-SNAPSHOT/scala-gradle-artifactory-1.0.0-SNAPSHOT.jar
Deploying build info to: http://localhost:8081/artifactory/api/build
Build successfully deployed. Browse it in Artifactory under http://localhost:8081/artifactory/webapp/builds/scala-gradle-artifactory/1408199196550/2014-08-16T16:26:36.232+0200/

BUILD SUCCESSFUL

Total time: 5.807 secs

Happy end:
[screenshot: the published artifact in the Artifactory repository browser]

Scala for-comprehension with concurrently running futures

Can you tell the difference between the following two? If yes, then you’re great and you don’t need to read further.

Version 1:

// imports needed by both versions (Scala 2.10/2.11 futures API)
import scala.concurrent._
import scala.concurrent.ExecutionContext.Implicits.global

val milkFuture = future { getMilk() }
val flourFuture = future { getFlour() }

for {
  milk <- milkFuture
  flour <- flourFuture
} yield (milk + flour)

Version 2:

for {
  milk <- future { getMilk() }
  flour <- future { getFlour() }
} yield (milk + flour)

If you got here, you are at least curious. The difference is that the two futures in version 1 can (possibly) run in parallel, while in version 2 they cannot. The getFlour() function is executed only after getMilk() has completed.

In the first version both futures are created before they are used in the for-comprehension. Once they exist, it’s only up to the execution context when they run, and nothing prevents them from being executed. I am deliberately not saying that they run in parallel for sure, because that depends on many factors like thread pool size, execution time, etc. But the point is that they can run in parallel.

The second version looks very similar, but the problem is that the getFlour() future is created only once the getMilk() future has already completed. Therefore the two futures can never run concurrently, no matter what. Don’t forget that a for-comprehension is just syntactic sugar for the methods map, flatMap and withFilter. There’s no magic behind it.
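
To see the difference under the hood, this is roughly what version 2 desugars to (a hand-written sketch, not the exact compiler output):

future { getMilk() }.flatMap { milk =>
  future { getFlour() }.map { flour =>
    milk + flour
  }
}

The second future { getFlour() } is created inside the function passed to flatMap, and that function is invoked only after the milk future has completed. In version 1 both futures already exist before flatMap is ever called.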

That's all folks. Happy futures to you.

Init.d shell script for Play framework distributed applications

I wrote a shell script to control Play framework applications packaged using the built-in dist command. Applications packaged this way are zipped standalone distributions which don’t require the Play framework to be installed on the machine they run on. Everything needed is inside the package. Inside the zip, in the bin directory, there is an executable shell script named just like your application. You can start your application by running this script. That’s all it does, but I want more.

Script setup

Download the script from GitHub and make it executable:
chmod +x ./dist-play-app-initd

Before you run the script, you have to set the values of the NAME, PORT and APP_DIR variables.

  1. NAME – the name of the application; it must be the same as the name of the shell script generated by the Play framework to run the app
  2. PORT – the port number on which the app should run
  3. APP_DIR – the path to the directory where you have unzipped the packaged app

Let’s take my side project Jugjane as an example. I ran “play dist” and it generated the “jugjane-1.1-SNAPSHOT.zip” file. If I unzip it, I get a single directory named “jugjane-1.1-SNAPSHOT” which I move to “/home/rado/bin/jugjane-1.1-SNAPSHOT”. The shell script generated by the Play framework is “/home/rado/bin/jugjane-1.1-SNAPSHOT/bin/jugjane”. I would like to run the application on port 9000. My values would be:

NAME=jugjane
PORT=9000
APP_DIR=/home/rado/bin/jugjane-1.1-SNAPSHOT

Start, stop, restart and check status

Now I can conveniently run my Play application as a daemon. Let’s run it.

Start

To start my Jugjane application I simply run the following:

$ ./dist-play-app-initd start
Starting jugjane at port 9000... OK [PID=6564]

Restart


$ ./dist-play-app-initd restart
Stopping jugjane... OK [PID=6564 stopped]
Starting jugjane at port 9000... OK [PID=6677]

Status


$ ./dist-play-app-initd status
Checking jugjane at port 9000... OK [PID=6677 running]

Stop


$ ./dist-play-app-initd stop
Stopping jugjane... OK [PID=6677 stopped]

Start your application when machine starts

This depends on your operating system, but typically you need to move this script to the /etc/init.d directory.

Implementation details

The script uses the RUNNING_PID file generated by the Play framework, which contains the ID of the application server process.

Safe start

After starting the application, the script checks whether the RUNNING_PID file has been created and whether the process is really running. After that it uses the wget utility to issue an HTTP GET request for the root document as yet another check that the server is alive. Of course this assumes that your application serves this document. If you don’t like (or don’t have) wget, I have provided a curl version for your convenience as well.

Safe stop

Stop checks whether the process whose ID is in the RUNNING_PID file really belongs to your application. This is an important check so that we don’t kill an innocent process by accident. Then it sends termination signals to the process, starting with the most gentle ones, until the process dies.

Contribution

I thank my employer Dominion Marine Media for allowing me to share this joy with you. Feel free to contribute.

The best code coverage for Scala

The best code coverage metric for Scala is statement coverage. Simple as that. It suits the typical programming style in Scala best. Scala is a chameleon and it can look like anything you wish, but very often several statements are written on a single line, and conditional “if” statements are used rarely. In other words, line coverage and branch coverage metrics are not helpful.

Java tools

Scala runs on the JVM, and therefore many existing Java tools can be used for Scala as well. But for code coverage it’s a mistake to do so.

One wrong option is to use tools that measure coverage by looking at bytecode, like JaCoCo. Even though it gives you a coverage rate number, JaCoCo knows nothing about Scala, and therefore it cannot tell you which piece of source code you forgot to cover.

Another misfortune is tools that natively support only line and branch coverage metrics. Cobertura is a standard in the Java world, and the XML coverage report it generates is supported by many tools. Some Scala code coverage tools decided to use the Cobertura XML report format because of its popularity. Sadly, it doesn’t support statement coverage.

Statement coverage

Why? Because a typical Scala statement looks like this (a single line of code):
def f(l: List[Int]) = l.filter(_ > 0).filter(_ < 42).takeWhile(_ != 3).foreach(println(_))

Neither line nor branch coverage works here. When would you consider this single line to be covered by a test? If at least one statement on that line has been called? Maybe. Or all of them? Also maybe.

And where is a branch? Yes, there are statements that are executed conditionally, but the decision logic is hidden in the internal implementation of List. Branch coverage tools are helpless because they don’t see this kind of conditional execution.
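
To contrast the two worlds, here is a made-up example. Branch coverage tools can see an explicit conditional, but not the same kind of decision hidden inside a combinator:

// An explicit conditional: branch coverage tools can see this
def sign(x: Int): String = if (x > 0) "positive" else "non-positive"

// The same decision hidden inside filter: no visible branch,
// the conditional logic executes inside List's implementation
def positives(l: List[Int]): List[Int] = l.filter(_ > 0)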

What we need to know instead is whether individual statements like _ > 0, _ < 42 or println(_) have been executed by an automated test. This is statement coverage.

Scoverage to the rescue!

Luckily there is a tool named Scoverage. It is a plugin for the Scala compiler, and there is also a plugin for SBT. It does exactly what we need: it generates an HTML report and also its own XML report containing detailed information about covered statements.
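
To try it with SBT, the setup is roughly the following (a sketch; the plugin version below is an assumption, check the Scoverage project for the current coordinates):

// project/plugins.sbt
addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.0.4") // version is an assumption

Then run the tests with instrumentation enabled using "sbt clean coverage test" (newer plugin versions also need a separate coverageReport step).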

Scoverage plugin for SonarQube

Recently I implemented a plugin for SonarQube 4 so that statement coverage measurement can become an integral part of your team’s continuous integration process and a required quality standard. It allows you to review overall project statement coverage as well as dig deeper into sub-modules, directories and source code files to see uncovered statements.

Project dashboard with Scoverage plugin:
[screenshot]

Multi-module project overview:
[screenshot]

Columns with statement coverage, total number of statements and number of covered statements:
[screenshot]

Source code markup with covered and uncovered lines:
[screenshot]

Await without waiting

Scala has recently introduced the async and await features. They allow you to write clean and easy-to-understand code in cases where a complex composition of futures would otherwise be needed. The same thing has existed in C# for quite a while. But I always had a feeling that I didn’t really know how it works. I tried to look at it from my old-school C++ thread point of view: which thread runs which piece of code, and where is some kind of synchronization between them? Let’s take a look at the following example in Scala:

async {
  ... some code A ...
  await { ... some code B ... }
  ... some code C ...
}  

I don’t want to go into disgusting details here, but the point is to stop looking at the “async” block as a monolithic sequence of statements. In fact it gets split into several blocks of code that can be executed independently, but in a well-defined order. Try to imagine that each block becomes a “work item” for a thread. Code is also just a piece of data, a data structure. It can be an item in a queue. When a thread from the thread pool is available, it picks up a work item from the top of the queue and executes it. Execution of each work item can possibly produce more work items.
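
To make the splitting concrete, this is a rough hand-written equivalent of the async block above, expressed as plain future chaining (codeA, codeB and codeC are hypothetical stand-ins for the three blocks; the real async macro generates a state machine, not this exact code):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

Future {
  codeA()            // work item 1
}.flatMap { _ =>
  codeB()            // work item 2: the awaited future (codeB returns a Future)
}.map { b =>
  codeC(b)           // work item 3: queued only once B's future completes
}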

I am sure you have started asking how many of these queues there are, how many worker threads serve each queue, and what about their priorities. These are details that you can google. But back to the original question: where is the awaiting?

Technically speaking, there is none. Threads don’t wait for specific code to finish. Threads are just monkeys: they execute whatever is at the top of the queue. The “await” statement causes the code to be split into separate work items and defines the order in which they must be executed. The block of code C is chained to the execution of block B. Once B is done, C can be executed, eventually, by an arbitrary thread. So the thread executing the body of the async block:

  1. Calls block A
  2. Fires off execution of block B (possibly executed by another thread)
  3. Done. Free to do something else. Go for a beer.

The result is that no thread is blocked waiting for another thread to complete. A thread is either executing code or waiting for a work item to be queued. This is really cool. This way you can run a highly parallel application with just a few threads behind it – usually as many as there are CPU cores. The Play Framework works like this. Quite the opposite approach compared to Apache Tomcat, where the default thread pool size is 200. There’s no need to have a thread per HTTP request.

This is greatly oversimplified. The truth is just plain boring computer science:
SIP-22 – Async
Scala Async Project

With a little help from our friends

“How many bugs have your unit tests found? And why didn’t they find the one that’s currently killing our production? See? This proves that unit testing doesn’t work. It’s just a waste of money. My money,” said the boss. Of course not my boss.

That’s actually a pretty valid point. How do I prove that the unit tests I have written have avoided a lot of problems? Nonexistence is hard to see. Management has to be a little bit religious here. Defects found by testers are measurable because they are officially reported. Everyone can see the issues chart; you hear about them during meetings.

But who has ever reported how many bugs he has avoided thanks to unit tests?

I am not a very religious type. Quite the opposite. That’s why I don’t feel comfortable advocating unit tests. I just can’t find any measures, numbers or graphs that would clearly visualize the benefits. The more I think of it, the more I get the impression that we should start a movement against unit tests.

Let all bugs rise and ruin the production. We will count them and put them into glass jars with a little help from unit tests. Add salt, oil, Sergeant Pepper, and serve it to the management with a colorful defect burndown chart. Their oak tables full of canned bugs are the best evidence they can imagine. When you tell them that it will never happen again if we write unit tests first and then go to production, they will make you the employee of the week. Maybe even of the month.

Don’t worry. They will forget and it will come back again. Decreasing budgets, missed deadlines and always-more-important tasks will keep unit tests in the waiting line. Then you know what to do. The corkboard misses your photo. Let them out again! Get high with a little help from our friends.

Scala Wonderland: Case classes and pattern matching

Pattern matching is usually associated with text search. In Scala it has a much more sophisticated usage: together with case classes, it lets you write exciting decision logic. Even after understanding what the two things mean, I wasn’t able to use them as they deserve. It takes a while to really grasp them. A long and winding road.

Case classes allow easy pattern matching where otherwise complicated code would have to be written. See the official documentation for an introduction. Let’s look at some more interesting examples.

Exceptions are case classes

case class MyException(msg: String) extends Exception(msg)

The reason is that exception catching is in fact pattern matching. The catch block contains patterns, and if there is a match, the related piece of code is executed. The following code demonstrates this. The second case matches when the exception is either SQLException or IOException.

import java.io.IOException
import java.sql.SQLException

try { ...
} catch {
  case ex: MyException => Logger.error(ex.toString)
  case ex @ (_: SQLException | _: IOException) => println(ex)
}

Plain old data holders
If a class is designed to be just a data holder without any methods, it is recommended to make it a case class. It is syntactically easier to construct a new instance, and the constructor parameters of a case class are by definition accessible from the outside. They can also be structurally decomposed using pattern matching. This is very handy.
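
For example, with a hypothetical Point class:

case class Point(x: Int, y: Int)

val p = Point(1, 2)    // no 'new' needed, calls the generated Point.apply
println(p.x)           // constructor parameters are accessible from outside
val Point(a, b) = p    // structural decomposition using pattern matching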

Structural decomposition
A pattern is used not only to specify conditions, but also to decompose the object being matched. The following example tries to find a tuple in a map based on the provided key. If it finds one, it returns the second item of the tuple, which is a string. That’s the decomposition. In case it doesn’t find anything, it returns “N/A”. If you are curious why there are double parentheses ((…)), the reason is that the outer ones belong to the Some pattern and the inner ones represent a tuple of two items.

def getValue(key: Int, data: Map[Int, (Int, String)]): String = {
  data.get(key) match {
    case Some((num, text)) => text
    case _ => "N/A"
  }
}
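
For example, with some sample data:

val data = Map(1 -> (10, "ten"), 2 -> (20, "twenty"))
getValue(1, data) // returns "ten"
getValue(3, data) // returns "N/A"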

These two creatures occur more and more in my code. And not to forget: if you check the previous code, you can see that the function should return a string, but there is no return statement. In Scala, the resulting value is the value of the last expression. Here there are several possible last expressions depending on the pattern matching, but we cover all possible execution paths and always return a string. The compiler is happy and we are too.

Seduced by the West

I was born and lived for 30 years in Bratislava, the capital of Slovakia. After my studies I started working for IBM as a C#/Java developer and stayed there for 5 years. Nice years. I learned a lot, met great people, traveled around the world. Not to forget, I earned some money. Nice money.

IBM has a pretty huge centre in Bratislava with about 3500 employees. The vast majority of them are busy with direct cardiopulmonary resuscitation to keep the processes of the global monster alive. Sweat, blood and tears everywhere. But they also get some money. Nice money.

Even though negativism and complaining are typical Slovakian traits, it hurts to see such a huge crowd of desperate young people. They express the meaninglessness of their jobs in doses I just can’t absorb. Typically they are fresh graduates who speak at least English, often German too. They are full of potential. But they need money to live.

The average salary in my beautiful country is 789 EUR. A Western company can beat it easily and still be profitable. I don’t want to blame IBM. Not at all. What they do is perfectly reasonable and very convenient for us. But we must know when to get off. Otherwise the monster will eat us alive. It will suck the life out of our bodies and let our corpses float in the grey zone of endless legacy bullshit.

I am happy to have IBM in Bratislava. Don’t get me wrong. And this is not just about IBM. We have Accenture, HP, Dell, SAP and other shops like that. Whenever something bad happens, I can work there and get the bloody money we all need. Nice money. But I’ll fight till my last penny to stay away.

Scala Wonderland: Semicolons, singletons and companion objects

In Scala you may usually omit the semicolon at the end of a statement. It is required only if several statements are on a single line. Unfortunately, there are cases when the compiler doesn’t understand the code as you would expect. For example, the following is treated as two statements, a and +b:

a
+ b

The solution is to wrap the expression in parentheses: (a + b).

Scala doesn’t have static members. It has singleton objects instead. Syntactically, a singleton object looks like a class, except that you use the keyword object. The main benefit is that an object can extend a class or mix in traits. On the other hand, it cannot take (constructor) parameters.

class Planet
object Earth extends Planet
object Sun extends Planet

When a singleton object has the same name as a class, it is called the companion object of that class, and the class is called the companion class. They both must be implemented in the same source file. The beauty is that they can access each other’s private members. Typical usage is to put factory and helper methods in the companion object.

class Sheep(val name: String) {
  private var isBlack = false
}

object Sheep {
  // Factory method #1
  def apply(name: String, isBlack: Boolean) = {
    val sheep = new Sheep(name)
    sheep.isBlack = isBlack // the companion object can access Sheep's private member
    sheep
  }

  // Factory method #2
  def apply(name: String) = new Sheep(name)
}

Sheep("Dolly", true) // Calls factory method #1
Sheep("Daisy") // Calls factory method #2

Scala Wonderland: Lists

In functional style, methods should not have side effects. A consequence of this philosophy is that List is immutable in Scala. Constructing a List is simple.

val abc = List("a", "b", "c")

There is one trick in the previous code. A common trick in Scala. It invokes the method named apply on the List companion object. Companion objects are explained in a separate post; for now you may look at this method as a static factory method that returns a new instance of List. The following code does the same:

val abc = List.apply("a", "b", "c")

Very convenient is the use of the “cons” operator ::. It prepends a new element at the beginning of a list. Another useful object is Nil, which represents an empty list. To construct the same list using cons, you may write the following:

val abc = "a" :: "b" :: "c" :: Nil

Pretty unusual to prepend a new element instead of appending it, right? The reason is that List is implemented as a singly linked list, which means that prepending takes constant time while appending is linear.

The last piece of magic in this simple exercise is that the cons operator is right-associative. A general rule in Scala says that if the name of an operator ends with a colon “:”, it is invoked on the right operand. Otherwise the usual left-associativity applies. Yet another equivalent piece of code:

val abc = Nil.::("c").::("b").::("a")

Immutability, the apply method, companion objects, prepending, linked lists, right-associativity. Isn’t it too much for such trivial code? There is a lot of magic in this wonderland.