Serving Kibana 4.6 (4.2+) from a subdirectory with nginx

This took me a little while to get right, but the final config is quite simple, so I thought it was worth sharing.

I wanted to serve kibana from a subdirectory, for example mydomain.com/kibana. There are known issues doing this, as of Kibana 4.2:

The problem is that even if you set up an nginx proxy configuration to forward requests from /kibana to the correct place, Kibana serves the app from /app/kibana and tries to load resources via their absolute url, e.g. /bundles/kibana.bundle.js.

The answer is a combination of kibana and nginx config (which isn’t very nice as it means your kibana config is not portable).

In kibana.yml, add the following:

server.basePath: "/kibana"

This means that when kibana generates the urls to load resources, it prefixes them with /kibana. On its own this just breaks kibana! It will try to load e.g. /kibana/bundles/kibana.bundle.js and get a 404. duh.

So, you need to strip out the prefix in your nginx config. Mine looks like this (kibana is running in a docker container on the same network as the nginx container):

location /kibana {
    proxy_set_header Host $host;
    rewrite /kibana/(.*)$ /$1 break;
    proxy_pass http://kibana:9292/;
}
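
To sanity-check the config, it’s worth confirming that both the app and a bundle come back through the prefix (assuming nginx is listening on localhost port 80):

# both should return 200 now that the prefix is stripped before proxying
curl -sI http://localhost/kibana/app/kibana
curl -sI http://localhost/kibana/bundles/kibana.bundle.js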

Kibana is now available at mydomain.com/kibana – the resulting url will look like mydomain.com/kibana/app/kibana, but you get to keep the initial directory name, meaning it won’t interfere with other things you might be serving from the same host. It would be much neater if the server.basePath setting in kibana was all that was necessary to do this, but it doesn’t look like it’s going to be changed any time soon – see discussion at #6665 server.basePath results in 404 (to be fair changing the behaviour would break every existing install using the setting)


Docker and Maven for integration testing

I recently implemented LDAP authentication in our Dropwizard application. How to integration test this?

Here’s what I used:

* This docker image: nickstenning/slapd, containing a basic configuration of the OpenLDAP server.
* The fabric8io docker-maven-plugin
* Some maven foo.

The docker image

I required LDAP configured with some known groups and users to use for testing. I put the following Dockerfile in src/test/docker/ldap, along with users.ldif which defined the users I wanted to load.

FROM nickstenning/slapd

# Our environment variables
ENV LDAP_ROOTPASS topsecret
ENV LDAP_ORGANISATION My Org
ENV LDAP_DOMAIN domain.com

# Users and groups to be loaded
COPY users.ldif        /users.ldif

# Run slapd to load ldif files
RUN apt-get update && apt-get install -y ldap-utils && rm -rf /var/lib/apt/lists/*
RUN (/etc/service/slapd/run &) && sleep 2 && \
  ldapadd -h localhost -p 389 -c -x -D cn=admin,dc=domain,dc=com -w topsecret -v -f /users.ldif && killall slapd
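
For reference, users.ldif is plain LDIF. A minimal sketch with one user and one group (the names and attributes here are illustrative, not the ones I actually loaded) might look like:

dn: ou=people,dc=domain,dc=com
objectClass: organizationalUnit
ou: people

dn: uid=testuser,ou=people,dc=domain,dc=com
objectClass: inetOrgPerson
uid: testuser
cn: Test User
sn: User
userPassword: password123

dn: ou=groups,dc=domain,dc=com
objectClass: organizationalUnit
ou: groups

dn: cn=admins,ou=groups,dc=domain,dc=com
objectClass: groupOfNames
cn: admins
member: uid=testuser,ou=people,dc=domain,dc=com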

The parent config

In my parent pom, I put the shared configuration for the fabric8io plugin into pluginManagement. This includes any images I want to run for multiple sub modules (I’m using postgres for other integration tests, in combination with liquibase for setting up a test database), and the executions which apply to all submodules.

 <pluginManagement>
    <plugins> 
        <plugin>
            <groupId>io.fabric8</groupId>
            <artifactId>docker-maven-plugin</artifactId>
            <version>0.18.1</version>
            <configuration>                    
                <images>                       
                    <image>
                        <name>postgres:9.5.1</name>
                        <alias>pg</alias>                           
                        <run>                                    
                            <env>
                                <POSTGRES_PASSWORD>topsecret</POSTGRES_PASSWORD>
                                <POSTGRES_USER>myuser</POSTGRES_USER>
                            </env>
                            <ports>
                                <port>5432:5432</port>
                            </ports> 
                            <wait>
                                <tcp>
                                    <ports>
                                        <port>5432</port>
                                    </ports>
                                </tcp>
                            </wait>  
                        </run>
                    </image>
                </images>
            </configuration>
            <executions>
                <execution>
                    <id>build</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>build-nofork</goal>
                    </goals>
                </execution>
                <execution>
                    <id>run</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>start</goal>
                    </goals>
                </execution>
                <execution>
                    <id>stop</id>
                    <phase>post-integration-test</phase>
                    <goals>
                        <goal>stop</goal>
                    </goals>
                </execution>
            </executions>                    
        </plugin>
    </plugins>
</pluginManagement>

Remember that putting a plugin in pluginManagement in the parent does not mean it will be active in the child modules. It simply defines the default config which will be used, should you add the plugin to the plugins section in the child pom. This approach means we can avoid spinning up docker containers for sub modules which don’t need them.

Remember to use the maven-failsafe-plugin to run your integration tests. This ensures that the docker containers will be stopped even if your integration tests fail, so you don’t get containers hanging around disrupting your next test run.
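
For completeness, the failsafe configuration is the standard one – a minimal sketch (the version number is illustrative). Because failsafe only fails the build in the verify phase, the post-integration-test phase – and therefore the stop goal – always gets a chance to run first:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.19.1</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>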

The sub module config

A key optimisation I wanted to make was not to run the docker-maven-plugin if I was skipping tests – I’d assume I’m skipping tests for a fast build, so the last thing I want is to spin up unnecessary containers. To do this, I activated the plugin within a profile in the appropriate submodules, which is only active if you haven’t supplied the skipTests parameter.

<profiles>  
    <profile>
        <id>integ-test</id>
        <activation>
            <property>
                <name>!skipTests</name>
            </property>
        </activation>
        <build>
            <plugins>
                <plugin>
                    <groupId>io.fabric8</groupId>
                    <artifactId>docker-maven-plugin</artifactId>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

For submodules which only require the docker images defined in the parent pom, the above configuration is sufficient. However, in one submodule I also want to spin up the ldap image, adding to the images defined in the parent. This led me to a useful blog post on Merging Plugin Configuration in Complex Maven Projects. The combine.children="append" attribute ensures that the merge behaviour is as required.

 <profile>
    <id>integ-test</id>
    <activation>
        <property>
            <name>!skipTests</name>
        </property>
    </activation>
    <build>
        <plugins>
            <plugin>
                <groupId>io.fabric8</groupId>
                <artifactId>docker-maven-plugin</artifactId>                
                <configuration>   
                    <!-- append docker images for this module -->                 
                    <images combine.children="append">
                        <image>
                            <name>test-ldap</name>
                            <build>
                                <skip>${env.skip.docker.run}</skip>
                                <dockerFileDir>${project.basedir}/src/test/docker/ldap</dockerFileDir>
                            </build>
                            <run>
                                <skip>${env.skip.docker.run}</skip>
                                <ports>
                                    <port>389:389</port>
                                </ports> 
                                <wait>
                                    <tcp>
                                        <ports>
                                            <port>389</port>
                                        </ports>
                                    </tcp>
                                </wait>  
                            </run>
                        </image>                                         
                    </images>                      
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

Result: completely portable integration tests.


Use Docker Compose to run a PHP site without contaminating your system

This must be the quickest way to get the LAMP stack up and running locally.
* Nothing installed locally, so you can run different versions of PHP/MySQL without conflicts.
* The database gets created on first startup and persisted in a named volume.
* The PHP files are in a mapped volume, so you can edit them without rebuilding the container.

The docker-compose.yml file:

version: "2"
services:
  site:
    image: php:5.6.27-apache   
    volumes:
      - ./site:/var/www/html
    depends_on:
      - mysql
    networks:
      back-tier:        
  mysql:
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: topsecret
      MYSQL_DATABASE: sitedbname
      MYSQL_USER: sitedbuser
      MYSQL_PASSWORD: sitedbpassword
    volumes:
      - site_db:/var/lib/mysql
    networks:
      back-tier:
networks:
  back-tier:
volumes:
  site_db: 

Then just

docker-compose up
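
Inside the site container, the database is reachable by its service name over the back-tier network. A quick connectivity check (a sketch using mysqli with the credentials from the compose file – note the stock php image may need the extension enabled with docker-php-ext-install mysqli):

<?php
// "mysql" resolves to the database container on the back-tier network
$db = new mysqli('mysql', 'sitedbuser', 'sitedbpassword', 'sitedbname');
if ($db->connect_error) {
    die('Connection failed: ' . $db->connect_error);
}
echo 'Connected to ' . $db->host_info;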

It takes a bit of the pain out of working on a legacy PHP site!


Gradle and the “Parent Pom”

Coincidentally, something came up that I wanted to blog about, and it turns out it’s a follow-up to the last blog post I wrote 9 months ago.

I’ve been increasingly unhappy with using “apply from” to apply a script plugin from a hard coded location. It’s no use if you’re offline, or not on the right VPN to access the file, or whatever. What I didn’t realise is that I could apply all the same configuration defaults from a binary plugin. Then you get proper dependency management, just as you would with a Maven parent pom.

Here’s the code for the plugin, which is just a standard gradle plugin:

import org.gradle.api.Project
import org.gradle.api.Plugin
import org.apache.log4j.Logger
import org.gradle.api.GradleException

public final class CommonConfigPlugin implements Plugin<Project>{

    @Override
    void apply(Project project) {
        addDependencies(project)
        applyPlugins(project)
        configureReleasePlugin(project)
    }
    
    private void configureReleasePlugin(Project project) {

        if (!project.parent) {
            project.createReleaseTag.dependsOn { project.allprojects.uploadArchives }
        }

        //do this for all projects
        project.uploadArchives {
            ... config here ...
        }               
    }

    private void addDependencies(Project project) {
        project.afterEvaluate {            
            project.dependencies.add("testCompile", "junit:junit:4.11")
            project.dependencies.add("compile", "log4j:log4j:1.2.14")
        }
    }

    private void applyPlugins(Project project) {
        project.configure(project){
            apply plugin: 'java'
            apply plugin: 'maven'               
        }

        //apply these to parent project only
        if (!project.parent) {
            project.configure(project){
                apply plugin: 'sonar-runner'
                apply plugin: 'release'
            }
        }
    }
}
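
One thing worth noting: for the plugin id to resolve, the jar needs the standard plugin descriptor mapping the id to the class. Something like this, in src/main/resources/META-INF/gradle-plugins/my-common-config.properties (this assumes the class sits in the default package, as in the snippet above):

implementation-class=CommonConfigPlugin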

Then in the main build.gradle file for any project, I add the jar as a dependency and apply the plugin.

allprojects {
    repositories {
        mavenLocal()
        maven { url "http://my.repo"}
    }
    apply plugin: "my-common-config"

}
buildscript {

    repositories {
        mavenLocal()
        maven { url "http://my.repo"}
    }
    dependencies {
        classpath 'uk.co.anorakgirl.gradle:my-common-config-plugin:1.0.0'
    }
}

Any dependencies which the plugin has end up as dependencies in the buildscript of the project, meaning I don’t have to repeat the dependency on the release plugin, because my common config plugin already depends on it.

I’m going to put more in this common-config-plugin.


Gradle release ramblings

Over the last couple of weeks I had to get up to speed on gradle. We got inspired to try out Gradle at work following a great talk on Groovy for SysAdmins by Dan Woods at the Groovy & Grails eXchange 2013, during which Groovy and Gradle were used as a mechanism for building and deploying VMs.

First off, I seem to love Gradle! The build.gradle files are so slimline and readable. The challenges for the last couple of weeks were:

  • How to manage releases in a similar way to maven
  • How to include common elements in our builds (parent pom functionality)

Gradle Release plugin

We decided to use the townsfolk gradle-release plugin as it seems to do pretty much what Maven release has been doing for us: check for snapshot dependencies, check version control is up to date, set version, build, tag, set next snapshot and commit.

To use the plugin, the dependency has to be made available to the build.gradle file itself. Standard dependencies are only available to the project being built. This is where the buildscript block comes in – dependencies declared here can be used in the groovy script itself.

buildscript{
    repositories {
        mavenCentral()
        maven { url "https://oss.sonatype.org/content/groups/public"}
    }
    dependencies {
        classpath 'com.github.townsfolk:gradle-release:1.2'
    }
}
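
With the classpath declared, the plugin is applied like any other:

apply plugin: 'release'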

The one thing that the plugin does not do is upload artifacts to a maven repository, but this is easy as Gradle allows you to declare dependencies between tasks. We add a dependency on the uploadArchives task, which comes from the maven plugin:

createReleaseTag.dependsOn uploadArchives

A couple of gotchas here. When using this with a multi-module project, you can only apply the plugin to the parent project, and all modules have to have the same version. We found initially that only the parent project was getting deployed to artifactory. The task dependency must be modified as follows for multi-module projects:

createReleaseTag.dependsOn allprojects.uploadArchives

Extracting common gradle elements to use across projects

The parent pom we use for maven projects has common plugin configurations, standard dependencies and so on. We decided to try and create a similar “parent” gradle file, to use across all our projects. The apply from syntax can be used to configure the project with an external script:

apply from: 'https://artifactory.url/../gradle-common-1.0.gradle'

Notice that the apply from is coming from a maven repository. We used the release plugin described above to release a common gradle file as a maven artifact, so this can be managed following all our standard release practices too!

The common gradle file can contain all of the gradle build language, including applying other scripts. One thing it cannot contain is a buildscript block. To include common configuration in a buildscript block, you must create a second common file, containing a project.buildscript block:

project.buildscript {
    repositories {
        mavenCentral()
        maven { url "https://oss.sonatype.org/content/groups/public"}
    }
    dependencies {
        classpath 'com.github.townsfolk:gradle-release:1.2'
    }
}

Within the project, apply this script within the project buildscript block:

buildscript {
	apply from: 'https://artifactory.url/../gradle-common-buildscript-1.0.gradle'
}

This seems like doubling up to me, but is how it appears to work. It means we have two common gradle files to apply separately.

One final gotcha I found when moving the release plugin configuration into our common gradle files – when I tried to configure the task dependency in the external script, I got this error when building multi-module projects:

Could not find property 'uploadArchives' on project ':submodule'.

This seems to be because the parent is trying to configure the dependency on the subproject’s uploadArchives task, but the subproject does not yet have that task. It must be to do with the order in which gradle applies external scripts? Anyway, the solution was to delay evaluation of the dependency (those clever gradle folk think of everything!) by putting the dependency in a closure (found at #3 in 12-new-things-i-learned-from-a-three-day-gradle-training). So in the main common gradle file I have these lines, which apply the release plugin to the parent project only and create a lazy dependency on the uploadArchives task for all modules:

if (!project.parent) {
	apply plugin: 'release'
	project.createReleaseTag.dependsOn { allprojects.uploadArchives }
}

And that’s it, infrastructure in place!

(Credit for setting a lot of this up goes to JCDubs but I was so excited when I started working with it I had to write up what we learned on the way. I’m looking forward to lots more XML free building…)


My First Groovy DSL

I decided to have a go at automating the management of our Jenkins jobs. Basically most of our projects have similar builds, and I want to keep the job configs standard – they tend to diverge when we manage them via the UI. I’ve previously played with the Jenkins API using HttpBuilder to post XML config files. My DSL doesn’t do anything clever to generate the XML, but it makes the Jenkins config very readable:

jenkins {   
    url = "http://jenkins.url/"
    username = "jenkins"
    apiKey = "asdfadsgdsfshgdsfg"

    buildGroup {
        name = "Maven Builds"
        xml {
            feature = "release/xml/maven-test-config.xml"
            develop = "release/xml/maven-install-config.xml"
        }
        repos {
            project1 = "ssh://git.repo/project1.git"
            project2 = "ssh://git.repo/project2.git"
        }
    }

    buildGroup {
        name = "Grails Builds"
        xml {
            feature = "release/xml/grails-test-config.xml"
            develop = "release/xml/grails-install-config.xml"
        }
        repos {
            project4 = "ssh://git.repo/project4.git"
            project5 = "ssh://git.repo/project5.git"
        }
    }

}

I think this is so easy to read.

I’m not really sure if I’ve gone the right way about writing this, but here goes:

First, I have a main method, which parses the file. It binds the variable “jenkins” to an instance of my JenkinsDsl class.

    Binding binding = new Binding();
    JenkinsDsl jenkinsDsl = new JenkinsDsl();
    binding.setVariable("jenkins",jenkinsDsl);
    GroovyShell shell = new GroovyShell(binding);
    Script script = shell.parse(file);
    script.run();

The JenkinsDsl class overrides the `call` method, and uses `methodMissing` to define how we handle the buildGroups. The really cool bit is I didn’t have to write anything special to get the url, username and password from the jenkins config file – groovy is automatically calling a ‘setProperty’ method on the delegate.

class JenkinsDsl {

    def call(Closure cl) {
        log.info "Processing Main Configuration"

        cl.setDelegate(this);
        cl.setResolveStrategy(Closure.DELEGATE_ONLY)
        cl.call();
    }

    def url
    def apiKey
    def username

    def jenkinsHttp

    def methodMissing(String methodName, args) {

        if (methodName.equals("buildGroup")) {

            Closure closure = args[0]
            closure.setDelegate(new JenkinsBuildGroup())
            closure.setResolveStrategy(Closure.DELEGATE_ONLY)
            closure()
            closure.delegate.updateJenkins(getJenkinsHttp())
        } else {
            log.error "Unsupported option: " + methodName
        }
    }

    def getJenkinsHttp() {
        if (!jenkinsHttp) {
            jenkinsHttp = new JenkinsHttp(url, username, apiKey)
        }
        jenkinsHttp
    }   
}

The JenkinsBuildGroup class is very similar – to save time I’ve used the ‘Expando’ class to collect the xml and repository details.

class JenkinsBuildGroup {

    def name
    def xml
    def repoList

    /**
     * Use methodMissing to load the xml and repos closures
     * @param methodName
     * @param args
     */
    def methodMissing(String methodName, args) {

        if (methodName.equals("xml")) {
            xml = new Expando()
            Closure closure = args[0]
            closure.setDelegate(xml)
            closure.setResolveStrategy(Closure.DELEGATE_ONLY)
            closure()
        } else if (methodName.equals("repos")) {
            repoList = new Expando()
            Closure closure = args[0]
            closure.setDelegate(repoList)
            closure.setResolveStrategy(Closure.DELEGATE_ONLY)
            closure()
        } else {
            log.error "Unsupported option: " + methodName
        }
    }

    def updateJenkins(def jenkinsHttp) {
        xml.getProperties().each { String jobType, String filePath ->

            def configXml = new File(filePath).text

            repoList.getProperties().each { String jobName, String repo ->
                String jobXml = configXml.replaceAll("REPOSITORY_URL", repo)
                jenkinsHttp.createOrUpdate(jobName + "-" + jobType, jobXml)
            }
        }
    }
}

To actually update jenkins, I substitute the correct repository in the template XML file and post it to the Jenkins API. The JenkinsHttp class is just a wrapper for HTTP Builder. It checks if a job exists so that it can correctly create or update.
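
I haven’t included the full class, but a minimal sketch of the idea looks like the below (the Jenkins endpoints are the standard /createItem and /job/NAME/config.xml ones; the HTTPBuilder calls are illustrative rather than lifted from my working code):

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.XML
import static groovyx.net.http.Method.GET

class JenkinsHttp {

    def http

    JenkinsHttp(url, username, apiKey) {
        http = new HTTPBuilder(url)
        http.auth.basic(username, apiKey)
    }

    boolean exists(String jobName) {
        def found = false
        http.request(GET) {
            uri.path = "/job/${jobName}/api/xml"
            response.success = { resp -> found = true }
            response.failure = { resp -> found = false } // a 404 means the job doesn't exist yet
        }
        found
    }

    def createOrUpdate(String jobName, String jobXml) {
        if (exists(jobName)) {
            // existing job: post the new config to the job's config.xml
            http.post(path: "/job/${jobName}/config.xml", body: jobXml, requestContentType: XML)
        } else {
            // new job: post to createItem with the job name as a query parameter
            http.post(path: '/createItem', query: [name: jobName], body: jobXml, requestContentType: XML)
        }
    }
}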

I’m not sure if this is the right way to go about writing a groovy DSL, but it works! And once I’ve finalised the template XML files, Jenkins is going to be a vision of consistency.


Continuous Build Woes

I’ve not been happy with our Jenkins continuous build setup for a while. It was fine back in the day when: we had about 3 projects; we were still using Subversion(!); and we didn’t work on branches. We now have lots of projects, we use git, we follow a git flow based branching strategy, and we use Atlassian Stash to review pull requests before merging. I’ve tweaked the Jenkins config a bit, and we wrote a Stash plugin to prompt builds on commit, but it’s become harder and harder to maintain to the point that we’re not really using it, so deploying is never as easy as it should be, and test failures get where they shouldn’t.

Here are my key requirements; I am sure they aren’t that unusual.

1. Job Templates. We have various types of project (Maven, Gradle, Grails) but for each type, the build process is pretty much the same, with a different repository URL. What I want to do is create a template for a certain type of project, and then create jobs using that template. If I want to change the build process, I just change the template and all the jobs should change.

2. Stash Integration. I want to prevent merging to develop or master if there are failing tests.

3. Branch specific configuration. At the moment we have one job for building develop and feature branches, and a second for releasing from master. I’d like all builds associated with a project to be configured in the same place, with some conditional behaviour based on the branch. Feature branches just get compiled and tested. Develop branch also gets deployed to a SNAPSHOT release (maven), or a staging environment (grails). Master branch gets released.

Bamboo Evaluation

We use Atlassian Jira, Confluence and Stash (all of which I think are brilliant!) so my first thought was to move to Bamboo. I spent yesterday setting up an evaluation and trying out a few builds.

First off, it looks great – nice familiar UI, well laid out etc. However, it wasn’t as quick to get started as I’d hoped. Here’s my initial thoughts:

1. Too complicated! Each plan (which equates to a source code repository) can have multiple sequential Stages, containing multiple Jobs (which can run in parallel), each containing sequential Tasks. This might be great if you have a big and complex build process, but all our projects are pretty straightforward, so we’d always be using just the “Default Stage”, leaving lots of navigation with not much in it.

2. No instant Stash Integration. I expected some sort of “Add builds from Stash” function, which would automatically set up build triggers from Stash, and successful build notifications back again. But it seems I have to install a plugin on Stash, and for each repository enable it and paste in the notification URL for the correct plan.

3. No templating or hierarchical configuration. Configuring and maintaining lots of identical jobs for different repositories is going to be just as time consuming as in Jenkins.

4. No conditional Branch configuration. Bamboo offers “Branch Plans” so your plan can run on multiple branches (although I couldn’t seem to get it to pick up branches automatically as described). However, there doesn’t seem to be a way to conditionally run stages (or jobs/tasks) based on the branch. So it looks like I’d still have to have separate jobs for building develop (including deploying to staging) and feature branches (test only).

Back to Jenkins

So, I’m a bit disappointed that Bamboo doesn’t seem to solve the main problems I’m facing with Jenkins. It looks like I might as well spend some time working out how to manage Jenkins a little better (as I’m a lot more familiar with it, and we already have it).

I’ve already solved one of the problems, as I’ve found the Jenkins Stash Notifier plugin. And I’m working on some scripts to use the Jenkins rest api to manage jobs. I’ll post an update if I get to a solution I’m happy with!


Don’t commit your target folder

or an insidious error which resulted in jenkins builds where the sources jar did not match the binary jar

I still can’t work out exactly how this works. I recently made some changes to one of our Maven archetypes, and released the new version using Jenkins. I was confused to discover that projects created with the new version of the archetype did not contain the changes. I checked the git repository tag. I checked the source jar in Artifactory. Both contained the changes. I checked the binary jar and it did not (the changes were to resource files).

I checked out locally, ran maven clean and then maven package. The resulting binary jar was fine.

I wiped out the Jenkins workspace and repeated the build. The binary jar was created incorrectly! Cue lots of head scratching, until a quick thinking colleague asked if I’d checked in the target directory at any point. And I HAD! So although I’d wiped the jenkins workspace, the fresh checkout had got back the out of date target contents. AND the jenkins job did not begin with a maven clean.
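
The fix itself is straightforward (standard git commands, run from the repository root):

# stop tracking the build output, then ignore it for good
git rm -r --cached target
echo "target/" >> .gitignore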

A lesson in better house-keeping.


Finally… maven release and git flow!

I am really looking forward to using the new Maven JGit-Flow Plugin by Atlassian.

At the moment, we’re following the git flow branching structure, with master, develop and feature branches. But we’re not using git flow for releases. Our release process is as follows:

  • Create release/1.0.0 branch.
  • Set version on develop branch to 1.1.0-SNAPSHOT
  • Push to origin
  • QA and final bug fixing on release branch
  • Merge to master
  • Maven Release on master branch
  • Merge to develop

This was as far as we could streamline the process. It at least ensures we don’t release something which can’t merge to master.
(I should admit: sometimes we skip the release branch and merge straight from develop to master when we are ready for a release.)

Anyway, really hoping that the Atlassian plugin does away with all these manual steps. It looks like it will. The only thing it doesn’t seem to do is bump the version on the develop branch as soon as the release branch has been created. There’s a discussion here: https://bitbucket.org/atlassian/maven-jgitflow-plugin/issue/16/develop-version-on-release
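
If it works as advertised, the manual process above should collapse to something like this (the goal names are from the plugin docs; the version parameters are my assumption of how you’d pin the numbers, so check them before relying on this):

mvn jgitflow:release-start -DreleaseVersion=1.0.0 -DdevelopmentVersion=1.1.0-SNAPSHOT
mvn jgitflow:release-finish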

I think it’s really important to bump the version straight away. If work continues on the develop branch whilst release QA is underway, both the develop branch and the release branch will be installing snapshots with the same version number and different functionality. I’m sure this could lead to confusion somewhere, especially with multi-module projects. I’m pretty sure I read somewhere that the git flow model says you should bump the version on develop after release-start, but of course I can’t find it now. The only argument I can see for not bumping is what happens if you decide not to release, but I don’t see it as a problem to miss a minor version number, and if it was, it’s easy to use the Maven Versions plugin to reset the version on develop. That will be the minority case, so I think the plugin should cater first for the normal case, when the release goes ahead.

I’d be interested to know when other developers bump the version on the develop branch? Either way, I’ll certainly be trying out the plugin when I next need to do some releasing.


maven release, git and the parent pom

Some strange maven behaviour took up some of my time today.

A lot of our projects have some common settings, so we have a ‘parent pom’ artifact.  I’ve been trying to tidy up our release procedure, and wanted to use the maven release plugin.

I thought I could be clever and define the scm settings in the parent pom like this:

<scm>
   <developerConnection>scm:git:https://our-repo-url/${repo.name}.git</developerConnection>
</scm>

All projects using this parent would then just have to define a “repo.name” property in the pom.

However, it didn’t work! When running maven release, maven was attempting to push to ‘https://our-repo-url/${repo.name}.git/artifactId’, i.e. it was appending the artifactId to the scm url. Needless to say, the push failed, and the maven release failed.

After some googling I found that the same problem is encountered when trying to release a single module of a multi-module project (for example, see http://stackoverflow.com/questions/6727450/releasing-a-multi-module-maven-project-with-git).

I guess the maven release plugin is trying to be clever, and perhaps this behaviour makes more sense in an SVN world.

To fix the problem, I defined an “scm.root” variable in the parent pom, and defined the scm connection for individual projects as below:

<scm>
   <developerConnection>${scm.root}/reponame.git</developerConnection>
</scm>
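
(The scm.root property itself lives in the parent pom – the URL below is a placeholder for our real repository root:)

<properties>
   <scm.root>scm:git:https://our-repo-url</scm.root>
</properties>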