Use Docker Compose to run a PHP site without contaminating your system

This must be the quickest way to get the LAMP stack up and running locally.
Nothing is installed on the host, so you can run different versions of PHP/MySQL without conflicts.
Database gets created on first startup and persisted in a named volume.
PHP files are in a mapped volume so you can edit without rebuilding the container.

The docker-compose.yml file:

version: "2"
services:
  site:
    image: php:5.6.27-apache   
    volumes:
      - ./site:/var/www/html
    depends_on:
      - mysql
    networks:
      back-tier:        
  mysql:
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: topsecret
      MYSQL_DATABASE: sitedbname
      MYSQL_USER: sitedbuser
      MYSQL_PASSWORD: sitedbpassword
    volumes:
      - site_db:/var/lib/mysql
    networks:
      back-tier:
networks:
  back-tier:
volumes:
  site_db: 

Then just

docker-compose up
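
From the PHP code, the database is reachable using the Compose service name as the hostname. A minimal sketch (not from the original post, and assuming the mysqli extension has been enabled — the stock php image needs docker-php-ext-install mysqli):

<?php
// Sketch only: "mysql" is the Compose service name; credentials match the environment block above
$db = new mysqli('mysql', 'sitedbuser', 'sitedbpassword', 'sitedbname');
if ($db->connect_error) {
    die('Connection failed: ' . $db->connect_error);
}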

It takes a bit of the pain out of working on a legacy PHP site!

  

Unit Testing AngularJS Directives with templateUrl

Quick blog as this confused me for a while. When running a unit test for a directive which had the template in an html file, I got the following error:

Error: Unexpected request: GET assets/views/template.html

Odd, I thought – I haven’t been explicitly using $httpBackend. However, I discovered that when unit testing with angular-mocks, all HTTP requests are handled locally by the mock $httpBackend, and as the template is requested via HTTP, that request gets intercepted too.

The answer is to use karma-ng-html2js-preprocessor to generate an Angular module which puts your HTML files into a $templateCache to use in your tests. Then Angular won’t try to fetch the templates from the server.

First, install the dependency:

npm install karma-ng-html2js-preprocessor --save-dev

Then add the following to karma.conf.js (add to existing entries under files and preprocessors if they exist):


    files: [
        'build/assets/views/*.html'
    ],
    preprocessors: {
        'build/assets/views/*.html': 'ng-html2js'
    },
    ngHtml2JsPreprocessor: {
        stripPrefix: 'build/',
        moduleName: 'ngTemplates'
    }

Then in the unit test, add the line:


  beforeEach(module('ngTemplates'));

After doing this, you may encounter the following error:

Error: [$injector:modulerr] Failed to instantiate module ngTemplates due to:
Error: [$injector:nomod] Module ‘ngTemplates’ is not available! …

To get it available, you need to get the settings right – the module will only be created if HTML files exist in the specified directory. The stripPrefix setting lets you make sure the path to the view matches what your application expects, if the basePath in your karma.conf.js isn’t the same as the base of your application. Other settings are available too.
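
Once the module loads, a minimal directive spec looks something like this (the app module name, directive and expected content are hypothetical):

describe('myDirective', function () {
    var $compile, $rootScope;

    beforeEach(module('myApp'));        // the module under test
    beforeEach(module('ngTemplates'));  // templates now come from $templateCache

    beforeEach(inject(function (_$compile_, _$rootScope_) {
        $compile = _$compile_;
        $rootScope = _$rootScope_;
    }));

    it('renders the template', function () {
        var element = $compile('<my-directive></my-directive>')($rootScope);
        $rootScope.$digest();
        expect(element.html()).toContain('content from the template');
    });
});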

  

Quick backendless development with AngularJS

There are occasions when you want to run your AngularJS app without accessing a live REST API. There are various blog posts on the internet about doing this using $httpBackend, which is part of angular-mocks and very handy for unit testing.

For example:

var cakeData = [{ name: 'Hot Cross Bun'}];
$httpBackend.whenGET('/cakes').respond(function(method,url,data) { 
    return [200, cakeData, {}];
});

This is fine if you have small snippets of JSON to return. However, in real life data is usually bigger and uglier than that!

It would seem logical to put your mock data into JSON files, and return these when running without a live backend, keeping the code nice and succinct and readable. Unfortunately this doesn’t seem to be possible with $httpBackend.

I tried something like this:

$httpBackend.whenGET('/cakes').respond(function(method, url, data) {
    return $http.get("/mock/cakes.json");
  });

This doesn’t work, because $httpBackend doesn’t work with returned promises. The respond method needs static data.
Workarounds include falling back to a synchronous `XMLHttpRequest` to get the data (ugh) or using a preprocessor to insert the contents of the JSON file into the code when you build. Neither seems particularly nice.

Using Http Interceptors to serve mock data

I came across this blog post: Data mocking in angular E2E testing, which describes an alternate approach to serving mock data for testing. This approach works just as well for running your app without a backend.

Here’s the code for a simple interceptor:

angular.module('mock-backend',[])
    .factory('MockInterceptor', mockInterceptor)
    .config(function ($httpProvider) {
        $httpProvider.interceptors.push("MockInterceptor");
    });

function mockInterceptor() {
    return {
        'request': function (config) {
            if (config.url.indexOf('/cakes') >= 0) {
                config.url = 'mock/cakes.json';
            } 
            return config;
        }
    };
}

It’s fairly easy to use your build script to include this module conditionally when you want to run without a backend.

You can extend the interceptor logic; for example check the method, and switch POST to GET (you can’t POST to a file!). It’s not as sophisticated as a full mock backend, as data doesn’t change to reflect updates, but is a really quick way to view your app with a big chunk of data in it.
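
As a sketch, that POST-to-GET switch might look like this (same interceptor as above, with the extra method check):

function mockInterceptor() {
    return {
        'request': function (config) {
            if (config.url.indexOf('/cakes') >= 0) {
                if (config.method === 'POST') {
                    config.method = 'GET'; // can't POST to a static file
                }
                config.url = 'mock/cakes.json';
            }
            return config;
        }
    };
}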

  

Partial updates of JSON data in Postgres (using JDBI)

With Postgres 9.5 comes the jsonb_set function, for updating a single key within a JSON column. Hooray!

A sample bit of SQL might look like this:

update mytable 
set myfield = jsonb_set(myfield,'{key, subkey}', '"new string value"'::jsonb) 
where id = 5

I’ve put a text value in the example, but the new value can be an entire JSON structure.

I’ve posted previously on using JSON and Postgres with JDBI. To use the jsonb_set function, we need to reuse the BindJson annotation covered in that post. The jsonb_set function also takes an array parameter, defining the path to the key to be set. For this I wrote a new Bind annotation:

@BindingAnnotation(BindTextArray.JsonBinderFactory.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.PARAMETER})
public @interface BindTextArray {
    
    String value();

    public static class JsonBinderFactory implements BinderFactory {

        @Override
        public Binder build(Annotation annotation) {
            return new Binder<BindTextArray, String[]>() {
                
                @Override
                public void bind(SQLStatement sqlStatement, BindTextArray bind, String[] array) {
                    try {
                        String fieldName = bind.value();
                        Connection con = sqlStatement.getContext().getConnection();                        
                        Array sqlArray = con.createArrayOf("text", array);
                        sqlStatement.bindBySqlType(fieldName, sqlArray, Types.ARRAY);
                    } catch (SQLException ex) {
                        throw new IllegalStateException("Error Binding Array",ex);
                    }
                }
            };
        }
    }
}

(Code based on this post: http://codeonthecobb.com/2015/04/18/using-jdbi-with-postgres-arrays/).

Here’s the DAO for the SQL example above, using the new Bind annotation:

 
@SqlUpdate("update mytable set myfield = jsonb_set(myfield, :path,:json) where id = :id")
void updateMyTable(@Bind("id") int id, @BindTextArray("path") String[] path, @BindJson("json") String json);
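
Calling it to perform the update from the SQL example looks something like this (the dao variable being an attached instance of the DAO interface):

// equivalent to the SQL above: sets myfield->'key'->'subkey' to "new string value" where id = 5
dao.updateMyTable(5, new String[]{"key", "subkey"}, "\"new string value\"");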

As you can see, there are limitations to this functionality. We can’t update two different elements in the same JSON column in a single call, so if you want to do that, you still need to do it in code. However, the new syntax is handy if you want to update one section of your JSON document without loading the whole thing into your application.

  

Alternative Dropwizard token authentication

Out of the box, the Dropwizard auth module comes with OAuth and Basic Authentication. The documentation for implementing the related Authenticator and Authorizer is simple enough to follow, but there wasn’t much about how to implement an alternative mechanism for authentication, which I needed to do.

It turned out to be fairly straightforward to write a new AuthFilter which used a non-standard header for authorisation. I based the implementation on the existing OAuthCredentialAuthFilter. I just had to change the getCredentials method to read the header value I wanted to use.

Getting your priorities right

The authentication worked fine. Then I implemented authorisation as well. Remember:

Authentication is the mechanism for identifying the user (e.g. username + password)

Authorisation is the mechanism for determining what they are allowed to do (e.g. role membership)

I created an Authorizer implementation as per the user guide, annotated some resource methods with @RolesAllowed and even remembered to register the RolesAllowedDynamicFeature. However, on testing and on running the application, Dropwizard appeared to be checking the role membership BEFORE authenticating, resulting in a 403 access denied, as the roles had not been loaded.

Time for some head scratching, debugging, and trying to register things in a different order (no change), until I remembered noticing this line at the top of the AuthFilter I based mine on:

@Priority(Priorities.AUTHENTICATION)

Once I added that to my custom AuthFilter, everything started happening in the correct order.
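
To illustrate where the annotation lives, here is a simplified skeleton using plain JAX-RS types (my real filter extends Dropwizard's AuthFilter, based on OAuthCredentialAuthFilter; the header name here is made up):

import javax.annotation.Priority;
import javax.ws.rs.Priorities;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;

@Priority(Priorities.AUTHENTICATION) // without this, the filter defaults to USER priority and runs after the AUTHORIZATION role check
public class CustomHeaderAuthFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // Read credentials from the non-standard header instead of "Authorization"
        String token = requestContext.getHeaders().getFirst("X-Auth-Token");
        if (token == null) {
            throw new WebApplicationException(Response.Status.UNAUTHORIZED);
        }
        // ...authenticate the token and populate the SecurityContext here...
    }
}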

Today’s lesson: always get your priorities right!

  

What the F is f:table

Today I needed to knock up a quick interface to some database tables for inputting and editing data to be used in the demonstration of some data extraction software. Aha, I thought: I’ll try using Grails scaffolding. I haven’t used Grails scaffolding in earnest since taking a Grails training course a few years back. However, today I really did just need some simple CRUD functionality.

In Grails 3, you have two options for scaffolding – annotating the controller and having the views and controller methods auto-generated at runtime, or running a command to generate them statically so that you can modify them. I chose the latter, assuming that I’d want to do some customisation.

grails generate-all com.domain.Thing

You can then inspect the generated controller and views, and make any changes necessary. And this is where it all started to go wrong. The table containing the list of existing records didn’t look very nice. I’d removed the default ‘application.css’ which comes with Grails, and used Bootstrap to style the app. Without the default styles, the table has no spacing, and looks pretty awful.

No problem, I just need to add class=”table” to the table and I’ll get a standard bootstrap styled table. However, the generated index.gsp doesn’t contain a table tag. All I found was this:

<f:table collection="${thingList}"/>

The <f:table/> tag was a new one to me. Google suggests this comes from the grails fields plugin, but the documentation is very sparse: Grails 3 fields plugin.
The documentation doesn’t even mention the <f:table/> tag. Further searching turned up the tag reference at http://grails3-plugins.github.io/fields/snapshot/ref/Tags/table.html, which helped a bit, in that it showed how to configure which fields to show in the table, but didn’t help in changing styles or other formatting.

The main grails scaffolding documentation suggests running

grails install-templates

to get local copies of the templates used in scaffolding, but this doesn’t include anything to do with the fields plugin.

More detective work led to this Stackoverflow post, and onward to the fields plugin source code.

Finally… how to customise the f:table tag:

Place a file called _table.gsp in /grails-app/views/templates/_fields/

The default file contents are here: _table.gsp
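
The amendment itself is nothing clever; illustratively, it boils down to giving the opening tag the Bootstrap class (fragment only, the rest of the copied template stays as it came):

<table class="table">
    <%-- remainder of the default _table.gsp content unchanged --%>
</table>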

After adding this file to the project and amending to use the required styles, the <f:table/> tag can be used throughout the project with reckless abandon.

My table looks nice now, but I think this sums up why I struggle with the grails plugin ecosystem; it feels a bit half-finished to be using an undocumented tag as part of what should be a quick start process for new users.

  

Using JDBI with Postgres JSON data

I’ve been migrating some raw JDBC code over to JDBI, and joyfully stripping out lines of boilerplate code for preparing statements, opening record sets, sometimes remembering to close them, handling SQL exceptions which won’t ever occur anyway, and so on. Using the SQL Object API means the only code you have to write is the SQL and a ResultSetMapper to determine how to create your domain objects from the resultset. It really promotes adherence to the single responsibility principle and discourages you from mixing logic in with your database access code.
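
As a flavour of how little code is left, a mapper is typically just a few lines; a sketch along these lines (the Thing class and column names are invented):

import java.sql.ResultSet;
import java.sql.SQLException;
import org.skife.jdbi.v2.StatementContext;
import org.skife.jdbi.v2.tweak.ResultSetMapper;

public class ThingMapper implements ResultSetMapper<Thing> {

    @Override
    public Thing map(int index, ResultSet r, StatementContext ctx) throws SQLException {
        // "id" and "data" are hypothetical column names
        return new Thing(r.getInt("id"), r.getString("data"));
    }
}

Register it on the DAO interface (for example with @RegisterMapper) and the SQL Object methods return domain objects directly.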

The database in question has a number of fields containing JSON data. More specifically, they use the PostgreSQL jsonb data type. This has required a little more tinkering to get working.

Inserting jsonb data

Out of the box, JDBI provides two annotations for binding parameters. The @Bind annotation binds a single named argument, and @BindBean binds bean properties with matching names. However, to insert jsonb data, you need to first create a PGobject instance and bind that. To do this, I created a new binding annotation, following the guidance here: http://jdbi.org/sql_object_api_argument_binding/

The annotation code looks like this:

@BindingAnnotation(BindJson.JsonBinderFactory.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.PARAMETER})
public @interface BindJson {
    String value();

    public static class JsonBinderFactory implements BinderFactory {
        @Override
        public Binder build(Annotation annotation) {
            return new Binder<BindJson, String>() {                
                @Override
                public void bind(SQLStatement q, BindJson bind, String jsonString) {
                    try {
                        PGobject data = new PGobject();
                        data.setType("jsonb");
                        data.setValue(jsonString);
                        q.bind(bind.value(), data);                        
                    } catch (SQLException ex) {
                        throw new IllegalStateException("Error Binding JSON",ex);
                    }
                }
            };
        }
    }
}

To use it, annotate the json parameter with the new annotation:

@SqlUpdate("insert into my_table (id,data) VALUES (:id,:data)")
void insertJson(@Bind("id") int id, @BindJson("data") String jsonString);

And that’s it; it just works.

Querying json dynamically

I had a requirement where the parameter supplied to the query was the name of the JSON element to return. For example, consider the JSON below. I wanted to be able to parameterise a query to return any one of the key values.

{
   "element": {
      "key1": "value1",
      "key2": "value2",
      "key3": "value3"
   }
}

Using raw JDBC it was possible (although not very pretty) to concatenate a suitable sql statement and then execute it:

String sql = "select data->'element1'->'" + subKeyName + "' as value from mytable"
...

This isn’t possible when the SQL string is specified as a JDBI annotation. I found some useful Postgres json processing functions, including jsonb_extract_path_text which allows you to bind parameters normally:

@SqlQuery("select jsonb_extract_path_text(data,'element1',:subKeyName) as value from mytable")
List<String> getSubKey(@Bind("subKeyName") String subKeyName)

So far I haven’t come across any other issues using JDBI with a PostgreSQL JSON data store. I’m looking forward to trying out the new jsonb functionality in PostgreSQL 9.5 which supports writing partial updates to json fields, yippee!

  

My First Groovy DSL

I decided to have a go at automating the management of our Jenkins jobs. Basically most of our projects have similar builds, and I want to keep the job configs standard – they tend to diverge when we manage them via the UI. I’ve previously played with the Jenkins API using HttpBuilder to post XML config files. My DSL doesn’t do anything clever to generate the XML, but it makes the Jenkins config very readable:

jenkins {   
    url = "http://jenkins.url/"
    username = "jenkins"
    apiKey = "asdfadsgdsfshgdsfg"

    buildGroup {
        name = "Maven Builds"
        xml {
            feature = "release/xml/maven-test-config.xml"
            develop = "release/xml/maven-install-config.xml"
        }
        repos {
            project1 = "ssh://git.repo/project1.git"
            project2 = "ssh://git.repo/project2.git"
        }
    }

    buildGroup {
        name = "Grails Builds"
        xml {
            feature = "release/xml/grails-test-config.xml"
            develop = "release/xml/grails-install-config.xml"
        }
        repos {
            project4 = "ssh://git.repo/project4.git"
            project5 = "ssh://git.repo/project5.git"
        }
    }

}

I think this is so easy to read.

I’m not really sure if I’ve gone the right way about writing this, but here goes:

First, I have a main method which parses the file. It binds the name “jenkins” to an instance of my JenkinsDsl class, so the jenkins block at the top of the script becomes a call on that instance.

    Binding binding = new Binding();
    JenkinsDsl jenkinsDsl = new JenkinsDsl();
    binding.setVariable("jenkins",jenkinsDsl);
    GroovyShell shell = new GroovyShell(binding);
    Script script = shell.parse(file);
    script.run();

The JenkinsDsl class overrides the `call` method, and uses `methodMissing` to define how we handle the buildGroups. The really cool bit is I didn’t have to write anything special to get the url, username and apiKey from the jenkins config file – Groovy automatically calls `setProperty` on the delegate.

class JenkinsDsl {

    def call(Closure cl) {
        log.info "Processing Main Configuration"

        cl.setDelegate(this);
        cl.setResolveStrategy(Closure.DELEGATE_ONLY)
        cl.call();
    }

    def url
    def apiKey
    def username

    def jenkinsHttp

    def methodMissing(String methodName, args) {

        if (methodName.equals("buildGroup")) {

            Closure closure = args[0]
            closure.setDelegate(new JenkinsBuildGroup())
            closure.setResolveStrategy(Closure.DELEGATE_ONLY)
            closure()
            closure.delegate.updateJenkins(getJenkinsHttp())
        } else {
            log.error "Unsupported option: " + methodName
        }
    }

    def getJenkinsHttp() {
        if (!jenkinsHttp) {
            jenkinsHttp = new JenkinsHttp(url, username, apiKey)
        }
        jenkinsHttp
    }   
}

The JenkinsBuildGroup class is very similar – to save time I’ve used the ‘Expando’ class to collect the xml and repository details.

class JenkinsBuildGroup {

    def name
    def xml
    def repoList

    /**
     * Use methodMissing to load the xml and repos closures
     * @param methodName
     * @param args
     */
    def methodMissing(String methodName, args) {

        if (methodName.equals("xml")) {
            xml = new Expando()
            Closure closure = args[0]
            closure.setDelegate(xml)
            closure.setResolveStrategy(Closure.DELEGATE_ONLY)
            closure()
        } else if (methodName.equals("repos")) {
            repoList = new Expando()
            Closure closure = args[0]
            closure.setDelegate(repoList)
            closure.setResolveStrategy(Closure.DELEGATE_ONLY)
            closure()
        } else {
            log.error "Unsupported option: " + methodName
        }
    }

    def updateJenkins(def jenkinsHttp) {
        xml.getProperties().each { String jobType, String filePath ->

            def configXml = new File(filePath).text

            repoList.getProperties().each { String jobName, String repo ->
                String jobXml = configXml.replaceAll("REPOSITORY_URL", repo)
                jenkinsHttp.createOrUpdate(jobName + "-" + jobType, jobXml)
            }
        }
    }
}

To actually update Jenkins, I substitute the correct repository in the template XML file and post it to the Jenkins API. The JenkinsHttp class is just a wrapper around HTTPBuilder. It checks if a job exists so that it can correctly create or update, as sketched below.
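
For illustration, the shape of that wrapper is roughly the following (a sketch rather than the actual class, using Jenkins' standard config endpoints: job/<name>/config.xml to read or update, createItem?name=<name> to create):

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.XML

class JenkinsHttp {

    def http

    JenkinsHttp(String url, String username, String apiKey) {
        http = new HTTPBuilder(url)
        http.auth.basic(username, apiKey)
    }

    boolean exists(String jobName) {
        try {
            http.get(path: "job/${jobName}/config.xml")
            return true
        } catch (ignored) {
            return false // HTTPBuilder throws on a 404, which lands here
        }
    }

    void createOrUpdate(String jobName, String configXml) {
        if (exists(jobName)) {
            http.post(path: "job/${jobName}/config.xml", body: configXml, requestContentType: XML)
        } else {
            http.post(path: 'createItem', query: [name: jobName], body: configXml, requestContentType: XML)
        }
    }
}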

I’m not sure if this is the right way to go about writing a groovy DSL, but it works! And once I’ve finalised the template XML files, Jenkins is going to be a vision of consistency.

  

Groovy Grails Exchange 2013

This week I attended the 2013 Groovy Grails Exchange, organised by Skills Matter (with whom I’ve also been lucky enough to attend a couple of courses over the years).

Finally got time to jot down a few thoughts – I came back with an almost full notebook, but these are the highlights:

1. What a great conference! Skills Matter did an amazing job of hosting. The speakers were varied and interesting. It was fantastically motivating to be amongst all these people who are actually pushing the technology forwards. As I work in a very small dev team (out in the sticks) I found it really interesting to see what goes on in the rest of the world.

2. Must. Learn. Spock.

3. Must also get back on board with Geb. I dabbled a little a while ago, and now that we’re about to start a new Grails project, functional testing with Spock & Geb has to be central to it. Lots of food for thought in the DevQA talk by Alvaro Sanchez-Mariscal, applicable even though we don’t have a dedicated QA team for our internal dev.

4. Forces on code – what makes code “good” depends on the context. Common sense really, but well illustrated in the talk by David the Coder.

5. Do I need to learn a JavaScript framework? I enjoyed the very persuasive talk on developing SPA applications by Alvaro Sanchez-Mariscal. Separating the front end and back end into independent apps makes a lot of sense, especially if you have dedicated UI developers. I’m not sure we have the resources to move away from GSPs just yet. But I’m certainly going to have a browse around TodoMVC to get an idea of the options.

6. Jeff Brown’s live coding demo of Metaprogramming With The Groovy Runtime was a great refresher – nothing in particular that hadn’t been covered on the Groovy course I attended last year, but a reminder that I am pretty guilty of just writing Java-like code inside .groovy files and not taking full advantage of Groovy’s awesomeness.

7. The “Open Source and You” session by Peter Ledbrook made me think a bit more deeply about Open Source software – what to expect from it, the costs involved, how to manage a successful open source project. I’m definitely motivated to get more involved.

8. Gradle for deployment? @danveloper’s talk on Groovy for Sysadmins gave me lots to think about. I doubt I’ll ever end up hacking the kernel, but I like the software centric approach to deploying and maintaining servers.

9. Must pay more attention to release notes and road maps. New and upcoming versions of Groovy and Grails have some great new features that I’m looking forward to using. The change notes for Groovy 2.2 and Grails 2.3 are definitely worth a look. Also looking forward to the plugin and build system changes in Grails 3.0, due some time in 2014.

10. And finally, how come all these Open Source aficionados use Macs?! I’ve not got an iAnything yet and don’t plan to. Linux rocks :)

Anyway, in summary: well worth going for anyone working with Groovy or Grails. Podcasts of all the talks are available on the Skills Matter website, but I think attending in person is a fantastic opportunity to absorb knowledge from an enthusiastic and knowledgeable crowd, and worth every penny. Better get my early bird ticket for next year…

  

Updated JIRA sub-task sorting plugin

Following some user feedback, I’ve made some changes to the JIRA plugin I wrote for dragging sub-tasks, first mentioned in this post: Jira web-resource plugin to drag and drop subtasks.

If you’ve used it, you may have found it annoying that it was actually too easy to drag subtasks, causing page refreshes when they weren’t wanted! Copying and pasting from the subtask list was impossible. You could also drag issues in the Issue Navigator, although it didn’t achieve anything.

Just a few teething problems then.

First, I put a delay in the jQuery sortable options. I also stopped the page from submitting if the list order had not been changed. This was a definite improvement, but users still wanted to copy text from the list, which wasn’t possible. So now I’ve limited the “drag handle” to the cell in the table where the up and down arrows appear for sorting. You can only drag if you click in that area; anywhere else won’t work. It’s still not ideal, as it’s not immediately obvious how to drag. Better ideas welcomed!
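
For the curious, the two jQuery UI sortable options doing the work here are delay and handle; roughly (selector and class names are invented, not the plugin's actual ones):

jQuery('#subtask-list').sortable({
    delay: 150,             // the mouse must be held briefly before a drag starts
    handle: '.drag-handle', // only the arrow cell starts a drag, so text elsewhere can be selected
    update: function (event, ui) {
        // submit the new order only if it actually changed
    }
});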

You can get the updated source from github: https://github.com/anorakgirl/subtask-dragger/releases/tag/v0.6.
Or if you don’t feel like packaging it yourself, there’s a jar for download here: subtask-dragger-0.6.jar. USE AT YOUR OWN RISK!