Use Docker Compose to run a PHP site without contaminating your system

This must be the quickest way to get the LAMP stack up and running locally.
Nothing is installed on the host, so you can run different versions of PHP/MySQL without conflicts.
The database gets created on first startup and is persisted in a named volume.
The PHP files live in a mapped volume, so you can edit them without rebuilding the container.

The docker-compose.yml file:

version: "2"
services:
  site:
    image: php:5.6.27-apache   
    volumes:
      - ./site:/var/www/html
    depends_on:
      - mysql
    networks:
      back-tier:        
  mysql:
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: topsecret
      MYSQL_DATABASE: sitedbname
      MYSQL_USER: sitedbuser
      MYSQL_PASSWORD: sitedbpassword
    volumes:
      - site_db:/var/lib/mysql
    networks:
      back-tier:
networks:
  back-tier:
volumes:
  site_db: 
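
Inside the PHP code, the database is reachable at the hostname mysql (the compose service name), using the credentials from the environment section. A minimal connection check might look like the sketch below; note that the stock php apache image doesn't ship with the mysqli extension enabled, so you may need a small Dockerfile that runs docker-php-ext-install mysqli.

<?php
// site/dbtest.php - the hostname is the compose service name,
// and the credentials match the environment values in docker-compose.yml
$db = new mysqli('mysql', 'sitedbuser', 'sitedbpassword', 'sitedbname');
if ($db->connect_error) {
    die('Connection failed: ' . $db->connect_error);
}
echo 'Connected: ' . $db->host_info;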

Then just

docker-compose up

It takes a bit of the pain out of working on a legacy PHP site!

  

Unit Testing AngularJS Directives with templateUrl

Quick blog as this confused me for a while. When running a unit test for a directive which had the template in an html file, I got the following error:

Error: Unexpected request: GET assets/views/template.html

Odd, I thought; I hadn’t been explicitly using $httpBackend. However, I discovered that when unit testing, all HTTP requests are handled locally by the mock $httpBackend, and as the template is requested via HTTP, it too is intercepted.

The answer is to use karma-ng-html2js-preprocessor to generate an Angular module which puts your HTML files into a $templateCache to use in your tests. Then Angular won’t try to fetch the templates from the server.

First, install the dependency:

npm install karma-ng-html2js-preprocessor --save-dev

Then add the following to karma.conf.js (merging with any existing entries under files and preprocessors):


    files: [
        'build/assets/views/*.html'
    ],
    preprocessors: {
        'build/assets/views/*.html': 'ng-html2js'
    },
    ngHtml2JsPreprocessor: {
        stripPrefix: 'build/',
        moduleName: 'ngTemplates'
    }

Then in the unit test, add the line:


  beforeEach(module('ngTemplates'));

After doing this, you may encounter the following error:

Error: [$injector:modulerr] Failed to instantiate module ngTemplates due to:
Error: [$injector:nomod] Module 'ngTemplates' is not available! …

To make it available, you need to get the settings right – the module will only be created if HTML files actually exist in the specified directory. The stripPrefix setting lets you make the path to each view match what your application expects, if the basePath in your karma.conf.js isn’t the same as the base of your application. Other settings are available too.
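
For reference, here’s a minimal sketch of a test that relies on the cached template; the myApp module, my-directive and the expected content are placeholders for your own code:

beforeEach(module('myApp'));
beforeEach(module('ngTemplates'));

it('compiles a directive with a templateUrl', inject(function ($compile, $rootScope) {
    var scope = $rootScope.$new();
    // The template comes from $templateCache, so no HTTP request is made
    var element = $compile('<my-directive></my-directive>')(scope);
    scope.$digest();
    expect(element.html()).toContain('expected content');
}));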

  

Quick backendless development with AngularJS

There are occasions when you want to run your AngularJS app without accessing a live REST API. There are various blog posts on the internet about doing this using $httpBackend, which is part of angular-mocks and very handy for unit testing.

For example:

var cakeData = [{ name: 'Hot Cross Bun'}];
$httpBackend.whenGET('/cakes').respond(function(method,url,data) { 
    return [200, cakeData, {}];
});

This is fine if you have small snippets of JSON to return. However, in real life data is usually bigger and uglier than that!

It would seem logical to put your mock data into JSON files and return these when running without a live backend, keeping the code nice, succinct and readable. Unfortunately this doesn’t seem to be possible with the $httpBackend approach.

I tried something like this:

$httpBackend.whenGET('/cakes').respond(function(method, url, data) {
    return $http.get("/mock/cakes.json");
});

This doesn’t work, because $httpBackend can’t handle returned promises: the respond method needs static data.
Workarounds include falling back to a synchronous XMLHttpRequest to get the data (ugh), or using a preprocessor to insert the contents of the JSON file into the code at build time. Neither seems particularly nice.

Using HTTP interceptors to serve mock data

I came across this blog post: Data mocking in angular E2E testing, which describes an alternative approach to serving mock data for testing. This approach works just as well for running your app without a backend.

Here’s the code for a simple interceptor:

angular.module('mock-backend',[])
    .factory('MockInterceptor', mockInterceptor)
    .config(function ($httpProvider) {
        $httpProvider.interceptors.push("MockInterceptor");
    });

function mockInterceptor() {
    return {
        'request': function (config) {
            if (config.url.indexOf('/cakes') >= 0) {
                config.url = 'mock/cakes.json';
            } 
            return config;
        }
    };
}

It’s fairly easy to use your build script to include this module conditionally when you want to run without a backend.

You can extend the interceptor logic; for example, check the method and switch POST to GET (you can’t POST to a file!). It’s not as sophisticated as a full mock backend, as the data doesn’t change to reflect updates, but it’s a really quick way to view your app with a big chunk of data in it.
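
For example, here’s a sketch of that method check, replacing the request function above (the URL matching is as crude as before):

'request': function (config) {
    if (config.url.indexOf('/cakes') >= 0) {
        config.url = 'mock/cakes.json';
        if (config.method === 'POST') {
            config.method = 'GET'; // you can't POST to a file!
        }
    }
    return config;
}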

  

Partial updates of JSON data in Postgres (using JDBI)

With Postgres 9.5 comes the jsonb_set function, for updating a single key within a JSON column. Hooray!

A sample bit of SQL might look like this:

update mytable 
set myfield = jsonb_set(myfield,'{key, subkey}', '"new string value"'::jsonb) 
where id = 5

I’ve put a text value in the example, but the new value can be an entire JSON structure.
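
To illustrate, if myfield started out as the document below, only key.subkey changes and the rest of the document is left intact:

{"key": {"subkey": "old value", "other": true}}

becomes

{"key": {"subkey": "new string value", "other": true}}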

I’ve posted previously on using JSON and Postgres with JDBI. To use the jsonb_set function, we need to reuse the BindJson annotation covered in that post. The jsonb_set function also takes an array parameter, defining the path to the key to be set. For this I wrote a new Bind annotation:

@BindingAnnotation(BindTextArray.JsonBinderFactory.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.PARAMETER})
public @interface BindTextArray {

    String value();

    public static class JsonBinderFactory implements BinderFactory {

        @Override
        public Binder build(Annotation annotation) {
            return new Binder<BindTextArray, String[]>() {

                @Override
                public void bind(SQLStatement sqlStatement, BindTextArray bind, String[] array) {
                    try {
                        String fieldName = bind.value();
                        // Create a SQL text[] from the Java array and bind it
                        Connection con = sqlStatement.getContext().getConnection();
                        Array sqlArray = con.createArrayOf("text", array);
                        sqlStatement.bindBySqlType(fieldName, sqlArray, Types.ARRAY);
                    } catch (SQLException ex) {
                        throw new IllegalStateException("Error Binding Array", ex);
                    }
                }
            };
        }
    }
}

(Code based on this post: http://codeonthecobb.com/2015/04/18/using-jdbi-with-postgres-arrays/).

Here’s the DAO for the SQL example above, using the new Bind annotation:

 
@SqlUpdate("update mytable set myfield = jsonb_set(myfield, :path, :json) where id = :id")
void updateMyTable(@Bind("id") int id, @BindTextArray("path") String[] path, @BindJson("json") String json);
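
A call reproducing the SQL example at the top of the post would then look something like this (dao being your JDBI-instantiated DAO):

String[] path = {"key", "subkey"};
dao.updateMyTable(5, path, "\"new string value\""); // the new value must itself be valid JSON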

As you can see, there are limitations to this functionality. We can’t update two different elements of the same JSON column in a single call, so if you want to do that, you still need to do it in code. However, the new syntax is handy if you want to update one section of your JSON document without loading the whole thing into your code base.

  

Alternative Dropwizard token authentication

Out of the box, the Dropwizard auth module comes with OAuth and Basic Authentication. The documentation for implementing the related Authenticator and Authorizer is simple enough to follow, but there wasn’t much about how to implement an alternative mechanism for authentication, which I needed to do.

It turned out to be fairly straightforward to write a new AuthFilter which used a non-standard header for authorisation. I based the implementation on the existing OAuthCredentialAuthFilter, and just had to change the getCredentials handling to read the header value I wanted to use.
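
For illustration, here’s a stripped-down sketch of such a filter (assuming Dropwizard 1.x, where the AuthFilter base class provides an authenticate helper; the X-Auth-Token header name is my own invention, and I’ve omitted the Builder boilerplate the bundled filters provide):

import java.io.IOException;
import java.security.Principal;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.container.ContainerRequestContext;
import io.dropwizard.auth.AuthFilter;

public class HeaderTokenAuthFilter<P extends Principal> extends AuthFilter<String, P> {

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // Read the token from a non-standard header rather than Authorization
        final String token = requestContext.getHeaderString("X-Auth-Token");

        // authenticate() runs the Authenticator and sets up the SecurityContext on success
        if (!authenticate(requestContext, token, "TOKEN")) {
            throw new WebApplicationException(unauthorizedHandler.buildResponse(prefix, realm));
        }
    }
}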

Getting your priorities right

The authentication worked fine. Then I implemented authorisation as well. Remember:

Authentication is the mechanism for identifying the user (e.g. username + password)

Authorisation is the mechanism for determining what they are allowed to do (e.g. role membership)

I created an Authorizer implementation as per the user guide, annotated some resource methods with @RolesAllowed, and even remembered to register the RolesAllowedDynamicFeature. However, on testing and on running the application, Dropwizard appeared to be checking role membership BEFORE authenticating, resulting in a 403 access denied, as the roles had not been loaded.

Time for some head scratching, debugging, and trying to register things in a different order (no change), until I remembered noticing this line at the top of the AuthFilter I based mine on:

@Priority(Priorities.AUTHENTICATION)

Once I added that to my custom AuthFilter, everything started happening in the correct order.

Today’s lesson: always get your priorities right!

  

What the F is f:table

Today I needed to knock up a quick interface to some database tables for inputting and editing data to be used in the demonstration of some data extraction software. Aha, I thought; I’ll try using Grails scaffolding. I haven’t used Grails scaffolding in earnest since taking a Grails training course a few years back. However, today I really did just need some simple CRUD functionality.

In Grails 3, you have two options for scaffolding – annotating the controller and having the views and controller methods auto-generated at runtime, or running a command to generate them statically so that you can modify them. I chose the latter, assuming that I’d want to do some customisation.

grails generate-all com.domain.Thing

You can then inspect the generated controller and views, and make any changes necessary. And this is where it all started to go wrong. The table containing the list of existing records didn’t look very nice. I’d removed the default application.css which comes with Grails, and used bootstrap to style the app. Without the default styles, the table has no spacing and looks pretty awful.

No problem, I thought; I just need to add class="table" to the table and I’ll get a standard bootstrap-styled table. However, the generated index.gsp doesn’t contain a table tag. All I found was this:

<f:table collection="${thingList}"/>

The <f:table/> tag was a new one to me. Google suggests it comes from the Grails fields plugin, but the documentation is very sparse: Grails 3 fields plugin. The documentation doesn’t even mention <f:table/>. Eventually I found the tag reference at http://grails3-plugins.github.io/fields/snapshot/ref/Tags/table.html, which helped a bit, in that it showed how to configure which fields to show in the table, but it didn’t help with changing styles or other formatting.

The main Grails scaffolding documentation suggests running

grails install-templates

to get local copies of the templates used in scaffolding, but this doesn’t include anything to do with the fields plugin.

More detective work led to this Stackoverflow post, and onward to the fields plugin source code.

Finally… how to customise the f:table tag:

Place a file called _table.gsp in /grails-app/views/templates/_fields/

The default file contents are here: _table.gsp

After adding this file to the project and amending it to use the required styles, the <f:table/> tag can be used throughout the project with reckless abandon.
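
In my case, amending meant little more than swapping the opening table tag in the copied template for a bootstrap-styled one (excerpt; the rest of the default template is unchanged):

<%-- grails-app/views/templates/_fields/_table.gsp --%>
<table class="table">
    ...
</table>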

My table looks nice now, but I think this sums up why I struggle with the Grails plugin ecosystem; relying on an undocumented tag in what should be a quick-start process for new users feels a bit half-finished.

  

Using JDBI with Postgres JSON data

I’ve been migrating some raw JDBC code over to JDBI, and joyfully stripping out lines of boilerplate for preparing statements, opening result sets, sometimes remembering to close them, handling SQL exceptions which won’t ever occur anyway, and so on. Using the SQL Object API, the only code you have to write is the SQL and a ResultSetMapper to determine how to create your domain objects from the result set. It really promotes adherence to the single responsibility principle and discourages you from mixing logic in with your database access code.
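
For the uninitiated, a mapper is only a few lines. Here’s a sketch for a hypothetical Thing domain object with an id and a name:

import java.sql.ResultSet;
import java.sql.SQLException;
import org.skife.jdbi.v2.StatementContext;
import org.skife.jdbi.v2.tweak.ResultSetMapper;

public class ThingMapper implements ResultSetMapper<Thing> {
    @Override
    public Thing map(int index, ResultSet r, StatementContext ctx) throws SQLException {
        // One domain object per row - no business logic in here
        return new Thing(r.getInt("id"), r.getString("name"));
    }
}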

The database in question has a number of fields containing JSON data. More specifically, they use the PostgreSQL jsonb data type. This has required a little more tinkering to get working.

Inserting jsonb data

Out of the box, JDBI provides two annotations for binding parameters. The @Bind annotation binds a single named argument, and @BindBean binds bean properties with matching names. However, to insert jsonb data, you need to first create a PGobject instance and bind that. To do this, I created a new binding annotation, following the guidance here: http://jdbi.org/sql_object_api_argument_binding/

The annotation code looks like this:

@BindingAnnotation(BindJson.JsonBinderFactory.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.PARAMETER})
public @interface BindJson {
    String value();

    public static class JsonBinderFactory implements BinderFactory {
        @Override
        public Binder build(Annotation annotation) {
            return new Binder<BindJson, String>() {                
                @Override
                public void bind(SQLStatement q, BindJson bind, String jsonString) {
                    try {
                        PGobject data = new PGobject();
                        data.setType("jsonb");
                        data.setValue(jsonString);
                        q.bind(bind.value(), data);                        
                    } catch (SQLException ex) {
                        throw new IllegalStateException("Error Binding JSON",ex);
                    }
                }
            };
        }
    }
}

To use it, annotate the json parameter with the new annotation:

@SqlUpdate("insert into my_table (id,data) VALUES (:id,:data)")
void insertJson(@Bind("id") int id, @BindJson("data") String jsonString);

And that’s it; it just works.
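
Usage is then just a matter of passing the JSON as a string, for example:

dao.insertJson(1, "{\"element\":{\"key1\":\"value1\"}}");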

Querying json dynamically

I had a requirement where the parameter supplied to the query was the name of the JSON element to return. For example, consider the JSON below. I wanted to be able to parameterise a query to return any one of the key values.

{
   "element": {
      "key1": "value1",
      "key2": "value2",
      "key3": "value3"
   }
}

Using raw JDBC it was possible (although not very pretty) to concatenate a suitable SQL statement and then execute it:

String sql = "select data->'element'->'" + subKeyName + "' as value from mytable";
...

This isn’t possible when the SQL string is specified in a JDBI annotation. However, I found some useful Postgres JSON processing functions, including jsonb_extract_path_text, which allows you to bind parameters normally:

@SqlQuery("select jsonb_extract_path_text(data, 'element', :subKeyName) as value from mytable")
List<String> getSubKey(@Bind("subKeyName") String subKeyName);
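
With the example document above, getSubKey("key2") would return "value2" for each matching row.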

So far I haven’t come across any other issues using JDBI with a PostgreSQL JSON data store. I’m looking forward to trying out the new jsonb functionality in PostgreSQL 9.5 which supports writing partial updates to json fields, yippee!

  

Enabling hibernate in Ubuntu 14.04

The new laptop (Dell Inspiron 5000) has one or two teething issues, which is disappointing, as I had installed Ubuntu on the XPS I had previously (at work) with no problems at all, so I thought this laptop would be a safe bet.

At present I can’t use suspend. Although the laptop sounds like it is resuming, the screen remains blank and I have to do a hard reset. It’s a pain starting everything up again whenever I shut down, so I thought I’d try hibernate instead.

For some reason this option is not available on the system tray menu. To hibernate from the terminal, the command is:

sudo pm-hibernate

My laptop resumes fine after this. To add the option to the system tray menu, I edited the file

/var/lib/polkit-1/localauthority/10-vendor.d/com.ubuntu.desktop.pkla

I changed the setting ResultActive to yes in the two places below:

...
[Disable hibernate by default in upower]
Identity=unix-user:*
Action=org.freedesktop.upower.hibernate
ResultActive=yes

[Disable hibernate by default in logind]
Identity=unix-user:*
Action=org.freedesktop.login1.hibernate
ResultActive=yes
...

I also decided it would be useful for the laptop to hibernate when I shut the lid. To do this I edited the file

/etc/systemd/logind.conf

I replaced the line below:

#HandleLidSwitch=suspend

with

HandleLidSwitch=hibernate

A reboot is required after these changes. Now it will at least hibernate. Thanks to this blog post (and the comments) for the guidance: http://ubuntuhandbook.org/index.php/2014/04/enable-hibernate-ubuntu-14-04/

Next I need to get the audio working…

  

Super quick Sonar/Postgres setup with docker

Maybe I am easily impressed, but wow! Here is how I used docker to set up Sonar on my (Ubuntu) laptop super quickly.

Postgres

First, set up a postgres container. The command below creates and starts a container called sonar-postgres, using the official docker postgres image.

docker run --name sonar-postgres -e POSTGRES_USER=sonar -e POSTGRES_PASSWORD=secret -d postgres

The container is created containing a sonar database and user with the supplied password. There are various options to the run command, for example to restart the container automatically. See docker run for more details. -d means detach from the container and run it in the background.

The command above does not publish any ports to the host, so we can’t psql to localhost port 5432 to see the database. However, the postgres container does “expose” port 5432 to linked containers. To have a quick peek inside, the following command creates a temporary container which executes the psql command. After entering the password you chose earlier, you are logged into the sonar database. When you exit, the container is gone.

docker run -it --link sonar-postgres:postgres --rm postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U sonar'

An alternative would have been to modify the original run command to include -p 5432:5432, which would publish the port to the host and allow direct access via psql. This was handy for me when getting started, but obviously not ideal in an environment where you might have several postgres containers.

Sonar

The default command to start Sonar uses the built-in H2 database:

docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube:5.1

It’s not good practice to rely on the embedded database for continued use. To set up a container which uses the postgres container created above, use the following command:

docker run -d --name sonarqube --link sonar-postgres:pgsonar -p 9000:9000 -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=secret -e SONARQUBE_JDBC_URL=jdbc:postgresql://pgsonar:5432/sonar sonarqube:5.1

In the above command, the --link option links the postgres container with a hostname of "pgsonar", which is used again in the SONARQUBE_JDBC_URL setting to tell Sonar where to find the postgres database.

Tip! If the container won’t start, run the command without the -d switch, so it remains in the foreground and you can see the log output.

Once the container is started, navigate to http://localhost:9000/ and you should see the familiar sonarqube dashboard. This process felt quicker than the usual sonar install done properly, and I can definitely see myself using docker more to supercharge my dev environment!
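
Assuming a Maven project, the standard analysis goal can then be pointed at the new server (adjust the URL and project settings to taste):

mvn sonar:sonar -Dsonar.host.url=http://localhost:9000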

  

ACPI PCC probe failed

I had one problem getting Ubuntu 14.04 to install on a Dell Inspiron 5000 series. Booting from the USB was no problem: I hit F12 at startup to bring up the boot menu, and chose to boot from the USB drive. I didn’t have to disable Secure Boot or UEFI mode.

However, after selecting “Try Ubuntu”, all went blank. When I tried a second time in Legacy boot mode, I got a more helpful error message:

ACPI PCC probe failed

A quick google suggested this was a problem introduced in Ubuntu 14.04.3. Some scary-sounding fixes were suggested, but I took the easy option of downloading the 14.04.1 iso instead, which installed fine.