Executing JavaScript on the JVM with Nashorn

We’ve been using Nashorn for a while to execute some very simple JavaScript expressions. The latest challenge was to run some JavaScript from a Node.js project, which in turn had dependencies on a third-party package. There were a few gotchas which I thought I’d share (disclaimer: I’m not a JavaScript developer, so this might all be obvious stuff to some).

The original JavaScript was in a plain old text file, with functions at the top level:

function testThing(thing) {
   return thing === 'tiger';
}

This is how I was executing it:

ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
String js = IOUtils.toString(this.getClass().getClassLoader().getResource("my-script.js"), "UTF-8");            
engine.eval(js);
Invocable invocable = (Invocable) engine;
Object result = invocable.invokeFunction("testThing", theThing);

Running JavaScript generated from a Node.js project required a bit of tinkering:

1. In Java 8, Nashorn will only load ES5-compatible JavaScript

I was running browserify to create a single JavaScript file from the Node.js project. Trying to load the generated file instead of the plain JavaScript failed miserably:

javax.script.ScriptException: :1079:0 Expected : but found }

To get Nashorn to evaluate the JS file, I had to transpile to ES5 using Babel.

I decided I needed to run browserify in standalone mode, so that my exported functions were attached to a global variable. The scripts section in package.json had the following:

"build": "browserify src/main.js -r --standalone MyLib -o dist/build.js",
"transpile": "babel dist/build.js --out-file dist/build.es5.js"
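The transpile step assumes Babel has a preset configured – with no preset, Babel 6 passes the code through unchanged. With Babel 6 era tooling that would be the es2015 preset in a .babelrc (your exact setup may differ):

```json
{
  "presets": ["es2015"]
}
```

The preset itself comes from npm install --save-dev babel-cli babel-preset-es2015.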

This was a small step forward but Nashorn was still not able to load the script successfully…

2. Nashorn cannot set property of undefined!

Loading the transpiled file in Nashorn threw the following:

javax.script.ScriptException: TypeError: Cannot set property "MyLib" of undefined in <eval> at line number 19

Looking at the beginning of the transpiled file, you can see it is trying to determine a suitable place to attach the global variable:

(function (f) {
  if ((typeof exports === "undefined" ? "undefined" : _typeof(exports)) === "object" && typeof module !== "undefined") {
    module.exports = f();
  } else if (typeof define === "function" && define.amd) {
    define([], f);
  } else {
    var g;if (typeof window !== "undefined") {
      g = window;
    } else if (typeof global !== "undefined") {      
      g = global;
    } else if (typeof self !== "undefined") {
      g = self;
    } else {
      g = this;
    }
    g.MyLib = f();
  }
})

A little bit of detective work using print() showed that it was falling through to the final “else” as it couldn’t find anything else to attach to. Adding this line to the Java, before evaluating the file, magicked away this problem.

engine.eval("var global = this;");

(It doesn’t quite make sense to me why this works – without this line, the script seems to treat “this” as undefined.)

3. Executing methods of objects is different from executing functions

At this point, the file was loading without an error, but trying to execute the function using the original Java code with the new JavaScript file threw a NoSuchMethodException:

java.lang.NoSuchMethodException: No such function testThing

This makes sense, as “testThing” is now a method of the MyLib global variable. However, none of the permutations I tried in order to access the function worked. I tried all sorts, e.g.:

Object result = invocable.invokeFunction("global.MyLib.testThing", theThing);

The key issue here is that testThing isn’t a function now; it is a method on an object, so we have to use invokeMethod:

Object result = invocable.invokeMethod(engine.eval("global.MyLib"), "testThing", theThing);

And that worked :)
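Putting the three gotchas together, here’s a minimal self-contained sketch – the UMD wrapper is inlined as a stand-in for the real transpiled bundle, so the only assumptions are the MyLib/testThing names from above:

```java
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class NashornUmdExample {

    public static Object runTestThing(String thing) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

        // Gotcha 2: give the UMD wrapper a "global" to attach to, before loading the bundle
        engine.eval("var global = this;");

        // Stand-in for the transpiled bundle (dist/build.es5.js) - same UMD shape as above
        engine.eval(
            "(function (f) {"
          + "  var g;"
          + "  if (typeof global !== 'undefined') { g = global; } else { g = this; }"
          + "  g.MyLib = f();"
          + "})(function () {"
          + "  return { testThing: function (thing) { return thing === 'tiger'; } };"
          + "});");

        // Gotcha 3: testThing is a method on global.MyLib, so use invokeMethod
        Invocable invocable = (Invocable) engine;
        return invocable.invokeMethod(engine.eval("global.MyLib"), "testThing", thing);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTestThing("tiger")); // prints: true
    }
}
```

(Note this needs a JDK that still bundles Nashorn, i.e. Java 8 to 14.)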

I’m not sure if any of this is the correct way to approach executing Node.js-based JavaScript on the JVM, but these were the three gotchas that took me a while to work out.

  

Refactoring SQLiteOpenHelper so it isn’t a mile long

If, like me, you are new to Android Development you may find that your SQLiteOpenHelper implementation can quickly get long and disorganised, as data access methods for tables are added organically during development. It gets hard to see which methods are related, and starts to feel cumbersome. You can only have one SQLiteOpenHelper per database, but it isn’t difficult to refactor the single SQLiteOpenHelper into multiple classes with distinct responsibilities.

To keep things really tidy, I defined an interface for my new classes:

public interface MySQLiteDao {
    void onCreate(SQLiteDatabase db);
    void onUpdate(SQLiteDatabase db, int oldVersion, int newVersion);
}

The individual DAO classes are constructed with an instance of SQLiteOpenHelper, which is used in the data access methods. onCreate and onUpdate contain the create/update SQL specific to this table.

public class ThingDao implements MySQLiteDao {

    private SQLiteOpenHelper db;

    public ThingDao(SQLiteOpenHelper db) {
        this.db = db;
    }

    @Override
    public void onCreate(SQLiteDatabase sqlDb) {
        sqlDb.execSQL("CREATE TABLE ...)");
    }

    @Override
    public void onUpdate(SQLiteDatabase sqlDb, int oldVersion, int newVersion) {
        if (oldVersion < 2) {
            sqlDb.execSQL("...");
        }
    }

    public List<Thing> getThings() {
        List<Thing> things = new ArrayList<>();
        Cursor c = db.getReadableDatabase().rawQuery("SELECT ... ", new String[]{});

        for (int i = 0; i < c.getCount(); i++) {
           c.moveToPosition(i);
           things.add(new Thing(...));
        }
        
        c.close();
        return things;
    }

    public Thing getThing(int id) {
       ...
    }

    public void deleteThing(Thing thing) {
       ...
    }
    ...
}

Then the SQLiteOpenHelper implementation itself is much easier to read

public class MySqlHelper extends SQLiteOpenHelper {

    private static MySqlHelper instance;

    private static final String DATABASE_NAME = "my.db";
    private static final int SCHEMA = 1;


    private ThingDao thingDao;

    public static synchronized MySqlHelper getInstance(Context ctxt) {
        if (instance == null) {
            instance = new MySqlHelper(ctxt.getApplicationContext());
        }

        return (instance);
    }

    public MySqlHelper(Context context) {
        super(context, DATABASE_NAME, null, SCHEMA);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        getThingDao().onCreate(db);
        //other DAOs called here
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        getThingDao().onUpdate(db, oldVersion, newVersion);
        //other DAOs called here
    }

    public ThingDao getThingDao() {
        if (thingDao == null) {
            thingDao = new ThingDao(this);
        }
        return thingDao;
    }
}

You can group the data access methods into DAO classes in whatever way makes sense for the application – it doesn’t have to be one per table. In your application code, you can easily get an instance of the DAO you want.

ThingDao thingDao = MySqlHelper.getInstance(this).getThingDao();

This means you are working with smaller classes containing related methods – much easier to maintain than a sprawling mess containing every DAO method you dashed out in order to get the thing working…

  

How to use a JDBI ResultSetMapper with a non-default constructor

If your ResultSetMapper has a default constructor, it is easy to specify a specific Mapper for a DAO method:

@UseMapper(PersonMapper.class)
@SqlQuery("select id, data from person where id = :id")
Person getPerson(@Bind("id") String id);

However, sometimes the ResultSetMapper may require a non-default constructor. In this case you can register a mapper with the DBI:

dbi.registerMapper(new PersonMapper(extraParameters));

JDBI will use the Mapper globally wherever the return type of the DAO method matches that of the ResultSetMapper.

If you want to use different ResultSetMapper implementations for DAO methods which return the same data type, you can register a ResultSetMapperFactory.

dbi.registerMapper(new PersonMapperFactory(extraParameters));

The ResultSetMapperFactory can return different ResultSetMapper implementations based on the StatementContext. This gives you access to things like the name of the method being called and the raw SQL.

public final class PersonMapperFactory implements ResultSetMapperFactory {

    private EmployeeMapper employeeMapper;
    private PersonMapper personMapper;

    public PersonMapperFactory(DeptLookup deptLookup) { // DeptLookup stands in for whatever the mapper needs
        this.employeeMapper = new EmployeeMapper(deptLookup);
        this.personMapper = new PersonMapper();
    }

    @Override
    public boolean accepts(Class type, StatementContext ctx) {
        return type.equals(Person.class);
    }

    @Override
    public ResultSetMapper mapperFor(Class type, StatementContext ctx) {        
        if (ctx.getSqlObjectMethod().getName().contains("Employee")) {
            return employeeMapper; 
        } else {
            return personMapper;
        }
    }
    
}

(Note that these examples use JDBI 2, I haven’t had chance to update yet…)

There’s some more info on the scope of a registered mapper in the JDBI SQL Object Reference.

  

Setting input values in AngularJS 1.6 Unit tests

Having spent hours yesterday wondering why my unit test would not work, I can confirm that the correct way to enter data into an input field in an AngularJS 1.6 unit test is as follows:

var field = element.find('input');       
var wrappedField = angular.element(field);
wrappedField.val('Some text');
wrappedField.triggerHandler('change');

I can further confirm that this will not be enough if you have set a debounce value in the ngModelOptions for the field. The debounce value defines a delay between input changes and triggering a $digest cycle. This can improve performance, as it saves thrashing all the code linked to the two-way binding every time a key is pressed. However, for the purposes of unit testing it also means that the model will not be updated immediately after running the code above.

I found the answer in this stackoverflow post: How to test an AngularJS watch using debounce function with Jasmine.

Adding the following after setting the field value causes the model updates to be processed immediately:

$timeout.flush();

I’ve added a working example of this unit test to my angular-testing repository on github.

  

Using two github accounts

I followed the very handy instructions at https://code.tutsplus.com/tutorials/quick-tip-how-to-work-with-github-and-multiple-accounts–net-22574 to set up different keys for my two github accounts.

My .ssh/config file looked like this:

Host github-work
    HostName github.com
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/id_work

Host github.com
    HostName github.com
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/id_rsa

I then had no problem cloning from github using `github-work` as a host:

git clone git@github-work:work/work-project.git
Cloning into 'work-project'...
remote: Counting objects: 16, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 16 (delta 0), reused 16 (delta 0), pack-reused 0
Receiving objects: 100% (16/16), done.
Checking connectivity... done.

HOWEVER, pushing was another matter:

 git push -u origin master 
ERROR: Permission to work/work-project.git denied to mypersonalusername.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The push was not using the right ssh key – the access denied message refers to my personal github account username.

For debugging purposes, I ran the following:

ssh -v git@github-work

Amongst the output was:

debug1: Reading configuration data /home/me/.ssh/config
debug1: /home/me/.ssh/config line 1: Applying options for github-work
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Hostname has changed; re-reading configuration
debug1: Reading configuration data /home/me/.ssh/config
debug1: /home/me/.ssh/config line 8: Applying options for github.com

A bit of googling led to this stackoverflow post: why git push can not choose the right ssh key?

The answer is to remove the vanilla/default github.com entry from the .ssh/config file so it only contains the section for the non-standard host. This works!
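For reference, after removing that section my .ssh/config contained only the work host – repositories cloned from github.com fall back to the default ~/.ssh/id_rsa key without needing an explicit entry:

```
Host github-work
    HostName github.com
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/id_work
```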

  

Serving Kibana 4.6 (4.2+) from a subdirectory with nginx

This took me a little while to get right, but the final config is quite simple, so I thought it was worth sharing.

I wanted to serve kibana from a subdirectory, for example mydomain.com/kibana. There are known issues with doing this as of Kibana 4.2:

The problem is that even if you set up an nginx proxy configuration to forward requests from /kibana to the correct place, Kibana serves the app from /app/kibana and tries to load resources via their absolute url, e.g. /bundles/kibana.bundle.js.

The answer is a combination of kibana and nginx config (which isn’t very nice as it means your kibana config is not portable).

In kibana.yml, add the following:

server.basePath: "/kibana"

This means that when kibana generates the urls to load resources, it prefixes them with /kibana. On its own this just breaks kibana! It will try to load e.g. /kibana/bundles/kibana.bundle.js and return a 404. duh.

So, you need to strip out the prefix in your nginx config. Mine looks like this (kibana is running in a docker container on the same network as the nginx container):

location /kibana {
    proxy_set_header Host $host;
    rewrite /kibana/(.*)$ /$1 break;
    proxy_pass http://kibana:9292/;
}

Kibana is now available at mydomain.com/kibana – the resulting url will look like mydomain.com/kibana/app/kibana, but you get to keep the initial directory name, meaning it won’t interfere with other things you might be serving from the same host. It would be much neater if the server.basePath setting in kibana was all that was necessary, but it doesn’t look like the behaviour is going to change any time soon – see the discussion at #6665 server.basePath results in 404. (To be fair, changing the behaviour would break every existing install using the setting.)

  

Docker and Maven for integration testing

I recently implemented LDAP authentication in our Dropwizard application. How to integration test this?

Here’s what I used:

* This docker image: nickstenning/slapd, containing a basic configuration of the OpenLDAP server.
* The fabric8io docker-maven-plugin
* Some maven foo.

The docker image

I required LDAP configured with some known groups and users to use for testing. I put the following Dockerfile in src/test/docker/ldap, along with users.ldif which defined the users I wanted to load.

FROM nickstenning/slapd

# Our environment variables
ENV LDAP_ROOTPASS topsecret
ENV LDAP_ORGANISATION My Org
ENV LDAP_DOMAIN domain.com

# Users and groups to be loaded
COPY users.ldif        /users.ldif

# Run slapd to load ldif files
RUN apt-get update && apt-get install -y ldap-utils && rm -rf /var/lib/apt/lists/*
RUN (/etc/service/slapd/run &) && sleep 2 && \
  ldapadd -h localhost -p 389 -c -x -D cn=admin,dc=domain,dc=com -w topsecret -v -f /users.ldif && killall slapd

The parent config

In my parent pom, I put the shared configuration for the fabric8io plugin into pluginManagement. This includes any images I want to run for multiple submodules (I’m using postgres for other integration tests, in combination with Liquibase for setting up a test database), and the executions which apply to all submodules.

 <pluginManagement>
    <plugins> 
        <plugin>
            <groupId>io.fabric8</groupId>
            <artifactId>docker-maven-plugin</artifactId>
            <version>0.18.1</version>
            <configuration>                    
                <images>                       
                    <image>
                        <name>postgres:9.5.1</name>
                        <alias>pg</alias>                           
                        <run>                                    
                            <env>
                                <POSTGRES_PASSWORD>topsecret</POSTGRES_PASSWORD>
                                <POSTGRES_USER>myuser</POSTGRES_USER>
                            </env>
                            <ports>
                                <port>5432:5432</port>
                            </ports> 
                            <wait>
                                <tcp>
                                    <ports>
                                        <port>5432</port>
                                    </ports>
                                </tcp>
                            </wait>  
                        </run>
                    </image>
                </images>
            </configuration>
            <executions>
                <execution>
                    <id>build</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>build-nofork</goal>
                    </goals>
                </execution>
                <execution>
                    <id>run</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>start</goal>
                    </goals>
                </execution>
                <execution>
                    <id>stop</id>
                    <phase>post-integration-test</phase>
                    <goals>
                        <goal>stop</goal>
                    </goals>
                </execution>
            </executions>                    
        </plugin>
    </plugins>
</pluginManagement>

Remember that putting a plugin in pluginManagement in the parent does not mean it will be active in the child modules. It simply defines the default config which will be used should you add the plugin to the plugins section in the child pom. This approach means we can avoid spinning up docker containers for submodules which don’t need them.

Remember to use the maven-failsafe-plugin to run your integration tests. This ensures that the docker containers will be stopped even if your integration tests fail, so you don’t get containers hanging around disrupting your next test run.
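A sketch of a typical failsafe setup for this (the version number is indicative of the era): binding tests to the integration-test goal and build failure to the verify goal means the post-integration-test phase – where the containers are stopped – always runs first.

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.19.1</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```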

The sub module config

A key optimisation I wanted to make was not to run the docker-maven-plugin if I was skipping tests – I’d assume I’m skipping tests for a fast build, so the last thing I want is to spin up unnecessary containers. To do this, I activated the docker-maven-plugin within a profile in the appropriate submodules, which is only active if you haven’t supplied the skipTests parameter.

<profiles>  
    <profile>
        <id>integ-test</id>
        <activation>
            <property>
                <name>!skipTests</name>
            </property>
        </activation>
        <build>
            <plugins>
                <plugin>
                    <groupId>io.fabric8</groupId>
                    <artifactId>docker-maven-plugin</artifactId>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

For submodules which only require the docker images defined in the parent pom, the above configuration is sufficient. However, in one submodule I also wanted to spin up the ldap image, which meant adding to the images defined in the parent. This led me to a useful blog post on Merging Plugin Configuration in Complex Maven Projects. The combine.children="append" attribute ensures that the merge behaviour is as required.

 <profile>
    <id>integ-test</id>
    <activation>
        <property>
            <name>!skipTests</name>
        </property>
    </activation>
    <build>
        <plugins>
            <plugin>
                <groupId>io.fabric8</groupId>
                <artifactId>docker-maven-plugin</artifactId>                
                <configuration>   
                    <!-- append docker images for this module -->                 
                    <images combine.children="append">
                        <image>
                            <name>test-ldap</name>
                            <build>
                                <skip>${env.skip.docker.run}</skip>
                                <dockerFileDir>${project.basedir}/src/test/docker/ldap</dockerFileDir>
                            </build>
                            <run>
                                <skip>${env.skip.docker.run}</skip>
                                <ports>
                                    <port>389:389</port>
                                </ports> 
                                <wait>
                                    <tcp>
                                        <ports>
                                            <port>389</port>
                                        </ports>
                                    </tcp>
                                </wait>  
                            </run>
                        </image>                                         
                    </images>                      
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

Result: completely portable integration tests.

  

Use Docker Compose to run a PHP site without contaminating your system

This must be the quickest way to get the LAMP stack up and running locally.

* Nothing installed locally -> you can run different versions of PHP/MySQL without conflicts.
* The database gets created on first startup and persisted in a named volume.
* PHP files are in a mapped volume, so you can edit without rebuilding the container.

The docker-compose.yml file:

version: "2"
services:
  site:
    image: php:5.6.27-apache   
    volumes:
      - ./site:/var/www/html
    depends_on:
      - mysql
    networks:
      back-tier:        
  mysql:
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: topsecret
      MYSQL_DATABASE: sitedbname
      MYSQL_USER: sitedbuser
      MYSQL_PASSWORD: sitedbpassword
    volumes:
      - site_db:/var/lib/mysql
    networks:
      back-tier:
networks:
  back-tier:
volumes:
  site_db: 

Then just

docker-compose up

It takes a bit of the pain out of working on a legacy PHP site!

  

Unit Testing AngularJS Directives with templateUrl

Quick blog, as this confused me for a while. When running a unit test for a directive which had its template in an html file, I got the following error:

Error: Unexpected request: GET assets/views/template.html

Odd, I thought – I hadn’t been explicitly using $httpBackend. However, I discovered that when unit testing, all HTTP requests are processed locally, and as the template is requested via HTTP, it too is processed locally.

The answer is to use karma-ng-html2js-preprocessor to generate an Angular module which puts your HTML files into a $templateCache to use in your tests. Then Angular won’t try to fetch the templates from the server.

First, install the dependency:

npm install karma-ng-html2js-preprocessor --save-dev

Then add the following to karma.conf.js (add to existing entries under files and preprocessors if they exist):


    files: [
        'build/assets/views/*.html'
    ],
    preprocessors: {
        'build/assets/views/*.html': "ng-html2js"
    },
    ngHtml2JsPreprocessor: {
        stripPrefix: 'build/',
        moduleName: 'ngTemplates'
    }

Then in the unit test, add the line:


  beforeEach(module('ngTemplates'));

After doing this, you may encounter the following error:

Error: [$injector:modulerr] Failed to instantiate module ngTemplates due to:
Error: [$injector:nomod] Module ‘ngTemplates’ is not available! …

To make the module available, you need to get the settings right – it will only be created if html files exist in the specified directory. The stripPrefix setting allows you to ensure that the path to the view matches what is expected by your application, if the basePath in your karma.conf.js isn’t the same as the base of your application. Other settings are available too.

  

Quick backendless development with AngularJS

There are occasions when you want to run your AngularJS app without accessing a live REST API. There are various blog posts on the internet about doing this using $httpBackend, which is part of angular-mocks and very handy for unit testing.

For example:

var cakeData = [{ name: 'Hot Cross Bun'}];
$httpBackend.whenGET('/cakes').respond(function(method,url,data) { 
    return [200, cakeData, {}];
});

This is fine if you have small snippets of JSON to return. However, in real life data is usually bigger and uglier than that!

It would seem logical to put your mock data into JSON files and return these when running without a live backend, keeping the code nice and succinct and readable. Unfortunately this doesn’t seem to be possible with the $httpBackend method.

I tried something like this:

$httpBackend.whenGET('/cakes').respond(function(method, url, data) {
    return $http.get("/mock/cakes.json");
  });

This doesn’t work, because $httpBackend doesn’t work with returned promises. The respond method needs static data.
Workarounds include falling back to a synchronous `XMLHttpRequest` to get the data (ugh), or using a preprocessor to insert the contents of the json file into the code when you build. Neither seems particularly nice.

Using Http Interceptors to serve mock data

I came across this blog post: Data mocking in angular E2E testing, which describes an alternative approach to serving mock data for testing. This approach works just as well for running your app without a backend.

Here’s the code for a simple interceptor:

angular.module('mock-backend',[])
    .factory('MockInterceptor', mockInterceptor)
    .config(function ($httpProvider) {
        $httpProvider.interceptors.push("MockInterceptor");
    });

function mockInterceptor() {
    return {
        'request': function (config) {
            if (config.url.indexOf('/cakes') >= 0) {
                config.url = 'mock/cakes.json';
            } 
            return config;
        }
    };
}

It’s fairly easy to use your build script to include this module conditionally when you want to run without a backend.

You can extend the interceptor logic; for example, check the method and switch POST to GET (you can’t POST to a file!). It’s not as sophisticated as a full mock backend, as data doesn’t change to reflect updates, but it’s a really quick way to view your app with a big chunk of data in it.