Setting input values in AngularJS 1.6 Unit tests

Having spent hours yesterday wondering why my unit test would not work, I can confirm that the correct way to enter data into an input field in an AngularJS 1.6 unit test is as follows:

var field = element.find('input');       
var wrappedField = angular.element(field);
wrappedField.val('Some text');
wrappedField.triggerHandler('change');

I can further confirm that this will not be enough if you have set a debounce value in the ngModelOptions for the field. The debounce value defines a delay between an input change and the resulting model update (and $digest cycle). This can improve performance, as it saves thrashing all the code linked to the two-way binding every time a key is pressed. However, for the purposes of unit testing it also means that the model will not be updated immediately after running the code above.

I found the answer in this stackoverflow post: How to test an AngularJS watch using debounce function with Jasmine.

Adding the following after setting the field value causes the model updates to be processed immediately:

$timeout.flush();
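
For context, a complete test along these lines might look like this (the module name, template and debounce value are placeholders for your own setup):

describe('setting an input value', function () {

    var $compile, $rootScope, $timeout, scope, element;

    beforeEach(module('myApp')); // hypothetical application module

    beforeEach(inject(function (_$compile_, _$rootScope_, _$timeout_) {
        $compile = _$compile_;
        $rootScope = _$rootScope_;
        $timeout = _$timeout_;
        scope = $rootScope.$new();
        // a debounced input bound to scope.name
        element = $compile('<form><input ng-model="name" ' +
            'ng-model-options="{ debounce: 250 }"></form>')(scope);
        scope.$digest();
    }));

    it('updates the model when text is entered', function () {
        var field = element.find('input');
        var wrappedField = angular.element(field);
        wrappedField.val('Some text');
        wrappedField.triggerHandler('change');
        $timeout.flush(); // process the debounced model update
        expect(scope.name).toBe('Some text');
    });
});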

I’ve added a working example of this unit test to my angular-testing repository on github.

  

Using two github accounts

I followed the very handy instructions at https://code.tutsplus.com/tutorials/quick-tip-how-to-work-with-github-and-multiple-accounts--net-22574 to set up different keys for my two github accounts.

My .ssh/config file looked like this:

Host github-work
    HostName github.com
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/id_work

Host github.com
    HostName github.com
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/id_rsa

I then had no problem cloning from github using `github-work` as the host:

git clone git@github-work:work/work-project.git
Cloning into 'work-project'...
remote: Counting objects: 16, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 16 (delta 0), reused 16 (delta 0), pack-reused 0
Receiving objects: 100% (16/16), done.
Checking connectivity... done.

HOWEVER, pushing was another matter:

 git push -u origin master 
ERROR: Permission to work/work-project.git denied to mypersonalusername.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The push was not using the right ssh key – the access denied message refers to my personal github account username.

For debugging purposes, I ran the following:

ssh -v git@github-work

Amongst the output was:

debug1: Reading configuration data /home/me/.ssh/config
debug1: /home/me/.ssh/config line 1: Applying options for github-work
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Hostname has changed; re-reading configuration
debug1: Reading configuration data /home/me/.ssh/config
debug1: /home/me/.ssh/config line 8: Applying options for github.com

A bit of googling led to this stackoverflow post: why git push can not choose the right ssh key?.

The answer is to remove the vanilla/default github.com entry from the .ssh/config file so it only contains the section for the non-standard host. This works!
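
My .ssh/config now contains just the entry for the non-standard host:

Host github-work
    HostName github.com
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/id_work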

  

Serving Kibana 4.6 (4.2+) from a subdirectory with nginx

This took me a little while to get right, but the final config is quite simple, so I thought it was worth sharing.

I wanted to serve kibana from a subdirectory, for example mydomain.com/kibana. There are known issues doing this, as of Kibana 4.2:

The problem is that even if you set up an nginx proxy configuration to forward requests from /kibana to the correct place, Kibana serves the app from /app/kibana and tries to load resources via their absolute url, e.g. /bundles/kibana.bundle.js.

The answer is a combination of kibana and nginx config (which isn’t very nice as it means your kibana config is not portable).

In kibana.yml, add the following:

server.basePath: "/kibana"

This means that when kibana generates the urls to load resources, it prefixes them with /kibana. On its own this just breaks kibana! It will try to load e.g. /kibana/bundles/kibana.bundle.js and return a 404. duh.

So, you need to strip out the prefix in your nginx config. Mine looks like this (kibana is running in a docker container on the same network as the nginx container):

location /kibana {
    proxy_set_header Host $host;
    rewrite /kibana/(.*)$ /$1 break;
    proxy_pass http://kibana:9292/;
}

Kibana is now available at mydomain.com/kibana. The resulting url will look like mydomain.com/kibana/app/kibana, but you get to keep the initial directory name, so it won’t interfere with anything else you might be serving from the same host. It would be much neater if the server.basePath setting in kibana were all that was necessary, but it doesn’t look like that will change any time soon; see the discussion at #6665 server.basePath results in 404 (to be fair, changing the behaviour would break every existing install that uses the setting).

  

Docker and Maven for integration testing

I recently implemented LDAP authentication in our Dropwizard application. How to integration test this?

Here’s what I used:

* This docker image: nickstenning/slapd, containing a basic configuration of the OpenLDAP server.
* The fabric8io docker-maven-plugin
* Some maven foo.

The docker image

I required LDAP configured with some known groups and users to use for testing. I put the following Dockerfile in src/test/docker/ldap, along with users.ldif which defined the users I wanted to load.

FROM nickstenning/slapd

# Our environment variables
ENV LDAP_ROOTPASS topsecret
ENV LDAP_ORGANISATION My Org
ENV LDAP_DOMAIN domain.com

# Users and groups to be loaded
COPY users.ldif        /users.ldif

# Run slapd to load ldif files
RUN apt-get update && apt-get install -y ldap-utils && rm -rf /var/lib/apt/lists/*
RUN (/etc/service/slapd/run &) && sleep 2 && \
  ldapadd -h localhost -p 389 -c -x -D cn=admin,dc=domain,dc=com -w topsecret -v -f /users.ldif && killall slapd
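
The users.ldif file contains standard LDIF entries for the test groups and users. As a rough illustration (all of the names and passwords below are made up; use whatever your tests expect), it might look something like this:

dn: ou=people,dc=domain,dc=com
objectClass: organizationalUnit
ou: people

dn: ou=groups,dc=domain,dc=com
objectClass: organizationalUnit
ou: groups

dn: uid=testuser,ou=people,dc=domain,dc=com
objectClass: inetOrgPerson
uid: testuser
cn: Test User
sn: User
userPassword: password123

dn: cn=testgroup,ou=groups,dc=domain,dc=com
objectClass: groupOfNames
cn: testgroup
member: uid=testuser,ou=people,dc=domain,dc=com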

The parent config

In my parent pom, I put the shared configuration for the fabric8io plugin into pluginManagement. This includes any images I want to run for multiple submodules (I’m using postgres for other integration tests, in combination with Liquibase for setting up a test database), and the executions which apply to all submodules.

 <pluginManagement>
    <plugins> 
        <plugin>
            <groupId>io.fabric8</groupId>
            <artifactId>docker-maven-plugin</artifactId>
            <version>0.18.1</version>
            <configuration>                    
                <images>                       
                    <image>
                        <name>postgres:9.5.1</name>
                        <alias>pg</alias>                           
                        <run>                                    
                            <env>
                                <POSTGRES_PASSWORD>topsecret</POSTGRES_PASSWORD>
                                <POSTGRES_USER>myuser</POSTGRES_USER>
                            </env>
                            <ports>
                                <port>5432:5432</port>
                            </ports> 
                            <wait>
                                <tcp>
                                    <ports>
                                        <port>5432</port>
                                    </ports>
                                </tcp>
                            </wait>  
                        </run>
                    </image>
                </images>
            </configuration>
            <executions>
                <execution>
                    <id>build</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>build-nofork</goal>
                    </goals>
                </execution>
                <execution>
                    <id>run</id>
                    <phase>pre-integration-test</phase>
                    <goals>
                        <goal>start</goal>
                    </goals>
                </execution>
                <execution>
                    <id>stop</id>
                    <phase>post-integration-test</phase>
                    <goals>
                        <goal>stop</goal>
                    </goals>
                </execution>
            </executions>                    
        </plugin>
    </plugins>
</pluginManagement>

Remember that putting a plugin in pluginManagement in the parent does not mean it will be active in the child modules. It simply defines the default config which will be used should you add the plugin to the plugins section in the child pom. This approach means we can avoid spinning up docker containers for submodules which don’t need them.

Remember to use the maven-failsafe-plugin to run your integration tests. This ensures that the docker containers will be stopped even if your integration tests fail, so you don’t get containers hanging around disrupting your next test run.
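
If the failsafe plugin isn’t already configured, a typical declaration looks like this (the version shown is just an example; use whatever is current for you):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.19.1</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>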

The sub module config

A key optimisation I wanted to make was not to run the docker-maven-plugin if I was skipping tests; I’d assume I’m skipping tests for a fast build, so the last thing I want is to spin up unnecessary containers. To do this, I activated the docker-maven-plugin within a profile in the appropriate submodules, which is only active if you haven’t supplied the skipTests parameter.

<profiles>  
    <profile>
        <id>integ-test</id>
        <activation>
            <property>
                <name>!skipTests</name>
            </property>
        </activation>
        <build>
            <plugins>
                <plugin>
                    <groupId>io.fabric8</groupId>
                    <artifactId>docker-maven-plugin</artifactId>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

For submodules which only require the docker images defined in the parent pom, the above configuration is sufficient. However, in one submodule I also wanted to spin up the ldap image, which meant adding to the images defined in the parent. This led me to a useful blog post on Merging Plugin Configuration in Complex Maven Projects. The combine.children="append" attribute ensures that the merge behaviour is as required.

 <profile>
    <id>integ-test</id>
    <activation>
        <property>
            <name>!skipTests</name>
        </property>
    </activation>
    <build>
        <plugins>
            <plugin>
                <groupId>io.fabric8</groupId>
                <artifactId>docker-maven-plugin</artifactId>                
                <configuration>   
                    <!-- append docker images for this module -->                 
                    <images combine.children="append">
                        <image>
                            <name>test-ldap</name>
                            <build>
                                <skip>${env.skip.docker.run}</skip>
                                <dockerFileDir>${project.basedir}/src/test/docker/ldap</dockerFileDir>
                            </build>
                            <run>
                                <skip>${env.skip.docker.run}</skip>
                                <ports>
                                    <port>389:389</port>
                                </ports> 
                                <wait>
                                    <tcp>
                                        <ports>
                                            <port>389</port>
                                        </ports>
                                    </tcp>
                                </wait>  
                            </run>
                        </image>                                         
                    </images>                      
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

Result: completely portable integration tests.

  

Use Docker Compose to run a PHP site without contaminating your system

This must be the quickest way to get the LAMP stack up and running locally.

* Nothing is installed locally, so you can run different versions of PHP/MySQL without conflicts.
* The database gets created on first startup and is persisted in a named volume.
* The PHP files live in a mapped volume, so you can edit them without rebuilding the container.

The docker-compose.yml file:

version: "2"
services:
  site:
    image: php:5.6.27-apache   
    volumes:
      - ./site:/var/www/html
    depends_on:
      - mysql
    networks:
      back-tier:        
  mysql:
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: topsecret
      MYSQL_DATABASE: sitedbname
      MYSQL_USER: sitedbuser
      MYSQL_PASSWORD: sitedbpassword
    volumes:
      - site_db:/var/lib/mysql
    networks:
      back-tier:
networks:
  back-tier:
volumes:
  site_db: 

Then just

docker-compose up

It takes a bit of the pain out of working on a legacy PHP site!

  

Unit Testing AngularJS Directives with templateUrl

Quick blog as this confused me for a while. When running a unit test for a directive which had the template in an html file, I got the following error:

Error: Unexpected request: GET assets/views/template.html

Odd, I thought; I hadn’t been explicitly using $httpBackend. However, it turns out that when unit testing with angular-mocks, all HTTP requests are handled locally by the mock $httpBackend, and as the template is requested via HTTP, that request gets intercepted too (and rejected as unexpected).

The answer is to use karma-ng-html2js-preprocessor to generate an Angular module which puts your HTML files into a $templateCache to use in your tests. Then Angular won’t try to fetch the templates from the server.

First, install the dependency:

npm install karma-ng-html2js-preprocessor --save-dev

Then add the following to karma.conf.js (add to existing entries under files and preprocessors if they exist)


    files: [
        'build/assets/views/*.html'
    ],
    preprocessors: {
        'build/assets/views/*.html': 'ng-html2js'
    },
    ngHtml2JsPreprocessor: {
        stripPrefix: 'build/',
        moduleName: 'ngTemplates'
    }

Then in the unit test, add the line:


  beforeEach(module('ngTemplates'));

After doing this, you may encounter the following error:

Error: [$injector:modulerr] Failed to instantiate module ngTemplates due to:
Error: [$injector:nomod] Module 'ngTemplates' is not available! …

To make the module available, you need to get the settings right; it will only be created if html files actually exist in the specified directory. The stripPrefix setting ensures that the path under which each template is cached matches what your application expects, which is useful when the basePath in your karma.conf.js isn’t the same as the base of your application. With the config above, for example, build/assets/views/template.html is cached as assets/views/template.html, matching the templateUrl requested by the directive. Other settings are available too.

  

Quick backendless development with AngularJS

There are occasions when you want to run your AngularJS app without accessing a live REST API. There are various blog posts on the internet about doing this using $httpBackend, which is part of angular-mocks and very handy for unit testing.

For example:

var cakeData = [{ name: 'Hot Cross Bun'}];
$httpBackend.whenGET('/cakes').respond(function(method,url,data) { 
    return [200, cakeData, {}];
});

This is fine if you have small snippets of JSON to return. However, in real life data is usually bigger and uglier than that!

It would seem logical to put your mock data into JSON files and return these when running without a live backend, keeping the code succinct and readable. Unfortunately, this doesn’t seem to be possible with the $httpBackend approach.

I tried something like this:

$httpBackend.whenGET('/cakes').respond(function(method, url, data) {
    return $http.get("/mock/cakes.json");
  });

This doesn’t work, because $httpBackend doesn’t work with returned promises: the respond method needs static data.
Workarounds include falling back to a synchronous `XMLHttpRequest` to get the data (ugh) or using a preprocessor to insert the contents of the json file into the code at build time. Neither seems particularly nice.

Using Http Interceptors to serve mock data

I came across this blog post: Data mocking in angular E2E testing, which describes an alternate approach to serving mock data for testing. This approach works just as well for running your app without a backend.

Here’s the code for a simple interceptor

angular.module('mock-backend',[])
    .factory('MockInterceptor', mockInterceptor)
    .config(function ($httpProvider) {
        $httpProvider.interceptors.push("MockInterceptor");
    });

function mockInterceptor() {
    return {
        'request': function (config) {
            if (config.url.indexOf('/cakes') >= 0) {
                config.url = 'mock/cakes.json';
            } 
            return config;
        }
    };
}

It’s fairly easy to use your build script to include this module conditionally when you want to run without a backend.
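
For example (the module and flag names here are made up), the build could set a global flag that decides whether the mock module gets pulled in:

// USE_MOCK_BACKEND is a hypothetical flag set by the build for backendless runs
var dependencies = ['myApp.controllers', 'myApp.services']; // your usual modules
if (window.USE_MOCK_BACKEND) {
    dependencies.push('mock-backend');
}
angular.module('myApp', dependencies);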

You can extend the interceptor logic; for example check the method, and switch POST to GET (you can’t POST to a file!). It’s not as sophisticated as a full mock backend, as data doesn’t change to reflect updates, but is a really quick way to view your app with a big chunk of data in it.
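
A sketch of that kind of extension to the interceptor above:

function mockInterceptor() {
    return {
        'request': function (config) {
            if (config.url.indexOf('/cakes') >= 0) {
                config.url = 'mock/cakes.json';
                if (config.method === 'POST') {
                    // a static file can only be fetched, so downgrade to GET
                    config.method = 'GET';
                    delete config.data;
                }
            }
            return config;
        }
    };
}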

  

Partial updates of JSON data in Postgres (using JDBI)

With Postgres 9.5 comes the jsonb_set function, for updating a single key within a JSON column. Hooray!

A sample bit of SQL might look like this:

update mytable 
set myfield = jsonb_set(myfield,'{key, subkey}', '"new string value"'::jsonb) 
where id = 5

I’ve put a text value in the example, but the new value can be an entire JSON structure.

I’ve posted previously on using JSON and Postgres with JDBI. To use the jsonb_set function, we need to reuse the BindJson annotation covered in that post. The jsonb_set function also takes an array parameter, defining the path to the key to be set. For this I wrote a new Bind annotation:

// JDBI 2 imports (adjust packages to your JDBI version)
import java.lang.annotation.Annotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.sql.Array;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;

import org.skife.jdbi.v2.SQLStatement;
import org.skife.jdbi.v2.sqlobject.Binder;
import org.skife.jdbi.v2.sqlobject.BinderFactory;
import org.skife.jdbi.v2.sqlobject.BindingAnnotation;

@BindingAnnotation(BindTextArray.JsonBinderFactory.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.PARAMETER})
public @interface BindTextArray {
    
    String value();

    public static class JsonBinderFactory implements BinderFactory {

        @Override
        public Binder build(Annotation annotation) {
            return new Binder<BindTextArray, String[]>() {
                
                @Override
                public void bind(SQLStatement sqlStatement, BindTextArray bind, String[] array) {
                    try {
                        String fieldName = bind.value();
                        Connection con = sqlStatement.getContext().getConnection();                        
                        Array sqlArray = con.createArrayOf("text", array);
                        sqlStatement.bindBySqlType(fieldName, sqlArray, Types.ARRAY);
                    } catch (SQLException ex) {
                        throw new IllegalStateException("Error Binding Array",ex);
                    }
                }
            };
        }
    }
}

(Code based on this post: http://codeonthecobb.com/2015/04/18/using-jdbi-with-postgres-arrays/).

Here’s the DAO for the SQL example above, using the new Bind annotation:

 
@SqlUpdate("update mytable set myfield = jsonb_set(myfield, :path,:json) where id = :id")
void updateMyTable(@Bind("id") int id, @BindTextArray("path") String[] path, @BindJson("json") String json);
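
Calling this to apply the update from the SQL example above looks something like the following (dao being an instance of the SQL object interface, obtained for example via JDBI’s onDemand):

// the path array and JSON value mirror the SQL example above;
// note that the JSON argument must itself be valid JSON text
String[] path = {"key", "subkey"};
dao.updateMyTable(5, path, "\"new string value\"");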

As you can see, there are limitations to this functionality. We can’t update two different elements in the same JSON column, so if you want to do that, you still need to do it in code. However, the new syntax is handy if you want to update one section of your JSON document, without loading the whole thing into your code base.

  

Alternative Dropwizard token authentication

Out of the box, the Dropwizard auth module comes with OAuth and Basic Authentication. The documentation for implementing the related Authenticator and Authorizer is simple enough to follow, but there wasn’t much about how to implement an alternative mechanism for authentication, which I needed to do.

It turned out to be fairly straightforward to write a new AuthFilter which used a non-standard header for authorisation. I based the implementation on the existing OAuthCredentialAuthFilter, and just changed which header is read and passed to getCredentials.
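
A rough sketch of the shape of such a filter, assuming Dropwizard 1.x auth (the X-Auth-Token header name is made up, and the nested Builder that wires in the Authenticator and Authorizer is omitted; it mirrors the one in OAuthCredentialAuthFilter):

import java.io.IOException;
import java.security.Principal;
import javax.annotation.Priority;
import javax.ws.rs.Priorities;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.container.ContainerRequestContext;

import io.dropwizard.auth.AuthFilter;

@Priority(Priorities.AUTHENTICATION) // see "Getting your priorities right" below
public class CustomHeaderAuthFilter<P extends Principal> extends AuthFilter<String, P> {

    private static final String AUTH_HEADER = "X-Auth-Token"; // hypothetical header name

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // read the token from the non-standard header instead of Authorization
        final String token = requestContext.getHeaderString(AUTH_HEADER);

        // AuthFilter.authenticate() runs the Authenticator/Authorizer and
        // populates the SecurityContext, just as the OAuth filter does
        if (!authenticate(requestContext, token, "TOKEN")) {
            throw new WebApplicationException(unauthorizedHandler.buildResponse(prefix, realm));
        }
    }
}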

Getting your priorities right

The authentication worked fine. Then I implemented authorisation as well. Remember:

Authentication is the mechanism for identifying the user (e.g. username + password)

Authorisation is the mechanism for determining what they are allowed to do (e.g. role membership)

I created an Authorizer implementation as per the user guide, annotated some resource methods with @RolesAllowed, and even remembered to register the RolesAllowedDynamicFeature. However, on testing and on running the application, Dropwizard appeared to be checking the role membership BEFORE authenticating, resulting in a 403 access denied, as the roles had not been loaded.

Time for some head scratching, debugging, and trying to register things in a different order (no change), until I remembered noticing this line at the top of the AuthFilter I based mine on:

@Priority(Priorities.AUTHENTICATION)

Once I added that to my custom AuthFilter, everything started happening in the correct order.

Today’s lesson: always get your priorities right!

  

What the F is f:table

Today I needed to knock up a quick interface to some database tables for inputting and editing data to be used in the demonstration of some data extraction software. Aha, I thought: I’ll try using Grails scaffolding. I haven’t used Grails scaffolding in earnest since taking a Grails training course a few years back. However, today I really did just need some simple CRUD functionality.

In Grails 3, you have two options for scaffolding – annotating the controller and having the views and controller methods auto-generated at runtime, or running a command to generate them statically so that you can modify them. I chose the latter, assuming that I’d want to do some customisation.

grails generate-all com.domain.Thing

You can then inspect the generated controller and views, and make any changes necessary. And this is where it all started to go wrong. The table containing the list of existing records didn’t look very nice. I’d removed the default ‘application.css’ which comes with grails, and used bootstrap to style the app. Without the default styles, the table has no spacing, and looks pretty awful.

No problem, I just need to add class="table" to the table and I’ll get a standard bootstrap-styled table. However, the generated index.gsp doesn’t contain a table tag. All I found was this:

<f:table collection="${thingList}"/>

The <f:table/> tag was a new one to me. Google suggests it comes from the Grails fields plugin, but the documentation is very sparse: Grails 3 fields plugin.
The plugin documentation doesn’t even mention the <f:table/> tag. Eventually I found http://grails3-plugins.github.io/fields/snapshot/ref/Tags/table.html, which helped a bit, in that it showed how to configure which fields to show in the table, but didn’t help with changing styles or other formatting.

The main grails scaffolding documentation suggests running

grails install-templates

to get local copies of the templates used in scaffolding, but this doesn’t include anything to do with the fields plugin.

More detective work led to this Stackoverflow post, and onward to the fields plugin source code.

Finally… how to customise the f:table tag:

Place a file called _table.gsp in /grails-app/views/templates/_fields/

The default file contents are here: _table.gsp

After adding this file to the project and amending to use the required styles, the <f:table/> tag can be used throughout the project with reckless abandon.

My table looks nice now, but I think this sums up why I struggle with the grails plugin ecosystem; it feels a bit half-finished to be using an undocumented tag as part of what should be a quick start process for new users.