GitHub Actions caching with Cypress

This has been on my to-do list for a while. To make builds faster (and save GitHub Actions minutes) you can use the cache action to cache dependencies between builds (or between jobs in the same workflow) rather than installing them each time.

To cache your node_modules folder, this is the step you would need:

- name: Cache Dependencies
  id: cache-deps
  uses: actions/cache@v2
  with:
    path: ~/node_modules
    key: cache-${{ hashFiles('yarn.lock') }}

In subsequent builds, if your yarn.lock file has not changed, the cache will be restored (in a few seconds) and you won’t need to wait for dependencies to install.

However, if you are using Cypress, configuring the cache as above results in a missing Cypress binary:

The cypress npm package is installed, but the Cypress binary is missing.

We expected the binary to be installed here: /home/runner/work/xxx/xxx/cypress/cache/9.2.0/Cypress/Cypress

Reasons it may be missing:

- You're caching 'node_modules' but are not caching this path: /home/runner/.cache/Cypress
- You ran 'npm install' at an earlier build step but did not persist: /home/runner/.cache/Cypress

It took me a while to get this right, but the steps below also cache the Cypress binary:

- name: Cache Dependencies
  id: cache-deps
  uses: actions/cache@v2
  with:
    path: |
      ~/.cache/Cypress
      ~/node_modules
      **/node_modules
    key: cache-${{ hashFiles('yarn.lock') }}
- name: Install dependencies
  if: steps.cache-deps.outputs.cache-hit != 'true'
  run: yarn --frozen-lockfile

(Note: this project is a monorepo, so I am also caching node_modules wherever it exists, not just in the root folder.)
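
With the cache in place, a later step in the same job can run Cypress against the restored binary as normal. A minimal sketch, assuming your tests are run via yarn and the cypress package:

- name: Run Cypress tests
  run: yarn cypress run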

Single-spa, Lerna, TypeScript and shared utility modules

I’ve been playing with Lerna to manage a monorepo for my single-spa project. It seems like a really good fit so far. One stumbling block I encountered was implementing a single-spa utility module (in this case a styleguide/component library).

A quick overview of the setup. First the lerna.json file:

{
  "packages": ["apps/*"],
  "npmClient": "yarn",
  "useWorkspaces": true,
  "version": "0.1.0"
}

Each of the micro frontends (MFEs) is in a subdirectory under apps. The package.json for each has a start command with a different port:

    "start": "webpack serve --port 8501"

The root package.json uses Lerna to start all the micro frontends in parallel:

    "start": "lerna run start --parallel"

So far so good, and much easier than checking out several different projects and remembering to run start on all of them.

TypeScript woes

I created a new single-spa utility module in the apps directory, using the single-spa CLI and selecting util-module as the module type.

First I installed the util module as a dependency in the other micro frontends. The lerna add command “Adds package to each applicable package. Applicable are packages that are not package and are in scope”:

lerna add @myorg/my-util --ignore @myorg/root-config

However, when I imported anything from the util module into any of the MFEs, TypeScript was not happy:

TS2307: Cannot find module '@myorg/my-util' or its corresponding type declarations.

I read a few suggestions about needing to specify a “main” in package.json for the util module (I don’t want to depend on the built module) and about generating type declaration files (I think the ts-config-single-spa config already does this). In the end, the answer was to add a paths entry to the compilerOptions in the tsconfig for each MFE:

"compilerOptions": {
    "jsx": "react",
    "baseUrl": ".",
    "paths": {
      "@myorg/my-util": ["../my-util/src/myorg-myutil"]
    }
  },

This made my IDE happy!
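
With that in place, a normal import in any of the MFEs type-checks (Button here is just a stand-in for whatever the styleguide actually exports):

import { Button } from "@myorg/my-util";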

Configuring as a shared dependency

Because this is a shared utility module, I don’t want to bundle it into every MFE, so I added it to the webpack externals for each MFE:

return merge(defaultConfig, {
    externals: {
      "react": "react",
      "react-dom": "react-dom",
      "@myorg/my-util": "@myorg/my-util"
    }
  });

I also added it to the import map in the root config. Note that there is no need to call registerApplication() for the util module, because it isn’t an application that gets mounted on its own:

<% if (isLocal) { %>
    <script type="systemjs-importmap">
    {
      "imports": {
        "@myorg/mfe1": "//localhost:8501/myorg-mfe1.js",
        "@myorg/mfe2": "//localhost:8502/myorg-mfe2.js",
        "@myorg/my-util": "//localhost:8503/myorg-myutil.js"
      }
    }
  </script>
<% } %>

And that’s it – all working in a development environment. The next step will be to look at how to hook up to Lerna’s version management abilities so I can test and deploy micro frontends independently.

A function for renaming PostgreSQL JSONB keys

I had to do some data wrangling which involved renaming a stack of jsonb keys in PostgreSQL.

This Stack Overflow answer gives the correct SQL syntax. As it’s a bit fiddly getting the paths in the right place, I dredged up some PL/pgSQL to write a function I can reuse.

Top tips for writing PL/pgSQL (it’s been a long time): don’t accidentally put tab characters in your script, and remember assignment is := not just plain old equals.

Here’s the function:

CREATE OR REPLACE FUNCTION rename_jsonb_key(data jsonb, basepath text[], oldname text, newname text)
returns jsonb
AS
$$
DECLARE
    oldpath text[];
    newpath text[];
    value jsonb;
    result jsonb;
BEGIN
    IF (data #> basepath) ? oldname THEN
        oldpath := basepath || oldname;
        newpath := basepath || newname;
        value := data #> oldpath;
        result := jsonb_set(data #- oldpath, newpath, value);
    ELSE
        result := data;
    END IF;
    RETURN result;	
END;
$$
LANGUAGE plpgsql IMMUTABLE;

and here’s an update which uses it:

UPDATE my_table
SET my_field = rename_jsonb_key(my_field, '{path, to, key}', 'oldname', 'newname');
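
As a quick sanity check, here’s the function applied to a made-up document:

SELECT rename_jsonb_key('{"settings": {"oldname": 1}}'::jsonb, '{settings}', 'oldname', 'newname');
-- returns {"settings": {"newname": 1}}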

Transaction handling woes in JDBI 3

I’ve stalled on a project because I decided to have a go with JDBI 3. I’ve only used JDBI 2 in previous projects – this is the general pattern I was using:

An Interface:

public interface AccountService {
    void addAccount(Account account, User user);
}  

Implementation:

public abstract class AccountServiceJdbi implements AccountService {
    
    @Override  
    @Transaction  
    public final void addAccount(Account account, User user) {
        long accountId =  accountDao().insertAccount(account);
        accountDao().linkAccountToOwner(accountId, user.getId());
    }
    
    @CreateSqlObject
    abstract AccountDao accountDao();
}

You can imagine the DAO – just a simple interface with @SqlQuery and @SqlUpdate annotations.
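
Purely for illustration (the SQL, table and column names here are invented), something like:

interface AccountDao {

    @SqlUpdate("insert into account (name) values (:name)")
    @GetGeneratedKeys
    long insertAccount(@BindBean Account account);

    @SqlUpdate("insert into account_owner (account_id, user_id) values (:accountId, :userId)")
    void linkAccountToOwner(@Bind("accountId") long accountId, @Bind("userId") long userId);
}
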
The Service is instantiated using

dbi.onDemand(AccountServiceJdbi.class);

I like this approach because it is easy to write a mock implementation of AccountService for use when testing other components, and to create a concrete extension of AccountServiceJdbi with a mock AccountDao for testing the service class logic.

The abstract class allows me to have transactional methods that combine data access methods from one or more DAOs. The implemented methods are final because the class is not intended to be subclassed; this prevents methods being overridden unintentionally, for example when creating a concrete implementation for testing.
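
For example, a test double for the service might look like this sketch (MockAccountDao is a simple hand-rolled stub I’d write myself, not anything provided by JDBI):

// must live in the same package as AccountServiceJdbi, because accountDao() is package-private
public class AccountServiceWithMockDao extends AccountServiceJdbi {

    private final AccountDao accountDao = new MockAccountDao();

    @Override
    AccountDao accountDao() {
        return accountDao;
    }
}

The final addAccount() method can’t be overridden by accident here, which is exactly the guarantee I want.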

HOWEVER…

If I try to follow this pattern in JDBI 3, I get the following error:

java.lang.IllegalArgumentException: On-demand extensions are only supported for interfaces

As far as I can tell, these are my options in JDBI 3:

1. Use default methods in interfaces:

public interface AccountServiceJdbi extends AccountService {
    
    @Override  
    @Transaction  
    default void addAccount(Account account, User user) {
        long accountId =  accountDao().insertAccount(account);
        accountDao().linkAccountToOwner(accountId, user.getId());
    }
    
    @CreateSqlObject
    AccountDao accountDao();
}

This may look similar to what I was doing in JDBI 2, but it feels WRONG. I’ve lost all control over concrete implementations of this interface. You can’t make default methods final, so if I create an implementation for testing, I can’t guarantee that some of the default methods won’t be overridden.

Furthermore, in the abstract class the `accountDao()` method has default (package-private) access, so it can’t be called from outside the package. In JDBI 3 I lose this restriction: all methods in interfaces are public, so my AccountDao can be accessed from outside the package.

It just feels less concise; I am less able to prescribe how the classes should be used. More commenting and self-enforced discipline will be required.

2. Handle transactions programmatically:

public final class AccountServiceJdbi implements AccountService {

    private final Jdbi jdbi;

    public AccountServiceJdbi(Jdbi jdbi) {
        this.jdbi = jdbi;
    }

    @Override
    public void addAccount(Account account, User user) {
        jdbi.useTransaction(handle -> {
            AccountDao accountDao = handle.attach(AccountDao.class);
            long accountId = accountDao.insertAccount(account);
            accountDao.linkAccountToOwner(accountId, user.getId());
        });
    }
}

Because this is a concrete implementation, I can make it final and prohibit unintended subclassing. There is no unintended access to DAO classes.

However, the transaction handling code is entwined with the business logic, making it difficult to unit test this class without connecting to a database; I can’t easily create mock implementations of the DAO classes to test the business logic in isolation, because the DAOs are instantiated inside the business logic. I guess I could get around this with a proper mocking framework, but personally I’d much rather write simple mocks myself for testing.

SO…

I’m still stalled. I don’t like either of these approaches, and I’m not very good at writing code I don’t like. I could go back to JDBI 2, but that doesn’t seem right either. Suggestions welcome…

Executing JavaScript on the JVM with Nashorn

We’ve been using Nashorn for a while to execute some very simple JavaScript expressions. The latest challenge was to run some JavaScript from a Node.js project, which in turn had dependencies on a third-party package. There were a few gotchas which I thought I’d share (disclaimer: I’m not a JavaScript developer, so this might all be obvious stuff to some).

The original JavaScript was in a plain old text file, with functions at the top level:

function testThing(thing) {
   return thing === 'tiger';
}

This is how I was executing it:

ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
// load the script from the classpath and evaluate it, so its functions are defined in the engine
String js = IOUtils.toString(this.getClass().getClassLoader().getResource("my-script.js"), "UTF-8");
engine.eval(js);
// the Nashorn engine implements Invocable, which lets us call a top-level function by name
Invocable invocable = (Invocable) engine;
Object result = invocable.invokeFunction("testThing", theThing);

Running JavaScript generated from a Node.js project required a bit of tinkering:

1. In Java 8, Nashorn will only load ES5-compatible JavaScript

I was running browserify to create a single JavaScript file from the Node.js project. Trying to load the generated file instead of the plain JavaScript failed miserably:

javax.script.ScriptException: :1079:0 Expected : but found }

To get Nashorn to evaluate the js file, I had to transpile it to ES5 using Babel.

I decided I needed to run browserify in standalone mode, so that my exported functions were attached to a global variable. The scripts section in package.json had the following:

"build": "browserify src/main.js -r --standalone MyLib -o dist/build.js",
"transpile": "babel dist/build.js --out-file dist/build.es5.js"

This was a small step forward, but Nashorn was still not able to load the script successfully…

2. Nashorn cannot set property of undefined!

Loading the transpiled file in Nashorn threw the following:

javax.script.ScriptException: TypeError: Cannot set property "MyLib" of undefined in <eval> at line number 19

Looking at the beginning of the transpiled file, you can see it is trying to work out where it would be suitable to attach the global variable:

(function (f) {
  if ((typeof exports === "undefined" ? "undefined" : _typeof(exports)) === "object" && typeof module !== "undefined") {
    module.exports = f();
  } else if (typeof define === "function" && define.amd) {
    define([], f);
  } else {
    var g;if (typeof window !== "undefined") {
      g = window;
    } else if (typeof global !== "undefined") {      
      g = global;
    } else if (typeof self !== "undefined") {
      g = self;
    } else {
      g = this;
    }
    g.MyLib = f();
  }
})

A little bit of detective work using print() showed that it was falling through to the final “else”, as it couldn’t find anything else to attach to. Adding this line to the Java, before evaluating the file, magicked the problem away.

engine.eval("var global = this;");

(It didn’t quite make sense to me at first why this works, given that inside the script “this” appears to be undefined. My guess is that the transpiled bundle runs in strict mode, so “this” inside the wrapper function is undefined, whereas “var global = this;” is evaluated at the top level, where “this” is still the engine’s global object – that gives the typeof global !== "undefined" branch something to find.)

3. Executing methods of objects is different from executing functions

At this point the file was loading without an error, but trying to execute the function using the original Java code with the new JavaScript file threw a NoSuchMethodException:

java.lang.NoSuchMethodException: No such function testThing

This makes sense, as “testThing” is now a method of the MyLib global variable. However, none of the permutations I tried for accessing the function worked. I tried all sorts, e.g.:

Object result = invocable.invokeFunction("global.MyLib.testThing", theThing);

The key issue here is that testThing isn’t a top-level function any more; it is a method on an object, so we have to use invokeMethod:

Object result = invocable.invokeMethod(engine.eval("global.MyLib"), "testThing", theThing);

And that worked :)

Not sure if any of this is the correct way to approach executing Node.js-based JavaScript on the JVM, but these were the three gotchas that took me a while to work out.

Refactoring SQLiteOpenHelper so it isn’t a mile long

If, like me, you are new to Android development, you may find that your SQLiteOpenHelper implementation quickly gets long and disorganised as data access methods for tables are added organically during development. It gets hard to see which methods are related, and it starts to feel cumbersome. You can only have one SQLiteOpenHelper per database, but it isn’t difficult to refactor that single SQLiteOpenHelper into multiple classes with distinct responsibilities.

To keep things really tidy, I defined an interface for my new classes:

public interface MySQLiteDao {
    void onCreate(SQLiteDatabase db);
    void onUpdate(SQLiteDatabase db, int oldVersion, int newVersion);
}

The individual DAO classes are constructed with an instance of SQLiteOpenHelper, which is used in the data access methods. onCreate and onUpdate contain the create/upgrade SQL specific to this table.

public class ThingDao implements MySQLiteDao {

    private SQLiteOpenHelper db;

    public ThingDao(SQLiteOpenHelper db) {
        this.db = db;
    }

    @Override
    public void onCreate(SQLiteDatabase sqlDb) {
        sqlDb.execSQL("CREATE TABLE ...)");
    }

    @Override
    public void onUpdate(SQLiteDatabase sqlDb, int oldVersion, int newVersion) {
        if (newVersion == 2) {
            sqlDb.execSQL("...");
        }
    }

    public List<Thing> getThings() {
        List<Thing> things = new ArrayList<>();
        Cursor c = db.getReadableDatabase().rawQuery("SELECT ... ", new String[]{});

        for (int i = 0; i < c.getCount(); i++) {
           c.moveToPosition(i);
           things.add(new Thing(...));
        }
        
        c.close();
        return things;
    }

    public Thing getThing(int id) {
       ...
    }

    public void deleteThing(Thing thing) {
       ...
    }
    ...
}

Then the SQLiteOpenHelper implementation itself is much easier to read:

public class MySqlHelper extends SQLiteOpenHelper {

    private static MySqlHelper instance;

    private static final String DATABASE_NAME = "my.db";
    private static final int SCHEMA = 1;


    private ThingDao thingDao;

    public synchronized static MySqlHelper getInstance(Context ctxt) {
        if (instance == null) {
            instance = new MySqlHelper(ctxt.getApplicationContext());
        }

        return (instance);
    }

    private MySqlHelper(Context context) {
        super(context, DATABASE_NAME, null, SCHEMA);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        getThingDao().onCreate(db);
        //other DAOs called here
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        getThingDao().onUpdate(db, oldVersion, newVersion);
        //other DAOs called here
    }

    public ThingDao getThingDao() {
        if (thingDao == null) {
            thingDao = new ThingDao(this);
        }
        return thingDao;
    }
}

You can group the data access methods into DAO classes in whatever way makes sense for the application – it doesn’t have to be one per table. In your application code, you can easily get an instance of the DAO you want.

ThingDao thingDao = MySqlHelper.getInstance(this).getThingDao();

This means you are working with smaller classes containing related methods – much easier to maintain than a sprawling mess containing every DAO method you dashed out in order to get the thing working…

How to use a JDBI ResultSetMapper with a non default constructor

If your ResultSetMapper has a default (no-argument) constructor, it is easy to specify a specific mapper for a DAO method:

@UseMapper(PersonMapper.class)
@SqlQuery("select id, data from person where id = :id")
Person getPerson(@Bind("id") String id);

However, sometimes the ResultSetMapper may require a non-default constructor. In this case you can register a mapper instance with the DBI:

dbi.registerMapper(new PersonMapper(extraParameters));

JDBI will then use the mapper globally, wherever the return type of a DAO method matches that of the ResultSetMapper.
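
For instance, a mapper with a non-default constructor might look something like this (ExtraParameters and the Person constructor are placeholders for whatever your mapper actually needs):

public class PersonMapper implements ResultSetMapper<Person> {

    private final ExtraParameters extraParameters;

    public PersonMapper(ExtraParameters extraParameters) {
        this.extraParameters = extraParameters;
    }

    @Override
    public Person map(int index, ResultSet r, StatementContext ctx) throws SQLException {
        // build the result however you need, using the injected dependency
        return new Person(r.getString("id"), r.getString("data"));
    }
}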

If you want to use different ResultSetMapper implementations for DAO methods that return the same data type, you can register a ResultSetMapperFactory instead:

dbi.registerMapper(new PersonMapperFactory(extraParameters));

The ResultSetMapperFactory can return different ResultSetMapper implementations based on the StatementContext. This gives you access to things like the name of the method being called and the raw SQL.

public final class PersonMapperFactory implements ResultSetMapperFactory {

    private EmployeeMapper employeeMapper;
    private PersonMapper personMapper;

    public PersonMapperFactory(DeptLookup deptLookup) { // DeptLookup is a placeholder for whatever the employee mapper needs
        this.employeeMapper = new EmployeeMapper(deptLookup);
        this.personMapper = new PersonMapper();
    }

    @Override
    public boolean accepts(Class type, StatementContext ctx) {
        return type.equals(Person.class);
    }

    @Override
    public ResultSetMapper mapperFor(Class type, StatementContext ctx) {        
        if (ctx.getSqlObjectMethod().getName().contains("Employee")) {
            return employeeMapper; 
        } else {
            return personMapper;
        }
    }
    
}
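
With DAO methods named along these lines (the SQL is illustrative), getEmployee picks up the EmployeeMapper and getPerson the plain PersonMapper:

@SqlQuery("select id, data, dept_id from person where id = :id and type = 'EMPLOYEE'")
Person getEmployee(@Bind("id") String id);

@SqlQuery("select id, data from person where id = :id")
Person getPerson(@Bind("id") String id);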

(Note that these examples use JDBI 2 – I haven’t had a chance to update yet…)

There’s some more info on the scope of a registered mapper in the JDBI SQL Object Reference.

Setting input values in AngularJS 1.6 Unit tests

Having spent hours yesterday wondering why my unit test would not work, I can confirm that the correct way to enter data into an input field in an AngularJS 1.6 unit test is as follows:

var field = element.find('input');       
var wrappedField = angular.element(field);
wrappedField.val('Some text');
wrappedField.triggerHandler('change');

I can further confirm that this will not be enough if you have set a debounce value in the ngModelOptions for the field. The debounce value defines a delay between input changes and triggering a $digest cycle. This can improve performance, as it saves thrashing all the code linked to the two-way binding every time a key is pressed. However, for the purposes of unit testing, it also means that the model will not be updated immediately after running the code above.
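
For reference, the debounce I mean is the one configured on the input itself, something like this (the model name and 250ms value are just examples):

<input type="text" ng-model="vm.searchText" ng-model-options="{ debounce: 250 }">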

I found the answer in this stackoverflow post: How to test an AngularJS watch using debounce function with Jasmine.

Adding the following after setting the field value causes the model updates to be processed immediately:

$timeout.flush();

I’ve added a working example of this unit test to my angular-testing repository on GitHub.

Using two github accounts

I followed the very handy instructions at https://code.tutsplus.com/tutorials/quick-tip-how-to-work-with-github-and-multiple-accounts--net-22574 to set up different keys for my two GitHub accounts.

My .ssh/config file looked like this:

Host github-work
    HostName github.com
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/id_work

Host github.com
    HostName github.com
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/id_rsa

I then had no problem cloning from GitHub using `github-work` as the host:

git clone git@github-work:work/work-project.git
Cloning into 'work-project'...
remote: Counting objects: 16, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 16 (delta 0), reused 16 (delta 0), pack-reused 0
Receiving objects: 100% (16/16), done.
Checking connectivity... done.

HOWEVER, pushing was another matter:

 git push -u origin master 
ERROR: Permission to work/work-project.git denied to mypersonalusername.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The push was not using the right SSH key – the access denied message refers to my personal GitHub account username.

For debugging purposes, I ran the following:

ssh -v git@github-work

Amongst the output was:

debug1: Reading configuration data /home/me/.ssh/config
debug1: /home/me/.ssh/config line 1: Applying options for github-work
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Hostname has changed; re-reading configuration
debug1: Reading configuration data /home/me/.ssh/config
debug1: /home/me/.ssh/config line 8: Applying options for github.com

A bit of googling led to this Stack Overflow post: why git push can not choose the right ssh key?

The answer is to remove the vanilla/default github.com entry from .ssh/config, so the file only contains the section for the non-standard host. This works!
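
So my .ssh/config now contains just the work entry; pushes and pulls for the personal account still work, because ssh falls back to the default ~/.ssh/id_rsa key when no host entry matches:

Host github-work
    HostName github.com
    PreferredAuthentications publickey
    IdentityFile ~/.ssh/id_work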

Serving Kibana 4.6 (4.2+) from a subdirectory with nginx

This took me a little while to get right, but the final config is quite simple, so I thought it was worth sharing.

I wanted to serve Kibana from a subdirectory, for example mydomain.com/kibana. There are known issues doing this as of Kibana 4.2:

The problem is that even if you set up an nginx proxy configuration to forward requests from /kibana to the correct place, Kibana serves the app from /app/kibana and tries to load resources via their absolute URLs, e.g. /bundles/kibana.bundle.js.

The answer is a combination of Kibana and nginx config (which isn’t very nice, as it means your Kibana config is not portable).

In kibana.yml, add the following:

server.basePath: "/kibana"

This means that when Kibana generates the URLs to load resources, it prefixes them with /kibana. On its own this just breaks Kibana! It will try to load e.g. /kibana/bundles/kibana.bundle.js and get a 404. Duh.

So you need to strip the prefix back out in your nginx config. Mine looks like this (Kibana is running in a Docker container on the same network as the nginx container):

location /kibana {
    proxy_set_header Host $host;
    rewrite /kibana/(.*)$ /$1 break;
    proxy_pass http://kibana:9292/;
}

Kibana is now available at mydomain.com/kibana – the resulting URL will look like mydomain.com/kibana/app/kibana, but you get to keep the initial directory name, meaning it won’t interfere with other things you might be serving from the same host. It would be much neater if the server.basePath setting in Kibana was all that was necessary, but it doesn’t look like that behaviour will change any time soon – see the discussion at #6665 server.basePath results in 404 (to be fair, changing the behaviour would break every existing install that uses the setting).