Matt Smith's Blog

Software Engineering related topics

Building Front-end Assets for Web Development


Whether you call it building, transpiling, or compiling front-end assets, modern web development is best accomplished using tools like Less, CoffeeScript, Browserify, RequireJs, or any other variant that does or will exist. The difficulty comes in unifying this vast pool of resources to arrive at the most efficient application you can for today's standard web browsers. More importantly, the tools you choose should leave the door open for you to migrate to the next powerful tool without significant effort. It is important to note that these technologies are not required to write an efficient application, but they are paramount when developing a maintainable codebase.

My colleagues and I are currently evaluating many different software stacks to determine which one we would like to use for a new project. While all of them have their pros and cons, it has become clear that using NodeJs and many of the tools developed on its platform is the way to go for our client-side assets. The major players we have seen that unify these tools include Make, Grunt, Mimosa, and Gulp. The capabilities of these tools that we are interested in are:

  • Time and difficulty to understand it well
  • Easy environment specific configuration
  • Composability with existing tools
  • Reversibility
  • Development experience
    • Watch mode
    • Compile time


Make

You may ask why in 2014 we would even consider a tool that has been around more than 20 years. Well, the answer is that it's because it has been around for twenty years. It has been well tested and is still actively used to build many projects, and not just C or C++ projects either. It is a versatile tool that will work with any technology you wish to build. As James Coglan has stated, it adheres to the Unix ethos of building small, sharp tools that can be easily composed together. Since Make is not bound to any particular language, it allows you to use one central build tool for any application where multiple technologies are at play; e.g. a .Net, Java, or Scala web server with a JavaScript and/or CoffeeScript front-end.
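To make that concrete, a Makefile for a CoffeeScript and Browserify pipeline might be sketched like this. The src/build layout and the exact coffee and browserify invocations are assumptions for illustration, not a prescribed setup:

```make
# Hypothetical layout: CoffeeScript sources in src/, output in build/.
# (Recipe lines must be indented with a tab.)
SRC := $(wildcard src/*.coffee)
JS  := $(SRC:src/%.coffee=build/%.js)

all: build/bundle.js

# Pattern rule: rebuild a .js file only when its .coffee source changes
build/%.js: src/%.coffee
	coffee --compile --output build $<

# The bundle depends on every compiled file
build/bundle.js: $(JS)
	browserify build/main.js --outfile build/bundle.js

clean:
	rm -rf build

.PHONY: all clean
```

Note how the pattern rule gives you the optimal build path for free: only the changed .coffee files are recompiled, though every intermediate .js file does hit the disk.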

Peter Müller argues that using Make would require you to specify your dependency graph in multiple locations, thus violating DRY. However, playing devil's advocate for Make, do you really need to specify that dependency graph when you are using tools like Browserify, RequireJs, or even Less? Since Make is only composable via cli tools, it is difficult to do much without first writing a cli tool that does what you want. Thus, for the occasional case where you do want to modify the pipeline, Make requires more setup and complication, as opposed to something that can be accomplished with minor code in a NodeJs-based build tool. In short, there is something to be said for having your build tools in the same language as your application.

Make is simple to understand and there really aren't too many moving parts to grasp. Since it relies primarily on executing shell commands, it will work with just about anything you throw at it and remain orthogonal to your application. It is a fast tool with great support for detecting which files have actually changed and really do need to be re-compiled. As a consequence of its orthogonality to your application, it is inherently reversible. Watch mode can be accomplished by delegating to a cli tool like nodemon.


Pros

  • Make is a tried and true build tool with a 20 year track record
  • Technology agnostic
  • Optimal build path (only builds changed files)
  • Simple and easy to learn


Cons

  • Small tweaks only available through cli tools
  • Yet another syntax to learn
  • Requires you to redefine your dependency graph
  • Necessary to use temp files for intermediary steps (many potential IO hits)


Grunt

Like Make, Grunt allows you to specify tasks to perform as part of your build chain. The main advantage here is that you build this in Node with JavaScript, which means you don't need to learn a new syntax. Grunt is feature rich, with many plugins to build with most of the popular tools out there today. One advantage over Make is that Grunt has the ability to read a file into memory once and pass that file through the different plugins to arrive at a final file. Like Mimosa, though, this tool does not leverage Node streams, which is partly why Gulp is faster at similar tasks.

Grunt is a much younger tool than Make, but it has garnered the largest adoption in the Node community. Gulp is nipping at its heels, but from what I can tell Grunt is still king. Grunt has adopted a model of configuration over code, meaning you should only need to send configuration options to the plugins you use, with minimal code modifications. Unfortunately, this does lead to a build configuration file that is slightly confusing and difficult to reason about, compared to Mimosa or Gulp files, which are usually much shorter and easier to understand at a quick glance. The configuration-first approach does help prevent you from coupling your application code with the build infrastructure, maintaining reversibility.
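As a sketch of that configuration-over-code style, a Gruntfile for the same CoffeeScript plus Browserify pipeline might look roughly like this. The src/build paths are made up, and grunt-contrib-coffee and grunt-browserify are simply the plugins I would expect to reach for:

```javascript
// Gruntfile.js -- a hypothetical configuration-over-code setup
module.exports = function (grunt) {
  grunt.initConfig({
    coffee: {
      compile: {
        expand: true,        // map each src file to a dest file
        cwd: 'src',
        src: ['**/*.coffee'],
        dest: 'build',
        ext: '.js'
      }
    },
    browserify: {
      dist: {
        src: ['build/main.js'],
        dest: 'build/bundle.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-coffee');
  grunt.loadNpmTasks('grunt-browserify');
  grunt.registerTask('default', ['coffee', 'browserify']);
};
```

Notice that the pipeline is implied by the ordering of tasks and the file paths they share, rather than stated directly, which is part of what makes larger Gruntfiles hard to reason about.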


Pros

  • Many plugins to choose from
  • One-off tasks are easy to write


Cons

  • Strong community adoption, though waning
  • Hard to understand configuration


Mimosa

Compared to Make, Mimosa is a really young tool, primarily focused on the web development workflow. It provides support for scaffolding new projects, building assets, and a development workflow. Mimosa has adopted a convention-over-configuration approach to its setup. This means that as long as you adhere to some sensible conventions, the Mimosa build configuration will be light and require little maintenance. There is a nice example of how minimal that can be when using Backbone, Require.js, and Bower, with only 1 line of config. You can easily find more examples doing so with different technologies.

Mimosa manipulates files in memory through the various build steps, avoiding IO hits as much as possible. Note that the file manipulation is in memory, but it does not utilize streams as Gulp does. In general, though, you get a similar stream-like approach instead of the temp-file approach I outline in the Gulp comparison below. Gulp is faster at this primarily because it leverages Node streams, but as Mimosa's author David Bashford has stated on gitter, "when the difference is 1ms vs 10ms… do you really notice? You don't".

There is a mimosa-adhoc-module which addresses the need for one-off operations, although I feel the plugin is clunky and not very intuitive. I don't know how often you would really need it if you are adhering to the sensible conventions and the tenets of a 12-factor app. This does make it more difficult for you to accidentally couple your application to the build infrastructure.

The scaffolding feature in Mimosa is pretty cool; however, in practice I've found that the skeletons do not stay up to date with the latest versions out there. Yeoman, on the other hand, which only attempts to do scaffolding, seems to do better at staying current.

Mimosa’s ability to watch file changes, run them through the build pipeline, and then serve/reload your browser with the changes is really nice. There are simple switches at the command line that flip between development vs production versions of your application, definitely a killer feature.

Looking over the Mimosa source, I am deeply concerned that there is no test coverage. This appears to be a common pattern for the core and the majority of its plugins, and it scares me that future releases may have unintended consequences.

Overall, community support for Mimosa does not feel as strong as it does for other tools such as Grunt or Gulp.


Pros

  • Scaffold an application
  • Convention over configuration
  • Watch with server
  • Limited file IO


Cons

  • Young
  • Community acceptance seems minimal
  • Source is not tested


Gulp

Like Mimosa, Gulp is a relatively young project and supports a streaming model for compiling your assets. So what does that mean anyway, and why should you care? Let's take the example where I use CoffeeScript and Browserify. First I'd need to transpile the CoffeeScript to JavaScript, where I would see a new JavaScript file on disk for each corresponding CoffeeScript file. Then I would need to read each of those JavaScript files to build up my Browserified bundle. Basically you end up with a workflow like so:

Temp File Flow

However, Gulp’s bread and butter comes by allowing you to do this workflow:

Stream Flow

Due to Gulp's model of code over convention, changes are easily made through JavaScript, thus keeping you in the same language you are already using to develop your application. This also means that you need to declare everything yourself, as opposed to the convention-over-configuration approach from Mimosa.

You will only have to learn the semantics of the libraries that you use, not a new language altogether. Gulp has a gulp-shell plugin to easily integrate with other cli tools. Watch functionality is built into the core of Gulp, giving that to you for free.
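The stream flow above might be sketched in a gulpfile like this. The paths and plugin choices are illustrative; gulp-coffee and gulp-concat stand in for whatever plugins you actually use:

```javascript
// gulpfile.js -- hypothetical stream pipeline: no temp files on disk
var gulp = require('gulp');
var coffee = require('gulp-coffee');
var concat = require('gulp-concat');

gulp.task('scripts', function () {
  return gulp.src('src/**/*.coffee')  // read each file once
    .pipe(coffee())                   // CoffeeScript -> JavaScript, in memory
    .pipe(concat('bundle.js'))        // join into a single file, in memory
    .pipe(gulp.dest('build'));        // one write at the very end
});

gulp.task('watch', function () {
  gulp.watch('src/**/*.coffee', ['scripts']); // built-in watch support
});
```

Each step hands vinyl file objects to the next through a Node stream, so the only disk IO is the initial read and the final write.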

Since your build tools are using the same language and libraries as your application, care must be taken to ensure you do not couple the application too tightly with the build infrastructure. Coupling the two would break reversibility thus preventing you from changing to a different build tool in the future, should the need arise.


Pros

  • Less file IO
  • Fast build times
  • Strong community acceptance, and growing
  • One-off issues are easy to resolve within a language you are familiar with


Cons

  • Young
  • Must be cautious about coupling / reversibility

Vim Plugins Presentation


I have been looking at Vim lately to determine its viability as a full fledged IDE; primarily for development of Scala applications, but potentially for other languages as well. In this presentation I demonstrate the following to my colleagues:

For a full list of the plugins that I use, check out my dotfiles.

Refactoring to Patterns Presentation


I gave a presentation on Refactoring to Patterns last week to my colleagues here at Towers Watson. I thought I was capturing video during the presentation so forgive the black screen, but feel free to listen if you like.

Fencepost Testing



Unit testing software can be difficult, especially when you are considering what to test. You should test the happy path that you expect most of your code to adhere to, but what else? I use a concept I call Fencepost Testing to help me determine if I have sufficient test coverage. The concept is simple: consider your code and identify the boundary conditions it should adhere to. If what you are testing has one boundary, or "fencepost", then test that post. If you find multiple boundaries then you have multiple "fenceposts"; you should test each post and a wildcard in between. The "fenceposts" are the make-or-break components of the subject you are testing.

So, for some code that checks if a date is in the future, I would break up my tests as such:

  • Some arbitrary date in the past
  • Yesterday (Fencepost - Immediate boundary)
  • Today (Fencepost)
  • Tomorrow (Fencepost - Immediate boundary)
  • Some arbitrary date in the future

Some would argue that the arbitrary dates do not matter, but they serve as sanity checks to make sure I'm not, for example, using the month value alone. The arbitrary dates could be random, since they should never fail. If they did, well then you've caught a bug.

Consider the regex ^\$\d+$, which simply tests whether a string is a dollar amount. This is a slightly more complicated item to test, with an infinite number of possibilities that would and would not match. But there are some fenceposts:

  • Must start with $
  • Must have a digit following the $
  • Can have many digits there after
  • Last character must be a digit

So my test cases would be:

  • ’ $1’ (There is a whitespace character at the beginning)
  • ‘a$1’ (Non-whitespace character before $)
  • ’$’ (Nothing following the $)
  • ‘$a’ (Non-whitespace non-digit character after $)
  • ’$ ’ (Whitespace character after $)
  • ‘$1’ (Simple one digit happy path)
  • ‘$1234567890’ (Simple multi-digit happy path)
  • ‘$1 ’ (One digit with trailing whitespace)
  • ‘$1a’ (One digit with trailing non-digit character)
  • ‘$1a1’ (Digits with intermittent non-digit character)
  • ‘$1,000’ (Digits with comma separated non-digit character)

In essence this should cover most of the cases that could be applied to that regex. If you do find a string that passes this regex unexpectedly, then add a new test case.

Testing the posts alone, or solely the happy path, is no different than building fenceposts with no fence between them.

Fence posts only

Fetching GitHub Pull Requests


A co-worker was doing some work today to get our TeamCity instance integrated with GitHub pull requests. It got me thinking: I hate adding remotes to my local repo just to pull down pull requests. The solution:

  [alias]
      req = "!f() { git fetch origin refs/pull/$1/head:pr/$1; } ; f"


$ git req 39
remote: Counting objects: 10, done
remote: Compressing objects: 100% (10/10) done.
remote: Total 10 (delta 0), reused 1 (delta 0)
Unpacking objects: 100% (10/10), done.
 * [new ref]         refs/pull/39/head -> pr/39
$ git checkout pr/39

And now I have that pull request locally without adding a remote.

Logstash Presentation


I gave a presentation on Logstash at work to introduce my colleagues to what it can provide you.

Introduction to Bundler


Over the last few years I have become a huge fan of using Rake as my build scripting language of choice. It is rich in many aspects, which I don't care to dive into right now. Since it runs on Ruby, you have access to all the various gems already developed.

I recently started a new job at Dovetail Software, and they were already using Rake; however, they weren't using Bundler. This is where my headache began, which prompted this post. I proceeded to install all the gem dependencies manually by simply running gem install rake. The latest version at the time was 10.0.2, which was newer than the version the project had previously been developed against. Although the differences between the two versions were minimal, there was one error that I got simply because I was on a newer version than all of my colleagues.

Bundler was designed to mitigate this problem. Bundler is a Ruby gem version management solution. The best part is it only requires two additional files in the root of your project directory: one which you write, and a second that is generated by Bundler. I know I just said generated, which is generally a bad thing when it comes to code, but just bear with me.

The first file to create is a file named Gemfile (no extension, typed exactly as I wrote it). This file is used to specify the acceptable gems and versions that you intend to use. You can find detailed instructions on the content of a Gemfile here.
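A minimal Gemfile might look like this; the gem names and version constraints are only examples, not recommendations:

```ruby
# Gemfile -- lives in the root of the project
source 'https://rubygems.org'

gem 'rake', '10.0.2'       # pin an exact version
gem 'nokogiri', '~> 1.6'   # or allow compatible minor/patch updates
```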

After you have the Gemfile you can generate the second file I mentioned, Gemfile.lock. This is created by running bundle install, and the command will do one additional thing on top of creating the file. Since Bundler is a gem version management utility, the command will also download the gems for the versions you specified, along with those gems' dependencies. The Gemfile.lock file will contain a list of all the gems installed and the exact versions that were installed.

Here is where the magic begins. Commit those files and move over to your buddy's computer. Have him pull down that commit and run bundle install. The Gemfile.lock will not be changed this time; instead, Bundler will install the gems with the exact versions specified in the Gemfile.lock. Now if you leave those gems alone for a while and you either hire a new employee, get a new computer, yatta yatta yatta, run bundle install on that new computer and it will use those same versions, even if a newer version exists.

Now when the time comes that you wish to update a gem, just run bundle update <gem>. Bundler will modify the Gemfile.lock; commit this and all your colleagues will know what version you are now using.

If you have multiple projects using various different versions of gems, Bundler is there to rescue you again. When running Ruby, it will attempt to use the latest version of an installed gem. If you preface your command execution with bundle exec <command>, then Bundler will ensure that the versions specified in the Gemfile.lock are the ones that are pulled in with a require statement.

Now you only have one additional thing to do, and you will never accidentally require the wrong version of a dependency. Simply put require 'bundler/setup' at the beginning of your entry Ruby script. Bundler will then fail if the versions specified in the Gemfile.lock are not installed, and it will prompt you to run bundle install. Now when you update a dependency and commit it, your colleagues will not be able to accidentally run the updated scripts with stale gems.

I don't intend for this to be all-inclusive documentation on Bundler, you can find that at the official Bundler website here, but hopefully this will serve as a launching point to get started with using it. It has been, and will continue to be, a great tool to ensure your entire development team is on the same page.

Event Queue for FubuMVC.ServerSentEvents


Out of the box, the event queue in FubuMVC.ServerSentEvents just stores all events in memory while the web application is running. While this will work great for many people, it does have its limitations, namely that it is a built-in memory leak. It was originally designed with the idea that those who use the library would override the event queue with their own implementation. There are a couple of things you can do here: you can persist the events to long-term storage, build in a mechanism to reduce the events in the queue, or whatever else you would like to do. (Sound like the Fubu way of doing things or what!)

When you implement your own queue you will need to implement both the IEventQueue and IEventQueueFactory interfaces. Then register them in your fubu service registry like so:

Fubu Service Registry
SetServiceIfNone<IEventQueueFactory<TOPIC>, YOUREVENTQUEUEFACTORY>();

You will need to do this for each of your topics for which you would like your event queue to be used.

How you implement your event queue is entirely up to you. The solution that I implemented was an in memory queue which would self reduce when the number of events surpassed some count.
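As a rough illustration of that self-reducing idea, here is a sketch. This is not the actual IEventQueue interface from FubuMVC.ServerSentEvents, which has more members; it shows only the reduction mechanism:

```csharp
using System.Collections.Generic;

// A sketch of a self-reducing in-memory queue: once the event count
// passes a threshold, the oldest events are dropped.
public class SelfReducingQueue<T>
{
    private readonly object _lock = new object();
    private readonly Queue<T> _events = new Queue<T>();
    private readonly int _maxCount;

    public SelfReducingQueue(int maxCount)
    {
        _maxCount = maxCount;
    }

    public void Write(T @event)
    {
        lock (_lock)
        {
            _events.Enqueue(@event);
            // Self-reduce: discard the oldest events past the threshold
            while (_events.Count > _maxCount)
            {
                _events.Dequeue();
            }
        }
    }
}
```

The real implementation would wrap something like this behind IEventQueue and hand it out via your IEventQueueFactory.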

Event Serialization in FubuMVC.ServerSentEvents


The IServerEvent.Data property is defined by the interface to be an object. Hence this object needs to be serialized prior to broadcasting it down to connected clients. By default the FubuMVC.ServerSentEvents bottle will use the FubuMVC JSON serializer. So an example stream would look like this:

Example of stream with serialized data
data: { "property" : "value" }\n

You can easily change the serializer by implementing your own version of IDataFormatter, then in your registry adding ReplaceService<IDataFormatter, YOURIMPLEMENTATION>().

This is where appending the event name to the end of the event id comes in handy. You can route all of your events to the default message handler on the client, deserialize the data, then trigger a non-SSE event with the deserialized data, similar to how everything else is handled in FubuMVC.