Drag n Drop Sorting with Ember 2.x and jQuery UI

Drag-and-drop sorting for lists of records in an Ember 2 application, using jQuery UI’s sortable plugin. A working example is up on GitHub.

I’ve been rebuilding Three D Radio‘s internal software using Ember JS. One aspect is to allow announcers to create playlists to log what they play on air. I wanted announcers to be able to reorder tracks using simple drag-n-drop. In this post I’ll explain how to do it.

Firstly, this post is based on the work by Benjamin Rhodes. However, I found that his solution didn’t work out of the box. Whether that is due to API changes from Ember 1.11 to Ember 2.x, I’m not sure. So what I’m going to do here is bring his technique up to date for 2016 and Ember 2.6.

Starting an Ember Project

I’ll build this from scratch so we have a complete example. You shouldn’t have problems integrating this into an existing project though. So we’ll create a new Ember CLI project called sortable, and install jQuery UI:
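
Roughly these commands (I’m assuming Bower for front-end packages, which was the Ember CLI default at the time; adjust if you manage dependencies differently):

    ember new sortable
    cd sortable
    bower install --save jquery-ui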

We need to add jQuery UI to our build as well:
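
Something like this in ember-cli-build.js (the path assumes the Bower install above):

    // ember-cli-build.js (excerpt)
    app.import('bower_components/jquery-ui/jquery-ui.js');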


We are going to need a model for the data we are going to sort. Here’s something simple
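
We can generate it with Ember CLI:

    ember generate model note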

Inside the note model we’ll have two attributes, the content of the note, and an index for the sorted order:
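
The model itself is only a couple of attributes, in the standard Ember Data 2.x style:

    // app/models/note.js
    import DS from 'ember-data';

    export default DS.Model.extend({
      content: DS.attr('string'),
      index: DS.attr('number')
    });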

Fake data with Mirage

For the sake of this example, we’ll use Mirage to pretend we have a server providing data. Skip this bit if you have your REST API done.
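
Install the addon:

    ember install ember-cli-mirage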

And provide some mock data:
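
A sketch of a Mirage route handler returning a few notes in JSON API format (the note content here is just placeholder data):

    // mirage/config.js
    export default function() {
      this.get('/notes', function() {
        return {
          data: [
            { type: 'notes', id: 1, attributes: { content: 'First note',  index: 1 } },
            { type: 'notes', id: 2, attributes: { content: 'Second note', index: 2 } },
            { type: 'notes', id: 3, attributes: { content: 'Third note',  index: 3 } }
          ]
        };
      });
    }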

A Route

We will need a route for viewing the list of notes, and a template. Here’s something simple that will do for now:
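
Ember CLI will generate both the route and its template (I’m calling the route notes):

    ember generate route notes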

And in here we will simply return all the notes:
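
A minimal model hook:

    // app/routes/notes.js
    import Ember from 'ember';

    export default Ember.Route.extend({
      model() {
        return this.store.findAll('note');
      }
    });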

A template? No, a component!

We are going to display our notes in a table, but the Sortable plugin also works on lists if that’s what you’d like to do.

You may be tempted to just put your entire table into the route’s template that Ember created for you. However, you won’t be able to activate the Sortable plugin if you try it this way. This is because we need to call sortable after the table has been inserted into the DOM, and a simple route won’t give you this hook. So, we will instead create a component for our table!

We will get to the logic in a moment, but first let’s render the table:
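
I’m assuming a component called notes-table (generated with ember generate component notes-table). Its template is a plain table, with the sortable class on the tbody:

    {{! app/templates/components/notes-table.hbs }}
    <table>
      <thead>
        <tr>
          <th>Note</th>
          <th>Index</th>
        </tr>
      </thead>
      <tbody class="sortable">
        {{#each notes as |note|}}
          <tr>
            <td>{{note.content}}</td>
            <td>{{note.index}}</td>
          </tr>
        {{/each}}
      </tbody>
    </table>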

The important part here is to make sure your table contains <thead> and <tbody> elements. We add the class sortable to the tbody, because that’s what we will make sortable. If you were rendering as a list, you would add the sortable class to the list element instead.

Finally, in the template for our route, let’s render the table:
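
Just pass the route’s model into the component:

    {{! app/templates/notes.hbs }}
    {{notes-table notes=model}}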

We should have something that looks like this:

Our table component rendering the notes

A quick detour to CSS

Let’s make this slightly less ugly with a quick bit of CSS.
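
A minimal sketch, dropped into app/styles/app.css (the exact rules don’t matter):

    table {
      border-collapse: collapse;
      width: 60%;
    }

    th, td {
      border: 1px solid #ccc;
      padding: 0.5em 1em;
      text-align: left;
    }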

Which gives us a more usable table:

Just make the table a tiny bit less ugly

Make Them Sortable

Moving over to our component’s JavaScript file, we need to activate the sortable plugin. We do this in the didInsertElement hook, which Ember calls for you once the component has been inserted into the DOM. In this method, we will look for elements with the sortable class, and make them sortable!
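
A sketch of the component:

    // app/components/notes-table.js
    import Ember from 'ember';

    export default Ember.Component.extend({
      didInsertElement() {
        this._super(...arguments);
        // The table is now in the DOM, so jQuery UI can find the tbody
        this.$('.sortable').sortable();
      }
    });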

Persisting the records

At this point we have a sortable table where users can drag and drop to re-order elements. However, this is purely cosmetic. You’ll see that when you reorder the table the index column shows the numbers out of order.

We can now reorder notes, but the index fields are not updated

Open up Ember Inspector and you will see that the models’ index values are never updated. We’ll fix this now.

The first step is to store each note’s ID inside the table row that renders it. We will make use of this ID to update the index based on the order in the DOM. So a slight change to our component’s template:
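
Each row gets a data-id attribute:

    {{#each notes as |note|}}
      <tr data-id={{note.id}}>
        <td>{{note.content}}</td>
        <td>{{note.index}}</td>
      </tr>
    {{/each}}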

Next is to give sortable an update function. This gets called whenever a drag-drop is made.
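
A sketch of the hook, extended from before (the shape of the callback is mine, not necessarily the original code):

    // app/components/notes-table.js
    import Ember from 'ember';

    export default Ember.Component.extend({
      didInsertElement() {
        this._super(...arguments);
        const component = this;
        this.$('.sortable').sortable({
          update() {
            // Walk the rows in their new DOM order and record each note's index
            const indices = {};
            component.$('.sortable tr').each(function(index) {
              indices[Ember.$(this).data('id')] = index + 1;
            });
            component.updateSortedOrder(indices);
          }
        });
      }
    });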

This function iterates over all the sortable elements in our table. Note that we get them from jQuery in their order in the DOM (i.e. the new sorted order). So we build a map from each item’s ID to its new index. Note that I’m adding 1 to my indices to give values starting from 1 instead of 0. The next step is to use this map to update the records themselves:
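
Add a method along these lines to the component (updateSortedOrder is my name for it; the original may differ):

    updateSortedOrder(indices) {
      this.get('notes').forEach((note) => {
        const newIndex = indices[note.get('id')];
        if (note.get('index') !== newIndex) {
          note.set('index', newIndex);
          note.save();
        }
      });
    }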

We update and save the record only if its index has actually changed. With long lists, this greatly reduces the number of hits to the server. (Wish list: a method in Ember that saves all dirty records with a single server request!)

Now when we reorder, the index fields are updated correctly!

And we’re done! A sortable list of Ember records that persists the changes to the server*.

Have a look on GitHub!

(Note: if you’re using Mirage, you’ll get errors when saving records, because we haven’t added handling for PATCH requests.)

We The Unseen

I worked for South Australia’s youth circus organisation Cirkidz on their production We The Unseen. Using the same 3D projection mapping technology I developed at UniSA and expanding on the work I did with Half Real, we built several interactive projection-based special effects to complement the performance. Let’s have a look at the Storm.

So what’s going on here? We have two Microsoft Kinects, either side of the stage, tracking the performers in 3D. We can then use that information to make projected effects that respond to the performers’ movements.

For Storm, I created a particle simulation that would provide a (deliberately abstract) storm effect. We have two particle emitters; one at the top of the triangle at the back of the stage, and another at the front. This gives the illusion that particles are travelling from the sail out onto the floor. Then, we have a couple of forces attached to the performer. The first is a rather strong attractor, which draws the particles to the actor. The next is a vortex, which manipulates the direction.
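
As a rough sketch of the idea (illustrative names and constants, using GLM for the vector maths; not the production code):

    #include <glm/glm.hpp>

    struct Particle {
        glm::vec3 position;
        glm::vec3 velocity;
    };

    // One integration step for a single particle: a strong attractor pulls it
    // towards the performer, and a vortex bends its path around them.
    void applyStormForces(Particle& p, const glm::vec3& actorPos, float dt) {
        const float attractStrength = 4.0f;   // illustrative values
        const float vortexStrength  = 2.0f;

        glm::vec3 toActor = actorPos - p.position;
        glm::vec3 attract = glm::normalize(toActor) * attractStrength;
        glm::vec3 vortex  = glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), toActor) * vortexStrength;

        p.velocity += (attract + vortex) * dt;
        p.position += p.velocity * dt;
    }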

The result is a particle system that appears to dance with the performer.

We The Unseen’s projected effects were developed over a 6 week period. The first step was to figure out what was actually possible to track with the Kinect. These are circus performers, not people in their living rooms doing over the top gestures!

Tracking multiple performers

Having multiple actors on stage is fine, even on unicycles:

Skeleton tracking not so much

The main problem was the size of the stage. For this reason we used two Kinect devices, connected to separate PCs, which sent tracking data over a simple, custom network protocol. Calibration put the tracking data from both devices into the same coordinate system. And again, due to the size of the stage, there was almost no interference between the two devices.

In fact, if there was more time, we would have tried four devices.

One of the ideas considered for Half Real, but never implemented, was using projectors as dynamic light sources. In We The Unseen, we had a chance:

It mostly works, but you start to see the limits of the Kinect. No matter how precisely you calibrate, depth errors start to cause misalignment of the projection. There’s also a bit of a jump when the tracking switches devices. But overall it works pretty well.

In a smaller space, you could do some very nice lighting effects with projectors and decent tracking data.

Another problem discovered during Half Real was controlling the projection system. The operator was busy dealing with lighting cues, sound cues, and then an entirely separate projection system.

For We The Unseen, I had time to integrate the projection system with QLab, using Open Sound Control. This allowed the operator to program the show exclusively in QLab, and OSC messages told the projection system what to do.

There were additional effects that we built but didn’t make it into the show. For example, we had this idea for some of the acrobatics to create impact effects for when performers landed from great  heights. The problem here was mostly aesthetic. The lighting designer of course wants to light the performers to have the biggest visual impact. For these acrobatic scenes there was simply too much stage lighting for the projectors to cut through. Projectors are getting brighter and brighter, for cheaper, but still can’t compete against stage lighting. So those effects we left unseen.

We The Unseen – pretty amazing youth circus, with a couple of special effects by me where they worked.

Tech runs


Syntho

Web Audio is an amazingly powerful new JavaScript API for building complex audio and music applications in the web browser. I wanted to check it out, so I built Syntho. You can try Syntho out right now!

Syntho is a monophonic synthesizer inspired by the Korg Volca Bass. It features 3 oscillators with sine, saw, triangle, and square wave shapes over 6 octaves. Each oscillator can be detuned independently, giving nice/horrible pulsing as the oscillators go in and out of phase.

A low pass filter with resonance affects the sound of the oscillators. The filter self-oscillates if you push the resonance way up.

There is a low frequency oscillator that can be set to affect the pitch of the sound generating oscillators, or the filter cutoff point. The LFO supports triangle and square waveshapes.

Finally, there is an ADSR envelope generator. The ADSR can be set to control the amplitude of the sound, or the filter cutoff point, or both.
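
Web Audio makes this kind of signal chain very direct. A minimal sketch of one voice (not Syntho’s actual code): an oscillator into a resonant low pass filter, with a gain node providing a simple envelope.

    const ctx = new AudioContext();

    const osc = ctx.createOscillator();
    osc.type = 'sawtooth';
    osc.frequency.value = 110;              // A2

    const filter = ctx.createBiquadFilter();
    filter.type = 'lowpass';
    filter.frequency.value = 800;           // cutoff
    filter.Q.value = 10;                    // resonance

    const amp = ctx.createGain();
    amp.gain.value = 0;                     // silent until a note is played

    osc.connect(filter);
    filter.connect(amp);
    amp.connect(ctx.destination);
    osc.start();

    // A very simple attack / decay-to-sustain, then release on note off
    function noteOn() {
      const t = ctx.currentTime;
      amp.gain.cancelScheduledValues(t);
      amp.gain.setValueAtTime(0, t);
      amp.gain.linearRampToValueAtTime(1.0, t + 0.05);   // attack
      amp.gain.linearRampToValueAtTime(0.6, t + 0.3);    // decay to sustain
    }

    function noteOff() {
      const t = ctx.currentTime;
      amp.gain.cancelScheduledValues(t);
      amp.gain.setValueAtTime(amp.gain.value, t);
      amp.gain.linearRampToValueAtTime(0, t + 0.4);      // release
    }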

Syntho is completely modern JavaScript. I use ES6 transpiled with Babel, Handlebars for keeping the HTML sane, and Twitter Bootstrap because I’m lazy with CSS.

The inner workings of Syntho and web-audio will probably be the subject of another series of video tutorials. But for now, the code is on GitHub.

Graveyard Ghoul

Three D Radio, a community radio station I help run in Adelaide, had a problem: sometimes there are no announcers available for the late-night and extremely early morning timeslots. As all the announcers are volunteers, things sometimes come up and an announcer can’t make it in to do their show. The station had switched over to a 5-CD changer and played pre-recorded shows in these situations. There were two glaring problems with this approach:

  1. The station’s volunteers couldn’t create new pre-recorded shows fast enough. This meant that listeners would end up hearing the same shows again, which is lame.
  2. Five CDs isn’t always enough content to make it through the night. This meant that you could listen to a show, go to bed, wake up the next morning and hear the same show. Even worse!

I built the Graveyard Ghoul to replace the CD changer with a never ending assortment of randomised music. Here’s how and why.

Graveyard Ghoul on GitHub!

Existing options

So why build something new? Let’s talk about existing options.

We could just load up an MP3 player with music, switch it to random, and forget about it. However, this would lead to a poor on-air sound, and would probably have the station breaking the law.

Australian community radio law mandates that at least 20% of the music broadcast be Australian. Three D goes further than this and has self-imposed quotas of 40% Australian, 20% South Australian, and 25% music featuring female artists.

We also need to broadcast messages and station IDs at regular intervals, so listeners know what they’re listening to. An MP3 player wouldn’t do this well.

Most radio stations use some kind of playout software to automate the on air sound. Most of these can be switched into an automatic mode and run the station without anybody present at all. Three D could have taken this approach.

The quotas again would be a problem. Metadata for all the music in Three D’s collection is stored in a PostgreSQL database that was implemented long before I joined the station. Most of the MP3 files themselves have no ID3 tags, or incomplete ones. So we would have to hack/script any playout software to interface with the existing database, or somehow shoehorn all the metadata into tags in the files themselves.

A full-blown playout software solution was also deemed too heavyweight to put into use. Three D is one of the few stations that don’t use one. We needed this solved fast, and trialling, purchasing, deploying, and migrating to new software was going to be too much effort for a station of 130 volunteers.


So I decided to try building something new, and if it worked out, suggest the station put it into use. The requirements I had in mind were:

  • Super simple user interface (ideally one button to press play)
  • Play a randomised selection of music from Three D’s music library
  • Meet all of Three D’s music quotas
  • Regularly broadcast station IDs between songs
  • Log the music played to the station’s logging system (again, a legal requirement)
  • Run on Linux (the on-air computer runs OpenSUSE)
  • Implement in Python. The main reason for this is there are other volunteers at the station who can code in Python, and could look into things if I wasn’t around. Otherwise I would have used Java or Qt.

The Solution

After hacking away for a couple of nights after work, I had enough to leave running for a week, playing music non-stop. That soak test mattered: an obscure bug would mean the radio station stopped broadcasting, and nobody wants to hear silence on their radio. It now runs every night, and whenever an announcer doesn’t show up.

The graveyard ghoul

Interesting bits

The Ghoul is a fairly simple, albeit important, piece of software. The main interesting bit is the scheduler, which decides what to play next.

The config file allows the programming committee to tweak the sound. For example, how many songs should be played before a station ID, or what the quotas are for Australian and South Australian music, music featuring female artists, and demos.
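
A hedged sketch of what that might look like (these key names are illustrative; the Ghoul’s real option names differ):

    [quotas]
    australian = 40
    south_australian = 20
    female = 25
    demo = 5

    [stings]
    songs_between_stings = 4
    sting_randomness = 2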

The Ghoul plays 5 totally random tracks when it first starts. This seeds the playlog with enough information to then start working towards meeting the quotas.

Stings/station IDs are inserted into the play queue with a bit of randomness, so it isn’t just a monotonous 4 songs, sting, 4 songs, sting, etc. The randomness can be tweaked through the config file.

Finally, we make sure the MP3 actually exists. This is a problem because some of the music in the catalogue database is from vinyl, or simply hasn’t been ripped to MP3 yet. The database is a bit of a mess, so the Ghoul checks that there is actually a file there to play. This is also why the method sits in a while true loop.
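
A sketch of that part of the scheduler (names are illustrative, not the Ghoul’s actual code):

    import os
    import random

    def next_track(candidates):
        """Keep drawing candidates until we find one whose MP3 is really on disk."""
        while True:
            track = random.choice(candidates)
            # Plenty of catalogue entries only exist on vinyl or were never
            # ripped, so check the file exists before queueing it.
            if os.path.isfile(track.mp3_path):
                return track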

Other than the scheduler, the software makes use of threads to ensure the GUI stays snappy, and Python’s requests library to handle logging to the station’s intranet.


Half Real: an Interactive, Projected World

Half Real was a live action, interactive theatre production I worked on with The Border Project back in 2011. A case study article on Half Real was published at the 2012 ISMAR conference. This post is a more human readable summary of my work.

Half Real is based on a murder investigation, where the live audience votes on how the investigation proceeds. On stage, actors are immersed in a virtual world projected onto the set. Actors were tracked in 3D on set, and the projections reacted to their actions and movements. I built the software that drove the projected content.

I worked on Half Real, building the projection software over a four month period. The system builds on technology I was developing at the University of South Australia during my PhD. The code was all in C++ with OpenGL, running on Linux.

The Stage as a 3D Environment

The projection system for Half Real needed to track actors as they moved about on stage and have projections appear attached to the actors. Simple 2D projection, which is common in performance art, was not going to cut it. Instead, the entire set is modeled as a 3D scene. A calibration process figures out the position and orientation of the projectors, and creates a perspective correct projection in OpenGL. Rather than creating unique content for each projector, the content is created for the scene, and then we figure out what the projectors can see.

This means art assets only need to be created once, regardless of the number of projectors. In addition, more projectors can be easily added. Pre-show setup is simpler, as projectors only need to be roughly aligned; the calibration algorithm takes care of the rest.

3D model of the set
Projected content on the physical set

Actor Tracking

Projected text attached to the performer

We used a Microsoft Kinect with OpenNI for tracking actors on set. Kinect works well for theatre; it uses IR, so it isn’t affected by stage lighting, and it’s cheap. However, there are a few limitations. The tracking isn’t precise enough to project onto the actors themselves, so we project onto the set near them. Also, the resolution is low and it can’t track really small objects.

While the Kinect is quite good at tracking people, it is not able to reliably identify them. Actors enter and exit the set many times throughout each performance. We needed the system to automatically attach the correct information to the correct actor. Through the course of rehearsals, catch areas were identified in each scene. These catch areas are regions on the set that an actor would always walk through. Once the tracking system registered an actor passing through a catch area, that actor was associated with the correct virtual information. In addition to catch areas, we also needed dead areas: regions where tracked objects would never be associated with virtual information.

Pre-show projector calibration

The set of Half Real was not simply a static scene; it contained a door and window that actors could walk or climb through, and a chair that was moved about on stage. The detection algorithm in the tracker would sometimes incorrectly register these objects as actors. By marking these areas as dead areas, the system would ignore these objects when associating virtual information. As with catch areas, the dead areas were defined separately for each scene, as in some scenes actors moved into the range of the window or door just when the virtual information needed to appear.
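
A hedged sketch of the idea (illustrative names, not the show’s code): each scene carries a list of catch areas and dead areas on the stage floor, and a tracked position is classified against them.

    #include <cstddef>
    #include <vector>

    struct Region {
        float minX, maxX, minZ, maxZ;                 // rectangle on the stage floor
        bool contains(float x, float z) const {
            return x >= minX && x <= maxX && z >= minZ && z <= maxZ;
        }
    };

    // Returns the index of the catch area the tracked position falls in, or -1
    // if it should be ignored (inside a dead area, or in no catch area at all).
    int classify(float x, float z,
                 const std::vector<Region>& catchAreas,
                 const std::vector<Region>& deadAreas) {
        for (const Region& dead : deadAreas)
            if (dead.contains(x, z)) return -1;       // door, window, chair...
        for (std::size_t i = 0; i < catchAreas.size(); ++i)
            if (catchAreas[i].contains(x, z)) return static_cast<int>(i);
        return -1;
    }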

The only real problem I found with the Kinect was a bug in OpenNI which caused a segfault. Segfaults are always bad, but when your entire set is a projected environment they are even worse. Luckily, I was able to do a quick fix before opening night.


All the possible paths through the show

Half Real was an interactive murder investigation. The audience were asked “Who killed Violet Vario?” Each scene would uncover clues about the murder, and the audience voted on what to investigate next. This meant that there was around 6 hours of scenes for an hour-long show. A graph of the show’s structure is shown above.

Each member of the audience had a ZigZag controller, developed by Matthew Gardiner, in their hand. During a scene, vote options would appear in the projected world. At the end of the scene, the audience would vote on where the investigation would go.



There was one-way communication from the ZigZag voting system to the projection system. The ZigZag system was responsible for communicating with the devices, tallying votes, and keeping track of which scene the show was in. The projection system polled the ZigZag system regularly over HTTP to find out whether a scene change was necessary.

Resource Management

Half Real. Photo courtesy of Chris More

All those scenes meant a lot of media resources. Half Real’s projected content consisted of images, video files, and procedurally generated content, such as the vote options. In all, 38GB of assets, mostly video, were used during the show. A level loading approach was used to manage these assets.

Each scene was described in an XML file, which listed the assets required for the scene, transitions and events, and so on. A content manager was responsible for freeing data that was no longer needed and loading new assets as required. Assets that were needed in consecutive scenes were reused, rather than reloaded.
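
A sketch of the bookkeeping (illustrative, not the production code):

    #include <string>
    #include <GL/gl.h>

    struct Asset {
        std::string path;     // source file on disk
        GLuint      texture;  // uploaded OpenGL resource, 0 if not yet loaded
        bool        inUse;    // still being projected, so don't free it yet
    };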

Note the additional flag inUse. Half Real had smooth transitions between scenes. We didn’t want to delete an asset that was currently being projected, so we left it in RAM until the next transition.


Who killed Violet Vario?

Half Real successfully completed a tour of regional South Australia, before playing a three week, sold out season as part of the Melbourne Festival in 2011. That achievement is proof the technology and software developed was a success. However, as with any production there are lessons learned and room for improvement.

One of the major issues that had to be overcome was reliability and robustness. In Half Real, if the projection software crashed, the stage went dark. The system had to function correctly day after day, for extended periods of time. Decoupling subsystems was one of the most important factors in making the system robust. For example, it was important that the projection system kept running if the tracking system stopped responding.

Another issue was sequencing the content to be projected in each scene. The projection system used XML files for each scene. This effectively meant there was one scene description for the projection, and another for lighting and sound. If there had been time, making the projection system interoperable with existing stage management software, such as QLab, would have reduced the duplication and made modifying the sequences of projected content much easier.

While Half Real made an important step in using SAR for interactive performance art, there are many more possibilities to be explored. For example, using projectors to simulate physical light sources, such as follow spot lights that automatically track the actor. Or, using the projectors to project directly onto the actors in order to change their appearance.

Turns out, I did some of these things working with Cirkidz!

An Interactive Projection Mapped Graffiti Wall

This post explains how I built a projection mapped, interactive graffiti wall for If There Was A Colour Darker Than Black I’d Wear It. This was built in collaboration with Lachlan Tetlow-Stewart. First, here’s a video of the end result:

Let’s break down the project.

  • Multi projector, projection mapped display onto buildings
  • Allow audience members to send SMS messages to tag the building
  • Animate tags onto wall
  • Video elements

That’s the basics. Oh, did I mention we’re projecting out of a van, powered by generator, in rural South Australia?

3D Projection Mapping

My research involves Augmented Reality using projectors; quite convenient. Here we create a simple 3D model of the geometry to be projected onto, and create our content mapped to the 3D environment.

This differs from most projection mapping techniques, where content is produced for a specific projector viewpoint and mapped in 2D. The advantage of 3D projection mapping is content can be created independently of the projectors. Projectors can be added as necessary, or viewpoints drastically changed, without having to re-author content. In our case, it meant we could project onto the buildings every night, without having to get the van or projectors into exactly the right location each time.

Finding known points on the 3D model in the real world

The 3D models are actually incredibly simple, just enough detail to allow perspectively correct projections. It does, however, require actually measuring the buildings to make sure the 3D model matches the real world. The calibration process involves finding landmarks in the 3D model (corners) in the projector image using a crosshair. From there, maths takes over and we end up with a correctly aligned projection.

This whole project was just an elaborate way to get the phone numbers of audience members. We would be using them later in the show. We played a video telling the audience to text in their tags, which would then appear on the wall.

The computer system received texts using a GSM Modem, as described in my post about SMS and Linux. The handler put the message into a MySQL database, then an incredibly simple web API allowed the projection system to get new messages as they were received.

Alpha mask for animating in new messages

We used animated alpha masks and OpenGL blending to make the texts appear in a pleasing manner. All lengths of time (time spent visible, etc) were randomised to make everything feel a bit more natural.

The result was a compelling experience for the audience. They got something fun to occupy themselves before the show proper started, and we got their phone numbers for use later in the show!

Technical Details

The projection system was a standard desktop computer with 2x Nvidia GTX560 graphics cards, running Ubuntu 12.04. The software was OpenGL and C++, built on top of a Spatial Augmented Reality framework developed in the Wearable Computer Lab during my PhD.

Text rendering was accomplished using FTGL texture fonts. The software generated a pool of fonts at different sizes, so the best-fitting font could be chosen for messages of different lengths. Generating the pool at startup is important: generating the texture map of an FTGL TextureFont is an expensive process. Changing the size of a font at runtime will give you serious performance problems.

The correct font size had to be calculated for each message. Here’s my incredibly elegant algorithm for this:
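
A hedged reconstruction (the names are illustrative, not the original code): the fonts are kept in a map keyed by point size, and we walk from the largest size down.

    #include <map>
    #include <string>
    #include <FTGL/ftgl.h>

    // Defined in the next snippet: simulates laying out the message with word
    // wrap and reports whether it stays inside the given width and height.
    bool willItFit(FTFont& font, const std::string& message,
                   float width, float height);

    FTFont* chooseFont(std::map<int, FTFont*>& fontPool,
                       const std::string& message, float width, float height) {
        FTFont* smallest = nullptr;
        for (auto it = fontPool.rbegin(); it != fontPool.rend(); ++it) {
            smallest = it->second;
            if (willItFit(*it->second, message, width, height)) {
                return it->second;                    // the biggest font that fits
            }
        }
        return smallest;                              // nothing fits: use the smallest
    }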

So basically it looks through the fonts and chooses the biggest font that fits the message in the space available. What does willItFit do?
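
Again a hedged reconstruction, along the lines described in the next paragraph (the LINE_SCALE value is illustrative):

    #include <sstream>

    static const float LINE_SCALE = 0.8f;             // tighten FTGL's default spacing

    bool willItFit(FTFont& font, const std::string& message,
                   float width, float height) {
        std::istringstream words(message);
        std::string word;
        float lineWidth  = 0.0f;
        float usedHeight = font.LineHeight() * LINE_SCALE;

        while (words >> word) {
            // Advance() gives the rendered width of a piece of text in pixels
            float wordWidth = font.Advance((word + " ").c_str());
            if (lineWidth + wordWidth > width) {      // word overflows: wrap to a new line
                usedHeight += font.LineHeight() * LINE_SCALE;
                lineWidth = 0.0f;
            }
            lineWidth += wordWidth;
        }
        return usedHeight <= height;                  // did the wrapped text fit?
    }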

Each graffiti spot has a width and height that limits how much text will fit. This function simulates rendering the text and does some word wrap. FTGL doesn’t do any text wrapping so it’s up to us, and we’re using variable width graf style fonts. We use mFont->Advance() to calculate how many pixels wide a portion of text is. If a word fits on a line, we move to the next one. As soon as a word overflows the width, we drop down a line. We use mFont->LineHeight() to calculate the Y position. LINE_SCALE is just a line height adjustment because we found the default line height to have too much spacing for what we wanted.

If, after simulating rendering the entire message, we have gone beyond the bounds of the graffiti spot, we return false. The function above then tries again with a smaller font. If you wanted to be clever you would do a binary search to speed up finding the optimum font size, but in practice we never hit any performance problems.

Latex Thesis Template

Tweaking your own Latex template for a PhD dissertation is a rite of passage/time waster for most PhD candidates. There are lots of templates around on the Internet too, of varying quality.

I spent a fair bit of time procrastinating perfecting the template used for my thesis. I’ve pulled out all the unnecessary bits and put it up on GitHub. Hopefully some other poor PhD student will find it useful. This template is itself based on styles developed by Peter Hutterer, an earlier PhD student from the Wearable Computer Lab.

This template is particularly suited to students at the University of South Australia. It meets the guidelines specified by the Graduate Research Office. That said, with some adjustments this template should be useful for anybody.


The template includes:

  • Nice cover page
  • Author’s publications
  • Acknowledgements
  • TOC, List of Figures, Abbreviations, etc.

How to use:

  • Fork and clone the GitHub repository
  • Edit the information in thesis.tex
  • Update images/00/author_sig.png with your own signature
  • Update images/00/uni.png with your university’s logo
  • Add your publications as citations in 00-publications.tex
  • Tweak the styles as necessary
  • Write your damn thesis!

I’m happy to accept Pull Requests for improvements on this template.

Check it out on GitHub!

Happy Writing!


Making FTGL Work on OSX

tl;dr: I’ve made a GitHub repo that makes FTGL work on OSX again.

FTGL is a library that makes it super convenient to render TrueType text in OpenGL applications. You can render text as textures and geometry, making it very flexible. There’s just one problem: if you’re using MacPorts or Homebrew on OSX, FTGL doesn’t work! Here’s how to work around it.

FTGL makes use of FreeType to actually render text. In newish versions of FreeType, some of their source files have been moved around and renamed. This is a problem on OSX since, by default, we are on a case-insensitive filesystem. We now have a name clash where both FTGL and FreeType seem to have a file named ftglyph.h. All of a sudden software that uses FTGL will no longer compile because the wrong files are being included!

The fix for this is fairly straightforward. Since FTGL depends on FreeType, FTGL should be modified to remove the name clash. Unfortunately, FTGL seems to have been abandoned, and has not had any updates since 2013. In the bug report linked above I have provided a patch that renames the file and updates references to it. I’ve also created a GitHub repository with the patch applied.

This problem doesn’t show up on Linux because on a case sensitive filesystem like Ext4, the FreeType file is ftglyph.h, while the FTGL file is named FTGlyph.h. No name clash.

So there, uninstall FTGL from MacPorts or Homebrew, clone my GitHub repo, and build/install from source. FTGL will work on OSX once more.

Long term you may want to look at moving away from FTGL in your own software. It is great at what it does, but hasn’t been updated in a long time. It uses OpenGL display lists internally, so it will not work with modern core-profile OpenGL. But at least you can now use it if you need to.

Sending & Receiving SMS on Linux

A little while ago I worked on a mixed media theatre production called If There Was A Colour Darker Than Black I’d Wear It. As part of this production I needed to build a system that could send and receive SMS messages from audience members. Today we’re looking at the technical aspects of how to do that using SMS Server Tools.

There are actually a couple of ways to obtain incoming text messages:

  • Using an SMS gateway and software API
  • Using a GSM modem plugged into the computer, and a prepaid SIM

The API route is the easiest way to go from a programming perspective. It costs money, but most gateways provide a nice API to interface with, and you’ll be able to send larger volumes of messages.

BLACK had a few specific requirements that made the gateway unsuitable.

  1. We were projecting out of a van in regional South Australia. We had terrible phone reception, and mobile data was really flakey.
  2. We were going to be sending text messages to audience members later, and needed to have the same phone number.

So, we got hold of a USB GSM modem and used a prepaid phone SIM. This allowed us to receive unlimited messages for free. However, we couldn’t send messages as quickly as we would have liked.

Modem Selection

There are quite a few GSM modems to choose from. You are looking for one with a USB interface and a removable SIM. GSM modems that use wifi to connect to computers won’t work. You need to be able to remove the SIM because most mobile data SIMs won’t allow you to send or receive SMS messages. The other big requirement is Linux drivers, and Google is really your friend here. The main thing to watch out for is manufacturers changing the chipsets in minor product revisions.

We ended up going with an old Vodafone modem using a Huawei chipset. The exact model I used is the Huawei Mobile Connect Model E169. It shows up in Linux as a USB serial device (/dev/ttyUSB0).

SMS Tools

SMS Tools is an open source software package for interfacing with GSM modems on Linux. It includes a daemon, smsd, which sends and receives messages. smsd is configured to run your own scripts when messages are received, allowing you to do pretty much anything you want with them.

Installation is straightforward on Ubuntu et al:
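
On Debian or Ubuntu the package is called smstools:

    sudo apt-get install smstools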

Next you’ll need to configure the software for your modem and scripts.

Configuration File

The configuration file is a bit unwieldy, but thankfully it comes with some sane default settings. Edit the file in your favourite text editor:
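
On Ubuntu it lives at /etc/smsd.conf:

    sudo nano /etc/smsd.conf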

Modem Configuration

First up you will need to configure your modem. The modem configuration is at the end of the config file, and the exact parameters will vary depending on what modem you have. Let’s have a look at what I needed:
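
This is roughly the shape of it (the values here are illustrative rather than my exact settings):

    # /etc/smsd.conf (excerpt)
    devices = GSM1

    [GSM1]
    device = /dev/ttyUSB0
    incoming = yes
    baudrate = 19200
    # init = <any AT commands your modem needs before it will behave>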

device is where you specify the device file for your modem. If you’re using a USB modem, this will almost always be /dev/ttyUSB0.

init specifies AT commands needed for your modem. Some modems require initialisation commands before they start doing anything. There are two strategies here, either find the manual for your modem, or take advantage of the SMSTools Forums to find a working configuration from someone else.

incoming is there to tell SMSTools you want to use this device to receive messages.

baudrate is, well, the baud rate needed for talking to the device.

Like I said, there are many options to pick from, but this is the bare minimum I needed. Check the SMSTools website and forum for help!

Event Handler

The other big important part of the config file is the event handler. Here you can specify a script/program that is run every time a message is sent or received. From this script you can do any processing you need, and could even reply to incoming messages.
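
It’s a single line in the global section of smsd.conf (the path is just an example):

    eventhandler = /usr/local/bin/sms_handler.sh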

My script is some simple Bash which inserts a message into a database, but more on that in a moment.

Sending Messages

Sending SMS messages is super easy. Smsd looks in a folder, specified in the config file, for outgoing messages. Any files that appear in this folder get sent automatically. By default this folder is /var/spool/sms/outgoing.

An SMS file contains a phone number to send to (including the country code, but without the +) and the body of the message. For example:
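
Something like this, with a made-up number (country code 61, no plus sign):

    To: 61412345678

    Hey there, this is a text message.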

Easy! Just put files that look like this into the folder and you’re sending messages.

Receiving Messages

Let’s have a better look at the event handler. Remember, this script is called every time a message is sent or received. The information about the message is given to your program as command line arguments:

  1. The event type. This will be either SENT, RECEIVED, FAILED, REPORT, or CALL. We’re only interested in RECEIVED here.
  2. The path to the SMS file. You read this file to do whatever you need with the message.

You can use any programming language to work with the message. However, it is very easy to use formail and Bash. For example:
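
A hedged sketch of an event handler (the database, table, and credentials are made up):

    #!/bin/bash
    # $1 is the event type, $2 is the path to the SMS file
    if [ "$1" = "RECEIVED" ]; then
        FROM=$(formail -zx From: < "$2")
        # Everything after the header block is the message body
        BODY=$(sed -e '1,/^$/d' "$2")
        mysql -u sms -psecret sms_db \
              -e "INSERT INTO messages (sender, body) VALUES ('$FROM', '$BODY')"
    fi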

From there you can do whatever you want. I put the message into a MySQL database.


That’s all you need to write programs that can send and receive SMS messages on Linux. Once you have smsd actually talking to your modem it’s pretty easy. However, in practice it’s also fragile.

The smsd log file is incredibly useful here. It lives in /var/log/smstools/smsd.log

Here are some of the errors I encountered and what to do about them:

Modem Not Registered

You’ll see an error that looks like this:

This means the modem has lost reception, and is trying to re-establish a connection. Unfortunately there is nothing you can do here but wait or, using a USB extension cable, try to find a spot with better reception.

Write To Modem Error

An error like this:

means the software can no longer communicate with the modem. This is usually caused by the modem being accidentally unplugged, the modem being plugged in after the system has powered up, or by an intermittent glitch in the USB driver. To fix this, do the following:

  1. Stop smsd (sudo service smstools stop)
  2. Unplug the modem
  3. Wait 10 seconds or so
  4. Plug the modem back in
  5. Start smsd (sudo service smstools start)

Cannot Open Serial Port

You may see this error:

This occurs if you started the computer (and therefore smsd) before plugging in the modem. Follow the steps above to fix it.


So there you have it. Follow these steps and you can send and receive SMS messages on Linux, using a cheap prepaid SIM and GSM modem.

In the next post we’ll be looking at exactly what I used this setup for.