Remote Redux Debugging in Flutter

Connect your Flutter app’s Redux Store to the Redux Devtools from the web!

I really like Flutter, and I like using Redux when building mobile apps. There’s a great Redux implementation for Dart and Flutter, and a time travel capable debug store.

The Javascript world is spoilt with the fantastic Redux DevTools plugin. It allows you to inspect actions and application state in a web browser, time travel, and play back changes to your app’s state. There is an on-screen time travel widget for Flutter, but that means sacrificing screen space for the UI.

So why not combine the Redux DevTools from the Javascript world with Redux.dart? Now you can, with the redux_remote_devtools package!

Debug your Redux Store with Flutter and Remote DevTools

This article gives a quick overview of how to get set up. The Git repository contains examples to help get you started.

Getting Started

Add the library to your app’s pubspec.yaml:
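The dependency goes in alongside redux itself; pin both to whatever the latest published versions are (the `any` constraints below are just placeholders):

```yaml
dependencies:
  redux: any                  # replace with the latest published version
  redux_remote_devtools: any  # replace with the latest published version
```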

Then add the middleware to your app and give it a reference to your store, so that time travel actions from the remote can be dispatched:
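A minimal sketch, assuming your app already defines an AppState class, an appReducer, and an AppState.initial() factory (the middleware and setter names follow the package README; check the current docs):

```dart
import 'package:redux/redux.dart';
import 'package:redux_remote_devtools/redux_remote_devtools.dart';

Future<Store<AppState>> createStore() async {
  // Point the middleware at the remotedev server (host:port).
  final remoteDevtools = RemoteDevToolsMiddleware('localhost:8000');
  await remoteDevtools.connect();

  final store = Store<AppState>(
    appReducer,
    initialState: AppState.initial(),
    middleware: [remoteDevtools],
  );

  // Give the middleware a reference to the store so time travel actions
  // from the remote DevTools can be dispatched back into the app.
  remoteDevtools.store = store;
  return store;
}
```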

Start up the remotedev server, and then run your Flutter app:
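remotedev-server comes from npm; something like this, with the server left running in one terminal:

```sh
npm install -g remotedev-server
remotedev --port=8000

# then, in another terminal:
flutter run
```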

You can then browse to http://localhost:8000 and start using Remote DevTools to debug your Flutter app!

Encoding Actions and State

In the Javascript world, Redux follows a convention that your Redux state is a plain Javascript object, and actions are also Javascript objects that have a type property. The JS Redux DevTools expect this. However, Redux.dart tries to take advantage of the strong typing available in Dart. To make Redux.dart work with the JS devtools, we need to convert actions and state instances to JSON before sending them.

Remember that the primary reason for using devtools is to allow the developer to reason about what the app is doing. Therefore, exact conversion is not strictly necessary – it’s more important for what appears in devtools to be meaningful to the developer.

To make your actions and state JSON encodable, you have two options: either add a toJson method to all your classes, or use a package like json_serializable to generate the serialisation code at build time. The GitHub search example demonstrates both approaches.
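For the hand-written approach, toJson is just an ordinary method that dart:convert’s jsonEncode picks up automatically. A hypothetical action, for example:

```dart
class SearchAction {
  final String query;
  SearchAction(this.query);

  // jsonEncode() calls toJson() when it encounters this object, so something
  // readable ends up in the DevTools action list.
  Map<String, dynamic> toJson() => {
        'type': 'SearchAction',
        'query': query,
      };
}
```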

If your store is simple then you may be using enums for actions. These encode just fine without any extra effort.

Time Travel

If you have configured your app to use the DevToolsStore from redux_devtools, then you can time travel through your app state using the UI.

Time Travel through your app

Remember that there are limitations to time travel, especially if you are using epics or other asynchronous processing with your Redux store.


As it’s a new library, there are still things to work out. PRs are welcome if you’re up for helping out.

Now go build something cool with Flutter!

Production Error Handling in Ionic

Nobody likes apps that crash or stop working properly. Handling and recovering from errors is obviously an important task for any developer; we should not assume that everything will run smoothly.

In this post we’re talking about what to do on top of your regular error handling — the last resort.

Read on the NextFaze Blog

Drag n Drop Sorting with Ember 2.x and JQuery UI

Drag-and-drop sorting for lists of records in an Ember 2 application, using JQuery UI’s sortable plugin! A working example is up on GitHub.

I’ve been rebuilding Three D Radio‘s internal software using Ember JS. One aspect is to allow announcers to create playlists to log what they play on air. I wanted announcers to be able to reorder tracks using simple drag-n-drop. In this post I’ll explain how to do it.

Firstly, this post is based on the work by Benjamin Rhodes. However, I found that his solution didn’t work out of the box. Whether that is due to API changes between Ember 1.11 and Ember 2.x, I’m not sure. So what I’m going to do here is bring his technique up to date for 2016 and Ember 2.6.

Starting an Ember Project

I’ll build this from scratch so we have a complete example. You shouldn’t have problems integrating this into an existing project though. So we’ll create a new Ember CLI project called sortable, and install JQuery UI:
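With a 2016-era Ember CLI, jQuery UI came in via Bower (newer toolchains would use npm instead):

```sh
ember new sortable
cd sortable
bower install --save jquery-ui
```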

We need to add JQuery UI to our build as well.
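In ember-cli-build.js, import the library into the vendor bundle (the path below assumes the Bower install above):

```js
// ember-cli-build.js
var EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function(defaults) {
  var app = new EmberApp(defaults, {});

  // Bundle jQuery UI into vendor.js so $().sortable() is available.
  app.import('bower_components/jquery-ui/jquery-ui.min.js');

  return app.toTree();
};
```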

Models

We are going to need a model for the data we are going to sort. Here’s something simple:
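Generate a note model:

```sh
ember generate model note
```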

Inside the note model we’ll have two attributes: the content of the note, and an index for the sorted order:
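A sketch of app/models/note.js using the classic DS attribute style:

```js
// app/models/note.js
import DS from 'ember-data';

export default DS.Model.extend({
  // The text of the note itself.
  content: DS.attr('string'),

  // The note's position in the sorted list.
  index: DS.attr('number')
});
```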

Fake data with Mirage

For the sake of this example, we’ll use Mirage to pretend we have a server providing data. Skip this bit if you have your REST API done.
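Install the addon and tell Mirage about the notes endpoint (these snippets are for the 0.2-era addon; adjust for your version). You’ll also need an empty Mirage model for note, i.e. a mirage/models/note.js that just exports Model.extend() from ember-cli-mirage.

```sh
ember install ember-cli-mirage
```

```js
// mirage/config.js
export default function() {
  // Serve GET /notes from Mirage's in-memory database.
  this.get('/notes');
}
```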

And provide some mock data:
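A default scenario seeds a few notes (the contents are invented for this example):

```js
// mirage/scenarios/default.js
export default function(server) {
  server.create('note', { content: 'Feed the cat', index: 1 });
  server.create('note', { content: 'Water the plants', index: 2 });
  server.create('note', { content: 'Do the dishes', index: 3 });
  server.create('note', { content: 'Take out the recycling', index: 4 });
}
```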

A Route

We will need a route for viewing the list of notes, and a template. Here’s something simple that will do for now:
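Generate the route; I’ll call it notes:

```sh
ember generate route notes
```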

And in here we will simply return all the notes:
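The model hook in app/routes/notes.js just asks the store for everything:

```js
// app/routes/notes.js
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    // Return every note; the template hands them to our component.
    return this.store.findAll('note');
  }
});
```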

A template? No, a component!

We are going to display our notes in a table, but the Sortable plugin also works on lists if that’s what you’d like to do.

You may be tempted to just put your entire table into the list template that Ember created for you. However, you won’t be able to activate the Sortable plugin if you try it this way. This is because we need to call sortable after the table has been inserted into the DOM, and a simple route won’t give you this hook. So, we will instead create a component for our table!
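Component names need a dash in them, so I’ll call this one sortable-table (a name made up for this example):

```sh
ember generate component sortable-table
```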

We will get to the logic in a moment, but first let’s render the table:
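A sketch of app/templates/components/sortable-table.hbs, assuming the notes are passed in as a notes attribute:

```handlebars
{{! app/templates/components/sortable-table.hbs }}
<table>
  <thead>
    <tr>
      <th>Index</th>
      <th>Note</th>
    </tr>
  </thead>
  <tbody class="sortable">
    {{#each notes as |note|}}
      <tr>
        <td>{{note.index}}</td>
        <td>{{note.content}}</td>
      </tr>
    {{/each}}
  </tbody>
</table>
```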

The important part here is to make sure your table contains <thead> and <tbody> elements. We add the class sortable to the tbody, because that’s what we will make sortable. If you were rendering as a list, you would add the sortable class to the list element instead.

Finally, in the template for our route, let’s render the table:
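app/templates/notes.hbs just renders the component and passes the model through:

```handlebars
{{! app/templates/notes.hbs }}
{{sortable-table notes=model}}
```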

We should have something that looks like this:

Our table component rendering the notes

A quick detour to CSS

Let’s make this slightly less ugly with a quick bit of CSS.
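Something quick in app/styles/app.css:

```css
/* app/styles/app.css */
table {
  border-collapse: collapse;
  width: 50%;
}

th,
td {
  border: 1px solid #ccc;
  padding: 0.5em 1em;
  text-align: left;
}

tbody tr {
  background: #fff;
  cursor: move; /* hint that rows can be dragged */
}

tbody tr:nth-child(even) {
  background: #f5f5f5;
}
```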

Which gives us a more usable table:

Just make the table a tiny bit less ugly

Make Them Sortable

Moving over to our component’s Javascript file, we need to activate the sortable plugin. We do this in the didInsertElement hook, which Ember calls for you once the component has been inserted into the DOM. In this method, we will look for elements with the sortable class, and make them sortable!
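In app/components/sortable-table.js (the update handler comes in the next section):

```js
// app/components/sortable-table.js
import Ember from 'ember';

export default Ember.Component.extend({
  didInsertElement() {
    this._super(...arguments);

    // The table is in the DOM now, so jQuery UI can find the tbody
    // we tagged with the sortable class.
    this.$('.sortable').sortable();
  }
});
```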

Persisting the records

At this point we have a sortable table where users can drag and drop to re-order elements. However, this is purely cosmetic. You’ll see that when you reorder the table the index column shows the numbers out of order.

We can now reorder notes, but the index fields are not updated

Open up Ember Inspector and you will see the models’ index is never being updated. We’ll fix this now.

The first step is to store each note’s ID inside the table row that renders it. We will make use of this ID to update the index based on the order in the DOM. So a slight change to our component’s template:
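Only the tbody changes; each row now carries the note’s ID in a data-id attribute:

```handlebars
{{! app/templates/components/sortable-table.hbs (tbody only) }}
<tbody class="sortable">
  {{#each notes as |note|}}
    <tr data-id="{{note.id}}">
      <td>{{note.index}}</td>
      <td>{{note.content}}</td>
    </tr>
  {{/each}}
</tbody>
```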

Next, we give sortable an update function. This gets called whenever a drag-and-drop is completed.
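Back in the component, we pass an update callback when activating the plugin. This is a sketch; updateSortOrder and persistIndices are names made up for this example:

```js
// app/components/sortable-table.js
import Ember from 'ember';

export default Ember.Component.extend({
  didInsertElement() {
    this._super(...arguments);

    this.$('.sortable').sortable({
      // jQuery UI calls update after every completed drag-and-drop.
      update: Ember.run.bind(this, this.updateSortOrder)
    });
  },

  updateSortOrder() {
    // Walk the rows in their current DOM order (the new sorted order)
    // and record a 1-based index against each note's ID.
    let indices = {};
    this.$('.sortable tr').each(function(position) {
      indices[Ember.$(this).data('id')] = position + 1;
    });

    // persistIndices is defined in the next snippet.
    this.persistIndices(indices);
  }
});
```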

This function iterates over all the sortable elements in our table. Note that we get them from JQuery in their order in the DOM (i.e. the new sorted order). So we create an array and, using each item’s ID, store the new index for each element. Note that I’m adding 1 to my indices to give values from 1 instead of 0. The next step is to use this array to update the records themselves:
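And the hypothetical persistIndices helper, added to the same component:

```js
// app/components/sortable-table.js (continued)
persistIndices(indices) {
  this.get('notes').forEach((note) => {
    const newIndex = indices[note.get('id')];

    // Only touch records whose position actually changed.
    if (note.get('index') !== newIndex) {
      note.set('index', newIndex);
      note.save();
    }
  });
}
```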

We update and save the record only if its index has actually changed. With long lists, this greatly reduces the number of hits to the server. (Wish list: a method in Ember that saves all dirty records with a single server request!)

Now when we reorder the index fields are updated correctly!

And we’re done! A sortable list of Ember records that persists those changes to the server*.

Have a look on GitHub!

(Note: if you’re using Mirage, you’ll get errors about saving records, because we haven’t given Mirage any code to handle the PATCH requests.)

We The Unseen

I worked for South Australia’s youth circus organisation Cirkidz on their production We The Unseen. Using the same 3D projection mapping technology I developed at UniSA, and expanding on the work I did with Half Real, we built several interactive projection-based special effects to complement the performance. Let’s have a look at the Storm.

So what’s going on here? We have two Microsoft Kinects, either side of the stage, tracking the performers in 3D. We can then use that information to make projected effects that respond to the performers’ movements.

For Storm, I created a particle simulation that would provide a (deliberately abstract) storm effect. We have two particle emitters: one at the top of the triangle at the back of the stage, and another at the front. This gives the illusion that particles are travelling from the sail out onto the floor. Then we have a couple of forces attached to the performer. The first is a rather strong attractor, which draws the particles to the actor. The second is a vortex, which manipulates the particles’ direction.
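To make that concrete, here is a rough sketch of the two forces in isolation; this isn’t the production code, just the shape of the maths. Each frame, every particle is pulled towards the tracked performer and pushed sideways around a vertical axis through them:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Attractor: accelerate the particle towards the performer's tracked position.
Vec3 attractorForce(Vec3 particlePos, Vec3 performerPos, float strength) {
    Vec3 toPerformer = sub(performerPos, particlePos);
    float d = length(toPerformer);
    if (d < 0.001f) return {0.0f, 0.0f, 0.0f};
    return scale(toPerformer, strength / (d * d));  // falls off with distance
}

// Vortex: push the particle tangentially around a vertical axis through the
// performer, which gives the swirling motion.
Vec3 vortexForce(Vec3 particlePos, Vec3 performerPos, float strength) {
    Vec3 toParticle = sub(particlePos, performerPos);
    // Cross product of the up axis (0,1,0) with toParticle gives a tangent.
    Vec3 tangent = { toParticle.z, 0.0f, -toParticle.x };
    return scale(tangent, strength);
}
```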

The result is a particle system that appears to dance with the performer.

We The Unseen’s projected effects were developed over a six-week period. The first step was to figure out what was actually possible to track with the Kinect. These are circus performers, not people in their living rooms doing over-the-top gestures!

Tracking multiple performers

Having multiple actors on stage is fine, even on unicycles:

Skeleton tracking not so much

The main problem was the size of the stage. For this reason we used two Kinect devices, connected to separate PCs, which sent their tracking data over a simple, custom network protocol. Calibration ensured the tracking data from both devices ended up in the same coordinate system. And again, due to the size of the stage, there was almost no interference between the two devices.

In fact, if there was more time, we would have tried four devices.

One of the things we thought about for Half Real, but never got around to, was using projectors as dynamic light sources. In We The Unseen, we had a chance:

It mostly works, but you start to see the limits of the Kinect. No matter how precisely you calibrate, depth errors start to cause misalignment of the projection. There’s also a bit of a jump when the tracking switches devices. But overall, it works pretty well.

In a smaller space, you could do some very nice lighting effects with projectors and decent tracking data.

Another problem discovered during Half Real was controlling the projection system. The operator was busy dealing with lighting cues, sound cues, and then an entirely separate projection system.

For We The Unseen, I had time to integrate the projection system with QLab, using Open Sound Control. This allowed the operator to program the show exclusively in QLab, and OSC messages told the projection system what to do.

There were additional effects that we built but that didn’t make it into the show. For example, we had this idea, for some of the acrobatic scenes, of creating impact effects for when performers landed from great heights. The problem here was mostly aesthetic. The lighting designer of course wants to light the performers for the biggest visual impact, and for these acrobatic scenes there was simply too much stage lighting for the projectors to cut through. Projectors are getting brighter and brighter, and cheaper, but they still can’t compete against stage lighting. So those effects we left unseen.

We The Unseen – pretty amazing youth circus, with a couple of special effects by me where they worked.

Tech runs

Syntho

Web Audio is an amazingly powerful new Javascript API for building complex audio and music applications in the web browser. I wanted to check it out, so I built Syntho. You can try Syntho out right now!

Syntho is a monophonic synthesizer inspired by the Korg Volca Bass. It features 3 oscillators with sine, saw, triangle, and square wave shapes over 6 octaves. Each oscillator can be detuned independently, giving nice/horrible pulsing as the oscillators go in and out of phase.

A low pass filter with resonance shapes the sound of the oscillators. The filter self-oscillates if you push the resonance way up.

There is a low frequency oscillator that can be set to affect the pitch of the sound generating oscillators, or the filter cutoff point. The LFO supports triangle and square waveshapes.

Finally, there is an ADSR envelope generator. The ADSR can be set to control the amplitude of the sound, or the filter cutoff point, or both.
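The whole thing is built from stock Web Audio nodes. This isn’t Syntho’s actual source, just a minimal sketch of the same idea: an oscillator feeding a resonant low pass filter, an LFO wired into the filter cutoff, and a crude envelope on the output gain.

```js
// oscillator -> low pass filter -> amp (gain) -> speakers,
// with an LFO modulating the filter cutoff.
const ctx = new (window.AudioContext || window.webkitAudioContext)();

const osc = ctx.createOscillator();
osc.type = 'sawtooth';
osc.frequency.value = 110; // A2

const filter = ctx.createBiquadFilter();
filter.type = 'lowpass';
filter.frequency.value = 800;
filter.Q.value = 10; // resonance

const amp = ctx.createGain();
amp.gain.value = 0;

// LFO: a slow oscillator whose output wiggles the filter cutoff.
const lfo = ctx.createOscillator();
lfo.type = 'triangle';
lfo.frequency.value = 4;
const lfoDepth = ctx.createGain();
lfoDepth.gain.value = 400; // cutoff swings +/- 400 Hz

lfo.connect(lfoDepth);
lfoDepth.connect(filter.frequency);

osc.connect(filter);
filter.connect(amp);
amp.connect(ctx.destination);

osc.start();
lfo.start();

// A crude attack/release envelope on the amp when a note is played.
function playNote(durationSeconds) {
  const now = ctx.currentTime;
  amp.gain.cancelScheduledValues(now);
  amp.gain.setValueAtTime(0, now);
  amp.gain.linearRampToValueAtTime(1, now + 0.05);           // attack
  amp.gain.linearRampToValueAtTime(0, now + durationSeconds); // release
}
```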

Syntho is completely modern Javascript. I use ES6 transpiled with Babel, Handlebars for keeping the HTML sane, and Twitter Bootstrap because I’m lazy with CSS.

The inner workings of Syntho and web-audio will probably be the subject of another series of video tutorials. But for now, the code is on GitHub.

Graveyard Ghoul

Three D Radio, a community radio station I help run in Adelaide, had a problem. There are sometimes no announcers available for the late night and extremely early morning timeslots. As all the announcers are volunteers, sometimes things come up and an announcer can’t make it in to do their show. The station switched over to a 5 CD changer and played pre-recorded shows in these situations. There were two glaring problems with this approach:

  1. The station’s volunteers couldn’t create new pre-recorded shows fast enough. This meant that listeners would end up hearing the same shows again, which is lame.
  2. Five CDs isn’t always enough content to make it through the night. This meant that you could listen to a show, go to bed, wake up the next morning and hear the same show. Even worse!

I built the Graveyard Ghoul to replace the CD changer with a never ending assortment of randomised music. Here’s how and why.

Graveyard Ghoul on GitHub!

Existing options

So why build something new? Let’s talk about existing options.

We could just load up an MP3 player with music, switch it to random, and forget about it. However, this would lead to a poor on-air sound, and would probably have the station breaking the law.

Australian community radio law mandates that at least 20% of the music broadcast be Australian. Three D goes further than this, with self-imposed quotas of 40% Australian, 20% South Australian, and 25% music featuring female artists.

We also need to broadcast messages and station IDs at regular intervals, so listeners know what they’re listening to. An MP3 player wouldn’t do this well.

Most radio stations use some kind of playout software to automate the on air sound. Most of these can be switched into an automatic mode and run the station without anybody present at all. Three D could have taken this approach.

The quotas again would be a problem. Metadata for all the music in Three D’s collection is stored in a PostgreSQL database that was implemented long before I joined the station. Most of the MP3 files themselves have either no ID3 tags or incomplete ones. So we would have to hack/script any playout software to interface with the existing database, or somehow shoehorn all the metadata into tags in the files themselves.

A full-blown playout software solution was also deemed too heavyweight to put into use. Three D is one of the few stations that don’t use one. We needed this solved fast, and trialling, purchasing, deploying, and migrating to new software was going to be too much effort for a station of 130 volunteers.

Requirements

So I decided to try building something new, and if it worked out, suggest the station put it into use. The requirements I had in mind were:

  • Super simple user interface (ideally one button to press play)
  • Play a randomised selection of music from Three D’s music library
  • Meet all of Three D’s music quotas
  • Regularly broadcast station IDs between songs
  • Log the music played to the station’s logging system (again, a legal requirement)
  • Run on Linux (the on-air computer runs OpenSUSE)
  • Implement it in Python. The main reason for this is that there are other volunteers at the station who can code in Python, and they could look into things if I wasn’t around. Otherwise I would have used Java or Qt.

The Solution

After hacking away for a couple of nights after work, I had enough to leave running for a week playing music non-stop. That soak test mattered: obscure bugs could mean the radio station stops broadcasting, and nobody wants to hear silence on their radio. It now runs every night, and whenever an announcer doesn’t show up.

The graveyard ghoul

Interesting bits

The Ghoul is a fairly simple, albeit important, piece of software. The main interesting bit is the scheduler, which decides what to play next.

The config file allows the programming committee to tweak the sound. For example, how many songs should be played before a station ID, or what the quotas are for Australian and South Australian music, music featuring female artists, and demos.
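The config is a plain text file; the keys and values below are illustrative rather than the Ghoul’s actual names:

```ini
; ghoul.conf (example values only)
[quotas]
australian = 40        ; percent of recent plays that must be Australian
south_australian = 20
female = 25
demo = 10

[stings]
songs_between_stings = 4
randomness = 2         ; +/- this many songs, so the pattern isn't monotonous
```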

The Ghoul plays 5 totally random tracks when it first starts. This seeds the playlog with enough information to then start working towards meeting the quotas.

Stings/station IDs are inserted into the play queue with a bit of randomness, so it isn’t just a monotonous 4 songs, sting, 4 songs, sting, etc. The randomness can be tweaked through the config file.

Finally, we make sure the MP3 actually exists. This is a problem because some of the music in the catalogue database is from vinyl, or simply hasn’t been ripped to MP3 yet. The database is a bit of a mess, so the Ghoul checks that there is actually a file there to play. This is also why the method sits in a while True loop, as sketched below.
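Roughly, the scheduler has this shape. It is a sketch rather than the Ghoul’s real code, and the catalogue and playlog helpers are invented for the example:

```python
import os


class Scheduler:
    """Sketch of the Ghoul's track selection (illustrative, not the real code)."""

    def __init__(self, catalogue, playlog, config):
        self.catalogue = catalogue  # access to the station's track database
        self.playlog = playlog      # what has been played so far
        self.config = config

    def next_track(self):
        # Keep drawing candidates until one can actually be played.
        while True:
            # Bias selection towards whichever quota is currently furthest behind.
            quota = self.playlog.most_underserved_quota(self.config.quotas)
            track = self.catalogue.random_track(matching=quota)

            # The catalogue predates the Ghoul and lists plenty of music that
            # only exists on vinyl or was never ripped, so make sure the MP3
            # is really on disk before queueing it.
            if track is not None and os.path.exists(track.path):
                return track
```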

Other than the scheduler, the software makes use of threads to ensure the GUI stays snappy, and Python’s requests library to handle logging to the station’s intranet.

Success!

Half Real: an Interactive, Projected World

Half Real was a live action, interactive theatre production I worked on with The Border Project back in 2011. A case study article on Half Real was published at the 2012 ISMAR conference. This post is a more human readable summary of my work.

Half Real is based on a murder investigation, where the live audience votes on how the investigation proceeds. On stage, actors are immersed in a virtual world projected onto the set. Actors were tracked in 3D on set, and the projections reacted to their actions and movements. I built the software that drove the projected content.

I worked on Half Real, building the projection software over a four month period. The system builds on technology I was developing at the University of South Australia during my PhD. The code was all in C++ with OpenGL, running on Linux.

The Stage as a 3D Environment

The projection system for Half Real needed to track actors as they moved about on stage and have projections appear attached to the actors. Simple 2D projection, which is common in performance art, was not going to cut it. Instead, the entire set is modeled as a 3D scene. A calibration process figures out the position and orientation of the projectors, and creates a perspective correct projection in OpenGL. Rather than creating unique content for each projector, the content is created for the scene, and then we figure out what the projectors can see.

This means art assets only need to be created once, regardless of the number of projectors. In addition, more projectors can be easily added. Pre-show setup is simpler, as projectors only need to be roughly aligned; the calibration algorithm takes care of the rest.

3D model of the set
Projected content on the physical set

Actor Tracking

Projected text attached to the performer

We used a Microsoft Kinect with OpenNI for tracking actors on set. The Kinect works well for theatre; it uses IR, so it isn’t affected by stage lighting, and it’s cheap. However, there are a few limitations. The tracking isn’t precise enough to project onto an actor, so we project onto the set near them. Also, the resolution is low and it can’t track really small objects.

While the Kinect is quite good at tracking people, it is not able to reliably identify them. Actors enter and exit the set many times throughout each performance. We needed the system to automatically attach the correct information to the correct actor. Through the course of rehearsals, catch areas were identified in each scene. These catch areas are regions on the set that an actor would always walk through. Once the tracking system registered an actor passing through a catch area, that actor was associated with the correct virtual information. In addition to catch areas, the system also needed dead areas: regions where tracked objects would never be associated with virtual information.

Pre-show projector calibration

The set of Half Real was not simply a static scene: it contained a door and window that actors could use and walk or climb through, and a chair that was moved about on stage. The detection algorithm in the tracker would sometimes incorrectly register these objects as actors. By marking these areas as dead areas, the system would ignore these objects when associating virtual information. As with catch areas, the dead areas were defined for each scene individually, as in some scenes actors moved into the range of the window or door when the virtual information needed to appear.

The only real problem I found with the Kinect was a bug in OpenNI which caused a segfault. Segfaults are always bad, but when your entire set is a projected environment they are even worse. Luckily, I was able to do a quick fix before opening night.

Interactivity

All the possible paths through the show

Half Real was an interactive murder investigation. The audience were asked “Who killed Violet Vario?” Each scene would uncover clues about the murder, and the audience voted on what to investigate next. This meant there was around six hours of scenes for an hour-long show. A graph of the show’s structure is shown above.

Each member of the audience had a ZigZag controller, developed by Matthew Gardiner, in their hand. During a scene, vote options would appear in the projected world. At the end of the scene, the audience would vote on where the investigation would go.

 


There was one-way communication from the ZigZag voting system to the projection system. The ZigZag system was responsible for communicating with the devices, tallying votes, and keeping track of which scene the show was in. The projection system regularly polled the ZigZag system over HTTP to find out whether a scene change was necessary.

Resource Management

Half Real. Photo courtesy of Chris More

All those scenes meant a lot of media resources. Half Real’s projected content consisted of images, video files, and procedurally generated content, such as the vote options. In all, 38GB of assets, mostly video, were used during the show. A level loading approach was used to manage these assets.

Each scene was described in an XML file, which listed the assets required for the scene, transitions and events, and so on. A content manager was responsible for freeing assets that were no longer needed and loading new ones as required. Assets that were needed in consecutive scenes were reused, rather than reloaded.
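A made-up example of the idea (not an actual scene file from the show):

```xml
<scene name="interrogation">
  <assets>
    <video id="rain"     src="video/rain_loop.ogv"/>
    <image id="evidence" src="images/evidence_board.png"/>
  </assets>
  <events>
    <!-- Show the evidence board when an actor passes through catch area 1 -->
    <event trigger="catchArea1" show="evidence"/>
  </events>
</scene>
```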

Note the additional inUse flag in the sketch below. Half Real had smooth transitions between scenes. We didn’t want to delete an asset that was currently being projected, so we left it in RAM until the next transition.
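In C++ terms the bookkeeping looked something like this (a sketch of the idea, not the production code):

```cpp
#include <string>

// One entry in the content manager's table of loaded assets.
struct Asset {
    std::string  path;           // file the asset was loaded from
    unsigned int textureHandle;  // OpenGL texture holding the decoded data
    bool requiredNextScene;      // listed in the next scene's XML description
    bool inUse;                  // currently being projected

    // Safe to free only once the asset is neither needed by the next scene
    // nor still on screen; the actual free happens at the next transition.
    bool canUnload() const { return !requiredNextScene && !inUse; }
};
```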

Success?

Who killed Violet Vario?

Half Real successfully completed a tour of regional South Australia, before playing a three-week, sold-out season as part of the Melbourne Festival in 2011. That achievement is proof that the technology and software developed were a success. However, as with any production, there are lessons learned and room for improvement.

One of the major issues that had to be overcome was reliability and robustness. In Half Real, if the projection software crashed, the stage went dark. The system had to function correctly day after day, for extended periods of time. Decoupling subsystems was one of the most important factors in making the system robust. For example, it was important that the projection system kept running if the tracking system stopped responding.

Another issue was sequencing the content to be projected in each scene. The projection system used XML files for each scene. This effectively meant there was one scene description for the projection, and another for lighting and sound. If there had been time, I would have made the projection system interoperable with existing stage management software, such as QLab, which would have reduced the duplication and made modifying the sequences of projected content much easier.

While Half Real made an important step in using SAR for interactive performance art, there are many more possibilities to be explored. For example, using projectors to simulate physical light sources, such as follow spot lights that automatically track the actor. Or, using the projectors to project directly onto the actors in order to change their appearance.

Turns out, I did some of these things working with Cirkidz!