Where has ‘the truth’ gone?

After discussion with my employer, I’ve decided to remove my previous post – “The truth about the Gatwick ATC closures” – from this blog. To be clear, I’ve not been put under any pressure to do this, but it has been pointed out that I could inadvertently be damaging the working relationship with our customer, something I had never intended. This is a difficult time for our industry as a whole – now more than ever we need to be working together.

My original post was born of frustration at the incredible amount of false information being presented as fact in the media. Call it ‘fake news’, call it ‘sensationalist journalism’, call it what you will – these articles serve as nothing but clickbait for the general public, but seriously damage the reputation of everyone working within our industry. The outpouring of positive feedback to my post was immense – it’s clear to me that the vast majority of aviation professionals are fed up with this rhetoric. I go into work every day and sit beside people putting all their effort into providing an outstandingly safe, orderly and expeditious air traffic service – it’s time that those with an influential voice used it to defend the ‘coal face’ workers from this kind of onslaught in the media.

To everyone who read my post – please use any platform you have to be a positive influence in guiding the debate in the media. The aviation industry faces difficult times in the near future; we must ensure that well-sourced facts form the basis of future discussions.

To the airport – The article was written as a defence of Gatwick; if it had the opposite effect, then I apologise.

To my employer – Don’t just pay attention to my words, pay attention to everyone who stood up and applauded them as long overdue. You have a highly motivated workforce who will always provide the best possible service no matter what the circumstances – don’t let that good-will be eroded. Oh, and don’t worry, these posts won’t become a regular feature!

Monitoring Bitcoin with Nagios

I’ve recently resurrected an old Bitcoin wallet (mainly because the value has increased exponentially since I was first on the network eight years ago!), and now run a Bitcoin (BTC) and a Bitcoin Cash (BCH) node locally. Fitting in with my OCD-esque habit of monitoring all the things, I felt the need for some Nagios integration!

While there are a few plugins out there for Ethereum, I couldn’t find an equivalent for the Bitcoin networks that allows you to monitor a local node – so I’ve created one!
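To give a flavour of the approach, here’s a minimal sketch of such a check (not the exact plugin – see below for that), assuming Bitcoin Core’s JSON-RPC interface is enabled on the default port; the credentials are placeholders:

#!/usr/bin/env python
# Minimal sketch of a Nagios check against a local Bitcoin Core node.
# Assumes JSON-RPC is enabled in bitcoin.conf; the credentials are placeholders.
import sys
import requests

RPC_URL = "http://127.0.0.1:8332"      # default Bitcoin Core RPC port
RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholder credentials
MAX_BLOCKS_BEHIND = 2                  # tolerated gap between headers and blocks

def main():
    try:
        resp = requests.post(
            RPC_URL,
            json={"jsonrpc": "1.0", "id": "nagios",
                  "method": "getblockchaininfo", "params": []},
            auth=RPC_AUTH, timeout=10)
        info = resp.json()["result"]
    except Exception as exc:
        print("CRITICAL - cannot reach node: %s" % exc)
        sys.exit(2)

    behind = info["headers"] - info["blocks"]
    msg = "height %d, %d block(s) behind known headers" % (info["blocks"], behind)
    if behind > MAX_BLOCKS_BEHIND:
        print("WARNING - %s" % msg)
        sys.exit(1)
    print("OK - %s" % msg)
    sys.exit(0)

if __name__ == "__main__":
    main()

Nagios only cares about the printed status line and the exit code (0 for OK, 1 for warning, 2 for critical), so a script like this drops straight into a standard command definition.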

The plugin is available on GitHub; any expansions, improvements or fixes would be very welcome!

Controlling a Sony Bravia TV with Google Home

Continuing on the theme of ‘controlling everything with my voice’, I’ve successfully integrated yet another appliance into my setup – this time my TV!

I’ve owned a 2015 Sony Bravia TV for around 18 months now, and while the interface does slow to a crawl at times, overall I’m very impressed with the Android TV integration – I’ve gone as far as switching from a dedicated Kodi machine to running a Plex server and using the native Plex Android TV app. However, at the time of writing, there isn’t a way of controlling low-level functions of the TV (power, volume etc.) from Google Home; you can only use the Chromecast integration. While I’ll admit it’s not entirely practical to navigate media entries on the TV with your voice, I find myself wanting to switch the TV on/off from another room, or pause playback with my voice rather than scrambling for a remote when my phone rings.

Unfortunately it doesn’t appear that the Bravia TV offers any direct API for issuing commands over the network, so instead I looked for a device that could emulate the remote control. After a bit of searching I came across the Broadlink RM Mini 3 (Amazon link) – I was initially a bit sceptical given the low price point and very foreign documentation, but I was pleasantly surprised by how easy the setup was, although the Android application leaves a lot to be desired. The next step was to find a way of issuing commands to the Broadlink device over my network. While there is a handy Python library, and even an extension for the RM Mini 3 capable of issuing IR commands, neither of these allows commands to be sent over an HTTP connection.

Given that the easiest method of integrating with the Google Assistant/Google Home is through their IFTTT channel, an HTTP interface was going to save a lot of effort, so I set about creating one myself! The resulting code is a fork of the BlackBeanControl repository mentioned above, with an additional Python script using web.py to expose an interface for sending commands via an HTTP request. I’ve placed this script behind my home Apache proxy (sufficiently secured to prevent the entire internet being able to turn my TV on and off), and used the IFTTT Webhook channel to make the appropriate requests when triggered.
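For the curious, the shim boils down to something like the sketch below. This isn’t the actual fork – the command store is a placeholder, and I’ve used the library’s discovery helper rather than hard-coding device details (constructor arguments vary between python-broadlink versions):

#!/usr/bin/env python
# Simplified sketch of an HTTP wrapper around python-broadlink.
# The command store is a placeholder - populate it with IR packets
# captured via the library's enter_learning()/check_data() calls.
import binascii
import web
import broadlink

# Find the RM Mini 3 on the local network and authenticate with it.
DEVICE = broadlink.discover(timeout=5)[0]
DEVICE.auth()

# Learned IR packets as hex strings, keyed by a command name.
COMMANDS = {
    # "tv_power": "<captured packet hex>",
}

urls = ("/command/(.+)", "Command")

class Command:
    def GET(self, name):
        packet = COMMANDS.get(name)
        if packet is None:
            raise web.notfound()
        DEVICE.send_data(binascii.unhexlify(packet))
        return "OK"

if __name__ == "__main__":
    web.application(urls, globals()).run()

With that running behind the proxy, the IFTTT Webhook action just has to hit /command/<name> for whichever phrase triggers it.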

I’m somehow becoming even lazier than I ever imagined – now I don’t even have to reach for the remote to continue my binge-watching!

Talking to a Tesla through Google Home

The Google Home had its long-awaited (at least by me!) UK release last week, and I was delighted to get my hands on one of the first. Since then I’ve gradually been linking it to more and more devices around my home (more blog posts on that subject to follow), but having played around with the Tesla API recently, I only had one integration on my mind! Sadly there’s nothing official that allows you to talk to your Tesla Model S or X through Google Home – although I hope something will be released in future – so I took it upon myself to build something that would allow some basic back-and-forth conversations with my car.

While I’m continuing to work on the project, it’s now at the stage where hopefully others can expand and improve on it, so I’ve written up all the details on the TeslaVoice project page, and uploaded the associated files to GitHub.

Hack my ride

Just before Christmas, I had the pleasure of collecting my new car – a Tesla Model S! This is something I’d been lusting after ever since it was announced, and I finally pulled the trigger on an order back in September. I can honestly say that I have never waited so impatiently for anything in my life, and was absolutely delighted to finally take ownership on Christmas Eve – a rather extravagant early Christmas present! The driving experience is absolutely sublime; the ability to effortlessly transition from cruising to having your insides rearranged by the breathtaking acceleration is something that has to be experienced to be believed!

Sadly my first week of ownership didn’t go as smoothly as hoped – I drove the car up to Manchester on Boxing Day to visit my parents, where I discovered that the supplied charging cable wasn’t functioning properly, something that’s fairly critical for a fully electric car! Due to holiday opening times I wasn’t able to get this replaced as quickly as I would have liked, but to their credit Tesla did replace the cable without hassle when I was actually able to make it to a dealership. Still, this left a slightly sour taste about the whole experience, and curbed my enthusiasm somewhat – until this week, when I’ve had time to enjoy the car some more!

Putting a few more miles on the clock quickly reminded me why I’d bought the car in the first place, and then I started to delve into the possibilities of the API that’s unofficially offered by Tesla (their Android & iOS apps use it). There are a variety of services that take this data and turn it into nice graphs and stats for you, but I was more interested in how I could integrate it into my blossoming “smart home” setup. My first venture has been a script that scrapes Google Calendar for events and then sets the auto-conditioning on the Tesla a set time before each event, meaning the car is toasty warm/nicely chilled when you set off for work! I’ve put the source code on GitHub for the world to laugh at and hopefully improve!
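The calendar half lives in the GitHub script; the Tesla half boils down to something like this sketch, using the community-documented (and entirely unofficial) owner API – the bearer token is a placeholder you’d obtain separately:

#!/usr/bin/env python
# Sketch of the pre-conditioning call against Tesla's unofficial owner API.
# The access token is a placeholder - obtaining one is a separate OAuth step.
import requests

API = "https://owner-api.teslamotors.com/api/1"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

def first_vehicle_id():
    # An account can hold several cars; just take the first one.
    resp = requests.get(API + "/vehicles", headers=HEADERS, timeout=10)
    return resp.json()["response"][0]["id_s"]

def start_preconditioning(vehicle_id):
    # The car is usually asleep, so poke it awake before sending the command.
    requests.post(API + "/vehicles/%s/wake_up" % vehicle_id,
                  headers=HEADERS, timeout=10)
    resp = requests.post(
        API + "/vehicles/%s/command/auto_conditioning_start" % vehicle_id,
        headers=HEADERS, timeout=10)
    return resp.json()["response"]["result"]

if __name__ == "__main__":
    print(start_preconditioning(first_vehicle_id()))

The script simply fires this a set number of minutes before each event it finds in the calendar.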

This is definitely only scratching the surface of what’s possible with the Model S – I’m thinking of future scripts including:

  • Turning on home heating when leaving work
  • Estimating driving distance based on the day’s events, and setting the charge limit accordingly
  • Monitoring power draw while plugged in, for cost estimation

What other car manufacturer gives you access to so much data to nerd out over?! I’m enjoying this car without even having to sit in the driver’s seat!

Upgrading my home network

For a number of years, I’ve been using a TL-WR1043ND running DD-WRT as my home router, even going as far as replicating the same setup for friends and family, as it struck a happy midpoint of being powerful enough to be useful, but also simple and stable enough for the slightly less technically literate to manage. The DD-WRT setup was surprisingly simple, and I’ve been reasonably impressed by the performance and capabilities of the software, even on such a basic consumer model of router. That said, around 12 months ago I realised that my home network was rapidly outgrowing this basic setup, and I felt the need to lean towards something a bit more “prosumer”. There are quite a few different companies targeting this market, but one in particular stood out to me – Mikrotik.

Around 12 months ago I took the plunge and bought an RB2011UiAS-2HnD-IN (that’s quite the product name!) – one of their mid-range models, which seemed quite reasonably priced – and got stuck into learning the intricacies of RouterOS. Their own WinBox software provides a very usable GUI for configuring their hardware, granting a view of the myriad of features on the board while keeping the learning curve shallow enough to avoid you becoming swamped by options. I’ve gradually tweaked and enabled more and more services, to the point where the single device is providing:

  • PPPoE to my ISP
  • DHCP
  • DNS (local and external)
  • Firewall
  • WiFi (more on this below)
  • L2TP connection to a VPN provider, with certain traffic automatically routed through this
  • … and a whole lot more!

Additionally, when I moved into my new house earlier this year, I set about removing the need for the HomePlug units connecting various devices in different rooms, as I found these tended to be unstable, causing slow transfer speeds and a high rate of dropped connections. I ended up running CAT6 from my study through to several other rooms in the house (I may blog about that project some time in the future!), which eliminated the need for the HomePlugs, but highlighted how poor my WiFi setup was (having previously blamed this on the dodgy HomePlug connections). While researching ways to improve coverage, I stumbled upon a feature of RouterOS that I hadn’t yet taken advantage of – CAPsMAN (Controlled Access Point System MANager). This essentially allows you to delegate control of various MikroTik device radios to a central ‘manager’, which pushes out the WiFi configuration to create a seamless network across all access points. I picked up a couple of Home Access Points (hAP) and set these up as slaves to CAPsMAN running on the main router (as well as delegating the router’s own radios to CAPsMAN – not something that’s officially recommended, but it seems to work for me), and I haven’t had any complaints about sub-par WiFi performance since!

My next step involves upgrading the heart of my network to something with a few more gigabit speed ports – I’ve already run out of capacity in my “rack” (a re-purposed IKEA bookshelf) – so I’m looking at getting a CRS125-24G-1S-2HnD-IN (there we go again with the brilliant product names!) to act as the core router, and demoting the current RB2011UiAS-2HnD-IN to act as a switch and access point in the living room instead of the “dumb” switch in there currently.

While I realise there are quite a few alternative offerings coming to market that simplify home networking (Google WiFi, Ubiquiti UniFi etc.), I’m more than happy with what Mikrotik have to offer in terms of both the hardware and the software, and I continue to be impressed by how straightforward yet capable my home network has become now that there’s something more powerful behind it. I might even start suggesting an upgrade to my parents’ network!

Unable to load libcec.so on Kodi

As part of a complete revamp of my home media setup (blog post coming soon on that!), I found myself trying to connect a Sony TV to an Intel NUC running Kodi v16 via one of PulseEight‘s brilliant USB-CEC Adapters. However, no matter what I tried (combinations of cables, all the settings on the TV, different ports on the NUC etc.) I could not get Kodi to recognise the adapter. Despite the Kodi Wiki page on CEC suggesting that this should be plug-and-play, I got no indication from either the TV or Kodi that the CEC dongle was being recognised.

After quite a lot of searching in the wrong place (at first I was convinced that my fresh install of Kodibuntu wouldn’t be the root cause), I found a couple of log lines that finally pointed me in the right direction:

DEBUG: Loading: libcec.so.3.0
ERROR: Unable to load libcec.so.3.0, reason: libcec.so.3.0: cannot open shared object file: No such file or directory

It turns out that Kodi is built against libcec.so.3.0, but the version in the repository (which I believe was bundled, or at least suggested, by Kodibuntu) is 3.1. Kodi should (theoretically) be built against libcec.so.3 (note the missing .0), which the appropriate symlinks would have taken care of; however, it seems there are some changes between 3.0 and 3.1 that Kodi doesn’t like. Some simple switching of the links (I tried creating /usr/local/lib/libcec.so.3.0 -> /usr/local/lib/libcec.so.3.1, which in hindsight isn’t a great idea) caused various crashes that I wasn’t prepared to start debugging – so instead I used the suggestion in this forum thread:

$ apt-get install --reinstall libcec3=3.0.1-1~trusty

This reinstalls the old version of libcec on your machine. After a quick reboot, Kodi instantly recognised the CEC dongle as promised, and it has been working beautifully ever since!

Puppet-ing my home network

For just over 12 months, I’ve been using Puppet to configure the various machines I have dotted around my home network. The initial desire to move to an automated configuration management tool was to keep tabs on the various requirements across the pieces of software I run, but I’ve now reached the point where all deployments of new hardware and software (both my own, and third party) are managed through Puppet, and life is much simpler (most of the time!). I’ve been meaning to write this post for a while, mainly as a “getting started” point for anyone thinking about embarking on a similar journey – hopefully it will help people avoid making some of the same mistakes I made along the way!

Firstly, a little about my network, to give some of the points context. Over the last 5 years, I’ve evolved from having a single desktop machine to today having the same desktop, 2 microservers (one for mass storage, the other for backup, services and acting as the Puppetmaster), a Home Theatre PC, and 7 Raspberry Pis in various guises (an alarm clock, appliance monitors and a TV encoder among them). All of these machines were originally configured manually, and re-imaged from scratch every time something went wrong, leading to endless pages of scribbled notes about various requirements and processes to get things working. Now, all of them pull a catalogue from a single Puppetmaster, which I update as requirements change. Adding a new host is a matter of installing a fresh operating system, installing Puppet, and adding a few lines to the Puppetmaster detailing what I want installed – the rest is taken care of!

I’ve ended up using Puppet for much more than I originally imagined, including:

  • SSH Keys
  • MySQL hosts & databases
  • Apache site configuration
  • Tomcat configuration (auto-deploying apps is on my to-do list)
  • Sendmail
  • Automated backups
  • Media sorting
  • ZNC configuration

Plus even more besides that! Most bits of my own software run through supervisord, which is also configured through Puppet.

The first, and arguably most important, lesson I learned through my Puppet-isation process is to keep the configuration in some sort of version control system. Initially I did not keep track of the changes I was making to Puppet manifests, and requirements that were added at 3am seemed completely illogical in the cold light of day. By self-documenting the manifests as I went along, and tracking changes through my own Subversion repository, I can look back at the changes I’ve made over time and save myself the hassle of accidentally removing something essential. I’ve lost track of the number of times I’ve had to revert to a previous configuration, and having a system in place to do this automatically will definitely help you in the long run!

While writing your configuration, be explicit about your requirements. A lot of the modules and class definitions I wrote early on did not use any of the relationships you can define between resources, and as a result I’d often be bitten by Puppet attempting to configure things in an illogical order. For the sake of adding a few arrows to your manifests, it’s worth using these just to avoid the head-bashing when a configuration run fails intermittently! I still run into problems when configuring a new host where a package I require isn’t installed in time to start up a service – using relationships from the ground up will hopefully stop this from ever becoming an issue.

Arguably the second most useful tool I installed (after Puppet itself) is the Puppet Dashboard. This fantastic tool pulls in the reports that Puppet generates and spits them out in a very readable format, allowing you to get straight to the heart of what’s causing failed runs rather than having to dive through the depths of the raw logs. It took me a while to get around to installing this, and I honestly regret not doing it straight away – it has saved me an incredible amount of time since. A word of warning though – the reports can take up an awful lot of space (my dashboard database is 400MB+ with only the last 3 months’ data stored) – make sure your SQL server is up to the task before starting!

While installing things like the Puppet Dashboard, Hiera and the like, I’ve taken to ‘Puppet-ing my Puppet’, by which I mean writing the configurations for these tools into the Puppet manifests themselves. Initially this seemed very strange to me – I didn’t want some almost-sentient piece of software configuring itself on my machine! However, the client will quite happily run on the same machine as the server to configure itself (in a strange cyclical dogfooding loop). There are plenty of third-party modules available that will do a lot of this for you, but I ended up writing my own for several things – as long as the installation is managed somehow, you’re going to have a much better time!

Linked to that, and my last tip for this post: write your own modules wherever you can. For the first few months after I started using Puppet, I tried to cram everything imaginable into the main manifest directory, and barely used templates and source files at all. Subsequently I’ve learned to separate concerns wherever possible, which leads to much cleaner code in the module definitions, and much nicer-looking node definitions as well. Third-party modules have been a mixed blessing for me: some are coded extremely well to allow multi-platform support, but others are hard-coded for platforms alien to mine, making them useless to me. I’d absolutely recommend looking for someone else who’s done the hard work first, but don’t be afraid to roll up your sleeves and write a module yourself. Maybe even post it online for other people to enjoy! (I’m being very hypocritical in making that last point – none of my modules have made it online, as they’re all terrible!)

I hope that some of the above might help steer someone in the right direction when embarking on a journey with Puppet. Obviously I’d recommend some more official reading before converting your professional network of 10,000+ machines across, but if you’re like me and need a comparatively small network taming, then take the plunge and get started!

Raspberry Shake: IoT for your boring appliances

My latest project (now I have some spare time again!) has been something quite simple – I’m terrible for putting the washing machine on and then forgetting about it for the rest of the day, leaving a load of wet clothes inside to fester for hours on end. I know some washers and dryers come with alarms that beep at you when they’re finished, and I wanted to emulate this with an Internet of Things vibe.

So I came up with the Raspberry Shake – quite simply, it’s an accelerometer connected to a Raspberry Pi Zero with some LEDs to indicate status, all shoved inside a small box with some magnets attached so it can cling to the side of any appliance. The Pi Zero runs a bit of Python code that checks for movement, and sends notifications when the appliance starts and stops. I’ve made two so far, with plans for a third, and they’re working great!
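The detection loop is nothing fancy. Here’s a minimal sketch of the idea, assuming an ADXL345 accelerometer on the Pi’s I2C bus and a hypothetical webhook URL standing in for your notification service of choice:

#!/usr/bin/env python
# Minimal sketch of the Raspberry Shake loop: watch an ADXL345 accelerometer
# for vibration and notify when the appliance starts and stops.
# The webhook URL is a placeholder; thresholds need tuning per appliance.
import time
import smbus
import requests

ADDR = 0x53                             # default ADXL345 I2C address
bus = smbus.SMBus(1)
bus.write_byte_data(ADDR, 0x2D, 0x08)   # POWER_CTL: enable measurement mode

WEBHOOK = "https://example.com/notify"  # placeholder notification endpoint
THRESHOLD = 30                          # raw-count movement threshold
QUIET_PERIOD = 300                      # seconds of stillness = cycle finished

def read_axes():
    # DATAX0..DATAZ1 are six consecutive registers starting at 0x32.
    raw = bus.read_i2c_block_data(ADDR, 0x32, 6)
    def axis(lo, hi):
        val = raw[lo] | (raw[hi] << 8)
        return val - 65536 if val > 32767 else val
    return axis(0, 1), axis(2, 3), axis(4, 5)

running = False
last_movement = 0.0
previous = read_axes()

while True:
    current = read_axes()
    moved = sum(abs(c - p) for c, p in zip(current, previous)) > THRESHOLD
    previous = current
    if moved:
        last_movement = time.time()
        if not running:
            running = True
            requests.post(WEBHOOK, json={"event": "started"})
    elif running and time.time() - last_movement > QUIET_PERIOD:
        running = False
        requests.post(WEBHOOK, json={"event": "finished"})
    time.sleep(1)

The threshold and quiet period need tuning per appliance – a spin cycle is far easier to spot than a gentle wash.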

You can see a full writeup and a video of the build on the Raspberry Shake project page.