Controlling a Sony Bravia TV with Google Home

Continuing on the theme of ‘controlling everything with my voice’, I’ve successfully integrated yet another appliance into my setup – this time my TV!

I’ve owned a Sony Bravia 2015 TV for around 18 months now, and while the interface does slow to a crawl at times, overall I’m very impressed with the Android TV integration – so much so that I’ve switched from a dedicated Kodi machine to running a Plex server and using the native Plex Android TV app. However, at the time of writing there isn’t a way of controlling low-level functions of the TV (power, volume, etc.) from Google Home; you can only use the Chromecast integration. While I’ll agree it’s not entirely practical to navigate media on the TV with your voice, I find myself wanting to switch the TV on and off from another room, or pause playback with my voice rather than scrambling for the remote when my phone rings.

Unfortunately the Bravia doesn’t appear to offer any direct API for issuing commands over the network, so instead I looked for a device that could emulate the remote control. After a bit of searching I came across the Broadlink RM Mini 3 (Amazon link) – I was initially a bit sceptical given the low price point and the largely untranslated documentation, but I was pleasantly surprised by how easy the setup was, although the Android application leaves a lot to be desired. The next step was to find a way of issuing commands to the Broadlink device over my network. While there is a handy Python library, and even an extension of it for the RM Mini 3 capable of issuing IR commands, neither of these would allow commands to be sent over an HTTP connection.

Given that the easiest method of integrating with Google Assistant/Google Home is through their IFTTT channel, an HTTP interface was going to save a lot of effort, so I set about creating one myself! The resulting code is a fork of the BlackBeanControl repository mentioned above, with an additional Python script that exposes an interface for sending commands via HTTP requests. I’ve placed this script behind my home Apache proxy (sufficiently secured to prevent the entire internet from being able to turn my TV on and off), and used the IFTTT Webhooks channel to make the appropriate requests when triggered.
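To give a flavour of what that script does, here’s a minimal sketch (not the actual code from my fork) built on the python-broadlink library and Flask. The command names and IR payloads below are placeholders – you’d capture real packets first using the device’s learning mode:

# Minimal HTTP-to-IR bridge sketch: map a command name to a previously
# learned IR packet, and replay it through the RM Mini 3 on request.
import broadlink
from flask import Flask, abort

app = Flask(__name__)

# Packets captured earlier via device.enter_learning() / check_data(),
# stored as hex strings. These values are placeholders, not real codes.
COMMANDS = {
    "tv_power": "2600480000...",
    "tv_pause": "26001a0000...",
}

# Find the RM Mini 3 on the LAN and authenticate with it.
device = broadlink.discover(timeout=5)[0]
device.auth()

@app.route("/command/<name>", methods=["POST"])
def send_command(name):
    packet = COMMANDS.get(name)
    if packet is None:
        abort(404)  # unknown command name
    device.send_data(bytes.fromhex(packet))
    return "OK"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

The IFTTT Webhooks applet then only needs to POST to the appropriate /command/<name> URL through the proxy whenever the Assistant trigger fires.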

I’m somehow becoming even lazier than I ever imagined – now I don’t even have to reach for the remote to continue my binge-watching!

Talking to a Tesla through Google Home

The Google Home had its long-awaited (at least by me!) UK release last week, and I was delighted to get my hands on one of the first. Since then I’ve gradually been linking it to more and more devices around my home (more blog posts on that subject to follow), but having played around with the Tesla API recently, I had only one integration on my mind! Sadly there’s nothing official that allows you to talk to your Tesla Model S or X through Google Home – although I hope something will be released in future – so I took it upon myself to build something that would allow some basic back-and-forth conversations with my car.

While I’m continuing to work on the project, it’s now at the stage where hopefully others can expand and improve on it, so I’ve written up all the details on the TeslaVoice project page, and uploaded the associated files to GitHub.

Hack my ride

Just before Christmas, I had the pleasure of collecting my new car – a Tesla Model S! This is something I’d been lusting after ever since it was announced, and I finally pulled the trigger on an order back in September. I can honestly say that I have never waited so impatiently for something in my life, and was absolutely delighted to finally take ownership on Christmas Eve – a rather extravagant early Christmas present! The driving experience is absolutely sublime – the ability to effortlessly transition from cruising to having your insides re-arranged by the breathtaking acceleration is something that has to be experienced to be believed!

Sadly my first week of ownership didn’t go as smoothly as hoped – I drove the car up to Manchester on Boxing Day to visit my parents, where I discovered that the supplied charging cable wasn’t functioning properly, something that’s fairly critical for a fully electric car! Due to holiday opening times I wasn’t able to get this replaced as quickly as I would have liked, but to their credit Tesla did replace the cable without hassle when I was actually able to make it to a dealership. Still, this left a slightly sour taste about the whole experience, and curbed my enthusiasm somewhat – until this week, when I’ve had time to enjoy the car some more!

Putting a few more miles on the clock quickly reminded me of why I’d bought the car in the first place, and then I started to delve into the possibilities of the API that’s unofficially offered by Tesla (their Android & iOS apps use it). There are a variety of services that take this data and turn it into nice graphs and stats for you; however, I was more interested in how I could integrate it into my blossoming “smart home” setup. My first venture has been a script that scrapes Google Calendar for events, and then sets the auto-conditioning on the Tesla a set time before each event, meaning that the car is toasty warm (or nicely chilled) when you set off for work! I’ve put the source code on GitHub for the world to laugh at and hopefully improve!
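For anyone curious, here’s a rough sketch of the Tesla half of that script, using the community-documented (and entirely unofficial) owner API – note that this isn’t the exact code from my repository, the endpoint paths come from third-party API documentation rather than Tesla themselves, and obtaining the bearer token and scraping the calendar are both omitted:

# Sketch of the Tesla side of the pre-conditioning script, using the
# community-documented (unofficial) owner API.
import requests

BASE = "https://owner-api.teslamotors.com/api/1"
HEADERS = {"Authorization": "Bearer ACCESS_TOKEN"}  # placeholder token

def first_vehicle_id():
    # List the vehicles on the account and take the first one.
    resp = requests.get(BASE + "/vehicles", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["response"][0]["id"]

def precondition(vehicle_id):
    # The car sleeps when idle, so wake it before issuing commands.
    requests.post(BASE + "/vehicles/%s/wake_up" % vehicle_id, headers=HEADERS)
    # Start the HVAC ("auto-conditioning") ahead of the calendar event.
    resp = requests.post(
        BASE + "/vehicles/%s/command/auto_conditioning_start" % vehicle_id,
        headers=HEADERS,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    precondition(first_vehicle_id())

A cron job (or similar scheduler) can then compare the next calendar event against the current time and call precondition() the desired number of minutes beforehand.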

This is definitely only scratching the surface of what’s possible with the Model S – I’m thinking of future scripts including:

  • Turning on home heating when leaving work
  • Estimating driving distance based on the day’s events, and setting the charge limit accordingly (see the sketch after this list)
  • Monitoring power draw while plugged in, for cost estimation
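The charge-limit idea is the easiest to sketch out – something along these lines, where every number is an illustrative assumption rather than a measured figure, and the result would then be sent to the car via the unofficial set_charge_limit command:

# Hypothetical sketch: turn the day's estimated driving distance into
# a charge limit. All the numbers here are illustrative assumptions.
RATED_RANGE_MILES = 250   # assumed usable rated range at 100%
BUFFER_PERCENT = 20       # safety margin on top of the day's driving

def charge_limit_for(miles_planned):
    needed = miles_planned / float(RATED_RANGE_MILES) * 100 + BUFFER_PERCENT
    # Clamp to a sensible window: never below 50%, and stay under 90%
    # for day-to-day battery health.
    return int(min(90, max(50, needed)))

print(charge_limit_for(120))  # prints 68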

What other car manufacturer gives you access to so much data to nerd out over?! I’m enjoying this car without even having to sit in the driver’s seat!

Upgrading my home network

For a number of years, I’ve been using a TL-WR1043ND running DD-WRT as my home router, even going as far as replicating the same setup for friends and family, as it struck a happy midpoint of being powerful enough to be useful, but also simple and stable enough for the slightly less technically literate to manage. The DD-WRT setup was surprisingly simple, and I’ve been reasonably impressed by the performance and capabilities of the software, even on such a basic consumer router. That said, around 12 months ago I realised that my home network was rapidly outgrowing this basic setup, and I felt the need to lean towards something a bit more “prosumer”. There are quite a few different companies targeting this market, but one in particular stood out to me – MikroTik.

Around 12 months ago I took the dive and bought an RB2011UiAS-2HnD-IN (that’s quite the product name!) – one of their mid-range models which seemed quite reasonably priced – and got stuck into learning the intricacies of RouterOS. Their own WinBox software provides a very usable GUI for configuring their hardware, granting a view of the myriad features of the device while keeping the learning curve shallow enough to stop you becoming swamped by options. I’ve gradually tweaked and enabled more and more services, to the point where the single device is providing:

  • PPPoE to my ISP
  • DHCP
  • DNS (local and external)
  • Firewall
  • WiFi (more on this below)
  • L2TP connection to a VPN provider, with certain traffic automatically routed through this
  • … and a whole lot more!

Additionally, when I moved into my new house earlier this year, I set about removing the need for HomePlug adapters to connect devices in different rooms, as I found these tended to be unstable, causing slow transfer speeds and a high rate of dropped connections. I ended up running CAT6 from my study through to several other rooms in the house (I may blog about that project some time in the future!), which eliminated the need for the HomePlugs, but highlighted how poor my WiFi setup was (having previously blamed this on the dodgy connections). While researching ways to improve coverage, I came across a feature of RouterOS that I hadn’t yet taken advantage of – CAPsMAN (Controlled Access Point system MANager). This essentially allows you to delegate control of the radios in various MikroTik devices to a central ‘manager’, which pushes out the WiFi configuration to create a seamless network across all access points. I picked up a couple of Home Access Points (hAP) and set these up as slaves to CAPsMAN running on the main router (with the radios on the router itself also delegated to CAPsMAN – not something that’s officially recommended, but it seems to work for me), and I haven’t had any complaints about sub-par WiFi performance since!

My next step involves upgrading the heart of my network to something with a few more gigabit speed ports – I’ve already run out of capacity in my “rack” (a re-purposed IKEA bookshelf) – so I’m looking at getting a CRS125-24G-1S-2HnD-IN (there we go again with the brilliant product names!) to act as the core router, and demoting the current RB2011UiAS-2HnD-IN to act as a switch and access point in the living room instead of the “dumb” switch in there currently.

While I realise there are quite a few alternative offerings coming to market that simplify home networking (Google WiFi, Ubiquiti UniFi, etc.), I’m more than happy with what MikroTik have to offer, both in terms of the hardware and the software, and I continue to be impressed by how straightforward yet capable my home network has become now that there’s something more powerful behind it. I might even start suggesting an upgrade to my parents’ network!

Unable to load libcec on Kodi

As part of a complete revamp of my home media setup (blog post coming soon on that!), I found myself trying to connect a Sony TV to an Intel NUC running Kodi v16 via one of PulseEight’s brilliant USB-CEC adapters. However, no matter what I tried (combinations of cables, all the settings on the TV, different ports on the NUC, etc.) I could not get Kodi to recognise the adapter. Despite the Kodi wiki page on CEC suggesting that this should be plug-and-play, I got no indication from either the TV or Kodi that the CEC dongle was being recognised.

After quite a lot of searching in the wrong place (at first I was convinced that my fresh install of Kodibuntu wouldn’t be the root cause), I found a couple of log lines that finally pointed me in the right direction:

DEBUG: Loading: libcec.so.3.0
ERROR: Unable to load: libcec.so.3.0, reason: cannot open shared object file: No such file or directory

It turns out that Kodi is built against libcec.so.3.0; however, the version in the repository (I believe that was bundled, or at least suggested, by Kodibuntu) is 3.1. Kodi should (theoretically) be built against libcec.so.3 (note the missing .0), which the appropriate symlinks would have taken care of; however, it seems there are some changes between 3.0 and 3.1 that Kodi doesn’t like, so some simple switching of the links (I tried creating /usr/local/lib/libcec.so.3.0 -> /usr/local/lib/libcec.so.3, which in hindsight isn’t a great idea) caused various crashes that I wasn’t prepared to start debugging – so instead I used the suggestion in this forum thread:

$ apt-get install --reinstall libcec3=3.0.1-1~trusty

This pins the old version of libcec back onto your machine. After a quick reboot, Kodi instantly recognised the CEC dongle as promised, and it has been working beautifully ever since!

Puppet-ing my home network

For just over 12 months, I’ve been using Puppet as a way of configuring the various machines I have dotted around my home network. The initial desire to move to an automated configuration management tool was to keep tabs on the various requirements across the pieces of software I run, but I’ve now reached the point where all deployments of new hardware and software (both my own, and third party) are managed through Puppet, and life is much simpler (most of the time!). I’ve been meaning to write this post for a while, mainly as a “getting started” point for anyone who is thinking about embarking on a similar journey; hopefully it will help people avoid some of the mistakes I made along the way!

Firstly, a little about my network, to give some of the points context. Over the last 5 years, I’ve evolved from having a single desktop machine to today having the same desktop, 2 microservers (one for mass storage, the other for backup, services and acting as the Puppetmaster), a Home Theatre PC, and 7 Raspberry Pis in various guises (an alarm clock, appliance monitors and a TV encoder among them). All of these machines were originally configured manually, and re-imaged from scratch every time something went wrong, leading to endless pages of scribbled notes about various requirements and processes to get things working. Now, all of them pull a catalogue from a single Puppetmaster, which I update as requirements change. Adding a new host is a matter of installing a fresh operating system, installing Puppet, and adding a few lines to the Puppetmaster detailing what I want installed – the rest is taken care of!

I’ve ended up using Puppet for much more than I originally imagined, including:

  • SSH Keys
  • MySQL hosts & databases
  • Apache site configuration
  • Tomcat configuration (auto-deploying apps is on my to-do list)
  • Sendmail
  • Automated backups
  • Media sorting
  • ZNC configuration

Plus even more besides that! Most bits of my own software run through supervisord, which is also configured through Puppet.

The first, and arguably most important, lesson I learned through my Puppet-isation process is to keep the configuration in some sort of version control system. Initially I did not keep track of the changes I was making to Puppet manifests, and requirements that were added at 3am seemed completely illogical in the cold light of day – but by self-documenting the manifests as I went along, and tracking changes through my own Subversion repository, I can look back at the changes I’ve made over time, and save myself the hassle of accidentally removing something essential. I’ve lost track of the number of times I’ve had to revert to a previous configuration, and having a system in place to do this automatically will definitely help you in the long run!

While writing my configuration, I’ve learnt to be explicit about requirements. A lot of the modules and class definitions I wrote early on did not use any of the relationships you can define between resources, and as a result I’d often be bitten by Puppet attempting to configure things in an illogical order. For the sake of adding a few arrows into your manifests, it’s worth using these just to avoid the head-bashing when a configuration run fails intermittently! I still run into problems when configuring a new host, where a package I require isn’t installed in time to start up a service – using relationships from the ground up will hopefully stop this ever becoming an issue.

Arguably the second most useful tool I installed (after Puppet itself) is the Puppet Dashboard. This fantastic tool pulls in the reports that Puppet generates and presents them in a very readable format, which allows you to get straight to the heart of what’s causing failed runs, rather than having to dive through the depths of the raw logs. It took me a while to get around to installing this, and I honestly regret not doing it straight away – it has saved me an incredible amount of time since. A word of warning though – the reports can take up an awful lot of space (my dashboard database is 400MB+ with only the last 3 months’ data stored) – make sure your SQL server is up to the task before starting!

While installing things like the Puppet Dashboard, Hiera and the like, I’ve taken to Puppet-ing my Puppet, by which I mean writing the configurations for these tools into my Puppet manifests. Initially this seemed very strange to me – I didn’t want some almost-sentient piece of software configuring itself on my machine! However, the client will quite happily run on the same machine as the server to configure itself (in a strange cyclical dogfooding loop). There are plenty of third-party modules available that will do a lot of this for you, but I ended up writing my own for several things – as long as the installation is managed somehow, you’re going to have a much better time!

Linked to that, and my last tip that I’ll post here, is to write your own modules wherever you can. For the first few months after starting to use Puppet, I tried to cram everything imaginable into the main manifest directory, and barely used templates and source files at all. Since then I’ve learned to separate concerns wherever possible, which leads to much cleaner code in the module definitions, and much nicer-looking node definitions as well. Third-party modules have been a mixed blessing for me: some are coded extremely well to allow multi-platform support, but others are hard-coded for platforms alien to mine, making them useless. I’d absolutely recommend looking for someone else who’s done the hard work first, but don’t be afraid to roll up your sleeves and write a module yourself. Maybe even post it online for other people to enjoy! (I’m being very hypocritical with that last point – none of my modules have made it online, as they’re all terrible!)

I hope that some of the above might help steer someone in the right direction when embarking on a journey with Puppet. Obviously I’d recommend some more official reading before converting your professional network of 10,000+ machines across, but if you’re like me and need a comparatively small network taming, then take the plunge and get started!

Raspberry Shake: IoT for your boring appliances

My latest project (now I have some spare time again!) has been something quite simple – I have a terrible habit of putting the washing machine on and then forgetting about it for the rest of the day, leaving a load of wet clothes inside to fester for hours on end. I know some washers and dryers come with alarms that beep at you when they’re finished, and I wanted to emulate this with an Internet of Things vibe.

So I came up with the Raspberry Shake – quite simply it’s an accelerometer connected up to a Raspberry Pi Zero with some LEDs to indicate status, all shoved inside a small box with some magnets attached, so it can cling to the side of any appliance. The Pi Zero runs a bit of Python code that checks for any movement, and sends notifications when the appliance starts and stops. I’ve made two so far, with plans for a third, and they’re working great!
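The detection logic is conceptually very simple. Here’s a rough sketch of the idea (not the project’s actual code) – read_magnitude() stands in for the real accelerometer read, and the notification endpoint is a hypothetical webhook:

# Sketch of the Raspberry Shake logic: poll the accelerometer, then
# fire notifications when vibration starts, and again once it has
# been absent for a settle period (i.e. the cycle has finished).
import time
import requests

THRESHOLD = 0.05      # g of vibration counted as "running" (assumed)
SETTLE_SECONDS = 120  # quiet time before declaring the cycle finished

def read_magnitude():
    # Placeholder: return the deviation of |acceleration| from 1 g,
    # e.g. via a LIS3DH driver (see the next post for the library).
    return 0.0

def notify(event):
    # Hypothetical webhook endpoint for push notifications.
    requests.post("https://example.com/notify", json={"event": event})

running = False
last_movement = 0.0

while True:
    if abs(read_magnitude()) > THRESHOLD:
        last_movement = time.time()
        if not running:
            running = True
            notify("appliance started")
    elif running and time.time() - last_movement > SETTLE_SECONDS:
        running = False
        notify("appliance finished")
    time.sleep(0.5)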

You can see a full writeup and a video of the build on the Raspberry Shake project page.

Talking to a LIS3DH via Python on a Raspberry Pi

For my latest project (details available here) I acquired a couple of LIS3DH triple-axis accelerometers. As most of the products available through Adafruit are fairly widely used, I didn’t bother checking what libraries were available before buying, but unfortunately for me only a C++ library had been written. I didn’t feel like learning C++ just for the purpose of this project, and so the only option left was to write my own Python library!

Thankfully I had some excellent starting points with the aforementioned C++ library, as well as the Python I2C library that Adafruit have published. I found myself referring back to the manufacturer datasheet quite often as well, mainly to clarify what each register contained.

While the task initially looked rather daunting (having had zero prior experience with bit-bashing through registers), I found that with some pre-existing code to crib from, the various functions took shape rather quickly, and within an afternoon I’d produced a library exposing all the basic functions I’m likely to need for this project. I’ve put my code on GitHub in the hope that people will contribute to filling in the gaps, and improving where necessary.
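To give a taste of that register work, here’s a minimal, self-contained example of talking to a LIS3DH over I2C from a Raspberry Pi using the smbus module – the register addresses come from the ST datasheet, and the library above wraps this sort of thing in friendlier calls:

# Read X/Y/Z acceleration from a LIS3DH over I2C.
import smbus

I2C_ADDR = 0x18        # default address with the SDO pin tied low
WHO_AM_I = 0x0F        # identity register, should always read 0x33
CTRL_REG1 = 0x20       # data rate / axis-enable register
OUT_X_L = 0x28         # first of six output registers (X/Y/Z, low/high)

bus = smbus.SMBus(1)   # I2C bus 1 on a Raspberry Pi

# Sanity-check that we're really talking to a LIS3DH.
assert bus.read_byte_data(I2C_ADDR, WHO_AM_I) == 0x33

# 0x57 = 100Hz output data rate, normal mode, X/Y/Z axes enabled.
bus.write_byte_data(I2C_ADDR, CTRL_REG1, 0x57)

def read_axes():
    # Setting the top bit of the register address enables auto-increment,
    # so all six output bytes come back in a single read.
    raw = bus.read_i2c_block_data(I2C_ADDR, OUT_X_L | 0x80, 6)
    def to_g(lo, hi):
        value = (hi << 8) | lo
        if value > 32767:       # two's complement conversion
            value -= 65536
        return value / 16384.0  # +/-2g full scale (the default)
    return (to_g(raw[0], raw[1]), to_g(raw[2], raw[3]), to_g(raw[4], raw[5]))

print(read_axes())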

“Invalid parameter provider” on Puppet

So I’ve just spent the last hour banging my head against my desk after trying to make some changes to a Puppet provider – for some reason, once I’d made the changes, all of my nodes started failing to run, even ones that had nothing to do with the provider. All I was getting was an error when trying to retrieve the manifests, saying that it “Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter provider...”.

None of my Google-fu turned up anything useful, until I stumbled upon a single comment in the Puppet ticket database.

Turns out that you may need to restart the Puppet master server after updating providers, or the entire system can fall apart before your very eyes.

I’m posting this here in the hope that

  1. I remember this next time, and avoid wasting hours debugging and trawling forum posts; and
  2. Someone might discover this post through a search engine one day, and be spared my pain!

Happy Puppet-ing!

One week with Pebble Time

Two and a half years ago (wow – really that long?!) I wrote a post on my impressions of the Pebble watch, one of the very first projects I backed on Kickstarter. At the time, I was pretty unimpressed by the product as a whole package – while the hardware was impressive (for the time), the software really let the watch down, and sadly never saw a terrific improvement. The SDK alluded to in the original release did eventually turn up, and was followed by swathes of watchfaces and apps to run on your wrist, but none of these really captured my imagination; the watch remained a second screen for my wrist on which I could view notifications.

Given my nonplussed attitude towards the original product, I was surprised to find myself throwing money at the new Pebble Time Kickstarter. The videos of the new watch grabbed me in a way that the original had failed to – colour, animations, design, apps – this iteration seemed to correct everything the original lacked. So I waited patiently for the watch to arrive (they’ve definitely improved their logistics since their first attempt), and have now had a week to play with it. I repeat the question I answered last time – have I fallen in love with this watch?

The answer – slightly more than last time! The watch is definitely a much better designed product; it looks and feels a lot better on my wrist, as the original was starting to look very dated in this golden era of Apple Watch/Android Wear wearable technology. The menus flow an awful lot better with some slick animation, and even though I find the screen a little harder to read, the colours really do improve the display. It feels like much more of a finished product, rather than a proof-of-concept piece of hardware with some poorly thought-out software thrown on top. Integration with my phone is much more seamless as well: the new Pebble Time app has removed the need to have separate applications installed for receiving third-party notifications, and the watchface/app store seems better integrated.

So what’s putting me off? To me, it still seems like a convenient device to view notifications on, and not a lot more. It’s missing a few “killer apps” like the Android Wear integration with Maps, or gestures on the Apple Watch. While the Pebble Time may be a much more desirable piece of hardware, and streets ahead of the original edition, I feel the software has fallen short of the mark yet again.

That said, I won’t be rushing out to buy the Apple or Android equivalent – the price points, battery life and physically large size of the alternatives have put me off for the time being, so the Pebble Time does have a place on my wrist for the foreseeable future.