Controlling a Sony Bravia TV with Google Home

Continuing on the theme of ‘controlling everything with my voice’, I’ve successfully integrated yet another appliance into my setup – this time my TV!

I’ve owned a Sony Bravia 2015 TV for around 18 months now, and while the interface does slow to a crawl at times, overall I’m very impressed with the Android TV integration, and have gone as far as to switch from using a dedicated Kodi machine to running a Plex server and using the native Plex Android TV App. However, at the time of writing this, there isn’t a way of controlling low-level functions of the TV (power, volume, etc.) from Google Home – you can only use the Chromecast integration. While I’ll agree it’s not entirely practical to navigate media entries on the TV with your voice, I find myself wanting to switch the TV on/off from another room, or pause playback with my voice rather than scrambling for a remote when my phone rings.

Unfortunately it doesn’t appear that the Bravia TV offers any direct API for issuing commands over the network, so instead I looked for a device that could emulate the remote control. After a bit of searching I came across the Broadlink RM Mini 3 (Amazon link) – I was initially a bit sceptical given the low price point and very foreign documentation, but I was pleasantly surprised by how easy the setup was, although the Android application leaves a lot to be desired. The next step was to find a way of issuing commands to the Broadlink device over my network. While there is a handy Python library, and even an extension for the RM Mini 3 capable of issuing IR commands, neither of these would allow commands to be sent over an HTTP connection.
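For anyone wanting to experiment, the basic capture-and-replay flow with the python-broadlink library looks roughly like the sketch below. This isn’t the code from the fork I mention below – the method names are as I understand the library, so treat it as an assumption rather than gospel:

import time
import broadlink

# Find the first Broadlink device on the local network and authenticate
devices = broadlink.discover(timeout=5)
rm = devices[0]
rm.auth()

# Put the RM Mini into learning mode, then point the TV remote at it
rm.enter_learning()
print("Press the button on the remote you want to capture...")
time.sleep(5)

packet = rm.check_data()        # the captured IR packet
if packet:
    print("Captured packet of %d bytes" % len(packet))
    rm.send_data(packet)        # replay it straight back at the TV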

Given that the easiest method of integrating with the Google Assistant/Google Home is through their IFTTT channel, an HTTP interface was going to save a lot of effort, so I set about creating one myself! The resulting code is a fork of the BlackBeanControl repository mentioned above, but with an additional Python script using web.py to expose an interface for sending commands over an HTTP request. I’ve then placed this script behind my home Apache proxy (sufficiently secured to prevent the entire internet being able to turn my TV on and off), and used the IFTTT Webhook channel to make the appropriate requests when triggered.
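To give an idea of the shape of that script, here’s a minimal sketch of the same approach – not the actual code from my fork, and it assumes the python-broadlink library plus IR packets you’ve already captured (the byte strings below are placeholders, not real packets):

import web
import broadlink

# Hypothetical pre-captured IR packets, keyed by command name
COMMANDS = {
    "power": b"\x26\x00...",   # placeholder bytes, not a real packet
    "pause": b"\x26\x00...",
}

urls = ('/(.*)', 'command')
app = web.application(urls, globals())

# Find and authenticate against the first Broadlink device on the network
device = broadlink.discover(timeout=5)[0]
device.auth()

class command:
    def GET(self, name):
        if name not in COMMANDS:
            return "Unknown command"
        device.send_data(COMMANDS[name])  # blast the stored IR packet at the TV
        return "Sent %s" % name

if __name__ == "__main__":
    app.run()

The IFTTT Webhook applet then just needs to request /power or /pause on whatever address Apache proxies through to this script.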

I’m somehow becoming even lazier than I ever imagined – now I don’t even have to reach for the remote to continue my binge-watching!

Talking to a Tesla through Google Home

The Google Home had its long-awaited (at least by me!) UK release last week, and I was delighted to get my hands on one of the first. Since then I’ve gradually been linking it to more and more devices around my home (more blog posts on that subject to follow), but having played around with the Tesla API recently, I only had one integration on my mind! Sadly there’s nothing official that allows you to talk to your Tesla Model S or X through Google Home – although I hope something will be released in future – so I took it upon myself to build something that would allow some basic back-and-forth conversations with my car.

While I’m continuing to work on the project, it’s now at the stage where hopefully others can expand and improve on it, so I’ve written up all the details on the TeslaVoice project page, and uploaded the associated files to GitHub.

Raspberry Shake: IoT for your boring appliances

My latest project (now I have some spare time again!) has been something quite simple – I have a terrible habit of putting the washing machine on and then forgetting about it for the rest of the day, leaving a load of wet clothes inside to fester for hours on end. I know some washers and dryers come with alarms that beep at you when they’re finished, and I wanted to emulate this with an Internet of Things vibe.

So I came up with the Raspberry Shake – quite simply it’s an accelerometer connected up to a Raspberry Pi Zero with some LEDs to indicate status, all shoved inside a small box with some magnets attached, so it can cling to the side of any appliance. The Pi Zero runs a bit of Python code that checks for any movement, and sends notifications when the appliance starts and stops. I’ve made two so far, with plans for a third, and they’re working great!
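As a rough illustration of the idea (this isn’t the actual Raspberry Shake code – the sensor-reading and notification helpers below are placeholders for the real implementations), the detection loop boils down to something like this:

import math
import time

THRESHOLD = 0.05     # change in g that we treat as "the appliance is vibrating"
QUIET_PERIOD = 120   # seconds of stillness before we call the cycle finished

def read_acceleration():
    # Placeholder: the real build reads (x, y, z) in g from the accelerometer
    return (0.0, 0.0, 1.0)

def notify(message):
    # Placeholder: the real code sends a push notification here
    print(message)

def magnitude(sample):
    return math.sqrt(sum(axis * axis for axis in sample))

baseline = magnitude(read_acceleration())
running = False
last_movement = 0

while True:
    level = magnitude(read_acceleration())
    if abs(level - baseline) > THRESHOLD:
        last_movement = time.time()
        if not running:
            running = True
            notify("Appliance started")
    elif running and time.time() - last_movement > QUIET_PERIOD:
        running = False
        notify("Appliance finished")
    time.sleep(0.5)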

You can see a full writeup and a video of the build on the Raspberry Shake project page.

Talking to a LIS3DH via Python on a Raspberry Pi

For my latest project (details available here) I acquired a couple of LIS3DH triple-axis accelerometers. As most of the products available through Adafruit are fairly well used, I didn’t bother checking what libraries were available before buying, but unfortunately for me only a C++ library had been written. I didn’t feel like learning C just for the purpose of this project, and so the only option left was to write my own Python library!

Thankfully I had some excellent starting points with the aforementioned C++ library, as well as the Python I2C library that Adafruit have published. I found myself referring back to the manufacturer datasheet quite often as well, mainly to clarify what each register contained.

While the task initially looked rather daunting (I’d had zero prior experience with bit-bashing through registers), I found that with some pre-existing code to crib from, the various functions took shape rather quickly, and within an afternoon I’d produced a library exposing all the basic functions I’m likely to need for this project. I’ve put my code on GitHub in the hope that people will contribute to filling in the gaps, and improving where necessary.
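To give a flavour of what’s involved, here’s a minimal sketch of the sort of register poking the library does – this isn’t the library itself, just the smbus module talking to a LIS3DH on I2C bus 1 at the default 0x18 address, with register addresses taken from the ST datasheet:

import smbus

bus = smbus.SMBus(1)    # use SMBus(0) on very early Pi revisions
ADDR = 0x18             # 0x19 if the SDO pin is pulled high

WHO_AM_I = 0x0F
CTRL_REG1 = 0x20
OUT_X_L = 0x28

# Sanity-check the chip ID - the LIS3DH always reports 0x33
assert bus.read_byte_data(ADDR, WHO_AM_I) == 0x33

# 0x57 = 100Hz data rate, normal power mode, X/Y/Z axes enabled
bus.write_byte_data(ADDR, CTRL_REG1, 0x57)

def twos_complement(value, bits=16):
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

# Setting the top bit of the register address makes the chip auto-increment
# through the six output registers (low byte then high byte for X, Y, Z)
raw = bus.read_i2c_block_data(ADDR, OUT_X_L | 0x80, 6)
x, y, z = [twos_complement(raw[i] | (raw[i + 1] << 8)) for i in (0, 2, 4)]
print("x=%d y=%d z=%d" % (x, y, z))  # raw counts - scaling depends on the range and mode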

AlarmPi: The Raspberry Pi Smart Alarm Clock

When I left my previous job around 18 months ago, I promised myself I’d do something productive with the time I had between jobs. During that time, I realised how much I hated my alarm clock going off every morning, and also how stupid and inflexible most alarm clocks are. I managed to achieve very little with that spare time, but this hatred of alarm clocks has been driven home even further since I started working shifts in my new job – no alarm clock I could find had the ability to vary the alarm time based on a shift pattern (I suppose that’s a fairly niche feature!), and very few had decent internet radio connectivity to allow me to listen to music I like in the morning.

That productive feeling drew me to buy some parts from Adafruit and have a play with some electronics projects – the furthest I got was playing around with an LCD display, as documented in this other blog post. More recently, my old alarm clock started to fail in rather interesting ways (ever been woken up at 3:27AM by a piercing screaming and static noise?), so I decided it was time to build my own, and the AlarmPi was born!

The core of the project is a Raspberry Pi connected up to a series of fairly basic components, all controlled by a Python script which takes input from all manner of sources and shows information through the two front displays. I’ve put together a short video explaining some of the main features, which can be viewed below, and you can read more about the AlarmPi on the project page.

Text to Speech on a Raspberry Pi using Google Translate

For a couple of upcoming projects, I’ve been trying to find a way of making a Raspberry Pi take a piece of text as input and vocalise it through a pair of connected speakers (so-called speech synthesis). There are a number of methods listed on the eLinux wiki page on the subject; however, I found the suggested packages produced rather robotic-sounding results, and I was after something a bit more natural and pleasant sounding, rather than something to scare the bejeezus out of me every time it speaks. The most natural-sounding offering is a hidden and unofficial API provided through the Google Translate service, which produces some very nice sounding audio, and is very accurate most of the time. Unfortunately, it’s limited to 100 characters at a time, which starts to be a problem when you want to read out large swathes of text.

There are a few scripts that I found (including this one from Dan Fountain) that offer an interface to this API; however, the majority of them just split the input at the 100-character mark (or at the previous space before it), which leads to broken-sounding sentences in cases where the existing punctuation could have been used instead. In order to get something slightly more natural sounding, I set about bodging together some Python, and came up with the following:

Please note: this script no longer works! Google made some changes to their TTS engine during July 2015, which means the translate_tts request now gets redirected to a CAPTCHA page. There is an updated version of the script available in my SVN repository, and now on GitHub as well.

#!/usr/bin/python

# googletts
# Created by Matt Dyson (mattdyson.org)
# http://mattdyson.org/blog/2014/07/text-to-speech-on-a-raspberry-pi-using-google-translate/
# Some inspiration taken from http://danfountain.com/2013/03/raspberry-pi-text-to-speech/

# Version 1.0 (12/07/14)

# Process some text input from our arguments, and then pass them to the Google translate engine
# for Text-To-Speech translation in nicely formatted chunks (the API cannot handle more than 100
# characters at a time).
# Splitting is done first by any punctuation (.,;:) and then by splitting by the MAX_LEN defined
# below.
# mpg123 is required for playing the resultant MP3 file that is returned by Google TTS

from subprocess import call
import sys
import re

MAX_LEN = 100 # Maximum length of a segment to send to Google for TTS
LANGUAGE = "en" # Language to use with TTS - this won't do any translation, just the voice it's spoken with

fullMsg = ""
i = 1

# Read our system arguments and add them into a single string
while i<len(sys.argv):
   fullMsg += sys.argv[i] + " "
   i+=1

# Split our full text by any available punctuation
parts = re.split(r"[.,;:]", fullMsg)

# The final list of parts to send to Google TTS
processedParts = []

while len(parts)>0: # While we have parts to process
   part = parts.pop(0) # Get first entry from our list

   if len(part)>MAX_LEN:
      # We need to do some cutting
      cutAt = part.rfind(" ",0,MAX_LEN) # Find the last space within the bounds of our MAX_LEN

      cut = part[:cutAt]

      # We need to process the remainder of this part next,
      # so put it back at the front of the queue
      parts.insert(0, part[cutAt:])
   else:
      # No cutting needed
      cut = part

   cut = cut.strip() # Strip any whitespace
   if cut != "": # Make sure there's something left to read
      # Add into our final list
      processedParts.append(cut)

for part in processedParts:
   # Use mpg123 to play the resultant MP3 file from Google TTS
   call(["mpg123","-q","http://translate.google.com/translate_tts?tl=%s&q=%s" % (LANGUAGE,part)])

This can also be downloaded from my projects repository at http://projects.mattdyson.org/projects/speech/googletts, where updated versions may be available. The package mpg123 is required to play the resulting MP3 file that Google Translate returns. The easiest way to get this script installed is with the following (run as root on your Raspberry Pi):

$ apt-get install mpg123
$ cd /usr/bin/
$ svn co http://projects.mattdyson.org/projects/speech speech
$ chmod +x speech/googletts
$ ln -s speech/googletts
$ googletts "Hello world, the installation of the text to speech script is now complete"

Unfortunately, if a clause of a sentence is longer than 100 characters there will still be an unwanted pause in the middle, as the script does not know where best to split the text; and if you’re using a lot of punctuation, you might find the text takes a long time to read back. I’m happy to incorporate any improvements people may suggest!

Blinkytape

Yet another one of my Kickstarter jaunts turned up just before Christmas – the Blinkytape by BlinkinLabs. Essentially, this product is a strip of 60 LEDs connected to a USB interface, which allows you to address each “pixel” individually with a little bit of coding, so you can build up your own programmable lighting show! So far I’ve only had the chance to use this as a very nerdy alternative to Christmas lighting, and more generally for expanding my knowledge of Python, but I’ve got big plans for it in future!

First up – getting started. I decided to use this in conjunction with a Raspberry Pi I had going spare from another project, as it gives me network connectivity and a platform to write and run Python scripts on. Conveniently, no powered external USB hub is required to run the Blinkytape off a Pi (although I had no other peripherals plugged in, so your mileage may vary!), so it was just a case of plugging it in and installing the necessary Python libraries:

$ sudo apt-get install python-pip
$ sudo pip install pyserial

There is an official Blinkytape Python library available from their GitHub repository (along with some other languages); however, at the time I was playing with this (before Christmas) their base class was lacking a lot of features – so I wrote my own! To get my integration script, run the following:

$ svn co http://projects.mattdyson.org/projects/blinkytape blinkytape

This will give you the main class (BlinkyTapeV2.py) and a couple of example files, all of which are commented in a (hopefully!) helpful manner to show what’s going on. The following video shows an example of the BouncingBlocks.py class in action (run with sudo python BouncingBlocks.py), followed by a more ‘festive’ example – something I knocked together very quickly to cycle through a series of effects in very Christmas-y red and green colours!
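For the curious, the serial protocol that the library wraps is very simple: as far as I can tell, the tape expects three bytes (red, green, blue) per pixel, each capped at 254, with a single byte of 255 latching the frame onto the LEDs. Here’s a minimal sketch along those lines – the port name and baud rate are assumptions for a Pi with nothing else plugged in:

import serial

NUM_LEDS = 60
port = serial.Serial("/dev/ttyACM0", 115200)

def show(pixels):
    data = ""
    for r, g, b in pixels:
        # Values are capped at 254, as 255 is reserved for the latch byte
        data += chr(min(r, 254)) + chr(min(g, 254)) + chr(min(b, 254))
    port.write(data + chr(255))  # 255 tells the strip to display the frame
    port.flush()

# Light the whole strip dim red
show([(50, 0, 0)] * NUM_LEDS)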

Overall, I’m very impressed by the quality of this product. I was expecting something very rough-and-ready, being a rather specialist product marketed through Kickstarter – however, the LEDs themselves are very bright, and are nicely packaged up in a flexible plastic strip to protect the circuitry. The ease with which I managed to write my own integration library is also a testament to how simple the electronic design of this product is.

So what am I planning on using this for? First up, I’m looking at building my own alarm clock that reads from Google Calendar to only wake me up when I need to be up – normal alarms don’t seem to have been built with shift work in mind! I’m hoping to integrate the Blinkytape into this project by creating an ambient light that gradually fades up after the alarm has gone off, hopefully easing the transition into daylight hours! There are also plenty of projects I was hoping to do with a Moore’sCloud Light, another Kickstarter project that sadly failed to meet its funding goal, but hopefully the Blinkytape will fill the void! I’ll make sure to post back here with further updates when my Blinkytape gets put to use!

Using a 20×4 RGB LCD over i2c with a Raspberry Pi

Now there’s a specialist blog post title if ever there were one…

Recently, I’ve been dabbling with electronics to fill the void of spare time I’ve found myself with while I’m between jobs. I’m currently working on a half-baked idea to create some sort of digital assistant that will take instructions in some form, and then read stuff back to me in a Siri-esque manner. Nothing sounds more awesome than having Twitter @replies read out to you, right?! To kick off this project, and get me motivated to actually do something, I ordered a boatload of parts from Adafruit, and set about learning how to use them. First challenge – connecting up their 20×4 RGB backlight negative LCD screen to my Raspberry Pi.

In order to assist with this, I also bought the i2c / SPI character LCD backpack to save some GPIO pins for other uses. Due to my lack of attention while ordering, I failed to notice that the LCD backpack only has 16 pins, whereas the LCD screen I ordered has 18 (2 more for the extra backlight LEDs). Rather than giving up and being limited to only a single channel of control for the backlights, I decided to connect pins 15 to 18 directly into the Pi, and mash two separate libraries together to give myself full control. This is what I ended up with (click for big):

[Photo of the finished wiring]

Now, that looks like an absolute mess. That’s because it is. In an attempt to make that a bit more readable, here’s a Fritzing diagram of how it’s wired (again, click for big).

[Fritzing diagram of the wiring]

Now, that’s even more confusing as I couldn’t find a Fritzing library with the right parts – so I’ve fudged a few things. Imagine there are ports 17 and 18 on the LCD, and that the LCD itself is 20×4 rather than 16×2. Secondly, imagine the chip in the middle is actually the i2c backpack mentioned above, so everything on the bottom is connected straight to ports 1 to 16 on the LCD, and the VCC/GND/CLK/DAT are connected to the Pi. So, in terms of wiring we get:

  • LCD #1 to #14 -> i2c backpack #1 to #14
  • LCD #15 -> 5V0
  • LCD #16 -> Raspberry Pi GPIO 17
  • LCD #17 -> Raspberry Pi GPIO 27
  • LCD #18 -> Raspberry Pi GPIO 22
  • i2c backpack GND -> GND
  • i2c backpack VCC -> 5V0
  • i2c backpack CLK -> Raspberry Pi SCL
  • i2c backpack DAT -> Raspberry Pi SDA

Now that’s all set up, you can use the standard AdafruitLcd Python library (the nice adaptation I used can be found here) to control the text shown on screen, but we need something bespoke for our background lighting. For future projects, I wanted the ability to control each colour individually, so I can set arbitrary RGB values on the screen, and also brighten/dim appropriately. The latest version of RPi.GPIO will let you do software Pulse Width Modulation, which will achieve this quite nicely for us. To install the latest version (0.5.2a at the time of writing), you’ll need to run the following on your Pi (as root):

$ wget https://pypi.python.org/packages/source/R/RPi.GPIO/RPi.GPIO-0.5.2a.tar.gz
$ tar xf RPi.GPIO-0.5.2a.tar.gz
$ cd RPi.GPIO-0.5.2a
$ python setup.py install
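The software PWM side of things boils down to something like the sketch below – this isn’t the library I describe below, just the bare RPi.GPIO technique, with the pin numbers matching the wiring list above. Note that depending on which way round the backlight LEDs are wired, 100% duty cycle may mean fully off rather than fully on, so you may need to invert the values:

import time
import RPi.GPIO as GPIO

PINS = {"red": 17, "green": 27, "blue": 22}

GPIO.setmode(GPIO.BCM)
channels = {}
for colour, pin in PINS.items():
    GPIO.setup(pin, GPIO.OUT)
    channels[colour] = GPIO.PWM(pin, 100)  # 100Hz software PWM on each pin
    channels[colour].start(0)

def set_colour(red, green, blue):
    # Each value is a 0-100 duty cycle for that backlight channel
    channels["red"].ChangeDutyCycle(red)
    channels["green"].ChangeDutyCycle(green)
    channels["blue"].ChangeDutyCycle(blue)

set_colour(100, 0, 50)  # a purple-ish mix
time.sleep(5)
GPIO.cleanup()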

So, combining some standard example code for PWM on the Pi with the AdafruitLcd library, I developed my own little library for controlling an LCD wired up in this manner. To get up and running with the code I wrote, you will need (again, as root):

$ mkdir lcdtest
$ cd lcdtest
$ svn co http://projects.mattdyson.org/projects/LCDControl@889 .
$ git clone https://github.com/PDKK/RpiLcdBackpack.git
$ touch RpiLcdBackpack/__init__.py
$ python testLCD.py

Note: If you see IOError: [Errno 5] Input/output error when running testLCD.py, you may need to edit RpiLcdBackpack/RpiLcdBackpack.py and change the line self.__bus=smbus.SMBus(0) to self.__bus=smbus.SMBus(1). This should only happen on newer versions of the Pi, where the i2c bus number changed to 1 from 0.

Note 2 (added 15/10/14): The version of my LCDControl library that you’re checking out with the above command is now outdated – I’ve updated the library to use pigpio instead of RPi.GPIO, as the latter was causing me flickering problems when the Pi was under load. To get the latest version, remove the @889 from the svn co command; you will need to have pigpio installed and running for this to work.
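For reference, the pigpio equivalent of the PWM sketch above looks something like this (the pigpiod daemon has to be running first, e.g. via sudo pigpiod) – pigpio times the pulses outside the Python process, which is what cured the flickering for me:

import pigpio

pi = pigpio.pi()                    # connect to the local pigpiod daemon
for pin in (17, 27, 22):
    pi.set_PWM_frequency(pin, 100)  # Hz
    pi.set_PWM_range(pin, 100)      # so duty cycle values are 0-100, as before

pi.set_PWM_dutycycle(17, 100)       # red
pi.set_PWM_dutycycle(27, 0)         # green
pi.set_PWM_dutycycle(22, 50)        # blue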

Once you run testLCD.py, you should see the screen flash a series of colours, followed by some messages appearing on the screen. Yaaay – it works!

The LCDControl class I’ve written is pretty basic (I’m still learning Python… slowly!) but allows you to set RGB or individual colour values for the backlights, and also to pass in any message without worrying about formatting. Currently (version 1.0 at the time of writing), the LCDControl.setMessage method will split by the newline character (\n) and handle the line-number logic for you (the third display line on the LCD is actually a continuation of the first line passed to the controller, and the fourth a continuation of the second) – future iterations of this code will add things such as full text wrapping and scrolling text.
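To illustrate that quirk (this isn’t the actual setMessage code, just a standalone example): the controller on these displays treats physical rows one and three as a single 40-character line, and rows two and four as another, so a four-line message has to be interleaved into two logical lines something like this:

def interleave(message, width=20):
    # Split the message into (up to) four physical rows, padded to the display width
    rows = message.split("\n")[:4]
    rows += [""] * (4 - len(rows))
    rows = [row.ljust(width)[:width] for row in rows]
    # Logical line 1 is physical rows 1 and 3; logical line 2 is rows 2 and 4
    return rows[0] + rows[2], rows[1] + rows[3]

top, bottom = interleave("line one\nline two\nline three\nline four")
print(top)
print(bottom)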

So there we have it – a 20×4 RGB LCD screen talking to a Raspberry Pi over i2c, retaining individual control over the background LEDs. As always, please leave a comment if you spot anything wrong with what I’ve written here, or have any feedback/suggestions/requests!

Using an Xbox 360 Wireless Controller with Raspberry Pi

As part of a project I’m working on at the moment (more information here), I’ve been attempting to get my Xbox 360 Wireless Controller for Windows talking to my Raspberry Pi. Having spent a fair amount of time chasing various options around the internet, I thought I would share my eventual (and rather simple) solution here.

The first thing I found was PyGame – a Python library that offers support for joysticks and gamepads, but which is primarily designed for game development. This post suggested that PyGame might solve the connectivity problems, and gives some example code for echoing out events; however, I could not get this to work.

The Ubuntu wiki suggested a module called xpad, which is included by default on Ubuntu, but not on the Raspbian image I am using (Raspbian “wheezy”), although it is available through apt-get in the default repository. Unfortunately, this didn’t work for me either.

The eventual solution that worked for me came up in a blog post by Zephod about using an Xbox controller to run a remote control car, which suggested using Xboxdrv – a userspace driver for the Xbox controllers in Linux. There were suggestions on the Raspberry Pi forums that this would require building from source, but a simple `apt-get install xboxdrv` on the Pi worked for me. Once installed, execute the program (as root), and then re-sync the Xbox controller – this had me stumped for quite a while, but it seemingly only needs to be done the first time you use the controller after the module has been loaded. A re-sync for the wireless controller means holding the button on the receiver for ~3 seconds until it starts to flash, and then holding the button on top of the controller (to the right of where ‘Microsoft’ is written) until the lights flash. Once this has completed, you should see a new line in stdout for every event that happens on the controller – so press a button and see what happens!

This is the output I saw when starting up xboxdrv (having already done the sync) and pressing the “A” button on my controller – notice the A:1 changing to A:0 as I release the button (about 3/4 of the way across the terminal). Success! (Click for a larger image)
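If you want to consume that output directly, something like the following works as a rough starting point – run it as root, and bear in mind the regular expression is an assumption based on the output format I saw, which may vary between xboxdrv versions:

import re
import subprocess

# Launch xboxdrv and read its state lines as they arrive
proc = subprocess.Popen(["xboxdrv"], stdout=subprocess.PIPE, universal_newlines=True)

previous = None
for line in iter(proc.stdout.readline, ""):
    match = re.search(r"A:(\d)", line)   # pick out the 'A' button state
    if match and match.group(1) != previous:
        previous = match.group(1)
        print("A button is now %s" % ("down" if previous == "1" else "up"))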

Zephod has written a small Python class to read the output of xboxdrv and present it in a more usable format – I’ve not yet had the chance to fully digest what it does and how it works (this is an exercise in me learning Python as well!), but it looks very promising, and I’m looking forward to continuing with my project!

Update 05/01/2013
I’ve had the chance to play around with the xbox_read.py file I linked above – and it works a treat! I’ve set up a quick test Python script that you can use to print out events. In order to use it, do:

$ git clone https://github.com/zephod/lego-pi.git legopi
$ touch legopi/__init__.py
$ wget http://projects.mattdyson.org/projects/robotarm/readEventTest.py
$ sudo python readEventTest.py

You may have to re-sync your controller as described above, and then once you start to move sticks/press buttons you should see a single line for each event! Magic!

Sound the alarm!

My colleague and partner in crimes against PHP has “not a blog”-ed about a recent waste of my Friday afternoon at work. There’s a pretty awesome video of it in action as well. Kudos to GrahamB on the light-triggered panic in the background.

Essentially, this wondrous creation boils down to an off-the-shelf novelty fuzz-light connected to a 13A 4-way extension lead that was adapted to be controlled via a USB relay. Only a simple modification was needed (I don’t trust myself with anything electrical usually) – snip the neutral cable on the extension lead and run it through the relay (even I can’t cock that up). Through the use of some really dodgy Python (not PHP, as Tom alludes to, for a change) that was adapted from numerous tutorials, the relay can then be controlled via an HTTP GET request to the relevant port with /on or /off as the request string.

This is then hit via a command to our work IRC bot, so you can type !emergency into any channel to turn the light on, or !emergencyover to turn it back off again – the bot makes the appropriate HTTP request and returns suitable witticisms. I’m not going to paste that code here though. Make your own.

You may wonder what the point of this probably-lethal waste of money & time is. Good question. I’m still asking myself that, but it’s pretty damn fun to control a flashing light from your computer…

The shonky web.py/pyserial Python script used for control:

"""
USB Relay HTTP Control Script

Accepts HTTP GET requests to /on and /off, and sends the requisite serial command to the USB relay board

Matt Dyson, 2012
http://mattdyson.org/blog/2012/11/sound-the-alarm/
"""
import serial
import web

commands = {
    'relay_1_on': 0x65,
    'relay_1_off': 0x6F,
    'relay_2_on': 0x66,
    'relay_2_off': 0x70,
    'info': 0x5A,
    'relay_states': 0x5B,
}

urls = (
    '/(.*)', 'relay'
)
app = web.application(urls, globals())

class relay:
    def GET(self, arg):
        if not arg:
            return 'No action given'
        elif arg == 'on':
            return 'Turning light on' + send(commands['relay_1_on'])
        elif arg == 'off':
            return 'Turning light off' + send(commands['relay_1_off'])
        else:
            return 'Unknown action'

def send(cmd):
    ser = serial.Serial('/dev/ttyACM0', 9600)
    ser.write(chr(cmd)+'\n')
    ser.close()
    return ". Done"

if __name__ == "__main__":
    app.run()
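For reference, web.py’s built-in server takes the port to listen on as its first command-line argument (defaulting to 8080), so assuming the script above is saved as relay.py (a name picked purely for illustration), it can be started with sudo python relay.py 8080 and then poked with curl http://localhost:8080/on to check the relay clicks before wiring the IRC bot up to it.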

The code still contains the necessary commands for getting information/relay states/relay 2 control (currently unused), but these aren’t exposed via HTTP. We’ve not dreamt up a use for the second relay yet, but I suspect further evil will happen at some point in the future.