One week with Pebble Time

Two and a half years ago (wow – really that long?!) I wrote a post on my impressions of the Pebble watch, one of the very first projects I backed on Kickstarter. At the time, I was pretty unimpressed by the product as a whole package – while the hardware was impressive (for the time), the software really let the watch down, and sadly never saw much improvement. The SDK alluded to in the original release did eventually turn up, and was followed by swathes of watchfaces and apps to run on your wrist, but none of these really captured my imagination; the watch remained a second screen for my wrist on which I could view notifications.

Given my lukewarm attitude towards the product, I was surprised when I found myself throwing money at the new Pebble Time Kickstarter. The videos of the new watch grabbed me in a way that the original product had failed to – colour, animations, design, apps – this iteration seemed to correct everything that the original lacked. So, I waited patiently for the watch to arrive (they have definitely improved their logistics since their first attempt), and have now had a week to play. I now repeat the question I answered last time – have I fallen in love with this watch?

The answer – slightly more than last time! The watch is definitely a much better-designed product; it looks and feels a lot better on my wrist, as the original was starting to look very dated in this Apple Watch/Android Wear golden era of wearable technology. The menus flow an awful lot better with some slick animation, and even though I find the screen a little harder to read, the colours really do improve the display. It feels like much more of a finished product, rather than a proof-of-concept piece of hardware with some poorly thought out software thrown on top. Integration with my phone is much more seamless as well; the new Pebble Time app has replaced the need to have separate applications installed for receiving third-party notifications, and the watchface/app store seems better integrated.

So what’s putting me off? To me, it still seems like a convenient device to view notifications on, and not a lot more. It’s missing a few “killer apps” like the Android Wear integration with Maps, or gestures on the Apple Watch. While the Pebble Time may be a much more desirable piece of hardware, and streets ahead of the original edition, I feel the software has fallen short of the mark yet again.

That said, I won’t be rushing out to buy the Apple or Android equivalent – the price points, battery life and physically large size of the alternatives have put me off for the time being, so the Pebble Time does have a place on my wrist for the foreseeable future.

PCTV tripleStick 292e with TVHeadend

I’ve blogged before about my home AV setup, but something I’ve not talked about is the recent addition of a couple of TV tuners so that I can watch and record live Freeview channels. Until recently I’d been using TVHeadend version 3.2 on a Raspberry Pi, with a PCTV nanoStick T2 that worked out of the box on Raspbian for me. However, the time came when I wanted to be able to record and/or view multiple channels at once, so I set about getting a second tuner. Through a lack of attention paid while ordering, I ended up with a PCTV tripleStick rather than a second nanoStick, and this one sadly was not as easy to set up. I bounced around a lot of forums and blog posts while getting mine working, so I thought I’d consolidate what I learned here, in the hope that someone else may find it useful!

First off, the chipset on the tripleStick (Silicon Labs Si2168) is different to the nanoStick (Sony CXD2820R), hence the incompatibility with the old drivers. There’s a very detailed teardown at Antti’s LinuxTV Blog which does a great job of explaining what’s under the hood, and the comments there offer some useful guidance (but also some misdirection!). I was previously running an older version of Raspbian (kernel 3.12 if I recall correctly), which failed to recognise the tripleStick as a DVB tuner at all, but several sources suggested that driver support arrived in kernel 3.16 and higher. I updated my Raspberry Pi with the usual apt-get update; apt-get upgrade; apt-get dist-upgrade to move up to a newer kernel version (3.18), which did get the dongle recognised in TVHeadend; however, it appeared not to get any signal, despite being plugged into the same aerial as the working nanoStick.

At this point I attempted upgrading to TVHeadend 4.0, something I should have done a considerable time ago anyway; however, this had no effect and the dongle continued to show no signal through TVHeadend. Checking my logs, I found that my /var/log/syslog had repeated entries referring to “found a 'Silicon Labs Si2168' in cold state”, claiming that firmware files had not been found. Many different message boards carried many different links to firmware, and suggested different combinations of files that needed to be installed, several of which I found to be corrupt. The one that worked for me was installed (as root) using the following:

$ wget http://palosaari.fi/linux/v4l-dvb/firmware/Si2168/dvb-demod-si2168-02.fw -O /lib/firmware/dvb-demod-si2168-02.fw

There are many suggestions that the file dvb-demod-si2168-b40-01.fw is also needed from that same source; however, it seems to be working fine for me without it. I’ve seen some reports that the tuner should appear as two separate entries in TVHeadend (one as a DVB-T tuner, and another as DVB-C), but since I’m only using DVB-T I’ve not seen any problems – your mileage may vary!

AlarmPi: The Raspberry Pi Smart Alarm Clock

When I left my previous job around 18 months ago, I promised myself I’d do something productive with the time I had between jobs. During that time, I realised how much I hated my alarm clock going off every morning, and also how stupid and inflexible most alarm clocks are. I managed to achieve very little with that spare time, but this hatred of alarm clocks has been driven home even further since I started working shifts in my new job – no alarm clock I could find had the ability to vary the alarm time based on a shift pattern (I suppose that’s a fairly niche feature!), and very few had decent internet radio connectivity to let me listen to music I like in the morning.

That productive feeling drew me to buy some parts from Adafruit and have a play with some electronics projects – the furthest I got was playing around with an LCD display, as documented in this other blog post. More recently, my old alarm clock started to fail in rather interesting ways (ever been woken up at 3:27AM by a piercing scream of static?), so I decided it was time to build my own, and the AlarmPi was born!

The core of the project is a Raspberry Pi connected up to a series of fairly basic components, all controlled by a Python script which takes input from all manner of sources and shows information through the two front displays. I’ve put together a short video explaining some of the main features, which can be viewed below, and you can read more about the AlarmPi on the project page.

Text to Speech on a Raspberry Pi using Google Translate

For a couple of upcoming projects, I’ve been trying to find a way of making a Raspberry Pi take a piece of text as input and vocalise it through a pair of connected speakers (so-called speech synthesis). There are a number of methods listed on the eLinux wiki page on the subject, however I found that the suggested packages produced rather robotic-sounding results, and I was after something a bit more natural and pleasant sounding, rather than something to scare the bejeezus out of me every time it speaks. The most natural-sounding offering is a hidden and unofficial API provided through the Google Translate service, which produces some very nice sounding audio and is very accurate most of the time. Unfortunately, it’s limited to 100 characters at a time, which starts to be a problem when you want to read out large swathes of text.

There are a few scripts around (including this one from Dan Fountain) that offer an interface to this API, however the majority of them just split the input at the 100-character mark (or at the last space before it), which can lead to broken-sounding sentences in cases where the pre-existing punctuation could have been used instead. In order to get something slightly more natural sounding, I set about bodging together some Python, and came up with the following:

Please note: this script no longer works! Google made some changes to their TTS engine during July 2015, and the translate_tts request is now redirected to a CAPTCHA page. There is an updated version of the script available in my SVN repository, and now on GitHub as well

#!/usr/bin/python

# googletts
# Created by Matt Dyson (mattdyson.org)
# http://mattdyson.org/blog/2014/07/text-to-speech-on-a-raspberry-pi-using-google-translate/
# Some inspiration taken from http://danfountain.com/2013/03/raspberry-pi-text-to-speech/

# Version 1.0 (12/07/14)

# Process some text input from our arguments, and then pass them to the Google translate engine
# for Text-To-Speech translation in nicely formatted chunks (the API cannot handle more than 100
# characters at a time).
# Splitting is done first by any punctuation (.,;:) and then by splitting by the MAX_LEN defined
# below.
# mpg123 is required for playing the resultant MP3 file that is returned by Google TTS

from subprocess import call
import sys
import re

MAX_LEN = 100 # Maximum length of a segment to send to Google for TTS
LANGUAGE = "en" # Language to use with TTS - this won't do any translation, just the voice it's spoken with

fullMsg = ""
i = 1

# Read our system arguments and add them into a single string
while i<len(sys.argv):
   fullMsg += sys.argv[i] + " "
   i+=1

# Split our full text by any available punctuation
parts = re.split(r"[.,;:]", fullMsg)

# The final list of parts to send to Google TTS
processedParts = []

while len(parts)>0: # While we have parts to process
   part = parts.pop(0) # Get first entry from our list

   if len(part)>MAX_LEN:
      # We need to do some cutting
      cutAt = part.rfind(" ",0,MAX_LEN) # Find the last space within the bounds of our MAX_LEN

      cut = part[:cutAt]

      # We need to process the remainder of this part next, so put it
      # back at the front of the queue
      parts.insert(0, part[cutAt:])
   else:
      # No cutting needed
      cut = part

   cut = cut.strip() # Strip any whitespace
   if cut != "": # Make sure there's something left to read
      # Add into our final list
      processedParts.append(cut.strip())

for part in processedParts:
   # Use mpg123 to play the resultant MP3 file from Google TTS
   call(["mpg123","-q","http://translate.google.com/translate_tts?tl=%s&q=%s" % (LANGUAGE,part)])

This can also be downloaded from my projects repository at http://projects.mattdyson.org/projects/speech/googletts, where updated versions may be available. The package mpg123 is required to play the resulting MP3 file that Google Translate returns. The easiest way to get this script installed is with the following (run as root on your Raspberry Pi):

$ apt-get install mpg123
$ cd /usr/bin/
$ svn co http://projects.mattdyson.org/projects/speech speech
$ chmod +x speech/googletts
$ ln -s speech/googletts
$ googletts "Hello world, the installation of the text to speech script is now complete"

Unfortunately, if a single clause of a sentence is longer than 100 characters there will still be an unwanted pause in the middle, as the script does not know where best to split the text, and if you’re using a lot of punctuation you might find the text takes a long time to read back. I’d be happy to incorporate any improvements people may suggest!
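One obvious improvement would be to look for softer break points (dashes, conjunctions and so on) within an over-long clause before falling back to the last space. A rough, untested sketch of that idea – the find_cut helper and the list of break markers are purely hypothetical, not part of the script above:

# Hypothetical refinement: prefer softer break points before cutting at a space
SOFT_BREAKS = [" - ", " and ", " but ", " or "]

def find_cut(part, max_len):
   # Return the index at which to cut 'part' so the first chunk fits within max_len
   best = -1
   for marker in SOFT_BREAKS:
      best = max(best, part.rfind(marker, 0, max_len))
   if best > 0:
      return best
   # No soft break found - fall back to the last space, as the script above does
   return part.rfind(" ", 0, max_len)

This could stand in for the part.rfind(" ",0,MAX_LEN) call in the main loop, at the cost of slightly shorter chunks being sent off to Google.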

Blinkytape

Yet another one of my Kickstarter jaunts turned up just before Christmas – the Blinkytape by BlinkinLabs. Essentially, this product is a strip of 60 LEDs connected to a USB interface, which allows you to address each “pixel” individually with a little bit of code, so you can build up your own programmable lighting show! So far I’ve only had the chance to use this as a very nerdy alternative to Christmas lighting, and as a way of generally expanding my knowledge of Python, but I’ve got big plans for it in future!

First up – getting started. I decided to use this in conjunction with a Raspberry Pi I had going spare from another project, as it gives me network connectivity and a platform to write and run Python scripts on. Conveniently, no powered external USB hub is required to run the Blinkytape off a Pi (although I had no other peripherals plugged in, so your mileage may vary!), so it was just a case of plugging it in and installing the necessary Python libraries:

$ sudo apt-get install python-pip
$ sudo pip install pyserial

There is an official Blinkytape Python library available from their GitHub repository (along with some other languages), however at the time I was playing with this (before Christmas) their base class was lacking a lot of features – so I wrote my own! To get my integration script, run the following:

$ svn co http://projects.mattdyson.org/projects/blinkytape blinkytape

This will give you the main class (BlinkyTapeV2.py) and a couple of example files, all of which are commented in a (hopefully!) helpful manner to show what’s going on. The following video shows the BouncingBlocks.py example in action (run with sudo python BouncingBlocks.py), followed by a more ‘festive’ example, something I knocked together very quickly to cycle through a series of effects in very Christmas-y red and green colours!
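If you’d rather talk to the tape directly instead of going through a library, the underlying serial protocol is (as far as I can tell) very simple: each pixel takes three bytes (red, green, blue, each capped at 254, since 255 is reserved as the end-of-frame marker), and a final 255 byte tells the tape to display the frame. A minimal sketch of that, assuming the tape shows up as /dev/ttyACM0 on your Pi – the device name and the pulsing-red demo are just for illustration:

#!/usr/bin/python
# Minimal sketch: drive the Blinkytape directly over serial using pyserial

import time
import serial

PORT = "/dev/ttyACM0" # Where the tape appeared on my Pi - yours may differ
LED_COUNT = 60

tape = serial.Serial(PORT, 115200)

def show_frame(pixels):
   # Send one frame of (r, g, b) tuples and latch it onto the tape
   data = bytearray()
   for (r, g, b) in pixels:
      # 255 is the end-of-frame marker, so cap each channel at 254
      data.extend([min(r, 254), min(g, 254), min(b, 254)])
   data.append(255) # Tell the tape to display what it has just received
   tape.write(bytes(data))
   tape.flush()

# Slowly pulse the whole strip red as a quick sanity check
while True:
   for level in list(range(0, 250, 5)) + list(range(250, 0, -5)):
      show_frame([(level, 0, 0)] * LED_COUNT)
      time.sleep(0.01)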

Overall, I’m very impressed by the quality of this product. I was expecting something very rough-and-ready, it being a rather specialist product marketed through Kickstarter – however the LEDs themselves are very bright, and nicely packaged up in a flexible plastic strip to protect the circuitry. The ease with which I managed to write my own integration library is also a testament to how simple the electronic design of this product is.

So what am I planning on using this for? First up, I’m looking at building my own alarm clock that reads from Google Calendar to only wake me up when I need to be up – normal alarms don’t seem to have been built with shift work in mind! I’m hoping to integrate the Blinkytape into this project by creating an ambient light that gradually fades up after the alarm has gone off, hopefully easing the transition into daylight hours! There are also plenty of projects I was hoping to do with a Moore’sCloud Light, another Kickstarter project that sadly failed to meet its funding goal, but hopefully the Blinkytape will fill the void! I’ll make sure to post back here with further updates when my Blinkytape gets put to use!

Kano: ICT education, easy as Pi!

I stumbled across the Kano Kickstarter project this evening, and felt compelled to take to this blog in order to say what an excellent idea this really is!

There has been a lot of negative media coverage over the state of ICT education in the UK, and from my perspective this seems fairly justified. As far as I can remember, none of my ICT teachers in high school actually had any qualification in the field, and only one or two had any relevant experience to bring to the classroom. The majority were almost completely unaware of anything other than the allocated syllabus, but it only took one particular teacher (who has influenced my career path much more than anyone will ever realise!) with a passion for programming and the subject in general to get me hooked. It’s worth noting that this inspiration didn’t come from the taught subject matter itself; it was the extra-curricular activities that really got me started in the field. ICT will continue to be a niche subject until the curriculum is updated to actively engage kids, rather than subjecting them to endless lessons on dry topics like network architectures and database schemas. I may be biased as a kinaesthetic learner, but I think the best way to get kids to engage with and learn this subject is by getting hands-on.

The Raspberry Pi Foundation have done a fantastic job of bringing a cheap (£30) computer within reach of everyone. I own a couple myself for general tinkering and hacking about, and can honestly say it’s the main reason why I started playing with electronics, and it gave me the confidence to start on a whole raft of new projects (such as this) which I never would have considered before. However, sold as raw components as it is, the machine can seem weird and scary, out of reach of the majority of educators and parents. This is not a failing of the Foundation itself (I think they’ve been somewhat overwhelmed by demand from the hobbyist sector), but it’s crying out for someone to take this excellent system and package it in a more friendly way. Enter Kano.

Kano appears on the face of things to be a very simple project – they’re packaging up the Pi with the majority of peripherals needed to run it, and crucially they’re including kid-friendly instructions on how to get the whole thing working. Their use of the phrase “Simple as Lego” really struck a chord with me – that’s exactly the right way to approach this kind of teaching, by letting the kids play, hack around and figure it out themselves.

I really hope that the guys behind Kano take some of the money they’ve made from this project (it’s already 3 times over their target as I write this, with another 27 days left to run!) and take these kits into schools at a lower per-unit cost for educational use, just to make them a truly irresistible purchase for any ICT department. I genuinely believe that giving kids access to this kind of kit as part of their curriculum will not only educate them, but will help inspire a future generation of hacker nerds – and that’s no bad thing in my view!

Managing music with beets

In a previous post, I talked about how I use Subsonic to make my entire music collection available over the internet, either to my phone or to any computer via its web interface. I’m still using Subsonic to achieve this, but had one fairly major gripe left with the setup – I was managing the library on disk manually. Subsonic is great for editing ID3 tags on individual songs, but it relies on files being in sensible per-album folders in order to populate its library, something which I very quickly got fed up of doing manually.

I’ve known for some time about the Musicbrainz project, which maintains a database of all music releases, and has a number of applications built on top of it which will scan, tag and move your music collection as desired. I recall using Musicbrainz Picard back in the day to sort my library before I moved to hosting music on my own server, but never found anything similar that I could run on my Ubuntu server, until now. Enter beets.

Beets is a program that manages your entire music library from the command line, and it interfaces directly with the Musicbrainz API to tag tracks appropriately. Plugins for beets also allow you to update genres according to Last.fm, download cover art (which Subsonic will quite nicely pick up!), and even acoustically fingerprint unknown files to figure out what they are!

Installing beets was as simple as following the instructions in their getting started guide; however, importing my existing music proved a little trickier, and I had a few false starts at doing it. I eventually found that the easiest way was to move my existing collection into a separate folder, and set beets up to sort it back into the original place. The ~/.config/beets/config.yaml file I ended up using looks like this:

directory: /media/music
library: /media/backup/beets/musiclibrary.blb
import:
    write: yes
    move: yes
    resume: yes
replace:
    '[\\/]': _
    '^\.': _
    '[\x00-\x1f]': _
    '[<>:"\?\*\|]': _
    '\.$': _
    '\s+$': ''
art_filename: cover
plugins: fetchart embedart lastgenre

Put simply, my music lives in /media/music (an NFS share), with my library file on a separate backup share. When importing files, I want them written (moved) to their new location, so all I need to do is run beet import New\ Album/ and the files will be tagged and moved into place (with most dodgy characters removed – unfortunately some special characters still seem to slip through). Album art is also downloaded into the new folder as cover.jpg and embedded into the files themselves, and the genre field is populated using Last.fm. A nightly scan configured in Subsonic picks up the new files and adds them to the library, making them available to listen to!

The next step for me is to integrate importing into the same process as my automatic sorting of TV shows, as I currently still need to manually import newly downloaded tracks. However, even this is a massive improvement on the tedious process previously needed for getting OCD-quality tags on new music!
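As a starting point, something as simple as a small script that shells out to beets in quiet mode would probably do the job. A rough sketch, assuming a hypothetical /media/downloads/music drop folder for new albums, and that beet import -q (quiet mode, which applies the best match or skips rather than prompting) is acceptable for unattended tagging:

#!/usr/bin/python
# Hypothetical sketch: import any new album folders into beets without prompting

import os
from subprocess import call

INCOMING = "/media/downloads/music" # Hypothetical drop folder for newly downloaded albums

for name in sorted(os.listdir(INCOMING)):
   path = os.path.join(INCOMING, name)
   if not os.path.isdir(path):
      continue
   # -q (quiet) makes beets apply its best match or skip, rather than asking questions
   call(["beet", "import", "-q", path])

With the move: yes option in the config above, anything successfully imported should end up moved out of the drop folder and into /media/music (although empty folders may be left behind), so a script like this could be run from cron.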

“I’m calling to offer you an upgrade…”

Recently, I’ve been plagued by two or three phone calls to my mobile each day, each claiming to be acting on behalf of O2 to offer me an upgrade to my contract. Apparently, once you reach the end of your contract, O2 declare open season for all manner of third-party companies to contact you offering contract renewals and new phones. I’m not currently in the market for a new phone, being more than happy with my Galaxy Nexus, so with the first couple I was more interested in a reduction of my monthly tariff. This led to the following conversation with one particular operator (slightly paraphrased):

Operator: “How much are you paying at the moment, and how many minutes/texts do you use?”
Me: “Surely you should already know that, if you’re phoning on behalf of O2”
Operator: “We don’t know that because of data protection”
Me: “[Stunned silence]… I’m paying around £20 a month, but only need 100 minutes and 300 texts. I’m more interested in increasing my data limit from 500MB to 1GB”
Operator: “Okay, well for £27 a month I can give you 500MB of data, unlimited texts and minutes”
Me: “That’s more than I’m paying now.”
Operator: “Yeah…”
Me: “I’ve just declined your offer of a £180 phone for free, so why don’t you try taking £10 a month off my contract for the next 18 months? Sound fair?”
Operator: “Well, for £25 a month…”
Me: “You’re not listening”
Operator: “I can see you’re just trying to waste my time… [hangs up]”

Now, I really don’t appreciate someone cold-calling me to sell me something, then being rude and hanging up once they can tell I’m not willing to give them any more of my money. I really didn’t know what to say when told that passing my phone number around was fine, but woe betide whoever passes on details of my contract as well, because of “data protection”. I’d argue that my phone number is more sensitive than how many minutes/texts I get per month. The accusation of me wasting their time is just laughable as well.

Obviously, I was pissed off at O2 for giving my details out to these clowns, so I took to the Twittersphere to try and find out what was going on.

From some of the replies I got, it seems O2 aren’t the only operator doing this – Orange have been accused of passing on renewal customers’ details to third parties as well.

Subsequently, O2 have contacted me directly asking for details of who called and when, but so far have only come back with the answer that “these people are not O2”, and suggested using the Telephone Preference Service to block the calls. That’s not really going to work when O2 are passing on my details. I’m still waiting for a decent answer from O2; I can feel a strongly worded letter heading towards their customer complaints department (again), enquiring as to why they would sell a long-term customer’s details on to moronic third parties.

Kickstarting: You’re doing it wrong

Anyone who follows me on Twitter may have seen my various rantings last night about the OUYA console that I backed on Kickstarter in early July last year.

I know for a fact I’m not alone in my frustration – a recent Ask Me Anything thread on Reddit is filled with comments from irritated backers about the lack of communication from the OUYA team about shipping dates, and the absence of any transparency over what’s going on within the company right now. Even more concerning is their promise to release the OUYA for general sale through the likes of Amazon on the 4th of June. That’s a month away, and in that time they have to ship to 50% of their Kickstarter backers (according to their latest update – you will need to be a backer in order to view), and to the thousands that have pre-ordered after the Kickstarter campaign ended. In this update, they also said that…

“We successfully eclipsed 50 percent of units shipped and remain ahead of schedule to complete all shipments by the end of May”

Well… duh. Not a great achievement. If not everyone has received their pre-ordered consoles by the time retail units ship, something has gone very wrong with their production process. What is the point in supporting a project if you get little (if any) advantage over general-market customers? In my opinion, this is an extremely poor way to treat your early investors. It’s fantastic that the OUYA console will be reaching even more people through general release – but this should have been secondary to fulfilling the existing delivery promises to backers, rather than something that compromised the delivery schedule. Delaying general release would also have given more time for bug-fixing in the software/hardware – as it stands, they’ve lost the ability to use early backers as a beta test, as there’s little to no distinction between the two tranches of consoles. Development and shipping logistics are difficult, and the majority of Kickstarter projects are run by individuals rather than businesses – take the time to get it right, rather than diving in head-first.

To me, all of this is a perfect demonstration of how Kickstarter should not be used. When you post a project, people are pledging money to support you, often paying over the odds to help an idea they think deserves to be brought to market. Kickstarter provides a less risky way for your average consumer to provide angel investment, and gives start-ups a platform to reach a massive audience of potential investors – a win/win scenario. By putting a project on Kickstarter, you are not offering it for sale; you are asking people to come on board and be involved in the process of bringing your project to life – personally I find this very exciting! When I back a successful project, I expect regular updates on how things are progressing, as well as access to things like burn-down charts, details of early prototypes, voting on project direction and suchlike. While I realise this isn’t what everyone is after when backing a project, I think it’s a matter of courtesy to your backers to make this information available – treat these people as your investors, not future customers. You already have their money, so let them enjoy the ride with you, rather than keeping them in the dark. In the case of the OUYA, I’m almost insulted by the way I’ve been treated as a backer.

Sadly, the OUYA team are not alone in handling public relations in this way – almost every hardware project I’ve backed seems to be plagued by a lack of updates. Even something simple like the Twyst Winder (a project created by a group of high-school students near where I used to live in London) has only posted a single update since the project was successfully funded, and that was nearly a month ago. The Pebble Watch also seemed to become “too popular”, leading to (understandable) shipping delays, but the entire process was kept shielded from backers by a lack of communication.

However, the few software projects I’ve backed don’t seem to have this problem. Project Godus, for example, have video-streamed several of their internal meetings, published regular updates on how the game is progressing (including early gameplay videos), and shown off concept artwork and the like. While I’ll admit to not diligently following everything they do, I love having that kind of access to the project I’ve supported – I think this nicely captures exactly what Kickstarter should be, but sadly is not.

Using 20×4 RGB LCD over i2c with a Raspberry Pi

Now there’s a specialist blog post title if ever there were one…

Recently, I’ve been dabbling with electronics to fill the void of spare time I’ve found myself with while I’m between jobs. I’m currently working on a half-baked idea to create some sort of digital assistant that will take instructions in some form, and then read stuff back to me in a Siri-esque manner. Nothing sounds more awesome than having Twitter @replies read out to you, right?! To kick off this project, and to get me motivated to actually do something, I ordered a boatload of parts from Adafruit, and set about learning how to use them. First challenge – connecting up their 20×4 RGB backlight negative LCD screen to my Raspberry Pi.

In order to assist with this, I also bought the i2c / SPI character LCD backpack to save some GPIO pins for other uses. Due to my lack of attention while ordering, I failed to notice that the LCD backpack only has 16 pins, whereas the LCD screen I ordered has 18 (two more for the extra backlight LEDs). Rather than giving up and being limited to only a single channel of control for the backlights, I decided to connect pins 15 to 18 away from the backpack – pin 15 to 5V, and pins 16 to 18 directly into the Pi – and mash two separate libraries together to give myself full control. This is what I ended up with (click for big):

[Photo of the finished wiring – 2013-05-02 21.36.11]

Now, that looks like an absolute mess. That’s because it is. In an attempt to make that a bit more readable, here’s a Fritzing diagram of how it’s wired (again, click for big).

[Fritzing wiring diagram – lcd_test_bb]

Now, that’s even more confusing, as I couldn’t find a Fritzing library with the right parts – so I’ve fudged a few things. Firstly, imagine there are pins 17 and 18 on the LCD, and that the LCD itself is 20×4 rather than 16×2. Secondly, imagine the chip in the middle is actually the i2c backpack mentioned above, so everything on the bottom is connected straight to pins 1 to 16 on the LCD, and the VCC/GND/CLK/DAT are connected to the Pi. So, in terms of wiring we get:

  • LCD #1 to #14 -> i2c backpack #1 to #14
  • LCD #15 -> 5V0
  • LCD #16 -> Raspberry Pi GPIO 17
  • LCD #17 -> Raspberry Pi GPIO 27
  • LCD #18 -> Raspberry Pi GPIO 22
  • i2c backpack GND -> GND
  • i2c backpack VCC -> 5V0
  • i2c backpack CLK -> Raspberry Pi SCL
  • i2c backpack DAT -> Raspberry Pi SDA

Now that’s all set up, you can use the standard AdafruitLcd Python library (a nice adaptation that I used can be found here) to control the text shown on screen, but we need something bespoke for our background lighting. For future projects, I wanted the ability to control each colour individually, so I can set arbitrary RGB values on the screen, and also brighten/dim appropriately. The latest version of RPi.GPIO will let you do software Pulse Width Modulation (PWM), which achieves this quite nicely for us. To install the latest version (0.5.2a at the time of writing), you’ll need to run the following on your Pi (as root):

$ wget https://pypi.python.org/packages/source/R/RPi.GPIO/RPi.GPIO-0.5.2a.tar.gz
$ tar xf RPi.GPIO-0.5.2a.tar.gz
$ cd RPi.GPIO-0.5.2a
$ python setup.py install
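Software PWM with RPi.GPIO only takes a few lines – here’s a minimal sketch driving the three backlight pins, using the BCM pin numbers from the wiring list above (note that, depending on how the backlight LEDs are wired, a higher duty cycle may actually mean a dimmer colour, so you may need to invert the values):

#!/usr/bin/python
# Minimal sketch: software PWM on the three backlight pins with RPi.GPIO

import time
import RPi.GPIO as GPIO

RED, GREEN, BLUE = 17, 27, 22 # BCM pin numbers from the wiring list above

GPIO.setmode(GPIO.BCM)
pwms = []
for pin in (RED, GREEN, BLUE):
   GPIO.setup(pin, GPIO.OUT)
   pwm = GPIO.PWM(pin, 100) # 100Hz software PWM on this pin
   pwm.start(0)
   pwms.append(pwm)

def setColour(red, green, blue):
   # Set each channel as a duty cycle percentage (0-100)
   for pwm, value in zip(pwms, (red, green, blue)):
      pwm.ChangeDutyCycle(value)

try:
   # Cycle through a few colours as a quick test
   for colour in [(100, 0, 0), (0, 100, 0), (0, 0, 100), (100, 100, 100)]:
      setColour(*colour)
      time.sleep(1)
finally:
   GPIO.cleanup()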

So, combining some standard example code for PWM on the Pi with the AdafruitLcd library, I developed my own little library for controlling an LCD wired up in this manner. To get up and running with the code I wrote, you will need (again, as root):

$ mkdir lcdtest
$ cd lcdtest
$ svn co http://projects.mattdyson.org/projects/LCDControl@889 .
$ git clone https://github.com/PDKK/RpiLcdBackpack.git
$ touch RpiLcdBackpack/__init__.py
$ python testLCD.py

Note: If you see IOError: [Errno 5] Input/output error when running testLCD.py, you may need to edit RpiLcdBackpack/RpiLcdBackpack.py and change the line self.__bus=smbus.SMBus(0) to self.__bus=smbus.SMBus(1). This should only be needed on newer revisions of the Pi, where the i2c bus number changed from 0 to 1.

Note 2 (added 15/10/14): The version of my LCDControl library that you’re checking out with the above command is now out of date; I’ve updated the library to use pigpio instead of RPi.GPIO, as the latter was causing me flickering problems when the Pi was under load. To get the latest version, remove the @889 from the svn co command – you will need to have pigpio installed and running for this to work.

Once you run testLCD.py, you should see the screen flash a series of colours, followed by some messages appearing on the screen. Yaaay – it works!

The LCDControl class I’ve written is pretty basic (I’m still learning Python… slowly!) but allows you to set RGB or individual colour values for the backlights, and also pass in any message without worrying about formatting. Currently (version 1.0 at the time of writing), the LCDControl.setMessage method will split by the newline character (\n) and do the logic regarding line numbers for you (as the third display line on the LCD is actually carried over from the first line passed to the controller, and the fourth from the second) – future iterations of this code will allow things such as full text wrapping and scrolling text.
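That line shuffling is needed because of the way these 20×4 panels lay out their display memory: text written sequentially fills the physical rows in the order 1, 3, 2, 4. A rough illustration of the remapping – a hypothetical helper, not the actual code from LCDControl:

LCD_WIDTH = 20 # Characters per line on the 20x4 panel

def reorderForDisplay(message):
   # Pad each line of 'message' to the panel width, then reorder them so that
   # writing the result sequentially shows the lines in the expected order
   lines = message.split("\n")[:4]
   while len(lines) < 4:
      lines.append("")
   padded = [line.ljust(LCD_WIDTH)[:LCD_WIDTH] for line in lines]
   # Sequential writes fill physical rows 1, 3, 2 and 4 in that order,
   # so send logical lines 1, 3, 2 and 4
   return padded[0] + padded[2] + padded[1] + padded[3]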

So there we have it – a 20×4 RGB LCD screen talking to a Raspberry Pi over i2c, retaining individual control over the background LEDs. As always, please leave a comment if you spot anything wrong with what I’ve written here, or have any feedback/suggestions/requests!