Talking to a LIS3DH via Python on a Raspberry Pi

For my latest project (details available here) I acquired a couple of LIS3DH triple-axis accelerometers. As most of the products available through Adafruit are fairly widely used, I didn’t bother checking what libraries were available before buying, but unfortunately for me only a C++ library had been written. I didn’t feel like learning C++ just for the purpose of this project, and so the only option left was to write my own Python library!

Thankfully I had some excellent starting points with the aforementioned C++ library, as well as the Python I2C library that Adafruit have published. I found myself referring back to the manufacturer datasheet quite often as well, mainly to clarify what each register contained.

While the task initially looked rather daunting (having had zero prior experience with bit-bashing through registers), I found that with some pre-existing code to crib from, the various functions took shape rather quickly, and within an afternoon I’d produced a library exposing all the basic functions I’m likely to need for this project. I’ve put my code on GitHub in the hope that people will contribute to filling in the gaps, and improving where necessary.
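For a flavour of what that bit-bashing actually involves, here’s a stripped-down sketch of talking to the chip – not the library itself, just the general idea. The register addresses come from the ST datasheet; 0x18 is the default I2C address (it becomes 0x19 if the SDO pin is pulled high):

```python
LIS3DH_ADDRESS = 0x18  # Default I2C address (0x19 if SDO is pulled high)
REG_WHO_AM_I   = 0x0F  # Identity register - always reads back 0x33
REG_CTRL_REG1  = 0x20  # Power mode / data rate / axis enable
REG_OUT_X_L    = 0x28  # First of six output registers (X/Y/Z, low then high byte)

def twos_complement(value, bits=16):
   """Convert an unsigned register pair into a signed reading."""
   if value >= 1 << (bits - 1):
      value -= 1 << bits
   return value

def read_axes(bus):
   """Read raw X/Y/Z values; 'bus' is an smbus.SMBus-style object."""
   axes = []
   for offset in range(0, 6, 2):
      low = bus.read_byte_data(LIS3DH_ADDRESS, REG_OUT_X_L + offset)
      high = bus.read_byte_data(LIS3DH_ADDRESS, REG_OUT_X_L + offset + 1)
      axes.append(twos_complement((high << 8) | low))
   return axes

# On the Pi itself (with the python-smbus package installed), usage
# would look something like:
#
#   import smbus
#   bus = smbus.SMBus(1)  # Bus 0 on very early board revisions
#   assert bus.read_byte_data(LIS3DH_ADDRESS, REG_WHO_AM_I) == 0x33
#   bus.write_byte_data(LIS3DH_ADDRESS, REG_CTRL_REG1, 0x27)  # 10Hz, all axes on
#   print(read_axes(bus))
```

Reading the output registers a byte at a time like this sidesteps the chip’s auto-increment quirk on multi-byte reads, at the cost of a few extra bus transactions.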

“Invalid parameter provider” on Puppet

So I’ve just spent the last hour banging my head against my desk after trying to make some changes to a Puppet provider – for some reason, once I’d made the changes, all of my nodes started failing to run, even ones that had nothing to do with the provider. All I was getting was an error when trying to retrieve the manifests: “Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter provider...”.

None of my Google-fu turned up anything useful, until I stumbled upon a single comment in the Puppet ticket database.

Turns out that you may need to restart the Puppet master server after updating providers, or the entire system can fall apart before your very eyes.

I’m posting this here in the hope that

  1. I remember this next time, and avoid wasting hours debugging and trawling forum posts; and
  2. Someone might discover this post through a search engine one day, and be spared my pain!

Happy Puppet-ing!

One week with Pebble Time

Two and a half years ago (wow – really that long?!) I wrote a post on my impressions of the Pebble watch, one of the very first projects I backed on Kickstarter. At the time, I was pretty unimpressed by the product as a whole package – while the hardware was impressive for its day, the software really let the watch down, and sadly never saw a terrific improvement. The SDK alluded to in the original release did eventually turn up, and was followed by swathes of watchfaces and apps to run on your wrist, but none of these really captured my imagination; the watch remained a second screen for my wrist on which I could view notifications.

Given my underwhelmed attitude towards the product, I was surprised when I found myself throwing money at the new Pebble Time Kickstarter. The videos of the new watch grabbed me in a way that the original product had failed to – colour, animations, design, apps – this iteration seemed to correct everything that the original lacked. So, I waited patiently for the watch to arrive (they’ve definitely improved their logistics since their first attempt), and have now had a week to play. So I repeat the question I answered last time – have I fallen in love with this watch?

The answer – slightly more than last time! The watch is definitely a much better designed product: it looks and feels a lot better on my wrist, as the original was starting to look very dated in this Apple Watch/Android Wear golden era of wearable technology. The menus flow an awful lot better with some slick animation, and even though I find the screen a little harder to read, the colours really do improve the display. It feels like much more of a product, rather than a proof-of-concept piece of hardware with some poorly thought out software thrown on top. Integration with my phone is much more seamless as well: the new Pebble Time app has replaced the need to have separate applications installed for receiving third-party notifications, and the watchface/app store seems better integrated.

So what’s putting me off? To me, it still seems like a convenient device to view notifications on, and not a lot more. It’s missing a few “killer apps” like the Android Wear integration with Maps, or gestures on the Apple Watch. While the Pebble Time may be a much more desirable piece of hardware, and streets ahead of the original edition, I feel the software has fallen short of the mark yet again.

That said, I won’t be rushing out to buy the Apple or Android equivalent – the price points, battery life and physically large size of the alternatives have put me off for the time being, so the Pebble Time does have a place on my wrist for the foreseeable future.

PCTV tripleStick 292e with TVHeadend

I’ve blogged before about my home AV set up, but something I’ve not talked about is the recent addition of a couple of TV tuners so that I can watch and record live Freeview channels. Until recently I’d been using TVHeadend version 3.2 on a Raspberry Pi, with a PCTV nanoStick T2 that worked out of the box on Raspbian for me. However, the time came when I wanted to be able to record and/or view multiple channels at once, so I set about getting a second tuner to be able to do this. Through a lack of attention paid while ordering, I ended up with a PCTV tripleStick rather than a second nanoStick, and this one sadly was not as easy to set up. I bounced around a lot of forums and blog posts in getting mine working, so I thought I’d consolidate my learnings here, in the hope that someone else may find this useful!

First off, the chipset on the tripleStick (Silicon Labs Si2168) is different to the nanoStick (Sony CXD2820R), hence the incompatibility with the old drivers. There’s a very detailed teardown and comments at Antti’s LinuxTV Blog, which does a great job of explaining what’s under the hood, and the comments do offer some useful guidance (but also some misdirection!). I was previously running an older version of Raspbian (kernel 3.12 if I recall correctly), which failed to recognise the tripleStick as a DVB tuner at all, but several sources suggested that driver support was included in 3.16 and higher. I updated my Raspberry Pi with the usual apt-get update; apt-get upgrade; apt-get dist-upgrade to move up to a newer kernel version (3.18), which did get the dongle recognised in TVHeadend; however, it appeared to not get any signal, despite being plugged in to the same aerial as the working nanoStick.

At this point I attempted upgrading to TVHeadend 4.0, something I should have done a considerable time ago anyway; however, this had no effect and the dongle continued to show no signal through TVHeadend. Checking my logs, I found that my /var/log/syslog had repeated entries referring to “found a 'Silicon Labs Si2168' in cold state”, and claiming that firmware files had not been found. Many different message boards carried many different links to firmware, suggesting different combinations that needed to be installed, several of which I found to be corrupt; however, the one that worked for me was installed using the following:

$ wget http://palosaari.fi/linux/v4l-dvb/firmware/Si2168/dvb-demod-si2168-02.fw -O /lib/firmware/dvb-demod-si2168-02.fw

There are many suggestions that the file dvb-demod-si2168-b40-01.fw is also needed from that same source; however, it seems to be working fine for me without it present. I’ve seen some reports that the tuner should appear as two separate entries in TVHeadend (one as a DVB-T tuner, and another as DVB-C), however since I’m only using DVB-T I’ve not seen any problems – your mileage may vary!
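Given how many of the firmware copies floating around turned out to be corrupt, a quick sanity check on the downloaded file can save some head-scratching. This is just a rough heuristic of my own (the size threshold is a guess – the real blob is a few KB of binary data), but it catches the common failure mode of an HTML error page saved under a .fw name:

```python
def looks_like_firmware(data):
   """Return True if 'data' plausibly contains a binary firmware blob."""
   if len(data) < 1024:  # Suspiciously small for demodulator firmware
      return False
   if data.lstrip().startswith(b"<"):  # An HTML/XML error page in disguise
      return False
   return True

# Check the file downloaded above - run on the Pi once it's in place:
#
#   with open("/lib/firmware/dvb-demod-si2168-02.fw", "rb") as f:
#       print(looks_like_firmware(f.read()))
```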

AlarmPi: The Raspberry Pi Smart Alarm Clock

When I left my previous job around 18 months ago, I promised myself I’d do something productive with the time I had between employment. During that time, I realised how much I hated my alarm clock going off every morning, and also how stupid and inflexible most alarm clocks are. I managed to achieve very little with that spare time between jobs, but this hatred of alarm clocks has been driven home even further since I’ve started working shifts in my new job – no alarm clock I could find had the ability to vary the alarm time based on a shift pattern (I suppose that’s a fairly niche feature!), and very few had decent internet radio connectivity to allow me to listen to music I like in the morning.

That productive feeling drew me to buy some parts from Adafruit and have a play with some electronics projects – the furthest I got was playing around with an LCD display, as documented in this other blog post. More recently, my old alarm clock started to fail in rather interesting ways (ever been woken up at 3:27AM by a piercing screaming & static noise?), so I decided it was time to build my own, and the AlarmPi was born!

The core of the project is a Raspberry Pi connected up to a series of fairly basic components, all controlled by a Python script which takes input from all manner of sources, and shows information through the two front displays. I’ve put together a short video explaining some of the main features, which can be viewed below, and you can read more about the AlarmPi on the project page.
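The shift-pattern alarm logic mentioned above is easier to show than describe. Here’s a minimal sketch of the idea – the shift names, wake-up times and rota below are made-up examples (the real AlarmPi reads its schedule from elsewhere): work out where today falls in a repeating rota, and look up the matching alarm time.

```python
from datetime import date, time

# Example wake-up times per shift type - illustrative values only,
# not the real AlarmPi configuration
WAKE_TIMES = {
   "early": time(5, 30),
   "late": time(12, 0),
   "off": None,  # No alarm on rest days
}

def alarm_for(day, rota, rota_start):
   """Return the wake-up time for 'day' given a repeating shift rota.

   'rota' is a list of shift names, and 'rota_start' is the date the
   pattern began, so we can work out where in the cycle 'day' falls.
   """
   index = (day - rota_start).days % len(rota)
   return WAKE_TIMES[rota[index]]

# Two earlies, two lates, two days off, repeating
rota = ["early", "early", "late", "late", "off", "off"]
print(alarm_for(date(2014, 7, 12), rota, rota_start=date(2014, 7, 10)))  # 12:00:00
```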

Text to Speech on a Raspberry Pi using Google Translate

For a couple of upcoming projects, I’ve been trying to find a way of making a Raspberry Pi take a piece of text as input and vocalise it through a pair of connected speakers (so-called speech synthesis). There are a number of methods listed on the eLinux wiki page on the subject, however I found the suggested available packages produced rather robotic sounding results, and I was after something a bit more natural and pleasant sounding, rather than something to scare the bejeezus out of me every time it speaks. The most natural sounding offering is a hidden and unofficial API provided through the Google Translate service, which produces some very nice sounding audio, and is very accurate most of the time. Unfortunately, it’s limited to 100 characters at a time, which starts to be a problem when you want to read out large swathes of text.

There are a few scripts that I found (including this one from Dan Fountain) that offer an interface to this API, however the majority of them just split the input at the 100 character mark (or by the previous space to it), which leads to broken sounding sentences in some cases, where the pre-existing punctuation could be used. In order to get something slightly more natural sounding, I set about bodging together some Python, and came up with the following:

Please note: this script no longer works! Google made some changes to their TTS engine during July 2015, and the translate_tts request is now redirected to a CAPTCHA page. There is an updated version of the script available in my SVN repository, and now at GitHub as well.

#!/usr/bin/python

# googletts
# Created by Matt Dyson (mattdyson.org)
# http://mattdyson.org/blog/2014/07/text-to-speech-on-a-raspberry-pi-using-google-translate/
# Some inspiration taken from http://danfountain.com/2013/03/raspberry-pi-text-to-speech/

# Version 1.0 (12/07/14)

# Process some text input from our arguments, and then pass them to the Google translate engine
# for Text-To-Speech translation in nicely formatted chunks (the API cannot handle more than 100
# characters at a time).
# Splitting is done first by any punctuation (.,;:) and then by splitting by the MAX_LEN defined
# below.
# mpg123 is required for playing the resultant MP3 file that is returned by Google TTS

from subprocess import call
import sys
import re

MAX_LEN = 100 # Maximum length of a segment to send to Google for TTS
LANGUAGE = "en" # Language to use with TTS - this won't do any translation, just the voice it's spoken with

fullMsg = ""
i = 1

# Read our system arguments and add them into a single string
while i<len(sys.argv):
   fullMsg += sys.argv[i] + " "
   i+=1

# Split our full text by any available punctuation
parts = re.split(r"[.,;:]", fullMsg)

# The final list of parts to send to Google TTS
processedParts = []

while len(parts)>0: # While we have parts to process
   part = parts.pop(0) # Get first entry from our list

   if len(part)>MAX_LEN:
      # We need to do some cutting
      cutAt = part.rfind(" ",0,MAX_LEN) # Find the last space within the bounds of our MAX_LEN

      if cutAt==-1:
         # No space found within the limit, so just cut at the maximum length
         cutAt = MAX_LEN

      cut = part[:cutAt]

      # We need to process the remainder of this part next, so push it
      # back onto the front of our queue
      parts.insert(0, part[cutAt:])
   else:
      # No cutting needed
      cut = part

   cut = cut.strip() # Strip any whitespace
   if cut != "": # Make sure there's something left to read
      # Add into our final list
      processedParts.append(cut)

for part in processedParts:
   # Use mpg123 to play the resultant MP3 file from Google TTS
   call(["mpg123","-q","http://translate.google.com/translate_tts?tl=%s&q=%s" % (LANGUAGE,part)])

This can also be downloaded from my projects repository at http://projects.mattdyson.org/projects/speech/googletts, where updated versions may be available. The package mpg123 is required to play the resulting MP3 file that Google Translate returns. The easiest way to get this script installed will be with the following (run as root on your Raspberry Pi):

$ apt-get install mpg123
$ cd /usr/bin/
$ svn co http://projects.mattdyson.org/projects/speech speech
$ chmod +x speech/googletts
$ ln -s speech/googletts
$ googletts "Hello world, the installation of the text to speech script is now complete"

Unfortunately, if a clause of a sentence is longer than 100 characters there will still be an unwanted pause in the middle, as the script does not know where best to split the text, and if you’re using a lot of punctuation you might find the text takes a long time to read back. I’d be welcome to incorporate any improvements people may suggest!

Blinkytape

Yet another one of my Kickstarter jaunts turned up just before Christmas – the Blinkytape by BlinkinLabs. Essentially, this product is a strip of 60 LEDs connected to a USB interface, which allows you to address each “pixel” individually through a little bit of coding, so you can build up your own programmable lighting show! So far I’ve only had the chance to use this as a very nerdy alternative to Christmas lighting, and more generally for expanding my knowledge of Python, but I’ve got big plans for it in future!

First up – getting started. I decided to use this in conjunction with a Raspberry Pi I had going spare from another project, as it gives me network connectivity and a platform to write and run Python scripts on. Conveniently, no powered external USB hub is required to run the Blinkytape off a Pi (though I had no other peripherals plugged in, so your mileage may vary!), so it was just a case of plugging it in and installing the necessary Python libraries:

$ sudo apt-get install python-pip
$ sudo pip install pyserial

There is an official Blinkytape Python library available from their GitHub repository (along with some other languages), however at the time I was playing with this (before Christmas) their base class was lacking a lot of features – so I wrote my own! To get my integration script, run the following:

$ svn co http://projects.mattdyson.org/projects/blinkytape blinkytape

This will give you the main class (BlinkyTapeV2.py) and a couple of example files, all of which are commented in a (hopefully!) helpful manner to show what’s going on. The following video shows an example of the BouncingBlocks.py class in action (by running sudo python BouncingBlocks.py) followed by a more ‘festive’ example, something I knocked together very quickly to cycle through a series of effects in very Christmas-y red and green colours!
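For anyone curious what’s going on under the hood of BlinkyTapeV2.py, the serial protocol is wonderfully simple. This is a stripped-down sketch based on my understanding of it (the device name in the usage comment is an assumption – check where your tape appears): each frame is one R,G,B byte triple per pixel, with channel values capped at 254 because 0xFF is reserved as the end-of-frame marker.

```python
LED_COUNT = 60  # Number of LEDs on the strip

def build_frame(pixels):
   """Turn a list of (r, g, b) tuples into the raw bytes to send.

   Channel values are capped at 254, as 0xFF is reserved for the
   end-of-frame marker that tells the tape to latch the new colours.
   """
   frame = bytearray()
   for r, g, b in pixels:
      frame.extend(min(channel, 254) for channel in (r, g, b))
   frame.append(0xFF)  # End of frame - display it
   return bytes(frame)

# On the Pi itself (using the pyserial package installed above), sending
# a frame looks something like this - the device name may vary:
#
#   import serial
#   tape = serial.Serial("/dev/ttyACM0", 115200)
#   tape.write(build_frame([(255, 0, 0)] * LED_COUNT))  # Whole strip red
```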

Overall, I’m very impressed by the quality of this product. I was expecting something very rough-and-ready, being a rather specialist product marketed through Kickstarter – however the LEDs themselves are very bright, and nicely packaged up in a plastic flexible strip in order to protect the circuitry. The ease with which I managed to write my own integration library is also a testament to how simple the electronic design of this product is.

So what am I planning on using this for? First up, I’m looking at building my own alarm clock that reads from Google Calendar to only wake me up when I need to be up – normal alarms don’t seem to have been built with shift work in mind! I’m hoping to integrate the Blinkytape into this project by creating an ambient light that gradually fades up after the alarm has gone off, hopefully easing the transition into daylight hours! There are also plenty of projects I was hoping to do with a Moore’s Cloud Light, another Kickstarter project that sadly failed to meet its funding goal, but hopefully Blinkytape will fill the void! I’ll make sure to post back here with further updates when my Blinkytape gets put to use!

Kano: ICT education, easy as Pi!

I stumbled across the Kano Kickstarter project this evening, and felt compelled to take to this blog in order to say what an excellent idea this really is!

There has been a lot of negative media coverage over the state of ICT education in the UK, and from my perspective this seems fairly justified. As far as I can remember, none of my ICT teachers in high school actually had any qualification in the field, and only one or two had any relevant experience to bring to the classroom. The majority were almost completely unaware of anything other than the allocated syllabus, but it only took one particular teacher (who has influenced my career path much more than anyone will ever realise!) with a passion for programming and the subject in general to get me hooked. It’s worth noting that this inspiration didn’t come from the taught subject matter itself; it was extra-curricular activities that really got me started in the field. ICT will continue to be a niche subject until the curriculum is updated to actively engage kids, rather than subjecting them to endless lessons on dry topics such as network architectures and database schemas. I may be biased as a kinaesthetic learner, but I think that the best way to get kids to engage with and learn this subject is by getting hands on.

The Raspberry Pi foundation have done a fantastic job bringing a cheap (£30) computer in reach of everyone. I own a couple myself for general tinkering and hacking about, and can honestly say it’s the main reason why I started playing with electronics, and gave me the confidence to start on a whole raft of new projects (such as this) which I never would have considered before. However, sold as raw components, the machine can seem weird and scary, out of reach of the majority of educators and parents. While this is not a failing of the foundation itself (as I think they’ve been slightly overwhelmed by demand from the hobbyist sector), it’s crying out for someone to take this excellent system and package it in a more friendly way. Enter Kano.

Kano appears on the face of things to be a very simple project – they’re packaging up the Pi with the majority of peripherals needed to run it, and crucially they’re including kid-friendly instructions on how to get the whole thing working. Their use of the phrase “Simple as Lego” really struck a chord with me – that’s exactly the right way to approach this kind of teaching, by letting the kids play, hack around and figure it out themselves.

I really hope that the guys behind Kano take some of the money they’ve made from this project (it’s already 3 times over their target as I write this, with another 27 days left to run!) and take these kits into schools at a lower per-unit cost for education uses, just to make them a truly irresistible purchase for any ICT department. I genuinely believe that giving kids access to this kind of kit as part of their curriculum will not only educate them, but it’ll help inspire a future generation of hacker nerds – and that’s no bad thing in my view!

Managing music with beets

In a previous post, I talked about how I use Subsonic in order to make my entire music collection available over the internet to either my phone, or any computer via its web interface. I’m still using Subsonic to achieve this, but had one fairly major gripe left with the set-up – I was managing the library on disk manually. Subsonic is great for editing ID3 tags on individual songs, but it relies on files being in sensible per-album folders in order to populate its library, something which I very quickly got fed up of doing manually.

I’ve known for some time about the MusicBrainz project, which maintains a database of all music releases, and has a number of applications built on top of it which will scan, tag and move your music collection as desired. I recall using MusicBrainz Picard back in the day to sort my library before I moved to hosting music on my own server, but never found anything similar that I could run on my Ubuntu server, until now. Enter beets.

Beets is a program that will manage your entire music library, allowing command line access, and interfaces directly with the MusicBrainz API to tag tracks appropriately. Plugins for beets also allow you to update genres according to Last.fm, download cover art (which Subsonic will quite nicely pick up!), and even acoustically fingerprint unknown files to figure out what they are!

Installing beets was as simple as following the instructions on their getting started guide, however importing my existing music proved a little more tricky, and I had a few false starts at doing this. I eventually found that the easiest way to do this was to move my existing collection into a separate folder, and set up beets to sort my collection back into the original place. The ~/.config/beets/config.yaml file I ended up using looks like this:

directory: /media/music
library: /media/backup/beets/musiclibrary.blb
import:
    write: yes
    move: yes
    resume: yes
replace:
    '[\\/]': _
    '^\.': _
    '[\x00-\x1f]': _
    '[<>:"\?\*\|]': _
    '\.$': _
    '\s+$': ''
art_filename: cover
plugins: fetchart embedart lastgenre

Put simply, my music lives in /media/music (an NFS share), with my library file on a separate backup share. When importing files, I want them writing (moving) to their new location, so all I need to do is run beet import New\ Album/ and the files will be tagged and moved into place (with most dodgy characters removed – unfortunately some special characters still seem to slip through). Album art is also downloaded into the new folder as cover.jpg and embedded into the files themselves, and the genre field is populated using Last.fm. A nightly scan configured in Subsonic picks up the new files and adds them into the library, making them available to listen to!

The next step for me is to integrate importing into the same process as my automatic sorting of TV shows, as I currently still need to manually import newly downloaded tracks. However, even this is a massive improvement on the tedious process previously needed for getting OCD-quality tags on new music!

“I’m calling to offer you an upgrade…”

Recently, I’ve been plagued by 2 or 3 phone calls to my mobile each day, each claiming to be acting on behalf of O2 to offer me an upgrade to my contract. Apparently, once you reach the end of your contract, O2 declare open season for all manner of third-party companies to contact you offering contract renewals and new phones. I’m not currently in the market for a new phone, being more than happy with my Galaxy Nexus, so with the first couple I was more interested in a reduction of my monthly tariff. This led to the following conversation with one particular operator (slightly paraphrased):

Operator: “How much are you paying at the moment, and how many minutes/texts do you use?”
Me: “Surely you should already know that, if you’re phoning on behalf of O2”
Operator: “We don’t know that because of data protection”
Me: “[Stunned silence]… I’m paying around £20 a month, but only need 100 minutes and 300 texts. I’m more interested in increasing my data limit from 500MB to 1GB”
Operator: “Okay, well for £27 a month I can give you 500MB of data, unlimited texts and minutes”
Me: “That’s more than I’m paying now.”
Operator: “Yeah…”
Me: “I’ve just declined your offer of a £180 phone for free, so why don’t you try taking £10 a month off my contract for the next 18 months? Sound fair?”
Operator: “Well, for £25 a month…”
Me: “You’re not listening”
Operator: “I can see you’re just trying to waste my time… [hangs up]”

Now, I really don’t appreciate someone cold-calling me to sell me something, then being rude and hanging up once they can tell I’m not willing to give them any more of my money. I really didn’t know what to say when I was told that my phone number being passed around was fine, but woe betide whoever passes on details of my contract as well, because of “data protection”. I’d argue that my phone number is more sensitive than how many minutes/texts I get per month. The accusation of me wasting their time is just laughable as well.

Obviously, I was pissed at O2 for giving my details out to these clowns, so I took to the twittersphere to try and find out what was going on.

From some of the replies I got, it seems O2 aren’t the only operator doing this, Orange have been accused of passing on renewal customer details to third parties as well.

Subsequently, O2 have contacted me directly asking for details of who called and when, and so far have only come back with the answer that “these people are not O2”, and suggested using the Telephone Preference Service to block the calls. That’s not really going to work when O2 are passing on my details. I’m still waiting for a decent answer from O2; I can feel a strongly worded letter heading towards their customer complaints department (again) enquiring as to why they would sell a long-term customer’s details to moronic third parties.