Installing Debian 7.1 “Wheezy”

Every time I install Linux, no matter what the distribution, I have the same problems: configuring video and audio. When I installed Debian 7.1 recently I documented the steps I took. Hopefully you’ll find something of use here.

My Hardware

Video: Advanced Micro Devices [AMD] nee ATI RV710 [Radeon HD 4350]
Sound: Advanced Micro Devices [AMD] nee ATI RV710/730 HDMI Audio [Radeon HD 4000 series]
Monitor: Dell SP2208WFP (with built-in webcam)
WebCam: OmniVision Technologies, Inc. Monitor Webcam

Video and sound devices can be identified using lspci. The webcam is a USB device, so it can be identified using lsusb.
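For example, the listings can be filtered down to just the relevant devices. The grep patterns are only illustrative (lspci comes from the pciutils package, lsusb from usbutils),

```shell
# PCI devices: filter for the video and audio controllers
# (|| true keeps the pipeline from failing when nothing matches)
lspci | grep -Ei 'vga|audio' || true

# the webcam is a USB device, so it shows up in lsusb instead;
# "omnivision" is just what my hardware reports
lsusb | grep -i 'omnivision' || true
```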


Video

For regular use the open-source Radeon display driver is considered adequate. If you’re a gamer, or just want to squeeze every bit of performance out of your card, you might consider the AMD proprietary display driver. For my needs the open-source driver is fine.

On the first boot GNOME 3 failed to load. The installer recognized my video card and installed the correct package, xserver-xorg-video-radeon, but as documented on the Debian wiki my card requires proprietary firmware. This firmware is available in the non-free repository. The steps to install it are,

  1. Open up Synaptic Package Manager and select Settings, Repositories from the menu
  2. Tick the checkbox next to Non-DFSG-compatible Software (non-free)
  3. Click Close and then the Reload toolbar button
  4. Search for and install firmware-linux-nonfree
  5. Reboot

GNOME 3 should start successfully after the reboot.


Sound

When I didn’t hear any sound I thought I was going to be in for a world of pain with sound drivers, ALSA and PulseAudio. Luckily all I needed to do was select the right output device. My system has two sound devices: an onboard Intel device and the sound device on the AMD video card. My monitor is connected by HDMI and has a soundbar drawing its signal from the HDMI cable. All I needed to do was,

  1. Open up the System Settings
  2. Click the Sound icon
  3. In the Output tab select the HDMI audio device, i.e. RV710/730 HDMI Audio [Radeon HD 4000 series] Digital Stereo (HDMI)

Update (Aug-2014): HDMI audio is disabled by default in the kernel/audio driver. To enable it,

  1. Edit /etc/default/grub and append “radeon.audio=1” to the GRUB_CMDLINE_LINUX_DEFAULT variable.
  2. Open a terminal and execute sudo update-grub
  3. Reboot

Source: No sound on HDMI with Radeon driver


Webcam

My monitor has a built-in webcam, an OmniVision Technologies, Inc. Monitor Webcam. I managed to get it working but it’s a bit hit and miss. When the uvcvideo kernel module is loaded it should create the device /dev/video0. Sometimes it does, sometimes it doesn’t. Getting it to work is a matter of removing the module and reloading it. After two or three attempts it usually works. Not very satisfying, but I don’t use it very often so I’m not that bothered.

$ sudo modprobe -r uvcvideo     # unload the module
$ sudo modprobe uvcvideo        # load the module

Once the module loads successfully and creates the /dev/video0 device you can use the Cheese application to test it.
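The remove/reload dance can be scripted so it retries automatically. This is just a sketch, assuming the device node is /dev/video0 and that you can sudo,

```shell
# retry loading uvcvideo until /dev/video0 appears, giving up after
# three attempts (matching my usual two-or-three-tries experience)
tries=0
until [ -e /dev/video0 ] || [ "$tries" -ge 3 ]; do
    sudo modprobe -r uvcvideo 2>/dev/null || true   # unload, ignoring "not loaded"
    sudo modprobe uvcvideo || true                  # load it again
    sleep 1                                         # give udev a moment to create the node
    tries=$((tries + 1))
done

if [ -e /dev/video0 ]; then
    echo "webcam device ready"
else
    echo "no /dev/video0 after $tries attempts" >&2
fi
```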


Microphone

The mic didn’t appear to be working initially, but as with audio output all I needed to do was select the right device and turn up the volume. I use the mic on my monitor (attached to the webcam).

  1. Open up the System Settings
  2. Click the Sound icon
  3. Select the Input tab
  4. Select the Monitor Webcam Analog Stereo device
  5. Turn the volume up to 100%

After getting the hardware working, these are the applications I installed.

Flash Player

The Flash Player package is available from contrib.

  1. Open up Synaptic Package Manager and select Settings, Repositories from the menu
  2. Tick the checkbox next to DFSG-compatible Software with Non-Free Dependencies (contrib)
  3. Click Close and then the Reload toolbar button
  4. Search for and install flashplugin-nonfree
  5. Restart your browser if it’s running

Reference: FlashPlayer on the Debian Wiki

Java Browser Plugin

Simply install the package icedtea-6-plugin.


gThumb

gThumb is a photo manager. I prefer it over the default, Shotwell, because it feels quicker and more lightweight. I already organise my photos in a year, month, day folder structure so I have no need for anything fancy.

While gThumb itself is very easy to install (the gthumb package), it was failing to properly list all the photos and videos in a directory containing one or more videos. The problem was that it couldn’t find the image it uses to surround videos (to give the film-reel effect). The fix is simple,

$ sudo ln -s /usr/share/gthumb/ui/filmholes.png /usr/share/filmholes.png

Reference: Redhat Bug 842183 – Gthumb breaks file list on movies

Android Tools

The Android Tools are used when developing Android applications. After installing the Android SDK I found adb wouldn’t work. When I ran it, it gave a “file not found” error. The problem is down to missing 32-bit shared libraries. Here are the steps I took to fix it,

$ sudo dpkg --add-architecture i386
$ sudo aptitude update
$ sudo aptitude install libstdc++6:i386 libgcc1:i386 zlib1g:i386 libncurses5:i386

Reference: Installing Android SDK tools on Debian Wheezy

IFTTT – An Alternative to Feed Aggregators

Since Google Reader was retired I’ve been on the lookout for something a bit different. I tried out a few services but none of them really fit in with my way of doing things. When I thought about it all I really wanted was a service that would send me an email when a new entry was posted to a blog I liked. Luckily that’s exactly the kind of thing IFTTT does.

IFTTT is a web service that allows you to create little “programs”. These programs, or recipes as they’re called, carry out an action following some trigger event. Each recipe has the same structure; “if this happens then do that”. The “this” piece is called the trigger. The “that” piece is called the action.

Example triggers,

  • I tweet a message
  • I update my Facebook status
  • I upload a photo to Instagram
  • A new entry is posted to a blog

Example actions,

  • Send me an SMS
  • Upload a photo to Facebook
  • Add an event to my Google Calendar
  • Send me an email

See About IFTTT for a more comprehensive description.

Actions and triggers are organized into channels, e.g. Facebook, Dropbox, Email, Feed (i.e. blog feed), etc.

Using the “New feed item” trigger in the Feed channel and the “Send me an email” action in the Email channel I put together a recipe for each blog I wanted to subscribe to. Here’s what my XKCD recipe looks like,

XKCD Recipe

The email subject has the blog name and entry title. For the body I chose to include a link to the entry rather than the full text. Here’s what an email from IFTTT looks like,

XKCD Monster

I’ve subscribed to a number of blogs this way. What I really like is getting email notifications pushed to me rather than having to use a blog aggregation tool. I use email as a sort of TODO list, so I can treat new blog posts as something I need to do. Once I have the email I can come back to it whenever I want.

As good as IFTTT is, there are some rough edges for my particular use case. I wouldn’t say they’re problems though, because IFTTT is meant to be a general service.

  1. If I want to subscribe to a new blog it’s a bit of a nuisance having to create a whole new recipe from scratch. It would be great if I could create a new recipe based on another one. That way all I’d need to change is the recipe’s name, the blog feed URL and the mail subject (which includes the blog title). The positive side of this is that it stops me subscribing to blogs willy-nilly. I prefer a few quality subscriptions.
  2. I’d like to be able to export my recipes. Why? I think it’s the programmer in me. Now I have all these recipes I feel like they should be under version control.
  3. An IFTTT API would be great. If there was an API to create a recipe I could create a browser extension to automatically create a “Subscribe to RSS” recipe while on a blog.

So that’s my experience. It works for me.

Using Lego Mindstorms on Ubuntu

When I was growing up I loved playing with Lego, mechanical construction sets, chemistry sets (with properly dangerous chemicals, kids), wood, metal, basically anything where I could build something with my hands. I even remember going through a phase of building lamp shades with ice-cream sticks. I don’t know how many dog houses and go-karts my brothers and I built. Thankfully I grew up in the Irish countryside where we had lots of space. It also helped that my father was a builder, so there was always lots of material and tools about the house he was happy to let us borrow (we didn’t like to bother him by asking him for any of it of course :))

I think it was all this building stuff that led me into a career in programming. Whether you’re using your mind or your hands, it’s the process of making something out of nothing I really love.

I miss using my hands, and so after hearing recently about Lego’s new EV3 platform I pulled out my old Mindstorms Robotics Invention System 2.0. This is a pretty old set but it’s still perfectly usable. It comes with two motors, two touch sensors, a light sensor, an RCX (the computer) and more bits of Lego than you can shake a stick at.

I run Ubuntu, so the Windows-based program that came in the kit would have to stay there (besides, the cover says it only supports Windows 98/ME, so what are the chances it’ll work on Windows 7?).

What I needed to do was get the infrared tower used to transmit programs to the RCX working. I wasn’t all that hopeful, but after plugging it in I was more than surprised to see a new device appear, /dev/usb/legousbtower0. Apparently the Lego USB tower kernel module has been part of the mainline Linux kernel for some time now. How cool is that?

Now I needed a programming language. A quick Google search and I came across Not Quite C (nqc). It has a C-like syntax but, given the restrictions of the RCX, it’s pretty straightforward. What’s even better is that it’s available from the Ubuntu repositories.

$ sudo apt-get install nqc

The next step was the trickiest: getting nqc to recognize the device. The first step was to give my user access to the /dev/usb/legousbtower0 device. This can be done with the command sudo chmod 666 /dev/usb/legousbtower0. For a more permanent solution create the file /etc/udev/rules.d/90-legotower.rules with the following contents,

SUBSYSTEM=="usb", ATTRS{idVendor}=="0694", ATTRS{idProduct}=="0001", MODE="0660", GROUP="<group>"

You can use any group you’re a member of in place of <group>. You can find a list of the groups you’re a member of using ‘id -a’. I used the group “adrian”. On a lot of systems there’ll be a group with the same name as your user id. This is fine so long as you’re the only one who’ll need access to the device. Otherwise you’ll need to find a common group or perhaps create a new one. By the way, the vendor and product ids in the udev rules file came from running lsusb.

The first time I tried to transfer a program to the RCX I got an error saying there was no firmware on the device. Apparently the firmware only persists for as long as the RCX has battery power. The nqc command can be used to install the firmware, so all I needed to do was locate the firmware file. A few sites talk about the file /firm/firm0309.lgo on the system CD. My CD has no such file. I knew the firmware had to be on there somewhere though. Eventually I found it inside an InstallShield archive on the CD. To extract the file install unshield and then expand the CAB file.

$ sudo apt-get install unshield
$ unshield -d /tmp/lego x <path to CD>/RIS2/<archive>.cab

The firmware file will be located at /tmp/lego/Script/Firmware/firm0328.lgo.

To install the firmware run,

$ nqc -Susb -firmware /tmp/lego/Script/Firmware/firm0328.lgo

Needless to say the USB tower has to be connected and pointing towards your powered on RCX. The transfer takes a few minutes to complete.

Finally, here’s how to transfer a simple program (source file prog1.nqc) to the RCX and have it run automatically,

$ nqc -Susb -d prog1.nqc -run

Now that everything is up and running I can get on with building something.


Ten Useful OpenStack Swift Features

CORS support

For security reasons Javascript running in a browser is not allowed to make requests to domains other than the one it came from. This is referred to as the Same Origin Policy. CORS is a specification that allows browsers and application servers to work out an agreement whereby these types of requests are allowed.

Swift 1.7.5 introduced CORS support. This means a Javascript application running in a browser and hosted outside of a Swift cluster can still query that cluster’s API. I expect to see lots of Swift-based applications over the coming months thanks to this great new feature.
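Concretely, CORS is enabled per container by setting an allowed origin on it, after which the browser’s preflight (OPTIONS) request should succeed. The origin, container name and storage URL below are all placeholders,

```
$ curl -X POST -H 'X-Auth-Token: xxx' -H 'X-Container-Meta-Access-Control-Allow-Origin: https://myapp.example.com' "<storage-url>/<container>"
```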

The ETag

Every object written to a Swift cluster has an ETag associated with it. The value of this ETag is the MD5 digest of the object’s contents. What makes this useful is that you can use it to make a conditional request for the object using the If-Match or If-None-Match headers.

For example, let’s say your Swift cluster contains the object “movie1.mp4”. You already have a version of the file locally but you’re not sure if it’s exactly the same. You don’t want to download it unnecessarily because it would take too long and/or might incur bandwidth charges. What you can do is make a conditional download request, i.e.

$ md5sum movie1.mp4
d41d8cd98f00b204e9800998ecf8427e  movie1.mp4

$ curl -i -H 'X-Auth-Token: xxx' -H 'If-None-Match: d41d8cd98f00b204e9800998ecf8427e' "<storage-url>/<container>/movie1.mp4"
HTTP/1.1 304 Not Modified

The 304 response code tells us that our local version is the same as the remote version. If they differed, the object would be downloaded.
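Scripting the conditional request is straightforward: compute the local digest and drop it into the header. A self-contained sketch (the file content, container and storage URL are stand-ins),

```shell
# stand-in file so the example is self-contained
printf 'hello' > movie1.mp4

# md5sum prints "<digest>  <filename>"; keep only the digest
etag=$(md5sum movie1.mp4 | awk '{print $1}')
echo "If-None-Match: $etag"    # → If-None-Match: 5d41402abc4b2a76b9719d911017c592

# the conditional request would then be (placeholder URL):
# curl -i -H 'X-Auth-Token: xxx' -H "If-None-Match: $etag" "<storage-url>/<container>/movie1.mp4"
```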

Object Versioning

With object versioning, each PUT request to an object will result in the existing object being archived to a special “versions” container.

Versioning is controlled at the container level by setting the header “X-Versions-Location” to the name of the container where you want to archive object versions.
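For example, to archive old versions of objects in a “documents” container to one called “versions” (container names and URL are placeholders), create the archive container and then point X-Versions-Location at it,

```
$ curl -X PUT -H 'X-Auth-Token: xxx' "<storage-url>/versions"
$ curl -X POST -H 'X-Auth-Token: xxx' -H 'X-Versions-Location: versions' "<storage-url>/documents"
```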

For more information see Object Versioning in the developer documentation.

Renaming or Moving an Object

Renaming or moving objects isn’t supported in the classic sense. However, the same result can be achieved by downloading the existing object and re-uploading it to the new container and/or with a new name. Of course, if the object is large this can be a time-consuming operation. Luckily Swift supports server-side copies, meaning the object is copied from its source to its destination entirely within the confines of the Swift cluster.

Either PUT or COPY can be used to perform a server side copy. Neither has an advantage over the other.

For example, given the container and object “photos/sunset.jpg”, here’s how to move it to “holiday-pics/sunset_glow.jpg”.

# a move is a server-side copy followed by deleting the source
$ curl -X PUT -H 'X-Auth-Token: xxx' -H 'X-Copy-From: /photos/sunset.jpg' "<storage-url>/holiday-pics/sunset_glow.jpg"
$ curl -X DELETE -H 'X-Auth-Token: xxx' "<storage-url>/photos/sunset.jpg"

See Copy Object for more details.

Expiring Objects

Objects can be given an expiry time. When that time is reached Swift will automatically delete the object. Objects are given an expiry time by setting either the X-Delete-At or X-Delete-After header.

The value given to X-Delete-At is a Unix epoch timestamp. There are many ways of converting a time to an epoch integer. For example, on UNIX you can run,

$ date +%s

The epoch converter website allows you to convert any time to an epoch and vice-versa.
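If you have GNU date to hand, the same conversion can be done locally in both directions, e.g.

```shell
# human-readable time to epoch seconds (GNU date)
date -u -d 'Sat, 19 Dec 2015 19:18:52 GMT' +%s    # → 1450552732

# ... and back again
date -u -d @1450552732                            # → Sat Dec 19 19:18:52 UTC 2015
```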

X-Delete-After is a convenience header allowing you to give the number of seconds from now when you want the object deleted. Swift will use this value to calculate an epoch time that number of seconds into the future and set the X-Delete-At header on the object.

# delete the object on Sat, 19 Dec 2015 19:18:52 GMT
$ curl -X POST -H 'X-Auth-Token: xxx' -H 'X-Delete-At: 1450552732' "<storage-url>/<container>/<object>"

# delete the object 24 hours from now, 24*60*60
$ curl -X POST -H 'X-Auth-Token: xxx' -H 'X-Delete-After: 86400' "<storage-url>/<container>/<object>"

See Expiring Object Support in the developer documentation for more information.

Segmented Objects

Swift has an object size limit of 5 GB. Larger objects must be split up on the client side and the segments uploaded individually. A manifest object is then used to create a logical object built up from the segments.

It’s not just large objects that can be segmented; any size of object can be broken up. Having said that, I can’t think of a good reason why you’d want to do this.

On UNIX the split command can be used to split an object into segments. There are two ways of using split: either give it the number of segments you want or the maximum size of each segment.

# split an object up into 5 segments
$ split -n 5 at_the_beach.mp4 at_the_beach.mp4-

# split an object up where each segment is at most 5GB
$ split -b 5G at_the_beach.mp4 at_the_beach.mp4-

In each case the result will be a number of files called at_the_beach.mp4-xx where xx is an alphabetically ordered sequence of characters, e.g. aa, ab, ac and so on. This ordering is important because when Swift rebuilds the object it sorts the segments by name before concatenating them. Each of these segments should be uploaded into the same container.
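Since Swift concatenates the segments in name order, it’s worth sanity-checking locally that they reassemble correctly before uploading. A quick demo with a dummy file,

```shell
# make a small dummy "large object" for the demo
head -c 1M /dev/urandom > demo.bin

# split it into 256K segments: demo.bin-aa, demo.bin-ab, ...
split -b 256K demo.bin demo.bin-

# concatenating the segments in name order should reproduce the original
cat demo.bin-* > reassembled.bin
cmp demo.bin reassembled.bin && echo "segments reassemble cleanly"
```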

The manifest object is an object with no content. Instead it has the header X-Object-Manifest. The value of this header is the container name and common prefix of the segments making up the object. Assuming the segments were uploaded into a container called ‘holiday-pics’ and the prefix was ‘at_the_beach.mp4-’ the header value would be ‘holiday-pics/at_the_beach.mp4-’.

$ curl -X PUT -H 'X-Auth-Token: xxx' -H 'X-Object-Manifest: holiday-pics/at_the_beach.mp4-' "<storage-url>/holiday-pics/at_the_beach.mp4"

A GET for the manifest object will return the reassembled source object. Swift will stream each segment in sequence so from the client side it will appear as one continuous object.

See Large Object Support in the developer documentation for more information.


Custom Metadata

Accounts, containers and objects can all have custom metadata headers associated with them. These headers are simple name/value pairs. Custom headers are distinguished from system headers by a prefix: X-Account-Meta-, X-Container-Meta- or X-Object-Meta-.

Headers can be set on an existing account, container or object using the POST method. Alternatively the headers can be set when the container or object is being created using PUT.

# Set a header on an existing object
$ curl -X POST -H 'X-Auth-Token: xxx' -H 'X-Object-Meta-Location: Dublin' "<storage-url>/<container>/<object>"

# Set a header on a container when creating it
$ curl -X PUT -H 'X-Auth-Token: xxx' -H 'X-Container-Meta-Year: 2012' "<storage-url>/<container>"

Metadata can be retrieved using the HEAD method.

$ curl -I -H 'X-Auth-Token: xxx' "<storage-url>/<container>"
HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 3
Accept-Ranges: bytes
X-Timestamp: 1355861803.81992
X-Container-Bytes-Used: 2
X-Container-Meta-Year: 2012
Content-Type: text/plain; charset=utf-8
Date: Wed, 19 Dec 2012 20:07:03 GMT


Container ACLs

Permissions in Swift are controlled at the container level. Only users classified as administrators can create containers. By default regular users can’t create or access containers.

Users can be given read or write permissions for all the objects in a container using the X-Container-Read and X-Container-Write headers.

# Give bob and alice read access to the holiday-pics container
$ curl -X POST -H 'X-Auth-Token: xxx' -H 'X-Container-Read: bob, alice' "<storage-url>/holiday-pics"

# Give bob write access to the holiday-pics container
$ curl -X POST -H 'X-Auth-Token: xxx' -H 'X-Container-Write: bob' "<storage-url>/holiday-pics"

For more information see the ACLs section of the developer docs.

Pseudo-Hierarchical Directories

While containers can be compared to regular filesystem directories, it’s not possible to nest them, and a container with thousands of objects can become extremely difficult to navigate and manage. With that in mind Swift supports a feature called pseudo-hierarchical directories. These are directory structures derived from the names of the objects themselves. For example, let’s say we have a container with the following six objects,

2010/001.jpg
2011/001.jpg
2011/002.jpg
2012/001.jpg
2012/002.jpg
2012/003.jpg

By using a delimiter character Swift can be asked to list the objects as if they were in a directory structure similar to,

|-- 2010
|   `-- 001.jpg
|-- 2011
|   |-- 001.jpg
|   `-- 002.jpg
`-- 2012
    |-- 001.jpg
    |-- 002.jpg
    `-- 003.jpg

For example,

# list the container's entire contents
$ curl -X GET -H 'X-Auth-Token: xxx' "<storage-url>/<container>"

# list the top-level "directories" using / as the delimiter
$ curl -X GET -H 'X-Auth-Token: xxx' "<storage-url>/<container>?delimiter=/"

# list the objects under 2011/
$ curl -X GET -H 'X-Auth-Token: xxx' "<storage-url>/<container>?prefix=2011/&delimiter=/"

For more information see Pseudo-Hierarchical Folders/Directories in the Developer Guide.

Swift All in One

Like anything, the best way to learn more about Swift is to play around with it. Luckily that’s relatively easy thanks to Swift All in One, a set of instructions describing how to set up a fully functional Swift cluster on a single machine (ideally a VM).