come on down to clug park and meet some geeks online

22 March 2017

Johann Botha (joe)

Quick Update

Pixies, La La Land and Star Trek uniforms…

  • Week of 13 to 19 March.
  • Health week: 4 day fast, usually I’d do 5 days, not smart fasting through a Pixies show.
  • Limited photos this week. Phone stolen. Some photos were lost. I borrowed a few of Paul’s photos.
  • Monday, fasting day, Headspace meditation, plumber came to unblock a pipe, work, gym for 300 Vitality points, catch up chat with Dirk and Dian, plumber returned and sorted out the pipe, nap, work, decluttered my apartment a bit, sunset walk, in bed early.
  • I’m thinking I should do a mini 3 day fast per month.
  • I managed to achieve all my weekly Vitality fitness goals last month, so the first 1/24 of the Apple Watch is free.
  • “First we know, then we learn.”

  • Tuesday, fasting day, up early, podcast drive to the gym, turns out the Nu at The Point gym serves bulletproof coffee – yum (yeah, I kinda ignore a tiny bit of fat in my fasting thinking), some weight training with a Guardians of the Galaxy soundtrack, some interval training swimming sprints, steam bath and cold showers, office – document writing day, catch up chat with Dan, green tea and coconut oil, I randomly walked past the VIP movies at the Waterfront and decided to watch La La Land – which was very entertaining, yet somehow hard to watch – the jazz theme is great.
  • “It’s dying, Mia. It’s dying on the vine, and the world says: let it die. It had its time.” — Seb, La La Land, about jazz. Touching moment, authentic, real belief draws you in.

  • “I care not for a man’s religion whose dog is not the better for it.” — Abraham Lincoln

  • Listen to this episode: Non-Ordinary States of Consciousness
  • Wednesday, fasting day, Headspace meditation on a rainy day, podcast drive to the gym, swim, office, attended the first Cape Town EdTech cluster meetup – not bad.
  • Seems I’m moving again soon.
  • Jacques and I are plotting a new challenge to lose 10kg by 21 April.
  • “There’s a podcast for that”, the new “there’s an app for that”.

  • Jacques has found a solution for people to stay at their target weight. We should all just wear Star Trek uniforms to work.
  • Thursday, Jaco gave me a tour of the farm he lives on, gym – had some eggs for breakfast – which tasted amazing after not eating for 4 days, office, smoothie with Heleen – who is going running in six countries for six weeks – and documenting it to create awareness of the global water crisis – this may just be the ultimate millennial job (-:, battled the traffic to get to the Southern Suburbs, chilled at a guesthouse in Bishopscourt with Paul, pre-drinks and a great selection of starters at Hudsons in Claremont, Uber ride to Kirstenbosch, watched The Springbok Nude Girls, watched Pixies, my phone disappeared – pickpocket – walking down the hill at the end of the concert, phoned it and it rang, but was soon switched off, tried Find My iPhone but no luck – seems they caught a thief with 11 phones, Forresters with Paul and Tara.
  • Note to self: only take cheap phones to concerts. :-/
  • The Nude Girls were rad. Pixies were good – music was very good, but it kinda felt like they were just going through the motions – limited interaction between band members, and between the band and the crowd, maybe a case of: don’t meet your heroes.
  • Cat typing… never gets old.
  • “Most people are under-slept and over-caffeinated.”

  • Friday, woke up in a very misty Sea Point, coffee with Heleen, office, fetched Mia, the usual biltong shopping, a glass of Chardonnay at Dirk’s house – while Mia watched V for Vendetta, Col’caccios and gelato with Mia, we watched Agora – movie about a geeky girl trying to save The Great Library of Alexandria while a bunch of Christians destroy everything – interesting perspective.
  • Saturday, brunch at Ya-Ya cafe with Mia – fed Mia her usual dose of about 24 vitamins, phone chat with Heleen before her flight, coconut vanilla orange blossom banting ice cream at Cold Gold with Mia, we sent off some invites for Mia’s birthday party, nap, swim, watched Kong in 3D with Mia at the Eikestad mall – very entertaining, followed by a totally lame Spur dinner.
  • Stay away from the Spur. It used to be okay quality at a good price. Now it’s just expensive and bad.
  • I think I did about 7 loads of laundry this week.
  • Sunday, brunch at Basic Bistro – Mia and I edited our list of Geeky Movies for Kids, swim and steam bath – Mia swam 1km with the Apple Watch – she seems to like the stats, took a drive out to the end of the Jonkershoek road, wine tasting at Stark-Conde, a visit to the Ride In – chats about the hockey season and interval training, watched the end of Agora with Mia – good movie, nap while Mia watched Hackers, snacks finding drive and chats about ideas for startups, we browsed some of Mia’s old photos, made drinks with big round ice balls, I showed Mia a Paul Graham video – Before the Startup, mostly because she could not find any other movies she wanted to watch.
  • Make things people want.
  • Note to self: spend more time browsing old photos.
  • I miss being able to make notes on my phone.
  • “Aggressively, we all defend the role we play. Regrettably, time’s come to send you on your way. We’ve seen it all, bonfires of trust, flash floods of pain. It doesn’t really matter don’t you worry it’ll all work out.” — Exitlude, The Killers (Tune of the week)

Have a fun week, crazy kids.

by joe at 22 March 2017 06:41 AM

21 March 2017

Johann Botha (joe)

Geeky Movies for Kids

I have this weird worry that kids today have a huge list of great movies to catch up on. So, I started making a list of classic movies for kids, age 10-16ish, geeky gems, iconic cultural reference stuff. Movie education. Things you would wish you saw before they were referenced in a Simpsons episode. If that makes sense. Suggestions welcome.

Obviously check age restrictions and filter to suit your moral framework.

Seen by genetic offspring #1…

Back to the Future series
Ferris Bueller's Day Off
Star Wars and Clone Wars series
Princess Bride
Indiana Jones series
War Games
2001: A Space Odyssey
Galaxy Quest
The Last Starfighter
Studio Ghibli films
Guardians of the Galaxy
The Count Of Monte Cristo
Flash Gordon
My Fair Lady
The Royal Tenenbaums
The Truman Show
Star Trek series, the reboot mostly
Honey I Shrunk the Kids
The Mask
Ace Ventura series
V for Vendetta
Men in Black
Harry Potter series
Chitty-Chitty Bang-Bang
Treasure Planet
Batman Returns

Find and watch…

Jurassic Park series
Groundhog Day
The Fifth Element
Forrest Gump
Neverending Story
Karate Kid
Donnie Darko
The Hitchhiker's Guide to the Galaxy
The Last Action Hero
Short Circuit
Edward Scissorhands
Titan A.E.
Close Encounters of the Third Kind
Corpse Bride
Night before Christmas
Secret of Nimh
The Bicycle Thief (1948)
The Dark Crystal
Teenage Mutant Ninja Turtles 1990
The Frighteners
Flight of the Navigator
Miracle on 34th Street
The Warriors
Minuscule: Valley of the Lost Ants
The Iron Giant
Monty Python and the Holy Grail
Dead Poets Society
Little Shop of Horrors
The Lost Boys
New Batman series
Napoleon Dynamite
Time Bandits

Maybe not just yet, but good.

Shaun of the Dead
Army of Darkness
Being John Malkovich
12 Monkeys
Alien series
Blade Runner
The Prestige
Johnny Mnemonic
Pitch Black

by joe at 21 March 2017 07:02 PM

12 March 2017

Johann Botha (joe)

Quick Update

Woordfees, Street Soiree, bone marrow and Chardonnay…

  • Monday, school run, catch up chat with Dirk at Stellenbosch Square, gym, podcast drive to the office, a salad at Sumo, VitB shot, supplement shopping like a gym nerd, stayed in the office fairly late, lifesaving photo processing.
  • Tuesday, gym, office, stayed at the office till well after it was dark, weekly blogging, found a note on my car windscreen.
  • I ran out of podcasts to listen to. Eek. Found a good show: Recode Decode.
  • Wednesday, Headspace meditation, work, gym, work, ginger shot and a chat with Dirk – I think we can all agree that meeting at the gym is distracting, Stellenbosch Street Soiree with Paul and Hanno – good fun – not too busy – nice weather, a burger at Steam (5 Ryneveld) – urgh, this whole “steampunk” thing is super lame, but the food was nice.
  • Thursday, up at 4:00, podcast drive to gym, office, installed a dating app and deleted it again the next day, found some new slops – walked right through the previous pair, sunset promenade walk.
  • Friday, podcast drive to gym, office, dev meeting, late afternoon visit to Dirk’s house, aKing Woordfees show with Paul, Al, Margot, Laudo, Dawid and Gelika, Balboa, ended up at Al’s house with Paul and Elmi.
  • Safe as Houses – great song.
  • Seems Paul has some history with aKing romance lyrics.
  • I suspect not having WhatsApp is some form of modern contraceptive.
  • Saturday, lunch at Eikendal with Paul, the bone marrow and snails starter with their Chardonnay was awesome, a nap turned into a 16 hour sleep.
  • Hmmm. Did I mention the bone marrow and Chardonnay combo.
  • You should make broccoli sprouts.
  • Sunday, watched The Legend of Kaspar Hauser – rather strange movie, lunch at Trumpet, nap, watched The Blues Brothers – one of those movies that’s really hard to believe that I had not seen it yet, took a walk up to the chain bridge, caught the last Woordfees show at Bloekomhoek with some of Gelika’s tasty lamb tacos, some photo processing and blogging while listening to a mix of Woordfees acts with a Newlands Spring Mountain Weiss, watched Blue Valentine – intense and sad.
  • Tune of the week: Vitalic – Poison Lips
  • I found a desktop app which lets me post Instagram photos. Now I can use Instagram again. I have not posted anything in about 3 weeks. Check out Flume. 500px was amusing for a while, but it’s not very social.
  • Best Sunday music: You’re Gonna Make Me Lonesome When You Go – Madeleine Peyroux

Have a fun week, crazy kids.

by joe at 12 March 2017 08:20 PM

08 November 2016

Jonathan Carter (highvoltage)

A few impressions of DebConf 16 in Cape Town


DebConf16 Group Photo by Jurie Senekal.


Firstly, thanks to everyone who came out and added their own uniqueness and expertise to the pool. The feedback received so far has been very positive and I feel that the few problems we did experience were dealt with very efficiently. Having a DebConf in your hometown is a great experience, consider a bid for hosting a DebConf in your city!

DebConf16 Open Festival (5 August)

The Open Festival (usually Debian Open Day) turned out pretty good. It was a collection of talks, a job fair, and some demos of what can be done with Debian. I particularly liked Hetzner’s stand. I got to show off some 20+ year old Super Mario skills and they had some fun brain teasers as well. It’s really great to see a job stand that’s so interactive and I think many companies can learn from them.

The demo that probably drew the most attention was from my friend Georg who demoed some LulzBot Mini 3D Printers. They really seem to love Debian which is great!

DebConf (6 August to 12 August)

If I try to write up all my thoughts and feelings about DC16, I’ll never get this post finished. Instead, here are some tweets from DebConf that others have written:





Day Trip

We had 3 day trips:

Brought to you by


DebConf16 Orga Team.

See you in Montréal!

DebConf17 dates:

  • DebCamp:  31 July to 4 August 2017
  • DebConf: 6 August to 12 August 2017
  • More details on the DebConf Wiki.

The DC17 sponsorship brochure contains a good deal of information, please share it with anyone who might be interested in sponsoring DebConf!


by jonathan at 08 November 2016 08:01 PM

25 September 2016

Michael Gorven (cocooncrash)

XBMC for Raspberry Pi

This page describes how to install XBMC on a Raspberry Pi running Raspbian. You can either install packages on an existing Raspbian installation, or you can download a prebuilt image and flash it to an SD card.

Installing packages on an existing installation

I've published a Debian archive containing packages for Kodi/XBMC and some dependencies which it requires. This can be set up on an existing Raspbian installation (including the foundation image).


The easiest way to install the package is to add my archive to your system. To do this, store the following in /etc/apt/sources.list.d/mene.list:

deb wheezy contrib

and import the archive signing key:

sudo apt-key adv --keyserver --recv-key 5243CDED

Then update the package lists:

sudo apt-get update

You can then install it as you would with any other package, for example, with apt-get:

sudo apt-get install kodi

The user which you're going to run Kodi as needs to be a member of the following groups:

audio video input dialout plugdev tty
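Assuming you run Kodi as the default `pi` user (your username may differ), the memberships above can be added in one go with `usermod`. A minimal sketch:

```shell
# Append the pi user to the groups Kodi needs. The -a flag is
# important: without it, usermod replaces the user's existing
# supplementary groups instead of adding to them.
sudo usermod -aG audio,video,input,dialout,plugdev,tty pi

# Verify the memberships (a new login is needed before they
# take effect in your session).
groups pi
```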

If the input group doesn't exist, you need to create it:

addgroup --system input

and set up some udev rules to grant it ownership of input devices (otherwise the keyboard won't work in Kodi), by placing the following in /etc/udev/rules.d/99-input.rules:

SUBSYSTEM=="input", GROUP="input", MODE="0660"
KERNEL=="tty[0-9]*", GROUP="tty", MODE="0660"
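After saving the rules file, the new rules can be applied without rebooting. A sketch, assuming a standard udev setup:

```shell
# Reload the udev rules from disk, then re-trigger events for
# input devices so existing device nodes get the new group
# ownership immediately.
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=input
```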

The GPU needs at least 96M of RAM in order for XBMC to run. To configure this add or change this line in /boot/config.txt:


You will need to reboot if you changed this value.


To run XBMC, run kodi-standalone from a VT (i.e. not under X). XBMC accesses the display directly and not via Xorg.

If you want Kodi to automatically start when the system boots, edit /etc/default/kodi and change ENABLED to 1:

ENABLED=1
Run sudo service kodi start to test this.

Release history

  • 15.2-2: Isengard 15.2 release, and most PVR addons.
  • 14.2-1: Helix 14.2 release.
  • 14.1-1: Helix 14.1 release.
  • 14.0-1: Helix 14.0 release.
  • 13.1-2: Link to libshairplay for better AirPlay support.
  • 13.1-1: Gotham 13.1 release.
  • 12.3-1: Frodo 12.3 release.
  • 12.2-1: Frodo 12.2 release.
  • 12.1-1: Frodo 12.1 release. Requires newer libcec (also in my archive).
  • 12.0-1: Frodo 12.0 release. This build requires newer firmware than the archive or image contains. Either install the packages from the untested archive, the twolife archive or use rpi-update. (Not necessary as of 2013/02/11.)

Flashing an SD card with a prebuilt image

I've built an image containing a Raspbian system with the XBMC packages which you can download and flash to an SD card. You'll need a 1G SD card (which will be completely wiped).


Decompress the image using unxz:

% unxz raspbian-xbmc-20121029.img.xz

And then copy the image to the SD card device (make sure that you pick the correct device name!)

% sudo cp xbmc-20121029-1.img /dev/sdb
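If you're not sure which device node is the SD card, it's worth double-checking before writing anything, since flashing the wrong device destroys its contents. As a sketch (the `/dev/sdb` name is just an example, and `status=progress` needs a reasonably recent coreutils), `dd` is a common alternative to plain `cp`:

```shell
# List block devices and identify the SD card by its size.
lsblk

# Write the image (equivalent to cp here, but with progress
# output), then flush buffers so the card is safe to remove.
sudo dd if=xbmc-20121029-1.img of=/dev/sdb bs=4M status=progress
sync
```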


The image uses the same credentials as the foundation image, username "pi" and password "raspberry". You can use the raspi-config tool to expand the root filesystem, enable overclocking, and various other configuration tasks.


Both Raspbian and Kodi can be updated using normal Debian mechanisms such as apt-get:

# sudo apt-get update
# sudo apt-get dist-upgrade

Release history

Unstable versions

I've started building packages for the upcoming Jarvis release. These are in the new unstable section of the archive. To install these packages update your source list to look like this:

deb wheezy contrib unstable

Release history

  • 16.1-1: Jarvis 16.1
  • 16.0-1: Jarvis 16.0
  • 16.0~git20151213.a724f29-1: Jarvis 16.0 Beta 4
  • 15.2-2: Isengard 15.2 with packaging changes to support PVR addons, and most PVR addons.
  • 15.2-1: Isengard 15.2
  • 15.1-1: Isengard 15.1
  • 15.0-1: Isengard 15.0
  • 15.0~git20150702.9ff25f8-1: Isengard 15.0 RC 1.
  • 15.0~git20150501.d1a2c33-1: Isengard 15.0 Beta 1.
  • 14.2-1: Helix 14.2 release.
  • 14.1-1: Helix 14.1 release.
  • 14.0-1: Helix 14.0 release.
  • 14.0~git20141203.35b4f38-1: Helix 14.0 RC 2
  • 14.0~git20141130.ea20b83-1: Helix 14.0 RC 1
  • 14.0~git20141125.4465fbf-1: Helix 14.0 Beta 5
  • 14.0~git20141124.ec361ca-1: Helix 14.0 Beta 4
  • 14.0~git20141116.88a9a44-1: Helix 14.0 Beta 3
  • 14.0~git20141103.d6947be-1: Helix 14.0 Beta 1. This requires firmware as of 2014/10/06 and libcec 2.2.0 (both included in the archive). There are also builds for Jessie but I haven't tested them. PVR addons are also updated.
  • 14.0~git20141002.d2a4ee9-1: Helix 14.0 Alpha 4

by mgorven at 25 September 2016 04:54 AM

02 July 2016

Jonathan Carter (highvoltage)

So, that was DebCamp16!


University of Cape Town, host location to DebConf 16. Picture: Adrian Frith

What an amazingly quick week that was

Our bid to host DebConf in Cape Town was accepted nearly 15 months ago. And before that, the bid itself was a big collective effort from our team. So it’s almost surreal that the first half of the two weeks of DebCamp/DebConf is now over.

Things have been going really well. The few problems we’ve had so far were too small to even mention. It’s a few degrees colder than it usually is this time of the year and there’s already snow on the mountains, so Cape Town is currently quite chilly.


Gathering some heat by the fire at the sports club while catching up to the world the day before it all started.

All Kinds of Quality Time

I really enjoyed working with the video team last year, but this year there was just 0 time for that. Working on the orga team means dealing with a constant torrent of small tasks, which is good in its own way because you get to deal with a wide variety of Debian people you might not usually get to interact with, but video team problems are more fun and interesting. Next year I hope to do a lot more video work again. If you’re at DebConf over the next week, I can highly recommend that you get involved!


Video team members hacking away at problems late at night during DebCamp

The first time I met Debian folk was early in 2004. I worked at the Shuttleworth Foundation as “Open Source Technical Co-ordinator” at the time, and Mark Shuttleworth had them over for one of the early Ubuntu sprints in Canonical’s early days. I was so intimidated by them back then that I could hardly even manage to speak to them. I was already a big free software fan before working there, but little did I even dream that I would one day be involved in a project like Ubuntu or Debian. My manager back then encouraged me to go talk to them and get involved and become a Debian Developer, and joked that I should become one. I guess that was when the initial seeds got planted, and since then I’ve met many great people all over the world who have even become friends during UDS, DebConf, BTS and other hackfests where Debianites hang out. It gave me a really nice warm feeling to have all these amazing, talented and really friendly people from all over coming together in this little corner of the world to work together on projects that I think are really important.

Finding a warm space to work in the Happy Feet hack lab


Oh the Chicken

Back at DebConf12, someone (I don’t remember the exact history) brought a rubber chicken to DebConf that was simply called “Pollito” (“chicken” in Spanish). Since then the chicken has grown into somewhat of a mascot for DebConf. Back in 2012 I already imagined that if we would ever host a DebConf, I’d make a little picture book story about Pollito. Last year after DC15, when bringing Pollito over, I created a little story called “Pollito’s first trip to Africa“. I was recovering from flu while putting that together and didn’t spend much time on it, but it turned out to be somewhat of a hit, and I was surprised to see it stay in the #debconf topic ever since I posted it :)

We gave a tour of the campus on the first and second days, but it was quite time consuming and there was no way we could do it every day for the rest of DebCamp, so on the 2nd day I smacked together a new rush job called “Pollito’s Guide to DC16“. The idea was that newcomers could use it as a visual guide and rely on others who have been there for a while if they get stuck. I wish I had the time to make it a lot nicer, but I think the general idea is good and next year we can have a much nicer one that might not be quite as Pollito focused.

Pollito's Guide to DC16


Debian Maintainer

After all these years, I finally sat down and applied to become a Debian Maintainer, and the application was successful (approved yesterday \o/). Now just for the wait until my key is uploaded to the keyring. I haven’t yet had time to properly process this but I think once the DebConf dust settles and I had some time to recover, I will be ecstatic.

Some actual DebCampy work

Everywhere I go, I see people installing a bunch of GNOME extensions on their Debian GNOME desktops shortly after installation (I noticed this even at DebCamp!). A few months ago I thought that it’s really about time someone packaged up some of the really popular ones. So I started to put together some basic packaging for AIMS Desktop around a month ago. During the last few days of DebCamp, things were going well enough with the organisational tasks that I had some time to do some actual packaging work and improve these so that they’re ready for upload to the Debian archives. The little DebCamp time I had ended up being my very own little extension fest :)

I worked on the following packages which are ready for upload:

  • gnome-shell-extension-pixelsaver
  • gnome-shell-extension-move-clock
  • gnome-shell-extension-dashtodock
  • gnome-shell-extension-remove-dropdown-arrows

I worked on the following packages which still need minor work, might be able to get them in uploadable state by the end of DebConf:

  • gnome-shell-extension-trash-applet
  • gnome-shell-extension-topicons
  • gnome-shell-extension-taskbar
  • gnome-shell-extension-refresh-wifi
  • gnome-shell-extension-disconnect-wifi
  • gnome-shell-extension-hide-activities
  • gnome-shell-extension-impatience

Packaging GNOME extensions is actually pretty trivial. It’s mostly source-only JavaScript with some CSS and translations and maybe some gsettings schemas and dialogs. Or at least, it would be pretty trivial, but many extensions are without licenses, contain embedded code (often JavaScript) from other projects, or have no usable form of upstream tarball, to name a few of the problems. So I’ve been contacting the upstream authors of these packages where there have been problems, and for the most part they’ve been friendly and pretty quick to address the problems.

So that’s it, for now.

I couldn’t possibly sum up the last week and everything that lead up to it in a single blog post. All I can really say is thank you for letting me be a part of this very special project!

by jonathan at 02 July 2016 08:57 PM

25 March 2016

Jonathan Carter (highvoltage)

DebConf 16 Updates

Registration now open

DebConf 16 is taking place in Cape Town, South Africa. For more information read the registration opening announcement.

Quick summary

  • DebCamp: 2016-06-23 to 2016-07-01
  • DebConf: 2016-07-02 to 2016-07-09
  • Open Weekend is taking place during the cross-over from DebCamp to DebConf.
  • To sign up, go to for the latest information which includes an FAQ and direct sign-up link.
  • If you’re requesting sponsorship, you must be registered by 2016-04-10.

Call for talk proposals open

If you have a proposal for a session at DC16, please read the full call for proposals announcement.

Quick info

  • Events aren’t limited to talks or BoF sessions, we welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be beneficial to the Debian community.
  • Submit a talk at
  • First batch of talks/events will be announced during April, cut-off is on 1 May.

Video team

If you’re on the video team, or are planning to get involved then you might want to join the video team mailing list and #debconf-video IRC channel on oftc. Meetings are kicking off soon, if you’re planning to join an IRC meeting, you can mark your availability on Framadate.

Sponsoring DebConf 16

We have some great sponsors for DebConf 16 already, but sadly we don’t have any local sponsors yet. Sponsorship is open to all, but if you know of a South African company in particular who might be interested, then feel free to send them a copy of the DebConf 16 Sponsorship Brochure which contains all the options and how to get in touch with the sponsorship team.

Sponsoring DebConf has never been more accessible to South African companies so if you’d like to get your company name on the Debian map and get some great exposure, then this is the perfect opportunity!

by jonathan at 25 March 2016 02:54 PM

06 January 2016

Michael Gorven (cocooncrash)

Memory optimised decompressor for Pebble Classic

TLDR: DEFLATE decompressor in 3K of RAM

For a Pebble app I've been writing, I need to send images from the phone to the watch and cache them in persistent storage on the watch. Since the persistent storage is very limited (and the Bluetooth connection is relatively slow) I need these to be as small as possible, and so my original plan was to use the PNG format and gbitmap_create_from_png_data(). However, I discovered that this function is not supported on the earlier firmware used by the Pebble Classic. Since PNGs are essentially DEFLATE compressed bitmaps, my next approach was to manually compress the bitmap data. This meant that I needed a decompressor implementation ("inflater") on the watch.
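The compressing side (on the phone, or wherever the images are prepared) just needs to produce a zlib/DEFLATE stream for the watch to inflate. As a quick sketch of that step, assuming the raw bitmap bytes live in a file called `image.raw` (a hypothetical name), a shell one-liner using Python's zlib module will do:

```shell
# Compress raw bitmap data into a zlib-wrapped DEFLATE stream at
# maximum compression level. image.raw / image.zz are example names.
python3 -c 'import sys, zlib; sys.stdout.buffer.write(zlib.compress(sys.stdin.buffer.read(), 9))' \
    < image.raw > image.zz
```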

The constraint

The major constraint for Pebble watchapps is memory. On Pebble Classic apps have 24K of RAM available for the compiled code (.text), global and static variables (.data and .bss) and heap (malloc()). There is an additional 2K for the stack (local variables). The decompressor implementation needed to have both small code size and variable usage. I discovered tinf which seemed to fit the bill, and tried to get it working.

Initially, trying to decompress something simply crashed the app. It took some debug prints to determine that code in tinf_uncompress() wasn't even being executed, and I realised that it was exceeding the 2K stack limit. I changed the TINF_DATA struct to be allocated on the heap to get past this. At this stage it was using 1.2K of .text, 1.4K of .bss, 1K of stack, and 1.2K of heap (total 4.8K). I set about optimising the implementation for memory usage.

Huffman trees

Huffman coding is a method to represent frequently used symbols with fewer bits. It uses a tree (otherwise referred to as a dictionary) to convert symbols to bits and vice versa. DEFLATE can use Huffman coding in two modes: dynamic and fixed. In dynamic mode, the compressor constructs an optimal tree based on the data being compressed. This results in the smallest representation of the actual input data; however, it has to include the computed tree in the output in order for a decompressor to know how to decode the data. In some cases the space used to serialise the tree negates the improvement in the input representation. In that case the compressor can use fixed mode, where it uses a static tree defined by the DEFLATE spec. Since the decompressor knows what this static tree is, it doesn't need to be serialised in the output.

The original tinf implementation builds this fixed tree in tinf_init() and caches it in global variables. Whenever it encounters a block using the fixed tree it has the tree immediately available. This makes sense when you have memory to spare, but in this case we can make another tradeoff. Instead we can store the fixed tree in the same space used for the dynamic tree, and rebuild it every time it is needed. This saves 1.2K of .bss at the expense of some additional CPU usage.

The dynamic trees are themselves serialised using Huffman encoding (yo dawg). tinf_decode_trees() needs to first build the code tree used to deserialise the dynamic tree, which the original implementation loads into a local variable on the stack. There is an intermediate step between the code tree and dynamic tree however (the bit length array), and so we can borrow the space for the dynamic tree instead of using a new local variable. This saves 0.6K of stack.

The result

With the stack saving I was able to move the heap allocation back to the stack. (Since the stack memory can't be used for anything else it's kind of free because it allows the non-stack memory to be used for something else.) The end result is 1.2K of .text, 0.2K of .bss and 1.6K of stack (total 3.0K), with only 1.4K counting against the 24K limit. That stack usage is pretty tight though (trying to use app_log() inside tinf causes a crash) and is going to depend on the caller using limited stack. My modified implementation will allocate 1.2K on the heap by default, unless you define TINF_NO_MALLOC. Using zlib or gzip adds 0.4K of .text. You can find the code on bitbucket.

by mgorven at 06 January 2016 06:28 AM

31 October 2015

Adrian Frith (htonl)

Historical topographic maps of Cape Town

I’ve made an interactive website with six sets of topographic maps of Cape Town and surrounds covering the period from 1940 to 2010. You can zoom in and move around the maps, switching from one era to another.

31 October 2015 11:00 PM

03 October 2015

Simon Cross (Hodgestar)

Where to from here?

Closing speech at the end of PyConZA 2015.

We’ve reached the end of another PyConZA and I’ve found myself wondering: Where to from here? Conferences generate good ideas, but it’s so easy for daily life to intrude and for ideas to fade and eventually be lost.

We’ve heard about many good things and many bad things during the conference. I’m going to focus on the bad for a moment.

We’ve heard about imposter syndrome, about a need for more diversity, about Django’s flaws as a web framework, about Python’s lack of good concurrency solutions when data needs to be shared, about how much civic information is locked up in scanned PDFs, about how many scientists need to be taught coding, about the difficulty of importing CSV files, about cars being stolen in Johannesburg.

The world is full of things that need fixing.

Do we care enough to fix them?

Ten years ago I’d never coded Python professionally. I’d never been to a Python software conference, or even a user group meeting.

But, I got a bit lucky and took a job at which there were a few pretty good Python developers and some time to spend learning things.

I worked through the Python tutorial. All of it. Then a few years later I worked through all of it again. I read the Python Quick Reference. All of it. It wasn’t that quick.

I started work on a personal Python project. With a friend. I’m still working on it. At first it just read text files into a database. Slowly, it grew a UI. And then DSLs and programmatically generated SQL queries with tens of joins. Then a tiny HTML rendering engine. It’s not finished. We haven’t even released version 1.0. I’m quietly proud of it.

I wrote some games. With friends. The first one was terrible. We knew nothing. But it was about chickens. The second was better. For the third we bit off more than we could chew. The fourth was pretty awesome. The fifth wasn’t too bad.

I changed jobs. I re-learned Java. I changed again and learned Javascript. I thought I was smart enough to handle threading and tons of mutable state. I was wrong. I learned Twisted. I couldn’t figure out what deferreds did. I wrote my own deferred class. Then I threw it away.

I asked the PSF for money to port a library to Python 3. They said yes. The money was enough to pay for pizza. But it was exciting anyway.

We ported another library to Python 3. This one was harder. We fixed bugs in Python. That was hard too. Our patches were accepted. Slowly. Very slowly. In one case, it took three years.

Someone suggested I run PyConZA. I had no idea how little I knew about running conferences, so I said yes. I asked the PSF for permission. They didn’t know how little I knew either, so they said yes too. Luckily, I got guidance and support from people who did. None of them were developers. Somehow, it worked. I suspect mostly because everyone was so excited.

We got amazing international speakers, but the best talk was by a local developer who wasn’t convinced his talk would interest anyone.

I ran PyConZA three more times, because I wasn’t sure how to hand it over to others and I didn’t want it to not happen.

This is my Python journey so far.

All of you are at different places in your own journeys, and if I were to share some advice from mine, it might go as follows:

  • Find people you can learn from
    • … and make time to learn yourself
  • Take the time to master the basics
    • … so few people do
  • Start a project
    • … with a friend(s)
  • Keep learning new things
    • … even if they’re not Python
  • Failure is not going to go away
    • … keep building things anyway
  • Don’t be scared to ask for money
    • … or for support
    • … even from people who aren’t developers
  • Sometimes amazing things come from one clueless person saying, “How hard can it be?”
  • Often success relies mostly on how excited other people are
  • Stuff doesn’t stop being hard
    • … so you’re going to have to care
    • … although what you care about might surprise you.

Who can say where in this complicated journey I changed from novice, to apprentice, to developer, to senior developer? Up close, it’s just a blur of coding and relationships. Of building and learning, and of success and failure.

We are all unique imposters on our separate journeys — our paths not directly comparable — and often wisdom seems largely about shutting up when one doesn’t know the answer.

If all of the broken things are going to get fixed — from diversity to removing the GIL — it’s us that will have to fix them, and it seems unlikely that anyone is going to give us permission or declare us worthy.

Go build something.

by admin at 03 October 2015 03:05 PM

31 August 2015

Christel Breedt (Pirogoeth)


Triggers: Transgender dysphoria/suicide.

She goes deathly calm. Then, in an almost imperceptible monotone, she speaks. "I'll take a rope from the cupboard..."

She uses the phrases "it hurts" and "I can't go on" so many times I lose track. Her eyes beg me to tell her it's OK to die. I can't bring myself to accept that, but I feel cruel and selfish for it.

"All I can do is keep going, fueled by the desperation of ending this pain. It's like a sharp ache...that never dulls."

She looks at me for minutes on end, never breaking eye contact. Tears roll silently down her face, too dignified for snot or puffy eyes they just leak out. I hold her hands and she crushes them but says "Don't touch me" when I try to hug her. I take my chances and rest an arm over her anyway. I wipe the tears from her face with slow precise exacting gentleness.

"I have to go on. I take care of you because I must. I work, when I can find a breath in the agony, on only those things that might save me.

But we both know nothing can save me. We can't get surgery. Nobody will pay for it."

Her face forms shapes I've never seen. I hold myself in stasis, gently smiling and holding her eye contact as my eyes brim. I hold this space for her, even though my mind is losing its shit with horror and fear.

Her eyes are dull. Vacant. She is everywhere in the eyes of torture victims I've seen in photographs - the ones who lived.

We've had a quarter bottle of jack just to get her defenses down this far.

She's been alone so long, before I came, with nobody to care how she feels, that she has no idea how to tell me what she's feeling. I say "It's like a story".

At the end, both of us fading into sleep, we just stare at each other, filled with quiet desperation, like lovers in a car crash saying goodbye because you already know one of you might not make it to morning if the other can't stay awake and find help. I must find help. But where? She is a nameless face in a crowd. The surplus people.

I'm scared to sleep. If I hide the rope... who am I kidding? She built hydrogen gas bombs as a kid in the well on their farm. How can I stop her? I must live with what she lives with.

Is today the end of my willpower?

Is today my last moment?
There is no help. We've been through therapy and drugs. Now it is surgery or death.
I have always known this.
I've known this for two years - she was dying.
Every moment was cherished.
I may only have so few.

I have always accepted she may not make it. 45% suicide rate.

That statistic haunts me like a taunting spectre when we make plans to go to South America one day, or climb a mountain when I've mastered my physio.


She will not be alone when this suffering is all she knows.

She will not be alone at the end, after a life like hers.

by Whizper at 31 August 2015 11:51 AM

16 August 2015

Simon Cross (Hodgestar)

So what is this roleplaying thing anyway?

I ran a roleplaying module [1] for some friends from work and, after initially neglecting to explain what roleplaying is, I wrote this:

Roleplaying is a form of collaborative storytelling — a group of people gathering to tell a story together. This broad definition covers quite a range of things — one can tell very different kinds of stories and collaborate in very different ways.

What I’m planning to run is called “tabletop roleplaying” [2]. The stories told centre around a group of characters (the protagonists in a movie). Each person playing is in charge of one of these main characters, except for one person who has no character and instead handles everything that isn’t one of the main characters (they are a bit like the director of a movie).

Tabletop roleplaying is a little like a radio drama — almost everything is done by narrating or speaking in character. You’ll be saying things you want your character to say and describing the actions you want your character to take. Light acting, such as putting on accents or changing tone of voice or changing posture, can be quite fun, but is by no means a requirement.

The “director”, also called the “storyteller” or “DM” [3], describes the situations the main characters find themselves in, decides on the consequences of their actions and takes on the role of minor supporting characters and antagonists (often villains, because they’re exciting). The storyteller is also an arbitrator and a facilitator and attempts to maintain consensus and suitable pacing of the story.

Often you’ll want to have your character attempt an action that might not succeed [4]. For example, they might want to shoot a villain, charm their way past a bouncer, pick a lock or run across a burning bridge. In some cases success will be more likely than others. A character who is good looking or persuasive might find charming the bouncer easy. A character who has never picked up a gun might find shooting the villain hard.

The set of rules used to determine success or failure is called “the system”. The rules might be as simple as “flip a coin” or they might take up a whole book [5]. Dice are a commonly used way of randomly determining results with high numbers typically indicating more successful outcomes and lower numbers less successful ones.
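The dice mechanic described above is easy to sketch in a few lines of Python. This is purely a hypothetical illustration — the function name and the numbers are made up for this post, not taken from any published rulebook:

```python
import random

def skill_check(modifier, difficulty):
    """Roll a twenty-sided die, add the character's modifier,
    and succeed if the total meets the task's difficulty."""
    roll = random.randint(1, 20)
    return roll + modifier >= difficulty

# A persuasive character (modifier +5) charming a bouncer
# (difficulty 10) succeeds far more often than a character
# who has never picked up a gun (modifier +0) shooting a
# villain (difficulty 18).
charmed_past_bouncer = skill_check(5, 10)
```

The modifier is how the system models a character being good (or bad) at something, and higher rolls mean better outcomes, exactly as described above.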

Since the real world is very very complicated, the rules usually model different aspects of it in varying degrees of detail and this often sets the tone of the story to some extent. For example, a system for telling stories about bank robbers in the 1920s might have very detailed rules on vault locks, while a system for telling fantasy stories will likely have special rules for elves and dwarves.

All systems have shortcomings, and when these are encountered it’s usually the storyteller’s job to apply common sense and tweak the outcome accordingly.

The system I’m planning to use is an extremely simplified version of Dungeons & Dragons, 3rd Edition. The full rules run to many books. I’m hoping to explain the few rules we’ll be using in 10-15 minutes.

The story I’m planning to run focuses on a down-on-their-luck rock band about to enter a battle of the bands contest. The twist is that it’s a fantasy setting so there are elves and dwarves and, of course, in the hands of suitably skilled musicians, music is literally magical.

Some practical considerations

Someone has to supply the table to sit around. This is the person hosting the game. Ke can be a player or the storyteller or even uninvolved [6]. Traditionally the host also supplies a kettle and tea or coffee.

Everyone needs to show up and sit at the table, preferably roughly at the same time. This is surprisingly hard.

In order to remain at the table for prolonged periods, one needs things to nibble on. Traditionally people who are not the host bring snacks and drinks of various kinds. Roleplayers seem to take a perverse delight in bringing the unhealthiest snacks they can find, but this is perhaps a tradition best improved on.

Remembering the details of the story can be tricky, so it’s often useful to scribble short notes for oneself. A pen and paper come in handy.

I’ll give each player a couple of pages of information about their character. These are called the “character sheet”. The primary role of the character sheet is to supply margins to scribble notes and doodles in (see above).

It’s likely that time will fly remarkably quickly. If there are six of us, each person will get on average less than ten minutes of “screen time” per hour and probably a lot less given that the storyteller usually uses more than their fair share and there are always distractions and side tracks like discussing the rules or office gossip [7]. If we run out of time, we can always continue the story another day if we’re excited enough.

Lastly, the point is to have fun and tell an interesting story [8].


Host: Person who supplies the table, and usually warm beverages like tea and coffee.

Table: Thing one plays at.

Storyteller: The person managing the world the story takes place in, deciding the consequences of players' actions, and playing the minor characters and antagonists.

Players: The people who are not the storyteller.

Player character: One of the protagonists of the story. Each player has their own player character to narrate.

NPC: Non-player character. All the characters in the story who are not player characters.

RPG: Roleplaying Game. Also rocket-propelled grenade.

System: The rules used to determine the outcomes of risky actions.

Dice: Things one rolls. Usually because one is required to do so by the rules, but often just for fun.

Fun: The point. :)


[1] The module was This is Vörpal Mace. If you’re keen to play it, you can download it from Locustforge.

[2] So called because it usually takes place around a table.

[3] “DM” stands for “Dungeon Master” and is a silly legacy term from the earliest tabletop roleplaying games which mostly focused on a group of heroes running around vast dungeons full of traps and monsters. The storyteller’s role was mostly to invent traps and monsters, hence the title.

[4] Because otherwise the story would be very boring. :)

[5] A whole book is far more common. :P

[6] Although letting six people invade your house for an evening for an activity you’re not involved in requires a special kind of friendship.

[7] One can avoid this time-divided-by-number-of-people limit by having multiple scenes running concurrently. This is a lot of fun, but hell on the storyteller. :)

[8] And it’s easy to lose track of this amongst all the details of playing your character, keeping track of what’s happening and figuring out the rules.

[9] This footnote is not related to anything.

by admin at 16 August 2015 11:06 PM

13 August 2015

Adrian Frith (htonl)

South African provinces as they might have been

The post-apartheid political map of South Africa might well have looked quite different. The Eastern Cape might have been divided into two provinces, with the Kat River and Great Fish River on the boundary. The Northern Cape might not have existed, with the Western Cape meeting North West at the Orange River. Gauteng might have been much bigger – or much smaller. The Western Cape might have stopped south of Citrusdal – or it might have incorporated all of Namaqualand.

13 August 2015 10:00 PM

09 August 2015

Christel Breedt (Pirogoeth)

Simply FABiLUS

I seem to be in a mood to publish my old drafts today. Here is one I wrote shortly before Christmas of 2011 (if memory serves...).


Seven days ago, at 10:30pm on a Thursday night, I walked into a rustic eatery two blocks away from my home in Observatory. I was tired, and a little annoyed at my husband for invoking the power of our relationship to convince me to come and meet the owners of the place.

Fabio, a happy-go-lucky Italian economics major and Wesley, an ex-programmer from Durban, had just opened their vegetarian-only restaurant that Monday and they had big ideas for converting the space they had rented into an open Artists Collective and Cultural Exchange such as Observatory had never seen.

They drank strong coffee and talked into the wee hours... by the time I arrived the topic was deeply philosophical and ranged between Anarchy vs. Capitalism, the importance of community and the ethics of vegetarian cooking.

I only had to spend a short amount of time with these charming and attractive young men to realise that we were all kindred spirits, and that many of our beliefs and ideas overlapped. I was hooked!

They needed people to help them run the shop because they were short staffed, but they were frank about the fact that money was too tight to mention. Arno and I felt so powerfully about the worth of the idea they were trying to establish that we joined their cause without reservation and in exchange for our meals.

It very quickly transpired that our biggest value would be in the realm of the kitchen. Arno's incredible cooking very quickly became a hit - customers wistfully commented that his food made them miss their mother's home cooking and dozens of people expressed amazement at the fact that such simple and un-fucked up food could be so good. Arno and I brought our belief in eating what you think smells good (within the basic boundaries of basic balanced meals) to the menu, and it was soon decided that we would not have a fixed menu but rather simply offer a set meal of the day (as chosen by the chef who cooked it) and a selection of bespoke smoothies alongside the usual coffees and teas.

Very soon Arno and I were both practically living in the shop. Every single one of our team members did their level best to be on duty as long and often as possible, usually at least 12-16 hours a day. We all believed so passionately in this collective dream of ours that we were willing to sacrifice whatever we could muster to help our dream survive.

Unfortunately this was not enough. Not one, but two of our financial backers abruptly absconded without so much as an explanation, and suddenly Wesley and Fabio were left high and dry having spent their investments on renovations, fittings, furniture and equipment. Suddenly left without a cent of running expenses to float our company through the difficult early months, we floundered. Before we knew it the dream had been scuppered, and all seemed lost.

But this is where the story really starts.

In the seven days that we grew to know each other better we became a family. The pure unselfish sacrifice that each of our team members brought to the project was inspiring. Fabio, while working a day job to help float himself financially, would come in the evenings after a long day at the office and still work until closing time. Wesley gave up almost every cent he had trying to keep us in running capital, and would often be awake from 5am until after midnight, and ended up doing the dishes most of the time. Bianca, a Swiss language teacher, would come and help out on her off days after working a 12 hour shift as a barmaid. Arno and I did our best to show them the good Afrikaans Protestant work ethic. For those seven days I learned what it meant to have a group of people who could work together almost seamlessly. In those seven days there was not one cruel or harsh word spoken between us, despite us all being under undue pressure to make ends meet. We had meetings often, and everyone's opinion was respected and valued. We debated new ideas and made decisions as a team, often unanimously. We all knew what was at stake, we all had a shared vision, and so we all just got on with the work at hand. Most evenings we would end the day by sharing the leftover dinner from our day's preparations and drinking our signature fruit water (water with a slice of whatever fresh fruits were available; my favourite was melon and mint).

When it finally came to the day when Wesley, who held the lease in his name, had to inform our landlord that we would default on our rent in January and request a cancellation of our contract, the weather chose to tell the whole of the neighbourhood of our sorrow - it was cold and dark and wet all day. Everyone in the store could sense the change in our mood and it seemed things were to be as dark and grey as the weather.

However, the following day, exactly one week after we first met, we decided to have a ceremonial drunk. We all sat around the table with glasses of red wine and played poker with dried chillies for chips. Then we had a rather wonderful philosophical discussion about Polyamory, after which we all sat down to what would likely be our last meal together as the Fabilus team. We had fantastic potjiekos with fresh ciabatta and rice; to a man, every one of us overate.

We had, in a way, survived a great challenge together - even though in the end we lost - and through this loss we were bonded together as friends. The love I came to feel for my teammates will never be lost, and the joy of our shared experience will never be taken away. I will always have the wonderful music that I copied from Fabio's iPhone - beautiful jazz that became Fabilus' signature sound and will always remind me of how uncomplicated and kind Fabio was. I will always remember the way that Bianca smoked her vanilla rolled cigarettes and would help steer our meetings when they went off track by bringing out her detailed little notebook. Wesley's cheerfulness and willingness to always be the first to help out even when he was visibly dead on his feet. I'll remember the madness of us having cold showers in the back yard while someone held watch at the back door; of braaing potjiekos on a simple brick fireplace in the back yard. Watching people play chess through the front windows on our hand-painted board, and having the umbrellas make Cape Town Flowers when the wind got especially strong and nearly lifted them out of our make-shift mountings. Buying vegetables with Wesley at the market, buying malva pudding with Fabio, hugging Bianca after she changed her mind about needing a hug after Lucas (our arch enemy and one of the investors who pulled out) visited the shop briefly. Falling asleep on the hideous green couch with the pink cloth over it. Making hummus for the first time. Eating gourmet food every day for a week. Drawing the menu in chalk on the wall, a different dish each day. The dress that Hans gave me that he thought couldn't possibly be his own design because it was too bohemian. Making our own chocolate ice cream. Seeing Arno more happy than I've known him to be in years - more even than a vacation could have achieved.

So what if we will be entering the New Year not a cent richer for the work we did for Fabilus? We have nevertheless been enriched by the experience; our hearts are lighter and more at peace than they have been in years.

Thank you, Fabilus. We will miss you.

by Whizper at 09 August 2015 01:26 AM

A moderately pointless post...with some sense at the end.

This post was drafted somewhere in the last three months and languished unpublished...because ADD.

It seems to have some good to it, so I shall publish it unaltered as a random example of my stream of consciousness. It has some amusing bits. ;)


Forgive the dialect of my writing at present - I am something of a sponge when it comes to picking up turns of phrase or particular accents. If I watch some American television show set, for example, in the Deep South, I inevitably end up with a southern drawl despite my never having set foot in the place.

Likewise, I tend to find myself communicating in a far more genteel manner when I've spent too many hours reading regency novels or watching period dramas. It is not an affectation so much as an affliction. My partners have teased me often.

I don't precisely know what it is I mean to write here, only that I am burdened with some thoughts that will not rest in my mind - and yet remain tantalisingly out of sight. I cannot seem to find them, but I feel the effect of them deeply. They colour my emotional landscape, washing it out of colour and leaving me feeling an undercurrent of sadness, longing, regret and vulnerability.

The laughter of my landlady in the next room infuriates me. I feel offended that she should be so comfortable while I, who unquestionably have the moral high ground, should suffer so. It serves only to remind me that a good, open, candid heart that means only ever to prosper everyone will get you absolutely nowhere in life.

I do my best to put such thoughts aside, as they only serve to excite my anxieties and leave me nauseous, sleepless, restless and fretful. Regardless, I continue to wish hexes on her almost daily - and can only justify this in my conscience because I do not truly believe that wishing may make things so.

The prospect of confronting her if we do not find another home soon daunts me. I know I have the fire to face this danger, but it usually comes at so very dear a cost to my mental health and is by far the most difficult part of my battle against the delusions, panics and bloody-mindedness that at times overtake my reason during times of stress.

The changes in my medication - slight reductions to make allowance for the endless plague of heartburn I suffer - have been an... interesting experience.

I have not slept much, if at all, in several days. I burn constantly with anxiety, a gnawing discomfort much like excessive hunger that lingers in my stomach and draws all my muscles up in a bunch as I think. I try to remind myself to take deep breaths at every opportunity, which helps, but it is only when I eventually concede to the necessity of a tranquiliser that the feeling eases, and then only in proportion to my dose.

This I keep as light as I am able, and as infrequent as possible, for if I were to give into the temptation of exceeding my daily allowance, I should take four or five times as much before it would truly settle my nerves.  And so, instead, I suffer through the discomfort and do my best not to allow it to affect my behaviour too much. For my troubles I am at last, after so many months (and years) again able to write something.

I am glad for the change in my motivation. I so often feel as if I were a waste of space. I have nothing but my passion and my wit to recommend me - all other skills needed for survival I seem to have been denied by fate. At least now I can think a bit more, write a bit more... perhaps perceive how I may become more whole. My loves deserve better than a series of short-tempered outbursts and my endless pool of emotional distress.

Last night my girl and I walked in the rain to the park on Raleigh road. It is the first happy moment I have had with her in much longer than I care to remember. I played her Keane's Somewhere only we know - my lyrical attempt at romance - as we walked. She was quite amused by that as I am not often given to such gestures. We spoke a while about how few "simple things" we had had in so long. My meaning was well understood: This could be the end of everything (indeed, so it may be every day). I am making time now for you because I need something to hang onto in this darkness.

We found a bench, quite wet, and talked for a while, smiling and laughing. A lingering police van drove past us four times before its driver settled his mind that we were not vagrants. We smoked a cigarette or two.

It began to rain lightly, and we hid ourselves under the playground equipment, hunkering down in the dark damp spaces under the slide and jungle gym. The rain grew heavier for a while, and during a lull we abandoned the attempt and accepted defeat.

On the way home, I walked with bare feet in the pooling gutters, and softened the impact of the stony tar on my soles by walking in windswept piles of oak leaves. It was so happy a moment I feel overcome remembering it now. I feel so very very sad most of the time that such small, short bursts of true bliss are like stars in the night sky of my mind. I rely upon them to guide me through the night.

I feel too often so afflicted. It has become so that I do not speak of it any longer. My lovers and friends neither see me nor speak with me. I have become a ghost by my own choosing. I prefer it this way. Fewer times to pretend to be well. Fewer lies to tell when asked about my welfare.

Only with my girl do I feel fully at ease (to my shame, as there are others who love me as well). It is her particular personality that allows this: she is a very steady person, one not given to high emotions or influenced by the moods of others. This offers me such a constancy of forbearance when I become undone. She is also able to share the dry, dark humour that helps me grapple with life.

She almost never imagines it to be her fault when I am unreasonable, or feels pressed to make a fanfare of concern over each time I weep quietly - something that plagues my other lover, a man who has such depth of kindness that he takes all the blame onto himself so easily that none is left for anyone else. With him it is hard to safely be as ill as I am, without the constant pressure to reassure my companion that this is but a passing discomfort, much like the coughing or sneezing one experiences during the flu.

Just because I candidly must admit that I am not well does not mean I am not glad for your company, or willing to share your burdens. There is no need to speak in whispers near me, or stop your sentences midway, or halt your speaking altogether after I have admitted I am not utterly sprightly.

"I have a disease of the mind," I want to say. "It is chronic, and recurring. And the manifestations of its symptoms are weeping, rages, fretfulness, self-absorption, thoughts of self-harm and withdrawal. These are not features of my person. They will pass, and I will be myself. Do not worry so much about me..."

But alas, mental illness frightens people so much that they do not grasp its symptom set as they do with physical illnesses. For that reason they can also not set their minds at ease about my eventual prognosis, and so inevitably find themselves unable to gauge how to behave in the presence of this frightening thing between us. This, most heavily, weighs on me when I think of him. I so desperately want this to be different. All that is left to me is candour.

I have abandoned the attempt at being less frank about the matter. I decided that to help someone to see past this obstacle and treat me as normal, I must first give them opportunity to do so. I have resigned myself to whatever opinion people seem to form of me. I cannot control it. I can merely try my best to create some constancy in my behaviour that eventually reveals my real nature.

In writing this, perhaps such a part of me is revealed. I cannot say for sure. All I know is that writing has always left me feeling the better for it, and so I do not do it to please others or with a goal in mind, but rather more as a meditation, an expression of my thoughts that flow out much like images on canvas under the brush.

Only at the end does it begin to seem as if a picture has emerged that might be pleasing to an audience and like any vain artist, I set my mind to tidying up the more obvious flaws in my creation, and with it a small twinge of pride at having, at last, created something at all.

I feel I should thank you for your time.

Thank you.

by Whizper at 09 August 2015 01:16 AM

03 July 2015

Adrian Frith (htonl)

How many people live in countries where same-sex marriage is legal?

After the recent US Supreme Court ruling legalising same-sex marriage (SSM) throughout that country, a claim was brought up on a Wikipedia talk page that more than one billion people now live in countries (or states/provinces) where SSM is legal. I thought I’d check out the numbers, and update my old graph showing how this has changed over time.

03 July 2015 10:00 PM

07 June 2015

Tristan Seligmann (mithrandi)

Adventures in deployment with Propellor, Docker, and Fabric


After playing around with Docker a bit, I decided that it would make an ideal deployment platform for my work services (previously we were using some ad-hoc isolation based on unix users and not much else). While Docker’s security is…suspect…compared to a complete virtualization solution (see Xen), I’m not so much worried about complete isolation between my services as I am interested in things like easy resource limits and imaging. You can build this yourself out of cgroups, chroot, etc., but in the end you’re just reinventing the Docker wheel, so I went with Docker instead.

However, Docker by itself is not a complete solution. You still need some way to configure the Docker host, and you also need to build Docker images, so I added Propellor (which I recently discovered) and Fabric to the mix.


Propellor is a configuration management system (in the sense of Puppet, Chef, Salt, et al.) written in Haskell, where your configuration itself is Haskell code. For someone coming from a systems administration background, the flexibility and breadth offered by a real programming language like Haskell may be quite daunting, but as a programmer, I find it far more straightforward to just write code that does what I want, extracting common pieces into functions and so on. Our previous incarnation of things used Puppet for configuration management, but it always felt very awkward to work with; another problem is that Puppet was introduced after a bunch of the infrastructure was in place, meaning a lot of things were not actually managed by Puppet because somebody forgot. Propellor was used to configure a new server from scratch, ensuring that nothing was done ad-hoc, and while I won’t go into too much detail about Propellor, I am liking it a lot so far.

The role of Propellor in the new order is to configure things to provide the expected platform. This includes installing Docker, installing admin user accounts, SSH keys, groups, and so on.


The Docker workflow I adopted is based on the one described by Glyph. I would strongly recommend you go read his excellent post for the long explanation, but the short version is that instead of building a single container image, you build three: a “build” container used to produce the built artifacts from your sources (e.g. Python wheels, Java/Clojure JARs), a “run” container which is built by installing the artifacts produced by running the “build” container, and thus does not need to contain your toolchain and -dev packages (keeping the size down), and a “base” container which contains the things shared by the “build” and “run” containers, allowing for even more efficiency of disk usage.
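As a rough sketch of what this split looks like, here are hypothetical contents for the three files; only the `fusionapp/documint-*` image names match the tags used in the Fabric tasks later in this post, and the base image and package names are illustrative:

```dockerfile
# base.docker: runtime dependencies shared by "build" and "run"
FROM debian:jessie
RUN apt-get update && apt-get install --yes python2.7 python-pip

# build.docker (a separate file): adds the toolchain on top of base;
# running this container fills the wheelhouse with built artifacts:
#   FROM fusionapp/documint-base
#   RUN apt-get install --yes python-dev build-essential
#   CMD ["pip", "wheel", "--wheel-dir", "/wheelhouse", "/application"]

# run.docker (a separate file): installs the built wheels, no toolchain:
#   FROM fusionapp/documint-base
#   COPY wheelhouse /wheelhouse
#   RUN pip install --no-index --find-links /wheelhouse documint
```

Only the “run” image ever needs to be shipped to production, which is what keeps the deployed images small.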

While I can’t show the Docker bits for our proprietary codebases, you can see the bits for one of our free software codebases, including instructions for building and running the images. The relative simplicity of the .docker files is no accident; rather than trying to shoehorn any complex build processes into the Docker image build, all of the heavy lifting is done by standard build and install tools (in the case of Documint: apt/dpkg, pip, and setuptools). Following this principle will save you a lot of pain and tears.


The steps outlined for building the Docker images are relatively straightforward, but copy/pasting shell command lines from a README into a terminal is still not a great experience. In addition, our developers are typically working from internet connections where downloading multi-megabyte Docker images / packages / etc. is a somewhat tedious experience, and uploading the resulting images is ten times worse (literally ten times worse; my connection at home is 10M down / 1M up ADSL, for example). Rather than doing this locally, this should instead run on one of our servers which has much better connectivity and a stable / well-defined platform configuration (thanks to Propellor). So now the process would be “copy/paste shell command lines from a README into an ssh session” — no thanks. (For comparison, our current deployment processes use some ad-hoc shell scripts lying around on the relevant servers; a bit better than copy/pasting into an ssh session, but not by much.)

At this point, froztbyte reminded me of Fabric (which I knew about previously, but hadn’t thought of in this context). So instead I wrote some fairly simple Fabric tasks to automate the process of building new containers, and also deploying them. For final production use, I will probably be setting up a scheduled task that automatically deploys from our “prod” branch (much like our current workflow does), but for testing purposes, we want a deploy to happen whenever somebody merges something into our testing release branch (eventually I’d like to deploy test environments on demand for separate branches, but this poses some challenges which are outside of the scope of this blog post). I could build some automated deployment system triggered by webhooks from BitBucket (where our private source code is hosted), but since everyone with access to merge things into that branch also has direct SSH access to our servers, Fabric was the easiest solution; no need to add another pile of moving parts to the system.

My Fabric tasks look like this (censored slightly to remove hostnames):

from fabric.api import cd, run, settings

def build_uat_documint():
    with settings(warn_only=True):
        if run('test -d /srv/build/documint').failed:
            run('git clone --quiet -- /srv/build/documint')
    with cd('/srv/build/documint'):
        run('git pull --quiet')
        run('docker build --tag=fusionapp/documint-base --file=docker/base.docker .')
        run('docker build --tag=fusionapp/documint-build --file=docker/build.docker .')
        run('docker run --rm --tty --interactive --volume="/srv/build/documint:/application" --volume="/srv/build/documint/wheelhouse:/wheelhouse" fusionapp/documint-build')
        run('cp /srv/build/clj-neon/src/target/uberjar/clj-neon-*-standalone.jar bin/clj-neon.jar')
        run('docker build --tag=fusionapp/documint --file=docker/run.docker .')

def deploy_uat_documint():
    with settings(warn_only=True):
        run('docker stop --time=30 documint')
        run('docker rm --volumes --force documint')
    run('docker run --detach --restart=always --name=documint --publish=8750:8750 fusionapp/documint')

Developers can now deploy a new version of Documint (for example) by simply running fab build_uat_documint deploy_uat_documint. Incidentally, the unit tests are run during the container build (from the .docker file), so deploying a busted code version by accident shouldn’t happen.

by mithrandi at 07 June 2015 10:57 PM

07 March 2015

Tristan Seligmann (mithrandi)

Axiom benchmark results on PyPy

EDIT: Updated version now available.

EDIT: Fixed the issue with the store-opening benchmark

Axiom conveniently includes a few microbenchmarks; I thought I’d use them to give an idea of the speed increase made possible by running Axiom on PyPy. In order to do this, however, I’m going to have to modify the benchmarks a little.

To understand why this is necessary, one has to understand how PyPy achieves the speed it does: namely, through the use of JIT (Just-In-Time) compilation techniques. In short, these techniques mean that PyPy is compiling code during the execution of a program; it does this “just in time” to run the code (or actually, if I understand correctly, in some cases only after the code has been run). This means that when a PyPy program has just started up, there is a lot of performance overhead: time taken up by the JIT compilation itself, as well as time taken up by code being interpreted slowly because it has not yet been compiled. While this performance hit is quite significant for command-line tools and other short-lived programs, many applications making use of Axiom are long-lived server processes; for these, any startup overhead is mostly unimportant, and the performance that interests us is the performance achieved once the startup cost has already been paid.

The Axiom microbenchmarks mostly take the form of performing a certain operation N times, recording the time taken, then dividing that time by N to get an average time per single operation. I have made two modifications to the microbenchmarks in order to demonstrate the performance on PyPy: first, I have increased the value of N; second, I have modified the benchmarks to run the entire benchmark twice, throwing away the results from the first run and only reporting the second run. This serves to exclude startup/“warmup” costs from the benchmark.
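The warmup-discarding trick can be sketched generically (this is a hypothetical stdlib-only harness, not the actual Axiom benchmark code, but it has the same shape):

```python
from timeit import default_timer

def bench(operation, n):
    # Time `operation` n times; return the average seconds per call.
    start = default_timer()
    for _ in range(n):
        operation()
    return (default_timer() - start) / n

def bench_warm(operation, n):
    # Run the whole benchmark twice and keep only the second result, so
    # that JIT compilation / warmup costs are excluded from the measurement.
    bench(operation, n)
    return bench(operation, n)

avg = bench_warm(lambda: sum(range(100)), 10000)
```

On CPython the two runs report roughly the same number; on PyPy the second run is the one that reflects steady-state performance.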

All of the results below are from my desktop machine running Debian unstable on amd64, CPython 2.7.5, and PyPy 2.1.0 on a Core i7-2600K running at 3.40GHz. I tried to keep the system mostly quiet during benchmarking, but I did have a web browser and other typical desktop applications running at the same time. Here’s a graph of the results; see the rest of the post for the details, especially regarding the store-opening benchmark (which is actually slower on PyPy).

[graph removed, see the new post instead]

To get an example of how much of a difference this makes, let’s take a look at the first benchmark I’m going to run, item-creation 15. This benchmark constructs an Item type with 15 integer attributes, then runs 10 transactions where each transaction creates 1000 items of that type. In its initial form, the results look like this:

mithrandi@lorien> python item-creation 15
mithrandi@lorien> pypy item-creation 15

That’s about 165µs per item creation on CPython, and 301µs on PyPy, nearly 83% slower; not exactly what we were hoping for. If I increase the length of the outer loop (number of transactions) from 10 to 1000, and introduce the double benchmark run, the results look a lot more encouraging:

mithrandi@lorien> python item-creation 15
mithrandi@lorien> pypy item-creation 15

That’s about 159µs per item creation on CPython, and only 87µs on PyPy; that’s a 45% speed increase. The PyPy speed-up is welcome, but it’s also interesting to note that CPython benefits slightly from the changes to the benchmark. I don’t have any immediate explanation for why this might be, but the difference is only about 3%, so it doesn’t matter too much.

The second benchmark is inmemory-setting. This benchmark constructs 10,000 items with 5 inmemory attributes (actually, the number of attributes is hardcoded, due to a limitation in the benchmark code), and then times how long it takes to set all 5 attributes to new values on each of the 10,000 items. I decreased the number of items to 1000, wrapped a loop around the attribute setting to repeat it 1000 times, and introduced the double benchmark run:

mithrandi@lorien> python inmemory-setting
mithrandi@lorien> pypy inmemory-setting

That’s 486ns to set an attribute on CPython, and 129ns on PyPy, for a 74% speed increase. Note that this benchmark is extremely sensitive to small fluctuations since the operation being measured is such a fast one, so the results can vary a fair amount between benchmark runs. For interest’s sake, I repeated the benchmark except with a normal Python class substituted for Item, in order to compare the overhead of setting an inmemory attribute as compared with normal Python attribute access. The result was 61ns to set an attribute on CPython (making an inmemory attribute about 700% slower), and 2ns on PyPy (inmemory is 5700% slower). The speed difference on PyPy has more to do with how fast setting a normal attribute is on PyPy than with Axiom being slow.
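To get a rough feel for why a Python-level attribute hook costs so much more than a plain attribute set, here is a stdlib-only comparison (the descriptor is a stand-in for the extra indirection an attribute like inmemory adds; it is not Axiom’s actual implementation):

```python
from timeit import default_timer

class Plain(object):
    pass

class Hook(object):
    # A minimal data descriptor: every assignment goes through a
    # Python-level __set__ call, much like an attribute with extra
    # bookkeeping would.
    def __set__(self, obj, value):
        obj.__dict__['x'] = value

class Hooked(object):
    x = Hook()

def time_sets(obj, n=100000):
    # Average time per attribute set, in seconds.
    start = default_timer()
    for i in range(n):
        obj.x = i
    return (default_timer() - start) / n

plain_ns = time_sets(Plain()) * 1e9
hooked_ns = time_sets(Hooked()) * 1e9
```

The descriptor path is several times slower per set, for the same reason Axiom’s attribute machinery is: each assignment pays for an extra Python-level call.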

The third benchmark is integer-setting. This benchmark is similar to inmemory-setting except that it uses integer attributes instead of inmemory attributes. I performed the same modifications, except with an outer loop of 100 iterations:

mithrandi@lorien> python integer-setting
mithrandi@lorien> pypy integer-setting

That’s 12.3µs to set an attribute on CPython, and 3.8µs on PyPy, a 69% speed increase.

The fourth benchmark is item-loading 15. This benchmark creates 10,000 items with 15 integer attributes each, then times how long it takes to load an item from the database. On CPython, the items are deallocated and removed from the item cache immediately thanks to refcounting, but on PyPy a gc.collect() after creating the items is necessary to force them to be garbage collected. In addition, I increased the number of items to 100,000 and introduced the double benchmark run:

mithrandi@lorien> python item-loading 15
mithrandi@lorien> pypy item-loading 15

That’s 90µs to load an item on CPython, and 57µs on PyPy, for a modest 37% speed increase.
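The refcounting difference described above is easy to demonstrate with the stdlib: on CPython the del alone is enough to free the object, while PyPy needs the explicit gc.collect().

```python
import gc
import weakref

class Cached(object):
    # Stand-in for a cached item; any plain class will do.
    pass

obj = Cached()
ref = weakref.ref(obj)
del obj        # CPython frees the object immediately via refcounting...
gc.collect()   # ...but PyPy needs an explicit collection pass
dead = ref() is None
```

After the collection, `ref()` returns None on both implementations, which is why the item-loading benchmark needs the gc.collect() call to empty the item cache on PyPy.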

The fifth benchmark is multiquery-creation 5 15. This benchmark constructs (but does not run) an Axiom query involving 5 different types, each with 15 attributes (such a query requires Axiom to construct SQL that mentions each item table, and each column in those tables) 10,000 times. I increased the number of queries constructed to 100,000 and introduced the double benchmark run:

mithrandi@lorien> python multiquery-creation 5 15
mithrandi@lorien> pypy multiquery-creation 5 15

55µs to construct a query on CPython; 8µs on PyPy; 86% speed increase.

The sixth benchmark is query-creation 15. This benchmark is the same as multiquery-creation, except for queries involving only a single item type. I increased the number of queries constructed to 1,000,000 and introduced the double benchmark run:

mithrandi@lorien> python query-creation 15
mithrandi@lorien> pypy query-creation 15

15.5µs to construct a query on CPython; 1.6µs on PyPy; 90% speed increase.

The final benchmark is store-opening 20 15. This benchmark simply times how long it takes to open a store containing 20 different item types, each with 15 attributes (opening a store requires Axiom to load the schema from the database, among other things). I increased the number of iterations from 100 to 10,000; due to a bug in Axiom, the benchmark will run out of file descriptors partway, so I had to work around this. I also introduced the double benchmark run:

mithrandi@lorien> python store-opening 20 15
mithrandi@lorien> pypy store-opening 20 15

1.41ms to open a store on CPython; 2.02ms on PyPy; 44% slowdown. I’m not sure what the cause of the slowdown is.

A bzr branch containing all of my modifications is available at lp:~mithrandi/

by mithrandi at 07 March 2015 11:24 AM

Axiom benchmark results on PyPy 2.5.0

This is a followup to a post I made about 1.5 years ago, benchmarking Axiom on PyPy 2.1.0. Not too much has changed in Axiom since then (we fixed two nasty bugs that mainly affected PyPy, but I don’t expect those changes to have had much impact on performance), but PyPy (now at 2.5.0) has had plenty of work done on it since then, so let’s see what that means for Axiom performance!

Unlike my previous post, I’m basically just going to show the results here without much commentary:

Graph of Axiom performance

A few notes:

  • I didn’t redo the old benchmark results, but the hardware/software I ran the benchmarks on is not significantly different, so I think the results are still valid as far as broad comparisons go (as you can see, the CPython results match fairly closely).
  • The benchmark harness I’m using now is improved over the last time, using some statistical techniques to determine how long to run the benchmark, rather than relying on some hardcoded values to achieve JIT warmup and performance stability. Still could use some work (eg. outputting a kernel density estimate / error bars, rather than just a single mean time value).
  • There is one new benchmark relative to the last time, powerup-loading; PyPy really shines here, cutting out a ton of overhead. There’s still room for a few more benchmarks of critical functions such as actually running and loading query results (as opposed to just constructing query objects).
  • The branch I used to run these benchmarks is available on Github.
  • The horizontal axis is cut off at 1.0 so you can’t actually see how store-opening lines up, but the raw data shows that PyPy 2.1.0 took about 53% longer on this benchmark, while PyPy 2.5.0 only takes about 2% longer.
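The statistical stopping criterion mentioned in the notes above can be sketched roughly like this (hypothetical code, not the actual harness): keep running batches until the recent per-call timings stabilise, rather than hardcoding an iteration count to get past JIT warmup.

```python
import statistics
from timeit import default_timer

def bench_until_stable(operation, batch=1000, max_batches=50,
                       window=5, rel_tol=0.05):
    # Run batches of the operation until the average per-call time of
    # the last `window` batches has a low relative standard deviation.
    times = []
    for _ in range(max_batches):
        start = default_timer()
        for _ in range(batch):
            operation()
        times.append((default_timer() - start) / batch)
        recent = times[-window:]
        mean = statistics.mean(recent)
        if (len(recent) == window and mean > 0 and
                statistics.stdev(recent) / mean < rel_tol):
            break
    # Report the average over the most recent (stable) window.
    return statistics.mean(times[-window:])

avg = bench_until_stable(lambda: sum(range(100)))
```

A kernel density estimate or error bars, as mentioned above, would then be computed over the retained window of samples rather than a single mean.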

by mithrandi at 07 March 2015 11:22 AM

28 February 2015

Andre Truter (Cacofonix)

Old software in business

One thing that I cannot understand is why a business that is not a one-man show would use outdated software, especially something like Internet Explorer 6 or 7, or anything below version 10.
I can understand that a very small business or a private person without an IT department or IT support might still be stuck with such old software, but a big business with branches all over the country should know that IE below version 10 in particular has security risks and does not support HTML5.

So if you then ask a supplier to write you a web application and the web application makes use of HTML5, then you should not wonder why it does not work with your IE 8 browser.

I can understand that it might not always be possible to upgrade all workstations to the latest operating system if you use expensive proprietary operating systems, but then you can at least standardise on an open source browser like Firefox or a free browser like Chrome. Both of them have very good track records for security and they support HTML5 and keeping them up to date does not cost anything.

So why are your staff stuck with an old, insecure browser that does not support HTML5? We are not living in the ’90s anymore!

The same goes for office suites. LibreOffice is kept up to date, and there are other alternatives like OpenOffice (backed by Apache, Oracle and IBM). With a little training, you can move all your users over to LibreOffice or OpenOffice, never have to pay for an upgrade again, and always have the latest stable and secure version of the software available, no matter what OS you run.

To me it just makes sense to invest some time and money once to ensure a future of not being locked in. In the long run it saves money, and if enough businesses do that, it might even force Microsoft to come to the Open Document Standards table and bring the price of its Office suite down, or even make it free as well, which will benefit everybody, as we will have a real open document format that everybody can use and nobody can be locked into.

Just my humble opinion.

Some links:
Firefox Web Browser
Google Chrome Web Browser
Best Microsoft Office Alternatives

by andre at 28 February 2015 03:35 PM

03 December 2014

Neil Blakey-Milner (nbm)

Starting is one of the hardest things to do

Just over six years ago I stopped posting regularly to my fairly long-lived blog (apparently started in April 2003), and just under four years ago I noticed it was not loading because the server it was on had broken somehow, and I didn't care enough to put it back up. (Astute readers will note that those two dates each fall roughly a few months after my last two job start dates...)

I've written a number of blog posts over the last few years, but I haven't posted any of them. I don't even have most of them anymore, although a few are in the branches of git repos somewhere. I'll try to dig some of them up - one of the things I enjoyed about my old blog was rereading what I was thinking about at various times in the past. (I guess I'll try to get that back online.)

I've also written two different bits of blog software since then - since that's what one does whenever one contemplates starting a blog up again (although, to be fair, I also started setting up Wordpress and Tumblr blogs too).

The first used Hyde, a static blog generation tool (whose development seems to have halted a year ago). The second is what I'm using now - a collection of Flask modules and some glue code that constitutes gibe2, which I wrote just under a year ago. It uses Frozen-Flask to freeze a Flask app into an HTML tree that can be served by any static web server.

Putting something out there - let alone a first pass, a beginning with some implied commitment to continue - is quite scary. In my career, I've tended to be the joiner and sometimes the completer to a project, not the initiator. I find it easy to take something from where it is to where I think it should go, and hard to imagine what should exist.

I should mention that I have succeeded in posting a bit about the occasional spurts of time I get to work on a little D language OpenGL someday-might-be-a-game. The minimum expended effort there is a lot higher - building a new feature or doing a major refactor, and then explaining it all in a post - in fact the posts are sometimes a lot harder to put together than the code, and the code is quite a bit further ahead than the posts now. It will probably be very sporadic, with posts mostly coinciding with my long weekends and vacations. (And, unsurprisingly, I've just finished up a vacation where I finally got around to putting the final touches on this here blog.)

Connecting together starting up being scary and game programming: I've been watching Handmade Hero religiously since it started (the first few on Youtube the day after, the last few live). Talk about a tough thing to contemplate starting - one hour a day of writing a game from scratch, streamed to the public every weekday - every single little mistake in your code and in your explanations there for people to pick apart without any opportunity for editing. And having ~1000 viewers in your first few shows - no pressure!

That definitely put just putting together a few words a week with no implied time-bound commitment into perspective, so here we are.

I stopped reading blogs roughly when I stopped caring about mine. I hope to find a few interesting people to follow. A few of my friends used to post, albeit infrequently. I hope I can convince them to do so again as well.

by Neil Blakey-Milner at 03 December 2014 07:45 AM

22 August 2014

Adrianna Pińska (Confluence)

Yet another way to play videos in an external player from Firefox

I spent some time today trying to figure out how to get Firefox to play embedded videos using anything other than Flash, which is an awful, stuttering, resource-devouring mess. The internet is littered with old browser extensions and user scripts which allegedly make this possible (e.g. by forcing sites like YouTube to use the vlc media plugin instead), but I was unable to get any of them to work correctly.

Here’s a quick hack for playing videos from YouTube and any other site that youtube-dl can handle in an external mplayer window. It’s based on several existing HOWTOs, and has the benefit of utilising a browser extension which isn’t dead yet, Open With, which was designed for opening websites in another browser from inside Firefox.

I wrote a little script which uses youtube-dl to download the video and write it to stdout, and pipes this output to mplayer, which plays it. Open With is the glue which sends URLs to the script. You can configure the extension to add this option to various context menus in the browser — for example, I can see it if I right-click on a URL or on an open page. You may find this less convenient than overriding the behaviour of the embedded video on the page, but I prefer to play my videos full-screen anyway.

This is the script:

#!/bin/sh
# Stream the video with youtube-dl and pipe it straight into mplayer.
youtube-dl -o - "$1" | mplayer -

Make it executable. Now open Open With’s preferences, add this executable, and give it a nicer name if you like. Enjoy your stutter-free videos. :)

(Obviously you need to have youtube-dl and mplayer installed in order for this to work. You can adapt this to other media players — just check what you need to do to get them to read from stdin.)

by confluence at 22 August 2014 02:41 PM

21 August 2014

Simon Cross (Hodgestar)

Character Creation 3000W

by Simon Cross, Mike Dewar and Adrianna Pińska

Your character creation skills have progressed far beyond writing
numbers on paper. Your characters have deftly crafted mannerisms and
epic-length backgrounds. They breathe emotion and seem more life-like
than many of your friends.

Yet, somehow, when you sit down at a table to play your beautiful
creations, things don’t quite work out.

Perhaps the story heads in an unexpected direction, leaving your
creation out of place and struggling to fit in? Or maybe they’re fun
to play initially but their actions begin to feel repetitive and
predictable?

If any of this sounds familiar, read on.

Reacting to failure

It’s easy to spend all your time imagining a character’s successes —
their victories and their crowning moments — but what happens when
they fail? How do they respond to minor setbacks? And big ones?

Maybe they’re stoic about it? Perhaps it’s likely to cause a crisis of
faith? Maybe they react by doubling down and upping the stakes? Maybe
they see failure as an opportunity to learn and grow? Perhaps they’re
accustomed to failure? Perhaps they see failure as a sign that they’re
challenging themselves and pushing their abilities?

The dice and the DM are going to screw you. Make sure you have a plan
for how to roleplay your character when they do.


Have a philosophy

A character’s goals are things strongly tied to specific events. A
philosophy colours every situation. The two are often aligned, but a
philosophy is more broadly useful. It gives you a handle on how your
character might behave in circumstances where it is not otherwise
obvious what they would do.

To take a hackneyed example: your backstory might involve punishing an
old partner who screwed you. This goal could feed a number of
rather different philosophies:

  • “I always keep my word, and I promised Jimmy I’d get him back.”
  • “Any situation can be solved with enough violence.”
  • “Karma controls the universe. What goes around comes around.”

The goal is the same, but each philosophy implies very different
day-to-day behaviour.

There are going to be times when other characters’ plots and goals are
centre-stage, and it behooves us as roleplayers to have a plan for
these awkward (and hopefully brief) moments. A philosophy allows your
character to participate in others’ plots as a unique and distinct
individual, rather than as a bored bystander.

Your character’s philosophy becomes vitally important when paradigm
shifts occur in-game. Setting changes erode the importance of lesser
goals and past history and create a strong need for a philosophy that
guides your character’s immediate responses and further development.

It may be interesting to construct characters with goals that
contradict their philosophy. For example, a pacifist might wish to
exact revenge on the person who killed their brother. This creates an
interesting conflict that will need to be resolved.

Randomly fucking with people is not a philosophy.

Interacting with colleagues

Your character is going to spend a lot of time interacting with their
colleagues — the other player characters — so it’s worthwhile
thinking about how they do that.

It’s tempting (and a bit lazy) to think of characters as relating to
everyone else the same way. This leads to loners and overly friendly
Energizer bunnies, both of which get old very quickly.

Avoid homogeneous party dynamics.

If your character’s interactions with the other player characters are
all the same, you have failed.

Varied interactions also help make party disagreements more
interesting. Without varied interactions, you have to resolve all
disagreements by beating each other over the head with the logic stick
until consensus (or boredom) is reached. Unique relationships and
loose factions make disagreements more interesting to roleplay and
help the party find plausible lines along which to unite for a given
conflict.

If your character is part of a command structure, spend some time
thinking about how they respond to orders they disagree with. Remember
that the orders are likely issued by someone your character knows and
has an existing relationship with. What is that relationship?

Also keep in mind that your character has likely been given such
orders before, and since they appear to still be part of the command
structure, they’ve probably come to terms with this in some way that
both they and their immediate superiors can live with.

Obviously everyone has their limits, though — where are your
character’s? How much does it take for other player characters or
NPCs to cross the line?


Embrace change

Sometimes even if you do everything right you find yourself in a
situation where your character is no longer fun to play. Maybe the
campaign took an unexpected turn or you’ve just run out of ideas for
them as they are. It’s time for your character to change — to embark
on a new personal story arc.

Great characters aren’t static. They grow and react to events around
them. Perhaps a crushing defeat has made them re-consider their
philosophy — or made them more committed to it? Or maybe frustration
with their current situation has made them reconsider their options?

It helps to think broadly about how your character might develop while
you’re creating them. Make sure you’d still find the character
interesting to play even if their stance on some important issues
shifted. Don’t become too invested in your character remaining as they
are. Be flexible — don’t have only one plan for character
development.

Your character’s philosophy and general outlook can be one of the most
interesting things to tweak. Small changes can often have big
ramifications for how they interact with others.

Don’t feel you have to leave character development for later in the
campaign! The start of a campaign is often when character changes are
most needed to make a character work well and it sets the stage for
further character development later on.


Share who you are

Think about how you convey who your character is to the other
players. They’re probably not going to get to read your epic
backstory, so they’re going to have to learn about who your character
is in other ways.

Likely the first thing people will hear about your character is his or
her name — so make it a good one. It’s going to be repeated a lot so
make sure it conveys something about who your character is. If they’re
an Italian mobster, make sure their name sounds like they’re an
Italian mobster. That way whenever the DM or another player says your
character’s name, it reminds everyone who your character is.

The second thing people hear will probably be a description of your
character. Take some time to write one. Don’t rely on dry statistics
and descriptions. Stick to what people would see and remember about
your character if they met him or her for a few minutes. Don’t mention
hair colour unless hair is an important feature.

After introductions are done, you probably won’t get another
invitation to monologue about your character. So do it in character
instead. Tell the NPC about that time in ‘Nam. Regale the party with
tales from your epic backstory. As in real life, try not to ramble on,
but equally, don’t shy away from putting your character in the
spotlight for a few moments. Continually remind the others at the
table who your character is.

Last but not least, remember that the most epic backstory is pointless
if no one finds out about it. The point of dark secrets is for them to
be uncovered and for your character to confront them.


Summary

Don’t fear failure. Have a philosophy. Have varied interactions with
others. Embrace change. Share who you are.


  • Kululaa dot COMMMM!
  • Mefridus von Utrecht (for a philosophy that involves others)
  • Attelat Vool (for starting life after failure)

This article was also published in the CLAWmarks 2014 Dragonfire edition.

by admin at 21 August 2014 09:36 AM

17 March 2014

Adrianna Pińska (Confluence)

Why phishing works

Let me tell you about my bank.

I would expect an institution which I trust to look after my money to be among the most competent bureaucratic organisations I deal with. Sadly, interactions which I and my partner H have had with our bank in recent years suggest the opposite.

Some of these incidents were comparatively minor: I once had to have a new debit card issued three times, because I made the fatal error of asking if I could pick it up from a different branch (the first two times I got the card, my home branch cancelled it as soon as I used it). More recently, when I got a replacement credit card, every person I dealt with had a completely different idea of what documents I needed to provide in order to collect it. When H bought a used car, it turned out that it was literally impossible for him to pay for it with a transfer and have the payment clear immediately — after he was assured by several employees that it would not be a problem.

Some incidents were less minor. H was once notified that he was being charged for a replacement credit card when his current card was years away from expiring. Suspicious, he called the bank — the employee he spoke to agreed that it was weird, had no idea what had happened, and said they would cancel the new card. H specifically asked if there was anything wrong with his old card, and was assured that everything was fine and he could keep using it. Of course everything was not fine — it suddenly stopped working in the middle of the holiday season, and he had to scramble to replace it at short notice. All he got was a vague explanation that the card had to be replaced “for security reasons”. From hints dropped by various banking employees he got the impression that a whole batch of cards had been compromised and had quietly been replaced — at customers’ expense, of course.

What happened last weekend really takes the cake. The evening before we were due to leave on a five-day trip, well after bank working hours, H received an SMS telling him that his accounts had been frozen because he had failed to provide the bank with some document required for FICA. This was both alarming and inexplicable, because we had both submitted documents for FICA well before the deadline years ago. The accounts seemed fine when he checked them online. When he contacted the bank, he was assured that he had been sent the SMS in error, everything was fine, and he didn’t need to provide any FICA documents.

So we left on our trip. I’m sure you can see where this is going.

On Thursday evening H’s accounts were, in fact, frozen. He tried unsuccessfully during the trip to get the bank to unfreeze them, but since it was closed for the weekend there was pretty much nothing he could do until we got back home.

Hilariously, although employees from the bank made a house call this morning to re-FICA him, he had to go to the branch anyway because they needed a photocopy of his ID (scanning and printing is magically not the same as photocopying).

Again, what actually happened is a mystery — the bank claims that they have no FICA documents on record for H (as an aside, why are customers not given receipts when they submit these documents to the bank? If there is no record of the transaction, the bank can shift blame to the customer with impunity if it loses vital documentation).

We’re very fortunate that none of these incidents had devastating consequences for us, since they only impacted one of us at a time. If we relied on a single bank account, we could easily have ended up without food, without electricity, or stranded in the middle of the Karoo with no fuel. This is pretty clearly not an OK situation.

The common thread running through all these incidents is a lack of communication: both between the bank and its customers, and within the bank. We rapidly discovered through our dealings that while the bank maintains the facade of a single, unified entity, it in fact comprises several more-or-less autonomous divisions, and communication between these divisions often resembles diplomacy in the Balkans.

I would not be surprised to discover that various aspects of customers’ information are distributed over several disconnected databases. There appears to be no way for certain types of information to be linked to accounts, which necessitates the use of bizarre hacks. When H’s credit card was disabled, this was in no way reflected in the online interface — he just had an enormous amount of money reserved on the card. When his accounts were frozen, this was again not apparent in the interface (which was recently updated) — his available credit balance was just zeroed, and he got a non-specific error whenever he tried to transfer funds. I believe that the back-end infrastructure for managing this information effectively and making it available to customers and employees simply does not exist.

As a result of this, I have learned not to believe a word that a bank employee says if there is any chance that the issue crosses some internal jurisdictional boundary. In the best case scenario they are aware of their own lack of information and are able to direct you to the appropriate department. In the worst case scenario, they dispense outright misinformation — something which can have devastating consequences (for you).

We live in an age with an abundance of instant communications methods. In spite of this, it appears to be beyond the bank’s abilities to inform people timeously about what is going on with their accounts. It has backed off from electronic communications presumably because of fears of phishing, and has fallen back to two obsolete and inefficient channels: voice phonecalls, which are prone to misunderstandings and human error and leave the customer with no written record, and SMSes, which have a character limit. Both these channels are vulnerable to blips in the cellphone network infrastructure, customers having no airtime, or customers just not being available to answer their phones at certain times (since voice phonecalls are synchronous). The bank also uses these channels as if it believes that they are intrinsically secure and trustworthy, which is ludicrous, as anyone who keeps getting calls from “the Microsoft Support Centre” can attest.

In order for this dysfunctional system to work, the bank appears to expect its customers to accept unexpected additional charges unquestioningly, and to follow instructions issued by unknown persons calling from unverified internal numbers, even if they are nonsensical or suspicious and cannot be corroborated by other bank employees contacted through more reputable public-facing channels. It is standard procedure for such callers to demand personal information from customers in order to verify their identities, but they offer no evidence of their own legitimacy.

In short, the bank expects us to be the kind of credulous chumps who fall prey to phishing scams. It’s easy to see why phishing works when genuine communications from the bank are so unprofessional and follow such laughable security practices that they are virtually indistinguishable from phishing.

I haven’t named my bank, although it’s pretty easy to find out what it is if you know where to look, because I’m sceptical that there are significant differences between the way South African banks operate, particularly the Big Four. I’ve certainly heard the same kinds of horror stories from customers of other banks.

This is the final straw which has led us to investigate other banking options. Whether the pastures we’re moving to are greener or just differently green remains to be seen.

If you feel strongly that there is a bank which is notably better or worse than the others, or you just want to share your own tale of banking woe, let me know in the comments. I’ll be over here, stuffing my money into a sock to hide under my mattress.

by confluence at 17 March 2014 12:10 PM

25 November 2013

Michael Gorven (cocooncrash)

Statically linked e2fsprogs binaries for Android

I needed to resize the ext4 filesystem of the /data partition on my Android phone and spent ages looking for a prebuilt binary of resize2fs (since it isn't included in ROMs or recovery). When I didn't find one, I spent ages more building it myself. To save others the trouble, here are the binaries.

Just the main ones (mke2fs, resize2fs, tune2fs, fsck): tar.xz 630K or zip 1.5M
Everything tar.xz 1.6M or zip 5.8M
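If you're unsure of the invocation, resize2fs can be exercised safely against an ordinary image file before you point it at a real partition. A minimal sketch (the filename and sizes here are arbitrary; on the phone you would run the binary from recovery against the unmounted /data block device):

```shell
# Build a small ext4 image, enlarge the backing file, then grow the
# filesystem into it with resize2fs. No root access is needed for a file.
truncate -s 64M data.img
mke2fs -t ext4 -q -F data.img
truncate -s 128M data.img      # enlarge the container first
e2fsck -f -p data.img          # resize2fs insists on a clean filesystem
resize2fs data.img             # grow the fs to fill the new size
```

Running resize2fs with no explicit size argument grows the filesystem to fill whatever device (or file) it lives on.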

by mgorven at 25 November 2013 04:58 AM

17 October 2013

Simeon Miteff (simeon)

DBE is right about Delphi and Office

Some thoughts on the current furore about the Department of Basic Education’s policy to standardise on MS Office and Delphi in teaching:

MS Office

Word processors and spreadsheets are simple tools for solving simple problems. For hard typesetting and number crunching problems we have better tools.

My point is that we’re already making a pragmatic choice with computer literacy training in schools: most scholars will use computers to solve simple problems, so schools train them to be able to do that (not to build mathematical models in Octave or to typeset engineering manuals in LaTeX). Should we now insist that they be trained to use a relatively obscure FOSS office suite instead? LibreOffice is a fork of one piece of crap, emulating another piece of crap known as MS Office. DBE is standardising on the de facto (albeit crap) standard, and that is the sensible choice; get over it.


Delphi

I did computer programming in Pascal (with Borland Turbo Pascal) at high school. The curriculum covered the basic constructs of the imperative programming style. At the time, Borland had already enhanced its Pascal implementation to support many of the language features available in Delphi today. When I started university, the Computer Science programme I took required basic programming ability as a prerequisite, and having done Pascal at school was sufficient to fulfil this requirement. Students who didn’t have programming at school took an additional semester course (Introduction to Programming in C).

I never touched Pascal again.

Not only was the Pascal I was taught at school not an immediately useful skill for the workplace, but in retrospect, the abstract concepts I was taught served only as a grounding for the real computer science which followed at university: data structures and algorithms, computational complexity, computer architecture, AI, compilers and the study of programming languages, distributed systems, and so on. I wouldn’t have become a capable programmer without studying mathematics and science (as opposed to merely being trained in programming).

In my opinion, Pascal (and consequently Delphi) is good enough for education. Stated another way: Java is no better for the purpose of teaching a teenager how a for loop works. No programming language taught at school can turn a scholar into a serious career programmer.

Real problems

There are so many difficult problems to solve in South Africa’s education system: consider that the majority of scholars are taught all their subjects in their second or third (natural) language. Why aren’t we having a debate about whether Afrikaans should be taught as a second language to kids who speak Xhosa at home, when they’re expected to follow a history lesson delivered in English?


On this particular issue, y’all need to calm down and give DBE a break.

by Simeon Miteff at 17 October 2013 10:27 PM

08 September 2013

Andy Rabagliati (wizzy)

Richard Stallman visits Cape Town

Myself with Richard Stallman

Richard Stallman visited Cape Town, and gave a talk at the University of Cape Town titled "A Free Digital Society".

On the 4th of September I picked Richard Stallman up from the airport and we went to dinner at the Brass Bell in Kalk Bay with a number of people who had helped organise the event.

Around Cape Town

He stayed with me and my friend Lerato in Muizenberg, and he is a gracious and accommodating guest. I took the Thursday off work, and we had lunch at a little local restaurant in Marina da Gama, and I then took him for a drive around the peninsula on a beautiful Cape day.

A Free Digital Society

The talk was at the Leslie Social Sciences building at University of Cape Town, but as we arrived Richard realised he had left behind a Gnu fluffy toy he wanted to auction, so I had to dash back to Muizenberg to pick it up.

GNU General Public License

The other organisers delayed the start of the talk by ten minutes so I could do the introductions (thank you). He talked for about two hours on a range of topics, dwelling on the GNU General Public License, his 30-year-old prescription for how developers should set about writing code that becomes a living part of a commonly-owned set of tools and procedures that other people can build upon and improve.


He outlined his suspicions regarding the trackability of cellphones, the vast amounts of information that are stored by cellphone companies - mostly metadata (number called, duration, cellphone location) - and the implications for privacy. He does not own one.


He touched on software used in schools, exhorting educators to think about the larger consequences of software usage, and putting students on the right track - to empower them to realise they are in control of the software, and not just treat it as a purchased item that remains the property of the software company.

Voting Software

He highlighted the many dangers and pitfalls of computerising the counting of votes: the closed nature of the software, the possibility of bugs or deliberate malware in the code, the difficulty of knowing whether the software that was verified is indeed the software running on the machine for the duration of the vote, and the difficulty of a recount, or even of what a recount would mean.


Bringing a South African flavour to his talk, he discussed eTolling, which he objected to because of its potential for abuse for surveillance purposes. He also pointed out that if proper attention were paid to anonymising the data, he would no longer object.

Lerato and Richard


He let me keep a South African music CD from his collection, which was very nice of him, and I like it very much. In short, we had a capacity audience from diverse backgrounds and no glitches, and he very much enjoyed his trip, saying he would look forward to coming again. We are certainly blessed to live in a part of the world so beautiful that people like Richard want to visit.

Other commentary

A couple of other posts :-

by Andy at 08 September 2013 02:32 PM

08 August 2013

Simeon Miteff (simeon)

Unbundling of the Telkom SA local loop

It was inevitable that I would eventually post something about the punny namesake of this blog: the “local loop” in South Africa. This is the last-mile (copper) infrastructure owned and controlled by the incumbent telco, Telkom SA.

Local loop unbundling (LLU) is a regulatory policy that forces an operator with “significant market power” to allow competitors to directly resell its access network. Predictably, incumbent operators always push back against LLU, and this gives rise to different “flavours”. “Full LLU” means access to raw copper lines at the exchange, while “bitstream access” refers to a number of wholesale PPPoE backhaul scenarios presented as an alternative to proper LLU.

In South Africa, ISPs wishing to provide ADSL services can either become pure resellers of Telkom SA’s end-to-end DSL ISP product, or they can purchase wholesale access to the ADSL “cloud” via a product called IP Connect. While it seems similar to bitstream access, Telkom has deliberately crippled IP Connect with a really ugly L3 routed hack involving three VRFs and a RADIUS realm per ISP, some RADIUS proxying, and ISP-allocated address pools configured inside Telkom’s network.

Much has been written about the shortcomings of IP Connect, but the TL;DR version is that you need to tunnel to do anything other than the simplest best-effort IPv4 service that could work. What’s perhaps more interesting is to note that compared to the much more flexible bitstream service they could deliver using L2TP, IP Connect is very likely more expensive to run (in my opinion).

So, briefly back to regulatory matters: despite a government policy that LLU must happen, as is typical of our regulator, they have dragged their feet on implementing LLU regulation for many years. Telkom SA has cried wolf about LLU not being viable (as expected) and so the rest of the nation waits while we slip down the global competitiveness rankings.

With the recent replacement of the minister of communications, it seems there is some action (or noise?) around LLU (for example, see this article on Techcentral).

I predict that ICASA means business this time, but there is a catch: Telkom SA will offer bitstream access as a compromise to what they’ll present as the impending disaster of full LLU.

ICASA will fall for it, and Telkom SA will present a fully developed bitstream product as their “gift” to the industry (which was actually ready for launch in 2010, and will save Telkom money, compared to IP Connect).

What remains to be seen is:

  1. Whether ICASA will regulate the price of the “units” of LLU. If I’m right, the “unit” will be bitstreams, not copper lines. In my opinion, the price must be regulated for bitstream access to be an acceptable substitute for full LLU (I’m not saying price regulation is a good thing – that’s a completely different question).
  2. Just how far Telkom will push the interconnect point for ISPs away from the end user. The ideal scenario is access to the bitstream at the exchange. If they force interconnection with ISPs on a regional basis instead, they’ll make it less like full LLU (consequently defeating the purpose of opening up the access network to competitors), and artificially inflate the price by claiming the cost of backhaul to those regional interconnection points.

I hope I’m wrong and we get full LLU, but I have a feeling my prediction will be spot-on.

by Simeon Miteff at 08 August 2013 09:51 AM

06 August 2013

Brendan Hide (Tricky)

My server rebuild, part 1 – with Ubuntu, and a vague intro to btrfs


Much has changed since I last mentioned my personal server – it has grown by leaps and bounds (it now has a 7TB md RAID6) and it has recently been rebuilt with Ubuntu Server.

Arch was never a mistake. Arch Linux had already taught me so much about Linux (and will continue to do so on my other desktop). But Arch definitely requires more time and attention than I would like to spend on a server. Ideally I’d prefer to be able to forget about the server for a while until a reminder email says “um … there’s a couple updates you should look at, buddy.”

Space isn’t free – and neither is space

What prompted the migration to Ubuntu was that I had run out of SATA ports, the ports required to connect hard drives to the rest of the computer – that 7TB RAID array uses a lot of them! I had even given away my very old 200GB hard disk because it took up one of those ports (warning the recipient that the disk’s SMART monitoring indicated it was unreliable). As a temporary workaround to the lack of SATA ports, I had even migrated the server’s OS to a set of four USB sticks in an md RAID1. Crazy, I know. I wasn’t too happy about the speed. I decided to go out and buy a new reliable hard drive and a SATA expansion card to go with it.

The server’s primary Arch partition was using about 7GB of disk. A big chunk of that was a swap file, cached data and otherwise miscellaneous or unnecessary files. Overall the actual size of the OS, including the /home folder, was only about 2GB. This prompted me to look into a super-fast SSD drive, thinking perhaps a smaller one might not be so expensive. It turned out that the cheapest non-SSD drive I could find actually cost more than one of these relatively small SSDs. Yay for me. :)

Choice? Woah?!

In choosing the OS, I’d already decided it wouldn’t be Arch. Of all the other popular distributions, I’m most familiar with Ubuntu and CentOS. Fedora was also a possibility, but I hadn’t yet seriously considered it for a server. Ubuntu won the round.

The next decision I had to make didn’t occur to me until Ubiquity (Ubuntu’s installation wizard) asked it of me: How to set up the partitions.

I was new to using SSDs in Linux, though well aware of the pitfalls of using them incorrectly – chiefly the risk of poor longevity if misused.

I didn’t want to use a dedicated swap partition. I plan on upgrading the server’s motherboard/CPU/memory in the not-too-distant future, so I decided to put swap into a swap file on the existing md RAID. The swap won’t be particularly fast, but its only purpose is to cover that rare occasion when something has gone wrong and memory runs short.
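A swap file is only a few commands to set up. A rough sketch (the size and path here are placeholders; swapon and the fstab entry need root on a real system):

```shell
# Create and format a swap file; only the activation step requires root.
fallocate -l 64M swapfile   # or: dd if=/dev/zero of=swapfile bs=1M count=64
chmod 600 swapfile          # swap must not be world-readable
mkswap swapfile             # write the swap signature
# then, as root: swapon swapfile
# and to make it permanent, in /etc/fstab:
#   /path/to/swapfile  none  swap  sw  0  0
```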

This left me to give the root path the full 60GB of an Intel 330 SSD. I considered separating /home, but it seemed a little pointless given how little space was used in the past. I first set up the partition with LVM – something I’ve been doing lately whenever I set up a Linux box (really, there’s no excuse not to use LVM). When it got to the part where I would configure the filesystem, I clicked the drop-down and instinctively selected ext4. Then I noticed btrfs in the same list. Hang on!!

But a what?

Btrfs (“butter-eff-ess”, “better-eff-ess”, “bee-tree-eff-ess”, or whatever you fancy on the day) is a relatively new filesystem developed to bring Linux’s filesystem capabilities back on track with current filesystem tech. The existing king-of-the-hill filesystem, ext (currently at version 4), is pretty good – but it is limited, stuck in an old paradigm (think of a brand new F22 Raptor vs. an F4 Phantom with a half-jested attempt at an equivalency upgrade), and is unlikely to be able to compete for very long with newer enterprise filesystems such as Oracle’s ZFS. Btrfs still has a long way to go and is still considered experimental (depending on who you ask and what features you need). Many consider it stable for basic use – but nobody is going to make any guarantees. And, of course, everyone says to make and test backups!


The most fundamental difference between ext and btrfs is that btrfs is a “CoW” or “Copy on Write” filesystem. This means that data is never deliberately overwritten by the filesystem’s internals. If you write a change to a file, btrfs will write your changes to a new location on physical media and update the internal pointers to refer to the new location. Btrfs goes a step further in that those internal pointers (referred to as metadata) are also CoW.

Older versions of ext would simply have overwritten the data. Ext4 uses a journal to ensure that corruption won’t occur should the AC plug be yanked out at the most inopportune moment, which results in a similar number of steps being required to update data.

With an SSD, the underlying hardware operates a similar CoW process no matter what filesystem you’re using. This is because SSDs cannot actually overwrite data – they have to copy the data (with your changes) to a new location and then erase the old block entirely. An optimisation in this area is that an SSD might not even erase the old block immediately, but rather make a note to erase it later when things aren’t so busy. The end result is that SSDs fit very well with a CoW filesystem and don’t perform as well with non-CoW filesystems.

To make matters interesting, CoW in the filesystem easily goes hand in hand with a feature called deduplication. This allows two (or more) identical blocks of data to be stored using only a single copy, saving space. With CoW, if a deduplicated file is modified, the separate twin won’t be affected as the modified file’s data will have been written to a different physical block.

CoW in turn makes snapshotting relatively easy to implement. When a snapshot is made the system merely records the new snapshot as being a duplication of all data and metadata within the volume. With CoW, when changes are made, the snapshot’s data stays intact, and a consistent view of the filesystem’s status at the time the snapshot was made can be maintained.

A new friend

With the above in mind, especially as Ubuntu has made btrfs available as an install-time option, I figured it would be a good time to dive into btrfs and explore a little. :)

Part 2 coming soon …

by Tricky at 06 August 2013 05:43 PM

fsck progress bar for ext4

I had a power outage affect my server’s large md RAID array. Rather than let the server as a whole be down while waiting for it to complete an fsck, I had it boot without the large array so I could run the fsck manually.

However, when running it manually I realised I had no way of knowing how far along it was or how long it would take to complete – especially problematic with such a large array. With a little searching I found the tip of adding the -C parameter when calling fsck. I couldn’t find this in the documentation, however: fsck --help showed no such option.

The option turns out to be ext4-specific, which is why fsck itself doesn’t document it; it shows a perfectly functional progress bar with a percentage indicator. To find the information, instead of “fsck --help” or “man fsck”, you have to run “fsck.ext4 --help” or “man fsck.ext4”. :)
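To see the flag in action without risking a real array, you can try it against a throwaway image file (the filename is arbitrary; -C 0 sends the progress indicator to stdout):

```shell
# Make a scratch ext4 image and fsck it with a progress bar.
truncate -s 64M scratch.img
mke2fs -t ext4 -q -F scratch.img
fsck.ext4 -f -C 0 scratch.img   # -f forces a full check even on a clean fs
```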

by Tricky at 06 August 2013 04:47 PM