come on down to clug park and meet some geeks online

11 December 2017

Johann Botha (joe)

Quick Update

Ghee in my tea, and three Clifton sunsets…

Volvo Ocean Race
  • Week of 4 to 10 December.
  • Monday, tea, backups, worked at home while Mia watched the first Lord of the Rings movie – not my idea, we found Mia some new shoes, podcast drive to drop Mia off at a friend’s party, found a 10m measuring tape for Mia’s shot put hobby, nap, cooked some broccoli and kefir shots with Heleen.
  • Raise resourceful children.
  • “Stupid is normal.”

  • Tuesday, gym, office, clicked around shipping tracking websites, showed somebody Jacques’ car, software testing, got some new GPON toys, found some ghee, Kauai Vitality voucher breakfast for supper (was out of ideas).
  • I’m still on the 6g VitC per day plan. I think it might be working.
  • Wednesday, office, Clifton 1st sunset with Georg, Dan and Heleen, quick promenade walk to get 10k steps.
  • Listen to The Polybius Conspiracy.
  • How many companies could you still start, with the energy / health-span you have?
  • One good reason for longevity… I figure it takes most people until around 35 to find and fix themselves.
  • Thursday, ghee in my TiTea, gym, office, wrote a newsletter, Clifton 1st sunset with Georg, Cath and Heleen – great swimming water temperature, burger in a bowl at Da Vinci’s, vegan ice cream, in bed a bit late.
  • “Bitcoin isn’t a bubble. It’s the pin.” — Simon Dingle

  • Friday, office, dev meeting, Jacques fetched a server, extended Clifton 1st visit with Georg, Cath and Heleen, stayed on the beach till well after dark, Mojo market, time well spent.
  • Keep learning and researching. Things seem much less intimidating if you encounter them a 2nd time.
  • Saturday, breakfast at La Vie, started a broccoli sprouting project with Heleen, lunch at Hemelhuijs with Paul, Darryl and Felix – very good – have not seen Felix since aB 2015, walked around De Waterkant, wine tour with Heleen, drove to Beau Constantia – which was a bit lame, Buitenverwachting – which was very nice, Chapmans Peak Hotel visit, we made some gluten-free wraps, 11h sleep.
  • Sunday, up early, gym, breakfast with Jacques at Harvest Cafe in Bo Kaap, we installed Debian 9.3 on a quad core Xeon server with 32GB memory – NVMe drives are crazy fast, braai with Wouter, Rene and Alex, fell asleep learning about SR-IOV.
  • Page 701 is going to be exciting. Listen to the Tim Urban interview with Tim Ferriss.
  • Tune of the week: The Killers – Run For Cover

Have a fun week, crazy kids.

by joe at 11 December 2017 01:58 PM

04 December 2017

Christel Breedt (Pirogoeth)

THE SPOONIE DIET


Really excited!!

Ok peeps, so I thought I should get into this food blogging thing, and since yesterday was grocery shopping day, I thought I'd do a beginner's guide.

Here's how to eat like a spoonie:

Day 1: A perfect meal.

Cook from scratch and freeze leftovers.

Congratulations!

You may now sleep like the dead.



Day 2: PAIN!!!

Use your frozen veg to ice your joints.

Forget to set the timer on your roasted chicken due to brain fog.

Enjoy a crispy Poulet Noir.


Day 3: Freezer food.

Kitchen still smells like burnt chicken.

Wander around aimlessly for twenty minutes wondering why you were in the kitchen before realising it's already dinner time.

But wait! We have leftovers! The system works!



Day 4: It'll do...

Forgot to defrost leftovers.

Toss two chicken breasts and some rice in the same bowl of water in the microwave.

Cook until the rice doesn't break your teeth and serve with barbecue spice.



Day 5: Leftovers again

Chug it down dutifully. Remember to chew.

Alarm reminds you to defrost leftovers.

Instead of refrigerating them afterwards, you forget them on the counter by the microwave.


Day 6: Uh oh...

Leftovers don't smell right.

a) Risk food poisoning, or
b) Eat a hunk of cheese, a carrot & a tomato.

Tomorrow is another day.



Day 7: Surprise Flare!

Admit that no cooking or shopping is happening because you can't walk.

a) Get takeout, or
b) Eat forbidden foods from the stash of shame.



by Whizper (noreply@blogger.com) at 04 December 2017 12:32 PM

Johann Botha (joe)

How To Browse The Web

Things you must do before you browse the web…

Step 1) Install Firefox. You want an independent browser. No evil empires.

If you are unsure about Step 1, watch an Enron documentary or two, and then go for a long walk.

Step 2) Install some plugins for Firefox. You need:

a) Privacy Badger, from the EFF. You know who the EFF are, right?

b) HTTPS Everywhere. Also EFF, and Aaron Swartz was involved in this plugin; you know who that is, right? Watch: The Internet’s Own Boy.

c) uBlock Origin. Because the brightest minds of our generation are building “AdTech”, yet ads are _still_ lame, and because 0.1% of humans (muppets) click on ads, we have to endure this foolish torture, or just block them.

d) NoScript. Because Google “tag manager” is just spyware, and you don’t want people running evil javascript in your browser, trying to sell you things you don’t need.
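
As a complement to the plugins, a few of Firefox’s built-in privacy preferences are worth setting via a user.js file in your profile directory. A minimal sketch (preference names and defaults change between releases, so verify each one in about:config before relying on it):

// Hedged sketch: confirm these preference names in about:config first.
user_pref("privacy.trackingprotection.enabled", true); // built-in tracking protection
user_pref("privacy.donottrackheader.enabled", true);   // send the Do Not Track header
user_pref("network.cookie.cookieBehavior", 1);         // block third-party cookies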

Step 3) Read Social Media Sanity and Use This Tech.

by joe at 04 December 2017 07:45 AM

Quick Update

Three Riddick movies and high doses of vitamin C…

Sea Point Moon
  • Week of 27 November to 3 December.
  • Health week 4/4.
  • Monday, skipped a gym day, walked to the office, late afternoon Clifton 1st session with Georg and Heleen – swimming and some aerobie playing, Hussar steak with Georg.
  • I’m experimenting with 6g VitC and 6g Lysine per day – Pauling method. Cholesterol hacking takes time. 3 month cycles.
  • Tuesday, office, dev meeting, lunchtime smoothie podcast walk, nap, reading and watched some Open/R videos.
  • Are your Tuesdays happy?
  • Worth a read: things which lower testosterone. I’m on 581 ng/dL total Testosterone at the moment. Not ideal.
  • I went full hippie and now use organic shampoo and conditioner.
  • Being over 40 is all about avoiding chronic inflammation and muscle loss.
  • Wednesday, walked to gym, podcast walk to the office, some sysadmin tasks, backups, photo processing, new major version of the Octotel automation system went live, I took 10g of VitC in total through the day.
  • The JP diet. Maybe Jacques will go gluten free now.
  • Businesses run for love. Love the businesses you build.
  • “Cattle, not pets.” — DevOps

  • Thursday, mobility routine, walked to the office, data migration day, vitB shot, hardware shopping list refactoring at La Vie with a glass of Chardonnay, in bed early.
  • Friday, just in time hardware shopping before Jacques arrives back from London, gym, fetched Mia from her last day of school for 2017 and had a nice lunch at Vergelegen to celebrate, afternoon at Dirk’s house – we chatted about Instagram for anti-social people, just in time visit to the Waterfront VIP cinemas to watch Justice League – which I thought was rather lame, but Mia seemed to like it, epic weather, walked around the Volvo Ocean Race exhibition again and let Mia do the excavator challenge, I finished it in 48sec this time, fish and calamari for supper, vegan ice cream, in bed early, time well spent.
  • “acting like the Doomsday Clock has a snooze button.” — Justice League

  • Context before conversation. Reverse polish notation.
  • Saturday, backups and reading – nerdy rabbit hole as usual, brunch at Blue Cafe in Tamboerskloof with Mia – the full monty with kefir shots, we watched 12 Monkeys – Mia enjoyed it, cooked Mia some broccoli.
  • I’ve consumed at least 200g of collagen in the last 30 days.
  • Sunday, made Mia some breakfast and watched Pitch Black, lunch with Georg and Mia at Manga in Mouille Point – the food was pretty average, but the dessert was rad, watched The Chronicles of Riddick with a Famous Grouse, braai at home, wrote a blog post about How To Browse The Web, watched Riddick, sunset promenade walk with Mia, we watched The Prestige – great Nolan movie – Mia was somewhat confused, but it was cool to see her work it out.
  • We’re making slow progress with the Geeky Movies list.
  • Cool little film about happiness.

Have a fun week, crazy kids.

by joe at 04 December 2017 06:36 AM

24 November 2017

Christel Breedt (Pirogoeth)

#NotAllChristiansHate


Today I watched this talk. I felt like I was hearing my inner monologue from a decade ago. I was moved to write about my feelings, especially in the current world climate of hate masquerading as religious piety.

We are living in a new age of persecution by the Christian faith. You heard me right.

There is a dominant narrative in a large number of modern denominations, born from a school of doctrines that has popularised the following ideas that Christians use to justify oppressing others:

1. Spare the rod: The belief that physical violence is a valid and even essential tool for appropriate parenting. This teaches that you are not allowed sanctity of your person if you disobey a dogmatic rule set by an authority figure, regardless of your opinions on the rule. This attitude later justifies to Christians why violating the bodies of nonbelievers is a valid form of care and love.

2. Demons can possess people; it is an affliction that affects their ability to conform to dogma, and it is your duty as a fellow believer to liberate them, by force if necessary. This line of reasoning teaches that different beliefs are a disease in need of a cure, and the cure may be administered involuntarily because the demon is in control and thus the person is not fully human.

3. Demon possession is catching. By associating with the possessed, you make yourself vulnerable to possession and familial disaster. This has two effects: it teaches believers to other divergent thinkers as possessed, thus efficiently dehumanising them, and it invokes the aforementioned notion of forced treatment for their affliction but elevates it to a group level.

4. We must liberate the world from the yoke of Satan. This neatly corrals all divergent thinkers (and to dogmatic Christians EVERYONE not of their faith is divergent) into a position of infantilisation to the parental love of Christ and his "bride", the church. And since we've already established you shan't spare the rod...

It's easy to see how some would then take this rationale and extend it to include rape, assault, even murder or genocide. If the Stanford prison experiment taught us anything it is that a few bad rules can make a monster out of anyone.

I watched her talk on cults and the abuse she experienced and it hit me like a gut punch.

This was my life.

This was my partner's life too.

Divergence led to othering because of perceived satanic influence and this later justified abuse. Her eventual motive for rejecting conservative Christian doctrine pretty much matches my experience: That's not love.

My father is ultra conservative. It is commonly agreed by him and my grandmother that I am deranged, and that my beliefs result from demon possession.

When I stood my ground against his advocacy for the revocation of abortion rights and transphobic propaganda he became blatant in his attempt to force me to accept his abusive worldview or risk the relationship.

I chose to walk away.

I love my father. I miss debating art and philosophy with him. He loved discussing maths and science. I miss being held by him, his scruffy beard and sharp wit. I miss arguing with him...just not the arguments about dogma.

If you ask me to choose between two people I love I will not choose the one who made me choose. It's that simple.

If your doctrine teaches a separation from sinners as an instruction from god, if it teaches humiliation as punishment and excommunication, the use of the rod, if it demands unquestioning faith and obedience to power and accepts violence as a normal experience in families, if it encourages coerced marriages and shames difference or independent thought, then I reject your dogma.

If it is a choice between your god or my sinful friends, I choose my friends because it was your god who made me choose between loving Him and loving everyone else.

If God is love, then let there be love or I reject your hypocritical god.

I will not respect your religious rights if they remove the right of others to choose to reject your doctrines and morality. So if you cry to me about your religious freedom while believing this, you can cry me a river. I will not protect your faith.

Religious freedom is the freedom to choose your practice of faith. Any action that impedes this freedom is a violence against an individual's right to autonomy.

It is incompatible with a society free of war, and cannot be thought of as ethical when viewed through the moral value common to the overwhelming majority of cultures: that we generally abhor violence towards others and should avoid it.

So no, you do not get to hide your hate under the veil of religious freedom or, for that matter, free speech (but that's another rant...) with me.

I will give you no quarter. No excuse for abuse.

Christians, you need to put your house in order. Take the beam from thine own eye. If you are in a church where these doctrines are accepted and can safely challenge this, speak out.

If you are silent, know that in the words of Archbishop Tutu you have taken the side of the oppressor.

Show the world that #notallchristians hate.

Regards,

Someone who read the Bible,
Cover to cover,
But got stuck on Corinthians 13.

PS. I'm agnostic now, so I've no horse in this race, as it were. So don't go presumptuously frothing at the mouth about how I'm just some Dawkinsian verbally abusive atheist who denounces all religious practice. You don't know me.


by Whizper (noreply@blogger.com) at 24 November 2017 06:53 PM

22 November 2017

Graham Poulter (verdant)

Elements of Narrative Improv

This post provides a paraphrased summary of the elements of Narrative Improv as laid out by Parallelogramophonograph or "Pgraph" in their excellent little book, "Do It Now: Essays on Narrative Improv."

Narrative Improv is a form of improvised theatre in which the group creates a full-length play that results in a more-or-less coherent story that runs all the way from "once upon a time" through to the "happily ever after" (or not so happily, if you like). It's a flexible form that stands in contrast to both highly structured forms like the Armando or Harold, and to the unstructured "montage" improv of somewhat-connected scenes with once-off characters.

I recently took a class on Narrative Improv with Neil Curran of Lower The Tone (based in Dublin, Ireland), and our group "The Players" has been doing some shows. You'll want to take such a class or buy the book yourself; the rest of this post then serves as a handy reminder. What follows is mostly a paraphrase from the "What comes next?" section on page 73 of "Do It Now" with a few insertions of my own based on the rest of the book. Unfortunately there's no eBook, just the tree version. Without further ado, the elements of narrative improv:

The story spine
Once Upon A Time ... And Every Day ... Until One Day ... And Because Of That ... And Because Of That (repeat to taste) ... Until Finally ... And Ever Since Then. Virtually any plot can be arranged or rearranged into a story spine form, and you can practice making story spines in a group exercise where each person in turn adds an element to the story.

Invest in normalcy
It's almost impossible to spend too much time on the "once upon a time ... and every day" part of the show where everything that happens is stuff that always happens. Even if two characters start in a sword fight, that's just something they do every day. Normalcy builds the world, the characters, the relationships. Take up multiple scenes, a large fraction of your show, just showing different parts of the everyday world of these people. If someone brings in plot too soon just act like their proposed plot is something that happens all the time. One day, when the "One Day" finally happens, you will have loads of material to work with!

Identify the protagonist
Around the "Until One Day" point, different characters will be making offers of goals and desires. As a group you gradually home in with a "spotlight" on different characters to settle on one with a strong offer, who will become the protagonist for the rest of the story. Give the protagonist a strong and simple desire - even if it's just to get their rug back (it really tied the room together!). Then the rest of the story becomes scenes that either help or hurt the protagonist in pursuit of their goal.

Make scenes have clear consequences
More than "making bold choices", by giving actions and scenes clear, strong consequences, the "And Because Of That" will come naturally, and you get a feel for where the story is going. This frees you from "oh no where are we going" scrambling for the next step, giving you space to have playful moments in your scenes.

Be clunky but clear
Be overt and clear about the details of scenes and relationships, particularly early in your group's development. It's the opposite of being vague and hoping that the others figure out what you're getting at or fill in the gaps. "My dear brother Kevin, what a dank cave this is!" comes out clunky but makes the relationship and location clear! Subtlety will develop over time.

Explore a range of acting styles and tones
Deliberately try out new genres, and new acting styles from everyday realism, to stage realism, to over-the-top stylized acting. Try out new character tropes from movies and theatre. These deepen your stories and make your shows distinct, taking your improv to new places.

Pgraph rounds it off by saying to play, practice and enjoy getting it horribly wrong - that's how you learn! - and revel in getting it right. If you've done narrative improv I hope these reminders help, and if you haven't, I hope it inspires you to get the book and take a class!

by Graham Poulter (noreply@blogger.com) at 22 November 2017 09:49 PM

20 November 2017

Jonathan Carter (highvoltage)

New powerline goodies in Debian

About powerline

Powerline does some font substitutions that allow additional theming for terminal applications such as tmux, vim, zsh, bash and more. The powerline font has been packaged in Debian for a while now, and I’ve packaged two powerline themes for vim and zsh. They’re currently only in testing, but once my current package todo list looks better, I’ll upload them to stretch-backports.

For vim, vim-airline

vim-airline is different from previous vim powerline plugins in that it doesn’t depend on perl or python; it’s implemented purely in vim config files.

Demo

Here’s a gif from the upstream site; they also demo various themes there that you can get in Debian by installing the vim-airline-themes package.

Vim Airline demo gif

How to enable

Install the vim-airline package, and add the following to your .vimrc file:

" Vim Airline theme
let g:airline_theme='powerlineish'
let g:airline_powerline_fonts = 1
set laststatus=2

The vim-airline-themes package contains additional themes that can be defined in the snippet above.

For zsh, powerlevel9k

Demo

Here’s a gif from upstream that walks through some of its features. You can configure it to display all kinds of system metrics and also information about VCS status in your current directory.

Powerline demo gif

Powerlevel9k has lots of options and features. If you’re interested in it, you should probably take a look at their readme file on GitHub for all the details.

How to enable

Install the zsh-theme-powerlevel9k package and add the following to your .zshrc file:

source /usr/share/powerlevel9k/powerlevel9k.zsh-theme
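
If you want to choose which prompt segments are shown, set the segment arrays before sourcing the theme. A small sketch (segment names as per the powerlevel9k readme; check it for the full list):

# Directory and VCS info on the left; exit status and time on the right.
POWERLEVEL9K_LEFT_PROMPT_ELEMENTS=(dir vcs)
POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS=(status time)
source /usr/share/powerlevel9k/powerlevel9k.zsh-theme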

by jonathan at 20 November 2017 07:22 PM

17 November 2017

Jonathan Carter (highvoltage)

I am now a Debian Developer

It finally happened

On the 6th of April 2017, I finally took the plunge and applied for Debian Developer status. On 1 August, during DebConf in Montréal, my application was approved. If you’re paying attention to the dates you might notice that that was nearly 4 months ago already. I was trying to write a story about how it came to be, but it ended up long. Really long (current draft is around 20 times longer than this entire post). So I decided I’d rather do a proper bio page one day and just do a super short version for now so that someone might end up actually reading it.

How it started

In 1999… no wait, I can’t start there, as much as I want to, this is a short post, so… In 2003, I started doing some contract work for the Shuttleworth Foundation. I was interested in collaborating with them on tuXlabs, a project to get Linux computers into schools. For the few months before that, I was mostly using SuSE Linux. The open source team at the Shuttleworth Foundation all used Debian though, which seemed like a bizarre choice to me since everything in Debian was really old and its “boot-floppies” installer program kept crashing on my very vanilla computers. 

SLUG (Schools Linux Users Group) group photo. SLUG was founded to support the tuXlab schools that ran Linux.

My contract work then later turned into a full-time job there. This was a big deal for me, because I didn’t want to support Windows ever again, and I didn’t ever think that it would even be possible for me to get a job where I could work on free software full time. Since everyone in my team used Debian, I thought that I should probably give it another try. I did, and I hated it. One morning I went to talk to my manager, Thomas Black, and told him that I just don’t get it and I need some help. Thomas was a big mentor to me during this phase. He told me that I should try upgrading to testing, which I did, and somehow I ended up on unstable, and I loved it. Before that I used to subscribe to a website called “freshmeat” that listed new releases of upstream software and then, I would download and compile it myself so that I always had the newest versions of everything. Debian unstable made that whole process obsolete, and I became a huge fan of it. Early on I also hit a problem where two packages tried to install the same file, and I was delighted to find how easily I could find package state and maintainer scripts and fix them to get my system going again.

Thomas told me that anyone could become a Debian Developer and maintain packages in Debian and that I should check it out and joked that maybe I could eventually snap up “highvoltage@debian.org”. I just laughed because back then you might as well have told me that I could run for president of the United States, it really felt like something rather far-fetched and unobtainable at that point, but the seed was planted :)

Ubuntu and beyond

Ubuntu 4.10 default desktop – Image from distrowatch

One day, Thomas told me that Mark is planning to provide official support for Debian unstable. The details were sparse, but this was still exciting news. A few months later Thomas gave me a CD with just “warty” written on it and said that I should install it on a server so that we can try it out. It was great, it used the new debian-installer and installed fine everywhere I tried it, and the software was nice and fresh. Later Thomas told me that this system is going to be called “Ubuntu” and the desktop edition has naked people on it. I wasn’t sure what he meant and was kind of dumbfounded so I just laughed and said something like “Uh ok”. At least it made a lot more sense when I finally saw the desktop pre-release version and when it got the byline “Linux for Human Beings”. Fun fact, one of my first jobs at the foundation was to register the ubuntu.com domain name. Unfortunately I found it was already owned by a domain squatter and it was eventually handled by legal.

Closer to Ubuntu’s first release, Mark brought a whole bunch of Debian developers who were working on Ubuntu over to the foundation, and they were around for a few days getting some sun. Thomas kept saying “Go talk to them! Go talk to them!”, but I felt so intimidated by them that I couldn’t even bring myself to walk up and say hello.

In the interest of keeping this short, I’m leaving out a lot of history but later on, I read through the Debian packaging policy and really started getting into packaging and also discovered Daniel Holbach’s packaging tutorials on YouTube. These helped me tremendously. Some day (hopefully soon), I’d like to do a similar video series that might help a new generation of packagers.

I’ve also been following DebConf online since DebConf 7, which was incredibly educational for me. Little did I know that just 5 years later I would even attend one, and another 5 years after that I’d end up being on the DebConf Committee and have also already been on a local team for one.

DebConf16 Organisers, Photo by Jurie Senekal.

It’s been a long journey for me and I would like to help anyone who is also interested in becoming a Debian maintainer or developer. If you ever need help with your package, upload it to https://mentors.debian.net and if I have some spare time I’ll certainly help you out and sponsor an upload. Thanks to everyone who has helped me along the way, I really appreciate it!

by jonathan at 17 November 2017 05:48 PM

13 November 2017

Christel Breedt (Pirogoeth)

About PMDD

About a month ago I started both Concerta for ADHD and Zoladex for PMDD/Endometriosis.

Gotta tell ya, this has been a trippy month...

Everything was fantastic. I was doing stuff, organised, I was just rocking round the clock...

Then I ovulated and all hell broke loose.

For those unfamiliar with the disorder, you may want to Google PMDD. It's PMS - but instead of just moody or emotional, you're psychotic (literally) and suicidal or violent. AKA clinically insane. Until you start to bleed.

The heaviest bleeding of my period has always been the day my partner and I know it's going to be ok again.

This month was different. I hope that means the meds are working.

It wasn't as bad as usual...but that was sorta worse from a certain perspective because I realise now I'm usually so far off the reservation I don't KNOW how bad it is.

Before all this I was flying high. I felt fantastic. Life without stimulants for my ADHD was HELL. Getting the meds I needed was liberating.

I was in a flawless routine. Up and at it by 8:30. I was eating, sleeping, working like a well adjusted adult for once. I was ticking off items on my task list so fast I was running out of to do's, a completely novel experience for me.

Then I ovulated.

Day one of PMDD, everything I was shattered like glass. I stopped eating, cooking, bathing, moving, talking, working. My sleep cycle isn't. I wake up or pass out seemingly at random. Reality seems abstract. People feel like NPCs in an MMORPG. I don't know what day it is without checking the calendar on my phone. I lost a day twice in a week - I can't remember what I did that day at all.

Everything hurts. I'm sorta limply trying to institute my usual chronic illness coping strategies but I keep finding myself doing something else, usually hours later, super confused about how I got so off track.

There are times, like now, when I'm lucid. I try to write or tell people about it. I guess I'm trying to take field notes for my research, or warn people of what's going on. But twice I've found half-written essays in text on my phone.

This piece was discovered after I passed out around one today:

"I haven't eaten. I was awake until 4am, then slept an hour and was awake again by 6.

I have a back spasm and I keep catching my jaw clenching. I think that's how I passed out earlier - I took a tranquilliser for the back spasm. How many? I can't remember.

I should eat. I never defrosted anything. There's fish fingers. Been saving those for days like today when I fail to function. Hurrah. The system works!

I'll be back shortly. Fooooooood..."

The next day:

"I made fish fingers in the oven. Twice. The first time I forgot to turn the oven on, lol."

Four hours later I was asleep again. I wrote this the next morning:

"I lost another day. I was awake for 4 hours, slept for 8, woke up tired and throbbing with anxiety.

Things are more sore than usual. Burning hands, random sharp pains in my legs.

If I take another Rivotril I may as well write off the next week too - the hangover at this kind of dose lasts days. Damnit I have things to do! You know! Like survive!

What is happening physiologically?

I wonder. If I can understand what's causing all these secondary issues maybe I can compensate. Many sites talk about Progesterone intolerance. Seems consistent with my case. Would explain why going on birth control at 14 coincided with me getting more nuts, and the disaster that was depo provera.

Thank goodness I resisted doing that again. Sometimes I do get it right on instinct - I never had a hard science reason for saying no, I just felt it would be bad."

It's tough when all you have is instinct. People don't respect your intuition. Guess that's how I ended up in this mess. I never had a good enough explanation for why I said there was something wrong with me.

It's the hardest thing about this: being treated like the girl who cries wolf.

I've been judged a lot. Too lazy. Too selfish. Too dependent. Unwilling to commit. Defiant. Martyr complex. Histrionic. Hypochondriac. Arrogant. Stubborn. Superstitious. Inflexible. A bitch. Borderline. Manipulative.

Really sick day in and day out isn't something people understand. They can't conceive of it.

I think people just have a tolerance limit for drama...and if your life sucks more than they can deal with they bounce.

Can't blame them. I've had a lifetime to get used to this high octane level of ridiculously improbable bad luck. I don't even see it anymore. I was diagnosed as really seriously challenged at 4.

It's truly still surprising to me when I post something I think of as dark but funny online, e.g. "When your endometriosis is so bad they put you on chemo...:/", and people respond like my mum died.

That was supposed to be like a little funny...I mean it's not actually chemo, they just called it that, it's a hormone treatment....oh whatever. "Thanks for the sympathies," I finally end up saying instead, feeling like a tool for making people worry so much.

I forget it's a surprise to them that it's THAT bad. You get to the point of being insensitive to how really very strange it all is when it's your daily life.

I don't feel like I'm a downer, but I guess I am. It's like having cancer. Just admitting you are ill reminds people of this terrible truth they feel they should be tiptoeing around, bowing to, like they need to dress in black, cover mirrors, speak in hushed tones, bring flowers to your grave...

And I'm like, dude, I'm right here. Just chill. If I need a black parade I'll show up dressed like Wednesday Addams. If I'm not crying about it, you don't need to be on your toes about it with me either. I want to laugh about this. It helps.

I dunno what to say. What do I need from people? Why did I write this?

Maybe just for others like me. It's been the only thing that made me feel less nuts - reading about how others felt.

So yeah man, you peeps aren't alone. I get you. *Hugs*


by Whizper (noreply@blogger.com) at 13 November 2017 07:33 AM

08 August 2017

Adrian Frith (htonl)

Linguistic diversity map of South Africa

The linguistic diversity index measures the probability that two people selected at random from a population speak different home languages. The map below, which I produced, depicts the linguistic diversity index calculated on a 10-kilometre-wide hexagonal grid across South Africa.

[Map: the linguistic diversity index calculated on a 10-kilometre-wide hexagonal grid across South Africa; click to enlarge]
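
The definition above is the standard (Greenberg) formulation: one minus the sum of the squared population shares of each language. A minimal sketch of the calculation, with made-up shares rather than census figures:

def diversity_index(shares):
    # Probability that two randomly chosen people have different home
    # languages, given each language's share of the population.
    return 1 - sum(p * p for p in shares)

print(diversity_index([0.5, 0.3, 0.2]))  # 0.62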

08 August 2017 10:00 PM

24 July 2017

Jonathan Carter (highvoltage)

Plans for DebCamp17

In a few days, I’ll be attending DebCamp17 in Montréal, Canada. Here are a few things I plan to pay attention to:

  • Calamares: My main goal is to get Calamares in a great state for inclusion in the archives, along with sane default configuration for Debian (calamares-settings-debian) that can easily be adapted for derivatives. Calamares itself is already looking good and might make it into the archives before DebCamp even starts, but the settings package will still need some work.
  • Gnome Shell Extensions: During the stretch development cycle I made great progress in packaging some of the most popular shell extensions, but there are a few more that I’d like to get in for buster: apt-update-indicator, dash-to-panel (done), proxy-switcher, tilix-dropdown, tilix-shortcut
  • zram-tools: Fix a few remaining issues in zram-tools and get it packaged into the archive.
  • DebConf Committee: Since January, I’ve been a delegate on the DebConf committee, and I’m hoping that we get some time to catch up in person before DebConf starts. We’ve been working on some issues together recently, and so far we’ve worked together really well. We’re working on keeping DebConf organisation stable and improving community processes, and I hope that by the end of DebConf we’ll have some proposals that will prevent some recurring problems and also help mend old wounds from previous years.
  • ISO Image Writer: I plan to try out Jonathan Riddell’s iso image writer tool and check whether it works with the use cases I’m interested in (Debian install and live media, images created with Calamares, boots UEFI/BIOS on optical and usb media). If it does what I want, I’ll probably package it too if Jonathan Riddell hasn’t gotten to it yet.
  • Hashcheck: Kyle Robertze wrote a tool called Hashcheck for checking install media checksums from a live environment. If he gets a break during DebCamp from video team stuff, I’m hoping we can look at some improvements for it and also getting it packaged in Debian.

by jonathan at 24 July 2017 09:08 AM

25 September 2016

Michael Gorven (cocooncrash)

XBMC for Raspberry Pi

This page describes how to install XBMC on a Raspberry Pi running Raspbian. You can either install packages on an existing Raspbian installation, or you can download a prebuilt image and flash it to an SD card.

Installing packages on an existing installation

I've published a Debian archive containing packages for Kodi/XBMC and some dependencies which it requires. This can be set up on an existing Raspbian installation (including the foundation image).

Installing

The easiest way to install the package is to add my archive to your system. To do this, store the following in /etc/apt/sources.list.d/mene.list:

deb http://archive.mene.za.net/raspbian wheezy contrib

and import the archive signing key:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key 5243CDED

Then update the package lists:

sudo apt-get update

You can then install it as you would with any other package, for example, with apt-get:

sudo apt-get install kodi

The user you're going to run Kodi as needs to be a member of the following groups:

audio video input dialout plugdev tty
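
You can add them all in one go with usermod; a sketch assuming your user is "pi" (substitute your own username; if the input group doesn't exist yet, create it as shown below first):

sudo usermod -a -G audio,video,input,dialout,plugdev,tty pi

Log out and back in for the new group memberships to take effect.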

If the input group doesn't exist, you need to create it:

addgroup --system input

and set up some udev rules to grant it ownership of input devices (otherwise the keyboard won't work in Kodi), by placing the following in /etc/udev/rules.d/99-input.rules:

SUBSYSTEM=="input", GROUP="input", MODE="0660"
KERNEL=="tty[0-9]*", GROUP="tty", MODE="0660"

The GPU needs at least 96M of RAM in order for XBMC to run. To configure this add or change this line in /boot/config.txt:

gpu_mem=128

You will need to reboot if you changed this value.

Running

To run XBMC, run kodi-standalone from a VT (i.e. not under X). XBMC accesses the display directly and not via Xorg.

If you want Kodi to automatically start when the system boots, edit /etc/default/kodi and change ENABLED to 1:

ENABLED=1

Run sudo service kodi start to test this.

Release history

  • 15.2-2: Isengard 15.2 release, and most PVR addons.
  • 14.2-1: Helix 14.2 release.
  • 14.1-1: Helix 14.1 release.
  • 14.0-1: Helix 14.0 release.
  • 13.1-2: Link to libshairplay for better AirPlay support.
  • 13.1-1: Gotham 13.1 release.
  • 12.3-1: Frodo 12.3 release.
  • 12.2-1: Frodo 12.2 release.
  • 12.1-1: Frodo 12.1 release. Requires newer libcec (also in my archive).
  • 12.0-1: Frodo 12.0 release. This build requires newer firmware than the raspberrypi.org archive or image contains. Either install the packages from the raspberrypi.org untested archive, the twolife archive or use rpi-update. (Not necessary as of 2013/02/11.)

Flashing an SD card with a prebuilt image

I've built an image containing a Raspbian system with the XBMC packages which you can download and flash to an SD card. You'll need a 1G SD card (which will be completely wiped).

Flashing

Decompress the image using unxz:

% unxz raspbian-xbmc-20121029.img.xz

And then copy the image to the SD card device (make sure that you pick the correct device name!)

% sudo cp raspbian-xbmc-20121029.img /dev/sdb

Customising

The image uses the same credentials as the foundation image, username "pi" and password "raspberry". You can use the raspi-config tool to expand the root filesystem, enable overclocking, and various other configuration tasks.

Updating

Both Raspbian and Kodi can be updated using normal Debian mechanisms such as apt-get:

# sudo apt-get update
# sudo apt-get dist-upgrade

Unstable versions

I've started building packages for the upcoming Jarvis release. These are in the new unstable section of the archive. To install these packages, update your sources list entry to look like this:

deb http://archive.mene.za.net/raspbian wheezy contrib unstable

Release history

  • 16.1-1: Jarvis 16.1
  • 16.0-1: Jarvis 16.0
  • 16.0~git20151213.a724f29-1: Jarvis 16.0 Beta 4
  • 15.2-2: Isengard 15.2 with packaging changes to support PVR addons, and most PVR addons.
  • 15.2-1: Isengard 15.2
  • 15.1-1: Isengard 15.1
  • 15.0-1: Isengard 15.0
  • 15.0~git20150702.9ff25f8-1: Isengard 15.0 RC 1.
  • 15.0~git20150501.d1a2c33-1: Isengard 15.0 Beta 1.
  • 14.2-1: Helix 14.2 release.
  • 14.1-1: Helix 14.1 release.
  • 14.0-1: Helix 14.0 release.
  • 14.0~git20141203.35b4f38-1: Helix 14.0 RC 2
  • 14.0~git20141130.ea20b83-1: Helix 14.0 RC 1
  • 14.0~git20141125.4465fbf-1: Helix 14.0 Beta 5
  • 14.0~git20141124.ec361ca-1: Helix 14.0 Beta 4
  • 14.0~git20141116.88a9a44-1: Helix 14.0 Beta 3
  • 14.0~git20141103.d6947be-1: Helix 14.0 Beta 1. This requires firmware as of 2014/10/06 and libcec 2.2.0 (both included in the archive). There are also builds for Jessie but I haven't tested them. PVR addons are also updated.
  • 14.0~git20141002.d2a4ee9-1: Helix 14.0 Alpha 4

by mgorven at 25 September 2016 04:54 AM

06 January 2016

Michael Gorven (cocooncrash)

Memory optimised decompressor for Pebble Classic

TLDR: DEFLATE decompressor in 3K of RAM

For a Pebble app I've been writing, I need to send images from the phone to the watch and cache them in persistent storage on the watch. Since the persistent storage is very limited (and the Bluetooth connection is relatively slow) I need these to be as small as possible, and so my original plan was to use the PNG format and gbitmap_create_from_png_data(). However, I discovered that this function is not supported on the earlier firmware used by the Pebble Classic. Since PNGs are essentially DEFLATE compressed bitmaps, my next approach was to manually compress the bitmap data. This meant that I needed a decompressor implementation ("inflater") on the watch.

The constraint

The major constraint for Pebble watchapps is memory. On the Pebble Classic, apps have 24K of RAM available for the compiled code (.text), global and static variables (.data and .bss) and heap (malloc()). There is an additional 2K for the stack (local variables). The decompressor implementation needed both small code size and small variable usage. I discovered tinf, which seemed to fit the bill, and tried to get it working.

Initially, trying to decompress something simply crashed the app. It took some debug prints to determine that code in tinf_uncompress() wasn't even being executed, and I realised that it was exceeding the 2K stack limit. I changed the TINF_DATA struct to be allocated on the heap to get past this. At this stage it was using 1.2K of .text, 1.4K of .bss, 1K of stack, and 1.2K of heap (total 4.8K). I set about optimising the implementation for memory usage.
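
The fix is the usual "move the big local off the stack" pattern. An illustrative sketch rather than the actual tinf diff (the function names here are made up; TINF_DATA, TINF_OK and TINF_DATA_ERROR come from tinf.h):

#include <stdlib.h>
#include "tinf.h" /* provides TINF_DATA, TINF_OK, TINF_DATA_ERROR */

/* Before: ~1.2K of decompressor state as a local eats most of the 2K stack. */
int inflate_with_stack_state(void)
{
    TINF_DATA d; /* lives on the stack */
    /* ... decompress using &d ... */
    return TINF_OK;
}

/* After: the state lives on the heap; only a pointer is on the stack. */
int inflate_with_heap_state(void)
{
    TINF_DATA *d = malloc(sizeof *d);
    if (d == NULL)
        return TINF_DATA_ERROR;
    /* ... decompress using d ... */
    free(d);
    return TINF_OK;
}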

Huffman trees

Huffman coding is a method to represent frequently used symbols with fewer bits. It uses a tree (otherwise referred to as a dictionary) to convert symbols to bits and vice versa. DEFLATE can use Huffman coding in two modes: dynamic and fixed. In dynamic mode, the compressor constructs an optimal tree based on the data being compressed. This results in the smallest representation of the actual input data; however, it has to include the computed tree in the output in order for a decompressor to know how to decode the data. In some cases the space used to serialise the tree negates the improvement in the input representation. In this case the compressor can use fixed mode, where it uses a static tree defined by the DEFLATE spec. Since the decompressor knows what this static tree is, it doesn't need to be serialised in the output.

The original tinf implementation builds this fixed tree in tinf_init() and caches it in global variables. Whenever it encounters a block using the fixed tree it has the tree immediately available. This makes sense when you have memory to spare, but in this case we can make another tradeoff. Instead we can store the fixed tree in the same space used for the dynamic tree, and rebuild it every time it is needed. This saves 1.2K of .bss at the expense of some additional CPU usage.

The dynamic trees are themselves serialised using Huffman encoding (yo dawg). tinf_decode_trees() needs to first build the code tree used to deserialise the dynamic tree, which the original implementation loads into a local variable on the stack. There is an intermediate step between the code tree and dynamic tree however (the bit length array), and so we can borrow the space for the dynamic tree instead of using a new local variable. This saves 0.6K of stack.

The result

With the stack saving I was able to move the heap allocation back to the stack. (Since the stack memory can't be used for anything else, it's effectively free: using it allows the non-stack memory to be used for something else.) The end result is 1.2K of .text, 0.2K of .bss and 1.6K of stack (total 3.0K), with only 1.4K counting against the 24K limit. That stack usage is pretty tight though (trying to use app_log() inside tinf causes a crash) and is going to depend on the caller using limited stack. My modified implementation will allocate 1.2K on the heap by default, unless you define TINF_NO_MALLOC. Using zlib or gzip adds 0.4K of .text. You can find the code on bitbucket.

by mgorven at 06 January 2016 06:28 AM

31 October 2015

Adrian Frith (htonl)

Historical topographic maps of Cape Town

I’ve made an interactive website with six sets of topographic maps of Cape Town and surrounds covering the period from 1940 to 2010. You can zoom in and move around the maps, switching from one era to another.

31 October 2015 11:00 PM

03 October 2015

Simon Cross (Hodgestar)

Where to from here?

Closing speech at the end of PyConZA 2015.

We’ve reached the end of another PyConZA and I’ve found myself wondering: Where to from here? Conferences generate good ideas, but it’s so easy for daily life to intrude and for ideas to fade and eventually be lost.

We’ve heard about many good things and many bad things during the conference. I’m going to focus on the bad for a moment.

We’ve heard about imposter syndrome, about a need for more diversity, about Django’s flaws as a web framework, about Python’s lack of good concurrency solutions when data needs to be shared, about how much civic information is locked up in scanned PDFs, about how many scientists need to be taught coding, about the difficulty of importing CSV files, about cars being stolen in Johannesburg.

The world is full of things that need fixing.

Do we care enough to fix them?

Ten years ago I’d never coded Python professionally. I’d never been to a Python software conference, or even a user group meeting.

But, I got a bit lucky and took a job at which there were a few pretty good Python developers and some time to spend learning things.

I worked through the Python tutorial. All of it. Then a few years later I worked through all of it again. I read the Python Quick Reference. All of it. It wasn’t that quick.

I started work on a personal Python project. With a friend. I’m still working on it. At first it just read text files into a database. Slowly, it grew a UI. And then DSLs and programmatically generated SQL queries with tens of joins. Then a tiny HTML rendering engine. It’s not finished. We haven’t even released version 1.0. I’m quietly proud of it.

I wrote some games. With friends. The first one was terrible. We knew nothing. But it was about chickens. The second was better. For the third we bit off more than we could chew. The fourth was pretty awesome. The fifth wasn’t too bad.

I changed jobs. I re-learned Java. I changed again and learned Javascript. I thought I was smart enough to handle threading and tons of mutable state. I was wrong. I learned Twisted. I couldn’t figure out what deferreds did. I wrote my own deferred class. Then I threw it away.

I asked the PSF for money to port a library to Python 3. They said yes. The money was enough to pay for pizza. But it was exciting anyway.

We ported another library to Python 3. This one was harder. We fixed bugs in Python. That was hard too. Our patches were accepted. Slowly. Very slowly. In one case, it took three years.

Someone suggested I run PyConZA. I had no idea how little I knew about running conferences, so I said yes. I asked the PSF for permission. They didn’t know how little I knew either, so they said yes too. Luckily, I got guidance and support from people who did. None of them were developers. Somehow, it worked. I suspect mostly because everyone was so excited.

We got amazing international speakers, but the best talk was by a local developer who wasn’t convinced his talk would interest anyone.

I ran PyConZA three more times, because I wasn’t sure how to hand it over to others and I didn’t want it to not happen.

This is my Python journey so far.

All of you are at different places in your own journeys, and if I were to share some advice from mine, it might go as follows:

  • Find people you can learn from
    • … and make time to learn yourself
  • Take the time to master the basics
    • … so few people do
  • Start a project
    • … with a friend(s)
  • Keep learning new things
    • … even if they’re not Python
  • Failure is not going to go away
    • … keep building things anyway
  • Don’t be scared to ask for money
    • … or for support
    • … even from people who aren’t developers
  • Sometimes amazing things come from one clueless person saying, “How hard can it be?”
  • Often success relies mostly on how excited other people are
  • Stuff doesn’t stop being hard
    • … so you’re going to have to care
    • … although what you care about might surprise you.

Who can say where in this complicated journey I changed from novice, to apprentice, to developer, to senior developer? Up close, it’s just a blur of coding and relationships. Of building and learning, and of success and failure.

We are all unique imposters on our separate journeys — our paths not directly comparable — and often wisdom seems largely about shutting up when one doesn’t know the answer.

If all of the broken things are going to get fixed — from diversity to removing the GIL — it’s us that will have to fix them, and it seems unlikely that anyone is going to give us permission or declare us worthy.

Go build something.

by admin at 03 October 2015 03:05 PM

16 August 2015

Simon Cross (Hodgestar)

So what is this roleplaying thing anyway?

I ran a roleplaying module [1] for some friends from work and, after initially neglecting to explain what roleplaying is, I wrote this:

Roleplaying is a form of collaborative storytelling — a group of people gathering to tell a story together. This broad definition covers quite a range of things — one can tell very different kinds of stories and collaborate in very different ways.

What I’m planning to run is called “tabletop roleplaying” [2]. The stories told centre around a group of characters (like the protagonists in a movie). Each person playing is in charge of one of these main characters, except for one person who has no character and instead handles everything that isn’t one of the main characters (they are a bit like the director of a movie).

Tabletop roleplaying is a little like a radio drama — almost everything is done by narrating or speaking in character. You’ll be saying things you want your character to say and describing the actions you want your character to take. Light acting, such as putting on accents or changing tone of voice or changing posture, can be quite fun, but is by no means a requirement.

The “director”, also called the “storyteller” or “DM” [3], describes the situations the main characters find themselves in, decides on the consequences of their actions and takes on the role of minor supporting characters and antagonists (often villains, because they’re exciting). The storyteller is also an arbitrator and a facilitator and attempts to maintain consensus and suitable pacing of the story.

Often you’ll want to have your character attempt an action that might not succeed [4]. For example, they might want to shoot a villain, charm their way past a bouncer, pick a lock or run across a burning bridge. In some cases success will be more likely than others. A character who is good looking or persuasive might find charming the bouncer easy. A character who has never picked up a gun might find shooting the villain hard.

The set of rules used to determine success or failure is called “the system”. The rules might be as simple as “flip a coin” or they might take up a whole book [5]. Dice are a commonly used way of randomly determining results with high numbers typically indicating more successful outcomes and lower numbers less successful ones.

Since the real world is very very complicated, the rules usually model different aspects of it in varying degrees of detail and this often sets the tone of the story to some extent. For example, a system for telling stories about bank robbers in the 1920s might have very detailed rules on vault locks, while a system for telling fantasy stories will likely have special rules for elves and dwarves.

All systems have shortcomings, and when these are encountered it’s usually the storyteller’s job to apply common sense and tweak the outcome accordingly.

The system I’m planning to use is an extremely simplified version of Dungeons & Dragons, 3rd Edition. The full rules run to many books. I’m hoping to explain the few rules we’ll be using in 10-15 minutes.

The story I’m planning to run focuses on a down-on-their-luck rock band about to enter a battle of the bands contest. The twist is that it’s a fantasy setting so there are elves and dwarves and, of course, in the hands of suitably skilled musicians, music is literally magical.

Some practical considerations

Someone has to supply the table to sit around. This is the person hosting the game. Ke can be a player or the storyteller or even uninvolved [6]. Traditionally the host also supplies a kettle and tea or coffee.

Everyone needs to show up and sit at the table, preferably roughly at the same time. This is surprisingly hard.

In order to remain at the table for prolonged periods, one needs things to nibble on. Traditionally people who are not the host bring snacks and drinks of various kinds. Roleplayers seem to take a perverse delight in bringing the unhealthiest snacks they can find, but this is perhaps a tradition best improved on.

Remembering the details of the story can be tricky, so it’s often useful to scribble short notes for oneself. A pen and paper come in handy.

I’ll give each player a couple of pages of information about their character. These are called the “character sheet”. The primary role of the character sheet is to supply margins to scribble notes and doodles in (see above).

It’s likely that time will fly remarkably quickly. If there are six of us, each person will get on average less than ten minutes of “screen time” per hour and probably a lot less given that the storyteller usually uses more than their fair share and there are always distractions and side tracks like discussing the rules or office gossip [7]. If we run out of time, we can always continue the story another day if we’re excited enough.

Lastly, the point is to have fun and tell an interesting story [8].

Glossary

Host: Person who supplies the table, and usually warm beverages like tea and coffee.

Table: Thing one plays at.

Storyteller: The person managing the world the story takes place in, the consequences of players’ actions, and playing the minor characters and antagonists.

Players: The people who are not the storyteller.

Player character: One of the protagonists of the story. Each player has their own player character to narrate.

NPC: Non-player character. All the characters in the story who are not player characters.

RPG: Roleplaying Game. Also rocket-propelled grenade.

System: The rules used to determine the outcomes of risky actions.

Dice: Things one rolls. Usually because one is required to do so by the rules, but often just for fun.

Fun: The point. :)

Footnotes

[1] The module was This is Vörpal Mace. If you’re keen to play it, you can download it from Locustforge.

[2] So called because it usually takes place around a table.

[3] “DM” stands for “Dungeon Master” and is a silly legacy term from the earliest tabletop roleplaying games which mostly focused on a group of heroes running around vast dungeons full of traps and monsters. The storyteller’s role was mostly to invent traps and monsters, hence the title.

[4] Because otherwise the story would be very boring. :)

[5] A whole book is far more common. :P

[6] Although letting six people invade your house for an evening for an activity you’re not involved in requires a special kind of friendship.

[7] One can avoid this time-divided-by-number-of-people limit by having multiple scenes running concurrently. This is a lot of fun, but hell on the storyteller. :)

[8] And it’s easy to lose track of this amongst all the details of playing your character, keeping track of what’s happening and figuring out the rules.

[9] This footnote is not related to anything.

by admin at 16 August 2015 11:06 PM

13 August 2015

Adrian Frith (htonl)

South African provinces as they might have been

The post-apartheid political map of South Africa might well have looked quite different. The Eastern Cape might have been divided into two provinces, with the Kat River and Great Fish River on the boundary. The Northern Cape might not have existed, with the Western Cape meeting North West at the Orange River. Gauteng might have been much bigger – or much smaller. The Western Cape might have stopped south of Citrusdal – or it might have incorporated all of Namaqualand.

13 August 2015 10:00 PM

07 June 2015

Tristan Seligmann (mithrandi)

Adventures in deployment with Propellor, Docker, and Fabric

Preface

After playing around with Docker a bit, I decided that it would make an ideal deployment platform for my work services (previously we had some ad-hoc isolation using unix users and not much else). While Docker’s security is…suspect…compared to a complete virtualization solution (see Xen), I’m not so much worried about complete isolation between my services as I am after things like easy resource limits and imaging. You can build this yourself out of cgroups, chroot, etc., but in the end you’re just reinventing the Docker wheel, so I went with Docker instead.

However, Docker by itself is not a complete solution. You still need some way to configure the Docker host, and you also need to build Docker images, so I added Propellor (which I recently discovered) and Fabric to the mix.

Propellor

Propellor is a configuration management system (in the sense of Puppet, Chef, Salt, et al.) written in Haskell, where your configuration itself is Haskell code. For someone coming from a systems administration background, the flexibility and breadth offered by a real programming language like Haskell may be quite daunting, but as a programmer, I find it far more straightforward to just write code that does what I want, extracting common pieces into functions and so on. Our previous incarnation of things used Puppet for configuration management, but it always felt very awkward to work with; another problem is that Puppet was introduced after a bunch of the infrastructure was in place, meaning a lot of things were not actually managed by Puppet because somebody forgot. Propellor was used to configure a new server from scratch, ensuring that nothing was done ad-hoc, and while I won’t go into too much detail about Propellor, I am liking it a lot so far.

The role of Propellor in the new order is to configure things to provide the expected platform. This includes installing Docker, installing admin user accounts, SSH keys, groups, and so on.

Docker

The Docker workflow I adopted is based on the one described by Glyph. I would strongly recommend you go read his excellent post for the long explanation, but the short version is that instead of building a single container image, you build three: a “build” container used to produce the built artifacts from your sources (eg. Python wheels, Java/Clojure JARs); a “run” container, built by installing the artifacts produced by running the “build” container, and thus not needing to contain your toolchain and -dev packages (keeping the size down); and a “base” container which contains the things shared by the “build” and “run” containers, allowing for even more efficient use of disk space.
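For concreteness, here is a minimal sketch of that workflow (the image and file names are illustrative, not the real Documint ones; the actual commands appear in the Fabric tasks later in this post):

# Build the shared base image, then the build image on top of it.
docker build --tag=example/app-base --file=docker/base.docker .
docker build --tag=example/app-build --file=docker/build.docker .
# Run the build container to produce artifacts (eg. wheels) on the host.
docker run --rm --volume="$PWD:/application" example/app-build
# Build the run image, which installs the artifacts but not the toolchain.
docker build --tag=example/app --file=docker/run.docker .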

While I can’t show the Docker bits for our proprietary codebases, you can see the bits for one of our free software codebases, including instructions for building and running the images. The relative simplicity of the .docker files is no accident; rather than trying to shoehorn any complex build processes into the Docker image build, all of the heavy lifting is done by standard build and install tools (in the case of Documint: apt/dpkg, pip, and setuptools). Following this principle will save you a lot of pain and tears.

Fabric

The steps outlined for building the Docker images are relatively straightforward, but copy/pasting shell command lines from a README into a terminal is still not a great experience. In addition, our developers are typically working from internet connections where downloading multi-megabyte Docker images / packages / etc. is a somewhat tedious experience, and uploading the resulting images is ten times worse (literally ten times worse; my connection at home is 10M down / 1M up ADSL, for example). Rather than doing this locally, this should instead run on one of our servers which has much better connectivity and a stable / well-defined platform configuration (thanks to Propellor). So now the process would be “copy/paste shell command lines from a README into an ssh session” — no thanks. (For comparison, our current deployment processes use some ad-hoc shell scripts lying around on the relevant servers; a bit better than copy/pasting into an ssh session, but not by much.)

At this point, froztbyte reminded me of Fabric (which I knew about previously, but hadn’t thought of in this context). So instead I wrote some fairly simple Fabric tasks to automate the process of building new containers, and also deploying them. For final production use, I will probably be setting up a scheduled task that automatically deploys from our “prod” branch (much like our current workflow does), but for testing purposes, we want a deploy to happen whenever somebody merges something into our testing release branch (eventually I’d like to deploy test environments on demand for separate branches, but this poses some challenges which are outside of the scope of this blog post). I could build some automated deployment system triggered by webhooks from BitBucket (where our private source code is hosted), but since everyone with access to merge things into that branch also has direct SSH access to our servers, Fabric was the easiest solution; no need to add another pile of moving parts to the system.

My Fabric tasks look like this (censored slightly to remove hostnames):

from fabric.api import cd, hosts, run, settings


@hosts('my.uat.host')
def build_uat_documint():
    # Clone the repository on the first run; later runs just pull.
    with settings(warn_only=True):
        if run('test -d /srv/build/documint').failed:
            run('git clone --quiet -- https://github.com/fusionapp/documint.git /srv/build/documint')
    with cd('/srv/build/documint'):
        run('git pull --quiet')
        # Build the "base" and "build" images, run the build container to
        # produce the artifacts, then build the final "run" image.
        run('docker build --tag=fusionapp/documint-base --file=docker/base.docker .')
        run('docker build --tag=fusionapp/documint-build --file=docker/build.docker .')
        run('docker run --rm --tty --interactive --volume="/srv/build/documint:/application" --volume="/srv/build/documint/wheelhouse:/wheelhouse" fusionapp/documint-build')
        run('cp /srv/build/clj-neon/src/target/uberjar/clj-neon-*-standalone.jar bin/clj-neon.jar')
        run('docker build --tag=fusionapp/documint --file=docker/run.docker .')


@hosts('my.uat.host')
def deploy_uat_documint():
    # Stop and remove any running container before starting the new image.
    with settings(warn_only=True):
        run('docker stop --time=30 documint')
        run('docker rm --volumes --force documint')
    run('docker run --detach --restart=always --name=documint --publish=8750:8750 fusionapp/documint')

Developers can now deploy a new version of Documint (for example) by simply running fab build_uat_documint deploy_uat_documint. Incidentally, the unit tests are run during the container build (from the .docker file), so deploying a busted code version by accident shouldn’t happen.

by mithrandi at 07 June 2015 10:57 PM

07 March 2015

Tristan Seligmann (mithrandi)

Axiom benchmark results on PyPy

EDIT: Updated version now available.

EDIT: Fixed the issue with the store-opening benchmark

Axiom conveniently includes a few microbenchmarks; I thought I’d use them to give an idea of the speed increase made possible by running Axiom on PyPy. In order to do this, however, I’m going to have to modify the benchmarks a little. To understand why this is necessary, one has to understand how PyPy achieves the speed it does: namely, through the use of JIT (Just-In-Time) compilation techniques. In short, these techniques mean that PyPy is compiling code during the execution of a program; it does this “just in time” to run the code (or actually, if I understand correctly, in some cases only after the code has been run). This means that when a PyPy program has just started up, there is a lot of performance overhead in the form of the time taken up by JIT compilation running, as well as time taken up by code being interpreted slowly because it has not yet been compiled. While this performance hit is quite significant for command-line tools and other short-lived programs, many applications making use of Axiom are long-lived server processes; for these, any startup overhead is mostly unimportant; the performance that interests us is the performance achieved once the startup cost has already been paid.

The Axiom microbenchmarks mostly take the form of performing a certain operation N times, recording the time taken, then dividing that time by N to get an average time per single operation. I have made two modifications to the microbenchmarks in order to demonstrate the performance on PyPy: first, I have increased the value of “N”; second, I have modified the benchmarks to run the entire benchmark twice, throwing away the results from the first run and only reporting the second run. This serves to exclude startup/“warmup” costs from the benchmark.

All of the results below are from my desktop machine running Debian unstable on amd64, CPython 2.7.5, and PyPy 2.1.0 on a Core i7-2600K running at 3.40GHz. I tried to keep the system mostly quiet during benchmarking, but I did have a web browser and other typical desktop applications running at the same time. Here’s a graph of the results; see the rest of the post for the details, especially regarding the store-opening benchmark (which is actually slower on PyPy).

[graph removed, see the new post instead]

To get an idea of how much of a difference this makes, let’s take a look at the first benchmark I’m going to run, item-creation 15. This benchmark constructs an Item type with 15 integer attributes, then runs 10 transactions where each transaction creates 1000 items of that type. In its initial form, the results look like this:

mithrandi@lorien> python item-creation 15
0.000164939785004
mithrandi@lorien> pypy item-creation 15
0.000301389718056

That’s about 165µs per item creation on CPython, and 301µs on PyPy, nearly 83% slower; not exactly what we were hoping for. If I increase the length of the outer loop (number of transactions) from 10 to 1000, and introduce the double benchmark run, the results look a lot more encouraging:

mithrandi@lorien> python item-creation 15
0.000159110188484
mithrandi@lorien> pypy item-creation 15
8.7410929203e-05

That’s about 159µs per item creation on CPython, and only 87µs on PyPy; that’s a 45% speed increase. The PyPy speed-up is welcome, but it’s also interesting to note that CPython benefits slightly from the changes to the benchmark. I don’t have any immediate explanation for why this might be, but the difference is only about 3%, so it doesn’t matter too much.

The second benchmark is inmemory-setting. This benchmark constructs 10,000 items with 5 inmemory attributes (actually, the number of attributes is hardcoded, due to a limitation in the benchmark code), and then times how long it takes to set all 5 attributes to new values on each of the 10,000 items. I decreased the number of items to 1000, wrapped a loop around the attribute setting to repeat it 1000 times, and introduced the double benchmark run:

mithrandi@lorien> python inmemory-setting
4.86490821838e-07
mithrandi@lorien> pypy inmemory-setting
1.28742599487e-07

That’s 486ns to set an attribute on CPython, and 129ns on PyPy, for a 74% speed increase. Note that this benchmark is extremely sensitive to small fluctuations since the operation being measured is such a fast one, so the results can vary a fair amount between benchmark runs. For interest’s sake, I repeated the benchmark with a normal Python class substituted for Item, in order to compare the overhead of setting an inmemory attribute with that of normal Python attribute access. The result was 61ns to set an attribute on CPython (making an inmemory attribute about 700% slower), and 2ns on PyPy (inmemory is 5700% slower). The speed difference on PyPy has more to do with how fast setting a normal attribute is on PyPy than with Axiom being slow.

The third benchmark is integer-setting. This benchmark is similar to inmemory-setting except that it uses integer attributes instead of inmemory attributes. I performed the same modifications, except with an outer loop of 100 iterations:

mithrandi@lorien> python integer-setting
1.23480038643e-05
mithrandi@lorien> pypy integer-setting
3.80326986313e-06

That’s 12.3µs to set an attribute on CPython, and 3.8µs on PyPy, a 69% speed increase.

The fourth benchmark is item-loading 15. This benchmark creates 10,000 items with 15 integer attributes each, then times how long it takes to load an item from the database. On CPython, the items are deallocated and removed from the item cache immediately thanks to refcounting, but on PyPy a gc.collect() after creating the items is necessary to force them to be garbage collected. In addition, I increased the number of items to 100,000 and introduced the double benchmark run:

mithrandi@lorien> python item-loading 15
9.09668397903e-05
mithrandi@lorien> pypy item-loading 15
5.70205903053e-05

That’s 90µs to load an item on CPython, and 57µs on PyPy, for a modest 37% speed increase.

The fifth benchmark is multiquery-creation 5 15. This benchmark constructs (but does not run), 10,000 times over, an Axiom query involving 5 different types, each with 15 attributes (such a query requires Axiom to construct SQL that mentions each item table, and each column in those tables). I increased the number of queries constructed to 100,000 and introduced the double benchmark run:

mithrandi@lorien> python multiquery-creation 5 15
5.5426299572e-05
mithrandi@lorien> pypy multiquery-creation 5 15
7.98981904984e-06

55µs to construct a query on CPython; 8µs on PyPy; 86% speed increase.

The sixth benchmark is query-creation 15. This benchmark is the same as multiquery-creation, except for queries involving only a single item type. I increased the number of queries constructed to 1,000,000 and introduced the double benchmark run:

mithrandi@lorien> python query-creation 15
1.548528409e-05
mithrandi@lorien> pypy query-creation 15
1.56546807289e-06

15.5µs to construct a query on CPython; 1.6µs on PyPy; 90% speed increase.

The final benchmark is store-opening 20 15. This benchmark simply times how long it takes to open a store containing 20 different item types, each with 15 attributes (opening a store requires Axiom to load the schema from the database, among other things). I increased the number of iterations from 100 to 10,000; due to a bug in Axiom, the benchmark will run out of file descriptors partway, so I had to work around this. I also introduced the double benchmark run:

mithrandi@lorien> python store-opening 20 15
0.00140788140297
mithrandi@lorien> pypy store-opening 20 15
0.00202187280655

1.41ms to open a store on CPython; 2.02ms on PyPy; 44% slowdown. I’m not sure what the cause of the slowdown is.

A bzr branch containing all of my modifications is available at lp:~mithrandi/divmod.org/pypy-benchmarking.

by mithrandi at 07 March 2015 11:24 AM

Axiom benchmark results on PyPy 2.5.0

This is a followup to a post I made about 1.5 years ago, benchmarking Axiom on PyPy 2.1.0. Not too much has changed in Axiom since then (we fixed two nasty bugs that mainly affected PyPy, but I don’t expect those changes to have had much impact on performance), but PyPy (now at 2.5.0) has had plenty of work done on it since then, so let’s see what that means for Axiom performance!

Unlike my previous post, I’m basically just going to show the results here without much commentary:

[Graph of Axiom performance]

A few notes:

  • I didn’t redo the old benchmark results, but the hardware/software I ran the benchmarks on is not significantly different, so I think the results are still valid as far as broad comparisons go (as you can see, the CPython results match fairly closely).
  • The benchmark harness I’m using now is improved over the last time, using some statistical techniques to determine how long to run the benchmark, rather than relying on some hardcoded values to achieve JIT warmup and performance stability. Still could use some work (eg. outputting a kernel density estimate / error bars, rather than just a single mean time value).
  • There is one new benchmark relative to the last time, powerup-loading; PyPy really shines here, cutting out a ton of overhead. There’s still room for a few more benchmarks of critical functions such as actually running and loading query results (as opposed to just constructing query objects).
  • The branch I used to run these benchmarks is available on Github.
  • The horizontal axis is cut off at 1.0 so you can’t actually see how store-opening lines up, but the raw data shows that PyPy 2.1.0 took about 53% longer on this benchmark, while PyPy 2.5.0 only takes about 2% longer.

by mithrandi at 07 March 2015 11:22 AM

28 February 2015

Andre Truter (Cacofonix)

Old software in business

One thing that I cannot understand is why a business that is not a one-man show would use outdated software, especially something like Internet Explorer 6 or 7, or anything below version 10.
I can understand that a very small business or a private person without an IT department or IT support might still be stuck with such old software, but a big business with branches all over the country should know that IE below 10 in particular has security risks and does not support HTML5.

So if you ask a supplier to write you a web application and that application makes use of HTML5, you should not wonder why it does not work in your IE 8 browser.

I can understand that it might not always be possible to upgrade all workstations to the latest operating system if you use expensive proprietary operating systems, but then you can at least standardise on an open source browser like Firefox or a free browser like Chrome. Both of them have very good track records for security and they support HTML5 and keeping them up to date does not cost anything.

So why are your staff stuck with an old, insecure browser that does not support HTML5? We are not living in the 90's anymore!

The same goes for office suites. LibreOffice is kept up to date, and there are other alternatives like OpenOffice (backed by Apache, Oracle and IBM). With a little training, you can move all your users over to LibreOffice or OpenOffice, never pay for an upgrade again, and always have the latest stable and secure version of the software available, no matter what OS you run.

To me it just makes sense to invest some time and money once to ensure a future of not being locked in. In the long run it saves money, and if enough businesses do that, it might even force Microsoft to come to the Open Document Standards table and bring the price of its Office suite down, or even make it free as well, which would benefit everybody, as we would have a truly open document format that everybody can use and nobody can be locked into.

Just my humble opinion.

Some links:
Firefox Web Browser
Google Chrome Web Browser
Best Microsoft Office Alternatives

by andre at 28 February 2015 03:35 PM

03 December 2014

Neil Blakey-Milner (nbm)

Starting is one of the hardest things to do

Just over six years ago I stopped posting regularly to my fairly long-lived blog (apparently started in April 2003), and just under four years ago I noticed it was not loading because the server it was on had broken somehow, and I didn't care enough to put it back up. (Astute readers will note that those two dates each fall roughly a few months after my last two job start dates...)

I've written a number of blog posts over the last few years, but I haven't posted any of them. I don't even have most of them anymore, although a few are in the branches of git repos somewhere. I'll try to dig some of them up - one of the things I enjoyed about my old blog was rereading what I was thinking about at various times in the past. (I guess I'll try to get that back online.)

I've also written two different bits of blog software since then - since that's what one does whenever one contemplates starting a blog up again (although, to be fair, I also started setting up Wordpress and Tumblr blogs too).

The first used Hyde, a static blog generation tool (whose development seems to have halted a year ago). The second is what I'm using now - a collection of Flask modules and some glue code that constitutes gibe2, which I wrote just under a year ago. It uses Frozen-Flask to freeze a Flask app into an HTML tree that can be served by any static web server.

Putting something out there - let alone a first pass, a beginning with some implied commitment to continue - is quite scary. In my career, I've tended to be the joiner and sometimes the completer to a project, not the initiator. I find it easy to take something from where it is to where I think it should go, and hard to imagine what should exist.

I should mention that I have succeeded in posting a bit to play.TechGeneral.org about the occasional spurts of time I get to work on a little D language OpenGL someday-might-be-a-game. The minimum expended effort there is a lot higher - building a new feature or doing a major refactor, and then explaining it all in a post - in fact the posts are sometimes a lot harder to put together than the code, and the code is quite a bit further ahead than the posts now. It will probably be very sporadic with posts mostly coinciding with my long weekends and vacations. (And, unsurprisingly, I've just finished up a vacation where I finally got around to putting the final touches on this here blog.)

Connecting together starting up being scary and game programming: I've been watching Handmade Hero religiously since it started (the first few on Youtube the day after, the last few live). Talk about a tough thing to contemplate starting - one hour a day of writing a game from scratch, streamed to the public every weekday - every single little mistake in your code and in your explanations there for people to pick apart without any opportunity for editing. And having ~1000 viewers in your first few shows - no pressure!

That definitely put just putting together a few words a week with no implied time-bound commitment into perspective, so here we are.

I stopped reading blogs roughly when I stopped caring about mine. I hope to find a few interesting people to follow. A few of my friends used to post, albeit infrequently. I hope I can convince them to do so again as well.

by Neil Blakey-Milner at 03 December 2014 07:45 AM

22 August 2014

Adrianna Pińska (Confluence)

Yet another way to play videos in an external player from Firefox

I spent some time today trying to figure out how to get Firefox to play embedded videos using anything other than Flash, which is an awful, stuttering, resource-devouring mess. The internet is littered with old browser extensions and user scripts which allegedly make this possible (e.g. by forcing sites like YouTube to use the vlc media plugin instead), but I was unable to get any of them to work correctly.

Here’s a quick hack for playing videos from YouTube and any other site that youtube-dl can handle in an external mplayer window. It’s based on several existing HOWTOs, and has the benefit of utilising a browser extension which isn’t dead yet, Open With, which was designed for opening websites in another browser from inside Firefox.

I wrote a little script which uses youtube-dl to download the video and write it to stdout, and pipes this output to mplayer, which plays it. Open With is the glue which sends URLs to the script. You can configure the extension to add this option to various context menus in the browser — for example, I can see it if I right-click on a URL or on an open page. You may find this less convenient than overriding the behaviour of the embedded video on the page, but I prefer to play my videos full-screen anyway.

This is the script:

#!/bin/sh
# Stream the video to stdout and pipe it into mplayer ("-" reads from stdin).
# Quote the URL argument so special characters survive the shell.
youtube-dl -o - "$1" | mplayer -

Make it executable. Now open Open With’s preferences, add this executable, and give it a nicer name if you like. Enjoy your stutter-free videos. :)

(Obviously you need to have youtube-dl and mplayer installed in order for this to work. You can adapt this to other media players — just check what you need to do to get them to read from stdin.)
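For instance, mpv is one player that will also happily read from stdin via “-”, so (assuming you have mpv installed) a variant of the script could be:

#!/bin/sh
# Same idea, but with mpv as the player.
youtube-dl -o - "$1" | mpv -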

by confluence at 22 August 2014 02:41 PM

21 August 2014

Simon Cross (Hodgestar)

Character Creation 3000W

by Simon Cross, Mike Dewar and Adrianna Pińska

Your character creation skills have progressed far beyond writing
numbers on paper. Your characters have deftly crafted mannerisms and
epic-length backgrounds. They breathe emotion and seem more life-like
than many of your friends.

Yet, somehow, when you sit down at a table to play your beautiful
creations, things don’t quite work out.

Perhaps the story heads in an unexpected direction, leaving your
creation out of place and struggling to fit in? Or maybe they’re fun
to play initially but their actions begin to feel repetitive and
uninteresting?

If any of this sounds familiar, read on.

Reacting to failure

It’s easy to spend all your time imagining a character’s successes —
their victories and their crowning moments — but what happens when
they fail? How do they respond to minor setbacks? And big ones?

Maybe they’re stoic about it? Perhaps it’s likely to cause a crisis of
faith? Maybe they react by doubling down and upping the stakes? Maybe
they see failure as an opportunity to learn and grow? Perhaps they’re
accustomed to failure? Perhaps they see failure as a sign that they’re
challenging themselves and pushing their abilities?

The dice and the DM are going to screw you. Make sure you have a plan
for how to roleplay your character when they do.

Philosophy

A character’s goals are strongly tied to specific events. A
philosophy colours every situation. The two are often aligned, but a
philosophy is more broadly useful. It gives you a handle on how your
character might behave in circumstances where it is not otherwise
obvious what they would do.

To take a hackneyed example: your backstory might involve punishing an
old partner who screwed you. This goal could feed a number of
rather different philosophies:

  • “I always keep my word, and I promised Jimmy I’d get him back.”
  • “Any situation can be solved with enough violence.”
  • “Karma controls the universe. What goes around comes around.”

The goal is the same, but each philosophy implies very different
day-to-day behaviour.

There are going to be times when other characters’ plots and goals are
centre-stage, and it behooves us as roleplayers to have a plan for
these awkward (and hopefully brief) moments. A philosophy allows your
character to participate in others’ plots as a unique and distinct
individual, rather than as a bored bystander.

Your character’s philosophy becomes vitally important when paradigm
shifts occur in-game. Setting changes erode the importance of lesser
goals and past history and create a strong need for a philosophy that
guides your character’s immediate responses and further development.

It may be interesting to construct characters with goals that
contradict their philosophy. For example, a pacifist might wish to
exact revenge on the person who killed their brother. This creates an
interesting conflict that will need to be resolved.

Randomly fucking with people is not a philosophy.

Interacting with colleagues

Your character is going to spend a lot of time interacting with their
colleagues — the other player characters — so it’s worthwhile
thinking about how they do that.

It’s tempting (and a bit lazy) to think of characters as relating to
everyone else the same way. This leads to loners and overly friendly
Energizer bunnies, both of which get old very quickly.

Avoid homogenous party dynamics.

If your character’s interactions with the other player characters are
all the same, you have failed.

Varied interactions also help make party disagreements more
interesting. Without varied interactions, you have to resolve all
disagreements by beating each other over the head with the logic stick
until consensus (or boredom) is reached. Unique relationships and
loose factions make disagreements more interesting to roleplay and
help the party find plausible lines along which to unite for a given
scenario.

If your character is part of a command structure, spend some time
thinking about how they respond to orders they disagree with. Remember
that the orders are likely issued by someone your character knows and
has an existing relationship with. What is that relationship?

Also keep in mind that your character has likely been given such
orders before, and since they appear to still be part of the command
structure, they’ve probably come to terms with this in some way that
both they and their immediate superiors can live with.

Obviously everyone has their limits, though — where are your
character’s? How much does it take for other player characters or
NPCs to cross the line?

Development

Sometimes even if you do everything right you find yourself in a
situation where your character is no longer fun to play. Maybe the
campaign took an unexpected turn or you’ve just run out of ideas for
them as they are. It’s time for your character to change — to embark
on a new personal story arc.

Great characters aren’t static. They grow and react to events around
them. Perhaps a crushing defeat has made them reconsider their
philosophy — or made them more committed to it? Or maybe frustration
with their current situation has made them reconsider their options?

It helps to think broadly about how your character might develop while
you’re creating them. Make sure you’d still find the character
interesting to play even if their stance on some important issues
shifted. Don’t become too invested in your character remaining as they
are. Be flexible — don’t have only one plan for character
development.

Your character’s philosophy and general outlook can be one of the most
interesting things to tweak. Small changes can often have big
ramifications for how they interact with others.

Don’t feel you have to leave character development for later in the
campaign! The start of a campaign is often when character changes are
most needed to make a character work well and it sets the stage for
further character development later on.

Sharing

Think about how you convey who your character is to the other
players. They’re probably not going to get to read your epic
backstory, so they’re going to have to learn about who your character
is in other ways.

Likely the first thing people will hear about your character is his or
her name — so make it a good one. It’s going to be repeated a lot so
make sure it conveys something about who your character is. If they’re
an Italian mobster, make sure their name sounds like they’re an
Italian mobster. That way whenever the DM or another player says your
character’s name, it reminds everyone who your character is.

The second thing people hear will probably be a description of your
character. Take some time to write one. Don’t rely on dry statistics
and descriptions. Stick to what people would see and remember about
your character if they met him or her for a few minutes. Don’t mention
hair colour unless hair is an important feature.

After introductions are done, you probably won’t get another
invitation to monologue about your character. So do it in character
instead. Tell the NPC about that time in ‘Nam. Regale the party with
tales from your epic backstory. As in real life, try not to ramble on,
but equally, don’t shy away from putting your character in the
spotlight for a few moments. Continually remind the others at the
table who your character is.

Last but not least, remember that the most epic backstory is pointless
if no one finds out about it. The point of dark secrets is for them to
be uncovered and for your character to confront them.

Epilogue

Don’t fear failure. Have a philosophy. Have varied interactions with
others. Embrace change. Share who you are.

Thanks

  • Kululaa dot COMMMM!
  • Mefridus von Utrecht (for a philosophy that involves others)
  • Attelat Vool (for starting life after failure)

This article was also published in the CLAWmarks 2014 Dragonfire edition.

by admin at 21 August 2014 09:36 AM

17 March 2014

Adrianna Pińska (Confluence)

Why phishing works

Let me tell you about my bank.

I would expect an institution which I trust to look after my money to be among the most competent bureaucratic organisations I deal with. Sadly, interactions which I and my partner H have had with our bank in recent years suggest the opposite.

Some of these incidents were comparatively minor: I once had to have a new debit card issued three times, because I made the fatal error of asking if I could pick it up from a different branch (the first two times I got the card, my home branch cancelled it as soon as I used it). More recently, when I got a replacement credit card, every person I dealt with had a completely different idea of what documents I needed to provide in order to collect it. When H bought a used car, it turned out that it was literally impossible for him to pay for it with a transfer and have the payment clear immediately — after he was assured by several employees that it would not be a problem.

Some incidents were less minor. H was once notified that he was being charged for a replacement credit card when his current card was years away from expiring. Suspicious, he called the bank — the employee he spoke to agreed that it was weird, had no idea what had happened, and said they would cancel the new card. H specifically asked if there was anything wrong with his old card, and was assured that everything was fine and he could keep using it. Of course everything was not fine — it suddenly stopped working in the middle of the holiday season, and he had to scramble to replace it at short notice. All he got was a vague explanation that the card had to be replaced “for security reasons”. From hints dropped by various banking employees he got the impression that a whole batch of cards had been compromised and had quietly been replaced — at customers’ expense, of course.

What happened last weekend really takes the cake. The evening before we were due to leave on a five-day trip, well after bank working hours, H received an SMS telling him that his accounts had been frozen because he had failed to provide the bank with some document required for FICA. This was both alarming and inexplicable, because we had both submitted documents for FICA well before the deadline years ago. The accounts seemed fine when he checked them online. When he contacted the bank, he was assured that he had been sent the SMS in error, everything was fine, and he didn’t need to provide any FICA documents.

So we left on our trip. I’m sure you can see where this is going.

On Thursday evening H’s accounts were, in fact, frozen. He tried unsuccessfully during the trip to get the bank to unfreeze them, but since it was closed for the weekend there was pretty much nothing he could do until we got back home.

Hilariously, although employees from the bank made a house call this morning to re-FICA him, he had to go to the branch anyway because they needed a photocopy of his ID (scanning and printing is magically not the same as photocopying).

Again, what actually happened is a mystery — the bank claims that they have no FICA documents on record for H (as an aside, why are customers not given receipts when they submit these documents to the bank? If there is no record of the transaction, the bank can shift blame to the customer with impunity if it loses vital documentation).

We’re very fortunate that none of these incidents had devastating consequences for us, since they only impacted one of us at a time. If we relied on a single bank account, we could easily have ended up without food, without electricity, or stranded in the middle of the Karoo with no fuel. This is pretty clearly not an OK situation.

The common thread running through all these incidents is a lack of communication: both between the bank and its customers, and within the bank. We rapidly discovered through our dealings that while the bank maintains the facade of a single, unified entity, it in fact comprises several more-or-less autonomous divisions, and communication between these divisions often resembles diplomacy in the Balkans.

I would not be surprised to discover that various aspects of customers’ information are distributed over several disconnected databases. There appears to be no way for certain types of information to be linked to accounts, which necessitates the use of bizarre hacks. When H’s credit card was disabled, this was in no way reflected in the online interface — he just had an enormous amount of money reserved on the card. When his accounts were frozen, this was again not apparent in the interface (which was recently updated) — his available credit balance was just zeroed, and he got a non-specific error whenever he tried to transfer funds. I believe that the back-end infrastructure for managing this information effectively and making it available to customers and employees simply does not exist.

As a result of this, I have learned not to believe a word that a bank employee says if there is any chance that the issue crosses some internal jurisdictional boundary. In the best case scenario they are aware of their own lack of information and are able to direct you to the appropriate department. In the worst case scenario, they dispense outright misinformation — something which can have devastating consequences (for you).

We live in an age with an abundance of instant communications methods. In spite of this, it appears to be beyond the bank’s abilities to inform people timeously about what is going on with their accounts. It has backed off from electronic communications presumably because of fears of phishing, and has fallen back to two obsolete and inefficient channels: voice phonecalls, which are prone to misunderstandings and human error and leave the customer with no written record, and SMSes, which have a character limit. Both these channels are vulnerable to blips in the cellphone network infrastructure, customers having no airtime, or customers just not being available to answer their phones at certain times (since voice phonecalls are synchronous). The bank also uses these channels as if it believes that they are intrinsically secure and trustworthy, which is ludicrous, as anyone who keeps getting calls from “the Microsoft Support Centre” can attest.

In order for this dysfunctional system to work, the bank appears to expect its customers to accept unexpected additional charges unquestioningly, and to follow instructions issued by unknown persons calling from unverified internal numbers, even if they are nonsensical or suspicious and cannot be corroborated by other bank employees contacted through more reputable public-facing channels. It is standard procedure for such callers to demand personal information from customers in order to verify their identities, but they offer no evidence of their own legitimacy.

In short, the bank expects us to be the kind of credulous chumps who fall prey to phishing scams. It’s easy to see why phishing works when genuine communications from the bank are so unprofessional and follow such laughable security practices that they are virtually indistinguishable from phishing.

I haven’t named my bank, although it’s pretty easy to find out what it is if you know where to look, because I’m sceptical that there are significant differences in the way South African banks operate, particularly among the Big Four. I’ve certainly heard the same kinds of horror stories from customers of other banks.

This is the final straw which has led us to investigate other banking options. Whether the pastures we’re moving to are greener or just differently green remains to be seen.

If you feel strongly that there is a bank which is notably better or worse than the others, or you just want to share your own tale of banking woe, let me know in the comments. I’ll be over here, stuffing my money into a sock to hide under my mattress.

by confluence at 17 March 2014 12:10 PM

25 November 2013

Michael Gorven (cocooncrash)

Statically linked e2fsprogs binaries for Android

I needed to resize the ext4 filesystem of the /data partition on my Android phone, and spent ages looking for a prebuilt binary of resize2fs (since it isn't included in ROMs or recovery). I didn't find one, and then spent ages more building it myself. To save others the trouble, here are the binaries.

Just the main ones (mke2fs, resize2fs, tune2fs, fsck): tar.xz 630K or zip 1.5M
Everything tar.xz 1.6M or zip 5.8M
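As a rough sketch of usage (the device node below is hypothetical; check your phone's actual partition layout, make sure the filesystem is unmounted, and do this from recovery at your own risk):

# resize2fs requires a clean filesystem, so check it first.
e2fsck -f /dev/block/mmcblk0p25
# Resize the ext4 filesystem to 4GiB (sizes accept K/M/G suffixes).
resize2fs /dev/block/mmcblk0p25 4G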

by mgorven at 25 November 2013 04:58 AM

17 October 2013

Simeon Miteff (simeon)

DBE is right about Delphi and Office

Some thoughts on the current furore about the Department of Basic Education’s policy to standardise on MS Office and Delphi in teaching:

MS Office

Word processors and spreadsheets are simple tools for solving simple problems. For hard typesetting and number crunching problems we have better tools.

My point is that we’re already making a pragmatic choice with computer literacy training in schools: most scholars will use computers to solve simple problems, so schools train them to be able to do that (not to build mathematical models in Octave or to typeset engineering manuals in LaTeX). Should we now insist that they be trained to use a relatively obscure FOSS office suite instead? LibreOffice is a fork of one piece of crap, emulating another piece of crap known as MS Office. DBE is standardising on the de-facto (albeit piece-of-crap) standard, and that is the sensible choice; get over it.

Delphi

I did computer programming in Pascal (with Borland Turbo Pascal) at high school. The curriculum covered the basic constructs of the imperative programming style. At the time, Borland had already enhanced their Pascal implementation to support many of the language features currently available in Delphi. When I started university, the Computer Science program I took required basic programming ability as a prerequisite, and having done Pascal at school was sufficient to fulfil this requirement. Students who didn’t have programming at school took an additional semester course (Introduction to programming in C).

I never touched Pascal again.

Not only was the Pascal I was taught at school not an immediately useful skill for the workplace, but in retrospect, the abstract concepts I was taught served only as a grounding for the real computer science which followed at university: data structures and algorithms, computational complexity, computer architecture, AI, compilers and the study of programming languages, distributed systems, and so on. I wouldn’t have become a capable programmer without studying mathematics and science (as opposed to being trained in programming).

In my opinion, Pascal (and consequently Delphi) is good enough for education. Stated another way: Java is no better for the purpose of teaching a teenager how a for loop works. No programming language taught at school can turn a scholar into a serious career programmer.

Real problems

There are so many difficult problems to solve in South Africa’s education system: consider that the majority of scholars are being taught all subjects in their second or third (natural) language. Why aren’t we having a debate about whether Afrikaans should be taught as a second language to kids who speak Xhosa at home, when they’re expected to understand what’s going on in a history lesson delivered in English?

Conclusion

On this particular issue, y’all need to calm down and give DBE a break.

by Simeon Miteff at 17 October 2013 10:27 PM

08 September 2013

Andy Rabagliati (wizzy)

Richard Stallman visits Cape Town

[Photo: myself with Richard Stallman]

Richard Stallman visited Cape Town, and gave a talk at the University of Cape Town titled "A Free Digital Society".

On the 4th September I picked Richard Stallman up from the airport, and we went to dinner at the Brass Bell in Kalk Bay with a number of people who had helped organise the event.

Around Cape Town

He stayed with me and my friend Lerato in Muizenberg, and he is a gracious and accommodating guest. I took the Thursday off work, and we had lunch at a little local restaurant in Marina da Gama, and I then took him for a drive around the peninsula on a beautiful Cape day.

A Free Digital Society

The talk was at the Leslie Social Sciences building at the University of Cape Town, but as we arrived Richard realised he had left behind a Gnu fluffy toy he wanted to auction, so I had to dash back to Muizenberg to pick it up.

GNU General Public Licence

The other organisers delayed the start of the talk by ten minutes so I could do the introductions, thank you. He talked for about two hours on a range of topics, dwelling on the GNU General Public Licence, his 30-year-old prescription for how developers should set about writing code that will become a living part of a commonly-owned set of tools and procedures that other people can build upon and improve.

Tracking

He outlined his suspicions regarding the trackability of cellphones, and the vast amounts of information that are stored by cellphone companies - mostly metadata (number called, duration, cellphone location) - and the implications for privacy. He does not own one.

Education

He touched on software used in schools, exhorting educators to think about the larger consequences of software usage, and putting students on the right track - to empower them to realise they are in control of the software, and not just treat it as a purchased item that remains the property of the software company.

Voting Software

He highlighted the many dangers and pitfalls of computerising the counting of votes: the closed nature of the software, the possibility of bugs or deliberate malware in the code, the difficulty of knowing whether the software that was verified is indeed the software running on the machine for the duration of the vote, and the difficulty of a recount (or even of knowing what a recount would mean).

E-Tolling

Bringing a South African flavour to his talk, he discussed eTolling - which he objected to because of the potential for abuse for surveillance purposes. He also pointed out that if proper attention was paid to anonymising the data, he would no longer object.

[Photo: Lerato and Richard]

Farewell

He let me keep a South African music CD from his collection, which was very nice of him, and I like it very much. In short, we had a capacity audience from diverse backgrounds and no glitches, and he very much enjoyed his trip, saying he would look forward to coming again. We are certainly blessed to live in a part of the world so beautiful that people like Richard want to visit.

Other commentary

A couple of other posts :-

by Andy at 08 September 2013 02:32 PM

08 August 2013

Simeon Miteff (simeon)

Unbundling of the Telkom SA local loop

It was inevitable that I would eventually post something about the punny namesake of this blog: the “local loop” in South Africa. This is the last-mile (copper) infrastructure owned and controlled by the incumbent telco, Telkom SA.

Local loop unbundling (LLU) is a regulatory policy that forces an operator with “significant market power” to allow competitors to directly resell their access network. Predictably, the incumbent operators always push back against LLU, and this gives rise to different “flavours”. “Full LLU” means access to raw copper lines at the exchange, while “bitstream access” refers to a number of wholesale PPPOE backhaul scenarios presented as an alternative to proper LLU.

In South Africa, ISPs wishing to provide ADSL services can either become pure resellers of Telkom SA’s end-to-end DSL ISP product, or they can purchase wholesale access to the ADSL “cloud” via a product called IP Connect. While it seems similar to bitstream access, Telkom has deliberately crippled IP Connect with a really ugly L3 routed hack involving three VRFs and a RADIUS realm per ISP, some RADIUS proxying, and ISP-allocated address pools configured inside Telkom’s network.

Much has been written about the shortcomings of IP Connect, but the TL;DR version is that you need to tunnel to do anything other than the simplest best-effort IPv4 service that could work. What’s perhaps more interesting is to note that compared to the much more flexible bitstream service they could deliver using L2TP, IP Connect is very likely more expensive to run (in my opinion).

So, briefly back to regulatory matters: despite a government policy that LLU must happen, our regulator has, as is typical, dragged its feet on implementing LLU regulation for many years. Telkom SA has cried wolf about LLU not being viable (as expected), and so the rest of the nation waits while we slip down the global competitiveness rankings.

With the recent replacement of the minister of communications, it seems there is some action (or noise?) around LLU (for example, see this article on Techcentral).

I predict that ICASA means business this time, but there is a catch: Telkom SA will offer bitstream access as a compromise to what they’ll present as the impending disaster of full LLU.

ICASA will fall for it, and Telkom SA will present a fully developed bitstream product as their “gift” to the industry (which was actually ready for launch in 2010, and will save Telkom money, compared to IP Connect).

What remains to be seen is:

  1. Whether ICASA will regulate the price of the “units” of LLU. If I’m right, the “unit” will be bitstreams, not copper lines. In my opinion, the price must be regulated for bitstream access to be an acceptable substitute for full LLU (I’m not saying price regulation is a good thing – that’s a completely different question).
  2. Just how far Telkom will push the interconnect point for ISPs away from the end user. The ideal scenario is access to the bitstream at the exchange. If they force interconnection with ISPs on a regional basis instead, they’ll make it less like full LLU (consequently defeating the purpose of opening up the access network to competitors), and artificially inflate the price by claiming the cost of backhaul to those regional interconnection points.

I hope I’m wrong and we get full LLU, but I have a feeling my prediction will be spot-on.

by Simeon Miteff at 08 August 2013 09:51 AM

06 August 2013

Brendan Hide (Tricky)

My server rebuild, part 1 – with Ubuntu, and a vague intro to btrfs

History

Much has changed since I last mentioned my personal server – it has grown by leaps and bounds (it now has a 7TB md RAID6) and it has recently been rebuilt with Ubuntu Server.

Arch was never a mistake. Arch Linux had already taught me so much about Linux (and will continue to do so on my other desktop). But Arch definitely requires more time and attention than I would like to spend on a server. Ideally I’d prefer to be able to forget about the server for a while until a reminder email says “um … there’s a couple updates you should look at, buddy.”

Space isn’t free – and neither is space

What prompted the migration to Ubuntu was that I had run out of SATA ports, the ports required to connect hard drives to the rest of the computer – that 7TB RAID array uses a lot of ports! I had even given away my very old 200GB hard disk, as it took up one of those ports; I also warned the recipient that the disk’s SMART monitoring indicated it was unreliable. As a temporary workaround to the lack of SATA ports, I had even migrated the server’s OS to a set of four USB sticks in an md RAID1. Crazy. I know. I wasn’t too happy about the speed. I decided to go out and buy a new reliable hard drive and a SATA expansion card to go with it.

The server’s primary Arch partition was using about 7GB of disk. A big chunk of that was a swap file, cached data and otherwise miscellaneous or unnecessary files. Overall the actual size of the OS, including the /home folder, was only about 2GB. This prompted me to look into a super-fast SSD drive, thinking perhaps a smaller one might not be so expensive. It turned out that the cheapest non-SSD drive I could find actually cost more than one of these relatively small SSDs. Yay for me. :)

Choice? Woah?!

In choosing the OS, I’d already decided it wouldn’t be Arch. Out of all the other popular distributions, I’m most familiar with Ubuntu and CentOS. Fedora was also a possibility – but I hadn’t seriously yet considered it for a server. Ubuntu won the round.

The next decision I had to make didn’t occur to me until Ubiquity (Ubuntu’s installation wizard) asked it of me: How to set up the partitions.

I was new to using SSDs in Linux, though I’m well aware of the pitfalls of using them incorrectly, chief among them the risk of poor longevity if misused.

I didn’t want to use a dedicated swap partition. I plan on upgrading the server’s motherboard/CPU/memory in the not-too-distant future, so I decided to put swap into a swap file on the existing md RAID, as sketched below. The swap won’t be particularly fast, but its only purpose is for that rare occasion when something’s gone wrong and the memory isn’t available.
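For reference, setting up a swap file on an existing filesystem takes only a few commands (the size and path here are illustrative):

# Create a 4GiB file, restrict its permissions, format it as swap, enable it.
dd if=/dev/zero of=/srv/swapfile bs=1M count=4096
chmod 600 /srv/swapfile
mkswap /srv/swapfile
swapon /srv/swapfile
# To make it permanent, add a line like this to /etc/fstab:
# /srv/swapfile none swap sw 0 0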

This then left me to give the root path the full 60GB out of an Intel 330 SSD. I considered separating /home but it just seemed a little pointless, given how little was used in the past. I first set up the partition with LVM – something I’ve recently been doing whenever I set up a Linux box (really, there’s no excuse not to use LVM). When it got to the part where I would configure the filesystem, I clicked the drop-down and instinctively selected ext4. Then I noticed btrfs in the same list. Hang on!!

But a what?

Btrfs (“butter-eff-ess”, “better-eff-ess”, “bee-tree-eff-ess”, or whatever you fancy on the day) is a relatively new filesystem developed in order to bring Linux’ filesystem capabilities back on track with current filesystem tech. The existing King-of-the-Hill filesystem, “ext” (the current version called ext4) is pretty good – but it is limited, stuck in an old paradigm (think of a brand new F22 Raptor vs. an F4 Phantom with a half-jested attempt at an equivalency upgrade) and is unlikely to be able to compete for very long with newer Enterprise filesystems such as Oracle’s ZFS. Btrfs still has a long way to go and is still considered experimental (depending on who you ask and what features you need). Many consider it to be stable for basic use – but nobody is going to make any guarantees. And, of course, everyone is saying to make and test backups!

Mooooooo

The most fundamental difference between ext and btrfs is that btrfs is a “CoW” or “Copy on Write” filesystem. This means that data is never actually deliberately overwritten by the filesystem’s internals. If you write a change to a file, btrfs will write your changes to a new location on physical media and will update the internal pointers to refer to the new location. Btrfs goes a step further in that those internal pointers (referred to as metadata) are also CoW. Older versions of ext would have simply overwritten the data. Ext4 would use a journal to ensure that corruption won’t occur should the AC plug be yanked out at the most inopportune moment. The journal results in a similar number of steps required to update data.

With an SSD, the underlying hardware operates a similar CoW process no matter what filesystem you’re using. This is because SSD drives cannot actually overwrite data – they have to copy the data (with your changes) to a new location and then erase the old block entirely. An optimisation in this area is that an SSD might not even erase the old block but rather simply make a note to erase the block at a later time when things aren’t so busy. The end result is that SSD drives fit very well with a CoW filesystem and don’t perform as well with non-CoW filesystems.

To make matters interesting, CoW in the filesystem easily goes hand in hand with a feature called deduplication. This allows two (or more) identical blocks of data to be stored using only a single copy, saving space. With CoW, if a deduplicated file is modified, the separate twin won’t be affected as the modified file’s data will have been written to a different physical block.
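Deduplication proper needs filesystem support, but you can see the shared-blocks behaviour from userspace on btrfs: GNU cp can make a CoW clone with its --reflink option, after which the two files share physical blocks until one of them is modified. A small illustration (the file names are arbitrary, and the paths must be on a btrfs filesystem):

# An (almost) instant "copy" that shares all of the original's blocks.
cp --reflink=always big-file.img big-file-copy.img
# Writing to the copy allocates new blocks only for the changed extents;
# the original file is untouched.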

CoW in turn makes snapshotting relatively easy to implement. When a snapshot is made the system merely records the new snapshot as being a duplication of all data and metadata within the volume. With CoW, when changes are made, the snapshot’s data stays intact, and a consistent view of the filesystem’s status at the time the snapshot was made can be maintained.
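On btrfs this is exposed through subvolumes, and taking a snapshot is correspondingly cheap. A minimal sketch (the subvolume paths are illustrative):

# Snapshot the subvolume mounted at / into a named snapshot; this only
# records new metadata, so it is nearly instant regardless of data size.
btrfs subvolume snapshot / /snapshots/root-before-upgrade
# List subvolumes (snapshots included) to confirm it exists.
btrfs subvolume list /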

A new friend

With the above in mind, especially as Ubuntu has made btrfs available as an install-time option, I figured it would be a good time to dive into btrfs and explore a little. :)

Part 2 coming soon …

by Tricky at 06 August 2013 05:43 PM