Planet Twisted

June 27, 2017

Itamar Turner-Trauring

It may not be your fault, but it's always your responsibility

If you're going to be a humble programmer, you need to start with the assumption that every reported bug is your fault. This is a good principle, but what if it turns out the user did something stupid? What if it really is a third-party library that is buggy, not your code?

Even if a bug isn't your fault, it's still your responsibility, and in this post I'll explain how to apply that principle in practice.

First, discover the source of the problem

A user has reported a bug: they're using some code you wrote and something has gone horribly wrong. What can the source of the problem be?

  • User error: they did something they shouldn't have, or they're confused about what should have happened.
  • Environmental problems: their dependencies are slightly different than yours, their operating system is slightly different, and so on.
  • Third party bugs: someone else's code is buggy, not yours.
  • A bug in your code: you made a mistake somewhere.

A good starting assumption is that you are at fault, that your code is buggy. It's hard, I know: I often find myself assuming other people's code is the problem, only to find it was my own mistake. But precisely because it's so hard to blame oneself it's better to start with that as the presumed cause, to help overcome the bias against admitting a mistake.

If something is a bug in your code then you can go and fix it. But sometimes users will have problems that aren't caused by a bug in your code: sometimes users do silly things, or a library you depend on has a bug. What then?

Then, take responsibility

Even if the fault was elsewhere, you are still responsible, and you can take appropriate action.

User error

If the user made a mistake, or had a misunderstanding, that implies your design is at fault. Maybe your API encourages bad interaction patterns, maybe your error handling isn't informative enough, maybe your user interface doesn't ask users to confirm that yes, they want to delete all their data. Whatever the problem, user mistakes are something you can try to fix with a better design:

  • Give the API guide rails to keep users from doing unsafe operations.
  • Create better error messages, allowing the user to diagnose mistakes on their own.
  • Make the UI prevent dangerous operations.
  • Add an onboarding system to a complex UI.
  • Try to remove the UI altogether and just do the right thing.
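As a concrete sketch, here's what a guard rail plus an informative error message might look like. The function and its wording are entirely hypothetical:

```python
def delete_records(records, *, confirm=False):
    """Delete all records, but only with explicit confirmation.

    A hypothetical guard rail: the dangerous operation refuses to run
    unless the caller opts in, and the error says how to proceed.
    """
    if not confirm:
        raise ValueError(
            "This would delete %d records. "
            "Pass confirm=True if that is really what you want."
            % len(records))
    records.clear()
```

The keyword-only `confirm` argument means a caller can't trip the dangerous path by accident with a stray positional value.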

If a better design is impossible, the next best thing to do is write some documentation, and explain why the users shouldn't do that, or document a workaround. The worst thing to do is to dismiss user error as the user's problem: if one person made a mistake, probably others will as well.

Environmental problems

If your code doesn't work in a particular environment, well, that's your responsibility too:

  • You can package your software in a more isolated fashion, so the environment affects it less.
  • You can make your software work in more environments.
  • You can add a sanity check on startup that warns users if their environment won't work.
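A startup sanity check can be as simple as this sketch (the version floor is a made-up requirement, standing in for whatever your code actually needs):

```python
import sys

def check_environment():
    """Return a list of human-readable environment problems.

    A hypothetical startup sanity check: detect known-bad environments
    up front and report them with actionable messages, instead of
    failing mysteriously later on.
    """
    problems = []
    if sys.version_info < (3, 6):
        problems.append(
            "Python 3.6+ is required; found %d.%d" % sys.version_info[:2])
    return problems
```

At startup, print each returned problem as a warning before continuing, or refuse to start for the fatal ones.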

If all else fails, write some documentation.

Third party bugs

Found a bug in someone else's code?

  • Stop supporting older versions of a library if it introduces bugs.
  • If it's a new bug you've discovered, file a bug report so they can fix it.
  • Add a workaround to your code.
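A workaround might look like this sketch. Here `json.loads` stands in for the buggy third-party code, and the BOM handling is a hypothetical workaround, kept in one well-commented place:

```python
import json

def parse_config(raw):
    """Parse a JSON config string, working around a third-party quirk.

    Pretend the parser is someone else's buggy library: suppose some
    versions reject input that starts with a UTF-8 BOM. Rather than
    wait for an upstream fix, strip the BOM ourselves.
    """
    if raw.startswith("\ufeff"):
        # Workaround for the (hypothetical) upstream bug: remove the
        # BOM before handing the text to the library.
        raw = raw[1:]
    return json.loads(raw)
```

Isolating the workaround in a single helper makes it easy to delete once the upstream fix ships.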

And again, if all else fails, write some documentation explaining a workaround.

It's always your responsibility

Releasing code into the world is a responsibility: you are telling people they can rely on you. When a user reports a problem, there's almost always something you can do. So take your responsibility seriously and fix the problem, regardless of whose fault it is.

Best of all is avoiding problems in the first place: I've made many mistakes you can avoid by signing up for my weekly newsletter. Every week I'll share an engineering or career mistake and how you can avoid it.

June 27, 2017 04:00 AM

June 23, 2017

Hynek Schlawack

Sharing Your Labor of Love: PyPI Quick and Dirty

A completely incomplete guide to packaging a Python module and sharing it with the world on PyPI.

by Hynek Schlawack at June 23, 2017 12:00 AM

June 21, 2017

Itamar Turner-Trauring

The bad reasons you're forced to work long hours

Working long hours is unproductive, unhealthy, and unfortunately common. I strongly believe that working fewer hours is good for you and your employer, yet many companies and managers force you to work long hours, even as it decreases worker productivity.

So why do they do it? Let's go over some of the reasons.

Leading by example

Some managers simply don't understand that working long hours is counter-productive. Consider the founders of a startup. They love their job: the startup is their baby, and they are happy to work long hours to ensure it succeeds. That may well be inefficient and counter-productive, but they won't necessarily realize this.

The employees that join afterwards take their cue from the founders: if the boss is working long hours, it's hard not to do so yourself. And since the founders love what they're building it never occurs to them that long hours might not be for everyone, or even might be an outright negative for the company. Similar situations can also happen in larger organizations, when a team lead or manager puts in long hours out of a sense of dedication.

A sense of entitlement

A less tractable problem is a manager who thinks they own your life. Jason Fried describes this as a Managerial Entitlement Complex: the idea that if someone is paying you a salary they are entitled to every minute of your time.

In this situation the problem isn't ignorance on the part of your manager. The problem is that your manager doesn't care about you as a human being or even as an employee. You're a resource provided by the human resources department, just like the office printer is provided by the IT department.

Control beats profits

Another problem is the fact that working hours are easy to measure, and therefore easy to control. When managers or companies see their employees as a cost center (and at least in the US the corporate culture is heavily biased against labor costs) the temptation to "control costs" by measuring and maximizing hours can be hard to resist.

Of course, this results in less output, and so it is not rational behavior if the goal is to maximize profits. Would companies actually choose labor control over productivity? Evidence from other industries suggests they would.

Up until the 1970s many farms in California forced their workers to use a short hoe, which involved bending over continuously. The result was a high rate of worker injuries. Employers liked the short hoe because they could easily control farm workers' labor: because of the way the workers bent over when using the short hoe it was easy to see whether or not they were working.

After a series of strikes and lawsuits by the United Farm Workers the short hoe was banned. The result? According to the CEO of a large lettuce grower, productivity actually went up.

(I learned this story from the book Solving the Climate Crisis through Social Change, by Gar W. Lipow. The book includes a number of other examples and further references.)

Bad incentives, or Cover Your Ass

Bad incentives in one part of the company can result in long hours in another. Consider this scenario: the sales team, which is paid on commission, has promised a customer to deliver a series of features in a month. Unfortunately implementing those features will take 6 months. The sales team doesn't care: they're only paid for sales, and delivering the product isn't their problem.

Now put yourself in the place of the tech lead or manager whose team has to implement those features. You can try to push back against the sales team's promises, but in many companies that will result in being seen as "not a team player." And when the project fails you and your team will be blamed by sales for not delivering on the company's promises.

When you've been set up to fail, your primary goal is to demonstrate that the inevitable failure was not your fault. The obvious and perhaps only way for you to do this is to have your team work long hours, a visible demonstration of commitment and effort. "We did everything we could! We worked 12 hour days, 6 days a week but we just couldn't do it."

Notice that in this scenario the manager may be good at their job; the issue is the organization as a whole.

Hero syndrome

Hero syndrome is another organizational failure that can cause long working hours. Imagine you're an engineer working for a startup that's going through a growth spurt. Servers keep going down under load, the architecture isn't keeping up, and there are lots of other growing pains. One evening the whole system goes down, and you stay up until 4AM bringing it back up. At the next company event you are lauded as a hero for saving the day... but no one devotes any resources to fixing the underlying problems.

The result is hero syndrome: the organization rewards those who save the day at the last minute, rather than the work that prevents problems in the first place. And so they end up with a cycle of failure: tired engineers making mistakes, a lack of resources to build good infrastructure, and rewards for engineers who work long hours to try to duct-tape a structure that is falling apart.

Avoiding bad companies

Working long hours is not productive. But since many companies don't understand this, when you're looking for a new job be on the lookout for the problems I described above. And if you'd like more tips to help you work a sane, productive workweek, check out my email course, the Programmer's Guide to a Sane Workweek.

June 21, 2017 04:00 AM

June 19, 2017

Hynek Schlawack

Why Your Dockerized Application Isn’t Receiving Signals

Proper cleanup when terminating your application isn’t less important when it’s running inside of a Docker container. Although it only comes down to making sure signals reach your application and handling them, there are a bunch of things that can go wrong.

by Hynek Schlawack at June 19, 2017 12:00 AM

June 14, 2017

Itamar Turner-Trauring

Lawyers, bad jokes and typos: how not to name your software

When you're writing software you'll find yourself naming every beast of the field, and every fowl of the air: projects, classes, functions, and variables. There are many ways to fail at naming projects, and when you do the costs of a bad name can haunt you for years.

To help you avoid these problems, let me share some of the bad naming schemes I have been responsible for, observed, or had inflicted on me. You can do better.

Five naming schemes to avoid

They're gonna try to take it

Rule #1: don't give your software the same name as a heavy metal band, or more broadly anyone who can afford to have a lawyer on retainer.

Long ago, the open source Twisted project had a sub-package for managing other subprocesses. Twisted's main package is called twisted, and the author decided to call this package twisted.sister. This was a mistake.

One day my company, which advertised Twisted consulting services, received a cease and desist letter from the lawyers of the band Twisted Sister. They indicated that Twisted's use of the name Twisted Sister was a violation of the band's intellectual property, demanded we stop immediately, after which they wanted to discuss damages. Since my company didn't actually own Twisted this was a little confusing, but I passed this on to the project.

The project wrote the lawyers explaining that Twisted was run by hobbyists, just so it was clear there was no money to be had. Twisted also changed the package name from twisted.sister to twisted.sibling: none of us believed the lawyers' claim had any validity, but no one wanted to deal with the hassle of fighting them.

A subject of ridicule

Rule #2: don't pick a name that will allow people to make bad jokes about you.

Continuing with the travails of the Twisted project, naming the project "Twisted" was a mistake. Python developers have, until recent years, not been very comfortable with asynchronous programming, and Twisted is an async framework. Unfortunately, having a name with negative connotations meant this discomfort was verbalized in a way that associated it with the project.

"Twisted" led people to say things like "Twisted is so twisted," over and over and over again. Other async libraries for Python, like asyncore or Tornado, had neutral names and didn't suffer from this problem.

Bad metaphors

Rule #3: if you're going to use an extended metaphor, pick one that makes sense.

Continuing to pick on Twisted yet again (sorry!), one of Twisted's packages is a remote method invocation library, similar to Java RMI. The package is called twisted.spread, the wire format is twisted.spread.banana, the serialization layer is twisted.spread.jelly, and the protocol itself is twisted.spread.pb.

This naming scheme, based on peanut butter and jelly sandwiches, has a number of problems. To begin with, PB&J is very American, and software is international. As a child and American emigrant living in a different country, the peanut butter and banana sandwiches my mother made led to ridicule by my friends.

Minor personal traumas aside, this naming scheme has no relation to what the software actually does. Silliness is a fine thing, but names should also be informative. The Homebrew project almost falls into this trap, with formulas and taps and casks and whatnot. But while the metaphor is a little unstable on its feet, it's not quite drunk enough to completely fall over.


Typos

Rule #4: avoid names with common typos.

One of my software projects is named Crochet. Other people—and I make this typo too, to be fair—will mistakenly write "crotchet" instead, which the dictionary describes as "a perverse fancy; a whim which takes possession of the mind; a conceit."

Bonus advice: you may wish to avoid whims or conceits when naming your software projects.

I can't even

Rule #5: avoid facepalms.

I used to work for a company named ClusterHQ, and our initial product was named Flocker. When the company shut down the CEO wrote a blog post titled ClusterF**ed.

Why you shouldn't listen to my advice

Twisted has many users, from individuals to massive corporations. My own project, Crochet, has a decent number. ClusterHQ shut down for reasons that had nothing to do with its name. So it's not clear any of this makes a difference.

You should certainly avoid names that confuse your users, and you'll be happier if you can avoid lawsuits. But if you're going to be writing software all day, you should enjoy yourself while you do. If your programming language supports Unicode symbols, why not use emoji in your project name?

🙊🙉🙈 has a nice sound to it.

By the way, if you'd like to learn how to avoid my many mistakes, subscribe to my weekly newsletter. Every week I share one of my programming or career mistakes and how you can avoid it.

June 14, 2017 04:00 AM

June 12, 2017

Hynek Schlawack

Hardening Your Web Server’s SSL Ciphers

There are many wordy articles on configuring your web server’s TLS ciphers. This is not one of them. Instead I will share a configuration which is both compatible enough for today’s needs and scores a straight “A” on Qualys’s SSL Server Test.

by Hynek Schlawack at June 12, 2017 10:00 AM

June 11, 2017

Twisted Matrix Laboratories

Twisted 17.5.0 Released

On behalf of Twisted Matrix Laboratories, I am honoured to announce the release of Twisted 17.5!

The highlights of this release are:

  • twisted.python.url has been spun out into the new 'hyperlink' package; importing twisted.python.url is now a compatibility alias
  • Initial support for OpenSSL 1.1.0.
  • Fixes around the reactor DNS resolver changes in 17.1, solving all known regressions
  • Deferred.asFuture and Deferred.fromFuture, to allow you to map asyncio Futures to Twisted Deferreds and vice versa, for use with the Python 3+ asyncioreactor in Twisted
  • Support for TLS 1.3 ciphersuites, in advance of a released OpenSSL to enable the protocol
  • Further Python 3 support in twisted.web, initial support in twisted.mail.smtp.

For more information, check the NEWS file (link provided below).

You can find the downloads on PyPI (or alternatively our website). The NEWS file is also available on GitHub.

Many thanks to everyone who had a part in this release - the supporters of the Twisted Software Foundation, the developers who contributed code as well as documentation, and all the people building great things with Twisted!

Twisted Regards,
Amber Brown (HawkOwl)

by Amber Brown at June 11, 2017 01:22 AM

June 05, 2017

Jp Calderone

Twisted Web in 60 Seconds: HTTP/2

Hello, hello. It's been a long time since the last entry in the "Twisted Web in 60 Seconds" series. If you're new to the series and you like this post, I recommend going back and reading the older posts as well.

In this entry, I'll show you how to enable HTTP/2 for your Twisted Web-based site. HTTP/2 is the latest entry in the HTTP family of protocols. It builds on work from Google and others to address performance (and other) shortcomings of the older HTTP/1.x protocols in widespread use today.

Twisted implements HTTP/2 support by building on the general-purpose H2 Python library. In fact, all you have to do to have HTTP/2 for your Twisted Web-based site (starting in Twisted 16.3.0) is install the dependencies:

$ pip install twisted[http2]

Your TLS-based site is now available via HTTP/2! A future version of Twisted will likely extend this to non-TLS sites (which requires the Upgrade: h2c handshake) with no further effort on your part.

by Jean-Paul Calderone at June 05, 2017 05:56 PM

June 03, 2017

Jonathan Lange

SPAKE2 in Haskell: What is SPAKE2?

Last post, I discussed how I found myself implementing SPAKE2 in Haskell. Here, I want to discuss what SPAKE2 is, and why you might care.

I just want to send a file over the internet

Long ago, Glyph lamented that all he wanted to do was send a file over the internet. Since then, very little has changed.

If you want to send a file to someone, you either attach it to an email, or you upload it to some central, cloud-based service that you both have access to: Google Drive; Dropbox; iCloud; etc.

But what if you don’t want to do that? What if you want to send a file directly to someone, without intermediaries?

First, you need to figure out where they are. That’s not what SPAKE2 does.

Once you have figured out where they are, you need:

  1. Their permission
  2. Assurance that you are sending the file to the right person
  3. A secure channel to send the actual data

SPAKE2 helps with all three of these.

What is SPAKE2?

SPAKE2 is a password-authenticated key exchange (PAKE) protocol.

This means it is a set of steps (a protocol) to allow two parties to share a session key (“key exchange”) based on a password that they both know (“password-authenticated”).

There are many such protocols, but as mentioned last post, I know next to nothing about cryptography, so if you want to learn about them, you’ll have to go elsewhere.

SPAKE2 is designed under a certain set of assumptions and constraints.

First, we don’t know if the person we’re talking to is the person we think we are talking to, but we want to find out. That is, we need to authenticate them, and we want to use the password to do this (hence “password-authenticated”).

Second, the shared password is expected to be weak, such as a PIN code, or a couple of English words stuck together.

What does this mean?

These assumptions have a couple of implications.

First, we want to give our adversary as few chances as possible to guess the password. The password is precious, we don’t want to lose it. If someone discovers it, they could impersonate us or our friends, and gain access to precious secrets.

Specifically, this means we want to prevent offline dictionary attacks (where the adversary can snapshot some data and run it against all common passwords at their leisure) against both eavesdropping adversaries (those snooping on our connection) and active adversaries (people pretending to be our friend).

Second, we don’t want to use the password as the key that encrypts our payload. We need to use it to generate a new key, specific to this session. If we re-use passwords, eventually we’ll send some encrypted content for which the plaintext is known; an eavesdropper can find this and brute-force the password at their leisure.

How does SPAKE2 solve this?

To explain how SPAKE2 solves this, it can help to go through a couple of approaches that definitely do not work.

For example, we could just send the password over the wire. This is a terrible idea. Not only does it expose the password to eavesdroppers, but it also gives us no evidence that the other side knows the password. After all, we could send them the password, and they could send it right back.

We need to send something over the wire that is not the password, but that could only have been generated with the password.

So perhaps our next refinement might be that we send our name, somehow cryptographically signed with password.

This is better than just sending the password, but still leaves us exposed to offline dictionary attacks. After all, our name is well-known in plain text, so an eavesdropper can look out for it in the protocol, snaffle up the ciphertext, and then run a dictionary against it at their leisure. It also leaves open the question of how we will generate a session key.

SPAKE2 goes a few steps further than this. Rather than sending a signed version of some known text, each side sends an “encrypted” version of a random value, using the password as a key.

Each side then decrypts the value it receives from the other side, and then uses its random value and the other random value as inputs to a hash function that generates a session key.

If the passwords are the same, the session key will be the same, and both sides will be able to communicate.

That is the shorter answer for “How does SPAKE2 work?”. The longer answer involves rather a lot of mathematics.

Show me the mathematics

When I was learning SPAKE2, this was a bit of a problem for me. I hit three major hurdles.

  1. Notation—maths just has obscure notation
  2. Terminology—maths uses non-descriptive words for concepts
  3. Concepts—some are merely unfamiliar, others genuinely difficult

In this post, I want to help you over all of these hurdles, such that you can go and read papers and blog posts by people who actually understand what they are talking about. This means that I’ll try to go out of my way to explain the notation and terminology while also going through the core concepts.

I want to stress that I am not an expert. What you’re reading here is me figuring this out for myself, with a little help from my friends and the Internet.


We can think of SPAKE2 as having five stages:

  1. Public system parameters, established before any exchange takes place
  2. A secret password shared between two parties
  3. An exchange of data
  4. Using that exchange to calculate a key
  5. Generating a session key

We’ll deal with each in turn.

System parameters

First, we start with some system parameters. These are things that both ends of the SPAKE2 protocol need to have baked into their code. As such, they are public.

These parameters are:

  • a group, \(G\), of prime order \(p\)
  • a generator element, \(g\), from that group
  • two arbitrary elements, \(M\), \(N\), of the group

What’s a group? A group \((G, +)\) is a set together with a binary operator such that:

  • adding any two members of the group gets you another member of the group (closed under \(+\))
  • the operation \(+\) is associative, that is \(X + (Y + Z) = (X + Y) + Z\) (associativity)
  • there’s an element, \(0\), such that \(X + 0 = X = 0 + X\) for any element \(X\) in the group (identity)
  • for every element, \(X\), there’s an element \(-X\), such that \(X + (-X) = 0\) (inverse)

It’s important to note that \(+\) stands for a generic binary operation with these properties, not necessarily any kind of addition, and \(0\) stands for the identity, rather than the numeral 0.

To get a better sense of this somewhat abstract concept, it can help to look at a few concrete examples. These don’t have much to do with SPAKE2 per se, they are just here to help explain groups.

The integers with addition form a group with \(0\) as the identity, because you can add and subtract (i.e. add the negative) them and get other integers, and because addition is associative.

The integers with multiplication are not a group, because what’s the inverse of 2?

But the rational numbers greater than zero with multiplication do form a group, with 1 as the identity.

Likewise, the nonzero integers with multiplication modulo a prime form a group—a finite group. For example, for the nonzero integers with multiplication modulo 7, the identity is 1, multiplication is associative, and the inverse of 2 is 4, because \((2 \cdot 4) \mod 7 = 1\).
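That modulo-7 example can be verified directly from the definitions, in a few lines:

```python
# Check the group axioms for the nonzero integers modulo 7 under
# multiplication, straight from the definitions above.
G = [1, 2, 3, 4, 5, 6]
op = lambda x, y: (x * y) % 7

assert all(op(x, y) in G for x in G for y in G)       # closure
assert all(op(x, 1) == x == op(1, x) for x in G)      # identity is 1
assert all(any(op(x, y) == 1 for y in G) for x in G)  # inverses exist
assert op(2, 4) == 1                                  # 2's inverse is 4
```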

But but! When we are talking about groups in the abstract, we’ll still call the operation \(+\) and the identity \(0\), even if the implementation is that the operation is multiplication.

But but but! This is not at all a universally followed convention, so when you are reading about groups, you’ll often see the operation written as a product (e.g. \(XY\) or \(X \cdot Y\) instead of \(X + Y\)) and the identity written as \(1\).

Still with me?

Why groups?

You might be wondering why we need this “group” abstraction at all. It might seem like unnecessary complexity.

Abstractions like groups are a lot like the programming concept of an abstract interface. You might write a function in terms of an interface because you want to allow for lots of different possible implementations. Doing so also allows you to ignore details about specific concrete implementations so you can focus on what matters—the external behaviour.

It’s the same thing here. The group could be an elliptic curve, or something to do with prime numbers, or something else entirely—SPAKE2 doesn’t care. We want to define our protocol to allow lots of different underlying implementations, and without getting bogged down in how they actually work.

For SPAKE2, we have an additional requirement for the group: it is finite and has a prime number of elements. We’ll use \(p\) to refer to this number—this is what is meant by “of prime order \(p\)” above.

Due to the magic of group theory [1], this gives \(G\) some bonus properties:

  • it is cyclic: we can generate all of the elements of the group by picking one (not the identity) and adding it to itself over and over
  • it is abelian, that is \(X + Y = Y + X\), for any two elements \(X\), \(Y\) in \(G\) (commutativity)

Which explains what we mean by “a generator element”, \(g\): it’s just an element from the group that’s not the identity. We can use it to make any other element of the group by adding it to itself. If the group weren’t cyclic, we could, in general, only use \(g\) to generate a subgroup.

The process of adding an element to itself over and over is called scalar multiplication [2]. In mathematical notation, we write it like this:

\begin{equation*} Y = nX \end{equation*}

Or slightly more verbosely like:

\begin{equation*} Y = n \cdot X \end{equation*}

Where \(n\) is an integer and \(X\) is a member of the group, and \(Y\) is the result of adding \(X\) to itself \(n\) times.

If \(n\) is 0, \(Y\) is \(0\). If \(n\) is 1, \(Y\) is \(X\).

Just as the group operator is sometimes written with product notation rather than addition, so too is scalar multiplication sometimes written with exponentiation, to denote multiplying a thing by itself \(n\) times, e.g.

\begin{equation*} Y = X^n \end{equation*}

I’m going to stick to the \(n \cdot X\) notation in this post, and I’m always going to put the scalar first.

Also, I am mostly going to use upper case (e.g. \(X\), \(Y\)) to refer to group elements (with the exception of the generator element, \(g\)) and lower case (e.g. \(n\), \(k\)) to refer to scalars. I’m going to try to be consistent, but it’s always worth checking for yourself.

Because the group \(G\) is cyclic, if we have some group element \(X\) and a generator \(g\), there will always be a number, \(k\), such that:

\begin{equation*} X = k \cdot g \end{equation*}

Here, \(k\) would be called the discrete log of \(X\) with respect to base \(g\). “Log” is a nod to exponentiation notation, “discrete” because this is a finite group.
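As a sketch, scalar multiplication can be implemented generically for any group using the standard double-and-add trick. The function and the toy group below are my own illustration, not from any SPAKE2 implementation:

```python
def scalar_multiply(n, x, add, identity):
    """Compute n*x = x + x + ... + x (n times) in a generic group.

    `add` is the group operation and `identity` its identity element.
    Double-and-add keeps this to O(log n) group operations instead
    of n.
    """
    result = identity
    addend = x
    while n:
        if n & 1:            # this bit of n is set: include this power
            result = add(result, addend)
        addend = add(addend, addend)   # double
        n >>= 1
    return result

# In the "multiplication modulo 7" group from earlier, the abstract
# "+" is modular multiplication and the abstract "0" is 1, so scalar
# multiplication is really modular exponentiation:
mul7 = lambda a, b: (a * b) % 7
assert scalar_multiply(5, 3, mul7, 1) == pow(3, 5, 7)
```

This is exactly why “log” shows up in “discrete log”: in multiplicative notation, \(n \cdot X\) is \(X^n\), and recovering \(n\) is taking a logarithm.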

Time to recap.

SPAKE2 has several public parameters, which are

  • a group, \(G\), of prime order \(p\), which means it’s cyclic, abelian, and we can do scalar multiplication on it
  • a generator element, \(g\), from that group, that we will do a lot of scalar multiplication with
  • two arbitrary elements, \(M\), \(N\), of the group, where no one knows the discrete log [3] with respect to \(g\).

Shared secret password

The next thing we need to begin a SPAKE2 exchange is a shared secret password.

In human terms, this will be a short string of bytes, or a PIN.

In terms of the mathematical SPAKE2 protocol, this must be a scalar, \(pw\).

How we go from a string of bytes to a scalar is completely out of scope for the SPAKE2 paper. This confused me when trying to implement SPAKE2 in Haskell, and I don’t claim to fully understand it now.

We HKDF expand the password in order to get a more uniform distribution of scalars [4]. This still leaves us with a byte string, though.

To turn that into an integer (i.e. a scalar), we simply base256 decode the byte string.

This gives us \(pw\), which we use in the next step.
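A sketch of that derivation, with SHA-256 standing in for the HKDF expansion (an assumption on my part; the derivation python-spake2 actually uses differs in detail):

```python
import hashlib

def password_to_scalar(password: bytes, p: int) -> int:
    """Turn a password byte string into a scalar modulo the group order.

    SHA-256 here is a stand-in for the HKDF expansion described above.
    """
    expanded = hashlib.sha256(b"SPAKE2 pw" + password).digest()
    # "base256 decode": interpret the bytes as a big-endian integer,
    # then reduce into the range of valid scalars.
    return int.from_bytes(expanded, "big") % p
```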

Data exchange

At this point, the user has entered a password and we’ve converted it into a scalar.

We need some way to convince the other side that we know the password without actually sending the password to them.

This means two things:

  1. We have to send them something based on the password
  2. We get to use all of the shiny mathematics we introduced earlier

The process for both sides is the same, but each side needs to know who's who. One side is side A, and the other is side B; how they figure out which is which happens outside the protocol.

Each side draws a random scalar between \(0\) and \(p\): \(x\) for side A, \(y\) for side B. They then use that to generate an element: \(X = x \cdot g\) for side A, \(Y = y \cdot g\) for side B.

They then “blind” this value by adding it to one of the elements that make up the system parameters, scalar multiplied by \(pw\), our password.

Thus, side A makes \(X^{\star} = X + pw \cdot M\) and side B makes \(Y^{\star} = Y + pw \cdot N\).

They then each send this to the other side and wait to receive the equivalent message.

Again, the papers don’t say how to encode the message, so python-spake2 just base-256 encodes it and has side A prepend the byte A (0x41) and side B prepend B (0x42).

Calculating a key

Once each side has the other side’s message, they can start to calculate a key.

Side A calculates its key like this:

\begin{equation*} K_A = x \cdot (Y^{\star} - pw \cdot N) \end{equation*}

The bit inside the parentheses is side A recovering \(Y\), since we defined \(Y^{\star}\) as:

\begin{equation*} Y^{\star} = Y + pw \cdot N \end{equation*}

We can rewrite that in terms of \(Y\) by subtracting \(pw \cdot N\) from both sides:

\begin{equation*} Y = Y^{\star} - pw \cdot N \end{equation*}

Which means that, as long as both sides have the same value for \(pw\), we can swap in \(Y\) like so:

\begin{align*} K_A& = x \cdot Y \\ & = x \cdot (y \cdot g) \\ & = xy \cdot g \end{align*}

Remember that when we created \(Y\) in the first place, we did so by multiplying our generator \(g\) by a random scalar \(y\).

Side B calculates its key in the same way:

\begin{align*} K_B& = y \cdot (X^{\star} - pw \cdot M) \\ & = y \cdot X \\ & = y \cdot (x \cdot g) \\ & = yx \cdot g \\ & = xy \cdot g \end{align*}

Thus, if both sides used the same password, \(K_A = K_B\).
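The whole exchange can be checked end to end in a toy setting (integers mod a small prime standing in for the group; g, M, N, and pw are made-up values): both sides unblind what they received and arrive at the same \(xy \cdot g\).

```python
import secrets

p = 7919            # toy prime modulus; NOT a secure group
g, M, N = 5, 7, 11  # toy public parameters
pw = 1234           # the shared password, already turned into a scalar

x = secrets.randbelow(p)      # side A's secret scalar
y = secrets.randbelow(p)      # side B's secret scalar

Xstar = (x * g + pw * M) % p  # side A sends X* = X + pw·M
Ystar = (y * g + pw * N) % p  # side B sends Y* = Y + pw·N

# Each side subtracts the other's blinding term, then multiplies by its own scalar.
K_A = (x * ((Ystar - pw * N) % p)) % p  # x·Y = x·(y·g) = xy·g
K_B = (y * ((Xstar - pw * M) % p)) % p  # y·X = y·(x·g) = xy·g
assert K_A == K_B             # same password, same key
```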

Generating a session key

Both sides now have:

  • \(X^{\star}\)
  • \(Y^{\star}\)
  • Either \(K_A\) or \(K_B\)
  • \(pw\), or at least their own opinion of what \(pw\) is

To these we add the heretofore unmentioned \(A\) and \(B\), which are meant to be identifiers for side A and side B respectively. Each side presumably communicates these to the other out of band, outside of SPAKE2.

We then hash all of these together, using a hash algorithm, \(H\), that both sides have previously agreed upon, so that:

\begin{equation*} SK = H(A, B, X^{\star}, Y^{\star}, pw, K) \end{equation*}

Where \(K\) is either \(K_A\) or \(K_B\).

I don’t really understand why this step is necessary—why not use \(K\)? Nor do I understand why each of the inputs to the hash is necessary—\(K\) is already derived from \(X^{\star}\), why do we need \(X^{\star}\)?

In the code, we change this ever so slightly:

\begin{equation*} SK = H(H(pw), H(A), H(B), X^{\star}, Y^{\star}, K) \end{equation*}

Basically, we hash all of the variable length fields to make them fixed length to avoid collisions between values. [5]

python-spake2 uses SHA256 as the hash algorithm for this step. I’ve got no idea why this and not, say, HKDF.
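A minimal sketch of that hashing step (the byte encoding of each field here is illustrative, not python-spake2’s exact layout):

```python
import hashlib

def session_key(pw: bytes, id_a: bytes, id_b: bytes,
                xstar: bytes, ystar: bytes, k: bytes) -> bytes:
    """SK = H(H(pw), H(A), H(B), X*, Y*, K) using SHA256.

    The variable-length fields are hashed first so they become fixed
    length and field boundaries cannot be confused.
    """
    h = hashlib.sha256()
    for part in (hashlib.sha256(pw).digest(),
                 hashlib.sha256(id_a).digest(),
                 hashlib.sha256(id_b).digest(),
                 xstar, ystar, k):
        h.update(part)
    return h.digest()
```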

And this is the session key. SPAKE2 is done!

Did SPAKE2 solve our problem?

We wanted a way of authenticating a remote connection using a password, without having to share that password, and without using that password to encrypt known plaintext. We’ve done that.

The SPAKE2 protocol above will result in two sides negotiating a shared session key, sending only randomly generated data over the wire.

Is it vulnerable to offline dictionary attacks? No. The value we send over the wire is just a random group element encrypted with the password. Even if an eavesdropper gets that value and runs a dictionary against it, they’ll have no way of determining whether they’ve cracked it or not. After all, one random value looks very much like another.

Where to now?

I’m looking forward to learning about elliptic curves, and to writing about what it was like to use Haskell in particular to implement SPAKE2.

I learned a lot implementing SPAKE2, then learned a lot more writing this post, and have much to learn still.

But perhaps the biggest thing I’ve learned is that although maths isn’t easy, it’s at least possible, and that sometimes, if you want to send a file over the Internet, what you really need is a huge pile of math.

Let me know if I’ve got anything wrong, or if this inspires you to go forth and implement some crypto papers yourself.


This post owes a great deal to Brian Warner’s “Magic Wormhole” talk, to Jake Edge’s write-up of that talk, and to Michel Abdalla and David Pointcheval’s paper “Simple Password-Based Encrypted Key Exchange Protocols”.

Bice Dibley, Chris Halse Rogers, and JP Viljoen all read early drafts and provided helpful suggestions. This piece has been much improved by their input. Any infelicities are my own.

I wouldn’t have written this without being inspired by Julia Evans. Julia often shares what she’s learning as she learns it, and does a great job at making difficult topics seem approachable and fun. I highly recommend her blog, especially if you’re into devops or distributed systems.

[1]I used to know the proof for this but have since forgotten it, so I’m taking this on faith for now.
[2]With scalar multiplication, we aren’t talking about a group, but rather a \(\mathbb{Z}\)-module. But at this point, I can’t even, so look it up on Wikipedia if you’re interested.
[3]Taking this on faith too.
[4]Yup, faith again.
[5]I only sort of understand why this is necessary.

by Jonathan Lange at June 03, 2017 11:00 PM

June 01, 2017

Glyph Lefkowitz

The Sororicide Antipattern

“Composition is better than inheritance.” This is a true statement. “Inheritance is bad.” Also true. I’m a well-known compositional extremist. There’s a great talk you can watch if I haven’t talked your ear off about it already.

Which is why I was extremely surprised in a recent conversation when my interlocutor said that while inheritance might be bad, composition is worse. Once I understood what they meant by “composition”, I was even more surprised to find that I agreed with this assertion.

Although inheritance is bad, it’s very important to understand why. In a high-level language like Python, with first-class runtime datatypes (i.e.: user defined classes that are objects), the computational difference between what we call “composition” and what we call “inheritance” is a matter of where we put a pointer: is it on a type or on an instance? The important distinction has to do with human factors.

First, a brief parable about real-life inheritance.

You find yourself in conversation with an indolent heiress-in-waiting. She complains of her boredom whiling away the time until the dowager countess finally leaves her her fortune.

“Inheritance is bad”, you opine. “It’s better to make your own way in life”.

“By George, you’re right!” she exclaims. You weren’t expecting such an enthusiastic reversal.

“Well,”, you sputter, “glad to see you are turning over a new leaf”.

She crosses the room to open a sturdy mahogany armoire, and draws forth a belt holstering a pistol and a menacing-looking sabre.

“Auntie has only the dwindling remnants of a legacy fortune. The real money has always been with my sister’s manufacturing concern. Why passively wait for Auntie to die, when I can murder my dear sister now, and take what is rightfully mine!”

Cinching the belt around her waist, she strides from the room animated and full of purpose, no longer indolent or in-waiting, but you feel less than satisfied with your advice.

It is, after all, important to understand what the problem with inheritance is.

The primary reason inheritance is bad is confusion between namespaces.

The most important role of code organization (division of code into files, modules, packages, subroutines, data structures, etc) is division of responsibility. In other words, Conway’s Law isn’t just an unfortunate accident of budgeting, but a fundamental property of software design.

For example, if we have a function called multiply(a, b) - its presence in our codebase suggests that if someone were to want to multiply two numbers together, it is multiply’s responsibility to know how to do so. If there’s a problem with multiplication, it’s the maintainers of multiply who need to go fix it.

And, with this responsibility comes authority over a specific scope within the code. So if we were to look at an implementation of multiply:

def multiply(a, b):
    product = a * b
    return product

The maintainers of multiply get to decide what product means in the context of their function. It’s possible, in Python, for some other function to reach into multiply with frame objects and mangle the meaning of product between its assignment and return, but it’s generally understood that it’s none of your business what product is, and if you touch it, all bets are off about the correctness of multiply. More importantly, if the maintainers of multiply wanted to bind other names, or change around existing names, like so, in a subsequent version:

def multiply(a, b):
    factor1 = a
    factor2 = b
    result = a * b
    return result

It is the maintainer of multiply’s job, not the caller of multiply, to make those decisions.

The same programmer may, at different times, be both a caller and a maintainer of multiply. However, they have to know which hat they’re wearing at any given time, so that they can know which stuff they’re still responsible for when they hand over multiply to be maintained by a different team.

It’s important to be able to forget about the internals of the local variables in the functions you call. Otherwise, abstractions give us no power: if you have to know the internals of everything you’re using, you can never build much beyond what’s already there, because you’ll be spending all your time trying to understand all the layers below it.

Classes complicate this process of forgetting somewhat. Properties of class instances “stick out”, and are visible to the callers. This can be powerful — and can be a great way to represent shared data structures — but this is exactly why we have the ._ convention in Python: if something starts with an underscore, and it’s not in a namespace you own, you shouldn’t mess with it. So: other._foo is not for you to touch, unless you’re maintaining type(other). self._foo is where you should put your own private state.

So if we have a class like this:

class A(object):
    def __init__(self):
        self._note = "a note"

we all know that A()._note is off-limits.

But then what happens here?

class B(A):
    def __init__(self):
        self._note = "private state for B()"

B()._note is also off limits for everyone but B, except... as it turns out, B doesn’t really own the namespace of self here, so it’s clashing with what A wants _note to mean. Even if, right now, we were to change it to _note2, the maintainer of A could, in any future release of A, add a new _note2 variable which conflicts with something B is using. A’s maintainers (rightfully) think they own self, B’s maintainers (reasonably) think that they do. This could continue all the way until we get to _note7, at which point it would explode violently.
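The collision is easy to demonstrate with the classes above (the describe method is added here for illustration): even if B dutifully calls super().__init__(), its assignment silently clobbers the _note that A depends on.

```python
class A(object):
    def __init__(self):
        self._note = "a note"

    def describe(self):
        # A's maintainer believes self._note is A's own private state.
        return "A's note is: " + self._note

class B(A):
    def __init__(self):
        super().__init__()
        self._note = "private state for B()"  # clobbers A's _note

print(B().describe())  # A now reports B's private state, not its own
```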

So that’s why Inheritance is bad. It’s a bad way for two layers of a system to communicate because it leaves each layer nowhere to put its internal state that the other doesn’t need to know about. So what could be worse?

Let’s say we’ve convinced our junior programmer who wrote A that inheritance is a bad interface, and they should instead use the panacea that cures all inherited ills, composition. Great! Let’s just write a B that composes in an A in a nice clean way, instead of doing any gross inheritance:

class Bprime(object):
    def __init__(self, a):
        for var in dir(a):
            setattr(self, var, getattr(a, var))

Uh oh. Looks like composition is worse than inheritance.

Let’s enumerate some of the issues with this “solution” to the problem of inheritance:

  • How do we know what attributes Bprime has?
  • How do we even know what type a is?
  • How is anyone ever going to grep for relevant methods in this code and have them come up in the right place?

We briefly reclaimed self for Bprime by removing the inheritance from A, but what Bprime does in __init__ to replace it is much worse. At least with normal, “vertical” inheritance, IDEs and code inspection tools can have some idea where your parents are and what methods they declare. We have to look aside to know what’s there, but at least it’s clear from the code’s structure where exactly we have to look aside to.

When faced with a class like Bprime though, what does one do? It’s just shredding apart some apparently totally unrelated object, and there’s nearly no way for tooling to inspect this code to the point that it knows where self.<something> comes from in a method defined on Bprime.

The goal of replacing inheritance with composition is to make it clear and easy to understand what code owns each attribute on self. Sometimes that clarity comes at the expense of a few extra keystrokes; an __init__ that copies over a few specific attributes, or a method that does nothing but forward a message, like def something(self): return self.other.something().
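A sketch of that clearer, manual form of composition, reusing the A class from earlier with an illustrative something method: each class now has its own self, so private names cannot collide.

```python
class A(object):
    def __init__(self):
        self._note = "a note"  # A's private state, owned by A

    def something(self):
        return "A did something"

class B(object):
    def __init__(self):
        self._a = A()          # composed, not inherited
        self._note = "private state for B()"  # no clash: B owns this self

    def something(self):
        # A plain forwarding method: a few extra keystrokes, but readers
        # and tools can see exactly where the behavior lives.
        return self._a.something()
```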

Automatic composition is just lateral inheritance. Magically auto-proxying all methods [1], or auto-copying all attributes, saves a few keystrokes at the time some new code is created at the expense of hours of debugging when it is being maintained. If readability counts, we should never privilege the writer over the reader.

  [1] It is left as an exercise for the reader why proxyForInterface is still a reasonably okay idea even in the face of this criticism. [2]

  [2] Although ironically it probably shouldn’t use inheritance as its interface.

by Glyph at June 01, 2017 06:25 AM

Itamar Turner-Trauring

The best place to practice your programming skills

If you want to be a better programmer, what is the best way to practice and improve your skills? You can improve your skills by working on a side project, but what if you don't have the free time or motivation? You can learn from a mentor or more experienced programmers, but what if you don't work with any?

There is one more way you can practice skills, and in many ways it's the ideal way. It doesn't require extra time, or experienced colleagues: all it takes is a job as a software engineer. The best way to improve your skills is by practicing them on the job.

Now, when you see the word "skills" you might immediately translate that into "tools and technologies". And learning new technologies on the job is not always possible, especially when your company is stuck on one particular tech stack. But your goal as a programmer isn't to use technologies.

Your goal is to solve problems, and writing software is a technique that helps you do that. Your job might be helping people plan their travel, or helping a business sell more products, or entertaining people. These goals might require technology, but the technology is a means not an end. So while understanding particular technologies is both useful and necessary, it is just the start of the skills you need as a programmer.

The skills you need

Many of the most important skills you need as a programmer have nothing to do with the particulars of any technology, and everything to do with learning how to better identify and solve problems.

Here are some of the skills that will help you identify problems:

  • Business goals: you choose what problems to focus on based on your organization's goals.
  • Root cause analysis: when a problem is presented you don't just accept it, but rather dig in and try to figure out the underlying problem.
  • Identifying development bottlenecks: you notice when software development is slowing down and try to figure out why.

And some of the skills you'll need to solve problems once they're found:

  • Self-management: you can organize your own time while working on a particular solution. You stay focused and on task, and if things are taking you too long you'll notice and ask for help.
  • Planning your own software project: given a problem statement and a high-level solution, you can break up the task into the necessary steps to implement that solution. You can then take those steps and figure out the best order in which to implement them.
  • Debugging: when faced with a bug you don't understand you are able to figure out what is causing it, or at least how to work around it.

These are just some of the skills you can practice and improve during your normal workday. And while there are ways your organization can help your learning—like using newer technologies, or having expert coworkers—you can practice these particular skills almost anywhere.

The best practice is the real thing

Every time you solve a problem at work you are also practicing identifying and solving problems, and practicing under the most realistic of situations. It's a real problem, you have limited resources, and there's a deadline.

Compare this to working on a side project. With a side project you have come up with the goal on your own, and there's no real deadline. It's possible to make side projects more realistic, but it's never quite the same as practicing the real thing.

Here's how to practice your skills at work:

  1. Pick a particular skill you want to work on.
  2. Pay attention to how you apply it: when do you use this skill? Are you conscious of when it's necessary and how you're applying it? Even just realizing something is a skill and then paying attention to how and when you use it can help you improve it.
  3. Figure out ways to measure the skill's impact, and work on improving the way you apply it. With debugging, for example, you can see how long it takes you to discover a problem. Then notice what particular techniques and questions speed up your debugging, and make sure you apply them consistently.
  4. Learn from your colleagues. Even if they're no more experienced than you, they still have different experiences, and might have skills and knowledge you don't.
  5. Try to notice the mistakes you make, and the cues and models that would have helped you avoid a particular mistake. This is what I do in my weekly newsletter where I share my own mistakes and what I've learned from them.
  6. If possible, find a book on the topic and skim it. You can skim a book in just an hour or two and come away with some vague models and ideas for improvement. As you do your job and notice yourself applying the skill those models will come to mind. You can then read a small, focused, and relevant part of the book in detail at just the right moment: when you need to apply the skill.

Practicing on the job

Your job is always the best place to practice your skills, because that is where you will apply your skills. The ideal, of course, is to find a job that lets you try new technologies and learn from experienced colleagues. But even if your job doesn't achieve that ideal, there are many critical skills that you can practice at almost any job. Don't waste your time at work: every workday can also be a day where you are learning.

June 01, 2017 04:00 AM

May 30, 2017

Hynek Schlawack

On Conference Speaking

I’ve seen quite a bit of the world thanks to being invited to speak at conferences. Since some people are under the impression that serial conference speakers possess some special talents, I’d like to demystify my process by walking you through my latest talk from start to finish.

by Hynek Schlawack at May 30, 2017 12:00 AM

May 26, 2017

Jonathan Lange

SPAKE2 in Haskell: the journey begins

There’s a joke about programmers that’s been doing the rounds for the last couple of years:

We do these things not because they are easy, but because we thought they would be easy.

This is about how I became the butt of a tired, old joke.

My friend Jean-Paul decided to start learning Haskell by writing a magic-wormhole client.

magic-wormhole works in part by negotiating a session key using SPAKE2: a password-authenticated key exchange protocol, so one of the first things Jean-Paul needed was a Haskell implementation.

Eager to help Jean-Paul on his Haskell journey, I volunteered to write a SPAKE2 implementation in Haskell. After all, there’s a pure Python implementation, so all I’d need to do is translate it from Python to Haskell. I know both languages pretty well. How hard could it be?

Turns out there are a few things I hadn’t really counted on.

I know next to nothing about cryptography

Until now, I could summarise what I knew about cryptography into two points:

  1. It works because factoring prime numbers is hard
  2. I don’t know enough about it to implement it reliably, I should use proven, off-the-shelf components instead

This isn’t really a solid foundation for implementing crypto code. In fact, it’s a compelling argument to walk away while I still had the chance.

My ignorance was a particular liability here, since python-spake2 assumes a lot of cryptographic knowledge. What’s HKDF? What’s Ed25519? Is that the same thing as Curve25519? What’s NIST? What’s q in this context?

python-spake2 also assumes a lot of knowledge about abstract algebra. This is less of a problem for me, since I studied a lot of that at university. However, it’s still a problem. Most of that knowledge has sat unused for fifteen or so years. Dusting off those cobwebs took time.

My rusty know-how was especially obvious when reading the PDFs that describe SPAKE2. Mathematical notation isn’t easy to read, and every subdiscipline has its own special variants (“Oh, obviously q means the size of the subgroup. That’s just convention.”)

For example, I know that what’s in spake2/ is the multiplicative group of integers modulo n, and I know what “the multiplicative group of integers modulo n” means, but I understand about 2% of the Wikipedia page on the subject, and I have even less understanding about how the group is relevant to cryptography.

The protocol diagrams that appear in the papers I read were a confusing mess of symbols at first. It took several passes through the text, and a couple of botched explanations to patient listeners on IRC before I really understood them. These diagrams now seem clear & succinct to me, although I’m sure they could be written better in code.

python-spake2 is idiosyncratic

The python-spake2 source code is made almost entirely out of object inheritance and mutation, which makes it hard for me to follow, and hard to transliterate into Haskell, where object inheritance and mutation are hard to model.

This is a very minor criticism. With magic-wormhole and python-spake2, Warner has made creative, useful software that solves a difficult problem and meets a worthwhile need.

crypto libraries rarely have beginner-friendly documentation

python-spake2 isn’t alone in assuming cryptographic knowledge. The Haskell library cryptonite is much the same. Most documentation I could find about various topics on the web pointed to pages, which either link to papers or C code.

I think this is partly driven by a concern for user safety, “if you don’t understand it, you shouldn’t be using it”. Maybe this is a good idea. The problem is that it can be hard to know where to start in order to gain that understanding.

To illustrate, I now sort of get how an elliptic curve might form a group, but have forgotten enough algebra to not know about what subgroups there are, how that’s relevant to the implementation of ed25519, how subgroups and groups relate to fields, to say nothing of how elliptic curve cryptography actually works.

I don’t really know where to go to remedy this ignorance, although I’m pretty sure doing so is within my capabilities; I just need to find the right book or course to actually teach me these things.

Protocols ain’t protocols

The mathematics of SPAKE2 are fairly clearly defined, but there is a large gap between “use this group element” and sending some bits over the wire.

python-spake2 doesn’t clearly distinguish between the mathematics of SPAKE2 and the necessary implementation decisions it makes in order to be a useful networked protocol.

This meant that when translating, it was hard to tell what was an essential detail and what was accidental detail. As Siderea eloquently points out, software is made of decisions. When writing the Haskell version, which decisions do I get to make, and which are forced upon me? Must this salt be the empty string? Can I generate the “blind” any way I want?

Eventually, I found a PR implementing SPAKE2 (and SPAKE2+, SPAKE2-EE, etc.) in Javascript. From the discussion there, I was able to synthesize a rough standard for implementing.

Jean-Paul helped by writing an interoperability test harness, which gave me an easy way to experiment with design choices.


Happily, as of this weekend, I’ve been able to overcome my lack of knowledge of cryptography, the idiosyncracies of python-spake2, the documentation quirks of crypto libraries, and the lack of a standard for SPAKE2 on the network to implement SPAKE2 in Haskell, first with NIST groups, then with Ed25519.

No doubt much could be better—I would very much welcome feedback, whether it’s about my Haskell, my mathematics, or my documentation—but I’m pretty happy with the results.

This has been a fun, stretching, challenging exercise. Even though it took more time and was more difficult than I expected, it has been such a privilege to be able to tackle it. Not only have I learned much, but I also feel much more confident in my ability to learn hard things.

I hope to follow up with more posts, covering:

  • just what is SPAKE2, and why should I care?
  • how can I use SPAKE2 (and especially, haskell-spake2)?
  • what was it like to write a Haskell version of a Python library?
  • what’s up with Ed25519? (this is somewhat ambitious)

by Jonathan Lange at May 26, 2017 11:00 PM

May 11, 2017

Hynek Schlawack

Please Fix Your Decorators

If your Python decorator unintentionally changes the signatures of my callables or doesn’t work with class methods, it’s broken and should be fixed. Sadly most decorators are broken because the web is full of bad advice.
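One common partial fix from the standard library is functools.wraps, which copies the wrapped function’s name, docstring, and other metadata onto the wrapper, and sets __wrapped__ so that inspect.signature reports the original signature. A minimal sketch, with a made-up logged decorator not taken from the post:

```python
import functools

def logged(func):
    @functools.wraps(func)  # copy func's metadata onto wrapper
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    """Add two numbers."""
    return a + b
```

Without @functools.wraps, add.__name__ would be "wrapper" and its docstring and signature would be lost to introspection.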

by Hynek Schlawack at May 11, 2017 12:00 PM

April 29, 2017

Moshe Zadka

April 19, 2017

Glyph Lefkowitz

So You Want To Web A Twisted

As a rehearsal for our upcoming tutorial at PyCon, Creating And Consuming Modern Web Services with Twisted, Moshe Zadka and I are doing a LIVE STREAM WEBINAR. You know, like the kids do, with the video games and such.

As the webinar gods demand, there is an event site for it, and there will be a live stream.

This is a practice run, so expect “alpha” quality content. There will be an IRC channel for audience participation, and the price of admission is good feedback.

See you there!

by Glyph at April 19, 2017 03:29 AM

Moshe Zadka

Twisted Tutorial Webinar

Glyph and I are giving a tutorial about Twisted and web services at PyCon. In order to try it out, we are giving a webinar. Please come, learn, and let us know if you like it!

by moshez at April 19, 2017 03:14 AM

April 17, 2017

Itamar Turner-Trauring

Learning without a mentor: how to become an expert programmer on your own

If you're an intermediate or senior programmer you may hit the point where you feel you're no longer making progress, where you're no longer learning. You're good at what you do, but you don't know what to learn next, or how: there are too many options, it's hard to get feedback or even tell you're making progress.

A mentor can help, if they're good at teaching... but what do you do if you don't have a mentor? How do you become a better programmer on your own?

In order to learn without a mentor you need to be able to recognize when you're learning and when you're not, and then you need to choose a new topic and learn it.

How to tell if you're learning

If you're not getting any feedback from an expert it can be tricky to tell whether you're actually learning or not. And lacking that knowledge it's easy to get discouraged and give up.

Luckily, there's an easy way to tell whether you're learning or not: learning is uncomfortable. If you're in your comfort zone, if you're thinking to yourself "this isn't so hard, I can just do this" you're not learning. You're just going over things you already know; that's why it feels comfortable.

If you're irritated and a little confused, if you feel clumsy and everything seems harder than it should be: now you're learning. That feeling of clumsiness is your brain engaging with new material it doesn't quite understand. You're irritated because you can't rely on existing knowledge. It's hard because it's new. If you feel this way don't stop: push through until you're feeling comfortable again.

You don't want to take this too far, of course. Pick a topic that is too far out of your experience and it will be so difficult you will fail to learn anything, and the experience may be so traumatic you won't want to learn anything.

Choosing something to learn

When choosing something to learn you want something just outside your comfort zone: close enough to your existing knowledge that it won't overwhelm you, far enough that it's actually new. You also want to pick something you'll be able to practice: without practice you'll never get past the point of discomfort.

Your existing job is a great place to practice new skills because it provides plenty of time to do so, and you'll also get real-world practice. That suggests picking new skills that are relevant to your job. As an added bonus this may give you the opportunity to get your employer to pay for initial training or materials.

Let's consider some of the many techniques you can use to learn new skills on the job.


Teaching

If you have colleagues you work with you will occasionally see them do something you think is obviously wrong, or miss something you think is the obviously right thing to do. For example, "obviously you should never do file I/O in a class constructor."

When this happens the tempting thing to do, especially if you're in charge, is to just tell them to change to the obviously better solution and move on. But it's worth resisting that urge, and instead taking the opportunity to turn this into a learning experience, for them and for you.

The interesting thing here is the obviousness: why is something obvious to you, and not to them? When you learn a subject you go through multiple phases:

  • Conscious ignorance: you don't know anything.
  • Conscious knowledge: you know how to do the task, but you have to think it through.
  • Unconscious knowledge: you just know what to do.

When you have unconscious knowledge you are an expert: you've internalized a model so well you apply it automatically. There are two problems with being an expert, however:

  • It's hard for you to explain why you're making particular decisions. Since your internal model is unconscious you can't clearly articulate why or how you made the decision.
  • Your model is rarely going to be the best possible model, and since you're applying it unconsciously you may have stopped improving it.

Teaching means taking your unconscious model and turning it into an explicit conscious model someone else can understand. And because teaching makes your mental model conscious you also get the opportunity to examine your own assumptions and improve your own understanding, ending up with a better model.

You don't have to teach only colleagues, of course: you can also write a blog post, or give a talk at a meetup or at a conference. The important thing is to notice the places you have unconscious knowledge and try to make it conscious by explaining it to others.

Switching jobs

While learning is uncomfortable, I personally suffer from a countervailing form of discomfort: I get bored really easily. As soon as I become comfortable with a new task or skill I'm bored, and I hate that.

In 2004 I joined a team writing high performance C++. As soon as I'd gotten just good enough to feel comfortable... I was bored. So I came up with slightly different tasks to work on, tasks that involved slightly different skills. And then I was bored again, so I moved on to a different team in the company, where I learned even more skills.

Switching tasks within your own company, or switching jobs to a new company, is a great way to get out of your comfort zone and learning something new. It's daunting, of course, because you always end up feeling clumsy and incompetent, but remember: that discomfort means you're learning. And every time you go through the experience of switching teams or jobs you will become better at dealing with this discomfort.

Learning from experts: skills, not technologies

Another way to learn is to learn from experts that don't work with you. It's tempting to try to find experts who will teach you new technologies and tools, but skills are actually far more valuable.

Programming languages and libraries are tools. Once you're an experienced enough programmer you should be able to just pick them up as needed if they become relevant to your job.

Skills are trickier: testing, refactoring, API design, debugging... skills will help you wherever you work, regardless of technology. But they're also easier to ignore or miss. There are skills we'd all benefit from that we don't even know exist.

So read a book or two on programming, but pick a book that will teach you a skill, not a technology. Or try to find the results of experts breaking down their unconscious models into explicit models.

Conclusion: learning on your own

You don't need a mentor to learn. You can become a better software engineer on your own by:

  • Recognizing when you're feeling comfortable and not learning.
  • Finding ways to learn, including teaching, switching jobs and learning skills from experts (and I have some more suggestions here).
  • Forcing yourself through the discomfort of learning: if you feel incompetent you're doing the right thing.

April 17, 2017 04:00 AM

April 13, 2017

Moshe Zadka

April 06, 2017

Itamar Turner-Trauring

You don't need a Computer Science degree

If you never studied Computer Science in school you might believe that's made you a worse programmer. Your colleagues who did study CS know more about algorithms and data structures than you do, after all. What did you miss? Are you really as good?

My answer: you don't need to worry about it, you'll do just fine. Some of the best programmers I've worked with never studied Computer Science at all.

A CS education has its useful parts, but real-world programming includes a broad array of skills that no one person can master. Programming is a team effort, and different developers can and should have different skills and strengths for the team to succeed. Where a CS degree does give you a leg up is when you're trying to get hired early in your career, but that is a hurdle that many people overcome.

What you learn in Comp Sci

I myself did study CS and Mathematics in college, dropped out of school, and then later went back to school and got a liberal arts degree. Personally I feel the latter was more useful for my career.

Much of CS is devoted to theory, and that theory doesn't come up in most programming. Yes, proving that all programming languages are equivalent is interesting, but that one sentence is all that I've taken away from a whole semester's class on the subject. And yes, I took multiple classes on data structures and algorithms... but I'm still no good at implementing new algorithms.

I ended up being bored in most of my CS classes. I dropped out to get a programming job where I could just write code, which I enjoyed much more. In some jobs that theory I took would be quite useful, but for me at least it has mostly been irrelevant.

Writing software in the real world

It's true that lacking a CS education you might not be as good at data structures. But chances are you have some other skill your colleagues lack, a unique strength that you contribute to your team.

Let me give a concrete example: at a previous job we gave all candidates a take home exercise, implementing a simple Twitter-like server. I reviewed many of the solutions, and each solution had different strengths. For example:

  • Doing a wonderful job packaging a solution and making it easy to deploy.
  • Choosing a data structure that made access to popular feeds faster, for better scalability.
  • Discussing how the API was badly designed and could result in lost messages, and suggesting improvements.
  • Discussing the ways in which the design was likely to cause business problems.
  • Adding monitoring, and explaining how to use the monitoring to decide when to scale up the service.

Packaging, data structures, network API design, big picture thinking, operational experience: these are just some of the skills that contribute to writing software. No one person can have them all. That means you can focus on your own strengths, because you're part of a team. Here's what that's meant in my case:

  • I'm pretty bad at creating new data structures or algorithms, and mostly it doesn't matter. Sometimes you're creating new algorithms, it's true. But most web developers, for example, just need to know that hashmaps give faster lookups than iterating over a list. But I've had coworkers who were good at it... and they worked on those problems when they came up.
  • In my liberal arts degree I learned how to craft better abstractions, and writing as a form of thinking. This has proven invaluable when designing new products and solving harder problems.
  • From the friends I made working on open source projects I learned about testing, and how to build robust software. Many of them had no CS education at all, and had learned on their own, from books and forums and their own practice.
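The hashmap point above is worth making concrete. Here is a minimal sketch (the user records and lookup functions are invented for illustration) showing why a dict beats scanning a list when you look things up repeatedly:

```python
# Sketch: looking up a user by id in a list vs. a dict (hashmap).
# The list scan is O(n) per lookup; the dict lookup is O(1) on average.

users = [{"id": i, "name": f"user{i}"} for i in range(10000)]

def find_in_list(user_id):
    # Scan every element until we find a match.
    for user in users:
        if user["id"] == user_id:
            return user
    return None

# Build the index once; afterwards each lookup is a single hash probe.
users_by_id = {user["id"]: user for user in users}

def find_in_dict(user_id):
    return users_by_id.get(user_id)

assert find_in_list(9999) == find_in_dict(9999)
```

That one fact covers most day-to-day web development; the rare genuinely novel algorithm can go to the teammate who enjoys that work.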

Job interviews

The one place where a Computer Science degree is unquestionably useful is getting a job early in your career. When you have no experience the degree will help; once you have experience your degree really doesn't matter.

It's true that many companies have an interview process that focuses on algorithmic puzzles. Unfortunately interviewing skills are often different than job skills, and need to be strengthened separately. And my CS degree doesn't really help me, at least: I've forgotten most of what I've learned, and algorithms were never my strength. So whenever I'm interviewing for jobs I re-read an old algorithms textbook and do some practice puzzles.

In short: don't worry about lacking a CS degree. Remember that you'll never be able to know everything, or do everything: it's the combined skills of your whole team that matters. Focus on your strengths, improve the skills you have, and see what new skills you can learn from your teammates.

April 06, 2017 04:00 AM

March 26, 2017

Itamar Turner-Trauring

Why and how you should test your software

This was the second draft of a talk I'll be giving at PyCon 2017. You can now watch the actual talk instead.

March 26, 2017 04:00 AM

March 20, 2017

Itamar Turner-Trauring

Dear recruiter, "open floor space" is not a job benefit

I was recently astonished by a job posting for a company that touted their "open floor space." Whoever wrote the ad appeared to sincerely believe that an open floor office was a reason to work for the company, to be mentioned alongside a real benefit like work/life balance.

While work/life balance and an open floor plan can co-exist, an open floor plan is a management decision much more akin to requiring long hours: both choose control over productivity.

The fundamental problem facing managers is that productivity is hard to measure. Faced with the inability to measure productivity, managers may feel compelled to measure time spent working. Never mind that it's counter-productive: at least it gives management control, even if it's control over the wrong thing.

Here is a manager explaining the problem:

I [would] like to manage [the] team's output rather than managing their time, because if they are forced to spend time inside the office, it doesn't mean they are productive or even working. At the same time it's hard to manage output, because software engineering tasks are hard to estimate and things can go out of the track easily.

In this case at least the manager involved understands that what matters is output, not hours in the office. But not every manager is as insightful.

Choosing control

In cases where management does fall into the trap of choosing control over productivity, the end result is a culture where the only thing that matters is hours in the office. Here's a story I heard from a friend about a startup they used to work at:

People would get in around 8 or 9, because that's when breakfast is served. They work until lunch, which is served in the office, then until dinner, which is served in the office. Then they do social activities in the office, video games or board games, and then keep working until 10PM or later. Their approach was that you can bring your significant other in for free dinner, and therefore why leave work? Almost like your life is at the office.

Most of the low level employees, especially engineers, didn't feel that this was the most productive path. But everyone knew this was the most direct way to progress, to a higher salary, to becoming a team lead. The number of hours in the office is a significant part of how your performance is rated.

I don't think people can work 12 hours consistently. And when you're not working and you're in an open plan office, you distract people. It's not about hours worked, it's about hours in office, there's ping pong tables... so there's always someone asking you to play ping pong or distracting you with a funny story. They're distracting you, their head wasn't in the zone, but they had to be in the office.

A team of 10 achieved what a team of 3 should achieve.

Control through visibility

Much like measuring hours in the office, an open floor office is designed for control rather than productivity. A manager can easily see what each developer is doing: are they coding? Are they browsing the web? Are they spending too much time chatting to each other?

In the company above the focus on working hours was apparently so strong that the open floor plan was less relevant. But I've no doubt there are many companies where you'll start getting funny looks from your manager if you spend too much time appearing to be "unproductive."

To be fair, sometimes open floor plans are just chosen for cheapness, or thoughtlessly copied from other companies because all the cool kids are doing it. Whatever the motivation, they're still bad for productivity. Programming requires focus, and concentrated thought, and understanding complex systems that cannot fit in your head all at once and so constantly need to be swapped in and out.

Open floor spaces create exactly the opposite of the environment you need for programming: they're full of noise and distraction, and headphones only help so much. I've heard of people wearing hoodies to block out not just noise but also visual distraction.

Dear recruiter

Work/life balance is a real job benefit, for both employers and employees: it increases productivity while allowing workers space to live outside their job. But "open office space" is not a benefit for anyone.

At worst it means your company is perversely sabotaging its employees in order to control them better. At best it means your company doesn't understand how to enable its employees to do their job.

March 20, 2017 04:00 AM

Moshe Zadka

March 12, 2017

Itamar Turner-Trauring

Unit testing, Lean Startup, and everything in-between

This was my first draft of a talk I gave at PyCon 2017. You can now watch the actual talk instead.

March 12, 2017 05:00 AM

March 05, 2017

Itamar Turner-Trauring

Why you're (not) failing at your new job

It's your first month at your new job, and you're worried you're on the verge of getting fired. You don't know what you're doing, everyone is busy and you need to bother them with questions, and you're barely writing any code. Any day now your boss will notice just how bad a job you're doing... what will you do then?

Luckily, you're unlikely to be fired, and in reality you're likely doing just fine. What you're going through happens to almost everyone when they start a new job, and your panicked feeling will eventually pass.

New jobs make you incompetent

Just like you, every time I've started a new job I've had to deal with feeling incompetent.

  • In 2004 I started a new job as a C++ programmer, writing software for the airline industry. On my first day of work the VP of Engineering told me he didn't expect me to be fully productive for 6 months. And for good reason: I barely knew any C++, I knew nothing about airlines, and I had to learn a completely new codebase involving both.
  • In 2012 I quit my job to become a consultant. Every time I got a new client I had to learn a new codebase and a new set of problems and constraints.
  • At my latest job I ended up writing code in 3 programming languages I didn't previously know. And once again I had to learn a new codebase and a new business logic domain.

Do you notice the theme?

Every time you start a new job you are leaving behind the people, processes and tools you understand, and starting from scratch. A new language, new frameworks, new tools, new codebases, new ways of doing things, people you don't know, business logic you don't understand, processes you're unfamiliar with... of course it's scary and uncomfortable.

Luckily, for the most part this is a temporary feeling.

This too will pass

Like a baby taking their first steps, or a child on their first bike ride, those first months on a new job will make you feel incompetent. Soon the baby will be a toddler, the child will be riding with confidence. You too will soon be productive and competent.

Given some time and effort you will eventually learn what you need to know:

...the codebase.

...the processes, how things are done and maybe even why.

...the programming language.

...who to ask and when to ask them.

...the business you are operating in.

Since that's a lot to learn, it will take some time, but unless you are working for an awful company that is expected and normal.

What you can do

While the incompetent phase is normal and unavoidable, there is still something you can do about it: learn how to learn better. Every time you start a new job you're going to be learning new technologies, new processes, new business logic. The most important skill you can learn is how to learn better and faster.

The faster you learn the faster you'll get past the feeling of incompetence when you start a new job. The faster you learn the faster you can become a productive employee or valued consultant.

Some skills are specific to programming. For example, when learning new programming languages, I like skimming a book or tutorial first before jumping in: it helps me understand the syntax and basic concepts. Plus having a mental map of the book helps me know where to go back to when I'm stuck. Other skills are more generic, e.g. there is considerable research on how learning works that can help you learn better.

Finally, another way I personally try to learn faster is by turning my mistakes into educational opportunities. Past and present, coding or career, every mistake I make is a chance to figure out what I did wrong and how I can do better. If you'd like to avoid my past mistakes, sign up to get a weekly email with one of my mistakes and what you can learn from it.

March 05, 2017 05:00 AM

February 19, 2017

Itamar Turner-Trauring

When AI replaces programmers

The year is 2030, and artificial intelligence has replaced all programmers. Let's see how this brave new world works out:

Hi! I'm John the Software Genie, here to help you with all your software needs.

Hi, I'd like some software to calculate the volume of my house.

Awesome! May I have access to your location?

Why do you need to access my location?

I will look up your house in your city's GIS database and use its dimensions to calculate its volume.

Sorry, I didn't quite state that correctly. I want some software that will calculate the dimensions of any house.

Awesome! What is the address of this house?

No, look, I don't want anything with addresses. You can have multiple apartments in a house, and anyway some structures don't have an address, or are just being designed... and the attic and basement doesn't always count... How about... I want software that calculates the volume of an abstract apartment.

Awesome! What's an abstract apartment?

Grrrr. I want software that calculates the sum of the volumes of some rooms.

Awesome! Which rooms?

You know what, never mind, I'll use a spreadsheet.

I'm sorry Dave, I can't let you do that.


Just a little joke! I'm sorry you decided to go with a spreadsheet. Your usage bill for $153.24 will be charged to your credit card. Have a nice day!

Back to the present: I've been writing software for 20 years, and I find the idea of being replaced by an AI laughable.

Processing large amounts of data? Software's great at that. Figuring out what a human wants, or what a usable UI is like, or what the real problem you need to solve is... those are hard.

Imagine what it would take for John the Software Genie to learn from that conversation. I've made my share of mistakes over the years, but I've learned enough that these days I can gather requirements decently. How do you teach an AI to gather requirements?

We might one day have AI that is as smart as a random human, an AI that can learn a variety of skills, an AI that can understand what those strange and pesky humans are talking about. Until that day comes, I'm not worried about being replaced by an AI, and you shouldn't worry either.

Garbage collection didn't make programmers obsolete just because it automated memory management. In the end automation is a tool to be controlled by human understanding. As a programmer you should focus on building the skills that can't be automated: figuring out the real problems and how to solve them.

February 19, 2017 05:00 AM

February 11, 2017

Twisted Matrix Laboratories

Twisted 17.1.0 Released

On behalf of Twisted Matrix Laboratories, I am honoured to announce the release of Twisted 17.1!

The highlights of this release are:

  • twisted.web.client.Agent now supports IPv6! It's also now the primary web client in Twisted, with twisted.web.client.getPage being deprecated in favour of it and Treq.
  • twisted.web.server has had many cleanups revolving around timing out inactive clients.
  • twisted.internet.ssl.CertificateOptions has had its method argument deprecated, in favour of the new raiseMinimumTo, lowerMaximumSecurityTo, and insecurelyLowerMinimumTo arguments, which take TLSVersion arguments. This allows you to better give a range of versions of TLS you wish to negotiate, rather than forcing yourself to any one version.
  • twisted.internet.ssl.CertificateOptions will use OpenSSL's MODE_RELEASE_BUFFERS, which will let it free unused memory that was held by idle TLS connections.
  • You can now call the new twist runner with python -m twisted.
  • twisted.conch.ssh now has some ECDH key exchange support and supports hmac-sha2-384.
  • Better Unicode support in twisted.internet.reactor.spawnProcess, especially on Windows on Python 3.6.
  • More Python 3 porting in Conch, and more under-the-hood changes to facilitate a Twisted-wide jump to new-style classes only on Python 2 in 2018/2019. This release has also been tested on Python 3.6 on Linux.
  • Lots of deprecated code removals, to make a sleeker, less confusing Twisted.
  • 60+ closed tickets.

For more information, check the NEWS file (link provided below).

You can find the downloads on PyPI (or alternatively our website). The NEWS file is also available on GitHub.

Many thanks to everyone who had a part in this release - the supporters of the Twisted Software Foundation, the developers who contributed code as well as documentation, and all the people building great things with Twisted!

Twisted Regards,
Amber Brown (HawkOwl)

by Amber Brown at February 11, 2017 10:08 AM

February 10, 2017

Glyph Lefkowitz

Make Time For Hope

Pandora hastened to replace the lid! but, alas! the whole contents of the jar had escaped, one thing only excepted, which lay at the bottom, and that was HOPE. So we see at this day, whatever evils are abroad, hope never entirely leaves us; and while we have THAT, no amount of other ills can make us completely wretched.

It’s been a rough couple of weeks, and it seems likely to continue to be so for quite some time. There are many real and terrible consequences of the mistake that America made in November, and ignoring them will not make them go away. We’ll all need to find a way to do our part.

It’s not just you — it’s legit hard to focus on work right now. This is especially true if, as many people in my community are, you are trying to motivate yourself to work on extracurricular, after-work projects that you used to find exciting, and instead find it hard to get out of bed in the morning.

I have no particular position of authority to advise you what to do about this situation, but I need to give a little pep talk to myself to get out of bed in the morning these days, so I figure I’d share my strategy with you. This is as much in the hope that I’ll follow it more closely myself as it is that it will be of use to you.

With that, here are some ideas.

It’s not over.

The feeling that nothing else is important any more, that everything must now be a life-or-death political struggle, is exhausting. Again, I don’t want to minimize the very real problems that are coming or the need to do something about them, but, life will go on. Remind yourself of that. If you were doing something important before, it’s still important. The rest of the world isn’t going away.

Make as much time for self-care as you need.

You’re not going to be of much use to anyone if you’re just a sobbing wreck all the time. Do whatever you can do to take care of yourself and don’t feel guilty about it. We’ll all do what we can, when we can.1

You need to put on your own oxygen mask first.

Make time, every day, for hope.

“You can stand anything for 10 seconds. Then you just start on a new 10 seconds.”

Every day, set aside some time — maybe 10 minutes, maybe an hour, maybe half the day, however much you can manage — where you’re going to just pretend everything is going to be OK.2

Once you’ve managed to securely fasten this self-deception in place, take the time to do the things you think are important. Of course, for my audience, “work on your cool open source code” is a safe bet for something you might want to do, but don’t make the mistake of always grimly setting your jaw and nose to the extracurricular grindstone; that would just be trading one set of world-weariness for another.

After convincing yourself that everything’s fine, spend time with your friends and family, make art, or heck, just enjoy a good movie. Don’t let the flavor of life turn to ash on your tongue.

Good night and good luck.

Thanks for reading. It’s going to be a long four years3; I wish you the best of luck living your life in the meanwhile.

  1. I should note that self-care includes just doing your work to financially support yourself. If you have a job that you don’t feel is meaningful but you need the wages to survive, that’s meaningful. It’s OK. Let yourself do it. Do a good job. Don’t get fired. 

  2. I know that there are people who are in desperate situations who can’t do this; if you’re an immigrant in illegal ICE or CBP detention, I’m (hopefully obviously) not talking to you. But, luckily, this is not yet the majority of the population. Most of us can, at least some of the time, afford to ignore the ongoing disaster. 

  3. Realistically, probably more like 20 months, once the Rs in congress realize that he’s completely destroyed their party’s credibility and get around to impeaching him for one of his numerous crimes. 

by Glyph at February 10, 2017 07:58 AM

Itamar Turner-Trauring

Buggy Software, Loyal Users: Why Bug Reporting is Key To User Retention

Your software has bugs. Sorry, mine does too.

Doesn't matter how much you've tested it or how much QA has tested it, some bugs will get through. And unless you're NASA, you probably can't afford to test your software enough anyway.

That means your users will be finding bugs for you. They will discover that your site doesn't work on IE 8.2. They clicked a button and a blank screen came up. Where is that feature you promised? WHY IS THIS NOT WORKING?!

As you know from personal experience, users don't enjoy finding bugs. Now you have buggy software and unhappy users. What are you going to do about it?

Luckily, in 1970 the economist Albert O. Hirschman came up with an answer to that question.

Exit, Voice and Loyalty

In his classic treatise Exit, Voice and Loyalty, Hirschman points out that users who are unhappy with a product have exactly two options. Just two, no more:

  1. Exiting, i.e. giving up on your product.
  2. Voicing their concerns.

Someone who has given up on your software isn't likely to tell you about their issues. And someone who cares enough to file a bug is less likely to switch away from your software. Finally, loyal users will stick around and use their voice when otherwise they would choose exit.

Now, your software has no purpose without users. So chances are you want to keep them from leaving (though perhaps you're better off without some of them - see item #2).

And here's the thing: there's only two choices, voice or exit. If you can convince your users to use their voice to complain, and they feel heard, your users are going to stick around.

Now at this point you might be thinking, "Itamar, why are you telling me obvious things? Of course users will stick around if we fix their bugs." But that's not how you keep users around. You keep users around by allowing them to express their voice, by making sure they feel heard.

Sometimes you may not fix the bug, and still have satisfied users. And sometimes you may fix a bug and still fail at making them feel heard; better than not fixing the bug, but you can do better.

To make your users feel heard when they encounter bugs you need to make sure:

  1. They can report bugs with as little effort as possible.
  2. They hear back from you.
  3. If you choose to fix the bug, you can actually figure out the problem from their bug report.
  4. The bug fix actually gets delivered to them.

Let's go through these requirements one by one.

Bug reporting

Once your users notice a problem you want them to communicate it to you immediately. This will ensure they choose the path of voice and don't contemplate exiting to your shiny competitor at the domain next door.

Faster communication also makes it much more likely the bug report will be useful. If the bug occurred 10 seconds ago the user will probably remember what happened. If it's a few hours later you're going to hear something about how "there was a flying moose on the screen? or maybe a hyena?" Remember: users are human, just like you and me, and humans have a hard time remembering things (and sometimes we forget that.)

To ensure bugs get reported quickly (or at all) you want to make it as easy as possible for the user to report the problem. Each additional step, e.g. creating an account in an issue tracker, means more users dropping out of the reporting process.

In practice many applications are designed to make it as hard as possible to report bugs. You'll be visiting a website, when suddenly:

"An error has occurred, please refresh your browser."

And of course the page will give no indication of how or where you should report the problem if it reoccurs, and do the people who run this website even care about your problem?

Make sure that's not how you're treating your users when an error occurs.

Improving bug reporting

So let's say you include a link to the bug report page, and let's say the user doesn't have to jump through hoops and email verification to sign up and then fill out the 200 fields that JIRA or Bugzilla think are really important and would you like to tell us about your most common childhood nightmare from the following list? We'll assume that bug reporting is easy to find and easy to fill it out, as it should be.

But... you need some information from the user. Like, what version of the program they were using, the operating system and so on and so forth. And you do actually need that information.

But the user just wants to get this over with, and every additional piece of information you ask for is likely to make them give up and go away. What to do?

Here's one solution: include it for them.

I've been using the rather excellent Spacemacs editor, and as usual when I use new software I had to report a bug. So it goes.

Anyway, to report a bug in Spacemacs you just hit the bug reporting key combo. You get a bug reporting page. In your editor. And it's filled in the Spacemacs version and the Emacs version and all the configuration you made. And then you close the buffer and it pre-populates a new GitHub issue with all this information and you hit Submit and you're done.

This is pretty awesome. No need to find the right web page or copy down every single configuration and environmental parameter to report a bug: just run the error-reporting command.

Spacemacs screenshot

Also, notice that this is a lot better than automatic crash reporting. I got to participate and explain the bits that were important to me, it's not the "yeah we reported the crash info and let's be honest you don't really believe anyone looks at these do you?"-vibe that one gets from certain software.

You can do much the same thing with a command-line tool, or a web-based application. Every time an error occurs or the user is having issues (e.g. the user ran yourcommand --help):

  1. Ask for the user's feedback in the terminal, or within the context of the web page, and then file the bug report for them.
  2. Gather as much information as you can automatically and include it in the bug report, so the user only has to report the part they care about.
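For a command-line or desktop tool, step 2 can be as simple as building a pre-filled issue URL. This is a generic sketch, not Spacemacs's actual mechanism; the repository URL is hypothetical, and only standard-library modules are used:

```python
# Sketch: pre-populate a GitHub "new issue" URL with environment details,
# so the user only writes the part they care about.
import platform
import sys
import urllib.parse

REPO = "https://github.com/example/yourproject"  # hypothetical repository

def bug_report_url(user_description=""):
    # Gather environment info automatically instead of asking the user.
    body = "\n".join([
        user_description,
        "",
        "---",
        f"Python: {sys.version.split()[0]}",
        f"OS: {platform.system()} {platform.release()}",
    ])
    query = urllib.parse.urlencode({"title": "Bug report", "body": body})
    return f"{REPO}/issues/new?{query}"
```

Opening that URL in a browser (GitHub accepts `title` and `body` query parameters on its new-issue page) drops the user into a form that is already mostly filled in, which is exactly the low-friction path that keeps them on the "voice" side of Hirschman's fork.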

Responding to the bug report

The next thing you have to do is actually respond to the bug report. As I write this the Spacemacs issue I filed is sitting there all sad and lonely and unanswered, and it's a little annoying. I do understand, open source volunteer-run project and all. (This is where loyalty comes in.)

But for a commercial product I am paying for I want to know that my problem has been heard. And sometimes the answer might be "sorry, we're not going to fix that this quarter." And that's OK, at least I know where I stand. But silence is not OK.

Respond to your bug reports, and tell your users what you're doing about it. And no, an automated response is not good enough.

Diagnosing the bug

If you've decided to fix the bug you can now proceed to do so... if you can figure out what the actual problem is. If diagnosing problems is impossible, or even just too expensive, you're not going to fix the problem. And that means your users will continue to be affected by the bug.

Let's look at a common bug report:

I clicked Save, and your program crashed and deleted a day's worth of my hopes and dreams.

This is pretty bad: an unhappy user, and as is often the case the user simply doesn't have the information you need to figure out what's going on.

Even if the bug report is useful it can often be hard to reproduce the problem. Testing happens in controlled environments, which makes investigation and reproduction easier. Real-world use happens out in the wild, so good luck reproducing that crash.

Remember how we talked about automatically including as much information as possible in the bug report? You did that, right? And included all the relevant logs?

If you didn't, now's the time to think about it: try to ensure that logs, core dumps, and so on and so forth will always be available for bug reports. And try to automate submitting them as much as possible, while still giving the user the ability to report their specific take on the problem.
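One cheap way to make logs "always available" is to keep the most recent records in memory and attach them to every bug report. A minimal sketch using Python's standard logging module (the logger name and buffer size are arbitrary choices):

```python
# Sketch: keep the last N log lines in memory so they can be attached
# to a bug report automatically.
import collections
import logging

class RingBufferHandler(logging.Handler):
    """Retain the most recent log lines for inclusion in bug reports."""
    def __init__(self, capacity=200):
        super().__init__()
        self.records = collections.deque(maxlen=capacity)

    def emit(self, record):
        self.records.append(self.format(record))

    def dump(self):
        # Everything retained, oldest first, ready to paste into a report.
        return "\n".join(self.records)

logger = logging.getLogger("myapp")  # "myapp" is a placeholder name
handler = RingBufferHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("saving document")
logger.error("save failed: disk full")
```

When the error-reporting path runs, `handler.dump()` gives you the recent history to include alongside the user's description, without shipping unbounded log files (and, per the privacy note below, without shipping anything the user hasn't seen).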

(And don't forget to respect your user's privacy!)

Distributing the fix

So now you've diagnosed and fixed your bug: problem solved! Or rather, problem almost solved. If the user hasn't gotten the bug fix all this work has been a waste of time.

You need fast releases, and you need automated updates where possible. If it takes too long for fixes to reach your users then for practical purposes your users are not being heard. (There are exceptions, as always: in some businesses your users will value stability over all else.)

A virtuous circle

If you've done this right:

  1. Your users will feel heard, and choose voice over exit.
  2. You will get the occasional useful bug report, allowing you to improve your product.
  3. All your users will quickly benefit from those improvements.

There is more to be done, of course: many bugs can and should be caught in advance. And many users will never tell you their problems unless you ask.

But it's a good start at making your users happy and loyal, even when they encounter the occasional bug.

February 10, 2017 05:00 AM

January 31, 2017

Jonathan Lange

Announcing haskell-cli-template

Last October, I announced servant-template, a cookiecutter template for creating production-ready Haskell web services.

Almost immediately after making it, I wished I had something for building command-line tools quickly. I know stack comes with a heap of them, but:

  • it’s hard to predict what they’ll do
  • adding a new template requires submitting a PR
  • cookiecutter has existed for ages and is pretty much better in every way

So I made haskell-cli-template. It’s very simple: it just makes a Haskell command-line project with some tests, command-line parsing, and a CircleCI build.

I wanted to integrate logging-effect, but after a few months away from it my tired little brain wasn’t able to figure it out. I like command-line tools with logging controls, so I suspect I’ll add it again in the future.

Let me know if you use haskell-cli-template to make anything cool, and please feel free to fork and extend.

by Jonathan Lange at January 31, 2017 12:00 AM

January 30, 2017

Jonathan Lange

Announcing graphql-api: Haskell library for GraphQL

Late last year, my friend Tom tried to convince me that writing REST APIs was boring and repetitive and that I should give this thing called GraphQL a try.

I was initially sceptical. servant, the REST library that I’m most familiar with, is lovely. Its clever use of Haskell’s type system means that all the boring boilerplate I’d have to write in other languages just goes away.

However, after watching Lee Byron’s Strange Loop talk on GraphQL I began to see his point. Being able to get many resources with the same request is very useful, and as someone who writes & runs servers, I very much want clients to ask only for the data that they need.

The only problem is that there isn’t really a way to write GraphQL servers in Haskell—until now.

Introducing graphql-api

Tom and I put together a proof-of-concept GraphQL server implementation called graphql-api, which we released to Hackage today.

It lets you take a GraphQL schema and translate it into a Haskell type that represents the schema. You can then write handlers that accept and return native Haskell types. graphql-api will take care of parsing and validating your user queries, and Haskell’s type system will make sure that your handlers handle the right thing.

Using graphql-api

Say you have a simple GraphQL schema, like this:

type Hello {
  greeting(who: String!): String!
}

which defines a single top-level type Hello that contains a single field, greeting, that takes a single, required argument who.

A user would query it with something like this:

{ greeting(who: "World") }

And expect to see an answer like:

{
  "data": {
    "greeting": "Hello World!"
  }
}
To do this in Haskell with graphql-api, first we’d define the type:

type Hello = Object "Hello" '[]
  '[ Argument "who" Text :> Field "greeting" Text ]

And then a handler for that type:

hello :: Handler IO Hello
hello = pure greeting
  where
    greeting who = pure ("Hello " <> who <> "!")

We can then run a query like so:

queryHello :: IO Response
queryHello = interpretAnonymousQuery @Hello hello "{ greeting(who: \"World\") }"

And get the output we expect.

There’s a lot going on in this example, so I encourage you to check out our tutorial to get the full story.

graphql-api’s future

Tom and I put graphql-api together over a couple of months in our spare time because we wanted to actually use it. However, as we dug deeper into the implementation, we found we really enjoyed it and want to make a library that’s genuinely good and helps other people do cool stuff.

The only way to do that, however, is to release it and get feedback from our users, and that’s what we’ve done. So please use graphql-api and tell us what you think. If you build something cool with it, let us know.

For our part, we want to improve the error messages, make sure our handling for recursive data types is spot on, and smooth down a few rough edges.


Tom and I want to thank J. Daniel Navarro for his great GraphQL parser and encoder, which forms the basis for what we built here.

About the implementation

graphql-api is more-or-less a GraphQL compiler hooked up to a type-based execution (aka “resolving”) engine that’s heavily inspired by Servant and uses various arcane type tricks from GHC 8.

We tried to stick to implementing the GraphQL specification. The spec is very well written, but implementing it has taught us that GraphQL is not at all as simple as it looks at first.

I can’t speak very well to the type chicanery, except to point you at the code and at the Servant paper.

The compiler mostly lives in the GraphQL.Internal.Validation module. The main idea is that it takes the AST and translates it into a value that cannot possibly be wrong.

All the syntax stuff is from the original graphql-haskell, with a few fixes and a tweak to guarantee that Name values are correct.

by Jonathan Lange at January 30, 2017 12:00 AM

January 29, 2017

Itamar Turner-Trauring

Does your job contradict your beliefs?

As an employee your work is chosen by the owners and managers of the company: you choose the means, they choose the ends. Becoming an employee doesn't mean abdicating your moral responsibility, however. Even if someone else has chosen the goals you are still responsible for your own actions.

As an employee you need to ask yourself if the goals you're working for are worthwhile, and if the people you are working for deserve the power to direct your actions.

Let me tell you about the first time I was forced to ask myself this question.

The past is a different country

I grew up in Israel, where as a Jewish citizen I was subject to the draft. The State of Israel is both Zionist and democratic, inherently contradictory ideals: a nation united by ethnicity, language and land versus the natural rights of individual human beings. Most Israeli citizens are indeed Jewish, but a large minority are Arab.

The contradiction is even more acute outside the borders of Israel, in the occupied territories of the West Bank and Gaza, where Israel's Jewish military rules millions of Palestinian Arabs. Whatever their opinion of the Occupation, most Israeli Jews are Zionists. They believe Israel must remain a Jewish state with a Jewish army, a bulwark against Palestinian terrorism and Israel's hostile Arab neighbors.

At age seventeen I had my first contact with the military: I was summoned to an initial screening at the Bakum, a military base that processes new recruits. The base was covered with asphalt, concrete, barbed wire and dirt, which turned to mud when it rained. The buildings were ugly concrete blocks and flimsy metal shacks that had seen better decades. Everything, inside and out, was painted in shades of khaki, gray and rust.

At the Bakum I was examined by a doctor. He gave me a physical profile of 97, the highest possible value; urban legend has it that the extra three points are taken off for circumcision. A profile of 97 meant I would be assigned combat duty, known in Israel as kravi.

As a kravi soldier I would have been required, as other Israeli soldiers have, to demolish homes, evict families, delay ambulances at checkpoints, threaten to shoot civilians for violating curfews. I didn't believe these actions protected Israel from Palestinian terrorists.

So I took the easy way out: I signed up for the Academic Reserve. I would go to college as a civilian, studying computer science, then serve in the army as a programmer for six years.

In the summer of 1999, after my first year in college, I went through a month of unstrenuous basic training for non-combatants. I hated every minute of it, the half-hearted brainwashing and the petty sadism. The following year, bored and unmotivated, I dropped out of college and started a software company.

Sooner or later the Academic Reserve office would notice I'd stopped taking classes, and I would be drafted into a kravi unit. My mother suggested I write a letter to the army – via a distant relative who worked for the Department of Defense – explaining my qualifications as a programmer and requesting my skills be put to use. Perhaps I could reach an accommodation allowing me to work on my company in my spare time.

I was summoned to a number of interviews in the following months, at office buildings, anonymous houses in residential areas, even a war monument behind a military base. None of them went well. I thoughtlessly, in retrospect perhaps deliberately, wrecked my chances at getting a security clearance, telling an interviewer my political views: I did not believe in the leadership of the country or the military. It was unlikely I would be trusted with classified material.

My final interview was different: I actually liked the soldiers I met. They were intelligent and sympathetic and I was sure I'd enjoy working with them, but I left the interview feeling miserable. I reached not so much a decision as a self-diagnosis: I could never be a soldier, I could not give up my right and duty to make my own decisions.

In the present

I did manage to get out of military service, but that's a longer story. Before that, though, I spent years refusing to admit to myself that this was not a job I was willing to take, whatever the social expectations.

Being an employee is not quite the same as being a soldier. But these days when I'm looking for a job I try to be more aware of what I am willing to do. I won't even bother applying to companies that do anything related to "defense" or the military, for example.

You will have your own criteria, of course, but I would urge you to consider whether your current employment matches your beliefs.

January 29, 2017 05:00 AM

January 26, 2017

Itamar Turner-Trauring

Coding skills you won't learn in school: Object Ownership

There are many skills you need to acquire as a programmer, and some of them are not part of the standard software engineering curriculum. Instead you're expected to learn them by osmosis, or by working with someone more experienced. David MacIver covers one such skill: tracking which type a value has.

Another skill you need is an understanding of object ownership in your code: knowing which part of your code owns a particular object in memory, and what its expectations are for access. Lacking this understanding you might write code that causes your program to crash or to suffer from subtle bugs. Even worse, some programming languages won't even provide you with facilities to help you in this task.

Learning by osmosis

Here's how I learned this skill. When I was in university I once had to implement a Red-Black Tree in C. I'd basically skipped all the classes in the intro C course, and still gotten a perfect grade, so as far as the school was concerned I knew how to code in C.

In reality I had no clue what I was doing. I couldn't write a working tree implementation: my code kept segfaulting, and I couldn't keep the structure straight. Eventually I turned in a half-broken solution and squeaked by with a grade of 60 out of 100.

I spent the next 5 years writing Python code, and then got a job writing C and C++. I did make some mistakes, e.g. my first project crashed every night (you can hear that story by signing up for my newsletter), but in general I was able to write working code even though I hadn't really written any C or C++ for years.

What changed?

I believe one of the key skills I learned was object ownership, as a result of all the concurrent Python I was writing, plus the fact that C++ has a better model than C for object ownership. Let's see what I learned over those years.

Object ownership for memory deallocation

Consider the following C function:

char* do_something(char* input);

Someone is going to have to deallocate input, and someone is going to have to deallocate the returned result of do_something(). But who? If two different functions try to deallocate the same allocation your program's memory will be corrupted. If no one deallocates the memory your program will suffer from memory leaks.

This is where object ownership comes in: you ensure each allocation has only one owner, and only that owner should deallocate it. The GNOME project's developer documentation explains how their codebase makes this work:

Each allocation has exactly one owner; this owner may change as the program runs, by transferring ownership to another piece of code. Each variable is owned or unowned, according to whether the scope containing it is always its owner. Each function parameter and return type either transfers ownership of the values passed to it, or it doesn't. ... By statically calculating which variables are owned, memory management becomes a simple task of unconditionally freeing the owned variables before they leave their scope, and not freeing the unowned variables.

GNOME has a whole set of libraries, conventions and rules for making this happen, because the C programming language doesn't have many built-in facilities to deal with ownership.

C++, on the other hand, has built a broad range of utilities for just this purpose. For example, you can wrap an allocation in a shared_ptr object. Every time it is copied it increments a counter, and every time a copy is destroyed it decrements the counter. When the counter hits zero the wrapped allocation is deallocated. That means you don't need to track ownership for purposes of deallocation: the shared_ptr is the owner, and will deallocate at the right time.

This can be simplified even further by using languages like Java or Python that provide garbage collection: the language runtime will do all the work for you. You never have to track ownership for purposes of deallocating memory.
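In CPython, for instance, the runtime itself plays the owner's role via reference counting: an object is freed as soon as the last reference to it disappears. A minimal sketch (the Resource class and log list are hypothetical names for illustration):

```python
log = []

class Resource:
    def __del__(self):
        # Called by the runtime when the last reference disappears.
        log.append("freed")

r = Resource()
alias = r       # two references to the same object; no "owner" to track
del r           # object still alive: alias holds a reference
assert log == []
del alias       # last reference gone; CPython frees it immediately
assert log == ["freed"]
```

In Java the same idea would go through a tracing garbage collector, so the exact moment of collection wouldn't be deterministic, but the point is the same: no human has to track the owner.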

Object access rights

Even when memory allocation is handled by the language runtime, there are still reasons to think about object ownership. In particular there is the question of mutation: modifying an object's contents. Memory deallocation is the ultimate form of mutation, but normal mutation can also break your program.

Consider the following Python program:

words = ["hello", "there", "another"]
counts = wordcount(words)
print(words)

What do you expect to be printed? Typically you'd expect to see ["hello", "there", "another"], but there is another option. You may also get [] printed if wordcount() was implemented as follows:

from collections import Counter

def wordcount(words):
    result = Counter()
    while words:
        word = words.pop()
        result[word] += 1
    return result

In this implementation wordcount() is mutating the list it is given. Reintroducing the concept of an object owner makes this clearer: each object is owned by a scope, and that scope might not want to grant write access to the object when it passes it to a function.

Unfortunately in Python, Java and friends there is no real way for a caller to tell whether a function will mutate an input, nor whether a parameter to a function can be mutated. So you need to learn a set of conventions and assumptions about when this will happen and when it's safe to do so: you build a mental model of object ownership and access rights. I suspect most Python programmers wouldn't expect wordcount() to mutate its inputs: it violates the mental model we have for object ownership and access.
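A version of wordcount that matches that mental model leaves its input alone. As a sketch, Counter can build the counts directly from the sequence without touching it:

```python
from collections import Counter

def wordcount(words):
    # Counter iterates over its argument without modifying it,
    # so the caller keeps full ownership of the list.
    return Counter(words)

words = ["hello", "there", "another"]
counts = wordcount(words)
print(words)  # ['hello', 'there', 'another'] -- unchanged
```

Nothing in the language enforces this, though: it is still just a convention the implementer chose to honor.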

The concept of private attributes (explicit in Java, by convention in Python) is one way access rights are controlled, but it doesn't solve the problem in all cases. When conventions don't help and you're uncertain you have to refer to API docs, or sometimes even the implementation, to know who can or might modify objects. This is similar to how C programmers deal with memory allocation.

Interestingly, C and C++ have a mechanism that can often solve this problem: const. You can define arguments as being const, i.e. unchangeable:

std::map<string,int> wordcount(const vector<string> &words);

If you try to mutate the argument the compiler will complain and prevent you from doing so.

Other approaches: Rust vs. functional languages

Outside of const, the C++ features for object ownership management grew over time, as library code. The Rust programming language, in contrast, provides object ownership as a feature within the compiler itself, and this applies both to ownership for purposes of memory deallocation and for purposes of mutation. Rust attempts to provide these features while still providing the full power of C and C++, and in particular control over memory allocation.

Where Rust code requires the programmer to make explicit decisions about object ownership, functional languages take a different approach. In functional languages like Haskell or Clojure if you call a wordcount function you know the arguments you pass in won't be mutated, because objects are immutable and can't be changed. If objects can't be mutated it doesn't matter who owns them: they are the same everywhere.

The need to track object ownership, in your head or in code, for mutation control is obviated by making objects immutable. Couple this with garbage collection and you need to spend much less time thinking about object ownership when writing purely functional code.
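You can approximate this in Python: pass a tuple instead of a list and no callee can mutate it, so the ownership question disappears. A sketch, reusing the hypothetical wordcount from earlier:

```python
from collections import Counter

def wordcount(words):
    result = Counter()
    for word in words:
        result[word] += 1
    return result

words = ("hello", "there", "another")  # a tuple: immutable
counts = wordcount(words)
# No function anywhere could have changed words: tuples simply
# have no mutating methods.
try:
    words.pop()
except AttributeError:
    print("tuples cannot be mutated")
```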

Summary: what you need to know

Depending on the programming language you're coding in you need to learn different models of thinking about object ownership:

  • When you're writing functional code you don't have to think about it much of the time.
  • When you're writing Java/Ruby/Python/JS/Go you need to think about object ownership as it applies to mutation: who is allowed to mutate an object? Will a function mutate an object when you don't expect it? If you're writing concurrent code this becomes much more important: conventions no longer suffice, and access needs to be explicitly controlled with locks.
  • When you're writing Rust the compiler understands a broad range of explicit annotations for object ownership, and will enforce safe interactions.
  • When you're writing C++ you can rely on const for mutation control, up to a point, and on library code for automatic memory deallocation.
  • When you're writing C you can rely on const for mutation control, up to a point, and the rest is up to you.
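The concurrency point in the list above, that conventions stop being enough and access must be controlled explicitly, can be sketched with a lock in Python. The counter and increment names are illustrative:

```python
import threading

counter = {"value": 0}
lock = threading.Lock()

def increment(n):
    for _ in range(n):
        with lock:  # explicit, exclusive access: no convention required
            counter["value"] += 1

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["value"])  # 4000, regardless of thread interleaving
```

Here the lock, not any one thread, owns the right to mutate the shared object; every writer must acquire it first.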

Next time you're writing some code think about who owns each object, and what guarantees the owner expects when it passes an object to other code. Over time you'll build a better mental model of how ownership works.

And if you're writing in a non-functional language consider using immutable (sometimes known as "persistent") data structures. You can get many of the benefits of a functional language by simply reducing the scope of mutation, and therefore how much you need to track object ownership.

January 26, 2017 05:00 AM

January 19, 2017

Itamar Turner-Trauring

Specialist vs. Generalist: which is better for your career?

One of the decisions you'll need to make during the course of your career as a software developer is whether you should become:

  1. A specialist, an expert in a certain subject.
  2. A generalist, able to take on a wide variety of different work.

Miquel Beltran argues that specialization is the path to choose. At the end of his essay he suggests:

Stick to one platform, framework or language and your professional career will be better on the long run.

I think he's both right and wrong. Specialization is a great career move... but I don't think being a generalist is bad for your career either. In fact, you can be both and still have a good career, because there are two distinct areas in which this question plays out.

Getting hired is not the same as getting things done

Getting hired and getting things done are two different tasks, and you need different skills to do each.

When you're trying to get hired you are trying to show why you are the best candidate. That means dealing with the company's attitude towards employees, and the reason they are hiring, and the way they approach their work. It's not about how well you'll do your job, or how good a programmer you are, or any of that: it's just about getting your foot in the door.

Once you're in, once you're an employee or a consultant, what matters is the results you deliver. It doesn't matter if you've previously spent only a few weeks writing iOS apps, so long as you do a good job writing an iOS app after you've been hired. And if you've spent years as an iOS developer and you fail to deliver, the fact that you specialize in iOS apps isn't going to help you.

Since getting hired and doing the work are separate tasks, that means you need to separate the decision to be a specialist or generalist into two questions: which will help you get hired, and which will make you better at actually doing your job?

Specialization is a marketing technique

If the question is how you should get hired then you are in the realm of marketing, not engineering. Specialization is a marketing technique: it's a way to demonstrate why you should be hired because you are an expert in your specialty.

Because specialization is a marketing technique, it doesn't necessarily need to map to specialization on an engineering level. Let me give some examples from my career.

In 2001 I started contributing to an open source Python project, a networking framework called Twisted. I have used this experience in a variety of ways:

  • In 2004 I got a job offer from a company that was writing Java, because I had recently added multicast support to Twisted and they wanted to use multicast for an internal project. I had a little experience writing Java, but mostly they wanted to hire me because I was a specialist in multicast.
  • I turned that job down, but later that year I got a job at ITA Software, writing networking code in C++. I didn't know any C++... but I knew stuff about networking.
  • When I left ITA I spent a couple years doing Twisted consulting. I was a Twisted specialist.
  • At my latest job I got hired in part because I knew networking protocols... but also because I had experience participating in open source projects.

While all these specializations are related, they are not identical: each job I got involved being a specialist in a different area.

It's not what you can do, it's what you emphasize

Now, you could argue that the reasons I got hired are close enough that I am indeed a specialist: in networking or distributed systems. But consider that earlier in my career I did a number of years of web development. So back in 2004 I could have applied to web development jobs, highlighted that part of my resume, and relegated my open source networking work to a sentence at the end.

You likely have many engineering and "soft" skills available to you. Instead of focusing on one particular skillset ("I am an Android developer") you can focus on some other way you are special. E.g. if you're building a consulting pipeline then maybe it's some business vertical you specialize in, to differentiate yourself from all the other Android developers.

But if you're marketing yourself on a one-off basis, which is certainly the case when you're applying for a job, you can choose a specialty that fits the occasion. Here's how my former colleague Adam Dangoor does it:

Pick one thing from what they talk about that you think is probably the least pitched-to aspect. E.g. if they’re a Python shop everyone will say that they know Python well. But you can spot that e.g. they need help with growing a team and you have experience with that. It could very well be that 10 other candidates do too, but you just say that and you’re the one candidate who can grow a team.

Specialist or Generalist?

So which should you choose, generalist or specialist?

When it comes to engineering skills, or just learning in general, my bias is towards being a generalist. When I went back to school to finish my degree I focused on the humanities and social sciences; I didn't take a single programming class. You may have different biases than I do.

But engineering skills are fundamentally different than how you market yourself. You can be a generalist in your engineering skills and market yourself as a specialist. In particular, when applying for jobs, you should try to be a specialist in what the company needs.

Sometimes a technical specialty is exactly what they want: you have some set of skills that are hard to find. But often there's a bit more to it than that. They might say they need "an Android expert", but what they really need is someone to ship things fast.

They're looking for "an Android expert" because they don't want a learning curve. So if you emphasize the times you've delivered projects quickly and on schedule, you might get the job even though another candidate had a couple more years of Android experience than you do.

In short, when it comes to engineering skills I tend towards being a generalist, but that may just be my personal bias. When marketing yourself, be a specialist... but there's nothing keeping you from being a different specialist every time you apply for a new job.

January 19, 2017 05:00 AM

January 18, 2017

Jack Moffitt

Servo Talk at LCA 2017

My talk from LCA 2017 was just posted, and you can go watch it. In it I cover some of the features of Servo that make it unique and fast, including the constellation and WebRender.

Servo Architecture: Safety & Performance by Jack Moffitt, LCA 2017, Hobart, Australia.

by Jack Moffitt at January 18, 2017 12:00 AM

January 12, 2017

Jonathan Lange

Announcing grafanalib

Late last year, as part of my work at Weaveworks, I published grafanalib, a Python DSL for building Grafana dashboards.

We use it a lot, and it’s made our dashboards much nicer to maintain. I’ve written a blog post about it that you can find on the Weaveworks blog.

by Jonathan Lange at January 12, 2017 12:00 AM

January 11, 2017

Itamar Turner-Trauring

Your Job is Not Your Life: staying competitive as a developer

Are you worried about keeping your programming skills up-to-date so you can stay employable? Some programmers believe that to succeed you must spend all of your time learning, practicing and improving your craft. How do you fit all that in and still have a life?

In fact, it's quite possible to limit yourself to programming during work hours and still be employable and successful. If you do it right then staying competitive, if it's even necessary, won't require giving up your life for your job.

What does it mean to be "competitive?"

Before moving on to solutions it's worth understanding the problem a little more. The idea of "competitiveness" presumes that every programmer must continually justify their employment, or they will be replaced by some other more qualified developer.

There are shrinking industries where this is the case, but at the moment at least demand for programmers is quite high. Add the fact that hiring new employees is always risky, and worrying about "competitiveness" seems unnecessary. Yes, you need to do well at your job, but I doubt most programmers are at risk of being replaced at a moment's notice.

Instead of worrying about "competitiveness" you should focus on the ability to easily find a new job. For example, there are ways you can improve your chances of finding a new job that have nothing to do with your engineering skills:

  • Living below your means will allow you to save money for a rainy day. You'll have more time to find a job if you need to, and more flexibility in what jobs you can take.
  • Keep in touch with old classmates and former colleagues; people you know are the best way to find a new job. Start a Slack channel for ex-coworkers and hang out. This can also be useful for your engineering skills, as I'll discuss later on.

Moving on to engineering skills, the idea that you need to put in long hours outside of work is based both on the need to become an expert, and on the need to keep up with changing technology. Both can be done on the job.

Becoming an expert

You've probably heard the line about expertise requiring 10,000 hours of practice. The more hours you practice the better, then, right?

In fact many of the original studies were about number of years, not number of hours (10 years in particular). And the kind of practice matters. What you need is "deliberate practice":

... deliberate practice is a highly structured activity, the explicit goal of which is to improve performance. Specific tasks are invented to overcome weaknesses, and performance is carefully monitored to provide cues for ways to improve it further. We claim that deliberate practice requires effort and is not inherently enjoyable.

Putting aside knowledge of particular technologies, the kinds of things you want to become an expert at are problem solving, debugging, reading unknown code, etc.. And while you could practice them on your own time, the most realistic forms of practice will be at your job. What you need to do is utilize your work as a form of practice.

How should you practice? The key is to know your own weaknesses, and to get feedback on how you're doing so you can improve. Here are two ways to do that:

  1. Code reviews: a good code reviewer will point out holes in your design, in the ways you've tested your code, in the technology you're using. And doing code reviews will also improve your skills as you consider other people's approaches. A job at an organization with a good code review culture will be valuable to your skills and to your career.
  2. Self-critique: whenever you make a mistake, try to think about what you should have noticed, what mental model would have caught the problem, and how you could have chosen better. Notice the critique is not of the result. The key is to critique the process, so that you do better next time.

I write a weekly newsletter about my many mistakes, and while this is ostensibly for the benefit of the readers I've actually found it has helped me become a more insightful programmer. If you want to make self-critique more useful than just an exercise in negativity I recommend the book The Power of Intuition by Gary Klein.

Learning technical skills

Beyond expertise you also need technical skills: programming languages, frameworks, and so on. You will never be able to keep up with all the changing technologies that are continuously being released. Instead, try the following:

  • Switching jobs: when you're looking for a new job put some weight on organizations that use newer or slightly different technologies than the ones you know. You'll gain a broader view of the tools available than you'd get at a single company.
  • Building breadth: instead of learning many technologies in depth, focus on breadth. Most tools you'll never use, but the more you know of, the more you can reach for... and building breadth takes much less time.
  • Find a community: you'll never know everything. But knowing many programmers with different experiences than you means you have access to all of their knowledge. You can find online forums like Subreddits, IRC, mailing lists and so on. But if you don't feel comfortable with those you can also just hang out on Slack with former coworkers who've moved on to another job.

Your job is not your life

All of the suggestions above shouldn't require much if any time outside of your job. If you enjoy programming and want to do it for fun, by all means do so. But your job shouldn't be the only thing you spend your life on.

If you would like to learn how to get a job that doesn't overwhelm your life, join my free 6-part email course.


January 11, 2017 05:00 AM

January 06, 2017

Itamar Turner-Trauring

The fourfold path to software quality

How do you achieve software quality? How do you write software that actually works, software that isn't buggy, software that doesn't result in 4AM wake up calls when things break in production?

There are four different approaches you can take, four paths to the ultimate goal. Which path you choose to take will depend on your personality, skills and the circumstances of your work.

The path of the Yolo Programmer

The first path is that of the Yolo Programmer. As a follower of the famous slogan "You Only Live Once", the Yolo Programmer chooses not to think about software quality. Instead the Yolo Programmer enjoys the pure act of creation; writing code is a joy that would only be diminished by thoughts of maintenance or bugs.

It's easy to look down on the Yolo Programmer, to deride their approach as a foolish attitude only suitable for children. As adults we suppress our playfulness because we worry about the future. But even though the future is important, the joy of creation is still a fundamental part of being human.

When you have the opportunity, when you're creating a prototype or some other code that doesn't need to be maintained, embrace the path of the Yolo Programmer. There's no shame in pure enjoyment.

The path of the Rational Optimizer

In contrast to the Yolo Programmer, the Rational Optimizer is well aware of the costs of bugs and mistakes. Software quality is best approached by counter-balancing two measurable costs: the cost of bugs to users and the business vs. the cost of finding and fixing the bugs.

Since bugs are more expensive the later you catch them, the Rational Optimizer invests in catching bugs as early as possible. And since human effort is expensive, the Rational Optimizer loves tools: software can be written once and used many times. Tools to find bugs are thus an eminently rational way to increase software quality.

David R. MacIver's post The Economics of Software Correctness is a great summary of this approach. And he's built some really wonderful tools: your company should hire him if you need to improve your software's quality.

The path of Mastery

The path of Mastery takes a different attitude, which you can see in the title of Kate Thompson's book Zero Bugs and Program Faster (note that she sent me a free copy, so I may be biased).

Mastery is an attitude, a set of assumptions about how one should write code. It assumes that the code we create can be understood with enough effort. Or, if the code is not understandable, it can and should be simplified until we can understand it.

The path of Mastery is a fundamentally optimistic point of view: we can, if we choose, master our creations. If we can understand our code we can write quality code. We can do so by proving to ourselves that we've covered all the cases, and by learning to structure our code the right way. With the right knowledge, the right skills and the right attitude we can write code with very few bugs, perhaps even zero bugs.

To learn more about this path you should read Thompson's book; it's idiosyncratic, very personal, and full of useful advice. You'll become a better programmer by internalizing her lessons and attitude.

The path of the Software Clown

The final path is the path of the Software Clown. If Mastery is a 1980s movie training montage, the Software Clown is a tragicomedy: all software is broken, failure is inevitable, and nothing ever works right. There is always another banana peel to slip on, and that would be sad if it weren't so funny.

Since the Software Clown is always finding bugs, the Software Clown makes sure they get fixed, even when they're in someone else's software. Since software is always broken, the Software Clown plans for brokenness. For example, if bugs are inevitable then you should make sure users have an easy time reporting them.

Since banana peels are everywhere, the Software Clown learns how to avoid them. You can't avoid everything, and you won't avoid everything, but you can try to avoid as many as possible.

If you'd like to avoid the many mistakes I've made as a software engineer, sign up for my Software Clown newsletter. You'll get the story of one of my mistakes in your inbox every week and how you can avoid making it.

These are the four paths you can take, but remember: there is no one true answer, no one true path. Try to learn them all, and the skills and attitudes that go along with them; you'll become a better programmer and perhaps even a better person.

Avoid my programming mistakes!

Get a weekly email with one of my many software and career mistakes, and how you can avoid it. Here's what readers are saying:

"Are you reading @itamarst's "Software Clown" newsletter? If not, you should be. There's a gem in every issue." - @glyph

I won't share your email with anyone else. Unsubscribe at any time. Powered by ConvertKit

January 06, 2017 05:00 AM

January 02, 2017

Itamar Turner-Trauring

When software ecosystems die

How much can you rely on the frameworks, tools and libraries you build your software on? And what can you do to reduce the inherent risk of depending on someone else's software?

Years ago I watched a whole software ecosystem die.

Not the slow decline of a programming language that is losing its users, or a no longer maintained library that has a newer, incompatible replacement. This was perma-death: game over, no resurrection, no second chances.

Here's what happened, and what you can learn from it.

The story of mTropolis

Back in the 1990s the Next Big Thing was multimedia, and in particular multimedia CD-ROMs. The market leader was Macromedia Director, a rather problematic tool.

Macromedia Director started out as an animation tool, using a sequence of frames as its organizing metaphor, which meant using it for hypermedia involved a rather bizarre idiom. Your starting screen would be frame 1 on the timeline, with a redirect to itself on exit: an infinite busy loop. Remember, this started as an animation tool, so the default was to continue on to later frames automatically.

When you clicked a button that took you to a new screen, it worked by moving you to another frame, let's say frame 100. Frame 100 would have a "go to frame 100" on exit to make sure you didn't continue on to frame 101, then 102, and so on.
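The idiom above can be sketched as a toy playhead model. This is illustrative Python, not Director's actual Lingo scripting; the class and method names are my own invention, chosen to mirror the "hold this frame on exit" trick the text describes.

```python
# Toy model of the Director idiom: screens are frames on a timeline,
# and each screen frame redirects back to itself on exit so playback
# doesn't run on to the next frame automatically.

class Timeline:
    def __init__(self):
        self.on_exit = {}   # frame number -> frame to jump to on exit
        self.current = 1

    def hold(self, frame):
        """Make `frame` a screen: on exit, go back to the same frame."""
        self.on_exit[frame] = frame

    def tick(self):
        """Advance playback one step, honoring any on-exit redirect.

        Without a redirect, the animation-tool default applies:
        continue on to the next frame."""
        self.current = self.on_exit.get(self.current, self.current + 1)

    def click_button(self, target):
        """A button click is just a jump to another frame."""
        self.current = target


timeline = Timeline()
timeline.hold(1)     # frame 1: the start screen, busy-looping on itself
timeline.hold(100)   # frame 100: another screen, "go to frame 100" on exit

timeline.tick()               # the hold redirect fires: still on frame 1
assert timeline.current == 1

timeline.click_button(100)    # a button jumps us to frame 100
timeline.tick()               # frame 100 holds, so we never reach 101
assert timeline.current == 100
```

Remove the `hold(100)` call and the final `tick()` would carry you on to frame 101, which is exactly the accident the on-exit redirect existed to prevent.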

Then in 1995 mTropolis showed up, a newer, better competitor to Director. It was considered by many to be the superior alternative, even in its very first release. It had a much more suitable conceptual model, features that were good enough to be copied by Director, and a loyal fan base.

In 1997 mTropolis was bought by Quark, maker of the QuarkXPress desktop publishing software. A year later, in 1998, Quark decided to end development of mTropolis.

mTropolis' users were very upset, of course, so they tried to buy the rights from Quark and continue development on their own.

The purchase failed. mTropolis died.

Market leader or scrappy competitor?

The story of mTropolis made a strong impression on me as a young programmer: I worked with Director, so I was not affected, but the developers who used mTropolis were dead in the water. All the code they'd built would be useless as soon as a new OS release broke mTropolis in even the smallest of ways.

This isn't a unique story, either: spend some time reading Our Incredible Journey. Startups come and go, and software ecosystems die with them.

Professor Beekums has an excellent post about switching costs in software development. He argues that given the choice between an equivalent market leader and a smaller competitor you should choose the latter, so you don't suffer from monopoly pricing.

But what do you do when they're not equivalent, or it's hard to switch? You still need to pick. I would argue that if they're not equivalent, the market leader is much safer. Macromedia was eventually bought by Adobe, and so Director is now Adobe Director. Director was the market leader in 1998, and it's still being developed and still available for purchase, almost 20 years later.

mTropolis may have been better, but mTropolis wasn't the market leader. And mTropolis is dead, and has been for a very long time.

Making the choice

So which do you go for, when you have the choice?

If you're dealing with open source software, much of the problem goes away. Even if the company sponsoring the software shuts down, access to the source code gives you a way to switch off the software gradually.

With Software-as-a-Service you're back in the realm of choosing between monopoly pricing and the risk of the software disappearing. And at least with mTropolis the developers could still use their licensed copies; an online SaaS can shut down at any time.

Personally I'd err on the side of choosing the market leader, but it's hard to give a general answer. Just remember: the proprietary software you rely on today might be gone tomorrow. Be prepared.

January 02, 2017 05:00 AM

December 23, 2016

Ralph Meijer


For me, Christmas and Jabber/XMPP go together. I started being involved with the Jabber community around the end of 2000. One of the first things that I built was a bot that recorded the availability presence of my online friends and showed it on a Christmas tree. Every light in the tree represents one contact, and if the user is offline, the light is darkened. As we are nearing Christmas, I put the tree up on the frontpage again, as in many years before.

Over the years, the tooltips gained insight into User Moods and Tunes, first over regular Publish-Subscribe, later enhanced with the Personal Eventing Protocol. A few years later, Jingle was born, and in 2009, stpeter wrote a great specification that solidifies the relationship between Christmas and Jabber/XMPP.

Many things have changed in those 16 years. I've changed jobs quite a few times, most recently switching from the Mailgun team at Rackspace to an exciting new job at VimpelCom as Chat Expert last April, working on Veon (more on that later). The instant messaging landscape has changed quite a bit, too. While we, unfortunately, still have a lot of different incompatible systems, a lot of progress has been made as well.

XMPP's story is far from over, and as such I am happy and honored to have been serving as Chair of the XMPP Standards Foundation since last month. As every year, my current focus is making another success of the XMPP Summit and our presence with the Realtime Lounge and Devroom at FOSDEM in Brussels in February. This is always the highlight of the year, with many XMPP enthusiasts, as well as our friends from the wider Realtime Communications community, showing and discussing everything they are working on, ranging from protocol discussions to WebRTC and IoT applications.

Like last year, one of the topics that really excites me is the specification known as Mediated Information eXchange (MIX). MIX takes the good parts of the Multi User Chat (MUC) protocol, which has been the basis of group chat in XMPP for quite a while, and redesigns them on top of XMPP Publish-Subscribe. Modern commercial messaging systems, for business use (e.g. Slack and HipChat) as well as for general use (e.g. WhatsApp, WeChat, Google's offerings), have tried various approaches on top of the ancient model of multi-party text exchange, adding multimedia and other information sources, e.g. using integrations, bots, and cards.

MIX is the community's attempt to provide a building block that goes beyond the traditional approach of a single stream of information (presence and messages) to a collection of orthogonal information streams in the same context. A room participant can select (manually, or automatically by the user agent) which information streams are of interest at that time. E.g. for mobile use, or with many participants, exchanging the presence information of all participants can be unneeded or even expensive (in terms of bandwidth or battery use). In MIX, presence is available as a separate stream of information that can be disabled.

Another example is Slack's integrations. You can add streams of information (Tweets, continuous integration build results, or pull requests) to any channel. However, participants have no choice but to receive the resulting messages, intermixed with discussion. The client's notification system doesn't make any distinction between the two, so you either suffer getting alerts for every build, or mute the channel and possibly miss interesting discussion. The way around this is to have separate channels for notifications and discussion, possibly muting the former.

Using MIX, however, a client can be smarter about this. It can offer the user different ways to consume these information streams. E.g. notifications about your builds could go in a side bar, and Tweets could be disabled, or modeled as a ticker. And it can be different depending on which of your (concurrent) clients you are connected with: a desktop or browser-based client has more screen real estate to show such orthogonal information streams at the same time, while a mobile client might still show the discussion and notifications interleaved.
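The stream-selection idea above can be sketched in a few lines. This is a toy model, not an XMPP implementation: the channel, the stream names, and the subscription API are all illustrative inventions (MIX itself defines its own pubsub node names and join semantics). The point is only that a channel becomes a set of independent streams, and each participant receives only the streams they subscribed to.

```python
# Toy model of a MIX-style channel: several orthogonal pubsub streams
# in one context, with per-participant subscriptions.

class Channel:
    def __init__(self, streams):
        self.streams = set(streams)
        self.subscriptions = {}  # participant -> set of subscribed streams
        self.inboxes = {}        # participant -> list of received events

    def join(self, who, subscribe_to):
        # A participant joins the channel but only opts in to some streams.
        self.subscriptions[who] = set(subscribe_to) & self.streams
        self.inboxes[who] = []

    def publish(self, stream, event):
        # An event on one stream reaches only its subscribers.
        for who, subs in self.subscriptions.items():
            if stream in subs:
                self.inboxes[who].append((stream, event))


# Stream names here are made up for illustration.
channel = Channel({"messages", "presence", "build-results"})

# A mobile client skips presence to save bandwidth and battery:
channel.join("mobile", {"messages"})
# A desktop client has room to show everything:
channel.join("desktop", {"messages", "presence", "build-results"})

channel.publish("presence", "alice is online")
channel.publish("messages", "hi everyone")

assert channel.inboxes["mobile"] == [("messages", "hi everyone")]
assert len(channel.inboxes["desktop"]) == 2
```

Contrast this with the MUC/Slack situation described above, where all of these events would arrive as one undifferentiated stream and muting is all-or-nothing.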

All in all, MIX allows for much richer, multi-modal, and more scalable interactions. Some of the other improvements over MUC include persistent participation in channels (much like IRC bouncers, but integrated), better-defined multi-device use (including individual addressing), reconnection, and message archiving. I expect the discussions at the XMPP Summit to tie up the loose ends as a prelude to initial implementations.

I am sure that FOSDEM and the XMPP Summit will have many more exciting topics, so I hope to see you there. Until then, Jabber on!

by ralphm at December 23, 2016 01:28 PM