Planet Twisted

August 25, 2016

Itamar Turner-Trauring

From 10x programmer to 0.1x programmer: creating more with less

You've heard of the mythical 10x programmers, programmers who can produce ten times as much as us normal humans. If you want to become a better programmer this myth is demoralizing, but it's also not useful: how can you write ten times as much code? On the other hand, consider the 0.1x programmer, a much more useful concept: anyone can choose to write only 10% as much code as a normal programmer would. As they say in the business world, becoming a 0.1x programmer is actionable.

Of course writing less code might seem problematic, so let's refine our goal a little. Can you write 10% as much code as you do now and still do just as well at your job, still fixing the same number of bugs, still implementing the same number of features? The answer may still be "no", but at least this is a goal you can more easily work towards incrementally.

Doing more with less code

How do you achieve just as much while writing less code?

1. Use a higher level programming language

As it turns out many of us are 0.1x programmers without even trying, compared to previous generations of programmers who were stuck with lower-level programming languages. If you don't have to worry about manual memory management or creating a data structure from scratch you can write much less code to achieve the same goal.
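
For example, tallying word frequencies is a couple of lines in a high-level language like Python, where the garbage collector and the batteries-included standard library do the heavy lifting; the same job in C means hand-rolled hash tables and manual memory management. A small illustration (mine, not the author's):

from collections import Counter

# Two lines of high-level code standing in for pages of C:
words = "the quick brown fox jumps over the lazy dog the end".split()
print(Counter(words).most_common(2))  # [('the', 3), ...]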

2. Use existing code

Instead of coding from scratch, use an existing library that achieves the same thing. For example, earlier this week I was looking at the problem of incrementing version numbers in source code and documentation as part of a release. A little searching and I found an open source tool that did exactly what I needed. Because it's been used by many people and improved over time chances are it's better designed, better tested, and less buggy than my first attempt would have been.

3. Spend some time thinking

Surprisingly, spending more time planning up front can save you time in the long run. If you have 2 days to fix a bug it's worth spending 10% of that time, an hour and a half, to think about how to solve it. Chances are the first solution you come up with in the first 5 minutes won't be the best solution, especially if it's a hard problem. Spend an hour more thinking and you might come up with a solution that takes two hours instead of two days.

4. Good enough features

Most feature requests have three parts:

  1. The stuff the customer must have.
  2. The stuff that is nice to have but not strictly necessary.
  3. The stuff the customer is willing to admit is not necessary.

The last category is usually dropped in advance, but you're usually still asked to implement the middle category of things that the customer and product manager really really want but aren't actually strictly necessary. So figure out the real minimum path to implement a feature, deliver it, and much of the time it'll turn out that no one will miss those nice-to-have additions.

5. Drop the feature altogether

Some features don't need to be done at all. Some features are better done a completely different way than requested.

Instead of saying "yes, I'll do that" to every feature request, make sure you understand why someone needs the feature, and always consider alternatives. If you come up with a faster, superior idea the customer or product manager will usually be happy to go along with your suggestion.

6. Drop the product altogether

Sometimes your whole product is not worth doing: it will have no customers, will garner no interest. Spending months and months on a product no one will ever use is a waste of time, not to mention depressing.

Lean Startup is one methodology for dealing with this: before you spend any time developing the product you do the minimal work possible to figure out if it's worth doing in the first place.

Conclusion

Your goal as a programmer is not to write code; it is to solve problems. From low-level programming decisions to high-level business decisions there are many ways you can solve problems with less code. So don't start with "how do I write this code?", start with "how do I solve this problem?" Sometimes you'll do better not solving the problem at all, or redefining it. As you get better at solving problems with less code you will find yourself becoming more productive, especially if you start looking at the big picture.

It took me a while to learn all this, and I made a lot of mistakes along the way: writing too much code or solving the wrong problem. If you'd like to avoid all the mistakes I made then check out Software Clown, where I share my mistakes and what you can learn from them.

August 25, 2016 04:00 AM

Moshe Zadka

Time Series Data

When operating computers, we are often exposed to so-called “time series”. Whether it is database latency, page fault rate or total memory used, these are all exposed as numbers that are usually sampled at frequent intervals.

Computer engineers are not the only ones exposed to such data, however. It is worthwhile to know which other disciplines work with such data, and what they do with it. The “earth sciences” (geology, climate, etc.) have a lot of numbers, and often need to analyze trends and make predictions. Sometimes these predictions have, literally, billions of dollars’ worth of decisions hinging on them. It is worthwhile to read some of the textbooks for students of those disciplines to see how to approach those series.

Another discipline that relies on visually inspecting time series data is medicine. EKG data is often vital for analyzing patients’ health — especially when compared to their historical records. For that, the data needs to be saved. A lot of EKG research has been done on how to compress numerical data while keeping it “visually the same”. While the research on that is not as rigorous, and not as settled, as the trend analysis in geology, it is still useful to look into. Indeed, even the basics are already better than so-called “roll-ups”, which preserve none of the visual distinction of the data, flattening peaks and filling valleys while keeping a “standard deviation” score that is not as helpful as is usually hoped.
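
To see what roll-ups do to the visual shape of a series, consider this toy sketch with invented numbers: a windowed average erases a spike that is obvious in the raw samples.

# Toy illustration, invented data: one latency spike vanishes into a
# windowed-average "roll-up".
samples = [10, 11, 9, 10, 500, 10, 11, 10, 9, 10, 11, 10]

mean = sum(samples) / float(len(samples))
print(mean)  # ~50.9: neither the 500 spike nor the ~10 baseline

# The standard deviation kept alongside the mean says *something*
# varied, but not where, how often, or what it looked like.
variance = sum((s - mean) ** 2 for s in samples) / float(len(samples))
print(variance ** 0.5)  # ~135: a number, not a shape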

by moshez at August 25, 2016 03:50 AM

August 24, 2016

Hynek Schlawack

Hardening Your Web Server’s SSL Ciphers

There are many wordy articles on configuring your web server’s TLS ciphers. This is not one of them. Instead I will share a configuration which is both compatible enough for today’s needs and scores a straight “A” on Qualys’s SSL Server Test.

by Hynek Schlawack (hs@ox.cx) at August 24, 2016 03:40 PM

August 22, 2016

Hynek Schlawack

Better Python Object Serialization

The Python standard library is full of underappreciated gems. One of them allows for simple and elegant function dispatching based on argument types. This makes it perfect for serialization of arbitrary objects – for example to JSON in web APIs and structured logs.
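
The gem in question is functools.singledispatch, in the standard library since Python 3.4 and available on PyPI as singledispatch for Python 2. A minimal sketch of the idea (my illustration, not necessarily the post’s exact code):

import json
from datetime import datetime
from functools import singledispatch

@singledispatch
def to_serializable(val):
    """Fallback for types we don't know: use their string form."""
    return str(val)

@to_serializable.register(datetime)
def _(val):
    """Serialize datetimes as ISO 8601 strings."""
    return val.isoformat()

# json.dumps consults to_serializable for anything it can't encode itself:
print(json.dumps({"when": datetime(2016, 8, 22, 12, 30)},
                 default=to_serializable))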

by Hynek Schlawack (hs@ox.cx) at August 22, 2016 12:30 PM

August 20, 2016

Moshe Zadka

Extension API: An exercise in a negative case study

I was idly contemplating implementing a new Jupyter kernel. Luckily, they try to provide facilities to make that easier. Unfortunately, they made a number of suboptimal choices in their API. Fortunately, those mistakes are both common and easily avoidable.

Subclassing as API

They suggest subclassing IPython.kernel.zmq.kernelbase.Kernel. Errr…not “suggest”. It is a “required step”. The reason is probably that this class already implements 21 methods. When you subclass, make sure not to use any of these names, or things will break randomly. If you do not want to subclass, good luck figuring out what assumptions the system makes about those 21 methods, because there is no interface or even prose documentation.

The return statement in their example is particularly illuminating:

        return {'status': 'ok',
                # The base class increments the execution count
                'execution_count': self.execution_count,
                'payload': [],
                'user_expressions': {},
               }

Note the comment “base class increments the execution count”. This is a classic code smell: it seems like this would be needed in every single overrider, which means it really belongs in the helper class, not in every kernel.

None

The signature for the example do_execute is:

    def do_execute(self, code, silent, store_history=True, 
                   user_expressions=None,
                   allow_stdin=False):

Of course, this means that user_expressions will sometimes be a dictionary and sometimes None. It is likely that the code will be written to anticipate one or the other, and will fail in interesting ways if None is actually sent.
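
In practice, every kernel author ends up writing a guard like the following (a sketch; the helper name is hypothetical):

def normalize_user_expressions(user_expressions=None):
    """Collapse the None-or-dict ambiguity the API hands every kernel."""
    if user_expressions is None:
        return {}
    return dict(user_expressions)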

Optional Overrides

As described in this section, there are also ways to make the kernel better with optional overrides. The convention used, which is nowhere explained, is that do_-prefixed methods are the ones you should override to make a better kernel. Nowhere is it explained why there is no default history implementation, or where to get one, or why a simple, naive implementation would be wrong.

Dictionaries

All overrides return dictionaries, which get serialized directly into the underlying communication platform. This is a poor abstraction, especially when the documentation is direct links to the underlying protocol. When wrapping a protocol, it is much nicer to use an Interface as the documentation of what is assumed — and define an attr.s-based class to allow returning something which is automatically the correct type, and will fail in nice ways if a parameter is forgotten.
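
A sketch of what that could look like (all names here are hypothetical, not part of the actual Jupyter API):

import attr

@attr.s
class ExecutionReply(object):
    """A typed reply value instead of a hand-assembled dictionary."""
    status = attr.ib(default="ok")
    payload = attr.ib(default=attr.Factory(list))
    user_expressions = attr.ib(default=attr.Factory(dict))

    def to_message(self):
        # Serialization to the wire format happens here, once,
        # instead of being rebuilt inside every kernel implementation.
        return {"status": self.status,
                "payload": self.payload,
                "user_expressions": self.user_expressions}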

Summary

If you are providing an API, here are a few positive lessons based on the issues above:

  • You should expect interfaces, not subclasses. Use composition, not subclassing. If you want to provide a default implementation with composition, just check for a return of NotImplemented, and use the default (see the sketch after this list).
  • Do the work of abstracting your customers from the need to use dictionaries and unwrap automatically. Use attr.s to avoid customer boilerplate.
  • Send all arguments. Isolate your customers from the need to come up with sane defaults.
  • As much as possible, try to have your interfaces be side-effect free. Instead of asking the customer to directly make a change, allow the customer to make the “needed change” be part of the return type. This will let the customers test their class much more easily.
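
A minimal sketch of the first lesson (all names hypothetical): the framework wraps the user’s object through composition and owns the defaults itself.

class KernelRunner(object):
    """Framework-side wrapper: composition instead of subclassing."""

    def __init__(self, kernel):
        self._kernel = kernel  # any object providing the interface

    def history(self, *args):
        result = self._kernel.do_history(*args)
        if result is NotImplemented:
            return []  # the sane default lives here, not in every kernel
        return result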

by moshez at August 20, 2016 06:56 PM

August 19, 2016

Twisted Matrix Laboratories

Twisted 16.3.2 Released

On behalf of Twisted Matrix Laboratories, I am honoured to announce the release of Twisted 16.3.2.

This is a bug fix & security fix release, and is recommended for all users of Twisted. The fixes are:
  • A bugfix for an HTTP/2 edge case (included in 16.3.1)
  • Fix for CVE-2008-7317 (generating potentially guessable HTTP session identifiers) (included in 16.3.1)
  • Fix for CVE-2008-7318 (sending secure session cookies over insecure connections) (included in 16.3.1)
  • Fix for CVE-2016-1000111 (http://httpoxy.org/) (included in 16.3.1)
  • Twisted's HTTP server, when operating over TLS, would not cleanly close sockets, causing it to build up CLOSE_WAIT sockets until it would eventually run out of file descriptors.
For more information, check the NEWS file (link provided below).

You can find the downloads on PyPI (or alternatively our website). The NEWS file is also available on GitHub.

Many thanks to everyone who had a part in this release - the supporters of the Twisted Software Foundation, the developers who contributed code as well as documentation, and all the people building great things with Twisted!

Twisted Regards,
Amber Brown (HawkOwl)

by HawkOwl (noreply@blogger.com) at August 19, 2016 09:45 AM

August 18, 2016

Jonathan Lange

Patterns are half-formed code

If “technology is stuff that doesn’t work yet”[1], then patterns are code we don’t know how to write yet.

In the Go Programming Language, the authors show how to iterate over elements in a map, sorted by keys:

To enumerate the key/value pairs in order, we must sort the keys explicitly, for instance, using the Strings function from the sort package if the keys are strings. This is a common pattern.

—Go Programming Language, Alan A. A. Donovan & Brian W. Kernighan, p94

The pattern is illustrated by the following code:

import "sort"

var names []string
for name := range ages {
    name = append(names, name)
}
sort.Strings(names)
for _, name := range names {
    fmt.Printf("%s\t%d\n", name, ages[name])
}

Peter Norvig calls this an informal design pattern: something referred to by name (“iterate through items in a map in order of keys”) and re-implemented from scratch each time it’s needed.

Informal patterns have their place but they are a larval form of knowledge, stuck halfway between intuition and formal understanding. When we recognize a pattern, our next step should always be to ask, “can we make it go away?”

Patterns are one way of expressing “how to” knowledge [2] but we have another, better way: code. Source code is a formal expression of “how to” knowledge that we can execute, test, manipulate, verify, compose, and re-use. Encoding “how to” knowledge is largely what programming is [3]. We talk about replacing people with programs precisely because we take the knowledge about how to do their job and encode it such that even a machine can understand it.

So how can we encode the knowledge of iterating through the items in a map in order of keys? How can we replace this pattern with code?

We can start by following Peter Norvig’s example and reach for a dynamic programming language, such as Python:

names = []
for name in ages:
    names.append(name)
names.sort()
for name in names:
    print("{}\t{}".format(name, ages[name]))

This is a very literal translation of the first snippet. A more idiomatic approach would look like:

names = sorted(ages.keys())
for name in names:
    print("{}\t{}".format(name, ages[name])

To replace the pattern with code, we extract a function that takes a map and returns a list of (key, value) pairs in sorted order, like so:

def sorted_items(d):
    result = []
    sorted_keys = sorted(d.keys())
    for k in sorted_keys:
        result.append((k, d[k]))
    return result

for name, age in sorted_items(ages):
    print("{}\t{}".format(name, age))

The pattern has become a function. Instead of a name or a description, it has an identifier, a True Name that gives us power over the thing. When we invoke it we don’t need to comment our code to indicate that we are using a pattern because the name sorted_items makes it clear. If we choose, we can test it, optimize it, or perhaps even prove its correctness.

If we figure out a better way of doing it, such as:

def sorted_items(d):
    return [(k, d[k]) for k in sorted(d.keys())]

Then we only have to change one place.

And if we are willing to tolerate a slight change in behavior,

def sorted_items(d):
    return sorted(d.items())

Then we might not need the function at all.

It was being able to write code like this that drew me towards Python and away from Java, way back in 2001. It wasn’t just that I could get more done in fewer lines—although that helped—it was that I could write what I meant.

Of course, these days I’d much rather write:

import Data.List (sort)
import qualified Data.HashMap as Map

sortedItems :: (Ord k, Ord v) => Map.Map k v -> [(k, v)]
sortedItems d = sort (Map.toList d)

But that’s another story.

[1]Bran Ferren, via Douglas Adams
[2]Patterns can also contain “when to”, “why to”, “why not to”, and “how much” knowledge, but they _always_ contain “how to” knowledge.
[3]The excellent SICP lectures open with the insight that what we call “computer science” might be the very beginning of a science of “how to” knowledge.

by Jonathan Lange at August 18, 2016 05:00 PM

Itamar Turner-Trauring

Less stress, more productivity: why working fewer hours is better for you and your employer

There's always too much work to be done on software projects, too many features to implement, too many bugs to fix. Some days you're just not going through the backlog fast enough, you're not producing enough code, and it's taking too long to fix a seemingly-impossible bug. And to make things worse you're wasting time in pointless meetings instead of getting work done.

Once it gets bad enough you can find yourself always scrambling, working overtime just to keep up. Pretty soon it's just expected, and you need to be available to answer emails at all hours even when there are no emergencies. You're tired and burnt out and there's still just as much work as before.

The real solution is not working even harder or even longer, but rather the complete opposite: working fewer hours.

Some caveats first:

  • The more experienced you are the better this will work. If this is your first year working after school you may need to just deal with it until you can find a better job, which you should do ASAP.
  • Working fewer hours is effectively a new deal you are negotiating with your employer. If you're living from paycheck to paycheck you have no negotiating leverage, so the first thing you need to do is make sure you have some savings in the bank.

Fewer hours, more productivity

Why does working longer hours not improve the situation? Because working longer makes you less productive at the same time that it encourages bad practices by your boss. Working fewer hours does the opposite.

1. A shorter work-week improves your ability to focus

As I've discussed before, working while tired is counter-productive. It takes longer and longer to solve problems, and you very quickly hit the point of diminishing returns. And working consistently for long hours is even worse for your mental focus, since you will quickly burn out.

Long hours: "It's 5 o'clock and I should be done with work, but I just need to finish this problem, just one more try," you tell yourself. But being tired it actually takes you another three hours to solve. The next day you go to work tired and unfocused.

Shorter hours: "It's 5 o'clock and I wish I had this fixed, but I guess I'll try tomorrow morning." The next morning, refreshed, you solve the problem in 10 minutes.

2. A shorter work-week promotes smarter solutions

Working longer hours encourages bad programming habits: you start thinking that the way to solve problems is just forcing yourself to get through the work. But programming is all about automation, about building abstractions to reduce work. Often you can get huge reductions in effort by figuring out a better way to implement an API, or by realizing that a particular piece of functionality is not actually necessary.

Let's imagine your boss hands you a task that must ship to your customer in 2 weeks. And you estimate that optimistically it will take you 3 weeks to implement.

Long hours: "This needs to ship in two weeks, but I think it's 120 hours to complete... so I guess I'm working evenings and weekends again." You end up even more burnt out, and probably the feature will still ship late.

Shorter hours: "I've got two weeks, but this is way too much work. What can I do to reduce the scope? Guess I'll spend a couple hours thinking about it."

And soon: "Oh, if I do this restructuring I can get 80% of the feature done in one week, and that'll probably keep the customer happy until I finish the rest. And even if I underestimated I've still got the second week to get that part done."

3. A shorter work-week discourages bad management practices

If your response to any issue is to work longer hours you are encouraging bad management practices. You are effectively telling your manager that your time is not valuable, and that they need not prioritize accordingly.

Long hours: If your manager isn't sure whether you should go to a meeting, they might tell themselves that "it might waste an hour of time, but they'll just work an extra hour in the evening to make it up." If your manager can't decide between two features, they'll just hand you both instead of making a hard decision.

Shorter hours: With shorter hours your time becomes more scarce and valuable. If your manager is at all reasonable less important meetings will get skipped and more important features will be prioritized.

Getting to fewer hours

A short work-week means different things to different people. One programmer I know made clear when she started a job at a startup that she worked 40-45 hours a week and that's it. Everyone else worked much longer hours, but that was her personal limit. Personally I have negotiated a 35-hour work week.

Whatever the number that makes sense to you, the key is to clearly explain your limits and then stick to them. Tell your manager "I am going to be working a 40-hour work week, unless it's a real emergency." Once you've explained your limits you need to stick to them: no answering emails after hours, no agreeing to do just one little thing on the weekend.

And then you need to prove yourself by still being productive, and making sure that when you are working you are working. Spending a couple hours a day at work watching cat videos probably won't go well with shorter hours.

There are companies where this won't fly, of course, where management is so bad or norms are so out of whack that even a 40-hour work week by a productive team member won't be acceptable. In those cases you need to look for a new job, and as part of the interview figure out the work culture and project management practices of prospective employers. Do people work short hours or long hours? Is everything always on fire or do projects get delivered on time?

Whether you're negotiating your hours at your existing job or at a new job, you'll do better the more experienced and skilled a programmer you are. These days I have enough provable skills that I can do OK at negotiating, but it's taken learning from a lot of mistakes to get there. If you want to improve your skills and learn faster than I did, head over to Software Clown where you can hear about my past mistakes and how you can avoid them.

August 18, 2016 04:00 AM

August 17, 2016

Glyph Lefkowitz

Probably best to get this out of the way before this weekend:

If I meet you at a technical conference, you’ll probably see me extend my elbow in your direction, rather than my hand. This is because I won’t shake your hand at a conference.

People sometimes joke about “con crud”, but the amount of lost productivity and human misery generated by conference-transmitted sickness is not funny. Personally, by the time the year is out, I will most likely have attended 5 conferences. This means that if I get sick at each one, I will spend more than a month out of the year out of commission being sick.

When I tell people this, they think I’m a germophobe. But, in all likelihood, I won’t be the one getting sick. I already have 10 years of building up herd immunity to the set of minor ailments that afflict the international Python-conference-attending community. It’s true that I don’t particularly want to get sick myself, but I happily shake people’s hands in more moderately-sized social gatherings. I’ve had a cold before and I’ve had one again; I have no illusion that ritually dousing myself in Purell every day will make me immune to all disease.

I’m not shaking your hand because I don’t want you to get sick. Please don’t be weird about it!

by Glyph at August 17, 2016 06:42 PM

August 14, 2016

Glyph Lefkowitz

A Container Is A Function Call

It seems to me that the prevailing mental model among users of container technology1 right now is that a container is a tiny little virtual machine. It’s like a machine in the sense that it is provisioned and deprovisioned by explicit decisions, and we talk about “booting” containers. We configure it sort of like we configure a machine; dropping a bunch of files into a volume, setting some environment variables.

In my mind though, a container is something fundamentally different than a VM. Rather than coming from the perspective of “let’s take a VM and make it smaller so we can do cool stuff” - get rid of the kernel, get rid of fixed memory allocations, get rid of emulated memory access and instructions, so we can provision more of them at higher density... I’m coming at it from the opposite direction.

For me, containers are “let’s take a program and make it bigger so we can do cool stuff”. Let’s add in the whole user-space filesystem so it’s got all the same bits every time, so we don’t need to worry about library management, so we can ship it around from computer to computer as a self-contained unit. Awesome!

Of course, there are other ecosystems that figured this out a really long time ago, but having it as a commodity within the most popular server deployment environment has changed things.

Of course, an individual container isn’t a whole program. That’s why we need tools like compose to put containers together into a functioning whole. This makes a container not just a program, but rather, a part of a program. And of course, we all know what the smaller parts of a program are called:

Functions.2

A container of course is not the function itself; the image is the function. A container itself is a function call.

Perceived through this lens, it becomes apparent that Docker is missing some pretty important information. As a tiny VM, it has all the parts you need: it has an operating system (in the docker build), the ability to boot and reboot (docker run), instrumentation (docker inspect), debugging (docker exec), etc. As a really big function, it’s strangely anemic.

Specifically: in every programming language worth its salt, we have a type system; some mechanism to identify what parameters a function will take, and what return value it will have.

You might find this weird coming from a Python person, a language where

def foo(a, b, c):
    return a.x(c.d(b))

is considered an acceptable level of type documentation by some3; there’s no requirement to say what a, b, and c are. However, just because the type system is implicit, that doesn’t mean it’s not there, even in the text of the program. Let’s consider, from reading this tiny example, what we can discover:

  • foo takes 3 arguments, their names are “a”, “b”, and “c”, and it returns a value.
  • Somewhere else in the codebase there’s an object with an x method, which takes a single argument and also returns a value.
  • The type of <unknown>.x’s argument is the same as the return type of another method somewhere in the codebase, <unknown-2>.d

And so on, and so on. At runtime each of these types takes on a specific, concrete value, with a type, and if you set a breakpoint and single-step into it with a debugger, you can see each of those types very easily. Also at runtime you will get TypeError exceptions telling you exactly what was wrong with what you tried to do at a number of points, if you make a mistake.

The analogy to containers isn’t exact; inputs and outputs aren’t obviously in the shape of “arguments” and “return values”, especially since containers tend to be long-running; but nevertheless, a container does have inputs and outputs in the form of env vars, network services, and volumes.

Let’s consider the “foo” of docker, which would be the middle tier of a 3-tier web application (cribbed from a real live example):

FROM pypy:2
RUN apt-get update -ym
RUN apt-get upgrade -ym
RUN apt-get install -ym libssl-dev libffi-dev
RUN pip install virtualenv
RUN mkdir -p /code/env
RUN virtualenv /code/env
RUN pwd

COPY requirements.txt /code/requirements.txt
RUN /code/env/bin/pip install -r /code/requirements.txt
COPY main /code/main
RUN chmod a+x /code/main

VOLUME /clf
VOLUME /site
VOLUME /etc/ssl/private

ENTRYPOINT ["/code/main"]

In this file, we can only see three inputs, which are filesystem locations: /clf, /site, and /etc/ssl/private. How is this different than our Python example, a language with supposedly “no type information”?

  • The image has no metadata explaining what might go in those locations, or what roles they serve. We have no way to annotate them within the Dockerfile.
  • What services does this container need to connect to in order to get its job done? What hostnames will it connect to, what ports, and what will it expect to find there? We have no way of knowing. It doesn’t say. Any errors about the failed connections will come in a custom format, possibly in logs, from the application itself, and not from docker.
  • What services does this container export? It could have used an EXPOSE line to give us a hint, but it doesn’t need to; and even if it did, all we’d have is a port number.
  • What environment variables does its code require? What format do they need to be in?
  • We do know that we could look in requirements.txt to figure out what libraries are going to be used, but in order to figure out what the service dependencies are, we’re going to need to read all of the code to all of them.

Of course, the one way that this example is unrealistic is that I deleted all the comments explaining all of those things. Indeed, best practice these days would be to include comments in your Dockerfiles, and include example compose files in your repository, to give users some hint as to how these things all wire together.

This sort of state isn’t entirely uncommon in programming languages. In fact, in this popular GitHub project you can see that large programs written in assembler in the 1960s included exactly this sort of documentation convention: huge front-matter comments in English prose.

That is the current state of the container ecosystem. We are at the “late ’60s assembly language” stage of orchestration development. It would be a huge technological leap forward to be able to communicate our intent structurally.


When you’re building an image, you’re building it for a particular purpose. You already pretty much know what you’re trying to do and what you’re going to need to do it.

  1. When instantiated, the image is going to consume network services. This is not just a matter of hostnames and TCP ports; those services need to be providing a specific service, over a specific protocol. A generic reverse proxy might be able to handle an arbitrary HTTP endpoint, but an API client needs that specific API. A database admin tool might be OK with just “it’s a database” but an application needs a particular schema.
  2. It’s going to consume environment variables. But not just any variables; the variables have to be in a particular format.
  3. It’s going to consume volumes. The volumes need to contain data in a particular format, readable and writable by a particular UID.
  4. It’s also going to produce all of these things; it may listen on a network service port, provision a database schema, or emit some text that needs to be fed back into an environment variable elsewhere.

Here’s a brief sketch of what I want to see in a Dockerfile to allow me to express this sort of thing:

FROM ...
RUN ...

LISTENS ON: TCP:80 FOR: org.ietf.http/com.example.my-application-api
CONNECTS TO: pgwritemaster.internal ON: TCP:5432 FOR: org.postgresql.db/com.example.my-app-schema
CONNECTS TO: {{ETCD_HOST}} ON: TCP:{{ETCD_PORT}} FOR: com.coreos.etcd/client-communication
ENVIRONMENT NEEDS: ETCD_HOST FORMAT: HOST(com.coreos.etcd/client-communication)
ENVIRONMENT NEEDS: ETCD_PORT FORMAT: PORT(com.coreos.etcd/client-communication)
VOLUME AT: /logs FORMAT: org.w3.clf REQUIRES: WRITE UID: 4321

An image thusly built would refuse to run unless:

  • Somewhere else on its network, there was an etcd host/port known to it, its host and port supplied via environment variables.
  • Somewhere else on its network, there was a postgres host, listening on port 5432, with a name-resolution entry of “pgwritemaster.internal”.
  • An environment variable for the etcd configuration was supplied
  • A writable volume for /logs was supplied, owned by user-ID 4321 where it could write common log format logs.

There are probably a lot of flaws in the specific syntax here, but I hope you can see past that, to the broader point that the software inside a container has precise expectations of its environment, and that we presently have no way of communicating those expectations beyond writing a Melvilleian essay in each Dockerfile’s comments, beseeching those who would run the image to give it what it needs.


Why bother with this sort of work, if all the image can do with it is “refuse to run”?

First and foremost, today, the image effectively won’t run. Oh, it’ll start up, and it’ll consume some resources, but it will break when you try to do anything with it. What this metadata will allow the container runtime to do is to tell you why the image didn’t run, and give you specific, actionable, fast feedback about what you need to do in order to fix the problem. You won’t have to go groveling through logs; which is always especially hard if the back-end service you forgot to properly connect to was the log aggregation service. So this will be an order of magnitude speed improvement on initial deployments and development-environment setups for utility containers. Whole applications typically already come with a compose file, of course, but ideally applications would be built out of functioning self-contained pieces and not assembled one custom container at a time.

Secondly, if there were a strong tooling standard for providing this metadata within the image itself, it might become possible for infrastructure service providers (like, ahem, my employer) to automatically detect and satisfy service dependencies. Right now, if you have a database as a service that lives outside the container system in production, but within the container system in development and test, there’s no way for the orchestration layer to say “good news, everyone! you can find the database you need here: ...”.

My main interest is in allowing open source software developers to give service operators exactly what they need, so the upstream developers can get useful bug reports. There’s a constant tension where volunteer software developers find themselves fielding bug reports where someone deployed their code in a weird way, hacked it up to support some strange environment, built a derived container that had all kinds of extra junk in it to support service discovery or logging or somesuch, and so they don’t want to deal with the support load that that generates. Both people in that exchange are behaving reasonably. The developers gave the ops folks a container that runs their software to the best of their abilities. The service vendors made the minimal modifications they needed to have the container become a part of their service fabric. Yet we arrive at a scenario where nobody feels responsible for the resulting artifact.

If we could just say what it is that the container needs in order to really work, in a way which was precise and machine-readable, then it would be clear where the responsibility lies. Service providers could just run the container unmodified, and they’d know very clearly whether or not they’d satisfied its runtime requirements. Open source developers - or even commercial service vendors! - could say very clearly what they expected to be passed in, and when they got bug reports, they’d know exactly how their service should have behaved.


  1. which mostly but not entirely just means “docker”; it’s weird, of course, because there are pieces that docker depends on and tools that build upon docker which are part of this, but docker remains the nexus. 

  2. Yes yes, I know that they’re not really functions Tristan, they’re subroutines, but that’s the word people use for “subroutines” nowadays. 

  3. Just to be clear: no it isn’t. Write a damn docstring, or at least some type annotations

by Glyph at August 14, 2016 10:22 PM

Python Packaging Is Good Now

Okay folks. Time’s up. It’s too late to say that Python’s packaging ecosystem is terrible any more. I’m calling it.

Python packaging is not bad any more. If you’re a developer, and you’re trying to create or consume Python libraries, it can be a tractable, even pleasant experience.

I need to say this, because for a long time, Python’s packaging toolchain was … problematic. It isn’t any more, but a lot of people still seem to think that it is, so it’s time to set the record straight.

If you’re not familiar with the history it went something like this:

The Dawn

Python first shipped in an era when adding a dependency meant a veritable Odyssey into cyberspace. First, you’d wait until nobody in your whole family was using the phone line. Then you’d dial your ISP. Once you’d finished fighting your SLIP or PPP client, you’d ask a netnews group if anyone knew of a good gopher site to find a library that could solve your problem. Once you were done with that task, you’d sign off the Internet for the night, and wait about 48 hours to see if anyone responded. If you were lucky enough to get a reply, you’d set up a download at the end of your night’s web-surfing.

pip search it wasn’t.

For the time, Python’s approach to dependency-handling was incredibly forward-looking. The import statement, and the pluggable module import system, made it easy to get dependencies from wherever made sense.

In Python 2.01, Distutils was introduced. This let Python developers describe their collections of modules abstractly, and added tool support for producing redistributable collections of modules and packages. Again, this was tremendously forward-looking, if somewhat primitive; there was very little to compare it to at the time.

Fast forwarding to 2004; setuptools was created to address some of the increasingly-common tasks that open source software maintainers were facing with distributing their modules over the internet. In 2005, it added easy_install, in order to provide a tool to automate resolving dependencies and downloading them into the right locations.

The Dark Age

Unfortunately, in addition to providing basic utilities for expressing dependencies, setuptools also dragged in a tremendous amount of complexity. Its author felt that import should do something slightly different than what it does, so installing setuptools changed it. The main difference between normal import and setuptools import was that it facilitated having multiple different versions of the same library in the same program at the same time. It turns out that that’s a dumb idea, but in fairness, it wasn’t entirely clear at the time, and it is certainly useful (and necessary!) to be able to have multiple versions of a library installed onto a computer at the same time.

In addition to these idiosyncratic departures from standard Python semantics, setuptools suffered from being unmaintained. It became a critical part of the Python ecosystem at the same time as the author was moving on to other projects entirely outside of programming. No-one could agree on who the new maintainers should be for a long period of time. The project was forked, and many operating systems’ packaging toolchains calcified around a buggy, ancient version.

From 2008 to 2012 or so, Python packaging was a total mess. It was painful to use. It was not clear which libraries or tools to use, which ones were worth investing in or learning. Doing things the simple way was too tedious, and doing things the automated way involved lots of poorly-documented workarounds and inscrutable failure modes.

This is to say nothing of the fact that there were critical security flaws in various parts of this toolchain. There was no practical way to package and upload Python packages in such a way that users didn’t need a full compiler toolchain for their platform.

To make matters worse for the popular perception of Python’s packaging prowess2, at this same time, newer languages and environments were getting a lot of buzz, ones that had packaging built in at the very beginning and had a much better binary distribution story. These environments learned lessons from the screw-ups of Python and Perl, and really got a lot of things right from the start.

Finally, the Python Package Index, the site which hosts all the open source packages uploaded by the Python community, was basically a proof-of-concept that went live way too early, had almost no operational resources, and was offline all the dang time.

Things were looking pretty bad for Python.


Intermission

Here is where we get to the point of this post - this is where popular opinion about Python packaging is stuck. Outdated information from this period abounds. Blog posts complaining about problems score high in web searches. Those who used Python during this time, but have now moved on to some other language, frequently scoff and dismiss Python as impossible to package, its packaging ecosystem as broken, PyPI as down all the time, and so on. Worst of all, bad advice for workarounds which are no longer necessary is still easy to find, which causes users to pre-emptively break their environments when they really don’t need to.


From The Ashes

In the midst of all this brokenness, there were some who were heroically, quietly, slowly fixing the mess, one gnarly bug-report at a time. pip was started, and its various maintainers fixed much of easy_install’s overcomplexity and many of its flaws. Donald Stufft stepped in both on pip and PyPI, improving the availability of the systems they depended upon and fixing some pretty serious vulnerabilities in the tools themselves. Daniel Holth wrote a PEP for the wheel format, which allows for binary redistribution of libraries. In other words, it lets authors of packages which need a C compiler to build give their users a way to not have one.

In 2013, setuptools and distribute un-forked, providing a path forward for operating system vendors to start updating their installations and allowing users to use something modern.

Python Core started distributing the ensurepip module along with both Python 2.7 and 3.3, allowing any user with a recent Python installed to quickly bootstrap into a sensible Python development environment with a one-liner.

A New Renaissance

I won’t give you a full run-down of the state of the packaging art. There’s already a website for that. I will, however, give you a précis of how much easier it is to get started nowadays. Today, if you want to get a sensible, up-to-date python development environment, without administrative privileges, all you have to do is:

$ python -m ensurepip --user
$ python -m pip install --user --upgrade pip
$ python -m pip install --user --upgrade virtualenv

Then, for each project you want to do, make a new virtualenv:

$ python -m virtualenv lets-go
$ . ./lets-go/bin/activate
(lets-go) $ _

From here on out, the world is your oyster; you can pip install to your heart’s content, and you probably won’t even need to compile any C for most packages. These instructions don’t depend on Python version, either: as long as it’s up-to-date, the same steps work on Python 2, Python 3, PyPy and even Jython. In fact, often the ensurepip step isn’t even necessary since pip comes preinstalled. Running it if it’s unnecessary is harmless, even!

Other, more advanced packaging operations are much simpler than they used to be, too.

  • Need a C compiler? OS vendors have been working with the open source community to make this easier across the board:
    $ apt install build-essential python-dev # ubuntu
    $ xcode-select --install # macOS
    $ dnf install @development-tools python-devel # fedora
    C:\> REM windows
    C:\> start https://www.microsoft.com/en-us/download/details.aspx?id=44266
    

Okay that last one’s not as obvious as it ought to be but they did at least make it freely available!

  • Want to upload some stuff to PyPI? This should do it for almost any project:

    $ pip install twine
    $ python setup.py sdist bdist_wheel
    $ twine upload dist/*
    
  • Want to build wheels for the wild and wooly world of Linux? There’s an app4 for that.

Importantly, PyPI will almost certainly be online. Not only that, but a new, revamped site will be “launching” any day now3.

Again, this isn’t a comprehensive resource; I just want to give you an idea of what’s possible. But, as a deeply experienced Python expert I used to swear at these tools six times a day for years; the most serious Python packaging issue I’ve had this year to date was fixed by cleaning up my git repo to delete a cache file.

Work Still To Do

While the current situation is good, it’s still not great.

Here are just a few of my desiderata:

  • We still need better and more universally agreed-upon tooling for end-user deployments.
  • Pip should have a GUI frontend so that users can write Python stuff without learning as much command-line arcana.
  • There should be tools that help you write and update a setup.py. Or a setup.python.json or something, so you don’t actually need to write code just to ship some metadata.
  • The error messages that you get when you try to build something that needs a C compiler and it doesn’t work should be clearer and more actionable for users who don’t already know what they mean.
  • PyPI should automatically build wheels for all platforms by default when you upload sdists; this is a huge project, of course, but it would be super awesome default behavior.

I could go on. There are lots of ways that Python packaging could be better.

The Bottom Line

The real takeaway here though, is that although it’s still not perfect, other languages are no longer doing appreciably better. Go is still working through a number of different options regarding dependency management and vendoring, and, like Python extensions that require C dependencies, CGo is sometimes necessary and always a problem. Node has had its own well-publicized problems with their dependency management culture and package manager. Hackage is cool and all but everything takes a literal geological epoch to compile.

As always, I’m sure none of this applies to Rust and Cargo is basically perfect, but that doesn’t matter, because nobody reading this is actually using Rust.

My point is not that packaging in any of these languages is particularly bad. They’re all actually doing pretty well, especially compared to the state of the general programming ecosystem a few years ago; many of them are making regular progress towards user-facing improvements.

My point is that any commentary suggesting they’re meaningfully better than Python at this point is probably just out of date. Working with Python packaging is more or less fine right now. It could be better, but lots of people are working on improving it, and the structural problems that prevented those improvements from being adopted by the community in a timely manner have almost all been addressed.

Go! Make some virtualenvs! Hack some setup.pys! If it’s been a while and your last experience was really miserable, I promise, it’s better now.


Am I wrong? Did I screw up a detail of your favorite language? Did I forget to mention the one language environment that has a completely perfect, flawless packaging story? Do you feel the need to just yell at a stranger on the Internet about picayune details? Feel free to get in touch!


  1. released in October, 2000 

  2. say that five times fast. 

  3. although I’m not sure what it means to “launch” when the site is online, and running against the production data-store, and you can use it for pretty much everything... 

  4. “app” meaning of course “docker container” 

by Glyph at August 14, 2016 09:17 AM

What’s In A Name

Amber’s excellent lightning talk on identity yesterday made me feel many feels, and reminded me of this excellent post by Patrick McKenzie about false assumptions regarding names.

While that list is helpful, it’s very light on positively-framed advice, i.e. “you should” rather than “you shouldn’t”. So I feel like I want to give a little bit of specific, prescriptive advice to programmers who might need to deal with names.

First and foremost: stop asking for unnecessary information. If I’m just authenticating to your system to download a comic book, you do not need to know my name. Your payment provider might need a billing address, but you absolutely do not need to store my name.

Okay, okay. I understand that may make your system seem a little impersonal, and you want to be able to greet me, or maybe have a name to show to other users beyond my login ID or email address that has to be unique on the site. Fine. Here’s what a good “name” field looks like: a single, free-form text input.

You don’t need to break my name down into parts. If you just need a way to refer to me, then let me tell you whatever the heck I want. Honorific? Maybe I have more than one; maybe I don’t want you to use any.

And this brings me to “first name / last name”.

In most cases, you should not use these terms. They are oversimplifications of how names work, appropriate only for children in English-speaking countries who might not understand the subtleties involved and only need to know that one name comes before the other.

The terms you’re looking for are given name and surname, or perhaps family name. (“Middle name” might still be an appropriate term because that fills a more specific role.) But by using these more semantically useful terms, you include orders of magnitude more names in your normalization scheme. More importantly, by acknowledging the roles of the different parts of a name, you’ll come to realize that there are many other types of name: stage names, maiden names, names on passports, legal names in particular jurisdictions, and more.

If your application does have a legitimate need to normalize names, for example, to interoperate with third-party databases, or to fulfill some regulatory requirement:

  • When you refer to a user of the system, always allow them to customize how their name is presented. Give them the benefit of the doubt. If you’re concerned about users abusing this display-name system to insult other users, it's understandable that you may need to moderate that a little. But there's no reason to ever moderate or regulate how a user's name is displayed to themselves. You can start to address offensive names by allowing other users to set nicknames for them. Only as a last resort, allow other users to report their name as not-actually-their-name, abusive or rude; if you do that, you have to investigate those reports. Let users affirm other users’ names, too, and verify reports: if someone attracts a million fake troll accounts, but all their friends affirm that their name is correct, you should be able to detect that. Don’t check government IDs in order to do this; they’re not relevant.
  • Allow the user to enter their normalized name as a series of names with classifiers attached to each one (see the sketch after this list).
  • Keep in mind that spaces are valid in any of these names. Many people have multi-word first names, middle names, or last names, and it can matter how you classify them. For one example that should resonate with readers of this blog, it’s “Guido” “van Rossum”, not “Guido” “Van” “Rossum”. It is definitely not “Guido” “Vanrossum”.
  • So is other punctuation. Even dashes. Even apostrophes. Especially apostrophes, you insensitive clod. Literally ten billion people whose surnames start with “O’” live in Ireland and they do not care about your broken database input security practices.
  • Allow for the user to have multiple names with classifiers attached to each one: “legal name in China”, “stage name”, “name on passport”, “maiden name”, etc. Keep in mind that more than one name for a given person may simultaneously be accurate for a certain audience and legally valid. They can even be legally valid in the same context: many people have social security cards, birth certificates, driver’s licenses and passports with different names on them; sometimes due to a clerical error, sometimes due to the way different systems work. If your goal is to match up with those systems, especially more than one of them, you need to account for that possibility.
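
A sketch of such a data model (field names are hypothetical; this uses the attrs library):

import attr

@attr.s
class NamePart(object):
    kind = attr.ib()  # e.g. "given", "middle", "surname", "honorific"
    text = attr.ib()  # spaces, dashes, and apostrophes are all valid here

@attr.s
class Name(object):
    display = attr.ib()                          # always user-controlled
    context = attr.ib(default=None)              # e.g. "name on passport"
    parts = attr.ib(default=attr.Factory(list))

# "Guido" "van Rossum", not "Guido" "Van" "Rossum":
guido = Name(display="Guido van Rossum",
             parts=[NamePart("given", "Guido"),
                    NamePart("surname", "van Rossum")])

A user record would then hold a list of such Name objects, one per context.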

If you’re a programmer and you’re balking at this complexity: good. Remember that for most systems, sticking with the first option - treating users’ names as totally opaque text - is probably your best bet. You probably don’t need to know the structure of the user’s name for most purposes.

by Glyph at August 14, 2016 12:48 AM

August 12, 2016

Glyph Lefkowitz

The One Python Library Everyone Needs

Do you write programs in Python? You should be using attrs.

Why, you ask? Don’t ask. Just use it.

Okay, fine. Let me back up.

I love Python; it’s been my primary programming language for 10+ years and despite a number of interesting developments in the interim I have no plans to switch to anything else.

But Python is not without its problems. In some cases it encourages you to do the wrong thing. Particularly, there is a deeply unfortunate proliferation of class inheritance and the God-object anti-pattern in many libraries.

One cause for this might be that Python is a highly accessible language, so less experienced programmers make mistakes that they then have to live with forever.

But I think that perhaps a more significant reason is the fact that Python sometimes punishes you for trying to do the right thing.

The “right thing” in the context of object design is to make lots of small, self-contained classes that do one thing and do it well. For example, if you notice your object is starting to accrue a lot of private methods, perhaps you should be making those “public”1 methods of a private attribute. But if it’s tedious to do that, you probably won’t bother.

Another place you probably should be defining an object is when you have a bag of related data that needs its relationships, invariants, and behavior explained. Python makes it soooo easy to just define a tuple or a list. The first couple of times you type host, port = ... instead of address = ... it doesn’t seem like a big deal, but then soon enough you’re typing [(family, socktype, proto, canonname, sockaddr)] = ... everywhere and your life is filled with regret. That is, if you’re lucky. If you’re not lucky, you’re just maintaining code that does something like values[0][7][4][HOSTNAME]["canonical"] and your life is filled with garden-variety pain rather than the more complex and nuanced emotion of regret.


This raises the question: is it tedious to make a class in Python? Let’s look at a simple data structure: a 3-dimensional cartesian coordinate. It starts off simply enough:

class Point3D(object):

So far so good. We’ve got a 3 dimensional point. What next?

class Point3D(object):
    def __init__(self, x, y, z):

Well, that’s a bit unfortunate. I just want a holder for a little bit of data, and I’ve already had to override a special method from the Python runtime with an internal naming convention? Not too bad, I suppose; all programming is weird symbols after a fashion.

At least I see my attribute names in there, that makes sense.

class Point3D(object):
    def __init__(self, x, y, z):
        self.x

I already said I wanted an x, but now I have to assign it as an attribute...

class Point3D(object):
    def __init__(self, x, y, z):
        self.x = x

... to x? Uh, obviously ...

class Point3D(object):
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

... and now I have to do that once for every attribute, so this actually scales poorly? I have to type every attribute name 3 times?!?

Oh well. At least I’m done now.

class Point3D(object):
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
    def __repr__(self):

Wait what do you mean I’m not done.

class Point3D(object):
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
    def __repr__(self):
        return (self.__class__.__name__ +
                ("(x={}, y={}, z={})".format(self.x, self.y, self.z)))

Oh come on. So I have to type every attribute name 5 times, if I want to be able to see what the heck this thing is when I’m debugging, which a tuple would have given me for free?!?!?

class Point3D(object):
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
    def __repr__(self):
        return (self.__class__.__name__ +
                ("(x={}, y={}, z={})".format(self.x, self.y, self.z)))
    def __eq__(self, other):
        if not isinstance(other, self.__class__):
            return NotImplemented
        return (self.x, self.y, self.z) == (other.x, other.y, other.z)

7 times?!?!?!?

class Point3D(object):
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
    def __repr__(self):
        return (self.__class__.__name__ +
                ("(x={}, y={}, z={})".format(self.x, self.y, self.z)))
    def __eq__(self, other):
        if not isinstance(other, self.__class__):
            return NotImplemented
        return (self.x, self.y, self.z) == (other.x, other.y, other.z)
    def __lt__(self, other):
        if not isinstance(other, self.__class__):
            return NotImplemented
        return (self.x, self.y, self.z) < (other.x, other.y, other.z)

9 times?!?!?!?!?

from functools import total_ordering
@total_ordering
class Point3D(object):
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
    def __repr__(self):
        return (self.__class__.__name__ +
                ("(x={}, y={}, z={})".format(self.x, self.y, self.z)))
    def __eq__(self, other):
        if not isinstance(other, self.__class__):
            return NotImplemented
        return (self.x, self.y, self.z) == (other.x, other.y, other.z)
    def __lt__(self, other):
        if not isinstance(other, self.__class__):
            return NotImplemented
        return (self.x, self.y, self.z) < (other.x, other.y, other.z)

Okay, whew - 2 more lines of code isn’t great, but now at least we don’t have to define all the other comparison methods. But now we’re done, right?

from unittest import TestCase
class Point3DTests(TestCase):

You know what? I’m done. 20 lines of code so far and we don’t even have a class that does anything; the hard part of this problem was supposed to be the quaternion solver, not “make a data structure which can be printed and compared”. Piles of undocumented garbage tuples, lists, and dictionaries it is; defining proper data structures well is way too hard in Python.2


namedtuple to the (not really) rescue

The standard library’s answer to this conundrum is namedtuple. While a valiant first draft (it bears many similarities to my own somewhat embarrassing and antiquated entry in this genre) namedtuple is unfortunately unsalvageable. It exports a huge amount of undesirable public functionality which would be a huge compatibility nightmare to maintain, and it doesn’t address half the problems that one runs into. A full enumeration of its shortcomings would be tedious, but a few of the highlights:

  • Its fields are accessible as numbered indexes whether you want them to be or not. Among other things, this means you can’t have private attributes, because they’re exposed via the apparently public __getitem__ interface.
  • It compares equal to a raw tuple of the same values, so it’s easy to get into bizarre type confusion, especially if you’re trying to use it to migrate away from using tuples and lists.
  • It’s a tuple, so it’s always immutable. Sort of.

As to that last point, either you can use it like this:

Point3D = namedtuple('Point3D', ['x', 'y', 'z'])

in which case it doesn’t look like a type in your code; simple syntax-analysis tools without special cases won’t recognize it as one. You can’t give it any other behaviors this way, since there’s nowhere to put a method. Not to mention the fact that you had to type the class’s name twice.

Alternately you can use inheritance and do this:

class Point3D(namedtuple('_Point3DBase', 'x y z'.split())):
    pass

This gives you a place you can put methods, and a docstring, and generally have it look like a class, which it is... but in return you now have a weird internal name (which, by the way, is what shows up in the repr, not the class’s actual name). However, you’ve also silently made the attributes not listed here mutable, a strange side-effect of adding the class declaration; that is, unless you add __slots__ = 'x y z'.split() to the class body, and then we’re just back to typing every attribute name twice.
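
To make that last surprise concrete, here is a quick interpreter sketch (mine, not part of the original post):

>>> from collections import namedtuple
>>> class Point3D(namedtuple('_Point3DBase', 'x y z')):
...     pass
>>> p = Point3D(1, 2, 3)
>>> p.w = 4   # silently allowed: without __slots__, the subclass grew a __dict__
>>> p.w
4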

And this doesn’t even mention the fact that science has proven that you shouldn’t use inheritance.

So, namedtuple can be an improvement if it’s all you’ve got, but only in some cases, and it has its own weird baggage.


Enter The attr

So here’s where my favorite mandatory Python library comes in.

Let’s re-examine the problem above. How do I make Point3D with attrs?

import attr
@attr.s

Since this isn’t built into the language, we do have to have 2 lines of boilerplate to get us started: the import and the decorator saying we’re about to use it.

import attr
@attr.s
class Point3D(object):

Look, no inheritance! By using a class decorator, Point3D remains a Plain Old Python Class (albeit with some helpful double-underscore methods tacked on, as we’ll see momentarily).

import attr
@attr.s
class Point3D(object):
    x = attr.ib()

It has an attribute called x.

import attr
@attr.s
class Point3D(object):
    x = attr.ib()
    y = attr.ib()
    z = attr.ib()

And one called y and one called z and we’re done.

We’re done? Wait. What about a nice string representation?

>>> Point3D(1, 2, 3)
Point3D(x=1, y=2, z=3)

Comparison?

>>> Point3D(1, 2, 3) == Point3D(1, 2, 3)
True
>>> Point3D(3, 2, 1) == Point3D(1, 2, 3)
False
>>> Point3D(3, 2, 3) > Point3D(1, 2, 3)
True

Okay sure but what if I want to extract the data defined in explicit attributes in a format appropriate for JSON serialization?

>>> attr.asdict(Point3D(1, 2, 3))
{'y': 2, 'x': 1, 'z': 3}

Maybe that last one was a little on the nose. But nevertheless, it’s one of many things that becomes easier because attrs lets you declare the fields on your class, along with lots of potentially interesting metadata about them, and then get that metadata back out.

>>> import pprint
>>> pprint.pprint(attr.fields(Point3D))
(Attribute(name='x', default=NOTHING, validator=None, repr=True, cmp=True, hash=True, init=True, convert=None),
 Attribute(name='y', default=NOTHING, validator=None, repr=True, cmp=True, hash=True, init=True, convert=None),
 Attribute(name='z', default=NOTHING, validator=None, repr=True, cmp=True, hash=True, init=True, convert=None))

I am not going to dive into every interesting feature of attrs here; you can read the documentation for that. Plus, it’s well-maintained, so new goodies show up every so often and I might miss something important. But attrs does a few key things that, once you have them, you realize that Python was sorely missing before:

  1. It lets you define types concisely, as opposed to the normally quite verbose manual def __init__.... Types without typing.
  2. It lets you say what you mean directly with a declaration rather than expressing it in a roundabout imperative recipe. Instead of “I have a type, it’s called MyType, it has a constructor, in the constructor I assign the property ‘A’ to the parameter ‘A’ (and so on)”, you say “I have a type, it’s called MyType, it has an attribute called a”, and behavior is derived from that fact, rather than having to later guess about the fact by reverse engineering it from behavior (for example, running dir on an instance, or looking at self.__class__.__dict__).
  3. It provides useful default behavior, as opposed to Python’s sometimes-useful but often-backwards defaults.
  4. It adds a place for you to put a more rigorous implementation later, while starting out simple.

Let’s explore that last point.

Progressive Enhancement

While I’m not going to talk about every feature, I’d be remiss if I didn’t mention a few of them. As you can see from those mile-long repr()s for Attribute above, there are a number of interesting ones.

For example: you can validate attributes when they are passed into an @attr.s-ified class. Our Point3D, for example, should probably contain numbers. For simplicity’s sake, we could say that that means instances of float, like so:

import attr
from attr.validators import instance_of
@attr.s
class Point3D(object):
    x = attr.ib(validator=instance_of(float))
    y = attr.ib(validator=instance_of(float))
    z = attr.ib(validator=instance_of(float))
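
Now construction enforces those types; a quick interpreter sketch (the exact exception text varies between attrs versions):

>>> Point3D(1.0, 2.0, 3.0)
Point3D(x=1.0, y=2.0, z=3.0)
>>> Point3D(1.0, 2.0, "3")
Traceback (most recent call last):
  ...
TypeError: 'z' must be <type 'float'> (got '3' that is a <type 'str'>).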

The fact that we were using attrs means we have a place to put this extra validation: we can just add type information to each attribute as we need it. Some of these facilities let us avoid other common mistakes. For example, this is a popular “spot the bug” Python interview question:

class Bag:
    def __init__(self, contents=[]):
        self._contents = contents
    def add(self, something):
        self._contents.append(something)
    def get(self):
        return self._contents[:]

contents inadvertently becomes a global variable here, making all Bag objects not provided with a different list share the same list. Fixing it, of course, becomes this:

class Bag:
    def __init__(self, contents=None):
        if contents is None:
            contents = []
        self._contents = contents

adding two extra lines of code.

With attrs this instead becomes:

@attr.s
class Bag:
    _contents = attr.ib(default=attr.Factory(list))
    def add(self, something):
        self._contents.append(something)
    def get(self):
        return self._contents[:]
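
Each Bag now gets its own list, as a quick check shows (my sketch, not the post’s):

>>> one, another = Bag(), Bag()
>>> one.add("apple")
>>> one.get()
['apple']
>>> another.get()
[]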

attrs provides several other features that give you opportunities to make your classes both more convenient and more correct. Another great example? If you want to be strict about extraneous attributes on your objects (or more memory-efficient on CPython), you can just pass slots=True at the class level - e.g. @attr.s(slots=True) - to automatically turn your existing attrs declarations into a matching __slots__ attribute. All of these handy features allow you to make better and more powerful use of your attr.ib() declarations.
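
For instance, reusing the Point3D from earlier (a sketch):

>>> @attr.s(slots=True)
... class Point3D(object):
...     x = attr.ib()
...     y = attr.ib()
...     z = attr.ib()
...
>>> p = Point3D(1, 2, 3)
>>> p.w = 4
Traceback (most recent call last):
  ...
AttributeError: 'Point3D' object has no attribute 'w'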


The Python Of The Future

Some people are excited about eventually being able to program in Python 3 everywhere. What I’m looking forward to is being able to program in Python-with-attrs everywhere. It exerts a subtle, but positive, design influence in all the codebases I’ve seen it used in.

Give it a try: you may find yourself surprised at places where you’ll now use a tidily explained class, where previously you might have used a sparsely-documented tuple, list, or a dict, and endured the occasional confusion from co-maintainers. Now that it’s so easy to have structured types that clearly point in the direction of their purpose (in their __repr__, in their __doc__, or even just in the names of their attributes), you might find you’ll use a lot more of them. Your code will be better for it; I know mine has been.


  1. Scare quotes here because the attributes aren’t meaningfully exposed to the caller, they’re just named publicly. This pattern, getting rid of private methods entirely and having only private attributes, probably deserves its own post... 

  2. And we hadn’t even gotten to the really exciting stuff yet: type validation on construction, default mutable values... 

by Glyph at August 12, 2016 09:47 AM

Remember that thing I said in my PyCon talk about native packaging being the main thing to worry about, and single-file binaries being at best a stepping stone to that and at worst a bit of a red herring? You don’t have to take it from me. From the authors of a widely-distributed command-line application that was rewritten from Python into Go specifically for easier distribution, and then rewritten in Python:

... [the] majority of people prefer native packages so distributing precompiled binaries wasn’t a big win for this type of project1 ...

I don’t want to say “I told you so”, but... no. Wait a second. That is exactly what I want to do. That is what I am doing.

I told you so.

by Glyph at August 12, 2016 03:37 AM

August 11, 2016

Glyph Lefkowitz

Hello lazyweb,

I want to run some “legacy” software (Trac, specifically) on a Swarm cluster. The files that it needs to store are mostly effectively write-once (it’s the attachments database) but may need to be deleted (spammers and purveyors of malware occasionally try to upload things for spamming or C&C) so while mutability is necessary, there’s a very low risk of any write contention.

I can’t use a networked filesystem, or any volume drivers, so no easy-mode solutions. Basically I want to be able to deploy this on any swarm cluster, so no cheating and fiddling with the host.

Is there any software that I can easily run as a daemon that runs in the background, synchronizing the directory of a data volume between N containers where N is the number of hosts in my cluster?

I found this but it strikes me as ... somehow wrong ... to use that as a critical part of my data-center infrastructure. Maybe it would actually be a good start? But in addition to not being really designed for this task, it’s also not open source, which makes me a little nervous. This list, or even this smaller one, is huge and bewildering. So I was hoping you could drop me a line if you’ve got an idea of what I could use for this.

by Glyph at August 11, 2016 05:28 AM

August 08, 2016

Glyph Lefkowitz

I like keeping a comprehensive and accurate address book that includes all past email addresses for my contacts, including those which are no longer valid. I do this because I want to be able to see conversations stretching back over the years as originating from that person.

Unfortunately this sometimes causes problems when sending mail. On macOS, at least as of El Capitan, neither the Mail application nor the Contacts application has any mechanism for indicating preference-order of email addresses that I’ve been able to find. Compounding this annoyance, when completing a recipient’s address based on their name, it displays all email addresses for a contact without showing their label, which means even if I label one “preferred” or “USE THIS ONE NOW”, or “zzz don’t use this hasn’t worked since 2005”, I can’t tell when I’m sending a message.

But it seems as though it defaults to sending messages to the most recent outgoing address for that contact that it can see in an email. For people I send email to regularly this is easy enough. For people who I’m aware have changed their email address, but to whom I don’t actually want to send a message right now, I think I figured out a little hack that makes it work: make a new folder called “Preferred Addresses Hack” (or something suitable), compose a new message addressed to the correct address, then drag the message out of drafts into the folder; since it has a recent date and is addressed to the right person, Mail.app will index it and auto-complete the correct address in the future.

However, since the previous behavior appeared somewhat non-deterministic, I might be tricking myself into believing that this hack worked. If you can confirm it, I’d appreciate it if you would let me know.

by Glyph at August 08, 2016 11:24 PM

Itamar Turner-Trauring

Why living below your means can help you find a better job

Sooner or later you're going to have to find a new job. Maybe you'll decide you can't take another day of the crap your boss is putting you through. Maybe you'll get laid off, or maybe you'll just decide to move to a new city.

Whatever the reason, when you're looking for a new job one of the keys to success is the ability to be choosy. If you need a job right now to pay your bills that means you've got no negotiating leverage. And there's no reason to think the first job offer you get will be the best job for you; maybe it'll be the second offer, or the tenth.

It's true that having in-demand skills or being great at interviews will help you get more and better offers. But interviewing for jobs always takes time... and if you can't afford to go without income for long then you won't be able to pick and choose between offers.

How do you make sure you don't need to find a new job immediately and that you can be choosy about which job you take? By living below your means.

Saving for a rainy day

You're going to have to find a new job someday but you should start preparing right now. By the time you realize you need a new job it may be too late. How do you prepare? By saving money in an easy to access way, an emergency stash that will pay for your expenses while you have no income. Cash in the bank is a fine way to do this since the goal is not to maximize returns, the goal is to have money available when you need it.

Let's say you need to spend 100 Gold Pieces a month to sustain your current lifestyle. And you decide you want 4 months of expenses saved just in case you lose your job during a recession, when jobs will take longer to find. That means you need 400GP savings in the bank.

If your expenses are 100GP and you have no savings that suggests your take-home pay is also 100GP (if you're spending more than you make better fix that first!). Increasing pay is hard to do quickly, so you need to reduce your spending temporarily until you have those 400GP saved. At that point you can go back to spending your full paycheck with the knowledge that you have money saved for a rainy day.

But you can do better.

Living below your means

If you're always spending less than your income you get a double benefit: you're saving more and it takes you longer to exhaust your savings. To see that we can look at two scenarios, one where you permanently reduce your spending to 80GP a month and another where you permanently reduce it to 50GP a month.

Scenario 1: You have 100GP take-home pay, 80GP expenses. You're saving 20GP a month, so it will take you 20 months to save 400GP. Since your monthly expenses are 80GP this means you can now go 400/80 = 5 months without any pay.

Scenario 2: You have 100GP take-home pay, 50GP expenses. You're saving 50GP a month, so it will take you 8 months to save 400GP. Since your monthly expenses are 50GP this means you can now go 400/50 = 8 months without any pay.

As you can see, the lower your expenses the better off you are. At 80GP/month expenses it takes you 20 months to save 5 months' worth of expenses. At 50GP/month expenses it only takes you 8 months to save 8 months' worth of expenses. Reducing your expenses allows you to save faster and makes your money last longer!

The longer your money lasts the more leverage you have during a job search: you can walk away from a bad job offer and have an easier time negotiating a better offer. And you also have the option of taking a lower-paid job if that job is attractive enough in other ways. Finally, a permanently reduced cost of living also means that over time you are less and less reliant on your job as a source of income.

Reduce your expenses today!

To prepare for the day when you need to look for a job you should reduce expenses temporarily until you've saved enough to pay for a few months' living expenses. Once you've done that you'll have a sense of whether that lower level of expenses works for you; there's a pretty good chance you'll be just as happy at 90GP or 80GP as at 100GP.

In that case you should permanently reduce your expenses rather than going back to 100GP a month. You'll have more money in the bank, you'll have money you can invest for the long term, and the money you have saved will last you longer when you eventually have to look for a new job.

August 08, 2016 04:00 AM

August 03, 2016

Itamar Turner-Trauring

Why lack of confidence can make you a better programmer

What if you're not a superstar programmer? What if other people work longer than you, or harder than you, or have built better projects than you? Can you succeed without self-confidence? I believe you can, and moreover I feel that lack of confidence can actually make you a better programmer.

The problem with self-confidence

It's easy to think that self-confidence is far more useful than lack of confidence. Self-confidence will get you to try new things and can convince others of your worth. Self-confidence seems self-evidently worthwhile: if it isn't worthwhile, why is that self-confident person so confident?

But in fact unrealistic self-confidence can be quite harmful. Software is usually far more complex than we believe it to be, far harder to get right than we think. And if we're self-confident we may think our software works even when it doesn't.

When I was younger I suffered from the problem of too much self-confidence. I wrote software that didn't work, or was built badly... and I only realized its flaws after the fact, when it was too late. Software that crashed at 4AM, patches to open source software that never worked in the first place, failed projects I learned nothing from (at the time)... it's a long list.

I finally became a better programmer when I learned to doubt myself, to doubt the software that I wrote.

Why good programmers lack confidence in themselves

Why does doubting yourself, lacking confidence in yourself, make you a better programmer?

When we write software we're pretty much always operating beyond the point of complexity where we can fit the whole program in our mind. That means you always have to doubt your ability to catch all the problems, to come up with the best solutions.

And so we get peer feedback, write tests, and get code reviews, all in the hopes of overcoming our inevitable failures:

  1. "My design isn't great," you think. So you talk it over with a colleague and together you come up with an even better idea.
  2. "My code might have bugs," you say. So you write tests, and catch any bugs and prevent future bugs.
  3. "My code might still have UI problems," you imagine. So you manually test your web application and check all the edge cases.
  4. "I might have forgotten something," you consider. So you get a code review, and get suggestions for improving your code.

These techniques also have the great benefit of teaching you to be a better programmer, increasing the complexity you can work with. You'll still need tests and code reviews and all the rest; our capacity for understanding is always finite and always stretched.

You can suffer from too much lack of confidence: you should not judge yourself harshly. But we're all human, and therefore all fallible. If you want your creations to succeed you should embrace lack of confidence: test your code, look for things you've missed, get help from others. And if you head over to Software Clown you can discover the many mistakes caused by my younger self's over-confidence, and the lessons you can learn from those mistakes.

August 03, 2016 04:00 AM

August 01, 2016

Hynek Schlawack

Please Fix Your Decorators

If your Python decorator unintentionally changes the signatures of my callables or doesn’t work with class methods, it’s broken and should be fixed. Sadly most decorators are broken because the web is full of bad advice.
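
The canonical symptom looks something like this (my illustration, not from the post; the post itself goes beyond the usual functools.wraps first aid):

def log_calls(f):
    def wrapper(*args, **kwargs):
        print("calling %s" % f.__name__)
        return f(*args, **kwargs)
    return wrapper

@log_calls
def add(x, y):
    """Add two numbers."""
    return x + y

print(add.__name__)  # prints 'wrapper': the decorator clobbered the name,
                     # docstring, and signature; functools.wraps(f) restores
                     # the first two, but signatures and methods need more care.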

by Hynek Schlawack (hs@ox.cx) at August 01, 2016 12:00 PM

July 31, 2016

Hynek Schlawack

Testing & Packaging

How to ensure that your tests run code that you think they are running, and how to measure your coverage over multiple tox runs (in parallel!).

by Hynek Schlawack (hs@ox.cx) at July 31, 2016 07:00 PM

Itamar Turner-Trauring

Write test doubles you can trust using verified fakes

When you're writing tests for your code you often encounter some complex object that impedes your testing. Let's say some of your code uses a client library to talk to Twitter's API. You don't want your tests to have to talk to Twitter's servers: your tests would be slower, flakier, harder to setup, and require a working network connection.

The usual solution is to write a test double of some sort, an object that pretends to be the Twitter client but is easier to use in a test. Terminology varies slightly across programming communities, but you're probably going to make a fake, mock or stub Twitter client for use in your tests.

As you can tell from names like "fake" and "mock", there's a problem here: if your tests are running against a fake object, how do you know they will actually work against the real thing? You will often find the fake is insufficiently realistic, or just plain wrong, which means your tests using it were a waste of time. I've personally had Python mock objects cause completely broken code to pass its tests, because the mock was a bit too much of a mockup.

There's a better kind of test double, though, known as a "verified fake". The key point about a verified fake is that, unlike regular fakes, you've actually proven it has the same behavior as the real thing. That means that when you use the verified fake for a Twitter client in a test you can trust that the fake will behave the same as real Twitter client. And that means you can trust your tests are not being misled by a broken fake.
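
As a rough sketch of the pattern (all names here are hypothetical, not from the post): write one suite of behavioral tests against the interface, then run it against both the fake and the real implementation.

import unittest
from collections import namedtuple

Tweet = namedtuple("Tweet", ["text"])

class FakeTwitterClient(object):
    """In-memory stand-in for a real Twitter client."""
    def __init__(self):
        self._tweets = []
    def post_tweet(self, text):
        self._tweets.append(Tweet(text))
    def recent_tweets(self):
        return list(self._tweets)

class TwitterClientContractTests(object):
    """The behavioral contract; subclasses provide make_client()."""
    def test_post_then_read_back(self):
        client = self.make_client()
        client.post_tweet("hello")
        self.assertIn("hello", [t.text for t in client.recent_tweets()])

class FakeTwitterClientTests(TwitterClientContractTests, unittest.TestCase):
    def make_client(self):
        return FakeTwitterClient()

# A RealTwitterClientTests subclass reusing the same contract tests (say,
# against a sandbox account) is what makes the fake "verified".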

Read more...

July 31, 2016 04:00 AM

July 29, 2016

Glyph Lefkowitz

Don’t Trust Sourceforge, Ever

If you use a computer and you use the Internet, chances are you’ll eventually find some software that, for whatever reason, is still hosted on Sourceforge. In case you’re not familiar with it, Sourceforge is a publicly-available malware vector that also sometimes contains useful open source binary downloads, especially for Windows.


In addition to injecting malware into their downloads (a practice they claim, hopefully truthfully, to have stopped), Sourceforge also presents an initial download page over HTTPS, then redirects the user to HTTP for the download itself, snatching defeat from the jaws of victory. This is fantastically irresponsible, especially for a site offering un-sandboxed binaries for download, especially in the era of Let’s Encrypt where getting a TLS certificate takes approximately thirty seconds and exactly zero dollars.

So: if you can possibly find your downloads anywhere else, go there.


But, rarely, you will find yourself at the mercy of whatever responsible stewards1 are still operating Sourceforge if you want to get access to some useful software. As it happens, there is a loophole that will let you authenticate the binaries that you download from them so you won’t be left vulnerable to an evil barista: their “file release system”, the thing you use to upload your projects, will allow you to download other projects as well.

To use it, first, make yourself a sourceforge account. You may need to create a dummy project as well. Sourceforge maintains an HTTPS-accessible list of key fingerprints for all the SSH servers that they operate, so you can verify the public key below.

Then you’ll need to connect to their upload server over SFTP, and go to the path /home/frs/project/<the project’s name>/.../ to get the file.

I have written a little Python script2 that automates the translation of a Sourceforge file-browser download URL, one that you can get if you right-click on a download in the “files” section of a project’s website, and runs the relevant scp command to retrieve the file for you. This isn’t on PyPI or anything, and I’m not putting any effort into polishing it further; the best possible outcome of this blog post is that it immediately stops being necessary.


  1. Are you one of those people? I would prefer to be lauding your legacy of decades of valuable contributions to the open source community instead of ridiculing your dangerous incompetence, but repeated bug reports and support emails have gone unanswered. Please get in touch so we can discuss this. 

  2. Code:

    #!/usr/bin/env python2
    
    import sys
    import os
    
    sfuri = sys.argv[1]
    
    # for example,
    # https://sourceforge.net/projects/refind/files/0.9.2/refind-bin-0.9.2.zip/download
    
    import re
    matched = re.match(
        r"https://sourceforge.net/projects/(.*)/files/(.*)/download",
        sfuri
    )
    
    if not matched:
        sys.stderr.write("Not a SourceForge download link.\n")
        sys.exit(1)
    
    project, path = matched.groups()
    
    sftppath = "/home/frs/project/{project}/{path}".format(project=project, path=path)
    
    def knows_about_web_sf_net():
        with open(
                os.path.expanduser("~/.ssh/known_hosts"), "rb"
        ) as read_known_hosts:
            data = read_known_hosts.read().split("\n")
            for line in data:
                if not line:
                    # skip blank lines: a trailing newline would otherwise
                    # make line.split()[0] raise IndexError
                    continue
                if 'web.sourceforge.net' in line.split()[0]:
                    return True
        return False
    
    sfkey = """
    web.sourceforge.net ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA2uifHZbNexw6cXbyg1JnzDitL5VhYs0E65Hk/tLAPmcmm5GuiGeUoI/B0eUSNFsbqzwgwrttjnzKMKiGLN5CWVmlN1IXGGAfLYsQwK6wAu7kYFzkqP4jcwc5Jr9UPRpJdYIK733tSEmzab4qc5Oq8izKQKIaxXNe7FgmL15HjSpatFt9w/ot/CHS78FUAr3j3RwekHCm/jhPeqhlMAgC+jUgNJbFt3DlhDaRMa0NYamVzmX8D47rtmBbEDU3ld6AezWBPUR5Lh7ODOwlfVI58NAf/aYNlmvl2TZiauBCTa7OPYSyXJnIPbQXg6YQlDknNCr0K769EjeIlAfY87Z4tw==
    """
    
    if not knows_about_web_sf_net():
        with open(
                os.path.expanduser("~/.ssh/known_hosts"), "ab"
        ) as append_known_hosts:
            append_known_hosts.write(sfkey)
    cmd = "scp web.sourceforge.net:{sftppath} .".format(sftppath=sftppath)
    print(cmd)
    os.system(cmd)
    

by Glyph at July 29, 2016 06:06 AM

July 06, 2016

Twisted Matrix Laboratories

Twisted 16.3 Released

On behalf of Twisted Matrix Laboratories, I am honoured to announce the release of Twisted 16.3.0.

The highlights of this release are:
  • The Git migration has happened, so we've updated our development documentation to match. We're now trialling accepting pull requests at github.com/twisted/twisted, so if you've ever wanted an excuse to contribute, now's the chance!
  • In our steady shedding of baggage, twisted.spread.ui, twisted.manhole (not to be confused with twisted.conch.manhole!), and a bunch of old and deprecated stuff from twisted.python.reflect and twisted.protocols.sip have been removed.
  • twisted.web's HTTP server now handles pipelined requests better -- it used to try and process them in parallel, but this was fraught with problems and now it processes them in series, which is less surprising to code that expects the Request's transport to not be buffered (e.g. WebSockets). There is also a bugfix for HTTP timeouts not working in 16.2.
  • Twisted now has HTTP/2 support in its web server! This is currently not available by default -- you will need to install hyper-h2, which is available in the [h2] setuptools extras. If you want to play around with it, "pip install twisted[h2]" (on Python 2; a bugfix release will make it available on Python 3).
  • 53 tickets closed overall, including cleanups that move us closer to a total Python 3 port.
For more information, check the NEWS file (link provided below).

You can find the downloads on PyPI (or alternatively our website). The NEWS file is also available on GitHub.

Many thanks to everyone who had a part in this release - the supporters of the Twisted Software Foundation, the developers who contributed code as well as documentation, and all the people building great things with Twisted!

Twisted Regards,
Amber Brown (HawkOwl)

by HawkOwl (noreply@blogger.com) at July 06, 2016 12:50 PM

June 26, 2016

Itamar Turner-Trauring

Code faster by typing less

Sometimes it's disappointing to see how much code you've written in a given day, so few lines for so much effort. There are big picture techniques you can use to do better, like planning ahead so you write the most useful lines of code. But in the end you still want to be producing as much code as possible given the situation. One great way to do that: type less.

If you're only producing a few lines of code in a day, where did all the rest of your time at the keyboard go? As a programmer you spend a lot of time typing, after all. Here's some of what I spend time on:

  • Opening and closing files.
  • Checking in code, merging code, reverting code.
  • Searching and browsing code to figure out how things work and where a bug might be coming from.
  • Stepping through code in a debugger.

If you pay attention you will find you spend a significant amount of time on these sorts of activities. And there's a pretty good chance you're also wasting time doing them. Consider the following transcript:

$ git add myfile.py
$ git add anotherfile.py
$ git add --interactive justwantsomeofthisone.py
$ git diff | less
$ git commit -m "I wrote some code, hurrah."
$ git push

You're typing the same thing over and over again! Every time you open a file: lots of typing. Every time you find a class definition manually: more typing.

What do programmers do when they encounter manual, repetitive work? Automate! Pretty much every IDE or text editor for programmers has built-in features or 3rd party packages to automate and reduce the amount of time you waste on these manual tasks. That means Emacs, Vim, Sublime, Eclipse or whatever... as long as it's aimed at programmers, is extendable and has a large user base you're probably going to find everything you need.
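
Even before reaching for editor packages, the repetitive git transcript above can be trimmed a little with plain git aliases (a sketch, not from the post):

# ~/.gitconfig (excerpt)
[alias]
    a = add
    d = diff
    cm = commit -m

$ git a myfile.py
$ git d
$ git cm "I wrote some code, hurrah."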

As an example, on a day to day basis I use the following Emacs add-ons:

  • Elpy for Python editing, which lets me jump to the definition of method or class with a single keystroke and then jump right back with another single keystroke. It also highlights (some) errors in the code.
  • Magit, which changes using git from painful and repetitive to pleasant and fast. The example above would involve typing a a TAB <arrow key down> Ctrl-space <arrow key down> a c I wrote some code, hurrah. Ctrl-C Ctrl-C. That's a little obscure if you've never used it, but trust me: it's a really good UI.
  • Projectile, which lets me jump to any file in a git or other VCS repository in three keystrokes plus a handful of characters from the filename.
  • undo-tree-mode, which makes my undo history easily, quickly and visually accessible.

Each of these tools saves just a little bit of time... but I use them over and over again every single day. Small savings add up to big ones.

Here's what you should do: every month or two allocate a few hours to investigating and learning new features or tools for your text editor. Whatever your editor there are tools available, and sometimes even just keyboard shortcuts, that will speed up your development. You want to limit how much time you spend on this because too much time spent will counteract any savings. And you want to do this on an ongoing basis because there are always new tools and shortcuts to discover. Over time you will find yourself spending less time typing useless, repetitive text... and with more time available to write code.

June 26, 2016 04:00 AM

June 15, 2016

Itamar Turner-Trauring

Writing for software engineers: here's the best book I've found

Where some software engineers can only see the immediate path ahead, more experienced software engineers keep the destination in mind and navigate accordingly. Sometimes the narrow road is the path to righteousness, and sometimes the broad road is the path to wickedness, or at least to broken deadlines and unimplemented requirements. How do you learn to see further, beyond the immediate next steps, beyond the stated requirements? One method you can use is writing.

Writing as thinking can help you find better solutions for easy problems, and solve problems that seem impossible. You write a "design document", where "design" is a verb not a noun, an action not a summary. You write down your assumptions, your requirements, your ideas, the presumed tradeoffs and try to nail down the best solution. Unlike vague ideas in your head, written text can be re-read and sharpened. By putting words to paper you force yourself to clarify your ideas, focus your definitions, make your assumptions more explicit. You can also share written documents with others for feedback and evaluation; since software development is a group effort that often means a group of people will be doing the thinking.

Once you've found a better way forward you need to convince others that your chosen path makes sense. That means an additional kind of writing, writing for persuasion and action: explaining your design, convincing the stakeholders.

How do you learn how to write? If you're just starting your journey as a writer you need to find the right guide; there are many kinds of writing with different goals and different styles. Some books obsess over grammar, others focus on academic writing or popular non-fiction. Writing as a programmer is writing in the context of an organization, writing that needs to propel you forward, not merely educate or entertain. This is where Flower and Ackerman's "Writers at Work" will provide a wise and knowledgeable guide. (There are a dozen other books with the same name; make sure you get the right one!)

Linda Flower is one of the originators of the cognitive process theory of writing, and so is well suited to writing a book that covers not just surface style, but the process of adapting writing to one's situation. This book won't teach you how to write luminous prose or craft a brilliant academic argument. The first example scenario the book covers is of someone who has joined the Request For Proposal (RFP) group at a software company. The ten-page scenario talks about the RFP process in general, how RFPs are usually constructed within the organization, the team in charge of creating RFPs and their various skills and motivations, the fact that RFPs may end up in competitors' hands... What this book will teach you is how to do the writing you do in your job: complex, action oriented, utilitarian.

"Writers at Work" focuses on process, context, and writing as a form of organizational action, and helps you understand how to approach your task. It will help you answer these questions and more:

  • What context are you writing in? The book guides you through the process of discovering the audience, rhetorical situation, discourse community, goals, problems and agendas. Software development always has a stated reason, but often there are deeper reasons why you're working on a particular project, specific ways to communicate with different people (engineers, management, customers), differing agendas and multiple audiences. The book will help you define the situation you are writing in, which will help you figure out what you're trying to build and then convince others when you've found the solution.
  • How do you define the problem you are trying to solve? The book points out the helpful technique of operationalization, defining the problem in a way that implies an action to solve it.
  • Structuring your writing: if you have a design, how do you communicate it in a convincing way? How do you explain it to different audiences with different levels of knowledge?
  • How do you test your writing? Writing can require testing just as much as software does.

For reasons I don't really understand this practical, immensely useful textbook is out of print, and (for now) you can get copies for cheap. Buy a copy, read it, and start writing!

June 15, 2016 04:00 AM

June 07, 2016

Moshe Zadka

__name__ == __main__ considered harmful

Every single Python tutorial shows the pattern of

# define functions, classes,
# etc.

if __name__ == '__main__':
    main()

This is not a good pattern. If your code is not going to be in a Python module, there is no reason not to unconditionally call ‘main()’ at the bottom. So this code will only be used in modules — where it leads to unpredictable effects. If this module is imported as ‘foo’, then the identity of ‘foo.something’ and ‘__main__.something’ will be different, even though they share code.

This leads to hilarious effects like @cache decorators not doing what they are supposed to, parallel registry lists and all kinds of other issues. Hilarious unless you spend a couple of hours debugging why ‘isinstance()’ is giving incorrect results.
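
A minimal reproduction of the problem (my sketch):

# widget.py
class Widget(object):
    pass

if __name__ == '__main__':
    import widget
    print(widget.Widget is Widget)              # False!
    print(isinstance(widget.Widget(), Widget))  # also False

Running ‘python widget.py’ executes the file once as ‘__main__’ and then, on import, a second time as ‘widget’, so the “same” class exists twice.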

If you want to write a main module, make sure it cannot be imported. In this case, reversed stupidity is intelligence — just reverse the idiom:

# at the top
if __name__ != '__main__':
    raise ImportError("this module cannot be imported")

This, of course, will mean that this module cannot be unit tested: therefore, any non-trivial code should go in a different module that this one imports. Because of this, it is easy to gravitate towards a package. In that case, put the code above in a module called ‘__main__.py‘. This will lead to the following layout for a simple package:

PACKAGE_NAME/
             __init__.py
                 # Empty
             __main__.py
                 if __name__ != '__main__':
                     raise ImportError("this module cannot be imported")
                 from PACKAGE_NAME import api
                 api.main()
             api.py
                 # Actual code
             test_api.py
                 import unittest
                 # Testing code

And then, when executing:

$ python -m PACKAGE_NAME arg1 arg2 arg3

This will work in any environment where the package is on the sys.path: in particular, in any virtualenv where it was pip-installed. Unless a short command-line is important, it allows skipping over creating a console script in setup.py completely, and letting “python -m” be the official CLI. Since pex supports setting a module as an entry point, if this tool needs to be deployed in other environments, it is easy to package it into a tool that will execute the script:

$ pex . --entry-point SOME_PACKAGE --output-file toolname

by moshez at June 07, 2016 05:40 AM

June 05, 2016

Jonathan Lange

Evolving toward property-based testing with Hypothesis

Many people are quite comfortable writing ordinary unit tests, but feel a bit confused when they start with property-based testing. This post shows how two ordinary programmers started with normal Python unit tests and nudged them incrementally toward property-based tests, gaining many advantages on the way.

Background

I used to work on a command-line tool with an interface much like git’s. It had a repository, and within that repository you could create branches and switch between them. Let’s call the tool tlr.

It was supposed to behave something like this:

List branches:

$ tlr branch
foo
* master

Switch to an existing branch:

$ tlr checkout foo
* foo
master

Create a branch and switch to it:

$ tlr checkout -b new-branch
$ tlr branch
foo
master
* new-branch

Early on, my colleague and I found a bug: when you created a new branch with checkout -b it wouldn’t switch to it. The behavior looked something like this:

$ tlr checkout -b new-branch
$ tlr branch
foo
* master
new-branch

The previously active branch (in this case, master) stayed active, rather than switching to the newly-created branch (new-branch).

Before we fixed the bug, we decided to write a test. I thought this would be a good chance to start using Hypothesis.

Writing a simple test

My colleague was less familiar with Hypothesis than I was, so we started with a plain old Python unit test:

    def test_checkout_new_branch(self):
        """
        Checking out a new branch results in it being the current active
        branch.
        """
        tmpdir = FilePath(self.mktemp())
        tmpdir.makedirs()
        repo = Repository.initialize(tmpdir.path)
        repo.checkout("new-branch", create=True)
        self.assertEqual("new-branch", repo.get_active_branch())

The first thing to notice here is that the string "new-branch" is not actually relevant to the test. It’s just a value we picked to exercise the buggy code. The test should be able to pass with any valid branch name.

Even before we started to use Hypothesis, we made this more explicit by making the branch name a parameter to the test:

    def test_checkout_new_branch(self, branch_name="new-branch"):
        tmpdir = FilePath(self.mktemp())
        tmpdir.makedirs()
        repo = Repository.initialize(tmpdir.path)
        repo.checkout(branch_name, create=True)
        self.assertEqual(branch_name, repo.get_active_branch())

(For brevity, I’ll elide the docstring from the rest of the code examples)

We never manually provided the branch_name parameter, but this change made it more clear that the test ought to pass regardless of the branch name.

Introducing Hypothesis

Once we had a parameter, the next thing was to use Hypothesis to provide the parameter for us. First, we imported Hypothesis:

    from hypothesis import given
    from hypothesis import strategies as st

And then made the simplest change to our test to actually use it:

    @given(branch_name=st.just("new-branch"))
    def test_checkout_new_branch(self, branch_name):
        tmpdir = FilePath(self.mktemp())
        tmpdir.makedirs()
        repo = Repository.initialize(tmpdir.path)
        repo.checkout(branch_name, create=True)
        self.assertEqual(branch_name, repo.get_active_branch())

Here, rather than providing the branch name as a default argument value, we are telling Hypothesis to come up with a branch name for us using the just("new-branch") strategy. This strategy will always come up with "new-branch", so it’s actually no different from what we had before.

What we actually wanted to test is that any valid branch name worked. We didn’t yet know how to generate any valid branch name, but using a time-honored tradition we pretended that we did:

    def valid_branch_names():
        """Hypothesis strategy to generate arbitrary valid branch names."""
        # TODO: Improve this strategy.
        return st.just("new-branch")

    @given(branch_name=valid_branch_names())
    def test_checkout_new_branch(self, branch_name):
        tmpdir = FilePath(self.mktemp())
        tmpdir.makedirs()
        repo = Repository.initialize(tmpdir.path)
        repo.checkout(branch_name, create=True)
        self.assertEqual(branch_name, repo.get_active_branch())

Even if we had stopped here, this would have been an improvement. Although the Hypothesis version of the test doesn’t have any extra power over the vanilla version, it is more explicit about what it’s testing, and the valid_branch_names() strategy can be re-used by future tests, giving us a single point for improving the coverage of many tests at once.

Expanding the strategy

It’s only when we get Hypothesis to start generating our data for us that we really get to take advantage of its bug finding power.

The first thing my colleague and I tried was:

    def valid_branch_names():
        return st.text()

But that failed pretty hard-core.

Turns out branch names were implemented as symlinks on disk, so a valid branch name has to be a valid file name on whatever filesystem the tests are running on. This at least rules out empty names, ".", "..", very long names, names with slashes in them, and probably others (it’s actually really complicated).

Hypothesis had made something very clear to us: neither my colleague nor I actually knew what a valid branch name should be. None of our interfaces documented it, we had no validators, no clear ideas for rendering & display, nothing. We had just been assuming that people would pick good, normal, sensible names.

It was as if we had suddenly gained the benefit of extensive real-world end-user testing, just by calling the right function. This was:

  1. Awesome. We’ve found bugs that our users won’t.
  2. Annoying. We really didn’t want to fix this bug right now.

In the end, we compromised and implemented a relatively conservative strategy to simulate the good, normal, sensible branch names that we expected:

    from string import ascii_lowercase

    VALID_BRANCH_CHARS = ascii_lowercase + '_-.'

    def valid_branch_names():
        # TODO: Handle unicode / weird branch names by rejecting them early, raising nice errors
        # TODO: How do we handle case-insensitive file systems?
        return st.text(alphabet=VALID_BRANCH_CHARS, min_size=1, max_size=112)

Not ideal, but much more extensive than just hard-coding "new-branch", and much clearer communication of intent.

Adding edge cases

There’s one valid branch name that this strategy could generate, but probably won’t: master. If we left the test just as it is, then one time in a hojillion the strategy would generate "master" and the test would fail.

Rather than waiting on chance, we encoded this in the valid_branch_names strategy, to make it more likely:

    def valid_branch_names():
        return st.text(
            alphabet=VALID_BRANCH_CHARS, min_size=1, max_size=112) | st.just("master")

When we ran the tests now, they failed with an exception due to the branch master already existing. To fix this, we used assume:

    from hypothesis import assume

    @given(branch_name=valid_branch_names())
    def test_checkout_new_branch(self, branch_name):
        assume(branch_name != "master")
        tmpdir = FilePath(self.mktemp())
        tmpdir.makedirs()
        repo = Repository.initialize(tmpdir.path)
        repo.checkout(branch_name, create=True)
        self.assertEqual(branch_name, repo.get_active_branch())

Why did we add master to the valid branch names if we were just going to exclude it anyway? Because when other tests say “give me a valid branch name”, we want them to make the decision about whether master is appropriate or not. Any future test author will be compelled to actually think about whether handling master is a thing that they want to do. That’s one of the great benefits of Hypothesis: it’s like having a rigorous design critic in your team.

Going forward

We stopped there, but we need not have. Just as the test should have held for any branch, it should also hold for any repository. We were just creating an empty repository because it was convenient for us.

If we were to continue, the test would have looked something like this:

    @given(repo=repositories(), branch_name=valid_branch_names())
    def test_checkout_new_branch(self, repo, branch_name):
        """
        Checking out a new branch results in it being the current active
        branch.
        """
        assume(branch_name not in repo.get_branches())
        repo.checkout(branch_name, create=True)
        self.assertEqual(branch_name, repo.get_active_branch())

This is about as close to a bona fide “property” as you’re likely to get in code that isn’t a straight-up computer science problem: if you create and switch to a branch that doesn’t already exist, the new active branch is the newly created branch.

We got there not by sitting down and thinking about the properties of our software in the abstract, nor by necessarily knowing much about property-based testing, but rather by incrementally taking advantage of features of Python and Hypothesis. On the way, we discovered and, umm, contained a whole class of bugs, and we made sure that all future tests would be heaps more powerful. Win.

This post was originally published on hypothesis.works. I highly recommend checking out their site if you want to test faster and fix more.

by jml at June 05, 2016 03:00 PM

May 31, 2016

Moshe Zadka

Stop Feeling Guilty about Writing Classes

I’ve seen a few talks about “stop writing classes”. I think they have a point, but it is a little over-stated. All debates are bravery debates, so it is hard to say which problem is harder — but as a recovering class-writing-guiltoholic, let me admit this: I took this too far. I was avoiding classes when I shouldn’t have.

Classes are best kept small

It is true that classes are best kept small. Any “method” which is not really designed to be overridden is often best implemented as a function that accepts a “duck-type” (or a more formal interface).

This, of course, sometimes leads to…

If a class has only one public method, except __init__, it wants to be a function

Especially given functools.partial, there is no need to decide ahead of time which arguments are “static” and which are “dynamic”.

Classes are useful as data packets

This is the usual counter-point to the first two anti-class sentiments: classes which are nothing more than a bunch of attributes (a good example is the TCP envelope: source IP/target IP/source port/target port) are useful. Sure, they could be passed around as dictionaries, but this does not make things better. Just use attrs — and it is often useful to write two more methods:

  • Some variant of “serialize”, an instance method that returns some lower-level format (dictionary, string, etc.)
  • Some variant of “deserialize”, a class method that takes the lower-level format above and returns a corresponding instance.

It is perfectly ok to write this class rather than shipping dictionaries around. If nothing else, error messages will be a lot nicer. Please do not feel guilty.
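
A minimal sketch of such a class, using attrs and the dictionary variant of the two methods above (the class name is just for illustration):

    import attr

    @attr.s
    class TCPEnvelope(object):
        source_ip = attr.ib()
        source_port = attr.ib()
        target_ip = attr.ib()
        target_port = attr.ib()

        def serialize(self):
            """Return the lower-level format: a plain dictionary."""
            return attr.asdict(self)

        @classmethod
        def deserialize(cls, data):
            """Rebuild an instance from the dictionary above."""
            return cls(**data)

An error message mentioning TCPEnvelope beats one mentioning a bare dict, and the serialize/deserialize pair gives you one obvious place to add validation later.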

by moshez at May 31, 2016 02:43 PM

May 19, 2016

Hynek Schlawack

Conditional Python Dependencies

Since the inception of wheels that install Python packages without executing arbitrary code, we need a static way to encode conditional dependencies for our packages. Thanks to PEP 508 we do have a blessed way but sadly the prevalence of old setuptools versions makes it a minefield to use.
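
For a taste of what a PEP 508 conditional dependency looks like (a sketch with hypothetical package names; markers like this are only understood by sufficiently recent setuptools, which is exactly the minefield in question):

    # setup.py excerpt: a PEP 508 environment marker makes this
    # dependency conditional on the installing Python's version.
    from setuptools import setup

    setup(
        name="mypackage",
        version="1.0",
        install_requires=[
            "attrs",
            "enum34; python_version < '3.4'",
        ],
    )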

by Hynek Schlawack (hs@ox.cx) at May 19, 2016 12:00 AM

May 18, 2016

Twisted Matrix Laboratories

Twisted 16.2 Released

On behalf of Twisted Matrix Laboratories, I am honoured to announce the release of Twisted 16.2!

Just in time for PyCon US, this release brings a few headlining features (like the haproxy endpoint) and the continuation of the modernisation of the codebase. More Python 3, less deprecated code, what's not to like?
  • twisted.protocols.haproxy.proxyEndpoint, a wrapper endpoint that passes the extra connection information supplied by haproxy along to the wrapped protocols (a usage sketch follows this list);
  • Migration of twistd and other twisted.application.app users to the new logging system (twisted.logger);
  • Porting of parts of Twisted Names' server to Python 3;
  • The removal of the very old MSN client code and the deprecation of the unmaintained ICQ/OSCAR client code;
  • More cleanups in Conch in preparation for a Python 3 port and cleanups in HTTP code in preparation for HTTP/2 support;
  • Over thirty tickets overall closed since 16.1.
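For example, a rough sketch of wrapping a server endpoint with the new haproxy support (the port and factory are hypothetical):

    from twisted.internet import reactor
    from twisted.internet.endpoints import TCP4ServerEndpoint
    from twisted.protocols.haproxy import proxyEndpoint

    raw = TCP4ServerEndpoint(reactor, 8080)
    wrapped = proxyEndpoint(raw)
    # wrapped.listen(yourFactory) then behaves like a normal endpoint,
    # stripping the PROXY protocol header before your protocol sees data.
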
For more information, check the NEWS file (link provided below).

You can find the downloads on PyPI (or alternatively our website). The NEWS file is also available on GitHub.

Many thanks to everyone who had a part in this release - the supporters of the Twisted Software Foundation, the developers who contributed code as well as documentation, and all the people building great things with Twisted!

Twisted Regards,
Amber Brown (HawkOwl)

by HawkOwl (noreply@blogger.com) at May 18, 2016 05:10 PM

May 06, 2016

Moshe Zadka

Forking Skip-level Dependencies

I have recently found I explain this concept over and over to people, so I want to have a reference.

Most modern languages come with a “dependency manager” of sorts that helps manage the 3rd-party libraries a given project uses. Rust has Cargo, Node.js has npm, Python has pip and so on. All of these do some things well and some things poorly. But one thing that can be done (well or poorly) is “support for forking skip-level dependencies”.

In order to explain what I mean, here is an example: our project is PlanetLocator, a program to tell the user which direction they should face to see a planet. It depends on a library called Astronomy. Astronomy depends on Physics. Physics depends on Math.

  • PlanetLocator
    • Astronomy
      • Physics
        • Math

PlanetLocator is a SaaS, running on our servers. One day, we find Math has a critical bug, leading to a remote execution vulnerability. This is pretty bad, because it can be triggered via our application by simply asking PlanetLocator for the location of Mars at a specific date in the future. Luckily, the bug is simple — in Math’s definition of Pi, we need to add a couple of significant digits.

How easy is it to fix?

Well, assume PlanetLocator is written in Go, and not using any package manager. A typical import statement in PlanetLocator is

import "github.com/astronomy/astronomy"

A typical import statement in Astronomy is

import "github.com/physics/physics"

…and so on.

We fork Math over to “github.com/planetlocator/math” and fix the vulnerability. Now we have to fork Physics to use the forked Math, and Astronomy to use the forked Physics, and finally change all of our imports to import the forked Astronomy — and Physics, Astronomy and PlanetLocator had no bugs!

Now assume, instead, we had used Python. In our requirements.txt file, we could put

git+https://github.com/planetlocator/math#egg=math

and voilà! Even though Physics’ “setup.py” said install_requires=['math'], it will get our forked math.
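
This works because pip resolves dependencies by distribution name rather than by source location, so the fork simply keeps the original name. A sketch of the fork’s setup.py (hypothetical, continuing the example above):

    # setup.py in github.com/planetlocator/math: keeping the original
    # distribution name means it satisfies Physics'
    # install_requires=['math'] without touching Physics at all.
    from setuptools import setup

    setup(
        name="math",
        version="1.0.1",  # the release with the corrected Pi
        py_modules=["math"],
    )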

When starting to use a new language/dependency manager, the first question to ask is: will it support me forking skip-level dependencies? Because every upstream maintainer is, effectively, an absent maintainer when rapid response is at stake (for any reason — I chose security above, but it might be beating the competition to a deadline, or fulfilling contractual obligations).

by moshez at May 06, 2016 04:17 AM

May 03, 2016

Glyph Lefkowitz

Letters To The Editor: Re: Email

Since I removed comments from this blog, I’ve been asking y’all to email me when you have feedback, with the promise that I’d publish the good bits. Today I’m making good on that for the first time, with this lovely missive from Adam Doherty:


I just wanted to say thank you. As someone who is never able to say no, your article on email struck a chord with me. I have had Gmail since the beginning, since the days of hoping for an invitation. And the day I received my invitation was the last day my inbox was ever empty.

Prior to reading your article I had over 40,000 unread messages. It used to be a sort of running joke; I never delete anything. Realistically, though, was I ever going to do anything with them?

With 40,000 unread messages in your inbox, you start to miss messages that are actually important. Messages that must become tasks, tasks that must be completed.

Last night I took your advice; and that is saying something - most of the things I read via HN are just noise. This however spoke to me directly.

I archived everything older than two weeks, was down to 477 messages and kept pruning. So much of the email we get on a daily basis is also noise. Those messages took me half a second to hit archive and move on.

I went to bed with zero messages in my inbox, woke up with 21, archived 19, actioned 2 and then archived those.

Seriously, thank you so very much. I am unburdened.


First, I’d like to thank Adam for writing in. I really do appreciate the feedback.

Second, I wanted to post this here not in service of showcasing my awesomeness1, but rather to demonstrate that getting to the bottom of your email can have a profound effect on your state of mind. Even if it’s a running joke, even if you don’t think it’s stressing you out, there’s a good chance that, somewhere in the back of your mind, it is. After all, if you really don’t care, what’s stopping you from hitting select all / archive right now?

At the very least, if you did that, your mail app would load faster.


  1. although, let there be no doubt, I am awesome 

by Glyph at May 03, 2016 06:06 AM

April 27, 2016

Itamar Turner-Trauring

How you should choose which technology to learn next

Keeping up with the growing software ecosystem — new databases, new programming languages, new web frameworks — becomes harder and harder every year as more and more software is written. It is impossible to learn all existing technologies, let alone the new ones being released every day. If you want to learn another programming language you can choose from Dart, Swift, Go, Idris, Futhark, Ceylon, Zimbu, Elm, Elixir, Vala, OCaml, LiveScript, Oz, R, TypeScript, PureScript, Haskell, F#, Scala, Dylan, Squeak, Julia, CoffeeScript... and about a thousand more, if you're still awake. This stream of new technologies can be overwhelming, a constant worry that your skills are getting rusty and out of date.

Luckily you don't need to learn all technologies, and you are likely to use only a small subset during your tenure as a programmer. Instead your goal should be to maximize your return on investment: learn the most useful tools, with the least amount of effort. How then should you choose which technologies to learn?

Don't spend too much time on technologies which are either too close or too far from your current set of knowledge. If you are an expert on PostgreSQL then learning another relational database like MySQL won't teach you much. Your existing knowledge is transferable for the most part, and you'd have no trouble applying for a job requiring MySQL knowledge. On the other hand a technology that is too far from your current tools will be much more difficult to learn, e.g. switching from web development to real-time embedded devices.

Focus on technologies that can build on your existing knowledge while still being different enough to teach you something new. Learning these technologies provides multiple benefits:

  • Since you have some pre-existing knowledge you can learn them faster.
  • They can help you with your current job by giving you a broader but still relevant set of tools.
  • They can make it easier to expand the scope of a job search because they relate to your existing experience.

There are three ways you can build on your existing knowledge of tools and technologies:

  1. Branch out to nearby technologies: If you're a backend web developer you are interacting with a database, with networking via the HTTP protocol, with a browser running Javascript. You will end up knowing at least a little bit about these technologies, and you have some sense of how they interact with the technology you already know well. These are all great candidates for a new technology to learn next.
  2. Alternative solutions for a problem you understand: If you are an expert on the PostgreSQL database you might want to learn MongoDB. It's still a database, solving a problem whose parameters you already understand: how to store and search structured data. But the way MongoDB solves this problem is fundamentally different than PostgreSQL, which means you will learn a lot.
  3. Enhance your usage of existing tools: Tools for testing your existing technology stack can make you a better programmer by providing faster feedback and a broader view of software quality and defects. Learning how to better use a sophisticated text editor like Emacs/Vim or an IDE like Eclipse with your programming language of choice can make you a more productive programmer.

Neither you nor any other programmer will ever be able to learn all the technologies in use today: there are just too many. What you can and should do is learn those that will help with your current projects, and those that you can learn more easily. The more technologies you know, the broader the range of technologies you have at least partial access to, and the easier it will be to learn new ones.

April 27, 2016 04:00 AM

April 24, 2016

Glyph Lefkowitz

Email Isn’t The Thing You’re Bad At

I’ve been using the Internet for a good 25 years now, and I’ve been lucky enough to have some perspective dating back farther than that. The common refrain for my entire tenure here:

We all get too much email.

A New, New, New, New Hope

Luckily, something is always on the cusp of replacing email. First AOL Instant Messenger was going to totally replace it. Then it was blogging. RSS. MySpace. Then it was FriendFeed. Then Twitter. Then Facebook.

Today, it’s in vogue to talk about how Slack is going to replace email. As someone who has seen this play out a dozen times now, let me give you a little spoiler:

Slack is not going to replace email.

But Slack isn’t the problem here, either. It’s just another communication tool.

The problem of email overload is both ancient and persistent. If the problem were really with “email”, then, presumably, one of the nine million email apps that dot the app-stores like mushrooms sprouting from a globe-spanning mycelium would have just solved it by now, and we could all move on with our lives. Instead, it is permanently in vogue1 to talk about how overloaded we all are.

If not email, then what?

If you have twenty-four thousand unread emails in your Inbox, like some kind of goddamn animal, what you’re bad at is not email, it’s transactional interactions.

Different communication media have different characteristics, but the defining characteristic of email is that it is the primary mode of communication that we use, both professionally and personally, when we are asking someone else to perform a task.

Of course you might use any form of communication to communicate tasks to another person. But other forms - especially the currently popular real-time methods - appear as a bi-directional communication, and are largely immutable. Email’s distinguishing characteristic is that it is discrete; each message is its own entity with its own ID. Emails may also be annotated, whether with flags, replied-to markers, labels, placement in folders, archiving, or deleting. Contrast this with a group chat in IRC, iMessage, or Slack, where the log is mostly2 unchangeable, and the only available annotation is “did your scrollbar ever move down past this point”; each individual message has only one bit of associated information. Unless you have catlike reflexes and an unbelievably obsessive-compulsive personality, it is highly unlikely that you will carefully set the “read” flag on each and every message in an extended conversation.

All this makes email much more suitable for communicating a task, because the recipient can file it according to their system for tracking tasks, come back to it later, and generally treat the message itself as an artifact. By contrast if I were to just walk up to you on the street and say “hey can you do this for me”, you will almost certainly just forget.

The word “task” might seem heavy-weight for some of the things that email is used for, but tasks come in all sizes. One task might be “click this link to confirm your sign-up on this website”. Another might be “choose a time to get together for coffee”. Or “please pass along my resume to your hiring department”. Yet another might be “send me the final draft of the Henderson report”.

Email is also used for conveying information: here are the minutes from that meeting we were just in. Here is transcription of the whiteboard from that design session. Here are some photos from our family vacation. But even in these cases, a task is implied: read these minutes and see if they’re accurate; inspect this diagram and use it to inform your design; look at these photos and just enjoy them.

So here’s the thing that you’re bad at, which is why none of the fifty different email apps you’ve bought for your phone have fixed the problem: when you get these messages, you aren’t making a conscious decision about:

  1. how important the message is to you
  2. whether you want to act on it at all
  3. when you want to act on it
  4. what exact action you want to take
  5. what the consequences of taking or not taking that action will be

This means that when someone asks you to do a thing, you probably aren’t going to do it. You’re going to pretend to commit to it, and then you’re going to flake out when push comes to shove. You’re going to keep context-switching until all the deadlines have passed.

In other words:

The thing you are bad at is saying ‘no’ to people.

Sometimes it’s not obvious that what you’re doing is saying ‘no’. For many of us — and I certainly fall into this category — a lot of the messages we get are vaguely informational. They’re from random project mailing lists, perhaps they’re discussions between other people, and it’s unclear what we should do about them (or if we should do anything at all). We hang on to them (piling up in our Inboxes) because they might be relevant in the future. I am not advocating that you have to reply to every dumb mailing list email with a 5-part action plan and a Scrum meeting invite: that would be a disaster. You don’t have time for that. You really shouldn’t have time for that.

The trick about getting to Inbox Zero3 is not in somehow becoming an email-reading machine, but in realizing that most email is worthless, and that’s OK. If you’re not going to do anything with it, just archive it and forget about it. If you’re subscribed to a mailing list where only 1 out of 1000 messages actually represents something you should do about it, archive all the rest after only answering the question “is this the one I should do something about?”. You can answer that question after just glancing at the subject; there are times when checking my email I will be hitting “archive” with a 1-second frequency. If you are on a list where zero messages are ever interesting enough to read in their entirety or do anything about, then of course you should unsubscribe.

Once you’ve dug yourself into a hole with thousands of “I don’t know what I should do with this” messages, it’s time to declare email bankruptcy. If you have 24,000 messages in your Inbox, let me be real with you: you are never, ever going to answer all those messages. You do not need a smartwatch to tell you exactly how many messages you are never going to reply to.

We’re In This Together, Me Especially

A lot of guidance about what to do with your email addresses email overload as a personal problem. Over the years of developing my tips and tricks for dealing with it, I certainly saw it that way. But lately, I’m starting to see that it has pernicious social effects.

If you have 24,000 messages in your Inbox, that means you aren’t keeping track of, or setting priorities on, which tasks you want to complete. But just because you’re not setting those priorities, that doesn’t mean nobody is. It means you are letting the availability heuristic - whatever is “latest and loudest” - govern access to your attention, and therefore your time. By doing this, you are rewarding people (or #brands) who contact you repeatedly, over inappropriate channels, and generally try to flood your attention with their priorities instead of your own. This, in turn, creates a culture where it is considered reasonable and appropriate to assume that you need to do that in order to get someone’s attention.

Since we live in the era of subtext and implication, I should explicitly say that I’m not describing any specific work environment or community. I used to have an email startup, and so I thought about this stuff very heavily for almost a decade. I have seen email habits at dozens of companies, and I help people in the open source community with their email on a regular basis. So I’m not throwing shade: almost everybody is terrible at this.

And that is the one way that email, in the sense of the tools and programs we use to process it, is at fault: technology has made it easier and easier to ask people to do more and more things, without giving us better tools or training to deal with the increasingly huge array of demands on our time. It’s easier than ever to say “hey could you do this for me” and harder than ever to just say “no, too busy”.

Mostly, though, I want you to know that this isn’t just about you any more. It’s about someone much more important than you: me. I’m tired of sending reply after reply to people asking to “just circle back” or asking if I’ve seen their email. Yes, I’ve seen your email. I have a long backlog of tasks, and, like anyone, I have trouble managing them and getting them all done4, and I frequently have to decide that certain things are just not important enough to do. Sometimes it takes me a couple of weeks to get to a message. Sometimes I never do. But, it’s impossible to be mad at somebody for “just checking in” for the fourth time when this is probably the only possible way they ever manage to get anyone else to do anything.

I don’t want to end on a downer here, though. And I don’t have a book to sell you which will solve all your productivity problems. I know that if I lay out some incredibly elaborate system all at once, it’ll seem overwhelming. I know that if I point you at some amazing gadget that helps you keep track of what you want to do, you’ll either balk at the price or get lost fiddling with all its knobs and buttons and not getting a lot of benefit out of it. So if I’m describing a problem that you have here, here’s what I want you to do.

Step zero is setting aside some time. This will probably take you a few hours, but trust me; they will be well-spent.

Email Bankruptcy

First, you need to declare email bankruptcy. Select every message in your Inbox older than 2 weeks. Archive them all, right now. In the past, you might have had to worry about deleting those messages, but modern email systems pretty much universally have more storage than you’ll ever need. So rest assured that if you actually need to do anything with these messages, they’ll all be in your archive. But anything in your Inbox right now older than a couple of weeks is just never going to get dealt with, and it’s time to accept that fact. Again, this part of the process is not about making a decision yet, it’s just about accepting a reality.

Mailbox Three

One extra tweak I would suggest here is to get rid of all of your email folders and filters. It seems like many folks with big email problems have tried to address them with ever-finer-grained classification of messages and ever more byzantine email rules. In my experience, when I look over the shoulder of someone with 24,000 messages in their Inbox, it’s common to also see 50 folders. These probably aren’t helping you very much.

In older email systems, it was necessary to construct elaborate header-based filtering systems so that you could later identify messages in certain specific ways, like “message X went to this mailing list”. However, this was an incomplete hack, a workaround for a missing feature. Almost all modern email clients (and if yours doesn’t do this, switch) let you locate messages like this via search.

Your mail system ought to have 3 folders:

  1. Inbox, which you process to discover tasks,
  2. Drafts, which you use to save progress on replies, and
  3. Archive, the folder which you access only by searching for information you need when performing a task.

Getting rid of unnecessary folders and queries and filter rules will remove things that you can fiddle with.

Moving individual units of trash between different heaps of trash is not being productive. By removing all the different folders you could shuffle your messages into before actually acting upon them, you will make better use of the time you spend looking at your email client.

There’s one exception to this rule, which is filters that do nothing but cause a message to skip your Inbox and go straight to the archive. The reason that this type of filter is different is that there are certain sources or patterns of message which are not actionable, but rather, a useful source of reference material that is only available as a stream of emails. Messages like that should, indeed, not show up in your Inbox. But, there’s no reason to file them into a specific folder or set of folders; you can always find them with a search.

Make A Place For Tasks

Next, you need to get a task list. Your email is not a task list; tasks are things that you decided you’re going to do, not things that other people have asked you to do5. Critically, you are going to need to parse e-mails into tasks. To explain why, let’s have a little arithmetic aside.

Let’s say it takes you 45 seconds to go from reading a message to deciding what it really means you should do, and a quicker 20 seconds to merely re-read a message and remember what you already decided about it. Then by the time you have 180 un-processed messages that you need to do something about sitting in your Inbox, you’ll be spending an hour a day (180 × 20 seconds) doing nothing but remembering what those messages mean, before you do anything related to actually living your life, even including checking for new messages.

What should you use for the task list? On some level, this doesn’t really matter. It only needs one really important property: you need to trust that if you put something onto it, you’ll see it at the appropriate time. How exactly that works depends heavily on your own personal relationship with your computers and devices; it might just be a physical piece of paper. But for most of us living in a multi-device world, something that synchronizes to some kind of cloud service is important, so Wunderlist or Remember the Milk are good places to start, with free accounts.

Turn Messages Into Tasks

The next step - and this is really the first day of the rest of your life - is to start at the oldest message in your Inbox and work forward in time. Look at only one message at a time. Decide whether this message is a meaningful task that you should accomplish.

If you decide a message represents a task, then make a new task on your task list. Decide what the task actually is, and describe it in words; don’t create tasks like “answer this message”. Why do you need to answer it? Do you need to gather any information first?

If you need to access information from the message in order to accomplish the task, then be sure to note in your task how to get back to the email. Depending on what your mail client is, it may be easier or harder to do this6, but in the worst case, following the guidelines above about eliminating unnecessary folders and filing in your email client, just put a hint into your task list about how to search for the message in question unambiguously.

Once you’ve done that:

Archive the message immediately.

The record that you need to do something about the message now lives in your task list, not your email client. You’ve processed it, and so it should no longer remain in your inbox.

If you decide a message doesn’t represent a task, then:

Archive the message immediately.

Do not move on to the next message until you have archived this message. Do not look ahead7. The presence of a message in your Inbox means you need to make a decision about it. Follow the touch-move rule with your email. If you skip over messages habitually and decide you’ll “just get back to it in a minute”, that minute will turn into 4 months and you’ll be right back where you were before.

Circling back to the subject of this post: once again, this isn’t really specific to email. You should follow roughly the same workflow when someone asks you to do a task in a meeting, or in Slack, or on your Discourse board, or wherever, if you think that the task is actually important enough to do. Note the Slack timestamp and a snippet of the message (or the relevant attachment) so you can search for it again. The thing that makes email different is really just the presence of an email box.

Banish The Blue Dot

Almost all email clients have a way of tracking “unread” messages; they cheerfully display counters of them. Ignore this information; it is useless. Messages have two states: in your inbox (unprocessed) and in your archive (processed). “Read” vs. “Unread” can be, at best, of minimal utility when resuming an interrupted scanning session. But, you are always only ever looking at the oldest message first, right? So none of the messages below it should be unread anyway...

Be Ruthless

As you try to start translating your flood of inbound communications into an actionable set of tasks you can actually accomplish, you are going to notice that your task list is going to grow and grow just as your Inbox was before. This is the hardest step:

Decide you are not going to do those tasks, and simply delete them. Sometimes, a task’s entire life-cycle is to be created from an email, exist for ten minutes, and then have you come back to look at it and then delete it. This might feel pointless, but in going through that process, you are learning something extremely valuable: you are learning what sorts of things are not actually important enough to do.

If every single message you get from some automated system provokes this kind of reaction, that will give you a clue that said system is wasting your time, and just making you feel anxious about work you’re never really going to get to, which can then lead to you un-subscribing or filtering messages from that system.

Tasks Before Messages

To thine own self, not thy Inbox, be true.

Try to start your day by looking at the things you’ve consciously decided to do. Don’t look at your email, don’t look at Slack; look at your calendar, and look at your task list.

One of those tasks, probably, is a daily reminder to “check your email”, but that reminder is there more to remind you to only do it once than to prevent you from forgetting.

I say “try” because this part is always going to be a challenge; while I mentioned earlier that you don’t want to unthinkingly give in to the availability heuristic, you also have to acknowledge that the reason it’s called a “cognitive bias” is because it’s part of human cognition. There will always be a constant anxious temptation to just check for new stuff, and those of us with a predisposition towards excessive scanning behavior feel it more than others.

Why Email?

We all need to make commitments in our daily lives. We need to do things for other people. And when we make a commitment, we want to be telling the truth. I want you to try to do all these things so you can be better at that. It’s impossible to truthfully make a commitment to spend some time to perform some task in the future if, realistically, you know that all your time in the future will be consumed by whatever the top 3 highest-priority angry voicemails you have on that day are.

Email is a challenging social problem, but I am tired of email, especially the user interface of email applications, getting the blame for what is, at its heart, a problem of interpersonal relations. It’s like noticing that you get a lot of bills through the mail, and then blaming the state of your finances on the colors of the paint in your apartment building’s mail room. Of course, the UI of an email app can encourage good or bad habits, but Gmail gave us a prominent “Archive” button a decade ago, and we still have all the same terrible habits that were plaguing Outlook users in the 90s.

Of course, there’s a lot more to “productivity” than just making a list of the things you’re going to do. Some tools can really help you manage that list a lot better. But all they can help you to do is to stop working on the wrong things, and start working on the right ones. Actually being more productive, in the sense of getting more units of work out of a day, is something you get from keeping yourself healthy, happy, and well-rested, not from an email filing system.

You can’t violate causality to put more hours into the day, and as a frail and finite human being, there’s only so much work you can reasonably squeeze in before you die.

The reason I care a lot about salvaging email specifically is that it remains the best medium for communication that allows you to be in control of your own time, and by extension, the best medium for allowing people to do creative work.

Asking someone to do something via SMS doesn’t scale; if you have hundreds of unread texts there’s no way to put them in order, no way to classify them as “finished” and “not finished”, so you need to keep it to the number of things you can fit in short term memory. Not to mention the fact that text messaging is almost by definition an interruption - by default, it causes a device in someone’s pocket to buzz. Asking someone to do something in group chat, such as IRC or Slack, is similarly time-dependent; if they are around, it becomes an interruption, and if they’re not around, you have to keep asking and asking over and over again, which makes it really inefficient for the asker (or the asker can use a @highlight, and assume that Slack will send the recipient, guess what, an email).

Social media often comes up as another possible replacement for email, but its sort order is even worse than “only the most recent and most frequently repeated”. Messages are instead sorted by value to advertisers or likelihood of increasing “engagement”, i.e. most likely to keep you looking at the social media site rather than doing any real work.

For those of us who require long stretches of uninterrupted time to produce something good – “creatives”, or whatever today’s awkward buzzword for the intersection of writers, programmers, graphic designers, illustrators, and so on, is – we need an inbound task queue that we can have some level of control over. Something that we can check at a time of our choosing, something that we can apply filtering to in order to protect access to our attention, something that maintains the chain of request/reply for reference when we have to pick up a thread we’ve had to let go of for a while. Some way to be in touch with our customers, our users, and our fans, without being constantly interrupted. Because if we don’t give those who need to communicate with us such a tool, they’ll just blast @everyone messages into our Slack channels, @mention us on Twitter, and text us “Hey, got a minute?” until we have to quit everything to try and get some work done.

Questions about this post?

Go ahead and send me an email.


Acknowledgements

As always, any errors or bad ideas are certainly my own.

First of all, Merlin Mann, whose writing and podcasting were the inspiration, direct or indirect, for many of my thoughts on this subject; and who sets a good example because he won’t answer your email.

Thanks also to David Reid for introducing me to Merlin's work, as well as Alex Gaynor, Tristan Seligmann, Donald Stufft, Cory Benfield, Piët Delport, Amber Brown, and Ashwini Oruganti for feedback on drafts.


  1. Email is so culturally pervasive that it is literally in Vogue, although in fairness this is not a reference to the overflowing-Inbox problem that I’m discussing here. 

  2. I find the “edit” function in Slack maddening; although I appreciate why it was added, it’s easy to retroactively completely change the meaning of an entire conversation in ways that make it very confusing for those reading later. You don’t even have to do this intentionally; sometimes you make a legitimate mistake, like forgetting the word “not”, and the next 5 or 6 messages are about resolving that confusion; then, you go back and edit, and it looks like your colleagues correcting you are a pedantic version of Mr. Magoo, unable to see that you were correct the first time. 

  3. There, I said it. Are you happy now? 

  4. Just to clarify: nothing in this post should be construed as me berating you for not getting more work done, or for ever failing to meet any commitment no matter how casual. Quite the opposite: what I’m saying you need to do is acknowledge that you’re going to screw up and rather than hold a thousand emails in your inbox in the vain hope that you won’t, just send a quick apology and move on. 

  5. Maybe you decided to do the thing because your boss asked you to do it and failing to do it would cost you your job, but nevertheless, that is a conscious decision that you are making; not everybody gets to have “boss” priority, and unless your job is a true Orwellian nightmare, not everything your boss says in email is an instant career-ending catastrophe. 

  6. In Gmail, you can usually just copy a link to the message itself. If you’re using OS X’s Mail.app, you can use this Python script to generate links that, when clicked, will open the Mail app:

    from __future__ import (print_function, unicode_literals,
                            absolute_import, division)
    
    from ScriptingBridge import SBApplication
    import urllib
    
    mail = SBApplication.applicationWithBundleIdentifier_("com.apple.mail")
    
    for viewer in mail.messageViewers():
        for message in viewer.selectedMessages():
            for header in message.headers():
                name = header.name()
                if name.lower() == "message-id":
                    content = header.content()
                    print("message:" + urllib.quote(content))
    

    You can then paste these links into just about any task tracker; if they don’t become clickable, you can paste them into Safari’s URL bar or pass them to the open command-line tool. 

  7. The one exception here is that you can look ahead in the same thread to see if someone has already replied. 

by Glyph at April 24, 2016 11:54 PM

Moshe Zadka

Use virtualenv

In a recent conversation with a friend, we agreed that “sometimes the instructions tell you to do ‘sudo pip install’… which is good, because then you know to ignore them”.

There is never a need for “sudo pip install”, and doing it is an anti-pattern. Instead, all installation of packages should go into a virtualenv. The only exception is, of course, virtualenv (and arguably, pip and wheel). I got enough questions about this that I wanted to write up an explanation about the how, why and why the counter-arguments are wrong.

What is virtualenv?

The documentation says:

virtualenv is a tool to create isolated Python environments.

The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.7/site-packages (or whatever your platform’s standard location is), it’s easy to end up in a situation where you unintentionally upgrade an application that shouldn’t be upgraded.

Or more generally, what if you want to install an application and leave it be? If an application works, any change in its libraries or the versions of those libraries can break the application.

The tl;dr is:

  • virtualenv allows not needing administrator privileges
  • virtualenv allows installing different versions of the same library
  • virtualenv allows installing an application and never accidentally updating a dependency

The first problem is the one the “sudo” comment addresses — but the real issues stem from the second and third: not using a virtual environment leads to the potential of conflicts and dependency hell.

How to use virtualenv?

Creating a virtual environment is easy:

$ virtualenv dirname

will create the directory, if it does not exist, and then create a virtual environment in it. It is possible to use it either activated or unactivated. Activating a virtual environment is done by

$ . dirname/bin/activate
(dirname)$

This will put python, as well as any script installed using setuptools’ “console_scripts” option in the virtual environment, on the command-execution path. The most important of those is pip, so using pip will now install into the virtual environment.

It is also possible to use a virtual environment without activating it, by directly calling dirname/bin/python or any other console script. Again, pip is an example of those, and used for installing into the virtual environment.
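
For example, with the hypothetical “dirname” environment from above:

$ dirname/bin/pip install requests
$ dirname/bin/python -c "import requests; print(requests.__version__)"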

Installing tools for “general use”

I have seen, a couple of times, the argument that when installing tools for general use it makes sense to install them into the system Python. I do not think this is a reasonable exception, for two reasons:

  • It still forces you to use root to install/upgrade those tools
  • It still runs into the dependency/conflict hell problems

There are a few good alternatives for this:

  • Create a (handful of) virtual environments, and add them to users’ path.
  • Use “pex” to install Python tools in a way that isolates them even further from system dependencies.

Exploratory programming

People often use Python for exploratory programming. That’s great! Note that since pip 7, pip is building and caching wheels by default. This means that creating virtual environments is even cheaper: tearing down an environment and building a new one will not require recompilation. Because of that, it is easy to treat virtual environments as disposable except for configuration: activate a virtual environment, explore — and whenever you need to move things into production, ‘pip freeze’ will allow easy recreation of the environment.
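
A sketch of that workflow, reusing the “dirname” environment from above:

(dirname)$ pip freeze > requirements.txt
$ virtualenv production-env
$ production-env/bin/pip install -r requirements.txt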

by moshez at April 24, 2016 04:54 AM

April 20, 2016

Glyph Lefkowitz

Far too many things can stop the BLOB

It occurs to me that the lack of a standard, well-supported, memory-efficient interface for BLOBs in multiple programming languages is one of the primary driving factors of poor scalability characteristics of open source SaaS applications.

Applications like Gitlab, Redmine, Trac, Wordpress, and so on, all need to store potentially large files (“attachments”). Frequently, they elect to store these attachments (at least by default) in a dedicated filesystem directory. This leads to a number of tricky concurrency issues, as the filesystem has different (and divorced) concurrency semantics from the backend database, and resides only on the individual API nodes, rather than in the shared namespace of the attached database.

Some databases do support writing to BLOBs like files. Postgres, SQLite, and Oracle do, although it seems MySQL lags behind in this area (although I’d love to be corrected on this front). But many higher-level API bindings for these databases don’t expose support for BLOBs in an efficient way.
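
As one concrete example, psycopg2 does expose file-like large-object access for Postgres, so an attachment can be streamed into the database in chunks rather than held in memory. A rough sketch (the DSN and filename are hypothetical):

    import psycopg2

    conn = psycopg2.connect("dbname=attachments")
    lobj = conn.lobject(mode="wb")  # create a new large object
    with open("upload.bin", "rb") as f:
        # Stream in 64 KiB chunks instead of slurping the whole file.
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            lobj.write(chunk)
    conn.commit()
    print("stored large object with OID", lobj.oid)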

Directly using the filesystem, as opposed to a backing service, breaks the “expected” scaling behavior of the front-end portion of a web application. Using an object store, like Cloud Files or S3, is a good option to achieve high scalability for public-facing applications, but that creates additional deployment complexity.

So, as both a plea to others and a note to myself: if you’re writing a database-backed application that needs to store some data, please consider making “store it in the database as BLOBs” an option. And if your particular database client library doesn’t support it, consider filing a bug.

by Glyph at April 20, 2016 01:01 AM

April 15, 2016

Itamar Turner-Trauring

Improving your skills as a 9 to 5 programmer

Do you only code 9 to 5, but wonder if that's good enough? Do you see other programmers working on personal projects or open source projects, going to hackathons, and spending all their spare time writing software? You might think that as someone who only writes software at their job, who only works 9-5, you will never be as good. You might believe that only someone who eats, sleeps and breathes code can excel. But actually it's possible to stick to a 40-hour week and still be a valuable, skilled programmer.

Working on personal or open source software projects doesn't automatically make you a better programmer. Hackathons might even be a net negative if they give you the impression that building software to arbitrary deadlines while exhausted is a reasonable way to produce anything of value. There are inherent limits to your productive working hours. If you don't feel like spending more time coding when you get home, then don't: you'll be too tired or unfocused to gain anything.

Spending time on side projects does have some value, but the most useful result is not so much practice as knowledge. Established software projects tend to use older technology and techniques, simply because they've been in existence for a while. The main value you get from working on other software projects and interacting with developers outside of work is knowledge of:

  1. A broader range of technologies and tools.
  2. New techniques and processes. Perhaps your company doesn't do much testing, but you can learn about test-driven development elsewhere.

Having a broad range of tools and techniques to reach for is a valuable skill both at your job and when looking for a new job. But actual coding is not an efficient way to gain this knowledge. You don't actually need to use new tools and techniques, and you'll never really have time to learn all tools and all techniques in detail anyway. You get the most value just from having some sense of what tools and techniques are out there, what they do and when they're useful. If a new tool you discover is immediately relevant to your job you can just learn it during working hours, and if it's not you should just file it away in your brain for later.

Learning about new tools can also help you find a new job, even when you don't actually use them. I was once asked at an interview about the difference between NoSQL and traditional databases. At the time I'd never used MongoDB or any other NoSQL database, but I knew enough to answer satisfactorily. Being able to answer that question told the interviewer I'd be able to use that tool, if necessary, even if I hadn't done it before.

Instead of coding in your spare time you can get similar benefits, and more efficiently, by directly focusing on acquiring knowledge of new tools and techniques. And since this knowledge will benefit your employer and you don't need to spend significant time on it, you can acquire it during working hours. You're never actually working every single minute of your day, you always have some time when you're slacking off on the Internet. Perhaps you're doing so right now! You can use that time to expand your knowledge.

Each week you should allocate one hour of your time at work to learning about new tools and techniques. Choosing a particular time will help you do this on a regular basis. Personally I'd choose Friday afternoons, since by that point in the week I'm not achieving much anyway. Don't skip this hour just because of deadlines or tiredness. You'll do better at deadlines, and be less tired, if you know of the right tools and techniques to efficiently solve the problems you encounter at your job.

April 15, 2016 04:00 AM

April 13, 2016

Glyph Lefkowitz

I think I’m using GitHub wrong.

I use a hodgepodge of https: and scp-style colon (i.e. “ssh”) URL schemes for my local clones; sometimes I have a remote called “github” and sometimes I have one called “origin”. Sometimes I clone from a fork I made and sometimes I clone from the upstream.

I think the right way to use GitHub would instead be to always fork first, make my remote always be “origin”, and consistently name the upstream remote “upstream”. The problem with this, though, is that forks rapidly fall out of date, and I often want to automatically synchronize all the upstream branches.
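
Concretely, the consistent setup described above would look something like this (with a hypothetical repository):

$ git clone git@github.com:glyph/some-project.git   # my fork becomes "origin"
$ cd some-project
$ git remote add upstream https://github.com/upstream-org/some-project.git
$ git fetch upstream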

Is there a script or a github option or something to synchronize a fork with upstream automatically, including all its branches and tags? I know there’s no comment field, but you can email me or reply on twitter.

by Glyph at April 13, 2016 09:11 PM

April 04, 2016

Twisted Matrix Laboratories

Twisted 16.1 Released

On behalf of Twisted Matrix Laboratories, I am honoured to announce the release of Twisted 16.1!

This release is hot on the heels of 16.0, released last month, and includes some nice little tidbits. The highlights include:
  • twisted.application.internet.ClientService, a service that maintains a persistent outgoing endpoint-based connection -- a replacement for ReconnectingClientFactory that uses modern APIs (a usage sketch follows this list);
  • A large (77% on one benchmark) performance improvement when using twisted.web's client on PyPy;
  • A few conch modules have been ported to Python 3, in preparation for further porting of the SSH functionality;
  • Full support for OpenSSL 1.0.2f and above;
  • t.web.http.Request.addCookie now accepts Unicode and bytes keys/values;
  • twistd manhole no longer uses a hard-coded SSH host key, and will generate one for you on the fly (this adds an 'appdirs' PyPI dependency; installing with [conch] will add it automatically);
  • Over eighteen tickets overall closed since 16.0.
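For instance, a minimal sketch of the new ClientService (the host, port, and protocol here are hypothetical):

    from twisted.application.internet import ClientService
    from twisted.internet import reactor
    from twisted.internet.endpoints import TCP4ClientEndpoint
    from twisted.internet.protocol import Factory, Protocol

    endpoint = TCP4ClientEndpoint(reactor, "chat.example.com", 6667)
    service = ClientService(endpoint, Factory.forProtocol(Protocol))
    service.startService()  # connects now, and reconnects on failure
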
For more information, check the NEWS file (link provided below).

You can find the downloads on PyPI (or alternatively our website). The NEWS file is also available on GitHub.

Many thanks to everyone who had a part in this release - the supporters of the Twisted Software Foundation, the developers who contributed code as well as documentation, and all the people building great things with Twisted!

Twisted Regards,
Amber Brown (HawkOwl)

by HawkOwl (noreply@blogger.com) at April 04, 2016 05:14 PM