Planet Twisted

February 05, 2024

Glyph Lefkowitz

Let Me Tell You A Secret

I do consulting1 on software architecture, network protocol development, python software infrastructure, streamlined cloud deployment, and open source strategy, among other nerdy things. I enjoy solving challenging, complex technical problems or contributing to the open source commons. On the best jobs, I get to do both.

Today I would like to share with you a secret of the software technology consulting trade.

I should note that this secret is not specific to me. I have several colleagues who have also done software consulting and have reflected versions of this experience back to me.2

We’ll get to the secret itself in a moment, but first, some background.


Companies do not go looking for consulting when things are going great. This is particularly true when looking for high-level consulting on things like system architecture or strategy. Almost by definition, there’s a problem that I have been brought in to solve. Ideally, that problem is a technical challenge.

In the software industry, your team probably already has some software professionals with a variety of technical skills, and thus they know what to do with technical challenges. Which means that, as often as not, the problem is to do with people rather than technology, even if it appears otherwise.

When you hire a staff-level professional like myself to address your software team’s general problems, that consultant will need to gather some information. If I am that consultant and I start to suspect that the purported technology problem that you’ve got is in fact a people problem, here is the secret technique that I am going to use:

I am going to go get a pen and a pad of paper, then schedule a 90-minute meeting with the most senior IC3 engineer that you have on your team. I will bring that pen and paper to the meeting. I will then ask one question:

What is fucked up about this place?

I will then write down their response in as much detail as I can manage. If I have begun to suspect that this meeting is necessary, 90 minutes is typically not enough time, and I will struggle to keep up. Even so, I will usually manage to capture the highlights.

One week later, I will schedule a meeting with executive leadership, and during that meeting, I will read back a very lightly edited4 version of the transcript of the previous meeting. This is then routinely praised as a keen strategic insight.


I should pause here to explicitly note that — obviously, I hope — this is not an oblique reference to any current or even recent client; if I’d had this meeting recently it would be pretty awkward to answer that “so, I read your blog…” email.5 But talking about clients in this way, no matter how obfuscated and vague the description, is always a bit professionally risky. So why risk it?

The thing is, I’m not a people manager. While I can do this kind of work, and I do not begrudge doing it if it is the thing that needs doing, I find it stressful and unfulfilling. I am a technology guy, not a people person. This is generally true of people who elect to go into technology consulting; we know where the management track is, and we didn’t pick it.

If you are going to hire me for my highly specialized technical expertise, I want you to get the maximum value out of it. I know my value; my rates are not low, and I do not want clients to come away with the sense that I only had a couple of “obvious” meetings.

So the intended audience for this piece is potential clients, leaders of teams (or organizations, or companies) who have a general technology problem and are wondering if they need a consultant with my skill-set to help them fix it. Before you decide that your issue is the need to implement a complex distributed system consensus algorithm, check if that is really what’s at issue. Talk to your ICs, and — taking care to make sure they understand that you want honest feedback and that they are safe to offer it — ask them what problems your organization has.

During this meeting it is important to only listen. Especially if you’re at a small company and you are regularly involved in the day-to-day operations, you might feel immediately defensive. Sit with that feeling, and process it later. Don’t unload your emotional state on an employee you have power over.6

“Only listening” doesn’t exclusively mean “don’t push back”. You also shouldn’t be committing to fixing anything. While the information you are gathering in these meetings is extremely valuable, and you should probably act on more of it than you will initially want to, your ICs won’t have the full picture. They really may not understand why certain priorities are set the way they are. You’ll need to take that as feedback for improving internal comms rather than “fixing” the perceived problem, and you certainly don’t want to make empty promises.

If you have these conversations directly, you can get something from it that no consultant can offer you: credibility. If you can actively listen, the conversation alone can improve morale. People like having their concerns heard. If, better still, you manage to make meaningful changes to address the concerns you’ve heard about, you can inspire true respect.

As a consultant, I’m going to be seen as some random guy wasting their time with a meeting. Even if you make the changes I recommend, it won’t resonate the same way as someone remembering that they personally told you what was wrong, and you took it seriously and fixed it.

Once you know what the problems are with your organization, and you’ve got solid technical understanding that you really do need that event-driven distributed systems consensus algorithm implemented using Twisted, I’m absolutely your guy. Feel free to get in touch.


  1. While I immensely value my patrons and am eternally grateful for their support, at — as of this writing — less than $100 per month it doesn’t exactly pay the SF Bay Area cost-of-living bill. 

  2. When I reached out for feedback on a draft of this essay, every other consultant I showed it to said that something similar had happened to them within the last month, all with different clients in different sectors of the industry. I really cannot stress how common it is. 

  3. “individual contributor”, if this bit of jargon isn’t universal in your corner of the world; i.e.: “not a manager”. 

  4. Mostly, I need to remove a bunch of profanity, but sometimes I will also need to have another interview, usually with a more junior person on the team to confirm that I’m not relaying only a single person’s perspective. It is pretty rare that the top-of-mind problems are specific to one individual, though. 

  5. To the extent that this is about anything saliently recent, I am perhaps grumbling about how tech CEOs aren’t taking morale problems generated by the constant drumbeat of layoffs seriously enough. 

  6. I am not always in the role of a consultant. At various points in my career, I have also been a leader needing to sit in this particular chair, and believe me, I know it sucks. This would not be a common problem if there weren’t a common reason that leaders tend to avoid this kind of meeting. 

by Glyph at February 05, 2024 10:36 PM

January 25, 2024

Glyph Lefkowitz

The Macintosh

A 4k ultrawide classic MacOS desktop screenshot featuring various Finder windows and MPW Workshop

Today is the 40th anniversary of the announcement of the Macintosh. Others have articulated compelling emotional narratives that easily eclipse my own similar childhood memories of the Macintosh family of computers. So instead, I will ask a question:

What is the Macintosh?

As this is the anniversary of the beginning, that is where I will begin. The original Macintosh, the classic MacOS, the original “System Software” are a shining example of “fake it till you make it”. The original mac operating system was fake.

Don’t get me wrong, it was an impressive technical achievement to fake something like this, but what Steve Jobs did was to see a demo of a Smalltalk-76 system, an object-oriented programming environment with 1-to-1 correspondences between graphical objects on screen and runtime-introspectable data structures, a self-hosting high level programming language, memory safety, message passing, garbage collection, and many other advanced facilities that would not be popularized for decades, and make a fake version of it which ran on hardware that consumers could actually afford, by throwing out most of what made the programming environment interesting and replacing it with a much more memory-efficient illusion implemented in 68000 assembler and Pascal.

The machine’s RAM didn’t have room for a kernel. Whatever application was running was in control of the whole system. No protected memory, no preemptive multitasking. It was a house of cards that was destined to collapse. And collapse it did, both in the short term and the long. In the short term, the system was buggy and unstable, and application crashes resulted in system halts and reboots.

In the longer term, the company based on the Macintosh effectively went out of business and was reverse-acquired by NeXT, but they kept the better-known branding of the older company. The old operating system was gradually disposed of, quickly replaced at its core with a significantly more mature generation of operating system technology based on BSD UNIX and Mach. With the removal of Carbon compatibility 4 years ago, the last vestigial traces of it were removed. But even as early as 2004 the Mac was no longer really the Macintosh.

What NeXT had built was much closer to the Smalltalk system that Jobs was originally attempting to emulate. Its programming language, “Objective C” explicitly called back to Smalltalk’s message-passing, right down to the syntax. Objects on the screen now did correspond to “objects” you could send messages to. The development environment understood this too; that was a major selling point.

The NeXTSTEP operating system and Objective C runtime did not have garbage collection, but they provided a similar developer experience through reference counting throughout the object model. The original vision was finally achieved, for real, and that’s what we have on our desks and in our backpacks today (and in our pockets, in the form of the iPhone, which is in some sense a tiny next-generation NeXT computer itself).


The one detail I will relate from my own childhood is this: my first computer was not a Mac. My first computer, as a child, was an Amiga. When I was 5, I had a computer with 4096 colors, real multitasking, 3D graphics, and a paint program that could draw hard real-time animations with palette tricks. Then the writing was on the wall for Commodore and I got a computer which had 256 colors, a bunch of old software that was still black and white, an operating system that would freeze if you held down the mouse button on the menu bar and couldn’t even play animations smoothly. Many will relay their first encounter with the Mac as a kind of magic, but mine was a feeling of loss and disappointment. Unlike almost everyone at the time, I knew what a computer really could be, and despite many pleasant and formative experiences with the Macintosh in the meanwhile, it would be a decade before I saw a real one again.

But this is not to deride the faking. The faking was necessary. Xerox was not going to put an Alto running Smalltalk on anyone’s desk. People have always grumbled that Apple products are expensive, but in 2024 dollars, one of these Xerox computers cost roughly $55,000.

The Amiga was, in its own way, a similar sort of fake. It managed its own miracles by putting performance-critical functions into dedicated hardware which rapidly became obsolete as software technology evolved much more rapidly.

Jobs is celebrated as a genius of product design, and he certainly wasn’t bad at it, but I had the rare privilege of seeing the homework he was cribbing from in that subject, and in my estimation he was a B student at best. Where he got an A was bringing a vision to life by creating an organization, both inside and outside of his companies.

If you want a culture-defining technological artifact, everybody in the culture has to be able to get their hands on one. This doesn’t just mean that the builder has to be able to build it. The buyer also has to be able to afford it, obviously. Developers have to be able to develop for it. The buyer has to actually want it; the much-derided “marketing” is a necessary part of the process of making a product what it is. Everyone needs to be able to move together in the direction of the same technological future.

This is why it was so fitting that Tim Cook was made Jobs's successor. The supply chain was the hard part.

The crowning, final achievement of Jobs’s career was the fact that not only did he fake it — the fakes were flying fast and thick at that time in history, even if they mostly weren’t as good — it was that he faked it and then he built the real version and then he bridged the transitions to get to the real thing.

I began here by saying that the Mac isn’t really the Mac, and speaking in terms of a point in time analysis that is true. Its technology today has practically nothing in common with its technology in 1984. This is not merely an artifact of the length of time here: the technology at the core of various UNIXes in 1984 bears a lot of resemblance to UNIX-like operating systems today1. But looking across its whole history from 1984 to 2024, there is undeniably a continuity to the conceptual “Macintosh”.

Not just as a user, but as a developer moving through time rather than looking at just a few points: the “Macintosh”, such as it is, has transitioned from the Motorola 68000 to the PowerPC to Intel 32-bit to Intel 64-bit to ARM. From obscurely proprietary to enthusiastically embracing open source and then, sadly, much of the way back again. It moved from black and white to color, from desktop to laptop, from Carbon to Cocoa, from Display PostScript to Display PDF, all the while preserving instantly recognizable iconic features like the apple menu and the cursor pointer, while providing developers documentation and SDKs and training sessions that helped them transition their apps through multiple near-complete rewrites as a result of all of these changes.


To paraphrase Abigail Thorn’s first video about Identity, identity is what survives. The Macintosh is an interesting case study in the survival of the idea of a platform, as distinct from the platform itself. It is the Computer of Theseus, a thought experiment successfully brought to life and sustained over time.

If there is a personal lesson to be learned here, I’d say it’s that one’s own efforts need not be perfect. In fact, a significantly flawed vision that you can achieve right now is often much, much better than a perfect version that might take just a little bit longer, if you don’t have the resources to actually sustain going that much longer2. You have to be bad at things before you can be good at them. Real artists, as Jobs famously put it, ship.

So my contribution to the 40th anniversary reflections is to say: the Macintosh is dead. Long live the Mac.


Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!


  1. including, ironically, the modern macOS. 

  2. And that is why I am posting this right now, rather than proofreading it further. 

by Glyph at January 25, 2024 06:31 AM

Unsigned Commits

I am going to tell you why I don’t think you should sign your Git commits, even though doing so with SSH keys is now easier than ever. But first, to contextualize my objection, I have a brief hypothetical for you, and then a bit of history from the evolution of security on the web.


paper reading “Sign Here:” with a pen poised over it


It seems like these days, everybody’s signing all different kinds of papers.

Bank forms, permission slips, power of attorney; it seems like if you want to securely validate a document, you’ve gotta sign it.

So I have invented a machine that automatically signs every document on your desk, just in case it needs your signature. Signing is good for security, so you should probably get one, and turn it on, just in case something needs your signature on it.

We also want to make sure that verifying your signature is easy, so we will have them all notarized and duplicates stored permanently and publicly for future reference.

No? Not interested?


Hopefully, that sounded like a silly idea to you.

Most adults in modern civilization have learned that signing your name to a document has an effect. It is not merely decorative; the words in the document being signed have some specific meaning and can be enforced against you.

In some ways the metaphor of “signing” in cryptography is bad. One does not “sign” things with “keys” in real life. But here, it is spot on: a cryptographic signature can have an effect.

It should be an input to some software, one that is acted upon. Software does a thing differently depending on the presence or absence of a signature. If it doesn’t, the signature probably shouldn’t be there.


Consider the most venerable example of encryption and signing that we all deal with every day: HTTPS. Many years ago, browsers would happily display unencrypted web pages. The browser would also encrypt the connection, if the server operator had paid for an expensive certificate and correctly configured their server. If that operator messed up the encryption, it would pop up a helpful dialog box that would tell the user “This website did something wrong that you cannot possibly understand. Would you like to ignore this and keep working?” with buttons that said “Yes” and “No”.

Of course, these are not the precise words that were written. The words, as written, said things about “information you exchange” and “security certificate” and “certifying authorities” but “Yes” and “No” were the words that most users read. Predictably, most users just clicked “Yes”.

In the usual case, where users ignored these warnings, it meant that no user ever got meaningful security from HTTPS. It was a component of the web stack that did nothing but funnel money into the pockets of certificate authorities and occasionally present annoying interruptions to users.

In the case where the user carefully read and honored these warnings in the spirit they were intended, adding any sort of transport security to your website was a potential liability. If you got everything perfectly correct, nothing happened except the browser would display a picture of a small green purse. If you made any small mistake, it would scare users off and thereby directly harm your business. You would only want to do it if you were doing something that put a big enough target on your site that you became unusually interesting to attackers, or were required to do so by some contractual obligation like credit card companies.

Keep in mind that the second case here is the best case.

In 2016, the browser makers noticed this problem and started taking some pretty aggressive steps towards actually enforcing the security that HTTPS was supposed to provide, by fixing the user interface to do the right thing. If your site didn’t have security, it would be shown as “Not Secure”, a subtle warning that would gradually escalate in intensity as time went on, correctly incentivizing site operators to adopt transport security certificates. On the user interface side, certificate errors would be significantly harder to disregard, making it so that users who didn’t understand what they were seeing would actually be stopped from doing the dangerous thing.

Nothing fundamental1 changed about the technical aspects of the cryptographic primitives or constructions being used by HTTPS in this time period, but socially, the meaning of an HTTP server signing and encrypting its requests changed a lot.


Now, let’s consider signing Git commits.

You may have heard that in some abstract sense you “should” be signing your commits. GitHub puts a little green “verified” badge next to commits that are signed, which is neat, I guess. They provide “security”. 1Password provides a nice UI for setting it up. If you’re not a 1Password user, GitHub itself recommends you put in just a few lines of configuration to do it with either a GPG, SSH, or even an S/MIME key.
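
For context, the configuration in question really is small. Here is a hedged sketch of what the SSH-key variant looks like in a ~/.gitconfig (the key path is a placeholder; consult GitHub’s own documentation for the exact steps):

[gpg]
    format = ssh
[user]
    signingkey = ~/.ssh/id_ed25519.pub
[commit]
    gpgsign = true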

But while GitHub’s documentation quite lucidly tells you how to sign your commits, its explanation of why is somewhat less clear. Their purse is the word “Verified”; it’s still green. If you enable “vigilant mode”, you can make the blank “no verification status” option say “Unverified”, but not much else changes.

This is like the old-style HTTPS verification “Yes”/“No” dialog, except that there is not even an interruption to your workflow. They might put the “Unverified” status on there, but they’ve gone ahead and clicked “Yes” for you.

It is tempting to think that the “HTTPS” metaphor will map neatly onto Git commit signatures. It was bad when the web wasn’t using HTTPS, and the next step in that process was for Let’s Encrypt to come along and for the browsers to fix their implementations. Getting your certificates properly set up in the meanwhile and becoming familiar with the tools for properly doing HTTPS was unambiguously a good thing for an engineer to do. I did, and I’m quite glad I did so!

However, there is a significant difference: signing and encrypting an HTTPS request is ephemeral; signing a Git commit is functionally permanent.

This ephemeral nature meant that errors in the early HTTPS landscape were easily fixable. Earlier I mentioned that there was a time where you might not want to set up HTTPS on your production web servers, because any small screw-up would break your site and thereby your business. But if you were really skilled and you could see the future coming, you could set up monitoring, avoid these mistakes, and rapidly recover. These mistakes didn’t need to badly break your site.

We can extend the analogy to HTTPS, but we have to take a detour into one of the more unpleasant mistakes in HTTPS’s history: HTTP Public Key Pinning, or “HPKP”. The idea with HPKP was that you could publish a record in an HTTP header where your site commits2 to using certain certificate authorities for a period of time, where that period of time could be “forever”. Attackers gonna attack, and attack they did. Even without getting attacked, a site could easily commit “HPKP Suicide” where they would pin the wrong certificate authority with a long timeline, and their site was effectively gone for every browser that had ever seen those pins. As a result, after a few years, HPKP was completely removed from all browsers.
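
For readers who never ran into it, an HPKP policy looked roughly like this (a reconstructed sketch of the now-removed header; the bracketed values stand in for base64-encoded SPKI hashes):

Public-Key-Pins: pin-sha256="<hash of the pinned key>"; pin-sha256="<hash of a backup key>"; max-age=5184000; includeSubDomains

A long max-age here is exactly the “forever” mentioned above: every browser that had seen the pins would refuse any other keys for that whole period.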

Git commit signing is even worse. With HPKP, you could easily make terrible mistakes with permanent consequences even though you knew the exact meaning of the data you were putting into the system at the time you were doing it. With signed commits, you are saying something permanently, but you don’t really know what it is that you’re saying.


Today, what is the benefit of signing a Git commit? GitHub might present it as “Verified”. It’s worth noting that only GitHub will do this, since they are the root of trust for this signing scheme. So, by signing commits and registering your keys with GitHub, you are, at best, helping to lock in GitHub as a permanent piece of infrastructure that is even harder to dislodge because they are not only where your code is stored, but also the arbiters of whether or not it is trustworthy.

In the future, what is the possible security benefit? If we all collectively decide we want Git to be more secure, then we will need to meaningfully treat signed commits differently from unsigned ones.

There’s a long tail of unsigned commits several billion entries long. And those are in the permanent record as much as the signed ones are, so future tooling will have to be able to deal with them. If, as stewards of Git, we wish to move towards a more secure Git, as the stewards of the web moved towards a more secure web, we do not have the option that the web did. In the browser, the meaning of a plain-text HTTP or incorrectly-signed HTTPS site changed, in order to encourage the site’s operator to change the site to be HTTPS.

In contrast, the meaning of an unsigned commit cannot change, because there are zillions of unsigned commits lying around in critical infrastructure and we need them to remain there. Commits cannot meaningfully be changed to become signed retroactively. Unlike an online website, they are part of a historical record, not an operating program. So we cannot establish the difference in treatment by changing how unsigned commits are treated.

That means that tooling maintainers will need to provide some difference in behavior that creates an incentive. With HTTPS, the binary choice was clear: don’t present sites with incorrect, potentially compromised configurations to users. The question was just how to achieve that. With Git commits, the difference in treatment of a “trusted” commit is far less clear.

If you will forgive me a slight straw-man here, one possible naive interpretation is that a “trusted” signed commit is one that’s OK to run in CI. Conveniently, it’s not simply “trusted” in a general sense: if you signed it, it’s trusted to be from you, specifically. Surely it’s fine if we bill the CI costs for validating the PR that includes that signed commit to your GitHub account?

Now, someone can piggy-back off a 1-line typo fix that you made on top of an unsigned commit to some large repo, making you implicitly responsible for transitively signing all unsigned parent commits, even though you haven’t looked at any of the code.

Remember, also, that the only central authority that is practically trustable at this point is your GitHub account. That means that if you are using a third-party CI system, even if you’re using a third-party Git host, you can only run “trusted” code if GitHub is online and responding to requests for its “get me the trusted signing keys for this user” API. This also adds a lot of value to a GitHub credential breach, strongly motivating attackers to sneakily attach their own keys to your account so that their commits in unrelated repos can be “Verified” by you.

Let’s review the pros and cons of turning on commit signing now, before you know what it is going to be used for:

Pro:

  - Green “Verified” badge

Con:

  - Unknown, possibly unlimited future liability for the consequences of running code in a commit you signed
  - Further implicitly cementing GitHub as a centralized trust authority in the open source world
  - Introducing unknown reliability problems into infrastructure that relies on commit signatures
  - A temporary breach of your GitHub credentials now leads to potentially permanent consequences if someone can smuggle a new trusted key in there
  - New kinds of ongoing process overhead as commit-signing keys become new permanent load-bearing infrastructure, like “what do I do with expired keys”, “how often should I rotate these”, and so on

I feel like the “Con” list is coming out ahead.


That probably seemed like increasingly unhinged hyperbole, and it was.

In reality, the consequences are unlikely to be nearly so dramatic. The status quo has a very high amount of inertia, and probably the “Verified” badge will remain the only visible difference, except for a few repo-specific esoteric workflows, like pushing trust verification into offline or sandboxed build systems. I do still think that there is some potential for nefariousness around the “unknown and unlimited” dimension of any future plans that might rely on verifying signed commits, but any flaws are likely to be subtle attack chains and not anything flashy and obvious.

But I think that one of the biggest problems in information security is a lack of threat modeling. We encrypt things, we sign things, we institute rotation policies and elaborate useless rules for passwords, because we are looking for a “best practice” that is going to save us from having to think about what our actual security problems are.

I think the actual harm of signing git commits is to perpetuate an engineering culture of unquestioningly cargo-culting sophisticated and complex tools like cryptographic signatures into new contexts where they have no use.

Just from a baseline utilitarian philosophical perspective, for a given action A, all else being equal, it’s always better not to do A, because taking an action always has some non-zero opportunity cost even if it is just the time taken to do it. Epsilon cost and zero benefit is still a net harm. This is even more true in the context of a complex system. Any action taken in response to a rule in a system is going to interact with all the other rules in that system. You have to pay complexity-rent on every new rule. So an apparently-useless embellishment like signing commits can have potentially far-reaching consequences in the future.

Git commit signing itself is not particularly consequential. I have probably spent more time writing this blog post than the sum total of all the time wasted by all programmers configuring their git clients to add useless signatures; even the relatively modest readership of this blog will likely transfer more data reading this post than all those signatures will take to transmit to the various git clients that will read them. If I just convince you not to sign your commits, I don’t think I’m coming out ahead in the felicific calculus here.

What I am actually trying to point out here is that it is useful to carefully consider how to avoid adding junk complexity to your systems. One area where junk tends to leak in to designs and to cultures particularly easily is in intimidating subjects like trust and safety, where it is easy to get anxious and convince ourselves that piling on more stuff is safer than leaving things simple.

If I can help you avoid adding even a little bit of unnecessary complexity, I think it will have been well worth the cost of the writing, and the reading.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics such as “What else should I not apply a cryptographic signature to?”.


  1. Yes yes I know about heartbleed and Bleichenbacher attacks and adoption of forward-secret ciphers and CRIME and BREACH and none of that is relevant here, okay? Jeez. 

  2. Do you see what I did there. 

by Glyph at January 25, 2024 12:29 AM

January 23, 2024

Glyph Lefkowitz

Your Text Editor (Probably) Isn’t Malware Any More

In 2015, I wrote one of my more popular blog posts, “Your Text Editor Is Malware”, about the sorry state of security in text editors in general, but particularly in Emacs and Vim.

It’s been nearly a decade now, so I thought I’d take a moment to survey the world of editor plugins and see where we are today. Mostly, this is to allay fears, since (in today’s landscape) that post is unreasonably alarmist and inaccurate, but people are still reading it.

  - Problem: vim.org is not available via https.
    Is it fixed? Yep! http://www.vim.org/ redirects to https://www.vim.org/ now.
  - Problem: Emacs's HTTP client doesn't verify certificates by default.
    Is it fixed? Mostly! The documentation is incorrect and there are some UI problems1, but it doesn’t blindly connect insecurely.
  - Problem: ELPA and MELPA supply plaintext-HTTP package sources.
    Is it fixed? Kinda. MELPA correctly responds to HTTP only with redirects to HTTPS, and ELPA at least offers HTTPS and uses HTTPS URLs exclusively in the default configuration.
  - Problem: You have to ship your own trust roots for Emacs.
    Is it fixed? Fixed! The default installation of Emacs on every platform I tried (including Windows) seems to be providing trust roots.
  - Problem: MELPA offers to install code off of a wiki.
    Is it fixed? Yes. Wiki packages were disabled entirely in 2018.

The big takeaway here is that the main issue of there being no security whatsoever on Emacs and Vim package installation and update has been fully corrected.

Where To Go Next?

Since I believe that post was fairly influential, in particular in getting MELPA to tighten up its security, let me take another big swing at a call to action here.

More modern editors have made greater strides towards security. VSCode, for example, has enabled the Chromium sandbox and added some level of process separation. Emacs has not done much here yet, but over the years it has consistently surprised me with its ability to catch up to its more modern competitors, so I hope it will surprise me here as well.

Even for VSCode, though, this sandbox still seems pretty permissive — plugins still seem to execute with the full trust of the editor itself — but it's a big step in the right direction. This is a much bigger task than just turning on HTTPS, but I really hope that editors start taking the threat of rogue editor packages seriously before attackers do, and finding ways to sandbox and limit the potential damage from third-party plugins, maybe taking a cue from other tools.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!


  1. the documentation still says “gnutls-verify-error” defaults to nil and that means no certificate verification, and maybe it does do that if you are using raw TLS connections, but in practice, url-retrieve-synchronously does appear to present an interactive warning before proceeding if the certificate is invalid or expired. It still has yet to catch up with web browsers from 2016, in that it just asks you “do you want to do this horribly dangerous thing? y/n” but that is a million times better than proceeding without user interaction. 

by Glyph at January 23, 2024 02:05 AM

January 22, 2024

Glyph Lefkowitz

Okay, I’m A Centrist I Guess

Today I saw a short YouTube video about “cozy games” and started writing a comment, then realized that this was somehow prompting me to write the most succinct summary of my own personal views on politics and economics that I have ever managed. So, here goes.

Apparently all I needed to trim down 50,000 words on my annoyance at how the term “capitalism” is frustratingly both a nexus for useful critique and also reductive thought-terminating clichés was to realize that Animal Crossing: New Horizons is closer to my views on political economy than anything Adam Smith or Karl Marx ever wrote.


Cozy games illustrate that the core mechanics of capitalism are fun and motivating, in a laboratory environment. It’s fun to gather resources, to improve one’s skills, to engage in mutually beneficial exchanges, to collect things, to decorate. It’s tremendously motivating. Even merely pretending to do those things can captivate huge amounts of our time and attention.

In real life, people need to be motivated to do stuff. Not because of some moral deficiency, but because in a large complex civilization it’s hard to tell what needs doing. By the time it’s widely visible to a population-level democratic consensus of non-experts that there is an unmet need — for example, trash piling up on the street everywhere indicating a need for garbage collection — that doesn’t mean “time to pick up some trash”, it means “the sanitation system has collapsed, you’re probably going to get cholera”. We need a system that can identify utility signals more granularly and quickly, towards the edges of the social graph. To allow person A to earn “value credits” of some kind for doing work that others find valuable, then trade those in to person B for labor which they find valuable, even if it is not clearly obvious to anyone else why person A wants that thing. Hence: money.

So, a market can provide an incentive structure that productively steers people towards needs, by aggregating small price signals in a distributed way, via the communication technology of “money”. Authoritarian communist states are famously bad at this, overproducing “necessary” goods in ways that can hold their own with the worst excesses of capitalists, while under-producing “luxury” goods that are politically seen as frivolous.

This is the kernel of truth around which the hardcore capitalist bootstrap grindset ideologues build their fabulist cinematic universe of cruelty. Markets are motivating, they reason, therefore we must worship the market as a god and obey its every whim. Markets can optimize some targets, therefore we must allow markets to optimize every target. Markets efficiently allocate resources, and people need resources to live, therefore anyone unable to secure resources in a market is undeserving of life. Thus we begin at “market economies provide some beneficial efficiencies” and after just a bit of hand-waving over some inconvenient details, we get to “thus, we must make the poor into a blood-sacrifice to Moloch, otherwise nobody will ever work, and we will all die, drowning in our own laziness”. “The cruelty is the point” is a convenient phrase, but among those with this worldview, the prosperity is the point; they just think the cruelty is the only engine that can possibly drive it.

Cozy games are therefore a centrist1 critique of capitalism. They present a world with the prosperity, but without the cruelty. More importantly though, by virtue of the fact that people actually play them in large numbers, they demonstrate that the cruelty is actually unnecessary.

You don’t need to play a cozy game. Tom Nook is not going to evict you from your real-life house if you don’t give him enough bells when it’s time to make rent. In fact, quite the opposite: you have to take time away from your real-life responsibilities and work, in order to make time for such a game. That is how motivating it is to engage with a market system in the abstract, with almost exclusively positive reinforcement.

What cozy games are showing us is that a world with tons of “free stuff” — universal basic income, universal health care, free education, free housing — will not result in a breakdown of our society because “no one wants to work”. People love to work.

If we can turn the market into a cozy game, with low stakes and a generous safety net, more people will engage with it, not fewer. People are not lazy; laziness does not exist. The motivation that people need from a market economy is not a constant looming threat of homelessness, starvation and death for themselves and their children, but a fun opportunity to get a five-star island rating.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor!


  1. Okay, I guess “far left” on the current US political compass, but in a just world socdems would be centrists. 

by Glyph at January 22, 2024 05:41 PM

January 02, 2024

Hynek Schlawack

How to Ditch Codecov for Python Projects

Codecov’s unreliability breaking CI on my open source projects has been a constant source of frustration for me for years. I have found a way to enforce coverage over a whole GitHub Actions build matrix that doesn’t rely on third-party services.

by Hynek Schlawack (hs@ox.cx) at January 02, 2024 12:00 AM

December 21, 2023

Hynek Schlawack

Don’t Start Pull Requests from Your Main Branch

When contributing to other users’ repositories, always start a new branch in your fork.

by Hynek Schlawack (hs@ox.cx) at December 21, 2023 05:00 AM

December 08, 2023

Glyph Lefkowitz

Annotated At Runtime

PEP 0593 added the ability to add arbitrary user-defined metadata to type annotations in Python.

At type-check time, such annotations are… inert. They don’t do anything. Annotated[int, X] just means int to the type-checker, regardless of the value of X. So the entire purpose of Annotated is to provide a run-time API to consume metadata, which integrates with the type checker syntactically, but does not otherwise disturb it.
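
A minimal sketch of that inertness in practice (the Seconds alias here is invented for the example):

from typing import Annotated

Seconds = Annotated[int, "a duration, in seconds"]

timeout: Seconds = 30   # to a type-checker, this is just an int
plain: int = timeout    # ...so this assignment is perfectly fine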

Yet, the documentation for this central purpose seems, while not exactly absent, oddly incomplete.

The PEP itself simply says:

A tool or library encountering an Annotated type can scan through the annotations to determine if they are of interest (e.g., using isinstance()).

But it’s not clear where “the annotations” are, given that the PEP’s entire “consuming annotations” section does not even mention the __metadata__ attribute where the annotation’s arguments go; that attribute is only documented in CPython’s own documentation, not in the PEP. The PEP’s list of examples just shows the repr() of the relevant type.
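
For concreteness, here is where those arguments actually end up at runtime (a quick illustrative sketch, not an example from the PEP):

from typing import Annotated, get_type_hints

def f(x: Annotated[int, "hello"]) -> None: ...

hints = get_type_hints(f, include_extras=True)
print(hints["x"])               # typing.Annotated[int, 'hello']
print(hints["x"].__metadata__)  # ('hello',)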

There’s also a bit of an open question of what, exactly, we are supposed to isinstance()-ing here. If we want to find arguments to Annotated, presumably we need to be able to detect if an annotation is an Annotated. But isinstance(Annotated[int, "hello"], Annotated) is both False at runtime, and also a type-checking error, that looks like this:

Argument 2 to "isinstance" has incompatible type "<typing special form>"; expected "_ClassInfo"

The actual type of these objects, typing._AnnotatedAlias, does not seem to have a publicly available or documented alias, so that seems like the wrong route too.

Now, it certainly works to escape-hatch your way out of all of this with an Any, build some version-specific special-case hacks to dig around in the relevant namespaces, access __metadata__ and call it a day. But this solution is … unsatisfying.

What are you looking for?

Upon encountering these quirks, it is understandable to want to simply ask the question “is this annotation that I’m looking at an Annotated?” and to be frustrated that it seems so obscure to straightforwardly get an answer to that question without disabling all type-checking in your meta-programming code.

However, I think that this is a slight misframing of the problem. Code that is inspecting parameters for an annotation is going to do something with that annotation, which means that it must necessarily be looking for a specific set of annotations. Therefore the thing we want to pass to isinstance is not some obscure part of the annotations’ internals, but the actual interesting annotation type from your framework or application.

When consuming an Annotated parameter, there are 3 things you probably want to know:

  1. What was the parameter itself? (type: The type you passed in.)
  2. What was the name of the annotated object (i.e.: the parameter name, the attribute name) being passed the parameter? (type: str)
  3. What was the actual type being annotated? (type: type)

And the things that we have are the type of the Annotated we’re querying for, and the object with annotations we are interrogating. So that gives us this function signature:

def annotated_by(
    annotated: object,
    kind: type[T],
) -> Iterable[tuple[str, T, type]]:
    ...

To extract this information, all we need are get_args and get_type_hints; no need for __metadata__ or get_origin or any other metaprogramming. Here’s a recipe:

from collections.abc import Iterable
from typing import TypeVar, get_args, get_type_hints

T = TypeVar("T")

def annotated_by(
    annotated: object,
    kind: type[T],
) -> Iterable[tuple[str, T, type]]:
    # include_extras=True keeps the Annotated[...] wrapper instead of
    # collapsing it down to the underlying type.
    for k, v in get_type_hints(annotated, include_extras=True).items():
        all_args = get_args(v)
        if not all_args:
            continue
        actual, *rest = all_args
        for arg in rest:
            if isinstance(arg, kind):
                yield k, arg, actual

It might seem a little odd to be blindly assuming that get_args(...)[0] will always be the relevant type, when that is not true of unions or generics. Note, however, that we are only yielding results when we have found the instance type in the argument list; our arbitrary user-defined instance isn’t valid as a type annotation argument in any other context. It can’t be part of a Union or a Generic, so we can rely on it to be an Annotated argument, and from there, we can make that assumption about the format of get_args(...).
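
For concreteness, a quick sketch of what get_args hands back in the two cases the recipe distinguishes:

from typing import Annotated, get_args

print(get_args(int))                      # (): a bare type has no arguments, so the recipe skips it
print(get_args(Annotated[int, "hello"]))  # (int, 'hello'): the real type first, then the metadata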

This can give us back the annotations that we’re looking for in a handy format that’s easy to consume. Here’s a quick example of how you might use it:

from dataclasses import dataclass
from typing import Annotated

@dataclass
class AnAnnotation:
    name: str

def a_function(
    a: str,
    b: Annotated[int, AnAnnotation("b")],
    c: Annotated[float, AnAnnotation("c")],
) -> None:
    ...

print(list(annotated_by(a_function, AnAnnotation)))

# [('b', AnAnnotation(name='b'), <class 'int'>),
#  ('c', AnAnnotation(name='c'), <class 'float'>)]

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like “how do I do Python metaprogramming, but, like, not super janky”.

by Glyph at December 08, 2023 01:56 AM

December 06, 2023

Glyph Lefkowitz

Safer, Not Later

Facebook — and by extension, most of Silicon Valley — rightly gets a lot of shit for its old motto, “Move Fast and Break Things”.

As a general principle for living your life, it is obviously terrible advice, and it leads to a lot of the horrific outcomes of Facebook’s business.

I don’t want to be an apologist for Facebook. I also do not want to excuse the worldview that leads to those kinds of outcomes. However, I do want to try to help laypeople understand what software engineers—particularly those situated at the point in history where this motto became popular—actually meant by it. I would like more people in the general public to understand why, to engineers, it was supposed to mean roughly the same thing as Facebook’s newer, goofier-sounding “Move fast with stable infrastructure”.

Move Slow

In the bad old days, circa 2005, two worlds within the software industry were colliding.

The old world was the world of integrated hardware/software companies, like IBM and Apple, and shrink-wrapped software companies like Microsoft and WordPerfect. The new world was software-as-a-service companies like Google, and, yes, Facebook.

In the old world, you delivered software in a physical, shrink-wrapped box, on a yearly release cycle. If you were really aggressive you might ship updates as often as quarterly, but faster than that and your physical shipping infrastructure would not be able to keep pace with new versions. As such, development could proceed in long phases based on those schedules.

In practice what this meant was that in the old world, when development began on a new version, programmers would go absolutely wild adding incredibly buggy, experimental code to see what sorts of things might be possible in a new version, then slowly transition to less coding and more testing, eventually settling into a testing and bug-fixing mode in the last few months before the release.

This is where the idea of “alpha” (development testing) and “beta” (user testing) versions came from. Software in that initial surge of unstable development was extremely likely to malfunction or even crash. Everyone understood that. How could it be otherwise? In an alpha test, the engineers hadn’t even started bug-fixing yet!

In the new world, the idea of a 6-month-long “beta test” was incoherent. If your software was a website, you shipped it to users every time they hit “refresh”. The software was running 24/7, on hardware that you controlled. You could be adding features at every minute of every day. And, now that this was possible, you needed to be adding those features, or your users would get bored and leave for your competitors, who would do it.

But this came along with a new attitude towards quality and reliability. If you needed to ship a feature within 24 hours, you couldn’t write a buggy version that crashed all the time, see how your carefully-selected group of users used it, collect crash reports, fix all the bugs, have a feature-freeze and do nothing but fix bugs for a few months. You needed to be able to ship a stable version of your software on Monday and then have another stable version on Tuesday.

To support this novel sort of development workflow, the industry developed new technologies. I am tempted to tell you about them all. Unit testing, continuous integration servers, error telemetry, system monitoring dashboards, feature flags... this is where a lot of my personal expertise lies. I was very much on the front lines of the “new world” in this conflict, trying to move companies to shorter and shorter development cycles, and move away from the legacy worldview of Big Release Day engineering.

Old habits die hard, though. Most engineers at this point were trained in a world where they had months of continuous quality assurance processes after writing their first rough draft. Such engineers feel understandably nervous about being required to ship their probably-buggy code to paying customers every day. So they would try to slow things down.

Of course, when one is deploying all the time, all other things being equal, it’s easy to ship a show-stopping bug to customers. Organizations would do this, and they’d get burned. And when they’d get burned, they would introduce Processes to slow things down. Some of these would look like:

  1. Let’s keep a special version of our code set aside for testing, and then we’ll test that for a few weeks before sending it to users.
  2. The heads of every department need to sign-off on every deployed version, so everyone needs to spend a day writing up an explanation of their changes.
  3. QA should sign off too, so let’s have an extensive sign-off process where each individual tester fills out a sign-off form.

Then there’s my favorite version of this pattern, where management decides that deploys are inherently dangerous, and everyone should probably just stop doing them. It typically proceeds in stages:

  1. Let’s have a deploy freeze, and not deploy on Fridays; don’t want to mess up the weekend debugging an outage.
  2. Actually, let’s extend that freeze for all of December, we don’t want to mess up the holiday shopping season.
  3. Actually why not have the freeze extend into the end of November? Don’t want to mess with Thanksgiving and the Black Friday weekend.
  4. Some of our customers are in India, and Diwali’s also a big deal. Why not extend the freeze from the end of October?
  5. But, come to think of it, we do a fair amount of seasonal sales for Halloween too. How about no deployments from October 10 onward?
  6. You know what, sometimes people like to use our shop for Valentine’s day too. Let’s just never deploy again.

This same anti-pattern can repeat itself with an endlessly proliferating list of “environments”, whose main role ends up being to ensure that no code ever makes it to actual users.

… and break things anyway

As you may have begun to suspect, there are a few problems with this style of software development.

Even back in the bad old days of the 90s when you had to ship disks in boxes, this methodology contained within itself the seeds of its own destruction. As Joel Spolsky memorably put it, Microsoft discovered that this idea that you could introduce a ton of bugs and then just fix them later came along with some massive disadvantages:

The very first version of Microsoft Word for Windows was considered a “death march” project. It took forever. It kept slipping. The whole team was working ridiculous hours, the project was delayed again, and again, and again, and the stress was incredible. [...] The story goes that one programmer, who had to write the code to calculate the height of a line of text, simply wrote “return 12;” and waited for the bug report to come in [...]. The schedule was merely a checklist of features waiting to be turned into bugs. In the post-mortem, this was referred to as “infinite defects methodology”.

Which led them to what is perhaps the most ironclad law of software engineering:

In general, the longer you wait before fixing a bug, the costlier (in time and money) it is to fix.

A corollary to this is that the longer you wait to discover a bug, the costlier it is to fix.

Some bugs can be found by code review. So you should do code review. Some bugs can be found by automated tests. So you should do automated testing. Some bugs will be found by monitoring dashboards, so you should have monitoring dashboards.

So why not move fast?

But here is where Facebook’s old motto comes in to play. All of those principles above are true, but here are two more things that are true:

  1. No matter how much code review, automated testing, and monitoring you have, some bugs can only be found by users interacting with your software.
  2. No bugs can be found merely by slowing down and putting the deploy off another day.

Once you have made the process of releasing software to users sufficiently safe that the potential damage of any given deployment can be reliably limited, it is always best to release your changes to users as quickly as possible.

More importantly, as an engineer, you will naturally have an inherent fear of breaking things. If you make no changes, you cannot be blamed for whatever goes wrong. Particularly if you grew up in the Old World, there is an ever-present temptation to slow down, to avoid shipping, to hold back your changes, just in case.

You will want to move slow, to avoid breaking things. Better to do nothing, to be useless, than to do harm.

For all its faults as an organization, Facebook did, and does, have some excellent infrastructure to avoid breaking their software systems in response to features being deployed to production. In that sense, they’d already done the work to avoid the “harm” of an individual engineer’s changes. If future work needed to be performed to increase safety, then that work should be done by the infrastructure team to make things safer, not by every other engineer slowing down.

The problem is that slowing down is not actually value neutral. To quote myself here:

If you can’t ship a feature, you can’t fix a bug.

When you slow down just for the sake of slowing down, you create more problems.

The first problem that you create is smashing together far too many changes at once.

You’ve got a development team. Every engineer on that team is adding features at some rate. You want them to be doing that work. Necessarily, they’re all integrating them into the codebase to be deployed whenever the next deployment happens.

If a problem occurs with one of those changes, and you want to quickly know which change caused that problem, ideally you want to compare two versions of the software with the smallest number of changes possible between them. Ideally, every individual change would be released on its own, so you can see differences in behavior between versions which contain one change each, not a gigantic avalanche of changes where any one of a hundred different features might be the culprit.

If you slow down for the sake of slowing down, you also create a process that cannot respond to failures of the existing code.

I’ve been writing thus far as if a system in a steady state is inherently fine, and each change carries the possibility of benefit but also the risk of failure. This is not always true. Changes don’t just occur in your software. They can happen in the world as well, and your software needs to be able to respond to them.

Back to that holiday shopping season example from earlier: if your deploy freeze prevents all deployments during the holiday season to prevent breakages, what happens when your small but growing e-commerce site encounters a catastrophic bug that has always been there, but only occurs when you have more than 10,000 concurrent users? The breakage is coming from new, never-before-seen levels of traffic. The breakage is coming from your success, not your code. You’d better be able to ship a fix for that bug real fast, because your only other option to a fast turn-around bug-fix is shutting down the site entirely.

And if you see this failure for the first time on Black Friday, that is not the moment where you want to suddenly develop a new process for deploying on Friday. The only way to ensure that shipping that fix is easy is to ensure that shipping any fix is easy. That it’s a thing your whole team does quickly, all the time.

The motto “Move Fast And Break Things” caught on with a lot of the rest of Silicon Valley because we are all familiar with this toxic, paralyzing fear.

After we have the safety mechanisms in place to make changes as safe as they can be, we just need to push through that fear and accept that things might break, and that that is OK.

Some Important Words are Missing

The motto has an implicit preamble: “Once you have done the work to make breaking things safe enough, then you should move fast and break things”.

When you are in a conflict about whether to “go fast” or “go slow”, the motto is not supposed to be telling you that the answer is an unqualified “GOTTA GO FAST”. Rather, it is an exhortation to take a beat and to go through a process of interrogating your motivation for slowing down. There are three possible things that a person saying “slow down” could mean about making a change:

  1. It is broken in a way you already understand. If this is the problem, then you should not make the change, because you know it’s not ready. If you already know it’s broken, then the change simply isn’t done. Finish the work, and ship it to users when it’s finished.
  2. It is risky in a way that you don’t have a way to defend against. As far as you know, the change works, but there’s a risk embedded in it that you don’t have any safety tools to deal with. If this is the issue, then what you should do is pause working on this change, and build the safety first.
  3. It is making you nervous in a way you can’t articulate. If you can’t describe a known defect as in point 1, and you can’t outline an improved safety control as in point 2, then this is the time to let go, accept that you might break something, and move fast.

The implied context for “move fast and break things” is only that third condition. If you’ve already built all the infrastructure that you can think of to build, and you’ve already fixed all the bugs in the change that you need to fix, then any further delay will not serve you; do not delay any further.

Unfortunately, as you probably already know, that preamble usually goes unspoken.

This motto did a lot of good in its appropriate context, at its appropriate time. It’s still a useful heuristic for engineers, if the appropriate context is generally understood within the conversation where it is used.

However, it has clearly been taken to mean a lot of significantly more damaging things.

Purely from an engineering perspective, it has been reasonably successful. It’s less and less common to see people in the industry pushing back against tight deployment cycles. It’s also less common to see the basic safety mechanisms (version control, continuous integration, unit testing) get ignored. And many ex-Facebook engineers have clearly used this motto with the understanding I’ve described here.

Even in the narrow domain of software engineering it is misused. I’ve seen it used to argue a project didn’t need tests; that a deploy could be forced through a safety process; that users did not need to be informed of a change that could potentially impact them personally.

Outside that domain, it’s far worse. It’s generally understood to mean that no safety mechanisms are required at all, that any change a software company wants to make is inherently justified because it’s OK to “move fast”. You can see this interpretation in the way that it has leaked out of Facebook’s engineering culture and suffused its entire management strategy, blundering through market after market and issue after issue, making catastrophic mistakes, making a perfunctory apology and moving on to the next massive harm.

In the decade since it has been retired as Facebook’s official motto, it has been used to defend some truly horrific abuses within the tech industry. You only need to visit the orange website to see it still being used this way.

Even at its best, “move fast and break things” is an engineering heuristic, it is not an ethical principle. Even within the context I’ve described, it’s only okay to move fast and break things. It is never okay to move fast and harm people.

So, while I do think that it is broadly misunderstood by the public, it’s still not a thing I’d ever say again. Instead, I propose this:

Make it safer, don’t make it later.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like “how do I make changes to my codebase, but, like, good ones”.

by Glyph at December 06, 2023 08:01 PM

November 20, 2023

Jp Calderone

First Cabal code contribution

introduction

A few weeks ago Shae Erisson and I took a look at Cabal to try to find a small contribution we could make. And we managed it! Here’s the story.

First, an introduction. Cabal is a system for building and packaging Haskell libraries and programs. A few years ago I started learning Haskell in earnest. After a little time spent with stack, a similar tool, I’ve more recently switched to using Cabal and have gotten a great deal of value from it.

I did contribute a trivial documentation improvement earlier this year but before the experience related here I hadn’t looked at the code.

Shae and I had a hunch that we might find some good beginner tasks in the Cabal issue tracker so we started there.

I had previously noticed that the Cabal project uses a label for supposedly “easy” issues. Shae also pointed out a group of labels related to “newcomers”. We found what appeared to be a tiered system with issues grouped by the anticipated size of the change. For example, there is a label for “one line change” issues, another for “small feature”, and a couple more in between. We looked at the “one line change” issues first, but the correct one line change was not obvious to us. These issues have a lot of discussion on them and we didn’t want to try to figure out how to resolve the open questions. We decided to look at a more substantial issue, hoping that there would be less uncertainty around it.

The candidate we found, Use ProjectFlags in CmdClean, was labeled with “one file change” and looked promising. It didn’t have a description (presumably the reporter felt the summary made the task sufficiently obvious) but there was a promising hint from another contributor and a link to a very similar change made in another part of the codebase.

dev env

While we were looking around, Shae was getting a development environment set up. The Cabal project has a nice contributor guideline linked from the end of their README that we found without too much trouble. Since Shae and I both use Nix, Shae took the Nix-oriented route and got a development environment using:

nix develop github:yvan-sraka/cabal.nix

(I’ll replicate the disclaimer from the contributor that this is “supported solely by the community”.)

This took around 30 minutes to download and build (and Shae has pretty fast hardware). A nice improvement here might be to set up a Nix build cache for this development shell for everyone who wants to do Nix-based development on Cabal to share. I expect this would take the first-run setup time from 30 minutes to just one or two at most (depending mostly on download bandwidth available).

By the time we felt like we had a handle on what code changes to make and had found some of the pieces in a checkout of the project the development environment was (almost) ready. The change itself was quite straightforward.

the task

Our objective was to remove some implementation duplication in support of the --project-dir and --project-file options. Several cabal commands accept these options and some of them re-implement them where they could instead be sharing code.

We took the existing data type representing the options for the cabal clean command:

data CleanFlags = CleanFlags
  { cleanSaveConfig :: Flag Bool
  , cleanVerbosity :: Flag Verbosity
  , cleanDistDir :: Flag FilePath
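    -- (the two fields below are the ones that duplicate ProjectFlags)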
  , cleanProjectDir :: Flag FilePath
  , cleanProjectFile :: Flag FilePath
  }
  deriving (Eq)

And removed the final two fields which exactly overlap with the fields defined by ProjectFlags. Then we changed the type of cleanCommand. It was:

cleanCommand :: CommandUI CleanFlags

And now it is:

cleanCommand :: CommandUI (ProjectFlags, CleanFlags)

Finally, we updated nearby code to reflect the type change. liftOptionL made this straightforward since it let us lift the existing operations on CleanFlags or ProjectFlags into the tuple.

One minor wrinkle we encountered is that ProjectFlags isn’t a perfect fit for the clean command. Specifically, ProjectFlags includes support for requesting that a cabal.project file be ignored, but this option doesn’t make sense for the clean command. This was easy to handle because someone had already noticed this and written a helper to remove the extra option where it is not wanted. However, this might suggest that the factoring of the types could be further improved so that it’s easier to add only the appropriate options.

At this point we thought we were done. Cabal CI proved we were mistaken. The details of the failure were a little tricky to track down (as is often the case for many projects, alas). Eventually we dug the following error out of the failing job’s logs:

Error: cabal: option `--project-dir' is ambiguous; could be one of:
  --project-dir=DIR Set the path of the project directory
  --project-dir=DIR Set the path of the project directory

Then we dug the test command out of the GitHub Actions configuration to try to reproduce the failure locally. Unfortunately, when we ran the command we got far more test failures than CI had. We decided to build cabal and test the case manually. This went smoothly enough and we were quickly able to find the problem and manually confirm the fix (which was simply to remove the field for --project-dir in a place we missed). GitHub Actions rewarded us for this change with a green checkmark.

the rest

At this point we waited for a code review on the PR. We quickly received two and addressed the minor point made (we had used NamedFieldPuns, of which I am a fan, but the module already used RecordWildCards and it’s a bit silly to use both of these so close together).

Afterwards, the PR was approved and we added the “squash+merge me” label to trigger some automation to merge the changes (which it did after a short delay to give other contributors a chance to chime in if they wished).

In the end the change was +41 / -42 lines. For a refactoring that reduced duplication this isn’t quite as much of a reduction as I was hoping for. The explanation ends up being that importing the necessary types and helper functions almost exactly balanced out the deleted duplicate fields. Considering the very small scope of this refactoring this ends up not being very surprising.

Thanks to Shae for working on this with me, to the Cabal maintainers who reviewed our PR, to all the Cabal contributors who made such a great tool, and to the “Cabal and Hackage” room on matrix.org for answering random questions.

by Jean-Paul Calderone at November 20, 2023 12:00 AM

November 19, 2023

Jp Calderone

Good News Everyone

I’m turning my blog back on! I know you’re so excited.

I’m going to try to gather up my older writing and host it here. Also, I promise to try to consider making this a little prettier.

Meanwhile, check out my new post about contributing to Cabal!

by Jean-Paul Calderone at November 19, 2023 12:00 AM

November 09, 2023

Glyph Lefkowitz

iOS Mail To Omnifocus Task

One of my longest-running frustrations with iOS is that the default mail app does not have a “share” action, making it impossible to do the one thing that a mail client needs to be able to do for me, which is to selectively turn messages into tasks. This deficiency has multiple components, which together make it difficult to work around:

  1. There is no UI to “share” a message from within the mail app, so you can’t share it to OmniFocus or a shortcut.
  2. There is no way to query for a mail message in Shortcuts and then get some kind of “message” object, so there’s no way you can start in Shortcuts and manually invoke something.
  3. There’s no such thing as MailKit on iOS, so third-party apps can’t query your mail database either.

To work around this, I’ve long subscribed to the “AirMail” app, which has a “message to omnifocus” action but is otherwise kind of a buggy mess.

But today, I read that you can set up an iPad in a split-screen view and drag messages from the built-in Mail app’s message list view into the OmniFocus inbox, and I extrapolated, and discovered how to get Mail-message-to-OmniFocus-task working on an iPhone.

I’m thrilled that this functionality exists, but it is a bit of a masterclass in how to get a terrible UX out of a series of decisions that were probably locally reasonable. So, without any further ado, here’s how you do it:

  1. Open up mail.app, and find the message you want to share in the message list. Here, you have two choices:
    1. With one finger, press and hold on a message until you feel a haptic. When you feel the haptic, immediately move your finger a little bit, before you see the preview come up. This lets you operate directly from the message list, but is very fiddly.
    2. Tap the message to open the detail view, then press and hold on the sent date in the top right.
  2. Continue holding down with your first finger. With a second finger, swipe up from the bottom to enter the multitasking view, or to go back to your home screen. While holding your first finger in place, either switch to or launch OmniFocus.
  3. With your second finger, navigate to the Inbox, drag your first finger to the bottom of the list, and release it. Voila! You should have a task with a brief summary and a link back to the message.
  4. Swipe up from the bottom to switch back to Mail, then archive the message.

by Glyph at November 09, 2023 05:48 AM

October 06, 2023

Hynek Schlawack

How to Stop macOS from Stealing Focus after Switching Apps Across Desktops

For years, I’ve had a macOS bug that occurred randomly and that I could only fix by rebooting or logging out. The Internet is full of similar problems, but I never found a solution to mine, and my pleas for help on Twitter went unanswered. Now, I have found a workaround myself.

by Hynek Schlawack (hs@ox.cx) at October 06, 2023 12:00 AM

August 29, 2023

Glyph Lefkowitz

Get Your Mac Python From Python.org

One of the most unfortunate things about learning Python is that there are so many different ways to get it installed, and you need to choose one before you even begin. The differences can also be subtle and require technical depth to truly understand, which you don’t have yet.1 Even experts can be missing information about which one to use and why.

There are perhaps more of these on macOS than on any other platform, and that’s the platform I primarily use these days. If you’re using macOS, I’d like to make it simple for you.

The One You Probably Want: Python.org

My recommendation is to use an official build from python.org.

I recommend the official installer for most uses, and if you were just looking for an answer about which one to use, you can stop reading now. Thanks for your time, and have fun with Python.

If you want to get into the nerdy nuances, read on.

For starters, the official builds are compiled in such a way that they will run on a wide range of macs, both new and old. They are universal2 binaries, unlike some other builds, which means you can distribute them as part of a mac application.

The main advantage that the Python.org build has, though, is very subtle, and not any concrete technical detail. It’s a social, structural issue: the Python.org builds are produced by the people who make CPython, who are more likely to know about the nuances of what options it can be built with, and who are more likely to adopt their own improvements as they are released. Third party builders who are focused on a more niche use-case may not realize that there are build options or environment requirements that could make their Pythons better.

I’m being a bit vague deliberately here, because at any particular moment in time, this may not be an advantage at all. Third party integrators generally catch up to changes, and eventually achieve parity. But for a specific upcoming example, PEP 703 will have extensive build-time implications, and I would trust the python.org team to be keeping pace with all those subtle details immediately as releases happen.

(And Auto-Update It)

The one downside of the official build is that you have to return to the website to check for security updates. Unlike other options described below, there’s no built-in auto-updater for security patches. If you follow the normal process, you still have to click around in a GUI installer to update it once you’ve clicked around on the website to get the file.

I have written a micro-tool to address this: you can pip install mopup and then periodically run mopup, and it will install any security updates for your current version of Python, with no interaction besides entering your admin password.

(And Always Use Virtual Environments)

Once you have installed Python from python.org, never pip install anything globally into that Python, even using the --user flag. Always, always use a virtual environment of some kind. In fact, I recommend configuring it so that it is not even possible to do so, by putting this in your ~/.pip/pip.conf:

[global]
require-virtualenv = true

This will avoid damaging your Python installation by polluting it with libraries that you install and then forget about. Any time you need to do something new, you should make a fresh virtual environment, and then you don’t have to worry about library conflicts between different projects that you may work on.
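
For instance, here is a minimal sketch of that workflow using only the standard library; the project and package names are invented for illustration, and in practice you would more likely just run python -m venv from your shell:

import subprocess
import venv

# Create a fresh, isolated environment for a hypothetical new project.
venv.create("scratch-project/.venv", with_pip=True)

# Install into that environment's own pip, never into the global Python.
subprocess.run(
    ["scratch-project/.venv/bin/pip", "install", "httpx"],
    check=True,
)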

If you need to install tools written in Python, don’t manage those environments directly; install the tools with pipx. By using pipx, you allow each tool to maintain its own set of dependencies, which means you don’t need to worry about whether two tools you use have conflicting version requirements, or whether the tools conflict with your own code.2

The Others

There are, of course, several other ways to install Python, which you probably don’t want to use.

The One For Running Other People’s Code, Not Yours: Homebrew

In general, Homebrew Python is not for you.

The purpose of Homebrew’s python is to support applications packaged within Homebrew, which have all been tested against the versions of python libraries also packaged within Homebrew. It may upgrade without warning on just about any brew operation, and you can’t downgrade it without breaking other parts of your install.

Specifically for creating redistributable binaries, Homebrew python is typically compiled only for your specific architecture, and thus will not create binaries that can be used on Intel macs if you have an Apple Silicon machine, or will run slower on Apple Silicon machines if you have an Intel mac. Also, if there are prebuilt wheels which don’t yet exist for Apple Silicon, you cannot easily arch -x86_64 python ... and just install them; you have to install a whole second copy of Homebrew in a different location, which is a headache.

In other words, homebrew is an alternative to pipx, not to Python. For that purpose, it’s fine.

The One For When You Need 20 Different Pythons For Debugging: pyenv

Like Homebrew, pyenv will default to building a single-architecture binary. Even worse, it will not build a Framework build of Python, which means several things related to being a mac app just won’t work properly. Remember those build-time esoterica that the core team is on top of but third parties may not be? “Should I use a Framework build” is an enduring piece of said esoterica.

The purpose of pyenv is to provide a large matrix of different, precise legacy versions of python for library authors to test compatibility against those older Pythons. If you need to do that, particularly if you work on different projects where you may need to install some random super-old version of Python that you would not normally use to test something on, then pyenv is great. But if you only need one version of Python, it’s not a great way to get it.

The Other One That’s Exactly Like pyenv: asdf-python

The issues are exactly the same as with pyenv, as the tool is a straightforward alternative for the exact same purpose. It’s a bit less focused on Python than pyenv, which has pros and cons; it has broader community support, but it’s less specifically tuned for Python. But a comparative exploration of their differences is beyond the scope of this post.

The Built-In One That Isn’t Really Built-In: /usr/bin/python3

There is a binary in /usr/bin/python3 which might seem like an appealing option — it comes from Apple, after all! — but it is provided as a developer tool, for running things like build scripts. It isn’t for building applications with.

That binary is not a “system python”; the thing in the operating system itself is only a shim, which will determine if you have development tools, and shell out to a tool that will download the development tools for you if you don’t. There is unfortunately a lot of folk wisdom among older Python programmers who remember a time when Apple did actually package an antediluvian version of the interpreter that seemed to be supported forever, and who might suggest it for things intended to be self-contained or to have minimal bundled dependencies, but this is exactly the reason that Apple stopped shipping that.

If you use this option, it means that your Python might come from the Xcode Command Line Tools, or the Xcode application, depending on the state of xcode-select in your current environment and the order in which you installed them.

Upgrading Xcode via the app store or a developer.apple.com manual download — or its command-line tools, which are installed separately, and updated via the “settings” application in a completely different workflow — therefore also upgrades your version of Python without an easy way to downgrade, unless you manage multiple Xcode installs. Which, at 12G per install, is probably not an appealing option.3

The One With The Data And The Science: Conda

As someone with a limited understanding of data science and scientific computing, I’m not really qualified to go into the detailed pros and cons here, but luckily, Itamar Turner-Trauring is, and he did.

My one coda to his detailed exploration here is that while there are good reasons to want to use Anaconda — particularly if you are managing a data-science workload across multiple platforms and you want a consistent, holistic development experience across a large team supporting heterogeneous platforms — some people will tell you that you need Conda to get your libraries if you want to do data science or numerical work with Python at all, because Conda is how you install those libraries, and otherwise things just won’t work.

This is a historical artifact that is no longer true. Over the last decade, Python Wheels have been comprehensively adopted across the Python community, and almost every popular library with an extension module ships pre-built binaries to multiple platforms. There may be some libraries that only have prebuilt binaries for conda, but they are sufficiently specialized that I don’t know what they are.

The One for Being Consistent With Your Cloud Hosting

Another way to run Python on macOS is to not run it on macOS, but to get another computer inside your computer that isn’t running macOS, and instead run Python inside that, usually using Docker.4

There are good reasons to want to use a containerized configuration for development, but they start to drift away from the point of this post and into more complicated stuff about how to get your Python into the cloud.

So rather than saying “use Python.org native Python instead of Docker”, I am specifically not covering Docker as a replacement for a native mac Python here because in a lot of cases, it can’t be one. Many tools require native mac facilities like displaying GUIs or scripting applications, or want to be able to take a path name to a file without elaborate pre-work to allow the program to access it.

Summary

If you didn’t want to read all of that, here’s the summary.

If you use a mac:

  1. Get your Python interpreter from python.org.
  2. Update it with mopup so you don’t fall behind on security updates.
  3. Always use venvs for specific projects, never pip install anything directly.
  4. Use pipx to manage your Python applications so you don’t have to worry about dependency conflicts.
  5. Don’t worry if Homebrew also installs a python executable, but don’t use it for your own stuff.
  6. You might need a different Python interpreter if you have any specialized requirements, but you’ll probably know if you do.

Acknowledgements

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like “which Python is the really good one”.


  1. If somebody sent you this article because you’re trying to get into Python and you got stuck on this point, let me first reassure you that all the information about this really is highly complex and confusing; if you’re feeling overwhelmed, that’s normal. But the good news is that you can really ignore most of it. Just read the next little bit. 

  2. Some tools need to be installed in the same environment as the code they’re operating on, so you may want to have multiple installs of, for example, Mypy, PuDB, or sphinx. But for things that just do something useful but don’t need to load your code — such as this small selection of examples from my own collection: certbot, pgcli, asciinema, gister, speedtest-cli — pipx means you won’t have to debug wonky dependency interactions. 

  3. The command-line tools are a lot smaller, but cannot have multiple versions installed at once, and are updated through a different mechanism. There are odd little details like the fact that the default bundle identifier for the framework differs, being either org.python.python or com.apple.python3. They’re generally different in a bunch of small subtle ways that don’t really matter in 95% of cases until they suddenly matter a lot in that last 5%. 

  4. Or minikube, or podman, or colima or whatever I guess, there’s way too many of these containerization Pokémon running around for me to keep track of them all these days. 

by Glyph at August 29, 2023 08:17 PM

July 20, 2023

Glyph Lefkowitz

Bilithification

Several years ago at O’Reilly’s Software Architecture conference, within a comprehensive talk on refactoring, “Technical Debt: A Masterclass”, r0ml1 presented a concept that I think should be highlighted.

If you have access to O’Reilly Safari, I think the video is available there, or you can get the slides here. It’s well worth watching in its own right. The talk contains a lot of hard-won wisdom from a decades-long career, but in slides 75-87, he articulates a concept that I believe resolves the perennial pendulum-swing between microservices and monoliths that we see in the Software as a Service world.

I will refer to this concept as “the bilithification strategy”.

Background

Personally, I have long been a microservice skeptic. I would generally articulate this skepticism in terms of “YAGNI”.

Here’s the way I would advise people asking about microservices before encountering this concept:

Microservices are often adopted by small teams due to their advertised benefits. Advocates from very large organizations—ones that have been very successful with microservices—frequently give talks claiming that microservices are more modular, more scalable, and more fault-tolerant than their monolithic progenitors. But these teams rarely appreciate the costs, particularly the costs for smaller orgs. Specifically, there is a fixed marginal operational cost to each new service, and a fairly large fixed operational overhead to the infrastructure for an organization deploying microservices at all.

With a large enough team, the operational cost is easy to absorb. As the overhead is fixed, it trends towards zero as your total team size and system complexity trend towards infinity. Also, in very large teams, the enforced isolation of components in separate services reduces complexity. It does so by intentionally causing the software architecture to mirror the organizational structure of the team that deploys it. This — at the cost of increased operational overhead and decreased efficiency — allows independent parts of the organization to make progress independently, without blocking on each other. Therefore, in smaller teams, as you’re building, you should bias towards building a monolith until the complexity costs of the monolith become apparent. Then you should build the infrastructure to switch to microservices.

I still stand by all of this. However, it’s incomplete.

What does it mean to “switch to microservices”?

The biggest thing that this advice leaves out is a clear understanding of the “micro” in “microservice”. In this framing, I’m implicitly understanding “micro” services to be services that are too small — or at least, too small for your team. But if they do work for large organizations, then at some point, you need to have them. This leaves open several questions:

  • What size is the right size for a service?
  • When should you split your monolith up into smaller services?
  • Wait, how do you even measure “size” of a service? Lines of code? Gigabytes of memory? Number of team members?

In a specific situation I could probably look at these questions and make suggestions as to the appropriate course of action, but that’s based largely on vibes. There’s just a lot of drawing on complex experiences, not a repeatable pattern that a team could apply on their own.

We can be clear that you should always start with a monolith. But what should you do when that’s no longer working? How do you even tell when it’s no longer working?

Bilithification

Every codebase begins as a monolith. That is a single (mono) rock (lith). Here’s what it looks like.

a circle with the word “monolith” on it

Let’s say that the monolith, and the attendant team, is getting big enough that we’re beginning to consider microservices. We might now ask, “what is the appropriate number of services to split the monolith into?” and that could provoke endless debate even among a team with total consensus that it might need to be split into some number of services.

Rather than beginning with the premise that there is a correct number, we may observe instead that splitting the service into N services, where N is more than one, may be accomplished by splitting the service in half N-1 times.

So let’s bi (two) lithify (rock) this monolith, and take it from 1 to 2 rocks.

The task of splitting the service into two parts ought to be a manageable amount of work — two is a definitively finite number, as opposed to the infinite point-cloud of “microservices”. Thus, we should search, first, for a single logical seam along which we might cleave the monolith.

a circle with the word “monolith” on it and a line through it

In many cases—as in the specific case that r0ml gave—the easiest way to articulate a boundary between two parts of a piece of software is to conceptualize a “frontend” and a “backend”. In the absence of any other clear boundary, the question “does this functionality belong in the front end or the back end” can serve as a simple razor for separating the service.

Remember: the reason we’re splitting this software up is because we are also splitting the team up. You need to think about this in terms of people as well as in terms of functionality. What division between halves would most reduce the number of lines of communication, to reduce the quadratic increase in required communication relationships that comes along with the linear increase in team size? Can you identify two groups who need to talk amongst themselves, but do not need to talk with all of each other?2
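
To make that arithmetic concrete, here is a tiny sketch, with hypothetical team sizes, of how pairwise communication lines grow quadratically with team size, and how one clean split cuts them down:

def communication_lines(people: int) -> int:
    # every pair of people is a potential required line of communication
    return people * (people - 1) // 2

one_team_of_twelve = communication_lines(12)   # 66 potential lines
two_teams_of_six = 2 * communication_lines(6)  # 30 lines, plus a handful of cross-team contacts
print(one_team_of_twelve, two_teams_of_six)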

two circles with the word “hemilith” on them and a double-headed arrow between them

Once you’ve achieved this separation, we no longer have a single rock; we have two half-rocks: hemiliths, to borrow from the same Greek roots that gave us “monolith”.

But we are not finished, of course. Two may not be the correct number of services to end up with. Now, we ask: can we split the frontend into a frontend and backend? Can we split the backend? If so, then we now have four rocks in place of our original one:

four circles with the word “tetartolith” on them and double-headed arrows connecting them all

You might think that this would be a “tetralith” for “four”, but as they are of a set, they are more properly a tetartolith.

Repeat As Necessary

At some point, you’ll hit a point where you’re looking at a service and asking “what are the two pieces I could split this service into?”, and the answer will be “none, it makes sense as a single piece”. At that point, you will know that you’ve achieved services of the correct size.

One thing about this insight that may disappoint some engineers is the realization that service-oriented architecture is much more an engineering management tool than it is an engineering tool. It’s fun to think that “microservices” will let you play around with weird technologies and niche programming languages consequence-free at your day job because those can all be “separate services”, but that was always a fantasy. Integrating multiple technologies is expensive, and introducing more moving parts always introduces more failure points.

Advanced Techniques: A Multi-Stack Microservice Environment

You’ll note that splitting a service heavily implies that the resulting services will still all be in the same programming language and the same tech stack as before. If you’re interested in deploying multiple stacks (languages, frameworks, libraries), you can proceed to that outcome via bilithification, but it is a multi-step process.

First, you have to complete the strategy that I’ve already outlined above. You need to get to a service that is sufficiently granular that it is atomic; you don’t want to split it up any further.

Let’s call that service “X”.

Second, you identify the additional complexity that would be introduced by using a different tech stack. It’s important to be realistic here! New technology always seems fun, and if you’re investigating this, you’re probably predisposed to think it would be an improvement. So identify your costs first and make sure you have them enumerated before you move on to the benefits.

Third, identify the concrete benefits to X’s problem domain that the new tech stack would provide.

Finally, do a cost-benefit analysis where you make sure that the costs from step 2 are clearly exceeded by the benefits from step 3. If you can’t readily identify that in advance (sometimes experimentation is required), then you need to treat this as an experiment, rather than as a strategic direction, until you’ve had a chance to answer whatever questions you have about the new technology’s benefits.

Note, also, that this cost-benefit analysis requires not only doing the technical analysis but getting buy-in from the entire team tasked with maintaining that component.

Conclusion

To summarize:

  1. Always start with a monolith.
  2. When the monolith is too big, both in terms of team and of codebase, split the monolith in half until it doesn’t make sense to split it in half any more.
  3. (Optional) Carefully evaluate services that want to adopt new technologies, and keep the costs of doing that in mind.

There is, of course, a world of complexity beyond this associated with managing the cost of a service-oriented architecture and solving specific technical problems that arise from that architecture.

If you remember the tetartolith, though, you should at least be able to get to the right number and size of services for your system.


Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from more specificity on the sort of insight you've seen here.


  1. AKA “my father” 

  2. Denouncing “silos” within organizations is so common that it’s a tired trope at this point. There is no shortage of vaguely inspirational articles across the business trade-rag web and on LinkedIn exhorting us to “break down silos”, but silos are the point of having an organization. If everybody needs to talk to everybody else in your entire organization, if no silos exist to prevent the necessity of that communication, then you are definitionally operating that organization at minimal efficiency. What people actually want when they talk about “breaking down silos” is a re-org into a functional hierarchy rather than a role-oriented hierarchy (i.e., “this is the team that makes and markets the Foo product, this is the team that makes and markets the Bar product” as opposed to “this is the sales team”, “this is the engineering team”). But that’s a separate post, probably. 

by Glyph at July 20, 2023 08:59 PM

June 26, 2023

Hynek Schlawack

Two Ways to Turbo-Charge tox

No, it’s not (just) run-parallel – let’s cut the local tox runtime by 75%!

by Hynek Schlawack (hs@ox.cx) at June 26, 2023 02:00 PM

June 05, 2023

Hynek Schlawack

Why I Like Nox

Ever since I got involved with open-source Python projects, tox has been vital for testing packages across Python versions (and other factors). However, lately, I’ve been increasingly using Nox for my projects instead. Since I’ve been asked why repeatedly, I’ll sum up my thoughts.

by Hynek Schlawack (hs@ox.cx) at June 05, 2023 12:00 AM

May 08, 2023

Moshe Zadka

Safe, Simple, Automatic Releases

Two words that strike fear in the heart of every software developer: "release process". Whether deploying a new version of a billion-person social network or a tiny little open source library, release processes are often manual, complicated, and easy to mess up.

Much has been said on the "new version of a billion-person social network" side. But what about those small, barely a person, open source libraries?

The focus here will be on

  • Open source
  • Python libraries

Automatic

A new version? What is the number?

As already explored, CalVer works better than SemVer. So you already know something about the version.

With most standard CalVer schemes, the version starts with <Year>.<Month>. For example, something released in April 2023 would be 2023.04....

One option is to choose as the last digit a running number of releases in the month. So, 2023.4.3 would be the third release in April 2023. This means that when assigning the version, you have to look and see what previous releases, if any, happened this month.

A more stateless option is to choose the day. However, this means it is impossible to release more than once per day. This can be literally irresponsible: what happens if you need to quickly patch up a major goof?

There is another, sneakier, option. It is to take advantage of the fact that the Python version standard does not mandate only three parts to the version.

For example, 2023.4.27.3 could be the third release on April 27th, 2023. Having to quickly release multiple versions in one day sounds stressful, and I'm sorry someone had to experience that.

On top of all the stress, they had to have the presence of mind to check how many releases already happened today. This sounds like heaping misery on top of an already stressful situation.

Can we do better? Go completely stateless? Yes, we can.

Autocalver will assign versions to packages using <Year>.<Month>.<Day>.<Seconds from beginning of day>. The last number is only a little bit useful, but does serve as an increasing counter. The timestamp is taken from the commit log, not the build time, so builds are reproducible.
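
As a minimal sketch of that scheme (not the autocalver implementation itself), here is how such a version string could be computed from a commit timestamp; the example timestamp is invented:

from datetime import datetime, timezone

def version_from_commit_time(commit_time: datetime) -> str:
    # final component: seconds since the beginning of the commit's day
    midnight = commit_time.replace(hour=0, minute=0, second=0, microsecond=0)
    seconds = int((commit_time - midnight).total_seconds())
    return f"{commit_time.year}.{commit_time.month}.{commit_time.day}.{seconds}"

# a commit made at 14:30:05 UTC on April 27th, 2023
print(version_from_commit_time(datetime(2023, 4, 27, 14, 30, 5, tzinfo=timezone.utc)))
# prints: 2023.4.27.52205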

Safe

"Three may keep a Secret, if two of them are dead.", Benjamin Franklin, Poor Richard's Almanac, 1735.

Uploading your package to PyPI safely is not trivial. You can generate a package-specific token, store it as an encrypted secret in GitHub Actions, and then unpack it at the right stage, avoid leaking it, and push the package.

Sounds like being half a bad line away from leaking the API key. Luckily, there is a way to avoid it. The PyPI publish GitHub Action uses OpenID Connect to authenticate the runner against PyPI.

You will want to configure GitHub, with the appropriate parameters, as a trusted publisher.

Simple

When do you release? Are you checking releases manually? Why?

CI should make sure that, pre-merge, branches pass all automated checks. "Keep the main branch in a releasable state", as the kids used to say.

Wait. If main branch is releasable, why not...release it?

on:
  push:
    branches:
      - trunk

This will configure a GitHub action that releases on every push to trunk. (Trunk is the "main branch" of a tree, though yours might be named differently.)

See the entire action in autocalver's workflows

Summary

Configuring your project to use autocalver gives you automatic version numbers. Using PyPI/GitHub trusted publishing model eliminates complicated secret sharing schemes. GitHub upload actions on merge to main branch removes the need to make a decision about when to release.

Putting all of them together results in releases taking literally 0 work. More time for fun!

by Moshe Zadka at May 08, 2023 03:00 AM

April 30, 2023

Glyph Lefkowitz

Post-PyCon-US 2023 Notes

PyCon 2023 was last week, and I wanted to write some notes on it while the memory is fresh. Much of this was jotted down on the plane ride home and edited a few days later.

Health & Safety

Even given my smaller practice run at PyBay, it was a bit weird for me to be back around so many people, especially since it was all indoors.

However, it was very nice that everyone took masking seriously. I personally witnessed very few violations of the masking rules, and they all seemed to be momentary, unintentional slip-ups after eating or drinking something.

As a result, I’ve now been home for 4 full days, am COVID negative and did not pick up any more generic con crud. It’s really nice to be feeling healthy after a conference!

Overall Vibe

I was a bit surprised to find the conference much more overwhelming than I remembered it being. It’s been 4 years since my last PyCon; I was out of practice! It was also odd since last year was in person and at the same venue, so most folks had a sense of Salt Lake, and I really didn’t.

I think this was good, since I’ll remember this experience and have a fresher sense of what it feels like (at least a little bit) to be a new attendee next year.

The Schedule

I only managed to attend a few talks, but every one was excellent. In case you were not aware, un-edited livestream VODs of the talks are available with your online ticket, in advance of the release of the final videos on the YouTube channel, so if you missed these but attended the conference, you can still watch them.1

My Talk

My talk, “How To Keep A Secret”, seemed to be very well received.2

I got to talk to a lot of people who said they learned things from it. I had the idea to respond to audience feedback by asking “will you be doing anything differently as a result of seeing the talk?” and so I got to hear about which specific information was actually useful to help improve the audience’s security posture. I highly recommend this follow-up question to other speakers in the future.

As part of the talk, I released and announced 2 projects related to its topic of better security posture around secrets-management:

  • PINPal, a little spaced-repetition tool to help you safely rotate your “core” passwords, the ones you actually need to memorize.

  • TokenRing, a backend for the keyring module which uses a hardware token to require user presence for any secret access, by encrypting your vault and passwords as Fernet tokens.

I also called for donations to a local LGBT+ charity in Salt Lake City and made a small matching donation, to try to help the conference have a bit of a positive impact on the area’s trans population, given the extremely bigoted law passed by the state legislature in the run-up to the conference.

We raised $330 in total3, and I think other speakers were making similar calls. Nobody wanted any credit; everyone who got in touch and donated just wanted to help out.

Open Spaces

I went to a couple of open spaces that were really engaging and thought-provoking.

  • Hynek hosted one based on his talk (which is based on this blog post) where we explored some really interesting case-studies in replacing subclassing with composition.

  • There was a “web framework maintainers” open-space hosted by David Lord, which turned into a bit of a group therapy session amongst framework maintainers from Flask, Django, Klein (i.e. Twisted), and Sanic. I had a few key takeaways from this one:

    • We should try to keep our users in the loop with what is going on in the project. Every project should have a project blog so that users have a single point of contact.

      • It turns out Twisted does actually have one of these. But we should actually post updates to that blog so that users can see new developments. We have forgotten to even post.

      • We should repeatedly drive users to those posts, from every communications channel; social media (mastodon, twitter), chat (discord, IRC, matrix, gitter), or mailing lists. We should not be afraid to repeat ourselves a bit. We’re often afraid to spam our users but there’s a lot of room between where we are now — i.e. “users never hear from us” — and spamming them.

    • We should regularly remind ourselves, and each other, that any work doing things like ticket triage, code review, writing for the project blog, and writing the project website are valuable work. We all kinda know this already, but psychologically it just feels like ancillary “stuff” that isn’t as real as the coding itself.

    • We should find ways to recognize contributions, especially the aforementioned less-visible stuff, like people who hang out in chat and patiently direct users to the appropriate documentation or support channels.

The Sprints

The sprints were not what I expected. I sat down thinking I’d be slogging through some Twisted org GitHub Actions breakage on Klein and Treq, but what I actually did was:

  • Request an org on the recently-released PyPI “Organizations” feature, got it approved, and started adding a few core contributors.

  • Have some lovely conversations with PyCon and PSF staff about several potential projects that I think could really help the ecosystem. I don’t want to imply anyone has committed to anything here, so I’ll leave a description of exactly what those were for later.

  • Filed a series of issues against BeeWare™ Briefcase™ detailing exactly what I needed from Encrust that wasn’t already provided by Briefcase’s existing Mac support.

  • I also did much more than I expected on Pomodouroboros, including:

  • I talked to my first in-the-wild Pomodouroboros user, someone who started using the app early enough to get bitten by a Pickle data-migration bug and couldn’t upgrade! I’d forgotten that I’d released a version that modeled time as a float rather than a datetime.

  • Started working on a design with Moshe Zadka for integrations for external time-tracking services and task-management services.

  • I had the opportunity to review datetype with Paul Ganssle and explore options for integrating it with or recommending it from the standard library, to hopefully start to address both the datetime-shouldn’t-subclass-date problem and the how-do-you-know-if-a-datetime-is-timezone-aware problem.

  • Speaking of Twisted infrastructure maintenance, special thanks to Paul Kehrer, who noticed that pyasn1 was breaking Twisted’s CI, and submitted a PR to fix it. I finally managed to do a review a few days after the conference and that’s landed now.

Everything Else

I’m sure I’m forgetting at least a half a dozen other meaningful interactions that I had; the week was packed, and I talked to lots of interesting people as always.

See you next year in Pittsburgh!


  1. Go to your dashboard and click the “Join PyCon US 2023 Online Now!” button at the top of the page, then look for the talk on the “agenda” tab or the speaker in the search box on the right. 

  2. Talks like these and software like PINPal and TokenRing are the sorts of things that I hope to get support for from my Patreon, so please go there if you’d like to support my continuing to do this sort of work. 

  3. If you’d like to make that number bigger, I’ll do another $100 match on this blog post, and update that paragraph if I receive anything; just send the receipt to encircle@glyph.im. A reader sent in another matching donation and I made a contribution, so the total raised is now $530. 

by Glyph at April 30, 2023 11:47 PM

March 31, 2023

Glyph Lefkowitz

No Executable Is An Island

One of the perennial talking points in the Python packaging discourse is that it’s unnecessarily difficult to create a simple, single-file binary that you can hand to users.

This complaint is understandable. In most other programming languages, the first thing you sit down to do is to invoke the compiler, get an executable, and run it. Other, more recently created programming languages — particularly Go and Rust — have really excellent toolchains for doing this which eliminate a lot of classes of error during that build-and-run process. A single, dependency-free, binary executable file that you can run is an eminently comprehensible artifact, an obvious endpoint for “packaging” as an endeavor.

While Rust and Go are producing these artifacts as effectively their only output, Python has such a dizzying array of tooling complexity that even attempting to describe “the problem” takes many thousands of words, and one may never get around to fully describing the complexity of the issues involved in the course of those words. All the more understandable, then, that we should urgently add this functionality to Python.

A programmer might see Python produce wheels and virtualenvs which can break when anything in their environment changes, and see the complexity of that situation. Then, they see Rust produce a statically-linked executable which “just works”, and they see its toolchain simplicity. I agree that this shouldn’t be so hard, and some of the architectural decisions that make this difficult in Python are indeed unfortunate.

But then, I think, our hypothetical programmer gets confused. They think that Rust is simple because it produces an executable, and they think Python’s complexity comes from all the packaging standards and tools. But this is not entirely true.

Python’s packaging complexity, and indeed some of those packaging standards, arises from the fact that it is often used as a glue language. Packaging pure Python is really not all that hard. And although the tools aren’t included with the language and the DX isn’t as smooth, even packaging pure Python into a single-file executable is pretty trivial.

But almost nobody wants to package a single pure-python script. The whole reason you’re writing in Python is because you want to rely on an enormous ecosystem of libraries, and some low but critical percentage of those libraries include things like their own statically-linked copies of OpenSSL or a few megabytes of FORTRAN code with its own extremely finicky build system you don’t want to have to interact with.

When you look aside to other ecosystems, while Python still definitely has some unique challenges, shipping Rust with a ton of FFI, or Go with a bunch of Cgo is substantially more complex than the default out-of-the-box single-file “it just works” executable you get at the start.1

Still, all things being equal, I think single-file executable builds would be nice for Python to have as a building block. It’s certainly easier to produce a package for a platform if your starting point is that you have a known-good, functioning single-file executable and all you need to do is embed it in some kind of environment-specific envelope. Certainly if what you want is to copy a simple microservice executable into a container image, you might really want to have this rather than setting up what is functionally a full Python development environment in your Dockerfile. After team-wide philosophical debates over what virtual environment manager to use, those Golang Dockerfiles that seem to be nothing but the following 4 lines are really appealing:

FROM golang
COPY *.go ./
RUN go build -o /app
ENTRYPOINT ["/app"]

All things are rarely equal, however.

The issue that we, as a community, ought to be trying to address with build tools is to get the software into users’ hands, not to produce a specific file format. In my opinion, single-file binary builds are not a great tool for this. They’re fundamentally not how people, even quite tech-savvy programming people, find and manage their tools.

A brief summary of the problems with single-file distributions:

  1. They’re not discoverable. A single file linked on your website will not be found via something like brew search, apt search, choco search or searching in a platform’s GUI app store’s search bar.
  2. They’re not updatable. People expect their system package manager to update stuff for them. Standalone binaries might add their own updaters, but now you’re shipping a whole software-update system inside your binary. More likely, it’ll go stale forever while better-packaged software will be updated and managed properly.
  3. They have trouble packaging resources. Once you’ve got your code stuffed into a binary, how do you distribute images, configuration files, or other data resources along with it? This isn’t impossible to solve, but in other programming languages which do have a great single-file binary story, this problem is typically solved by third party tooling which, while it might work fine, will still generally exist in multiple alternative forms which adds its own complexity.

So while it might be a useful building-block that simplifies those annoying container builds a lot, it hardly solves the problem comprehensively.

If we were to build a big new tool, the tool we need is something that standardizes the input format to produce a variety of different complex, multi-file output formats, including things like:

  • deb packages (including uploading to PPA archives so people can add an apt line; a manual dpkg -i has many of the same issues as a single file)
  • container images (including the upload to a registry so that people can "$(shuf -n 1 -e nerdctl docker podman)" pull or FROM it)
  • Flatpak apps
  • Snaps
  • macOS apps
  • Microsoft store apps
  • MSI installers
  • Chocolatey / NuGet packages
  • Homebrew formulae

In other words, ask yourself, as a user of an application, how do you want to consume it? It depends what kind of tool it is, and there is no one-size-fits-all answer.


In any software ecosystem, if a feature is a building block which doesn’t fully solve the problem, that is an issue with the feature, but in many cases, that’s fine. We need lots of building blocks to get to full solutions. This is the story of open source.

However, if I had to take a crack at summing up the infinite-headed hydra of the Problem With Python Packaging, I’d put it like this:

Python has a wide array of tools which can be used to package your Python code for almost any platform, in almost any form, if you are sufficiently determined. The problem is that the end-to-end experience of shipping an application to end users who are not Python programmers2 for any particular platform has a terrible user experience. What we need are more holistic solutions, not more building blocks.3

This makes me want to push back against this tendency whenever I see it, and to try to point towards more efficient ways of achieving a good user experience, with the relatively scarce community resources at our collective disposal4. Efficiency isn’t exclusively about ideal outcomes, though; it’s about optimizing a cost/benefit ratio. In terms of benefits, single-file builds rate relatively low, as I hope I’ve shown above.

Building a tool that makes arbitrary Python code into a fully self-contained executable is also very high-cost, in terms of community effort, for a bunch of reasons. For starters, in any moderately-large collection of popular dependencies from PyPI, at least a few of them are going to want to find their own resources via __file__, and you need to hack in a way to find those, which is itself error prone. Python also expects dynamic linking in a lot of places, and messing around with C linkers to change that around is a complex process with its own huge pile of failure modes. You need to do this on pre-existing binaries built with options you can’t necessarily control, because making everyone rebuild all the binary wheels they find on PyPI is a huge step backwards in terms of exposing app developers to confusing infrastructure complexity.

Now, none of this is impossible. There are even existing tools to do some of the scarier low-level parts of these problems. But one of the reasons that all the existing tools for doing similar things have folk-wisdom reputations and even official documentation expecting constant pain is that part of the project here is conducting a full audit of every usage of __file__ on PyPI and replacing it with some resource-locating API which we haven’t even got a mature version of yet5.
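To make the __file__ problem concrete, here is a minimal sketch of the kind of rewrite that audit implies; it is not taken from any particular PyPI package, and the package name (“mypackage”) and its templates directory are made up for illustration. The second function uses importlib.resources, which works whether the package is a real directory, a zip entry, or something stranger:

import os
from importlib.resources import files

# The common idiom that breaks inside zipapps and single-file builds: it
# assumes the package is an actual directory on the filesystem.
def load_template_legacy(name: str) -> str:
    template_dir = os.path.join(os.path.dirname(__file__), "templates")
    with open(os.path.join(template_dir, name)) as f:
        return f.read()

# The resource-locating replacement, which makes no such assumption.
def load_template(name: str) -> str:
    return (files("mypackage") / "templates" / name).read_text()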

Whereas copying all the files into the right spots in an archive file that can be directly deployed to an existing platform is tedious, but only moderately challenging. It usually doesn’t involve fundamentally changing how the code being packaged works, only where it is placed.

To the extent that we have a choice between “make it easy to produce a single-file binary without needing to understand the complexities of binaries” or “make it easy to produce a Homebrew formula / Flatpak build / etc without the user needing to understand Homebrew / Flatpak / etc”, we should always choose the latter.


If this is giving you déjà vu, it’s because I’ve gestured at this general concept in a few places before, including tweeting6 about it in 2019, where I said vaguely similar stuff.

Everything I’ve written here so far is debatable.

You can find that debate both in replies to that original tweet and in various other comments and posts elsewhere where I’ve grumbled about this. I still don’t agree with that criticism, but there are very clever people working on complex tools which are slowly gaining popularity and might be making the overall packaging situation better.

So while I think we should in general direct efforts more towards integrating with full-featured packaging standards, I don’t want to yuck anybody’s yum when it comes to producing clean single-file executables in general. If you want to build that tool and it turns out to be a more useful building block than I’m giving it credit for, knock yourself out.

However, in addition to having a comprehensive write-up of my previously-stated opinions here, I want to impart a more concrete, less debatable issue. To wit: single-file executables as a distribution mechanism, specifically on macOS, are not only sub-optimal, but a complete waste of time.


Late last year, Hynek wrote a great post about his desire for, and experience of, packaging a single-file binary for multiple platforms. This should serve as an illustrative example of my point here. I don’t want to pick on Hynek; prominent Python programmers wish for this all the time. Hynek also did the thing I said is a good idea here: he created a Homebrew tap, and that’s the one the README recommends.

So since he kindly supplied a perfect case-study of the contrasting options, let’s contrast them!

The first thing I notice is that the Homebrew version is Apple Silicon native, whereas the single-file binary is still x86_64. The brew build and test infrastructure apparently deals with architectural differences (probably pretty easy, given that it can use Homebrew’s existing Python build), but the more hand-rolled PyOxidizer setup builds only for the host platform, which in this case is still an Intel mac thanks to GitHub dragging their feet.

The second is that the Homebrew version runs as I expect it to. I run doc2dash in my terminal and I see Usage: doc2dash [OPTIONS] SOURCE, as I should.

So, A+ on the Homebrew tap. No notes. I did not have to know anything about Python being in the loop at all here, it “just works” like every Ruby, Clojure, Rust, or Go tool I’ve installed with the same toolchain.

Over to the single-file brew-less version.

Beyond the architecture being emulated and having to download Rosetta28, I have to note that this “single file” binary already comes in a zip file, since it needs to include the license in a separate file.7 Now that it’s unarchived, I have some choices to make about where to put it on my $PATH. But let’s ignore that for now and focus on the experience of running it. I fire up a terminal, and run cd Downloads/doc2dash.x86_64-apple-darwin/ and then ./doc2dash.

Now we hit the more intractable problem:

a gatekeeper launch-refusal dialog box

The executable does not launch because it is neither code-signed nor notarized. I’m not going to go through the actual demonstration here, because you already know how annoying this is, and also, you can’t actually do it.

Code-signing is more or less fine. The codesign tool will do its thing, and that will change the wording in the angry dialog box from something about an “unidentified developer” to something about being “unable to check for malware”, which is not much of a help. You still need to notarize it, and notarization can’t work.

macOS really wants your executable code to be in a bundle (i.e., an App) so that it can know various things about its provenance and structure. CLI tools are expected to be in the operating system, or managed by a tool like brew that acts like a sort of bootleg secondary operating-system-ish thing and knows how to manage binaries.

If it isn’t in a bundle, then it needs to be in a platform-specific .pkg file, which is installed with the built-in Installer app. This is because Apple cannot notarize a stand-alone binary executable file.

Part of the notarization process involves stapling an external “notarization ticket” to your code, and if you’ve only got a single file, it has nowhere to put that ticket. You can’t even submit a stand-alone binary; you have to package it in a format that is legible to Apple’s notarization service, which for a pure-CLI tool, means a .pkg.
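For concreteness, here is a rough sketch of what that packaging-and-notarization dance looks like for a CLI tool, written as a small Python build script. The signing identities, bundle identifier, staging directory, and keychain profile name are all placeholders, and the exact flags may need adjusting for your setup; treat this as an outline of the steps rather than a drop-in recipe:

import subprocess

def run(*cmd: str) -> None:
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Placeholder signing identities; substitute your own.
APP_IDENTITY = "Developer ID Application: Example Corp (TEAMID1234)"
PKG_IDENTITY = "Developer ID Installer: Example Corp (TEAMID1234)"

# 1. Sign the binary itself, with the hardened runtime enabled.
run("codesign", "--sign", APP_IDENTITY, "--options", "runtime",
    "--timestamp", "stage/usr/local/bin/doc2dash")

# 2. Wrap it in a .pkg, because a bare binary has nowhere to staple a ticket.
run("pkgbuild", "--root", "stage", "--identifier", "com.example.doc2dash",
    "--version", "1.0", "doc2dash-unsigned.pkg")
run("productsign", "--sign", PKG_IDENTITY,
    "doc2dash-unsigned.pkg", "doc2dash.pkg")

# 3. Submit the package to Apple's notarization service and wait for a verdict.
run("xcrun", "notarytool", "submit", "doc2dash.pkg",
    "--keychain-profile", "example-notary-profile", "--wait")

# 4. Staple the notarization ticket so Gatekeeper can verify it offline.
run("xcrun", "stapler", "staple", "doc2dash.pkg")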

What about corporate distributions of proprietary command-line tools, like the 1Password CLI? Oh look, their official instructions tell you to use their Homebrew formula too. Homebrew really is the standard developer-CLI platform for macOS at this point. When 1Password distributes stuff outside of Homebrew, as with their beta builds, it’s stuff that lives in a .pkg as well.


It is possible to work around all of this.

I could open the unzipped file, right-click on the CLI tool, go to “Open”, get a subtly differently worded error dialog, like this…

a gatekeeper open-right-click dialog box

…watch it open Terminal for me and then exit, then wait multiple seconds for it to run each time I want to re-run it at the command line. Did I mention that? The single-file option takes 2-3 seconds doing who-knows-what (maybe some kind of security check, maybe PyOxidizer overhead, I don’t know), whereas the Homebrew version starts effectively instantly.

Also, I then need to manually re-do this process in the GUI every time I want to update it.

If you know about the magic of how this all actually works, you can also do xattr -d com.apple.quarantine doc2dash by hand, but I feel like xattr -d is a step lower down in the user-friendliness hierarchy than python3 -m pip install9, and not only because making a habit of clearing quarantine attributes manually is a little like cutting the brake lines on Gatekeeper’s ability to keep out malware.

But the point of distributing a single-file binary is to make it “easy” for end users, and is explaining Gatekeeper’s integrity verification to them really accomplishing that goal?


Apple’s effectively mandatory code-signing verification on macOS is far out ahead of other desktop platforms right now, both in terms of its security and in terms of its obnoxiousness. But every mobile platform is like this, and I think that as everyone gets more and more panicked about malicious interference with software delivery, we’re going to see more and more official requirements that software must come packaged in one of these containers.

Microsoft will probably fix their absolute trash-fire of a code-signing system one day too. I predict that something vaguely like this will eventually even come to most Linux distributions: not necessarily a prohibition on individual binaries like this, or something like a GUI launch-prevention tool, but some sort of requirement imposed by the OS that every binary file be traceable to some sort of package, maybe enforced with some sort of annoying AppArmor profile if you don’t do it.

The practical, immediate message today is: “don’t bother producing a single-file binary for macOS users, we don’t want it and we will have a hard time using it”. But the longer term message is that focusing on creating single-file binaries is, in general, skating to where the puck used to be.

If we want Python to have a good platform-specific distribution mechanism for every platform, so it’s easy for developers to get their tools to users without having to teach all their users a bunch of nonsense about setuptools and virtualenvs first, we need to build that, and not get hung up on making a single-file executable packer a core part of the developer experience.


Thanks very much to my patrons for their support of writing like this, and of software like these tools.

Oh, right. This is where I put the marketing “call to action”. Still getting the hang of these.

Did you enjoy this post and want me to write more like it, and/or did you hate it and want the psychological leverage and moral authority to tell me to stop and do something else? You can sign up here!


  1. I remember one escapade in particular where someone had to ship a bunch of PKCS#11 providers along with a Go executable in their application and it was, to put it lightly, not a barrel of laughs. 

  2. Shipping to Python programmers in a Python environment is kind of fine now, and has been for a while. 

  3. Yet, even given my advance awareness of this criticism, and despite my best efforts, I can’t seem to stop building little tools that poorly solve only one problem in isolation. 

  4. And it does have to be at our collective disposal. Even the minuscule corner of this problem I’ve worked on, the aforementioned Mac code-signing and notarization stuff, is tedious and exhausting; nobody can take on the whole problem space, which is why I find writing about this is such an important part of the problem. Thanks Pradyun, and everyone else who has written about this at length! 

  5. One of the sources of my anti-single-file stance here is that I tried very hard, for many years, to ensure that everything in Twisted was carefully zipimport-agnostic, even before pkg_resources existed, by using the from twisted.python.modules import getModule, getModule(__name__).filePath.sibling(...) idiom, where that .filePath attribute might be anything FilePath-like, specifically including ZipPath. It didn’t work; fundamentally, nobody cared, other contributors wouldn’t bother to enforce this, or even remember that it might be desirable, because they’d never worked in an environment where it was. Today, a quick git grep __file__ in Twisted turns up tons of usages that will make at least the tests fail to run in a single-file or zipimport environment. Despite the availability of zipimport itself since 2001, I have never seen tox or a tool like it support running with a zipimport-style deployment to ensure that this sort of configuration is easily, properly testable across entire libraries or applications. If someone really did want to care about single-file deployments, fixing this problem comprehensively across the ecosystem is probably one of the main things to start with, beginning with an international evangelism tour for importlib.resources

  6. This is where the historical document is, given that I was using it at the time, but if you want to follow me now, please follow me on Mastodon. 

  7. Oops, I guess we might as well not have bothered to make a single-file executable anyway! Once you need two files, you can put whatever you want in the zip file... 

  8. Just kidding. Of course it’s installed already. It’s almost cute how Apple shows you the install progress to remind you that one day you might not need to download it whenever you get a new mac. 

  9. There’s still technically a Python included in Xcode and the Xcode CLT, so functionally Macs do have a /usr/bin/python3 that is sort of a python3.9. You shouldn’t really use it; instead, download the Python installer from python.org. But you should probably use even that before you start disabling code integrity verification everywhere. 

by Glyph at March 31, 2023 12:35 AM

March 29, 2023

Glyph Lefkowitz

A Response to Jacob Kaplan-Moss’s “Incompetent But Nice”

Jacob Kaplan-Moss has written a post about one of the most stressful things that can happen to you as a manager: when someone on your team is getting along well with the team, apparently trying their best, but unable to do the work. Incompetent, but nice.

I have some relevant experience with this issue. On more than one occasion in my career:

  1. I have been this person. I have both resigned, and been fired, as a result.
  2. I’ve been this person’s manager, and had to fire them after being unable to come up with a training plan that would allow them to improve.
  3. I’ve been an individual contributor on a team with this person, trying to help them improve.

So I can speak to this issue from all angles, and I can confirm: it is gut-wrenchingly awful no matter where you are in relation to it. It is social stress in its purest form. Everyone I’ve been on some side of this dynamic with is someone that I’d love to work with again. Including the managers who fired me!

Perhaps most of all, since I am not on either side of an employer/employee relationship right now1, I have some emotional distance from this kind of stress which allows me to write about it in a more detached way than others with more recent and proximate experience.

As such, I’d like to share some advice, in the unfortunate event that you find yourself in one of these positions and are trying to figure out what to do.

I’m going to speak from the perspective of the manager here, because that’s where most of the responsibility for decision-making lies. If you’re a teammate, you probably need to wait for a manager to ask you to do something; if you’re the underperformer, you’re probably already trying as hard as you can to improve. But hopefully this reasoning process can help you understand what the manager is trying to do, and find the bits of the process where you can take some initiative to help.

Step 0: Preliminaries

First let’s lay some ground rules.

  1. Breathe. Maintaining an explicit focus on regulating your own mood is important, regardless of whether you’re the manager, the teammate, or the underperformer.
  2. Accept that this may be intractable. You’re going to do your best in this situation but you are probably choosing between bad options. Nevertheless you will need to make decisions as confidently and quickly as possible. Letting this situation drag on can be a recipe for misery.
  3. You will need to do a retrospective.2 Get ready to collect information as you go through the process, to try to identify what happened in detail so you can analyze it later. If you are the hiring manager, that means that after you’ve got your self-compassion together and your equanimous professional mood locked in, you will also need to reflect on the fact that you probably fucked up here, and get ready to try to improve your own skills and processes so that you don’t end up in this situation again.

I’m going to try to pick up where Jacob left off, and skip over the “easy” parts of this process. As he puts it:

The proximate answer is fairly easy: you try to help them level up: pay for classes, conferences, and/or books; connect them with mentors or coaches; figure out if something’s in the way and remove the blocker. Sometimes folks in this category just need to learn, and because they’re nice it’s easy to give them a lot of support and runway to level up. Sometimes these are folks with things going on in their lives outside work and they need some time (or some leave) to focus on stuff that’s more important than work. Sometimes the job has requirements that can be shifted or eased or dropped – you can match the work to what the person’s good at. These situations aren’t always easy but they are simple: figure out the problem and make a change.

Step 1: Figuring Out What’s Going On

There are different reasons why someone might underperform.

Possibility: The person is over-leveled.

This is rare. For the most part, pervasive under-leveling is more of a problem in the industry. But, it does happen, and when it happens, what it looks like is someone who is capable of doing some of the work that they’re assigned, but getting stuck and panicking with more challenging or ambiguous assignments.

Moreover, this particular organizational antipattern, the “nice but incompetent” person, is more likely to be over-leveled, because if they’re friendly and getting along with the team, then it’s likely they made a good first impression that made the hiring committee want to go to bat for them, and the committee may have done them the “favor” of giving them a higher level. This is something to consider in the retrospective.

Now, Jacob’s construction of this situation explicitly allows for “leveling up”, but the sort of over-leveling that can be resolved with a couple of conference talks, books, and mentoring sessions is just a challenge. We’re talking about persistent resistance to change here, and that sort of over-leveling means that they have skipped an entire rung on the professional development ladder, and do not yet have the professional experience to bridge the gap between where they are and where the ladder says they should be.

If this is the case, consider a demotion. Identify the aspects of the work that the underperformer can handle, and try to match them to a role. If you are yourself the underperformer, proactively identifying what you’re actually doing well, acknowledging the work you can’t handle, and flagging any open headcount for roles at that level can make this process much easier for your manager.

However, be sure to be realistic. Are they capable enough to work at that reduced level? Does your team really have a role for someone at that level? Don’t invent makework in the hopes that you can give them a bootleg undergraduate degree’s worth of training on the job; they need to be able to contribute.

Possibility: Undiagnosed health issues

Jacob already addressed the “easy” version here: someone is struggling with an issue that they know about and they can ask for help, or at least you can infer the help they need from the things they’ve explicitly said.

But the underperformer might have something going on which they don’t realize is an issue. Or they might know there’s an issue but not think it’s serious, or not be able to find a diagnosis or treatment. Most frequently, this is a mental health problem, but it can also be things like unexplained fatigue.

This possibility is the worst. Not only do you feel like a monster for adding insult to injury, there’s also a lot of risk in discussing it.

Sometimes, you feel like you can just tell3 that somebody is suffering from a particular malady. It may seem obvious to you. If you have any empathy, you probably want to help them. However, you have to be careful.

First, some illnesses qualify as disabilities. Just because the employee has not disclosed their disability to you does not mean they are unaware of it. It is up to the employee whether to tell you, and you are not allowed to require them to disclose anything to you. They may have good reasons for not talking about it.

Beyond highlighting some relevant government policy, I am not equipped to advise you on how to handle this. You probably want to loop in someone from HR and/or someone from Legal, and have a meeting to discuss the particulars of what’s happening and how you’d like to try to help.

Second, there’s a big power differential here. You have to think hard about how to broach the subject; you’re their boss, telling them that you think they’re too sick to work. In this situation, they don’t necessarily agree, and that can quite reasonably be perceived as an attack and an insult, even if you’re correct. Hopefully the folks in Legal or HR can help you with some strategies here; again, I’m not really qualified to do anything but point at the risks involved and say “oh no”.

The “good” news here is that if this really is the cause, then there’s not a whole lot to change in your retrospective. People get sick, their families get sick, it can’t always be predicted or prevented.

Possibility: Burnout

While it is certainly a mental health issue in its own right, burnout is specifically occupational and you can thus be a bit more confident as a manager recognizing it in an employment context.

This is better to prevent than to address, but if you’ve got someone burning out badly enough to be a serious performance issue, you need to give them more leave than they think they need, and you need to do it quickly. A week or two off is not going to cut it.

In my experience, this is the most common cause of an earnest but agreeable employee underperforming, and it is also the one we are most reluctant to see. Each step on the road to burnout seems locally reasonable.

Just push yourself a little harder. Just ask for a little overtime. Just until this deadline; it’s really important. Just until we can hire someone; we’ve already got a req open for that role. Just for this one client.

It feels like we should be able to white-knuckle our way through “just” a little inconvenience. It feels that way both individually and collectively. But the impacts are serious, and they are cumulative.

There are two ways this can manifest.

Usually, it’s a gradual decline that you can see over time, and you’ll see this in an employee that was previously doing okay, but now can’t hack it.

However, another manifestation is someone who was burned out at their previous role, did not take any break between jobs, and has stepped into a moderately stressful role which could be a healthy level of challenge for someone refreshed and taking frequent enough breaks, but is too demanding for someone who needs to recover.

If that’s the case, and you feel like you accurately identified a promising candidate, it is really worthwhile to get that person the break that they need. Just based on vague back-of-the-envelope averages, it would typically be about as expensive to find a way to wrangle 8 weeks of extra leave as to go through the whole hiring process for a mid-career software engineer from scratch. However, that math assumes that the morale cost of firing someone is zero, and that the morale benefit of being seen to actually care about your people and proactively treat them well is also zero.

If you can positively identify this as the issue, then you have a lot of work to do in the retrospective. People do not spontaneously burn out by themselves. This is a management problem, and this person is likely to be the first domino to fall. You may need to make some pretty big changes across your team.

Possibility: Persistent Personality Conflict

It may be that someone else on the team is actually the problem. If the underperformer is inconsistent, observe the timing of the inconsistencies: does it get much worse when they’re assigned to work closely with someone else? Note that “personality conflict” does not mean that the other person is necessarily an asshole; it is possible for communication styles to simply fail to mesh due to personality differences which cannot be easily addressed.

You will be tempted to try to reshuffle responsibilities to keep these team members further apart from each other.

Don’t.

People with a conflict that is persistently interfering in day-to-day work need to be on different teams entirely. If you attempt to keep them apart while leaving them on the same team, then inevitably one is going to get the higher-status projects and the other is going to be passed over for advancement. Or they’re going to end up drifting back into similar areas again.

Find a way to transfer them internally far enough away that they can have breathing room away from this conflict. If you can’t do that, then a firing may be your best option.

In the retrospective, it’s worth examining the entire team dynamic at this point, to see if the conflict is really just between two people, or if it’s more pervasive than that and other members of the team are just handling it better.

Step 2: Help By Being Kind, Not By Being Nice

Again, we’re already past the basics here. You’ve already got training and leave and such out of the way. You’re probably going to need to make a big change.

Responding to Jacob, specifically:

Firing them feels wrong; keeping them on feels wrong.

I think that which one is right is heavily context-dependent. But, realistically, firing is the more likely option. You’ve got someone here who isn’t performing adequately, and you’ve already deployed all the normal tools to try to dig yourself out of that situation.

So let’s talk about that first.

Firing

Briefly: if you need to fire them, just fire them, and do it quickly.

Firing people sucks, even obnoxious people, and this situation is about a person that you like! You’ll want to be nice to this person. They are also almost certainly trying their best; it’s pretty hard to stay agreeable with your team if you’re disappointing that team and not even trying to improve.

If you find yourself thinking “I probably need to fire this person but it’s going to be hard on them”, the thought “hard on them” indicates you are focused on trying to help them personally, and not on what is best for your company, your team, or even the employee themselves professionally. The way to show kindness in that situation is not to keep them in a role that’s bad for them and for you.

It would be much better for the underperformer to find a role where they are not an underperformer, and at this point, that role is probably not on your team. Every minute that you keep them on in the bad role is a minute that they can’t spend finding a good one.

As such, you need to immediately shift gears towards finding them a soft landing that does not involve delaying whatever action you need to take.

Being kind is fine. It is not even a conflict to try to spend some company resources to show that kindness. It is in the best interest of all concerned that anyone you have to fire or let go is inclined to sing your praises wherever they end up. The best brand marketing in the world for your jobs page is a diaspora of employees who wish they could still be on your team.

But being nice here, being conflict-avoidant and agreeable, only drags out an unpleasant situation. The way to spend company resources on kindness is to negotiate with your management for as large a severance package as you can manage, to give as much runway as possible for their job search, and to be clear about what else you can do for them.

For example, are you usable as a positive reference? I.e., did they ever have a period where their performance was good, which you could communicate to future employers? Be clear.

Not-Firing

But while I think it’s the more likely option, it’s certainly not the only one. There are cases where underperformers really can be re-situated into better roles, and this person could still find a good fit within the team, or at least the company. Suppose you think you’ve solved the mystery of what’s causing the problem, and you need to make a change. What then?

In that case, the next step is to have a serious conversation about performance management. Set expectations clearly. Ironically, if you’re dealing with a jerk, you’ve probably already crisply communicated your issues. But if you’re dealing with a nice person, you’re more likely to have a slow, painful drift into this awkward situation, where they probably vaguely know that they’re doing poorly but may not realize how poorly.

If that’s what’s happening, you need to immediately correct it, even if firing isn’t on the table. If you’ve gotten to this point, some significant action is probably necessary. Make sure they understand the urgency of the situation, and if you have multiple options for them to consider, give them a clear timeline for how long they have to make a decision.

As I detailed above, things like a down-leveling or extended leave might be on the table. You probably do not want to do anything like that in a normal 1x1: make sure you have enough time to deal with it.

Remember: Run That Retrospective!

In most cases where this sort of situation develops, it is a clear management failure. If you’re the manager, you need to own it.

Try to determine where the failure was. Was it hiring? Leveling? A team culture that encourages burnout? A poorly-vetted internal transfer? Accepting low performance for too long, not communicating expectations?

If you can identify an actionable systemic cause, then you need to make sure there is time and space to make the necessary changes, and soon. It’s stressful to have to go through this process with one person, but if you have to do it repeatedly, any dynamic that persistently damages multiple people’s productivity is almost definitionally a team-destroyer.


  1. REMEMBER TO LIKE AND SUBSCRIBE 

  2. I know it’s a habit we all have from industry jargon — heck, I used the term in my own initial Mastodon posts about this — but “postmortem” is a fraught term in the best of circumstances, and super not great when you’re talking about an actual person who has recently been fired. Try to stick to “retrospective” or “review” when you’re talking about this process with your team. 

  3. Now, I don’t know if this is just me, but for reasons that are outside the scope of this post, when I was in my mid teens I got a copy of the DSM-IV, read the whole thing back to back, and studied it for a while. I’ve never had time to catch up to the DSM-5 but I’m vaguely aware of some of the changes and I’ve read a ton of nonfiction related to mental health. As a result of this self-education, I have an extremely good track record of telling people “you should see a psychiatrist about X”. I am cautious about this sort of thing and really only tell close friends, but as far as I know my hit-rate is 100%. 

by Glyph at March 29, 2023 09:49 PM

March 26, 2023

Glyph Lefkowitz

Telemetry Is Not Your Enemy

Part 1: A Tale of Two Metaphors

In software development “telemetry” is data collected from users of the software, almost always delivered to the authors of the software via the Internet.

In recent years, there has been a great deal of angry public discourse about telemetry. In particular, there is a lot of concern that every software vendor and network service operator collecting any data at all is spying on its users, surveilling every aspect of our lives. The media narrative has been that any tech company collecting data for any purpose is acting creepy as hell.

I am quite sympathetic to this view. In general, some concern about privacy is warranted whenever some new data-collection scheme is proposed. However, it seems to me that the default response is no longer “concern and skepticism”, but rather “panic and fury”. All telemetry is seen as snooping, and all snooping is seen as evil.

There’s a sense in which software telemetry is like surveillance. However, it is only like surveillance. Surveillance is a metaphor, not a description. It is far from a perfect metaphor.

In the discourse around user privacy, I feel like we have lost a lot of nuance about the specific details of telemetry when some people dismiss all telemetry as snooping, spying, or surveillance.

Here are some ways in which software telemetry is not like “snooping”:

  1. The data may be aggregated. The people consuming the results of telemetry are rarely looking at individual records, and individual records may not even exist in some cases. There are tools, like Prio, to do this aggregation to be as privacy-sensitive as possible.
  2. The data is rarely looked at by human beings. In the cases (such as ad-targeting) where the data is highly individuated, both the input (your activity) and the output (your recommendations) are mainly consumed by you, in your experience of a product, by way of algorithms acting upon the data, not by an employee of the company you’re interacting with.1
  3. The data is highly specific. “Here’s a record with your account ID and the number of times you clicked the Add To Cart button without checking out” is not remotely the same class of information as “Here’s several hours of video and audio, attached to your full name, recorded without your knowledge or consent”. Emotional appeals calling any data “surveillance” tend to suggest that all collected data is the latter, where in reality most of it is much closer to the former.

There are other metaphors which can be used to understand software telemetry. For example, there is also a sense in which it is like voting.

I emphasize that voting is also a metaphor here, not a description. I will also freely admit that it is in many ways a worse metaphor for telemetry than “surveillance”. But it can illuminate other aspects of telemetry, the ones that the surveillance metaphor leaves out.

Data-collection is like voting because the data can represent your interests to a party that has some power over you. Your software vendor has the power to change your software, and you probably don’t, because you don’t have access to the source code; even if it’s open source, you almost certainly don’t have the resources to take over its maintenance.

For example, let’s consider this paragraph from some Microsoft documentation about telemetry:

We also use the insights to drive improvements and intelligence into some of our management and monitoring solutions. This improvement helps customers diagnose quality issues and save money by making fewer support calls to Microsoft.

“Examples of how Microsoft uses the telemetry data” from the Azure SDK documentation

What Microsoft is saying here is that they’re collecting the data for your own benefit. They’re not attempting to justify it on the basis that defenders of law-enforcement wiretap schemes might. Those who want literal mass surveillance tend to justify it by conceding that it might hurt individuals a little bit to be spied upon, but if we spy on everyone surely we can find the bad people and stop them from doing bad things. That’s best for society.

But Microsoft isn’t saying that.2 What Microsoft is saying here is that if you’re experiencing a problem, they want to know about it so they can fix it and make the experience better for you.

I think that is at least partially true.

Part 2: I Qualify My Claims Extensively So You Jackals Don’t Lose Your Damn Minds On The Orange Website

I was inspired to write this post by the recent discussions in the Go community about how to collect telemetry, which provoked a lot of vitriol from people viscerally reacting to any telemetry as invasive surveillance. I will therefore heavily qualify what I’ve said above to try to address some of that emotional reaction in advance.

I am not suggesting that we must take Microsoft (or indeed, the Golang team) fully at their word here. Trillion dollar corporations will always deserve skepticism. I will concede in advance that it’s possible the data is put to other uses as well, possibly to maximize profits at the expense of users. But it seems reasonable to assume that this is at least partially true; it’s not like Microsoft wants Azure to be bad.

I can speak from personal experience. I’ve been in professional conversations around telemetry. When I have, my and my teams’ motivations were overwhelmingly focused on straightforwardly making the user experience good. We wanted it to be good so that they would like our products and buy more of them.

It’s hard enough to do that without nefarious ulterior motives. Most of the people who develop your software just don’t have the resources it takes to be evil about this.

Part 3: They Can’t Help You If They Can’t See You

With those qualifications out of the way, I will proceed with these axioms:

  1. The developers of software will make changes to it.
  2. These changes will benefit some users.
  3. Which changes the developers select will be derived, at least in part, from the information that they have.
  4. At least part of the information that the developers have is derived from the telemetry they collect.

If we can agree that those axioms are reasonable, then let us imagine two user populations:

  • Population A is privacy-sensitive and therefore sees telemetry as bad, and opts out of everything they possibly can.
  • Population B doesn’t care about privacy, and therefore ignores any telemetry and blithely clicks through any opt-in.

When the developer goes to make changes, they will have more information about Population B. Even if they’re vaguely aware that some users are opting out (or refusing to opt in), the developer will know far less about Population A. This means that any changes the developer makes will not serve the needs of their privacy-conscious users, which means fewer features that respect privacy as time goes on.

Part 4: Free as in Fact-Free Guesses

In the world of open source software, this problem is even worse. We often have fewer resources with which to collect and analyze telemetry in the first place, and when we do attempt to collect it, a vocal minority among those users are openly hostile, with feedback that borders on harassment. So we often have no telemetry at all, and are making changes based on guesses.

Meanwhile, in proprietary software, the user population is far larger and less engaged. Developers are not exposed directly to users and therefore cannot be harassed or intimidated into dropping their telemetry. Which means that proprietary software gains a huge advantage: they can know what most of their users want, make changes to accommodate it, and can therefore make a product better than the one based on uninformed guesses from the open source competition.

Proprietary software generally starts out with a panoply of advantages already — most of which boil down to “money” — but our collective knee-jerk reaction to any attempt to collect telemetry is a massive and continuing own-goal on the part of the FLOSS community. There’s no inherent reason why free software’s design cannot be based on good data, but our community’s history and self-selection biases make us less willing to consider it.

That does not mean we need to accept invasive data collection that is more like surveillance. We do not need to allow for stockpiled personally-identifiable information about individual users that lives forever. The abuses of indiscriminate tech data collection are real, and I am not suggesting that we forget about them.

The process for collecting telemetry must be open and transparent, and the data collected needs to be continuously vetted for safety. Clear data-retention policies should always be in place to avoid future unanticipated misuses of data that is thought to be safe today but may be de-anonymized or otherwise abused in the future.

I want the collaborative feedback process of open source development to result in this kind of telemetry: thoughtful, respectful of user privacy, and designed with the principle of least privilege in mind. If we have this kind of process, then we could hold it up as an example for proprietary developers to follow, and possibly improve the industry at large.

But in order to be able to produce that example, we must produce criticism of telemetry efforts that is specific, grounded in actual risks and harms to users, rather than a series of emotional appeals to slippery-slope arguments that do not correspond to the actual data being collected. We must arrive at a consensus that there are benefits to users in allowing software engineers to have enough information to do their jobs, and telemetry is not uniformly bad. We cannot allow a few users who are complaining to stop these efforts for everyone.

After all, when those proprietary developers look at the hard data that they have about what their users want and need, it’s clear that those who are complaining don’t even exist.


  1. Please note that I’m not saying that this automatically makes such collection ethical. Attempting to modify user behavior or conduct un-reviewed psychological experiments on your customers is also wrong. But it’s wrong in a way that is somewhat different than simply spying on them. 

  2. I am not suggesting that data collected for the purposes of improving the users’ experience could not be used against their interest, whether by law enforcement or by cybercriminals or by Microsoft itself. Only that that’s not what the goal is here. 

by Glyph at March 26, 2023 09:58 PM

March 25, 2023

Glyph Lefkowitz

What Would You Say You Do Here?

What have I been up to?

Late last year, I launched a Patreon. Although not quite a “soft” launch — I did toot about it, after all — I didn’t promote it very much.

I started this way because I realized that if I didn’t just put something up I’d be dithering forever. I’d previously been writing a sprawling monster of an announcement post that went into way too much detail, and kept expanding to encompass more and more ideas until I came to understand that salvaging it was going to be an editing process just as brutal and interminable as the writing itself.

However, that post also included a section where I just wrote about what I was actually doing.

So, for lots of reasons1, there is a diverse array of loosely related (or unrelated) projects below which may not get finished any time soon. Or, indeed, may go unfinished entirely. Some are “done enough” now, and just won’t receive much in the way of future polish.

That is an intentional choice.

The rationale, as briefly as I can manage, is: I want to lean into my strength2 of creative, divergent thinking, and see how these ideas pan out without committing to them particularly intensely. My habitual impulse, for many years, has been to lean extremely hard on strategies that compensate for my weaknesses in organization, planning, and continued focus, and to attempt to commit to finishing every project to prove that I’ll never flake on anything.

While the reward tiers for the Patreon remain deliberately ambiguous3, I think it would be fair to say that patrons will have some level of influence in directing my focus by providing feedback on these projects, and requesting that I work more on some and less on others.

So, with no further ado: what have I been working on, and what work would you be supporting if you signed up? For each project, I’ll be answering 3 questions:

  1. What is it?
  2. What have I been doing with it recently?
  3. What are my plans for it?

This. i.e. blog.glyph.im

What is it?

For starters, I write stuff here. I guess you’re reading this post for some reason, so you might like the stuff I write? I feel like this doesn’t require much explanation.

What have I done with it recently?

You might appreciate the explicitly patron-requested Potato Programming post, a screed about dataclass, or a deep dive on the difficulties of codesigning and notarization on macOS along with an announcement of a tool to remediate them.

What are my plans for it?

You can probably expect more of the same; just all the latest thoughts & ideas from Glyph.

Twisted

What is it?

If you know of me you probably know of me as “the Twisted guy” and yeah, I am still that. If, somehow, you’ve ended up here and you don’t know what it is, wow, that’s cool, thanks for coming, super interested to know what you do know me for.

Twisted is an event-driven networking engine written in Python, the precursor and inspiration for the asyncio module, and a suite of event-driven programming abstractions, network protocol implementations, and general utility code.
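If you have never seen it, here is a minimal sketch of that event-driven style: the canonical TCP echo server, not anything taken from the projects discussed below.

from twisted.internet import protocol, reactor

class Echo(protocol.Protocol):
    # Called by the reactor whenever bytes arrive on the connection.
    def dataReceived(self, data):
        self.transport.write(data)

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()

# Listen on TCP port 8000 and run the event loop.
reactor.listenTCP(8000, EchoFactory())
reactor.run()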

What have I done with it recently?

I’ve gotten a few things merged, including type annotations for getPrimes and making the bundled CLI OpenSSH server replacement work at all with public key authentication again, as well as some test cleanups that reduce the overall surface area of old-style Deferred-returning tests that can be flaky and slow.

I’ve also landed a posix_spawnp-based spawnProcess implementation which speeds up process spawning significantly; this can be as much as 3x faster if you do a lot of spawning of short-running processes.

I have a bunch of PRs in flight, too, including better annotations for FilePath, Deferred, and IReactorProcess, as well as a fix for the aforementioned posix_spawnp implementation.

What are my plans for it?

A lot of the projects below use Twisted in some way, and I continue to maintain it for my own uses. My particular focus is on quality-of-life improvements: issues that someone starting out with a Twisted project will bump into and find confusing or difficult. I want it to be really easy to write applications with Twisted, and I want my own experiences with it to guide that work.

I also do code reviews of other folks’ contributions; we do still have over 100 open PRs right now.

DateType

What is it?

DateType is a workaround for a very specific bug in the way that the datetime standard library module deals with type composition: to wit, that datetime is a subclass of date but is not Liskov-substitutable for it. There are even #type:ignore comments in the standard library type stubs to work around this problem, because if you did this in your own code, it simply wouldn’t type-check.
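To make the bug concrete, here is a small sketch of the standard library wart itself; this is the problem DateType exists to work around, not DateType’s own API, and the function is made up for illustration:

from datetime import date, datetime

def overdue(deadline: date) -> bool:
    """Has the deadline already passed?"""
    return deadline < date.today()

overdue(date(2021, 1, 1))          # True, as expected
# A type checker accepts this call too, because datetime is a subclass of
# date, but at runtime it raises:
# TypeError: can't compare datetime.datetime to datetime.date
overdue(datetime(2021, 1, 1, 12))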

What have I done with it recently?

I updated it a few months ago to expose DateTime and Time directly (as opposed to AwareDateTime and NaiveDateTime), so that users could specialize their own functions that took either naive or aware times without ugly and slightly-incorrect unions.

What are my plans for it?

This library is mostly done for the time being, but if I had to polish it a bit I’d probably do two things:

  1. a readthedocs page for nice documentation
  2. write a PEP to get this integrated into the standard library

Although the compatibility problems are obviously very tricky and a PEP would probably be controversial, this is ultimately a bug in the stdlib, and should be fixed upstream there.

Automat

What is it?

It’s a library to make deterministic finite-state automata easier to create and work with.

What have I done with it recently?

Back in the middle of last year, I opened a PR to create a new, completely different front-end API for state machine definition. Instead of something like this:

from automat import MethodicalMachine

class MachineExample:
    machine = MethodicalMachine()

    # States: a simple on/off toggle; one state must be marked as initial.
    @machine.state(initial=True)
    def on(self): ...

    @machine.state()
    def off(self): ...

    # Input: the single event this machine responds to.
    @machine.input()
    def flip(self): ...

    # Output: the side effect performed on each transition.
    @machine.output()
    def _do_flip(self): return ...

    on.upon(flip, enter=off, outputs=[_do_flip], collector=list)
    off.upon(flip, enter=on, outputs=[_do_flip], collector=list)

this branch lets you instead do something like this:

from typing import Protocol

# Note: TypicalBuilder is the new API from the in-progress Automat branch
# described above; its eventual import location may differ.

class MachineProtocol(Protocol):
    def flip(self) -> None: ...

class MachineCore: ...

def buildCore() -> MachineCore: ...

machine = TypicalBuilder(MachineProtocol, buildCore)

@machine.state()
class _OffState:
    @machine.handle(MachineProtocol.flip, enter=lambda: _OnState)
    def flip(self) -> None: ...

@machine.state()
class _OnState:
    @machine.handle(MachineProtocol.flip, enter=lambda: _OffState)
    def flip(self) -> None: ...

MachineImplementation = machine.buildClass()

In other words, it creates a type for every state, and that type-safety much more cleanly expresses what methods can be called and by whom; there’s no need to make everything private with tons of underscore-prefixed methods and attributes, since all the caller can see is “an implementation of MachineProtocol”, and your state classes can otherwise just be normal classes, which do not require special logic to be instantiated if you want to use them directly.

Also, by making a type for every state, it’s a lot cleaner to express that certain methods require certain attributes: simply make them available as attributes on that state class and then require an argument of that state type; you don’t need to plot your way through the outputs generated in your state graph.

What are my plans for it?

I want to finish up dealing with some issues with that branch - particularly the ugly patterns for communicating portions of the state core to the caller and also the documentation; there are a lot of magic signatures which make sense in heavy usage but are a bit mysterious to understand while you’re getting started.

I’d also like the visualizer to work on it, which it doesn’t yet, because the visualizer cribs a bunch of state from MethodicalMachine when it should be working purely on core objects.

Secretly

What is it?

This is an attempt at a holistic, end-to-end secret-management wrapper around Keyring. Whereas Keyring handles password storage, this handles the whole lifecycle around it: looking up the secret to see if it’s there, and displaying UI to prompt the user for it if it isn’t (leveraging a pinentry program from GPG if available).

What have I done with it recently?

It’s been a long time since I touched it.

What are my plans for it?

  • Documentation. It’s totally undocumented.
  • It could be written to be a bit more abstract. It dates from a time before asyncio, so its current Twisted requirement for Deferred could be made into a generic Awaitable one.
  • Better platform support for Linux & Windows when GPG’s pinentry is not available.
  • Support for multiple accounts so that when the user is prompted for the relevant credential, they can store it.
  • Integration with 1Password via some of their many potentially relevant APIs.

Fritter

What is it?

Fritter is a frame-rate independent timer tree.

In the course of developing Twisted, I learned a lot about time and timers. LoopingCall encodes some of this knowledge, but it’s very tightly coupled to the somewhat limited IReactorTime API.

Also, LoopingCall was originally designed with the needs of media playback (particularly network streaming audio playback) in mind, but I have used it more for background maintenance tasks and for animations. Both of these things have requirements that LoopingCall makes awkward but FRITTer is designed to meet:

  1. At higher loads, surprising interactions can occur with the underlying priority queue implementation, and different algorithms may make a significant difference to performance. Fritter has a pluggable implementation of a priority queue and is carefully minimally coupled to it.

  2. Driver selection is a first-class part of the API, with an included, public “Memory” driver for testing, rather than LoopingCall’s “testing is at least possible” .reactor attribute. This means that out of the box it supports both Twisted and asyncio, and can easily have other things added.

  3. The API is actually generic on what constitutes time itself, which means that you can use it for both short-term (i.e.: monotonic clock values as float-seconds) and long-term (civil times as timezone-aware datetime objects) recurring tasks. Recurrence rules can also be arbitrary functions. (A toy sketch of this genericity follows this list.)

  4. There is a recursive driver (this is the “tree” part) which both allows for:

    a. groups of timers which can be suspended and resumed together, and

    b. scaling of time, so that you can e.g. speed up or slow down the ticks for AIs, groups of animations, and so on, also in groups.

  5. The API is also generic on what constitutes work. This means that, for example, in a certain timer you can say “all work units scheduled on this scheduler, in addition to being callable, must also have an asJSON method”. And in fact that’s exactly what the longterm module in Fritter does.
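To illustrate the “generic on time” idea from point 3 above, here is a toy scheduler that is generic over its notion of time; this is a conceptual illustration only, written from scratch for this description, not Fritter’s actual API:

from dataclasses import dataclass, field
from datetime import datetime, timezone
from time import monotonic
from typing import Callable, Generic, List, Tuple, TypeVar

T = TypeVar("T")  # "time" can be anything orderable: seconds, datetimes, turns

@dataclass
class ToyScheduler(Generic[T]):
    now: Callable[[], T]
    _pending: List[Tuple[T, Callable[[], None]]] = field(default_factory=list)

    def call_at(self, when: T, work: Callable[[], None]) -> None:
        self._pending.append((when, work))

    def advance(self) -> None:
        # Run everything scheduled at or before the current time, in order.
        current = self.now()
        due = [p for p in self._pending if p[0] <= current]
        self._pending = [p for p in self._pending if p[0] > current]
        for _, work in sorted(due, key=lambda p: p[0]):
            work()

# The same scheduler works with a monotonic float clock...
realtime = ToyScheduler(now=monotonic)
# ...or with timezone-aware datetimes for long-term, civil-time scheduling.
civil = ToyScheduler(now=lambda: datetime.now(timezone.utc))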

I can neither confirm nor deny that this project was factored out of a game engine for a secret game project which does not appear on this list.

What have I done with it recently?

Besides realizing, in the course of writing this blog post, that its CI was failing its code quality static checks (oops), the last big change was the preliminary support for recursive timers and serialization.

What are my plans for it?

  • These haven’t been tested in anger yet and I want to actually use them in a larger project to make sure that they don’t have any necessary missing pieces.

  • Documentation.

Encrust

What is it?

I have written about Encrust quite recently so if you want to know about it, you should probably read that post. In brief, it is a code-shipping tool for py2app. It takes care of architecture-independence, code-signing, and notarization.

What have I done with it recently?

Wrote it. It’s brand new as of this month.

What are my plans for it?

I really want this project to go away as a tool with an independent existence. Either I want its lessons to be fully absorbed into Briefcase or perhaps py2app itself, or for it to become a library that those things call into to do its thing.

Various Small Mac Utilities

What is it?

  • QuickMacApp is a very small library for creating status-item “menu bar apps” in Python which don’t have much of a UI but want to run some Python code in the background and occasionally pop up a notification or ask the user a question or something. The idea is that if you have a utility that needs a minimal UI to just ask the user one or two things, you should be able to give it a GUI immediately, without thinking about it too much.
  • QuickMacHotkey is a very minimal API to register hotkeys on macOS. This example is what comes up if you search the web for such a thing, but it hasn’t worked on a current Python for about 11 years. This isn’t the “right” way to do such a thing, since it provides no UI to set the shortcut; you’d have to hard-code it. But MASShortcut is now archived and I haven’t had the opportunity to investigate HotKey, so for the time being, it’s a handy thing, and totally adequate for the sort of quick-and-dirty applications you might make with QuickMacApp.
  • VEnvDotApp is a way of giving a virtualenv its own Info.plist and bundle ID, so that command-line python tools that just need to pop up a little mac GUI, like an alert or a notification, can do so with cross-platform tools without looking like it’s an app called “Python”, or in some cases breaking entirely.
  • MOPUp is a command-line updater for upstream Python.org macOS Python. For distributing third-party apps, Python.org’s version is really the one you want to use (it’s universal2, and it’s generally built with compiler options that make it a distributable thing itself) but updating it by downloading a .pkg file from a web browser is kind of annoying.

What have I done with it recently?

I’ve been releasing all these tools as they emerge and are factored out of other work, and they’re all fairly recent.

What are my plans for it?

I will continue to factor out any general-purpose tools from my platform-specific Python explorations — hopefully more Linux and Windows too, once I’ve got writing code for my own computer down, but most of the tools above are kind of “done” on their own, at the moment.

The two things that come to mind though are that QuickMacApp should have a way of owning the menubar sometimes (if you don’t have something like Bartender, menu-bar-status-item-only apps can look like they don’t do anything when you launch them), and that MOPUp should probably be upstreamed to python.org.

Pomodouroboros

What is it?

Pomodouroboros is a pomodoro timer with a highly opinionated take. It’s based on my own experience of ADHD time blindness, and is more like a therapeutic intervention for that specific condition than a typical “productivity” timer app.

In short, it has two important features that I have found lacking in other tools:

  1. A gigantic, absolutely impossible to ignore visual timer that presents a HUD overlay over your entire desktop. It remains low-opacity and static most of the time but pulses every 30 seconds to remind you that time is passing.
  2. Rather than requiring you to remember to set a timer before anything happens, it has an idea of “work hours” when you want to be time-sensitive and presents constant prompting to get started.

What have I done with it recently?

I’ve been working on it fairly consistently lately. The big things I’ve been doing have been:

  1. factoring things out of the Pomodouroboros-specific code and into QuickMacApp and Encrust.
  2. Porting the UI to the redesigned core of the application, which has been implemented and tested in platform-agnostic Python but does not have any UI yet.
  3. fully productionizing the build process and ensuring that Encrust is producing binary app bundles that people can use.

What are my plans for it?

In brief, “finish the app”. I want this to have its own website and find a life beyond the Python community, with people who just want a timer app and don’t care how it’s written. The top priority is to replace the current data model, which is to say the parts of the UI that set and evaluate timers and edit the list of upcoming timers (the timer countdown HUD UI itself is fine).

I also want to port it to other platforms, particularly desktop Linux, where I know there are many users interested in such a thing. I also want to do a CLI version for folks who live on the command line.

Finally: Pomodouroboros serves as a test-bed for a larger goal, which is that I want to make it easier for Python programmers, particularly beginners who are just getting into coding at all, to write code that not only interacts with their own computer, but that they can share with other users in a real way. As you can see with Encrust and other projects above, as much as I can I want my bumpy ride to production code to serve as trailblazing so that future travelers of this path find it as easy as possible.

And Here Is Where The CTA Goes

If this stuff sounds compelling, you can obviously sign up, that would be great. But also, if you’re just curious, go ahead and give some of these projects some stars on GitHub or just share this post. I’d also love to hear from you about any of this!

If a lot of people find this compelling, then pursuing these ideas will become a full-time job, but I’m pretty far from that threshold right now. In the meanwhile, I will also be doing a bit of consulting work.

I believe much of my upcoming month will be spoken for with contracting, although quite a bit of that work will also be open source maintenance, for which I am very grateful to my generous clients. Please do get in touch if you have something more specific you’d like me to work on, and you’d like to become one of those clients as well.


  1. Reasons which will have to remain mysterious until I can edit about 10,000 words of abstract, discursive philosophical rambling into something vaguely readable. 

  2. A strength which is common to many, indeed possibly most, people with ADHD. 

  3. While I want to give myself some leeway to try out ideas without necessarily finishing them, I do not want to start making commitments that I can’t keep. Particularly commitments that are tied to money! 

by Glyph at March 25, 2023 06:19 AM

March 18, 2023

Glyph Lefkowitz

Building And Distributing A macOS Application Written in Python

Why Bother With All This?

In other words: if you want to run on an Apple platform, why not just write everything in an Apple programming language, like Swift? If you need to ship to multiple platforms, you might have to rewrite it all anyway, so why not give up?

Despite the significant investment that platform vendors make in their tools, I fundamentally believe that the core logic in any software application ought to be where its most important value lies. For small, independent developers, having portable logic that can be faithfully replicated on every platform without massive rework might be tricky to get started with, but if you can’t do it, it may not be cost effective to support multiple platforms at all.

So, it makes sense for me to write my applications in Python to achieve this sort of portability, even though on each platform it’s going to be a little bit more of a hassle to get it all built and shipped since the default tools don’t account for the use of Python.

But how much more is “a little bit” more of a hassle? I’ve been slowly learning about the pipeline to ship independently-distributed1 macOS applications for the last few years, and I’ve encountered a ton of annoying roadblocks.

Didn’t You Do This Already? What’s New?

So nice of you to remember. Thanks for asking. While I’ve gotten this to mostly work in the past, some things have changed since then:

  • the notarization toolchain has been updated (altool is now notarytool),
  • I’ve had to ship libraries other than just PyGame,
  • Apple Silicon launched, necessitating another dimension of build complexity to account for multiple architectures,
  • Perhaps most significantly, I have written a tool that attempts to encode as much of this knowledge as possible, Encrust, which I have put on PyPI and GitHub. If this is of interest to you, I would encourage you to file bugs on it, and hopefully add in more corner cases which I have missed.

I’ve also recently shipped my first build of an end-user application that successfully launches on both Apple Silicon and Intel macs, so here is a brief summary of the hoops I needed to jump through, from the beginning, in order to make everything work.

Wait did you say you wrote a tool? Is this fixed, then?

Encrust is, I hope, a temporary stopgap on the way to a much better comprehensive solution.

Specifically, I believe that Briefcase is a much more holistic solution to the general problem being described here, but it doesn’t suit my very specific needs right now4, and it doesn’t address a couple of minor points that I was running into here.

Encrust is mostly glue, shelling out to other tools that already solve portions of the problem, even when better APIs exist. It addresses three very specific layers of complexity:

  1. It enforces architecture independence, so that your app built on an M1 machine will still actually run on about half of the macs remaining out there2.
  2. It remembers tricky nuances of the notarization submission process, such as the highly specific way I need to generate my zip files to avoid mysterious notarization rejections3.
  3. It provides a common and central way to store the configuration for these things across repositories, so I don’t need to repeat this process and copy/paste a shell script every time I make a tiny new application.

It only works on Apple Silicon macs, because I didn’t bother to figure out how pip actually determines which architecture to download wheels for.

As such, unfortunately, Encrust is mostly a place for other people who have already solved this problem to collaborate to centralize this sort of knowledge and share ideas about where this code should ultimately go, rather than a tool for users trying to get started with shipping an app.

Open Offer

That said:

  1. I want to help Python developers ship their Python apps to users who are not also Python developers.
  2. macOS is an even trickier platform to do that on than most.
  3. It’s now easy for me to sign, notarize, and release new applications reliably

Therefore:

If you have an open source Python application that runs on macOS5 but can’t ship to macOS — either because:

  1. you’ve gotten stuck on one of the roadblocks that this post describes,
  2. you don’t have $100 to give to Apple, or because
  3. the app is using a cross-platform toolkit that should work just fine and you don’t have access to a mac at all, then

Send me an email and I’ll sign and post your releases.

What’s this post about, then?

People still frequently complain that “Python packaging” is really bad. And I’m on record that packaging Python (in the sense of “code”) for Python (in the sense of “deployment platform”) is actually kind of fine right now; if what you’re trying to get to is a package that can be pip installed, you can have a reasonably good experience modulo a few small onboarding hiccups that are well-understood in the community and fairly easy to overcome.

However, it’s still unfortunately hard to get Python code into the hands of users who are not also Python programmers with their own development environments.

My goal here is to document the difficulties themselves to try to provide a snapshot of what happens if you try to get started from scratch today. I think it is useful to record all the snags and inscrutable error messages that you will hit in a row, so we can see what the experience really feels like.

I hope that everyone will find it entertaining.

  • Other Mac python programmers might find pieces of trivia useful, and
  • Linux users will have fun making fun of the hoops we have to jump through on Apple platforms,

but the main audience is the maintainers of tools like Briefcase and py2app to evaluate the new-user experience holistically, and to see how much the use of their tools feels like this. This necessarily includes the parts of the process that are not actually packaging.

This is why I’m starting from the beginning again, and going through all the stuff that I’ve discussed in previous posts again, to present the whole experience.

Here Goes

So, with no further ado, here is a non-exhaustive list of frustrations that I have encountered in this process:

  • Okay. Time to get started. How do I display a GUI at all? Nothing happens when I call some nominally GUI API. Oops: I need my app to exist in an app bundle, which means I need to have a framework build. Time to throw those partially-broken pyenv pythons in the trash, and carefully sidestep around Homebrew; best to use the official Python.org from here on out.
  • Bonus Frustration since I’m using AppKit directly: why is my app segfaulting all the time? Oh, target is a weak reference in objective C, so if I make a window and put a button in it that points at a Python object, the Python interpreter deallocates it immediately because only the window (which is “nothing” as it’s a weakref) is referring to it. I need to start stuffing every Python object that talks to a UI element like a window or a button into a global list, or manually calling .retain() on all of them and hoping I don’t leak memory.
  • Everything seems to be using the default Python Launcher icon, and the app menu says “Python”. That wouldn’t look too good to end users. I should probably have my own app.
  • I’ll skip the part here where the author of a new application might have to investigate py2app, briefcase, pyoxidizer, and pyinstaller separately and try to figure out which one works the best right now. As I said above, I started with py2app and I’m stubborn to a fault, so that is the one I’m going to make work.
  • Now I need to set up py2app. Oops, I can’t use pyproject.toml any more, time to go back to setup.py.
  • Now I built it and the app is crashing on startup when I click on it. I can’t see a traceback anywhere, so I guess I need to do something in the console.
    • Wow; the console is an unusable flood of useless garbage. Forget that.
    • I guess I need to run it in the terminal somehow. After some googling I figure out it’s ./dist/MyApp.app/Contents/Resources/MacOS/MyApp. Aha, okay, I can see the traceback now, and it’s … an import error?
    • Ugh, py2app isn’t actually including all of my code; it’s using some magic to figure out which modules are actually used, but it’s doing it by traversing import statements, which means I need to put a bunch of fake static import statements for everything that is used indirectly at the top of my app’s main script so that it gets found by the build. I experimentally discover half a dozen things that are dynamically imported inside libraries that I use and jam them all in there.
  • Okay. Now at least it starts up. The blank app icon is uninspiring, though, time to actually get my own icon in there. Cool, I’ll make an icon in my favorite image editor, and save it as... icons must be PNGs, right? Uhh... no, looks like they have to be .icns files. But luckily I can convert the PNG I saved with a simple 12-line shell script that invokes sips and iconutil6.
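
For the record, the 12-line script alluded to in that last bullet boils down to something like the following Python equivalent; the source filename, sizes, and iconset name here are illustrative, not anything py2app requires:

import subprocess
from pathlib import Path

# Rough Python equivalent of the sips/iconutil shell script mentioned above.
# "icon-1024.png" and "MyApp.iconset" are illustrative names.
source = "icon-1024.png"
iconset = Path("MyApp.iconset")
iconset.mkdir(exist_ok=True)

for size in (16, 32, 128, 256, 512):
    for scale in (1, 2):
        suffix = "" if scale == 1 else "@2x"
        out = iconset / f"icon_{size}x{size}{suffix}.png"
        # sips scales the master PNG down to each size macOS expects.
        subprocess.run(
            ["sips", "-z", str(size * scale), str(size * scale), source, "--out", str(out)],
            check=True,
        )

# iconutil packs the .iconset directory into a single MyApp.icns file.
subprocess.run(["iconutil", "-c", "icns", str(iconset)], check=True)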

At this point I have an app bundle which kinda works. But in order to run on anyone else’s computer, I have to code-sign it.

  • In order to code-sign anything, I have to have an account with Apple that costs $99 per year, on developer.apple.com.
  • The easiest way to get these certificates is to log in to Xcode itself. There’s a web portal too but using it appears to involve a lot more manual management of key material, so, no thanks. This requires the full-fat Xcode.app though, not just the command-line tools that come down when I run xcode-select --install, so, time to wait for an 11GB download.
  • Oops, I made the wrong certificate type. Apparently the only right answer here is a “Developer ID Application” certificate.
  • Now that I’ve logged in to Xcode to get the certificate, I need to figure out how to tell my command-line tools about it (for starters, “codesign”). Looks like I need to run security find-identity -v -p codesigning.
  • Time to sign the application’s code.
    • The codesign tool has a --deep option which can sign the whole bundle. Great!
    • Except, that doesn’t work, because Python ships shared libraries in locations that macOS doesn’t automatically expect, so I have to manually locate those files and sign them, invoking codesign once for each.
    • Also, --deep is deprecated. There’s no replacement.
    • Logically, it seems like I still need --deep, because it does some poorly-explained stuff with non-code resource files that maybe doesn’t happen properly if I don’t? Oh well. Let's drop the option and hope for the best.8
    • With a few heuristics I think we can find all the relevant files with a little script7.
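
To make that a bit more concrete, here is a minimal sketch of the kind of thing that little script does; the signing identity and entitlements filename are placeholders, and the “every .so and .dylib, then the bundle itself” heuristic is deliberately simplistic (see the footnote about .a files and the embedded interpreter):

import subprocess
from pathlib import Path

# Placeholders: use your own identity string and entitlements file.
IDENTITY = "Developer ID Application: Example Developer (XXXXXXXXXX)"
BUNDLE = Path("dist/MyApp.app")

def sign(path: Path) -> None:
    subprocess.run(
        [
            "codesign", "--force", "--timestamp",
            "--options", "runtime",                  # hardened runtime; see below
            "--entitlements", "entitlements.plist",  # also needed for notarization
            "--sign", IDENTITY, str(path),
        ],
        check=True,
    )

# Sign the embedded shared libraries first, then the bundle as a whole.
for pattern in ("*.so", "*.dylib"):
    for library in sorted(BUNDLE.rglob(pattern)):
        sign(library)

sign(BUNDLE)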

Now my app bundle is signed! Hooray. 12 years ago, I’d be all set. But today I need some additional steps.

  • After I sign my app, Apple needs to sign my app (to indicate they’ve checked it for malware), which is called “notarization”.
    • In order to be eligible for notarization, I can’t just code-sign my app. I have to code-sign it with entitlements.
    • Also I can’t just code sign it with entitlements, I have to sign it with the hardened runtime, or it fails notarization.
    • Oops, out of the box, the hardened runtime is incompatible with a bunch of stuff in Python, including cffi and ctypes, because nobody has implemented support for MAP_JIT yet, so it crashes at startup. After some thrashing around I discover that I need a legacy “allow unsigned executable memory” entitlement. I can’t avoid importing this because a bunch of things in py2app’s bootstrapping code import things that use ctypes, and probably a bunch of packages which I’m definitely going to need, like cryptography require cffi directly anyway.
    • In order to set up notarization external to Xcode, I need to create an App Password which is set up at appleid.apple.com, not the developer portal.
    • Bonus Frustration since I’ve been doing this for a few years: Originally this used to be even more annoying as I needed to wait for an email (with altool), and so I couldn’t script it directly. Now, at least, the new notarytool (which will shortly be mandatory) has a --wait flag.
    • Although the tool is documented under man notarytool, I actually have to run it as xcrun notarytool, even though codesign can be run either directly or via xcrun codesign.
    • Great, we’re ready to zip up our app and submit to Apple. Wait, they’re rejecting it? Why???
    • Aah, I need to manually copy and paste the UUID in the console output of xcrun notarytool submit into xcrun notarytool log to get some JSON that has some error messages embedded in it.
    • Oh. The bundle contains internal symlinks, so when I zipped it without the -y option, I got a corrupt archive.
    • Great, resubmitted with zip -y.
    • Oops, just kidding, that only works sometimes. Later, a different submission with a different hash will fail, and I’ll learn that the correct command line is actually ditto -c -k --sequesterRsrc --keepParent MyApp.app MyApp.app.zip.
      • Note that, for extra entertainment value, the position of the archive itself and directory are reversed on the command line from zip (and tar, and every other archive tool).
    • notarytool doesn’t record anything in my app though; it puts the “notarization ticket” on Apple's servers. Apparently, I still need to run stapler for users to be able to launch it while those servers are inaccessible, like, for example, if they’re offline.
    • Oops, not stapler. xcrun stapler. Whatever.
    • Except notarytool operates on a zip archive, but stapler operates on an app bundle. So we have to save the original app bundle, run stapler on it, then re-archive the whole thing into a new archive.
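
Strung together, the archive / submit / staple dance from the bullets above looks roughly like the following sketch; the keychain profile name is a placeholder for credentials stored once beforehand with xcrun notarytool store-credentials, and everything else is just the commands already described, glued together with Python:

import subprocess

APP = "dist/MyApp.app"
ZIP = "dist/MyApp.app.zip"
PROFILE = "AC_PROFILE"  # placeholder keychain profile name

def run(*argv: str) -> None:
    subprocess.run(argv, check=True)

# ditto, not zip; note that the archive name comes *last* on this command line.
run("ditto", "-c", "-k", "--sequesterRsrc", "--keepParent", APP, ZIP)

# Submit and block until Apple’s service reports a verdict.
run("xcrun", "notarytool", "submit", ZIP, "--keychain-profile", PROFILE, "--wait")

# Staple the ticket onto the original bundle, then re-archive for distribution.
run("xcrun", "stapler", "staple", APP)
run("ditto", "-c", "-k", "--sequesterRsrc", "--keepParent", APP, ZIP)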

Hooray! Time to release my great app!

  • Whoops, just got a bug report that it crashes immediately on every Intel mac. What’s going on?
  • Turns out I’m using a library whose authors distribute both aarch64 and x86_64 wheels; pip will prefer single-architecture wheels even if universal2 wheels are also available, so I’ve got to somehow get fat binaries put together. Am I going to have to build a huge pile of C code by myself? I thought all these Python hassles would at least let me avoid the C hassles!
  • Whew, okay, no need for that: there’s an amazing Swiss-army knife for macOS binary wheels, called delocate that includes a delocate-fuse tool that can fuse two wheels together. So I just need to figure out which binaries are the wrong architecture and somehow install my fixed/fused wheels before building my app with py2app.

    • except, oops, this tool just rewrites the file in-place without even changing its name, so I have to write some janky shell scripts to do the reinstallation. Ugh.
  • OK now that all that is in place, I just need to re-do all the steps:

    • universal2-ize my virtualenv!
    • build!
    • sign!
    • archive!
    • notarize!
    • wait!!!
    • staple!
    • re-archive!
    • upload!

And we have an application bundle we can ship to users.

It’s just that easy.

As long as I don’t need sandboxing or Mac App Store distribution, of course. That’s a challenge for another day.


So, that was terrible. But what should be happening here?

Some of this is impossible to simplify beyond a certain point - many of the things above are not really about Python, but are about distribution requirements for macOS specifically, and we in the Python community can’t affect operating system vendors’ tooling.

What we can do is build tools that produce clear guidance on what step is required next, handle edge cases on their own, and generally guide users through these complex processes without requiring them to hit weird binary-format or cryptographic-signing errors on their own with no explanation of what to do next.

I do not think that documentation is the answer here. The necessary steps should be discoverable. If you need to go to a website, the tool should use the webbrowser module to open a website. If you need to launch an app, the tool should launch that app.

With Encrust, I am hoping to generalize the solutions that I found while working on this for this one specific slice of the app distribution pipeline — i.e. a macOS desktop application, as distributed independently and not through the mac app store — but other platforms will need the same treatment.

However, even without really changing py2app or any of the existing tooling, we could imagine a tool that would interactively prompt the user for each manual step, automate as much of it as possible, verify that it was performed correctly, and give comprehensible error messages if it was not.
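
As a small, purely illustrative sketch of what that could look like for a single step: a tool could check for a signing identity itself and, if none is present, explain the problem and open the relevant page, rather than failing with a cryptic error much later. The messages and URL below are mine, not any existing tool’s behavior.

import subprocess
import webbrowser

# Does the keychain contain a "Developer ID Application" signing identity?
result = subprocess.run(
    ["security", "find-identity", "-v", "-p", "codesigning"],
    capture_output=True, text=True,
)
if "Developer ID Application" not in result.stdout:
    print("No “Developer ID Application” certificate found in your keychain.")
    print("You’ll need a (paid) Apple Developer account; opening the site now.")
    webbrowser.open("https://developer.apple.com/")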

For a lot of users, this full code-signing journey may not be necessary; if you just want to run your code on one or two friends’ computers, telling them to right click, go to ‘open’ and enter their password is not too bad. But it may not even be clear to them what the trade-off is, exactly; it looks like the app is just broken when you download it. The app build pipeline should tell you what the limitations are.

Other parts of this just need bug-fixes to address. py2app specifically, for example, could have a better self-test for its module-collecting behavior, launching an app to make sure it didn’t leave anything out.

Interactive prompts to set up a Homebrew tap, or a Flatpak build, or a Microsoft Store Metro app, might be similarly useful. These all have outside-of-Python required manual steps, and all of them are also amenable to at least partial automation.


Thanks to my patrons on Patreon for supporting this sort of work, including development of Encrust, of Pomodouroboros, of posts like this one and of that offer to sign other people’s apps. If you think this sort of stuff is worthwhile, you might want to consider supporting me over there as well.


  1. I am not even going to try to describe building a sandboxed, app-store ready application yet. 

  2. At least according to the Steam Hardware Survey, which as of this writing in March of 2023 pegs the current user-base at 54% apple silicon and 46% Intel. The last version I can convince the Internet Archive to give me, from December of 2022, has it closer to 51%/49%, which suggests a transition rate of 1% per month. I suspect that this is pretty generous to Apple Silicon as Steam users would tend to be earlier adopters and more sensitive to performance, but mostly I just don’t have any other source of data. 

  3. It is truly remarkable how bad the error reporting from the notarization service is. There are dozens of articles and forum posts around the web like this one where someone independently discovers this failure mode after successfully notarizing a dozen or so binaries and then suddenly being unable to do so any more because one of the bytes in the signature is suddenly not valid UTF-8 or something. 

  4. A lot of this is probably historical baggage; I started with py2app in 2008 or so, and I have been working on these apps in fits and starts for… ugh… 15 years. At some point when things are humming along and there are actual users, a more comprehensive retrofit of the build process might make sense, but right now I just want to stop thinking about this. 

  5. If your application isn’t open source, or if it requires some porting work, I’m also available for light contract work, but it might take a while to get on my schedule. Feel free to reach out as well, but I am not looking to spend a lot of time doing porting work. 

  6. I find this particular detail interesting; it speaks to the complexity and depth of this problem space that this has been a known issue for several years in Briefcase, but there’s just so much other stuff to handle in the release pipeline that it remains open. 

  7. I forgot both .a files and the py2app-included python executable itself here, and had to discover that gap when I signed a different app where that made a difference. 

  8. Thus far, it seems to be working. 

by Glyph at March 18, 2023 09:45 PM

Hynek Schlawack

How to Automatically Switch to Rosetta With Fish and Direnv

I love my Apple silicon computer, but having to manually switch to Rosetta-enabled shells for my Intel-only projects was a bummer.

by Hynek Schlawack (hs@ox.cx) at March 18, 2023 12:00 PM

February 14, 2023

Glyph Lefkowitz

Data Classification

Is there a place for non-@dataclass classes in Python any more?

I have previously — and somewhat famously — written favorably about @dataclass’s venerable progenitor, attrs, and how you should use it for pretty much everything.

At the time, attrs was an additional dependency, a piece of technology that you could bolt on to your Python stack to make your particular code better. While I advocated for it strongly, there are all the usual implicit reasons against using a new thing. It was an additional dependency, it might not interoperate with other convenience mechanisms for type declarations that you were already using (i.e. NamedTuple), it might look weird to other Python programmers familiar with existing tools, and so on. I don’t think that any of these were good counterpoints, but there was nevertheless a robust discussion to be had in addressing them all.

But for many years now, dataclasses have been — and currently are — built in to the language. They are increasingly integrated to the toolchain at a deep level that is difficult for application code — or even other specialized tools — to replicate. Everybody knows what they are. Few or none of those reasons apply any longer.

For example, classes defined with @dataclass are now optimized as a C structure might be when you compile them with mypyc, a trick that is extremely useful in some circumstances, which even attrs itself now has trouble keeping up with.

This all raises the question for me: beyond backwards compatibility, is there any point to having non-@dataclass classes any more? Is there any remaining justification for writing them in new code?

Consider my original example, translated from attrs to dataclasses. First, the non-dataclass version:

class Point3D:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

And now the dataclass one:

from dataclasses import dataclass

@dataclass
class Point3D:
    x: int
    y: int
    z: int

Many of my original points still stand. It’s still less repetitive. In fewer characters, we’ve expressed considerably more information, and we get more functionality (a useful repr and equality by default, plus ordering and hashing as opt-in flags). There doesn’t seem to be much of a downside besides the strictness of the types, and if typing.Any were a builtin, x: any would be fine for those who don’t want to unduly constrain their code.
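
To make that concrete, here is roughly what the decorated version gives you out of the box (ordering and hashability arrive via order=True and frozen=True or unsafe_hash=True, rather than by default):

from dataclasses import dataclass

@dataclass
class Point3D:
    x: int
    y: int
    z: int

p = Point3D(1, 2, 3)
print(p)                      # Point3D(x=1, y=2, z=3)
print(p == Point3D(1, 2, 3))  # True; the hand-written version compares by identity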

The one real downside of the latter over the former right now is the need for an import. Which, at this point, just seems… confusing? Wouldn’t it be nicer to be able to just write this:

class Point3D:
    x: int
    y: int
    z: int

and not need to faff around with decorator semantics and fudging the difference between Mypy (or Pyright or Pyre) type-check-time and Mypyc or Cython compile time? Or even better, to not need to explain the complexity of all these weird little distinctions to new learners of Python, and to not have to cover import before class?

These tools all already treat the @dataclass decorator as a totally special language construct, not really like a decorator at all, so to really explore it you have to explain a special case and then a special case of a special case. The extension hook for this special case of the special case notwithstanding.

If we didn’t want any new syntax, we would need a from __future__ import dataclassification or some such for a while, but this doesn’t seem like an impossible bar to clear.


There are still some folks who don’t like type annotations at all, and there’s still the possibility of awkward implicit changes in meaning when transplanting code from a place with dataclassification enabled to one without, so perhaps an entirely new unambiguous syntax could be provided. One that more closely mirrors the meaning of parentheses in def, moving inheritance (a feature which, whether you like it or not, is clearly far less central to class definitions than ‘what fields do I have’) off to its own part of the syntax:

data Point3D(x: int, y: int, z: int) from Vector:
    def method(self):
        ...

which, for the “I don’t like types” contingent, could reduce to this in the minimal case:

data Point3D(x, y, z):
    pass

Just thinking pedagogically, I find it super compelling to imagine moving from teaching def foo(x, y, z):... to data Foo(x, y, z):... as opposed to @dataclass class Foo: x: int....

I don’t have any desire for semantic changes to accompany this, just to make it possible for newcomers to ignore the circuitous historical route of the @dataclass syntax and get straight into defining their own types with legible reprs from the very beginning of their Python journey.

(And make it possible for me to skip a couple of lines of boilerplate in short examples, as a bonus.)


I’m curious to know what y’all think, though. Shoot me an email or a toot and let me know.

In particular:

  1. Do you think there’s some reason I’m missing why Python’s current method for defining classes via a bunch of dunder methods is still better than dataclasses, or should stick around into the future for reasons beyond “compatibility”?
  2. Do you think “compatibility” is sufficient reason to keep the syntax the way it is forever, and I’m underestimating the cost of adding a keyword like this?
  3. If you do think that a change should be made, would you prefer:
    1. changing the meaning of class itself via a __future__ import,
    2. a new data keyword like the one I’ve proposed,
    3. a new keyword that functions exactly like the one I have proposed, but you really want to bikeshed the word data a bunch,
    4. something more incremental like just putting dataclass and field in builtins,
    5. or an option I haven’t even contemplated here?

If I find I’m not alone in this perhaps I will wander over to the Python discussion boards to have a more substantive conversation...


Thank you to my patrons who are helping me while I try to turn… whatever this is… along with open source maintenance and application development, into a real job. Do you want to see me pursue ideas like this one further? If so, you can support me on Patreon as well!

by Glyph at February 14, 2023 12:44 AM

February 04, 2023

Thomas Vander Stichele

Meet Me in the Bathroom

"Welcome to pre-9/11 New York City, when the world was unaware of the profound political and cultural shifts about to occur, and an entire generation was thirsty for more than the post–alternative pop rock plaguing MTV. In the cafés, clubs, and bars of the Lower East Side there convened a group of outsiders and misfits full of ambition and rock star dreams."

Music was the main reason I wanted to move to New York - I wanted to walk the same streets that the Yeah Yeah Yeahs, the National, Interpol, the Walkmen, the Antlers and Sonic Youth were walking. In my mind they'd meet up and have drinks with each other at the same bars, live close to each other, and I'd just run into them all the time myself. I'm not sure that romantic version of New York ever existed. Paul Banks used to live on a corner inbetween where I live and where my kids go to school now, but that is two decades ago (though for a while, we shared a hairdresser). On one of my first visits to New York before moving here, I had a great chat with Thurston Moore at a café right before taking the taxi back to the airport. And that's as close as I got to living my dream.

But now the documentary "Meet me in the Bathroom" (based on the book of the same name) shows that version of New York that only existed for a brief moment in time.

"Meet Me In The Bathroom — ??inspired by Lizzy Goodman’s book of the same name — chronicles the last great romantic age of rock ’n’ roll through the lens of era-defining bands."

Read the book, watch the documentary (available on Google Play among other platforms), or listen to the Spotify playlist Meet Me in the Bathroom: Every Song From The Book In Chronological Order. For bonus points, listen to Losing My Edge (every band mentioned in the LCD Soundsystem song in the order they're mentioned)

Taken from The Playlist - a curated perspective on the intersection of form and content (subscribe, discuss)


by Thomas at February 04, 2023 02:32 AM

How to Support Through Layoffs

The number one question I've gotten in the past week has been "how can I support people affected by the layoff?"

First, some general advice:

  • For everyone, remember to "comfort in, dump out" - vent your frustrations to people less affected, support those who are more affected than you.
  • To support Xooglers, don't ask "how can I help?" as that places more burden on them. Offer specific ways you can help them - "Can I write you a Linkedin recommendation? Can I connect you with this person I know at this company who's hiring?". People are affected disproportionally, and if you want to prioritize your help, consider starting with the people on a visa who are now on a tight deadline to find a new sponsor or face leaving the country.
  • To support your colleagues still here, remember we're not all having the same experience. In particular, people outside of the US will be in limbo for weeks or months to come. People can be anywhere on a spectrum of "long time Googler, first mass layoff" to "I've had to go through worse". Don't assume, lead with curiosity, and listen.

Some resources people shared you might find helpful:

Taken from The Playlist - a curated perspective on the intersection of form and content (subscribe, discuss)


by Thomas at February 04, 2023 02:30 AM

All Teams

"I have hands but I am doing all I can to have daily independence so I can’t be ‘all hands’ at work. I can share ideas, show up, and ask for help when I physically can’t use my hands. Belonging means folks with one hand, no hand and limited hands are valued in the workplace." - Dr. Akilah Cadet

If you've been wondering why over the past few months you're seeing a lot more "All Teams" meetings on your calendar, it's because language is ever evolving with the times, and people are starting to become more aware and replace ableist language.

Read more:

If your team still has "all hands" meetings, propose a more creative name, or default to "all teams" instead. Take some time to familiarize yourself with other ableist language and alternatives.

Taken from The Playlist - a curated perspective on the intersection of form and content (subscribe, discuss)


by Thomas at February 04, 2023 02:29 AM

How Not to Say the Wrong Thing

"if you’re going to open your mouth, ask yourself if what you are about to say is likely to provide comfort and support. If it isn’t, don’t say it. Don’t, for example, give advice."

Susan Silk's Ring Theory is a helpful model to navigate what not to say during times of grief and traumatic events.

Picture a center ring, and inside it the people most affected by what's going on. Picture a larger circle around it, with inside it the people closest to those in the center. Repeat outwards.

The person in the center ring can say anything they want to anyone, anywhere.
Everyone else can say those things too, but only to people in the larger outside rings. Otherwise, you support and comfort.

Now, consider where in this diagram you are, and where the people you are talking to are.

"Comfort IN, dump OUT."

This model applies in other situations - for example, managers are better off complaining to their own managers or peers, while supporting their own reports and absorbing their complaints with empathy and compassion.

Taken from The Playlist - a curated perspective on the intersection of form and content (subscribe, discuss)


by Thomas at February 04, 2023 02:27 AM

SRE Philosophy With Jennifer Mace

"Even the most junior SRE on call starts having director authority. [..] There is a power in that relationship that SRE does have when they think something is in danger. And it's a power we have to be careful not to misuse. But it's important, because that's our job."

Macey is the guest on Episode 1 of SRE Prodcast, Google's podcast about Site Reliability Engineering. She goes in-depth on some of the core tenets of SRE, including risk, on-call, toil, design involvement, and more. (As a side note, I'm reasonably certain that I'm not the entertaining Belgian that was causing her team failure loops, but I'm too afraid to ask.)

The whole series is worth a listen, but just like the podcast itself - start with Macey's advice.

"My definition of toil: toil is boring or repetitive work that does not gain you a permanent improvement."

Taken from The Playlist - a curated perspective on the intersection of form and content (subscribe, discuss)


by Thomas at February 04, 2023 02:27 AM

January 23, 2023

Glyph Lefkowitz

A Very Silly Program

One of the persistently lesser-known symptoms of ADHD is hyperfocus. It is sometimes quasi-accurately described as a “superpower”1 2, which it can be. In the right conditions, hyperfocus is the ability to effortlessly maintain a singular locus of attention for far longer than a neurotypical person would be able to.

However, as a general rule, it would be more accurate to characterize hyperfocus not as an “ability to focus on X” but rather as “an inability to focus on anything other than X”. Sometimes hyperfocus comes on and it just digs its claws into you and won’t let go until you can achieve some kind of closure.

Recently, the X I could absolutely not stop focusing on — for days at a time — was this extremely annoying picture:

chroma subsampling carnage

Which led to me writing the silliest computer program I have written in quite some time.


You see, for some reason, macOS seems to prefer YUV422 chroma subsampling3 on external displays, even when the bitrate of the connection and selected refresh rate support RGB.4 Lots of people have been trying to address this for a literal decade5 6 7 8 9 10 11, and the problem has gotten worse with Apple Silicon, where the operating system no longer even supports the EDID-override functionality available on every other PC operating system that supports plugging in a monitor.

In brief, this means that every time I unplug my MacBook from its dock and plug it back in more than 5 minutes later, its color accuracy is destroyed and red or blue text on certain backgrounds looks like that mangled mess in the picture above. Worse, while the color distinction is definitely noticeable, it’s so subtle that it’s like my display is constantly gaslighting me. I can almost hear it taunting me:

Magenta? Yeah, magenta always looked like this. Maybe it’s the ambient lighting in this room. You don’t even have a monitor hood. Remember how you had to use one of those for print design validation? Why would you expect it to always look the same without one?

Still, I’m one of the luckier people with this problem, because I can seem to force RGB / 444 color format on my display just by leaving the display at 120Hz rather than 144, then toggling HDR on and then off again. At least I don’t need to plug in the display via multiple HDMI and displayport cables and go into the OSD every time. However, there is no API to adjust, or even discover the chroma format of your connected display’s link, and even the accessibility features that supposedly let you drive GUIs are broken in the system settings “Displays” panel12, so you have to do it by sending synthetic keystrokes and hoping you can tab-focus your way to the right place.

Anyway, this is a program which will be useless to anyone else as-is, but if someone else is struggling with the absolute inability to stop fiddling with the OS to try and get colors to look correct on a particular external display, by default, all the time, maybe you could do something to hack on this:

import os
from Quartz import CGDisplayRegisterReconfigurationCallback, kCGDisplaySetMainFlag, kCGDisplayBeginConfigurationFlag
from ColorSync import CGDisplayCreateUUIDFromDisplayID
from CoreFoundation import CFUUIDCreateString
from AppKit import NSApplicationMain, NSApplicationActivationPolicyAccessory, NSApplication

NSApplication.sharedApplication().setActivationPolicy_(NSApplicationActivationPolicyAccessory)

CGDirectDisplayID = int
CGDisplayChangeSummaryFlags = int

MY_EXTERNAL_ULTRAWIDE = '48CEABD9-3824-4674-9269-60D1696F0916'
MY_INTERNAL_DISPLAY = '37D8832A-2D66-02CA-B9F7-8F30A301B230'

def cb(display: CGDirectDisplayID, flags: CGDisplayChangeSummaryFlags, userInfo: object) -> None:
    if flags & kCGDisplayBeginConfigurationFlag:
        return
    if flags & kCGDisplaySetMainFlag:
        displayUuid = CGDisplayCreateUUIDFromDisplayID(display)
        uuidString = CFUUIDCreateString(None, displayUuid)
        print(uuidString, "became the main display")
        if uuidString == MY_EXTERNAL_ULTRAWIDE:
            print("toggling HDR to attempt to clean up subsampling")
            os.system("/Users/glyph/.local/bin/desubsample")
            print("HDR toggled.")

print("registered", CGDisplayRegisterReconfigurationCallback(cb, None))

NSApplicationMain([])

and the linked desubsample is this atrocity, which I substantially cribbed from this helpful example:

#!/usr/bin/osascript

use AppleScript version "2.4" -- Yosemite (10.10) or later
use framework "Foundation"
use framework "AppKit"
use scripting additions

tell application "System Settings"
    quit
    delay 1
    activate
    current application's NSWorkspace's sharedWorkspace()'s openURL:(current application's NSURL's URLWithString:"x-apple.systempreferences:com.apple.Displays-Settings.extension")
    delay 0.5

    tell application "System Events"
    tell process "System Settings"
        key code 48
        key code 48
        key code 48
            delay 0.5
        key code 49
        delay 0.5
        -- activate hdr on left monitor

        set hdr to checkbox 1 of group 3 of scroll area 2 of ¬
                group 1 of group 2 of splitter group 1 of group 1 of ¬
                window "Displays"
        tell hdr
                click it
                delay 1.0
                if value is 1
                    click it
                end if
        end tell

    end tell
    end tell
    quit
end tell

This ridiculous little pair of programs does it automatically, so whenever I reconnect my MacBook to my desktop dock at home, it faffs around with clicking the HDR button for me every time. I am leaving it running in a background tmux session so — hopefully — I can finally stop thinking about this.

by Glyph at January 23, 2023 03:06 AM

January 09, 2023

Hynek Schlawack

December 31, 2022

Moshe Zadka

The "Dynamic" Properties in PyProject

When writing a pyproject.toml file, the project section is optional. However, if it does exist, two of its properties are required:

  • name
  • version

If these two properties are not there, the section will be ignored.

This is a lie. But it is not a big lie: it is almost true.

In general, if either of these two properties is not there, the section will be ignored. However, there is a way to indicate that either, or both, of these properties will be filled in by the build system later on.

This is done with dynamic.

For example

[project]
name = "my-package"
dynamic = ["version"]

This is the most common setting. However, it is possible to set dynamic to ["name", "version"] and omit both properties.

by Moshe Zadka at December 31, 2022 04:00 AM

December 12, 2022

Glyph Lefkowitz

Potato Programming

One potato, two potato, three potato, four
Five potato, six potato, seven potato, more.

Traditional Children’s Counting Rhyme

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

Knuth, Donald
“Structured Programming with go to statements”
Computing Surveys, Vol. 6, No. 4, December 1974
(p. 268)
(Emphasis mine)

Knuth’s admonition about premature optimization is such a cliché among software developers at this point that even the correction to include the full context of the quote is itself a cliché.

Still, it’s a cliché for a reason: the speed at which software can be written is in tension — if not necessarily in conflict — with the speed at which it executes. As Nelson Elhage has explained, software can be qualitatively worse when it is slow, but spending time optimizing an algorithm before getting any feedback from users or profiling the system as a whole can lead one down many blind alleys of wasted effort.

In that same essay, Nelson further elaborates that performant foundations simplify architecture1. He then follows up with several bits of architectural advice that are highly specific to parsing—compilers and type-checkers specifically—which, while good, are hard to generalize beyond “optimizing performance early can also be good”.

So, here I will endeavor to generalize that advice. How does one provide a performant architectural foundation without necessarily wasting a lot of time on early micro-optimization?

Enter The Potato

Many years before Nelson wrote his excellent aforementioned essay, my father coined a related term: “Potato Programming”.

In modern vernacular, a potato is very slow hardware, and “potato programming” is the software equivalent of the same.

The term comes from the rhyme that opened this essay, and is meant to evoke a slow, childlike counting of individual elements as an algorithm operates upon them. It is an unfortunately quite common software-architectural idiom whereby interfaces are provided in terms of scalar values. In other words, APIs that require you to use for loops or other forms of explicit, individual, non-parallelized iteration. But this is all very abstract; an example might help.

For a generic business-logic example, let’s consider the problem of monthly recurring billing. Every month, we pull in the list of all subscriptions to our service, and we bill them.

Since our hypothetical company has an account-management team that owns the UI which updates subscriptions and a billing backend team that writes code to interface with 3rd-party payment providers, we’ll create 2 backends, here represented by some Protocols.

Finally, we’ll have an orchestration layer that puts them together to actually run the billing. I will use async to indicate which things require a network round trip:

class SubscriptionService(Protocol):
    async def all_subscriptions(self) -> AsyncIterable[Subscription]:
        ...

class Subscription(Protocol):
    account_id: str
    to_charge_per_month: money

class BillingService(Protocol):
    async def bill_amount(self, account_id: str, amount: money) -> None:
        ...

To many readers, this may look like an entirely reasonable interface specification; indeed, it looks like a lot of real, public-facing “REST” APIs. An equally apparently-reasonable implementation of our orchestration between them might look like this:

async def billing(s: SubscriptionService, b: BillingService) -> None:
    async for sub in s.all_subscriptions():
        await b.bill_amount(sub.account_id, sub.to_charge_per_month)

This is, however, just about the slowest implementation of this functionality that it’s possible to implement. So, this is the bad version. Let’s talk about the good version: no-tato programming, if you will. But first, some backstory.

Some Backstory

My father began his career as an APL programmer, and one of the key insights he took away from APL’s architecture is that, as he puts it:

Computers like to do things over and over again. They like to do things on arrays. They don’t want to do things on scalars. So, in fact, it’s not possible to write a program that only does things on a scalar. [...] You can’t have an ‘integer’ in APL, you can only have an ‘array of integers’. There’s no ‘loop’s, there’s no ‘map’s.

APL, like Python2, is typically executed via an interpreter. Which means, like Python, execution of basic operations like calling functions can be quite slow. However, unlike Python, its pervasive reliance upon arrays meant that almost all of its operations could be safely parallelized, and would only get more and more efficient as more and more parallel hardware was developed.

I said ‘unlike Python’ there, but in fact, my father first related this concept to me regarding a part of the Python ecosystem which follows APL’s design idiom: NumPy. NumPy takes a similar approach: it cannot itself do anything to speed up Python’s fundamental interpreted execution speed3, but it can move the intensive numerical operations that it implements into operations on arrays, rather than operations on individual objects, whether numbers or not.

The performance difference involved in these two styles is not small. Consider this case study which shows a 5828% improvement4 when taking an algorithm from idiomatic pure Python to NumPy.
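
As a minimal illustration of the difference in idiom (not a benchmark), compare a one-potato loop with the array-at-a-time version:

import numpy as np

# One potato, two potato: operate on scalars, one at a time.
def total_of_squares_potato(values: list[float]) -> float:
    total = 0.0
    for v in values:
        total += v * v
    return total

# No-tato: hand the whole array to NumPy and let it iterate internally.
def total_of_squares_numpy(values: np.ndarray) -> float:
    return float(np.sum(values * values))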

This idiom is also more or less how GPU programming works. GPUs cannot operate on individual values. You submit a program5 to the GPU, as well as a large array of data6, and the GPU executes the program on that data in parallel across hundreds of tiny cores. Submitting individual values for the GPU to work on would actually be much slower than just doing the work on the CPU directly, due to the bus latency involved to transfer the data back and forth.

Back from the Backstory

This is all interesting for a class of numerical software — and indeed it works very well there — but it may seem a bit abstract to web backend developers just trying to glue together some internal microservice APIs, or indeed most app developers who aren’t working in those specialized fields. It’s not like Stripe is going to let you run their payment service on your GPU.

However, the lesson generalizes quite well: anywhere you see an API defined in terms of one-potato, two-potato iteration, ask yourself: “how can this be turned into an array”? Let’s go back to our example.

The simplest change that we can make, as a consumer of these potato-shaped APIs, is to submit them in parallel. So if we have to do the optimization in the orchestration layer, we might get something more like this:

from asyncio import Semaphore, AbstractEventLoop

async def one_bill(
    loop: AbstractEventLoop,
    sem: Semaphore,
    sub: Subscription,
    b: BillingService,
) -> None:
    await sem.acquire()
    async def work() -> None:
        try:
            await b.bill_amount(sub.account_id, sub.to_charge_per_month)
        finally:
            sem.release()
    loop.create_task(work())

async def billing(
    loop: AbstractEventLoop,
    s: SubscriptionService,
    b: BillingService,
    batch_size: int,
) -> None:
    sem = Semaphore(batch_size)
    async for sub in s.all_subscriptions():
        await one_bill(loop, sem, sub, b)

This is an improvement, but it’s a bit of a brute-force solution; a multipotato, if you will. We’ve moved the work to the billing service faster, but it still has to do just as much work. Maybe even more work, because now it’s potentially got a lot more lock-contention on its end. And we’re still waiting for the Subscription objects to dribble out of the SubscriptionService potentially one request/response at a time.

In other words, we have used network concurrency as a hack to simulate a performant design. But the back end that we have been given here is not actually optimizable; we do not have a performant foundation. As you can see, we have even had to change our local architecture a little bit here, to include a loop parameter and a batch_size which we had not previously contemplated.

A better-designed interface in the first place would look like this:

class SubscriptionService(Protocol):
    async def all_subscriptions(
        self, batch_size: int,
    ) -> AsyncIterable[Sequence[Subscription]]:
        ...

class Subscription(Protocol):
    account_id: str
    to_charge_per_month: money

@dataclass
class BillingRequest:
    account_id: str
    amount: money

class BillingService(Protocol):
    async def submit_bills(
        self,
        bills: Sequence[BillingRequest],
    ) -> None:
        ...

Superficially, the implementation here looks slightly more awkward than our naive first attempt:

async def billing(s: SubscriptionService, b: BillingService, batch_size: int) -> None:
    async for sub_batch in s.all_subscriptions(batch_size):
        await b.submit_bills(
            [
                BillingRequest(sub.account_id, sub.to_charge_per_month)
                for sub in sub_batch
            ]
        )

However, while the implementation with batching in the backend is approximately as performant as our parallel orchestration implementation, backend batching has a number of advantages over parallel orchestration.

First, backend batching has less internal complexity; no need to have a Semaphore in the orchestration layer, or to create tasks on an event loop. There’s less surface area here for bugs.

Second, and more importantly: backend batching permits for future optimizations within the backend services, which are much closer to the relevant data and can achieve more substantial gains than we can as a client without knowledge of their implementation.

There are many ways this might manifest, but consider that each of these services has their own database, and have got to submit queries and execute transactions on those databases.

In the subscription service, it’s faster to run a single SELECT statement that returns a bunch of results than to select a single result at a time. On the billing service’s end, it’s much faster to issue a single INSERT or UPDATE and then COMMIT for N records at once than to concurrently issue a ton of potentially related modifications in separate transactions.
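
For instance, here is a hedged sketch of what the billing backend might do internally with one batch, using the standard library’s sqlite3 module purely as a stand-in for whatever database is really back there; the table name and schema are made up for the example:

import sqlite3
from typing import Sequence, Tuple

def record_bills(db: sqlite3.Connection, bills: Sequence[Tuple[str, int]]) -> None:
    # One executemany and one COMMIT for the whole batch, rather than a
    # separate transaction per subscription.
    with db:
        db.executemany(
            "INSERT INTO pending_charges (account_id, amount) VALUES (?, ?)",
            bills,
        )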

Potato No Mo

The initial implementation within each of these backends can be as naive and slow as necessary to achieve an MVP. You can do a SELECT … LIMIT 1 internally, if that’s easier, and performance is not important at first. There can be a mountain of potatoes hidden behind the veil of that batched list. In this way, you can avoid the potential trap of premature optimization. Maybe this is a terrible factoring of services for your application in the first place; best to have that prototype in place and functioning quickly so that you can throw it out faster!

However, by initially designing an interface based on lists of things rather than individual things, it’s much easier to hide irrelevant implementation details from the client, and to achieve meaningful improvements when optimizing.

Acknowledgements

This is the first post supported by my Patreon, with a topic suggested by a patron.


  1. It’s a really good essay, you should read it. 

  2. Yes, I know it’s actually bytecode compiled and then run on a custom interpreting VM, but for the purposes of comparing these performance characteristics “interpreted” is a more accurate approximation. Don’t @ me. 

  3. Although, thankfully, a lot of folks are now working very hard on that problem. 

  4. No, not a typo, that’s a 4-digit improvement. 

  5. Typically called a “shader” due to its origins in graphically shading polygons. 

  6. The data may represent vertices in a 3-D mesh, pixels in texture data, or, in the case of general-purpose GPU programming, “just a bunch of floating-point numbers”. 

by Glyph at December 12, 2022 09:40 PM

October 27, 2022

Glyph Lefkowitz

Super Swing Districts

In my corner of the social graph, when we talk about politics today, we tend to use a lot of moralizing language. A lot of emotive language. And that makes sense; overt fascists are repeating the strategy of using the right of trans people to, like, be alive, as a wedge issue to escalate to full-blown eugenics and antisemitism. There’s a lot of moral stuff and a lot of emotional stuff happening there.

But when we get down to it, politics is a highly technical discipline that requires a lot of work. You don’t need to just have the right opinion, you have to actually do a lot of math to figure out efficient ways to deploy resources, effective strategies to convince the undecided and to command the attention of the disengaged. It’s also adversarial: the bad guys are trying to do the same thing, so if you do find some efficient way to campaign, they will soon find out and try to dismantle it.

So while we might talk abstractly about “doing the work”, a lot of the work is tedious and difficult analysis of a lot of very confusing numbers. Not to mention the fact that it requires maintaining the tenacious mindset of a happy Sisyphus due to its adversarial nature. To be frank, I’m not great at either of those things.

Luckily, my uncle is. He is a professor of political science who — beyond the obvious familial bias I might have — I tend to think is a really smart guy with a lot of good ideas. More importantly, however, is that he does do “the work” I’m talking about here.

So here is some of that work: SuperSwingDistricts.org. This is a slate of Democratic downballot candidates for office across the USA who need your support right now. Specifically it is a carefully curated slate to maximize spend efficiency via the reverse-coattails effect, multiplied by finding the areas where there are the most overlapping high-leverage elections. You can read more about the specifics on the website, and the specifics of the vetting of the candidates’ bona fides, but you can also just take my word for it and Donate Now via ActBlue. Just like... gobs of money.

Political fundraising is not really my wheelhouse, and I am not that comfortable doing it. I hope that we can stop this “democracy” machine from constantly falling apart all the time so I can work on fixing the other broken systems in my life like Python-language native application packaging for various platforms. But this one is really, really important. Many of these candidates are in pivotal positions that will help prevent authoritarians from seizing the actual physical mechanisms of elections themselves, and attempting a more successful coup in 2024.

So: donate now.

by Glyph at October 27, 2022 06:49 AM

October 14, 2022

Hynek Schlawack

How to Fix the set-output GitHub Actions Deprecation Warning

If you have a GitHub Actions workflow that sets an output using echo ::set-output key=value, you have started to see an unhelpful deprecation warning. Here’s how to fix it.

by Hynek Schlawack (hs@ox.cx) at October 14, 2022 12:00 AM

September 19, 2022

Hynek Schlawack

September 07, 2022

Hynek Schlawack

You Can Build Portable Binaries of Python Applications

Contrary to popular belief, it’s possible to ship portable executables of Python applications without sending your users to Python packaging hell.

by Hynek Schlawack (hs@ox.cx) at September 07, 2022 12:00 AM

August 18, 2022

Hynek Schlawack

Easier Crediting of Contributors on GitHub

GitHub has the concept of co-authors of a commit. You’ve probably seen it in the web UI, when multiple people are listed to have committed something. I want to be gracious with credit where it’s due and I’ve found ways to make it easier.

by Hynek Schlawack (hs@ox.cx) at August 18, 2022 12:00 AM