Ten things worth your attention this morning — pulled from the noise, with the politics, finance, and industry gossip left on the cutting room floor. A vector-graphics editor refuses to die, Simon Willison watches a boundary collapse in real time, and someone built an encyclopedia of beautifully wrong things.
Version 1.4.4 of Inkscape — the free, open-source vector-graphics editor that has stubbornly refused to die since 2003 — landed yesterday to two hundred and sixty-one upvotes and a comment thread populated almost entirely by people surprised it’s still going. It is. The release is a polish point: gradient mesh fixes, snapping improvements, a long list of small performance wins. None of it is glamorous. All of it is the result of volunteer labor, accumulated over decades.
The headline isn’t the changelog. It’s that the alternative to Adobe’s subscription tax has been quietly improving the whole time, on a budget rounding to zero, and is now a serious tool for serious work.
Simon Willison spent the spring trying to keep two ideas separate — “vibe coding,” the art of asking a model to make something work without especially caring how, and “agentic engineering,” the discipline of treating that same model as a collaborator with tests, reviews, and a paper trail. His new post is the report from the field: the boundary is dissolving in real time, and he is not entirely sure he likes it.
What’s quietly remarkable about the piece is the absence of certainty. Most writing in this space arrives in two flavors, hype or panic. Willison is doing the harder thing — building, watching the results, and writing down what he sees.
Google announced what amounts to reCAPTCHA’s adulthood: a fraud-defense suite that scores every interaction as human-or-not before the user ever sees a checkbox. It uses behavioral signals collected across the open web. Two hundred and fifty-nine upvotes. Two hundred and fifty-five comments — split, with industrial precision, between developers exhausted by today’s captcha experience and privacy advocates pointing out exactly what kind of telemetry that requires.
The architecture is the actual story. To know whether you’re a bot, the system has to watch you across enough of the web to recognize what you’re not. “Less friction” and “more surveillance” are the same engineering decision, dressed in different clothes for different audiences.
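The score-based model itself is not new; the existing reCAPTCHA v3 flow already works this way, and it makes a useful illustration. The sketch below assumes a v3-style siteverify response (`success`, plus a `score` from 0.0 to 1.0); the new suite's exact API is not described in the announcement, and the 0.5 threshold is an arbitrary example.

```python
# Score-based bot detection, modeled on the reCAPTCHA v3 response shape.
# Hypothetical helper; threshold is illustrative, not Google's.

def is_likely_human(verification: dict, threshold: float = 0.5) -> bool:
    """Interpret a siteverify-style JSON response.

    `success` means the client token was valid; `score` runs from
    1.0 (very likely human) to 0.0 (very likely bot).
    """
    return verification.get("success", False) and \
        verification.get("score", 0.0) >= threshold

# In production the client token is POSTed, with the site's secret key,
# to https://www.google.com/recaptcha/api/siteverify and the parsed
# JSON is fed into a check like the one above.

print(is_likely_human({"success": True, "score": 0.9}))   # likely human
print(is_likely_human({"success": True, "score": 0.1}))   # likely bot
print(is_likely_human({"success": False, "score": 0.9}))  # invalid token
```

Note what the server-side check does not contain: the behavioral telemetry all lives upstream, inside the score.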
Val.town’s engineering team published an unusually candid two-year migration log. Started on Supabase auth because it was easy. Moved to Clerk when usage scaled and easy stopped being enough. Just landed on Better Auth — an open-source TypeScript library — because every hosted alternative had quietly begun extracting rents at scale. The post names the embarrassments along the way, including the parts where the team’s own assumptions were wrong.
The pattern is becoming familiar across the auth-as-a-service category: hosted providers were a generous deal in 2022 and a less-generous one each year since. Worth reading before you sign your next vendor contract.
The single most-upvoted post on the page is a slow, mordant taxonomy of office theater — the inbox-zero performances, the conspicuous typing in meetings, the deliberate Slack pings at eleven o’clock at night — written by someone who has been measuring the gap between actual work and visible work for long enough to be funny about it. The piece resists the obvious lazy reading (“everyone is slacking”) for something more uncomfortable: the systems most companies use to evaluate productivity are themselves the strongest argument for the theater they get.
A small but rigorous community has spent the last few years drafting a set of principles for computing as if energy, materials, and decades of useful life mattered. The list reads like a software equivalent of permaculture: care for life, care for the chips, fair share, design for repair, mature gracefully. It is deeply uncool. It is also one of the more clearheaded pieces of computing philosophy published in years.
The page itself is hand-coded HTML, hosted on a shoestring, and weighs less than a single tweet. That is, of course, the point. A document about computing for the long term, served as if the long term were already here.
An attempt at the kind of essay the deep-learning field has been notably bad at producing: a unified, readable account of what we actually understand about why these networks work. The author is rigorous about distinguishing what is conjecture, what has been proven on toy cases, and what is reliable empirical pattern with no underlying theorem.
The conclusion is not “we have a theory.” It is “here is the precise shape of our current ignorance.” That, in 2026 — with hundreds of billions of dollars riding on systems whose internal behavior remains substantially mysterious — counts as progress. A hundred and sixty-one upvotes, a comment thread mostly populated by researchers arguing about which of the author’s framings hold up. Worth the time.
Someone built an encyclopedia composed entirely of plausible-sounding articles a language model invented and never bothered to verify. Every entry is wrong. Every entry is footnoted. Every entry would, with minor edits, pass for a reasonable Wikipedia stub. The result is the most pointed argument for source-checking published this year, dressed as a joke.
The comment section is doing the predictable thing — half delighted, half anxious about what it implies — but the project itself is more than a gag. It is the cleanest demonstration of a specific failure mode that the average user can hold in their head.
A new entrant in the agent-runtime category, with one specific, clarifying idea: every action an agent takes happens inside a transactional, versioned filesystem you can inspect, branch, and roll back. If you have ever watched an agent confidently rm the wrong directory and take two hours of work with it, you understand the design instinct exactly.
One hundred and forty-seven upvotes, a hundred and five comments, and a steady stream of builders describing the specific moment they wished this had existed. The pitch is honest: not a new agent, not a new model — just a different floor for the agent to walk on.
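The core idea fits in a few lines. A minimal sketch, assuming nothing about the product's actual implementation (which presumably does this at the filesystem layer with copy-on-write rather than naive tree copies):

```python
# Snapshot a working directory before an agent acts; roll back if the
# action goes wrong. Illustrative only -- this copies the whole tree.
import shutil
import tempfile
from pathlib import Path

class TransactionalDir:
    def __init__(self, root: Path):
        self.root = Path(root)
        self._snapshots: list[Path] = []

    def snapshot(self) -> None:
        """Record the current state of the tree."""
        snap = Path(tempfile.mkdtemp(prefix="txfs-")) / "snap"
        shutil.copytree(self.root, snap)
        self._snapshots.append(snap)

    def rollback(self) -> None:
        """Restore the most recent snapshot, discarding later changes."""
        snap = self._snapshots.pop()
        shutil.rmtree(self.root, ignore_errors=True)
        shutil.copytree(snap, self.root)

# The agent that confidently rm's the wrong directory becomes recoverable:
work = Path(tempfile.mkdtemp()) / "project"
work.mkdir()
(work / "important.txt").write_text("two hours of work")

fs = TransactionalDir(work)
fs.snapshot()
shutil.rmtree(work)   # the agent's mistake
fs.rollback()         # the undo
print((work / "important.txt").read_text())  # → two hours of work
```

The "different floor" framing is apt: none of this constrains what the agent does, only what it can make irreversible.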
SQLite — a single C file, public domain, embedded in approximately every smartphone — has been formally added to the Library of Congress’s list of recommended storage formats for digital preservation. The criteria the Library applies are unsentimental: open documentation, broad adoption, format stability, lossless storage. SQLite passes all of them.
The move codifies what working programmers have known for two decades. The right answer to “what should this dataset live in for the next century?” is, embarrassingly often, a single .sqlite file. The institution responsible for the United States’ archival memory now formally agrees.
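The whole archival pitch fits in a snippet: a dataset that lives in one file, written and read with the `sqlite3` module that ships with Python itself (the filename and table here are illustrative).

```python
# One dataset, one file. No server, no schema registry, no vendor.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "census_1900.sqlite")

con = sqlite3.connect(path)  # creates the single file if absent
con.execute("CREATE TABLE people (name TEXT, born INTEGER)")
con.execute("INSERT INTO people VALUES (?, ?)", ("Ada", 1815))
con.commit()

# Reading it back, now or in a century, is the same one-liner:
rows = con.execute("SELECT name, born FROM people").fetchall()
print(rows)  # → [('Ada', 1815)]
con.close()
```

The format stability the Library is crediting is real: SQLite's on-disk file format has been backward compatible since 2004, and the project has pledged support through 2050.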