Reading Log

by Kurt Pan

Give it a lot of input. The more input you give it, the better its output is.

It's surprisingly good at UI design given that it's mainly a text model.

```
claude -p "Read the SPEC.md file and implement it"
```

I tell it to keep things simple, stay away from frameworks, and just write raw SQL. In the broken version, I let it do whatever it wants.

A key is writing a clear spec ahead of time, which provides context to the agent as it works in the codebase. Having a document for the agent that outlines the project’s structure and how to run builds, linters, and so on is helpful. Asking the agent to perform a code review on its own work is surprisingly fruitful. Finally, I have a personal “global” agent guide describing best practices for agents to follow, specifying things like problem-solving approach, use of TDD, etc.

Recently Apple released a tool called 'Container'. Yes, that's right. So we checked it out, and it seemed better than Docker, as it provided one isolated VM per container – a perfect fit for running AI-generated code.

This is more than just an experiment. It's a philosophy shift bringing compute and agency back to your machine. No cloud dependency. No privacy tradeoffs. While the best models will probably always be with the giants, we hope that we will still have local tools that can get our day-to-day work done with the privacy we deserve.

Instead of your app being a fancy form that sends data to a server, it has its own local database. Sometimes the server is just another client to sync with. It can be a fundamental inversion of how we typically build web applications.
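A minimal sketch of that inversion, with illustrative names only (the Todo shape, LocalStore, and the /sync endpoint are assumptions, not any specific library's API): writes hit the local store immediately, and talking to a server is a background sync concern.

```typescript
// Local-first sketch: the app owns its data locally; the server is just
// another peer to sync with, and sync failures don't block the UI.
type Todo = { id: string; title: string; done: boolean; updatedAt: number };

class LocalStore {
  private items = new Map<string, Todo>();
  private pending: Todo[] = [];

  // Writes land locally and immediately; no network round-trip.
  upsert(todo: Todo): void {
    this.items.set(todo.id, todo);
    this.pending.push(todo);
  }

  all(): Todo[] {
    return [...this.items.values()];
  }

  // Sync runs in the background and may fail without breaking the app.
  async sync(endpoint = "/sync"): Promise<void> {
    if (this.pending.length === 0) return;
    const batch = this.pending.splice(0);
    try {
      await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(batch),
      });
    } catch {
      this.pending.unshift(...batch); // keep changes for the next sync pass
    }
  }
}
```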

If you’re building something new and can work within the constraints, I encourage you to try local-first. The worst case is you’ll learn a new architecture pattern. The best case is you’ll build something that feels impossibly fast to your users.

At the heart of the U.S. government were an ascendant set of ideas that saw the internet as the ultimate neoliberal project: a borderless marketplace where free-flowing information would lead to optimal prices, ideas, and solutions. Full of messianic cultural confidence following the fall of the Soviet Union, they believed that if information were allowed to flow, the values of American capitalism would triumph on their own merits.

Anonymity loves company — so Tor needed to be sold to the general public. That necessity led to an unlikely alliance between cypherpunks and the U.S. Navy.

Observing these two worlds — the military academics and the cypherpunks — interacting, through sharing test results, theoretical discussions, phone calls, emails, and eating the occasional roasted onion, we see the beginnings of a distinctive idea of what privacy means. Somewhere between the cypherpunks’ everyday, radical, decentralized vision of privacy and the high-security traffic protection desired by the military, a shared idea was forming. This saw privacy as being strongly shaped by the clusters of power and control built into digital infrastructure. This understanding of privacy as a structure would unite an odd coalition around Tor over the next three decades: activists, journalists, drug buyers, hackers, and the military itself.

This strange story of a group of libertarian hackers teaming up with the U.S. military amid the aftershocks of the Cold War presents a more nuanced picture of privacy than the familiar lone-user-versus-state narrative. It shows different groups coming together to change how — through laws, technologies, practices, and cultural values — we police the boundaries between different material systems of power. Understood in this way, we can see privacy as setting out where the domain of the community, of the family, of the state, of a corporation, of an institution or an individual begins and ends.

The workflow is simple: I publish a blog post, share it on Bluesky, edit the post to add the AT URI, and the replies to that Bluesky post become the comments on my blog.
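A rough sketch of the read side of that workflow, assuming Bluesky's public AppView endpoint app.bsky.feed.getPostThread and its usual response shape (thread.replies[].post); the exact fields are worth verifying against the current API docs.

```typescript
// Fetch direct replies to a Bluesky post (identified by its AT URI) and
// treat them as blog comments. Endpoint and field names as assumed above.
interface Comment {
  author: string;
  text: string;
  createdAt: string;
}

async function fetchComments(atUri: string): Promise<Comment[]> {
  const url =
    "https://public.api.bsky.app/xrpc/app.bsky.feed.getPostThread?uri=" +
    encodeURIComponent(atUri);
  const res = await fetch(url);
  if (!res.ok) throw new Error(`getPostThread failed: ${res.status}`);
  const data = await res.json();
  // Each direct reply to the shared post becomes one comment.
  return (data.thread?.replies ?? []).map((reply: any) => ({
    author: reply.post.author.handle,
    text: reply.post.record.text,
    createdAt: reply.post.record.createdAt,
  }));
}
```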

This approach scales with the platform because it uses the platform.

In my opinion, the web is better when independent sites can connect to broader conversations without sacrificing their independence.

It doesn’t sort, and it runs faster than any algorithm that does.

The finished algorithm slices the graph into layers, moving outward from the source like Dijkstra’s. But rather than deal with the whole frontier at each step, it uses the Bellman-Ford algorithm to pinpoint influential nodes, moves forward from these nodes to find the shortest paths to others, and later comes back to other frontier nodes. It doesn’t always find the nodes within each layer in order of increasing distance, so the sorting barrier doesn’t apply. And if you chop up the graph in the right way, it runs slightly faster than the best version of Dijkstra’s algorithm.

The real magic happens when we use inline SVGs.

The really cool thing is that SVGs are first-class citizens in the DOM. We can use CSS and JavaScript to select and modify SVG nodes, as if they were HTML elements.
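For instance, assuming a page with an inline <svg> whose circles carry a (hypothetical) dot class, the ordinary DOM APIs apply to them directly:

```typescript
// Inline SVG nodes are regular DOM nodes: CSS selectors, attributes, and
// event listeners all work the same way they do for HTML elements.
const dots = document.querySelectorAll<SVGCircleElement>("svg .dot");

dots.forEach((dot, i) => {
  // Change presentation attributes from JavaScript.
  dot.setAttribute("fill", i % 2 === 0 ? "tomato" : "steelblue");

  // Attach event listeners directly to SVG nodes.
  dot.addEventListener("click", () => {
    const r = Number(dot.getAttribute("r") ?? "5");
    dot.setAttribute("r", String(r * 1.5));
  });
});
```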

He’s called for semiconductor export controls to China, drawing a public rebuke from Nvidia CEO Jensen Huang.

Anthropic will thus be a barometer of AI’s progress, rising and falling on the strength of the technology.

He was interested almost entirely in math and physics. When the dot-com boom exploded around him in his high school years, it barely registered. “Writing some website actually had no interest to me whatsoever,” he tells me. “I was interested in discovering fundamental scientific truth.”

At Baidu, the AI team’s progress became the seeds of its undoing. Turf battles broke out within the company over control of its increasingly valuable technology, know-how, and resources. Eventually, meddling from powerbrokers in China sparked a talent exodus and the lab fell apart. Andrew Ng declined to comment.

“The leaders of a company, they have to be trustworthy people,” he says. “They have to be people whose motivations are sincere, no matter how much you're driving forward the company technically. If you're working for someone whose motivations are not sincere, who's not an honest person, who does not truly want to make the world better, it's not going to work. You're just contributing to something bad.”

The endless drive to scale ends up covering the planet with solar panels and datacenters.

Here’s a cheat sheet of questions I ask myself when reviewing code: How does this code fit into the rest of the system? What’s its interaction with other parts of the codebase? How does it affect the overall architecture? Does it impact future planned work?

The goal shouldn’t be to merge as quickly as possible, but to accept code that is of high quality. Otherwise, what’s the point of a code review in the first place? That’s a mindset shift that’s important to make.

I didn't like using AI that much. Reviewing code is a vastly less enjoyable process than writing it. Had my stubborn desire to enjoy coding set me up to be left behind?

When you write code, how much of your time do you truly spend pushing buttons on the keyboard? It's probably less than you think. Much of your prime coding time is actually reading and thinking, often while waiting for compilation, a page refresh, or tests to run. LLMs do not make rustc go faster. If you're “embracing the vibes” and not even looking at the code produced, you're simply going to hit a productivity wall once the codebase gets large enough. And once you do, you'll have to reckon with the complete lack of standards and proper abstractions.

It turns out that most of any activity does not happen at top speed.

When I have had engineers who were 10x as valuable as others, it was primarily due to their ability to prevent unnecessary work. Talking a PM down from a task that was never feasible. Getting another engineer to not build that unnecessary microservice. Making developer-experience investments that save everyone just a bit of time on every task. Documenting your work so that every future engineer can jump in faster. These things can add up over time to one engineer saving the company 10x the time it took to do them.

Notably, AI coding assistants do very little to prevent unnecessary work. On the contrary, AI often seems to encourage hastiness and over-building.

The problem is that productivity does not scale.

I think a lot of the more genuine 10x AI hype is coming from people who are simply in the honeymoon phase or haven't sat down to actually consider what 10x improvement means mathematically.

It's okay to sacrifice some productivity to make work enjoyable. More than okay, it's essential in our field. If you force yourself to work in a way you hate, you're just going to burn out. Only so much of coding is writing code; the rest is solving problems, doing system design, reasoning about abstractions, and interfacing with other humans. You are better at all those things when you feel good. It's okay to feel pride in your work and appreciate the craft. Over the long term, your codebase will benefit from it.

“How did you know it was base64-encoded JSON and not just a base64 string?”

Whenever you see ey, that’s {" – and then, if it’s followed by a letter, you’ll get J followed by another letter.

You can spot base64 JSON with your naked eye, and you don’t need to decode it on the fly!
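A quick Node.js illustration of why that works (the JWT-style header is just a convenient example of base64-encoded JSON):

```typescript
// Any JSON object starts with `{"`; those two bytes base64-encode to "ey",
// and a following ASCII letter makes the third character a "J".
const payload = JSON.stringify({ alg: "HS256", typ: "JWT" });
const encoded = Buffer.from(payload, "utf8").toString("base64");

console.log(encoded); // eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9
console.log(encoded.startsWith("eyJ")); // true

// A rough eyeball heuristic, not a parser.
function looksLikeBase64Json(s: string): boolean {
  return /^eyJ[A-Za-z0-9+\/=]+$/.test(s);
}
```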

How making noise for anything but the direst emergency should be an off-by-default privilege that only the user can explicitly grant, instead of being the default for all electricity-powered objects.

If you're designing objects, please take some time to test their notification mechanism near a sleeping toddler and/or a sleep-deprived lunatic, instead of piling more noisy interruptions onto our already notification-saturated reality.

A PDF file is effectively a graph of objects that may reference each other. Objects reference other objects by use of indirect references.
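As a toy illustration of that graph structure (not a real parser – real PDFs keep many objects inside compressed object streams that a regex scan won't see), one could sketch it like this:

```typescript
// Scan an uncompressed PDF body for "N G obj ... endobj" blocks and record
// which indirect references ("N G R") each object makes, giving a rough
// object graph keyed by "objectNumber generation".
import { readFileSync } from "node:fs";

function roughObjectGraph(path: string): Map<string, string[]> {
  const body = readFileSync(path, "latin1");
  const graph = new Map<string, string[]>();
  for (const [, num, gen, content] of body.matchAll(
    /(\d+) (\d+) obj([\s\S]*?)endobj/g
  )) {
    const refs = [...content.matchAll(/(\d+) (\d+) R\b/g)].map(
      ([, n, g]) => `${n} ${g}`
    );
    graph.set(`${num} ${gen}`, refs);
  }
  return graph;
}
```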

A design document is a technical report that outlines the implementation strategy of a system in the context of trade-offs and constraints.

Think of a design document like a proof in mathematics. The goal of a proof is to convince the reader that the theorem is true. The goal of a design document is to convince the reader the design is optimal given the situation.

The act of writing a design document helps to add rigor to what are otherwise vague intuitions. Writing reveals how sloppy your thinking was (and later, code will show how sloppy your writing was).

An individual will usually only be an expert in at most one thing, so the broad quasi-expertise offered by the LLM fundamentally allows them to do things they couldn't do before.

At least for the moment and in aggregate across society, they have been significantly more life altering for individuals than they have been for organizations.

The moment money can buy dramatically better ChatGPT, things change. Large organizations get to concentrate their vast resources to buy more intelligence. And within the category of “individual” too, the elite may once again split away from the rest of society. Their child will be tutored by GPT-8-pro-max-high, yours by GPT-6 mini.

It was supposed to be a top secret government megabrain project wielded by the generals, not ChatGPT appearing basically overnight and for free on a device already in everyone's pocket. Remember that William Gibson quote “The future is already here, it's just not evenly distributed”? Surprise – the future is already here, and it is shockingly distributed. Power to the people.

Create a personal “ramblings” channel for each teammate in your team’s chat app of choice. Ramblings channels let everyone share what’s on their mind without cluttering group channels. Think of them as personal journals or microblogs inside your team’s chat app, a lightweight way to add ambient social cohesion.

Each ramblings channel should be named after the team member, and only that person can post top-level messages. Others can reply in threads, but not start new ones. All the ramblings channels should be in a Ramblings section at the bottom of the channel list. They should be muted by default, with no expectation that anyone else will read them.

Because they are so free and loose, some of our best ideas emerge from ramblings. They’re often the source of feature ideas, small prototypes, and creative solutions to long-standing problems.

When you’re placed in a high-stakes, time-pressured situation, like live coding, your brain reacts exactly like it would to any other threat. The amygdala gets activated. Cortisol levels spike. Your prefrontal cortex, the part of the brain responsible for complex reasoning and working memory, gets impaired. Either mildly, moderately, or severely, depending on the individual and their baseline stress resilience.

For some people, especially those with even mild performance anxiety, it becomes nearly impossible to think clearly. Your attention narrows. You can’t hold multiple steps in your head. You forget what you just typed a few seconds ago. It feels like your IQ dropped by 30 points. In fact, it feels like you’re a completely different version of yourself: a much dumber one.

Live coding fails to measure what we think it measures. It’s more accurately measuring cortisol under stress than coding skills.

The best way to desensitize your brain to stress is repeated exposure.

Claude Code has considerably changed my relationship to writing and maintaining code at scale. I still write code at the same level of quality, but I feel like I have a new freedom of expression which is hard to fully articulate.

Maintenance is Significantly Cheaper

A habit I have been trying to form is to give an idea a shot before I fully shoot it down.

For me, the difference between pre-Claude Code and post-Claude Code is so substantial that any increment between it and the alternatives (which will be better in some ways, worse in others) is not worth the hassle for such a small incremental win.

What makes learning with AI groundbreaking is that it can meet you at your skill level. Now an AI can directly address questions at your level of understanding, and even do rote work for you. This changes the learning curve.

Mastery remains difficult. Cheaters, in the long run, won't prosper here!

Creative fields are extremely competitive, and beating competition for attention requires novelty. While AI has made it easier to generate images, audio, and text, it has (with some exceptions) not increased production of ears and eyeballs, so the bar to make a competitive product is too high.

Summarizing is a core AI skill, but it doesn't help much here: Spam is already quietly shuffled into the Spam folder. A summary of junk is, well, junk. For important email, I don't want a summary: An AI is likely to produce less specifically crafted information than the sender, and I don't want to risk missing important details.
