LLMs and Debugging

How can software developers leverage LLMs while avoiding pitfalls? Here are some lessons learned so far.

Uneven sunlight illuminates colorful fibers of woven yarn.

You may already use some form of AI in your day-to-day workflow. You might use ChatGPT to ask questions and research new technologies or GitHub’s Copilot assistant to generate code. You may even use an editor like Cursor or Windsurf to orchestrate whole projects from scaffolding to self-debugging (Cursor recently announced Yolo Mode).

Some of the folks I know are skeptics of AI tools. Some are fans and advocates. Others are ambivalent and/or tired of the hype. Some, like me, have quietly used AI to be more productive while trying to maintain a healthy distance and skepticism. Whatever your stance, I think we’re past the point of writing off AI entirely.

My background is in business and software development, and until recently, I have mostly ignored AI (this is the first time I’m writing about it). I’m going to use some software jargon in this article—but I’ll try to keep it generally applicable when possible because I think it would be helpful for more non-developers to understand how software developers are using these tools in the real world.


There are many different camps that AI critics fall into, and most of them agree that “AI” (Generative AI and LLMs, more specifically) is real and useful. The topic of “AI Skepticism” has been broadly discussed recently in response to journalist Casey Newton, who wrote about it in December for his newsletter, Platformer:

The most persuasive way you can demonstrate the reality of AI, though, is to describe how it is already being used today. Not in speculative sci-fi scenarios, but in everyday offices and laboratories and schoolrooms. And not in the ways that you already know — cheating on homework, drawing bad art, polluting the web — but in ones that feel surprising and new.

Newton’s take is that there are two camps of AI Skepticism: those who think it’s fake and it sucks, and those who believe it’s real and dangerous. Newton puts himself in the latter and argues that it’s perilous to assume that AI is snake oil soon to join NFTs and the Metaverse in the dustbin of history. In response, many have argued that Newton oversimplified the nuanced concerns of AI skeptics.

Benjamin Riley, a former attorney and education researcher who writes about generative AI systems through the lens of cognitive science, listed nine branches of AI Skepticism in response to Newton, with a final catchall category to further demonstrate the diversity of the field. Among them are skeptics of AI in education (where Riley places himself), two camps of scientific skepticism, skeptics of AI’s use in art and literature, and many more. I think that most of Riley’s skeptics make good points, and none of them (with the exception of the “neo-Luddites,” who may fall into Newton’s first category) believe that AI is fake or that it sucks.

When skeptics criticize “AI,” they’re often speaking about specific concerns, and not of the technology as a whole. They’re concerned with the religious fervor of Silicon Valley tech bros, the hysteria of AI doomers, or AI’s technical limitations—the latter camp being the quickest to point out that when we say “AI,” we’re really talking about LLMs. Some fear how companies like OpenAI may reconfigure society around their business models. Others call out the baseless marketing claims they use to fundraise.

These are reasonable concerns, and as consumers, we shouldn’t use these technologies mindlessly. As developers, we should pay special attention to the potential harms. But we also shouldn’t ignore generative AI as it reconfigures our industry and skills. And we should learn as much as we can about these technologies so that we can criticize them effectively. As Simon Willison recently wrote in his excellent recap of things we learned about LLMs in 2024:

LLMs absolutely warrant criticism. We need to be talking through these problems, finding ways to mitigate them and helping people learn how to use these tools responsibly in ways where the positive applications outweigh the negative.

I like people who are skeptical of this stuff. The hype has been deafening for more than two years now, and there are enormous quantities of snake oil and misinformation out there. A lot of very bad decisions are being made based on that hype. Being critical is a virtue.

Despite all the hype, there are good applications for these tools, and developers are in a unique position to counter uninformed opinions and help decision-makers implement generative AI safely.


And the fact is, LLMs are getting pretty damn useful. When GitHub Copilot first came out, I had mixed feelings. It impressed me some of the time, like when it would suggest the automated tests I needed for some code I’d written. Other times, it got in my way, making irrelevant suggestions and breaking my already scant concentration. So, I turned it off and would check back in from time to time. But ChatGPT was genuinely useful as a research, prototyping, and debugging tool from the beginning, and it rapidly improved as OpenAI released new models throughout 2022-2023. Anthropic’s Claude caught on among developers in early 2024, and it’s still what I primarily use today.

While breakthroughs in LLM research and training seemed to have slowed, models have become increasingly context-aware, the costs of training and running them have dropped substantially (reducing the environmental impacts), and developer tooling has improved rapidly. Integrated code assistants like Cursor and Codeium rolled out project-aware agents capable of completing tasks end-to-end with the developer riding shotgun. Recently, GitHub released a similar multi-file editing feature called Copilot Edits and announced a free tier.

If you haven’t tried these new tools yet, you should. It’s not that they won’t make mistakes—they will—but I’ve found that, on balance, I work much faster with them than without them. And as I’ve learned how to use them effectively, the odds have improved.


As Simon Willison points out, LLMs are “power-user tools—they’re chainsaws disguised as kitchen knives”:

They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.

A chainsaw is an apt metaphor. Without thinking through each interaction, you’ll find that you’ve mutilated your favorite bonsai tree. You should have used shears instead.

Knowing when—and, more importantly, when not—to ask an LLM to generate code for you is an important skill that comes with practice. My purpose here is not to give specific techniques or advice about using these tools (I’m still learning myself), but there are some clear heuristics emerging, many of which are already common sense in software development.

Even without a code assistant, you won’t arrive at the right solution (or even solve the right problem!) without understanding your domain. The first step should always be to think about the design of your system and the problem you’re trying to solve. Basically, don’t outsource your thinking. While that may seem obvious, I’ve found that in practice, it’s much too easy to imbue the LLM with agency that it doesn’t have, and that problem has wide-reaching implications. The good news is that the AI is not an engineer—so your job is safe for now.


I was recently updating an open-source library that we maintain at my company, Honeybadger, where we help developers improve the reliability and performance of their web applications. We provide client-library packages that integrate with a bunch of different programming languages and web frameworks, and one of the languages we support is Elixir—a popular functional programming language built on top of Erlang, an ancient language created at Ericsson for the telecommunications industry.

I’d noticed that we weren’t testing on the latest version of Elixir, which is currently 1.17, and we were a few versions behind. It’s normally a quick fix to update our automated test suite to test our code against new versions when they’re released, but in this case, it surfaced a couple of test failures on the latest Elixir version. That meant that something had changed in the underlying platform that our code runs on. While I’ve worked with Elixir for years as a part of the service we provide at Honeybadger (and even made a tiny contribution to the language), it had been a while since I’d debugged a problem like this.

I knew that two tests were failing, both in our logger implementation—which basically listens for events from Elixir and reports them to Honeybadger to help developers understand what’s happening when their code executes. Elixir has a feature called “pattern matching,” which allows us to handle specific types of events—errors, in this case—and the issue was that the shape of the events had changed between Elixir versions 1.15 and 1.17, causing Elixir’s pattern matching to break. But what, exactly, had changed?
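
To make that concrete, here is a minimal sketch of the idea (with hypothetical names, not our actual handler). The first function clause only matches events with a specific shape; anything else falls through to the catch-all clause:

    # A simplified illustration, not Honeybadger's actual logger code.
    defmodule ExampleHandler do
      # This clause matches only events whose metadata contains a crash reason.
      def handle_event(%{meta: %{crash_reason: {reason, stacktrace}}}) do
        {:report, reason, stacktrace}
      end

      # Events with any other shape land here and are ignored. If the structure
      # of error events changes upstream, they start landing here silently.
      def handle_event(_event), do: :ignore
    end

If a new Elixir version changes how those events are structured, the first clause quietly stops matching, and errors slip through the catch-all. That silent failure is exactly what our test suite caught.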

Since I’ve been experimenting with LLMs and code assistants, I decided to see if one could find the issue—so I copied the test failure output (which provides information about why the test is failing), pasted it into Codeium’s Windsurf editor (using Claude 3.5 Sonnet), and asked what was causing the bug:

why is the following test failing on Elixir 1.17 but not 1.15?

1) test GenEvent terminating with an error (Honeybadger.LoggerTest)
Error:    test/honeybadger/logger_test.exs:47
   Assertion with == failed
   code: assert request["context"]["name"] == "Honeybadger.LoggerTest.MyEventHandler"
   left: nil
   right: "Honeybadger.LoggerTest.MyEventHandler"
   stacktrace:
       test/honeybadger/logger_test.exs:76: (test)

What happened next was impressive. On the first try, it:

  1. Identified the failing test file
  2. Found the source file where the code was failing
  3. Explained the issue confidently, even referencing Elixir’s 1.17 release notes
  4. Proposed a change to the source file

The proposed change was small and looked incredibly reasonable at first glance; the solution was to expand the pattern-matching constraints so that the events from both Elixir versions would match. So I accepted it and ran the tests again, and they passed! I was impressed—this just saved me a bunch of time. After a few more exchanges, I fixed the second failing test (which was related), polished up the code a bit, and was almost done.

Of course, I wasn’t going to use this fix without documenting it first, so I asked Windsurf to summarize the conversation and actions it took, and it wrote it up for me. Everything looked good, so I committed (saved) the changes and pushed the code up to GitHub for a final review. The tests were now passing on all supported Elixir versions! But something didn’t feel right. Did I really understand the problem in the first place?

Normally, I would have started by debugging the data myself, which involves gathering more information, including what the events look like and how they changed between Elixir versions. I’d have spent more time reading the documentation and release notes for the failing Elixir versions. Only then would I have made a change to handle this specific scenario. I’d assumed Claude knew what it was talking about—it mentioned the release notes, after all—but how much knowledge was it really working with? It was hard to tell, and that’s a problem with LLMs: they’re a black box.

So I started from the beginning and soon discovered that there was just a small inconsistency in the structure of one particular event. Claude’s solution—expanding the constraints—did fix the bug, but it risked introducing new and more subtle bugs in the future when other events—events we don’t care about—could potentially invoke this code path. That would be bad, and what’s worse, if I continued to commit “fixes” like this, those changes would compound over time, reducing the quality and understanding of our codebase.

What I realized is that Claude was bullshitting me. It didn’t understand the underlying issue because it wasn’t actually described in the release notes. When it confidently identified the problem and proposed a solution, it had regurgitated the information I’d prompted it with, and I’d fallen for it. Oops.

In the end, I fixed the bug myself, and while it did involve changing some constraints, my solution addressed the root cause. Instead of just accepting more events, I added some specific handling (and documentation) for Elixir 1.17, leaving the old code path unchanged.

And that, dear reader, is how not to use an LLM.


LLMs themselves are confidently gullible pattern-matching engines that reflect our own thoughts back to us. But paradoxically, they’re also good at reasoning and making connections. This ability is useful in software development and also in figuring out what the hell people are talking about on the internet. But it also means that you shouldn’t trust them, and you definitely shouldn’t use them to replace your search engine or traditional research techniques.

Mike Caulfield, an online verification and information literacy researcher, is studying Claude’s ability to infer meaning and provide interpretive context. Search engines are backed by a database of known answers—not known to you (that’s why you’re searching for them), but known in general. They’re good at surfacing known unknowns. According to Caulfield, LLMs excel at the opposite: they surface unknown unknowns:

When you give these systems space to tell you (roughly) what’s in the linguistic vicinity of the linguistic representation of your problem, some pretty amazing things happen.

LLMs can help you understand your domain better and build up the context you need to solve problems faster—if you don’t let them beguile you. Code assistants and LLMs in general blur the lines between the two use cases, often pretending to be search engines when what you really want is to fill in the missing context. And what you ask them makes a difference:

What is striking is when you get over the idea of “give me this particular thing” and lean into “map out this issue using this lexically-inflected template for reasoning” it thinks up all the sorts of things you wouldn’t think to have asked.

In one experiment, Caulfield evaluated a TikTok video that used a newspaper article from 1928 to claim that the U.S. government had secretly funded the Wright Brothers before they achieved flight in 1903, withholding the technology from the public. The newspaper article claimed that the U.S. had spent billions on “aeronautics” since 1899—a claim that would be difficult for students to verify because the context needed to search for it is fuzzy. Instead of asking Claude for the answer directly, he fed it the video transcript and asked it to evaluate the evidence.

Claude’s response pointed out that a major war had occurred (WWI) between the Wright Brothers achieving flight in 1903 and the article’s date, and suggested learning how much aeronautical spending occurred prior to 1903 and whether it was connected to the Wright Brothers. Again, instead of asking Claude for the answer, Caulfield asked it to construct a Google query, which led him to actual funding numbers from the period. After some further research, he learned that most government spending prior to 1903 went not to the Wright Brothers but to Samuel Langley:

The LLM is used to give the lay of the land, but not to build a detailed map of it. It’s not that Claude is bad at facts in general — it is able to tell you about the Wright Brothers and about Langley quite well. But in finding elements of more detail, it is perhaps better to have the students transition to search.

Caulfield recommends using the LLM as part of a larger habit, what Sam Wineburg calls “taking bearings”—in which you survey the landscape before diving into a research session (or debugging some code). In this way, LLMs can surface relevant questions to ask when you don’t know what to search for. You want to ask the right questions and defer to traditional research and debugging to fill in specific knowledge. Once you have a complete picture of the problem, feeding that context back into your code assistant often yields a better result.


LLMs are potent tools and great at writing code. They excel at many routine tasks and can save you time and effort: setting up boilerplate code, refactoring and transformations, documenting behavior, suggesting optimizations, fixing bugs, and many more. One of my favorite use cases is building small personal tools that aren’t complicated but were previously time-prohibitive to create and maintain.

But you need to know if the code LLMs write is the right code. For that reason, a good rule is to have them work a level or two below your skill level. If you don’t fully understand the code they generate, take the opportunity to learn by studying the output and move to traditional research when necessary—then start from the beginning and write that code yourself. At the end of the day, solving the hard problems is what makes us engineers. If we approach LLMs with this mindset, they can be a useful tool for developers at all career stages.

No matter how experienced you are, it’s important to remain skeptical. AI is not magic. It’s not even a sharp tool. It’s a wildly destructive tool that can make you more productive but also wastes a lot of time and effort if wielded unconsciously.

New year, old me

A frozen pond I photographed in early January 2024.

I’ve been struggling creatively recently. Maybe it’s the winter doldrums, or the annual existential crisis that usurps my winter holidays. Either way, it’s a new year, and I crave change.

For as long as I can remember, I’ve oscillated between routine and chaos. I’d love to find a middle ground—some stable plane to exist on—but I’m not sure there is one. This may be part of my bipolar disorder (something I’ve never discussed in public), although I hesitate to pathologize. In any case, inspiration arrives in fits and starts, and always has. And, in some ways, it must.

As 2025 rounds the bend, I’m in the season of chaos. I’m disorganized, lacking discipline, longing for control. As John Keats once wrote, my mind is like a “pack of scattered cards.”1 I’ve ridden this rollercoaster enough times to know that by mid-January, I’ll be okay. Back into “the swing of things.” Except I’m not sure I want to be—not in the way that leads me full circle come next December.

In the past, when feeling unmoored, I’ve lured myself back to stability with “productivity.” At the start of a new year it’s tempting to yield to the distraction of resolutions, goals, and the chimera of change. New year, new me. The problem is, every time the clock strikes twelve on January 1, I’m still myself.

At the same time, discipline is important for actually making things. It’s impossible to create without discipline, and I’d like to create—or at least to be in the process. But more often than not, I get stuck inside my head. I think capturing the ebb and flow of this “artistic temperament” or whatever you want to call it is the key.

Longinus summed it up well when he wrote that “sublime impulses are exposed to greater dangers when they are left to themselves without the ballast and stability of knowledge; they need the curb as often as the spur.”2

I rode horses as a kid, and—unlike mechanical vehicles—they can be difficult to control. When riding a horse, you can brake or accelerate, just like driving a car. The difference between a horse and a car is that the horse has a mind of its own. That’s the crux of the matter: inspiration requires chaos (the spur), but creation requires discipline (the curb). Routine tends to kill both after a while.

This year, I won’t make resolutions. Actually, I stopped making resolutions long ago—but it’s good to remember why. I do, however, like to use the first of the year to review and set intentions.

So here it is: In 2025, I want to live in the chaos, to harness it, to use it as a tool to shape myself and the world I want to live in. To not be quelled by the routine of daily life and the weight of responsibility. To be tempered by discipline. And to (somehow) find space for it all.

  1. In response to a poem sent to him by Shelley: “You, I am sure, will forgive me for sincerely remarking that you might curb your magnanimity, and be more of an artist, and load every rift of your subject with ore. The thought of such discipline must fall like cold chains upon you, who perhaps never sat with your wings furled for six months together. And is this not extraordinary talk for the writer of Endymion, whose mind was like a pack of scattered cards?” (Redfield Jamison 1993, 98)

  2. From “On the Sublime” (Redfield Jamison 1993, 98)

Write in the moment

I was recording a podcast with my friend John today, and he had some great writing advice: when inspiration strikes, stop and write the first draft immediately. The idea is that your energy comes through in the words you write. When you care about something, people can tell. On the flip side, when writing feels like a chore, your writing suffers. 

I was thinking about this after the podcast today, and I agree that you should capture the thought when an idea energizes you. Gerald Weinberg describes that technique in his book Weinberg on Writing: The Fieldstone Method. But for me, ideas take time to fully develop, and my best writing often comes out in the editing process.

Writing is also a skill that can be perfected, just like any other. Just because I don’t feel like writing doesn’t mean I must write poorly, any more than not feeling like playing the piano means I must play poorly. That said, when I can feel the notes I’m playing, the music sounds better. So maybe John is onto something.

No, I think that’s it. Writing is a skill, and skills take practice to achieve mastery. Mastery—when my technique feels natural—is where I find true enjoyment in anything and do my best work.

I guess what I’m saying—and I’m saying this to myself as much as to anyone else—is just write. Write when you don’t feel like it, and especially write when the mood strikes because those are the moments when you’ll really feel the music.

Webmentions

How I used 1990s technology to avoid writing JavaScript.

Have you heard of webmentions? They’re similar to pingbacks—but modern—and allow websites to notify each other about different types of activity (like replies on social media). The protocol has been a W3C recommendation since 2017.

Here’s an example:

When I post a link to this article on Mastodon, Mastodon sends a request back to my website every time someone likes, reposts, or replies to it. I can then display that activity here on my own website (see the bottom of this article). When I post an article here, I can also send webmentions to any websites I mention (link to).
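
Under the hood, a webmention is just an HTTP POST with two form-encoded parameters: source (the page doing the mentioning) and target (the page being mentioned). A sketch, with hypothetical URLs:

    POST /webmention HTTP/1.1
    Host: example.com
    Content-Type: application/x-www-form-urlencoded

    source=https://mastodon.example/@someone/112233&target=https://example.com/my-article/

Before accepting the mention, the receiving site fetches the source URL and verifies that it actually links to the target.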

In the case of Mastodon, it’s actually a tad more complicated than that; because Mastodon doesn’t send webmentions natively, there’s a service I use—called Bridgy—to watch Mastodon and send me webmentions on its behalf.

The problem with webmentions and static sites

I wanted to add webmentions to my blog but had a problem. I use a static site generator (Jekyll) to build my website. That means I can’t receive webmentions and process them dynamically without using a 3rd-party service, like Netlify Functions. I want my blog to run forever, though, and I’ve found serverless functions challenging to maintain (Node.js, gross). Also, I’m not an expert on the Webmention protocol and didn’t want to maintain my own implementation.

Fortunately, there is a 3rd-party service specifically for handling webmentions: Webmention.io. Webmention.io is an endpoint for receiving webmentions on your behalf, providing an API that returns well-formatted JSON. There are even plugins for various static site builders, including Jekyll. Most of them query the Webmention.io API at build time and bake in the webmentions for each URL, and/or use client-side JavaScript to grab new webmentions on every page view.


I like my websites to be fast. That’s why I use a static site generator to render plain HTML pages in the first place—servers respond much more quickly when serving static pages. I also like the generator itself to be fast, and querying an API (like Webmention.io) to fetch the webmentions for each page at build time is slow, even with caching. Since Jekyll can render JSON files directly from the file system, I really wanted to store my webmentions with the rest of my static files and even push them to my git repository for permanent archiving.

Webmention.io can also send webhooks. Instead of querying an API every time I build my blog, I could listen for incoming webhooks, save the JSON with the rest of my files, and then regenerate the blog (or, better yet, just the mentioned page). For that, I’d still need a dynamic endpoint, but a much simpler one—it just had to authenticate an HTTP POST request and write the JSON data from the request body to disk.

Adding webmentions to this blog

I still didn’t want to use a 3rd-party service to host an HTTP endpoint, and I didn’t want to manage a server process; I wanted a cgi-bin. When I started building websites over 20 years ago, I used Perl and CGI to run simple scripts, like a guestbook (I wrote my own). I prefer Ruby these days—and Perl has deprecated CGI—but could that approach still work? I thought it would be fun to try. It turns out it does work!

By this point, I was full-on coding like it was 1999, and I needed a web server—so obviously, I reached for Apache. Apache still supports mod_cgi, and although deprecated, Perl still supports CGI (and, as a once-popular legacy technology, it will probably continue to do so for some time).

Since I was hosting my blog at Netlify (which cannot run Apache directly), I created a $5/month cloud server at Hetzner. After some basic security hardening, I installed Apache and configured it to serve Jekyll’s _site directory (where it saves the static files). I then added a simple Perl script to Apache’s cgi-bin directory, and after fiddling with file permissions, I had a working endpoint for processing webhooks from Webmention.io.
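
For the curious, here is a simplified sketch of what such a script can look like. It is not my production code, and the file path and the location of the secret are assumptions, but it shows how little is needed: authenticate the POST, then write the request body to disk.

    #!/usr/bin/perl
    # A simplified sketch of the webhook endpoint, not my production script.
    use strict;
    use warnings;
    use CGI;
    use JSON::PP qw(decode_json);

    my $cgi  = CGI->new;
    my $body = $cgi->param('POSTDATA') // '';  # raw JSON request body
    my $data = eval { decode_json($body) } || {};

    # Webmention.io can include a shared secret with each webhook; here we
    # assume it arrives as a "secret" field in the JSON payload.
    my $secret = $ENV{WEBMENTION_SECRET} // '';
    unless (length($secret) && ($data->{secret} // '') eq $secret) {
        print $cgi->header(-status => '403 Forbidden');
        exit;
    }

    # Save the payload to Jekyll's _data directory (hypothetical path), where
    # `jekyll build --watch` will notice the change and regenerate the site.
    my $path = sprintf('/var/www/site/_data/webmentions/%d.json', time);
    open(my $fh, '>', $path) or die "can't write $path: $!";
    print {$fh} $body;
    close($fh);

    print $cgi->header(-type => 'text/plain');
    print "ok\n";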


Here’s how the whole thing works:

  1. Apache serves the static _site directory generated by Jekyll.
  2. When Webmention.io receives a new webmention, it sends a webhook to /cgi-bin/webmention at my website, invoking a process to run my Perl CGI script.
  3. The Perl script authenticates the request; if valid, it saves a new file in Jekyll’s _data directory.
  4. I also run jekyll build --watch on the server, which regenerates the site when the file system changes.
  5. I periodically commit those new files and push them to GitHub for archiving.

That’s it. Now that I had new webmentions saved in my Jekyll project, I could display them using Jekyll’s Liquid templates. You can see all the webmentions for this article below the closing paragraph—and if you like or repost it on Mastodon, Reddit, or Bluesky, you’ll also appear here.
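
The display side is simple, too. Assuming each webhook payload is saved as its own file under _data/webmentions, and that each payload records its target URL (the field names here are hypothetical; yours will depend on what Webmention.io sends), a Liquid loop can filter the mentions for the current page:

    {% comment %} A simplified sketch; the data layout is hypothetical. {% endcomment %}
    <ul>
      {% for entry in site.data.webmentions %}
        {% comment %} Iterating a data directory yields [filename, data] pairs. {% endcomment %}
        {% assign mention = entry[1] %}
        {% if mention.target == page.url %}
          <li><a href="{{ mention.author.url }}">{{ mention.author.name }}</a></li>
        {% endif %}
      {% endfor %}
    </ul>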


What about monitoring and reliability? My Perl script had a bug when I first tested this, meaning incoming webmentions were lost. Webmention.io doesn’t retry webhooks—and even if it did, it’s nice to know when things are failing. For that, I use a product we built at work—Hook Relay. Hook Relay sits between Webmention.io and my site and retries webhooks if my script fails or my site is down. I can also see all of the sent requests, which is handy!

Focus

The best tools in the world won’t help if you lack focus. Attention and practice are all that matter. Make them count.

I started a Mastodon server

This is the 2023 annual update for the journalism-focused Mastodon server that I started on a whim last year. It was originally published at Open Collective.


It’s been a wild year for social media, with Twitter in turmoil and myriad upstarts competing to be the alternative. I launched federated.press in November 2022 as a refuge for journalists and supporting folks during the largest-ever Twitter exodus—back when X was still Twitter. Since then, we’ve grown to over 2,300 users, with roughly 200-300 active every month. Our members include freelance and independent journalists, congressional correspondents, news organizations, trade unions, and plenty of regular folks like myself. Here’s a recap of our first year.

In January, we joined the Open Collective Foundation, a 501(c)(3) fiscal host. Being sponsored by OCF means we can accept tax-deductible donations—which is important since servers aren’t cheap! (To help us keep the lights on, please become a monthly supporter.) OCF has been an invaluable partner and has saved me a lot of time in administrative work.

In March, we announced a community partnership with a fact-checking project, News Detective. News Detective is part of MIT’s Sandbox Innovation Fund Program, and they’re working with journalism students and the public to promote media literacy. We worked together to test a custom Mastodon integration connecting federated.press with their fact-checking platform, making it easier to crowdsource fact-checking of content on Mastodon.

Throughout the year, I also struggled with our hosting provider in a futile attempt to keep costs down. Without getting too technical, when we started, we were hosted by Amazon Web Services (AWS), a popular cloud hosting provider. AWS is great for high-availability services (like Mastodon) that need to scale up quickly, and they provide redundancy in case of unexpected failures. That’s great for solo operators like myself, who are working (very) part-time. Unfortunately, AWS is also quite expensive. Early on, it became clear that we needed extra resources to guarantee a reliable service to our users. Still, I was already spending much more than we were bringing in: monthly donations were ~$12/month, but our total operating costs were ~$250/month and growing. As the person who took the initiative to launch federated.press, I was paying the difference (and still am).

Since I was already well over my budget for funding this project, I couldn’t justify adding additional resources; instead, I paid personally by responding to intermittent outages, often during my work days and evenings. I won’t lie—it was stressful. I knew I’d burn out if I didn’t find a more sustainable option. So, I began to research other hosting providers. I didn’t want to be the only person on-call for the server anymore, so I needed to find a managed hosting provider—someone who knew how to run Mastodon.

Enter Jeff Brown of Fourth Estate and Honeytree Technologies. Jeff is a technologist like myself, and in addition to already running Newsie.social (one of the largest news/media Mastodon servers) and recently taking over management of Journa.host (a server for verified journalists), he owns a web services company that offers managed Mastodon hosting. With Jeff’s expertise in technology and journalism and with Mastodon in particular, Honeytree seemed like a good fit for federated.press.

In October, Jeff and I discussed the technical requirements and logistics of migrating federated.press from AWS to a new dedicated server provided by Honeytree. We set a date for the migration—October 13, 2023. I scheduled a few hours of downtime for that day via our status page, and we were all set. When Friday the 13th rolled around (a date that didn’t escape me), Jeff and his team worked to move our database and file storage to the new server. Everything went well, and we were back up within the scheduled maintenance window with a new, faster, cheaper server.

How much cheaper? So far, we’re paying Honeytree roughly half of what we paid AWS and other managed services in September. Our September hosting expenses were $298.79, and we’re paying Honeytree a flat $149/month for a fully-provisioned server. Best of all, performance has been significantly better and more stable since the migration. Thanks, Jeff and team!

That brings us to today, November 3, 2023—one year after I first registered the federated.press domain name. Has it all been worth it? If you’d asked me a few months ago, my answer may have been different, but today it’s yes—definitely worth it. I’ve learned a ton, met some really exciting people, and have had the opportunity to support the journalism community—something that has become increasingly important to me over the past few years. I’m also grateful to our members, especially those who have offered encouragement or contributed financially to help keep federated.press up and running. There is much work to do, but I still believe that ActivityPub and Mastodon are important contributions to the World Wide Web, and I’m glad that we’re a small part of that.

Aggressively part-time

I recently used the phrase “aggressively part-time” to describe the culture and work ethic at Honeybadger. We typically work on Honeybadger for 20-30 hours a week. To get things done, we have to be focused and efficient.

This approach allows us to maximize the time we spend on life outside of work.

I’ve been thinking about applying this concept to the rest of my life. Work is just one area where I desire progress. I want my relationships, interests, and hobbies to improve too.

In family life, I want to become a better partner to my wife and father to my children and to enjoy meaningful experiences together.

I want to expand my knowledge and improve my thinking to grow intellectually and solve problems.

I don’t want to have hobbies—I want to be good at them. I want to develop skill and experience success in each area.

These areas encompass my life goals.

Each area demands focus to improve. But, like work, I can’t pursue everything at once—I need to prioritize.

Aggressively part-time leaves room for your other endeavors—whatever they may be.

A thief in the night

The film that haunted my young dreams.

A photograph of an overturned van in a parking lot. In the background, a sign reads: 'Only Jesus saves from hell' in all caps.

“Are you gonna die?”

“Yes Billy, I am going to die.”

“Are you afraid?”

“Billy, have you ever heard about Jesus?”

The thought haunted me at night. Like Billy was about to do, I’d said the sinner’s prayer as a young child—probably three or four years old—and I’d meant it, too. I didn’t want to go to hell. A few years later, it turned out that going to hell was kids’ stuff. Don’t get me wrong, I still feared the Devil. But now I also feared the guillotine.

The guillotine—that shining death implement from the French Revolution. That chopper of heads. I hadn’t learned about France or the revolution yet, though. I was too young. To me, the guillotine was something to look forward to if you didn’t get taken in the rapture.

Billy was unlucky enough to live during the seven-year period known as the Great Tribulation, after God called the true believers to heaven in an event known as “the rapture.” Following the rapture, the remaining Christians—and those who hadn’t converted—were left to endure God’s wrath on the earth. They would ultimately face the Antichrist, a world ruler who would force humanity to renounce Christ and take the “mark of the beast”—a physical mark on the hand or forehead. Those who refused would be killed. So for Billy, the only path to salvation was a literal, physical death by execution—but only after asking Jesus into his heart.

The story of Billy’s conversion and subsequent death was from a film called Image of the Beast—the third in a four-part film series from the 1970s-80s called A Thief in the Night. The films tell the story of the biblical rapture and tribulation foretold by the book of Revelation. Events unfold on a pretribulationist timeline, where all true Christians are taken bodily into heaven before the tribulation begins—leaving a remnant behind.

A Thief in the Night was my first encounter with apocalyptic Bible prophecy. It was also my first encounter with anti-government conspiracy theories—themes that had already consumed my parents when I was born in the early 1980s. They’d both converted to evangelical Christianity in the decade that saw Nixon resign following the Watergate scandal, the end of the Vietnam War, and the birth of the Religious Right in the wake of the social movements of the 1960s.

For the past decade, my dad had researched and written a newsletter and two books on the Trilateral Commission—a nongovernmental international organization founded by David Rockefeller in 1973. The group’s stated purpose was to foster closer cooperation between the United States and its allies in Asia, Western Europe, and North America. Some people (like my dad) sensed nefarious plans, however. They said that the group was a conspiracy of a globalist elite working across governments to bring about a new world order.

In the first film—titled eponymously, A Thief in the Night—the United Nations uses the crisis following the rapture to establish a one-world government. Those who don’t pledge loyalty to the new government by receiving the Mark of the Beast are quickly arrested, except for those who escape to form a militant underground resistance. A series of escalating conflicts follows, leading to insurgent warfare and the systematic execution of dissidents by the UN-backed government.

The films are poorly made. The goal is not to entertain, but to save souls by scaring the audience into repentance and salvation. To many, the story will seem more silly than terrifying. The fear comes from believing that it’s true—that the same events will soon play out in the real world. Little is left to the imagination; the second film—A Distant Thunder—opens with a message scrolling up the screen like an FBI warning:

The motion picture you are about to experience is fiction. The prophecy is not. The producers of this film are not prophets. They are drawing to your attention what God has said in his own word.

The disclaimer continues:

The film is based upon many references in the books of Daniel and Revelation and upon the following Biblical prophecy:

“For the Lord Himself shall descend from heaven with a shout, with the voice of an archangel, and the dead in Christ shall rise first: then they which are alive and remain shall be caught up together with them in the clouds to meet the lord in the air.” 1 Thessalonians 4:16,17

And from Matthew 24:21:

“For then shall be great tribulation, such as was not since the beginning of the world to this time, no nor ever shall be.”

It still might seem silly to many people, but to a seven or eight-year-old—who has been repeatedly assured by adults of its inevitability—it’s the stuff of nightmares. The intersecting themes of shame, isolation, and physical violence combine to form a cocktail of fear and self-doubt. The fear of being left behind, alone, without your family—the only chance of rejoining them being a violent death and your severed head in a basket on the ground. Dying bravely will prove your worth once and for all.

Death by execution is scary, but the real horror is in the waiting. Waiting for the rapture to see if you’ll be left. Waiting in a jail cell to see if you have the courage to resist taking the mark. Waiting for the chopping block as it cuts down the saints in front of you. Always waiting for the next terrible test of faith.

I’d lie awake in bed, standing in that line of martyrs snaking its way to the execution stage. My family had been taken, and I’d been left behind. I’d prayed the sinner’s prayer before—since I could barely speak—but I wasn’t good enough. I didn’t truly believe. Would I be brave enough to join them now? Would I happily give this life for a new life in eternity, even as every fiber of my being screamed no? And then—after everything—would I finally be accepted, or would I still find myself in hell, tortured for all eternity?

“I’ll pray again, just to be sure.”

Alone, in the dark—I probably gave my life to Christ hundreds of times.

An estimated 300 million people have seen A Thief in the Night. It’s been screened primarily in homes and churches, where the righteous gather the lost to give them the good news: you can avoid all of this—all it costs is your life. In 1995, Tim LaHaye and Jerry B. Jenkins published the first of a new series—Left Behind—which sold so well that it became a multimedia franchise by the mid-2000s.

In Billy’s final moments, David—the only friendly adult he has left—refuses to give information to the UN soldiers to save Billy’s life, knowing he recently prayed the sinner’s prayer:

“Billy, you’re free. They’re gonna take you outside and lay you down. Now you close your eyes, and tell ’em you love Jesus no matter what.”

Turning to the soldiers:

“Now I ask you, what can you do now? The boy’s free. He belongs to Jesus Christ.”

Opinion detox

Feeling overwhelmed? Try an opinion detox.

Opinions are like junk food. They feel good going down but leave you with indigestion and high blood pressure.

Here’s how an opinion detox works:

For 30 days, don’t consume opinions.

Opinions are judgments formed from sources other than facts or knowledge. Common sources of opinion:

  • Social media posts
  • Comments and replies
  • News opinion pages
  • Podcasts
  • Self-help books
  • Magazines
  • Certain friends and relatives1

Like a detox fad diet, it’s not helpful to cut out everything—that’s why this isn’t an information detox.

The goal isn’t to stop thinking; it’s to chill out and start thinking for yourself.

With that in mind, replace those salty, processed, high-fructose opinions with clean, whole sources of information:

  • Books
  • Research papers
  • Academic studies
  • Other well-researched content (essays, articles, podcasts, documentaries, etc.—but not too much! When in doubt, cut it out.)

“But wait, what should I do with my own opinions?”

You’re free to share your opinions, but there’s a problem: people will want to reciprocate. So here’s a better approach: use a notepad. Any notepad will do. Write down your passing thoughts, and let them go.

Once your thoughts have had time to cool, you can shape them into something permanent. Wait to share them until your detox is over.

If you consume too many opinions, try an opinion detox.


  1. If someone immediately comes to mind, they probably qualify.

Make customer support your competitive advantage

Let’s suspend disbelief for a moment, and imagine that you are a customer of my cable company. Your bill just arrived, and it’s $50 more than it should be.

You probably know what happens next: you call my 1-800 number and speak to a very frustrating robot that I purchased to waste your time. Eventually, you get through to a person (not me) who has the power to help you, and after some back-and-forth, you’ll convince them to fix your bill. Afterward, you’ll wonder: why do I put up with this?

I’ll tell you why.

You see, in corporate parlance, the point at which you are frustrated enough to stop being my customer is called your “breakpoint,”1 and—fortunately for me—you didn’t reach it.

Modern customer service employs artificial intelligence to find your breakpoint and allow you to reach that limit but not cross it. At scale, this is much more efficient than ensuring that each customer is individually satisfied.

Isn’t technology grand?

If you run a business, it’s helpful to remember that this is what your customers are dealing with regularly.

Granted, it’s not always such a nightmare scenario. Nowadays, many companies are trying to “personalize” the support experience—but by and large, customer support is a metric to be optimized, and the bar is LOW.

When your competitors view customer support in this way, make it your competitive advantage.

When your next support ticket comes in, ask yourself: how can I surprise this customer with delightful customer support?

If you run a business and aren’t personally handling support, maybe you should be—it’s a great way to talk to your customers.

  1. Everyone Hates Customer Service. This Is Why.