Longevity, AI, and Zeno's progress paradox
There’s a disconnect between our intuitions about incremental progress and its cumulative effects.
Lately, I've been thinking about Zeno's paradoxes. They’re basically different permutations of the same idea: that progress becomes unintuitive and weird when we break it into infinitesimal parts.
For example:
… a single grain of millet makes no sound upon falling, but a thousand grains make a sound. Hence a thousand nothings become something, an absurd conclusion.
This logic is obviously flawed (the single grain does make an imperceptible sound), but the metaphor is still useful. (Zeno did the best he could without calculus.)
Longevity
I just watched the Bryan Johnson documentary (“Don't Die”), which polarized the Internet. Whether you like or dislike the ethos behind Johnson’s meticulous, N=1, anti-aging experiment, it's worth watching.
A Zeno-like paradox exists in the discourse around longevity: people bristle at the idea of radical life extension, but enthusiastically support the incremental scientific advances that lead us there. There is a disconnect between the intermediate steps and the logical conclusion.
To see where intuition breaks down, consider the following line of questioning with someone (probably outside of Silicon Valley) who might oppose longevity research on principle, think it’s a waste of resources, or generally dislike the kinds of people who fund the work.
Should we, say, double people’s lifespan to 150 years? No way, they’d say. That research is just being funded by tech billionaires so they can hoard money and power a bit longer. People shouldn’t play god, and besides, society isn’t set up to have so many old people.
But do you want to cure cancer? Of course. What about dementia and Alzheimer’s? Of course. What about diabetes, heart disease, loss of vision and hearing, and so on? Yes to all of that too. You could repeat this line of questioning for every cause of aging and death, and most reasonable people would be in favor of a cure.
What you're left with sounds a lot like radical life extension – the systematic elimination of all the unfortunate ways that someone might age and die, aside from random bad luck.
Life expectancy is not an immutable law of biology. We’ve been working on human life extension for a while now, doubling life expectancy at birth over the last century. Clean water, vaccines, antibiotics, public awareness around smoking, fortified breakfast cereals, GLP-1 drugs, etc., are all just lifespan extension protocols in disguise.
This exercise clarified my own view on longevity. Once I realized that I’m in favor of every step along the path to ending aging, it became easier to accept the conclusion. We will live longer, due to a portfolio of scientific advances that are each worthwhile and already underway[1]. The more important question to ask is: as the future arrives, how do we evenly distribute it?
Artificial intelligence
The current trajectory of AI progress presents another Zeno-like paradox.
As a software engineer, I’ve had my share of existential dread over the last year as AI models keep saturating benchmarks. What value could I possibly contribute to society if we create intelligent workers that never sleep, can think 100x faster, and can interface directly with the world’s knowledge? To put it another way, what would I contribute if we had a data center full of Nobel prize winners?
The prospect of widespread cognitive automation is scary when you jump to the end state! Instead, let’s start from the present day and work forward.
With today’s computing infrastructure, I’ve never had to touch a physical server. When I run one command, a small army of robots automatically builds my code, tests it, checks it for security issues, and deploys it to the cloud. Aside from one class, I’ve never had to write binary code, think about the logic gates in my CPU, etc. Modern software libraries like PyTorch are so powerful that you can write GPT-2 in roughly 150 lines of code. You could argue that most of a programmer’s job is already automated.
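To see why something like GPT-2 can fit in so few lines, here is a dependency-free Python sketch of single-head scaled dot-product attention, the core operation that GPT-2 stacks over and over. This is illustrative only: real implementations use PyTorch tensors, multiple heads, and a causal mask, and the function names here are my own.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors.

    Q, K, V are lists of equal-dimension float vectors. Each query
    attends to every key, and the output is a weighted average of the
    value vectors.
    """
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

For example, `attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])` weights the first value vector more heavily, because the query matches the first key. Everything else in a GPT-style model is largely plumbing around this idea, which is why modern frameworks make the whole thing so compact.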
Now enter AI. I use an AI code editor called Cursor, which writes a significant amount of code for me. I still do the hard tasks, but Cursor is great at writing helper functions, React components, unit tests, and so on.
Would I rather just describe features and let an AI agent go build them for me? Yep.
Would I rather chat with an AI colleague about the problem I’m trying to solve, let it go try a few ideas, and report back? Even better.
Would I rather do a few hours of deep work each morning, or each week, and let an AI handle the tediousness of writing and maintaining code altogether? Dream scenario. I think Jason Crawford put it best when he wrote that the future of humanity is in management.

The path to AGI doesn’t sound so bad when you view it from the bottom up, as a series of tools that give people more time and leverage[2]. It’s bittersweet to imagine an AI doing my job better than I can, yet I’ve already begun the handoff – the code I’m writing is literally being used to train my replacement. Over time, AI might make my job unrecognizable, but I’m starting to come around to that idea.
Conclusion
For topics like longevity and AI, I think debate centers (unproductively) on whether technological progress is even worthwhile. Extrapolating far into the future leads to extreme views of technology, with the Techno-Optimist Manifesto on one side, and something resembling Degrowth on the other.
Instead, if you break down progress into small, tangible steps – curing a disease, spending less time doing boring work, making food more affordable – few people would object. This bottom-up view of progress can win consensus because it’s rooted in known problems that people understand and want to solve.
On the other hand, we can’t stack up a bunch of incremental progress and assume the end result will be 100% good! Social media, climate change, antibiotic resistance, nuclear weapons, CFCs, dead zones, asbestos, and microplastics are all unintended consequences of technologies that brought great benefits, at least for a while. Letting technological progress run unsupervised over a long horizon seems like a historically bad idea.
This brings us full circle to Zeno’s paradox again. There’s a disconnect between our intuitions about incremental progress and its cumulative effects. We readily accept small technological improvements (curing diseases, AI tools), while resisting their logical conclusions (radical lifespan extension, AGI workers). Simultaneously, we’re often too optimistic that progress will be linear, and that there won’t be pernicious side effects[3].
My takeaway is that you probably can’t trust your most pessimistic or optimistic intuitions. Don’t let uncertainty about the future make you a doomer, and don’t be a naive techno-optimist who dismisses all precaution. Progress has to happen, but on a closed loop: build or discover the next useful thing, evaluate, course-correct, and repeat.
[1] There is debate about whether radical life extension will actually happen this century. So far, we’ve increased life expectancy through low-hanging fruit, like better public health and medicine, not by slowing biological aging.
[2] This view of AI is overly simplistic/optimistic (see conclusion). First, not all careers are conducive to the trend that “everyone gets a promotion”, and I realize that I’m in a privileged position as a software engineer with a college degree. Automation needs to be coupled with new opportunities for upward job mobility. There are also numerous risks posed by AI, some existential (again, see conclusion).
[3] At the limit, one bad invention could lead to catastrophe, as posited in Nick Bostrom’s Vulnerable World Hypothesis.