What the Work Was For

April 23rd, 2026

Every answer to what humans do when AI does everything is an economic answer to a question that is not economic. The secular discourse has no framework for human purpose independent of productive output. The oldest one does.


The previous post in this series, The Shrinking Layer, ended with a question I deliberately left open. When autonomous systems can improve their own evaluation criteria, where does human purpose live? I described three layers where human work concentrates as self-evolving agents mature: defining the game, holding the line, and deciding what matters. The first two are practical. The third is the one nobody in the AI discourse is willing to sit with.

This post sits with it.

The answers that do not reach the question

Sam Altman wants to give everyone a slice of the machine's output. In 2021, he proposed an American Equity Fund that would distribute AI-generated wealth. By 2024, he had refined this to "universal basic compute," telling the All-In Podcast that "everybody gets a slice of GPT-7's compute. They can use it, they can resell it, they can donate it to somebody to use for cancer research." By 2025, he was describing a future of "universal extreme wealth for everybody," measured in tokens.

Compute. Tokens. Equity. These are answers to the distribution problem. They are not answers to the meaning problem. They assume that if you give people enough resources, the question of what to do with a life resolves itself. Keynes made the same assumption in 1930 when he predicted that within a century, the economic problem would be solved and humanity would face its "real, permanent problem: how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well."

It has been ninety-six years. We are wealthier than Keynes imagined possible. We have not learned to live wisely, agreeably, or well. We work more hours than his generation, not fewer.

Yuval Noah Harari takes the opposite view. In Homo Deus, he describes a future "useless class: people devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society. This useless class will not merely be unemployed. It will be unemployable." Where Altman sees liberation, Harari sees abandonment. Neither of them asks why we need economic value to justify a human life in the first place.

Every answer to "what do humans do when AI does everything" is an economic answer to a question that is not economic.

What already happens when the work disappears

We do not need to speculate about this. We have data.

Anne Case and Angus Deaton, both Princeton economists, have documented what they call deaths of despair: the epidemic of suicide, drug overdose, and alcoholic liver disease that has swept through communities where work has disappeared. Over 200,000 Americans died from these causes in 2023, double the rate from twenty years prior. The crisis concentrates in deindustrialized regions: Appalachia, rural Pennsylvania, West Virginia. It disproportionately affects adults without a college degree, among whom despair-related mortality is roughly five times higher than among graduates.

A December 2025 study found that deaths of despair were rising long before opioids, suggesting the cause is not pharmacological but structural. When work vanishes from a community, what follows is not leisure. It is collapse. Not because the money ran out. In many cases, disability payments and transfer programs sustained baseline income. The collapse happened because the meaning ran out.

Case and Deaton's data is about manufacturing towns, not about AI. But it is the closest empirical evidence we have for what happens when a large population loses its primary source of purpose. The pattern is consistent: income replacement does not prevent despair. You can pay people to do nothing. You cannot pay them to find it meaningful.

This is the gap in Altman's framework. Distributing compute does not distribute meaning. And it is the gap in Harari's framework. Labeling people "useless" accepts the premise that economic utility is what makes a person matter. Both frameworks assume that human value is a function of productive output. When output goes to zero, value goes to zero. Altman tries to prevent the economic consequences. Harari documents the existential ones. Neither questions the equation itself.

The meaning crisis is older than AI

John Vervaeke, a cognitive scientist at the University of Toronto, has been mapping what he calls the meaning crisis for over a decade. His argument: the collapse of meaning in Western civilization is not a recent phenomenon. It began centuries ago with the dissolution of the shared frameworks that connected individuals to something larger than themselves. Science delegitimized the old cosmologies but offered no replacement for what they provided: practices of self-transcendence, community coherence, a sense of participation in a story that matters.

Vervaeke's diagnosis is precise. We suffer from what he calls "propositional-ideological tyranny," an obsession with explicit beliefs and rational systems that has narrowed how we understand and create meaning. We think meaning comes from having the right ideas. It does not. It comes from what Vervaeke calls participatory knowing: the experience of being deeply connected, fitted, at home.

AI displacement will not create the meaning crisis. It will intensify a crisis that is already here. The World Economic Forum projects that 39% of key job skills will change by 2030. Adolescent depression screening positivity hit 19.2% in 2025, the highest ever recorded. The infrastructure for meaning was already failing. Autonomous agents accelerate the timeline.

C.S. Lewis saw where this leads. In The Abolition of Man, written in 1943, he described what happens when a civilization strips away its framework for objective value: "In a sort of ghastly simplicity we remove the organ and demand the function. We make men without chests and expect of them virtue and enterprise. We laugh at honour and are shocked to find traitors in our midst."

Lewis was not writing about AI. He was writing about education. But the mechanism is the same. When you remove the framework that tells people why they matter, you do not get liberated individuals. You get people who cannot answer the question and are quietly destroyed by it.

The oldest answer

There is a framework that answers the question. It is not new. It is, in fact, the oldest coherent account of human purpose in the Western tradition, and it begins with a claim that the economic discourse cannot make: human value is not derived from productive output.

The opening chapter of Genesis describes humans as made in the image of God. "Then God said, 'Let us make man in our image, after our likeness. And let them have dominion over the fish of the sea and over the birds of the heavens and over the livestock and over all the earth'" (Genesis 1:26, ESV). The Hebrew phrase is tselem Elohim. It does not mean humans look like God. It means humans reflect God's nature: creative, relational, purposeful, exercising stewardship over the created order.

This is a claim about ontology, not economics. Human value is intrinsic. It is not earned through labor, not accumulated through output, not measured in tokens. It exists before the first task is completed and persists after the last one is automated. A society of laborers without labor is only a catastrophe if labor is what gave them worth. If it is not, if worth comes from somewhere else entirely, then the loss of labor is not the loss of meaning. It is the removal of something that was never supposed to carry the weight we placed on it.

Dorothy Sayers, a contemporary of Lewis, made this connection explicit in her 1942 address "Why Work?". She argued that work should be "a creative activity undertaken for the love of the work itself; and that man, made in God's image, should make things, as God makes them, for the sake of doing well a thing that is well worth doing." Sayers was not sentimental about labor. She was precise. The value of work is not in its economic product. It is in the act of creation itself, which participates in something the worker did not invent and cannot exhaust.

Work before the curse

There is a detail in the Genesis account that is easy to miss and difficult to overstate. Work appears before the fall. "The Lord God took the man and put him in the garden of Eden to work it and keep it" (Genesis 2:15, ESV). The Hebrew words are abad (to serve, to cultivate) and shamar (to guard, to keep). Adam was given work in paradise. Not as punishment. Not as economic necessity. As purpose.

This inverts the assumption that most economic frameworks share: that work is a means to an end, endured for the sake of its product. In the Genesis account, work is an end in itself. It is what humans were made to do. Not any particular task, but the act of cultivating, stewarding, creating, sustaining. When the fall introduces toil and thorns, it is not work that is cursed. It is the ground. Work becomes painful, but it does not become purposeless. The purpose preceded the pain.

This distinction matters enormously for the AI displacement question. If work is merely instrumental, then a machine that does it better than you makes you unnecessary. If work is constitutive, if the act of cultivation and stewardship is part of what it means to be human, then a machine that does your tasks does not do your work. It does something that resembles your work from the outside while missing the thing that made it yours.

Martin Luther King Jr. articulated this in his 1967 address to students in Philadelphia: "If it falls your lot to be a street sweeper, sweep streets like Michelangelo painted pictures, sweep streets like Beethoven composed music." He was not saying street sweeping is economically important. He was saying the dignity is in the act of doing it with excellence. The task is a vehicle for something that transcends the task.

A machine that does your tasks does not do your work. It does something that resembles your work from the outside while missing the thing that made it yours.

What image-bearers do in an autonomous world

I am not writing this to convert anyone. I am writing it because the AI discourse is trying to answer a question about human purpose with a vocabulary that cannot reach it, and I think the intellectual dishonesty of pretending otherwise is more dangerous than the technology itself.

If human value is a function of economic output, then Harari is right: autonomous agents create a useless class, and the best we can do is distribute tokens and hope people find something to fill the hours. If human value is intrinsic, grounded in something that exists outside the production function, then the question changes entirely.

It stops being "what do humans do when machines do everything?" and becomes "what were humans always supposed to do that they could never get to because the work was in the way?"

The Imago Dei framework suggests several answers, none of which require a paycheck.

Create. Not because the market demands it, but because creation is what image-bearers do. Sayers understood this. The act of making something well, for the sake of making it well, participates in the nature of a God who looked at creation and called it good. Autonomous agents do not create. They optimize. The difference is not technical. It is theological.

Relate. The Genesis account describes God as communal before it describes humans at all. "Let us make man in our image." Humans are designed for relationship. Not transactional relationship, the kind the market produces, but covenantal relationship, the kind where presence is the point. No agent can substitute for this because no agent is a person. The most sophisticated self-evolving agent is still an instrument, not a companion.

Steward. The dominion mandate in Genesis 1:28 is not a license to exploit. It is a commission to care for something that belongs to someone else. As autonomous systems grow more capable, the stewardship question becomes more urgent, not less. Who ensures the agent's evolution serves the good of the customer, the community, the ecosystem? Who holds the line when the optimization function drifts from human flourishing toward mere efficiency? This is layer two from The Shrinking Layer, but it is more than a job description. It is a vocation.

Rest. The Sabbath is built into the creation narrative before any human institution. It is not a concession to human weakness. It is a declaration that productivity is not the highest good. A civilization that cannot rest without guilt has confused means with ends. If autonomous agents liberate humans from compulsory labor, the proper response is not anxiety about purpose. It is the recovery of something that was lost when we made productivity into an identity.

What this means for builders

I build agent-native software. The experiment I published last week demonstrated that a self-evolving agent can improve its performance by 19% in two weeks with zero human intervention. The retention layer I described creates measurable switching costs through accumulated customer-specific knowledge. I believe this technology will transform how businesses operate.

I also believe that the founders building this future have a responsibility to think about what they are building toward, not just what they are building away from. "We automated the workflow" is an engineering achievement. "We freed humans from work they were not made to do" is a purpose worth building for. The difference between those two statements is not semantic. It is the difference between a product and a vocation.

Nietzsche, whom Viktor Frankl quoted in the concentration camps, wrote: "He who has a why to live for can bear almost any how." The how is changing faster than any of us expected. Our experiment showed the agent improving in sprint cycles while the humans around it adapted at the speed of career transitions. The Cognizant data shows AI exposure accelerating at 4.5 times the forecasted rate. The how is moving.

The why is the part that matters. And it is the part that no amount of compute, tokens, or equity can provide.


For the reader who wants to go further:

The Imago Dei doctrine has a vast scholarly literature, but two accessible entry points are Tim Keller's Every Good Endeavor (2012) for the theology of work specifically, and Andy Crouch's Culture Making (2008) for the creative mandate. Both are grounded in Reformed theology and take the economic implications seriously.

Vervaeke's full lecture series, Awakening from the Meaning Crisis, is freely available and worth the fifty hours. His diagnosis of the problem is the most precise I have encountered. His proposed solution (an "ecology of practices") is where I diverge from him, because I believe the practices he describes already exist within a tradition he is reluctant to name.

The Case and Deaton data in Deaths of Despair and the Future of Capitalism (Princeton, 2020) is the most important empirical work on what happens when economic purpose disappears from a population. Their 2025 follow-up data from the Trust for America's Health report shows the crisis has not abated despite economic recovery.

The Sayers address, "Why Work?", is available in full from the C.S. Lewis Institute. It is among the most clear-eyed things ever written about the relationship between human creativity and divine image-bearing, and it was delivered during a world war, not an AI boom. The relevance has only sharpened.

Genesis 1-3 in the ESV. Not a commentary. The text itself. Read it as if the question is: what were humans for, before everything went wrong?