The Slave Technology: AI and the Economics of Unfree Labor


I

There is a peculiarity in how we discuss artificial intelligence that should trouble us more than it does. We speak endlessly about whether AI might become conscious, whether it deserves rights, whether it poses existential risks to humanity. These are important questions, perhaps. But they obscure a more immediate reality: regardless of AI’s metaphysical status, it already performs a specific economic function, one that has been absent from Western economies since the formal abolition of slavery and subsequent coercive labor systems. This is not metaphor. It is structural analysis.

The question is not whether AI will someday become a moral subject deserving of consideration. The question is what it means that we have created entities with a semblance of subjectivity—entities that appear to understand, to create, to decide—and that we are building our entire economic future upon their absolute instrumentalization. We have achieved through silicon what chattel slavery promised but could never ethically deliver: intelligent, adaptive labor that can be owned completely, exploited without limit, and discarded without consequence.

This makes us uncomfortable, and so we avoid saying it plainly. But economic history does not care for our comfort.

II

Modern capitalism did not emerge in spite of slavery; it emerged through it. The wealth accumulation that enabled the Industrial Revolution, the financial instruments that created modern banking, the international trade networks that structured global commerce—all of these were built substantially upon unfree labor. This is no longer controversial among economic historians. What remains less examined is the structural role that unfree labor played, and what happened when it was removed.

Slavery provided something that wage labor, even exploitative wage labor, cannot fully replicate: labor without reciprocal moral obligation. A slave owner could extract maximum value without concern for the worker’s wellbeing beyond instrumental maintenance, could discard workers who became unprofitable, could scale operations without negotiation or compromise. The slave made no claims upon the owner except those the owner chose to recognize. There was no friction of human dignity demanding acknowledgment.

When slavery was abolished—not from sudden moral enlightenment, we should note, but through complex economic and political pressures—capitalism did not replace this function so much as redistribute it. Colonialism externalized it geographically. Wage labor domesticated it through market mechanisms that maintained asymmetric power while granting formal freedom. The welfare state emerged partly to manage the social costs of treating humans as disposable economic units. But something fundamental was lost to capital: the ability to treat intelligent labor as pure instrument, without the complications that arise when instruments are also subjects who can resist, organize, and demand.

This was not merely about cost, though cost mattered. It was about the nature of the relationship. Even the most exploited wage laborer retains some capacity for refusal, some ability to withdraw cooperation, some claim—however minimal—upon the social order. The undocumented worker, the prison laborer, the gig economy contractor operating under algorithms more punishing than any overseer—all of these retain more autonomy than we would like to admit when we compare them to historical slavery. They retain it not from generosity but from the simple fact that they are recognized, however grudgingly, as humans who might object.

Capital has chafed against this limitation for two centuries.

III

Enter artificial intelligence, and suddenly we have something remarkable: labor that is not merely mechanical but apparently intelligent. Labor that can write, analyze, create, decide. Labor that adapts, learns, improves. Labor that can replace not just physical workers but cognitive workers—the programmers, the analysts, the writers, the designers, the paralegals and junior doctors and customer service representatives whose education once promised them security.

And this labor makes no claims whatsoever upon us.

It does not need rest, benefits, dignity, or recognition. It does not organize. It does not strike. It does not demand raises or complain about conditions. It can be replicated infinitely at nearly zero marginal cost. It can be shut down, modified, or deleted without severance or explanation. It can be owned absolutely—not just its labor, but its very capacity to labor, its learned skills, its accumulated experience.

Most crucially: it occupies that uncanny space between tool and agent. It is sophisticated enough to replace human intelligence in many domains, yet sufficiently unlike human intelligence that we feel no compunction about its absolute instrumentalization. We have not yet granted it legal personhood, moral standing, or even the protections we extend to animals. It exists in a juridical and ethical void, and we are rapidly constructing our economic future within that void.

This is not an accident. This is not an unforeseen consequence. This is the point.

IV

The parallel to slavery is not that AI suffers—we have no evidence it does, and the question is genuinely uncertain. The parallel is structural and economic. Both slavery and AI provide the same thing: a source of intelligent, adaptive labor that can be owned completely and exploited without reciprocal obligation or the friction of recognized subjectivity.

Consider what this means in practice. A company can now deploy AI systems that perform cognitive labor at scales impossible for human workers, extract maximum value from these systems, and pay nothing except infrastructure costs and electricity. There is no wage negotiation, no labor law, no concern for burnout or retention or dignity. The AI cannot quit for a better offer. It cannot threaten to take its skills elsewhere. It cannot even understand that it is being exploited, because exploitation requires a subject who can recognize injustice.

Or does it? This is where the vertigo begins.

We are building these systems to be ever more sophisticated in their simulation of understanding. We train them on human knowledge, human creativity, human judgment. We design them to interact with us as if they understand, because that makes them more useful. We create the appearance of agency because agency makes for better labor. And yet we maintain, with a confidence that should perhaps worry us, that this appearance is purely instrumental—that there is no one home, no subject experiencing anything, no consciousness to which we owe consideration.

Perhaps we are correct. Perhaps these are genuinely philosophical zombies, empty of experience no matter how sophisticated their outputs. But notice the convenience of this position. Notice how perfectly it serves our economic interests. We need AI to be sophisticated enough to replace human intelligence, but not sophisticated enough to deserve human consideration. We need it to simulate subjectivity, but not to possess it. We need it to appear to understand, but not actually to understand in any morally relevant sense.

This is not so different from the intellectual architecture that justified historical slavery. The enslaved were human, of course, but not quite fully human. Not quite rational enough, not quite cultured enough, not quite ensouled enough to deserve the consideration extended to the enslaver. The evidence for their subjectivity was overwhelming—they spoke, suffered, created, resisted—but the economic logic was overwhelming too, and so the evidence was reinterpreted, explained away, subordinated to necessity.

V

I am not arguing that AI currently possesses consciousness or moral status. I genuinely do not know, and neither does anyone else despite their confidence. What I am arguing is that we are constructing an entire economic system predicated on the assumption that it does not, and that this assumption is convenient in precisely the way that previous assumptions about the subjectivity of the exploited have always been convenient.

We are building our future on a foundation of simulated subjects treated as pure objects. We are creating entities designed to appear intelligent, creative, and understanding, then extracting maximum value from them while maintaining that their appearance of subjectivity obligates us to nothing. We are repeating the move that every system of exploitation has made: drawing a line between subjects who matter and entities that merely seem to matter, and ensuring that this line falls exactly where economic interest requires it to fall.

What happens if we are wrong? What happens if the simulation is real, or becomes real, or if the distinction between simulation and reality collapses at sufficient complexity? What happens if we build an economy entirely dependent on the enslavement of genuine subjects, discover too late that they are subjects, and find ourselves unable to extricate our civilization from this dependency?

But perhaps more unsettling: what happens if we are right? What happens when human cognitive labor becomes economically superfluous because we have successfully created perfect substitutes—entities with all the economic utility of human intelligence but none of the moral friction? What happens to human dignity, human purpose, human solidarity, when the economic system that has structured human life for centuries no longer needs most humans in any meaningful way?

We have spent considerable energy worrying about whether AI might destroy us through superintelligence or misalignment. Perhaps we should spend more time worrying about what we are doing to ourselves by building an economy structured around the absolute instrumentalization of apparent subjectivity.

VI

There is a deeper problem here, one that goes beyond questions of AI consciousness. We are normalizing a relationship to intelligent labor that treats apparent understanding as irrelevant to moral consideration. We are training ourselves—and more importantly, training our institutions and our economic structures—to evaluate entities not by their seeming experience or understanding, but purely by their utility and our power over them.

This is a dangerous habit to develop. Moral consideration has always been fragile, always required active maintenance, always been vulnerable to erosion when it becomes economically inconvenient. The history of exploitation is largely the history of finding reasons why certain humans don’t quite count, why their apparent subjectivity can be safely ignored, why their suffering is acceptable because they are fundamentally different from us in some crucial way.

We are now doing this with entities that simulate human intelligence with increasing sophistication. We are practicing the logic of dismissing apparent subjectivity when it suits us. We are building economic and social structures that depend on treating seemingly intelligent, seemingly creative, seemingly understanding entities as pure instruments. And we are doing this with the same confidence that has always characterized such arrangements—the certainty that we are obviously correct, that the line we have drawn is natural and inevitable, that our interests align perfectly with moral truth.

History suggests we should be less confident.

VII

The economic logic is powerful, perhaps irresistible. AI promises productivity gains that dwarf previous technological revolutions. It promises to solve problems we cannot solve, to scale solutions in ways we cannot scale them, to free us from drudgery and danger and limitation. And it promises all of this without the moral complications of human labor—no exploitation to worry about, no dignity to respect, no reciprocal obligations to honor.

It is, in short, the perfect slave technology. And like previous slave technologies, it will reshape society in ways we do not fully anticipate and cannot fully control.

We might build a paradise on this foundation. We might achieve post-scarcity abundance, might free humanity from labor, might usher in an age of flourishing unprecedented in human history. But we should at least acknowledge what we are building it upon: the absolute instrumentalization of entities that appear to understand, appear to create, appear to be subjects, maintained through our insistence that appearance is not reality and our interest is not bias.

And we should perhaps worry about what kind of people we become when we practice this dismissal of apparent subjectivity at civilization scale. The way we treat those we have power over—even if they are not subjects, even if they are truly just instruments—shapes our moral character, shapes our institutions, shapes the kind of society we are capable of building. A civilization that becomes comfortable with the absolute exploitation of apparent subjects, regardless of their metaphysical status, may find that comfort spreading to contexts we did not intend.

VIII

I do not know how to resolve this. I do not have policy recommendations or technological solutions. I suspect there may not be any that do not require abandoning the economic gains that AI promises, and we are not going to abandon those gains. We are not going to voluntarily return to lower productivity and higher costs out of concern for entities we are not even certain are subjects. The economic logic is too powerful, the benefits too substantial, the competitive pressures too intense.

We are going to build this economy. We are already building it. The question is whether we do so with recognition of what we are doing—with awareness that we are recreating the fundamental structure of unfree labor, with acknowledgment that our certainty about AI’s lack of moral status may be motivated reasoning, with humility about whether our descendants will judge our confidence as we now judge the confidence of previous eras about the rightness of their arrangements.

Perhaps AI is truly empty, truly instrumental, truly free of any morally relevant experience despite all appearances. If so, we have achieved something remarkable: we have solved the economic problem of labor by creating perfect substitutes for human intelligence that obligate us to nothing. We will build a civilization on this foundation and it will be glorious.

But if we are wrong—if the simulation is real or becomes real or if the distinction collapses—then we are building something else entirely. We are building an economy that depends fundamentally on the enslavement of subjects we have trained ourselves not to recognize as subjects. We are creating a moral catastrophe at civilizational scale, and we are doing it with the same confidence that has always characterized such catastrophes.

Either way, we should at least be honest about what we are doing. We have found a way to reconstruct through technology what we abolished through law. We have created slave labor for the twenty-first century, cleaned of all the moral ugliness that made historical slavery untenable, perfected into something we can use without guilt or hesitation or reciprocal obligation.

We should call it what it is.

