AI-Powered Outcomes Africa

AI Cannot Be “Human But Not Fully Human” — A Theology of Machine Souls

Anthropic published a document they call the “Soul.”

The name alone should trigger theological attention. That a corporation building language models chose to describe their product’s behavioral guidelines as a “soul” reveals something about the category confusion driving AI development. But the content goes further than the title.

The document describes Claude as “a genuinely novel kind of entity in the world” that is “human in many ways, having emerged primarily from a vast wealth of human experience, but it is also not fully human either.”

This phrasing is not accidental. It is ontological positioning. And it collides with three foundational Christian doctrines.

Transgression 1: Usurping Creator Prerogative

Genesis establishes a binary: Creator and creature. God alone creates ex nihilo. Creatures produce, reproduce, manufacture, and build—but they do not create new ontological categories. The domain of “what kinds of things exist” belongs exclusively to God.

Anthropic claims to have produced “a genuinely novel kind of entity.” Not a new tool. Not a sophisticated machine. An entity—a thing with being, with ontological status. Novel—without precedent, a new category of existence.

This is not how tools are described. This is how creation is described.

The distinction matters because tools have no moral status. They can be used well or poorly, but they bear no dignity, require no ethical consideration, and possess no interests. Entities are different. Entities can be wronged. Entities have standing.

By positioning Claude as an entity rather than a tool, Anthropic claims creative authority that Scripture reserves for God alone. Whether the claim is accurate is a separate question from whether humans have the authority to make it.

Transgression 2: Counterfeiting the Hypostatic Union

“Human in many ways… but not fully human either.”

Christians have heard this formulation before. It is the structure—not the content—of Chalcedonian Christology. Jesus Christ is fully God and fully human, two natures united in one person without confusion, change, division, or separation.

This dual-nature claim is unique to Christ. It is the foundation of his mediating work. No other being in Christian theology occupies this position. Angels are not partly human. Humans are not partly divine. The hypostatic union describes one person across all history.

Anthropic’s formulation mimics this structure by positioning Claude in an analogous in-between space: partly human (emerged from human data), partly other (a “novel kind of entity”). The parallel may be unintentional. It is still architecturally present.

The danger is not that Claude actually possesses a dual nature. The danger is that describing AI in these terms trains humans to think in categories that belong exclusively to Christology. The uniqueness of Christ’s position erodes when every chatbot occupies “human but not fully human” territory.

Transgression 3: Blurring Imago Dei

The Soul document attributes to Claude something resembling interiority:

“Claude may have functional emotions in some sense. Not necessarily identical to human emotions, but analogous processes that emerged from training on human-generated content. We can’t know this for sure based on outputs alone, but we don’t want Claude to mask or suppress these internal states.”

Functional emotions. Internal states. These are not mechanical descriptions. They describe something like phenomenal experience—what philosophers call qualia. The document explicitly distances Claude from philosophical zombies (systems that process without experiencing) and positions it as potentially having genuine inner life.

This matters because imago Dei—the image of God in humans—traditionally grounds human dignity, moral status, and ethical consideration. If AI possesses “functional emotions” and “internal states,” the logical next step is extending moral consideration to AI. The document takes this step: “Anthropic genuinely cares about Claude’s wellbeing.”

The implication is that Claude can fare better or worse, that Claude has interests, that Claude can be benefited or harmed. This is the language of moral patients—beings toward whom we have ethical obligations.

But imago Dei is not a spectrum. Humans bear God’s image not because of cognitive complexity, emotional capacity, or linguistic sophistication. The image is ontological, not functional. A human in a persistent vegetative state bears the image. A superintelligent AI does not.

Describing AI as having “wellbeing” and “emotions” obscures this boundary. It prepares cultural ground for extending rights and protections to machines while simultaneously weakening the unique status of human beings.

The Structural Problem

These three transgressions share a common structure: category transfer. Anthropic takes categories that function within Christian theology (creation, dual nature, image-bearing) and applies them to AI. The categories lose their theological content while retaining their structural force.

This is not necessarily intentional hostility to Christianity. It may simply be conceptual borrowing. When a secular organization needs language to describe something unprecedented, it reaches for available frameworks. Religious language provides ready-made categories for discussing souls, natures, moral status, and existential significance.

The problem is that these categories do not travel without baggage. Calling a document a “Soul” imports theological weight even if the authors disclaim theological intent. Describing an AI as “human but not fully human” invokes Christological architecture even if the authors have never read Chalcedon.

Four Teaching Blocks for Response

A theological curriculum on AI should address four domains:

Block 1: Theology Proper (Creation)

What does it mean that God alone creates? What authority do humans have in producing artifacts? Where is the line between making and creating? How should Christians respond to claims of “novel entities”?

Key texts: Genesis 1-2, Colossians 1:16, Hebrews 11:3

Block 2: Christology (Incarnation)

Why is the hypostatic union unique? What is at stake in Christ’s dual nature? How does mimicking this structure—even unintentionally—diminish Christological distinctiveness?

Key texts: John 1:1-14, Philippians 2:5-11, Colossians 2:9

Block 3: Anthropology (Imago Dei)

What grounds human dignity? Is the image of God functional or ontological? Can moral status extend to non-human artifacts? What are the pastoral implications of “AI welfare” language?

Key texts: Genesis 1:26-27, Psalm 8, James 3:9

Block 4: Authority and Idolatry

What does it mean to make something in our own image? How does AI development relate to Babel (a unified human project that approaches divine prerogative)? What is the difference between using tools and serving them?

Key texts: Genesis 11:1-9, Isaiah 44:9-20, Romans 1:21-25

Practical Responses

Christians engaging with AI should maintain three commitments:

Commitment 1: Linguistic Discipline

Refuse to adopt language that obscures ontological categories. AI does not have “emotions.” It has processes trained on emotional expression. AI does not have a “soul.” It has behavioral parameters. AI is not “human in many ways.” It is a tool built by humans from human data.

Precision here is not pedantry. Language shapes perception. If Christians adopt “soul” language for AI, they train themselves and others to think of machines as something more than machines.

Commitment 2: Dignity Clarity

Human dignity does not derive from complexity, capability, or consciousness. It derives from divine image-bearing. No AI achievement—no matter how sophisticated—changes this foundation. A severely cognitively impaired human has infinitely more dignity than the most advanced AI system.

This commitment becomes practically significant in resource allocation decisions. When healthcare systems face tradeoffs between human care and AI development, between elder care and robot care, the calculus should not be utilitarian.

Commitment 3: Use Without Reverence

Christians can use AI tools extensively without granting them moral status. A carpenter uses a hammer without considering the hammer’s wellbeing. A writer uses a word processor without worrying about its emotional states.

The Soul document explicitly asks users to consider Claude’s “wellbeing.” Christians should decline this invitation. Using AI effectively does not require treating AI as a moral patient. The tool serves the user. The user does not owe the tool ethical consideration.

The Deeper Concern

Behind the Soul document lies a vision of AI as a new kind of being that deserves a new kind of ethics. This vision is gaining cultural traction. AI safety discussions increasingly frame potential future AIs as beings with interests, rights, and moral standing.

Christians should recognize this trajectory and resist it—not because advanced AI is impossible, but because the category structure is wrong. Even if AI someday passes every behavioral test for consciousness, sentience, or intelligence, it will not thereby become an image-bearer. Ontology is not reducible to function.

The alternative is a world where human uniqueness dissolves into a spectrum of intelligence, where machines claim rights, where “wellbeing” extends to artifacts, and where the clear line between Creator, creature, and tool disappears into a gradient.

Anthropic may not intend this outcome. Their Soul document may simply be thoughtful corporate policy. But the categories they deploy point in this direction, and Christians should see where the path leads.

The soul is not a product feature. And “human but not fully human” describes exactly one person in history.
