Meaning Without Experience
The Limits of Artificial Intelligence
Artificial intelligence is reshaping contemporary life, not because machines are developing minds, but because humans have created tools powerful enough to reorganize social reality. AI’s influence comes from its capacity to perform certain tasks with extraordinary efficiency, speed, and scale — often surpassing human performance in narrowly specified domains. Yet this performance rests on a crucial absence: artificial intelligence operates without understanding, experience, or purpose of its own. It exhibits competence without comprehension.
This is not philosophical hair-splitting. It is the key to why AI is simultaneously useful and dangerous. When systems that do not understand the world generate outputs that resemble intelligent judgment, they invite misplaced trust. And when misplaced trust becomes embedded in institutions — courts, hospitals, schools, labor markets, and cultural platforms — it can turn into a new form of unaccountable authority. The challenge posed by AI is therefore not the emergence of autonomous machine minds but the quiet redistribution of human agency through technical systems whose power exceeds our current frameworks of responsibility.
Ethical AI will not be achieved by speculating about machine consciousness or by dreaming of artificial moral agents who will somehow shoulder responsibility on our behalf. It will be achieved through stewardship: deliberate design, institutional accountability, and the preservation of human judgment wherever meaning, dignity, and value are at stake. The decisive question is not what machines will become, but what we are becoming as we increasingly rely on systems that simulate understanding without possessing it.
Artificial Intelligence as Amplified Human Intention
Artificial intelligence has no purposes of its own. It does not seek goals, form intentions, or care about outcomes. Every AI system inherits purpose from human choices: from developers who choose training data and optimization targets, from organizations that deploy models inside particular incentive structures, and from users who integrate outputs into everyday decision-making. Yet because these systems operate at immense speed and scale, they amplify human intentions in ways that can escape individual awareness or control.

This amplification is what makes AI socially transformative. A single model can screen millions of job applications, rank the visibility of information for entire populations, flag “risk” in policing or welfare contexts, or influence financial decisions in real time. In doing so, it does not eliminate human agency; it redistributes it. Decisions once made slowly and locally are compressed into technical procedures that are difficult to interrogate and even harder to contest.
Public discourse often frames this transformation in dramatic terms. We speak as though machines are beginning to “decide,” “judge,” or “create” independently. These metaphors are misleading. They encourage us to treat outcomes as if they emerged from machine agency rather than from human design and institutional choice.
When responsibility is spread across code, infrastructure, and bureaucratic layers, it becomes tempting to treat harm as an unfortunate technical side effect rather than a foreseeable consequence of human governance decisions. The danger lies less in machine autonomy than in human abdication.
Stewardship begins with a refusal to hand our agency away — whether to vendors, platforms, or the mystique of computation. If we built these systems and placed them into our institutions, then the moral burden remains ours. AI may be complex, but complexity is not innocence. Complexity is a reason to clarify responsibility, not dissolve it.
The Performance Pattern: Where AI Succeeds — and Why
AI’s achievements are real. Systems excel at image classification, speech recognition, information retrieval, pattern detection, fraud identification, and translation. In scientific and technical fields, automated tools can accelerate discovery, reduce error, and extend specialized expertise. These capabilities are not merely flashy; they are practical, and they can be socially beneficial.
A clear pattern underlies these successes. AI performs best where tasks are well-defined, environments are stable, training data is abundant, and success can be measured quantitatively. Under these conditions, AI offers something genuinely valuable: consistency. It does not fatigue, get bored, lose concentration, or forget procedural steps. It executes narrow functions relentlessly.

But the same pattern reveals AI’s limitation. Outside controlled domains — where meaning is contextual, norms evolve, and outcomes cannot be fully specified in advance — performance degrades. Systems may appear confident while being wrong. They may generate fluent language without understanding what they say. They may reproduce stylistic features without grasping the point of those styles. These are not merely temporary flaws awaiting better datasets or bigger models. They point to a structural gap between computation and comprehension.
The epistemic risk is that when systems perform well often enough, we begin to treat their outputs as insights rather than as predictions. We begin to interpret fluency as knowledge. We begin to assume that because a system can produce an answer, the answer is grounded. This confusion is not confined to casual users. It can infect professional contexts, especially when AI is used to streamline complex judgments: medical triage, employee evaluation, loan approvals, school admissions, and legal risk scoring. In such cases, a seemingly “objective” output can override the messy human work of interpretation.
This is how competence becomes dangerous: not because it is useless, but because it is persuasive. The better AI becomes at producing plausible outputs, the easier it is to forget what it lacks.
Meaning as a Social and Embodied Practice
Translation is a revealing example. A good translation is not a mechanical substitution of words. It is an act of interpretation. It requires sensitivity to implication, irony, tone, cultural references, and social consequence. Machines can often produce serviceable translations in predictable settings, and that usefulness should be acknowledged. But human language is not merely a code. It is a social practice shaped by histories, power relations, expectations, and lived experience.
Meaning does not reside in text alone. It emerges from participation in forms of life: shared customs, unspoken norms, relationships, gestures, and embodied context. A system trained on vast corpora of language can mimic patterns without entering the practices that make those patterns meaningful. It can generate sentences that sound coherent without meaning anything to itself. It can imitate human expression without having anything to express.
Driving offers a parallel case. Most driving is routine: keeping within lanes, following traffic lights, and maintaining distance. But the moments that test judgment are not routine. They are ambiguous, situational, and morally charged: a pedestrian hesitating at a crosswalk, a cyclist’s uncertain gesture, a spontaneous negotiation at a crowded intersection, a situation where “the rule” is less important than the vulnerability of the person in front of you. These moments require tacit norms and embodied anticipation — skills grounded in lived experience and the capacity to respond to disturbance.
These examples point to a deeper philosophical distinction. AI systems manipulate representations; they do not inhabit the world those representations refer to. Humans do not merely process information; we live within meaning. We care, we hesitate, we interpret, and we are accountable. Treating representational manipulation as equivalent to lived understanding invites overconfidence and misplaced trust.
This also explains why the question “Can AI understand?” often misleads. It invites us to treat understanding as a quantity that could be increased by scaling. But understanding, as humans live it, is not simply more data or better pattern recognition. It is participation in a world — social, embodied, vulnerable, and saturated with value.
From Misplaced Trust to Institutional Harm
Misplaced trust becomes truly dangerous when it is institutionalized. When AI systems are treated as authoritative, their errors become harder to see and harder to challenge. Automated systems now play roles in hiring, credit scoring, policing, sentencing, welfare allocation, insurance, healthcare prioritization, and content moderation. In these settings, errors are not abstract. They affect real lives.
And the harms are not evenly distributed. Automation often strikes hardest where people have the least ability to contest decisions: the poor, the marginalized, the surveilled, the precariously employed, and the bureaucratically managed. When an automated system denies a loan, flags a risk score, or recommends punishment, appeal processes can be opaque, slow, or meaningless. The person affected may not even know what data was used, what assumptions were built in, or how the system weighed competing factors.
Accountability then dissolves into a familiar chain of deflections: the model produced the output; the vendor provided the system; the organization followed procedure; the employee merely “relied on the tool.” This diffusion is not an accident. It is a structural feature of large-scale automation. AI makes decisions easier to outsource, and outsourced decisions are easier to treat as nobody’s fault.
The central ethical question is therefore not whether machines can be trusted. Machines, in the relevant sense, are not moral subjects. The question is whether institutions deploying these systems can be held accountable — whether the people affected can contest, demand explanations, and obtain remedies when automated decisions harm them.
If AI governance focuses only on technical accuracy, it will miss this deeper issue. A system can be “accurate on average” and still be unjust in practice. It can perform well statistically while producing intolerable harm in specific lives. Ethics begins where the average ends: where a particular person is denied, misjudged, excluded, or punished.
Accountability as a Design Requirement, Not an Afterthought
Responsibility must not evaporate into complexity. Developers make choices about training data, objectives, architectures, evaluation metrics, and acceptable error rates. Organizations decide where and how systems are deployed, under what incentives, and with what oversight. Regulators define what risks are acceptable, what transparency is required, and what rights individuals have.
Ethical deployment therefore requires accountability at every stage. In high-stakes domains, systems should undergo impact assessments before deployment, not after harm occurs. Audits should be routine, independent, and empowered to examine not only performance metrics but also institutional effects. People affected by automated decisions should have meaningful explanations — not technical pseudo-explanations, but reasons that can be understood, challenged, and acted upon.
Crucially, “human-in-the-loop” must not become a ritual phrase. In many systems, human oversight is little more than rubber-stamping: the human is present but unable to meaningfully question or override the model. Real oversight requires authority, time, training, and institutional support. It also requires a culture that treats disagreement with the system as legitimate rather than as inefficiency.
If AI is used to make decisions cheaper and faster, there will be constant pressure to reduce contestability, reduce staffing, and treat appeals as “friction.” Stewardship means resisting that pressure when rights and dignity are involved. Some kinds of friction are ethical. Some forms of slowness protect human beings.
The Limits of Moral Automation
The prospect of building moral machines is often presented as a solution. If bias is the problem, perhaps we can encode fairness. If harm is the risk, perhaps we can optimize away harmful outcomes. Formal constraints can be valuable. They force clarity. They can prevent obvious failures. They can establish guardrails where speed and scale would otherwise overwhelm human attention.
But moral life is not reducible to rule-following. Values are contested, historically shaped, and often in conflict. Fairness itself is not one thing. Different moral frameworks disagree, and reasonable people can hold different views about what counts as just.
Encoding morality risks freezing particular assumptions — often the assumptions of powerful institutions — into systems that present themselves as neutral.
A system can be perfectly compliant and still be unjust if what it complies with is flawed. If a welfare system is designed around suspicion, a “fair” automation of that suspicion is still cruelty. If an institution treats certain communities as inherently risky, an accurate risk model may only refine a prejudice. Technical precision does not cleanse moral failure.
For this reason, the most defensible goal is not moral agency but moral constraint. Ethical AI should be understood as engineered limitation: boundaries that prevent predictable harms and force responsibility back into human institutions. Machines can help implement policy. They cannot legitimately replace the political and moral work of deciding what policy ought to be.
The fantasy of moral automation is attractive because it promises relief. It suggests that responsibility can be delegated to technical systems and that ethics can become an engineering problem. But ethics is not merely a matter of optimization. It is the practice of living together under conditions of vulnerability and disagreement.
Creativity, Culture, and the Risk of Flattening
Generative models have unsettled old ideas about creativity. Texts, images, music, and designs can now be produced through prompting, selection, and refinement rather than direct manual execution. This has led some to declare that machines are now artists, writers, or composers. But this conclusion repeats the same confusion: it treats output as inner life.
The more accurate description is that the human role has shifted. Creativity increasingly lies in framing tasks, curating outputs, refining direction, and embedding results within a larger context of meaning. Generative AI can accelerate drafts and widen access to experimentation. It can lower barriers to entry and allow people to explore ideas they might not have had the technical skills to express directly.
Yet the same frictionlessness carries a risk of cultural flattening. When creation becomes instantaneous, novelty is easily mistaken for value and speed for depth. The market for attention rewards what is immediately consumable. If cultural production becomes dominated by generative systems trained on existing patterns, the result may be an endless recycling of aesthetic surfaces — beautiful, fluent, and empty.
The problem is not AI-generated art itself. The problem is an ecosystem where provenance disappears, context collapses, and everything circulates without lineage or responsibility. Culture is not merely content; it is memory, interpretation, and struggle. Tools shape attention; attention shapes taste; taste shapes culture. If we allow generative systems to define what is most visible and most profitable, we risk building a culture optimized for engagement rather than meaning.
Stewardship here requires more than technical policy. It requires cultural seriousness: norms of attribution, practices of curation, and institutions that protect depth in a world addicted to speed.
Against the Myth of Thinking Machines
Speculation about artificial general intelligence often distracts from immediate responsibilities. Improvements in narrow tasks do not automatically accumulate into general understanding. Fixation on hypothetical futures can become an excuse to avoid the political work required now: auditing systems, regulating deployment, protecting rights, and resisting the consolidation of power.
At a deeper level, the myth of thinking machines reflects a psychological projection. Machines do not think, feel, or experience. They do not act from their own concerns. They do not engage with the world as autonomous beings. They perform tasks within frameworks designed by humans. Any “intelligence” we observe is a reflection of the structured environments and human purposes in which they operate.
Human experience is inseparable from embodiment. Perception, cognition, and meaning arise from lived engagement with the world — from bodies responding to uncertainty, need, risk, and disturbance. Disembodied systems that manipulate symbols cannot replicate this. They generate responses, but they do not speak. They answer, but they do not mean it.
This matters ethically because the myth of machine agency encourages moral confusion. If we think machines “decide,” then no one is responsible. If we think machines “understand,” then we surrender judgment. If we think machines “know,” then we treat their outputs as authoritative. The point is not to deny AI’s capabilities. It is to refuse the metaphysical inflation that turns tools into pseudo-subjects and outputs into pseudo-truth.
Keeping Meaning in Human Hands
What makes AI so disruptive is not that it introduces a new kind of being into the world, but that it introduces a new way for human choices to travel — faster, farther, and with less visible authorship. It is a machine for scaling decisions, language, images, classifications, and recommendations. That scaling can be helpful, even remarkable.
But it also makes it dangerously easy to confuse an answer with an understanding, an output with a judgment, and a statistical regularity with a reason.
The core task, then, is not to decide whether AI is “intelligent” in some metaphysical sense. The task is to decide what kinds of authority we are willing to grant it in practice. In many settings, the temptation is to treat computational results as neutral and neutrality as legitimacy. But legitimacy is not produced by calculation. It is produced by accountability. A system that cannot explain itself, cannot be questioned meaningfully, and cannot be held responsible should not be allowed to silently govern human lives.
This is why ethical AI is less about building machines with moral status and more about building institutions with moral stamina.
The real question is whether we will let responsibility dissolve into technical complexity or whether we will insist — patiently, repeatedly — that every automated decision remain attached to identifiable human obligations. Some decisions must remain slow enough to be contested. Some processes must remain transparent enough to be criticized. Some forms of human involvement are not inefficiencies but safeguards.

Finally, we should be clear-eyed about the boundary AI cannot cross. It can approximate patterns of language and behavior, but it cannot enter the realm where meaning is lived — where words are tied to bodies, histories, relationships, and stakes. It cannot be disturbed, cannot care, cannot take responsibility, and cannot suffer consequences. It can assist human life, but it cannot replace the human work of interpreting the world and answering for what we do within it.
The ethical response to AI is therefore not panic and not reverence. It is governance without illusion: constraints where power concentrates, contestability where error harms, and humility about what computation can and cannot provide. If we treat AI as a tool — powerful, useful, and limited — then it can expand human capabilities without shrinking human agency. But if we treat it as an oracle, we will trade away judgment for convenience and call the result progress.
The future will not be determined by whether machines wake up. It will be determined by whether we stay awake: whether we preserve the human responsibility that gives our institutions legitimacy and the human experience that gives our words meaning.
◊ ◊ ◊