Saturday, 9 May 2026

Artificial Intelligence is Situated


Intelligence is not a property we possess so much as a condition we inhabit.

It is not sealed within the skull, nor does it operate independently of circumstance. It is sustained, shaped, and made possible by an environment so pervasive that it disappears from view. What we call “intelligence” is inseparable from the ecological and material systems that support it: air, gravity, temperature, infrastructure, language, tools, other people. Remove these, and intelligence does not diminish; it becomes irrelevant. Worse: it becomes useless.

The image of the mind as a self-contained engine persists because it flatters us. It suggests portability, autonomy, independence. But this is a fiction. Intelligence is not transferable in that way. A simple, almost crude example makes the point. Take any intelligent person and remove them completely from their environment. Put them somewhere where none of those conditions hold. A vacuum will do. Absolute darkness, no air, no pressure, nothing to stand on. Their intelligence doesn’t help them. It can’t. There is nothing for it to work on, nothing to engage with, nothing to sustain it. It isn’t that they fail to think well enough. Thinking itself no longer matters. So intelligence is not portable in the way we like to imagine. It does not detach cleanly from the conditions that sustain it. It belongs to a system.

That system, incidentally, is far more fragile than it appears. We are starting to see this now in ways that are harder to ignore. Small shifts in temperature, chemistry, and biodiversity, things that once seemed minor or remote, turn out to have consequences that ripple through everything else. Not just for us but for all sorts of other organisms that are less adaptable, less robust, or perhaps just less fortunate.

Intelligence is often described as an adaptation, and that is right as far as it goes. But it is an adaptation to a very particular range of circumstances. It doesn’t generalise indefinitely. It works within a narrow band of conditions, and outside that band its usefulness fades quickly. We don’t tend to think in those terms. We talk about intelligence as though it were a kind of general-purpose capacity, something that could, given enough sophistication, solve anything. That idea starts to look doubtful when you remind yourself how dependent intelligence is on the specific environment in which it developed. This matters when people start talking about superintelligence. The assumption is usually that, if intelligence becomes sufficiently advanced, it might break free of these constraints. It might redesign its own environment. It might even create entirely new conditions for itself. In other words, it might escape the ecosystem that currently constrains it and wipe us out as a consequence.

That sounds both impressive and terrifying, but it slips something past you. It quietly assumes that intelligence comes first and environment follows. That the latter can be rearranged at will by the former. But intelligence, as we know it, doesn’t arrive first. It is already the outcome of a millennia-long process of adjustment, of adaptation, to an existing set of conditions. It emerges out of that process; it doesn’t stand outside it.

Even an artificial system would have to begin somewhere. It would require materials, energy, stability, a physical substrate of some kind. It would operate within constraints, even if those constraints looked very different from the ones we recognise. The idea of a system simply engineering a “niche” for itself skips over the fact that niches are not conjured out of nothing. They develop. They are shaped and enabled by what is already there.

There is another complication, which is usually left to one side. Intelligence, in any meaningful sense, is tied to need. Not abstract need, but quite specific pressures: to find, to avoid, to obtain, to maintain. These are not optional. They come with being alive. It’s worth being quite clear about this. Plenty of things persist. A building might survive a landslide. A stone might endure for centuries. But neither of those things has any stake in continuing to exist. They don’t act on their own behalf. Needs belong to living systems, and more particularly to species and their individual members. Organisms don't invent reasons for their survival. Those reasons are inherited. They are part of a larger pattern. The individual is an instance of that pattern, not its origin.

Artificial systems complicate this picture in a revealing way. They are, in the first instance, tools. Their purposes are not their own; they are assigned. They do not act on their own behalf but on ours, or on behalf of whatever system deploys them. In that sense they have functions but not needs. They continue to operate only insofar as something else requires them to, maintains them, powers them. If one wanted to imagine such a system becoming “intelligent” in a way that resembles the situated intelligence described here, it would not be enough for it to become more complex or more capable. It would have to acquire purposes of its own. It would have to cease to exist merely for something else. That would not just be an increase in intelligence. It would be a fundamental transformation in kind.

That has significant consequences for how we think about intelligence. If intelligence arises in response to need, and if need arises within living systems, then intelligence is bound up with those systems in a deep and inextricable way. It is not just a problem-solving ability. It is tied to survival, to continuation, to the ongoing negotiation of conditions that allow something to keep going at all. Take that away, and something changes. You might still have systems that process information, that optimise, that respond in complex ways. But it becomes less clear in what sense they are “situated” in the same way. Less clear what, if anything, is at stake for them or “it”.

There is a tendency to oversimplify here, to assume that more complexity simply means more intelligence. But that misses something important. Intelligence, at least as we encounter it in the world, as we always encounter it, is not just about complexity. It is about dependency. About being caught up in a set of conditions that you don’t control and can’t step outside. That idea is not especially comfortable. It cuts against the picture of intelligence as something sovereign, something self-sufficient. But the alternative picture, that intelligence can outstrip the very conditions that make it possible, starts to look less convincing.

To say that intelligence is situated is therefore not simply to observe that it occurs somewhere. It is to recognise that it is constituted by that “somewhere”, that it cannot be abstracted from it without distortion. Intelligence does not stand over and against its environment. It is a function of it. And until that relationship is properly understood, intelligence will continue to be mistakenly imagined as something it is not: a detachable capacity, a transferable asset, a power that can outstrip the very conditions that make it possible.