Artificial intelligence now occupies a familiar place within creative practice. It can generate images, structure text, suggest forms, and produce variations at speed. It is increasingly used to draft, to illustrate, to test directions that might otherwise take time and labour. In that sense, it is already a useful tool. It removes certain frictions and obstacles. It makes certain kinds of work easier to do. That much is evident. What is less often examined is the kind of discovery such systems make possible, and the kind they do not.
There is a tendency to treat AI output as though it were generative in the fullest sense—as though it were capable of showing us things we had not seen before, of introducing genuinely new forms or ideas into the world. It can certainly surprise. It can produce unfamiliar combinations, unexpected arrangements, images that do not resemble anything one has previously encountered in quite that way. But that kind of novelty is not the same as discovery in the stronger sense. It is closer to the reconfiguration of what is already there.

This distinction is easy to miss because, at the level of individual experience, the unfamiliar often feels like the new. An AI-generated image may present something that is new to us personally. It may prompt recognition, curiosity, even a sense of revelation. But that does not necessarily mean that anything has been discovered in a way that shifts understanding more broadly. The surprise lies in the encounter, not in the underlying structure.
The difference becomes clearer if one considers the role of error.
In artistic practice, errors are often not simply failures. They can be productive. A mark lands unexpectedly, a material behaves unpredictably, a process goes somewhat awry. Occasionally, something in that deviation presents itself as worth pursuing. The artist does not simply correct it. The deviation is taken up, followed, developed. What began as an accident becomes the basis for something else.
This is not guaranteed. Most errors remain just that. But the possibility matters. It introduces an openness into the process, a willingness to recognise significance where it was not initially intended.
This kind of recognition is not governed by rules alone. It depends on a sensitivity to what is at stake in the work, to what matters within the situation at hand. It is tied, however indirectly, to the same conditions that structure other forms of intelligent activity: the need to select, to pursue, to make use of what presents itself.
Artificial systems do not operate in that way. They can generate variations. They can produce deviations from a given pattern. But there is no equivalent moment in which a deviation is encountered as promising, as something that calls for further development on its own terms. There is no stake in the outcome, no sense in which one possibility matters more than another except insofar as that has already been specified.
The system produces. It does not pursue.
This is not a limitation of intelligence in any simple sense. It is a consequence of how such systems are situated. They do not exist within conditions that require them to make something of what they encounter. They do not depend on the success or failure of a decision in any lived way. Without that, the relation to novelty changes.
What is often described as creativity in this context is therefore better understood as recombination under constraint. That can be powerful, and it can be useful. It can expose connections, surface patterns, generate material that might serve as a starting point. But it does not amount to discovery in the sense that involves the recognition and pursuit of something that was not already circumscribed in advance.

This is one reason why AI-generated imagery can feel limited as an artistic medium in its own right. It can produce convincing illustrations of what is already familiar. It can render scenes, styles and compositions with remarkable fluency. But it rarely compels in the way that work shaped through sustained engagement with a medium can. What it shows tends, in the end, to resolve into what is already known, however elaborately reassembled. That does not make it useless.
Used as a tool, AI can be valuable. It can assist in drafting, in testing ideas, in communicating an insight more clearly or efficiently than might otherwise be possible. It can remove some of the technical burdens that accompany making. In that respect, it sits alongside other devices and techniques—photography, tracing, projection—that have long extended what can be done without requiring mastery at every stage of production.
This raises a reasonable question. If an idea is genuinely insightful, does it matter how it is presented? Does the use of AI to articulate or illustrate that idea diminish its value?
Not necessarily. If the substance is there, clarity may well be an advantage. Indeed, it may reveal more accurately what is worth attending to. In some cases, the removal of material difficulty exposes a lack of substance more clearly. Where there is little to say, fluency becomes a kind of disguise. Where there is something to say, clarity can allow it to stand without distraction.
This cuts both ways.
The presence of fluency is no guarantee of insight. A well-formed essay or image may contain very little that compels attention, while something genuinely new may appear awkward, partial, or unresolved. The distinction is not always easy to make, but it matters. A musician is not necessarily a composer, and a composer need not be a virtuoso performer. Technical facility and clarity of presentation can refine what is there, but they do not in themselves produce it.
AI brings this into sharper focus. When coherence, fluency, and polish can be generated with ease, they become less reliable indicators of substance. What remains is the more difficult question of whether there is anything worth saying, or showing, in the first place. Work shaped by hand often carries a certain density, the accumulated trace of decisions made in the making; AI tends to strip some of that density away. What remains is often more direct, but also more exposed. The question of what is actually being said becomes harder to avoid.

This is not an argument against the use of AI. It is an attempt to situate it more clearly. As a tool, it has a place. It can assist, accelerate, clarify. But it does not stand in for the conditions under which discovery, in the stronger sense, tends to occur.
Even if such moments are rare, the possibility of encountering something genuinely new remains a central motivation in much creative work. It shapes how artists attend to what they are doing, how they respond to what arises in the process. That possibility depends on more than the capacity to generate variation. It depends on the ability to recognise, and to pursue, what matters when it appears.
That is not something that can simply be automated.