We need to create and maintain a market of the authentic, as a response to an era of expediency, whether AI catches up or not.
Nostalgic forgiveness notwithstanding, old-school visual effects still receive more praise than the newest all-digital imitations. The term CGI (Computer-Generated Imagery) has become nigh-derogatory among movie commentators. It wasn't so when its quality and novelty peaked with Pirates of the Caribbean's Davy Jones (below, left); that changed more recently, as large studios extracted more for less, outsourcing to underpaid FX studios.
Now the phrase "this movie uses practical effects" or "this game was animated by hand" (Cuphead, above, right) has become a desirable badge of honour for the relevant marketing departments. Tom Cruise famously boasted that Top Gun: Maverick featured "no CGI". While this isn't true at all, his movie was still a success. Turns out we don't mind CGI, as long as we don't notice it. Or perhaps intention matters: we want to be tricked for our amusement, but we still expect mastery, as with a good stage magician.
Roughly speaking, Large Language Models and image-generation tools operate on statistical probability: each output word or pixel is the one most statistically likely to follow the previous ones, subject to other constraints (such as prompts, guardrails or intrinsic limitations). While it's tempting to think human minds were doing the same all along in a grey-matter analogue, according to experts like Marina Pantcheva (PhD in Theoretical Linguistics), humans use hierarchical structures and Universal Grammar, while LLMs follow statistics and sequential order ([link](https://www.rws.com/blog/large-language-models-humans/)). For now, at least.
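To make that "most statistically likely next word" idea concrete, here is a minimal toy sketch in Python. It uses simple bigram counts over an invented corpus; the corpus, function name and greedy pick-the-top-word strategy are illustrative assumptions only, not how production LLMs actually work (those use neural networks and usually sample from a distribution rather than always taking the single most likely word):

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the word that most often follows `word` in the corpus."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else "<end>"

print(most_likely_next("the"))  # -> "cat" ("cat" follows "the" most often)
```

Even this crude sketch shows the point of the paragraph above: the output is whatever the statistics of the training data make most probable, not the product of any hierarchical grasp of meaning.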
And yet, OpenAI, Anthropic and other AI curators and developers seek to imitate the processes of the human brain. In the process of writing this very article, I have written tenfold more paragraphs and followed branching trains of thought; yet, in the interest of brevity, they didn't make the cut (I shall save them for another article). Like any essayist, I've critiqued myself and chosen the best lines to get my point across. This selective method (not mine, everyone's) surely inspired the incremental, adaptive learning and fitness functions that LLMs and generative AIs use.
Based on a screenshot from a Wallace & Gromit movie (a British cultural institution, below, left), I asked a generative AI to imitate it ...with questionable success, if one judges by my poor and blasphemous attempt below. However much better a more experienced prompt writer (a new area of expertise, no doubt) might do with another tool like MidJourney, the result in no way carries the wealth of meaning that the work of Nick Park and other humans loaded it with. It will catch up, for sure; but it shall never surpass, for it is missing a vital ingredient.
To illustrate, see how the real deal (left, if it need be said) features a colder lighting setup, perhaps hinting at the old low-consumption fluorescent tubes that a lower-middle-class household could afford. While this seems like a negative feeling and an error of artistic judgement, its authenticity lies in the ability to be wrong, but in all the right ways. Perhaps it seeks to remind us of a reality we, the consumers, lived and suffered, and learned to accept as familiar. Conversely, AI is biased to oblige, and thus seeks the statistically most likely and apparently pleasing result.
And it's not just a matter of active choice. More often than not, narratives have to adapt to what's possible within the constraints of a production, and that is when true invention or happy accidents occur. The creators of the original Star Trek came up with the concept of transporters to save money on producing all the landing scenes. When Rocky takes Adrian ice-skating at a closed rink, it was to save money on extras.
Researcher Donald MacKinnon (popularised by John Cleese) and his team at the IPAR research unit uncovered key insights about creativity. They found it to be rule-bound, following measurable patterns, and centred on the individual's mental process. Above a certain level, creativity is not tied to intelligence but is linked to traits like self-confidence, risk-taking, and tolerance for chaos. Creative individuals often exhibit qualities such as self-awareness and openness to emotions. Creativity itself is a process, ranging from brief moments of originality to years-long developments, as seen in Darwin's work. MacKinnon's 1964 paper, The Creative Personality, explored these themes in depth, addressing creative processes, products, and traits. And more importantly, as John Cleese once said, creativity is about allowing oneself to be wrong.
AI doesn't know how to be wrong... in all the right ways. And I'd posit that it never will, for the fundamental and insurmountable reason that it has no real constraints, only manufactured ones. AI doesn't know how to stumble gracefully. Sure, it might catch up and learn, if one day, perhaps sooner than expected, the power of quantum computing combines with the power of AI, as portrayed in (spoiler alert) the science-fiction miniseries DEVS. But my suspicion is that it will never truly predict the authentic, fractal complexity of reality. The counter-argument might be that it doesn't have to: enough complexity might be sufficient to pass as full complexity.
But for now, the authentic, the analogue, the stop-motion, the hand-drawn, the typewritten script or the manuscript, even the old-school CGI, are inventions born out of parsimony, the pressure of circumstance, family issues, paying the bills: all the limitations, opportunities and pains of human stories, so far removed from the boundless field of statistical possibilities at the whim of a producer writing a prompt.
The foreseeable future is one of a shorter depth of field in times of great technological "disruption" such as this one (an unhappy word that seems almost to celebrate perturbing and subverting the cultural environment). While Meta and Facebook are researching AI users; while companies openly suggest hiring fewer humans and more "AI employees" (in a quest to find out, once again, whether there is such a thing as bad publicity); while AI-driven YouTube channels thrive among the least wary (toddlers and the elderly, for the most part); while the "Dead Internet" becomes less of a hypothesis and more of a reality, the word "AI" has become a bad word in some contexts, synonymous with "fake" and "trickery". PR departments, beware.
In an era of expediency, we need to foster a market of authenticity and human-made cultural goods (known in corporate vernacular as "content"), which does not mean purely traditional or technologically allergic, but driven by human minds and human hands, for human eyes and human hearts.