Welcome to my personal newsletter. I’m publishing weekly essays on digital technology and culture, in the run-up to my January 2024 book FILTERWORLD: How Algorithms Flattened Culture. Subscribe or read the archive here.
AI is going to write your emails for you. It’s going to generate your vacation travel itineraries and tell you which restaurants in Lisbon are best. It’s going to generate a pitch deck for a work meeting, then summarize that meeting for you. It’s going to turn you into an artist, a designer, an architect, an interior decorator. It’s going to replace your therapist, your assistant, and probably your boss.
Or at least that’s the current marketing message of Google, Microsoft, Midjourney, Stability AI, and a slew of other AI companies soaking up a sudden avalanche of investment dollars in preparation for what they see as the next technological revolution. Some people interpret this prediction as sudden utopian freedom: Anyone can produce a facsimile of any creative labor instantly with the machine. Others see it as completely dystopian, with AI replacing human artists, writers, musicians, and office workers. In either case, the United States does not have the infrastructure to handle the sudden mass unemployment that would result if the hypothetical became real.
Two of my latest New Yorker columns have been about AI: the first on why AI image generators can’t draw hands, and the second on trying out Microsoft’s Bing AI search and thinking through its consequences. Both point out that AI tools aren’t as far along as the marketing suggests. They might produce useful content, but they stop short of actual originality, authenticity, or insight. Bing can generate a travel itinerary, but on closer inspection it’s a rehash of a blog post that a human wrote. Midjourney can generate a realistic human being, but the fingers have too many knuckles. The stage we’re in right now is an extension of the old uncanny valley problem: any image or object (like a robot) that looks almost human, but not quite, makes our brains freak out.
In the world of Full Generative AI, as the tech companies are proposing, everything falls into the uncanny valley. It will be increasingly hard to tell whether an image was made by a human, whether a memo was written by a coworker, or whether a clip of someone’s voice was recorded from reality. The replicas will improve so much that it will take time and effort to determine, in each case, to what degree a piece of content is “organic.” (Tech companies will probably prefer that we not perform this labor, since AI content will be infinitely cheap and fast to produce; that is to say, more profitable for them.) There’s an emerging term for this, too: “synthetic media,” which refers to any media that has been generated or modified by AI.
The existence of synthetic media colors our experience of all other kinds of media, changing our perceptions and expectations. This is true of all forms of technology: The existence of photographs changed viewers’ expectations of realism in painting, since photos achieved realism so much more literally. Today, human artists are already being insulted by accusations that their work is AI-generated or modified. At the same time, I came across this series of clips on TikTok that seems to feature an AI-generated version of Timothee Chalamet’s voice playing Regulus Black in a hypothetical Harry Potter movie, set to a montage of Chalamet clips and Harry Potter B-roll. You should watch it; it’s a little mind-blowing.
I can’t totally tell if it’s AI or an extraordinarily intricate edit, but it certainly seems like AI, and it’s extremely convincing. Commenters wail over the emotive voiceover. In this case, synthetic media makes for the pinnacle of fanfiction: You can fan-cast your own movie and then produce it in a way that can seem real, at least for a minute or two. (The AI company Runway is now moving into AI-generated video.) When anyone can produce anything, authenticity (an unmediated relationship between creator and result, like Chalamet and his voice) might be cheapened, or made into a niche aesthetic fetish, like film photography. This video falls wholly into the uncanny valley, but in a way that’s more fascinating than disgusting. Unless you’re an actor, I suppose.
A switch has been flipped: Generative AI is now a cheap public tool, and it will probably remain so for the foreseeable future. Like Gmail or the TikTok feed, it’s a technology you can use right now. It seems most likely that people will use it to create more and more, and increasingly indistinguishable, synthetic media. But it will also be up to users to decide if that non-human content is worth consuming.
Reminders
— I just published a New Yorker column on the House of Representatives’ recent interrogation of TikTok CEO Shou Zi Chew, which reflected badly on both social media and politicians’ understanding of technology.
— Read my newsletter related to the essay above, “Generative AI and the Death of the Artist.”
— Listen to the podcast I recorded on Junichiro Tanizaki’s 1933 essay about a different kind of technology’s impact on culture.
Are we entering the era of tchotchke content?
AI creating endless recyclable (well, who knows the carbon impact... might be closer to landfillable...) internet things. Most will have little to no use beyond fleeting entertainment and/or unverifiable, unsourced infotainment, and the hunt is on to see which human-bot tag team breaks through to influence/inspire the bot-on-bot content we neither crave nor need, but dangnabbit we shall drink and hold, and with great slimy fisty thirsts. Peak content giving way to peak peak content, and when we peek behind that content we will see the dusty apparitions of creativity stagnated to what was (flatulent content?).
All this said, I am excited to have the robots finally help me make more He-Man/Seinfeld fanfic to share with my two readers on the 18th Substack I created to auto-blog for a Twitter feed I hope will create ad revenue to build server capital for my crypto allowance. By the power of Grayskull (coin)!