You are not late to the AI party

“The trouble with computers is that all they give you is answers.” - Pablo Picasso

You have no doubt noticed quite a bit of noise about Artificial Intelligence lately, with the recent release of text-based ChatGPT, and image generators like Dall-E, Midjourney and Stable Diffusion in the months prior.

I can’t recall the last time a technology generated this much coverage, both broadly and specifically within the media and marketing sector. There have been so many articles and newsletters and blog posts (mostly leaning on the tired trope of “an AI wrote that paragraph!”) that it’s understandable to think you’re way behind on this potentially monumental shift.

But you’re not late. Realistically, if you’re in an agency, brand or media company that’s implementing some bright shiny new AI-powered tool that got sold in by a breathless salesperson in the last six months, you’re probably being taken for a ride.

We’re still at the beginning of the beginning of this technology (despite the field being almost seventy years old). Nobody knows where it’s going, and it’s a challenge simply to think about what the potential pathways are.

There is no one simple mental model for thinking about AI.

Information design pioneer Edward Tufte talks about seeing with your eyes, not your words. What he means is that we often simply look at the world and collapse what we’re seeing down to the simple descriptors of what we already know. A tree, a crowd, a sunset. In doing so we miss the actual substance of what we’re looking at.

With technology in particular this is an easy trap to fall into (I can’t count how many “Uber for X” startup pitches I’ve heard, a formula now evolving into “TikTok for X”). And with AI tools booming, our default mode for making sense of what we’re seeing is to reach for familiar comparisons.

One common line is that ChatGPT is like a conversation with an overconfident bloke who’s read all the books in the library but doesn’t actually understand them (or remember them correctly). This isn’t incorrect, but it does raise the question: what’s the use? While ChatGPT is a fun and powerful toy, it doesn’t really point to where we might be headed.

Another mental model is that AI tools are like Excel spreadsheets. This requires an understanding of just how much spreadsheet software changed office jobs, but once you get that it sort of makes sense. At some point most desk jobs require some knowledge and use of Excel (or Google Sheets if you work somewhere cool). You get good at it, and it makes your job a bit easier. Some people spend most of their days using it. Some people use spreadsheets in ways they were absolutely never designed for. In this sense we could look at how general AI tools like ChatGPT and Dall-E are being used in workplaces in order to find the valuable tools to build. And indeed, dozens of startups are being founded every day to do exactly this.

But these types of definitions are just mapping our current world onto a new technology. What about truly new mental models?

Jaron Lanier is one of the wise old sages of Silicon Valley. He possesses the unique talent of being immersed in technology for decades yet clearly seeing the negatives and avoiding the hype machine. His view on AI?

“My attitude is that there is no AI. What is called AI is a mystification, behind which there is the reality of a new kind of social collaboration facilitated by computers. A new way to mash up our writing and art.”

I think Lanier has a point here. Without trying to predict the future, we will likely see AI evolve to become a new kind of collaborative technology for both work and play. Sitting behind most AI tools is an enormous pile of training data. AI tools give us the ability to collaborate with literally millions of people who have created that training data, all in an instant.

AI may be the best reminder yet that everything is a remix. (It’s worth noting at this point there are all sorts of ethical and legal issues with that training data – a topic I am absolutely not about to wade into.)

One thing I’ve noticed is that the mental models people apply to AI tools are often more a mirror than an insight. Engineers see the world as a series of problems that can be solved through rational optimisation, and so the future of AI is all about optimisation. Creatives are constantly looking for new and unusual inspiration and ways to rapidly communicate their ideas, and so many are already using AI to generate randomness and quickly mock up ideas. Managers are looking to improve productivity, and so look at AI as a way to automate processes or decrease errors. But…

The AI isn’t coming to take your job.

Firstly, AI has no agency, so “it” can’t do anything. Your boss might be coming to take your job, but that’s not really a headline that gets the clicks.

The rapid-unemployment-because-AI narrative completely underestimates how long it takes for technology to shift from the general to the specific. Despite some big leaps in what’s possible now compared to a few years ago, the technical and cultural implementation of new tools in workplaces moves at a human speed. Your job will likely involve some interaction with AI tools in the next few years. But you already come into contact with AI every day – whether it’s facial recognition systems, voice assistants, or your phone camera.

It’s worth calling out here the one specific thing that ChatGPT is very good at – generating passable but often incorrect slabs of text about known topics. So if your job is writing for a content farm, I’m sorry, but AI is coming to take your job. Really fast. (This will also cause a huge problem for Google in the forever battle with content farms, but more on that later.)

Another common narrative is that AI will make anyone who works in commercial creativity redundant. Outsiders to the creative sector often see Dall-E and ChatGPT and figure that AI will wipe out all those hipsters in t-shirts riding skateboards around the “office”. But the words and pictures produced by creatives are not the work. The work is a long and often weird process, of which the words and pictures are a small final output. Creatives are safe for a while, and many I’ve talked with are already using AI tools for creative nudges and rapid prototyping.

There’s also a common trope that AI will create a whole new range of job descriptions that we’ve never imagined before. Which is great fodder for futurists but realistically doesn’t mean much. AI will get integrated into tools and technology, and we will use those tools and technologies. To go back to the Excel comparison, spreadsheets didn’t spawn “formula writer” as a job description. If spreadsheets helped you do your job better, you just learnt how to use them.

There are more blind spots than you might think.

There is a strong selection bias to the AI use cases making headlines right now. And this points to one of the biggest blind spots in getting from “fun demo” to “useful tools”. For every amazing thing that an AI tool has created, there are thousands of failed efforts – nonsense text and six-fingered hands. And humans had to wade through all those failures to find the one winner that makes the headline.

That’s not to say things won’t get better. But they will get better in quite specific directions. You may recall the buzz a few years back when DeepMind’s AlphaGo beat the world’s best Go player. One reason this was possible is that games like Go can be scored. As a result, AlphaGo was programmed to play games against itself, collecting data on which patterns resulted in a win. But when it comes to image generation like Dall-E, how do you score the output? ChatGPT has a thumbs-up or thumbs-down feedback mechanism, but that still requires a human to do the rating. So while certain areas of AI tooling will see rapid development (tasks where a computer can measure and score the quality of output), other areas will be remarkably slow (tasks where humans need to give feedback or scoring is highly subjective).
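To make that distinction concrete, here’s a deliberately toy sketch in Python – nothing like AlphaGo’s actual training setup or ChatGPT’s feedback system, just the shape of the two feedback loops:

```python
# A toy contrast, not a real training method: tasks a computer can score
# itself versus tasks where a human has to rate every attempt.
import random


def play_random_game():
    """Stand-in for a self-scoring task (like a game with a final score):
    the program computes its own result with no human involved."""
    return sum(random.random() for _ in range(10))


def self_scored_search(rounds=1_000_000):
    """Cheap to improve: generate and score as many attempts as you like."""
    return max(play_random_game() for _ in range(rounds))


def human_rated_search(candidates):
    """Expensive to improve: every attempt needs a person's judgement."""
    rated = []
    for text in candidates:
        score = int(input(f"Rate this output 1-5: {text!r} "))
        rated.append((score, text))
    return max(rated)[1]


if __name__ == "__main__":
    print("Best self-scored result:", self_scored_search())
    # Only a handful of candidates are practical when a person rates each one.
    print("Best human-rated result:", human_rated_search(["draft A", "draft B", "draft C"]))
```

The first loop can run millions of times overnight; the second is limited by how many humans you can convince to sit there clicking thumbs up or thumbs down.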

Improvements won’t come at the pace we expect. Our expectations of technological progress are based on the last few decades of hypergrowth – the shift from a blinking cursor to hyper-realistic video games, from a Nokia 8210 to an iPhone. The drivers of that growth (primarily Moore’s Law but also global-scale manufacturing) aren’t super relevant to AI tools. An order-of-magnitude improvement would be nice in the next five years, but it’s highly unlikely.

Cost is another blind spot. Specialising these tools for a specific sector (or even company) is a tempting leap to make. But that is a huge investment that very, very few companies could afford, particularly without a clear path to how it would have a positive financial impact. According to OpenAI’s Sam Altman, ChatGPT costs a few cents per query. And that’s after training the GPT-3 large language model, which has been estimated to have cost between $4.6 million and $12 million (OpenAI hasn’t confirmed either way).

What’s the next thing?

After several years of interesting AI tools being publicly released, ChatGPT seems to be the one that has captured the public imagination. One interesting aspect of ChatGPT is that it’s actually not that new. It’s been around in basically its current form for several years (the GPT-3 language model was released in June 2020), but wasn’t publicly accessible via such a simple web-based interface.

Given ChatGPT’s rapid rise to tech stardom, we’re already seeing the other big tech companies prepare to roll out their own versions. Google likely has way cooler AI tools ready to go, but has been held back by (probably very reasonable) ethical concerns. By the time you’re reading this it’s likely they (and many other tech firms) have already rolled out public AI tools, putting those ethical issues aside. We’re at the beginning of a flood of general purpose tools and demos that will bring years of breathless headlines. And then we’ll begin to see the innovation.

It’s not until these general purpose AI tools become widely programmable that this real innovation will happen. And it’s not obvious what that innovation will be. When Google Maps launched it wasn’t obvious that we would end up with Uber. When the iPhone was released it wasn’t obvious that we would end up with Instagram. As AI tools move out of large (and well-funded) big-tech labs to become more accessible for new and interesting applications, the Darwinian process of use cases will begin. Trying to crystal-ball what those use cases will be is a fool’s errand. To use a well-worn line – when the Model T Ford was launched, nobody predicted that we would end up with Walmart.

One thing seems certain: AI is the bright shiny object in the world of media and brands for the foreseeable future. That’s not to say it won’t add value soon. But finding that value amongst the noise is going to be a challenge.

What’s next for brands?

At the risk of crystal-balling, if we think of AI tools as a collaboration with a massive number of people through a huge dataset, it’s possible to start thinking about some applications for brands and agencies. It’s worth noting some of these ideas are already being worked on, and I’m not going to point to specific tools as the whole space is moving a bit too quickly for that.

Customer data is one area that could get interesting. The ability to take huge amounts of data from customer feedback (both text and calls transcribed by AI tools) and find unseen patterns and emergent trends is well suited to the way AI works right now. It’s no silver bullet, and the danger would be thinking a tool like this could deliver insights that result in company-defining change. But it’s easy to see this kind of one-percenter being bolted into a platform like Salesforce pretty soon.
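As a rough illustration of how simple the plumbing for this could be, here’s a minimal sketch in Python. It assumes the pre-1.0 openai Python library (the completions API as it existed when this was written) and a hypothetical feedback.csv file with a “comment” column – a sketch of the shape of the idea, not a production insights tool:

```python
# A rough sketch only: batch customer feedback into a large language model
# and ask it to surface recurring themes. Assumes the pre-1.0 openai Python
# package and a hypothetical feedback.csv file with a "comment" column.
import csv

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def load_feedback(path="feedback.csv", limit=50):
    """Read free-text comments from a CSV of customer feedback."""
    with open(path, newline="") as f:
        return [row["comment"] for row in csv.DictReader(f)][:limit]


def summarise_themes(comments):
    """Ask the model to pull out the most common themes in the comments."""
    prompt = (
        "Below are customer feedback comments. List the five most common "
        "themes, each with a short example quote.\n\n"
        + "\n".join(f"- {c}" for c in comments)
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # the generally available model at the time
        prompt=prompt,
        max_tokens=400,
        temperature=0.2,  # keep the summary conservative rather than creative
    )
    return response["choices"][0]["text"].strip()


if __name__ == "__main__":
    print(summarise_themes(load_feedback()))
```

The API call is the easy bit; the hard part is deciding which of those themes actually matter to the business.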

Similarly, the ability for AI text tools to not feel like rigid scripted response bots could be a big leap for customer support. We get into the murky area of support chat bots pretending to be human at this point, but I actually doubt people care if they’re chatting with a computer as long as the experience isn’t frustrating or obviously scripted. Again, this is a problem well suited to current AI methods.

It’s tempting to think media planning is ripe for AI-fuelled disruption. Feed in thousands of media plans and results, and then just ask an AI tool to build you a response to a brief. But there’s both an input and an output problem to overcome to achieve this. On the input side, media plans are inconsistently structured. And while the ingestion of unstructured data is certainly easier these days, the ability to extract consistent and accurate inputs from years’ worth of media plans is just not there yet. Additionally, the business outcomes and objectives for any plan are usually in another document somewhere or (more likely) lost to time.

On the output side, media planning highlights perhaps the largest deficiency in AI tools: zero awareness of context or culture. The bulk of work that any agency in marketing and communications does is deeply reliant on understanding not just objectives and audience, but the culture in which that brand and audience exist. While AI tools could make some of the repetitive bits of this process faster, that human awareness will continue to be important.

Similarly for creative agencies, the cultural shortcomings of AI tools in both text and images make them unsuitable for creating fully-formed, ready-to-roll ideas. But both upstream and downstream there is already a plethora of tools emerging to make creative work better. Upstream, the use of AI tools for idea generation and exploration is well underway. Even basic use of ChatGPT with early ideas can help uncover new and interesting directions. Again it’s not a silver bullet, but it’s another tool in the toolbox for creatives. Downstream, in the production of creative, AI tools are also already prevalent – to the point that many people editing video or audio, shooting and retouching photos or illustrations don’t even think about them as AI anymore. They’re just tools. We’ll likely see an acceleration in some of these use cases, but similar to AI in creative generation, they just add to the toolbox. (I’ve consciously avoided talking about creative-optimisation tools that churn out thousands of different ads to run on Instagram etc. While they do exist and probably work fine for driving sales, I’m not sure they’re relevant for marketers and agencies working on brands with long-term goals.)

What’s next for media companies?

The internet drove the cost of media distribution down to essentially zero. Two decades in and it’s not controversial to say that this has caused quite a few challenges. And now, AI tools will bring the cost of media creation down to essentially zero. It’s hard to imagine how this won’t go very bad.

But first, the good bits. Similar to creative production, AI tools are already embedded into a lot of the technology used in media companies, whether writing tools or homepage optimisation or unearthing breaking news in the sea of social media. This should be a good thing, and newsrooms will be like so many other workplaces where AI tools keep making things 1% better, while highlighting the tasks that humans are uniquely suited for.

The downside will be a flood of content that doesn’t require any real insight or original research. The original sin of the digital media era is the reliance on advertising, which means a reliance on page views (and in turn, a reliance on traffic from social networks). This reality has resulted in both content farms (sites churning out low quality content aimed purely at answering search queries) and clickbait. Content creation for both has been outsourced to the lowest bidder, and can now be outsourced to AI tools for a fraction of the cost and time.

One upside of this incoming flood may be that trusted media brands become more valuable. However, we’ve heard that argument before with social platforms, and the reality was arguably the complete opposite. While it’s obvious that something is wrong when an AI-generated photograph has a six-fingered hand, it’s harder to spot that sixth finger in text. When it comes to most media, good enough will satisfy most people. And it’s never been easier or cheaper to create content that’s good enough.

So yes, the next few years are going to be weird and bumpy for media companies. But to make matters worse, their survival is very much in the hands of big tech. Just this year we’ve already seen every big tech company scramble to get AI tools to market. But behind the scenes there is just as much work going on to stop the incoming flood of AI content, particularly at Google (and YouTube), Microsoft, Baidu and Amazon. These companies are responsible for billions of searches every day, and now need to work out how to deal with AI-generated content when serving up answers. The survival of media companies will likely be an afterthought when big tech’s stock price is under threat.

The party is kicking off.

It’s clear that AI tools are going to have a huge impact on the next few decades. There is a good chance the broader AI sector will be the defining technology of the next twenty years, taking over from the smartphone (and the internet before that, and the PC before that).

What form that impact takes is unknowable right now. But it’s worth paying attention to the ideas and experiments that are happening. One huge benefit of where we are with technological innovation today is that quite a lot of it happens in public.

You can start playing around with tools like Dall-E and ChatGPT right now. And through playing with them you’ll start to understand potential use cases. And those use cases – from music generation to realistic voice synthesis to automatic video editing to data visualisation – are probably already in development somewhere. And some of those projects will become central to your job and your business over the next few years.

You’re not late to the AI party. But it’s definitely kicking off.

- February 2023