Meaning-making, AI and Nick Cave.
Hello campers,
It’s been a while! But welcome back to the latest issue of Curious Behaviour, where we discuss meaning-making, what makes humans unique, and Nick Cave as the poster-grandparent for a meaning revival.
I’ve spent the last ten weeks on a great project developing a CX strategy for a challenger brand that wants to double in size over the next five years, whilst improving margins. Operating at the intersection of brand, CX and commercial teams like this is my sweet spot. And it really doesn’t hurt that the client is ambitious, the team I’m working with is great, and it’s a fascinating topic.
If this sounds like something I can help with, please drop me a line. I have some availability from January.
Anyhoo…
Meaning-making and what makes us human
I’ve really enjoyed this series of articles by Vaughn Tan, author of The Uncertainty Mindset, all about meaning-making and what it is that makes us human. It’s rare to read something that makes you see things very differently, but I’ve found myself returning again and again recently to his concept of meaning-making.
What makes us human is not doing routine, uncreative work that only reproduces what others have done before. What makes us human is our ability to do things which are not-yet-understood, which require us to create meaning where there wasn’t meaning before. The meaning of “meaning” here is specific: deciding or recognising that a thing or action or idea has (or lacks) value, that it is worth (or not worth) pursuing.
We meaning-make when we make any decision “about the subjective value of a thing”.
This involves not only recognising value but also making decisions about what is worth pursuing. For example: should I pursue that MBA? Is Nas a superior rapper to Biggie? Should we prioritise making money even at the cost of planetary destruction?
Critically, whilst meaning-making is a uniquely human ability, it’s really the result of our non-rationality as much as our ability to reason. Our tendency to forget, to make mistakes, to “be partisan or arbitrary, to not follow instructions precisely, to be slipshod — but also to do new things, to create stuff, to be unexpected, to not take things for granted”.
Not only are these abilities entirely absent from AI systems, but LLMs rely on the unique meaning-making abilities of humans to work in the first place.
In fact, Tan suggests we have things backwards when we say that LLMs seem human and intelligent: the excitement surrounding AI’s capabilities may stem from a misunderstanding of what constitutes humanness, and also a misunderstanding of what is actually going on under the hood of LLMs like ChatGPT and Claude.
One of the many underlying assumptions in the AI models that have become well-known to lay consumers is that things found together are likely to be connected, and the connection is probably stronger if you find those things together frequently. In other words, these AI models learn by finding patterns of association in masses of data, and they infer patterns from how frequently associations occur.
…
The AI models that went public in 2022 are the result of machines processing and learning from enormous volumes of content, much more content than any human can process and learn from in an entire lifetime. These models have learned patterns of association from content where the associations were made by humans, and where the definitions of what counts as association and patterning have been given to them by their developers.
…
The frankly amazing output of AI models today is the result of them borrowing from millions (or even billions?) of human-years of associational work, content that resulted from humans giving meaning to the world around them and articulating that meaning.
As Vaughn summarises, “AI systems can’t make meaning yet — but they depend on meaning-making work, always done by humans, to come into being, be useable, be used, and be useful”.
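To make the “patterns of association” idea above a little more concrete, here’s a toy sketch of my own (not Tan’s, and nothing like the scale or machinery of a real LLM, which uses learned embeddings and transformers): it simply counts how often words appear in the same sentence, so that frequency of co-occurrence stands in for strength of association.

# A toy illustration, not how real LLMs are trained: count how often
# words appear in the same sentence, so that frequent co-occurrence
# stands in for "these things are probably connected".
from collections import Counter
from itertools import combinations

sentences = [
    "grief can be a doorway to meaning",
    "music can carry grief and meaning",
    "spreadsheets rarely carry meaning",
]

co_occurrence = Counter()
for sentence in sentences:
    words = set(sentence.split())            # unique words in this sentence
    for pair in combinations(sorted(words), 2):
        co_occurrence[pair] += 1             # tally every word pair seen together

# The most frequently co-occurring pairs are the "strongest" associations.
for pair, count in co_occurrence.most_common(3):
    print(pair, count)

The point of the toy is what’s missing: nowhere in that loop does the machine decide whether any of those associations matter, or whether the sentences were worth writing in the first place. Those judgements, the meaning-making, came from the humans who produced the text.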
He goes further, defining four types of meaning-making, with examples that show how they apply to making and using AI systems effectively.
Type 1: Deciding that something is subjectively good or bad.
“Nas’ 1994 debut album Illmatic is still an astonishing piece of work” or “Nigel Farage is a moron”.
“Instructions on how to make explosives are really bad, so we should filter them out of our AI model’s outputs”.
Type 2: Deciding that something is subjectively worth doing (or not).
“An MBA is worth the investment” or “I want to speak to my colleague, but it’s not worth going into the office for”.
“The AI system’s output is good enough to use as-is. We don’t need to do any more manual cleanup of the text”.
Type 3: Deciding what the subjective value-orderings of a set of things should be.
“Elf is the best Christmas movie, followed by Home Alone and then Die Hard”.
“When defining AI policy, our highest priority is ensuring that we enable technology development to be as rapid as possible. AI safety comes next, followed by social equitability from deployment of AI systems”.
Type 4: Deciding to reject existing decisions about subjective quality/worth/value-ordering.
“Lots of marketing and business folks think customer-centricity is both important and helpful, but I think they are wrong and it’s a vague and unhelpful construct”.
“The prompt engineering contest judges thought this prompt was excellent, but I think the output it produced was not as good as what this other prompt produced”.
[The AI examples above are his, but the other examples are my own. Here is Vaughn’s original framework. 🙂]
Where it gets really interesting for me is his comment that most of us are “out of practice” with meaning-making.
This may be for a few reasons:
Our society has prioritised efficiency over effectiveness. If efficiency is doing things right, effectiveness is doing the ‘right thing’. In the workplace, I’ve lost count of the times I’ve seen teams furiously sprinting towards a deadline to complete some output - prototype, new product or feature, report - without really being able to articulate why that output or outcome has value or merit versus other goals or possible solutions. The cult of agile in the modern corporation is an obvious symptom of this broader societal shift. “Let’s do a sprint!” “What are we trying to achieve?” “Agile!”
Cultural devaluation of ambiguity and loss of meaning-making institutions. Perhaps we’ve never been good with ambiguity, but the decline of organised religion and cultural institutions like trade unions and community organisations has eroded the time and space we used to dedicate to meaning-making. Spending every Sunday morning pondering god and the meaning of life was not the highlight of my childhood, but perhaps it played a useful role in helping us exercise our meaning-making muscles. Religion certainly provides a framework for thinking about what has value, both relative and absolute, whether or not you agree with the nature of those judgements.
Our increasing reliance on technology risks atrophying our meaning-making abilities even further. As we outsource more knowledge work and cognitive tasks to technology, we bypass deeper processes of reflection and subjective interpretation. LLMs are no doubt a breakthrough, but we’ve been doing this for a while: compressed book summaries, skimming the news headlines, reading Wikipedia summaries, relying on the first piece of SEO-driven content Google serves up to answer our questions. As Vaughn says, “We have outsourced so much of our thinking and decision-making to algorithms that we rarely have to grapple with the process of assigning meaning anymore.”
And yet…meaning-making will become more and more important. Certainly for those of us engaged in knowledge work.
As more tasks are taken over by AI systems, workers will have to learn how to be better (more mindful, more sophisticated) at making explicit subjective judgments of value and defending them with evidence and reasoning.
We are becoming worse at meaning-making, just as it becomes more important than ever.
Maybe the liberal arts won’t be so useless after all?
Nick Cave as meaning-making evangelist
If there is anyone at the vanguard of a resurgence in meaning-making, and its importance, then I would suggest it is Nick Cave.
I am very late to the Nick Cave love-in. I’ve known people over the years who were obsessed. The sort of people who have seen him live 20 times and go to multiple gigs on the same tour. But it was never really my thing.
Then last year I randomly listened to a Louis Theroux podcast interview with Nick, which was tremendous and made me want to read his book, “Faith, Hope and Carnage”, which is one long conversation with his friend Sean O’Hagan.
My copy of the book has passages underlined on almost every page, but it’s the passages about grief that really rattled my bones. His son Arthur died aged only 15 and he talks so eloquently and penetratingly about the growth that can follow such an earth-shattering tragedy.
You are tested to the extremes of your resilience, but it's also almost impossible to describe the terrible intensity of that experience. Words just fall away.
But I also think it is important to say that these feelings I am describing, this point of absolute annihilation, it is not exceptional. In fact it is ordinary, in that it happens to all of us at some time or another. We are all, at some point in our lives, obliterated by loss. If you haven't been by now, you will be in time - that's for sure. And, of course, if you have been fortunate enough to have been truly loved, in this world, you will also cause extraordinary pain to others when you leave it. That's the covenant of life and death and the terrible beauty of grief.
Then I recently (finally!) got to see him live, and was absolutely blown away. He is without a doubt the most natural performer and front-man I’ve ever seen. A two-and-a-half hour set, without a break or a sip of water, from a 67 year-old! Utterly astonishing. Highest possible recommendation.
I’ll leave you with two more quotes, this time on religion, a theme that runs throughout the book and has seen Cave held up by the media as evidence of a revival in Christianity and organised religion, as more and more of us grasp for something beyond our screens, our things and our achievements:
I find that being open to the intimations and yearnings that we have for something larger than what may empirically exist is extraordinarily creative and worth at the very least investigating
There's two registers - sort of what is actually scientifically true and that which is demonstrably false, and in fact between the two, there's a world.
That’s all for this issue. I hope you have a lovely Christmas, and get time to unwind and spend time with loved ones. I’m very much looking forward to doing so.
See you in January.
I’m Michaeljon, an independent strategy and innovation leader.
I work with ambitious clients to create new products, services and business models that change behaviour and unlock new sources of growth.
I specialise in solutions that create both commercial and social impact.
Recent projects include:
🤸♂️ Working alongside a CEO to develop a new preventative medicine and longevity business, including the proposition, investor pitch deck, business model and service offering.
🌿 Leading a growth strategy and innovation project for a high street bank to drive sustainable energy adoption among homeowners, resulting in new propositions, business models and financial products that reduce emissions and increase revenue.
🚑 Partnering with a US health-tech founder to understand, and co-design with, opioid users, and discover how his cutting-edge technology could save lives. Then shaping the launch strategy, target market identification, product design and positioning.
📱 Supporting a newly appointed CEO of a philanthropic foundation working on digital rights and the open internet, to design a new organisational strategy and global positioning.
Get in touch if this sounds like the type of expertise you need.