Fake news is fake news, design vs. AI, the power of constraints, and light vs dark arts
Hello campers,
I’m currently knee-deep in the field of preventative medicine and longevity, working on an exciting new project. This week has involved interviewing a 70-year-old woman who can deadlift her own body weight, as well as experimental biohackers and competitive athletes. The joy of my profession is not only getting to help clients solve fuzzy problems around proposition, behaviour change and business model, but getting to meet fascinating people with wonderful stories and diverse passions.
In this week’s newsletter we have erudite explorations of how design and AI can co-exist, de-inventing computers, mission statements gone awry, how to prevent overthinking and more. But first…
Calling out AI bullshit
If you only read one thing I link to this week, please read this absolute take-down of AI grifters and snake oil merchants by someone who knows what they’re talking about.
Most organizations cannot ship the most basic applications imaginable with any consistency, and you're out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your I.T department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn't a recipe for disaster, it's a cookbook for someone looking to prepare a twelve course fucking catastrophe.
So many highlighted passages, but his description of ‘Synergy Greg’ - the archetypal corporate grifter native to LinkedIn who moves seamlessly from touting blockchain to quantum computing and now LLMs - is pure *chef’s kiss*. I will absolutely be calling people out as having ‘Synergy Greg’ vibes from now on.
Starbucks loses its mind.
Fascinating summary of problems plaguing Starbucks by Ed Cotton. What used to be known as the ‘third place’ now makes 30% of sales through its app as pre-orders picked up in store. Which has led to big queues and frustrated customers, as their processes and equipment are designed for the pre-COVID era (aka pre-pre-order?).
In addition, drive-throughs now account for 50% of sales.
All of which means there isn’t much ‘third placing’ going on any more, so they’re in a bit of a pickle. What to do?
Perhaps they should introduce dedicated ‘third spaces’ for freelancers and hybrid workers, with excellent wifi, co-working spaces, even meeting rooms that you can book in advance. Maybe a small business membership that gives them discounts and preferential access to a table to work from. Perhaps they just double down on convenience, speed and satisficing, à la McDonald’s. Perhaps they do both but clearly segment their approach, so some neighbourhoods have the in-and-out convenience of a walk-in vending machine, and others have the space and service to encourage dwell time?
Or…
Or…
Perhaps they should re-write their mission statement to “nurture the limitless possibilities of human connection” and create a film production company to “shine a light on the stories and people who inspire us, from young, emerging artists to innovators, changemakers and others who are making a positive impact on the world”.
Design against AI.
A slightly more optimistic and playful take on AI comes from design legend John Maeda, who has been creating the Design in Tech report since 2015. It’s always illuminating, and as the former Professor of Design and Computation at the MIT Media Lab, he is incredibly well placed to offer a designer’s take on AI and separate the insight from the hyperbole.
There is a great 30-minute summary of his talk here, which jumps from looking at the history of AI and LLMs and how we got to where we are, to some great examples of new technologies, as well as a few smart predictions. I love how he moves from theory to playful practice, from historical context to bold prediction - genuinely curious, whipsmart and erudite.
Biggest takeaway for me was how designers have an important role to play in establishing the ‘value proposition for criticality’ - i.e. how to foster more critical thinking about AI technology, the role it plays in our work and lives and, as we’ve discussed, what we really value.
This last slide in particular reminds me of the last issue of Curious Behaviour, where we discussed being hard to model, embodied intelligence and how AI forces us to figure out what makes us truly human.
Do check out the whole presentation.
The perils of the overthinker.
As a self-confessed overthinker, I thought there was some excellent advice from product leader John Cutler here. I particularly enjoyed the pragmatism and humility of these three:
#7 The people you’re trying to influence spend 98% of their day overwhelmed by business as usual.
#13 Hack existing power structures—it’s much easier than trying to change them.
#16 Watch out for imposing your worldview on people. Have you asked about what people care about?
Going into organisations as a consultant usually involves helping clients figure out how to respond to external or internal change, or how to implement a change, and this is by nature an uncomfortable process for all. The real heavy lifting is not done with fancy charts or trademarked processes, but by asking questions and listening intently in order to quickly develop a working relationship built on mutual trust.
Designing our own constraints
I’m fascinated by the idea of designing systems for ourselves that make our desired behaviours easier. Apps like Freedom and Opal help us fight our own propensity to get distracted on our phones or computers, but more and more designers are looking at baking these constraints into hardware.
Last issue I wrote about beautiful handmade computers with stripped back functionality that allow you to focus on writing without distraction.
This new tablet - Daylight - is designed in a similar vein.
I’m fascinated by the copy too, explicitly talking about ‘de-inventing’ the computer and designing for ‘deep focus and wellbeing’.
It’s a tool, not a master.
It helps you focus, but it doesn’t dictate your focus.
Less distraction, less addiction, less eye strain, less blue light.
Technology that feels a bit more human and a bit less demanding.
It’s an e-ink tablet, but with less lag than a Kindle, and one you can also use outside. I do love the single-purpose nature of the Kindle but often wish reading and annotating PDFs was just as easy. I could imagine using this to catch up on articles and papers, as well as note-taking for work, whilst minimising the chances I ‘quickly check’ Substack, BBC Sport, Reddit, LinkedIn etc. and lose 45 minutes of the day.
I’m watching this space with a keen interest.
‘Governance Light Arts’
“Show me the incentive, and I will show you the outcome”, so said Charlie Munger.
This is a great profile of serial acquirer Watsco, a US-based heating, ventilation and air conditioning (HVAC) company with an incredible incentive strategy that is the definition of long-term thinking: you get shares in the company, but you only get to cash them in when you retire. Leave the company before that date and you forfeit all the shares. It sounds extreme, but nine out of ten employees stick around until retirement age, and many work beyond it.
It’s a great example of incentives not only being perfectly aligned to the business strategy, but also acting as a forcing function on organisational culture. It also shows how their strategy reflects the reality of the HVAC market in the US, dominated as it is by local players who have built trusted relationships with customers over decades. How does a nationwide serial acquirer of these companies incentivise the acquirees (often family run local businesses) to stick around after acquisition, whilst also delivering clear advantages to being part of a national organisation?
As an aside, I love the author’s term the ‘light arts’ of governance, a direct counterpoint to its more famous sibling, the sinister dark arts.
Fake news is fake news
"Fake news comprises only 0.15% of Americans’ daily media diet."
Chris Perry pointed me towards this great resource by the Computational Social Science Lab at the Annenberg School of Communication exploring how US citizens consume news content, including the gobsmacking statistic above.
There is further supporting evidence in a fascinating new paper in Nature suggesting that the idea that social media is awash with fake news and responsible for broader polarisation is, in fact, a myth.
Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization. In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information.
Curious to hear people’s thoughts on why we fall for this narrative so readily. Is it because the tech companies are perceived as nefarious monsters hacking our attention? Is it that we define fake news more broadly and loosely than academics do?
That’s it for this week. Do let me know your thoughts in the comments.
x