I’ve been thinking about trust a lot lately. What I’ve been struggling with is this: why is AI so useful for business, yet has so much potential to be harmful to society? And the more I think about it, the more apparent it becomes that what really bugs me about the large-scale application of AI in a consumer setting is how out of kilter it is with our established, everyday understanding of trust.
This essay will be an attempt to dive into that relationship.
So much of human relationships is about persuasion. Persuasion is the main mechanism by which sales and marketing work. Persuasion is what determines who we vote for. Persuasion is how you raise children (well, sort of). It’s how you romance a partner (well, sort of).
Persuasion is how you align objectives, and the ways to achieve those objectives, within an organisation, within a family unit, or within the larger societal system between large groups of humans.
Now, this is important, because without the ability to influence other humans into aligning objectives, every human needs to do their own homework on everything. By co-influencing each other, we allow for distribution of objectives and specialisation of objectives, and we can collectively do more. To use a very simple example: in a business with one employee, that employee has to do everything, which limits how much they can do, because they only have so much time. This is why businesses hire more employees and specialise them in different tasks - but then they need to keep them aligned to common objectives (the whole challenge of organisational management). Scale this relationship up to the societal level and persuasion becomes the mechanism by which one human brain hopes to get the rest of the system aligned to its internal state. Many brains trying to do this to each other stabilises the system in some sort of mutually beneficial state.
So if you look at persuasion as a core mechanism of society - what is persuasion really? It’s a communication pattern of the form “I want you to do/think/act/feel X (because of these reasons Y that are beneficial to you)” - where X is typically relatively simple, but Y can be arbitrarily complex. Because each human can’t do their own homework on Y every time (that would be incredibly inefficient), we have evolved trust as a mechanism (in systems terms, a heuristic) by which we shortcut this process, make a call on a persuasion attempt and move on. We typically don’t even realise we’re doing it - and even though critical thinking can make you more resistant to some forms of persuasion, we are all part of persuasive loops.
Now, trust can be conceptualised in many forms (I like the trust equation, though it’s worth remembering that this is all conceptual and there’s no “right answer”), but it boils down to this: the more we trust a person or organisation, the more willing we are to be persuaded with less Y.
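For reference, the formulation I have in mind is the one popularised by The Trusted Advisor (a common version, not the only one), and it’s the one the lawyer example below leans on:

$$\text{Trust} = \frac{\text{Credibility} + \text{Reliability} + \text{Intimacy}}{\text{Self-orientation}}$$

The higher the numerator and the lower the perceived self-orientation (what I’ll loosely call self-interest below), the less Y someone needs before they’ll act on your X.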
With humans, this is part of our nature, and it is something we learn at an instinctive level through socialisation. It becomes part of our internal way of relating to other humans - largely subconsciously, but also through our conscious thinking.
This can all be a little onerous when kept theoretical, so let’s try an example. Take legal advice. We, as humans, wouldn’t walk up to the first person in the park and ask them for legal advice. If we did, we’d first need to spend ages vetting the person to convince ourselves of their reasons. There need to be good reasons (Y) to do what they are telling us to do (X). But in order to evaluate that, we’d need to learn a lot about the reasons ourselves (we’d need to know the space of possible reasons in order to recognise the good ones). This makes no sense if we’re trying to be efficient with our time (which is what division of labour and specialisation are all about). This is why you go to a lawyer, where you can skip a lot of that vetting by trusting the person.

You trust that a university - an institution you trust to be a good vetter of knowledge - has been an effective filter for people who know a lot about law. This helps with the “credibility” part of the trust equation. You don’t trust just any lawyer, though - you check that they’ve been certified by another institution, one you trust for different reasons to be a good vetter of character: for instance, by passing the bar and not being disbarred. This helps with the “reliability” part of the trust equation.

And you will look at the firm the lawyer works for, because firms have incentives aligned with their customers (the customer is always right), and a firm that has survived for a long time has had to satisfy customers for a long time. So you safeguard yourself against someone being purely self-interested, as it’s in a firm’s interest for its customers to have good outcomes. This last one is probably the most contentious, as there will exist firms that are self-interested - which is why longevity matters. Over a long period of time, it’s hard to maintain a firm that acts against its customers’ interests. This is the whole reason why firms use “since 1927” in their slogans. This helps with the “self-interest” part of the trust equation.

Finally, if you’ve used a law firm yourself in the past and had good experiences, you build intimacy - a personal relationship that deepens your trust. The final part of the equation.

And even with all those components, people still find it hard (in their gut) to trust a lawyer from a top university, with a long track record, at a reputable firm, with a very important legal decision. This is because the importance of what you’re trying to do makes it that much harder to trust another brain to do it for you.
Therefore, I conclude that in human-to-human systems, trust is the paramount heuristic for being able to offload work from your brain to other brains.
For the roughly 150 000 years that we’ve been Homo sapiens, our primary framework of trust has been built within human-to-human relationships.
For the last 80 years, software engineers have been developing computers to execute formal logic. Formal logic is extremely mathematical, deterministic and, most importantly, can be vetted. A formal logic system always behaves in the same way, which makes it predictable and shortcuts the need to test reliability. For a large part of those 80 years, when working on your own computer, this was relatively easy for human brains to follow - the computer was “walking you through the homework”.
Where it got a little more complex is when more sophisticated algorithms were developed that are harder to follow, and you need to trust that the software engineer has solved the problem well. This is where firm reputation comes back in, and why a brand matters. If a piece of software has been helping all these brands and people solve the same problem you have, it’s much easier for you to trust it. But this is because software is predictable, reproducible and deterministic. It is knowable.
For 80 years, we’ve been trained to think that a piece of software’s ability to deliver an objective is determined by the quality of the engineering behind it - and vetted by customers being happy that the software (a mechanical, predictable machine) is delivering that objective repeatedly.
In an environment where the interface is you → your computer → software → output → you, this is the only thing that matters. That’s why we trust spreadsheet software to do what it says on the tin - many people have been using it and having success. We don’t expect the SUM function in a spreadsheet to suddenly start delivering wrong sums, and if it did, we’d consider it a bug - one easily noticeable by the many people using the software. This was easier back when software couldn’t be updated on the fly behind the scenes (and introduce bugs), as it was more stable - but I’d ask you, reader, to ignore that relatively new development, as it is less important to the overall argument.
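To make the determinism point concrete, here is a minimal sketch (purely illustrative - the function names and the stand-in “model” are hypothetical, not any particular product) of why a tool like SUM can be vetted by repetition, while a sampled system cannot be vetted the same way:

```python
import random

def spreadsheet_sum(values):
    """A deterministic tool: the same inputs always produce the same output."""
    return sum(values)

def generative_answer(prompt):
    """A stand-in for a sampled, non-deterministic system (the prompt is ignored in this toy)."""
    return random.choice(["answer A", "answer B", "answer C"])

values = [1, 2, 3, 4]

# Ten identical runs of the deterministic tool tell you everything about run eleven.
assert all(spreadsheet_sum(values) == 10 for _ in range(10))

# Ten runs of the sampled system tell you much less about run eleven.
print({generative_answer("same question") for _ in range(10)})  # may contain several distinct answers
```

The point isn’t the code itself; it’s that the first function can be fully characterised by its inputs, while the second can only ever be characterised statistically.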
This contract between us as human brains and software as a way to execute our will is fairly simple - and is, in effect, no different to using a toaster, an oven or a car. It’s a machine where you pull levers and get outputs, always the same, very verifiable. We developed a trust equation for software that is not that different to the trust equation for any tool over the past 20 000 years of building tools. We as societies have a fairly stable trust equation for this.
The Internet was the first disruption of that contract, in that suddenly you are not just interacting with a tool - you are interacting with other humans, except they’re hidden, somehow inside your toaster.
Imagine if you had to use your toaster, but it wasn’t a machine - it was just a portal to a factory where a whole bunch of humans are toasting bread. It takes a bit of getting used to, but the trust equation is still fairly simple: these are clearly humans on the other side, so I need to use the same trust equation I would use if I were ordering toasted bread for my breakfast from a toasted-bread brand. I would no longer assume the reliability of a machine, because unlike a machine, which is fairly predictable and knowable, I can account for the fact that humans are on the other side.
Once you know that there are humans on the other side of the toaster, the relationship becomes much easier, as it’s closer to what we’ve learned over 20 000 years of evolving and refining our socialisation.
Where this gets a little trickier is when every toaster you use looks the same on the outside, and you need to take a screwdriver and open it to see which factory the toast is actually coming from. This is why digital literacy is such a big thing, and why many people still struggle to vet which Internet sources to trust and which not to.
But what is relatively clear to everyone is that when you’re interacting with the Internet, you’re interacting with other humans. Not everyone is good at taking apart the shell to see which humans they’re interacting with - but it’s at least easy to understand that the contract is between human brains and other human brains, and thus that an appropriate trust equation should be used there.
This is the contract where we use the media trust equation - one we haven’t had as long to stabilise, but we do have a couple of hundred years of organisations spreading information at mass scale, during which we had to learn to vet and trust sources of information.
Things get incredibly complicated now. We could muddle this essay a lot by going into what Generative AI’s abilities are and what its level of intelligence is.
I almost don’t want to do that - it’s a topic for another post. For clarity, I’ll state that I’m confident Gen AI is very subhuman in its intelligence - but for the purposes of this essay, let’s assume Gen AI has human intelligence, as that is how it’s marketed to consumers at large.
I think the reason why Gen AI is harmful in a wider societal context is that it is impossible for a typical consumer to apply the appropriate trust equation here. Is Gen AI a piece of software - a toaster developed by a company with reputable software engineers? Or is it a portal to human knowledge, with all its fallacies, where we have to apply a media trust equation? Or is it a thinking being, with its own set of motivations, where the more appropriate equation would be a human-to-human equation?
I would argue that, even with what the marketing is telling us, most people are applying the tool trust equation, followed by people applying a media trust equation, followed by people applying a human-to-human trust equation. And because Gen AI is a great tool, a worse source of information, and a horrible source of advice and thinking, the more AI literate you are, the less you use it. If you apply the tool trust equation, you won’t question the reasoning, the advice or the source of information. If you apply the media equation, you won’t question the reasoning and the advice. Only if you apply the human-to-human equation will you question everything.
This matters - as advice is the primary category of usage for most people - followed by it being used as a source of information.
Now this might seem counterintuitive - if you’re applying the tool equation - why would you go to it for advice?
And this is where - I think - the conflation of software and marketing comes in.
What the industry is telling you is: “…this is a piece of software that we, world-class engineers, have built, that is like having a team of PhD experts in your pocket, that you can go to for advice and information, to make your life better…”. The industry is telling you that this is a toaster for advice - and when you use it, there are seemingly no humans involved, so you apply the framework of a toaster to it.
Now, if you’re a bit more savvy, you will understand that Generative AI has no concept of truth - it’s just a statistical compressor of information available on the Internet - so when searching for information - you will treat it as an imperfect media outlet - and you will check its homework. You won’t be immediately persuaded. This is using the media equation. This is understanding that the toaster is a portal to the factory.
But where it gets really opaque is that it’s hard to see through the marketing of “these machines can reason” - and this specifically matters if you’re going to use Generative AI models as thinkers and reasoners in an autonomous, agentic manner. This is where I think decades of science fiction and 80 years of using computers make us think that when these things reason, they are objective, rational machines that use formal logic to arrive at conclusions. We don’t apply a human-to-human equation; we apply a human-to-robot equation.
And that equation assumes that Generative AI is super-human - while our starting position, outlined at the top, is that it has human intelligence.
You wouldn’t trust just any human to give you legal advice - yet people do trust generative AI to give legal advice. Some people use it sensibly, understanding that the value returned is no greater than polling a room of random people (with the acknowledgement that there is some wisdom in crowds). And this is appropriate for light-touch stuff. But when the stakes are greater, if we were using the human-to-human equation, you’d get nervous. You’d get a feeling in your gut even when a reputable lawyer, who went to a reputable university and works at a reputable firm, gives you advice. But with the machine, that gut feeling is a lot lighter.
The wisdom of crowds is what makes this complicated, because the tool equation assumes “I’ll try it 10 times and if it works, it will work the 11th time” - whereas with the wisdom of crowds it might work 10 times out of 10, and that still doesn’t guarantee it will work on the 11th. The problem of trust here is extremely complex and nuanced - and so I would argue that, even if these systems aren’t intelligent at all, the appropriate way to think about them is to apply the human-to-human equation if you’re going to use them for advice or in an autonomous fashion. That is incredibly unnatural for our heuristics to do, because these systems are very obviously not human.
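A toy calculation makes the asymmetry visible (the 90% figure is made up, purely to illustrate the shape of the problem). If each answer is independently right 90% of the time, then

$$P(\text{10 correct in a row}) = 0.9^{10} \approx 0.35, \qquad P(\text{11th wrong} \mid \text{10 correct in a row}) = 0.1$$

In other words, a flawless pilot run is entirely consistent with a system that still fails one time in ten, and the streak tells you nothing new about the next call. With a deterministic tool, ten identical runs really do tell you about the eleventh.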
And applying that equation is what I think consumers at large won’t be able to do. It will take a while before a new trust equation, a new contract and a new heuristic are developed - the question is what the societal effects of mass-scale generative AI persuasion will be in the meantime.
The same problem applies to business - but businesses have strong vetting procedures and better feedback loops for evaluation. A business takes a lot of care to vet the humans working for it, to vet the software it uses, and to vet the suppliers it works with.
And a business will typically invest in a pilot programme and measure results. Some businesses will apply generative AI where a subhuman-intelligence wisdom of crowds is appropriate; they will generate good ROI and will adopt it. I would argue they would still need to monitor it, in the same way they would monitor the relationship with a supplier, or monitor that the humans working for them are still performing at the level they want. We have whole systems of procurement and organisational management for that very reason. My fear is that many businesses will fall into the trap of using the tool equation: vetting the tool now, but not vetting continuously. That would be a mistake - but a mistake with a much longer feedback loop.
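As a sketch of what “vetting continuously” could look like in practice (the names, thresholds and numbers below are hypothetical, not a prescription), the idea is to keep re-scoring a human-reviewed sample of the system’s outputs against the baseline agreed during the pilot, and to escalate when quality drifts:

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    period: str
    sample_size: int
    accuracy: float  # share of sampled outputs a human reviewer accepted

# Hypothetical acceptance baseline agreed during the pilot, and drift margin.
PILOT_BASELINE = 0.92
ALERT_MARGIN = 0.05

def review_period(period: str, reviewed_outputs: list) -> EvalResult:
    """Score one monitoring period from a list of human review verdicts (True = accepted)."""
    accuracy = sum(reviewed_outputs) / len(reviewed_outputs)
    return EvalResult(period, len(reviewed_outputs), accuracy)

def needs_escalation(result: EvalResult) -> bool:
    """Flag the period if quality drifts materially below the pilot baseline."""
    return result.accuracy < PILOT_BASELINE - ALERT_MARGIN

# Example: a later period quietly degrades, which a one-off pilot would never catch.
pilot = review_period("pilot", [True] * 46 + [False] * 4)      # 92% accepted
month_9 = review_period("month 9", [True] * 41 + [False] * 9)  # 82% accepted
print(needs_escalation(pilot), needs_escalation(month_9))      # False True
```

The exact mechanics matter less than the shape: the pilot produces a baseline, and that baseline keeps getting re-tested, just as a supplier relationship keeps getting reviewed.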
Some businesses have fallen for the marketing and have announced programmes where generative AI is going to do massive things without vetting, in a race to be first and capture market attention. These businesses have since walked back those claims.
Smart businesses are employing people with deep expertise in human-to-AI trust equations (i.e. ML expertise, AI expertise or systems expertise) to set up ongoing programmes of vetting, to understand where to use AI and when to stop using it.
That’s why it’s much more likely that, in the overall business system, AI will be used as a great tool to enhance productivity, but with limited usage in the advice space or the information space. When thinking about agentic applications, as I’ve argued before, businesses need to evolve systems of control that are similar to the systems of control over human workforces. I can see a business using AI to improve its workers’ productivity. I don’t see a business signing on an AI model to be its sole accountant or sole legal counsel. No matter what the marketing would have you believe.
But the critical thing to understand from a business perspective is the runway. Even if AI today were as capable as a human (and it is not), you would want to apply at least the same vetting and monitoring you apply to humans. Only when AI becomes superhuman can you start thinking about a robot trust equation - but at that point the change is so transformational to society that I’d argue it makes little sense to prepare for it now.
Everything else is software. And you wouldn’t have trusted accountancy software from OpenAI four years ago; you would have trusted a software firm with domain expertise in building accountancy software to build the right controls and systems to create a useful and predictable tool.
My current best-fitting model is that the collapse of the trust heuristic is the primary lens for understanding what AI is doing to society.
What I hope will happen is:
Consumers will recognise that they need to vet AI the way they vet the human experts they work with - by looking at domain credentials and business credentials
Insurance organisations will recognise that they need to provide insurance contracts that mediate risk between AI organisations and individuals / businesses in a way that is different to the software contract
The legal profession will recognise that the trust equation is different and evolve the contracting between AI organisations and individuals / businesses in a way that expands the concept of liability
Call me a cynic, but what I think will happen is:
Consumers will spiral into bad patterns and develop trust issues and mental health issues - something we’re already seeing happen
Bad actors will recognise the collapse of the trust heuristic - and will exploit it to selfishly persuade towards their own goals - be it bad political actors, bad businesses or bad individuals (or all of those combined)
AI businesses won’t recognise the harm they’re doing, because the global economic system is only optimising for growth (as expressed by user numbers, eyeballs in front of ads, or what have you)
This is why I believe regulation is necessary. Not heavy-handed regulation of what companies should do - not even regulation of what businesses are allowed to do (businesses should be allowed to take AI risks and do research in air-gapped environments). But regulation that incentivises businesses to avoid customer harm when deploying AI systems - to safeguard consumers as we adjust to a new way of trusting what these systems are and what they should be doing for us.


