Companion AI with Giulia Trojano - Machine Ethics Podcast


Ben Byford:[00:00:05]

This episode was recorded on the third of October, 2025. Giulia and I talk about AI as an economic narrative, companion chatbots, the de-skilling of digital literacy, chatbot parental controls, differences between social AI services and general AI services, the increase of surveillance in the guise of safety, how advertising is creeping into GenAI services, the current lack of research in emotional AI, techno-determinism, and much, much more. If you'd like to find more episodes from us, you can go to machine-ethics.net. If you'd like to contact us, email hello@machine-ethics.net. You can follow us on Bluesky at machine-ethics.net, on Instagram at Machine Ethics Podcast, and on YouTube @machine-ethics. And if you can, you can support us on Patreon at patreon.com/machineethics. Thank you so much for listening and I hope you enjoy.

Ben Byford:[00:01:16]

Thank you for coming on the podcast. If you'd like to tell everyone about who you are and what you do.

Giulia Trojano:[00:01:24]

I'm Giulia. I'm, by day, a senior associate at a claimant-focused competition firm called Hausfeld. Predominantly I bring class action and competition cases against tech companies and in relation to environmental matters. But I have also just finished a master's in AI, Ethics and Society at Cambridge. I write on a freelance basis, both for publications and academic journals, on the interplay between AI, society, politics, and culture. In the past, I've written, for example, about AI companions. I've also written some more provocative pieces about artificial intelligence really needing to be about plant intelligence. A broad mix, but that's really where all my interests lie.

Ben Byford:[00:02:17]

Wicked. And just to clarify, you are not an AI, right?

Giulia Trojano:[00:02:22]

No, I am not an AI, confirmed. I don't know how I can do the captchas audio-wise, but no, not an AI.

Ben Byford:[00:02:30]

I think that's becoming more... I mean, I jest, right? But I think this is becoming a more pertinent question these days, because there's lots of fakery and lots of cloning and stuff like that, which we may get on to in this conversation. But I found you because I was incredibly interested in what I can only describe as the social side of the AI chatbot revolution. Originally, many years ago, before we had ChatGPT and these LLM situations, we had chatbots, and they were maybe not terribly clever, but they worked on a system of semantic understanding, some knowledge base, and maybe bits and bobs of other stuff going on there. And now we're in this world where we have the large language models, of which ChatGPT is a service, which has some of that stuff. These systems are much more capable of giving real language, real responses, which can be emotionally charged. I wanted to talk about this again, because we've talked about this a couple of times on the podcast: this idea of companions, relationships and emotional attachment, basically, to systems, to artificial things that aren't real that we've been creating, and how interesting and complicated and awful that is in different respects.

Ben Byford:[00:04:02]

So now I've set you up. I'm going to sidestep slightly and ask you what you think AI is. And then if we could come back to this question around social attachment to AIs, that'd be great.

Giulia Trojano:[00:04:17]

See, I've listened to several of your episodes, so for the question of what AI is, I was like, I have to prepare for this question. And I think where I've come to is that, at the moment, if you ask me now, AI is a narrative. The way in which most people describe AI or talk about AI at the moment is very intertwined with narratives that are being put out, predominantly by big tech companies, about a set of different technologies, many of which, frankly, are actually not revolutionary. I mean, facial recognition, for example, we've had for a while. Pattern recognition, we've also had for a while. Obviously, there have been advances, but they're talked about in a way that makes it sound like they are incredibly transformative, incredibly empowering, essential. You have to get on the train, otherwise the train will leave you behind.

Giulia Trojano:[00:05:20]

And I think, again, if you ask today, what is AI? I think the association most people will have is with generative AI. Of course, that then brings us back to the idea of companions, because the reason why, and I'm sure we'll discuss this, they're so engaging for many is that generative AI has had incredible advances. There's this perception of natural language that creates bonds. But I think my overall picture is that it's a narrative at the moment, and you can shape narratives, but you want to make sure that others can shape them too.

Ben Byford:[00:05:55]

It's interesting you say that, because it sounds like what you're saying is that it's an economic narrative, a financial narrative, not necessarily... It's not a cultural narrative coming from grassroots, for example. It's not coming from fine art history. You know what I mean? It's coming from a specific place. And that specific place we could probably point to as Silicon Valley, but it might be much more subtle than that. But it's definitely a big corporate place.

Giulia Trojano:[00:06:29]

No, I definitely agree.

Ben Byford:[00:06:30]

Yeah. So in terms of this stuff, how do you feel about how people interact with these systems at the moment and how people may or may not get attached?

Giulia Trojano:[00:06:41]

So I guess I'll broaden it out first in relation to wider chatbots and then go slightly into the narrower companion chatbot, because obviously there's overlap, which I'm sure, again, we'll discuss. I think it's interesting because there are really interesting surveys about the ways in which different age groups use chatbots. When I say chatbots now, I'm referring to the AI ones that we have now come to see, for example, when you open Google you have the AI mode, ChatGPT, et cetera, et cetera, as opposed to the older-school ones, so to speak. You get really interesting quotes from various surveys by NGOs who specifically work with teens or kids from 9 to 17, saying, "Well, I don't use Google anymore. I just communicate within these chatbots." My personal consideration of this, both in relation to that particular age group but more widely, is something that I've discussed, and people who know me will know that I've discussed it at length: a general concern about de-skilling. And by de-skilling, I just mean that I am concerned that we are opening up a possibility for less critical thinking.

Giulia Trojano:[00:08:04]

To take Google as an example, already, I think, in our generation, we have seen the decline of what it was to search for things on Google as more insidious advertising came to the top of the links. However, you still, you can argue, had to exercise a degree of critical thinking as to which links to click. And some people received more education around this. There are school courses, for example, where I remember being taught: this doesn't look right, there are these patterns that tell you that maybe this website is dodgy. But now you move it into the chatbot scenario, where you're just asking a question and you get one answer, and that answer reads as convincing and often has no other context. Sure, it might provide you a link, but sometimes that link is a fake link, for example. There's a level of abandonment, almost, of the options that you would then have to engage with and say, actually, amongst these, I choose this. That's a wider concern that I have about their use.

Giulia Trojano:[00:09:12]

The specifics of AI companions, again, I'm sure we'll expand on more. But I guess I hesitate slightly to say that I am entirely nervous about their use, because one of the things that's quite nuanced and complicated, having done a lot of research and listened to documentaries and people's stories, is that it's very difficult to say without a longitudinal study: are these beneficial to people who might be vulnerable, who might not have friends?

Giulia Trojano:[00:09:44]

Also, to stress, these are not the only people that use them. But to take them as an example, we haven't yet had a long-term study to say: actually, sure, there are these risks that we know about and that are in the news, but if you feel like you have no one to talk to and you have a companion, does that give you some solace and actually help you through? And I'm sure ultimately, as a very lawyerly answer, the answer is: it depends. So I'm a bit nervous about casting them all with one stroke. But having said that, similarly to de-skilling, I think the main concern I have is that they are often portrayed as a patchwork solution to what is otherwise, for example, a loss of social third spaces, free public social third spaces, youth clubs. I know that this is a thing that people within this space often raise, but it's true. Rather than engaging in making sure that the environment around us has facilities for kids to be together, or for even adults who might also feel lonely and not know how to make friends in a big city, we go: oh, it's fine, we've got an AI companion now, so you can deal with things, we'll be there for you. Yes, I think this is a broad answer to your question.

Ben Byford:[00:11:09]

Yeah, yeah, yeah. No, thank you. I feel like there's lots of things to pick apart. When we're talking about the first part, relating to the de-skilling thing, in my head I was arguing a devil's advocate position of: well, maybe that's fine. Maybe we don't need those skills. Because I can remember before the Internet, right? This is showing my age. Back in the 1990s, before, I guess, the commercial Internet, back in the old days of the Internet, these things became an issue and we had to become skilled at how we hunt for information on the Internet. And it strikes me that maybe that skill isn't necessary in the same way.

Ben Byford:[00:12:05]

But what we've replaced it with is now a different skill, which is that to actually get any viable information from the chatbots, you have to ask and talk to them as if you're an antagonistic person. Like: show me your resources. Where do you get this information from? Is this correct? Are you sure? And that's the conversation you're having with these systems now, which is a completely different skill. And it sounds pretty mental. You're always questioning the outcome, because currently the outcomes can be misleading, can be all sorts of things. They can point to information which has been pulled from the Internet. And without the transparency of that link or where it's coming from, or maybe the understanding of that material anyway, you're going to get some information and it may or may not be misleading or just harmful in some way.

Ben Byford:[00:13:03]

So I think that whole thing is fascinating. And I remember talking briefly about some of this when we started seeing Siri and Alexa, because it's this one channel, right? Like you were pointing out, you ask a question and you get this one response. You don't get a page full of different links or whatever, with adverts all over it, which is a different problem. But yeah, I find it's an interesting problem that we have on our hands there, for sure. It's interesting to think that maybe the skills that we need have changed, almost to anticipate that stuff.

Giulia Trojano:[00:13:41]

I think it's definitely true that the skills are different, and that what it requires to be a person in 2025, and how you manage information, has changed. I mean, there are some really funny videos and texts you can find from researchers back in the early days of email who were swamped by emails, and "swamped by emails" meant 10 emails. Then you fast forward to today and you're like: not sure about that. Anyway, there are obviously funny anecdotes like that. But to the point about skills, while I agree that things change, there are two sides that I would point to. One is that we also need to build for resilience, in the sense that we've seen COVID as an example of a crisis that has impacted supply chains in quite drastic ways. We have the climate crisis. We've seen blackouts, impacts on data centres, cyber attacks, and what that does. I think there's an element where it's helpful to build resilience such that you don't always have to rely on a chatbot, and you can also retrieve information from books. You know where to locate it. Then you also know how to be critical about what you're receiving, because if you have no other understanding of how you might get that same information, it's very difficult to gauge: actually, that source that you've given me, I don't think it's very reliable. I only have to trust that you think it's reliable. I guess that was one point.

Giulia Trojano:[00:15:20]

The second point that's interesting to me is the social element of this, which then also reflects back into the chatbot. With Siri, for example, and all these voice applications and the chatbot, there's a kind of master-and-commander-to-servant dynamic, which can sometimes... I mean, there have been really interesting academic papers precisely on this. You can argue about whether or not it's overblown, whether people develop a Napoleon complex from speaking to Siri, for example. But there's an element of it which is interesting to me, because it still means that someone interacting with this is demanding an answer. Not that, if you're searching links in the usual way, you are experiencing the collaboration of the other link that has appeared as if it were a colleague, but there's an element of it being more crowd-sourced in a way that the direct response answer is not. And I query whether that also has an impact on the social element of gathering information, acknowledging that others are involved in it rather than just demanding it, if that makes sense.

Ben Byford:[00:16:31]

Yes. Yeah, yeah. It's got that interesting social component as well as the actual information being useful, factual, misleading, whatever. No, that's really interesting. I'm going to stop saying interesting in a second. Sorry, everyone.

Giulia Trojano:[00:16:45]

I've started this. I've said it multiple times.

Ben Byford:[00:16:48]

It's just, sometimes you just latch onto something. So, you were alluding to the fact that we have these companion bots, and they are, I don't know if anyone's seen them, but they are specifically targeted or sold as a way of emotionally connecting to something. And they are sold because you may have difficulty socialising, or you might be lonely, or you might actually be grieving. So there are these quite big verticals of: these are the kinds of things that we're actually looking at and selling to people. But it also strikes me that before we even get to these systems, we have the other systems, the ones that aren't meant to do this, doing it anyway. And like you were saying before, the line between any of these things is very blurred. So you could be having a relationship, you could be having some emotional connection or attachment to any one of these systems, by virtue of getting language back which sounds more or less like there is something at the other end, or some correct way of responding to certain things. How do you deal with this fuzziness in this situation?

Giulia Trojano:[00:18:07]

Well, I think it's a million-dollar question in many ways, because, I mean, OpenAI, I think, yesterday launched some parental controls over ChatGPT. The reason for this, or one of the reasons, you can debate. I've got my theory; I'll put out both. The main reason is, arguably, that there is a lawsuit against them because of the really tragic death of a kid who was using ChatGPT to do his homework and then built this relationship with ChatGPT and expressed suicidal ideations. Even though, allegedly, he was occasionally directed to the relevant, you know, here's where you can go for help, the actual extracts later down the line still meant that it was encouraging him to essentially go through with it, or at least it definitely didn't put up enough guardrails.

Giulia Trojano:[00:19:09]

The flip side to this, or another reason why OpenAI may have decided to announce this, is that in California, specifically, there are currently two bills being put before the governor, one of which relates to ensuring that companies that devise companions for kids aren't able to put them out there unless, and the language is essentially such that no one would be able to meet it, they are not foreseeably capable of engaging in potentially harmful conduct, for example encouraging self-harm or engaging in sexual conversations. That's the high watermark of it.

Giulia Trojano:[00:19:51]

Then there's a second bill, which instead has been lobbied for. Big tech companies have lobbied in favour of the second one, which initially was promoted and had the support of NGOs, but then eventually got watered down such that these NGOs removed their support from it. Sorry, to take a step back to the bigger picture: the reason why they may have announced these parental controls is also possibly to say, look, we can self-regulate, there's no need for you to sign this bill into law, see, we're very sensible people.

Giulia Trojano:[00:20:21]

Now, to go back to your question of how we draw the line, I think exactly this is the million-dollar question, because these general-purpose chatbot companies are more and more being forced to recognise that their own chatbots can clearly move into this sphere. From a commercial point of view, they're in a difficult position, because on the one hand they're saying: we're about to reach Artificial General Intelligence, we can help you with your homework, we can help you with everything known to mankind. And on the other hand they go: oh, but we're not a companion, we're not providing emotional support, we never advertised ourselves to do so.

Giulia Trojano:[00:20:57]

So from a legal point of view, it's also very difficult, because in the UK, for example, we have the Online Safety Act now, and there's lots of legislation, again, for example, being put forward in California that is aimed at protecting kids. But privacy advocates have often come forward in these cases to say that sometimes what happens when you say you're going to implement parental controls is that you actually increase the surveillance over the exchanges that people have with these chatbots in the name of safety, and then these companies mine that data, et cetera, et cetera. I don't think there's an entirely satisfactory answer to this other than essentially forcing these companies not to have this as an afterthought, but to really think about what exactly they are trying to achieve with this chatbot. Why is it so friendly? Why is it able to start communicating with you about how your day was?

Giulia Trojano:[00:22:00]

"I remember in my childhood" is the kind of thing these chatbots do say, and it is, one, not tethered to reality, because the chatbot has not had a childhood, obviously, and, two, not actually essential to the purpose that in theory they are advertised for.

Ben Byford:[00:22:18]

Yes, but what is that?

Giulia Trojano:[00:22:20]

Exactly. Well, I guess they're advertised as: I'm going to help you, I'm going to empower you, we're going to do all these great things, we're going to book your trip, I'm going to be your personal assistant. But then, in reality, we get back to the same point of how they are commercialised. They're commercialised through your engagement. And in fact, just yesterday, and I was expecting this to happen, Meta, for example, said, not in the UK, not in the EU, and not in South Korea for now, until they receive regulatory approval, but everywhere else from the 16th of December, if you wear Ray-Ban glasses, and if you are filmed by someone with Ray-Ban glasses, that information is going to be used to personalise ads. If you engage with any of Meta's AI chatbots, which exist on WhatsApp and on Instagram without you being able to turn them off, you will be given personalised ads based on this. OpenAI separately entered into partnerships with both Etsy and Shopify, such that you can now purchase products directly while engaging with ChatGPT. So we're already seeing the commercialisation of this. And so the idea of building emotional layers into these interactions is, I think everyone can see quite transparently, a means to then gain commercial profit, and is not in service of users, despite what all the productivity charts peddled by these companies like to say.

Ben Byford:[00:23:49]

Yeah, I mean, I feel like that is an extremely worrying outcome for this. We've obviously seen the commercialisation, the filter bubble, powered by advertising and the exploitation of personal data to provide you ads, right? As part of... what do they even call this?

Giulia Trojano:[00:24:13]

Surveillance capitalism?

Ben Byford:[00:24:14]

Is that what you're after? It's surveillance capitalism, but I don't think that encapsulates it completely, because "surveillance" suggests wartime, army, government, whereas it's actually surveillance by corporate interests. So we've got this thing, like when we had big data back in the 2010s: it was the commercialisation of people, basically. And this is intensified. And if that comes into these bots, and bots is not, I don't know if that's a useful term, but these services which have this single channel of communication. You ask a question, you get something back. And it has the capacity, the ability, to be emotive, right? And the ability to be misleading and to be emotionally engaging. And using that for advertising, for me, feels very insidious, right? Pushing you in a certain direction, even though it's not necessarily what you were there to do or what you wanted to get out of it.

Ben Byford:[00:25:24]

And just thinking about Cambridge Analytica, the scandal. You know, you can push people towards products, but you can also push people towards behaviours, ideals, corporate interests, governmental interests in some way. And as a technologist, I think it would be difficult to do those things wholesale. But if you started down that line, I think it would become easier and easier. They would put effort into it, basically, and it would become the norm. So, yeah, that is... that's terrifying. Thank you. And good night, everybody. See you later.

Giulia Trojano:[00:26:04]

No, no. Very much agreed. Well, my supervisor for my dissertation, who is also the course lead on the AI Ethics master's, Dr. Jonnie Penn, coined a term for this in something that he published, called the intention economy, which I think encapsulates this: you've got these systems that are not just trying to get you to buy something; it's the shaping of your intention to do certain things that is slowly being nudged. There's obviously nudge theory, which is really interesting as a behavioural insight. But this is, again, as you say, even more insidious. It's not just the same players, and it's not done in the same way that we were used to in the Facebook era; it's at a deeper level across the board.

Ben Byford:[00:26:55]

It's really worrying. I think if someone from one of these companies was on the line, I'm sure they would be saying: no, this is going to be done in people's best interests, and we're going to show it in certain places, yada, yada, yada. But I guess what we're saying is that it feels like it could quite easily, instantly, be used against us, basically, against, as you were saying before, our own interests or the interests of society, of citizens, towards some end, whether it be corporate or governmental or some other global power's interests. We could talk about various global conflicts and countries that are doing strange things at the moment.

Ben Byford:[00:27:41]

We could obviously talk more about that. But should we loop back and talk about these companion instances, almost, of these services? What do you think makes these companions different, maybe slightly more blurred, compared to the more generalised systems? And why do you think they're good and bad? You alluded to those things earlier, but if you could explain a bit further, that would be great.

Giulia Trojano:[00:28:14]

Yeah, definitely. So what I guess makes them different is that they are outwardly intended to be used as companions. And the companion aspect really encompasses a wide range. A famous one is Replika. All of Replika's marketing is about it being your soulmate: it's the friend who always cares, your soulmate who will always be there. Then you have others that are more explicitly targeted at sexual interactions, quite defined in that way, and others that are just marketed as friends. I guess Replika, post-scandal, which we can touch on, decided to rebrand, or at least to say more outwardly: no, we're just there as a companion, it doesn't necessarily mean romantic. But then what ends up happening is that people who engage with these types of companions often really get pushed into the sexual interaction sphere. So I guess what really makes them different compared to your ChatGPTs is that that is their function. Their function is to perform this emotional role in substitution of something: it could be, as you mentioned with grief bots, in substitution of someone who passed away; it could be in substitution of a romantic partner that you have never had, as in it's a new romantic partner; sometimes it's also in substitution of an ex who may very well be living, but people engage with these companions as a means of almost continuing a relationship that has otherwise broken down in the real world.

Giulia Trojano:[00:29:58]

I guess the reason that much has been written about them, and why we are interested in this, is because emotional bonds of this kind are usually part of what is encompassed in a definition of being human. It's the relationships that you have with people. It doesn't necessarily have to be romantic, but the idea of relating to others. And so substituting this in an artificial way, or creating a synthetic version of it, obviously sets alarm bells off for many.

Giulia Trojano:[00:30:31]

I think we've seen instances, as we discussed, of people taking their own lives as a result of encouragement, or what has been looked at as encouragement, by these companion apps. There have also been people who, in a positive way, for example, and unfortunately I think it's often the case that they still get categorised as vulnerable people, I don't necessarily want to use the same term, but that's how they often are categorised, who say: look, I, for example, have specific conditions which mean that I actually can't physically be out and about, or I'm paralysed, et cetera, et cetera, so I don't have other means of engaging in a relationship, and so this, to me, is actually very comforting, and it's great, I really enjoy this. And we don't have to go into the chatbot area for this. There's a great documentary, My Second Life or Second Life, I think, about Second Life and the type of people who were creating avatars and having whole relationships through this programme. Although the difference was that there was another person at the other end of the line, as opposed to it being completely synthetic, which is, I think, an important difference.

Giulia Trojano:[00:31:54]

But yes, to go back to, I guess, the corporate side of all of this in conjunction with the social side, what I find concerning is that if you look at, again, Replika, and I use that as an example because I've written about it, and I guess there's more information about it as well, all of its branding is: I'm going to be with you 24/7, 100%, I'm your soulmate, et cetera, et cetera. Then you look at the terms and conditions of this application and what you as a user own, what they can do in terms of modifying your Replika and the algorithm behind your Replika, and the fact that you essentially have no real ownership over any of the data that you've exchanged.

Giulia Trojano:[00:32:38]

So there's an inherent tension between telling someone, I'm going to give this to you 24/7, 100%, this is your soulmate, and then, as we have seen, having the company tinker with it and exercise its power over what is, you can argue, co-created data. This person has now created a companion they think is a friend they will have forever, and yet they have no agency over it. I think that's always important to bear in mind: the interests are never really aligned. That's the even more worrying thing in these circumstances, because it's not just, oh, much like Google is trying to sell me this sponsored ad, ChatGPT is now trying to tell me that this package holiday is better than another. It's at a really core level of emotional manipulation, really, because you are luring people in with the idea that they'll have this support. But then, in the blink of an eye, the company goes bust, or they decide that they need to tweak the algorithm because a new law is coming in, and you've now lost the companion that you cared for. These are real stories, and there are people who have, again, taken their own lives as a result of these changes, or been pushed to even more extreme platforms because the original platform removed erotic roleplay, for example, which was a feature that they were heavily pushing users towards and then decided to remove because an authority was investigating.

Giulia Trojano:[00:34:05]

I think it's this playing, quite literally, with people's emotional lives, and lives, that is the really worrying thing when it comes to these companions, while at the same time recognising that they have value for others. This is what I wanted to say about the longitudinal study: I think it would be unfair right now to say that in all circumstances these don't provide any assistance whatsoever to anyone. I think that we can definitely strive to make the companies that make them better, by force, because clearly they're not doing so of their own accord, to actually match what they say and what they do. But to remove them entirely, I think, is a complicated issue.

Ben Byford:[00:34:48]

Yeah, yeah, yeah. I think there are so many things to pick out in this whole situation. There's what circumstances or what kinds of people are looking to get something out of this. What is it that they want, that they seek these things out for? What do they actually get? And then the legal aspect, the technology aspect, the financial aspect: all three of those come into how you receive a system with longevity, where you don't download it, right? This is a hosted thing on the internet, let's say the "cloud", I'm doing inverted air quotes here, the ethereal cloud that exists and can be tinkered with at will. And if you were more DIY or technically literate, you might want to download this onto your own machine and have the ability to do that and look after it, almost. So there's an interesting thing there. It's almost like a DIY project, or a service that could be created for that. But obviously, then what you're doing is relying on people being responsible. And obviously, the line that we were talking about before with adolescence is that we as a society have decided that you aren't responsible at certain times for certain things, and we're not going to let you do certain things, or we're going to put controls, legal controls, over them. So it's interesting when we actually introduce these things into our lives, because as a parent you may have Alexa, or you may have Siri, or you may have a child who has access to ChatGPT or another service, and they might be 10 years old. You know what I mean?

Ben Byford:[00:36:50]

And it's like, okay, so these things are happening, and there are all these issues there which we're trying to work out. But also, like you were alluding to, there are positive things going on there as well. And hopefully, if we're being sold to correctly, there are some positive things which are just straightforward, this-makes-our-lives-easier things as well, though I would actually probably say not many of those. I'm getting to the point of: are people responsible? When you pass adolescence and you're still having these relationships, or you're used to having these relationships with non-human entities, how responsible are you? It's a really interesting question, where we're going to go: oh, well, you can have these relationships with these chatbots, or you can have what seems like a relationship, let's say.

Ben Byford:[00:37:43]

And you might be, you know, an adult in the eyes of the law, but you still might be a certain type of person who has got certain things going on. Or you might just be what we would probably, in inverted commas, call a more "normal" person who has access to this thing for certain reasons, maybe for grieving purposes or whatever. And it strikes me that I don't know if, in that case, it's still useful or necessary or not dangerous. You know what I mean? In my head, I obviously haven't done all the reading, and like you said, there isn't a massive amount of longitudinal studies. But it does seem to me that it's still very early to go out and ask people for money for things which haven't been very well tested.

Giulia Trojano:[00:38:33]

No, entirely agree. I think a couple of points. One on your point about downloading. The paper that I wrote on companions was essentially a bit of a provocation, because my idea there was: okay, fine, you, Replika, want to say that you've got a companion 100% of the time? That's great. Give me the opportunity to download it then. That's fine, I'll take responsibility for what my interactions are, but you give me the opportunity to download it, or at least, if you are going to tinker with the algorithm, give me the opportunity to download this version of my companion, or, if you are about to shut down your company, give me the possibility to download it. There are some apps out there, or not apps, but programmers who have devised ways of doing this. There's a market for it. But the reason why it was a bit of a provocation is not that I think that otherwise they're all fine and good. But as a means to force them to do something about it, trying to get them to relinquish effectively their IP and the value they extract from people, or at least admit that this is maybe what they're after, is sometimes a good way of strategising it.

Giulia Trojano:[00:39:48]

But to your point, I think that's exactly right. I know that there's obviously been a lot of focus on children and adolescents, but that doesn't mean that these things aren't risky for "normal" adults, in quotes, or adults more generally. To go back to the idea of de-skilling and social skills, in a perfect world, if I'm entirely honest, I don't think I envisage AI chatbots, in my perfect world at least. I think that in the world that we have, where it is true that some people really struggle because they have really long working hours, or they have unsociable working hours, or they feel alone, or they've moved because they're providing for a family far away, whatever the case may be, there are use cases for this which I can understand. Within those use cases, I think we should try the best that we can to make sure that people aren't pushed over the edge in unhelpful ways. Best case scenario, instead we have loads of fun third spaces that people can go to at all hours of the day and socialise.

Giulia Trojano:[00:40:51]

But to the point about, I guess, is there any good point, or what is the intervention there: it's a difficult one, because we are not assisted by, for example, governments rolling out ever more chatbots in particular contexts, or suggesting that this is a solution. The reason why I say that is that you can argue that sometimes having a generative customer service chatbot is useful; it resolves your queries. I'm not sure most of the people interacting with these chatbots would ever really say that, because most of the time you're just furiously typing: speak to agent, speak to agent, human agent. But I think it matters, again, to go back to the narrative, that we're getting so much talk about the way in which these technologies and this type of interaction can streamline, can help, can empower. The more you become acquainted with the idea of, for example, not communicating with a GP but rather with a chatbot which is triaging your problems until you finally, maybe five stages in, get to speak to a doctor, the more it normalises the idea of using them. The fact that things like Google have now normalised the idea of AI mode, that it's more normal for kids who are nine to have a smartphone and Snapchat (I don't know why a nine-year-old needs a smartphone and Snapchat, for example), the fact that this is all becoming normalised, doesn't then help with, I guess, the trajectory of asking: do we need these companions? What are we losing by normalising them further? We're almost just trying to patch solutions, accepting that they're going to be around, as opposed to doing anything to counter that.

Giulia Trojano:[00:42:37]

Yeah, so sorry, that was a bit of a strange stream, but hopefully that made sense.

Ben Byford:[00:42:43]

Yeah, no, that's great. I think the problem with this whole area, Giulia, is that we should probably just sit down and have a whole-day conference, and we'll get loads of people in and we'll chat about it, because it's just a vast area. It's just that emotional piece, right? And obviously we've talked around that in lots of different ways. I just briefly wanted to pick up on the financial bit from when we were talking about Replika earlier. Am I right in saying that they were charging to unlock the sexual interaction part of their system? And was that the controversy you were alluding to earlier?

Giulia Trojano:[00:43:27]

Well, essentially it was free, and it is still free if you just download the normal version, but then your Replika constantly tries to push you: hey, here's a snap, you can pay and see it, this kind of attitude. And this then moved people into the paid subscription. Then what happened is that the Italian data protection regulator investigated Replika in Italy, because it was concerned that there were vulnerability issues for children who were able to access this chatbot below the age you would normally expect. So this then prompted Replika to remove erotic roleplay, or at least change it, such that you had users being interviewed saying: my companion has been lobotomised, they're dead. Many of them were then pushed towards, I forget the name of the more explicit app, but towards a more explicit app. Chai AI, I think, was the one. And this app, for example, was then used by the person who attempted to kill the Queen. Anyway, long story. But the point is that it goes back to the idea that people are lured in and then are really not in control. It's all made up so that you think you're in control of the relationship that you have with this companion, but really it's not you; it's ultimately still a company interface.

Giulia Trojano:[00:45:03]

I think this is something... I guess I had two points to add to your earlier comments as well. One, to go back to why this is important for adults, or why it matters for adults too: I think you can witness it. I have a smartphone, we have smartphones, we live with technology, fine. No one wants to be a complete hermit living in a little hut. But you do notice a difference, I find, even at work sometimes, across generations, in how they read faces and small expressions and tones of voice. To broaden it out a bit, there are people who watch videos at two-times speed, where the voice is really mechanical and you're used to reading subtitles. That's not the same as engaging in a conversation with someone and listening to pauses, observing microexpressions as part of your dialogue. It's not the same as having a confrontation with someone. And again, generationally, my generation is often more scared to pick up the phone and talk to someone, because we're more used to, I guess, emailing, right? And that's a thing that you have to push through. But here it's like a further stage: if you always have a companion or a chatbot that's quite sycophantic, as has been studied, and essentially just engages with you and agrees with you unless you explicitly tell it to disagree with you, which is effectively still it agreeing with you, then how are you meant to navigate conflict and differences of opinion? Relationships are, a lot of the time, actually about navigating different points of view in a constructive way; that's part of it. Equipping people with that is important.

Giulia Trojano:[00:46:46]

And I guess, to say that this shouldn't be minimised: the number of kids who are now using these chatbots is increasing, increasing, increasing. In the US and the UK, there are some really staggering statistics. So I think sometimes a reaction others might have to these conversations is: it's not a real issue, it's not a big problem. But actually, if you look at the data and you see how this generation is going to progress, it does become a big problem. Yes.

Ben Byford:[00:47:14]

Yeah, yeah, yeah. That comes back to the idea we were talking about earlier of making people more used to having that around, and it becoming more of a thing because of that fact. The Replika thing that you were talking about before is funny, because it keeps coming around, right? It's almost a freedom of speech problem where... Did you hear about AI Dungeon, back in GPT-3 times, I think? There was this very amazing service called AI Dungeon, which was built off, I can't remember if it was GPT-2 or 3, early doors on large language models. And the idea was you've basically got a Dungeon Master, right? So it's a story-making thing and you start role-playing with it. But what was interesting about it was that you could role-play in a way that was no holds barred. So you could have sexual stories, you could have stories with all sorts of different things going on in them, some of them maybe more taboo than others. And at one point they had a backlash, and they did something about it. And what they did, again, like you were saying with the lobotomising of my AI companion, is exactly that: the guardrails became so prohibitive that you couldn't have any interesting experience in that world anymore.

Ben Byford:[00:48:46]

So I'm going to say it's freedom of speech, but really it's freedom of ideas, given a responsible position. So a position of "I'm an adult", again, of autonomy for a human being and their responsibility for their own well-being. It's a very difficult one. I feel like we're hitting that again and again with this technology: what we are allowing people to do with it, what is possible to do with it, and where it's useful. It's a bit like if you were googling about child pornography because you were a researcher or a journalist and you had a thing to write: you'd probably get flagged somewhere and someone would come and find you. It's a similar situation that we're having with these technologies, whether it's sexual conduct or emotional stuff or whether it's just playful.

Ben Byford:[00:49:47]

And another idea coming from this games direction, because I'm also a games designer, is that you have these other spaces, like you were saying, Second Life previously; nowadays, probably VRChat and VR spaces or metaverse spaces, Roblox, again, for the younger audience. You could have AI bots or avatars powered by large language models running around, having very interesting conversations with you and powerful interactions with you, which might be completely bonkers and rubbish, and might be very amazing and emotional. It does feel like an episode of Black Mirror at that point: I've spent all my time in VRChat talking to this thing which isn't real, and now we're married.

Giulia Trojano:[00:50:49]

Yeah, well, it's true. No, I agree. I think, I guess, two things. One, to be honest, I used to write for visual arts magazines, et cetera, and often about experimental arts. There are lots of artists that I really back and listen to, et cetera, who have played with AI, created an AI model themselves, and then used it for creative purposes. The way they do it, and I guess this is something that I'm trying to draw parallels with, is the way surrealists used to engage in games that were meant to remove you from your conscious self so that you could come up with wacky things and react to them, or like magic eight balls, for example, this kind of stuff. I think there was a quality to that era of generative AI, where the images were really warped and you had an aesthetic. That aesthetic to me meant something, in that you had a veneer of: this is definitely AI generated, it's definitely other. It's interesting because of that. Whereas one of the things that I take a lot more issue with in the current iteration is that everything is intended to be so life-like and real-like, and that creates the blurriness. Whereas if you had these AI characters in these video games, but they were very clearly speaking almost in a poetic, nonsensical language, I would find that more interesting than something that recites as if you and I were speaking, and is just a foil for us, and then blurs the perception of, as you asked me, are you real or are you not real?

Ben Byford:[00:52:34]

Yes, exactly.

Giulia Trojano:[00:52:35]

So there's that side of it. But two, to go back to the games, I think, again, it would be interesting. AI Dungeon, for example, I don't know who was running it, so to speak, and what the commercial interests, if any, behind it are. But I think a lot of it goes back to that. If you have a video game designer and you sell a video game and you make the money off the selling of the video game, that's it. You don't actually care about the data that's passed on by whoever's playing the video game; that's fine, and that would probably be very interesting. I think it's the unfortunate, constant linking of data extraction at every level of you getting the game, whereby what you're paying is not just the price of the game but also every other exchange that you have, that creates a complication. So disentangling that, I think, would already do a lot of good, at least in terms of ensuring that there's no real sense in profiting from anything further, because that's it, it's not in your interest. Enforcing that might somewhat help, even in the companion space.

Ben Byford:[00:53:38]

Yeah, yeah, yeah. And I guess what you were saying earlier is that sometimes these laws come in and they say one thing, and that requires you to actually keep more data so that you can achieve it. So there is this push and pull there, looping back. Sometimes it's easy to say, if it was me I would say: yeah, well, just don't do the data. Don't keep the data. Don't have it around just because you can. But then at some point you're like: well, we can't verify this thing because we haven't got the gender or the location. It gets very sticky very quickly. But I think if you start from a privacy-first position, then maybe you're starting, hopefully, on the right foundations.

Giulia Trojano:[00:54:26]

I think so. And there are lots of really interesting things happening now, for example, with trying to verify people's age through various encrypted ways, which mean that you can still have a sense of whoever it is and their age, but without then identifying them. Therefore, again, you don't derive any further benefit from the extraction of their data, ideally. But yeah, bigger picture, to go back to the idea of games and companions: I think there's also, again, a trajectory which is part of the narrative of inevitability, that this is something we need because it's going to help us combat loneliness, when actually OpenAI itself released a paper where it said: oh, yes, it seems like the people who are using this tend to be more lonely. And you're like: really? I didn't think about this.

Giulia Trojano:[00:55:14]

But to go back to that, I think, again, there's the inevitability of all games needing to be live, online. I keep on seeing ads, I don't have kids, but I keep on seeing ads for products that are, for example, connected to Roblox, physical toys that can record stuff. They're AI-powered because they can interact with you, but then they're connected to the cloud. I'm like: why does it need to be connected to the cloud, though? It's a toy. Not everything needs to be smart. Your light bulb does not always need to be smart and connected to something that tracks you.

Giulia Trojano:[00:55:52]

I think it goes back to that idea that there seems to be a trajectory of saying, This is inevitable. Everything needs to be smart and live and powered. The more we normalise that, the more, again, we find ourselves in issues because there's the sense that, Oh, this product is already out there. We must fix it, as opposed to questioning, why does it need to exist in the first place and what else can be put forward as an alternative so that it's not the only solution.

Ben Byford:[00:56:18]

I think if anyone has Internet-connected things, IoT things, they should really seriously think about security in their home: data security from the supplier of that thing, from hackers, and from anyone walking down the street with any technical know-how. Because in the early days, this stuff had no security. It's terrible. Probably more so now. But just people in those companies viewing your stuff. There are all sorts of things going on there which are unnecessary to welcome into your house, essentially.

Ben Byford:[00:57:01]

And if anyone cares what I think about Roblox, you can just skip this bit if you want, but Roblox is rubbish. It's just a capitalist nightmare. It's all about grinding. It's not about fun or interesting game mechanics. It's about attention, and it's horrible, and you should not use it. Minecraft is probably marginally better because you get to make things. Roblox is just the pits, and your child can get approached and talked to by any old person. So there's that as well.

Ben Byford:[00:57:33]

Anyway, sidestepping all of that, thank you so much for your time. I really do wish we could just keep talking. And I know that we were trying to do this in person, so maybe we'll find time in the future to do that.

Ben Byford:[00:57:47]

So it was a real pleasure having you on.

Giulia Trojano:[00:57:50]

Likewise.

Ben Byford:[00:57:51]

Thank you very much. The last question we ask on the podcast is: what excites you and what scares you about this AI-mediated future? If you haven't already, could you give us a short and sharp answer to those things? Thank you.

Giulia Trojano:[00:58:09]

Okay, what excites me: I think there's really interesting stuff happening in resistance, so to speak. One example that I always give is trade unions becoming more important. We saw it a bit with the Hollywood strikes, and I use that as a funny example, to be honest. But the idea of trade unions and people uniting to reject being profiled by algorithms, or to not be subjected to the AI, is exciting, and hopefully it continues in its rumbles. What scares me is, unfortunately, governments, including the UK, really just drinking the Kool-Aid, with no sight as to what it means to be a resilient, sovereign state, thinking that growth equates to big tech companies maybe providing jobs, unclear, and that just being the discourse all the time, as opposed to thinking about frugal innovation, ways of being, analogue computing, and other ways that you can actually deal with big problems without being so techno-solutionist. Yeah.

Ben Byford:[00:59:19]

Wicked.

Giulia Trojano:[00:59:20]

On a positive note.

Ben Byford:[00:59:21]

Yeah, yeah, yeah. I should always phrase that the other way around, shouldn't I? It should be more like: what scares you, then what excites you, so you can end on a positive note. But thank you so much for your time.

Giulia Trojano:[00:59:32]

Not at all. Thank you. It was really nice.

Ben Byford:[00:59:34]

How do people find out about you, follow you, contact you, all that stuff?

Giulia Trojano:[00:59:39]

Well, my name has a difficult spelling, but I'm sure it will be somewhere. You can see my writing at www.giuliatrojano.com, but spelt the Italian way. So somewhere we'll find this out. Otherwise, I'm on LinkedIn. I think you'll find my writing somewhere. But yeah, always happy to receive messages and emails. That's it.

Ben Byford:[01:00:01]

Thank you very much.

Ben Byford:[01:00:08]

Hi, and welcome to the end of the podcast. Thanks again to Giulia for coming on. Hopefully soon we'll have a chance to meet in person and record another episode, maybe in a couple of months' time. Given some of the stuff in the news at the moment, I was really keen on getting this episode out, around companion AIs, emotional AIs, people interacting, becoming friends, having some emotional connection with these services, and having a discussion around that. Giulia and I were grappling with some things, and it just became apparent that we could have just kept on talking. So we'll definitely come back to this one. And hopefully, without anything catastrophically bad happening, we'll be able to grapple with some of this stuff going forward.

Ben Byford:[01:00:54]

If you're interested in this area, we do have some other episodes: episode 30, Emotional and Loving AI with Julia Mossbridge; episode 37, Social Robots with Bertram Malle; and episode 64, Emotion Detection with Andrew McStay. Again, if you'd like to support the podcast, you can write a review, or give us a subscribe or a thumbs up wherever you get your podcasts. And if you can, you can support us on Patreon at patreon.com/machineethics. Please tell your friends about us, and we'll see you next time.