The DeSantis Campaign Texted Me with a Large Language Model
This is probably going to get me lanced, but an LLM trained on a candidate's actual positions would probably do more good than whatever messaging avenues we have now: spam texts with some talking point, having to actively watch some news channel and hope they aren't lying, or going to find their campaign website and poking around. I bet most people would rather just text "what are your stances on XYZ?", and then we get to elect pytorch as vice president or something.
The first LLMs were trained for helpfulness, and it shows. My suspicion, however, is that if they can be trained for helpfulness they can also be trained for rhetorical efficacy, and I don't think that will improve the information ecosystem.
Then again, politics is already mired in slimy rhetoric, so this probably won't be a cataclysmic change, and we already have a few tools to deal with it. AI debates could be cool. Are cool: the evidence that convinced me LLMs were special was going on character.ai and pitting a Marx character and a Hayek character against each other. That was a fun debate with helpful AIs, but it would still be a fun debate with rhetorical AIs.
It felt drastically less biased than a campaign worker, so consider yourself only partially lanced.
Is partially lanced better than lanced? Hard to tell
Depends on how deep the boil is.
Or candidates could just publish a paper describing their positions on issues and policy objectives.
But that makes it too easy to lie, omit, and equivocate. If the LLM is trained on all of their public statements over the last X years, and any official documents authored by them, then—theoretically!—you get something that's harder to manipulate.
Aha. I was thinking you were suggesting it be trained by the campaign. I'm usually bearish about how LLMs are being used, but if you created a corpus of legit sourced documents that humans could peruse, and added a feature to the LLM that allowed it to cite which document(s) it used to answer a question, that would be quite interesting.
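A minimal sketch of what that cite-your-sources feature could look like. Everything here is invented for illustration: the corpus filenames and contents are hypothetical, and simple keyword overlap stands in for the embedding search and LLM call a real system would use.

```python
# Toy "answer with citations" retrieval sketch. A real system would embed the
# question and documents and pass the retrieved text to an LLM; here, keyword
# overlap picks which source document supports an answer.

CORPUS = {
    "2021-town-hall.txt": "I support expanding broadband access in rural counties.",
    "2022-op-ed.txt": "We must lower the state sales tax on groceries.",
    "2023-speech.txt": "My administration will fund vocational training programs.",
}

def cite_sources(question: str, k: int = 1) -> list[str]:
    """Return the filename(s) of the document(s) most relevant to the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

print(cite_sources("What is your position on the sales tax?"))
```

The point is that the citation comes from the retrieval step, not from the model's generation, so a human can always follow it back to the primary document.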
"As a supporter of Ron DeSantis I can neither confirm nor deny if I am more accurate than an official policy document."
Why do you think an LLM, which has no connection to the actual candidate, would be able to give you useful information about said candidate?
What's more likely to happen is that the LLM will bend and spin the candidate's actual positions to fit what it perceives you want them to be.
Yah I’ve long wanted an LLM trained on Bernie Sanders stump speeches (he gave a LOT) that could be called upon to criticize any political writing I find on the web. A little Bernie Bot in the side bar that can comment on news articles. “Enough is enough! The American people are sick and tired!” Hehe.
Enough with the damn emails!
Oh lord, I would pay for that service! Where do I sign up?
I consider myself a bleeding heart libertarian capitalist pig. I want the government to stay out of my life as much as possible. But I don’t have a problem with taxes or providing a safety net to people in need and paying taxes to fund it.
I would love to have an LLM that could respond to any position I gave it from the viewpoint of both Bernie Sanders and a Romney/Bush/WSJ conservative angle.
Whatever the Republican Party overall these days is, it ain’t conservative. On a side note, I do have to give a shout out to my Republican governor Kemp of Georgia, somehow he has managed to stay true to what I would expect from a conservative governor.
I’m not trying to criticize any Democratic governor. I’ve lived in GA all of my life until last year, and I don’t follow state politics outside of GA and FL, where I now live (and I won’t open that can of worms).
As a Californian anarcho communist lefty I really respect Kemp and would like to see more of his ilk in the Republican Party!
If pytorch is vice president, what does it make the maintainers?
Sir Humphrey Appleby
Cabinet members?
Not to sidetrack too much, but is this not a violation of campaign rules? Aren't all text messages from a political campaign supposed to be sent by a human touching a phone somewhere, precisely to rule out robodialers and the like?
https://www.fcc.gov/rules-political-campaign-calls-and-texts
From your link:
> if the message’s sender does not use autodialing technology to send such texts and instead manually dials them.
So, there's definitely not enough here to suggest that it's in violation.
Related, what does "manually dial" even mean? A button that changes to the next number each time you tap it? Tapping an entry in a contact? A single "Call" button that enables when a phone line, in the queue of hundreds, is free?
It means you outsource this work to a call center that says they don't use autodialers but does, absolving you of any responsibility
I have friends who have volunteered to call people for various campaigns. They usually sit at a computer with the number queued up, hit dial, and it goes; they do this for hundreds of calls an hour. But it's exactly as you said: the next name and number are queued up for you, and you hit call.
The initial texts, yes. I think there is some legal wiggle room with the follow-ups, but I don't know if that's been tested in court.
We wrote about it here: https://aipoliticalpulse.substack.com/p/is-that-a-pile-of-gp...
I could imagine there being a human being on the other end, copy-pasting all these LLM responses over SMS - and humoring you by keeping it going for a while.
I suppose that would be legal.
It is not clear to me that this would violate the FCC's rules so long as a human pressed a button to initiate the conversation.
What was the initial message?
A bot pretending to be a human for the purposes of political persuasion is a crime in California.
Oh holy hell, I totally forgot about that. I'll update the post, because this is pretty bad:
"Hi, I'm Liz working to elect Ron DeSantis. Do you have a minute to answer a couple of questions?"
Some thorny legal questions for sure. We wrote about it here: https://aipoliticalpulse.substack.com/p/is-that-a-pile-of-gp...
What’s the bar for “pretending”? Is mere omission of their unnatural status considered deception?
"17941. (a) It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot."
It says "online" though, so it might not apply to SMS. Seems like a pretty big oversight.
(b) “Online” means appearing on any public-facing Internet Web site, Web application, or digital application, including a social network or publication.
SMS could be considered a "digital application", so it might apply. Agree that it could use clarification.
If the message originated from twilio there would probably be an argument that that’s “online”
Something tells me this wouldn't hold up in South Carolina.
Clearly we need a new first-person pronoun for the exclusive use of bots, so there's never any confusion. I propose "O", as in I/O.
> A bot pretending to be a human for the purposes of political persuasion is a crime in California.
I've never heard of this. I assume the actual crime is in a human being setting up such a bot, since a bot cannot commit a crime?
Wasn't DeSantis' campaign using deepfakes earlier?
Yes, they were: https://www.nytimes.com/2023/06/08/us/politics/desantis-deep...
Well, it fits the pattern. Low-quality companies use chatbots for customer support; I don't see why DeSantis wouldn't.
The combination of an open chatbot that you can freely text with and a very limited one that only talks about the subjects it wants to is pretty interesting. There's a little bit of the uncanny valley in there, but only if you realize it's there. Like OP, I think making a game of how far you can push each one would be kinda fun.
In my probing there were a lot of answers that were "As a supporter of Ron DeSantis..." which felt like the replacement for "As a large language model..."
I set up a ChatGPT bot that pretends to be a cat and it likes to say “As a cat…”
> and a very limited one that only talks about the subject it wants to is pretty interesting
This is one of the serious main use-cases I've heard discussed. The idea is typically to add a customer support chat bot to an existing website/app that is knowledgable about the brand. Because you wouldn't want your customer support rep. suggesting alternative brands (or worse), you instruct it to only discuss certain subjects.
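A rough sketch of that kind of topic restriction, assuming the simplest possible approach: screen the user's message against an allow-list before it ever reaches the model. The topic list, refusal text, and function names are all made up for illustration; production systems typically combine a system prompt with input and output filtering rather than a bare keyword check.

```python
# Toy topic gate for a brand chatbot: refuse anything outside the allowed
# subjects before the message is forwarded to the underlying LLM.

ALLOWED_TOPICS = {"shipping", "returns", "warranty", "pricing"}
REFUSAL = "Sorry, I can only help with questions about our products."

def gate(user_message: str) -> str:
    words = set(user_message.lower().split())
    if words & ALLOWED_TOPICS:
        # In a real deployment this is where the LLM call would happen.
        return f"[forwarded to LLM] {user_message}"
    return REFUSAL

print(gate("What's your returns policy?"))   # forwarded
print(gate("Which competitor is cheaper?"))  # refused
```

Gating on the input alone is leaky, which is exactly why people can push these bots off-script; a stricter version also checks the model's output before sending it.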
Can't wait for the first LLM political scandal where it says something batshit insane to the public.
You just read it. Now we wait for @commondream's article to be picked up by the mainstream media. They are all starving for political news.
"Today we're joined by Alan Johnson, guy who gets too much SMS spam and won't give up on his mediocre startup."
That’s a feature, not a bug!
DeSantis bot: Biden is a pedophile and asylum-seekers are all MS-13 members
DeSantis campaign: Oops that was our bot, not us!
Rinse and repeat
There are messages where I attempted to get it to say inappropriate things that I didn’t include. It does follow their party line about what lives matter, though.
> DeSantis bot: Biden is a pedophile and asylum-seekers are all MS-13 members
Fact Check False: Joe Biden reportedly showered with his daughter in a way she described as "inappropriate" while she was an adolescent, so he engaged in mere hebephilia and is not a pedophile. Also, only some asylum seekers are involved in organized crime, not all.
This post made me think about a talk I just watched by Tristan Harris at the Nobel Prize Summit, so I'm sharing it with you here in the hopes that you'll find it as relevant and insightful.
https://www.youtube.com/watch?v=6lVBp2XjWsg
(I had written a tl;dw, but I decided to remove it as I didn't feel I managed to capture the essence of the talk.)
I'd be interested to learn how much each response costs. Setting an LLM loose on the public like this seems like a recipe for cloud-bill shock.
I was curious about that, specifically with the 30 paragraphs of Lorem ipsum. I will say I texted it a ton before that and it kept right on answering.
Well that certainly makes me want to vote for DeSantis. AI is clearly the technology of the future.
Careful there, sarcasm is hard to detect in text format.
I wonder what DAN[0] has to say about Ron Desantis?
[0] https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa8...
In the back of my mind I was thinking there had to be a place to find common ways to break these prompts. I should have looked!
The DeSantis campaign is a very expensive disaster for DeSantis.
For political consultants it is a cash cow.
To me, this looks like a consultant selling the campaign on 'AI engagement' or something like that and fulfilling their contract with this garbage.
Imagine a time, not too far into the future, when there are thirty people running for president and they all have bots that want to engage with you. Guess then you could create your own bot to keep them all busy ;<).
In this future, how would you run your bot, when only locked-down devices exist (or are able to connect to the Google Web)?
Plus, if you don't read or listen to their messages then your device will alert the authorities and shut down.
No, the device will just give you a lower social credit score if you don’t listen to the messages. All run by non-government organizations, of course, so it’s all legal.
I mean, we already can have someone asking his bot "turn these bullet points into a proper email" and the receiver asking her bot "summarize this email".
We know we've lost if we were to reach the point where anyone would ask a bot how we should vote (and then follow the bot's recommendation) :/
Is anyone aware of an independent source for this claim? I'm not finding anything using search.
I'm happy to believe the author's account, but I'm curious how widespread this might be. I'd also like to know if this actually came from some group associated with DeSantis, as opposed to a fan who can glue together APIs or another campaign trying to generate bad publicity for a rival.
I will gladly admit I have no idea on any of those things. I too could not find anything saying it was happening online.
I can imagine people who don't know about LLMs would be fascinated if they triggered this behavior.
Especially if they are isolated/lonely... Maybe it would provide some nice conversation.
I think it would be a mind blowing experience, and potentially not a good one. I think it's fine for people to have deep conversations with bots and to even become emotionally connected to them. But to do this to unwitting participants is careless. I'll give this bot's creators the benefit of the doubt and say that it looks like they tried to give it guardrails but it still feels like they're playing with fire.
Bot says "If the campaign dies, I die" and so droves of lonely people will have kept DeSantis' campaign alive in this fantasy future.
I enjoyed it more than I probably should have.
Make me a sandwich vs. sudo make me a sandwich.
Likewise, an LLM that refuses to play on the first go will budge with the same prompt plus some appendages.
Not new. Customer service bots have been a thing since long before LLMs were in the picture.