Language models trained on media diets can predict public opinion
This is pretty scary if true. Imagine you would like to influence an individual's opinion in a very specific way. At present, the only way to do this is to write articles in the general vicinity of your cause. However, with an accurate forecast of public opinion, it becomes possible to tune the input content so that the predicted outcome aligns as closely as possible with the opinion you are trying to generate. That is, given a history of K previous articles, you could choose a (K+1)-th article that shifts an individual's opinion as close as possible to the target.
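To make the idea concrete, here is a minimal sketch of what such a selection step might look like. Everything in it is hypothetical: `predict_opinion` is a stand-in for a media-diet-conditioned model along the lines of the paper's setup, and "opinion" is reduced to a single probability of agreeing with some survey item.

```python
# Hypothetical sketch: pick the next article that nudges a predicted opinion
# toward a target. `predict_opinion` stands in for a media-diet-conditioned
# model; none of this code comes from the paper itself.

from typing import Callable, List


def pick_next_article(
    history: List[str],                              # the K articles already consumed
    candidates: List[str],                           # possible choices for article K+1
    target_opinion: float,                           # desired P(agree) with some survey item
    predict_opinion: Callable[[List[str]], float],   # media diet -> predicted P(agree)
) -> str:
    """Greedy one-step selection: return the candidate whose predicted
    post-exposure opinion is closest to the target."""

    def gap(article: str) -> float:
        # Predicted opinion if this article is appended to the existing diet.
        predicted = predict_opinion(history + [article])
        return abs(predicted - target_opinion)

    return min(candidates, key=gap)
```

This is only a one-step greedy choice; in practice you would iterate, appending each chosen article to the history and repeating, which is exactly why an accurate opinion-forecasting model is the dangerous ingredient here.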
We may soon be moving into a new political era with propaganda tools so powerful that most historians will be convinced that representative democracy was a historical anomaly of our technological timeline, not an “End of History” inevitability that many either implicitly or explicitly believe it to be.
To a large extent, this is already known. Propaganda is a thing because it's effective. They don't call it "programming" for nothing. Media headlines and coverage shape public opinion. Oldest secret in the book. This is actively exploited by media companies as well as intelligence agencies, both foreign and domestic.
I don't think AI is doing anything here that wasn't already there. The scary component of AI is that deep fakes will be used to shape public opinion. It could go one of two ways: either people are repeatedly duped by the fakes or they lose trust in media altogether and withdraw from it entirely, for better or worse.
> Imagine you would like to influence an individual’s opinion in a very specific way.
You make it sound like this isn't already well known, and wasn't already being done long before mass media existed. AI is just going to make it easier to pull off tricks that are difficult by last decade's standards.
But people will adjust.
In other words, anything can be “both-sided”. Great /s.