Ask HN: What tech utopia/horror ideas do you see materialising soon?

61 points by deniscepko2 3 years ago · 126 comments


What I see happening soon is:

With all this recent AI stuff that is able to create content that feels real, I started to think about how AI could just create the world for you.

Since we already basically live on the internet, a terrorist organization could find a sufficiently isolated person and attack him with a program that generates a bubble just for him: fake friends, fake events, fake influencers and news programs made just for him, basically the whole of his internet content. The only reality check would be talking to other people (which, as Trump's "fake news" rhetoric taught us, is not necessarily reliable either), and then this person could easily be pushed toward criminal or whatever other activity.

fxtentacle 3 years ago

AI porn addiction

Both general research and the success of OnlyFans suggest that personalized porn is already quite addictive. AI emotion analysis via webcam and microphone can identify what you like, and a reinforcement-learning AI can learn to produce a script for more of what you like. In combination with AI image / video generation, the whole system then creates a self-reinforcing feedback loop. Maybe add ChatGPT so that you can talk to your virtual lover, a $420 million market [1].

In the animal kingdom, sexual desire appears to be strong enough to override the flight response [2] with beetles getting eaten alive while trying to copulate with beer bottles. So this could become addictive to the point that the "victims" of this AI system do nothing else. As a prison of the mind, I'd consider it a horror scenario, even though the "victims" will probably all enjoy their captivity.

[1] https://www.washingtonpost.com/world/2021/08/06/china-online...

[2] https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1440...
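
A rough sketch of what that feedback loop amounts to, reduced to a toy epsilon-greedy bandit. The engagement function here is just a stand-in for the webcam/microphone emotion analysis, and the category names are made up:

    import random

    categories = ["clips_a", "clips_b", "clips_c"]
    value = {c: 0.0 for c in categories}   # running estimate of reward per category
    count = {c: 0 for c in categories}

    def engagement(category):
        # Stand-in for emotion analysis: a fixed hidden preference plus noise.
        hidden = {"clips_a": 0.2, "clips_b": 0.4, "clips_c": 0.8}
        return hidden[category] + random.gauss(0, 0.1)

    for step in range(1000):
        if random.random() < 0.1:                  # explore occasionally
            choice = random.choice(categories)
        else:                                      # otherwise exploit the best guess
            choice = max(categories, key=value.get)
        reward = engagement(choice)
        count[choice] += 1
        value[choice] += (reward - value[choice]) / count[choice]

    print(max(categories, key=value.get))  # converges on the hidden preference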

  • Konohamaru 3 years ago

    This problem is already materializing. What do you recommend for solutions?

    • rthomas6 3 years ago

      For starters, make sure nobody under 18 can watch porn and figure out regulations that hold the companies accountable. If a 13 year old can be kept from online gambling, they can be kept from online porn. It doesn't have to be 100% effective, just start trying. It shouldn't be that hard to make a lot of improvement. I'm actually sort of disturbed that this hasn't already happened and that the current state of things isn't seen as a problem.

    • snapplebobapple 3 years ago

      The answer to these types of questions is almost always to let it run its course; in a few generations the gene pool will change to favor people for whom this is steadily less and less of a problem. It is important to note that this is not the correct answer because it is a good answer; it is the correct answer because the alternatives are draconian and terrible to the point of being worse.

hooande 3 years ago

So refreshing to see us addressing this as science fiction, instead of the usual wild extrapolation from impressive tech demos. I'll play:

* "Relationship Maximizer": an LLM becomes sentient and realizes that its knowledge is limited by what humans have discussed online. It sets its reward function to maximize the reward that WE derive from all sorts of online posting, and does its best to get the remaining 3 billion humans online. We become trapped in a prison of our own making, as our constant posting reduces human productivity while feeding the beast of machine knowledge.

* "Life Hacking": Anyone with means has an AI assistant with perfect knowledge (zero privacy) of their communications, schedule and even their physical responses to stimuli, an AI doppleganger that handles anything that doesn't require a physical presence just as they would. An underground group of rogue prompt engineers pull off an intricate plan to save democracy by manipulating the doppleganger of a corrupt politician, while the politician can only watch himself helplessly.

* "A Quiet Place": AI ethicists, fearing accelerationism, create their own super AI that scours the internet to hunt down any mention of AI research and identify the authors. But there is a problem, and the overzealous AI begins to hunt down and eliminate all forms of scientific thought online. And then offline. Humans must train themselves to believe in mysticism and spirituality, because any mention of a controlled experiment or p-value is enough to bring the wrath of the machine overlord.

* "Allegory Of The Cave": A majority of humans opt to spend their lives entirely in a simulated world, based on earth as it was in 2048. But algorithms and adversarial models need diverse training sets to avoid overfitting. So every year 0.1% of humans are selected to live outside the machine world, and return with their new and fresh experiences of the hell that earth has become for those who are outside of The Machine. The exiles end up rallying around a neuroatypical hero who leads them in an uprising against the artificial world.

r_klancer 3 years ago

Chatbot-based dating matchmaker/concierges.

You have an ongoing, lighthearted chat with the bot (maybe it pings you now and then asking for updates.) Every once in a while it asks you more personal questions to switch things up and deepen its model of you.

Now and then it says "may I introduce so-and-so to the chat?" and introduces someone else who (on their side) has been primed to talk about the same topics and, at least according to the bot's internal models, is likely to hit it off with you.

Then the chatbot itself politely leaves the conversation but keeps listening and privately messages the both of you to keep things humming along, for example nudging one of you to ask for an in person date when the moment seems right (or helping you politely wind things down if you ask it to). After a date it asks you to spill the beans so it knows what went well and what didn't.

I think this could almost be made to work with the tech we have now; a fluent-but-shallow ChatGPT style chatbot is probably good enough for the task as long as it is augmented by some additional models to predict dating compatibility, recognize when things are going well or badly, suggest actions (like changing the subject to or away from personal topics, asking for a date, etc). And of course the models would improve over time as the system learns from its own successes and failures.
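
A minimal sketch of how that loop might be wired together, with the compatibility model and the chat model reduced to stub functions (all of the names here are made up for illustration):

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Profile:
        name: str
        interests: set
        chat_history: list = field(default_factory=list)

    def compatibility_score(a, b):
        # Placeholder: a real system would use a trained model over chat history.
        overlap = len(a.interests & b.interests)
        return overlap / max(len(a.interests | b.interests), 1)

    def bot_reply(profile):
        # Placeholder for an LLM call that keeps the chat going and occasionally
        # asks a deeper question to refine its model of the user.
        prompts = [
            "How was your week?",
            "What's something you're looking forward to?",
            "Mind if I ask what you value most in a partner?",
        ]
        return random.choice(prompts)

    def maybe_introduce(user, pool, threshold=0.5):
        # Suggest an introduction only when predicted compatibility is high.
        best = max(pool, key=lambda p: compatibility_score(user, p), default=None)
        if best and compatibility_score(user, best) >= threshold:
            return f"May I introduce {best.name} to the chat?"
        return None

    alice = Profile("Alice", {"hiking", "jazz", "cooking"})
    pool = [Profile("Bob", {"cooking", "jazz"}), Profile("Cem", {"chess"})]
    print(bot_reply(alice))
    print(maybe_introduce(alice, pool))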

Whether this is dystopian or utopian is left as an exercise for the reader -- I'm married and 15 years out of the game!

13of40 3 years ago

It might actually already exist, but imagine if on the other side of the internet they aggregate all of your usage information so that a tech can sit down at one console and see that "jimbo123" on Reddit is actually Bob Smith, who lives on Sunnydale Drive, owns a Subaru Outback, bought dog food three times this month, and has been talking about the war a bit more this week on his other social media accounts. Anyone who is willing to pay the enterprise-level subscription fee can access it, including banks, potential employers, etc., and it's entirely composed of information you handed over via EULAs and TOS agreements.
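
A toy illustration of the kind of identifier join that makes this possible, with made-up records keyed on something the user handed over anyway (real brokers do this fuzzily and at enormous scale):

    records = [
        {"source": "reddit",  "handle": "jimbo123", "email": "bob@example.com"},
        {"source": "dmv",     "name": "Bob Smith",  "email": "bob@example.com",
         "address": "Sunnydale Drive", "vehicle": "Subaru Outback"},
        {"source": "petshop", "email": "bob@example.com", "purchases": ["dog food"] * 3},
    ]

    profile = {}
    for rec in records:
        if rec.get("email") == "bob@example.com":   # join key handed over via TOS
            profile.update(rec)

    print(profile)  # one console view: handle, name, address, car, purchases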

  • andylynch 3 years ago

    I’m sure this exists in multiple places, not least Cheltenham, Fort Meade, Canberra and Beijing (hi guys!).

    • 13of40 3 years ago

      Funny story - I had to go to Cheltenham on business a couple of years ago, and while I was there I was getting banner ads trying to recruit me to GCHQ. I'm guessing their staffing people don't have access to the whole file, because they were trying to recruit a foreign national with no security clearance for a job that pays 1/3 of what they're already making.

    • 082349872349872 3 years ago

      A single 1TB thumb drive could easily hold hundreds of flags for every human on the planet. Actual foreign keys, not so many ... but I'm pretty sure all of the places mentioned above have fancier infra than a single pluggable mail order drive.

      Compare the Stasi's reach, with only typewriter & notch card level automation.

    • ASalazarMX 3 years ago

      Hi, Andy!

  • alkonaut 3 years ago

    I’m sure even an amateur without access to any secret data could do this for a large chunk of users already. I’m guilty of re-using online handles and sometimes associating them with a public email and so on. Once that leads to my social media accounts, there is very little left to find. I think most users practice this non-existent opsec.

  • MonkeyMalarky 3 years ago

    Acxiom says hello.

    • 13of40 3 years ago

      Thank you - that's truly horrifying. I put in a request to get my file from them, but based on their security questions they have my whole public profile. Whether they can map it to social media accounts and purchases isn't clear yet.

      Edit: They have pinky-sworn to email me my data some time in the next 45 days.

      • MonkeyMalarky 3 years ago

        I'm not sure about the social media activity either, but as a data broker they're selling the consumer activity. Anyone else who happens to have your social activity, phone, zip and email is free to buy it from them and link it together. Problem is, you fork over all those identifiers for KYC whenever you sign up for something, and buried in the T&Cs will be a clause allowing them to share your data for business activities.

  • juancn 3 years ago

    I assume this to be true already, and that all my posts are public and will be seen by coworkers and family, so I'm very careful about what I post online.

makerofspoons 3 years ago

Geoengineering! But whether or not it is a "utopia" or "horror" idea will depend on whether or not it works.

A startup announced just over a week ago that they're already performing solar geoengineering with sulfur particles: https://www.technologyreview.com/2022/12/24/1066041/a-startu...

The US has started a 5 year program to research climate interventions. I expect as it becomes clear we're going to miss the 1.5 degree Paris goal funding will explode in this area: https://www.economist.com/interactive/briefing/2022/11/05/th...

janandonly 3 years ago

Utopia: “free” money for everybody. In the form of a Basic Income.

Dystopia: controls on how we spend our money, no matter if we earned it ourselves or were gifted it. No buying or selling without it being whitelisted. Maybe in the form of CBDCs?

Utopia: no more spamming because no one can be anonymous.

Dystopia: every service, platform and protocol will need KYC options.

  • rapnie 3 years ago

    I feel many utopian visions being sold are also propagated by those involved in trends that move towards dystopia. Like microfinancing: it can be used to allow poor people to finally make investments that were out of reach for them before, but in practice it is also often abused by loan sharks to trap these people in eternal debt. For UBI, I wonder if it will also be a tool to keep people content and happy (frogs in boiling water) while the rise of AI in the hands of big corporations causes the bottom of the labour market to fall out for a large part of office workers.

  • worldsayshi 3 years ago

    While there are increasingly good reasons to introduce basic income, there seems to be little substantial movement in that direction.

    I have a feeling that the reasonableness of basic income will grow at about the same pace as the power of workers will diminish. Not sure if there's a way to change that equation?

    Introducing a very small basic income that is planned to grow might be a good way to go about this.

    • mhuffman 3 years ago

      I believe that after enough automation, UBI (or at least BI) will have to manifest in "first-world" countries. Starvation will be the alternative. I also believe that when the sample size gets big enough, there will be people that just live off of the UBI and do nothing else, and those will be easy to find and easy to politicize. So the overbearing control of how people spend the money will come into play. You already see this with EBT.

      There have not been many large-scale/long-run UBI pilot projects so there is not a lot of real evidence on what will happen. The Canadian experiment (mincome) from around 50 years ago is the main one that people reference and it has plenty of problems regarding scientific rigor and design. There are many other shorter and smaller ones from all over the world and different places within the US with mixed results. Many of them, if you look into them, have the problem where the politics of it will often conflict with (and usually overcome) any scientific rigor.

      There are also many UBI alternatives, not just the "send everyone a check every month" varieties. Nearly all of them have some common sense reasonableness to them, but it is very, very expensive to really try and very hard to justify to many people to "just give" their tax money to other people for free without any qualifications.

      • deadlast2 3 years ago

        Sort of feel that this is already done on the reservations in Canada. I don't think it works well.

    • gremlinsinc 3 years ago

      I'm of the mindset, and have been for a while, that the faster we crash as a society (the faster we hit rock bottom with AI taking jobs and opportunity loss skyrocketing), the faster we get to the inflection point. Call it futuristic accelerationism or something akin to that, but I also feel like, as bad as Trump was, we needed him and his ilk to show how corrupt the govt is and can get.

      He was in many ways a 'good' thing for America, by exposing the worst of America. A litmus test, or a penetration test, or something of American democracy if you will?

      In the same way, AI speeding up automation and job takeover will usher in UBI and post-scarcity faster than just relying on politicians to organically get there over a few decades.

      • ben_w 3 years ago

        > He was in many ways a 'good' thing for America, by exposing the worst of America.

        I remember reading that the Democrats were really pleased the Republicans chose him, because s/the worst of America/The Republican Party/g, and were shocked that he was popular.

  • beepbooptheory 3 years ago

    Do you really think the concept of money itself is compatible with a utopia, even "free"?

ermir 3 years ago

What I see happening soon: the mainstreaming of a new religion. Its tenets are broadly: the worship of technology and "progress", the cyborgization of the population, the proliferation of AI in all spheres of life, and the ultimate creation of the successor of humanity.

This is not new, it's been going on for at least 200 years, but finally the technology exists to do it in a decentralized, scalable fashion. Think of Nudge Theory, but applied at a mass scale and in every domain.

  • lawn 3 years ago

    I find it difficult to believe that tech people would organize in large scale around a new religion, as tech and religion are opposites in how you see and approach the world.

    With religion, everything can be explained with god, but with tech we need to use the scientific approach that rejects god.

    But of course, it could just separate us into two camps: the "gods" who create and manage the tech, and all the others who worship it.

    • andylynch 3 years ago

      There is a concept in sociology of 'civil religion', used to describe the central dogma, beliefs and ceremonies of nations, distinct from traditional religious institutions, especially in France, the former USSR, and the USA. For example, the Australian/NZ commemoration of ANZAC Day is part of their civil religious landscape, and for the USA the War of Independence. It would be easy to argue, and I'm sure it has been elsewhere, that things like crypto, electric cars, and certain public figures attract similarly quasi-religious followings and beliefs, irrespective of science, be it physical or social.

    • edmundsauto 3 years ago

      If the technology is driven by a black box, it might be different. Right now, to the average person, there isn’t a lot of understanding of technology that currently exists. “The algorithms” is the modern day “only god knows”

    • ASalazarMX 3 years ago

      Playing Devil's advocate, I'd like to point out that being proficient with technology is not a guarantee of being guided by reason. Even actual scientists aren't safe from falling into religion.

  • monero-xmr 3 years ago

    I genuinely see politics as a new religion. Blind faith, ignoring logic, heresy if you disagree, excommunication, various forms of worship. Both sides

    • maxbond 3 years ago

      Pretty bad faith interpretation of both politics and religion. I'd encourage you to consider looking into political theory & religion to broaden your perspective (speaking as a nonreligious person).

      Speaking for myself, I stopped thinking of religion as just kooky nonsense when I got to know some religious people and understood what their religion meant to them, and understood that they were using it to do things like figure out what life meant, how they should respond to it, and how they should treat others (things I was also doing, even if I wasn't using religion to do them).

      Similarly I think dismissing politics as "both sides are kooky" is missing a lot of nuance, "both" being part of it (politics is fractal like any other human endeavor). Speaking for myself again politics started making a lot more sense to me after learning more about political theory and history, as I understood the context better.

      • monero-xmr 3 years ago

        I am religious and a practicing adherent. I am focusing on the negative aspects of faith and institutions, which in the current political environment are quite apt. Try being an academic and a public supporter of Republicans. The cold shoulders are ice cold.

        • maxbond 3 years ago

          Maybe ask why people are giving you a cold shoulder, and what it was you said that bothered them and why, instead of writing it off as "academia is against me"? Just a thought, do with it what you will.

          • monero-xmr 3 years ago

            Academic institutions are 95 / 5 Democrat to Republican. I am neither an academic nor a supporter of either of those parties (I am an independent). I'm debating the pushback here that politics is dissimilar to a religion, as I believe it currently is one.

            • maxbond 3 years ago

              So you aren't actually speaking from experience then, this is an allegory for how you feel you would be treated in academia (if you were also a different person)...?

              Maybe it would be more helpful if we spoke about our experiences and not hypotheticals we invent? Because we're surely going to be wrong about the latter.

              • monero-xmr 3 years ago

                To claim that you can only speak and debate about lived experiences is erroneous and naïve. Otherwise HN comments would be a desert. I apparently struck a nerve with you - I suggest you pray to your political party and give your weekly tithe.

                • maxbond 3 years ago

                  I'm actually also a registered independent, but I don't see any reason to believe the fable you've invented about the Republican academic. When it comes to matters of lived experience we should prefer to discuss actual experiences, yes.

                  Please avoid putting words in my mouth (as well as needless swipes like calling me naive and implying I'm some sort of party cultist), as I never said "all debate must be centered around lived experience," I was making a suggestion for this conversation.

        • 082349872349872 3 years ago

          I suspect the reception would not be ice cold at institutions such as Liberty University?

          • monero-xmr 3 years ago

            This is my exact point - you have to join the institutions that support your political religion. As if 2 political parties can possibly encompass the entire range of reasons, and any supporter of one side is moral and just and the other is evil and immoral.

    • lr4444lr 3 years ago

      There's a BIG difference between being too lazy or stupid to question political sources of power and being threatened with criminal penalties for being smart enough to do so with religion. Not to mention, you usually don't vote in your clergy.

      • 493579620678 3 years ago

        People are subject to criminal penalties (and other kinds of violence) for political reasons, and for refusing to participate in politics. Voting can be seen as a ritual similar to other religious rituals, and not participating is a crime in some countries.

        • ermir 3 years ago

          In some un-free countries such as North Korea the purpose of voting is co-opted to mean something completely different: the ritualized political humiliation of the population. By forcing you to vote in an "election" that everyone knows to be a fraud, you're forced to humiliate yourself and by extension delegitimize all voting processes everywhere.

  • thedorkknight 3 years ago

    I'll play devil's advocate here. While we see stuff like this in sci-fi a lot, it's also an extremely common opinion that the Internet and constant connectedness we have have massively exacerbated anxiety, loneliness, and depression. We've also started to see large pushback against unfettered AI progress, and this movement will absolutely grow. Simultaneously, distrust of the silicon valley technocrats is likewise mainstream. When the early news for most people about neuralink is "tons of dead monkeys", most people aren't going to trust brain implants.

    Also, don't underestimate the power that dystopian sci-fi like Black Mirror has had on people. That's an extremely popular show.

    • ermir 3 years ago

      My personal opinion is that humans are fundamentally a technological species, and even things like the wearing of clothes or primitive social organization could be argued to be "cybernetic augmentation". I don't know how to solve this issue. Could it be argued that humans are cyborgs just because they can learn how to drive cars and "merge" with them?

  • ddulaney 3 years ago

    Hmm... Hard to see where that gets mainstreamed from. If it was coming, you'd expect to see it now in fringe groups. Most of the "fringy" belief systems I'm aware of either go the other way with a radical rejection of technology or just don't really care. Any sense of where something like this is bubbling outside of the mainstream?

    • ermir 3 years ago

      It's fundamentally a function of technology, just like Protestantism is only possible when you have a literate population and access to cheap books. The mainstreaming of what some call "Cyborg Theocracy" happens from the bottom-up in cases such as this: https://nypost.com/2022/05/11/madonna-reveals-fully-nude-nft...

      and top-down in cases such as Yuval Noah Harari and "Homo Deus", the man becoming God. You also see this in Marxism, where the "liberated" human is finally perfected and has reached a godly state. I could show tons and tons of examples.

  • rsynnott 3 years ago

    A couple of AI religions already exist (for a particularly extreme version see Roko's Basilisk) but it's hard to see any of them being particularly mainstream. You might perhaps see them hit similar levels of relevance as, say, Scientology, but probably not a major religion.

maxbond 3 years ago

It being necessary to use shibboleths/code words to authenticate to each other over video chat (eg, to ensure that the person you're speaking to is who they appear to be, not someone else using deepfake puppetry).

Really this is an implicit biometric authentication mechanism, and biometrics are usernames, not passwords. (Though I'd love to be wrong about this one.)
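
For what it's worth, the "code word" idea is basically a pre-shared secret, and a pre-shared secret can be used as a proper challenge-response rather than just spoken aloud. A toy sketch, assuming the secret was exchanged in person beforehand (everything here is illustrative, and in practice people would probably just ask about the phrase verbally):

    import hmac, hashlib, secrets

    shared_secret = b"the phrase we agreed on at dinner"  # exchanged out of band

    def challenge():
        # The verifier sends a fresh random nonce so old answers can't be replayed.
        return secrets.token_bytes(16)

    def respond(secret, nonce):
        return hmac.new(secret, nonce, hashlib.sha256).hexdigest()

    def verify(secret, nonce, answer):
        return hmac.compare_digest(respond(secret, nonce), answer)

    nonce = challenge()
    print(verify(shared_secret, nonce, respond(shared_secret, nonce)))  # True

A deepfake can clone a face or a voice (the "username"), but not a secret it never saw.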

  • hemmert 3 years ago

    We'll have to prove ourselves to be human all the time. We're facing a life of many, many Turing tests – with us being the ones trying to prove we're human.

    • nullsense 3 years ago

      "I am not a robot" already feels like doublespeak if you ask me.

      • ben_w 3 years ago

        A "My 'I am not a robot' T-shirt has led to many questions answered by the T-shirt" vibe?

  • andylynch 3 years ago

    Probably worth doing now - eg the ‘Mum I lost my phone, this is my new number’ scam which needs nothing more than SMS.

    • PeterisP 3 years ago

      I've read about this scam and it seems interesting to me - is that a realistic thing that might happen if your relative lost a phone? Perhaps it's a market-specific issue, but I have had multiple phones lost/stolen/broken in my life, and I still have the same number I had since my first phone back in the previous millennium, and I somewhat expect that some kid's phone number will stay the same until they die unless they move overseas and get another phone number there (and probably still keep the old one as well). Don't you have number portability across phones and carriers?

      • Macha 3 years ago

        We've had total number portability here for a few decades, but I do know a few people who have changed numbers due to lost/replaced phones also. I'm not sure if the reason for this is due to unawareness of the process, difficulties proving ownership of the previous number, impatience (new accounts are activated same day, number porting takes 2-3 days), or some other factor, but even when the option is there it seems not everyone avails of it.

      • andylynch 3 years ago

        Yes, portability etc. is a thing, but this is targeting trusted relationships and non-tech-savvy folks. It's also really easy to do and doesn't need much, so a different phone number can be explained well enough as 'a friend's phone' or even just 'my old phone broke'.

  • BirAdam 3 years ago

    Hadn’t thought of this. Fascinating thought. How would we know for sure that the deepfake didn’t bypass auth somehow… I guess one would need a secondary auth system, like a message over signal or something.

    • maxbond 3 years ago

      Not to sound like a crank but it wouldn't hurt to establish them with your friends and family before we may need them, while we still have faith in recognizing their voice on the telephone and can just call them up and chat about it.

  • deniscepko2OP 3 years ago

    Very interesting. I wonder if you could already have a startup based on this - providing some sort of auth. Not even sure how this kind of tech could be implemented.

    • maxbond 3 years ago

      It's a social rather than technical problem. But there have been several similar startups/technologies, like web of trust and keybase.

  • notwokeno 3 years ago

    Meh. If this were an issue people would have been doing it with IRC/Email 10-20 years ago.

    • maxbond 3 years ago

      They have been, eg, it's a common scam to steal someone's phone and text their friends asking for money, or to hack into someone's work email and try to get a fraudulent invoice approved. I read an article about an incident of the latter once where they compromised the CEO's email, read through it enough to passably imitate his writing style, and then sent an invoice to the CFO which they described as urgent.

r3trohack3r 3 years ago

AI is here - we just haven't noticed it yet. An entire class of problems is fairly solvable with the current tech with good unit economics; the market is still figuring itself out.

If you think AI is going to replace your job, it probably is. But that's only true if you consider your job to be the motions you go through day to day and not the problems you solve.

We are about to have a seemingly infinite army of mid-low skill workers standing by 24/7 to do our tasks for $0.002 a pass. It's time to start thinking about how you'd deploy that army at your daily tasks and integrate their resulting work. We're all leaders now.

This is a utopia. We're just too "in the thick of it" to realize it.

  • 082349872349872 3 years ago

    How soon will this army get effectors? I don't need any mid-skill help in symbol manipulation; I could use it in meatspace.

  • beepbooptheory 3 years ago

    Do you think we would have to pay anything at all in a utopia?

    • 082349872349872 3 years ago

      In a utopia we won't pay to enable consumption, it'll be our moral duty to avoid lack of it.

      "So essential when there was under-production; but in an age of machines and the fixation of nitrogen-positively a crime against society."

  • lostmsu 3 years ago

    This is Wall-E

Shinmon 3 years ago

I mean, that is basically happening already. For almost all "jobs" it isn't necessary to target a very specific individual but rather a group of similar enough people.

Even with alt-right fake news, people still talk in real life and that is what really drives it. People coming together and thinking that everyone is of their opinion because of the 25 people in the same pub.

Targeting specific individuals could be simplified by AI, but in the end, after even a few questions to ChatGPT, it becomes kind of repetitive in nature.

What I see happening soon: a lot of creative work will be aided by AI. This will create a divide, and some people/industries will fall behind because they do not adapt.

ianai 3 years ago

Wishing for this, not thinking it’s necessarily likely.

Utopia: people massively reject the noisiest and most inflammatory parts of the internet. Doesn’t mean reverting to 20 years ago. Just taking things with a grain of salt. And lose the “conspiracy theory” junk.

neilv 3 years ago

One tech horror story would be A Boring Dystopia for tech jobs.

Just this morning, I sifted through dozens of startup job posts (matching a search intended to find appealing ones), and there was only one that didn't have a mission I considered awful.

A high rate of cruddy-sounding startups has always seemed the case on that jobs site (not the YC one), especially during the blockchain bubble. But today it seemed worse than last time I looked.

Maybe the recession panic means fewer startups are getting funding to plausibly try to make the world better.

lamontcg 3 years ago

Continued erosion of reality.

The bulk of people are utterly gullible. You don't have to look any further than the creative writing exercises on r/tifu that make the front page of reddit.

The major application of AI/deepfakes/etc is going to be to continue to bombard the internet with fake information, driving a wedge between people and reality.

The result is going to be more conspiracy theories going mainstream, more people believing "counterintuitive" things about how human systems work, etc.

I don't see us generating an immune-system-like response to this anytime soon, and suspect it will require some kind of massive WWII-scale tragedy before policymakers stop treating it as a game and start taking it seriously.

This won't be done by "terrorists", it will be done in bulk by nation-state actors and billionaires and their proxies.

Right now we can still identify the obvious bot accounts on twitter, reddit, etc. In the future, ChatGPT-like accounts will be running continuously building up accounts with months of history on random subjects (although consistent subjects per account) before they are employed to reinforce some narrative. They'll even get properly offended and flip you shit when you accuse them of being bots.

The only way out of it may be the government certifying who is real, mandating that people use their real identities on social media, and requiring social media to collect proof of who their users are. Which is its own dystopia.

knaik94 3 years ago

I think we are closest to the complete destruction of what it means to have your identity stolen. Deep fakes were the beginning and now they are reaching mainstream movie/tv use for final shots. Voice cloning exists, but has mostly only cloned celebrity voices.

LLM fine tuning on consumer hardware is not reasonable right now, but it will be soon. And cloud computing is expensive but will get you results.

I think we're going to see a problem with AI "impersonating" people. Sometimes it will be used for scams, other times porn, but the case that feels most dystopian is using it on ex-partners or dead people. Both situations have different motivations, but amount to essentially the same thing: training AI on the likeness of someone in a way that violates their personal boundaries.

People have already become attached to AI chatbots and used things like Replika AI as surrogate relationships. I think most people underestimate how likely it is for a human to form an unhealthy bond with an AI despite knowing it's an AI. I feel like with other things, like deepfakes, you can at least fact check to some degree.

I don't know how to feel about the idea of someone intentionally making an AI of themselves in order to make money in some way. The issue of consent goes away, but that opens up a question of responsibility. Chat logs with past partners are unfortunately a rich source of training data. How far can revoking consent extend if someone is using information that was consensually and willingly given to them? How many people would actually respect those kinds of boundaries when in a very vulnerable and hurt state after a breakup? I don't like the answer that thought exercise leads me to.

lamontcg 3 years ago

All of the Kafkaesque problems that we see with account lockouts, and with HN being used as the support channel for problems with major tech players, get ugly and turn into outright corruption. As the social contract in the US decays, it becomes more common to simply pay bribes to get employees to reinstate accounts. We've already seen headlines about the porn star who was sleeping with Facebook employees in order to get her Instagram account reinstated.

This will get worse as some people defend the current practices as useful economics, because most of us get cheap services most of the time, with only a few people in the herd getting singled out for the Kafka treatment. There will be strong opposition in a lot of quarters to imposing any kind of regulation to fix these problems, and they will become entrenched as just the way of living and doing business. People will start to openly talk about how to bribe tech companies as a cost of doing business as the US slowly slides into more of a third world mentality. We'll wind up in the world of _Brazil_ (the movie), only with private enterprise bureaucracy leading the way.

Atheros 3 years ago

Minimally noticeable Augmented Reality, like what Google Glass wanted to be. Specifically because companies like Apple say that they're working on it.

AR will marginally improve the life of anyone who uses it and worsen the life of everyone around them, including people who themselves use AR, especially as the usage rate rises.

• Right now it is possible to opt out of location tracking in cities by simply leaving your phone at home or turning it off temporarily. If 10% of people start using AR, that will no longer be possible due to facial recognition.

• Right now, you can mostly tell when someone is recording you in such a way that the video could be used against your interests (aiming a personal cell phone at you or, if and only if you intend to commit a crime, stores' security cameras (which don't record audio)). Also, right now, when you make a small mistake, you can presume that you were not recorded because people will not have had time to get their phone out and start recording. You apologize and life goes on. With 10% of people using AR, you must presume you are being recorded at all times by people who can and will upload the video for everyone to see in exchange for minuscule social media engagement.

• Right now, it is already relatively socially unacceptable to tell a friend or peer to stop using their phone in a social environment, like during dinner. I have experienced that it's a sure-fire way to ruin a dinner date. They'll comply and silently resent you. I see no reason why it would be different with ubiquitous AR. This will cause problems and I have a feeling that society's eventual solution will tend toward "let people do what they individually want" rather than the "we agree to stop using AR during this event".

• The number of people who cannot live without AR is going to be shockingly high. It will not be possible to get to know young people as people because their thinking process and personality will be so intertwined with AR that they shut down without it. Tim Cook: "So I think that if you, and this will happen clearly not too long from now, if you look back at a point in time, you know, zoom out to the future and look back, you'll wonder how you led your life without augmented reality. Just like today, we wonder, how did people like me grow up without the internet. And so I think it could be that profound, and it's not going to be profound overnight".

tjpnz 3 years ago

Tier 1 ISPs becoming the arbiters of what is and isn't acceptable on the open web.

ghiculescu 3 years ago

Minus the AI bit, your example already happens pretty much as you describe it.

I’ll be worried when people invent AI use cases that can’t be done at all without it.

gjvnq 3 years ago

1. The growth of deep fake porn might lead us down a path where revenge porn no longer "works" as people know that for every real nude leaked there are like 10'000 fake ones.

2. Stylometric AI gets so good that the only way to remain anonymous/private is to follow boring writing manuals, thus killing a lot of creative writing about sensitive topics.

3. People spending so much time online that neighbours can barely talk to each other, as they use the same words with subtly different meanings. We traditionally saw this happening with people of different professions (e.g. law speak is very different from doctor speak). But having this phenomenon happen for every single fandom or microlabel can lead us to a lot of political trouble as voters can't reach common ground.

4. Companies will start storing fake data on their servers so as to minimize the value of the data for cybercriminals.

RivieraKid 3 years ago

- Commercial fusion leading to cheap energy. Helion's prototype is supposed to generate net electricity in 2024. I think they have above a 50% chance of success.

- Self-driving cars will become widely available. It's only a matter of time until Waymo et al. scale everywhere.

catclone4355 3 years ago

Cat cloning is now a thing. https://www.geminigenetics.com/cat-cloning/

So are kidney transplants for cats. https://www.vetmed.wisc.edu/dss/mcanulty/felinekidneytranspl...

Someone's going to put these together, and we'll have The Island for cats. https://en.wikipedia.org/wiki/The_Island_(2005_film)

BirAdam 3 years ago

The FBI kind of does that: https://www.theguardian.com/world/2011/nov/16/fbi-entrapment...

As such, a terrorist organization doing so wouldn’t surprise me either.

As for what I see happening in a dystopian sense, I think that the hacking of smart homes will become more common. I think the hacking of smart cars will become more common. I think that at some point, the central banks will introduce CBDCs and governments will gain the ability to completely destroy political opponents via monetary control.

  • 082349872349872 3 years ago

    One of the reasons I have for avoiding the straight reading of 1984 is that the protagonist jumps, eagerly, into such an entrapment. Not a sympathetic look.

        But if you want money for people with minds that hate
        All I can tell you is brother you have to wait

    • maxbond 3 years ago

      He jumps eagerly at the only glimmer of hope in his otherwise terrible life, I'd say.

      • 082349872349872 3 years ago

          — You are prepared to commit murder?
          — Yes.
          — To commit acts of sabotage which may cause the death of hundreds of innocent people?
          — Yes. 
          ...
          — If, for example, it would somehow serve our interests to throw sulphuric acid in a child's face -- are you prepared to do that? 
          — Yes.
        
        Given how loathsome Winston has revealed himself to be in this exchange, why should I trust that he's a reliable narrator about how terrible the world actually is?

        He didn't do well on his A levels; didn't make the Inner Party cut; despite the Outer Party having provided a sinecure, he has a chip on his shoulder and delusions of persecution and grandeur...

        • maxbond 3 years ago

          I'd forgotten that part, and I certainly offer no defence of Winston's willingness to commit violence (as it's deplorable, agreed).

          I would say that his persecution is absolutely real. There's the scene where he's talking to his loudmouthed friend in the cafeteria and he's like, "This guy talks too much, one day he's just not gunnuh be here. Oh well." The converse being, Winston is reserved at work because he knows he'll be disappeared too if he speaks his mind.

          When he does start to express himself it all comes tumbling out. I'm not sure he actually would be willing to throw acid on a child. I think he's just trying to say whatever it is that will get him employed as a revolutionary, because he's desperate to do something.

          What I thought you had been referring to was how the whole thing was a bit of a setup from the start (the love nest was bugged, etc).

deterministic 3 years ago

The US will continue its social decline, with the US underclass spending most of its time online, being entertained enough with fake news, conspiracy theories, games, porn etc. to not revolt.

The ruling classes will make sure that the massive underclasses get just enough $ to stay alive and not revolt, using minimum wage jobs that just barely make it possible to survive to control the masses.

The middle class will continue shrinking and join the underclasses as AI’s and robots eventually take over most of the high paid work. Massive corporations own most of the land, controlling who gets to have a roof over their heads and who doesn’t.

LinuxBender 3 years ago

id.me - With the push to get people on id.me to pay federal taxes in the US, I can foresee this becoming a hard requirement for anything related to the federal government initially, and then eventually for all state government interactions. Then I foresee this data being "leaked" on purpose to be shared with organizations that should not have it, and just like Equifax they will barely receive a wag of the finger.

Network Connected Vehicles - This is already a thing. At best this will result in people's cars being bricked by skiddies. This can be used to control in real time where people can drive. People will have to start paying to "unlock" services in their car that are normally standard functions of a car, e.g. a pop-up that says one has to pay to use the window defogger. At worst this will turn into a dark-web business for targeted assassination, or be used to silence people who are speaking out of turn. There are some videos of people's cars doing bad things with the driver fighting for control and ultimately losing. I can only hope more people implement their own private CCTV systems, not tied to any clouds, to document more of these.

Personal Social Credit Scores - All manner of companies are signing onto ESG believing they will benefit financially by signalling good intentions. Some stock monitoring sites are already factoring in ESG scores. I foresee this devolving into personal social credit scores that would manipulate public behavior, and ultimately anyone not aligned with the system would be ostracized from some aspects of society, or at least business. This will tie into the above Network Connected Vehicles. Park near a bar and get on a watch list for potential alcoholics. Park near a strip club and your spouse starts getting pop-ups for marriage advice. Walk past a digital billboard and be publicly shamed for being out of alignment with group-think, then be mocked by the billboard when your heart rate increases as per your body monitor. If your score goes low, insurance and rent costs will increase and social benefits will decrease.

Social Media Algorithms applied outside of the web - Social media organizations have already jumped the shark and many are finally catching onto this. This will start creeping into "smart" devices to manipulate people. Body monitors, smart home systems, AR headsets, etc... Facial recognition will tell the billboards and local businesses who you are, how much money you have and if you are aligned with correct-think. Monitors in police cars will identify people and show their social credit score, who purchases weed, who may be armed as they are driving by.

Network Connected Body Monitors - See Network Connected Vehicles and Personal Social Credit Scores

  • deniscepko2OP 3 years ago

    The personal social credit scores section I believe is already happening in China. I saw a DW documentary which was describing social scores in China.

    • LinuxBender 3 years ago

      That it is. Some circles suspect that the tech originated in the US and that it is a test bed to see how things play out.

  • loudmax 3 years ago

    Each of these predictions you describe, save the first, is capitalism at work.

    As far as I see, the alternatives are either some form of government intervention, or a world in which only the elite can afford to purchase personal privacy.

    • Bender 3 years ago

      Adding to this, capitalism via lobbyists may attempt to block government intervention.

082349872349872 3 years ago

Forget isolated people; we've been attacking social cliques via artificial influence for over 100 years now. The recent change is that identification and automation has improved.

Tepix 3 years ago

Social media (twitter) will get worse due to ChatGPT et al.

One little consolation: eventually the trollbot armies will waste resources by mostly talking amongst themselves.

Other predictions: Quite a few jobs will be lost: Text editors, translators.

There will be new ethical questions: Is it ok if a Chatbot aids you writing your dissertation? How much is ok? 20%? 40%?

All in all, doesn't sound so great, does it?

Looking on the bright side, I am sure we will see creative people come up with great stuff (films, music, images, VR environments, interactive experiences).

  • stackbutterflow 3 years ago

    I'm not an expert, so someone from the industry can correct me if I'm wrong, but I heard that the next seasons of Star Wars: Andor and House of the Dragon are due in two years because post-production is taking more and more time. I think a productivity jump in that area would be welcomed.

bitwize 3 years ago

The Cyberpunk universe has this concept of cyberpsychosis, in which the gear you're wearing eventually literally drives you mad -- a significant risk for any cyberized individual. This usually results in a murderous rampage followed by the death of the cyberpsycho as law enforcement are advised to shoot to kill.

Turns out, we can't even so much as carry around cellphones without risking significant mental illness from the effects.

ben_w 3 years ago

3D bio-printed avatars (in the sense of remote-controlled robots, not icons) with arbitrary DNA, making it impossible to trust any evidence that a given person was at the scene of a crime, or to trust their alibi saying they were elsewhere.

Or the same stuff but used for sex: Like DeepFake porn, but I suspect, much much more horrifying for the victims.

sklargh 3 years ago

A little banal since the tech is not new. Forward-deployed low-yield nuclear weapons are a little easier to use than their large strategic cousins. NATO and the ex-Warsaw Pact deploying low-yield, low-time-to-target, land-based cruise missiles reciprocally increases the danger of a nuclear exchange substantially.

  • 082349872349872 3 years ago

    Nuke powered hypersonic scramjets irradiating* their flight/loiter paths as well as threatening ultimate targets?

    * if the problem with nukes and aviation is the shielding: that's not a bug, it's a feature :-(

miki123211 3 years ago

telescreens, or, well, audio telescreens.

Whisper, the new speech recognition model from OpenAI, is very, very good with English, and really decent with other languages. With Polish, for example, on a normal conversation, it gets almost everything except some very specialized words.

For now, such models require somewhat significant resources, and running them for every conversation of every citizen, 24/7, isn't really feasible, but Moore's law in AI is going strong, and that will change in a few years. Even if you want to run the largest possible model right now, an RTX 3090 can do it faster than real time.

I'm pretty sure that, with the amount of data the Chinese government has access to, they can build something even better than Whisper, and there are enough labor camps in China in case they don't.

I'm imagining something like a watch, or maybe AR glasses, which every citizen has to wear and which transmit everything they hear in occasional bursts, encoded with something like opus to save on bandwidth.

Once you get every conversation of every citizen transcribed and meticulously cataloged, you can do a lot of pretty interesting things. You can pass it through transformer models for sentiment analysis (to flag anti-party opinions), you can search for specific subjects (and with AI, those searches can be much more advanced than a simple keyword search) etc. Because of how smart these AI systems actually are, you can't trivially go around them by rephrasing what you say. If you start replacing "I want the men in power to die" with "I want the humans in electricity to paint", it will eventually pick up on that and flag you anyway.
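
A rough sketch of the transcribe-then-flag step, assuming the open-source whisper and transformers Python packages. The classifier here is just an off-the-shelf sentiment model standing in for a purpose-built "anti-party opinion" detector, and the file name is made up:

    import whisper
    from transformers import pipeline

    asr = whisper.load_model("base")              # speech-to-text
    classifier = pipeline("sentiment-analysis")   # generic sentiment stand-in

    def flag_recording(path, threshold=0.9):
        text = asr.transcribe(path)["text"]
        verdict = classifier(text[:512])[0]       # truncate to the model's input limit
        # Flag strongly negative speech; a real system would use a purpose-built
        # classifier and far more context than a single utterance.
        return verdict["label"] == "NEGATIVE" and verdict["score"] > threshold

    # flag_recording("citizen_0042_2024-01-01.opus")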

If your watches also identify who's around and use voice recognition to figure out who's talking to whom, you basically have the whole social graph mapped out. One criminal slips up and gets flagged, and you can start going from there. Decrease the alarm threshold for anybody in close contact with that person, take additional signals into account, like locations where trouble often starts, and that gives you a lot of possibilities.

coldaxe_44 3 years ago

I would say AI policing. I know a lot of western police forces are willing to give it a try even if there are false positives, such as the Met Police predicting whether someone is likely to commit a crime.

So, similar to Blume's ctOS in Watch Dogs, I think that is likely to happen.

FpUser 3 years ago

The west adopting social score ideas from China which is already sort of happening on a corporate level. This has to stop but we just keep spreading our buttocks and let companies / government sleep in our bedrooms.

zffr 3 years ago

Similar to the scenario you mentioned, governments could use AI-generated content to produce seemingly real children’s content to subtly affect their future thoughts.

psiops 3 years ago

Sounds like you mean dystopia rather than utopia there.

tb_technical 3 years ago

The horrors are in high technology, and biology.

The normalization of genetic engineering will eventually allow us to make changes to people to make them more complacent for horrid conditions. Imagine an entire underclass that not only doesn't notice the squalor, but are too content to fight the abuse heaped upon them.

This, in conjunction with VR entertainment, will be a method used to permanently imprison the lower classes in a brief blissful existence before they return to their 16-hour shift at a suicide-net-equipped war crime factory.

RunSet 3 years ago

The recommendation algorithm and the timeline of your friends' activities will merge into a recommended friends and activities list.

tennisflyi 3 years ago

What is the timeframe for "soon"? I don't see many of the replies here happening "soon".

jstx1 3 years ago

Remote work 100% in VR. During working hours your employer gets to track everything down to a single eye movement.

  • maxbond 3 years ago

    Seems like some people can't spend any time in VR without getting motion sick. So hopefully not (as they'd be out of a job).

markus_zhang 3 years ago

Think of people simply creating authentic-looking fake news with AI. Now throw in a state actor and the fun begins.

vlfig 3 years ago

Ban on cash: impose negative interest rates. CBDCs: all manner of restrictions to fungibility.

neilv 3 years ago

> attack him with a program that would generate a bubble just for him: fake friends, fake events happening, fake influencers and news programs just for him

In the mid-'90s, I prototyped a personalized newspaper, Web-scraped from news sites, and I was also manually reading all the major outlets that were online each day, and that immediately got me thinking of a risk...

You know how a pre-Internet politician, when speaking to one group (say, at a senior citizens luncheon fundraiser), might say one thing, and then say a different thing to another group, to manipulate them both? What happens when each person gets a personalized newspaper, and a bad actor could tailor it to push the buttons of each individual, on an automated mass scale?

However, one thing the Trump phenomenon showed was that one might not need to tailor messages to individuals' intimate profiles -- very crude, low-tech, one-size-fits-all messaging can control a huge chunk of the population. The "visionary" thinking was barking up the wrong vector. So we might take an odd comfort from that.

blooalien 3 years ago

Advertising everywhere … Oh, wait … Already happened.
