Gradual Disempowerment: How Even Incremental AI Progress Poses Existential Risks

arxiv.org

87 points by mychaelangelo a year ago · 86 comments

yapyap a year ago

I implore everyone to watch the movie ‘Money Monster’, not only because it’s a great movie but also because I think it has a minor plot point that basically predicts how AI will be used.

(small spoiler)

In Money Monster it turns out the hedge fund manager, who blames their state-of-the-art AI bot for malfunctioning and rapidly selling off a certain stock and tanking it, pins it all on a machine code error. He can’t explain what the error was or how or why it happened because ‘he didn’t program their trading bot, some guy in Asia did.’ But as it turns out, he was behind it in some way.

I feel like using AI as a way to further abstract away blame when something goes wrong will be a big thing, even when the fault secretly lies with neither the AI (ML) nor whoever trained it.

  • phero_cnstrcts a year ago

    My best guess is that at some point a “neutral” AI will be put in charge of everything and everybody must obey for the good of society. As in, one day you only have 22 energy points to spend on electricity, or another day you can only eat crickets and whatnot. But it will only appear neutral: in essence somebody will control the AI and hence control the people. And people will agree to its decisions because the AI “knows best.”

  • teg4n_ a year ago

    people already do this with algorithms. IMO AI is just a continuation of that.

    • azeirah a year ago

      AI is genuinely just a rebranding of "algorithms". Like, I get it, it's faster and it works by feeding data instead of feeding code... but code is data, data is code, etc.

      Jim Keller said it best: about every 10 orders of magnitude of additional available computation, the paradigm of computing shifts to a higher level of mathematical building blocks. In the beginning we had pure logic. Then came addition and subtraction, then came vectors, then came matrices, and now we're at tensors; he believes the next building block is graphs.

      This is citing from memory while I'm sleep-deprived, but I think the general idea holds. Approximately every 10x increase in computation, there's a paradigm shift in what is possible.

      But it's just algorithms. Always has been, and still is.

      • esafak a year ago

        Only if animals run algorithms too. AI models are stochastic and dynamic. Algorithms are usually understood to be fixed heuristics.

alephnerd a year ago

While this is a well written paper, I'm not sure it's really contextualizing realistic risks that may arise from AI.

It feels like a lot of "Existential AI Risk" types are divorced from the physical aspects of maintaining software, e.g. your model needs hardware to compute, and you need cell towers and fiber optic cables to transmit.

It feels like they always anthropomorphize AI as some sort of "God".

The "AI Powered States" aspect is definetly pure sci-fi. Technocratic states have been attempted, and econometrics literally the exact same mathematical models used in AI/ML (Shapely values are an Econometrics tool, Optimization Theory itself got it's start thanks to GosPlan and other attempts and modeling and forecasting economic activity, etc).

As we've seen with the Zizian cult, very smart people can fall into a fallacy trap of treating AI as some omnipotent being that needs to either be destroyed or catered to.

  • ADeerAppeared a year ago

    > It feels like they always anthropomorphize AI as some sort of "God".

    It's not like that. It is that. They're playing Pascal's Wager against an imaginary future god.

    The most maddening part is that the obvious problem with that has been well identified by those circles, dubbed "Pascal's Mugging", but they're still rambling on about "extinction risk" whilst disregarding the very material ongoing issues AI causes.

    They're all clowns whose opinions are to be immediately discarded.

    • duvenaud a year ago

      Which material ongoing issues are we ignoring? The paper is mainly talking about how the mundane problems we're already starting to have could lead to an irrecoverable catastrophe, even without any sudden betrayal or omnipotent AGI.

      So I think we might be on the same side on this one.

    • alephnerd a year ago

      Yep. And it's annoying. So many cycles are being wasted essentially thinking about bad science fiction.

      The same effort could be expended on plenty of other problems that are unsolved.

    • cyrillite a year ago

      Can you explain how pascal’s mugging functions with respect to risk rather than reward?

      • ADeerAppeared a year ago

        In what sense?

        The "Mugging" going on is that "AI safety" folks proclaim that AI might have an "extinction risk" or infinite-negative outcome.

        And they proclaim that therefore, we should be devoting considerable resources (i.e. on the scale of billions) to avoiding that even if the actual chance of this scenario is minimal to astronomically small. "ChatGPT won't kill us now, but in 1000 years it might" kinda shit. For some this ends with "and therefore you need to approve my research funding application", for others (including Altman) it has mutated into "We must build AGI first because we're the only people who can do it without destroying the world".

        The problem is that this is absurd. They're focussing on a niche scenario whilst ignoring horrific problems caused in the here and now.

        "Skynet might happen in Y3K" is no excuse to flood the current internet with AI slop, create a sizeable economic bubble, seek to replace entire economic sectors with outsourced "Virtual" employees, and perhaps most ethically concerning of all: create horrific CSAM torment nexuses where even near-destitute gig economy workers in Kenya walk out of the job.

        Yet "AI safety" folks would have you believe so.

        • tim333 a year ago

          The people who say it's absurd tend to be the least informed while the people saying it's a major risk include the guy who got a Nobel prize for inventing the current stuff and the leading researchers. Here's some names in the field. 15/19 think the risk is significant https://x.com/AISafetyMemes/status/1884562099612889106/photo...

          SMBC is quite funny on the AI risks eg. https://www.smbc-comics.com/comic/signal-2

          • ADeerAppeared a year ago

            It's called absurd not because it's not understood, or because there aren't technical counterarguments to be made.

            It's called absurd because it does not deserve to be humoured the effort of writing out those arguments.

            > Here's some names in the field. 15/19 think the risk is significant

            A list that is largely a pile of clowns and morons, many with direct financial interests in amplifying the "danger"/power of AI.

            This is why the doomsday cult is not taken seriously.

  • duvenaud a year ago

    One of the authors here. I don't think we anthropomorphize AI as some sort of God.

    Here's a more prosaic analogy that might be helpful. Imagine tomorrow there's a new country full of billions of extremely conscientious, skilled workers. They're willing to work for extremely low wages, to immigrate to any country, and they don't even demand political representation.

    Various countries start allowing them to immigrate because they are great for the economy. In fact, they're so great for economies and militaries that countries compete to integrate them as quickly and widely as possible.

    At first this is great for most of the natives, especially business owners. But the floor for being employable is suddenly really high, and most people end up in a sort of soft retirement. The government, still favoring natives, introduces various make-work and affirmative action programs. But for anything important, it's clear that having a human in the loop is a net drag and they tend to cause problems.

    The immigrant population grows endlessly, and while GDP is going through the roof and services are all cheaper than ever, people's savings eventually dwindle as the cost of basic resources like land gets bid up. Capital always has more lucrative uses among the immigrants and capital owners than among the natives. Educating new native humans for important new skills gets harder and harder as the economy becomes more sophisticated.

    I don't have strong opinions about what happens from here, but the point is that this is a much worse position for the native population to be in than currently.

    Does that make sense? Even if this scenario doesn't seem plausible, do you agree that I'm not talking about anything omnipotent, just more competitive?

    • pdfernhout a year ago

      Thanks for co-writing an insightful paper! Something I put together around 2010 on possibilities for what happens from here: https://pdfernhout.net/beyond-a-jobless-recovery-knol.html "This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."

    • sidkshatriya a year ago

      This is a fantastic analogy -- thanks for sharing.

      Another way to understand AI, in my view, is to look at (often smaller) resource-rich countries around the world (oil, minerals, etc.). Often the government is more worried about the resource than about the people who live in the country. The government often does not bother to educate them, take good care of them, or give them a voice in the future of the country, because those citizens are not the ones who pay the bills, or the main source of GDP output, or the source of political power.

      Similarly, in an AI-heavy economy, unless systems are designed right, governments might start ignoring their citizens. If democracy is not robust, or money has a big role in elections, the majority voice of humans is likely to matter less and less going forward.

      Norway is a good example of a resource-rich country that still looks out for its citizens. So it should be possible to be resource-rich/AI-rich and have a happy citizenry. I suppose balancing all the moving parts would be difficult.

      The way to deal with the risks of AI would be to make AI available to all; this is my strong belief. There is more risk in AI being walled off to select nations / classes of citizens on grounds of various real / imagined risks. This could create a very privileged class of countries and people that have AI while other citizens don't / can't. With such huge advantages, AI would wreak greater havoc on the "have-nots". (Over)regulation of AI can have worse consequences under some conditions.

  • xpe a year ago

    A lot hangs on what “realistic” means. The world is probabilistic, but people’s use of language often butchers the nuance. One quick way to compare events is to multiply each event’s probability by its impact.
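
    A minimal sketch of that comparison (the risk names, probabilities, and impact numbers below are made-up placeholders purely for illustration, not estimates):

      # Compare two hypothetical events by expected impact = probability * impact.
      # All figures are made-up placeholders for illustration only.
      risks = {
          "risk_a": {"probability": 0.02,  "impact": 1_000},    # likelier, smaller impact
          "risk_b": {"probability": 0.001, "impact": 100_000},  # rarer, larger impact
      }

      for name, r in risks.items():
          expected = r["probability"] * r["impact"]
          print(f"{name}: expected impact = {expected:g}")

      # risk_a: expected impact = 20
      # risk_b: expected impact = 100  (the rarer event can still dominate the comparison)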

    • alephnerd a year ago

      India and China almost went to war with each other in 2020.

      The decision of whether or not to shell Chinese troops and start a war between two nuclear states came down to one brigadier [0]

      Conventional War between two nuclear states is a MAJOR risk, and has already happened before in 1999 (even if Clinton's State Department didn't want to call it that).

      These kinds of flashpoints are way more likely to cause suffering in the near future than some sort of AGI/ASI.

      [0] - https://theprint.in/defence/nearing-breaking-point-gen-narav...

      • xpe a year ago

        > These kinds of flashpoints are way more likely to cause suffering in the near future than some sort of AGI/ASI.

        Ok. Then where does your thinking lead?

        One good starting question is: what set of mitigation strategies do we pursue for a probabilistic mix of failure modes?

  • xpe a year ago

    How many people have you talked to face to face about various existential risk scenarios? Have you gotten into probabilities and the logic involved? That’s what I’ve done and this is the level of rigor that is table stakes for calculating the cost-benefit over different possible outcomes.

    • alephnerd a year ago

      > How many people have you talked to face to face about various existential risk scenarios

      A decent amount.

      I started off as a NatSec adjacent staffer and have helped build Cybersecurity and Defense Tech companies, so a lot of my peers and friends are working directly on US-China, US-Russia, Israel-Iran, Saudi-Iran, Saudi-Turkey, India-Pakistan, and India-China relations.

      These are all relations that could explode into cataclysmic wars (and have already sparked or exacerbated plenty of wars like the Syrian Civil War, Yemen Civil War, Libyan Civil War, Ethiopian Civil War, Russia-Ukraine War, Myanmar Civil War, Afghan War, etc). We are already going through a global trend of re-armament, with every country expanding their conventional, nuclear, and non-conventional warfighting capabilities. Just about every nuclear state has the nuclear triad or is in the process of implementing a nuclear triad. And China's nuclear rearmament race has forced India to rearm which has forced Pakistan to rearm, and is causing a bunch of regional arms races.

      I think the world is more likely to end due to bog standard conflicts escalating into an actual war. Not some sort of AGI/ASI going skynet

      • xpe a year ago

        Maybe you are right in predicting (guessing?) as to which is more likely. Still, we don’t have the luxury of just rank ordering failure modes and only mitigating the first.

        Not to mention that the race for AI technology is likely going to make geopolitics more volatile. If something happens, it doesn’t matter how we bucketed it conceptually. Reality doesn’t care about where we draw the lines.

        The false dichotomies abound in many of these discussions.

      • phkahler a year ago

        >> I think the world is more likely to end due to bog standard conflicts escalating into an actual war. Not some sort of AGI/ASI going skynet

        WOPPER, not skynet. You saw War Games right?

        • alephnerd a year ago

          Naw, was before my time. I did see Terminator though.

          Did Wargames age well? I tried watching a couple iconic 80s movies like Scarface but I just couldn't stand it.

          • pram a year ago

            It's OK, but the climax is absolutely classic.

            "A strange game. The only winning move is not to play."

            • alephnerd a year ago

              Sounds intriguing. I know the broad strokes of the plot but it sounds like the tension aspect is pretty cool. Definitely watching it tonight.

        • brandall10 a year ago

          It's WOPR = War Operation Plan Response.

  • Saline9515 a year ago

    An AI-powered state would be a significant improvement in many places with endemic corruption. An AI has no incentive to steal.

    • alephnerd a year ago

      An "AI-Driven State" is literally what econometrics is. All of the same math, models, and techniques used in AI/ML are the exact same used in Econometrics and Optimization Theory.

      A purely technocratic state leveraging econometrics or optimization theory for its own sake has failed multiple times: for example, the failure of the USSR's planned economy, or the failure of the US's Vietnam War objectives due to a hyper-metrics-driven workflow.

      On a separate note, I have always considered returning to grad school and making a brief publishing career out of translating esoteric Econometrics models into ML models. A Russian-American friend of mine did something similar by essentially basing his CS research career on older Soviet optimization theory research that hadn't been translated into English, which boosted his publishing output.

      • hsuduebc2 a year ago

        Technocratic states have not failed because of technocracy itself, but because their implementation was distorted by political, ideological, or cultural factors. Technocracy as a principle is not the issue; failure occurs when it is combined with rigid, non-adaptive systems that prioritize dogma over reality.

        People are the problem here. As always. But of course AI itself needs to be managed by people too, so it can pose similar problems. Politics itself is the issue.

      • Saline9515 a year ago

        Econometrics is very far from what current LLMs are capable of.

    • gorbachev a year ago

      The people who control the AI, however, do.

      AI is not some runaway Skynet type of a thing. It's controlled by people, who will use it for good and bad.

      Not to mention that AI is and will be owned by people who are already concentrating power in their own hands. They're not going to, voluntarily, relinquish that power.

      • Saline9515 a year ago

        It is possible to decentralize the AI model, add verifiability, and let it acquire real world data on its own.

    • serviceberry a year ago

      AI has some approximation of the incentives of whoever trained it. Unless it's handed to us from above, I don't see how that would work.

      • Saline9515 a year ago

        An AI has no hedonistic incentives such as the will to manipulate public procurement contracts to get kickbacks and buy itself a new SUV.

    • bryanrasmussen a year ago

      The corrupt statesman discovered that, after the AI trained on him was put into place, he got the same amount of money delivered into his Swiss bank accounts, as the AI was just taking bribes and depositing them as its training dictated.

  • lemonberry a year ago

    Kevin Kelly was on a recent episode of the Farnam Street podcast and was surprised by how much he anthropomorphized AI. Maybe that's not a surprise to people who know more about him, but it was to me.

    • alephnerd a year ago

      I think the user experience of seeing a model extrude realistic-sounding text is what broke a lot of people.

      Back in HS, I introduced a buddy of mine to ELIZA mode in Emacs, and it completely broke her mind the same way LLMs did for a lot of people: she actually conversed with ELIZA and used it to work through her anxiety during the college admissions process. Yet ELIZA used very simple heuristics to extrude human-sounding text. And my friend wasn't some dummy or Luddite; she ended up going to an Ivy to study economics and medicine.
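
      For a sense of how little machinery that takes, here is a minimal ELIZA-style sketch (a toy illustration of the same class of heuristic, not the actual Emacs doctor code): a few regex rules plus pronoun reflection are enough to produce conversational-sounding replies.

        import random
        import re

        # Toy ELIZA-style responder: pattern matching plus pronoun reflection,
        # with no model of meaning at all.
        REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
        RULES = [
            (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
            (r"i am (.*)", ["Why do you say you are {0}?"]),
            (r"(.*)\?", ["What do you think?", "Why do you ask that?"]),
            (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
        ]

        def reflect(fragment):
            # Swap first-person words for second-person ones ("my" -> "your", etc.).
            return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

        def respond(text):
            for pattern, replies in RULES:
                match = re.match(pattern, text.lower().strip())
                if match:
                    groups = [reflect(g) for g in match.groups()]
                    return random.choice(replies).format(*groups)
            return "Please go on."

        print(respond("I feel anxious about my college applications"))
        # e.g. "Why do you feel anxious about your college applications?"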

      It goes to show that User Experience is all that really matters for technology.

  • llamaimperative a year ago

    Yeah it would only be worrisome if there were significant incentives to enlist more hardware, cell towers, and fiber optic cables to create and operate increasingly powerful AI and to improve its ability to act directly on the physical world

    • alephnerd a year ago

      It's not even about that. That isn't even how a model is created.

      And that's what I'm getting at. It's basically bad science fiction being used to create some form of a pseudo-religious experience.

      At least your local mandirs, mosques, churches, synagogues, gurdwaras, and other formal religions do food drives and connect a subset of your local community.

      What do AI Doomers do? Nothing.

      • ctoth a year ago

        > What do AI Doomers do? Nothing.

        You seem very keyed up about this, so it is a good thing you are wrong. AI doomers are far more likely than most people to be giving to GiveWell, or to actually be thinking about how they are helping society. Source: Am AI doomer. Donate to GiveWell.

      • llamaimperative a year ago

        I also wasn’t commenting on how a model is created…?

        Do you disagree there are significant incentives to scale up the power, ubiquity, and direct physical impact of these systems?

        Please answer with yes or no, and not some snarky “bad science fiction” straw man.

        • alephnerd a year ago

          While I do think there is an incentive to scale up physical infrastructure, there is a lot of "AI Washing" happening in the space, with bog-standard energy projects being justified as "AI Scaling", especially because ESG as an investment category is dead, and a lot of energy investments would otherwise be bucketed under the ESG asset class.

          > I also wasn’t commenting on how a model is created…?

          That comment wasn't targeted at you. I just find AI-washing arguments laughable sometimes. It's very similar to the ambulance chasing and scare tactics that are a major part of Cybersecurity GTM.

          • llamaimperative a year ago

            > there is a lot of "AI Washing" happening in the space

            Total red herring. Your argument for why AI cannot be a risk is that it depends on so much physical infrastructure. Sure, this infrastructure isn't going to come from thin air, but if there are strong incentives to put it in place, then it makes no difference whether it came from thin air or from VC checkbooks. It is then in place; ergo your rationale for why this cannot be a risk is invalid.

      • TheDudeMan a year ago

        They are trying to convince the world of the problem. What else can they do?

  • tim333 a year ago

    I think the more realistic threat model is from existing malware and similar stuff. The same people who write viruses and rootkits, try to hack elections, sever communication cables, and so on will probably try to do evil AI. Then instead of Putin trying to take over the world and build torture chambers for the resisters, you'll have a Putinbot trying to do so forevermore, pre-armed with nukes.

  • drysine a year ago

    >anthropomorphize AI as some sort of "God"

    Let's call this AI a theomorph.

randomcatuser a year ago

Another thing I don't like about this paper is how it wraps real, interesting questions in the larger framework of "existential risk" (which I don't... really think exists)

For example:

> "Instead of merely(!) aligning a single, powerful AI system, we need to align one or several complex systems that are at risk of collectively drifting away from human interests. This drift can occur even while each individual AI system successfully follows the local specification of its goals"

Well yes, making systems and incentives is a hard problem. But maybe we can specify a specific instance of this, instead of "what if one day it goes rogue!"

In our society, there are already many superhuman AI systems (in the form of companies) - and somehow, they successfully contribute to our wellbeing! In fact life is amazing (even for dumb people in society, who have equal rights). And the reason is, we have categorized the ways it goes rogue (monopoly, extortion, etc) and responded adequately.

So the "extinction by industrial dehumanization" reads a lot like "extinction by cotton mills" - i mean, look on the bright side!

  • duvenaud a year ago

    > we have categorized the ways it goes rogue (monopoly, extortion, etc) and responded adequately.

    This objection is a reasonable one. But the point of the paper is that a lot of the ways we have of addressing these systemic problems will probably not work once we're mostly unemployed (or doing make-work). E.g. going on strike, holding a coup, or even organizing politically will become less viable. And once we're a net drag on growth, the government will have incentives to route resources away from us. Right now they have to invest in their human capital to remain competitive.

    Is that any more convincing?

tiborsaas a year ago

If we are speculating on existential risks, then consider Satya Nadella's take on the future of software: https://www.youtube.com/watch?v=a_RjOhCkhvQ

It's quite creepy that in his view all tools, features, access and capabilities will be accessible to an AI agent which can just do the task. This sounds fine in a narrow scope, but if it's deployed at the scale of Windows, then it suddenly becomes a lot more scary. Don't just think of the home users, but businesses and institutions will be running these systems.

The core problem is that we can't be sure what a new generation of AI models will be capable of after a few years of iteration. They might find it trivial to control our machines, which could provide them unprecedented access to follow an agenda. Malware exists today that does this, but it can be spotted, isolated, and analyzed. When the OS by design welcomes these attacks, there's probably nothing we can do.

But please tell me I've consumed too much sci-fi.

etaioinshrdlu a year ago

Bureaucratic systems have been able to fail like this for a long time: https://news.ycombinator.com/item?id=17350645

Now we have the tools to make it happen more often.

whodidntante a year ago

Once PRISM becomes R/W, you will not even know if what you read/hear/see on the internet is actually what others have written/said/created. You will interact with the world as the government wants you to, tailored to each individual.

Each time we choose to allow an AI to "improve" what we write/create, each time we choose to allow AI to "summarize" what we read/consume, we choose to take another step along this road. Eventually, it will be a simple "optimization" to allow AI to do this on a protocol level, making all of our lives "easier" and more "efficient"

Of course, I am not sure if anyone will actually see this comment, or if this entire thread is an AI hallucination, keeping me managed and docile.

Jordan-117 a year ago

This just underscores the feeling that most of the problems people have with AI are actually problems with rampant capitalism. Negative externalities, regulatory capture, the tragedy of the commons -- AI merely intensifies them.

I've heard it said that corporations are in many ways the forerunners of the singularity, able to act with superhuman effectiveness and marshal resources on a world-altering scale in pursuit of goals that don't necessarily align with societal welfare. This paper paints a disturbing picture of what it might look like if those paperclip (profit) maximizing systems become fully untethered from human concerns.

I was always a little skeptical of the SkyNet model of AI risk, thinking the danger was more in giving an AI owner class unchecked power and no need to care about the wants or needs of a disempowered labor class (see Swanwick's "Radiant Doors" for an unsettling portrayal of this). But this scenario, where capitalism takes on a mind of its own and becomes autonomous and even predatory, feels even bleaker. It reminds me of Nick Land's dark visions of the posthuman technological future (which he's weirdly eager to see, for some reason).

  • gom_jabbar a year ago

    I think it not only feels bleak, but that it (that capitalism is an "it" is important to Nick Land, from the perspective of complex adaptive systems) is just realistic.

    I have a research project on Nick Land's core thesis that capitalism is AI. If you want to go ultra-deep into his theory, check it out: https://retrochronic.com/

    It's fundamental to understand that capital is not just teleological - converging with AI on the event horizon of the Singularity - but teleoplexic (i.e. "capitalism takes on a mind of its own and becomes autonomous").

  • cyrillite a year ago

    I’d push it further than you, even, and have been thinking this way for a while. The economy is AI. I know that’s a ridiculously simplistic way to put it and a network of individual actors doesn’t function identically, but for all intents and purposes we’re massive amounts of distributed compute running a “capitalist” algorithm and it isn’t perfectly aligned with us either. We don’t have AI problems, we have a generic class of algorithm problems that keep popping up wherever agents interact dynamically.

upghost a year ago

Obviously scary AI future means we should probably give full regulatory capture to a handful of wealthy individual corporations. You know. For our own safety.

  • iNic a year ago

    The following is some unstructured thoughts:

    I know you are being facetious, but it is not obvious (to me at least) what the best approach is in this scenario, where future AI capabilities and uses are unclear. I don't know what the correct analogy for future AIs is because they don't exist yet. If AIs have some offensive advantage then nuclear weapons might be the right analogy. The world would not be safer if everyone had their own nuclear weapon. If the harmless & helpful personal assistant analogy is correct, then we should democratize. But Biden's mandate that AI training runs of a certain size be reported to the government (not in any way hindered) was (to me) just obviously good. Such a report would take at most the effort of one person-day (per Dario Amodei) and gives the government some overview of progress.

tehjoker a year ago

I know a guy who wrote this about capitalism (Karl Marx), about how there is this system that disempowers human decision making... all that is solid melts into air...

randomcatuser a year ago

I find this argument a bit weak:

for example, regarding human marginalization in states, it's just rehashing basic tropes about government (tldr, technology exacerbates the creation of surveillance states)

- "If the creation and interpretation of laws becomes far more complex, it may become much harder for humans to even interact with legislation and the legal system directly"

Well duh. That's why, as soon as we notice these things, we pass laws against it. AI isn't posing the "existential risk"; the way we set up our systems is. There are lots of dictators, coups, and surveillance states today. And yet, there are more places in which society functions decently well.

So overall, I'm more of the opinion that people will adapt and make things better for themselves. All this anthropomorphization of "the state" and "AI" obscures the basic premise, which is we created all this stuff, and we can (and have) modified the system to suit human flourishing

  • duvenaud a year ago

    > as we notice these things, we pass laws against it

    Well, the claim is that that's the sort of thing that will get harder once humans aren't involved in most important decisions.

    > which is we created all this stuff, and we can (and have) modified the system to suit human flourishing

    Why did we create North Korea? Why did we create WWI? We create horrible traps for ourselves all the time by accident. But so far there has been a floor on how bad things can get (sort of, most of the time), because eventually you run out of people to maintain the horrible system. But soon horrible systems will be more self-sustaining.

marstall a year ago

or ... our minds and bodies will quite rapidly adapt!

  • mrbungie a year ago

    Adapting to it normally involves a lot of death and misery.

  • yapyap a year ago

    we as humans evolve, yes.

    BUT:

    1. we evolve quickly in the grand scheme of the universe but slowly when talking about human lives.

    2. what would rapidly adapting to AI mean? If it means becoming reliant on it for the most basic tasks, in the way people nowadays have become so reliant on calculators they barely even bother doing math in their head, sure. If you mean adapting alongside AI, where we in some way would also become way smarter as humans? Nonsense.

    AI can be a great tool, nothing more though.

    • marstall a year ago

      we adapted to the rise of language, the industrial revolution, the automation of agriculture, roads and cars, and all the previous forms of computer-driven automation... all of which made some parts of us redundant, and which taken together left us richer, safer and freer. AI will probably bring a few signal benefits (an AI doctor for every person on the planet?) and plenty of sludge and pollution, and we will adapt to that in many ways.

  • CooCooCaCha a year ago

    Which is being forced onto me by people who I truly believe want me dead or at least don’t care how many bodies they throw at building their cyberpunk dystopia.

    • brookst a year ago

      This sounds like projection. Most people don’t think like that; the people who are (rightly or wrongly) advancing AI mostly don’t think of you at all, and almost universally believe they are helping humanity.

      They could be wrong (I’m ambivalent), but assuming bad intent is almost always wrong.

      • drysine a year ago

        >the people who are (rightly or wrongly) advancing AI mostly don’t think of you at all

        Just like lawnmowers don't think of hedgehogs.

brookst a year ago

Maybe this is peak AI panic?

It seems wild that someone could unironically talk about tools “disempowering” their users. Like, I get it, C disempowers programmers by shielding them from assembly language, and Cuisinarts disempower chefs, and airplanes disempower travelers by not making them walk through each territory.

But… isn’t that a pretty tortured interpretation of tool use? Doesn’t it lead to “the Stone Age was a mistake”, and right back to Douglas Adams’ “Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place”

I get that AI can be scary, and hey, it might kill us all and that would be bad, but this particular framing is just embarrassing.

  • mrob a year ago

    Modern AI does not act like a tool. Tools behave predictably and deterministically. They act as a multiplier to human ability, not a replacement. Modern AI is a new class of thing that humanity has no experience with.

    • tac19 a year ago

      Human consultants have been a thing for a long time, they're a tool of business. They're meant as a multiplier to the corporate ability, not a replacement. It seems like a shift in economics and availability, rather than a shift in kind, if those consultants are virtual rather than human. No?

      • mrob a year ago

        Calling humans a "tool" is metaphorical. Humans are not literally tools.

        • tac19 a year ago

          But they are in the abstract sense of something to employ to achieve a result. That's why consultants are hired, not for their humanity, but for their utility.

          • mrob a year ago

            But people talking about tools usually mean tools in the concrete sense. Even very complicated tools in the concrete sense, e.g. compilers or CNC mills, only act as multipliers to the operator's ability. A consultant is an agent, not a tool, and this is a better comparison for modern AI. It's an important distinction because an agent can fail or act against your interests through no fault of your own. The ability multiplier is unpredictable and potentially negative.

            • tac19 a year ago

              Well, I don't think getting hung up on such definitions will be fruitful. But here is the point I was trying to make: humans, as individuals and as collectives, do indeed have a lot of experience outsourcing intellectual jobs. They do this knowing full well that the "expert" they're employing is not a deterministic box, and may in fact be secretly working against their interests. None of those problems or potential issues is different if the expert is a human or a silicon agent.

              • mrob a year ago

                The human agent has a physical body like yours and shares your evolutionary history. They have reasons to care about things like reputation and social status. An AI agent only "cares" about maximizing or minimizing a number. It's much more difficult to determine if this aligns with your interests.

                • tac19 a year ago

                  Maybe. But a human agent also has personal needs, desires, and self-interests that may motivate treachery. Humans have an evolutionarily proven propensity for duplicity and deceit. It may turn out that some silicon experts are more neutral and less prone to betrayal, and are worthy of trust.

  • _vertigo a year ago

    All of your examples are tools that incrementally improve upon what came before and, importantly, still require the user to have an understanding of and experience with the underlying skill.

    The risk being posed here is that AI may not land as an incremental improvement that still requires the user to maintain some understanding of the underlying skill.

    We aren’t quite there yet with the current LLMs. You still need to have a base level of knowledge to use them effectively. But if they were just a little bit better, hallucinated just a little bit less, the value of actually knowing things goes way down.

    What would the incentive be to learn the underlying skill or area if the LLM can handle things just fine on its own? Why not just let the LLM figure it out and do it? And at that point, it ceases to be a tool and starts to be something you are completely dependent on. That is a risk.
