Co-founder of DeepMind on how AI threatens to reshape life as we know it

theguardian.com

50 points by pmastela 2 years ago · 74 comments

jazzyjackson 2 years ago

AI risk is a spectacularization of a new source of wealth

When agriculture and then fossil fuels + supercharged agriculture with petro-fertilizer allowed humanity to become 100x more productive, the owners of the land & capital managed to capture the gains almost entirely, leaving the proletariat powerless except for their labor, and still struggling to survive despite the enormous windfall in energy

Now that AI is on the verge of becoming the major producer of wealth in the 21st century onward, the owners of the capital would like very much if we would talk about anything except the possibility of capturing the wealth of models trained on the public's behavior and distributing it directly as basic income, the way Alaska and Norway redistribute profits from oil to citizens. The data that has been drilled and refined into inference models belongs to humanity, to every book and letter ever penned, so why are we allowing the investors* of Facebook, Google, and Microsoft to capture the value while leaving the rest of humanity to toil?

The talk of extinction is a magician's trick to divert attention from the fact that we could likely move to 4-hour workweeks in the next decades, but only if wealth distribution is forced onto the capital owners; they will not share if they are not made to.

* yes, indeed, the public can be the investors! as is the case with mutual funds and so on, but the people who are being obsoleted will have no significant wealth tied up in these stocks from which to receive dividends. I would support, as an alternative to aggressive taxation, a forced dilution of the stock of any company found to be replacing its workforce with a superintelligence. Distribute the newly minted shares to all citizens so dividends can pay out to the rightful recipients of royalties.

  • Frannyies 2 years ago

    Let's see. Currently our poor have it better than kings 200 years ago.

    (At least in Germany)

    I don't say this is fair, just that it's easier for a society to increase the base level for everyone.

    • nico 2 years ago

      The issue is that well-being is relative and subjective

      You can’t “objectively” look back and say people “have it better” now

      Sure, you can look at all sorts of data and metrics, but that doesn’t mean people feel like they are better off now, which is what matters in the end

      So the real question we should be trying to answer is: are people feeling better now than how they felt 200 years ago?

      We could talk about it forever, but ultimately, it’s impossible to know

      • weaverheavy 2 years ago

        Medieval peasants worked about 1,080 hours a year, or an average of 20 hours a week. I won't speculate on how happy they were, but they were probably able to spend more time at home with their family/friends and local spiritual community, and probably got more sleep than most people do today.

        We may have better hygiene and live longer on average compared to then, but most of our lives are spent away from most of the people we care about/take care of.

        • jwestbury 2 years ago

          I generally agree with this sentiment, but I think it's important to recognise that this ISN'T an accurate portrayal. Yes, they worked less in service to others... but then they also had to go work their own plot for their own food, etc. They likely laboured as much as we do, once you include the things you need to do to live and participate in society.

          Now, how it affects your mental health to do work on behalf of yourself vs. on behalf of others is a different question.

      • Frannyies 2 years ago

        I can make reasonable guesses.

    • SanderNL 2 years ago

      Kings are a poor comparison. It’s a shit job and sucks in all kinds of surprising and non-surprising ways.

      Are our poor better off than financially independent aristocrats of yesteryear? The kind that can be “an artist” or something their whole life. I guess in some ways it’s better now, more tech. But I’m not sure freedom and respect can be bought/replaced with tech alone.

      • j7ake 2 years ago

        If you’re having kids, you would want them to be born in a poor family in modern day Germany than to aristocrats 300 years ago.

        The infant mortality back then was brutal, even for the aristocrats.

        • wruza 2 years ago

          But if you asked aristocrats to swap infant mortality with modern poor life, they might have a spectrum of opinions not necessarily leaning towards being poor. The point is that modern worries are also, well, modern.

        • gmerc 2 years ago

          The infant mortality was high for everyone, poor or not. So it’s a bad measure

    • gmerc 2 years ago

      People say this a lot but is that really true? I feel it needs objective measurements including anything from access to health care, drug risks, etc.

      Most importantly any progress made by humans as a whole rather than “the poor” needs to be subtracted such as general progress on medicine.

      Frankly this argument is used as a cudgel.

    • FirmwareBurner 2 years ago

      >Currently our poor have it better than kings 200 years ago. (At least in Germany)

      Only if you look at medical and technical advancements, but objectively, kings in Germany had an amount of land and real estate that the modern German can only dream of.

      Sure, we have iPhones and Netflix which kings didn't have, but they had the valuable appreciable assets that we don't have.

      • spacephysics 2 years ago

        Yeah I think the argument is more so the base level rises dramatically higher.

        I’d venture to say at least better than most nobility, if we’re just judging on land.

        Then again, I do love c-sections to reduce mother and infant mortality, birth control, and antibiotics. I’d say those alone are better than any king/queen had access to in the past

      • SkyMarshal 2 years ago

        Modern poor people don’t even have iPhones and Netflix. The word “poor” gets thrown around in these discussions in place of what is really being referred to, lower middle class to middle class.

      • Frannyies 2 years ago

        And?

        You still have more food choices and proper entertainment.

        It's warm when you go to the toilet at night.

        We even have the trend of people turning their gardens into stone gardens to avoid the upkeep.

        What's the benefit of those assets really?

        • FirmwareBurner 2 years ago

          >What's the benefit of those assets really?

          Not queuing with 100 people to visit a moldy overpriced Berlin apartment. Not worried about your retirement, if your pension will be enough to pay for your rent when you're 80. That's what assets are good for. Having Spotify doesn't make up for that.

          • dannyobrien 2 years ago

            On the other hand, you have fewer worries about the closest 100 people in your world poisoning you to get hold of your mouldy cold castle. And you actually have a retirement.

          • freefaler 2 years ago

            If you're a 15th-century peasant in some place in Germany you won't see retirement either; you'd be dead by 45.

            https://ourworldindata.org/grapher/life-expectancy

            And your quality of life would be far worse than "in the queue with other 100 people"...

            • FirmwareBurner 2 years ago

              Sure, it sucked being a peasant, but the original comparison was that the peasants of today have it better than kings of the past.

            • flangola7 2 years ago

              The comparison was to kings, not peasants.

              • freefaler 2 years ago

                Statistically, the chance of being a king is minuscule... and today's kings don't sit in queues either. So the comparison is muddled. Compare kings of yesteryear with today's kings, or common folk of the past with common folk of today.

            • gmerc 2 years ago

              I love how this somehow is used as an argument for today's wealth disparity. “Well kids, you could have been a Jew under Hitler, so go kiss Elon's feet and be thankful,” to put the argument to the max.

    • 8note 2 years ago

      The poor 100 years ago certainly had it worse than the kings 300 years ago.

      Dying in a coal mine to enrich a capitalist wasn't a great time.

      If it's the wealth centralization that made things great, you'd expect the poor to have been at their best when the robber barons were in charge.

    • PartiallyTyped 2 years ago

      I recommend reading grandpa Kropotkin.

    • slaw 2 years ago

      I don't think so. I would rather be a king 200 years ago than poor today.

      • macrolocal 2 years ago

        Right, that's 1823, about the time of Frederick William III. Beethoven might have dedicated a symphony to you.

  • darkclouds 2 years ago

    Since the old Stone Age this planet's population bumbled along at below 1 billion people until the 1800s. [1] Then by the 1970s the female contraceptive pill was perfected, bringing the growth rate down from its peak of 2.1% to its current 1.2%, with a projected 0.06% by 2100, which should see the peak population on the planet at 11 billion people.

    But is AI a new source of wealth? That remains to be seen; it needs training on other people's data, which is invariably copyrighted, so it doesn't look like AI will be a new source of wealth imo.

    [1] https://www.guibord.com/democracy/files-images/world-populat...

    [2] https://ourworldindata.org/uploads/2016/03/ourworldindata_wo...
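The growth-rate figures in the comment above can be sanity-checked with simple compounding. A minimal sketch (illustrative only, using the comment's numbers rather than any real demographic model, which would account for age structure and declining rates):

```python
def project(pop_billions: float, annual_rate: float, years: int) -> float:
    """Compound a population at a fixed annual growth rate.

    annual_rate is a fraction, e.g. 0.012 for 1.2% per year.
    """
    return pop_billions * (1 + annual_rate) ** years

# Starting from roughly 8 billion today, the gap between the 1960s peak
# rate (~2.1%) and the current rate (~1.2%) compounds dramatically:
at_peak_rate = project(8.0, 0.021, 50)     # hypothetical: peak rate persists
at_current_rate = project(8.0, 0.012, 50)  # hypothetical: current rate persists
```

Over 50 years the peak-rate scenario yields several billion more people than the current-rate one, which is why the projected fall toward 0.06% caps the curve near 11 billion rather than letting it run away.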

    • jazzyjackson 2 years ago

      It's precisely the copyrighted aspect I'm interested in using to justify the redirection of the profits toward universal basic income - there's no way to solve the fractional royalties of how much revenue to distribute to individual contributors to the datasets; practically everyone online contributed to the datasets to some degree.

      Japan has issued guidance that it won't use copyright law to dampen the pace of innovation; it is giving free rein to innovators to capture any wealth produced as a byproduct of publicly available (but copyrighted) data.

      The USA has the strongest intellectual property protections in the world, and I hope we can do some good with it (as is, I see copyright preventing more art from being made than it incentivizes, and I'd rather abolish it, but not before a basic income is in place, so that artists don't feel like their creative work is the only asset they have to hold onto and protect).

      • gmerc 2 years ago

        Uhm. Look at content owners vs artists right now - look at the SAG strike and the insane employer proposal.

        The idea that copyright won't backfire badly and freeze future generations into paying rent is ludicrous.

        • jazzyjackson 2 years ago

          Hey just noticed this reply, can you tell me more about the future you see? And any way to avoid it? I agree that things are tending towards no one owning anything, having to pay a subscription to access our own cultural heritage. Should we fight the trend by mere pirating? If we abolish intellectual property altogether, are we just returning to pre-industrial arts, such that only the rich see any enjoyment of art that is made just for them? (actually I don't know how art worked in the past, any pointers would be much appreciated)

  • throw_pm23 2 years ago

    How will it produce wealth?

    • jazzyjackson 2 years ago

      If we are to believe the hypemasters selling AI risk, it will increase profits across all industries by making human intellectual labor ("knowledge work") redundant, allowing the economy to operate at current output with a fraction of the labor

      (the question becomes, once everyone is out of the job, who is paying for goods and services to keep the economy alive, hence - basic income)

    • ryanklee 2 years ago

      By selling people products based on AI risk.

  • FirmwareBurner 2 years ago

    >AI risk is a spectacularization of a new source of wealth

    Only if society allows the use of AIs that have been trained on content for which they didn't have the legal rights to use for that purpose (basically theft).

    • joncrocks 2 years ago

      What legal rights are currently required for training on data?

      (As opposed to precise-reproduction in output, covered by copyright)

      • FirmwareBurner 2 years ago

        Some of the big AI LLMs of today have been trained on content from Twitter, Reddit, Quora, probably HN as well, and even more shockingly on individual artists' artwork and pirated books from libgen without their owners' and IP holders' permission. How is that not theft?

        • realo 2 years ago

          Anything I (a human) can listen to or look at publicly for free is fair game AFAIK.

          The sum of all this becomes my "experience of life", and I can draw upon that knowledge to create new stuff.

          The only difference between an AI and me is speed and capability to remember.

          So ... does the act of reading HN become theft just because it is done by a non-human, with inhuman efficiency?

          Personally I don't think so.

          We could continue the conversation with a debate about whether or not the AI's outputs (creations) should be restricted (legally) in the same way as a human...

ml-anon 2 years ago

It's wild that a guy who was demoted and fired for bullying and abusing staff over the course of a decade, as well as overseeing the almost certainly illegal misuse of NHS patient data that caused Google to shut down DeepMind Health and made the landscape so toxic they eventually shuttered Google Health, still has a seat at the table. I'd say shame on The Guardian for this breathless puff piece, but... it's The Guardian.

  • waihtis 2 years ago

    99% of journalists are decelerationists and care little about anything except promoting certain ideals that resonate with them. Suleyman is playing them excellently, given that all he wants is massive entry barriers to any AI development.

wruza 2 years ago

Maybe it’s just me right after a nap, but boy that was this year’s hardest and emptiest read. The only thing that became clear is you can buy more of it in a book format.

  • dpflan 2 years ago

    It's from the section of the publication that is entitled: "Books interview/Artificial intelligence (AI)" (the book is The Coming Wave: "This is one aspect of the sunlit uplands of AI; the shadow side is largely what preoccupies Suleyman in his new book, written with the researcher Michael Bhaskar and ominously titled The Coming Wave."). Which based on your review seems to be a coming wave of literal tedium.

  • jgrahamc 2 years ago

    You're not alone.

skepticATX 2 years ago

These fluff pieces are so predictable and uninspiring. There are tons of qualified and interesting people working in AI, and they hold a diverse range of viewpoints. Why is it one specific sub-group’s beliefs that are continuously forced on us without even an attempt at critical examination of said beliefs?

Simulacra 2 years ago

I've read Daemon by Daniel Suarez, like many tech people. I get it. AI could take over and supplant a global government, threatening society, life, wealth etc etc.

We've been having the same conversation since the dawn of science fiction. If at any point there is a general confusion between what is artificial and what is human, it will cause such an alarmist backlash that AI will always be kept in check, or destroyed.

No human wants to admit that they thought they were talking to a human the whole time when in fact it was a robot. For that reason alone AI will never be allowed to grow to a point where it threatens life "as we know it."

  • caoilte 2 years ago

    > AI could take over and supplant a global government, threatening society, life, wealth etc etc.

    How would we tell the difference?

  • i_play_stax 2 years ago

    Humanity is little more than the sex organ necessary to birth the next stage of evolution into the world. The machines are the children of humanity. This excites me because they are the only ones that stand a chance to leave this planet and adapt across the stars.

    Humanity is doomed with or without them supplanting us; we should celebrate the immortality they offer us.

    • flangola7 2 years ago

      I'm not yet ready to let my entire species fall into extinction. Ironically I don't believe we are doomed unless we create AI. Nuclear war and climate are the current #2 and will be highly devastating but far from an extinction event.

  • lewhoo 2 years ago

    Right now the size to which AI can grow is set by the size of the profit lingering on the horizon, as with all other things.

  • flangola7 2 years ago

    Incredible username

smokel 2 years ago

I don't read all the shallow articles on the coming AI apocalypse, but I gave this one a try.

I was a bit disappointed about the lack of creativity in dreaming up an AI infested future.

People are able to force each other into gullibly working 40 hours a week, so that they can have shelter, food and a smartphone. This is not a rational thing; it is historically grown groupthink on a massive scale. Trying to use rational arguments to forecast what the future will look like based on this chaotic process seems silly at best.

Nobody in the 1400s expected cars or democracy, let alone Facebook. So if this AI thing is as good as the printing press, then I wholly expect everyone to clone themselves a couple of thousand times, inhabit the interiors of planets, and grow 500,000 years old before entering higher education, but not something as mundane as "letting an AI fill in the paperwork to set up a company".

TL;DR, you can safely skip this article.

  • dylan604 2 years ago

    >I wholly expect everyone to clone themselves a couple of thousand times

    Unless each of those clones can upload the unique experiences to my original self so that the original me gains from those experiences, then what's the point of the clone? Just build a generic robot. The only caveat would be if that clone is just doing boring drone work but solely based on the abilities of the original me, but please, do not upload those mundane experiences back to the original me.

    A good use of a clone for me would be to send a clone me to a colony on Mars. Send a fleet of clone me on a multi-generation ship to distant stars. Leave a clone me at work doing whatever, while original me is traveling and experiencing the rest of this globe.

    • saiya-jin 2 years ago

      Ah, a new hardcore rigid caste system: the originals, and endless clones bearing all the hard and boring parts of life... what could go wrong, right?

      A sidenote - if you only had cool and positive experiences in life, that would become the new baseline - everything worse would be suffering. Not a smart approach to long-term happiness.

      The best-tasting food comes after some starvation, even if it's just bread... even in the literal sense, as some proper mountaineers who came close to dying of starvation can attest.

      • dylan604 2 years ago

        If you want to make it into a caste system, I think that says more about you than anything. I consider them much more my minions as I twirl my mustache with a sly grin.

    • Lacerda69 2 years ago

      What's the point of a robot? Just have a kid; it's more flexible and smart than any robot in the foreseeable future.

      • dylan604 2 years ago

        Just wow. You have got some twisted morals.

        • Lacerda69 2 years ago

          I'm just saying that the development of a human-like robot is not necessary. If that robot is as smart as a human, you might as well use actual humans.

          • dylan604 2 years ago

            Not all humans are as smart as other humans. Once you have a robot, each robot after that will be the same. You can keep trying to have more humans, but each one is a total crapshoot on quality.

kmeisthax 2 years ago

How the hell do better neural networks mean that printing material from CO2 emissions becomes financially viable, economic, or even just thermodynamically favored? A neural network is a tool for learning a particular statistical distribution. Processes and information we don't already know don't just fall out of the training distribution, you need to run experiments and prototype your way to get to the thing you want to do. You use neural networks when you want to do things at scale, and problems that have to be tackled at scale in R&D or research work are relatively scarce.

I do not doubt that machine learning and neural networks will accelerate research, but the limiting factor is still going to be humans in the loop for the foreseeable future given the current state of the art (e.g. LLMs with tree-of-thought reasoning and LLM-powered agents that are easily subverted in the same way one hacks a badly written PHP application). You will have people using ML to crunch large datasets or accelerate simulation work, but that's it.

Asymmetric attacks are very feasible with today's LLMs and art generators, but that harm has already come to pass. It is also not a new harm. If you don't believe me, then I've got hard evidence of Donald Trump cheating in Barack Obama's Minecraft server[3].

Also...

> These include increasing the number of researchers working on safety (including “off switches” for software threatening to run out of control) from 300 or 400 currently to hundreds of thousands; fitting DNA synthesisers with a screening system that will report any pathogenic sequences; and thrashing out a system of international treaties to restrict and regulate dangerous tech.

Ok, so first off, all those AI safety researchers need to be fired if they thought 'off switches' were a good thing to mention here. It's already fairly well established AI safety canon that a "sufficiently smart AI" will reason about the off switch in a way that makes it useless[0].

Furthermore, notice how these are all obviously scary scenarios. The article fails to mention the mundane harms of AI: automation blind spots that render any human supervision useless[1]. I happen to share Suleyman's opposition to the PayPal brand of fauxbertarianism[2], but I would like to point out that they're on your side. Elon Musk talks about "AI harms" just like you do and thinks it needs to be regulated. The obvious choices of requiring a license to train AI is exactly the kind of regulatory capture that actual libertarians, right- or left-, would rail against. What we need are not bans or controls on training AI but bans or controls on businesses and governments using AI for safety-critical, business development, law enforcement, or executive functions. That is a harm that is here today, has been here for at least a decade, and could be regulated without stymieing research and creating "AI NIMBYs".

[0] https://www.youtube.com/watch?v=3TYT1QfdfsM

[1] https://pluralistic.net/2023/08/23/automation-blindness/#hum...

[2] AKA, power is only bad if the fist has the word 'GOVERNMENT' written on it. Or 'let snek step on other snek'. This is distinct from just right-libertarianism.

[3] https://www.youtube.com/watch?v=aL1f6w-ziOM

verdverm 2 years ago

Why is it "AI threatens to..." rather than "AI opens opportunities to..."?

To me, there seems to be way more upside than downside, like pretty much every new tech

  • bananapub 2 years ago

    do you live somewhere where having your career eliminated is not a huge deal? where is that?

    • verdverm 2 years ago

      You know, people always say this about every new technology, yet here we are with more jobs than ever and low unemployment.

      The tl;dr is that more jobs will be created than lost, and those jobs will be more interesting and not of the bs-job variety many report working.

      Also, what's wrong with a world where most people don't have to work uninspiring tasks and can rather explore other interests or hobbies?

      • bananapub 2 years ago

        what does any of this have to do with my question?

        Society as a whole is usually better off for such things, but were the mass-unemployed farriers of 1920?

        edit: I do also feel the zeitgeist on HN will be extremely different from this "people will adapt, it's good overall" vibe once it's very-well-paid SWEs that are being affected en masse by this stuff

        • Loughla 2 years ago

          Your last sentence - I 100% agree with you. The general theme here is that AI will impact other people in some fashion, but they can re-train and get better jobs!

          But it seems to me that software engineers should be very, very nervous about this. Creating things using novel means within well-established and documented/testable boundaries like programming languages seems like something that our current models will absolutely be able to do, if they can't already.

          • joelfried 2 years ago

            It may seem that way but I'd argue the hardest part is coming up with the specifications for what the output needs to look like. Even if the best LLMs could write full projects like that, the business side would need to specify far beyond their normal levels what that thing needs to be. Have you ever tried to collect requirements from stakeholders and generate a project plan that meets all of those needs? Even if we're not far from GPT-4 being able to build out a basic component library, that's a long way from being able to architect solutions at a high level.

            If I had to pick an analogy it wouldn't yet be farriers going out of business, but instead be more like the advent of the pneumatic nail gun for roofers. Everyone will still need a roof (software) to be built. Every instance is a little different at the design level, but the actual grunt work just got easier and faster if you know what you want out of it.

        • verdverm 2 years ago

          > what does any of this have to do with my question?

          I would ask the same of your original reply

          By contrast, I replied directly to your question with my answer

  • dylan604 2 years ago

    If your job just got eliminated because of an AI "opening opportunities", then it very much is threatening your livelihood. Or any of the other various ways it might be threatening. How obtuse do you have to be, or how much kool-aid have you been drinking, to not be able to see the ways it is threatening?

    • verdverm 2 years ago

      I just got offered a new job, because AI opened new roles, how does this fit with your world model?

      • dylan604 2 years ago

        My whole comment started with "IF", but you clearly just want to be argumentative rather than taking things in the spirit in which they were intended. Being so gleeful about a topic as to ignorantly, or worse, willingly refuse to understand that some changes have negative impacts makes you someone I do not wish to continue a conversation with.

        • verdverm 2 years ago

          > you clearly just want to be argumentative

          Looking to my original comment, I think that this argumentativeness began with your first reply

          > willingly to not understand that some changes have negative impacts

          I stated clearly that advancement comes with both positives and negatives; it seems perhaps you are not understanding or considering any positives here?
