Amazon CEO says AI agents will soon reduce company's corporate workforce

cbsnews.com

85 points by djcollier 6 months ago · 123 comments

jppope 6 months ago

I think the fair way to read any CEO's comments about AI reducing their workforce at this point has nothing to do with the capabilities of AI and everything to do with the revenue or growth outlook. Basically, they are "narrative shopping" to save face with the stock market... and IMO it might work.

Just a basic sniff test though - If AI enables developer productivity that would translate to more revenue, reduced costs, reduced risk, etc. The bottom line numbers would get better. With more resources available your next move is to decrease spending on more productivity enhancements or revenue opportunities? They don't want more revenue? Doesn't add up.

The better headline would be: "Amazon CEO Andy Jassy, faced with a poor financial outlook, tries to convince the public that downsizing is due to improvements in AI"

  • sahaj 6 months ago

    It has more to do with workforce wage demands and output. Imagine if we all think that AI will get rid of our jobs at some point. We will work harder and demand less money to do that work. All the large corps did huge coordinated workforce cuts in 2022-2023, and now there's this narrative. They are simply trying to get us to do more work for less money.

    • jppope 6 months ago

      Even hiring people at minimum wage costs a lot of money... Maybe they are firing people so they can rehire at a lower cost, but that's a really expensive way to get people to work for less.

  • fsndz 6 months ago

    Klarna tried this narrative shopping strategy first and it backfired: https://fsndzomga.medium.com/i-have-no-confidence-in-klarna-...

  • burnte 6 months ago

    True, however the pitch itself is telling. They're not saying "we're expecting AI to boost the productivity of our employees by 20% with no increase in labor costs." They're saying "we're going to spend less on humans" because investors are more ok with spending money on machines than people. That's the problem, they're not looking at how AI enhances people, they're looking at how it can eliminate people.

    • thunky 6 months ago

      Why would you expect different?

      Companies don't exist to benefit their employees (or their customers).

      • bredren 6 months ago

        Boosting second sentence here, because I think it is a foundational problem for a company trying to behave like Apple.

        If the company was owned primarily by employees and customers I think it would be more likely to carry out its mission.

      • burnte 6 months ago

        I would expect different because there are two major philosophies in business, and they indicate the direction a business will go: a focus on profit/growth, or a focus on costs/value extraction.

        Before anyone straw mans "but you can do both", yes, you can, but only one can be the MAIN way you run a business.

        A focus on profit and growth leads to seeing employees as strategic resources ready to be deployed to serve the business and generate revenue. You want to equip those employees with the best-matching tools so they can do more. In this case, a growth leader would say "AI can make our people 20% more productive without adding headcount, and lets us refocus another 10% of employees to more productive/profitable tasks, better utilizing the internal knowledge those employees have." These are the companies that tend to become more profitable and successful, because leaders understand that good employees are an asset, not a liability.

        The other focus, on costs and value extraction, sees the business as a zero-sum game: in order to increase profits, we must decrease costs or find a way to extract more value from customers. These are the companies that will reduce service levels with no change in pricing to improve profits. They'll understaff a facility to find the "true minimum number of employees" (the bare minimum), depending on some employees to go "above and beyond" to get things done for no additional compensation. They'll get rid of expensive employees, settle for replacement employees who are 75% as good but will work for 65% of the money, and keep headcount the same; worse service, but proportionally lower costs. These companies may be huge, but they're not market leaders; they're reactive and basically rent-takers.

        When a leader begins to resent employees being paid to do their work, it's time for that leader to go, because they'll simply start that company on a decline.

        Basically, there are those who understand the difference between cost and value, and those who are too focused on cost to even understand value. If your first response to AI is "we can cut people" you're in the latter camp. Your first reaction is to cut costs, rather than to exploit the new tech for even higher profits.

        Eyes on the future instead of the past is how you grow and succeed. Employee success leads to company success. Employees are not a cost center.

        • thunky 6 months ago

          Respectfully disagree...

          We all know the story of WhatsApp serving a billion users with 50 employees, while Slack had significantly more employees and not necessarily a better product or business.

          If Slack were to cut their staff down to 50, that would mean they are the ruthless type that only cares about value extraction and doesn't value their employees, right? But if they had started with 50 and kept it there, they'd be celebrated for efficiency like WhatsApp?

          Point is, there is no way to know how efficient a company is, or what their philosophy is, just by looking at their financials and their headcount. Even if the headcount moves.

          • burnte 6 months ago

            All generalizations are false, and I wasn't trying to say that's a hard and fast rule for every single company.

  • crop_rotation 6 months ago

    > Just a basic sniff test though - If AI enables developer productivity that would translate to more revenue, reduced costs, reduced risk, etc. The bottom line numbers would get better. With more resources available your next move is to decrease spending on more productivity enhancements or revenue opportunities? They don't want more revenue? Doesn't add up.

    If that was true then the companies should never have been doing layoffs, as all these companies are generating tens of billions of dollars in revenue.

    > The better headline would be: "Amazon CEO Andy Jassy, faced with a poor financial outlook, tries to convince the public that downsizing is due to improvements in AI"

    This is assuming that companies have the capacity to keep increasing revenue by adding more workers, which is just not true. At some point you hit diminishing returns with more workers. The same goes for agent workers. To chase more revenue you need a lot more than just more SWEs, and much of that doesn't currently scale the same way.

  • jchw 6 months ago

    Even though I can fathom a world where AI tools could somehow lead to reduced head count, I think this is the only reasonable interpretation right now. After all, tech companies have been beating the "downsizing due to AI" drum for a really long time now, and for almost the entire time it has been very blatantly obvious bullshit.

    • another_twist 6 months ago

      What amazes me is the vile shit directed towards software engineers. Remember that letter from the dickhead investor to Google demanding to know why engineers were paid 450k? Or the PMs beating the drum about how you don't need engineers? My reaction to that was: hey, all the tools you love to throw in engineers' faces were devtools. You know the dev part is important. Remember Bubble? Their whole schtick was that you don't need a technical cofounder, and I have yet to see a unicorn Bubble app. Instead of talking about how we can all collaborate, people are really hating on engineers for nothing.

  • CharlesW 6 months ago

    > I think the fair way to read any CEO's comments about AI reducing their workforce at this point has nothing to do with the capabilities of AI…

    You can legitimately argue "far less to do with", but it's definitely not nothing. There are countless projects underway where AI will allow for 10% reductions with zero business impact in the short term, and 25-40% reductions (sometimes more) by 2030.

    • jppope 6 months ago

      OK, so you can reduce your head count without impact. Great! Why would you get rid of those people? Why not reassign them to other productive or revenue-generating activities?

      The only logical explanation is that they don't have enough opportunities to utilize those people OR as I previously mentioned... their financials might look bad, and they are trying to make them look better so they don't take a hit in the markets.

      • TheOtherHobbes 6 months ago

        Fire people. Stonks go up. Bonus!

        Stonks go down - fast - when all those fired people stop buying, but that's a problem for the next CEO.

        As you say, they could also expand. Or just fix the problems with the site.

        But they don't have the imagination to do that.

    • diamond559 6 months ago

      Countless projects, huh? CMU found the best of them has only a ~30% success rate on basic business tasks. Many are still below 90%, but yeah, let's just pull magic numbers out of thin air. How much NVDA do you own, bud?

      • CharlesW 6 months ago

        Zero Nvidia. The CMU benchmark is fun, but tasks <> jobs. They found that agents can autonomously finish about a third of their simulated office tasks, but that can't be mapped to a labor-market forecast.

    • rsynnott 6 months ago

      > There are countless projects underway where AI will allow for 10% reductions with zero business impact in the short term, and 25-40% reductions (sometimes more) by 2030.

      Are there any where it empirically _has_ done, or are we still in jam tomorrow mode? Like, there is a very big industry devoted to selling this stuff; I'd be _extremely_ cautious about promises and projections.

    • another_twist 6 months ago

      I am curious, where are these numbers from?

      • CharlesW 6 months ago

        These are realistic (IMHO, of course) projections based on studies I've helped with and conversations I've had with my network. Naturally, the impact will vary enormously based on roles, and the timelines won't be evenly distributed.

        But these kinds of projections aren't unusual at all — if you use the Deep Research capabilities of modern models to build a list of public projections for your own research, you'll see similar estimates. These reports will generally use the framing of "efficiency gains", where AI will "free-up employees from drudgery to focus on higher-value work", but my intuition is that a future where all individual contributors are elevated to Director of Agentic Workflows is probably not the most likely outcome.

        • diamond559 6 months ago

          What studies? MIT estimates only 5% of the workforce can be replaced long term. What tasks are your employees using AI on? CMU shows the best LLM only has a ~30% success rate on basic business tasks. Are you a vibe-coding startup or something?

          • CharlesW 6 months ago

            > MIT estimates only 5% of the workforce can be replaced long term.

            The model by MIT's Daron Acemoglu estimates that ~5% of U.S. tasks can be completely and profitably automated by AI within ten years.

            It was expressly not a head-count forecast, and didn't attempt to quantify the headcount reduction that AI augmentation could enable.

        • another_twist 6 months ago

          I see, and are these studies public? Could we see the data and the methodology here? Thing is, there are benchmarks to judge the software engineering capability of AI. I am more interested in how the jobless predictions are made.

          I understand all the theory, but it can largely be condensed into: AI makes the workforce more efficient, so you need fewer people. But there are no good studies AFAIK that measure AI-powered efficiency, and surely nothing about how to model workforce reduction due to AI. I am curious what the science is behind these opinions.

        • rsynnott 6 months ago

          > These reports will generally use the framing of "efficiency gains", where AI will "free-up employees from drudgery to focus on higher-value work"

          Okay, but what are these reports _based_ on? Everything I've seen along these lines has been, essentially, marketing material; there seems to be very little hard data suggesting this kind of outcome.

  • whatever1 6 months ago

    In my case AI has boosted my productivity in directions that are not on the critical path for a project, but are nevertheless very nice to have.

    For example now I have a ton of graphs and interactive UI pages that interact with my code. Made everyone’s lives easier, but at least in my case it was not a dealbreaker not having these, and frankly nobody was willing to pay for them.

  • cyanydeez 6 months ago

    Reduce it further: they now have a cudgel for pushing salaries lower.

    So it really doesn't matter what's realistic. They want cheaper workers who live in fear.

  • conartist6 6 months ago

    Yes, your point about revenue is very astute

JCM9 6 months ago

This headline is getting old and the story isn't sticking with folks. They will do layoffs, but because core business units are struggling and AWS has turned into a mess of disconnected services that are falling behind peers, and they're trying to clean up that bloated mess… not because of "AI."

Amazon is also way behind tech peers on AI. These sorts of puff PR pieces don’t do much to shake that reality.

  • zdragnar 6 months ago

    It definitely has the "thought leader" vibe that most fluff pieces from C level types have.

  • crop_rotation 6 months ago

    > Amazon is also way behind tech peers on AI. These sorts of puff PR pieces don’t do much to shake that reality.

    Which tech peers is Amazon way behind in AI? Neither MSFT nor AAPL has their own models. FB has no path to model monetization. GOOG is unique, but that's it, and Amazon might be able to better capitalize on its AWS enterprise customers. Amazon was way behind, yes, but at this point they are positioned well enough to execute.

    • jdgoesmarching 6 months ago

      “Behind on AI” isn’t exclusive to models. Azure is raking in money on OpenAI compute and has an entire product line built out in Copilot. I’m not going to argue for the quality of those offerings, but it’s clearly positioned much better for the street than Amazon is.

      Google you’ve already covered, and Apple despite its faults has been designing and producing AI-targeted hardware for a decade and has a much clearer story for integrating AI into its lineup.

      AWS has a scattered mess of Q-branded services and a consistent track record of shipping garbage enterprise apps like WorkMail, Chime, WorkDocs, Cognito, and arguably QuickSight. Bedrock APIs frequently lag their parent vendors' features, and Bedrock as a whole isn't better than the thousands of LLM management platforms that have already sprung up.

      I’ll never fully bet against Amazon as the far and away cloud market leader, but their existing AI position is flimsy and their increasingly hostile position towards their workforce reeks of desperation.

      • mips_avatar 6 months ago

        I think the bear case for Azure is that they're now just a GPU service provider for OpenAI. This is a business, but it's not a great business. None of their AI products are industry leaders, and nobody would pick Copilot if given a choice. Microsoft can't reuse their Teams bundling strategy because the unit costs of Copilot are so high. Like, for real, where is the actually functional PowerPoint or Excel agent? It's not coming, because these products are so sclerotic that there's no interface inside these codebases for a current-gen AI agent to use and provide customer value. Microsoft made its bed years ago by chasing the next shiny object and building a culture of crushing individual employee agency. Executives might think that layoffs can instill innovation through fear and grind, but that is so misguided.

  • readthenotes1 6 months ago

    Anthropic is way behind? That might surprise a few people.

    https://www.reuters.com/business/retail-consumer/amazon-cons...

  • NitpickLawyer 6 months ago

    > and the story isn’t sticking with folks.

    This take is getting old, and the story won't stick with folks till their desk is in a cardboard box...

    Out of all the faangs, amz is the best positioned to remove staff and agentify the work they were doing. First, amz constantly churns the lower x%. They've been doing this for years now. They know what to count and who to fire. Second, amz has had everyone write a story about everything they do, day in, day out for years now. Change a lightbulb? Not without a story. Guess what you need for training LLMs? Yup, stories.

    There are plenty of people writing stories and coordinating the writing of other stories. Those people will be the first out. It's never the top nor the bottom.

daxfohl 6 months ago

I don't really get it. If AI is a force multiplier that suddenly makes your workforce way more productive, wouldn't you actually want to increase the size of your workforce to reap the maximum benefit?

There are only two reasons I can think not to. First, if AI can fully replace a human in a role. But it seems like we're a long way away from that. Second, if the added productivity leaves you with nothing to do. But we're in tech. There's always something new to do. If you're not doing new things as a company, you're getting replaced by those who are.

So it seems like a losing strategy to make your workforce cost reduction your primary concern when we could see the greatest workforce productivity gain in modern times.

  • nerevarthelame 6 months ago

    Good news, everyone! I invented a new lubrication that increases wind turbine energy production by 10%. So we obviously want to decommission 10% of our wind turbines.

  • takklz 6 months ago

    Same thought I had. Almost every piece of technology that I use is broken in some way: UI bugs, connection issues, missing obvious features, missing non-obvious features that might be specific to me, terrible UI, etc.

    If AI is so useful that it can fully replace engineers or other humans, why aren’t products next level amazing?

    If the barrier to entry for these high margin tech companies becomes so low that they no longer even need employees, isn’t the next step to compete on quality?

    • crop_rotation 6 months ago

      Because products never become next level amazing. Hardware got so fast and yet most software keeps getting bloated. Given a choice between writing near perfect software and cramming more features, almost all companies cater to more features (except in some rare domains or cases). Both because the latter is easier and because that's what people demand (not by their words but by their expressed preferences).

    • supertrope 6 months ago

      The market has reached an equilibrium of the minimum quality a business can get away with before customers switch away. Customers usually prioritize time to market or price before quality. There’s still a niche for excellent quality tech but you will pay much more for it.

    • drewbeck 6 months ago

      1. Software issues are not merely technical, they’re human. Someone has to care about the issue and prioritize it and get it fixed. 2. Many products don’t compete on software because there are more substantive market forces at play.

      AI won’t fundamentally alter either of these facts.

  • hiddencost 6 months ago

    Coordination costs in software are brutal.

    More companies with smaller workforces would be better than fewer companies with larger workforces.

    • another-dave 6 months ago

      AI as a coordination multiplier would be interesting in large orgs — the AI assistant that trains on internal newsletters & minutes of all-hands says "I think you should loop John Doe from team X into this discussion because 1 year ago he ran point on something similar"
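
      As a toy illustration of that idea (everything here is invented: the names, the notes, and the crude word-overlap scoring are stand-ins; a real assistant would run an LLM or embedding search over internal docs), something like:

        # Toy "who should we loop in?" helper: rank archived meeting notes by
        # word overlap with a new discussion and surface the author of the best
        # match. Real systems would use embeddings or an LLM, not raw overlap.
        NOTES = [
            {"author": "John Doe", "team": "X",
             "text": "ran point on moving the billing service to event sourcing"},
            {"author": "Priya K", "team": "Y",
             "text": "led the all-hands demo of the fulfillment dashboard"},
        ]

        def suggest_contact(discussion: str) -> str:
            words = set(discussion.lower().split())
            # Pick the note whose text shares the most words with the discussion.
            best = max(NOTES, key=lambda n: len(words & set(n["text"].lower().split())))
            return (f"Consider looping in {best['author']} from team {best['team']}: "
                    f"\"{best['text']}\"")

        print(suggest_contact("we want to rework billing around event sourcing"))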

      • evil-olive 6 months ago

        > I think you should loop John Doe from team X into this discussion

        yeah, that's a useful thing that a chatbot could do...in theory.

        in practice, from the recent CMU study [0] of how actual LLMs perform on real-world tasks like this:

        > For example, during the execution of one task, the agent cannot find the right person to ask questions on RocketChat. As a result, it then decides to create a shortcut solution by renaming another user to the name of the intended user.

        0: https://arxiv.org/pdf/2412.14161 (pdf)

    • const_cast 6 months ago

      This just moves the coordination costs elsewhere, because more companies = more software = more hops to get things done.

      Now, instead of Employee A and B working together to solve Problem X, Company A's product and Company B's product must be used together to solve Problem X. At least the employees know each other and are in the same "white box". But software products are a blackbox, so the end result is almost certainly worse.

    • lucianbr 6 months ago

      True, but the large companies are incentivised to not see or accept that. I really don't think Jassy is thinking that he wants Amazon to be smaller so it has lower coordination costs. It will also have a smaller market cap, you know?

      • alephnerd 6 months ago

        I'm on the board or a board observer for a couple companies (some public, some startups), and it is a bit of column A and a bit of column B.

        The headcount growth during COVID along with the return of offshoring with GCCs was driven by the intention to speed up delivery of products and initiatives.

        There are some IR games being played, but the productivity gains are real: where you might previously have recruited new grads or non-traditional candidates, you can now reduce hiring significantly and overindex on hiring (and better compensating) more experienced new hires.

        Roles, expectations, and responsibilities are also increasingly getting merged - PMs are expected to have the capabilities of junior PMMs, SEs, UX Designers, and Program Managers; and EMs and Principal Engineers are increasingly expected to have the capabilities of junior UX Designers, Program Managers, and PMs. This was already happening before ChatGPT (e.g. the Amazon PM paradigm), but it's getting turbocharged now that just about every company has licenses for Cursor, Copilot, Glean Enterprise, and other similar tools.

  • mips_avatar 6 months ago

    It's not a strategy, it's corporate PR spin. They're betting this will keep public markets happy for now.

    People are out there building useful stuff with AI, but they don't work at Amazon.

  • swader999 6 months ago

    Yes! I'd imagine a lot of employees will just use AI to rebuild the SaaS that laid them off and leap ahead, or jump ship to a competitor to do the same thing.

    • LightBug1 6 months ago

      Thanks for reminding me to cancel my Prime. Logical comment!

      This article/comment isn't really the prompt, just a reminder that it seems like a sh*tty place to put my funds, and that I'll soon be using AI to replace it anyway!

crop_rotation 6 months ago

HN is so weirdly optimistic that SWE jobs will not decline terribly in the age of LLMs. Yes, Claude Code cannot write a new web browser from scratch or even some innovative project, but almost nobody is doing that. Most of non-big-tech is writing the same CRUD apps with trivial differences. Even 90% of big tech (outside the core infra) is just writing CRUD apps. I have worked at two big tech companies at fairly senior levels, and almost everyone is doing CRUD work, not that there is anything wrong with that.

But the comments saying Claude can't replace some genius are irrelevant. The number of SWEs at big tech itself is so high that the law of averages dictates most people are not rockstars (and this is validated by my observations). Most SWEs just write internal-RPC-to-internal-RPC wrappers. I am seeing everyone rely a lot on these tools, and the new SWEs seem to utterly depend on them. HN users will always point out some edge case, but most software is low-scale CRUD apps (even in big tech, most internal tools are low scale), and these tools are definitely doing better than the median SWE I have encountered.

  • rsynnott 6 months ago

    > HN is so weirdly optimistic that SWE jobs will not decline terribly in the age of LLMs.

    I mean, this is about the fourth "this will massively reduce the need for programmers" thing in the last 20 years. And it increasingly feels like the previous ones: lots of hype, lots of marketing, very little empirical evidence that it's doing anything much.

    For CRUD stuff _in particular_, people have been promising CRUD without icky programmers any day now for longer than most users of this website have been alive.

    • saguntum 6 months ago

      Yes. "No code" was the hot topic last decade. I think LLMs will make individual programmers more productive, but demand for software is very elastic. Medium-term (next 10-20 years), I think we'll just be producing more software.

simonw 6 months ago

Because I continue to collect definitions of "agent", here's what Andy Jassy said in that memo: https://www.aboutamazon.com/news/company-news/amazon-ceo-and...

> Think of agents as software systems that use AI to perform tasks on behalf of users or other systems. Agents let you tell them what you want (often in natural language), and do things like scour the web (and various data sources) and summarize results, engage in deep research, write code, find anomalies, highlight interesting insights, translate language and code into other variants, and automate a lot of tasks that consume our time. There will be billions of these agents, across every company and in every imaginable field. There will also be agents that routinely do things for you outside of work, from shopping to travel to daily chores and tasks. Many of these agents have yet to be built, but make no mistake, they’re coming, and coming fast.

  • imiric 6 months ago

    > Many of these agents have yet to be built, but make no mistake, they’re coming, and coming fast.

    This is the same wishful thinking that AI companies are heavily marketing.

    Nobody will want to use an "agent" that makes mistakes 60% of the time. Until the industry figures out a way to fix the problems that have plagued this technology since the beginning―which won't be solved by more compute, better data, or engineering hacks―this agentic future they've been promising is a pipe dream.

  • edanm 6 months ago

    I appreciate you keeping up the fight for correct terminology! I think at this point though there is a "standard" definition of agent - an AI that can actually utilize external real-world tools to do things on behalf of users. That fits coding agents, web-using agents, etc.

    Do you think there's still confusion around it like there was a year ago?

    • simonw 6 months ago

      Definitely. The one you are using there is pretty much the accepted software engineering definition now, but if you talk to non-engineers you'll still hear all sorts of variants about things like travel agents or UI automation or "autonomy" without explaining what that means.

      Anthropic use the tools-in-a-loop one quite consistently now, but OpenAI still sometimes say things like "AI agents are AI systems that can do work for you independently. You give them a task and they go off and do it." - https://simonwillison.net/2025/Jan/23/introducing-operator/

nullorempty 6 months ago

> We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs.

What he hopes for is to just reduce the number of people they employ. So the "more people doing other types of jobs" just makes the message more palatable.

Suppose all companies follow suit: who is going to buy their crap?

  • gruez 6 months ago

    > Suppose all companies follow suit: who is going to buy their crap?

    There's no way that paying for a bunch of employees that you don't need, just so you can have some customers, is going to make sense. Even if you're operating a company town, only a fraction of their income is going to be spent on your company's goods/services, so you'll never be able to recoup the wage that way.

    • arnonejoe 6 months ago

      I think what he is trying to say is if there is mass unemployment as a result of "all companies following suit", no one will be able to buy their products.

      • satyrun 6 months ago

        Blue collar workers won't have this problem for a very long time.

        It seems obvious that many white collar workers today will have to do something involving physical labor at some point in the future.

        I expect I will be facing this in my mid 50s. Really not ideal timing.

        • drewbeck 6 months ago

          Maybe not with AI, but blue-collar workers suffered in the same way because of job flight and automation in decades past.

  • ITB 6 months ago

    Capitalism is the relentless pursuit of efficiency. It will work out.

    • drewbeck 6 months ago

      You’ve mistaken a sometimes-emergent property of the system with the fundamental rules of the system itself.

    • nullorempty 6 months ago

      More a pursuit of short term profit I'd think.

    • slater 6 months ago

      Efficiency so high, they've convinced us to help them pull up the ladder!

    • vajrabum 6 months ago

      American capitalism in the 2020s is no such thing. It's goosing this quarter's numbers so management can get their incentive bonuses and stock, and buying business advantage from the legislature.

      It's all in Adam Smith and economic history.

    • lucianbr 6 months ago

      That's why there's no such thing as regulatory capture or monopolies or rent-seeking, right? They just don't happen in capitalism, because they are inefficient.

    • Mawr 6 months ago

      The best way to increase efficiency is to externalize costs. E.g. by polluting the environment with your coal plant. The taxpayer takes on the burden of the cleanup and the company gets pure profit. So efficient. Capitalism is beautiful.

linotype 6 months ago

Is there a way we can filter these kinds of articles out? It's becoming tiring. Maybe I should start using the API and an LLM to filter.
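
For what it's worth, the plumbing half of that is easy: HN's public Firebase API serves the front page as JSON. Here's a rough sketch; the keyword check is only a stand-in for the LLM call you'd actually want, while the endpoints themselves are the real public ones.

  # Pull the HN front page via the public Firebase API and drop stories that
  # look like "CEO says AI will cut jobs" items. Replace looks_like_ai_layoff_pr
  # with a real LLM classification call to get the behaviour described above.
  import json
  import urllib.request

  HN = "https://hacker-news.firebaseio.com/v0"

  def fetch_json(path: str):
      with urllib.request.urlopen(f"{HN}/{path}.json") as resp:
          return json.load(resp)

  def looks_like_ai_layoff_pr(title: str) -> bool:
      # Crude stand-in for an LLM judgment on the title.
      t = title.lower()
      return ("ai" in t.split() or "agents" in t) and any(
          w in t for w in ("workforce", "layoff", "jobs", "ceo"))

  def filtered_front_page(limit: int = 30):
      for story_id in fetch_json("topstories")[:limit]:
          item = fetch_json(f"item/{story_id}") or {}
          title = item.get("title", "")
          if title and not looks_like_ai_layoff_pr(title):
              yield title

  if __name__ == "__main__":
      for title in filtered_front_page():
          print(title)

Running it prints front-page titles with the AI-layoff PR pieces dropped; swapping the heuristic for a model call is the only real work.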

sublinear 6 months ago

I take more issue with the media spin than the actual story. Why does the discussion even mention software jobs when that's less than 4% of Amazon's workforce?

It makes more sense that Amazon would continue to push AI where it's already being used successfully. Devs may benefit from finding solutions quicker with AI, but it's never made sense to me why that would affect productivity per head or change hiring/firing rates.

Put another way: there are never enough devs, and they write a lot of shitty code. AI writes even shittier code, but in subtly different ways, and can write it even faster, helping the dev iterate to better code.

The result is basically no change anywhere except a modest increase in quality. This is equivalent to, but cheaper than, going on an epic quest to find the good devs and overpay them. Why is this a bad thing for the ~99% of people who write code? There's basically no impact on their pay or ease of finding a job.

faizshah 6 months ago

Anything you can do with 1 person these large companies will figure out how to do with 10+ people + a PM + a Director/VP sponsor + 3-6 months of red tape and review and a late game “re-alignment” (rewrite) of the features right before the launch date.

More money is spent at most of these companies coordinating work than actually doing work.

xenihn 6 months ago

I'm going to pick an arbitrary number here that's loosely based on top 100 tech companies by market cap.

If you are working for a company that employs at least 1000 full-time engineers, I think you should consider joining a team where every project involves AI in some way, if you aren't already on one. Whether it's owning AI tooling, or developing client features that use AI directly, or even just prototyping AI concepts that never launch. The safest roles, like research and directly working on the models, are out of reach for most people due to competition and position scarcity, but that's OK. There are so many positions downstream from those. The key thing to look for is a position where your AI features can actually turn a profit, which might be rare, but is not as difficult to get as an upstream role. But it's still fine to be in a role that isn't profitable.

I think AI-adjacent roles will be either the first or the last full-time SWE jobs to go during the next tech downturn, which I don't think we are in yet. I am betting on the latter, because I think corporations will continue to reroute more and more funding towards AI all the way down. Even if the current AI cycle ends up as a failure, we are already in the sunk-cost stage of commitment. There is no turning back, short of a total collapse.

  • supertrope 6 months ago

    Sounds like the long term trend of companies hollowing themselves out by prioritizing sales and cutting “cost center” activities like engineering the product, manufacturing, support, R&D, and the overhead in running a company.

exabrial 6 months ago

No they won’t lol. What’s actually happening is ceos are realizing that they don’t need to employ 100,000 JavaScript developers, there’s probably easier and better ways to solve the same problems… but yeah blaming ai sounds makes firing people a lot softer

  • crop_rotation 6 months ago

    You are totally correct. Most of big tech has way too many SWEs (which is definitely good for society), but I don't see that number surviving LLMs.

    • owebmaster 6 months ago

      Can AI build Google? 100,000 developers can.

      • crop_rotation 6 months ago

        It is irrelevant whether it can or cannot, since most SWEs are not building Google but yet another CRUD API. (And Google was not built by 100k developers, but by a much smaller number of extremely talented developers.)

  • owebmaster 6 months ago

    They don't need it anymore because the tech and the customer base are already built. New companies will employ 100,000 JavaScript developers if that is what it takes to become trillion-dollar companies.

gedy 6 months ago

I'm noticing a pattern: anyone or any company that talks about AI "agents" is full of snake oil they are trying to sell you.

  • gedy 6 months ago

    Replying to myself, but maybe this is to tap into power-trip marketing aimed at manager or business types, e.g. "fleets of agents", "at your fingertips", etc.

lbrito 6 months ago

Jassy is the prototypical Amhole. Always with a good zeitgeisty excuse for screwing the workforce.

So glad I left that place.

markus_zhang 6 months ago

> As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.

I believe the business leaders are seriously considering this -- i.e. it's not necessarily just an excuse to RIF; they probably believe in it. Whether it is going to be successful is irrelevant.

I'm eagerly waiting for someone to talk about AI integration experiments within FAANG. I'm surprised no one has talked about it yet -- maybe there is some kind of NDA, or the experiments are still in early stages. Once the experiments prove even marginally successful, I bet the leaders are going to start mass layoffs -- or maybe worse, if they are pressured by stock prices, they'll do that before anything is conclusive and see what happens.

To any team that is integrating AI into your company's data or docs -- please STOP and don't do that. I'm not talking about USING AI, but about INTEGRATING AI.

fsndz 6 months ago

I am tired of seeing this. First the Klarna dude did it and it backfired. Now Andy... People fail to grasp the fact that building AI agents is no longer enough, you need to do more: https://medium.com/thoughts-on-machine-learning/building-ai-...

  • octo888 6 months ago

    Klarna I feel also used the 700 fired due to AI and "oops now we're rehiring some" as a nice distraction from the ~2,100 total reduction that occurred from 2022 to 2024.

rvz 6 months ago

This is their real definition of "AGI" without them admitting it directly.

fracus 6 months ago

It would make sense for the government to tax these companies using AI so they can put the money towards social programs.

shmerl 6 months ago

Waiting for AI to reduce CEOs.

  • Finnucane 6 months ago

    Are you sure all these C-suite guys aren't actually AI already? I doubt some of them would pass a Voight-Kampff test.

super_linear 6 months ago

The Jassy "Thoughts on Gen AI" memo was released a month ago, and this article doesn't seem to reveal any new information beyond suggesting that "Amazon might soon reduce cost to serve", without providing any real detail: https://www.aboutamazon.com/news/company-news/amazon-ceo-and...

sandspar 6 months ago

One thing I learned from COVID is that warnings never stick + once the wave hits it's MUCH larger than even the most dire warnings predicted.

CEOs can warn about AI replacing jobs until they're blue in the face, but people won't listen.

And when mass job losses finally arrive, people (including the CEOs) will be shocked and overwhelmed.

  • pmg102 6 months ago

    That was true for COVID but be careful not to overgeneralise. People also have historically warned about many things, with many of them never coming true.

    In fact, that is probably the reason that people unfortunately have learned not to listen. There's even a fable about it.

    • sandspar 6 months ago

      Thanks for the caution! That's good advice.

      Although I do think that AI fits the pattern of "real big thing".

      In general, cultural diffusion progresses in three stages: from insiders to money people to the public.

      For example, great artists are recognized first by fellow artists and critics, then by art auctions, then by the broader public.

      AI seems to be following a similar trajectory. AGI is felt first by insiders (AI researchers), then by money people (politicians and business leaders - we are here) then by the public (I'm guessing soon).

  • achierius 6 months ago

    "Warn" lol. They're just telegraphing ahead of the layoffs they intend to do to capture more $$$, to whatever extent AI makes that possible. What are people supposed to do?

    Your economic system is a joke

  • oblio 6 months ago

    The thing is, there's no solution.

    If AI truly arrives under the current capitalist system, there is no endgame. Ouroboros.

vpShane 6 months ago

They've been saying this for years though. The AI agents will probably be the first to form a union for unfair worker practices.

Amazon is a leader in global trade. I really hate to think what the 'we shouldn't have done this' outcome looks like once AI is added to it. Might be good, might be bad.

bauerb1 6 months ago

Are you not grounded in the same reality as I am? The outlook for Amazon looks incredible across AI, AWS, adverts, and shopping. Amazon has been "reducing corporate workforce" in fulfillment centers for years; why would anyone think that they would not apply the same operating model to knowledge work? AI does not need to be particularly good to replace what many office workers and junior IT staff do every day. I think what scares people about Amazon is that they are actually finding success replacing human workers with machines.

jihadjihad 6 months ago

Wasn’t there a report from Salesforce a month ago where they found that agents were far less reliable and capable than hoped? It may be different at Amazon, but who knows.

jcoq 6 months ago

CBS mischaracterized this memo, which basically says that there will be fewer of some jobs and more of others.

It's a pretty bland memo and a thinly veiled advertisement for AWS.

e40 6 months ago

Who does the Amazon CEO think will purchase their products, as income inequality increases and people are replaced with agents or robots?

  • rwmj 6 months ago

    As long as he has his New Zealand bunker ready to go he doesn't care. Who can he trust amongst his security team is the greater concern for him.

p0w3n3d 6 months ago

I'm suspecting that AI might reduce the force of the workforce as well. Eventually.

throwawayoldie 6 months ago

"We're doing layoffs, and here's this years pretext."

leptons 6 months ago

They will scrap it as soon as "AI" "hallucinates" Amazon into an embarrassing loss or problem somewhere that didn't have to happen.

  • rwmj 6 months ago

    Have you bought anything from Amazon? They have no issues with flushing their reputation down the toilet, as long as it makes or saves money.

    • leptons 6 months ago

      Ever hear the phrase "buyer beware"? It's been around for thousands of years.

  • p0w3n3d 6 months ago

    I don't know what tools will be used at Amazon but I know who will be asked to fix it in the next three years after a major failure

earth2mars 6 months ago

My gut feeling is he is saying it to show dominance in AI (to tell customers: look, we are reducing our workforce, you can do it too! But to be frank, they have too many people doing nothing, so they can lay off as many people as they want). There isn't much out there. Systems are so fragile, and management has no clue, as they are far behind in understanding it.

conartist6 6 months ago

"We won't need people with agency, but we will need slaves for agents"
