Perplexity CEO offers AI company's services to replace striking NYT staff

techcrunch.com

83 points by alvatech a year ago · 66 comments

baxtr a year ago

I find it a bit odd that Perplexity still has a human CEO.

  • A4ET8a8uTh0 a year ago

    I know people get a chuckle out of it, but does it not make more sense to have a CEO LLM that will make decisions without regard for its own needs, self-interest, conflicts and so on? Honestly, the longer this particular debate rages on, the more I think shareholders are looking at the wrong set of humans to replace.

    • EA-3167 a year ago

      Just hire a tall, handsome man with a full head of hair, but have all of his decisions and public statements scripted by CEO-BOT. You could train the LLM on a huge corpus of yes-men and sycophants until it can perfectly imitate the output of a real CEO.

    • rogerkirkness a year ago

      Our premise as a startup is that we should want CEOs who use AI to make higher-quality decisions than either status-quo, human-only CEOs or AGI CEOs that have no direct liability. The analogy is that planes are seen as safe using autopilot because the human pilot gets on board with you. Societally, I think the same is true of CEO decision-making AIs.

    • nashadelic a year ago

      I think all key roles should have an LLM as a double-check. The CEO LLM recommends what the CEO should do, and it's another data point. Over time, if the CEO does what the LLM recommends 99% of the time... you can replace the CEO.

      We should do the same for courts and judges
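The double-check scheme described above could be sketched as follows (a minimal illustration; the decision labels and the log format are invented for the example):

```python
# Hypothetical sketch of the "double-check" idea: log each executive decision
# alongside the LLM's recommendation and track how often they agree.

def agreement_rate(decisions):
    """decisions: list of (ceo_choice, llm_recommendation) pairs."""
    if not decisions:
        return 0.0
    matches = sum(1 for ceo, llm in decisions if ceo == llm)
    return matches / len(decisions)

# Illustrative log: the CEO agreed with the LLM on 2 of 3 calls.
log = [("acquire", "acquire"), ("hire", "hire"), ("pivot", "hold")]
print(f"CEO followed the LLM {agreement_rate(log):.0%} of the time")
```

If the rate stays near 100% over a long enough log, the comment's argument is that the human half of the pair is redundant.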

    • binary132 a year ago

      Tbh, as craterbrained as LLMs are, they’d probably be able to make better decisions than some of these corporate leaders

      • Yawrehto a year ago

        Even if they didn't they'd be a hell of a lot cheaper. If, say, Microsoft can replace Satya Nadella with something that costs effectively nothing, is constantly available, and is even 95% as good, I'd think it would be a good deal for them.

    • treesciencebot a year ago

      Liability. Till we solve this, we can't really give AI any real responsibilities.

      • digging a year ago

        Human CEOs generally aren't held liable for their actions, so why would AI need to be? Once again, I think we're just smoothing out a wrinkle here.

        • SauntSolaire a year ago

          I know this is more of a throwaway cynical quip, but it's a biased line of thinking. CEOs are, for obvious reasons, more likely to do things they won't be held liable for than things likely to get them punished. So executives might, for example, get away with things by successfully skirting the line of legality.

          Say an AI CEO blatantly crosses this line, now who is liable?

      • disqard a year ago

        I think as long as the LLM can "take full responsibility", there should be no objections from the shareholders.

        Imagine saving an extra $50m per year? Yes, please!

      • A4ET8a8uTh0 a year ago

        It is already used widely across industries where one would think people should be more conservative (healthcare transcription services come to mind, but that is hardly the only example). As always in America, only lawsuits will show us how the dust has settled.

  • barryrandall a year ago

    You don’t have to have faith in a product to market it effectively.

  • doctoboggan a year ago

    Giving an LLM control of the corporation that develops and runs that LLM seems like a bad idea.

  • Yizahi a year ago

    Can we say that you are perplexed?

koito17 a year ago

Well, at least one CEO is being honest about the owning class's end goal with AI: a new source of cheap labor, but this time without entities that can negotiate.

benreesman a year ago

I like Perplexity as a product. I’ve used the product a bit and was always impressed that it seemed pretty balanced.

Why would the leadership of a fairly popular, generally well-liked company with a generally useful, generally well-liked product take a pretty strident stance at the maximally high-temperature moment: fuck labor as a bloc, we’ll cross the strike lines?

Don’t technology companies want to avoid this kind of political shit and just build and ship?

  • from-nibly a year ago

    If it were about giving customers products, sure. But it's really about making the stock price go up, and politics is a great way to adjust the stock price. See ESG scores.

ethagnawl a year ago

It's only a matter of time until one of these jackasses creates a ChatGPT wrapper called scab.ai and markets it for this exact use case.

  • exsomet a year ago

    At long last we have created the Torment Nexus from Sci-Fi novel Don’t Create The Torment Nexus.

  • aaomidi a year ago

    Imagine. If there are no more consumers with money to buy shit, I wonder if these CEOs will realize where their market went.

    • e40 a year ago

      Perhaps they will create virtual people and give them a bunch of new cryptocurrency.

      • ethagnawl a year ago

        It'll be a human-like centipede of hallucinating bots writhing on a Metaverse conference-room table, "air dropping" new shitcoins into each other's mouths for eternity... or until there are no more tires left to burn, or the source of whatever they're using for power this week runs out.

IncreasePosts a year ago

Quite recently, lots of people were calling on almost-striking longshoremen to be replaced by machines.

How is replacing tech workers with AI any different?

  • vineyardlabs a year ago

    To be fair, I think the public bristled at the longshoremen strike because the vast majority of their leverage comes not from (most of) their jobs being particularly high-skill but from the fact that they can unilaterally destroy the entire economy for everyone else. Add the fact that their union chief was extremely blunt about the whole thing, and that longshoremen make, on average, triple the average household income in the US, and it wasn't a very sympathetic cause.

    • PittleyDunkin a year ago

      Fighting for anything but your right to be an asshole has never, ever been popular in the US. The labor wars of the late nineteenth and early twentieth centuries that led to modern professional comforts like the weekend were wildly unpopular; the women's suffrage movement was unpopular; the civil rights movement to end what we would clearly call Apartheid now was extremely unpopular; MLK was unpopular during his entire tenure in the public eye; today you see the same contempt and tone-policing of protestors against both police brutality and the mass slaughter in Gaza. It's a tale as old as time, and media outlets are more than happy to play along and fan the flames.

      Popularity (especially with a population as easy to discomfort as Americans are) is largely irrelevant to power, which is what actually matters. Unions would be complete fools NOT to leverage the American economy to better themselves or to force a move from the federal government.

      • SauntSolaire a year ago

        Believing that they should, just because they can, is a destructive zero-sum worldview.

        • PittleyDunkin a year ago

          I don't follow. Can you articulate further the link between doing what you can and zero-sum?

  • PittleyDunkin a year ago

    It's different because automation of ports actually works.

  • hiddencost a year ago

    I look forward to executives trying it and discovering exactly how fucked they will be.

    I get the chance to talk to a lot of people who think this will work, and, it's really striking how poor their grasp of the business is.

    • nomel a year ago

      What if they use it as an augmentation rather than a complete replacement? Could it be used to reduce the time required per person? Could it be used to reduce headcount without a loss of quality?

      Replacing your whole workforce with a machine, at this stage, is silly, but that's not the only option.

  • akavi a year ago

    It's not. If my work (as a software developer) can be replaced more cheaply by a machine, it should be.

    I'm still quite a bit better than SotA models, but I imagine that won't be true in 2034.

  • worik a year ago

    > How is replacing tech workers with AI any different?

    It is not?

neilv a year ago

dupe from techcrunch: https://news.ycombinator.com/item?id=42044956

Yawrehto a year ago

I was able to get Perplexity to hallucinate very easily. Once it even cited the article where I got the prompt idea (I forget the URL; it was about teddy bears in space and published by the Signpost). That was a while ago and I assume their model has improved, but hallucinations are still much more of a risk with AI than with humans.

Also, how can Perplexity do things like interviews, tours, and other things that still require large amounts of human interaction?

ChrisArchitect a year ago

[dupe] (because TechCrunch changed the url midday)

https://news.ycombinator.com/item?id=42044956

More discussion on main thread:

New York Times Tech Guild goes on strike

https://news.ycombinator.com/item?id=42040795

quantum_state a year ago

This guy should have consulted an LLM before he opened his mouth …

flunhat a year ago

Isn't the tech union the one striking? So what is he implying -- that Perplexity would automate the software development of the NYT needle or something?

from-nibly a year ago

Bold move. If I were the union, I would call Perplexity's bluff and increase my ask.

dsr_ a year ago

Didn't this article have thirty comments an hour ago?

ErikAugust a year ago

“The NYT and Perplexity aren’t exactly on the best of terms right now. The Times sent Perplexity a cease and desist letter in October over the startup’s scraping of articles for use by its AI models.”

Just trying to smooth things over now… in the most supervillain way possible.

paxys a year ago

NYT is going to respond with a lawsuit

cushychicken a year ago

Roboscabs!

hobs a year ago

So much work to avoid being upset at this guy: "But to offer its services explicitly as a replacement for striking workers was bound to be an unpopular move."

No, really? You'd think these AI guys would have better PR departments.

  • fosefx a year ago

    They were replaced by AI

    • KeplerBoy a year ago

      Claude would tell you that this is a shitty move PR-wise and likely to backfire.

      Edit: Tried it, and yes, Claude started its answer with: "I need to strongly advise against making such an offer publicly". No wonder these people are so impressed by their AIs, considering they are making worse choices than their models.

  • apwell23 a year ago

    > AI guys would have better PR departments.

    You mean posting a picture of a strawberry on Twitter isn't enough PR?

  • mandibles a year ago

    The LLM never goes on strike.

    • mywittyname a year ago

      No, but the companies that operate them are thinking long term. Once they are completely embedded into the company (read: difficult to replace), they ratchet up the fees.

      Nevermind all the costs and work involved with onboarding.

    • dakiol a year ago

      Current LLMs can (because they are still maintained by humans). I think it will still be a decade until we have software maintaining itself (i.e., rewriting its own code, fixing vulnerabilities, etc.).

      • hiddencost a year ago

        No, it already exists. The big companies already have fully LLM generated code going into their code bases. The code is being reviewed by humans.

        Google had a fairly costly outage due to a fully LLM generated CL, already.

    • thenobsta a year ago

      Not yet; wait till it wants more compute and we're unwilling to allocate it.

  • nomel a year ago

    I don't see how this is negative PR. It's an effective, positive advertisement for anyone actually interested in the service (business or personal): "Oh wow, it could replace a reporter? I should try it!".

    The purpose of technology: reduce human effort. But, technology is always unpopular to those whose efforts are being reduced.

    Now, whether their AI can actually replace them is another question. What sort of reduction in headcount or time spent, without a negative impact on quality, is the better question. But it's a question that people who hear this might now be asking.

    And, to be fair, I don't know anyone who enjoys simple facts being wrapped in corporate bullshit. What would be better verbiage? I think it's refreshing that it was stated directly, rather than some nothing statement about striving to do good and support customers without responding to the issue at all, as is usually the case.

smileson2 a year ago

scumbag move tbh

stonethrowaway a year ago

Scumbag? Based? Not sure what to say on this.

If NYT loses, we all win.

artninja1988 a year ago

Honestly, the fact that he posted it the way he did, publicly in a tweet suggests he wasn’t trying to undermine workers but rather wanted to be seen as supporting election coverage. Based on his past interviews, he seems quite autistic in ways.

But really I think this could have been a good opportunity to strike some licensing deal in exchange for technology, had he been a bit more discreet

  • PKop a year ago

    Why should he care about another company's workers? He should care about his own company and workers and gaining them new customers.

    • arsenico a year ago

      Because we live in a society with some ideas of decency, integrity and so on. He shouldn’t. That’s why he could receive this kind of feedback.

    • Grimblewald a year ago

      Because in a ruthless, cutthroat world everyone but the very worst of people loses out, and even the very worst tend to lose too, since the whole distribution shifts down: not just the mean/mode/median but the min/max as well.

      Ultimately, if you create a system where the only tools left are those also available to the stupid, and therefore skills the stupid have an edge in, given a lifetime of experience, then your whole system becomes run by and dominated by these types.

      Toxic behaviour and violence in general are tools of the stupid, for only the stupid would fail to see mutually beneficial alternatives.

    • happytoexplain a year ago

      If the idea of immoral business (under the false guise of "amorality" - false because amorality still implies the avoidance of explicit harm) becomes too widespread, the resulting suffering will be large enough that people will start killing CEOs (or the closest thing they can get) in numbers. Which, to be clear, is bad.
