Paul Graham calls A.I. ‘the exact opposite of a solution in search of a problem’

fortune.com

18 points by hayksaakian 2 years ago · 47 comments

medler 2 years ago

It feels like we are well on our way toward the Trough of Disillusionment in the Gartner hype cycle for ChatGPT. It’s losing users daily. People are saying that it’s gotten worse since launch (don’t know if this is true but it’s what I’m hearing). As far as I know, it hasn’t revolutionized anything except spam generation.

  • vidarh 2 years ago

    It's unsurprising that it's losing users, because the hype was immense. My girlfriend's flatmate used it to write a legal-sounding letter to their landlord. My 14-year-old son has used it to write fun stories to share with his friends. Seemingly "everyone" has tried it. And for most of them it doesn't solve a frequent real issue.

    At the same time, for many of us for whom it is useful, a lot of the initial use was experimentation to find the right use cases. I still use it, but only for the things where I know it works for me, so I use it less than at the peak, when use was dominated by learning.

    Others will have found areas where alternatives like self-hosted LLMs are good enough.

    Use will likely grow again over time as people get more experience and build useful things on top of it. But it won't be the same rush.

  • tracerbulletx 2 years ago

    I wish people would just do some deeper investigation before forming opinions based on their biases. If you actually use it, you know it's incredibly useful for learning and getting answers to general problems that 99% of the time work fine, and it helps you understand those answers. The number of times it's wrong or misleading is not significant enough to even be annoying in day-to-day use. I'm just talking about current practical usefulness here, not speculating about the future: just what it's useful for today.

    • bsder 2 years ago

      > If you actually use it you know it's incredibly useful for learning, getting answers to general problems that 99% of the time work fine and helps you understand its answers. The amount of times it's wrong or misleading is not significant enough to even be annoying during day to day use. Just talking about current practical usefulness here not even speculating about the future, just what it's useful for today.

      We used to just call that "a search engine". Remember that? Somebody would put up a website about their favorite pet topic and when the search engine unearthed it, that's what you got.

      It was the ad-ification of search engines that killed that. So, AI allows us to go back to the Internet circa 2000? That's a big innovation? I mean, I'll take it, but that's a pretty low bar ...

      • kalb_almas 2 years ago

        > Somebody would put up a website about their favorite pet topic and when the search engine unearthed it, that's what you got.

        The difference is that reading someone's blog post forces you to take their trajectory through the material and it might not go over the exact points you're curious or confused about. With a forum like StackOverflow you often have to settle for problems that are merely close enough to your own that the solutions apply to it.

        Models like ChatGPT allow you to ask for a blog post on any topic on demand, and then ask for follow-up posts on whatever aspect of the previous one you want elaborated.

      • tracerbulletx 2 years ago

        I am having good conversations with the thing about books as I read them: having it summarize parts I want to keep notes on, build knowledge graphs, construct new worked examples, and dive into specific topics. I've had significantly enhanced learning experiences with it as a partner literally every day for the last few months. So you're entitled to your opinion, but it's simply not the same thing as a search engine.

        • Pearse 2 years ago

          This is such a great use case, thanks for sharing.

          I spent some time talking to ChatGPT about the history of art, philosophy, and technology, as if we were writing a book together.

          I found it was great to just get a very broad overview and then ask questions about the things I wanted to know more about.

          Not a groundbreaking way to use an LLM but I really enjoyed it.

          I'm going to take your idea of talking to it about books too.

      • Spivak 2 years ago

        I see that as an incredibly high bar. Even with all the resources of Google, they (and everyone else) are losing the SEO war. These algorithms fundamentally can't handle adversarial input at their core, and we've been making faster horses since AltaVista. Even if the only thing LLMs are useful for is search, that's still great.

        • bsder 2 years ago

          > Even with all the resources of Google they (and everyone else) are losing the SEO war.

          Google is choosing to lose the SEO war because they think it would impact their ad revenue.

          Every site that takes longer than 100ms to load? Gone. Wipes out vast quantities of ad-infested sewage. Login required? Gone. Websites now have to choose between reach and subscribers and the ones that choose subscribers will have to get better. Javascript required to access content? Gone. No more ad bidding system at all and no vectors for spreading virii. Tracking stuff from Facebook/Instagram/TikTok/etc.? Gone. No more analytics tracking everybody.

          We don't need AI to fix this. We need an alternative search engine. If "AI" is what gets us there, I'll take it. But it ain't "AI".

    • medler 2 years ago

      I didn’t say it wasn’t useful to some people, I just said it hadn’t revolutionized anything. Also, the Gartner hype cycle ends with the “Plateau of Productivity,” in which the technology is proven to be useful over the long term, so it is sort of implicit in my comment that I expect we will ultimately find lots of niches where LLMs are useful. Maybe you should have gotten ChatGPT to explain that to you before you made your flippant comment.

      • orange_fritter 2 years ago

        > lots of niches where LLMs are useful

        Isn't this a bit of an oxymoron?

        I feel like every comment of mine on HN lately is just defending ChatGPT, but I don't think that's a reason to self-censor my comments... yet.

        I was watching a car chase on youtube yesterday and used an LLM to tell me the city location based on a description of the news station logo. So that was a "niche" I guess. I also got it to teach me how to use the https://gene.iobio.io/ software as I was using it, and I'm pretty good at it now! I asked it to help me to understand the connotations of several similar sentences in another spoken language I'm learning. It's helped me with understanding property tax appraisal records. We use it at work daily to analyze code, it shortens research by 30% or so. Calling it "niche" is 100% correct: yes it's very good at its "niches" which happen to be... almost everything except math? Have it craft a choose-your-own-adventure detective story and get back to me, because that niche is surprisingly fun.

        If you can predict the implications of this technology or make an insightful assessment beyond "we don't have flying cars yet" ... let me know.

    • dragonwriter 2 years ago

      > I wish people would just do some deeper investigation before forming opinions based on their biases. If you actually use it you know it's incredibly useful for learning, getting answers to general problems that 99% of the time work fine and it helps you understand those answers.

      I actually use it and I find that that 99% is ridiculously high.

      > The amount of times it's wrong or misleading is not significant enough to even be annoying during day to day use

      There may be use cases where it isn’t wrong a lot, or you may have a high tolerance before annoyance hits, or you may be failing to detect it being wrong, but that certainly doesn’t match my experience.

    • catchnear4321 2 years ago

      > I wish people would just do some deeper investigation before forming opinions based on their biases.

      well then they wouldn’t be people now, would they?

      your reaction is fair. and rare.

      • medler 2 years ago

        > your reaction is fair. and rare.

        He read something he didn’t like and then attacked the person who said it by basically calling them an unthinking robot. It’s not fair and it’s incredibly common.

        • catchnear4321 2 years ago

          > by basically calling them an unthinking robot

          speaking of comments both unfair and incredibly common… this gross oversimplification does a disservice to all parties involved.

          op lamented a lack of introspection and a nastiness to judge a thing as useless or bad based upon an individual’s success or lack thereof with the thing.

          if you found it offensive, perhaps it’s worth thinking on.

  • VladimirGolovin 2 years ago

    Hype or not, I'm absolutely keeping my ChatGPT subscription because it has proven to be consistently useful in my everyday life.

    • JohnFen 2 years ago

      I haven't found it to be more than very marginally useful in my life. It's interesting that some people think it's incredibly useful and others find it not worth the bother. I wonder what the difference is?

      • mewpmewp2 2 years ago

        I use it for tons of things personally.

        1. Data reformatting, data parsing, anything that requires transforming data in any way. E.g. give me all blah in this massive raw content of whatever; give me this data as JSON, this as CSV.

        2. Writing small scripts to do various other data related actions, automations.

        3. Summaries. Extracting key points. Asking questions about content I don't understand well. Asking clarification questions.

        4. Bulletpoints, outlines, asking for feedback for my own content, what could I be missing, brainstorming.

        5. Coding obviously, but with Copilot mainly rather than ChatGPT specifically.

        6. Asking it to review/criticise whatever work or output I do.

        7. Anything learning/research related, I use it avidly, for any subject I want to learn about. I haven't found hallucinations to be an issue at all, because I take the content with a grain of salt in the first place and double-check anything that seems out of place. It's really, really good for learning, because I can actively question it, unlike a course, and being able to debate with it is amazing for learning. And unlike with a person, I never have to worry that my question may be stupid; I can just spew whatever immediately comes to mind and get clarifications. If I had a person as a mentor, I might be more careful, but this removes the social-anxiety aspect completely. I can keep asking clarifying questions infinitely.

        8. Decisions. Trying to understand pros and cons of various decisions. Allowing it to brainstorm those pros and cons. I combine with my own pros and cons of course.

        To me it's like this dream thing you can bounce your thoughts back and forth with, without having to worry about judgment etc.
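Use cases 1 and 2 on the list above usually end in a short throwaway script. As a sketch of the kind of thing one might ask the model for (the sample data and field names here are invented for illustration), converting raw CSV into JSON:

```python
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    """Convert CSV text (with a header row) into a JSON array of objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

# Hypothetical input: two records with name and city columns.
raw = "name,city\nAda,London\nGrace,New York"
print(csv_to_json(raw))  # JSON array of two {"name": ..., "city": ...} objects
```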

        • JuanPosadas 2 years ago

          > I haven't found hallucinations to be an issue at all, because I view the content with grain of salt in the first place

          Even without ChatGPT, looking for information you're not already an expert on can be a minefield of opinions disguised as facts, false dichotomies, biased perspectives/framings, omissions, etc..

          And it's already bad enough before you consider that someone may have intentionally attempted to mislead you.

        • barrysteve 2 years ago

          May I point out that your use cases 1, 2, 3, 4, and some of 6 would fit neatly into Word and Excel.

          Microsoft could 'embrace' a significant portion of ChatGPT's use cases if they trained up something 80% as good as ChatGPT and bundled it with Office 365.

          We've seen AI techniques find new homes in Photoshop, 3D modelling software, etc.

          ChatGPT needs its own use case.

      • mrguyorama 2 years ago

        Some people have significantly more need for generating vacuous, meaningless textual content from very little initial seed content.

      • interstice 2 years ago

        Imagination? Or more seriously perhaps a higher tolerance for not 100% accurate output?

        • JohnFen 2 years ago

          I don't expect or demand 100% accuracy. As to imagination... perhaps. I'm not sitting around trying to think up things to have it do. I've just tried using it to speed up rote stuff, but I find it's faster and easier just to do it myself to begin with.

          • mewpmewp2 2 years ago

            What is some of the stuff that you tried it for?

            • JohnFen 2 years ago

              Mostly 1 and 2 on the list you made in another comment. Some of 5.

              But my original comment wasn't really about me in particular, it was a general question about what the difference between the two groups are.

bsder 2 years ago

Marketers gonna market. Anybody remember all the "big data" hype?

ChatGPT and its ilk don't enable me to do something today that I couldn't do yesterday. Nor do they enable me to do something an order of magnitude faster than I could do yesterday.

Contrast this to when microprocessors hit. Suddenly, things like industrial control went from the size of multiple refrigerators to a PC board. When the price dropped (things like the 6502), engineers went absolutely bonkers building amazing things.

  • noncoml 2 years ago

    Give it some time. How long was it from the day iPhone was introduced until Uber?

injb 2 years ago

Can't read the article. Does he mean it's a problem in search of a solution then? Or does he mean it's "the exact opposite of a solution, in search of a problem" (in other words, a problem in search of a problem)?

  • TSiege 2 years ago

    Here's a link to the full quote on xitter.

    "AI is the exact opposite of a solution in search of a problem. It's the solution to far more problems than its developers even knew existed."

    https://twitter.com/paulg/status/1689874390442561536

  • saltcured 2 years ago

    The exact opposite could be a non-solution trying to evade all problems. ;-)

    And that's why I personally don't have much faith in the LLM approach. Natural language is too full of ambiguity to pretend it delivers meaning. It makes oblique references to meaning, based on awful assumptions about the audience and presumed mental context.

    I see enough absurd miscommunication and word salad between native speakers. I really don't want a magic box to vaguely emulate this.

  • hayksaakianOP 2 years ago

    I think what he's trying to say is it's capable of solving more problems than we think.

    • dragonwriter 2 years ago

      That implies we haven’t actually found the problems for it to solve yet, which would make it a solution in search of a problem where PG is optimistic about the outcome of the search.

      What I think he is trying to say is that it is a solution that addresses a broad range of problems that are clear now but weren’t envisioned by its creators. That is both impossible given the early claims (its creators already billed it as a general solution with essentially no limits) and, even if you ignore those claims and reduce it to a question of it being broadly applicable as a solution, generally premature (most of the more specific things it is billed as a solution for, it hasn’t solved, though some people may have ideas, not yet proven in practice, of how it can be a component of a solution).

omscs99 2 years ago

Hell, I doubt most businesses even have functional full-text search for internal documents. My job certainly doesn’t, but the SVP feels the need to make noises about LLMs and throw the word “revolution” around.

It’s like nobody wants to talk about reasonable solutions that incrementally make things better; everything has to be some meme “revolution”.
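For what it's worth, the "reasonable solution" lamented here has been off-the-shelf for years. A minimal sketch of internal-document full-text search using SQLite's built-in FTS5 module (the table name and sample documents are invented; this assumes your SQLite build includes FTS5, as most standard Python builds do):

```python
import sqlite3

# In-memory DB for the sketch; a real deployment would use a file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs (title, body) VALUES (?, ?)",
    [
        ("Q3 procurement policy", "All purchase orders above 10k require SVP approval."),
        ("Onboarding guide", "New hires should request laptop access on day one."),
    ],
)
# MATCH gives tokenized, ranked full-text search out of the box;
# space-separated terms are an implicit AND.
hits = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank",
    ("purchase orders",),
).fetchall()
print(hits)  # only the procurement document matches both terms
```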

FrankWilhoit 2 years ago

He seems to think that what we need (and want) is *something to hand off to*. We're tired. We don't want to do it any more -- for any value of "it". Actually we can't do it any more; we've reduced everything to performance, so it doesn't matter who carries the torch. AI may as well carry it; even if it drops it, it may do so amusingly, and what more could we ask?

nunez 2 years ago

How much money does pg have in OpenAI?

  • sacnoradhq 2 years ago

    Ownership in OpenAI LP appears to be (mostly?) the missus' department.

    The cap table is moot in the near-term because of the profit precedence structure.

fargle 2 years ago

what I parsed it as: [(the exact opposite of a solution) in search of a problem] => a non-solution in search of a problem

what he actually said: [the exact opposite of (a solution in search of a problem)] => naive translation: a problem in search of a solution

but what he meant: a solution in search of a problem that ended up finding far more problems than anyone suspected.

"solution in search of a problem" => bad. exact opposite of bad => good.

what is it really? I think closer to the first one.

why do they talk in such gobbledygook

kristianp 2 years ago

https://archive.is/Kvs3l

kristianp 2 years ago

> the supply chain and procurement industry need this badly

Anyone have examples of how GPT can help there?

  • simne 2 years ago

    I'm not sure about GPT, because this is a visual task.

    Good examples are a typical courier service like FedEx, or companies like Amazon that run their own delivery services.

    They usually have standard boxes/envelopes with standard barcode markings for correspondence.

    All these boxes/envelopes are gathered from sending offices and concentrated in a sorting facility, where machine vision can sort them into a separate container for each destination branch office; in some places a big central office has its own container.

    Then the containers are loaded onto a big truck and driven to the destination city, where they are unloaded at the destination branch office.

    In the destination branch office, the next round of sorting distributes packages into containers for the smaller offices.

    Then smaller trucks deliver the containers to the small offices.

    A similar thing happens with returns; the target office just creates a new label and places it over the old one.

    What could go wrong? Some delivery companies allow free-form packages, not just rectangular boxes or envelopes, limiting only the weight and the sum of dimensions, so you could, for example, send a car body kit or skis without a rectangular container.

    For a human, handling such things is a trivial task, but for a machine it is a nightmare. However, if this step is handled by something like GPT-4, which is claimed to be multimodal, it may detect such items correctly and understand how to handle them.
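The happy path described above, standard containers with standard barcodes routed into one container per destination, reduces to a grouping problem once the barcode is read. A toy sketch (the barcode format, with the first three characters encoding the destination branch, is invented for illustration):

```python
from collections import defaultdict

def sort_parcels(parcels):
    """Group scanned parcels into one container per destination branch.

    Each parcel is a (parcel_id, barcode) pair; the barcode's first three
    characters are assumed to encode the destination branch office.
    """
    containers = defaultdict(list)
    for parcel_id, barcode in parcels:
        containers[barcode[:3]].append(parcel_id)
    return dict(containers)

scans = [("p1", "LHR-0001"), ("p2", "JFK-0002"), ("p3", "LHR-0003")]
print(sort_parcels(scans))  # {'LHR': ['p1', 'p3'], 'JFK': ['p2']}
```

The hard part, as the comment notes, is not this grouping step but reliably reading a destination off an irregularly shaped package in the first place.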

simne 2 years ago

I can agree with Paul Graham on only one point: current AI systems use too much energy on really interesting tasks and produce too little value.

As for the rest of his words, they run against capitalism, whose typical way is Pareto's principle: use the new tool on the 20% of cases where you can make 80% of the profit, and don't wait until the tool can handle 100% of cases.

If mankind had worked against capitalism, we would not have cars and planes; we would still be using steam engines.
