Bard is much worse at puzzle solving than ChatGPT

twofergoofer.com

90 points by cowllin 3 years ago · 85 comments

hackpert 3 years ago

Wow, I had hoped for a more productive discussion than these 1-1 comparisons of Bard vs ChatGPT that I'm seeing everywhere. The model deployed with this version of Bard is clearly a smaller model than the biggest LaMDA/PaLM models Google has been working on for ages, which, according to their publications, show unprecedented results on _proof writing_ of all things (see Minerva). While their strategic decisions may be questionable (or they're just trying to quantize the model for mass deployment without burning billions per month in compute costs), it's almost silly to question Google's ability to build useful LLMs.

  • seanhunter 3 years ago

    At the moment, unless we get more information about what metric it's supposed to be evaluated on, you could probably simplify the headline to just "Bard is much worse than ChatGPT" without any loss of accuracy.

    It's not really realistic to expect people to give Google credit for these amazing models they have published results about but haven't let people play with - they have given people Bard and people are evaluating it based on the criteria most obvious to them - a comparison to a very similar product that has just been released.

  • Traubenfuchs 3 years ago

    They knew the war they were entering, they knew their enemies, they knew how they'd get evaluated and still decided to get this model out in its current state, leading to the conclusion: Yes, this is really the best they can do and it's much worse than the state of the art.

    In any case, it's a massive marketing blunder; the public opinion formed within the last few hours was overwhelmingly "Bard sucks compared to ChatGPT."

    • carlmr 3 years ago

      >Yes, this is really the best they can do and it's much worse than the state of the art.

      This is the best they can do under pressure.

      ChatGPT surprised the world with how good it was, then Google scrambled to get something out quick.

      A project like this is a massive undertaking, the first mover has the advantage that they can calmly refine their model until they find it presentable.

      The question is whether what Google is delivering is good for the timeframe involved: the clock started when ChatGPT exploded in popularity enough for Google leadership to take note, since that moment, realistically, is when they put pressure on their devs to push something out the door.

      I think we'll see a better iteration soon. Not only from Google, but from other competitors.

  • random_cynic 3 years ago

    > it's almost silly to question Google's ability to build useful LLMs.

    Unless they release a model one can "use" and verify their claims it's literally silly to make this statement.

    • amelius 3 years ago

      There are useful open source LLMs. Or are you questioning their ability to ./configure; make install?

  • DeathArrow 3 years ago

    > it's almost silly to question Google's ability to build useful LLMs.

    It's almost silly to presume anything without proof. People are judging Google based on what Google has shown.

  • xiphias2 3 years ago

    It seems like they don't want to be the best, just good and cheap enough that they don't lose users, and therefore ad revenue.

    They behave like Yahoo did when Google overtook them.

  • letitgo12345 3 years ago

    Even the best Google models seem to be lagging for reasoning tasks vs OpenAI ones at the moment - see the graphs at https://github.com/suzgunmirac/BIG-Bench-Hard

  • peyton 3 years ago

    Everyone’s gonna have bigger models. Where are their useful models?

  • celim307 3 years ago

    That's like saying car brand X is more reliable than car brand Y because brand X won Formula One.

  • masakreTech 3 years ago

    you can always go back to reddit

fenomas 3 years ago

Am I missing something? Most of TFA is about Bard failing to answer with rhyming words, but in the only prompts shown the author doesn't actually ask for rhyming words. He just gives the hint and the name of the puzzle.

Is this not simply: "Bard is worse than ChatGPT at having seen the 'how-to-play' page for my side project during its training"?

  • comex 3 years ago

    Clicking through to the link next to 'last week's text' and then to 'full rules', it looks like the author is starting the chat sessions with a full explanation that isn't included in the screenshots. (Also, the last screenshot shows the author explicitly asking about rhymes.)

    • fenomas 3 years ago

      Ah thanks - for others here is the link, though TFA may not necessarily have used the same prompt: https://docs.google.com/document/d/1_eg_jiUE5y8e5zeiz5HCGDc2...

      Based on that it looks like the author asked all 25 test puzzles in one big prompt, which one supposes would favor larger models. To compare "puzzle solving" you'd think it would make more sense to ask one puzzle at a time?

      • cowllinOP 3 years ago

        I tried it both ways: with individual prompts and with prompts in bulk. I ran both tests the same way. There's a tradeoff between writing a legible/interesting blog post and relating step by step how the evaluation was run! Appreciate you reading and the feedback :)

jackblemming 3 years ago

How is this possible? Google makes people do 8 rounds of leetcode. How could they be beaten? Nothing makes sense anymore.

  • BoorishBears 3 years ago

    People from Google have argued that's exactly why they're failing.

    Personally, as someone who worked at a company that was up over 500% during the pandemic, shipped absolutely nothing during that spike, and then deflated below its pre-pandemic price, I saw the folly of hiring smart people first hand.

    It's not enough to hire the smartest people, and in fact it can be a competitive disadvantage. The smartest people often want their piece of the product to reflect their ingenuity no matter how ancillary it is to the core mission. Unfortunately that often precludes the kind of agility that businesses need to stay competitive.

    OpenAI managed to poach Googlers by simply not having fiefdoms built by smart people™. I imagine if Google had built GPT-4, it wouldn't be having downtime today. Because it wouldn't be public. And it might never be public because it doesn't scale for Google scale yet, and the ethicists want their say, and we need to integrate it into Borg and the front end hasn't passed through enough layers of design and...

  • natch 3 years ago

    I think they just got lazy and entitled. Maybe ChatGPT will be the scare they need. It feels bad though; they almost don’t deserve the energy this fight will give them.

SteveNuts 3 years ago

It's so sad to me to see the downfall of Google from the absolute coolest company on the planet to one that's now trying to keep up.

kodah 3 years ago

That's a clever game to get it to play. Today I asked ChatGPT to give me 1000 Fibonacci numbers starting with the 2000th number and it crashed. Later I asked it the same prompt and it repeatedly gave me code to calculate the Fibonacci numbers in Python.
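For reference, the kind of Python ChatGPT tends to hand back for a prompt like this looks roughly like the sketch below (a hypothetical reconstruction, not the actual transcript). Python's arbitrary-precision integers make the 2000th Fibonacci number, at around 418 digits, cheap to compute iteratively:

```python
def fib_range(start: int, count: int) -> list[int]:
    """Return `count` Fibonacci numbers beginning at the `start`-th,
    1-indexed under the common 1, 1, 2, 3, ... convention."""
    a, b = 1, 1  # F(1), F(2)
    for _ in range(start - 1):
        a, b = b, a + b
    out = []
    for _ in range(count):
        out.append(a)
        a, b = b, a + b
    return out

nums = fib_range(2000, 1000)  # the 2000th through 2999th numbers
```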

  • Tiberium 3 years ago

    > and it crashed

    I hope you understand that it's only because OpenAI's servers can't keep up with demand, or because of some issue in their backend code - language models themselves can't "crash" like normal programs do on some kinds of input, because they "just" generate new tokens.

    • kodah 3 years ago

      I know little more than what I've read about LLMs and other models. I just got it to do them again and it output them rather quickly: https://imgur.com/a/jfviCCe

      • leto_ii 3 years ago

        are the numbers correct?

        • kodah 3 years ago

          I spot-checked a random assortment of them and so far yes, but it only takes one wrong one to set every subsequent number off.

          • vineyardmike 3 years ago

            > it only takes one wrong one to set every subsequent number off.

            Except it's not actually doing that calculation, so one wrong one shouldn't truly affect the rest like "real" math would.
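The distinction is easy to demonstrate: in a genuine iterative computation, one corrupted term poisons every term after it, which is exactly the failure mode being spot-checked for. A toy sketch (my own illustration, not code from the thread):

```python
def fib_list(n: int) -> list[int]:
    """First n Fibonacci numbers (1, 1, 2, 3, ...)."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

good = fib_list(10)
bad = list(good)
bad[4] += 1                        # corrupt a single term...
for i in range(5, len(bad)):       # ...and continue the recurrence
    bad[i] = bad[i - 1] + bad[i - 2]
# Every term from index 4 onward now disagrees with the true sequence,
# whereas an LLM emits each number as text with no such dependency.
```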

    • mike_d 3 years ago

      > language models themselves can't "crash"

      That is like saying the Excel document didn't crash, but Excel did when it tried to parse it. As far as I know there is no proof that you can't cause a LLM to crash with user input.

      > because they "just" generate new tokens.

      I can write a program that counts to 100 that crashes reliably.

      • inimino 3 years ago

        > As far as I know there is no proof that you can't cause a LLM to crash with user input.

        Well, then you simply don't understand how they work.

      • hoseja 3 years ago

        It's a fixed number of matrix multiplications. You can't make matrix multiplication crash by feeding it any sort of strange numbers (unless they haven't handled their Infs and NaNs).
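The point can be made concrete with a toy forward pass: the amount of work is fixed by the weight shapes, and no loop or recursion depends on the input values, so there is nothing for a "strange" input to blow up. (A deliberately tiny, hypothetical network, nothing like a real LLM's architecture.)

```python
import math
import random

random.seed(0)
# Fixed "weights": a 3x4 and a 2x3 matrix of random values.
W1 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
W2 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(2)]

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def forward(x):
    """Exactly two matrix-vector products, whatever x contains:
    the control flow never depends on the input values."""
    h = [math.tanh(v) for v in matvec(W1, x)]  # bounded activation
    return matvec(W2, h)

out = forward([1e300, -1e300, 0.0, 3.14])  # extreme inputs, same fixed path
```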

        • mike_d 3 years ago

          > You can't make matrix multiplication crash by feeding it any sort of strange numbers

          Matrices don't multiply themselves. You need hardware and software. As I pointed out the LLM is effectively just data that is being processed by a program. It is silly to assume you have no bugs in that software or the underlying operating system.

          • gs17 3 years ago

            The type of bug the first comment was discussing, where recursion could cause it to crash (out of memory? stack overflow?) is categorically impossible based on how the model works, and a bug in the software running it would not change that.

  • jasfi 3 years ago

    What would be more interesting is to see Bard vs ChatGPT debating various topics.

boffinism 3 years ago

> Twofer Goofer HQ's adherence to strict "perfect" rhyme can be tricky for those slant rhyme-inclined.

And yet one puzzle they hammer Bard for failing is "Cactus Practice". What accent do you have to have for that to be a perfect rhyme?

  • cowllinOP 3 years ago

    From Chicago ... and with more than a thousand solves on that puzzle we've never received a single complaint about the rhyme on that one (plus the stats say users find it an extremely easy rhyme to solve)! Curious how you pronounce that one such that they don't rhyme?

    • tetromino_ 3 years ago

      According to the dictionary (and matching my own non-native pronunciation), cactus is /ˈkæktʌs/, practice is /ˈpɹæktɪs/

      The terminal /tʌs/ is not quite the same thing as /tɪs/; since they are both unstressed, the difference can be hard to notice in fast speech, but becomes clear when enunciating.

  • sethaurus 3 years ago

    In standard Australian English, this is a perfect rhyme.

visarga 3 years ago

Offtopic - have you seen Phind?

https://www.phind.com/

It is very fast and wins the search benchmarks here:

https://twitter.com/vladquant/status/1638305110869807104

mikewarot 3 years ago

If I understand how large language models work, they don't actually know about spelling: they are given tokens that represent words, and can only infer things from the context of those tokens across the terabytes of data they're given.

Any rhyming done is an impressive result.
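A deliberately simplified, hypothetical word-level tokenizer makes the point concrete (real models use subword schemes such as byte pair encoding, but the principle is the same): the model only ever sees integer IDs, so the letter overlap between, say, "cat" and "cart" is completely invisible to it.

```python
vocab: dict[str, int] = {}

def tokenize(text: str) -> list[int]:
    """Map each word to an integer ID, assigning new IDs on first sight.
    The IDs carry no trace of the words' spelling."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

ids = tokenize("the cat sat on the mat")  # "the" maps to the same ID twice
```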

  • ben0x539 3 years ago

    I mean, English spelling certainly doesn't free you from the need to figure rhyming out from context.

  • ramblerman 3 years ago

    It is amazing, but somewhat explicable as an emergent effect.

    Don't forget that the model has seen all the poems and raps on the internet. It built some latent space where certain words always cluster together in the context of poems, and at which positions.

    In this case it really has the best database available to say what next word would slot in nicely here - as that is precisely what it was trained to do.

    I find it more amazing, tbh, that you can ask for a poem about something and that it then sticks to the plot, makes references to the start, etc., than the actual rhyming.

  • chewxy 3 years ago

    Well, the nature of byte pair encoding has the side effect of encoding sounds as well. I briefly explored it here: https://youtu.be/fTDQUxha9qU?t=480
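The core of byte pair encoding is small enough to sketch: repeatedly merge the most frequent adjacent pair of symbols, so recurring letter sequences (which often line up with recurring sounds) become single tokens. A toy trainer of my own, not the actual tokenizer discussed in the video:

```python
from collections import Counter

def bpe_merges(words: list[str], n_merges: int) -> list[str]:
    """Toy BPE trainer: treat each word as a list of symbols and
    repeatedly fuse the most frequent adjacent pair into one symbol."""
    seqs = [list(w) for w in words]
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for s in seqs:
            pairs.update(zip(s, s[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)
        for i, s in enumerate(seqs):
            fused, j = [], 0
            while j < len(s):
                if j + 1 < len(s) and (s[j], s[j + 1]) == (a, b):
                    fused.append(a + b)
                    j += 2
                else:
                    fused.append(s[j])
                    j += 1
            seqs[i] = fused
    return merges

# The shared "-ine"-style ending surfaces as the earliest merges:
merges = bpe_merges(["poutine", "protein", "routine", "machine"], 4)
```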

milemi 3 years ago

"Bard is much worse than ChatGPT at solving an obscure word game I invented" would have been a more honest title, but would probably generate fewer clicks for the author.

Bard may still be much worse than ChatGPT at solving all kinds of puzzles, but the article is clickbait promoting the author's word game, not an actual investigation that warrants that conclusion.

  • vineyardmike 3 years ago

    Having read through the word game, I agree with others that it's good that the game is less likely to be in the corpus. I think rhyming, while a challenging task, may be a poor benchmark for ability. The author doesn't seem to understand rhyming too well (cactus practice is a weak rhyme at best)

    I completely disagree with the "hasty rhyming test" - Skeleton and Gelatin don't rhyme (-ton vs -tin), and rhyme worse than protein and poutine (-een vs -een).

    • igravious 3 years ago

      > The author doesn't seem to understand rhyming too well (cactus practice is a weak rhyme at best)

      cactus / practice ?

      they rhyme to my mind

      skeleton / gelatin ?

      also rhyme to my ears

      protein / poutine

      also rhyme enough to be considered to rhyme

      You appear to be operating under the impression that every syllable of a rhyming couplet has to rhyme exactly for it to be considered a rhyme. This is an incorrect assumption. In fact, the above rhymes are arguably more pleasing because they are inexact rhymes rather than exact, forced rhymes.

      In your world the only actual rhymes would be

      bold / cold / gold

      and

      double / trouble / bubble

      types of rhymes but the world considers the following to be perfectly acceptable

      sent to meet her / centimetre

      and so on

      • CamouflagedKiwi 3 years ago

        What most people consider a rhyme is that the vowel and coda of the last syllable match (of course, most of us don't reach for the technical definition).

        I guess the examples there might be accent dependent. Protein/poutine is the only one of those first examples that really rhymes to me; skeleton/gelatin and cactus/practice both have different vowels. Maybe different for you though.

        • igravious 3 years ago

          > What most people consider a rhyme is that the vowel and coda of the last syllable match

          ??? Oh yeah, says who?

          protein / poutine

          p[]oh teen / poo teen => both start with a p, then there's an oh or oo (which are similar) and both end the same way – disyllabic rhyme

          cactus / practice ?

          []ah ck təss / []ah ck tiss => ignoring the first consonant (cluster) which anchors the rhyme and pronouncing the u as a schwa (which it is), the ck's are the same and təss and tiss are totes similar – disyllabic rhyme

          skeleton / gelatin ?

          trisyllabic goodness – again, the way we pronounce the on in first word is not like the on in frond but like the ən in motion (UHn) and the way we pronounce the at in the second word is not like the at in bat or cat but like the ət in … hmm, none spring to mind but it's an UHt sound here if you listen to it – sgɛ́lɪtən or ˈskelɪtən – ˈʤelətɪn – so you've k vying with g (both hard), e with e, l with l, ɪ with ə (close sounding!), t with t, ə with ɪ (close sounding!), n with n

          ===

          listen with your ears, not with your eyes

        • porphyra 3 years ago

          Protein and poutine do not rhyme if you pronounce poutine the proper way in Canadian French.

          And you are right of course that gelatin and practice have the short "ee" sound at the end whereas skeleton and cactus have the "uh" sound.

          • cableshaft 3 years ago

            > Protein and poutine do not rhyme if you pronounce poutine the proper way in Canadian French.

            I looked it up, and sure, it sounds about the same in Canadian French as someone saying Vladimir Putin[1]. But I've never heard anyone say it that way myself, and in neutral French (according to the linked video, at least), it's pronounced 'pooh-teen', which sounds exactly like protein (I don't know if you pronounce protein different, but for me it's 'pro-teen').

            [1]: https://youtu.be/yyis4TgmXYg

      • vineyardmike 3 years ago

        > You appear to be operating under the impression that the every syllable of a rhyming couplet has to rhyme exactly for it to be considered a rhyme

        I'm not operating under that impression, but the author is [1].

        To me, the final "sounds" should match - not every syllable; an end rhyme, according to wiki [0]. Specifically, I would consider a rhyme to require matching sounds "at least from last vowel to end", but I don't think of rhymes first from the strict definition. Perhaps it's an accent thing, but "-us" in cactus is not the same sound as "-ice" in practice. If a child made a poem with these sounds I would tell them "good job, it's a rhyme", and perhaps for the purpose of a silly word game too. But I would not use it as a passing case for a test of any sort like the author does.

        What's more pleasing is irrelevant; what is relevant is whether it's a true rhyme.

        [0] https://en.wiktionary.org/wiki/end_rhyme#English

        [1] https://news.ycombinator.com/reply?id=35258385&goto=item%3Fi...

        • ShamelessC 3 years ago

          > Perhaps it's an accent thing but "-us" in cactus is not the same sound as "-ice" in practice.

          Indeed, it's an accent thing. In America at least, pronouncing "cactus" with an "is" or an "us" sound are both valid.

          I think that, given that the author provided a working definition, and your provided failure example actually passes that definition (albeit only for the author's dialect), and given that you are now essentially trying to change the subject to "my definition of rhyme is the correct one"; well, you're just being... pedantic? argumentative? I'm not sure.

    • cowllinOP 3 years ago

      In the article, I mention that Twofer Goofer requires perfect or strict rhyme. Perfect and strict rhyme require that all syllables are pronounced identically in the speaker's tongue (for me, American Midwest accent), except for the first sound of the word which can vary.

      Hence pooh-teen and proh-tein do not rhyme. Skell-ih-tin and Gell-ih-tin do rhyme.

      A game like this requires a pretty tight rhyming definition so as not to annoy players on a given day!

      Thanks for reading: https://www.masterclass.com/articles/perfect-vs-imperfect-rh...
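The author's rule is mechanical enough to sketch as code: two words rhyme perfectly if their pronunciations match from the first vowel onward, treating the whole initial consonant cluster as the part that may vary. The phoneme lists below are rough, hand-written ARPABET-style guesses of my own, not a real pronunciation dictionary, and they deliberately use the dictionary-style vowels tetromino_ cites, under which cactus/practice fails the strict test while skeleton/gelatin (in the "-tin" pronunciation) passes:

```python
# Hand-written, approximate phoneme lists (assumptions, not CMUdict).
PRON = {
    "skeleton": ["S", "K", "EH", "L", "IH", "T", "IH", "N"],
    "gelatin":  ["JH", "EH", "L", "IH", "T", "IH", "N"],
    "cactus":   ["K", "AE", "K", "T", "AH", "S"],
    "practice": ["P", "R", "AE", "K", "T", "IH", "S"],
}
VOWELS = {"EH", "IH", "AE", "AH"}  # only the vowels used above

def perfect_rhyme(a: str, b: str) -> bool:
    """Strict rhyme: everything from the first vowel onward must match."""
    pa, pb = PRON[a], PRON[b]
    ia = next(i for i, p in enumerate(pa) if p in VOWELS)
    ib = next(i for i, p in enumerate(pb) if p in VOWELS)
    return pa[ia:] == pb[ib:]
```

Swap the disputed "AH"/"IH" vowels for a Chicago-style schwa in both words and cactus/practice passes too, which is arguably the whole accent debate in one line of data.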

  • LesZedCB 3 years ago

    what's wrong with that?

    the use of novel puzzles is frankly awesome because there's a much lower chance of contamination from previous puzzles so we get a chance to see how much generalization they've achieved.

    • milemi 3 years ago

      I'm complaining about the title writing a check that the blog post can't cash.

      • Skunkleton 3 years ago

        GPT-4 says: A more accurate and balanced title might be: "Comparing Bard and ChatGPT in Puzzle Solving: An Examination within the Context of a Word Game"

      • cowllinOP 3 years ago

        Fair! But if I wrote Twofer Goofer in the title it would not resonate at all. Alas, tradeoffs.

porphyra 3 years ago

How do you navigate this blog to read the other articles? I couldn't find any way to read the one on gpt4 (clicking the underlined "wrote about" does nothing) and twofergoofer.com/blog goes to a 404.

ralfd 3 years ago

It is interesting how lackluster the reactions to Bard are, when it would have been jaw-droppingly amazing just a year ago.

masakreTech 3 years ago

Bard is basically trash

NoZebra120vClip 3 years ago

I tried to play hangman with it, but it was on crack.
