
A.I. accurately predicted the full baseball post-season back in July

marketwired.com

85 points by Cortexia 9 years ago · 70 comments

sixhobbits 9 years ago

This reminds me of one of the chapters from "How Not to Be Wrong: The Power of Mathematical Thinking" by Jordan Ellenberg (highly recommended). He describes how "stock brokers" would send out a "free stock prediction" to thousands of email addresses. The prediction was a simple up/down call for a specific stock, chosen at random. But these "brokers" would send an equal number of up and down predictions, guaranteeing a correct prediction for half of their recipients. They would then throw away the wrong half of the list and repeat with the remaining half. After ten rounds, a small number of people would remain who had received only correct predictions (10 in a row, which seems really impressive if you can't see the full picture). The "brokers" would then contact these few people and offer to keep selling them predictions for a fee.
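
The arithmetic behind the scam is easy to check. Here's a minimal sketch (the list size and round count are made up for illustration, not taken from the book):

```python
import random

def run_scam(num_recipients: int, num_rounds: int) -> int:
    """Each round: send 'up' to half the list and 'down' to the other
    half, then discard everyone who received a wrong prediction.
    Whatever the stock actually does, half the list got a correct call."""
    recipients = list(range(num_recipients))
    for _ in range(num_rounds):
        random.shuffle(recipients)
        half = len(recipients) // 2
        told_up, told_down = recipients[:half], recipients[half:]
        market_went_up = random.random() < 0.5  # the actual (random) outcome
        recipients = told_up if market_went_up else told_down
    return len(recipients)

# Starting from 1024 addresses, exactly one recipient survives 10 rounds,
# having seen 10 consecutive correct "predictions".
print(run_scam(1024, 10))
```

The survivor count is deterministic (it halves each round) even though which recipients survive is random.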

Stories like this (and Paul the Octopus, who I see was mentioned already) are exactly the same thing. Thousands of people are trying to use deep learning (i.e. stats), or other crazy methods as in this article, to make predictions. Of course every now and then one of them is going to do better than expected; that would be the case even if people were simply using random numbers. But we ignore all the ones that fail and give heaps of attention to the Pauls.

  • CapacitorSet 9 years ago

    If anyone is interested, this is known as p-hacking in statistics (https://en.wikipedia.org/wiki/Data_dredging), and works in a similar way.

    For instance, you have a statistical population of one hundred men and one hundred women. You collect as much data as possible about them - as many features as possible, actually - until you find something that happens to be statistically significant for your group (e.g. salt consumption). Then you publish your results, pretending that the feature you found was the original hypothesis for the study ("Our study confirms that salt consumption is higher in males.")
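
    As a toy illustration (invented numbers, with a plain two-sample z-test standing in for whatever a real study would run): measure enough unrelated features on two identical groups and a few will look "significant" purely by chance.

```python
import math
import random
import statistics

random.seed(42)

def false_positives(n_features: int = 40, n_per_group: int = 100) -> int:
    """Draw every feature for both groups from the SAME distribution,
    so any 'significant' difference is a false positive, then count how
    many features cross the nominal p < 0.05 threshold anyway."""
    hits = 0
    for _ in range(n_features):
        men = [random.gauss(0, 1) for _ in range(n_per_group)]
        women = [random.gauss(0, 1) for _ in range(n_per_group)]
        # Two-sample z statistic; |z| > 1.96 corresponds to p < 0.05.
        se = math.sqrt(statistics.variance(men) / n_per_group +
                       statistics.variance(women) / n_per_group)
        z = (statistics.mean(men) - statistics.mean(women)) / se
        if abs(z) > 1.96:
            hits += 1
    return hits

# With 40 features you expect ~2 spurious "findings" to publish.
print(false_positives())
```

    Report only the features that "hit" and you have a headline, exactly as described above.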

    • verbify 9 years ago

      It would be far more specific - you'd collect all their medical details, their ethnicity, age, etc., and then you end up with:

      'Salt consumption can increase the risk of liver consumption for middle-aged males of African descent'

      • rrobukef 9 years ago

        ... liver consumption ...

        • verbify 9 years ago

          I meant liver disease. But I'll leave it this way because it's funnier. And pretty tasty.

          • apetresc 9 years ago

            "Consumption" is an old-fashioned word for classes of tuberculosis, which can affect the liver. So you could still be right :)

        • AznHisoka 9 years ago

          It's filled with vitamin A.

      • Revex 9 years ago

        I see these types of click-bait headlines all the time... and come to think of it, they tend to have very small sample sizes.

  • Frqy3 9 years ago

    Here is a modern version of the same scam [0], using social media accounts and deleting the wrong predictions while the account is set to private.

    [0] https://medium.com/message/how-to-always-be-right-on-the-int...

  • codethief 9 years ago

    Fantastic comment! In fact, it seems that sports games, or at least NBA games, can be described accurately and consistently using (slightly modified) random walks. Put differently: Outcomes are indeed random and there's not much machine learning you can do here.

    Source: https://arxiv.org/abs/1109.2825

    And here's a slightly more exciting description of a talk one of the authors gave on that topic at UMass Amherst last year:

    https://www.physics.umass.edu/seminars/statistics-of-basketb...

    EDIT: I was too stupid to realize that the paper linked above actually supports the parent's opinion, i.e. the idea that successful predictions are statistical artifacts, contrary to what I was thinking earlier.
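
    A toy version of that random-walk picture (the play count and trial count here are invented, not from the paper): model each game as a coin-flip walk and the final margin decides the winner, so either side wins about half the time.

```python
import random

def random_walk_margin(n_plays: int = 101) -> int:
    """Score each play as +1 or -1 for team A at random; the sum is the
    final margin. An odd number of plays avoids ties."""
    return sum(random.choice([-1, 1]) for _ in range(n_plays))

random.seed(0)
trials = 10_000
wins_a = sum(random_walk_margin() > 0 for _ in range(trials))
print(wins_a / trials)  # hovers around 0.5: outcomes driven by chance
```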

  • rosser 9 years ago

    "The general root of superstition is that men observe when things hit, and not when they miss, and commit to memory the one, and pass over the other." — Sir Francis Bacon

  • kowdermeister 9 years ago

    Derren Brown did the same thing with horse racing

    https://www.youtube.com/watch?v=lX94fV4TWbc

  • jonshariat 9 years ago

    But this isn't that at all.

    1. They made the predictions well beforehand and released them to the public.

    2. As the article stated, they also did the same thing with hockey, the Kentucky Derby, and the Academy Awards.

    • garyrob 9 years ago

      If there were an extremely large number of AIs making all those predictions publicly in advance, so many that one might randomly do that well, then the comment would be accurate. But that does not appear to be the case.

      There was absolutely SOME luck involved, however: I don't believe there is zero randomness in the World Series, which is what it would take for the outcome to be perfectly predictable.

      [UPDATE: to be clear, I'm assuming that Unanimous didn't make thousands of similarly high-level predictions, and then only report the ones that did well. I think that's a reasonable assumption, because there aren't thousands of high-level predictions on the level of the Oscars and World Series.]

      [UPDATE 2: I just registered at the site. It appears that many people can ask the same question, many times. The same question looks like it can be asked, in fact, many thousands of times. If they were simply cherry-picking the one answer out of thousands that was correct, then this is p-hacking. However, the press release is listing questions asked by prominent entities such as Newsweek and TechRepublic. There aren't all that many of such entities asking such questions of UNU. So the water is a little murky, but it still looks like UNU is doing something impressive.]

  • jvandonsel 9 years ago

    This technique was also described on The Simpsons: http://simpsons.wikia.com/wiki/Professor_Pigskin

  • treehau5 9 years ago

    How dare you blaspheme against our prophet, Paul?

    (no seriously, great comment)

  • CortexiaOP 9 years ago

    Except this was a prediction that was done formally for the Boston Globe, at their request. You can see their article about it here:

    https://www.bostonglobe.com/sports/redsox/2016/10/04/group-g...

    That's pretty different from sending out thousands of random predictions. This was ONE prediction about MLB.

    • vannevar 9 years ago

      But we don't know how many other predictions were also formally done, by other entities. We're only hearing about this one because it was right.

  • CortexiaOP 9 years ago

    They predicted the Kentucky Derby (Superfecta) using this same A.I., based on a challenge from another reporter:

    http://www.newsweek.com/artificial-intelligence-turns-20-110...

no_protocol 9 years ago

Nothing about this seems to add up.

They claim they made the prediction in early July, but link to a newspaper article dated 4 August that indicates the predictions were made just one day earlier.

They picked the team with the best record all season long to win the championship. They got one of the division winners wrong.

Just publishing the current favorites from MLB.com's probability page [0] as of 3 August would have also gotten 9 of 10 postseason teams correct, including going 6/6 on division winners. So the 'knowledge' of fans voting actually did worse than a Monte Carlo simulation.

I'm not impressed.

There's no way this should be considered predicting the "full baseball post-season," and I am not seeing any evidence that it happened in July. I wish they'd shared it.

[0] http://mlb.com/mlb/standings/probability.jsp?ymd=20161002

  • CortexiaOP 9 years ago

    They tend to publish academic papers about the predictions. This one is obviously too recent to review, but here is an academic paper (IEEE) about their SUPER BOWL predictions, complete with formal statistics:

    http://unu.ai/wp-content/uploads/2016/10/Crowds-Vs-Swarms-SH...

    • zach 9 years ago

      By "They tend to publish academic papers" you mean you used an "academic paper" template and uploaded it to your website.

  • bluetwo 9 years ago

    According to the article:

    "A group of Boston Globe readers accurately predicted nine of baseball’s 10 playoff teams after participating in a 30-minute online experiment using Unanimous A.I.’s Swarm Intelligence on Aug. 3."

    So they don't credit the AI as much as the readers. I agree it is all fishy. Someone trying to pump up the value of their company.

    • CortexiaOP 9 years ago

      Actually, that's how a Swarm Intelligence works - it's a real-time system that connects LIVE PEOPLE using swarming algorithms.

      So, the Boston Globe provided the people and provided the questions... they formed a Swarm Intelligence, and made the predictions.

      The Boston Globe did this to see if the swarm intelligence could make strong picks. It did.

      • bluetwo 9 years ago

        I get it, but there is a little bit of dishonesty in saying it was the system that did the work. It was the system that automated discovery of a solution, but it was the people that did the work.

        As I pointed out in a different post, this is an update of the established technique of delta polling. Delta polling is useful, and an automated way of doing it could help us find even more uses for it at a lower cost. I see the value here. But it isn't AI, and the system isn't doing the assessment. It is not intelligence.

  • fleitz 9 years ago

    There's also the issue of the full suite of predictions: if these were the only predictions made, then it's impressive; but if they made lots of predictions, then some of them coming true may be no better than chance.

    • CortexiaOP 9 years ago

      They also predicted which managers would win the MVP awards, and which players would win the CY YOUNG awards but those don't get announced for 2 weeks.

    • FonzieBear 9 years ago

      Well, The Boston Globe only made the one set of predictions.

llamataboot 9 years ago

UNU seems to get their press releases on here a lot. As far as I can see there's not much "AI" involved, just a UI over the "wisdom of crowds" method of making predictions. In this case, the Cubs were heavily favored all season to win the World Series, had arguably one of the best GMs and managers in baseball, and a raft of all-star players. Goat aside, it was fairly smart money to lean towards them from mid-season on.

Same thing with their Kentucky Derby prediction this year. The swarm literally picked the horses in the exact order of the odds they were going off at (which makes sense, since gambling odds by their very nature are "the wisdom of the crowd"), and that's how they finished.

Tangokat 9 years ago

Not to be overly critical but:

It does not match my definition of A.I.:

"UNU enables groups of online users to think together as a unified emergent intelligence -- a "brain of brains" that can express itself as a singular entity. Touted to as the world's first "hive mind," the UNU platform has had over 60,000 human participants in swarming sessions this year, together answering over 250,000 questions."

Also, I would reasonably expect some of those 250,000 questions to beat the odds and get answered right.
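
To put a rough number on that (the per-question chance levels here are hypothetical):

```python
def expected_hits(n: int, p: float) -> float:
    """Expected number of questions answered correctly by luck alone."""
    return n * p

def prob_at_least_one(n: int, p: float) -> float:
    """Probability that at least one of n independent long shots lands."""
    return 1 - (1 - p) ** n

# If each question were a 1-in-100 long shot, 250,000 tries would still
# produce thousands of chance successes to showcase:
print(expected_hits(250_000, 0.01))        # 2500.0
print(prob_at_least_one(250_000, 0.001))   # effectively 1.0
```

Even rare coincidences become near-certainties at that volume.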

mehwoot 9 years ago

1) The AI was just synthesizing answers given by human readers. It didn't do any of its own analysis of the data set.

2) The experiment was published in August, when the regular season was already two thirds completed. The Cubs were well ahead of everybody at that point and were favourites to win (although in baseball that doesn't necessarily mean you are going to win in the postseason). Here are the standings at that date: http://www.baseball-reference.com/games/standings.cgi?year=2...

You can see that the 10 playoff teams were ranked 1-5 in each league at that point. So predicting the playoff teams was just "Which 10 teams are leading right now", which they asked humans about.

The AI didn't predict the full post-season, just which two teams would be in the World Series, which happened to be the team everybody thought it would be from one league and the second placed team from the other.

bluetwo 9 years ago

This reminds me very much of delta polling, where you survey experts in a field with a complex and unsolvable question, tally the results, send that information back to the experts, and then ask them again. After a few rounds this tends to arrive at what is usually a pretty solid answer.
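
A rough sketch of that iterative loop (the "revise halfway toward the group median" rule and the numbers are my own modelling assumptions, not a real protocol):

```python
import statistics

def delta_poll(initial_estimates, n_rounds=3, pull=0.5):
    """Iterative polling sketch: after each round, every expert sees the
    group median and revises their own estimate partway toward it.
    Returns the final consensus and the final individual estimates."""
    estimates = list(initial_estimates)
    for _ in range(n_rounds):
        consensus = statistics.median(estimates)
        estimates = [e + pull * (consensus - e) for e in estimates]
    return statistics.median(estimates), estimates

# Five experts estimate some quantity; outliers get pulled in and the
# group converges on a consensus.
final, revised = delta_poll([8, 11, 9, 15, 10])
print(final)  # 10.0
```

The point is that the spread shrinks each round while the central answer stays put, which is why a few rounds usually suffice.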

It is used sometimes in scientific and medical research. An automated tool is pretty neat, but like others said, it doesn't really classify as AI. I'm not sure how much money I would really put down on the bets the site makes, but it is similar in some ways to the scandal that rocked DraftKings/FanDuel, where admins were using high-level data to make bets on opposing systems. They did in fact make money.

blurbleblurble 9 years ago

It irritates me that this is called "A.I.".

losteverything 9 years ago

Anyone remember Tamara Rand? [0]

Well, one of the greatest Tamara Rand jokes was from CNN Sports Tonight: "The Cubs are predicted to win the World Series. Only thing is, it was predicted by Tamara Rand."

Quite cool at a time when TV commentary was never light-hearted.

[0] http://hoaxes.org/archive/permalink/tamara_rand

zitterbewegung 9 years ago

What UNU does is more like "A live online poll of a group of people picked the post-season in July".

andrewclunn 9 years ago

How many AIs screwed it up? Remember the hits, forget the misses.

gnicholas 9 years ago

I'd be curious to know what else they predicted that turned out to be wrong. This could be an impressive run, or it could be that the company's press release highlights several victories and omits several (or more) failures.

I have no evidence one way or the other but would be interested to see more context.

Xeroday 9 years ago

Has Unu made any incorrect predictions? Their blog only seems to cover the big, successful ones.

lawnchair_larry 9 years ago

Survivorship bias.

Also why the stated historical performance of your 401(k) funds is probably tricking you.

orasis 9 years ago

Oh cool. How many AIs did they have doing the predictions? Survivorship bias.

pgodzin 9 years ago

The article mentions "swarm intelligence" that essentially forms a hive-mind. Where is the AI/ML when it seems like it just picks the most popular responses from its many respondents?

CortexiaOP 9 years ago

Here is the latest UNU election pick: http://unu.ai/election-fatigue/

davesque 9 years ago

What are the chances? Probably not that slim considering how many people are trying to make predictions using methods like this.

FonzieBear 9 years ago

What a game. What a series.

  • vecter 9 years ago

    Hi and welcome to Hacker News! Please only post comments that add something meaningful to the topic of discussion (a proclaimed artificial intelligence that claims to have predicted this result much earlier).

joshagogo 9 years ago

Found this forward-looking post on which states will pass marijuana legalization ballot issues. http://unu.ai/legalization/

joshagogo 9 years ago

Who does the A.I. say will win the election next week?
