The Uselessness of Useful Knowledge

quantamagazine.org

83 points by bainsfather 4 years ago · 21 comments

ugvgu0oiua 4 years ago

One skeptical way of looking at it is that the explosion of "data science" and ML is basically comp sci running into modeling space in a way it never had before, and getting into territory it wasn't equipped to handle.

It wasn't that long ago that there were posts on here about statisticians giving conference keynotes arguing that data science was basically old wine in new bottles, and being ridiculed for it as behind the times, etc.

Now we see that basically no one actually knows what's going on. My guess is when the dust settles a lot of things will be explained, but it won't be as different from established statistical and information theory as some would make it out to be. That is, some of this is new discovery and figuring out new territory, and some of it is neglecting basics that have been there all along.

My guess is the next phase of this is basically comp sci ML research rediscovering mathematical statistics and information theory.

  • WastingMyTime89 4 years ago

    > My guess is the next phase of this is basically comp sci ML research rediscovering mathematical statistics and information theory.

    That’s the thing students doing ML courses for their CS degree don't get. The people leading ML research today are statisticians.

    Most of them learned their chops before neural networks became cool again. It’s extremely hard to publish anything good without a very solid background in maths.

    There is no large divide today between ML and statistics where people could be forgetting things. It’s very much the same field. The main issue is that statistics was already a somewhat immature part of mathematics even before the ML craziness.

  • Isinlor 4 years ago

    ML is actually overthrowing some truisms from classical statistics.

    Classical statistics did not predict the double descent phenomenon.

    The whole idea that you should not have more parameters than data points was wrong.
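
    You can see the effect in a toy experiment (my own sketch, not from the article): fit minimum-norm least squares on frozen random ReLU features and watch the test error spike near the interpolation threshold (width ≈ number of training points), then fall again as the model grows well past it.

      # Toy double-descent sketch (hypothetical, for illustration only).
      # Minimum-norm least squares on frozen random ReLU features: test
      # error peaks near width == n_train, then drops again when the model
      # is overparameterized.
      import numpy as np

      rng = np.random.default_rng(0)
      n_train, n_test, d = 100, 2000, 5

      def data(n):
          X = rng.normal(size=(n, d))
          return X, X @ np.ones(d) + 0.5 * rng.normal(size=n)  # linear target + noise

      Xtr, ytr = data(n_train)
      Xte, yte = data(n_test)

      for width in [20, 60, 90, 100, 110, 200, 1000, 5000]:
          W = rng.normal(size=(d, width)) / np.sqrt(d)  # frozen random projection
          Ftr, Fte = np.maximum(Xtr @ W, 0), np.maximum(Xte @ W, 0)
          beta = np.linalg.pinv(Ftr) @ ytr  # min-norm (least squares) solution
          print(f"width={width:5d}  test MSE={np.mean((Fte @ beta - yte) ** 2):.3f}")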

    • ugvgu0oiua 4 years ago

      I think the story with that isn't finished yet, and in some ways is the perfect example of what I mean.

  • keithalewis 4 years ago

    Bingo! Who remembers Symbolics and their Lisp machines? Moore's law, not theoretical advancement, is the real story.

    • awild 4 years ago

      I took a course on machine learning for my masters. I had trouble understanding most of it, and a lot of trouble solving the coursework (lots of training models), because I didn't get the theory. The kid with the insane gaming rig always had the best solutions, and by his own account his understanding was not much better than mine; he could just train way faster, so he could make more mistakes and generally ended up with better-trained models.

      One of the techniques we learned in the course was, by the prof's account, working extremely well and was well documented; it's just that nobody really understood why it worked, and people were still trying to prove why it reached optimal solutions.

YeGoblynQueenne 4 years ago

I honestly don't understand all this flowing uphill and flowing downhill talk. We advance science when we understand stuff. Until we understand stuff, we don't have science, we just have stuff. Experimentation can come before or after, but science is the knowledge that comes with understanding that explains observations, not the experiments that generate observations.

People could still float boats before Navier-Stokes? Yes, so people had boats, i.e. stuff. Now we have Navier-Stokes, which is science, not stuff.

Btw, Yann LeCun knows this much better than me, but neural networks are already ancient. The first "artificial neuron", the McCulloch & Pitts neuron, was described in 1943. Frank Rosenblatt created his Perceptron in 1958. Kunihiko Fukushima described the Neocognitron, daddy of the Convolutional Neural Network, in 1979. Hochreiter and Schmidhuber described Long Short-Term Memory networks in 1995. Yann LeCun himself used CNNs to learn to recognise handwritten digits in zip codes in 1989.

That's at least 30 years of research on deep neural nets, almost a human generation. Many of today's postgraduate students studying deep neural nets weren't even born when all this was being done. If this is just the experimentation phase before we pass on to the theorising and understanding phase, when are we going to get to the understanding phase? In 100 years?

nathias 4 years ago

Basing everything in your daily life on unknown/unknowable algorithms seems like a step towards a society where knowledge loses its value.

  • nzmsv 4 years ago

    Isn't this what we already do? We don't really understand how brains work, how what we call knowledge is stored, processed, and pruned. Psychology is a lot more immature than ML. Basically we hope that brains work well enough to define for us what "knowledge" means.

    • nathias 4 years ago

      We don't understand nature, but if we're Faustian, we know we can know it and that this knowledge will give us power over nature. This is something different: it's a product of knowledge that we can't know, or where knowledge is powerless even if it is possible.

      • nzmsv 4 years ago

        Understanding nature is something a lot of people talk about, but it's not obvious to me that it is possible. As in, not possible at all, at exactly the same level as what you are referring to.

        Put simply: there are a lot more possible states in a giant universe than in an individual ape's brain, or even in a collective system made up of many such apes. Wouldn't information theory tell us that we can never represent the larger system inside the smaller one? Or if we do represent it, it has to be as a model that is necessarily incomplete?

        • nathias 4 years ago

          Yeah, nobody is talking about knowing each trivial particular state, but general principles, processes, and so on... Knowledge doesn't imply that you need to replicate the object of knowledge to the same level of detail.

kangnkodos 4 years ago

This is a bad analogy.

The main difference between alchemy and chemistry is that chemistry follows the scientific method.

When an alchemist learned something new, they kept the information to themselves and tried to profit from it. They wanted to turn lead into gold, and then keep the secret method to themselves.

A chemist profits by sharing the new information.

robthebrew 4 years ago

tl;dr, but it reminded me of the "big" news yesterday that via ML you can guess bank ATM PINs from shielded hand movements. Reading down, it turned out the attack could guess about 30% of PINs within the 3-try limit imposed by the machines. Not terrible, but completely impractical unless you steal a LOT of bank cards.

  • devoutsalsa 4 years ago

    I feel like you’d start building up the expertise to steal cards & use ML to guess pin codes only to realize you could just mug someone after the ATM withdrawal.

    • aspenmayer 4 years ago

      I think the potential use case here is to enhance the utility and likelihood of success of a skimmer with a built-in camera, with ML increasing the chances of deducing PINs that would otherwise require falling back to brute-forcing, which is statistically unlikely to succeed compared to the 30% success rate of this ML method. Nothing to scoff at, and good to know for blue teams.

  • shoto_io 4 years ago

    I used an AI summarizer to summarize it for you... is this useful?!

    Could they be the modern equivalent of ancient alchemists? And would it be such a bad thing if they turned our lives into gold-plated jumble? Yann LeCun is co-recipient of the 2018 Turing Award for his work on neural networks. He argues that AI research is just a necessary adolescent phase characterized by trial and error, confusion, overconfidence and a lack of overall understanding. We have nothing to fear and much to gain from embracing this approach.

    Alchemy is considered to have been a useful precursor to modern chemistry, more proto-science than hocus-pocus. We could enter a golden age of modern-day alchemy, in the best sense of the word, but we should never forget cautionary lessons of history.

  • taneq 4 years ago

    > completely impractical unless you steal a LOT of bank cards.

    Like... 3 or 4? What's that, two average wallets?

    (Leaving aside the question of whether scamming a few hundred dollars out of someone's debit account is practical in any sense compared with all the other ways you could legally make way more money with the same skillset, of course.)

    • anyfoo 4 years ago

      Yeah, I was wondering too. A 30% (i.e. roughly 1 in 3) success rate per card seems incredibly good, actually? But possibly I just completely misread what "30% after the 3 try limit imposed by the machines" means?
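
      As a quick sanity check, assuming it really does mean an independent 30% hit rate per stolen card (my reading, not necessarily the article's):

        # Hypothetical back-of-the-envelope: probability of recovering at
        # least one PIN from k stolen cards at a 30% per-card success rate.
        for k in [1, 3, 5, 10]:
            print(f"{k:2d} cards -> P(at least one PIN) = {1 - 0.7 ** k:.0%}")
        # 1 -> 30%, 3 -> 66%, 5 -> 83%, 10 -> 97%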

    • stdbrouw 4 years ago

      But you first have to observe them using the card… so 5 cards implies mugging 5 people, which is quite different from stealing wallets from random targets, easiest first.
