Computer Scientist Leads Way to Next Revolution in Artificial Intelligence
sciencedaily.com
Last time I was at UMass Amherst it was Bong Day or something, and people were out blithely blazing up on the lawn of the university square.
Fittingly, this article reads like a stoner's ramblings. It doesn't get into what exactly a "Super-Turing" machine is capable of that a Turing machine is not. Some googling turned up theoretical possibilities (oracles and such), none of which appear to be physically buildable.
Just because Psychology and Philosophy majors bring in most of the school's money doesn't mean the Comp-Sci dept isn't fantastic.
According to Google, it's third in the country. http://pages.cs.wisc.edu/~remzi/rank.html
That's about the worst ranking methodology I can think of. It doesn't even use Google Scholar, which could at least introduce an interesting metric, like citation counts for papers from each university.
Also, how do people not know this already? The results number in Google is wrong. It's just an inaccurate estimate designed to give people the "wow" factor when you search in Google.
By all other measures it's still a very good school for Comp-Sci, especially considering that it's a public university with somewhat reasonable tuition. Also, while the general acceptance rate is high, you do need a 3.5 GPA to get into the Comp-Sci program.
For example, take a look at this publication-based ranking: http://www.urch.com/forums/computer-science-admissions/79645...
Oh, I'm not disputing that Amherst is a good school, just pointing out that that metric is a stupid way to rank schools.
All of her publications are available on her webpage, except this latest one, which I assume is the subject of the press release: http://binds.cs.umass.edu/publications.html
Last time I was at UMass Amherst it was Bong Day or something, and people were out blithely blazing up on the lawn of the university square.
Must have been some years ago, then, because Extravaganja has been held in Amherst town for a long while now.
It doesn't get into what exactly a "Super-Turing" machine is capable of that a Turing machine is not.
I'm not great at interpreting technobabble, but I think it means everything in NP-Hard is now solvable in linear time.
Or something.
That isn't right. Super-Turing just means that it has capabilities a classical Turing machine does not[1], which is why this article is so terrible. What are these extra capabilities, and how does it achieve them?
Yeah, that's what I was trying to poke fun at. Me bad at jokes, apparently.
No, stronger. It means the Halting Problem can be solved. Time to break out the bubbly! Or not.
The more I look into it, the more this Super-Turing business sounds like the computability-theory version of FTL neutrinos. According to Martin Davis, Siegelmann's paper involves neural nets with arbitrary real-numbered (as opposed to integer or rational) weights being able to recognize languages on the alphabet {a,b} that a Turing machine can't.
So you have this thing that can get uncomputable results because it's given uncomputable inputs! In other words, no more interesting, or practical, than attempting to do computation with the n-body problem, and almost certainly nothing we can actually build.
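To make the objection concrete (my own illustration, not from Davis or Siegelmann's paper): one can pack the halting set directly into a single real-valued weight,

    w \;=\; \sum_{n \in H} 2^{-n}, \qquad H = \{\, n : \text{program } n \text{ halts} \,\}

Reading off the n-th binary digit of w then decides whether program n halts, so a net allowed to consult w with unbounded precision has effectively been handed a halting oracle as one of its inputs.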
Here's the Science paper that describes super-Turing computation:
http://binds.cs.umass.edu/papers/1995_Siegelmann_Science.pdf
I haven't had time to do anything more than skim it. My initial bogometer reading is not quite pegged, but it's close.
Thank you for posting the article. I was having some difficulty finding more information.
The article did not make sense so I dug up some of the papers in question. Those do not make sense either.
It appears to be a poorly executed and overstated attempt at (re-)discovering high-order inductive computational models. These are equivalent to Turing models unless you happen to have a hypercomputer at your disposal.
The endless parade of low-quality claims like this is why no one takes AI research seriously, even the quality work.
"no one takes AI research seriously"
Really? You don't think modern AI research has produced any serious results? Do you know how many applications machine learning methods are used in today?
I just hate the headlines. In one of the first lectures of my postgrad program, our professor taught us how to critically examine research papers and claims. He said, "If anyone claims to have come up with something 'revolutionary', you should be ten times more skeptical about his claims. It is very likely that the person doesn't know what he is talking about."
Of course, in these cases the news reporters are more to blame than the scientists who worked on the problem.
"This model is inspired by the brain," is another red-flag to me.
as were "more powerful than a Turing Machine" and "2^aleph_null possible behaviours" for me
Why? Biologically inspired ANNs have been around since the 50s. What's the problem with trying to model neural computation?
Because it's such a buzzword, and the models used in practice come nowhere near capturing actual neural computation in the Drosophila brain, let alone the human one (http://www.scientificamerican.com/article.cfm?id=brain-in-a-...).
I agree, and it's a shame that writers feel they need a hook to attract readers. However, I decided to post the article because I was hoping Hacker News users could provide more input on the concepts discussed. I'd like to get more involved in AI, and this article seems to present a new way of looking at it (at least I think it's new).
As scott_s pointed out, the reporters have merely published what the university gave them: http://www.umass.edu/newsoffice/newsreleases/articles/149986...
The article is very confused, but I think what it's trying to say is that the researchers are building a hardware-based recurrent neural network that operates on real numbers (i.e. analog). Hence it would be strictly more powerful than a Turing machine. The possibility of building a physical device that can harness the reals to infinite precision is a big assumption.
So this device would be an example of a hypercomputer. Its existence would disprove the physical Church-Turing thesis. It would also be very difficult to verify that it was actually calculating what it claimed [1]. As a hypercomputer it could, by definition, solve the halting problem. It is also more powerful than a quantum computer.
Simple argument: a Turing machine can inefficiently simulate a quantum computer, but by definition a Turing machine cannot simulate a hypercomputer. A hypercomputer could hence compute incomputable functions. And since it could tell whether a program will halt, it could compute Chaitin's constant. One consequence of that is the resolution of the twin prime conjecture, Goldbach's conjecture, and other open number-theory problems. It should also be able to compute Solomonoff's universal prior and hence act in a Bayes-optimal manner. Strong AI.
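To make the Goldbach step concrete, here's a hypothetical sketch (halts() is the assumed hypercomputer primitive; nothing like it exists on real hardware):

    # Given a halting oracle `halts`, Goldbach's conjecture reduces to a single
    # query: does this counterexample search ever terminate?

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    def goldbach_counterexample_search():
        """Returns a counterexample if Goldbach's conjecture is false;
        loops forever if it is true."""
        n = 4
        while True:
            if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
                return n  # an even number > 2 that is not a sum of two primes
            n += 2

    # With the hypothetical oracle, one query settles the conjecture:
    #   goldbach_holds = not halts(goldbach_counterexample_search)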
If what is claimed can be done then this is a very big deal.
Some other implicit or explicit arguments in this article:
- The human brain is more than a Turing machine
- A Turing machine will not be able to realize AI
- "Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed."
I do not buy any of those arguments, and I am not sure what that last quote is supposed to mean.
From an earlier paper:
> such super-Turing capabilities can only be achieved in cases where the evolving synaptic patters [sic] are themselves non-recursive (i.e., non Turing-computable)
"Interactive Evolving Recurrent Neural Networks are Super-Turing", 2012, Jérémie Cabessa http://jcabessa.byethost32.com/papers/CabessaICAART12.pdf
So, create a neural network whose weights evolve according to a non-Turing-computable pattern, and its output might not be Turing-computable.
Wish there were a bit more information here. The article's a little breathless without telling us exactly how this practically improves on ANNs and the Turing model, and I couldn't find a more accurate representation of this paper.
Hypercomputation models that depend on things like infinite-precision real numbers have been around for a while, including in Siegelmann's work, so I'm curious to know what specific advance is being reported here in "Neural computing".
It's not an article, but a copy-paste of a press release: http://www.umass.edu/newsoffice/newsreleases/articles/149986...
Edit: never mind, I thought she was talking about a real machine until I read the paper. Even the Turing machine model assumes infinite storage space, so there must be more to her "super-Turing" machine than just infinite storage.
I wonder if you could just build an ARNN and "fudge" the infinite precision with a reasonably large precision? Or with a big disk, you could compute an unreasonable amount of precision :) Or store the infinite precision in a lazy way that only calculates as much as you need for a particular answer.
The thing is that you could do all that on a Turing machine, so any model just "fudging" infinite precision would be equivalent in power to a Turing machine.
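For what it's worth, here's a minimal sketch of that "lazy precision" idea (my own toy example, not from the paper): represent a real number as a generator that produces digits only on demand. Anything representable this way is computable by construction, which is exactly why the trick can't escape the Turing model.

    from itertools import islice

    def sqrt2_digits():
        """Yield the decimal digits of sqrt(2) one at a time, using only
        integer arithmetic (digit-by-digit square root)."""
        a, n = 1, 2  # invariant after each step: a^2 <= n < (a + 1)^2
        yield 1      # integer part of sqrt(2)
        while True:
            n *= 100
            a *= 10
            while (a + 1) ** 2 <= n:
                a += 1
            yield a % 10

    # Pull only as many digits as a particular answer needs:
    print(list(islice(sqrt2_digits(), 8)))  # [1, 4, 1, 4, 2, 1, 3, 5]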
Meh. I'll wait till this research is published in AAAI, IJCAI, IEEE PAMI or something of that caliber, and not "Science Daily".
It's published in Neural Computation, it says so right there in the article.
For anyone else who's slightly miffed by the presence of three columns, only one of which contains the actual story: