MIT IQ: The MIT Intelligence Quest
iq.mit.edu
MIT is in the process of converting its brand's value into money. It's interesting, because the brand is so strong that they can probably announce even more stupid "quests" and "initiatives" and "labs" (like the IBM "collaboration") to extract value from industry without seriously harming the quality of the students who apply. But as a current student it's kind of heartbreaking. This place has been so special for so long. But even MIT cannot hold out against the corporate cancer that's been spreading through higher education for the last twenty years.
EDIT: just wanted to add, I'd love to ask the marketing team behind this site how they came up with "quest". What the fuck is a quest, and how is it different from all of these researchers just doing whatever they wanted anyway? MIT isn't funding these labs directly in most cases, and the work has been ongoing; it's just a branding exercise.
MIT has been in decline since the 1980s. I attribute it to hiring professional education people as administrators, rather than MIT alums as administrators and as equal partners in running the institution. The end of the Cold War and decline in defense research funding might be a factor, too. I was at MIT in the late 90s and I could see it getting worse even over a few years.
It is still decent in bio, some of the graduate institutes, etc., but the core engineering school isn't the MIT of engineering anymore. My friends who work there largely agree.
Colleges are businesses that deal in information and knowledge for money.
Colleges are businesses that provide a signal of competence, and a network of peers who also want that signal, in exchange for money.
Yes, and?
I don't see anything super wrong about marketing & presenting a research endeavor. I mean, it's a lot more accessible than a 10-year-old website by a professor with a bunch of links to papers, and if it helps people from different fields, and across academia and industry, collaborate, then great.
What makes MIT so special?
They do what they do exceptionally well but aren't THAT different.
It's all branding - and I say that having taught there for 3 years.
Love wandering those halls.
Disagree. The culture is special, and that culture is what drives specific change. The bar is high, and because everyone around you is pushing themselves you end up pushing yourself further than you might have otherwise. The culture pushes you to think big and think impact. The institute gives you such permission to grow your ego in order to grow your work. Having Harvard nearby creates an even more special atmosphere as your social life often involves mixing with others on an intellectual level that spans subject matter. To me that’s just some of the magic.
I also couldn't put my finger on any specific difference, but it really does feel like the halls have a magic to them.
(current undergrad, I'm sure you have more insight into what's going on)
I think q, iq..
someone has to grab the steering wheel, and it can be MIT and other responsible adults, or it could be a bunch of hare-brained [insert rude word] folks of the sort that hijacked neuroscience (HBP, anyone?) or, God protect us, the European semantic web community. They are in the offices of the great and good as we speak, extracting your tax dollars.
This is the high water mark, the summer solstice if you will, of the current wave.
Buckle up, Serious People are going to rediscover the fundamental Hard Problems and relocate the current Hot Topics into their appropriate ontologies.
I can't tell if this is satire about the current state of AI and its hype cycle or not.
It is referencing the following xkcd: https://xkcd.com/1831/
yeah, cousin comments are treating it as if it's serious, but the capital letters make it sound sarcastic
These are the serious people; look at Josh's Google Scholar and read the intuitive reasoning work.
What is some evidence or reasoning that we are reaching certain hard limits as you mentioned?
I'll take a crack. Progress in ML applications to previously intractable problems has created an irrational optimism that AGI is on the near term horizon.
I'm not aware of any novel abstraction which led to a solution to these intractable problems. The problems became tractable because of silicon and incremental algorithm improvements.
This is another way of saying, yeah intractable problems are being made tractable, but these problems aren't stepping stones to AGI.
For example, machine translation from one human language to another has been acclaimed as one of the big success areas in deep learning. But when one looks deeper, there be dragons...
https://www.theatlantic.com/technology/archive/2018/01/the-s...
Oh wow. That's Douglas Hofstadter in great form. Could you please submit that to HN so it has a chance to get to the first page?
Never mind, there's already a conversation. Thanks for posting anyway.
I never mentioned limits, and the reasoning is purely inductive, based on previous events.
We are far away from understanding intelligence in all directions. Top down, bottom up, neurologically, psychologically, logically, mathematically and last but not least philosophically.
There's optimism at the moment because we are doing more stuff with more annotated data (the annotations providing the semantic grounding, as in "Not hot dog" vs. "Not in category 339492-883764-399274"). The key difference this time is access to (and processing power for) sufficiently large "training sets" (read: samples) for deep-learning algorithms (read: statistical models). From an AGI point of view, this is nothing but an expensive parlor trick, because the "intelligent" part is the annotation, not the categorization after the fact.
The annotation is not even particularly smart. In supervised classification, labels are basically scalars, standing for... whatever the researcher means them to stand for. The class represented by the label can be as broad or as narrow as the researcher wants it to be. Even the relation of the class with the data it is supposed to represent is arbitrary and its choice entirely unprincipled and based on instinct alone.
Which goes to show that a) our machine learning models are dumb as bricks and b) they are as far from AGI as worms are from building a rocket to go to the moon, where their god lives (see all those holes up there?).
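To make the "labels are just scalars" point concrete, here's a deliberately trivial sketch (the class names, numbers, and classifier are all made up for illustration):

    # Hypothetical toy: the label is just an index into a list the
    # researcher chose. The model never touches the semantics.
    CLASS_NAMES = ["hot dog", "not hot dog"]  # could just as well be
                                              # category 339492 vs. 883764

    # Training data as the model sees it: (features, integer) pairs.
    training_data = [
        ([0.9, 0.1, 0.3], 0),   # an annotator decided this is class 0
        ([0.2, 0.8, 0.7], 1),   # ...and this is class 1
    ]

    def predict(features):
        # Stand-in for any trained classifier: it can only return an index.
        return 0 if features[0] > 0.5 else 1

    # The semantic grounding happens here, entirely outside the model.
    print(CLASS_NAMES[predict([0.95, 0.2, 0.1])])   # -> hot dog

Swap the lookup table and the exact same model "means" something else entirely.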
Please explain in simpler terms?
Winter is coming.
I'll take a stab at it.
I think they are saying... intellectuals are going to change the way they guide their research or formulate their hypotheses based on this work at MIT. This person believes that this project signifies a paradigm shift in the fundamental origins of thought; hereafter nothing will be the same.
I think this person is trying way too hard to sound smart or poetic.
Nah. enord's saying that they'll all sit down together and realise that they don't know very much about intelligence after all.
So all the current excitement around super-intelligent AIs, and whatnot, will go the same way as it has the previous times we all got excited.
Some of the most intelligent people in the world will gather together, and learn that they still don't even know what "intelligent" means, much less how to actually build it.
Then they will discover that this has happened before--several times--and the reasons they had to think that it was all different this time were all based on exaggeration and wishful thinking, promoted by people trying to make a buck.
We can throw more hardware at the problem now, but even so, every advance seems to be accompanied by a reassessment and lengthening of the distance to the finish line.
But isn't AI getting perceivably better every run?
The finish line of a marathon is getting nearer every step, but we've yet to clear even the first mile.
"this person" was channeling the pretentious buzzword-packed hype of the OP article.
MIT is also offering a course on Artificial General Intelligence: https://agi.mit.edu/
List of recommended articles and papers (reading material for the AGI course): https://agi.mit.edu/vote-ai/
Sounds like they are solving the exodus of brains to Industry by effectively creating Industry-funded departments located on campus. “You will be an MIT employee, fully funded by Google.”
Near the bottom of the page:
> A key to the success of MIT IQ will be identifying industry allies who share our passion for tackling big, real-world problems. That work is already underway: we have forged a number of collaborative projects with industry, such as the MIT–IBM Watson AI Lab.
However, the biggest incentive to go into industry is income. Sure, your research and your department can do more with more funding, but wouldn't you still make about the same amount as a grad student/research scientist/professor?
Maybe Google pays more than average for these positions.
Related:
UC Berkeley launches Center for Human-Compatible Artificial Intelligence
http://news.berkeley.edu/2016/08/29/center-for-human-compati...
> And today, by tapping the united strength of these and other interlocking fields and capitalizing on what they can teach each other, we seek to answer the deepest questions about intelligence — and to deliver transformative new gifts for humankind.
Ugh, this kind of corporate-speak is nauseating. Can anyone understand what this "quest" actually entails?
Understanding the recipe and salient ingredients by which nature generates intelligence, so that we can build a theory of intelligence, is long overdue, but they're starting at too high a level. We don't have a good model for artificial neurogenesis that allows us to create complex AI from a simpler set of building blocks. We don't have a genotype-to-phenotype mapping, or a generic representation to encode complex phenotypes in a mathematical genetic abstraction. We don't have an abstraction by which mutation can create open-ended phenotypic variation. We don't have a model for artificial evolution to drive the evolution of novelty.
If we want to solve this problem we're going to have to reverse engineer intelligence. Otherwise we're just going to keep running into walls, either by trying to brute-force our way from the ground up while ignoring lessons from biological intelligence, or by philosophizing from the top down.
What is the “problem”? Depending on what you're trying to solve, you don't need to go as low-level as possible; it's likely a mistake, since trying to model the universe is a futile exercise.
I don't think connected neural ensembles made from a deep learning architecture can scale to what we would call general artificial intelligence.
At least not with decades of manual supervision.
For some reason I read it as "the MIT intelligence test", and was confused.
I respect what they are doing, but I disagree with the methods. Intelligence could be one of those things that is very hard to engineer directly, and we'd have more success if we inspected the underlying processes and then simulated those.
I've posted this before but here is my proposal: https://scrollto.com/life-a-universe-simulation/
What I propose is a minimum viable digital environment that can support the creation of self-organized Turing machines that feed off their environment. What this really means is coming up with a digital environment that can support the evolutionary process. Evolution requires vast space, vast time (in this case, clock cycles), and principles that allow for both the storage and movement of information. The storage and movement of information are accomplished most simply by roughly emulating the mass/energy conservation/conversion laws we have in our universe. With just collisions that form stationary quasiparticles, which can also annihilate to re-form the moving fundamental particles, universal computation is enabled.
Toffoli and Fredkin discovered the power of collision-based computing decades ago. There is a lot of literature and good results they derived on the power of these types of systems.
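As a toy illustration of the collision/annihilation mechanic described above (the lattice size, rules, and bookkeeping are my own illustrative choices, not Toffoli and Fredkin's actual billiard-ball model): moving particles fuse into stationary quasiparticles, and a later hit annihilates the quasiparticle back into movers, with total "mass" conserved throughout.

    import random

    N = 40                                  # lattice size (periodic boundary)
    movers = [(random.randrange(N), random.choice((-1, 1))) for _ in range(8)]
    stationary = {}                         # cell -> stored mass (always 2 here)

    def step():
        global movers
        emitted = []
        # 1. Advance every mover by its velocity.
        advanced = [((pos + vel) % N, vel) for pos, vel in movers]
        # 2. Annihilation: a mover hitting a quasiparticle frees the stored
        #    mass as two fresh movers on neighboring cells; the mover passes on.
        for pos, vel in advanced:
            if pos in stationary:
                del stationary[pos]
                emitted.append(((pos - 1) % N, -1))
                emitted.append(((pos + 1) % N, +1))
            emitted.append((pos, vel))
        # 3. Fusion: two movers meeting head-on on an empty cell become a
        #    stationary quasiparticle, storing their combined mass in place.
        by_cell = {}
        for pos, vel in emitted:
            by_cell.setdefault(pos, []).append(vel)
        movers = []
        for pos, vels in by_cell.items():
            if len(vels) == 2 and vels[0] != vels[1] and pos not in stationary:
                stationary[pos] = 2
            else:
                movers.extend((pos, v) for v in vels)

    for _ in range(100):
        step()
    # Conservation check: each mover is mass 1, each quasiparticle mass 2.
    print(len(movers) + 2 * len(stationary))   # should always print 8

The point is only that information can be both stored (stationary quasiparticles) and moved (particles in flight) under a rule set that conserves a quantity, which is the substrate the evolutionary process would need.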
Let's create life the only way we know it formed -- evolution. It's far more elegant and less engineered than trying to unravel how chaos formed competitive results on a million-deep evolutionary ancestor tree.
Make sure you cull the programs that might end up escaping the sim and destroying all humans.
If they figure out how to escape I think I'll let them have it.