Intel Unveils Strategy for State-Of-the-Art Artificial Intelligence
newsroom.intel.com
For a millisecond I thought Intel were doing something new and interesting. Neuromorphic or neuromemristive architectures, maybe.
Nope it's just some rebranded x86 cores.
They aren't rebranded x86 cores. This is from Nervana Systems, which was acquired by Intel a few months back.
More details about the Nervana architecture: https://www.nervanasys.com/nervana-engine-delivers-deep-lear...
Previous HN discussion about Nervana Systems.
Nervana TPUs aren't rebranded x86 cores. They aren't neuromorphic either, and I don't know what neuromemristive is.
Neuromorphic means brain-like, so basically, super parallel.
Neuromemristive, although I haven't heard the term before, means using memristors to compute artificial neural networks.
>using memristors to compute artificial neural networks
What does that mean?
That means memristors are used to store neural network weights, and a signal passing through a memristor would be effectively multiplied (scaled) by the weight value.
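To make that concrete, here's a minimal NumPy sketch of the idea (the crossbar size and conductance values are made up purely for illustration): each memristor's conductance stores one weight, the input activations are applied as voltages on the row wires, and the currents summing on each column wire are the layer's pre-activations, so the matrix-vector product happens in the analog domain.

    import numpy as np

    # Hypothetical 4x3 memristor crossbar: conductance G[i, j] encodes the
    # weight connecting input i to output j (illustrative values only).
    G = np.array([[0.9, 0.1, 0.4],
                  [0.2, 0.8, 0.5],
                  [0.7, 0.3, 0.6],
                  [0.1, 0.5, 0.2]])

    # Input activations applied as voltages on the row lines.
    v_in = np.array([0.3, 1.0, 0.5, 0.0])

    # Ohm's law per device (I = G * V) plus Kirchhoff's current law per
    # column gives the weighted sums "for free" in analog:
    i_out = v_in @ G   # column currents = one layer's pre-activations
    print(i_out)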
Interesting. I wonder if the technology will ever be economically viable to build chips out of.
Perhaps it uses a memristor?
I think memristors are an HP thing. Intel's competing product is 3D XPoint, which, according to marketing, is a persistent memory technology that's higher density than DRAM and faster than flash.
There are a bunch of companies with memristor based products, HP is just the loudest about it.
Not just the loudest, it was the first one to implement them.
Do you mind explaining those architectures?
I think he means something along the lines of this IBM chip:
https://www.wired.com/2014/08/ibm-unveils-a-brain-like-chip-...
I think Intel would be wiser just improving the performance of proven algorithms, rather than trying something radically different.
The hardware battle over AI platforms is heating up. It will be interesting to see whether Intel can catch Nvidia, or GPUs in general.
As someone who works in the semiconductor industry, I worry that joining an AI hardware company is a risky move: it could falter just like General Magic and other hyped hardware startups did.
Intel itself was once a risky company to join, as the whole concept of a microprocessor seemed both stupid and crazy at the same time. Their 4004 series calculator controller was a useless toy compared to more serious computer hardware at the time and their 8008 was marginally less toy-like but equally useless for enterprise computing.
We're in the 8088 stage of AI right now.
Good point. I have a cognitive bias against startups because I think what they're doing isn't complex, but that bias is fallacious.
You shouldn't chase complexity; chase opportunity. For the same opportunity, a non-complex approach will normally be superior to a complex approach.
Also, complexity is hard, and it's even harder to do substantially better than your competitors.
Maybe I'm biased, but I can count far more company fortunes that were made doing a simple thing (at the right time) than a complex thing.
Nvidia is heavily pushing AI right now, and it could be their biggest market in the future. But they're not a startup: Nvidia is valued at $50B. Their core market is selling high-end graphics cards to hard-core gamers. Even if their AI business failed completely tomorrow, they'd still be a viable company.
Worth noting that while gaming may be the biggest part of their business, it only accounts for ~50-55% of their revenue.
Do you have any other examples of failed hardware? I love reading about these, most of which I get from r/shittykickstarters, but I'd like a richer vein of failed SV startups.
Thinking Machines and Transmeta come to mind first. I'm foggy on the details around Thinking Machines, but I vividly remember the hype and the lustrous lies/promises the media pushed about Transmeta.
Thinking Machines' demise is pretty well documented in the "Inc Magazine" article about them: http://www.inc.com/magazine/19950915/2622.html
I only know Transmeta because Linus Torvalds worked there.
Thankfully none of the major players are startups.
Not after Nervana got scooped up.
I seriously doubt it; so far, all of Intel's attempts at performance-minded GPUs have been sub-par.
They are just good enough as integrated GPUs for the occasional gamer and that is it.
I hope so, because NVIDIA's cuDNN is an awful black box of brokenness, and we only use it because there's no reasonable alternative.
I would love to hear any opinions on how one might invest in this. The no-brainer, Nvidia, has already had a huge run, although I wouldn't be surprised if it doubled from here in a year's time.
If Intel is smart, they will invest heavily in library development. Researchers use whatever is fast, and right now CUDA is fast, not just because Nvidia has the best GPUs (it's about even) but because the primitives in CUDA are so much faster. Matrix multiplication on the same hardware is something like 3x faster with CUDA than with OpenCL or competing libraries, and using the neural network primitives is faster still. Intel needs to invest in good, low-level libraries so researchers can hack on their platform, build new things, etc. Ultimately I think researchers drive which platform gets widely used, since training takes so much longer than inference.
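To get a feel for how much tuned primitives matter, here's a rough illustration (plain NumPy on a CPU, not a rigorous benchmark, and the matrix sizes are arbitrary): the same matrix multiply written as a textbook loop versus dispatched to the tuned BLAS kernel behind np.dot. The gap the parent describes between CUDA's hand-tuned kernels and generic code on the same GPU is the same kind of effect.

    import time
    import numpy as np

    def naive_matmul(a, b):
        # Textbook triple loop: same math as a @ b, none of the tuning.
        n, k = a.shape
        _, m = b.shape
        out = np.zeros((n, m))
        for i in range(n):
            for j in range(m):
                for p in range(k):
                    out[i, j] += a[i, p] * b[p, j]
        return out

    a = np.random.rand(200, 200)
    b = np.random.rand(200, 200)

    t0 = time.perf_counter()
    c_naive = naive_matmul(a, b)
    t1 = time.perf_counter()
    c_blas = a @ b                     # dispatches to a tuned BLAS kernel
    t2 = time.perf_counter()

    print(f"naive: {t1 - t0:.3f}s  blas: {t2 - t1:.5f}s")
    print("same result:", np.allclose(c_naive, c_blas))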
No, researchers use whatever is fastest for development and most flexible. Otherwise we'd all be using Neon.
I agree, development speed is more important. Is there any benchmark that compares Neon to other systems?
IIRC, because of Winograd kernels, 3x3 convolutions are amazingly fast, and almost all nets have now switched to 3x3 convolutions.
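For anyone curious what that buys, here's a small NumPy sketch of the 1-D F(2,3) building block behind Winograd convolution (the 2-D 3x3 kernels nest the same transforms): two outputs of a 3-tap filter computed with 4 elementwise multiplies instead of 6, at the cost of a few cheap additions. The sample data below is arbitrary.

    import numpy as np

    # Winograd F(2,3): 2 outputs of a 3-tap correlation in 4 multiplies.
    BT = np.array([[1,  0, -1,  0],
                   [0,  1,  1,  0],
                   [0, -1,  1,  0],
                   [0,  1,  0, -1]], dtype=float)   # input transform
    G  = np.array([[1.0,  0.0, 0.0],
                   [0.5,  0.5, 0.5],
                   [0.5, -0.5, 0.5],
                   [0.0,  0.0, 1.0]])               # filter transform
    AT = np.array([[1, 1,  1,  0],
                   [0, 1, -1, -1]], dtype=float)    # output transform

    d = np.array([1.0, 2.0, 3.0, 4.0])   # 4 input samples
    g = np.array([0.5, -1.0, 2.0])       # 3 filter taps

    winograd = AT @ ((G @ g) * (BT @ d))            # 4 multiplies
    direct   = np.array([d[0:3] @ g, d[1:4] @ g])   # 6 multiplies

    print(winograd, direct)                         # identical results
    assert np.allclose(winograd, direct)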
I bought both NVIDIA and AMD. AMD has a big GPU patent portfolio as well. AMD trades at about 1/5th of NVIDIA's valuation and could be a takeover target for a bigger player trying to move into AI.
In my opinion, NVIDIA needs a moat. CUDA is a good start, but they need proprietary data that's hard to create. I thought they should buy Yahoo, which would give them a large and unique data set that NVIDIA users could tie into through an API.
You won't get anywhere with a single dataset. You need dozens or hundreds of them, and they are usually created using Amazon Mechanical Turk, so anyone with some money can create one. And they need to be commissioned by the AI researchers, because they know best what they want to study.
NVIDIA needs to become more efficient and build dedicated deep learning hardware, instead of graphics cards that just do computation faster than a CPU. If only they could create a small-form-factor AI processor for robotics that could do vision at 500+ frames per second. Take a look at this video from 7 years ago, which does 1 kHz vision: https://youtu.be/-KxjVlaLBmk?t=158
We need that.
This. Intel has a lot of catching up to do there. NVidia's drivers and libs are simply on a different level. For example, I've seen the same OpenGL code execute faster on an ancient Tegra 4 (mobile) platform than on an Intel Haswell with HD Graphics (which is supposed to beat the Tegra 4 by miles according to raw specs).
See OpenCV and TBB for examples (from Intel) of this strategy at work.
It really depends on how the market shapes up. You might see one software company rise to dominance where they interoperate with different platforms interchangeably, or you might see a single hardware vendor come to dominate a number of much smaller software vendors.
NVidia is the horse to bet on now, but it remains to be seen how they can expand beyond their current model of leaning heavily on their GPU technology.
As AI becomes more pervasive, you'll want to have it integrated into smaller, more power-conscious devices, as would be the case in robotics. Does NVidia have a solution here that scales down? Clearly they're focused on scaling up, as that's where the money is today.
>Does NVidia have a solution here that scales down? Clearly they're focused on scaling up, as that's where the money is today.
Yes, they have invested heavily in this. Every time they talk about their "car/auto segment" (which is all the time, if you listen to their investor calls), it's mostly about scale-down.
Cars, especially the sort that will need to do self-driving, will have massive batteries in them. I'm talking about Roomba-sized devices that can't power a full GPU's worth of gear.
They may have large batteries if they are electric cars, but it won't be for the GPUs.
The NVidia PX2 platform[1] - which is what the Tesla self-driving features use[2] - is available in a 10W config. I presume this isn't the full self-driving mode, but the Jetson TK1 can do full image tracking and recognition in less than 30W (and that is a 2-year-old platform).
[1] http://www.nvidia.com/object/drive-px.html
[2] https://blogs.nvidia.com/blog/2016/10/20/tesla-motors-self-d...
[3] http://developer.download.nvidia.com/embedded/jetson/TK1/doc...
Where did you get the nonsensical idea that they are going to use desktop GPUs in cars? The board they specifically designed for self driving cars merely needs a few watts for 500 GFLOPS. Why would this not be suitable for a "Roomba-sized device"?
https://m.f.ix.de/scale/geometry/1280/q50/imgs/18/1/6/4/4/6/...
Google packs their cars to the gills with hardware, so I'm just going based on what fully autonomous systems use today.
They don't necessarily have to scale down to get the benefits of AI inside smaller devices such as robots. There's no reason why robotic devices can't or won't be equipped with a variety of radios to facilitate communication.
The AI agent can easily be somewhere centralized where power isn't a big deal.
Then the robot stops being analogous to an animal with its own brain and starts being something more like an appendage.
Or maybe they'll be like those magical mops in Fantasia.
There are at least two good reasons you would have things running locally: latency-sensitive applications and privacy-sensitive applications.
There could come a day when you have to sign in to Facebook to use your car: if they get to harass you with advertising, the trip is free.
Optimistically it's your car; perhaps, instead, it's their car.
Oh god, imagine being in a self-driving car that does its computing in the cloud, and then you enter a tunnel.
Obviously cars are not really power- or space-limited, and they can't rely on connectivity. But it's funny to think about anyway.
I'm thinking more of a warehouse or factory that has a central rack server, with all the robots in the factory wirelessly connected to it.
People look at a humanoid robot and they see an entity that is similar to a human but made of silicone and steel instead of meat and bone. I think it's more accurate to compare the robots in this factory to the hands and eyes of a large disembodied rackmount brain.
There's the Jetson boards, which are ARM + an NVidia GPU wrapped into one small package.
That's an interesting device: http://www.nvidia.ca/object/jetson-tk1-embedded-dev-kit.html
Also, I can't remember the last time I saw a USB port jammed on sideways.
I think there are probably efficiencies to be gained by making chips specifically designed for TensorFlow rather than repurposing GPUs.
They should call it "State-of-the-ARTificial Intelligence". Teehee.