New Amazon EC2 GPU Instance Type (phx.corporate-ir.net)
There's more technical info in my post at http://aws.typepad.com/aws/2013/11/build-3d-streaming-applic...
Very weird that they don't use the Amazon domain for that, and yet it looks exactly like Amazon.
Teaching consumers bad habits.
It appears to be a service hosted by Thomson Reuters:
http://corporatesolutions.thomsonreuters.com/investor-relati...
Many more examples (including several large companies) can be found at:
https://www.google.com/search?q=inurl:http://phx.corporate-i...
That's a really good point, and one that I will be bringing up with my manager immediately after re:Invent (I am part of the AWS team).
If I remember correctly, it has to do with disclosure requirements.
Corporate-IR (or whoever runs that domain) meets the criteria/is authorized for disclosure of information to investors.
Other companies use them too, for example NVIDIA: http://phx.corporate-ir.net/phoenix.zhtml?c=116466&p=irol-ir...
Yeah, think of something like earnings reports. It's very important that no one gets early access, and very useful to have a third party handle it so you can prove that no one has early access. And if the third party screws it up, they get investigated by the SEC, not you.
I particularly like how http://phx.corporate-ir.net/ does a 302 to http://www.ccbn.com/ which doesn't even go anywhere.
I saw the previous developer of the CCBN (Corporate Communications Broadcast Network) pop up in the HN comments last time this was mentioned, to explain its stagnation and mishandling after being bought by Thomson Reuters. Can't find it now myself; anyone else have the link?
This one? https://news.ycombinator.com/item?id=5787892
Interestingly, it seems like they recently sold it to NASDAQ: http://corporatesolutions.thomsonreuters.com/investor-relati...
(OP here) - I can't edit the link, sadly. Should've linked to Jeff's AWS blog post.
How do the GPUs on this compare with NVidia desktop GPUs? Anyone know?
Also, very exciting that they're supporting GPU cloud rendering - that's going to be big for 3D.
Via Jeff Barr: NVIDIA GRID™ (GK104 "Kepler") GPU (Graphics Processing Unit), 1,536 CUDA cores and 4 GB of video (frame buffer) RAM.
Thanks!
My experience is that graphics card stats are a decidedly slippery fish as far as comparison goes.
However, a quick bit of Googling implies that this is almost identical, at least on paper, to a GeForce GTX 770 or 680.
http://www.geforce.co.uk/whats-new/articles/introducing-the-...
Unfortunately, without knowing more details (clock speed, memory bandwidth) it's hard to say more.
Guess someone (possibly me) needs to benchmark 'em. :)
UPDATE - excellent info further down this thread: https://news.ycombinator.com/item?id=6678744
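In case anyone wants a starting point: here's the kind of quick-and-dirty throughput check I'd begin with, a minimal PyOpenCL sketch (assumes the pyopencl and numpy packages; the kernel and buffer sizes are placeholders, not a serious benchmark):

    # Rough effective-bandwidth check with PyOpenCL; numbers are illustrative only.
    import time
    import numpy as np
    import pyopencl as cl

    n = 16 * 1024 * 1024                         # 16M floats per buffer
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)

    ctx = cl.create_some_context()               # pick the K520 if prompted
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
    __kernel void saxpy(__global const float *a,
                        __global const float *b,
                        __global float *out) {
        int i = get_global_id(0);
        out[i] = 2.0f * a[i] + b[i];
    }
    """).build()

    start = time.time()
    for _ in range(100):
        prg.saxpy(queue, (n,), None, a_g, b_g, out_g)
    queue.finish()                               # wait for all 100 kernel launches
    elapsed = time.time() - start

    # 3 float arrays touched per launch (2 reads + 1 write), 100 launches.
    print("%.1f GB/s effective bandwidth" % (100 * 3 * a.nbytes / elapsed / 1e9))

Memory bandwidth tends to transfer between cards better than peak FLOPS, which is why I'd start there.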
> making it ideally suited for video creation services, 3D visualizations, streaming graphics-intensive applications ...
And, presumably, cracking hashes!
I've never used Amazon EC2, but with this kind of application, I might have to give it a try. Buying a $300 graphics card just to try some GPU programming is ridiculous.
You don't need to buy a $300 graphics card to experiment with GPU programming.
The current and previous generation Intel CPUs (Haswell and Ivy Bridge, respectively) have on-die GPUs which support OpenCL: http://software.intel.com/en-us/articles/intel-sdk-for-openc...
AMD's APUs are quite cheap (~$100) CPU+GPU designs similar to those in the upcoming PS4 and Xbox One (though the retail APUs are somewhat less powerful). They've been more or less designed specifically around the needs of a heterogeneous OpenCL application.
Finally, the last several generations of Nvidia cards all support both CUDA and OpenCL; the newer cards do support additional features, though. You should be able to pick up a recent low-end Nvidia GPU for roughly $100.
The new g2.2xlarge instances are $0.650/hour, and the existing cg1.4xlarge are $2.100/hour; so it may make sense to experiment on AWS a bit, then buy your own card for long-term use if you decide to spend more time doing GPU programming.
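And if you just want to see what OpenCL devices a box already has (integrated GPU included), a few lines of PyOpenCL will list them; this is only a sketch, assuming the pyopencl package is installed:

    # List every OpenCL platform and device visible on this machine.
    import pyopencl as cl

    for platform in cl.get_platforms():
        print(platform.name)
        for device in platform.get_devices():
            print("    %s -- %d compute units, %.1f GB global memory"
                  % (device.name, device.max_compute_units,
                     device.global_mem_size / 1e9))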
Sadly, Intel's integrated-GPU OpenCL still doesn't support Linux, and only just started supporting OS X in 10.9 Mavericks[1]. Usually Intel's Linux GPU support is great; I don't know why this is different.
(Intel do have a Linux OpenCL implementation for Xeon CPU cores and Xeon Phi coprocessor[2], which doesn't help me much. On-CPU OpenCL is fine but hardly faster than regular CPU code, and Phi coprocessors aren't very common currently.)
[1] http://forums.macrumors.com/showthread.php?t=1620203
[2] http://software.intel.com/en-us/vcsource/tools/opencl
I take it that you don't play video games on your desktop machine, then.
As I understand it, scrypt is designed to mitigate this: https://en.wikipedia.org/wiki/Scrypt
Except the interface to scrypt is kinda weird so nobody writes sane library bindings for it. (I tried once. I didn't get very far.)
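For reference, the awkwardness is mostly the three tuning parameters and how they interact. Newer Python 3 releases do expose scrypt in hashlib, which at least nails down one sane binding; a minimal sketch (the parameter values are just common illustrative choices, not a recommendation):

    # scrypt key derivation via the standard library (newer Python 3 only).
    import hashlib, os

    salt = os.urandom(16)
    key = hashlib.scrypt(b"correct horse battery staple",
                         salt=salt,
                         n=2**14,    # CPU/memory cost; must be a power of two
                         r=8,        # block size
                         p=1,        # parallelization factor
                         dklen=32)   # derived key length in bytes
    print(key.hex())

The memory-hardness comes from that N*r term: roughly 128*N*r bytes of working set per hash, which is exactly what makes wide GPU parallelism expensive.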
GPUs were rendered useless by ASIC miners, no?
EDIT: Sorry for jumping on the Bitcoin hype train too soon! Many other uses for cracking hashes.
For bitcoins, yes. I believe OP is talking about password hash cracking, a la oclHashcat-plus: http://hashcat.net/oclhashcat-plus/
Aha, you're absolutely right. Shame "cracking hashes" is synonymous with Bitcoin in my mind.
Let's go mining.
I'm trying Folding@Home on it now. Looks like it might not recognize the GPU.
    22:42:58:WU02:FS00:0x15:GPU memtest failure
    22:42:58:WU02:FS00:0x15:
    22:42:58:WU02:FS00:0x15:Folding@home Core Shutdown: GPU_MEMTEST_ERROR
    22:42:58:WU02:FS00:0x15:Starting GUI Server
    22:42:59:WARNING:WU02:FS00:FahCore returned: GPU_MEMTEST_ERROR (124 = 0x7c)
If you are confident that this should be working, post a note to the EC2 forum so that we can investigate.
With stuff like this, it looks like the devices we use could become mere streaming clients in the future, needing little processing power but excellent network connectivity.
That goes a bit against the trend in web development of moving much of the processing to the client side, so I wonder where this will go.
Really high-performance streaming of apps/games could reverse the trend of making everything browser-based, in favor of streamed native apps.
I work on some OpenGL software that renders slideshows, and this is precisely what we need. We've used the bigger cg1.4xlarge nodes in the past, but they are very expensive for what we're doing. The lower price on this (65¢/hr instead of $2.40) is going to be much more manageable for us.
Sounds awesome. Send me a link when you have it working!
This is huge beyond graphics: new levels of performance can be achieved with GPGPU for data-intensive startups. I would love to see someone build a company around this.
Unfortunately the GPU used in g2.2xlarge instances isn't good for double-precision calculations.
Why is that?
It's optimized for 3D and CAD, not for HPC: the GK104 chip's double-precision throughput is only a small fraction (1/24) of its single-precision rate, unlike the HPC-oriented GK110/Tesla parts.
Any idea whether this would make a decent Bitcoin miner?
The bitcoin network difficulty is rising so fast that even the first-gen ASICs are becoming obsolete.
For example, if you have a good Radeon HD 7970, you can get about 0.8 GH/s. Based on the rate of difficulty increase, the 7970 would mine about 0.02 BTC in all of November 2013, 0.01 BTC in December 2013, and < 0.01 BTC/month after that.
For various reasons Nvidia cards are slower at BTC mining than AMD. The fastest Nvidia card, the Tesla S2070, can only hash about 0.750 GH/s.
Even 60GH/s ASIC miners will be earning < 0.10 BTC per month by March 2014. In August 2013 a 60GH/s miner would make ~0.8 BTC PER DAY. That's how quickly the difficulty is increasing.
At this point no GPU would make a decent bitcoin miner, except as a hobby.
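To put rough numbers behind that, expected yield is just your hashrate against the hashes the whole network needs per block; the difficulty and reward below are ballpark late-2013 figures, purely for illustration:

    # Expected BTC yield for a given hashrate (all figures approximate).
    hashrate   = 0.8e9      # 0.8 GH/s, roughly a Radeon HD 7970
    difficulty = 6.1e8      # ballpark network difficulty, Nov 2013
    reward     = 25.0       # BTC per block at the time

    hashes_per_block = difficulty * 2**32
    btc_per_day = hashrate * 86400 / hashes_per_block * reward
    print("%.4f BTC/day, %.2f BTC/month" % (btc_per_day, btc_per_day * 30))
    # -> roughly 0.0007 BTC/day, ~0.02 BTC/month, and falling as difficulty climbs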
I'd guess it'd hit about ~75% of a GTX 680 at this task.
What makes you think that? Specs look pretty similar to a 680.
800 MHz core clock of each K520 GPU versus 1058 MHz boost clock of GTX 680...
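Which is basically just the clock ratio, since the core counts are the same; a rough estimate that ignores boost behavior and memory differences:

    # Same 1536 CUDA cores on both chips, so scale roughly by clock speed.
    k520_clock = 800       # MHz, per GK104 GPU on the GRID K520
    gtx680_boost = 1058    # MHz, GTX 680 boost clock
    print("~%d%% of a GTX 680" % round(100.0 * k520_clock / gtx680_boost))  # ~76%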
Fantastic - that's exactly the info I've been looking for. Thanks.
Can they preload http://wiki.postgresql.org/wiki/PGStrom ?