Vectordash: GPU instances for deep learning

vectordash.com

45 points by frlnBorg 8 years ago · 11 comments

jsty 8 years ago

What's your licensing situation with Nvidia as regards their prohibition [1] on datacenter deployment for 'consumer' cards?

[1] https://news.ycombinator.com/item?id=16002068

  • yorwba 8 years ago

    It does not sound like they are deploying in datacenters: https://vectordash.com/hosting/

    That said, the license also has this:

    No Sublicensing or Distribution. Customer may not sell, rent, sublicense, distribute or transfer the SOFTWARE; or use the SOFTWARE for public performance or broadcast; or provide commercial hosting services with the SOFTWARE.

    which seems to prohibit Vectordash's individual hosts from participating.

    • jsty 8 years ago

      Ah right, hadn't seen that. Thanks! If the vectordash team are reading, I'd make the nature of the service a bit clearer to potential users. There's no mention I can find outside the 'hosting' page that these aren't your machines.

  • disgruntledphd2 8 years ago

    I have no real idea, but sometimes it's better to ask forgiveness rather than permission (I'm not in any way associated with this service).

ctlaltdefeat 8 years ago

If I understand correctly, the instances available are containerized instances that users run (i.e., the system matches hosts to guests and takes a cut).

Beyond being dangerous on multiple levels, there doesn't seem to be any guarantee of storage or network bandwidth/traffic. Having a multi-TFLOP GPU to train with is hardly useful if you can't get the training data on the device in a reasonable amount of time, or hold that data in local storage.
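To put the bandwidth concern in rough numbers, here's a back-of-the-envelope calculation (the 100 Mbps figure below is a hypothetical residential host uplink, not anything Vectordash advertises):

```python
def transfer_time_hours(dataset_gib: float, uplink_mbps: float) -> float:
    """Hours needed to move a dataset onto the instance at a given link speed."""
    bits = dataset_gib * 2**30 * 8        # dataset size in bits
    seconds = bits / (uplink_mbps * 1e6)  # 1 Mbps = 10^6 bits per second
    return seconds / 3600

# A 100 GiB training set over a 100 Mbps link:
print(round(transfer_time_hours(100, 100), 1))  # ~2.4 hours
```

So even a healthy consumer connection can spend hours just staging data before the multi-TFLOP GPU does any work.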

  • Samin100 8 years ago

    We ensure each instance has ample storage (min 50GB), internet speeds, and hardware specs such that the GPU is the bottleneck! If a user isn’t satisfied with an instance, then there’s no charge whatsoever :)

jcims 8 years ago

With more GPU-in-the-cloud offerings coming online, is there a utility to dump GPU memory to see if your cloud provider has wiped it between customers?

  • Samin100 8 years ago

    I just read a pretty interesting paper on this recently - we actually load/unload the drivers for every instance, which in turn also wipes the GPU’s memory. There’s a tool I wrote to test exactly this, though I haven’t uploaded it to GitHub yet. Might do that sometime this weekend.
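Until such a tool is published, the detection side is straightforward to sketch yourself: allocate a fresh device buffer, copy it back to the host *without* writing to it first (e.g. `cudaMalloc` followed by `cudaMemcpy`), and check whether the readback is all zeros. A minimal sketch of that last step in Python; `looks_wiped` is a hypothetical name, not part of any published tool:

```python
def looks_wiped(buf: bytes, max_nonzero_fraction: float = 0.001) -> bool:
    """Heuristic: a freshly allocated, properly wiped buffer should read back
    as (nearly) all zeros; leftover tensors or framebuffer data will not."""
    if not buf:
        return True
    nonzero = sum(1 for b in buf if b != 0)
    return nonzero / len(buf) <= max_nonzero_fraction

# A zeroed buffer passes; one full of residual data does not.
print(looks_wiped(bytes(4096)))              # True
print(looks_wiped(bytes(range(256)) * 16))   # False
```

The small nonzero tolerance allows for driver bookkeeping bytes; tune it to taste.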
