Ask HN: How to distribute a lot of images throughout multiple Universities

16 points by adezxc 3 years ago · 17 comments


Basically, I've been given coursework at my university to evaluate and start using a distributed file system for storing large amounts of crystal diffraction images. It would need to keep multiple copies of the files distributed across sites in case one of the servers goes down, and to be scalable, since the dataset will always be growing. I've looked into things like LOCKSS [1] and IPFS [2], but LOCKSS seems to limit itself to storing articles, and IPFS doesn't provide data reliability if one of the nodes goes down. Has anyone encountered a similar task, and what did you use for it?

[1] https://www.lockss.org/
[2] https://ipfs.tech/

zcw100 3 years ago

IPFS does provide data reliability through pinning services, a private cluster, or a cooperative cluster. It seems to be difficult to communicate how IPFS works in this regard, and there are a lot of misunderstandings about it. Some people want IPFS to be an infinite free hard drive in the sky, with automatic replication and persistence till the end of time (it is not). Then there are people who worry that "OMG, someone can just put evil content onto my machine and I have to serve it!" (it does not work that way).

IPFS makes it very easy to replicate content, but you don't have to replicate anything you don't want to. Resources cost money, so you either ask someone to do it for free and get whatever reliability they happen to offer, or you pay someone and get better reliability for as long as you keep paying.
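
To make that concrete, here's a minimal sketch (one way among several) of a single site pinning the dataset's root CID on its own Kubo node through the HTTP RPC API. The CID is a placeholder and the daemon address is the default local one:

    # Minimal sketch: each participating site runs its own Kubo (go-ipfs) daemon
    # and explicitly pins the dataset's root CID, so the blocks are copied locally
    # and survive garbage collection. Assumes a daemon on the default RPC port;
    # the CID below is a placeholder.
    import requests

    KUBO_RPC = "http://127.0.0.1:5001/api/v0"
    DATASET_CID = "bafy...placeholder"  # root CID of the image collection

    def pin_dataset(cid: str) -> dict:
        # Kubo's RPC API takes POST requests; pin/add fetches and pins the
        # whole DAG behind the CID onto this node.
        resp = requests.post(f"{KUBO_RPC}/pin/add",
                             params={"arg": cid, "recursive": "true"})
        resp.raise_for_status()
        return resp.json()

    def list_pins() -> dict:
        # Show what this node is currently committed to keeping.
        resp = requests.post(f"{KUBO_RPC}/pin/ls", params={"type": "recursive"})
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        print(pin_dataset(DATASET_CID))
        print(list_pins())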

jjgreen 3 years ago

This is private data, right? Maybe a private BitTorrent tracker with a few nodes that "grab everything" to ensure persistence. Never done it myself, but it might be a direction worth researching...
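
Roughly, one of those "grab everything" nodes could just be a long-running client that downloads the dataset torrent and then keeps seeding it. A sketch with the libtorrent Python bindings, assuming the .torrent and the private tracker already exist (paths are placeholders):

    # Rough sketch of a "grab everything" seed node: download the dataset
    # torrent, then keep seeding it indefinitely to provide redundancy.
    # The .torrent path and save directory are placeholders.
    import time
    import libtorrent as lt

    ses = lt.session()
    info = lt.torrent_info("crystal-images.torrent")   # torrent built for the dataset
    handle = ses.add_torrent({
        "ti": info,
        "save_path": "/data/mirror",                   # where the full copy lives
    })

    # Keep running: once the download finishes, the node stays up as a seed.
    while True:
        s = handle.status()
        print(f"{s.progress * 100:.1f}% complete, {s.num_peers} peers")
        time.sleep(30)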

brudgers 3 years ago

How much data do you have now?

How fast is it increasing?

What is your budget for hardware?

What is your budget for software?

What is your budget for development labor?

What is your budget for maintenance?

I mean the simplest thing that might work is talking to your university IT department...

...or calling AWS sales or another commercial organization specializing in these things.

The second most complicated thing you can do is to roll your own.

The most complicated thing you can do is to have someone else do it.

Good luck.

hannibal529 3 years ago

This is a simple task with NATS JetStream object storage https://docs.nats.io/nats-concepts/jetstream/obj_store/obj_w.... Just provision a JetStream cluster and an object store bucket. If you want to span the cluster over multiple clouds with a supercluster, that’s an option as well.
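
Roughly what the client side could look like with the nats-py client. This is a sketch from memory: the object-store method names and the cluster URL are assumptions worth checking against the nats-py docs, and the replica count is set on the bucket/stream configuration server-side, not in this code:

    # Hedged sketch, not a verified recipe: store diffraction images as objects
    # in a JetStream object-store bucket via nats-py. Method names are from
    # memory; replication across servers is configured on the bucket/stream.
    import asyncio
    import nats

    async def main():
        nc = await nats.connect("nats://nats.example.edu:4222")  # placeholder cluster URL
        js = nc.jetstream()

        # Create (or look up) the bucket that will hold the images.
        obs = await js.create_object_store("diffraction-images")

        # Upload one image and read it back to verify.
        with open("frame_0001.cbf", "rb") as f:   # placeholder filename
            await obs.put("frame_0001.cbf", f.read())

        result = await obs.get("frame_0001.cbf")
        print(len(result.data), "bytes retrieved")

        await nc.close()

    asyncio.run(main())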

DrStartup 3 years ago

Sounds like you’d want to set up a private, multi-org cloud storage system.

Something like https://min.io/ or similar. There are a dozen or so open-source / commercial S3-like object storage systems out there.
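
For a sense of what the client side looks like against any of these S3-compatible stores, here's a minimal boto3 sketch; the endpoint, credentials, and bucket name are placeholders, and cross-site replication would be configured on the server side rather than in this code:

    # Minimal sketch: talking to a self-hosted S3-compatible store (MinIO or
    # similar) with boto3. Endpoint, credentials, and bucket are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://minio.example.edu:9000",   # your object-store endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    bucket = "diffraction-images"
    s3.create_bucket(Bucket=bucket)

    # Upload one image and list what's in the bucket.
    s3.upload_file("frame_0001.cbf", bucket, "run42/frame_0001.cbf")
    for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
        print(obj["Key"], obj["Size"])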

I have a friend that does this kind of mission critical infrastructure for research universities.

DM me if you’d like.

mikewarot 3 years ago

If you're replicating one primary file system to many secondary systems, MARS might be helpful[1]. It was developed by 1&1, who hosts my personal website, along with petabytes of other people's stuff.

[1] https://github.com/schoebel/mars

rom16384 3 years ago

I was thinking about Syncthing (https://github.com/syncthing/syncthing), but it's a file synchronization tool, meaning every node would have a full copy, and it would propagate deletes from one node to another.

Quequau 3 years ago

Isn't rsync designed for use cases like this?

Gigachad 3 years ago

How much data? Why not chuck it on S3 or Dropbox?

  • adezxc (OP) 3 years ago

    Terabytes, later growing into tens of terabytes, as a lot of those images either got deleted or stay on specific university machines. I actually don't know; I'd guess it's so there are no accidents where one user decides to delete or change the contents, since the files could then be compared against a hash.

    • tlonny 3 years ago

      I would make sure I properly understood why S3 isn't a viable option before going down the rabbit hole of trying to roll your own distributed file store.

      > I'd guess it's so there are no accidents where one user decides to delete or change the contents, since the files could then be compared against a hash

      This risk can be easily mitigated using S3 permissions/access controls.
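
      For example (just a sketch, with a placeholder bucket name and a hypothetical "storage-admin" role), you could enable versioning and deny s3:DeleteObject for everyone except that role:

          # Sketch: keep every object version and block deletes for non-admins.
          # Bucket name and admin role ARN are placeholders.
          import boto3, json

          s3 = boto3.client("s3")
          bucket = "diffraction-images"

          # Versioning means an accidental overwrite or delete can be rolled back.
          s3.put_bucket_versioning(
              Bucket=bucket,
              VersioningConfiguration={"Status": "Enabled"},
          )

          # Deny object deletion for everyone except the admin role.
          policy = {
              "Version": "2012-10-17",
              "Statement": [{
                  "Effect": "Deny",
                  "Principal": "*",
                  "Action": "s3:DeleteObject",
                  "Resource": f"arn:aws:s3:::{bucket}/*",
                  "Condition": {"StringNotLike": {
                      "aws:PrincipalArn": "arn:aws:iam::*:role/storage-admin"
                  }},
              }],
          }
          s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))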

      • Gigachad 3 years ago

        Agreed. Tens of terabytes is fine in S3. Sure, it’s expensive, but so is every other solution. S3 removes the headaches for you.

toomuchtodo 3 years ago

BitTorrent or Ceph?

hooverd 3 years ago

Try asking your PI?
