Show HN: A URL Permanence Service
purl.ly
Is having a Libya-based domain name really the best way to ensure a permanent URL structure?
You could use http://purl.org, which has been around since the 90s.
How do you reconcile offering a permanent URL for something with complying with DMCA takedowns for copyrighted material, when the end service may have removed the content but you continue to publish it?
There is already http://purl.org
See http://en.wikipedia.org/wiki/Persistent_Uniform_Resource_Loc... Also, this is a "protocol".
Sounds cool, but only as long as purl.ly itself is up. We’ve seen what happens with single-point-of-failure services like this when Twitter’s link shortener t.co was down.
Actually, if you notice, the purl.ly link has the original link after it. Even if purl.ly goes down, there's at least SOME reference to be tracked down. Relatively graceful, especially when compared to a WebCite link: http://www.webcitation.org/5IfzstWm1
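That also means recovery doesn't even need the service. Assuming the format really is just the original URL appended after the host (which is how the examples in this thread look, but that's a guess), you can strip the prefix back off yourself:

    from urllib.parse import urlsplit

    def recover_original(purl: str) -> str:
        # Guess: the purl is just http://purl.ly/<original-url>, so everything
        # after the leading slash is the original reference.
        parts = urlsplit(purl)
        embedded = parts.path.lstrip("/")
        if parts.query:
            embedded += "?" + parts.query
        if not embedded.startswith(("http://", "https://")):
            embedded = "http://" + embedded   # re-add a scheme if it was dropped
        return embedded

    print(recover_original("http://purl.ly/www.webcitation.org/5IfzstWm1"))
    # -> http://www.webcitation.org/5IfzstWm1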
This is a good idea in theory, but not in the form of a company. It would require something like a consortium in which Google, Microsoft, Twitter, Facebook, ... and a few others federate to create a service, foot the bill, and commit to keeping it up for as long as possible.
Plus there's already a service that does the same thing, is free, and is heavily relied upon: WebCite.
There's also the DOI system, which mostly solves this problem for more permanent material (research articles).
Infinite loops: http://purl.ly/tinyurl.com/qui3na
purl.ly -> tinyurl -> baconized purl.ly
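One possible guard, sketched in Python under the assumption that the service follows redirects before minting a purl (no idea whether it actually does): walk the redirect chain with a hop limit and refuse anything that passes back through purl.ly or another known shortener.

    from urllib.parse import urljoin, urlsplit
    import requests

    SHORTENER_HOSTS = {"purl.ly", "tinyurl.com", "bit.ly", "t.co"}  # illustrative list

    def safe_to_shorten(url: str, max_hops: int = 5) -> bool:
        # Reject URLs whose redirect chain loops through purl.ly or another
        # shortener, which is what produces the loop above.
        for _ in range(max_hops):
            host = (urlsplit(url).hostname or "").lower()
            if host in SHORTENER_HOSTS:
                return False
            resp = requests.head(url, allow_redirects=False, timeout=5)
            if not resp.is_redirect:
                return True
            url = urljoin(url, resp.headers.get("Location", ""))
        return False  # too many hops: treat as unsafe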
And here's a 'not found' message, even though the purl was created: http://purl.ly/news.ycombinator.com/threads?id=pg
Tried the service and got two errors first:
I cannot add a URL that does not have "http". I cannot type a URL that has "https".
Would be nice to be able to add https addresses and add them in any form.
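Something like this would cover both cases; just a sketch of the input normalization I'd expect, not what purl.ly actually does:

    from urllib.parse import urlsplit

    def normalize_input(raw: str) -> str:
        # Accept URLs typed in any form: bare hosts, http, or https.
        raw = raw.strip()
        if not urlsplit(raw).scheme:
            raw = "http://" + raw        # assume http when no scheme is given
        if urlsplit(raw).scheme not in ("http", "https"):
            raise ValueError("unsupported scheme")
        return raw

    print(normalize_input("https://news.ycombinator.com"))  # accepted as-is
    print(normalize_input("example.com/page"))              # -> http://example.com/page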
Also an idea: make it a web browser plugin so I can change the URL in the browser and add it to my link library. Then it's even better. I don't like detours.
Question: what happens if I purl.ly a URL once, then the content changes and I want to save the new content as well (different content, same URL)?
Anyway, I really like the concept, keep going!
Annelie @detectify
Interesting choice to use a Libyan domain for a service like this.
How is using a Libyan domain interesting here? Bit.ly is doing it.
Who? Bitly.com?
Perhaps Libya is as reliable as anywhere else in the world - it's all a matter of perspective. Ask The Pirate Bay guys how confident they would be using a .com. Still, the idea of putting a service that offers 'permanence' on a domain so far out of reach seems like a bad idea to me.
The nameservers themselves are here in the US, which will hopefully help. But you have a good point.
"The Libyan governing authority for .ly domains, NIC.ly, explained this week that domains that run afoul of the country's 'morality' laws are being taken offline"
http://arstechnica.com/business/2010/10/libya-beginning-to-p...
I'd say a lot has changed in Libya since 2010...
I don't like the redirect page: it's a large, heavy page with a forced delay and what looks like placeholders for a ton of ads.
It might be better packaged as something blogs and forums can automagically implement for a fee instead of trying to make money off ads.
I did something similar as a weekend project; I haven't checked up on it in a while, but it seems to still work: http://const.it/http://online.wsj.com/article/SB100014240529...
The original idea was just to provide a consistent link which would fall back to a cache when necessary and back to the original content for Reddit/HN-type traffic. Then it made sense to add some paywall-busting and readability functionality on top of it, and those features overshadowed the original concerns.
Thanks for all of the feedback everyone... it has been very exciting to actually "launch" something and get some feedback. You guys did a swell job of uncovering some bugs and edge cases. I'm going to keep pushing and at least get it working as advertised.
I did some research ahead of time and did come across purl.org, but had no idea about WebCite and a couple of the others. Yes, my project is basically the same as those.
Does this work as a single point of failure company? Who knows, but it's been fun.
If you have an interest in permanent identifiers, you might also be interested in the Archival Resource Key (ARK) standard https://wiki.ucop.edu/display/Curation/ARK and the EZID service http://n2t.net/ezid/ . Disclaimer: I work at the same digital library where the standard and the service are developed and maintained.
It's pretty quick and dirty... purl.ly links will detect a 404 at the destination and redirect you to the Google cache instead. Works great if that page is in the Google cache, but it may not be.
I'll get around to caching the full content of the destinations at the time of purl.ly creation next, and serve that if Google is missing it.
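For anyone curious, the fallback amounts to something like this; a simplified sketch, not the actual code, and the Google cache prefix shown is just the public webcache endpoint:

    import requests

    GOOGLE_CACHE = "https://webcache.googleusercontent.com/search?q=cache:"

    def resolve(original_url: str) -> str:
        # Redirect to the destination if it still answers, otherwise fall back
        # to the Google cache copy (which only helps if Google has the page).
        try:
            resp = requests.head(original_url, allow_redirects=True, timeout=5)
            alive = resp.status_code < 400
        except requests.RequestException:
            alive = False
        return original_url if alive else GOOGLE_CACHE + original_url

Caching the page at creation time, as mentioned, would replace that second branch with a locally stored copy.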
So it's basically WebCite.
Seems that some URLs with a query string give an error.
E.g. making a purl for http://www.reddit.com/top/?sort=top&t=hour generates http://purl.ly/www.reddit.com/top/?sort=top&t=hour, which gives a 404.
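My guess (without seeing the code) is that the query string isn't part of the request path, so reading the original URL back out of the path drops everything after the '?'. Percent-encoding the original URL before embedding it would sidestep that, at the cost of readability:

    from urllib.parse import quote, unquote

    PREFIX = "http://purl.ly/"

    def make_purl(original_url: str) -> str:
        # Encode the whole original URL so '?' and '&' survive as path characters.
        return PREFIX + quote(original_url, safe="")

    def read_purl(purl: str) -> str:
        return unquote(purl[len(PREFIX):])

    purl = make_purl("http://www.reddit.com/top/?sort=top&t=hour")
    print(purl)             # http://purl.ly/http%3A%2F%2Fwww.reddit.com%2F...
    print(read_purl(purl))  # http://www.reddit.com/top/?sort=top&t=hour

Escaping only the '?' and '&' would keep the embedded URL readable, which seems to be part of the appeal here.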
Looks like not many people are actually using it:
It launched with this Show HN...
Fair enough, I'm just surprised there are so few links. It looks like most people that have seen it haven't even tried it...
Looks similar to what archive.org is doing, except that the latter saves complete websites recursively...
Seems that cats are involved?