Show HN: Archive websites from the command line (github.com)
At first glance I love this, and I'm looking forward to putting it through its paces. I often use wget to back up local copies of websites, and it's been frustrating having no easy/automatic system that (a) keeps basic information about when the snapshot was taken and (b) lets me incrementally relate that snapshot to a network of historical info.
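For context, my current invocation is roughly the following (all standard wget flags; the URL is a placeholder):

    # recursive mirror, rewrite links for local browsing, grab page assets,
    # and don't wander up to parent directories
    wget --mirror --convert-links --page-requisites --no-parent https://example.com/

That gets me the files, but nothing ties a given run to a timestamp or to earlier runs of the same site.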
Thanks for checking out the project. You can also query the underlying database at ~/.erised/erised.sqlite3 if you prefer SQL over JSON strings.
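The stock sqlite3 CLI is enough to poke around, e.g. (the table and column names in the second command are illustrative only; run .schema to see the real ones):

    # dump the schema to see what tables exist
    sqlite3 ~/.erised/erised.sqlite3 ".schema"
    # illustrative query; substitute the actual table/column names
    sqlite3 ~/.erised/erised.sqlite3 "SELECT url, created_at FROM snapshots;"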
Cool.
How does it work? What's under the hood?
It uses Electron under the hood.
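Roughly this pattern, i.e. load the page in a hidden window and save it to disk (a minimal sketch using Electron's documented webContents.savePage API, not the project's exact code; the URL and output path are placeholders):

    const { app, BrowserWindow } = require('electron')

    app.whenReady().then(async () => {
      // hidden window: we only need the renderer, not a visible UI
      const win = new BrowserWindow({ show: false })
      // placeholder URL; in practice this comes from the CLI arguments
      await win.loadURL('https://example.com')
      // 'HTMLComplete' saves the HTML along with its resources
      await win.webContents.savePage('/tmp/snapshot.html', 'HTMLComplete')
      app.quit()
    })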
OK, but does it archive the whole website or just the one page at the given URL?
It only archives the one page.