This post is intended as a tongue-in-cheek response to (or attack on) this recent blog post. Like the original poster (if not directly inspired by them), I have previously accumulated a collection of links, which was and still is publicly available on my student website.
This is how the page worked: the main file links.lisp
contains a large s-expression that corresponds to a tree with
categories as nodes and links as leaves. Running the script
evaluates that expression into HTML text. If I wanted to add a link, I had to
find the correct category (or create it), add an expression for
the type of link that I wanted, save the file, and run my
favourite Lisp interpreter over it.
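To make that concrete, here is a minimal self-contained sketch of the idea in Common Lisp. The link and category helpers below are my own stand-ins, not the actual definitions from links.lisp, and the example links are placeholders:

```lisp
;; Sketch only: categories are nodes, links are leaves, and
;; evaluating the tree yields HTML text. The real helpers in
;; links.lisp surely look different.
(defun link (&key url title description)
  ;; One leaf: an <li> with an anchor and an optional description.
  (format nil "<li><a href=\"~a\">~a</a>~@[ &ndash; ~a~]</li>"
          url title description))

(defun category (name &rest children)
  ;; One node: a heading followed by the list of its children.
  (format nil "<section><h2>~a</h2><ul>~{~a~}</ul></section>"
          name children))

;; Evaluating this expression produces the HTML for one category.
(category "Programming"
  (link :url "https://example.org/sbcl"
        :title "SBCL"
        :description "A high-performance Common Lisp compiler.")
  (link :url "https://www.w3.org/TR/xslt-30/"
        :title "XSLT 3.0 specification"))
```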
What I like about this approach is that each link can have a
:description field that is intended to contain some explanatory
text. That field is just a string, though. Including markup was
possible, but the easy thing to do was clearly to give a plain
string, or no description at all.
After graduating, I wanted to leave that page mostly as-is, so I decided to begin anew with my collection. This did not lead me to abandon all reason and implement a ridiculously complicated CGI solution, however. Instead, my new link collection relies on tried and tested web technologies, namely XML and XSLT.
Now I have a single file, links.xml, which contains all the
data and metadata. That file simply sits on the web server; no
dynamic content needed! To render the XML into nice semantic
HTML, the browser uses its built-in XSLT processor. Sadly, I
may not be able to keep a setup this simple for long: Google
intends to remove support for this long-standing web standard.
If that happens, I will probably just render to an HTML file,
host that, and be sad.
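For illustration, here is a minimal sketch of how such a setup fits together. The element names and the stylesheet file name links.xsl are assumptions on my part, not necessarily what my actual files use; the key ingredient is the xml-stylesheet processing instruction, which tells the browser which transform to apply:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical links.xml: the processing instruction below makes
     the browser apply links.xsl before displaying the document. -->
<?xml-stylesheet type="text/xsl" href="links.xsl"?>
<links>
  <category name="Programming">
    <link href="https://example.org/sbcl" title="SBCL">
      <description>A high-performance Common Lisp compiler.</description>
    </link>
    <link href="https://www.w3.org/TR/xslt-30/"
          title="XSLT 3.0 specification"/>
  </category>
</links>
```

A matching links.xsl (XSLT 1.0, the version browsers implement) might turn that tree into semantic HTML like so:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>

  <!-- Root: wrap everything in a bare HTML page. -->
  <xsl:template match="/links">
    <html>
      <body>
        <h1>Links</h1>
        <xsl:apply-templates select="category"/>
      </body>
    </html>
  </xsl:template>

  <!-- Each category becomes a heading plus a list of its links. -->
  <xsl:template match="category">
    <h2><xsl:value-of select="@name"/></h2>
    <ul>
      <xsl:apply-templates select="link"/>
    </ul>
  </xsl:template>

  <!-- Each link becomes a list item; the description is optional. -->
  <xsl:template match="link">
    <li>
      <a href="{@href}"><xsl:value-of select="@title"/></a>
      <xsl:if test="description">
        <xsl:text> </xsl:text>
        <em><xsl:value-of select="description"/></em>
      </xsl:if>
    </li>
  </xsl:template>
</xsl:stylesheet>
```

Opening links.xml in a browser then applies the transform client-side; the server only ever serves two static files.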