Never Ever Use Content Addressable Storage

frederic.vanderessen.com

1 point by fvdessen 3 months ago · 6 comments

taylodl 3 months ago

This paragraph explains the problem:

So when a user wants to remove a CAS-addressed document, before really deleting it you need to detect if it's the last reference. This is not easy to do, it is in fact much harder to do correctly than eating the cost of storing duplicate files.

And this paragraph is the purported solution:

And usually when CAS is considered as a solution, it's to solve the need of deduplicating files to save on storage. But even there, the good solution is to give files their own internal uuids as storage keys, store its hash alongside, and generate external uuids for each file upload, then use refcounts to handle the final delete.
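As I read it, the proposed scheme boils down to something like this toy sketch (single process, in memory; every name here is illustrative, not the author's code):

    import hashlib
    import uuid

    class FileStore:
        """Toy dedup store: internal uuids as storage keys, the hash kept
        alongside, an external uuid per upload, refcounts for the final delete."""

        def __init__(self):
            self.blobs = {}    # internal_id -> [content, sha256, refcount]
            self.by_hash = {}  # sha256 -> internal_id
            self.uploads = {}  # external_id -> internal_id

        def upload(self, content: bytes) -> str:
            digest = hashlib.sha256(content).hexdigest()
            internal_id = self.by_hash.get(digest)
            if internal_id is None:          # first copy of this content
                internal_id = str(uuid.uuid4())
                self.blobs[internal_id] = [content, digest, 0]
                self.by_hash[digest] = internal_id
            self.blobs[internal_id][2] += 1  # one reference per upload
            external_id = str(uuid.uuid4())  # what the caller gets back
            self.uploads[external_id] = internal_id
            return external_id

        def delete(self, external_id: str) -> None:
            internal_id = self.uploads.pop(external_id)
            blob = self.blobs[internal_id]
            blob[2] -= 1
            if blob[2] == 0:                 # last reference: drop the bytes
                del self.by_hash[blob[1]]
                del self.blobs[internal_id]

In a real system those three maps live in a database, and every mutation above has to happen inside one transaction.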

The problem is that this solution reframes the problem but doesn't solve it. It still requires:

- Accurate reference counting

- Careful handling of deletes

- Synchronization across systems

Which is all part of the original problem.

At the end of the day, you can't safely and scalably do distributed deletes with refcounts unless you centralize the operation, which kills scalability. There are work-arounds, such as marking the file as unreferenced and then running a garbage collector to delete unreferenced files, but the author doesn't discuss them.
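One such workaround, continuing the toy sketch above (the tombstone map and the grace period are my own illustrative names): instead of deleting inline, mark the blob unreferenced and let a background sweep do the real delete.

    import time

    def mark_unreferenced(store: FileStore, external_id: str) -> None:
        """Drop the reference but keep the bytes; just remember when the
        blob went unreferenced."""
        internal_id = store.uploads.pop(external_id)
        blob = store.blobs[internal_id]
        blob[2] -= 1
        if blob[2] == 0:
            store.tombstones = getattr(store, "tombstones", {})
            store.tombstones[internal_id] = time.time()

    def sweep(store: FileStore, grace_seconds: float = 3600.0) -> None:
        """Garbage collector: delete blobs unreferenced for the whole grace
        period; skip any that picked up a new reference in the meantime."""
        now = time.time()
        tombstones = getattr(store, "tombstones", {})
        for internal_id, dead_since in list(tombstones.items()):
            blob = store.blobs[internal_id]
            if blob[2] > 0:                  # resurrected by a later upload
                del tombstones[internal_id]
            elif now - dead_since >= grace_seconds:
                del store.by_hash[blob[1]]
                del store.blobs[internal_id]
                del tombstones[internal_id]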

  • fvdessenOP 2 months ago

    Indeed, a centralized database with transactions was implied in the solution. You're right to point out that this is not always available. I did not talk about it simply because the software I worked on never reached a scale beyond what a centralized database can handle. I will edit the article to make this clearer.
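    To make that concrete: with a single transactional database, the whole delete path can be one atomic unit, roughly like this (sqlite3 purely as an example; the table and column names are illustrative, not from the article):

        import sqlite3

        def delete_upload(conn: sqlite3.Connection, external_id: str) -> None:
            # One transaction: removing the reference, decrementing the
            # refcount, and the conditional blob delete all commit together
            # or not at all.
            with conn:
                row = conn.execute(
                    "SELECT internal_id FROM uploads WHERE external_id = ?",
                    (external_id,),
                ).fetchone()
                if row is None:
                    return
                (internal_id,) = row
                conn.execute(
                    "DELETE FROM uploads WHERE external_id = ?",
                    (external_id,),
                )
                conn.execute(
                    "UPDATE blobs SET refcount = refcount - 1 WHERE internal_id = ?",
                    (internal_id,),
                )
                conn.execute(
                    "DELETE FROM blobs WHERE internal_id = ? AND refcount <= 0",
                    (internal_id,),
                )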

FrankWilhoit 3 months ago

Hash content+provenance. Done.

  • taylodl 3 months ago

    One of the goals is to avoid duplicate storage for the same item. That's what makes deletion difficult - how do you know you hold the last reference to the item in storage?

    • FrankWilhoit 3 months ago

      Provenance includes time -- and refers to security context. Everything has to match in order to assess eligibility for dedup. The "last reference" by whom? Sometimes there might be duplicate content, with two copies visible to disjoint audiences: can't dedup that. Of course it is an architecture smell.
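      Concretely, the dedup key has to be built from all of it, something like this (a sketch; the exact fields are illustrative):

          import hashlib

          def dedup_key(content: bytes, security_context: str, provenance: str) -> str:
              # Dedup-eligible only if content, security context, and provenance
              # all match; copies visible to disjoint audiences get different keys.
              h = hashlib.sha256()
              h.update(content)
              h.update(b"\0" + security_context.encode())
              h.update(b"\0" + provenance.encode())
              return h.hexdigest()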

    • FrankWilhoit 3 months ago

      Further to this, the selling point for the relational algebra and the normal forms was storage efficiency, but that was at a time when not only was storage vastly more expensive in relative terms, but also OLTP was barely even foreseeable. The rationale for deduplication emerges from the ideal behind the normal forms, which is that of a systemwide data dictionary, in which no literal ever occurs more than once.

      The security implications of OLTP were never understood; the older paradigm, in which the DBAs were the only role that would ever touch the data, was never explicitly repudiated, so it continued to have mindshare among architects and users.

      These two things taken together -- save storage above all, and the implicit union of security contexts -- led to the universal antipattern of overloading all lifecycle stages of a business object onto a single table, with the "status" of a particular record indicated by a union discriminator code.

      As you know, I always use the example of invoices -- unapproved, then approved -- because of that example's extreme simplicity, and because of its obvious, immediate connection to cash going out the door. And as you also know, no one ever "gets it".

      But to rehearse (NB. we are in an AP context, not an AR context), accounting controls require separation of roles between (A) the creation of an invoice, (B) the approval of an invoice, and (C) its subsequent processing into a payment. What that means is that a newly-received invoice, an approved-but-not-paid invoice, and a paid invoice, are three different types.

      Now shall those three types be overloaded onto one database table? The computing-resources perspectives of the 1950s say "Yes, please!" And as long as a trusted super-role -- the DBAs -- is responsible for the integrity of the batch processes that create the reports that are then routed exclusively to the people who approve the invoices, and the other reports that subsequently go to the people who cut the checks, oh for God's sake don't make me repeat myself like this.

      You see the problem: in an OLTP world, this all falls to sh1t. Suppose you keep a timestamp-last-touched and a user-ID-last-touched-by? (Whisper it only: some systems don't; a lot of the others only keep one.) Does that give you separation of roles? NO: because everybody in all the roles has to have rights on the invoice table. So the accounting controls have to be satisfied some completely other way.

      The three (in our toy example) kinds of invoices are three different types and therefore they need to be in three different tables, each with its own access rights and its own audit trail. If you do not do this, then Murphy's Law and Occam's Razor, for once in magnificent agreement, both say that you are cutting wild checks. Do you not care? Why do you not care? "That's what risk insurance is for"? Okay, we give up; but you are still cutting wild checks.
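      To make the three-types point concrete, a toy sketch (plain Python stand-ins for the three tables; every name here is illustrative):

          from dataclasses import dataclass
          from datetime import datetime

          # Three lifecycle stages as three distinct types (stand-ins for three
          # tables, each with its own access rights and audit trail), instead of
          # one table with a status discriminator.

          @dataclass(frozen=True)
          class ReceivedInvoice:
              invoice_id: str
              vendor: str
              amount: float
              received_by: str

          @dataclass(frozen=True)
          class ApprovedInvoice:
              invoice: ReceivedInvoice
              approved_by: str
              approved_at: datetime

          @dataclass(frozen=True)
          class PaidInvoice:
              approval: ApprovedInvoice
              paid_by: str
              paid_at: datetime

          def approve(inv: ReceivedInvoice, approver: str) -> ApprovedInvoice:
              if approver == inv.received_by:   # creator may not approve
                  raise PermissionError("creator cannot approve their own invoice")
              return ApprovedInvoice(inv, approver, datetime.now())

          def pay(appr: ApprovedInvoice, payer: str) -> PaidInvoice:
              if payer == appr.approved_by:     # approver may not cut the check
                  raise PermissionError("approver cannot also issue the payment")
              return PaidInvoice(appr, payer, datetime.now())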

      The one-type-per-process-step-and-one-table-per-type model can, of course, be implemented in such a way as to minimize duplication; but (A) there is a tradeoff against performance, and (B) the architectural tradition that would enable this does not exist, because that path was not taken back when it was time to take it.

      As to (A), we are not addressing those who think that it matters how fast you get the wrong answer.

      As to (B), we do not have a time machine, and still less do we have the ability to convince the vendors of enterprise software that they have been doing it wrong for a lifetime.

      But you're still cutting wild checks.
