BigFAT – Backward compatible FAT extension for unlimited file size
I like the idea. Making it backwards compatible with FAT means that, in principle, regular FAT filesystem implementations could be transparently extended to support big fat files (hehe).
However, reading the spec, it doesn't look fully backwards compatible? It seems like there are file structures which are possible to represent in FAT which aren't possible to represent in BigFAT. In FAT, I could have a 4GB-128kB size file called "hello.txt", and next to it, a file called "hello.txt.000.BigFAT". A FAT filesystem will show this as intended, but a BigFAT implementation will show it as one file, "hello.txt". That makes this a breaking change.
I would kind of have hoped that they had found an unused but always-zero bit in some header which could be repurposed to identify whether a file has a continuation or not, or some other clever way of ensuring that you can represent all legal FAT32 file structures.
There are so many good filesystems out there. Is it really necessary to keep dragging FAT along?
ReactOS is using btrfs, which has so many useful options that FAT will never see (zstd, xxhash, flash-aware options, snapshots, send/receive, etc.). This is positioned both for Linux and Windows.
Microsoft itself restricts ReFS to enterprise use, and btrfs offers so much more functionality. We should stop using a file system from the '80s.
Nothing beats the simplicity of FAT.
Btrfs has had a lot of bugs despite being in active development for a long time. This is mostly related to its complexity.
If I were going to implement a filesystem for custom hardware, I would definitely not choose btrfs.
If you want snapshots, dedup, transparent compression, and scrubs then you have precisely three open and/or available choices: ZFS, btrfs, and ReFS.
By all means, choose the Microsoft solution, because patent licensing is good for everyone.
And the bug myth passed into history years ago.
"So, we'll repeat this once more: as a single-disk filesystem, btrfs has been stable and for the most part performant for years."
https://arstechnica.com/gadgets/2021/09/examining-btrfs-linu...
None of those features you mention mean anything in the situations where FAT is still being used.
FAT is still popular because it is very easy to implement, and anything can read and write to it. It is pretty easy to implement the file system on a low power microcontroller and have it write data to an SD card. Your users can then plug that SD card into any computer and view the data, or add to it.
Using btrfs in a situation like this means a lot more coding on your end, and your users lose the convenience of the SD card using a file system they can easily interact with.
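For a sense of scale, here's roughly what the microcontroller side looks like with ChaN's FatFs library (a sketch; the SD-card diskio glue underneath is assumed to be provided by your board support package):

```c
#include <string.h>
#include "ff.h"   /* ChaN's FatFs */

/* Append one line of sensor data to a log file on the SD card.
   Assumes the usual FatFs diskio layer (disk_initialize, disk_read,
   disk_write) is already wired up for the SD card hardware. */
void log_sample(const char *line)
{
    FATFS fs;
    FIL fil;
    UINT written;

    if (f_mount(&fs, "", 1) != FR_OK)   /* mount the default drive */
        return;

    if (f_open(&fil, "log.csv", FA_WRITE | FA_OPEN_APPEND) == FR_OK) {
        f_write(&fil, line, (UINT)strlen(line), &written);
        f_close(&fil);                  /* flush everything to the card */
    }

    f_unmount("");
}
```

The whole thing fits in a few KB of flash, which is the point: any PC can then read the card directly.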
Nobody is using FAT for their primary system partition. It is almost exclusively relegated to embedded systems and small external storage devices where broad compatibility is an important feature.
But I just want 5GB files
Microsoft just wants a check from you.
We are all forced to pay for this ancient software every time we buy a device that uses it.
Wouldn't this money be better used elsewhere?
https://en.m.wikipedia.org/wiki/File_Allocation_Table#Patent...
AFAIK, FAT32 is patent-free; even the Long File Name patent has expired. The only patents left are on exFAT.
Microsoft isn't getting a check from me using FAT32.
Hmm that's like saying Google is free.
Not at all, seeing as all FAT32 related patents expired many years ago
You know patents have a limited life, right?
x86_64 with SSE2 is also patent-free right now, as an example.
All of those patents expired a long time ago
> Nothing beats the simplicity of FAT.
Or the sheer ubiquity, and therefore cross-device compatibility.
Camera manufacturers and SD card manufacturers can't start shipping SD cards formatted with btrfs until Windows supports it out of the box. They can start shipping SD cards formatted with FAT32 and software/firmware which reads and writes FAT32+BigFAT.
More specifically, they need a filesystem that both Windows and MacOS can read. No one wants to take their SD card to a friend's computer and have it not work for reasons they won't understand.
The shared set there is basically just fat and exfat.
If Microsoft and Apple collaborated on a new filesystem, or even just supported one, then we might have a possible successor. However, even with that, the millions of already-shipped devices won't support it. Thus, during a transition period of many years, there will still need to be support for FAT.
That keeps FAT the lowest common denominator, with everything supporting it.
Remember last time they tried a universal media filesystem with UDF? It was implemented in the most incompatible ways as a token gesture by both Microsoft and Apple. These companies want their own, patented, proprietary fs so they can maintain lock-in.
The only way to get a universal standard is to have the community do it and have enough people use it that the big companies have to capitulate.
The problem is you can't get there without out-of-the-box support.
exFAT is a good candidate for a replacement "lowest common denominator" file system, and support for it is growing rapidly now that Microsoft has effectively open-sourced it.
But as you pointed out, in a transitional period there is still a need to support older devices and software. FOSS purists may also not approve of using exFAT in some situations, since the relevant patents have not yet expired, even if MS has released them to the OIN.
Now that exFAT is "open", I've seen it cropping up much more often. SD cards often ship with it, especially large ones.
That happened before it was open. exFAT is the standard filesystem on SDXC
> can't start shipping SD cards formatted with btrfs until Windows supports it out of the box
3rd parties can write drivers for Windows, you know. A small, read-only FAT partition on a USB stick or SD card could contain the installable drivers necessary to read/write the rest of the disk.
However, that's unnecessary. The best option for a universal file system is UDF. Windows, Mac, and Linux all have full read/write support.
I guess what is needed is a BSD implementation of btrfs.
Still, something similar to FUSE might help with the licensing.
No, what is needed is for Windows and macOS to support btrfs out of the box.
If they can ship software that reads BigFAT, why can’t they ship software that reads btrfs?
Other people have given good answers, but here's another one: People's computers can already mount BigFAT-formatted drives.
Do you know what happens when you insert a btrfs-formatted SD card or USB stick into a Windows or macOS machine? It tells you that the drive is unreadable and asks if you want to initialize it. If the user answers yes to that question, the system formats the drive and all of their data is lost.
With a BigFAT-formatted drive, the system will mount it no problem, the user will be able to browse the contents, and the only weird part is that their largest files are split into parts.
Because you only need the software for >4GB files, and block-level access usually requires root or admin. This can be fully userspace if your OS already supports FAT32 (it does).
1. BTRFS is a lot more complex.
2. Switching to BTRFS would be a breaking change. BigFAT wouldn't be. You can still use the card in devices that do not support it, without needing to reformat. Those devices would just lose access to some files.
Probably simplicity. It would be easier for a manufacturer to do a quick firmware update that implements BigFAT than having them support BTRFS.
> Is it really necessary to keep dragging FAT along?
Anything involving embedded and without deep pockets has no other option; FAT (sadly) still is the least common denominator. Some speak exFAT, but I'm not sure how good the tooling support is outside of Microsoft, and there are still patent concerns.
I believe the exFAT patents expire or expired this year.
Even without patent concerns, exFAT takes more effort to implement (lots of features you might not need) and ends up requiring more ROM space that may or may not be available… even if you want to write files exceeding 4GB to some external media.
e.g. some widget without network connection that optionally logs an audit trail to an attached USB media, could still end up with only a couple hundred kilobytes of soldered-on ROM to store the whole firmware, while wanting to write more than 4GB of audit logs.
> ReactOS is using btrfs, which has so many useful options that FAT will never see (zstd, xxhash, flash-aware options, snapshots, send/receive, etc.). This is positioned both for Linux and Windows.
I just want to transfer files on USB sticks without worrying about file size or the OS accessing it. The infuriating part is that it is 2022, and if you want to reliably and easily move files larger than 3-4GB on removable media, people tell you to use proprietary MS file systems like exFAT and NTFS. That is unacceptable.
We NEED a simple, portable, and freely open file system spec for removable media that handles large drives and files.
FAT is for weak little cost-optimized embedded-microcontroller devices that write one file at a time to an SD card — which is something we're still building to this day, in the form of IoT devices. We don't really have any better option for this use-case; every newer filesystem is either non-portable, or assumes stronger hardware such that the overhead of using it on these devices would be huge.
I would note that one way to work around the cost-incentives of IoT manufacturers, would be to encourage them to externalize the storage-layer costs from the device's BOM, by focusing on getting "object-storage oriented" NAND flash controllers pushed down from enterprise to regular retail availability. That way, all the filesystem-layer smarts end up living in the SD card itself — which is sold separately. (It'd be sort of a second coming of the ancient Commodore 1540/1541 paradigm, where the disk controller presented not as block storage, but as, essentially, a single-user serial-attached NAS.)
BtrFS is unsafe for production use unless it's coupled with really good backups.
On the other hand, NTFS on Windows, Ext* on Linux, or ZFS on any supported OS, has not been known to eat data as frequently.
Is BtrFS without RAID safe?
According to them (https://btrfs.wiki.kernel.org/index.php/Status) only RAID56 is unstable.
There are still major bugs in the rest of it. You can trivially corrupt a mirror. There are examples of how to reproduce it exactly in qemu with virtual drives. It caused me total data loss on my first Btrfs filesystem at least 8 years back. That bug is apparently still there. The unbalancing issues are still there. I have zero trust in Btrfs in any form.
Can you link to that? I have been using btrfs in raid1 and single for ~3-4 years now without any data loss.
You (and many others) might well be using RAID1 mirrors without problems. I did as well. But the problems here are not encountered during day-to-day usage. They are bugs in the recovery codepaths following hardware failures. I suffered this due to a transient SATA cable glitch, but the instructions let you exactly reproduce this with qemu with a recent kernel. I've not tried the qemu approach myself; I moved over to ZFS a good while back now.
I've had a hunt for the specific instructions but I'm afraid I can't find it again with a search. The gist of it was to:
- create Btrfs mirror using two qemu virtual disks
- pull the (virtual) plug on one of the pair to disconnect it, then later reconnect it
- Btrfs ends up hosing both the outdated and current copies of the mirror, leading to complete data loss of the entire mirror
Synology uses Btrfs + mdadm RAID for their NAS boxes, which are regarded as rock-solid.
Whatever voodoo they are pulling out of thin air is clearly not present on commodity, non-tuned Btrfs filesystems. That, and most people have backups set up on their NAS.
Well, if you want, you can just pull the drives out of your Synology box and hook them up to any Linux system [1]. So whichever way they're tuning it, it can't be that voodoo-esque.
[1] https://kb.synology.com/en-global/DSM/tutorial/How_can_I_rec...
With all due respect, this is now far from true.
"So, we'll repeat this once more: as a single-disk filesystem, btrfs has been stable and for the most part performant for years."
https://arstechnica.com/gadgets/2021/09/examining-btrfs-linu...
I mostly use FAT as a go-between for different operating systems. Things could be installed to implement similar functionality around another file system, but it's nice to have a default, built-into-everything format that works on every machine. It has its flaws, but its universality is a huge strength.
What FAT32 filesystem in the real world has a file named "foo.000.BigFAT" on it?
I can imagine that if bigfat is successful, such files will start to exist.
Imagine someone takes a bigfat drive and puts it in a non-bigfat capable machine, then zips up a directory and publishes it.
When that directory is unzipped on a bigfat machine, should the bigfat files be re-joined, or should they show as separate files? One breaks the OS file API and the unzip program might crash/fail, while the other leads to the application trying to create filenames which can't exist in the filesystem.
> should the bigfat files be re-joined, or should they show as separate files
They're only "rejoined" by the BigFAT compatible filesystem driver on access. By running such a driver, you're agreeing that such files should "appear" as one.
Hey, notice that you're suggesting that BigFAT should be disabled by default here; you think the user should have to choose to be running a driver with BigFAT-support. Maybe reflect on whether that's a desirable situation, or if it would've been preferable if the feature could've been enabled by default.
See my response to https://news.ycombinator.com/item?id=32753207. I'm not saying it will break everyone's FAT32 drives, but it is a breaking change in a filesystem, which seems like something kernel people would usually try to avoid.
It's as backwards compatible as any other fat extension done so far.
For example, LFN fails if you create too many files with the same first 6 letters :)
I'm actually honestly not sure why representing all legal FAT32 file structures is a particularly useful goal?
FAT in particular, in all of its forms, has always had limitations and weirdness in filenames, etc.
I don't understand your LFN example. Which FAT file structure can be represented with LFN disabled that's no longer possible to represent if you add support for LFN?
If BigFAT was actually backwards compatible, it would've been a no-brainer to add support for in filesystem drivers. But since it changes the interpretation of some legitimate structures, adding support for BigFAT is a breaking change. I don't know whether operating systems will want to make breaking changes to their FAT32 filesystems, but it certainly seems like a bigger ask.
Err, a directory with all possible six character prefixes, differentiated at the seventh character, is representable without LFN but not with it. Wikipedia actually has links/info on it if you want more.
My favourite sobriquet for MS has long been the DOS view of an Office 97 installation in the filesystem:
MICROS~1
(Long name, Microsoft Office.)
So if you create files called MICROS~2 through MICROS~9, in theory you can exhaust the abbreviated names so that there are no available short names left for long filenames you wish to create. Every LFN must have a "real" 8.3 counterpart.
I imagine the premise is that if you mount this disk in an implementation that doesn't understand these structures, it works and you don't corrupt it, making the format backwards compatible with old implementations. This is similar to the trick used to add long filenames: storing the long name in special extra directory entries alongside an 8.3 alias containing a ~.
>Unfortunately, exFAT has been adopted by the SD Association as the default file system for SDXC cards larger than 32 GB. In our view, this should never have happened, as it forces anyone who wants to access SDXC cards to get a license from Microsoft, basically making this a field owned by Microsoft.
So, this is a bit of a cultural/perception gap between FOSS developers and standards bodies. Most standards bodies have a patent policy of "as long as all the standards-essential patents are licensable for a uniform fee, we're good". Convincing patent holders to not extract royalties from their patents for the sake of easing the lives of FOSS implementers is much, much harder[0].
Microsoft isn't even the only SEP holder for SD, and the standard makes no attempt at being a royalty-free standard. In fact, early SD standards were NDA'd[1] and prohibited FOSS implementation at all.
[0] In fact, so hard that the EU has a conspiracy theory that Google/AOM bullied a patent holder into doing this
[1] Remember, SD cards were basically MMC with primitive DRM
Are the exFAT patents still a problem nowadays?
> exFAT was a proprietary file system until 2019, when Microsoft released the specification and allowed OIN members to use their patents.
The patent will also expire in 2027 [1]. We can look forward to it being entirely unencumbered at that point.
https://patents.google.com/patent/US20090164440?oq=US2009164...
I sometimes wonder if companies could choose to expire their patents earlier. Especially in cases when there are little to no strategic value to uphold them, but lots of potential value to unlock when they are gone.
> We also support the eventual inclusion of a Linux kernel with exFAT support in a future revision of the Open Invention Network’s Linux System Definition, where, once accepted, the code will benefit from the defensive patent commitments of OIN’s 3040+ members and licensees.
I don’t know exactly what that means. But it sounds like something different from “we hereby grant everybody a license to any and all exFAT patents.”
Good for OIN, but it doesn't help non-Linux systems.
> Why not exFAT... Microsoft owns several patents, and anyone who implements or uses exFAT technology needs Microsoft's permission, which typically also includes paying fees to Microsoft.
While BigFAT not being encumbered by any patents is a good thing, the camera industry has pretty much standardized on exFAT for its removable file storage format. Something I'm curious about is how a 5GB video file (quite common, and actually on the smaller side for 4K and 8K recording sessions) is written and accessed between the two file systems. BigFAT says that the file would be written in 4GB chunks; is there something similar happening with exFAT, or is the file "one chunk?" (Apologies if I have the terms wrong -- I'm not a filesystem expert.) The author laments that the exFAT format has been adopted for SDXC cards, but given who all is in this group and what their use cases are, I can discount "because Microsoft strong-armed them" as a reason for them selecting it.
The industry could have used UDF. Derived from ISO 9660, but it supports read-write random access storage.
I'm guessing they didn't because FAT12/16/32->exFAT driver changes are comparatively simple, and/or result in a smaller code base when supporting FAT32 and exFAT on the same device (e.g. a camera).
And on a camera that costs anywhere from USD$1000 to USD$6500 does the cost of an exFAT license really matter?
Yes, it does.
If you manufacture 100K of them, and save 10 dollars on every piece, you got an extra million in savings.
From the wiki, my understanding is the licence is $0.25 a unit, not $10?
It was a number to illustrate a point, not an exact figure for this specific case.
Yes, the manufacturers will go great lengths to minimize variable costs. If they can shave $0.25, they will. At volumes, it matters.
exFAT is not limited to a 4GB maximum file size; it just allows more than 4GB in a file.
I guess 4GB seemed like a reasonable limit when FAT32 was designed.
Most likely FAT32 has a 32bit number for file size and ExFAT presumably has either a 64bit one or stores file size in some format other than bytes.
> I guess 4GB seemed like a reasonable limit when FAT32 was designed.
FAT32 was always seen as stop-gap measure for low-end consumer hardware when introduced in 1996; NTFS was introduced 3 years prior to handle terabyte-scale data for enterprise users.
> Most likely FAT32 has a 32bit number for file size and ExFAT presumably has either a 64bit one
Correct.
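For the curious, a FAT directory entry is a fixed 32-byte record; here's a sketch of its layout (field names are mine; the offsets follow Microsoft's FAT specification):

```c
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  name[11];          /* 8.3 short name, space-padded       */
    uint8_t  attr;              /* attribute flags                    */
    uint8_t  nt_reserved;
    uint8_t  create_time_10ms;
    uint16_t create_time;
    uint16_t create_date;
    uint16_t access_date;
    uint16_t first_cluster_hi;  /* high 16 bits (used by FAT32)       */
    uint16_t write_time;
    uint16_t write_date;
    uint16_t first_cluster_lo;  /* low 16 bits of the start cluster   */
    uint32_t file_size;         /* 32 bits -> hard 4GiB - 1 ceiling   */
} FatDirEntry;                  /* exactly 32 bytes                   */
#pragma pack(pop)
```

That single 32-bit file_size field is the whole reason for the 4GB limit; exFAT's directory entries carry a 64-bit length instead.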
I actually am disappointed that Microsoft had a chance to fix some inherent problems with FAT but didn't, even considering the main use case of a simple FS. Notably, it still has the notorious year 2100 bug (or 2108 bug, depending on the implementation), and the metadata is weird and not at all straightforward; it's basically just FAT32 extended with some minor updates, now that Unicode and UTC are here.
The question I have is, why Segger? When I saw this I was like "the debugger company?!?!" Clearly this wouldn't fall under their business, so it makes sense for them to open it up, but why did they do it in the first place?
They offer their own file system implementation (emFile) which supports either their own storage format (EFS) or FAT. The BigFAT article is posted in the emFile section of their website.
My suspicion is that customers are bugging them to support large files in emFile and they don't want to pay the license fee for exFAT. I think they can't even do that with their current licensing model, which is a one-time per-product (not per-item) or per-product-family payment.
EDIT: I tried to find out whether Microsoft's exFAT is licensed per product or per unit, and I found that it used to be a 300,000 USD flat fee in 2009 but seems to have been free since 2019. So my theory from above has no basis, and I wonder why Segger does not simply implement exFAT?
It is not true that it is "free since 2019".
Source:
https://www.microsoft.com/en-us/legal/intellectualproperty/t...
https://www.paragon-software.com/exfat-license/
https://en.wikipedia.org/wiki/ExFAT#Legal_status
You may be off the hook if you use Linux >= 5.7. And it seems that you are off the hook if you are a member of the Open Invention Network (OIN).
But SEGGER's embOS is not based on Linux, and their customers are OEMs themselves. So their customers would need to be OIN members or pay royalties to MS.
This is interesting, thanks. So the Linux kernel contains essentially non-free code that is, despite being under GPL, in effect not usable by others because of patents.
They have an entire RTOS ecosystem which supports a gazillion different microcontrollers.
I'm a bit puzzled as to how split files with name standardization is an "extension." It seems to me that SEGGER is simply proposing a de facto file naming convention, and offering a few free tools (including a few abstraction drivers) to encourage adoption.
Can somebody fill me in, here- where's the value in what SEGGER is proposing, as opposed to what the entire IT community has already been doing for decades?
Well, if we view FAT32 + this name convention as a new filesystem, then filesystem drivers could let you transparently operate on files bigger than 4GB (GiB?) and take care of the splitting for you. FAT32 + this convention would essentially become a filesystem which supports files up to around 4TB. You wouldn't have to make the choice between the patent-encumbered exFAT and the open but limited FAT32.
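Presumably the ~4TB figure falls out of the naming scheme; here's the rough arithmetic, assuming 4 GiB per piece and the three-digit .000-.999 suffix mentioned upthread:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t fragment = 4ULL << 30;          /* assumed: 4 GiB per piece  */
    uint64_t total = (1 + 1000) * fragment;  /* base file + .000 - .999   */
    printf("max ~%llu GiB\n", (unsigned long long)(total >> 30));
    return 0;
}   /* prints: max ~4004 GiB, i.e. roughly 4 TiB */
```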
I think many programmers have used file splitting technology at the application level for decades. I know I wrote one for a backup utility (Drive Image) back in the 90s that would split the output files into smaller pieces for transfer to removable media (floppies, Zip drives, CDs, DVDs, etc.).
It sounds like BigFat is an extension that takes away the need to do this at the application level. The code does all the splitting and merging for you so you can write a program that acts like the file is on a file system that supports files > 4GB.
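Here's a sketch of the offset arithmetic such a layer would do; I'm assuming 4 GiB fragments and the fragment naming seen upthread (hello.txt, then hello.txt.000.BigFAT, and so on), so treat the details as illustrative rather than the actual spec:

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

#define FRAGMENT_SIZE (4ULL << 30)   /* assumed: 4 GiB per fragment */

/* Map a logical byte offset in the "big" file to the fragment that
   holds it, plus the offset inside that fragment. Fragment 0 is the
   base name itself; fragment N > 0 carries the .NNN.BigFAT suffix. */
static void map_offset(const char *base, uint64_t offset,
                       char *name, size_t len, uint64_t *frag_off)
{
    uint64_t frag = offset / FRAGMENT_SIZE;
    *frag_off = offset % FRAGMENT_SIZE;

    if (frag == 0)
        snprintf(name, len, "%s", base);
    else
        snprintf(name, len, "%s.%03llu.BigFAT",
                 base, (unsigned long long)(frag - 1));
}

int main(void)
{
    char name[64];
    uint64_t off;
    /* byte 5 GiB lands in hello.txt.000.BigFAT at offset 1 GiB */
    map_offset("hello.txt", 5ULL << 30, name, sizeof name, &off);
    printf("%s @ %llu\n", name, (unsigned long long)off);
    return 0;
}
```

This also shows why random access still works: any byte of the logical file maps to one fragment and one in-fragment offset, no re-joining needed.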
I had the same impression. I cannot see how it is different from the tools that allow file splitting (for later archiving on floppies, CDs, DVDs, etc.) that have existed since forever. When/if implemented in OS filesystem drivers it will be more "transparent", but until then it seems to me no different from multi-part archives, such as rar or similar.
If I may, it would make more sense (to me at least) to use a directory and have a descriptor file, not entirely unlike how multi-part vmdk's are implemented.
Using split, cat or rar doesn't allow random access.
But this "extension" doesn't seem to allow it either.
On the contrary (but it is a specific "niche" case), the mentioned vmdk split format allows mounting the vmdk the same as a monolithic one, with full random access.
Of course it does. "BigFAT allows random read and write access to any file, even if larger than 4GB, as required by databases."
I think we are talking about two different things. I was talking about the extension document/specification; that one seems to bring no particular innovation.
The actual implementation (by SEGGER or by someone else):
>Q: Can I implement BigFAT myself?
>A: Absolutely. BigFAT is a specification made available by SEGGER. Anybody is free to write a piece of software implementing it. No fees, no royalties, no headaches. You do not even have to let SEGGER or anybody else know.
is what may allow that (random access); this implementation would be useful if, instead of being a "feature" of a given app/program, it were implemented as a filesystem driver of sorts.
Edit: I guess my first look is wrong; on second look, it appears it needs their own filesystem driver. If the hack had worked as I described, it would've been very clever, and dangerous...
> de facto file naming convention
From a first look, it seems like it's using Microsoft's own hack for long file names[1] to create file entries that look like they belong to one file. A file that has a long file name (more than the 8+3 character limit) is actually several file entries, but they're empty files. It seems like the tool is creating non-empty files instead, which Windows chains together as one.
[1] https://en.wikipedia.org/wiki/Design_of_the_FAT_file_system#...
If only they released it back when exFAT was released. Now it has no future.
Is this only compatible with FAT32, or is it also compatible with FAT12/16? It would be very cool if this would support floppy disks.
Regarding the format, once you convert it, does the target device need to have a driver to support the format? It mentions that this would allow >4GB files for TVs, but these typically run non-updated, very out-of-date OSes.
I think MS missed a trick by not making the boot sector also contain a simplistic driver, although it would have been a push to keep it all down at 512 bytes.
> Is this only compatible with FAT32, or is it also compatible with FAT12/16? It would be very cool if this would support floppy disks.
It's simple enough to work on basically anything, but for what purpose?
The max file size on FAT12/16 is the same as the max drive size.
And FAT32 is very easy to implement for any system dealing with multiple megabytes.
As I replied to the comment, it's about having a filesystem that scales from floppy disks to hard drives. This is quite important for hobby OSes.
How do you get files > 4 GiB on a floppy disk?
You don't of course, it's about having a filesystem that works from floppy disk all the way to hard drive.
Awesome concept, especially for academia ... but is there a value proposition?
I love seeing this, don't get me wrong. I am just curious if there are any real-world applications for this?
It would be great to have a non-patent-encumbered simple file format that's supported everywhere. The fact that this is based on FAT32 might help adoption, everyone's computers can already at least read a BigFAT drive, and BigFAT support could be added at the application level for systems which don't support it at an OS level.
Are the filesystems used on BSD and Linux distros patent-encumbered? Isn't UFS2 simple enough?
UFS2 might be the technically perfect tool for the job, but that doesn't matter when Windows doesn't support it. A camera manufacturer or SD card manufacturer can't start shipping their customers SD cards formatted with UFS2 when it's not supported by Windows. They could start shipping FAT32 SD cards along with software and firmware which can read and write FAT32+BigFAT.
In my opinion UDF [1] would be a great option. Although it's mainly used in DVDs, it can also be created as read/write capable. [2]
[1] https://en.m.wikipedia.org/wiki/Universal_Disk_Format
[2] https://duncanlock.net/blog/2013/05/13/using-udf-as-an-impro...
If you wish for an fs common to both BSD and Linux, ext2fs would be perfect - better than UFS for the job.
> but is there a value proposition?
Straight from their FAQ: We see emFile customers asking for solutions for bigger files. Implementing exFAT is not an option for us, as it is patent encumbered. SEGGER would need Microsoft's permission to implement and offer it, and our customers need to deal with Microsoft again to be able to use it in their products. This can be time-consuming and also expensive. We feel there should be a free alternative. The more popular BigFAT becomes, the better.
I guess using anything but FAT would make it hard for their developer base.
I already use FAT32 for some USB sticks when I need to be able to use them on various OSes without having to give it any thought, or for long term archiving.
This would be extremely niche but would have its audience. Heck, I wouldn't be surprised if HN readers adopted it just to taste the thrill of unlimited and obscene power.
Would it not be possible to create a filesystem with modern capabilities but with backwards compatibility with FAT? Why can't we just have "legacy" commands built into the ReFS filesystem that process any FAT filesystem access?
I'm very ignorant to this but I'd love some insight from someone vastly more knowledgeable than me.
A filesystem translates a filename-based streaming I/O API to the way a disk talks, which is "read/write X 512-byte blocks of data starting at disk block N."
The I/O API or "commands" are the same; different filesystems will implement it differently.
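A minimal sketch of that lower layer (names are illustrative, not any particular OS's API):

```c
#include <stdint.h>

#define SECTOR_SIZE 512

/* Everything a filesystem driver needs from the disk below it:
   move whole sectors to and from a linear block address (LBA). */
typedef struct {
    int (*read)(void *dev, uint64_t lba, uint32_t count, void *buf);
    int (*write)(void *dev, uint64_t lba, uint32_t count, const void *buf);
    void *dev;   /* opaque handle for the underlying device */
} BlockDevice;

/* A filesystem's job is to implement filename-level operations
   (open, read, write, directory listing) purely in terms of these
   sector reads and writes. FAT, btrfs, and NTFS all sit on this same
   narrow interface; they just arrange the sectors differently. */
```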
Thanks!
I wasn't aware of the intricacies of this stuff - I'll have to do some more reading in my free time.
Is there a linux kernel driver for this somewhere?
If they want it to spread, they should also write a FUSE implementation and think about operating system support for Linux or BSD.
The thing missing on the page is some kind of performance benchmark which I would love to read/see.
Is the big file handling seamless? If not, why not just split files and use regular FAT32?
And what about converting FAT32 to a linux partition? Or buy a new disk and move data over to that.
Edit: it is a genuine question. The downvote implies it's not, but honestly it is.
How is this not BackFAT?
Looks good. Keep at it!