ESDI Adventures | OS/2 Museum


At long last, I got hold of a decently functioning ESDI drive. From my earlier adventures, I had a WD1007V-SE2 controller, as well as an older WD1007A. The WD1007A (Compaq branded) used to live in a Hyundai 286 machine together with a Microscience ESDI drive. But the Microscience drive tragically died some years ago.

I also have a somewhat working ST4182E drive (152MB), but its heads have an unfortunate tendency to stick when the drive is not in use and then require manual intervention, so the drive isn’t very usable.

Now I got a CDC/Imprimis/Seagate Wren VI drive, CDC model number 94196-766 or Seagate model ST4766E. It has a formatted capacity of about 664MB and it was about the biggest ESDI drive in Seagate’s product lineup.

A 1991 ST4766E ESDI drive

The drive was sold as untested and I didn’t expect much of it. Visual inspection revealed mild corrosion in one area of the PCB, but there was no obvious cause for it (no leaky capacitors or some such). It may have been a result of sub-optimal storage conditions.

Before powering up the drive for the first time, I was rather apprehensive. But the drive spun up just fine and made the right kind of noises—heads unlatching followed by a seek test; by now I have a very good idea what a Seagate drive of CDC lineage should sound like. There were no suspicious noises, and for a full-height 5.25″ drive with 8 disks inside, the ST4766E is fairly quiet.

Controller Setup

Setting up a system with the ST4766E and WD1007V-SE2 was not entirely trivial. The WD1007V has its own BIOS, but it is not what one might expect from a disk controller BIOS. The WD BIOS can format a drive but it does not provide an INT 13h service.

Instead, a standard AT compatible BIOS is assumed. That is because the WD1007V presents an ESDI drive through the standard PC/AT style disk interface, which also happens to look awfully like IDE.

I plugged the WD1007V into my favorite board (Alaris Cougar) after disabling the Adaptec VLB IDE controller on the motherboard and the floppy controller on the WD1007V.

Then I ran the WD BIOS to format the drive. That took a while but went well. Then I tried to partition the drive and install DOS on it, which went rather less well.

FDISK did its job, but took an unusually long time. The DOS FORMAT command progressed… very… very… very slowly. It was in fact so slow that I was pretty sure something had to be wrong.

Which gave me some time to do a bit of research. The WD1007V manual claims that the controller only supports up to 53 sectors per track, and the drive was jumpered to 54 (which did not stop the low-level format from succeeding!). There is an old OnTrack Q&A document that also says 53 sectors was the maximum for the WD1007V. And there is an old Usenet post where the author complains that with 54 sectors per track, “the thing ran ridiculously slowly”. The drive’s own manual notes that 53 sectors is the most common setting, but does not explain why it would be.

I strongly suspect that the slowness is caused by the controller’s inability to handle 1:1 interleave at 54 sectors per track. If the drive misses every single sector when accessing the disk, that would sure slow things down a lot.
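A quick back-of-the-envelope sketch shows just how bad a blown interleave would be; this is a toy model, assuming each missed sector costs one full revolution before it comes around again:

```python
# Toy estimate of a blown 1:1 interleave (assumption: every missed
# sector costs one full disk revolution before it comes around again).
rpm = 3600
rev_ms = 60_000 / rpm          # 16.67 ms per revolution at 3,600 RPM
spt = 54                       # sectors per track, as jumpered

ideal_track_ms = rev_ms        # 1:1 interleave: whole track in one revolution
worst_track_ms = spt * rev_ms  # miss every sector: one revolution per sector

print(worst_track_ms / ideal_track_ms)  # → 54.0, i.e. roughly 54x slower
```

A 54x slowdown in raw track reads would very comfortably explain a FORMAT that crawls for hours instead of finishing in minutes.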

So I gave up on the DOS FORMAT in progress, re-jumpered the drive to 53 sectors (which reduces the capacity a little), and enabled alternate sectors when formatting. After the format was done, I let the controller apply the bad sector map which is stored on the drive itself.

Note that formatting with “alternate sectors” aka spare sectors means that 1 sector per track is set aside for defect management. The alternate sector is assigned ID 0 and therefore won’t normally be used. If a bad sector is found on a track, the controller can mark it as bad and use the alternate sector instead.

While this reduces drive capacity, it is critical for operating systems that can only manage a limited number of drive defects (and a 650 MB drive can have rather more defects than a 20 MB drive, unsurprisingly). It can also be useful for systems that can only mark entire tracks as bad. For the FAT file system, it might be on balance better to not use alternate sectors and just let DOS mark the corresponding clusters as bad.
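The capacity cost of the spare sector is easy to quantify; a quick Python check, assuming the ST4766E’s 1632-cylinder, 15-head geometry and 512-byte sectors:

```python
# Capacity cost of reserving one alternate sector per track, assuming
# the ST4766E's 1632-cylinder, 15-head geometry and 512-byte sectors.
cyls, heads, sector_bytes = 1632, 15, 512

full  = cyls * heads * 53 * sector_bytes   # all 53 sectors per track usable
spare = cyls * heads * 52 * sector_bytes   # one sector per track set aside

print(round(full / 1e6, 1), round(spare / 1e6, 1))  # → 664.3 651.8
print(round((full - spare) / 1e6, 1))               # → 12.5
```

So the spare sectors cost about 12.5 MB out of roughly 664 MB, which lines up nicely with the drive’s quoted formatted capacity.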

One catch during post-formatting setup was that the motherboard BIOS detected the drive with CHS geometry 1632/15/53 (yes, it can be detected because the WD1007V ESDI controller supports the IDENTIFY DRIVE command). Except with alternate sectors enabled, that is wrong! The BIOS must be set to use 52 sectors per track, not 53.

This is a deficiency of the WD1007V controller. It could reduce the number of sectors per track it’s reporting when alternate sectors are enabled (there is a jumper on the controller), but WD probably didn’t think of that because in 1989, when the WD1007V was made, operating systems and BIOSes weren’t using IDENTIFY DRIVE yet.

I also let the motherboard BIOS apply geometry translation—obviously the drive’s native 1632/15/52 geometry has more than 1024 cylinders, which means that translation is required to access the full drive capacity.
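The translation scheme itself is simple; here is a minimal sketch (function name mine) of the usual cylinder-halving approach:

```python
# Minimal sketch of the cylinder-halving translation a BIOS applies when
# a drive exceeds the 1024-cylinder INT 13h limit (function name mine).

def translate_geometry(cyls, heads, spt, max_cyls=1024):
    """Halve cylinders and double heads until under the BIOS limit."""
    while cyls > max_cyls and heads * 2 <= 255:
        cyls //= 2
        heads *= 2
    return cyls, heads, spt

# The ST4766E's native geometry with 52 usable sectors per track:
print(translate_geometry(1632, 15, 52))  # → (816, 30, 52)
```

A real BIOS typically also reserves a cylinder or two for its own purposes, which is why utilities can report slightly smaller numbers than the raw arithmetic gives.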

DOS Setup and Experience

At any rate, with the drive jumpered to 53 SPT and the BIOS set to 52 (to account for the “missing” alternate sector), FDISK wasn’t weirdly slow and DOS FORMAT ran at a reasonable speed, finishing in a few minutes.

DOS FORMAT discovered three additional bad sectors (or at least three bad clusters), which seems quite good for a drive made in 1991. Then I installed PC DOS 2000 on the drive without any incident, and followed with a few random utilities.

Norton SysInfo shows that the BIOS translated the drive geometry from 15 sides to 30 and halved the number of tracks from 1630 to 815. Note that SysInfo shows the drive model as WD1007V, which is what the ESDI controller returns as the model in IDENTIFY DRIVE.

Over 600MB of disk space!

The drive is of course not that fast even by mid-1990s hard disk standards, but then again the ST4766E is a drive model released in 1988. It is a standard 3,600 RPM drive, which means average rotational latency is 8.33ms.

The seek times of the drive are actually quite impressive for a drive first available in 1988. Seagate gives the average access time as 15.5ms, with track-to-track seek times of 3ms and maximum seek time of 37ms.

Compare this to old stepper motor drives which often had average seek times on the order of 80ms. CDC of course used voice coil actuators in their Wren drives which is why their seek times were far better.

Norton SysInfo agrees with the data provided by Seagate:

ST4766E benchmark results

Average seek time of 15.1 ms is really good for a late 1980s drive model. The transfer rate is also very good at about 1.3 MB/sec; it can’t be too far from the theoretical maximum.

While 15 Mbit/s corresponds to 1.875 megabytes per second, the drive’s sustained transfer rate can never be that high. There is some storage overhead and each sector needs more than the equivalent of 512 bytes on disk. More importantly, there is additional overhead caused by switching heads and seeking to the next track.
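These figures are easy to sanity-check; a quick Python calculation, ignoring head-switch, seek, and per-sector formatting overhead:

```python
# Sanity check of the ST4766E's latency and transfer-rate figures
# (ignores head switches, seeks, and per-sector formatting overhead).
rpm = 3600
rev_ms = 60_000 / rpm            # 16.67 ms per revolution
avg_latency_ms = rev_ms / 2      # average rotational latency

raw_rate = 15_000_000 / 8        # 15 Mbit/s off the head, in bytes/sec

# User data actually passing under a head per second, at 53 sectors/track:
spt, sector_bytes = 53, 512
user_rate = spt * sector_bytes * (rpm / 60)

print(round(avg_latency_ms, 2), raw_rate, round(user_rate / 1e6, 2))
# → 8.33 1875000.0 1.63
```

The measured ~1.3 MB/sec sits below that 1.63 MB/sec per-track ceiling, which is consistent with the head-switch and track-to-track seek overhead.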

ESDI vs SCSI

The CDC/Imprimis/Seagate Wren VI drive has a bigger brother, the Wren VII with an unformatted capacity of 1.2GB—quite a bit more than the Wren VI’s 766MB. The catch is that the Wren VII was not available with an ESDI interface, only SCSI. (The Wren VI was available with both ESDI and SCSI interfaces.)

Yet when one looks at the drive details, it turns out that the Wren VII and Wren VI are mechanically more or less identical. Same number of platters, same data density. How is that possible?

The more than 50% capacity increase was possible thanks to ZBR. The Wren VII is divided into several zones with different numbers of sectors per track. The innermost zone of the Wren VII is the same as on the Wren VI, with a 15 Mbit/s transfer rate. But the outer zones use higher transfer rates and therefore can pack more sectors on a track; the outermost zone on the Wren VII uses a 21 Mbit/s transfer rate.

And this is exactly where ESDI was at a clear disadvantage compared to SCSI and even IDE. Yes, there were high capacity ESDI drives, up to 1.5GB. But these drives required faster transfer rates, up to 24 Mbit/s. For any given ESDI drive, if the drive could store N sectors per track, a SCSI variant of the same drive could store N sectors on the innermost tracks but N + M on the outer tracks.

If ZBR could increase the drive capacity by 50%, and speed up the outer tracks as well, who was going to say no to that? Around 1990, ZBR was becoming more common in SCSI drives as well as IDE, whereas ESDI was limited by the fixed transfer rate. Again the problem wasn’t that the ESDI transfer rates were low, it was that they were generally fixed.

ESDI was actually intelligent enough that it was possible to implement ZBR, because the controller could change the drive’s sectors per track and possibly the transfer rate on the fly. However, I don’t think there was any defined way for the drive to tell the controller what it was capable of.

With technologies like ZBR, it was far easier to put a smart controller on the drive itself (SCSI or IDE) rather than designing a highly complex interface between the drive and controller. Because SCSI and IDE hid these details, drive vendors could ship more intelligent drives without having to wait for new interface specifications and new controllers.

ESDI, an Intermediate Step

ESDI is an interesting evolutionary step between completely dumb ST-506 drives and self-contained, intelligent SCSI or IDE drives. An ESDI controller can discover the drive geometry and other information about the drive, and ESDI drives can store a factory defect list for the controller to use when formatting. These are clear advances compared to ST-506.

The problem with ESDI is that to achieve higher capacities, drives needed to use higher transfer rates. ESDI started at 10 Mbit/s, continued with 15 Mbit/s, and went up to 24 Mbit/s. That meant a new drive quite probably needed a new controller. And as mentioned above, technologies like ZBR were difficult if not impossible to exploit with ESDI.

ESDI could have evolved with more and more complexity being added to the interface between drive and controller. But it made much more sense to put the drive and controller together, completely hide the internal complexity, and only expose a much higher level and more stable host interface like SCSI or IDE.

With those, drive vendors could use ZBR, use higher RPMs, perform advanced defect management, and do all kinds of things which ESDI could only do with great difficulty or not at all.

In a way, by about 1990 ESDI started getting in the way more than it helped. Which is why it was completely obsoleted by SCSI on the high end and IDE on the low end, and by 1992 ESDI drives had all but vanished.