[rescue] Macs & IDE vs. SCSI

Dave McGuire mcguire at neurotica.com
Sat Apr 12 14:00:11 CDT 2003


On Saturday, April 12, 2003, at 12:55 PM, Patrick Giagnocavo 
+1.717.201.3366 wrote:
> Don't forget protocol overhead as well.

   *sigh*

   ...which enables us to do such advanced high-tech things as let each 
drive have its own functioning control electronics.  Yes, there's 
protocol overhead.  That's one reason why it works better.  
Fortunately, SCSI's protocol overhead is tiny.
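
   For a sense of scale, here's roughly what a SCSI READ(10) command 
descriptor block looks like (a C sketch...the field layout is straight 
out of the SCSI spec, but the struct and helper names are mine):

    #include <stdint.h>

    /* The entire "protocol overhead" of asking a drive for N blocks
       at a given LBA is these ten bytes on the bus, plus the status
       byte that comes back. */
    struct scsi_read10_cdb {
        uint8_t opcode;        /* 0x28 = READ(10) */
        uint8_t flags;         /* DPO/FUA bits, etc. */
        uint8_t lba[4];        /* logical block address, big-endian */
        uint8_t group;         /* group number, usually 0 */
        uint8_t length[2];     /* transfer length in blocks, big-endian */
        uint8_t control;       /* control byte, usually 0 */
    };

    static void fill_read10(struct scsi_read10_cdb *cdb,
                            uint32_t lba, uint16_t nblocks)
    {
        cdb->opcode    = 0x28;
        cdb->flags     = 0;
        cdb->lba[0]    = (uint8_t)(lba >> 24);
        cdb->lba[1]    = (uint8_t)(lba >> 16);
        cdb->lba[2]    = (uint8_t)(lba >> 8);
        cdb->lba[3]    = (uint8_t)lba;
        cdb->group     = 0;
        cdb->length[0] = (uint8_t)(nblocks >> 8);
        cdb->length[1] = (uint8_t)nblocks;
        cdb->control   = 0;
    }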

> I agree on the disconnection speedup, but as for "winmodem":
>
> Argh... this is simply not true.  When you are using "UDMA Mode 2" or
> anything faster or more recent than that, the transfers from disk to
> CPU are done exactly the same way as a SCSI adapter does it (DMA
> transfer direct to memory, then 1 interrupt to tell the CPU that data
> is there).

   *sigh*

   While, during raw data transfers, this is absolutely correct, it's 
only half...or less than half...of the story.  A SCSI drive's ability 
to reorder previously issued read/write commands, sorting them to 
minimize seeking and the effects of rotational delays, again tips the 
balance from "even" WAY back into SCSI's favor.  (Think one level 
"above" the actual transfers...to the transactions that *result* in 
those transfers.)  It's true that many well-written IDE device drivers 
do this at the OS level, but frankly my CPUs have better things to do 
than handhold a brain-damaged disk drive by doing optimizations that 
the DRIVE should be doing.  Offloading that work to the drive is a 
clear advantage: only the drive itself knows its physical geometry 
(especially under LBA addressing...and who doesn't use LBA addressing?  
Running any <500MB IDE drives over there?), so only the drive can 
really optimize those reads and writes.  That goes double for blocks 
that have been remapped to another area of the disk.
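
   To make the reordering point concrete, here's a toy C sketch of the 
kind of seek-minimizing "elevator" pass I'm talking about (the names 
and structure are mine; a real drive also folds in rotational position 
and remapped blocks, which is exactly the knowledge the host doesn't 
have):

    #include <stdlib.h>
    #include <stdint.h>

    struct request { uint32_t lba; /* ...buffer, length, etc... */ };

    static int by_lba(const void *a, const void *b)
    {
        uint32_t x = ((const struct request *)a)->lba;
        uint32_t y = ((const struct request *)b)->lba;
        return (x > y) - (x < y);
    }

    /* C-SCAN-style pass: service everything at or beyond the head's
       current position in ascending order, then sweep back for the
       rest.  Reorders queue[0..n) in place. */
    void elevator_sort(struct request *queue, size_t n, uint32_t head_pos)
    {
        size_t split = 0, i;
        struct request *tmp;

        qsort(queue, n, sizeof *queue, by_lba);
        while (split < n && queue[split].lba < head_pos)
            split++;
        if (split == 0 || split == n)
            return;                      /* already one clean sweep */
        tmp = malloc(split * sizeof *tmp);
        if (tmp == NULL)
            return;                      /* fall back to plain sort */
        for (i = 0; i < split; i++) tmp[i] = queue[i];
        for (i = split; i < n; i++) queue[i - split] = queue[i];
        for (i = 0; i < split; i++) queue[n - split + i] = tmp[i];
        free(tmp);
    }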

   And that's not even mentioning disconnect/reconnect and the whole 
IDE one-transaction-on-the-bus-at-a-time thing.  You can argue for 
putting one drive per bus and so on, but I don't *have* to do that 
with SCSI, because it was designed correctly in the first place.  
Granted, you did say you agree about the disconnect/reconnect 
speedups.  They're HUGE, though...on everything except a DOS system.
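
   A back-of-envelope sketch, with made-up-but-plausible numbers, 
shows why that overlap matters so much:

    #include <stdio.h>

    /* Toy figures, not measurements: two requests, one per drive,
       each needing a 10ms seek plus 1ms of actual bus transfer. */
    int main(void)
    {
        double seek = 10.0, xfer = 1.0;

        /* SCSI: each drive disconnects during its seek, so the seeks
           overlap and the bus is only held for the transfers. */
        double scsi = seek + 2 * xfer;

        /* IDE channel (no overlap): the second request can't even
           start until the first one completes. */
        double ide = 2 * (seek + xfer);

        printf("SCSI (overlapped): ~%.0f ms\n", scsi);  /* ~12 ms */
        printf("IDE (serialized):  ~%.0f ms\n", ide);   /* ~22 ms */
        return 0;
    }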

   That's also ignoring the read/write request setup stuff, which, for 
IDE, is *very* WinModem-like...UDMA or otherwise.
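
   For the curious, here's roughly what that per-command setup looks 
like on a classic primary-channel taskfile (a sketch of a 28-bit LBA 
READ SECTORS; outb()/inb() here are stand-ins for your platform's 
port I/O):

    #include <stdint.h>

    /* Hypothetical port-I/O primitives; on x86 these map to the
       usual inb/outb instructions. */
    extern void outb(uint16_t port, uint8_t val);
    extern uint8_t inb(uint16_t port);

    #define IDE_BASE      0x1F0          /* classic primary channel */
    #define IDE_SECCOUNT  (IDE_BASE + 2)
    #define IDE_LBA_LOW   (IDE_BASE + 3)
    #define IDE_LBA_MID   (IDE_BASE + 4)
    #define IDE_LBA_HIGH  (IDE_BASE + 5)
    #define IDE_DRIVEHEAD (IDE_BASE + 6)
    #define IDE_STATUS    (IDE_BASE + 7) /* read side */
    #define IDE_COMMAND   (IDE_BASE + 7) /* write side */
    #define ST_BSY        0x80

    /* The register dance behind ONE read command.  The host CPU
       babysits every write here, and in PIO mode it then polls for
       DRQ and moves every data word itself, too. */
    void ide_read_setup(uint8_t drive, uint32_t lba, uint8_t count)
    {
        while (inb(IDE_STATUS) & ST_BSY)     /* spin until not busy */
            ;
        outb(IDE_DRIVEHEAD, 0xE0 | (drive << 4) | ((lba >> 24) & 0x0F));
        outb(IDE_SECCOUNT, count);
        outb(IDE_LBA_LOW,  (uint8_t)lba);
        outb(IDE_LBA_MID,  (uint8_t)(lba >> 8));
        outb(IDE_LBA_HIGH, (uint8_t)(lba >> 16));
        outb(IDE_COMMAND,  0x20);            /* READ SECTORS, PIO */
        /* ...then wait for DRQ and pull 256 words per sector. */
    }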

   While it's true that command queueing was recently added to the IDE 
spec, very few drives actually implement it, because it's expensive to 
develop...which is at odds with the #1 and #2 mantras in IDE technology 
development: "cheap", and "increase capacity at any price other than 
dollars".

> I ran tests in "lab" conditions, and real-world experience also shows
> it to be the case.
>
> Basically, on any x86 machine produced since 1998-1999 you will see no
> more than 1% difference in CPU utilization between SCSI and IDE.  This
> is --provided-- that the OS has any kind of halfway decent IDE driver.

   Sure, I'll buy that...at least for the lab conditions part.  With one 
machine doing one thing with one drive, sure.  But how many access 
patterns are like that?

> Of course, IDE implementations on other architectures might suck, but
> then again a poorly-written SCSI driver shouldn't be used to damn SCSI
> either.

   No argument there...but that's not what's going on here.  I think we 
can avoid wasting time and bandwidth on that subtopic.

> A quick way to check is to correlate the number of interrupts/s with
> the number of disk transactions/s (under OpenBSD, run "systat
> vmstat").  If the "good" IDE mode is being used, there will be one
> interrupt for each disk transaction - just as with SCSI.

   No, that's not a good way to check at all...unless you're out to 
prove the performance is the same, because interrupt load isn't a 
measure of performance.  A better way to measure relative performance 
would be to do something like a NetBSD "make world" on two different 
source trees at the same time, on two different drives on the same 
bus...once with a pair of SCSI drives and once with a pair of IDE 
drives.
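
   Something like this toy harness gets at what I mean...time two 
concurrent streams of I/O and look at elapsed time, not the interrupt 
count.  (The file paths are hypothetical; point them at big files on 
the two drives under test.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Read one file straight through in 64KB chunks, then exit. */
    static void stream_read(const char *path)
    {
        char buf[65536];
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror(path); _exit(1); }
        while (read(fd, buf, sizeof buf) > 0)
            ;
        close(fd);
        _exit(0);
    }

    int main(void)
    {
        const char *files[2] = { "/disk0/bigfile", "/disk1/bigfile" };
        struct timeval t0, t1;
        int i;

        gettimeofday(&t0, NULL);
        for (i = 0; i < 2; i++)          /* one child per drive */
            if (fork() == 0)
                stream_read(files[i]);
        while (wait(NULL) > 0)           /* reap both children */
            ;
        gettimeofday(&t1, NULL);
        printf("elapsed: %.2f s\n",
               (t1.tv_sec - t0.tv_sec) +
               (t1.tv_usec - t0.tv_usec) / 1e6);
        return 0;
    }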

   So yes, thanks to IDE evolving in the past few years to something 
that SCSI had built-in twenty years ago, they're both at one interrupt 
per transaction.  But that's not the whole story.

   You keep drawing attention to the aspects of SCSI and IDE that are 
common to the two interconnects.  It's the differences that are 
important here.

> That is really the only point I am trying to make when I make IDE vs
> SCSI comments.  In the bad old days of IDE, each 512-byte block
> transfer resulted in an interrupt to the CPU - which definitely turned
> the CPU into an expensive drive controller.

   Yes...and while IDE (like everything else) has evolved, it hasn't 
evolved very quickly, and it started from a really bad, very limiting 
base.  It will never escape, for example, the silly master/slave 
thing...nor the addressing limits (manufacturers keep developing 
really ugly kludges to get past each one: the 528MB CHS barrier, then 
the 8.4GB BIOS-translation barrier, then the 137GB 28-bit LBA barrier) 
nor the 16-bit-wide interface (which SCSI isn't locked into), nor even 
the rather x86-specific way register-level reads and writes work at 
the hardware level (/RD, /WR, ALE, /CS0, /CS1 signals on the bus).
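
   The arithmetic behind those barriers is simple enough to fit in a 
few lines...each one falls straight out of a field width somewhere:

    #include <stdio.h>

    int main(void)
    {
        /* Original ATA CHS: 1024 cyl x 16 heads x 63 sectors/track */
        unsigned long long chs   = 1024ULL * 16 * 63 * 512;
        /* BIOS CHS translation tops out at 255 heads */
        unsigned long long int13 = 1024ULL * 255 * 63 * 512;
        /* 28-bit LBA: 2^28 sectors of 512 bytes each */
        unsigned long long lba28 = (1ULL << 28) * 512;

        printf("CHS barrier:        %llu bytes (~528 MB)\n", chs);
        printf("BIOS/INT13 barrier: %llu bytes (~8.4 GB)\n", int13);
        printf("28-bit LBA barrier: %llu bytes (~137 GB)\n", lba28);
        return 0;
    }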

   I recently designed and built a very simple IDE interface with a 
microcontroller (pics on my web server under "projects" if anyone 
cares).  Though my application is a low-speed one and uses only the 
PIO modes, it afforded me the opportunity to learn a lot more about 
the low-level aspects of IDE and how it works.  Sheesh, I thought it 
was bad *before*...now I kinda wish I didn't know just how bad it 
really is.
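
   For example, a single PIO register read from the microcontroller 
side boils down to something like this (the GPIO helpers are 
hypothetical stand-ins for whatever your part provides, and the PIO 
mode timing delays are elided):

    #include <stdint.h>

    extern void gpio_write(int pin, int level);  /* drive one pin */
    extern uint16_t gpio_read_bus(void);         /* sample DD0..DD15 */
    extern void gpio_set_addr(uint8_t da);       /* drive DA0..DA2 */

    enum { PIN_CS0, PIN_CS1, PIN_RD, PIN_WR };

    /* One register read: select the command block (/CS0 low), put
       the register address on DA0..DA2, strobe /RD, and sample the
       data bus while it's asserted. */
    uint16_t ide_pio_read(uint8_t reg)
    {
        uint16_t data;
        gpio_set_addr(reg & 0x07);
        gpio_write(PIN_CS0, 0);     /* active low */
        gpio_write(PIN_RD, 0);      /* drive presents data on /RD */
        data = gpio_read_bus();
        gpio_write(PIN_RD, 1);
        gpio_write(PIN_CS0, 1);
        return data;
    }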

   I'm sorry Patrick, I hate to argue with you and I feel bad about it, 
but I simply don't believe you're right.

         -Dave

--
Dave McGuire             "My belly these days is too substantial
St. Petersburg, FL           for most hosiery."       -Robert Novak

