[rescue] Q: OpenVMS NFS performance questions

James Lockwood james at foonly.com
Fri Dec 6 19:03:16 CST 2002


On Fri, 6 Dec 2002, Eric Dittman wrote:

> > So, what do people think is the best OS for NFS serving?  A few years ago, I
> > got the impression that studios tended to think that Solaris was the
> > best (mainly since apparently it didn't crash as much, or cause the
> > clients to crash as much).  I'm quite certain from my own experience
> > that Linux and NT aren't the best.  One of these days I want to try and
> > rigorously compare NetBSD and Solaris 8 against each other on an
> > Ultra1.  It is pretty high on my list of things to do once I graduate
> > since replacing an old P75 running linux as a file server is a high
> > priority.
>
> I recommended *BSD since it will run on the Alpha he has and
> won't cost anything.

You seem to be talking at cross purposes.  Francisco wanted to see why NFS
performance was poor on OpenVMS/Alpha, and Joshua wanted to determine the
ideal NFS serving platform for his network.  Two separate issues.

Francisco, OpenVMS NFS performance probably isn't the best, but it should
be fairly close to most Unix implementations.  With speeds like you've
mentioned, I would suspect a fundamental networking problem such as severe
packet loss or a duplex mismatch.  Can you transfer a large file via FTP
in a reasonable amount of time?
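
A minimal sketch of the kind of check I mean (the host and file names are
placeholders, and it assumes a reasonably current Python on the client):
time a large FTP retrieval and compare the effective throughput to what
NFS is giving you.

    # Rough FTP throughput check (a sketch).  "ftp.example.com" and
    # "pub/bigfile.bin" are placeholders -- substitute a real host and
    # a file of at least a few tens of megabytes.
    import time
    from ftplib import FTP

    HOST = "ftp.example.com"      # placeholder FTP server
    PATH = "pub/bigfile.bin"      # placeholder large file

    received = 0

    def count(chunk):
        # Discard the data; we only care how fast it arrives.
        global received
        received += len(chunk)

    ftp = FTP(HOST)
    ftp.login()                   # anonymous login
    start = time.time()
    ftp.retrbinary("RETR " + PATH, count, blocksize=65536)
    elapsed = time.time() - start
    ftp.quit()

    mbps = received * 8 / elapsed / 1e6
    print("%d bytes in %.1f s (%.1f Mbit/s)" % (received, elapsed, mbps))

If the FTP transfer crawls too, the problem is in the network path
(duplex mismatch, packet loss) rather than in the NFS server itself.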

Joshua, I would put Solaris at the top of the heap for large scale NFS
serving, where you can afford multiple CPUs and disk arrays with lots of
cache.  In smaller setups it's still competitive but the gap narrows
substantially.

My recent experience with OpenBSD and NetBSD shows them lagging behind
Solaris in performance, though they all seem to be substantially ahead of
Linux in protocol conformance.  Client architecture plays a role as well:
you will see significantly better performance with a Solaris client and a
Solaris server than with mismatched clients, due to proper NFSv3 support
on both ends.  TCP vs. UDP is a separate discussion; which one is
appropriate depends almost entirely on your network architecture and load.
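
If you want to confirm what a given client actually negotiated, one quick
way (a sketch, assuming a client whose "nfsstat -m" prints the mount
flags, as Solaris and Linux clients do) is to pull the vers= and proto=
fields out of that output:

    # Sketch: report NFS version and transport for each mount, parsed
    # from the flag strings printed by "nfsstat -m".  Solaris/Linux
    # style output is assumed; adjust the parsing for other clients.
    import re
    import subprocess

    out = subprocess.run(["nfsstat", "-m"], capture_output=True,
                         text=True).stdout
    for line in out.splitlines():
        vers = re.search(r"vers=(\d+)", line)
        proto = re.search(r"proto=(\w+)", line)
        if vers and proto:
            print("NFSv%s over %s" % (vers.group(1), proto.group(1)))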

For what it's worth, I see 90%+ of 100 Mbit/s wire speed on large transfers
between a Solaris client and a Solaris server, and I/O transactions per
second track disk seek time to within 25% for synchronous operations served
off a single uncached disk.  That's good enough for me.
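
To put rough numbers on that (a back-of-the-envelope sketch; the 9 ms
average seek time is an assumed figure for a typical disk, not something
measured here):

    # Back-of-the-envelope check of the figures above.
    wire_mbps = 100 * 0.90              # ~90% of 100 Mbit/s wire speed
    print("bulk transfer: %.1f Mbit/s = %.1f MB/s"
          % (wire_mbps, wire_mbps / 8))

    seek_ms = 9.0                       # assumed average seek time
    max_tps = 1000.0 / seek_ms          # seek-bound upper limit
    print("seek-bound ceiling: ~%.0f synchronous ops/s" % max_tps)
    print("within 25%% of that: >= %.0f ops/s" % (max_tps * 0.75))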

I can't speak for the other BSDs, though I have heard that there have
been improvements.  If you find anything to add to this, I'm sure we would
all appreciate hearing it.

-James


