[rescue] SS10 Sbus slot count
Big Endian
bigendian at mac.com
Thu May 16 16:23:20 CDT 2002
>Network attached storage. If I use 1 FastE or FDDI interface, then all
>machines have to share 12.5 MB/s of bandwidth. I want to do a FDDI direct
>connect between the file server and my main workstation, then I want to
>do a connection either directly to another workstation (of the FastE
>variety), or just plug it into my 100 Mbit switch. And then the second
>FDDI interface would tap into the FDDI interface on my LANplex, meaning
>that each attached 10 Mbit device would be able to talk to the file server
>as if it were the only machine connected. Well, eventually the second FDDI
>interface would connect to the FDDI concentrator, but I don't have that
>working yet. The lack of Linux or NetBSD support for FDDI on non-PCI or
>EISA machines is frustrating, though.
I understand that it's for NAS, but the point is that you can do more
with something else if you need that much expansion capacity. An SS1k
comes to mind, as does an AS2100. Network topology ideas next.
>What is the point of fast disk systems on a file server if nothing can
>talk to it anywhere close to full speed?
>
OK, from this standpoint you need to design your network with
multiple-path routed connections. You should also build in multiple
points of usage. So you have two FDDI rings and the FastE. You plug
the FastE into the switch, then put up two routers (use the SPARC
10/20s here) that pass packets between the FastE and the dual FDDI
rings. At any of those routing points you have access to the full
bandwidth of the server; with three 100 Mbit legs into the box (the
FastE plus the two FDDI rings), that's roughly 300 Mbit, or about
37.5 MB/s, of aggregate path. Attach your workstations to two
segments each and run dynamic routing (OSPF) so that as a link
saturates, traffic begins to fill the next one. You get more
bandwidth out of the mesh, but at the cost of more shit to run and
more complexity. I'm attempting to do something like this now, but
with a single FDDI ring and a single FastE segment. A rough sketch
of the OSPF config is below.
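
For the OSPF piece, something like GNU Zebra's ospfd will do it on the
Linux/NetBSD routers (on Solaris you'd want gated or similar). This is
a minimal sketch, not a tested config; the hostname, interface names
(hme0 for the FastE side, fddi0 for the ring), and subnets are all made
up for illustration:

  ! ospfd.conf, GNU Zebra OSPF daemon (names and subnets are made up)
  hostname router1
  !
  ! Advertise both attached segments into the backbone area so every
  ! router learns every path through the mesh.
  router ospf
   network 10.1.1.0/24 area 0.0.0.0
   network 10.2.2.0/24 area 0.0.0.0
  !
  ! Equal costs on the two interfaces give equal-cost paths, which is
  ! what lets traffic spread across the mesh instead of pinning to
  ! one link.
  interface hme0
   ip ospf cost 10
  interface fddi0
   ip ospf cost 10

One caveat: stock OSPF doesn't actually watch link load; it just splits
traffic across equal-cost paths. To get the "fill the next link as one
saturates" behavior you'd have to re-weight the costs by hand, but plain
equal-cost multipath over the mesh gets you most of the benefit anyway.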
><snip PCI Ultrasparc example>
>
>I don't think I'll be getting a suitable UltraSPARC anytime soon, but
>we'll see. An Alpha seems like a more budget-friendly candidate. On the
>upside, if I'm using a PCI machine, I can fall back to NetBSD instead
>of Solaris.
For some FDDI boards, yes. My FDDI board is still the NPI card, so it
still needs Solaris. Honestly, though, Solaris isn't that bad. I
used to hate it, but then I ran Linux systems in production. Thank
GOD for Solaris now.
daniel
--
-----------------------------------------------------------------
"Fragile. Do not drop." -- Posted on a Boeing 757.