The Question is:
1) Is Gigabit Ethernet usable under OpenVMS V6.2-1H3?
2) We currently run a cluster of 4100 systems over FDDI, but are under
pressure to go with 100 Mb (Fast) Ethernet. I read some time ago that FDDI
had better performance than 100 Mb Ethernet.
One of the cluster nodes is a satellite and puts real-time data over FDDI. I
have had zero problems with this. I'll go to 100 Mb Ethernet only if I'm
sure I'll have the same or better performance.
The Answer is:
DEGPA-based full-duplex 1000Base-SX Gigabit Ethernet (IEEE 802.3z,
IEEE 802.3x, IEEE 802.2) is not available under OpenVMS V6.2-1H3; it
requires OpenVMS Alpha V7.1 (with the ECO kit ALPDEGPA03_071, or later)
or, preferably, an upgrade to OpenVMS V7.1-2 or V7.2-1 or a later
release, also with current ECOs applied.
DEGPA bandwidth will be better than that of FDDI, but you will want
to evaluate any changes in latency and responsiveness; with real-time
data, bandwidth is obviously only part of the picture. Gigabit
Ethernet may or may not provide better latency than FDDI.
With an OpenVMS Cluster operating over a LAN, SCS connections will
favor channels with larger buffer sizes. If the buffer sizes are equal,
then SCS favors the channel with the lower latency. If you want latency
to be the deciding factor, set NISCS_MAX_PKTSZ to 1498 so that all
channels use the same (standard Ethernet) packet size. Also, with
current OpenVMS releases, you can use LAVC$STOP_BUS to disable SCS
use of a particular LAN adapter.
FDDI has advantages over Fast Ethernet (100Base) in terms of support
for longer distances, dual-ring support, and guaranteed access to the
network. The FDDI protocol makes better use of the aggregate channel
bandwidth: its token-based access provides efficiency approaching
99%, whereas Fast Ethernet operates at roughly 65% to 87% of the channel
(depending on the packet size) as a result of CSMA/CD arbitration. A
dual-homed FDDI station also gets immediate access to a redundant channel
on a channel failure, whereas Fast Ethernet must run the spanning tree
protocol as part of the failover, and this can require upwards of 30
seconds to complete.
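As a rough back-of-the-envelope comparison, the utilization figures above translate into usable bandwidth as follows (a simple sketch in Python; the actual Fast Ethernet figure varies with packet size and load):

```python
# Effective throughput = raw media rate * protocol efficiency.
# Efficiency figures are those quoted above: ~99% for FDDI's
# token-based access, 65% to 87% for Fast Ethernet under CSMA/CD.

def effective_mbps(raw_mbps, efficiency):
    """Return usable bandwidth in megabits per second."""
    return raw_mbps * efficiency

fddi = effective_mbps(100, 0.99)      # roughly 99 Mb/s
fe_low = effective_mbps(100, 0.65)    # roughly 65 Mb/s (small packets)
fe_high = effective_mbps(100, 0.87)   # roughly 87 Mb/s (large packets)

print(f"FDDI:          {fddi:.0f} Mb/s usable")
print(f"Fast Ethernet: {fe_low:.0f}-{fe_high:.0f} Mb/s usable")
```

So even though both media are nominally 100 Mb/s, FDDI delivers more of that rate under load, which is part of why the switch is not an automatic win.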