HP OpenVMS Systems Documentation


Guidelines for OpenVMS Cluster Configurations


4.11.1 Multiple LAN Adapters

Multiple LAN adapters are supported. The adapters can be for different LAN types or for different adapter models for the same LAN type.

Multiple LAN adapters can be used to provide the following:

  • Increased node-to-node throughput by distributing the load across multiple LAN paths.
  • Increased availability of node-to-node LAN communications.

Multiple LAN Path Load Distribution

When multiple node-to-node LAN paths are available, the OpenVMS Cluster software chooses the set of paths to use based on the following criteria, which are evaluated in strict precedence order:

  1. Recent history of packet loss on the path
    Paths that have recently been losing packets at a high rate are termed lossy and are excluded from consideration. Paths with an acceptable loss history are termed tight and are considered further.
  2. Priority
    Management priority values can be assigned to both individual LAN paths and to local LAN devices. A LAN path's priority value is the sum of these priorities. Only tight LAN paths with a priority value equal to, or one less than, the highest priority value of any tight path will be further considered for use.
  3. Maximum packet size
    Tight, equal-priority paths whose maximum packet size matches the largest maximum packet size of any such path will be further considered for use.

  4. Equivalent latency
    LAN paths that meet the preceding criteria are used if their latencies (computed network delay) are closely matched to that of the fastest such path. The delay of each LAN path is measured using cluster communications traffic on that path. If a LAN path is excluded from cluster communications because it does not meet the preceding criteria, its delay and packet loss rate are remeasured at intervals of a few seconds to determine whether they have improved enough for the path to qualify.
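The four criteria can be viewed as a filtering pipeline followed by round-robin distribution across the surviving paths. The sketch below is an illustrative model only, not the actual OpenVMS (PEDRIVER) implementation; the field names, loss threshold, and latency tolerance are assumptions:

```python
# Hypothetical sketch of the LAN path selection rules described above.
# Thresholds and field names are illustrative, not the real implementation.
from dataclasses import dataclass
from itertools import cycle

@dataclass
class LanPath:
    name: str
    loss_rate: float      # recent packet-loss rate on the path
    priority: int         # path priority plus local device priority
    max_packet: int       # maximum packet size in bytes
    latency_us: float     # measured network delay

def select_paths(paths, loss_threshold=0.05, latency_slack=1.2):
    # 1. Exclude "lossy" paths; the remainder are "tight".
    tight = [p for p in paths if p.loss_rate <= loss_threshold]
    if not tight:
        return []
    # 2. Keep paths within 1 of the highest priority of any tight path.
    top = max(p.priority for p in tight)
    tight = [p for p in tight if p.priority >= top - 1]
    # 3. Keep paths with the largest maximum packet size.
    biggest = max(p.max_packet for p in tight)
    tight = [p for p in tight if p.max_packet == biggest]
    # 4. Keep paths whose latency is close to that of the fastest path.
    fastest = min(p.latency_us for p in tight)
    return [p for p in tight if p.latency_us <= fastest * latency_slack]

def round_robin(paths):
    # Transmissions are then distributed round-robin across selected paths.
    return cycle(paths)
```

For example, a path with a 20% loss rate is dropped at step 1, while two tight paths whose priorities differ by one and whose latencies are within the tolerance are both retained and used alternately.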

Packet transmissions are distributed in round-robin fashion across all communication paths between local and remote adapters that meet the preceding criteria.

Increased LAN Path Availability

Because LANs are ideal for spanning great distances, you may want to supplement an intersite link's throughput with high availability. You can do this by configuring critical nodes with multiple LAN adapters, each connected to a different intersite LAN link.

A common cause of intersite link failure is mechanical destruction of the intersite link. You can guard against this with path diversity, that is, by physically separating the routes of the multiple intersite links, so that a single disaster is unlikely to affect more than one of them.

4.11.2 Configuration Guidelines for LAN-Based Clusters

The following guidelines apply to all LAN-based OpenVMS Cluster systems:

  • Alpha and VAX systems can be configured with any mix of LAN adapters.
  • All LAN paths used for OpenVMS Cluster communication must operate with a minimum of 10 Mb/s throughput and low latency. You must use translating bridges or switches when connecting nodes on one type of LAN to nodes on another LAN type. LAN segments can be bridged to form an extended LAN.
  • Multiple, distinct OpenVMS Cluster systems can be configured onto a single, extended LAN. OpenVMS Cluster software performs cluster membership validation to ensure that systems join the correct OpenVMS Cluster system.

4.11.3 Ethernet (10/100) and Gigabit Ethernet Advantages

The Ethernet (10/100) interconnect is typically the lowest cost of all OpenVMS Cluster interconnects.

Gigabit Ethernet interconnects offer the following advantages in addition to the advantages listed in Section 4.11:

  • Very high throughput (1 Gb/s)
  • Support of jumbo frames (7552 bytes per frame) for cluster communications

4.11.4 Ethernet (10/100) and Gigabit Ethernet Throughput

The Ethernet technology offers a range of baseband transmission speeds:

  • 10 Mb/s for standard Ethernet
  • 100 Mb/s for Fast Ethernet
  • 1 Gb/s for Gigabit Ethernet

Ethernet adapters do not provide hardware assistance for cluster protocol processing, so processor overhead is higher than with CI or DSSI.

Consider the capacity of the total network design when you configure an OpenVMS Cluster system with many Ethernet-connected nodes or when the Ethernet also supports a large number of PCs or printers. General network traffic on an Ethernet can reduce the throughput available for OpenVMS Cluster communication. Fast Ethernet and Gigabit Ethernet can significantly improve throughput. Multiple Ethernet adapters can be used to improve cluster performance by offloading general network traffic.

Reference: For LAN configuration guidelines, see Section 4.11.2.

4.11.5 Ethernet Adapters and Buses

The following Ethernet adapters and their internal buses are supported in an OpenVMS Cluster configuration:

  • DEFTA-xx (TURBOchannel)
  • DE2xx (ISA)
  • DE425 (EISA)
  • DE435 (PCI)
  • DE450 (PCI)
  • DE500-xx (PCI)
  • DE600-xx (PCI)
  • DE602-xx (PCI)
  • DEGPA-xx (PCI)
  • TGEC (embedded)
  • COREIO (TURBOchannel)
  • PMAD (TURBOchannel)
  • DE422 (EISA)
  • SGEC (embedded)
  • DESVA (embedded)
  • DESQA (Q-bus)
  • DELQA (Q-bus)

Reference: For complete information about each adapter's features and order numbers, access the Compaq website at:


Under Products, select Servers, then AlphaServers, then the Alpha system of interest. You can then obtain detailed information about all options supported on that system.

4.11.6 Ethernet-to-FDDI Bridges and Switches

You can use transparent Ethernet-to-FDDI translating bridges to provide an interconnect between a 10-Mb/s Ethernet segment and a 100-Mb/s FDDI ring. These Ethernet-to-FDDI bridges are also called 10/100 bridges. They perform high-speed translation of network data packets between the FDDI and Ethernet frame formats.

Reference: See Figure 10-21 for an example of these bridges.

You can use switches to isolate traffic and to aggregate bandwidth, which can result in greater throughput.

4.11.7 Configuration Guidelines for Gigabit Ethernet Clusters

Use the following guidelines when configuring systems in a Gigabit Ethernet cluster:

  • Two-node Gigabit Ethernet clusters do not require a switch. They can be connected point to point.
  • Most Gigabit Ethernet switches can be configured with Gigabit Ethernet or a combination of Gigabit Ethernet and Fast Ethernet (100 Mb/s).
  • Each node can have a single connection to the switch or can be configured with multiple paths, thereby increasing availability. The AlphaServer models GS140, 8400, 8200, and 4x00 support up to four adapters each. The AlphaServer models 1200 and 800 support up to two adapters each.
  • Support for jumbo frames (7552 bytes each) is available starting with OpenVMS Version 7.3. (Prior to the introduction of jumbo-frame support, the only frame size supported for cluster communications was the standard 1518-byte maximum Ethernet frame size.)
  • The DEGPA cannot be used as the boot device, but satellites can be booted over standard 10/100 Ethernet network adapters configured on a Gigabit switch.

4.11.8 ATM Advantages

ATM offers the following advantages, in addition to those listed in Section 4.11:

  • High-speed transmission, up to 622 Mb/s
  • OpenVMS support for LAN Emulation over ATM allows the following maximum frame sizes: 1516, 4544, and 9234 bytes.
  • LAN emulation over ATM provides the ability to create multiple emulated LANs over one physical ATM adapter. Each emulated LAN appears as a separate network. For more information, see the OpenVMS I/O User's Reference Manual.
  • An ATM switch that provides Quality of Service on a per-emulated-LAN basis can be used to favor cluster traffic over other protocols running on different emulated LANs. For more information, see the documentation for your ATM switch.

4.11.9 ATM Throughput

The ATM interconnect transmits up to 622 Mb/s. The adapter that supports this throughput is the DAPCA.

4.11.10 ATM Adapters

ATM adapters supported in an OpenVMS Cluster system and the internal buses on which they are supported are shown in the following list:

  • DAPCA (PCI)

4.12 Fiber Distributed Data Interface (FDDI)

FDDI is an ANSI standard LAN interconnect that uses fiber-optic or copper cable. FDDI augments the 10 Mb/s Ethernet by providing a high-speed interconnect for multiple Ethernet segments in a single OpenVMS Cluster system.

4.12.1 FDDI Advantages

FDDI offers the following advantages in addition to the LAN advantages listed in Section 4.11:

  • Combines high throughput and long distances between nodes
  • Supports a variety of topologies

4.12.2 FDDI Node Types

The FDDI standards define the following two types of nodes:

  • Stations --- The ANSI standard single-attachment station (SAS) and dual-attachment station (DAS) connect to the FDDI ring. It is advisable to attach stations to wiring concentrators and to attach the wiring concentrators to the dual FDDI ring, which makes the ring more stable.
  • Wiring concentrator --- The wiring concentrator (CON) provides a connection for multiple SASs or CONs to the FDDI ring. A DECconcentrator 500 is an example of this device.

4.12.3 FDDI Distance

FDDI limits the total fiber path to 200 km (125 miles). The maximum distance between adjacent FDDI devices is 40 km with single-mode fiber and 2 km with multimode fiber. In order to control communication delay, however, it is advisable to limit the maximum distance between any two OpenVMS Cluster nodes on an FDDI ring to 40 km.

4.12.4 FDDI Throughput

The maximum throughput of the FDDI interconnect (100 Mb/s) is 10 times higher than that of Ethernet.

In addition, FDDI supports transfers using large packets (up to 4468 bytes). Only FDDI nodes connected exclusively by FDDI can make use of large packets.

Because FDDI adapters do not provide processing assistance for OpenVMS Cluster protocols, more processing power is required than for CI or DSSI.

4.12.5 FDDI Adapters and Bus Types

Following is a list of supported FDDI adapters and their internal buses:

  • DEFPZ (integral)
  • DEFAA (Futurebus+)
  • DEFTA (TURBOchannel)
  • DEFQA (Q-bus)

Reference: For complete information about each adapter's features and order numbers, access the Compaq website at:


Under Products, select Servers, then AlphaServers, then the Alpha system of interest. You can then obtain detailed information about all options supported on that system.

4.12.6 Storage Servers for FDDI-Based Clusters

FDDI-based configurations use FDDI for node-to-node communication. The HS1xx and HS2xx families of storage servers provide FDDI-based storage access to OpenVMS Cluster nodes.

Chapter 5
Choosing OpenVMS Cluster Storage Subsystems

This chapter describes how to design a storage subsystem. The design process involves the following steps:

  1. Understanding storage product choices
  2. Estimating storage capacity requirements
  3. Choosing disk performance optimizers
  4. Determining disk availability requirements
  5. Understanding advantages and tradeoffs for:
    • CI-based storage
    • DSSI-based storage
    • SCSI-based storage
    • Fibre Channel-based storage
    • Host-based storage
    • LAN InfoServer

The rest of this chapter contains sections that explain these steps in detail.

5.1 Understanding Storage Product Choices

In an OpenVMS Cluster, storage choices include the StorageWorks family of products, a modular storage expansion system based on the Small Computer Systems Interface (SCSI-2) standard. StorageWorks helps you configure complex storage subsystems by letting you choose from the following modular elements:

  • Storage devices such as disks, tapes, CD-ROMs, and solid-state disks
  • Array controllers
  • Power supplies
  • Packaging
  • Interconnects
  • Software

5.1.1 Criteria for Choosing Devices

Consider the following criteria when choosing storage devices:

  • Supported interconnects
  • Capacity
  • I/O rate
  • Floor space
  • Purchase, service, and maintenance cost

5.1.2 How Interconnects Affect Storage Choices

One of the benefits of OpenVMS Cluster systems is that you can connect storage devices directly to OpenVMS Cluster interconnects to give member systems access to storage.

In an OpenVMS Cluster system, the following storage devices and adapters can be connected to OpenVMS Cluster interconnects:

  • HSJ and HSC controllers (on the CI)
  • HSD controllers and ISEs (on the DSSI)
  • HSZ and RZ series (on the SCSI)
  • HSG controllers (on the Fibre Channel)
  • Local system adapters

Table 5-1 lists the kinds of storage devices that you can attach to specific interconnects.

Table 5-1 Interconnects and Corresponding Storage Devices
Storage Interconnect --- Storage Devices
CI --- HSJ and HSC controllers and SCSI storage
DSSI --- HSD controllers, ISEs, and SCSI storage
SCSI --- HSZ controllers and SCSI storage
Fibre Channel --- HSG controllers and SCSI storage
FDDI --- HSxxx controllers and SCSI storage

5.1.3 How Floor Space Affects Storage Choices

If the cost of floor space is high and you want to minimize the floor space used for storage devices, consider these options:
  • Choose disk storage arrays for high capacity with small footprint. Several storage devices come in stackable cabinets for labs with higher ceilings.
  • Choose high-capacity disks over high-performance disks.
  • Make it a practice to upgrade regularly to newer storage arrays or disks. As storage technology improves, storage devices are available at higher performance and capacity and reduced physical size. For example, replacing an HSC95 and SA800 with an HSJ40 and SW800 increases capacity and reduces floor-space consumption.
  • Plan adequate floor space for power and cooling equipment.

5.2 Determining Storage Capacity Requirements

Storage capacity is the amount of space needed on storage devices to hold system, application, and user files. Knowing your storage capacity can help you to determine the amount of storage needed for your OpenVMS Cluster configuration.

5.2.1 Estimating Disk Capacity Requirements

To estimate your online storage capacity requirements, add together the storage requirements for your OpenVMS Cluster system's software, as explained in Table 5-2.

Table 5-2 Estimating Disk Capacity Requirements
Software Component Description
OpenVMS operating system Estimate the number of blocks 1 required by the OpenVMS operating system.

Reference: Your OpenVMS installation documentation and Software Product Description (SPD) contain this information.

Page, swap, and dump files Use AUTOGEN to determine the amount of disk space required for page, swap, and dump files.

Reference: The OpenVMS System Manager's Manual provides information about calculating and modifying these file sizes.

Site-specific utilities and data Estimate the disk storage requirements for site-specific utilities, command procedures, online documents, and associated files.
Application programs Estimate the space required for each application to be installed on your OpenVMS Cluster system, using information from the application suppliers.

Reference: Consult the appropriate Software Product Description (SPD) to estimate the space required for normal operation of any layered product you need to use.

User-written programs Estimate the space required for user-written programs and their associated databases.
Databases Estimate the size of each database. This information should be available in the documentation pertaining to the application-specific database.
User data Estimate user disk-space requirements according to these guidelines:
  • Allocate from 10,000 to 100,000 blocks for each occasional user.

    An occasional user reads, writes, and deletes electronic mail; has few, if any, programs; and has little need to keep files for long periods.

  • Allocate from 250,000 to 1,000,000 blocks for each moderate user.

    A moderate user uses the system extensively for electronic communications, keeps information on line, and has a few programs for private use.

  • Allocate 1,000,000 to 3,000,000 blocks for each extensive user.

    An extensive user can require a significant amount of storage space for programs under development and data files, in addition to normal system use for electronic mail. This user may require several hundred thousand blocks of storage, depending on the number of projects and programs being developed and maintained.

Total requirements The sum of the preceding estimates is the approximate amount of disk storage presently needed for your OpenVMS Cluster system configuration.

1 Storage capacity is measured in blocks. Each block contains 512 bytes.
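As a worked example of the estimation method in Table 5-2, the following sketch totals a set of purely hypothetical component estimates and converts 512-byte blocks to gigabytes. The individual figures are illustrative only, not recommendations:

```python
# Illustrative capacity estimate following Table 5-2.
# All component figures below are hypothetical examples.
BLOCK_BYTES = 512  # storage capacity is measured in 512-byte blocks

estimate_blocks = {
    "OpenVMS operating system":      700_000,
    "Page, swap, and dump files":    400_000,
    "Site-specific utilities":       100_000,
    "Application programs":          500_000,
    "User data (10 moderate users)": 10 * 500_000,
}

total_blocks = sum(estimate_blocks.values())
total_gb = total_blocks * BLOCK_BYTES / 1024**3
print(f"{total_blocks:,} blocks ~ {total_gb:.1f} GB")
```

Here ten moderate users at 500,000 blocks each dominate the total, which illustrates why user data estimates usually drive the overall disk capacity requirement.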

5.2.2 Additional Disk Capacity Requirements

Before you finish determining your total disk capacity requirements, you may also want to consider future growth for online storage and for backup storage.

For example, at what rate are new files created in your OpenVMS Cluster system? By estimating this number and adding it to the total disk storage requirements that you calculated using Table 5-2, you can obtain a total that more accurately represents your current and future needs for online storage.

To determine backup storage requirements, consider how you deal with obsolete or archival data. In most storage subsystems, old files become unused while new files come into active use. Moving old files from online to backup storage on a regular basis frees online storage for new files and keeps online storage requirements under control.

Planning for adequate backup storage capacity can make archiving procedures more effective and reduce the capacity requirements for online storage.

5.3 Choosing Disk Performance Optimizers

Estimating your anticipated disk performance work load and analyzing the work load data can help you determine your disk performance requirements.

You can use the Monitor utility and DECamds to help you determine which performance optimizer best meets your application and business needs.

5.3.1 Performance Optimizers

Performance optimizers are software or hardware products that improve storage performance for applications and data. Table 5-3 explains how various performance optimizers work.

Table 5-3 Disk Performance Optimizers
Optimizer Description
DECram for OpenVMS A disk device driver that enables system managers to create logical disks in memory to improve I/O performance. Data on an in-memory DECram disk can be accessed at a faster rate than data on hardware disks. DECram disks are capable of being shadowed with Volume Shadowing for OpenVMS and of being served with the MSCP server. 1
Solid-state disks In many systems, approximately 80% of the I/O requests can demand information from approximately 20% of the data stored on line. Solid-state devices can yield the rapid access needed for this subset of the data.
Disk striping Disk striping (RAID level 0) lets applications access an array of disk drives in parallel for higher throughput. Disk striping works by grouping several disks into a "stripe set" and by dividing the application data into "chunks," which are spread equally across the disks in the stripe set in a round-robin fashion.

By reducing access time, disk striping may improve performance, especially if the application:

  • Performs large data transfers in parallel.
  • Requires load balancing across drives.

Two independent types of disk striping are available:

  • Controller-based striping, in which HSJ and HSD controllers combine several disks into a single stripe set. This stripe set is presented to OpenVMS as a single volume. This type of disk striping is hardware based.
  • Host-based striping, which creates stripe sets on an OpenVMS host. The OpenVMS software breaks up an I/O request into several simultaneous requests that it sends to the disks of the stripe set. This type of disk striping is software based.

Note: You can use Volume Shadowing for OpenVMS software in combination with disk striping to make stripe set members redundant. You can shadow controller-based stripe sets, and you can stripe host-based shadow sets.

Extended file cache (XFC) OpenVMS Alpha Version 7.3 offers improved host-based caching with XFC, which can replace, and can coexist with, the virtual I/O cache (VIOC). XFC is a clusterwide, file-system data cache that offers several features not available with VIOC, including read-ahead caching and automatic resizing of the cache to improve performance.
Controllers with disk cache Some storage technologies use memory to form disk caches. Accesses that can be satisfied from the cache can be done almost immediately and without any seek time or rotational latency. For these accesses, the two largest components of the I/O response time are eliminated. The HSC, HSJ, HSD, and HSZ controllers contain caches. Every RF and RZ disk has a disk cache as part of its embedded controller.

1 The MSCP server makes locally connected disks, to which it has direct access, available to other systems in the OpenVMS Cluster.

Reference: See Section 10.8 for more information about how these performance optimizers increase an OpenVMS Cluster's ability to scale I/Os.
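The round-robin chunk placement that disk striping (RAID level 0) uses, as described in Table 5-3, can be sketched as a simple address mapping from a logical block number to a member disk and an offset on that member. The chunk size and member count here are illustrative assumptions, not OpenVMS defaults:

```python
# Minimal sketch of RAID-0 chunk placement as described for disk striping
# in Table 5-3. Chunk size and member count are illustrative only.
def chunk_location(lbn, chunk_blocks=128, members=4):
    """Map a logical block number to (member disk, block within member)."""
    chunk = lbn // chunk_blocks    # which chunk of the striped volume
    offset = lbn % chunk_blocks    # block offset inside that chunk
    member = chunk % members       # chunks rotate round-robin across members
    stripe = chunk // members      # stripe row on that member disk
    return member, stripe * chunk_blocks + offset
```

Because consecutive chunks land on different member disks, a large sequential transfer engages all members in parallel, which is the source of the throughput gain the table describes.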
