HP OpenVMS Systems

Ask the Wizard


Ask the Wizard Questions

FDDI Cluster in V5.5-2

The Question is:


I have a customer with two VAX 6000s (currently running
VMS 5.5-2) which he wants to place in separate buildings.

Is clustering over FDDI (via DEMFAs) supported with
VMS 5.5-2, or will the customer have to upgrade to VMS 6.1?

Thanks in advance.

The Answer is:

    From the V5.5-2 VAXcluster SPD 29.78.06:


     VAXcluster systems are configured by connecting multiple CPUs with a
     communication media, referred to as an interconnect. VAXcluster nodes
     communicate with each other using the most appropriate interconnect
     available. Whenever possible, in the event of interconnect failure,
     VAXcluster software will automatically use an alternate interconnect.
     VAXcluster Software supports any combination of the following
     interconnects:
     o  Computer Interconnect (CI)

     o  Ethernet (NI)

     o  Digital Storage System Interconnect (DSSI)[*]

     o  Fiber Distributed Data Interface (FDDI)

     Ethernet and FDDI are industry-standard general purpose communications
     interconnects that can be used to implement a Local Area Network (LAN).
     Except where noted, VAXcluster support for both of these LAN types is
     identical.
     Configuration Rules:

     The following configuration rules apply to VAXcluster systems:

     o  The maximum number of CPUs supported in a VAXcluster system is 96.

     o  Every VAXcluster node must have a direct communication path to every
        other VAXcluster node via any of the supported interconnects.


     o  VAX 11/7xx, 6000, 8xxx and 9000-series CPUs require a system disk
        that is accessed via a local controller or through a local CI or
        DSSI connection. VAXcluster satellite booting is not supported for
        these systems.


     o  CPUs that use an FDDI for VAXcluster communications can concurrently
        use it for other network protocols that conform to the applicable
        FDDI standards, such as ANSI X3.139-1987, ANSI X3.148-1988, and ANSI

     o  All LAN bridges must provide a low-latency data path, with
        approximately 10 megabits per second throughput for Ethernet and
        100 megabits per second throughput for FDDI. Translating bridges
        must be used when connecting VAXcluster nodes on an Ethernet to
        those on an FDDI.

     o  The maximum number of VAXcluster members that can be directly
        connected to the FDDI, via the DEC FDDIcontroller 400 (DEMFA),
        is 16.


     o  VAXcluster CPUs should be configured using interconnects that
        provide appropriate performance for the required system usage. In
        general, use the highest performance interconnect possible. CI,
        DSSI, and FDDI are the preferred interconnects between powerful
        VAX CPUs.
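The SPD's numeric limits and connectivity rule can be sketched as a small configuration check. This is a hypothetical illustration only (the data model and function names are invented, not any VMS tool); the constants come from the SPD text quoted above:

```python
# Illustrative check of the V5.5-2 VAXcluster SPD configuration rules.
# The node representation here is hypothetical; real validation was
# performed by the VAXcluster software itself.

MAX_CLUSTER_CPUS = 96        # SPD: maximum CPUs in a VAXcluster system
MAX_DEMFA_FDDI_MEMBERS = 16  # SPD: members directly on FDDI via DEMFA

def check_cluster(nodes):
    """nodes: list of dicts such as
    {"name": "VAX1", "interconnects": {"FDDI"}, "fddi_adapter": "DEMFA"}.
    Returns a list of rule-violation messages (empty if the config passes)."""
    problems = []

    if len(nodes) > MAX_CLUSTER_CPUS:
        problems.append(
            f"{len(nodes)} CPUs exceeds the limit of {MAX_CLUSTER_CPUS}")

    # Rule: at most 16 members directly on the FDDI via a DEMFA.
    demfa_members = [n for n in nodes
                     if "FDDI" in n["interconnects"]
                     and n.get("fddi_adapter") == "DEMFA"]
    if len(demfa_members) > MAX_DEMFA_FDDI_MEMBERS:
        problems.append(
            f"{len(demfa_members)} DEMFA-connected FDDI members exceeds "
            f"the limit of {MAX_DEMFA_FDDI_MEMBERS}")

    # Rule: every node needs a direct communication path to every other
    # node (simplified here to "shares at least one interconnect type").
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if not (a["interconnects"] & b["interconnects"]):
                problems.append(
                    f"no direct path between {a['name']} and {b['name']}")

    return problems

# The questioner's configuration: two VAX 6000s clustered over FDDI
# via DEMFAs, which passes all three checks.
two_6000s = [
    {"name": "VAX1", "interconnects": {"FDDI"}, "fddi_adapter": "DEMFA"},
    {"name": "VAX2", "interconnects": {"FDDI"}, "fddi_adapter": "DEMFA"},
]
```

Adding a third node that speaks only CI would trip the direct-path rule, since it shares no interconnect with the two FDDI-only members.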