HP OpenVMS Systems Documentation


OpenVMS Performance Management


12.3 Enlarge Hardware Capacity

If there seem to be few appropriate or productive ways to shift the demand away from the bottleneck point using available hardware, you may have to acquire additional hardware. Adding capacity can refer to either supplementing the hardware with another similar piece or replacing the item with one that is larger, faster, or both.

Try to avoid a few of the more common mistakes. It is easy to conclude that more disks of the same type will permit better load distribution, when the truth is that providing another controller for the disks you already have might bring much better results. Likewise, rather than acquiring more disks of the same type, the real solution might be to replace one or more of the existing disks with a disk that has a faster transfer rate. Another mistake to avoid is acquiring disks that immediately overburden the controller or bus you place them on.

To make the correct choice, you must know whether your problem is due to limitations in space and placement or to limitations in speed. If you need a speed improvement, be sure you know whether it is needed at the device or at the controller. You must invest the effort to understand the I/O subsystem and the distribution of the I/O work load across it before you can expect to make the correct choices and configure them optimally. You should know at all times just how close to capacity each part of your I/O subsystem is.

12.4 Improve RMS Caching

The Guide to OpenVMS File Applications is your primary reference for information on tuning RMS files and applications. RMS reduces the load on the I/O subsystems through buffering. Both the size of the buffers and the number of buffers are important in this reduction. In trying to determine reasonable values for buffer sizes and buffer counts, you should look for the optimal balance between minimal RMS I/O (using sufficiently large buffers) and minimal memory management I/O. Note that, if you define RMS buffers that are too large, you can more than fill the process's entire working set with these buffers, ultimately inducing more process paging.
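As a brief sketch of how buffer values are adjusted (the values shown are illustrative, not recommendations), the process and system RMS defaults can be examined and changed with DCL:

```dcl
$ SHOW RMS_DEFAULT
$ ! Raise the sequential-access block count and buffer count for this
$ ! process; the values below are illustrative only.
$ SET RMS_DEFAULT /BLOCK_COUNT=32 /BUFFER_COUNT=4 /SEQUENTIAL
$ ! The same qualifiers with /SYSTEM change the system-wide defaults.
```

Remember that larger values trade RMS I/O against working set consumption, as described above.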

12.5 Adjust File System Caches

The considerations for tuning disk file system caches are similar to those for tuning RMS buffers. Again, the issue is minimizing I/O. A disk file system maintains caches of various file system data structures such as file headers and directories. These caches are allocated from paged pool when the volume is mounted for ODS-2 volumes (default). (For an ODS-1 ACP, they are part of the ACP working set.) File system operations that only read data from the volume (as opposed to those that write) can be satisfied without performing a disk read, if the desired data items are in the file system caches. It is important to seek an appropriate balance point that matches the work load.

To evaluate file system caching activity:

  1. Enter the MONITOR FILE_SYSTEM_CACHE command.
  2. Examine the data items displayed. (For detailed descriptions of these items, refer to the OpenVMS System Management Utilities Reference Manual.)
  3. Invoke SYSGEN and modify, if necessary, appropriate ACP system parameters.

Data items in the FILE_SYSTEM_CACHE display correspond to ACP system parameters as follows:

Data Item Parameters
Dir FCB ACP_SYSACC, ACP_DINDXCACHE
Dir Data ACP_DIRCACHE
File Hdr ACP_HDRCACHE
File ID ACP_FIDCACHE
Extent ACP_EXTCACHE, ACP_EXTLIMIT
Quota ACP_QUOCACHE
Bitmap ACP_MAPCACHE

When you change the ACP cache parameters, remember to reboot the system to make the changes effective.
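A minimal SYSGEN session of this kind might look like the following sketch; the parameter and value shown are illustrative only:

```dcl
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT            ! start from the current parameter file
SYSGEN> SHOW ACP_HDRCACHE      ! examine one of the ACP cache parameters
SYSGEN> SET ACP_HDRCACHE 64    ! illustrative value only
SYSGEN> WRITE CURRENT          ! record the change for the next reboot
SYSGEN> EXIT
```

Because the change is written to the CURRENT parameter file, it takes effect only after the system is rebooted.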

12.6 Use Solid-State Disks

There are two types of solid-state disks:

  • Software such as the optional software product, Compaq DECram for OpenVMS, that emulates a disk using host main memory. Note that the contents of a RAM disk do not survive a reboot.
  • A peripheral storage device based on RAM that emulates a fast standard disk. Some of these devices have physical disk backup or battery backup, so that the data is not necessarily lost at reboot.

With solid-state storage, seek time and latency do not affect performance, and throughput is limited only by the bandwidth of the data path rather than the speed of the device. Solid-state disks are capable of providing higher I/O performance than magnetic disks, with device throughput of up to 1200 I/O requests per second and peak transfer rates of 2.5M bytes per second or higher.

The operating system can read from and write to a solid-state disk using standard disk I/O operations.

Two types of applications benefit from using solid-state disks:

  • Applications that frequently use system images
  • Modular applications that use temporary, transient files

Chapter 13
Compensating for CPU-Limited Behavior

This chapter describes corrective procedures for CPU resource limitations described in Chapter 5 and Chapter 9.

13.1 Improving CPU Responsiveness

Before taking action to correct CPU resource problems, do the following:

  • Complete your evaluation of all the system's resources.
  • Resolve any pending memory or disk I/O responsiveness problems.

It is always good practice to review the methods for improving CPU responsiveness to see if there are ways to recover CPU power:

  • Equitable CPU sharing
  • CPU load balancing
  • CPU offloading
  • Reduction of system resource consumption

13.1.1 Equitable CPU Sharing

If you have concluded that a large compute queue is affecting the responsiveness of your CPU, try to determine whether the resource is being shared on an equitable basis. Ask yourself the following questions:

  • Have you assigned different base priorities to different classes of users?
  • Is your system supporting one or more real-time processes?
  • Are some users complaining about poor service while others have no problems?

The operating system uses a round-robin scheduling technique for all nonreal-time processes at the same scheduling priority. However, there are 16 time-sharing priority levels, and as long as a higher level process is ready to use the CPU, none of the lower level processes will execute. A compute-bound process whose base priority is elevated above that of other processes can usurp the CPU. Conversely, the CPU will service processes with base priorities lower than the system default only when no other processes of default priority are ready for service.

Do not confuse inequitable sharing with the priority-boosting scheme of the operating system, which gives temporary priority boosts to processes encountering certain events, such as I/O completion. These boosts are temporary and they cannot cause inequities.

Detecting Inequitable CPU Sharing

You can detect inequitable sharing by using either of the following methods:

  • Examine the CPU Time column of the MONITOR PROCESSES display in a standard summary report (not included in the multifile summary report). A process with a CPU time accumulation much higher than that of other processes could be suspect.
  • Use the MONITOR playback feature to obtain a display of the top CPU users during each collection interval. (This is the preferred method.) To view the display, enter a command of the form:
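A hedged sketch of such a command follows; the input file specification is a placeholder for your recorded MONITOR data file:

```dcl
$ ! Replay a recorded data file, pausing one second per screen,
$ ! and display the top CPU consumers in each collection interval.
$ MONITOR /INPUT=SYS$MONITOR:MYNODE.DAT /VIEWING_TIME=1 PROCESSES /TOPCPU
```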


    You may want to select a specific time interval using the /BEGINNING and /ENDING qualifiers if you suspect a problem. Check whether the top process changes periodically.

CPU Allocation and Processing Requirements

It can sometimes be difficult to judge whether processes are receiving appropriate amounts of CPU allocation because the allocation depends on their processing requirements.

If... Then...
The MONITOR collection interval is too large to provide a sufficient level of detail Enter the command on the running system (live mode) during a representative period using the default three-second collection interval.
There is an inequity Try to obtain more information about the process and the image being run by entering the DCL command SHOW PROCESS/CONTINUOUS.

13.1.2 Reduction of System CPU Consumption

Depending on the amount of service required by your system, operating system functions can consume anywhere from almost no CPU cycles to a significant number. Any reductions you can make in services represent additional available CPU cycles. Processes in the COM state can use these, thereby lowering the average size of the compute queue and making the CPU more responsive.

The information in this section helps you identify the system components that are using the CPU. You can then decide whether it is reasonable to reduce the involvement of those components.

Processor Modes

The principal body of information about system CPU activity is contained in the MONITOR MODES class. Its statistics represent rates of clock ticks (10-millisecond units) per second, but they can also be viewed as percentages of time spent by the CPU in each of the various processor modes.

Note that interrupt time is really kernel mode time that cannot be charged to a particular process. Therefore, it is sometimes convenient to consider these two together.

The following table lists some of the activities that execute in each processor mode:

Mode Activity
Interrupt 1,2 Interrupts from peripheral devices such as disks, tapes, printers, and terminals. The majority of system scheduling code executes in interrupt state, because for most of the time spent executing that code, there is no current process.
MP Synchronization Time spent by a processor in a multiprocessor system waiting to acquire a spin lock.
Kernel 2 Most local system functions, including local lock requests, file system (XQP) requests, memory management, and most system services (including $QIO).
Executive RMS is the major consumer of executive mode time. Some optional products such as ACMS, DBMS, and Rdb also run in executive mode.
Supervisor The command language interpreters DCL and MCR.
User Most user-written code.
Idle Time during which all processes are in scheduling wait states and there are no interrupts to service.

1In an OpenVMS Cluster configuration, services performed on behalf of a remote node execute in interrupt state because there is no local process to which the time can be charged. These include functions involving system communication services (SCS), such as remote lock requests and MSCP requests.
2As a general rule, the combination of interrupt time and kernel mode time should be less than 40 percent of the total CPU time used.

Although MONITOR provides no breakdown of modes into component parts, you can make inferences about how the time is distributed within a mode by examining some of the other MONITOR classes in your summary report and through your knowledge of the work load.

Interrupt Time

In OpenVMS Cluster systems, interrupt time per node can be higher than in noncluster systems because of the remote services performed. However, if this time appears excessive, you should investigate the remote services and look for deviations from typical values. Enter the following commands:

  • MONITOR DLOCK---Observe the distributed lock manager activity. Activity labeled incoming and outgoing is executed in interrupt state.
  • MONITOR SCS/ITEM=ALL---Observe internode traffic over the computer interconnect (CI).
  • MONITOR MSCP_SERVER---Observe the MSCP server activity.
  • SHOW DEVICE /SERVED /ALL---Observe the MSCP server activity.

Even though OpenVMS Cluster systems can be expected to consume marginally more CPU resources than noncluster systems because of this remote activity, there is no measurable loss in CPU performance when a system becomes a member of an OpenVMS Cluster. OpenVMS Clusters achieve their sense of "clusterness" by making use of SCS, a very low overhead protocol. Furthermore, in a quiescent cluster with default system parameter settings, each system needs to communicate with every other system only once every five seconds.

Multiprocessing Synchronization Time

Multiprocessing (MP) synchronization time is a measure of the contention for spin locks in an MP system. A spin lock is a mechanism that guarantees the synchronization of processors in their manipulation of operating system databases. A certain amount of time in this mode is expected for MP systems. However, MP synchronization time above roughly 8% of total processing time usually indicates a moderate to high level of paging, I/O, or locking activity.

You should evaluate the usage of those resources by examining the IO, DLOCK, PAGE, and DISK statistics. You can also use the System Dump Analyzer (SDA) Spinlock Trace extension to gain insight as to which components of the operating system are contributing to high MP synchronization time. If heavy locking activity is seen on larger multiprocessor systems, using the Dedicated CPU Lock Manager might improve system throughput. See Section 13.2 for more information on this feature.

Kernel Mode Time

High kernel mode time (greater than 25%) can indicate several conditions warranting further investigation:

  • A memory limitation. In this case, the MONITOR IO class should indicate a high page fault rate, a high inswap rate, or both. Refer to Section 7.1 for information on the memory resource.
  • Excessive local locking. Become familiar with the locking rates (New ENQ, Converted ENQ, and DEQ) shown in the MONITOR LOCK class, and watch for deviations from the typical values. (In OpenVMS Cluster environments, use the DLOCK class instead; only the local portion of each of the locking rates is executed in kernel mode.) If you have more than five CPUs and a high amount of MP_SYNCH time, consider implementing a dedicated lock manager. If you are already using the Dedicated CPU Lock Manager, kernel mode time will appear much higher than it would without it.
  • A high process creation rate. Process creation is a CPU-intensive operation. Process accounting can help determine if this activity is contributing to the high level of kernel mode time.
  • Excessive file system activity. The file system, also known as the XQP, performs various operations on behalf of users and RMS. These include file opens, closes, extends, deletes, and window turns (retrieval of mapping pointers). The MONITOR FCP class monitors the following rates:
    Rate Description
    CPU tick rate The percentage of the CPU being consumed by the file system. It is highly dependent on application file handling and can be kept to a minimum by encouraging efficient use of files, by performing periodic backups to minimize disk fragmentation, and so forth.
    Erase rate The rate of erase operations performed to support the high-water marking security feature.

    If you do not require this feature at your site, be sure to set your volumes to disable it. (See Section 2.2.)
  • Excessive direct I/O rate. While direct I/O activity, particularly disk I/O, is important in an evaluation of the I/O resource, it is also important in an evaluation of the CPU resource because it can be costly in terms of CPU cycles. The direct I/O rate is included in the MONITOR IO class. The top users of direct I/O are indicated in the MONITOR PROCESSES /TOPDIO class.
  • A high image activation rate. The image activation code itself does not use a significant amount of CPU time, but it can cause consumption of kernel mode time by activities like the following:
    • An excessive amount of logical name translation as file specifications are parsed.
    • Increased file system activity to locate and open the image and associated library files (this activity also generates buffered I/O operations).
    • A substantial number of page faults as the images and libraries are mapped into working sets.
    • A high demand zero fault rate (shown in the MONITOR PAGE class). This activity can be accompanied by a high global valid fault rate, a high page read I/O (hard fault) rate, or both.

    A possible cause of a high image activation rate is the excessive use of DCL command procedures. You should expect to see high levels of supervisor mode activity if this is the case. Frequently invoked, stable command procedures are good candidates to be rewritten as images.
  • Excessive use of DECnet. Become familiar with the packet rates shown in the MONITOR DECNET class and watch for deviations from the typical values.
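If, while working through the file system activity item above, you decide that the high-water marking feature is not required at your site, the following sketch shows how it can be disabled (the device names and volume label are placeholders):

```dcl
$ ! Disable high-water marking on a mounted volume (device name illustrative).
$ SET VOLUME /NOHIGHWATER_MARKING DUA0:
$ ! New volumes can be initialized without the feature.
$ INITIALIZE /NOHIGHWATER_MARKING DUA1: USERDISK
```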

Executive Mode Time

High levels of executive mode time can be an indication of excessive RMS activity. File design decisions and access characteristics can have a direct impact on CPU performance. For example, consider how the design of indexed files may affect the consumption of executive mode time:

  • Bucket size determines average time to search each bucket.
  • Fill factor and record add rate determine rate of bucket splits.
  • Index, key, and data compression saves disk space and can reduce bucket splits but requires extra CPU time.
  • Use of alternate keys provides increased retrieval flexibility but requires additional disk space and additional CPU time when adding new records.

Be sure to consult the Guide to OpenVMS File Applications when designing an RMS application. It contains descriptions of available alternatives along with their performance implications.

13.1.3 CPU Offloading

The following are some techniques you can use to reduce demand on the CPU:

  • Decompress the system libraries (see Section 2.1).
  • Force compute-intensive images to execute only in a batch queue, with a job limit. A good technique for enforcing such batch execution is to use the access control list (ACL) facility as follows:
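    A hedged sketch of such an ACL command follows; the image file name is a placeholder:

```dcl
$ ! Deny execute access to interactive and network users, so the image
$ ! can be run only from a batch job (file name illustrative).
$ SET SECURITY /ACL=(IDENTIFIER=INTERACTIVE+NETWORK,ACCESS=NONE) HEAVY_WORK.EXE
```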


    This command forces batch execution of the image file for which the command is entered.
  • Implement off-shift timesharing or set up batch queues to spread the CPU load across the hours when the CPU would normally not be used.
  • Disable code optimization. Compilers such as FORTRAN and Bliss optimize code by default, but optimization is a CPU- and memory-intensive operation. In environments where frequent, iterative compilations are done and the time spent compiling programs exceeds the time spent running them---typical of educational and some development environments---disabling default optimization reduces the resources used by the compiler, at the cost of increasing the resources used by the programs when they execute. For most production environments, where the time spent running a program exceeds the time spent compiling it, it is better to enable full compiler optimization.
  • Use a dedicated batch engine. It may be beneficial during prime time to set up in an OpenVMS Cluster one system dedicated to batch work, thereby isolating the compute-intensive, noninteractive work from the online users. You can accomplish this by making sure that the cluster-accessible generic batch queue points only to executor batch queues defined on the batch system. If a local area terminal server is used for terminal access to the cluster, you can limit interactive access to the batch system by making that system unknown to the server.
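The queue arrangement described in the last item above can be sketched as follows; the queue and node names are placeholders:

```dcl
$ ! Executor batch queue defined on the dedicated batch node
$ ! (queue and node names illustrative).
$ INITIALIZE /QUEUE /BATCH /ON=BATNOD:: BATNOD_BATCH
$ ! Cluster-accessible generic queue that points only at that executor.
$ INITIALIZE /QUEUE /BATCH /GENERIC=(BATNOD_BATCH) CLUSTER_BATCH
```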

13.1.4 CPU Offloading Between Processors on the Network

Users of standalone workstations on the network can take advantage of local and client/server environments when running applications. Such users can choose to run an application based on DECwindows on their workstations, resources permitting, or on a more powerful host sending the display to the workstation screen. From the point of view of the workstation user, the decision is based on disk space and acceptable response time.

Although the client/server relationship can benefit workstations, it also raises system management questions that can have an impact on performance. On which system will the files be backed up---workstation or host? Must files be copied over the network? Network-based applications can represent a significant additional load on your network depending on interconnect bandwidth, number of processors, and network traffic.
