HP OpenVMS Systems Documentation
OpenVMS Programming Concepts Manual
29.4.4 No Support for ODS-5 Volumes
An application that uses internal knowledge of the file system, including knowledge of how directory contents and file header data are structured on a disk, cannot work correctly on an ODS-5 volume.
The following sections describe the changes necessary to upgrade the level of support for extended file specifications. Note that you must first ensure that the application meets the default support level before you can upgrade it to the full support level.
29.5.1 Upgrading to Default Support
To upgrade an application to provide default support for Extended File Specifications, you must ensure that it minimally supports both the ODS-5 volume structure and extended file naming, as recommended in Sections 29.5.1.1 and 29.5.1.2, respectively. Default support is defined in Section 29.4.2.
Applications that do not support the new ODS-5 volume structure do not operate successfully on these volumes even if they encounter only traditional file specifications.
If an application does not work properly on an ODS-5 volume, examine the application for the following:
29.5.1.2 Providing Support for Extended File Naming
If an application does not handle extended file names successfully, examine the application for any of the following:
29.5.2 Upgrading to Full Support
Some OpenVMS applications, such as system or disk management utilities, may require full support for Extended File Specifications. Typically, these are utilities that must be able to view and manipulate all file specifications without DID or FID abbreviation. To upgrade an application so that it fully supports all the features of Extended File Specifications, do the following:
Atomicity: Either all of the changes for a transaction are made, or none are. If the changes for a transaction cannot be completed, partial changes by the transaction must be undone.

Consistency: A transaction is expected to change the system from one consistent state to another.

Isolation: Intermediate changes made by a transaction must not be visible to other transactions.

Durability: The changes made by a transaction should survive computer and media failures.
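The atomicity property can be illustrated with a small sketch. This is a generic Python illustration, not a DECdtm interface: each change records an undo action, and if any step fails, the recorded undo actions are replayed in reverse so that no partial change survives.

```python
def run_atomically(operations):
    """Apply a list of (do, undo) operation pairs all-or-nothing.

    If any 'do' step raises, every completed step is undone in
    reverse order, restoring the original state (atomicity).
    """
    undo_log = []
    try:
        for do, undo in operations:
            do()
            undo_log.append(undo)
    except Exception:
        for undo in reversed(undo_log):
            undo()
        raise

# Toy example: a transfer between two in-memory "accounts".
accounts = {"a": 100, "b": 0}

def debit():       accounts["a"] -= 30
def credit():      accounts["b"] += 30
def undo_debit():  accounts["a"] += 30
def undo_credit(): accounts["b"] -= 30

run_atomically([(debit, undo_debit), (credit, undo_credit)])
```

If the second step raised (say, a lost network link), the recorded undo for the first step would run and both accounts would keep their original balances.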
A transaction often needs to use more than one resource on one or more systems. This type of transaction is called a distributed transaction.
Individual OpenVMS systems within the distributed system are called nodes in this chapter.
The DECdtm model constructs a distributed transaction processing system from three types of component:
DECdtm implements a two-phase commit protocol. This is a simple consensus protocol that allows a collection of participants to reach a single conclusion. The two-phase commit protocol makes sure that all of the operations can take effect before the transaction is committed. If any operation cannot take effect, for example if a network link is lost, then the transaction is aborted, and none of the operations take effect. Given a list of participants and a designated coordinator, the protocol proceeds as follows:
Phase 1: The coordinator asks each participant if it can agree to commit. Each participant examines its internal state. If the answer is yes, it does whatever it requires to ensure that it can either commit or abort the transaction, regardless of failures. Typically, this requires logging information to disk. It then votes either yes or no.

Phase 2: The coordinator records the outcome on disk: yes, if all the votes were positive, or no, if any votes were negative or missing. The coordinator then informs each participant of the final result.
Note that this protocol reaches a single decision even though the coordinator and participants may fail. Any failure during phase 1 causes the transaction to be aborted. If the coordinator fails during phase 2, participants wait for it to recover and read the decision from its disk. If a participant fails during phase 2, it asks the coordinator for the decision when it recovers.
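The two phases above can be sketched as follows. This is a generic Python simulation of the consensus pattern, not the DECdtm implementation: the coordinator collects votes, forces an abort on any negative or missing vote, and then broadcasts a single decision to every participant.

```python
def two_phase_commit(participants):
    """Simulate one round of two-phase commit.

    Each participant is an object with vote() and finish(decision).
    Returns the single decision reached: "commit" or "abort".
    """
    # Phase 1: collect votes; a "no" or a missing vote (modeled
    # here as an exception, e.g. a lost network link) forces abort.
    try:
        votes = [p.vote() for p in participants]
    except Exception:
        votes = [False]
    decision = "commit" if all(votes) else "abort"

    # Phase 2: record the outcome (here, just the return value)
    # and inform every participant of the final result.
    for p in participants:
        p.finish(decision)
    return decision

class Participant:
    def __init__(self, ok=True):
        self.ok = ok
        self.state = "active"

    def vote(self):
        # A real RM would first log enough information to disk to
        # guarantee it can commit or abort regardless of failures.
        return self.ok

    def finish(self, decision):
        self.state = decision
```

A single negative vote is enough to drive every participant to the same abort decision, which is the consensus property the protocol exists to provide.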
While DECdtm is not complex in itself, constructing a full-function resource manager requires knowledge of more techniques than this manual can cover. Transaction Processing: Concepts and Techniques by Jim Gray and Andreas Reuter (Morgan Kaufmann Publishers, 1993) may be helpful.
30.2 Single Branch Application
A sequence of AP operations that occurs within a single transaction is called a branch of the transaction. In the simplest use of DECdtm, a single AP invokes two or more RMs.
The AP uses just three of the DECdtm services: $START_TRANS, $END_TRANS, and $ABORT_TRANS. These services are documented in the OpenVMS System Services Reference Manual. They have not changed, but additional information is given in this manual.
$START_TRANS initiates a new transaction and returns a transaction identifier (TID) that is passed to the other DECdtm services. $END_TRANS ends a transaction by attempting to commit it, and returns the outcome of the transaction: either commit or abort. $ABORT_TRANS ends a transaction by aborting it.
During the transaction, the AP passes the TID to each RM that it uses. The TID may be passed explicitly, or through the default transaction mechanism described in Section 30.4. Internally, each RM calls the DECdtm RM services. It also uses the branch services if parts of the transaction can be executed by different processes or on different nodes.
DECdtm aborts a transaction if the process executing a branch
terminates. By default, it also aborts a transaction if the current
program image terminates.
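The single-branch call sequence can be sketched as follows. This is a hedged Python illustration in which start_trans, end_trans, abort_trans, and rm_operation are hypothetical stand-ins for the real $START_TRANS, $END_TRANS, and $ABORT_TRANS services and for RM work; the real services are called through OpenVMS language bindings and return condition values, not Python results.

```python
import uuid

# Hypothetical stand-ins for the DECdtm services; the real services
# take item lists and return OpenVMS condition values.
_active = {}

def start_trans():
    """Start a transaction and return its transaction id (TID)."""
    tid = uuid.uuid4()
    _active[tid] = []
    return tid

def end_trans(tid):
    """Attempt to commit: succeeds only if every operation succeeded."""
    ops = _active.pop(tid)
    return "commit" if all(ops) else "abort"

def abort_trans(tid):
    """End the transaction by aborting it unconditionally."""
    _active.pop(tid, None)
    return "abort"

def rm_operation(tid, succeed=True):
    """Model an RM doing work for the transaction named by tid."""
    _active[tid].append(succeed)
    return succeed

# Typical AP flow: start, pass the TID to each RM it uses, then
# commit, falling back to an explicit abort if any operation failed.
tid = start_trans()
if rm_operation(tid) and rm_operation(tid):
    outcome = end_trans(tid)
else:
    outcome = abort_trans(tid)
```

The shape to note is that the AP itself only starts and ends the transaction; all resource work in between is done by RMs that receive the TID.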
30.2.1 Calling DECdtm System Services for a Single Branch Application
An application using the DECdtm system services follows these steps:
Edward Jessup, an employee of a computer company in Italy, is transferring to a subsidiary of the company in Japan. An application must remove his personal information from an Italian DBMS database and add it to a Japanese Rdb database. Both of these operations must happen; otherwise, Edward's personal information might either be lost in cyberspace (the application might remove him from the Italian database but then lose a network link while trying to add him to the Japanese database) or end up in both databases at the same time. Either way, the two databases would be out of step.
If the application used DECdtm to execute both operations as an atomic transaction, then this error could never happen; DECdtm would automatically detect the network link failure and abort the transaction. Neither of the databases would be updated, and the application could then try again.
Figure 30-1 shows the participants in this sample distributed transaction. The application runs on node ITALY.
Figure 30-1 Participants in a Distributed Transaction
A transaction may have multiple branches. A separate branch is required for each process that takes part in a transaction, regardless of whether the processes run on the same node or on different nodes of the system.
The top branch of the transaction is created by $START_TRANS. A new branch can be requested in the following ways:
Note that in the last two cases, the RM or TP framework makes the necessary branch service calls on behalf of the application. From the viewpoint of DECdtm, there is no difference among the three cases.
The top branch of a transaction is created by calling $START_TRANS. A subordinate branch is authorized when an existing branch calls $ADD_BRANCH. This returns a globally unique branch identifier (BID). The application passes the BID and TID with an application-specific request to another process or node of the system. $START_BRANCH is then called on the target node to add a new branch to the transaction. A subordinate branch of a transaction may in turn create further branches.
DECdtm can connect the two parts of the transaction together because $ADD_BRANCH specifies the name of the target node while $START_BRANCH specifies the name of the parent node. Either the two nodes must be in the same OpenVMS Cluster or they must be able to communicate by DECnet. DECdtm operation is more efficient within an OpenVMS Cluster.
Unless DECdtm operation is confined to a single cluster, you must configure each node with the same DECnet node name as its cluster node name.
An application may complete its processing within a branch by calling $END_BRANCH.
On $START_BRANCH, DECdtm checks that the two nodes are able to communicate, but it does not validate that the branch is authorized until $END_BRANCH is called. At that point, an unauthorized branch is aborted without affecting the ability of the authorized branches to commit.
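The branch lifecycle described above can be sketched as follows. This is a generic Python simulation: add_branch, start_branch, and end_branch are hypothetical stand-ins for $ADD_BRANCH, $START_BRANCH, and $END_BRANCH, and the BIDs here are plain UUIDs rather than real DECdtm identifiers. The point illustrated is that authorization is checked only at end time, and an unauthorized branch aborts without disturbing the authorized ones.

```python
import uuid

authorized = {}   # (tid, bid) pairs authorized by the parent branch
started = set()   # branches actually started on a target node

def add_branch(tid):
    """An existing branch authorizes a new branch; returns its BID."""
    bid = uuid.uuid4()
    authorized[(tid, bid)] = True
    return bid

def start_branch(tid, bid):
    """Called on the target node; the BID is not validated yet."""
    started.add((tid, bid))

def end_branch(tid, bid):
    """Validation happens here: an unauthorized branch is aborted."""
    if (tid, bid) not in authorized:
        return "abort"   # without affecting the authorized branches
    return "ok"

tid = uuid.uuid4()            # top branch, as created by $START_TRANS
bid = add_branch(tid)         # parent authorizes a subordinate branch
start_branch(tid, bid)        # target node joins the transaction
result = end_branch(tid, bid)

rogue = uuid.uuid4()          # a BID that no branch ever authorized
start_branch(tid, rogue)
rogue_result = end_branch(tid, rogue)
```

In the real services, the application also passes the BID and TID to the target node itself, along with its application-specific request; DECdtm only links the two halves together.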
Be careful in situations in which an application attempts to access the same resource from different branches of a transaction. Some RMs can recognize that the branches form part of the same transaction and allow concurrent access to the resource. In that case, just like multiple threads in a process, the application may need to serialize its own operations on the shared resource. Other RMs may lock one branch against another. In that case, the application is likely to deadlock.
Multiple branches in a transaction can serialize their operations on a shared resource within an OpenVMS Cluster using the Lock Manager. Care is needed if two branches outside an OpenVMS Cluster implicitly share a resource, perhaps by each creating a subordinate branch on a third system.
A single process may have multiple branches. For example, a server process may execute parallel operations on behalf of different clients.
30.3.1 Resource Manager Use of the Branch Services
Strictly defined, an RM provides access to resources in the same process as the AP that started a transaction or added a branch. However, an RM may perform work for a transaction in a different process from the one that made the original request. In that case, it must use the branch services to join the transaction in the worker process.
Similarly, an RM such as Oracle Rdb may provide an application
interface that allows remote resources to be accessed. In that case,
the RM uses the branch services to add a branch on the local node and
start a branch on the remote node.
30.3.2 Branch Synchronization
Processing in all branches of a transaction must be complete before calling $END_TRANS.
Normally DECdtm is used to ensure branch completion. In this case:
In other words, when a transaction completes successfully, all synchronized branches complete together. When a transaction aborts, all synchronized branches on a single node complete together, but branches on different nodes complete at different times. Using synchronized branches does not add extra message overhead, because the synchronization events are implicit in the normal DECdtm commitment protocol.
DECdtm branch synchronization is redundant when branch processing is initiated by a synchronous call to a process or remote node, and that call does not return until processing is complete. For example, remote operations may be requested by Remote Procedure Call (RPC). In this case:
See Section 30.4 for a case in which unsynchronized branches are not advised.