The following sections specify the RPC protocol via the statechart
(see
The RPC is designed to operate over a transport layer that offers either a reliable, connection-oriented service (COTS) or a datagram, connectionless service (CLTS), or both types of services. Operation of RPC over other protocols and services is not currently defined by this specification.
The details of the RPC protocol differ depending on the selected transport service. The protocols using COTS and CLTS are described separately in this chapter.
The protocol machines described in the following sections define this space of allowed behaviours. Implementation structure and policy need not follow the protocol machine organisation and defaults. The externally observed behaviour of an implementation, as viewed from the RPC user interface and the transport interface, must be indistinguishable from some subset of the allowed behaviours determined as follows:
The following sections define the interactions between the protocol machines, which are implemented in the RPC run-time system, and the RPC stubs, as applicable to both connectionless (CL) and connection-oriented (CO) protocols.
The RPC stub generates the event START_CALL to invoke a new call, which is associated with Call_Handle data. The RPC run-time system dispatches the call, via the CO_CLIENT_ALLOC machine for the connection-oriented protocol, to the appropriate instance of the protocol machine. The run-time system also sets the conditional flags for the requested execution semantics (IDEMPOTENT, BROADCAST and MAYBE) and the authentication flag AUTH, according to the Call_Handle data structure. If the security service rpc_c_authn_dce_secret is requested and the authentication ticket for this call is already available, the conditional flag TICKET must also be set to TRUE.
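The following illustrative C fragment sketches how a run-time system might derive these conditional flags when the stub raises START_CALL. The structure and constant names are assumptions made for illustration; they are not part of the specified data model.

    #include <stdbool.h>

    typedef enum { SEM_DEFAULT, SEM_IDEMPOTENT, SEM_BROADCAST, SEM_MAYBE } exec_semantics_t;

    typedef struct {                 /* hypothetical Call_Handle fields */
        exec_semantics_t semantics;
        int  authn_service;          /* requested security service       */
        bool ticket_available;       /* authentication ticket on hand?   */
    } call_handle_t;

    typedef struct {                 /* conditional flags of the protocol machine */
        bool IDEMPOTENT, BROADCAST, MAYBE, AUTH, TICKET;
    } call_flags_t;

    #define AUTHN_NONE        0      /* illustrative values only */
    #define AUTHN_DCE_SECRET  1

    static call_flags_t flags_from_handle(const call_handle_t *h)
    {
        call_flags_t f = {0};
        f.IDEMPOTENT = (h->semantics == SEM_IDEMPOTENT);
        f.BROADCAST  = (h->semantics == SEM_BROADCAST);
        f.MAYBE      = (h->semantics == SEM_MAYBE);
        f.AUTH       = (h->authn_service != AUTHN_NONE);
        f.TICKET     = (h->authn_service == AUTHN_DCE_SECRET
                        && h->ticket_available);
        return f;
    }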
Upon initiating a new RPC session (see
The RPC stub may queue the marshalled call data either in one segment or in chunks of segments, depending on the call type (for example, whether a pipe data type is opened) and the local memory management policies. The run-time system detects the availability of data and sets the conditional flag TRANSMIT_REQ to TRUE if data for at least one PDU fragment is available. The run-time system resets TRANSMIT_REQ if the queue temporarily contains less than one PDU fragment of data. The sizes of data segments queued by the stub are not necessarily equivalent to the sizes of PDU fragments sent by the run-time system.
If the transmit queue only contains data for the last PDU fragment to be sent, the RPC run-time system sets the conditional flag LAST_IN_FRAG. Note that if the request is to be a single-packet PDU, LAST_IN_FRAG must also be set.
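A minimal sketch of this flag maintenance, assuming a hypothetical transmit-queue descriptor, is shown below; the field names are illustrative only.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        size_t bytes_queued;     /* marshalled data not yet transmitted   */
        bool   stub_done;        /* stub has queued the final segment     */
        size_t frag_size;        /* negotiated PDU fragment body size     */
    } xmit_queue_t;

    static void update_request_flags(const xmit_queue_t *q,
                                     bool *transmit_req, bool *last_in_frag)
    {
        /* TRANSMIT_REQ: at least one full fragment, or the final
         * (possibly short) fragment, is ready to send.                */
        *transmit_req = (q->bytes_queued >= q->frag_size) ||
                        (q->stub_done && q->bytes_queued > 0);

        /* LAST_IN_FRAG: only the final fragment remains; this also
         * covers a request that fits in a single PDU.                 */
        *last_in_frag = q->stub_done && q->bytes_queued <= q->frag_size;
    }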
Response data (out parameters) are processed at the RPC run-time system in PDU fragment granularity. Each inbound data fragment is buffered and transferred to the stub through the activity HANDLE_OUT_FRAG. RPC stub implementation policy determines whether it processes incomplete response data. When the client run-time system has received and buffered the complete response, it signals the completion and transfers control to the stub by raising the event RCV_LAST_OUT_FRAG. Note that the stub must ensure that the HANDLE_OUT_FRAG activity has completed before acting on this event.
Local cancels are transferred to the RPC run-time system by raising the event CLIENT_CANCEL. If the run-time system detects an issued cancel, it sets the conditional flag RT_PENDING_CANCEL. To detect cancel requests that may have been issued for a call before the run-time system started execution, the stub transfers this status by setting the conditional flag CURRENT_PENDING_CANCEL along with the START_CALL event. The RT_PENDING_CANCEL status is passed back to the stub after call completion.
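The fragment below sketches one possible bookkeeping of these cancel indications; the state and function names are hypothetical and serve only to illustrate the flow described above.

    #include <stdbool.h>

    typedef struct {
        bool rt_pending_cancel;   /* cancel seen by the run-time system */
    } client_call_state_t;

    static void on_start_call(client_call_state_t *s, bool current_pending_cancel)
    {
        /* cancel issued before the run-time system started execution */
        s->rt_pending_cancel = current_pending_cancel;
    }

    static void on_client_cancel(client_call_state_t *s)
    {
        /* local cancel raised while the call is in progress */
        s->rt_pending_cancel = true;
    }

    static bool on_call_complete(const client_call_state_t *s)
    {
        /* RT_PENDING_CANCEL status returned to the stub after completion */
        return s->rt_pending_cancel;
    }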
If the run-time system terminated the call due to a failure (local or remote), it raises an exception by calling the activity EXCEPTION. The data item RT_EXCEPTION_TYPE indicates the type of failure to the stub, using fault and reject status codes. The conditional flag RT_DID_NOT_EXECUTE further details the execution status of the call (connection-oriented protocol only).
If a context handle is activated, the stub generates a CONTEXT_ACTIVE event and identifies the client/server pair for which this context handle is active. A context handle becomes active when a server returns a value that is not NULL for an RPC context handle parameter. For each context handle that becomes active, the client stub must generate this event.
If a context handle becomes inactive, the stub generates a
CONTEXT_INACTIVE event and identifies the client/server pair for
which this context handle is no longer active. A context handle
becomes inactive when a server returns a NULL value for an RPC context
handle parameter. For each context handle that becomes inactive, the
client stub must generate this event.
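The rule the client stub applies to each returned RPC context handle parameter can be summarised by the following sketch; the types and emitter functions are placeholders for the run-time system's event mechanism and are not defined by this specification.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { unsigned char b[16]; } uuid16_t;   /* stand-in UUID type */

    typedef struct {
        uuid16_t client_cas;      /* client address space UUID          */
        uuid16_t server_binding;  /* identifies the server of the pair  */
    } cs_pair_t;

    /* placeholders for the run-time system's event mechanism */
    static void emit_context_active(const cs_pair_t *pair, void *handle)
    { (void)pair; (void)handle; puts("CONTEXT_ACTIVE"); }

    static void emit_context_inactive(const cs_pair_t *pair)
    { (void)pair; puts("CONTEXT_INACTIVE"); }

    /* applied to each RPC context handle parameter returned by the server */
    static void note_context_handle(const cs_pair_t *pair, void *returned_handle)
    {
        if (returned_handle != NULL)
            emit_context_active(pair, returned_handle);   /* handle became active   */
        else
            emit_context_inactive(pair);                  /* handle became inactive */
    }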
The server call protocol machines (CO_SERVER and CL_SERVER) are instantiated at an RPC request for a call in a new session, which is a new association for the connection-oriented protocol or a new activity for the connectionless protocol. If a session has already been established, the server call machines idle while waiting to accept new call requests, unless a context rundown has been issued.
Request data (in parameters) are processed at the RPC run-time system in PDU fragment granularity. Each inbound data fragment is buffered and transferred to the stub through the activity HANDLE_IN_FRAG. RPC stub implementation policy determines whether it processes incomplete request data. When the server run-time system has received and buffered the complete request, it signals the completion and transfers control to the stub by raising the event RCV_LAST_IN_FRAG. Note that the stub must ensure that the HANDLE_IN_FRAG activity has completed before acting on this event.
When the server application procedure is ready to respond to the RPC request with out parameter data, the stub signals this to the run-time system by raising the event PROC_RESPONSE. The called application procedure may not have completed at the time of this event, depending on the call type.
The RPC stub may queue the marshalled call data for the response either in one segment or in chunks of segments, depending on the call type (for example, whether a pipe data type is opened) and the local memory management policies. The run-time system detects the availability of data and sets the conditional flag TRANSMIT_RESP to TRUE if data for at least one PDU fragment is available. The run-time system resets TRANSMIT_RESP if the queue temporarily contains less than one PDU fragment of data. The sizes of data segments queued by the stub are not necessarily equivalent to the sizes of PDU fragments sent by the run-time system.
If the transmit queue only contains data for the last PDU fragment to be sent, the RPC run-time system sets the conditional flag LAST_OUT_FRAG. Note that if the response is to be a single-packet PDU, LAST_OUT_FRAG must also be set.
Upon detecting a cancel request issued by the client, the server run-time system starts the activity CANCEL_NOTIFY_APP to notify the stub that a cancel was issued. The stub returns the status RETURN_PENDING_CANCEL to the run-time system after processing the cancel request and terminating the activity CANCEL_NOTIFY_APP.
If the server manager routine rejects the call before execution, the RPC stub signals the run-time system by raising the event PROCESSING_FDNE. If the stub detected a processing failure during execution of the request, it signals the run-time system by raising the event PROCESSING_FAULT.
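The distinction between the two failure events can be expressed as a small decision sketch; the enumeration and function names below are illustrative, not part of the protocol machine definitions.

    typedef enum { EV_PROCESSING_FDNE, EV_PROCESSING_FAULT, EV_PROC_RESPONSE } stub_event_t;

    static stub_event_t classify_outcome(int rejected_before_execution,
                                         int failed_during_execution)
    {
        if (rejected_before_execution)
            return EV_PROCESSING_FDNE;    /* fault, call did not execute   */
        if (failed_during_execution)
            return EV_PROCESSING_FAULT;   /* fault during call execution   */
        return EV_PROC_RESPONSE;          /* normal completion path        */
    }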
If a context handle is activated, the stub generates a CONTEXT_ACTIVE event and identifies the client/server pair for which this context handle is active. A context handle becomes active when a server returns a value which is not NULL for an RPC context handle parameter. For each context handle that becomes active, the server stub must generate this event.
If a context handle becomes inactive, the stub generates a CONTEXT_INACTIVE event and identifies the client/server pair for which this context handle is no longer active. A context handle becomes inactive when a server returns a NULL value for an RPC context handle parameter. For each context handle which becomes inactive, the server stub must generate this event.
If communications between a client/server pair are lost and context
handles were active, the server protocol machine generates a
RUNDOWN_CONTEXT_HANDLES event. For each active context handle
associated with that particular client/server pair, the stub calls the
corresponding <type_id>_rundown routine.
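The consequence of a RUNDOWN_CONTEXT_HANDLES event for the stub can be sketched as follows; the table type is hypothetical, and the rundown function pointer stands for the generated <type_id>_rundown routine.

    #include <stddef.h>

    typedef void (*rundown_fn_t)(void *context);   /* models <type_id>_rundown */

    typedef struct {
        void        *context;   /* the active context handle value     */
        rundown_fn_t rundown;   /* routine generated for its type      */
        int          active;
    } ctx_entry_t;

    static void rundown_context_handles(ctx_entry_t *table, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (table[i].active) {
                table[i].rundown(table[i].context);  /* release server state */
                table[i].active = 0;
            }
        }
    }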
The connection-oriented protocol behaviour is characterised by
concurrent protocol machines of the types specified in
An RPC implementation may function as both a client and a server
concurrently. For modelling purposes, it may be viewed as containing
independent client and server state machines. This corresponds to the
client/server model described in
The client protocol machines support the client interfaces while the server protocol machines support the server interfaces. Invocation of an RPC may establish relationships between instances of the client and server protocol machines at each of the lower levels in the hierarchy.
The protocol and service for each RPC are handled by a corresponding pair of client and server CALL machine instances. These instances require a communications channel for exchanging PDUs. This communications channel, shared by a client and server, is known as an association and is maintained by a corresponding pair of client and server ASSOCIATION machine instances. A series of RPC calls made from client applications to a specific server may utilise the same association. Concurrent RPCs from a client to the same server may take place over different associations. The set of associations between a client and a server is represented by an association group. Association groups are managed by client and server association group machines (CO_CLIENT_GROUP and CO_SERVER_GROUP). The creation and lifetime of these various protocol machines is a function of resource availability, the relationships described in this section, external events and local system policy.
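These relationships can be pictured with a small data-structure sketch; the structures and field names are purely illustrative and are not mandated by the protocol machine definitions.

    #include <stddef.h>

    struct co_call;                        /* a CALL machine instance            */

    typedef struct co_assoc {
        int              transport_conn;   /* one-to-one with a transport connection */
        struct co_call  *active_call;      /* NULL when the association is idle       */
        struct co_assoc *next_in_group;
    } co_assoc_t;

    typedef struct {
        unsigned long    group_id;         /* association group identifier  */
        co_assoc_t      *associations;     /* one or more member associations */
    } co_assoc_group_t;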
Each client may have multiple simultaneous relationships of the form described in this section with multiple servers. Similarly, each server may have multiple simultaneous relationships with multiple clients. Precise details of these relationships are specified in the following sections.
An association group comprises a set of one or more associations (see
Association groups support context handle management and facilitate efficient resource management.
An association represents a communications channel that is shared between a client endpoint and a server endpoint. Each association is layered on top of a single transport connection such that associations and transport connections have a one-to-one correspondence. An association adds a security and presentation context negotiation and some other RPC-specific exchanges to the underlying connection. Each association is a member of one association group. An association can support no more than one RPC at a time, including its affiliated cancels. An association may be serially reused to call any of the interfaces resident at that server's endpoint. For each RPC, an association is allocated, the RPC is made, and the association is deallocated when the RPC completes. Attempting to allocate an association may cause new associations, transport connections and association groups to be made, if necessary, within local client and server policy constraints. Local policy also governs the number and lifetime of associations.
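The per-call lifecycle described above (allocate, call, deallocate, serial reuse) is sketched below under hypothetical helper names; the helpers stand for local run-time operations and are not interfaces defined by this specification.

    typedef struct co_assoc co_assoc_t;
    typedef struct co_assoc_group co_assoc_group_t;

    /* hypothetical run-time helpers */
    co_assoc_t *assoc_allocate(co_assoc_group_t *grp);        /* may open a new connection */
    void        assoc_deallocate(co_assoc_group_t *grp, co_assoc_t *a);
    int         run_call(co_assoc_t *a, const void *in, void *out);

    static int make_rpc(co_assoc_group_t *grp, const void *in, void *out)
    {
        co_assoc_t *a = assoc_allocate(grp);   /* at most one RPC per association      */
        if (a == NULL)
            return -1;                         /* local policy refused a new association */
        int status = run_call(a, in, out);
        assoc_deallocate(grp, a);              /* association becomes serially reusable  */
        return status;
    }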
Each implementation may determine its own association management policy for accepting new associations and disconnecting existing associations subject to the following constraints:
Unusual events (for example, user and management abort requests) may cause associations to be aborted at any time. However, this is likely to cause a pending RPC to fail.
A primary endpoint address may be a well-known endpoint or a dynamic endpoint that is registered with an endpoint mapper. For the first association established within an association group, a client specifies the primary endpoint address to request a transport connection to a server.
If a server supports concurrent RPCs, then the server returns a secondary address to the client. The secondary address may be the same as the primary address. Whether they differ is a local implementation-dependent matter.
A client uses this secondary address for subsequent transport connection requests to establish additional concurrent associations to the same server. Each subsequent association established using both the secondary address and group identifier of an association group will be directed to the same server. RPCs on any of the associations within an association group are processed by the same server.
If the server does not return a secondary address, the client will permit only a single association for the corresponding association group. The rpc_server_listen() call informs the server RPC run-time system whether to allow concurrent RPCs to the same server.
The absence of a secondary address is modelled as a null value in this specification.
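The client-side addressing rule above can be summarised by the following sketch, in which the structure and function names are illustrative and a NULL secondary address models its absence.

    #include <stddef.h>

    typedef struct {
        const char *secondary_addr;   /* NULL models "no secondary address" */
        int         assoc_count;      /* associations already in the group  */
    } assoc_group_info_t;

    /* returns the address to connect to, or NULL if no further
     * association may be opened within this group */
    static const char *addr_for_new_assoc(const assoc_group_info_t *g,
                                          const char *primary_addr)
    {
        if (g->assoc_count == 0)
            return primary_addr;      /* first association in the group          */
        return g->secondary_addr;     /* NULL => only a single association allowed */
    }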
After an association has been allocated for an RPC, the CALL protocol
machines (see
The DCE RPC CO protocol requires a connection-oriented transport service that guarantees reliable sequential delivery of data. This means the transport guarantees that when it delivers data to a transport user, all data previously sent by the remote transport user on that transport connection has been delivered exactly once, unmodified, and in the order it was presented to the transport by the remote sender.
The COTS must provide connection establishment and release, full-duplex data transfer, segmentation and reassembly, flow control and liveness indication.
The moment in time at which each instance of the protocol machines is created depends on the events that trigger the initial transition into a statechart. Similarly, the lifetime of a protocol machine instance is determined by events that cause transition to the terminal state. All machines may be affected by external events. The relationships among instances of these machines are described in the following sections.
The client protocol for processing RPCs is described by the CO_CLIENT_ALLOC, CO_CLIENT_GROUP and CO_CLIENT statecharts.
The server protocol for processing RPCs is described by the CO_SERVER_GROUP and CO_SERVER statecharts.
To avoid race conditions among multiple instances of protocol machines attempting to reference the same state variables or issue conflicting events, a synchronisation mechanism is required. The CO_CLIENT_ALLOC protocol machine illustrates how this synchronisation could be implemented via locking. For simplicity, the other protocol machines merely indicate where synchronisation is necessary, but do not explicitly include the locking steps.
An instance of the CO_CLIENT_ALLOC protocol machine is created each
time a new RPC is invoked by the Invoke service primitive described in
Behaviour of this machine is affected by the states of, and the events generated by, instances of the ASSOCIATION protocol machine that correspond to associations within the relevant association group.
This machine defines the recommended policy for allocating associations to RPCs. Implementations may choose a different policy for allocating associations and, thus, are not required to conform to this definition. Any algorithm for retrying failed attempts to allocate an association must retry no more frequently than specified here.
This protocol machine generates the following events, which are input events for the related CO_CLIENT machine instance:
An instance of the CO_CLIENT_GROUP protocol machine exists for each association group. It is created upon indication that the first association for this group has been established. It terminates when the last association in the group is terminated.
Behaviour of this machine is affected by the states of, and the events generated by, one or more instances of the ASSOCIATION protocol machine that correspond to associations within the relevant association group.
This machine defines the client management of and the protocol for association groups. Implementations are required to conform to the defined behaviour.
The CO_CLIENT statechart defines the protocol machine types for association and call components. An instance of each of the concurrent protocol machines contained in this statechart is created when a client attempts to establish a new association. It terminates when the relevant association is terminated and related termination activities complete. Instances of the concurrent protocol machines within a CO_CLIENT statechart interact via events and state variables. Also, events generated by the relevant instances of CO_CLIENT_GROUP and CO_CLIENT_ALLOC machines affect these protocol machines.
The CO_CLIENT protocol machine generates the following events, which are input events for the related CO_CLIENT_ALLOC machine instance:
The CO_CLIENT protocol machine generates the following events, which are input events for the related CO_CLIENT_GROUP machine instance:
For each association, an instance of the ASSOCIATION protocol machine defines the client management of, and the protocol for, that association. The contained machine, labelled INIT, manages the initialisation of an association and the corresponding transport connection. Implementations are required to conform to the defined behaviour.
An instance of the CONTROL machine manages the reassembly and dispatching of incoming RPC control PDUs for each association. Implementations are required to conform to the described behaviour.
An instance of the CANCEL machine manages cancel requests for an RPC. Implementations are required to conform to the described behaviour.
For each RPC, an instance of the CALL protocol machine defines the client service and protocol for that RPC. Implementations are required to conform to the defined behaviour.
The contained machine, labelled DATA, manages the data exchange between the client and server for the RPC. The machine labelled CONFIRMATION handles reception of the response.
An instance of the CO_SERVER_GROUP protocol machine exists for each association group. It is created upon indication that the first association for this group has been established. It terminates when the last association in the group is terminated and any context associated with remaining context handles has been run down.
Behaviour of this machine is affected by the states of, and the events generated by, one or more instances of the ASSOCIATION protocol machine that correspond to associations within the relevant association group.
This machine defines the server management of, and the protocol for, association groups. Implementations are required to conform to the defined behaviour.
The CO_SERVER statechart defines the protocol machine types for association and call components. An instance of each concurrent protocol machine contained in the CO_SERVER statechart is created upon indication that a new transport connection to the server has been established. It terminates when the relevant association is terminated. Instances of the concurrent protocol machines within a CO_SERVER statechart interact via events and state variables. Also, events generated by the relevant CO_SERVER_GROUP machine instance affect these protocol machines.
The CO_SERVER protocol machine generates the following events, which are input events for the related CO_SERVER_GROUP machine instance:
For each association, an instance of the ASSOCIATION protocol machine defines the server management of, and the protocol for, that association. Implementations are required to conform to the defined behaviour.
An instance of the CONTROL machine manages the reassembly and dispatching of incoming RPC control PDUs for each association. Implementations are required to conform to the described behaviour.
An instance of the CANCEL machine manages the cancel protocol and service for an RPC. Implementations are required to conform to the described behaviour.
The WORKING machine defines the handling of an RPC by the server, including the orderly clean up of state after an RPC terminates. The WORKING machine contains the CALL machine.
For each RPC, an instance of the CALL protocol machine defines the
server management of, and the protocol for, that RPC. Implementations
are required to conform to the defined behaviour. The contained
machine, labelled DATA, manages the data exchange between the client
and server for the RPC.
An instance of the CALL protocol machine is created upon receipt
of the first fragment of an RPC request.
The connectionless protocol behaviour is characterised by concurrent
protocol machines of the types specified in
The most fundamental partitioning of the protocol machines is between the
client and server types. This corresponds to the client/server model
described in
Each client may have multiple simultaneous relationships with multiple servers. Similarly, each server may have multiple simultaneous relationships with multiple clients.
An activity corresponds to a client application instance. Multiple activities may exist concurrently for each client. Both the client and server distinguish among activities by a UUID associated with each activity, called the activity identifier. At most one RPC may be in progress for an activity. A series of RPCs may occur sequentially for each activity.
The protocol machines for an RPC manage the exchange of call data between the client and server for an activity. These protocol machines handle, in an orderly fashion, events that may cause abnormal termination of an RPC. The call machines indicate to an RPC client application whether the RPC completed successfully, failed but did not execute, or failed with unknown execution status. Pending cancels are signalled to the client and server applications, and orphaned RPCs are indicated to the server applications. Each RPC is identified by an activity identifier and a sequence number. Activity identifiers may not be reused. A sequence number may be reused for a given activity identifier if the sequence number space is exhausted. If sequence numbers wrap around and are reused, the implementation must ensure that they remain unambiguous. Fewer than half of the sequence number space may be used for concurrently pending calls.
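One way to honour these sequence-number rules is sketched below, assuming a 32-bit sequence space: numbers are compared modulo the space, and fewer than half of the space may be in use by concurrently pending calls, which keeps wrapped numbers unambiguous. The helper names are assumptions, not part of the specification.

    #include <stdint.h>
    #include <stdbool.h>

    /* "newer than" comparison that stays correct across wraparound,
     * provided the two numbers are less than half the space apart */
    static bool seq_newer(uint32_t a, uint32_t b)
    {
        return (int32_t)(a - b) > 0;
    }

    /* refuse to start a call that would put the oldest pending call
     * half the sequence space behind the new one */
    static bool may_start_call(uint32_t next_seq, uint32_t oldest_pending_seq)
    {
        return (uint32_t)(next_seq - oldest_pending_seq) < 0x80000000u;
    }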
The execution context of a call is uniquely identified by the client address space identifier (CAS UUID). This UUID identifies a specific client process instance that is maintaining context with servers. Execution context is not directly related to activities. Multiple activities may run within a single execution context. The client and server run-time system implementations maintain a list of active execution contexts (signalled from the stub by the event CONTEXT_ACTIVE or, respectively, by CONTEXT_INACTIVE).
The server stub indicates, via condition flag CONTEXT_REQUEST, whether it needs to know the execution context identifier (RT_CLIENT_EXECUTION_CONTEXT) for the current call.
Run-time implementations periodically monitor the liveness of maintained execution contexts. The procedure convc_indy(), as
specified in
Serial numbers allow data senders to match a fack PDU with the request or response PDU that induced the fack PDU to be sent. Serial numbers are used according to the following model. The sender of data maintains a queue of all PDUs that have been sent but not yet acknowledged. The sender also maintains a current serial number, which is initialised to 0 (zero) when a call begins. Each time a data (request or response) or ping PDU is sent or resent from the queue, the current serial number is incremented and inserted into the outgoing data PDU; each PDU in the queue records the serial number used in the most recent transmission of the PDU. When the receiver of a data PDU sends a fack PDU in reply, it inserts the serial number of the data PDU into the fack body. This is the serial number of the PDU that induced the fack.
Upon receiving a fack PDU, the data sender must take the following steps:
In implementing the second step, the following policies are recommended. It is possible that some PDUs that remain in the queue were in transit at the time the fack was generated, and thus could not have been acknowledged by the fack. It is likely that such PDUs were received after the fack was generated, and retransmitting them would waste network bandwidth. The likelihood of such in-transit PDUs increases as network transmission latency increases.
The potentially gratuitous retransmission of data PDUs can be
eliminated by considering the serial number in the fack and the
serial numbers on the data PDUs in the transmit queue. In particular,
the data sender should not retransmit any data PDU whose serial number
(that is, the serial number used in the most recent transmission of
the data PDU) is greater than the serial number in the fack PDU.
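A minimal sketch of this recommended policy follows: on receipt of a fack, acknowledged PDUs are dropped from the queue, and of the remainder only those whose last-transmission serial number is not greater than the serial number in the fack are retransmitted. The queue layout and helper functions are assumptions made for illustration.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct queued_pdu {
        uint32_t fragnum;         /* fragment number of this data PDU         */
        uint16_t last_serial;     /* serial used on most recent transmission  */
        struct queued_pdu *next;
    } queued_pdu_t;

    /* hypothetical helpers provided by the sender's run-time system */
    bool fack_acknowledges(uint32_t fragnum, const void *fack_body);
    void retransmit(queued_pdu_t *pdu, uint16_t *current_serial); /* bumps serial, stamps PDU */

    static void on_fack(queued_pdu_t **queue, uint16_t fack_serial,
                        const void *fack_body, uint16_t *current_serial)
    {
        for (queued_pdu_t **pp = queue; *pp != NULL; ) {
            queued_pdu_t *pdu = *pp;
            if (fack_acknowledges(pdu->fragnum, fack_body)) {
                *pp = pdu->next;                 /* acknowledged: drop (freeing elided) */
            } else {
                /* a serial newer than the fack's means the PDU was still in
                 * transit when the fack was generated; do not resend it yet */
                if ((int16_t)(pdu->last_serial - fack_serial) <= 0)
                    retransmit(pdu, current_serial);
                pp = &pdu->next;
            }
        }
    }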
Because serial numbers allow a transmission and a reply to be matched up, serial numbers can be used in the course of estimating the network round trip time (RTT) between sends and receives. Such an estimate of RTT can be used to control retransmission policy.
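One possible use of the matched samples, sketched here as a standard exponentially weighted moving average, is shown below; the smoothing factor and names are assumptions, not requirements of the protocol.

    static double rtt_estimate = 0.0;     /* smoothed RTT in seconds */

    static void update_rtt(double sample_rtt)
    {
        const double alpha = 0.125;       /* conventional smoothing weight   */
        if (rtt_estimate == 0.0)
            rtt_estimate = sample_rtt;    /* first sample seeds the estimate */
        else
            rtt_estimate = (1.0 - alpha) * rtt_estimate + alpha * sample_rtt;
    }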
The connectionless protocol requires a connectionless, datagram transport (CLTS). The CLTS must provide a full-duplex datagram service that delivers transport user data on a best-effort basis. The CLTS may lose, delay, reorder and duplicate transport service data units. The transport must not misdeliver or modify user data. The CLTS must guarantee that the maximum lifetime of each transport service data unit is bounded.
The moment in time at which each instance of the protocol machines is created depends upon events that trigger transitions from the initial state. The lifetime of a protocol machine instance is determined by the lifetime of the corresponding activity. All machines may be affected by external events. The relationships among instances of these machines are described in the following sections.
The client protocol for processing RPCs is described by the CL_CLIENT statechart.
The server protocol for processing RPCs is described by the CL_SERVER statechart.
Since the connectionless RPC protocol machines have to take into account the unreliable nature of the underlying datagram transport, the RPC run-time system has to handle fragmentation, the possible delivery of packets out of order, and the reassembly of the entire request or response data.
In accordance with the semantics of the HANDLE_IN_FRAG and
HANDLE_OUT_FRAG activities, the run-time system buffers out-of-order
fragments temporarily and makes received fragments available to the
stub only if they are consecutive (see
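A sketch of this buffering rule, assuming a fixed-size reassembly window, is given below: out-of-order fragments are held until the fragments preceding them arrive, and only consecutive fragments are handed to the stub. The window size, structure and function names are illustrative assumptions.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    #define WINDOW 32                       /* assumed reassembly window size */

    typedef struct {
        uint32_t next_fragnum;              /* next fragment owed to the stub */
        bool     present[WINDOW];
        void    *frag[WINDOW];
    } reassembly_t;

    void deliver_to_stub(void *fragment);   /* models HANDLE_IN_FRAG / HANDLE_OUT_FRAG */

    static void on_data_fragment(reassembly_t *r, uint32_t fragnum, void *data)
    {
        if (fragnum < r->next_fragnum || fragnum >= r->next_fragnum + WINDOW)
            return;                          /* duplicate or outside the window */
        r->present[fragnum % WINDOW] = true;
        r->frag[fragnum % WINDOW] = data;

        /* release the longest consecutive run starting at next_fragnum */
        while (r->present[r->next_fragnum % WINDOW]) {
            uint32_t i = r->next_fragnum % WINDOW;
            deliver_to_stub(r->frag[i]);
            r->present[i] = false;
            r->frag[i] = NULL;
            r->next_fragnum++;
        }
    }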
The CL_CLIENT statechart defines the client protocol machine
types for an RPC. An instance of each of the protocol machines is
created when an Invoke service primitive, as defined in
An instance of the CONTROL machine defines the protocol used to manage the reassembly and dispatching of received control PDUs for each RPC. Implementations must conform to the described behaviour.
An instance of the AUTHENTICATION machine manages the authentication service for each activity. It handles and verifies mutual authentication if a security service is requested for the associated RPC. It is independent of the underlying authentication protocol and the specific protection services that are in use. Implementations are required to conform to the described behaviour.
An instance of the CALLBACK machine defines the protocol used to manage callbacks to the client for an RPC. Implementations must conform to the described behaviour.
An instance of the PING machine defines the protocol used to ascertain liveness of the server for each RPC. Implementations must conform to the described behaviour.
An instance of the CANCEL machine defines the protocol used to manage cancel requests for each RPC. Implementations must conform to the described behaviour.
An instance of the DATA machine defines the client side of the protocol used to manage the data exchange between the client and server for each RPC. The contained machines labelled REQUEST and CONFIRMATION handle the request transmission and response receipt, respectively. Implementations must conform to the described behaviour.
The CL_SERVER statechart defines the server protocol machine types for an RPC. An instance of each of the protocol machines is created upon indication that an RPC request PDU for a new activity has been received. Subsequent RPC request PDUs for the same activity are handled by the same instance of the CL_SERVER statechart. Thus, the lifetime of the protocol machines corresponds to that of the associated activity. The concurrent protocol machines for an instance of a CL_SERVER statechart interact via events and state variables.
An instance of the CONTROL machine defines the protocol used to manage the reassembly and dispatching of received control PDUs for each RPC. Implementations must conform to the described behaviour.
An instance of the AUTHENTICATION machine manages the authentication service for each activity. It handles and verifies mutual authentication if a security service is requested for the associated RPC. It is independent of the underlying authentication protocol and the specific protection services that are in use. Implementations are required to conform to the described behaviour.
An instance of the CANCEL machine defines the protocol used to manage cancels received for each RPC. Implementations must conform to the described behaviour.
The WORKING machine defines the handling of an RPC by the server, including the orderly clean up of state after an RPC terminates. The WORKING machine contains the CALL machine.
For each RPC, an instance of the CALL protocol machine defines the server management of, and the protocol for, that RPC. The CALL machine is composed of two subordinate machines, DATA and CALLBACK. An instance of the DATA machine defines the server side of the protocol that is used to manage the data exchange between the client and server for each RPC. An instance of the CALLBACK machine defines the protocol used to manage conversation manager callbacks to the client, enabling servers to enforce at-most-once execution semantics.
Implementations are required to conform to the defined behaviour for the WORKING protocol machine and the protocol machines contained within WORKING.