Transport Provider Interface (TPI)

Copyright © 1997 The Open Group

Connection Acceptance

Connection acceptance with TPI is not easy to understand without the benefit of knowing how it has evolved. This Appendix therefore offers background information to explain the state of affairs under existing common implementations, and hence assist the reader in understanding an existing implementation or designing a new one.

The following text is provided for informational purposes only and should not be construed as imposing normative requirements.

For brevity in the following discussion:

user
means a transport user

provider
means a transport provider

address
means a transport address

endpoint
means a transport endpoint.

Accepting Incoming Connections

In order to field an incoming connection request, a user must establish an endpoint and use the T_BIND_REQ message (with a CONIND_number greater than zero) to bind to the local address. The CONIND_number in that message expresses the number of outstanding incoming connection requests the endpoint should support. There may be more than one endpoint bound to the same local address, but only one of them at a time may have a CONIND_number greater than zero. Such an endpoint, if it exists, is called a listener. The other endpoints, if any, bound to the same address will either be conducting outgoing connections or carrying out incoming connections which were processed by a listener. There can only be one listener for each local address because the provider needs to know where to send any T_CONN_IND messages for that address.

Each listening endpoint can only be listening on one local address. When the provider detects an incoming connection request, it looks for a matching listener in the TS_BIND state. If it does not find one, it fails the connection request; otherwise it constructs a T_CONN_IND message and sends it upstream on the listener to the user. The user sends a T_CONN_RES if it wants to accept the connection, or a T_DISCON_REQ if it does not.

It is permissible for the listener to conduct the actual connection, but this is unusual in practice because, while doing so, it cannot also perform its listening task, since it will be in some state other than TS_BIND. By far the more usual methodology is for the user to establish a new endpoint and use that for conducting the actual connection while the listener continues to listen for further incoming connections. The T_CONN_RES message contains a field, ACCEPTOR_id, which identifies the endpoint on which the user wishes to conduct the connection. The encoding of this field is implementation-specific, as are the methods of acquiring a valid value for it and the method employed by the provider in interpreting it.

In older versions of the TPI standard the ACCEPTOR_id field was called QUEUE_ptr and had the type queue_t *. This unfortunately exposed an implementation detail which made the use of TPI difficult on systems where a pointer has a different length at different times (for example, a 64-bit system supporting both 32-bit and 64-bit user applications), and also on systems where the transport provider operates in a different address space from other parts of the operating system. Nevertheless, on many systems the ACCEPTOR_id is still given the value of the read pointer of the provider queue pair of the endpoint which is to be used to conduct the connection. This remains a perfectly good implementation strategy for those systems which do not suffer the problems mentioned above. The value of the QUEUE_ptr field was never used by the user as any more than an opaque identifier (in fact, most implementations did not even expose the value to the user).

The Common Single Type Model Implementation

The provider constructs a T_CONN_IND message with the source address of the originating (usually remote) user. It includes any (protocol-specific) options and creates a unique reference number which it places in SEQ_number. The encoding and origin of this field are implementation-specific, under the constraint that it must be unique during the lifetime of the connection acceptance. Some implementations use the address of a kernel data structure associated with the connection request. Others use an incrementing counter and trust that fewer than 4,294,967,296 incoming connection requests occur on the provider before the user responds (a fairly safe assumption). The message is then sent to the user on the listener.

When the user receives the T_CONN_IND message, it usually opens an entirely new endpoint (to the same transport provider). It may choose to bind that new endpoint to a local address, or it may leave the provider to perform that task on receipt of the T_CONN_RES. Any address it binds to must satisfy the requirements of the provider for the connection. The new endpoint should not have a CONIND_number greater than 0.

The user constructs a T_CONN_RES message. It copies in the SEQ_number from the T_CONN_IND (otherwise the provider will not know to which connection it is responding), removes (if necessary) the options it is not prepared to support, and copies the remainder into the T_CONN_RES. The T_CONN_RES is now complete except for the ACCEPTOR_id. The user does not directly have the information to place in this field; only the operating system kernel can derive it. The usual solution is for the kernel to supply a special ioctl(2) call, I_FDINSERT, which expects as arguments a T_CONN_RES message and the file descriptor of the new (accepting) endpoint. This ioctl(2) call is specially treated. Before the message is sent down to the provider, the kernel uses the file descriptor to access the endpoint, extracts the value of the provider read queue pointer from that endpoint, and places it in the ACCEPTOR_id field. It then sends the message to the provider.

The provider cross-references the SEQ_number and determines that it has such a pending connection, then it checks that the ACCEPTOR_id matches the read queue pointer of a valid endpoint (it must exist and obey all the general and provider specific rules). If it is not already bound to a local address the provider will bind it to the same address as that to which the listener is bound. If the ACCEPTOR_id identifies the listener, then the listener becomes the acceptor and further incoming connection requests for its address will fail, at least until the connection terminates. In the usual case, however, a new endpoint is used to conduct the new connection.

If the listener concocts an ACCEPTOR_id which does not represent one of its own endpoints, and happens to get the value exactly right, it could foist one of its own connections onto an unsuspecting endpoint, provided that endpoint was in the correct state. This could constitute a denial-of-service attack. What it cannot do is hijack a connection from another listener.

Possible Multiple Type Model Implementation Methodologies

On 64-bit systems, a decision needs to be made about how to provide a consistent ACCEPTOR_id which is unique within each transport provider. If the I_FDINSERT ioctl(2) call is still used, then the ACCEPTOR_id encoding must be based on data which is accessible to the STREAM head when the I_FDINSERT call is made. The simplest approach may be to encode the 64-bit value of the read queue pointer in the 32-bit ACCEPTOR_id, possibly by truncation. The key is to preserve the uniqueness of the value as an identifier.

The STREAM head generates the identifier and places the result in ACCEPTOR_id. When the T_CONN_RES message reaches the provider, it decodes the ACCEPTOR_id to identify the accepting endpoint.
