Explanatory Notes for XTI
Transport Endpoints
A transport endpoint
specifies a communication path between a
transport user and a specific transport provider, and is identified by
a local file descriptor (fd).
When a user opens a
transport provider
identifier, a local file descriptor
fd
is returned which identifies the
transport endpoint.
A transport provider
is defined to be the transport protocol that provides
the services of the transport layer.
All requests to the
transport provider
must pass through a
transport endpoint.
The file descriptor
fd
is returned by the function
t_open()
and is used as an argument to the subsequent functions to
identify the transport endpoint.
A transport endpoint (fd and local address) can support only one
established transport connection at a time.
To be active, a
transport endpoint
must have a transport
address associated with it by the
t_bind()
function.
A transport connection is characterised by the association of two active
endpoints, made by using the connection establishment functions.
The
fd
is a communication path to a
transport provider.
Processes are not directly assigned to the
transport provider,
so multiple processes that obtain the
fd
by
open(),
fork()
or
dup()
operations may access a given communication path.
Note that the
open()
function will work only if the string passed to it is a pathname.
Note that in order to guarantee portability, the only operations which
the applications may perform on any
fd
returned by
t_open()
are those defined by XTI and
fcntl(),
dup()
or
dup2().
Other operations are permitted but these will have system-dependent
results.
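As an illustrative, non-normative sketch, the following C fragment opens a
transport endpoint and makes it active by binding a local address.  The
provider identifier "/dev/tcp" is only an assumed, system-dependent example.

    #include <xti.h>
    #include <fcntl.h>

    /*
     * Sketch: open a transport endpoint and make it active.  The provider
     * identifier "/dev/tcp" is an assumed, system-dependent example; a
     * portable application would obtain it from configuration.
     */
    int open_endpoint(void)
    {
        struct t_info info;
        int fd;

        /* t_open() returns the fd that identifies the transport endpoint. */
        if ((fd = t_open("/dev/tcp", O_RDWR, &info)) == -1) {
            t_error("t_open failed");
            return -1;
        }

        /* Bind with req == NULL: the provider chooses a local address. */
        if (t_bind(fd, NULL, NULL) == -1) {
            t_error("t_bind failed");
            (void) t_close(fd);
            return -1;
        }

        return fd;   /* the endpoint is now active */
    }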
Transport Providers
The transport layer may comprise one or more transport providers
at the same time. The identifier parameter of the transport provider passed
to the
t_open()
function determines the required transport provider. To keep the applications
portable,
the
identifier parameter of the transport provider
should not be hard-coded into the application source code.
An application which wants to manage
multiple transport providers must call
t_open()
for each provider.
For example, a server application which is waiting for incoming connection
indications from several transport providers must open a transport endpoint
for each provider and listen for connection
indications on each of the associated file descriptors.
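For illustration only, the following sketch opens one endpoint per transport
provider.  The provider identifiers are hypothetical placeholders for
whatever names the system actually supplies, and binding each endpoint to a
well-known address with qlen greater than zero would follow.

    #include <xti.h>
    #include <fcntl.h>

    /*
     * Sketch: a server managing several transport providers calls t_open()
     * once per provider.  The identifiers below are hypothetical.
     */
    static const char *providers[] = { "/dev/tcp", "/dev/osi_cots" };
    #define NPROVIDERS (sizeof(providers) / sizeof(providers[0]))

    int open_all(int fds[])
    {
        int i;

        for (i = 0; i < (int) NPROVIDERS; i++) {
            /* O_NONBLOCK so the server can later cycle through the
             * endpoints, polling each for connection indications.   */
            fds[i] = t_open(providers[i], O_RDWR | O_NONBLOCK, NULL);
            if (fds[i] == -1) {
                t_error("t_open failed");
                return -1;
            }
        }
        return 0;
    }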
Association of a UNIX Process to an Endpoint
One process can simultaneously open several
fds.
However, in synchronous mode, the process must manage the different
actions of the associated transport connections sequentially.
Conversely, several processes can share the same
fd
(by
fork()
or
dup()
operations) but
they have to synchronise themselves so as
not to issue a function that is unsuitable for the current state of the
transport endpoint.
It is important to remember that the
transport provider
treats all users of a
transport endpoint
as a single user.
If multiple processes are using the same endpoint, they should coordinate
their activities so as not to violate the state of the provider.
The
t_sync()
function returns the current state of the
provider to the user, thereby enabling the user to verify
the state before taking further action.
This coordination is only valid among cooperating processes;
it is possible that a process or an incoming event could change the
provider's state
after
a
t_sync()
is issued.
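A minimal sketch of this use of t_sync(), assuming the endpoint was
inherited from another process:

    #include <xti.h>

    /*
     * Sketch: a process that inherits an fd (for example across fork())
     * uses t_sync() to learn the provider's current state before issuing
     * further XTI calls.
     */
    int check_state(int fd)
    {
        int state = t_sync(fd);

        if (state == -1) {
            t_error("t_sync failed");
            return -1;
        }
        /* state is one of T_UNBND, T_IDLE, T_OUTCON, T_INCON, T_DATAXFER,
         * T_OUTREL or T_INREL; only functions valid in that state should
         * be issued, and the state may still change afterwards.          */
        return state;
    }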
A process can listen for an incoming connection indication on one
fd
and accept the connection on a different
fd
which has been bound with the
qlen
parameter (see
t_bind())
set to zero.
This facilitates the writing of a listener application whereby the
listener waits for all incoming connection indications on a given
Transport Service Access Point
(TSAP). The listener will accept the connection on a new
fd,
and
fork()
a child process to service the request without blocking other
incoming connection indications.
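The following non-normative sketch illustrates this listener pattern.  The
provider identifier "/dev/tcp" and the serve_connection() routine are
assumptions made for the example, and error handling is abbreviated.

    #include <xti.h>
    #include <fcntl.h>
    #include <unistd.h>

    extern void serve_connection(int fd);        /* hypothetical worker */

    /*
     * Sketch: wait for connection indications on listen_fd (already bound
     * with qlen > 0), accept each one on a fresh endpoint bound with
     * qlen = 0, and fork a child to service it.
     */
    void listener_loop(int listen_fd)
    {
        struct t_call *call;
        int conn_fd;

        for (;;) {
            call = (struct t_call *) t_alloc(listen_fd, T_CALL, T_ALL);
            if (call == NULL || t_listen(listen_fd, call) == -1) {
                t_error("t_listen failed");
                return;
            }

            /* Accept on a new endpoint so listen_fd stays free for
             * further incoming connection indications.               */
            conn_fd = t_open("/dev/tcp", O_RDWR, NULL);
            if (conn_fd == -1 ||
                t_bind(conn_fd, NULL, NULL) == -1 ||      /* qlen is 0 */
                t_accept(listen_fd, conn_fd, call) == -1) {
                t_error("could not accept connection");
                return;
            }

            if (fork() == 0) {               /* child serves the request */
                serve_connection(conn_fd);
                _exit(0);
            }
            (void) t_close(conn_fd);         /* parent keeps listening  */
            (void) t_free((char *) call, T_CALL);
        }
    }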
Use of the Same Protocol Address
If several endpoints are bound to the same protocol address, only one
at a time may be listening for incoming connections.
However, others may be in data transfer state
or establishing a transport
connection as initiators.
Modes of Service
The transport service interface supports two modes of
service: connection-mode and connectionless-mode.
A single transport endpoint may not support both modes of service
simultaneously.
The connection-mode transport service is circuit-oriented and
enables data to be
transferred over an established connection in a reliable,
sequenced manner.
This service enables the negotiation of the
parameters and options that govern the transfer of data.
It provides an identification mechanism that avoids the overhead
of address transmission and resolution during the data transfer
phase.
It also provides a context in which successive units of data,
transferred between peer users, are logically related.
This service is attractive to applications that
require relatively long-lived, datastream-oriented interactions.
In contrast, the connectionless-mode transport service
is message-oriented and supports data transfer
in self-contained units with no logical relationship required among
multiple units.
These units are also known as datagrams.
This service
requires only a pre-existing association between the
peer users involved, which determines the characteristics of
the data to be transmitted.
No dynamic negotiation of parameters and options is supported by
this service.
All the information required to deliver a unit of data
(for example, destination address) is presented to the transport provider,
together with the data to be transmitted, in a single service
access which need not relate to any other service access.
Also, each unit of data transmitted is entirely self-contained,
and can be independently routed by the transport provider.
This service is attractive to
applications that involve short-term request/response
interactions, exhibit a high level of redundancy, are dynamically
reconfigurable or do not require
guaranteed, in-sequence delivery of data.
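As a non-normative sketch of the connectionless mode, the fragment below
sends a single self-contained unit of data.  How the destination address in
the netbuf is filled in is provider-specific and merely assumed here.

    #include <xti.h>
    #include <string.h>

    /*
     * Sketch: in connectionless mode every unit of data carries its own
     * destination address and is presented to the provider in a single
     * t_sndudata() call.
     */
    int send_datagram(int fd, struct netbuf *dest, char *msg, unsigned int len)
    {
        struct t_unitdata ud;

        memset(&ud, 0, sizeof(ud));
        ud.addr = *dest;        /* destination address for this unit only */
        ud.udata.buf = msg;     /* the self-contained unit (datagram)     */
        ud.udata.len = len;
        /* ud.opt left empty: no per-unit options requested */

        return t_sndudata(fd, &ud);
    }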
Error Handling
Two levels of error are defined for the transport interface.
The first is the library error level.
Each library function has one or more error returns.
Failures are indicated by a return value of -1.
When the header file
<xti.h>
is included, the symbol
t_errno
is defined as a modifiable lvalue of type int (see footnote 1)
and
can be used to access the specific error number when such
a failure occurs. Applications should not include
t_errno
in the left operand of assignment statements.
This value is set when errors occur but is not cleared on
successful library calls, so it should be
tested only after an error has been indicated.
A diagnostic function,
t_error(),
prints out information on the current transport error.
The state of the transport provider may change if a transport error occurs.
The second level of error is the operating system service routine level.
A special library-level error number, [TSYSERR], has been defined;
it is generated by any library function when an operating system
service routine fails
or some general error occurs.
When a function sets
t_errno
to [TSYSERR], the specific system error may
be accessed through the external variable
errno.
For example, a system error can be generated by the
transport provider
when a protocol error has occurred. If the error is severe, it may
cause the file descriptor and
transport endpoint
to be unusable.
To continue in this case, all users of the
fd
must close it.
Then the
transport endpoint
may be re-opened and initialised.
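A minimal sketch of this two-level scheme, using t_unbind() merely as an
example of a call that may fail:

    #include <xti.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * Sketch: distinguish a library-level error from an operating system
     * error signalled through [TSYSERR].
     */
    void unbind_and_report(int fd)
    {
        if (t_unbind(fd) == -1) {
            if (t_errno == TSYSERR)
                /* the underlying cause is in errno */
                fprintf(stderr, "system error: %s\n", strerror(errno));
            else
                /* any other XTI error; t_error() would also print it */
                fprintf(stderr, "XTI error: %s\n", t_strerror(t_errno));
        }
    }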
Synchronous and Asynchronous Execution Modes
The transport service interface is inherently asynchronous;
various events may occur which are independent of the actions of a
transport user.
For example, a user may be sending data over a transport connection
when an asynchronous disconnection indication arrives.
The user must somehow be informed that the connection has been broken.
The transport service interface supports two execution modes for handling
asynchronous events: synchronous mode and asynchronous mode.
In the synchronous mode of operation, the transport primitives wait for specific
events before returning control to the user.
While waiting, the user cannot perform other tasks.
For example, a function that attempts to receive data in
synchronous mode
will wait until data arrives before returning control to the user.
Synchronous mode is the default mode of execution.
It is useful for user processes that want to wait for events to occur,
or for user processes that maintain only a single transport connection.
The asynchronous mode of operation, on the other hand, provides a mechanism
for notifying a user of some event without forcing the user to wait for
the event.
The handling of networking events in an asynchronous manner is seen as
a desirable capability of the transport interface.
This would enable users to perform useful work while expecting a
particular event.
For example, a function that attempts to receive data in asynchronous
mode will return control to the user immediately if no data is available.
The user may then periodically poll for incoming data until it arrives.
The asynchronous mode is intended for those applications that expect long
delays between events and have other tasks that they can perform in the
meantime or handle multiple connections concurrently.
The two execution modes are not provided through separate interfaces
or different functions.
Instead, functions that process incoming events have two
modes of operation: synchronous and asynchronous.
The desired mode is specified through the
O_NONBLOCK
flag, which may be set when the transport provider is initially opened,
or before any specific function or group of functions is
executed using the
fcntl()
operating system service routine.
The effect of this flag is local to this process and is completely
specified in the description of each function.
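A small sketch of switching an endpoint between the two execution modes with
fcntl():

    #include <xti.h>
    #include <fcntl.h>

    /*
     * Sketch: toggle asynchronous (non-blocking) execution mode on an
     * endpoint previously returned by t_open().
     */
    int set_async(int fd, int on)
    {
        int flags = fcntl(fd, F_GETFL, 0);

        if (flags == -1)
            return -1;
        if (on)
            flags |= O_NONBLOCK;    /* asynchronous: functions return at once */
        else
            flags &= ~O_NONBLOCK;   /* synchronous: functions wait for events */
        return fcntl(fd, F_SETFL, flags);
    }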
Nine asynchronous events (only eight if orderly release is not supported)
are defined in the transport
service interface to cover both connection-mode and
connectionless-mode service.
They are represented as separate bits in a bit-mask, using the following
defined symbolic names:
- T_LISTEN
- T_CONNECT
- T_DATA
- T_EXDATA
- T_DISCONNECT
- T_ORDREL
- T_UDERR
- T_GODATA
- T_GOEXDATA

These are described in Event Management.
A process that issues functions in synchronous mode must still be
able to recognise certain asynchronous events and act on them if necessary.
This is handled through a special transport error [TLOOK]
which is returned by a function when an asynchronous event occurs.
The
t_look()
function is then invoked to identify
the specific event that has occurred when this error is returned.
Another means to notify a process that an asynchronous event has occurred
is polling.
The polling capability enables processes to do useful work and periodically
poll for one of the above asynchronous events.
This facility is provided by setting O_NONBLOCK
for the appropriate primitive(s).
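The following sketch shows a synchronous sender reacting to [TLOOK]; which
events a given function can report is specified in its description, so only
T_DISCONNECT is consumed here.

    #include <xti.h>

    /*
     * Sketch: when a synchronous t_snd() fails with [TLOOK], t_look()
     * identifies the pending asynchronous event so it can be handled.
     */
    int send_or_handle_event(int fd, char *buf, unsigned int len)
    {
        if (t_snd(fd, buf, len, 0) == -1) {
            if (t_errno == TLOOK) {
                if (t_look(fd) == T_DISCONNECT)
                    (void) t_rcvdis(fd, NULL);   /* consume the event */
                /* other events are left for the caller to handle */
                return -1;
            }
            t_error("t_snd failed");
            return -1;
        }
        return 0;
    }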
Events and t_look()
All events that occur at a transport endpoint are stored by XTI. These
events are retrievable one at a time via the
t_look()
function. If multiple events occur, it is implementation-dependent in
what order
t_look()
will return the events. An event is outstanding on a transport endpoint
until it is consumed.
Every event has a corresponding consuming function which handles the
event and consumes it.
In addition, the abortive T_DISCONNECT consumes other
pending events.
Both T_DATA and T_EXDATA events are consumed when the corresponding
consuming function has read all the corresponding data associated with that
event. The intention of this is that T_DATA should always indicate that there is
data to receive.
Two events,
T_GODATA
and
T_GOEXDATA,
are also cleared as they are returned by
t_look().
The following table summarises this.

Event        | Cleared on t_look()? | Consuming XTI functions
-------------|----------------------|-----------------------------------------------
T_LISTEN     | No                   | t_listen()
T_CONNECT    | No                   | t_connect(), t_rcvconnect() (see footnote 2)
T_DATA       | No                   | t_rcv(), t_rcvv(), t_rcvudata(), t_rcvvudata()
T_EXDATA     | No                   | t_rcv(), t_rcvv()
T_DISCONNECT | No                   | t_rcvdis()
T_UDERR      | No                   | t_rcvuderr()
T_ORDREL     | No                   | t_rcvrel(), t_rcvreldata()
T_ORDRELDATA | No                   | t_rcvreldata()
T_GODATA     | Yes                  | t_snd(), t_sndv(), t_sndudata(), t_sndvudata()
T_GOEXDATA   | Yes                  | t_snd(), t_sndv()

Table: Events and t_look()
Effect of Signals
In both the synchronous and the asynchronous execution modes, XTI
calls may be affected by signals. Unless specified otherwise in
the description of each function, the functions behave as
described below.
If a synchronous XTI call is blocking under circumstances where
an asynchronous call would have returned because no event was
available, then the call returns -1 with
t_errno
set to
[TSYSERR] and
errno
set to [EINTR]. The state of the endpoint is unchanged.
In addition, an [EINTR] error may be returned by all XTI calls
(except
t_error()
and
t_strerror())
under implementation-defined
conditions. In these cases the state of the endpoint will not
have been changed, and no data will have been sent or received.
Any buffers provided by the user for return values may have been
overwritten.
A "well written" application will itself mask out signals except
during specific code sequences (typically only its idle point)
to avoid having to handle an [EINTR] return from all system
calls.
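Where an application does not mask signals, reissuing the interrupted call is
safe under the behaviour described above, since the endpoint state is
unchanged.  A minimal sketch:

    #include <xti.h>
    #include <errno.h>

    /*
     * Sketch: reissue a synchronous receive that was interrupted by a
     * signal ([TSYSERR] with errno set to [EINTR]).
     */
    int rcv_retry(int fd, char *buf, unsigned int len, int *flags)
    {
        int n;

        do {
            n = t_rcv(fd, buf, len, flags);
        } while (n == -1 && t_errno == TSYSERR && errno == EINTR);

        return n;
    }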
Application writers should be aware that XTI calls may be
implemented in a library as multiple system calls. In order to
maintain the endpoint and associated library data areas in a
consistent state, some of these system calls may be repeated when
interrupted by a signal.
Applications should not call XTI functions from within a signal
handler
or using the
longjmp()
or
siglongjmp()
interfaces (see reference
XSH)
to exit a signal handler, as either
may leave XTI data areas in an inconsistent state.
Applications may be able to cause the XTI library itself to
generate signals that interrupt its internal actions (for example, by
issuing ioctl(fd, I_SETSIG, S_INPUT) on a UNIX system); this
may cause the user's signal handler to be scheduled, but will not
stop the XTI call from completing.
Event Management
Each XTI call deals with one transport endpoint at a time.
It is not possible to wait for several events from
different sources, particularly from several transport connections at
a time.
We recognise the need for this functionality which may be available
today in a system-dependent fashion.
Throughout the document we refer to an event management service called Event Management (EM) which provides those functions useful to XTI.
This Event Management
will allow a process to be notified of the following events:
- T_LISTEN
  A connection request from a remote user was received by a transport provider
  (connection-mode service only); this event may occur under the following
  conditions:
  - The file descriptor is bound to a valid address.
  - No transport connection is established at this time.
- T_CONNECT
  In connection mode only; a connection response was received by the transport
  provider; occurs after a t_connect() has been issued.
- T_DATA
  Normal data (whole or part of a Transport Service Data Unit (TSDU)) was
  received by the transport provider.
- T_EXDATA
  Expedited data was received by the transport provider.
- T_DISCONNECT
  In connection mode only; a disconnection request was received by the transport
  provider. It may be reported on both data transfer functions and connection
  establishment functions, and on the t_snddis() function.
- T_ORDREL
  An orderly release request was received by a transport provider
  (connection mode with orderly release only).
- T_UDERR
  In connectionless mode only; an error was found in a previously sent datagram.
  It may be notified on the t_rcvudata() or t_unbind() function calls.
- T_GODATA
  Flow control restrictions on normal data flow that led to a [TFLOW] error
  have been lifted. Normal data may be sent again.
- T_GOEXDATA
  Flow control restrictions on expedited data flow that led to a [TFLOW] error
  have been lifted. Expedited data may be sent again.
Footnotes
- 1. This may be implemented as a macro. In addition, the name _t_errno is an
     XTI library-reserved name for use within such a macro. A typical
     definition of t_errno for a multithreaded implementation is:

         extern int *_t_errno(void);
         #define t_errno (*(_t_errno()))

- 2. In the case of the t_connect() function, the T_CONNECT event is both
     generated and consumed by the execution of the function and is therefore
     not visible to the application.