
Important: The information in this document is obsolete and should not be used for new development.


Inside Macintosh: Networking With Open Transport / Part 1 - Open Transport Essentials
Chapter 4 - Endpoints


Using Endpoints

This section begins by explaining how you create an endpoint and associate it with an address. Next, it introduces the functions you can use to obtain information about endpoints and discusses some issues relating to asynchronous processing that specifically affect endpoint providers. Then, it explains some issues relating to data transfer that apply to all types of endpoint providers. Finally, it describes how you can implement each type of service.

No matter what type of service you want to implement, you must read the sections "Opening and Binding Endpoints," "Obtaining Information About Endpoints," "Handling Events for Endpoints," and "Sending and Receiving Data." After you have read these sections, you can read the section describing the type of service you are interested in implementing.

Table 4-6 shows how some of the Open Transport protocols fit with an endpoint's type of service. For example, if you want to use ATP, you would need to read the section "Using Connectionless Transaction-Based Service". If you want to use ADSP, you would need to read the section "Establishing and Terminating Connections" and the section "Using Connection-Oriented Transactionless Service".

Table 4-6 The Open Transport type-of-service matrix and some Open Transport protocols

                      Connectionless      Connection-oriented
Transactionless       DDP                 Serial connection
                      PPP                 ADSP
                      IP                  PAP
                      UDP                 TCP
Transaction-based     ATP

Note
The sections that follow present information in such a way as to suggest that communication is always taking place between two Open Transport clients. This does not have to be true. For example, an Open Transport client using a connectionless transactionless DDP endpoint can communicate seamlessly with a client using classic AppleTalk's DDP protocol and interface. However, because this book is about Open Transport, we always show how communication works between two Open Transport clients.

Opening and Binding Endpoints

After you have initialized Open Transport and determined the endpoint's configuration, you can open and bind the endpoint. You open the endpoint with the OTOpenEndpoint or OTAsyncOpenEndpoint function. Opening an endpoint with the OTOpenEndpoint function sets the default mode of execution to be synchronous; opening an endpoint with the OTAsyncOpenEndpoint function sets the default mode of execution to be asynchronous. You can change an endpoint's mode of execution at any time by calling the OTSetSynchronous or OTSetAsynchronous function, both of which are described in "Providers".

One of the parameters that you pass to the function used to open the endpoint is a pointer to a configuration structure that Open Transport needs to define the protocol stack providing data transport services. The chapter "Getting Started With Open Transport" contains information about creating a configuration structure for an endpoint provider.

If you use the OTAsyncOpenEndpoint function to open an endpoint, you also specify a notifier function that the endpoint provider can use to call your application when an asynchronous or completion event takes place. If you use the OTOpenEndpoint function to open an endpoint, and you want to handle asynchronous events using a notifier function, you must use the OTInstallNotifier function to install your notifier function.
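
For example, an asynchronous open might look like the following sketch. It is not a listing from this book: the prototypes, constants, and the pascal calling convention are assumed from the OpenTransport.h interfaces of this era, and the "adsp" configuration string and the way the endpoint reference is stored are purely illustrative.

    #include <OpenTransport.h>

    /* Notifier installed by OTAsyncOpenEndpoint; Open Transport calls it
       at deferred-task time with asynchronous and completion events. */
    static pascal void MyNotifier(void *contextPtr, OTEventCode code,
                                  OTResult result, void *cookie)
    {
        EndpointRef *epPtr = (EndpointRef *)contextPtr;

        switch (code) {
        case T_OPENCOMPLETE:
            /* For T_OPENCOMPLETE, the cookie parameter is the new endpoint. */
            if (result == kOTNoError)
                *epPtr = (EndpointRef)cookie;
            break;
        /* T_BINDCOMPLETE, T_DATA, and so on would be handled here as well. */
        }
    }

    static OSStatus OpenADSPEndpointAsync(EndpointRef *epPtr)
    {
        *epPtr = kOTInvalidEndpointRef;
        /* "adsp" is just an example configuration string. */
        return OTAsyncOpenEndpoint(OTCreateConfiguration("adsp"),
                                   0, NULL, MyNotifier, epPtr);
    }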

When Open Transport creates an endpoint, it establishes important state information for the endpoint, including information about the endpoint's mode of execution, its type of service, and its state.

You can retrieve this information by calling functions that return information about the endpoint. These functions are described in the next section, "Obtaining Information About Endpoints".

When the function you use to open the endpoint returns, it passes back to you an endpoint reference. You can pass this reference as a parameter to any endpoint provider function or any general provider function. For example, you pass this reference as a parameter to the OTBind function, which you must use to bind an endpoint after opening it.

Binding an endpoint associates the endpoint with a logical address. Depending on the protocol you use and on your application's needs, you can select a specific address or you can have the protocol choose an address for you. For information about valid address formats, consult the documentation for your protocol. The general rule for binding endpoints is simple: you cannot bind more than one connectionless endpoint to a single address. You can bind more than one connection-oriented endpoint to the same address; for additional information about this possibility, see the section "Processing Multiple Connection Requests".
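
As a rough illustration, again assuming the classic OpenTransport.h interfaces, a synchronous bind that lets the protocol choose the address might look like the following sketch.

    #include <OpenTransport.h>

    /* Bind an already opened endpoint, letting the protocol pick the address.
       Passing NULL for reqAddr asks the provider to choose; retAddr returns
       the address actually bound (its buffer must be large enough). */
    static OSStatus BindAnyAddress(EndpointRef ep,
                                   UInt8 *addrBuf, ByteCount addrBufSize)
    {
        TBind retAddr;

        retAddr.addr.maxlen = addrBufSize;
        retAddr.addr.len    = 0;
        retAddr.addr.buf    = addrBuf;
        retAddr.qlen        = 0;          /* not listening for connections */

        return OTBind(ep, NULL, &retAddr);
    }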

No matter what type of service you need to implement, you must know how to obtain information about the endpoints you have opened and how to handle asynchronous and completion events for these endpoints. These issues are addressed in the next two sections, "Obtaining Information About Endpoints" and "Handling Events for Endpoints." After you read these sections, you can proceed by reading about the type of service you want to implement.

Obtaining Information About Endpoints

While you can use general provider functions to determine an endpoint's mode of execution, you must use endpoint-specific functions to obtain the endpoint's type of service, state, or address.

The TEndpointInfo structure contains most of the information you need to determine how you can use an endpoint. You can obtain a copy of this structure when you open the endpoint, or by calling the OTGetEndpointInfo function. This structure specifies the maximum size of the buffers you need to allocate when calling functions that return address and option information or data, and it also contains more specific details about the type of service the endpoint provides. For example, if you have opened a connection-oriented endpoint, the servtype field of the TEndpointInfo structure specifies whether the endpoint supports orderly release.

To obtain information about an endpoint's state, you call the OTGetEndpointState function. This function returns a positive integer indicating the endpoint state or a negative integer corresponding to a result code. Table 4-3 lists and describes endpoint states. If the endpoint is in asynchronous mode and you are not using a notifier function, you can use the OTGetEndpointState function to poll the endpoint and determine whether a specific function has finished executing. The completion of some functions results in an endpoint's changing state. For additional information, see Table 4-4.
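
A sketch of both calls, assuming the TEndpointInfo fields and constants from the era's headers, might look like this:

    #include <OpenTransport.h>

    static Boolean EndpointSupportsOrderlyRelease(EndpointRef ep)
    {
        TEndpointInfo info;

        if (OTGetEndpointInfo(ep, &info) != kOTNoError)
            return false;

        /* T_COTS_ORD and T_TRANS_ORD indicate orderly-release support. */
        return (info.servtype == T_COTS_ORD || info.servtype == T_TRANS_ORD);
    }

    static Boolean EndpointIsIdle(EndpointRef ep)
    {
        /* OTGetEndpointState returns a positive state (T_IDLE, T_DATAXFER,
           and so on) or a negative error code. */
        return (OTGetEndpointState(ep) == T_IDLE);
    }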

To obtain address information about an endpoint or its peer, you can use one of the following two functions:

For information about the address formats for the protocol you are using, please consult the documentation supplied for the protocol. This manual explains these formats for the TCP/IP and the AppleTalk protocols. For information about obtaining the addresses that correspond to a name pattern, see "Mappers".

Handling Events for Endpoints

Like other providers, endpoint providers can operate synchronously or asynchronously. If you use an asynchronous endpoint, you need to create a notifier function that the provider can call to inform you of provider events. This section describes how you handle events for endpoint providers.

Event handling for endpoints is basically the same as that described for providers in "Provider Events". One slight difference lies in the way the endpoint provider generates T_DATA, T_EXDATA, and T_REQUEST asynchronous events, which signal the arrival of incoming data or of an incoming transaction request. For the sake of efficiency, the provider notifies you just once that incoming data has arrived. To read all the data, you must call the function that clears the event until the function returns with the kOTNoDataErr result.

IMPORTANT
An endpoint does not receive any more T_DATA events until its current T_DATA event is cleared.

Table 4-7 lists the functions you use to clear pending asynchronous events.

Table 4-7 Pending asynchronous events and the functions that clear them

Pending event    Open Transport function that clears the event
T_LISTEN         OTListen
T_CONNECT        OTRcvConnect
T_DATA           OTRcv, OTRcvUData
T_EXDATA         OTRcv
T_DISCONNECT     OTRcvDisconnect
T_UDERR          OTRcvUDErr
T_ORDREL         OTRcvOrderlyDisconnect
T_GODATA         OTSnd, OTSndUData, OTLook
T_GOEXDATA       OTSnd, OTLook
T_PASSCON        none

This works as follows: a transport provider maintains a queue of data or commands to deliver to the client. If the queue is empty when data or a command arrives, the provider notifies the client. If the queue is not empty, no notification is delivered when the data or command is queued. Instead, whenever the client reads the data or command at the head of the queue, Open Transport examines the next element in the queue, if there is one. If that element is of the same type as the one just removed, no event is generated. If it is of a different type, the provider delivers a new event to the client, typically just before the function that removed the head element of the queue returns.
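
For example, a notifier's T_DATA handling for a connection-oriented endpoint might drain the queue as in the following sketch; buffer handling is simplified, and the prototypes are assumed from the era's headers.

    #include <OpenTransport.h>

    /* Called when the notifier receives T_DATA; keep reading until
       kOTNoDataErr so that the provider will generate the next T_DATA. */
    static void DrainNormalData(EndpointRef ep)
    {
        UInt8    buf[1024];
        OTFlags  flags;
        OTResult result;

        do {
            result = OTRcv(ep, buf, sizeof(buf), &flags);
            if (result > 0) {
                /* result is the number of bytes read; consume them here. */
            }
        } while (result > 0);

        if (result != kOTNoDataErr) {
            /* Some other condition is pending (for example kOTLookErr);
               call OTLook to find out what it is. */
        }
    }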

Clearing Events and Synchronization Problems

You do not have to issue calls that clear events in the notification routine itself, but until you make the consuming calls and receive a kOTNoDataErr error, another T_DATA, T_EXDATA, or T_REQUEST event is not guaranteed to be issued.

One exception to this rule occurs when dealing with transaction-based protocols. When the client gets a T_REPLY event, it should call the OTRcvUReply function until the function returns the kOTNoDataErr result. If the client calls OTRcvUReply from the foreground (rather than from a notification routine), the following sequence can occur: while the client is busy reading replies in the foreground, a request arrives, causing a T_REQUEST event to be generated. If the client happens to be calling OTRcvUReply at that moment, the function returns kOTLookErr rather than kOTNoDataErr. In this case (and in the converse case for T_REQUEST events), another T_REPLY event is generated when a new reply arrives.

Notifier Reentrancy

Open Transport endpoints are handled independently. That means that you can use the same code for the notifiers of two different endpoints. A different instance of the notifier is invoked for each endpoint using the notifier.

Whatever events are pending on one endpoint have (for the most part) no effect on any other endpoints. For example, assume that an endpoint is notified of a T_DATA event. Following this, a separate T_DATA event is sent to a second endpoint. The notifier for the second endpoint is invoked, interrupting the first endpoint's processing of its T_DATA event. For additional information, see "Using Asynchronous Processing With a Notifier".

Polling for Events

Open Transport also includes the endpoint provider function OTLook. You can use the OTLook function to poll an endpoint for pending asynchronous events when you are not using a notifier, or to determine which event caused an endpoint function to fail with the kOTLookErr result.

Establishing and Terminating Connections

To implement a connection-oriented service, you open and bind the endpoints, establish a connection, transfer data, and then terminate the connection when you are done.

The following sections explain how you establish and terminate a connection. The functions you use to establish and terminate a connection are the same for transactionless and transaction-based service, but the calls you use to transfer data differ. The section "Using Connection-Oriented Transactionless Service" explains how you transfer data once you have established a connection.

Before you can use a connection-oriented endpoint to initiate or accept a connection, you must open and bind the endpoint. For example, if you are using AppleTalk, you might open an ADSP endpoint, which offers connection-oriented transactionless service. You don't have to do anything special to bind an endpoint that is intended to be the active peer of a connection. However, when you bind an endpoint intended to be the passive peer of a connection, you must specify a value for the qlen field of the reqAddr parameter for the OTBind function. The qlen field indicates the number of outstanding connection requests that can be queued for that endpoint.

Note
The value you specify for the qlen field indicates your desired value, but Open Transport may negotiate a lower value, depending upon the number of internal buffers available. The negotiated number of outstanding connection indications is returned to you in the qlen field of the retAddr parameter for the OTBind function. After calling the OTBind function, you might want to examine this field to see whether the protocol has imposed a limit on the qlen value.
You are allowed to bind multiple connection-oriented endpoints to a single address. However, only one of these endpoints can accept incoming connection requests. That is, only one endpoint can specify a value for qlen that is greater than 0. For more information, see the section "Processing Multiple Connection Requests".
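
A hypothetical bind of a listening endpoint might look like the following sketch; the address setup is protocol specific and passed in from outside, and the requested queue length of 4 is an arbitrary example.

    #include <OpenTransport.h>

    /* Bind a passive endpoint, asking for up to 4 outstanding connection
       requests.  reqAddrBuf/reqAddrLen hold a protocol-specific address. */
    static OSStatus BindListener(EndpointRef ep,
                                 UInt8 *reqAddrBuf, ByteCount reqAddrLen,
                                 UInt8 *retAddrBuf, ByteCount retAddrMax)
    {
        TBind    req, ret;
        OSStatus err;

        req.addr.maxlen = reqAddrLen;
        req.addr.len    = reqAddrLen;
        req.addr.buf    = reqAddrBuf;
        req.qlen        = 4;              /* desired queue length */

        ret.addr.maxlen = retAddrMax;
        ret.addr.len    = 0;
        ret.addr.buf    = retAddrBuf;

        err = OTBind(ep, &req, &ret);
        if (err == kOTNoError && ret.qlen < req.qlen) {
            /* The provider negotiated a smaller queue; remember ret.qlen. */
        }
        return err;
    }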

Establishing a Connection

You use the following functions to establish a connection:

Active peer calls    Passive peer calls    Meaning
OTConnect            -                     Requests a connection to the passive peer.
-                    OTListen              Gets information about an incoming connection request.
-                    OTAccept              Accepts the connection request identified by the OTListen function. The connection can be accepted by a different endpoint than the one listening for incoming connection requests.
OTRcvConnect         -                     Reads the status of a pending or completed asynchronous call to the OTConnect function.
-                    OTSndDisconnect       Rejects an incoming connection request.
OTRcvDisconnect      -                     Identifies the cause of a rejected connection and acknowledges the corresponding disconnection event.

Figure 4-3 illustrates the process of establishing a connection in synchronous mode.

Figure 4-3 Establishing a connection with the active peer in synchronous mode

As Figure 4-3 shows, if the active peer is in synchronous mode, the OTConnect function does not return until the connection has been established or the connection attempt has been rejected. If the passive peer has a notifier function installed, the endpoint provider calls it, passing T_LISTEN for the code parameter. The notifier calls the OTListen function, which reads the connection request. The passive peer can now either accept the connection request using the OTAccept function or reject the request by calling the OTSndDisconnect function. The connection attempt might also fail if the request is never received and the endpoint provider times out the call to the OTConnect function.

If the passive peer calls the OTAccept function to accept the connection, the OTConnect function returns with the kOTNoError result. If the passive peer rejects the connection or the request times out, the OTConnect function returns with the kOTLookErr result. When the OTConnect function returns, the active peer must examine the result. If the call succeeded, it should begin to transfer data. If the call failed, the active peer should call the OTRcvDisconnect function to restore the endpoint to a valid state for subsequent operations.
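
For the active peer in synchronous mode, the sequence might be sketched as follows; the peer's address is assumed to be already formatted for the protocol in use, and the prototypes follow the era's OpenTransport.h interfaces.

    #include <OpenTransport.h>

    /* Synchronous active open: OTConnect blocks until the connection is
       accepted or rejected.  destBuf/destLen hold the peer's address. */
    static OSStatus ConnectSync(EndpointRef ep, UInt8 *destBuf, ByteCount destLen)
    {
        TCall    sndCall;
        OSStatus err;

        OTMemzero(&sndCall, sizeof(sndCall));
        sndCall.addr.maxlen = destLen;
        sndCall.addr.len    = destLen;
        sndCall.addr.buf    = destBuf;

        err = OTConnect(ep, &sndCall, NULL);   /* NULL: no returned call info */
        if (err == kOTLookErr) {
            /* Connection rejected or timed out: clear the T_DISCONNECT event
               so the endpoint returns to a usable state. */
            (void) OTRcvDisconnect(ep, NULL);
        }
        return err;
    }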

If the active peer is in asynchronous mode, the OTConnect function returns right away with the kOTNoDataErr result, and the active peer must rely on its notifier function to determine whether the call succeeded. Figure 4-4 illustrates the process of establishing a connection when the active peer is in asynchronous mode.

Figure 4-4 Establishing a connection in asynchronous mode

The active peer calls the OTConnect function, which returns right away with a code of kOTNoDataErr. The endpoint provider calls the passive peer's notifier, passing T_LISTEN for the code parameter. If the passive peer accepts the connection, the endpoint provider calls the active peer's notifier, passing T_CONNECT for the code parameter.

If the passive peer rejects the connection or if the connection times out, the endpoint provider calls the active peer's notifier, passing T_DISCONNECT for the code parameter. The active peer must then call either the OTRcvConnect function in response to a T_CONNECT event or the OTRcvDisconnect function in response to a T_DISCONNECT event. The endpoint provider, in turn, passes the T_ACCEPTCOMPLETE event back to the passive peer (for a successful connection) or the T_DISCONNECTCOMPLETE event (for a rejected connection). The passive peer requires the information provided by these two events to determine whether the connection succeeded.
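
A passive peer's notifier might handle the T_LISTEN event roughly as follows. This sketch accepts the connection on the listening endpoint itself and uses an arbitrary address buffer size; a server managing several connections would instead pass a second, freshly opened endpoint to OTAccept, as discussed in "Processing Multiple Connection Requests".

    #include <OpenTransport.h>

    /* Fragment of a passive peer's notifier: handle an incoming connection
       request on the listening endpoint by accepting it on that same endpoint. */
    static void HandleListen(EndpointRef listener)
    {
        UInt8    addrBuf[64];                   /* illustrative size */
        TCall    call;
        OSStatus err;

        OTMemzero(&call, sizeof(call));
        call.addr.maxlen = sizeof(addrBuf);
        call.addr.buf    = addrBuf;

        err = OTListen(listener, &call);        /* clears the T_LISTEN event */
        if (err == kOTNoError)
            err = OTAccept(listener, listener, &call);  /* accept on same endpoint */
        if (err != kOTNoError) {
            /* Could reject instead with OTSndDisconnect(listener, &call). */
        }
    }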

Sending User Data With Connection or Disconnection Requests

The OTConnect function and the OTSndDisconnect function both pass data structures that include fields for data that you might want to send at the time that you are setting up or tearing down a connection. However, you can only send data when calling these two functions if the connect and discon fields of the TEndpointInfo structure specify that the endpoint can send data with connection or disconnection requests. The amount of data sent must not exceed the limits specified by these two fields. To determine whether the endpoint provider for your endpoint supports data transfer during the establishment of a connection, you must examine the connect and discon fields of the TEndpointInfo structure for the endpoint.

Processing Multiple Connection Requests

If you process multiple connection requests for a single endpoint, you must make sure that the number of outstanding connection requests does not exceed the limit defined for the listening endpoint when you bound that endpoint. An outstanding connection request is a request that you have read using the OTListen function but that you have neither accepted nor rejected. You must also decide whether to accept connections on the same endpoint that is listening for the connections or on a different endpoint.

When you bind the passive endpoint, you must specify a value greater than 0 for the qlen field of the reqAddr parameter to the OTBind function. This value indicates the number of outstanding connections that the provider can queue for this endpoint. As you process incoming connection requests, you must check that the number of connections still waiting to be processed does not exceed this negotiated value for the listening endpoint. How you do this depends on the number of outstanding requests and on whether you are accepting connection requests on the same endpoint as the endpoint listening for requests or accepting them on a different endpoint. Connection acceptance is governed by the following rules:

What these rules add up to in practical terms is that if you anticipate managing more than one connection at a time, you should open an endpoint to listen for connections and then open additional endpoints as needed to accept incoming connections.

Terminating a Connection

You can terminate a connection using either an abortive or orderly disconnect. During an abortive disconnect, the connection is torn down without the underlying protocol taking any steps to make sure that data being transferred has been sent and received. When the client calls the OTSndDisconnect function, the connection is immediately torn down, and the client cannot be sure that the provider actually sent any locally buffered data. During an orderly disconnect, the underlying protocol ensures at least that all outgoing data is actually sent. Some protocols go further than this, using an over-the-wire handshake that allows both peers to finish transferring data and agree to disconnect. The following sections describe the steps required for abortive and orderly disconnects. For additional information about handling an unexpected disconnection from a remote client, see "Handling Dead Clients".

Using an Abortive Disconnect

You use the OTSndDisconnect and OTRcvDisconnect functions to perform an abortive disconnect. Figure 4-5 illustrates the process for two asynchronous endpoints. The figure shows the active peer initiating the disconnection; in fact, either peer can initiate the disconnection.

Figure 4-5 An abortive disconnect

In asynchronous mode, the endpoint initiating the disconnection calls the OTSndDisconnect function. Parameters to the function identify the endpoint and point to a TCall structure that is of interest only if the endpoint provider supports sending data with disconnection requests. To determine whether your protocol does, examine the value of the discon field of the TEndpointInfo structure for your endpoint. If you do not want to send data, or if you cannot send data to the passive peer, you can pass NULL instead of a pointer to a TCall structure.

The endpoint provider receiving the disconnection request calls the passive peer's notifier function, passing T_DISCONNECT for the code parameter. The client must acknowledge the disconnection event by calling the OTRcvDisconnect function, which clears the event and retrieves any data sent with it. Parameters to the OTRcvDisconnect function identify the endpoint and point to a TDiscon structure. This structure is of interest only if the endpoint provider supports sending data with disconnection requests, or if the passive peer is managing multiple connections and needs the sequence field of the TDiscon structure to determine which connection has been closed. Otherwise, you can pass NULL instead of a pointer to a TDiscon structure.

When the connection has been closed, the endpoint provider calls the active peer's notifier, passing T_DISCONNECTCOMPLETE for the event parameter. At this time the endpoint is once more in the T_IDLE state.
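
A minimal sketch of both sides, assuming no user data is exchanged with the disconnection, might look like this:

    #include <OpenTransport.h>

    /* Initiating side: tear the connection down immediately.  Passing NULL
       means no user data is sent with the disconnection request. */
    static OSStatus AbortConnection(EndpointRef ep)
    {
        return OTSndDisconnect(ep, NULL);
    }

    /* Other side, from its notifier when code == T_DISCONNECT: acknowledge
       the event.  Passing NULL discards any data and the reason/sequence. */
    static void HandleDisconnect(EndpointRef ep)
    {
        (void) OTRcvDisconnect(ep, NULL);
    }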

Using Orderly Disconnects

There are two kinds of orderly disconnects: remote orderly disconnects and local orderly disconnects. The first kind, supported by TCP, provides an over-the-wire (three-way) handshake that guarantees that all data has been sent and that both peers have agreed to disconnect. The second kind, supported by ADSP and most other connection-oriented transactionless protocols, is a locally implemented orderly release mechanism ensuring that data currently being transferred has been received by both peers before the connection is torn down. To determine whether your protocol supports orderly disconnects, you must examine the servtype field of the TEndpointInfo structure for the endpoint. A value of T_COTS_ORD or T_TRANS_ORD indicates that the endpoint supports orderly disconnect. It is safest to assume, unless you know for certain it to be otherwise, that the endpoint supports only local orderly disconnects.

Figure 4-6 shows the steps required to complete a remote orderly disconnect. The figure shows the active peer initiating the disconnection; in fact, either peer can initiate the disconnection.

Figure 4-6 Remote orderly disconnect

The active peer initiates the disconnection by calling the OTSndOrderlyDisconnect function to begin the process and to let the remote endpoint know that the active peer will not send any more data. (Once it calls this function, the active peer can receive data but it cannot send any more data.) The provider calls the passive peer's notifier function, passing T_ORDREL for the code parameter. In response, the passive peer must read any unread data and can send additional data. After it has finished reading the data, it must call the OTRcvOrderlyDisconnect function to acknowledge receipt of the orderly release indication. After calling this function, the passive peer cannot read any more data; however, it can continue to send data. This is a half-closed connection. When the passive peer is finished sending any additional data, it calls the OTSndOrderlyDisconnect function to complete its part of the disconnection. Following this call, it cannot send any data. The endpoint provider calls the active peer's notifier, passing T_ORDREL for the code parameter, and the active peer calls the OTRcvOrderlyDisconnect function to acknowledge receipt of the disconnection event and to place the endpoint in the T_IDLE state if this was the only outstanding connection.
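
The passive peer's side of this exchange might be sketched as follows: it reads any leftover data, acknowledges the release, and then closes its own sending side. The buffer size is arbitrary, and any final data the passive peer wanted to send would be transmitted between the two disconnect calls.

    #include <OpenTransport.h>

    /* Passive peer, from its notifier when code == T_ORDREL. */
    static void HandleOrderlyRelease(EndpointRef ep)
    {
        UInt8    buf[1024];
        OTFlags  flags;
        OTResult result;

        /* Read whatever the remote peer sent before it closed its side. */
        do {
            result = OTRcv(ep, buf, sizeof(buf), &flags);
        } while (result > 0);

        if (result == kOTNoDataErr) {
            (void) OTRcvOrderlyDisconnect(ep);   /* acknowledge the release  */
            /* ... any remaining outgoing data could be sent here ...        */
            (void) OTSndOrderlyDisconnect(ep);   /* close our sending side   */
        }
    }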

Figure 4-7 shows the steps required to complete a local orderly disconnect.

Figure 4-7 A local orderly disconnect

As you can see, the sequence of steps is very similar to that shown in Figure 4-6. The main difference is that the connection is broken as soon as the active peer calls the OTSndOrderlyDisconnect function. As a result, either peer can continue to read any unread data, but neither peer can send data after the initial call to the OTSndOrderlyDisconnect function.

Sending and Receiving Data

This section describes some of the issues that affect send and receive operations for all types of endpoints. After you read this section, you should read "Transferring Data Between Transactionless Endpoints" or "Transferring Data Between Transaction-Based Endpoints" for additional information about the type of endpoint you are using.

The chapter "Advanced Topics" presents additional material that concerns the transfer of data and improving performance; this material includes

Please consult that chapter for more information.

Sending Data Using Multiple Sends

If you are sending a single data unit using multiple sends, you must do the following:

  1. Set the T_MORE bit in the flags field each time you call the send function. This lets the provider know that it has not yet received the entire data unit.

  2. Clear the T_MORE bit the last time you call the send function. This lets the provider know that the data unit is complete.

Even though you are using multiple sends to send the data, the total size of the data sent cannot exceed the value specified for the tsdu field (for normal data or replies) or etsdu field (for expedited data or requests) of the TEndpointInfo structure for the endpoint.

Sending data using multiple sends does not necessarily affect the way in which the remote client receives the data. That is, just because you have used several calls to a send function to send data does not mean that the remote client must call a receiving function several times to read the data.
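
A sketch of a two-part send might look like this; it ignores the possibility of partial sends in nonblocking mode and assumes the era's OTSnd prototype.

    #include <OpenTransport.h>

    /* Send one logical data unit in two pieces.  T_MORE on the first call
       tells the provider the unit is not yet complete; clearing it on the
       last call marks the end of the unit. */
    static OSStatus SendInTwoPieces(EndpointRef ep,
                                    void *part1, ByteCount len1,
                                    void *part2, ByteCount len2)
    {
        OTResult result;

        result = OTSnd(ep, part1, len1, T_MORE);
        if (result < 0)
            return (OSStatus) result;

        result = OTSnd(ep, part2, len2, 0);     /* T_MORE cleared: end of unit */
        return (result < 0) ? (OSStatus) result : kOTNoError;
    }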

IMPORTANT
Connectionless transactionless protocols do not support the T_MORE flag.

Receiving Data

If you are reading data and if the T_MORE bit in the flags field is set, this means that the buffer you have allocated to hold the data is not big enough. You need to call the receive function again and read more data until the T_MORE bit is cleared, which indicates that you have read the entire data unit.
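
A sketch of such a read loop follows; it assumes the supplied buffer is large enough for the whole data unit and omits that overflow check.

    #include <OpenTransport.h>

    /* Read one complete data unit, calling OTRcv until T_MORE is no longer
       set.  Returns the total length read or a negative error code. */
    static OTResult ReadWholeUnit(EndpointRef ep, UInt8 *buf, ByteCount bufSize)
    {
        ByteCount total = 0;
        OTFlags   flags = 0;
        OTResult  result;

        do {
            result = OTRcv(ep, buf + total, bufSize - total, &flags);
            if (result < 0)
                return result;            /* kOTNoDataErr, kOTLookErr, ... */
            total += result;
        } while (flags & T_MORE);

        return (OTResult) total;
    }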

Transferring Data Between Transactionless Endpoints

Open Transport defines two sets of functions that you can use to send and receive data between transactionless endpoints. You use one set with connectionless service and the other with connection-oriented service.

Using Connectionless Transactionless Service

You use connectionless transactionless service, as provided by DDP and UDP, to send and receive discrete data packets.

After opening and binding a connectionless transactionless endpoint, you can use three functions to send and receive data: OTSndUData, which sends a data unit; OTRcvUData, which receives a data unit; and OTRcvUDErr, which retrieves information about a send that failed.

Either endpoint can send or receive data. However, the endpoint sending data cannot determine whether the other endpoint has actually received the data.

Endpoints are not able to determine that the specified address or options are invalid until after the data is sent. In this case, the sender's endpoint provider might issue the T_UDERR event. You should include code in your notifier function that calls the OTRcvUDErr function in response to this event to determine what caused the send function to fail and to place the sending endpoint in the correct state for further processing.

If the endpoint receiving data has allocated a buffer that is too small to hold the data, the OTRcvUData function returns with the T_MORE bit set in the flags parameter. In this case, you should call the OTRcvUData function repeatedly until the T_MORE bit is cleared.
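
The following sketch shows one way a send and a notifier-driven read loop might look for a connectionless transactionless endpoint, with address formatting and error recovery left out; the structure fields follow the era's OpenTransport.h definitions.

    #include <OpenTransport.h>

    /* Send one datagram to destAddr.  No delivery confirmation is available;
       a later T_UDERR event may report a bad address or options. */
    static OSStatus SendDatagram(EndpointRef ep,
                                 UInt8 *destAddr, ByteCount destLen,
                                 void *data, ByteCount dataLen)
    {
        TUnitData ud;

        OTMemzero(&ud, sizeof(ud));
        ud.addr.maxlen  = destLen;
        ud.addr.len     = destLen;
        ud.addr.buf     = destAddr;
        ud.udata.maxlen = dataLen;
        ud.udata.len    = dataLen;
        ud.udata.buf    = (UInt8 *) data;

        return OTSndUData(ep, &ud);
    }

    /* From the notifier, on T_DATA: read datagrams until kOTNoDataErr. */
    static void ReadDatagrams(EndpointRef ep, UInt8 *buf, ByteCount bufSize)
    {
        TUnitData ud;
        OTFlags   flags;
        OSStatus  err;

        do {
            OTMemzero(&ud, sizeof(ud));
            ud.udata.maxlen = bufSize;
            ud.udata.buf    = buf;
            flags = 0;

            err = OTRcvUData(ep, &ud, &flags);
            if (err == kOTNoError && (flags & T_MORE)) {
                /* Datagram larger than buf: keep calling OTRcvUData for
                   the rest of this datagram. */
            }
        } while (err == kOTNoError);
    }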

Using Connection-Oriented Transactionless Service

You use connection-oriented transactionless service, such as provided by ADSP and TCP, to exchange full-duplex streams of data across a network. Connection-oriented transactionless endpoints use the OTSnd function to send data and the OTRcv function to receive data. Either endpoint can call either of these functions. Parameters to the OTSnd function identify the endpoint sending the data, the buffer that holds the data, the size of the data, and a flags value that specifies whether the data sent is normal or expedited and whether multiple sends are being used to send the data. Parameters to the OTRcv function identify the receiving endpoint, the buffer where the data should be copied, the size of the buffer, and a flags value that Open Transport sets to tell the client whether to call OTRcv more than once to retrieve the data being sent.

Some endpoints support the use of expedited data, and some support the use of separators to break the data stream into logical units. You need to examine the endpoint's TEndpointInfo structure to determine whether the endpoint supports either of these features.

IMPORTANT
Values for the tsdu and etsdu fields of the TEndpointInfo structure that are returned when you open an endpoint might change after the endpoint is connected, because the endpoint providers can negotiate different values when establishing a connection. If the endpoint supports variable maximum limits for TSDU and ETSDU size, you should call the OTGetEndpointInfo function after the connection has been established to determine what the current limits are.
To send expedited data, you must set the T_EXPEDITED bit in the flags parameter. If the receiving client is in the middle of reading normal data and the OTRcv function returns expedited data, the next OTRcv call that returns without the T_EXPEDITED bit set in the flags field resumes delivery of normal data at the point where it was interrupted. It is the client's responsibility to remember where that point was.

There are two ways of breaking up a data stream into logical size units.

Transferring Data Between Transaction-Based Endpoints

Open Transport defines two sets of functions that you can use to perform a transaction. One set is defined for connectionless transactions; the other set is defined for connection-oriented transactions. A transaction is a process during which one endpoint, the requester, sends a request for a service. The remote endpoint, called the responder, reads the request, performs the service, and sends a reply. When the requester receives the reply, the transaction is complete.

You can implement applications that use transactions in the following two ways:

Because one endpoint can conduct multiple transactions at any one time, it is crucial that requesters and responders be able to distinguish one transaction from another. This is done by means of a transaction ID, a number that uniquely identifies a transaction. Because this is not the same number for the requester as it is for the responder, some explanation is required. Figure 4-8 shows how the transaction ID is generated by the requesting application and the provider during the course of a transaction.

Figure 4-8 How a transaction ID is generated

The requester initiates a transaction by sending a request. The requester passes information about the request in a data structure that includes a seq field, which specifies the transaction ID of the request. The requester initializes this field to some arbitrary, unique number. Before sending the request, the endpoint provider saves this number in an internal table and assigns another number to the seq field, which it guarantees to be unique for the requester's machine. The endpoint provider also saves the new number along with the requester-generated sequence number. For example, in Figure 4-8, the requester assigns the number 1001; the endpoint provider assigns the number 5123.

When the responder receives the request, it reads the request information, including the provider-generated sequence number, into buffers it has reserved for the request data. When the responder sends a reply, it specifies the sequence number it read when it received the request.

Before the requester's endpoint provider advises the requester that the reply has arrived, it examines the sequence number of the reply and looks in its internal table to determine which requester-generated sequence number it matches. It then substitutes that number for the sequence number it received from the responder. By using this method Open Transport guarantees that transactions are uniquely identified, and the requester is able to match incoming replies with outgoing requests.

Using Connectionless Transaction-Based Service

You use connectionless transaction-based service to enable two connectionless endpoints to complete a transaction.

The requester initiates the transaction by calling the OTSndURequest function. Parameters to the OTSndURequest function specify the destination address, the request data, any options, and a sequence number to identify this transaction. The requester must supply a sequence number if it is sending multiple requests, so that later on it can match replies to requests. The requester can cancel an outgoing request by calling the OTCancelURequest function. A requester can implement its own timeout mechanism (for example, using the OTScheduleTimerTask function) by calling the OTCancelURequest function after a specific amount of time has elapsed without a response to the request.
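
A request might be sent roughly as in the following sketch, with the destination address assumed to be already formatted for the protocol and no special request flags used.

    #include <OpenTransport.h>

    /* Send a connectionless request.  The sequence number is chosen by the
       requester so that it can match the eventual reply to this request. */
    static OSStatus SendRequest(EndpointRef ep,
                                UInt8 *destAddr, ByteCount destLen,
                                void *reqData, ByteCount reqLen,
                                OTSequence seq)
    {
        TUnitRequest req;

        OTMemzero(&req, sizeof(req));
        req.addr.maxlen  = destLen;
        req.addr.len     = destLen;
        req.addr.buf     = destAddr;
        req.udata.maxlen = reqLen;
        req.udata.len    = reqLen;
        req.udata.buf    = (UInt8 *) reqData;
        req.sequence     = seq;

        return OTSndURequest(ep, &req, 0);      /* 0: no special request flags */
    }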

If the responder is synchronous and blocking, the OTRcvURequest function returns after it has read the request. If the responder is asynchronous or not blocking and has a notifier installed, the endpoint provider calls the notifier, passing T_REQUEST for the code parameter. When the responder receives this event, it must call the OTRcvURequest function to read the request. On return, parameters to the OTRcvURequest function specify the address of the requester, option values, the request data, flags information, and a sequence number to identify the transaction. When the responder sends a reply to the request, it must use the same sequence number for the reply. If the responder's buffer is too small to contain the request, the endpoint provider sets the T_MORE bit in the flags parameter. The responder must call the OTRcvURequest function until the T_MORE bit is clear. This indicates that the entire request has been read.

Having read the request, the responder can reply to the request using the OTSndUReply function or reject the request using the OTCancelUReply function. Although the requester is not advised that the responder has rejected a request, it's important that the responder explicitly cancel an incoming request in order to free memory reserved by the OTRcvURequest function.
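
A responder's T_REQUEST handling might be sketched as follows; the buffer sizes are arbitrary, the placeholder reply contents are purely illustrative, and a real responder would loop while T_MORE is set and would reply or cancel for every request it reads.

    #include <OpenTransport.h>

    /* Responder side, from its notifier on T_REQUEST: read the request and
       send back a reply using the same sequence number. */
    static void HandleRequest(EndpointRef ep)
    {
        static UInt8 kReplyData[] = { 0 };       /* placeholder reply contents */
        UInt8        addrBuf[64], reqBuf[1024];
        TUnitRequest req;
        TUnitReply   reply;
        OTFlags      flags = 0;
        OSStatus     err;

        OTMemzero(&req, sizeof(req));
        req.addr.maxlen  = sizeof(addrBuf);
        req.addr.buf     = addrBuf;
        req.udata.maxlen = sizeof(reqBuf);
        req.udata.buf    = reqBuf;

        err = OTRcvURequest(ep, &req, &flags);
        if (err != kOTNoError)
            return;                          /* kOTNoDataErr: nothing pending */

        /* If T_MORE is set in flags, call OTRcvURequest again for the rest. */

        OTMemzero(&reply, sizeof(reply));
        reply.udata.len = sizeof(kReplyData);
        reply.udata.buf = kReplyData;
        reply.sequence  = req.sequence;      /* must match the request */

        (void) OTSndUReply(ep, &reply, 0);
    }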

If the requester is in synchronous blocking mode, the OTRcvUReply function waits until a reply comes in. Otherwise, if a notifier is installed, the endpoint provider calls the notifier, passing T_REPLY for the code parameter. The notifier must call the OTRcvUReply function. On return, parameters to the function specify the address of the endpoint sending the reply, specify option values, flag values, reply data, and a sequence number that identifies the request matching this reply. If the T_MORE bit is set in the flags parameter, the requester has allocated a buffer that is too small to contain the reply data. The requester must call the OTRcvUReply function until the T_MORE bit is clear; this indicates that the complete reply has been read.

If the request is rejected or fails in some other way, the requester still receives the T_REPLY event; however, the OTRcvUReply function returns with the kETIMEDOUTErr result. In this case, the only useful information returned by the function is the sequence number of the request that has failed.
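
The requester's T_REPLY handling might look roughly like the following sketch; it keeps reading until kOTNoDataErr and treats a kETIMEDOUTErr result as a failed request.

    #include <OpenTransport.h>

    /* Requester side, from its notifier on T_REPLY. */
    static void HandleReplies(EndpointRef ep, UInt8 *buf, ByteCount bufSize)
    {
        TUnitReply reply;
        OTFlags    flags;
        OSStatus   err;

        do {
            OTMemzero(&reply, sizeof(reply));
            reply.udata.maxlen = bufSize;
            reply.udata.buf    = buf;
            flags = 0;

            err = OTRcvUReply(ep, &reply, &flags);
            if (err == kOTNoError) {
                /* reply.sequence matches the seq passed to OTSndURequest. */
            } else if (err == kETIMEDOUTErr) {
                /* The request identified by reply.sequence failed. */
            }
        } while (err == kOTNoError || err == kETIMEDOUTErr);
    }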

Figure 4-9 illustrates how connectionless transaction-based endpoints in asynchronous mode exchange data.

Figure 4-9 Data transfer using connectionless transaction-based endpoints in asynchronous mode

Using Connection-Oriented Transaction-Based Service

Connection-oriented transaction-based endpoints allow you to transfer data in exactly the same way as connectionless transaction-based endpoints except that, because the endpoints are connected, it is not necessary to specify an address when using the functions to send and receive requests and replies. The only other difference is that a connection-oriented transaction may be interrupted by a connection or disconnection request.

The section "Using Connectionless Transaction-Based Service" describes the sequence of functions used to transfer data using a transaction. Figure 4-10 shows the sequence of functions called during a connection-oriented transaction; both requester and responder are in asynchronous mode. This sequence is the same as for connectionless transaction-based service, as shown in Figure 4-9 . Of course, you use different functions to complete these two types of transactions: the names of the functions shown in Figure 4-10 do not include a "U" in the function name.

Figure 4-10 Data transfer using connection-oriented transaction-based endpoints in asynchronous mode

For information about how to handle disconnection requests that might occur during a transaction, see "Using Orderly Disconnects".




© Apple Computer, Inc.
15 JAN 1998