SQL*Net Performance Tuning Using Underlying Network Protocols
This article provides a comprehensive analysis of SQL*Net or Net8 performance tuning utilizing the underlying network protocols (UNPs). It compares SQL*Net (Net8) performance over three UNPs and offers conclusions and tuning recommendations based on analytical test results. Secondarily, it discusses other factors, beyond the network itself, that impact SQL*Net performance.
This article covers the following topics:
The Oracle Stack
Tests
Recommendations/Standards
Conclusions
References
The audience for this article is:
Professionals who provide performance tuning in a client/server environment
Network engineers
Oracle database administrators
IT architects
Performance tuning in a client/server environment is an important subject. Many publications discuss database and application performance tuning, but few discuss performance tuning of the Oracle middleware layer, SQL*Net or Net8.
This article discusses performance optimization and tuning of SQL*Net over an arbitrary UNP, which could be TCP/IP, IPX/SPX, or DECnet, among others. SQL*Net performance can be maximized by synchronizing it with the tunable parameters of the UNP, for example, buffer size.
The performance tuning concepts discussed in this article apply to network performance tuning in general as well.
Oracle client/server systems can employ SQL*Net or Net8 as an interface between the Oracle application software and the UNP. SQL*Net enables Oracle products to access, modify, share, and store data on heterogeneous computing platforms in a variety of networking environments.
Total SQL*Net transaction performance can be divided into components of connect time and query time:
total SQL*Net (Net8) transaction time = connect time + query time
When designing and implementing networks, you can minimize connect time by calibrating the tunable parameters of SQL*Net and the UNP. Query time is typically affected by database tuning parameters, most of which are outside the scope of this article; however, the database tuning parameters that impact network performance are discussed.
The Oracle Stack
The Oracle client/server architecture separates a data processing system into two parts: the client and the server. The client executes the application software that issues data requests to the server. The server executes the database application software that responds to client requests and controls the database as required.
The performance of a client/server application can be optimized by expediting the connect and query times between the client and server and by reducing network traffic.
When configuring client/server applications, performance is impacted by:
Configuration parameters of the application, that is, SQL*Plus or Oracle server
Parameters of SQL*Net
Parameters of the UNP
In FIGURE 1, typical Upper Layer Protocols (ULPs) could be TCP/IP, IPX/SPX, or DECnet; Lower Layer Protocols (LLPs) could be Ethernet, Token Ring, or FDDI.
FIGURE 1 Oracle Client/Server Model
The Oracle client/server model can be mapped into the Open System Interconnection (OSI) reference model. Oracle client applications such as SQL*Plus or SQL*Forms, and server applications such as the Oracle relational database management system (RDBMS) are at layer seven of the OSI model; SQL*Net at layers five and six; the ULP at layers three and four; the Lower Layer Protocol (LLP) at layer two; and the physical layer at one. In this discussion, the application software is at the top of the stack. Overall application performance is based on the performance of the lower three layers as well as variable external factors such as network traffic.
The stack paradigm can be applied to SQL*Net performance, which depends to a great extent on the performance of the lower layers. Therefore, when designing or implementing Oracle client/server systems, taking into consideration the tunable parameters of the underlying layers is vital in order to optimize performance.
SQL*Net Performance
For this discussion, SQL*Net performance and tuning analysis is based on two categories:
SQL*Net performance
SQL*Net tuning
Performance of SQL*Net is based on several factors. This section discusses these factors. Consider the data communication transaction resulting from a simple SQL*Plus statement:
SQL> select * from dual;

D
-
X

SQL>
The SQL*Plus client application initiates a network message as a result of the preceding statement. The message is received by the server, data is retrieved and returned through the network to the client.
Performance can be measured as the interval between the time the client application presents a communication request to the client-side SQL*Net (t1) and the time the client-side SQL*Net returns the response to the application (t2). Referring to FIGURE 1, (t2 - t1) is the time required for data to be propagated through client layers 6 through 1, transported across the network medium, and propagated through server layers 1 through 6, plus the symmetric return trip.
The time (t2 - t1) can be further divided into connect time and query time. Connect time is the round-trip time taken to communicate data between client and server application layers; query time is the time taken by the server to process the data.
Thus,
t = t2 - t1 = connect time + query time (1)
Factors Affecting Connect Time
Connect time is based on various external factors as well as the statuses of certain Oracle runtime options and helper utilities.
TABLE 1 Factors Affecting Connect Time

External factors | Oracle options and utilities
Use of domain name service | Prespawn processes
Network topology | Multithreaded server (MTS) versus dedicated connections
Network throughput (data rate) | Size of the Tnsnames.ora file
Number of hops (bridges, routers) between client and server | Status of SQL*Net tracing
Network contention, if applicable | Status of security features
Response time |
Heterogeneous network protocols |
Prespawn Processes
Prespawn dedicated server processes provide a faster connection to the database by eliminating the time required to spawn a process for each connection request.
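For illustration, prespawned dedicated servers are configured per SID in the Listener.ora file. The following is a minimal sketch, assuming the hypothetical instance V7321 and illustrative pool values; exact parameter availability varies by SQL*Net release:

LISTENER=
  (ADDRESS_LIST=
    (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=ORLSUN9)
      (PORT=4446)))
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (SID_NAME=V7321)
      # upper bound on prespawned server processes for this SID
      (PRESPAWN_MAX=10)
      (PRESPAWN_LIST=
        (PRESPAWN_DESC=
          (PROTOCOL=TCP)
          # number of server processes kept ready for TCP connections
          (POOL_SIZE=4)
          # minutes an orphaned server process waits before shutting down
          (TIMEOUT=10)))))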
MTS Versus Dedicated Connections
With MTS, connection requests are handled by already-running dispatcher processes. A dedicated-server environment, in contrast, must create a new server process for each connection; this process creation makes establishing a dedicated connection somewhat slower.
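MTS is enabled through server initialization parameters. The following Init.ora sketch uses Oracle8-style syntax with illustrative values; the parameter syntax differs slightly between releases:

# start two dispatcher processes for the TCP protocol
mts_dispatchers="(PROTOCOL=TCP)(DISPATCHERS=2)"
mts_max_dispatchers=5
# shared servers started at instance startup, and their ceiling
mts_servers=2
mts_max_servers=10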
Size of the Tnsnames.ora File
The Tnsnames.ora file, which is on the client, is significant for applications using SQL*Net. The size of this file can be directly related to connect time. When a client application initiates a transaction to retrieve data from a server, the entire Tnsnames.ora file is read.
Example
$ sqlplus uid/passwd@alias_name
The alias name is stored in the Tnsnames.ora file, so the size of Tnsnames.ora determines a portion of the connect time. Instead of reading the entire file and scanning for the relevant entry, it would be better to implement an indexing method.
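For illustration, the alias would resolve to a Tnsnames.ora entry such as the following; the alias, host, port, and SID are hypothetical placeholders:

alias_name=
  (DESCRIPTION=
    (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=ORLSUN9)
      (PORT=4446))
    (CONNECT_DATA=
      (SID=V7321)))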
SQL*Net Tracing
If SQL*Net tracing is turned on, every client/server connection generates a trace file. These files are usually large; their size depends on the level of tracing. Because each connection must write its trace file, tracing increases connect time.
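Tracing is controlled through parameters in the Sqlnet.ora file. A client-side sketch (the directory and file names are illustrative):

# OFF for production; USER or ADMIN only while diagnosing problems
TRACE_LEVEL_CLIENT=OFF
TRACE_DIRECTORY_CLIENT=/tmp
TRACE_FILE_CLIENT=client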
Security Features
Implementation of security features, such as encryption/decryption algorithms, increases processing time at both ends of each secure transaction.
Factors Affecting Query Time
Once the connection is made, query time is the amount of time required to retrieve data from the database. Query time is impacted by the following factors:
Indexing
Array size
Indexing
Indexing and similar factors affect performance at the database level. Since this article focuses on network performance, discussion is limited to array size.
Array Size
The value of the arraysize parameter impacts performance. For example, in SQL*Plus, the arraysize parameter is defined by the set command:
SQL> set arraysize value
The value parameter determines the number of rows (called a batch) that SQL*Plus fetches from the database at one time. The value parameter can range from 1 to 5000. A large value increases the efficiency of queries that fetch many rows, but requires more host memory.
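A quick way to observe the effect in SQL*Plus follows; big_table is a hypothetical table, and actual timings depend on the environment:

SQL> set timing on
SQL> set arraysize 2
SQL> select * from big_table;
...
SQL> set arraysize 100
SQL> select * from big_table;
...

With arraysize 2, every pair of rows requires a fetch call to the server; with arraysize 100, the same rows arrive in far fewer, larger batches.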
By calibrating the array size, it is possible to distribute the time required to query the records rather than fetching them all at once, thus decreasing the perceived query time. Note that the total time to query the records in smaller groups may be greater than the total time to query the records all at once. The computational overhead of accessing the database is repeated for each call to the database when the array size is less than the number of records to be fetched. If the arraysize parameter is large, the impact of this overhead is minimal, but additional time is required to retrieve each batch. If the arraysize parameter is small, the overhead is incurred more frequently, but the data retrieval time per batch is smaller.
Put another way, when retrieving an arbitrary number of rows, a smaller array size reduces the retrieval time per batch but increases total overhead, whereas a larger array size increases the retrieval time per batch but reduces overhead. Overall, a larger array size usually produces better results.
Referring to expression (1), there are tradeoffs between connect time and query time. Using a larger array size might optimize query time, at the expense of connect time and overall performance. It is important to determine the optimum batch size, which is a product of array size and row length. Row length, in turn, is a function of the type and amount of data (for example, VARCHAR2, LONG) in a table.
SDU Parameter
If the array size is set to a higher value based on row data type, the application passes a large amount of data to SQL*Net. The SQL*Net buffer size determines the amount of data that can be processed by SQL*Net (FIGURE 1). The session data unit (SDU) parameter defines the size of the SQL*Net buffer. For SQL*Net release 2.3 and later, the default SDU is 2 kilobytes, configurable up to 32 kilobytes; in earlier releases, the SDU is fixed at 2 kilobytes. As a SQL*Net connection is established, the client and server negotiate the size of the SDU to be used. If the client-side and server-side SDUs differ, the smaller of the two is selected; this decision is made by the server-side SQL*Net.
If the SDU parameter is smaller than the application fetch size, fragmentation could occur. If SDU is larger than the application fetch size, there is no fragmentation, and the entire packet can be sent across the network (assuming ULP and LLP buffer sizes are large enough to handle it).
Again, the array size is the number of rows that Oracle fetches before it passes them to the server SQL*Net to be returned to the client. This action affects SQL*Net packet sizes throughout the communication stream.
CODE EXAMPLE 1 Syntax SDU in Tnsnames.ora File
EOUG=
  (DESCRIPTION=
    # SDU: session layer buffer size; TDU: transport layer size
    (SDU=2048)
    (TDU=1024)
    (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=ORLSUN9)
      (PORT=4446))
    (CONNECT_DATA=
      (SID=V7321)))
CODE EXAMPLE 2 Syntax SDU in Listener.ora File
LISTENER=
  (ADDRESS_LIST=
    (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=ORLSUN9)
      (PORT=4446)))
STARTUP_WAIT_TIME_LISTENER=0
CONNECT_TIMEOUT_LISTENER=10
TRACE_LEVEL_LISTENER=OFF
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (SDU=8192)
      (SID_NAME=V7321)
      (ORACLE_HOME=ORACLE/7321)))
Example
Assume the SDU is 2 kilobytes, the arraysize parameter is set to 3, and the first 6 rows of data have the following sizes (in bytes): 1511, 410, 730, 300, 200, 500.
The Oracle server first requests the server-side SQL*Net to send 2651 bytes (the first three rows), then 1000 bytes (the last three rows). The server-side SQL*Net sends the following datagrams:
Datagram | Size (bytes) | Data (bytes) | SQL*Net header (bytes)
1 | 2048 (SDU) | 2038 | 10
2 | 623 | 613 (remaining) | 10
3 | 1010 | 1000 (requested) | 10
Relationship Between SDU and MTU Parameters
The maximum transfer unit (MTU) defines the buffer size of UNP, specifically with TCP/IP. The following statements summarize the relationship between SDU and MTU parameters:
If SDU = MTU, this is the ideal situation; no fragmentation occurs.
If SDU > MTU, fragmentation occurs.
If SDU < MTU, no fragmentation occurs, but performance does not increase.
NOTE
The three preceding conditions hold only if enough space is left for the UNP header information.
Example
Assume the ULP is TCP/IP and the MTU parameter (buffer size) is set to 1500. Packet 1 is 2048 bytes (condition: SDU > MTU), which cannot be "absorbed" by the ULP because of ULP buffer size limitations. As a result, fragmentation occurs and performance suffers.
Example
TCP/IP-level fragmentation:
SQL*Net buffer size 2048
TCP/IP buffer size 1500
This combination generates two packets at the TCP/IP level. Packet 1a is 1500 bytes (1460 data + 40 header) and packet 1b is 628 bytes (588 data + 40 header). As a result of this fragmentation, the number of packets passed to the LLP doubles. When these packets pass through the datalink layer, more data is prepended (for Ethernet, 14 bytes). Theoretically, at the bottom of the client stack, the sizes of the two packets are:
packet 1a: 1500 + 14 = 1514 bytes
packet 1b: 628 + 14 = 642 bytes
Now consider packet 2 (SDU < MTU). Since this packet is 623 bytes, less than the MTU size (1500 bytes), there is no fragmentation. However, increasing the SQL*Net packet size could still increase performance, because each packet would carry more data across the network.
packet 2 = 623 (data) + 40 (TCP/IP header) + 1 (padding byte) + 14 (Ethernet header) = 678 bytes
Now consider the ideal condition, where SDU equals MTU. In this situation there is no fragmentation because the buffer sizes are synchronized; this is the optimum situation.
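As a sketch, matching the SDU to a 1500-byte Ethernet MTU would look like the following in the Tnsnames.ora entry; the alias and address are carried over from CODE EXAMPLE 1, and the server-side SDU in the Listener.ora file should be set to the same value:

EOUG=
  (DESCRIPTION=
    # SDU chosen to match the 1500-byte MTU of the underlying TCP/IP network
    (SDU=1500)
    (ADDRESS=
      (PROTOCOL=TCP)
      (HOST=ORLSUN9)
      (PORT=4446))
    (CONNECT_DATA=
      (SID=V7321)))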
SQL*Net Tuning
As discussed previously, performance optimization means reducing network traffic, which can be achieved through the tuning process. Referring to FIGURE 2, the Oracle server application passes a batch of data to SQL*Net, where a 10-byte control header (HS) is prepended, forming a frame that is passed to the ULP. The ULP prepends its own header (HULP), the size of which depends on the protocol used: TCP/IP, for example, uses a 40-byte header; IPX/SPX, a 30-byte header. The result is a datagram that is passed to the LLP. The LLP prepends its header (HLLP), the size of which again depends on the protocol used: Ethernet, for example, uses a 14-byte header. The result is a packet that is passed to the physical layer for transmission.
FIGURE 2 Data Flow Through the Server Network Stack
Ideally, if the data buffers of SQL*Net, the ULP, and the LLP are synchronized, fragmentation is minimized or eliminated as data flows from the application layer to the LLP.
Example
Assume the SDU is 2 kilobytes, the ULP (TCP/IP) MTU is 1500 bytes, and the LLP (Ethernet) buffer is 1506 bytes. The application passes 1442 bytes of data to SQL*Net, which prepends a 10-byte header, producing a frame of 1452 bytes. SQL*Net in turn passes the frame to the ULP, which prepends a 40-byte header, producing a datagram of 1492 bytes. The ULP then passes the datagram to the LLP, which prepends a 14-byte header, producing a packet of 1506 bytes. The batch has successfully passed through the client stack without fragmentation.
In this example, note that because each succeeding lower layer buffer is large enough to absorb the data received from its respective upper layer, there is no fragmentation. This is the ideal situation. In practice, this is seldom possible due to incompatibilities between buffer sizes of the layers. When data flows between layers of incompatible buffer sizes, fragmentation occurs, and as a result, extra network traffic is generated. With this in mind, components of the stack can be tuned to minimize fragmentation, which reduces network traffic and thereby increases performance.
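In practice, synchronization starts with discovering the buffer sizes of the lower layers. On Solaris, for example, the interface MTU can be read with ifconfig; the interface name le0 is illustrative, and the output is abbreviated:

$ ifconfig le0
le0: flags=863<UP,BROADCAST,NOTRAILERS,RUNNING,MULTICAST> mtu 1500
...

The SDU in the Tnsnames.ora and Listener.ora files can then be calibrated against this value, as shown in the preceding examples.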