Apache Performance Tuning for UNIX: IBM HTTP Server
Applies to UNIX

Apache 1.3 Process Model Overview


Apache 1.3 on UNIX is a pre-forking server that dedicates a process to each connection. When the Apache parent process is started, it forks a number of child processes that are responsible for accepting connections off the listening sockets. When a connection arrives, the system wakes up one of these processes to handle it. Because Apache does not do asynchronous network I/O, it needs one process for each concurrently connected browser or other client. For example, Apache needs 1000 processes to handle 1000 concurrently connected clients, or connections.
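
For example, a quick way on most UNIX systems to gauge how many Apache processes are currently running (assuming the server binary is named httpd; adjust the name if your installation differs) is:

    # Count the Apache parent process plus all of its children.
    # The bracketed pattern keeps grep from matching its own process.
    ps -ef | grep '[h]ttpd' | wc -l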

Symptoms of an Inadequate Number of Processes for Server Load

If you notice that server CPU utilization is low, but client requests for static pages are taking a long time to service, you could be running out of server processes to handle requests. This situation results when you have more inbound requests than you have Apache processes to handle those requests. New connections are being queued in the TCP/IP stack listen queue waiting for acceptance from an available process. As a process becomes available, it accepts and handles a connection off the listen queue. Connections can take a long time to reach the top of the listen queue.
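
One way to check for this condition, hedged because the exact commands and counter names vary by UNIX platform, is to look at the TCP statistics and at the connections queued on the listening port:

    # Listen queue overflow counters (counter names differ across platforms).
    netstat -s | grep -i listen

    # Connections to port 80; some systems print host.port, others host:port.
    netstat -an | grep -E '[.:]80 '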

If Apache is configured to listen on multiple ports, you can find responses slow on one port (port 80, for example) but adequate on another (port 443, for example). The disparity in response time results from each port having its own listen queue: the port 80 queue could be deep while the port 443 queue is shallow. Apache does not attempt to balance the number of connections accepted from each queue; when a process becomes available, it takes a connection from either queue, effectively at random.
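
For instance, a server configured to accept connections on both ports has two independent listening sockets, each with its own queue (a minimal fragment; the SSL setup for port 443 is omitted here):

    # Each Listen directive creates a separate listening socket,
    # and therefore a separate listen queue in the TCP/IP stack.
    Listen 80
    Listen 443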

The solution to this scenario is to add more Apache processes to handle inbound connections.

Controlling the Number of Apache Processes

Several configuration directives control the number of Apache processes, and therefore the number of concurrently supported clients. These are the most important (an example configuration fragment follows the list):

  • StartServers
    Specifies the initial number of child processes to start when the server is started. Apache automatically increases the number of child processes as server load increases. If you start 10 processes, or servers, and 20 clients connect, Apache automatically detects the extra load and starts 10 more servers, for a total of 20 servers. Apache continues to create more servers, bounded by the setting of the MaxClients directive.
  • MaxClients
    Specifies the maximum number of Apache processes that can run at once. Limiting the number of Apache processes can prevent overrunning your hardware capabilities.
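
An illustrative httpd.conf fragment (the numbers are examples only, and the spare-server directives shown are related settings not discussed above; tune all of them for your own hardware and workload):

    # Start 10 child processes when the server starts.
    StartServers      10

    # Keep between 5 and 20 idle children on hand to absorb bursts.
    MinSpareServers   5
    MaxSpareServers   20

    # Never run more than 150 children at once; this value cannot
    # exceed HARD_SERVER_LIMIT (discussed below).
    MaxClients        150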

MaxClients has an upper bound set by the compile-time constant HARD_SERVER_LIMIT in src/include/httpd.h. By default, Apache 1.3 sets HARD_SERVER_LIMIT to 256. Unmodified Apache will not allow you to set MaxClients greater than 256, and you cannot have more than 256 Apache processes running at once; support is therefore limited to 256 concurrently connected clients. Apache also preallocates a region of storage called the scoreboard, used by mod_status, which must be large enough to hold process information for HARD_SERVER_LIMIT processes.
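
For reference, the definition in src/include/httpd.h of a stock Apache 1.3 source tree looks roughly like this; raising the limit means changing this value (or defining it on the compiler command line) and rebuilding the server:

    /* src/include/httpd.h (approximate excerpt from stock Apache 1.3) */
    #ifndef HARD_SERVER_LIMIT
    #define HARD_SERVER_LIMIT 256  /* caps MaxClients and sizes the scoreboard */
    #endif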

Note: In IBM HTTP Server (IHS) V 1.3.19, HARD_SERVER_LIMIT is 4096.

KeepAlive Affects Availability of Apache Processes to Handle Requests

HTTP 1.1 has a feature known as Connection KeepAlive. On a non-KeepAlive connection, the browser opens a TCP connection, sends a single request, receives the response, and then the connection is torn down; every request pays the cost of setting up and tearing down a TCP connection. With KeepAlive, each connection is kept open for a number of requests, controlled by configuration directives, which reduces the network overhead of repeatedly setting up and tearing down TCP connections. On a KeepAlive connection the server assumes the browser will send another request, so it attempts to read the next request off the network. If no request is available, the process blocks on the read, waiting for one to arrive.

Consider a typical scenario where a user goes to a Web site resulting in a page of information being fetched and rendered on the browser. The user lingers while reading the page. The user is not actively requesting more pages, but the server is blocked on a network read waiting for the next request. This blocked process is not available to handle requests from other clients. The user may follow a link off the Web site, get a cup of coffee and never send another request. You can configure Apache to wait a specific time for the next request using:

  • KeepAliveTimeout
    Specifies how long the server waits for the next request on a KeepAlive connection. The default is 15 seconds. When the timeout expires, the server stops waiting for the next request, shuts down the connection to the inactive client, and makes the process available to handle other requests.

Some heavily loaded sites disable KeepAlive entirely.

Assess the needs of your site and set KeepAliveTimeout appropriately. Give the browser enough time to request all the elements of a page over the KeepAlive connection, but do not wait so long for the user to initiate the next page request that the process sits idle. Some administrators recommend setting KeepAliveTimeout to 5 seconds as a good compromise between keeping processes available and minimizing network overhead.
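
A sketch of how these settings might look in httpd.conf, assuming the commonly suggested 5-second timeout and the stock limit of 100 requests per connection (tune both for your site):

    # Allow persistent connections.
    KeepAlive            On

    # Serve at most 100 requests over a single connection.
    MaxKeepAliveRequests 100

    # Give up on an idle connection after 5 seconds so the process
    # can return to handling other clients.
    KeepAliveTimeout     5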

Setting Expiration Dates on Static Content

You can sometimes reduce the number of requests reaching your site by setting document expiration dates with mod_expires. If the browser cached a document during a previous visit, it can reuse the cached copy instead of fetching it again, which can save another connection to your site.
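
A minimal httpd.conf sketch, assuming mod_expires is compiled in or loaded as a DSO and using arbitrary example lifetimes:

    <IfModule mod_expires.c>
        ExpiresActive On
        # Let browsers reuse cached images for a week after they fetch them.
        ExpiresByType image/gif  "access plus 1 week"
        ExpiresByType image/jpeg "access plus 1 week"
    </IfModule>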