
Introduction

Today, 70-80% of all traffic on the Internet uses the HTTP protocol [14]. However, the tremendous growth of the WWW has also resulted in a number of problems, such as overloaded servers, traffic congestion and increasing client latencies. Web caching has been recognized as an important technique to reduce Internet bandwidth consumption [4].

By caching HTTP responses, proxies can deliver higher throughput and lower latency to end users. For users in Europe and Asia who have slow links to the Internet, cooperative local and regional proxy caching has already eliminated many repeat requests to remote servers. Caching can also distribute load away from server hot-spots and isolate end users from network failures.

Internet Service Providers (ISPs) that also provide Web hosting services often deploy reverse proxy caching at the border routers of their backbones. In this case, special proxy servers are co-located with the border routers (or IGRs, for Internet Gateway Routers). These proxies store HTTP responses and use them to satisfy future equivalent requests that arrive at their IGRs without going back to the end-server. By serving content from the edge of the network, reverse proxies reduce both the load on the end-servers and backbone bandwidth consumption.
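The hit/miss behavior just described can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation; the class name `ReverseProxy` and the `origin_fetch` callback are assumptions introduced here for clarity.

```python
# Minimal sketch of reverse-proxy caching: serve a cached response when
# possible; otherwise go back to the end-server and cache the result.
# `origin_fetch` stands in for a request to the origin (end-) server.

class ReverseProxy:
    def __init__(self, origin_fetch):
        self.cache = {}                   # url -> cached response body
        self.origin_fetch = origin_fetch  # callback to the end-server
        self.origin_requests = 0          # traffic that crossed the backbone

    def get(self, url):
        if url not in self.cache:         # miss: go back to the end-server
            self.cache[url] = self.origin_fetch(url)
            self.origin_requests += 1
        return self.cache[url]            # hit: served from the network edge
```

Every repeated request for the same URL is absorbed at the IGR, which is exactly the backbone-bandwidth saving the text refers to.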

The reverse proxies can also be cooperative: on a cache miss, a proxy can obtain the requested object from another proxy that happens to have it cached. While such cooperation further reduces the load on end-servers, it has limited effect on backbone bandwidth consumption because objects exchanged between cooperating proxies still travel over the ISP's own backbone. Further, current cooperative proxy caching always leaves the original proxy in charge of sending the object to the client, so it does not help with any load imbalance among proxies. For the same reason, it cannot address the situation where the proxy that receives the original request is not optimal for the client in terms of network proximity or congestion.

Based on these observations, we propose that instead of fetching the missed object from a remote proxy, the original proxy send a short control message telling the remote proxy to send the object directly to the client. This has several potential benefits: first, cooperating reverse proxies can quickly get heavy traffic off their ISP's own backbone, saving backbone bandwidth; second, overloaded proxies can forward requests to less loaded proxies, achieving some load balancing among the cooperating proxies; third, by observing and adapting to network traffic, request forwarding can better exploit network proximity and avoid network congestion. In practice, this ``circular'' communication (client $\rightarrow$ proxy $\rightarrow$ remote proxy $\rightarrow$ client) can be achieved through a TCP hand-off between the proxies.
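The contrast between conventional cooperation (fetch the object, then serve it) and the proposed hand-off (send a short control message, let the peer serve the client) can be sketched as follows. All names here (`Proxy`, `ControlMessage`, `peers`, `serve_direct`) are illustrative assumptions, not the paper's actual protocol; in particular, the TCP hand-off itself is abstracted into a single method call.

```python
# Sketch of the request-forwarding idea: on a local miss, the original
# proxy does NOT fetch the object from a peer over the backbone; it hands
# the request off with a short control message, and the peer replies to
# the client directly.

from dataclasses import dataclass, field

@dataclass
class ControlMessage:
    """Short message asking a peer to serve `url` directly to `client`."""
    url: str
    client: str

@dataclass
class Proxy:
    name: str
    cache: dict = field(default_factory=dict)   # url -> object body
    peers: list = field(default_factory=list)   # cooperating proxies
    sent_bytes: int = 0                         # payload this proxy emitted

    def handle(self, url: str, client: str) -> str:
        if url in self.cache:                   # local hit: serve it
            self.sent_bytes += len(self.cache[url])
            return f"{self.name} -> {client}"
        for peer in self.peers:                 # miss: look for a peer copy
            if url in peer.cache:
                # Forward only a small control message; the object itself
                # never crosses this proxy or the backbone link between them.
                return peer.serve_direct(ControlMessage(url, client))
        return f"{self.name} fetches {url} from origin for {client}"

    def serve_direct(self, msg: ControlMessage) -> str:
        self.sent_bytes += len(self.cache[msg.url])
        return f"{self.name} -> {msg.client}"
```

In conventional cooperation the object body would be counted against both proxies; here `sent_bytes` accumulates only at the peer that actually serves the client, which is what enables the backbone savings and load shifting described above.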

As an initial step in quantifying these benefits, we studied the effect of request forwarding on backbone bandwidth consumption. We performed a simulation study based on the AT&T WorldNet topology and a trace of accesses to an AT&T EasyWWW data center, and observed backbone bandwidth reductions of 13% to 35%, depending on the forwarding policy used. The other two potential benefits are still under evaluation.

The rest of the paper is organized as follows. We discuss reverse proxy caching and our motivation in more detail in the next section. We explain our approach in Section 3 and present simulation results in Section 4. Finally, we discuss related and future work.


Limin Wang
2/20/2000