
Performance

 
  
[Figure 6: Bandwidth Under Different Document TTL (IGR); file bw-igr.eps]


  
[Figure 7: Bandwidth Under Different Document TTL (Server-Forwardable); file bw-home.eps]

Figure 6 and Figure 7 plot the total backbone bandwidth consumption of the trace for IGR and Server-Forwardable, respectively. The curves show bandwidth consumption under different values of the time-to-live parameter: ``Static'' represents the assumption that the documents never change, and accordingly ``7-day-TTL'' means the documents change every week. The graphs show that even if all documents are static, we obtain up to 6% bandwidth reduction for the IGR policy and 20% for Server-Forwardable. This implies that by tolerating several cache misses without fetching the object, we can filter out seldom-accessed pages and fetch only the repeatedly accessed ``hot'' pages. Bandwidth savings increase significantly for shorter time-to-live values: in the 7-day-TTL case we save up to 13% and 35% of bandwidth, and in the 4-day-TTL and 2-day-TTL cases the savings exceed 15% and 40%, for IGR and Server-Forwardable respectively. All curves stabilize quickly, meaning a relatively small threshold suffices to capture most of the benefit.
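To make the miss-filtering policy concrete, here is a minimal sketch of a miss-counting proxy. The class, the THRESHOLD and TTL constants, and the method names are our own illustrative assumptions, not the simulator's actual code.

    # Minimal sketch (our illustration): a proxy tolerates up to THRESHOLD
    # misses on a document before paying for a backbone fetch, so pages that
    # are accessed only once or twice never get pulled over the backbone.

    THRESHOLD = 3            # assumed miss-tolerance threshold (the x-axis knob)
    TTL = 7 * 24 * 3600      # assumed 7-day document time-to-live, in seconds

    class Proxy:
        def __init__(self):
            self.cache = {}      # url -> (document, fetch_time)
            self.misses = {}     # url -> misses seen so far

        def has_fresh(self, url, now):
            entry = self.cache.get(url)
            return entry is not None and now - entry[1] < TTL

        def handle(self, url, now, fetch):
            if self.has_fresh(url, now):
                return "HIT"                         # served from the local cache
            self.misses[url] = self.misses.get(url, 0) + 1
            if self.misses[url] > THRESHOLD:
                self.cache[url] = (fetch(url), now)  # expensive backbone fetch
                self.misses[url] = 0
                return "FETCH"
            return "FORWARD"    # redirect elsewhere instead of caching a copy

For example, proxy.handle("/index.html", now=0, fetch=lambda u: "<doc>") returns "FORWARD" for the first few references and only switches to "FETCH" once the document has proven itself hot.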

There are two reasons why the bandwidth savings for IGR are not as large as those for Server-Forwardable (a sketch of the corresponding forwarding decision follows the list):

1.
For IGR, our technique only helps when a request misses on one proxy but hits on another, and the chance of this situation is low for reverse proxies. Since we assume infinite cache capacity, especially in the static-document case, once an object has been cached all future references to it are hits, so there are few local misses left to forward. Taking document time-to-live into account partly alleviates this problem: cached copies expire and become invalid, which increases the number of local misses.

2.
During the warm-up phase, when no IGR yet has a copy of a document, even a reference to a rarely accessed document leads to an expensive fetch, because there is nowhere to forward the request. This also reduces the bandwidth margin we can save. In the Server-Forwardable case, by contrast, any request is a hit on the server, so making the server capable of accepting forwarded requests makes forwarding much easier and further reduces bandwidth consumption.
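
The asymmetry between the two schemes comes down to where a below-threshold miss can be sent. A minimal sketch, reusing the Proxy class from the earlier sketch (the function and sentinel names are again our own assumptions):

    ORIGIN_SERVER = "origin"     # sentinel standing in for the origin server

    def forward_target(url, peers, server_forwardable, now):
        # IGR only pays off on a local miss that is a hit on some peer proxy.
        for peer in peers:                   # peers: Proxy instances, as above
            if peer.has_fresh(url, now):
                return peer                  # local miss, remote hit
        if server_forwardable:
            return ORIGIN_SERVER             # the server always has a fresh copy
        return None                          # pure-IGR warm-up: nowhere to forward

During warm-up every peer's copy is missing or stale, so pure IGR returns None and must fall back to an expensive fetch, while Server-Forwardable can always return ORIGIN_SERVER.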


  
[Figure 8: Latency Under Different Document TTL (IGR); file la-igr.eps]


  
[Figure 9: Latency Under Different Document TTL (Server-Forwardable); file la-home.eps]

Figure 8 and Figure 9 show the latency overhead introduced by the forwarding mechanism. At first glance this looks like a heavy penalty: the latency roughly doubles in each case. However, these numbers represent only the latency on the backbone, which has a base cost of 2.3 ms/req. Taking the 7-day-TTL case under IGR as an example, the worst backbone latency is 11 ms/req. How much this affects the overall client-perceived latency depends on the speed of the client-side link, but on the current Internet such a delay would be lost in the noise.
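A rough back-of-the-envelope check makes this concrete. Assuming, purely for illustration, a 56 kbps dial-up client link and a 10 KB document (both numbers are our assumptions, not trace statistics), the client-side transfer alone takes

    (10 KB x 8 bits/byte) / 56 kbps = 80,000 bits / 56,000 bits/s ~ 1.4 s,

so the roughly 9 ms/req of extra backbone latency (11 ms minus the 2.3 ms base) is well under 1% of what such a client perceives.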

