Figure 6 and Figure 7 plot the total backbone bandwidth consumption of the trace for IGR and Server-Forwardable, respectively. The curves show the bandwidth consumption under different values of the time-to-live parameter; ``Static'' represents the assumption that the documents never change, and ``7-day-TTL'' means the documents change every week. From the graphs, we can see that if all the documents are ``static'', we obtain up to 6% bandwidth reduction for the IGR policy and 20% for Server-Forwardable. This implies that if we tolerate several cache misses without fetching the object, we can filter out seldom accessed pages and fetch only the repeatedly accessed ``hot'' pages. Bandwidth savings increase significantly for shorter time-to-live values. In the 7-day-TTL case, we save up to 13% and 35% of the bandwidth; in the 4-day-TTL and 2-day-TTL cases, the savings exceed 15% and 40%, for IGR and Server-Forwardable respectively. All the curves stabilize quickly (or nearly so), meaning that a relatively small threshold suffices to capture most of the benefit.
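The filtering behavior described above can be captured by a simple per-object miss counter. The following is a minimal sketch of that idea only; the names (CacheFilter, on_request) and the exact bookkeeping are our own illustrative assumptions, not the actual implementation of either policy.

\begin{verbatim}
from collections import defaultdict

class CacheFilter:
    """Fetch an object only after it has missed more than
    `threshold` times, filtering out seldom-accessed pages."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.miss_counts = defaultdict(int)   # per-URL miss counter
        self.cached = set()                   # URLs currently cached

    def on_request(self, url):
        if url in self.cached:
            return "hit"                      # served from the cache
        self.miss_counts[url] += 1
        if self.miss_counts[url] > self.threshold:
            self.cached.add(url)              # "hot" page: fetch and cache it
            return "miss-and-fetch"
        return "miss-no-fetch"                # tolerate the miss, save backbone bandwidth
\end{verbatim}

With a threshold of zero every miss triggers a fetch (the conventional behavior); raising the threshold trades additional misses on rarely requested objects for backbone bandwidth, which is the effect visible in Figure 6 and Figure 7.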
There are two reasons why the bandwidth savings for IGR are not as large as those for Server-Forwardable:
Figure 8 and Figure 9 show the latency overhead introduced by the forwarding mechanism. At first glance, the penalty looks severe: the latency roughly doubles in each case. However, these numbers represent only the latency on the backbone, which has a base overhead of 2.3 ms/req. Taking the 7-day-TTL case under IGR as an example, the worst backbone latency is 11 ms/req. How much this affects the overall client-perceived latency depends on the speed of the client-side link, but on the current Internet such a delay would be lost in the noise.
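To make the ``in the noise'' claim concrete: even charging the entire worst-case backbone latency of 11 ms/req against the request, and assuming (purely for illustration) a client-perceived latency on the order of one second per request, the relative overhead is only about one percent:
\[
\frac{11\,\mathrm{ms/req}}{1000\,\mathrm{ms/req}} \approx 1.1\%.
\]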