use weekly graphs, remove old ones, fix links

[Imported from Trac: page TestGrid, version 25]
warner 2008-03-28 22:56:59 +00:00
parent c06e4f5bf5
commit 30627620fc

### Count of how many clients and servers are connected to the grid
* [/tahoe-munin/tahoecs2.allmydata.com-tahoe_introstats.html other time scales]
![](http://allmydata.org/tahoe-munin/tahoecs2.allmydata.com-tahoe_introstats-week.png)
### Load Average
This measures how long a 1-second callback is overdue, on each machine that
connects to the central 'stats gatherer'.
* [/tahoe-munin/tahoecs2.allmydata.com-tahoe_runtime_load_avg.html other time scales]
![](http://allmydata.org/tahoe-munin/tahoecs2.allmydata.com-tahoe_runtime_load_avg-week.png)
Same, but showing the peak value (reset at node reboot):
* [/tahoe-munin/tahoecs2.allmydata.com-tahoe_runtime_load_peak.html other time scales]
![](http://allmydata.org/tahoe-munin/tahoecs2.allmydata.com-tahoe_runtime_load_peak-week.png)
### Total space consumed in grid
This queries each storage server for how much space it is currently
consuming.
* [/tahoe-munin/tahoecs2.allmydata.com-tahoe_storage_consumed.html other time scales]
![](http://allmydata.org/tahoe-munin/tahoecs2.allmydata.com-tahoe_storage_consumed-week.png)
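The idea behind this load metric can be sketched as follows. This is a hypothetical illustration, not Tahoe's actual stats-gatherer code: schedule a nominally 1-second periodic callback and record how far behind schedule each tick actually fires.

```python
# Hypothetical sketch (not Tahoe's real implementation): measure how long a
# 1-second periodic callback is overdue on a loaded machine.
import time

def measure_overdue(iterations=5, interval=1.0):
    """Return a list of per-tick delays: how late each callback fired."""
    delays = []
    next_due = time.monotonic() + interval
    for _ in range(iterations):
        # Sleep until the callback is due, then record how overdue it is.
        time.sleep(max(0.0, next_due - time.monotonic()))
        delays.append(max(0.0, time.monotonic() - next_due))
        next_due += interval
    return delays

if __name__ == "__main__":
    for d in measure_overdue(3):
        print(f"callback overdue by {d * 1000:.1f} ms")
```

On an idle machine the delays stay near zero; under load, the scheduler falls behind and the overdue time grows, which is what the graph visualizes.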
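The aggregation behind this graph amounts to summing the per-server reports. A minimal sketch, with invented server names and an invented reporting dict for illustration:

```python
# Hypothetical sketch: sum the consumed-space figures reported by each
# storage server to get the grid-wide total shown in the graph.
def total_consumed(per_server):
    """per_server maps server name -> bytes consumed; returns grid total."""
    return sum(per_server.values())

# Invented example figures, for illustration only.
usage = {"tahoebs1": 12_000_000, "tahoebs3": 8_500_000}
print(total_consumed(usage))  # 20500000 bytes
```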
The following counts are samples of a single storage server, running four
nodes. The test grid uses four servers, running a total of 13 nodes.
### count of files stored in the grid
* other time scales:
[/tahoe-munin/tahoebs1.allmydata.com-tahoe_files.html tahoebs1],
[/tahoe-munin/tahoebs3.allmydata.com-tahoe_files.html tahoebs3],
[/tahoe-munin/tahoebs4.allmydata.com-tahoe_files.html tahoebs4],
[/tahoe-munin/tahoebs5.allmydata.com-tahoe_files.html tahoebs5]
![](http://allmydata.org/tahoe-munin/tahoebs3.allmydata.com-tahoe_files-month.png)
### average shares per file
This hints at the number of nodes in the overall grid: if the number of hosts in the grid is less than 10, then shares_per_file should be about 10/numhosts.
* other time scales:
[/tahoe-munin/tahoebs1.allmydata.com-tahoe_sharesperfile.html tahoebs1],
[/tahoe-munin/tahoebs3.allmydata.com-tahoe_sharesperfile.html tahoebs3],
[/tahoe-munin/tahoebs4.allmydata.com-tahoe_sharesperfile.html tahoebs4],
[/tahoe-munin/tahoebs5.allmydata.com-tahoe_sharesperfile.html tahoebs5]
![](http://allmydata.org/tahoe-munin/tahoebs3.allmydata.com-tahoe_sharesperfile-month.png)
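The 10/numhosts rule of thumb is simple arithmetic, assuming each file is encoded into 10 total shares (Tahoe's default) spread evenly across the available hosts:

```python
# Illustrative arithmetic only, assuming 10 total shares per file (the
# Tahoe default N); with fewer than 10 hosts, shares pile up on each host.
def expected_shares_per_host(total_shares=10, num_hosts=4):
    """Average shares of each file landing on a single host."""
    return total_shares / num_hosts

print(expected_shares_per_host(10, 4))   # 2.5 shares per file per host
print(expected_shares_per_host(10, 10))  # 1.0 once the grid has 10 hosts
```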
### total bytes consumed by all of the Tahoe test grid storage servers operated by Allmydata Inc.
* other time scales:
[/tahoe-munin/tahoebs1.allmydata.com-tahoe_storagespace.html tahoebs1],
[/tahoe-munin/tahoebs3.allmydata.com-tahoe_storagespace.html tahoebs3],
[/tahoe-munin/tahoebs4.allmydata.com-tahoe_storagespace.html tahoebs4],
[/tahoe-munin/tahoebs5.allmydata.com-tahoe_storagespace.html tahoebs5]
![](http://allmydata.org/tahoe-munin/tahoebs3.allmydata.com-tahoe_storagespace-month.png)