add another set of load-testing results, with 1MB mean file size

[Imported from Trac: page Performance, version 22]
warner 2007-12-19 02:12:45 +00:00
parent d191d4b47c
commit 352eecc195

The source:src/allmydata/test/check_load.py tool can be used to generate
random upload/download traffic, to see how much load a Tahoe grid imposes on
its hosts.
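The shape of such a load-generating client can be sketched roughly as follows. This is a hypothetical illustration, not the actual check_load.py code: the gateway URL, the 50/50 upload/download mix, and the exponential size distribution are all assumptions, though the `PUT /uri` / `GET /uri/$CAP` endpoints and the 100ms inter-request delay match what the tests below describe.

```python
# Hypothetical sketch of a check_load.py-style client: hammer a Tahoe
# webapi gateway with random uploads and downloads, pausing ~100ms
# between requests. Constants are illustrative assumptions.
import os
import random
import time
import urllib.request

GATEWAY = "http://127.0.0.1:3456"   # assumed local webapi gateway
MEAN_FILE_SIZE = 10 * 1024          # 10kB mean, as in test one
DELAY = 0.1                         # 100ms between requests

def random_body(mean_size):
    # an exponential distribution yields the stated *mean* file size
    size = max(1, int(random.expovariate(1.0 / mean_size)))
    return os.urandom(size)

def upload(body):
    # unlinked upload; the response body is the file's read-cap
    req = urllib.request.Request(GATEWAY + "/uri", data=body, method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

def download(cap):
    with urllib.request.urlopen(GATEWAY + "/uri/" + cap) as resp:
        return resp.read()

def run(n_requests):
    caps = []
    for _ in range(n_requests):
        if caps and random.random() < 0.5:
            download(random.choice(caps))   # read back a previous upload
        else:
            caps.append(upload(random_body(MEAN_FILE_SIZE)))
        time.sleep(DELAY)                   # 100ms pause, as in the tests
```

Several such clients pointed at the grid's web servers, with different mean file sizes, produce the traffic measured in the tests below.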
### test one: 10kB mean file size
Preliminary results on the Allmydata test grid (14 storage servers spread
across four machines (each a 3ishGHz P4), two web servers): we used three
check_load.py clients running with 100ms delay between requests, an
averaging 60%-80% CPU usage. Memory consumption is minor, 37MB [VmSize](VmSize)
about 600Kbps for the whole test, while the inbound traffic started at
200Kbps and rose to about 1Mbps at the end.
### test two: 1MB mean file size
Same environment as before, but the mean file size was set to 1MB instead of
10kB.
```
clients: 2MBps down, 340kBps up, 1.37 fps down, .36 fps up
tahoecs2: 60% CPU, 14Mbps out, 11Mbps in, load avg .74 (web server)
tahoecs1: 78% CPU, 7Mbps out, 17Mbps in, load avg .91 (web server)
tahoebs4: 26% CPU, 4.7Mbps out, 3Mbps in, load avg .50 (storage server)
tahoebs5: 34% CPU, 4.5Mbps out, 3Mbps in (storage server)
```
Load is about the same as before, but of course the bandwidths are larger.
Even at this file size, the per-file overhead seems to be more of a limiting
factor than the per-byte overhead.
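The per-file vs. per-byte distinction can be made concrete with a toy cost model: total time per file = fixed setup cost + size × per-byte cost. The two constants below are assumed for illustration only (they are not measured values from these tests), chosen so the fixed cost still dominates at 1MB, consistent with the observation above.

```python
# Toy cost model (assumed numbers, not measured) showing what fraction of
# each file's transfer time is fixed per-file overhead vs. per-byte cost.
PER_FILE_COST = 0.6      # seconds of fixed overhead per file (assumed)
PER_BYTE_COST = 0.3e-6   # seconds per byte transferred (assumed)

def per_file_share(size_bytes):
    # fraction of total time spent on fixed per-file overhead
    fixed = PER_FILE_COST
    variable = size_bytes * PER_BYTE_COST
    return fixed / (fixed + variable)

for size in (10 * 1024, 1024 * 1024):
    print(f"{size:>8} bytes: {per_file_share(size):.0%} per-file overhead")
```

Under these assumed constants, nearly all of a 10kB file's cost is per-file overhead, and even a 1MB file spends more than half its time on it; driving the per-file share down is what would raise the files-per-second numbers.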
### initial conclusions
So far, Tahoe is scaling as designed: the client nodes are the ones doing