Some basic notes on performance:
## Memory Footprint
The [MemoryFootprint](MemoryFootprint) page has more specific information.
The munin
[graph](http://allmydata.org/tahoe-figleaf-graph/hanford.allmydata.com-tahoe_memstats.html)
shows our static memory footprint (starting a node but not doing anything
with it) to be about 24MB. Uploading one file at a time brings the node to
about 29MB. (We only process one segment at a time, so peak memory
consumption is reached when the file is a few MB in size and does not grow
beyond that.) Uploading multiple files at once would increase this.
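To illustrate why that is, here is a minimal sketch (not Tahoe's actual
upload code; `upload_in_segments`, `send_segment`, and the segment size are
placeholders chosen for illustration) of segment-at-a-time processing:
```
# Minimal sketch, not Tahoe's upload code: reading and shipping one
# segment at a time means only one segment's worth of data is held in
# memory, so the footprint stops growing once the file is larger than a
# single segment. SEGMENT_SIZE and send_segment are placeholders.
SEGMENT_SIZE = 1 * 1024 * 1024  # assumed value, for illustration only

def upload_in_segments(f, send_segment):
    while True:
        segment = f.read(SEGMENT_SIZE)
        if not segment:
            break
        send_segment(segment)  # encode/ship this segment before reading more
```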
## Network Speed
### Test Results
With a 3-server testnet in colo and an uploading node at home (on a DSL line
with about 150kBps upstream and a 14ms ping time to colo), release 0.5.1-34
takes 820ms-900ms per 1kB file uploaded (80-90s for 100 files, 819s for 1000
files).

For comparison, 'scp' of 3.3kB files (simulating the encoding expansion)
takes 8.3s for 100 files and 79s for 1000 files: about 80ms each.

Doing the same uploads locally on my laptop (both the uploading node and the
storage nodes are local) takes 46s for 100 1kB files and 369s for 1000 files.
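Reduced to per-file times (simple arithmetic on the figures above), the gap
looks roughly like this:
```
# Per-file upload times derived from the measurements above.
tahoe_over_dsl = 819.0 / 1000  # ~0.82s per 1kB file (0.5.1-34, home DSL)
scp_over_dsl   = 79.0 / 1000   # ~0.08s per 3.3kB file
tahoe_local    = 369.0 / 1000  # ~0.37s per 1kB file (all nodes on one laptop)
print(tahoe_over_dsl / scp_over_dsl)  # tahoe is roughly 10x slower than scp here
```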
### Roundtrips
The 0.5.1 release requires about 9 roundtrips for each share it uploads. The
upload algorithm sends data to all shareholders in parallel, but these 9
phases are done sequentially. The phases are:
1. allocate_buckets
1. send_subshare (once per segment)
1. send_plaintext_hash_tree
1. send_crypttext_hash_tree
1. send_subshare_hash_trees
1. send_share_hash_trees
1. send_UEB
1. close
1. dirnode update
We need to keep the send_subshare calls sequential (to keep our memory
footprint down), and we need a barrier between the close and the dirnode
update (for robustness and clarity), but the others could be pipelined.
At 14ms per roundtrip, 9*14ms = 126ms, which accounts for about 15% of the
measured upload time.
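As a hedged sketch (not the 0.5.1 implementation; `proxy` and the
remote-method names below are stand-ins for the per-share bucket writer),
the hash-tree and UEB phases could be issued without waiting for each
other's acknowledgements and gathered behind a single barrier before close:
```
# Sketch only: the hash-tree and UEB sends do not depend on each other's
# results, so their callRemote() Deferreds could be fired in parallel and
# gathered, paying one roundtrip instead of five. 'proxy' and the method
# names are placeholders for the per-share remote bucket writer.
from twisted.internet import defer

def send_trailing_fields(proxy, pt_hashes, ct_hashes, subshare_hashes,
                         share_hashes, ueb):
    d = defer.gatherResults([
        proxy.callRemote("send_plaintext_hash_tree", pt_hashes),
        proxy.callRemote("send_crypttext_hash_tree", ct_hashes),
        proxy.callRemote("send_subshare_hash_trees", subshare_hashes),
        proxy.callRemote("send_share_hash_trees", share_hashes),
        proxy.callRemote("send_UEB", ueb),
    ])
    # close must wait until everything above has been acknowledged
    d.addCallback(lambda _: proxy.callRemote("close"))
    return d
```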
## Storage Servers
ext3 (on tahoebs1) refuses to create more than 32000 subdirectories in a
single parent directory. In 0.5.1, this appears as a limit on the number of
buckets (one per storage index) that any [StorageServer](StorageServer) can hold. A simple
nested directory structure will work around this; the following code would
let us manage 33.5G shares:
```
import os
from idlib import b2a  # base32-encode a binary string

# si is the (binary) storage index for the bucket
os.path.join(b2a(si[:2]), b2a(si[2:4]), b2a(si))
```
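Each directory level is keyed by a slice of the storage index, so buckets are
spread across many small directories instead of accumulating in one enormous
one, and an existence check only has to traverse two short prefix directories
before reaching the bucket itself.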
This limit is separate from the questions of memory use and lookup speed.
Once the number of buckets grows large, the filesystem may take a long time
(and multiple disk seeks) to determine whether a bucket is present or not.
The provisioning page suggests how frequently these lookups will take place,
and we can compare that against the time each one will take to see whether we
can keep up. If and when necessary, we'll move to a more sophisticated storage
server design (perhaps with a database to locate shares).
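If we do go that way, one possible shape (purely a sketch, not a committed
design; the schema and file name are assumptions) is a small index mapping
storage index to on-disk location, so a presence check becomes a single
indexed query rather than several directory seeks:
```
# Purely illustrative: a share-location index in sqlite. The schema and
# database file name are assumptions, not a committed design.
import sqlite3

db = sqlite3.connect("shares.db")
db.execute("CREATE TABLE IF NOT EXISTS shares"
           " (storage_index BLOB PRIMARY KEY, path TEXT NOT NULL)")

def share_path(si):
    # Return the on-disk path for this storage index, or None if absent.
    row = db.execute("SELECT path FROM shares WHERE storage_index = ?",
                     (si,)).fetchone()
    return row[0] if row else None
```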