[Imported from Trac: page Performance, version 28]

## Storage Servers
### storage index count
ext3 (on tahoebs1) refuses to create more than 32000 subdirectories in a
single parent directory. In 0.5.1, this appears as a limit on the number of
buckets (one per storage index) that any StorageServer can hold. A simple
nested directory structure works around this; the following code would
let us manage 33.5G shares (see #150).
```
import os
from idlib import b2a  # Tahoe's base32 encoder

# Two levels of prefix directories keep any single directory's
# subdirectory count far below ext3's 32000 limit.
os.path.join(b2a(si[:2]), b2a(si[2:4]), b2a(si))
```
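The snippet above assumes `si` is a raw storage-index bytestring and that `idlib` is importable. A self-contained sketch of the same layout, substituting a hypothetical `b2a` built on the standard library's base32 encoder (lowercase, padding stripped — assumed here to match idlib's convention):

```python
import os
from base64 import b32encode

def b2a(data: bytes) -> str:
    # Stand-in for idlib.b2a: lowercase base32 with '=' padding stripped.
    # (An assumption for illustration; inside Tahoe, use the real idlib.)
    return b32encode(data).decode("ascii").rstrip("=").lower()

si = bytes(range(16))  # a made-up 16-byte storage index, for illustration only

# Three components: 2-byte prefix, next 2 bytes, then the full index.
path = os.path.join(b2a(si[:2]), b2a(si[2:4]), b2a(si))
```

The point of the nesting is that fan-out at each level is bounded by the prefix length rather than by the total number of shares held.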
This limitation is independent of problems of memory use and lookup speed.
Once the number of buckets is large, the filesystem may take a long time (and
multiple disk seeks) to determine whether a bucket is present or not. The
provisioning page suggests how frequently these lookups will take place, and
we can compare this against the time each one will take to see whether we can
keep up. If and when necessary, we'll move to a more sophisticated storage
server design (perhaps with a database to locate shares).
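To make the "can we keep up" comparison concrete, here is a back-of-envelope sketch; the seek time, seeks-per-lookup, and query rate below are illustrative assumptions, not figures from the provisioning page:

```python
# Back-of-envelope: can the disk keep up with bucket-existence checks?
# All numbers are illustrative assumptions, not measurements.
seek_time_s = 0.010      # ~10 ms per random seek on a spinning disk
seeks_per_lookup = 3     # walk two prefix directories, then stat the bucket
lookups_per_s = 20       # hypothetical query rate from the provisioning page

# Fraction of the disk's time spent seeking; above 1.0 it falls behind.
busy_fraction = lookups_per_s * seeks_per_lookup * seek_time_s
```

With these numbers the disk spends 60% of its time seeking, so it keeps up, but without much headroom; caching of the upper directory levels would improve this considerably.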
I was unable to measure a consistent slowdown resulting from having 30000
buckets in a single storage server.
## System Load
The source:src/allmydata/test/check_load.py tool can be used to generate