storage format is awfully inefficient for small shares #80

Closed
opened 2007-07-08 08:30:14 +00:00 by warner · 11 comments

Eventually we're going to need to revisit our [StorageServer](wiki/StorageServer) implementation. The current approach stores each share in a separate directory, puts the share itself in 'data', and puts each piece of metadata in its own file. This results in about 7 files per share.

This approach is nice and simple and understandable and browsable, but not particularly efficient (at least under ext3). For a 20-byte share (resulting from a 476-byte file), the directory appears to consume about 33kB, and the parent directory (which holds 58 such shares for the same file) appears to consume 2MB. This is probably just the basic disk-block quantization that most filesystems suffer from. Lots of small files are expensive.

Testing locally, it looks like concatenating all of the files for a single (884-byte) share reduces the space consumed by that share from 33kB to 8.2kB. If we move that file up a level, so that we don't have a directory-per-share, just one file-per-share, then the space consumed drops to 4.1kB.

So I'm thinking that in the medium term, we either need to move to reiserfs (which might handle small files more efficiently) or change our [StorageServer](wiki/StorageServer) to put all the data in a single file, which means committing to some of the metadata and pre-allocating space for it in the sharefile.
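
To make the block-quantization effect concrete, here's a minimal back-of-the-envelope model (not the real StorageServer code): every file and directory is assumed to cost at least one 4096-byte block, and the metadata file sizes are made-up placeholders, since only the file count matters.

```python
# Model: under ext3-style quantization every file and directory costs
# at least one 4096-byte block, no matter how small it is.
BLOCK = 4096

def quantized(nbytes):
    """Disk space consumed by a file of nbytes (at least one block)."""
    return max(1, -(-nbytes // BLOCK)) * BLOCK   # ceiling division

def share_footprint(share_bytes, n_metadata_files, own_directory):
    # hypothetical metadata sizes; only the file *count* matters here
    files = [share_bytes] + [100] * n_metadata_files
    total = sum(quantized(n) for n in files)
    if own_directory:
        total += BLOCK               # the per-share directory itself
    return total

print(share_footprint(884, 6, own_directory=True))    # 32768 (~33kB): current layout
print(share_footprint(884, 0, own_directory=True))    # 8192  (~8.2kB): files concatenated
print(share_footprint(884, 0, own_directory=False))   # 4096  (~4.1kB): one file per share, no dir
```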

warner added the
c/code
p/major
t/defect
v/0.4.0
labels 2007-07-08 08:30:14 +00:00
warner added this to the undecided milestone 2007-07-08 08:30:14 +00:00
Author

Ooh, it gets worse, I was trying to upload a copy of the 13MB tahoe source tree into testnet (which has about 1620 files, two thirds of which are patches under _darcs/). The upload failed about two thirds of the way through because of a zero-length file (see #81), but just 2/3rds of the upload consumes 1.2GB per storageserver (when clearly that should be closer to 13MB * 4/3 * 2/3, say 11.5MB).

This 100x overhead is going to be a problem...
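
For the record, the arithmetic behind that estimate (the factors are exactly the ones in the formula above; 2/3 is the fraction of the tree that got uploaded before the failure):

```python
expected_mb = 13.0 * 4/3 * 2/3          # factors as given in the estimate above
actual_mb = 1200.0                      # ~1.2GB observed per storage server
print(round(expected_mb, 1))            # ~11.6 MB expected
print(round(actual_mb / expected_mb))   # ~104, i.e. the ~100x overhead
```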

Author

Oh, and the tahoe-storagespace munin plugin stops working with that many directories. I rewrote it to use the native /usr/bin/du program instead of doing the directory traversal in python, and it still takes 63 seconds to measure the size of all three storageservers on testnet, which is an order of magnitude more than munin will allow before it gives up on the plugin (remember these get run every 5 minutes). It looks like three storageservers-worth of share directories is too large to fit in the kernel's filesystem cache, so measuring all of them causes thrashing. (in contrast, measuring just one node's space takes 14s the first time and just 2s each time thereafter).

So the reason that the [storage space graph](http://allmydata.org/tahoe-munin/tahoebs1.allmydata.com-tahoe_storagespace.html) is currently broken is that the munin plugin can't keep up.
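
For reference, a minimal sketch of the du-based approach (not the actual munin plugin; the storage path below is made up):

```python
# Shell out to the native du instead of walking the tree in Python.
import subprocess

def storage_bytes(path):
    # "du -sk <path>" prints "<kilobytes>\t<path>"
    out = subprocess.check_output(["/usr/bin/du", "-sk", path])
    return int(out.split()[0]) * 1024

# hypothetical storage directory, for illustration only
print(storage_bytes("/home/amduser/tahoe/node1/storage"))
```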

Author

Zooko and I did some more analysis:

  • at the moment, each file is encoded into 100 shares, regardless of how many nodes are present in the mesh at that time
  • for small files (less than 2MB) each share has 846 bytes of overhead.
  • each share is stored in a separate directory, in 7 separate files (of which the actual share data is one, the uri_extension is a second, and the various hashes and pieces of metadata are others)
    • on most filesystems (e.g. ext3), each file and directory consumes at minimum a single disk block
    • on filesystems that are larger than a few gigabytes, each disk block is 4096 bytes
  • as a result, in the 0.4.0 release, each share consumes a minimum of 8*4096=32768 bytes.
    • for tiny files, 1 of these bytes is share data, 846 is validation overhead, and 31921 are filesystem quantization lossage
  • so for small files, we incur 1511424 (1.5MB) of disk usage per file (totalled across all 100 shares, on all the blockservers). This usage is constant for filesizes up to about 100kB.

Our plans to improve this:

  • #84: produce fewer shares in small networks, by having the introducer suggest 3-of-10 instead of 25-of-100 by default, for a 10x improvement
  • #85: store shares in a single file rather than 7 files and a directory, for an 8x improvement
  • #81: implement LIT uris, which hold the body of the file inside the URI. To measure the improvement of this we need to collect some filesize histograms from real disk images.
  • (maybe) #87: store fewer validation hashes in each share, to reduce that 846-byte overhead to 718 bytes.

Our guess is that this will reduce the minimum space consumed to 40960 bytes (41kB), occurring when the filesize is 10134 bytes (10kB) or smaller.

The URI:LIT fix will cover the 0-to-80ish byte files efficiently. It may be the case that we just accept the overhead for 80-to-10134 byte files, or perhaps we could switch to a different algorithm (simple replication instead of FEC?) for those files. We'll have to run some more numbers and look at the complexity burden first.
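
As a sanity check on the projected numbers, here's a tiny model of the post-fix layout (3-of-10 encoding, one file per share, 4096-byte blocks). It assumes a flat 718-byte per-share overhead, which is what the 10134-byte threshold above implies; the real overhead varies slightly with file size.

```python
import math

BLOCK = 4096
SHARES = 10      # 3-of-10 encoding
NEEDED = 3
OVERHEAD = 718   # per-share validation overhead after #87 (assumed flat)

def consumed(filesize):
    share_data = math.ceil(filesize / NEEDED)
    share_file = share_data + OVERHEAD
    per_share = max(1, math.ceil(share_file / BLOCK)) * BLOCK
    return SHARES * per_share

print(consumed(56))       # 40960 bytes (41kB) -- the projected minimum
print(consumed(10134))    # 40960 -- still one block per share
print(consumed(10135))    # 81920 -- crosses into two blocks per share
```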

warner modified the milestone from undecided to 0.5.0 2007-07-12 18:59:55 +00:00
Author

I've fixed the main problems here. My plan is to do some more tests, measure the current overhead (and record the results here), then close this ticket. #87 is a future change, since we want to retain the validation for a while, until we feel super-confident about the intermediate steps.

Author

copy of a message I sent to tahoe-dev:

I've just upgraded testnet to the most recent code, and have been playing
with larger uploads (now that they're finally possible). A couple of
performance numbers:

  • uploading a copy of the tahoe source tree (created with 'darcs dist'),

  • telling the node to copy the files directly from disk, using:
    `time curl -T /dev/null 'http://localhost:8011/vdrive/global/tahoe?t=upload&localdir=/home/warner/tahoe'`

  • 384 files

  • 63 directories

  • about 4.6MB of data

  • upload takes 117 seconds

  • about 30MB consumed on the storage servers

  • 0.3 seconds per file, 3.3 files per second

  • 39kB per second

With the 3-out-of-10 encoding we're now using by default, we expect a 3.3x
expansion from FEC, so we'd expect those 4.6MB to expand to 15.3MB. The 30MB
that was actually consumed (a 2x overhead) is the effect of the 4096-byte
disk blocksize, since the tahoe tree contains a number of small files.
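
A quick check of those numbers, using the values straight from the measurements above:

```python
data_mb = 4.6                        # tahoe source tree, 384 files
fec_expansion = 10.0 / 3             # 3-of-10 encoding
expected_mb = data_mb * fec_expansion
print(round(expected_mb, 1))                  # ~15.3 MB from FEC alone
print(round(30.0 / expected_mb, 1))           # ~2.0x extra from the 4kB blocksize
print(round(117.0 / 384, 2), "s/file,", round(4.6 * 1000 / 117), "kB/s")
```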

Uploading a copy of a recent linux kernel (linux-2.6.22.1.tar.bz2, 45.1MB)
tests out the large-file performance, this time sending the bytes over the
network (albeit from the same host as the node), using an actual http PUT:
`time curl -T linux-2.6.22.1.tar.bz2 'http://localhost:8011/vdrive/global/big/linux-2.6.22.1.tar.bz2'`

  • 1 file
  • 1 new directory
  • 45.1MB of data
  • upload takes 44 seconds
  • 151MB consumed on the storage servers
  • 1.04MB per second

The 3.3x expansion of a 45.1MB file would lead us to expect 150.3MB consumed,
so the 151MB that was actually consumed is spot on.

Downloading the kernel image took place at 4.39MBps on the same host as the node, and at 4.46MBps on a separate host (the introducer).

Please note that these speed numbers are somewhat unrealistic: on our
testnet, we have three storage servers running on one machine, and an
introducer/vdrive-server running on a second. Both machines live in the same
cabinet and are connected to each other by a gigabit-speed network (not that
it matters, because the introducer/vdrive-server holds minimal amounts of
data). So what we're measuring here is the speed at which a node can do FEC
and encryption, and the overhead of Foolscap's SSL link encryption, and maybe
the rate at which we can write shares to disk (although these files are small
enough that the kernel can probably buffer them entirely in memory and then
write them to disk at its leisure).

Having storageservers on separate machines would be both better and worse:
worse because the shares would have to be transmitted over an actual wire
(instead of through the loopback interface), and better because then the
storage servers wouldn't be fighting with each other for access to the shared
disk and CPU. When we get more machines to dedicate to this purpose, we'll do
some more performance testing.

Author

here's a graph of overhead (although I'll be the first to admit it's not the best conceivable way to present this information..): [attachment:overhead1.png](/tahoe-lafs/trac/attachments/000078ac-b888-b298-26bb-cf5fbfffa123).

The blue line is URI length. This grows from about 16 characters for a tiny (2-byte) file, to about 160 characters for everything longer than 55 bytes.

The pink line is effective expansion ratio. This is zero for small (<55-byte) files, since we use LIT uris. Then it gets really big, because we consume 40960 bytes for a 56-byte file, and that consumption stays constant up to a 10095-byte file. Then it jumps to 81920 bytes until we hit 122880 bytes at about 22400-byte files. It asymptotically approaches 3.3x (from above) as the filesize gets larger (and the effect of the 4kB blocksize gets smaller).
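
The shape of that pink curve can be reproduced with a tiny model (not the actual measurement tool; the flat per-share overhead below is a hypothetical constant chosen so that the first step lands at the measured 10095 bytes):

```python
import math

BLOCK, SHARES, NEEDED = 4096, 10, 3
OVERHEAD = 731   # hypothetical flat per-share overhead (see note above)

def consumed(size):
    share_file = math.ceil(size / NEEDED) + OVERHEAD
    return SHARES * max(1, math.ceil(share_file / BLOCK)) * BLOCK

for size in (56, 10_095, 10_096, 22_000, 100_000, 1_000_000):
    # prints filesize, bytes consumed, and the effective expansion ratio
    print(size, consumed(size), round(consumed(size) / size, 2))
```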

Author

Attachment overhead1.png (14114 bytes) added

Author

Attachment overhead2.png (18836 bytes) added

Author

ok, [this one](/tahoe-lafs/trac/attachments/000078ac-b888-b298-26bb-151415c154be) is more readable. The two axes are in bytes, and you can see how we get constant 41kB storage space until we hit 10k files, then 82kB storage space (two disk blocks per share) until we hit 22k files, then the stairstep continues until the shares get big enough for the disk blocks not to matter. We approach the intended 3.3x as the files get bigger, getting close enough that the difference stops mattering by about 1MB files.

Author

I'm adding a tool called source:misc/storage-overhead.py to produce these measurements. To run it, use

`PYTHONPATH=instdir/lib python misc/storage-overhead.py 1234`

and it will print useful storage-usage numbers for each filesize you give it. You can also pass 'chart' instead of a filesize to produce a CSV file suitable for passing into gnumeric or some other spreadsheet (which is how I produced the graphs attached here).

Author

and now I'm going to close out this ticket, because I think we've improved the situation well enough for now.

warner added the
r/fixed
label 2007-07-16 20:45:43 +00:00