pipeline upload segments to make upload faster #392

Closed
opened 2008-04-23 18:57:16 +00:00 by warner · 6 comments

In ticket #252 we decided to reduce the max segment size from 1MiB to 128KiB. But this caused in-colo upload speed to drop by at least 50%.

We should see if we can pipeline two segments for upload, to get back the extra round-trip times that we lost with having more segments.

It's also possible that some of the slowdown is just the extra overhead of computing more hashes, but I suspect turnaround time more than overhead.

We need to do something similar for download as well, since download speed was also reduced drastically by the segsize change.

warner added the
p/major
t/enhancement
v/1.0.0
labels 2008-04-23 18:57:16 +00:00
warner added this to the eventually milestone 2008-04-23 18:57:16 +00:00
warner self-assigned this 2008-04-23 18:57:16 +00:00
Author

Oh, and I just thought of the right place to do this too: in the `WriteBucketProxy`. It should be allowed to keep a Nagle-like cache of
write vectors, and send them out in a batch when the cache grows larger than some
particular size (coalescing small writes into a single call, reducing the
number of round trips). In addition, it should be allowed to have multiple calls outstanding
as long as the total amount of data it has sent (and therefore might be in the transport
buffer) is below some amount, say 128KiB. If k=3, that should allow three segments to be on the wire at once, mitigating the slowdown due to round trips. As long as the RTT is less than windowsize/bandwidth, this should keep the pipe full.
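
A minimal sketch of that idea, not the actual `WriteBucketProxy` code: it assumes a hypothetical `remote_writev` callable that accepts a list of `(offset, data)` vectors, and uses the batch size and 128KiB budget mentioned above purely for illustration.

```python
from twisted.internet import defer

class CoalescingWriter:
    """Nagle-like cache: batch small write vectors into one remote call,
    and allow multiple calls in flight up to a byte budget."""
    def __init__(self, remote_writev, batch_size=16*1024, max_in_flight=128*1024):
        self.remote_writev = remote_writev  # hypothetical: callable([(offset, data), ...]) -> Deferred
        self.batch_size = batch_size
        self.max_in_flight = max_in_flight
        self.cache = []          # pending (offset, data) write vectors
        self.cached_bytes = 0
        self.in_flight = 0       # bytes sent but not yet acknowledged
        self.waiting = []        # Deferreds for callers blocked on the budget

    def write(self, offset, data):
        self.cache.append((offset, data))
        self.cached_bytes += len(data)
        if self.cached_bytes >= self.batch_size:
            self.flush()
        if self.in_flight < self.max_in_flight:
            return defer.succeed(None)   # budget available: don't block the caller
        d = defer.Deferred()             # budget exceeded: block until writes retire
        self.waiting.append(d)
        return d

    def flush(self):
        # coalesce all cached vectors into a single remote call
        if not self.cache:
            return
        batch, size = self.cache, self.cached_bytes
        self.cache, self.cached_bytes = [], 0
        self.in_flight += size
        self.remote_writev(batch).addCallback(self._retired, size)

    def _retired(self, _result, size):
        self.in_flight -= size
        while self.waiting and self.in_flight < self.max_in_flight:
            self.waiting.pop(0).callback(None)
```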

Author

#320 is related, since the storage-server protocol changes we talked about would make it easier to implement the pipelining.

Author

Attachment pipeline.diff (14391 bytes) added

patch to add pipelining to immutable upload

Author

So, using the attached patch, I added pipelined writes to the immutable
upload operation. The `Pipeline` class allows up to 50KB in the pipe
before it starts blocking the sender: calls to
`WriteBucketProxy._write` return an already-fired Deferred (via
`defer.succeed`) until there is more
than 50KB of unacknowledged data in the pipe, after which they return regular
Deferreds until some of those writes get retired. A terminal `flush()`
call causes the upload to wait for the pipeline to drain before it is
considered complete.
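
For reference, a simplified sketch of the behavior described above; the actual `Pipeline` class in the patch may differ, and the method names here are illustrative.

```python
from twisted.internet import defer

class Pipeline:
    def __init__(self, capacity=50*1000):  # ~50KB, per the description above
        self.capacity = capacity
        self.gauge = 0            # unacknowledged bytes in the pipe
        self.waiting = []         # callers blocked on a full pipe
        self.flushing = []        # Deferreds from flush()

    def add(self, size, send):
        # send: zero-arg callable that starts the write and returns a Deferred
        self.gauge += size
        send().addBoth(self._retire, size)
        if self.gauge <= self.capacity:
            return defer.succeed(None)    # pipe not full: don't block the sender
        d = defer.Deferred()              # pipe full: block until writes retire
        self.waiting.append(d)
        return d

    def _retire(self, result, size):
        self.gauge -= size
        while self.waiting and self.gauge <= self.capacity:
            self.waiting.pop(0).callback(None)
        if self.gauge == 0:
            for d in self.flushing:       # drain complete: release flush() callers
                d.callback(None)
            self.flushing = []
        return result

    def flush(self):
        if self.gauge == 0:
            return defer.succeed(None)
        d = defer.Deferred()
        self.flushing.append(d)
        return d
```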

A quick performance test (in the same environments that we do the buildbot
performance tests on: my home DSL line and tahoecs2 in colo) showed a
significant improvement in the DSL per-file overhead, but only about a 10%
improvement in the overall upload rate (for both DSL and colo).

Basically, the 7 writes used to write a small file (header, segment 0,
crypttext_hashtree, block_hashtree, share_hashtree, uri_extension, close) are
all put on the wire together, so they take bandwidth plus 1 RTT instead of
bandwidth plus 7 RTT. Saving those 6 RTTs appears to be worth about 1.8
seconds over my DSL line (my ping time to the servers is about 11ms, but
there's kernel/python/twisted/foolscap/tahoe overhead on top of that).

For a larger file, pipelining might increase the utilization of the wire,
particularly if you have a "long fat" pipe (high bandwidth but high latency).
However, with 10 shares going out at the same time, the wire is probably
pretty full already: the ratio of interest is ((segsize*N/k)/BW) / RTT. You
send N blocks for a single segment at once, then you wait for all the replies
to come back, then generate the next blocks. If the time it takes to send a
single block is greater than the server's turnaround time, then N-1 responses
will be received before the last block is finished sending, so you've only
got one RTT of idle time (while you wait for the last server to respond).
Pipelining will fill this last RTT, but my guess is that it isn't much of a
help, and that something else is needed to explain the performance hit we saw
in colo when we moved to larger segments.
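
A back-of-the-envelope check of that ratio, using the 128KiB segsize from #252 and N=10, k=3; the wire bandwidths and RTTs are rough assumptions loosely based on the numbers in this ticket, not measurements.

```python
# Time to put one segment's worth of blocks on the wire, (segsize * N/k) / BW,
# compared against the RTT. A large ratio means the single idle RTT per
# segment is negligible, so pipelining can't help much.
segsize = 128 * 1024          # 128KiB segments (ticket #252)
N, k = 10, 3

for name, bw, rtt in [
    ("DSL",  30e3,  0.011),    # ~30kB/s upstream, ~11ms ping (assumed)
    ("colo", 10e6,  120e-6),   # ~10MB/s LAN, ~120us ping (assumed)
]:
    send_time = segsize * N / k / bw    # seconds spent sending per segment
    print("%s: send=%.4fs rtt=%.6fs ratio=%.1f" % (name, send_time, rtt, send_time / rtt))
```

In both cases the send time dwarfs the RTT, which matches the observation that pipelining only improved overall upload speed by about 10%.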

DSL no pipelining:

```
TIME (startup): 2.36461615562 up, 0.719145059586 down
TIME (1x 200B): 2.38471603394 up, 0.734190940857 down
TIME (10x 200B): 21.7909920216 up, 8.98366594315 down
TIME (1MB): 45.8974239826 up, 5.21775698662 down
TIME (10MB): 449.196600914 up, 34.1318571568 down
upload per-file time: 2.179s
upload speed (1MB): 22.87kBps
upload speed (10MB): 22.37kBps
```

DSL with pipelining:

```
TIME (startup): 0.437352895737 up, 0.185742139816 down
TIME (1x 200B): 0.493880987167 up, 0.202013969421 down
TIME (10x 200B): 5.15211510658 up, 2.04516386986 down
TIME (1MB): 43.141931057 up, 2.09753513336 down
TIME (10MB): 416.777194977 up, 19.6058299541 down
upload per-file time: 0.515s
upload speed (1MB): 23.46kBps
upload speed (10MB): 24.02kBps
```

The in-colo tests showed roughly the same improvement to upload speed, but
very little change to the per-file time. The RTT there is shorter (ping
time is about 120us), which might explain the difference. But I think the
slowdown lies elsewhere. Pipelining shaves about 30ms off each file, and
increases the overall upload speed by about 10%.

colo no pipelining:

```
TIME (startup): 0.29696393013 up, 0.0784759521484 down
TIME (1x 200B): 0.285771131516 up, 0.0790619850159 down
TIME (10x 200B): 3.23165798187 up, 0.849181175232 down
TIME (100x 200B): 31.7827451229 up, 8.95765590668 down
TIME (1MB): 1.00738477707 up, 0.347244977951 down
TIME (10MB): 7.12743496895 up, 2.9827849865 down
TIME (100MB): 70.9683670998 up, 25.6454920769 down
upload per-file time: 0.318s
upload per-file times-avg-RTT: 83.833386
upload per-file times-total-RTT: 20.958347
upload speed (1MB): 1.45MBps
upload speed (10MB): 1.47MBps
upload speed (100MB): 1.42MBps
```

colo with pipelining:

```
TIME (startup): 0.262734889984 up, 0.0758249759674 down
TIME (1x 200B): 0.271718025208 up, 0.0812950134277 down
TIME (10x 200B): 2.80361104012 up, 0.838641881943 down
TIME (100x 200B): 28.4790999889 up, 9.36092710495 down
TIME (1MB): 0.853738069534 up, 0.337486028671 down
TIME (10MB): 6.6658270359 up, 2.67381596565 down
TIME (100MB): 64.6233050823 up, 26.5593090057 down
upload per-file time: 0.285s
upload per-file times-avg-RTT: 77.205647
upload per-file times-total-RTT: 19.301412
upload speed (1MB): 1.76MBps
upload speed (10MB): 1.57MBps
upload speed (100MB): 1.55MBps
```

I want to run some more tests before landing this patch, to make sure it's
really doing what I thought it should be doing. I'd also like to improve the
automated speed-test to do a simple TCP transfer to measure the available
upstream bandwidth, so we can compare tahoe's upload speed against the actual
wire.
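
A minimal sketch of such a raw-TCP upstream measurement: the host/port are placeholders, and it assumes a cooperating discard-style receiver that drains the connection and closes it once everything has been read.

```python
import socket, time

def measure_upstream(host, port, nbytes=10*1024*1024):
    """Push nbytes at a discard sink and time it; returns bytes/second."""
    chunk = b"\x00" * 65536
    s = socket.create_connection((host, port))
    start = time.time()
    sent = 0
    while sent < nbytes:
        s.sendall(chunk)
        sent += len(chunk)
    s.shutdown(socket.SHUT_WR)   # signal end-of-data to the receiver
    s.recv(1)                    # wait for the receiver to drain and close
    elapsed = time.time() - start
    s.close()
    return nbytes / elapsed

# e.g. print("%.1f kBps" % (measure_upstream("tahoecs2", 9999) / 1000))
```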

Author

I pushed this patch anyway. I think it'll help, just not as much as I was hoping for.

warner added the
r/fixed
label 2009-05-18 23:46:26 +00:00
warner modified the milestone from eventually to 1.5.0 2009-05-18 23:46:26 +00:00
Brian Warner <warner@lothar.com> commented 2017-01-11 00:35:09 +00:00
Owner

In [5e1d464/trunk](/tahoe-lafs/trac/commit/5e1d464a65b58b9d5d9964d99e56713e1a591c08):

```
Merge branch PR392

closes #392
closes ticket:2860
```
Reference: tahoe-lafs/trac#392