the immutable uploader should call remote_abort on buckets that it knows it won't be using #1117

Closed
opened 2010-07-13 21:10:11 +00:00 by kevan · 11 comments
kevan commented 2010-07-13 21:10:11 +00:00
Owner

The immutable upload code doesn't seem to call the abort method of buckets on remote storage servers when an upload fails (e.g., because the user aborted it, or because peer selection didn't work out). This means that the placeholder sharefiles stick around and reduce available space until the client disconnects, or they expire in some other way. The immutable upload code should do a better job of calling abort to resolve this issue.

(this was first reported in http://tahoe-lafs.org/pipermail/tahoe-dev/2010-July/004656.html)

tahoe-lafs added the c/code-peerselection, p/major, t/defect, v/1.7.0 labels 2010-07-13 21:10:11 +00:00
tahoe-lafs added this to the 1.7.1 milestone 2010-07-13 21:10:11 +00:00

Kevan: is this a regression vs. v1.6.0? My guess is that this bug was already present in v1.6.0, but if not please add the "regression" tag to this ticket!

The storage server doesn't exactly use placeholder files, but the internal how-much-space-have-i-committed-to code will indeed keep counting that space until an `abort()` is sent, so the uploader should definitely abort the shares as soon as it realizes it isn't going to use them. Otherwise the allocation will stick around until the server connection is dropped.

The share that's hanging out may also convince later uploaders to refrain from uploading a new copy of that same share. I think the server reports in-progress shares in exactly the same way as it reports really-complete shares. So failing to abort a share is likely to confuse later uploads too.
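
To make the accounting concrete, here is a minimal sketch of the behavior described above; the names (`StorageServerSketch`, `total_space`) are invented for illustration, and this is not Tahoe-LAFS's actual code:

```python
# Minimal sketch of the "how much space have I committed to" logic.
# Invented names; not Tahoe-LAFS's actual implementation.

class StorageServerSketch:
    def __init__(self, total_space):
        self.total_space = total_space
        self._active_writers = {}  # writer -> bytes promised to it

    def allocated_size(self):
        # Space promised to in-progress uploads. A writer that is
        # never close()d or abort()ed stays here indefinitely.
        return sum(self._active_writers.values())

    def remaining_space(self):
        # What the server consults before accepting new shares.
        return self.total_space - self.allocated_size()

    def bucket_writer_closed(self, writer):
        # Called when a writer is closed (and, after this ticket's
        # fix, also when it is aborted): release the promised space.
        self._active_writers.pop(writer, None)
```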

kevan commented 2010-07-16 00:50:33 +00:00
Author
Owner

From what I understand, the storage server only stops counting the space allocated to a bucket writer when that writer's `remote_close` method is called, since that causes the server's `bucket_writer_closed` method to be invoked, which removes the bucket writer from the active writers list. `remote_abort`, on the other hand, only deletes the incoming file associated with the bucket. If I haven't misunderstood, this issue then breaks down into:

  1. The client needs to be careful about aborting shares when it knows that it will no longer use them.
  2. The server needs to treat `remote_abort` more like `remote_close`, only instead of copying the file from the incomingdir to the sharedir, it needs to delete that file (a sketch of this follows the list).
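
Something like the following sketch captures what item 2 is asking for; the names and bookkeeping are invented for illustration, not taken from the attached patch:

```python
import os

class BucketWriterSketch:
    """Illustrative only; the real BucketWriter does more."""
    def __init__(self, server, incominghome):
        self.server = server
        self.incominghome = incominghome  # partial share in incoming/
        self.closed = False

    def remote_abort(self):
        if self.closed:
            return
        self.closed = True
        # Delete the partially-written share instead of promoting it
        # to the share directory the way remote_close would.
        if os.path.exists(self.incominghome):
            os.remove(self.incominghome)
        # The fix: release the space accounting, same as remote_close.
        self.server.bucket_writer_closed(self)
```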

I've attached a patch that addresses both of these issues. This can be considered a backward-compatibility break for clients that were relying on the fact that `abort()`ing a bucket didn't cause the server to stop counting the space assigned to that bucket. I'm not sure how likely it is that there are any such clients.

In the tests for the client-side modifications, I use a `fireEventually()` to make sure that the abort messages get to the storage server before I check that they were sent (the bucket-writer proxy's abort call uses `callRemoteOnly` instead of `callRemote` because it doesn't care much about the result, so it is harder to reason about when the messages reach their destination during testing). Is this reasonable? Is there a better way?
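
The pattern in question might look roughly like this in a trial test; `start_failing_upload()` and `self.server` are hypothetical stand-ins for the real fixtures:

```python
from twisted.trial import unittest
from foolscap.api import fireEventually

class AbortTests(unittest.TestCase):
    def test_aborts_reach_server(self):
        # start_failing_upload() and self.server are hypothetical
        # helpers standing in for the real test fixtures.
        d = self.start_failing_upload()
        # abort() goes out via callRemoteOnly, which returns no
        # Deferred to chain on; fireEventually() lets the queued
        # eventual-send messages be delivered before we check.
        d.addCallback(lambda ign: fireEventually())
        d.addCallback(lambda ign:
                      self.failUnlessEqual(self.server.allocated_size(), 0))
        return d
```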

zooko: I think you're right; this bug seems to exist in 1.6.0 too, so this isn't a regression.

kevan commented 2010-07-16 00:51:19 +00:00
Author
Owner

**Attachment** 1117patch.dpatch (12597 bytes) added

Since it is a bugfix and a patch exists it is still a candidate for 1.7.1. Someone should review it carefully!

I just read through [attachment:1117patch.dpatch](/tahoe-lafs/trac/attachments/000078ac-e6ba-73dc-9533-72a4a625fd99). I didn't see any mistakes, and the patch adds three unit tests: `test_abort()`, `test_peer_selector_bucket_abort()`, and `test_encoder_bucket_abort()`. However, I'm too sleepy to double-check all the uses of `self.buckets` in source:src/allmydata/immutable/upload.py and to understand *exactly* what those tests do, so I'm putting this off until tomorrow.

(Anyone else should feel free to review this before I get around to it.)

Thinking about this a bit further (and in light of the persistent-failure-to-upload described in #1118)... it's not a space problem, but rather a consequence of the way the server handles shares that it thinks are already in the process of being uploaded.

If an upload fails partway through (after `allocate_buckets`), such as how #1118 stopped at an `assert` statement, the storage servers will have `BucketWriter` objects with open filehandles to partially- (perhaps not-at-all-) written shares in `incoming/`. The client will neither close those shares, nor abort them, nor drop the connection, so they'll stick around until the client next restarts. When the client tries to upload the file a second time, its `allocate_buckets` call will hit source:src/allmydata/storage/server.py#L335, in which the presence of the `incoming/` file will cause the server to refuse to accept a new share, but not claim that it already has the share (indistinguishable from the case where it does not have enough space to accept the share).

This effectively disables those servers for that one file (i.e. for that one storage-index). If the grid only has 10 servers, then a single failed upload is enough to leave the client with no servers that will accept shares, and all subsequent uploads of that file (until the client is restarted, severing the TCP connections and aborting the shares) will fail. If the grid has 20 servers, then two failed uploads are enough to get into this nothing-will-work state.

As the rest of this ticket points out, the necessary fix is to examine the error paths out of the uploader code, to make sure that all paths result in the shares either being closed or aborted. This is non-trivial. We need to accumulate a list of remote `BucketWriter` references as soon as they are received from the server (in response to the `allocate_buckets` call), and then have an `addBoth` handler (like a 'finally' block in a synchronous try/finally clause) that aborts anything left in the list; a rough sketch follows. When the upload succeeds, entries in this list should be removed as soon as the response to the `close()` message is received. Since the `BucketWriter` references are received by the peer-selection code, whereas the best place for the `addBoth` handler is elsewhere (in the `CHKUploader`, or maybe the `Uploader`), it's not clear where this list ought to live.
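
A rough sketch of that bookkeeping, with all names invented (this is the shape of the fix, not the eventual implementation):

```python
def upload_with_cleanup(uploader):
    """Illustrative shape only; uploader and its methods are invented."""
    outstanding = set()  # remote bucket-writer proxies not yet closed

    def remember(buckets):
        # Record every bucket as soon as allocate_buckets hands it
        # over, before anything else has a chance to fail.
        outstanding.update(buckets)
        return buckets

    def forget(result, bucket):
        # Run when the response to a bucket's close() arrives: that
        # bucket no longer needs aborting.
        outstanding.discard(bucket)
        return result

    def abort_leftovers(result):
        # The addBoth handler: runs on success and failure alike,
        # like a 'finally' clause, aborting whatever is left over.
        for bucket in outstanding:
            bucket.abort()  # fire-and-forget abort message
        return result       # pass the original result (or Failure) through

    d = uploader.allocate_buckets()          # hypothetical entry point
    d.addCallback(remember)
    d.addCallback(uploader.encode_and_push)  # should call forget() per close()
    d.addBoth(abort_leftovers)
    return d
```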

daira was unassigned by zooko 2010-07-18 03:39:33 +00:00
zooko self-assigned this 2010-07-18 03:39:33 +00:00

**Attachment** test-1117.diff (1604 bytes) added

test case to check that the code fails without the abort-shares patch

The test case in the patch I just attached fails without Kevan's patch. I have not yet confirmed that it passes *with* his patch.

I have just confirmed that my new test case does indeed pass when Kevan's patch is applied. The tests that are in his patch are ok, but they focus on allocated size, rather than the ability to perform a second upload (i.e. the lack of the buggy prevent-all-further-allocate_buckets behavior). So I think we should apply both patches.

In changeset:16bb529339e6cbd5:

```
tests, NEWS, CREDITS re: #1117

Give Brian and Kevan promotions, move release date in NEWS to the 18th, commit Brian's test for #1117.
fixes #1117
```
zooko added the r/fixed label 2010-07-18 20:50:16 +00:00
zooko closed this issue 2010-07-18 20:50:16 +00:00
Reference: tahoe-lafs/trac#1117