'readonly_storage' and 'reserved_space' not honored for mutable-slot write requests #390

Open
opened 2008-04-22 17:39:47 +00:00 by warner · 17 comments

The `remote_allocate_buckets` call correctly says "no" when the `readonly_storage` config flag is on, but the corresponding `remote_slot_testv_and_readv_and_writev` (for mutable files) does not. This means that a storage server which has been kicked into readonly mode (say, if the drive is starting to fail and it has been left online just to get the shares off of that drive and on to a new one) will continue to accumulate new mutable shares.
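
For concreteness, here is a minimal sketch (class, method, and attribute names are illustrative only, not the actual Tahoe-LAFS storage API) of the kind of guard this ticket asks for: the mutable-slot write entry point should check the read-only flag before touching any share, just as the immutable allocation path already does.

```python
class ReadOnlyError(Exception):
    """Raised when a write is attempted against a read-only storage server."""


class SketchStorageServer:
    """Toy stand-in for a storage server, used only to illustrate the guard."""

    def __init__(self, readonly_storage=False):
        self.readonly_storage = readonly_storage
        self.mutable_shares = {}  # storage_index -> share data

    def slot_writev(self, storage_index, new_data):
        # The missing check: refuse mutable-slot writes when the server is
        # configured read-only, mirroring what the immutable allocate path does.
        if self.readonly_storage:
            raise ReadOnlyError("server is configured with readonly_storage")
        self.mutable_shares[storage_index] = new_data


# A read-only server should reject the write instead of silently accepting it.
server = SketchStorageServer(readonly_storage=True)
try:
    server.slot_writev(b"si1", b"new share contents")
except ReadOnlyError as exc:
    print("write refused:", exc)
```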
warner added the c/code-storage, p/major, t/defect, v/1.0.0 labels 2008-04-22 17:39:47 +00:00
warner added this to the eventually milestone 2008-04-22 17:39:47 +00:00

Practically speaking, shouldn't read-only storage normally be implemented by remounting the storage partition read-only?

This implies that one should normally not keep anything else (like Tahoe log files) on the partition where one keeps Tahoe storage.

Oh, but now I realize that making it read-only at that level might not propagate back to the client when the client calls `remote_allocate_buckets` or `remote_slot_testv_and_readv_and_writev`. Or, actually, it might! Because...

... because [`remote_allocate_buckets()`]source:src/allmydata/storage@2537#L744 and [`remote_slot_testv_and_readv_and_writev()`]source:src/allmydata/storage@2537#L931 both try to write to the filesystem before they return, so if that filesystem is read-only, then a nice foolscap exception will be sent back to the client.

Those hyperlinks should be [remote_allocate_buckets()]source:src/allmydata/storage.py@2537#L744 and [remote_slot_testv_and_readv_and_writev()]source:src/allmydata/storage.py@2537#L931.

So, I would like us to consider removing the "read only storage" feature from the Tahoe source code. People who can't make their whole partition read-only can use simple filesystem permissions to make the storage directory unwriteable to the account that runs the Tahoe node. This technique would be less buggy than the implementation of read-only in the Tahoe source code, and it would require less of our developer time to maintain.
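
As an illustration of that suggested alternative, here is a minimal sketch (the path is hypothetical, and this is not Tahoe code): clear the write bits on the storage directory so the account running the node can no longer create new share files there.

```python
import os
import stat

storage_dir = "/var/tahoe/storage"  # hypothetical path to the node's share directory

# Remove write permission for owner, group, and others on the directory itself,
# which prevents creating new files (i.e. new shares) inside it.
mode = os.stat(storage_dir).st_mode
os.chmod(storage_dir, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
```

Note that this only blocks creation of new entries in the directory; existing share files would also need their own write bits cleared to prevent in-place mutable overwrites.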
zooko changed title from 'readonly_storage' not honored for mutable-slot write requests to 'readonly_storage' not honored for mutable-slot write requests (or shall we stop offering read-only storage as a Tahoe configuration option) 2008-05-30 04:24:20 +00:00
warner modified the milestone from eventually to undecided 2008-06-01 21:08:19 +00:00

Brian and I had a big conversation on the phone about this and came up with a good design -- efficient, robust, and not too complicated. Brian wrote it up:

<http://allmydata.org/pipermail/tahoe-dev/2008-May/000630.html>

Hm... why did you put this one in "undecided"? How about v1.2.0...
zooko modified the milestone from undecided to 1.2.0 2008-06-06 23:31:53 +00:00
zooko changed title from 'readonly_storage' not honored for mutable-slot write requests (or shall we stop offering read-only storage as a Tahoe configuration option) to 'readonly_storage' not honored for mutable-slot write requests 2008-06-06 23:31:53 +00:00
Author

because I figured that we'd replace it with something other than "readonly_storage", and that the accounting / dict-introducer changes might significantly change what we do with this. It's an issue that we really ought to address for 1.2.0, but I don't know how exactly we're going to do that.

1.2.0 sounds fine.
zooko modified the milestone from 1.5.0 to eventually 2009-06-30 12:39:50 +00:00

As long as we have the `reserved_space` setting, that should also be honoured for writes to mutable slots, so an explicit space check is needed just as in `remote_allocate_buckets`.
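
A minimal sketch of such a check (the helper name, path, and free-space calculation are illustrative, not the actual Tahoe implementation): refuse a write if accepting it would leave less than `reserved_space` free on the filesystem holding the shares.

```python
import shutil


def has_room_for(storage_dir, reserved_space, requested_bytes):
    """Return True if writing requested_bytes would still leave at least
    reserved_space bytes free on the filesystem holding storage_dir."""
    free = shutil.disk_usage(storage_dir).free
    return free - requested_bytes >= reserved_space


# Example: with 1 GiB reserved, a 10 MiB mutable write is only accepted
# while at least 1 GiB would remain free afterwards.
if not has_room_for("/var/tahoe/storage", reserved_space=2**30,
                    requested_bytes=10 * 2**20):
    print("refusing mutable-slot write: would violate reserved_space")
```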
daira changed title from 'readonly_storage' not honored for mutable-slot write requests to 'readonly_storage' and 'reserved_space' not honored for mutable-slot write requests 2010-01-16 00:47:35 +00:00

Required for #871 (handle out-of-disk-space condition).
daira modified the milestone from eventually to 1.7.0 2010-02-01 20:01:38 +00:00
daira modified the milestone from 1.7.0 to 1.6.1 2010-02-15 19:53:30 +00:00

The problem is that if you run out of space in your storage server, and you refuse to overwrite a mutable share with a new version, then you are going to continue to serve the older version, which could cause inefficiency, confusion, and perhaps even "rollback" events. Kicking this one out of v1.6.1 on the grounds that it is Feb. 15 and I don't understand what we *should* do, so it is too late to do something about it for the planned Feb. 20 release of v1.6.1. (Also we have lots of other clearer issues in the v1.6.1 Milestone already.)
zooko modified the milestone from 1.6.1 to eventually 2010-02-16 05:15:44 +00:00

Replying to [zooko](/tahoe-lafs/trac/issues/390#issuecomment-366656):

> The problem is that if you run out of space in your storage server, and you refuse to overwrite a mutable share with a new version, then you are going to continue to serve the older version, which could cause inefficiency, confusion, and perhaps even "rollback" events.

OTOH, you can't in general avoid these bad things by not honouring `reserved_space`, because they will happen anyway if the filesystem runs out of space. Perhaps there is a case for starting to refuse storage of immutable shares at a higher reserved-space threshold than for mutable shares, though.
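
A sketch of that last idea (the constants and function are purely hypothetical, not existing Tahoe configuration): apply a larger free-space reserve to new immutable shares than to mutable overwrites, so that version updates keep succeeding for a while after new uploads have stopped being accepted.

```python
import shutil

IMMUTABLE_RESERVE = 5 * 2**30  # stop accepting new immutable shares below 5 GiB free
MUTABLE_RESERVE = 1 * 2**30    # stop accepting mutable-slot writes below 1 GiB free


def accept_write(storage_dir, kind, size):
    """Decide whether a write of `size` bytes of the given kind
    ("immutable" or "mutable") fits within the tiered reserves."""
    free = shutil.disk_usage(storage_dir).free
    reserve = IMMUTABLE_RESERVE if kind == "immutable" else MUTABLE_RESERVE
    return free - size >= reserve
```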

Replying to [davidsarah]comment:19:

> OTOH, you can't in general avoid these bad things by not honouring `reserved_space`, because they will happen anyway if the filesystem runs out of space.

... which, as #871 points out, is currently not handled gracefully.
Owner

Replying to [zooko](/tahoe-lafs/trac/issues/390#issuecomment-366656):

> The problem is that if you run out of space in your storage server, and you refuse to overwrite a mutable share with a new version, then you are going to continue to serve the older version, which could cause inefficiency, confusion, and perhaps even "rollback" events. Kicking this one out of v1.6.1 on the grounds that it is Feb. 15 and I don't understand what we *should* do, so it is too late to do something about it for the planned Feb. 20 release of v1.6.1. (Also we have lots of other clearer issues in the v1.6.1 Milestone already.)

Probably I am failing to understand, but on the off chance that this is useful: if the notion of taking a server read-only and having shares migrate off it (which sounds useful) is going to work, then replacing a mutable file with a new version is going to have to find servers to store the new shares, place them, and remove the old shares. So a server failing to accept the new share shouldn't have any direct bearing on the new upload succeeding and the old shares being removed. I would also expect (again, without knowing) that there would be a process of placing the new shares and only then, when successful, removing the old ones.

See also #1568, for the S3 backend.

From comment:5:ticket:1568:

For what it is worth, I increasingly think read-only storage should be deprecated for all backends, and people will have to learn how to use their operating system if they want readonliness of storage. When we invented the read-only storage option, I think partly we were thinking of users who could read our docs but didn't want to learn how to use their operating system to set policy. Nowadays I'm less interested in the idea of such users being server operators.

Also, the fact that we've never really finished implementing read-only storage (to include mutables), so that there are weird failure modes that could hit people who rely on it, is evidence that we should not spend our precious engineering time on things that the operating system could do for us, and do better.

Owner

Duplicated from <http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1568#comment:366644>:

I don't really follow this. It seems reasonable for a server operator to decide not to accept new shares, and for this to be separate from whether the server process is able to write to the filesystem where the shares are kept. For example, it might be reasonable to allow lease renewal, or for other metadata to be updated. It might be that not accepting shares should be similar to having zero space available, so increasing the size of a mutable share also might not be allowed. And, if the purpose really is decommissioning, then presumably the mechanism used for repair should somehow signal that the share is present but should be migrated, so that a deep-check --repair can put those shares on some other server.

There's a difference between people who don't understand enough to sysadmin a server, and the server having uniform configuration for server-level behavior. When tahoe is ported to [ITS](http://en.wikipedia.org/wiki/Incompatible_Timesharing_System), it should still be possible to tell it to stop taking shares.
Reference: tahoe-lafs/trac#390