repair of mutable files/directories should not increment the sequence number #1209

Open
opened 2010-09-23 23:38:26 +00:00 by gdt · 17 comments

Particularly with my root directory, I often find that 9 shares of seqN are available compared to 10 desired. I do a repair, and this results in 10 shares of seqN+1, and the 9 seqN shares are then deleted. Then the next day there are 9 of seqN+1 and 1 of seqN, and the file is again not healthy. This repeats daily.

It seems that the missing seqN shares should instead be generated and placed; then, as servers churn, it's likely that 10 shares can still be found and there are no unrecoverable versions. Perhaps I'm missing something, but the current behavior is not stable with intermittent servers.

I have observed this problem with directories, but it seems likely that it applies to all ~~im~~mutable files.

tahoe-lafs added the c/code, p/minor, t/defect, v/1.8β labels 2010-09-23 23:38:26 +00:00
tahoe-lafs added this to the undecided milestone 2010-09-23 23:38:26 +00:00

gdt: did you mean "mutable" instead of "immutable"? Immutable files don't have a sequence number!

zooko added v/1.8.0 and removed v/1.8β labels 2010-09-29 12:22:35 +00:00

Sorry, I really meant directories in particular. I have edited the summary and description.

tahoe-lafs changed title from repair of immutable files should not increment the sequence number to repair of directories (all immutable files) should not increment the sequence number 2010-09-29 13:35:29 +00:00

Clarify that the ticket is really about directories, but that it likely applies to all immutable files.

tahoe-lafs changed title from repair of directories (all immutable files) should not increment the sequence number to repair of directories (all immutable files?) should not increment the sequence number 2010-09-29 13:36:31 +00:00

Replying to gdt:

> Clarify that the ticket is really about directories, but that it likely applies to all immutable files.

You mean it likely applies to all *mutable* files, right? :-) Directories are normally mutable, although it is also possible to have immutable directories. But immutable directories don't have sequence numbers. :-)

daira modified the milestone from undecided to soon 2010-10-09 17:02:50 +00:00
daira changed title from repair of directories (all immutable files?) should not increment the sequence number to repair of mutable files/directories should not increment the sequence number 2010-10-09 17:04:13 +00:00
daira modified the milestone from soon to 1.9.0 2011-01-14 18:11:43 +00:00

#1210 was a duplicate. Its description was:

> If there is 1 share of seqN and 10 shares of seqM, M>N, the file is not healthy. The fix should be to remove the seqN share and not molest the seqM shares, instead of incrementing the version. This contributes to instability.

daira added p/major and removed p/minor labels 2011-01-14 18:50:53 +00:00
daira modified the milestone from 1.9.0 to soon 2011-08-14 01:14:33 +00:00
warner added c/code-mutable and removed c/code labels 2012-03-18 01:07:05 +00:00

This is a great idea, especially since one of the failure modes for mutable files (when a share is migrated to a new server, causing the write-enabler to become wrong, causing the share to get "stuck" at some old version) is that it's never going to be able to get rid of that old share, so the file will always appear "unhealthy". In that case, constantly clobbering the perfectly-valid most-recent-version shares is obviously wrong.


Brian wrote on tahoe-dev:

> I haven't looked at that code for a long time, but it occurs to me that what it wants is a checker-results flag that says whether the unhealthy status is due to the presence of multiple versions or not. In terms of the `ServerMap` object, I think that's just:
>
> `len(sm.recoverable_versions()) + len(sm.unrecoverable_versions()) > 1`
>
> We only need to do the full download+upload cycle (which increments the version number) if there are multiple versions present. If we're just missing a couple of shares (or some are corrupted and could be replaced), then the number of versions == 1 and we should do a non-incrementing form of repair.
>
> I think we'll need new Repairer code to do that, though. Something to set the new version equal to the existing version, to avoid sending updates to existing correct shares, and something to make sure the generated test-vectors are ok with all that. Not trivial, but not a huge task either.
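
As a minimal sketch, Brian's check could be wrapped like this in Python (`needs_version_bump` is a hypothetical helper, not an existing Tahoe-LAFS API; only the two `ServerMap` accessors come from his comment):

```
def needs_version_bump(sm):
    """Brian's rule: a full download+upload cycle (which increments the
    sequence number) is only needed when more than one version --
    recoverable or not -- is present on the grid."""
    num_versions = len(sm.recoverable_versions()) + len(sm.unrecoverable_versions())
    return num_versions > 1

# If this returns False, exactly one version exists, so the repairer can
# regenerate the missing or corrupted shares of that same version in place.
```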


Greg Troxel wrote:

> More than that - if we have 1 share of M and all shares of N, for N > M, then we really just want to purge (or ignore?) the M share, and not molest the N shares.
>
> The test for this should be like:
>
> ```
> set up 5 servers S
> upload some files
> while 20 iterations
>   for s in S
>     take s off line
>     run repair
>     take s back
> ```
>
> With the current code, you get repair every time and 100 increments, I think. With your proposal, I think it's still true.
>
> The above test is how the pubgrid feels to me, or used to.
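
A sketch of that test in Python, with an entirely hypothetical grid harness (`grid`, `take_offline`, `check_and_repair`, and friends are illustrative names, not real Tahoe-LAFS test APIs); the final assertion is the point:

```
# Hypothetical harness: `grid` stands in for a test grid with 5 storage
# servers plus a client; none of these helper methods are real Tahoe APIs.
def test_repair_under_churn(grid):
    node = grid.client.create_mutable_file(b"contents")
    seq_before = node.get_sequence_number()

    for _ in range(20):                  # 20 iterations ...
        for server in grid.servers:      # ... over 5 servers = 100 repairs
            grid.take_offline(server)    # now only 9 of 10 shares visible
            node.check_and_repair()      # should regenerate shares in place
            grid.bring_online(server)    # a stale share of an old seqnum returns

    # With the current code each repair bumps the seqnum, so this would be
    # seq_before + 100; with a non-incrementing repair it should be unchanged.
    assert node.get_sequence_number() == seq_before
```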


Brian replied:

> On 3/29/12 12:01 PM, Greg Troxel wrote:
>
> > More than that - if we have 1 share of M and all shares of N, for N > M, then we really just want to purge (or ignore?) the M share, and not molest the N shares.
>
> Ah, good point. We really only need a new version if there are multiple competing versions with the same sequence number, and if that sequence number is the highest seen. Repair is tricky in that case anyways, since at the Tahoe level we can't do an automatic merge, so we're certainly losing information (if it's just a directory modification, then the directory.py code can re-apply the modification, so that one case might be safe).
>
> Hm, `ServerMap.needs_merge()` is pretty close already, but it only looks at recoverable versions (it tells you that an update will lose information that would have been recoverable if you'd read the individual versions first... there are alternate cases where it doesn't matter because the other versions weren't recoverable anyways).
>
> We should add a method to `ServerMap` that tells us whether a new version is needed or not.
>
> > The above test is how the pubgrid feels to me, or used to.
>
> Yup, that test looks right.


Replying to Brian:

> We really only need a new version if there are multiple competing versions with the same sequence number, and if that sequence number is the highest seen.

Counterexample: suppose there is a recoverable version with sequence number S, and an unrecoverable version with sequence number S+1. (There are no versions with sequence number >= S+2.) Then the new sequence number needs to be S+2, in order for clients not to use the shares from S+1. Deleting the shares with sequence number S+1 is also possible but inferior, since sequence numbers should be monotonically increasing to minimize race conditions.

> We should add a method to `ServerMap` that tells us whether a new version is needed or not.

+1.

daira modified the milestone from soon to 1.10.0 2012-03-29 20:38:22 +00:00

Proposed algorithm:

  1. Find the highest recoverable sequence number, R.
  2. Find the highest sequence number for which there exist any shares, S.
  3. If R == S, repair version R using the non-incrementing algorithm from Brian's tahoe-dev comment above (comment:381215). Otherwise, download version R and upload it to version S+1.

This would also fix #1004, #614, and #1130. IIUC, an implementation of the comment:381215 algorithm is being worked on in #1382.
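
As a minimal sketch, the decision step might look like this in Python (the servermap accessors `recoverable_seqnums`/`all_seqnums` are hypothetical stand-ins for what `ServerMap` would need to expose):

```
def plan_repair(sm):
    """Decide between in-place repair and publishing a new version.

    `sm` is assumed to expose the sequence numbers of recoverable versions
    and of all versions with any shares present (hypothetical API).
    """
    recoverable = sm.recoverable_seqnums()   # e.g. {5}
    if not recoverable:
        raise ValueError("no recoverable version; cannot repair")
    R = max(recoverable)                     # highest recoverable seqnum
    S = max(sm.all_seqnums())                # highest seqnum with any shares

    if R == S:
        # Any competing shares are of older versions: regenerate missing
        # shares of R without incrementing the sequence number.
        return ("repair-in-place", R)
    # Orphaned shares of a higher, unrecoverable version exist: publish
    # R's contents as S+1 so clients never prefer the orphaned shares.
    return ("publish-new-version", S + 1)
```

Note that this also covers davidsarah's counterexample: if version S is recoverable and S+1 exists but is unrecoverable, then R == S, the algorithm's S is S+1, and the repaired file is published as S+2.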


I think davidsarah's proposed algorithm is a good choice. A few comments:

  • if there are shares of a version Q < R, then S = R, not Q. This follows from the algorithm, but in a design doc perhaps it should be made more obvious: stray shares of a version less than the highest recoverable version are not a problem.
  • in the case where R is repaired, stray shares of a lower version should be removed.
  • in the case where S+1 is uploaded, shares of R, and actually shares of <= S, should be removed.
  • if R is recoverable and there are shares of S > R, then it's really not clear what should happen. One possibility is to wait for a while (days?), keep checking, and hope there are enough shares of S. But this is probably very unlikely, and it's unclear what ought to happen, so I would defer that nuance to later.

Replying to gdt:

> I think davidsarah's proposed algorithm is a good choice. A few comments:
>
> • if there are shares of a version Q < R, then S = R, not Q. This follows from the algorithm, but in a design doc perhaps it should be made more obvious: stray shares of a version less than the highest recoverable version are not a problem.
> • in the case where R is repaired, stray shares of a lower version should be removed.
> • in the case where S+1 is uploaded, shares of R, and actually shares of <= S, should be removed.

I agree. To make this more explicit:

  1. Find the highest recoverable sequence number, R. If there is no recoverable sequence number, abort.
  2. Find the highest sequence number for which there exist any shares, S.
  3. If R == S, repair version R using the non-incrementing algorithm from Brian's tahoe-dev comment above (comment:381215), and set Recovered := R. Otherwise, download version R, upload it to version S+1, and set Recovered := S+1.
  4. If the client's happiness threshold is met for shares of sequence number Recovered, remove all known shares with sequence numbers < Recovered.

(The reason to only do the removal when the Recovered version is happy is in case of a partition where different clients can see different subsets of servers. In that case, removing shares is a bad idea unless we know that the Recovered version has been stored reliably, not just recoverably. Also, we shouldn't remove shares from servers that weren't considered in steps 1 and 2, if they have connected in the meantime.)

> • if R is recoverable and there are shares of S > R, then it's really not clear what should happen. One possibility is to wait for a while (days?), keep checking, and hope there are enough shares of S. But this is probably very unlikely, and it's unclear what ought to happen, so I would defer that nuance to later.

The algorithm says to upload to version S+1 in that case. I think this is the right thing.
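
Extending the earlier `plan_repair` sketch to all four steps (every method on `sm` and `repairer` here is a hypothetical name, not the real Tahoe-LAFS API):

```
def repair_mutable(sm, repairer):
    recoverable = sm.recoverable_seqnums()
    if not recoverable:
        return None                      # step 1: nothing recoverable; abort

    R = max(recoverable)                 # step 1: highest recoverable seqnum
    S = max(sm.all_seqnums())            # step 2: highest seqnum with any shares

    if R == S:                           # step 3: no orphaned higher version
        repairer.repair_in_place(R)      # non-incrementing repair
        recovered = R
    else:
        contents = repairer.download(R)
        repairer.publish(contents, seqnum=S + 1)
        recovered = S + 1

    # Step 4: delete stale shares only once the recovered version meets the
    # happiness threshold, so a client on the wrong side of a partition
    # can't destroy the last copies of versions it happens not to see.
    if sm.happiness_threshold_met(recovered):
        for share in sm.shares_with_seqnum_below(recovered):
            repairer.delete_share(share)
    return recovered
```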

PRabahy commented 2013-02-18 15:52:21 +00:00

I believe that davidsarah's algorithm should help for most cases. Does it make sense to use a vector clock (http://en.wikipedia.org/wiki/Vector_clock) for even more robustness? I don't believe this should happen often (ever), but if there is a partial split-brain, where several nodes can connect to more than S storage servers but fewer than half the total storage servers in the grid, there could still be significant churn. (Did that last sentence make any sense?) I'm probably over-thinking this solution, so feel free to ignore me.


Regarding vector clocks: I wouldn't say overthinking, but I think that tahoe needs to pick a design subspace within the space of all network and user behaviors. To date, tahoe has been designed (judging by what's been done) for the case where all servers are almost always connected. I'm talking about a case where at any given time most servers are connected, and most servers are connected almost all the time. In that world, not being connected is still unusual and to be avoided, but when you have 20 servers, the odds that one is missing are still pretty high.

So I think this ticket should stay focused on the mostly-connected case. If you want to work on a far more distributed, less-available use case, good luck, but I think it's about 50x the work of fixing /tahoe-lafs/trac/issues/27153.


The vector clock algorithm is designed to ensure causal ordering (assuming cooperative peers) in a general peer-to-peer message passing system. It's inefficient and overcomplicated for this case, and the assumptions it relies on aren't met in any case.


Incidentally, the comment:19 algorithm (the refined four-step algorithm above) never makes things worse even in the case of a partition, because it only deletes shares of versions lower than a version that is stored happily, even when run on an arbitrary partition of the grid.

Reference: tahoe-lafs/trac#1209