writing of shares is fragile and "tahoe stop" is unnecessarily harsh #200

Open
opened 2007-11-01 17:58:53 +00:00 by zooko · 4 comments

As per comment:/tahoe-lafs/trac/issues/26125:8, the updating of share data is an incremental in-place process on disk, which means that if the node crashes while updating a share, the share will be corrupted. Also, there is currently no way to deliberately stop (or restart) the node without crashing it.

I'm inclined to measure the I/O cost of a more robust atomic update of shares, but I'll leave it up to Brian and assign this ticket to him.

zooko added the
c/unknown
p/major
t/enhancement
v/0.6.1
labels 2007-11-01 17:58:53 +00:00
zooko added this to the eventually milestone 2007-11-01 17:58:53 +00:00
warner was assigned by zooko 2007-11-01 17:58:53 +00:00
zooko added
c/code-storage
and removed
c/unknown
labels 2007-12-04 21:39:14 +00:00
Author

This isn't an integrity issue, because even if a share is corrupted due to this issue, that doesn't threaten the integrity of the file.

Note that there are in general two possible ways to reduce the problem of shares being corrupted during a shutdown or crash. One is to make the writing of shares be more robust, for example by writing out a complete new copy of the share to a new temporary location and then renaming it into place. This is the option that increases I/O costs as discussed in the initial comment. Another is to add a "graceful shutdown" option where the storage server gets a chance to finish (or abort) updating a share before its process is killed.

I'm currently opposed to the latter; I would be happier with the current fragile update than with a graceful-shutdown requirement.

I agree that "graceful shutdown" is not the right solution.

daira changed title from writing of shares is fragile and/or there is no graceful shutdown to writing of shares is fragile 2009-10-28 21:22:11 +00:00

Hrmph, I guess this is one of my hot buttons. Zooko and I have discussed the
"crash-only" approach before, and I think we're still circling around each
other's opinions. I currently feel that any approach that prefers fragility
is wrong. Intentionally killing the server with no warning whatsoever (i.e.
the SIGKILL that "tahoe stop" does), when it is perfectly reasonable to
provide some warning and tolerate a brief delay, amounts to intentionally
causing data loss and damaging shares for the sake of some sort of
ideological purity that I don't really understand.

Be nice to your server! Don't shoot it in the head just to prove that you
can. :-)

Yes, sometimes the server will die abruptly. But it will be manually
restarted far more frequently than that. Here's my list of
running-to-not-running transition scenarios, in roughly increasing order of
frequency:

  • kernel crash (some disk writes completed, in temporal order if you're lucky)
  • power loss (like kernel crash)
  • process crash / SIGSEGV (all disk writes completed)
  • kernel shutdown (process gets SIGINT, then SIGKILL, all disk writes
    completed and buffers flushed)
  • process shutdown (SIGINT, then SIGKILL: process can choose what to do, all
    disk writes completed)

The tradeoff is between:

  • performance in the good case
  • shutdown time in the "graceful shutdown" case
  • recovery time after something unexpected/rare happens
  • correctness: amount of corruption when something unexpected/rare happens
    (i.e. resistance to corruption: what is the probability that a share will
    survive intact?)
  • code complexity

A modern disk filesystem effectively writes a bunch of highly-correct
corruption-resistant but poor-performance data to disk (i.e. the journal),
then writes a best-effort performance-improving index to very specific places
(i.e. the inodes and dirnodes and free-block-tables and the rest). In the
good case, it uses the index and gets high performance. In the bad case (i.e.
the fsck that happens after it wakes up and learns that it didn't shut down
gracefully), it spends a lot of time on recovery but maximizes correctness by
using the journal. The shutdown time is pretty small but depends upon how
much buffered data is waiting to be written (it tends to be insignificant for
hard drives, but annoyingly long for removable USB drives).

A modern filesystem could achieve its correctness goals purely by using the
journal, with zero shutdown time (umount == poweroff), and would never spend
any time recovering anything, and would be completely "crash-only", but of
course the performance would be so horrible that nobody would ever use it.
Each open() or read() would involve a big fsck process, and it would probably
have to keep the entire directory structure in RAM.

So it's an engineering tradeoff. In Tahoe, we've got a layer of reliability
over and above the individual storage servers, which lets us deprioritize the
per-server correctness/corruption-resistance goal a little bit.

If correctness were infinitely important, we'd write out each new version of
a mutable share to a separate file, then do an fsync(), then perform an
atomic rename (except on platforms that are too stupid to provide such a
feature, of course), then do fsync() again, to maximize the period of time
when the disk contained a valid monotonically-increasing version of the
share.
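
A minimal sketch of that write-new-then-fsync-then-rename sequence might look like the following. It is purely illustrative (the helper name and layout are invented, not the actual Tahoe storage-server code) and assumes a POSIX filesystem where rename within a directory is atomic:

```python
import os

def write_share_atomically(share_path, new_bytes):
    """Hypothetical helper: write a complete new copy of the share next to
    the old one, fsync it, atomically rename it into place, then fsync the
    directory so the rename itself is durable."""
    tmp_path = share_path + ".tmp"
    with open(tmp_path, "wb") as f:
        f.write(new_bytes)
        f.flush()
        os.fsync(f.fileno())          # data is durable before the rename
    os.replace(tmp_path, share_path)  # atomic rename on POSIX
    dir_fd = os.open(os.path.dirname(share_path) or ".", os.O_RDONLY)
    try:
        os.fsync(dir_fd)              # make the rename itself durable too
    finally:
        os.close(dir_fd)
```

With this ordering, a crash can only leave either the old version or the new version on disk; the window of vulnerability is the rename itself, which the filesystem promises is atomic.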

If performance or code complexity were infinitely important, we'd modify the
share in-place with as few writes and syscalls as possible, and leave the
flushing up to the filesystem and kernel, to do at the most efficient time
possible.

If performance and correctness were top goals, but not code complexity, you
could imagine writing out a journal of mutable share updates, and somehow
replaying it on restart if we didn't see the "clean" bit that means we'd
finished doing all updates before shutdown.
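
For illustration only, a toy version of that journal-plus-"clean"-bit idea might look like this; every name here is invented and nothing resembling it exists in the codebase:

```python
import json
import os

class JournaledShare:
    """Toy write-ahead journal for in-place share edits (illustration only)."""

    def __init__(self, share_path):
        self.share_path = share_path
        self.journal_path = share_path + ".journal"
        self.clean_path = share_path + ".clean"

    def apply_edit(self, offset, data):
        # entering a dirty state: drop the "clean" marker first
        if os.path.exists(self.clean_path):
            os.remove(self.clean_path)
        # append the edit to the journal and force it to disk
        with open(self.journal_path, "a") as j:
            j.write(json.dumps({"offset": offset, "data": data.hex()}) + "\n")
            j.flush()
            os.fsync(j.fileno())
        # apply the edit in place (this is the part a crash can interrupt)
        with open(self.share_path, "r+b") as f:
            f.seek(offset)
            f.write(data)

    def finish(self):
        # all edits applied: truncate the journal and restore the clean marker
        open(self.journal_path, "w").close()
        open(self.clean_path, "w").close()

    def recover(self):
        # on startup: if the clean marker is missing, replay the whole journal
        if os.path.exists(self.clean_path) or not os.path.exists(self.journal_path):
            return
        with open(self.journal_path) as j:
            for line in j:
                try:
                    entry = json.loads(line)
                except ValueError:
                    break  # partially-written trailing record from a crash
                with open(self.share_path, "r+b") as f:
                    f.seek(entry["offset"])
                    f.write(bytes.fromhex(entry["data"]))
        self.finish()
```

This buys crash-only recovery at the cost of writing (and fsyncing) every edit twice, which is exactly the complexity/performance price described above.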

So anyways, those are my feelings in the abstract. As for the specifics, I
strongly feel that "tahoe stop" should be changed to send SIGINT and give the
process a few seconds to finish any mutable-file-modification operation it
was doing before sending it SIGKILL. (as far as I'm concerned, the only
reason to ever send SIGKILL is because you're impatient and don't want to
wait for it to clean up, possibly because you believe that the process has
hung or stopped making progress, and you can't or don't wish to look at the
logs to find out what's going on).
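
Concretely, a "tahoe stop" along those lines might look like the following hypothetical sketch (this is not what the current command does; the grace period and helper name are made up):

```python
import errno
import os
import signal
import time

def stop_node(pid, grace_seconds=5.0):
    """Ask the node to shut down cleanly, then kill it if it doesn't."""
    os.kill(pid, signal.SIGINT)        # polite: let it finish the current share update
    deadline = time.time() + grace_seconds
    while time.time() < deadline:
        try:
            os.kill(pid, 0)            # signal 0 just probes whether it is still alive
        except OSError as e:
            if e.errno == errno.ESRCH:
                return True            # it exited cleanly within the grace period
            raise
        time.sleep(0.1)
    os.kill(pid, signal.SIGKILL)       # impatience wins: hard kill
    return False
```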

I don't yet have an informed opinion about copy-before-write or
edit-in-place. As Zooko points out, it would be appropriate to measure the IO
costs of writing out a new copy of each share, and see how bad it looks.

Code notes:

  • the simplest way to implement copy-before-write would be to first copy the
    entire share, then apply in-place edits to the new copy, then atomically
    rename it into place (see the sketch after this list). We'd want to
    consider a recovery-like scan for abandoned editing files (i.e.
    `find storage/shares -name "*.tmp" | xargs rm`) at startup, to avoid
    unbounded accumulation of those tempfiles, except that such a scan would
    be expensive to perform and would rarely find anything.

  • another option is to make a backup copy of the entire share, apply
    in-place edits to the old version, then delete the backup (and establish
    a recovery procedure that looks for backup copies and uses them to replace
    the presumably-incompletely-edited original). This would be easier to
    implement if the backup copies were all placed in a single central
    directory, so the recovery process can scan for them quickly, perhaps in
    storage/shares/updates/$SI.
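
To make the first option concrete, here is a rough sketch; the helper names are invented and the directory layout is only loosely modeled on storage/shares:

```python
import glob
import os
import shutil

def update_share_copy_before_write(share_path, apply_edits):
    """Copy the share, apply the in-place edits to the copy, then atomically
    rename the copy over the original (hypothetical helper)."""
    tmp_path = share_path + ".tmp"
    shutil.copy2(share_path, tmp_path)    # full copy: this is the extra I/O cost
    with open(tmp_path, "r+b") as f:
        apply_edits(f)                    # the caller's in-place edits, now on the copy
    os.replace(tmp_path, share_path)      # atomic rename into place (POSIX)

def clean_abandoned_tempfiles(storage_root):
    """Startup scan for editing files abandoned by a crash; roughly the
    'find storage/shares -name "*.tmp" | xargs rm' mentioned above."""
    pattern = os.path.join(storage_root, "shares", "**", "*.tmp")
    for tmp in glob.glob(pattern, recursive=True):
        os.remove(tmp)
```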

However, my suspicion is that edit-in-place is the appropriate tradeoff,
because that will lead to simpler code (i.e. fewer bugs) and better
performance, while only making us vulnerable to share corruption during the
rare events that don't give the server time to finish its write() calls (i.e.
kernel crash, power loss, and SIGKILL). Similarly, I suspect that it is not
appropriate to call fsync(), because we lose performance everywhere but only
improve correctness in the kernel crash and power loss scenarios. (a graceful
kernel shutdown, or arbitrary process shutdown followed by enough time for
the kernel/filesystem to flush its buffers, would provide for all write()s to
be flushed even without a single fsync() call).

warner changed title from writing of shares is fragile to writing of shares is fragile and "tahoe stop" is unnecessarily harsh 2009-11-02 08:16:25 +00:00
Author

I'm sorry if this topic makes you feel unhappy. For what it is worth, I am satisfied with the current behavior: dumb writes, stupid shutdown, simple startup. :-) This scores highest on simplicity, highest on performance, and not so great on preserving mutable shares.

This seems okay to me, because I consider shares to be expendable -- files are what we care about, and those are preserved by verification and repair at the Tahoe-LAFS layer rather than by having high-quality storage at the storage layer. allmydata.com uses cheap commodity PC kit, such as a 2 TB hard drive for a mere $200. Enterprise storage people consider it completely irresponsible and wrong to use such kit for "enterprise" purposes. They buy "enterprise" SCSI drives from their big equipment provider (Sun, HP, IBM) with something like 300 GB capacity for something like $500. Then they add RAID-5 or RAID-6 or RAID-Z, redundant power supplies, yadda yadda yadda.

So anyway, allmydata.com buys these commodity PCs -- basically the same hardware you can buy retail at Fry's or Newegg -- which are quite inexpensive and suffer a correspondingly higher failure rate. In one memorable incident, one of these 1U servers from SuperMicro failed in such a way that all four of the commodity 1 TB hard drives in it were destroyed. That meant lots of mutable shares -- maybe something on the order of 10,000 -- were destroyed in an instant! But none of the allmydata.com customer files were harmed.

The hard shutdown behavior that is currently in Tahoe-LAFS would have to be exercised quite a lot while under high load before it would come close to destroying that many mutable shares. :-)

I would accept changing it to do robust writes such as the simple "write-new-then-relink-into-place". (My guess is that this will not cause a noticeable performance degradation.)

I would accept changing it to do traditional unixy two-phase graceful shutdown as you describe, with misgivings, as I think I've already made clear to you in personal conversation and in comment:/tahoe-lafs/trac/issues/26125:8.

To sum up my misgivings: 1. our handling of hard shutdown (e.g. power off, out of disk space, kernel crash) is not thereby improved, and 2. if we come to rely on "graceful shutdown" then our "robust startup" muscles atrophy.

Consider this: we currently have no automated tests of what happens when servers get shut down in the middle of their work. So we should worry that as the code evolves, someone could commit a patch which causes bad behavior in that case and we wouldn't notice.

However, we do know that every time anyone runs `tahoe stop` or `tahoe restart`, it exercises the hard shutdown case. The fact that allmydata.com has hundreds of servers with this behavior, and has had for years, gives me increased confidence that the current code doesn't do anything catastrophically wrong in this case.

If we improved `tahoe stop` to be a graceful shutdown instead of a hard shutdown, then of course the current version of Tahoe-LAFS would still be just as good as ever, but as time went on and the code evolved I would start worrying more and more about how Tahoe servers handle the hard shutdown case. Maybe this means we need automated tests of that case.
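
One hedged sketch of such a test, using a stand-in writer process rather than a real storage server (so every name and timing here is made up, and it assumes a POSIX platform with SIGKILL):

```python
import os
import signal
import subprocess
import sys
import tempfile
import time

# Stand-in "server": rewrites a file in place slowly, so the test can
# SIGKILL it mid-write and inspect what is left on disk afterwards.
WRITER = r"""
import sys, time
with open(sys.argv[1], "r+b") as f:
    for _ in range(100):
        f.write(b"N" * 1024)   # overwrite the old contents in place
        f.flush()
        time.sleep(0.05)
"""

def test_hard_kill_mid_update():
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"O" * 100 * 1024)         # the "old" share contents
        path = f.name
    proc = subprocess.Popen([sys.executable, "-c", WRITER, path])
    time.sleep(0.5)                         # let it get partway through
    proc.send_signal(signal.SIGKILL)        # the hard shutdown under test
    proc.wait()
    data = open(path, "rb").read()
    # The state the ticket worries about: the share is now part old,
    # part new -- a torn in-place update.
    assert b"N" in data and b"O" in data
    os.unlink(path)

if __name__ == "__main__":
    test_hard_kill_mid_update()
```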

Reference: tahoe-lafs/trac#200