KeyError in mutable read-modify-write #1670

Open
opened 2012-02-10 07:27:40 +00:00 by vikarti · 46 comments
vikarti commented 2012-02-10 07:27:40 +00:00
Owner

Mac OS X 10.7 gateway,

mutable.format is set to mdmf

Sometimes `tahoe backup` processes several files fine and then stops. When it stops, it is always at the same place, like this:

skipping "/Users/vikarti/Calibre Library/Zooko Wilcox-O'Hearn_ Brian Warner/Tahoe - The Least-Authority Filesystem (1347)/Tahoe - The Least-Authority Filesystem - Zooko Wilcox-O'Hearn_ Brian Warner.pdf"..
 re-using old directory for "/Users/vikarti/Calibre Library/Zooko Wilcox-O'Hearn_ Brian Warner/Tahoe - The Least-Authority Filesystem (1347)"
 re-using old directory for "/Users/vikarti/Calibre Library/Zooko Wilcox-O'Hearn_ Brian Warner"
 re-using old directory for '/Users/vikarti/Calibre Library'
Traceback (most recent call last):
  File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/support/bin/tahoe", line 9, in <module>
    load_entry_point('allmydata-tahoe==1.9.1', 'console_scripts', 'tahoe')()
  File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/runner.py", line 113, in run
    rc = runner(sys.argv[1:], install_node_control=install_node_control)
  File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/runner.py", line 99, in runner
    rc = cli.dispatch[command](so)
  File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/cli.py", line 569, in backup
    rc = tahoe_backup.backup(options)
  File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/tahoe_backup.py", line 325, in backup
    return bu.run()
  File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/tahoe_backup.py", line 123, in run
    put_child(archives_url, now, new_backup_dircap)
  File "/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/scripts/tahoe_backup.py", line 57, in put_child
    raise HTTPError("Error during put_child", resp)
allmydata.scripts.common_http.HTTPError: Error during put_child: 500 Internal Server Error
"Traceback (most recent call last):
  File \"/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/base.py\", line 793, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/support/lib/python2.7/site-packages/foolscap-0.6.3-py2.7.egg/foolscap/eventual.py\", line 26, in _turn
    cb(*args, **kwargs)\x0a  File \"/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py\", line 361, in callback
    self._startRunCallbacks(result)\x0a  File \"/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py\", line 455, in _startRunCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File \"/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py\", line 542, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/filenode.py\", line 855, in <lambda>\x0a    self._modify_once(modifier, first_time))\x0a  File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/filenode.py\", line 881, in _modify_once
    d = self._try_to_download_data()
  File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/filenode.py\", line 959, in _try_to_download_data
    d = self._read(c, fetch_privkey=True)
  File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/filenode.py\", line 980, in _read
    d = r.download(consumer, offset, size)
  File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/retrieve.py\", line 237, in download
    self._setup_download()
  File \"/Users/vikarti/tahoe-new/allmydata-tahoe-1.9.1/src/allmydata/mutable/retrieve.py\", line 277, in _setup_download
    shares = versionmap[self.verinfo]\x0aexceptions.KeyError: (5, '\\x11c\\x1c\\xf7\\xc9\\xae1e\\xa2\\xdaF\\xaf\\xb6S\\x03IH\\x84\\x11#\\xb5\\xf2TS\\xfb\\xde\\x08\\x81\\x01\\xf0\\xd3\\xec', None, 131073, 1080, 3, 10,
'\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x05\\x11c\\x1c\\xf7\\xc9\\xae1e\\xa2\\xdaF\\xaf\\xb6S\\x03IH\\x84\\x11#\\xb5\\xf2TS\\xfb\\xde\\x08\\x81\\x01\\xf0\\xd3\\xec\\x03\\n\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x048',
(('EOF', 2575), ('verification_key', 1729), ('enc_privkey', 123), ('share_data', 2167), ('signature', 1473), ('block_hash_tree', 2543), ('share_hash_chain', 1337), ('verification_key_end', 2021)))
"
tahoe-lafs added the
unknown
major
defect
1.9.1
labels 2012-02-10 07:27:40 +00:00
tahoe-lafs added this to the undecided milestone 2012-02-10 07:27:40 +00:00
vikarti commented 2012-02-10 07:40:50 +00:00
Author
Owner

Attachment incident-2012-02-10--07-10-36Z-yyfpk7a.flog.bz2 (42352 bytes) added

possibly related incident file 1

vikarti commented 2012-02-10 07:41:03 +00:00
Author
Owner

Attachment incident-2012-02-10--07-11-26Z-qwg5ivy.flog.bz2 (42111 bytes) added

possibly related incident file 2

vikarti commented 2012-02-10 07:41:19 +00:00
Author
Owner

Attachment incident-2012-02-10--07-14-46Z-fpevtli.flog.bz2 (42576 bytes) added

possibly related incident file 3

davidsarah commented 2012-02-11 00:36:58 +00:00
Author
Owner

reformat description

tahoe-lafs added
code-mutable
and removed
unknown
labels 2012-02-17 18:43:24 +00:00
tahoe-lafs changed title from tahoe backup sometimes fails to KeyError in mutable MDMF retrieve 2012-02-17 18:43:24 +00:00
davidsarah commented 2012-02-19 00:59:47 +00:00
Author
Owner

#1656 is a probable duplicate with a KeyError on the same line. Based on that, the bug doesn't seem to be specific to MDMF; vikarti, can you confirm that by trying to reproduce it with SDMF?

tahoe-lafs modified the milestone from undecided to soon 2012-02-19 00:59:47 +00:00
tahoe-lafs changed title from KeyError in mutable MDMF retrieve to KeyError in mutable retrieve 2012-02-19 00:59:47 +00:00
tahoe-lafs added
critical
and removed
major
labels 2012-02-19 01:52:39 +00:00
davidsarah commented 2012-02-19 02:28:29 +00:00
Author
Owner

Attachment 1656-tahoe-repair-incident-2012-02-18--22_59_20Z-fsyvi7q.flog.bz2 (41405 bytes) added

Incident file for 'tahoe deep-check --repair' from #1656

vikarti commented 2012-02-19 04:43:17 +00:00
Author
Owner

davidsarah, I can't deliberately reproduce this one with SDMF; it just either happens or not. If it does happen with SDMF, I will report it here.

vikarti commented 2012-02-19 06:32:13 +00:00
Author
Owner

davidsarah,
possibly related to this: an SDMF directory, with two `tahoe backup` sessions from different gateways writing to it; the second one got:

Unable to create target directory: 500 Internal Server Error
"Traceback (most recent call last):\x0a  File \"C:\\Tahoe\\allmydata-tahoe-1.9.1\\support\\Lib\\site-packages\\twisted-10.1.0-py2.6-win-amd64.egg\\twisted\\inte
rnet\\base.py\", line 796, in runUntilCurrent\x0a    call.func(*call.args, **call.kw)\x0a  File \"C:\\Tahoe\\allmydata-tahoe-1.9.1\\support\\Lib\\site-packages\
\foolscap-0.6.3-py2.6.egg\\foolscap\\eventual.py\", line 26, in _turn\x0a    cb(*args, **kwargs)\x0a  File \"C:\\Tahoe\\allmydata-tahoe-1.9.1\\support\\Lib\\sit
e-packages\\twisted-10.1.0-py2.6-win-amd64.egg\\twisted\\internet\\defer.py\", line 318, in callback\x0a    self._startRunCallbacks(result)\x0a  File \"C:\\Taho
e\\allmydata-tahoe-1.9.1\\support\\Lib\\site-packages\\twisted-10.1.0-py2.6-win-amd64.egg\\twisted\\internet\\defer.py\", line 424, in _startRunCallbacks\x0a
 self._runCallbacks()\x0a--- <exception caught here> ---\x0a  File \"C:\\Tahoe\\allmydata-tahoe-1.9.1\\support\\Lib\\site-packages\\twisted-10.1.0-py2.6-win-amd
64.egg\\twisted\\internet\\defer.py\", line 441, in _runCallbacks\x0a    self.result = callback(self.result, *args, **kw)\x0a  File \"c:\\tahoe\\allmydata-tahoe
-1.9.1\\src\\allmydata\\mutable\\filenode.py\", line 855, in <lambda>\x0a    self._modify_once(modifier, first_time))\x0a  File \"c:\\tahoe\\allmydata-tahoe-1.9
.1\\src\\allmydata\\mutable\\filenode.py\", line 881, in _modify_once\x0a    d = self._try_to_download_data()\x0a  File \"c:\\tahoe\\allmydata-tahoe-1.9.1\\src\
\allmydata\\mutable\\filenode.py\", line 959, in _try_to_download_data\x0a    d = self._read(c, fetch_privkey=True)\x0a  File \"c:\\tahoe\\allmydata-tahoe-1.9.1
\\src\\allmydata\\mutable\\filenode.py\", line 980, in _read\x0a    d = r.download(consumer, offset, size)\x0a  File \"c:\\tahoe\\allmydata-tahoe-1.9.1\\src\\al
lmydata\\mutable\\retrieve.py\", line 237, in download\x0a    self._setup_download()\x0a  File \"c:\\tahoe\\allmydata-tahoe-1.9.1\\src\\allmydata\\mutable\\retr
ieve.py\", line 277, in _setup_download\x0a    shares = versionmap[self.verinfo]\x0aexceptions.KeyError: (1, '@E\\xe0H\\x8a\\xbc\\xa6rQA\\xfa\\xab\\xfd&B\\xf6td
\"jv\\xae\\x96\\xccel\\xcf\\xc2\\x9f\\x8aG)', '\\xde\\xdeyKb\\xcc\\x8f-\\xed\\xea\\xa7\\xcc\\xe1a\\xfe\\x10', 0, 0, 4, 13, '\\x00\\x00\\x00\\x00\\x00\\x00\\x00\
\x00\\x01@E\\xe0H\\x8a\\xbc\\xa6rQA\\xfa\\xab\\xfd&B\\xf6td\"jv\\xae\\x96\\xccel\\xcf\\xc2\\x9f\\x8aG)\\xde\\xdeyKb\\xcc\\x8f-\\xed\\xea\\xa7\\xcc\\xe1a\\xfe\\x
10\\x04\\r\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', (('enc_privkey', 823), ('EOF', 2038), ('share_data', 823), ('signat
ure', 399), ('block_hash_tree', 791), ('share_hash_chain', 655)))\x0a"

I think this is possibly related to this ticket, so I am reporting it here.
There is also an incident file for this one (attaching it).

vikarti commented 2012-02-19 06:33:06 +00:00
Author
Owner

Attachment incident-2012-02-19--06-26-28Z-o6fb6qy.flog.bz2 (38106 bytes) added

possibly related incident file (see comment 7)

davidsarah commented 2012-02-19 15:10:05 +00:00
Author
Owner

vikarti: thanks, that confirms this problem is not specific to MDMF.

kevan commented 2012-02-20 00:55:11 +00:00
Author
Owner

It looks like the mutable filenode modification code in `MutableFileVersion` reacts to an `UncoordinatedWriteError` by performing a map update, then trying the operation again. That map update will update the `MutableFileVersion`'s internal servermap, but not its internal verinfo tuple. If the `UncoordinatedWriteError` is due to a concurrent update operation that updated all or most (enough for the version to be unrecoverable, anyway) of the shares associated with the `MutableFileVersion`'s version of the mutable file, then we would see the `KeyError` reported by vikarti and others. Someone (likely me, if no one gets around to it before next weekend) should look over the incident logs to see if they support or refute this theory.

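To make that theory concrete, here is a minimal, hypothetical Python sketch (not Tahoe-LAFS source; all class and variable names are invented) of a cached version identifier going stale across a servermap refresh, which produces the same kind of `KeyError` as `shares = versionmap[self.verinfo]` in `retrieve.py`:

```python
# Hypothetical illustration of the suspected failure mode; this is not
# Tahoe-LAFS source code, and the names below are invented stand-ins.
# "verinfo" plays the role of the (seqnum, root_hash, ...) tuple that
# mutable/retrieve.py uses as its versionmap key.

class StaleVersionSketch:
    def __init__(self, servermap, verinfo):
        self.servermap = servermap  # maps verinfo -> set of share numbers
        self.verinfo = verinfo      # cached when this version was selected

    def update_servermap(self, new_servermap):
        # Models the map update performed after an UncoordinatedWriteError:
        # the map is refreshed, but self.verinfo is NOT re-derived from it.
        self.servermap = new_servermap

    def setup_download(self):
        # Mirrors the failing line: a plain dict lookup keyed by the cached
        # (now possibly stale) verinfo.
        return self.servermap[self.verinfo]

# A concurrent writer replaced every share of version 1 with version 2.
node = StaleVersionSketch(servermap={("v1",): {0, 1, 2}}, verinfo=("v1",))
node.update_servermap({("v2",): {0, 1, 2}})   # the retry path refreshes the map
try:
    node.setup_download()
except KeyError:
    print("stale verinfo is missing from the refreshed map -> KeyError")
```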
davidsarah commented 2012-02-20 03:52:48 +00:00
Author
Owner

Attachment 1656-tahoe-put-incident-2012-02-18--22_00_17Z-7jusaxa.flog.bz2 (39376 bytes) added

Incident file for 'tahoe put' from #1656

zooko commented 2012-03-05 22:45:58 +00:00
Author
Owner

This is a regression in 1.8→1.9, isn't it? I propose we put it into the "1.9.2" Milestone.

tahoe-lafs modified the milestone from soon to 1.9.2 2012-03-05 22:45:58 +00:00
zooko commented 2012-04-12 01:18:11 +00:00
Author
Owner

huang jun provided some detailed debugging:

https://tahoe-lafs.org/pipermail/tahoe-dev/2012-April/007285.html

You have to read carefully to find where huang jun inserted comments and evidence from other log files. Look for the lines beginning with * or >.

davidsarah commented 2012-06-04 17:14:39 +00:00
Author
Owner

Kyle Markley reported a bug with a similar stacktrace:

As I went about gathering information to submit a ticket, I discovered something interesting. This probably has something to do with the health of the target directories. I discovered one 2/4 directory that works cleanly, and another 2/4 directory that gives an error (after successfully creating the link). I haven't created a ticket yet because I'm not sure what's supposed to happen here. :)

Working directory:

$ tahoe check --raw kyle:
{
 "results": {
  "needs-rebalancing": true,
  "count-shares-expected": 4,
  "healthy": false,
  "count-unrecoverable-versions": 0,
  "count-shares-needed": 2,
  "sharemap": {
   "seq52-2x5m-sh3": [
    "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx"
   ],
   "seq52-2x5m-sh2": [
    "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx"
   ],
   "seq52-2x5m-sh1": [
    "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx"
   ],
   "seq52-2x5m-sh0": [
    "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx"
   ],
   "seq49-cqp3-sh2": [
    "juwmgssmwnhrhfdcpxxmrz3bghh37esx"
   ],
   "seq49-cqp3-sh3": [
    "vjqcroalrgmft66mgiwfjug667fl6qjd"
   ]
  },
  "count-recoverable-versions": 2,
  "servers-responding": [
   "vjqcroalrgmft66mgiwfjug667fl6qjd",
   "juwmgssmwnhrhfdcpxxmrz3bghh37esx",
   "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx",
   "47cslusczp3uu2kygodi3nlalcruscif",
   "eqtprtidz5emkvzlqt27dylgocdf3f77"
  ],
  "count-good-share-hosts": 1,
  "count-wrong-shares": 2,
  "count-shares-good": 4,
  "count-corrupt-shares": 0,
  "list-corrupt-shares": [],
  "recoverable": true
 },
 "storage-index": "rsi6ge4hmbzhxplyqjzkmd254e",
 "summary": "Unhealthy: multiple versions are recoverable"
}

Gives an error:

$ tahoe check --raw share:
{
 "results": {
  "needs-rebalancing": true,
  "count-shares-expected": 4,
  "healthy": false,
  "count-unrecoverable-versions": 1,
  "count-shares-needed": 2,
  "sharemap": {
   "seq45-7bxx-sh3": [
    "juwmgssmwnhrhfdcpxxmrz3bghh37esx"
   ],
   "seq46-jiym-sh1": [
    "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx"
   ],
   "seq46-jiym-sh0": [
    "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx"
   ],
   "seq56-aiot-sh0": [
    "vjqcroalrgmft66mgiwfjug667fl6qjd"
   ],
   "seq56-aiot-sh1": [
    "vjqcroalrgmft66mgiwfjug667fl6qjd"
   ],
   "seq56-aiot-sh2": [
    "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx"
   ],
   "seq56-aiot-sh3": [
    "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx"
   ]
  },
  "count-recoverable-versions": 2,
  "servers-responding": [
   "vjqcroalrgmft66mgiwfjug667fl6qjd",
   "juwmgssmwnhrhfdcpxxmrz3bghh37esx",
   "xxaj2tgmnl7debjdpn4mgv2oks6pjjnx",
   "47cslusczp3uu2kygodi3nlalcruscif",
   "eqtprtidz5emkvzlqt27dylgocdf3f77"
  ],
  "count-good-share-hosts": 2,
  "count-wrong-shares": 3,
  "count-shares-good": 4,
  "count-corrupt-shares": 0,
  "list-corrupt-shares": [],
  "recoverable": true
 },
 "storage-index": "lb2mpyg4fnznnfebfayevcdpki",
 "summary": "Unhealthy: some versions are unrecoverable multiple versions are recoverable"
}

My error looks something like this. The exact message has changed; it used to say there was an UncoordinatedWriteError, but my experimentation seems to have changed things a bit, and right now I only see that error mentioned in the incident report.

$ tahoe ln foo: share:foo
Error: 500 Internal Server Error
"Traceback (most recent call last):\x0a  File \"/usr/local/lib/python2.7/site-packages/twisted/internet/base.py\", line 800, in runUntilCurrent\x0a    call.func(*call.args, **call.kw)\x0a  File \"/usr/local/lib/python2.7/site-packages/foolscap-0.6.3-py2.7.egg/foolscap/eventual.py\", line 26, in _turn\x0a    cb(*args, **kwargs)\x0a  File \"/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py\", line 368, in callback\x0a    self._startRunCallbacks(result)\x0a  File \"/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py\", line 464, in _startRunCallbacks\x0a    self._runCallbacks()\x0a--- <exception caught here> ---\x0a  File \"/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py\", line 551, in _runCallbacks\x0a    current.result = callback(current.result, *args, **kw)\x0a  File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/filenode.py\", line 855, in <lambda>\x0a    self._modify_once(modifier, first_time))\x0a  File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/filenode.py\", line 881, in _modify_once\x0a    d = self._try_to_download_data()\x0a  File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/filenode.py\", line 959, in _try_to_download_data\x0a    d = self._read(c, fetch_privkey=True)\x0a  File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/filenode.py\", line 980, in _read\x0a    d = r.download(consumer, offset, size)\x0a  File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/retrieve.py\", line 237, in download\x0a    self._setup_download()\x0a  File \"/usr/local/lib/python2.7/site-packages/allmydata/mutable/retrieve.py\", line 277, in _setup_download\x0a    shares = versionmap[self.verinfo]\x0aexceptions.KeyError: (57, '\\x16i\\xdb\\xa8\\xbc\\xd7\\xabrY\\xcdpv\\xa4I\\x82\\xfe\\xa5i\\xed\\x82;\\xca\\xe8\\xcaL\\xf7\\xdav\\xa9\\xf2O\\t', '\\x19f_S!&\\xb0\\xa1\\xeb\\x94\\x81F)\\xbb\\x89q', 336, 335, 2, 4, '\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x009\\x16i\\xdb\\xa8\\xbc\\xd7\\xabrY\\xcdpv\\xa4I\\x82\\xfe\\xa5i\\xed\\x82;\\xca\\xe8\\xcaL\\xf7\\xdav\\xa9\\xf2O\\t\\x19f_S!&\\xb0\\xa1\\xeb\\x94\\x81F)\\xbb\\x89q\\x02\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x01P\\x00\\x00\\x00\\x00\\x00\\x00\\x01O', (('enc_privkey', 923), ('EOF', 2138), ('share_data', 755), ('signature', 399), ('block_hash_tree', 723), ('share_hash_chain', 655)))\x0a"

From the incident report:

local#11372110 21:30:59.251: current goal: before update: , sh0 to [vjqcroal], sh0 to [xxaj2tgm], sh1 to [vjqcroal], sh1 to [xxaj2tgm], sh2 to [xxaj2tgm], sh3 to [juwmgssm], sh3 to [xxaj2tgm]
local#11372111 21:30:59.251: we are planning to push new seqnum=#58
local#11372112 21:30:59.252: Starting push
local#11372113 21:30:59.252: Pushing segment 1 of 1
local#11372114 21:30:59.275: storage: slot_writev lb2mpyg4fnznnfebfayevcdpki
local#11372115 21:30:59.277: storage: slot_writev lb2mpyg4fnznnfebfayevcdpki
local#11372116 21:30:59.281: _got_write_answer from xxaj2tgm, share 2
local#11372117 21:30:59.281: found the following surprise shares: set([0, 1])
local#11372118 21:30:59.281: they had shares [0, 1] that we didn't know about [INCIDENT-TRIGGER]
local#11372119 21:31:00.433: wrote successfully: adding new share to servermap
local#11372120 21:31:00.435: _got_write_answer from xxaj2tgm, share 3
local#11372121 21:31:00.435: found the following surprise shares: set([0, 1])
local#11372122 21:31:00.435: they had shares [0, 1] that we didn't know about
local#11372123 21:31:00.435: wrote successfully: adding new share to servermap
local#11372124 21:31:00.436: _got_write_answer from vjqcroal, share 0
local#11372125 21:31:00.436: found the following surprise shares: set([])
local#11372126 21:31:00.436: wrote successfully: adding new share to servermap
local#11372127 21:31:00.437: _got_write_answer from vjqcroal, share 1
local#11372128 21:31:00.437: found the following surprise shares: set([])
local#11372129 21:31:00.437: wrote successfully: adding new share to servermap
local#11372130 21:31:00.437: Publish failed with UncoordinatedWriteError

What's up with these surprise shares?
davidsarah commented 2012-06-04 17:25:12 +00:00
Author
Owner

Kevan, is Kyle's report consistent with comment:128910?

zooko commented 2012-06-14 15:32:24 +00:00
Author
Owner

Replying to kevan:

It looks like the mutable filenode modification code in MutableFileVersion reacts to an UncoordinatedWriteError by performing a map update, then trying the operation again.

Hm, that's interesting. Is this... intentional? Is this intended by someone? :-) I don't know right now whether I would intend that behavior.

A simpler behavior in response to UncoordinatedWriteError would be to abort the current operation and inform the user. That simpler behavior would presumably not have this bug in it, and maybe it would also avoid other potential problems caused by re-attempting an operation when there is evidence that someone else is uncoordinatedly changing the same file.

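For comparison, here is a minimal sketch of the simpler policy suggested above (on an `UncoordinatedWriteError`, abort and surface the problem instead of refreshing the servermap and retrying). The function names are invented for illustration; this is not a patch against `allmydata/mutable/filenode.py`.

```python
# Hypothetical sketch of "abort and inform the user" handling of UCWE.

class UncoordinatedWriteError(Exception):
    pass

def modify(read_current_version, modifier, publish):
    """Single-shot read-modify-write with no automatic retry."""
    try:
        old_contents = read_current_version()
        new_contents = modifier(old_contents)
        return publish(new_contents)
    except UncoordinatedWriteError as e:
        # Abort and inform the caller; re-running the operation (and the
        # stale-verinfo hazard that comes with retrying) is left to the user.
        raise RuntimeError(
            "another writer modified this mutable file during the "
            "operation; aborting without retry") from e
```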
zooko commented 2012-06-18 15:37:06 +00:00
Author
Owner

This looks like a similar error to #1772, so I'm investigating them both right now. Kevan, Brian, David-Sarah: if you are interested, please consider the question of whether we should simplify handling of UncoordinatedWriteError as suggested in comment:15. It seems to me like we currently handle UCWE complicatedly and buggily, and we should maybe change it to simpler and less buggy before attempting to change it to more complicated and featureful without being buggy. Just a thought. Anyway, I'll report what I learn about this...

zooko commented 2012-06-18 15:37:49 +00:00
Author
Owner

Sorry, I didn't mean #1772 (in comment:128917), I meant #1669.

davidsarah commented 2012-06-18 18:21:42 +00:00
Author
Owner

Replying to zooko:

please consider the question of whether we should simplify handling of UncoordinatedWriteError as suggested in comment:15.

Well, it looks like there is quite a lot of code to support the retrying of mutable operations, so I assumed that was certainly intentional (it was there before I joined the project, and I was mildly surprised since it seemed not entirely consistent with the "prime coordination directive"). Let's redesign it as part of designing two-phase commit, but not before 1.11. I really want to get 1.10 out with all the changes already on trunk, without making any more significant design changes.

zooko commented 2012-06-21 19:30:41 +00:00
Author
Owner

Fixed #1669 and it is different from this -- this isn't a duplicate.

By inspecting the bug reports and incident files I've confirmed what Kevan said, that the error happens during retry after a UCWE. One of the incident files shows the following:

02:26:28.641 [1920941]: current goal: before update: , sh0 to [fbil5app], sh1 to [d4f7ssed], sh2 to [lxur2oq3], sh3 to [rczho2ut], sh4 to [xi6zcwi2], sh5 to [zpm5ziqh], sh6 to [e5l3augd], sh7 to [yf6ahya4], sh8 to [7xdwosoe], sh9 to [t3zmoeba], sh10 to [lfc4wmmk], sh11 to [wasuxoux], sh12 to [iz3pbtdv]
    02:26:28.641 [1920942]: we are planning to push new seqnum=#2
    02:26:28.642 [1920943]: Starting push
    02:26:28.642 [1920944]: Pushing segment 1 of 1
    02:26:28.950 [1920945]: _got_write_answer from d4f7ssed, share 1
        02:26:28.950 [1920947]: WEIRD our testv failed, so the write did not happen [INCIDENT-TRIGGER]
        02:26:33.803 [1920948]: somebody modified the share on us: shnum=1: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:28.950 [1920946]: found the following surprise shares: set([])
    02:26:33.805 [1920949]: _got_write_answer from rczho2ut, share 3
        02:26:33.805 [1920951]: WEIRD our testv failed, so the write did not happen
        02:26:33.805 [1920952]: somebody modified the share on us: shnum=3: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.805 [1920950]: found the following surprise shares: set([])
    02:26:33.806 [1920953]: _got_write_answer from t3zmoeba, share 9
        02:26:33.806 [1920955]: WEIRD our testv failed, so the write did not happen
        02:26:33.806 [1920956]: somebody modified the share on us: shnum=9: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.806 [1920954]: found the following surprise shares: set([])
    02:26:33.812 [1920957]: _got_write_answer from fbil5app, share 0
        02:26:33.812 [1920959]: WEIRD our testv failed, so the write did not happen
        02:26:33.812 [1920960]: somebody modified the share on us: shnum=0: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.812 [1920958]: found the following surprise shares: set([])
    02:26:33.813 [1920961]: _got_write_answer from lfc4wmmk, share 10
        02:26:33.813 [1920963]: WEIRD our testv failed, so the write did not happen
        02:26:33.813 [1920964]: somebody modified the share on us: shnum=10: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.813 [1920962]: found the following surprise shares: set([])
    02:26:33.815 [1920965]: _got_write_answer from wasuxoux, share 11
        02:26:33.815 [1920967]: WEIRD our testv failed, so the write did not happen
        02:26:33.815 [1920968]: somebody modified the share on us: shnum=11: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.815 [1920966]: found the following surprise shares: set([])
    02:26:33.816 [1920969]: _got_write_answer from lxur2oq3, share 2
        02:26:33.816 [1920971]: WEIRD our testv failed, so the write did not happen
        02:26:33.818 [1920972]: somebody modified the share on us: shnum=2: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.816 [1920970]: found the following surprise shares: set([])
    02:26:33.819 [1920973]: _got_write_answer from 7xdwosoe, share 8
        02:26:33.819 [1920975]: WEIRD our testv failed, so the write did not happen
        02:26:33.819 [1920976]: somebody modified the share on us: shnum=8: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.819 [1920974]: found the following surprise shares: set([])
    02:26:33.819 [1920977]: _got_write_answer from iz3pbtdv, share 12
        02:26:33.819 [1920979]: WEIRD our testv failed, so the write did not happen
        02:26:33.821 [1920980]: somebody modified the share on us: shnum=12: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.819 [1920978]: found the following surprise shares: set([])
    02:26:33.822 [1920981]: _got_write_answer from e5l3augd, share 6
        02:26:33.822 [1920983]: WEIRD our testv failed, so the write did not happen
        02:26:33.822 [1920984]: somebody modified the share on us: shnum=6: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.822 [1920982]: found the following surprise shares: set([])
    02:26:33.882 [1920985]: _got_write_answer from zpm5ziqh, share 5
        02:26:33.882 [1920987]: WEIRD our testv failed, so the write did not happen
        02:26:33.882 [1920988]: somebody modified the share on us: shnum=5: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.882 [1920986]: found the following surprise shares: set([])
    02:26:33.884 [1920989]: _got_write_answer from xi6zcwi2, share 4
        02:26:33.884 [1920991]: WEIRD our testv failed, so the write did not happen
        02:26:33.884 [1920992]: somebody modified the share on us: shnum=4: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.884 [1920990]: found the following surprise shares: set([])
    02:26:33.885 [1920993]: _got_write_answer from yf6ahya4, share 7
        02:26:33.885 [1920995]: WEIRD our testv failed, so the write did not happen
        02:26:33.885 [1920996]: somebody modified the share on us: shnum=7: I thought they had #1:R=ibc6, but testv reported #2:R=�2e!J&t����ܘvҐ:%��l�i�tA2
    02:26:33.885 [1920994]: found the following surprise shares: set([])
    02:26:33.885 [1920997]: Publish failed with UncoordinatedWriteError

02:26:37.075 [1920998]: doing modify
02:26:37.075 [1920999]: SharemapUpdater(wmnt2): starting (MODE_CHECK)

    02:26:37.076 [1921000]: sending query to [iz3pbtdv], readsize=1000
    02:26:37.078 [1921001]: sending query to [rczho2ut], readsize=1000
    02:26:37.079 [1921002]: sending query to [lfc4wmmk], readsize=1000
    02:26:37.081 [1921003]: sending query to [yf6ahya4], readsize=1000
    02:26:37.082 [1921004]: sending query to [d4f7ssed], readsize=1000
    02:26:37.084 [1921005]: sending query to [t3zmoeba], readsize=1000
    02:26:37.085 [1921006]: sending query to [fbil5app], readsize=1000
    02:26:37.085 [1921007]: sending query to [xi6zcwi2], readsize=1000
    02:26:37.086 [1921008]: sending query to [wasuxoux], readsize=1000
    02:26:37.088 [1921009]: sending query to [7xdwosoe], readsize=1000
    02:26:37.088 [1921010]: sending query to [zpm5ziqh], readsize=1000
    02:26:37.089 [1921011]: sending query to [lxur2oq3], readsize=1000
    02:26:37.098 [1921012]: sending query to [e5l3augd], readsize=1000
    02:26:37.131 [1921013]: got result from [d4f7ssed], 1 shares
        02:26:37.131 [1921014]: _got_results done
        02:26:37.134 [1921019]: _got_results: got shnum #1 from peerid d4f7ssed
        02:26:37.134 [1921020]: found valid version 2-tmzg from d4f7ssed-sh1: 4-13/352/349
    02:26:37.131 [1921015]: got result from [rczho2ut], 1 shares
        02:26:37.132 [1921016]: _got_results done
        02:26:37.137 [1921025]: _got_results: got shnum #3 from peerid rczho2ut
        02:26:37.137 [1921026]: found valid version 2-tmzg from rczho2ut-sh3: 4-13/352/349
    02:26:37.134 [1921017]: got result from [t3zmoeba], 1 shares
        02:26:37.134 [1921018]: _got_results done
        02:26:37.138 [1921031]: _got_results: got shnum #9 from peerid t3zmoeba
        02:26:37.138 [1921032]: found valid version 2-tmzg from t3zmoeba-sh9: 4-13/352/349
    02:26:37.135 [1921021]: _check_for_done, mode is 'MODE_CHECK', 12 queries outstanding, 0 extra peers available, 12 'must query' peers left, need_privkey=False
        02:26:37.135 [1921022]: 12 'must query' peers left
    02:26:37.135 [1921023]: _check_for_done, mode is 'MODE_CHECK', 12 queries outstanding, 0 extra peers available, 12 'must query' peers left, need_privkey=False
        02:26:37.135 [1921024]: 12 'must query' peers left
    02:26:37.137 [1921027]: _check_for_done, mode is 'MODE_CHECK', 11 queries outstanding, 0 extra peers available, 11 'must query' peers left, need_privkey=False
        02:26:37.137 [1921028]: 11 'must query' peers left
    02:26:37.137 [1921029]: _check_for_done, mode is 'MODE_CHECK', 11 queries outstanding, 0 extra peers available, 11 'must query' peers left, need_privkey=False
        02:26:37.137 [1921030]: 11 'must query' peers left
    02:26:37.138 [1921033]: _check_for_done, mode is 'MODE_CHECK', 10 queries outstanding, 0 extra peers available, 10 'must query' peers left, need_privkey=False
        02:26:37.138 [1921034]: 10 'must query' peers left
    02:26:37.138 [1921035]: _check_for_done, mode is 'MODE_CHECK', 10 queries outstanding, 0 extra peers available, 10 'must query' peers left, need_privkey=False
        02:26:37.138 [1921036]: 10 'must query' peers left
    02:26:37.269 [1921037]: got result from [7xdwosoe], 1 shares
        02:26:37.270 [1921038]: _got_results done
        02:26:37.272 [1921039]: _got_results: got shnum #8 from peerid 7xdwosoe
        02:26:37.273 [1921040]: found valid version 2-tmzg from 7xdwosoe-sh8: 4-13/352/349
    02:26:37.273 [1921041]: _check_for_done, mode is 'MODE_CHECK', 9 queries outstanding, 0 extra peers available, 9 'must query' peers left, need_privkey=False
        02:26:37.273 [1921042]: 9 'must query' peers left
    02:26:37.273 [1921043]: _check_for_done, mode is 'MODE_CHECK', 9 queries outstanding, 0 extra peers available, 9 'must query' peers left, need_privkey=False
        02:26:37.273 [1921044]: 9 'must query' peers left
    02:26:37.319 [1921045]: got result from [lfc4wmmk], 1 shares
        02:26:37.319 [1921046]: _got_results done
        02:26:37.321 [1921047]: _got_results: got shnum #10 from peerid lfc4wmmk

And the user then reported the following:

  File "c:\tahoe\allmydata-tahoe-1.9.1\src\allmydata\mutable\filenode.py", line 855, in <lambda>
    self._modify_once(modifier, first_time))
  File "c:\tahoe\allmydata-tahoe-1.9.1\src\allmydata\mutable\filenode.py", line 881, in _modify_once
    d = self._try_to_download_data()
  File "c:\tahoe\allmydata-tahoe-1.9.1\srcllmydata\mutable\filenode.py", line 959, in _try_to_download_data
    d = self._read(c, fetch_privkey=True)
  File "c:\tahoe\allmydata-tahoe-1.9.1\src\allmydata\mutable\filenode.py", line 980, in _read
    d = r.download(consumer, offset, size)
  File "c:\tahoe\allmydata-tahoe-1.9.1\src\allmydata\mutable\retrieve.py", line 237, in download
    self._setup_download()
  File "c:\tahoe\allmydata-tahoe-1.9.1\src\allmydata\mutable\retrieve.py", line 277, in _setup_download
    shares = versionmap[self.verinfo]
exceptions.KeyError: (1, '@E\xe0H\x8a\xbc\xa6rQA\xfa\xab\xfd&B\xf6td"jv\xae\x96\xccel\xcf\xc2\x9f\x8aG)', '\xde\xdeyKb\xcc\x8f-\xed\xea\xa7\xcc\xe1a\xfe\x10', 0, 0, 4, 13, '\x00\x00\x00\x00\x00\x00\x00\x01@E\xe0H\x8a\xbc\xa6rQA\xfa\xab\xfd&B\xf6td"jv\xae\x96\xccel\xcf\xc2\x9f\x8aG)\xde\xdeyKb\xcc\x8f-\xed\xea\xa7\xcc\xe1a\xfe\x10\x04\r\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', (('enc_privkey', 823), ('EOF', 2038), ('share_data', 823), ('signature', 399), ('block_hash_tree', 791), ('share_hash_chain', 655)))
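One possible mitigation consistent with this diagnosis, offered only as a hedged sketch with invented names and not as the project's actual fix, would be to turn the bare dict lookup into a check that reports the disappearance of the cached version as an uncoordinated-write condition rather than a raw `KeyError`:

```python
# Hedged sketch of a defensive guard around the failing lookup; the real
# Retrieve._setup_download() and ServerMap objects have richer interfaces.

class UncoordinatedWriteError(Exception):
    pass

def setup_download(versionmap, verinfo):
    """Look up the shares for `verinfo`, failing with a meaningful error
    if that version has vanished from a freshly updated servermap."""
    try:
        return versionmap[verinfo]
    except KeyError:
        # The cached version is gone, which is strong evidence that a
        # concurrent writer superseded it between the map update and now.
        raise UncoordinatedWriteError(
            "version %r no longer appears in the servermap" % (verinfo,))
```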
zooko commented 2012-06-24 03:25:41 +00:00
Author
Owner

Hm, actually this may be the same underlying problem as in #1669. In #1669, we found that:

• [SDMFSlotWriteProxy.get_verinfo]source:git/src/allmydata/mutable/layout.py?annotate=blame&rev=87ca4fc7055faaea7e54cbab4584132b021e42e1#L433 and [MDMFSlotReadProxy.get_verinfo]source:git/src/allmydata/mutable/layout.py?annotate=blame&rev=87ca4fc7055faaea7e54cbab4584132b021e42e1#L1692 return the same shape of tuple, but [MDMFSlotWriteProxy.get_verinfo]source:git/src/allmydata/mutable/layout.py?annotate=blame&rev=87ca4fc7055faaea7e54cbab4584132b021e42e1#L1102 returns a different shape.

So, could this ticket (#1670) be caused by MDMFSlotWriteProxy.get_verinfo returning a differently-shaped verinfo that gets used as a key in the Retrieve, so that later, when MDMFSlotReadProxy.get_verinfo returns what ought to be the same tuple (but isn't, due to the different shape), the Retrieve's lookup in the versionmap raises a KeyError?
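If that is what is happening, the effect is easy to reproduce in plain Python: two tuples that describe the same version but differ in shape are different dict keys. The following sketch is purely illustrative (the field values are invented stand-ins, not real verinfo contents):

```python
# Illustrative only -- these tuples are invented stand-ins, not real verinfo values.
writer_verinfo = (1, "roothash", "salt", 0, 0, 4, 13, "prefix")                # one shape
reader_verinfo = (1, "roothash", "salt", 0, 0, 4, 13, "prefix", ("offsets",))  # a different shape

versionmap = {writer_verinfo: ["sh1", "sh3"]}  # map keyed with the writer-side shape

try:
    shares = versionmap[reader_verinfo]  # "same" version, differently-shaped tuple
except KeyError:
    print("KeyError: the shapes differ, so the dict lookup misses")
```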

tahoe-lafs added 1.9.0-s3branch and removed 1.9.1 labels 2012-06-24 03:25:41 +00:00
zooko commented 2012-06-24 03:39:09 +00:00
Author
Owner

Yes, I've looked at how the auto-retry functionality shown in the stack trace (comment:128920) works, and it looks like the earlier attempt to publish would indeed update the ServerMap object's self.servermap dict with a verinfo returned by MDMFSlotWriteProxy.get_verinfo. So I'm increasingly confident that the fix to #1669 also fixed this bug. The next step is to write a unit test that exercises the auto-retry functionality with MDMF, which should show the bug present in 1.9.1 and absent in 1.9.2a1.
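A rough outline of such a test follows; every helper name in it is an invented placeholder (Tahoe's real test utilities and the real modifier signature differ), so treat it as a sketch of the scenario rather than working test code:

```python
# Hypothetical outline only; the helper names below are invented placeholders.
def test_mdmf_auto_retry_after_collision(self):
    d = self.create_mutable_file(contents=b"v1", version="mdmf")    # invented helper

    def _collide_then_modify(node):
        # Simulate a write from "another gateway" so the first publish fails
        # with UncoordinatedWriteError and the auto-retry path runs.
        self.simulate_uncoordinated_write(node)                      # invented helper
        return node.modify(self.appending_modifier)                  # modifier details elided

    d.addCallback(_collide_then_modify)
    # With the bug present, the retry's Retrieve raises KeyError in
    # _setup_download; with the fix, the modify should complete.
    return d
```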

(Also, by the way, I remain pretty interested in the idea of removing the auto-retry functionality completely.)

davidsarah commented 2012-06-24 21:47:34 +00:00
Author
Owner

Did we fix the binary SIs (if that's what they are) in the log, BTW?

tahoe-lafs added 1.9.1 and removed 1.9.0-s3branch labels 2012-06-24 21:47:34 +00:00
zooko commented 2012-06-25 00:57:02 +00:00
Author
Owner

I haven't committed that patch yet. I wasn't planning to do so for 1.9.2. Shall I?

davidsarah commented 2012-06-25 15:07:43 +00:00
Author
Owner

Replying to zooko:

I haven't committed that patch yet. I wasn't planning to do so for 1.9.2. Shall I?

Please attach the patch here so I can decide whether to commit it for 1.9.2. It should be low-risk, I think.

davidsarah commented 2012-07-02 16:24:28 +00:00
Author
Owner

I think this bug can only happen for a read-modify-write.

tahoe-lafs changed title from KeyError in mutable retrieve to KeyError in mutable read-modify-write 2012-07-02 16:24:28 +00:00
tahoe-lafs modified the milestone from 1.9.2 to 1.10.0 2012-07-03 20:28:43 +00:00
tahoe-lafs changed title from KeyError in mutable read-modify-write to add tests for KeyError in mutable read-modify-write 2012-09-04 16:36:32 +00:00
zooko commented 2012-09-04 16:39:03 +00:00
Author
Owner

Will look at adding a test.

davidsarah commented 2012-09-04 23:33:18 +00:00
Author
Owner

Replying to davidsarah:

Did we fix the binary SIs (if that's what they are) in the log, BTW?

Split to #1800.

tahoe-lafs added normal and removed critical labels 2012-11-13 23:28:41 +00:00
tahoe-lafs modified the milestone from 1.10.0 to 1.11.0 2012-12-20 16:33:25 +00:00
zooko commented 2013-01-14 08:35:30 +00:00
Author
Owner

News flash! joepie91 from IRC reported a bug which looks exactly like this one. However, he is using Tahoe-LAFS v1.9.2, with the patch from #1669 (changeset:5524/1.9.2) in place! This means that, contrary to what I thought, that patch did not fix this issue — #1670.

zooko commented 2013-01-14 09:39:50 +00:00
Author
Owner

Here is a stack trace from joepie91:

ata/mutable/filenode.py", line 885, in _modify_once
    d = self._try_to_download_data()
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 963, in _try_to_download_data
    d = self._read(c, fetch_privkey=True)
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 985, in _read
    d = r.download(consumer, offset, size)
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/retrieve.py", line 244, in download
    self._setup_download()
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/retrieve.py", line 284, in _setup_download
    shares = versionmap[self.verinfo]
exceptions.KeyError: (42, "@'<F\x98\xbdY\xdf\x98\xc5\xa6\x885\xd7U\xe2\xb3U\x83\xdcn\xfa\x0fW2\xe7\xbc\xc1\\%r\xbe", '4\xeb\xc5- \x19\x8b)\xe1.@-z|\x1c\xb2', 668, 666, 4, 8, "\x00\x00\x00\x00\x00\x00\x00\x00*@'<F\x98\xbdY\xdf\x98\xc5\xa6\x885\xd7U\xe2\xb3U\x83\xdcn\xfa\x0fW2\xe7\xbc\xc1\\%r\xbe4\xeb\xc5- \x19\x8b)\xe1.@-z|\x1c\xb2\x04\x08\x00\x00\x00\x00\x00\x00\x02\x9c\x00\x00\x00\x00\x00\x00\x02\x9a", (('enc_privkey', 956), ('EOF', 2172), ('share_data', 789), ('signature', 399), ('block_hash_tree', 757), ('share_hash_chain', 655)))
zooko commented 2013-01-14 09:41:31 +00:00
Author
Owner

Attaching an incident report file from joepie91 that I manually filtered to remove potentially sensitive information.

zooko commented 2013-01-14 09:42:33 +00:00
Author
Owner

Attachment incident-2013-01-14--07-54-09Z-2ymyxfi.flog.bz2.dump.txt.manually-filtered.txt (286008 bytes) added

zooko commented 2013-01-14 09:54:12 +00:00
Author
Owner

In incident-2013-01-14--07-54-09Z-2ymyxfi.flog.bz2.dump.txt.manually-filtered.txt, you can see that the KeyError was preceded by a mysteriously changed version number, probably due to a different gateway modifying the directory at the same time as this gateway was doing so:

local#197564 2013-01-14_05:30:14.753657Z:  found valid version 42-iatt from qenkbcot-sh3: 4-8/668/666

local#197629 2013-01-14_05:30:18.187664Z: we are planning to push new seqnum=#43

local#197655 2013-01-14_05:31:15.323720Z: our testv failed, so the write did not happen
local#197656 2013-01-14_05:31:17.737595Z: somebody modified the share on us: shnum=3: I thought they had #42:R=iatt, but testv reported #44:R=ÙUQ$žÌJè)C⑍vDíŽ[·²±ºŒNE#˜ÖG\Ö
local#197657 2013-01-14_05:31:17.737826Z: Publish failed with UncoordinatedWriteError

So then it does the automatic-merge-and-retry thing (which I would still like to remove, for simplicity):

local#197658 2013-01-14_05:31:21.260199Z: doing modify
local#197659 2013-01-14_05:31:21.260870Z: SharemapUpdater(i2sen): starting (MODE_CHECK)

local#197745 2013-01-14_05:31:22.504839Z: got result from [qenkbcot], 1 shares
local#197748 2013-01-14_05:31:22.509699Z:  found valid version 44-a3mv from qenkbcot-sh3: 4-8/668/665

But then it somehow uses a cached verinfo which still has the old "42" in it, and gets the KeyError (a toy sketch of this lookup miss follows the traceback):

local#197778 2013-01-14_05:31:27.311038Z: all queries are retired, no extra servers: done
local#197779 2013-01-14_05:31:27.311780Z: servermap: 8*seq44-a3mv
local#197782 2013-01-14_05:31:27.312745Z: Retrieve(i2sen): starting
local#197783 2013-01-14_05:31:27.312889Z: got seqnum 42
local#197784 2013-01-14_05:31:27.585402Z: Unhandled Error
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/twisted/internet/base.py", line 800, in runUntilCurrent
    call.func(*call.args, **call.kw)
File "/home/occupy/allmydata-tahoe-1.9.2/support/lib/python2.7/site-packages/foolscap-0.6.4-py2.7.egg/foolscap/eventual.py", line 26, in _turn
    cb(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 368, in callback
    self._startRunCallbacks(result)
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 464, in _startRunCallbacks
    self._runCallbacks()
	--- <exception caught here> ---
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 551, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 859, in <lambda>
    self._modify_once(modifier, first_time))
File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 885, in _modify_once
    d = self._try_to_download_data()
File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 963, in _try_to_download_data
    d = self._read(c, fetch_privkey=True)
File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/filenode.py", line 985, in _read
    d = r.download(consumer, offset, size)
File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/retrieve.py", line 244, in download
    self._setup_download()
  File "/home/occupy/allmydata-tahoe-1.9.2/src/allmydata/mutable/retrieve.py", line 284, in _setup_download
    shares = versionmap[self.verinfo]
exceptions.KeyError: (42, "@'<F\x98\xbdY\xdf\x98\xc5\xa6\x885\xd7U\xe2\xb3U\x83\xdcn\xfa\x0fW2\xe7\xbc\xc1\\%r\xbe", '4\xeb\xc5- \x19\x8b)\xe1.@-z|\x1c\xb2', 668, 666, 4, 8, "\x00\x00\x00\x00\x00\x00\x00\x00*@'<F\x98\xbdY\xdf\x98\xc5\xa6\x885\xd7U\xe2\xb3U\x83\xdcn\xfa\x0fW2\xe7\xbc\xc1\\%r\xbe4\xeb\xc5- \x19\x8b)\xe1.@-z|\x1c\xb2\x04\x08\x00\x00\x00\x00\x00\x00\x02\x9c\x00\x00\x00\x00\x00\x00\x02\x9a", (('enc_privkey', 956), ('EOF', 2172), ('share_data', 789), ('signature', 399), ('block_hash_tree', 757), ('share_hash_chain', 655)))
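The lookup miss itself can be modeled in a couple of lines of plain Python; the values below are invented stand-ins for the seq42/seq44 versions in the log above, not actual Tahoe-LAFS data structures:

```python
# Toy model of the suspected failure; names and values are invented stand-ins.
cached_verinfo = (42, "R=iatt")                        # verinfo captured before the collision
rebuilt_versionmap = {(44, "R=a3mv"): {"sh3", "sh8"}}  # after MODE_CHECK, only seq44 is known

try:
    shares = rebuilt_versionmap[cached_verinfo]  # the retry still asks for seq42
except KeyError:
    print("KeyError: the cached verinfo names a version the rebuilt map no longer contains")
```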
tahoe-lafs changed title from add tests for KeyError in mutable read-modify-write to KeyError in mutable read-modify-write 2013-01-14 09:54:38 +00:00
davidsarah commented 2013-01-15 03:26:06 +00:00
Author
Owner

Replying to zooko:

News flash! joepie91 from IRC reported a bug which looks exactly like this one. However, he is using Tahoe-LAFS v1.9.2, with the patch from #1669 (changeset:5524/1.9.2) in place! This means that, contrary to what I thought, that patch did not fix this issue — #1670.

This is why tests are important. I'm always rather skeptical that something has been fixed if there is no test.

daira commented 2013-06-17 23:22:56 +00:00
Author
Owner

AF saw this bug -- it happened nondeterministically:

time tahoe cp /tmp/zeros-100m tahoe:zeros3
Error during PUT: 500 Internal Server Error
"Traceback (most recent call last):
  File \"/root/allmydata-tahoe-1.10.0/support/lib/python2.7/site-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/internet/base.py\", line 824, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File \"/root/allmydata-tahoe-1.10.0/support/lib/python2.7/site-packages/foolscap-0.6.4-py2.7.egg/foolscap/eventual.py\", line 26, in _turn
    cb(*args, **kwargs)
  File \"/root/allmydata-tahoe-1.10.0/support/lib/python2.7/site-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py\", line 380, in callback
    self._startRunCallbacks(result)
  File \"/root/allmydata-tahoe-1.10.0/support/lib/python2.7/site-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py\", line 488, in _startRunCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File \"/root/allmydata-tahoe-1.10.0/support/lib/python2.7/site-packages/Twisted-13.0.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py\", line 575, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/filenode.py\", line 854, in <lambda>
    self._modify_once(modifier, first_time))
  File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/filenode.py\", line 880, in _modify_once
    d = self._try_to_download_data()
  File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/filenode.py\", line 958, in _try_to_download_data
    d = self._read(c, fetch_privkey=True)
  File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/filenode.py\", line 980, in _read
    d = r.download(consumer, offset, size)
  File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/retrieve.py\", line 244, in download
    self._setup_download()
  File \"/root/allmydata-tahoe-1.10.0/src/allmydata/mutable/retrieve.py\", line 284, in _setup_download
    shares = versionmap[self.verinfo]
exceptions.KeyError: (9, '\\x9b\\xb9\\xaa\\xa5\\xfa\\x88\\x9bX\\x03\\x99HK\\x06\\xed\\xa5V\\x12\\xd6f\\x97l^\\x9f\\x06\\x98\\xac\\xe7\\xb4\\xcdn\\x82\\xa8', 'V\\x97\\x15\\\\\\xf8[7\\rZ\\x9bZ\\xf1\\x93\\xe2\\xdb\\xc2', 2145, 2144, 3, 10, '\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\t\\x9b\\xb9\\xaa\\xa5\\xfa\\x88\\x9bX\\x03\\x99HK\\x06\\xed\\xa5V\\x12\\xd6f\\x97l^\\x9f\\x06\\x98\\xac\\xe7\\xb4\\xcdn\\x82\\xa8V\\x97\\x15\\\\\\xf8[7\\rZ\\x9bZ\\xf1\\x93\\xe2\\xdb\\xc2\\x03\\n\\x00\\x00\\x00\\x00\\x00\\x00\\x08a\\x00\\x00\\x00\\x00\\x00\\x00\\x08\`', (('enc_privkey', 1538), ('EOF', 2753), ('share_data', 823), ('signature', 399), ('block_hash_tree', 791), ('share_hash_chain', 655)))
"

(Presumably it is happening on the update of the tahoe:zeros3 directory.)

Notice that this is using Tahoe-LAFS v1.10.

zooko commented 2013-07-05 16:58:22 +00:00
Author
Owner

Ed Kapitein posted [a bug report](//pipermail/tahoe-dev/2013-July/008487.html) to the tahoe-dev list that matches this stack trace.

zooko commented 2013-07-05 17:00:58 +00:00
Author
Owner

Upgrading "Priority" from "normal" to "major", because this seems like a bad bug. It also apparently led to data loss in Ed Kapitein's case, so I'm adding the preservation keyword.

Upgrading "Priority" from "normal" to "major", because this seems like a bad bug. It also apparently led to data loss in Ed Kapitein's case, so I'm adding the `preservation` keyword.
tahoe-lafs added major and removed normal labels 2013-07-05 17:00:58 +00:00
daira commented 2013-07-08 00:36:59 +00:00
Author
Owner

Replying to kevan:

It looks like the mutable filenode modification code in MutableFileVersion reacts to an UncoordinatedWriteError by performing a map update, then trying the operation again. That map update will update the MutableFileVersion's internal servermap, but not its internal verinfo tuple. If the UncoordinatedWriteError is due to a concurrent update operation that updated all or most (enough for the version to be unrecoverable, anyway) of the shares associated with the MutableFileVersion's version of the mutable file, then we would see the KeyError reported by vikarti and others.

So, isn't the correct fix for this, without making any design changes, just for MutableFileVersion to make sure that its verinfo tuple is also updated when it does a mapupdate in this case?
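Sketching that suggestion in code (all method names here are invented placeholders; the real MutableFileVersion API differs), the retry path would refresh the cached verinfo whenever it refreshes the servermap:

```python
# Hypothetical sketch of the suggested fix, not actual Tahoe-LAFS code.
def _retry_after_uncoordinated_write(self):
    # The retry path already rebuilds the servermap ...
    new_servermap = self._update_servermap(mode="MODE_CHECK")        # invented call

    # ... the suggested fix is to also refresh the cached verinfo from the
    # rebuilt map, instead of keeping the tuple captured before the collision.
    self._verinfo = new_servermap.best_recoverable_version()         # invented call

    # The subsequent download then keys its versionmap lookup with a verinfo
    # that the freshly built map actually contains.
    return self._try_to_download_data()
```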

tahoe-lafs modified the milestone from soon to 1.11.0 2013-08-28 15:36:41 +00:00
tahoe-lafs added critical and removed major labels 2013-10-31 15:50:46 +00:00
zooko commented 2013-11-14 23:08:14 +00:00
Author
Owner

We investigated this during the Tahoe-LAFS Summit. It seems likely that directory-modification operations have been accidentally using the original version of the directory, even when there was a write-collision and a newer version of the directory was discovered. I didn't 100% confirm this, but I suspect that means that in those (rare‽) cases where there are write-collisions on a directory, the retrying code would blow away the other person's write, by re-applying the earlier version (plus the current modification). This would be a data loss bug and very much not the kind of thing we tolerate. ☹

Now, there are no bug reports that I am aware of that could point to this having caused a directory modification to get thrown out in practice. There are several reports of this bug leading to an internal KeyError, but as far as I recall, nobody reported that a change they had made to their directory was subsequently discovered to have been lost.

Nonetheless, the possibility of this happening seems to be present, from code inspection.
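A toy illustration of that hazard, using plain dicts as stand-ins for directory contents (invented data, not the real dirnode code):

```python
# Toy lost-update illustration; plain dicts stand in for directory contents.
v42 = {"fileA": "capA"}                      # the version this gateway read
v44 = {"fileA": "capA", "fileB": "capB"}     # the other gateway's write added fileB

# Buggy retry: re-applies our change on top of the *old* contents, not v44.
our_change = {"fileC": "capC"}
v45 = dict(v42, **our_change)

print("fileB" in v45)  # False -- the other writer's entry has been blown away
```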

zooko commented 2013-11-14 23:11:12 +00:00
Author
Owner

We agreed at the Tahoe-LAFS Summit to remove the "retry" feature instead of fixing it, for v1.11.0. I have a branch which does this: https://github.com/zooko/tahoe-lafs/commits/1670-KeyError-in-mutable-read-modify-write

This branch is not yet ready to merge into trunk because:

• Although it has unit tests, I'm not yet sure they are correct tests.

• The history of patches needs to be rewritten into a nice readable story.

tahoe-lafs modified the milestone from 1.11.0 to 1.12.0 2014-09-02 16:20:17 +00:00
daira commented 2014-09-02 16:20:59 +00:00
Author
Owner

We have no fix for this, nor do we understand it well.

warner commented 2016-03-22 05:02:25 +00:00
Author
Owner

Milestone renamed

tahoe-lafs modified the milestone from 1.12.0 to 1.13.0 2016-03-22 05:02:25 +00:00
warner commented 2016-06-28 18:17:14 +00:00
Author
Owner

renaming milestone

tahoe-lafs modified the milestone from 1.13.0 to 1.14.0 2016-06-28 18:17:14 +00:00
exarkun commented 2020-06-30 14:45:13 +00:00
Author
Owner

Moving open issues out of closed milestones.

tahoe-lafs modified the milestone from 1.14.0 to 1.15.0 2020-06-30 14:45:13 +00:00
meejah commented 2021-03-30 18:40:19 +00:00
Author
Owner

Ticket retargeted after milestone closed

tahoe-lafs modified the milestone from 1.15.0 to soon 2021-03-30 18:40:19 +00:00
Reference: tahoe-lafs/trac-2024-07-25#1670