DOS defect concerning forged shares #2214

Open
opened 2014-04-07 18:35:30 +00:00 by rcv · 4 comments

It is a fundamental assumption that when performing the "Verify every bit" checker operation, the verifier will report a corrupt share as corrupt and a good share as good.

However, when dealing with an immutable file, a certain type of "forged" share tricks the verifier into believing the forged share is good. This leads to several unpleasant consequences.

First, using verify-every-bit, the user may be wrongly told that their file is healthy when it is actually unhealthy.

Second, even if the user knows the file is unhealthy, it is difficult to persuade Tahoe-LAFS to repair the file via the Web or CLI interface.
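
For context, a check-and-repair is normally requested through the webapi check operation; a minimal sketch is below, assuming a local gateway on the default port (the gateway URL and readcap are placeholders). In this scenario the repair step accomplishes nothing, because the checker already believes the file is healthy.

# Minimal sketch: ask a local Tahoe-LAFS gateway to verify-and-repair a cap
# via the webapi check operation. Gateway URL and readcap are placeholders.
import urllib.request, urllib.parse

readcap = "URI:CHK:..."                       # placeholder readcap of the file to check
gateway = "http://127.0.0.1:3456"             # assumed local webapi gateway
url = "%s/uri/%s?t=check&verify=true&repair=true&output=JSON" % (
    gateway, urllib.parse.quote(readcap, safe=""))
req = urllib.request.Request(url, data=b"")   # the data argument makes this a POST, as t=check requires
print(urllib.request.urlopen(req).read().decode())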

Third, depending on the extent of the forgery, attempts to read the file are sometimes met with the dreaded insufficient-shares message.

I consider this a form of DOS. There are some mitigating factors, but overall I think this defect is about as severe as #1528.

How does one create such a forged share? Simply rename (or copy) a known good share to a different share number. For example, if your storage server is assigned share number 6 in a 3-of-10 distribution, copy that share to shares 0 through 5 and 7 through 9.
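
To make the reproduction concrete, here is a minimal sketch of that copy step, assuming the conventional on-disk layout of an immutable share under the storage server's base directory (the storage index, share number, and paths are illustrative):

# Minimal sketch of the forgery: present one real share under other share
# numbers by copying its file. Assumes the conventional layout
#   <basedir>/storage/shares/<first two chars of SI>/<SI>/<sharenum>
import os, shutil

si = "em3llebitvmuhbisp62laknrqe"             # base32 storage index (illustrative)
sharedir = os.path.join("storage", "shares", si[:2], si)
good = os.path.join(sharedir, "6")            # the share this server really holds

for num in range(10):                         # 3-of-10 encoding: valid share numbers are 0..9
    if num != 6:
        shutil.copyfile(good, os.path.join(sharedir, str(num)))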

One may say that a rogue storage server can delete its shares at any time. True, but normally a verify-every-bit operation cannot be defeated by a rogue storage server. If the storage server has tampered with the share, the verifier will detect the tampering and attempt to repair the share. If the storage server has deleted the share, the verifier will detect the deletion and attempt to repair the share. If the storage server has presented share number x as share number y, the verifier should likewise detect the forgery and attempt to repair the share.
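
In other words, the forged share is internally self-consistent, so any per-share hash check passes on its own; catching the forgery requires tying the share to the slot (share number) under which the server offered it. A conceptual sketch of that missing positional check is below; it is not the allmydata API, and a plain SHA-256 stands in for Tahoe's tagged hashes:

# Conceptual sketch (not the allmydata API) of the positional check the
# verifier needs: the hash of the fetched share must match the leaf for the
# *claimed* share number in the file's integrity-protected share hash tree.
import hashlib

def verify_share(claimed_shnum, share_bytes, share_hash_leaves):
    # share_hash_leaves[i] is the expected hash for share number i, taken from
    # the share hash tree whose root is committed to by the file's capability.
    actual = hashlib.sha256(share_bytes).digest()   # stand-in for Tahoe's tagged hashes
    # Checking only "is this a well-formed share?" passes for a renamed share;
    # comparing against the claimed slot is what catches the forgery.
    return actual == share_hash_leaves[claimed_shnum]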

Demonstration of failure mode:

A sample file (defect.txt) was placed on the public test grid, encoded with k=30, n=31. [The large number of shares is irrelevant, but there were many active storage nodes, and I wanted my node to contain at least two shares.] My storage server was assigned shares 4 and 20. I swapped them with each other. [Later, I placed good copies of shares 0-3, 5-19, and 21-30 on my storage server to protect against other storage nodes leaving the grid.] The readcap for the sample file is: URI:CHK:jrlysuyd6334z3hzpvspsvbpam:gchyqggiohotsvy44cdng6qzxyvvqcfthrud57tml5abxjuljaca:30:31:2235

Verify every bit reports the following. (Surprisingly, shares 4 and 20 were not flagged as corrupt.)

File Check Results for SI=em3llebitvmuhbisp62laknrqe
Healthy : Healthy

* Report:

* Share Counts: need 30-of-31, have 31
* Hosts with good shares: 14
* Corrupt shares: none
* Wrong Shares: 0
* Good Shares (sorted in share order):

Attempting to fetch the file gives the following report. (Not surprisingly, shares 4 and 20 are not available.)

NotEnoughSharesError: This indicates that some servers were unavailable, or that shares have been lost to server departure, hard drive failure, or disk corruption. You should perform a filecheck on this object to learn more.

The full error message is:
ran out of shares: complete=sh0,sh1,sh2,sh3,sh5,sh6,sh7,sh8,sh9,sh10,sh11,sh12,sh13,sh14,sh15,sh16,sh17,sh18,sh19,sh21,sh22,sh23,sh24,sh25,sh26,sh27,sh28,sh29,sh30 pending= overdue= unused=Share(sh15-on-tjnj7znu),Share(sh13-on-q7cs354n),Share(sh29-on-q7cs354n),Share(sh14-on-yec63zgr),Share(sh30-on-yec63zgr),Share(sh11-on-sw653ebi),Share(sh27-on-sw653ebi),Share(sh10-on-p742cj66),Share(sh26-on-p742cj66),Share(sh7-on-eyk2eslf),Share(sh23-on-eyk2eslf),Share(sh6-on-27wpeurw),Share(sh22-on-27wpeurw),Share(sh0-on-ra3o4edq),Share(sh16-on-ra3o4edq),Share(sh2-on-uy5th4nz),Share(sh18-on-uy5th4nz),Share(sh1-on-xvwei6mc),Share(sh17-on-xvwei6mc),Share(sh9-on-nszizgf5),Share(sh25-on-nszizgf5),Share(sh5-on-cyuiutio),Share(sh21-on-cyuiutio),Share(sh3-on-oygrircp),Share(sh19-on-oygrircp) need 30. Last failure: [Failure instance: Traceback: <type 'exceptions.AssertionError'>:
/usr/lib/python2.6/dist-packages/twisted/internet/base.py:1165:run
/usr/lib/python2.6/dist-packages/twisted/internet/base.py:1174:mainLoop
/usr/lib/python2.6/dist-packages/twisted/internet/base.py:796:runUntilCurrent
/usr/local/lib/python2.6/dist-packages/foolscap-0.6.4-py2.6.egg/foolscap/eventual.py:26:_turn
--- <exception caught here> ---
/usr/local/lib/python2.6/dist-packages/allmydata/immutable/downloader/share.py:206:loop
/usr/local/lib/python2.6/dist-packages/allmydata/immutable/downloader/share.py:254:_do_loop
/usr/local/lib/python2.6/dist-packages/allmydata/immutable/downloader/share.py:323:_get_satisfaction
/usr/local/lib/python2.6/dist-packages/allmydata/immutable/downloader/share.py:859:set_block_hash_root
/usr/local/lib/python2.6/dist-packages/allmydata/hashtree.py:375:set_hashes

tahoe-lafs added the unknown, critical, defect, 1.9.2 labels 2014-04-07 18:35:30 +00:00
tahoe-lafs added this to the undecided milestone 2014-04-07 18:35:30 +00:00
tahoe-lafs added the code-encoding, major labels and removed the unknown, critical labels 2014-04-14 22:48:38 +00:00
tahoe-lafs modified the milestone from undecided to 1.12.0 2015-01-29 19:48:39 +00:00
warner commented 2016-03-22 05:02:25 +00:00

Milestone renamed
tahoe-lafs modified the milestone from 1.12.0 to 1.13.0 2016-03-22 05:02:25 +00:00
warner commented 2016-06-28 18:17:14 +00:00

renaming milestone
tahoe-lafs modified the milestone from 1.13.0 to 1.14.0 2016-06-28 18:17:14 +00:00
exarkun commented 2020-06-30 14:45:13 +00:00

Moving open issues out of closed milestones.
tahoe-lafs modified the milestone from 1.14.0 to 1.15.0 2020-06-30 14:45:13 +00:00
meejah commented 2021-03-30 18:40:19 +00:00

Ticket retargeted after milestone closed
tahoe-lafs modified the milestone from 1.15.0 to soon 2021-03-30 18:40:19 +00:00