Thu Jun 24 16:46:37 PDT 2010  Kevan Carstensen
  * Misc. changes to support the work I'm doing

    - Add a notion of file version number to interfaces.py
    - Alter mutable file node interfaces to have a notion of version,
      though this may be changed later.
    - Alter mutable/filenode.py to conform to these changes.
    - Add a salt hasher to util/hashutil.py

Thu Jun 24 16:48:33 PDT 2010  Kevan Carstensen
  * nodemaker.py: create MDMF files when asked to

Thu Jun 24 16:49:05 PDT 2010  Kevan Carstensen
  * storage/server.py: minor code cleanup

Thu Jun 24 16:49:24 PDT 2010  Kevan Carstensen
  * test/test_mutable.py: alter some tests that were failing due to MDMF; minor code cleanup.

Fri Jun 25 17:35:20 PDT 2010  Kevan Carstensen
  * test/test_mutable.py: change the definition of corrupt() to work with MDMF as well as SDMF files, change users of corrupt to use the new definition

Sat Jun 26 16:41:18 PDT 2010  Kevan Carstensen
  * Alter the ServermapUpdater to find MDMF files

    The ServermapUpdater should find MDMF files on a grid in the same way
    that it finds SDMF files. This patch makes it do that.

Sat Jun 26 16:42:04 PDT 2010  Kevan Carstensen
  * Make a segmented mutable uploader

    The mutable file uploader should be able to publish files with one
    segment and files with multiple segments. This patch makes it do that.
    This is still incomplete, and rather ugly -- I need to flesh out error
    handling, I need to write tests, and I need to remove some of the
    uglier kludges in the process before I can call this done.

Sat Jun 26 16:43:14 PDT 2010  Kevan Carstensen
  * Write a segmented mutable downloader

    The segmented mutable downloader can deal with MDMF files (files with
    one or more segments in MDMF format) and SDMF files (files with one
    segment in SDMF format). It is backwards compatible with the old file
    format.

    This patch also contains tests for the segmented mutable downloader.

Mon Jun 28 15:50:48 PDT 2010  Kevan Carstensen
  * mutable/checker.py: check MDMF files

    This patch adapts the mutable file checker and verifier to check and
    verify MDMF files. It does this by using the new segmented downloader,
    which is trained to perform verification operations on request. This
    removes some code duplication.

Mon Jun 28 15:52:01 PDT 2010  Kevan Carstensen
  * mutable/retrieve.py: learn how to verify mutable files

Wed Jun 30 11:33:05 PDT 2010  Kevan Carstensen
  * interfaces.py: add IMutableSlotWriter

Thu Jul 1 16:28:06 PDT 2010  Kevan Carstensen
  * test/test_mutable.py: temporarily disable two tests that are now irrelevant

Fri Jul 2 15:55:31 PDT 2010  Kevan Carstensen
  * Add MDMF reader and writer, and SDMF writer

    The MDMF/SDMF reader, MDMF writer, and SDMF writer are similar to the
    object proxies that exist for immutable files. They abstract away
    details of connection, state, and caching from their callers (in this
    case, the download, servermap updater, and uploader), and expose
    methods to get and set information on the remote server.

    MDMFSlotReadProxy reads a mutable file from the server, doing the
    right thing (in most cases) regardless of whether the file is MDMF or
    SDMF. It allows callers to tell it how to batch and flush reads.

    MDMFSlotWriteProxy writes an MDMF mutable file to a server.

    SDMFSlotWriteProxy writes an SDMF mutable file to a server.

    This patch also includes tests for MDMFSlotReadProxy,
    SDMFSlotWriteProxy, and MDMFSlotWriteProxy.
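Taken together, the entries above add a protocol-version knob (SDMF_VERSION = 0, MDMF_VERSION = 1) to the mutable-file creation path. A minimal usage sketch, based on the interfaces added by the patches below and assuming an already-configured NodeMaker instance (the name nodemaker is illustrative):

    from allmydata.interfaces import MDMF_VERSION

    # nodemaker is assumed to be a NodeMaker the client has already built;
    # create_mutable_file returns a Deferred that fires with the new node.
    d = nodemaker.create_mutable_file("initial contents", version=MDMF_VERSION)
    def _created(node):
        # The node remembers which protocol version it was created with.
        assert node.get_version() == MDMF_VERSION
        # Reads and writes go through the usual mutable-file API.
        return node.download_best_version()
    d.addCallback(_created)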
Fri Jul 2 15:55:54 PDT 2010  Kevan Carstensen
  * mutable/publish.py: cleanup + simplification

Fri Jul 2 15:57:10 PDT 2010  Kevan Carstensen
  * test/test_mutable.py: remove tests that are no longer relevant

Tue Jul 6 14:52:17 PDT 2010  Kevan Carstensen
  * interfaces.py: create IMutableUploadable

Tue Jul 6 14:52:57 PDT 2010  Kevan Carstensen
  * mutable/publish.py: add MutableDataHandle and MutableFileHandle

Tue Jul 6 14:55:41 PDT 2010  Kevan Carstensen
  * mutable/publish.py: reorganize in preparation of file-like uploadables

Tue Jul 6 14:56:49 PDT 2010  Kevan Carstensen
  * test/test_mutable.py: write tests for MutableFileHandle and MutableDataHandle

Wed Jul 7 17:00:31 PDT 2010  Kevan Carstensen
  * Alter tests to work with the new APIs

Wed Jul 7 17:07:32 PDT 2010  Kevan Carstensen
  * Alter mutable files to use file-like objects for publishing instead of strings.

Thu Jul 8 12:35:22 PDT 2010  Kevan Carstensen
  * test/test_sftp.py: alter a setup routine to work with new mutable file APIs.

Thu Jul 8 12:36:00 PDT 2010  Kevan Carstensen
  * mutable/publish.py: make MutableFileHandle seek to the beginning of its file handle before reading.

Fri Jul 9 16:29:12 PDT 2010  Kevan Carstensen
  * Refactor download interfaces to be more uniform, per #993

Fri Jul 9 16:29:51 PDT 2010  Kevan Carstensen
  * frontends/sftpd.py: alter a mutable file overwrite to work with the new API

Tue Jul 13 16:17:58 PDT 2010  Kevan Carstensen
  * mutable/filenode.py: implement most of IVersion, per #993

New patches:

[Misc. changes to support the work I'm doing
Kevan Carstensen **20100624234637
 Ignore-this: fdd18fa8cc05f4b4b15ff53ee24a1819
 
 - Add a notion of file version number to interfaces.py
 - Alter mutable file node interfaces to have a notion of version, though
   this may be changed later.
 - Alter mutable/filenode.py to conform to these changes.
 - Add a salt hasher to util/hashutil.py
] {
hunk ./src/allmydata/interfaces.py 7
      ChoiceOf, IntegerConstraint, Any, RemoteInterface, Referenceable

 HASH_SIZE=32
+SALT_SIZE=16
+
+SDMF_VERSION=0
+MDMF_VERSION=1

 Hash = StringConstraint(maxLength=HASH_SIZE, minLength=HASH_SIZE)# binary format 32-byte SHA256 hash
hunk ./src/allmydata/interfaces.py 811
         writer-visible data using this writekey.
         """
+    def set_version(version):
+        """Tahoe-LAFS supports SDMF and MDMF mutable files. By default,
+        we upload in SDMF for reasons of compatibility. If you want to
+        change this, set_version will let you do that.
+
+        To say that this file should be uploaded in SDMF, pass in a 0. To
+        say that the file should be uploaded as MDMF, pass in a 1.
+        """
+
+    def get_version():
+        """Returns the mutable file protocol version."""
+
 class NotEnoughSharesError(Exception):
     """Download was unable to get enough shares"""
hunk ./src/allmydata/mutable/filenode.py 8
 from twisted.internet import defer, reactor
 from foolscap.api import eventually
 from allmydata.interfaces import IMutableFileNode, \
-     ICheckable, ICheckResults, NotEnoughSharesError
+     ICheckable, ICheckResults, NotEnoughSharesError, MDMF_VERSION, SDMF_VERSION
 from allmydata.util import hashutil, log
 from allmydata.util.assertutil import precondition
 from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI
hunk ./src/allmydata/mutable/filenode.py 67
         self._sharemap = {} # known shares, shnum-to-[nodeids]
         self._cache = ResponseCache()
         self._most_recent_size = None
+        # filled in after __init__ if we're being created for the first time;
+        # filled in by the servermap updater before publishing, otherwise.
+ # set to this default value in case neither of those things happen, + # or in case the servermap can't find any shares to tell us what + # to publish as. + # TODO: Set this back to None, and find out why the tests fail + # with it set to None. + self._protocol_version = SDMF_VERSION # all users of this MutableFileNode go through the serializer. This # takes advantage of the fact that Deferreds discard the callbacks hunk ./src/allmydata/mutable/filenode.py 472 def _did_upload(self, res, size): self._most_recent_size = size return res + + + def set_version(self, version): + # I can be set in two ways: + # 1. When the node is created. + # 2. (for an existing share) when the Servermap is updated + # before I am read. + assert version in (MDMF_VERSION, SDMF_VERSION) + self._protocol_version = version + + + def get_version(self): + return self._protocol_version hunk ./src/allmydata/util/hashutil.py 90 MUTABLE_READKEY_TAG = "allmydata_mutable_writekey_to_readkey_v1" MUTABLE_DATAKEY_TAG = "allmydata_mutable_readkey_to_datakey_v1" MUTABLE_STORAGEINDEX_TAG = "allmydata_mutable_readkey_to_storage_index_v1" +MUTABLE_SALT_TAG = "allmydata_mutable_segment_salt_v1" # dirnodes DIRNODE_CHILD_WRITECAP_TAG = "allmydata_mutable_writekey_and_salt_to_dirnode_child_capkey_v1" hunk ./src/allmydata/util/hashutil.py 134 def plaintext_segment_hasher(): return tagged_hasher(PLAINTEXT_SEGMENT_TAG) +def mutable_salt_hash(data): + return tagged_hash(MUTABLE_SALT_TAG, data) +def mutable_salt_hasher(): + return tagged_hasher(MUTABLE_SALT_TAG) + KEYLEN = 16 IVLEN = 16 } [nodemaker.py: create MDMF files when asked to Kevan Carstensen **20100624234833 Ignore-this: 26c16aaca9ddab7a7ce37a4530bc970 ] { hunk ./src/allmydata/nodemaker.py 3 import weakref from zope.interface import implements -from allmydata.interfaces import INodeMaker +from allmydata.util.assertutil import precondition +from allmydata.interfaces import INodeMaker, MustBeDeepImmutableError, \ + SDMF_VERSION, MDMF_VERSION from allmydata.immutable.filenode import ImmutableFileNode, LiteralFileNode from allmydata.immutable.upload import Data from allmydata.mutable.filenode import MutableFileNode hunk ./src/allmydata/nodemaker.py 92 return self._create_dirnode(filenode) return None - def create_mutable_file(self, contents=None, keysize=None): + def create_mutable_file(self, contents=None, keysize=None, + version=SDMF_VERSION): n = MutableFileNode(self.storage_broker, self.secret_holder, self.default_encoding_parameters, self.history) hunk ./src/allmydata/nodemaker.py 96 + n.set_version(version) d = self.key_generator.generate(keysize) d.addCallback(n.create_with_keys, contents) d.addCallback(lambda res: n) hunk ./src/allmydata/nodemaker.py 102 return d - def create_new_mutable_directory(self, initial_children={}): + def create_new_mutable_directory(self, initial_children={}, + version=SDMF_VERSION): + # initial_children must have metadata (i.e. 
{} instead of None) + for (name, (node, metadata)) in initial_children.iteritems(): + precondition(isinstance(metadata, dict), + "create_new_mutable_directory requires metadata to be a dict, not None", metadata) + node.raise_error() d = self.create_mutable_file(lambda n: hunk ./src/allmydata/nodemaker.py 110 - pack_children(n, initial_children)) + pack_children(n, initial_children), + version) d.addCallback(self._create_dirnode) return d } [storage/server.py: minor code cleanup Kevan Carstensen **20100624234905 Ignore-this: 2358c531c39e48d3c8e56b62b5768228 ] { hunk ./src/allmydata/storage/server.py 569 self) return share - def remote_slot_readv(self, storage_index, shares, readv): + def remote_slot_readv(self, storage_index, shares, readvs): start = time.time() self.count("readv") si_s = si_b2a(storage_index) hunk ./src/allmydata/storage/server.py 590 if sharenum in shares or not shares: filename = os.path.join(bucketdir, sharenum_s) msf = MutableShareFile(filename, self) - datavs[sharenum] = msf.readv(readv) + datavs[sharenum] = msf.readv(readvs) log.msg("returning shares %s" % (datavs.keys(),), facility="tahoe.storage", level=log.NOISY, parent=lp) self.add_latency("readv", time.time() - start) } [test/test_mutable.py: alter some tests that were failing due to MDMF; minor code cleanup. Kevan Carstensen **20100624234924 Ignore-this: afb86ec1fbdbfe1a5ef6f46f350273c0 ] { hunk ./src/allmydata/test/test_mutable.py 151 chr(ord(original[byte_offset]) ^ 0x01) + original[byte_offset+1:]) +def add_two(original, byte_offset): + # It isn't enough to simply flip the bit for the version number, + # because 1 is a valid version number. So we add two instead. + return (original[:byte_offset] + + chr(ord(original[byte_offset]) ^ 0x02) + + original[byte_offset+1:]) + def corrupt(res, s, offset, shnums_to_corrupt=None, offset_offset=0): # if shnums_to_corrupt is None, corrupt all shares. Otherwise it is a # list of shnums to corrupt. 
hunk ./src/allmydata/test/test_mutable.py 187 real_offset = offset1 real_offset = int(real_offset) + offset2 + offset_offset assert isinstance(real_offset, int), offset - shares[shnum] = flip_bit(data, real_offset) + if offset1 == 0: # verbyte + f = add_two + else: + f = flip_bit + shares[shnum] = f(data, real_offset) return res def make_storagebroker(s=None, num_peers=10): hunk ./src/allmydata/test/test_mutable.py 423 d.addCallback(_created) return d + def test_modify_backoffer(self): def _modifier(old_contents, servermap, first_time): return old_contents + "line2" hunk ./src/allmydata/test/test_mutable.py 658 d.addCallback(_created) return d + def _copy_shares(self, ignored, index): shares = self._storage._peers # we need a deep copy } [test/test_mutable.py: change the definition of corrupt() to work with MDMF as well as SDMF files, change users of corrupt to use the new definition Kevan Carstensen **20100626003520 Ignore-this: 836e59e2fde0535f6b4bea3468dc8244 ] { hunk ./src/allmydata/test/test_mutable.py 168 and shnum not in shnums_to_corrupt): continue data = shares[shnum] - (version, - seqnum, - root_hash, - IV, - k, N, segsize, datalen, - o) = unpack_header(data) - if isinstance(offset, tuple): - offset1, offset2 = offset - else: - offset1 = offset - offset2 = 0 - if offset1 == "pubkey": - real_offset = 107 - elif offset1 in o: - real_offset = o[offset1] - else: - real_offset = offset1 - real_offset = int(real_offset) + offset2 + offset_offset - assert isinstance(real_offset, int), offset - if offset1 == 0: # verbyte - f = add_two - else: - f = flip_bit - shares[shnum] = f(data, real_offset) - return res + # We're feeding the reader all of the share data, so it + # won't need to use the rref that we didn't provide, nor the + # storage index that we didn't provide. We do this because + # the reader will work for both MDMF and SDMF. + reader = MDMFSlotReadProxy(None, None, shnum, data) + # We need to get the offsets for the next part. 
+ d = reader.get_verinfo() + def _do_corruption(verinfo, data, shnum): + (seqnum, + root_hash, + IV, + segsize, + datalen, + k, n, prefix, o) = verinfo + if isinstance(offset, tuple): + offset1, offset2 = offset + else: + offset1 = offset + offset2 = 0 + if offset1 == "pubkey": + real_offset = 107 + elif offset1 in o: + real_offset = o[offset1] + else: + real_offset = offset1 + real_offset = int(real_offset) + offset2 + offset_offset + assert isinstance(real_offset, int), offset + if offset1 == 0: # verbyte + f = add_two + else: + f = flip_bit + shares[shnum] = f(data, real_offset) + d.addCallback(_do_corruption, data, shnum) + ds.append(d) + dl = defer.DeferredList(ds) + dl.addCallback(lambda ignored: res) + return dl def make_storagebroker(s=None, num_peers=10): if not s: hunk ./src/allmydata/test/test_mutable.py 1177 return d def test_download_fails(self): - corrupt(None, self._storage, "signature") - d = self.shouldFail(UnrecoverableFileError, "test_download_anyway", + d = corrupt(None, self._storage, "signature") + d.addCallback(lambda ignored: + self.shouldFail(UnrecoverableFileError, "test_download_anyway", "no recoverable versions", self._fn.download_best_version) return d hunk ./src/allmydata/test/test_mutable.py 1232 return d def test_check_all_bad_sig(self): - corrupt(None, self._storage, 1) # bad sig - d = self._fn.check(Monitor()) + d = corrupt(None, self._storage, 1) # bad sig + d.addCallback(lambda ignored: + self._fn.check(Monitor())) d.addCallback(self.check_bad, "test_check_all_bad_sig") return d hunk ./src/allmydata/test/test_mutable.py 1239 def test_check_all_bad_blocks(self): - corrupt(None, self._storage, "share_data", [9]) # bad blocks + d = corrupt(None, self._storage, "share_data", [9]) # bad blocks # the Checker won't notice this.. 
it doesn't look at actual data hunk ./src/allmydata/test/test_mutable.py 1241 - d = self._fn.check(Monitor()) + d.addCallback(lambda ignored: + self._fn.check(Monitor())) d.addCallback(self.check_good, "test_check_all_bad_blocks") return d hunk ./src/allmydata/test/test_mutable.py 1252 return d def test_verify_all_bad_sig(self): - corrupt(None, self._storage, 1) # bad sig - d = self._fn.check(Monitor(), verify=True) + d = corrupt(None, self._storage, 1) # bad sig + d.addCallback(lambda ignored: + self._fn.check(Monitor(), verify=True)) d.addCallback(self.check_bad, "test_verify_all_bad_sig") return d hunk ./src/allmydata/test/test_mutable.py 1259 def test_verify_one_bad_sig(self): - corrupt(None, self._storage, 1, [9]) # bad sig - d = self._fn.check(Monitor(), verify=True) + d = corrupt(None, self._storage, 1, [9]) # bad sig + d.addCallback(lambda ignored: + self._fn.check(Monitor(), verify=True)) d.addCallback(self.check_bad, "test_verify_one_bad_sig") return d hunk ./src/allmydata/test/test_mutable.py 1266 def test_verify_one_bad_block(self): - corrupt(None, self._storage, "share_data", [9]) # bad blocks + d = corrupt(None, self._storage, "share_data", [9]) # bad blocks # the Verifier *will* notice this, since it examines every byte hunk ./src/allmydata/test/test_mutable.py 1268 - d = self._fn.check(Monitor(), verify=True) + d.addCallback(lambda ignored: + self._fn.check(Monitor(), verify=True)) d.addCallback(self.check_bad, "test_verify_one_bad_block") d.addCallback(self.check_expected_failure, CorruptShareError, "block hash tree failure", hunk ./src/allmydata/test/test_mutable.py 1277 return d def test_verify_one_bad_sharehash(self): - corrupt(None, self._storage, "share_hash_chain", [9], 5) - d = self._fn.check(Monitor(), verify=True) + d = corrupt(None, self._storage, "share_hash_chain", [9], 5) + d.addCallback(lambda ignored: + self._fn.check(Monitor(), verify=True)) d.addCallback(self.check_bad, "test_verify_one_bad_sharehash") d.addCallback(self.check_expected_failure, CorruptShareError, "corrupt hashes", hunk ./src/allmydata/test/test_mutable.py 1287 return d def test_verify_one_bad_encprivkey(self): - corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey - d = self._fn.check(Monitor(), verify=True) + d = corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey + d.addCallback(lambda ignored: + self._fn.check(Monitor(), verify=True)) d.addCallback(self.check_bad, "test_verify_one_bad_encprivkey") d.addCallback(self.check_expected_failure, CorruptShareError, "invalid privkey", hunk ./src/allmydata/test/test_mutable.py 1297 return d def test_verify_one_bad_encprivkey_uncheckable(self): - corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey + d = corrupt(None, self._storage, "enc_privkey", [9]) # bad privkey readonly_fn = self._fn.get_readonly() # a read-only node has no way to validate the privkey hunk ./src/allmydata/test/test_mutable.py 1300 - d = readonly_fn.check(Monitor(), verify=True) + d.addCallback(lambda ignored: + readonly_fn.check(Monitor(), verify=True)) d.addCallback(self.check_good, "test_verify_one_bad_encprivkey_uncheckable") return d } [Alter the ServermapUpdater to find MDMF files Kevan Carstensen **20100626234118 Ignore-this: 25f6278209c2983ba8f307cfe0fde0 The servermapupdater should find MDMF files on a grid in the same way that it finds SDMF files. This patch makes it do that. 
] { hunk ./src/allmydata/mutable/servermap.py 7 from itertools import count from twisted.internet import defer from twisted.python import failure -from foolscap.api import DeadReferenceError, RemoteException, eventually +from foolscap.api import DeadReferenceError, RemoteException, eventually, \ + fireEventually from allmydata.util import base32, hashutil, idlib, log from allmydata.storage.server import si_b2a from allmydata.interfaces import IServermapUpdaterStatus hunk ./src/allmydata/mutable/servermap.py 17 from allmydata.mutable.common import MODE_CHECK, MODE_ANYTHING, MODE_WRITE, MODE_READ, \ DictOfSets, CorruptShareError, NeedMoreDataError from allmydata.mutable.layout import unpack_prefix_and_signature, unpack_header, unpack_share, \ - SIGNED_PREFIX_LENGTH + SIGNED_PREFIX_LENGTH, MDMFSlotReadProxy class UpdateStatus: implements(IServermapUpdaterStatus) hunk ./src/allmydata/mutable/servermap.py 254 """Return a set of versionids, one for each version that is currently recoverable.""" versionmap = self.make_versionmap() - recoverable_versions = set() for (verinfo, shares) in versionmap.items(): (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, hunk ./src/allmydata/mutable/servermap.py 366 self._servers_responded = set() # how much data should we read? + # SDMF: # * if we only need the checkstring, then [0:75] # * if we need to validate the checkstring sig, then [543ish:799ish] # * if we need the verification key, then [107:436ish] hunk ./src/allmydata/mutable/servermap.py 374 # * if we need the encrypted private key, we want [-1216ish:] # * but we can't read from negative offsets # * the offset table tells us the 'ish', also the positive offset - # A future version of the SMDF slot format should consider using - # fixed-size slots so we can retrieve less data. For now, we'll just - # read 2000 bytes, which also happens to read enough actual data to - # pre-fetch a 9-entry dirnode. + # MDMF: + # * Checkstring? [0:72] + # * If we want to validate the checkstring, then [0:72], [143:?] -- + # the offset table will tell us for sure. + # * If we need the verification key, we have to consult the offset + # table as well. + # At this point, we don't know which we are. Our filenode can + # tell us, but it might be lying -- in some cases, we're + # responsible for telling it which kind of file it is. self._read_size = 4000 if mode == MODE_CHECK: # we use unpack_prefix_and_signature, so we need 1k hunk ./src/allmydata/mutable/servermap.py 432 self._queries_completed = 0 sb = self._storage_broker + # All of the peers, permuted by the storage index, as usual. full_peerlist = sb.get_servers_for_index(self._storage_index) self.full_peerlist = full_peerlist # for use later, immutable self.extra_peers = full_peerlist[:] # peers are removed as we use them hunk ./src/allmydata/mutable/servermap.py 439 self._good_peers = set() # peers who had some shares self._empty_peers = set() # peers who don't have any shares self._bad_peers = set() # peers to whom our queries failed + self._readers = {} # peerid -> dict(sharewriters), filled in + # after responses come in. k = self._node.get_required_shares() hunk ./src/allmydata/mutable/servermap.py 443 + # For what cases can these conditions work? if k is None: # make a guess k = 3 hunk ./src/allmydata/mutable/servermap.py 456 self.num_peers_to_query = k + self.EPSILON if self.mode == MODE_CHECK: + # We want to query all of the peers. 
initial_peers_to_query = dict(full_peerlist) must_query = set(initial_peers_to_query.keys()) self.extra_peers = [] hunk ./src/allmydata/mutable/servermap.py 464 # we're planning to replace all the shares, so we want a good # chance of finding them all. We will keep searching until we've # seen epsilon that don't have a share. + # We don't query all of the peers because that could take a while. self.num_peers_to_query = N + self.EPSILON initial_peers_to_query, must_query = self._build_initial_querylist() self.required_num_empty_peers = self.EPSILON hunk ./src/allmydata/mutable/servermap.py 474 # might also avoid the round trip required to read the encrypted # private key. - else: + else: # MODE_READ, MODE_ANYTHING + # 2k peers is good enough. initial_peers_to_query, must_query = self._build_initial_querylist() # this is a set of peers that we are required to get responses from: hunk ./src/allmydata/mutable/servermap.py 490 # before we can consider ourselves finished, and self.extra_peers # contains the overflow (peers that we should tap if we don't get # enough responses) + # I guess that self._must_query is a subset of + # initial_peers_to_query? + assert set(must_query).issubset(set(initial_peers_to_query)) self._send_initial_requests(initial_peers_to_query) self._status.timings["initial_queries"] = time.time() - self._started hunk ./src/allmydata/mutable/servermap.py 549 # errors that aren't handled by _query_failed (and errors caused by # _query_failed) get logged, but we still want to check for doneness. d.addErrback(log.err) - d.addBoth(self._check_for_done) d.addErrback(self._fatal_error) hunk ./src/allmydata/mutable/servermap.py 550 + d.addCallback(self._check_for_done) return d def _do_read(self, ss, peerid, storage_index, shnums, readv): hunk ./src/allmydata/mutable/servermap.py 569 d = ss.callRemote("slot_readv", storage_index, shnums, readv) return d + + def _got_corrupt_share(self, e, shnum, peerid, data, lp): + """ + I am called when a remote server returns a corrupt share in + response to one of our queries. By corrupt, I mean a share + without a valid signature. I then record the failure, notify the + server of the corruption, and record the share as bad. + """ + f = failure.Failure(e) + self.log(format="bad share: %(f_value)s", f_value=str(f), + failure=f, parent=lp, level=log.WEIRD, umid="h5llHg") + # Notify the server that its share is corrupt. + self.notify_server_corruption(peerid, shnum, str(e)) + # By flagging this as a bad peer, we won't count any of + # the other shares on that peer as valid, though if we + # happen to find a valid version string amongst those + # shares, we'll keep track of it so that we don't need + # to validate the signature on those again. + self._bad_peers.add(peerid) + self._last_failure = f + # XXX: Use the reader for this? + checkstring = data[:SIGNED_PREFIX_LENGTH] + self._servermap.mark_bad_share(peerid, shnum, checkstring) + self._servermap.problems.append(f) + + + def _cache_good_sharedata(self, verinfo, shnum, now, data): + """ + If one of my queries returns successfully (which means that we + were able to and successfully did validate the signature), I + cache the data that we initially fetched from the storage + server. This will help reduce the number of roundtrips that need + to occur when the file is downloaded, or when the file is + updated. 
+ """ + self._node._add_to_cache(verinfo, shnum, 0, data, now) + + def _got_results(self, datavs, peerid, readsize, stuff, started): lp = self.log(format="got result from [%(peerid)s], %(numshares)d shares", peerid=idlib.shortnodeid_b2a(peerid), hunk ./src/allmydata/mutable/servermap.py 630 else: self._empty_peers.add(peerid) - last_verinfo = None - last_shnum = None + ss, storage_index = stuff + ds = [] + for shnum,datav in datavs.items(): data = datav[0] hunk ./src/allmydata/mutable/servermap.py 635 - try: - verinfo = self._got_results_one_share(shnum, data, peerid, lp) - last_verinfo = verinfo - last_shnum = shnum - self._node._add_to_cache(verinfo, shnum, 0, data, now) - except CorruptShareError, e: - # log it and give the other shares a chance to be processed - f = failure.Failure() - self.log(format="bad share: %(f_value)s", f_value=str(f.value), - failure=f, parent=lp, level=log.WEIRD, umid="h5llHg") - self.notify_server_corruption(peerid, shnum, str(e)) - self._bad_peers.add(peerid) - self._last_failure = f - checkstring = data[:SIGNED_PREFIX_LENGTH] - self._servermap.mark_bad_share(peerid, shnum, checkstring) - self._servermap.problems.append(f) - pass - - self._status.timings["cumulative_verify"] += (time.time() - now) + reader = MDMFSlotReadProxy(ss, + storage_index, + shnum, + data) + self._readers.setdefault(peerid, dict())[shnum] = reader + # our goal, with each response, is to validate the version + # information and share data as best we can at this point -- + # we do this by validating the signature. To do this, we + # need to do the following: + # - If we don't already have the public key, fetch the + # public key. We use this to validate the signature. + if not self._node.get_pubkey(): + # fetch and set the public key. + d = reader.get_verification_key() + d.addCallback(lambda results, shnum=shnum, peerid=peerid: + self._try_to_set_pubkey(results, peerid, shnum, lp)) + # XXX: Make self._pubkey_query_failed? + d.addErrback(lambda error, shnum=shnum, peerid=peerid: + self._got_corrupt_share(error, shnum, peerid, data, lp)) + else: + # we already have the public key. + d = defer.succeed(None) + # Neither of these two branches return anything of + # consequence, so the first entry in our deferredlist will + # be None. hunk ./src/allmydata/mutable/servermap.py 661 - if self._need_privkey and last_verinfo: - # send them a request for the privkey. We send one request per - # server. - lp2 = self.log("sending privkey request", - parent=lp, level=log.NOISY) - (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, - offsets_tuple) = last_verinfo - o = dict(offsets_tuple) + # - Next, we need the version information. We almost + # certainly got this by reading the first thousand or so + # bytes of the share on the storage server, so we + # shouldn't need to fetch anything at this step. + d2 = reader.get_verinfo() + d2.addErrback(lambda error, shnum=shnum, peerid=peerid: + self._got_corrupt_share(error, shnum, peerid, data, lp)) + # - Next, we need the signature. For an SDMF share, it is + # likely that we fetched this when doing our initial fetch + # to get the version information. In MDMF, this lives at + # the end of the share, so unless the file is quite small, + # we'll need to do a remote fetch to get it. 
+ d3 = reader.get_signature() + d3.addErrback(lambda error, shnum=shnum, peerid=peerid: + self._got_corrupt_share(error, shnum, peerid, data, lp)) + # Once we have all three of these responses, we can move on + # to validating the signature hunk ./src/allmydata/mutable/servermap.py 679 - self._queries_outstanding.add(peerid) - readv = [ (o['enc_privkey'], (o['EOF'] - o['enc_privkey'])) ] - ss = self._servermap.connections[peerid] - privkey_started = time.time() - d = self._do_read(ss, peerid, self._storage_index, - [last_shnum], readv) - d.addCallback(self._got_privkey_results, peerid, last_shnum, - privkey_started, lp2) - d.addErrback(self._privkey_query_failed, peerid, last_shnum, lp2) - d.addErrback(log.err) - d.addCallback(self._check_for_done) - d.addErrback(self._fatal_error) + # Does the node already have a privkey? If not, we'll try to + # fetch it here. + if self._need_privkey: + d4 = reader.get_encprivkey() + d4.addCallback(lambda results, shnum=shnum, peerid=peerid: + self._try_to_validate_privkey(results, peerid, shnum, lp)) + d4.addErrback(lambda error, shnum=shnum, peerid=peerid: + self._privkey_query_failed(error, shnum, data, lp)) + else: + d4 = defer.succeed(None) hunk ./src/allmydata/mutable/servermap.py 690 + dl = defer.DeferredList([d, d2, d3, d4]) + dl.addCallback(lambda results, shnum=shnum, peerid=peerid: + self._got_signature_one_share(results, shnum, peerid, lp)) + dl.addErrback(lambda error, shnum=shnum, data=data: + self._got_corrupt_share(error, shnum, peerid, data, lp)) + dl.addCallback(lambda verinfo, shnum=shnum, peerid=peerid, data=data: + self._cache_good_sharedata(verinfo, shnum, now, data)) + ds.append(dl) + # dl is a deferred list that will fire when all of the shares + # that we found on this peer are done processing. When dl fires, + # we know that processing is done, so we can decrement the + # semaphore-like thing that we incremented earlier. + dl = defer.DeferredList(ds, fireOnOneErrback=True) + # Are we done? Done means that there are no more queries to + # send, that there are no outstanding queries, and that we + # haven't received any queries that are still processing. If we + # are done, self._check_for_done will cause the done deferred + # that we returned to our caller to fire, which tells them that + # they have a complete servermap, and that we won't be touching + # the servermap anymore. + dl.addCallback(self._check_for_done) + dl.addErrback(self._fatal_error) # all done! self.log("_got_results done", parent=lp, level=log.NOISY) hunk ./src/allmydata/mutable/servermap.py 714 + return dl + + + def _try_to_set_pubkey(self, pubkey_s, peerid, shnum, lp): + if self._node.get_pubkey(): + return # don't go through this again if we don't have to + fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s) + assert len(fingerprint) == 32 + if fingerprint != self._node.get_fingerprint(): + raise CorruptShareError(peerid, shnum, + "pubkey doesn't match fingerprint") + self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s)) + assert self._node.get_pubkey() + def notify_server_corruption(self, peerid, shnum, reason): ss = self._servermap.connections[peerid] hunk ./src/allmydata/mutable/servermap.py 734 ss.callRemoteOnly("advise_corrupt_share", "mutable", self._storage_index, shnum, reason) - def _got_results_one_share(self, shnum, data, peerid, lp): + + def _got_signature_one_share(self, results, shnum, peerid, lp): + # It is our job to give versioninfo to our caller. 
We need to + # raise CorruptShareError if the share is corrupt for any + # reason, something that our caller will handle. self.log(format="_got_results: got shnum #%(shnum)d from peerid %(peerid)s", shnum=shnum, peerid=idlib.shortnodeid_b2a(peerid), hunk ./src/allmydata/mutable/servermap.py 744 level=log.NOISY, parent=lp) - - # this might raise NeedMoreDataError, if the pubkey and signature - # live at some weird offset. That shouldn't happen, so I'm going to - # treat it as a bad share. - (seqnum, root_hash, IV, k, N, segsize, datalength, - pubkey_s, signature, prefix) = unpack_prefix_and_signature(data) - - if not self._node.get_pubkey(): - fingerprint = hashutil.ssk_pubkey_fingerprint_hash(pubkey_s) - assert len(fingerprint) == 32 - if fingerprint != self._node.get_fingerprint(): - raise CorruptShareError(peerid, shnum, - "pubkey doesn't match fingerprint") - self._node._populate_pubkey(self._deserialize_pubkey(pubkey_s)) - - if self._need_privkey: - self._try_to_extract_privkey(data, peerid, shnum, lp) - - (ig_version, ig_seqnum, ig_root_hash, ig_IV, ig_k, ig_N, - ig_segsize, ig_datalen, offsets) = unpack_header(data) + _, verinfo, signature, __ = results + (seqnum, + root_hash, + saltish, + segsize, + datalen, + k, + n, + prefix, + offsets) = verinfo[1] offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] ) hunk ./src/allmydata/mutable/servermap.py 756 - verinfo = (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, + # XXX: This should be done for us in the method, so + # presumably you can go in there and fix it. + verinfo = (seqnum, + root_hash, + saltish, + segsize, + datalen, + k, + n, + prefix, offsets_tuple) hunk ./src/allmydata/mutable/servermap.py 767 + # This tuple uniquely identifies a share on the grid; we use it + # to keep track of the ones that we've already seen. if verinfo not in self._valid_versions: hunk ./src/allmydata/mutable/servermap.py 771 - # it's a new pair. Verify the signature. - valid = self._node.get_pubkey().verify(prefix, signature) + # This is a new version tuple, and we need to validate it + # against the public key before keeping track of it. + assert self._node.get_pubkey() + valid = self._node.get_pubkey().verify(prefix, signature[1]) if not valid: hunk ./src/allmydata/mutable/servermap.py 776 - raise CorruptShareError(peerid, shnum, "signature is invalid") + raise CorruptShareError(peerid, shnum, + "signature is invalid") hunk ./src/allmydata/mutable/servermap.py 779 - # ok, it's a valid verinfo. Add it to the list of validated - # versions. - self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d" - % (seqnum, base32.b2a(root_hash)[:4], - idlib.shortnodeid_b2a(peerid), shnum, - k, N, segsize, datalength), - parent=lp) - self._valid_versions.add(verinfo) - # We now know that this is a valid candidate verinfo. + # ok, it's a valid verinfo. Add it to the list of validated + # versions. + self.log(" found valid version %d-%s from %s-sh%d: %d-%d/%d/%d" + % (seqnum, base32.b2a(root_hash)[:4], + idlib.shortnodeid_b2a(peerid), shnum, + k, n, segsize, datalen), + parent=lp) + self._valid_versions.add(verinfo) + # We now know that this is a valid candidate verinfo. Whether or + # not this instance of it is valid is a matter for the next + # statement; at this point, we just know that if we see this + # version info again, that its signature checks out and that + # we're okay to skip the signature-checking step. hunk ./src/allmydata/mutable/servermap.py 793 + # (peerid, shnum) are bound in the method invocation. 
if (peerid, shnum) in self._servermap.bad_shares: # we've been told that the rest of the data in this share is # unusable, so don't add it to the servermap. hunk ./src/allmydata/mutable/servermap.py 808 self.versionmap.add(verinfo, (shnum, peerid, timestamp)) return verinfo + def _deserialize_pubkey(self, pubkey_s): verifier = rsa.create_verifying_key_from_string(pubkey_s) return verifier hunk ./src/allmydata/mutable/servermap.py 813 - def _try_to_extract_privkey(self, data, peerid, shnum, lp): - try: - r = unpack_share(data) - except NeedMoreDataError, e: - # this share won't help us. oh well. - offset = e.encprivkey_offset - length = e.encprivkey_length - self.log("shnum %d on peerid %s: share was too short (%dB) " - "to get the encprivkey; [%d:%d] ought to hold it" % - (shnum, idlib.shortnodeid_b2a(peerid), len(data), - offset, offset+length), - parent=lp) - # NOTE: if uncoordinated writes are taking place, someone might - # change the share (and most probably move the encprivkey) before - # we get a chance to do one of these reads and fetch it. This - # will cause us to see a NotEnoughSharesError(unable to fetch - # privkey) instead of an UncoordinatedWriteError . This is a - # nuisance, but it will go away when we move to DSA-based mutable - # files (since the privkey will be small enough to fit in the - # write cap). - - return - - (seqnum, root_hash, IV, k, N, segsize, datalen, - pubkey, signature, share_hash_chain, block_hash_tree, - share_data, enc_privkey) = r - - return self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp) def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp): hunk ./src/allmydata/mutable/servermap.py 815 - + """ + Given a writekey from a remote server, I validate it against the + writekey stored in my node. If it is valid, then I set the + privkey and encprivkey properties of the node. 
+ """ alleged_privkey_s = self._node._decrypt_privkey(enc_privkey) alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s) if alleged_writekey != self._node.get_writekey(): hunk ./src/allmydata/mutable/servermap.py 892 self._queries_completed += 1 self._last_failure = f - def _got_privkey_results(self, datavs, peerid, shnum, started, lp): - now = time.time() - elapsed = now - started - self._status.add_per_server_time(peerid, "privkey", started, elapsed) - self._queries_outstanding.discard(peerid) - if not self._need_privkey: - return - if shnum not in datavs: - self.log("privkey wasn't there when we asked it", - level=log.WEIRD, umid="VA9uDQ") - return - datav = datavs[shnum] - enc_privkey = datav[0] - self._try_to_validate_privkey(enc_privkey, peerid, shnum, lp) def _privkey_query_failed(self, f, peerid, shnum, lp): self._queries_outstanding.discard(peerid) hunk ./src/allmydata/mutable/servermap.py 906 self._servermap.problems.append(f) self._last_failure = f + def _check_for_done(self, res): # exit paths: # return self._send_more_queries(outstanding) : send some more queries hunk ./src/allmydata/mutable/servermap.py 912 # return self._done() : all done # return : keep waiting, no new queries - lp = self.log(format=("_check_for_done, mode is '%(mode)s', " "%(outstanding)d queries outstanding, " "%(extra)d extra peers available, " hunk ./src/allmydata/mutable/servermap.py 1117 self._servermap.last_update_time = self._started # the servermap will not be touched after this self.log("servermap: %s" % self._servermap.summarize_versions()) + eventually(self._done_deferred.callback, self._servermap) def _fatal_error(self, f): hunk ./src/allmydata/test/test_mutable.py 637 d.addCallback(_created) return d - def publish_multiple(self): + def publish_mdmf(self): + # like publish_one, except that the result is guaranteed to be + # an MDMF file. + # self.CONTENTS should have more than one segment. + self.CONTENTS = "This is an MDMF file" * 100000 + self._storage = FakeStorage() + self._nodemaker = make_nodemaker(self._storage) + self._storage_broker = self._nodemaker.storage_broker + d = self._nodemaker.create_mutable_file(self.CONTENTS, version=1) + def _created(node): + self._fn = node + self._fn2 = self._nodemaker.create_from_cap(node.get_uri()) + d.addCallback(_created) + return d + + + def publish_sdmf(self): + # like publish_one, except that the result is guaranteed to be + # an SDMF file + self.CONTENTS = "This is an SDMF file" * 1000 + self._storage = FakeStorage() + self._nodemaker = make_nodemaker(self._storage) + self._storage_broker = self._nodemaker.storage_broker + d = self._nodemaker.create_mutable_file(self.CONTENTS, version=0) + def _created(node): + self._fn = node + self._fn2 = self._nodemaker.create_from_cap(node.get_uri()) + d.addCallback(_created) + return d + + + def publish_multiple(self, version=0): self.CONTENTS = ["Contents 0", "Contents 1", "Contents 2", hunk ./src/allmydata/test/test_mutable.py 677 self._copied_shares = {} self._storage = FakeStorage() self._nodemaker = make_nodemaker(self._storage) - d = self._nodemaker.create_mutable_file(self.CONTENTS[0]) # seqnum=1 + d = self._nodemaker.create_mutable_file(self.CONTENTS[0], version=version) # seqnum=1 def _created(node): self._fn = node # now create multiple versions of the same file, and accumulate hunk ./src/allmydata/test/test_mutable.py 906 return d + def test_servermapupdater_finds_mdmf_files(self): + # setUp already published an MDMF file for us. 
We just need to + # make sure that when we run the ServermapUpdater, the file is + # reported to have one recoverable version. + d = defer.succeed(None) + d.addCallback(lambda ignored: + self.publish_mdmf()) + d.addCallback(lambda ignored: + self.make_servermap(mode=MODE_CHECK)) + # Calling make_servermap also updates the servermap in the mode + # that we specify, so we just need to see what it says. + def _check_servermap(sm): + self.failUnlessEqual(len(sm.recoverable_versions()), 1) + d.addCallback(_check_servermap) + return d + + + def test_servermapupdater_finds_sdmf_files(self): + d = defer.succeed(None) + d.addCallback(lambda ignored: + self.publish_sdmf()) + d.addCallback(lambda ignored: + self.make_servermap(mode=MODE_CHECK)) + d.addCallback(lambda servermap: + self.failUnlessEqual(len(servermap.recoverable_versions()), 1)) + return d + class Roundtrip(unittest.TestCase, testutil.ShouldFailMixin, PublishMixin): def setUp(self): hunk ./src/allmydata/test/test_mutable.py 1050 return d test_no_servers_download.timeout = 15 + def _test_corrupt_all(self, offset, substring, should_succeed=False, corrupt_early=True, failure_checker=None): } [Make a segmented mutable uploader Kevan Carstensen **20100626234204 Ignore-this: d199af8ab0bc64d8ed2bc19c5437bfba The mutable file uploader should be able to publish files with one segment and files with multiple segments. This patch makes it do that. This is still incomplete, and rather ugly -- I need to flesh out error handling, I need to write tests, and I need to remove some of the uglier kludges in the process before I can call this done. ] { hunk ./src/allmydata/mutable/publish.py 8 from zope.interface import implements from twisted.internet import defer from twisted.python import failure -from allmydata.interfaces import IPublishStatus +from allmydata.interfaces import IPublishStatus, SDMF_VERSION, MDMF_VERSION from allmydata.util import base32, hashutil, mathutil, idlib, log from allmydata import hashtree, codec from allmydata.storage.server import si_b2a hunk ./src/allmydata/mutable/publish.py 19 UncoordinatedWriteError, NotEnoughServersError from allmydata.mutable.servermap import ServerMap from allmydata.mutable.layout import pack_prefix, pack_share, unpack_header, pack_checkstring, \ - unpack_checkstring, SIGNED_PREFIX + unpack_checkstring, SIGNED_PREFIX, MDMFSlotWriteProxy + +KiB = 1024 +DEFAULT_MAX_SEGMENT_SIZE = 128 * KiB class PublishStatus: implements(IPublishStatus) hunk ./src/allmydata/mutable/publish.py 112 self._status.set_helper(False) self._status.set_progress(0.0) self._status.set_active(True) + # We use this to control how the file is written. + version = self._node.get_version() + assert version in (SDMF_VERSION, MDMF_VERSION) + self._version = version def get_status(self): return self._status hunk ./src/allmydata/mutable/publish.py 134 simultaneous write. """ - # 1: generate shares (SDMF: files are small, so we can do it in RAM) - # 2: perform peer selection, get candidate servers - # 2a: send queries to n+epsilon servers, to determine current shares - # 2b: based upon responses, create target map - # 3: send slot_testv_and_readv_and_writev messages - # 4: as responses return, update share-dispatch table - # 4a: may need to run recovery algorithm - # 5: when enough responses are back, we're done + # 0. Setup encoding parameters, encoder, and other such things. + # 1. Encrypt, encode, and publish segments. 
self.log("starting publish, datalen is %s" % len(newdata)) self._status.set_size(len(newdata)) hunk ./src/allmydata/mutable/publish.py 187 self.bad_peers = set() # peerids who have errbacked/refused requests self.newdata = newdata - self.salt = os.urandom(16) hunk ./src/allmydata/mutable/publish.py 188 + # This will set self.segment_size, self.num_segments, and + # self.fec. self.setup_encoding_parameters() # if we experience any surprises (writes which were rejected because hunk ./src/allmydata/mutable/publish.py 238 self.bad_share_checkstrings[key] = old_checkstring self.connections[peerid] = self._servermap.connections[peerid] - # create the shares. We'll discard these as they are delivered. SDMF: - # we're allowed to hold everything in memory. + # Now, the process dovetails -- if this is an SDMF file, we need + # to write an SDMF file. Otherwise, we need to write an MDMF + # file. + if self._version == MDMF_VERSION: + return self._publish_mdmf() + else: + return self._publish_sdmf() + #return self.done_deferred + + def _publish_mdmf(self): + # Next, we find homes for all of the shares that we don't have + # homes for yet. + # TODO: Make this part do peer selection. + self.update_goal() + self.writers = {} + # For each (peerid, shnum) in self.goal, we make an + # MDMFSlotWriteProxy for that peer. We'll use this to write + # shares to the peer. + for key in self.goal: + peerid, shnum = key + write_enabler = self._node.get_write_enabler(peerid) + renew_secret = self._node.get_renewal_secret(peerid) + cancel_secret = self._node.get_cancel_secret(peerid) + secrets = (write_enabler, renew_secret, cancel_secret) + + self.writers[shnum] = MDMFSlotWriteProxy(shnum, + self.connections[peerid], + self._storage_index, + secrets, + self._new_seqnum, + self.required_shares, + self.total_shares, + self.segment_size, + len(self.newdata)) + if (peerid, shnum) in self._servermap.servermap: + old_versionid, old_timestamp = self._servermap.servermap[key] + (old_seqnum, old_root_hash, old_salt, old_segsize, + old_datalength, old_k, old_N, old_prefix, + old_offsets_tuple) = old_versionid + self.writers[shnum].set_checkstring(old_seqnum, old_root_hash) + + # Now, we start pushing shares. + self._status.timings["setup"] = time.time() - self._started + def _start_pushing(res): + self._started_pushing = time.time() + return res + + # First, we encrypt, encode, and publish the shares that we need + # to encrypt, encode, and publish. + + # This will eventually hold the block hash chain for each share + # that we publish. We define it this way so that empty publishes + # will still have something to write to the remote slot. 
+ self.blockhashes = dict([(i, []) for i in xrange(self.total_shares)]) + self.sharehash_leaves = None # eventually [sharehashes] + self.sharehashes = {} # shnum -> [sharehash leaves necessary to + # validate the share] hunk ./src/allmydata/mutable/publish.py 296 + d = defer.succeed(None) + self.log("Starting push") + for i in xrange(self.num_segments - 1): + d.addCallback(lambda ignored, i=i: + self.push_segment(i)) + d.addCallback(self._turn_barrier) + # We have at least one segment, so we will have a tail segment + if self.num_segments > 0: + d.addCallback(lambda ignored: + self.push_tail_segment()) + + d.addCallback(lambda ignored: + self.push_encprivkey()) + d.addCallback(lambda ignored: + self.push_blockhashes()) + d.addCallback(lambda ignored: + self.push_sharehashes()) + d.addCallback(lambda ignored: + self.push_toplevel_hashes_and_signature()) + d.addCallback(lambda ignored: + self.finish_publishing()) + return d + + + def _publish_sdmf(self): self._status.timings["setup"] = time.time() - self._started hunk ./src/allmydata/mutable/publish.py 322 + self.salt = os.urandom(16) + d = self._encrypt_and_encode() d.addCallback(self._generate_shares) def _start_pushing(res): hunk ./src/allmydata/mutable/publish.py 335 return self.done_deferred + def setup_encoding_parameters(self): hunk ./src/allmydata/mutable/publish.py 337 - segment_size = len(self.newdata) + if self._version == MDMF_VERSION: + segment_size = DEFAULT_MAX_SEGMENT_SIZE # 128 KiB by default + else: + segment_size = len(self.newdata) # SDMF is only one segment # this must be a multiple of self.required_shares segment_size = mathutil.next_multiple(segment_size, self.required_shares) hunk ./src/allmydata/mutable/publish.py 350 segment_size) else: self.num_segments = 0 - assert self.num_segments in [0, 1,] # SDMF restrictions + if self._version == SDMF_VERSION: + assert self.num_segments in (0, 1) # SDMF + return + # calculate the tail segment size. + self.tail_segment_size = len(self.newdata) % segment_size + + if self.tail_segment_size == 0: + # The tail segment is the same size as the other segments. + self.tail_segment_size = segment_size + + # We'll make an encoder ahead-of-time for the normal-sized + # segments (defined as any segment of segment_size size. 
+ # (the part of the code that puts the tail segment will make its + # own encoder for that part) + fec = codec.CRSEncoder() + fec.set_params(self.segment_size, + self.required_shares, self.total_shares) + self.piece_size = fec.get_block_size() + self.fec = fec + + + def push_segment(self, segnum): + started = time.time() + segsize = self.segment_size + self.log("Pushing segment %d of %d" % (segnum + 1, self.num_segments)) + data = self.newdata[segsize * segnum:segsize*(segnum + 1)] + assert len(data) == segsize + + salt = os.urandom(16) + + key = hashutil.ssk_readkey_data_hash(salt, self.readkey) + enc = AES(key) + crypttext = enc.process(data) + assert len(crypttext) == len(data) + + now = time.time() + self._status.timings["encrypt"] = now - started + started = now + + # now apply FEC + + self._status.set_status("Encoding") + crypttext_pieces = [None] * self.required_shares + piece_size = self.piece_size + for i in range(len(crypttext_pieces)): + offset = i * piece_size + piece = crypttext[offset:offset+piece_size] + piece = piece + "\x00"*(piece_size - len(piece)) # padding + crypttext_pieces[i] = piece + assert len(piece) == piece_size + d = self.fec.encode(crypttext_pieces) + def _done_encoding(res): + elapsed = time.time() - started + self._status.timings["encode"] = elapsed + return res + d.addCallback(_done_encoding) + + def _push_shares_and_salt(results): + shares, shareids = results + dl = [] + for i in xrange(len(shares)): + sharedata = shares[i] + shareid = shareids[i] + block_hash = hashutil.block_hash(salt + sharedata) + self.blockhashes[shareid].append(block_hash) + + # find the writer for this share + d = self.writers[shareid].put_block(sharedata, segnum, salt) + dl.append(d) + # TODO: Naturally, we need to check on the results of these. + return defer.DeferredList(dl) + d.addCallback(_push_shares_and_salt) + return d + + + def push_tail_segment(self): + # This is essentially the same as push_segment, except that we + # don't use the cached encoder that we use elsewhere. + self.log("Pushing tail segment") + started = time.time() + segsize = self.segment_size + data = self.newdata[segsize * (self.num_segments-1):] + assert len(data) == self.tail_segment_size + salt = os.urandom(16) + + key = hashutil.ssk_readkey_data_hash(salt, self.readkey) + enc = AES(key) + crypttext = enc.process(data) + assert len(crypttext) == len(data) + + now = time.time() + self._status.timings['encrypt'] = now - started + started = now + + self._status.set_status("Encoding") + tail_fec = codec.CRSEncoder() + tail_fec.set_params(self.tail_segment_size, + self.required_shares, + self.total_shares) + + crypttext_pieces = [None] * self.required_shares + piece_size = tail_fec.get_block_size() + for i in range(len(crypttext_pieces)): + offset = i * piece_size + piece = crypttext[offset:offset+piece_size] + piece = piece + "\x00"*(piece_size - len(piece)) # padding + crypttext_pieces[i] = piece + assert len(piece) == piece_size + d = tail_fec.encode(crypttext_pieces) + def _push_shares_and_salt(results): + shares, shareids = results + dl = [] + for i in xrange(len(shares)): + sharedata = shares[i] + shareid = shareids[i] + block_hash = hashutil.block_hash(salt + sharedata) + self.blockhashes[shareid].append(block_hash) + # find the writer for this share + d = self.writers[shareid].put_block(sharedata, + self.num_segments - 1, + salt) + dl.append(d) + # TODO: Naturally, we need to check on the results of these. 
+ return defer.DeferredList(dl) + d.addCallback(_push_shares_and_salt) + return d + + + def push_encprivkey(self): + started = time.time() + encprivkey = self._encprivkey + dl = [] + def _spy_on_writer(results): + print results + return results + for shnum, writer in self.writers.iteritems(): + d = writer.put_encprivkey(encprivkey) + dl.append(d) + d = defer.DeferredList(dl) + return d + + + def push_blockhashes(self): + started = time.time() + dl = [] + def _spy_on_results(results): + print results + return results + self.sharehash_leaves = [None] * len(self.blockhashes) + for shnum, blockhashes in self.blockhashes.iteritems(): + t = hashtree.HashTree(blockhashes) + self.blockhashes[shnum] = list(t) + # set the leaf for future use. + self.sharehash_leaves[shnum] = t[0] + d = self.writers[shnum].put_blockhashes(self.blockhashes[shnum]) + dl.append(d) + d = defer.DeferredList(dl) + return d + + + def push_sharehashes(self): + share_hash_tree = hashtree.HashTree(self.sharehash_leaves) + share_hash_chain = {} + ds = [] + def _spy_on_results(results): + print results + return results + for shnum in xrange(len(self.sharehash_leaves)): + needed_indices = share_hash_tree.needed_hashes(shnum) + self.sharehashes[shnum] = dict( [ (i, share_hash_tree[i]) + for i in needed_indices] ) + d = self.writers[shnum].put_sharehashes(self.sharehashes[shnum]) + ds.append(d) + self.root_hash = share_hash_tree[0] + d = defer.DeferredList(ds) + return d + + + def push_toplevel_hashes_and_signature(self): + # We need to to three things here: + # - Push the root hash and salt hash + # - Get the checkstring of the resulting layout; sign that. + # - Push the signature + ds = [] + def _spy_on_results(results): + print results + return results + for shnum in xrange(self.total_shares): + d = self.writers[shnum].put_root_hash(self.root_hash) + ds.append(d) + d = defer.DeferredList(ds) + def _make_and_place_signature(ignored): + signable = self.writers[0].get_signable() + self.signature = self._privkey.sign(signable) + + ds = [] + for (shnum, writer) in self.writers.iteritems(): + d = writer.put_signature(self.signature) + ds.append(d) + return defer.DeferredList(ds) + d.addCallback(_make_and_place_signature) + return d + + + def finish_publishing(self): + # We're almost done -- we just need to put the verification key + # and the offsets + ds = [] + verification_key = self._pubkey.serialize() + + def _spy_on_results(results): + print results + return results + for (shnum, writer) in self.writers.iteritems(): + d = writer.put_verification_key(verification_key) + d.addCallback(lambda ignored, writer=writer: + writer.finish_publishing()) + ds.append(d) + return defer.DeferredList(ds) + + + def _turn_barrier(self, res): + # putting this method in a Deferred chain imposes a guaranteed + # reactor turn between the pre- and post- portions of that chain. + # This can be useful to limit memory consumption: since Deferreds do + # not do tail recursion, code which uses defer.succeed(result) for + # consistency will cause objects to live for longer than you might + # normally expect. + return fireEventually(res) + def _fatal_error(self, f): self.log("error during loop", failure=f, level=log.UNUSUAL) hunk ./src/allmydata/mutable/publish.py 716 self.log_goal(self.goal, "after update: ") - def _encrypt_and_encode(self): # this returns a Deferred that fires with a list of (sharedata, # sharenum) tuples. 
TODO: cache the ciphertext, only produce the hunk ./src/allmydata/mutable/publish.py 757 d.addCallback(_done_encoding) return d + def _generate_shares(self, shares_and_shareids): # this sets self.shares and self.root_hash self.log("_generate_shares") hunk ./src/allmydata/mutable/publish.py 1145 self._status.set_progress(1.0) eventually(self.done_deferred.callback, res) - hunk ./src/allmydata/test/test_mutable.py 248 d.addCallback(_created) return d + + def test_create_mdmf(self): + d = self.nodemaker.create_mutable_file(version=MDMF_VERSION) + def _created(n): + self.failUnless(isinstance(n, MutableFileNode)) + self.failUnlessEqual(n.get_storage_index(), n._storage_index) + sb = self.nodemaker.storage_broker + peer0 = sorted(sb.get_all_serverids())[0] + shnums = self._storage._peers[peer0].keys() + self.failUnlessEqual(len(shnums), 1) + d.addCallback(_created) + return d + + def test_serialize(self): n = MutableFileNode(None, None, {"k": 3, "n": 10}, None) calls = [] hunk ./src/allmydata/test/test_mutable.py 334 d.addCallback(_created) return d + + def test_create_mdmf_with_initial_contents(self): + initial_contents = "foobarbaz" * 131072 # 900KiB + d = self.nodemaker.create_mutable_file(initial_contents, + version=MDMF_VERSION) + def _created(n): + d = n.download_best_version() + d.addCallback(lambda data: + self.failUnlessEqual(data, initial_contents)) + d.addCallback(lambda ignored: + n.overwrite(initial_contents + "foobarbaz")) + d.addCallback(lambda ignored: + n.download_best_version()) + d.addCallback(lambda data: + self.failUnlessEqual(data, initial_contents + + "foobarbaz")) + return d + d.addCallback(_created) + return d + + def test_create_with_initial_contents_function(self): data = "initial contents" def _make_contents(n): hunk ./src/allmydata/test/test_mutable.py 370 d.addCallback(lambda data2: self.failUnlessEqual(data2, data)) return d + + def test_create_mdmf_with_initial_contents_function(self): + data = "initial contents" * 100000 + def _make_contents(n): + self.failUnless(isinstance(n, MutableFileNode)) + key = n.get_writekey() + self.failUnless(isinstance(key, str), key) + self.failUnlessEqual(len(key), 16) + return data + d = self.nodemaker.create_mutable_file(_make_contents, + version=MDMF_VERSION) + d.addCallback(lambda n: + n.download_best_version()) + d.addCallback(lambda data2: + self.failUnlessEqual(data2, data)) + return d + + def test_create_with_too_large_contents(self): BIG = "a" * (self.OLD_MAX_SEGMENT_SIZE + 1) d = self.nodemaker.create_mutable_file(BIG) } [Write a segmented mutable downloader Kevan Carstensen **20100626234314 Ignore-this: d2bef531cde1b5c38f2eb28afdd4b17c The segmented mutable downloader can deal with MDMF files (files with one or more segments in MDMF format) and SDMF files (files with one segment in SDMF format). It is backwards compatible with the old file format. This patch also contains tests for the segmented mutable downloader. 
] { hunk ./src/allmydata/mutable/retrieve.py 8 from twisted.internet import defer from twisted.python import failure from foolscap.api import DeadReferenceError, eventually, fireEventually -from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError -from allmydata.util import hashutil, idlib, log +from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError, \ + MDMF_VERSION, SDMF_VERSION +from allmydata.util import hashutil, idlib, log, mathutil from allmydata import hashtree, codec from allmydata.storage.server import si_b2a from pycryptopp.cipher.aes import AES hunk ./src/allmydata/mutable/retrieve.py 17 from pycryptopp.publickey import rsa from allmydata.mutable.common import DictOfSets, CorruptShareError, UncoordinatedWriteError -from allmydata.mutable.layout import SIGNED_PREFIX, unpack_share_data +from allmydata.mutable.layout import SIGNED_PREFIX, unpack_share_data, \ + MDMFSlotReadProxy class RetrieveStatus: implements(IRetrieveStatus) hunk ./src/allmydata/mutable/retrieve.py 104 self.verinfo = verinfo # during repair, we may be called upon to grab the private key, since # it wasn't picked up during a verify=False checker run, and we'll - # need it for repair to generate the a new version. + # need it for repair to generate a new version. self._need_privkey = fetch_privkey if self._node.get_privkey(): self._need_privkey = False hunk ./src/allmydata/mutable/retrieve.py 109 + if self._need_privkey: + # TODO: Evaluate the need for this. We'll use it if we want + # to limit how many queries are on the wire for the privkey + # at once. + self._privkey_query_markers = [] # one Marker for each time we've + # tried to get the privkey. + self._status = RetrieveStatus() self._status.set_storage_index(self._storage_index) self._status.set_helper(False) hunk ./src/allmydata/mutable/retrieve.py 125 offsets_tuple) = self.verinfo self._status.set_size(datalength) self._status.set_encoding(k, N) + self.readers = {} def get_status(self): return self._status hunk ./src/allmydata/mutable/retrieve.py 149 self.remaining_sharemap = DictOfSets() for (shnum, peerid, timestamp) in shares: self.remaining_sharemap.add(shnum, peerid) + # If the servermap update fetched anything, it fetched at least 1 + # KiB, so we ask for that much. + # TODO: Change the cache methods to allow us to fetch all of the + # data that they have, then change this method to do that. + any_cache, timestamp = self._node._read_from_cache(self.verinfo, + shnum, + 0, + 1000) + ss = self.servermap.connections[peerid] + reader = MDMFSlotReadProxy(ss, + self._storage_index, + shnum, + any_cache) + reader.peerid = peerid + self.readers[shnum] = reader + self.shares = {} # maps shnum to validated blocks hunk ./src/allmydata/mutable/retrieve.py 167 + self._active_readers = [] # list of active readers for this dl. + self._validated_readers = set() # set of readers that we have + # validated the prefix of + self._block_hash_trees = {} # shnum => hashtree + # TODO: Make this into a file-backed consumer or something to + # conserve memory. + self._plaintext = "" # how many shares do we need? 
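The reader setup at the top of the hunk above reduces to one MDMFSlotReadProxy per share, keyed by share number. A condensed sketch of that bookkeeping (the empty-string cache argument is a simplification of the any_cache handling shown above):

def build_readers(self, shares):
    # shares is the [(shnum, peerid, timestamp)] list from the servermap,
    # as iterated above; a later entry for a shnum replaces the earlier one.
    readers = {}
    for (shnum, peerid, timestamp) in shares:
        ss = self.servermap.connections[peerid]
        reader = MDMFSlotReadProxy(ss, self._storage_index, shnum, "")
        reader.peerid = peerid   # remembered so failures can be attributed
        readers[shnum] = reader
    return readers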
hunk ./src/allmydata/mutable/retrieve.py 176 - (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, + (seqnum, + root_hash, + IV, + segsize, + datalength, + k, + N, + prefix, offsets_tuple) = self.verinfo hunk ./src/allmydata/mutable/retrieve.py 185 - assert len(self.remaining_sharemap) >= k - # we start with the lowest shnums we have available, since FEC is - # faster if we're using "primary shares" - self.active_shnums = set(sorted(self.remaining_sharemap.keys())[:k]) - for shnum in self.active_shnums: - # we use an arbitrary peer who has the share. If shares are - # doubled up (more than one share per peer), we could make this - # run faster by spreading the load among multiple peers. But the - # algorithm to do that is more complicated than I want to write - # right now, and a well-provisioned grid shouldn't have multiple - # shares per peer. - peerid = list(self.remaining_sharemap[shnum])[0] - self.get_data(shnum, peerid) hunk ./src/allmydata/mutable/retrieve.py 186 - # control flow beyond this point: state machine. Receiving responses - # from queries is the input. We might send out more queries, or we - # might produce a result. hunk ./src/allmydata/mutable/retrieve.py 187 + # We need one share hash tree for the entire file; its leaves + # are the roots of the block hash trees for the shares that + # comprise it, and its root is in the verinfo. + self.share_hash_tree = hashtree.IncompleteHashTree(N) + self.share_hash_tree.set_hashes({0: root_hash}) + + # This will set up both the segment decoder and the tail segment + # decoder, as well as a variety of other instance variables that + # the download process will use. + self._setup_encoding_parameters() + assert len(self.remaining_sharemap) >= k + + self.log("starting download") + self._add_active_peers() + # The download process beyond this is a state machine. + # _add_active_peers will select the peers that we want to use + # for the download, and then attempt to start downloading. After + # each segment, it will check for doneness, reacting to broken + # peers and corrupt shares as necessary. If it runs out of good + # peers before downloading all of the segments, _done_deferred + # will errback. Otherwise, it will eventually callback with the + # contents of the mutable file. return self._done_deferred hunk ./src/allmydata/mutable/retrieve.py 211 - def get_data(self, shnum, peerid): - self.log(format="sending sh#%(shnum)d request to [%(peerid)s]", - shnum=shnum, - peerid=idlib.shortnodeid_b2a(peerid), - level=log.NOISY) - ss = self.servermap.connections[peerid] - started = time.time() - (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, + + def _setup_encoding_parameters(self): + """ + I set up the encoding parameters, including k, n, the number + of segments associated with this file, and the segment decoder. 
+ """ + (seqnum, + root_hash, + IV, + segsize, + datalength, + k, + n, + known_prefix, offsets_tuple) = self.verinfo hunk ./src/allmydata/mutable/retrieve.py 226 - offsets = dict(offsets_tuple) + self._required_shares = k + self._total_shares = n + self._segment_size = segsize + self._data_length = datalength + + if not IV: + self._version = MDMF_VERSION + else: + self._version = SDMF_VERSION + + if datalength and segsize: + self._num_segments = mathutil.div_ceil(datalength, segsize) + self._tail_data_size = datalength % segsize + else: + self._num_segments = 0 + self._tail_data_size = 0 hunk ./src/allmydata/mutable/retrieve.py 243 - # we read the checkstring, to make sure that the data we grab is from - # the right version. - readv = [ (0, struct.calcsize(SIGNED_PREFIX)) ] + self._segment_decoder = codec.CRSDecoder() + self._segment_decoder.set_params(segsize, k, n) + self._current_segment = 0 hunk ./src/allmydata/mutable/retrieve.py 247 - # We also read the data, and the hashes necessary to validate them - # (share_hash_chain, block_hash_tree, share_data). We don't read the - # signature or the pubkey, since that was handled during the - # servermap phase, and we'll be comparing the share hash chain - # against the roothash that was validated back then. + if not self._tail_data_size: + self._tail_data_size = segsize hunk ./src/allmydata/mutable/retrieve.py 250 - readv.append( (offsets['share_hash_chain'], - offsets['enc_privkey'] - offsets['share_hash_chain'] ) ) + self._tail_segment_size = mathutil.next_multiple(self._tail_data_size, + self._required_shares) + if self._tail_segment_size == self._segment_size: + self._tail_decoder = self._segment_decoder + else: + self._tail_decoder = codec.CRSDecoder() + self._tail_decoder.set_params(self._tail_segment_size, + self._required_shares, + self._total_shares) hunk ./src/allmydata/mutable/retrieve.py 260 - # if we need the private key (for repair), we also fetch that - if self._need_privkey: - readv.append( (offsets['enc_privkey'], - offsets['EOF'] - offsets['enc_privkey']) ) + self.log("got encoding parameters: " + "k: %d " + "n: %d " + "%d segments of %d bytes each (%d byte tail segment)" % \ + (k, n, self._num_segments, self._segment_size, + self._tail_segment_size)) hunk ./src/allmydata/mutable/retrieve.py 267 - m = Marker() - self._outstanding_queries[m] = (peerid, shnum, started) + for i in xrange(self._total_shares): + # So we don't have to do this later. + self._block_hash_trees[i] = hashtree.IncompleteHashTree(self._num_segments) hunk ./src/allmydata/mutable/retrieve.py 271 - # ask the cache first - got_from_cache = False - datavs = [] - for (offset, length) in readv: - (data, timestamp) = self._node._read_from_cache(self.verinfo, shnum, - offset, length) - if data is not None: - datavs.append(data) - if len(datavs) == len(readv): - self.log("got data from cache") - got_from_cache = True - d = fireEventually({shnum: datavs}) - # datavs is a dict mapping shnum to a pair of strings - else: - d = self._do_read(ss, peerid, self._storage_index, [shnum], readv) - self.remaining_sharemap.discard(shnum, peerid) + # If we have more than one segment, we are an SDMF file, which + # means that we need to validate the salts as we receive them. + self._salt_hash_tree = hashtree.IncompleteHashTree(self._num_segments) + self._salt_hash_tree[0] = IV # from the prefix. 
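The segment arithmetic in _setup_encoding_parameters is easiest to see with concrete numbers. A standalone sketch, with div_ceil/next_multiple reimplemented to match their allmydata.util.mathutil counterparts and the file/segment sizes invented purely for illustration:

def div_ceil(n, d):
    # smallest integer >= n/d
    return (n + d - 1) // d

def next_multiple(n, k):
    # smallest multiple of k that is >= n
    return div_ceil(n, k) * k

datalength = 900 * 1024    # 900 KiB of plaintext
segsize    = 128 * 1024    # full-segment size
k          = 3             # required shares

num_segments      = div_ceil(datalength, segsize)       # 8 segments
tail_data_size    = datalength % segsize or segsize     # 4096 bytes
tail_segment_size = next_multiple(tail_data_size, k)    # 4098, so zfec gets
                                                        # equal-sized chunks
print num_segments, tail_data_size, tail_segment_size   # 8 4096 4098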
hunk ./src/allmydata/mutable/retrieve.py 276 - d.addCallback(self._got_results, m, peerid, started, got_from_cache) - d.addErrback(self._query_failed, m, peerid) - # errors that aren't handled by _query_failed (and errors caused by - # _query_failed) get logged, but we still want to check for doneness. - def _oops(f): - self.log(format="problem in _query_failed for sh#%(shnum)d to %(peerid)s", - shnum=shnum, - peerid=idlib.shortnodeid_b2a(peerid), - failure=f, - level=log.WEIRD, umid="W0xnQA") - d.addErrback(_oops) - d.addBoth(self._check_for_done) - # any error during _check_for_done means the download fails. If the - # download is successful, _check_for_done will fire _done by itself. - d.addErrback(self._done) - d.addErrback(log.err) - return d # purely for testing convenience hunk ./src/allmydata/mutable/retrieve.py 277 - def _do_read(self, ss, peerid, storage_index, shnums, readv): - # isolate the callRemote to a separate method, so tests can subclass - # Publish and override it - d = ss.callRemote("slot_readv", storage_index, shnums, readv) - return d + def _add_active_peers(self): + """ + I populate self._active_readers with enough active readers to + retrieve the contents of this mutable file. I am called before + downloading starts, and (eventually) after each validation + error, connection error, or other problem in the download. + """ + # TODO: It would be cool to investigate other heuristics for + # reader selection. For instance, the cost (in time the user + # spends waiting for their file) of selecting a really slow peer + # that happens to have a primary share is probably more than + # selecting a really fast peer that doesn't have a primary + # share. Maybe the servermap could be extended to provide this + # information; it could keep track of latency information while + # it gathers more important data, and then this routine could + # use that to select active readers. + # + # (these and other questions would be easier to answer with a + # robust, configurable tahoe-lafs simulator, which modeled node + # failures, differences in node speed, and other characteristics + # that we expect storage servers to have. You could have + # presets for really stable grids (like allmydata.com), + # friendnets, make it easy to configure your own settings, and + # then simulate the effect of big changes on these use cases + # instead of just reasoning about what the effect might be. Out + # of scope for MDMF, though.) hunk ./src/allmydata/mutable/retrieve.py 304 - def remove_peer(self, peerid): - for shnum in list(self.remaining_sharemap.keys()): - self.remaining_sharemap.discard(shnum, peerid) + # We need at least self._required_shares readers to download a + # segment. + needed = self._required_shares - len(self._active_readers) + # XXX: Why don't format= log messages work here? 
+ self.log("adding %d peers to the active peers list" % needed) hunk ./src/allmydata/mutable/retrieve.py 310 - def _got_results(self, datavs, marker, peerid, started, got_from_cache): - now = time.time() - elapsed = now - started - if not got_from_cache: - self._status.add_fetch_timing(peerid, elapsed) - self.log(format="got results (%(shares)d shares) from [%(peerid)s]", - shares=len(datavs), - peerid=idlib.shortnodeid_b2a(peerid), - level=log.NOISY) - self._outstanding_queries.pop(marker, None) - if not self._running: - return + # We favor lower numbered shares, since FEC is faster with + # primary shares than with other shares, and lower-numbered + # shares are more likely to be primary than higher numbered + # shares. + active_shnums = set(sorted(self.remaining_sharemap.keys())) + # We shouldn't consider adding shares that we already have; this + # will cause problems later. + active_shnums -= set([reader.shnum for reader in self._active_readers]) + active_shnums = list(active_shnums)[:needed] + if len(active_shnums) < needed: + # We don't have enough readers to retrieve the file; fail. + return self._failed() hunk ./src/allmydata/mutable/retrieve.py 323 - # note that we only ask for a single share per query, so we only - # expect a single share back. On the other hand, we use the extra - # shares if we get them.. seems better than an assert(). + for shnum in active_shnums: + self._active_readers.append(self.readers[shnum]) + self.log("added reader for share %d" % shnum) + assert len(self._active_readers) == self._required_shares + # Conceptually, this is part of the _add_active_peers step. It + # validates the prefixes of newly added readers to make sure + # that they match what we are expecting for self.verinfo. If + # validation is successful, _validate_active_prefixes will call + # _download_current_segment for us. If validation is + # unsuccessful, then _validate_prefixes will remove the peer and + # call _add_active_peers again, where we will attempt to rectify + # the problem by choosing another peer. + return self._validate_active_prefixes() hunk ./src/allmydata/mutable/retrieve.py 337 - for shnum,datav in datavs.items(): - (prefix, hash_and_data) = datav[:2] - try: - self._got_results_one_share(shnum, peerid, - prefix, hash_and_data) - except CorruptShareError, e: - # log it and give the other shares a chance to be processed - f = failure.Failure() - self.log(format="bad share: %(f_value)s", - f_value=str(f.value), failure=f, - level=log.WEIRD, umid="7fzWZw") - self.notify_server_corruption(peerid, shnum, str(e)) - self.remove_peer(peerid) - self.servermap.mark_bad_share(peerid, shnum, prefix) - self._bad_shares.add( (peerid, shnum) ) - self._status.problems[peerid] = f - self._last_failure = f - pass - if self._need_privkey and len(datav) > 2: - lp = None - self._try_to_validate_privkey(datav[2], peerid, shnum, lp) - # all done! hunk ./src/allmydata/mutable/retrieve.py 338 - def notify_server_corruption(self, peerid, shnum, reason): - ss = self.servermap.connections[peerid] - ss.callRemoteOnly("advise_corrupt_share", - "mutable", self._storage_index, shnum, reason) + def _validate_active_prefixes(self): + """ + I check to make sure that the prefixes on the peers that I am + currently reading from match the prefix that we want to see, as + said in self.verinfo. 
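The reader-selection heuristic described above (prefer low-numbered shares, since they are more likely to be primary; never reuse a share already being read; give up if fewer than k remain) fits in a few lines. A pure-Python sketch, separate from the patch's own code:

def pick_new_shnums(remaining_shnums, active_shnums, k):
    # remaining_shnums: share numbers the servermap still offers
    # active_shnums:    share numbers already assigned to active readers
    needed = k - len(active_shnums)
    candidates = sorted(set(remaining_shnums) - set(active_shnums))
    if len(candidates) < needed:
        return None            # not enough shares left; the download fails
    return candidates[:needed] # lowest share numbers first

# e.g. pick_new_shnums([0, 2, 5, 7], [2], 3) == [0, 5]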
hunk ./src/allmydata/mutable/retrieve.py 344 - def _got_results_one_share(self, shnum, peerid, - got_prefix, got_hash_and_data): - self.log("_got_results: got shnum #%d from peerid %s" - % (shnum, idlib.shortnodeid_b2a(peerid))) - (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, + If I find that all of the active peers have acceptable prefixes, + I pass control to _download_current_segment, which will use + those peers to do cool things. If I find that some of the active + peers have unacceptable prefixes, I will remove them from active + peers (and from further consideration) and call + _add_active_peers to attempt to rectify the situation. I keep + track of which peers I have already validated so that I don't + need to do so again. + """ + assert self._active_readers, "No more active readers" + + ds = [] + new_readers = set(self._active_readers) - self._validated_readers + self.log('validating %d newly-added active readers' % len(new_readers)) + + for reader in new_readers: + # We force a remote read here -- otherwise, we are relying + # on cached data that we already verified as valid, and we + # won't detect an uncoordinated write that has occurred + # since the last servermap update. + d = reader.get_prefix(force_remote=True) + d.addCallback(self._try_to_validate_prefix, reader) + ds.append(d) + dl = defer.DeferredList(ds, consumeErrors=True) + def _check_results(results): + # Each result in results will be of the form (success, msg). + # We don't care about msg, but success will tell us whether + # or not the checkstring validated. If it didn't, we need to + # remove the offending (peer,share) from our active readers, + # and ensure that active readers is again populated. + bad_readers = [] + for i, result in enumerate(results): + if not result[0]: + reader = self._active_readers[i] + f = result[1] + assert isinstance(f, failure.Failure) + + self.log("The reader %s failed to " + "properly validate: %s" % \ + (reader, str(f.value))) + bad_readers.append((reader, f)) + else: + reader = self._active_readers[i] + self.log("the reader %s checks out, so we'll use it" % \ + reader) + self._validated_readers.add(reader) + # Each time we validate a reader, we check to see if + # we need the private key. If we do, we politely ask + # for it and then continue computing. If we find + # that we haven't gotten it at the end of + # segment decoding, then we'll take more drastic + # measures. + if self._need_privkey: + d = reader.get_encprivkey() + d.addCallback(self._try_to_validate_privkey, reader) + if bad_readers: + # We do them all at once, or else we screw up list indexing. + for (reader, f) in bad_readers: + self._mark_bad_share(reader, f) + return self._add_active_peers() + else: + return self._download_current_segment() + # The next step will assert that it has enough active + # readers to fetch shares; we just need to remove it. + dl.addCallback(_check_results) + return dl + + + def _try_to_validate_prefix(self, prefix, reader): + """ + I check that the prefix returned by a candidate server for + retrieval matches the prefix that the servermap knows about + (and, hence, the prefix that was validated earlier). If it does, + I return True, which means that I approve of the use of the + candidate server for segment retrieval. If it doesn't, I return + False, which means that another server must be chosen. 
+ """ + (seqnum, + root_hash, + IV, + segsize, + datalength, + k, + N, + known_prefix, offsets_tuple) = self.verinfo hunk ./src/allmydata/mutable/retrieve.py 430 - assert len(got_prefix) == len(prefix), (len(got_prefix), len(prefix)) - if got_prefix != prefix: - msg = "someone wrote to the data since we read the servermap: prefix changed" - raise UncoordinatedWriteError(msg) - (share_hash_chain, block_hash_tree, - share_data) = unpack_share_data(self.verinfo, got_hash_and_data) + if known_prefix != prefix: + self.log("prefix from share %d doesn't match" % reader.shnum) + raise UncoordinatedWriteError("Mismatched prefix -- this could " + "indicate an uncoordinated write") + # Otherwise, we're okay -- no issues. hunk ./src/allmydata/mutable/retrieve.py 436 - assert isinstance(share_data, str) - # build the block hash tree. SDMF has only one leaf. - leaves = [hashutil.block_hash(share_data)] - t = hashtree.HashTree(leaves) - if list(t) != block_hash_tree: - raise CorruptShareError(peerid, shnum, "block hash tree failure") - share_hash_leaf = t[0] - t2 = hashtree.IncompleteHashTree(N) - # root_hash was checked by the signature - t2.set_hashes({0: root_hash}) - try: - t2.set_hashes(hashes=share_hash_chain, - leaves={shnum: share_hash_leaf}) - except (hashtree.BadHashError, hashtree.NotEnoughHashesError, - IndexError), e: - msg = "corrupt hashes: %s" % (e,) - raise CorruptShareError(peerid, shnum, msg) - self.log(" data valid! len=%d" % len(share_data)) - # each query comes down to this: placing validated share data into - # self.shares - self.shares[shnum] = share_data hunk ./src/allmydata/mutable/retrieve.py 437 - def _try_to_validate_privkey(self, enc_privkey, peerid, shnum, lp): + def _remove_reader(self, reader): + """ + At various points, we will wish to remove a peer from + consideration and/or use. These include, but are not necessarily + limited to: hunk ./src/allmydata/mutable/retrieve.py 443 - alleged_privkey_s = self._node._decrypt_privkey(enc_privkey) - alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s) - if alleged_writekey != self._node.get_writekey(): - self.log("invalid privkey from %s shnum %d" % - (idlib.nodeid_b2a(peerid)[:8], shnum), - parent=lp, level=log.WEIRD, umid="YIw4tA") - return + - A connection error. + - A mismatched prefix (that is, a prefix that does not match + our conception of the version information string). + - A failing block hash, salt hash, or share hash, which can + indicate disk failure/bit flips, or network trouble. hunk ./src/allmydata/mutable/retrieve.py 449 - # it's good - self.log("got valid privkey from shnum %d on peerid %s" % - (shnum, idlib.shortnodeid_b2a(peerid)), - parent=lp) - privkey = rsa.create_signing_key_from_string(alleged_privkey_s) - self._node._populate_encprivkey(enc_privkey) - self._node._populate_privkey(privkey) - self._need_privkey = False + This method will do that. I will make sure that the + (shnum,reader) combination represented by my reader argument is + not used for anything else during this download. I will not + advise the reader of any corruption, something that my callers + may wish to do on their own. + """ + # TODO: When you're done writing this, see if this is ever + # actually used for something that _mark_bad_share isn't. I have + # a feeling that they will be used for very similar things, and + # that having them both here is just going to be an epic amount + # of code duplication. 
+ # + # (well, okay, not epic, but meaningful) + self.log("removing reader %s" % reader) + # Remove the reader from _active_readers + self._active_readers.remove(reader) + # TODO: self.readers.remove(reader)? + for shnum in list(self.remaining_sharemap.keys()): + self.remaining_sharemap.discard(shnum, reader.peerid) hunk ./src/allmydata/mutable/retrieve.py 469 - def _query_failed(self, f, marker, peerid): - self.log(format="query to [%(peerid)s] failed", - peerid=idlib.shortnodeid_b2a(peerid), - level=log.NOISY) - self._status.problems[peerid] = f - self._outstanding_queries.pop(marker, None) - if not self._running: - return - self._last_failure = f - self.remove_peer(peerid) - level = log.WEIRD - if f.check(DeadReferenceError): - level = log.UNUSUAL - self.log(format="error during query: %(f_value)s", - f_value=str(f.value), failure=f, level=level, umid="gOJB5g") hunk ./src/allmydata/mutable/retrieve.py 470 - def _check_for_done(self, res): - # exit paths: - # return : keep waiting, no new queries - # return self._send_more_queries(outstanding) : send some more queries - # fire self._done(plaintext) : download successful - # raise exception : download fails + def _mark_bad_share(self, reader, f): + """ + I mark the (peerid, shnum) encapsulated by my reader argument as + a bad share, which means that it will not be used anywhere else. hunk ./src/allmydata/mutable/retrieve.py 475 - self.log(format="_check_for_done: running=%(running)s, decoding=%(decoding)s", - running=self._running, decoding=self._decoding, - level=log.NOISY) - if not self._running: - return - if self._decoding: - return - (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, - offsets_tuple) = self.verinfo + There are several reasons to want to mark something as a bad + share. These include: hunk ./src/allmydata/mutable/retrieve.py 478 - if len(self.shares) < k: - # we don't have enough shares yet - return self._maybe_send_more_queries(k) - if self._need_privkey: - # we got k shares, but none of them had a valid privkey. TODO: - # look further. Adding code to do this is a bit complicated, and - # I want to avoid that complication, and this should be pretty - # rare (k shares with bitflips in the enc_privkey but not in the - # data blocks). If we actually do get here, the subsequent repair - # will fail for lack of a privkey. - self.log("got k shares but still need_privkey, bummer", - level=log.WEIRD, umid="MdRHPA") + - A connection error to the peer. + - A mismatched prefix (that is, a prefix that does not match + our local conception of the version information string). + - A failing block hash, salt hash, share hash, or other + integrity check. hunk ./src/allmydata/mutable/retrieve.py 484 - # we have enough to finish. All the shares have had their hashes - # checked, so if something fails at this point, we don't know how - # to fix it, so the download will fail. + This method will ensure that readers that we wish to mark bad + (for these reasons or other reasons) are not used for the rest + of the download. Additionally, it will attempt to tell the + remote peer (with no guarantee of success) that its share is + corrupt. 
+ """ + self.log("marking share %d on server %s as bad" % \ + (reader.shnum, reader)) + self._remove_reader(reader) + self._bad_shares.add((reader.peerid, reader.shnum)) + self._status.problems[reader.peerid] = f + self._last_failure = f + self.notify_server_corruption(reader.peerid, reader.shnum, + str(f.value)) hunk ./src/allmydata/mutable/retrieve.py 499 - self._decoding = True # avoid reentrancy - self._status.set_status("decoding") - now = time.time() - elapsed = now - self._started - self._status.timings["fetch"] = elapsed hunk ./src/allmydata/mutable/retrieve.py 500 - d = defer.maybeDeferred(self._decode) - d.addCallback(self._decrypt, IV, self._node.get_readkey()) - d.addBoth(self._done) - return d # purely for test convenience + def _download_current_segment(self): + """ + I download, validate, decode, decrypt, and assemble the segment + that this Retrieve is currently responsible for downloading. + """ + assert len(self._active_readers) >= self._required_shares + if self._current_segment < self._num_segments: + d = self._process_segment(self._current_segment) + else: + d = defer.succeed(None) + d.addCallback(self._check_for_done) + return d hunk ./src/allmydata/mutable/retrieve.py 513 - def _maybe_send_more_queries(self, k): - # we don't have enough shares yet. Should we send out more queries? - # There are some number of queries outstanding, each for a single - # share. If we can generate 'needed_shares' additional queries, we do - # so. If we can't, then we know this file is a goner, and we raise - # NotEnoughSharesError. - self.log(format=("_maybe_send_more_queries, have=%(have)d, k=%(k)d, " - "outstanding=%(outstanding)d"), - have=len(self.shares), k=k, - outstanding=len(self._outstanding_queries), - level=log.NOISY) hunk ./src/allmydata/mutable/retrieve.py 514 - remaining_shares = k - len(self.shares) - needed = remaining_shares - len(self._outstanding_queries) - if not needed: - # we have enough queries in flight already + def _process_segment(self, segnum): + """ + I download, validate, decode, and decrypt one segment of the + file that this Retrieve is retrieving. This means coordinating + the process of getting k blocks of that file, validating them, + assembling them into one segment with the decoder, and then + decrypting them. + """ + self.log("processing segment %d" % segnum) hunk ./src/allmydata/mutable/retrieve.py 524 - # TODO: but if they've been in flight for a long time, and we - # have reason to believe that new queries might respond faster - # (i.e. we've seen other queries come back faster, then consider - # sending out new queries. This could help with peers which have - # silently gone away since the servermap was updated, for which - # we're still waiting for the 15-minute TCP disconnect to happen. - self.log("enough queries are in flight, no more are needed", - level=log.NOISY) - return + # TODO: The old code uses a marker. Should this code do that + # too? What did the Marker do? + assert len(self._active_readers) >= self._required_shares + + # We need to ask each of our active readers for its block and + # salt. We will then validate those. If validation is + # successful, we will assemble the results into plaintext. 
+ ds = [] + for reader in self._active_readers: + d = reader.get_block_and_salt(segnum, queue=True) + d2 = self._get_needed_hashes(reader, segnum) + dl = defer.DeferredList([d, d2], consumeErrors=True) + dl.addCallback(self._validate_block, segnum, reader) + dl.addErrback(self._validation_or_decoding_failed, [reader]) + ds.append(dl) + reader.flush() + dl = defer.DeferredList(ds) + dl.addCallback(self._maybe_decode_and_decrypt_segment, segnum) + return dl hunk ./src/allmydata/mutable/retrieve.py 544 - outstanding_shnums = set([shnum - for (peerid, shnum, started) - in self._outstanding_queries.values()]) - # prefer low-numbered shares, they are more likely to be primary - available_shnums = sorted(self.remaining_sharemap.keys()) - for shnum in available_shnums: - if shnum in outstanding_shnums: - # skip ones that are already in transit - continue - if shnum not in self.remaining_sharemap: - # no servers for that shnum. note that DictOfSets removes - # empty sets from the dict for us. - continue - peerid = list(self.remaining_sharemap[shnum])[0] - # get_data will remove that peerid from the sharemap, and add the - # query to self._outstanding_queries - self._status.set_status("Retrieving More Shares") - self.get_data(shnum, peerid) - needed -= 1 - if not needed: + + def _maybe_decode_and_decrypt_segment(self, blocks_and_salts, segnum): + """ + I take the results of fetching and validating the blocks from a + callback chain in another method. If the results are such that + they tell me that validation and fetching succeeded without + incident, I will proceed with decoding and decryption. + Otherwise, I will do nothing. + """ + self.log("trying to decode and decrypt segment %d" % segnum) + failures = False + for block_and_salt in blocks_and_salts: + if not block_and_salt[0] or block_and_salt[1] == None: + self.log("some validation operations failed; not proceeding") + failures = True break hunk ./src/allmydata/mutable/retrieve.py 560 + if not failures: + self.log("everything looks ok, building segment %d" % segnum) + d = self._decode_blocks(blocks_and_salts, segnum) + d.addCallback(self._decrypt_segment) + d.addErrback(self._validation_or_decoding_failed, + self._active_readers) + d.addCallback(self._set_segment) + return d + else: + return defer.succeed(None) + + + def _set_segment(self, segment): + """ + Given a plaintext segment, I register that segment with the + target that is handling the file download. + """ + self.log("got plaintext for segment %d" % self._current_segment) + self._plaintext += segment + self._current_segment += 1 hunk ./src/allmydata/mutable/retrieve.py 581 - # at this point, we have as many outstanding queries as we can. If - # needed!=0 then we might not have enough to recover the file. - if needed: - format = ("ran out of peers: " - "have %(have)d shares (k=%(k)d), " - "%(outstanding)d queries in flight, " - "need %(need)d more, " - "found %(bad)d bad shares") - args = {"have": len(self.shares), - "k": k, - "outstanding": len(self._outstanding_queries), - "need": needed, - "bad": len(self._bad_shares), - } - self.log(format=format, - level=log.WEIRD, umid="ezTfjw", **args) - err = NotEnoughSharesError("%s, last failure: %s" % - (format % args, self._last_failure)) - if self._bad_shares: - self.log("We found some bad shares this pass. 
You should " - "update the servermap and try again to check " - "more peers", - level=log.WEIRD, umid="EFkOlA") - err.servermap = self.servermap - raise err hunk ./src/allmydata/mutable/retrieve.py 582 + def _validation_or_decoding_failed(self, f, readers): + """ + I am called when a block or a salt fails to correctly validate, or when + the decryption or decoding operation fails for some reason. I react to + this failure by notifying the remote server of corruption, and then + removing the remote peer from further activity. + """ + assert isinstance(readers, list) + bad_shnums = [reader.shnum for reader in readers] + + self.log("validation or decoding failed on share(s) %s, peer(s) %s " + ", segment %d: %s" % \ + (bad_shnums, readers, self._current_segment, str(f))) + for reader in readers: + self._mark_bad_share(reader, f) return hunk ./src/allmydata/mutable/retrieve.py 599 - def _decode(self): - started = time.time() - (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, - offsets_tuple) = self.verinfo hunk ./src/allmydata/mutable/retrieve.py 600 - # shares_dict is a dict mapping shnum to share data, but the codec - # wants two lists. - shareids = []; shares = [] - for shareid, share in self.shares.items(): + def _validate_block(self, results, segnum, reader): + """ + I validate a block from one share on a remote server. + """ + # Grab the part of the block hash tree that is necessary to + # validate this block, then generate the block hash root. + self.log("validating share %d for segment %d" % (reader.shnum, + segnum)) + # Did we fail to fetch either of the things that we were + # supposed to? Fail if so. + if not results[0][0] and results[1][0]: + # handled by the errback handler. + + # These all get batched into one query, so the resulting + # failure should be the same for all of them, so we can just + # use the first one. + assert isinstance(results[0][1], failure.Failure) + + f = results[0][1] + raise CorruptShareError(reader.peerid, + reader.shnum, + "Connection error: %s" % str(f)) + + block_and_salt, block_and_sharehashes = results + block, salt = block_and_salt[1] + blockhashes, sharehashes = block_and_sharehashes[1] + + blockhashes = dict(enumerate(blockhashes[1])) + self.log("the reader gave me the following blockhashes: %s" % \ + blockhashes.keys()) + self.log("the reader gave me the following sharehashes: %s" % \ + sharehashes[1].keys()) + bht = self._block_hash_trees[reader.shnum] + + if bht.needed_hashes(segnum, include_leaf=True): + try: + bht.set_hashes(blockhashes) + except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \ + IndexError), e: + raise CorruptShareError(reader.peerid, + reader.shnum, + "block hash tree failure: %s" % e) + + if self._version == MDMF_VERSION: + blockhash = hashutil.block_hash(salt + block) + else: + blockhash = hashutil.block_hash(block) + # If this works without an error, then validation is + # successful. + try: + bht.set_hashes(leaves={segnum: blockhash}) + except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \ + IndexError), e: + raise CorruptShareError(reader.peerid, + reader.shnum, + "block hash tree failure: %s" % e) + + # Reaching this point means that we know that this segment + # is correct. Now we need to check to see whether the share + # hash chain is also correct. + # SDMF wrote share hash chains that didn't contain the + # leaves, which would be produced from the block hash tree. + # So we need to validate the block hash tree first. 
If + # successful, then bht[0] will contain the root for the + # shnum, which will be a leaf in the share hash tree, which + # will allow us to validate the rest of the tree. + if self.share_hash_tree.needed_hashes(reader.shnum, + include_leaf=True): + try: + self.share_hash_tree.set_hashes(hashes=sharehashes[1], + leaves={reader.shnum: bht[0]}) + except (hashtree.BadHashError, hashtree.NotEnoughHashesError, \ + IndexError), e: + raise CorruptShareError(reader.peerid, + reader.shnum, + "corrupt hashes: %s" % e) + + # TODO: Validate the salt, too. + self.log('share %d is valid for segment %d' % (reader.shnum, + segnum)) + return {reader.shnum: (block, salt)} + + + def _get_needed_hashes(self, reader, segnum): + """ + I get the hashes needed to validate segnum from the reader, then return + to my caller when this is done. + """ + bht = self._block_hash_trees[reader.shnum] + needed = bht.needed_hashes(segnum, include_leaf=True) + # The root of the block hash tree is also a leaf in the share + # hash tree. So we don't need to fetch it from the remote + # server. In the case of files with one segment, this means that + # we won't fetch any block hash tree from the remote server, + # since the hash of each share of the file is the entire block + # hash tree, and is a leaf in the share hash tree. This is fine, + # since any share corruption will be detected in the share hash + # tree. + #needed.discard(0) + self.log("getting blockhashes for segment %d, share %d: %s" % \ + (segnum, reader.shnum, str(needed))) + d1 = reader.get_blockhashes(needed, queue=True, force_remote=True) + if self.share_hash_tree.needed_hashes(reader.shnum): + need = self.share_hash_tree.needed_hashes(reader.shnum) + self.log("also need sharehashes for share %d: %s" % (reader.shnum, + str(need))) + d2 = reader.get_sharehashes(need, queue=True, force_remote=True) + else: + d2 = defer.succeed({}) # the logic in the next method + # expects a dict + dl = defer.DeferredList([d1, d2], consumeErrors=True) + return dl + + + def _decode_blocks(self, blocks_and_salts, segnum): + """ + I take a list of k blocks and salts, and decode that into a + single encrypted segment. + """ + d = {} + # We want to merge our dictionaries to the form + # {shnum: blocks_and_salts} + # + # The dictionaries come from validate block that way, so we just + # need to merge them. + for block_and_salt in blocks_and_salts: + d.update(block_and_salt[1]) + + # All of these blocks should have the same salt; in SDMF, it is + # the file-wide IV, while in MDMF it is the per-segment salt. In + # either case, we just need to get one of them and use it. + # + # d.items()[0] is like (shnum, (block, salt)) + # d.items()[0][1] is like (block, salt) + # d.items()[0][1][1] is the salt. + salt = d.items()[0][1][1] + # Next, extract just the blocks from the dict. We'll use the + # salt in the next step. 
+ share_and_shareids = [(k, v[0]) for k, v in d.items()] + d2 = dict(share_and_shareids) + shareids = [] + shares = [] + for shareid, share in d2.items(): shareids.append(shareid) shares.append(share) hunk ./src/allmydata/mutable/retrieve.py 746 - assert len(shareids) >= k, len(shareids) + assert len(shareids) >= self._required_shares, len(shareids) # zfec really doesn't want extra shares hunk ./src/allmydata/mutable/retrieve.py 748 - shareids = shareids[:k] - shares = shares[:k] - - fec = codec.CRSDecoder() - fec.set_params(segsize, k, N) - - self.log("params %s, we have %d shares" % ((segsize, k, N), len(shares))) - self.log("about to decode, shareids=%s" % (shareids,)) - d = defer.maybeDeferred(fec.decode, shares, shareids) - def _done(buffers): - self._status.timings["decode"] = time.time() - started - self.log(" decode done, %d buffers" % len(buffers)) + shareids = shareids[:self._required_shares] + shares = shares[:self._required_shares] + self.log("decoding segment %d" % segnum) + if segnum == self._num_segments - 1: + d = defer.maybeDeferred(self._tail_decoder.decode, shares, shareids) + else: + d = defer.maybeDeferred(self._segment_decoder.decode, shares, shareids) + def _process(buffers): segment = "".join(buffers) hunk ./src/allmydata/mutable/retrieve.py 757 + self.log(format="now decoding segment %(segnum)s of %(numsegs)s", + segnum=segnum, + numsegs=self._num_segments, + level=log.NOISY) self.log(" joined length %d, datalength %d" % hunk ./src/allmydata/mutable/retrieve.py 762 - (len(segment), datalength)) - segment = segment[:datalength] + (len(segment), self._data_length)) + if segnum == self._num_segments - 1: + size_to_use = self._tail_data_size + else: + size_to_use = self._segment_size + segment = segment[:size_to_use] self.log(" segment len=%d" % len(segment)) hunk ./src/allmydata/mutable/retrieve.py 769 - return segment - def _err(f): - self.log(" decode failed: %s" % f) - return f - d.addCallback(_done) - d.addErrback(_err) + return segment, salt + d.addCallback(_process) return d hunk ./src/allmydata/mutable/retrieve.py 773 - def _decrypt(self, crypttext, IV, readkey): + + def _decrypt_segment(self, segment_and_salt): + """ + I take a single segment and its salt, and decrypt it. I return + the plaintext of the segment that is in my argument. 
+ """ + segment, salt = segment_and_salt self._status.set_status("decrypting") hunk ./src/allmydata/mutable/retrieve.py 781 + self.log("decrypting segment %d" % self._current_segment) started = time.time() hunk ./src/allmydata/mutable/retrieve.py 783 - key = hashutil.ssk_readkey_data_hash(IV, readkey) + key = hashutil.ssk_readkey_data_hash(salt, self._node.get_readkey()) decryptor = AES(key) hunk ./src/allmydata/mutable/retrieve.py 785 - plaintext = decryptor.process(crypttext) + plaintext = decryptor.process(segment) self._status.timings["decrypt"] = time.time() - started return plaintext hunk ./src/allmydata/mutable/retrieve.py 789 - def _done(self, res): - if not self._running: + + def notify_server_corruption(self, peerid, shnum, reason): + ss = self.servermap.connections[peerid] + ss.callRemoteOnly("advise_corrupt_share", + "mutable", self._storage_index, shnum, reason) + + + def _try_to_validate_privkey(self, enc_privkey, reader): + + alleged_privkey_s = self._node._decrypt_privkey(enc_privkey) + alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s) + if alleged_writekey != self._node.get_writekey(): + self.log("invalid privkey from %s shnum %d" % + (reader, reader.shnum), + level=log.WEIRD, umid="YIw4tA") return hunk ./src/allmydata/mutable/retrieve.py 805 - self._running = False - self._status.set_active(False) - self._status.timings["total"] = time.time() - self._started - # res is either the new contents, or a Failure - if isinstance(res, failure.Failure): - self.log("Retrieve done, with failure", failure=res, - level=log.UNUSUAL) - self._status.set_status("Failed") - else: - self.log("Retrieve done, success!") - self._status.set_status("Finished") - self._status.set_progress(1.0) - # remember the encoding parameters, use them again next time - (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, - offsets_tuple) = self.verinfo - self._node._populate_required_shares(k) - self._node._populate_total_shares(N) - eventually(self._done_deferred.callback, res) hunk ./src/allmydata/mutable/retrieve.py 806 + # it's good + self.log("got valid privkey from shnum %d on reader %s" % + (reader.shnum, reader)) + privkey = rsa.create_signing_key_from_string(alleged_privkey_s) + self._node._populate_encprivkey(enc_privkey) + self._node._populate_privkey(privkey) + self._need_privkey = False + + + def _check_for_done(self, res): + """ + I check to see if this Retrieve object has successfully finished + its work. + + I can exit in the following ways: + - If there are no more segments to download, then I exit by + causing self._done_deferred to fire with the plaintext + content requested by the caller. + - If there are still segments to be downloaded, and there + are enough active readers (readers which have not broken + and have not given us corrupt data) to continue + downloading, I send control back to + _download_current_segment. + - If there are still segments to be downloaded but there are + not enough active peers to download them, I ask + _add_active_peers to add more peers. If it is successful, + it will call _download_current_segment. If there are not + enough peers to retrieve the file, then that will cause + _done_deferred to errback. + """ + self.log("checking for doneness") + if self._current_segment == self._num_segments: + # No more segments to download, we're done. 
+ self.log("got plaintext, done") + return self._done() + + if len(self._active_readers) >= self._required_shares: + # More segments to download, but we have enough good peers + # in self._active_readers that we can do that without issue, + # so go nab the next segment. + self.log("not done yet: on segment %d of %d" % \ + (self._current_segment + 1, self._num_segments)) + return self._download_current_segment() + + self.log("not done yet: on segment %d of %d, need to add peers" % \ + (self._current_segment + 1, self._num_segments)) + return self._add_active_peers() + + + def _done(self): + """ + I am called by _check_for_done when the download process has + finished successfully. After making some useful logging + statements, I return the decrypted contents to the owner of this + Retrieve object through self._done_deferred. + """ + eventually(self._done_deferred.callback, self._plaintext) + + + def _failed(self): + """ + I am called by _add_active_peers when there are not enough + active peers left to complete the download. After making some + useful logging statements, I return an exception to that effect + to the caller of this Retrieve object through + self._done_deferred. + """ + format = ("ran out of peers: " + "have %(have)d of %(total)d segments " + "found %(bad)d bad shares " + "encoding %(k)d-of-%(n)d") + args = {"have": self._current_segment, + "total": self._num_segments, + "k": self._required_shares, + "n": self._total_shares, + "bad": len(self._bad_shares)} + e = NotEnoughSharesError("%s, last failure: %s" % (format % args, + str(self._last_failure))) + f = failure.Failure(e) + eventually(self._done_deferred.callback, f) hunk ./src/allmydata/test/test_mutable.py 12 from allmydata.util.hashutil import tagged_hash, ssk_writekey_hash, \ ssk_pubkey_fingerprint_hash from allmydata.interfaces import IRepairResults, ICheckAndRepairResults, \ - NotEnoughSharesError + NotEnoughSharesError, SDMF_VERSION, MDMF_VERSION from allmydata.monitor import Monitor from allmydata.test.common import ShouldFailMixin from allmydata.test.no_network import GridTestMixin hunk ./src/allmydata/test/test_mutable.py 28 from allmydata.mutable.retrieve import Retrieve from allmydata.mutable.publish import Publish from allmydata.mutable.servermap import ServerMap, ServermapUpdater -from allmydata.mutable.layout import unpack_header, unpack_share +from allmydata.mutable.layout import unpack_header, unpack_share, \ + MDMFSlotReadProxy from allmydata.mutable.repairer import MustForceRepairError import allmydata.test.common_util as testutil hunk ./src/allmydata/test/test_mutable.py 104 d = fireEventually() d.addCallback(lambda res: _call()) return d + def callRemoteOnly(self, methname, *args, **kwargs): d = self.callRemote(methname, *args, **kwargs) d.addBoth(lambda ignore: None) hunk ./src/allmydata/test/test_mutable.py 163 def corrupt(res, s, offset, shnums_to_corrupt=None, offset_offset=0): # if shnums_to_corrupt is None, corrupt all shares. Otherwise it is a # list of shnums to corrupt. 
+ ds = [] for peerid in s._peers: shares = s._peers[peerid] for shnum in shares: hunk ./src/allmydata/test/test_mutable.py 190 else: offset1 = offset offset2 = 0 - if offset1 == "pubkey": + if offset1 == "pubkey" and IV: real_offset = 107 hunk ./src/allmydata/test/test_mutable.py 192 + elif offset1 == "share_data" and not IV: + real_offset = 104 elif offset1 in o: real_offset = o[offset1] else: hunk ./src/allmydata/test/test_mutable.py 327 d.addCallback(_created) return d + + def test_upload_and_download_mdmf(self): + d = self.nodemaker.create_mutable_file(version=MDMF_VERSION) + def _created(n): + d = defer.succeed(None) + d.addCallback(lambda ignored: + n.get_servermap(MODE_READ)) + def _then(servermap): + dumped = servermap.dump(StringIO()) + self.failUnlessIn("3-of-10", dumped.getvalue()) + d.addCallback(_then) + # Now overwrite the contents with some new contents. We want + # to make them big enough to force the file to be uploaded + # in more than one segment. + big_contents = "contents1" * 100000 # about 900 KiB + d.addCallback(lambda ignored: + n.overwrite(big_contents)) + d.addCallback(lambda ignored: + n.download_best_version()) + d.addCallback(lambda data: + self.failUnlessEqual(data, big_contents)) + # Overwrite the contents again with some new contents. As + # before, they need to be big enough to force multiple + # segments, so that we make the downloader deal with + # multiple segments. + bigger_contents = "contents2" * 1000000 # about 9MiB + d.addCallback(lambda ignored: + n.overwrite(bigger_contents)) + d.addCallback(lambda ignored: + n.download_best_version()) + d.addCallback(lambda data: + self.failUnlessEqual(data, bigger_contents)) + return d + d.addCallback(_created) + return d + + def test_create_with_initial_contents(self): d = self.nodemaker.create_mutable_file("contents 1") def _created(n): hunk ./src/allmydata/test/test_mutable.py 1147 def _test_corrupt_all(self, offset, substring, - should_succeed=False, corrupt_early=True, - failure_checker=None): + should_succeed=False, + corrupt_early=True, + failure_checker=None, + fetch_privkey=False): d = defer.succeed(None) if corrupt_early: d.addCallback(corrupt, self._storage, offset) hunk ./src/allmydata/test/test_mutable.py 1167 self.failUnlessIn(substring, "".join(allproblems)) return servermap if should_succeed: - d1 = self._fn.download_version(servermap, ver) + d1 = self._fn.download_version(servermap, ver, + fetch_privkey) d1.addCallback(lambda new_contents: self.failUnlessEqual(new_contents, self.CONTENTS)) else: hunk ./src/allmydata/test/test_mutable.py 1175 d1 = self.shouldFail(NotEnoughSharesError, "_corrupt_all(offset=%s)" % (offset,), substring, - self._fn.download_version, servermap, ver) + self._fn.download_version, servermap, + ver, + fetch_privkey) if failure_checker: d1.addCallback(failure_checker) d1.addCallback(lambda res: servermap) hunk ./src/allmydata/test/test_mutable.py 1186 return d def test_corrupt_all_verbyte(self): - # when the version byte is not 0, we hit an UnknownVersionError error - # in unpack_share(). + # when the version byte is not 0 or 1, we hit an UnknownVersionError + # error in unpack_share(). 
d = self._test_corrupt_all(0, "UnknownVersionError") def _check_servermap(servermap): # and the dump should mention the problems hunk ./src/allmydata/test/test_mutable.py 1193 s = StringIO() dump = servermap.dump(s).getvalue() - self.failUnless("10 PROBLEMS" in dump, dump) + self.failUnless("30 PROBLEMS" in dump, dump) d.addCallback(_check_servermap) return d hunk ./src/allmydata/test/test_mutable.py 1263 return self._test_corrupt_all("enc_privkey", None, should_succeed=True) + def test_corrupt_all_encprivkey_late(self): + # this should work for the same reason as above, but we corrupt + # after the servermap update to exercise the error handling + # code. + # We need to remove the privkey from the node, or the retrieve + # process won't know to update it. + self._fn._privkey = None + return self._test_corrupt_all("enc_privkey", + None, # this shouldn't fail + should_succeed=True, + corrupt_early=False, + fetch_privkey=True) + + def test_corrupt_all_seqnum_late(self): # corrupting the seqnum between mapupdate and retrieve should result # in NotEnoughSharesError, since each share will look invalid hunk ./src/allmydata/test/test_mutable.py 1283 def _check(res): f = res[0] self.failUnless(f.check(NotEnoughSharesError)) - self.failUnless("someone wrote to the data since we read the servermap" in str(f)) + self.failUnless("uncoordinated write" in str(f)) return self._test_corrupt_all(1, "ran out of peers", corrupt_early=False, failure_checker=_check) hunk ./src/allmydata/test/test_mutable.py 1333 self.failUnlessEqual(new_contents, self.CONTENTS)) return d - def test_corrupt_some(self): - # corrupt the data of first five shares (so the servermap thinks - # they're good but retrieve marks them as bad), so that the - # MODE_READ set of 6 will be insufficient, forcing node.download to - # retry with more servers. - corrupt(None, self._storage, "share_data", range(5)) - d = self.make_servermap() + + def _test_corrupt_some(self, offset, mdmf=False): + if mdmf: + d = self.publish_mdmf() + else: + d = defer.succeed(None) + d.addCallback(lambda ignored: + corrupt(None, self._storage, offset, range(5))) + d.addCallback(lambda ignored: + self.make_servermap()) def _do_retrieve(servermap): ver = servermap.best_recoverable_version() self.failUnless(ver) hunk ./src/allmydata/test/test_mutable.py 1349 return self._fn.download_best_version() d.addCallback(_do_retrieve) d.addCallback(lambda new_contents: - self.failUnlessEqual(new_contents, self.CONTENTS)) + self.failUnlessEqual(new_contents, self.CONTENTS)) return d hunk ./src/allmydata/test/test_mutable.py 1352 + + def test_corrupt_some(self): + # corrupt the data of first five shares (so the servermap thinks + # they're good but retrieve marks them as bad), so that the + # MODE_READ set of 6 will be insufficient, forcing node.download to + # retry with more servers. 
+ return self._test_corrupt_some("share_data") + + def test_download_fails(self): d = corrupt(None, self._storage, "signature") d.addCallback(lambda ignored: hunk ./src/allmydata/test/test_mutable.py 1366 self.shouldFail(UnrecoverableFileError, "test_download_anyway", "no recoverable versions", - self._fn.download_best_version) + self._fn.download_best_version)) return d hunk ./src/allmydata/test/test_mutable.py 1370 + + def test_corrupt_mdmf_block_hash_tree(self): + d = self.publish_mdmf() + d.addCallback(lambda ignored: + self._test_corrupt_all(("block_hash_tree", 12 * 32), + "block hash tree failure", + corrupt_early=False, + should_succeed=False)) + return d + + + def test_corrupt_mdmf_block_hash_tree_late(self): + d = self.publish_mdmf() + d.addCallback(lambda ignored: + self._test_corrupt_all(("block_hash_tree", 12 * 32), + "block hash tree failure", + corrupt_early=True, + should_succeed=False)) + return d + + + def test_corrupt_mdmf_share_data(self): + d = self.publish_mdmf() + d.addCallback(lambda ignored: + # TODO: Find out what the block size is and corrupt a + # specific block, rather than just guessing. + self._test_corrupt_all(("share_data", 12 * 40), + "block hash tree failure", + corrupt_early=True, + should_succeed=False)) + return d + + + def test_corrupt_some_mdmf(self): + return self._test_corrupt_some(("share_data", 12 * 40), + mdmf=True) + + class CheckerMixin: def check_good(self, r, where): self.failUnless(r.is_healthy(), where) hunk ./src/allmydata/test/test_mutable.py 2116 d.addCallback(lambda res: self.shouldFail(NotEnoughSharesError, "test_retrieve_surprise", - "ran out of peers: have 0 shares (k=3)", + "ran out of peers: have 0 of 1", n.download_version, self.old_map, self.old_map.best_recoverable_version(), hunk ./src/allmydata/test/test_mutable.py 2125 d.addCallback(_created) return d + def test_unexpected_shares(self): # upload the file, take a servermap, shut down one of the servers, # upload it again (causing shares to appear on a new server), then hunk ./src/allmydata/test/test_mutable.py 2329 self.basedir = "mutable/Problems/test_privkey_query_missing" self.set_up_grid(num_servers=20) nm = self.g.clients[0].nodemaker - LARGE = "These are Larger contents" * 2000 # about 50KB + LARGE = "These are Larger contents" * 2000 # about 50KiB nm._node_cache = DevNullDictionary() # disable the nodecache d = nm.create_mutable_file(LARGE) hunk ./src/allmydata/test/test_mutable.py 2342 d.addCallback(_created) d.addCallback(lambda res: self.n2.get_servermap(MODE_WRITE)) return d + + + def test_block_and_hash_query_error(self): + # This tests for what happens when a query to a remote server + # fails in either the hash validation step or the block getting + # step (because of batching, this is the same actual query). + # We need to have the storage server persist up until the point + # that its prefix is validated, then suddenly die. This + # exercises some exception handling code in Retrieve. + self.basedir = "mutable/Problems/test_block_and_hash_query_error" + self.set_up_grid(num_servers=20) + nm = self.g.clients[0].nodemaker + CONTENTS = "contents" * 2000 + d = nm.create_mutable_file(CONTENTS) + def _created(node): + self._node = node + d.addCallback(_created) + d.addCallback(lambda ignored: + self._node.get_servermap(MODE_READ)) + def _then(servermap): + # we have our servermap. Now we set up the servers like the + # tests above -- the first one that gets a read call should + # start throwing errors, but only after returning its prefix + # for validation. 
Since we'll download without fetching the + # private key, the next query to the remote server will be + # for either a block and salt or for hashes, either of which + # will exercise the error handling code. + killer = FirstServerGetsKilled() + for (serverid, ss) in nm.storage_broker.get_all_servers(): + ss.post_call_notifier = killer.notify + ver = servermap.best_recoverable_version() + assert ver + return self._node.download_version(servermap, ver) + d.addCallback(_then) + d.addCallback(lambda data: + self.failUnlessEqual(data, CONTENTS)) + return d } [mutable/checker.py: check MDMF files Kevan Carstensen **20100628225048 Ignore-this: fb697b36285d60552df6ca5ac6a37629 This patch adapts the mutable file checker and verifier to check and verify MDMF files. It does this by using the new segmented downloader, which is trained to perform verification operations on request. This removes some code duplication. ] { hunk ./src/allmydata/mutable/checker.py 12 from allmydata.mutable.common import MODE_CHECK, CorruptShareError from allmydata.mutable.servermap import ServerMap, ServermapUpdater from allmydata.mutable.layout import unpack_share, SIGNED_PREFIX_LENGTH +from allmydata.mutable.retrieve import Retrieve # for verifying class MutableChecker: hunk ./src/allmydata/mutable/checker.py 29 def check(self, verify=False, add_lease=False): servermap = ServerMap() + # Updating the servermap in MODE_CHECK will stand a good chance + # of finding all of the shares, and getting a good idea of + # recoverability, etc, without verifying. u = ServermapUpdater(self._node, self._storage_broker, self._monitor, servermap, MODE_CHECK, add_lease=add_lease) if self._history: hunk ./src/allmydata/mutable/checker.py 55 if num_recoverable: self.best_version = servermap.best_recoverable_version() + # The file is unhealthy and needs to be repaired if: + # - There are unrecoverable versions. if servermap.unrecoverable_versions(): self.need_repair = True hunk ./src/allmydata/mutable/checker.py 59 + # - There isn't a recoverable version. if num_recoverable != 1: self.need_repair = True hunk ./src/allmydata/mutable/checker.py 62 + # - The best recoverable version is missing some shares. if self.best_version: available_shares = servermap.shares_available() (num_distinct_shares, k, N) = available_shares[self.best_version] hunk ./src/allmydata/mutable/checker.py 73 def _verify_all_shares(self, servermap): # read every byte of each share + # + # This logic is going to be very nearly the same as the + # downloader. I bet we could pass the downloader a flag that + # makes it do this, and piggyback onto that instead of + # duplicating a bunch of code. + # + # Like: + # r = Retrieve(blah, blah, blah, verify=True) + # d = r.download() + # (wait, wait, wait, d.callback) + # + # Then, when it has finished, we can check the servermap (which + # we provided to Retrieve) to figure out which shares are bad, + # since the Retrieve process will have updated the servermap as + # it went along. + # + # By passing the verify=True flag to the constructor, we are + # telling the downloader a few things. + # + # 1. It needs to download all N shares, not just K shares. + # 2. It doesn't need to decrypt or decode the shares, only + # verify them. 
if not self.best_version: return hunk ./src/allmydata/mutable/checker.py 97 - versionmap = servermap.make_versionmap() - shares = versionmap[self.best_version] - (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, - offsets_tuple) = self.best_version - offsets = dict(offsets_tuple) - readv = [ (0, offsets["EOF"]) ] - dl = [] - for (shnum, peerid, timestamp) in shares: - ss = servermap.connections[peerid] - d = self._do_read(ss, peerid, self._storage_index, [shnum], readv) - d.addCallback(self._got_answer, peerid, servermap) - dl.append(d) - return defer.DeferredList(dl, fireOnOneErrback=True, consumeErrors=True) hunk ./src/allmydata/mutable/checker.py 98 - def _do_read(self, ss, peerid, storage_index, shnums, readv): - # isolate the callRemote to a separate method, so tests can subclass - # Publish and override it - d = ss.callRemote("slot_readv", storage_index, shnums, readv) + r = Retrieve(self._node, servermap, self.best_version, verify=True) + d = r.download() + d.addCallback(self._process_bad_shares) return d hunk ./src/allmydata/mutable/checker.py 103 - def _got_answer(self, datavs, peerid, servermap): - for shnum,datav in datavs.items(): - data = datav[0] - try: - self._got_results_one_share(shnum, peerid, data) - except CorruptShareError: - f = failure.Failure() - self.need_repair = True - self.bad_shares.append( (peerid, shnum, f) ) - prefix = data[:SIGNED_PREFIX_LENGTH] - servermap.mark_bad_share(peerid, shnum, prefix) - ss = servermap.connections[peerid] - self.notify_server_corruption(ss, shnum, str(f.value)) - - def check_prefix(self, peerid, shnum, data): - (seqnum, root_hash, IV, segsize, datalength, k, N, prefix, - offsets_tuple) = self.best_version - got_prefix = data[:SIGNED_PREFIX_LENGTH] - if got_prefix != prefix: - raise CorruptShareError(peerid, shnum, - "prefix mismatch: share changed while we were reading it") - - def _got_results_one_share(self, shnum, peerid, data): - self.check_prefix(peerid, shnum, data) - - # the [seqnum:signature] pieces are validated by _compare_prefix, - # which checks their signature against the pubkey known to be - # associated with this file. 
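For illustration (this snippet is not part of any patch here), the same Retrieve class now serves both the plain read path and the verification path described in the comments above; the names below (filenode, servermap, verinfo) are hypothetical stand-ins for the objects a real caller would hold:

    # Plain read: fetch k shares, decode and decrypt; download() fires
    # with the plaintext of the chosen version.
    r = Retrieve(filenode, servermap, verinfo)
    d = r.download()

    # Verification, as the checker hunks here use it: fetch every share,
    # validate prefixes, block hashes, and the signature, but never decode
    # or decrypt. download() fires with a list of (peerid, shnum, Failure)
    # tuples, one per share that failed validation.
    v = Retrieve(filenode, servermap, verinfo, verify=True)
    d2 = v.download()
    d2.addCallback(lambda bad_shares: bool(bad_shares))  # True => needs repair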
hunk ./src/allmydata/mutable/checker.py 104 - (seqnum, root_hash, IV, k, N, segsize, datalen, pubkey, signature, - share_hash_chain, block_hash_tree, share_data, - enc_privkey) = unpack_share(data) - - # validate [share_hash_chain,block_hash_tree,share_data] - - leaves = [hashutil.block_hash(share_data)] - t = hashtree.HashTree(leaves) - if list(t) != block_hash_tree: - raise CorruptShareError(peerid, shnum, "block hash tree failure") - share_hash_leaf = t[0] - t2 = hashtree.IncompleteHashTree(N) - # root_hash was checked by the signature - t2.set_hashes({0: root_hash}) - try: - t2.set_hashes(hashes=share_hash_chain, - leaves={shnum: share_hash_leaf}) - except (hashtree.BadHashError, hashtree.NotEnoughHashesError, - IndexError), e: - msg = "corrupt hashes: %s" % (e,) - raise CorruptShareError(peerid, shnum, msg) - - # validate enc_privkey: only possible if we have a write-cap - if not self._node.is_readonly(): - alleged_privkey_s = self._node._decrypt_privkey(enc_privkey) - alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s) - if alleged_writekey != self._node.get_writekey(): - raise CorruptShareError(peerid, shnum, "invalid privkey") + def _process_bad_shares(self, bad_shares): + if bad_shares: + self.need_repair = True + self.bad_shares = bad_shares hunk ./src/allmydata/mutable/checker.py 109 - def notify_server_corruption(self, ss, shnum, reason): - ss.callRemoteOnly("advise_corrupt_share", - "mutable", self._storage_index, shnum, reason) def _count_shares(self, smap, version): available_shares = smap.shares_available() hunk ./src/allmydata/test/test_mutable.py 193 if offset1 == "pubkey" and IV: real_offset = 107 elif offset1 == "share_data" and not IV: - real_offset = 104 + real_offset = 107 elif offset1 in o: real_offset = o[offset1] else: hunk ./src/allmydata/test/test_mutable.py 395 return d d.addCallback(_created) return d + test_create_mdmf_with_initial_contents.timeout = 20 def test_create_with_initial_contents_function(self): hunk ./src/allmydata/test/test_mutable.py 700 k, N, segsize, datalen) self.failUnless(p._pubkey.verify(sig_material, signature)) #self.failUnlessEqual(signature, p._privkey.sign(sig_material)) - self.failUnless(isinstance(share_hash_chain, dict)) - self.failUnlessEqual(len(share_hash_chain), 4) # ln2(10)++ + self.failUnlessEqual(len(share_hash_chain), 4) # ln2(10)++ for shnum,share_hash in share_hash_chain.items(): self.failUnless(isinstance(shnum, int)) self.failUnless(isinstance(share_hash, str)) hunk ./src/allmydata/test/test_mutable.py 820 shares[peerid][shnum] = oldshares[index][peerid][shnum] + + class Servermap(unittest.TestCase, PublishMixin): def setUp(self): return self.publish_one() hunk ./src/allmydata/test/test_mutable.py 951 self._storage._peers = {} # delete all shares ms = self.make_servermap d = defer.succeed(None) - +# d.addCallback(lambda res: ms(mode=MODE_CHECK)) d.addCallback(lambda sm: self.failUnlessNoneRecoverable(sm)) hunk ./src/allmydata/test/test_mutable.py 1440 d.addCallback(self.check_good, "test_check_good") return d + def test_check_mdmf_good(self): + d = self.publish_mdmf() + d.addCallback(lambda ignored: + self._fn.check(Monitor())) + d.addCallback(self.check_good, "test_check_mdmf_good") + return d + def test_check_no_shares(self): for shares in self._storage._peers.values(): shares.clear() hunk ./src/allmydata/test/test_mutable.py 1454 d.addCallback(self.check_bad, "test_check_no_shares") return d + def test_check_mdmf_no_shares(self): + d = self.publish_mdmf() + def _then(ignored): + for share in 
self._storage._peers.values(): + share.clear() + d.addCallback(_then) + d.addCallback(lambda ignored: + self._fn.check(Monitor())) + d.addCallback(self.check_bad, "test_check_mdmf_no_shares") + return d + def test_check_not_enough_shares(self): for shares in self._storage._peers.values(): for shnum in shares.keys(): hunk ./src/allmydata/test/test_mutable.py 1474 d.addCallback(self.check_bad, "test_check_not_enough_shares") return d + def test_check_mdmf_not_enough_shares(self): + d = self.publish_mdmf() + def _then(ignored): + for shares in self._storage._peers.values(): + for shnum in shares.keys(): + if shnum > 0: + del shares[shnum] + d.addCallback(_then) + d.addCallback(lambda ignored: + self._fn.check(Monitor())) + d.addCallback(self.check_bad, "test_check_mdmf_not_enougH_shares") + return d + + def test_check_all_bad_sig(self): d = corrupt(None, self._storage, 1) # bad sig d.addCallback(lambda ignored: hunk ./src/allmydata/test/test_mutable.py 1495 d.addCallback(self.check_bad, "test_check_all_bad_sig") return d + def test_check_mdmf_all_bad_sig(self): + d = self.publish_mdmf() + d.addCallback(lambda ignored: + corrupt(None, self._storage, 1)) + d.addCallback(lambda ignored: + self._fn.check(Monitor())) + d.addCallback(self.check_bad, "test_check_mdmf_all_bad_sig") + return d + def test_check_all_bad_blocks(self): d = corrupt(None, self._storage, "share_data", [9]) # bad blocks # the Checker won't notice this.. it doesn't look at actual data hunk ./src/allmydata/test/test_mutable.py 1512 d.addCallback(self.check_good, "test_check_all_bad_blocks") return d + + def test_check_mdmf_all_bad_blocks(self): + d = self.publish_mdmf() + d.addCallback(lambda ignored: + corrupt(None, self._storage, "share_data")) + d.addCallback(lambda ignored: + self._fn.check(Monitor())) + d.addCallback(self.check_good, "test_check_mdmf_all_bad_blocks") + return d + def test_verify_good(self): d = self._fn.check(Monitor(), verify=True) d.addCallback(self.check_good, "test_verify_good") hunk ./src/allmydata/test/test_mutable.py 1582 "test_verify_one_bad_encprivkey_uncheckable") return d + + def test_verify_mdmf_good(self): + d = self.publish_mdmf() + d.addCallback(lambda ignored: + self._fn.check(Monitor(), verify=True)) + d.addCallback(self.check_good, "test_verify_mdmf_good") + return d + + + def test_verify_mdmf_one_bad_block(self): + d = self.publish_mdmf() + d.addCallback(lambda ignored: + corrupt(None, self._storage, "share_data", [1])) + d.addCallback(lambda ignored: + self._fn.check(Monitor(), verify=True)) + # We should find one bad block here + d.addCallback(self.check_bad, "test_verify_mdmf_one_bad_block") + d.addCallback(self.check_expected_failure, + CorruptShareError, "block hash tree failure", + "test_verify_mdmf_one_bad_block") + return d + + + def test_verify_mdmf_bad_encprivkey(self): + d = self.publish_mdmf() + d.addCallback(lambda ignored: + corrupt(None, self._storage, "enc_privkey", [1])) + d.addCallback(lambda ignored: + self._fn.check(Monitor(), verify=True)) + d.addCallback(self.check_bad, "test_verify_mdmf_bad_encprivkey") + d.addCallback(self.check_expected_failure, + CorruptShareError, "privkey", + "test_verify_mdmf_bad_encprivkey") + return d + + + def test_verify_mdmf_bad_sig(self): + d = self.publish_mdmf() + d.addCallback(lambda ignored: + corrupt(None, self._storage, 1, [1])) + d.addCallback(lambda ignored: + self._fn.check(Monitor(), verify=True)) + d.addCallback(self.check_bad, "test_verify_mdmf_bad_sig") + return d + + + def test_verify_mdmf_bad_encprivkey_uncheckable(self): 
+ d = self.publish_mdmf() + d.addCallback(lambda ignored: + corrupt(None, self._storage, "enc_privkey", [1])) + d.addCallback(lambda ignored: + self._fn.get_readonly()) + d.addCallback(lambda fn: + fn.check(Monitor(), verify=True)) + d.addCallback(self.check_good, + "test_verify_mdmf_bad_encprivkey_uncheckable") + return d + + class Repair(unittest.TestCase, PublishMixin, ShouldFailMixin): def get_shares(self, s): hunk ./src/allmydata/test/test_mutable.py 1706 current_shares = self.old_shares[-1] self.failUnlessEqual(old_shares, current_shares) + def test_unrepairable_0shares(self): d = self.publish_one() def _delete_all_shares(ign): hunk ./src/allmydata/test/test_mutable.py 1721 d.addCallback(_check) return d + def test_mdmf_unrepairable_0shares(self): + d = self.publish_mdmf() + def _delete_all_shares(ign): + shares = self._storage._peers + for peerid in shares: + shares[peerid] = {} + d.addCallback(_delete_all_shares) + d.addCallback(lambda ign: self._fn.check(Monitor())) + d.addCallback(lambda check_results: self._fn.repair(check_results)) + d.addCallback(lambda crr: self.failIf(crr.get_successful())) + return d + + def test_unrepairable_1share(self): d = self.publish_one() def _delete_all_shares(ign): hunk ./src/allmydata/test/test_mutable.py 1750 d.addCallback(_check) return d + def test_mdmf_unrepairable_1share(self): + d = self.publish_mdmf() + def _delete_all_shares(ign): + shares = self._storage._peers + for peerid in shares: + for shnum in list(shares[peerid]): + if shnum > 0: + del shares[peerid][shnum] + d.addCallback(_delete_all_shares) + d.addCallback(lambda ign: self._fn.check(Monitor())) + d.addCallback(lambda check_results: self._fn.repair(check_results)) + def _check(crr): + self.failUnlessEqual(crr.get_successful(), False) + d.addCallback(_check) + return d + + def test_repairable_5shares(self): + d = self.publish_mdmf() + def _delete_all_shares(ign): + shares = self._storage._peers + for peerid in shares: + for shnum in list(shares[peerid]): + if shnum > 4: + del shares[peerid][shnum] + d.addCallback(_delete_all_shares) + d.addCallback(lambda ign: self._fn.check(Monitor())) + d.addCallback(lambda check_results: self._fn.repair(check_results)) + def _check(crr): + self.failUnlessEqual(crr.get_successful(), True) + d.addCallback(_check) + return d + + def test_mdmf_repairable_5shares(self): + d = self.publish_mdmf() + def _delete_all_shares(ign): + shares = self._storage._peers + for peerid in shares: + for shnum in list(shares[peerid]): + if shnum > 5: + del shares[peerid][shnum] + d.addCallback(_delete_all_shares) + d.addCallback(lambda ign: self._fn.check(Monitor())) + d.addCallback(lambda check_results: self._fn.repair(check_results)) + def _check(crr): + self.failUnlessEqual(crr.get_successful(), True) + d.addCallback(_check) + return d + + def test_merge(self): self.old_shares = [] d = self.publish_multiple() } [mutable/retrieve.py: learn how to verify mutable files Kevan Carstensen **20100628225201 Ignore-this: 989af7800c47589620918461ec989483 ] { hunk ./src/allmydata/mutable/retrieve.py 86 # Retrieve object will remain tied to a specific version of the file, and # will use a single ServerMap instance. 
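One subtlety worth noting before the retrieve.py hunks below: checking the encrypted private key is only possible with a write cap (there is no writekey to compare it against otherwise), which is why test_verify_mdmf_bad_encprivkey_uncheckable above expects a healthy result when verifying through a read-only filenode. A condensed paraphrase of the guards added below, with bare names standing in for the instance attributes:

    # we need the privkey either because the caller asked for it or
    # because a verify pass checks every field of the share
    need_privkey = fetch_privkey or verify
    if node.get_privkey() and not verify:
        need_privkey = False   # already cached, and we are not re-checking it

    # ...later, per reader: only fetch and validate the encprivkey when a
    # writekey exists to validate it against
    if need_privkey and not node.is_readonly():
        d = reader.get_encprivkey()
        d.addCallback(try_to_validate_privkey, reader)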
- def __init__(self, filenode, servermap, verinfo, fetch_privkey=False): + def __init__(self, filenode, servermap, verinfo, fetch_privkey=False, + verify=False): self._node = filenode assert self._node.get_pubkey() self._storage_index = filenode.get_storage_index() hunk ./src/allmydata/mutable/retrieve.py 106 # during repair, we may be called upon to grab the private key, since # it wasn't picked up during a verify=False checker run, and we'll # need it for repair to generate a new version. - self._need_privkey = fetch_privkey - if self._node.get_privkey(): + self._need_privkey = fetch_privkey or verify + if self._node.get_privkey() and not verify: self._need_privkey = False if self._need_privkey: hunk ./src/allmydata/mutable/retrieve.py 117 self._privkey_query_markers = [] # one Marker for each time we've # tried to get the privkey. + # verify means that we are using the downloader logic to verify all + # of our shares. This tells the downloader a few things. + # + # 1. We need to download all of the shares. + # 2. We don't need to decode or decrypt the shares, since our + # caller doesn't care about the plaintext, only the + # information about which shares are or are not valid. + # 3. When we are validating readers, we need to validate the + # signature on the prefix. Do we? We already do this in the + # servermap update? + # + # (just work on 1 and 2 for now, I guess) + self._verify = False + if verify: + self._verify = True + self._status = RetrieveStatus() self._status.set_storage_index(self._storage_index) self._status.set_helper(False) hunk ./src/allmydata/mutable/retrieve.py 323 # We need at least self._required_shares readers to download a # segment. - needed = self._required_shares - len(self._active_readers) + if self._verify: + needed = self._total_shares + else: + needed = self._required_shares - len(self._active_readers) # XXX: Why don't format= log messages work here? self.log("adding %d peers to the active peers list" % needed) hunk ./src/allmydata/mutable/retrieve.py 339 # will cause problems later. active_shnums -= set([reader.shnum for reader in self._active_readers]) active_shnums = list(active_shnums)[:needed] - if len(active_shnums) < needed: + if len(active_shnums) < needed and not self._verify: # We don't have enough readers to retrieve the file; fail. return self._failed() hunk ./src/allmydata/mutable/retrieve.py 346 for shnum in active_shnums: self._active_readers.append(self.readers[shnum]) self.log("added reader for share %d" % shnum) - assert len(self._active_readers) == self._required_shares + assert len(self._active_readers) >= self._required_shares # Conceptually, this is part of the _add_active_peers step. It # validates the prefixes of newly added readers to make sure # that they match what we are expecting for self.verinfo. If hunk ./src/allmydata/mutable/retrieve.py 416 # that we haven't gotten it at the end of # segment decoding, then we'll take more drastic # measures. - if self._need_privkey: + if self._need_privkey and not self._node.is_readonly(): d = reader.get_encprivkey() d.addCallback(self._try_to_validate_privkey, reader) if bad_readers: hunk ./src/allmydata/mutable/retrieve.py 423 # We do them all at once, or else we screw up list indexing. 
for (reader, f) in bad_readers: self._mark_bad_share(reader, f) - return self._add_active_peers() + if self._verify: + if len(self._active_readers) >= self._required_shares: + return self._download_current_segment() + else: + return self._failed() + else: + return self._add_active_peers() else: return self._download_current_segment() # The next step will assert that it has enough active hunk ./src/allmydata/mutable/retrieve.py 518 """ self.log("marking share %d on server %s as bad" % \ (reader.shnum, reader)) + prefix = self.verinfo[-2] + self.servermap.mark_bad_share(reader.peerid, + reader.shnum, + prefix) self._remove_reader(reader) hunk ./src/allmydata/mutable/retrieve.py 523 - self._bad_shares.add((reader.peerid, reader.shnum)) + self._bad_shares.add((reader.peerid, reader.shnum, f)) self._status.problems[reader.peerid] = f self._last_failure = f self.notify_server_corruption(reader.peerid, reader.shnum, hunk ./src/allmydata/mutable/retrieve.py 571 ds.append(dl) reader.flush() dl = defer.DeferredList(ds) - dl.addCallback(self._maybe_decode_and_decrypt_segment, segnum) + if self._verify: + dl.addCallback(lambda ignored: "") + dl.addCallback(self._set_segment) + else: + dl.addCallback(self._maybe_decode_and_decrypt_segment, segnum) return dl hunk ./src/allmydata/mutable/retrieve.py 701 # shnum, which will be a leaf in the share hash tree, which # will allow us to validate the rest of the tree. if self.share_hash_tree.needed_hashes(reader.shnum, - include_leaf=True): + include_leaf=True) or \ + self._verify: try: self.share_hash_tree.set_hashes(hashes=sharehashes[1], leaves={reader.shnum: bht[0]}) hunk ./src/allmydata/mutable/retrieve.py 832 def _try_to_validate_privkey(self, enc_privkey, reader): - alleged_privkey_s = self._node._decrypt_privkey(enc_privkey) alleged_writekey = hashutil.ssk_writekey_hash(alleged_privkey_s) if alleged_writekey != self._node.get_writekey(): hunk ./src/allmydata/mutable/retrieve.py 838 self.log("invalid privkey from %s shnum %d" % (reader, reader.shnum), level=log.WEIRD, umid="YIw4tA") + if self._verify: + self.servermap.mark_bad_share(reader.peerid, reader.shnum, + self.verinfo[-2]) + e = CorruptShareError(reader.peerid, + reader.shnum, + "invalid privkey") + f = failure.Failure(e) + self._bad_shares.add((reader.peerid, reader.shnum, f)) return # it's good hunk ./src/allmydata/mutable/retrieve.py 904 statements, I return the decrypted contents to the owner of this Retrieve object through self._done_deferred. """ - eventually(self._done_deferred.callback, self._plaintext) + if self._verify: + ret = list(self._bad_shares) + self.log("done verifying, found %d bad shares" % len(ret)) + else: + ret = self._plaintext + eventually(self._done_deferred.callback, ret) def _failed(self): hunk ./src/allmydata/mutable/retrieve.py 920 to the caller of this Retrieve object through self._done_deferred. 
""" - format = ("ran out of peers: " - "have %(have)d of %(total)d segments " - "found %(bad)d bad shares " - "encoding %(k)d-of-%(n)d") - args = {"have": self._current_segment, - "total": self._num_segments, - "k": self._required_shares, - "n": self._total_shares, - "bad": len(self._bad_shares)} - e = NotEnoughSharesError("%s, last failure: %s" % (format % args, - str(self._last_failure))) - f = failure.Failure(e) - eventually(self._done_deferred.callback, f) + if self._verify: + ret = list(self._bad_shares) + else: + format = ("ran out of peers: " + "have %(have)d of %(total)d segments " + "found %(bad)d bad shares " + "encoding %(k)d-of-%(n)d") + args = {"have": self._current_segment, + "total": self._num_segments, + "k": self._required_shares, + "n": self._total_shares, + "bad": len(self._bad_shares)} + e = NotEnoughSharesError("%s, last failure: %s" % \ + (format % args, str(self._last_failure))) + f = failure.Failure(e) + ret = f + eventually(self._done_deferred.callback, ret) } [interfaces.py: add IMutableSlotWriter Kevan Carstensen **20100630183305 Ignore-this: ff9dca96ef1a009ae85485682f81ea5 ] hunk ./src/allmydata/interfaces.py 418 """ +class IMutableSlotWriter(Interface): + """ + The interface for a writer around a mutable slot on a remote server. + """ + def set_checkstring(checkstring, *args): + """ + Set the checkstring that I will pass to the remote server when + writing. + + @param checkstring A packed checkstring to use. + + Note that implementations can differ in which semantics they + wish to support for set_checkstring -- they can, for example, + build the checkstring themselves from its constituents, or + some other thing. + """ + + def get_checkstring(): + """ + Get the checkstring that I think currently exists on the remote + server. + """ + + def put_block(data, segnum, salt): + """ + Add a block and salt to the share. + """ + + def put_encprivey(encprivkey): + """ + Add the encrypted private key to the share. + """ + + def put_blockhashes(blockhashes=list): + """ + Add the block hash tree to the share. + """ + + def put_sharehashes(sharehashes=dict): + """ + Add the share hash chain to the share. + """ + + def get_signable(): + """ + Return the part of the share that needs to be signed. + """ + + def put_signature(signature): + """ + Add the signature to the share. + """ + + def put_verification_key(verification_key): + """ + Add the verification key to the share. + """ + + def finish_publishing(): + """ + Do anything necessary to finish writing the share to a remote + server. I require that no further publishing needs to take place + after this method has been called. 
+ """ + + class IURI(Interface): def init_from_string(uri): """Accept a string (as created by my to_string() method) and populate [test/test_mutable.py: temporarily disable two tests that are now irrelevant Kevan Carstensen **20100701232806 Ignore-this: 701e143567f3954812ca6960af1d6ac7 ] { hunk ./src/allmydata/test/test_mutable.py 651 self.failUnlessEqual(len(share_ids), 10) d.addCallback(_done) return d + test_encrypt.todo = "Write an equivalent of this for the new uploader" def test_generate(self): nm = make_nodemaker() hunk ./src/allmydata/test/test_mutable.py 713 self.failUnlessEqual(enc_privkey, self._fn.get_encprivkey()) d.addCallback(_generated) return d + test_generate.todo = "Write an equivalent of this for the new uploader" # TODO: when we publish to 20 peers, we should get one share per peer on 10 # when we publish to 3 peers, we should get either 3 or 4 shares per peer } [Add MDMF reader and writer, and SDMF writer Kevan Carstensen **20100702225531 Ignore-this: bf6276a91d27dcb4e779b0eb82ea1843 The MDMF/SDMF reader MDMF writer, and SDMF writer are similar to the object proxies that exist for immutable files. They abstract away details of connection, state, and caching from their callers (in this case, the download, servermap updater, and uploader), and expose methods to get and set information on the remote server. MDMFSlotReadProxy reads a mutable file from the server, doing the right thing (in most cases) regardless of whether the file is MDMF or SDMF. It allows callers to tell it how to batch and flush reads. MDMFSlotWriteProxy writes an MDMF mutable file to a server. SDMFSlotWriteProxy writes an SDMF mutable file to a server. This patch also includes tests for MDMFSlotReadProxy, SDMFSlotWriteProxy, and MDMFSlotWriteProxy. ] { hunk ./src/allmydata/mutable/layout.py 4 import struct from allmydata.mutable.common import NeedMoreDataError, UnknownVersionError +from allmydata.interfaces import HASH_SIZE, SALT_SIZE, SDMF_VERSION, \ + MDMF_VERSION, IMutableSlotWriter +from allmydata.util import mathutil, observer +from twisted.python import failure +from twisted.internet import defer +from zope.interface import implements + + +# These strings describe the format of the packed structs they help process +# Here's what they mean: +# +# PREFIX: +# >: Big-endian byte order; the most significant byte is first (leftmost). +# B: The version information; an 8 bit version identifier. Stored as +# an unsigned char. This is currently 00 00 00 00; our modifications +# will turn it into 00 00 00 01. +# Q: The sequence number; this is sort of like a revision history for +# mutable files; they start at 1 and increase as they are changed after +# being uploaded. Stored as an unsigned long long, which is 8 bytes in +# length. +# 32s: The root hash of the share hash tree. We use sha-256d, so we use 32 +# characters = 32 bytes to store the value. +# 16s: The salt for the readkey. This is a 16-byte random value, stored as +# 16 characters. +# +# SIGNED_PREFIX additions, things that are covered by the signature: +# B: The "k" encoding parameter. We store this as an 8-bit character, +# which is convenient because our erasure coding scheme cannot +# encode if you ask for more than 255 pieces. +# B: The "N" encoding parameter. Stored as an 8-bit character for the +# same reasons as above. +# Q: The segment size of the uploaded file. This will essentially be the +# length of the file in SDMF. An unsigned long long, so we can store +# files of quite large size. +# Q: The data length of the uploaded file. 
Modulo padding, this will be +# the same of the data length field. Like the data length field, it is +# an unsigned long long and can be quite large. +# +# HEADER additions: +# L: The offset of the signature of this. An unsigned long. +# L: The offset of the share hash chain. An unsigned long. +# L: The offset of the block hash tree. An unsigned long. +# L: The offset of the share data. An unsigned long. +# Q: The offset of the encrypted private key. An unsigned long long, to +# account for the possibility of a lot of share data. +# Q: The offset of the EOF. An unsigned long long, to account for the +# possibility of a lot of share data. +# +# After all of these, we have the following: +# - The verification key: Occupies the space between the end of the header +# and the start of the signature (i.e.: data[HEADER_LENGTH:o['signature']]. +# - The signature, which goes from the signature offset to the share hash +# chain offset. +# - The share hash chain, which goes from the share hash chain offset to +# the block hash tree offset. +# - The share data, which goes from the share data offset to the encrypted +# private key offset. +# - The encrypted private key offset, which goes until the end of the file. +# +# The block hash tree in this encoding has only one share, so the offset of +# the share data will be 32 bits more than the offset of the block hash tree. +# Given this, we may need to check to see how many bytes a reasonably sized +# block hash tree will take up. PREFIX = ">BQ32s16s" # each version has a different prefix SIGNED_PREFIX = ">BQ32s16s BBQQ" # this is covered by the signature hunk ./src/allmydata/mutable/layout.py 73 SIGNED_PREFIX_LENGTH = struct.calcsize(SIGNED_PREFIX) HEADER = ">BQ32s16s BBQQ LLLLQQ" # includes offsets HEADER_LENGTH = struct.calcsize(HEADER) +OFFSETS = ">LLLLQQ" +OFFSETS_LENGTH = struct.calcsize(OFFSETS) def unpack_header(data): o = {} hunk ./src/allmydata/mutable/layout.py 194 return (share_hash_chain, block_hash_tree, share_data) -def pack_checkstring(seqnum, root_hash, IV): +def pack_checkstring(seqnum, root_hash, IV, version=0): return struct.pack(PREFIX, hunk ./src/allmydata/mutable/layout.py 196 - 0, # version, + version, seqnum, root_hash, IV) hunk ./src/allmydata/mutable/layout.py 269 encprivkey]) return final_share +def pack_prefix(seqnum, root_hash, IV, + required_shares, total_shares, + segment_size, data_length): + prefix = struct.pack(SIGNED_PREFIX, + 0, # version, + seqnum, + root_hash, + IV, + required_shares, + total_shares, + segment_size, + data_length, + ) + return prefix + + +class SDMFSlotWriteProxy: + implements(IMutableSlotWriter) + """ + I represent a remote write slot for an SDMF mutable file. I build a + share in memory, and then write it in one piece to the remote + server. This mimics how SDMF shares were built before MDMF (and the + new MDMF uploader), but provides that functionality in a way that + allows the MDMF uploader to be built without much special-casing for + file format, which makes the uploader code more readable. 
+ """ + def __init__(self, + shnum, + rref, # a remote reference to a storage server + storage_index, + secrets, # (write_enabler, renew_secret, cancel_secret) + seqnum, # the sequence number of the mutable file + required_shares, + total_shares, + segment_size, + data_length): # the length of the original file + self.shnum = shnum + self._rref = rref + self._storage_index = storage_index + self._secrets = secrets + self._seqnum = seqnum + self._required_shares = required_shares + self._total_shares = total_shares + self._segment_size = segment_size + self._data_length = data_length + + # This is an SDMF file, so it should have only one segment, so, + # modulo padding of the data length, the segment size and the + # data length should be the same. + expected_segment_size = mathutil.next_multiple(data_length, + self._required_shares) + assert expected_segment_size == segment_size + + self._block_size = self._segment_size / self._required_shares + + # This is meant to mimic how SDMF files were built before MDMF + # entered the picture: we generate each share in its entirety, + # then push it off to the storage server in one write. When + # callers call set_*, they are just populating this dict. + # finish_publishing will stitch these pieces together into a + # coherent share, and then write the coherent share to the + # storage server. + self._share_pieces = {} + + # This tells the write logic what checkstring to use when + # writing remote shares. + self._testvs = [] + + self._readvs = [(0, struct.calcsize(PREFIX))] + + + def set_checkstring(self, checkstring_or_seqnum, + root_hash=None, + salt=None): + """ + Set the checkstring that I will pass to the remote server when + writing. + + @param checkstring_or_seqnum: A packed checkstring to use, + or a sequence number. I will treat this as a checkstr + + Note that implementations can differ in which semantics they + wish to support for set_checkstring -- they can, for example, + build the checkstring themselves from its constituents, or + some other thing. + """ + if root_hash and salt: + checkstring = struct.pack(PREFIX, + 0, + checkstring_or_seqnum, + root_hash, + salt) + else: + checkstring = checkstring_or_seqnum + self._testvs = [(0, len(checkstring), "eq", checkstring)] + + + def get_checkstring(self): + """ + Get the checkstring that I think currently exists on the remote + server. + """ + if self._testvs: + return self._testvs[0][3] + return "" + + + def put_block(self, data, segnum, salt): + """ + Add a block and salt to the share. + """ + # SDMF files have only one segment + assert segnum == 0 + assert len(data) == self._block_size + assert len(salt) == SALT_SIZE + + self._share_pieces['sharedata'] = data + self._share_pieces['salt'] = salt + + # TODO: Figure out something intelligent to return. + return defer.succeed(None) + + + def put_encprivkey(self, encprivkey): + """ + Add the encrypted private key to the share. + """ + self._share_pieces['encprivkey'] = encprivkey + + return defer.succeed(None) + + + def put_blockhashes(self, blockhashes): + """ + Add the block hash tree to the share. + """ + assert isinstance(blockhashes, list) + for h in blockhashes: + assert len(h) == HASH_SIZE + + # serialize the blockhashes, then set them. + blockhashes_s = "".join(blockhashes) + self._share_pieces['block_hash_tree'] = blockhashes_s + + return defer.succeed(None) + + + def put_sharehashes(self, sharehashes): + """ + Add the share hash chain to the share. 
+ """ + assert isinstance(sharehashes, dict) + for h in sharehashes.itervalues(): + assert len(h) == HASH_SIZE + + # serialize the sharehashes, then set them. + sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i]) + for i in sorted(sharehashes.keys())]) + self._share_pieces['share_hash_chain'] = sharehashes_s + + return defer.succeed(None) + + + def put_root_hash(self, root_hash): + """ + Add the root hash to the share. + """ + assert len(root_hash) == HASH_SIZE + + self._share_pieces['root_hash'] = root_hash + + return defer.succeed(None) + + + def put_salt(self, salt): + """ + Add a salt to an empty SDMF file. + """ + assert len(salt) == SALT_SIZE + + self._share_pieces['salt'] = salt + self._share_pieces['sharedata'] = "" + + + def get_signable(self): + """ + Return the part of the share that needs to be signed. + + SDMF writers need to sign the packed representation of the + first eight fields of the remote share, that is: + - version number (0) + - sequence number + - root of the share hash tree + - salt + - k + - n + - segsize + - datalen + + This method is responsible for returning that to callers. + """ + return struct.pack(SIGNED_PREFIX, + 0, + self._seqnum, + self._share_pieces['root_hash'], + self._share_pieces['salt'], + self._required_shares, + self._total_shares, + self._segment_size, + self._data_length) + + + def put_signature(self, signature): + """ + Add the signature to the share. + """ + self._share_pieces['signature'] = signature + + return defer.succeed(None) + + + def put_verification_key(self, verification_key): + """ + Add the verification key to the share. + """ + self._share_pieces['verification_key'] = verification_key + + return defer.succeed(None) + + + def get_verinfo(self): + """ + I return my verinfo tuple. This is used by the ServermapUpdater + to keep track of versions of mutable files. + + The verinfo tuple for MDMF files contains: + - seqnum + - root hash + - a blank (nothing) + - segsize + - datalen + - k + - n + - prefix (the thing that you sign) + - a tuple of offsets + + We include the nonce in MDMF to simplify processing of version + information tuples. + + The verinfo tuple for SDMF files is the same, but contains a + 16-byte IV instead of a hash of salts. 
+ """ + return (self._seqnum, + self._share_pieces['root_hash'], + self._share_pieces['salt'], + self._segment_size, + self._data_length, + self._required_shares, + self._total_shares, + self.get_signable(), + self._get_offsets_tuple()) + + def _get_offsets_dict(self): + post_offset = HEADER_LENGTH + offsets = {} + + verification_key_length = len(self._share_pieces['verification_key']) + o1 = offsets['signature'] = post_offset + verification_key_length + + signature_length = len(self._share_pieces['signature']) + o2 = offsets['share_hash_chain'] = o1 + signature_length + + share_hash_chain_length = len(self._share_pieces['share_hash_chain']) + o3 = offsets['block_hash_tree'] = o2 + share_hash_chain_length + + block_hash_tree_length = len(self._share_pieces['block_hash_tree']) + o4 = offsets['share_data'] = o3 + block_hash_tree_length + + share_data_length = len(self._share_pieces['sharedata']) + o5 = offsets['enc_privkey'] = o4 + share_data_length + + encprivkey_length = len(self._share_pieces['encprivkey']) + offsets['EOF'] = o5 + encprivkey_length + return offsets + + + def _get_offsets_tuple(self): + offsets = self._get_offsets_dict() + return tuple([(key, value) for key, value in offsets.items()]) + + + def _pack_offsets(self): + offsets = self._get_offsets_dict() + return struct.pack(">LLLLQQ", + offsets['signature'], + offsets['share_hash_chain'], + offsets['block_hash_tree'], + offsets['share_data'], + offsets['enc_privkey'], + offsets['EOF']) + + + def finish_publishing(self): + """ + Do anything necessary to finish writing the share to a remote + server. I require that no further publishing needs to take place + after this method has been called. + """ + for k in ["sharedata", "encprivkey", "signature", "verification_key", + "share_hash_chain", "block_hash_tree"]: + assert k in self._share_pieces + # This is the only method that actually writes something to the + # remote server. + # First, we need to pack the share into data that we can write + # to the remote server in one write. + offsets = self._pack_offsets() + prefix = self.get_signable() + final_share = "".join([prefix, + offsets, + self._share_pieces['verification_key'], + self._share_pieces['signature'], + self._share_pieces['share_hash_chain'], + self._share_pieces['block_hash_tree'], + self._share_pieces['sharedata'], + self._share_pieces['encprivkey']]) + + # Our only data vector is going to be writing the final share, + # in its entirely. + datavs = [(0, final_share)] + + if not self._testvs: + # Our caller has not provided us with another checkstring + # yet, so we assume that we are writing a new share, and set + # a test vector that will allow a new share to be written. + self._testvs = [] + self._testvs.append(tuple([0, 1, "eq", ""])) + new_share = True + + tw_vectors = {} + tw_vectors[self.shnum] = (self._testvs, datavs, None) + return self._rref.callRemote("slot_testv_and_readv_and_writev", + self._storage_index, + self._secrets, + tw_vectors, + # TODO is it useful to read something? + self._readvs) + + +MDMFHEADER = ">BQ32sBBQQ QQQQQQ" +MDMFHEADERWITHOUTOFFSETS = ">BQ32sBBQQ" +MDMFHEADERSIZE = struct.calcsize(MDMFHEADER) +MDMFHEADERWITHOUTOFFSETSSIZE = struct.calcsize(MDMFHEADERWITHOUTOFFSETS) +MDMFCHECKSTRING = ">BQ32s" +MDMFSIGNABLEHEADER = ">BQ32sBBQQ" +MDMFOFFSETS = ">QQQQQQ" +MDMFOFFSETS_LENGTH = struct.calcsize(MDMFOFFSETS) + +class MDMFSlotWriteProxy: + implements(IMutableSlotWriter) + + """ + I represent a remote write slot for an MDMF mutable file. 
+ + I abstract away from my caller the details of block and salt + management, and the implementation of the on-disk format for MDMF + shares. + """ + # Expected layout, MDMF: + # offset: size: name: + #-- signed part -- + # 0 1 version number (01) + # 1 8 sequence number + # 9 32 share tree root hash + # 41 1 The "k" encoding parameter + # 42 1 The "N" encoding parameter + # 43 8 The segment size of the uploaded file + # 51 8 The data length of the original plaintext + #-- end signed part -- + # 59 8 The offset of the encrypted private key + # 67 8 The offset of the block hash tree + # 75 8 The offset of the share hash chain + # 83 8 The offset of the signature + # 91 8 The offset of the verification key + # 99 8 The offset of the EOF + # + # followed by salts and share data, the encrypted private key, the + # block hash tree, the salt hash tree, the share hash chain, a + # signature over the first eight fields, and a verification key. + # + # The checkstring is the first three fields -- the version number, + # sequence number, root hash and root salt hash. This is consistent + # in meaning to what we have with SDMF files, except now instead of + # using the literal salt, we use a value derived from all of the + # salts -- the share hash root. + # + # The salt is stored before the block for each segment. The block + # hash tree is computed over the combination of block and salt for + # each segment. In this way, we get integrity checking for both + # block and salt with the current block hash tree arrangement. + # + # The ordering of the offsets is different to reflect the dependencies + # that we'll run into with an MDMF file. The expected write flow is + # something like this: + # + # 0: Initialize with the sequence number, encoding parameters and + # data length. From this, we can deduce the number of segments, + # and where they should go.. We can also figure out where the + # encrypted private key should go, because we can figure out how + # big the share data will be. + # + # 1: Encrypt, encode, and upload the file in chunks. Do something + # like + # + # put_block(data, segnum, salt) + # + # to write a block and a salt to the disk. We can do both of + # these operations now because we have enough of the offsets to + # know where to put them. + # + # 2: Put the encrypted private key. Use: + # + # put_encprivkey(encprivkey) + # + # Now that we know the length of the private key, we can fill + # in the offset for the block hash tree. + # + # 3: We're now in a position to upload the block hash tree for + # a share. Put that using something like: + # + # put_blockhashes(block_hash_tree) + # + # Note that block_hash_tree is a list of hashes -- we'll take + # care of the details of serializing that appropriately. When + # we get the block hash tree, we are also in a position to + # calculate the offset for the share hash chain, and fill that + # into the offsets table. + # + # 4: At the same time, we're in a position to upload the salt hash + # tree. This is a Merkle tree over all of the salts. We use a + # Merkle tree so that we can validate each block,salt pair as + # we download them later. We do this using + # + # put_salthashes(salt_hash_tree) + # + # When you do this, I automatically put the root of the tree + # (the hash at index 0 of the list) in its appropriate slot in + # the signed prefix of the share. + # + # 5: We're now in a position to upload the share hash chain for + # a share. 
Do that with something like: + # + # put_sharehashes(share_hash_chain) + # + # share_hash_chain should be a dictionary mapping shnums to + # 32-byte hashes -- the wrapper handles serialization. + # We'll know where to put the signature at this point, also. + # The root of this tree will be put explicitly in the next + # step. + # + # TODO: Why? Why not just include it in the tree here? + # + # 6: Before putting the signature, we must first put the + # root_hash. Do this with: + # + # put_root_hash(root_hash). + # + # In terms of knowing where to put this value, it was always + # possible to place it, but it makes sense semantically to + # place it after the share hash tree, so that's why you do it + # in this order. + # + # 6: With the root hash put, we can now sign the header. Use: + # + # get_signable() + # + # to get the part of the header that you want to sign, and use: + # + # put_signature(signature) + # + # to write your signature to the remote server. + # + # 6: Add the verification key, and finish. Do: + # + # put_verification_key(key) + # + # and + # + # finish_publish() + # + # Checkstring management: + # + # To write to a mutable slot, we have to provide test vectors to ensure + # that we are writing to the same data that we think we are. These + # vectors allow us to detect uncoordinated writes; that is, writes + # where both we and some other shareholder are writing to the + # mutable slot, and to report those back to the parts of the program + # doing the writing. + # + # With SDMF, this was easy -- all of the share data was written in + # one go, so it was easy to detect uncoordinated writes, and we only + # had to do it once. With MDMF, not all of the file is written at + # once. + # + # If a share is new, we write out as much of the header as we can + # before writing out anything else. This gives other writers a + # canary that they can use to detect uncoordinated writes, and, if + # they do the same thing, gives us the same canary. We them update + # the share. We won't be able to write out two fields of the header + # -- the share tree hash and the salt hash -- until we finish + # writing out the share. We only require the writer to provide the + # initial checkstring, and keep track of what it should be after + # updates ourselves. + # + # If we haven't written anything yet, then on the first write (which + # will probably be a block + salt of a share), we'll also write out + # the header. On subsequent passes, we'll expect to see the header. + # This changes in two places: + # + # - When we write out the salt hash + # - When we write out the root of the share hash tree + # + # since these values will change the header. It is possible that we + # can just make those be written in one operation to minimize + # disruption. + def __init__(self, + shnum, + rref, # a remote reference to a storage server + storage_index, + secrets, # (write_enabler, renew_secret, cancel_secret) + seqnum, # the sequence number of the mutable file + required_shares, + total_shares, + segment_size, + data_length): # the length of the original file + self.shnum = shnum + self._rref = rref + self._storage_index = storage_index + self._seqnum = seqnum + self._required_shares = required_shares + assert self.shnum >= 0 and self.shnum < total_shares + self._total_shares = total_shares + # We build up the offset table as we write things. It is the + # last thing we write to the remote server. 
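To make the calling order spelled out in the comment above concrete, here is a compact sketch for a single-segment share (illustrative only: the inputs -- rref, secrets, the encoded block and salt, the hash trees, and the keys -- are hypothetical stand-ins; each put_* call actually returns a Deferred that a real caller must chain; and every write goes out as a slot_testv_and_readv_and_writev guarded by the current checkstring):

    w = MDMFSlotWriteProxy(shnum, rref, storage_index, secrets,
                           seqnum, required_shares, total_shares,
                           segment_size, data_length)
    w.set_checkstring("")                # empty checkstring: the share must not exist yet
    w.put_block(encoded_block, 0, salt)  # salt + block for segment 0
    w.put_encprivkey(encrypted_privkey)  # fixes the block hash tree offset
    w.put_blockhashes(block_hash_tree)   # list of 32-byte hashes
    w.put_sharehashes(share_hash_chain)  # dict: shnum -> 32-byte hash
    w.put_root_hash(root_hash)           # also updates the remote checkstring
    w.put_signature(signing_key.sign(w.get_signable()))
    w.put_verification_key(verification_key)
    d = w.finish_publishing()            # the offset table is written last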
+ self._offsets = {} + self._testvs = [] + self._secrets = secrets + # The segment size needs to be a multiple of the k parameter -- + # any padding should have been carried out by the publisher + # already. + assert segment_size % required_shares == 0 + self._segment_size = segment_size + self._data_length = data_length + + # These are set later -- we define them here so that we can + # check for their existence easily + + # This is the root of the share hash tree -- the Merkle tree + # over the roots of the block hash trees computed for shares in + # this upload. + self._root_hash = None + + # We haven't yet written anything to the remote bucket. By + # setting this, we tell the _write method as much. The write + # method will then know that it also needs to add a write vector + # for the checkstring (or what we have of it) to the first write + # request. We'll then record that value for future use. If + # we're expecting something to be there already, we need to call + # set_checkstring before we write anything to tell the first + # write about that. + self._written = False + + # When writing data to the storage servers, we get a read vector + # for free. We'll read the checkstring, which will help us + # figure out what's gone wrong if a write fails. + self._readv = [(0, struct.calcsize(MDMFCHECKSTRING))] + + # We calculate the number of segments because it tells us + # where the salt part of the file ends/share segment begins, + # and also because it provides a useful amount of bounds checking. + self._num_segments = mathutil.div_ceil(self._data_length, + self._segment_size) + self._block_size = self._segment_size / self._required_shares + # We also calculate the share size, to help us with block + # constraints later. + tail_size = self._data_length % self._segment_size + if not tail_size: + self._tail_block_size = self._block_size + else: + self._tail_block_size = mathutil.next_multiple(tail_size, + self._required_shares) + self._tail_block_size /= self._required_shares + + # We already know where the sharedata starts; right after the end + # of the header (which is defined as the signable part + the offsets) + # We can also calculate where the encrypted private key begins + # from what we know know. + self._actual_block_size = self._block_size + SALT_SIZE + data_size = self._actual_block_size * (self._num_segments - 1) + data_size += self._tail_block_size + data_size += SALT_SIZE + self._offsets['enc_privkey'] = MDMFHEADERSIZE + self._offsets['enc_privkey'] += data_size + # We'll wait for the rest. Callers can now call my "put_block" and + # "set_checkstring" methods. + + + def set_checkstring(self, + seqnum_or_checkstring, + root_hash=None, + salt=None): + """ + Set checkstring checkstring for the given shnum. + + This can be invoked in one of two ways. + + With one argument, I assume that you are giving me a literal + checkstring -- e.g., the output of get_checkstring. I will then + set that checkstring as it is. This form is used by unit tests. + + With two arguments, I assume that you are giving me a sequence + number and root hash to make a checkstring from. In that case, I + will build a checkstring and set it for you. This form is used + by the publisher. + + By default, I assume that I am writing new shares to the grid. + If you don't explcitly set your own checkstring, I will use + one that requires that the remote share not exist. You will want + to use this method if you are updating a share in-place; + otherwise, writes will fail. 
+ """ + # You're allowed to overwrite checkstrings with this method; + # I assume that users know what they are doing when they call + # it. + if root_hash: + checkstring = struct.pack(MDMFCHECKSTRING, + 1, + seqnum_or_checkstring, + root_hash) + else: + checkstring = seqnum_or_checkstring + + if checkstring == "": + # We special-case this, since len("") = 0, but we need + # length of 1 for the case of an empty share to work on the + # storage server, which is what a checkstring that is the + # empty string means. + self._testvs = [] + else: + self._testvs = [] + self._testvs.append((0, len(checkstring), "eq", checkstring)) + + + def __repr__(self): + return "MDMFSlotWriteProxy for share %d" % self.shnum + + + def get_checkstring(self): + """ + Given a share number, I return a representation of what the + checkstring for that share on the server will look like. + + I am mostly used for tests. + """ + if self._root_hash: + roothash = self._root_hash + else: + roothash = "\x00" * 32 + return struct.pack(MDMFCHECKSTRING, + 1, + self._seqnum, + roothash) + + + def put_block(self, data, segnum, salt): + """ + Put the encrypted-and-encoded data segment in the slot, along + with the salt. + """ + if segnum >= self._num_segments: + raise LayoutInvalid("I won't overwrite the private key") + if len(salt) != SALT_SIZE: + raise LayoutInvalid("I was given a salt of size %d, but " + "I wanted a salt of size %d") + if segnum + 1 == self._num_segments: + if len(data) != self._tail_block_size: + raise LayoutInvalid("I was given the wrong size block to write") + elif len(data) != self._block_size: + raise LayoutInvalid("I was given the wrong size block to write") + + # We want to write at len(MDMFHEADER) + segnum * block_size. + + offset = MDMFHEADERSIZE + (self._actual_block_size * segnum) + data = salt + data + + datavs = [tuple([offset, data])] + return self._write(datavs) + + + def put_encprivkey(self, encprivkey): + """ + Put the encrypted private key in the remote slot. + """ + assert self._offsets + assert self._offsets['enc_privkey'] + # You shouldn't re-write the encprivkey after the block hash + # tree is written, since that could cause the private key to run + # into the block hash tree. Before it writes the block hash + # tree, the block hash tree writing method writes the offset of + # the salt hash tree. So that's a good indicator of whether or + # not the block hash tree has been written. + if "share_hash_chain" in self._offsets: + raise LayoutInvalid("You must write this before the block hash tree") + + self._offsets['block_hash_tree'] = self._offsets['enc_privkey'] + len(encprivkey) + datavs = [(tuple([self._offsets['enc_privkey'], encprivkey]))] + def _on_failure(): + del(self._offsets['block_hash_tree']) + return self._write(datavs, on_failure=_on_failure) + + + def put_blockhashes(self, blockhashes): + """ + Put the block hash tree in the remote slot. + + The encrypted private key must be put before the block hash + tree, since we need to know how large it is to know where the + block hash tree should go. The block hash tree must be put + before the salt hash tree, since its size determines the + offset of the share hash chain. + """ + assert self._offsets + assert isinstance(blockhashes, list) + if "block_hash_tree" not in self._offsets: + raise LayoutInvalid("You must put the encrypted private key " + "before you put the block hash tree") + # If written, the share hash chain causes the signature offset + # to be defined. 
+ if "signature" in self._offsets: + raise LayoutInvalid("You must put the block hash tree before " + "you put the share hash chain") + blockhashes_s = "".join(blockhashes) + self._offsets['share_hash_chain'] = self._offsets['block_hash_tree'] + len(blockhashes_s) + datavs = [] + datavs.append(tuple([self._offsets['block_hash_tree'], blockhashes_s])) + def _on_failure(): + del(self._offsets['share_hash_chain']) + return self._write(datavs, on_failure=_on_failure) + + + def put_sharehashes(self, sharehashes): + """ + Put the share hash chain in the remote slot. + + The salt hash tree must be put before the share hash chain, + since we need to know where the salt hash tree ends before we + can know where the share hash chain starts. The share hash chain + must be put before the signature, since the length of the packed + share hash chain determines the offset of the signature. Also, + semantically, you must know what the root of the salt hash tree + is before you can generate a valid signature. + """ + assert isinstance(sharehashes, dict) + if "share_hash_chain" not in self._offsets: + raise LayoutInvalid("You need to put the salt hash tree before " + "you can put the share hash chain") + # The signature comes after the share hash chain. If the + # signature has already been written, we must not write another + # share hash chain. The signature writes the verification key + # offset when it gets sent to the remote server, so we look for + # that. + if "verification_key" in self._offsets: + raise LayoutInvalid("You must write the share hash chain " + "before you write the signature") + datavs = [] + sharehashes_s = "".join([struct.pack(">H32s", i, sharehashes[i]) + for i in sorted(sharehashes.keys())]) + self._offsets['signature'] = self._offsets['share_hash_chain'] + len(sharehashes_s) + datavs.append(tuple([self._offsets['share_hash_chain'], sharehashes_s])) + def _on_failure(): + del(self._offsets['signature']) + return self._write(datavs, on_failure=_on_failure) + + + def put_root_hash(self, roothash): + """ + Put the root hash (the root of the share hash tree) in the + remote slot. + """ + # It does not make sense to be able to put the root + # hash without first putting the share hashes, since you need + # the share hashes to generate the root hash. + # + # Signature is defined by the routine that places the share hash + # chain, so it's a good thing to look for in finding out whether + # or not the share hash chain exists on the remote server. + if "signature" not in self._offsets: + raise LayoutInvalid("You need to put the share hash chain " + "before you can put the root share hash") + if len(roothash) != HASH_SIZE: + raise LayoutInvalid("hashes and salts must be exactly %d bytes" + % HASH_SIZE) + datavs = [] + self._root_hash = roothash + # To write both of these values, we update the checkstring on + # the remote server, which includes them + checkstring = self.get_checkstring() + datavs.append(tuple([0, checkstring])) + # This write, if successful, changes the checkstring, so we need + # to update our internal checkstring to be consistent with the + # one on the server. + def _on_success(): + self._testvs = [(0, len(checkstring), "eq", checkstring)] + def _on_failure(): + self._root_hash = None + return self._write(datavs, + on_success=_on_success, + on_failure=_on_failure) + + + def get_signable(self): + """ + Get the first seven fields of the mutable file; the parts that + are signed. 
+ """ + if not self._root_hash: + raise LayoutInvalid("You need to set the root hash " + "before getting something to " + "sign") + return struct.pack(MDMFSIGNABLEHEADER, + 1, + self._seqnum, + self._root_hash, + self._required_shares, + self._total_shares, + self._segment_size, + self._data_length) + + + def put_signature(self, signature): + """ + Put the signature field to the remote slot. + + I require that the root hash and share hash chain have been put + to the grid before I will write the signature to the grid. + """ + if "signature" not in self._offsets: + raise LayoutInvalid("You must put the share hash chain " + # It does not make sense to put a signature without first + # putting the root hash and the salt hash (since otherwise + # the signature would be incomplete), so we don't allow that. + "before putting the signature") + if not self._root_hash: + raise LayoutInvalid("You must complete the signed prefix " + "before computing a signature") + # If we put the signature after we put the verification key, we + # could end up running into the verification key, and will + # probably screw up the offsets as well. So we don't allow that. + # The method that writes the verification key defines the EOF + # offset before writing the verification key, so look for that. + if "EOF" in self._offsets: + raise LayoutInvalid("You must write the signature before the verification key") + + self._offsets['verification_key'] = self._offsets['signature'] + len(signature) + datavs = [] + datavs.append(tuple([self._offsets['signature'], signature])) + def _on_failure(): + del(self._offsets['verification_key']) + return self._write(datavs, on_failure=_on_failure) + + + def put_verification_key(self, verification_key): + """ + Put the verification key into the remote slot. + + I require that the signature have been written to the storage + server before I allow the verification key to be written to the + remote server. + """ + if "verification_key" not in self._offsets: + raise LayoutInvalid("You must put the signature before you " + "can put the verification key") + self._offsets['EOF'] = self._offsets['verification_key'] + len(verification_key) + datavs = [] + datavs.append(tuple([self._offsets['verification_key'], verification_key])) + def _on_failure(): + del(self._offsets['EOF']) + return self._write(datavs, on_failure=_on_failure) + + def _get_offsets_tuple(self): + return tuple([(key, value) for key, value in self._offsets.items()]) + + def get_verinfo(self): + return (self._seqnum, + self._root_hash, + self._required_shares, + self._total_shares, + self._segment_size, + self._data_length, + self.get_signable(), + self._get_offsets_tuple()) + + + def finish_publishing(self): + """ + Write the offset table and encoding parameters to the remote + slot, since that's the only thing we have yet to publish at this + point. 
+ """ + if "EOF" not in self._offsets: + raise LayoutInvalid("You must put the verification key before " + "you can publish the offsets") + offsets_offset = struct.calcsize(MDMFHEADERWITHOUTOFFSETS) + offsets = struct.pack(MDMFOFFSETS, + self._offsets['enc_privkey'], + self._offsets['block_hash_tree'], + self._offsets['share_hash_chain'], + self._offsets['signature'], + self._offsets['verification_key'], + self._offsets['EOF']) + datavs = [] + datavs.append(tuple([offsets_offset, offsets])) + encoding_parameters_offset = struct.calcsize(MDMFCHECKSTRING) + params = struct.pack(">BBQQ", + self._required_shares, + self._total_shares, + self._segment_size, + self._data_length) + datavs.append(tuple([encoding_parameters_offset, params])) + return self._write(datavs) + + + def _write(self, datavs, on_failure=None, on_success=None): + """I write the data vectors in datavs to the remote slot.""" + tw_vectors = {} + new_share = False + if not self._testvs: + self._testvs = [] + self._testvs.append(tuple([0, 1, "eq", ""])) + new_share = True + if not self._written: + # Write a new checkstring to the share when we write it, so + # that we have something to check later. + new_checkstring = self.get_checkstring() + datavs.append((0, new_checkstring)) + def _first_write(): + self._written = True + self._testvs = [(0, len(new_checkstring), "eq", new_checkstring)] + on_success = _first_write + tw_vectors[self.shnum] = (self._testvs, datavs, None) + datalength = sum([len(x[1]) for x in datavs]) + d = self._rref.callRemote("slot_testv_and_readv_and_writev", + self._storage_index, + self._secrets, + tw_vectors, + self._readv) + def _result(results): + if isinstance(results, failure.Failure) or not results[0]: + # Do nothing; the write was unsuccessful. + if on_failure: on_failure() + else: + if on_success: on_success() + return results + d.addCallback(_result) + return d + + +class MDMFSlotReadProxy: + """ + I read from a mutable slot filled with data written in the MDMF data + format (which is described above). + + I can be initialized with some amount of data, which I will use (if + it is valid) to eliminate some of the need to fetch it from servers. + """ + def __init__(self, + rref, + storage_index, + shnum, + data=""): + # Start the initialization process. + self._rref = rref + self._storage_index = storage_index + self.shnum = shnum + + # Before doing anything, the reader is probably going to want to + # verify that the signature is correct. To do that, they'll need + # the verification key, and the signature. To get those, we'll + # need the offset table. So fetch the offset table on the + # assumption that that will be the first thing that a reader is + # going to do. + + # The fact that these encoding parameters are None tells us + # that we haven't yet fetched them from the remote share, so we + # should. We could just not set them, but the checks will be + # easier to read if we don't have to use hasattr. + self._version_number = None + self._sequence_number = None + self._root_hash = None + # Filled in if we're dealing with an SDMF file. Unused + # otherwise. + self._salt = None + self._required_shares = None + self._total_shares = None + self._segment_size = None + self._data_length = None + self._offsets = None + + # If the user has chosen to initialize us with some data, we'll + # try to satisfy subsequent data requests with that data before + # asking the storage server for it. 
If + self._data = data + # The way callers interact with cache in the filenode returns + # None if there isn't any cached data, but the way we index the + # cached data requires a string, so convert None to "". + if self._data == None: + self._data = "" + + self._queue_observers = observer.ObserverList() + self._queue_errbacks = observer.ObserverList() + self._readvs = [] + + + def _maybe_fetch_offsets_and_header(self, force_remote=False): + """ + I fetch the offset table and the header from the remote slot if + I don't already have them. If I do have them, I do nothing and + return an empty Deferred. + """ + if self._offsets: + return defer.succeed(None) + # At this point, we may be either SDMF or MDMF. Fetching 107 + # bytes will be enough to get header and offsets for both SDMF and + # MDMF, though we'll be left with 4 more bytes than we + # need if this ends up being MDMF. This is probably less + # expensive than the cost of a second roundtrip. + readvs = [(0, 107)] + d = self._read(readvs, force_remote) + d.addCallback(self._process_encoding_parameters) + d.addCallback(self._process_offsets) + return d + + + def _process_encoding_parameters(self, encoding_parameters): + assert self.shnum in encoding_parameters + encoding_parameters = encoding_parameters[self.shnum][0] + # The first byte is the version number. It will tell us what + # to do next. + (verno,) = struct.unpack(">B", encoding_parameters[:1]) + if verno == MDMF_VERSION: + read_size = MDMFHEADERWITHOUTOFFSETSSIZE + (verno, + seqnum, + root_hash, + k, + n, + segsize, + datalen) = struct.unpack(MDMFHEADERWITHOUTOFFSETS, + encoding_parameters[:read_size]) + if segsize == 0 and datalen == 0: + # Empty file, no segments. + self._num_segments = 0 + else: + self._num_segments = mathutil.div_ceil(datalen, segsize) + + elif verno == SDMF_VERSION: + read_size = SIGNED_PREFIX_LENGTH + (verno, + seqnum, + root_hash, + salt, + k, + n, + segsize, + datalen) = struct.unpack(">BQ32s16s BBQQ", + encoding_parameters[:SIGNED_PREFIX_LENGTH]) + self._salt = salt + if segsize == 0 and datalen == 0: + # empty file + self._num_segments = 0 + else: + # non-empty SDMF files have one segment. + self._num_segments = 1 + else: + raise UnknownVersionError("You asked me to read mutable file " + "version %d, but I only understand " + "%d and %d" % (verno, SDMF_VERSION, + MDMF_VERSION)) + + self._version_number = verno + self._sequence_number = seqnum + self._root_hash = root_hash + self._required_shares = k + self._total_shares = n + self._segment_size = segsize + self._data_length = datalen + + self._block_size = self._segment_size / self._required_shares + # We can upload empty files, and need to account for this fact + # so as to avoid zero-division and zero-modulo errors. 
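# A worked example of the tail-block arithmetic that follows, using the
# parameters of the tail-segment test share later in this patch
# (datalen=33, segsize=6, k=3): tail_size = 33 % 6 = 3,
# mathutil.next_multiple(3, 3) = 3, so the tail block is 3 / 3 = 1 byte,
# while full blocks are segsize / k = 6 / 3 = 2 bytes. This is the 1-byte
# tail block that test_read_with_different_tail_segment_size expects.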
+ if datalen > 0: + tail_size = self._data_length % self._segment_size + else: + tail_size = 0 + if not tail_size: + self._tail_block_size = self._block_size + else: + self._tail_block_size = mathutil.next_multiple(tail_size, + self._required_shares) + self._tail_block_size /= self._required_shares + + return encoding_parameters + + + def _process_offsets(self, offsets): + if self._version_number == 0: + read_size = OFFSETS_LENGTH + read_offset = SIGNED_PREFIX_LENGTH + end = read_size + read_offset + (signature, + share_hash_chain, + block_hash_tree, + share_data, + enc_privkey, + EOF) = struct.unpack(">LLLLQQ", + offsets[read_offset:end]) + self._offsets = {} + self._offsets['signature'] = signature + self._offsets['share_data'] = share_data + self._offsets['block_hash_tree'] = block_hash_tree + self._offsets['share_hash_chain'] = share_hash_chain + self._offsets['enc_privkey'] = enc_privkey + self._offsets['EOF'] = EOF + + elif self._version_number == 1: + read_offset = MDMFHEADERWITHOUTOFFSETSSIZE + read_length = MDMFOFFSETS_LENGTH + end = read_offset + read_length + (encprivkey, + blockhashes, + sharehashes, + signature, + verification_key, + eof) = struct.unpack(MDMFOFFSETS, + offsets[read_offset:end]) + self._offsets = {} + self._offsets['enc_privkey'] = encprivkey + self._offsets['block_hash_tree'] = blockhashes + self._offsets['share_hash_chain'] = sharehashes + self._offsets['signature'] = signature + self._offsets['verification_key'] = verification_key + self._offsets['EOF'] = eof + + + def get_block_and_salt(self, segnum, queue=False): + """ + I return (block, salt), where block is the block data and + salt is the salt used to encrypt that segment. + """ + d = self._maybe_fetch_offsets_and_header() + def _then(ignored): + if self._version_number == 1: + base_share_offset = MDMFHEADERSIZE + else: + base_share_offset = self._offsets['share_data'] + + if segnum + 1 > self._num_segments: + raise LayoutInvalid("Not a valid segment number") + + if self._version_number == 0: + share_offset = base_share_offset + self._block_size * segnum + else: + share_offset = base_share_offset + (self._block_size + \ + SALT_SIZE) * segnum + if segnum + 1 == self._num_segments: + data = self._tail_block_size + else: + data = self._block_size + + if self._version_number == 1: + data += SALT_SIZE + + readvs = [(share_offset, data)] + return readvs + d.addCallback(_then) + d.addCallback(lambda readvs: + self._read(readvs, queue=queue)) + def _process_results(results): + assert self.shnum in results + if self._version_number == 0: + # We only read the share data, but we know the salt from + # when we fetched the header + data = results[self.shnum] + if not data: + data = "" + else: + assert len(data) == 1 + data = data[0] + salt = self._salt + else: + data = results[self.shnum] + if not data: + salt = data = "" + else: + salt_and_data = results[self.shnum][0] + salt = salt_and_data[:SALT_SIZE] + data = salt_and_data[SALT_SIZE:] + return data, salt + d.addCallback(_process_results) + return d + + + def get_blockhashes(self, needed=None, queue=False, force_remote=False): + """ + I return the block hash tree + + I take an optional argument, needed, which is a set of indices + correspond to hashes that I should fetch. If this argument is + missing, I will fetch the entire block hash tree; otherwise, I + may attempt to fetch fewer hashes, based on what needed says + that I should do. 
Note that I may fetch as many hashes as I + want, so long as the set of hashes that I do fetch is a superset + of the ones that I am asked for, so callers should be prepared + to tolerate additional hashes. + """ + # TODO: Return only the parts of the block hash tree necessary + # to validate the blocknum provided? + # This is a good idea, but it is hard to implement correctly. It + # is bad to fetch any one block hash more than once, so we + # probably just want to fetch the whole thing at once and then + # serve it. + if needed == set([]): + return defer.succeed([]) + d = self._maybe_fetch_offsets_and_header() + def _then(ignored): + blockhashes_offset = self._offsets['block_hash_tree'] + if self._version_number == 1: + blockhashes_length = self._offsets['share_hash_chain'] - blockhashes_offset + else: + blockhashes_length = self._offsets['share_data'] - blockhashes_offset + readvs = [(blockhashes_offset, blockhashes_length)] + return readvs + d.addCallback(_then) + d.addCallback(lambda readvs: + self._read(readvs, queue=queue, force_remote=force_remote)) + def _build_block_hash_tree(results): + assert self.shnum in results + + rawhashes = results[self.shnum][0] + results = [rawhashes[i:i+HASH_SIZE] + for i in range(0, len(rawhashes), HASH_SIZE)] + return results + d.addCallback(_build_block_hash_tree) + return d + + + def get_sharehashes(self, needed=None, queue=False, force_remote=False): + """ + I return the part of the share hash chain placed to validate + this share. + + I take an optional argument, needed. Needed is a set of indices + that correspond to the hashes that I should fetch. If needed is + not present, I will fetch and return the entire share hash + chain. Otherwise, I may fetch and return any part of the share + hash chain that is a superset of the part that I am asked to + fetch. Callers should be prepared to deal with more hashes than + they've asked for. + """ + if needed == set([]): + return defer.succeed([]) + d = self._maybe_fetch_offsets_and_header() + + def _make_readvs(ignored): + sharehashes_offset = self._offsets['share_hash_chain'] + if self._version_number == 0: + sharehashes_length = self._offsets['block_hash_tree'] - sharehashes_offset + else: + sharehashes_length = self._offsets['signature'] - sharehashes_offset + readvs = [(sharehashes_offset, sharehashes_length)] + return readvs + d.addCallback(_make_readvs) + d.addCallback(lambda readvs: + self._read(readvs, queue=queue, force_remote=force_remote)) + def _build_share_hash_chain(results): + assert self.shnum in results + + sharehashes = results[self.shnum][0] + results = [sharehashes[i:i+(HASH_SIZE + 2)] + for i in range(0, len(sharehashes), HASH_SIZE + 2)] + results = dict([struct.unpack(">H32s", data) + for data in results]) + return results + d.addCallback(_build_share_hash_chain) + return d + + + def get_encprivkey(self, queue=False): + """ + I return the encrypted private key. 
+ """ + d = self._maybe_fetch_offsets_and_header() + + def _make_readvs(ignored): + privkey_offset = self._offsets['enc_privkey'] + if self._version_number == 0: + privkey_length = self._offsets['EOF'] - privkey_offset + else: + privkey_length = self._offsets['block_hash_tree'] - privkey_offset + readvs = [(privkey_offset, privkey_length)] + return readvs + d.addCallback(_make_readvs) + d.addCallback(lambda readvs: + self._read(readvs, queue=queue)) + def _process_results(results): + assert self.shnum in results + privkey = results[self.shnum][0] + return privkey + d.addCallback(_process_results) + return d + + + def get_signature(self, queue=False): + """ + I return the signature of my share. + """ + d = self._maybe_fetch_offsets_and_header() + + def _make_readvs(ignored): + signature_offset = self._offsets['signature'] + if self._version_number == 1: + signature_length = self._offsets['verification_key'] - signature_offset + else: + signature_length = self._offsets['share_hash_chain'] - signature_offset + readvs = [(signature_offset, signature_length)] + return readvs + d.addCallback(_make_readvs) + d.addCallback(lambda readvs: + self._read(readvs, queue=queue)) + def _process_results(results): + assert self.shnum in results + signature = results[self.shnum][0] + return signature + d.addCallback(_process_results) + return d + + + def get_verification_key(self, queue=False): + """ + I return the verification key. + """ + d = self._maybe_fetch_offsets_and_header() + + def _make_readvs(ignored): + if self._version_number == 1: + vk_offset = self._offsets['verification_key'] + vk_length = self._offsets['EOF'] - vk_offset + else: + vk_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ") + vk_length = self._offsets['signature'] - vk_offset + readvs = [(vk_offset, vk_length)] + return readvs + d.addCallback(_make_readvs) + d.addCallback(lambda readvs: + self._read(readvs, queue=queue)) + def _process_results(results): + assert self.shnum in results + verification_key = results[self.shnum][0] + return verification_key + d.addCallback(_process_results) + return d + + + def get_encoding_parameters(self): + """ + I return (k, n, segsize, datalen) + """ + d = self._maybe_fetch_offsets_and_header() + d.addCallback(lambda ignored: + (self._required_shares, + self._total_shares, + self._segment_size, + self._data_length)) + return d + + + def get_seqnum(self): + """ + I return the sequence number for this share. + """ + d = self._maybe_fetch_offsets_and_header() + d.addCallback(lambda ignored: + self._sequence_number) + return d + + + def get_root_hash(self): + """ + I return the root of the block hash tree + """ + d = self._maybe_fetch_offsets_and_header() + d.addCallback(lambda ignored: self._root_hash) + return d + + + def get_checkstring(self): + """ + I return the packed representation of the following: + + - version number + - sequence number + - root hash + - salt hash + + which my users use as a checkstring to detect other writers. 
+ """ + d = self._maybe_fetch_offsets_and_header() + def _build_checkstring(ignored): + if self._salt: + checkstring = strut.pack(PREFIX, + self._version_number, + self._sequence_number, + self._root_hash, + self._salt) + else: + checkstring = struct.pack(MDMFCHECKSTRING, + self._version_number, + self._sequence_number, + self._root_hash) + + return checkstring + d.addCallback(_build_checkstring) + return d + + + def get_prefix(self, force_remote): + d = self._maybe_fetch_offsets_and_header(force_remote) + d.addCallback(lambda ignored: + self._build_prefix()) + return d + + + def _build_prefix(self): + # The prefix is another name for the part of the remote share + # that gets signed. It consists of everything up to and + # including the datalength, packed by struct. + if self._version_number == SDMF_VERSION: + return struct.pack(SIGNED_PREFIX, + self._version_number, + self._sequence_number, + self._root_hash, + self._salt, + self._required_shares, + self._total_shares, + self._segment_size, + self._data_length) + + else: + return struct.pack(MDMFSIGNABLEHEADER, + self._version_number, + self._sequence_number, + self._root_hash, + self._required_shares, + self._total_shares, + self._segment_size, + self._data_length) + + + def _get_offsets_tuple(self): + # The offsets tuple is another component of the version + # information tuple. It is basically our offsets dictionary, + # itemized and in a tuple. + return self._offsets.copy() + + + def get_verinfo(self): + """ + I return my verinfo tuple. This is used by the ServermapUpdater + to keep track of versions of mutable files. + + The verinfo tuple for MDMF files contains: + - seqnum + - root hash + - a blank (nothing) + - segsize + - datalen + - k + - n + - prefix (the thing that you sign) + - a tuple of offsets + + We include the nonce in MDMF to simplify processing of version + information tuples. + + The verinfo tuple for SDMF files is the same, but contains a + 16-byte IV instead of a hash of salts. + """ + d = self._maybe_fetch_offsets_and_header() + def _build_verinfo(ignored): + if self._version_number == SDMF_VERSION: + salt_to_use = self._salt + else: + salt_to_use = None + return (self._sequence_number, + self._root_hash, + salt_to_use, + self._segment_size, + self._data_length, + self._required_shares, + self._total_shares, + self._build_prefix(), + self._get_offsets_tuple()) + d.addCallback(_build_verinfo) + return d + + + def flush(self): + """ + I flush my queue of read vectors. + """ + d = self._read(self._readvs) + def _then(results): + self._readvs = [] + if isinstance(results, failure.Failure): + self._queue_errbacks.notify(results) + else: + self._queue_observers.notify(results) + self._queue_observers = observer.ObserverList() + self._queue_errbacks = observer.ObserverList() + d.addBoth(_then) + + + def _read(self, readvs, force_remote=False, queue=False): + unsatisfiable = filter(lambda x: x[0] + x[1] > len(self._data), readvs) + # TODO: It's entirely possible to tweak this so that it just + # fulfills the requests that it can, and not demand that all + # requests are satisfiable before running it. 
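# For reference, the shapes involved here (a sketch; the offsets shown are
# made up, the real ones come from the offset table fetched above):
#     readvs  = [(0, 107), (107, 18)]             # (offset, length) pairs
#     results = {self.shnum: [header_bytes, block_and_salt_bytes]}
# Both the local-cache path below and a remote slot_readv answer in that
# {shnum: [one string per read vector]} form, which is why callers index
# results[self.shnum][0] throughout this class.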
+ if not unsatisfiable and not force_remote: + results = [self._data[offset:offset+length] + for (offset, length) in readvs] + results = {self.shnum: results} + return defer.succeed(results) + else: + if queue: + start = len(self._readvs) + self._readvs += readvs + end = len(self._readvs) + def _get_results(results, start, end): + if not self.shnum in results: + return {self._shnum: [""]} + return {self.shnum: results[self.shnum][start:end]} + d = defer.Deferred() + d.addCallback(_get_results, start, end) + self._queue_observers.subscribe(d.callback) + self._queue_errbacks.subscribe(d.errback) + return d + return self._rref.callRemote("slot_readv", + self._storage_index, + [self.shnum], + readvs) + + + def is_sdmf(self): + """I tell my caller whether or not my remote file is SDMF or MDMF + """ + d = self._maybe_fetch_offsets_and_header() + d.addCallback(lambda ignored: + self._version_number == 0) + return d + + +class LayoutInvalid(Exception): + """ + This isn't a valid MDMF mutable file + """ hunk ./src/allmydata/test/test_storage.py 2 -import time, os.path, stat, re, simplejson, struct +import time, os.path, stat, re, simplejson, struct, shutil from twisted.trial import unittest hunk ./src/allmydata/test/test_storage.py 22 from allmydata.storage.expirer import LeaseCheckingCrawler from allmydata.immutable.layout import WriteBucketProxy, WriteBucketProxy_v2, \ ReadBucketProxy -from allmydata.interfaces import BadWriteEnablerError -from allmydata.test.common import LoggingServiceParent +from allmydata.mutable.layout import MDMFSlotWriteProxy, MDMFSlotReadProxy, \ + LayoutInvalid, MDMFSIGNABLEHEADER, \ + SIGNED_PREFIX, MDMFHEADER, \ + MDMFOFFSETS, SDMFSlotWriteProxy +from allmydata.interfaces import BadWriteEnablerError, MDMF_VERSION, \ + SDMF_VERSION +from allmydata.test.common import LoggingServiceParent, ShouldFailMixin from allmydata.test.common_web import WebRenderingMixin from allmydata.web.storage import StorageStatus, remove_prefix hunk ./src/allmydata/test/test_storage.py 106 class RemoteBucket: + def __init__(self): + self.read_count = 0 + self.write_count = 0 + def callRemote(self, methname, *args, **kwargs): def _call(): meth = getattr(self.target, "remote_" + methname) hunk ./src/allmydata/test/test_storage.py 114 return meth(*args, **kwargs) + + if methname == "slot_readv": + self.read_count += 1 + if "writev" in methname: + self.write_count += 1 + return defer.maybeDeferred(_call) hunk ./src/allmydata/test/test_storage.py 122 + class BucketProxy(unittest.TestCase): def make_bucket(self, name, size): basedir = os.path.join("storage", "BucketProxy", name) hunk ./src/allmydata/test/test_storage.py 1299 self.failUnless(os.path.exists(prefixdir), prefixdir) self.failIf(os.path.exists(bucketdir), bucketdir) + +class MDMFProxies(unittest.TestCase, ShouldFailMixin): + def setUp(self): + self.sparent = LoggingServiceParent() + self._lease_secret = itertools.count() + self.ss = self.create("MDMFProxies storage test server") + self.rref = RemoteBucket() + self.rref.target = self.ss + self.secrets = (self.write_enabler("we_secret"), + self.renew_secret("renew_secret"), + self.cancel_secret("cancel_secret")) + self.segment = "aaaaaa" + self.block = "aa" + self.salt = "a" * 16 + self.block_hash = "a" * 32 + self.block_hash_tree = [self.block_hash for i in xrange(6)] + self.share_hash = self.block_hash + self.share_hash_chain = dict([(i, self.share_hash) for i in xrange(6)]) + self.signature = "foobarbaz" + self.verification_key = "vvvvvv" + self.encprivkey = "private" + self.root_hash = 
self.block_hash + self.salt_hash = self.root_hash + self.salt_hash_tree = [self.salt_hash for i in xrange(6)] + self.block_hash_tree_s = self.serialize_blockhashes(self.block_hash_tree) + self.share_hash_chain_s = self.serialize_sharehashes(self.share_hash_chain) + # blockhashes and salt hashes are serialized in the same way, + # only we lop off the first element and store that in the + # header. + self.salt_hash_tree_s = self.serialize_blockhashes(self.salt_hash_tree[1:]) + + + def tearDown(self): + self.sparent.stopService() + shutil.rmtree(self.workdir("MDMFProxies storage test server")) + + + def write_enabler(self, we_tag): + return hashutil.tagged_hash("we_blah", we_tag) + + + def renew_secret(self, tag): + return hashutil.tagged_hash("renew_blah", str(tag)) + + + def cancel_secret(self, tag): + return hashutil.tagged_hash("cancel_blah", str(tag)) + + + def workdir(self, name): + basedir = os.path.join("storage", "MutableServer", name) + return basedir + + + def create(self, name): + workdir = self.workdir(name) + ss = StorageServer(workdir, "\x00" * 20) + ss.setServiceParent(self.sparent) + return ss + + + def build_test_mdmf_share(self, tail_segment=False, empty=False): + # Start with the checkstring + data = struct.pack(">BQ32s", + 1, + 0, + self.root_hash) + self.checkstring = data + # Next, the encoding parameters + if tail_segment: + data += struct.pack(">BBQQ", + 3, + 10, + 6, + 33) + elif empty: + data += struct.pack(">BBQQ", + 3, + 10, + 0, + 0) + else: + data += struct.pack(">BBQQ", + 3, + 10, + 6, + 36) + # Now we'll build the offsets. + sharedata = "" + if not tail_segment and not empty: + for i in xrange(6): + sharedata += self.salt + self.block + elif tail_segment: + for i in xrange(5): + sharedata += self.salt + self.block + sharedata += self.salt + "a" + + # The encrypted private key comes after the shares + salts + offset_size = struct.calcsize(MDMFOFFSETS) + encrypted_private_key_offset = len(data) + offset_size + len(sharedata) + # The blockhashes come after the private key + blockhashes_offset = encrypted_private_key_offset + len(self.encprivkey) + # The sharehashes come after the salt hashes + sharehashes_offset = blockhashes_offset + len(self.block_hash_tree_s) + # The signature comes after the share hash chain + signature_offset = sharehashes_offset + len(self.share_hash_chain_s) + # The verification key comes after the signature + verification_offset = signature_offset + len(self.signature) + # The EOF comes after the verification key + eof_offset = verification_offset + len(self.verification_key) + data += struct.pack(MDMFOFFSETS, + encrypted_private_key_offset, + blockhashes_offset, + sharehashes_offset, + signature_offset, + verification_offset, + eof_offset) + self.offsets = {} + self.offsets['enc_privkey'] = encrypted_private_key_offset + self.offsets['block_hash_tree'] = blockhashes_offset + self.offsets['share_hash_chain'] = sharehashes_offset + self.offsets['signature'] = signature_offset + self.offsets['verification_key'] = verification_offset + self.offsets['EOF'] = eof_offset + # Next, we'll add in the salts and share data, + data += sharedata + # the private key, + data += self.encprivkey + # the block hash tree, + data += self.block_hash_tree_s + # the share hash chain, + data += self.share_hash_chain_s + # the signature, + data += self.signature + # and the verification key + data += self.verification_key + return data + + + def write_test_share_to_server(self, + storage_index, + tail_segment=False, + empty=False): + """ + I write some data for 
the read tests to read to self.ss + + If tail_segment=True, then I will write a share that has a + smaller tail segment than other segments. + """ + write = self.ss.remote_slot_testv_and_readv_and_writev + data = self.build_test_mdmf_share(tail_segment, empty) + # Finally, we write the whole thing to the storage server in one + # pass. + testvs = [(0, 1, "eq", "")] + tws = {} + tws[0] = (testvs, [(0, data)], None) + readv = [(0, 1)] + results = write(storage_index, self.secrets, tws, readv) + self.failUnless(results[0]) + + + def build_test_sdmf_share(self, empty=False): + if empty: + sharedata = "" + else: + sharedata = self.segment * 6 + self.sharedata = sharedata + blocksize = len(sharedata) / 3 + block = sharedata[:blocksize] + self.blockdata = block + prefix = struct.pack(">BQ32s16s BBQQ", + 0, # version, + 0, + self.root_hash, + self.salt, + 3, + 10, + len(sharedata), + len(sharedata), + ) + post_offset = struct.calcsize(">BQ32s16sBBQQLLLLQQ") + signature_offset = post_offset + len(self.verification_key) + sharehashes_offset = signature_offset + len(self.signature) + blockhashes_offset = sharehashes_offset + len(self.share_hash_chain_s) + sharedata_offset = blockhashes_offset + len(self.block_hash_tree_s) + encprivkey_offset = sharedata_offset + len(block) + eof_offset = encprivkey_offset + len(self.encprivkey) + offsets = struct.pack(">LLLLQQ", + signature_offset, + sharehashes_offset, + blockhashes_offset, + sharedata_offset, + encprivkey_offset, + eof_offset) + final_share = "".join([prefix, + offsets, + self.verification_key, + self.signature, + self.share_hash_chain_s, + self.block_hash_tree_s, + block, + self.encprivkey]) + self.offsets = {} + self.offsets['signature'] = signature_offset + self.offsets['share_hash_chain'] = sharehashes_offset + self.offsets['block_hash_tree'] = blockhashes_offset + self.offsets['share_data'] = sharedata_offset + self.offsets['enc_privkey'] = encprivkey_offset + self.offsets['EOF'] = eof_offset + return final_share + + + def write_sdmf_share_to_server(self, + storage_index, + empty=False): + # Some tests need SDMF shares to verify that we can still + # read them. This method writes one, which resembles but is not + assert self.rref + write = self.ss.remote_slot_testv_and_readv_and_writev + share = self.build_test_sdmf_share(empty) + testvs = [(0, 1, "eq", "")] + tws = {} + tws[0] = (testvs, [(0, share)], None) + readv = [] + results = write(storage_index, self.secrets, tws, readv) + self.failUnless(results[0]) + + + def test_read(self): + self.write_test_share_to_server("si1") + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + # Check that every method equals what we expect it to. 
+ d = defer.succeed(None) + def _check_block_and_salt((block, salt)): + self.failUnlessEqual(block, self.block) + self.failUnlessEqual(salt, self.salt) + + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mr.get_block_and_salt(i)) + d.addCallback(_check_block_and_salt) + + d.addCallback(lambda ignored: + mr.get_encprivkey()) + d.addCallback(lambda encprivkey: + self.failUnlessEqual(self.encprivkey, encprivkey)) + + d.addCallback(lambda ignored: + mr.get_blockhashes()) + d.addCallback(lambda blockhashes: + self.failUnlessEqual(self.block_hash_tree, blockhashes)) + + d.addCallback(lambda ignored: + mr.get_sharehashes()) + d.addCallback(lambda sharehashes: + self.failUnlessEqual(self.share_hash_chain, sharehashes)) + + d.addCallback(lambda ignored: + mr.get_signature()) + d.addCallback(lambda signature: + self.failUnlessEqual(signature, self.signature)) + + d.addCallback(lambda ignored: + mr.get_verification_key()) + d.addCallback(lambda verification_key: + self.failUnlessEqual(verification_key, self.verification_key)) + + d.addCallback(lambda ignored: + mr.get_seqnum()) + d.addCallback(lambda seqnum: + self.failUnlessEqual(seqnum, 0)) + + d.addCallback(lambda ignored: + mr.get_root_hash()) + d.addCallback(lambda root_hash: + self.failUnlessEqual(self.root_hash, root_hash)) + + d.addCallback(lambda ignored: + mr.get_seqnum()) + d.addCallback(lambda seqnum: + self.failUnlessEqual(0, seqnum)) + + d.addCallback(lambda ignored: + mr.get_encoding_parameters()) + def _check_encoding_parameters((k, n, segsize, datalen)): + self.failUnlessEqual(k, 3) + self.failUnlessEqual(n, 10) + self.failUnlessEqual(segsize, 6) + self.failUnlessEqual(datalen, 36) + d.addCallback(_check_encoding_parameters) + + d.addCallback(lambda ignored: + mr.get_checkstring()) + d.addCallback(lambda checkstring: + self.failUnlessEqual(checkstring, checkstring)) + return d + + + def test_read_with_different_tail_segment_size(self): + self.write_test_share_to_server("si1", tail_segment=True) + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + d = mr.get_block_and_salt(5) + def _check_tail_segment(results): + block, salt = results + self.failUnlessEqual(len(block), 1) + self.failUnlessEqual(block, "a") + d.addCallback(_check_tail_segment) + return d + + + def test_get_block_with_invalid_segnum(self): + self.write_test_share_to_server("si1") + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + d = defer.succeed(None) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "test invalid segnum", + None, + mr.get_block_and_salt, 7)) + return d + + + def test_get_encoding_parameters_first(self): + self.write_test_share_to_server("si1") + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + d = mr.get_encoding_parameters() + def _check_encoding_parameters((k, n, segment_size, datalen)): + self.failUnlessEqual(k, 3) + self.failUnlessEqual(n, 10) + self.failUnlessEqual(segment_size, 6) + self.failUnlessEqual(datalen, 36) + d.addCallback(_check_encoding_parameters) + return d + + + def test_get_seqnum_first(self): + self.write_test_share_to_server("si1") + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + d = mr.get_seqnum() + d.addCallback(lambda seqnum: + self.failUnlessEqual(seqnum, 0)) + return d + + + def test_get_root_hash_first(self): + self.write_test_share_to_server("si1") + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + d = mr.get_root_hash() + d.addCallback(lambda root_hash: + self.failUnlessEqual(root_hash, self.root_hash)) + return d + + + def test_get_checkstring_first(self): + self.write_test_share_to_server("si1") + mr = 
MDMFSlotReadProxy(self.rref, "si1", 0) + d = mr.get_checkstring() + d.addCallback(lambda checkstring: + self.failUnlessEqual(checkstring, self.checkstring)) + return d + + + def test_write_read_vectors(self): + # When writing for us, the storage server will return to us a + # read vector, along with its result. If a write fails because + # the test vectors failed, this read vector can help us to + # diagnose the problem. This test ensures that the read vector + # is working appropriately. + mw = self._make_new_mw("si1", 0) + d = defer.succeed(None) + + # Write one share. This should return a checkstring of nothing, + # since there is no data there. + d.addCallback(lambda ignored: + mw.put_block(self.block, 0, self.salt)) + def _check_first_write(results): + result, readvs = results + self.failUnless(result) + self.failIf(readvs) + d.addCallback(_check_first_write) + # Now, there should be a different checkstring returned when + # we write other shares + d.addCallback(lambda ignored: + mw.put_block(self.block, 1, self.salt)) + def _check_next_write(results): + result, readvs = results + self.failUnless(result) + self.expected_checkstring = mw.get_checkstring() + self.failUnlessIn(0, readvs) + self.failUnlessEqual(readvs[0][0], self.expected_checkstring) + d.addCallback(_check_next_write) + # Add the other four shares + for i in xrange(2, 6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + d.addCallback(_check_next_write) + # Add the encrypted private key + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + d.addCallback(_check_next_write) + # Add the block hash tree and share hash tree + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + d.addCallback(_check_next_write) + d.addCallback(lambda ignored: + mw.put_sharehashes(self.share_hash_chain)) + d.addCallback(_check_next_write) + # Add the root hash and the salt hash. This should change the + # checkstring, but not in a way that we'll be able to see right + # now, since the read vectors are applied before the write + # vectors. + d.addCallback(lambda ignored: + mw.put_root_hash(self.root_hash)) + def _check_old_testv_after_new_one_is_written(results): + result, readvs = results + self.failUnless(result) + self.failUnlessIn(0, readvs) + self.failUnlessEqual(self.expected_checkstring, + readvs[0][0]) + new_checkstring = mw.get_checkstring() + self.failIfEqual(new_checkstring, + readvs[0][0]) + d.addCallback(_check_old_testv_after_new_one_is_written) + # Now add the signature. This should succeed, meaning that the + # data gets written and the read vector matches what the writer + # thinks should be there. + d.addCallback(lambda ignored: + mw.put_signature(self.signature)) + d.addCallback(_check_next_write) + # The checkstring remains the same for the rest of the process. + return d + + + def test_blockhashes_after_share_hash_chain(self): + mw = self._make_new_mw("si1", 0) + d = defer.succeed(None) + # Put everything up to and including the share hash chain + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + d.addCallback(lambda ignored: + mw.put_sharehashes(self.share_hash_chain)) + + # Now try to put the block hash tree again. 
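# (This should raise LayoutInvalid: put_sharehashes has already recorded
# the 'signature' offset, and put_blockhashes refuses to run once that
# offset exists, because the share hash chain's position was computed from
# the length of the block hash tree already written.)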
+ d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "test repeat salthashes", + None, + mw.put_blockhashes, self.block_hash_tree)) + return d + + + def test_encprivkey_after_blockhashes(self): + mw = self._make_new_mw("si1", 0) + d = defer.succeed(None) + # Put everything up to and including the block hash tree + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "out of order private key", + None, + mw.put_encprivkey, self.encprivkey)) + return d + + + def test_share_hash_chain_after_signature(self): + mw = self._make_new_mw("si1", 0) + d = defer.succeed(None) + # Put everything up to and including the signature + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + d.addCallback(lambda ignored: + mw.put_sharehashes(self.share_hash_chain)) + d.addCallback(lambda ignored: + mw.put_root_hash(self.root_hash)) + d.addCallback(lambda ignored: + mw.put_signature(self.signature)) + # Now try to put the share hash chain again. This should fail + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "out of order share hash chain", + None, + mw.put_sharehashes, self.share_hash_chain)) + return d + + + def test_signature_after_verification_key(self): + mw = self._make_new_mw("si1", 0) + d = defer.succeed(None) + # Put everything up to and including the verification key. + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + d.addCallback(lambda ignored: + mw.put_sharehashes(self.share_hash_chain)) + d.addCallback(lambda ignored: + mw.put_root_hash(self.root_hash)) + d.addCallback(lambda ignored: + mw.put_signature(self.signature)) + d.addCallback(lambda ignored: + mw.put_verification_key(self.verification_key)) + # Now try to put the signature again. This should fail + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "signature after verification", + None, + mw.put_signature, self.signature)) + return d + + + def test_uncoordinated_write(self): + # Make two mutable writers, both pointing to the same storage + # server, both at the same storage index, and try writing to the + # same share. + mw1 = self._make_new_mw("si1", 0) + mw2 = self._make_new_mw("si1", 0) + d = defer.succeed(None) + def _check_success(results): + result, readvs = results + self.failUnless(result) + + def _check_failure(results): + result, readvs = results + self.failIf(result) + + d.addCallback(lambda ignored: + mw1.put_block(self.block, 0, self.salt)) + d.addCallback(_check_success) + d.addCallback(lambda ignored: + mw2.put_block(self.block, 0, self.salt)) + d.addCallback(_check_failure) + return d + + + def test_invalid_salt_size(self): + # Salts need to be 16 bytes in size. Writes that attempt to + # write more or less than this should be rejected. 
+ mw = self._make_new_mw("si1", 0) + invalid_salt = "a" * 17 # 17 bytes + another_invalid_salt = "b" * 15 # 15 bytes + d = defer.succeed(None) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "salt too big", + None, + mw.put_block, self.block, 0, invalid_salt)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "salt too small", + None, + mw.put_block, self.block, 0, + another_invalid_salt)) + return d + + + def test_write_test_vectors(self): + # If we give the write proxy a bogus test vector at + # any point during the process, it should fail to write. + mw = self._make_new_mw("si1", 0) + mw.set_checkstring("this is a lie") + # The initial write should be expecting to find the improbable + # checkstring above in place; finding nothing, it should fail. + d = defer.succeed(None) + d.addCallback(lambda ignored: + mw.put_block(self.block, 0, self.salt)) + def _check_failure(results): + result, readv = results + self.failIf(result) + d.addCallback(_check_failure) + # Now set the checkstring to the empty string, which + # indicates that no share is there. + d.addCallback(lambda ignored: + mw.set_checkstring("")) + d.addCallback(lambda ignored: + mw.put_block(self.block, 0, self.salt)) + def _check_success(results): + result, readv = results + self.failUnless(result) + d.addCallback(_check_success) + # Now set the checkstring to something wrong + d.addCallback(lambda ignored: + mw.set_checkstring("something wrong")) + # This should fail to do anything + d.addCallback(lambda ignored: + mw.put_block(self.block, 1, self.salt)) + d.addCallback(_check_failure) + # Now set it back to what it should be. + d.addCallback(lambda ignored: + mw.set_checkstring(mw.get_checkstring())) + for i in xrange(1, 6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + d.addCallback(_check_success) + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + d.addCallback(_check_success) + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + d.addCallback(_check_success) + d.addCallback(lambda ignored: + mw.put_sharehashes(self.share_hash_chain)) + d.addCallback(_check_success) + def _keep_old_checkstring(ignored): + self.old_checkstring = mw.get_checkstring() + mw.set_checkstring("foobarbaz") + d.addCallback(_keep_old_checkstring) + d.addCallback(lambda ignored: + mw.put_root_hash(self.root_hash)) + d.addCallback(_check_failure) + d.addCallback(lambda ignored: + self.failUnlessEqual(self.old_checkstring, mw.get_checkstring())) + def _restore_old_checkstring(ignored): + mw.set_checkstring(self.old_checkstring) + d.addCallback(_restore_old_checkstring) + d.addCallback(lambda ignored: + mw.put_root_hash(self.root_hash)) + d.addCallback(_check_success) + # The checkstring should have been set appropriately for us on + # the last write; if we try to change it to something else, + # that change should cause the verification key step to fail. 
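# (Concretely, it is the put_signature call below that is rejected while
# the checkstring is set to "something else"; once the real checkstring is
# restored, the signature and then the verification key both succeed.)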
+ d.addCallback(lambda ignored: + mw.set_checkstring("something else")) + d.addCallback(lambda ignored: + mw.put_signature(self.signature)) + d.addCallback(_check_failure) + d.addCallback(lambda ignored: + mw.set_checkstring(mw.get_checkstring())) + d.addCallback(lambda ignored: + mw.put_signature(self.signature)) + d.addCallback(_check_success) + d.addCallback(lambda ignored: + mw.put_verification_key(self.verification_key)) + d.addCallback(_check_success) + return d + + + def test_offset_only_set_on_success(self): + # The write proxy should be smart enough to detect when a write + # has failed, and to temper its definition of progress based on + # that. + mw = self._make_new_mw("si1", 0) + d = defer.succeed(None) + for i in xrange(1, 6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + def _break_checkstring(ignored): + self._old_checkstring = mw.get_checkstring() + mw.set_checkstring("foobarbaz") + + def _fix_checkstring(ignored): + mw.set_checkstring(self._old_checkstring) + + d.addCallback(_break_checkstring) + + # Setting the encrypted private key shouldn't work now, which is + # to be expected and is tested elsewhere. We also want to make + # sure that we can't add the block hash tree after a failed + # write of this sort. + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "test out-of-order blockhashes", + None, + mw.put_blockhashes, self.block_hash_tree)) + d.addCallback(_fix_checkstring) + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + d.addCallback(_break_checkstring) + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "test out-of-order sharehashes", + None, + mw.put_sharehashes, self.share_hash_chain)) + d.addCallback(_fix_checkstring) + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + d.addCallback(_break_checkstring) + d.addCallback(lambda ignored: + mw.put_sharehashes(self.share_hash_chain)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "out-of-order root hash", + None, + mw.put_root_hash, self.root_hash)) + d.addCallback(_fix_checkstring) + d.addCallback(lambda ignored: + mw.put_sharehashes(self.share_hash_chain)) + d.addCallback(_break_checkstring) + d.addCallback(lambda ignored: + mw.put_root_hash(self.root_hash)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "out-of-order signature", + None, + mw.put_signature, self.signature)) + d.addCallback(_fix_checkstring) + d.addCallback(lambda ignored: + mw.put_root_hash(self.root_hash)) + d.addCallback(_break_checkstring) + d.addCallback(lambda ignored: + mw.put_signature(self.signature)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "out-of-order verification key", + None, + mw.put_verification_key, + self.verification_key)) + d.addCallback(_fix_checkstring) + d.addCallback(lambda ignored: + mw.put_signature(self.signature)) + d.addCallback(_break_checkstring) + d.addCallback(lambda ignored: + mw.put_verification_key(self.verification_key)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "out-of-order finish", + None, + mw.finish_publishing)) + return d + + + def serialize_blockhashes(self, blockhashes): + return "".join(blockhashes) + + + def serialize_sharehashes(self, sharehashes): + ret = "".join([struct.pack(">H32s", i, sharehashes[i]) + for i in sorted(sharehashes.keys())]) + return ret + + + def 
test_write(self): + # This translates to a file with 6 6-byte segments, and with 2-byte + # blocks. + mw = self._make_new_mw("si1", 0) + mw2 = self._make_new_mw("si1", 1) + # Test writing some blocks. + read = self.ss.remote_slot_readv + expected_sharedata_offset = struct.calcsize(MDMFHEADER) + written_block_size = 2 + len(self.salt) + written_block = self.block + self.salt + def _check_block_write(i, share): + self.failUnlessEqual(read("si1", [share], [(expected_sharedata_offset + (i * written_block_size), written_block_size)]), + {share: [written_block]}) + d = defer.succeed(None) + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + d.addCallback(lambda ignored, i=i: + _check_block_write(i, 0)) + # Now try the same thing, but with share 1 instead of share 0. + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mw2.put_block(self.block, i, self.salt)) + d.addCallback(lambda ignored, i=i: + _check_block_write(i, 1)) + + # Next, we make a fake encrypted private key, and put it onto the + # storage server. + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + expected_private_key_offset = expected_sharedata_offset + \ + len(written_block) * 6 + self.failUnlessEqual(len(self.encprivkey), 7) + d.addCallback(lambda ignored: + self.failUnlessEqual(read("si1", [0], [(expected_private_key_offset, 7)]), + {0: [self.encprivkey]})) + + # Next, we put a fake block hash tree. + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + expected_block_hash_offset = expected_private_key_offset + len(self.encprivkey) + self.failUnlessEqual(len(self.block_hash_tree_s), 32 * 6) + d.addCallback(lambda ignored: + self.failUnlessEqual(read("si1", [0], [(expected_block_hash_offset, 32 * 6)]), + {0: [self.block_hash_tree_s]})) + + # Next, put a fake share hash chain + d.addCallback(lambda ignored: + mw.put_sharehashes(self.share_hash_chain)) + expected_share_hash_offset = expected_block_hash_offset + len(self.block_hash_tree_s) + d.addCallback(lambda ignored: + self.failUnlessEqual(read("si1", [0],[(expected_share_hash_offset, (32 + 2) * 6)]), + {0: [self.share_hash_chain_s]})) + + # Next, we put what is supposed to be the root hash of + # our share hash tree but isn't + d.addCallback(lambda ignored: + mw.put_root_hash(self.root_hash)) + # The root hash gets inserted at byte 9 (its position is in the header, + # and is fixed). + def _check(ignored): + self.failUnlessEqual(read("si1", [0], [(9, 32)]), + {0: [self.root_hash]}) + d.addCallback(_check) + + # Next, we put a signature of the header block. + d.addCallback(lambda ignored: + mw.put_signature(self.signature)) + expected_signature_offset = expected_share_hash_offset + len(self.share_hash_chain_s) + self.failUnlessEqual(len(self.signature), 9) + d.addCallback(lambda ignored: + self.failUnlessEqual(read("si1", [0], [(expected_signature_offset, 9)]), + {0: [self.signature]})) + + # Next, we put the verification key + d.addCallback(lambda ignored: + mw.put_verification_key(self.verification_key)) + expected_verification_key_offset = expected_signature_offset + len(self.signature) + self.failUnlessEqual(len(self.verification_key), 6) + d.addCallback(lambda ignored: + self.failUnlessEqual(read("si1", [0], [(expected_verification_key_offset, 6)]), + {0: [self.verification_key]})) + + def _check_signable(ignored): + # Make sure that the signable is what we think it should be. 
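# Layout sanity check (a sketch; it assumes MDMFOFFSETS packs six unsigned
# 8-byte offsets, which is what the byte positions tested in _check_offsets
# below imply): the signable prefix ">BQ32sBBQQ" is
# 1 + 8 + 32 + 1 + 1 + 8 + 8 = 59 bytes, the offset table adds 6 * 8 = 48
# bytes, so share data starts at byte 107 -- the same 107 computed for
# expected_sharedata_offset above and fetched up front by MDMFSlotReadProxy.
import struct
assert struct.calcsize(">BQ32sBBQQ") == 59
assert struct.calcsize(">QQQQQQ") == 48
assert struct.calcsize(">BQ32sBBQQ") + struct.calcsize(">QQQQQQ") == 107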
+ signable = mw.get_signable() + verno, seq, roothash, k, n, segsize, datalen = \ + struct.unpack(">BQ32sBBQQ", + signable) + self.failUnlessEqual(verno, 1) + self.failUnlessEqual(seq, 0) + self.failUnlessEqual(roothash, self.root_hash) + self.failUnlessEqual(k, 3) + self.failUnlessEqual(n, 10) + self.failUnlessEqual(segsize, 6) + self.failUnlessEqual(datalen, 36) + d.addCallback(_check_signable) + # Next, we cause the offset table to be published. + d.addCallback(lambda ignored: + mw.finish_publishing()) + expected_eof_offset = expected_verification_key_offset + len(self.verification_key) + + def _check_offsets(ignored): + # Check the version number to make sure that it is correct. + expected_version_number = struct.pack(">B", 1) + self.failUnlessEqual(read("si1", [0], [(0, 1)]), + {0: [expected_version_number]}) + # Check the sequence number to make sure that it is correct + expected_sequence_number = struct.pack(">Q", 0) + self.failUnlessEqual(read("si1", [0], [(1, 8)]), + {0: [expected_sequence_number]}) + # Check that the encoding parameters (k, N, segement size, data + # length) are what they should be. These are 3, 10, 6, 36 + expected_k = struct.pack(">B", 3) + self.failUnlessEqual(read("si1", [0], [(41, 1)]), + {0: [expected_k]}) + expected_n = struct.pack(">B", 10) + self.failUnlessEqual(read("si1", [0], [(42, 1)]), + {0: [expected_n]}) + expected_segment_size = struct.pack(">Q", 6) + self.failUnlessEqual(read("si1", [0], [(43, 8)]), + {0: [expected_segment_size]}) + expected_data_length = struct.pack(">Q", 36) + self.failUnlessEqual(read("si1", [0], [(51, 8)]), + {0: [expected_data_length]}) + expected_offset = struct.pack(">Q", expected_private_key_offset) + self.failUnlessEqual(read("si1", [0], [(59, 8)]), + {0: [expected_offset]}) + expected_offset = struct.pack(">Q", expected_block_hash_offset) + self.failUnlessEqual(read("si1", [0], [(67, 8)]), + {0: [expected_offset]}) + expected_offset = struct.pack(">Q", expected_share_hash_offset) + self.failUnlessEqual(read("si1", [0], [(75, 8)]), + {0: [expected_offset]}) + expected_offset = struct.pack(">Q", expected_signature_offset) + self.failUnlessEqual(read("si1", [0], [(83, 8)]), + {0: [expected_offset]}) + expected_offset = struct.pack(">Q", expected_verification_key_offset) + self.failUnlessEqual(read("si1", [0], [(91, 8)]), + {0: [expected_offset]}) + expected_offset = struct.pack(">Q", expected_eof_offset) + self.failUnlessEqual(read("si1", [0], [(99, 8)]), + {0: [expected_offset]}) + d.addCallback(_check_offsets) + return d + + def _make_new_mw(self, si, share, datalength=36): + # This is a file of size 36 bytes. Since it has a segment + # size of 6, we know that it has 6 byte segments, which will + # be split into blocks of 2 bytes because our FEC k + # parameter is 3. + mw = MDMFSlotWriteProxy(share, self.rref, si, self.secrets, 0, 3, 10, + 6, datalength) + return mw + + + def test_write_rejected_with_too_many_blocks(self): + mw = self._make_new_mw("si0", 0) + + # Try writing too many blocks. We should not be able to write + # more than 6 + # blocks into each share. + d = defer.succeed(None) + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "too many blocks", + None, + mw.put_block, self.block, 7, self.salt)) + return d + + + def test_write_rejected_with_invalid_salt(self): + # Try writing an invalid salt. Salts are 16 bytes -- any more or + # less should cause an error. 
+ mw = self._make_new_mw("si1", 0) + bad_salt = "a" * 17 # 17 bytes + d = defer.succeed(None) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "test_invalid_salt", + None, mw.put_block, self.block, 7, bad_salt)) + return d + + + def test_write_rejected_with_invalid_root_hash(self): + # Try writing an invalid root hash. This should be SHA256d, and + # 32 bytes long as a result. + mw = self._make_new_mw("si2", 0) + # 17 bytes != 32 bytes + invalid_root_hash = "a" * 17 + d = defer.succeed(None) + # Before this test can work, we need to put some blocks + salts, + # a block hash tree, and a share hash tree. Otherwise, we'll see + # failures that match what we are looking for, but are caused by + # the constraints imposed on operation ordering. + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + d.addCallback(lambda ignored: + mw.put_sharehashes(self.share_hash_chain)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "invalid root hash", + None, mw.put_root_hash, invalid_root_hash)) + return d + + + def test_write_rejected_with_invalid_blocksize(self): + # The blocksize implied by the writer that we get from + # _make_new_mw is 2bytes -- any more or any less than this + # should be cause for failure, unless it is the tail segment, in + # which case it may not be failure. + invalid_block = "a" + mw = self._make_new_mw("si3", 0, 33) # implies a tail segment with + # one byte blocks + # 1 bytes != 2 bytes + d = defer.succeed(None) + d.addCallback(lambda ignored, invalid_block=invalid_block: + self.shouldFail(LayoutInvalid, "test blocksize too small", + None, mw.put_block, invalid_block, 0, + self.salt)) + invalid_block = invalid_block * 3 + # 3 bytes != 2 bytes + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "test blocksize too large", + None, + mw.put_block, invalid_block, 0, self.salt)) + for i in xrange(5): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + # Try to put an invalid tail segment + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "test invalid tail segment", + None, + mw.put_block, self.block, 5, self.salt)) + valid_block = "a" + d.addCallback(lambda ignored: + mw.put_block(valid_block, 5, self.salt)) + return d + + + def test_write_enforces_order_constraints(self): + # We require that the MDMFSlotWriteProxy be interacted with in a + # specific way. + # That way is: + # 0: __init__ + # 1: write blocks and salts + # 2: Write the encrypted private key + # 3: Write the block hashes + # 4: Write the share hashes + # 5: Write the root hash and salt hash + # 6: Write the signature and verification key + # 7: Write the file. + # + # Some of these can be performed out-of-order, and some can't. 
+ # The dependencies that I want to test here are: + # - Private key before block hashes + # - share hashes and block hashes before root hash + # - root hash before signature + # - signature before verification key + mw0 = self._make_new_mw("si0", 0) + # Write some shares + d = defer.succeed(None) + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mw0.put_block(self.block, i, self.salt)) + # Try to write the block hashes before writing the encrypted + # private key + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "block hashes before key", + None, mw0.put_blockhashes, + self.block_hash_tree)) + + # Write the private key. + d.addCallback(lambda ignored: + mw0.put_encprivkey(self.encprivkey)) + + + # Try to write the share hash chain without writing the block + # hash tree + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "share hash chain before " + "salt hash tree", + None, + mw0.put_sharehashes, self.share_hash_chain)) + + # Try to write the root hash and without writing either the + # block hashes or the or the share hashes + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "root hash before share hashes", + None, + mw0.put_root_hash, self.root_hash)) + + # Now write the block hashes and try again + d.addCallback(lambda ignored: + mw0.put_blockhashes(self.block_hash_tree)) + + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "root hash before share hashes", + None, mw0.put_root_hash, self.root_hash)) + + # We haven't yet put the root hash on the share, so we shouldn't + # be able to sign it. + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "signature before root hash", + None, mw0.put_signature, self.signature)) + + d.addCallback(lambda ignored: + self.failUnlessRaises(LayoutInvalid, mw0.get_signable)) + + # ..and, since that fails, we also shouldn't be able to put the + # verification key. + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "key before signature", + None, mw0.put_verification_key, + self.verification_key)) + + # Now write the share hashes. + d.addCallback(lambda ignored: + mw0.put_sharehashes(self.share_hash_chain)) + # We should be able to write the root hash now too + d.addCallback(lambda ignored: + mw0.put_root_hash(self.root_hash)) + + # We should still be unable to put the verification key + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "key before signature", + None, mw0.put_verification_key, + self.verification_key)) + + d.addCallback(lambda ignored: + mw0.put_signature(self.signature)) + + # We shouldn't be able to write the offsets to the remote server + # until the offset table is finished; IOW, until we have written + # the verification key. + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "offsets before verification key", + None, + mw0.finish_publishing)) + + d.addCallback(lambda ignored: + mw0.put_verification_key(self.verification_key)) + return d + + + def test_end_to_end(self): + mw = self._make_new_mw("si1", 0) + # Write a share using the mutable writer, and make sure that the + # reader knows how to read everything back to us. 
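# The writes below exercise the full order that MDMFSlotWriteProxy
# enforces: blocks and salts, encrypted private key, block hash tree,
# share hash chain, root hash, signature, verification key, and finally
# finish_publishing for the offset table -- the same ordering that
# test_write_enforces_order_constraints checks piece by piece above.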
+ d = defer.succeed(None) + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mw.put_block(self.block, i, self.salt)) + d.addCallback(lambda ignored: + mw.put_encprivkey(self.encprivkey)) + d.addCallback(lambda ignored: + mw.put_blockhashes(self.block_hash_tree)) + d.addCallback(lambda ignored: + mw.put_sharehashes(self.share_hash_chain)) + d.addCallback(lambda ignored: + mw.put_root_hash(self.root_hash)) + d.addCallback(lambda ignored: + mw.put_signature(self.signature)) + d.addCallback(lambda ignored: + mw.put_verification_key(self.verification_key)) + d.addCallback(lambda ignored: + mw.finish_publishing()) + + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + def _check_block_and_salt((block, salt)): + self.failUnlessEqual(block, self.block) + self.failUnlessEqual(salt, self.salt) + + for i in xrange(6): + d.addCallback(lambda ignored, i=i: + mr.get_block_and_salt(i)) + d.addCallback(_check_block_and_salt) + + d.addCallback(lambda ignored: + mr.get_encprivkey()) + d.addCallback(lambda encprivkey: + self.failUnlessEqual(self.encprivkey, encprivkey)) + + d.addCallback(lambda ignored: + mr.get_blockhashes()) + d.addCallback(lambda blockhashes: + self.failUnlessEqual(self.block_hash_tree, blockhashes)) + + d.addCallback(lambda ignored: + mr.get_sharehashes()) + d.addCallback(lambda sharehashes: + self.failUnlessEqual(self.share_hash_chain, sharehashes)) + + d.addCallback(lambda ignored: + mr.get_signature()) + d.addCallback(lambda signature: + self.failUnlessEqual(signature, self.signature)) + + d.addCallback(lambda ignored: + mr.get_verification_key()) + d.addCallback(lambda verification_key: + self.failUnlessEqual(verification_key, self.verification_key)) + + d.addCallback(lambda ignored: + mr.get_seqnum()) + d.addCallback(lambda seqnum: + self.failUnlessEqual(seqnum, 0)) + + d.addCallback(lambda ignored: + mr.get_root_hash()) + d.addCallback(lambda root_hash: + self.failUnlessEqual(self.root_hash, root_hash)) + + d.addCallback(lambda ignored: + mr.get_encoding_parameters()) + def _check_encoding_parameters((k, n, segsize, datalen)): + self.failUnlessEqual(k, 3) + self.failUnlessEqual(n, 10) + self.failUnlessEqual(segsize, 6) + self.failUnlessEqual(datalen, 36) + d.addCallback(_check_encoding_parameters) + + d.addCallback(lambda ignored: + mr.get_checkstring()) + d.addCallback(lambda checkstring: + self.failUnlessEqual(checkstring, mw.get_checkstring())) + return d + + + def test_is_sdmf(self): + # The MDMFSlotReadProxy should also know how to read SDMF files, + # since it will encounter them on the grid. Callers use the + # is_sdmf method to test this. + self.write_sdmf_share_to_server("si1") + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + d = mr.is_sdmf() + d.addCallback(lambda issdmf: + self.failUnless(issdmf)) + return d + + + def test_reads_sdmf(self): + # The slot read proxy should, naturally, know how to tell us + # about data in the SDMF format + self.write_sdmf_share_to_server("si1") + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + d = defer.succeed(None) + d.addCallback(lambda ignored: + mr.is_sdmf()) + d.addCallback(lambda issdmf: + self.failUnless(issdmf)) + + # What do we need to read? + # - The sharedata + # - The salt + d.addCallback(lambda ignored: + mr.get_block_and_salt(0)) + def _check_block_and_salt(results): + block, salt = results + # Our original file is 36 bytes long. Then each share is 12 + # bytes in size. The share is composed entirely of the + # letter a. self.block contains 2 as, so 6 * self.block is + # what we are looking for. 
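+        # (Editorial aside, not part of the patch: the arithmetic behind the
+        #  assertion that follows. With k = 3 required shares and a 36-byte
+        #  test file, each SDMF share carries 36 / 3 = 12 bytes of share
+        #  data; since self.block is the 2-byte string "aa", the single SDMF
+        #  segment should decode to self.block * 6, i.e. twelve "a"
+        #  characters, returned together with the one salt.)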
+ self.failUnlessEqual(block, self.block * 6) + self.failUnlessEqual(salt, self.salt) + d.addCallback(_check_block_and_salt) + + # - The blockhashes + d.addCallback(lambda ignored: + mr.get_blockhashes()) + d.addCallback(lambda blockhashes: + self.failUnlessEqual(self.block_hash_tree, + blockhashes, + blockhashes)) + # - The sharehashes + d.addCallback(lambda ignored: + mr.get_sharehashes()) + d.addCallback(lambda sharehashes: + self.failUnlessEqual(self.share_hash_chain, + sharehashes)) + # - The keys + d.addCallback(lambda ignored: + mr.get_encprivkey()) + d.addCallback(lambda encprivkey: + self.failUnlessEqual(encprivkey, self.encprivkey, encprivkey)) + d.addCallback(lambda ignored: + mr.get_verification_key()) + d.addCallback(lambda verification_key: + self.failUnlessEqual(verification_key, + self.verification_key, + verification_key)) + # - The signature + d.addCallback(lambda ignored: + mr.get_signature()) + d.addCallback(lambda signature: + self.failUnlessEqual(signature, self.signature, signature)) + + # - The sequence number + d.addCallback(lambda ignored: + mr.get_seqnum()) + d.addCallback(lambda seqnum: + self.failUnlessEqual(seqnum, 0, seqnum)) + + # - The root hash + d.addCallback(lambda ignored: + mr.get_root_hash()) + d.addCallback(lambda root_hash: + self.failUnlessEqual(root_hash, self.root_hash, root_hash)) + return d + + + def test_only_reads_one_segment_sdmf(self): + # SDMF shares have only one segment, so it doesn't make sense to + # read more segments than that. The reader should know this and + # complain if we try to do that. + self.write_sdmf_share_to_server("si1") + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + d = defer.succeed(None) + d.addCallback(lambda ignored: + mr.is_sdmf()) + d.addCallback(lambda issdmf: + self.failUnless(issdmf)) + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "test bad segment", + None, + mr.get_block_and_salt, 1)) + return d + + + def test_read_with_prefetched_mdmf_data(self): + # The MDMFSlotReadProxy will prefill certain fields if you pass + # it data that you have already fetched. This is useful for + # cases like the Servermap, which prefetches ~2kb of data while + # finding out which shares are on the remote peer so that it + # doesn't waste round trips. + mdmf_data = self.build_test_mdmf_share() + self.write_test_share_to_server("si1") + def _make_mr(ignored, length): + mr = MDMFSlotReadProxy(self.rref, "si1", 0, mdmf_data[:length]) + return mr + + d = defer.succeed(None) + # This should be enough to fill in both the encoding parameters + # and the table of offsets, which will complete the version + # information tuple. + d.addCallback(_make_mr, 107) + d.addCallback(lambda mr: + mr.get_verinfo()) + def _check_verinfo(verinfo): + self.failUnless(verinfo) + self.failUnlessEqual(len(verinfo), 9) + (seqnum, + root_hash, + salt_hash, + segsize, + datalen, + k, + n, + prefix, + offsets) = verinfo + self.failUnlessEqual(seqnum, 0) + self.failUnlessEqual(root_hash, self.root_hash) + self.failUnlessEqual(segsize, 6) + self.failUnlessEqual(datalen, 36) + self.failUnlessEqual(k, 3) + self.failUnlessEqual(n, 10) + expected_prefix = struct.pack(MDMFSIGNABLEHEADER, + 1, + seqnum, + root_hash, + k, + n, + segsize, + datalen) + self.failUnlessEqual(expected_prefix, prefix) + self.failUnlessEqual(self.rref.read_count, 0) + d.addCallback(_check_verinfo) + # This is not enough data to read a block and a share, so the + # wrapper should attempt to read this from the remote server. 
+ d.addCallback(_make_mr, 107) + d.addCallback(lambda mr: + mr.get_block_and_salt(0)) + def _check_block_and_salt((block, salt)): + self.failUnlessEqual(block, self.block) + self.failUnlessEqual(salt, self.salt) + self.failUnlessEqual(self.rref.read_count, 1) + # This should be enough data to read one block. + d.addCallback(_make_mr, 249) + d.addCallback(lambda mr: + mr.get_block_and_salt(0)) + d.addCallback(_check_block_and_salt) + return d + + + def test_read_with_prefetched_sdmf_data(self): + sdmf_data = self.build_test_sdmf_share() + self.write_sdmf_share_to_server("si1") + def _make_mr(ignored, length): + mr = MDMFSlotReadProxy(self.rref, "si1", 0, sdmf_data[:length]) + return mr + + d = defer.succeed(None) + # This should be enough to get us the encoding parameters, + # offset table, and everything else we need to build a verinfo + # string. + d.addCallback(_make_mr, 107) + d.addCallback(lambda mr: + mr.get_verinfo()) + def _check_verinfo(verinfo): + self.failUnless(verinfo) + self.failUnlessEqual(len(verinfo), 9) + (seqnum, + root_hash, + salt, + segsize, + datalen, + k, + n, + prefix, + offsets) = verinfo + self.failUnlessEqual(seqnum, 0) + self.failUnlessEqual(root_hash, self.root_hash) + self.failUnlessEqual(salt, self.salt) + self.failUnlessEqual(segsize, 36) + self.failUnlessEqual(datalen, 36) + self.failUnlessEqual(k, 3) + self.failUnlessEqual(n, 10) + expected_prefix = struct.pack(SIGNED_PREFIX, + 0, + seqnum, + root_hash, + salt, + k, + n, + segsize, + datalen) + self.failUnlessEqual(expected_prefix, prefix) + self.failUnlessEqual(self.rref.read_count, 0) + d.addCallback(_check_verinfo) + # This shouldn't be enough to read any share data. + d.addCallback(_make_mr, 107) + d.addCallback(lambda mr: + mr.get_block_and_salt(0)) + def _check_block_and_salt((block, salt)): + self.failUnlessEqual(block, self.block * 6) + self.failUnlessEqual(salt, self.salt) + # TODO: Fix the read routine so that it reads only the data + # that it has cached if it can't read all of it. + self.failUnlessEqual(self.rref.read_count, 2) + + # This should be enough to read share data. + d.addCallback(_make_mr, self.offsets['share_data']) + d.addCallback(lambda mr: + mr.get_block_and_salt(0)) + d.addCallback(_check_block_and_salt) + return d + + + def test_read_with_empty_mdmf_file(self): + # Some tests upload a file with no contents to test things + # unrelated to the actual handling of the content of the file. + # The reader should behave intelligently in these cases. + self.write_test_share_to_server("si1", empty=True) + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + # We should be able to get the encoding parameters, and they + # should be correct. 
+ d = defer.succeed(None) + d.addCallback(lambda ignored: + mr.get_encoding_parameters()) + def _check_encoding_parameters(params): + self.failUnlessEqual(len(params), 4) + k, n, segsize, datalen = params + self.failUnlessEqual(k, 3) + self.failUnlessEqual(n, 10) + self.failUnlessEqual(segsize, 0) + self.failUnlessEqual(datalen, 0) + d.addCallback(_check_encoding_parameters) + + # We should not be able to fetch a block, since there are no + # blocks to fetch + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "get block on empty file", + None, + mr.get_block_and_salt, 0)) + return d + + + def test_read_with_empty_sdmf_file(self): + self.write_sdmf_share_to_server("si1", empty=True) + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + # We should be able to get the encoding parameters, and they + # should be correct + d = defer.succeed(None) + d.addCallback(lambda ignored: + mr.get_encoding_parameters()) + def _check_encoding_parameters(params): + self.failUnlessEqual(len(params), 4) + k, n, segsize, datalen = params + self.failUnlessEqual(k, 3) + self.failUnlessEqual(n, 10) + self.failUnlessEqual(segsize, 0) + self.failUnlessEqual(datalen, 0) + d.addCallback(_check_encoding_parameters) + + # It does not make sense to get a block in this format, so we + # should not be able to. + d.addCallback(lambda ignored: + self.shouldFail(LayoutInvalid, "get block on an empty file", + None, + mr.get_block_and_salt, 0)) + return d + + + def test_verinfo_with_sdmf_file(self): + self.write_sdmf_share_to_server("si1") + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + # We should be able to get the version information. + d = defer.succeed(None) + d.addCallback(lambda ignored: + mr.get_verinfo()) + def _check_verinfo(verinfo): + self.failUnless(verinfo) + self.failUnlessEqual(len(verinfo), 9) + (seqnum, + root_hash, + salt, + segsize, + datalen, + k, + n, + prefix, + offsets) = verinfo + self.failUnlessEqual(seqnum, 0) + self.failUnlessEqual(root_hash, self.root_hash) + self.failUnlessEqual(salt, self.salt) + self.failUnlessEqual(segsize, 36) + self.failUnlessEqual(datalen, 36) + self.failUnlessEqual(k, 3) + self.failUnlessEqual(n, 10) + expected_prefix = struct.pack(">BQ32s16s BBQQ", + 0, + seqnum, + root_hash, + salt, + k, + n, + segsize, + datalen) + self.failUnlessEqual(prefix, expected_prefix) + self.failUnlessEqual(offsets, self.offsets) + d.addCallback(_check_verinfo) + return d + + + def test_verinfo_with_mdmf_file(self): + self.write_test_share_to_server("si1") + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + d = defer.succeed(None) + d.addCallback(lambda ignored: + mr.get_verinfo()) + def _check_verinfo(verinfo): + self.failUnless(verinfo) + self.failUnlessEqual(len(verinfo), 9) + (seqnum, + root_hash, + IV, + segsize, + datalen, + k, + n, + prefix, + offsets) = verinfo + self.failUnlessEqual(seqnum, 0) + self.failUnlessEqual(root_hash, self.root_hash) + self.failIf(IV) + self.failUnlessEqual(segsize, 6) + self.failUnlessEqual(datalen, 36) + self.failUnlessEqual(k, 3) + self.failUnlessEqual(n, 10) + expected_prefix = struct.pack(">BQ32s BBQQ", + 1, + seqnum, + root_hash, + k, + n, + segsize, + datalen) + self.failUnlessEqual(prefix, expected_prefix) + self.failUnlessEqual(offsets, self.offsets) + d.addCallback(_check_verinfo) + return d + + + def test_reader_queue(self): + self.write_test_share_to_server('si1') + mr = MDMFSlotReadProxy(self.rref, "si1", 0) + d1 = mr.get_block_and_salt(0, queue=True) + d2 = mr.get_blockhashes(queue=True) + d3 = mr.get_sharehashes(queue=True) + d4 = 
mr.get_signature(queue=True) + d5 = mr.get_verification_key(queue=True) + dl = defer.DeferredList([d1, d2, d3, d4, d5]) + mr.flush() + def _print(results): + self.failUnlessEqual(len(results), 5) + # We have one read for version information and offsets, and + # one for everything else. + self.failUnlessEqual(self.rref.read_count, 2) + block, salt = results[0][1] # results[0] is a boolean that says + # whether or not the operation + # worked. + self.failUnlessEqual(self.block, block) + self.failUnlessEqual(self.salt, salt) + + blockhashes = results[1][1] + self.failUnlessEqual(self.block_hash_tree, blockhashes) + + sharehashes = results[2][1] + self.failUnlessEqual(self.share_hash_chain, sharehashes) + + signature = results[3][1] + self.failUnlessEqual(self.signature, signature) + + verification_key = results[4][1] + self.failUnlessEqual(self.verification_key, verification_key) + dl.addCallback(_print) + return dl + + + def test_sdmf_writer(self): + # Go through the motions of writing an SDMF share to the storage + # server. Then read the storage server to see that the share got + # written in the way that we think it should have. + + # We do this first so that the necessary instance variables get + # set the way we want them for the tests below. + data = self.build_test_sdmf_share() + sdmfr = SDMFSlotWriteProxy(0, + self.rref, + "si1", + self.secrets, + 0, 3, 10, 36, 36) + # Put the block and salt. + sdmfr.put_block(self.blockdata, 0, self.salt) + + # Put the encprivkey + sdmfr.put_encprivkey(self.encprivkey) + + # Put the block and share hash chains + sdmfr.put_blockhashes(self.block_hash_tree) + sdmfr.put_sharehashes(self.share_hash_chain) + sdmfr.put_root_hash(self.root_hash) + + # Put the signature + sdmfr.put_signature(self.signature) + + # Put the verification key + sdmfr.put_verification_key(self.verification_key) + + # Now check to make sure that nothing has been written yet. + self.failUnlessEqual(self.rref.write_count, 0) + + # Now finish publishing + d = sdmfr.finish_publishing() + def _then(ignored): + self.failUnlessEqual(self.rref.write_count, 1) + read = self.ss.remote_slot_readv + self.failUnlessEqual(read("si1", [0], [(0, len(data))]), + {0: [data]}) + d.addCallback(_then) + return d + + + def test_sdmf_writer_preexisting_share(self): + data = self.build_test_sdmf_share() + self.write_sdmf_share_to_server("si1") + + # Now there is a share on the storage server. To successfully + # write, we need to set the checkstring correctly. When we + # don't, no write should occur. 
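+        # (Editorial aside, not part of the patch: what the rest of this test
+        #  exercises. SDMFSlotWriteProxy buffers every put_* call locally and
+        #  only talks to the server in finish_publishing(), which issues a
+        #  single test-and-set write. Because the proxy constructed below
+        #  knows nothing about the share already on the server, its first
+        #  finish_publishing() is rejected -- results[0] is False -- and the
+        #  accompanying read vectors hand back the server's current
+        #  checkstring, which the test then feeds to set_checkstring() so
+        #  that the second attempt succeeds.)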
+ sdmfw = SDMFSlotWriteProxy(0, + self.rref, + "si1", + self.secrets, + 1, 3, 10, 36, 36) + sdmfw.put_block(self.blockdata, 0, self.salt) + + # Put the encprivkey + sdmfw.put_encprivkey(self.encprivkey) + + # Put the block and share hash chains + sdmfw.put_blockhashes(self.block_hash_tree) + sdmfw.put_sharehashes(self.share_hash_chain) + + # Put the root hash + sdmfw.put_root_hash(self.root_hash) + + # Put the signature + sdmfw.put_signature(self.signature) + + # Put the verification key + sdmfw.put_verification_key(self.verification_key) + + # We shouldn't have a checkstring yet + self.failUnlessEqual(sdmfw.get_checkstring(), "") + + d = sdmfw.finish_publishing() + def _then(results): + self.failIf(results[0]) + # this is the correct checkstring + self._expected_checkstring = results[1][0][0] + return self._expected_checkstring + + d.addCallback(_then) + d.addCallback(sdmfw.set_checkstring) + d.addCallback(lambda ignored: + sdmfw.get_checkstring()) + d.addCallback(lambda checkstring: + self.failUnlessEqual(checkstring, self._expected_checkstring)) + d.addCallback(lambda ignored: + sdmfw.finish_publishing()) + def _then_again(results): + self.failUnless(results[0]) + read = self.ss.remote_slot_readv + self.failUnlessEqual(read("si1", [0], [(1, 8)]), + {0: [struct.pack(">Q", 1)]}) + self.failUnlessEqual(read("si1", [0], [(9, len(data) - 9)]), + {0: [data[9:]]}) + d.addCallback(_then_again) + return d + + class Stats(unittest.TestCase): def setUp(self): } [mutable/publish.py: cleanup + simplification Kevan Carstensen **20100702225554 Ignore-this: 36a58424ceceffb1ddc55cc5934399e2 ] { hunk ./src/allmydata/mutable/publish.py 19 UncoordinatedWriteError, NotEnoughServersError from allmydata.mutable.servermap import ServerMap from allmydata.mutable.layout import pack_prefix, pack_share, unpack_header, pack_checkstring, \ - unpack_checkstring, SIGNED_PREFIX, MDMFSlotWriteProxy + unpack_checkstring, SIGNED_PREFIX, MDMFSlotWriteProxy, \ + SDMFSlotWriteProxy KiB = 1024 DEFAULT_MAX_SEGMENT_SIZE = 128 * KiB hunk ./src/allmydata/mutable/publish.py 24 +PUSHING_BLOCKS_STATE = 0 +PUSHING_EVERYTHING_ELSE_STATE = 1 +DONE_STATE = 2 class PublishStatus: implements(IPublishStatus) hunk ./src/allmydata/mutable/publish.py 229 self.bad_share_checkstrings = {} + # This is set at the last step of the publishing process. + self.versioninfo = "" + # we use the servermap to populate the initial goal: this way we will # try to update each existing share in place. for (peerid, shnum) in self._servermap.servermap: hunk ./src/allmydata/mutable/publish.py 245 self.bad_share_checkstrings[key] = old_checkstring self.connections[peerid] = self._servermap.connections[peerid] - # Now, the process dovetails -- if this is an SDMF file, we need - # to write an SDMF file. Otherwise, we need to write an MDMF - # file. - if self._version == MDMF_VERSION: - return self._publish_mdmf() - else: - return self._publish_sdmf() - #return self.done_deferred - - def _publish_mdmf(self): - # Next, we find homes for all of the shares that we don't have - # homes for yet. # TODO: Make this part do peer selection. self.update_goal() self.writers = {} hunk ./src/allmydata/mutable/publish.py 248 - # For each (peerid, shnum) in self.goal, we make an - # MDMFSlotWriteProxy for that peer. We'll use this to write + if self._version == MDMF_VERSION: + writer_class = MDMFSlotWriteProxy + else: + writer_class = SDMFSlotWriteProxy + + # For each (peerid, shnum) in self.goal, we make a + # write proxy for that peer. 
We'll use this to write # shares to the peer. for key in self.goal: peerid, shnum = key hunk ./src/allmydata/mutable/publish.py 263 cancel_secret = self._node.get_cancel_secret(peerid) secrets = (write_enabler, renew_secret, cancel_secret) - self.writers[shnum] = MDMFSlotWriteProxy(shnum, - self.connections[peerid], - self._storage_index, - secrets, - self._new_seqnum, - self.required_shares, - self.total_shares, - self.segment_size, - len(self.newdata)) + self.writers[shnum] = writer_class(shnum, + self.connections[peerid], + self._storage_index, + secrets, + self._new_seqnum, + self.required_shares, + self.total_shares, + self.segment_size, + len(self.newdata)) + self.writers[shnum].peerid = peerid if (peerid, shnum) in self._servermap.servermap: old_versionid, old_timestamp = self._servermap.servermap[key] (old_seqnum, old_root_hash, old_salt, old_segsize, hunk ./src/allmydata/mutable/publish.py 278 old_datalength, old_k, old_N, old_prefix, old_offsets_tuple) = old_versionid - self.writers[shnum].set_checkstring(old_seqnum, old_root_hash) + self.writers[shnum].set_checkstring(old_seqnum, + old_root_hash, + old_salt) + elif (peerid, shnum) in self.bad_share_checkstrings: + old_checkstring = self.bad_share_checkstrings[(peerid, shnum)] + self.writers[shnum].set_checkstring(old_checkstring) + + # Our remote shares will not have a complete checkstring until + # after we are done writing share data and have started to write + # blocks. In the meantime, we need to know what to look for when + # writing, so that we can detect UncoordinatedWriteErrors. + self._checkstring = self.writers.values()[0].get_checkstring() # Now, we start pushing shares. self._status.timings["setup"] = time.time() - self._started hunk ./src/allmydata/mutable/publish.py 293 - def _start_pushing(res): - self._started_pushing = time.time() - return res - # First, we encrypt, encode, and publish the shares that we need # to encrypt, encode, and publish. 
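(Editorial note, not part of the darcs patch: a self-contained sketch of the writer-selection logic the hunks above introduce. SDMF_VERSION/MDMF_VERSION and the two proxy-class names come from the patch itself; the stub classes and the make_writers/secrets_for helpers are illustrative only, so the shared constructor signature and the per-(peerid, shnum) loop can be read in isolation.)

    SDMF_VERSION = 0
    MDMF_VERSION = 1

    class SDMFSlotWriteProxy(object):
        """Stub standing in for the real SDMF write proxy in layout.py."""
        def __init__(self, shnum, rref, storage_index, secrets,
                     seqnum, required_shares, total_shares,
                     segment_size, data_length):
            self.shnum = shnum

    class MDMFSlotWriteProxy(SDMFSlotWriteProxy):
        """Stub standing in for the real MDMF write proxy in layout.py."""

    def make_writers(version, goal, connections, storage_index, secrets_for,
                     seqnum, k, n, segment_size, data_length):
        # Pick one proxy class for the whole upload, based on the file
        # version, then build one proxy per (peerid, shnum) we intend to
        # write, remembering which peer each proxy talks to.
        if version == MDMF_VERSION:
            writer_class = MDMFSlotWriteProxy
        else:
            writer_class = SDMFSlotWriteProxy
        writers = {}
        for (peerid, shnum) in goal:
            writers[shnum] = writer_class(shnum, connections[peerid],
                                          storage_index, secrets_for(peerid),
                                          seqnum, k, n,
                                          segment_size, data_length)
            writers[shnum].peerid = peerid
        return writers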
hunk ./src/allmydata/mutable/publish.py 306 d = defer.succeed(None) self.log("Starting push") - for i in xrange(self.num_segments - 1): - d.addCallback(lambda ignored, i=i: - self.push_segment(i)) - d.addCallback(self._turn_barrier) - # We have at least one segment, so we will have a tail segment - if self.num_segments > 0: - d.addCallback(lambda ignored: - self.push_tail_segment()) - - d.addCallback(lambda ignored: - self.push_encprivkey()) - d.addCallback(lambda ignored: - self.push_blockhashes()) - d.addCallback(lambda ignored: - self.push_sharehashes()) - d.addCallback(lambda ignored: - self.push_toplevel_hashes_and_signature()) - d.addCallback(lambda ignored: - self.finish_publishing()) - return d - - - def _publish_sdmf(self): - self._status.timings["setup"] = time.time() - self._started - self.salt = os.urandom(16) hunk ./src/allmydata/mutable/publish.py 307 - d = self._encrypt_and_encode() - d.addCallback(self._generate_shares) - def _start_pushing(res): - self._started_pushing = time.time() - return res - d.addCallback(_start_pushing) - d.addCallback(self.loop) # trigger delivery - d.addErrback(self._fatal_error) + self._state = PUSHING_BLOCKS_STATE + self._push() return self.done_deferred hunk ./src/allmydata/mutable/publish.py 327 segment_size) else: self.num_segments = 0 + + self.log("building encoding parameters for file") + self.log("got segsize %d" % self.segment_size) + self.log("got %d segments" % self.num_segments) + if self._version == SDMF_VERSION: assert self.num_segments in (0, 1) # SDMF hunk ./src/allmydata/mutable/publish.py 334 - return # calculate the tail segment size. hunk ./src/allmydata/mutable/publish.py 335 - self.tail_segment_size = len(self.newdata) % segment_size hunk ./src/allmydata/mutable/publish.py 336 - if self.tail_segment_size == 0: + if segment_size and self.newdata: + self.tail_segment_size = len(self.newdata) % segment_size + else: + self.tail_segment_size = 0 + + if self.tail_segment_size == 0 and segment_size: # The tail segment is the same size as the other segments. self.tail_segment_size = segment_size hunk ./src/allmydata/mutable/publish.py 345 - # We'll make an encoder ahead-of-time for the normal-sized - # segments (defined as any segment of segment_size size. - # (the part of the code that puts the tail segment will make its - # own encoder for that part) + # Make FEC encoders fec = codec.CRSEncoder() fec.set_params(self.segment_size, self.required_shares, self.total_shares) hunk ./src/allmydata/mutable/publish.py 352 self.piece_size = fec.get_block_size() self.fec = fec + if self.tail_segment_size == self.segment_size: + self.tail_fec = self.fec + else: + tail_fec = codec.CRSEncoder() + tail_fec.set_params(self.tail_segment_size, + self.required_shares, + self.total_shares) + self.tail_fec = tail_fec + + self._current_segment = 0 + + + def _push(self, ignored=None): + """ + I manage state transitions. In particular, I see that we still + have a good enough number of writers to complete the upload + successfully. + """ + # Can we still successfully publish this file? + # TODO: Keep track of outstanding queries before aborting the + # process. + if len(self.writers) <= self.required_shares or self.surprised: + return self._failure() + + # Figure out what we need to do next. Each of these needs to + # return a deferred so that we don't block execution when this + # is first called in the upload method. 
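+        # (Editorial aside, not part of the patch: the state machine that
+        #  _push() drives, using the module-level constants added at the top
+        #  of this patch.
+        #
+        #    PUSHING_BLOCKS_STATE            push_segment(segnum) for each
+        #                                    segment; each completed segment
+        #                                    re-enters _push(), so segments
+        #                                    go out one at a time until
+        #                                    segnum == num_segments
+        #    PUSHING_EVERYTHING_ELSE_STATE   push_everything_else(): the
+        #                                    encrypted private key, block
+        #                                    hashes, share hashes, root hash
+        #                                    plus signature, verification
+        #                                    key, and offsets
+        #    DONE_STATE                      _done() fires self.done_deferred
+        #
+        #  On every entry, _push() first checks that enough writers survive
+        #  and that no uncoordinated write has been seen; otherwise it
+        #  short-circuits to _failure().)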
+ if self._state == PUSHING_BLOCKS_STATE: + return self.push_segment(self._current_segment) + + # XXX: Do we want more granularity in states? Is that useful at + # all? + # Yes -- quicker reaction to UCW. + elif self._state == PUSHING_EVERYTHING_ELSE_STATE: + return self.push_everything_else() + + # If we make it to this point, we were successful in placing the + # file. + return self._done(None) + def push_segment(self, segnum): hunk ./src/allmydata/mutable/publish.py 394 + if self.num_segments == 0 and self._version == SDMF_VERSION: + self._add_dummy_salts() + + if segnum == self.num_segments: + # We don't have any more segments to push. + self._state = PUSHING_EVERYTHING_ELSE_STATE + return self._push() + + d = self._encode_segment(segnum) + d.addCallback(self._push_segment, segnum) + def _increment_segnum(ign): + self._current_segment += 1 + # XXX: I don't think we need to do addBoth here -- any errBacks + # should be handled within push_segment. + d.addBoth(_increment_segnum) + d.addBoth(self._push) + + + def _add_dummy_salts(self): + """ + SDMF files need a salt even if they're empty, or the signature + won't make sense. This method adds a dummy salt to each of our + SDMF writers so that they can write the signature later. + """ + salt = os.urandom(16) + assert self._version == SDMF_VERSION + + for writer in self.writers.itervalues(): + writer.put_salt(salt) + + + def _encode_segment(self, segnum): + """ + I encrypt and encode the segment segnum. + """ started = time.time() hunk ./src/allmydata/mutable/publish.py 430 - segsize = self.segment_size + + if segnum + 1 == self.num_segments: + segsize = self.tail_segment_size + else: + segsize = self.segment_size + + + offset = self.segment_size * segnum + length = segsize + offset self.log("Pushing segment %d of %d" % (segnum + 1, self.num_segments)) hunk ./src/allmydata/mutable/publish.py 440 - data = self.newdata[segsize * segnum:segsize*(segnum + 1)] + data = self.newdata[offset:length] assert len(data) == segsize salt = os.urandom(16) hunk ./src/allmydata/mutable/publish.py 455 started = now # now apply FEC + if segnum + 1 == self.num_segments: + fec = self.tail_fec + else: + fec = self.fec self._status.set_status("Encoding") crypttext_pieces = [None] * self.required_shares hunk ./src/allmydata/mutable/publish.py 462 - piece_size = self.piece_size + piece_size = fec.get_block_size() for i in range(len(crypttext_pieces)): offset = i * piece_size piece = crypttext[offset:offset+piece_size] hunk ./src/allmydata/mutable/publish.py 469 piece = piece + "\x00"*(piece_size - len(piece)) # padding crypttext_pieces[i] = piece assert len(piece) == piece_size - d = self.fec.encode(crypttext_pieces) + d = fec.encode(crypttext_pieces) def _done_encoding(res): elapsed = time.time() - started self._status.timings["encode"] = elapsed hunk ./src/allmydata/mutable/publish.py 473 - return res + return (res, salt) d.addCallback(_done_encoding) hunk ./src/allmydata/mutable/publish.py 475 - - def _push_shares_and_salt(results): - shares, shareids = results - dl = [] - for i in xrange(len(shares)): - sharedata = shares[i] - shareid = shareids[i] - block_hash = hashutil.block_hash(salt + sharedata) - self.blockhashes[shareid].append(block_hash) - - # find the writer for this share - d = self.writers[shareid].put_block(sharedata, segnum, salt) - dl.append(d) - # TODO: Naturally, we need to check on the results of these. 
- return defer.DeferredList(dl) - d.addCallback(_push_shares_and_salt) return d hunk ./src/allmydata/mutable/publish.py 478 - def push_tail_segment(self): - # This is essentially the same as push_segment, except that we - # don't use the cached encoder that we use elsewhere. - self.log("Pushing tail segment") + def _push_segment(self, encoded_and_salt, segnum): + """ + I push (data, salt) as segment number segnum. + """ + results, salt = encoded_and_salt + shares, shareids = results started = time.time() hunk ./src/allmydata/mutable/publish.py 485 - segsize = self.segment_size - data = self.newdata[segsize * (self.num_segments-1):] - assert len(data) == self.tail_segment_size - salt = os.urandom(16) - - key = hashutil.ssk_readkey_data_hash(salt, self.readkey) - enc = AES(key) - crypttext = enc.process(data) - assert len(crypttext) == len(data) + dl = [] + for i in xrange(len(shares)): + sharedata = shares[i] + shareid = shareids[i] + if self._version == MDMF_VERSION: + hashed = salt + sharedata + else: + hashed = sharedata + block_hash = hashutil.block_hash(hashed) + self.blockhashes[shareid].append(block_hash) hunk ./src/allmydata/mutable/publish.py 496 - now = time.time() - self._status.timings['encrypt'] = now - started - started = now + # find the writer for this share + writer = self.writers[shareid] + d = writer.put_block(sharedata, segnum, salt) + d.addCallback(self._got_write_answer, writer, started) + d.addErrback(self._connection_problem, writer) + dl.append(d) + # TODO: Naturally, we need to check on the results of these. + return defer.DeferredList(dl) hunk ./src/allmydata/mutable/publish.py 505 - self._status.set_status("Encoding") - tail_fec = codec.CRSEncoder() - tail_fec.set_params(self.tail_segment_size, - self.required_shares, - self.total_shares) hunk ./src/allmydata/mutable/publish.py 506 - crypttext_pieces = [None] * self.required_shares - piece_size = tail_fec.get_block_size() - for i in range(len(crypttext_pieces)): - offset = i * piece_size - piece = crypttext[offset:offset+piece_size] - piece = piece + "\x00"*(piece_size - len(piece)) # padding - crypttext_pieces[i] = piece - assert len(piece) == piece_size - d = tail_fec.encode(crypttext_pieces) - def _push_shares_and_salt(results): - shares, shareids = results - dl = [] - for i in xrange(len(shares)): - sharedata = shares[i] - shareid = shareids[i] - block_hash = hashutil.block_hash(salt + sharedata) - self.blockhashes[shareid].append(block_hash) - # find the writer for this share - d = self.writers[shareid].put_block(sharedata, - self.num_segments - 1, - salt) - dl.append(d) - # TODO: Naturally, we need to check on the results of these. - return defer.DeferredList(dl) - d.addCallback(_push_shares_and_salt) + def push_everything_else(self): + """ + I put everything else associated with a share. 
+ """ + encprivkey = self._encprivkey + d = self.push_encprivkey() + d.addCallback(self.push_blockhashes) + d.addCallback(self.push_sharehashes) + d.addCallback(self.push_toplevel_hashes_and_signature) + d.addCallback(self.finish_publishing) + def _change_state(ignored): + self._state = DONE_STATE + d.addCallback(_change_state) + d.addCallback(self._push) return d hunk ./src/allmydata/mutable/publish.py 527 started = time.time() encprivkey = self._encprivkey dl = [] - def _spy_on_writer(results): - print results - return results - for shnum, writer in self.writers.iteritems(): + for writer in self.writers.itervalues(): d = writer.put_encprivkey(encprivkey) hunk ./src/allmydata/mutable/publish.py 529 + d.addCallback(self._got_write_answer, writer, started) + d.addErrback(self._connection_problem, writer) dl.append(d) d = defer.DeferredList(dl) return d hunk ./src/allmydata/mutable/publish.py 536 - def push_blockhashes(self): + def push_blockhashes(self, ignored): started = time.time() dl = [] hunk ./src/allmydata/mutable/publish.py 539 - def _spy_on_results(results): - print results - return results self.sharehash_leaves = [None] * len(self.blockhashes) for shnum, blockhashes in self.blockhashes.iteritems(): t = hashtree.HashTree(blockhashes) hunk ./src/allmydata/mutable/publish.py 545 self.blockhashes[shnum] = list(t) # set the leaf for future use. self.sharehash_leaves[shnum] = t[0] - d = self.writers[shnum].put_blockhashes(self.blockhashes[shnum]) + writer = self.writers[shnum] + d = writer.put_blockhashes(self.blockhashes[shnum]) + d.addCallback(self._got_write_answer, writer, started) + d.addErrback(self._connection_problem, self.writers[shnum]) dl.append(d) d = defer.DeferredList(dl) return d hunk ./src/allmydata/mutable/publish.py 554 - def push_sharehashes(self): + def push_sharehashes(self, ignored): + started = time.time() share_hash_tree = hashtree.HashTree(self.sharehash_leaves) share_hash_chain = {} ds = [] hunk ./src/allmydata/mutable/publish.py 559 - def _spy_on_results(results): - print results - return results for shnum in xrange(len(self.sharehash_leaves)): needed_indices = share_hash_tree.needed_hashes(shnum) self.sharehashes[shnum] = dict( [ (i, share_hash_tree[i]) hunk ./src/allmydata/mutable/publish.py 563 for i in needed_indices] ) - d = self.writers[shnum].put_sharehashes(self.sharehashes[shnum]) + writer = self.writers[shnum] + d = writer.put_sharehashes(self.sharehashes[shnum]) + d.addCallback(self._got_write_answer, writer, started) + d.addErrback(self._connection_problem, writer) ds.append(d) self.root_hash = share_hash_tree[0] d = defer.DeferredList(ds) hunk ./src/allmydata/mutable/publish.py 573 return d - def push_toplevel_hashes_and_signature(self): + def push_toplevel_hashes_and_signature(self, ignored): # We need to to three things here: # - Push the root hash and salt hash # - Get the checkstring of the resulting layout; sign that. 
hunk ./src/allmydata/mutable/publish.py 578 # - Push the signature + started = time.time() ds = [] hunk ./src/allmydata/mutable/publish.py 580 - def _spy_on_results(results): - print results - return results for shnum in xrange(self.total_shares): hunk ./src/allmydata/mutable/publish.py 581 - d = self.writers[shnum].put_root_hash(self.root_hash) + writer = self.writers[shnum] + d = writer.put_root_hash(self.root_hash) + d.addCallback(self._got_write_answer, writer, started) ds.append(d) d = defer.DeferredList(ds) hunk ./src/allmydata/mutable/publish.py 586 - def _make_and_place_signature(ignored): - signable = self.writers[0].get_signable() - self.signature = self._privkey.sign(signable) - - ds = [] - for (shnum, writer) in self.writers.iteritems(): - d = writer.put_signature(self.signature) - ds.append(d) - return defer.DeferredList(ds) - d.addCallback(_make_and_place_signature) + d.addCallback(self._update_checkstring) + d.addCallback(self._make_and_place_signature) return d hunk ./src/allmydata/mutable/publish.py 591 - def finish_publishing(self): + def _update_checkstring(self, ignored): + """ + After putting the root hash, MDMF files will have the + checkstring written to the storage server. This means that we + can update our copy of the checkstring so we can detect + uncoordinated writes. SDMF files will have the same checkstring, + so we need not do anything. + """ + self._checkstring = self.writers.values()[0].get_checkstring() + + + def _make_and_place_signature(self, ignored): + """ + I create and place the signature. + """ + started = time.time() + signable = self.writers[0].get_signable() + self.signature = self._privkey.sign(signable) + + ds = [] + for (shnum, writer) in self.writers.iteritems(): + d = writer.put_signature(self.signature) + d.addCallback(self._got_write_answer, writer, started) + d.addErrback(self._connection_problem, writer) + ds.append(d) + return defer.DeferredList(ds) + + + def finish_publishing(self, ignored): # We're almost done -- we just need to put the verification key # and the offsets hunk ./src/allmydata/mutable/publish.py 622 + started = time.time() ds = [] verification_key = self._pubkey.serialize() hunk ./src/allmydata/mutable/publish.py 626 - def _spy_on_results(results): - print results - return results + + # TODO: Bad, since we remove from this same dict. We need to + # make a copy, or just use a non-iterated value. for (shnum, writer) in self.writers.iteritems(): d = writer.put_verification_key(verification_key) hunk ./src/allmydata/mutable/publish.py 631 + d.addCallback(self._got_write_answer, writer, started) + d.addCallback(self._record_verinfo) d.addCallback(lambda ignored, writer=writer: writer.finish_publishing()) hunk ./src/allmydata/mutable/publish.py 635 + d.addCallback(self._got_write_answer, writer, started) + d.addErrback(self._connection_problem, writer) ds.append(d) return defer.DeferredList(ds) hunk ./src/allmydata/mutable/publish.py 641 - def _turn_barrier(self, res): - # putting this method in a Deferred chain imposes a guaranteed - # reactor turn between the pre- and post- portions of that chain. - # This can be useful to limit memory consumption: since Deferreds do - # not do tail recursion, code which uses defer.succeed(result) for - # consistency will cause objects to live for longer than you might - # normally expect. 
- return fireEventually(res) + def _record_verinfo(self, ignored): + self.versioninfo = self.writers.values()[0].get_verinfo() hunk ./src/allmydata/mutable/publish.py 645 - def _fatal_error(self, f): - self.log("error during loop", failure=f, level=log.UNUSUAL) - self._done(f) + def _connection_problem(self, f, writer): + """ + We ran into a connection problem while working with writer, and + need to deal with that. + """ + self.log("found problem: %s" % str(f)) + self._last_failure = f + del(self.writers[writer.shnum]) hunk ./src/allmydata/mutable/publish.py 654 - def _update_status(self): - self._status.set_status("Sending Shares: %d placed out of %d, " - "%d messages outstanding" % - (len(self.placed), - len(self.goal), - len(self.outstanding))) - self._status.set_progress(1.0 * len(self.placed) / len(self.goal)) def loop(self, ignored=None): self.log("entering loop", level=log.NOISY) hunk ./src/allmydata/mutable/publish.py 778 self.log_goal(self.goal, "after update: ") - def _encrypt_and_encode(self): - # this returns a Deferred that fires with a list of (sharedata, - # sharenum) tuples. TODO: cache the ciphertext, only produce the - # shares that we care about. - self.log("_encrypt_and_encode") - - self._status.set_status("Encrypting") - started = time.time() + def _got_write_answer(self, answer, writer, started): + if not answer: + # SDMF writers only pretend to write when readers set their + # blocks, salts, and so on -- they actually just write once, + # at the end of the upload process. In fake writes, they + # return defer.succeed(None). If we see that, we shouldn't + # bother checking it. + return hunk ./src/allmydata/mutable/publish.py 787 - key = hashutil.ssk_readkey_data_hash(self.salt, self.readkey) - enc = AES(key) - crypttext = enc.process(self.newdata) - assert len(crypttext) == len(self.newdata) + peerid = writer.peerid + lp = self.log("_got_write_answer from %s, share %d" % + (idlib.shortnodeid_b2a(peerid), writer.shnum)) now = time.time() hunk ./src/allmydata/mutable/publish.py 792 - self._status.timings["encrypt"] = now - started - started = now - - # now apply FEC - - self._status.set_status("Encoding") - fec = codec.CRSEncoder() - fec.set_params(self.segment_size, - self.required_shares, self.total_shares) - piece_size = fec.get_block_size() - crypttext_pieces = [None] * self.required_shares - for i in range(len(crypttext_pieces)): - offset = i * piece_size - piece = crypttext[offset:offset+piece_size] - piece = piece + "\x00"*(piece_size - len(piece)) # padding - crypttext_pieces[i] = piece - assert len(piece) == piece_size - - d = fec.encode(crypttext_pieces) - def _done_encoding(res): - elapsed = time.time() - started - self._status.timings["encode"] = elapsed - return res - d.addCallback(_done_encoding) - return d - - - def _generate_shares(self, shares_and_shareids): - # this sets self.shares and self.root_hash - self.log("_generate_shares") - self._status.set_status("Generating Shares") - started = time.time() - - # we should know these by now - privkey = self._privkey - encprivkey = self._encprivkey - pubkey = self._pubkey - - (shares, share_ids) = shares_and_shareids - - assert len(shares) == len(share_ids) - assert len(shares) == self.total_shares - all_shares = {} - block_hash_trees = {} - share_hash_leaves = [None] * len(shares) - for i in range(len(shares)): - share_data = shares[i] - shnum = share_ids[i] - all_shares[shnum] = share_data - - # build the block hash tree. SDMF has only one leaf. 
- leaves = [hashutil.block_hash(share_data)] - t = hashtree.HashTree(leaves) - block_hash_trees[shnum] = list(t) - share_hash_leaves[shnum] = t[0] - for leaf in share_hash_leaves: - assert leaf is not None - share_hash_tree = hashtree.HashTree(share_hash_leaves) - share_hash_chain = {} - for shnum in range(self.total_shares): - needed_hashes = share_hash_tree.needed_hashes(shnum) - share_hash_chain[shnum] = dict( [ (i, share_hash_tree[i]) - for i in needed_hashes ] ) - root_hash = share_hash_tree[0] - assert len(root_hash) == 32 - self.log("my new root_hash is %s" % base32.b2a(root_hash)) - self._new_version_info = (self._new_seqnum, root_hash, self.salt) - - prefix = pack_prefix(self._new_seqnum, root_hash, self.salt, - self.required_shares, self.total_shares, - self.segment_size, len(self.newdata)) - - # now pack the beginning of the share. All shares are the same up - # to the signature, then they have divergent share hash chains, - # then completely different block hash trees + salt + share data, - # then they all share the same encprivkey at the end. The sizes - # of everything are the same for all shares. - - sign_started = time.time() - signature = privkey.sign(prefix) - self._status.timings["sign"] = time.time() - sign_started - - verification_key = pubkey.serialize() - - final_shares = {} - for shnum in range(self.total_shares): - final_share = pack_share(prefix, - verification_key, - signature, - share_hash_chain[shnum], - block_hash_trees[shnum], - all_shares[shnum], - encprivkey) - final_shares[shnum] = final_share - elapsed = time.time() - started - self._status.timings["pack"] = elapsed - self.shares = final_shares - self.root_hash = root_hash - - # we also need to build up the version identifier for what we're - # pushing. Extract the offsets from one of our shares. - assert final_shares - offsets = unpack_header(final_shares.values()[0])[-1] - offsets_tuple = tuple( [(key,value) for key,value in offsets.items()] ) - verinfo = (self._new_seqnum, root_hash, self.salt, - self.segment_size, len(self.newdata), - self.required_shares, self.total_shares, - prefix, offsets_tuple) - self.versioninfo = verinfo - - - - def _send_shares(self, needed): - self.log("_send_shares") - - # we're finally ready to send out our shares. If we encounter any - # surprises here, it's because somebody else is writing at the same - # time. (Note: in the future, when we remove the _query_peers() step - # and instead speculate about [or remember] which shares are where, - # surprises here are *not* indications of UncoordinatedWriteError, - # and we'll need to respond to them more gracefully.) - - # needed is a set of (peerid, shnum) tuples. The first thing we do is - # organize it by peerid. - - peermap = DictOfSets() - for (peerid, shnum) in needed: - peermap.add(peerid, shnum) - - # the next thing is to build up a bunch of test vectors. The - # semantics of Publish are that we perform the operation if the world - # hasn't changed since the ServerMap was constructed (more or less). - # For every share we're trying to place, we create a test vector that - # tests to see if the server*share still corresponds to the - # map. - - all_tw_vectors = {} # maps peerid to tw_vectors - sm = self._servermap.servermap - - for key in needed: - (peerid, shnum) = key - - if key in sm: - # an old version of that share already exists on the - # server, according to our servermap. We will create a - # request that attempts to replace it. 
- old_versionid, old_timestamp = sm[key] - (old_seqnum, old_root_hash, old_salt, old_segsize, - old_datalength, old_k, old_N, old_prefix, - old_offsets_tuple) = old_versionid - old_checkstring = pack_checkstring(old_seqnum, - old_root_hash, - old_salt) - testv = (0, len(old_checkstring), "eq", old_checkstring) - - elif key in self.bad_share_checkstrings: - old_checkstring = self.bad_share_checkstrings[key] - testv = (0, len(old_checkstring), "eq", old_checkstring) - - else: - # add a testv that requires the share not exist - - # Unfortunately, foolscap-0.2.5 has a bug in the way inbound - # constraints are handled. If the same object is referenced - # multiple times inside the arguments, foolscap emits a - # 'reference' token instead of a distinct copy of the - # argument. The bug is that these 'reference' tokens are not - # accepted by the inbound constraint code. To work around - # this, we need to prevent python from interning the - # (constant) tuple, by creating a new copy of this vector - # each time. - - # This bug is fixed in foolscap-0.2.6, and even though this - # version of Tahoe requires foolscap-0.3.1 or newer, we are - # supposed to be able to interoperate with older versions of - # Tahoe which are allowed to use older versions of foolscap, - # including foolscap-0.2.5 . In addition, I've seen other - # foolscap problems triggered by 'reference' tokens (see #541 - # for details). So we must keep this workaround in place. - - #testv = (0, 1, 'eq', "") - testv = tuple([0, 1, 'eq', ""]) - - testvs = [testv] - # the write vector is simply the share - writev = [(0, self.shares[shnum])] - - if peerid not in all_tw_vectors: - all_tw_vectors[peerid] = {} - # maps shnum to (testvs, writevs, new_length) - assert shnum not in all_tw_vectors[peerid] - - all_tw_vectors[peerid][shnum] = (testvs, writev, None) - - # we read the checkstring back from each share, however we only use - # it to detect whether there was a new share that we didn't know - # about. The success or failure of the write will tell us whether - # there was a collision or not. If there is a collision, the first - # thing we'll do is update the servermap, which will find out what - # happened. We could conceivably reduce a roundtrip by using the - # readv checkstring to populate the servermap, but really we'd have - # to read enough data to validate the signatures too, so it wouldn't - # be an overall win. - read_vector = [(0, struct.calcsize(SIGNED_PREFIX))] - - # ok, send the messages! 
- self.log("sending %d shares" % len(all_tw_vectors), level=log.NOISY) - started = time.time() - for (peerid, tw_vectors) in all_tw_vectors.items(): - - write_enabler = self._node.get_write_enabler(peerid) - renew_secret = self._node.get_renewal_secret(peerid) - cancel_secret = self._node.get_cancel_secret(peerid) - secrets = (write_enabler, renew_secret, cancel_secret) - shnums = tw_vectors.keys() - - for shnum in shnums: - self.outstanding.add( (peerid, shnum) ) - - d = self._do_testreadwrite(peerid, secrets, - tw_vectors, read_vector) - d.addCallbacks(self._got_write_answer, self._got_write_error, - callbackArgs=(peerid, shnums, started), - errbackArgs=(peerid, shnums, started)) - # tolerate immediate errback, like with DeadReferenceError - d.addBoth(fireEventually) - d.addCallback(self.loop) - d.addErrback(self._fatal_error) - - self._update_status() - self.log("%d shares sent" % len(all_tw_vectors), level=log.NOISY) + elapsed = now - started hunk ./src/allmydata/mutable/publish.py 794 - def _do_testreadwrite(self, peerid, secrets, - tw_vectors, read_vector): - storage_index = self._storage_index - ss = self.connections[peerid] + self._status.add_per_server_time(peerid, elapsed) hunk ./src/allmydata/mutable/publish.py 796 - #print "SS[%s] is %s" % (idlib.shortnodeid_b2a(peerid), ss), ss.tracker.interfaceName - d = ss.callRemote("slot_testv_and_readv_and_writev", - storage_index, - secrets, - tw_vectors, - read_vector) - return d + wrote, read_data = answer hunk ./src/allmydata/mutable/publish.py 798 - def _got_write_answer(self, answer, peerid, shnums, started): - lp = self.log("_got_write_answer from %s" % - idlib.shortnodeid_b2a(peerid)) - for shnum in shnums: - self.outstanding.discard( (peerid, shnum) ) + surprise_shares = set(read_data.keys()) - set([writer.shnum]) hunk ./src/allmydata/mutable/publish.py 800 - now = time.time() - elapsed = now - started - self._status.add_per_server_time(peerid, elapsed) + # We need to remove from surprise_shares any shares that we are + # knowingly also writing to that peer from other writers. hunk ./src/allmydata/mutable/publish.py 803 - wrote, read_data = answer + # TODO: Precompute this. + known_shnums = [x.shnum for x in self.writers.values() + if x.peerid == peerid] + surprise_shares -= set(known_shnums) + self.log("found the following surprise shares: %s" % + str(surprise_shares)) hunk ./src/allmydata/mutable/publish.py 810 - surprise_shares = set(read_data.keys()) - set(shnums) + # Now surprise shares contains all of the shares that we did not + # expect to be there. surprised = False for shnum in surprise_shares: hunk ./src/allmydata/mutable/publish.py 817 # read_data is a dict mapping shnum to checkstring (SIGNED_PREFIX) checkstring = read_data[shnum][0] - their_version_info = unpack_checkstring(checkstring) - if their_version_info == self._new_version_info: + # What we want to do here is to see if their (seqnum, + # roothash, salt) is the same as our (seqnum, roothash, + # salt), or the equivalent for MDMF. The best way to do this + # is to store a packed representation of our checkstring + # somewhere, then not bother unpacking the other + # checkstring. 
+ if checkstring == self._checkstring: # they have the right share, somehow if (peerid,shnum) in self.goal: hunk ./src/allmydata/mutable/publish.py 902 self.log("our testv failed, so the write did not happen", parent=lp, level=log.WEIRD, umid="8sc26g") self.surprised = True - self.bad_peers.add(peerid) # don't ask them again + # TODO: This needs to + self.bad_peers.add(writer) # don't ask them again # use the checkstring to add information to the log message for (shnum,readv) in read_data.items(): checkstring = readv[0] hunk ./src/allmydata/mutable/publish.py 928 # self.loop() will take care of finding new homes return - for shnum in shnums: - self.placed.add( (peerid, shnum) ) - # and update the servermap - self._servermap.add_new_share(peerid, shnum, + # and update the servermap + # self.versioninfo is set during the last phase of publishing. + # If we get there, we know that responses correspond to placed + # shares, and can safely execute these statements. + if self.versioninfo: + self.log("wrote successfully: adding new share to servermap") + self._servermap.add_new_share(peerid, writer.shnum, self.versioninfo, started) hunk ./src/allmydata/mutable/publish.py 936 - - # self.loop() will take care of checking to see if we're done - return + self.placed.add( (peerid, writer.shnum) ) hunk ./src/allmydata/mutable/publish.py 938 - def _got_write_error(self, f, peerid, shnums, started): - for shnum in shnums: - self.outstanding.discard( (peerid, shnum) ) - self.bad_peers.add(peerid) - if self._first_write_error is None: - self._first_write_error = f - self.log(format="error while writing shares %(shnums)s to peerid %(peerid)s", - shnums=list(shnums), peerid=idlib.shortnodeid_b2a(peerid), - failure=f, - level=log.UNUSUAL) # self.loop() will take care of checking to see if we're done return hunk ./src/allmydata/mutable/publish.py 949 now = time.time() self._status.timings["total"] = now - self._started self._status.set_active(False) - if isinstance(res, failure.Failure): - self.log("Publish done, with failure", failure=res, - level=log.WEIRD, umid="nRsR9Q") - self._status.set_status("Failed") - elif self.surprised: - self.log("Publish done, UncoordinatedWriteError", level=log.UNUSUAL) - self._status.set_status("UncoordinatedWriteError") - # deliver a failure - res = failure.Failure(UncoordinatedWriteError()) - # TODO: recovery - else: - self.log("Publish done, success") - self._status.set_status("Finished") - self._status.set_progress(1.0) + self.log("Publish done, success") + self._status.set_status("Finished") + self._status.set_progress(1.0) eventually(self.done_deferred.callback, res) hunk ./src/allmydata/mutable/publish.py 954 + def _failure(self): + + if not self.surprised: + # We ran out of servers + self.log("Publish ran out of good servers, " + "last failure was: %s" % str(self._last_failure)) + e = NotEnoughServersError("Ran out of non-bad servers, " + "last failure was %s" % + str(self._last_failure)) + else: + # We ran into shares that we didn't recognize, which means + # that we need to return an UncoordinatedWriteError. 
+ self.log("Publish failed with UncoordinatedWriteError") + e = UncoordinatedWriteError() + f = failure.Failure(e) + eventually(self.done_deferred.callback, f) } [test/test_mutable.py: remove tests that are no longer relevant Kevan Carstensen **20100702225710 Ignore-this: 90a26b4cc4b2e190a635474ba7097e21 ] hunk ./src/allmydata/test/test_mutable.py 627 return d -class MakeShares(unittest.TestCase): - def test_encrypt(self): - nm = make_nodemaker() - CONTENTS = "some initial contents" - d = nm.create_mutable_file(CONTENTS) - def _created(fn): - p = Publish(fn, nm.storage_broker, None) - p.salt = "SALT" * 4 - p.readkey = "\x00" * 16 - p.newdata = CONTENTS - p.required_shares = 3 - p.total_shares = 10 - p.setup_encoding_parameters() - return p._encrypt_and_encode() - d.addCallback(_created) - def _done(shares_and_shareids): - (shares, share_ids) = shares_and_shareids - self.failUnlessEqual(len(shares), 10) - for sh in shares: - self.failUnless(isinstance(sh, str)) - self.failUnlessEqual(len(sh), 7) - self.failUnlessEqual(len(share_ids), 10) - d.addCallback(_done) - return d - test_encrypt.todo = "Write an equivalent of this for the new uploader" - - def test_generate(self): - nm = make_nodemaker() - CONTENTS = "some initial contents" - d = nm.create_mutable_file(CONTENTS) - def _created(fn): - self._fn = fn - p = Publish(fn, nm.storage_broker, None) - self._p = p - p.newdata = CONTENTS - p.required_shares = 3 - p.total_shares = 10 - p.setup_encoding_parameters() - p._new_seqnum = 3 - p.salt = "SALT" * 4 - # make some fake shares - shares_and_ids = ( ["%07d" % i for i in range(10)], range(10) ) - p._privkey = fn.get_privkey() - p._encprivkey = fn.get_encprivkey() - p._pubkey = fn.get_pubkey() - return p._generate_shares(shares_and_ids) - d.addCallback(_created) - def _generated(res): - p = self._p - final_shares = p.shares - root_hash = p.root_hash - self.failUnlessEqual(len(root_hash), 32) - self.failUnless(isinstance(final_shares, dict)) - self.failUnlessEqual(len(final_shares), 10) - self.failUnlessEqual(sorted(final_shares.keys()), range(10)) - for i,sh in final_shares.items(): - self.failUnless(isinstance(sh, str)) - # feed the share through the unpacker as a sanity-check - pieces = unpack_share(sh) - (u_seqnum, u_root_hash, IV, k, N, segsize, datalen, - pubkey, signature, share_hash_chain, block_hash_tree, - share_data, enc_privkey) = pieces - self.failUnlessEqual(u_seqnum, 3) - self.failUnlessEqual(u_root_hash, root_hash) - self.failUnlessEqual(k, 3) - self.failUnlessEqual(N, 10) - self.failUnlessEqual(segsize, 21) - self.failUnlessEqual(datalen, len(CONTENTS)) - self.failUnlessEqual(pubkey, p._pubkey.serialize()) - sig_material = struct.pack(">BQ32s16s BBQQ", - 0, p._new_seqnum, root_hash, IV, - k, N, segsize, datalen) - self.failUnless(p._pubkey.verify(sig_material, signature)) - #self.failUnlessEqual(signature, p._privkey.sign(sig_material)) - self.failUnlessEqual(len(share_hash_chain), 4) # ln2(10)++ - for shnum,share_hash in share_hash_chain.items(): - self.failUnless(isinstance(shnum, int)) - self.failUnless(isinstance(share_hash, str)) - self.failUnlessEqual(len(share_hash), 32) - self.failUnless(isinstance(block_hash_tree, list)) - self.failUnlessEqual(len(block_hash_tree), 1) # very small tree - self.failUnlessEqual(IV, "SALT"*4) - self.failUnlessEqual(len(share_data), len("%07d" % 1)) - self.failUnlessEqual(enc_privkey, self._fn.get_encprivkey()) - d.addCallback(_generated) - return d - test_generate.todo = "Write an equivalent of this for the new uploader" - - # TODO: when we 
publish to 20 peers, we should get one share per peer on 10 - # when we publish to 3 peers, we should get either 3 or 4 shares per peer - # when we publish to zero peers, we should get a NotEnoughSharesError - class PublishMixin: def publish_one(self): # publish a file and create shares, which can then be manipulated [interfaces.py: create IMutableUploadable Kevan Carstensen **20100706215217 Ignore-this: bee202ec2bfbd8e41f2d4019cce176c7 ] hunk ./src/allmydata/interfaces.py 1693 """The upload is finished, and whatever filehandle was in use may be closed.""" + +class IMutableUploadable(Interface): + """ + I represent content that is due to be uploaded to a mutable filecap. + """ + # This is somewhat simpler than the IUploadable interface above + # because mutable files do not need to be concerned with possibly + # generating a CHK, nor with per-file keys. It is a subset of the + # methods in IUploadable, though, so we could just as well implement + # the mutable uploadables as IUploadables that don't happen to use + # those methods (with the understanding that the unused methods will + # never be called on such objects) + def get_size(): + """ + Returns a Deferred that fires with the size of the content held + by the uploadable. + """ + + def read(length): + """ + Returns a list of strings which, when concatenated, are the next + length bytes of the file, or fewer if there are fewer bytes + between the current location and the end of the file. + """ + + def close(): + """ + The process that used the Uploadable is finished using it, so + the uploadable may be closed. + """ + class IUploadResults(Interface): """I am returned by upload() methods. I contain a number of public attributes which can be read to determine the results of the upload. Some [mutable/publish.py: add MutableDataHandle and MutableFileHandle Kevan Carstensen **20100706215257 Ignore-this: 295ea3bc2a962fd14fb7877fc76c011c ] { hunk ./src/allmydata/mutable/publish.py 8 from zope.interface import implements from twisted.internet import defer from twisted.python import failure -from allmydata.interfaces import IPublishStatus, SDMF_VERSION, MDMF_VERSION +from allmydata.interfaces import IPublishStatus, SDMF_VERSION, MDMF_VERSION, \ + IMutableUploadable from allmydata.util import base32, hashutil, mathutil, idlib, log from allmydata import hashtree, codec from allmydata.storage.server import si_b2a hunk ./src/allmydata/mutable/publish.py 971 e = UncoordinatedWriteError() f = failure.Failure(e) eventually(self.done_deferred.callback, f) + + +class MutableFileHandle: + """ + I am a mutable uploadable built around a filehandle-like object, + usually either a StringIO instance or a handle to an actual file. + """ + implements(IMutableUploadable) + + def __init__(self, filehandle): + # The filehandle is defined as a generally file-like object that + # has these two methods. We don't care beyond that. + assert hasattr(filehandle, "read") + assert hasattr(filehandle, "close") + + self._filehandle = filehandle + + + def get_size(self): + """ + I return the amount of data in my filehandle. + """ + if not hasattr(self, "_size"): + old_position = self._filehandle.tell() + # Seek to the end of the file by seeking 0 bytes from the + # file's end + self._filehandle.seek(0, os.SEEK_END) + self._size = self._filehandle.tell() + # Restore the previous position, in case this was called + # after a read. 
+ self._filehandle.seek(old_position) + assert self._filehandle.tell() == old_position + + assert hasattr(self, "_size") + return self._size + + + def read(self, length): + """ + I return some data (up to length bytes) from my filehandle. + + In most cases, I return length bytes. If I don't, it is because + length is longer than the distance between my current position + in the file that I represent and its end. In that case, I return + as many bytes as I can before going over the EOF. + """ + return [self._filehandle.read(length)] + + + def close(self): + """ + I close the underlying filehandle. Any further operations on the + filehandle fail at this point. + """ + self._filehandle.close() + + +class MutableDataHandle(MutableFileHandle): + """ + I am a mutable uploadable built around a string, which I then cast + into a StringIO and treat as a filehandle. + """ + + def __init__(self, s): + # Take a string and return a file-like uploadable. + assert isinstance(s, str) + + MutableFileHandle.__init__(self, StringIO(s)) } [mutable/publish.py: reorganize in preparation of file-like uploadables Kevan Carstensen **20100706215541 Ignore-this: 5346c9f919ee5b73807c8f287c64e8ce ] { hunk ./src/allmydata/mutable/publish.py 4 import os, struct, time +from StringIO import StringIO from itertools import count from zope.interface import implements from twisted.internet import defer hunk ./src/allmydata/mutable/publish.py 118 self._status.set_helper(False) self._status.set_progress(0.0) self._status.set_active(True) - # We use this to control how the file is written. - version = self._node.get_version() - assert version in (SDMF_VERSION, MDMF_VERSION) - self._version = version + self._version = self._node.get_version() + assert self._version in (SDMF_VERSION, MDMF_VERSION) + def get_status(self): return self._status hunk ./src/allmydata/mutable/publish.py 141 # 0. Setup encoding parameters, encoder, and other such things. # 1. Encrypt, encode, and publish segments. + self.data = StringIO(newdata) + self.datalength = len(newdata) hunk ./src/allmydata/mutable/publish.py 144 - self.log("starting publish, datalen is %s" % len(newdata)) - self._status.set_size(len(newdata)) + self.log("starting publish, datalen is %s" % self.datalength) + self._status.set_size(self.datalength) self._status.set_status("Started") self._started = time.time() hunk ./src/allmydata/mutable/publish.py 193 self.full_peerlist = full_peerlist # for use later, immutable self.bad_peers = set() # peerids who have errbacked/refused requests - self.newdata = newdata - # This will set self.segment_size, self.num_segments, and # self.fec. 
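
A quick usage sketch of the two uploadable wrappers added above, assuming a source tree with these patches applied (Python 2, so str is bytes and StringIO is the stdlib module); this is illustration only, not part of the patch:

    from StringIO import StringIO
    from allmydata.mutable.publish import MutableFileHandle, MutableDataHandle

    # MutableDataHandle is just MutableFileHandle wrapped around a StringIO.
    u1 = MutableDataHandle("some mutable contents")
    assert u1.get_size() == 21
    assert "".join(u1.read(4)) == "some"   # read() returns a list of strings

    # Any object with read/close (plus tell/seek, used by get_size) will do.
    u2 = MutableFileHandle(StringIO("contents from a file-like object"))
    size = u2.get_size()                   # does not disturb the read position
    first = "".join(u2.read(8))
    rest = "".join(u2.read(size))          # may return fewer bytes near EOF
    u2.close()
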
self.setup_encoding_parameters() hunk ./src/allmydata/mutable/publish.py 272 self.required_shares, self.total_shares, self.segment_size, - len(self.newdata)) + self.datalength) self.writers[shnum].peerid = peerid if (peerid, shnum) in self._servermap.servermap: old_versionid, old_timestamp = self._servermap.servermap[key] hunk ./src/allmydata/mutable/publish.py 318 if self._version == MDMF_VERSION: segment_size = DEFAULT_MAX_SEGMENT_SIZE # 128 KiB by default else: - segment_size = len(self.newdata) # SDMF is only one segment + segment_size = self.datalength # SDMF is only one segment # this must be a multiple of self.required_shares segment_size = mathutil.next_multiple(segment_size, self.required_shares) hunk ./src/allmydata/mutable/publish.py 324 self.segment_size = segment_size if segment_size: - self.num_segments = mathutil.div_ceil(len(self.newdata), + self.num_segments = mathutil.div_ceil(self.datalength, segment_size) else: self.num_segments = 0 hunk ./src/allmydata/mutable/publish.py 337 assert self.num_segments in (0, 1) # SDMF # calculate the tail segment size. - if segment_size and self.newdata: - self.tail_segment_size = len(self.newdata) % segment_size + if segment_size and self.datalength: + self.tail_segment_size = self.datalength % segment_size else: self.tail_segment_size = 0 hunk ./src/allmydata/mutable/publish.py 438 segsize = self.segment_size - offset = self.segment_size * segnum - length = segsize + offset self.log("Pushing segment %d of %d" % (segnum + 1, self.num_segments)) hunk ./src/allmydata/mutable/publish.py 439 - data = self.newdata[offset:length] + data = self.data.read(segsize) + assert len(data) == segsize salt = os.urandom(16) hunk ./src/allmydata/mutable/publish.py 502 d.addCallback(self._got_write_answer, writer, started) d.addErrback(self._connection_problem, writer) dl.append(d) - # TODO: Naturally, we need to check on the results of these. 
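
For orientation, the segment bookkeeping that setup_encoding_parameters now does against self.datalength works out as below. The helpers are re-implemented here so the sketch stands alone (mathutil.next_multiple and mathutil.div_ceil behave the same way), and the worked numbers are illustrative, not part of the patch:

    DEFAULT_MAX_SEGMENT_SIZE = 128 * 1024   # 128 KiB, as in publish.py

    def next_multiple(n, k):
        return ((n + k - 1) // k) * k

    def div_ceil(n, d):
        return (n + d - 1) // d

    def plan_segments(datalength, required_shares, mdmf=True):
        if mdmf:
            segment_size = DEFAULT_MAX_SEGMENT_SIZE
        else:
            segment_size = datalength       # SDMF: the whole file is one segment
        # each segment must split evenly into required_shares blocks
        segment_size = next_multiple(segment_size, required_shares)
        num_segments = div_ceil(datalength, segment_size) if segment_size else 0
        tail = datalength % segment_size if (segment_size and datalength) else 0
        return segment_size, num_segments, tail

    # a 900 KiB MDMF file with k=3: eight segments, seven full plus a ~4 KiB tail
    assert plan_segments(900 * 1024, 3) == (131073, 8, 4089)
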
return defer.DeferredList(dl) } [test/test_mutable.py: write tests for MutableFileHandle and MutableDataHandle Kevan Carstensen **20100706215649 Ignore-this: df719a0c52b4bbe9be4fae206c7ab3e7 ] { hunk ./src/allmydata/test/test_mutable.py 2 -import struct +import struct, os from cStringIO import StringIO from twisted.trial import unittest from twisted.internet import defer, reactor hunk ./src/allmydata/test/test_mutable.py 26 NeedMoreDataError, UnrecoverableFileError, UncoordinatedWriteError, \ NotEnoughServersError, CorruptShareError from allmydata.mutable.retrieve import Retrieve -from allmydata.mutable.publish import Publish +from allmydata.mutable.publish import Publish, MutableFileHandle, \ + MutableDataHandle from allmydata.mutable.servermap import ServerMap, ServermapUpdater from allmydata.mutable.layout import unpack_header, unpack_share, \ MDMFSlotReadProxy hunk ./src/allmydata/test/test_mutable.py 2465 d.addCallback(lambda data: self.failUnlessEqual(data, CONTENTS)) return d + + +class FileHandle(unittest.TestCase): + def setUp(self): + self.test_data = "Test Data" * 50000 + self.sio = StringIO(self.test_data) + self.uploadable = MutableFileHandle(self.sio) + + + def test_filehandle_read(self): + self.basedir = "mutable/FileHandle/test_filehandle_read" + chunk_size = 10 + for i in xrange(0, len(self.test_data), chunk_size): + data = self.uploadable.read(chunk_size) + data = "".join(data) + start = i + end = i + chunk_size + self.failUnlessEqual(data, self.test_data[start:end]) + + + def test_filehandle_get_size(self): + self.basedir = "mutable/FileHandle/test_filehandle_get_size" + actual_size = len(self.test_data) + size = self.uploadable.get_size() + self.failUnlessEqual(size, actual_size) + + + def test_filehandle_get_size_out_of_order(self): + # We should be able to call get_size whenever we want without + # disturbing the location of the seek pointer. + chunk_size = 100 + data = self.uploadable.read(chunk_size) + self.failUnlessEqual("".join(data), self.test_data[:chunk_size]) + + # Now get the size. + size = self.uploadable.get_size() + self.failUnlessEqual(size, len(self.test_data)) + + # Now get more data. We should be right where we left off. + more_data = self.uploadable.read(chunk_size) + start = chunk_size + end = chunk_size * 2 + self.failUnlessEqual("".join(more_data), self.test_data[start:end]) + + + def test_filehandle_file(self): + # Make sure that the MutableFileHandle works on a file as well + # as a StringIO object, since in some cases it will be asked to + # deal with files. + self.basedir = self.mktemp() + # necessary? What am I doing wrong here? + os.mkdir(self.basedir) + f_path = os.path.join(self.basedir, "test_file") + f = open(f_path, "w") + f.write(self.test_data) + f.close() + f = open(f_path, "r") + + uploadable = MutableFileHandle(f) + + data = uploadable.read(len(self.test_data)) + self.failUnlessEqual("".join(data), self.test_data) + size = uploadable.get_size() + self.failUnlessEqual(size, len(self.test_data)) + + + def test_close(self): + # Make sure that the MutableFileHandle closes its handle when + # told to do so. 
+ self.uploadable.close() + self.failUnless(self.sio.closed) + + +class DataHandle(unittest.TestCase): + def setUp(self): + self.test_data = "Test Data" * 50000 + self.uploadable = MutableDataHandle(self.test_data) + + + def test_datahandle_read(self): + chunk_size = 10 + for i in xrange(0, len(self.test_data), chunk_size): + data = self.uploadable.read(chunk_size) + data = "".join(data) + start = i + end = i + chunk_size + self.failUnlessEqual(data, self.test_data[start:end]) + + + def test_datahandle_get_size(self): + actual_size = len(self.test_data) + size = self.uploadable.get_size() + self.failUnlessEqual(size, actual_size) + + + def test_datahandle_get_size_out_of_order(self): + # We should be able to call get_size whenever we want without + # disturbing the location of the seek pointer. + chunk_size = 100 + data = self.uploadable.read(chunk_size) + self.failUnlessEqual("".join(data), self.test_data[:chunk_size]) + + # Now get the size. + size = self.uploadable.get_size() + self.failUnlessEqual(size, len(self.test_data)) + + # Now get more data. We should be right where we left off. + more_data = self.uploadable.read(chunk_size) + start = chunk_size + end = chunk_size * 2 + self.failUnlessEqual("".join(more_data), self.test_data[start:end]) } [Alter tests to work with the new APIs Kevan Carstensen **20100708000031 Ignore-this: 1f377904ac61ce40e9a04716fbd2ad95 ] { hunk ./src/allmydata/test/common.py 12 from allmydata import uri, dirnode, client from allmydata.introducer.server import IntroducerNode from allmydata.interfaces import IMutableFileNode, IImmutableFileNode, \ - FileTooLargeError, NotEnoughSharesError, ICheckable + FileTooLargeError, NotEnoughSharesError, ICheckable, \ + IMutableUploadable from allmydata.check_results import CheckResults, CheckAndRepairResults, \ DeepCheckResults, DeepCheckAndRepairResults from allmydata.mutable.common import CorruptShareError hunk ./src/allmydata/test/common.py 18 from allmydata.mutable.layout import unpack_header +from allmydata.mutable.publish import MutableDataHandle from allmydata.storage.server import storage_index_to_dir from allmydata.storage.mutable import MutableShareFile from allmydata.util import hashutil, log, fileutil, pollmixin hunk ./src/allmydata/test/common.py 182 self.init_from_cap(make_mutable_file_cap()) def create(self, contents, key_generator=None, keysize=None): initial_contents = self._get_initial_contents(contents) - if len(initial_contents) > self.MUTABLE_SIZELIMIT: + if initial_contents.get_size() > self.MUTABLE_SIZELIMIT: raise FileTooLargeError("SDMF is limited to one segment, and " hunk ./src/allmydata/test/common.py 184 - "%d > %d" % (len(initial_contents), + "%d > %d" % (initial_contents.get_size(), self.MUTABLE_SIZELIMIT)) hunk ./src/allmydata/test/common.py 186 - self.all_contents[self.storage_index] = initial_contents + data = initial_contents.read(initial_contents.get_size()) + data = "".join(data) + self.all_contents[self.storage_index] = data return defer.succeed(self) def _get_initial_contents(self, contents): hunk ./src/allmydata/test/common.py 191 - if isinstance(contents, str): - return contents if contents is None: hunk ./src/allmydata/test/common.py 192 - return "" + return MutableDataHandle("") + + if IMutableUploadable.providedBy(contents): + return contents + assert callable(contents), "%s should be callable, not %s" % \ (contents, type(contents)) return contents(self) hunk ./src/allmydata/test/common.py 309 return defer.succeed(self.all_contents[self.storage_index]) def overwrite(self, 
new_contents): - if len(new_contents) > self.MUTABLE_SIZELIMIT: + if new_contents.get_size() > self.MUTABLE_SIZELIMIT: raise FileTooLargeError("SDMF is limited to one segment, and " hunk ./src/allmydata/test/common.py 311 - "%d > %d" % (len(new_contents), + "%d > %d" % (new_contents.get_size(), self.MUTABLE_SIZELIMIT)) assert not self.is_readonly() hunk ./src/allmydata/test/common.py 314 - self.all_contents[self.storage_index] = new_contents + new_data = new_contents.read(new_contents.get_size()) + new_data = "".join(new_data) + self.all_contents[self.storage_index] = new_data return defer.succeed(None) def modify(self, modifier): # this does not implement FileTooLargeError, but the real one does hunk ./src/allmydata/test/common.py 324 def _modify(self, modifier): assert not self.is_readonly() old_contents = self.all_contents[self.storage_index] - self.all_contents[self.storage_index] = modifier(old_contents, None, True) + new_data = modifier(old_contents, None, True) + if new_data is not None: + new_data = new_data.read(new_data.get_size()) + new_data = "".join(new_data) + self.all_contents[self.storage_index] = new_data return None def make_mutable_file_cap(): hunk ./src/allmydata/test/test_checker.py 11 from allmydata.test.no_network import GridTestMixin from allmydata.immutable.upload import Data from allmydata.test.common_web import WebRenderingMixin +from allmydata.mutable.publish import MutableDataHandle class FakeClient: def get_storage_broker(self): hunk ./src/allmydata/test/test_checker.py 291 def _stash_immutable(ur): self.imm = c0.create_node_from_uri(ur.uri) d.addCallback(_stash_immutable) - d.addCallback(lambda ign: c0.create_mutable_file("contents")) + d.addCallback(lambda ign: + c0.create_mutable_file(MutableDataHandle("contents"))) def _stash_mutable(node): self.mut = node d.addCallback(_stash_mutable) hunk ./src/allmydata/test/test_cli.py 12 from allmydata.util import fileutil, hashutil, base32 from allmydata import uri from allmydata.immutable import upload +from allmydata.mutable.publish import MutableDataHandle from allmydata.dirnode import normalize # Test that the scripts can be imported -- although the actual tests of their hunk ./src/allmydata/test/test_cli.py 1983 self.set_up_grid() c0 = self.g.clients[0] DATA = "data" * 100 - d = c0.create_mutable_file(DATA) + DATA_uploadable = MutableDataHandle(DATA) + d = c0.create_mutable_file(DATA_uploadable) def _stash_uri(n): self.uri = n.get_uri() d.addCallback(_stash_uri) hunk ./src/allmydata/test/test_cli.py 2085 upload.Data("literal", convergence=""))) d.addCallback(_stash_uri, "small") - d.addCallback(lambda ign: c0.create_mutable_file(DATA+"1")) + d.addCallback(lambda ign: + c0.create_mutable_file(MutableDataHandle(DATA+"1"))) d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn)) d.addCallback(_stash_uri, "mutable") hunk ./src/allmydata/test/test_cli.py 2104 # root/small # root/mutable + # We haven't broken anything yet, so this should all be healthy. d.addCallback(lambda ign: self.do_cli("deep-check", "--verbose", self.rooturi)) def _check2((rc, out, err)): hunk ./src/allmydata/test/test_cli.py 2119 in lines, out) d.addCallback(_check2) + # Similarly, all of these results should be as we expect them to + # be for a healthy file layout. 
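
The test churn in this patch is almost entirely mechanical: every caller that used to hand create_mutable_file(), overwrite(), or upload() a bare string now wraps it in an uploadable first. A hedged sketch of the before/after calling convention (client here is a stand-in for any object exposing create_mutable_file, such as a grid client in the tests):

    from allmydata.mutable.publish import MutableDataHandle, MutableFileHandle

    def create_and_rewrite(client, big_file_path):
        # old: d = client.create_mutable_file("contents 1")
        d = client.create_mutable_file(MutableDataHandle("contents 1"))
        def _created(node):
            # old: node.overwrite("contents 2")
            d2 = node.overwrite(MutableDataHandle("contents 2"))
            # larger bodies can come straight from a filehandle instead of
            # being read into a string first
            d2.addCallback(lambda ign:
                           node.overwrite(MutableFileHandle(open(big_file_path, "rb"))))
            return d2
        d.addCallback(_created)
        return d
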
d.addCallback(lambda ign: self.do_cli("stats", self.rooturi)) def _check_stats((rc, out, err)): self.failUnlessReallyEqual(err, "") hunk ./src/allmydata/test/test_cli.py 2136 self.failUnlessIn(" 317-1000 : 1 (1000 B, 1000 B)", lines) d.addCallback(_check_stats) + # Now we break things. def _clobber_shares(ignored): shares = self.find_shares(self.uris[u"gööd"]) self.failUnlessReallyEqual(len(shares), 10) hunk ./src/allmydata/test/test_cli.py 2155 d.addCallback(_clobber_shares) # root - # root/gööd [9 shares] + # root/gööd [1 missing share] # root/small # root/mutable [1 corrupt share] hunk ./src/allmydata/test/test_cli.py 2161 d.addCallback(lambda ign: self.do_cli("deep-check", "--verbose", self.rooturi)) + # This should reveal the missing share, but not the corrupt + # share, since we didn't tell the deep check operation to also + # verify. def _check3((rc, out, err)): self.failUnlessReallyEqual(err, "") self.failUnlessReallyEqual(rc, 0) hunk ./src/allmydata/test/test_cli.py 2212 "--verbose", "--verify", "--repair", self.rooturi)) def _check6((rc, out, err)): + # We've just repaired the directory. There is no reason for + # that repair to be unsuccessful. self.failUnlessReallyEqual(err, "") self.failUnlessReallyEqual(rc, 0) lines = out.splitlines() hunk ./src/allmydata/test/test_deepcheck.py 9 from twisted.internet import threads # CLI tests use deferToThread from allmydata.immutable import upload from allmydata.mutable.common import UnrecoverableFileError +from allmydata.mutable.publish import MutableDataHandle from allmydata.util import idlib from allmydata.util import base32 from allmydata.scripts import runner hunk ./src/allmydata/test/test_deepcheck.py 38 self.basedir = "deepcheck/MutableChecker/good" self.set_up_grid() CONTENTS = "a little bit of data" - d = self.g.clients[0].create_mutable_file(CONTENTS) + CONTENTS_uploadable = MutableDataHandle(CONTENTS) + d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable) def _created(node): self.node = node self.fileurl = "uri/" + urllib.quote(node.get_uri()) hunk ./src/allmydata/test/test_deepcheck.py 61 self.basedir = "deepcheck/MutableChecker/corrupt" self.set_up_grid() CONTENTS = "a little bit of data" - d = self.g.clients[0].create_mutable_file(CONTENTS) + CONTENTS_uploadable = MutableDataHandle(CONTENTS) + d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable) def _stash_and_corrupt(node): self.node = node self.fileurl = "uri/" + urllib.quote(node.get_uri()) hunk ./src/allmydata/test/test_deepcheck.py 99 self.basedir = "deepcheck/MutableChecker/delete_share" self.set_up_grid() CONTENTS = "a little bit of data" - d = self.g.clients[0].create_mutable_file(CONTENTS) + CONTENTS_uploadable = MutableDataHandle(CONTENTS) + d = self.g.clients[0].create_mutable_file(CONTENTS_uploadable) def _stash_and_delete(node): self.node = node self.fileurl = "uri/" + urllib.quote(node.get_uri()) hunk ./src/allmydata/test/test_deepcheck.py 223 self.root = n self.root_uri = n.get_uri() d.addCallback(_created_root) - d.addCallback(lambda ign: c0.create_mutable_file("mutable file contents")) + d.addCallback(lambda ign: + c0.create_mutable_file(MutableDataHandle("mutable file contents"))) d.addCallback(lambda n: self.root.set_node(u"mutable", n)) def _created_mutable(n): self.mutable = n hunk ./src/allmydata/test/test_deepcheck.py 965 def create_mangled(self, ignored, name): nodetype, mangletype = name.split("-", 1) if nodetype == "mutable": - d = self.g.clients[0].create_mutable_file("mutable file contents") + mutable_uploadable = 
MutableDataHandle("mutable file contents") + d = self.g.clients[0].create_mutable_file(mutable_uploadable) d.addCallback(lambda n: self.root.set_node(unicode(name), n)) elif nodetype == "large": large = upload.Data("Lots of data\n" * 1000 + name + "\n", None) hunk ./src/allmydata/test/test_dirnode.py 1281 implements(IMutableFileNode) counter = 0 def __init__(self, initial_contents=""): - self.data = self._get_initial_contents(initial_contents) + data = self._get_initial_contents(initial_contents) + self.data = data.read(data.get_size()) + self.data = "".join(self.data) + counter = FakeMutableFile.counter FakeMutableFile.counter += 1 writekey = hashutil.ssk_writekey_hash(str(counter)) hunk ./src/allmydata/test/test_dirnode.py 1331 pass def modify(self, modifier): - self.data = modifier(self.data, None, True) + data = modifier(self.data, None, True) + self.data = data.read(data.get_size()) + self.data = "".join(self.data) return defer.succeed(None) class FakeNodeMaker(NodeMaker): hunk ./src/allmydata/test/test_hung_server.py 10 from allmydata.util.consumer import download_to_data from allmydata.immutable import upload from allmydata.mutable.common import UnrecoverableFileError +from allmydata.mutable.publish import MutableDataHandle from allmydata.storage.common import storage_index_to_dir from allmydata.test.no_network import GridTestMixin from allmydata.test.common import ShouldFailMixin, _corrupt_share_data hunk ./src/allmydata/test/test_hung_server.py 96 self.servers = [(id, ss) for (id, ss) in nm.storage_broker.get_all_servers()] if mutable: - d = nm.create_mutable_file(mutable_plaintext) + uploadable = MutableDataHandle(mutable_plaintext) + d = nm.create_mutable_file(uploadable) def _uploaded_mutable(node): self.uri = node.get_uri() self.shares = self.find_shares(self.uri) hunk ./src/allmydata/test/test_mutable.py 297 d.addCallback(lambda smap: smap.dump(StringIO())) d.addCallback(lambda sio: self.failUnless("3-of-10" in sio.getvalue())) - d.addCallback(lambda res: n.overwrite("contents 1")) + d.addCallback(lambda res: n.overwrite(MutableDataHandle("contents 1"))) d.addCallback(lambda res: self.failUnlessIdentical(res, None)) d.addCallback(lambda res: n.download_best_version()) d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1")) hunk ./src/allmydata/test/test_mutable.py 304 d.addCallback(lambda res: n.get_size_of_best_version()) d.addCallback(lambda size: self.failUnlessEqual(size, len("contents 1"))) - d.addCallback(lambda res: n.overwrite("contents 2")) + d.addCallback(lambda res: n.overwrite(MutableDataHandle("contents 2"))) d.addCallback(lambda res: n.download_best_version()) d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2")) d.addCallback(lambda res: n.get_servermap(MODE_WRITE)) hunk ./src/allmydata/test/test_mutable.py 308 - d.addCallback(lambda smap: n.upload("contents 3", smap)) + d.addCallback(lambda smap: n.upload(MutableDataHandle("contents 3"), smap)) d.addCallback(lambda res: n.download_best_version()) d.addCallback(lambda res: self.failUnlessEqual(res, "contents 3")) d.addCallback(lambda res: n.get_servermap(MODE_ANYTHING)) hunk ./src/allmydata/test/test_mutable.py 320 # mapupdate-to-retrieve data caching (i.e. make the shares larger # than the default readsize, which is 2000 bytes). A 15kB file # will have 5kB shares. 
- d.addCallback(lambda res: n.overwrite("large size file" * 1000)) + d.addCallback(lambda res: n.overwrite(MutableDataHandle("large size file" * 1000))) d.addCallback(lambda res: n.download_best_version()) d.addCallback(lambda res: self.failUnlessEqual(res, "large size file" * 1000)) hunk ./src/allmydata/test/test_mutable.py 343 # to make them big enough to force the file to be uploaded # in more than one segment. big_contents = "contents1" * 100000 # about 900 KiB + big_contents_uploadable = MutableDataHandle(big_contents) d.addCallback(lambda ignored: hunk ./src/allmydata/test/test_mutable.py 345 - n.overwrite(big_contents)) + n.overwrite(big_contents_uploadable)) d.addCallback(lambda ignored: n.download_best_version()) d.addCallback(lambda data: hunk ./src/allmydata/test/test_mutable.py 355 # segments, so that we make the downloader deal with # multiple segments. bigger_contents = "contents2" * 1000000 # about 9MiB + bigger_contents_uploadable = MutableDataHandle(bigger_contents) d.addCallback(lambda ignored: hunk ./src/allmydata/test/test_mutable.py 357 - n.overwrite(bigger_contents)) + n.overwrite(bigger_contents_uploadable)) d.addCallback(lambda ignored: n.download_best_version()) d.addCallback(lambda data: hunk ./src/allmydata/test/test_mutable.py 368 def test_create_with_initial_contents(self): - d = self.nodemaker.create_mutable_file("contents 1") + upload1 = MutableDataHandle("contents 1") + d = self.nodemaker.create_mutable_file(upload1) def _created(n): d = n.download_best_version() d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1")) hunk ./src/allmydata/test/test_mutable.py 373 - d.addCallback(lambda res: n.overwrite("contents 2")) + upload2 = MutableDataHandle("contents 2") + d.addCallback(lambda res: n.overwrite(upload2)) d.addCallback(lambda res: n.download_best_version()) d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2")) return d hunk ./src/allmydata/test/test_mutable.py 380 d.addCallback(_created) return d + test_create_with_initial_contents.timeout = 15 def test_create_mdmf_with_initial_contents(self): hunk ./src/allmydata/test/test_mutable.py 385 initial_contents = "foobarbaz" * 131072 # 900KiB - d = self.nodemaker.create_mutable_file(initial_contents, + initial_contents_uploadable = MutableDataHandle(initial_contents) + d = self.nodemaker.create_mutable_file(initial_contents_uploadable, version=MDMF_VERSION) def _created(n): d = n.download_best_version() hunk ./src/allmydata/test/test_mutable.py 392 d.addCallback(lambda data: self.failUnlessEqual(data, initial_contents)) + uploadable2 = MutableDataHandle(initial_contents + "foobarbaz") d.addCallback(lambda ignored: hunk ./src/allmydata/test/test_mutable.py 394 - n.overwrite(initial_contents + "foobarbaz")) + n.overwrite(uploadable2)) d.addCallback(lambda ignored: n.download_best_version()) d.addCallback(lambda data: hunk ./src/allmydata/test/test_mutable.py 413 key = n.get_writekey() self.failUnless(isinstance(key, str), key) self.failUnlessEqual(len(key), 16) # AES key size - return data + return MutableDataHandle(data) d = self.nodemaker.create_mutable_file(_make_contents) def _created(n): return n.download_best_version() hunk ./src/allmydata/test/test_mutable.py 429 key = n.get_writekey() self.failUnless(isinstance(key, str), key) self.failUnlessEqual(len(key), 16) - return data + return MutableDataHandle(data) d = self.nodemaker.create_mutable_file(_make_contents, version=MDMF_VERSION) d.addCallback(lambda n: hunk ./src/allmydata/test/test_mutable.py 441 def 
test_create_with_too_large_contents(self): BIG = "a" * (self.OLD_MAX_SEGMENT_SIZE + 1) - d = self.nodemaker.create_mutable_file(BIG) + BIG_uploadable = MutableDataHandle(BIG) + d = self.nodemaker.create_mutable_file(BIG_uploadable) def _created(n): hunk ./src/allmydata/test/test_mutable.py 444 - d = n.overwrite(BIG) + other_BIG_uploadable = MutableDataHandle(BIG) + d = n.overwrite(other_BIG_uploadable) return d d.addCallback(_created) return d hunk ./src/allmydata/test/test_mutable.py 459 def test_modify(self): def _modifier(old_contents, servermap, first_time): - return old_contents + "line2" + new_contents = old_contents + "line2" + return MutableDataHandle(new_contents) def _non_modifier(old_contents, servermap, first_time): hunk ./src/allmydata/test/test_mutable.py 462 - return old_contents + return MutableDataHandle(old_contents) def _none_modifier(old_contents, servermap, first_time): return None def _error_modifier(old_contents, servermap, first_time): hunk ./src/allmydata/test/test_mutable.py 468 raise ValueError("oops") def _toobig_modifier(old_contents, servermap, first_time): - return "b" * (self.OLD_MAX_SEGMENT_SIZE+1) + new_content = "b" * (self.OLD_MAX_SEGMENT_SIZE + 1) + return MutableDataHandle(new_content) calls = [] def _ucw_error_modifier(old_contents, servermap, first_time): # simulate an UncoordinatedWriteError once hunk ./src/allmydata/test/test_mutable.py 476 calls.append(1) if len(calls) <= 1: raise UncoordinatedWriteError("simulated") - return old_contents + "line3" + new_contents = old_contents + "line3" + return MutableDataHandle(new_contents) def _ucw_error_non_modifier(old_contents, servermap, first_time): # simulate an UncoordinatedWriteError once, and don't actually # modify the contents on subsequent invocations hunk ./src/allmydata/test/test_mutable.py 484 calls.append(1) if len(calls) <= 1: raise UncoordinatedWriteError("simulated") - return old_contents + return MutableDataHandle(old_contents) hunk ./src/allmydata/test/test_mutable.py 486 - d = self.nodemaker.create_mutable_file("line1") + initial_contents = "line1" + d = self.nodemaker.create_mutable_file(MutableDataHandle(initial_contents)) def _created(n): d = n.modify(_modifier) d.addCallback(lambda res: n.download_best_version()) hunk ./src/allmydata/test/test_mutable.py 548 def test_modify_backoffer(self): def _modifier(old_contents, servermap, first_time): - return old_contents + "line2" + return MutableDataHandle(old_contents + "line2") calls = [] def _ucw_error_modifier(old_contents, servermap, first_time): # simulate an UncoordinatedWriteError once hunk ./src/allmydata/test/test_mutable.py 555 calls.append(1) if len(calls) <= 1: raise UncoordinatedWriteError("simulated") - return old_contents + "line3" + return MutableDataHandle(old_contents + "line3") def _always_ucw_error_modifier(old_contents, servermap, first_time): raise UncoordinatedWriteError("simulated") def _backoff_stopper(node, f): hunk ./src/allmydata/test/test_mutable.py 570 giveuper._delay = 0.1 giveuper.factor = 1 - d = self.nodemaker.create_mutable_file("line1") + d = self.nodemaker.create_mutable_file(MutableDataHandle("line1")) def _created(n): d = n.modify(_modifier) d.addCallback(lambda res: n.download_best_version()) hunk ./src/allmydata/test/test_mutable.py 620 d.addCallback(lambda smap: smap.dump(StringIO())) d.addCallback(lambda sio: self.failUnless("3-of-10" in sio.getvalue())) - d.addCallback(lambda res: n.overwrite("contents 1")) + d.addCallback(lambda res: n.overwrite(MutableDataHandle("contents 1"))) 
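
Note the corresponding change to modifier callbacks: they still receive the old contents as a plain string, but must now return an IMutableUploadable (or None to mean "no change"). A minimal sketch of a modifier under the new contract, assuming the patched tree:

    from allmydata.mutable.publish import MutableDataHandle

    def append_line(old_contents, servermap, first_time):
        # old_contents is still a plain string; only the return type changes
        if old_contents.endswith("line2"):
            return None                  # nothing to do, skip the publish
        return MutableDataHandle(old_contents + "line2")

    # used as: d = mutable_node.modify(append_line)
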
d.addCallback(lambda res: self.failUnlessIdentical(res, None)) d.addCallback(lambda res: n.download_best_version()) d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1")) hunk ./src/allmydata/test/test_mutable.py 624 - d.addCallback(lambda res: n.overwrite("contents 2")) + d.addCallback(lambda res: n.overwrite(MutableDataHandle("contents 2"))) d.addCallback(lambda res: n.download_best_version()) d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2")) d.addCallback(lambda res: n.get_servermap(MODE_WRITE)) hunk ./src/allmydata/test/test_mutable.py 628 - d.addCallback(lambda smap: n.upload("contents 3", smap)) + d.addCallback(lambda smap: n.upload(MutableDataHandle("contents 3"), smap)) d.addCallback(lambda res: n.download_best_version()) d.addCallback(lambda res: self.failUnlessEqual(res, "contents 3")) d.addCallback(lambda res: n.get_servermap(MODE_ANYTHING)) hunk ./src/allmydata/test/test_mutable.py 646 # publish a file and create shares, which can then be manipulated # later. self.CONTENTS = "New contents go here" * 1000 + self.uploadable = MutableDataHandle(self.CONTENTS) self._storage = FakeStorage() self._nodemaker = make_nodemaker(self._storage) self._storage_broker = self._nodemaker.storage_broker hunk ./src/allmydata/test/test_mutable.py 650 - d = self._nodemaker.create_mutable_file(self.CONTENTS) + d = self._nodemaker.create_mutable_file(self.uploadable) def _created(node): self._fn = node self._fn2 = self._nodemaker.create_from_cap(node.get_uri()) hunk ./src/allmydata/test/test_mutable.py 662 # an MDMF file. # self.CONTENTS should have more than one segment. self.CONTENTS = "This is an MDMF file" * 100000 + self.uploadable = MutableDataHandle(self.CONTENTS) self._storage = FakeStorage() self._nodemaker = make_nodemaker(self._storage) self._storage_broker = self._nodemaker.storage_broker hunk ./src/allmydata/test/test_mutable.py 666 - d = self._nodemaker.create_mutable_file(self.CONTENTS, version=1) + d = self._nodemaker.create_mutable_file(self.uploadable, version=MDMF_VERSION) def _created(node): self._fn = node self._fn2 = self._nodemaker.create_from_cap(node.get_uri()) hunk ./src/allmydata/test/test_mutable.py 678 # like publish_one, except that the result is guaranteed to be # an SDMF file self.CONTENTS = "This is an SDMF file" * 1000 + self.uploadable = MutableDataHandle(self.CONTENTS) self._storage = FakeStorage() self._nodemaker = make_nodemaker(self._storage) self._storage_broker = self._nodemaker.storage_broker hunk ./src/allmydata/test/test_mutable.py 682 - d = self._nodemaker.create_mutable_file(self.CONTENTS, version=0) + d = self._nodemaker.create_mutable_file(self.uploadable, version=SDMF_VERSION) def _created(node): self._fn = node self._fn2 = self._nodemaker.create_from_cap(node.get_uri()) hunk ./src/allmydata/test/test_mutable.py 696 "Contents 2", "Contents 3a", "Contents 3b"] + self.uploadables = [MutableDataHandle(d) for d in self.CONTENTS] self._copied_shares = {} self._storage = FakeStorage() self._nodemaker = make_nodemaker(self._storage) hunk ./src/allmydata/test/test_mutable.py 700 - d = self._nodemaker.create_mutable_file(self.CONTENTS[0], version=version) # seqnum=1 + d = self._nodemaker.create_mutable_file(self.uploadables[0], version=version) # seqnum=1 def _created(node): self._fn = node # now create multiple versions of the same file, and accumulate hunk ./src/allmydata/test/test_mutable.py 707 # their shares, so we can mix and match them later. 
d = defer.succeed(None) d.addCallback(self._copy_shares, 0) - d.addCallback(lambda res: node.overwrite(self.CONTENTS[1])) #s2 + d.addCallback(lambda res: node.overwrite(self.uploadables[1])) #s2 d.addCallback(self._copy_shares, 1) hunk ./src/allmydata/test/test_mutable.py 709 - d.addCallback(lambda res: node.overwrite(self.CONTENTS[2])) #s3 + d.addCallback(lambda res: node.overwrite(self.uploadables[2])) #s3 d.addCallback(self._copy_shares, 2) hunk ./src/allmydata/test/test_mutable.py 711 - d.addCallback(lambda res: node.overwrite(self.CONTENTS[3])) #s4a + d.addCallback(lambda res: node.overwrite(self.uploadables[3])) #s4a d.addCallback(self._copy_shares, 3) # now we replace all the shares with version s3, and upload a new # version to get s4b. hunk ./src/allmydata/test/test_mutable.py 717 rollback = dict([(i,2) for i in range(10)]) d.addCallback(lambda res: self._set_versions(rollback)) - d.addCallback(lambda res: node.overwrite(self.CONTENTS[4])) #s4b + d.addCallback(lambda res: node.overwrite(self.uploadables[4])) #s4b d.addCallback(self._copy_shares, 4) # we leave the storage in state 4 return d hunk ./src/allmydata/test/test_mutable.py 826 # create a new file, which is large enough to knock the privkey out # of the early part of the file LARGE = "These are Larger contents" * 200 # about 5KB - d.addCallback(lambda res: self._nodemaker.create_mutable_file(LARGE)) + LARGE_uploadable = MutableDataHandle(LARGE) + d.addCallback(lambda res: self._nodemaker.create_mutable_file(LARGE_uploadable)) def _created(large_fn): large_fn2 = self._nodemaker.create_from_cap(large_fn.get_uri()) return self.make_servermap(MODE_WRITE, large_fn2) hunk ./src/allmydata/test/test_mutable.py 1842 class MultipleEncodings(unittest.TestCase): def setUp(self): self.CONTENTS = "New contents go here" + self.uploadable = MutableDataHandle(self.CONTENTS) self._storage = FakeStorage() self._nodemaker = make_nodemaker(self._storage, num_peers=20) self._storage_broker = self._nodemaker.storage_broker hunk ./src/allmydata/test/test_mutable.py 1846 - d = self._nodemaker.create_mutable_file(self.CONTENTS) + d = self._nodemaker.create_mutable_file(self.uploadable) def _created(node): self._fn = node d.addCallback(_created) hunk ./src/allmydata/test/test_mutable.py 1872 s = self._storage s._peers = {} # clear existing storage p2 = Publish(fn2, self._storage_broker, None) - d = p2.publish(data) + uploadable = MutableDataHandle(data) + d = p2.publish(uploadable) def _published(res): shares = s._peers s._peers = {} hunk ./src/allmydata/test/test_mutable.py 2049 self._set_versions(target) def _modify(oldversion, servermap, first_time): - return oldversion + " modified" + return MutableDataHandle(oldversion + " modified") d = self._fn.modify(_modify) d.addCallback(lambda res: self._fn.download_best_version()) expected = self.CONTENTS[2] + " modified" hunk ./src/allmydata/test/test_mutable.py 2175 self.basedir = "mutable/Problems/test_publish_surprise" self.set_up_grid() nm = self.g.clients[0].nodemaker - d = nm.create_mutable_file("contents 1") + d = nm.create_mutable_file(MutableDataHandle("contents 1")) def _created(n): d = defer.succeed(None) d.addCallback(lambda res: n.get_servermap(MODE_WRITE)) hunk ./src/allmydata/test/test_mutable.py 2185 d.addCallback(_got_smap1) # then modify the file, leaving the old map untouched d.addCallback(lambda res: log.msg("starting winning write")) - d.addCallback(lambda res: n.overwrite("contents 2")) + d.addCallback(lambda res: n.overwrite(MutableDataHandle("contents 2"))) # now attempt to 
modify the file with the old servermap. This # will look just like an uncoordinated write, in which every # single share got updated between our mapupdate and our publish hunk ./src/allmydata/test/test_mutable.py 2194 self.shouldFail(UncoordinatedWriteError, "test_publish_surprise", None, n.upload, - "contents 2a", self.old_map)) + MutableDataHandle("contents 2a"), self.old_map)) return d d.addCallback(_created) return d hunk ./src/allmydata/test/test_mutable.py 2203 self.basedir = "mutable/Problems/test_retrieve_surprise" self.set_up_grid() nm = self.g.clients[0].nodemaker - d = nm.create_mutable_file("contents 1") + d = nm.create_mutable_file(MutableDataHandle("contents 1")) def _created(n): d = defer.succeed(None) d.addCallback(lambda res: n.get_servermap(MODE_READ)) hunk ./src/allmydata/test/test_mutable.py 2213 d.addCallback(_got_smap1) # then modify the file, leaving the old map untouched d.addCallback(lambda res: log.msg("starting winning write")) - d.addCallback(lambda res: n.overwrite("contents 2")) + d.addCallback(lambda res: n.overwrite(MutableDataHandle("contents 2"))) # now attempt to retrieve the old version with the old servermap. # This will look like someone has changed the file since we # updated the servermap. hunk ./src/allmydata/test/test_mutable.py 2241 self.basedir = "mutable/Problems/test_unexpected_shares" self.set_up_grid() nm = self.g.clients[0].nodemaker - d = nm.create_mutable_file("contents 1") + d = nm.create_mutable_file(MutableDataHandle("contents 1")) def _created(n): d = defer.succeed(None) d.addCallback(lambda res: n.get_servermap(MODE_WRITE)) hunk ./src/allmydata/test/test_mutable.py 2253 self.g.remove_server(peer0) # then modify the file, leaving the old map untouched log.msg("starting winning write") - return n.overwrite("contents 2") + return n.overwrite(MutableDataHandle("contents 2")) d.addCallback(_got_smap1) # now attempt to modify the file with the old servermap. 
This # will look just like an uncoordinated write, in which every hunk ./src/allmydata/test/test_mutable.py 2263 self.shouldFail(UncoordinatedWriteError, "test_surprise", None, n.upload, - "contents 2a", self.old_map)) + MutableDataHandle("contents 2a"), self.old_map)) return d d.addCallback(_created) return d hunk ./src/allmydata/test/test_mutable.py 2267 + test_unexpected_shares.timeout = 15 def test_bad_server(self): # Break one server, then create the file: the initial publish should hunk ./src/allmydata/test/test_mutable.py 2303 d.addCallback(_break_peer0) # now "create" the file, using the pre-established key, and let the # initial publish finally happen - d.addCallback(lambda res: nm.create_mutable_file("contents 1")) + d.addCallback(lambda res: nm.create_mutable_file(MutableDataHandle("contents 1"))) # that ought to work def _got_node(n): d = n.download_best_version() hunk ./src/allmydata/test/test_mutable.py 2312 def _break_peer1(res): self.connection1.broken = True d.addCallback(_break_peer1) - d.addCallback(lambda res: n.overwrite("contents 2")) + d.addCallback(lambda res: n.overwrite(MutableDataHandle("contents 2"))) # that ought to work too d.addCallback(lambda res: n.download_best_version()) d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2")) hunk ./src/allmydata/test/test_mutable.py 2344 peerids = [serverid for (serverid,ss) in sb.get_all_servers()] self.g.break_server(peerids[0]) - d = nm.create_mutable_file("contents 1") + d = nm.create_mutable_file(MutableDataHandle("contents 1")) def _created(n): d = n.download_best_version() d.addCallback(lambda res: self.failUnlessEqual(res, "contents 1")) hunk ./src/allmydata/test/test_mutable.py 2352 def _break_second_server(res): self.g.break_server(peerids[1]) d.addCallback(_break_second_server) - d.addCallback(lambda res: n.overwrite("contents 2")) + d.addCallback(lambda res: n.overwrite(MutableDataHandle("contents 2"))) # that ought to work too d.addCallback(lambda res: n.download_best_version()) d.addCallback(lambda res: self.failUnlessEqual(res, "contents 2")) hunk ./src/allmydata/test/test_mutable.py 2371 d = self.shouldFail(NotEnoughServersError, "test_publish_all_servers_bad", "Ran out of non-bad servers", - nm.create_mutable_file, "contents") + nm.create_mutable_file, MutableDataHandle("contents")) return d def test_publish_no_servers(self): hunk ./src/allmydata/test/test_mutable.py 2383 d = self.shouldFail(NotEnoughServersError, "test_publish_no_servers", "Ran out of non-bad servers", - nm.create_mutable_file, "contents") + nm.create_mutable_file, MutableDataHandle("contents")) return d test_publish_no_servers.timeout = 30 hunk ./src/allmydata/test/test_mutable.py 2401 # we need some contents that are large enough to push the privkey out # of the early part of the file LARGE = "These are Larger contents" * 2000 # about 50KB - d = nm.create_mutable_file(LARGE) + LARGE_uploadable = MutableDataHandle(LARGE) + d = nm.create_mutable_file(LARGE_uploadable) def _created(n): self.uri = n.get_uri() self.n2 = nm.create_from_cap(self.uri) hunk ./src/allmydata/test/test_mutable.py 2438 self.set_up_grid(num_servers=20) nm = self.g.clients[0].nodemaker LARGE = "These are Larger contents" * 2000 # about 50KiB + LARGE_uploadable = MutableDataHandle(LARGE) nm._node_cache = DevNullDictionary() # disable the nodecache hunk ./src/allmydata/test/test_mutable.py 2441 - d = nm.create_mutable_file(LARGE) + d = nm.create_mutable_file(LARGE_uploadable) def _created(n): self.uri = n.get_uri() self.n2 = nm.create_from_cap(self.uri) 
hunk ./src/allmydata/test/test_mutable.py 2464 self.set_up_grid(num_servers=20) nm = self.g.clients[0].nodemaker CONTENTS = "contents" * 2000 - d = nm.create_mutable_file(CONTENTS) + CONTENTS_uploadable = MutableDataHandle(CONTENTS) + d = nm.create_mutable_file(CONTENTS_uploadable) def _created(node): self._node = node d.addCallback(_created) hunk ./src/allmydata/test/test_system.py 22 from allmydata.monitor import Monitor from allmydata.mutable.common import NotWriteableError from allmydata.mutable import layout as mutable_layout +from allmydata.mutable.publish import MutableDataHandle from foolscap.api import DeadReferenceError from twisted.python.failure import Failure from twisted.web.client import getPage hunk ./src/allmydata/test/test_system.py 460 def test_mutable(self): self.basedir = "system/SystemTest/test_mutable" DATA = "initial contents go here." # 25 bytes % 3 != 0 + DATA_uploadable = MutableDataHandle(DATA) NEWDATA = "new contents yay" hunk ./src/allmydata/test/test_system.py 462 + NEWDATA_uploadable = MutableDataHandle(NEWDATA) NEWERDATA = "this is getting old" hunk ./src/allmydata/test/test_system.py 464 + NEWERDATA_uploadable = MutableDataHandle(NEWERDATA) d = self.set_up_nodes(use_key_generator=True) hunk ./src/allmydata/test/test_system.py 471 def _create_mutable(res): c = self.clients[0] log.msg("starting create_mutable_file") - d1 = c.create_mutable_file(DATA) + d1 = c.create_mutable_file(DATA_uploadable) def _done(res): log.msg("DONE: %s" % (res,)) self._mutable_node_1 = res hunk ./src/allmydata/test/test_system.py 558 self.failUnlessEqual(res, DATA) # replace the data log.msg("starting replace1") - d1 = newnode.overwrite(NEWDATA) + d1 = newnode.overwrite(NEWDATA_uploadable) d1.addCallback(lambda res: newnode.download_best_version()) return d1 d.addCallback(_check_download_3) hunk ./src/allmydata/test/test_system.py 572 newnode2 = self.clients[3].create_node_from_uri(uri) self._newnode3 = self.clients[3].create_node_from_uri(uri) log.msg("starting replace2") - d1 = newnode1.overwrite(NEWERDATA) + d1 = newnode1.overwrite(NEWERDATA_uploadable) d1.addCallback(lambda res: newnode2.download_best_version()) return d1 d.addCallback(_check_download_4) hunk ./src/allmydata/test/test_system.py 642 def _check_empty_file(res): # make sure we can create empty files, this usually screws up the # segsize math - d1 = self.clients[2].create_mutable_file("") + d1 = self.clients[2].create_mutable_file(MutableDataHandle("")) d1.addCallback(lambda newnode: newnode.download_best_version()) d1.addCallback(lambda res: self.failUnlessEqual("", res)) return d1 hunk ./src/allmydata/test/test_system.py 673 self.key_generator_svc.key_generator.pool_size + size_delta) d.addCallback(check_kg_poolsize, 0) - d.addCallback(lambda junk: self.clients[3].create_mutable_file('hello, world')) + d.addCallback(lambda junk: + self.clients[3].create_mutable_file(MutableDataHandle('hello, world'))) d.addCallback(check_kg_poolsize, -1) d.addCallback(lambda junk: self.clients[3].create_dirnode()) d.addCallback(check_kg_poolsize, -2) hunk ./src/allmydata/test/test_web.py 3166 def _stash_mutable_uri(n, which): self.uris[which] = n.get_uri() assert isinstance(self.uris[which], str) - d.addCallback(lambda ign: c0.create_mutable_file(DATA+"3")) + d.addCallback(lambda ign: + c0.create_mutable_file(publish.MutableDataHandle(DATA+"3"))) d.addCallback(_stash_mutable_uri, "corrupt") d.addCallback(lambda ign: c0.upload(upload.Data("literal", convergence=""))) hunk ./src/allmydata/test/test_web.py 3313 def 
_stash_mutable_uri(n, which): self.uris[which] = n.get_uri() assert isinstance(self.uris[which], str) - d.addCallback(lambda ign: c0.create_mutable_file(DATA+"3")) + d.addCallback(lambda ign: + c0.create_mutable_file(publish.MutableDataHandle(DATA+"3"))) d.addCallback(_stash_mutable_uri, "corrupt") def _compute_fileurls(ignored): hunk ./src/allmydata/test/test_web.py 3976 def _stash_mutable_uri(n, which): self.uris[which] = n.get_uri() assert isinstance(self.uris[which], str) - d.addCallback(lambda ign: c0.create_mutable_file(DATA+"2")) + d.addCallback(lambda ign: + c0.create_mutable_file(publish.MutableDataHandle(DATA+"2"))) d.addCallback(_stash_mutable_uri, "mutable") def _compute_fileurls(ignored): hunk ./src/allmydata/test/test_web.py 4076 convergence=""))) d.addCallback(_stash_uri, "small") - d.addCallback(lambda ign: c0.create_mutable_file("mutable")) + d.addCallback(lambda ign: + c0.create_mutable_file(publish.MutableDataHandle("mutable"))) d.addCallback(lambda fn: self.rootnode.set_node(u"mutable", fn)) d.addCallback(_stash_uri, "mutable") } [Alter mutable files to use file-like objects for publishing instead of strings. Kevan Carstensen **20100708000732 Ignore-this: 8dd07d95386b6d540bc21289f981ebd0 ] { hunk ./src/allmydata/dirnode.py 11 from allmydata.mutable.common import NotWriteableError from allmydata.mutable.filenode import MutableFileNode from allmydata.unknown import UnknownNode, strip_prefix_for_ro +from allmydata.mutable.publish import MutableDataHandle from allmydata.interfaces import IFilesystemNode, IDirectoryNode, IFileNode, \ IImmutableFileNode, IMutableFileNode, \ ExistingChildError, NoSuchChildError, ICheckable, IDeepCheckable, \ hunk ./src/allmydata/dirnode.py 104 del children[self.name] new_contents = self.node._pack_contents(children) - return new_contents + uploadable = MutableDataHandle(new_contents) + return uploadable class MetadataSetter: hunk ./src/allmydata/dirnode.py 130 children[name] = (child, metadata) new_contents = self.node._pack_contents(children) - return new_contents + uploadable = MutableDataHandle(new_contents) + return uploadable class Adder: hunk ./src/allmydata/dirnode.py 175 children[name] = (child, metadata) new_contents = self.node._pack_contents(children) - return new_contents + uploadable = MutableDataHandle(new_contents) + return uploadable def _encrypt_rw_uri(filenode, rw_uri): hunk ./src/allmydata/mutable/filenode.py 7 from zope.interface import implements from twisted.internet import defer, reactor from foolscap.api import eventually -from allmydata.interfaces import IMutableFileNode, \ - ICheckable, ICheckResults, NotEnoughSharesError, MDMF_VERSION, SDMF_VERSION +from allmydata.interfaces import IMutableFileNode, ICheckable, ICheckResults, \ + NotEnoughSharesError, \ + MDMF_VERSION, SDMF_VERSION, IMutableUploadable from allmydata.util import hashutil, log from allmydata.util.assertutil import precondition from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI hunk ./src/allmydata/mutable/filenode.py 16 from allmydata.monitor import Monitor from pycryptopp.cipher.aes import AES -from allmydata.mutable.publish import Publish +from allmydata.mutable.publish import Publish, MutableFileHandle, \ + MutableDataHandle from allmydata.mutable.common import MODE_READ, MODE_WRITE, UnrecoverableFileError, \ ResponseCache, UncoordinatedWriteError from allmydata.mutable.servermap import ServerMap, ServermapUpdater hunk ./src/allmydata/mutable/filenode.py 133 return self._upload(initial_contents, None) def _get_initial_contents(self, 
contents): - if isinstance(contents, str): - return contents if contents is None: hunk ./src/allmydata/mutable/filenode.py 134 - return "" + return MutableDataHandle("") + + if IMutableUploadable.providedBy(contents): + return contents + assert callable(contents), "%s should be callable, not %s" % \ (contents, type(contents)) return contents(self) hunk ./src/allmydata/mutable/filenode.py 353 def overwrite(self, new_contents): return self._do_serialized(self._overwrite, new_contents) def _overwrite(self, new_contents): + assert IMutableUploadable.providedBy(new_contents) + servermap = ServerMap() d = self._update_servermap(servermap, mode=MODE_WRITE) d.addCallback(lambda ignored: self._upload(new_contents, servermap)) hunk ./src/allmydata/mutable/filenode.py 431 # recovery when it observes UCWE, we need to do a second # publish. See #551 for details. We'll basically loop until # we managed an uncontested publish. - new_contents = old_contents - precondition(isinstance(new_contents, str), - "Modifier function must return a string or None") + old_uploadable = MutableDataHandle(old_contents) + new_contents = old_uploadable + precondition((IMutableUploadable.providedBy(new_contents) or + new_contents is None), + "Modifier function must return an IMutableUploadable " + "or None") return self._upload(new_contents, servermap) d.addCallback(_apply) return d hunk ./src/allmydata/mutable/filenode.py 472 return self._do_serialized(self._upload, new_contents, servermap) def _upload(self, new_contents, servermap): assert self._pubkey, "update_servermap must be called before publish" + assert IMutableUploadable.providedBy(new_contents) + p = Publish(self, self._storage_broker, servermap) if self._history: hunk ./src/allmydata/mutable/filenode.py 476 - self._history.notify_publish(p.get_status(), len(new_contents)) + self._history.notify_publish(p.get_status(), new_contents.get_size()) d = p.publish(new_contents) hunk ./src/allmydata/mutable/filenode.py 478 - d.addCallback(self._did_upload, len(new_contents)) + d.addCallback(self._did_upload, new_contents.get_size()) return d def _did_upload(self, res, size): self._most_recent_size = size hunk ./src/allmydata/mutable/publish.py 141 # 0. Setup encoding parameters, encoder, and other such things. # 1. Encrypt, encode, and publish segments. - self.data = StringIO(newdata) - self.datalength = len(newdata) + assert IMutableUploadable.providedBy(newdata) + + self.data = newdata + self.datalength = newdata.get_size() self.log("starting publish, datalen is %s" % self.datalength) self._status.set_size(self.datalength) hunk ./src/allmydata/mutable/publish.py 442 self.log("Pushing segment %d of %d" % (segnum + 1, self.num_segments)) data = self.data.read(segsize) + # XXX: This is dumb. Why return a list? 
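
The "# XXX: This is dumb. Why return a list?" aside refers to the IMutableUploadable.read() contract, which, like IUploadable.read(), returns a list of strings rather than one big string; the uploader simply joins it. A small helper that hides that detail, written against only the interface shown earlier in this bundle (illustration, not part of the patch):

    def read_exactly(uploadable, length):
        """
        Drain up to `length` bytes from an IMutableUploadable, joining the
        list-of-strings that read() returns. Returns fewer bytes only at EOF.
        """
        pieces = []
        remaining = length
        while remaining > 0:
            chunk = "".join(uploadable.read(remaining))
            if not chunk:
                break                    # EOF
            pieces.append(chunk)
            remaining -= len(chunk)
        return "".join(pieces)
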
+ data = "".join(data) assert len(data) == segsize hunk ./src/allmydata/mutable/repairer.py 5 from zope.interface import implements from twisted.internet import defer from allmydata.interfaces import IRepairResults, ICheckResults +from allmydata.mutable.publish import MutableDataHandle class RepairResults: implements(IRepairResults) hunk ./src/allmydata/mutable/repairer.py 108 raise RepairRequiresWritecapError("Sorry, repair currently requires a writecap, to set the write-enabler properly.") d = self.node.download_version(smap, best_version, fetch_privkey=True) + d.addCallback(lambda data: + MutableDataHandle(data)) d.addCallback(self.node.upload, smap) d.addCallback(self.get_results, smap) return d hunk ./src/allmydata/nodemaker.py 9 from allmydata.immutable.filenode import ImmutableFileNode, LiteralFileNode from allmydata.immutable.upload import Data from allmydata.mutable.filenode import MutableFileNode +from allmydata.mutable.publish import MutableDataHandle from allmydata.dirnode import DirectoryNode, pack_children from allmydata.unknown import UnknownNode from allmydata import uri hunk ./src/allmydata/nodemaker.py 111 "create_new_mutable_directory requires metadata to be a dict, not None", metadata) node.raise_error() d = self.create_mutable_file(lambda n: - pack_children(n, initial_children), + MutableDataHandle( + pack_children(n, initial_children)), version) d.addCallback(self._create_dirnode) return d hunk ./src/allmydata/web/filenode.py 12 from allmydata.interfaces import ExistingChildError from allmydata.monitor import Monitor from allmydata.immutable.upload import FileHandle +from allmydata.mutable.publish import MutableFileHandle from allmydata.util import log, base32 from allmydata.web.common import text_plain, WebError, RenderMixin, \ hunk ./src/allmydata/web/filenode.py 27 # a new file is being uploaded in our place. mutable = boolean_of_arg(get_arg(req, "mutable", "false")) if mutable: - req.content.seek(0) - data = req.content.read() + data = MutableFileHandle(req.content) d = client.create_mutable_file(data) def _uploaded(newnode): d2 = self.parentnode.set_node(self.name, newnode, hunk ./src/allmydata/web/filenode.py 61 d.addCallback(lambda res: childnode.get_uri()) return d - def _read_data_from_formpost(self, req): - # SDMF: files are small, and we can only upload data, so we read - # the whole file into memory before uploading. 
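
On the web-frontend side the same idea applies to request bodies: rather than seeking and reading req.content (or a form-post filehandle) into a string, the handlers now wrap the file-like object directly. Roughly, the pattern is the one below, with req and client standing in for the Nevow request and Tahoe client objects used in web/filenode.py and web/unlinked.py (a sketch, not the exact handler code):

    from allmydata.mutable.publish import MutableFileHandle

    def put_unlinked_mutable(req, client):
        # no req.content.read() copy; the publisher pulls data on demand
        uploadable = MutableFileHandle(req.content)
        d = client.create_mutable_file(uploadable)
        d.addCallback(lambda node: node.get_uri())
        return d
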
- contents = req.fields["file"] - contents.file.seek(0) - data = contents.file.read() - return data def replace_me_with_a_formpost(self, req, client, replace): # create a new file, maybe mutable, maybe immutable hunk ./src/allmydata/web/filenode.py 66 mutable = boolean_of_arg(get_arg(req, "mutable", "false")) + # create an immutable file + contents = req.fields["file"] if mutable: hunk ./src/allmydata/web/filenode.py 69 - data = self._read_data_from_formpost(req) - d = client.create_mutable_file(data) + uploadable = MutableFileHandle(contents.file) + d = client.create_mutable_file(uploadable) def _uploaded(newnode): d2 = self.parentnode.set_node(self.name, newnode, overwrite=replace) hunk ./src/allmydata/web/filenode.py 78 return d2 d.addCallback(_uploaded) return d - # create an immutable file - contents = req.fields["file"] + uploadable = FileHandle(contents.file, convergence=client.convergence) d = self.parentnode.add_file(self.name, uploadable, overwrite=replace) d.addCallback(lambda newnode: newnode.get_uri()) hunk ./src/allmydata/web/filenode.py 84 return d + class PlaceHolderNodeHandler(RenderMixin, rend.Page, ReplaceMeMixin): def __init__(self, client, parentnode, name): rend.Page.__init__(self) hunk ./src/allmydata/web/filenode.py 278 def replace_my_contents(self, req): req.content.seek(0) - new_contents = req.content.read() + new_contents = MutableFileHandle(req.content) d = self.node.overwrite(new_contents) d.addCallback(lambda res: self.node.get_uri()) return d hunk ./src/allmydata/web/filenode.py 286 def replace_my_contents_with_a_formpost(self, req): # we have a mutable file. Get the data from the formpost, and replace # the mutable file's contents with it. - new_contents = self._read_data_from_formpost(req) + new_contents = req.fields['file'] + new_contents = MutableFileHandle(new_contents.file) + d = self.node.overwrite(new_contents) d.addCallback(lambda res: self.node.get_uri()) return d hunk ./src/allmydata/web/unlinked.py 7 from twisted.internet import defer from nevow import rend, url, tags as T from allmydata.immutable.upload import FileHandle +from allmydata.mutable.publish import MutableFileHandle from allmydata.web.common import getxmlfile, get_arg, boolean_of_arg, \ convert_children_json, WebError from allmydata.web import status hunk ./src/allmydata/web/unlinked.py 23 def PUTUnlinkedSSK(req, client): # SDMF: files are small, and we can only upload data req.content.seek(0) - data = req.content.read() + data = MutableFileHandle(req.content) d = client.create_mutable_file(data) d.addCallback(lambda n: n.get_uri()) return d hunk ./src/allmydata/web/unlinked.py 87 # "POST /uri", to create an unlinked file. # SDMF: files are small, and we can only upload data contents = req.fields["file"] - contents.file.seek(0) - data = contents.file.read() + data = MutableFileHandle(contents.file) d = client.create_mutable_file(data) d.addCallback(lambda n: n.get_uri()) return d } [test/test_sftp.py: alter a setup routine to work with new mutable file APIs. 
Kevan Carstensen **20100708193522 Ignore-this: 434bbe1347072076c0836d26fca8ac8a ] { hunk ./src/allmydata/test/test_sftp.py 32 from allmydata.util.consumer import download_to_data from allmydata.immutable import upload +from allmydata.mutable import publish from allmydata.test.no_network import GridTestMixin from allmydata.test.common import ShouldFailMixin from allmydata.test.common_util import ReallyEqualMixin hunk ./src/allmydata/test/test_sftp.py 84 return d def _set_up_tree(self): - d = self.client.create_mutable_file("mutable file contents") + u = publish.MutableDataHandle("mutable file contents") + d = self.client.create_mutable_file(u) d.addCallback(lambda node: self.root.set_node(u"mutable", node)) def _created_mutable(n): self.mutable = n } [mutable/publish.py: make MutableFileHandle seek to the beginning of its file handle before reading. Kevan Carstensen **20100708193600 Ignore-this: 453a737dc62a79c77b3d360fed9000ab ] hunk ./src/allmydata/mutable/publish.py 989 assert hasattr(filehandle, "close") self._filehandle = filehandle + # We must start reading at the beginning of the file, or we risk + # encountering errors when the data read does not match the size + # reported to the uploader. + self._filehandle.seek(0) def get_size(self): [Refactor download interfaces to be more uniform, per #993 Kevan Carstensen **20100709232912 Ignore-this: 277c5699c4a2dd7c03ecfa0a28458f5b ] { hunk ./src/allmydata/immutable/filenode.py 10 from foolscap.api import eventually from allmydata.interfaces import IImmutableFileNode, ICheckable, \ IDownloadTarget, IUploadResults -from allmydata.util import dictutil, log, base32 +from allmydata.util import dictutil, log, base32, consumer from allmydata.uri import CHKFileURI, LiteralFileURI from allmydata.immutable.checker import Checker from allmydata.check_results import CheckResults, CheckAndRepairResults hunk ./src/allmydata/immutable/filenode.py 318 self.download_cache.read(consumer, offset, size)) return d + # IReadable, IFileNode + + def get_best_readable_version(self): + """ + Return an IReadable of the best version of this file. Since + immutable files can have only one version, we just return the + current filenode. + """ + return self + + + def download_best_version(self): + """ + Download the best version of this file, returning its contents + as a bytestring. Since there is only one version of an immutable + file, we download and return the contents of this file. + """ + d = consumer.download_to_data(self) + return d + + # for an immutable file, download_to_data (specified in IReadable) + # is the same as download_best_version (specified in IFileNode). For + # mutable files, the difference is more meaningful, since they can + # have multiple versions. + download_to_data = download_best_version + + + # get_size() (IReadable), get_current_size() (IFilesystemNode), and + # get_size_of_best_version(IFileNode) are all the same for immutable + # files. 
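
The practical upshot of the #993 refactor is that callers no longer need to care whether they are holding an immutable node or a particular mutable version: both grow get_best_readable_version(), and the object handed back satisfies IReadable. A hedged sketch of the calling pattern, shown here against the immutable node from the hunk above, where get_best_readable_version() simply returns the node itself; the mutable side may hand back its version via a Deferred, in which case an extra callback is needed:

    def read_whole_file(filenode):
        # works on any node exposing the IReadable-returning accessor
        readable = filenode.get_best_readable_version()
        return readable.download_best_version()   # Deferred firing with a string
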
+ get_size_of_best_version = get_current_size + + class LiteralProducer: implements(IPushProducer) def resumeProducing(self): hunk ./src/allmydata/immutable/filenode.py 409 d = basic.FileSender().beginFileTransfer(StringIO(data), consumer) d.addCallback(lambda lastSent: consumer) return d + + # IReadable, IFileNode, IFilesystemNode + def get_best_readable_version(self): + return self + + + def download_best_version(self): + return defer.succeed(self.u.data) + + + download_to_data = download_best_version + get_size_of_best_version = get_current_size hunk ./src/allmydata/interfaces.py 563 class MustNotBeUnknownRWError(CapConstraintError): """Cannot add an unknown child cap specified in a rw_uri field.""" + +class IReadable(Interface): + """I represent a readable object -- either an immutable file, or a + specific version of a mutable file. + """ + + def is_readonly(): + """Return True if this reference provides mutable access to the given + file or directory (i.e. if you can modify it), or False if not. Note + that even if this reference is read-only, someone else may hold a + read-write reference to it. + + For an IReadable returned by get_best_readable_version(), this will + always return True, but for instances of subinterfaces such as + IMutableFileVersion, it may return False.""" + + def is_mutable(): + """Return True if this file or directory is mutable (by *somebody*, + not necessarily you), False if it is is immutable. Note that a file + might be mutable overall, but your reference to it might be + read-only. On the other hand, all references to an immutable file + will be read-only; there are no read-write references to an immutable + file.""" + + def get_storage_index(): + """Return the storage index of the file.""" + + def get_size(): + """Return the length (in bytes) of this readable object.""" + + def download_to_data(): + """Download all of the file contents. I return a Deferred that fires + with the contents as a byte string.""" + + def read(consumer, offset=0, size=None): + """Download a portion (possibly all) of the file's contents, making + them available to the given IConsumer. Return a Deferred that fires + (with the consumer) when the consumer is unregistered (either because + the last byte has been given to it, or because the consumer threw an + exception during write(), possibly because it no longer wants to + receive data). The portion downloaded will start at 'offset' and + contain 'size' bytes (or the remainder of the file if size==None). + + The consumer will be used in non-streaming mode: an IPullProducer + will be attached to it. + + The consumer will not receive data right away: several network trips + must occur first. The order of events will be:: + + consumer.registerProducer(p, streaming) + (if streaming == False):: + consumer does p.resumeProducing() + consumer.write(data) + consumer does p.resumeProducing() + consumer.write(data).. (repeat until all data is written) + consumer.unregisterProducer() + deferred.callback(consumer) + + If a download error occurs, or an exception is raised by + consumer.registerProducer() or consumer.write(), I will call + consumer.unregisterProducer() and then deliver the exception via + deferred.errback(). To cancel the download, the consumer should call + p.stopProducing(), which will result in an exception being delivered + via deferred.errback(). + + See src/allmydata/util/consumer.py for an example of a simple + download-to-memory consumer. 
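For concreteness, a download-to-memory read along those lines might look like the following sketch; it assumes the MemoryConsumer defined in src/allmydata/util/consumer.py and an object 'readable' that already provides IReadable:

    from allmydata.util.consumer import MemoryConsumer

    def read_range(readable, offset, size):
        # Fetch 'size' bytes starting at 'offset' into an in-memory consumer.
        # read() fires with the consumer once the last byte has been written,
        # so joining its chunks yields the requested byte range.
        c = MemoryConsumer()
        d = readable.read(c, offset=offset, size=size)
        d.addCallback(lambda mc: "".join(mc.chunks))
        return d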
+ """ + + +class IMutableFileVersion(IReadable): + """I provide access to a particular version of a mutable file. The + access is read/write if I was obtained from a filenode derived from + a write cap, or read-only if the filenode was derived from a read cap. + """ + + def get_sequence_number(): + """Return the sequence number of this version.""" + + def get_servermap(): + """Return the IMutableFileServerMap instance that was used to create + this object. + """ + + def get_writekey(): + """Return this filenode's writekey, or None if the node does not have + write-capability. This may be used to assist with data structures + that need to make certain data available only to writers, such as the + read-write child caps in dirnodes. The recommended process is to have + reader-visible data be submitted to the filenode in the clear (where + it will be encrypted by the filenode using the readkey), but encrypt + writer-visible data using this writekey. + """ + + # TODO: Can this be overwrite instead of replace? + def replace(new_contents): + """Replace the contents of the mutable file, provided that no other + node has published (or is attempting to publish, concurrently) a + newer version of the file than this one. + + I will avoid modifying any share that is different than the version + given by get_sequence_number(). However, if another node is writing + to the file at the same time as me, I may manage to update some shares + while they update others. If I see any evidence of this, I will signal + UncoordinatedWriteError, and the file will be left in an inconsistent + state (possibly the version you provided, possibly the old version, + possibly somebody else's version, and possibly a mix of shares from + all of these). + + The recommended response to UncoordinatedWriteError is to either + return it to the caller (since they failed to coordinate their + writes), or to attempt some sort of recovery. It may be sufficient to + wait a random interval (with exponential backoff) and repeat your + operation. If I do not signal UncoordinatedWriteError, then I was + able to write the new version without incident. + + I return a Deferred that fires (with a PublishStatus object) when the + update has completed. + """ + + def modify(modifier_cb): + """Modify the contents of the file, by downloading this version, + applying the modifier function (or bound method), then uploading + the new version. This will succeed as long as no other node + publishes a version between the download and the upload. + I return a Deferred that fires (with a PublishStatus object) when + the update is complete. + + The modifier callable will be given three arguments: a string (with + the old contents), a 'first_time' boolean, and a servermap. As with + download_to_data(), the old contents will be from this version, + but the modifier can use the servermap to make other decisions + (such as refusing to apply the delta if there are multiple parallel + versions, or if there is evidence of a newer unrecoverable version). + 'first_time' will be True the first time the modifier is called, + and False on any subsequent calls. + + The callable should return a string with the new contents. The + callable must be prepared to be called multiple times, and must + examine the input string to see if the change that it wants to make + is already present in the old version. If it does not need to make + any changes, it can either return None, or return its input string. 
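For example, a modifier that appends an entry only when it is missing might look like the sketch below. The argument order follows the modifier(old_contents, servermap, first_time) call made by the mutable filenode code later in this bundle, and that code expects the returned contents to be wrapped in an IMutableUploadable such as MutableDataHandle rather than handed back as a bare string:

    from allmydata.mutable.publish import MutableDataHandle

    def add_entry(old_contents, servermap, first_time):
        entry = "new entry\n"
        if entry in old_contents:
            # the change is already present, so publish nothing
            return None
        return MutableDataHandle(old_contents + entry)

    d = version.modify(add_entry)   # 'version' provides IMutableFileVersion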
+ + If the modifier raises an exception, it will be returned in the + errback. + """ + + # The hierarchy looks like this: # IFilesystemNode # IFileNode hunk ./src/allmydata/interfaces.py 801 def raise_error(): """Raise any error associated with this node.""" + # XXX: These may not be appropriate outside the context of an IReadable. def get_size(): """Return the length (in bytes) of the data this node represents. For directory nodes, I return the size of the backing store. I return hunk ./src/allmydata/interfaces.py 818 class IFileNode(IFilesystemNode): """I am a node which represents a file: a sequence of bytes. I am not a container, like IDirectoryNode.""" + def get_best_readable_version(): + """Return a Deferred that fires with an IReadable for the 'best' + available version of the file. The IReadable provides only read + access, even if this filenode was derived from a write cap. hunk ./src/allmydata/interfaces.py 823 -class IImmutableFileNode(IFileNode): - def read(consumer, offset=0, size=None): - """Download a portion (possibly all) of the file's contents, making - them available to the given IConsumer. Return a Deferred that fires - (with the consumer) when the consumer is unregistered (either because - the last byte has been given to it, or because the consumer threw an - exception during write(), possibly because it no longer wants to - receive data). The portion downloaded will start at 'offset' and - contain 'size' bytes (or the remainder of the file if size==None). - - The consumer will be used in non-streaming mode: an IPullProducer - will be attached to it. + For an immutable file, there is only one version. For a mutable + file, the 'best' version is the recoverable version with the + highest sequence number. If no uncoordinated writes have occurred, + and if enough shares are available, then this will be the most + recent version that has been uploaded. If no version is recoverable, + the Deferred will errback with an UnrecoverableFileError. + """ hunk ./src/allmydata/interfaces.py 831 - The consumer will not receive data right away: several network trips - must occur first. The order of events will be:: + def download_best_version(): + """Download the contents of the version that would be returned + by get_best_readable_version(). This is equivalent to calling + download_to_data() on the IReadable given by that method. hunk ./src/allmydata/interfaces.py 836 - consumer.registerProducer(p, streaming) - (if streaming == False):: - consumer does p.resumeProducing() - consumer.write(data) - consumer does p.resumeProducing() - consumer.write(data).. (repeat until all data is written) - consumer.unregisterProducer() - deferred.callback(consumer) + I return a Deferred that fires with a byte string when the file + has been fully downloaded. To support streaming download, use + the 'read' method of IReadable. If no version is recoverable, + the Deferred will errback with an UnrecoverableFileError. + """ hunk ./src/allmydata/interfaces.py 842 - If a download error occurs, or an exception is raised by - consumer.registerProducer() or consumer.write(), I will call - consumer.unregisterProducer() and then deliver the exception via - deferred.errback(). To cancel the download, the consumer should call - p.stopProducing(), which will result in an exception being delivered - via deferred.errback(). + def get_size_of_best_version(): + """Find the size of the version that would be returned by + get_best_readable_version(). 
hunk ./src/allmydata/interfaces.py 846 - See src/allmydata/util/consumer.py for an example of a simple - download-to-memory consumer. + I return a Deferred that fires with an integer. If no version + is recoverable, the Deferred will errback with an + UnrecoverableFileError. """ hunk ./src/allmydata/interfaces.py 851 + +class IImmutableFileNode(IFileNode, IReadable): + """I am a node representing an immutable file. Immutable files have + only one version""" + + class IMutableFileNode(IFileNode): """I provide access to a 'mutable file', which retains its identity regardless of what contents are put in it. hunk ./src/allmydata/interfaces.py 916 only be retrieved and updated all-at-once, as a single big string. Future versions of our mutable files will remove this restriction. """ - - def download_best_version(): - """Download the 'best' available version of the file, meaning one of - the recoverable versions with the highest sequence number. If no + def get_best_mutable_version(): + """Return a Deferred that fires with an IMutableFileVersion for + the 'best' available version of the file. The best version is + the recoverable version with the highest sequence number. If no uncoordinated writes have occurred, and if enough shares are hunk ./src/allmydata/interfaces.py 921 - available, then this will be the most recent version that has been - uploaded. - - I update an internal servermap with MODE_READ, determine which - version of the file is indicated by - servermap.best_recoverable_version(), and return a Deferred that - fires with its contents. If no version is recoverable, the Deferred - will errback with UnrecoverableFileError. - """ - - def get_size_of_best_version(): - """Find the size of the version that would be downloaded with - download_best_version(), without actually downloading the whole file. + available, then this will be the most recent version that has + been uploaded. hunk ./src/allmydata/interfaces.py 924 - I return a Deferred that fires with an integer. + If no version is recoverable, the Deferred will errback with an + UnrecoverableFileError. """ def overwrite(new_contents): hunk ./src/allmydata/interfaces.py 964 errback. """ - def get_servermap(mode): """Return a Deferred that fires with an IMutableFileServerMap instance, updated using the given mode. 
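The effect of this reorganization is that download code no longer needs to know whether a node is mutable. A minimal sketch of the uniform calls, assuming 'node' is any IFileNode reached through a dirnode or the nodemaker:

    from twisted.internet import defer

    def fetch_best(node):
        # The same calls work for immutable and mutable filenodes; for an
        # immutable file the best version is simply the file itself.
        d = node.get_size_of_best_version()
        d.addCallback(lambda size: node.download_best_version())
        return d

    # To keep a version object around for further reads, ask for the best
    # readable version; maybeDeferred tolerates implementations that return
    # the IReadable directly instead of via a Deferred.
    d = defer.maybeDeferred(node.get_best_readable_version)
    d.addCallback(lambda readable: readable.download_to_data())

    # Writers ask a mutable filenode for its best mutable version instead:
    #   node.get_best_mutable_version().addCallback(
    #       lambda mfv: mfv.overwrite(uploadable))   # an IMutableUploadable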
hunk ./src/allmydata/test/test_filenode.py 98 def _check_segment(res): self.failUnlessEqual(res, DATA[1:1+5]) d.addCallback(_check_segment) + d.addCallback(lambda ignored: + self.failUnlessEqual(fn1.get_best_readable_version(), fn1)) + d.addCallback(lambda ignored: + fn1.get_size_of_best_version()) + d.addCallback(lambda size: + self.failUnlessEqual(size, len(DATA))) + d.addCallback(lambda ignored: + fn1.download_to_data()) + d.addCallback(lambda data: + self.failUnlessEqual(data, DATA)) + d.addCallback(lambda ignored: + fn1.download_best_version()) + d.addCallback(lambda data: + self.failUnlessEqual(data, DATA)) return d hunk ./src/allmydata/test/test_immutable.py 153 return d + def test_download_to_data(self): + d = self.n.download_to_data() + d.addCallback(lambda data: + self.failUnlessEqual(data, common.TEST_DATA)) + return d + + + def test_download_best_version(self): + d = self.n.download_best_version() + d.addCallback(lambda data: + self.failUnlessEqual(data, common.TEST_DATA)) + return d + + + def test_get_best_readable_version(self): + n = self.n.get_best_readable_version() + self.failUnlessEqual(n, self.n) + + def test_get_size_of_best_version(self): + d = self.n.get_size_of_best_version() + d.addCallback(lambda size: + self.failUnlessEqual(size, len(common.TEST_DATA))) + return d + + # XXX extend these tests to show bad behavior of various kinds from servers: raising exception from each remove_foo() method, for example # XXX test disconnect DeadReferenceError from get_buckets and get_block_whatsit } [frontends/sftpd.py: alter a mutable file overwrite to work with the new API Kevan Carstensen **20100709232951 Ignore-this: e0441c3ef2dfe78a1cac3f423d613e40 ] { hunk ./src/allmydata/frontends/sftpd.py 33 from allmydata.interfaces import IFileNode, IDirectoryNode, ExistingChildError, \ NoSuchChildError, ChildOfWrongTypeError from allmydata.mutable.common import NotWriteableError +from allmydata.mutable.publish import MutableFileHandle from allmydata.immutable.upload import FileHandle from allmydata.dirnode import update_metadata hunk ./src/allmydata/frontends/sftpd.py 867 assert parent and childname, (parent, childname, self.metadata) d2.addCallback(lambda ign: parent.set_metadata_for(childname, self.metadata)) - d2.addCallback(lambda ign: self.consumer.get_current_size()) - d2.addCallback(lambda size: self.consumer.read(0, size)) - d2.addCallback(lambda new_contents: self.filenode.overwrite(new_contents)) + d2.addCallback(lambda ign: self.filenode.overwrite(MutableFileHandle(self.consumer.get_file()))) else: def _add_file(ign): self.log("_add_file childname=%r" % (childname,), level=OPERATIONAL) } [mutable/filenode.py: implement most of IVersion, per #993 Kevan Carstensen **20100713231758 Ignore-this: 8a30b8c908d98eeffa0762e084a541f8 ] { hunk ./src/allmydata/mutable/filenode.py 8 from twisted.internet import defer, reactor from foolscap.api import eventually from allmydata.interfaces import IMutableFileNode, ICheckable, ICheckResults, \ - NotEnoughSharesError, \ - MDMF_VERSION, SDMF_VERSION, IMutableUploadable + NotEnoughSharesError, MDMF_VERSION, SDMF_VERSION, IMutableUploadable, \ + IMutableFileVersion from allmydata.util import hashutil, log from allmydata.util.assertutil import precondition from allmydata.uri import WriteableSSKFileURI, ReadonlySSKFileURI hunk ./src/allmydata/mutable/filenode.py 208 def get_size(self): return self._most_recent_size + def get_current_size(self): d = self.get_size_of_best_version() d.addCallback(self._stash_size) hunk 
./src/allmydata/mutable/filenode.py 213 return d + def _stash_size(self, size): self._most_recent_size = size return size hunk ./src/allmydata/mutable/filenode.py 272 return cmp(self.__class__, them.__class__) return cmp(self._uri, them._uri) - def _do_serialized(self, cb, *args, **kwargs): - # note: to avoid deadlock, this callable is *not* allowed to invoke - # other serialized methods within this (or any other) - # MutableFileNode. The callable should be a bound method of this same - # MFN instance. - d = defer.Deferred() - self._serializer.addCallback(lambda ignore: cb(*args, **kwargs)) - # we need to put off d.callback until this Deferred is finished being - # processed. Otherwise the caller's subsequent activities (like, - # doing other things with this node) can cause reentrancy problems in - # the Deferred code itself - self._serializer.addBoth(lambda res: eventually(d.callback, res)) - # add a log.err just in case something really weird happens, because - # self._serializer stays around forever, therefore we won't see the - # usual Unhandled Error in Deferred that would give us a hint. - self._serializer.addErrback(log.err) - return d ################################# # ICheckable hunk ./src/allmydata/mutable/filenode.py 297 ################################# - # IMutableFileNode + # IFileNode + + def get_best_readable_version(self): + """ + I return a Deferred that fires with a MutableFileVersion + representing the best readable version of the file that I + represent + """ + return self.get_readable_version() + + + def get_readable_version(self, servermap=None, version=None): + """ + I return a Deferred that fires with an MutableFileVersion for my + version argument, if there is a recoverable file of that version + on the grid. If there is no recoverable version, I fire with an + UnrecoverableFileError. + + If a servermap is provided, I look in there for the requested + version. If no servermap is provided, I create and update a new + one. + + If no version is provided, then I return a MutableFileVersion + representing the best recoverable version of the file. + """ + d = self._get_version_from_servermap(MODE_READ, servermap, version) + def _build_version((servermap, their_version)): + assert their_version in servermap.recoverable_versions() + assert their_version in servermap.make_versionmap() + + mfv = MutableFileVersion(self, + servermap, + their_version, + self._storage_index, + self._storage_broker, + self._readkey, + history=self._history) + assert mfv.is_readonly() + # our caller can use this to download the contents of the + # mutable file. + return mfv + return d.addCallback(_build_version) + + + def _get_version_from_servermap(self, + mode, + servermap=None, + version=None): + """ + I return a Deferred that fires with (servermap, version). + + This function performs validation and a servermap update. If it + returns (servermap, version), the caller can assume that: + - servermap was last updated in mode. + - version is recoverable, and corresponds to the servermap. + + If version and servermap are provided to me, I will validate + that version exists in the servermap, and that the servermap was + updated correctly. + + If version is not provided, but servermap is, I will validate + the servermap and return the best recoverable version that I can + find in the servermap. + + If the version is provided but the servermap isn't, I will + obtain a servermap that has been updated in the correct mode and + validate that version is found and recoverable. 
+ + If neither servermap nor version are provided, I will obtain a + servermap updated in the correct mode, and return the best + recoverable version that I can find in there. + """ + # XXX: wording ^^^^ + if servermap and servermap.last_update_mode == mode: + d = defer.succeed(servermap) + else: + d = self._get_servermap(mode) + + def _get_version(servermap, version): + if version and version not in servermap.recoverable_versions(): + version = None + else: + version = servermap.best_recoverable_version() + if not version: + raise UnrecoverableFileError("no recoverable versions") + return (servermap, version) + return d.addCallback(_get_version, version) + def download_best_version(self): hunk ./src/allmydata/mutable/filenode.py 387 + """ + I return a Deferred that fires with the contents of the best + version of this mutable file. + """ return self._do_serialized(self._download_best_version) hunk ./src/allmydata/mutable/filenode.py 392 + + def _download_best_version(self): hunk ./src/allmydata/mutable/filenode.py 395 - servermap = ServerMap() - d = self._try_once_to_download_best_version(servermap, MODE_READ) - def _maybe_retry(f): - f.trap(NotEnoughSharesError) - # the download is worth retrying once. Make sure to use the - # old servermap, since it is what remembers the bad shares, - # but use MODE_WRITE to make it look for even more shares. - # TODO: consider allowing this to retry multiple times.. this - # approach will let us tolerate about 8 bad shares, I think. - return self._try_once_to_download_best_version(servermap, - MODE_WRITE) + """ + I am the serialized sibling of download_best_version. + """ + d = self.get_best_readable_version() + d.addCallback(self._record_size) + d.addCallback(lambda version: version.download_to_data()) + + # It is possible that the download will fail because there + # aren't enough shares to be had. If so, we will try again after + # updating the servermap in MODE_WRITE, which may find more + # shares than updating in MODE_READ, as we just did. We can do + # this by getting the best mutable version and downloading from + # that -- the best mutable version will be a MutableFileVersion + # with a servermap that was last updated in MODE_WRITE, as we + # want. If this fails, then we give up. + def _maybe_retry(failure): + failure.trap(NotEnoughSharesError) + + d = self.get_best_mutable_version() + d.addCallback(self._record_size) + d.addCallback(lambda version: version.download_to_data()) + return d + d.addErrback(_maybe_retry) return d hunk ./src/allmydata/mutable/filenode.py 420 - def _try_once_to_download_best_version(self, servermap, mode): - d = self._update_servermap(servermap, mode) - d.addCallback(self._once_updated_download_best_version, servermap) - return d - def _once_updated_download_best_version(self, ignored, servermap): - goal = servermap.best_recoverable_version() - if not goal: - raise UnrecoverableFileError("no recoverable versions") - return self._try_once_to_download_version(servermap, goal) + + + def _record_size(self, mfv): + """ + I record the size of a mutable file version. + """ + self._most_recent_size = mfv.get_size() + return mfv + def get_size_of_best_version(self): hunk ./src/allmydata/mutable/filenode.py 431 - d = self.get_servermap(MODE_READ) - def _got_servermap(smap): - ver = smap.best_recoverable_version() - if not ver: - raise UnrecoverableFileError("no recoverable version") - return smap.size_of_version(ver) - d.addCallback(_got_servermap) - return d + """ + I return the size of the best version of this mutable file. 
+ + This is equivalent to calling get_size() on the result of + get_best_readable_version(). + """ + d = self.get_best_readable_version() + return d.addCallback(lambda mfv: mfv.get_size()) + + + ################################# + # IMutableFileNode + + def get_best_mutable_version(self, servermap=None): + """ + I return a Deferred that fires with a MutableFileVersion + representing the best readable version of the file that I + represent. I am like get_best_readable_version, except that I + will try to make a writable version if I can. + """ + return self.get_mutable_version(servermap=servermap) + + + def get_mutable_version(self, servermap=None, version=None): + """ + I return a version of this mutable file. I return a Deferred + that fires with a MutableFileVersion. + + If version is provided, the Deferred will fire with a + MutableFileVersion initialized with that version. Otherwise, it + will fire with the best version that I can recover. + + If servermap is provided, I will use that to find versions + instead of performing my own servermap update. + """ + if self.is_readonly(): + return self.get_readable_version(servermap=servermap, + version=version) + + # get_mutable_version => write intent, so we require that the + # servermap is updated in MODE_WRITE + d = self._get_version_from_servermap(MODE_WRITE, servermap, version) + def _build_version((servermap, smap_version)): + # these should have been set by the servermap update. + assert self._secret_holder + assert self._writekey + + mfv = MutableFileVersion(self, + servermap, + smap_version, + self._storage_index, + self._storage_broker, + self._readkey, + self._writekey, + self._secret_holder, + history=self._history) + assert not mfv.is_readonly() + return mfv + + return d.addCallback(_build_version) + + + # XXX: I'm uncomfortable with the difference between upload and + # overwrite, which, FWICT, is basically that you don't have to + # do a servermap update before you overwrite. We split them up + # that way anyway, so I guess there's no real difficulty in + # offering both ways to callers, but it also makes the + # public-facing API cluttered, and makes it hard to discern the + # right way of doing things. hunk ./src/allmydata/mutable/filenode.py 501
+ """ + d = self.get_best_mutable_version() + return d.addCallback(lambda mfv: mfv.overwrite(new_contents)) + + + + def upload(self, new_contents, servermap): + """ + I overwrite the contents of the best recoverable version of this + mutable file with new_contents, using servermap instead of + creating/updating our own servermap. I return a Deferred that + fires with the results of my upload. + """ + return self._do_serialized(self._upload, new_contents, servermap) + + + def _upload(self, new_contents, servermap): + """ + I am the serialized sibling of upload. + """ + d = self.get_best_mutable_version(servermap) + return d.addCallback(lambda mfv: mfv.overwrite(new_contents)) + + + def modify(self, modifier, backoffer=None): + """ + I modify the contents of the best recoverable version of this + mutable file with the modifier. This is equivalent to calling + modify on the result of get_best_mutable_version. I return a + Deferred that eventually fires with an UploadResults instance + describing this process. + """ + return self._do_serialized(self._modify, modifier, backoffer) + + + def _modify(self, modifier, backoffer): + """ + I am the serialized sibling of modify. + """ + d = self.get_best_mutable_version() + return d.addCallback(lambda mfv: mfv.modify(modifier, backoffer)) + + + def download_version(self, servermap, version, fetch_privkey=False): + """ + Download the specified version of this mutable file. I return a + Deferred that fires with the contents of the specified version + as a bytestring, or errbacks if the file is not recoverable. + """ + d = self.get_readable_version(servermap, version) + return d.addCallback(lambda mfv: mfv.download_to_data(fetch_privkey)) + + + def get_servermap(self, mode): + """ + I return a servermap that has been updated in mode. + + mode should be one of MODE_READ, MODE_WRITE, MODE_CHECK or + MODE_ANYTHING. See servermap.py for more on what these mean. + """ + return self._do_serialized(self._get_servermap, mode) + hunk ./src/allmydata/mutable/filenode.py 585 + def _get_servermap(self, mode): + """ + I am a serialized twin to get_servermap. + """ servermap = ServerMap() hunk ./src/allmydata/mutable/filenode.py 590 - d = self._update_servermap(servermap, mode=MODE_WRITE) - d.addCallback(lambda ignored: self._upload(new_contents, servermap)) + return self._update_servermap(servermap, mode) + + + def _update_servermap(self, servermap, mode): + u = ServermapUpdater(self, self._storage_broker, Monitor(), servermap, + mode) + if self._history: + self._history.notify_mapupdate(u.get_status()) + return u.update() + + + def set_version(self, version): + # I can be set in two ways: + # 1. When the node is created. + # 2. (for an existing share) when the Servermap is updated + # before I am read. + assert version in (MDMF_VERSION, SDMF_VERSION) + self._protocol_version = version + + + def get_version(self): + return self._protocol_version + + + def _do_serialized(self, cb, *args, **kwargs): + # note: to avoid deadlock, this callable is *not* allowed to invoke + # other serialized methods within this (or any other) + # MutableFileNode. The callable should be a bound method of this same + # MFN instance. + d = defer.Deferred() + self._serializer.addCallback(lambda ignore: cb(*args, **kwargs)) + # we need to put off d.callback until this Deferred is finished being + # processed. 
Otherwise the caller's subsequent activities (like, + # doing other things with this node) can cause reentrancy problems in + # the Deferred code itself + self._serializer.addBoth(lambda res: eventually(d.callback, res)) + # add a log.err just in case something really weird happens, because + # self._serializer stays around forever, therefore we won't see the + # usual Unhandled Error in Deferred that would give us a hint. + self._serializer.addErrback(log.err) return d hunk ./src/allmydata/mutable/filenode.py 633 + def _upload(self, new_contents, servermap): + """ + A MutableFileNode still has to have some way of getting + published initially, which is what I am here for. After that, + all publishing, updating, modifying and so on happens through + MutableFileVersions. + """ + assert self._pubkey, "update_servermap must be called before publish" + + p = Publish(self, self._storage_broker, servermap) + if self._history: + self._history.notify_publish(p.get_status(), + new_contents.get_size()) + d = p.publish(new_contents) + d.addCallback(self._did_upload, new_contents.get_size()) + return d + + + def _did_upload(self, res, size): + self._most_recent_size = size + return res + + +class MutableFileVersion: + """ + I represent a specific version (most likely the best version) of a + mutable file. + + Since I implement IReadable, instances which hold a + reference to an instance of me are guaranteed the ability (absent + connection difficulties or unrecoverable versions) to read the file + that I represent. Depending on whether I was initialized with a + write capability or not, I may also provide callers the ability to + overwrite or modify the contents of the mutable file that I + reference. + """ + implements(IMutableFileVersion) + + def __init__(self, + node, + servermap, + version, + storage_index, + storage_broker, + readcap, + writekey=None, + write_secrets=None, + history=None): + + self._node = node + self._servermap = servermap + self._version = version + self._storage_index = storage_index + self._write_secrets = write_secrets + self._history = history + self._storage_broker = storage_broker + + #assert isinstance(readcap, IURI) + self._readcap = readcap + + self._writekey = writekey + self._serializer = defer.succeed(None) + self._size = None + + + def get_sequence_number(self): + """ + Get the sequence number of the mutable version that I represent. + """ + return 0 + + + # TODO: Terminology? + def get_writekey(self): + """ + I return a writekey or None if I don't have a writekey. + """ + return self._writekey + + + def overwrite(self, new_contents): + """ + I overwrite the contents of this mutable file version with the + data in new_contents. + """ + assert not self.is_readonly() + + return self._do_serialized(self._overwrite, new_contents) + + + def _overwrite(self, new_contents): + assert IMutableUploadable.providedBy(new_contents) + assert self._servermap.last_update_mode == MODE_WRITE + + return self._upload(new_contents) + + def modify(self, modifier, backoffer=None): """I use a modifier callback to apply a change to the mutable file. I implement the following pseudocode:: hunk ./src/allmydata/mutable/filenode.py 770 backoffer should not invoke any methods on this MutableFileNode instance, and it needs to be highly conscious of deadlock issues. 
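To make the retry behaviour concrete, here is a sketch of a simple custom backoffer; 'some_modifier' stands for any modifier callable of the kind described above, and the two-second delay is an arbitrary choice:

    from twisted.internet import defer, reactor

    def patient_backoffer(version, failure):
        # Called with the object being modified and the failure carrying
        # UncoordinatedWriteError. Returning a Deferred postpones the retry
        # until it fires; returning the failure would give up instead.
        d = defer.Deferred()
        reactor.callLater(2.0, d.callback, None)
        return d

    d = version.modify(some_modifier, backoffer=patient_backoffer)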
""" + assert not self.is_readonly() + return self._do_serialized(self._modify, modifier, backoffer) hunk ./src/allmydata/mutable/filenode.py 773 + + def _modify(self, modifier, backoffer): hunk ./src/allmydata/mutable/filenode.py 776 - servermap = ServerMap() if backoffer is None: backoffer = BackoffAgent().delay hunk ./src/allmydata/mutable/filenode.py 778 - return self._modify_and_retry(servermap, modifier, backoffer, True) - def _modify_and_retry(self, servermap, modifier, backoffer, first_time): - d = self._modify_once(servermap, modifier, first_time) + return self._modify_and_retry(modifier, backoffer, True) + + + def _modify_and_retry(self, modifier, backoffer, first_time): + """ + I try to apply modifier to the contents of this version of the + mutable file. If I succeed, I return an UploadResults instance + describing my success. If I fail, I try again after waiting for + a little bit. + """ + log.msg("doing modify") + d = self._modify_once(modifier, first_time) def _retry(f): f.trap(UncoordinatedWriteError) d2 = defer.maybeDeferred(backoffer, self, f) hunk ./src/allmydata/mutable/filenode.py 794 d2.addCallback(lambda ignored: - self._modify_and_retry(servermap, modifier, + self._modify_and_retry(modifier, backoffer, False)) return d2 d.addErrback(_retry) hunk ./src/allmydata/mutable/filenode.py 799 return d - def _modify_once(self, servermap, modifier, first_time): - d = self._update_servermap(servermap, MODE_WRITE) - d.addCallback(self._once_updated_download_best_version, servermap) + + + def _modify_once(self, modifier, first_time): + """ + I attempt to apply a modifier to the contents of the mutable + file. + """ + assert self._servermap.last_update_mode == MODE_WRITE + + # download_to_data is serialized, so we have to call this to + # avoid deadlock. + d = self._try_to_download_data() def _apply(old_contents): hunk ./src/allmydata/mutable/filenode.py 812 - new_contents = modifier(old_contents, servermap, first_time) + new_contents = modifier(old_contents, self._servermap, first_time) if new_contents is None or new_contents == old_contents: hunk ./src/allmydata/mutable/filenode.py 814 + log.msg("no changes") # no changes need to be made if first_time: return hunk ./src/allmydata/mutable/filenode.py 828 new_contents is None), "Modifier function must return an IMutableUploadable " "or None") - return self._upload(new_contents, servermap) + return self._upload(new_contents) d.addCallback(_apply) return d hunk ./src/allmydata/mutable/filenode.py 832 - def get_servermap(self, mode): - return self._do_serialized(self._get_servermap, mode) - def _get_servermap(self, mode): - servermap = ServerMap() - return self._update_servermap(servermap, mode) - def _update_servermap(self, servermap, mode): - u = ServermapUpdater(self, self._storage_broker, Monitor(), servermap, - mode) - if self._history: - self._history.notify_mapupdate(u.get_status()) - return u.update() hunk ./src/allmydata/mutable/filenode.py 833 - def download_version(self, servermap, version, fetch_privkey=False): - return self._do_serialized(self._try_once_to_download_version, - servermap, version, fetch_privkey) - def _try_once_to_download_version(self, servermap, version, - fetch_privkey=False): - r = Retrieve(self, servermap, version, fetch_privkey) + def is_readonly(self): + """ + I return True if this MutableFileVersion provides no write + access to the file that it encapsulates, and False if it + provides the ability to modify the file. 
+ """ + return self._writekey is None + + + def is_mutable(self): + """ + I return True, since mutable files are always mutable by + somebody. + """ + return True + + + def get_storage_index(self): + """ + I return the storage index of the reference that I encapsulate. + """ + return self._storage_index + + + def get_size(self): + """ + I return the length, in bytes, of this readable object. + """ + return self._servermap.size_of_version(self._version) + + + def download_to_data(self, fetch_privkey=False): + """ + I return a Deferred that fires with the contents of this + readable object as a byte string. + + """ + return self._do_serialized(self._try_to_download_data, fetch_privkey) + + + def _try_to_download_data(self, fetch_privkey=False): + """ + I am the serialized sibling of download_to_data. I attempt to + download all of the mutable file that I represent into a + bytestring. If I fail, I will try again to see if the situation + has changed before returning a failure. + """ + return self._do_download(fetch_privkey) + + + def _update_servermap(self, mode=MODE_READ): + """ + I update our Servermap according to my mode argument. I return a + Deferred that fires with None when this has finished. The + updated Servermap will be at self._servermap in that case. + """ + d = self._node.get_servermap(mode) + + def _got_servermap(servermap): + assert servermap.last_update_mode == mode + + self._servermap = servermap + d.addCallback(_got_servermap) + return d + + + def _do_download(self, fetch_privkey): + """ + I try once to download the data associated with this mutable + file. + """ + r = Retrieve(self._node, self._servermap, self._version, fetch_privkey) if self._history: self._history.notify_retrieve(r.get_status()) d = r.download() hunk ./src/allmydata/mutable/filenode.py 908 - d.addCallback(self._downloaded_version) + def _got_data(contents): + self._size = len(contents) + return contents + return d.addCallback(_got_data) + + + def read(self, consumer, offset=0, size=None): + """ + I read a portion (possibly all) of the mutable file that I + reference into consumer. + """ + pass + + + def _do_serialized(self, cb, *args, **kwargs): + # note: to avoid deadlock, this callable is *not* allowed to invoke + # other serialized methods within this (or any other) + # MutableFileNode. The callable should be a bound method of this same + # MFN instance. + d = defer.Deferred() + self._serializer.addCallback(lambda ignore: cb(*args, **kwargs)) + # we need to put off d.callback until this Deferred is finished being + # processed. Otherwise the caller's subsequent activities (like, + # doing other things with this node) can cause reentrancy problems in + # the Deferred code itself + self._serializer.addBoth(lambda res: eventually(d.callback, res)) + # add a log.err just in case something really weird happens, because + # self._serializer stays around forever, therefore we won't see the + # usual Unhandled Error in Deferred that would give us a hint. 
+ self._serializer.addErrback(log.err) return d hunk ./src/allmydata/mutable/filenode.py 939 - def _downloaded_version(self, data): - self._most_recent_size = len(data) - return data hunk ./src/allmydata/mutable/filenode.py 940 - def upload(self, new_contents, servermap): - return self._do_serialized(self._upload, new_contents, servermap) - def _upload(self, new_contents, servermap): - assert self._pubkey, "update_servermap must be called before publish" - assert IMutableUploadable.providedBy(new_contents) hunk ./src/allmydata/mutable/filenode.py 941 - p = Publish(self, self._storage_broker, servermap) + def _upload(self, new_contents): + #assert self._pubkey, "update_servermap must be called before publish" + p = Publish(self._node, self._storage_broker, self._servermap) if self._history: hunk ./src/allmydata/mutable/filenode.py 945 - self._history.notify_publish(p.get_status(), new_contents.get_size()) + self._history.notify_publish(p.get_status(), + new_contents.get_size()) d = p.publish(new_contents) d.addCallback(self._did_upload, new_contents.get_size()) return d hunk ./src/allmydata/mutable/filenode.py 950 - def _did_upload(self, res, size): - self._most_recent_size = size - return res - - - def set_version(self, version): - # I can be set in two ways: - # 1. When the node is created. - # 2. (for an existing share) when the Servermap is updated - # before I am read. - assert version in (MDMF_VERSION, SDMF_VERSION) - self._protocol_version = version hunk ./src/allmydata/mutable/filenode.py 952 - def get_version(self): - return self._protocol_version + def _did_upload(self, res, size): + self._size = size + return res } Context: [SFTP: don't call .stopProducing on the producer registered with OverwriteableFileConsumer (which breaks with warner's new downloader). david-sarah@jacaranda.org**20100628231926 Ignore-this: 131b7a5787bc85a9a356b5740d9d996f ] [docs/how_to_make_a_tahoe-lafs_release.txt: trivial correction, install.html should now be quickstart.html. david-sarah@jacaranda.org**20100625223929 Ignore-this: 99a5459cac51bd867cc11ad06927ff30 ] [setup: in the Makefile, refuse to upload tarballs unless someone has passed the environment variable "BB_BRANCH" with value "trunk" zooko@zooko.com**20100619034928 Ignore-this: 276ddf9b6ad7ec79e27474862e0f7d6 ] [trivial: tiny update to in-line comment zooko@zooko.com**20100614045715 Ignore-this: 10851b0ed2abfed542c97749e5d280bc (I'm actually committing this patch as a test of the new eager-annotation-computation of trac-darcs.) ] [docs: about.html link to home page early on, and be decentralized storage instead of cloud storage this time around zooko@zooko.com**20100619065318 Ignore-this: dc6db03f696e5b6d2848699e754d8053 ] [docs: update about.html, especially to have a non-broken link to quickstart.html, and also to comment out the broken links to "for Paranoids" and "for Corporates" zooko@zooko.com**20100619065124 Ignore-this: e292c7f51c337a84ebfeb366fbd24d6c ] [TAG allmydata-tahoe-1.7.0 zooko@zooko.com**20100619052631 Ignore-this: d21e27afe6d85e2e3ba6a3292ba2be1 ] Patch bundle hash: cc92b921e3bed86288d45b16885c82cb3856dbc1