Sun Jul 18 16:15:37 MDT 2010  zooko@zooko.com
  * immutable: test for #1118

Sun Jul 18 22:50:47 MDT 2010  zooko@zooko.com
  * immutable: extend the tests to check that the shares that got uploaded really do make a sufficiently Happy distribution
  This patch also renames some instances of "find_shares()" to "find_all_shares()" and other instances to "find_uri_shares()" as appropriate -- the conflation between those names confused me at first when writing these tests.

Sun Jul 18 22:46:55 MDT 2010  david-sarah@jacaranda.org
  * upload.py: fix #1118 by aborting newly-homeless buckets when reassignment runs. This makes a previously failing assert correct. This version refactors 'abort' into two methods, rather than using a default argument.

Mon Jul 19 00:56:29 MDT 2010  zooko@zooko.com
  * immutable: use PrefixingLogMixin to organize logging in Tahoe2PeerSelector and add more detailed messages about peer

New patches:

[immutable: test for #1118
zooko@zooko.com**20100718221537
 Ignore-this: 8882aabe2aaec6a0148c87e735d817ad
] {
hunk ./src/allmydata/immutable/upload.py 919
             for shnum in peer.buckets:
                 self._peer_trackers[shnum] = peer
                 servermap.setdefault(shnum, set()).add(peer.peerid)
-        assert len(buckets) == sum([len(peer.buckets) for peer in used_peers])
+        assert len(buckets) == sum([len(peer.buckets) for peer in used_peers]), "%s (%s) != %s (%s)" % (len(buckets), buckets, sum([len(peer.buckets) for peer in used_peers]), [(p.buckets, p.peerid) for p in used_peers])
         encoder.set_shareholders(buckets, servermap)
 
     def _encrypted_done(self, verifycap):
hunk ./src/allmydata/test/test_upload.py 1789
         return d
     test_problem_layout_comment_187.todo = "this isn't fixed yet"
 
+    def test_problem_layout_ticket_1118(self):
+        # #1118 includes a report from a user who hit an assertion in
+        # the upload code with this layout.
+        self.basedir = self.mktemp()
+        d = self._setup_and_upload(k=2, n=4)
+
+        # server 0: no shares
+        # server 1: shares 0, 3
+        # server 3: share 1
+        # server 2: share 2
+        # The order in which they get queried is 0, 1, 3, 2
+        def _setup(ign):
+            self._add_server(server_number=0)
+            self._add_server_with_share(server_number=1, share_number=0)
+            self._add_server_with_share(server_number=2, share_number=2)
+            self._add_server_with_share(server_number=3, share_number=1)
+            # Copy shares
+            self._copy_share_to_server(3, 1)
+            storedir = self.get_serverdir(0)
+            # remove the storedir, wiping out any existing shares
+            shutil.rmtree(storedir)
+            # create an empty storedir to replace the one we just removed
+            os.mkdir(storedir)
+            client = self.g.clients[0]
+            client.DEFAULT_ENCODING_PARAMETERS['happy'] = 4
+            return client
+
+        d.addCallback(_setup)
+        d.addCallback(lambda client:
+                          client.upload(upload.Data("data" * 10000, convergence="")))
+        return d
 
     def test_upload_succeeds_with_some_homeless_shares(self):
         # If the upload is forced to stop trying to place shares before
}
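
The assert that this patch strengthens (and that ticket #1118 reports as firing) checks that no share number has been handed to more than one peer's bucket dict, i.e. the merged bucket map is exactly as large as the per-peer bucket maps combined. A minimal stand-in sketch of that invariant follows; FakePeer and the peer ids are invented for illustration and are not Tahoe code.

class FakePeer:
    def __init__(self, peerid, buckets):
        self.peerid = peerid
        self.buckets = buckets   # maps shnum -> bucket writer (any object will do here)

def shares_unduplicated(used_peers):
    buckets = {}
    for peer in used_peers:
        buckets.update(peer.buckets)
    return len(buckets) == sum([len(peer.buckets) for peer in used_peers])

# fine: four shares spread over two peers with no overlap
assert shares_unduplicated([FakePeer("p1", {0: None, 3: None}),
                            FakePeer("p2", {1: None, 2: None})])
# the #1118 situation: after redistribution a share is still present in a
# second peer's bucket dict, so the merged dict is smaller than the sum and
# the assert in upload.py fires
assert not shares_unduplicated([FakePeer("p1", {0: None, 3: None}),
                                FakePeer("p2", {1: None, 3: None})])
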
[immutable: extend the tests to check that the shares that got uploaded really do make a sufficiently Happy distribution
zooko@zooko.com**20100719045047
 Ignore-this: 89c33a7b795e23018667351045a8d5d0
 This patch also renames some instances of "find_shares()" to "find_all_shares()" and other instances to "find_uri_shares()" as appropriate -- the conflation between those names confused me at first when writing these tests.
] {
hunk ./src/allmydata/test/common.py 959
         d.addCallback(_stash_it)
         return d
 
-    def find_shares(self, unused=None):
+    def find_all_shares(self, unused=None):
         """Locate shares on disk. Returns a dict that maps
         (clientnum,sharenum) to a string that contains the share container
         (copied directly from the disk, containing leases etc). You can
hunk ./src/allmydata/test/common.py 984
 
     def replace_shares(self, newshares, storage_index):
         """Replace shares on disk. Takes a dictionary in the same form
-        as find_shares() returns."""
+        as find_all_shares() returns."""
 
         for i, c in enumerate(self.clients):
             sharedir = c.getServiceNamed("storage").sharedir
hunk ./src/allmydata/test/common.py 1009
     def _delete_a_share(self, unused=None, sharenum=None):
         """ Delete one share. """
 
-        shares = self.find_shares()
+        shares = self.find_all_shares()
         ks = shares.keys()
         if sharenum is not None:
             k = [ key for key in shares.keys() if key[1] == sharenum ][0]
hunk ./src/allmydata/test/common.py 1021
         return unused
 
     def _corrupt_a_share(self, unused, corruptor_func, sharenum):
-        shares = self.find_shares()
+        shares = self.find_all_shares()
         ks = [ key for key in shares.keys() if key[1] == sharenum ]
         assert ks, (shares.keys(), sharenum)
         k = ks[0]
hunk ./src/allmydata/test/common.py 1031
 
     def _corrupt_all_shares(self, unused, corruptor_func):
         """ All shares on disk will be corrupted by corruptor_func. """
-        shares = self.find_shares()
+        shares = self.find_all_shares()
         for k in shares.keys():
             self._corrupt_a_share(unused, corruptor_func, k[1])
         return corruptor_func
hunk ./src/allmydata/test/common.py 1038
 
     def _corrupt_a_random_share(self, unused, corruptor_func):
         """ Exactly one share on disk will be corrupted by corruptor_func. """
-        shares = self.find_shares()
+        shares = self.find_all_shares()
         ks = shares.keys()
         k = random.choice(ks)
         self._corrupt_a_share(unused, corruptor_func, k[1])
hunk ./src/allmydata/test/no_network.py 305
             ss = self.g.servers_by_number[i]
             yield (i, ss, ss.storedir)
 
-    def find_shares(self, uri):
+    def find_uri_shares(self, uri):
         si = tahoe_uri.from_string(uri).get_storage_index()
         prefixdir = storage_index_to_dir(si)
         shares = []
hunk ./src/allmydata/test/no_network.py 326
         os.unlink(sharefile)
 
     def delete_shares_numbered(self, uri, shnums):
-        for (i_shnum, i_serverid, i_sharefile) in self.find_shares(uri):
+        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
             if i_shnum in shnums:
                 os.unlink(i_sharefile)
 
hunk ./src/allmydata/test/no_network.py 336
         open(sharefile, "wb").write(corruptdata)
 
     def corrupt_shares_numbered(self, uri, shnums, corruptor, debug=False):
-        for (i_shnum, i_serverid, i_sharefile) in self.find_shares(uri):
+        for (i_shnum, i_serverid, i_sharefile) in self.find_uri_shares(uri):
             if i_shnum in shnums:
                 sharedata = open(i_sharefile, "rb").read()
                 corruptdata = corruptor(sharedata, debug=debug)
hunk ./src/allmydata/test/test_cli.py 1998
 
         def _clobber_shares(ignored):
             # delete one, corrupt a second
-            shares = self.find_shares(self.uri)
+            shares = self.find_uri_shares(self.uri)
             self.failUnlessReallyEqual(len(shares), 10)
             os.unlink(shares[0][2])
             cso = debug.CorruptShareOptions()
hunk ./src/allmydata/test/test_cli.py 2123
         d.addCallback(_check_stats)
 
         def _clobber_shares(ignored):
-            shares = self.find_shares(self.uris[u"gööd"])
+            shares = self.find_uri_shares(self.uris[u"gööd"])
             self.failUnlessReallyEqual(len(shares), 10)
             os.unlink(shares[0][2])
 
hunk ./src/allmydata/test/test_cli.py 2127
-            shares = self.find_shares(self.uris["mutable"])
+            shares = self.find_uri_shares(self.uris["mutable"])
             cso = debug.CorruptShareOptions()
             cso.stdout = StringIO()
             cso.parseOptions([shares[1][2]])
hunk ./src/allmydata/test/test_deepcheck.py 994
         self.delete_shares_numbered(node.get_uri(), [0,1])
 
     def _corrupt_some_shares(self, node):
-        for (shnum, serverid, sharefile) in self.find_shares(node.get_uri()):
+        for (shnum, serverid, sharefile) in self.find_uri_shares(node.get_uri()):
             if shnum in (0,1):
                 self._run_cli(["debug", "corrupt-share", sharefile])
 
hunk ./src/allmydata/test/test_hung_server.py 67
             os.makedirs(si_dir)
         new_sharefile = os.path.join(si_dir, str(sharenum))
         shutil.copy(sharefile, new_sharefile)
-        self.shares = self.find_shares(self.uri)
+        self.shares = self.find_uri_shares(self.uri)
         # Make sure that the storage server has the share.
         self.failUnless((sharenum, ss.original.my_nodeid, new_sharefile)
                         in self.shares)
hunk ./src/allmydata/test/test_hung_server.py 98
             d = nm.create_mutable_file(mutable_plaintext)
             def _uploaded_mutable(node):
                 self.uri = node.get_uri()
-                self.shares = self.find_shares(self.uri)
+                self.shares = self.find_uri_shares(self.uri)
             d.addCallback(_uploaded_mutable)
         else:
             data = upload.Data(immutable_plaintext, convergence="")
hunk ./src/allmydata/test/test_hung_server.py 105
             d = self.c0.upload(data)
             def _uploaded_immutable(upload_res):
                 self.uri = upload_res.uri
-                self.shares = self.find_shares(self.uri)
+                self.shares = self.find_uri_shares(self.uri)
             d.addCallback(_uploaded_immutable)
         return d
 
hunk ./src/allmydata/test/test_immutable.py 14
         # replace_shares, and asserting that the new set of shares equals the
         # old is more to test this test code than to test the Tahoe code...
         d = defer.succeed(None)
-        d.addCallback(self.find_shares)
+        d.addCallback(self.find_all_shares)
         stash = [None]
         def _stash_it(res):
             stash[0] = res
hunk ./src/allmydata/test/test_repairer.py 90
         d.addCallback(_check)
 
         def _remove_all(ignored):
-            for sh in self.find_shares(self.uri):
+            for sh in self.find_uri_shares(self.uri):
                 self.delete_share(sh)
         d.addCallback(_remove_all)
 
hunk ./src/allmydata/test/test_repairer.py 325
         def _grab_sh0(res):
             self.sh0_file = [sharefile
                              for (shnum, serverid, sharefile)
-                             in self.find_shares(self.uri)
+                             in self.find_uri_shares(self.uri)
                              if shnum == 0][0]
             self.sh0_orig = open(self.sh0_file, "rb").read()
         d.addCallback(_grab_sh0)
hunk ./src/allmydata/test/test_repairer.py 470
         self.set_up_grid(num_clients=2)
         d = self.upload_and_stash()
 
-        d.addCallback(lambda ignored: self.find_shares(self.uri))
+        d.addCallback(lambda ignored: self.find_uri_shares(self.uri))
         def _stash_shares(oldshares):
             self.oldshares = oldshares
         d.addCallback(_stash_shares)
hunk ./src/allmydata/test/test_repairer.py 474
-        d.addCallback(lambda ignored: self.find_shares(self.uri))
+        d.addCallback(lambda ignored: self.find_uri_shares(self.uri))
         def _compare(newshares):
             self.failUnlessEqual(newshares, self.oldshares)
         d.addCallback(_compare)
hunk ./src/allmydata/test/test_repairer.py 485
             for sh in self.oldshares[1:8]:
                 self.delete_share(sh)
         d.addCallback(_delete_8)
-        d.addCallback(lambda ignored: self.find_shares(self.uri))
+        d.addCallback(lambda ignored: self.find_uri_shares(self.uri))
         d.addCallback(lambda shares: self.failUnlessEqual(len(shares), 2))
 
         d.addCallback(lambda ignored:
hunk ./src/allmydata/test/test_repairer.py 502
         # test share corruption
         def _test_corrupt(ignored):
             olddata = {}
-            shares = self.find_shares(self.uri)
+            shares = self.find_uri_shares(self.uri)
             for (shnum, serverid, sharefile) in shares:
                 olddata[ (shnum, serverid) ] = open(sharefile, "rb").read()
             for sh in shares:
hunk ./src/allmydata/test/test_repairer.py 513
         d.addCallback(_test_corrupt)
 
         def _remove_all(ignored):
-            for sh in self.find_shares(self.uri):
+            for sh in self.find_uri_shares(self.uri):
                 self.delete_share(sh)
         d.addCallback(_remove_all)
hunk ./src/allmydata/test/test_repairer.py 516
-        d.addCallback(lambda ignored: self.find_shares(self.uri))
+        d.addCallback(lambda ignored: self.find_uri_shares(self.uri))
         d.addCallback(lambda shares: self.failUnlessEqual(shares, []))
 
         return d
hunk ./src/allmydata/test/test_repairer.py 547
 
             # Now we inspect the filesystem to make sure that it has 10
             # shares.
-            shares = self.find_shares(self.uri)
+            shares = self.find_uri_shares(self.uri)
             self.failIf(len(shares) < 10)
         d.addCallback(_check_results)
 
hunk ./src/allmydata/test/test_repairer.py 592
             self.failUnless(post.is_healthy(), post.data)
 
             # Make sure we really have 10 shares.
-            shares = self.find_shares(self.uri)
+            shares = self.find_uri_shares(self.uri)
             self.failIf(len(shares) < 10)
         d.addCallback(_check_results)
 
hunk ./src/allmydata/test/test_repairer.py 653
     def OFF_test_repair_from_corruption_of_1(self):
         d = defer.succeed(None)
 
-        d.addCallback(self.find_shares)
+        d.addCallback(self.find_all_shares)
         stash = [None]
         def _stash_it(res):
             stash[0] = res
hunk ./src/allmydata/test/test_repairer.py 688
 
                 # Now we inspect the filesystem to make sure that it has 10
                 # shares.
-                shares = self.find_shares()
+                shares = self.find_all_shares()
                 self.failIf(len(shares) < 10)
 
                 # Now we assert that the verifier reports the file as healthy.
hunk ./src/allmydata/test/test_system.py 386
 
         return d
 
-    def _find_shares(self, basedir):
+    def _find_all_shares(self, basedir):
         shares = []
         for (dirpath, dirnames, filenames) in os.walk(basedir):
             if "storage" not in dirpath:
hunk ./src/allmydata/test/test_system.py 478
         def _test_debug(res):
             # find a share. It is important to run this while there is only
             # one slot in the grid.
-            shares = self._find_shares(self.basedir)
+            shares = self._find_all_shares(self.basedir)
             (client_num, storage_index, filename, shnum) = shares[0]
             log.msg("test_system.SystemTest.test_mutable._test_debug using %s"
                     % filename)
hunk ./src/allmydata/test/test_system.py 581
         def _corrupt_shares(res):
             # run around and flip bits in all but k of the shares, to test
             # the hash checks
-            shares = self._find_shares(self.basedir)
+            shares = self._find_all_shares(self.basedir)
             ## sort by share number
             #shares.sort( lambda a,b: cmp(a[3], b[3]) )
             where = dict([ (shnum, filename)
hunk ./src/allmydata/test/test_upload.py 1
+# -*- coding: utf-8 -*-
+
 import os, shutil
 from cStringIO import StringIO
 from twisted.trial import unittest
hunk ./src/allmydata/test/test_upload.py 689
         d.addCallback(_done)
         return d
 
+# copied from python docs because itertools.combinations was added in
+# python 2.6 and we support >= 2.4.
+def combinations(iterable, r):
+    # combinations('ABCD', 2) --> AB AC AD BC BD CD
+    # combinations(range(4), 3) --> 012 013 023 123
+    pool = tuple(iterable)
+    n = len(pool)
+    if r > n:
+        return
+    indices = range(r)
+    yield tuple(pool[i] for i in indices)
+    while True:
+        for i in reversed(range(r)):
+            if indices[i] != i + n - r:
+                break
+        else:
+            return
+        indices[i] += 1
+        for j in range(i+1, r):
+            indices[j] = indices[j-1] + 1
+        yield tuple(pool[i] for i in indices)
+
+def is_happy_enough(servertoshnums, h, k):
+    """ I calculate whether servertoshnums achieves happiness level h. I do this with a naïve "brute force search" approach. (See src/allmydata/util/happinessutil.py for a better algorithm.) """
+    if len(servertoshnums) < h:
+        return False
+    # print "servertoshnums: ", servertoshnums, h, k
+    for happysetcombo in combinations(servertoshnums.iterkeys(), h):
+        # print "happysetcombo: ", happysetcombo
+        for subsetcombo in combinations(happysetcombo, k):
+            shnums = reduce(set.union, [ servertoshnums[s] for s in subsetcombo ])
+            # print "subsetcombo: ", subsetcombo, ", shnums: ", shnums
+            if len(shnums) < k:
+                # print "NOT HAAPP{Y", shnums, k
+                return False
+    # print "HAAPP{Y"
+    return True
+
 class EncodingParameters(GridTestMixin, unittest.TestCase, SetDEPMixin,
     ShouldFailMixin):
hunk ./src/allmydata/test/test_upload.py 729
+    def find_all_shares(self, unused=None):
+        """Locate shares on disk. Returns a dict that maps
+        server to set of sharenums.
+        """
+        assert self.g, "I tried to find a grid at self.g, but failed"
+        servertoshnums = {} # k: server, v: set(shnum)
+
+        for i, c in self.g.servers_by_number.iteritems():
+            for (dirp, dirns, fns) in os.walk(c.sharedir):
+                for fn in fns:
+                    try:
+                        sharenum = int(fn)
+                    except ValueError:
+                        # Whoops, I guess that's not a share file then.
+                        pass
+                    else:
+                        servertoshnums.setdefault(i, set()).add(sharenum)
+
+        return servertoshnums
+
     def _do_upload_with_broken_servers(self, servers_to_break):
         """
         I act like a normal upload, but before I send the results of
hunk ./src/allmydata/test/test_upload.py 792
         d.addCallback(_have_shareholders)
         return d
 
+    def _has_happy_share_distribution(self):
+        servertoshnums = self.find_all_shares()
+        k = self.g.clients[0].DEFAULT_ENCODING_PARAMETERS['k']
+        h = self.g.clients[0].DEFAULT_ENCODING_PARAMETERS['happy']
+        return is_happy_enough(servertoshnums, h, k)
 
     def _add_server(self, server_number, readonly=False):
         assert self.g, "I tried to find a grid at self.g, but failed"
hunk ./src/allmydata/test/test_upload.py 828
                                           str(share_number))
         if old_share_location != new_share_location:
             shutil.copy(old_share_location, new_share_location)
-        shares = self.find_shares(self.uri)
+        shares = self.find_uri_shares(self.uri)
         # Make sure that the storage server has the share.
         self.failUnless((share_number, ss.my_nodeid, new_share_location)
                         in shares)
hunk ./src/allmydata/test/test_upload.py 858
             self.uri = ur.uri
         d.addCallback(_store_uri)
         d.addCallback(lambda ign:
-            self.find_shares(self.uri))
+            self.find_uri_shares(self.uri))
         def _store_shares(shares):
             self.shares = shares
         d.addCallback(_store_shares)
hunk ./src/allmydata/test/test_upload.py 944
         d.addCallback(lambda ign: self._add_server(4, False))
         # and this time the upload ought to succeed
         d.addCallback(lambda ign: c.upload(DATA))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
         return d
 
 
hunk ./src/allmydata/test/test_upload.py 1082
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
 
 
         # This scenario is basically comment:53, but changed so that the
hunk ./src/allmydata/test/test_upload.py 1122
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
 
 
         # Try the same thing, but with empty servers after the first one
hunk ./src/allmydata/test/test_upload.py 1155
         # servers of happiness were pushed.
         d.addCallback(lambda results:
             self.failUnlessEqual(results.pushed_shares, 3))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
         return d
 
     def test_problem_layout_ticket1124(self):
hunk ./src/allmydata/test/test_upload.py 1182
         d.addCallback(_setup)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
         return d
     test_problem_layout_ticket1124.todo = "Fix this after 1.7.1 release."
 
hunk ./src/allmydata/test/test_upload.py 1221
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
         return d
 
 
hunk ./src/allmydata/test/test_upload.py 1260
         d.addCallback(_reset_encoding_parameters)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
         return d
 
 
hunk ./src/allmydata/test/test_upload.py 1571
         d.addCallback(_prepare_client)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
         return d
 
 
hunk ./src/allmydata/test/test_upload.py 1867
         d.addCallback(_setup)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
         return d
     test_problem_layout_comment_187.todo = "this isn't fixed yet"
 
hunk ./src/allmydata/test/test_upload.py 1902
         d.addCallback(_setup)
         d.addCallback(lambda client:
                           client.upload(upload.Data("data" * 10000, convergence="")))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
         return d
 
     def test_upload_succeeds_with_some_homeless_shares(self):
hunk ./src/allmydata/test/test_upload.py 1940
         d.addCallback(_server_setup)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
         return d
 
 
hunk ./src/allmydata/test/test_upload.py 1969
         d.addCallback(_server_setup)
         d.addCallback(lambda client:
             client.upload(upload.Data("data" * 10000, convergence="")))
+        d.addCallback(lambda ign:
+            self.failUnless(self._has_happy_share_distribution()))
         return d
 
 
hunk ./src/allmydata/test/test_web.py 3198
         d.addCallback(_compute_fileurls)
 
         def _clobber_shares(ignored):
-            good_shares = self.find_shares(self.uris["good"])
+            good_shares = self.find_uri_shares(self.uris["good"])
             self.failUnlessReallyEqual(len(good_shares), 10)
hunk ./src/allmydata/test/test_web.py 3200
-            sick_shares = self.find_shares(self.uris["sick"])
+            sick_shares = self.find_uri_shares(self.uris["sick"])
             os.unlink(sick_shares[0][2])
hunk ./src/allmydata/test/test_web.py 3202
-            dead_shares = self.find_shares(self.uris["dead"])
+            dead_shares = self.find_uri_shares(self.uris["dead"])
             for i in range(1, 10):
                 os.unlink(dead_shares[i][2])
hunk ./src/allmydata/test/test_web.py 3205
-            c_shares = self.find_shares(self.uris["corrupt"])
+            c_shares = self.find_uri_shares(self.uris["corrupt"])
             cso = CorruptShareOptions()
             cso.stdout = StringIO()
             cso.parseOptions([c_shares[0][2]])
hunk ./src/allmydata/test/test_web.py 3339
         d.addCallback(_compute_fileurls)
 
         def _clobber_shares(ignored):
-            good_shares = self.find_shares(self.uris["good"])
+            good_shares = self.find_uri_shares(self.uris["good"])
             self.failUnlessReallyEqual(len(good_shares), 10)
hunk ./src/allmydata/test/test_web.py 3341
-            sick_shares = self.find_shares(self.uris["sick"])
+            sick_shares = self.find_uri_shares(self.uris["sick"])
             os.unlink(sick_shares[0][2])
hunk ./src/allmydata/test/test_web.py 3343
-            dead_shares = self.find_shares(self.uris["dead"])
+            dead_shares = self.find_uri_shares(self.uris["dead"])
             for i in range(1, 10):
                 os.unlink(dead_shares[i][2])
hunk ./src/allmydata/test/test_web.py 3346
-            c_shares = self.find_shares(self.uris["corrupt"])
+            c_shares = self.find_uri_shares(self.uris["corrupt"])
             cso = CorruptShareOptions()
             cso.stdout = StringIO()
             cso.parseOptions([c_shares[0][2]])
hunk ./src/allmydata/test/test_web.py 3407
         d.addCallback(_compute_fileurls)
 
         def _clobber_shares(ignored):
-            sick_shares = self.find_shares(self.uris["sick"])
+            sick_shares = self.find_uri_shares(self.uris["sick"])
             os.unlink(sick_shares[0][2])
         d.addCallback(_clobber_shares)
 
hunk ./src/allmydata/test/test_web.py 3897
         #d.addCallback(_stash_uri, "corrupt")
 
         def _clobber_shares(ignored):
-            good_shares = self.find_shares(self.uris["good"])
+            good_shares = self.find_uri_shares(self.uris["good"])
             self.failUnlessReallyEqual(len(good_shares), 10)
hunk ./src/allmydata/test/test_web.py 3899
-            sick_shares = self.find_shares(self.uris["sick"])
+            sick_shares = self.find_uri_shares(self.uris["sick"])
             os.unlink(sick_shares[0][2])
hunk ./src/allmydata/test/test_web.py 3901
-            #dead_shares = self.find_shares(self.uris["dead"])
+            #dead_shares = self.find_uri_shares(self.uris["dead"])
             #for i in range(1, 10):
             #    os.unlink(dead_shares[i][2])
 
hunk ./src/allmydata/test/test_web.py 3905
-            #c_shares = self.find_shares(self.uris["corrupt"])
+            #c_shares = self.find_uri_shares(self.uris["corrupt"])
             #cso = CorruptShareOptions()
             #cso.stdout = StringIO()
             #cso.parseOptions([c_shares[0][2]])
hunk ./src/allmydata/test/test_web.py 3961
 
     def _count_leases(self, ignored, which):
         u = self.uris[which]
-        shares = self.find_shares(u)
+        shares = self.find_uri_shares(u)
         lease_counts = []
         for shnum, serverid, fn in shares:
             sf = get_share_file(fn)
}
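
For a sense of what the new brute-force happiness check accepts and rejects, here is a small usage sketch of the is_happy_enough() helper added above. The servertoshnums data is made up for illustration and is not taken from any test.

servertoshnums = {0: set([0]), 1: set([0, 3]), 2: set([2]), 3: set([1])}
# True: any 2 of these 4 servers together hold at least 2 distinct shares,
# so happiness level 4 is achieved with k=2
print is_happy_enough(servertoshnums, h=4, k=2)
# False: fewer than h=4 servers hold any shares at all
print is_happy_enough({0: set([0, 1, 2, 3])}, h=4, k=2)
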
[upload.py: fix #1118 by aborting newly-homeless buckets when reassignment runs. This makes a previously failing assert correct. This version refactors 'abort' into two methods, rather than using a default argument.
david-sarah@jacaranda.org**20100719044655
 Ignore-this: 142d182c0739986812140bb8387077d5
] {
hunk ./src/allmydata/immutable/upload.py 140
 
     def abort(self):
         """
-        I abort the remote bucket writers for the share numbers in
-        sharenums. This is a good idea to conserve space on the storage
-        server.
+        I abort the remote bucket writers for all shares. This is a good idea
+        to conserve space on the storage server.
         """
hunk ./src/allmydata/immutable/upload.py 143
-        for writer in self.buckets.itervalues(): writer.abort()
+        self.abort_some_buckets(self.buckets.keys())
+
+    def abort_some_buckets(self, sharenums):
+        """
+        I abort the remote bucket writers for the share numbers in sharenums.
+        """
+        for sharenum in sharenums:
+            if sharenum in self.buckets:
+                self.buckets[sharenum].abort()
+                del self.buckets[sharenum]
 
 
 class Tahoe2PeerSelector:
hunk ./src/allmydata/immutable/upload.py 367
                             if not self.preexisting_shares[share]:
                                 del self.preexisting_shares[share]
                             items.append((server, sharelist))
+                        for writer in self.use_peers:
+                            writer.abort_some_buckets(self.homeless_shares)
                     return self._loop()
                 else:
                     # Redistribution won't help us; fail.
hunk ./src/allmydata/immutable/upload.py 377
                                           self.needed_shares,
                                           self.servers_of_happiness,
                                           effective_happiness)
+                    log.msg("server selection unsuccessful for %r: %s (%s), merged=%r"
+                            % (self, msg, self._get_progress_message(), merged), level=log.INFREQUENT)
                     return self._failed("%s (%s)" % (msg, self._get_progress_message()))
 
         if self.uncontacted_peers:
}
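
The abort()/abort_some_buckets() split above is what lets peer selection clean up only the buckets for shares that became homeless during redistribution, while keeping the buckets it still intends to use. A minimal stand-in sketch of the pattern follows; FakeBucketWriter and FakeTracker are invented for illustration and are not Tahoe classes.

class FakeBucketWriter:
    def __init__(self, shnum):
        self.shnum = shnum
        self.aborted = False
    def abort(self):
        self.aborted = True

class FakeTracker:
    def __init__(self, sharenums):
        self.buckets = dict([(s, FakeBucketWriter(s)) for s in sharenums])
    def abort(self):
        # give up on every remaining bucket
        self.abort_some_buckets(self.buckets.keys())
    def abort_some_buckets(self, sharenums):
        # abort only the named buckets, e.g. shares that just became homeless
        for sharenum in sharenums:
            if sharenum in self.buckets:
                self.buckets[sharenum].abort()
                del self.buckets[sharenum]

tracker = FakeTracker([0, 1, 2])
tracker.abort_some_buckets([1])    # share 1 became homeless; only it is aborted
assert sorted(tracker.buckets.keys()) == [0, 2]
tracker.abort()                    # giving up entirely aborts the rest
assert tracker.buckets == {}
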
[immutable: use PrefixingLogMixin to organize logging in Tahoe2PeerSelector and add more detailed messages about peer
zooko@zooko.com**20100719065629
 Ignore-this: dce3e763dc628abf1604d1bfb9bdc829
] {
hunk ./src/allmydata/immutable/upload.py 77
 # TODO: actual extensions are closer to 419 bytes, so we can probably lower
 # this.
 
+def pretty_print_shnum_to_servers(s):
+    return ', '.join([ "sh%s: %s" % (k, '+'.join([idlib.shortnodeid_b2a(x) for x in v])) for k, v in s.iteritems() ])
+
 class PeerTracker:
     def __init__(self, peerid, storage_server,
                  sharesize, blocksize, num_segments, num_share_hashes,
hunk ./src/allmydata/immutable/upload.py 158
                 del self.buckets[sharenum]
 
 
-class Tahoe2PeerSelector:
+class Tahoe2PeerSelector(log.PrefixingLogMixin):
 
     def __init__(self, upload_id, logparent=None, upload_status=None):
         self.upload_id = upload_id
hunk ./src/allmydata/immutable/upload.py 169
         self.num_peers_contacted = 0
         self.last_failure_msg = None
         self._status = IUploadStatus(upload_status)
-        self._log_parent = log.msg("%s starting" % self, parent=logparent)
+        log.PrefixingLogMixin.__init__(self, 'tahoe.immutable.upload', logparent, prefix=upload_id)
+        self.log("starting", level=log.OPERATIONAL)
 
     def __repr__(self):
         return "<Tahoe2PeerSelector for upload %s>" % self.upload_id
hunk ./src/allmydata/immutable/upload.py 275
             ds.append(d)
             self.num_peers_contacted += 1
             self.query_count += 1
-            log.msg("asking peer %s for any existing shares for "
-                    "upload id %s"
-                    % (idlib.shortnodeid_b2a(peer.peerid), self.upload_id),
-                    level=log.NOISY, parent=self._log_parent)
+            self.log("asking peer %s for any existing shares" %
+                     (idlib.shortnodeid_b2a(peer.peerid),),
+                    level=log.NOISY)
         dl = defer.DeferredList(ds)
         dl.addCallback(lambda ign: self._loop())
         return dl
hunk ./src/allmydata/immutable/upload.py 289
         Tahoe2PeerSelector._existing_shares.
         """
         if isinstance(res, failure.Failure):
-            log.msg("%s got error during existing shares check: %s"
+            self.log("%s got error during existing shares check: %s"
                     % (idlib.shortnodeid_b2a(peer), res),
hunk ./src/allmydata/immutable/upload.py 291
-                    level=log.UNUSUAL, parent=self._log_parent)
+                    level=log.UNUSUAL)
             self.error_count += 1
             self.bad_query_count += 1
         else:
hunk ./src/allmydata/immutable/upload.py 298
             buckets = res
             if buckets:
                 self.peers_with_shares.add(peer)
-            log.msg("response from peer %s: alreadygot=%s"
+            self.log("response to get_buckets() from peer %s: alreadygot=%s"
                     % (idlib.shortnodeid_b2a(peer), tuple(sorted(buckets))),
hunk ./src/allmydata/immutable/upload.py 300
-                    level=log.NOISY, parent=self._log_parent)
+                    level=log.NOISY)
             for bucket in buckets:
                 self.preexisting_shares.setdefault(bucket, set()).add(peer)
                 if self.homeless_shares and bucket in self.homeless_shares:
hunk ./src/allmydata/immutable/upload.py 334
             merged = merge_peers(self.preexisting_shares, self.use_peers)
             effective_happiness = servers_of_happiness(merged)
             if self.servers_of_happiness <= effective_happiness:
-                msg = ("peer selection successful for %s: %s" % (self,
-                            self._get_progress_message()))
-                log.msg(msg, parent=self._log_parent)
+                msg = ("server selection successful for %s: %s: %s" % (self,
+                            self._get_progress_message(), pretty_print_shnum_to_servers(merged)))
+                self.log(msg, level=log.OPERATIONAL)
                 return (self.use_peers, self.preexisting_shares)
             else:
                 # We're not okay right now, but maybe we can fix it by
hunk ./src/allmydata/immutable/upload.py 380
                                           self.needed_shares,
                                           self.servers_of_happiness,
                                           effective_happiness)
-                    log.msg("server selection unsuccessful for %r: %s (%s), merged=%r"
-                            % (self, msg, self._get_progress_message(), merged), level=log.INFREQUENT)
+                    self.log("server selection unsuccessful for %r: %s (%s), merged=%s" % (self, msg, self._get_progress_message(), pretty_print_shnum_to_servers(merged)), level=log.INFREQUENT)
                     return self._failed("%s (%s)" % (msg, self._get_progress_message()))
 
         if self.uncontacted_peers:
hunk ./src/allmydata/immutable/upload.py 403
         elif self.contacted_peers:
             # ask a peer that we've already asked.
             if not self._started_second_pass:
-                log.msg("starting second pass", parent=self._log_parent,
+                self.log("starting second pass",
                         level=log.NOISY)
                 self._started_second_pass = True
             num_shares = mathutil.div_ceil(len(self.homeless_shares),
hunk ./src/allmydata/immutable/upload.py 441
                                 self._get_progress_message()))
                 if self.last_failure_msg:
                     msg += " (%s)" % (self.last_failure_msg,)
-                log.msg(msg, level=log.UNUSUAL, parent=self._log_parent)
+                self.log(msg, level=log.UNUSUAL)
                 return self._failed(msg)
             else:
                 # we placed enough to be happy, so we're done
hunk ./src/allmydata/immutable/upload.py 447
                 if self._status:
                     self._status.set_status("Placed all shares")
+                msg = ("server selection successful (no more servers) for %s: %s: %s" % (self,
+                            self._get_progress_message(), pretty_print_shnum_to_servers(merged)))
+                self.log(msg, level=log.OPERATIONAL)
                 return (self.use_peers, self.preexisting_shares)
 
     def _got_response(self, res, peer, shares_to_ask, put_peer_here):
hunk ./src/allmydata/immutable/upload.py 456
         if isinstance(res, failure.Failure):
             # This is unusual, and probably indicates a bug or a network
             # problem.
-            log.msg("%s got error during peer selection: %s" % (peer, res),
-                    level=log.UNUSUAL, parent=self._log_parent)
+            self.log("%s got error during peer selection: %s" % (peer, res),
+                    level=log.UNUSUAL)
             self.error_count += 1
             self.bad_query_count += 1
             self.homeless_shares = list(shares_to_ask) + self.homeless_shares
hunk ./src/allmydata/immutable/upload.py 476
                 self.last_failure_msg = msg
         else:
             (alreadygot, allocated) = res
-            log.msg("response from peer %s: alreadygot=%s, allocated=%s"
+            self.log("response to allocate_buckets() from peer %s: alreadygot=%s, allocated=%s"
                     % (idlib.shortnodeid_b2a(peer.peerid),
                        tuple(sorted(alreadygot)), tuple(sorted(allocated))),
hunk ./src/allmydata/immutable/upload.py 479
-                    level=log.NOISY, parent=self._log_parent)
+                    level=log.NOISY)
             progress = False
             for s in alreadygot:
                 self.preexisting_shares.setdefault(s, set()).add(peer.peerid)
hunk ./src/allmydata/immutable/upload.py 922
         @param already_peers: a dict mapping sharenum to a set of peerids
                               that claim to already have this share
         """
-        self.log("_send_shares, used_peers is %s" % (used_peers,))
+        self.log("set_shareholders; used_peers is %s, already_peers is %s" % ([p.buckets for p in used_peers], already_peers))
         # record already-present shares in self._results
         self._results.preexisting_shares = len(already_peers)
 
hunk ./src/allmydata/immutable/upload.py 936
             for shnum in peer.buckets:
                 self._peer_trackers[shnum] = peer
                 servermap.setdefault(shnum, set()).add(peer.peerid)
+        self.log("set_shareholders; %s (%s) == %s (%s)" % (len(buckets), buckets, sum([len(peer.buckets) for peer in used_peers]), [(p.buckets, p.peerid) for p in used_peers]))
         assert len(buckets) == sum([len(peer.buckets) for peer in used_peers]), "%s (%s) != %s (%s)" % (len(buckets), buckets, sum([len(peer.buckets) for peer in used_peers]), [(p.buckets, p.peerid) for p in used_peers])
         encoder.set_shareholders(buckets, servermap)
 
hunk ./src/allmydata/storage/server.py 8
 
 from zope.interface import implements
 from allmydata.interfaces import RIStorageServer, IStatsProducer
-from allmydata.util import fileutil, log, time_format
+from allmydata.util import fileutil, idlib, log, time_format
 import allmydata # for __full_version__
 
 from allmydata.storage.common import si_b2a, si_a2b, storage_index_to_dir
hunk ./src/allmydata/storage/server.py 109
                                    expiration_sharetypes)
         self.lease_checker.setServiceParent(self)
 
+    def __repr__(self):
+        return "<StorageServer %s>" % (idlib.shortnodeid_b2a(self.my_nodeid),)
+
     def add_bucket_counter(self):
         statefile = os.path.join(self.storedir, "bucket_counter.state")
         self.bucket_counter = BucketCountingCrawler(self, statefile)
hunk ./src/allmydata/test/test_upload.py 14
 from allmydata import uri, monitor, client
 from allmydata.immutable import upload, encode
 from allmydata.interfaces import FileTooLargeError, UploadUnhappinessError
+from allmydata.util import log
 from allmydata.util.assertutil import precondition
 from allmydata.util.deferredutil import DeferredListShouldSucceed
 from allmydata.test.no_network import GridTestMixin
hunk ./src/allmydata/test/test_upload.py 714
 
 def is_happy_enough(servertoshnums, h, k):
     """ I calculate whether servertoshnums achieves happiness level h. I do this with a naïve "brute force search" approach. (See src/allmydata/util/happinessutil.py for a better algorithm.) """
+    print "servertoshnums: ", servertoshnums, "h: ", h, "k: ", k
     if len(servertoshnums) < h:
         return False
     # print "servertoshnums: ", servertoshnums, h, k
hunk ./src/allmydata/test/test_upload.py 803
     def _add_server(self, server_number, readonly=False):
         assert self.g, "I tried to find a grid at self.g, but failed"
         ss = self.g.make_server(server_number, readonly)
+        log.msg("just created a server, number: %s => %s" % (server_number, ss,))
         self.g.add_server(server_number, ss)
 
hunk ./src/allmydata/test/test_upload.py 806
-
     def _add_server_with_share(self, server_number, share_number=None,
                                readonly=False):
         self._add_server(server_number, readonly)
hunk ./src/allmydata/test/test_upload.py 866
         d.addCallback(_store_shares)
         return d
 
-
     def test_configure_parameters(self):
         self.basedir = self.mktemp()
         hooks = {0: self._set_up_nodes_extra_config}
}
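
With PrefixingLogMixin, every message from a Tahoe2PeerSelector instance is tagged with its facility and upload id automatically, instead of threading parent=self._log_parent through each log.msg() call. A rough sketch of the idea follows; PrefixingLogMixinSketch and PeerSelectorSketch are invented stand-ins, not the real allmydata/foolscap classes.

class PrefixingLogMixinSketch:
    def __init__(self, facility, prefix):
        self._facility = facility
        self._prefix = prefix
    def log(self, msg, level=None):
        # the real mixin hands the message to the foolscap log with this
        # prefix attached; here we just print it
        print "%s(%s): %s" % (self._facility, self._prefix, msg)

class PeerSelectorSketch(PrefixingLogMixinSketch):
    def __init__(self, upload_id):
        PrefixingLogMixinSketch.__init__(self, "tahoe.immutable.upload",
                                         upload_id)
        self.log("starting")

s = PeerSelectorSketch("dglev")    # "dglev" stands in for a real upload id
s.log("asking peer xyzzy for any existing shares")
# prints:
#   tahoe.immutable.upload(dglev): starting
#   tahoe.immutable.upload(dglev): asking peer xyzzy for any existing shares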

Context:

[docs/logging.txt: document that _trial_temp/test.log does not receive messages below level=OPERATIONAL, due to <http://foolscap.lothar.com/trac/ticket/154>.
david-sarah@jacaranda.org**20100718230420
 Ignore-this: aef40f2e74ddeabee5e122e8d80893a1
] 
[immutable: test for #1124
zooko@zooko.com**20100718222907
 Ignore-this: 1766e3cbab92ea2a9e246f40eb6e770b
] 
[trivial: fix unused import (sorry about that, pyflakes)
zooko@zooko.com**20100718215133
 Ignore-this: c2414e443405072b51d552295f2c0e8c
] 
[tests, NEWS, CREDITS re: #1117
zooko@zooko.com**20100718203225
 Ignore-this: 1f08be2c692fb72cc0dd023259f11354
 Give Brian and Kevan promotions, move release date in NEWS to the 18th, commit Brian's test for #1117.
 fixes #1117
] 
[test/test_upload.py: test to see that aborted buckets are ignored by the storage server
Kevan Carstensen <kevan@isnotajoke.com>**20100716001046
 Ignore-this: cc075c24b1c86d737f3199af894cc780
] 
[test/test_storage.py: test for the new remote_abort semantics.
Kevan Carstensen <kevan@isnotajoke.com>**20100715232148
 Ignore-this: d3d6491f17bf670e770ca4b385007515
] 
[storage/immutable.py: make remote_abort tell the storage server about aborted buckets.
Kevan Carstensen <kevan@isnotajoke.com>**20100715232105
 Ignore-this: 16ab0090676355abdd5600ed44ff19c9
] 
[test/test_upload.py: changes to test plumbing for #1117 tests
Kevan Carstensen <kevan@isnotajoke.com>**20100715231820
 Ignore-this: 78a6d359d7bf8529d283e2815bf1e2de
 
     - Add a callRemoteOnly method to FakeBucketWriter.
     - Change the abort method in FakeBucketWriter to not return a
       RuntimeError.
] 
[immutable/upload.py: abort buckets if peer selection fails
Kevan Carstensen <kevan@isnotajoke.com>**20100715231714
 Ignore-this: 2a0b643a22284df292d8ed9d91b1fd37
] 
[test_encodingutil: correct an error in the previous patch to StdlibUnicode.test_open_representable.
david-sarah@jacaranda.org**20100718151420
 Ignore-this: af050955f623fbc0e4d78e15a0a8a144
] 
[NEWS: Forward-compatibility improvements for non-ASCII caps (#1051).
david-sarah@jacaranda.org**20100718143622
 Ignore-this: 1edfebc4bd38a3b5c35e75c99588153f
] 
[test_dirnode and test_web: don't use failUnlessReallyEqual in cases where the return type from simplejson.loads can vary between unicode and str. Use to_str when comparing URIs parsed from JSON.
david-sarah@jacaranda.org**20100718142915
 Ignore-this: c4e78ef4b1478dd400da71cf077ffa4a
] 
[test_encodingutil: StdlibUnicode.test_open_representable no longer uses a mock.
david-sarah@jacaranda.org**20100718125412
 Ignore-this: 4bf373a5e2dfe4209e5e364124af29a3
] 
[docs: add comment clarifying #1051
zooko@zooko.com**20100718053250
 Ignore-this: 6cfc0930434cbdbbc262dabb58f1505d
] 
[docs: update NEWS
zooko@zooko.com**20100718053225
 Ignore-this: 63d5c782ef84812e6d010f0590866831
] 
[Add tests of caps from the future that have non-ASCII characters in them (encoded as UTF-8). The changes to test_uri.py, test_client.py, and test_dirnode.py add tests of non-ASCII future caps in addition to the current tests. The changes to test_web.py just replace the tests of all-ASCII future caps with tests of non-ASCII future caps. We also change uses of failUnlessEqual to failUnlessReallyEqual, in order to catch cases where the type of a string is not as expected.
david-sarah@jacaranda.org**20100711200252
 Ignore-this: c2f193352369d32e06865f8f3e951894
] 
[Debian documentation update
jacob@appelbaum.net**20100305003004] 
[debian-docs-patch-final
jacob@appelbaum.net**20100304085955] 
[M-x whitespace-cleanup
zooko@zooko.com**20100718032739
 Ignore-this: babfd4af6ad2fc885c957fd5c8b10c3f
] 
[docs: tidy up NEWS a little
zooko@zooko.com**20100718032434
 Ignore-this: 54f2820fd1a37c8967609f6bfc4e5e18
] 
[benchmarking: update bench_dirnode.py to reflect the new directory interfaces
zooko@zooko.com**20100718031710
 Ignore-this: 368ba523dd3de80d9da29cd58afbe827
] 
[test_encodingutil: fix test_open_representable, which is only valid when run on a platform for which we know an unrepresentable filename.
david-sarah@jacaranda.org**20100718030333
 Ignore-this: c114d92c17714a5d4ae005c15267d60c
] 
[iputil.py: Add support for FreeBSD 7,8 and 9
francois@ctrlaltdel.ch**20100718022832
 Ignore-this: 1829b4cf4b91107f4cf87841e6167e99
 committed by: zooko@zooko.com
 date: 2010-07-17
 and I also patched: NEWS and CREDITS
] 
[NEWS: add snippet about #1083
zooko@zooko.com**20100718020653
 Ignore-this: d353a9d93cbc5a5e6ba4671f78d1e22b
] 
[fileutil: docstrings for non-obvious usage restrictions on methods of EncryptedTemporaryFile.
david-sarah@jacaranda.org**20100717054647
 Ignore-this: 46d8fc10782fa8ec2b6c5b168c841943
] 
[Move EncryptedTemporaryFile from SFTP frontend to allmydata.util.fileutil, and make the FTP frontend also use it (fixing #1083).
david-sarah@jacaranda.org**20100711213721
 Ignore-this: e452e8ca66391aa2a1a49afe0114f317
] 
[NEWS: reorder NEWS snippets to be in descending order of interestingness
zooko@zooko.com**20100718015929
 Ignore-this: 146c42e88a9555a868a04a69dd0e5326
] 
[Correct stringutils->encodingutil patch to be the newer version, rather than the old version that was committed in error.
david-sarah@jacaranda.org**20100718013435
 Ignore-this: c8940c4e1aa2e9acc80cd4fe54753cd8
] 
[test_cli.py: fix error that crept in when rebasing the patch for #1072.
david-sarah@jacaranda.org**20100718000123
 Ignore-this: 3e8f6cc3a27b747c708221dd581934f4
] 
[stringutils: add test for when sys.stdout has no encoding attribute (fixes #1099).
david-sarah@jacaranda.org**20100717045816
 Ignore-this: f28dce6940e909f12f354086d17db54f
] 
[CLI: add 'tahoe unlink' as an alias to 'tahoe rm', for forward-compatibility.
david-sarah@jacaranda.org**20100717220411
 Ignore-this: 3ecdde7f2d0498514cef32e118e0b855
] 
[minor code clean-up in dirnode.py
zooko@zooko.com**20100714060255
 Ignore-this: bb0ab2783203e605024b3e2f798256a1
 Impose micro-POLA by passing only the writekey instead of the whole node object to {{{_encrypt_rw_uri()}}}. Remove DummyImmutableFileNode in nodemaker.py, which is obviated by this. Add micro-optimization by precomputing the netstring of the empty string and branching on whether the writekey is present or not outside of {{{_encrypt_rw_uri()}}}. Add doc about writekey to docstring.
 fixes #967
] 
[Rename stringutils to encodingutil, and drop listdir_unicode and open_unicode (since the Python stdlib functions work fine with Unicode paths). Also move some utility functions to fileutil.
david-sarah@jacaranda.org**20100712003015
 Ignore-this: 103b809d180df17a7283077c3104c7be
] 
[Allow URIs passed in the initial JSON for t=mkdir-with-children, t=mkdir-immutable to be Unicode. Also pass the name of each child into nodemaker.create_from_cap for error reporting.
david-sarah@jacaranda.org**20100711195525
 Ignore-this: deac32d8b91ba26ede18905d3f7d2b93
] 
[docs: CREDITS and NEWS
zooko@zooko.com**20100714060150
 Ignore-this: dc83e612f77d69e50ee975f07f6b16fe
] 
[CREDITS: more creds for Kevan, plus utf-8 BOM
zooko@zooko.com**20100619045503
 Ignore-this: 72d02bdd7a0f324f1cee8cd399c7c6de
] 
[cli.py: make command descriptions consistently end with a full stop.
david-sarah@jacaranda.org**20100714014538
 Ignore-this: 9ee7fa29ca2d1631db4049c2a389a97a
] 
[SFTP: address some of the comments in zooko's review (#1106).
david-sarah@jacaranda.org**20100712025537
 Ignore-this: c3921638a2d4f1de2a776ae78e4dc37e
] 
[docs/logging.txt: note that setting flogging vars might affect tests with race conditions.
david-sarah@jacaranda.org**20100712050721
 Ignore-this: fc1609d215fcd5561a57fd1226206f27
] 
[test_storage.py: potential fix for failures when logging is enabled.
david-sarah@jacaranda.org**19700713040546
 Ignore-this: 5815693a0df3e64c52c3c6b7be2846c7
] 
[upcase_since_on_welcome
terrellrussell@gmail.com**20100708193903] 
[server_version_on_welcome_page.dpatch.txt
freestorm77@gmail.com**20100605191721
 Ignore-this: b450c76dc875f5ac8cca229a666cbd0a
 
 
 - The storage server version is 0 for all storage nodes in the Welcome Page
 
 
] 
[NEWS: add NEWS snippets about two recent patches
zooko@zooko.com**20100708162058
 Ignore-this: 6c9da6a0ad7351a960bdd60f81532899
] 
[directory_html_top_banner.dpatch
freestorm77@gmail.com**20100622205301
 Ignore-this: 1d770d975e0c414c996564774f049bca
 
 The div tag with the link "Return to Welcome page" on the directory.xhtml page is not correct
 
] 
[tahoe_css_toolbar.dpatch
freestorm77@gmail.com**20100622210046
 Ignore-this: 5b3ebb2e0f52bbba718a932f80c246c0
 
 CSS modification to be correctly displayed with Internet Explorer 8
 
 The links at the top of the directory.xhtml page are not displayed on the same line as they are with Firefox.
 
] 
[runnin_test_tahoe_css.dpatch
freestorm77@gmail.com**20100622214714
 Ignore-this: e0db73d68740aad09a7b9ae60a08c05c
 
 Running test for changes in tahoe.css file
 
] 
[runnin_test_directory_xhtml.dpatch
freestorm77@gmail.com**20100622201403
 Ignore-this: f8962463fce50b9466405cb59fe11d43
 
 Running test for directory.xhtml top banner
 
] 
[stringutils.py: tolerate sys.stdout having no 'encoding' attribute.
david-sarah@jacaranda.org**20100626040817
 Ignore-this: f42cad81cef645ee38ac1df4660cc850
] 
[quickstart.html: python 2.5 -> 2.6 as recommended version
david-sarah@jacaranda.org**20100705175858
 Ignore-this: bc3a14645ea1d5435002966ae903199f
] 
[SFTP: don't call .stopProducing on the producer registered with OverwriteableFileConsumer (which breaks with warner's new downloader).
david-sarah@jacaranda.org**20100628231926
 Ignore-this: 131b7a5787bc85a9a356b5740d9d996f
] 
[docs/how_to_make_a_tahoe-lafs_release.txt: trivial correction, install.html should now be quickstart.html.
david-sarah@jacaranda.org**20100625223929
 Ignore-this: 99a5459cac51bd867cc11ad06927ff30
] 
[setup: in the Makefile, refuse to upload tarballs unless someone has passed the environment variable "BB_BRANCH" with value "trunk"
zooko@zooko.com**20100619034928
 Ignore-this: 276ddf9b6ad7ec79e27474862e0f7d6
] 
[trivial: tiny update to in-line comment
zooko@zooko.com**20100614045715
 Ignore-this: 10851b0ed2abfed542c97749e5d280bc
 (I'm actually committing this patch as a test of the new eager-annotation-computation of trac-darcs.)
] 
[docs: about.html link to home page early on, and be decentralized storage instead of cloud storage this time around
zooko@zooko.com**20100619065318
 Ignore-this: dc6db03f696e5b6d2848699e754d8053
] 
[docs: update about.html, especially to have a non-broken link to quickstart.html, and also to comment out the broken links to "for Paranoids" and "for Corporates"
zooko@zooko.com**20100619065124
 Ignore-this: e292c7f51c337a84ebfeb366fbd24d6c
] 
[TAG allmydata-tahoe-1.7.0
zooko@zooko.com**20100619052631
 Ignore-this: d21e27afe6d85e2e3ba6a3292ba2be1
] 
Patch bundle hash:
ea9a6b59d0dd8cf99124dfef11c54425f5dd0ad4