Performance regression for large values of K #1264

Open
opened 2010-11-20 10:02:47 +00:00 by francois · 32 comments
francois commented 2010-11-20 10:02:47 +00:00
Owner

On the grid described [here](http://tahoe-lafs.org/pipermail/tahoe-dev/2010-November/005597.html), the download time of a 10 MB file encoded with 20-of-60 roughly doubles on a client running v1.8.1 compared to the same client running v1.7.1.

**allmydata-tahoe: 1.8.0-r4843**

```
$ for i in $(seq 1 5); do /usr/bin/time -f "%e" tahoe get tahoe:10MB.bin.20-of-60 > /dev/null; done
40.16
44.87
43.57
51.88
58.40
```

**allmydata-tahoe: 1.8.0c4-r4743**

```
$ for i in $(seq 1 5); do /usr/bin/time -f "%e" tahoe get tahoe:10MB.bin.20-of-60 > /dev/null; done
38.29
39.58
55.32
41.03
45.36
```

**allmydata-tahoe: 1.7.0-r4579**

```
$ for i in $(seq 1 5); do /usr/bin/time -f "%e" tahoe get tahoe:10MB.bin.20-of-60 > /dev/null; done
15.43
21.87
26.77
25.60
20.71
repairer:~#
```
tahoe-lafs added the code-network, major, defect, 1.8.0 labels 2010-11-20 10:02:47 +00:00
tahoe-lafs added this to the undecided milestone 2010-11-20 10:02:47 +00:00
francois commented 2010-11-20 11:11:56 +00:00
Author
Owner

Attachment timeline.zip (949761 bytes) added

francois commented 2010-11-20 11:12:08 +00:00
Author
Owner

Attachment 10MB-20-of-60-timeline.pdf (240768 bytes) added

francois commented 2010-11-20 11:12:41 +00:00
Author
Owner

Here are both a PDF and an HTML version of the download status timeline of a 10 MB 20-of-60 encoded file download.

francois commented 2010-11-20 22:32:39 +00:00
Author
Owner

Attachment Tahoe-LAFS - File Download Status.html (747069 bytes) added

francois commented 2010-11-20 23:01:37 +00:00
Author
Owner

Zooko mentioned that this bug might be a duplicate of #1180, so I modified the default number of in-flight DYHB requests from 10 to 60. The result is a slight improvement in download time (about 1 second), so something else is probably going on.

```
diff -rN old-tahoe-lafs/src/allmydata/immutable/downloader/finder.py new-tahoe-lafs/src/allmydata/immutable/downloader/finder.py
31c31
<                  logparent=None, max_outstanding_requests=10):
---
>                  logparent=None, max_outstanding_requests=60):
```

francois commented 2010-11-22 15:00:59 +00:00
Author
Owner

This performance regression most probably appeared when the new-downloader landed in v1.8.0, see ticket #798. In my opinion, this issue shouldn't block release v1.8.1 right now but might possibly be addressed in a potential v1.8.2.

tahoe-lafs modified the milestone from undecided to soon 2010-11-22 15:00:59 +00:00
warner commented 2010-11-22 17:28:10 +00:00
Author
Owner

> In my opinion, this issue shouldn't block release v1.8.1 right now but might possibly be addressed in a potential v1.8.2.

Works for me. I ought to get some analysis time this upcoming (US holiday) weekend. My plan is to point the JS-based visualizer at it, and measure the time spent recomputing the desire/satisfaction bitmaps/Spans, since I've observed a surprising amount of time spent in that portion of the code (it's being called much too frequently, and I think it gets worse with large k).

francois commented 2010-11-23 12:09:01 +00:00
Author
Owner

Replying to warner:

> Works for me. I ought to get some analysis time this upcoming (US holiday) weekend. My plan is to point the JS-based visualizer at it, and measure the time spent recomputing the desire/satisfaction bitmaps/Spans, since I've observed a surprising amount of time spent in that portion of the code (it's being called much too frequently, and I think it gets worse with large k).

Great! I also hope to find some free time this weekend to work on that.

A quick analysis of the attached "Tahoe-LAFS - File Download Status.html" shows that some segment requests were delayed because a single share request was taking too long. The average time for a share request during this download was about 100 ms, but it rose to a bit more than one second in a few cases, such as this one.

```
qnsglseg	21	[6590:+6554]	+0.896405s	+1.985190s	6554	1.09s
```

I think that those share requests taking too long are probably due to the overall load on the grid and are probably difficult to avoid. However, it seems that each segment request is currently serialized (#1187). That means that a single slow share request will prevent the next segment download from starting, which will slow down the overall file download. The bigger k is, the more shares the client has to download, and therefore the higher the probability of hitting a slow share request.

Now, I don't really understand if and why this issue was not present in v1.7.1, because it seems that segment pipelining wasn't implemented in the previous downloader either.
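
To make the k-dependence concrete (my addition, not part of the original comment), here is a minimal sketch, assuming each of the k share requests for a segment independently has some small probability p of being slow, of how the chance that a whole segment stalls grows with k:

```python
# Hypothetical illustration: if each of the k share requests needed for a
# segment has an independent probability p of being "slow", the probability
# that at least one slow request holds up the whole segment is 1 - (1 - p)^k.
def p_segment_stalled(k, p=0.01):
    return 1 - (1 - p) ** k

for k in (3, 20, 60):
    print("k=%2d: P(segment stalled) ~ %.1f%%" % (k, 100 * p_segment_stalled(k)))
# k= 3 ~  3.0%, k=20 ~ 18.2%, k=60 ~ 45.3% (with the assumed p = 1%)
```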

zooko commented 2010-11-23 14:48:54 +00:00
Author
Owner

Replying to francois (comment:5):

> A quick analysis of the attached "Tahoe-LAFS - File Download Status.html" shows that some segment requests were delayed because a single share request was taking too long. The average time for a share request during this download was about 100 ms, but it rose to a bit more than one second in a few cases, such as this one.

> ```
> qnsglseg	21	[6590:+6554]	+0.896405s	+1.985190s	6554	1.09s
> ```

> I think that those share requests taking too long are probably due to the overall load on the grid and are probably difficult to avoid.

I don't understand why a few of the share requests would take ten times as long as normal. Is the delay on the client, the server, or the network? Brian hypothesized that it had something to do with how the spans data structure gets used more when K is higher. Maybe running the client (gateway, downloader) under a Python profiler would give us a clue. (David-Sarah has pointed out that tahoe has support for that! comment:2:ticket:1267.)

You could also try running oprofile to get information about what the CPU is doing at the native-code (x86/amd64) level. You could try either or both of these on the client and on the servers.

But if the slowdown is due to waiting for disk or for the network rather than spinning the CPU, these tools won't show it. Also, if you get unlucky and don't encounter any of these 1-second-long pauses while you are profiling, or if you encounter so few of them that the functions responsible for the delay don't accumulate much CPU time over the whole profiling run, they could evade detection.

On the gripping hand, a good profiling result is almost always interesting. It either tells us about some functions that are wasting CPU cycles or it tells us that there are no functions that waste CPU cycles (under this load). So I would encourage you to try it.

> Now, I don't really understand if and why this issue was not present in v1.7.1, because it seems that segment pipelining wasn't implemented in the previous downloader either.

Yes, this is definitely one of the mysteries. What changed between v1.7.1 and v1.8? The addition of the spans data structure is one thing...

Oh! This would probably help:

http://tahoe-lafs.org/trac/tahoe-lafs/attachment/ticket/1170/debuggery-trace-spans.dpatch.txt

That patch adds logging of every time Share._received gets touched, which is one of the most commonly-used spans objects.

This tool reads logs like that and benchmarks how fast a given spans implementation can handle such usage: source:trunk/misc/simulators/bench_spans.py

Applying that patch to your downloader could tell us whether the Share._received object is getting used a lot, and then running bench_spans.py on the resulting log could tell us whether the current implementation of spans has some inefficient algorithm.
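
As an aside (my addition), inspecting a saved cProfile stats file is straightforward with the standard pstats module; the later comments on this ticket do essentially this. The filename tahoe.profile is just an assumption here.

```python
import pstats

# Load a stats file saved by e.g. `twistd --profiler=cProfile --savestats
# --profile=tahoe.profile ...` (filename assumed for illustration).
p = pstats.Stats('tahoe.profile')

# Top functions by time spent in the function itself, then by cumulative time.
p.sort_stats('time').print_stats(10)
p.sort_stats('cumulative').print_stats(10)

# Who calls a suspect function, and what it calls in turn.
p.print_callers('_send_requests')
p.strip_dirs().print_callees('_send_requests')
```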

francois commented 2010-11-23 21:54:07 +00:00
Author
Owner

As proposed, I ran the client under the cProfile Python profiler by calling `twistd` directly with the following command; the `-n` option is necessary for the profiler output to be written to a file.

```
twistd -y tahoe-client.tac --profiler=cProfile --savestats --profile=/tmp/tahoe.profile --debug -n
```

You'll find the profile generated by downloading a 20-of-60 10 MB file and then stopping the client. I'll hopefully address the remaining points tomorrow.

francois commented 2010-11-23 21:54:24 +00:00
Author
Owner

Attachment tahoe.profile (535720 bytes) added

warner commented 2010-11-23 22:25:47 +00:00
Author
Owner

zooko said:

> I don't understand why a few of the share requests would take ten times as
> long as normal. Is the delay on the client, the server, or the network?
> Brian hypothesized that it had something to do with how the spans data
> structure gets used more when K is higher.

Actually my hypothesis is that the time spent between receipt of the block
request and the transmission of the next block request will be higher for
larger values of k. This time would not be "charged" against the individual
server: it occurs after the clock has been stopped.

[This picture](http://tahoe-lafs.org/trac/tahoe-lafs/attachment/ticket/1170/180c2-viz-delays.png)
(and the other pictures on #1170) shows part of this: it's the delay between
the receipt of the last block() request (e.g. at 2.60s) and the transmission
of the next request (at around 2.67s).

[Zooming in](http://tahoe-lafs.org/trac/tahoe-lafs/attachment/ticket/1170/viz-3.png)
shows what's happening in that gap. If you could read the "desire" words on
the small boxes (which of course is easier when you can interactively zoom
the chart, rather than looking at a screenshot), you could see that
recomputing the desire/satisfied bitmaps is being done three times, but the
second and third passes are redundant (no new responses have arrived, so the
computed bitmap is exactly the same, so that's just wasted time).

If I remember right, the downloader was doing one redundant pass for each
hash received, and we fetch more hashes for some segments than others
(peaking for NUMSEGS/2), resulting in an uneven distribution of extra delays.
My comments in http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1170#comment:92
suggest that it's not enough to explain the delays seen here (8% on a 12MB
file on a one-CPU testnet), but it would be worse with more segments.

zooko commented 2010-11-24 07:42:34 +00:00
Author
Owner

> You'll find the profile generated by downloading a 20-of-60 10 MB file and then stopping the client. I'll hopefully address the remaining points tomorrow.

Okay, I did a quick dump of the stats from tahoe.profile, first sorted by total time spent in each function excluding time spent waiting for the functions it called to return ("tottime"), and then sorted by total time spent in each function including time spent waiting for the functions it called to return ("cumtime").

```
>>> p.sort_stats('time').print_stats(8)
Tue Nov 23 14:36:37 2010    tahoe.profile

         21516265 function calls (21475767 primitive calls) in 240.371 CPU seconds

   Ordered by: internal time
   List reduced from 1491 to 8 due to restriction <8>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1406  147.383    0.105  147.492    0.105 {select.select}
  1370101   21.221    0.000   21.221    0.000 {posix.stat}
    50882   14.011    0.000   51.913    0.001 traceback.py:273(extract_stack)
  1370097    7.485    0.000   28.704    0.000 linecache.py:43(checkcache)
    50882    7.292    0.000    8.031    0.000 traceback.py:27(format_list)
  1370097    3.531    0.000    7.492    0.000 linecache.py:13(getline)
     2031    2.392    0.001    2.392    0.001 {built-in method send}
  1370097    1.749    0.000    3.435    0.000 linecache.py:33(getlines)

         21516265 function calls (21475767 primitive calls) in 240.371 CPU seconds

   Ordered by: cumulative time
   List reduced from 1491 to 44 due to restriction <44>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000  240.371  240.371 base.py:1046(run)
        1    0.025    0.025  240.361  240.361 base.py:1051(mainLoop)
     1406    0.062    0.000  190.230    0.135 selectreactor.py:93(doSelect)
     1406  147.383    0.105  147.492    0.105 {select.select}
    50882    1.038    0.000   61.047    0.001 traceback.py:264(format_stack)
    50882   14.011    0.000   51.913    0.001 traceback.py:273(extract_stack)
     1406    0.053    0.000   50.059    0.036 base.py:666(runUntilCurrent)
      874    0.054    0.000   49.797    0.057 eventual.py:18(_turn)
25381/20221    0.116    0.000   46.592    0.002 defer.py:233(callback)
25391/20231    0.392    0.000   46.453    0.002 defer.py:291(_startRunCallbacks)
     6377    0.277    0.000   45.856    0.007 share.py:188(loop)
     6377    0.180    0.000   44.419    0.007 share.py:239(_do_loop)
     4672    0.049    0.000   42.653    0.009 log.py:38(callWithLogger)
     4672    0.063    0.000   42.597    0.009 log.py:33(callWithContext)
     4672    0.035    0.000   42.467    0.009 context.py:58(callWithContext)
     4672    0.053    0.000   42.428    0.009 context.py:32(callWithContext)
     4672    0.048    0.000   42.358    0.009 selectreactor.py:144(_doReadOrWrite)
5200/5143    0.101    0.000   39.445    0.008 defer.py:84(maybeDeferred)
     2212    0.043    0.000   39.311    0.018 tcp.py:345(doRead)
     2136    0.038    0.000   38.325    0.018 tcp.py:116(doRead)
     6377    0.405    0.000   37.374    0.006 share.py:709(_send_requests)
     1893    0.087    0.000   37.301    0.020 banana.py:633(dataReceived)
     4999    0.035    0.000   37.245    0.007 referenceable.py:406(callRemote)
     1893    1.008    0.001   37.210    0.020 banana.py:701(handleData)
     4999    0.273    0.000   37.103    0.007 referenceable.py:418(_callRemote)
     4777    0.029    0.000   34.931    0.007 share.py:741(_send_request)
    25491    0.333    0.000   31.085    0.001 defer.py:166(__init__)
50890/30329    0.944    0.000   30.678    0.001 defer.py:314(_runCallbacks)
     5000    0.100    0.000   30.631    0.006 banana.py:183(send)
     5000    0.120    0.000   30.525    0.006 root.py:92(send)
  1370097    7.485    0.000   28.704    0.000 linecache.py:43(checkcache)
    10106    0.082    0.000   24.185    0.002 defer.py:30(succeed)
     5582    0.183    0.000   22.692    0.004 banana.py:1049(handleClose)
     4999    0.110    0.000   21.742    0.004 call.py:652(receiveClose)
  1370101   21.221    0.000   21.221    0.000 {posix.stat}
     5072    0.829    0.000   19.518    0.004 banana.py:191(produce)
    10072    0.118    0.000   13.274    0.001 root.py:72(next)
    16851    0.085    0.000   12.370    0.001 banana.py:1035(handleToken)
     9998    0.095    0.000   12.250    0.001 call.py:628(receiveChild)
50868/44994    0.268    0.000   11.336    0.000 defer.py:172(addCallbacks)
     4999    0.026    0.000    9.349    0.002 call.py:673(_done)
     4999    0.054    0.000    9.323    0.002 call.py:55(complete)
    50882    7.292    0.000    8.031    0.000 traceback.py:27(format_list)
   1370097    3.531    0.000    7.492    0.000 linecache.py:13(getline)
```

Here are my comments/questions about this:

  • Only the first few rows of the tottime list are at all significant. The total CPU time was 240 seconds, so the 3rd row -- extract_stack -- used about 5% of the CPU time and the 4th row -- checkcache -- used about 3%.
  • Why is extract_stack getting called 50,882 times!? Here are all of the callers which called extract_stack in this run:
```
>>> p.print_callers('extract_stack')
   Ordered by: cumulative time
   List reduced from 1491 to 1 due to restriction <'extract_stack'>

Function                         was called by...
                                     ncalls  tottime  cumtime
traceback.py:273(extract_stack)  <-   50882   14.011   51.913  traceback.py:264(format_stack)


<pstats.Stats instance at 0x1004e0950>
>>> p.print_callers('format_stack')
   Ordered by: cumulative time
   List reduced from 1491 to 1 due to restriction <'format_stack'>

Function                        was called by...
                                    ncalls  tottime  cumtime
traceback.py:264(format_stack)  <-   25491    0.591   30.752  defer.py:166(__init__)
                                     25391    0.447   30.295  defer.py:291(_startRunCallbacks)


<pstats.Stats instance at 0x1004e0950>
```

Ah, this indicates that twisted.internet.defer.debug=True. Okay. I wonder how it got turned on?

  • Okay, and why is posix.stat invoked more than a million times? Ah, print_callers() tells me that posix.stat is getting called by checkcache which is getting called by extract_stack. So all told, around 23% of the CPU time (as measured in "tottime" terms instead of "cumtime" terms) was spent in doing the debugging stuff due to defer.debug=True. This makes me wonder what the overall throughput would be if defer.debug were turned off.

Next, I turn my attention to the "cumtime" results.

  • Measured this way, the CPU time spent in the defer.debug stuff was 25% of all CPU time. (And this is a better measure of it than the tottime result is.)
  • [share.py:188 (loop)](source:trunk/src/allmydata/immutable/downloader/share.py?annotate=blame&rev=4707#L188) was called 6377 times and took up a total of 19% of all CPU time. This might be a smoking gun showing that Brian's analysis in #1268 is correct and explains the slowdown that this ticket (#1264) is about.
  • [share.py:709 (_send_requests)](source:trunk/src/allmydata/immutable/downloader/share.py?annotate=blame&rev=4707#L709), called 6377 times, total of 37.374 seconds (15.5% of the total). 6 milliseconds per call. Is this more evidence in favor of the #1268 theory? What functions does _send_requests call, and how long do they take?
```
>>> p.strip_dirs().print_callees('_send_requests')
   Random listing order was used
   List reduced from 1491 to 1 due to restriction <'_send_requests'>

Function                      called...
                                  ncalls  tottime  cumtime
share.py:709(_send_requests)  ->    9554    0.024    0.067  defer.py:189(addCallback)
                                    9554    0.022    0.064  defer.py:197(addErrback)
                                   11154    0.207    0.652  log.py:144(msg)
                                    4777    0.029   34.931  share.py:741(_send_request)
                                    4777    0.083    0.210  spans.py:46(add)
                                   19131    0.132    0.196  spans.py:142(dump)
                                   11154    0.014    0.014  spans.py:152(__iter__)
                                   12754    0.060    0.605  spans.py:170(__sub__)
                                    6377    0.021    0.069  spans.py:256(get_spans)
                                    4777    0.049    0.066  status.py:97(add_request_sent)
                                   11154    0.034    0.093  {repr}
                                    4777    0.002    0.002  {time.time}
```

Uh-oh, I think this shows that cProfile is giving us bad data! The cumulative time spent in _send_request, including waiting for its callees to return, was 34.931s, and the total time spent in _send_request not waiting for its callees to return was 0.029s, which means that it was waiting for its callees to return for almost 35 seconds. But the total time spent in all of its callees adds up to only about 2 seconds. That doesn't make sense! :-(

Well, it is obviously time for me to go to bed. Hopefully it will make more sense when I get back to this next weekend.

One last comment: it is surprising how much time is spent in banana.py and its children! (That's the code in foolscap to serialize/deserialize things.)
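
One related note (my addition, not from the original comment): Twisted's Deferred debugging can be checked and toggled at runtime, which is a quick way to confirm whether it is responsible for the extract_stack/posix.stat overhead seen above. A minimal sketch:

```python
from twisted.internet import defer

# When enabled, Twisted records a traceback for every Deferred creation and
# callback, which is exactly what makes traceback.extract_stack and posix.stat
# dominate the profile above.
print("Deferred debugging enabled?", defer.getDebugging())

# twistd's -d/--debug flag turns this on; it can also be disabled explicitly.
defer.setDebugging(False)
```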

fijal commented 2010-11-24 08:03:17 +00:00
Author
Owner

Hey.

I think this might be a case of recursive calls - you end up with a cycle that isn't correctly reported here (it's kind of hard; you need cycle detection). So you end up calling children that are yourself. I would suggest using lsprofcalltree (a copy is here <http://codespeak.net/svn/pypy/trunk/pypy/tool/lsprofcalltree.py>) and KCachegrind for display (it's much easier to use and does cycle detection). I know installing KDE on a Mac is a pain, but there is always virtualization technology and whatnot.

Cheers,
fijal
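
A rough sketch of the suggested workflow (my illustration; the KCacheGrind class and its output() method are how the linked lsprofcalltree.py script is conventionally used, so treat the exact API as an assumption):

```python
import cProfile
import lsprofcalltree  # the script linked above, dropped onto PYTHONPATH

profiler = cProfile.Profile()
profiler.enable()
run_the_download()     # hypothetical stand-in for the code being measured
profiler.disable()

# Convert the cProfile data into callgrind format for KCachegrind,
# which can then show the call graph with cycle detection.
k = lsprofcalltree.KCacheGrind(profiler)
with open('callgrind.out.tahoe', 'w') as f:
    k.output(f)
```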

warner commented 2010-11-24 16:39:04 +00:00
Author
Owner

zooko says:

> So all told, around 23% of the CPU time (as measured in "tottime" terms
> instead of "cumtime" terms) was spent in doing the debugging stuff due to
> defer.debug=True. This makes me wonder what the overall throughput would be
> if defer.debug were turned off.

Whoa, yeah, defer.debug should definitely not be turned on. We've observed
the unit test suite taking like 3x as long when it is. (last time, it was
turned on by some accidentally left-in debugging code). What's going on?
Francois: do you have any local patches which call
twisted.internet.defer.setDebugging(True)? There's nothing in the current
trunk which does that. If you run twistd with -d/--debug, it will turn on
defer.setDebugging, but from the command line you ran it doesn't look like
that's the case.

> share.py:188(loop) was called 6377 times and took up a total of 19% of all
> CPU time. This might be a smoking gun showing that Brian's analysis in
> #1268 is correct and explains the slowdown that this ticket (#1264) is
> about.

> share.py:709(_send_requests), called 6377 times, total of 37.374 seconds
> (15.5% of the total). 6 milliseconds per call.

Let's see, k=20, 10MB file, assuming default 128KiB segsize, that gives 77
segments, so a total of 144 nodes in the block hash merkle tree for each
share. We fetch half of the leaves and all of the non-leaves, so about 115
nodes per share, and there are 20 shares, so about 2310 block hash nodes
total. There are also 77 data blocks per share, so 1540 of those. We also
fetch maybe a dozen other pieces (offset table, UEB, some portion of the
share hash tree and ciphertext hash tree) for each share, so maybe an extra
240 fetches there. Eyeballing that is about 4090 fetches. So, it's within an
order of magnitude of what I'd expect to see.
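
A tiny sketch (my addition) that just re-runs the back-of-the-envelope arithmetic above, using the per-share figures stated in the text:

```python
# Per-share figures taken directly from the estimate above (k=20, 10 MB file,
# default 128 KiB segments => 77 segments per share).
k = 20
segments = 77
block_hash_nodes = 115   # ~half the leaves plus the non-leaf hash-tree nodes
data_blocks = segments   # one block per segment
other_pieces = 12        # offset table, UEB, share/ciphertext hash tree bits

per_share = block_hash_nodes + data_blocks + other_pieces
print(per_share, per_share * k)   # 204 fetches per share, ~4080 in total
```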

Each _send_requests call turns into one loop call, yes: that's
the #1268 bug. When that gets fixed, we should see 6377 _send_requests
but far fewer calls to loop, maybe 20*(77+2)=1580.

Re _send_request and recursion: careful, there are two similarly-named
functions here. _send_requests() (plural) loops over
_send_request() (singular). Does this resolve the confusion?
_send_request() is just a single call to callRemote, and if we're
spending a lot of time there, it means we're spending a lot of time in the
Foolscap serialization code. Let's see, 4777 calls and 34.931s total, so
7.3 ms per call... that feels high. The arguments being serialized are really
simple (two ints), but there are a lot of Deferreds involved, so if
defer.setDebugging(True) is really turned on, I'm not as surprised.

We really need a new profiling run with defer.setDebugging turned off.

francois commented 2010-11-24 17:11:30 +00:00
Author
Owner

Attachment tahoe-nodebug.profile (469838 bytes) added

francois commented 2010-11-24 17:16:29 +00:00
Author
Owner

Replying to warner:

> Whoa, yeah, defer.debug should definitely not be turned on. We've observed
> the unit test suite taking like 3x as long when it is. (last time, it was
> turned on by some accidentally left-in debugging code). What's going on?
> Francois: do you have any local patches which call
> twisted.internet.defer.setDebugging(True)? There's nothing in the current
> trunk which does that. If you run twistd with -d/--debug, it will turn on
> defer.setDebugging, but from the command line you ran it doesn't look like
> that's the case.

Oh! Sorry, the previous profile was indeed generated by twistd --debug, my bad! Here's a new one.

zooko commented 2010-11-28 08:59:58 +00:00
Author
Owner

Replying to francois (comment:13):

> Oh! Sorry, the previous profile was indeed generated by twistd --debug, my bad! Here's a new one.

Did this one take about half as much time to download the 10 MB file? Because profiler reports 130 seconds (which it incorrectly calls CPU seconds when it is really wall clock seconds) for this one, and it reported about 240 for the last one. Hey, in fact, does this mean that almost all of the reported performance regression is due to François accidentally turning on debug mode? :-)

In any case, here are the total and cumulative times, showing all the ones that took more than 1% of the total time:

```
         8948704 function calls (8908705 primitive calls) in 129.678 CPU seconds

   Ordered by: cumulative time
   List reduced from 1277 to 64 due to restriction <64>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000  129.678  129.678 base.py:1046(run)
        1    0.032    0.032  129.678  129.678 base.py:1051(mainLoop)
     1705    0.065    0.000  110.454    0.065 selectreactor.py:93(doSelect)
     1705  101.247    0.059  101.373    0.059 {select.select}
     1705    0.049    0.000   19.137    0.011 base.py:666(runUntilCurrent)
     1173    0.055    0.000   18.963    0.016 eventual.py:18(_turn)
     6428    0.270    0.000   17.554    0.003 share.py:188(loop)
     6428    0.170    0.000   16.151    0.003 share.py:239(_do_loop)
     6428    0.357    0.000    9.349    0.001 share.py:709(_send_requests)
51318/30610    0.784    0.000    9.008    0.000 defer.py:314(_runCallbacks)
     4939    0.052    0.000    8.991    0.002 log.py:38(callWithLogger)
     4939    0.061    0.000    8.932    0.002 log.py:33(callWithContext)
     4939    0.033    0.000    8.804    0.002 context.py:58(callWithContext)
     4939    0.054    0.000    8.766    0.002 context.py:32(callWithContext)
     4939    0.047    0.000    8.695    0.002 selectreactor.py:144(_doReadOrWrite)
25616/20408    0.089    0.000    8.362    0.000 defer.py:233(callback)
25622/20414    0.105    0.000    8.247    0.000 defer.py:291(_startRunCallbacks)
5231/5177    0.043    0.000    7.703    0.001 defer.py:84(maybeDeferred)
     5049    0.030    0.000    7.472    0.001 referenceable.py:406(callRemote)
     5049    0.208    0.000    7.398    0.001 referenceable.py:418(_callRemote)
     4828    0.022    0.000    7.087    0.001 share.py:741(_send_request)
     2495    0.045    0.000    6.566    0.003 tcp.py:345(doRead)
     5050    0.032    0.000    6.500    0.001 banana.py:183(send)
     5050    0.087    0.000    6.468    0.001 root.py:92(send)
     2423    0.026    0.000    6.351    0.003 tcp.py:116(doRead)
     5121    0.787    0.000    6.090    0.001 banana.py:191(produce)
     1891    0.037    0.000    5.503    0.003 banana.py:633(dataReceived)
     1891    0.868    0.000    5.464    0.003 banana.py:701(handleData)
   155707    1.606    0.000    3.148    0.000 abstract.py:161(write)
     8028    0.144    0.000    3.058    0.000 share.py:268(_get_satisfaction)
    23231    0.338    0.000    3.057    0.000 hashtree.py:298(needed_hashes)
51317/45429    0.245    0.000    2.934    0.000 defer.py:172(addCallbacks)
     6428    0.239    0.000    2.831    0.000 share.py:560(_desire)
    23231    0.518    0.000    2.716    0.000 hashtree.py:128(needed_for)
     5629    0.105    0.000    2.706    0.000 banana.py:1049(handleClose)
    40405    0.343    0.000    2.609    0.000 banana.py:485(sendToken)
     5049    0.096    0.000    2.515    0.000 call.py:652(receiveClose)
    44426    0.900    0.000    2.475    0.000 log.py:144(msg)
    60873    0.329    0.000    2.220    0.000 banana.py:22(int2b128)
     2263    0.087    0.000    1.890    0.001 abstract.py:90(doWrite)
     5049    0.017    0.000    1.807    0.000 call.py:673(_done)
     5049    0.041    0.000    1.790    0.000 call.py:55(complete)
   150076    0.508    0.000    1.742    0.000 hashtree.py:118(sibling)
     2263    0.019    0.000    1.732    0.001 tcp.py:364(writeSomeData)
     2032    0.012    0.000    1.723    0.001 tcp.py:140(doWrite)
     2032    0.011    0.000    1.581    0.001 tcp.py:152(writeSomeData)
     2032    1.552    0.001    1.552    0.001 {built-in method send}
    61062    0.312    0.000    1.392    0.000 spans.py:25(__init__)
   155628    0.536    0.000    1.283    0.000 tcp.py:256(startWriting)
    44426    0.848    0.000    1.117    0.000 log.py:227(add_event)
    36748    0.421    0.000    1.091    0.000 spans.py:46(add)
         8948704 function calls (8908705 primitive calls) in 129.678 CPU seconds

   Ordered by: internal time
   List reduced from 1277 to 64 due to restriction <64>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1705  101.247    0.059  101.373    0.059 {select.select}
   155707    1.606    0.000    3.148    0.000 abstract.py:161(write)
     2032    1.552    0.001    1.552    0.001 {built-in method send}
     44426    0.900    0.000    2.475    0.000 log.py:144(msg)
```

We can see the share.py functions in the cumulative listing, showing that we spent 13.5% of the total time inside share.py's loop(), but other than that there aren't any suspicious CPU users. It seems to spend most of its time waiting for the network, which is as it should be.

zooko commented 2011-01-04 06:56:14 +00:00
Author
Owner

Replying to zooko (comment:14):

> Did this one take about half as much time to download the 10 MB file? Because profiler reports 130 seconds (which it incorrectly calls CPU seconds when it is really wall clock seconds) for this one, and it reported about 240 for the last one. Hey, in fact, does this mean that almost all of the reported performance regression is due to François accidentally turning on debug mode? :-)

For the record, François assured me on IRC that this is not the case, that the regression from 1.7 to 1.8 is real regardless of debug mode being turned on. I guess the large difference in CPU seconds reported in the debug mode profiling vs. the no-debug-mode profiling doesn't indicate a proportionally large difference in real wall-clock time.

François: if you want to generate some fresh profiling results with your current setup and v1.7 vs. 1.8 that would be cool. We could compare the profiling results from the two versions.

> In any case, here are the total and cumulative times, showing all the ones that took more than 1% of the total time:
>
> ```
>    Ordered by: cumulative time
>    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
>         1    0.000    0.000  129.678  129.678 base.py:1046(run)
>         1    0.032    0.032  129.678  129.678 base.py:1051(mainLoop)
>      1705    0.065    0.000  110.454    0.065 selectreactor.py:93(doSelect)
>      1705  101.247    0.059  101.373    0.059 {select.select}
>      1705    0.049    0.000   19.137    0.011 base.py:666(runUntilCurrent)
>      1173    0.055    0.000   18.963    0.016 eventual.py:18(_turn)
>      6428    0.270    0.000   17.554    0.003 share.py:188(loop)
>      6428    0.170    0.000   16.151    0.003 share.py:239(_do_loop)
>      6428    0.357    0.000    9.349    0.001 share.py:709(_send_requests)
> ```
>
> We can see the share.py functions in the cumulative listing, showing that we spent 13.5% of the total time inside share.py's loop(), but other than that there aren't any suspicious CPU users. It seems to spend most of its time waiting for the network, which is as it should be.

Well, maybe the 13.5% of the time spent inside share.py's loop() causes the overall progress to slow down significantly, as François observed from the visualizations in comment:5 and as Brian potentially explained in comment:122892.

warner commented 2011-09-15 17:06:58 +00:00
Author
Owner

Attachment atlasperf.zip (99574 bytes) added

tools for doing comparative download performance tests

warner commented 2011-09-15 17:11:24 +00:00
Author
Owner

Attachment atlasperf-1MB.png (60015 bytes) added

measured download speed on WAN versus encoding (1MB file) tahoe-1.8.2

warner commented 2011-09-15 17:11:43 +00:00
Author
Owner

Attachment atlasperf-10MB.png (53279 bytes) added

measured download speed on WAN versus encoding (10MB file) tahoe-1.8.2

warner commented 2011-09-15 17:12:26 +00:00
Author
Owner

Attachment atlasperf-100MB.png (54259 bytes) added

measured download speed on WAN versus encoding (100MB file) tahoe-1.8.2

warner commented 2011-09-15 17:14:22 +00:00
Author
Owner

Attachment atlasperf-timing.out (78473 bytes) added

raw data

warner commented 2011-09-15 17:28:08 +00:00
Author
Owner

I just finished doing the first batch of comparative performance tests on the Atlas Networks hardware, and uploaded my tools and the graphs. I'm calling this environment "atlasperf1". It consists of four machines on fast local connections: one client and three storage hosts, with two server nodes per storage machine. The CPUs are dual-core Pentium4s at 3.00GHz, and the average ping time is 420us. The client is running tahoe-1.8.2 (the servers are a mix of versions: 1.8.2, 1.7.1, 1.6.1).

I uploaded 1MB/10MB/100MB immutable files with various encodings (from 1-of-1 to 60-of-60, all with k=N). Then I ran a program to download random ones over and over. I graphed the k-vs-time curve for about 1700 trials.
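
The download loop amounts to something like the following (a minimal sketch, not the actual atlasperf scripts from the attachment; the `atlasperf:` alias and file names are made up):

```
# Sketch: repeatedly download a randomly chosen pre-uploaded file and record
# the elapsed wall-clock time, one line per trial.
import random, subprocess, time

FILES = ["atlasperf:100MB.%d-of-%d" % (k, k) for k in (1, 5, 15, 30, 45, 60)]

for _ in range(1700):
    target = random.choice(FILES)
    start = time.time()
    with open("/dev/null", "wb") as devnull:
        subprocess.check_call(["tahoe", "get", target], stdout=devnull)
    print("%s %.2f" % (target, time.time() - start))
```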

The graph shows a nearly-perfect linear slowdown with increasing k. For a 100MB file, k=5 yields a 50s download, k=30 takes 250s, and k=60 takes about 500s. The same shape holds for 1MB and 10MB files too.
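
Normalizing those three 100MB data points by k makes the linearity easy to check; the cost per unit of k settles around 8-10 seconds (a quick sanity check using only the numbers quoted above):

```
# (k, seconds) for the 100MB downloads quoted above
points = [(5, 50.0), (30, 250.0), (60, 500.0)]
for k, t in points:
    print("k=%2d  %3.0f s  -> %4.1f s per unit of k" % (k, t, t / k))
```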

The next test is to try various segment sizes. I've got a few pet theories about the slowdown: the number of foolscap messages being sent, or the limited pipelining of small requests. Both scale with the number of segments. On a fast network, our current one-segment pipeline isn't very deep. And with high 'k', the amount of data we get from each block can be quite small (k=60 means each block is 128k/60 ≈ 2.2k, scarcely a whole packet), so we've got a lot of tiny requests flying around.
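
To put numbers on "a lot of tiny requests": with the default 128 KiB segment size, the per-block request size shrinks as 1/k while the number of block requests grows as k (a back-of-the-envelope sketch, not Tahoe code; it counts one block request per segment from each of k shares and ignores hash requests):

```
SEGSIZE = 128 * 1024           # default max segment size
FILESIZE = 100 * 1024 * 1024   # the 100MB test file

for k in (5, 15, 30, 60):
    block = SEGSIZE // k                 # bytes returned per block request
    segments = -(-FILESIZE // SEGSIZE)   # ceiling division: segments in the file
    requests = segments * k              # block requests for the whole file
    print("k=%2d  block=%5d B  block requests=%6d" % (k, block, requests))
```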

Possible mitigations include:

  • when segsize/k is small, accept the memory-footprint hit and use a deeper pipeline to keep the pipes full (a rough sketch follows this list)
  • implement a readv() method on the storage servers to reduce the foolscap overhead (if that's in fact an issue)
  • advise larger segsize on fast networks
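
A minimal sketch of the first idea, assuming we scale the number of in-flight block requests so that a roughly fixed number of bytes stays outstanding per share (the 64 KiB target and the cap of 16 are made-up numbers, not anything from the current downloader):

```
TARGET_IN_FLIGHT = 64 * 1024   # bytes we are willing to keep outstanding per share
MAX_DEPTH = 16                 # bound on the memory-footprint hit

def pipeline_depth(segsize, k):
    block_size = max(1, segsize // k)
    depth = TARGET_IN_FLIGHT // block_size
    return max(1, min(MAX_DEPTH, depth))

# e.g. segsize=128 KiB: k=5 -> depth 2, k=60 -> depth 16 (capped)
```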
zooko commented 2011-09-15 21:26:21 +00:00
Author
Owner

Brian: nice work! Now we're really getting somewhere. :-)

It would be cool to see the New Visualizer report for a k=5, 50s download and a k=50, 500s download. Did you capture those?

zooko commented 2011-09-16 16:25:36 +00:00
Author
Owner

Brian pointed out to me, on the phone just now, that 1.8.2 doesn't have the New Visualizer in it. He's running the same experiment with trunk now, and will hopefully get New Visualizer reports from those.

zooko commented 2011-09-19 03:26:07 +00:00
Author
Owner

On the topic of segment sizes, you might be interested in http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1264 and the various other notes linked from http://tahoe-lafs.org/trac/tahoe-lafs/wiki/Performance .

warner commented 2011-09-19 17:58:22 +00:00
Author
Owner

That last image shows a comparison of LAN download times for a 100MB file at various 'k' values (the graphs for 10MB and 1MB files have the same shape, but more variance). The faster blue samples are for 1.7.1, while the slower red samples are current trunk (probably around changeset:8e69b94588c1c0e7). Both graphs are quite linear, but trunk (with the new downloader) is significantly slower than the old downloader in 1.7.1 across the board, probably about 3x.

Higher 'k' slows down the process too, more severely for the new downloader: for 1.7.1, k=60 is about 3x slower than k=15, while for trunk the difference is about 3.8x.

I still suspect the foolscap/message-handling overhead. I'd also like to find a way to get a speed-limited, WAN-like pipe (simulated, for consistency across runs) and compute similar graphs in an environment where the network is the bottleneck. I believe the new downloader is faster than the old code on a WAN (that's why I wrote it in the first place), but it looks like something about the added complexity or the small reads makes the LAN case slower.

warner commented 2011-09-19 22:06:06 +00:00
Author
Owner

Attachment atlas1-1.7.1vstrunk-100MB.png (66021 bytes) added

comparison of 1.7.1 versus current trunk (probably about changeset:8e69b94588c1c0e7) (better graph, more data)

warner commented 2011-09-24 18:17:56 +00:00
Author
Owner

FYI, my full report on LAN performance is now on the wiki at Performance/Sep2011.

warner commented 2011-09-26 00:28:45 +00:00
Author
Owner

I've opened some new tickets for some of the improvements we should experiment with, in decreasing order of bang-for-the-buck:

  • #1545: add readv() to immutable storage API
  • #1544: coordinate hash-chain requests between multiple Shares
  • #1543: rearrange share format to make downloads more efficient