NetBSD < 6.0 /dev/random appears to break RSA keygen in test suites #1924

Closed
opened 2013-02-26 19:55:52 +00:00 by midnightmagic · 32 comments
midnightmagic commented 2013-02-26 19:55:52 +00:00
Owner

It looks as though the NetBSD /dev/random from earlier than 6.0 (prior to Thor's patch which overhauled it to supply unlimited amounts of random data) does not supply enough bytes to get Tahoe LAFS through the test phase at all times.

If you turn off all sources of mixed entropy via NetBSD rndctl and exhaust it continuously (cat /dev/random > /dev/null), it is possible to reproduce the issue semi-regularly.

It manifests as failed RSA invertibility tests in the test suite.

Running the crypto++ test binary under the same conditions makes crypto++ complain bitterly about how long it has to wait for random bytes to be supplied from /dev/random, but it does not technically fail: it just complains about it.

I believe it is this issue which is not being handled correctly.

I have no direct evidence this is so.

Updating to NetBSD >= Thor's /dev/random overhaul appears to correct the issue.

tahoe-lafs added the
code
major
defect
1.9.2
labels 2013-02-26 19:55:52 +00:00
tahoe-lafs added this to the undecided milestone 2013-02-26 19:55:52 +00:00
davidsarah commented 2013-02-26 20:49:48 +00:00
Author
Owner

Isn't this a pycryptopp bug rather than a Tahoe bug? Also, if there's a problem with how pycryptopp responds to the warnings from Crypto++, then updating NetBSD is only hiding that problem.

midnightmagic commented 2013-02-26 21:10:57 +00:00
Author
Owner

This may be a pycryptopp bug. I wasn't sure where to put it because it only seems to show up in the Tahoe unit tests.

Correct, updating the kernel only hides the bug.

Author
Owner

The behavior of random(4) on NetBSD 6 is actually a bit more complicated; a single open and read of lots of data uses up some bits, but repeated reads will still drain the pool and cause blocking. By definition, random(4) is supposed to block rather than return bits without defensible entropy, while urandom(4) is supposed to return pretty good bits.

In actual use, presumably blocking is better than bad security properties.

So I'd say the test harness should do some combination of

  • refrain from over-using random bits
  • use urandom(4) instead (which is a scary/invasive change)
  • warn the user to address entropy generation (can be very hard in VMs)
  • accept that it may take a very long time to run

(I am assuming that the random/urandom distinction is portable; my understanding is that while NetBSD and Linux have different implementations there has been a common view of the top-level specifications of how this should behave.)

midnightmagic commented 2013-02-27 01:59:24 +00:00
Author
Owner

For more information and a much more in-depth discussion (for future readers) including information alluded to above, and the nature of when it might block, the new /dev/random in NetBSD we are talking about is introduced and beaten soundly about the head here:

http://mail-index.netbsd.org/tech-kern/2011/12/09/msg012085.html

OpenBSD has gone a bit of a different route: /dev/random is the same as /dev/urandom:

http://comments.gmane.org/gmane.os.openbsd.misc/189670

Note that under no circumstances have I ever been able to reproduce the issue described here under the new 6.x kernel. However much strain Tahoe unit tests place on the system, it supplies enough bytes not to trip us up now.

I do not know enough about pycryptopp's use of crypto++ to know what it does with the crypto++ complaint about /dev/random read latencies.

zooko commented 2013-03-11 04:57:18 +00:00
Author
Owner

So the way this bug manifests is that Crypto++ gets an internal inconsistency, saying "InvertibleRSAFunction: computational error during private key operation". The only place in Crypto++ which emits that exception message is line 223 of rsa.cpp: https://tahoe-lafs.org/trac/pycryptopp/browser/git/src-cryptopp/rsa.cpp?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L223

This is a self-test which Crypto++ always does internally, checking that if it has computed y to be the "RSA inverse" of x mod N, then yᵉ = x mod N. This internal consistency check fails frequently on midnightmagic's NetBSD 5 machine when the "entropy level" of /dev/urandom is drained.

Now, the thing about this is that the blocking or non-blocking or delaying behavior of /dev/urandom cannot be a legitimate excuse for this internal check to fail! There has to be a bug, either in Crypto++, in the compiler, or in the kernel, in order to let this inconsistency happen.
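For readers following along, here is a minimal toy sketch of the blinded private-key operation and the yᵉ = x mod N check that is failing. This is my own illustration with tiny hard-coded textbook numbers, not Crypto++'s actual code (which uses arbitrary-precision integers and CRT):

```
// Toy illustration of Crypto++'s blinded RSA private-key operation and its
// consistency check, with tiny textbook parameters (NOT real key sizes).
#include <cstdint>
#include <iostream>

// Modular exponentiation by repeated squaring (values stay well below 2^32,
// so the 64-bit intermediate products cannot overflow).
uint64_t powmod(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1 % mod;
    base %= mod;
    while (exp) {
        if (exp & 1) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

int main() {
    const uint64_t p = 61, q = 53, n = p * q;  // n = 3233
    const uint64_t e = 17, d = 413;            // e*d = 1 mod lcm(p-1, q-1) = 780
    const uint64_t x = 65;                     // value to apply the private key to

    // Blinding: r is "random", rInv is its inverse mod n (2 * 1617 = 1 mod 3233).
    const uint64_t r = 2, rInv = 1617;

    // y = rInv * (x * r^e)^d mod n, which equals x^d mod n when all is well.
    const uint64_t blinded = (x * powmod(r, e, n)) % n;
    const uint64_t y = (rInv * powmod(blinded, d, n)) % n;

    // The check that is throwing on NetBSD 5: y^e must equal x mod n.
    if (powmod(y, e, n) != x % n) {
        std::cerr << "computational error during private key operation\n";
        return 1;
    }
    std::cout << "ok: y^e == x (mod n)\n";
    return 0;
}
```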

Here's a typical example of the error, on midnightmagic's NetBSD 5 buildslave:

https://tahoe-lafs.org/buildbot-pycryptopp/builders/MM%20netbsd5%20i386%20warp/builds/146/steps/bench/logs/stdio

In case that is no longer available, here is a copy of it for posterity:


python setup.py bench
 in dir /home/pycryptopp/buildslave/pycryptopp/MM_netbsd5_i386_warp/build (timeout 14400 secs)
 watching logfiles {}
 argv: ['python', 'setup.py', 'bench']
 environment:
  EDITOR=joe
  ENV=/home/pycryptopp/.shrc
  EXINIT=set autoindent
  HISTFILESIZE=100000
  HISTSIZE=100000
  HOME=/home/pycryptopp
  LESS=-X
  LOGNAME=pycryptopp
  OLDPWD=/home/pycryptopp
  PAGER=more
  PATH=/home/pycryptopp/bin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11R7/bin:/usr/X11R6/bin:/usr/pkg/bin:/usr/pkg/sbin:/usr/games:/usr/local/bin:/usr/local/sbin
  PWD=/home/pycryptopp/buildslave/pycryptopp/MM_netbsd5_i386_warp/build
  PYTHONPATH=/home/pycryptopp/lib/python2.6/site-packages
  SHELL=/usr/pkg/bin/bash
  SHLVL=1
  SU_FROM=root
  TERM=xterm
  USER=pycryptopp
  _=/home/pycryptopp/bin/buildslave
 using PTY: False
running bench
terminate called after throwing an instance of 'CryptoPP::Exception'
  what():  InvertibleRSAFunction: computational error during private key operation
<class 'pycryptopp.bench.bench_sigs.ECDSA256'>
generate key
best: 1.194e-01,   3th-best: 1.195e-01, mean: 1.196e-01,   3th-worst: 1.196e-01, worst: 1.198e-01 (of      9)
sign
best: 3.679e+00,   1th-best: 3.679e+00, mean: 3.679e+00,   1th-worst: 3.679e+00, worst: 3.679e+00 (of      1)
verify
best: 1.282e+01,   1th-best: 1.282e+01, mean: 1.282e+01,   1th-worst: 1.282e+01, worst: 1.282e+01 (of      1)

<class 'pycryptopp.bench.bench_sigs.Ed25519'>
generate key
best: 3.968e+00,   1th-best: 3.968e+00, mean: 3.968e+00,   1th-worst: 3.968e+00, worst: 3.968e+00 (of      1)
sign
best: 4.589e+00,   1th-best: 4.589e+00, mean: 4.589e+00,   1th-worst: 4.589e+00, worst: 4.589e+00 (of      1)
verify
best: 1.221e+01,   1th-best: 1.221e+01, mean: 1.221e+01,   1th-worst: 1.221e+01, worst: 1.221e+01 (of      1)

<class 'pycryptopp.bench.bench_sigs.RSA2048'>
generate key
best: 5.225e+02,   1th-best: 5.225e+02, mean: 5.537e+02,   1th-worst: 5.848e+02, worst: 5.848e+02 (of      2)
sign
process killed by signal 6
program finished with exit code -1
elapsedTime=45.001851

(By the way, the fact that Crypto++ does this sort of internal self-check unconditionally (i.e., not only when built in some sort of "debug mode") is an example of the kind of careful cryptographic engineering which I appreciate about Crypto++.)

Midnightmagic upgraded various components of his computer until he had eventually replaced every single component of the computer and the error behavior never changed, so it can't be an actual hardware error. (Also, it would be a suspiciously specific sort of behavior for a hardware error.)

Midnightmagic figured out that it happened a lot more frequently when the "entropy pool" was running low.

I wrote this patch which removes use of the operating system's random number generator and instead hardcodes a seed so that the RNG generates the same sequence each time:

https://github.com/zooko/pycryptopp/commits/debug-netbsd-rsa

Midnightmagic ran with that patch intensively, for many hours and it never showed any failure.

Now Samuel "Dcoder" Neves and I poked through the relevant parts of the Crypto++ source code, and we didn't see any bug in there that could lead to this.

Hm...

You know what? Something midnightmagic mentioned about timing made me realize that there is a way that a timing race could cause exactly this observed failure. That is, see how on line 229 of rsa.cpp (https://tahoe-lafs.org/trac/pycryptopp/browser/git/src-cryptopp/rsa.cpp?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L229) it sets r equal to a random number read from the operating system? And then on line 230 it sets rInv to the multiplicative inverse of r? And then later on line 233 it uses r for something else. Now, if there is a bug in the kernel such that it overwrote the contents of r's memory after line 229 — after the call to Randomize() returned — then that would cause this bug. So, look for a race condition/insufficient locking in the NetBSD kernel such that reading from /dev/random causes your memory to get written to by the kernel after your read() has returned.

To help find such a bug, please try this patch:

https://github.com/zooko/pycryptopp/commits/debug-netbsd-rsa-2

This uses the standard (operating-system-provided) RNG, but does extra self-checks in search of the hypothesized "late memory overwrite" that I speculate about above.
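To make that kind of self-check concrete, here is a rough standalone sketch of the same idea (my illustration only, not the contents of the debug-netbsd-rsa-2 patch): snapshot the buffer right after the read returns, do some work, then verify the buffer has not changed behind our back.

```
// My illustration (not the actual debug-netbsd-rsa-2 patch) of a "late memory
// overwrite" check: snapshot the buffer right after read() returns, do some
// unrelated work, then verify the buffer has not changed behind our back.
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>

int main() {
    unsigned char buf[64], shadow[64];

    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0 || read(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
        std::perror("read /dev/urandom");
        return 1;
    }
    close(fd);
    std::memcpy(shadow, buf, sizeof buf);  // snapshot taken immediately after the read returns

    // Stand-in for the RSA arithmetic that uses buf in the real code.
    volatile unsigned long sink = 0;
    for (int i = 0; i < 10000000; ++i) sink += (unsigned long)i;

    if (std::memcmp(buf, shadow, sizeof buf) != 0) {
        std::fprintf(stderr, "RNG output buffer changed after read() returned!\n");
        return 2;
    }
    std::puts("buffer stable");
    return 0;
}
```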

midnightmagic commented 2013-03-11 08:50:29 +00:00
Author
Owner

I will test your debug routines. I am fairly confident that the only time this race might occur would be if there were another thread that were doing it incorrectly after the fact.

I wrote this in #tahoe-lafs a moment ago and realised I should put it in here instead:

There is a match between this error and the crypto++ self-test program (cryptest.exe): it happens (and doesn't happen) at exactly the same frequency depending on whether I have a high-supply /dev/random (netbsd6 kernel) or not (netbsd5 kernel).

Specifically, in the cryptest.exe validation routines, it will block when reading from /dev/random, and the kernel itself sits there picking its nose "waiting" for "entropy".

Finally, it will emit an error, but it won't actually exit: it just complains about the lack of bytes from /dev/random, and then at the end of the validation it will insist the system failed some of the tests.

In wrapping crypto++, does pycryptopp disable any blocking reads from /dev/random in some fashion?

Here's the error under similar conditions to where the pycryptopp stuff is failing:

./cryptest.exe v
[...]
  Testing operating system provided blocking random number generator...
  FAILED:  it took 361 seconds to generate 1 bytes
[...]

I have now switched the buildmachine back to the NetBSD 5.x kernel which can be emptied of its entropy fairly easily, so don't kill the buildbots if they start choking. :-)

Author
Owner

I would be really surprised if read() returned and later memory got modified (because of that previous system call).

With threads, though, the usual issues about reuse of static buffers apply. Presumably that's not it or it would have shown up elsewhere.

I wonder about a test entropy source that sometimes takes a long time, and sometimes returns fewer bytes than requested, as a way to make the tests more harsh. This could perhaps be done by stubbing the read with a routine that does the actual read, and then maybe sleeps, and maybe adjusts the bytes read.
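Something along these lines, perhaps (a rough sketch; the wrapper name and the sleep/short-read policy are made up for illustration, not an existing API):

```
// Rough sketch of a "harsh" entropy source for testing: wrap read() so it
// sometimes sleeps and sometimes returns fewer bytes than requested. The
// function name and the exact policy are made up for illustration.
#include <cstdlib>
#include <unistd.h>

ssize_t harsh_entropy_read(int fd, unsigned char *buf, size_t len) {
    if (std::rand() % 4 == 0)
        sleep(1);                                // simulate a slow or blocking pool
    size_t ask = len;
    if (len > 1 && std::rand() % 2 == 0)
        ask = 1 + (size_t)(std::rand() % (int)(len - 1));  // simulate a short read
    return read(fd, buf, ask);                   // caller must cope with a short count
}
```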

zooko commented 2013-03-11 16:42:11 +00:00
Author
Owner

> Finally, it will emit an error.. but it won't actually exit. it just complains about the lack of bytes from /dev/random and then at the end of the validation it will insist the system failed some of the tests.

What error message? Thanks!

zooko commented 2013-03-11 16:49:41 +00:00
Author
Owner

> In wrapping crypto++, does pycryptopp disable any blocking reads from /dev/random in some fashion?

Oooh.

    AutoSeededRandomPool osrng(false);

from:

https://tahoe-lafs.org/trac/pycryptopp/browser/git/src/pycryptopp/publickey/rsamodule.cpp?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L297

The false is the argument to the parameter "blocking": https://tahoe-lafs.org/trac/pycryptopp/browser/git/src-cryptopp/osrng.h?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L84

However, I still refuse to believe that if I tell it to use an RNG in non-blocking mode, and the OS is providing an underlying RNG which is blocking, that this should result in a y for which yᵉ ≠ x mod N. What it should do if I require non-blocking and the underlying pool is blocking is either use a non-blocking underlying pool (/dev/urandom), or raise an exception.
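For context, here is a minimal standalone use of the class in question, showing that constructor argument (assuming the Crypto++ headers are installed under cryptopp/ — on some systems the directory is crypto++/ — and that the program is linked against libcryptopp):

```
// Minimal standalone use of AutoSeededRandomPool showing the "blocking"
// constructor argument that pycryptopp passes as false. The header path is
// an assumption; link against libcryptopp.
#include <cryptopp/osrng.h>
#include <iostream>

int main() {
    CryptoPP::AutoSeededRandomPool blockingRng(true);     // seed via BlockingRng (/dev/random) when available
    CryptoPP::AutoSeededRandomPool nonblockingRng(false); // seed via NonblockingRng (/dev/urandom) when available

    unsigned char buf[32];
    blockingRng.GenerateBlock(buf, sizeof buf);
    nonblockingRng.GenerateBlock(buf, sizeof buf);
    std::cout << "generated " << sizeof buf << " bytes from each pool" << std::endl;
    return 0;
}
```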

Author
Owner

If you put /dev/random in non-blocking mode, the correct behavior from the OS viewpoint, on a read of N bytes, is to return as many bytes as are available, if >= 1, or to return EWOULDBLOCK. So the caller should expect that a request for N bytes may return fewer.
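A sketch of what a caller that copes with those semantics might look like (illustrative only; the function name and back-off interval are mine):

```
// Sketch of a caller that copes with those semantics: open /dev/random
// non-blocking, accept short reads, and back off on EWOULDBLOCK/EAGAIN.
#include <cerrno>
#include <fcntl.h>
#include <unistd.h>

bool gather_random(unsigned char *out, size_t want) {
    int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);
    if (fd < 0)
        return false;
    size_t have = 0;
    while (have < want) {
        ssize_t n = read(fd, out + have, want - have);
        if (n > 0) {
            have += (size_t)n;           // short read: keep what we got and ask again
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            usleep(100 * 1000);          // pool is empty right now; wait and retry
        } else {
            close(fd);                   // real error, or unexpected EOF
            return false;
        }
    }
    close(fd);
    return true;
}
```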

zooko commented 2013-04-11 06:20:04 +00:00
Author
Owner

Ah, I see that the comment in https://tahoe-lafs.org/trac/pycryptopp/browser/git/src-cryptopp/osrng.h?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L84 says:

	//! use blocking to choose seeding with BlockingRng or NonblockingRng. the parameter is ignored if only one of these is available

So apparently NetBSD (at least old NetBSD 5.x?) does not have a non-blocking PRNG, so the fact that we pass false there, requesting a non-blocking PRNG, should be ignored. Aha! This looks like the bug, then:

config.h (https://tahoe-lafs.org/trac/pycryptopp/browser/git/src-cryptopp/config.h?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L393) says that if this is a Unix then it has a non-blocking PRNG.

Is that the bug? Should we add some conditions to config.h so that it won't define NONBLOCKING_RNG_AVAILABLE on NetBSD?

zooko commented 2013-04-11 06:39:53 +00:00
Author
Owner

Replying to zooko:

> Is that the bug? Should we add some conditions to config.h so that it won't define NONBLOCKING_RNG_AVAILABLE on NetBSD?

Well, no, the only effect of defining NONBLOCKING_RNG_AVAILABLE (on unix) is to define a class that reads from /dev/urandom:

osrng.cpp: https://tahoe-lafs.org/trac/pycryptopp/browser/git/src-cryptopp/osrng.cpp?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L44

So, hold on, what's the behavior on NetBSD again? Reconsider all of the above in light of the fact that pycryptopp has been reading exclusively from /dev/urandom on NetBSD and never from /dev/random all this time.

So in that case, midnightmagic's observations imply that when the entropy pool has been sucked dry, then something about reading from /dev/urandom causes Crypto++ to generate inconsistent internal values. This is doubly weird, because:

(a) reading from /dev/urandom should not be detectably (to the Crypto++ code) different whether the entropy pool is brimming or dry, right? Or is there something really different about NetBSD 5.x /dev/urandom compared to Linux /dev/urandom here?

and

(b) no matter what the result of reading from /dev/urandom (in osrng.cpp: https://tahoe-lafs.org/trac/pycryptopp/browser/git/src-cryptopp/osrng.cpp?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L44), this shouldn't cause Crypto++ to generate internally inconsistent values for its RSA digital signatures.

Note that the reads from /dev/urandom (https://tahoe-lafs.org/trac/pycryptopp/browser/git/src-cryptopp/osrng.cpp?annotate=blame&rev=9c884d4ea2c75bc47dc49d4c404bfc5a9fc3b437#L77) check whether the OS returned the expected number of bytes as the return value from read().

I don't see how any possible behavior of the OS's read() call could cause the observed failure in Crypto++. The only thing that I can imagine causing this result would be if read() returned (returning the expected "number of bytes read" -- size) and then later the output buffer output got overwritten by the kernel in the middle of Crypto++'s computations using that output buffer.

Samuel Neves suggested something on IRC to the effect that stack corruption could also explain the observed fault. Oh, I wonder if the kernel could sometimes be buffer-overrunning output? Copying more than size bytes into it?

midnightmagic: could you please run the tests with the https://github.com/zooko/pycryptopp/commits/debug-netbsd-rsa-2 patches applied? Thanks!

zooko commented 2013-04-11 06:55:10 +00:00
Author
Owner

Awesome -- I remembered that I can test builds on midnightmagic's computer without his intervention, by pushing patches to github and then telling the buildbot to build that branch. If midnightmagic wants to experiment with the debug-netbsd-2 branch, that would be good, too, but it is building right now on the buildbot:

https://tahoe-lafs.org/buildbot-pycryptopp/waterfall?last_time=1365663289

zooko commented 2013-04-11 07:06:57 +00:00
Author
Owner

Replying to zooko:

> Awesome -- I remembered that I can test builds on midnightmagic's computer without his intervention, by pushing patches to github and then telling the buildbot to build that branch. If midnightmagic wants to experiment with the debug-netbsd-2 branch, that would be good, too, but it is building right now on the buildbot:
>
> https://tahoe-lafs.org/buildbot-pycryptopp/waterfall?last_time=1365663289

Well, damn. My "look for buffer overrun from read()" hack did not report any buffer overrun, but the internal error in Crypto++ still happens:

https://tahoe-lafs.org/buildbot-pycryptopp/builders/MM%20netbsd5%20i386%20warp/builds/150

zooko commented 2013-04-11 07:07:29 +00:00
Author
Owner

I'm stumped. Help!

zooko commented 2013-04-11 07:10:13 +00:00
Author
Owner

Huh, and it also passed by all of my clever checks for internal consistency -- https://github.com/zooko/pycryptopp/commit/a7f5955a576734396a54f5c10497c84018022691 -- and yet still triggered on the final Crypto++ check for internal consistency! https://tahoe-lafs.org/buildbot-pycryptopp/builders/MM%20netbsd5%20i386%20warp/builds/150/steps/bench/logs/stdio Curiouser and curiouser!

I'm still stumped, and I still need help.

zooko commented 2013-04-11 07:39:37 +00:00
Author
Owner

Okay, here's another experiment. As described over at pycryptopp #85 (https://tahoe-lafs.org/trac/pycryptopp/ticket/85), I just triggered a build of a patch to define CRYPTOPP_DISABLE_ASM=1.

midnightmagic commented 2013-04-11 08:10:09 +00:00
Author
Owner

The benchmark is or was using /dev/random, which is blocking:

<class 'pycryptopp.bench.bench_sigs.RSA2048'>
generate key
best: 5.075e+01, 3th-best: 1.051e+02, mean: 1.349e+02, 3th-worst: 1.540e+02, worst: 2.239e+02 (of 8)
sign
Cterminate called after throwing an instance of 'CryptoPP::OS_RNG_Err'
  what():  OS_Rng: read /dev/random operation failed with error 4
Abort trap (core dumped)

NOTE later: never mind, a fresh ktrace shows /dev/urandom is being used now. It's possible the above was an artifact of an old test. How does one delete one of these comments anyway?

Author
Owner

I am trying to follow this and so far it looks like there is some subtle bug which has not been found. If you do find behavior in NetBSD that you think is actually wrong please let me know.

Another thing to keep in mind is that NetBSD has opencrypto(9) support (originally from OpenBSD, I think), which lets kernel crypto operations be offloaded to hardware coprocessors or other cpus. There is also support for openssl(3) to do offload via a /dev node. So in testing, one may want to disable that, as it's another source of complexity and possible bugs.

Another thing would be to try to bisect the code path in the failing test to add intermediate checks, to try to find more precisely where the problem is happening.

zooko commented 2013-04-12 16:46:50 +00:00
Author
Owner

I posted an update on the Crypto++ mailing list: https://groups.google.com/forum/?fromgroups=#!topic/cryptopp-users/qGIdqp3MIgg

pycryptopp #85 appears to be on the way to fixing this bug, but it is somewhat dissatisfying since I don't know exactly why pycryptopp #85 fixes it.

zooko commented 2013-07-17 16:41:53 +00:00
Author
Owner

Just an update: this ticket is not getting much love from me because pycryptopp #85 makes it stop happening when pycryptopp compiles libcryptopp. However, pycryptopp #85 presumably has no effect when the libcryptopp was compiled by something else and pycryptopp is just linking to it, so this bug could still affect a user in that case.

Also, of course, this could actually be a bug in Crypto++, a bug in pycryptopp, a bug in gcc, a bug in a backdoor in the compiler that was used to compile the compiler that was used to compile gcc, etc. Until we know what this is, we can never be sure of what it isn't.

Okay, here comes a summary of what I know about this bug. The main theme is that it is probably a bug in NetBSD v5, but we're not 100% sure of that.

The effect of the bug is that Crypto++'s internal consistency check on RSA multiplication fails, like this:

terminate called after throwing an instance of 'CryptoPP::Exception'
  what():  InvertibleRSAFunction: computational error during private key operation

Here's an example of that happening on midnightmagic's buildslave running NetBSD v5: https://tahoe-lafs.org/buildbot-pycryptopp/builders/MM%20netbsd5%20i386%20warp/builds/150/steps/bench/logs/stdio

This is reproducible on midnightmagic's machine, but only when the "entropy pool" that feeds /dev/random is depleted. Now this part is insane and means we have entered an alternate dimension in which time and space are not as we know them, because pycryptopp never reads from /dev/random! It only reads from /dev/urandom. Nevertheless, midnightmagic confirms that when the entropy pool is full, this bug never (or almost never?) manifests, and when the entropy pool is depleted, this bug is very reproducible.

midnightmagic: please confirm for the Nth time that I haven't misremembered the above.

If the above is true, then it strongly suggests a bug in the kernel which, in the case that the entropy pool is depleted when you read from /dev/urandom, corrupts some memory or something. One thing that would be strange about that is why only this particular RSA internal consistency check ever suffers ill effects from this proposed corruption.

Another explanation besides the "alternate dimension" explanation is that Russian blackhats are using midnightmagic's NetBSD 5 buildslave as a training ground for new recruits.

It would be interesting to see if anyone else can reproduce this bug on their NetBSD 5 system.

Okay, here's the next thing that I know about this bug: I added more internal consistency checks to the code and triggered midnightmagic's buildslave to run it, and what I found was that my new internal consistency checks passed (and they ran after the read from /dev/urandom), but then a few instructions later Crypto++'s original internal consistency check failed. Here, look at the code to see what I mean:

Original version from Crypto++:

http://sourceforge.net/p/cryptopp/code/433/tree/trunk/c5/rsa.cpp#l223

Version with my added internal consistency checks:

https://github.com/zooko/pycryptopp/blob/a7f5955a576734396a54f5c10497c84018022691/src-cryptopp/rsa.cpp#L224

Okay, that's the summary.

Suggested next-steps:

Someone please try to reproduce this on a NetBSD v5 machine that is unlikely to be controlled by the same Russian blackhat trainees as midnightmagic's machine. The steps to reproduce are:

git clone https://github.com/tahoe-lafs/pycryptopp.git 
cd pycryptopp
python setup.py build
while [ 1 ] ; do
  python setup.py bench
done
midnightmagic commented 2013-07-17 17:05:34 +00:00
Author
Owner

Hi zooko,

Yes, it is very reproducible, and it is exactly as reproducible as the error message from the cryptest.exe test program from crypto++. :-( It's not a bug in the kernel though, unless you consider blocking on /dev/random to be a bug. I'm not sure I do.

Further steps to reproduce are:

. TURN OFF all entropy sources via rndctl
. ACTIVELY DRAIN /dev/random by cat /dev/random > /dev/null
. WATCH to ensure that there are 0 bytes of entropy in the pool via rndctl
. (Optional) Build a bunch of other stuff that uses /dev/random during their test suites
daira commented 2013-07-17 17:36:01 +00:00
Author
Owner

Zooko: how do you know that pycryptopp never reads from /dev/random?

zooko commented 2013-07-17 20:17:38 +00:00
Author
Owner

Replying to daira:

> Zooko: how do you know that pycryptopp never reads from /dev/random?

comment:132544 and comment:14 are my notes about reading the source code and concluding that pycryptopp's use of the OS's entropy source was solely through /dev/urandom. Also midnightmagic checked with ktrace and found only use of /dev/urandom (comment:132551).

zooko commented 2013-07-17 20:19:51 +00:00
Author
Owner

Oh, one more important clue that I neglected to include in my summary (comment:132554), is that I made a version that replaced the OS's entropy source with an in-process deterministic PRNG, and that version ran for many hours on midnightmagic's machine without triggering this exception.

That version is visible here: https://github.com/zooko/pycryptopp/commit/ce369bfe97e67a94f9a02752f348f8de40763053

daira commented 2013-07-17 20:51:54 +00:00
Author
Owner

Maybe there is a bug in the affected NetBSD versions' /dev/urandom code that returns a particular pattern, say all-zeroes, when the /dev/random entropy pool is empty, and another bug in the Crypto++ RSA code that causes it to fail with that pattern?

I'm clutching at straws :-/

daira commented 2013-07-17 20:54:38 +00:00
Author
Owner

How about logging to a file the randomness returned by the Crypto++ RNG when the bug is reproducible, and checking whether it seems to be actually random?
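As a starting point, even a crude sanity check over such a log would catch the all-zeroes case and gross skew (a sketch only; the log file name is hypothetical and this is nowhere near a proper statistical test):

```
// Crude sanity check over a logged RNG byte stream: flag an all-zero dump,
// a wildly skewed byte histogram, or a very long run of one value.
#include <cstdio>

int main(int argc, char **argv) {
    std::FILE *f = std::fopen(argc > 1 ? argv[1] : "rng-log.bin", "rb");
    if (!f) { std::perror("open"); return 1; }

    long count[256] = {0}, total = 0, run = 0, maxrun = 0;
    int c, prev = -1;
    while ((c = std::fgetc(f)) != EOF) {
        ++count[c];
        ++total;
        run = (c == prev) ? run + 1 : 1;
        if (run > maxrun) maxrun = run;
        prev = c;
    }
    std::fclose(f);

    long maxcount = 0;
    for (int i = 0; i < 256; ++i)
        if (count[i] > maxcount) maxcount = count[i];

    std::printf("%ld bytes, most common byte seen %ld times, longest run %ld\n",
                total, maxcount, maxrun);
    if (total > 0 && (count[0] == total || maxrun > 64))
        std::printf("WARNING: this does not look random\n");
    return 0;
}
```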

zooko commented 2013-07-17 21:14:03 +00:00
Author
Owner

Replying to daira:

> How about logging to a file the randomness returned by the Crypto++ RNG when the bug is reproducible, and checking whether it is actually random?

Good idea.

midnightmagic commented 2013-07-19 21:46:01 +00:00
Author
Owner

midnightmagic current TO-DO:

. i386 bare hardware repro
. both a netbsd-5 kernel and a netbsd-6 kernel, with RND_VERBOSE and RND_DEBUG enabled, and with `int rnd_debug = 0xf;' in sys/dev/rnd.c (netbsd-5) or sys/kern/kern_rndq.c (netbsd-6)
. (optional) As for the RSA issue -- can you figure out how to print the Integer objects in a way that you can reliably read back in a test program that does the same computation as InvertibleRSAFunction::CalculateInverse?  (m_n, m_e, r, rInv, re (before and after modn.Multiply), y)
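
For the last item, I believe CryptoPP::Integer can be streamed as text and reconstructed from that string with its const char * constructor, so something like the following might work (the header paths and the exact round-trip format are assumptions worth checking against the Crypto++ docs):

```
// Sketch: dump Crypto++ Integer values as text from the failing run and read
// them back in a separate test program. Assumes Integer's ostream operator
// and its const char* constructor round-trip (hex output carries an 'h'
// suffix that the constructor recognizes); the header path is an assumption.
#include <cryptopp/integer.h>
#include <cryptopp/osrng.h>
#include <iostream>
#include <sstream>

int main() {
    CryptoPP::AutoSeededRandomPool rng;
    CryptoPP::Integer n(rng, 256);                 // stand-in for m_n, r, rInv, re, y, ...

    std::ostringstream out;
    out << std::hex << n;                          // e.g. "1a2b...h"
    std::cerr << "dumped: " << out.str() << "\n";  // log this from the failing run

    CryptoPP::Integer back(out.str().c_str());     // ...and parse it back in the test program
    std::cout << (back == n ? "round-trip ok" : "round-trip FAILED") << "\n";
    return 0;
}
```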
zooko commented 2013-10-31 19:57:34 +00:00
Author
Owner

I currently consider this irreproducible, and welcome both midnightmagic and also some other independent NetBSD-lover to try to reproduce it and report back here!

zooko commented 2016-01-03 23:19:36 +00:00
Author
Owner

This bug was always very hard to reproduce, but at one point I claimed that disabling asm fixed it. Therefore, now that we've landed #85, I'm going to close this ticket as, uh, ... fixed.

tahoe-lafs added the
fixed
label 2016-01-03 23:19:36 +00:00
zooko closed this issue 2016-01-03 23:19:36 +00:00
zooko commented 2016-01-03 23:20:19 +00:00
Author
Owner
I meant #85 over on pycryptopp: <https://tahoe-lafs.org/trac/pycryptopp/ticket/85>
tahoe-lafs removed the
fixed
label 2016-01-14 17:54:20 +00:00
daira reopened this issue 2016-01-14 17:54:20 +00:00
tahoe-lafs added the
cannot reproduce
label 2016-01-14 17:55:01 +00:00
daira closed this issue 2016-01-14 17:55:01 +00:00