> > I currently have a problem with generating random numbers (/dev/random
> > blocks for an undefined amount of time) which slows things very much.
> > That could explain the 'unpredictable' factor: maybe you don't have enough
> > random numbers on your system. But do a strace and it'll help you
> > understand where the process 'hangs'.
>Oh, yeah, this happened to me too!
>I recompiled APR with "--with-devrandom=/dev/urandom", then recompiled
>my Subversion client against that APR, and the problem went away.
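For reference, the rebuild amounts to something like the following (the
directory names and install prefix are illustrative; adjust them for your
own source trees):

```shell
# Rebuild APR so its random-byte source is the non-blocking
# /dev/urandom instead of /dev/random.
cd apr-x.y.z
./configure --with-devrandom=/dev/urandom
make && make install

# Then rebuild Subversion against the recompiled APR
# (assuming it was installed under /usr/local/apr).
cd ../subversion-x.y.z
./configure --with-apr=/usr/local/apr
make && make install
```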
I have thought about this problem, and I would like to argue that
"/dev/urandom" (or better yet, the alternate timer-based implementation)
should be the *default*, rather than a nasty "gotcha" for unsuspecting
users. This code is called from APR's getuuid.c (which generates the
UUIDs), and the UUID spec referenced there does not require the numbers
to be cryptographically secure.
In my case, the /dev/random issue turned out to be a design problem
in the 2.4 Linux kernel (see below). I've also seen several newsgroup
discussions of improper default configurations for /dev/random,
so this situation can occur in several ways.
Although some people will figure it out and begrudgingly recompile
their server, I think most admins will just conclude that Subversion
is inefficient by design. Which, I argue, is not far from the truth
if /dev/random is kept as the default. ;-)
Date: Tue, 15 Feb 2005 10:59:38 -0500
From: Theodore Ts'o <firstname.lastname@example.org>
To: Pete Gonzalez <email@example.com>
Subject: Re: Question regarding /dev/urandom design
> I'm writing with a question regarding your random.c driver for the
> Linux kernel. It would appear that the intent of "/dev/urandom"
> is to supply random numbers for applications that need immediate
> results, and where quality can be sacrificed for quantity. However,
> it appears that when an application reads from /dev/urandom
> frequently, other applications reading from /dev/random will hang
> indefinitely because the entropy is being constantly depleted.
> If so, then IMO this defeats the purpose of "/dev/urandom", since
> although it offers an unlimited resource for the calling application,
> it is still a limited resource from the OS's perspective. A possible
> solution would be to maintain a separate entropy pool for
> "/dev/urandom", or maybe somehow the "/dev/random" requests can
> be prioritized over "/dev/urandom".
It is a separate entropy pool in 2.6, but people still use it wrong,
because they seem to use it instead of a cryptographic random
generator, which is what most of them *really* seem to want. All you
should do is read 16 bytes from /dev/random, and use it to seed a
SHA-based random number generator --- which you use in userspace! It's
faster, and really the right answer.
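A minimal sketch of the approach Ted describes (in Python for
illustration; the class name and structure are mine, not anything from
APR): seed once from the kernel, then expand the seed in userspace with
a hash in counter mode, so the kernel entropy pool is touched only once.

```python
import hashlib
import os

class Sha256Rng:
    """Userspace PRNG: SHA-256 in counter mode over a fixed seed."""

    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def read(self, n: int) -> bytes:
        """Return n pseudo-random bytes derived from the seed."""
        out = b""
        while len(out) < n:
            # Each block is SHA-256(seed || counter); the counter
            # guarantees every block is distinct.
            block = hashlib.sha256(
                self.seed + self.counter.to_bytes(8, "big")
            ).digest()
            out += block
            self.counter += 1
        return out[:n]

# Seed once from the kernel (os.urandom here as a non-blocking
# stand-in for a 16-byte read of /dev/random), then generate as many
# bytes as needed without depleting the entropy pool again.
rng = Sha256Rng(os.urandom(16))
uuid_bytes = rng.read(16)
```

With a separate pool per consumer like this, a UUID generator can
produce output at hash speed while /dev/random stays available for the
callers that genuinely need kernel entropy.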