Re: RandomDirichlet?
On 11/2/2011 4:39 AM, Roedy Green wrote:
> On Tue, 01 Nov 2011 20:23:28 -0400, Eric Sosman
> <esosman@ieee-dot-org.invalid> wrote, quoted or indirectly quoted
> someone who said :
>> The completely general approach is to find the cumulative
>> distribution function F(x) = P(X < x), then find its inverse F[-1],
>> then evaluate X = F[-1](R) for a uniformly distributed R. The
>> rest is "just numerical analysis" ...
> That's simpler than I thought it would be. The hard part is
> discovering an inverse function. I suppose you could always implement
> one by interpolation of the original function.
Interpolation, or any other approximation/calculation method.
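
If F is continuous and strictly increasing you don't even need an
explicit inverse: bisect until the bracket collapses. A quick sketch
(untested; the class name and the bracketing interval are mine, and it
assumes Java 8's DoubleUnaryOperator):

    import java.util.Random;
    import java.util.function.DoubleUnaryOperator;

    /** Sketch: invert a continuous, strictly increasing CDF by
     *  bisection.  The bracket [lo, hi] must satisfy
     *  F(lo) <= u <= F(hi); widen it first if necessary. */
    public class NumericInverter {
        public static double invert(DoubleUnaryOperator cdf,
                                    double u, double lo, double hi) {
            for (int i = 0; i < 60; i++) {   // ~60 halvings exhaust a double
                double mid = 0.5 * (lo + hi);
                if (cdf.applyAsDouble(mid) < u) lo = mid;
                else                            hi = mid;
            }
            return 0.5 * (lo + hi);
        }

        public static void main(String[] args) {
            // Standard logistic CDF; check against its closed-form
            // inverse log(u / (1 - u)).
            DoubleUnaryOperator logistic = x -> 1.0 / (1.0 + Math.exp(-x));
            double u = new Random().nextDouble();
            System.out.println(invert(logistic, u, -50.0, 50.0));
            System.out.println(Math.log(u / (1.0 - u)));
        }
    }
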
There's a slight snag when F is not strictly increasing (as
for any discrete distribution), because then F[-1] doesn't really
exist: It has vertical lines corresponding to F's horizontal
sections. But that's easily dealt with: Replace each vertical with
a "closed" point at one end and an "open" point at the other, and
you've got a function that generates the desired answer.
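
For a discrete distribution the whole business boils down to a
table search. A sketch (untested; the class name and probabilities
array are mine, just for illustration):

    import java.util.Random;

    /** Sketch: inverse-CDF sampling from a discrete distribution
     *  over {0, 1, ..., n-1}. */
    public class DiscreteSampler {
        private final double[] cumulative;   // cumulative[i] = P(X <= i)
        private final Random rng = new Random();

        public DiscreteSampler(double[] probabilities) {
            cumulative = new double[probabilities.length];
            double sum = 0.0;
            for (int i = 0; i < probabilities.length; i++) {
                sum += probabilities[i];
                cumulative[i] = sum;
            }
        }

        /** Smallest i with u < cumulative[i]: exactly the "closed at
         *  one end, open at the other" reading of F[-1]'s verticals. */
        public int next() {
            double u = rng.nextDouble();      // uniform in [0,1)
            for (int i = 0; i < cumulative.length - 1; i++) {
                if (u < cumulative[i]) return i;
            }
            return cumulative.length - 1;     // absorbs rounding error
        }
    }

A binary search on the cumulative array beats the linear scan when
there are many outcomes.
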
Most of the snazzy methods one learns for generating variates
from various distributions are, in essence, tricks or shortcuts that
avoid the need to evaluate F[-1] explicitly. Even for some "simple"
distributions F[-1] can be unattractive and expensive to evaluate.
Even F itself might have no closed form in elementary functions,
never mind its inverse! When random variates are required in great
quantity (as in Monte Carlo integration, for example), both the
speed and accuracy of generation become important -- which is what
motivates the search for these strange-looking calculations that
"just happen" to Do The Right Thing.
Knuth describes several such methods, some in considerable
detail, in Volume II of "The Art of Computer Programming."
> I suppose I could do it for y = x + 1 as an example.
Not sure what you mean by this. To qualify as a cumulative
distribution function, the values of F must lie entirely in [0,1]
and F must be non-decreasing (that is, F(x1) <= F(x2) for all pairs
x1 < x2).
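
For a concrete case where everything does have a closed form, take
F(x) = 1 - e^-x (the unit exponential): solving u = F(x) gives
F[-1](u) = -ln(1 - u). A sketch (untested):

    import java.util.Random;

    /** Sketch: inverse-CDF sampling for the unit exponential,
     *  F(x) = 1 - e^-x, whose inverse is -ln(1 - u). */
    public class ExponentialDemo {
        public static void main(String[] args) {
            Random rng = new Random();
            for (int i = 0; i < 5; i++) {
                double u = rng.nextDouble();     // uniform in [0,1)
                System.out.println(-Math.log(1.0 - u));
            }
        }
    }
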
> Do you have any free histogram code to recommend to demonstrate that
> it works?
Again, I'm not sure what you seek. Dumping a bunch of samples
into a histogram and staring at the shape may suggest something, but
doesn't really "demonstrate" it. Calculating a chi-squared statistic
can give an estimate of how likely it is that a particular histogram
shape would arise from a hypothesized distribution, but again that's
not what I'd call a "demonstration."
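
The statistic itself is a one-liner per bin; the work is looking up
the tail probability for the right degrees of freedom. A sketch
(untested; the class and method names are mine):

    /** Sketch: Pearson's chi-squared statistic comparing observed
     *  bin counts against counts expected under the hypothesized
     *  distribution.  Compare the result against the chi-squared
     *  distribution with (bins - 1) degrees of freedom. */
    public class ChiSquared {
        public static double statistic(long[] observed, double[] expected) {
            double chi2 = 0.0;
            for (int i = 0; i < observed.length; i++) {
                double diff = observed[i] - expected[i];
                chi2 += diff * diff / expected[i];
            }
            return chi2;
        }
    }
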
Histograms with foreordained bin boundaries are pretty easy to
code. Histograms with dynamically-chosen boundaries (percentiles, say)
are easy if you can afford to store the observations and sort them
afterward. And (lemme look it up; ah, here) in the October 1985 issue
of CACM there's an article by Raj Jain and Imrich Chlamtac describing
how to get dynamically-chosen histogram *estimates* without storing
the samples (their "P-squared" algorithm). I don't know if code's
available; Jain and Chlamtac certainly weren't writing Java in 1985!
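
For the fixed-boundary case, something along these lines will do
(untested; names are mine):

    /** Sketch: histogram with fixed ("foreordained") equal-width
     *  bins on [lo, hi); out-of-range values go to overflow counts. */
    public class Histogram {
        private final double lo, width;
        private final long[] bins;
        private long under, over;

        public Histogram(double lo, double hi, int nBins) {
            this.lo = lo;
            this.width = (hi - lo) / nBins;
            this.bins = new long[nBins];
        }

        public void add(double x) {
            int i = (int) Math.floor((x - lo) / width);
            if (i < 0)                 under++;
            else if (i >= bins.length) over++;
            else                       bins[i]++;
        }

        public long count(int bin) { return bins[bin]; }
    }
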
--
Eric Sosman
esosman@ieee-dot-org.invalid