germamassive.blogg.se

Random number generator gnu octave
  1. RANDOM NUMBER GENERATOR GNU OCTAVE CODE
  2. RANDOM NUMBER GENERATOR GNU OCTAVE SERIES

So as long as it is about addressing the bias, and as long as nobody finds a bug in my implementation, I still think it can be used. Thinking about what to do when you really want to beat Matlab: I think you meant that rand should keep returning doubles within the unit interval, because this is in all likelihood what the Mersenne twister internally returns. And further, for beating Matlab with something "new", I would propose changing the underlying PRNG to one of the PCG family, which are among both the fastest and "randomest" PRNGs available today. I see that Melissa O'Neill even has a very recent post about just the problem we are discussing here. My suggestion would be to implement the functionality in liboctave, either in one file or in a combination of two. The interpreter then needs a way to access the new functionality. Going forward, I have other bug reports that I want to squash, so I won't be working on this.
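For reference, the PCG family mentioned above fits in a few lines; this is a simplified pcg32 sketch in the spirit of O'Neill's reference implementation (the seeding here is my own simplification, not Octave code):

```cpp
#include <cstdint>

// Simplified PCG32 sketch: a 64-bit linear congruential state is
// advanced on every call, and the *old* state is permuted
// (xorshift + data-dependent rotate) to produce the 32 output bits.
struct pcg32
{
  uint64_t state;

  explicit pcg32 (uint64_t seed)
    : state (seed + 1442695040888963407ULL)
  { next (); }

  uint32_t next ()
  {
    uint64_t old = state;
    state = old * 6364136223846793005ULL + 1442695040888963407ULL;
    uint32_t xorshifted = (uint32_t) (((old >> 18) ^ old) >> 27);
    uint32_t rot = (uint32_t) (old >> 59);
    return (xorshifted >> rot) | (xorshifted << ((32 - rot) & 31));
  }
};
```

The state transition is a plain LCG; it is the output permutation that gives PCG its statistical quality and speed.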


> tic; a = randi_ (6004799503160661, 1000000, 1); toc

About a factor of 2.2 slower than the simple biased m-file when one in three values has to be rejected, dropping to a factor of 1.6 when practically nothing is rejected. The slowing-down with rejection cannot be avoided, and a factor of 1.6 for an m-file is not bad, I think.

RANDOM NUMBER GENERATOR GNU OCTAVE CODE

Thus the probability for the loop having to be evaluated a second time is probably about 1e-20. That's what I mean by "practically always". Yes, a compiled function will always be faster than a vectorized m-file, mainly because with vectorization you have to repeatedly retrieve and store your intermediate vectors back to main memory, while in a loop you only have to store the final value; everything else happens in registers. Further, when you directly use randi53(), you obviate the cast to double and subsequent division that happen in rand(), and you do not have to undo the division. I think that these are the main reasons why your code is faster, not so much that the arxiv algorithms are better (because they are actually the same).
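For illustration, a 53-bit integer of the kind randi53() provides can be assembled from two 32-bit draws of the twister; this is the standard genrand_res53-style construction, sketched here with std::mt19937 standing in for Octave's internal generator (the function name is mine):

```cpp
#include <cstdint>
#include <random>

// Sketch: build a 53-bit random integer from two 32-bit draws
// (27 high bits + 26 low bits) -- the same construction that rand()
// then divides by 2^53 to get a double.  Using the integer directly
// avoids that cast and division.
inline uint64_t
randi53_sketch (std::mt19937& gen)
{
  uint64_t a = gen () >> 5;   // 27 bits
  uint64_t b = gen () >> 6;   // 26 bits
  return (a << 26) | b;       // uniform in [0, 2^53)
}
```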


Both use rejection: for instance, when you have a generator for integers in a 16-bit range and you want to generate integers in a smaller range, you first generate a source integer, which you reject and regenerate iteratively if it falls within the sliver that would bias the result. So now you have an unbiased integer r, and you get your output by doing either mod (r, 10) or floor (r/6553) (when you are doing integer arithmetic, you can drop the floor). What is discussed in the arxiv manuscript is how you can do that while keeping the number of integer divisions low. So actually all the implementations have unbounded worst-case behaviour, because they all use rejection. Yes, there is a loop, but it does not loop over the elements. Instead, I first generate more numbers than I expect to need, do everything vectorized, and only when at the end I see that the number of generated numbers is not large enough because I had to reject too many do I try again. I have the comment "should practically always be true" in the code because I use an excess that is 10 times the expected standard deviation of the number of rejections. In Octave, normcdf (10) evaluates to exactly 1, while 1 - normcdf (8) is about 1e-15.
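The 16-bit example above can be sketched as follows (the function name and the use of std::mt19937 as the source are mine):

```cpp
#include <cstdint>
#include <random>

// Rejection sketch for the example above: source integers are 16-bit
// (0..65535), target range is 0..9.  The values 65530..65535 would
// make either mapping biased, so they are rejected and redrawn.
inline uint32_t
unbiased_0_to_9 (std::mt19937& gen)
{
  const uint32_t n = 10;
  const uint32_t limit = 65536 - 65536 % n;  // = 65530 = 10 * 6553
  uint32_t r;
  do
    r = gen () & 0xFFFFu;                    // 16-bit source integer
  while (r >= limit);                        // reject the biasing sliver
  return r % n;                              // or r / 6553, the division variant
}
```

Both mappings are unbiased once the sliver above 65529 is rejected; the question treated in the arxiv manuscript is how to arrange this so that the mod/division is executed as rarely as possible.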

RANDOM NUMBER GENERATOR GNU OCTAVE SERIES

Total time = time_to_create_random_integer + time_map_integer_onto_range. Running the openbsd algorithm in randi_rej.cpp with a loop of the form for (octave_idx_type i = 0; …) and averaging the timings with mean (bm(2:end)), the random number generation is about 43% of the total time and the mapping is 57% of the time. Since it is so balanced, I think there are opportunities for improvement in both halves. If I run the nearlydivisionless algorithm on a range where it works, imax = 9, the results are impressive. But since it didn't work for all values of imax, I can't advocate for it. Rik, I have to comment on a number of points in your series of comments. First, I would say that there are just two main variants of a single algorithm for deriving unbiased random integers in an arbitrary range from a source of unbiased random integers in a given range.
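For comparison, the nearlydivisionless algorithm can be sketched like this for a 32-bit source (std::mt19937 stands in for the actual generator; the function name is mine):

```cpp
#include <cstdint>
#include <random>

// Lemire's "nearly divisionless" scheme, sketched: multiply a 32-bit
// draw by the range size s (s >= 1) and keep the high 32 bits.  Only
// when the low 32 bits fall below s is a "% s" needed to compute the
// rejection threshold; otherwise no division is executed at all.
inline uint32_t
nearlydivisionless (std::mt19937& gen, uint32_t s)
{
  uint64_t m = (uint64_t) gen () * s;
  uint32_t l = (uint32_t) m;
  if (l < s)                        // rare slow path
    {
      uint32_t t = (0u - s) % s;    // 2^32 mod s
      while (l < t)                 // reject and redraw
        {
          m = (uint64_t) gen () * s;
          l = (uint32_t) m;
        }
    }
  return (uint32_t) (m >> 32);      // unbiased in [0, s)
}
```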


I saw that the loop would be executed infrequently, but didn't continue to calculate how infrequently that might be. But what I had in mind was column 3 of Table 1, "Maximal number of remainders per integer", which is Infinity for the Java algorithm; that is why I wasn't interested in it.
