http://rjlipton.wordpress.com/2012/05/04/digital-butterflies-and-prgs/

I'd be interested in reactions---and corrections if any.

At the very least you can learn about the work of scientist George Marsaglia, who even studied with Alan Turing.

Analysis by Toga II 1.4 beta5c: (Fritz 11 GUI)

75.Ke7-d6 Qe2-b2 76.Qc3-d3+ Kd1-e1 77.Kd6-e6 Qb2-g2

+/= (0.27) Depth: 6/22 00:00:00 17kN

75.Ke7-d6 Qe2-b2 76.Qc3-c4 Qb2-h2+ 77.Kd6-c6 Qh2-h7 78.Kc6-c5

+/= (0.28) Depth: 7/25 00:00:00 54kN

75.Ke7-d6 Qe2-b2 76.Qc3-d3+ Kd1-e1 77.Kd6-e6 Qb2-h2 78.Qd3-c3+ Ke1-f1 79.Qc3-c1+ Kf1-g2 80.Qc1-d2+ Kg2-h3 81.Qd2xh2+ Kh3xh2

+/= (0.28) Depth: 8/28 00:00:00 200kN

75.Ke7-d6 Qe2-b2 76.Qc3-d3+ Kd1-e1 77.Be3-f4 Ke1-f2 78.Bf4-e5 Qb2-b3 79.Be5-d4+ Kf2-g2 80.Nd5-f4+ Kg2-h2 81.Qd3xb3 Rb7xb3

+/= (0.29) Depth: 9/34 00:00:00 316kN

75.Ke7-d6 Qe2-b2 76.Qc3-d3+ Kd1-e1 77.Be3-f4 Ke1-f2 78.Qd3-g3+ Kf2-f1 79.Qg3-f3+ Kf1-e1 80.Bf4-e5 Rb7-b6+ 81.Nd5xb6 Qb2xb6+ 82.Kd6-d7 Qb6-b5+ 83.Kd7-e6

+/= (0.54) Depth: 10/38 00:00:00 853kN

75.Ke7-d6 Qe2-b2 76.Qc3-d3+ Kd1-e1 77.Qd3-e4 Ke1-f1 78.Qe4-h1+ Kf1-e2 79.Be3-c1 Rb7-b6+ 80.Nd5xb6 Qb2xb6+ 81.Kd6-e5 Qb6-b5+ 82.Ke5-f6 Qb5-a6+ 83.Kf6-f5 Qa6-b5+ 84.Kf5-g4 Qb5-d7+ 85.Kg4-g3 Qd7-d3+ 86.Kg3-g2

+/= (0.47) Depth: 11/38 00:00:01 1480kN

75.Ke7-d6 Qe2-b2 76.Qc3-d3+ Kd1-e1 77.Be3-d4 Rb7-b3 78.Nd5-c3 Qb2-h2+ 79.Bd4-e5 Rb3-b6+ 80.Kd6-c5 Qh2xe5+ 81.Kc5xb6 Qe5-e7 82.Nc3-e4 Qe7-b4+ 83.Kb6-c6 Qb4-a4+ 84.Kc6-d6

+/= (0.55) Depth: 12/42 00:00:02 3834kN

75.Ke7-d6 Qe2-b2 76.Qc3-d3+ Kd1-e1 77.Be3-g5 Rb7-b4 78.Kd6-c6 Ke1-f2 79.Bg5-e3+ Kf2-e1 80.Nd5xb4 Qb2xb4 81.Qd3-c2 Qb4-a5

+/= (0.58) Depth: 13/46 00:00:05 8392kN

75.Ke7-d6 Rb7-b1 76.Qc3-d4+ Kd1-c2 77.Qd4-c5+ Kc2-b3 78.Qc5-c3+ Kb3-a4 79.Qc3-d4+ Ka4-a3 80.Qd4-a7+ Ka3-b3 81.Qa7-b6+ Kb3-c2 82.Qb6-c6+ Kc2-b3 83.Qc6-b7+ Kb3-c2 84.Qb7-c7+ Kc2-b3 85.Qc7-b8+ Kb3-c2 86.Qb8xb1+ Kc2xb1 87.Nd5-c3+ Kb1-c2 88.Nc3xe2

+- (6.90) Depth: 14/48 00:00:18 28432kN

75.Ke7-d6 Rb7-b1 76.Qc3-d4+ Kd1-c2 77.Qd4-c5+ Kc2-b3 78.Qc5-c3+ Kb3-a2 79.Qc3-a5+ Ka2-b3 80.Qa5-b4+ Kb3-c2 81.Qb4-a4+ Rb1-b3 82.Qa4-c6+ Kc2-b1 83.Nd5-c3+ Rb3xc3 84.Qc6xc3 Qe2-h2+ 85.Kd6-e6 Qh2-h3+ 86.Ke6-e7 Qh3-h7+ 87.Ke7-d8 Qh7-c2 88.Qc3-e5

+/= (0.52) Depth: 15/54 00:01:01 93746kN

75.Ke7-d6 Rb7-b1 76.Qc3-d4+ Kd1-c2 77.Qd4-c5+ Kc2-b3 78.Qc5-c3+ Kb3-a2 79.Qc3-a5+ Ka2-b3 80.Qa5-b4+ Kb3-c2 81.Qb4-a4+ Rb1-b3 82.Kd6-c5 Qe2-d1 83.Qa4-a2+ Kc2-d3 84.Qa2-a6+ Kd3-c2 85.Qa6-c4+ Kc2-b1 86.Nd5-c3+ Rb3xc3 87.Qc4xc3 Kb1-a2

+/= (0.54) Depth: 16/54 00:02:00 181mN

75.Ke7-d6 Rb7-b1 76.Qc3-d4+ Kd1-c2 77.Qd4-c5+ Kc2-b3 78.Qc5-b4+ Kb3-c2 79.Qb4-a4+ Rb1-b3 80.Qa4-c6+ Kc2-b1 81.Nd5-c3+ Rb3xc3 82.Qc6xc3 Qe2-h2+ 83.Kd6-e6 Qh2-c2 84.Qc3-e5 Qc2-g6+ 85.Ke6-d7 Kb1-c2 86.Be3-c5 Qg6-d3+ 87.Kd7-c7 Qd3-h7+ 88.Kc7-c6

+/= (0.53) Depth: 17/60 00:04:25 400mN

75.Ke7-d6 Rb7-b1 76.Qc3-d4+ Kd1-c2 77.Qd4-c5+ Kc2-b3 78.Qc5-b4+ Kb3-c2 79.Qb4-a4+ Rb1-b3 80.Qa4-c6+ Kc2-b1 81.Qc6-c1+ Kb1-a2 82.Nd5-c3+ Rb3xc3 83.Qc1xc3 Qe2-h2+ 84.Kd6-c6 Qh2-g2+ 85.Kc6-c5 Qg2-b2 86.Qc3-c4+ Qb2-b3 87.Qc4-e4 Qb3-a3+ 88.Kc5-b5 Qa3-b2+ 89.Kb5-c6 Qb2-f6+ 90.Kc6-d5 Qf6-f7+ 91.Kd5-c5 Qf7-f8+ 92.Kc5-b5 Qf8-b8+ 93.Kb5-a5

+/= (0.52) Depth: 18/62 00:11:10 1016mN

Doesn't look like a hash collision just after 37k nodes searched. Maybe engines find this position winning for White at first look.

[Event "Move 74 WTM orig (WKe6)"] [Site "?"] [Date "2008.01.05"] [Round "?"] [White "Topalov"] [Black "Kramnik"] [Result "1/2-1/2"] [Annotator "Regan,Kenneth"] [SetUp "1"] [FEN "8/8/4K3/3N4/8/2Q1B3/4q3/1r1k4 w - - 0 74"] [PlyCount "10"] [EventDate "2008.??.??"]

74. Ke7 Rb7+ 75. Kd6 Rb1 76. Qd4+ Kc2 77. Qc5+ Kb3 78. Qc3+ Ka4 1/2-1/2

Analysis by Toga II 1.4 beta5c:

79.Qc3-c6+ Ka4-a3 80.Qc6-a8+ Ka3-b3 81.Qa8-b7+ Kb3-c2 82.Qb7-c7+ Kc2-b3 83.Qc7-b6+ Kb3-c2 84.Qb6xb1+ Kc2xb1 85.Nd5-c3+ Kb1-c2 86.Nc3xe2

+- (6.90) Depth: 6/20 00:00:00 37kN

79.Qc3-c6+ Ka4-a3 80.Be3-c5+ Ka3-b3 81.Qc6-b6+ Kb3-c2 82.Qb6xb1+ Kc2xb1 83.Nd5-c3+ Kb1-c2 84.Nc3xe2 Kc2-d3 85.Ne2-g3 Kd3-c4

+- (6.69) Depth: 7/22 00:00:00 90kN

79.Qc3-c6+ Ka4-b3 80.Qc6-b6+ Kb3-c2 81.Qb6-c7+ Kc2-b3 82.Qc7-b7+ Kb3-c2 83.Qb7-c8+ Kc2-b3 84.Qc8-b8+ Kb3-c2 85.Qb8xb1+ Kc2xb1 86.Nd5-c3+ Kb1-c2 87.Nc3xe2 Kc2-d3

+- (6.91) Depth: 8/24 00:00:00 165kN

79.Qc3-c6+ Ka4-b3 80.Qc6-b6+ Kb3-c2 81.Qb6-c7+ Kc2-b3 82.Qc7-b7+ Kb3-c2 83.Qb7-a7 Qe2-c4 84.Be3-d4 Kc2-d3 85.Qa7-a3+ Kd3-d2

+/= (0.30) Depth: 9/33 00:00:00 625kN

79.Nd5-b6+ Rb1xb6+ 80.Be3xb6 Qe2-h2+ 81.Kd6-d7 Qh2-h7+ 82.Kd7-d8 Qh7-h4+ 83.Kd8-c7 Qh4-h7+ 84.Kc7-c8 Qh7-f5+ 85.Kc8-b7 Qf5-d5+ 86.Kb7-a6 Qd5-b5+ 87.Ka6-a7 Qb5-d5 88.Qc3-c2+ Ka4-b4

+/= (0.46) Depth: 9/33 00:00:00 1017kN

79.Kd6-c7 Qe2-h2+ 80.Kc7-c6 Rb1-b8 81.Qc3-c4+ Ka4-a5 82.Be3-d2+ Qh2xd2 83.Qc4-c5+ Ka5-a4 84.Qc5-a7+ Qd2-a5 85.Qa7xb8 Qa5-a6+ 86.Kc6-c5 Qa6-a5+ 87.Kc5-c4

+/= (0.53) Depth: 9/35 00:00:02 3870kN

79.Kd6-c7 Rb1-b5 80.Qc3-a1+ Ka4-b3 81.Qa1-b1+ Qe2-b2 82.Qb1-d3+ Kb3-a4 83.Nd5-c3+ Ka4-a5 84.Nc3xb5 Qb2xb5 85.Be3-d2+ Ka5-a4 86.Qd3-c2+ Ka4-a3

+/= (0.52) Depth: 10/35 00:00:03 4733kN

79.Qc3-d4+ Ka4-a3 80.Qd4-e4 Qe2-a6+ 81.Kd6-e5 Rb1-b8 82.Be3-c5+ Ka3-b2 83.Qe4-d4+ Kb2-c2 84.Nd5-b4+ Rb8xb4 85.Bc5xb4 Qa6-d3 86.Qd4-f2+ Kc2-b3

+/= (0.53) Depth: 10/35 00:00:03 5580kN

79.Qc3-d4+ Ka4-a3 80.Qd4-e4 Qe2-a6+ 81.Kd6-e5 Qa6-b5 82.Be3-c5+ Qb5xc5 83.Qe4xb1 Qc5-c3+ 84.Ke5-d6 Qc3-c4 85.Qb1-a1+ Ka3-b3 86.Qa1-d1+ Kb3-a3 87.Qd1-f3+ Ka3-a2 88.Qf3-f2+ Ka2-a3

+/= (0.54) Depth: 11/41 00:00:07 11519kN

(05.05.2012)

Note also that the 6/20 line involves White delaying the Qxb1+ capture---I imagine that the line without such delay must have evaluated this already to a draw (K + one piece vs K needs no tablebase), so why is the draw value being ejected? In any event, that "different hash sizes cause different lines to be searched" (because of collisions, not the mere size of the table) is scientifically the point---which my proposed statistical test is aiming to exploit.

Thanks also to those trying out the directions I gave. ---Ken R.

>so why is the draw value being ejected?

What I would call the "horizon effect" is when an engine meets a difficult position and is not yet sure whether it wins, draws, or loses. The answer comes with more time or at the next depth.

6/20 means the engine searched to depth 6 thoroughly and to depth 20 only selectively: checks, retreats, captures, attacks on the queen, probably forks too. But it looks like Toga didn't give enough attention to the "quiet" fork move 86...Kd3.

[Event "Move 74 WTM orig (WKe6)"] [Site "?"] [Date "2008.01.05"] [Round "?"] [White "Topalov"] [Black "Kramnik"] [Result "1/2-1/2"] [Annotator "Regan,Kenneth"] [SetUp "1"] [FEN "8/8/4K3/3N4/8/2Q1B3/4q3/1r1k4 w - - 0 74"] [PlyCount "25"] [EventDate "2008.??.??"]

74. Ke7 Rb7+ 75. Kd6 Rb1 76. Qd4+ Kc2 77. Qc5+ Kb3 78. Qc3+ Ka4 79. Qc6+ Ka3 80. Qa8+ Kb3 81. Qb7+ Kc2 82. Qc7+ Kb3 83. Qb6+ Kc2 84. Qxb1+ Kxb1 85. Nc3+ Kc2 86. Nxe2 1/2-1/2

Analysis by Ruffian 2.1.0:

79.Qc3-c6+ Ka4-a3 80.Qc6-a8+ Ka3-b3 81.Qa8-b7+ Kb3-c2 82.Qb7xb1+ Kc2xb1 83.Nd5-c3+ Kb1-c2 84.Nc3xe2 Kc2-d3

+- (7.77) Depth: 6/18 00:00:00 56kN

79.Qc3-c6+ Ka4-b3 80.Be3-d4 Qe2-h2+ 81.Bd4-e5 Qh2-h6+ 82.Be5-f6 Qh6-h2+ 83.Kd6-d7 Qh2-h3+ 84.Qc6-e6 Qh3-d3

+- (2.25) Depth: 7/19 00:00:00 371kN

79.Qc3-d4+ Ka4-a3 80.Kd6-e7 Rb1-b7+ 81.Ke7-f6 Qe2-f3+ 82.Be3-f4 Rb7-b5 83.Nd5-c3 Rb5-h5

+- (2.80) Depth: 7/19 00:00:00 667kN

79.Qc3-d4+ Ka4-a3 80.Qd4-e4 Qe2-a6+ 81.Kd6-e5 Rb1-f1 82.Qe4-b4+ Ka3-a2 83.Nd5-c3+ Ka2-a1 84.Qb4-a4+ Qa6xa4 85.Nc3xa4 Ka1-a2 86.Na4-b6

+- (2.93) Depth: 8/22 00:00:01 1947kN

With a smaller hash table, more nodes of the tree map to the same table entry. Hashing is not merely a speed-enhancing method; it affects the result of a search. The reason is that the engine will accept scores of a position obtained (and then saved in the hash table) by a search that is *deeper* than the search that is currently requested. This leads to the phenomenon of 'grafting': effectively looking beyond the search horizon by grafting a deep sub-tree onto a branch close to the leaves. This is a beneficial effect (which is why people do not restrict hash hits to exact depth matches). But which sub-trees exactly are grafted, and what interesting, root-score-affecting tactics you might see in there, depends on which positions in the table survive the continuing replacement process of entries overwriting each other due to limited table space, which in turn depends on table size and key choice.
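As a hedged illustration (not taken from any particular engine), the depth-preferred probe rule described above can be sketched like this; the table size, tuple entry layout, and always-replace policy are simplifying assumptions:

```python
# Sketch of a transposition-table probe that accepts stored scores from
# searches *deeper* than the one currently requested -- the "grafting"
# mechanism described above. Table size and replacement policy are
# illustrative assumptions, not any specific engine's scheme.

TABLE_BITS = 20                          # 2**20 entries; index = low key bits
table = [None] * (1 << TABLE_BITS)

def tt_index(key):
    return key & ((1 << TABLE_BITS) - 1)

def tt_store(key, depth, score):
    # always-replace: whichever position is stored last wins the slot
    table[tt_index(key)] = (key, depth, score)

def tt_probe(key, requested_depth):
    entry = table[tt_index(key)]
    if entry is not None and entry[0] == key and entry[1] >= requested_depth:
        return entry[2]                  # deeper stored result accepted: a "graft"
    return None                          # miss: must search this node ourselves

tt_store(0xDEADBEEF, depth=8, score=27)
assert tt_probe(0xDEADBEEF, requested_depth=5) == 27    # shallower request: hit
assert tt_probe(0xDEADBEEF, requested_depth=12) is None # deeper request: miss
```

Which grafts survive depends on which later `tt_store` calls overwrite the slot, which is exactly the table-size and key-choice sensitivity described.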

So the 'anomaly' you see here is in fact quite normal engine behavior. In the first place, a score jumping to a totally different value in a new iteration, and later back to near the original value, is quite normal, and does not point to any program error. It can happen even in programs that do not use hash tables. It happens when the engine sees a way to 'force' the win of a piece, which on deeper inspection turns out to be poisoned. As soon as it sees the piece is poisoned, it will of course refrain from taking it, and the score drops back to the previous 'no-progress' score. Such a tactic is quite common in Chess.

That the ability to find the imagined gain can depend on the details of hashing is also quite normal, as by their very nature such gains have to occur close to the horizon. And this makes their visibility very sensitive to the grafting effect, which shifts the horizon around in a hash-dependent way. This has absolutely nothing to do with key collisions, and will occur even in the total absence of any key collisions.

*So I am sad to say that there is a very high likelihood that your paper is total nonsense*. The very least you would have to do to give it any credibility whatsoever is provide proof that key collisions actually do take place in the experiment you describe---for example, by making a version of the engine that stores the complete position in every hash entry, rather than just a key from the Zobrist scheme, and verifying on every hash hit (as decided by the key scheme) whether the position was indeed the same. (You can pack a position into 32 bytes, so this only triples the size of the hash entry, and 192MB is feasible on almost any computer nowadays.) Then you can let the program print any positions that were mis-identified due to a key collision.

Even if you find there were key collisions (which is very unlikely, as the collision rate will be something like 1 in 10^12 with 64-bit keys as Toga II uses, so only 1 in 250 hours or so), you would still have to prove this collision was responsible for the +6.9 root score. Again, this is extremely unlikely. But you could do this by changing the hash-probe code of the engine so that it suppresses the use of hashed scores, even when the key says it is a hit, whenever the full position you stored tells you the stored position is not the current position. Only if removing the key collisions made the +6.9 root score go away would you have a case.
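A minimal sketch of the verification being proposed, under assumed details: the "full position" is reduced to an opaque bytes blob, and the entry layout is a made-up example, not Toga II's actual format:

```python
# Sketch of the collision check suggested above: store the full position
# alongside the 64-bit key, and on every key hit verify that the stored
# position really is the current one. A false key hit is both counted
# (first test) and suppressed (second test).

collisions_detected = 0
TABLE_SIZE = 1 << 16
table = [None] * TABLE_SIZE

def store(key, position, depth, score):
    table[key % TABLE_SIZE] = (key, position, depth, score)

def probe(key, position, requested_depth):
    """Return a stored score, or None; suppress and count false key hits."""
    global collisions_detected
    entry = table[key % TABLE_SIZE]
    if entry is None or entry[0] != key:
        return None                       # empty slot or a different key
    if entry[1] != position:
        collisions_detected += 1          # same 64-bit key, different position!
        return None                       # suppress the false hit
    return entry[3] if entry[2] >= requested_depth else None

# Two different "positions" engineered to share the same 64-bit key:
store(42, b"position A", depth=9, score=150)
probe(42, b"position B", requested_depth=5)   # false hit is caught and logged
```

If suppressing such mis-identified hits makes the anomalous root score disappear, the collision was responsible; if the score persists, it was not.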

*Ejection*. There is no need for me to allege identity of 64-bit keys---this has nothing to do with the test I describe. If a program gets a manifestly wrong value---and gets a right value at other depths and other hash-size settings, while other programs usually get a right value---then the wrong value is an anomaly. The size of the difference will influence the propagation function I define and propose as a testing criterion, dealing with ejections.

In my player-modelling and anti-cheating work, the deviations caused by hash ejections seem to be about 0.02--0.03 in typical middlegame positions, and this is just an understood variation term in my model.

> So the latter is what we call a "collision"---when there is no more space in a probe cluster,

> then for the evicted value there is an ejection.

OK, that clears it up. But I think it still doesn't do much for your paper, as collisions in your sense have almost nothing to do with the quality of the randomness of the Zobrist key generation. And the 'almost' in this case is only to account for a grossly wrong PRNG, such as one that always returns the same number, returns 16-bit randoms where the key is 64 bits, or where you forget to initialize a subset of the keys, so they all stay zero, etc. Because even the best random generator will still cause as many second-level collisions as a moderately poor one. The collisions simply follow from the fact that you overload the hash table by about an order of magnitude, so that no matter how you hash, on average always ~10 positions compete for the same entry.
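The overload arithmetic is easy to check with a toy simulation (the sizes here are illustrative, scaled down for speed): even with a perfect generator, the average load per slot is exactly the overload factor, and only *which* positions share a slot varies.

```python
import random

ENTRIES = 1 << 14              # 16384-slot table (scaled down for a quick demo)
POSITIONS = 10 * ENTRIES       # ~10x overload, as described above

rng = random.Random(1)         # stand-in for "the best random generator"
loads = [0] * ENTRIES
for _ in range(POSITIONS):
    key = rng.getrandbits(64)  # a perfectly random 64-bit Zobrist-style key
    loads[key % ENTRIES] += 1  # second-level hashing: low bits pick the slot

average_load = sum(loads) / ENTRIES   # exactly 10, regardless of key quality
crowded = max(loads)                  # some slots get considerably more
```

No choice of generator can push `average_load` below the overload factor; replacement policy, not randomness, decides which of the ~10 competitors survives.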

The anomaly is sensitive to the details of the hashing scheme (like table size or key choice), but in a way that has nothing to do with the quality of the randomness. It is like sampling the average investment of a group of 1000 people, one of whom happens to be Bill Gates, by randomly drawing 100 and averaging over those. In 10% of the cases you will be unlucky, and Bill Gates will be in your sample, resulting in an 'anomaly': an average several hundred million dollars higher than in the other 90% of the cases. This 'anomaly' has little to do with the quality of the sampling; in fact only samplings with unacceptably poor randomness (like always selecting the same person, who happens to be someone other than Bill Gates) would be free of it.

The example in your paper suffers from exactly that problem. You constructed a Chess position for which the search tree contains many 'Bill Gates' nodes, where capturing a piece is only a short-term gain, but results in Queen loss after a royal fork, which can only be seen on deeper search. A minimax search is even more sensitive to outliers than an average, because you take maxima and minima. So the root score will depend on whether you happen to sample a graft that lets you look far enough over the search horizon to see it. And that depends on whether positions that map to the same hash-table entry as the sub-tree that has to be grafted happen to be encountered before or after it (so they overwrite it).

So the 'anomaly' is actually something in the search tree. Hardly any conclusion can be drawn from whether these anomalies are sampled or not by the hashing process. In fact, in your example the anomaly seems to go away when you decrease the size of the hash table, which is a poor man's method of mimicking lower-quality hashing (increasing the number of collisions).

I can add that I have never been able to detect any effect of using poor-quality randomness in the Zobrist keys on the number of key collisions. (And in almost all engines the second-level hashing is determined by simply masking out a number of bits from the 64-bit key, making it equivalent to just having a shorter key.) In fact most of my engines use Zobrist keys that are far from random, but are generated from each other by an 8-bit left shift, filling in only the 8 least-significant bits with an independent random number. (I do this to save table space, so that the Zobrist keys can be stored in such a way that 7 of their 8 bytes overlap.) This does not lead to any measurable increase in the number of key collisions, nor decrease in the Elo performance of the engine.
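A sketch of that shift-generated key scheme (the exact details are my reading of the description above): each key shares seven of its eight bytes with its predecessor, so a table of n keys can be stored overlapped in roughly n + 7 bytes instead of 8n.

```python
import random

MASK64 = (1 << 64) - 1

def shifted_zobrist_keys(n, rng):
    """Generate n 64-bit keys where each is the previous key shifted left
    by 8 bits, with only the low byte freshly random -- so consecutive
    keys overlap in 7 of their 8 bytes, as described above."""
    keys = [rng.getrandbits(64)]
    for _ in range(n - 1):
        keys.append(((keys[-1] << 8) & MASK64) | rng.getrandbits(8))
    return keys

keys = shifted_zobrist_keys(781, random.Random(2012))
# overlap property: dropping key i+1's low byte recovers key i's low 7 bytes
```

Despite the heavy correlation between consecutive keys, the point above is that measured key-collision rates are indistinguishable from those of fully independent keys.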

> And the 'almost' in this case is only to account for a grossly wrong PRNG, such as one that always returns the same number, returns 16-bit randoms where the key is 64 bits, or where you forget to initialize a subset of the keys, so they all stay zero, etc.

Indeed, in my article is a link to http://www.cse.buffalo.edu/~regan/chess/computer/anomalies/FruitTable.pdf which shows little difference for the simple option of making key[i] = (i + 1001)^5 for i = 0 to 780, with Fruit 2.1. The fifth power makes it fill up most of 64 bits but is completely non-random. I also linked a 2008 paper co-authored by Michael Mitzenmacher of Harvard (who has seen my demo) explaining why even a slight amount of randomness taken from the data itself often suffices in hashing applications. The effect I am theorizing is deeper than what one can see at the top level of playing chess.
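For reference, that non-random key option can be reproduced in a few lines (the exponent and offset are exactly as stated; note that all 781 values already fit in 64 bits, so no truncation is needed):

```python
def deterministic_keys(n=781):
    """key[i] = (i + 1001)**5 for i = 0..n-1, as described above --
    completely non-random, yet all values are distinct because the
    function is strictly increasing."""
    return [(i + 1001) ** 5 for i in range(n)]

keys = deterministic_keys()
# largest key is 1781**5, comfortably below 2**64
```

Dropping such a table in place of an engine's random Zobrist keys is the simplest way to test whether key "quality" shows up in search behavior at all.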

> Because even the best random generator will still cause as many second-level collisions as a moderately poor one.

The number of collisions is expressly stated as not being a consideration---it is intuitively a linear statistical test of the kind PRGs can pass.

> The collisions simply follow from the fact that you overload the hash table by about an order of magnitude, so that no matter how you hash, on average always ~10 positions compete for the same entry.

This is desired for my experiment!---and it happens in real chess games too---even with 1 GB hash there are numerous ejections. That it involves the search tree is also necessary, else it would have too low complexity.

> The example in your paper suffers from exactly that ["Bill Gates"] problem.

It is just a "proof of concept" example. Let's say +6.90 = Bill Gates, +0.03 = Johann Käse (No kidding: our national term "Yankee" comes from the Dutch version of this name for "average man", now we say John Q. Public:), but what I am interested in is deviations of +0.50 to +1.00 which maybe we can call Angela Merk[el]würdig. The fact that minimax is sensitive to outliers (as you say below) is exactly what I want to help me measure how high the "angels" rise in the search tree. (Incidentally our same blog has a recent article on exactly the "Bill Gates outlier" problem you mention in another context: http://rjlipton.wordpress.com/2012/04/18/cybercrime-and-bad-statistics/)

> And that depends on whether positions that map to the same hash-table entry as the sub-tree that has to be grafted happen to be encountered before or after it (so they overwrite it).

Thank you for confirming that this is (evidently) what happens. Once I had a student who I hoped would insert code into Toga II to verify this and count my propagation measure, but he and I worked on my other chess project instead---which has had precedence, not to mention my "regular" research and role on a blog which is like Susan Polgar's in my field. Hence I am hoping to attract others to try this, or at least give me some similarly useful information---such as also seeing that you care about saving even "only" 50k of space, when to my mind it is most sensible just to get 50,000 truly-random bits and then list them out like the Fruit 2.1 `random.cpp` file does.

> ...collisions in your sense have almost nothing to do with the quality of the randomness of the Zobrist key generation.

How do you know? Do you know of a study like Hyatt--Cozzie, which I cited? Have you tried the "Rationale" and "Canary" sections of the post? What you can tell from a quick look is not the point---I boldfaced that it could need **large** amounts of computer time in the "Extending" section. Maybe the idea has only a 5% chance of success, but the reward could be enormous, as it concerns not chess but very intense long computations used for instance to design new drugs---I linked two such papers with the words "perilous dynamics"---and that would make it *Merck*würdig.

> such as also seeing that you care about saving even "only" 50k of space,

> when to my mind it is most sensible just to get 50,000 truly-random bits,

> and then list them out like the Fruit 2.1 random.cpp file does.

Well, 50k bits is only 6K bytes. But it can be a lot more when a board has 81 squares, there are 14 piece types x 2 colors, and you need keys for pieces that are held in hand as well. By modern standards a few dozen KB might seem like nothing, but it should be compared to the L1-cache space of typically 32KB, rather than to the two-orders-of-magnitude-slower RAM.
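The space arithmetic behind this comparison, sketched under stated assumptions (the in-hand key count below is my guess; games like shogi cap holdable pieces differently per type):

```python
KEY_BYTES = 8                        # one 64-bit Zobrist key per board feature

# Standard chess: 12 piece types x 64 squares, plus side-to-move,
# castling rights, and en-passant files = the familiar 781 keys (~6 KB).
chess_keys = 12 * 64 + 1 + 4 + 8

# An 81-square game with 14 piece types x 2 colors, as described above.
board_keys = 14 * 2 * 81
# Keys for pieces in hand -- an assumed bound of 7 droppable types per
# side with up to 18 copies each (the exact counts are game-specific).
hand_keys = 2 * 7 * 18

total_kb = (board_keys + hand_keys) * KEY_BYTES / 1024
# roughly 20 KB of key table -- a large fraction of a 32 KB L1 data cache
```

Under these assumptions the key table alone approaches the size of a typical L1 data cache, which is what motivates the overlapped-storage trick above.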

> How do you know? Do you know of a study like Hyatt-Cozzie which I cited?

This is the study where they show that even an extremely high rate of **key collisions** (like 1 in 1000 probes) has virtually no impact on the search (scores or duration) at all, right? Note that this is not the type of collision you address: key collisions are enormously more damaging, because they make the hash probe return an arbitrarily wrong score. The 'storage collisions' you are dealing with are far more benign: they do not result in any errors, but only cause a little slowdown due to the engine having to re-search the position, and sometimes a missed opportunity for grafting.

I am not sure why you say testing any of this could require huge amounts of computer time, but that is perhaps because it is a bit vague to me what exactly you want to test. I would think it is quite easy to test whether the quality of the random generator affects the frequency of the 'anomalies' in searches from the given position. Just try a couple of hundred searches with different sets of keys, obtained from a perfect generator and from a known poor one, and count the number of anomalies these searches report. That should tell you whether the anomalies you observe can be affected by the type of random generator used. How can I be pretty sure that you will find they don't? Because to affect the grafting effect while keeping the average number of storage collisions (determined by hash and tree size) the same, the random generator would have to predict, just from the hash key, which positions are tactically quiet and which are pregnant with forks or skewers. That seems a pretty much impossible task even for an algorithm designed to do it.
