Houdini 3 played on a 3.4 GHz computer with 16 real cores (~25M+ nodes/s in the starting position).
Cluster Rybka 4.1 played with approximately 110 GB of 6-piece tablebases and 64 cores.
The result: +4 -1 =1 for Houdini 3
It seemed that Houdini scaled very well on 16 cores.
The first two games were played without any opening book, which was really not a good setup for Rybka.
So if we discard those two games, the result would have been +2 -1 =1 for Houdini 3.
However, on 16 cores on the TCEC computer, in stages 1 and 2, both engines seem to be of comparable strength so far.
Is there any explanation for this disparity of results?
On TCEC, they only use 5-man tablebases.
Maybe using 6-man tablebases with the cluster was not a good setup for a computer match?
Also, I have never been convinced by test suites, as no engine would ever reach these positions and imo they are pure tactics.
Just my take on things.
> I agree about test suites--they don't effectively show how the engines would play in games.
Where the engine gets to in a game is usually guided by a book or test suite. Test suites are very useful for testing new moves in openings, giving a fixed number of games across however many variations are being employed; the resultant games can then be used for tuning certain book lines. I'm not keen on using an engine without a book, because there is no benefit in abandoning an opening line with a proven track record.
My preference for engine testing is that opening lines should be contemporary and played from both sides. As far as I can tell, engines gain time on the clock using test suites just as they would with a book, so there is no penalty there, and it takes out any randomness of book move selection.
Many moons ago, in a chat with John Nunn about chess engines, he thought that to be fair two games should be played from the same position, with each engine obviously getting a game with each colour; it is then up to each individual to choose the opening line or lines. I don't know if JN has changed his opinion, but it does stop an engine getting a bad position out of an opening book over which it has no control.
As an example, and really only for my own interest, I am running some tournaments from the positions below. Whatever engines I use in the tournaments, they get to play each side of the position 4 times.
1. e4 d5 2. exd5 Qxd5 3. Nc3 Qa5 4. d4 Nf6 5. Nf3 c6 6. Ne5 Be6 *
1. e4 Nf6 2. e5 Nd5 3. d4 d6 4. c4 Nb6 5. exd6 cxd6 6. a4 *
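The scheduling scheme above can be sketched in a few lines of Python. This is only a hypothetical illustration of one reading of the scheme (every pair of engines plays each start position with both colours, repeated 4 times per side); the engine names and the `make_schedule` helper are made up for the example, and actually playing the games would need real engine binaries.

```python
from itertools import combinations

def make_schedule(engines, positions, games_per_side=4):
    """Return a list of (white, black, position) pairings in which every
    pair of engines plays each start position with both colours,
    games_per_side times with each colour."""
    schedule = []
    for pos in positions:
        for a, b in combinations(engines, 2):
            for _ in range(games_per_side):
                schedule.append((a, b, pos))  # a has White
                schedule.append((b, a, pos))  # colours reversed
    return schedule

positions = [
    "1. e4 d5 2. exd5 Qxd5 3. Nc3 Qa5 4. d4 Nf6 5. Nf3 c6 6. Ne5 Be6",
    "1. e4 Nf6 2. e5 Nd5 3. d4 d6 4. c4 Nb6 5. exd6 cxd6 6. a4",
]
games = make_schedule(["EngineA", "EngineB", "EngineC", "EngineD"], positions)
# 4 engines give 6 pairings; each pairing produces 8 games per position
# (4 with each colour), so 6 * 8 * 2 positions = 96 games in total.
print(len(games))  # prints 96
```

The colour balance falls out automatically: each engine gets exactly as many White games as Black games from every start position, so no engine can be advantaged by colour allocation.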
> they get to play each side of the position 4 times
I assume that here, you're taking into account multiprocessing randomness to see if there is a systematic tendency for a position to go one way or the other?
>I assume that here, you're taking into account multiprocessing randomness to see if there is a systematic tendency for a position to go one way or the other?
Yes, I am always curious: let's say in a 4-engine tournament, which engine chooses what move, do they play an alternative next time round, and at what point do they deviate?
It helps reach certain conclusions about positions that interest me these days and then you can run another tournament further down the opening line from a new start position.
As an example, I came across http://www.youtube.com/watch?v=yPJeMrhL19w and around 8 minutes 30 seconds into it you reach the position in question.
The author then mentions that 16...a5 is not good because of 17.Rb5, when the black a-pawn is simply weak. Now what is funny is that every engine I have tested wants to play 16...a5, with the second choice tending to be the game move 16...Be6. The games I have found are in fact all black wins after 16...a5 17.Rb5.
Now, after running a tournament, Rfb1, Re1 and Bc3 have all been played, but not Rb5.
Everyone has their own ways of doing these things, but this works for what I want.
> Also I have never been convinced by test suites as no engines would ever get to these positions and imo are pure tactics.
> Just my take on things.
I find test suites very useful for comparing different engines' performance in thematic positions, e.g. open or closed positions, or fianchetto games such as the Sicilian Dragon or some of the Reti-type lines, but they are of most benefit when testing new moves in openings. Sometimes the human eye can spot a potentially good move missed by the engines; running a number of games at a time control rather than a fixed ply-depth then gives a kind of Monte Carlo run on the ensuing lines, yielding useful performance information.
> Seemed that Houdini scaled very well on 16 cores.
It seems that Houdini is very strong in general!
The CEGT one-core list gives H3 an impressive 100 ELO more than R4.1.
I believe that getting really more strength from 32, 64 or 128 cores instead of 16 is a very hard job! Was it really successful in the case of Rybka? Or was it more or less a marketing idea?
Have you seen http://www.chesscluster.com/index.html?
> Have you seen http://www.chesscluster.com/index.html?
Should be interesting! Based on Stockfish...
> Based on Stockfish...
"Based on " is a rather nebulous way to reference the chess engine behind the cluster!
There is enough difference, but at the same time there is a form of similarity that just cannot go unnoticed. The few colors used are dramatic enough to make both stand out.
You don't have to disclose your identity to us.
Data being generated through the analysis process is the client's sole property. We don't use it in any other way or disclose it to any third parties. You can keep your analysis data on our servers to have a secure backup or you can delete it from our servers and it shall be gone forever.
> Data being generated through the analysis process is the client's sole property. We don't use it in any other way or disclose it to any third parties. You can keep your analysis data on our servers to have a secure backup or you can delete it from our servers and it shall be gone forever.
Deleting something from a server does not mean it is gone forever. Some of our nation's leaders found that out!
He says that the special dev version used in stage 3 does not use contempt (just an experiment, he said).
At later stages, he might change this.
He also said that at 2-minute time controls, he saw a 40-50 ELO improvement with this dev version.
Finally, if everything is fine, Houdini 4 should be ready in November.
But I would make a guess that it will also be shipped out with a new Aquarium too!
> Well now Dec it seems, and far from cheap!
That's for the ChessBase edition, which costs extra for the CB frills as well as the UCI engine. If marketed as per the H3 release, the UCI engine alone would be released first, at less cost, direct from Cruxis.
Powered by mwForum 2.27.4 © 1999-2012 Markus Wichitill