Rybka Chess Community Forum
Topic: The Rybka Lounge / Computer Chess / The Big 4 Tournament September 2017
- - By Peter Grayson (****) [gb] Date 2017-09-29 22:19 Upvotes 2
The recently released Houdini 6 Pro 64 popc by Robert Houdart, which also supports LP hash, is a must to include and is perhaps the favourite for top spot.

With SugaR XPrO 1.3's good performances elsewhere it seemed a natural choice for inclusion, so I have included SugaR XPrO 1.3 x64 popcnt, which also supports LP hash. It is a real contender for top spot and gives me my first opportunity to take a close look at this latest offering from Marco Zerbinati.

With Stockfish 8 looking rather long in the tooth and no hint of Stockfish 9, I have substituted it with Brainfish 230917 64 popcnt as the representative of the latest Stockfish code; it also supports LP hash and is a faster compile than the development-site engines.

The line-up would not be complete without the fourth engine, Komodo 11.2.2 64-bit, which benefits from popcount but does not support LP hash. It is the reference engine and will retain a 3400 Elo rating throughout the updates, with the other engines rated against this standard, which is close to the CEGT and CCRL rating lists.

Each engine will play 300 games using the 50-line opening set derived from 2400+ player games, giving 100 games against each opponent with each line played with both colours.
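
For clarity, the schedule arithmetic works out as follows (a throwaway Python sketch; the numbers are only those stated above):

engines = 4
lines = 50               # fixed opening lines
colours = 2              # each line played with both colours
opponents = engines - 1

games_per_engine = lines * colours * opponents      # 50 * 2 * 3 = 300
games_vs_each    = lines * colours                  # 100 games per pairing
total_games      = engines * games_per_engine // 2  # two engines per game -> 600
print(games_per_engine, games_vs_each, total_games) # 300 100 600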

At the 40 moves in 5 minutes repeating time control, in practice around 80 games a day are completed.

With the four contenders identified I have started the tournament; game 60 has just completed. As interesting as it is to see the scores at this early stage, it should not be forgotten that the first 21 opening lines are King's Pawn openings, and it can all change once the non-King's Pawn section is entered.

OK, it is only 1/10th of the way through, but Houdini already going top in what on paper were its weakest openings demonstrates just how tough a tournament this is going to be! Based on Stockfish's previous games, the King's Pawn openings should be its strongest-performing lines.

Hoping to update on a daily basis until completion.

Match conditions:
Machine: dual Intel Xeon E5-2687w + 64 Gb RAM, HT disabled, Windows 7 Pro, Chessbase Fritz 14 GUI update 36
Engines: 6 cores + 4 Gb hash each, Ponder=on
Time Control: 40 moves in 5 minutes repeating to conclusion.
Syzygy 5 man TB's, GUI uses 5 man Nalimov and some 6 man.
Opening set: Derived from popular 2400+ player games with no engine influence.
Attachment: Big4Sept2017G1to60.zip - Games 1 to 60 (87k)
Parent - - By Dr.X (Gold) Date 2017-09-29 22:25
Thanks for the heads-up on the SugaR.XPrO.1.3x release. Excellent chess engine. :smile:
Parent - - By Hamster (**) [ch] Date 2017-09-30 09:16
Is SugaR based on Stockfish? Did not find much information about it on chessengines.blogspot but maybe I have not looked properly :eek:
Parent - By Labyrinth (*****) [us] Date 2017-09-30 10:23
It is based on Stockfish yes. As to the difference between SugaR and the SF releases, no idea.
Parent - - By Dr.X (Gold) Date 2017-09-30 15:58
Clone, well, yes! But, it is licensed.

SugaR is based on Stockfish, but that said, it is still licensed under the GNU General Public License: if you make any changes to the source code, these changes must also be made available under the GPL.

https://github.com/Zerbinati/SugaR
Parent - - By Hamster (**) [ch] Date 2017-09-30 16:07
Zerbinati sure has made a nice logo

[logo image]

but very little information about the engine. How does it compare to Stockfish/asmFish?
Parent - By Dr.X (Gold) Date 2017-09-30 16:10
You have to give him that! :smile:
Parent - - By Peter Grayson (****) [gb] Date 2017-10-01 11:57

> Clone, well, yes


SugaR is not a pure clone of Stockfish. MZ clearly states it is a derivative, and whatever changes he makes, whether small or large, cause it to give a different output, sufficiently so to consider it for inclusion in tournaments without source-code-origin limitations. There is a difference between a clone and a derivative, but he has never denied SugaR is based on Stockfish and clearly highlights that on the engine's GitHub pages.

The reason I chose to include SugaR in this tournament, with Brainfish already selected as the Stockfish representative, was for a comparison with the current Stockfish code's performance. In this tournament, without the Cerebellum book, the Brainfish engine is simply a faster compile of Stockfish, which restores some of the advantage that Houdini and SugaR gain from the speed-up of LP hash.

It also makes it more interesting with four rather than the usual three engines. I cannot give Komodo the benefit of LP hash because it does not support it.

Peter
Parent - - By Hamster (**) [ch] Date 2017-10-01 13:35
Why not use asmFish? It also supports large pages, like Houdini and SugaR, and Brainfish is a bit of a special case with its built-in book.

A real pity that Komodo does not support large pages... is it that difficult to implement?
Parent - - By APassionforCriminalJustice (***) [ca] Date 2017-10-01 14:09
It shouldn't be difficult to implement; it's a mistake on Mark Lefler's part. In fact, Ronald himself noted on talkchess that large pages will last indefinitely while running, so long as you never load the engine multiple times. For some reason Mark seems to think that LP will only suffice for a period of time (if I remember correctly), which is not correct. I have had LP running for DAYS and it never fails, as noted by Ronald. LP is a simple advantage and Elo gain that should be implemented in any super-strong engine.
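
For anyone wondering what "implementing LP" amounts to on Windows, here is a minimal sketch in Python/ctypes standing in for what an engine would do in C. It assumes the account holds the "Lock pages in memory" right (SeLockMemoryPrivilege), which large pages always require; without it the allocation simply fails.

import ctypes

kernel32 = ctypes.windll.kernel32
kernel32.GetLargePageMinimum.restype = ctypes.c_size_t
kernel32.VirtualAlloc.restype = ctypes.c_void_p
kernel32.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                                  ctypes.c_uint32, ctypes.c_uint32]

MEM_COMMIT      = 0x00001000
MEM_RESERVE     = 0x00002000
MEM_LARGE_PAGES = 0x20000000
PAGE_READWRITE  = 0x04

large_page = kernel32.GetLargePageMinimum()   # typically 2 MiB on x64
want = 4 * 1024**3                            # e.g. a 4 Gb hash table
size = (want + large_page - 1) // large_page * large_page  # round up to a page multiple

ptr = kernel32.VirtualAlloc(None, size,
                            MEM_COMMIT | MEM_RESERVE | MEM_LARGE_PAGES,
                            PAGE_READWRITE)
print("large-page hash allocated" if ptr else "allocation failed")

Once such an allocation succeeds, the memory is locked and stays backed by large pages for the life of the process, which fits Ronald's observation that LP persists for as long as the engine keeps running.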
Parent - By Peter Grayson (****) [gb] Date 2017-10-01 19:39

> In fact Ronald himself on talkchess noted that large pages will last indefinitely while running so long as you never load the engine multiple times.


In my previous Gauntlet running Houdini 6 and three other engines, they were loaded and unloaded after every game, but LP hash was still available after 300 games. However, the 48 Gb of spare RAM had reduced to just over 14 Gb by the end, and I have seen that before with Rybka. In the Gauntlet games Houdini had access to Nalimov bases, which seem to be the culprit in reducing free memory.

In the current tournament, with three engines being loaded and unloaded over more than 210 games now, free memory remains at almost 50 Gb, confirming that the use of Nalimov was the culprit. Nalimov bases are not used by the engines in this tournament.

Peter
Parent - - By Peter Grayson (****) [gb] Date 2017-10-01 19:31

>Why not use asmFish?


Unless the download directory has changed, the last asmFish engine available here

https://github.com/lantonov/asmFish/tree/executables/Windows

is a month old, whereas Brainfish was current when I decided to run the tournament. The best-performing asmFish I have is from late May. From colleagues' comments they are seeing little difference between the Brainfish and asmFish performances, although I have not compared them myself.

Peter
Parent - - By Hamster (**) [ch] Date 2017-10-01 19:56
Understood, thanks.

Peter, you are very well informed in matters of computer chess and, sometimes even more importantly, you answer in full, understandable sentences. Therefore I want to ask you how one would test the hypothesis that the current asmFish (2017-08-25) is worse than the May edition (2017-05-22?). By running an engine tournament, of course, but with what time control and opening book, and finally with how many games? The more the better, but what is a reasonable amount in your opinion?
Parent - - By Peter Grayson (****) [gb] Date 2017-10-02 00:59 Upvotes 1
Personally I prefer sets of fixed opening lines in a .pgn, just as I am using in this current tournament. Doing so removes the randomness of book move selection and ensures the engines play both colours of a line against each of their opponents, so performance comparisons are like for like. The GUI usually provides many configuration permutations for opening book move selection; this introduces a high degree of randomness in move selection, and many more games then have to be played to achieve a reasonably accurate comparison with a more general range of opening lines. Here it may be worth considering the nature of the openings that are of specific interest to you.

I would suggest using a Gauntlet tournament with the engine to be tested as the primary engine. For accuracy, ideally the engines should be within a ±200 Elo range to minimise result skewing from odd draws and possibly not-so-good openings. That tends to limit the engines for inclusion to the top flight.

If you wish to run with optimised engines, the maximum time control will be determined by the amount of available RAM and the engine hash fill rate. If running with ponder=on, that must be considered too. Because I run with ponder=on, I allow for a 50% ponder hit rate and use the simple formula of 3 times the average move time as the reference time for testing the engine hash fill rate from the start position, e.g. for the 40 moves in 5 minutes time control this works out at
((300s + 150s)/40) * 3 = 33.75s.
Look for a hash size that gives as close as possible to a 40% to 60% indicated hash fill rate. To achieve that I use 4 Gb hash. In practice, engines indicate 15% to 80% fill rate during the game, with the fill rate reducing significantly in the endgame. Moves in the first time control section usually show the largest hash fill rate because of the extra time available from the opening line or book moves.
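
As a small sketch of that sizing calculation (assuming only what is stated above, i.e. ponder=on with a 50% ponder hit rate):

base_time_s  = 300      # 40 moves in 5 minutes
moves        = 40
ponder_bonus = 0.5      # assumed 50% ponder hit rate

avg_move_s  = base_time_s * (1 + ponder_bonus) / moves  # (300s + 150s) / 40 = 11.25s
reference_s = 3 * avg_move_s                            # 33.75s
print(f"test hash fill over {reference_s:.2f}s from the start position")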

Take into account software demands on RAM plus caches for TB's. With two engines running during a game, the hash for engine "A" must be added to the hash for engine "B", and the sum should not exceed half of the available RAM. Minimise background tasks, ensuring any security software is disabled as well as Internet or network access, while leaving sufficient resources for essential background tasks. To that end, from a purist perspective the number of cores in use should not exceed n-1 of the n available. If running with ponder=on, the allocation per engine must not exceed n/2 ((n-1)/2 for purists). I run with 6 cores per engine, leaving 4 free cores for anything else making demands on CPU resources. With 64 Gb available, running this tournament ties up around 16 Gb of RAM, with typically 48 Gb shown as free.
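
The same budgeting rules can be written down as a quick checker; the limits below are just the rules of thumb above, with this tournament's machine as the worked example:

def budget_ok(total_cores, cores_per_engine, ram_gb, hash_gb_per_engine,
              ponder=True, purist=False):
    usable = total_cores - 1 if purist else total_cores
    core_cap = usable // 2 if ponder else usable  # two engines think at once with ponder=on
    hash_sum = 2 * hash_gb_per_engine             # hash for engine "A" plus engine "B"
    return cores_per_engine <= core_cap and hash_sum <= ram_gb / 2

# This tournament: 16 cores (HT off), 64 Gb RAM, 6 cores and 4 Gb hash per engine.
print(budget_ok(16, 6, 64, 4))   # True: 6 <= 8 cores, 8 Gb <= 32 Gb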

Running that way I was able to demonstrate that each batch of 100 games between Houdini 5.01 and Stockfish 8 was within ±10 Elo of the average Elo of 800 games compiled from eight 100-game batches using the same opening lines. I believe that is significantly better than the standard error margin for 100 games run under non-optimised, loosely controlled conditions.

Time control will depend on your patience and the availability of resources to conduct the test. Ensure that if stopped, the tournament can be restarted from that point; the Fritz GUIs provide that facility and I believe Arena does too. I would try to avoid opening lines or books where move choices have been influenced by an engine.

To determine how many games, especially if using an opening book, I would suggest checking at intervals of around 100 games per engine to measure the Elo variance. There will come a point when additional hundreds of games make little difference to the Elo, at which point it is reasonable to assume variance saturation has occurred.
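
A sketch of that saturation check; the logistic Elo formula is the standard one, and the cumulative (score, games) checkpoints below are invented purely for illustration:

import math

def elo_diff(score, games):
    # Elo difference implied by a score fraction (standard logistic formula).
    p = score / games
    return -400 * math.log10(1 / p - 1)

# Hypothetical cumulative results sampled every 100 games.
for score, games in [(56.5, 100), (112.0, 200), (169.5, 300)]:
    print(f"{games:4d} games: {elo_diff(score, games):+6.1f} Elo")
# Once consecutive checkpoints agree within a few Elo, additional
# hundreds of games are unlikely to change the picture.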

Peter
Parent - By Hamster (**) [ch] Date 2017-10-02 18:47
Great reply, thanks, will digest it over the next 3 days. :smile:
Parent - By Dr.X (Gold) Date 2017-10-01 15:13
That is correct, I misspoke in designating it a clone. Derivative is an accurate indicator. That was rather careless of me.
- By Peter Grayson (****) [gb] Date 2017-09-30 14:56
As can be seen in the next batch of 30 games per engine just completed, all engines have gained at the expense of Komodo 11.2.2, with Brainfish the biggest beneficiary. Twelve games per opening line are produced here, so this completes the first 10 of the 50 opening lines, the 10th being the start of the Spanish lines.

120/600 games completed so far. With just 2.5 points separating the top three it remains very open, but my guess is that Houdini 6.01, already with a positive score against each of its opponents, is going to be the one to beat.

PeterG
Attachment: Big4Sept2017G61to120.zip (87k)
- By Peter Grayson (****) [gb] Date 2017-10-01 13:01
The Lead Changes

The headline sees Brainfish overtake Houdini in the last 60 games to move one point ahead, pulling back to equal in its head-to-head games with Houdini. The last 30 games were good for Brainfish but not so good for Houdini, with Brainfish scoring 3-0 in their head-to-head games in this session. Now, with more games under its belt, the Brainfish engine representing the Stockfish code suggests that at this stage there is not much wrong with the code. It is still very close between the top three, with just two points separating them; this is looking much closer than I had anticipated.

After leading early on, SugaR has continued to mix it with Brainfish and Houdini, and at just 2 points behind Brainfish with Houdini in between, it is too close to call with 210 games per engine left to play.

Presently Komodo looks to be struggling to match the pace in this company. It does tend to perform better in the non-King's Pawn openings, but it has some work to do to close the widening gap to the top three.

There is still another French line, two Caro-Kanns and a Modern opening to go before entering the Queen's Pawn section of the opening lines.

My 80 games a day looks to have been a little optimistic for this line-up, with some long games being played out, including a 201-mover that would have taken the best part of an hour on its own.

PeterG
Attachment: Big4Sept2017G121to180.zip - Games 121 to 180 (94k)
- By Peter Grayson (****) [gb] Date 2017-10-02 14:42 Upvotes 1
At the end of the King's Pawn openings, excluding the final Alekhine's for line 50, Brainfish has maintained its lead, holding 3 points over Houdini. Komodo did relatively well, scoring the only win with White over SugaR in the final Pirc, Austrian Attack line.

This last section of King's Pawn games was not the best for SugaR or Houdini, with individual scores of
Brainfish 19.5/36
Komodo 18/36
Houdini 17.5/36
SugaR 17/36

As can be seen though, margins at this level are relatively small, confirming how tough this group is. All engines have demonstrated they are capable of winning against each other. 8.5 points may look a lot in this company, but even Komodo cannot be ruled out here, with Queen's Pawn, Indian, Reti and English opening lines all to come.

PeterG
Attachment: Big4Sept2017G181to252.zip - Games 181 to 252 (95k)
- By Peter Grayson (****) [gb] Date 2017-10-03 19:45
Brainfish increases lead

With creeping rather than startling change, Brainfish has increased its lead to 4.5 points and Houdini is in danger of dropping to third. Looking at Houdini's score against Komodo and comparing it with the far superior performance achieved in the earlier Gauntlet, I wonder whether, instead of encumbering the engine, those 6-man Nalimov bases actually assisted it with more precise endgame play.

SugaR has proven itself worthy of inclusion and may even finish second here. As for Komodo, the inclusion of the LP hash enhancement seems a must, and then perhaps the amount of work required to pull it into line with, or indeed overtake, the other three would not be so great. Despite 120 games per engine still left to play, it is beginning to look good for Brainfish. With just 5 points between the top three, and indeed just 10.5 from top to bottom, there really is not a great difference in these engines' capabilities.

Brainfish's profile against the others has improved such that it may well finish with an advantage over them all, but the earlier score profiles indicated these engines are certainly different.

PeterG
Attachment: Big4Sept2017G253to360.zip - Games 253 to 360 (153k)
- - By Peter Grayson (****) [gb] Date 2017-10-04 21:34
Brainfish increases the gap, SugaR holds 2nd.

After 24 consecutive draws in the Queen's Indian lines there has been a breath of fresh air, with some positive results: 6 wins in the King's Indian Classical; SugaR winning with both sides against Komodo in the King's Indian Classical, Petrosian System; a heady 7 wins in the King's Indian, early White h3 variation; and another 7 wins in the King's Indian, Saemisch variation. The Dutch Opening has just completed, giving 2 wins to Brainfish, which won with both colours.

That has all resulted in a shift, giving a 16-point gap from top to bottom and a 5.5-point gap at the top. Houdini pulled it back in its head-to-head with SugaR, but SugaR holds second spot, and Houdini's score against Komodo just does not compare with its Gauntlet performance. Perhaps the influence of EGTBs is greater than they are given credit for?

With the current scores it does seem as if the Stockfish code is back on track, and the differences SugaR provides make the tournament more of an interesting spectacle.

It does begin to look as though Brainfish is too strong to lose the lead so the main battle in the remaining games is between SugaR and Houdini for 2nd spot.

PeterG
Attachment: Big4Sept2017G361to444.zip - Games 361 to 444 (121k)
Parent - - By APassionforCriminalJustice (***) [ca] Date 2017-10-04 23:00
Brainfish is just too strong. These results aren't exactly that surprising to me.
Parent - By Peter Grayson (****) [gb] Date 2017-10-04 23:30

> Brainfish is just too strong. These results aren't exactly that surprising to me.


The Houdini vs Komodo score is surprising to me when, in the Gauntlet I ran previously using the same opening-lines set and conditions, the score was

Houdini 6.01 Pro x64 popc vs Komodo 11.2.2 64-bit: 43.0 - 31.0, +18/=56/-6

when the extra 9 points compared to its performance here would have seen Houdini 6.01 topping the table. The only difference here is that I have disabled Nalimov EGTB access for the engines, because I thought it was to Houdini's detriment, with visible slowing of the engine in the endgame. Perhaps it was not detrimental after all, but here all engines have access up to and including the full 5-man Syzygy.

PeterG
- By Peter Grayson (****) [gb] Date 2017-10-05 20:57
Little Change

Although there are 36 games per engine left to play, it is difficult to see much changing, and the final Elos are probably going to look very similar to these. I was hoping for a grandstand finish, but it is not to be!

Just 1 win in the Semi-Slav 5.Bg5 h6 games, 2 in the Exchange Gruenfeld, 3 in the Schmid Benoni, 2 in the Reti line, and 3 in a QGD Semi-Tarrasch that I thought was an English but transposed! Notable was Brainfish beating Houdini with Black. Finishing with 3 White wins in the Old Queen's Indian line, 2 of them for Houdini 6. Still some life left in that old line for White!

PeterG
Attachment: Big4Sept2017G445to528.zip - Games 445 to 528 (116k)
- - By Peter Grayson (****) [gb] Date 2017-10-06 20:47
Tournament Completed

A mainly uneventful final leg to the completion of the tournament, with little change in the relative scores. Brainfish runs out the undisputed leader, winning against each of the other opponents. From my perspective, the Houdini 6.01 engine turned out to be somewhat disappointing after its early promise and never looked like regaining the lead.

I believe the games consisted of a reasonable range of contemporary popular openings, and the results perhaps reflect how the engines may perform when used for analysis by players. Certainly the Houdini engine offers a different take on some positions and on the approach to the ensuing game, with a noticeably different playing style to the Stockfish-based engines. How much SugaR has deviated from Stockfish is unclear, but the Stockfish code had the edge in their head-to-head games.

What it means is that Houdini did not quite cut it, and based on recent release cycles it will be another 12 months before the next engine, so it is downhill from here as the others improve in that time window. There is also asmFish, of course, and I would probably have used that had there been a more recent update at tournament start.

Based on the final result there may not be as much work needed with Komodo as originally thought but it would be nice to see the commercials lead the performance ratings when updates are released.

Looking at the performance movement through the course of the openings, I really do wonder about the oft-quoted statement that thousands of games are required to get an accurate Elo rating. I have never found that, nor seen evidence to support it, for today's engines and competitive opening lines, and the table below confirms little relative change from game 222. Unfortunately it does suggest that the early single-thread match results being posted were to a degree misleading, especially when bigger hardware is used. Of course, that could all change again when longer time controls and even bigger hardware than used here come into play.

One additional comment: the three engines using LP hash continued to benefit from it right through to the end, according to Windows Task Manager.

An interesting, if a little disappointing, tournament as far as the commercials' performance goes.

PeterG
Attachment: Big4Sept2017G529to600.zip - Games 529 to 600 (101k)
Parent - - By oudheusa (*****) [be] Date 2017-10-07 09:05
Thanks for this Peter. Have you been able to identify the type of positions that Houdini evaluates differently?
Parent - - By Peter Grayson (****) [us] Date 2017-10-07 17:03
My comments are based on my own observations of the games I watched during the tournament.

When making an appraisal from watching games as they progress, I use the four fundamental elements of Time, Space, Position and Material, plus the engine's ability to identify the dynamics of its position as the game progresses. That can lead to an element of creativity, where in some cases an engine creates what appears to be something from nothing, usually from a situation earlier in the game that is beyond my comprehension of the subtleties of the moves.

Still fresh in the mind because it was later in the tournament was Houdini's win against Brainfish in the Open Catalan, game No. 574.

After the opening line, the engines deviated at Black's move 12, with Brainfish choosing the less-favoured 12..Rac8 whereas Houdini chose 12..Rd8. The question here is whether Black can expand without weakening its position.

An observation from both games was that, with Black, Houdini was quite aggressive and quite happy to drop material to distract Brainfish's attention from its Kingside attack. In the opposite game, Stockfish seemed more concerned for its King safety and pawn structure.

Back to Houdini as Black: after the quite aggressive 17..Bb5, note how both of Houdini's bishops are controlling the diagonals through to e1 and f1. Considering the dynamics at this point, the position is ripe for King's side pawn expansion, and the bishops can quickly drop back to d6 and d7 to support any opening up of the King's side. By 19..e4 the dynamics of Houdini's position look better than Brainfish's, whose Bc1 and Ra1 are in effect still on their original starting squares; so a minus for Brainfish in the "Time" element. Maybe it is OK, but it tends to force any expansion towards the Queen's side, and the question after 24.Qxb5 is whether Black has distracted White sufficiently to launch a King attack. White is noticeably devoid of major pieces on the King's side, and 25..h5 highlights Black's intent. My opinion was that Houdini had exposed a weakness here.

Brainfish's offer of the black bishop exchange pulls the Queenside Rook further away from its King, which would have merit if White proves stronger on the Queen's side. Still a pawn down but going for the King attack, after White's 35.Rc5 the game status was unclear, and what I did not expect, and still muse over, was Houdini's move 35..c6! Here Brainfish thought capturing with the Rook led to equality, while Houdini's score started to move in its favour. Of course 36.dxc6 and White is quickly mated. How quickly Brainfish's position deteriorated when, from what appeared to be a broadly balanced position, Houdini found something that Brainfish did not see.

I think the main difference here is that Houdini is more ready to move its King's pawn safety net if it identifies a possible attack, and I doubt any of the Stockfish-based engines would consider doing so. I very much liked that Houdini was willing to force the issue, and I am not so sure the Stockfish-based engines would do that in an unclear situation. Here it proved to Houdini's advantage; that is not always the case, but it provides for different considerations when the opportunity arises.

So Black had no specific advantage coming out of the opening line, and of the 12 games with this line the only engine that found a win was Houdini, with Black against the strongest engine. Did Houdini look more human in its approach?

[Event "Big 4 Sept 2017-1"]
[Site "Newport, South Wales"]
[Date "2017.10.06"]
[Round "96.2"]
[White "Brainfish 230917 64 POPCNT"]
[Black "Houdini 6.01 Pro x64-popc"]
[Result "0-1"]
[ECO "E04"]
[WhiteElo "3437"]
[BlackElo "3420"]
[Annotator "0.40;0.29"]
[PlyCount "148"]
[EventDate "2017.09.29"]
[EventType "tourn"]
[Source "Grayson"]

{Intel(R) Xeon(R) CPU E5-2687W 0 @ 3.10GHz 3092 MHz  W=43.8 plies; 14,847kN/s;
102,161,313 TBAs  B=37.3 plies; 17,970kN/s; 78,303,349 TBAs} 1. d4 Nf6 2. c4 e6
3. Nf3 d5 4. g3 dxc4 5. Bg2 a6 6. O-O Nc6 7. e3 Bd7 8. Qe2 Bd6 {[%eval 29,22]
[%emt 0:00:14]} 9. Qxc4 {[%eval 40,26] [%emt 0:00:13] (Nc3)} O-O {[%eval 22,23]
[%emt 0:00:12]} 10. Rd1 {[%eval 41,29] [%emt 0:00:09]} Qe7 {[%eval 20,23]
[%emt 0:00:00]} 11. Nc3 {[%eval 40,23] [%emt 0:00:02]} h6 {[%eval 39,25] [%emt
0:00:33] (b5)} 12. Qe2 {[%eval 24,28] [%emt 0:00:23]} Rad8 {[%eval 22,26]
[%emt 0:00:03] (Rac8)} 13. Nd2 {[%eval 12,29] [%emt 0:00:20]} e5 {[%eval 17,28]
[%emt 0:00:00]} 14. d5 {[%eval 41,26] [%emt 0:00:01]} Na7 {[%eval 22,27] [%emt
0:00:08] (Nb8)} 15. Nce4 {[%eval 23,32] [%emt 0:00:18] (Nde4)} Nxe4 {[%eval 30,
26] [%emt 0:00:14]} 16. Nxe4 {[%eval 30,34] [%emt 0:00:00]} Bb4 {[%eval 20,25]
[%emt 0:00:05] (Nc8)} 17. Qc4 {[%eval 40,28] [%emt 0:00:07]} Bb5 {[%eval 20,27]
[%emt 0:00:00] (Nc8)} 18. Qc2 {[%eval 39,28] [%emt 0:00:18] (Qb3)} f5 {[%eval
0,24] [%emt 0:00:07]} 19. Nc3 {[%eval 51,29] [%emt 0:00:09]} e4 {[%eval 20,26]
[%emt 0:00:13] (Bd7)} 20. Nxb5 {[%eval 42,29] [%emt 0:00:26] (Qb3)} Nxb5 {
[%eval 19,27] [%emt 0:00:14]} 21. Bf1 {[%eval 37,31] [%emt 0:00:12] (a3)} Qf7 {
[%eval 12,26] [%emt 0:00:18] (Kh7)} 22. Qb3 {[%eval 49,28] [%emt 0:00:05] (Bc4)
} Be7 {[%eval 22,25] [%emt 0:00:26]} 23. Bxb5 {[%eval 67,30] [%emt 0:00:00]}
axb5 {[%eval 25,26] [%emt 0:00:20]} 24. Qxb5 {[%eval 51,31] [%emt 0:00:00]} b6
{[%eval 16,26] [%emt 0:00:06]} 25. b3 {[%eval 56,28] [%emt 0:00:00]} h5 {
[%eval 16,26] [%emt 0:00:14] (Qh5)} 26. Qc4 {[%eval 29,30] [%emt 0:00:20] (Bb2)
} Rd7 {[%eval 28,21] [%emt 0:00:05]} 27. Bb2 {[%eval 21,27] [%emt 0:00:09]} Bd6
{[%eval 34,23] [%emt 0:00:06] (h4)} 28. a4 {[%eval 52,25] [%emt 0:00:04]} h4 {
[%eval 27,23] [%emt 0:00:01]} 29. Ba3 {[%eval 25,30] [%emt 0:00:19]} Bxa3 {
[%eval 21,28] [%emt 0:00:00]} 30. Rxa3 {[%eval 15,32] [%emt 0:00:09]} Rfd8 {
[%eval 19,28] [%emt 0:00:04]} 31. a5 {[%eval 20,32] [%emt 0:00:00]} Qh5 {
[%eval 15,29] [%emt 0:00:08] (bxa5)} 32. Re1 {[%eval 8,33] [%emt 0:00:07]} bxa5
{[%eval 14,29] [%emt 0:00:00]} 33. Rxa5 {[%eval 15,31] [%emt 0:00:02] (d6+)}
Kh7 {[%eval 8,30] [%emt 0:00:10]} 34. b4 {[%eval 8,35] [%emt 0:00:13]} h3 {
[%eval 15,32] [%emt 0:00:00] (Qf3)} 35. Rc5 {[%eval 0,42] [%emt 0:00:08]} c6 $3
{[%eval 7,31] [%emt 0:00:00] (Rb8)} 36. Rxc6 {[%eval 0,40] [%emt 0:00:06] (b5)}
(36. dxc6 $4 {[%emt 0:00:00]} Rd1 {[%emt 0:00:00]} 37. Qf1 Rxe1 38. Qxe1 Qf3
39. Kf1 Rd1 40. g4 Qh1+ 41. Ke2 Qxe1#) 36... Rxd5 {[%eval -20,32] [%emt 0:00:
08]} 37. Qf1 {[%eval 0,47] [%emt 0:00:00]} Rd2 {[%eval -110,28] [%emt 0:00:07]
(Ra8)} 38. Rcc1 {[%eval 0,37] [%emt 0:00:10]} R8d3 {[%eval -95,29] [%emt 0:00:
15]} 39. Kh1 {[%eval -131,39] [%emt 0:00:14] (Rb1)} Kg6 {[%eval -182,27] [%emt
0:00:19] (Rb3)} 40. Kg1 {[%eval -185,34] [%emt 0:00:08]} Qf3 {[%eval -216,30]
[%emt 0:00:00] (Kh7)} 41. Qxh3 {[%eval -111,26] [%emt 0:00:03] (Rb1)} Qxf2+ {
[%eval -313,29] [%emt 0:00:09]} 42. Kh1 {[%eval -142,38] [%emt 0:00:00]} Rc3 {
[%eval -357,32] [%emt 0:00:08] (Qf3+)} 43. g4 {[%eval -1262,38] [%emt 0:00:34]}
Rcc2 {[%eval -529,43] [%emt 0:00:00]} 44. Qh5+ {[%eval -13247,43] [%emt 0:00:
28] (b5)} Kf6 {[%eval -483,11] [%emt 0:00:00]} 45. Qxf5+ {[%eval -13252,47]
[%emt 0:00:10]} Qxf5 {[%eval -1003,32] [%emt 0:00:00]} 46. gxf5 {[%eval -13252,
52] [%emt 0:00:02]} Rxh2+ {[%eval -14856,33] [%emt 0:00:06]} 47. Kg1 {[%eval
-13255,51] [%emt 0:00:00]} Rcg2+ {[%eval -14874,38] [%emt 0:00:03]} 48. Kf1 {
[%eval -13257,47] [%emt 0:00:00]} Ra2 {[%eval -14876,48] [%emt 0:00:13]} 49.
Kg1 {[%eval -13257,56] [%emt 0:00:06]} Kxf5 {[%eval -14877,47] [%emt 0:00:00]}
50. Rc5+ {[%eval -13258,57] [%emt 0:00:54]} Kg4 {[%eval -14878,52] [%emt 0:00:
00]} 51. Rd1 {[%eval -13261,59] [%emt 0:00:24]} g5 {[%eval -14879,52] [%emt 0:
00:00]} 52. Rd6 {[%eval -13262,54] [%emt 0:00:09] (Rd7)} Rhg2+ {[%eval -14880,
43] [%emt 0:00:15]} 53. Kf1 {[%eval -13262,58] [%emt 0:00:00]} Rgb2 {[%eval
-14881,45] [%emt 0:00:08]} 54. Rcd5 {[%eval -13263,60] [%emt 0:00:02]} Rxb4 {
[%eval -14882,51] [%emt 0:00:18] (Kf3)} 55. Rd2 {[%eval -13261,43] [%emt 0:00:
05] (Rd1)} Rb1+ {[%eval -14884,48] [%emt 0:00:15] (Rxd2)} 56. Kf2 {[%eval
-13265,51] [%emt 0:00:06]} Rbb2 {[%eval -14885,50] [%emt 0:00:00]} 57. Rxb2 {
[%eval -13267,48] [%emt 0:00:01]} Rxb2+ {[%eval -14886,52] [%emt 0:00:16]} 58.
Kg1 {[%eval -13268,61] [%emt 0:00:00] (Kf1)} Kf3 {[%eval -14887,52] [%emt 0:00:
15]} 59. Rd1 {[%eval -13269,61] [%emt 0:00:03] (Rf6+)} g4 {[%eval -14888,57]
[%emt 0:00:10] (Kxe3)} 60. Rf1+ {[%eval -13270,54] [%emt 0:00:08]} Kg3 {[%eval
-14889,58] [%emt 0:00:02] (Kxe3)} 61. Rc1 {[%eval -13271,61] [%emt 0:00:07]
(Rd1)} Kh3 {[%eval -14890,59] [%emt 0:00:15]} 62. Rc7 {[%eval -13272,72] [%emt
0:00:00]} Rb1+ {[%eval -14891,59] [%emt 0:00:12]} 63. Kf2 {[%eval -13272,75]
[%emt 0:00:00]} g3+ {[%eval -14892,59] [%emt 0:00:07]} 64. Ke2 {[%eval -13273,
74] [%emt 0:00:00]} Rb2+ {[%eval -14893,60] [%emt 0:00:38]} 65. Kd1 {[%eval
-13274,75] [%emt 0:00:00] (Ke1)} g2 {[%eval -14894,51] [%emt 0:00:07]} 66. Rh7+
{[%eval -13275,76] [%emt 0:00:20]} Kg3 {[%eval -14895,58] [%emt 0:00:00]} 67.
Rg7+ {[%eval -13276,76] [%emt 0:00:04]} Kf2 {[%eval -14896,59] [%emt 0:00:04]}
68. Rf7+ {[%eval -13277,75] [%emt 0:00:16]} Kxe3 {[%eval -14897,59] [%emt 0:00:
00]} 69. Kc1 {[%eval -32743,1] [%emt 0:00:00] (Rf8)} g1=Q+ {[%eval -14898,51]
[%emt 0:00:08]} 70. Rf1 {[%eval -13278,79] [%emt 0:00:25]} Qxf1+ {[%eval
-14899,74] [%emt 0:00:00]} 71. Kxb2 {[%eval -32759,1] [%emt 0:00:00]} Kd3 {
[%eval -32759,0] [%emt 0:00:00]} 72. Ka3 {[%eval -32761,0] [%emt 0:00:00]} Kc3
{[%eval -32761,0] [%emt 0:00:00]} 73. Ka2 {[%eval -32763,1] [%emt 0:00:00]} e3
{[%eval -32763,0] [%emt 0:00:00]} 74. Ka3 {[%eval -32765,1] [%emt 0:00:00]}
Qa6# {[%eval -32765,0] [%emt 0:00:00]} 0-1
Parent - By oudheusa (*****) [nl] Date 2017-10-09 10:05
Thanks for your insights!