Rybka Chess Community Forum
Up Topic The Rybka Lounge / Computer Chess / AlphaZero beats Stockfish 8 by 64-36
Parent - - By rocket (***) [se] Date 2017-12-07 22:20
Exactly. 44 million games... With a smart, selective search, it is very likely to have already played those dozen games in which it beat Stockfish, because the search will eliminate complete garbage games in self-play.
Parent - - By Sesse (****) [no] Date 2017-12-07 22:23
Parent - - By rocket (***) [se] Date 2017-12-07 22:25
Yes it will. With a deep enough search, garbage lines will be avoided. There is no evidence that AlphaZero had more sophisticated search heuristics than Stockfish. Selective search is already in use.
Parent - - By Sesse (****) [no] Date 2017-12-07 22:29
You're just talking out of your ass, sorry. There's zero reason why AlphaZero would see Stockfish's losing moves frequently in self-play (more than any other losing moves), and the odds of seeing a given 50-move game several times in self-play over 44 million games are nil. The state space is just too immense.
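A back-of-envelope sketch makes the point concrete. The numbers below are illustrative assumptions (self-play games are not uniform random draws, but the gap is so many orders of magnitude that the conclusion survives):

```python
# Birthday-problem approximation: with n samples from a space of N
# distinct games, P(any full-game repeat) ~ n^2 / (2N).
n = 44_000_000          # self-play games (from the thread)
N = 10 ** 40            # assumed count of distinct plausible games;
                        # deliberately tiny -- estimates of the full game
                        # tree (Shannon: ~10^120) are vastly larger
p_repeat = n ** 2 / (2 * N)
print(p_repeat)         # on the order of 1e-25, i.e. effectively zero
```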
Parent - - By rocket (***) [se] Date 2017-12-07 22:38
If the probability of actual game positions recurring after 44 million games is non-existent, it would make no sense to base it on a Monte Carlo algorithm instead of just general knowledge.
Parent - - By Sesse (****) [no] Date 2017-12-07 23:02
This makes no sense.
Parent - - By Felix Kling (Gold) [de] Date 2017-12-08 11:13
How does "hauptsache was gesagt" (German, roughly "the main thing is that something was said") translate? Maybe "It is not the hen that cackles the most that lays the most eggs."
Parent - By Sesse (****) [se] Date 2017-12-08 12:06
“Less is more”, perhaps? :-P

(Unfortunately my German is not so good any more, because I no longer live in Switzerland and therefore rarely speak it, but reading is still no problem.)
Parent - - By Felix Kling (Gold) [de] Date 2017-12-08 11:10 Edited 2017-12-08 11:16
Aha, OK, so you can't buy that hardware, and it is difficult to guess how much it would cost if they sold it. It would be interesting to see how it performs on commercial hardware (like Nvidia cards).
Parent - - By Sesse (****) [se] Date 2017-12-08 12:05
TPUs are optimized for neural networks, and neural networks only. GPUs also do well on neural networks; NVIDIA is trying to counter TPUs by adding “Tensor cores” to their own GPUs, but I haven't seen any independent benchmarks yet. Remember, NVIDIA is these days much less a gaming-only company, and more and more driven by delivering high-end GPUs for AI.

I think it's fairly safe to say that you could make something like AlphaZero that plays great chess on the Titan V ($3000, just announced), but whether it would be as strong as AlphaZero (or Stockfish on a 32-core) is much less certain.
Parent - By Banned for Life (Gold) Date 2017-12-21 22:08
Remember, NVIDIA is these days much [less] a gaming-only company, and more and more driven by delivering high-end GPUs for AI.

Actually, at this point they are being heavily driven by efforts to unlock cryptocurrency...
Parent - - By Sesse (****) [no] Date 2017-12-08 17:46
Tord Romstad weighs in at the bottom:

A lot of it is correct, but I feel some of it is also missing the mark: Yes, sure, it's not the latest git master, but that's only something like 20 Elo, IIRC. And who really cares about time management when engines are generally used for analysis?
Parent - By Chris Whittington (**) [fr] Date 2017-12-08 18:31 Upvotes 1
Thanks for the link. The world divides into those who see, those who probably didn't have time to look, and those in shock.
Game 3 video is nice. My view remains the same: once the crude, blunt, beginner's material concept got junked, replaced effectively by "mobility" and all its ramifications (which are many and complex), then chess is back to Tal, with the bean-counter programs left for dead. As A0 now proves. One can only tremble at what this technology can do when given more than four hours to learn.

A second factor is that we can also see the rating leaps of Stockfish as basically incestuous: like has been playing like, down a dead-end street. We can predict a catch-up by strong players who have had Stockfish's weaknesses shown to them.
Parent - - By Vegan (****) [ca] Date 2017-12-09 03:03
Is there a PGN version of the games I can look over?
Parent - - By Venator (Silver) [nl] Date 2017-12-09 07:51
You can replay the games at the end of this article:
Parent - By Vegan (****) [ca] Date 2017-12-12 03:14
I wanted a PGN so I could analyse the games for an article or ten for my site.
- - By rocket (***) [se] Date 2017-12-07 21:12
Isn't it likely that AlphaZero simply knew the tactical moves in the positions against Stockfish by memory, from its own Monte Carlo self-play marathon? Search would then have nothing to do with it; it had already played the positions in the past...
Parent - - By Kappatoo (*****) [de] Date 2017-12-07 21:19
Do you know how many possible games there are in chess?
Parent - By rocket (***) [se] Date 2017-12-07 21:27
If they play conventional openings (which they did), and are both 3000+ programs, it is entirely conceivable that AlphaZero entered lines played by Stockfish that it had already played against itself thousands of times and had a good chunk of stats on.

I want to know the number of games it self-played.
- By rocket (***) [se] Date 2017-12-07 21:15
The engines with selective search are not the top tactics solvers. Deep Junior scores around a mere 50% on user werewolf's tactical test suite, so I don't really buy this. It has seen the positions before!!
- By rocket (***) [se] Date 2017-12-07 22:29
The fact that it only beat Stockfish 3 times as black out of 100 games also proves that it is a far cry from "alien chess". Stockfish is nowhere near perfection as white and would lose more than 3 times against a perfect opponent.
- - By rocket (***) [se] Date 2017-12-07 22:58
BTW, how can AlphaZero play non-random moves if it has no knowledge besides the rules? It must have a material table of some sort, like basic piece values.
Parent - - By Sesse (****) [no] Date 2017-12-07 23:02
It has no initial knowledge besides the rules. It certainly has tons of knowledge after the training phase, but it's self-learned, as opposed to given to it by humans.
Parent - - By rocket (***) [se] Date 2017-12-07 23:09
But how can the training games be of any use with no knowledge in the engine? With 44 million tries it still has zero knowledge and would play random moves if it has no objectives: a material table, evaluation heuristics, etc.
Parent - - By Chris Whittington (**) [fr] Date 2017-12-07 23:24 Upvotes 1
Well, you've got a neural net in your head. How have you learnt that a face you've never seen before is happy or sad, or male or female, or even that it is a face? You generalise from previous experience, no? The AlphaZero neural net also generalises from previous experience: it has seen maybe four thousand million chess positions, and it learns, because it is told so, that a given position led to a win/loss/draw and so on. Eventually it gets quite good at telling whether a position it has never seen before is a win/loss/draw. It doesn't care about material tables or evaluation heuristics.
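The learning-from-outcomes idea can be shown in miniature. This is an illustrative toy with invented features and a single linear unit, nothing like AlphaZero's deep network: the only supervision is the game result, yet a useful evaluation weight emerges without anyone writing in piece values by hand.

```python
import random

random.seed(0)

FEATURES = 4                      # hypothetical position features
weights = [0.0] * FEATURES        # no hand-given knowledge: all zeros

def predict(position):
    # Linear evaluation: weighted sum of the features.
    return sum(w * x for w, x in zip(weights, position))

# Fake "self-play" results: the outcome happens to depend on feature 0
# (think of it as a material-like signal the net has to discover).
data = []
for _ in range(1000):
    position = [random.uniform(-1, 1) for _ in range(FEATURES)]
    outcome = 1.0 if position[0] > 0 else -1.0   # +1 win, -1 loss
    data.append((position, outcome))

# A few epochs of plain stochastic gradient descent on squared error.
for position, outcome in data * 20:
    err = predict(position) - outcome
    for i in range(FEATURES):
        weights[i] -= 0.01 * err * position[i]

# The weight on the outcome-relevant feature now dominates the others.
print(weights)
```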
Parent - - By Sesse (****) [no] Date 2017-12-08 00:04 Upvotes 1
Well, probably deep down in the network, there's something resembling a material table. But it's emerged from experience and generalization, as you say. AlphaZero is playing far from random, and far from rote position memorization.
Parent - - By Sesse (****) [no] Date 2017-12-08 00:10 Edited 2017-12-08 11:44
By the way, this is a good reread: Deep Thought's parameter tuning. Infinitely more primitive, of course, but they also saw general chess patterns emerge in the tables through self-play.
Parent - - By gsgs (**) [de] Date 2017-12-08 02:56
There it is, now that chess is almost solved anyway and the draw rate in top correspondence chess is already above 95%.
How could all those brilliant and enthusiastic chess programmers have missed it for decades?
And yes, I was also one of those who thought it wouldn't work for chess, even after the Go experience.

Could it be that alpha-beta and minimax are somehow encoded, hidden in the neural net?
Could it be that a certain large amount of hardware is _needed_ to implement it?
Parent - - By Sesse (****) [no] Date 2017-12-08 08:10
It wasn't missed. There have been multiple attempts to make it work earlier; they just didn't pan out. As I see it, AlphaZero is a product of great engineering plus tons of computing power; you simply couldn't have made something like this in the 90s.
Parent - By gsgs (**) [de] Date 2017-12-09 03:26
How much Elo does the hardware account for?

What Elo do we estimate for AZ on a normal PC

of 2000: Intel Pentium or AMD K6, 200 MHz, 256 MB RAM, one core,

or 2010: 2 GB RAM, 2 GHz, one core,

or 2017: 16 GB RAM, 16 cores at 4 GHz?
Parent - - By gsgs (**) [de] Date 2017-12-15 03:41
Apparently it _was_ missed. See here what Matthew Lai wrote in Jan. 2016:
there are several techniques I can try that will probably drastically improve Giraffe,
when I can't [easily] try them [while working at Deepmind] because they are still [maybe] trade secret
Machine learning has defeated hand-crafted systems in just about every other field.
That will happen in chess, too, and the only question is when.
Parent - - By Sesse (****) [se] Date 2017-12-15 10:12
I don't agree with your interpretation of his post. Yes, sure, there are always tweaks. That's what's called engineering.
Parent - - By gsgs (**) [de] Date 2017-12-16 01:01 Upvotes 1
Lantonov complained that the code looks like Chinese; Matthew Lai offered to help with understanding it (which might still be valid: "feel free to ping me"), but an understanding of neural nets was required.

> You do need to have fairly good understanding of neutral networks and TD learning to begin with, though.

I speculate that people interested in chess programming usually lack that understanding, while people interested in neural networks and TD are usually not interested in chess programming.

It's not too late... AlphaZero will probably not be released, so it is no competitor. Now we have "proof" that much improvement is possible, and hopefully someone might improve Giraffe and submit it to the rating lists.

ML: machine learning
NN: neural net ("neutral net" must be a typo)
TD: temporal difference
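The temporal-difference idea behind Giraffe can be sketched minimally. This is an illustrative toy (tiny hand-made state space, tabular values rather than a neural net, nothing from Giraffe's actual code): each state's value estimate is nudged toward the reward plus the value of the state that followed it.

```python
# TD(0) on a three-state toy: position "a" leads to "b", which leads
# to a won terminal position. No state is ever told its value directly;
# credit for the win propagates backwards through the updates.
values = {"a": 0.0, "b": 0.0, "terminal_win": 0.0}
alpha, gamma = 0.1, 1.0   # learning rate, discount factor

def td_update(state, next_state, reward):
    target = reward + gamma * values[next_state]
    values[state] += alpha * (target - values[state])

# Replay the trajectory a -> b -> win many times.
for _ in range(100):
    td_update("b", "terminal_win", 1.0)   # b led to the win
    td_update("a", "b", 0.0)              # a led to b, no direct reward

print(values["a"], values["b"])  # both approach 1.0
```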
Parent - By gsgs (**) [de] Date 2017-12-16 01:25 Edited 2017-12-16 01:29
Giraffe is an experimental chess engine based on temporal-difference
reinforcement learning with deep neural networks

> In the last few years, neural networks have become hugely powerful thanks to two advances.

so maybe this was not possible 5 years ago, as you also suggested above

presumably because of the availability of big RAM at low prices, which has improved much more since 2000 than computation power has

His network consists of four layers that together examine each position
on the board in three different ways.

The first looks at the global state of the game, such as the number and type
of pieces on each side, which side is to move, castling rights and so on.

The second looks at piece-centric features such as the location of each piece
on each side, while the final aspect is to map the squares that each piece attacks and defends.

so counting material _is_ one basic way of examining positions

> 175 million positions
> Strategic Test Suite, which consists of 1,500 positions

matches the best chess engines in the world. [in that test]

> Giraffe takes about 10 times longer than a conventional chess engine to search the
> same number of positions.

> opening and end game phases, where it plays exceptionally well.
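The three feature groups described above can be sketched roughly as follows. This is a hypothetical illustration: the names, board encoding, and toy position are invented for the sketch, not taken from Giraffe's actual code.

```python
# Assemble one flat feature vector from the three groups described in
# the article: global game state, piece-centric locations, and maps of
# attacked/defended squares.
def extract_features(board):
    global_state = [                          # group 1: global game state
        board["side_to_move"],
        board["castling_rights"],
        sum(board["piece_counts"].values()),  # crude material-style count
    ]
    piece_centric = list(board["piece_squares"])                # group 2
    square_control = board["attack_map"] + board["defend_map"]  # group 3
    return global_state + piece_centric + square_control

# Toy position (invented encoding, just to exercise the function).
toy = {
    "side_to_move": 1,
    "castling_rights": 0b1111,
    "piece_counts": {"P": 8, "N": 2},
    "piece_squares": [12, 13],
    "attack_map": [0, 1],
    "defend_map": [1, 0],
}
print(len(extract_features(toy)))  # 9 features from this toy input
```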

Parent - By rocket (***) [se] Date 2017-12-08 13:29
If chess were almost solved, Stockfish would lose something like 90-95% of its games. It is far from that.
Parent - - By Kreuzfahrtschiff (***) [de] Date 2017-12-08 14:49
It's still a nearly 100% draw rate if I play corr chess against top players.

No program can change that.

AZ just comes too late to play better chess; it's done, like checkers.
Parent - - By Chris Whittington (**) [fr] Date 2017-12-08 15:03
The 100% draw rate is probably because most of your opponents are using Stockfish or an equivalent. AlphaZero shows how to play romantically.
Parent - - By Kreuzfahrtschiff (***) [de] Date 2017-12-08 15:22
You don't play at the highest level, so you don't know it like I do.
Parent - - By Chris Whittington (**) [fr] Date 2017-12-08 16:08
Really?! I was BCF 221 in 1971, which translates to FIDE 2400. One more maliciously rude or arrogant comment and I block you. Your loss.
Parent - - By Kreuzfahrtschiff (***) [de] Date 2017-12-08 16:57
Maybe you are not up to date. I run my 3 machines 24 hours a day and can see what happens in actual positions, and I have the strongest opponents there are on this planet.
Parent - - By Venator (Silver) [nl] Date 2017-12-08 17:34
The strongest opponent on the planet is Alpha Zero and you haven't played it yet :wink:
Parent - - By Kreuzfahrtschiff (***) [de] Date 2017-12-08 17:42

I think in 12 games I would win 2 against AZ.
Parent - - By Venator (Silver) [nl] Date 2017-12-08 18:13
You can't win, because you use inferior engines playing by human-made rules :wink:
Parent - - By Kreuzfahrtschiff (***) [de] Date 2017-12-08 18:29
Maybe not, but AZ will not win for sure :)

It has weaknesses in the opening.
Parent - - By Venator (Silver) [nl] Date 2017-12-09 06:00
First generation Alpha Go loses to second generation Alpha Go. Second generation Alpha Go is getting crushed by third generation Alpha Go Zero.

Currently we have only first generation Alpha Zero. It beats SF by 64%. Second, third, fourth generation Alpha Zero will win by much bigger margins. Especially if they feed SF to the Alpha Zero learning process with the aim of beating it by the highest margin.

The game has been changed by this new method. In 10-20 years the NN programs will do the same to the current top engines as the latter ones are doing to Shredder, Fritz and Junior of 2002-2007.
Parent - - By Kreuzfahrtschiff (***) [de] Date 2017-12-09 11:29
It's an interesting new way, but let's face it:

corr chess is solved and no program can do better than it does now

only quick games can (and will) be improved by AZ
Parent - - By Venator (Silver) [nl] Date 2017-12-09 12:07
> corr chess is solved and no program can do better than it does now

That's what they also said 10 years ago :-).

Of course a much stronger NN program can do better. Namely a NN program that finds interesting things in the 99.999999% subset of the tree that SF is not looking at.
Parent - - By Kreuzfahrtschiff (***) [de] Date 2017-12-09 12:22
What do I care what some non-experts say?
I didn't say it 10 years ago, but 3 years ago it was also right.
Parent - - By zaarcis (***) [lv] Date 2017-12-09 14:07
I believe someone is missing the point: the human part of correspondence chess can be easily automated (at least if you're Google-sized :D).

DeepMind/Google isn't much interested in chess. If they were, it wouldn't be hard to teach AlphaZero to take into account what Stockfish or anyone else thinks. :)
Such an "AlphaCentaur" would kick the asses of us all.
Parent - By zaarcis (***) [lv] Date 2017-12-09 14:21
If Google suddenly *actually* wanted to make a top chess entity, they would do it. (Dunno, add the option to query a 7-piece endgame database; do actual infinite analysis, like in correspondence chess... seriously, 1 minute, pshh :D; make their own actual [and secret] opening book that keeps improving every second; optimise all the running code until nothing can be improved in terms of speed. Etc, etc. Oh right, and keep it self-training until the end of the universe. :D)

That could be enough, I believe, even without consulting Stockfish or anyone else. That would be superhuman level.
Of course, it can be modified into a supercentaur too. :)

Powered by mwForum 2.27.4 © 1999-2012 Markus Wichitill