Rybka Chess Community Forum
Topic: The Rybka Lounge / Computer Chess / AlphaZero beats Stockfish 8 by 64-36
Parent - By Banned for Life (Gold) Date 2017-12-21 23:03
The TPU development effort certainly cost a lot more than $25 million, and it's conceivable that Alphabet didn't allocate all of those development costs to its server-farm efforts. But a little additional research shows the development team for AlphaGo was quite large, so it's easy to imagine that the $25 million was actually spent on labor.

In any event, in a recent paper, "Mastering the Game of Go without Human Knowledge", the development team credited AlphaGo Zero's recent 100-0 victory over AlphaGo Lee, on comparable hardware, to a "novel reinforcement learning algorithm". :razz:
Parent - - By gsgs (**) [de] Date 2017-12-14 03:11 Edited 2017-12-14 03:41
1,170,000 kn/s, 50 of my Ryzen 1700X

> It used 4 of googles tensor processing units (TPUs) which might be equivalent to
> about 1000 Intel cores (say 14 of the latest 72-core Xeons)


167,484,522: 4x Intel Xeon E7-8890 v3 @ 2.5 GHz, 144 threads, BMI2 (drudru42)

-------------------------------------------------

https://www.chess.com/forum/view/general/objectively-speaking-is-magnus-a-patzer-compared-to-stockfish-and-alphazero?page=8
Elroch

a course on reinforcement learning. One gentle introduction is this 10-lecture course
by David Silver of the DeepMind team. Very enjoyable and informative:

https://www.youtube.com/playlist?list=PLzuuYNsE1EZAXYR4FJ75jcJseBmo4KQ9-

Those TPUs cost half a million bucks.

NN computations work very well on TPUs: better than on GPUs and much better than on CPUs.
For this type of computation a TPU is about 15 to 30 times faster (Google's claim for TPUv2),
so remove the last zero from your 1000 to be more realistic.

The team includes at least 3 chess programmers. Matthew Lai, the author of Giraffe
and a Talkchess member, is one of them. It is perhaps telling that Giraffe,
which follows much the same approach as AlphaZero, is rated only around 2400 on a single core.
It was the huge hardware that made the difference, not the approach.
Parent - - By Sesse (****) [se] Date 2017-12-14 10:18

> Those TPUs cost half a million bucks.


[citation needed]
Parent - By Banned for Life (Gold) Date 2017-12-21 20:21
If you are only making a small number of these chips (or any other piece of hardware), the amortized development costs make them very expensive. If they made millions of them, the cost would be driven by die area and yield, and would be less than $100...
Parent - By Carl Bicknell (*****) [gb] Date 2017-12-15 09:03

> It is the huge hardware that made the difference and not the approach.


yep.

I am very impressed by Alpha Zero, but most people ignore the hardware advantage it had.
Parent - By Lukas Cimiotti (Bronze) [de] Date 2017-12-17 11:56
I guess a computer with 4 TPUs (75 W each) has roughly the same power consumption as a dual Xeon E5-2696v4 (150 W each). So this is absolutely impressive. I also guess Google doesn't sell these TPUs, so the price is questionable. https://www.hardwareluxx.de/index.php/news/hardware/prozessoren/42533-ein-blick-in-google-s-tensor-processing-unit-tpu.html
Parent - By Labyrinth (*****) [us] Date 2017-12-06 12:37
Zomg awesome, I was so hoping they'd turn AlphaZero loose on chess. This is really cool; I hope they keep tuning it and see how far it can go!!
Parent - - By zaarcis (***) [lv] Date 2017-12-06 12:53
Now we have to wait for the Stockfish team to start making a Stockfish Zero, because DeepMind isn't going to publish the resulting neural network, 100% sure. :D

Maybe in some similar distributed way, like the Leela Zero project for Go.
Parent - - By Banned for Life (Gold) Date 2017-12-06 17:04
They published, more or less, how they represented the game state, and also how the learning algorithm worked, more or less. :wink:

The resulting neural network can be derived in roughly 24 hours, just as it was for the paper...
Parent - - By Sesse (****) [no] Date 2017-12-06 17:13

> The resulting neural network can be derived in roughly 24 hours, just as it was for the paper...


Yeah, assuming you want to rent 5,000 TPUs. And write a whole bunch of software, of course.
Parent - - By Banned for Life (Gold) Date 2017-12-06 17:30
I'm pretty sure that somebody will write the software in the not-too-distant future, probably for Nvidia's offering.

From the paper:

We applied the AlphaZero algorithm to chess, shogi, and also Go. Unless otherwise specified,
the same algorithm settings, network architecture, and hyper-parameters were used for all
three games. We trained a separate instance of AlphaZero for each game. Training proceeded
for 700,000 steps (mini-batches of size 4,096) starting from randomly initialised parameters,
using 5,000 first-generation TPUs (15) to generate self-play games and 64 second-generation
TPUs to train the neural networks. Further details of the training procedure are provided in the
Methods.
Figure 1 shows the performance of AlphaZero during self-play reinforcement learning, as
a function of training steps, on an Elo scale (10). In chess, AlphaZero outperformed Stockfish
after just 4 hours (300k steps);


It took 4 hours to perform 300k steps, so the full 700k steps probably took about 8 hours (i.e. the 24 hours was probably the time to train all three games). The second-generation units that Google will be producing have close to 20x the memory bandwidth of the first-generation units, so the number of second-generation TPUs required for game play is probably going to be a lot less than 5,000, in addition to the 64 for learning.

Leasing 4,000 second-generation TPU-hours (8 hours x 500 second-generation TPUs) shouldn't be unaffordable for a serious engine developer today, and no doubt that cost will decrease significantly in the near future as more TPU-like processors from Google, Nvidia and AMD enter the market...
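A quick back-of-the-envelope sketch of those estimates in Python (the step counts and TPU counts are from the paper; the straight-line extrapolation and the 500-TPU fleet are my assumptions, not anything DeepMind stated):

# Rough estimate of AlphaZero's chess training time and TPU-hours.
# From the paper: 300k steps took 4 hours; training ran 700k steps total.
steps_total = 700_000
steps_at_4h = 300_000
hours_linear = 4 * steps_total / steps_at_4h   # ~9.3 h by pure linearity; "about 8 hours" above rounds down
print(f"linear extrapolation: {hours_linear:.1f} hours for 700k steps")

selfplay_tpus = 500    # hypothetical gen-2 fleet (the gen-1 run used 5,000)
lease_hours = 8
print(f"lease estimate: {selfplay_tpus * lease_hours} second-generation TPU-hours")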
Parent - - By Sesse (****) [no] Date 2017-12-06 22:47

> I'm pretty sure that somebody will write software in the not too distant future, probably for Nvidia's offering.


Well, people have tried to replicate AlphaGo since the first paper (soon to be two years ago). So far, essentially nothing has happened. There are multiple NN-based chess engines already—the AlphaZero paper links to several of them. The paper in itself is nowhere near enough to replicate what DeepMind has been doing—it requires a bunch of engineering and computing time.

> It took 4 hours to perform 300k steps, so the 700k steps probably took about 8 hours (i.e. the 24 hours was probably the time to train for all three games). The second generation units that Google will be producing have close to 20X the memory bandwidth of the first generation units, so the number of second generation TPUs required for game play is probably going to be a lot less than 5,000, in addition to the 64 for learning.


I've been working with neural networks at Google (not directly on DeepMind, but they used our stuff), so I'll refrain from commenting on specific hardware details. But do keep in mind that you will not get by with one training round, unless you write perfect software the first time.

> Assuming you might have to lease 4000 second generation TPU-hours (based on 8 hours x 500 second generation TPUs) shouldn't be unaffordable for a serious engine developer today


OK, so let us take this number at face value. Google currently charges about $0.50 for one hour of one ML unit (one machine, essentially)—this is without a TPU, but let's just say, unrealistically, that it comes for free. So every training round of 4,000 of those machine-hours is $2,000. If you're an AI expert and you're really good, maybe you can get away with only 100 training rounds while developing your software, so $200,000. (Any engine developer will probably tell you that they did many more than that!) How many engine developers are “serious” enough that they can blow $200k on training? Can you recoup that cost by being, say, 50 Elo better than Stockfish? For something you can only really run with a monster GPU?
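The arithmetic, as a minimal sketch (the $0.50/hour price and the 100-round count are the figures from this post, nothing more authoritative):

# Cost model for repeated training runs, using the figures above.
price_per_machine_hour = 0.50    # USD, ML unit without TPU (per the post)
machine_hours_per_round = 4000   # 8 hours x 500 machines
rounds = 100                     # optimistic number of development runs

cost_per_round = price_per_machine_hour * machine_hours_per_round
print(f"per round: ${cost_per_round:,.0f}; total: ${cost_per_round * rounds:,.0f}")
# per round: $2,000; total: $200,000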
Parent - - By Banned for Life (Gold) Date 2017-12-21 20:46
> Well, people have tried to replicate AlphaGo since the first paper (soon to be two years ago). So far, essentially nothing has happened. There are multiple NN-based chess engines already—the AlphaZero paper links to several of them. The paper in itself is nowhere near enough to replicate what DeepMind has been doing—it requires a bunch of engineering and computing time.

Technology transfer usually occurs on two legs... At some point, some insider from the AlphaZero development team will leave, and the technology will start to be disseminated into the larger community. This will be slowed by the fact that Alphabet has more money than just about anybody else, and is probably spending more on this effort than the rest of the field combined...

> But do keep in mind that you will not get by with one training round, unless you write perfect software the first time.

Having an independent group develop the training algorithms to the level shown by the Alphabet effort would be very expensive and time-consuming (as it was for the developers at Alphabet). Assuming the training algorithms eventually leak out, the training effort should be comparable to the effort required by the Alphabet team. No doubt Alphabet will do everything it can to keep the details under wraps, but eventually they will leak out...

> OK, so let us take this number at face value. Google currently charges about $0.50 for one hour of one ML unit (one machine, essentially)—this is without a TPU, but let's just say, unrealistically, that it comes for free. So every training round of 4,000 of those machine-hours is $2,000. If you're an AI expert and you're really good, maybe you can get away with only 100 training rounds while developing your software, so $200,000. (Any engine developer will probably tell you that they did many more than that!) How many engine developers are “serious” enough that they can blow $200k on training? Can you recoup that cost by being, say, 50 Elo better than Stockfish? For something you can only really run with a monster GPU?

I guess you're assuming that you're developing the training algorithms. Most people implementing NN-based systems aren't implementing and testing new training algorithms. The claim in the paper is that there were only minor differences in the training algorithms for three (or at least two) very different games. The implication is that if the base training algorithm were available, it could be customized for a particular game with fewer than 100 training rounds.

The most impressive factoid in the paper was that, given the training algorithm, it took only hours to take the NN from knowing nothing but the rules to much better than SF...

As an aside, I have no clue how big the computer engine market is. I'm guessing it doesn't exceed $1M a year?
Parent - - By Sesse (****) [gb] Date 2017-12-21 21:22
You're seemingly still assuming there's some secret algorithmic sauce here that can “leak”, after which it will be a lot easier to reimplement this. Really, there isn't. Experience helps a lot, but software takes time to develop, even the second time.
Parent - - By Banned for Life (Gold) Date 2017-12-21 22:02
Yes. I am assuming that they have developed and implemented better general-purpose training algorithms and supporting software, and that these, rather than the effort they spent on developing chess-specific algorithms or software, are their primary advantage over other NN-based chess engine developers.

If one assumes that Alphabet's deep-learning efforts are skewed >99.9% toward non-chess applications, but that chess applications benefit from the more general-purpose efforts, then leaking the results of those efforts could certainly be expected to lead to large advances in chess-specific implementations, with significant reductions in development time. I never meant to imply that the chess-specific portion was going to zero, though.
Parent - - By Sesse (****) [gb] Date 2017-12-21 23:18
As someone who has worked in the AI section of Alphabet (aka Google Brain): You're just wrong.
Parent - By Banned for Life (Gold) Date 2017-12-21 23:44
Could be. Lots of effort has been wasted over the years on application-specific game algorithms and software development. But then again, Alphabet felt the need to pay a lot of money to buy DeepMind a few years ago, so maybe they felt they didn't have all the answers, and maybe you don't either...
Parent - - By Banned for Life (Gold) Date 2017-12-22 18:51
You do realize that the last group DeepMind would share trade secrets with is the people at Google Brain, right?
Parent - - By Sesse (****) [gb] Date 2017-12-22 23:47
You're grasping at straws. Bye :-)
Parent - By Banned for Life (Gold) Date 2017-12-23 00:17
Really? You brought up your relationship with Google Brain multiple times, implying that it gave you insight into how the development process worked at DeepMind, when in fact the two groups are competitors with no researchers in common. There is no reason to believe that Google Brain would be competitive in developing top-ranked game-playing software, or that you know much more than nothing about the process DeepMind used... Bye :-)
Parent - By Kappatoo (*****) [de] Date 2017-12-21 23:23
Do you know how many TPUs they used in training AlphaGo Zero? If I understand that paper correctly, they played 4.9 million games in an initial training phase lasting 72 hours, compared to 21 million games in 34 hours for AlphaZero (Go). But the AlphaGo Zero games seem to have been played at a slower pace - 0.4 seconds per move vs. 0.2 seconds per move. Do I understand this correctly? If so, it suggests that they used many more processors for AlphaZero - about 4.5 times as many (see the sketch below).
It is not entirely clear to me what happened in AlphaGo Zero after this initial 72-hour training phase - overall, they trained the network for 40 days. It seems they added 20 blocks to the neural network (what exactly is a block, by the way?) and changed the parameters of the MCTS search based on what had been learned from the initial training phase. In the remaining 37 days, there seems to have been only unsupervised self-play.
Still, do you think the algorithm/network architecture they used in AlphaZero is superior to the one they used in AlphaGo Zero for Go?
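For what it's worth, the ~4.5x figure can be reproduced like this (a sketch; it assumes self-play compute scales as games-per-hour times thinking-time-per-move, with similar game lengths in both runs):

# Reproducing the ~4.5x processor estimate from the figures quoted above.
alphago_zero = dict(games=4.9e6, hours=72, sec_per_move=0.4)
alphazero_go = dict(games=21e6,  hours=34, sec_per_move=0.2)

def compute_demand(run):
    # processors needed ~ (games per hour) x (seconds of search per move)
    return (run["games"] / run["hours"]) * run["sec_per_move"]

ratio = compute_demand(alphazero_go) / compute_demand(alphago_zero)
print(f"AlphaZero(Go) needed roughly {ratio:.1f}x the processors")   # ~4.5x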
Parent - - By rocket (***) [se] Date 2017-12-06 15:58
What time control was used?
Parent - - By Sesse (****) [no] Date 2017-12-06 16:47
One minute per move.
Parent - By Fulcrum2000 (****) [nl] Date 2017-12-06 20:47
Stockfish had 64 cores and 1 minute per move, but only 1 GB of hash.
That will limit its strength a lot. It would probably still have lost the match, but by a smaller margin, I think.
Parent - - By Chris Whittington (**) [fr] Date 2017-12-06 16:31 Upvotes 1
Wowza!!

Been looking at the games, the paper  and forum comments all day. Observations:

This is the end of the computer chess paradigm of the last fifty years: material plus positional weight tinkering. It's an integrated evaluation which takes no account of "material"; it could not care less about Q=9, R=5 and so on, fine-tuned and added up in a giant polynomial of alleged "accuracy". It couldn't care less whether an isolated pawn is penalised by 0.25 or 0.27 or 0.3333 recurring. Rightly so, because in chess nothing is constant: everything depends on everything else, or even matters not at all. In particular, it is not hamstrung by the greatest unfixed bug in computer chess: the massive reliance on "material", the inability to discount it effectively against "positional" and, most importantly, the inability to discount it effectively against, well, everything(!), though King attack springs to mind. They have the evaluation of Tal: "Who cares about material, I'm going to win this." Polynomial fail, system.

It's the end of the alpha-beta-with-pruning paradigm (too much material bias). Chess was never about separating "material" and "positional". They are integrated, but search broadly treats each as discrete. System fail.

From the games: It's very attacking. It's very sacrificial. Why does it kill Stockfish? Because Stockfish is materialistic (system, old paradigm). Because Stockfish can't prune the tree effectively (system, too materialistic, old paradigm). It's the end of programmers tinkering with evaluation functions, even hundreds of programmers with thousands of machines. In a sense, Stockfish understands, or can understand, or can approach understanding, everything, but also nothing. It doesn't know how to add "everything" up. Busted.

What else did I notice? It doesn't have null move and can therefore exploit Zugzwang. It seems to understand restricting mobility and when this is important. It seems good at covering incoming pieces. To repeat, it likes attacking.

From the paper, a point comparing minimax with Monte Carlo: the former seeks out errors and brings them back to the root, and thus can promote moves based on evaluation error; the latter is error-cancelling. Neat.

"AlphaZero evaluates positions using non-linear function approximation based on a deep
neural network, rather than the linear function approximation used in typical chess programs.
This provides a much more powerful representation, but may also introduce spurious approximation
errors. MCTS averages over these approximation errors, which therefore tend to cancel
out when evaluating a large subtree. In contrast, alpha-beta search computes an explicit minimax,
which propagates the biggest approximation errors to the root of the subtree. Using MCTS
may allow AlphaZero to effectively combine its neural network representations with a powerful,
domain-independent search."


The AlphaZero approach can only get better. The current chess program paradigm will be systemically unable to meet it.

Woo-hoo!!
Parent - - By Banned for Life (Gold) Date 2017-12-06 17:00
Yes, definitely a new paradigm. The "drawback", if you want to call it that, is that since it is not based on simple rules, it will take a lot of work to explain to a person why one move is preferred over another. This will make top level chess even less understandable to us mere mortals! :lol:
Parent - - By Sesse (****) [no] Date 2017-12-06 17:12
Chess engines have never been good at explaining move preference anyway, so this isn't a big difference.

I wonder if one could train it on human games to get an accurate probability of humans winning or losing a game (of course, it would be a weaker chess player in absolute terms).
Parent - - By Chris Whittington (**) [fr] Date 2017-12-06 17:20
As the paper points out, minimax amplifies evaluation error, which is why we see so many main lines with really quite dumb moves towards the end of the line. MC averages, and thus reduces, error. There is no main line in MC, but there's also no reason why AlphaZero can't print a best-choice line N moves deep.
Parent - - By Sesse (****) [no] Date 2017-12-07 08:24
I don't think I agree with your reasoning. “Minimax amplifies evaluation error” is about a wrong evaluation in one line being likely to override another line (or even make it possible to prune it out entirely). The reason the moves are dumber near the end doesn't have anything to do with evaluation error—even with a great evaluation function, you'd still see a silly PV if you take it to the end, simply because those moves are searched more shallowly. Unless, of course, your evaluation function was so great you didn't need search at all, but even AlphaZero isn't there by a long shot.

MCTS is based more on averages and less on comparisons, so it's naturally less sensitive to these kinds of swings.
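A toy simulation of the effect being described here (entirely made-up numbers: every move has true value 0 and the evaluator adds Gaussian noise):

# Why backing up an extreme over noisy evaluations is biased, while averaging is not.
import random

random.seed(42)
MOVES, TRIALS = 40, 10_000

max_scores, avg_scores = [], []
for _ in range(TRIALS):
    noisy = [random.gauss(0.0, 1.0) for _ in range(MOVES)]   # true value is 0 everywhere
    max_scores.append(max(noisy))          # minimax-style: the extreme value is backed up
    avg_scores.append(sum(noisy) / MOVES)  # MCTS-style: the noise cancels out

print(f"mean of backed-up maxima: {sum(max_scores) / TRIALS:+.2f}")   # ~ +2.2
print(f"mean of averages:         {sum(avg_scores) / TRIALS:+.3f}")   # ~ 0.0

The maximum over 40 equally-worthless moves looks more than two pawns better than it really is; the average stays honest. Real search is far more subtle than this, but it is the gist of the paper's argument.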
Parent - - By Chris Whittington (**) [fr] Date 2017-12-07 11:01
Well, this disagreement might only be about the meaning of the term "amplify", but I think it does. Minimax is about finding the pot of gold at the end of the rainbow without stepping on any mines to get there. It's a narrow and precarious path. Errors in mine detection or pot-of-gold detection throw the search off very easily onto another path. It's error-intolerant. It tends to find evaluation errors in the search and play to them.

Whereas Monte Carlo averaging is not about trying to pick out a fine line. I'm trying to think up a suitable metaphor, but you get the drift.

On PV stupidity... I'd argue it's the nature of the pruning, which is basically materialistic and thus a very blunt tool. Add to this the hand-crafted evaluation, which still fails to recognize dynamic features (that's left to the search, right?), and the pruned minimax shows you, more and more in the outer branches, how poor it is at melding material and positional together. Fifty years of computer chess science and the bug remains: the separation of material and positional concepts in both evaluation and search. Everybody thought deep search gets around the problem, but AlphaZero's integrated evaluation proved otherwise. Material 9/5/3/3/1 thinking, fine-tuned or not, is systemic failure in both evaluation and search, and is the source of the dumb PVs.
Parent - - By Sesse (****) [se] Date 2017-12-07 11:10
Well… I partially agree and partially don't. There's not necessarily a clear material/positional divide; just think of piece-square tables (a knight in a corner is worth much less than one in the center). But in a sense, even though the score comes from a single tip of the PV, that PV has been chosen based on what happened in the millions and billions of other lines that were not chosen (and this is independent of whether the pruning is sound or not). They contributed to the end result just as much as the PV did.
Parent - - By Chris Whittington (**) [fr] Date 2017-12-07 11:53
Maybe I am not explaining this well enough .....

The systemic failure of minimax, alpha-beta, prune, extend and so on with static evaluation is...

... not because the individual evaluation components lack precision or fine tuning (probably they are very precise and very fine-tuned, overall, on average),

... but because the process of polynomially adding them all together to give a global score is basically nonsensical (although in incestuous games within the paradigm nobody notices, because being better than humans via massive depth masks the problem),

... and also because the fine-tuned (again incestuous, within-paradigm) search pruning/extending functions rely on the dumb evaluation and, worse, modify it with brute Q=9, R=5 and so on values.

The entire concept of material plus piece-square tables plus open files plus bla bla, with pruned search to cover the dynamics, rests on the same broken additive/subtractive polynomial idea. AlphaZero has integrated the evaluation beyond the comprehension of a polynomial evaluator, however fine-tuned, and with however many programmers. New DNA. Anecdotally, how else to explain the game where Stockfish's queen is trapped (iirc Qh8, Kg8, fianchetto pawns, AlphaZero playing Rf6: queen dead)? The polynomial evaluation function was probably quite happy for quite a while beforehand, and the dumb pruning functions presumably totally misled themselves.
Parent - - By Banned for Life (Gold) Date 2017-12-21 20:51
Excellent explanation. There is no reason to believe that the evaluation terms combine linearly, and in fact, it would be astounding if they did...
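To make the linear vs. non-linear point concrete, a minimal sketch (the features, weights and tiny network are all hypothetical, not any real engine's terms):

# A classical eval is (roughly) a weighted sum of hand-crafted features,
# so no term can modulate another. A net with a hidden layer can.
import math

features = {"material": 1.0, "isolated_pawns": 2.0, "king_attack": 0.5}

def linear_eval(f):
    weights = {"material": 1.00, "isolated_pawns": -0.25, "king_attack": 0.40}
    return sum(weights[k] * f[k] for k in f)            # terms just add up

def tiny_net_eval(f, w_hidden, w_out):
    x = [f["material"], f["isolated_pawns"], f["king_attack"]]
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return sum(wo * h for wo, h in zip(w_out, hidden))  # terms interact via hidden units

w_hidden = [[0.5, -0.2, 0.9], [1.0, -0.3, -0.7]]        # made-up weights
w_out = [1.2, 0.8]
print(linear_eval(features), tiny_net_eval(features, w_hidden, w_out))

In the net, a big "king_attack" value can swing how much "material" matters, through the shared hidden units; a fixed polynomial cannot do that, however finely tuned.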
Parent - By Chris Whittington (**) [fr] Date 2017-12-22 23:25
I'm beginning to realise this stuff is a lot more complicated and hard to understand than Messrs Dunning and Kruger could ever have dreamed. One imagines one had it worked out, and then not. Fascinating topic, especially all the opinions.
Parent - - By Chris Whittington (**) [fr] Date 2017-12-06 17:15 Upvotes 1
Yup, that's a neural net feature: it cannot explain itself in any language understandable to humans other than the new WEIGHT language, for which we have no dictionary, although we do understand the grammatical structure. But didn't AlphaGo find some game structures that humans hadn't perceived before? And now do.
What it might do is romanticise chess again: if it can show many games in a Tal style, humans might start believing in (and recognising) dynamic patterns again, and have the confidence to "go for it".
Parent - - By Banned for Life (Gold) Date 2017-12-06 17:39
I am most excited about the fact that this takes us back to the point where algorithms are more important (much more important) than hardware speed. Although AlphaZero has already shown definitively that a neural-net-based approach will be superior to a highly tuned alpha-beta approach, there is no reason to believe that the approach used by AlphaZero is anywhere near optimal for a NN-based engine. Most likely it can be improved significantly in both the game-play and learning phases.

I am hoping that this brings us back to where you chess engine guys were 25 years ago, when people could roll their own engines and bring them to competitions with a chance of winning, without having an engine that was 99% the same as every other top engine...
Parent - - By Chris Whittington (**) [fr] Date 2017-12-06 17:52
Well, I suppose the question is whether one start-from-zero NN would play differently (apart from trivial differences) from any other, or whether a "programmer" could do much to affect its "style". It might just end up as a competition between NN designers, time spent training, hardware and so on.

I'm wondering where DeepMind will go with this. Chess always was the PR vehicle for AI industry funding, and now Go has replaced it. It gets media. Media brings investment and so on. But the actual development, I think, is not very interesting for them, other than to stimulate thinking (fantasising) about what else might be possible.
Parent - By Banned for Life (Gold) Date 2017-12-06 18:09
Optimal training of large NNs is still a hot research area; it stands to reason that some methods work much better than others, and it's almost certain that better methods are yet to be discovered. There are also quite a few variables in how the NN itself is constructed and how the nodes influence each other.

All in all, I think there are enough variables in play that it will take a while to converge on a solution that works best. During this time, competitions would be very exciting, the more so because lots of chess knowledge is no longer a requirement: just knowing the rules of the game is enough (so Bob could make a comeback! :lol:). Another factor is that the complexity of writing an engine from scratch (given good NN development tools, of course) should be significantly reduced. So it will be much more about algorithms, and less about programming skill.

It's an interesting question whether different training algorithms based on self-play will result in what humans perceive as different styles of chess, or the same style at different strengths...
Parent - - By Banned for Life (Gold) Date 2017-12-06 19:08
Upon further reflection, you could certainly change the program's behavior by changing its goal. E.g., an engine playing to win at all costs, scoring a draw barely better than a loss, would be expected to play differently than one playing to achieve at least a draw. The behavior might also be changed by giving a higher score to games that are won quickly. In a tournament setting, a separate NN could be used to game out the best strategy for adjusting the playing style for each game to maximize the chances of winning the tournament! :lol: This could be adjusted in real time by keeping track of the positions on each of the other boards during each round...
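A sketch of what "changing the goal" might look like as reward shaping in self-play training (all values made up for illustration; the paper itself used plain +1/0/-1 for win/draw/loss):

# Hypothetical reward shaping to steer an engine's style during self-play.
def reward(result: str, moves_played: int, style: str = "standard") -> float:
    base = {"win": 1.0, "draw": 0.0, "loss": -1.0}[result]
    if style == "must_win" and result == "draw":
        base = -0.8                                      # a draw is almost as bad as a loss
    elif style == "quick_kill" and result == "win":
        base += max(0.0, 0.5 - 0.005 * moves_played)     # bonus for winning fast
    return base

print(reward("draw", 60, "must_win"))    # -0.8
print(reward("win", 30, "quick_kill"))   # 1.35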

I'm sure you're right about chess not being the be-all and end-all for AI anymore. It would certainly be more impressive to have practical applications, like a self-driving car or an automated tool for selecting optimal travel arrangements...
Parent - By Chris Whittington (**) [fr] Date 2017-12-06 20:08
This is true. Adjust the goal: wins by mate with lots of pieces on the board, and so on.
Parent - By shrapnel (***) [in] Date 2017-12-06 18:53
+1. You summed it up perfectly.
Parent - By Felix Kling (Gold) [de] Date 2017-12-06 17:11
This is also interesting from a performance point of view; MC approaches should scale well :-)
Parent - By Banned for Life (Gold) Date 2017-12-06 17:59
If I were a billionaire, and had been following this effort earlier, I would have offered every cent I had to have them keep the engine internals under wraps and play only 1.b3... :lol::yell:
Parent - By rocket (***) [se] Date 2017-12-06 20:21
Guys, Stockfish played the French Defence like crap, as most engines tend to do. A strong human would castle and play for f6.

But the Queen's Indian massacre was scary.
Parent - By Kreuzfahrtschiff (***) [de] Date 2017-12-07 09:02
This is a joke. I bet this program would not win any game against the newest asmFish with a decent hash.
Parent - - By Felix Kling (Gold) [de] Date 2017-12-07 09:58
Btw, which hardware was their program running on? They don't really state that...
Parent - By Chris Whittington (**) [fr] Date 2017-12-07 11:09
Doesn't matter. The old paradigm has been sent to the dustbin of history.
Parent - - By Sesse (****) [se] Date 2017-12-07 11:11
They did state it was running on four TPUs. If you're talking about the CPUs, they are less likely to be fully loaded, as they're mostly used for bookkeeping and for driving the TPUs.
Parent - - By rocket (***) [se] Date 2017-12-07 21:09
How many games did AlphaZero play against itself in the 24 hours?
Parent - - By Sesse (****) [no] Date 2017-12-07 22:15
44 million games (see table S3 from the paper).