Chess

(December 7th, 2017, 13:57)darrelljs Wrote: Very interesting article...would love to see it take on Deep Blue at some appropriate point in time.  And, of course, Magnus.

Deep Blue was dismantled after the match against Kasparov 20 years ago and would be no match for any modern chess software given the advances in programming and hardware power. A modern cell phone would very likely beat the Deep Blue of 1997:

Wikipedia Wrote:In 2009 a chess engine running on slower hardware, a 528 MHz HTC Touch HD mobile phone, reached the grandmaster level. The mobile phone won a category 6 tournament with a performance rating 2898. The chess engine Hiarcs 13 runs inside Pocket Fritz 4 on the mobile phone HTC Touch HD. Pocket Fritz 4 won the Copa Mercosur tournament in Buenos Aires, Argentina with 9 wins and 1 draw on August 4–14, 2009.[33] Pocket Fritz 4 searches fewer than 20,000 positions per second.[34] This is in contrast to supercomputers such as Deep Blue that searched 200 million positions per second. Pocket Fritz 4 achieves a higher performance level than Deep Blue.

And Vishy Anand said in 2013:

Quote:"Would you lose if you played against your cellphone?" "Probably," he says. "That is pretty depressing." "It was depressing" he agrees. "Now we are used to it." He thinks for a moment. "Well if I just played not to lose, I might survive."

As for Magnus playing against a computer, this would probably be a bit pointless. If you check the chess software rating list, #2 Houdini costs $90 and has a rating of 3412 while running on a 4-core CPU.

http://www.computerchess.org.uk/ccrl/4040/

The last serious man vs. machine match was Kramnik vs. Deep Fritz in 2006. Since then, balanced man vs. machine matches (without a handicap) have not attracted much interest, because machines have simply become too strong.

Reply

(December 7th, 2017, 13:53)Gustaran Wrote: That does not seem to be correct:

chessbase.com Wrote:Since AlphaZero did not benefit from any chess knowledge, which means no games or opening theory, it also means it had to discover opening theory on its own

Source: https://en.chessbase.com/post/the-future...arns-chess

Interestingly, the article shows which openings AlphaZero played first and abandoned later in the learning process. As a matter of fact, both the chessbase article and the academic paper seem to talk about 24 hours of deep learning, not 4.

https://arxiv.org/pdf/1712.01815.pdf

From GM Lev Alburt, who read the paper:

Are you sure this paper is serious? A number of details are not coherent. Table 1 of the paper says there was a 100-game match between AZ and Stockfish with no defeats for AZ. But Table 2 shows results for the 12 most common openings, with 50 games as White and 50 games as Black for each one. Three conclusions here: (1) There was a total of 1,200 games between AZ and Stockfish, with 24 losses for AZ; (2) Opening choice was not free; they set both to play a defined number of games for each line; (3) AZ had already self-played tons of learning games when it faced Stockfish. So it had already "learnt" openings and had its own "book" (in the form of theta parameters), while Stockfish was completely vulnerable on this part, a huge handicap. There were other suspect points, but I will not invest so much time analysing all this. AZ may be a relevant advance in AI, but the paper does not prove that.
Reply

This was a research paper. They were specifically comparing the alpha-beta tree search (ABTS) that Stockfish (SF) uses to the Monte Carlo tree search (MCTS) AlphaZero (AZ) used. Empirically speaking, removing the opening book allows for a fair comparison between AZ's tree construction and search algorithm and SF's. There's no point in polluting your data with cases where SF is just looking up moves in a precomputed database. Likewise, it makes sense that they compare node search rates, because they want to show that their algorithm is more efficient.
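The contrast between the two search styles can be sketched in miniature. This is a toy illustration only, not Stockfish's or AlphaZero's actual code: `evaluate`, `children`, and `rollout` are assumed interfaces to some hypothetical game, and the MCTS side is reduced to a one-ply bandit with UCB1 selection.

```python
import math

def alphabeta(state, depth, alpha, beta, evaluate, children):
    """ABTS-style search: depth-limited negamax with alpha-beta pruning.
    `evaluate` is a hand-tuned static evaluation of a single position."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    best = -math.inf
    for child in kids:
        score = -alphabeta(child, depth - 1, -beta, -alpha, evaluate, children)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:      # opponent would avoid this line: prune it
            break
    return best

def mcts_value(state, children, rollout, n_iter=200, c=1.4):
    """MCTS-flavoured idea: instead of a static eval, estimate a position's
    value by averaging simulated game outcomes, picking which child to
    explore next with a UCB-style formula (exploitation + exploration)."""
    kids = children(state)
    if not kids:
        return rollout(state)
    stats = {i: [0, 0.0] for i in range(len(kids))}  # i -> [visits, total]
    for t in range(1, n_iter + 1):
        def ucb(i):
            n, w = stats[i]
            if n == 0:
                return math.inf            # visit every child at least once
            return w / n + c * math.sqrt(math.log(t) / n)
        i = max(stats, key=ucb)
        outcome = rollout(kids[i])         # simulated result from this child
        stats[i][0] += 1
        stats[i][1] += outcome
    n, w = max(stats.values(), key=lambda s: s[0])  # most-visited child
    return w / max(n, 1)
```

The point of the comparison in the paper, as described above, is that each call to `rollout`/`evaluate` is a "node", so measuring strength at a fixed node budget shows which algorithm uses its nodes more efficiently.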

I don't really agree with Lev. He doesn't seem to have read it too carefully.

1 & 2) There were 1,300 games in this dataset (or 13 x 100-game datasets). 1,200 were played with locked openings; 100 were played without restriction. I dunno why he got confused on this point.

3a) Lev contends that AZ having 'experience' is the equivalent of a fully computed lookup table for the entire game, and that it's unfair. I disagree. SF requires an opening book because its search algorithm is bad at openings and needs precomputed lookups; that is the nature of the ABTS algorithm SF uses. So if you are comparing search algorithms, it is better to exclude it. Having the opening book is like someone showing up to a test with half the answers. So to me, this is the fairer comparison when you are comparing the two algorithms.
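The "half the answers" point boils down to how an opening book short-circuits search. A minimal sketch, with made-up position keys and moves standing in for real book lines:

```python
# Toy illustration: with a book, the engine's early moves are free lookups;
# without one, every move must come from (expensive) search.
# The position keys and moves below are hypothetical, not real theory.

opening_book = {
    "startpos": "e2e4",
    "startpos e2e4 e7e5": "g1f3",
}

def pick_move(position, search):
    """Prefer a precomputed book move; fall back to the search algorithm."""
    if position in opening_book:
        return opening_book[position]   # instant answer, zero nodes searched
    return search(position)             # search only runs out of book

print(pick_move("startpos", lambda p: "d2d4"))        # book hit
print(pick_move("some middlegame", lambda p: "d2d4")) # search fallback
```

Disabling the book forces the `search` branch on every move, which is exactly the condition under which the two algorithms were being compared.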

3b) "AZ may be a relevant advance in AI" is an understatement. SF is a chess AI. AZ is currently the best Go, Shogi, and chess AI all at the same time.

I do agree that they should hold a real tourney where both AIs are playing at their full potential. It'll be interesting to see if AZ can keep getting away with its Morphy-like sacrifices when SF knows the exact best first 40 moves.
In Soviet Russia, Civilization Micros You!

"Right, as the world goes, is only in question between equals in power, while the strong do what they can and the weak suffer what they must."
“I have never understood why it is "greed" to want to keep the money you have earned but not greed to want to take somebody else's money.”
Reply

(December 7th, 2017, 15:28)Gustaran Wrote: The last serious man vs. machine match was Kramnik vs. Deep Fritz in 2006. Since then, balanced man vs. machine matches (without a handicap) have not attracted much interest, because machines have simply become too strong.

I have not been paying attention. I did not realize things had skewed so far in the direction of our new AI overlords...

Darrell
Reply

(December 8th, 2017, 09:20)darrelljs Wrote:
(December 7th, 2017, 15:28)Gustaran Wrote: The last serious man vs. machine match was Kramnik vs. Deep Fritz in 2006. Since then, balanced man vs. machine matches (without a handicap) have not attracted much interest, because machines have simply become too strong.

I have not been paying attention.  I did not realize things had skewed so far in the direction of our new AI overlords...

Darrell

It used to be that humans could still beat computers in the more intuitive positional aspect of chess. But the engines are much better now.
Reply

Yesterday's game Nakamura vs. Carlsen was pretty unorthodox, to say the least. If you didn't know it was Nakamura playing, you would think some crazy club player was breaking every "common sense" early-game rule:


Reply

(December 8th, 2017, 10:14)ipecac Wrote: It used to be that humans could still beat computers in the more intuitive positional aspect of chess. But the engines are much better now.

Yeah, it's reached the "humans won't necessarily understand what the computer is doing, even after the fact" stage, even outside of special cases like endgame tablebases where that phenomenon has been going on for about 30 years.

I remember that one unfolding in the pages of Chess Life back then, when people couldn't quite work out what the computer was thinking in king + two bishops versus king + knight.  Last I checked, people still didn't quite know.
Reply

I think the issue with learning from AI games is that there's a fundamental disconnect between how humans reason, in terms of game plans (ideas and the means to realize them), and how the AIs operate. While you can suss out some potential game plans from an AI game, at any given time the engine is just maximizing material and position for the likeliest board state 20 moves ahead, and any appearance of strategy or train of thought is purely incidental.

I once heard a commentator paraphrase Magnus Carlsen expressing a similar sentiment: "[A computer] just makes moves and ends up ahead."

This is also one of the reasons why the 10 released AZ games were so interesting. AZ seemed to have a much more obvious and human-like way of playing out its games while outperforming SF. From what I could gather from the AZ paper, this may have something to do with AZ only exploring lines that result in victory. AZ also treats all victories as equal: it doesn't rely on scoring specific board states, because its system relies on played games. I am probably wrong about this; hopefully we can find out more from Google one of these days.
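The distinction being drawn here, between scoring a single board state and valuing a position by game outcomes, can be put in a couple of lines. Both functions below are grossly simplified stand-ins with made-up weights, not anything either engine actually computes:

```python
def handcrafted_eval(material_balance, mobility):
    """SF-style idea: score one board state with hand-tuned features.
    Weights here are invented for illustration (centipawn-like units)."""
    return 100 * material_balance + 10 * mobility

def outcome_value(results):
    """AZ-style idea: a position's value is the expected game outcome
    (+1 win, 0 draw, -1 loss) over games played through it. Every win
    counts the same, no matter how much material is left at the end."""
    return sum(results) / len(results)

print(handcrafted_eval(1, 3))          # a pawn up with some mobility
print(outcome_value([+1, +1, 0, -1]))  # mixed results through a position
```

On the outcome-based view, a winning sacrifice and a winning grind are worth exactly the same, which is one plausible reading of why AZ is so willing to give up material.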
Reply

I know there is already a ton of analysis about the AlphaZero games out there, but GM Daniel King recently uploaded his analysis of a game, which I found very interesting:


Reply

The hockey broadcasts in the U.S. experimented with a blue highlight on the puck for people who don't follow hockey. It was terrible, but...I need him to toggle the board colors when he is showing a hypothetical line vs. what actually happened in the game, because it was really confusing me. Still enjoyed it!

Darrell
Reply
