The value is presumed to represent the relative probability of winning if the game tree were expanded from that node to the end of the game. The function looks only at the current position (i.e. what spaces the pieces are on and their relationship to each other) and does not take into account the history of the position or explore possible moves forward of the node (hence "static"). This implies that for dynamic positions where tactical threats exist, the evaluation function will not be an accurate assessment of the position. These positions are termed non-quiescent
, Ethereal, and many other engines, where each table considers the position of every type of piece in relation to the player's king, rather than the position of each type of piece alone. The values in the tables are bonuses or penalties for the location of each piece on each space, and encode a composite of many subtle factors that are difficult to quantify analytically. In handcrafted evaluation functions, there are sometimes two sets of tables: one for the opening/middlegame and one for the endgame; middlegame positions are interpolated between the two.
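This interpolation ("tapered evaluation") can be sketched as follows; the phase scale and the example centipawn values are illustrative assumptions, not taken from any particular engine.

```python
def tapered_score(mg_value: int, eg_value: int, phase: int,
                  max_phase: int = 24) -> int:
    """Blend a middlegame and an endgame piece-square value by game phase.

    phase counts remaining non-pawn material (max_phase at the start of
    the game, 0 in a bare-kings endgame); both scales are illustrative.
    """
    return (mg_value * phase + eg_value * (max_phase - phase)) // max_phase

# A knight on a central square might be worth +20 centipawns in the
# middlegame table but only +5 in the endgame table:
print(tapered_score(20, 5, 24))  # 20 (pure middlegame)
print(tapered_score(20, 5, 0))   # 5  (pure endgame)
print(tapered_score(20, 5, 12))  # 12 (halfway, integer arithmetic)
```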
591:
complexity: computing detailed knowledge may take so much time that performance decreases, so approximations to exact knowledge are often better. Because the evaluation function depends on the nominal depth of search as well as the extensions and reductions employed in the search, there is no generic or stand-alone formulation for an evaluation function. An evaluation function which works well in one application will usually need to be substantially re-tuned or re-trained to work effectively in another application.
61:
, which are each a hundredth of a pawn. Larger evaluations indicate a material imbalance or positional advantage, or that a win of material is imminent. Very large evaluations may indicate that checkmate is imminent. An evaluation function also implicitly encodes the value of the right to move, which can vary from a small fraction of a pawn to a win or loss.
845:. An efficiently updatable neural network architecture, using king-piece-square tables as its inputs, was first ported to chess in a Stockfish derivative called Stockfish NNUE, publicly released on May 30, 2020, and was adopted by many other engines before eventually being incorporated into the official Stockfish engine on August 6, 2020.
755:. More recently, evaluation functions in computer chess have started to use multiple neural networks, with each neural network trained for a specific part of the evaluation, such as pawn structure or endgames. This allows for hybrid approaches where an evaluation function consists of both neural networks and handcrafted terms.
507:
There are no analytical or theoretical models for evaluation functions for unsolved games, nor are such functions entirely ad hoc. The composition of an evaluation function is determined empirically by inserting a candidate function into an automaton and evaluating its subsequent performance. A
738:
have been used in the evaluation functions of chess engines since the late 1980s, they did not become popular in computer chess until the late 2010s, as the hardware needed to train neural networks was not powerful enough at the time, and fast training algorithms and suitable network topologies and architectures
639:
of various weighted terms determined to influence the value of a position. Not all terms in a handcrafted evaluation function are linear, however; some, such as king safety and pawn structure, are nonlinear. Each term may be considered to be composed of first-order factors (those that depend only on
647:
used for material are: queen = 9, rook = 5, knight or bishop = 3, pawn = 1; the king is assigned an arbitrarily large value, usually greater than the total value of all the other pieces. In addition, it typically has a set of positional terms, usually totaling no more than the value of a pawn, though in some
810:
An important technique in evaluation since at least the early 1990s is the use of piece-square tables (also called piece-value tables). Each table is a set of 64 values corresponding to the squares of the chessboard. The most basic implementation of a piece-square table consists of
711:
Mobility is the number of legal moves available to a player or, alternatively, the sum of the number of spaces attacked or defended by each piece, including spaces occupied by friendly or opposing pieces. Effective mobility, the number of "safe" spaces a piece may move to, may also be taken into
651:
In practice, effective handcrafted evaluation functions are created not by expanding the list of evaluated parameters, but by carefully tuning or training the weights of a modest set of parameters, such as those described above, relative to each other. Toward this end, positions from various
590:
There is an intricate relationship between search and knowledge in the evaluation function. Deeper search favors less near-term tactical factors and more subtle long-horizon positional motifs in the evaluation. There is also a trade-off between efficacy of encoded knowledge and computational
1118:
Schrittwieser, Julian; Antonoglou, Ioannis; Hubert, Thomas; Simonyan, Karen; Sifre, Laurent; Schmitt, Simon; Guez, Arthur; Lockhart, Edward; Hassabis, Demis; Graepel, Thore; Lillicrap, Timothy (2020). "Mastering Atari, Go, chess and shogi by planning with a learned model".
909:
811:
separate tables for each type of piece per player, which in chess results in 12 piece-square tables in total. More complex variants of piece-square tables are used in computer chess, one of the most prominent being the king-piece-square table, used in
871:
took into account territory controlled, the influence of stones, the number of prisoners, and the life and death of groups on the board. However, modern go-playing computer programs largely use deep neural networks in their evaluation functions, such as
781:
the results of
DeepMind's AlphaZero paper. Apart from the size of the networks, the neural networks used in AlphaZero and Leela Chess Zero also differ from those used in traditional chess engines in that they have two outputs, one for evaluation (the
739:
had not yet been developed. Initially, neural-network-based evaluation functions generally consisted of a single neural network for the entire evaluation function, with input features selected from the board and an output that is an
840:
that has only piece-square tables as the inputs into the neural network. In fact, the most basic NNUE architecture is simply the 12 piece-square tables described above: a neural network with only one layer and no
715:
King safety is a set of bonuses and penalties assessed for the location of the king and the configuration of pawns and pieces adjacent to or in front of the king, and opposing pieces bearing on spaces around the
480:, is a function used by game-playing computer programs to estimate the value or goodness of a position (usually at a leaf or terminal node) in a game tree. Most of the time, the value is either a
943:; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy; Simonyan, Karen;
648:
positions the positional terms can get much larger, such as when checkmate is imminent. Handcrafted evaluation functions typically contain dozens to hundreds of individual terms.
1456:
Slate, D and Atkin, L., 1983, "Chess 4.5, the
Northwestern University Chess Program" in Chess Skill in Man and Machine 2nd Ed., pp. 93–100. Springer-Verlag, New York, NY.
631:
Historically in computer chess, the terms of an evaluation function have been constructed (i.e. handcrafted) by the engine developer, as opposed to being discovered through training
794:
to approximate the centipawn scale used in traditional chess engines, by default the output is the win-draw-loss percentages, a vector of three values each from the
747:
to the centipawn scale so that a value of 100 is roughly equivalent to a material advantage of a pawn. The parameters in neural networks are typically trained using
587:
to resolve threats before evaluation. Some values returned by evaluation functions are absolute rather than heuristic, if a win, loss or draw occurs at the node.
640:
the space and any piece on it), second-order factors (the space in relation to other spaces), and nth-order factors (dependencies on the history of the position).
722:
Pawn structure is a set of penalties and bonuses for various strengths and weaknesses in pawn structure, such as penalties for doubled and isolated pawns.
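As a minimal sketch of such terms (the penalty values and the list-based pawn encoding are hypothetical simplifications), doubled and isolated pawns can be detected from the files the pawns stand on:

```python
DOUBLED_PENALTY = 15   # centipawns, illustrative weight
ISOLATED_PENALTY = 12  # centipawns, illustrative weight

def pawn_structure_penalty(pawn_files: list) -> int:
    """pawn_files: the file index (0-7) of each friendly pawn."""
    counts = [pawn_files.count(f) for f in range(8)]
    penalty = 0
    for f in range(8):
        if counts[f] >= 2:  # doubled (or tripled) pawns on one file
            penalty += DOUBLED_PENALTY * (counts[f] - 1)
        if counts[f] and not (
            (f > 0 and counts[f - 1]) or (f < 7 and counts[f + 1])
        ):                   # no friendly pawn on an adjacent file: isolated
            penalty += ISOLATED_PENALTY * counts[f]
    return penalty

# Doubled c-pawns (flanked by b- and d-pawns) plus an isolated h-pawn:
print(pawn_structure_penalty([1, 2, 2, 3, 7]))  # 15 + 12 = 27
```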
455:
719:
Center control is derived from how many pawns and pieces occupy or bear on the four center spaces and sometimes the 12 spaces of the extended center.
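A minimal sketch of such a term, assuming a per-square attack count has already been computed elsewhere (the bonus value is a hypothetical weight):

```python
CENTER = ["d4", "e4", "d5", "e5"]  # the four center squares

def center_control(attacks: dict, bonus_per_attack: int = 5) -> int:
    """attacks maps square name -> number of friendly pawns/pieces
    occupying or bearing on that square."""
    return bonus_per_attack * sum(attacks.get(sq, 0) for sq in CENTER)

# Two attackers on d4, one on e4; attacks on non-center squares ignored:
print(center_control({"d4": 2, "e4": 1, "a1": 3}))  # 5 * 3 = 15
```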
790:), rather than only one output for evaluation. In addition, while it is possible to set the output of the value head of Leela's neural network to a
508:
significant body of evidence now exists for several games like chess, shogi and go as to the general composition of evaluation functions for them.
1459:
Ebeling, Carl, 1987, All the Right Moves: A VLSI Architecture for Chess (ACM Distinguished
Dissertation), pp. 56–86. MIT Press, Cambridge, MA
147:
1328:
1469:
859:
Chess engines frequently use endgame tablebases in their evaluation function, as this allows the engine to play perfectly in the endgame.
725:
King tropism is a bonus for closeness (or penalty for distance) of certain pieces, especially queens and knights, to the opposing king.
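A minimal sketch of a tropism term, using Chebyshev (king-move) distance; the per-piece weights are hypothetical:

```python
def chebyshev(a, b) -> int:
    """King-move distance between two (file, rank) squares."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

TROPISM_WEIGHT = {"Q": 4, "N": 3, "R": 2, "B": 1}  # hypothetical weights

def king_tropism(pieces, enemy_king) -> int:
    """pieces: list of (piece letter, (file, rank)) for one side.
    Bonus grows as a weighted piece gets closer to the enemy king."""
    return sum(w * (7 - chebyshev(sq, enemy_king))
               for p, sq in pieces
               if (w := TROPISM_WEIGHT.get(p, 0)))

# Queen two squares from the enemy king, knight five squares away:
print(king_tropism([("Q", (5, 5)), ("N", (2, 4))], (7, 7)))  # 4*5 + 3*2 = 26
```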
704:
Each of the terms is a weight multiplied by a difference factor: the value of white's material or positional terms minus black's.
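This weighted white-minus-black sum can be sketched as follows; the term names and weight values are hypothetical placeholders:

```python
# Hypothetical weights for a handful of the terms described in this article.
WEIGHTS = {"material": 100, "mobility": 2, "king_safety": 8,
           "center_control": 5, "pawn_structure": 6}

def evaluate(white: dict, black: dict) -> float:
    """Each term contributes weight * (white's value minus black's);
    the result is an evaluation from White's point of view."""
    return sum(w * (white.get(t, 0) - black.get(t, 0))
               for t, w in WEIGHTS.items())

white = {"material": 10, "mobility": 30}  # e.g. pawn-units and move counts
black = {"material": 9,  "mobility": 25}
print(evaluate(white, black))  # 100*1 + 2*5 = 110
```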
106:
1047:
798:. Since deep neural networks are very large, engines using deep neural networks in their evaluation function usually require a
448:
1494:
70:
1181:
828:
117:
611:. The term 'pawn' refers to the value when the player has one more pawn than the opponent in a position, as explained in
17:
1423:
643:
A handcrafted evaluation function typically has a material balance term that usually dominates the evaluation. The
441:
1347:"Efficiently Updatable Neural-Network-based Evaluation Function for computer Shogi (Unofficial English Translation)"
Schaeffer, J.; Burch, N.; Y. Björnsson; Kishimoto, A.; Müller, M.; Lake, R.; Lu, P.; Sutphen, S. (2007).
Schaeffer, J.; Björnsson, Y.; Burch, N.; Kishimoto, A.; Müller, M.; Lake, R.; Lu, P.; Sutphen, S.
Jun
Nagashima; Masahumi Taketoshi; Yoichiro Kajihara; Tsuyoshi Hashimoto; Hiroyuki Iida (2002),
949:"A general reinforcement learning algorithm that masters chess, shogi, and go through self-play"
1105:
Proceedings of the 2005 International Joint
Conferences on Artificial Intelligence Organization
761:
have been used, albeit infrequently, in computer chess after
Matthew Lai's Giraffe in 2015 and
in 2018 by Yu Nasu, the most common evaluation function used in computer chess today is the
in 2017 demonstrated the feasibility of deep neural networks in evaluation functions. The
559:, and do not require search or evaluation because a discrete solution tree is available.
The material term is obtained by assigning a value in pawn-units to each of the pieces.
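Using the conventional pawn-unit values given in this article (queen 9, rook 5, bishop or knight 3, pawn 1), the material term can be sketched as follows; the string-based piece encoding is a hypothetical simplification:

```python
PIECE_VALUES = {"Q": 9, "R": 5, "B": 3, "N": 3, "P": 1}  # pawn-units

def material(pieces: str) -> int:
    """pieces: one letter per piece for one side, e.g. 'QRRBNPP'.
    The king is omitted, since both sides always have exactly one."""
    return sum(PIECE_VALUES.get(p, 0) for p in pieces)

# A side up "the exchange" (rook versus bishop) scores +2 pawn-units:
print(material("QRR") - material("QRB"))  # 19 - 17 = 2
```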
may be tenths, hundredths or other convenient fraction, but sometimes, the value is an
ths of the value of a playing piece such as a stone in go or a pawn in chess, where
1329:"Efficiently Updatable Neural-Network-based Evaluation Function for computer Shogi"
Games in which game playing computer programs employ evaluation functions include
888:, and output a win/draw/loss percentage rather than a value in number of stones.
635:. The general approach for constructing handcrafted evaluation functions is as a
615:. The integer 1 usually represents some fraction of a pawn, and commonly used in
A tree of such evaluations is usually part of a search algorithm, such as
41:
Function in a computer game-playing program that evaluates a game position
607:, and the units of the evaluation function are typically referred to as
504:, representing the win, draw, and loss percentages of the position.
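Such a win/draw/loss vector can be collapsed to an expected score, and from there to an approximate centipawn-like value. The logistic 400-point scale below is a common convention (an assumption here, as in Elo ratings), not any particular engine's exact formula:

```python
import math

def wdl_to_cp(w: float, d: float, l: float) -> float:
    """Map a (win, draw, loss) probability vector to an approximate
    centipawn value via a logistic model (illustrative convention)."""
    expected = w + 0.5 * d                          # expected score in [0, 1]
    expected = min(max(expected, 1e-6), 1 - 1e-6)   # avoid log of 0
    return 400.0 * math.log10(expected / (1.0 - expected))

# A balanced position maps to 0; a 50/30/20 vector (expected score 0.65)
# maps to roughly +108 centipawns:
print(round(wdl_to_cp(0.5, 0.3, 0.2)))
```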
583:; they require at least a limited kind of search extension called
databases are employed, such as from master games, engine games,
GameDev.net - Chess
Programming Part VI: Evaluation Functions
in order to efficiently calculate the evaluation function.
77:
28:
543:, computer programs also use evaluation functions to play
1291:
An
Efficient Use of Piece-Square Tables in Computer Shogi
Learning Piece-Square Values using
Temporal Differences
603:, the output of an evaluation function is typically an
539:. In addition, with the advent of programs such as
1209:A Self-Learning, Pattern-Oriented Chess Program
668:An example handcrafted evaluation function for
928:
917:, Ser. 7, vol. 41, Philosophical Magazine
1368:
449:
1439:"official-stockfish / Stockfish, NNUE merge"
1000:"Temporal Difference Learning and TD-Gammon"
456:
442:
1294:, Information Processing Society of Japan
911:Programming a Computer for Playing Chess
777:was started shortly after to attempt to
27:For the string evaluation function, see
855:Endgame tablebase § Computer chess
14:
867:Historically, evaluation functions in
656:games, or even from self-play, as in
562:
118:Efficiently updatable neural networks
47:This article is part of the series on
1437:Joost VandeVondele (July 25, 2020).
1396:
1320:
829:efficiently updatable neural network
1399:"Release stockfish-nnue-2020-05-30"
1183:Learning to Play the Game of Chess
1222:Lai, Matthew (4 September 2015),
822:Originally developed in computer
786:) and one for move ordering (the
1369:Gary Linscott (April 30, 2021).
836:for short, a sparse and shallow
627:Handcrafted evaluation functions
123:Handcrafted evaluation functions
672:might look like the following:
1397:Noda, Hisayori (30 May 2020).
998:Tesauro, Gerald (March 1995).
13:
1:
1424:"Introducing NNUE Evaluation"
1273:Beal, Don; Smith, Martin C.,
891:
474:heuristic evaluation function
1495:Game artificial intelligence
1470:Keys to Evaluating Positions
1277:, vol. 22, ICCA Journal
1211:, vol. 12, ICCA Journal
7:
594:
138:Stochastic gradient descent
10:
1516:
1345:Yu Nasu (April 28, 2018).
1327:Yu Nasu (April 28, 2018).
1307:Stockfish Evaluation Guide
1151:10.1038/s41586-020-03051-4
852:
663:
613:Chess piece relative value
478:static evaluation function
188:Principal variation search
33:
26:
1250:"Neural network topology"
1207:Levinson, Robert (1989),
1180:Thurn, Sebastian (1995),
1004:Communications of the ACM
547:, such as those from the
908:Shannon, Claude (1950),
862:
800:graphics processing unit
34:Not to be confused with
1067:10.1126/science.1144079
976:10.1126/science.aar6404
569:Monte Carlo tree search
500:of three values in the
198:Monte Carlo tree search
749:reinforcement learning
658:reinforcement learning
308:Dragon by Komodo Chess
133:Reinforcement learning
1017:10.1145/203330.203343
771:distributed computing
153:Unsupervised learning
71:Board representations
1048:"Checkers is Solved"
843:activation functions
759:Deep neural networks
c1 * material + c2 * mobility + c3 * king safety + c4 * center control + c5 * pawn structure + c6 * king tropism + ...
103:Deep neural networks
96:Evaluation functions
1143:2020Natur.588..604S
967:2018Sci...362.1140S
961:(6419): 1140–1144.
947:(7 December 2018).
806:Piece-square tables
753:supervised learning
645:conventional values
470:evaluation function
143:Supervised learning
128:Piece-square tables
36:Function evaluation
1098:"Solving Checkers"
849:Endgame tablebases
637:linear combination
563:Relation to search
551:. Some games like
523:(Japanese chess),
472:, also known as a
183:Alpha-beta pruning
18:Piece-square table
1127:(7839): 604–609.
1061:(5844): 1518–22.
688:* king safety + c
585:quiescence search
577:alpha–beta search
573:minimax algorithm
466:
465:
193:Quiescence search
172:search algorithms
53:Chess programming
16:(Redirected from
1507:
1449:
1448:
1434:
1428:
1427:
1426:. 6 August 2020.
1420:
1414:
1413:
1411:
1409:
1394:
1388:
1387:
1385:
1383:
1366:
1360:
1359:
1351:
1342:
1336:
1335:
1333:
1324:
1318:
1317:
1316:
1314:
1302:
1296:
1295:
1285:
1279:
1278:
1270:
1264:
1263:
1261:
1260:
1246:
1237:
1236:
1235:
1219:
1213:
1212:
1204:
1198:
1197:
1196:
1194:
1188:
1177:
1171:
1170:
1136:
1115:
1109:
1108:
1102:
1093:
1087:
1086:
1052:
1043:
1037:
1036:
1034:
1032:
1019:
995:
989:
988:
978:
937:
926:
925:
924:
922:
916:
905:
775:Leela Chess Zero
945:Hassabis, Demis
736:neural networks
732:
730:Neural networks
633:neural networks
557:strongly solved
484:or a quantized
1464:External links
1334:(in Japanese).
1319:
1297:
1280:
1265:
1238:
1214:
1199:
1172:
1110:
1088:
1038:
990:
927:
896:
895:
893:
890:
864:
861:
853:Main article:
850:
847:
838:neural network
807:
804:
731:
728:
727:
726:
723:
720:
717:
713:
709:
702:
701:
697:
693:
689:
685:
684:* mobility + c
681:
680:* material + c
677:
665:
662:
628:
625:
617:computer chess
601:computer chess
1512:
1501:
1498:
1496:
1493:
1491:
1488:
1487:
1485:
1476:
1473:
1471:
1468:
1467:
1458:
1455:
1454:
1446:
1445:
1440:
1433:
1425:
1419:
1404:
1400:
1393:
1378:
1377:
1372:
1365:
1357:
1356:
1348:
1341:
1330:
1323:
1309:
1308:
1301:
1293:
1292:
1284:
1276:
1269:
1255:
1251:
1245:
1243:
1234:
1229:
1225:
1218:
1210:
1203:
1185:
1184:
1176:
1168:
1164:
1160:
1156:
1152:
1148:
1144:
1140:
1135:
1130:
1126:
1122:
1114:
1106:
1099:
1092:
1084:
1080:
1076:
1072:
1068:
1064:
1060:
1056:
1049:
1042:
1027:
1023:
1018:
1013:
1009:
1005:
1001:
994:
986:
982:
977:
972:
968:
964:
960:
956:
955:
950:
946:
942:
941:Silver, David
936:
934:
932:
913:
912:
904:
902:
897:
889:
887:
883:
879:
875:
870:
860:
856:
846:
844:
839:
835:
831:
830:
825:
820:
818:
817:Komodo Dragon
814:
803:
801:
797:
796:unit interval
793:
789:
785:
780:
776:
772:
768:
764:
760:
756:
754:
750:
746:
742:
737:
724:
721:
718:
714:
710:
707:
706:
705:
675:
674:
673:
671:
661:
659:
655:
649:
646:
641:
638:
634:
624:
622:
618:
614:
610:
606:
602:
592:
588:
586:
582:
578:
574:
570:
560:
558:
554:
550:
546:
542:
538:
534:
530:
526:
522:
518:
514:
509:
505:
503:
502:unit interval
499:
495:
491:
487:
483:
479:
475:
471:
1442:
1432:
1418:
1406:. Retrieved
1402:
1392:
1382:December 12,
1380:. Retrieved
1374:
1364:
1353:
1340:
1322:
1311:, retrieved
1306:
1300:
1290:
1283:
1274:
1268:
1257:. Retrieved
1253:
1233:1509.01549v1
1223:
1217:
1208:
1202:
1191:, retrieved
1182:
1175:
1124:
1120:
1113:
1104:
1091:
1058:
1054:
1041:
1029:. Retrieved
1010:(3): 58–68.
1007:
1003:
993:
958:
952:
919:, retrieved
910:
866:
858:
833:
827:
821:
809:
787:
783:
757:
733:
703:
667:
650:
642:
630:
620:
608:
598:
589:
580:
566:
510:
506:
493:
489:
477:
473:
469:
467:
243:Deep Thought
223:ChessMachine
148:Texel tuning
107:Transformers
95:
1408:12 December
1313:12 December
1193:12 December
1189:, MIT Press
921:12 December
869:Computer Go
792:real number
788:policy head
553:tic-tac-toe
545:video games
488:, often in
482:real number
298:CuckooChess
288:Chess Tiger
1500:Heuristics
1484:Categories
1259:2021-12-12
1254:lczero.org
1134:1911.08265
892:References
878:Leela Zero
784:value head
745:normalized
621:centipawns
549:Atari 2600
533:backgammon
368:MChess Pro
303:Deep Fritz
233:Cray Blitz
1167:208158225
813:Stockfish
779:replicate
767:AlphaZero
581:quiescent
423:Turochamp
413:Stockfish
408:SmarThink
353:KnightCap
328:GNU Chess
313:Fairy-Max
283:AlphaZero
238:Deep Blue
113:Attention
83:Bitboards
1159:33361790
1083:10274228
1075:17641166
985:30523106
882:Fine Art
773:project
763:Deepmind
712:account.
595:In chess
537:checkers
398:Shredder
258:Mephisto
228:ChipTest
1139:Bibcode
1055:Science
1026:8763243
963:Bibcode
954:Science
874:AlphaGo
741:integer
664:Example
654:Lichess
605:integer
525:othello
486:integer
373:Mittens
338:Houdini
178:Minimax
1444:GitHub
1403:Github
1376:GitHub
1371:"NNUE"
1355:GitHub
1165:
1157:
1121:Nature
1081:
1073:
1031:Nov 1,
1024:
983:
886:KataGo
884:, and
734:While
571:or a
541:MuZero
535:, and
378:MuZero
358:Komodo
348:Junior
343:Ikarus
333:HIARCS
293:Crafty
263:Saitek
248:HiTech
1350:(PDF)
1332:(PDF)
1228:arXiv
1187:(PDF)
1163:S2CID
1129:arXiv
1101:(PDF)
1079:S2CID
1051:(PDF)
1022:S2CID
915:(PDF)
863:In Go
832:, or
824:shogi
716:king.
670:chess
609:pawns
575:like
521:shogi
513:chess
498:array
428:Zappa
418:Torch
403:Sjeng
393:Rybka
388:REBEL
323:Fruit
318:Fritz
253:Hydra
218:Belle
166:Graph
1410:2021
1384:2020
1315:2021
1195:2021
1155:PMID
1071:PMID
1033:2013
981:PMID
923:2021
834:NNUE
619:are
555:are
383:Naum
170:tree
168:and
78:0x88
29:eval
1147:doi
1125:588
1063:doi
1059:317
1012:doi
971:doi
959:362
765:'s
751:or
660:.
599:In
529:hex
476:or
468:An
1008:38
517:go
515:,
698:6
694:5
690:4
686:3
682:2
678:1
676:c
494:n
490:n