"domain independent" to emphasize the fact that they can solve planning problems from a wide range of domains. Typical examples of domains are block-stacking, logistics, workflow management, and robot task planning. A single domain-independent planner can therefore be used to solve planning problems in all of these domains. A route planner, by contrast, is a typical domain-specific planner.
. The Simple Temporal Network with Uncertainty (STNU) is a scheduling problem that involves controllable actions, uncertain events, and temporal constraints. Dynamic controllability for such problems is a type of scheduling that requires a temporal planning strategy to activate controllable actions reactively as uncertain events are observed, so that all constraints are guaranteed to be satisfied.
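Setting the uncertainty aside, the underlying Simple Temporal Network can be tested for consistency by checking its distance graph for negative cycles, for example with a Bellman-Ford-style relaxation. This is only a minimal sketch of plain STN consistency, not dynamic controllability of an STNU, and the events and bounds below are invented for illustration:

```python
# Each edge (u, v, w) encodes the temporal constraint t_v - t_u <= w.
# A negative cycle in this "distance graph" means the constraints are
# unsatisfiable, i.e. the network is inconsistent.
def consistent(nodes, edges):
    dist = {n: 0.0 for n in nodes}          # virtual source at distance 0
    for _ in range(len(nodes) - 1):         # Bellman-Ford relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # any still-relaxable edge implies a negative cycle
    return all(dist[u] + w >= dist[v] for u, v, w in edges)

# One action whose end must follow its start by 5 to 10 time units,
# with a deadline of 8 units after reference time t0.
edges = [("start", "end", 10), ("end", "start", -5),  # 5 <= end-start <= 10
         ("t0", "end", 8), ("start", "t0", 0)]        # end <= t0+8, t0 <= start
print(consistent({"t0", "start", "end"}, edges))      # True
```

An inconsistent network (for example, requiring an action to take both at most 3 and at least 5 units) makes the final check fail.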
-complete. A particular case of contingent planning is represented by FOND problems, for "fully observable and non-deterministic". If the goal is specified in LTLf (linear temporal logic on finite traces), then the problem is always EXPTIME-complete, and 2EXPTIME-complete if the goal is specified with LDLf.
Temporal planning can be solved with methods similar to classical planning. The main difference is that, because several temporally overlapping actions with durations may be taken concurrently, the definition of a state has to include information about the current absolute time
because each step of the plan is represented by a set of states rather than a single perfectly observable state, as in the case of classical planning. The selected actions depend on the state of the system. For example, if it rains, the agent chooses to take the umbrella, and if it doesn't, they may
In AI planning, planners typically take as input a domain model (a description of the set of possible actions that model the domain) as well as the specific problem to be solved, specified by the initial state and goal, in contrast to planners for which no input domain is specified. Such planners are called
for
Classical Planning, are based on state variables. Each possible state of the world is an assignment of values to the state variables, and actions determine how the values of the state variables change when that action is taken. Since a set of state variables induces a state space that has a size
Conformant planning is when the agent is uncertain about the state of the system and cannot make any observations. The agent then has beliefs about the real world but cannot verify them with sensing actions, for instance. These problems are solved by techniques similar to those of classical
Given a description of the possible initial states of the world, a description of the desired goals, and a description of a set of possible actions, the planning problem is to synthesize a plan that is guaranteed (when applied to any of the initial states) to generate a state which contains the
, in which a set of tasks is given, and each task can be either realized by a primitive action or decomposed into a set of other tasks. This does not necessarily involve state variables, although in more realistic applications state variables simplify the description of task networks.
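The decomposition idea can be sketched in a few lines; the tasks and methods below are invented for illustration and omit state variables entirely, as in the simplest form of a task network:

```python
# Hypothetical toy HTN: each compound task maps to a list of methods, and
# each method is an ordered list of subtasks; primitive tasks are emitted
# directly into the plan.
methods = {
    "serve-coffee": [["make-coffee", "deliver"]],
    "make-coffee":  [["boil-water", "brew"]],
}
primitives = {"boil-water", "brew", "deliver"}

def decompose(task):
    """Recursively expand a task into a sequence of primitive actions."""
    if task in primitives:
        return [task]
    for subtasks in methods[task]:        # try each method in turn
        plan = []
        for t in subtasks:
            plan.extend(decompose(t))
        return plan                       # first method suffices in this toy
    return None

print(decompose("serve-coffee"))   # ['boil-water', 'brew', 'deliver']
```

A real HTN planner would also backtrack over alternative methods and check preconditions against state variables; this sketch shows only the task-network structure.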
planning, but where the state space is exponential in the size of the problem, because of the uncertainty about the current state. A solution for a conformant planning problem is a sequence of actions. Haslum and
Jonsson have demonstrated that the problem of conformant planning is
We speak of "contingent planning" when the environment is observable through sensors, which can be faulty. It is thus a situation where the planning agent acts under incomplete information. For a contingent planning problem, a plan is no longer a sequence of actions but a
which are unknown to the planner. The planner generates two choices in advance. For example, if an object was detected, then action A is executed; if an object is missing, then action B is executed. A major advantage of conditional planning is the ability to handle
and how far the execution of each active action has proceeded. Further, in planning with rational or real time, the state space may be infinite, unlike in classical planning or planning with integer time. Temporal planning is closely related to
, when the state space is sufficiently small. With partial observability, probabilistic planning is similarly solved with iterative methods, but using a representation of the value functions defined over the space of beliefs instead of states.
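For a fully observable problem, the iteration can be sketched concretely. The two-state MDP below, with its transition probabilities and rewards, is made up for illustration:

```python
# Tiny illustrative MDP: transitions[s][a] is a list of
# (probability, next_state, reward) triples.
transitions = {
    "s0": {"go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
           "stay": [(1.0, "s0", 0.0)]},
    "s1": {"go":   [(1.0, "s1", 2.0)],
           "stay": [(1.0, "s1", 0.0)]},
}
gamma = 0.9  # discount factor

def q_value(V, s, a):
    return sum(p * (r + gamma * V[ns]) for p, ns, r in transitions[s][a])

def value_iteration(eps=1e-9):
    """Repeat Bellman backups until the value function stops changing."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            best = max(q_value(V, s, a) for a in transitions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration()
policy = {s: max(transitions[s], key=lambda a: q_value(V, s, a))
          for s in transitions}
print(policy)   # {'s0': 'go', 's1': 'go'}
```

The returned policy maps each state to the action maximizing expected discounted reward; here V(s1) converges to 2/(1-0.9) = 20.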
An early example of a conditional planner is "Warplan-C", which was introduced in the mid-1970s. What is the difference between a normal sequence and a complicated plan containing if-then statements? It has to do with uncertainty at
Since the initial state is known unambiguously, and all actions are deterministic, the state of the world after any sequence of actions can be accurately predicted, and the question of observability is irrelevant for classical planning.
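Because the initial state is unique and actions are deterministic, a classical plan can in principle be found by plain uninformed search over the state space. A minimal STRIPS-style sketch follows; the block-stacking facts and action names are invented for illustration and do not reflect any particular planner's input format:

```python
from collections import deque

# A STRIPS-style action: if all preconditions hold in a state, applying the
# action removes the delete effects and inserts the add effects.
def make_action(name, pre, add, delete):
    return {"name": name, "pre": frozenset(pre),
            "add": frozenset(add), "del": frozenset(delete)}

def successors(state, actions):
    for a in actions:
        if a["pre"] <= state:                       # preconditions satisfied
            yield a["name"], (state - a["del"]) | a["add"]

def plan(initial, goal, actions):
    """Breadth-first search from the unique initial state to a goal state."""
    initial, goal = frozenset(initial), frozenset(goal)
    frontier, seen = deque([(initial, [])]), {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:                           # all goal facts hold
            return path
        for name, nxt in successors(state, actions):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None                                     # goal unreachable

acts = [
    make_action("unstack-A-B", {"on(A,B)", "clear(A)"},
                {"ontable(A)", "clear(B)"}, {"on(A,B)"}),
    make_action("stack-B-A", {"clear(A)", "clear(B)", "ontable(B)"},
                {"on(B,A)"}, {"clear(A)", "ontable(B)"}),
]
print(plan({"on(A,B)", "clear(A)", "ontable(B)"}, {"on(B,A)"}, acts))
# ['unstack-A-B', 'stack-B-A']
```

Real planners replace the blind breadth-first search with heuristics, but the state representation (sets of facts transformed by add and delete effects) is the same idea.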
Is there only one agent or are there several agents? Are the agents cooperative or selfish? Do all of the agents construct their own plans separately, or are the plans constructed centrally for all agents?
The difficulty of planning depends on the simplifying assumptions employed. Several classes of planning problems can be identified according to the properties the problems have along several dimensions.
With nondeterministic actions or other events outside the control of the agent, the possible executions form a tree, and plans have to determine the appropriate actions for every node of the tree.
planning system, which is a hierarchical planner. Action names are ordered in a sequence, and this sequence is a plan for the robot. Hierarchical planning can be compared with an automatically generated
. The disadvantage is that a normal behavior tree is not as expressive as a computer program. That means the notation of a behavior graph contains action commands, but no
In known environments with available models, planning can be done offline. Solutions can be found and evaluated prior to execution. In dynamically unknown environments, the
Neufeld, Xenija and
Mostaghim, Sanaz and Sancho-Pradel, Dario and Brand, Sandy (2017). "Building a Planner: A Survey of Planning Systems Used in Commercial Video Games".
. In contrast to the more common reward-based planning, for example in MDPs, preferences do not necessarily have a precise numerical value.
- both are essentially problems of traversing state spaces, and the classical planning problem corresponds to a subclass of model checking problems.
or if-then-statements. Conditional planning overcomes this bottleneck and introduces an elaborate notation which is similar to a
-complete, and 2EXPTIME-complete when the initial situation is uncertain and there is non-determinism in the action outcomes.
problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to
Vidal, Thierry (January 1999). "Handling contingency in temporal constraint networks: from consistency to controllabilities".
Further, plans can be defined as sequences of actions, because it is always known in advance which actions will be needed.
In preference-based planning, the objective is not only to produce a plan but also to satisfy user-specified
often needs to be revised online. Models and policies must be adapted. Solutions usually resort to iterative
1427:(Technical report). Technical Report TR-2008-936, Department of Computer Science, University of Rochester.
that is exponential in the size of the set, planning, similarly to many other computational problems, suffers from the
The most commonly used languages for representing planning domains and specific planning problems, such as
Can the current state be observed unambiguously? There can be full observability and partial observability.
1455:. International Joint Conference of Artificial Intelligence (IJCAI). Pasadena, CA: AAAI. Archived from
A survey of planning in intelligent agents: from externally motivated to internally motivated systems
1656:. Lecture Notes in Computer Science. Vol. 1809. Springer Berlin Heidelberg. pp. 308–318.
The simplest possible planning problem, known as the
Classical Planning Problem, is determined by:
discrete or continuous? If they are discrete, do they have only a finite number of possible values?
. An agent is not forced to plan everything from start to finish but can divide the problem into
or non-deterministic? For nondeterministic actions, are the associated probabilities available?
Michael L. Littman showed in 1998 that with branching actions, the planning problem becomes
1483:. Fourteenth National Conference on Artificial Intelligence. MIT Press. pp. 748–754.
When full observability is replaced by partial observability, planning corresponds to a
Sanelli, Valerio and
Cashmore, Michael and Magazzeni, Daniele and Iocchi, Luca (2017).
1630:. Twenty-First International Conference on Automated Planning and Scheduling (ICAPS).
, which means a planner generates source code that can be executed by an interpreter.
Can several actions be taken concurrently, or is only one action possible at a time?
1334:. Proc. of International Conference on Automated Planning and Scheduling (ICAPS).
Effective heuristics and belief tracking for planning with incomplete information
1572:"Compiling uncertainty away in conformant planning problems with bounded width"
. This helps to reduce the state space and to solve much more complex problems.
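A conditional plan of the kind described above can be represented as a small decision tree and executed against observed sensor values. The node encoding, sensor name, and action names below are illustrative assumptions only:

```python
# A conditional plan node is either an unconditional action sequence (a list)
# or a sensing branch: ("sense", test, plan_if_true, plan_if_false).
plan = ("sense", "object_detected",
        ["move-to-object", "grasp"],       # branch taken if object detected
        ["scan-area", "report-missing"])   # branch taken if object missing

def execute(node, sensors):
    """Walk the plan, choosing branches from observed sensor values."""
    if isinstance(node, list):
        return node                        # leaf: plain action sequence
    _, test, if_true, if_false = node
    return execute(if_true if sensors[test] else if_false, sensors)

print(execute(plan, {"object_detected": True}))   # ['move-to-object', 'grasp']
```

Nesting sensing nodes inside the branches yields deeper trees, which is exactly how the plan grows when more observations become relevant.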
Is the objective of a plan to reach a designated goal state, or to maximize a
Short-term human robot interaction through conditional planning and execution
problems when uncertainty is involved and can also be understood in terms of
Automata-Theoretic
Foundations of FOND Planning for LTLf and LDLf Goals
1379:. Artificial Intelligence Planning Systems. Elsevier. pp. 189–197.
Some
Results on the Complexity of Planning with Incomplete Information
. Languages used to describe planning and scheduling are often called
Probabilistic
Propositional Planning: Representations and Complexity
Probabilistic planning can be solved with iterative methods such as
An alternative language for describing planning problems is that of
Journal of
Experimental & Theoretical Artificial Intelligence
search, possibly enhanced by the use of state constraints (see
How many initial states are there, finite or arbitrarily many?
International
Conference on Automated Planning and Scheduling
International Conference on Automated Planning and Scheduling
Albore, Alexandre; RamĂrez, Miquel; Geffner, Hector (2011).
Alexandre Albore; Hector Palacios; Hector Geffner (2009).
1517:. Int. Conf. Automated Planning and Scheduling. AAAI.
Ghallab, Malik; Nau, Dana S.; Traverso, Paolo (2004),
desired goals (such a state is called a goal state).
A Translation-Based Approach to Contingent Planning
1407:Conditional progressive planning under uncertainty
1511:Complexity of Planning with Partial Observability
of a plan. The idea is that a plan can react to
487:or action sequences, typically for execution by
Deterministic planning was introduced with the
, known from other programming languages like
Partially observable Markov decision process
716:partially observable Markov decision process
701:nondeterministic actions with probabilities,
1576:Journal of Artificial Intelligence Research
1542:De Giacomo, Giuseppe; Rubin, Sasha (2018).
1676:conference: Recent Advances in AI Planning
1570:Palacios, Hector; Geffner, Hector (2009).
If there is more than one agent, we have
1370:Peot, Mark A and Smith, David E (1992).
1192:List of constraint programming languages
1652:Haslum, Patrik; Jonsson, Peter (2000).
1224:Automated Planning: Theory and Practice
1150:Applications of artificial intelligence
and a long-term planning system called
which can be taken only one at a time,
Planning domain modelling languages
1202:Outline of artificial intelligence
maximization of a reward function,
(MDP) are planning problems with:
that concerns the realization of
Automated planning and scheduling
Branch of artificial intelligence
uses a short-term system called
1160:Constraint satisfaction problem
1373:Conditional nonlinear planning
Deployment of planning systems
, which is closely related to
, sometimes denoted as simply
1197:List of emerging technologies
a unique known initial state,
1477:Littman, Michael L. (1997).
propositional satisfiability
1140:Action description language
Reduction to other problems
Domain independent planning
Do actions have a duration?
processes commonly seen in
1410:. IJCAI. pp. 431–438.
1301:IEEE Transactions on Games
hierarchical task networks
combinatorial optimization
1694:"Planning and Scheduling"
Preference-based planning
Preference-based planning
, possibly enhanced with
Markov decision processes
1421:Liu, Daphne Hao (2008).
. It is very similar to
1508:Jussi Rintanen (2004).
1404:Karlsson, Lars (2001).
1286:10.1080/095281399146607
choose not to take it.
Markov decision process
Algorithms for planning
combinatorial explosion
curse of dimensionality
524:artificial intelligence
481:artificial intelligence
1175:Strategy (game theory)
1170:Scheduling (computing)
1119:Hubble Space Telescope
981:Probabilistic planning
927:partial-order planning
670:deterministic actions,
532:reinforcement learning
112:
698:durationless actions,
667:durationless actions,
550:Further information:
111:
1078:Contingency planning
1025:Conditional planning
723:multi-agent planning
1187:List of SMT solvers
1099:Conformant planning
710:and a single agent.
704:full observability,
676:and a single agent.
528:dynamic programming
499:. Unlike classical
904:state space search
895:Classical planning
552:State space search
489:intelligent agents
1599:10.1613/jair.2708
1165:Reactive planning
1051:program synthesis
964:Temporal planning
944:reduction to the
913:backward chaining
unmanned vehicles
autonomous robots
, is a branch of
policy iteration
forward chaining
Are the actions
action languages
. These include
Further reading
1229:Morgan Kaufmann
value iteration
Main articles:
Sussman anomaly
reward function
state variables
trial and error
decision theory
External links
sensor signals
Main article:
timed automata
model checking
Discrete-time
classification
Vlahavas, I.
decision tree
partial plans
behavior tree
reduction to
deterministic
control flow
1582:: 623–675.
Actor model
preferences
game theory
AI planning
References
scheduling
See also:
heuristics
strategies
1589:1401.3468
1548:. IJCAI.
1272:CiteSeerX
problem (
graphplan
(POMDP).
See also
EXPSPACE
and the
Are the
Overview
strategy
1303:. IEEE.
EXPTIME
runtime
satplan
control
chunks
Pascal
STRIPS
STRIPS
STRIPS
Spike
loops
1698:EETN
SPSS
The
and
and
PDDL
and
and
and
and
952:).
919:,
879:.
729:.
542:.
530:,
511:.
491:,
1129:.
923:)
653:?