TY - GEN
T1 - Learning action models from plan examples with incomplete knowledge
AU - Yang, Qiang
AU - Wu, Kangheng
AU - Jiang, Yunfei
PY - 2005
Y1 - 2005
N2 - AI planning requires the definition of an action model using a language such as PDDL as input. However, building an action model from scratch is a difficult and time-consuming task even for experts. In this paper, we develop an algorithm called ARMS for automatically discovering action models from a set of successful plan examples. Unlike previous work in action-model learning, we do not assume complete knowledge of states in the middle of the example plans; that is, we assume that no intermediate states are given. This requirement is motivated by a variety of applications, including object tracking and plan monitoring, where knowledge about intermediate states is either minimal or unavailable to the observing agent. In real-world applications, the cost of labelling training examples by manually annotating every state in a plan example from snapshots of an environment is prohibitively high. To learn action models, our ARMS algorithm gathers knowledge on the statistical distribution of frequent sets of actions in the example plans. It then builds a propositional satisfiability (SAT) problem and solves it using a SAT solver. We lay the theoretical foundations of the learning problem and evaluate the effectiveness of ARMS empirically.
UR - https://www.scopus.com/pages/publications/34447128616
M3 - Conference Paper published in a book
AN - SCOPUS:34447128616
SN - 1577352203
SN - 9781577352204
T3 - ICAPS 2005 - Proceedings of the 15th International Conference on Automated Planning and Scheduling
SP - 241
EP - 250
BT - ICAPS 2005 - Proceedings of the 15th International Conference on Automated Planning and Scheduling
T2 - 15th International Conference on Automated Planning and Scheduling, ICAPS 2005
Y2 - 5 June 2005 through 10 June 2005
ER -