Stanford Research Institute Problem Solver
The Stanford Research Institute Problem Solver, known by its acronym STRIPS, is an automated planner developed by Richard Fikes and Nils Nilsson in 1971 at SRI International. The same name was later used to refer to the formal language of the inputs to this planner. This language is the base for most of the languages for expressing automated planning problem instances in use today; such languages are commonly known as action languages. This article only describes the language, not the planner.

Definition
A STRIPS instance is composed of:
* An initial state;
* The specification of the goal states – situations that the planner is trying to reach;
* A set of actions. For each action, the following are included:
** preconditions (what must be established before the action is performed);
** postconditions (what is established after the action is performed).
Mathematically, a STRIPS instance is a quadruple ⟨P, O, I, G⟩, in which P is a set of conditions (propositional variables), O is a set of operators (the actions), I is the initial state and G is the specification of the goal state ...
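To make the quadruple concrete, here is a minimal, hedged sketch in Python of a STRIPS-style instance and a brute-force search over short action sequences; the Action type, the plan helper and the toy two-action domain are invented for illustration and are not part of the original STRIPS system:

from collections import namedtuple
from itertools import permutations

# An action with its preconditions and its postconditions split into add/delete lists.
Action = namedtuple("Action", ["name", "preconditions", "add", "delete"])

def applicable(state, action):
    # An action may fire only when all of its preconditions hold in the state.
    return action.preconditions <= state

def apply_action(state, action):
    # Postconditions: remove the delete-list facts, then add the add-list facts.
    return (state - action.delete) | action.add

def plan(initial, goal, actions, max_len=4):
    # Toy exhaustive search: try short action sequences until one reaches the goal.
    # Real planners search the state space far more cleverly.
    for n in range(max_len + 1):
        for seq in permutations(actions, n):
            state = initial
            feasible = True
            for a in seq:
                if not applicable(state, a):
                    feasible = False
                    break
                state = apply_action(state, a)
            if feasible and goal <= state:
                return [a.name for a in seq]
    return None

# Toy instance: conditions P = {door-open, inside}, empty initial state I,
# goal G = {inside}, and two operators O.
actions = [
    Action("open-door", frozenset(), frozenset({"door-open"}), frozenset()),
    Action("walk-in", frozenset({"door-open"}), frozenset({"inside"}), frozenset()),
]
print(plan(frozenset(), frozenset({"inside"}), actions))  # ['open-door', 'walk-in']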


Automated Planning
Automated planning and scheduling, sometimes denoted as simply AI planning, is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory. In known environments with available models, planning can be done offline. Solutions can be found and evaluated prior to execution. In dynamically unknown environments, the strategy often needs to be revised online. Models and policies must be adapted. Solutions usually resort to iterative trial and error processes commonly seen in artificial intelligence. These include dynamic programming, reinforcement learning and combinatorial optimization. Languages used to describe planning ...


Monkey And Banana Problem
The monkey and banana problem is a famous toy problem in artificial intelligence, particularly in logic programming and planning.

Formulation of the problem
A monkey is in a room. Suspended from the ceiling is a bunch of bananas, beyond the monkey's reach. However, in the room there are also a chair and a stick. The ceiling is just the right height so that a monkey standing on a chair could knock the bananas down with the stick. The monkey knows how to move around, carry other things around, reach for the bananas, and wave a stick in the air. What is the best sequence of actions for the monkey?

Purpose of the problem
The problem seeks to answer the question of whether monkeys are intelligent. Both humans and monkeys have the ability to use mental maps to remember things like where to go to find shelter, or how to avoid danger. They can also remember where to go to gather food and water, as well as how to communicate with each other. Monkeys have the ability not only to remember h ...
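As a rough illustration of how such a toy problem can be cast as planning, the following sketch encodes one possible STRIPS-style version of the chair-and-stick formulation above in Python; the predicate and action names are invented, and delete lists are omitted for brevity:

from collections import deque

# Each action: (preconditions, facts added); delete lists are left empty here.
ACTIONS = {
    "go-to-chair": (set(), {"at-chair"}),
    "push-chair-under-bananas": ({"at-chair"}, {"chair-under-bananas"}),
    "pick-up-stick": (set(), {"has-stick"}),
    "climb-chair": ({"chair-under-bananas"}, {"on-chair"}),
    "wave-stick": ({"on-chair", "has-stick"}, {"has-bananas"}),
}

def solve(initial, goal):
    # Breadth-first search over sets of facts finds a shortest action sequence.
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, (pre, add) in ACTIONS.items():
            if pre <= state:
                nxt = frozenset(state | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

print(solve(set(), {"has-bananas"}))
# one shortest solution, e.g. ['go-to-chair', 'pick-up-stick',
#   'push-chair-under-bananas', 'climb-chair', 'wave-stick']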


Automated Planning And Scheduling
Automated planning and scheduling, sometimes denoted as simply AI planning, is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory. In known environments with available models, planning can be done offline. Solutions can be found and evaluated prior to execution. In dynamically unknown environments, the strategy often needs to be revised online. Models and policies must be adapted. Solutions usually resort to iterative trial and error processes commonly seen in artificial intelligence. These include dynamic programming, reinforcement learning and combinatorial optimization. Languages used to describe planning and scheduling are often called action language ...


History Of Artificial Intelligence
The history of artificial intelligence ( AI) began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. This device and the ideas behind it inspired scientists to begin discussing the possibility of building an electronic brain. The field of AI research was founded at a workshop held on the campus of Dartmouth College in 1956. Attendees of the workshop became the leaders of AI research for decades. Many of them predicted that machines as intelligent as humans would exist within a generation. The U.S. government provided millions of dollars with the hope of making this vision come true. Eventually, it became obvious that researchers had grossly underestimated the difficulty of this feat. In 1974, ...




International Joint Conference On Artificial Intelligence
The International Joint Conference on Artificial Intelligence (IJCAI) is a conference in the field of artificial intelligence. The conference series has been organized by the nonprofit IJCAI Organization since 1969, jointly sponsored by the IJCAI Organization and the hosting national AI societies. It was held biennially in odd-numbered years from 1969 to 2015 and annually starting from 2016. More recently, IJCAI has been held jointly every four years with the European Conference on Artificial Intelligence (ECAI) since 2018 and the Pacific Rim International Conference on Artificial Intelligence (PRICAI) since 2020, to promote collaboration of AI researchers and practitioners. IJCAI covers a broad range of research areas in the field of AI. It is a large and highly selective conference, with only about 20% or less of the submitted papers accepted after peer review in the 5 years leading up to 2022.

Awards
Three research awards are given at each IJCAI c ...


Sussman Anomaly
The Sussman anomaly is a problem in artificial intelligence, first described by Gerald Sussman, that illustrates a weakness of noninterleaved planning algorithms, which were prominent in the early 1970s. Most modern planning systems are not restricted to noninterleaved planning and thus can handle this anomaly. While the problem is now mainly of historical significance, it is still useful for explaining why planning is non-trivial.

In the problem, set in blocks world, three blocks (labeled A, B, and C) rest on a table. The agent must stack the blocks such that A is atop B, which in turn is atop C. However, it may only move one block at a time. The problem starts with B on the table, C atop A, and A on the table. Noninterleaved planners typically separate the goal (stack A atop B atop C) into subgoals, such as:
1. get A atop B
2. get B atop C
Suppose the planner starts by pursuing Goal 1. The straightforward solution is to move C out of the way, then move A atop ...
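The interaction between the two subgoals can be seen with a few moves in a minimal Python sketch of the blocks world described above; the move helper is invented for illustration:

def move(state, block, dest):
    # Move `block` onto `dest` ("table" or another block); both must be clear.
    assert all(on != block for on in state.values()), f"{block} is not clear"
    if dest != "table":
        assert all(on != dest for on in state.values()), f"{dest} is not clear"
    return {**state, block: dest}

start = {"A": "table", "B": "table", "C": "A"}   # C sits on A, everything else on the table

# Pursue subgoal 1 first: get A atop B.
s = move(start, "C", "table")    # clear A
s = move(s, "A", "B")            # subgoal 1 achieved: A is on B
# Now pursue subgoal 2: get B atop C. B is not clear (A sits on it), so the
# planner must undo subgoal 1 before it can move B:
s = move(s, "A", "table")
s = move(s, "B", "C")
s = move(s, "A", "B")            # only now can A be restacked
print(s)                         # {'A': 'B', 'B': 'C', 'C': 'table'}

Achieving the subgoals strictly one at a time forces the plan to build and then partially undo its own work, which is exactly the redundancy the anomaly is meant to expose.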


Planning Domain Definition Language
The Planning Domain Definition Language (PDDL) is an attempt to standardize Artificial Intelligence (AI) planning languages. It was first developed by Drew McDermott and his colleagues in 1998, mainly to make the 1998/2000 International Planning Competition (IPC) possible, and then evolved with each competition. The standardization provided by PDDL has the benefit of making research more reusable and easily comparable, though at the cost of some expressive power compared to domain-specific systems.

Overview
PDDL is a human-readable format for problems in automated planning that gives a description of the possible states of the world, a description of the set of possible actions, a specific initial state of the world, and a specific set of desired goals. Action descriptions include the prerequisites of the action and the effects of the action. PDDL separates the model of the planning problem into two major parts: (1) a domain description of those elements that are present in ever ...
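As a loose sketch of that two-part split, expressed here as Python data structures rather than actual PDDL syntax, the following separates a toy domain description from a problem description stated over it; the briefcase-style predicates, actions and objects are invented for illustration:

# Domain description: the predicates and action schemas shared by every
# problem in this (invented) domain.
domain = {
    "name": "briefcase",
    "predicates": ["at(?obj, ?loc)", "in(?obj)"],
    "actions": {
        "move": {
            "parameters": ["?from", "?to"],
            "precondition": ["at(case, ?from)"],
            "effect": ["at(case, ?to)", "not at(case, ?from)"],
        },
        "put-in": {
            "parameters": ["?x", "?loc"],
            "precondition": ["at(?x, ?loc)", "at(case, ?loc)"],
            "effect": ["in(?x)"],
        },
    },
}

# Problem description: one concrete instance stated over that domain.
problem = {
    "domain": "briefcase",
    "objects": ["case", "book", "home", "office"],
    "init": ["at(case, home)", "at(book, home)"],
    "goal": ["in(book)", "at(case, office)"],
}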


Hierarchical Task Network
In artificial intelligence, hierarchical task network (HTN) planning is an approach to automated planning in which the dependency among actions can be given in the form of hierarchically structured networks. Planning problems are specified in the hierarchical task network approach by providing a set of tasks, which can be:
1. primitive (initial state) tasks, which roughly correspond to the actions of STRIPS;
2. compound tasks (intermediate state), which can be seen as composed of a set of simpler tasks;
3. goal tasks (goal state), which roughly correspond to the goals of STRIPS, but are more general.
A solution to an HTN problem is then an executable sequence of primitive tasks that can be obtained from the initial task network by decomposing compound tasks into their set of simpler tasks, and by inserting ordering constraints. A primitive task is an action that can be executed directly given th ...
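A minimal, hedged sketch of that decomposition idea follows, written in Python loosely in the style of simple HTN planners; the travel domain, its operators and its methods are invented for illustration:

# Primitive tasks: directly executable, returning the new state or None if inapplicable.
def walk(state, dest):
    return dict(state, loc=dest) if state["distance"] <= 2 else None

def take_taxi(state, dest):
    if state["cash"] >= state["distance"]:
        return dict(state, loc=dest, cash=state["cash"] - state["distance"])
    return None

OPERATORS = {"walk": walk, "take_taxi": take_taxi}

# A compound task maps to alternative networks of simpler tasks.
METHODS = {
    "travel": [
        lambda dest: [("walk", dest)],
        lambda dest: [("take_taxi", dest)],
    ],
}

def seek_plan(state, tasks, plan):
    # Decompose compound tasks and chain primitive ones until none remain.
    if not tasks:
        return plan
    name, *args = tasks[0]
    if name in OPERATORS:                      # primitive task
        new_state = OPERATORS[name](state, *args)
        if new_state is not None:
            return seek_plan(new_state, tasks[1:], plan + [tasks[0]])
    elif name in METHODS:                      # compound task: try each decomposition
        for method in METHODS[name]:
            result = seek_plan(state, method(*args) + tasks[1:], plan)
            if result is not None:
                return result
    return None

state = {"loc": "home", "distance": 5, "cash": 10}
print(seek_plan(state, [("travel", "park")], []))   # [('take_taxi', 'park')]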






Action Description Language
In artificial intelligence, action description language (ADL) is an automated planning and scheduling system, in particular for robots. It is considered an advancement of STRIPS. Edwin Pednault (a specialist in the field of data abstraction and modelling who has been an IBM Research Staff Member in the Data Abstraction Research Group since 1996) proposed this language in 1987. It is an example of an action language.

Origins
Pednault observed that the expressive power of STRIPS was susceptible to being improved by allowing the effects of an operator to be conditional. This is the main idea of ADL-A, which is roughly the propositional fragment of the ADL proposed by Pednault, with ADL-B an extension of -A. In the -B extension, actions can be described with indirect effects by the introduction of a new kind of propositions: "static laws". A third variation of ADL is ADL-C, which is similar to -B in the sense that its propositions can be classified into static and dynamic laws, but ...
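A small, hedged sketch of the conditional-effect idea, written in Python rather than any concrete ADL syntax; the drop action and the fact names are invented for illustration:

def drop_object(state, obj):
    # Dropping always leaves the object on the floor and no longer held...
    state = set(state)
    state.discard(f"holding({obj})")
    state.add(f"on-floor({obj})")
    # ...but this extra consequence fires only when its own condition holds,
    # a conditional effect that a plain STRIPS add/delete list cannot express.
    if f"fragile({obj})" in state:
        state.add(f"broken({obj})")
    return state

print(sorted(drop_object({"holding(vase)", "fragile(vase)"}, "vase")))
# ['broken(vase)', 'fragile(vase)', 'on-floor(vase)']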


Reinforcement Learning
Reinforcement learning (RL) is an interdisciplinary area of machine learning and optimal control concerned with how an intelligent agent should take actions in a dynamic environment in order to maximize a reward signal. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning. Reinforcement learning differs from supervised learning in not needing labelled input-output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge) with the goal of maximizing the cumulative reward (the feedback of which might be incomplete or delayed). The search for this balance is known as the exploration–exploitation dilemma. The environment is typically stated in the form of a Markov decision process (MDP), as many reinforcement learning algorithms use dyn ...
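The exploration–exploitation balance described above can be made concrete with a short tabular Q-learning sketch on an invented five-state corridor MDP; the states, rewards and parameter values below are illustrative only:

import random

N_STATES, ACTIONS = 5, (+1, -1)          # move right or left along the corridor
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    # Reward 1 only for reaching the rightmost state, which ends the episode.
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

for _ in range(500):                      # episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update towards the bootstrapped one-step target.
        target = reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
# the learned greedy policy should move right (+1) in every non-terminal state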


Genetic Programming
Genetic programming (GP) is an evolutionary algorithm, an artificial intelligence technique mimicking natural evolution, which operates on a population of programs. It applies the genetic operators of selection according to a predefined fitness measure, mutation, and crossover. The crossover operation involves swapping specified parts of selected pairs (parents) to produce new and different offspring that become part of the new generation of programs. Some programs not selected for reproduction are copied from the current generation to the new generation. Mutation involves substitution of some random part of a program with some other random part of a program. Then the selection and other operations are recursively applied to the new generation of programs. Typically, members of each new generation are on average more fit than the members of the previous gene ...
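The loop just described (fitness-based selection, crossover of subtrees, occasional mutation, and copying of survivors) can be sketched compactly in Python for symbolic regression over small expression trees; the target function, operator set and all parameters below are invented for illustration:

import random
import operator

OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}
TERMINALS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    # A program is a nested list [op, left, right] or a terminal ("x" or a constant).
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if not isinstance(tree, list):
        return tree
    return OPS[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))

def size(tree):
    return 1 + size(tree[1]) + size(tree[2]) if isinstance(tree, list) else 1

def fitness(tree):
    # Error against the invented target f(x) = x^2 + x + 1, plus mild size
    # pressure so programs do not bloat; lower is better.
    err = sum(abs(evaluate(tree, x) - (x * x + x + 1)) for x in range(-5, 6))
    return err + 0.01 * size(tree)

def random_path(tree):
    # Pick a uniformly random node, identified by its path of child indices.
    paths = []
    def walk(node, path):
        paths.append(path)
        if isinstance(node, list):
            walk(node[1], path + [1])
            walk(node[2], path + [2])
    walk(tree, [])
    return random.choice(paths)

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, sub):
    if not path:
        return sub
    new = list(tree)
    new[path[0]] = replace(tree[path[0]], path[1:], sub)
    return new

def crossover(a, b):
    # Graft a random subtree of `b` into a random position of `a`.
    return replace(a, random_path(a), get(b, random_path(b)))

def mutate(tree):
    # Replace a random subtree with a freshly generated one.
    return replace(tree, random_path(tree), random_tree(2))

population = [random_tree() for _ in range(60)]
for generation in range(30):
    population.sort(key=fitness)
    survivors = population[:20]                       # selection by fitness
    children = []
    for _ in range(40):
        child = crossover(random.choice(survivors), random.choice(survivors))
        if random.random() < 0.2:                     # occasional mutation
            child = mutate(child)
        children.append(child)
    population = survivors + children                 # copied survivors + offspring

best = min(population, key=fitness)
print(round(fitness(best), 2), best)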