Foundations of Artificial Intelligence (FAI) Group
About this page. Much of the research in
FAI involves the development of software as part of the work. Our
philosophy is to share the source code as much as possible. This page
gives links to most of our developments, insofar as they are ready for
release. All downloadable material is free for research and
educational purposes.
Neural Action Policy Verification and Testing.
Explainable Planning.
Minecraft instruction generation.
We have established
a software
infrastructure for running online user experiments with instruction
giving in Minecraft.
Planning Benchmarks.
- Resource-constrained
planning benchmarks. This is the benchmark suite used in our ICAPS'12 paper on random-walk methods for
critically constrained planning. The benchmarks are controllable in the
sense that the minimal amount of resources required to solve the task is known,
and the benchmarks systematically scale the constrainedness factor, i.e.,
the factor by which the available amount of resources exceeds that minimum
requirement. Instances whose constrainedness factor is close to 1 are
notoriously hard to solve. Here you can find
the benchmarks used in our AAAI'16
paper including both unsolvable and solvable instances. The AAAI'16
benchmarks use the STRIPS encoding and differ from the ICAPS'12 resource constrained
benchmarks only by the amount of resources initially available (the unsolvable
instances use constrainedness factor 0.5 to 0.9, the solvable instances 1.0 to
1.4).
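To make the scaling concrete, here is a minimal sketch of how an instance generator might derive the initial resource amount from the known minimum requirement and a desired constrainedness factor (the function and its numbers are illustrative, not the actual generator used for the suite):

```python
def initial_fuel(min_required: int, constrainedness_pct: int) -> int:
    """Resource amount for the initial state, given a constrainedness
    factor C = available / minimum-required, specified in percent to
    avoid floating-point rounding.  C < 100% yields an unsolvable
    instance, C = 100% a critically constrained one."""
    return -(-constrainedness_pct * min_required // 100)  # ceiling division

# Example: a task known to need at least 20 fuel units.
for pct in (50, 90, 100, 140):
    status = "unsolvable" if pct < 100 else "solvable"
    print(f"C = {pct}%: start with {initial_fuel(20, pct)} fuel ({status})")
```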
- Unsolvable planning
benchmarks. This is the benchmark suite used
in our ECAI'14 paper on
merge-and-shrink heuristics tailored to detect dead-ends. It
contains the following domains:
- Bottleneck: an n x n square grid on which n agents travel from
the left side to the right side. Once an agent has entered a cell,
no other agent may move onto it. In the middle of the grid is a
bottleneck of height m < n.
- 3UNSAT: Random unsolvable 3SAT formulas from the phase transition
region.
- Mystery: The unsolvable tasks of the IPC'98 domain (we removed two
tasks that were already identified as unsolvable by Fast Downward's
preprocessor).
- UnsNoMystery: taken from
the ICAPS'12 paper:
resource-constrained transportation domain (the resource being the
fuel of the truck), same as nomystery in IPC 2011. Here we have a
constrainedness C < 1, which makes the tasks unsolvable.
- UnsPegsol: Taken from the net-benefit track of IPC 2008. The goal
was changed to the typical one (i.e., having only one peg left,
located in the middle of the board), which is not reachable.
- UnsRovers: taken from
the ICAPS'12 paper:
resource-constrained Mars rovers domain (the resource being the energy
of the rover), similar to rovers in IPC 2002. Here we have a
constrainedness C < 1, which makes the tasks unsolvable.
- UnsTiles: Unsolvable tasks of the 8 puzzle and the 11 (= 3 x 4)
puzzle.
- UnsTPP: taken from
the ICAPS'12 paper:
resource-constrained market domain (the resource being the money),
taken from IPC 2006. Here we have a constrainedness C < 1, which
makes the tasks unsolvable.
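The 3UNSAT domain draws on the well-known fact that random 3SAT formulas with a clause/variable ratio near 4.26 sit at the phase transition, where hard instances concentrate. A minimal generator for such formulas might look as follows (a standard sketch; the generator actually used for the suite may differ in details):

```python
import random

def random_3sat(n_vars: int, ratio: float = 4.26, seed: int = 0):
    """Generate a random 3-CNF formula with a clause/variable ratio
    near the phase transition (~4.26).  Clauses are tuples of nonzero
    ints in the DIMACS convention: variable v or its negation -v."""
    rng = random.Random(seed)
    n_clauses = round(ratio * n_vars)
    formula = []
    for _ in range(n_clauses):
        vs = rng.sample(range(1, n_vars + 1), 3)  # three distinct variables
        formula.append(tuple(v if rng.random() < 0.5 else -v for v in vs))
    return formula

formula = random_3sat(20)
print(len(formula), "clauses over 20 variables")  # 85 clauses
```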
- Probabilistic planning
benchmarks, acyclic
domains
and cyclic
domains. This is the benchmark suite used
in our ICAPS'16 paper on goal
probability analysis. The benchmark suite collects a comprehensive
set of PPDDL benchmarks from the IPPC, from adaptations of the
resource-constrained planning benchmarks above, as well as from a
first benchmark generator for MDP-based simulated penetration testing
(see also next item). Each domain comes with a limited-budget as well
as an unlimited-budget version. The suite is split into acyclic
domains, whose state spaces contain no cycles (due to budget
consumption or other structural properties), and cyclic domains,
where this is not the case.
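On acyclic domains, goal probability analysis is particularly simple: maximal goal probability can be computed by backward induction, with no fixed-point iteration needed. The following toy MDP (states, actions, and probabilities are made up for illustration) shows the idea:

```python
from functools import lru_cache

# Toy acyclic MDP (hypothetical): ACTIONS[s] maps an action name to
# its list of (probability, successor) outcome pairs.
GOAL = "goal"
ACTIONS = {
    "s0": {"safe":  [(1.0, "s1")],
           "risky": [(0.6, GOAL), (0.4, "dead")]},
    "s1": {"try":   [(0.8, GOAL), (0.2, "dead")]},
}

@lru_cache(maxsize=None)
def goal_prob(s: str) -> float:
    """Maximal goal probability, computed by backward induction.
    Well-defined without iteration because the state space is acyclic."""
    if s == GOAL:
        return 1.0
    if s not in ACTIONS:          # dead end: no actions, not the goal
        return 0.0
    return max(sum(p * goal_prob(t) for p, t in outcomes)
               for outcomes in ACTIONS[s].values())

print(goal_prob("s0"))  # 0.8: "safe" then "try" beats "risky"
```

In cyclic domains this recursion would not terminate; there, a fixed-point computation such as value iteration is required instead.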
- Simulated penetration testing
benchmarks. This is a PRELIMINARY collection of benchmarks for
simulated penetration testing. The current benchmarks are all based on
an MDP model. Specifically, all models here fall into the class of
"attack-asset MDPs", and most into the class of the "Canadian Hacker
Problem", as specified in our
ICAPS'15 paper on models of simulated penetration testing.
The benchmarks are preliminary in that most of them are based on
randomized problem generators. We are still working on designing more
realistic benchmarks. We will announce any additions at
icaps-conference-security@googlegroups.com.
FD Partial Delete Relaxation.
FD Probabilistic Planning.
FD Star-Topology Decoupling.
Symbolic Search Library.
FD Abstractions & Symmetries.
To get the following code please
contact Michael Katz.
- Structural Patterns aka Implicit Abstraction Heuristics
are state-of-the-art lower bound heuristic functions, derived from an
analysis of the causal graph, which captures the dependencies
between the state variables used to encode the planning problem at
hand.
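The causal graph itself is easy to construct. The following sketch builds it for a made-up SAS+-style mini task (variable names and operators are hypothetical): there is an arc u -> v whenever some operator affecting v mentions u in its precondition or also affects u.

```python
# A hypothetical SAS+-style task: operators with preconditions and
# effects over finite-domain state variables.
OPERATORS = [
    {"pre": {"truck_at": "A", "fuel": 2}, "eff": {"truck_at": "B", "fuel": 1}},
    {"pre": {"truck_at": "B", "pkg_at": "B"}, "eff": {"pkg_at": "truck"}},
]

def causal_graph(operators):
    """Arc u -> v iff some operator with an effect on v has a
    precondition or another effect on u."""
    arcs = set()
    for op in operators:
        for v in op["eff"]:
            for u in list(op["pre"]) + list(op["eff"]):
                if u != v:
                    arcs.add((u, v))
    return sorted(arcs)

for u, v in causal_graph(OPERATORS):
    print(f"{u} -> {v}")
```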
- Landmark Enriched Problem Reformulation
is a reformulation of the planning problem to explicitly include the implicit information
provided by the landmark graph, which is essentially a partial
order over certain formulas that every solution must satisfy at some
point. In this reformulation, other methods, in particular heuristic
functions, are more informed than in the original formulation, and
provide better guidance for the search for solutions.
- Symmetries for state pruning detect symmetrical aspects of
the planning problem, and exploit these for reducing search
effort. This is especially efficient in optimal planning, where the
bottleneck typically is to prove solution optimality.
- Label-catching bisimulation is a technique used during the
construction of merge-and-shrink abstractions, cf. the above. It
ensures that the abstraction has the bisimulation property -- which
essentially means that no information is lost -- but only relative to
a subset of the planning operators. By selecting this operator subset,
we can control the trade-off between the computational effort for
building the abstraction, and the accuracy of the resulting lower
bound heuristic function.
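The core of label-catching bisimulation is a partition refinement that only distinguishes states by their behaviour under the "caught" labels. The sketch below runs it on a tiny abstract transition system (states, labels, and goal set are made up for illustration, and this is a simplification of the actual merge-and-shrink machinery):

```python
# A tiny abstract transition system: state -> list of (label, successor).
TRANSITIONS = {
    0: [("load", 2), ("drive", 3)],
    1: [("drive", 3)],
    2: [("drive", 3)],
    3: [],
}
GOAL_STATES = {3}

def label_catching_bisim(transitions, goals, caught):
    """Coarsest partition that is a bisimulation with respect to the
    caught labels only: start from the goal / non-goal split and
    refine blocks until every block behaves uniformly under the
    caught labels.  Ignoring labels yields a coarser (cheaper, but
    less accurate) abstraction."""
    states = list(transitions)
    block = {s: int(s in goals) for s in states}
    while True:
        signature = {
            s: (block[s],
                frozenset((l, block[t]) for l, t in transitions[s] if l in caught))
            for s in states
        }
        ids = {}
        new_block = {s: ids.setdefault(signature[s], len(ids)) for s in states}
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block  # no block was split: the partition is stable
        block = new_block

coarse = label_catching_bisim(TRANSITIONS, GOAL_STATES, caught={"drive"})
full = label_catching_bisim(TRANSITIONS, GOAL_STATES, caught={"drive", "load"})
print(len(set(coarse.values())), len(set(full.values())))  # 2 3
```

Catching only "drive" merges states 0, 1, and 2 into one block; catching every label separates state 0, because it alone has a "load" transition.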
Planning Tools not based on Fast Downward.
- IPP. This
is a planning tool Joerg Hoffmann implemented as a student. It is
based on the Graphplan algorithm and nowadays largely outdated.
- FF. This planning tool was started
in Joerg Hoffmann's PhD thesis, and gained fame by outclassing the
competition in the 2000 International Planning Competition (IPC),
setting a completely new standard in planner performance. The tool is
still widely used today; variants of its key techniques are used in
almost every state-of-the-art planner. FF has been extended to several
expressive variants of planning, implemented in the tools Metric-FF
(numeric state variables), Conformant-FF (uncertain initial states),
Contingent-FF (partial observability), and Probabilistic-FF (like
Conformant-FF but with probabilistic initial states and action
effects).
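The "ignoring delete lists" relaxation at the heart of FF can be sketched in a few lines. The following computes the additive heuristic h_add, a close relative of FF's relaxed-plan heuristic, on a made-up toy STRIPS task with unit action costs:

```python
# Toy STRIPS task (facts and action names are made up for illustration).
# Each action is (name, preconditions, add effects); deletes are ignored.
ACTIONS = [
    ("drive",  {"at-A"}, {"at-B"}),
    ("load",   {"at-B", "pkg-B"}, {"pkg-in-truck"}),
    ("unload", {"at-B", "pkg-in-truck"}, {"pkg-delivered"}),
]
INIT = {"at-A", "pkg-B"}
GOAL = {"pkg-delivered"}

def h_add(init, goal, actions):
    """Additive heuristic under the delete relaxation: the cost of a
    fact is the cheapest relaxed way to reach it, and a conjunction
    costs the sum of its parts (so h_add may overestimate)."""
    INF = float("inf")
    cost = {f: 0 for f in init}
    changed = True
    while changed:
        changed = False
        for _, pre, add in actions:
            pre_cost = sum(cost.get(f, INF) for f in pre)
            if pre_cost == INF:
                continue
            for f in add:
                if pre_cost + 1 < cost.get(f, INF):
                    cost[f] = pre_cost + 1
                    changed = True
    return sum(cost.get(f, INF) for f in goal)

print(h_add(INIT, GOAL, ACTIONS))  # 4: at-B is counted twice, so
                                   # h_add overestimates the 3-step plan
```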
- SATPLAN. This
planner, originally proposed by Henry Kautz and Bart Selman, compiles
planning into a series of SAT problems. It has long dominated the
track for optimal planning tools at the IPC. Our contribution
consisted of implementation work on encoding planning into SAT.
- UPPAAL. This is a very
widely used tool for model checking networks of timed automata. We
adapted planning heuristic functions to this context, and showed that
they can be useful to more effectively find bugs (paths to undesired
states) in the state spaces of such networks.
- TorchLight is a recent
tool we have developed, which analyzes search space topology
without actually running any search. The tool applies to planners
based on the "ignoring delete lists" relaxation. Papers on TorchLight
have been accepted for
ICAPS'11: TorchLight-ICAPS
as well as
JAIR: TorchLight-JAIR. The
source code is publicly available under the GNU GPL
license: TorchLight.zip.
- adl2strips translates input
from the ADL planning language, which allows arbitrary first-order
formulas in operator preconditions and the goal, to the simpler STRIPS
language which restricts these to conjunctions of atoms. The tool also
features two methods for compiling away conditional effects. It has
been used in the 2004 IPC, and in some later editions of the IPC as
well.
- Gamer is an optimal
BDD-based planner developed by Peter Kissmann and Stefan Edelkamp
for classical and net-benefit planning, which won the
sequential-optimal and the optimal net-benefit track of IPC 2008.
- Gamer is also the name of a hybrid general game playing agent,
also developed by Peter Kissmann and Stefan Edelkamp, which
regularly participates in national and international GGP
competitions. It consists of a BDD-based solver for single- and
non-simultaneous two-player games as well as two more classical
players, both using UCT (one uses Prolog to infer the required
knowledge, while the other makes use of instantiated input). The
sources, however, are not yet open.