Foundations of Artificial Intelligence (FAI) Group

About this page. We summarize FAI's research area, and the major directions we are currently actively exploring. This is mainly intended for students interested in FAI. For other readers, it may be easier to just quickly browse through the list of our funded research projects.

The very first thing you should do, to get an idea of what our main research area is about, is read the "Automatic Planning (a high-level summary)" section below. Then, read this paper:

Everything You Always Wanted to Know About Planning (But Were Afraid to Ask)

This provides an easy-to-read and (hopefully) entertaining introduction.

Note: This page gives the main directions we are currently exploring, but should not be confused with an exhaustive description of our research. We are typically doing many things not subsumed by any of the descriptions below. If you are a student and interested in working with us, you need to contact us directly.

Some example topics of student projects we conducted in the past are listed here.


Automatic Planning (a high-level summary). How to decide what to do? Humans make decisions all the time, and so must robots and software bots if they are to succeed in their environments. How should they do that?

This and related questions are the origin of the research area of Automatic Planning, or AI Planning, one of the founding sub-areas of Artificial Intelligence. Humans are able to (eventually) successfully tackle any new situation they are faced with, coming up with courses of action that achieve their aims. The vision of AI Planning is to reproduce this ability: creating general problem-solving algorithms that can in principle solve any problem (more precisely, any problem whose solution involves finding a suitable course of action), given a suitable logic-based description of that problem.

Algorithms of this kind can automatically generate strategies of action for arbitrary autonomous agents in arbitrary environments, for example a robot on Mars (which requires autonomy since full remote control is impossible), or a softbot executing automatic hacking attacks for security testing (where the environment is highly dynamic and self-adaptive technology is valuable). More generally, planning algorithms can solve any problem whose solution can be cast as finding goal paths in large transition systems (billions of states and upwards), or proving the absence of such paths. They are successfully applied in areas as diverse as large-scale printer control, generation of business process templates, greenhouse logistics, security testing, and natural language sentence generation. Close relations exist to all areas of CS that require testing reachability in large transition systems, including Verification/Model Checking, Program Synthesis, Bioinformatics, Game Playing, etc.

The most successful approach to Automatic Planning, to date, is heuristic search, which enumerates reachable states in a preference order given through a heuristic function, usually denoted h, that maps states to goal-distance estimates. States that appear closer to the goal, according to h, are preferred. If h lower-bounds the goal distance, then the search can be arranged so as to guarantee that the goal path found is optimal. But how can a heuristic function be generated in Automatic Planning, where this needs to happen automatically given only the logic-based problem description (the "rules of the game")? This is a major research area in Automatic Planning, and it is the main topic of our specialized lecture (see our teaching page).
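
To make this concrete, here is a minimal sketch of heuristic search in Python. The callbacks successors, h, and is_goal are hypothetical stand-ins for what, in planning, is derived automatically from the problem description. As written, this is greedy best-first search; pushing with priority g + cost + h(succ) instead yields A*, which returns optimal plans whenever h lower-bounds the true goal distance.

    import heapq
    import itertools

    def heuristic_search(init, successors, h, is_goal):
        # Greedy best-first search: expand states in order of increasing h.
        tie = itertools.count()  # tie-breaker so the heap never compares states
        open_list = [(h(init), next(tie), 0, init, [])]
        closed = set()
        while open_list:
            _, _, g, state, plan = heapq.heappop(open_list)
            if state in closed:
                continue
            closed.add(state)
            if is_goal(state):
                return plan
            for action, succ, cost in successors(state):
                if succ not in closed:
                    # For A*, use g + cost + h(succ) as the priority instead.
                    heapq.heappush(open_list,
                                   (h(succ), next(tie), g + cost,
                                    succ, plan + [action]))
        return None  # open list exhausted: the goal is unreachable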


Neural Networks. With the success of neural networks (NN), and in particular the success of AlphaGo Zero and its successor AlphaZero in learning to play Go, Chess, and Shogi at superhuman level, neural networks have naturally become a hugely important topic in AI Planning as well. We are considering two research questions in this context:

  1. How to learn effective search guidance (heuristic functions) or action-decisions (policies that map states to actions) with neural networks in planning?
  2. Given a learned NN action policy, how can we convince ourselves that the policy does the right thing? How can we verify or ensure that policy execution does not lead into unsafe states?
For question 1., note first that AlphaGo's success results from the combination of two components: first, neural networks which learn to estimate the value of a game position, and to identify a subset of most interesting applicable moves; second, search, which uses this learned knowledge to explore the game state space and identify the best moves. We believe that this formula, NN + search, is a key recipe for general problem solving.

The key challenge for using NN in AI Planning lies in the generality of planning problems: way beyond being able to tackle different board games, planning algorithms are supposed to tackle arbitrary problems describable in their input language, covering e.g. the range from natural language sentence generation to cybersecurity attacks. Our current efforts in this direction encompass learning heuristic functions from raw planning-state descriptions, as well as reinforcement learning (deep Q-learning) of action policies, using model-based planning techniques to solve the "cold-start problem": in complex planning tasks, an agent acting randomly will never see the goal, so the reinforcement learning process never receives a reward and is thus unable to learn.
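
As a rough illustration of the first of these efforts, the following sketch trains a network to regress goal distances from raw planning states. Everything here is an assumption made for the sake of the example, not our actual setup: states are encoded as fixed-size 0/1 vectors over the task's propositions, and training pairs (state, goal distance) might be obtained, e.g., from plans found by a classical planner. The learned model then serves as the heuristic h in search.

    import torch
    import torch.nn as nn

    class HeuristicNet(nn.Module):
        # Regresses a goal-distance estimate from a 0/1 state vector.
        def __init__(self, num_propositions, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(num_propositions, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))

        def forward(self, state_vec):
            return self.net(state_vec).squeeze(-1)

    def train(model, states, distances, epochs=100, lr=1e-3):
        # states: tensor of shape (N, num_propositions); distances: shape (N,)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(states), distances)  # fit h(s) to observed distances
            loss.backward()
            opt.step()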

For question 2., we are actively exploring three directions:


Explainable AI Planning (XAIP). Model-based approaches to AI are well suited to explainability in principle, given the explicit nature of their world knowledge and of the reasoning performed to take decisions. AI Planning in particular is relevant in this context as a generic approach to action-decision problems. Indeed, explainable AI Planning (XAIP) has received interest for more than a decade, and has been gaining momentum recently along with the general trend towards explainable AI.

In our work on XAIP, we explore a form of contrastive explanation aimed at answering user questions of the kind "Why do you suggest to do A here, rather than B (which seems more appropriate to me)?". Answers to such questions take the form of reasons why A is preferable over B. We have designed a formal framework that allows providing such answers in a systematic way, through plan properties which capture the relevant aspects of plans, and the identification of plan-property dependencies which make explicit to the user how these aspects depend on each other. In particular, the answer to the above question could be "Because if we do A here, then the resulting plan will not be able to achieve objective C.". We show that powerful plan-property languages -- specifically, linear temporal logic (LTL) -- can be dealt with effectively, i.e., this kind of explanation can be computed with an empirical complexity similar to that of previous AI Planning frameworks.
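
As a toy illustration of plan-property dependencies, the following sketch assumes a finite set of candidate plans and a dictionary of named Boolean plan properties (both hypothetical simplifications; the actual framework works on compact representations of the plan space). It reports dependencies of the form "every plan satisfying p violates q", which is the basis for answers like the one above.

    from itertools import combinations

    def plan_property_dependencies(plans, properties):
        # properties: dict mapping a property name to a Boolean test on a plan
        deps = []
        for p, q in combinations(properties, 2):
            for a, b in ((p, q), (q, p)):
                sat_a = [plan for plan in plans if properties[a](plan)]
                # a entails not-b across the (finite) plan space:
                if sat_a and all(not properties[b](plan) for plan in sat_a):
                    deps.append((a, b))
        return deps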

You can read a paper introducing the framework, or a paper designing effective solution algorithms.

We are currently working on user studies to investigate the benefit users get from such explanations in an iterative planning process. Future work will include the investigation of deeper "Why?" questions ("Why does A entail C?"), the automatic identification of relevant plan properties, and a collaboration with NASA on applying this methodology in (abstractions of) the mission-control process.


Natural Language Generation. Natural language generation can be viewed as a planning problem -- planning the text content, planning the sentences, planning the actual phrasing of these sentences. For example, sentence generation can be formulated as a search in the space of possible partial sentences, where individual search steps add another word to some candidate, and the search terminates when a legal sentence has been constructed.
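
In code, that search could look like the minimal sketch below. The callbacks continuations (which words may legally extend a prefix) and is_sentence (is the prefix a complete legal sentence?) are hypothetical; in practice they would be derived from a grammar or language model, and the queue would be ordered by a heuristic rather than breadth-first.

    from collections import deque

    def generate_sentence(continuations, is_sentence, max_len=10):
        queue = deque([[]])  # each search node is a prefix, i.e. a word list
        while queue:
            prefix = queue.popleft()
            if is_sentence(prefix):
                return " ".join(prefix)
            if len(prefix) < max_len:
                for word in continuations(prefix):
                    queue.append(prefix + [word])  # one search step = one word
        return None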

In the past, we have successfully applied planning techniques to natural language instruction-giving, through a compilation into a planning language and through the adaptation of search techniques from planning. You can also have a look at our overview/challenge paper outlining challenges and directions in surface generation, i.e., searching for a good sentence formulation.

Currently, we are collaborating with Alexander Koller from the Computational Linguistics department on instruction-giving problems in Minecraft. The ultimate goal is to generate Minecraft instruction videos automatically. Planning techniques are used to structure and arrange the instruction text, tightly interleaved with sentence-generation techniques that produce the actual language. You can read first works on referring to objects that must yet be built (and hence do not yet exist), suitable search methods in hierarchical planning, and our technical infrastructure for conducting online crowd-sourcing experiments.

Our research explores advanced search techniques for this form of planning; effective two-level search combining hierarchical planning with natural language sentence generation; as well as the Minecraft application, data gathering and model building, user interaction, and evaluation through user studies.


Model-Based Security Analysis. Penetration testing (pentesting) detects security holes in a system by conducting attacks on that system. Pentesting is usually done by humans, which is slow and expensive. The idea in simulated penetration testing is to support that process through AI methods simulating attackers. Possible models for this purpose range from classical (deterministic) planning, over Markov decision processes (MDP), to partially observable Markov decision process (POMDP) models, which can explicitly capture the incomplete knowledge that characterizes network attacks from the attacker's point of view. Joerg Hoffmann's ICAPS'15 invited paper gives a nice overview.
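
To give an impression of the MDP variant, here is a minimal value-iteration sketch over an entirely hypothetical attack model: states might be sets of compromised hosts, actions exploits with success probabilities, and rewards the value of assets reached. None of these modeling choices are prescribed by our work; the sketch only shows the general mechanics.

    def value_iteration(states, actions, transition, reward, gamma=0.95, eps=1e-6):
        # transition(s, a) -> list of (probability, successor state) pairs
        # reward(s, a)     -> immediate reward; every state is assumed to
        # have at least one action (e.g. a no-op "stop").
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                best = max(reward(s, a)
                           + gamma * sum(p * V[t] for p, t in transition(s, a))
                           for a in actions(s))
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < eps:
                break
        # Greedy policy: the simulated attacker's best exploit in each state.
        policy = {s: max(actions(s),
                         key=lambda a: reward(s, a)
                         + gamma * sum(p * V[t] for p, t in transition(s, a)))
                  for s in states}
        return V, policy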

We are exploring broader forms of model-based security analysis in cooperation with CISPA. Our main focus is on models that include defenses, to assess cost-security trade-offs, and on applications beyond network security testing, e.g. global email infrastructure security.


NoGood Learning in State-Space Search. NoGood learning is of paramount importance in solving constraint satisfaction problems (CSP) and propositional satisfiability (SAT). Essentially, the idea is to learn from conflicts encountered during the search for a solution, analyzing the reasons for the conflict and extracting knowledge that will help to avoid similar conflicts in the future. There are decades of literature on this, and NoGood learning techniques are absolutely vital for performance.

By comparison, in state-space search to decide reachability in large transition systems, NoGood learning has been elusive. Attempts have been made since the 80s, but the learned knowledge either was not able to generalize (essentially boiling down to variants of duplicate detection) or was not sound (not giving a guarantee that only unsolvable search states will be pruned).

Building on our work on explicit conjunctions in partial delete relaxation (see further below), we managed to come up with the first NoGood learning technique in state-space search that has both properties -- it is sound and able to generalize to yet unseen states. This effectively realizes a form of clause learning for state-space search. We also have a short summary paper describing this.
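
The following sketch conveys the search-level mechanics only, under strong simplifying assumptions: states are frozensets of facts, and generalize is a hypothetical callback standing in for the actual conflict analysis (in our work, based on explicit conjunctions). Its contract is to return a subset of the state's facts that is provably unsolvable, or None if no such proof exists; storing only proven nogoods is what keeps the pruning sound.

    def search_with_nogoods(init, successors, is_goal, generalize):
        nogoods = []  # learned, provably unsolvable partial states

        def pruned(state):
            # Any state subsuming a learned nogood cannot reach the goal.
            return any(ng <= state for ng in nogoods)

        def explore(state, path):
            if is_goal(state):
                return []
            if state in path or pruned(state):
                return None
            for action, succ in successors(state):
                plan = explore(succ, path | {state})
                if plan is not None:
                    return [action] + plan
            # Conflict: this branch failed. Conflict analysis may prove the
            # state unsolvable and generalize it; if the failure was merely
            # due to the cycle check, no proof exists and nothing is learned.
            reason = generalize(state)
            if reason is not None:
                nogoods.append(reason)
            return None

        return explore(init, frozenset())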

We are currently investigating the use of NoGood learning in probabilistic planning, as well as methods for predicting that all paths from a given state will necessarily lead into a known NoGood.


Star-Topology Decoupling. Apart from heuristic functions, another major means to tackle large transition systems are optimality- and/or completeness-preserving reduction methods, based on notions like partial-order reduction (some transition sequences can be permuted, and only one permutation needs to be searched), state-dominance relations (states dominated by already explored states can be pruned), symmetries (only one symmetric option needs to be explored), or decomposition (solving sub-problems and assembling a global solution from the parts, as in divide-and-conquer approaches).

Our seminar on Admissible Search Enhancements (see our teaching page) gives an overview of such methods. A brand-new method of our own making is star-topology decoupling, a particular form of problem decomposition.

Star-topology decoupling applies to systems/planning problems that can be viewed as consisting of a single center component, interacting with each of a (possibly large) set of leaf components. The leaves interact only via the center, and hence are "conditionally independent given the center". The latter is an appealing picture, and a common-sense approach to decomposing graphical models, like Bayesian networks -- yet here we are not facing a graphical model. The star topology is a structural property of the transition rules in a high-level description of a large transition system. What does it mean to "break the conditional dependencies" in this context?

The answer given in star-topology decoupling is to search over center-component paths only. For any fixed center path (intuitively, a "given valuation of the center"), the possible moves for each leaf component become independent, and can be maintained alongside the center path. This method does not have to enumerate state combinations across leaf components, thus getting rid of a major source of combinatorial blow-up. This typically yields substantial search space reductions (several orders of magnitude are a common empirical phenomenon), especially for proving unreachability.
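
A minimal sketch of this idea follows, with all model access hidden behind hypothetical callbacks: center_successors enumerates center transitions (possibly conditioned on what the leaves can currently provide), and leaf_closure extends a leaf's reachable-state set by all leaf moves compatible with the given center state. A decoupled state is a center state plus one reachable set per leaf; leaf state combinations are never enumerated. The real algorithm additionally tracks leaf-path costs and uses dominance pruning; this sketch only decides reachability.

    from collections import deque

    def decoupled_search(center_init, leaf_inits, center_successors,
                         leaf_closure, is_goal):
        init = (center_init, tuple(leaf_closure(center_init, frozenset([l0]))
                                   for l0 in leaf_inits))
        queue, seen = deque([init]), {init}
        while queue:
            center, leaf_sets = queue.popleft()
            # Goal test: one compatible goal leaf state per leaf suffices.
            if is_goal(center, leaf_sets):
                return True
            for center_succ in center_successors(center, leaf_sets):
                succ_sets = tuple(leaf_closure(center_succ, ls)
                                  for ls in leaf_sets)
                node = (center_succ, succ_sets)
                if node not in seen:
                    seen.add(node)
                    queue.append(node)
        return False  # no center path reaches the goal: unreachability proven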

Recommended reading here is, foremost, our short summary paper outlining the approach. For more details, refer to the journal paper.

We are currently investigating the use of star-topology decoupling in model checking, particularly the checking of liveness properties and the application to weak memory models.


Partial Delete Relaxation. The basic idea in the generation of heuristic functions is to design a problem relaxation R; then, to estimate the goal distance of a state s, solve its relaxation R(s) exactly, and set h(s) to the cost of the solution to R(s). In other words, h(s) is the cost of a relaxed solution for s. But what are useful relaxations of logic-based problem descriptions, and how can these be designed and solved automatically?

One of the most successful answers is the so-called delete relaxation. This relaxation essentially pretends that everything that once was true remains true. In particular, a logical proposition may be both true and false, and we can pick whichever value we prefer. For example, if I buy a car, and I pay 10000 EUR for that, then in the real world, after applying the "buy" action, I have the car but no longer have the money. In the delete-relaxed world, I have the car, and I also still have the money.
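
For concreteness, here is a minimal sketch of one standard delete-relaxation heuristic, the additive variant commonly called h^add. The data layout (actions as (preconditions, add effects, cost) triples over facts) is a simplification chosen for this example. Delete effects simply do not appear: once a fact gets a finite cost, it stays achieved.

    def h_add(state, actions, goal):
        INF = float("inf")
        cost = {f: 0 for f in state}  # facts true in s cost nothing
        changed = True
        while changed:  # fixpoint over delete-relaxed reachability
            changed = False
            for pre, add, c in actions:
                if all(f in cost for f in pre):
                    via = c + sum(cost[f] for f in pre)
                    for f in add:
                        if cost.get(f, INF) > via:
                            cost[f] = via
                            changed = True
        if any(f not in cost for f in goal):
            return INF  # goal unreachable even under the relaxation
        return sum(cost[f] for f in goal)

On the car example: after "buy", both "have car" and "have money" carry finite cost, so a relaxed plan may happily spend the same 10000 EUR again. That optimism is exactly what makes the heuristic cheap to compute.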

It has been shown that delete-relaxation heuristic functions are highly informative for search on many planning problems, essentially because they preserve complex prerequisite--effect chains. Recommended reading here are our papers on delete-relaxation heuristic functions and their search space properties. Nevertheless, as the delete relaxation ignores the negative "side effects" actions may have, it exhibits severe weaknesses in problems where conflicting side effects are essential to the nature of the problem. One example is puzzle-like problems such as Rubik's Cube, where each action is intended to move just one piece, but as a side effect many other pieces are moved as well. Another, more practically important, example is planning under resource constraints, where we may intend to, e.g., move a vehicle, but as a side effect fuel gets consumed -- under the delete relaxation, a single drop of fuel suffices to travel the world.

Partial delete relaxation devises approximation techniques "taking some deletes into account", realizing an interpolation parameter that allows picking an arbitrary point between fully delete-relaxed planning and real planning. In other words, such techniques can force the relaxation to be exact in the limit, while also being able to choose arbitrary intermediate points, trading accuracy of information against computational overhead. FAI has been the leading research group in the development of such techniques, in particular through explicit conjunctions and red-black planning.

We are no longer actively investigating partial delete relaxation, but student topics in this area can still be arranged if there is interest.

