Foundations of Artificial Intelligence (FAI) Group



ATTENTION: This page is intended to give students an idea of what we are working on. It is not updated regularly and does not reflect the most recent state of our lines of work. For up-to-date information, refer to the papers on our Publications page.


Summary of FAI Research. We focus on complex decision-making problems, specifically planning problems (also known as sequential decision making), where an agent must decide how to act in an environment so as to achieve its goals. We mostly consider general AI planning problems, where the agent must have generic decision-making capabilities not specialized to a particular domain; and we mostly consider model-based approaches, relying on a formal symbolic model of the agent and its environment. Within this realm of research, our activities can be roughly categorized into three major research questions:

(1) How to build effective planning algorithms? Our traditional focus is on heuristic search methods, which have dominated the field of AI Planning during the last two decades. We have also investigated, e.g., nogood learning and decomposition methods. In recent years, we have increasingly focused on neural networks and reinforcement learning, specifically their combination with symbolic methods into various forms of neuro-symbolic AI.

(2) How to achieve trusted AI? We are investigating the explanation of plan alternatives through insights into the space of possible plans, the verification of action policies (learned functions mapping states to actions), test-case generation methods for debugging action policies, and visualization methods for action policies.

(3) Which applications to pursue? The applications we are actively pursuing at this time are natural language generation and model-based security analysis.

In what follows, we give a brief summary of AI Planning, our main field of research, addressed mainly to students; then we discuss some example directions. This should not be mistaken for an exhaustive description of our research; we typically do many things not subsumed by the descriptions below. You can check our list of funded research projects to get a somewhat more complete impression.


What is AI Planning? How to decide what to do? Humans make decisions all the time, and so must robots or software bots if they are to succeed in their environment. How should they do that?

This and related questions are the origin of the research area of AI Planning, one of the founding sub-areas of Artificial Intelligence. Humans are able to (eventually) successfully tackle any new situation they are faced with, coming up with courses of action that achieve their aims. The vision of AI Planning is to reproduce this ability: to create general problem-solving algorithms that can in principle solve any problem (more precisely, any problem whose solution involves finding a suitable course of action), given a suitable logic-based description of that problem.

Algorithms of this kind can automatically generate strategies of action for arbitrary autonomous agents in arbitrary environments, such as a robot on Mars (which requires autonomy due to the impossibility of full remote control), or a softbot executing automatic hacking attacks for security testing (where the environment is highly dynamic and self-adaptive technology is valuable). More generally, planning algorithms can solve any problem whose solution can be cast as finding goal paths in large transition systems (billions of states and upwards), or as proving the absence of such paths. They are successfully applied in areas as diverse as large-scale printer control, generation of business process templates, greenhouse logistics, security testing, and natural language sentence generation. Close relations exist to all areas of CS that require testing reachability in large transition systems, including Verification/Model Checking, Program Synthesis, Bioinformatics, Game Playing, etc.

The dominating approach to AI Planning in the last two decades has been heuristic search, which enumerates reachable states in a preference order given through a heuristic function, usually denoted h, that maps states to goal distance estimates. States that appear closer to the goal, according to h, are preferred. If h lower-bounds goal distance, then this can be arranged so as to guarantee that the goal path found is optimal. But how to generate a heuristic function in Automatic Planning, where this needs to happen automatically given only the symbolic model (the "rules of the game")? This is a major research area in AI Planning. You can get an impression of this research line by reading this paper:

Everything You Always Wanted to Know About Planning (But Were Afraid to Ask)
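
To make the heuristic-search scheme concrete, here is a minimal sketch in Python (our illustration, not code from the paper above; the state space, goal test, successor function, and heuristic h are assumed to be supplied by the user). It implements search in the A* style, which returns a cost-optimal plan whenever h lower-bounds the true goal distance:

    import heapq
    import itertools

    def astar(initial_state, is_goal, successors, h):
        """A*: expand states in order of f(s) = g(s) + h(s).

        successors(s) yields (action, next_state, cost) triples. If h
        never overestimates the true goal distance (is admissible),
        the returned action sequence is cost-optimal.
        """
        counter = itertools.count()  # tie-breaker; avoids comparing states
        frontier = [(h(initial_state), 0, next(counter), initial_state, [])]
        best_g = {initial_state: 0}
        while frontier:
            _, g, _, state, plan = heapq.heappop(frontier)
            if g > best_g.get(state, float("inf")):
                continue  # stale queue entry
            if is_goal(state):
                return plan
            for action, succ, cost in successors(state):
                g2 = g + cost
                if g2 < best_g.get(succ, float("inf")):
                    best_g[succ] = g2
                    heapq.heappush(
                        frontier,
                        (g2 + h(succ), g2, next(counter), succ, plan + [action]))
        return None  # no goal path exists

Setting h to the constant 0 recovers blind uniform-cost search; the whole point of heuristic-function research is to compute informative, ideally admissible, h automatically from the symbolic model.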


Example Research (Algorithms): Neural Networks for Effective Planning. Given the success of neural networks (NN), and in particular the success of AlphaGo Zero and its successor AlphaZero in learning to play Go, Chess, and Shogi at world-champion level, they have naturally become a hugely important topic in AI Planning as well. This success results from the combination of two components: neural networks, which learn to estimate the value of a game position and to identify a subset of the most interesting applicable moves; and search, which uses this learned knowledge to explore the game state space and identify the best moves. We believe that this formula, NN + search, is a key recipe.

The key challenge lies in the generality of planning problems -- way beyond being able to tackle different board games, planning algorithms are supposed to tackle arbitrary problems describable in their input language, covering e.g. the range from natural language sentence generation to cybersecurity attacks. How to learn effective search guidance (heuristic functions) or action-decisions (policies that map states to actions) with neural networks in general planning?

Our current efforts in this direction encompass learning heuristic functions from raw planning-state descriptions, as well as reinforcement learning (deep Q-learning) of action policies, using model-based techniques to address deficiencies such as cold-start problems and global reliability.
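
As a concrete illustration of the first point, here is a minimal sketch (our assumptions, not our actual system: states encoded as fixed-size feature vectors, with training targets being goal distances collected from solved training instances) of learning a heuristic function with a small feed-forward network in PyTorch:

    import torch
    import torch.nn as nn

    STATE_DIM = 64  # hypothetical size of the state feature encoding

    # A small feed-forward net mapping a state encoding to a scalar
    # goal-distance estimate h(s).
    model = nn.Sequential(
        nn.Linear(STATE_DIM, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    def train_step(states, goal_distances):
        """One regression step: fit the net's output to observed goal
        distances (states: float tensor [batch, STATE_DIM];
        goal_distances: float tensor [batch])."""
        optimizer.zero_grad()
        pred = model(states).squeeze(-1)
        loss = loss_fn(pred, goal_distances)
        loss.backward()
        optimizer.step()
        return loss.item()

The learned model can then be plugged into heuristic search, e.g. as h = lambda s: model(encode(s)).item(), where encode() is a (domain-specific, here assumed) state featurization. Note that such a learned h comes without any lower-bounding guarantee, which is one reason the trust questions below matter.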


Example Research (Trusted AI): Action Policy Verification, Testing, Visualization. An action policy is a function mapping states to actions, thus representing an agent's strategy. If such an action policy is learned, e.g. in terms of a neural network, there are no inherent guarantees on its behavior. Does it always reach the goal? Does it run the risk of ending up in an unsafe state? What has actually been learned, and is that strategy sensible? In short: how can one gain trust in a learned action policy?

We are exploring a variety of research directions to address this question, including deep statistical model checking to analyze action policy behavior on individual start states in stochastic environments; action policy verification to ascertain that no unsafe states can be reached from any of a given (large) set of start states; action policy testing for debugging purposes, trying to find situations where the policy performs badly; and interactive action policy visualization, making policy behavior visually explorable by human analysts.
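
To illustrate the testing idea, here is a toy Monte-Carlo harness (an illustration of ours, not our actual tool chain; all interfaces are assumptions) that executes a policy from sampled start states and records runs that reach an unsafe state or fail to reach the goal:

    import random

    def test_policy(policy, sample_start_state, step, is_goal, is_unsafe,
                    num_runs=1000, horizon=200, seed=0):
        """Run the policy from sampled start states; collect traces
        that end in an unsafe state or exhaust the horizon without
        reaching the goal. step(state, action) returns the successor
        state (possibly sampled, in stochastic environments)."""
        rng = random.Random(seed)
        bugs = []
        for _ in range(num_runs):
            state = sample_start_state(rng)
            trace = [state]
            for _ in range(horizon):
                if is_goal(state):
                    break
                state = step(state, policy(state))
                trace.append(state)
                if is_unsafe(state):
                    bugs.append(("unsafe", trace))
                    break
            else:
                if not is_goal(state):
                    bugs.append(("no-goal", trace))
        return bugs

Verification, in contrast, must cover all (rather than just sampled) start states and environment behaviors, which is what makes it so much harder.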


Example Research (Trusted AI): Plan-Space Explanation. Model-based approaches to AI are in principle well suited to explainability, given the explicit nature of their world knowledge and of the reasoning performed to take decisions. AI Planning in particular is relevant in this context as a generic approach to action-decision problems. Indeed, explainable AI Planning (XAIP) has received interest for more than a decade, and has recently been gathering speed along with the general trend towards explainable AI.

In our work on XAIP, we explore a form of contrastive explanation aimed at answering user questions of the kind "Why do you suggest to do A here, rather than B (which seems more appropriate to me)?". Answers to such questions take the form of reasons why A is preferable over B. We have designed a formal framework that allows providing such answers in a systematic way, through plan properties, which capture the relevant aspects of plans, and the identification of plan-property dependencies, which make explicit to the user how these aspects depend on each other. In particular, the answer to the above question could be "Because if we do A here, then the resulting plan will not be able to achieve objective C.". We show that powerful plan-property languages -- specifically, linear temporal logic (LTL) -- can be dealt with effectively, i.e., this kind of explanation can be computed with an empirical complexity similar to that of previous AI Planning frameworks.
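
As a toy illustration of plan-property dependencies (the actual framework is defined over the full space of plans; here, for simplicity, properties are evaluated over an enumerated sample of plans, and all names are made up), one can check for each pair of properties whether enforcing one excludes the other:

    def property_dependencies(plans, properties):
        """For each ordered pair (p, q) of plan properties, check
        whether every plan in 'plans' that satisfies p falsifies q.

        plans: iterable of plans (e.g. action sequences);
        properties: dict mapping property name -> predicate over plans."""
        deps = []
        for p_name, p in properties.items():
            for q_name, q in properties.items():
                if p_name == q_name:
                    continue
                p_plans = [plan for plan in plans if p(plan)]
                if p_plans and all(not q(plan) for plan in p_plans):
                    deps.append((p_name, q_name))  # "p excludes q"
        return deps

With hypothetical properties such as "do-A-here" and "achieve-C", the resulting pair ("do-A-here", "achieve-C") corresponds exactly to the answer "if we do A here, the plan cannot achieve objective C".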

You can read a paper introducing the framework, or a paper designing effective solution algorithms.


Example Research (Applications): Natural Language Generation. Natural language generation can be viewed as a planning problem -- planning the text content, planning the sentences, planning the actual phrasing of these sentences. Neural networks (NN), despite their resounding successes in natural language processing, are (as of now) useful mainly for the last of these three purposes. Training an NN on Harry Potter books yields a machine that can write text which sounds like Harry Potter -- but which completely lacks the actual story content. We are investigating neuro-symbolic methods to achieve the best of both worlds, in the specific setting of instruction generation where it is particularly important to be able to control the content (it will hardly be acceptable to generate text that "sounds like instructions").

We are investigating this in the context of Minecraft building instructions, in collaboration with Alexander Koller from the Computational Linguistics department. The ultimate goal is to generate Minecraft instruction videos automatically. Planning techniques are used to structure and arrange the instruction text, tightly interleaved with sentence-generation techniques that produce the actual language. You can read first works on referring to objects that are yet to be built (and hence do not yet exist), on Monte-Carlo tree search methods in hierarchical planning, and on how the overall machinery can be used to generate instructions at different levels of abstraction.
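
The following toy sketch (purely illustrative, not our Minecraft system; every name here is made up) shows the division of labor: a symbolic content plan fixes what to say and in which order, while a realizer, which in a neuro-symbolic setup would be a learned sentence generator, fixes how to say it:

    def generate_instructions(content_plan, realize):
        """content_plan: ordered list of instruction steps, e.g.
        ("place", "block", "on top of", "the red block");
        realize: maps one step to a sentence string."""
        return " ".join(realize(step) for step in content_plan)

    def template_realize(step):
        """Trivial template realizer; the component one would replace
        by a neural surface-generation model."""
        verb, obj, prep, landmark = step
        return f"Now {verb} a {obj} {prep} {landmark}."

Because the content plan is symbolic, the correctness of the instructions is controlled by the planner, regardless of how the phrasing component is implemented.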


Example Research (Applications): Model-Based Security Analysis. Penetration testing (pentesting) detects security weaknesses in a system by conducting attacks on that system. Pentesting is usually done by humans, which is slow and expensive. The idea in simulated penetration testing is to support that process through AI methods simulating attackers. Possible models for this purpose range from classical (deterministic) planning, over Markov decision processes (MDP), to partially observable Markov decision process (POMDP) models, which can explicitly capture the incomplete knowledge that characterizes network attacks from the point of view of the attacker. Joerg Hoffmann's ICAPS'15 invited paper gives a nice overview.
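
For a flavor of the MDP view, here is a standard value-iteration sketch (textbook code, not our pentesting tools; the attack-model interfaces are assumptions) that computes the attacker's value for each network state:

    def value_iteration(states, actions, transition, reward,
                        gamma=0.95, eps=1e-6):
        """Bellman-backup iteration on a (small, toy) attack MDP.

        actions(s) -> applicable attack actions in state s;
        transition(s, a) -> list of (probability, successor) pairs;
        reward(s, a) -> immediate reward, e.g. value of newly
        compromised assets minus the cost of the attack action."""
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                best = max(
                    (reward(s, a)
                     + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                     for a in actions(s)),
                    default=0.0)  # no applicable attacks: terminal state
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < eps:
                return V

States with high attacker value pinpoint weaknesses worth defending; POMDP models additionally track the attacker's belief about the network rather than its true state.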

We are currently exploring broader forms of model-based security analysis in cooperation with CISPA. Our main focus is on models that include defenses so as to assess cost-security trade-offs, and on applications beyond network security testing, e.g. global email infrastructure security.


Example Research (Algorithms): Star-Topology Decoupling. A major means of tackling large transition systems are optimality- and/or completeness-preserving reduction methods, based on notions like partial-order reduction (some transition sequences can be permuted, and only one permutation needs to be searched), state-dominance relations (states dominated by already explored states can be pruned), symmetries (only one symmetric option needs to be explored), or decomposition (solving sub-problems and assembling a global solution from the parts, as in divide-and-conquer approaches). A brand-new method of our own making is star-topology decoupling, a particular form of problem decomposition.

Star-topology decoupling applies to systems/planning problems that can be viewed as consisting of a single center component, interacting with each of a (possibly large) set of leaf components. The leaves interact only via the center, and hence are "conditionally independent given the center". The latter is an appealing picture, and a common-sense way to decompose graphical models, like Bayesian networks -- yet here we are not facing a graphical model. The star topology is a structural property of the transition rules in a high-level description of a large transition system. What does it mean to "break the conditional dependencies" in this context?

The answer given in star-topology decoupling is to search over center-component paths only. For any fixed center path (intuitively, a "given valuation of the center"), the possible moves for each leaf component become independent, and can be maintained alongside the center path. This method does not have to enumerate state combinations across leaf components, thus getting rid of a major source of combinatorial blow-up. This typically yields substantial search space reductions (several orders of magnitude are a common empirical phenomenon), especially for proving unreachability.
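
The following highly simplified sketch (our illustration; the actual algorithms track costs of leaf states and treat center-leaf interaction more carefully, and all interfaces here are assumptions) conveys the core idea: search over center states only, maintaining per leaf the set of leaf states reachable given the center path so far:

    from collections import deque

    def decoupled_reachability(center_init, center_succ,
                               leaf_inits, leaf_succ, is_goal):
        """Breadth-first search over center states; alongside each
        center state, maintain one reachable-state set per leaf
        ("conditionally independent given the center").

        center_succ(c) yields successor center states;
        leaf_succ(i, leaf_states, c) -> set of leaf-i states reachable
        from leaf_states while the center is at c;
        is_goal(c, leaf_sets) tests the decoupled goal condition."""
        start = (center_init, tuple(frozenset([l]) for l in leaf_inits))
        queue, seen = deque([start]), {start}
        while queue:
            c, leaf_sets = queue.popleft()
            if is_goal(c, leaf_sets):
                return True
            for c2 in center_succ(c):
                leaf_sets2 = tuple(frozenset(leaf_succ(i, ls, c2))
                                   for i, ls in enumerate(leaf_sets))
                node = (c2, leaf_sets2)
                if node not in seen:
                    seen.add(node)
                    queue.append(node)
        return False  # goal provably unreachable

Observe that a decoupled search node stores one set of states per leaf rather than one state combination across all leaves, which is exactly where the exponential savings come from.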

For easy reading, consider our short summary paper outlining the approach. For more details, refer to the journal paper. We have recently shown that star-topology decoupling can also be used in model checking, to more effectively verify liveness properties.

