17th International Conference on Principles of Knowledge Representation and Reasoning

September 12-18, 2020

last modified: 25 Sep 2020

Invited Talks


Rachid Alami (LAAS-CNRS, ANITI, France)

Models and Decisional issues for Human-Robot Joint Action

This talk will address some key decisional issues that arise for a cognitive and interactive robot that shares space and tasks with humans. We adopt a constructive approach based on the identification and effective implementation of individual and collaborative skills. The system is comprehensive in that it aims to cover a complete set of abilities, articulated so that the robot controller can conduct human-robot joint action, seen as collaborative problem solving and task achievement, in a flexible and fluent manner. These abilities include geometric reasoning and situation assessment based essentially on perspective-taking and affordances; management and exploitation of each agent's (human and robot) knowledge in a separate cognitive model; human-aware task planning; and interleaved execution of shared plans. We will also discuss the key issues linked to the pertinence and acceptability of the robot's behaviour to the human, and how these qualitatively influence the robot's decisional, planning, control and communication processes.


Dr. Rachid Alami is Senior Scientist at LAAS-CNRS. He received an engineer diploma in computer science in 1978 from ENSEEIHT, a Ph.D. in Robotics in 1983 from Institut National Polytechnique, and a Habilitation (HDR) in 1996 from Paul Sabatier University. He has contributed to, and taken important responsibilities in, several national, European and international research and collaborative projects (EUREKA: FAMOS, AMR and I-ARES projects; ESPRIT: MARTHA, PROMotion, ECLA; IST: COMETS; IST FP6 projects: COGNIRON, URUS, PHRIENDS; FP7 projects: CHRIS, SAPHARI, ARCAS, SPENCER; H2020: MuMMER; France: ARA, VAP-RISP for planetary rovers, PROMIP, several ANR projects). His main research contributions fall in the fields of robot decisional and control architectures, task and motion planning, multi-robot cooperation, and human-robot interaction. Rachid Alami is currently the head of the Robotics and InteractionS group at LAAS. In 2019 he was awarded the Academic Chair on Cognitive and Interactive Robotics at the Artificial and Natural Intelligence Toulouse Institute (ANITI).

Thomas Eiter (Technische Universität Wien, Austria)

A Hitchhiker's Tour Through Computational Complexity in Knowledge Representation and Reasoning

Great Moments in KR

Slides in PDF

The vision of artificial intelligence with human-level reasoning capabilities has inspired and motivated generations of researchers, starting with John McCarthy's seminal work, to develop numerous formalisms and approaches towards making it a reality. Many of these formalisms are rooted in formal logic or mathematical approaches that deal with world models at a symbolic level, making it possible to take aspects such as incomplete information, uncertainty, or inconsistency into account. The undecidability of first-order logic, which often served as the basis, spurred the search for decidable fragments in order to facilitate effective reasoning. However, mere decidability turned out to be insufficient in practice, and tractability was fostered as a paradigm for efficient solvability.

Structural complexity theory, which is concerned with problem solving under resource constraints and with describing its inherent difficulty, turned out to be a valuable tool in the design of formalisms and for the analysis of reasoning tasks in them. In fact, the rich landscape of complexity classes, with various models of computation and resource settings, has facilitated a fine-grained analysis beyond a black-and-white characterization of being tractable or intractable, where NP-hardness was perceived as a kiss of death for problems in practice. In turn, problems in Knowledge Representation and Reasoning (KRR) have vitalized complexity classes that were considered to be of more academic interest, and led to the development of new notions and techniques.

In this talk, we shall address the role of computational complexity in KRR. We shall consider why complexity matters and what conclusions can be drawn from complexity results beyond merely quantitative ones (in terms of resource consumption). With a focus on selected examples, we shall review some highlights and influential results, consider developments and recent trends, and perhaps risk a glimpse into the future: while regarded as almost indispensable today, may the need for complexity considerations vanish?


Thomas Eiter received his Ph.D. degree in computer science from the Vienna University of Technology (TU Wien) in 1991. He worked at TU Wien until 1996, when he moved as an associate professor to the University of Giessen, Germany. In 1998, he rejoined TU Wien as full professor, where he heads the knowledge-based systems group and, since 2004, the Institute of Information Systems (now the Institute of Logic and Computation).

Eiter's current research interests include knowledge representation and reasoning, computational logic, logic programming and databases, declarative problem solving, and intelligent agents. He has published a number of research papers (among them more than 120 journal articles), co-authored a research monograph on heterogeneous agents, and edited 27 article collections and conference proceedings. His paper on complexity of logic programming in the ACM Computing Surveys (2001), a joint work with Evgeny Dantsin, Georg Gottlob and Andrei Voronkov, is a well-cited reference.

Eiter has been serving on many editorial boards, e.g. of the Artificial Intelligence Journal, the Journal of Artificial Intelligence Research, and the AI Review, as well as on steering bodies. He was a program co-chair of the International Conference on Database Theory (ICDT) in 2005, of KR in 2012, and of the International Conference on Logic Programming (ICLP) in 2015; recently, he was Conference Chair of the International Joint Conference on AI (IJCAI) in 2019. Furthermore, he was President of Knowledge Representation Inc. and is pro-tem President of the Association for Logic Programming.

Eiter is a Fellow of the European Association for AI (EurAI), Corresponding Member of the Austrian Academy of Sciences, and Member of the European Academy of Sciences (London).

Mateja Jamnik (University of Cambridge, UK)

How to (Re)represent it?

To achieve efficient human computer collaboration, computers need to be able to represent information in ways that humans can understand. Picking a good representation is critical for effective communication and human learning, especially on technical topics. To select representations appropriately, AI systems must have some understanding of how humans reason and comprehend the nature of representations. In this work, we are developing the foundations for the analysis of representations for reasoning. Ultimately, our goal is to build AI systems that select representations intelligently, taking users’ preferences and abilities into account.


Mateja Jamnik is a Professor of Artificial Intelligence at the Department of Computer Science and Technology of the University of Cambridge, UK. She is developing AI techniques for human-like computing: she computationally models how people solve problems to enable machines to reason in a similar way to humans. She is essentially trying to humanise computer thinking. She applies this AI technology to medical data to advance personalised cancer medicine, and to education to personalise tutoring systems. Mateja is passionate about bringing science closer to the public and engages frequently with the media and at public science events. Her active support of women scientists was recognised by the Royal Society, which awarded her the Athena Prize. Mateja has been advising the UK government on policy direction in relation to the impact of AI on society.

Marta Kwiatkowska (University of Oxford, UK)

Probabilistic model checking for strategic equilibria-based decision making

Software faults have plagued computing systems since the early days, leading to the development of methods based on mathematical logic, such as proof assistants or model checking, to ensure their correctness. The rise of AI calls for automated decision making that incorporates strategic reasoning and coordination of the behaviour of multiple autonomous agents acting concurrently and in the presence of uncertainty. Traditionally, game-theoretic solutions such as Nash equilibria are employed to analyse strategic interactions between multiple independent entities, but model checking tools for scenarios exhibiting concurrency, stochasticity and equilibria have been lacking.

This lecture will focus on a recent extension of the probabilistic model checker PRISM-games (www.prismmodelchecker.org/games/), which supports quantitative reasoning and strategy synthesis for concurrent multiplayer stochastic games against temporal logic specifications that can express coalitional, zero-sum and equilibria-based properties. Game-theoretic models arise naturally in the context of autonomous computing infrastructure, including user-centric networks, robotics and security. Using illustrative examples, this lecture will give an overview of recent progress in probabilistic model checking for stochastic games and highlight challenges and opportunities for the future.
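To give a flavour of the kind of quantitative analysis such tools automate, here is a purely illustrative Python sketch: value iteration on a toy turn-based stochastic game, approximating the probability with which player 1 can guarantee reaching a goal state whatever player 2 does (a zero-sum reachability objective). This is not PRISM-games or its modelling language; the states, owners, actions and probabilities are invented for the example.

```python
# Purely illustrative sketch: a toy turn-based stochastic game solved by value
# iteration.  This is NOT the PRISM-games tool or its modelling language; the
# states, owners, actions and probabilities below are invented for the example.

GOAL = "goal"
owner = {"s0": 1, "s1": 2}                       # which player chooses in each state
actions = {
    # each action is a probability distribution over successor states
    "s0": [[("s1", 1.0)],                        # hand control to player 2
           [("goal", 0.5), ("s0", 0.5)]],        # try to reach the goal directly
    "s1": [[("s0", 1.0)],                        # send play back to s0
           [("goal", 1.0)]],                     # concede the goal
}

def reachability_values(iters=1000):
    """Max-min probability of eventually reaching GOAL:
    player 1 maximises, player 2 minimises."""
    val = {"s0": 0.0, "s1": 0.0, GOAL: 1.0}
    for _ in range(iters):
        new = dict(val)
        for s, acts in actions.items():
            outcomes = [sum(p * val[t] for t, p in dist) for dist in acts]
            new[s] = max(outcomes) if owner[s] == 1 else min(outcomes)
        val = new
    return val

print(reachability_values())   # here player 1 can guarantee the goal with probability 1
```

PRISM-games extends this idea far beyond reachability, to concurrent moves, temporal logic specifications, strategy synthesis and equilibria-based properties.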


Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. She is known for fundamental contributions to the theory and practice of model checking for probabilistic systems, focusing on automated techniques for verification and synthesis from quantitative specifications. She led the development of the PRISM model checker (www.prismmodelchecker.org), the leading software tool in the area and winner of the HVC Award 2016. Probabilistic model checking has been adopted in diverse fields, including distributed computing, wireless networks, security, robotics, healthcare, systems biology, DNA computing and nanotechnology, with genuine flaws found and corrected in real-world protocols. Kwiatkowska is the first female winner of the Royal Society Milner Award, winner of the BCS Lovelace Medal, and was awarded an honorary doctorate from KTH Royal Institute of Technology in Stockholm. She won two ERC Advanced Grants, VERIWARE and FUN2MODEL, and is a co-investigator of the EPSRC Programme Grant on Mobile Autonomy. Kwiatkowska is a Fellow of the Royal Society, Fellow of ACM, EATCS and BCS, and Member of Academia Europaea.

Gary Marcus (Robust AI, USA)

Taking AI to the Next Level

For nearly half a century, AI has always seemed as if it were just beyond reach, less than two decades away. Yet "strong AI" in some ways still seems elusive. In this talk, I will give a cognitive scientist's perspective on AI. What have we learned, and what are we still struggling with? Is there anything that programmers of AI can still learn from studying the science of human cognition?


Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. He is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and The New York Times best seller Guitar Zero, as well as editor of The Future of the Brain and The Norton Psychology Reader.

He has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, linguistics, evolutionary psychology and artificial intelligence, often in leading journals such as Science and Nature, and is perhaps the youngest Professor Emeritus at NYU. His newest book, Rebooting AI: Building Machines We Can Trust, co-authored with Ernest Davis, aims to shake up the field of artificial intelligence.

David Poole (University of British Columbia, Canada)

Lessons from three decades of research into learning and reasoning with relational probabilistic models

Making probabilistic predictions from relational data, recently under the umbrella of "statistical relational AI", has a long history and is an active research area. The talk presents some insights that everyone should know but that often do not fit into research papers. These include: what a relational model is; why triples are universal representations of relations but learning with them is difficult; how embedding-based models generalize (and what they actually learn); why embedding-based models cannot be used for predicting properties; why ranking is often used for evaluation but is very misleading; why entities are not like words in embedding models; why being Bayesian implies exchangeability, which can be exploited in lifted reasoning; why lifted reasoning relies on counting, while counting relies on knowing what exists and knowing identity; and why identity and existence uncertainty are tricky to get right.
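For readers unfamiliar with embedding-based models over triples, the following is a purely illustrative Python sketch (not code from the talk): a DistMult-style model that scores (subject, relation, object) triples and then ranks candidate objects, the kind of ranking-based evaluation the abstract cautions about. The entities, relations and vectors are invented; in practice the embeddings would be learned from data.

```python
# Purely illustrative sketch (not from the talk): a DistMult-style embedding
# model scoring (subject, relation, object) triples.  Entities, relations and
# the embedding dimension are invented; real embeddings would be learned.
import numpy as np

entities = {"alice": 0, "bob": 1, "cs101": 2}
relations = {"teaches": 0, "takes": 1}
dim = 8
rng = np.random.default_rng(0)
E = rng.normal(size=(len(entities), dim))    # one vector per entity
R = rng.normal(size=(len(relations), dim))   # one vector per relation

def score(subj, rel, obj):
    """Higher score = triple judged more plausible (sum of elementwise products)."""
    return float(np.sum(E[entities[subj]] * R[relations[rel]] * E[entities[obj]]))

# Ranking-style evaluation: fix subject and relation, rank all candidate objects.
ranked = sorted(entities, key=lambda o: score("alice", "teaches", o), reverse=True)
print(ranked)
```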


David Poole is a Professor of Computer Science at the University of British Columbia. He is known for his work on combining logic and probability, probabilistic inference, relational probabilistic models, statistical relational AI and semantic science. He is a co-author of two AI textbooks (Cambridge University Press, 2010, 2nd edition 2017, and Oxford University Press, 1998), and co-author of "Statistical Relational Artificial Intelligence: Logic, Probability, and Computation" (Morgan & Claypool, 2016). He is a former chair of the Association for Uncertainty in Artificial Intelligence, the winner of the Canadian AI Association (CAIAC) 2013 Lifetime Achievement Award, and a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and of CAIAC.