Monday, 18th December 2023

First keynote speaker: Jaap Hage, Maastricht University

Talk: Explainable Legal AI. With the increasing popularity of AI based on machine learning, the ideal that AI programs can explain their outputs becomes more difficult to realise. There is no reason why this would be different for legal AI. However natural the demand for explicability may seem, it is not at all obvious what precisely is being asked for. There seem to be two kinds of explanation, which can ideally be combined but which in practice do not always go together. The one kind describes the process through which the explanandum came about, physically or – in the law – logically. The other kind is a tool to create understanding in the audience. Psychological research has shown that people are often not capable of explaining their own behaviour in the first way, and that when they explain it in the second way, the explanation may very well be false. This has also been shown to hold for legal decisions. If naturally intelligent lawyers are not always capable of explaining their own decisions – but may be under the illusion that they are – should we then demand from AI legal decision makers that they do what human legal decision makers often cannot do? What can we, under these circumstances, expect from the explanations that AI systems give of their legal decisions? For some, the answer may come as a surprise.

Bio: Jaap Hage is an emeritus professor of Legal Theory at Maastricht University. His research is focused on legal logic, with emphasis on the logic of rules, basic legal concepts (ontology of law), and social ontology. His publications include the following books: Reasoning with Rules (1997), Studies in Legal Logic (2005), and Foundations and Building Blocks of Law (2018). Further information: https://www.jaaphage.nl/

Tuesday, 19th December 2023

Second keynote speaker: Piek Vossen, VU

Talk: ChatGPT: what it is, what it can do, cannot do and should not do. OpenAI has set a new standard by making complex AI tools and systems available to the general public through a natural language interface. There is no need to program complex systems; just ask your question or send your request to ChatGPT. In this presentation, I dive deeper into the workings of ChatGPT to explain what it can do and what it cannot do. Finally, I discuss its potential future as a technology solution: as Artificial General Intelligence or as a natural language interface to technology.

Bio: After 10 years in industry, Piek Vossen became a full professor (2009) and established the Computational Linguistics and Text Mining Lab at the Vrije Universiteit (VU), where today 25 researchers study language models. His groundwork on cross-language conceptual modelling and interoperability led him to found the Global WordNet Association (GWA) in 2001, which builds WordNets in many languages and connects them through semantic graphs. GWA addresses fundamental questions at scale: what words we use, what they stand for, and how they relate. He developed Dutch WordNet databases; today, WordNet lives on in large language models (LLMs), which automatically place words in semantic graphs. With AUMC researchers, he built an LLM from Dutch medical notes for medical text classification. He coordinated numerous programmes creating news-reading machines that reconstruct what happened in the world as event-centric knowledge graphs. Funded by the prestigious Spinoza Prize (2013), he studied three foundations for language understanding: identity, reference, and perspective, resulting in the GraSP model as the “theory of mind” of robots communicating with people within the Hybrid Intelligence gravitation programme. Further information: https://vossen.info/

Wednesday, 20th December 2023

Third keynote speaker: Iris van Rooij, Radboud University

Talk: There is no AGI on the horizon, and AI cannot replace people’s (legal) thinking and judging. I will present an argument based on recent interdisciplinary work published as “Reclaiming AI as a theoretical tool for cognitive science” (joint work with Olivia Guest, Federico Adolfi, Ronald de Haan, Antonina Kolokolova, & Patricia Rich): The contemporary field of AI has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems. Yet, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. This puts us at risk of thinking that our thinking can be replaced by AI and of deskilling our professions. The situation could be remediated by releasing the grip of the currently dominant view on AI and by returning to the idea of AI as a theoretical tool for cognitive science.

Bio: Iris van Rooij is a Professor of Computational Cognitive Science at the School of Artificial Intelligence in the Faculty of Social Sciences at Radboud University, the Netherlands, and a Principal Investigator at the Donders Institute for Brain, Cognition and Behaviour. She is also a Guest Professor at the Department of Linguistics, Cognitive Science, and Semiotics, and the Interacting Minds Centre at Aarhus University, Denmark. Her research interests lie at the interface of psychology, philosophy, and theoretical computer science, with a focus on the theoretical foundations of computational explanations of cognition. Further information: https://irisvanrooijcogsci.com/