Projects

A collection of my research and development work

Counterfactual Causal Inference in Natural Language

Gaël Gendron, Jože Rožanec, Michael Witbrock, Gillian Dobbie

We build the first causal extraction and counterfactual causal inference system for natural language, and propose a new direction for model oversight and strategic foresight.

Large Language Models · Causal Extraction · Causal Inference · Counterfactual Reasoning · Natural Language Processing

Independent Causal Language Models

Gaël Gendron, Bao Trung Nguyen, Alex Peng, Michael Witbrock, Gillian Dobbie

We develop a novel modular language model architecture separating inference into independent causal modules, and show that it can be used to improve abstract reasoning performance and robustness in out-of-distribution settings.

Large Language Models · Abstract Reasoning · Independent Causal Mechanisms · Out-of-distribution Generalization

Behaviour Modelling of Social Agents

Gaël Gendron, Yang Chen, Mitchell Rogers, Yiping Liu, Mihailo Azhar, Shahrokh Heidari, David Arturo Soriano Valdez, Kobe Knowles, Padriac O'Leary, Simon Eyre, Michael Witbrock, Gillian Dobbie, Jiamou Liu, Patrice Delmas

We model the behaviour of interacting social agents (e.g. meerkats) using a combination of causal inference and graph neural networks, and demonstrate increased efficiency and interpretability compared to existing architectures.

Graph Neural Networks · Causal Structure Discovery · Agent-Based Modelling

Evaluation of LLMs on Abstract Reasoning

Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie

We evaluate the performance of large language models on abstract reasoning tasks and show that they fail to adapt to unseen reasoning chains, highlighting a lack of generalization and robustness.

Large Language Models · Abstract Reasoning · Evaluation · Out-of-distribution Generalization

Disentanglement via Causal Interventions on a Quantized Latent Space

Gaël Gendron, Michael Witbrock, Gillian Dobbie

We propose a new approach to disentanglement based on hard causal interventions over a quantized latent space, and demonstrate its potential for improving the interpretability and robustness of generative models.

Variational Autoencoders · Vector Quantization · Causality · Disentanglement