“Too many things”
… is what I mostly answer when people ask me what I’m doing in terms of research. Here you can get an idea of my recent projects. Enjoy!

Attention: still updating this page …

Machine learning applied to biomedical questions

Predicting the oligogenic origins of diseases

We essentially look for the links as opposed to the nodes in the network of genes and their variants.

This research line was initiated more than a decade ago and is still ongoing (a little historical sketch is shown on the left). We ask which combination of variants explains a disease phenotype, since identifying a single, very rare and highly impactful variant is often not possible (due to, e.g., the heterogeneous nature of diseases, low penetrance, etc.).

To achieve this ambition, new resources that aggregate all published data on oligogenic combinations (DIDA, OLIDA, BOCK) were needed. These resources then allowed us to develop the first variant combination pathogenicity prediction methods (VarCoPP 1.0 and 2.0) and the first analysis platform, ORVAL, which can be freely used to explore patient VCFs. In addition, a new oligogenic prioritisation approach called Hop was developed, as well as a high-quality white-box method called ARBOCK that identifies gene pairs potentially related to an oligogenic disease, using the knowledge graph BOCK. All these resources are also provided as ELIXIR services and via bio.tools.

Together, these methods provide a unique set of tools to get a handle on the oligogenic origins of diseases, especially rare diseases. We are therefore now validating our work in collaboration with the 101 genomes foundation and the team of Maris Laan in Estonia. Exciting results are coming soon. If you are interested, we are still looking for new validation opportunities.

Theoretical foundations of multi-agent systems

Delegating decision-making to algorithms or AI

What is the effect of delegating choices in strategic situations to algorithms or an AI system? In recent years we have published a series of experimental and modelling papers on this topic.

In a first study, we experimentally analysed whether groups composed of artificial delegates (either predefined or self-configured) selected by human participants are more successful at deciding how to act in the collective risk dilemma. The answer appears to be yes, as the AI or algorithm acts as a commitment device, but we also observed more inequality in the gains between participants. In a follow-up experiment, we investigated the problem further, introducing differences in the available choices as well as a second stage of the same game, allowing participants to revise the “AI settings” or “program”. The results show that people who delegate are more likely to contribute to a public good and to correct previous group failure by increasing their contributions when confronted with a new instance of the same game. However, precision errors limit the success of delegating groups.

An evolutionary game theory model aims to explain some of these results by considering how and when participants make mistakes, i.e. when they act themselves, or when they code their agent in the wrong way or select an agent that does not match their intentions. The model reveals that it may be better to delegate and commit to a somewhat flawed strategy, perfectly executed by an autonomous agent, than to commit execution errors directly.
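The intuition behind this result can be illustrated with a toy Monte Carlo sketch. Everything below (player numbers, thresholds, error rates) is my own illustrative choice, not the published model: six players repeatedly decide whether to contribute towards a collective target, and missing the target puts everyone's remaining endowment at risk.

```python
import random

# Toy collective risk dilemma (illustrative parameters, NOT the published model).
# act_error:  chance a player flips their intended action in a round
#             (models human execution errors).
# code_error: chance a player's delegate was mis-programmed to always defect
#             (a mistake made when configuring the agent); the delegate then
#             executes that flawed strategy without any further errors.

def run_group(rng, n_players=6, rounds=4, endowment=40, contribution=2,
              threshold=40, risk=0.9, act_error=0.0, code_error=0.0):
    """Simulate one group and return its average payoff."""
    miscoded = [rng.random() < code_error for _ in range(n_players)]
    contrib = [0.0] * n_players
    for _ in range(rounds):
        for i in range(n_players):
            intends = not miscoded[i]                  # intention: contribute
            acts = intends != (rng.random() < act_error)
            if acts:
                contrib[i] += contribution
    success = sum(contrib) >= threshold
    payoffs = []
    for c in contrib:
        keep = endowment - c
        if not success and rng.random() < risk:
            keep = 0.0                                 # collective disaster
        payoffs.append(keep)
    return sum(payoffs) / n_players

def average_payoff(n_groups=2000, seed=1, **kwargs):
    rng = random.Random(seed)
    return sum(run_group(rng, **kwargs) for _ in range(n_groups)) / n_groups

# Humans acting themselves: correct strategy, noisy execution.
humans = average_payoff(act_error=0.15)
# Delegates: occasionally mis-programmed, but flawless execution.
delegates = average_payoff(code_error=0.10)
print(f"humans: {humans:.1f}  delegates: {delegates:.1f}")
```

With these (hypothetical) parameters, groups of slightly mis-programmed but error-free delegates reach the collective target more often than noisy humans do, mirroring the qualitative message of the model.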

Theoretical foundations of multi-agent systems

Evolution of cognitive mechanisms

Theory of Mind (ToM) is considered an asset for autonomous agents: having the capacity to infer the beliefs and intentions of others is often assumed to lead to better solutions and more advanced intelligence, and is thus seen as a necessity for AGI.

While the explicit implementation of ToM in agents for solving specific tasks has been studied intermittently, it is not well understood which conditions encourage agents to acquire and prefer ToM, nor what other effects it has on the agents' behaviour.

Using evolutionary game-theoretical models in which agents' strategies may or may not incorporate ToM, or rely on different reasoning mechanisms, we have investigated the conditions for the emergence of ToM as well as the preference for certain biased reasoning mechanisms.
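To give a flavour of the method (this is a generic textbook sketch, not the published ToM model), the core of such analyses is replicator dynamics: a strategy's share in the population grows when its fitness exceeds the population average. The payoff matrix below is hypothetical.

```python
# Generic replicator-dynamics sketch for a 2-strategy game
# (illustration of the method only, not the published ToM model).

def replicator_step(x, M, dt=0.01):
    """One Euler step of dx/dt = x * (f_A - phi), where x is the share of
    strategy A, f_A its fitness, and phi the population mean fitness."""
    fA = M[0][0] * x + M[0][1] * (1 - x)   # fitness of strategy A
    fB = M[1][0] * x + M[1][1] * (1 - x)   # fitness of strategy B
    phi = x * fA + (1 - x) * fB            # population mean fitness
    return x + dt * x * (fA - phi)

# Hypothetical payoffs where A (say, a ToM-using strategy) does better
# against itself than B does:
M = [[3.0, 1.0],
     [2.0, 2.0]]
x = 0.6                                    # initial share of strategy A
for _ in range(5000):
    x = replicator_step(x, M)
print(f"long-run share of strategy A: {x:.2f}")
```

Starting above the unstable interior equilibrium, strategy A takes over the population; the published models track analogous dynamics for ToM and biased-reasoning strategies.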

Interestingly, the model's observations align with findings in psychology research on the optimism bias, and with evolutionary research on the adaptiveness of human biases and the importance of self-deception and reality denial for our species.

Foundations of collective intelligence

When 1+1 can be 3

How do we best aggregate the suggestions made by experts when trying to arrive at a good decision? Algorithms for collective decision-making with expert advice have mostly tried to find the best expert in the group and then use that expertise for the decision. Yet truly collective intelligence algorithms should go beyond the best expert in the group. In the last 5 years, we have proposed a series of algorithms based on contextual bandit theory that achieve this goal.
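Our own algorithms are not reproduced here, but the classic baseline they build on, Exp4 for bandits with expert advice (Auer et al.), illustrates the starting point: instead of committing to one expert, the learner follows a weighted mixture of all experts' recommendations and reweights them from bandit feedback. The toy experts and rewards below are hypothetical.

```python
import math
import random

# Minimal Exp4 sketch (the classic bandits-with-expert-advice baseline,
# not one of our published methods).

def exp4(expert_advice, reward_fn, n_rounds, n_arms, gamma=0.1, seed=0):
    """expert_advice(t) -> list of probability vectors over arms, one per expert.
       reward_fn(t, arm) -> reward in [0, 1] for pulling `arm` at round t."""
    rng = random.Random(seed)
    n_experts = len(expert_advice(0))
    w = [1.0] * n_experts
    total = 0.0
    for t in range(n_rounds):
        advice = expert_advice(t)
        wsum = sum(w)
        w = [wi / wsum for wi in w]            # normalise for numerical safety
        # Mixture of the experts' recommendations, plus uniform exploration.
        p = [(1 - gamma) * sum(w[e] * advice[e][a] for e in range(n_experts))
             + gamma / n_arms
             for a in range(n_arms)]
        arm = rng.choices(range(n_arms), weights=p)[0]
        r = reward_fn(t, arm)
        total += r
        xhat = r / p[arm]                      # importance-weighted reward
        for e in range(n_experts):
            yhat = advice[e][arm] * xhat       # expert e's estimated reward
            w[e] *= math.exp(gamma * yhat / n_arms)
    return total

# Toy check with two hypothetical experts: expert 1 recommends the best arm,
# and Exp4 learns to follow it.
advice = lambda t: [[1.0, 0.0, 0.0],   # expert 0: always arm 0
                    [0.0, 0.0, 1.0]]   # expert 1: always arm 2
reward = lambda t, arm: (0.2, 0.3, 0.9)[arm]
total = exp4(advice, reward, n_rounds=2000, n_arms=3)
print(f"cumulative reward: {total:.0f} (max possible 1800)")
```

Exp4 only guarantees performance comparable to the best expert in hindsight; the point of our work is precisely to go beyond that baseline.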