
Features: Faculty Insights

 

Machines can learn not only to make predictions, but also to handle causal relationships. New research by an international team, including researchers in the Department of Applied Mathematics and Theoretical Physics (DAMTP), suggests this could make medical treatments safer, more efficient, and more personalised.

Artificial intelligence techniques can be helpful for multiple medical applications, such as radiology or oncology, where the ability to recognise patterns in large volumes of data is vital. For these types of applications, the AI compares information against learned examples, draws conclusions, and makes extrapolations.


Now an international team, including researchers in DAMTP, is exploring the potential of a comparatively new branch of AI for diagnostics and therapy. The researchers found that causal machine learning (ML) can estimate treatment outcomes – and do so better than the machine learning methods generally used to date. Causal machine learning makes it easier for clinicians to personalise treatment strategies and so improve the health of individual patients.

The importance of 'why?'

The results, reported in the journal Nature Medicine, suggest how causal machine learning could improve the effectiveness and safety of a variety of medical treatments. Classical machine learning recognises patterns and discovers correlations. As a rule, however, the principle of cause and effect remains closed to machines: they cannot address the question of 'why?'. When clinicians make therapy decisions for a patient, that 'why' is vital to achieving the best outcomes.

"Developing machine learning tools to address why and what if questions is empowering for clinicians, because it can strengthen their decision-making processes," says senior author Professor Michaela van der Schaar, Director of the Cambridge Centre for AI in Medicine, hosted in DAMTP. "But this sort of machine learning is far more complex than assessing personalised risk."

For example, when making therapy decisions for someone at risk of developing diabetes, classical ML would aim to predict how probable it is for a given patient with a range of risk factors to develop the disease. With causal ML, it would be possible to estimate how that risk changes if the patient receives an anti-diabetes drug – that is, to gauge the effect of a cause. It would also be possible to estimate whether metformin, the commonly prescribed medication, would be the best treatment, or whether another treatment plan would be better.
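
To make that distinction concrete, here is a minimal sketch in Python of one of the simplest causal-ML recipes, a so-called T-learner: fit separate outcome models for treated and untreated patients, then contrast their predictions for a new patient. This is not the method from the Nature Medicine paper; the dataset, column names and risk factors below are hypothetical, and such naive estimates are only trustworthy under strong assumptions, such as all relevant confounders being measured.

```python
# Hypothetical illustration of classical vs causal ML for the diabetes example.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("patients.csv")       # hypothetical observational dataset
X = df[["age", "bmi", "hba1c"]]        # hypothetical risk factors
t = df["received_metformin"]           # 1 if treated, 0 otherwise
y = df["developed_diabetes"]           # observed outcome

# Classical ML: a single model predicting risk from the observed data.
risk_model = GradientBoostingClassifier().fit(X.assign(treated=t), y)

# Causal ML (T-learner flavour): fit separate outcome models for treated and
# untreated patients, then contrast their predictions to estimate how the
# risk would change if a given patient received the drug.
m_treated = GradientBoostingClassifier().fit(X[t == 1], y[t == 1])
m_control = GradientBoostingClassifier().fit(X[t == 0], y[t == 0])

new_patient = X.iloc[[0]]              # a patient we want to advise
effect = (m_treated.predict_proba(new_patient)[:, 1]
          - m_control.predict_proba(new_patient)[:, 1])
print(f"Estimated change in diabetes risk under metformin: {effect[0]:+.3f}")
```

The contrast between the two fitted models is what turns a risk prediction ('how likely is this patient to develop diabetes?') into a treatment-effect estimate ('how much does that risk change if the patient receives metformin?').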

To be able to estimate the effect of a hypothetical treatment, the AI models must learn to answer 'what if?' questions. "We give the machine rules for recognising the causal structure and correctly formalising the problem," explains Professor Stefan Feuerriegel from Ludwig-Maximilians-Universität München (LMU), who led the research. "Then the machine has to learn to recognise the effects of interventions and understand, so to speak, how real-life consequences are mirrored in the data that has been fed into the computers."
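
One standard way of formalising such 'what if?' questions – assumed here for illustration rather than taken from the paper – is the potential-outcomes framework: each patient has two hypothetical outcomes, one with and one without treatment, and the model estimates their expected difference for patients with characteristics x:

```latex
% Conditional average treatment effect (CATE):
% Y(1) and Y(0) are the patient's outcomes with and without treatment,
% X the patient's observed characteristics.
\tau(x) = \mathbb{E}\left[\, Y(1) - Y(0) \mid X = x \,\right]
```

Because only one of the two outcomes is ever observed for any given patient, the model cannot simply fit the recorded data; it has to learn how the effects of interventions are mirrored in it, which is the formalisation step Feuerriegel describes.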

Moving a step closer to practice

The researchers hope that machines could gauge potential treatment outcomes from available patient data and form hypotheses for possible treatment plans, even in situations where reliable treatment standards do not yet exist, or where randomised studies are not possible for ethical reasons because they always include a placebo group.

With such real-world data, it should generally be possible to characterise patient cohorts with ever greater precision, bringing individualised therapy decisions that much closer. Ensuring the reliability and robustness of the methods, however, remains a challenge.

"The software we need for causal ML methods in medicine doesn't exist out of the box," says Feuerriegel. "Rather, complex modelling of the respective problem is required, involving close collaboration between AI experts and doctors."

In other fields, such as marketing, explains Feuerriegel, the work with causal ML has already been in the testing phase for some years now. "Our goal is to bring the methods a step closer to practice," he says.

The paper, whose co-authors include DAMTP PhD student Alicia Curth, describes the direction in which things could move over the coming years. Van der Schaar is continuing to work closely with clinicians to validate these tools in diverse clinical settings, including transplantation, cancer and cardiovascular disease.

"I have worked in this area for almost 10 years, working relentlessly in our lab with generations of students to crack this problem," says van der Schaar, who is the John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine, and affiliated with the Departments of Applied Mathematics and Theoretical Physics, Engineering and Medicine. "It’s an extremely challenging area of machine learning, and seeing it come closer to clinical use, where it will empower clinicians and patients alike, is very satisfying."

 

This article was first published as a news story on the University of Cambridge website.