Artificial intelligence (AI) can be trained to detect whether or not a tissue image contains a tumour. Until recently, however, it has remained a mystery how such an AI reaches its decision. A team from the Research Center for Protein Diagnostics (PRODI) at Ruhr-Universität Bochum is working on a new approach that makes an AI's judgement explainable and hence trustworthy. The researchers, led by Professor Axel Mosig, describe the approach in the journal Medical Image Analysis.

For the study, bioinformatics scientist Axel Mosig cooperated with Professor Andrea Tannapfel, head of the Institute of Pathology, oncologist Professor Anke Reinacher-Schick from the Ruhr-Universität's St. Josef Hospital, and biophysicist and PRODI founding director Professor Klaus Gerwert. The group developed a neural network, i.e. an AI, that can classify whether or not a tissue sample contains a tumour. To this end, they fed the AI a large number of microscopic tissue images, some of which contained tumours, while others were tumour-free.

"Neural networks are initially a black box: it's unclear which identifying features a network learns from the training data," explains Axel Mosig. Unlike human experts, they lack the ability to explain their decisions. "However, for medical applications in particular, it's important that the AI is capable of explanation and thus trustworthy," adds bioinformatics scientist David Schuhmacher, who collaborated on the study.

The Bochum team's explainable AI is therefore based on the only kind of statements science considers meaningful: falsifiable hypotheses. If a hypothesis is false, this must be demonstrable by experiment. Artificial intelligence usually follows the principle of inductive reasoning: from concrete observations, i.e. the training data, the AI builds a general model on the basis of which it evaluates all further observations.
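The inductive principle described above can be sketched in a few lines of code. The following toy classifier is an illustrative assumption, not the study's actual network; the synthetic data and every name in it are hypothetical. It fits a general model to concrete training observations and then applies that model to unseen samples:

```python
# Toy sketch of inductive learning (an illustrative assumption, not the
# PRODI team's actual network): a single logistic unit is fitted to
# synthetic 4-"pixel" images and then classifies unseen samples.
import math
import random

random.seed(0)

def make_sample(has_tumour):
    # Hypothetical data: "tumour" images simply have higher mean intensity.
    base = 0.7 if has_tumour else 0.3
    return [base + random.uniform(-0.1, 0.1) for _ in range(4)], int(has_tumour)

train = [make_sample(i % 2 == 0) for i in range(200)]

# Fit a general model (weights w, bias b) to the concrete observations
# via stochastic gradient descent on the logistic log-loss.
w, b, lr = [0.0] * 4, 0.0, 0.5
for _ in range(100):
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y  # gradient of the log-loss with respect to z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(x):
    # The learned general model now evaluates any further observation.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return int(z > 0)

test = [make_sample(i % 2 == 0) for i in range(50)]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
```

Exactly which features such a model latches onto is not visible from the weights alone, which is the black-box problem the article describes.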
The underlying problem was described by the philosopher David Hume some 250 years ago and is easily illustrated: no matter how many white swans we observe, we can never conclude from these observations that all swans are white and that no black swans exist. Science therefore makes use of so-called deductive logic, in which a general hypothesis is the starting point. For example, the hypothesis that all swans are white is falsified as soon as a black swan is spotted.

Source: ANI
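The swan example above can likewise be sketched as code. This is a hypothetical illustration of the falsifiability idea the article invokes, not anything from the study itself: no number of white swans verifies the universal hypothesis, but a single black swan falsifies it.

```python
# Toy sketch (hypothetical illustration) of falsification: a universal
# hypothesis survives only as long as no observation contradicts it.

def is_falsified(hypothesis, observations):
    # Deductive check: True if any observation contradicts the hypothesis.
    return any(not hypothesis(o) for o in observations)

all_swans_are_white = lambda swan: swan == "white"

# Inductive data: every swan seen so far is white. This never proves
# the hypothesis; it merely fails to falsify it.
sightings = ["white"] * 10_000
print(is_falsified(all_swans_are_white, sightings))  # prints False

# One black swan suffices to falsify the general hypothesis.
sightings.append("black")
print(is_falsified(all_swans_are_white, sightings))  # prints True
```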