ARTIFICIAL intelligence could one day be trusted to diagnose breast cancer in the UK - but more research is needed before it can be approved.

This is the verdict of the Swindon-based professional body for the IT industry, which argued that while AI may become an important method of breast screening in the future, there is not yet enough good evidence that it works.

BCS, The Chartered Institute for IT, gave a formal response to a consultation by the UK National Screening Committee (NSC), supporting the committee’s proposal that the use of AI for image analysis in breast cancer screening should not be endorsed in the UK at present.

Dr Phillip Scott, chair of the BCS health and care executive, said there was a significant risk of overdiagnosis if computer programs were used for the process now.

He added: “The NSC review agrees with other recent studies that, while AI methods are promising, there is not yet enough scientific evidence to justify adoption in a cancer screening programme.

“Unfortunately, there is so much hype about AI that some people treat it like magic. Most AI in healthcare is early stage and not shown to work clinically. If you look at the scientific reviews, the experiments done with AI diagnostic tools are simply not good enough. Many studies are at risk of bias from selective use of patient data.

“If AI were adopted now in the screening of breast cancer, there would be a significant risk of overdiagnosis, with all the anxiety that would cause.

"We need to educate and inform the public to maintain trust, and that includes being honest about the immaturity of most AI tools.”

“BCS is keen to support the process in the future, once it is clear that sufficient scientifically robust research has been done to show that AI breast screening will be safe for all who use it. AI has the potential to be of huge benefit or of huge harm to society, and the design, development and adoption of AI systems must be regulated against clear standards to ensure we get the very best out of them.

"Clinical leaders and IT professionals working in digital healthcare will increasingly need to show evidence of commitment to ethics, competence and transparency to build public trust in the algorithms and AI used to make high stakes medical decisions."