Simple Algorithms vs. A.I. And What They Mean For Medicine 

Source: https://medicalfuturist.com/

Artificial intelligence (A.I.), machine learning (ML) and algorithms: if you are a regular at The Medical Futurist, you’ve come across these terms more than once, whether in our articles, videos or e-books. Indeed, our latest e-book about investment in digital health has a dedicated section about A.I.

Even The Medical Futurist Institute’s latest peer-reviewed study was published as a guide for medical professionals about the technology. But it’s not just us who are fascinated by this sci-fi-esque technology: the healthcare A.I. market is booming, as is life science research around it.

As A.I. becomes quasi-omnipresent in medicine, the hype factor comes into play. Given the interest around the technology, companies might throw the term around left and right, claiming their solution uses A.I. when in fact it only relies on a spreadsheet with some macros. All of this wordplay serves to entice investors, gain more attention and play the marketing game.

As such, when coming across a claim that a tool is A.I.-based, it becomes crucial to take a step back and ask whether the technology in question really relies on A.I. – or just on a simple algorithm.

Knowing where to draw the line between an algorithm and an A.I. will help each and every one of us better address the relevant legal, ethical and social implications, especially when it comes to medicine.

We aim to clear up the confusion around this significant issue with this article. To that end, we also turned to Márton Görög, Data Scientist at the Center for Molecular Fingerprinting, to help us draw that theoretical – but much-needed – line.

Knowing the terms

Even though the terms A.I. and algorithm have picked up steam in recent years, they aren’t new concepts altogether. In fact, ‘algorithm’ owes its roots to the 9th-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī, whose name was Latinised as Algoritmi. The current, broad meaning of the word, as defined by Merriam-Webster, refers to “a step-by-step procedure for solving a problem or accomplishing some end.”

Such instructions are part of what makes an A.I., but they do not constitute one by themselves. Think of an algorithm as giving a robot a recipe to make a pancake. The robot will follow that recipe, make that pancake and stop once the task has been completed. But following such a simple algorithm does not mean that the robot possesses artificial intelligence.
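
To make the recipe analogy concrete, here is a minimal, purely illustrative Python sketch of such a plain algorithm – the steps, quantities and function names are ours, not taken from any real product. The program simply executes a hard-coded procedure and stops:

```python
# A purely illustrative sketch of a plain algorithm: a fixed "recipe"
# executed step by step, with no data involved and no learning.

def mix(ingredients):
    # Combine the ingredients into batter (stand-in for the real work).
    return f"batter({ingredients})"

def fry(batter, minutes):
    # Cook the batter for a fixed, hard-coded amount of time.
    return f"pancake made from {batter}, fried for {minutes} min"

def make_pancake():
    # The recipe itself is written entirely by the programmer.
    ingredients = {"flour_g": 125, "milk_ml": 250, "eggs": 2}
    batter = mix(ingredients)            # step 1: combine ingredients
    pancake = fry(batter, minutes=2)     # step 2: cook for a fixed time
    return pancake                       # step 3: stop once the task is done

print(make_pancake())
```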

This latter term was coined by computer scientist John McCarthy in 1956, during a conference with fellow researchers held at Dartmouth College in Hanover, New Hampshire. Merriam-Webster defines A.I. as “the capability of a machine to imitate intelligent human behavior”.

However, when talking about A.I. nowadays, we most often focus on ‘machine learning’, which is one of A.I.’s subcategories, Márton Görög points out. ML is itself defined as “the process by which a computer is able to improve its own performance (as in analysing image files) by continuously incorporating new data into an existing statistical model.” 

With our pancake recipe example, an ML-based robot fed with enough data about pancake recipes will still make a pancake. But rather than simply following the recipe, it will learn it. The robot will thereafter make that pancake with the mix of ingredients – from specific brands – that it found to work best in the dataset, even if you haven’t explicitly told it to do so.
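
Contrast that with a minimal ML sketch, assuming Python with scikit-learn; the pancake dataset, features and taste scores below are invented purely for illustration. The “recipe” is no longer written out by the programmer but inferred from example data, and the trained model can then rate an ingredient mix it was never explicitly given:

```python
# An illustrative ML sketch: the model learns from (invented) example data
# which ingredient mixes tend to score well, instead of following fixed rules.
from sklearn.linear_model import LinearRegression

# Hypothetical training data: [flour_g, milk_ml, eggs] -> taste score (0-10)
X = [[100, 200, 1], [125, 250, 2], [150, 300, 2], [125, 200, 3]]
y = [6.0, 8.5, 7.0, 7.5]

model = LinearRegression()
model.fit(X, y)  # the "learning" step: parameters come from data, not the programmer

# Ask the trained model to rate a new, unseen ingredient mix.
candidate = [[130, 240, 2]]
print(f"Predicted taste score: {model.predict(candidate)[0]:.2f}")
```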

Drawing the line between a regular algorithm and an A.I.

With those definitions of algorithm and A.I./ML, their differences become clearer. In short, a regular algorithm simply performs a task as instructed, while a true A.I. is coded to learn to perform a task. Márton Görög also points out that ‘learn’ is a very important word for defining an ML-based algorithm. But equally important to him is to add ‘data’, as a regular algorithm doesn’t need data at all to be created. He follows up by defining an ML algorithm as one programmed to “learn to perform a task using training data.”

“In this contemporary sense, the main difference is that a regular algorithm is created fully by a software engineer, implementing the known way of solving the problem as machine-readable commands,” Görög elaborates. “While after the preparation of an ML model comes the training itself, which is driven by the training data, often without any human interaction.”

With an ML model, it’s not the engineer’s input that shapes the algorithm’s decisions but rather the data it is fed. It is the data that drives and improves the model, without direct human commands.
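
One way to picture this – as a hedged illustration with invented data, not any particular vendor’s implementation – is that the very same training code produces different decisions when fed different datasets:

```python
# Illustrative sketch: identical model code trained on two different (invented)
# datasets ends up making different decisions - the data shapes the model,
# not extra instructions from the engineer.
from sklearn.tree import DecisionTreeClassifier

def train_and_predict(X_train, y_train, X_new):
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_train, y_train)     # the same code in both cases
    return model.predict(X_new)

# Two hypothetical datasets with the same inputs but opposite labels.
dataset_a = ([[1, 1], [2, 2], [8, 8], [9, 9]], [0, 0, 1, 1])
dataset_b = ([[1, 1], [2, 2], [8, 8], [9, 9]], [1, 1, 0, 0])

sample = [[8, 7]]
print(train_and_predict(*dataset_a, sample))  # -> [1]
print(train_and_predict(*dataset_b, sample))  # -> [0]
```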

The importance of this distinction in medicine

With the definitions and explanations in place, separating true A.I. from regular algorithms might seem like a clear-cut affair. But in practice, companies selling their products might hide behind the terminology without properly describing how their solution actually works.

This is what Dr. Meskó and his team noticed while creating and updating the first database of FDA-approved A.I.-based algorithms. Several companies submitting their FDA-approved devices or software only describe their tools as A.I.-based on their websites, without further explanation of why they credit them as such. Needless to say, these same companies omit describing their solutions as A.I.-based altogether in their official submissions for FDA approval.

The need for transparency around algorithms involved in healthcare thus becomes paramount, for multiple reasons. If one purchases a consumer or clinical product thinking that the device possesses A.I. capabilities but it does not deliver the expected results, who is to blame? Is it the fault of the company for not clearly describing the underlying technology? Or is the customer at fault for a lack of due diligence on their part?

Image: A.I. bias (source: www.geneticliteracyproject.org)

It also becomes an ethical and social dilemma when patients are involved. A company selling a product, or a medical team using one, must be able to explain why ML algorithms keep making different decisions – after all, such algorithms learn from the datasets they are fed. But patients might be oblivious to the reasoning and might feel like test subjects. Moreover, the intrinsic bias in datasets inevitably influences the A.I.’s decisions.

Márton Görög has also noticed concerns in safety-critical industries about so-called “black box” A.I. models, whose safe operation is hard to prove. “Human-created algorithms are easier to analyse, validate and trust,” he told The Medical Futurist. “With ML-based solutions, the responsibility of the vendor needs to cover the size, quality and diversity of the training data as well.”

Becoming true A.I. seekers

Ideally, we would rely on the transparency and goodwill of companies developing algorithms and A.I. solutions to be truthful about their underlying methods. But the reality is that we have to take such claims with a grain of salt. This doesn’t mean that software using only a simple algorithm isn’t useful; on the contrary, it can handle repetitive tasks so that humans don’t have to. But knowing whether a piece of software really uses A.I. will allow us to better understand how it functions and what to expect of it.

When we brought this up with Márton Görög, he agreed that it’s hard to see what’s under the hood, since both are essentially software. But he has some tips to help make the distinction. “If it’s assumed that the company owns a data-generating simulator or a rich dataset – regardless of whether it was collected or bought – they might really be building an ML model,” he explains. “On the other hand: without data there can’t be training. The task itself can help as well: state-of-the-art results with image/video recognition, speech synthesis, speech recognition and text translation can be achieved only with ML methods.”

Becoming “true A.I. seekers” will become increasingly important in the coming years. This holds true whether you are a medical doctor, a digital health enthusiast or a patient. The extra legwork might not be totally enticing, but it will ultimately save us from unwelcome surprises.
