<!DOCTYPE html>
<html>
<head>
<title> How to train your computer </title>
</head>
<body>
<h1> Today, this is one of the ways we talk to computers. </h1>
<p> The syntax of coding is so familiar that many of us use it in everyday written language without even thinking. </p>
<p> But the rise of machine learning computers, which use input data to learn how to complete tasks without explicit programming, will require us to interact differently with machines. </p>
<p> Because if they are learning, we will need to know how to train them. </p>
</body> </html>
Words: Sooraj Shah / Illustrations: Matt Murphy
And that is a change that is coming soon, as Murray Shanahan, Professor of Cognitive Robotics, acknowledges. “At the moment, I guess most Imperial people – staff, students and alumni – use coding or will have had to code at some point in their everyday lives,” he says. “But in the future, you will need to know how to train computers on relevant data. It may mean thinking in a different way – more in terms of data and statistics rather than algorithms.”
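To make the contrast concrete, here is a minimal sketch in Python (using scikit-learn purely as an illustration; neither the library nor the spam example comes from Professor Shanahan) of the difference between coding a rule and training a model on data.

# A hand-coded rule: the programmer writes the decision logic explicitly.
def is_spam_rule(message: str) -> bool:
    return "win a prize" in message.lower()

# A trained model: the programmer supplies labelled examples and the
# learning algorithm infers the decision logic from the data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["Win a prize now", "Meeting moved to 3pm",
            "You have won a free prize", "Lecture notes attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy data, for illustration only)

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)                  # training rather than coding the rule
print(model.predict(["Claim your prize"]))   # the model generalises from its data

The thinking shifts exactly as Shanahan describes: the interesting decisions are about which examples to show the model, not which branches to write.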
Healthcare is one of the most exciting areas of machine-learning advances. For Professor of Visual Information Daniel Rueckert, whose team focuses on extracting information from imaging data to help clinicians make decisions about diagnoses of individuals, the reality of training, rather than coding, computers is already here.
“We are using machine learning developed in our group to detect very subtle changes in brain images to try to enable a more accurate diagnosis for patients who have dementia,” he says. “The changes are often too challenging for the human eye to pick up, but we are starting to explore how to use our relationship with new technology routinely in the NHS.”
Professor Rueckert’s team, led by Dr Ben Glocker, is also applying deep-learning techniques to award-winning work on brain lesion segmentation, to more accurately detect brain damage from traumatic brain injuries such as those caused by car accidents. Their system provides automated, image-based assessments of traumatic brain injuries at speeds other systems can’t match. “Doctors need to see what’s happening to the organs and brain, and are making decisions based on what’s in emergency room images,” says Glocker. “What we’re doing with computing technology is helping doctors make better informed decisions.”
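In very reduced form, segmentation means labelling every pixel (or voxel) of a scan as lesion or healthy tissue. The toy PyTorch network below is assumed purely to show the shape of that idea; it is not the group’s published system.

import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Toy fully convolutional network: one lesion-probability value per pixel."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, kernel_size=1)   # per-pixel lesion score

    def forward(self, scan):                          # scan: (batch, 1, H, W)
        return torch.sigmoid(self.head(self.features(scan)))

model = TinySegmenter()
scan = torch.randn(1, 1, 64, 64)                      # stand-in for one image slice
mask = model(scan)                                    # probabilities between 0 and 1
print(mask.shape)                                     # torch.Size([1, 1, 64, 64])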
“We are replacing image analysis techniques with more general machine learning and artificial intelligence (AI) approaches, which attempt to mimic what a human might do when they look at images,” says Professor Rueckert. “That means that the data becomes more and more important because the data is what we use to train the model, and the model effectively represents our knowledge base. As a result, I think the next generation of data scientists is going to become more AI focused, becoming very good at training, using the right data to solve specific problems, and building powerful models that can be used in healthcare and other domains.”
As some level of machine learning becomes implicit in how all computers work – in the same way coding is today – academics, researchers and technologists across the world are turning their attention to what it means for those of us for whom the computer is the key tool. Are we going to have to change the way we think? If so, how? Are our brains changing already?
“It is something that people will need to learn, and it may not require low-level programming skills but you would need to know a little bit about what goes on under the bonnet,” says Professor Shanahan, suggesting that some coding knowledge will still be useful. “The expertise that this person would need is in getting machines trained with relevant datasets and plugging in various parts of hardware in order to do this.”
Interrogating the black box
As to computers that train themselves, or which obviate the need for the human altogether, Professor Rueckert is sceptical, arguing that regardless of the state of the technology, machines that work with humans are simply more useful than ones that don’t – and machines that don’t work with humans may not be usable at all. “I think self-programming computers are very far-fetched. The machine can’t make a decision on its own. That’s why we call our technology computer-aided diagnosis, not computer controlled diagnosis. The computer is there to aid the human, not replace them.”
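In software terms, “aid, not replace” can be as simple as returning a suggestion with its confidence and flagging uncertain cases for closer review. The sketch below assumes a hypothetical model with a scikit-learn-style predict_proba method and an illustrative threshold, not a clinical standard.

REVIEW_THRESHOLD = 0.9   # illustrative value only, not a clinical standard

def computer_aided_diagnosis(model, scan):
    """Return a suggestion for the clinician; the machine never decides alone."""
    probability = model.predict_proba([scan])[0][1]    # P(condition present)
    suggestion = "likely" if probability >= 0.5 else "unlikely"
    uncertain = 1 - REVIEW_THRESHOLD < probability < REVIEW_THRESHOLD
    return {
        "suggestion": suggestion,        # shown to the clinician, never acted on alone
        "confidence": probability,
        "flag_for_review": uncertain,    # borderline cases are highlighted, not decided
    }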
This approach can throw up some interesting challenges for those training – and coding – the machines, as Professor Rueckert observes. “The biggest challenge for us working with clinicians is ensuring they do not feel the technology is like a black box – something they can’t interrogate. The machine might not get it completely right or it might give a different answer to the clinician,” he says. “In that situation with a colleague, the clinician would be asking: ‘Why do you think this is the right answer for this particular diagnosis?’ But it is quite challenging to extract from a computer reasoning that can then be explained to a human observer.”
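One generic way to open the box a little is permutation importance: shuffle each input in turn and see how much the model’s accuracy suffers, which reveals what it was actually relying on. The scikit-learn sketch below uses a public toy dataset as a stand-in for clinical data; real imaging models need richer, image-specific explanations, so treat this only as the flavour of the idea – a machine whose reasoning can at least be interrogated.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A public toy dataset stands in for clinical data here.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the bigger the drop, the more the model was relying on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")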
It makes sense to Professor Guang-Zhong Yang, Director and Co-founder of the Hamlyn Centre for Robotic Surgery, which was established in 2008 to develop advanced surgical robots. “The systems we develop tend to be what we call the ‘robot assistant solution’, focusing on tasks that require the kind of precision that can’t be done freehand or which will lighten the surgeon’s cognitive load, leaving them free to deal with unexpected events and high-level planning,” he says.
“Could this be the end of ‘computer says no’? It may well depend on the trainer”
Sooraj Shah
In practice, this means matching a surgeon’s experience and understanding with a machine’s precision, to execute surgical tasks that require both a high level of dexterity and the ability to respond instantly. “So rather than saying that the robot will run autonomously, we are developing robots that can understand the human better. We call the concept ‘perceptual docking’, where the machine will understand human perceptual capabilities, and will in fact rely on human cognition, with the two working together.”
Adapted to humanity
Of course, the advances are not confined to the world of medicine, and the question of how machine learning can positively affect human outcomes has an impact on everyone, including the commercial world. Charlie Muirhead (Computing 1996), founder of CognitionX – a networking platform for the world’s leading AI innovators – says that machines that can ‘learn’ are already on the market, and that these will commoditise quickly, meaning affordable, off-the-shelf solutions will soon be readily available. “The challenge will be having the right datasets to train it up in order to solve a problem, so that aspect will be less about coding up an algorithm itself and more about the data.” And the future is already here, he says. “You can already sign up to a cloud service and get online access to state-of-the-art machine learning at a relatively low cost.”
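In practice, “signing up to a cloud service” usually means sending data to a hosted model over HTTP and reading back a prediction. Everything provider-specific below – the endpoint, the key, the response fields – is hypothetical, standing in for whichever service you choose.

import requests

# Hypothetical endpoint and key: substitute your chosen provider's own.
ENDPOINT = "https://api.example-ml-cloud.com/v1/models/image-classifier:predict"
API_KEY = "YOUR_API_KEY"

def classify(image_bytes: bytes) -> dict:
    """Send an image to a hosted model and return the provider's prediction."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": image_bytes},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()   # e.g. {"label": ..., "confidence": ...} (assumed schema)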
“Everything is becoming more – not less – involved with the human,” says Professor Yang. “We are working with reinforcement learning, so the system learns from positive and negative reinforcement, from which the internal reasoning process starts to evolve.” That closeness to the human will become ever more essential if AI is to be really useful. “The ethical and legal challenges of an autonomous robot are huge,” he says. “If, as a patient, I am treated by a robot and something goes wrong, how do you deal with the legal process? Who is responsible? The future of coding will become a lot more sophisticated and closer to the way we reason and communicate and also how we learn, but the human mind will always be there.”
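Stripped to its bones, the reinforcement learning Professor Yang describes is a loop of trial, feedback and adjustment. The tabular Q-learning sketch below runs on a made-up five-cell corridor task, invented here only to show the shape of that loop.

import random

# Toy task: a corridor of five cells; start in cell 0, the goal is cell 4.
# Reaching the goal earns positive reinforcement (+1); stepping off the left
# end earns negative reinforcement (-1). The Q-table is the system's evolving
# internal estimate of which action to take in each cell.
N_STATES, ACTIONS = 5, ("left", "right")
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

def step(state, action):
    if action == "right":
        nxt = state + 1
        return (nxt, 1.0, True) if nxt == N_STATES - 1 else (nxt, 0.0, False)
    nxt = state - 1
    return (0, -1.0, True) if nxt < 0 else (nxt, 0.0, False)

for _ in range(500):                      # episodes of trial, error and feedback
    state, done = 0, False
    while not done:
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})  # learned policy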
The increased interaction with computers has empowered people, and AI is likely to continue that trend. But the downside is overreliance, which has encouraged a measure of laziness: people no longer learn their car journeys, relying instead on mapping applications to get them to their destination. That does not bode well for a future in which technology becomes even more sophisticated.
And maybe that is why many experts believe that the need to train computers represents an opportunity to make them not so much more human, but more adapted to supporting humans than ever before. “We may be offloading a lot of our cognitive effort to machines in the future and become somewhat dependent on them to make increasingly complex decisions on our behalf,” Professor Shanahan says.
Self-programming computers that require no human input are still far from becoming a reality – they’re more likely to be seen in a sci-fi film in the years to come. But this means that there will still be a huge need for people who can code, as the artificial intelligence techniques get increasingly complicated, and for a new breed of engineers who are interested in training machines. Could this be the end of ‘computer says no’? It may well depend on the trainer.