How machines are adapting to solve chemical problems
We are in an age of machines that learn how to solve complex problems. Yet while headlines are made when computers beat humans at Go or outperform them in speed dating, more powerful computers and larger data sets mean machine learning already affects us in far more subtle ways. Algorithms used by online giants such as Google, Netflix and Amazon shape what we buy, the films we watch and how the global economy operates. In chemistry, the same concept can help us develop new drugs, materials and processes. So what is machine learning, and how does it work?
What is machine learning?
Most computer programs (software) take a rigidly defined set of information and return a result based on specified rules. This works well in cases where the number of options is relatively small and where we already know an effective way of choosing the correct answer. For example, there are well-documented mathematical procedures (algorithms) for finding the square root of a number using simple arithmetic. We can write out one of these algorithms in a language our computer understands to make a piece of software that calculates a square root.
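For instance, a sketch in Python of one such algorithm, the Babylonian method (repeatedly averaging a guess with the number divided by that guess), might look like this:

```python
def square_root(n, tolerance=1e-10):
    """Approximate the square root of n with the Babylonian method:
    start with a guess and repeatedly average it with n / guess."""
    guess = n / 2 if n > 1 else 1.0
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2
    return guess

print(square_root(2))   # about 1.41421356
print(square_root(81))  # about 9.0
```

Here the programmer has spelled out every step of the procedure in advance; the computer does exactly what it is told, nothing more.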
In machine learning, the computer isn’t explicitly told how to solve the problem at hand: it is simply given lots of ‘training data’, containing examples (numbers) and corresponding answers (square roots). It is then expected to learn a general method for predicting the correct answer for any new input of the same kind.
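By contrast, a minimal sketch of the machine-learning version (assuming the scikit-learn library) contains no square-root rule at all; the program is only shown example pairs and asked to generalise:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Training data: example numbers and their corresponding answers (square roots)
numbers = np.arange(1, 101).reshape(-1, 1)   # the examples: 1, 2, ..., 100
roots = np.sqrt(numbers).ravel()             # the answers

# The model is never told the square-root rule, only shown the examples
model = KNeighborsRegressor(n_neighbors=3)
model.fit(numbers, roots)

# Ask for the square root of a number it has never seen exactly
print(model.predict([[42.5]]))   # roughly 6.5 (the true answer is about 6.52)
```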
Is machine learning the same as artificial intelligence?
Not really. Machine learning is sometimes thought of as a branch of artificial intelligence (AI), but things are more complicated than that. Early machine learning research was done as part of the AI research community; in the 1990s, the two communities separated. These days, many AI systems make heavy use of machine learning methods, so in a sense the two fields are drawing closer together again. That being said, there are still systems that could be considered AI but that are not designed to learn. For example, autopilots in passenger planes make complex decisions about how to keep a plane steady given the available data, but for safety reasons individual autopilots must not try to learn better ways of flying.
Why is this useful?
Calculating square roots is pretty easy to teach, but machine learning allows computers to learn how to solve problems when it is not easy to describe the correct procedure – such as answering the question ‘are there any people in this photograph?’ As computer science becomes more advanced, computers are able to learn more complex tasks and outperform humans at picking the best output from a large set of likely candidates.
For example, in drug discovery computers are used for high-throughput screening to identify possible research candidates. Let’s say we want to find a new drug that binds to a specific protein. Previously, we would need to screen thousands of molecules to do so – an expensive and time-consuming process. Instead, we can create a computer program to predict the binding affinity of different molecules, cutting down the number of potential drug candidates that have to be tested.
First, we would collect some data about molecules we already know bind to this protein. Then, we would ask the computer to look at the things we can readily find out about each molecule (such as the number of rotatable bonds and the number of electron-donating groups). Finally, we would tell the computer that the answer we want is the binding affinity. Using the training data, the computer would then work out how those readily measured properties relate to binding affinity.
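A rough sketch of that workflow in Python (again assuming the scikit-learn library, and using entirely made-up descriptor values and affinities for illustration) might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: for each molecule already known to bind,
# a few readily calculated descriptors (here: rotatable bonds,
# electron-donating groups, molecular weight) -- values are invented
descriptors = np.array([
    [3, 2, 310.4],
    [5, 1, 275.9],
    [2, 4, 402.1],
    [7, 0, 350.7],
    [4, 3, 298.3],
])
# ...and the measured binding affinity of each one (also invented)
affinities = np.array([7.2, 5.9, 8.1, 4.8, 6.7])

# Learn a relationship between the descriptors and the affinity
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(descriptors, affinities)

# Predict the affinity of a new, untested molecule from its descriptors alone
new_molecule = np.array([[4, 2, 320.0]])
print(model.predict(new_molecule))
```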
This creates another problem: we have to give the computer all of this information in a form it can understand.
How do we talk to computers?
Unfortunately, we can’t use language – machine learning requires its information to be given as lists of numbers, called vectors. This sounds limiting, but in fact we can create a vector for almost anything. For example, to represent a black and white photograph as a vector we would simply imagine it as a grid of tiny squares (pixels) and number each square. We could then tell the computer ‘this pixel is 90% (or 0.9) dark, this pixel is 78% (0.78) dark…’ and so on. This would continue until the entire image is described as a vector, which can then be compared with other vectors in the training data.
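In Python, that flattening step is a one-liner; here is a sketch using a tiny hypothetical 2×2 ‘photograph’ of darkness values:

```python
import numpy as np

# A tiny 2 x 2 black-and-white 'photograph': each entry is how dark
# that pixel is, from 0.0 (white) to 1.0 (black)
image = np.array([
    [0.90, 0.78],
    [0.12, 0.05],
])

# Flatten the grid of pixels into a single list of numbers: a vector
vector = image.flatten()
print(vector)   # [0.9  0.78 0.12 0.05]
```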
For any problem, we can imagine an algorithm that always gives you the correct answer when you start with a particular vector. This is called the ‘underlying function’. In machine learning, the computer is told to use ‘function approximation’ – trying to find a calculation that gives approximately the same result as the underlying function. By checking its guesses against the training data and refining them, its results become more and more accurate. This is how Amazon learns to recommend products based on your specific interests.
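A toy sketch of function approximation (not Amazon’s actual system) is fitting a straight line to noisy data, nudging the parameters a little at a time so that the predictions drift towards the underlying function:

```python
import numpy as np

# The 'underlying function' the learner never sees directly: y = 3x + 2
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 3 * x + 2 + rng.normal(0, 0.5, 50)   # training data with a little noise

# Start with a poor guess for the parameters and refine it: each step
# nudges w and b in the direction that reduces the prediction error
w, b = 0.0, 0.0
learning_rate = 0.01
for step in range(2000):
    error = (w * x + b) - y
    w -= learning_rate * (2 * error * x).mean()
    b -= learning_rate * (2 * error).mean()

print(w, b)   # close to 3 and 2: the approximation has homed in on the underlying function
```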
So, how long until the machines take over?
Machines can already outperform humans in tasks that involve identifying patterns and drawing conclusions – even in tasks that seem very human indeed, such as winning quiz shows and lip reading. For anyone who has seen The Terminator this is all rather unsettling, but don’t panic just yet. Currently, we are only able to train machines to convert inputs from a highly specific domain (English language text, photographs of hotels) into a small range of possible outputs. In other words, a machine that can walk into a bar and demand your clothes, your boots and your motorcycle is a long way off; and even if we did create it, that’s all it would be able to do.