Can artificial intelligence discover new laws of physics? Possibly. An article in Technology Review suggests that data from a swinging pendulum experiment allowed a neural network to discover some of the laws of motion. More generally, the idea is that if we give AI systems lots of data about a physical system, or from experiments, they will discover the relationships and regularities within that data, and do so much more quickly than humans. But recent research also highlights the perils of trying to generate theories from experimental data. Some mathematicians and scientists gave a machine learning algorithm data from experiments with falling bodies. The algorithm didn’t do so well: an AI system working from measurement data, they concluded, would “yield an Aristotelian theory of gravitation”. The reasons why have important lessons for the role of AI in science.
A brief history of throwing things off buildings
Aristotle believed that heavier objects fall faster than lighter ones, in direct proportion to their weight. This is wrong; the time it takes an object to fall is independent of its mass. The equation for the time taken t for an object to fall a distance d is t=√(2d/g), where g is the acceleration due to gravity.
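The formula is easy to check numerically. A minimal sketch, assuming SI units, g ≈ 9.81 m/s², and no air resistance:

```python
import math

def fall_time(d, g=9.81):
    """Time (in seconds) for an object to fall a distance d (in metres),
    ignoring air resistance. Note that mass does not appear anywhere."""
    return math.sqrt(2 * d / g)

# A drop of 35 m takes about 2.67 s, whatever the object weighs.
print(f"{fall_time(35):.2f} s")  # prints "2.67 s"
```

Doubling the weight changes nothing in this equation; doubling the distance multiplies the time only by √2.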
In the popular version, it was Galileo (1564–1642) who discovered that Aristotle was wrong. In the version I was taught, it had never crossed anyone’s mind to check what Aristotle said, until Galileo chucked differently sized cannon balls off the (conveniently) leaning tower of Pisa. This is also wrong; people had been throwing objects off buildings and watching how they fell for a long time. John Philoponus rejected Aristotle’s theory of falling bodies in the 6th century, concluding that the weight of an object made little difference to how quickly it fell. Simon Stevin and Jan Cornets de Groot also threw objects off the church tower in Delft in 1586. It is even disputed whether Galileo threw anything off the tower at Pisa at all; so many people had already done it for him.
People knew that Aristotle was wrong, and experiments suggested as much. However, generating a new theory, and getting to the equations describing falling bodies, is extraordinarily difficult, even when we have data. The credit for this, mostly, goes to Galileo.
Knowing what data to exclude
Imagine that you’re a 6th-century person with an interest in falling bodies. You worry that Aristotle is wrong and you’re keen to discover what’s really going on. What do you do? Find the nearest tall building and throw things off it. But which things? It seems so obvious to us that the things we throw off should be similar in shape and differ in their weight. But why? Don’t we want a theory that explains how cannon balls, and feathers, and tunics, and shoes fall? It’s only obvious if we think weight is the important factor.
We do know how objects other than cannon balls fall. Feathers and tunics fall more slowly than balls of the same weight because of their greater air resistance. Air resistance is an interfering factor that messes up our equations. But it takes knowledge of the underlying equations to know this. If we included all the data from throwing all sorts of things off buildings, we would probably end up with laws of physics that looked rather different. These laws would also be much more local — they wouldn’t apply on the moon (where there is no air resistance). Scientific laws are therefore not about accommodating all the data, they are often about having the foresight (and insight) to decide what data to leave out. Feathers are weird and complicated; just drop cannon balls.
Of course, just dropping things from a great height doesn’t give you enough information to work out how bodies fall. Galileo also rolled balls down inclined planes. This has many advantages — it is easier to measure the descent time, it reduces the effect of air resistance, and it doesn’t involve lugging things up towers. Most of us remember performing these experiments at school. The thing I remember most about being presented with sloping wood, a stopwatch and a selection of marbles was that it didn’t seem the same thing at all as objects falling from the school roof. I may have been particularly dense, but it does take insight to see inclined planes as a special case of falling bodies more generally, and to see that data gathered from inclined-plane experiments is useful for modelling things falling off buildings precisely because it minimises one of the main interfering factors — air resistance.
Nevertheless, however many times you perform experiments with inclined planes, you never get the laws that govern falling bodies exactly; you get something that approximates them. You don’t eliminate friction entirely, and the timing is never perfect. There is no reason to suppose that Galileo’s measurements were any better than a talented schoolchild’s, yet from this error-riddled data he developed the laws we all learn today. This is often the case in science. The laws are neat and precise — the experimental data is messy. Moving beyond the data is often quite a leap.
AI and scientific discovery
The implications of this digression for theory development by AI should be clear. We do not necessarily discover the truth if we put all the data we have into an algorithm. Data that might not seem relevant can be, and data that seems relevant might need excluding; nor should we aim for a perfect fit to the data.
The scientists mentioned in the introduction replicated falling-body experiments and gave the resulting data to an AI system. They dropped 11 balls from the Alex Fraser Bridge in Vancouver in 2013. The bridge is about 35m high, and the balls were a golf ball, a baseball, two wiffle balls with elongated holes, two wiffle balls with circular holes, two basketballs, a bowling ball and a volleyball. The fall times were measured with an iPad that recorded the drops. The aim was to replicate a situation in which an experimenter is unsure what relationship holds between the balls and their time to reach the ground. The authors note that a scientist must usually perform poorly controlled experiments in order to work out which factors need to be controlled for. The experiments we learn about are usually ones that have been refined over decades, or even centuries. When scientists have little idea what is going on, their experiments are imprecise, messy, and a little confused.
The equations developed by the AI system overfitted the data (modelled the given values too closely, treating noise as if it were signal) and included a spurious height-dependent force. Constant gravitational acceleration isn’t immediately obvious from the data because of all the interfering factors.
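The general problem is easy to reproduce in miniature. The following toy sketch is not the authors’ actual system — it simply simulates noisy drop timings and compares a physically motivated one-parameter model against a maximally flexible one that fits every data point exactly:

```python
import math
import random

random.seed(0)
g = 9.81  # m/s^2, the "true" value the noisy data is generated from

# Simulated drops: 12 heights between 5 m and 35 m, with Gaussian
# timing noise standing in for imprecise hand-timed measurements.
heights = [5 + 30 * i / 11 for i in range(12)]
measured = [math.sqrt(2 * d / g) + random.gauss(0, 0.05) for d in heights]

# Physically motivated one-parameter model: t^2 = (2/g) * d.
# Least-squares slope through the origin, from which we recover g.
slope = sum(d * t * t for d, t in zip(heights, measured)) / sum(d * d for d in heights)
g_fit = 2 / slope

# Overfitted alternative: the degree-11 polynomial that passes through
# every noisy data point exactly (Lagrange interpolation).
def interpolate(x, xs, ys):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Extrapolate both models to a 100 m drop, outside the measured range.
d_new = 100.0
t_true = math.sqrt(2 * d_new / g)
t_physics = math.sqrt(slope * d_new)
t_poly = interpolate(d_new, heights, measured)

print(f"recovered g:          {g_fit:.2f} m/s^2")
print(f"true time at 100 m:   {t_true:.2f} s")
print(f"one-parameter model:  {t_physics:.2f} s")
print(f"interpolating poly:   {t_poly:.2f} s")
```

The interpolating polynomial has zero error on the training data — it fits the noise perfectly — yet its prediction for the 100 m drop is wildly wrong, while the simple model, which fits the training data worse, extrapolates well. Fitting the data as closely as possible is precisely what you should not do.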
Science is difficult. Working out which experiments to run is difficult in and of itself. The move from towers to inclined planes is obvious only because we’re so familiar with it. Once we run a good experiment, an “intuitive leap” is often required to get from the data to a law. The authors believe that it would have been exceptionally difficult for Galileo to have made his insight from experimental data alone.
How did Galileo do it?
So how did Galileo get there? The focus so far has been on experiments, but this is overly simplistic. Galileo had reasons for believing that Aristotle was wrong, without throwing anything off anywhere. This was because Galileo thought that Aristotle’s theory implied a contradiction. Consider the following thought experiment. According to Aristotle, if you drop a heavy and a light weight from a tower the heavy weight will fall faster. What will happen if you attach the two weights together? The combined weight should fall faster, because it’s heavier, but the lighter weight should slow down the heavier weight because it falls more slowly. We can’t have the combined weight falling both faster and slower than the heavy weight alone. There is something wrong with Aristotle!
Importantly, this conclusion does not come from experimentation, although experiments may follow. If you take this contradiction seriously, then you are likely to think that the heavy and light objects might fall at the same speed. And if this is what you’re thinking, you’re more inclined to take similarly sized objects with different weights up your tower, rather than indiscriminately chucking your belongings off. The data from your experiment is likely to help you confirm your suspicions, even if your data is inaccurate. The intuitive leap can come from anywhere, but in this case it came from shortcomings in the original theory.
A problem with Aristotle’s theory led to an alternative possibility, which suggested an experiment, which suggested more refined experiments, which generated a lot of messy data. These laws of physics did not emerge from analysis of data alone; they emerged from careful thought about what was missing from previous theories, decisions about what data to collect, and about how messy the data could reasonably be.
Why all the fuss about laws?
What’s all the fuss about laws though? Algorithms can often make accurate predictions, even when they don’t tell us what’s going on — do we need any more than that? As noted above, laws fitted only to Earth-bound data can fail elsewhere: if you hadn’t modelled falling bodies quite right, you might have been in for a shock when astronaut David Scott dropped a hammer and a feather on the moon. And we do seem to want to get things ‘right’ — to describe the world as it really is.
The problem with predictive accuracy alone is that we stay in the dark about which factors are merely limiting factors and which are genuinely explanatory. As soon as the limiting factors change, or other limiting factors emerge, the predictions become less accurate. This isn’t always a worry; it depends on what we’re predicting, and for what purpose. If I accurately predict what you’ll watch on TV on a Thursday night based on the time you get home and whether you went to work, it doesn’t matter if the real determinant is the recommendations you receive from the person you sit next to on the train. There is nothing at stake if my model fails.
AI will undoubtedly discover some interesting and useful relationships. It will probably lead to some genuinely fruitful scientific insights. However, experiments with falling bodies illustrate that the process of moving from experimental data to laws is not as simple as it seems. Just like incompetent students with stopwatches and marbles, AI systems will need some guidance.
◊ ◊ ◊
https://www.technologyreview.com/2018/08/03/2435/who-needs-copernicus-if-you-have-machine-learning/. For the opposing view, see https://nautil.us/issue/78/atmospheres/are-neural-networks-about-to-reinvent-physics
James Robert Brown. The Laboratory of the Mind: Thought Experiments in the Natural Sciences. 2010.
◊ ◊ ◊
Catherine Greene is a Research Associate at the Centre for Philosophy of Natural and Social Science at the London School of Economics. Her research interests are the philosophy of finance and social science. Before studying for a PhD she had a career in finance, and she still consults on ethics and investment strategy. More information is available at www.catherinegreene.co.uk
Cover image by Max Fischer from Pexels.