AI Can Now Simulate the Entire Universe
Artificial intelligence and physics are two fields of research that seemingly do not sit well with each other. Physics is a never-ending pursuit of accuracy, exactness and precision. Current machine learning models, on the other hand, are built on vast matrices of floating-point numbers combined through matrix multiplication. Because those values are stored in binary inside the computer's memory, most of them can only be represented approximately; exact representation would be prohibitively expensive, so a small amount of imprecision is accepted by design.
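To see this imprecision concretely, here is a minimal sketch in plain Python (not tied to any particular ML framework): binary floating point cannot represent most decimal fractions exactly, so even trivial arithmetic carries rounding error.

```python
# 0.1, 0.2 and 0.3 have no exact binary representation, so the sum
# of the stored approximations is not exactly 0.3.
total = 0.1 + 0.2
print(total)          # 0.30000000000000004
print(total == 0.3)   # False

# The decimal module can reveal the exact binary value actually stored
# for the literal 0.1 — close to, but not equal to, one tenth.
from decimal import Decimal
print(Decimal(0.1))
```

This is why numerical code compares floats with a tolerance rather than with `==`; at the scale of the billions of multiply-accumulate operations in a neural network, these tiny errors are unavoidable.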
For these reasons, machine learning and physics were rarely combined. Nonetheless, a group of astrophysicists at the Flatiron Institute in New York set out to break the mould and bridge the gap between the two fields where few others had tried.
The astrophysicists used machine learning techniques to accelerate computationally expensive calculations about the universe. They didn't expect much from the endeavour, but the results suggested that physics and machine learning fit together far better than anyone had assumed.
Shirley Ho, the leader of the project, confirmed that the simulations were indeed faster, but what shocked the researchers was the scale: in some scenarios the model ran roughly 10,000 times faster than traditional simulations. The surprises didn't end there; Ho also reported that the new model was more accurate than the fast approximation methods it was benchmarked against.
The research project, dubbed D3M (standing for Deep Density Displacement Model), held another great surprise: because deep learning techniques loosely emulate the way a human brain generalises, D3M can extrapolate. To the untrained eye this may not seem like a big deal, but it is an exceptional achievement. Let me elaborate: if someone showed you a demonstration of an apple dropping to the floor and then asked you to describe how an orange falls, the answer would be fairly easy, because you already have a comparable data point to refer to. But suppose the demonstrator then asked how two planets collide, or how a single atom falls to the ground; unless you are well versed in physics, your answer will likely be wildly inaccurate. There are effects swaying those calculations that you could never know about from the single data point of a falling apple.
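The falling-apple analogy can be sketched numerically. Below is a toy example (hypothetical data, not D3M itself): fit a straight line to free-fall distances measured over a narrow time window, then ask the fitted model about times far outside that window. The true law is quadratic, so the naive fit interpolates well but extrapolates badly.

```python
# True free-fall law: d = 0.5 * g * t**2 (quadratic in t).
g = 9.81
train_t = [1.0, 1.1, 1.2, 1.3]                 # narrow "apple drop" regime
train_d = [0.5 * g * t**2 for t in train_t]

# Ordinary least-squares fit of d ≈ a*t + b, computed by hand.
n = len(train_t)
mean_t = sum(train_t) / n
mean_d = sum(train_d) / n
a = (sum((t - mean_t) * (d - mean_d) for t, d in zip(train_t, train_d))
     / sum((t - mean_t) ** 2 for t in train_t))
b = mean_d - a * mean_t

def predict(t):
    return a * t + b

# Interpolation near the training window is nearly perfect...
print(round(predict(1.15), 2), round(0.5 * g * 1.15**2, 2))   # ≈ 6.55 vs 6.49
# ...but extrapolating to t = 10 s misses the true value badly.
print(round(predict(10.0), 1), round(0.5 * g * 10.0**2, 1))   # ≈ 106.4 vs 490.5
```

A model that only interpolates behaves like this linear fit; what made D3M remarkable is that it kept its accuracy outside the regime it was trained on.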
The mind-blowing thing D3M did was accurately describe universes whose physical constants differ from our own. It was somehow able to map what it had learned under our universe's constants onto universes with different constants, both accurately and efficiently. Not only would it be impossible for a human to eyeball something so different with high accuracy; the model was accurate enough to compete with far slower models that rely on more traditional methods.
Shirley Ho, astounded by the accuracy, compared it to training image recognition software on cats and dogs and having it accurately recognise a completely different and unrelated animal: an elephant.
Such leaps in efficiency are essential to advancing the field. The questions physicists pose in research are of such magnitude that answering them requires vast numbers of difficult calculations, all performed precisely. Answering these questions quickly, and with a lower barrier to entry, could help reinvigorate physics, a field that has seen comparatively few large discoveries since the Cold War.
Because D3M can accurately perform calculations with physical constants that differ from those of our own universe, it is more than just another simulation model: researchers could readily use it in experiments that revolve around varying those constants.
It is also advantageous to the field of machine learning since it draws more talent into the domain of research in the form of physicists searching for increasingly accurate and demanding simulations, while also highlighting the versatility of AI and its vast ocean of never-ending possibilities. An ocean that is truly worth sailing.
In conclusion, this advancement highlights how machine learning can be utilised in fields where AI initially seems out of place. Even the pioneers of AI could hardly have foreseen that models built on statistical pattern-matching would achieve any resemblance of accuracy, let alone genuine extrapolation.