Animating water isn’t easy.
Firstly, it’s a liquid, so it moves in many directions at once. Secondly, it’s mostly transparent, so it must look clear (yet not entirely so), and it refracts light. Thirdly, it has a reflective surface which is constantly moving.
Because so many different elements need to be skilfully executed, and because a high-quality simulation demands significant computational power, animating water is a difficult and time-consuming challenge.
Recently, however, researchers at the University of Washington have used deep learning to develop a system which animates the water in a waterfall from just one still picture.
Using thousands of videos of fluid motion, including seas, waterfalls and rivers, the process teaches a neural network to predict and animate how moving water would appear from a single photograph. The network is shown only the first frame of each video and guesses the motion from it.
From the image’s context clues, it learned what the motion was supposed to look like. Then, by comparing its output with the actual video, the network gradually learned what to expect from different kinds of flowing material.
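The training idea can be sketched very roughly in code. The snippet below (PyTorch; the network architecture, names and loss are illustrative assumptions, not the researchers’ actual model) shows a network that sees only a clip’s first frame, predicts a per-pixel motion field, and is corrected against the motion measured from the full video.

```python
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    """Toy stand-in for a motion-estimation network (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),   # 2 channels: (dx, dy) per pixel
        )

    def forward(self, frame):
        return self.net(frame)                # predicted per-pixel motion

model = MotionPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def training_step(first_frame, true_flow):
    """first_frame: (B, 3, H, W) image tensor; true_flow: (B, 2, H, W)
    motion measured from the rest of the clip (e.g. via optical flow)."""
    predicted_flow = model(first_frame)
    loss = loss_fn(predicted_flow, true_flow)  # compare guess with reality
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```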
And the same method – which uses a seamlessly looping short video to give the impression of continuous movement – can be used to animate clouds, smoke, or any other material that flows.
The advantage of the method is that it needs no extra information or user input: given only the photograph, it predicts how the scene was moving at the moment the picture was taken. This enables the system to determine the movement of each pixel and create the animation.
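A small sketch of that idea in NumPy (illustrative only, with made-up function names): once a single static per-pixel motion field has been predicted, every future frame follows from repeatedly stepping each pixel along that field.

```python
import numpy as np

def advect_positions(positions, flow, steps):
    """positions: (H, W, 2) float pixel coordinates (x, y);
    flow: (H, W, 2) predicted per-pixel displacement per frame."""
    h, w = flow.shape[:2]
    pos = positions.astype(float).copy()
    for _ in range(steps):
        # Look up the flow at each pixel's current (rounded) location
        # and move the pixel by that amount.
        xi = np.clip(pos[..., 0].round().astype(int), 0, w - 1)
        yi = np.clip(pos[..., 1].round().astype(int), 0, h - 1)
        pos += flow[yi, xi]
    return pos

# Usage: start every pixel at its own coordinates, then step forward.
# ys, xs = np.mgrid[0:H, 0:W]
# start = np.stack([xs, ys], axis=-1)
# positions_at_t = advect_positions(start, flow, steps=t)
```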
The team used the term ‘symmetric splatting’ for moving each pixel according to its predicted motion both into the future and into the past. Plain ‘splatting’, which the researchers tried first, only pushed pixels forward, so as they travelled down the waterfall they disappeared from the top of the image.
By predicting both the past and the future of the image, the two warps could be combined into a single looping animation which left no pixel gaps and allowed the endless movement of the animated image.
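Very roughly, the combination works like the sketch below (NumPy, heavily simplified: the published method weights and blends splatted pixels far more carefully, whereas here collisions simply overwrite and the blend is a plain linear mix). Each output frame mixes a forward warp of the photo with a backward warp, so regions the forward warp has vacated are covered by the backward one, and the first and last frames both match the original photo, which keeps the loop seamless.

```python
import numpy as np

def splat(image, flow, t):
    """Forward-splat a float image by t steps of the per-pixel flow.
    Simplified: colliding pixels overwrite, vacated pixels stay zero."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    xd = np.clip((xs + t * flow[..., 0]).round().astype(int), 0, w - 1)
    yd = np.clip((ys + t * flow[..., 1]).round().astype(int), 0, h - 1)
    out[yd, xd] = image[ys, xs]
    return out

def symmetric_frame(image, flow, t, n_frames):
    """Blend a warp t steps into the future with a warp
    (n_frames - t) steps into the past of the same photo."""
    forward = splat(image, flow, t)
    backward = splat(image, -flow, n_frames - t)
    alpha = 1.0 - t / n_frames   # favour the forward warp early in the loop
    return alpha * forward + (1.0 - alpha) * backward
```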
Because the process still has difficulty predicting the distorted appearance of objects under water, or the reflections on a moving surface, it works best for scenes with predictable fluid motion. Nevertheless, it appears to produce a more realistic representation of moving water than many other software tools.