In my previous post, I created a fairly simple particle simulation where particles obey gravity, bouncing around in a box. This was a great way to get started with a non-trivial Go application. However, as the particles didn’t interact with each other, the potential of the simulation as a whole was limited. In this post, I look at building on this simulation with another where particles obey forces of repulsion (and attraction).
Creating the Forces of Repulsion
In the previous post, the formula for determining the velocity of a particle was fairly straightforward: leave the X and Z components constant and add a constant value to the Y component. This worked because, as a particle left the ‘floor’ of the cube, the Y component of its velocity was negative and so shrank towards zero with each step. Once the particle peaked, the Y component entered positive values and the particle accelerated back towards the floor.
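For reference, that update amounted to something like the following rough sketch; the field names and the gravity constant are assumptions rather than the original code:

```go
// Gravity only affects the Y component; the X and Z velocities stay constant.
// A positive gravity value pulls the particle back towards the floor.
p.Vel.Y += gravity
p.Pos.X += p.Vel.X
p.Pos.Y += p.Vel.Y
p.Pos.Z += p.Vel.Z
```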
The base formula chosen to measure the forces of repulsion between particles was Coulomb’s Law. The particles would mainly have the same charge, so the formula essentially boils down to force = 1 / (distance between particles squared).
The function to perform this calculation was implemented at the particle level, where it was given a slice of other particles:
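A minimal sketch of what such a function might look like, assuming a Particle type with position and velocity vectors; the type, field names, and unit charge are illustrative rather than the original code:

```go
import "math"

// Vector and Particle are assumed shapes for the simulation's data.
type Vector struct{ X, Y, Z float64 }

type Particle struct {
	Pos Vector
	Vel Vector
}

// applyRepulsion adjusts this particle's velocity using Coulomb-style
// repulsion (force = 1 / distance²) from each particle in others.
func (p *Particle) applyRepulsion(others []*Particle) {
	for _, o := range others {
		dx := p.Pos.X - o.Pos.X
		dy := p.Pos.Y - o.Pos.Y
		dz := p.Pos.Z - o.Pos.Z
		distSq := dx*dx + dy*dy + dz*dz
		if distSq == 0 {
			continue // the particle itself, or a coincident particle
		}
		force := 1.0 / distSq
		dist := math.Sqrt(distSq)
		// Push away from the other particle along the unit vector between them.
		p.Vel.X += force * dx / dist
		p.Vel.Y += force * dy / dist
		p.Vel.Z += force * dz / dist
	}
}
```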
Having to calculate the distance to potentially thousands of other points, many thousands of times, becomes quite an expensive operation. Therefore, finding a way to parallelise this becomes rather important.
Running in Parallel
The gravity approach in the previous post was easy to parallelise: as the particles didn’t interact with each other, each thread could simply be allocated its own set of particles. For attraction and repulsion, the positions of all the other particles determine the force acting upon each one. Although the previous approach would still work, it would mean each thread needs every particle’s position as well as knowing which particles it should update, which carries large overheads both computationally and in memory footprint.
As previously mentioned, the formula is essentially force = 1 / (distance between particles squared). If that were to be plotted on a chart, it would look like:
As can be seen, the line tends towards 0 with higher distance. Therefore, particles very far away would have a very minor (almost negligible) effect on the velocities. These particles can be discarded from a particular particle’s calculations.
The way I approached this was to divide the cube that bounds the particles into smaller “subcubes”. Each subcube represents all particles within a given volume of the larger cube. When calculating the forces acting upon each particle, only the particles within that subcube and a certain number of neighbouring subcubes were assessed. The below image may help:
Dividing the particles this way means that not all of the particles are passed to each thread, meaning an iteration is cheaper and faster.
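As an illustration of how a particle might be assigned to a subcube, here is a sketch assuming the bounding cube spans [0, cubeSize) on each axis and is split into n subcubes per axis; the names are mine, not taken from the original code:

```go
// subcubeIndex maps a particle's position to the (x, y, z) index of the
// subcube it currently sits in, clamping particles that touch the boundary.
func subcubeIndex(p *Particle, cubeSize float64, n int) (ix, iy, iz int) {
	cell := cubeSize / float64(n)
	clamp := func(i int) int {
		if i < 0 {
			return 0
		}
		if i >= n {
			return n - 1
		}
		return i
	}
	ix = clamp(int(p.Pos.X / cell))
	iy = clamp(int(p.Pos.Y / cell))
	iz = clamp(int(p.Pos.Z / cell))
	return ix, iy, iz
}
```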
It’s worth mentioning that this approach produces a simulation that is similar, but not identical, to comparing every particle against every other. The small forces from all of the ignored particles would still add up each iteration, so positions may drift increasingly far from the exact result as the simulation progresses.
During an iteration, the main thread sends the set of subcubes to each child thread (goroutine), then waits for the resulting positions returned by the goroutines. The goroutines are therefore relatively stateless and lightweight:
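A sketch of what such a worker might look like, assuming a job type that carries one subcube’s particles plus the particles in its neighbourhood (the names are illustrative, not the original code):

```go
// job is an assumed shape for the work sent to each goroutine.
type job struct {
	particles  []*Particle // the particles this worker should update
	neighbours []*Particle // particles in this subcube and its neighbours
}

// worker reads jobs until the channel is closed, sending updated copies of
// the particles back so the main goroutine can apply them.
func worker(jobs <-chan job, results chan<- []Particle) {
	for j := range jobs {
		updated := make([]Particle, 0, len(j.particles))
		for _, p := range j.particles {
			next := *p // work on a copy so neighbours always see the old positions
			next.applyRepulsion(j.neighbours)
			next.Pos.X += next.Vel.X
			next.Pos.Y += next.Vel.Y
			next.Pos.Z += next.Vel.Z
			updated = append(updated, next)
		}
		results <- updated
	}
}
```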
Managing the goroutines during an iteration is also straightforward:
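A sketch of a single iteration from the main goroutine’s side, under the same assumptions: one job is sent per subcube, and one result slice is collected per job.

```go
// runIteration pushes one job per subcube and gathers the updated particles.
func runIteration(work []job, jobs chan<- job, results <-chan []Particle) [][]Particle {
	// Send from a separate goroutine so the main goroutine can start draining
	// results immediately, avoiding a deadlock on unbuffered channels.
	go func() {
		for _, j := range work {
			jobs <- j
		}
	}()

	updated := make([][]Particle, 0, len(work))
	for range work {
		updated = append(updated, <-results)
	}
	return updated
}
```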
After all of the iterations have been completed, cleaning up the goroutines is as simple as closing the channels:
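With the worker loop sketched above, that teardown would be something along these lines:

```go
// Closing jobs ends each worker's range loop; once they have all returned,
// nothing else will send on results, so it can be closed as well.
close(jobs)
close(results)
```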
As particles may cross subcube boundaries during an iteration, the results from each thread needed to be fed back to the main thread, where some housekeeping was applied: saving the current state of the particles, then recalculating which particles belong in each subcube and the neighbourhood of relevant particles for each subcube. The code for subcube management is as follows:
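A sketch of what that housekeeping might look like, assuming the subcube grid is held as a map keyed by (x, y, z) index and reusing the subcubeIndex helper from earlier:

```go
// rebuildSubcubes regroups every particle into its current subcube.
func rebuildSubcubes(particles []*Particle, cubeSize float64, n int) map[[3]int][]*Particle {
	grid := make(map[[3]int][]*Particle)
	for _, p := range particles {
		ix, iy, iz := subcubeIndex(p, cubeSize, n)
		key := [3]int{ix, iy, iz}
		grid[key] = append(grid[key], p)
	}
	return grid
}

// neighbourhood collects the particles of a subcube and its immediate
// neighbours; this is the slice each worker compares against. Keys outside
// the grid are simply absent from the map and contribute nothing.
func neighbourhood(grid map[[3]int][]*Particle, ix, iy, iz int) []*Particle {
	var ps []*Particle
	for dx := -1; dx <= 1; dx++ {
		for dy := -1; dy <= 1; dy++ {
			for dz := -1; dz <= 1; dz++ {
				key := [3]int{ix + dx, iy + dy, iz + dz}
				ps = append(ps, grid[key]...)
			}
		}
	}
	return ps
}
```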
The Results
A large number of particles were needed to create a visualisation and to showcase the performance improvements. As the radius calculations were the most expensive part of each iteration, increasing the number of subcubes improved performance up to a certain point; beyond that, the cost of maintaining the subcubes exceeded the time spent on the position calculations.
As a result of the above optimisations, the following video was created. In it, 10,000 particles were simulated repelling each other over 500 iterations, with 40 x 40 x 40 subcubes (64,000 total). The total calculation time was 4 minutes – 50 frames of the video took approximately 24 seconds to produce. On a single-threaded implementation, 50 frames took multiple minutes with just 5000 particles.
Creating the Visualisations
The Go programme itself was rather simple in that it didn’t produce any visualisations. Instead, it created a CSV file per unit time of the simulation. These CSV files had one particle per row, with the X, Y, and Z co-ordinates being the row data.
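A sketch of that export using the standard library’s encoding/csv package; the file naming scheme is an assumption:

```go
import (
	"encoding/csv"
	"fmt"
	"os"
	"strconv"
)

// writeFrame writes one CSV file for a single unit of simulation time, with
// one row per particle containing its X, Y, and Z co-ordinates.
func writeFrame(frame int, particles []*Particle) error {
	f, err := os.Create(fmt.Sprintf("frame_%04d.csv", frame))
	if err != nil {
		return err
	}
	defer f.Close()

	w := csv.NewWriter(f)
	for _, p := range particles {
		row := []string{
			strconv.FormatFloat(p.Pos.X, 'f', -1, 64),
			strconv.FormatFloat(p.Pos.Y, 'f', -1, 64),
			strconv.FormatFloat(p.Pos.Z, 'f', -1, 64),
		}
		if err := w.Write(row); err != nil {
			return err
		}
	}
	w.Flush()
	return w.Error()
}
```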
The series of CSV files was fed into ParaView using the Table To Points filter, and then exported to a video file.
The programme could be expanded to perform the visualisation, perhaps using OpenGL bindings.
Wrapping Up
All in all, I was very impressed with the results the simulation produced. Initially, you can see patterns in the particles’ locations due to the subcube method, though increasing the radius of neighbouring subcubes considered would improve this. Performance improved markedly as a result of the parallelisation approach. The approach is one that fits rather well in this case: short-range repulsion, where the force tails off rapidly with distance. Using other formulae for simulations may require some changes to the approach.
As ever, the code for the simulation is available on GitHub.