This post is about solving a problem that has always been at the back of my mind: how can I speed up my simulations when I have large amounts of file I/O to perform? In many cases I have to write simulation data at 60 FPS for rendering purposes, and it can take many seconds to compress and write out a large set of data while the simulation step itself only takes a second or two.
In this post I will show a simple snippet of code that builds upon an earlier post about using gzwrite to compress data.
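For reference, a write like the one in that earlier post boils down to opening the output file with gzopen and streaming the raw bytes through gzwrite, which compresses them on the fly. The sketch below is a minimal version of that idea; the function name and the use of a float buffer are my own assumptions, not the earlier post's exact code.

```cpp
#include <zlib.h>
#include <vector>

// Minimal gz-compressed write: open a .gz file and stream raw bytes through
// gzwrite, which compresses them as they are written.
bool write_frame_gz(const char *path, const std::vector<float> &data)
{
    gzFile out = gzopen(path, "wb");
    if (!out) return false;

    int written = gzwrite(out, data.data(),
                          static_cast<unsigned>(data.size() * sizeof(float)));
    gzclose(out);
    return written > 0;
}
```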
The key idea is this: once a simulation step has completed, synchronously copy its data into a secondary array, then let the simulation continue while a separate thread compresses and writes the copied data. If another simulation data file needs to be written, another thread is spawned, and threads keep being spawned as needed until the maximum thread-pool size is reached, at which point they are all joined.
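Here is a minimal sketch of that spawn-then-join pattern using std::thread. The names (MAX_WRITER_THREADS, queue_frame_write) and the float buffer are illustrative assumptions rather than the exact implementation; the compression itself is the same gzopen/gzwrite/gzclose sequence as above.

```cpp
#include <zlib.h>
#include <cstddef>
#include <string>
#include <thread>
#include <vector>

static const std::size_t MAX_WRITER_THREADS = 4;  // assumed pool size
static std::vector<std::thread> writers;          // writer threads currently in flight

// Called after each simulation step whose data needs to be saved.
void queue_frame_write(const std::string &path, const std::vector<float> &sim_data)
{
    // 1. Synchronous copy: snapshot the state so the simulation can keep
    //    mutating its own arrays while the write happens in the background.
    std::vector<float> snapshot(sim_data);

    // 2. Spawn a thread that compresses and writes the snapshot to disk.
    writers.emplace_back([path, snapshot = std::move(snapshot)]() {
        gzFile out = gzopen(path.c_str(), "wb");
        if (!out) return;
        gzwrite(out, snapshot.data(),
                static_cast<unsigned>(snapshot.size() * sizeof(float)));
        gzclose(out);
    });

    // 3. Once the pool is full, join every writer before spawning more.
    if (writers.size() >= MAX_WRITER_THREADS) {
        for (std::thread &t : writers) t.join();
        writers.clear();
    }
}
```

The copy on the main thread is what keeps things simple: the worker owns its snapshot outright, so no locking is needed between the simulation and the writers.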
Thanks to Andrew Seidl for his help with figuring this out!