Rapid large-scale fractional differencing: make a time series stationary while minimizing memory loss, with a 6x-400x speed-up over the CPU implementation.
Happy to launch GFD: GPU-accelerated Fractional Differencing. Our single-GPU RAPIDS cuDF implementation delivers a substantial 6x-400x speed-up over a NumPy/pandas CPU implementation.
Feel free to play with the code on Google Colab, or run the entirely self-contained notebook on GCP, AWS, or your local machine.
Typically, we try to achieve stationarity by transforming a time series with common methods such as integer differencing. However, integer differencing removes more memory from the series than is necessary to achieve stationarity. An alternative, fractional differencing, achieves stationarity while preserving far more of the series' memory. Existing CPU-based implementations are too slow to run fractional differencing on many large-scale time series; our GPU-based implementation runs up to 400x faster on a single machine.
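To make the idea concrete, here is a minimal CPU sketch of fixed-window fractional differencing in NumPy/pandas (not the GFD GPU code itself; function names and the default window size are illustrative). The weights follow the standard recursion w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k, and each output value is a dot product of the most recent window of observations with those weights. Setting d = 1 recovers ordinary integer differencing, while fractional d in (0, 1) keeps longer-range memory.

```python
import numpy as np
import pandas as pd

def frac_diff_weights(d, num_weights):
    """Weights for fractional differencing of order d.

    Uses the recursion w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k.
    """
    w = [1.0]
    for k in range(1, num_weights):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff(series, d, window=10):
    """Fixed-window fractional differencing of a pandas Series.

    Each output point is the dot product of the trailing `window`
    observations with the (truncated) fractional-differencing weights.
    The first `window - 1` outputs are NaN (not enough history).
    """
    w = frac_diff_weights(d, window)[::-1]  # oldest observation first
    values = series.to_numpy(dtype=float)
    out = np.full(len(values), np.nan)
    for i in range(window - 1, len(values)):
        out[i] = values[i - window + 1 : i + 1] @ w
    return pd.Series(out, index=series.index)

# Sanity check: d = 1 with window 2 gives weights [1, -1],
# i.e. plain integer differencing.
s = pd.Series([1.0, 2.0, 4.0, 7.0, 11.0])
print(frac_diff(s, d=1, window=2).tolist())  # [nan, 1.0, 2.0, 3.0, 4.0]
```

The GPU version accelerates exactly this windowed dot product, which is embarrassingly parallel across output points; that is what makes the large speed-ups possible on cuDF.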