
Rapid large-scale fractional differencing to minimize memory loss while making a time series stationary, with a 6x-400x speed-up over the CPU implementation.

Happy to launch GFD: GPU-accelerated Fractional Differencing. The single-GPU RAPIDS cuDF implementation delivers a substantial 6x-400x speed-up over the NumPy/pandas CPU implementation.

Feel free to play with the code on Google Colab, or run it on GCP/AWS or your local machine; the notebook is entirely self-contained.

Summary

Typically, we attempt to achieve some form of stationarity by transforming a time series, most commonly through integer differencing. However, integer differencing removes more memory from the series than is necessary to achieve stationarity. An alternative, fractional differencing, achieves stationarity while preserving far more of the series' memory than integer differencing does. Existing CPU-based implementations are inefficient for running fractional differencing on many large-scale time series; our GPU-based implementation performs fractional differencing up to 400x faster on a single machine.
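
For readers unfamiliar with the technique, here is a minimal NumPy sketch of fixed-window fractional differencing. It is illustrative only: the function names, window length, and value of d are assumptions, and it shows the CPU formulation that GFD accelerates on the GPU rather than the code from the repository.

import numpy as np

def frac_diff_weights(d, num_weights):
    # Iteratively build the fractional differencing weights:
    # w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k
    w = [1.0]
    for k in range(1, num_weights):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff_fixed_window(series, d=0.5, window=100):
    # Fixed-window fractional differencing of a 1-D array:
    # each output value is the dot product of the last `window`
    # observations with the weight vector (most recent weight last).
    w = frac_diff_weights(d, window)[::-1]
    out = np.full(len(series), np.nan)
    for i in range(window - 1, len(series)):
        out[i] = np.dot(w, series[i - window + 1 : i + 1])
    return out

# Example: fractionally difference a random walk with d = 0.5.
prices = np.cumsum(np.random.randn(1000))
fd = frac_diff_fixed_window(prices, d=0.5, window=100)

With d = 1 the weights collapse to (1, -1, 0, 0, ...), recovering ordinary first differencing; a fractional d between 0 and 1 leaves the longer-lag weights non-zero, which is how memory is preserved.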

Code

https://github.com/ritchieng/fractional_differencing_gpu

Presentation

https://www.researchgate.net/publication/335159299_GFD_GPU_Fractional_Differencing_for_Rapid_Large-scale_Stationarizing_of_Time_Series_Data_while_Minimizing_Memory_Loss

submitted by /u/ritchieng