
Ritchie Ng Speaking at the Visa Data Science Summit

13 November, 2019 by Yi Hao Chua

Mr Ritchie Ng is currently the Chief AI Officer of Ensemble Capital and a Visiting Research Scholar with us at NExT++. He was invited to speak at the Visa Data Science Summit. Below are some photos and the abstract from his talk.

Ritchie Ng

Bio
I work on applied deep learning research on time series and AI for Social Good with researchers at NUS, the Montreal Institute for Learning Algorithms (MILA), New York University (NYU), and the African Institute for Mathematical Sciences (AIMS). I am a research scholar at NExT++ and an NUS Enterprise PYI Fellow.

I am also an NVIDIA Deep Learning Institute instructor, leading all deep learning workshops at NUS, Singapore, and conducting workshops across Southeast Asia.

In industry, I lead artificial intelligence with my colleagues at ensemblecap.ai, an AI hedge fund based in Singapore comprising research scientists, engineers, quants, and traders from NVIDIA and JP Morgan. I built the entire AI tech stack in a production environment, with rigorous time-sensitive and fail-safe software testing, powering multi-million-dollar trades daily. Additionally, as portfolio manager, I co-run our deep learning systematic portfolio, which has delivered positive annual returns since 2018.

Title:
GPU Fractional Differencing for Rapid Large-scale Stationarizing of Time Series Data while Minimizing Memory Loss

Abstract:
Typically, we attempt to achieve some form of stationarity by transforming our time series with common methods such as integer differencing. However, integer differencing removes more memory than is necessary to achieve stationarity. An alternative, fractional differencing, achieves stationarity while preserving far more of the series' memory than integer differencing. While existing CPU-based implementations are inefficient for fractionally differencing many large-scale time series, our GPU-based implementation enables rapid fractional differencing with up to a 400x speed-up over a CPU implementation.
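To make the idea concrete, below is a minimal NumPy sketch of the standard fixed-window formulation of fractional differencing, where the weights follow the binomial expansion of (1 - B)^d and a non-integer d between 0 and 1 dampens, rather than removes, the series' memory. This is purely illustrative and is not the speaker's implementation; the function names frac_diff_weights and frac_diff are assumptions for this example.

```python
import numpy as np

def frac_diff_weights(d, num_weights):
    # Illustrative sketch (not the talk's code): weights of (1 - B)^d,
    # computed iteratively as w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k.
    w = [1.0]
    for k in range(1, num_weights):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff(series, d, window=100):
    # Fixed-window fractional differencing of a 1-D series:
    # out[t] = sum_k w_k * series[t - k], truncated at `window` lags.
    w = frac_diff_weights(d, window)[::-1]  # oldest lag's weight first
    out = np.full(len(series), np.nan)
    for t in range(window - 1, len(series)):
        out[t] = np.dot(w, series[t - window + 1 : t + 1])
    return out

# Example: fractionally difference a random-walk series with d = 0.5
prices = np.cumsum(np.random.randn(1_000))
stationarized = frac_diff(prices, d=0.5)
```

With d = 1 the weights reduce to ordinary integer differencing (w = [1, -1, 0, ...]); fractional d keeps a slowly decaying tail of weights, which is what preserves the long memory of the original series.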

This workshop/talk introduces GPU-accelerated data science workflows that enable rapid end-to-end data science pipelines, using GPU fractional differencing as a case study.
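As a rough illustration of how this kind of computation maps onto a GPU, the sketch below replaces the per-time-step loop from the earlier example with a single batched matrix product using CuPy, a NumPy-compatible GPU array library. It reuses the frac_diff_weights helper from the sketch above; this is only a toy illustration of the vectorization idea, not the implementation presented in the talk, and the name frac_diff_gpu is an assumption.

```python
import cupy as cp  # assumption: CuPy is installed and a GPU is available

def frac_diff_gpu(series, d, window=100):
    # Illustrative GPU sketch: compute all fixed-window weighted sums
    # as one batched dot product instead of looping over time steps.
    w = cp.asarray(frac_diff_weights(d, window)[::-1])  # oldest weight first, on GPU
    x = cp.asarray(series)
    # Row t of `idx` holds the positions t .. t + window - 1 of one sliding window.
    idx = cp.arange(len(x) - window + 1)[:, None] + cp.arange(window)[None, :]
    return x[idx] @ w  # one matrix-vector product over all windows on the GPU
```

The point of the sketch is the shape of the computation: once the sliding windows are expressed as one large array, the entire fractional-differencing pass becomes a single dense linear-algebra operation, which is the kind of workload GPUs accelerate well.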