NYU Researchers Prompt GPT-3 to Make Surprisingly Accurate Time Series Predictions by Breaking Numbers into Digits
- NYU researchers prompted GPT-3 to forecast the next values in time series data represented as strings of numeric digits, with no task-specific training.
- They re-encoded the numbers so that each digit became its own token, which enables better sequence predictions (see the sketches after this list).
- The resulting LLMTime method matched or exceeded specialized time series models without any fine-tuning.
- LLMTime appears to work because the underlying model prefers simple rules when completing sequences, a form of Occam's razor.
- While strong at prediction, LLMTime sometimes made incorrect deductions about the behavior of the time series it was forecasting.
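To see why that re-encoding matters, the snippet below (a sketch assuming the `tiktoken` package, which exposes the GPT-2 BPE vocabulary that GPT-3 also uses) compares how a raw number and a digit-spaced number tokenize:

```python
# Sketch: why digit separation matters for GPT-3-style BPE tokenizers.
# Assumes the tiktoken package; "gpt2" is the BPE vocabulary GPT-3 shares.
import tiktoken

enc = tiktoken.get_encoding("gpt2")

for text in ["42235630", "4 2 2 3 5 6 3 0"]:
    chunks = [enc.decode([tok]) for tok in enc.encode(text)]
    print(repr(text), "->", chunks)

# The raw string is typically carved into irregular multi-digit chunks
# whose boundaries shift from number to number; the spaced string
# tokenizes one digit at a time, giving the model a consistent view
# of place value.
```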

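And here is a minimal sketch of the encoding idea itself, assuming non-negative values scaled to a fixed decimal precision; the function names are illustrative, not the actual LLMTime API. Digits within a value are separated by spaces and consecutive time steps by commas:

```python
# Minimal sketch of digit-level time series encoding (illustrative,
# not the LLMTime codebase). Assumes non-negative values and a fixed
# decimal precision so decimal points can be dropped and restored.

def encode_series(values, precision=2):
    """[0.64, 0.71] -> '6 4 , 7 1': digits spaced, steps comma-separated."""
    scaled = [str(round(v * 10 ** precision)) for v in values]
    return " , ".join(" ".join(digits) for digits in scaled)

def decode_series(text, precision=2):
    """Invert encode_series: strip spaces, rescale each comma chunk."""
    return [int(chunk.replace(" ", "")) / 10 ** precision
            for chunk in text.split(",")]

series = [0.64, 0.71, 0.83]
prompt = encode_series(series)
print(prompt)  # 6 4 , 7 1 , 8 3
assert decode_series(prompt) == series
```

A forecast then amounts to sampling continuations of such a digit string from the model and decoding them back into numbers.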