
Context-Aware Interpretability for Time Series Forecasting


Overview: Join us to develop a novel interpretability framework for time series models that combines SHAP explanations with adaptive conformal intervals (ACI), tailored specifically to dynamic natural systems such as rivers and catchments. You'll explore how explanations can evolve over time and how they relate to model uncertainty, enabling both developers and domain experts to better trust and understand complex models, especially architectures such as LSTMs producing multi-horizon forecasts.

Key challenges:

  • Implement rolling SHAP with temporally local background sets to reflect changing system dynamics (sketch 1 after this list)
  • Develop methods to aggregate SHAP outputs over time, features, and horizons (especially for LSTM models; sketch 2)
  • Combine SHAP with feature volatility metrics (e.g., coefficient of variation) to detect when the model relies on unstable signals (sketch 3)
  • Align SHAP explanations with conformal interval width changes to diagnose what drives uncertainty increases or regime shifts (sketch 4)
  • Build intuitive visual diagnostics (e.g., timelines, attribution deltas, horizon-based summaries) for operational or scientific users (sketch 5)
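
The sketches below are minimal, hedged illustrations of these five challenges; every function name, parameter, and array layout is an assumption made for the example, not a prescribed design.

Sketch 1 (rolling SHAP with temporally local backgrounds): one way to localize the reference distribution, assuming a model-agnostic setting where `predict` maps a 2-D feature matrix to single-output point forecasts and the rows of `X` are ordered in time. An LSTM would more likely use one of shap's gradient-based explainers; `window` and `stride` are illustrative.

```python
import numpy as np
import shap

def rolling_shap(predict, X, window=200, stride=50):
    """Explain each new block of samples against a background drawn only
    from the preceding `window` time steps, so attributions reflect the
    current regime rather than the whole history."""
    values = []
    for start in range(window, len(X), stride):
        background = X[start - window:start]            # temporally local reference set
        explainer = shap.KernelExplainer(predict, shap.sample(background, 50))
        block = X[start:start + stride]                 # samples explained in this step
        values.append(explainer.shap_values(block))     # assumes single-output predict
    return np.vstack(values)
```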
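
Sketch 2 (aggregation over lags, features, and horizons): one possible aggregation, assuming attributions for a sequence model are stored as a 4-D array `phi` with shape (samples, input lags, features, forecast horizons); that layout is an assumption for the example, not a shap convention.

```python
import numpy as np

def aggregate_attributions(phi):
    """Collapse lag-level SHAP values into per-feature and per-horizon summaries."""
    per_feature = np.abs(phi).sum(axis=1)     # sum over input lags -> (samples, features, horizons)
    per_horizon = per_feature.mean(axis=0)    # average over samples -> (features, horizons)
    overall = per_horizon.sum(axis=1)         # total importance per feature, all horizons
    return per_feature, per_horizon, overall
```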
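
Sketch 3 (attribution versus volatility): one way to cross mean absolute SHAP with a rolling coefficient of variation, assuming `X` and `phi` are time-aligned 2-D arrays of raw features and their attributions; the window and thresholds are illustrative.

```python
import numpy as np
import pandas as pd

def unstable_reliance(X, phi, window=100, cv_thresh=0.5, shap_quantile=0.75):
    """Return a boolean mask that is True where the model puts high
    attribution on a feature whose recent values are volatile."""
    rolling = pd.DataFrame(X).rolling(window)
    # rolling coefficient of variation per feature (ill-defined near zero mean)
    cv = (rolling.std() / rolling.mean().abs()).to_numpy()
    importance = np.abs(phi)
    hot = importance > np.nanquantile(importance, shap_quantile)  # high-attribution cells
    return hot & (cv > cv_thresh)   # reliance on an unstable signal
```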
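
Sketch 4 (adaptive conformal interval widths): a bare-bones online update in the spirit of adaptive conformal inference (a Gibbs-and-Candes-style step on the target miscoverage), producing the interval-width series that attributions can be aligned against; the step size `gamma` and the absolute-residual score are illustrative choices.

```python
import numpy as np

def aci_widths(y, preds, alpha=0.1, gamma=0.01):
    """Track symmetric conformal interval widths while adapting the
    working miscoverage level a_t after each observed error."""
    scores, widths, a_t = [], [], alpha
    for t in range(len(y)):
        q = np.quantile(scores, 1 - a_t) if scores else 0.0    # current conformal quantile
        widths.append(2 * q)                                   # symmetric interval width
        err = float(abs(y[t] - preds[t]) > q)                  # 1 if the interval missed
        a_t = float(np.clip(a_t + gamma * (alpha - err), 1e-3, 1.0))  # adapt miscoverage
        scores.append(abs(y[t] - preds[t]))                    # absolute-residual score
    return np.asarray(widths)
```

Aligning `np.diff(widths)` with per-feature attribution deltas then gives a first diagnostic of which signals move when the intervals widen.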
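
Sketch 5 (diagnostic timeline): a simple two-panel view pairing attribution shares with the interval-width series, purely illustrative, with `phi` assumed to hold (samples, features) mean attributions and `widths` coming from sketch 4.

```python
import matplotlib.pyplot as plt
import numpy as np

def attribution_timeline(times, phi, widths, feature_names):
    """Stack |SHAP| per feature over time above the conformal interval width,
    so uncertainty jumps can be read off next to attribution shifts."""
    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(10, 6))
    ax1.stackplot(times, np.abs(phi).T, labels=feature_names)  # attribution shares
    ax1.legend(loc="upper left", fontsize="small")
    ax1.set_ylabel("|SHAP|")
    ax2.plot(times, widths, color="tab:red")                   # ACI interval width
    ax2.set_ylabel("interval width")
    ax2.set_xlabel("time")
    fig.tight_layout()
    return fig
```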

You'll learn about:

  • Advanced time series modelling (LSTM, multi-horizon forecasts, regime-aware setups)
  • SHAP and post-hoc interpretability for temporal models
  • Adaptive conformal prediction and uncertainty quantification
  • Visualization techniques for tracking model behavior over time
  • Designing trustworthy ML diagnostics in environmental forecasting settings

Ideal for: A student with strong Python/data science skills and an interest in machine learning interpretability, time series forecasting, or environmental systems. You're curious not just about making models perform well, but also about making them understandable, stable, and useful for real-world decisions.

Contact: Hans Korving and Tom Heskes