
CAUSAL REASONING FRAMEWORK

Recent research on machine learning (ML) has demonstrated the importance of embracing causal modeling for data-driven predictive analytics. By going beyond correlations within the data and considering causal effects, we can obtain more robust ML models that eliminate spurious correlations and generalize beyond the observed data distribution, while at the same time equipping the models with the high-level cognitive abilities of counterfactual thinking and deep reasoning.


Towards this end, this project will explore the central theme of causal modeling and bridge the research gap between conventional machine learning and causal reasoning. The primary research breakthroughs we aim to achieve are as follows:


  • Causal representation learning: Aims to learn latent data representations that account for causal relations. The primary goal is to extend current causal representation learning methods, which are designed for a fixed set of causal relations, with the ability to automatically discover changes in the causal relations and quickly adjust the data representations accordingly.
     

  • Causal reasoning procedure: A decision-making process that calls for counterfactual thinking when making a model prediction. Counterfactual thinking is the ability to imagine counterfactual scenarios, such as “how would the animal behave if it could fly?”, and to compare the counterfactual world with the factual one. Our target is to build a decision-making procedure that mimics counterfactual thinking during model inference.
     

  • Causal uncertainty modeling: Aims to quantify the uncertainty of causal effect estimates. Uncertainty naturally accompanies causal effect estimation and is essential for causal reasoning; it has, however, received relatively little scrutiny in existing research on causal modeling. Along this line of research, we will study uncertainty estimation and explainable algorithms that shed light on the factors leading to uncertainty.
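As a minimal illustration of these themes, the sketch below builds a hypothetical toy structural causal model (the variables, coefficients, and helper functions are all our own illustrative assumptions, not part of the project): it contrasts a naive correlational estimate, which is biased by a confounder, with an interventional (counterfactual-style) contrast that recovers the true causal effect, and attaches a simple repeated-simulation uncertainty estimate to that effect.

```python
import random
import statistics

def sample_scm(n, seed, do_t=None):
    """Draw n samples from a toy SCM (illustrative only):
    Z ~ Bernoulli(0.5) is a confounder; T depends on Z unless overridden
    by the intervention do(T=t); Y := 2*T + Z + Gaussian noise.
    """
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        z = 1 if rng.random() < 0.5 else 0
        if do_t is None:
            # Observational regime: Z pushes T up, confounding T and Y.
            t = 1 if rng.random() < (0.8 if z else 0.2) else 0
        else:
            # Intervention severs the Z -> T edge.
            t = do_t
        y = 2.0 * t + 1.0 * z + rng.gauss(0.0, 0.1)
        rows.append((z, t, y))
    return rows

def mean_outcome(rows):
    return sum(y for _, _, y in rows) / len(rows)

def interventional_ate(n=5000, seed=0):
    """Counterfactual-style contrast: compare do(T=1) vs do(T=0) worlds."""
    y1 = mean_outcome(sample_scm(n, seed, do_t=1))
    y0 = mean_outcome(sample_scm(n, seed + 1, do_t=0))
    return y1 - y0  # true causal effect of T on Y is 2.0 here

def naive_diff(n=5000, seed=0):
    """Correlational contrast on observational data; biased upward by Z."""
    rows = sample_scm(n, seed)
    y1 = [y for _, t, y in rows if t == 1]
    y0 = [y for _, t, y in rows if t == 0]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

def ate_with_uncertainty(runs=20):
    """Attach a simple uncertainty estimate (spread over repeated runs)."""
    estimates = [interventional_ate(seed=2 * r) for r in range(runs)]
    return statistics.mean(estimates), statistics.stdev(estimates)
```

In this toy setting the naive correlational difference overstates the effect (the confounder Z inflates it), while the interventional contrast concentrates around the true coefficient; the spread across repeated estimates is one crude stand-in for the principled uncertainty modeling the project targets.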

References

Feng, F., Zhang, J., He, X., Zhang, H., & Chua, T. S. Empowering Language Understanding with Counterfactual Reasoning. ACL, 2021.

Wang, W., Feng, F., He, X., Wang, X., & Chua, T. S. Deconfounded Recommendation for Alleviating Bias Amplification. KDD, 2021.

Feng, F., Huang, W., Xin, X., He, X., & Chua, T. S. Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method. SIGIR, 2021.

Wang, W., Feng, F., He, X., Zhang, H., & Chua, T. S. Clicks can be Cheating: Counterfactual Recommendation for Mitigating Clickbait Issue. SIGIR, 2021.

Yang, X., Feng, F., Ji, W., Wang, M., & Chua, T. S. Deconfounded Video Moment Retrieval with Causal Intervention. SIGIR, 2021.

Zhang, Y., Feng, F., He, X., Wei, T., Song, C., Ling, G., & Zhang, Y. Causal Intervention for Leveraging Popularity Bias in Recommendation. SIGIR, 2021.
