Abstract
Collaborative filtering (CF) has become one of the most popular and widely used methods in recommender systems, but its performance degrades sharply in practice because real-world user feedback data are sparse and biased. In this paper, we propose AD-AUG, a novel counterfactual data augmentation framework that mitigates the impact of imperfect training data and empowers CF models. The key idea of AD-AUG is to answer the counterfactual question: "what would a user's feedback be if their purchase history had been different?". Our framework consists of an augmenter model and a recommender model. The augmenter generates counterfactual user feedback from the observed feedback, while the recommender leverages both the original and the counterfactual feedback to produce the final recommendations. In particular, we design two adversarial learning-based methods for counterfactual learning, one from a "bottom-up" data-oriented perspective and one from a "top-down" model-oriented perspective. Extensive experiments on three real-world datasets show that AD-AUG greatly enhances a wide range of CF models, demonstrating the effectiveness and generality of our framework.
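To make the augmenter-recommender interplay described above concrete, the snippet below is a minimal, hypothetical PyTorch sketch of an adversarial counterfactual augmentation loop, assuming implicit-feedback interaction vectors and autoencoder-style models. The class names (CFAugmenter, Recommender), losses, and hyperparameters are illustrative assumptions rather than the authors' implementation; the released code (see Notes) is the authoritative reference.

```python
# Hypothetical sketch only: names, losses, and weights are NOT taken from the
# AD-AUG paper or its released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CFAugmenter(nn.Module):
    """Proposes counterfactual feedback: per-item keep/flip decisions."""
    def __init__(self, n_items, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_items, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_items))

    def forward(self, x, tau=0.5):
        logits = self.net(x)
        # Gumbel-Softmax relaxation to sample a (nearly) binary counterfactual
        # interaction vector while keeping the operation differentiable.
        mask = F.gumbel_softmax(torch.stack([logits, -logits], dim=-1),
                                tau=tau, hard=True)[..., 0]
        return mask

class Recommender(nn.Module):
    """A simple autoencoder-style CF model that scores all items per user."""
    def __init__(self, n_items, hidden=256):
        super().__init__()
        self.enc = nn.Linear(n_items, hidden)
        self.dec = nn.Linear(hidden, n_items)

    def forward(self, x):
        return self.dec(torch.tanh(self.enc(x)))

def training_step(x, augmenter, recommender, opt_aug, opt_rec):
    # Recommender step: fit both observed and counterfactual feedback.
    x_cf = augmenter(x)
    loss_rec = F.binary_cross_entropy_with_logits(recommender(x), x) + \
               F.binary_cross_entropy_with_logits(recommender(x_cf.detach()), x)
    opt_rec.zero_grad(); loss_rec.backward(); opt_rec.step()

    # Augmenter step: adversarially produce counterfactuals the recommender
    # finds hard, while staying close to the observed feedback (simplified).
    x_cf = augmenter(x)
    loss_aug = -F.binary_cross_entropy_with_logits(recommender(x_cf), x) + \
               0.1 * (x_cf - x).abs().mean()
    opt_aug.zero_grad(); loss_aug.backward(); opt_aug.step()

# Usage (illustrative):
# aug, rec = CFAugmenter(n_items), Recommender(n_items)
# opt_aug = torch.optim.Adam(aug.parameters(), lr=1e-3)
# opt_rec = torch.optim.Adam(rec.parameters(), lr=1e-3)
# for x in loader: training_step(x, aug, rec, opt_aug, opt_rec)
```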
Y. Wang and Y. Qin: Both authors contributed equally to this work.
Notes
- 1. MovieLens: https://grouplens.org/datasets/movielens/.
- 2.
- 3. The implementations are available at https://github.com/Fang6ang/AD-AUG.
Acknowledgments
This research is partially supported by the National Key Research and Development Program of China (Grant No. 2018AAA0101902) and the National Natural Science Foundation of China (NSFC Grants 62106008 and 62006004).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Wang, Y. et al. (2023). AD-AUG: Adversarial Data Augmentation for Counterfactual Recommendation. In: Amini, MR., Canu, S., Fischer, A., Guns, T., Kralj Novak, P., Tsoumakas, G. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022. Lecture Notes in Computer Science, vol 13713. Springer, Cham. https://doi.org/10.1007/978-3-031-26387-3_29
DOI: https://doi.org/10.1007/978-3-031-26387-3_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-26386-6
Online ISBN: 978-3-031-26387-3
eBook Packages: Computer Science; Computer Science (R0)