Abstract: With the widespread application of machine learning and deep learning, models absorb vast amounts of knowledge from their training data, including sensitive private data. This raises significant privacy concerns, as models trained on such data are susceptible to model inversion attacks (MIA). MIA exploits the knowledge acquired by a target classifier to synthesize data that reflects the class characteristics of its private training set. Such attacks enable adversaries to reconstruct highly faithful data that closely resembles the private training data, leading to severe privacy breaches. While rapid progress has been made on image-related MIA, other domains are still in their infancy. To foster further research on MIA, this paper surveys and organizes both traditional MIA in Euclidean domains and non-Euclidean MIA, carefully analyzing the core reasons for the success of MIA in each domain.