In this paper, five self-supervised losses are used to train the scene flow generator: the Chamfer loss \(\mathscr{L}_C\), the Laplacian regularization loss \(\Phi_C\), the smooth loss \(\mathscr{L}_S\), the cycle consistency loss \(\mathscr{L}_{CC}\), and the GAN loss \(\mathscr{L}_{GAN}\). As shown in Table 2, removing the GAN loss and training with only the four existing self-supervised losses causes a significant degradation in scene flow estimation, which demonstrates that introducing adversarial learning effectively improves estimation performance. Unlike other methods with self-supervised losses, SFGAN designs a loss that supervises 3D scene flow learning by exploiting the difference between the distributions of generated and real data. As a result, the average end-point error (EPE3D) of the scene flow is reduced to 0.098 without access to scene flow annotations. Removing the Laplacian regularization loss or the smooth loss during training also degrades scene flow estimation, so the SFGAN framework still needs to be combined with these existing losses to achieve optimal results. As shown in Table 3, we evaluate different GAN loss weights \(W_{gan}\); training is most effective when \(W_{gan}\) is set to 2 or 3.
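The combination of loss terms described above can be sketched as a weighted sum, with the GAN term scaled by \(W_{gan}\). The following is a minimal illustrative sketch, not SFGAN's actual implementation: the function names, the toy Chamfer distance, and the unit weights on the non-GAN terms are all assumptions for demonstration.

```python
# Illustrative sketch of composing the five self-supervised loss terms.
# All names and weights here are hypothetical; only the GAN-term weight
# W_gan (best at 2 or 3 in the ablation) comes from the text.

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two 3D point sets
    (each a list of (x, y, z) tuples)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    d_pq = sum(min(sq_dist(a, b) for b in q) for a in p) / len(p)
    d_qp = sum(min(sq_dist(b, a) for a in p) for b in q) / len(q)
    return d_pq + d_qp

def total_loss(l_chamfer, l_laplacian, l_smooth, l_cycle, l_gan, w_gan=2.0):
    """Weighted sum of the five terms; the non-GAN weights are shown
    as implicit 1.0 placeholders for simplicity."""
    return l_chamfer + l_laplacian + l_smooth + l_cycle + w_gan * l_gan

# Toy usage: Chamfer term between a warped cloud and the target cloud.
warped = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
target = [(0.1, 0.0, 0.0), (1.0, 0.1, 0.0)]
l_c = chamfer_distance(warped, target)
loss = total_loss(l_c, 0.05, 0.02, 0.03, 0.4, w_gan=2.0)
```

Dropping the `w_gan * l_gan` term from `total_loss` mirrors the ablation in Table 2, where training without the GAN loss degrades performance.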