References
- A. H. Wang. 2010. Don't follow me: Spam detection in Twitter. In 2010 International Conference on Security and Cryptography (SECRYPT), pages 1–10.
- Rushlene Kaur Bakshi, Navneet Kaur, Ravneet Kaur, and Gurpreet Kaur. 2016. Opinion mining and sentiment analysis. In 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), pages 452–455.
- Vladimir N. Vapnik. 1998. Statistical Learning Theory. Wiley-Interscience.
- Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7370–7377.
- X. Liu, X. You, X. Zhang, J. Wu, and P. Lv. 2020. Tensor graph convolutional networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8409–8416.
- Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
- Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
- C. C. Aggarwal and C. Zhai. 2012. A survey of text classification algorithms. In Mining Text Data, pages 163–222. Springer.
- T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations (ICLR 2013), Scottsdale, Arizona, USA, May 2–4, 2013, Workshop Track Proceedings. http://arxiv.org/abs/1301.3781
- Y. Miyamoto and K. Cho. 2016. Gated word-character recurrent language model. In Proceedings of EMNLP, Austin, Texas, pages 1992–1997.
- Yoon Kim. 2014. Convolutional neural networks for sentence
classification. arXiv preprint arXiv:1408.5882.
- K. Sinha, Y. Dong, J. C. K. Cheung, and D. Ruths. 2018. A hierarchical neural attention-based text classifier. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 817–823.
- Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2008. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80.
- Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034.
- Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018. How
powerful are graph neural networks? arXiv preprint arXiv:1810.00826.
- Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
- F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger. 2019. Simplifying graph convolutional networks. In International Conference on Machine Learning, pages 6861–6871. PMLR.
- Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
- Jiani Zhang, Xingjian Shi, Junyuan Xie, Hao Ma, Irwin King, and Dit-Yan Yeung. 2018. GaAN: Gated attention networks for learning on large and spatiotemporal graphs. In 34th Conference on Uncertainty in Artificial Intelligence (UAI 2018).
- Y. Xin, L. Xu, J. Guo, J. Li, X. Sheng, and Y. Zhou. 2021. Label incorporated graph neural networks for text classification. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 8892–8898. IEEE.
- J. Chang, L. Wang, G. Meng, et al. 2019. Local-aggregation graph networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP(99):1–1.
- Yaqing Wang, Song Wang, Quanming Yao, and Dejing Dou. 2021. Hierarchical heterogeneous graph representation learning for short text classification. In EMNLP (1), pages 3091–3101.
- C. Jeong, S. Jang, E. Park, and S. Choi. 2020. A context-aware citation recommendation model with BERT and graph convolutional networks. Scientometrics, 124(3):1907–1922.
- Z. Lu, P. Du, and J. Y. Nie. 2020. VGCN-BERT: Augmenting BERT with graph embedding for text classification. In European Conference on Information Retrieval, pages 369–382. Springer, Cham.
- Ho Chung Wu, Robert Wing Pong Luk, Kam Fai Wong, and Kui Lam Kwok. 2008. Interpreting TF-IDF term weights as making relevance decisions. ACM Transactions on Information Systems, 26(3), Article 13, 37 pages. https://doi.org/10.1145/1361684.1361686
- Yuxiao Lin, Yuxian Meng, Xiaofei Sun, Qinghong Han, Kun Kuang, Jiwei Li, and Fei Wu. 2021. BertGCN: Transductive text classification by combining GCN and BERT. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1456–1462. https://doi.org/10.18653/v1/2021.findings-acl.126
- Qimai Li, Zhichao Han, and Xiao-Ming Wu. 2018. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
- J. Tang, M. Qu, and Q. Mei. 2015. PTE: Predictive text embedding through large-scale heterogeneous text networks. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1165–1174.
- A. Joulin, E. Grave, P. Bojanowski, and T. Mikolov. 2017. Bag of tricks for efficient text classification. In EACL, pages 427–431. Association for Computational Linguistics.
- G. Wang, C. Li, W. Wang, Y. Zhang, D. Shen, X. Zhang, R. Henao, and L. Carin. 2018. Joint embedding of words and labels for text classification. In ACL, pages 2321–2331.
- H. Zhu and P. Koniusz. 2021. Simple spectral graph convolution. In International Conference on Learning Representations.
Competing interests
The authors declare no competing interests.
Author contributions
R.Z. set the experimental strategies. Z.G. drafted the main manuscript
text. H.H. and Z.G. designed and performed the experiments. All authors
reviewed the manuscript. H.H. handled the submission and publication
process.