Conclusion
This paper proposes the PEGCN model, which combines the strengths of large-scale pre-trained language models and graph convolutional networks for text classification. The model first builds input representations that carry positional information, so that the network can learn the relative positions of words within a text; it then processes the adjacency matrix to fully exploit edge features; in parallel, a BERT model is used for auxiliary training; finally, the predictions of the two branches are combined by linear interpolation to produce the classification result.
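As an illustration of this final step, the following is a minimal sketch, not the authors' implementation, of combining the outputs of the two branches by linear interpolation; the function name, the weight `lam`, and its default value are assumptions introduced here for clarity.

```python
import torch
import torch.nn.functional as F

def interpolate_predictions(gcn_logits, bert_logits, lam=0.7):
    """Combine GCN and BERT predictions by linear interpolation.

    `lam` is a hypothetical trade-off weight (not taken from the paper):
    lam = 1.0 uses only the GCN branch, lam = 0.0 only the BERT branch.
    """
    gcn_probs = F.softmax(gcn_logits, dim=-1)
    bert_probs = F.softmax(bert_logits, dim=-1)
    return lam * gcn_probs + (1.0 - lam) * bert_probs

# Toy usage: 4 documents, 3 classes, random logits from each branch.
gcn_logits = torch.randn(4, 3)
bert_logits = torch.randn(4, 3)
combined = interpolate_predictions(gcn_logits, bert_logits, lam=0.7)
predicted_classes = combined.argmax(dim=-1)
print(predicted_classes)
```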
Through a series of experiments and comparative analyses, the proposed method outperforms the compared methods on five commonly used public datasets, achieving the highest classification accuracy and demonstrating the effectiveness of the model. Further effectiveness experiments show that adding positional information and extracting edge features in the graph convolutional network both contribute to improved classification accuracy. In future work, we will further explore the remaining room for improving this network.