As shown in Table 5, PEGCN achieves the best classification accuracy on all five text classification datasets, and GNN-based methods such as TextGCN, SGC, and SSGC generally outperform traditional CNN- or RNN-based methods. On the MR dataset, PEGCN yields the largest improvement in accuracy, mainly because this dataset is highly sensitive to position information, which is consistent with the conclusion above. PEGCN also outperforms BERTGCN on every dataset rather than on a single one, which demonstrates the reliability of the model. Among the GNN-based methods, PEGCN shows notable gains on the 20NG and Ohsumed datasets: the average text length in these two datasets is greater than in the others, and because the graph is built from word-document statistics, longer texts create more connections between documents through intermediate word nodes. This facilitates message passing over the graph and leads to better performance when combined with GCN, and it may also explain why the GCN model outperforms the BERT model on 20NG. For datasets with short documents, such as MR, the graph structure alone is of limited benefit; however, after adding position information and edge information to the graph, the classification accuracy improves markedly, which further confirms the importance of position and edge information for the classification task.
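To make the document-word-document message-passing argument concrete, the following is a minimal sketch, assuming a TextGCN-style graph in which document-word edges carry a simple IDF-like weight and the adjacency matrix is symmetrically normalized as in a standard GCN. The toy corpus, weighting, and variable names are illustrative only and are not the paper's exact construction; the point is that after two propagation steps, documents that share a word receive non-zero weight from each other, whereas documents with no shared words do not.

```python
import numpy as np

# Illustrative corpus (hypothetical, not from the paper's datasets).
docs = ["graph neural networks for text",
        "position information in text classification",
        "convolutional networks for images"]

# Vocabulary and node layout: document nodes first, then word nodes.
vocab = sorted({w for d in docs for w in d.split()})
w2i = {w: i for i, w in enumerate(vocab)}
n_doc, n_word = len(docs), len(vocab)
n = n_doc + n_word

# Adjacency with self-loops; document-word edges get an IDF-like weight.
A = np.eye(n)
for d_idx, d in enumerate(docs):
    for w in set(d.split()):
        df = sum(w in other.split() for other in docs)
        weight = max(np.log(n_doc / df), 1e-2)  # keep a small edge for ubiquitous words
        A[d_idx, n_doc + w2i[w]] = weight
        A[n_doc + w2i[w], d_idx] = weight

# Symmetric normalization D^{-1/2} A D^{-1/2}, as in a standard GCN layer.
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

# Two propagation steps: documents sharing a word (doc 0 and doc 1 share "text")
# become two-hop neighbours, so messages flow document -> word -> document,
# while doc 1 and doc 2, which share no words, stay disconnected.
two_hop = A_hat @ A_hat
print(two_hop[:n_doc, :n_doc].round(3))
```

Longer documents contain more distinct words and therefore more document-word edges, so the document-document block of the two-hop matrix becomes denser, which is the mechanism appealed to above for 20NG and Ohsumed.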