TY - JOUR
T1 - Syntax-Enhanced Pretrained Language Models for Aspect-Level Sentiment Classification
AU - Yuan, Li
AU - Wang, Jin
AU - Lee, Lung Hao
AU - Zhang, Xuejie
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The main challenge of aspect-level sentiment classification (ASC) is associating target aspect terms with relevant contextual words. Existing methods improve ASC performance by incorporating syntactic dependencies through a graph convolution layer on top of BERT. However, these approaches often assign a fixed weight to edges of the same dependency type, overlooking the contextual nuances such dependencies can convey. To address this, we propose syntax-enhanced BERT (SE-BERT), which integrates syntactic distance embeddings, a syntax-enhanced transformer, aspect-specific masking, and a sentiment classification layer. SE-BERT advances previous methods in two main ways. First, it weights edges of the same dependency type dynamically according to the source node's part-of-speech (POS) tag, which improves graph propagation accuracy and yields more precise edge associations. Second, rather than stacking additional graph convolutional layers on top of BERT, SE-BERT replaces BERT's final Transformer layers with the proposed SE-Transformer, which directly encodes syntactic information into the attention distribution and word representations within scaled dot-product attention. The model can be initialized from a pretrained checkpoint and fine-tuned for downstream tasks without additional training parameters. Experimental results on five benchmark datasets demonstrate that SE-BERT outperforms existing ASC methods.
KW - Aspect-level sentiment classification (ASC)
KW - fine-grained sentiment analysis
KW - syntactic dependency
KW - syntax-enhanced transformer
UR - http://www.scopus.com/inward/record.url?scp=85213268689&partnerID=8YFLogxK
U2 - 10.1109/TCSS.2024.3514901
DO - 10.1109/TCSS.2024.3514901
M3 - Article
AN - SCOPUS:85213268689
SN - 2329-924X
JO - IEEE Transactions on Computational Social Systems
JF - IEEE Transactions on Computational Social Systems
ER -