TY - GEN
T1 - Graph Evolving and Embedding in Transformer
AU - Chien, Jen Tzung
AU - Tsao, Chia Wei
N1 - Publisher Copyright:
© 2022 Asia-Pacific Signal and Information Processing Association (APSIPA).
PY - 2022
Y1 - 2022
N2 - This paper presents a novel graph representation which tightly integrates the information sources of the node embedding matrix and the weight matrix in a graph learning representation. A new parameter updating method is proposed to dynamically represent the graph network by using a specialized transformer. This graph evolved and embedded transformer is built by using the weights and node embeddings from graph structural data. The attention-based graph learning machine is implemented. Using the proposed method, each transformer layer is composed of two attention layers. The first layer is designed to calculate the weight matrix in the graph convolutional network, as well as the self-attention within the matrix itself. The second layer is used to estimate the node embedding and weight matrix, as well as the cross-attention between them. Graph learning representation is enhanced by using these two attention layers. Experiments on three financial prediction tasks demonstrate that this transformer captures the temporal information and improves the F1 score and the mean reciprocal rank.
AB - This paper presents a novel graph representation which tightly integrates the information sources of the node embedding matrix and the weight matrix in a graph learning representation. A new parameter updating method is proposed to dynamically represent the graph network by using a specialized transformer. This graph evolved and embedded transformer is built by using the weights and node embeddings from graph structural data. The attention-based graph learning machine is implemented. Using the proposed method, each transformer layer is composed of two attention layers. The first layer is designed to calculate the weight matrix in the graph convolutional network, as well as the self-attention within the matrix itself. The second layer is used to estimate the node embedding and weight matrix, as well as the cross-attention between them. Graph learning representation is enhanced by using these two attention layers. Experiments on three financial prediction tasks demonstrate that this transformer captures the temporal information and improves the F1 score and the mean reciprocal rank.
UR - http://www.scopus.com/inward/record.url?scp=85146268090&partnerID=8YFLogxK
U2 - 10.23919/APSIPAASC55919.2022.9979949
DO - 10.23919/APSIPAASC55919.2022.9979949
M3 - Conference contribution
AN - SCOPUS:85146268090
T3 - Proceedings of 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2022
SP - 538
EP - 545
BT - Proceedings of 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2022
Y2 - 7 November 2022 through 10 November 2022
ER -