Code generation from a graphical user interface via attention-based encoder–decoder model

Wen Yin Chen, Pavol Podstreleny, Wen Huang Cheng, Yung Yao Chen, Kai Lung Hua*

*Corresponding author of this work

    Research output: Article › peer-review

    Abstract

    Code generation from graphical user interface images is a promising area of research. Recent progress in machine learning has made it possible to transform a user interface into code using several methods. The encoder–decoder framework represents one possible way to tackle code generation tasks. Our model implements the encoder–decoder framework with an attention mechanism that helps the decoder focus on a subset of salient image features when needed. The attention mechanism also helps the decoder generate token sequences with higher accuracy. Experimental results show that our model outperforms previously proposed models on the pix2code benchmark dataset.
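The attention step described above can be illustrated with a minimal sketch. This is a toy dot-product attention over a handful of image-region feature vectors; the names and dimensions are hypothetical, and the actual model operates on CNN feature maps and recurrent decoder states rather than small Python lists:

```python
import math

def attention(decoder_state, image_features):
    """Simplified dot-product attention: score each image-region
    feature against the decoder state, normalize with softmax,
    and return the weighted context vector plus the weights."""
    # Alignment scores: dot product of decoder state with each region feature.
    scores = [sum(s * f for s, f in zip(decoder_state, feat))
              for feat in image_features]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: attention-weighted sum of the region features.
    dim = len(image_features[0])
    context = [sum(w * feat[i] for w, feat in zip(weights, image_features))
               for i in range(dim)]
    return context, weights

# Toy example: 3 image regions with 2-dim features; the decoder state
# is most aligned with region 0, so that region should dominate.
feats = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
state = [2.0, 0.0]
ctx, w = attention(state, feats)
```

The decoder would concatenate (or otherwise combine) this context vector with its hidden state before predicting the next token, letting it attend to different GUI regions at each decoding step.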

    Original language: English
    Journal: Multimedia Systems
    DOIs
    Publication status: Accepted/In press - 2021
