Prior art search and reranking for generated patent text

Jieh-Sheng Lee*, Jieh Hsiang

*Corresponding author for this work

Research output: Conference article › peer-reviewed

2 Citations (Scopus)

Abstract

Generative models such as GPT-2 have recently demonstrated impressive results. A fundamental question we would like to address is: where did the generated text come from? This work is our initial effort toward answering that question by using prior art search. The purpose of the prior art search is to find the most similar prior text in the training data of GPT-2. We take a reranking approach and apply it to the patent domain. Specifically, we pre-train GPT-2 models from scratch using patent data from the USPTO. The input for the prior art search is the patent text generated by the GPT-2 model. We also pre-train BERT models from scratch to convert patent text into embeddings. The reranking steps are: (1) search for the most similar text in the training data of GPT-2 with a bag-of-words ranking approach (BM25), (2) convert the search results from text to BERT embeddings, and (3) produce the final result by ranking the BERT embeddings according to their similarity to the patent text generated by GPT-2. The experiments in this work show that such reranking outperforms ranking with embeddings alone. However, our mixed results also indicate that calculating semantic similarities among long text spans remains challenging. To our knowledge, this work is the first to implement a reranking system that retrospectively identifies the inputs most similar to a GPT model's output.
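The abstract describes a three-step pipeline: BM25 retrieval over the GPT-2 training data, conversion of the candidates to BERT embeddings, and a final similarity-based reranking. Below is a minimal Python sketch of that pipeline, not the authors' implementation: the rank_bm25 library, the hypothetical checkpoint name "path/to/patent-bert", the toy corpus, and the mean-pooling choice are all assumptions made for illustration.

```python
# Minimal sketch of the three-step reranking pipeline from the abstract.
# Assumptions beyond the paper: rank_bm25 stands in for the authors' BM25
# search over the GPT-2 training data; "path/to/patent-bert" is a
# hypothetical placeholder for their BERT model pre-trained from scratch
# on USPTO text; the toy corpus and mean pooling are illustrative only.
import torch
from rank_bm25 import BM25Okapi
from transformers import AutoModel, AutoTokenizer

# Toy stand-in for the prior patent text spans in the GPT-2 training data.
corpus = [
    "A method for charging a battery pack of an electric vehicle.",
    "A neural network accelerator comprising a systolic array of cells.",
    "An apparatus for wireless power transfer between inductive coils.",
]
# Toy stand-in for the patent text generated by the GPT-2 model.
generated = "A system for wirelessly charging electric vehicle batteries."

# Step 1: bag-of-words retrieval (BM25) of candidate prior text.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
candidates = bm25.get_top_n(generated.lower().split(), corpus, n=2)

# Step 2: convert the candidates and the generated text to BERT embeddings.
tokenizer = AutoTokenizer.from_pretrained("path/to/patent-bert")
model = AutoModel.from_pretrained("path/to/patent-bert")

def embed(texts):
    """Mean-pool the last hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (B, H)

# Step 3: rerank the BM25 candidates by cosine similarity between their
# embeddings and the embedding of the generated text.
query_vec = embed([generated])
cand_vecs = embed(candidates)
scores = torch.nn.functional.cosine_similarity(query_vec, cand_vecs)
reranked = [candidates[i] for i in scores.argsort(descending=True).tolist()]
print(reranked[0])  # most similar prior text under the reranking
```

The design choice the paper evaluates is visible in this structure: BM25 narrows the corpus cheaply by lexical overlap, and the more expensive embedding similarity is applied only to that shortlist, which the authors report works better than ranking with embeddings alone.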

Original language: English
Pages (from-to): 18-24
Number of pages: 7
Journal: CEUR Workshop Proceedings
Volume: 2909
Publication status: Published - 15 Jul 2021
Event: 2nd Workshop on Patent Text Mining and Semantic Technologies, PatentSemTech 2021 - Virtual, Online
Duration: 15 Jul 2021 → …
