Evaluating generative patent language models

Research output: Contribution to journal › Article › peer-review


Abstract

Generative language models are promising for assisting human writing in various domains. This manuscript aims to build generative language models in the patent domain and evaluate model performance from a human-centric perspective. Specifically, the metric measures the ratio of keystrokes that autocompletion based on a generative patent language model can save; a higher ratio indicates a more effective model, since more keystrokes are saved. The metric is keystroke-based, unlike conventional machine-centric metrics, which are token-based, and can be used to benchmark model performance. In terms of model size, the largest model built in this manuscript is PatentGPT-J-6B, which is state-of-the-art in the patent domain. Based on this metric, the largest model is not necessarily the best. This finding suggests that continuing to increase model size in the patent domain might be unnecessary if the purpose is to assist human writing with autocompletion. Several patent language models are pre-trained from scratch in this research and released for future researchers, along with several visualization tools. The importance of building generative language models in the patent domain lies in their potential to facilitate creativity and innovation in the future.
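
The following is a minimal sketch, not the paper's exact evaluation protocol, of how a keystroke-savings ratio could be computed. It assumes a hypothetical suggest(prefix) function standing in for the language model's top completion, and a simplified rule that accepting a matching suggestion costs one keystroke.

    # Minimal sketch (assumed simplification, not the paper's protocol):
    # estimate the ratio of keystrokes saved by autocompletion.
    # `suggest` is a hypothetical stand-in for a generative patent
    # language model's top completion given the text typed so far.

    def keystroke_savings_ratio(target: str, suggest) -> float:
        """Simulate typing `target`; accept a suggestion (one keystroke)
        when it matches the upcoming text, otherwise type one character."""
        typed = ""
        keystrokes = 0
        while len(typed) < len(target):
            remaining = target[len(typed):]
            suggestion = suggest(typed)
            if suggestion and remaining.startswith(suggestion):
                typed += suggestion      # one keystroke to accept the completion
                keystrokes += 1
            else:
                typed += remaining[0]    # fall back to typing the next character
                keystrokes += 1
        return 1.0 - keystrokes / len(target)

    # Toy usage with a fixed-phrase "model"; a real evaluation would query
    # an autoregressive model such as PatentGPT-J over patent text.
    if __name__ == "__main__":
        def toy_suggest(prefix: str) -> str:
            return "claim 1, wherein" if prefix.endswith("of ") else ""

        text = "The device of claim 1, wherein the sensor is optical."
        print(f"Saved-keystroke ratio: {keystroke_savings_ratio(text, toy_suggest):.2f}")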

Original language: English
Article number: 102173
Journal: World Patent Information
Volume: 72
State: Published - Mar 2023

Keywords

  • Deep learning
  • Natural language generation
  • Natural language processing
  • Patent text generation
