Evaluating Unsupervised Argument Aligners via Generation of Conclusions of Structured Scientific Abstracts.

Published in Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, 2024

Recommended citation: Yingqiang Gao, Nianlong Gu, Jessica Lam, James Henderson, and Richard Hahnloser. 2024. Evaluating Unsupervised Argument Aligners via Generation of Conclusions of Structured Scientific Abstracts. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers), pages 151–160, St. Julian’s, Malta. Association for Computational Linguistics. https://aclanthology.org/2024.eacl-short.14/

Abstract

Scientific abstracts provide a concise summary of research findings, making them a valuable resource for extracting scientific arguments. In this study, we assess various unsupervised approaches for extracting arguments as aligned premise-conclusion pairs: semantic similarity, text perplexity, and mutual information. We aggregate structured abstracts from PubMed Central Open Access papers published in 2022 and evaluate the argument aligners in terms of the performance of language models that we fine-tune to generate the conclusions from the extracted premises given as input prompts. We find that mutual information outperforms the other measures on this task, suggesting that the reasoning process in scientific abstracts hinges mostly on linguistic constructs beyond simple textual similarity.
