Do Pretrained Language Models Indeed Understand Software Engineering Tasks?

Yao Li, Tao Zhang, Xiapu Luo, Haipeng Cai, Sen Fang, Dawei Yuan

Research output: Journal article (peer-reviewed)

2 Citations (Scopus)

Abstract

Artificial intelligence (AI) for software engineering (SE) tasks has recently achieved promising performance. In this article, we investigate to what extent pre-trained language models truly understand SE tasks such as code search and code summarization. We conduct a comprehensive empirical study on a broad set of AI for SE (AI4SE) tasks by feeding the models variant inputs: (1) inputs with various masking rates and (2) inputs reduced by the sufficient input subsets method. The trained models are then evaluated on three SE tasks: code search, code summarization, and duplicate bug report detection. Our experimental results show that pre-trained language models are insensitive to the given input, achieving similar performance across these three SE tasks. We refer to this phenomenon as overinterpretation, where a model confidently makes a decision without salient features, or where a model finds spurious relationships between the final decision and the dataset. Our study investigates two approaches to mitigating the overinterpretation phenomenon: the whole-word masking strategy and ensembling. To the best of our knowledge, we are the first to reveal this overinterpretation phenomenon to the AI4SE community. It is an important reminder for researchers to design model inputs carefully, and it calls for necessary future work in understanding and implementing AI4SE tasks.
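
To make the masking-rate perturbation described in the abstract concrete, the sketch below randomly replaces a fraction of input tokens with a mask symbol before the input is fed to a model. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the function name mask_tokens, the <mask> placeholder, and whitespace tokenization are all assumptions made for this example.

import random

MASK_TOKEN = "<mask>"  # placeholder; a real model uses its own mask token

def mask_tokens(tokens, mask_rate, seed=0):
    """Randomly replace a fraction (mask_rate) of tokens with MASK_TOKEN."""
    rng = random.Random(seed)
    n_mask = int(len(tokens) * mask_rate)
    positions = set(rng.sample(range(len(tokens)), n_mask))
    return [MASK_TOKEN if i in positions else t for i, t in enumerate(tokens)]

# Example: probe a code-search model with 30% of the query masked.
query = "binary search in sorted array".split()
print(mask_tokens(query, mask_rate=0.3))

If a model scores the masked query nearly the same as the original across many masking rates, that insensitivity is the kind of signal the study interprets as overinterpretation.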
Original language: English
Pages (from-to): 4639–4655
Journal: IEEE Transactions on Software Engineering
Volume: 49
Issue number: 10
Publication status: Published - 28 Aug 2023
