
News of the Kabardino-Balkarian Scientific Center of the Russian Academy of Sciences, 2024, Volume 26, Issue 4, Pages 54–61 (Mi izkab892)

Computer science and information processes

A method for assessing the degree of confidence in the self-explanations of GPT models

A. N. Lukyanov, A. M. Tramova

Plekhanov Russian University of Economics, 117997, Russia, Moscow, 36 Stremyanny Lane

Abstract: With the rapid growth in the use of generative neural network models for practical tasks, the problem of explaining their decisions is becoming increasingly acute. As neural network-based solutions enter medical practice, government administration, and defense, the demands on the interpretability of such systems will only increase. In this study, we propose a method for verifying, post factum, the reliability of self-explanations provided by models by comparing the model's attention distributions during the generation of the response and of its explanation. We propose and develop methods for numerically evaluating the reliability of answers produced by generative pre-trained transformers. Specifically, we propose using the Kullback-Leibler divergence between the model's attention distributions when generating the response and the subsequent explanation. Additionally, we propose computing the ratio of the model's attention to the original query and to its own response during the generation of the explanation, to gauge how strongly the self-explanation was influenced by that response. To obtain these values, we propose an algorithm for recursively computing the model's attention across generation steps. The study demonstrates the effectiveness of the proposed methods, identifying metric values that correspond to correct and incorrect explanations and responses. We analyzed the existing methods for determining the reliability of generative model responses and noted that the overwhelming majority of them are difficult for an ordinary user to interpret. We therefore propose our own methods and test them on the most widely used generative models available at the time of writing. As a result, we obtain typical values of the proposed metrics, an algorithm for computing them, and a visualization.
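
Below is a minimal sketch (not the authors' implementation) of the two quantities named in the abstract: the Kullback-Leibler divergence between attention distributions at response time and at explanation time, and the ratio of explanation-time attention on the original query versus the model's own response. It assumes attention weights have already been aggregated (e.g., averaged over layers and heads) into NumPy arrays; the function names, the toy attention values, and the query/answer index split are hypothetical and purely illustrative.

    import numpy as np

    def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
        """Kullback-Leibler divergence D_KL(p || q) for discrete distributions."""
        p = p / p.sum()
        q = q / q.sum()
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    def attention_ratio(expl_attn: np.ndarray, query_idx: slice, answer_idx: slice) -> float:
        """Ratio of explanation-time attention mass on the original query
        to the mass on the model's own previously generated response."""
        query_mass = expl_attn[query_idx].sum()
        answer_mass = expl_attn[answer_idx].sum()
        return float(query_mass / (answer_mass + 1e-12))

    # Toy example with hypothetical aggregated attention over a 10-token context:
    # tokens 0-5 are the user query, tokens 6-9 are the model's generated response.
    answer_attn = np.array([0.20, 0.15, 0.25, 0.10, 0.20, 0.10])       # over the query at response time
    expl_attn = np.array([0.05, 0.05, 0.10, 0.05, 0.05, 0.05,          # query part at explanation time
                          0.20, 0.20, 0.15, 0.10])                     # response part at explanation time

    divergence = kl_divergence(answer_attn, expl_attn[:6])
    ratio = attention_ratio(expl_attn, slice(0, 6), slice(6, 10))
    print(f"KL divergence (response vs. explanation attention over the query): {divergence:.3f}")
    print(f"Query-to-response attention ratio during explanation: {ratio:.3f}")

In this reading, a large divergence or a small ratio would suggest that the explanation attends to different evidence than the original answer did; the paper itself reports the typical metric values and the recursive attention-propagation algorithm used to obtain them.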

Keywords: neural networks, metrics, language models, interpretability, GPT, LLM, XAI

UDC: 004.054

MSC: 68T09

Received: 24.06.2024
Revised: 01.08.2024
Accepted: 07.08.2024

DOI: 10.35330/1991-6639-2024-26-4-54-61


