Publications in Math-Net.Ru
- Interpreting transformer-based classifiers via clustering
  Dokl. RAN. Math. Inf. Proc. Upr., 527 (2025), 432–448
- Dynamic division of labor in hybrid AI: contrasting encoder strategies and their impact on LSTM modulators
  Dokl. RAN. Math. Inf. Proc. Upr., 527 (2025), 117–133
- Beyond familiar domains: a study of the generalization capability of machine-generated image detectors
  Dokl. RAN. Math. Inf. Proc. Upr., 527 (2025), 103–116
- Pairwise image matching for plagiarism detection
  Dokl. RAN. Math. Inf. Proc. Upr., 527 (2025), 68–83
- Enhancing fMRI data decoding with spatiotemporal characteristics in limited dataset
  Dokl. RAN. Math. Inf. Proc. Upr., 527 (2025), 11–30
- Sample size determination: likelihood bootstrapping
  Zh. Vychisl. Mat. Mat. Fiz., 65:2 (2025), 235–242
- Stack more LLMs: efficient detection of machine-generated texts via perplexity approximation
  Dokl. RAN. Math. Inf. Proc. Upr., 520:2 (2024), 228–237
- Unraveling the Hessian: a key to smooth convergence in loss function landscapes
  Dokl. RAN. Math. Inf. Proc. Upr., 520:2 (2024), 57–70
- Artificially generated text fragments search in academic documents
  Dokl. RAN. Math. Inf. Proc. Upr., 514:2 (2023), 308–317
- Text reuse detection in handwritten documents
  Dokl. RAN. Math. Inf. Proc. Upr., 514:2 (2023), 297–307
- Analysis of the properties of probabilistic models in expert-augmented learning problems
  Avtomat. i Telemekh., 2022, no. 10, 47–59
- Probabilistic interpretation of the distillation problem
  Avtomat. i Telemekh., 2022, no. 1, 150–168
- Bayesian distillation of deep learning models
  Avtomat. i Telemekh., 2021, no. 11, 16–29
- Prior distribution selection for a mixture of experts
  Zh. Vychisl. Mat. Mat. Fiz., 61:7 (2021), 1149–1161
- Ordering the set of neural network parameters
  Inform. Primen., 14:2 (2020), 58–65
- Estimation of the relevance of the neural network parameters
  Inform. Primen., 13:2 (2019), 62–70