
Avtomat. i Telemekh., 2024, Issue 3, Pages 38–50 (Mi at16363)


Topical issue

Attacks on machine learning models based on the PyTorch framework

T. M. Bidzhiev, D. E. Namiot

Lomonosov Moscow State University

Abstract: This research examines the cybersecurity implications of training neural networks in cloud-based services. Although neural networks are widely recognized as an effective tool for solving IT problems, the resource-intensive nature of their training has led to increased reliance on cloud services, and this dependence introduces new cybersecurity risks. The study focuses on a novel attack method that exploits neural network weights to covertly distribute hidden malware. It explores seven embedding methods and four trigger types for malware activation. In addition, the paper introduces an open-source framework that automates code injection into neural network weight parameters, allowing researchers to investigate and counteract this emerging attack vector.

Keywords: neural networks, malware, steganography, triggers.
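The paper itself covers seven embedding methods and four trigger types; the snippet below is only a minimal sketch of the general idea behind one such family, least-significant-bit steganography in float32 weights, and is not the authors' framework. The function names embed_payload and extract_payload, the toy layer, and the choice of the 8 lowest mantissa bits are illustrative assumptions.

```python
import torch

def embed_payload(weights: torch.Tensor, payload: bytes) -> torch.Tensor:
    """Hide one payload byte in the 8 least significant mantissa bits of each
    float32 weight (illustrative assumption, not the paper's exact scheme)."""
    assert weights.dtype == torch.float32 and weights.numel() >= len(payload)
    flat = weights.detach().flatten().clone()
    as_int = flat.view(torch.int32)                  # reinterpret the raw bits in place
    for i, byte in enumerate(payload):
        as_int[i] = (int(as_int[i]) & ~0xFF) | byte  # overwrite the low 8 bits
    return flat.reshape(weights.shape)

def extract_payload(weights: torch.Tensor, length: int) -> bytes:
    """Recover `length` bytes from the low 8 bits of the first weights."""
    as_int = weights.detach().flatten().view(torch.int32)
    return bytes(int(as_int[i]) & 0xFF for i in range(length))

# Round-trip check on a toy layer
model = torch.nn.Linear(64, 64)
payload = b"example payload"
with torch.no_grad():
    model.weight.copy_(embed_payload(model.weight, payload))
assert extract_payload(model.weight, len(payload)) == payload
```

Because only the low mantissa bits change, the relative perturbation of each weight is on the order of 2^-15, so the model's behavior is essentially unaffected, which is what makes such payloads hard to detect by accuracy checks alone.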

Presented by a member of the Editorial Board: A. A. Galyaev

Received: 08.07.2023
Revised: 24.10.2023
Accepted: 20.01.2024

DOI: 10.31857/S0005231024030038


English version: Automation and Remote Control, 2024, 85:3, 263–271

