
Comp. nanotechnol., 2025 Volume 12, Issue 4, Pages 61–70 (Mi cn593)

SYSTEM ANALYSIS, INFORMATION MANAGEMENT AND PROCESSING, STATISTICS

The procedure for aggregating initial data on the required quality level of complex data processing systems

N. S. Samokhina, A. S. Efremov

Volga Region State University of Service

Abstract: The aim of the research is to develop a procedure for aggregating data on the required quality level of complex data processing systems (CDPS) through the integration of multicriteria analysis with machine learning methods. Existing approaches based on GOST R 59797–2021 and ISO/IEC 25010 demonstrate limited efficiency due to the absence of a unified aggregation procedure. A three-stage hybrid procedure has been devised: collection and normalization of quality indicators using a modified z-transformation; calculation of adaptive weights via a synthesis of the AHP method with a random forest algorithm; and formation of an integrated criterion for the required quality level. Validation was carried out on two industrial systems processing 50–80 TB/day. The results include an increase in forecast accuracy from 82.1% to 92.4%, a 3.4-fold reduction in decision-making time, and a 34–45% decrease in critical incidents. The algorithmic complexity is $O(n^{2}m + n \log nk)$, with execution time under 30 seconds. The procedure is applicable to CDPS with data volumes exceeding 10 TB/day and requires at least 500 historical observations. The findings are valuable for architects and specialists in quality management of critically important information systems.
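
The abstract describes the three-stage aggregation only at a high level. The sketch below is a minimal Python illustration of how such a pipeline could be assembled, not the authors' implementation: the median/MAD-based variant of the z-transformation, the `alpha` blending parameter, and the weighted-sum form of the integrated criterion are assumptions introduced for illustration; the AHP eigenvector step and the random-forest feature importances follow the standard formulations of those methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def normalize_indicators(X):
    """Stage 1: robust z-transformation of raw quality indicators.

    A median/MAD-based z-score stands in for the paper's "modified
    z-transformation", whose exact form the abstract does not specify.
    """
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) + 1e-12  # avoid division by zero
    return (X - med) / (1.4826 * mad)


def ahp_weights(pairwise):
    """Classical AHP: normalized principal eigenvector of the pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()


def adaptive_weights(pairwise, X_hist, y_hist, alpha=0.5):
    """Stage 2: blend expert AHP weights with random-forest importances.

    `alpha` is a hypothetical mixing parameter (not from the paper) that
    trades off expert judgement against data-driven importance learned
    from historical observations.
    """
    w_expert = ahp_weights(pairwise)
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X_hist, y_hist)  # y_hist: observed quality outcomes
    w_data = rf.feature_importances_
    w = alpha * w_expert + (1.0 - alpha) * w_data
    return w / w.sum()


def integrated_quality(X, weights):
    """Stage 3: integrated required-quality criterion as a weighted sum."""
    return normalize_indicators(X) @ weights
```

In this reading, stage 2 makes the weights "adaptive" by re-fitting the random forest on accumulated historical observations and mixing its importances with the expert pairwise judgements; this is one plausible interpretation of the synthesis described in the abstract, not a statement of the authors' exact scheme.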

Keywords: computational modeling, dynamic adaptation, system lifecycle, neural network algorithms, system analysis, complex data processing systems, event-predictive quality management, emergent properties.

UDC: 005.6:303.732:519.622

DOI: 10.33693/2313-223X-2025-12-4-61-70


