Abstract:
The aim of this research is to develop a procedure for aggregating data on the required quality level of complex data processing systems (CDPS) by integrating multicriteria analysis with machine learning methods. Existing approaches based on GOST R 59797–2021 and ISO/IEC 25010 show limited effectiveness because they lack a unified aggregation procedure. A three-stage hybrid procedure has been devised: collection and normalization of quality indicators using a modified z-transformation; calculation of adaptive weights through a synthesis of the AHP method with a random forest algorithm; and formation of an integrated criterion for the required quality level. Validation was carried out on two industrial systems processing 50–80 TB/day. The results include an increase in forecast accuracy from 82.1% to 92.4%, a 3.4-fold reduction in decision-making time, and a 34–45% decrease in critical incidents. The algorithmic complexity is $O(n^{2}m + n \log nk)$, with execution time under 30 seconds. The procedure is applicable to CDPS with data volumes exceeding 10 TB/day and requires at least 500 historical observations. The findings are valuable for architects and specialists in quality management of critically important information systems.
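The abstract does not specify implementation details, so the following Python sketch illustrates one plausible reading of the three-stage procedure: a robust z-type normalization of quality indicators, adaptive weights obtained by blending AHP priorities with random-forest feature importances, and a weighted integrated criterion. The function names, the blending parameter `alpha`, and the median/MAD variant of the z-transformation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def robust_z_normalize(X):
    """Stage 1 (assumed): normalize indicators with a median/MAD-based z-transformation."""
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) + 1e-9  # avoid division by zero
    return (X - med) / (1.4826 * mad)

def ahp_weights(pairwise):
    """Expert weights: principal eigenvector of an AHP pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

def adaptive_weights(pairwise, X_hist, y_hist, alpha=0.5):
    """Stage 2 (assumed): blend AHP priorities with random-forest importances."""
    w_expert = ahp_weights(pairwise)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_hist, y_hist)
    w_data = rf.feature_importances_
    w = alpha * w_expert + (1 - alpha) * w_data
    return w / w.sum()

def integrated_quality(x_norm, w):
    """Stage 3 (assumed): integrated criterion as a weighted sum of normalized indicators."""
    return float(np.dot(w, x_norm))
```

Under these assumptions, the integrated criterion for a new observation would be computed as `integrated_quality(robust_z_normalize(X)[-1], adaptive_weights(pairwise, X_hist, y_hist))`, with the pairwise matrix supplied by experts and the historical sample covering the at least 500 observations required by the procedure.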
Keywords: computational modeling, dynamic adaptation, system lifecycle, neural network algorithms, system analysis, complex data processing systems, event-predictive quality management, emergent properties.