Abstract:
This paper presents a systematic review of approaches to formalizing normative behavior in autonomous intelligent agents. The classification is developed along three operational dimensions: the dominant ethical paradigm (deontological or hybrid), the type of logical model and its inference mechanisms (deontic modalities, defeasible and dynamic extensions), and the level of implementation within architectures for planning, control, and verification. A single moral-choice scenario serves as the basis for comparison, showing how decisions change under fixed obligations, when priorities and exceptions are introduced, and when consequentialist evaluative criteria are taken into account. The results refine the boundaries of applicability of the formalisms, identify requirements for priority tuning and explainability, and lay the groundwork for designing verifiable normative-control modules for autonomous systems. Methodological conventions for notation and a unified template for model descriptions are established, ensuring comparability of the findings.