Preferences, Utility, Value Driven Modeling and Decision Support

Yuri Pavlov, Rumen Andreev, Valentina Terzieva, Katia Todorova, Petia Kademova-Katzarova
Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, Bulgaria

INTRODUCTION

Organizational knowledge is indispensable in the modeling of complex systems that describe phenomena with significant human participation. In this sense, a complex system is one in which a human takes an active or decisive part in determining the objective, the description, and the choice of the final decision, as an element of the system itself. In the context of system analysis, this is a “human-process” system. This conception is reflected in decision-making theory at the stage of determining the system structure in the light of the main objective. At this basic level, the needed information and knowledge are generally expressed on an ordinal scale as human preferences. The paper focuses attention on complex models and mathematically well-founded methods in which the decision maker is represented through his preferences as a value or utility function that is part of the mathematical modeling process. In fact, this is value-based design, an engineering strategy grounded in system analysis that enables multidisciplinary design optimization (Collopy, Hollingsworth, 2009). It is natural to combine human preferences with machine learning. The latter focuses on prediction built on known properties derived from training data. Machine learning explores the construction of learning algorithms that can learn from teaching experts and make predictions. According to some opinions, machine learning and pattern recognition can be viewed as two facets of the same problem.
As a scientific field, machine learning is an area of computer science that evolved from the study of pattern recognition and computational learning theory (Aizerman, Braverman, & Rozonoer, 1970; Vapnik, 1998). Decision-making is also an iterative process that includes learning as an essential part of its realization (Keeney, Raiffa, 1999). The combination of the abovementioned theories and approaches enables the construction of mathematical value-based models of complex “human-process” systems and the building of a mathematically well-grounded control solution as a flexible, iterative mutual learning process (Pavlov, Marinov, 2017). The objective of the paper is to present such a strictly logical mathematical approach to value modeling and the estimation of human preferences as machine learning, in the process of building two value-based models of complex systems with human participation. The first model focuses on forestry timber production, taking into account landscape design and the conservation of biodiversity. The second model concerns classroom teaching and the determination of the optimal usage of active and passive resources based on information and communication technology (ICT).

BACKGROUND

A reasonable approach to the mathematical description of human beings is an analytical representation of their preferences. Preference representation as a value or utility function permits value-based modelling. Value-based decision-making grounded in human preferences and their inclusion in complex systems is both a challenge and a modern research trend. It is the first step in the implementation of a human-centered, value-driven design in a decision-making process (Keeney, Raiffa, 1999). The main objective is to avoid contradictions in human decisions in complex processes and to permit mathematical calculations in these fields. Complex phenomena and the characteristics of human thinking introduce uncertainty into the expressed human preferences.
The mathematical approach to modelling this type of thinking and the acquired information includes the theory of measurement, utility theory, probability theory, and various aspects of operations research. Especially promising in this direction are stochastic approximation theory and the potential functions method. The latter, by its nature, allows machine learning and is used in various fields, including the mathematical description of perceptions (Aizerman et al., 1970).

PREFERENCES, STOCHASTIC APPROXIMATION AND UTILITY REPRESENTATION

When the alternatives are arranged by preferences, this implies an ordinal scale. In the case of decision-making under certainty, every choice of the decision maker (DM) corresponds to only one outcome (alternative x, x ∊ X). X denotes the set of alternatives, i.e. the possible outcomes provoked by the DM’s actions. Let us consider a more general scheme of interaction between the DM and the real world. Assume that for every choice of the DM there are n possible outcomes (alternatives) xᵢ, i = 1, …, n, each of which occurs with probability pᵢ, where ∑ᵢ pᵢ = 1. Thus, every decision corresponds to one probability distribution p over the outcomes. Following the Bayesian approach, it is reasonable to maximize the mathematical expectation ∑ᵢ pᵢu(xᵢ) (Keeney, Raiffa, 1999; Fishburn, 1970). The function u(x) is a utility function that evaluates the different final alternatives x ∊ X. The normative (axiomatic) approach concerns the conditions for the existence of a utility function u(.) (Fishburn, 1970; Pfanzagl, 1971). X is the set of alternatives and P is a convex set of probability distributions over X (X ⊂ P). The DM’s preference relation over P is denoted by (≻). The relation (p ≻ q), (p, q) ∊ P², expresses the preferences of the DM over P, including those over X (X ⊂ P). The induced indifference relation (≈) is defined as (Fishburn, 1970): (x ≈ y) ⇔ (¬(x ≻ y) ∧ ¬(y ≻ x)), (x, y) ∊ X². Let (∫u(.)dp) denote integration based on the probability measure p. A utility function u(.)
is defined so that the following is fulfilled: (p ≻ q, (p, q) ∊ P²) ⇔ (∫u(.)dp > ∫u(.)dq).
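The expected-utility criterion above can be illustrated with a minimal sketch. This example is not from the paper: the square-root utility function and the two lotteries below are made-up assumptions, chosen only to show how the Bayesian comparison ∑ᵢ pᵢu(xᵢ) orders probability distributions over outcomes.

```python
def expected_utility(u, lottery):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    # Probabilities must sum to 1 (the condition sum_i p_i = 1 above).
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9
    return sum(p * u(x) for p, x in lottery)

# A hypothetical risk-averse utility over monetary outcomes (an assumption).
u = lambda x: x ** 0.5

p = [(0.5, 100.0), (0.5, 0.0)]   # lottery p: 50/50 chance of 100 or 0
q = [(1.0, 25.0)]                # lottery q: 25 for certain

# The DM prefers p to q exactly when E[u] under p exceeds E[u] under q.
print(expected_utility(u, p))  # 0.5*10 + 0.5*0 = 5.0
print(expected_utility(u, q))  # sqrt(25)      = 5.0  -> indifference (p ≈ q)
```

Here the sure amount 25 is the certainty equivalent of the 50/50 lottery under this particular u, so the two distributions receive equal expected utility; with a different utility function the ordering would change, which is exactly why u must be estimated from the DM's preferences.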