A Quantitative Measure for Information Transfer in Human-Machine Control Systems

Maxim Bakaev, Tatiana Avdeenko
Economic Informatics Department
Novosibirsk State Technical University
Novosibirsk, Russia
bakaev@corp.nstu.ru; avdeenko@fb.nstu.ru

Abstract—This study is an exploration in measuring the quantities of information transferred in the course of interaction in human-machine systems, which we deem necessary as fundamental support for interface design. We put forward several possible measures found in previous research, including the well-known Hick's law, and validate their applicability in a dedicated experiment with 33 users performing selection tasks. The results suggest that an interface "effective information capacity" factor, incorporating the number of alternative targets (N) and the logarithm of their vocabulary size (K, the number of possible different symbols), provided a better fit to the observed performance time, while the Hick's law model and "interface message length" (IML) based models were clearly inferior. ANOVA suggested a strong effect of the interaction between N, K, and IML, but building a corresponding conclusive model would require further theoretical research. We also calculated selection task throughput, which amounted to 59.0 bit/s in the experiment, steadily increasing with K and decreasing with age. Our findings might aid interface designers, in particular those developing human-machine control systems that seek to maximize overall operating efficiency and minimize costly human errors.

Keywords—Interface Design, Throughput, Hick-Hyman Law, Model Human Processor, Information Complexity

I. INTRODUCTION

"The User Is Always Right" has become a widespread must-follow motto for any interaction designer and has been considered common sense for more than a decade. Interface and product designers do their best to create pleasurable user experiences; however, there are cases when the overall efficiency of a human-machine system must be considered and optimized instead.
This was especially prominent in the 1970s-1980s, when extensive development and introduction of automated control systems took place in the leading countries of the time. A professionally trained operator or analyst was considered a part of the human-machine control system, receiving, processing, and outputting information just like any of its other components. The human was seen as an indispensable element adding integrity, adaptability, and self-development capability to the control system [1, vol. 1]. However, despite these strong points, humans are clearly inferior to machines at receiving large amounts of information, and disregard of this aspect is known to lead to costly human errors, especially in control systems [2, p. 8].

It also seems that the progress of interaction design automation tools remains limited in particular by their inability to optimize information flows between computer and user, due to the lack of a quantitative measure. For example, the SUPPLE system [3], which generates dialogue window interfaces, treats generation as a discrete optimization problem, but operates with movement times between interface elements, their sizes, etc., which inevitably neglects the cognitive aspect of interaction.

Perhaps the most straightforward approach to quantifying information transfer would embrace information quantity, calculated with C. Shannon's well-known formula for information entropy. Indeed, W. Hick, probably one of the first to apply Information Theory in psychology, devised the relation between human choice reaction time (RT) and the number of alternatives (N) to choose from:

RT ≈ k · log₂(N + 1),   (1)

where the coefficient k is the rate of gain of information. Of course, this was just a particular case (equiprobable alternatives) of a wider relation between RT and the entropy of the set of stimuli (H_T), put forward by R.
Hyman:

RT = a_H + b_H · H_T,   (2)

where a_H and b_H are empirically defined constants, which may vary quite significantly depending on the human subject's characteristics, stimulus type and intensity, environment, etc. However, the greatest problem for applying the Hick-Hyman law (2) in human-machine interface design is that it remains very much unclear how to calculate the information entropy for most practical interaction tasks [4].

The analysis of existing research led us to believe that there is no universally accepted method for measuring the amounts of information transferred between humans and computers in the modern interaction context, although its desirability is widely recognized. So, in our study we put forward several candidate measures found in the literature and undertake an experiment modeling human information processing, to assess their applicability and establish directions for further research.

This research was supported by RFBR grant for young scientists, 15-37-21058.
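To make relations (1) and (2) concrete, the following is a minimal sketch (not from the original paper) of how the stimulus entropy H_T, Hick's-law prediction, and the empirical constants a_H and b_H could be computed. The function names and the numeric values (k = 0.2, the synthetic RT observations) are illustrative assumptions, not data from the study.

```python
import math

def choice_entropy(probs):
    """Shannon entropy H_T (in bits) of a stimulus set with given probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def hick_rt(n, k=0.2):
    """Hick's law (1) for N equiprobable alternatives: RT ~ k * log2(N + 1).
    The rate-of-gain coefficient k = 0.2 s/bit is an illustrative assumption."""
    return k * math.log2(n + 1)

def fit_hick_hyman(h_values, rt_values):
    """Least-squares fit of the Hick-Hyman law (2), RT = a_H + b_H * H_T.
    Returns the empirically defined constants (a_H, b_H)."""
    n = len(h_values)
    mean_h = sum(h_values) / n
    mean_rt = sum(rt_values) / n
    cov = sum((h - mean_h) * (rt - mean_rt) for h, rt in zip(h_values, rt_values))
    var = sum((h - mean_h) ** 2 for h in h_values)
    b_h = cov / var
    a_h = mean_rt - b_h * mean_h
    return a_h, b_h

# An equiprobable set of 4 alternatives carries log2(4) = 2 bits.
print(choice_entropy([0.25] * 4))  # 2.0

# Synthetic observations generated exactly as RT = 0.2 + 0.15 * H_T,
# so the fit should recover a_H = 0.2 and b_H = 0.15.
h = [1.0, 2.0, 3.0, 4.0]
rt = [0.2 + 0.15 * x for x in h]
a_h, b_h = fit_hick_hyman(h, rt)
print(round(a_h, 3), round(b_h, 3))  # 0.2 0.15
```

Note that b_H here is the inverse of throughput: with real experimental (H_T, RT) pairs, 1/b_H gives the rate of information transfer in bit/s, the quantity reported in the abstract.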