Views & Comments

How to Interpret Machine Knowledge

Fashen Li a,#, Lian Li b,#, Jianping Yin c,#, Yong Zhang d,#, Qingguo Zhou e,#, Kun Kuang f,#

a Department of Physics, Lanzhou University, Lanzhou 430000, China
b Department of Computer Science, Hefei University of Technology, Hefei 230009, China
c Department of Computer Science, Dongguan University of Technology, Dongguan 523808, China
d Department of Physics, Xiamen University, Xiamen 361005, China
e Department of Computer Science, Lanzhou University, Lanzhou 430000, China
f College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China

Machine knowledge refers to the knowledge contained in artificial intelligence. This article discusses how to acquire machine knowledge, with a particular focus on the acquisition of causal knowledge. The latter is the process of interpreting machine knowledge. Through the analysis of certain research methods in the fields of physics and artificial intelligence, we propose principles and models for interpreting machine knowledge, and discuss specific methods including the automation of the interpretation process and local linearization.

Human beings have now entered the four-dimensional society that comprises the natural world, the human world, the information world, and the intelligent-agent world. The intelligent agent† has become an objective existence in our world. An intelligent agent can make predictions, make judgments, express emotions, and even actively adjust its behavior to adapt to changes in the environment [1,2]. Hence, we can think of an intelligent agent as a knowledge system with a knowledge structure and function, known as machine knowledge. To establish a generally accepted definition of knowledge, it is still necessary to study it continuously and in depth. In this article, we first set forth the general definition that knowledge is the law of phenomena change.
An intelligent agent can produce an output from an input, or adjust its next output based on a previous output. The law of change between input and output, as well as between successive outputs, is a law of change of phenomena, and therefore belongs to knowledge. This kind of knowledge is called primary knowledge. For example, placing all the changes in the phenomena into a table is an expression of knowledge (i.e., exhaustive expression). However, the knowledge that people need is often not this primary form of knowledge, but rather one that is abstracted at a higher level; that is, the general and universal law that reflects the change of phenomena. This kind of knowledge is called advanced knowledge. Advanced knowledge can be further layered according to the degree of abstraction. Take the work of Tycho Brahe and Johannes Kepler as an example: through detailed observations, Tycho compiled a large amount of trajectory data on planetary motion, which only reflected associations among phenomena. Once Kepler successfully summed up the three laws and revealed the causal relationships behind those phenomena, high-level knowledge of planetary motion was developed. Moreover, Newton's second law is a yet higher-level expression of knowledge. Both associational and causal relationships are knowledge, but they are at different levels. In the process by which humans acquire knowledge, determining associations between phenomena through observation is the most basic scientific activity. To determine causality, it is necessary to analyze and summarize the phenomena behind the observed data. Causality plays an important role in the human science system, since humans always want to know, and persistently pursue, the ''why" behind a change in phenomena. In this paper, we focus on whether people can obtain causal knowledge from intelligent agents, and how this may be done.
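The distinction drawn above between association and causation can be illustrated with a minimal, hypothetical simulation (not part of the article itself; the variable names, coefficients, and use of numpy are assumptions for illustration). A hidden common cause Z makes X and Y strongly correlated even though neither causes the other; holding Z fixed makes the association vanish:

```python
# Sketch: association without causation, driven by a hidden confounder Z.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                          # hidden common cause
x = 2.0 * z + rng.normal(scale=0.5, size=n)     # X depends only on Z
y = -1.5 * z + rng.normal(scale=0.5, size=n)    # Y depends only on Z

# X and Y are strongly associated...
r_marginal = np.corrcoef(x, y)[0, 1]

# ...but conditioning on Z removes the association: regress Z out of
# both variables and correlate the residuals (a partial correlation).
x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
r_partial = np.corrcoef(x_resid, y_resid)[0, 1]

print(f"corr(X, Y)           = {r_marginal:+.2f}")  # strong association
print(f"corr(X, Y | Z fixed) = {r_partial:+.2f}")   # roughly zero
```

A table of (x, y) pairs, in the sense of the exhaustive expression above, would capture only the first number; the causal structure is what explains the second.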
This process involves the interpretation of machine knowledge. Through training, intelligent agents can complete very complicated work, and some of their achievements have exceeded humanity's cultural accumulation over thousands of years. However, we still do not know why these agents are so successful. For example, for an intelligent agent such as a neural network, excessively fitting the training data does not make the network more generalizable. We do not know where the boundaries of its success lie. We do not know how to design the structure of a neural network to accomplish an intended task. We do not know whether it is possible to change the training set to make the neural network perform better. We do not even know what the neural network bases its precise predictions on; that is, whether it relies on the data or on features. In a word, we do not understand the knowledge of an intelligent agent; how, then, can we trust it? Thus far, causality remains the fundamental cornerstone of human understanding of the natural world, and the association described by probabilistic thinking is the surface phenomenon that drives us to understand the causal mechanisms of the world. As Pearl [3] said,

https://doi.org/10.1016/j.eng.2019.11.013
2095-8099/© 2020 THE AUTHORS. Published by Elsevier LTD on behalf of Chinese Academy of Engineering and Higher Education Press Limited Company. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
# These authors contributed equally to this work.
† The intelligent agent in this paper refers to artificial intelligent machines based on silicon technology and Turing algorithms, such as various learning models, computing models, and simulation models, excluding agents constructed using biological or genetic technologies.
Engineering 6 (2020) 218–220
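The point above, that driving the training error down does not by itself yield generalization, can be illustrated with a small, hypothetical sketch (not from the article; the sine target, sample sizes, and use of numpy's polynomial fitting are assumptions for illustration). A high-degree polynomial fits the training set almost perfectly yet generalizes worse than a modest one:

```python
# Sketch: lower training error does not imply better generalization.
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    # noisy samples of an underlying sine law
    x = rng.uniform(-1.0, 1.0, n)
    y = np.sin(np.pi * x) + rng.normal(scale=0.2, size=n)
    return x, y

x_train, y_train = make_data(15)   # small training set
x_test, y_test = make_data(200)    # held-out data from the same law

def fit_mse(deg):
    # least-squares polynomial fit of the given degree
    p = np.polynomial.Polynomial.fit(x_train, y_train, deg)
    train = np.mean((p(x_train) - y_train) ** 2)
    test = np.mean((p(x_test) - y_test) ** 2)
    return train, test

results = {deg: fit_mse(deg) for deg in (1, 3, 14)}
for deg, (train, test) in results.items():
    print(f"degree {deg:2d}: train MSE {train:.4f}, test MSE {test:.4f}")
# The degree-14 fit interpolates all 15 training points (train MSE ~ 0),
# yet its held-out error is worse than that of the modest degree-3 fit.
```

The training error shrinks monotonically with degree, while the held-out error does not: precisely the gap between fitting the data and capturing the law behind it.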