Verifying Temporal Trust Logic using CTL Model Checking

Nagat Drawel (CIISE, Concordia University, Canada) n_drawe@encs.concordia.ca
Jamal Bentahar (CIISE, Concordia University, Canada) bentahar@ciise.concordia.ca
Mohamed El Menshawy (CS, Menoufia University, Egypt) moh_marzok75@yahoo.com
Amine Laarej (CIISE, Concordia University, Canada) laarej.amine@gmail.com

Abstract

Several formal trust frameworks have been introduced in the area of Multi-Agent Systems (MASs). However, the problem of model checking trust logics is still a challenging research topic that has not been sufficiently investigated yet. In this paper, we address this challenge by proposing a formal and fully automatic model checking technique for a temporal logic of trust. From the logical perspective, the starting point of our proposal is TCTL, a Computation Tree Logic of preconditional Trust that has been recently proposed. We extend this logic by introducing a new modality for conditional trust and describe the logical relationship between preconditional and conditional trust. From the formal verification perspective, we develop transformation-based algorithms fully implemented in a Java toolkit that automatically interacts with the NuSMV model checker. Our verification approach automatically transforms the problem of model checking TCTL into the problem of model checking CTL. We also develop a model checking algorithm for conditional trust. We provide proofs of the soundness and completeness of our transformation algorithms. Finally, experiments conducted on a standard industrial case study of auto-insurance claim processing demonstrate the efficiency and scalability of our approach in verifying TCTL and conditional trust formulae.

1 Introduction

Trust is regarded as one of the key aspects behind the success and growth of applications based on Multi-Agent Systems (MASs).
It has been the focus of many research projects, both theoretical and practical, in recent years, particularly in domains where open multi-agent technologies are applied (e.g., Internet-based markets, information retrieval, etc.). The importance of trust in such domains arises mainly because it provides a social control that regulates the relationships and interactions among agents. However, despite the growing number of multi-agent applications, these systems still face many challenges in the verification of agents' behaviors. The existence of many autonomous entities in such systems makes this verification difficult due to the increase in their complexity and heterogeneity. The main challenge facing MASs is how to ensure the reliability of trust relationships in the presence of misbehaving entities. Such entities not only create exceptions for other agents, but may also obstruct their proper work [26].

Copyright © by the paper's authors. Copying permitted only for private and academic purposes. In: R. Cohen, M. Sensoy, and T. J. Norman (eds.): Proceedings of the 20th International Workshop on Trust in Agent Societies, Stockholm, July 2018, published at http://ceur-ws.org

The fact that such systems