Agents Corner
Opening the black box of trust: reasoning
about trust models in a BDI agent
ANDREW KOSTER and MARCO SCHORLEMMER, Artificial Intelligence
Research Institute IIIA-CSIC, Universitat Autònoma de Barcelona, Bellaterra,
Catalonia, Spain.
E-mail: andrew@iiia.csic.es; marco@iiia.csic.es
JORDI SABATER-MIR, Artificial Intelligence Research Institute IIIA-CSIC,
Bellaterra, Catalonia, Spain.
E-mail: jsabater@iiia.csic.es
Abstract
Trust models as thus far described in the literature can be seen as monolithic structures: a trust model is provided with a variety of inputs and the model performs calculations, resulting in a trust evaluation as output. The agent has no direct method of adapting its trust model to its needs in a given context. In this article, we propose a first step in allowing an agent to reason about its trust model, by providing a method for incorporating a computational trust model into the cognitive architecture of the agent. By reasoning about the factors that influence the trust calculation, the agent can effect changes in the computational process, thus proactively adapting its trust model. We give a declarative formalization of this system using a multi-context system, and we show that three contemporary trust models, BRS, ReGReT and ForTrust, can be incorporated into a BDI reasoning system using our framework.
Keywords: Trust, BDI, multi-context systems.
1 Introduction
Research into trust models is currently an active topic in the domain of multi-agent systems (MAS).
Many computational trust models have been proposed [27], based on theoretical foundations from
many different disciplines. For example, some trust models have cognitive foundations [8, 14, 30],
others are based on mathematical methods, such as statistical models [36, 37] or game theory [2], and still others use a social network-oriented approach [11] or are tailored to specific applications, such as negotiation [35] or the semantic web [34]. What these trust models have in common is that they
are computational methods for calculating an agent’s trust in a trustee based on the agent’s own
interactions with the trustee, as well as on information that is available in the environment about the
trustee. Such information may be direct communications from other agents in the system, giving their
own trust evaluations of the trustee; it may be reputation information; or it may be any other source
of information available in the system. The trust model then aggregates this information, using the
chosen mathematical method, and calculates an evaluation of the trustee.
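To make this concrete, the following is a minimal sketch of such a monolithic model, in which all aggregation happens behind a single evaluation call. The class, the names and the plain-average aggregation are illustrative assumptions of ours; they do not correspond to BRS, ReGReT, ForTrust or any other cited model.

```python
# A minimal sketch of a "monolithic" trust model: evidence goes in, a single
# trust evaluation comes out. All names and the plain-average aggregation are
# illustrative assumptions, not taken from any of the models cited above.
from statistics import mean


class MonolithicTrustModel:
    def __init__(self):
        self.direct_experiences = {}   # trustee -> [outcomes in [0, 1]]
        self.witness_reports = {}      # trustee -> [communicated evaluations]

    def evaluate(self, trustee):
        """Aggregate all available information into one value in [0, 1].
        The aggregation method is fixed inside the model; the agent can
        neither inspect it nor adapt it to the decision at hand."""
        evidence = (self.direct_experiences.get(trustee, [])
                    + self.witness_reports.get(trustee, []))
        return mean(evidence) if evidence else 0.5  # neutral prior


model = MonolithicTrustModel()
model.direct_experiences["bob"] = [1.0, 0.8]
model.witness_reports["bob"] = [0.4]
print(model.evaluate("bob"))  # ~0.73, with no way to discount the witness
```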
The problem with the trust models discussed so far in the literature is that an agent is unable to
change its trust model if it discerns a change in the environment. If we were to view the trust model from the agent's perspective, it would appear to be a 'black box' that takes the various information sources as input and produces, as output, an evaluation of how trustworthy the trustee is. However, as argued in [4], trust
is not just an evaluation of a trustee, but an integral part of the decision making process of an agent
in a social environment. For a trust evaluation to be meaningful in this process, it may be necessary
to customize the evaluation process to the decision that is being made. This is especially so in an
open MAS, where the environment may change.
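As a toy illustration of what customizing the evaluation process might mean, and nothing more (the exposed weighting parameter below is our assumption, not the multi-context mechanism developed later in this article), consider making one factor of the aggregation available to the deliberating agent:

```python
# Toy contrast with the sketch above: the same evidence, but with one
# aggregation factor exposed so the agent can adapt it to the context.
# The weighting scheme is an illustrative assumption, not the declarative
# formalization proposed in this article.
def evaluate(direct, witness, witness_weight=1.0):
    """Weighted average of direct outcomes and witness reports; an agent
    that suspects witnesses of lying in the current context can lower
    witness_weight instead of accepting a fixed, opaque model."""
    total = sum(direct) + witness_weight * sum(witness)
    weight = len(direct) + witness_weight * len(witness)
    return total / weight if weight else 0.5  # neutral prior


print(evaluate([1.0, 0.8], [0.4]))                      # ~0.73
print(evaluate([1.0, 0.8], [0.4], witness_weight=0.2))  # ~0.85
```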