Trust models, as thus far described in the literature, can be seen as monolithic structures: a trust model is provided with a variety of inputs, performs its calculations, and produces a trust evaluation as output. The agent has no direct method of adapting its trust model to its needs in a given context. In this article, we propose a first step towards allowing an agent to reason about its trust model, by providing a method for incorporating a computational trust model into the agent's cognitive architecture. By reasoning about the factors that influence the trust calculation, the agent can effect changes in the computational process, thus proactively adapting its trust model. We give a declarative formalization of this system using a multi-context system, and we show that three contemporary trust models, BRS, ReGreT, and ForTrust, can be incorporated into a BDI reasoning system using our framework. © The Author, 2012. Published by Oxford University Press. All rights reserved.
Journal: Journal of Logic and Computation
Publication status: Published - 1 Feb 2013
Keywords: Multi-context systems
Koster, A., Schorlemmer, M., & Sabater-Mir, J. (2013). Opening the black box of trust: Reasoning about trust models in a BDI agent. Journal of Logic and Computation, 23(1), 25-58. https://doi.org/10.1093/logcom/exs003