Opening the black box of trust: Reasoning about trust models in a BDI agent

Andrew Koster, Marco Schorlemmer, Jordi Sabater-Mir

    Research output: Contribution to journal › Article › peer-review

    20 Citations (Scopus)

    Abstract

    Trust models as described thus far in the literature can be seen as monolithic structures: a trust model is provided with a variety of inputs, performs its calculations, and produces a trust evaluation as output. The agent has no direct means of adapting its trust model to its needs in a given context. In this article, we propose a first step towards allowing an agent to reason about its trust model, by providing a method for incorporating a computational trust model into the cognitive architecture of the agent. By reasoning about the factors that influence the trust calculation, the agent can effect changes in the computational process, thus proactively adapting its trust model. We give a declarative formalization of this system using a multi-context system, and we show that three contemporary trust models, BRS, ReGReT and ForTrust, can be incorporated into a BDI reasoning system using our framework. © The Author, 2012. Published by Oxford University Press. All rights reserved.
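
    The abstract's central idea is to turn the trust computation from a black box into something the agent's belief-level reasoning can inspect and adjust. As a rough illustration of that idea only, and not the paper's multi-context formalization, the following hypothetical Python sketch exposes the trust calculation as weighted factor beliefs that deliberation can reweight; all names (TrustFactor, BdiAgent, adapt_trust_model) are assumptions introduced for illustration.

        # Illustrative sketch only: a toy BDI-style agent whose trust computation
        # is exposed as beliefs about weighted factors rather than hidden in a
        # black box. Names and structure are hypothetical, not the authors' model.
        from dataclasses import dataclass

        @dataclass
        class TrustFactor:
            name: str       # e.g. "direct_experience", "reputation"
            value: float    # normalized observation in [0, 1]
            weight: float   # how much this factor influences the evaluation

        class BdiAgent:
            def __init__(self):
                # Beliefs about the factors feeding the trust calculation.
                self.trust_beliefs = {}

            def update_belief(self, target, factor):
                self.trust_beliefs.setdefault(target, {})[factor.name] = factor

            def trust(self, target):
                # The trust evaluation is an explicit function of believed
                # factors, so deliberation can inspect and adjust it.
                factors = self.trust_beliefs.get(target, {}).values()
                total_weight = sum(f.weight for f in factors)
                if total_weight == 0:
                    return 0.5  # no information: fall back to a neutral prior
                return sum(f.value * f.weight for f in factors) / total_weight

            def adapt_trust_model(self, target, factor_name, new_weight):
                # Because the factors are beliefs, the agent can proactively
                # reweight them when its goals or context change.
                if factor_name in self.trust_beliefs.get(target, {}):
                    self.trust_beliefs[target][factor_name].weight = new_weight

        agent = BdiAgent()
        agent.update_belief("seller_1", TrustFactor("direct_experience", 0.9, 0.7))
        agent.update_belief("seller_1", TrustFactor("reputation", 0.4, 0.3))
        print(agent.trust("seller_1"))  # 0.75: weighted combination of factors
        agent.adapt_trust_model("seller_1", "reputation", 0.0)
        print(agent.trust("seller_1"))  # 0.9: reputation factor switched off

    In the paper itself this adaptation is formalized declaratively through a multi-context system linking the trust model to the agent's BDI contexts; the sketch above only conveys the intuition that trust factors become first-class objects the agent can reason about.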
    Original language: English
    Pages (from-to): 25-58
    Journal: Journal of Logic and Computation
    Volume: 23
    Issue number: 1
    DOIs
    Publication status: Published - 1 Feb 2013

    Keywords

    • BDI
    • Multi-context systems
    • Trust
