Engineering trust alignment: Theory, method and experimentation

Andrew Koster, Marco Schorlemmer, Jordi Sabater-Mir

    Research output: Contribution to journal › Article › peer-review

    8 Citations (Scopus)

    Abstract

    In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions. However, in such open systems the agents do not necessarily use the same, or even similar, trust models, leading to semantic differences between the trust evaluations of different agents. Hence, to make use of communicated trust evaluations, agents first need to align their trust models. We explain why currently proposed solutions, such as common ontologies or ontology alignment methods, lead to additional problems, and we propose a novel approach. We show how a trust alignment can be formed by considering the interactions that agents share, and we describe a mathematical framework that formulates precisely how these interactions support the trust evaluations of both agents. We show how this framework can be used in the alignment process and explain how an alignment should be learned. Finally, we demonstrate the alignment process in practice, using a first-order regression algorithm to learn an alignment and testing it in an example scenario.
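
    As a rough illustration of the learning step described above, the sketch below fits a simple linear regression that translates one agent's communicated trust evaluations into another agent's terms, using shared interactions as training data. This is only a simplified, propositional stand-in for the first-order (relational) regression algorithm the paper actually employs, and every name in it (the two trust models, the interaction features, the align function) is hypothetical.

    # Illustrative sketch only: a propositional stand-in for the paper's
    # first-order regression. All models and feature names are hypothetical.
    import numpy as np

    def trust_model_a(interaction):
        # Agent A weighs punctuality heavily (hypothetical trust model).
        return 0.7 * interaction["on_time"] + 0.3 * interaction["quality"]

    def trust_model_b(interaction):
        # Agent B cares mostly about quality (hypothetical trust model).
        return 0.2 * interaction["on_time"] + 0.8 * interaction["quality"]

    # Interactions both agents observed, described by shared features.
    rng = np.random.default_rng(0)
    shared = [{"on_time": rng.random(), "quality": rng.random()} for _ in range(50)]

    # B communicates its evaluations; A pairs them with its own evaluations
    # and the shared interaction descriptions, then fits a regression so it
    # can translate future communicated evaluations into its own terms.
    X = np.array([[trust_model_b(i), i["on_time"], i["quality"], 1.0] for i in shared])
    y = np.array([trust_model_a(i) for i in shared])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

    def align(communicated_trust, interaction):
        # A's learned translation of a trust value communicated by B.
        features = np.array([communicated_trust, interaction["on_time"], interaction["quality"], 1.0])
        return float(features @ coeffs)

    The point of the sketch is only the shape of the problem: the shared interactions ground both agents' evaluations, and the learned mapping lets one agent reinterpret the other's communicated trust values.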
    Original language: English
    Pages (from-to): 450-473
    Journal: International Journal of Human-Computer Studies
    Volume: 70
    Issue number: 6
    DOIs
    Publication status: Published - 1 Jun 2012

    Keywords

    • Alignment
    • Channel theory
    • Regression
    • Trust
