A theory of trust for a given system consists of a set of rules describing the trust of agents in that system. Within a given logical framework, such a theory is generally established from the agents' initial trust in the system's security mechanisms, and it provides a foundation for reasoning about agent beliefs as well as the security properties the system may satisfy. However, trust changes dynamically. When agents lose trust or gain new trust in a dynamic environment, a theory built on their initial trust must be revised; otherwise it can no longer serve any security purpose. This paper investigates the factors influencing agent trust and discusses how to revise theories of trust in dynamic environments. A methodology for revising and managing theories of trust for multiagent systems is proposed, comprising a method for modeling trust changes, a method for expressing theory changes, and a technique for obtaining a new theory from a given trust change. The proposed approach is general and can be applied to obtain an evolving theory of trust for agent-based systems.
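The revision idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's formalism: a theory is treated as the set of beliefs derivable by forward chaining from trust assumptions, so that retracting an assumption (a trust change) and re-deriving yields the revised theory, with beliefs that depended on the lost trust disappearing automatically. All rule and predicate names below are illustrative.

```python
def consequences(assumptions, rules):
    """Forward-chain: derive every fact supported by the current assumptions.

    rules is a list of (premises, conclusion) pairs, read as Horn-style
    implications: if all premises are derived, the conclusion is derived.
    """
    derived = set(assumptions)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Illustrative rule: if agent A trusts the server's authentication
# mechanism and the server vouches for B's key, A believes B's key.
rules = [
    ({"trust(A,auth)", "vouch(server,keyB)"}, "believe(A,keyB)"),
]

# Initial theory, established from A's initial trust.
theory = consequences({"trust(A,auth)", "vouch(server,keyB)"}, rules)
print("believe(A,keyB)" in theory)   # the belief is supported

# Trust change: A loses trust in the authentication mechanism.
revised = consequences({"vouch(server,keyB)"}, rules)
print("believe(A,keyB)" in revised)  # the dependent belief is gone
```

The design point the sketch makes is that revision by re-derivation keeps the theory consistent with the current trust state, whereas patching individual beliefs risks leaving conclusions whose supporting trust has been withdrawn.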
Number of pages: 10
Journal: IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
Publication status: Published - May 2006