Representation and querying of unfair evaluations in social rating systems

Mohammad Allahbakhsh*, Aleksandar Ignjatovic, Boualem Benatallah, Seyed-Mehdi-Reza Beheshti, Norman Foo, Elisa Bertino

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)


Social rating systems are subject to unfair evaluations: users may, individually or collaboratively, try to promote or demote a product. Detecting unfair evaluations, in particular massive collusive attacks and honest-looking intelligent attacks, remains a real challenge for collusion detection systems. In this paper, we study the impact of unfair evaluations on online rating systems. First, we examine individual unfair evaluations and their impact on the reputation scores that social rating systems compute for people. We then propose a method for detecting collaborative unfair evaluations, also known as collusion. The proposed model uses a frequent itemset mining technique to detect candidate collusion groups and sub-groups. We use several indicators to identify collusion groups and to estimate how destructive such colluding groups can be. The approaches presented in this paper have been implemented in prototype tools and experimentally validated on synthetic and real-world datasets.
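The abstract's core idea of mining candidate collusion groups can be illustrated with a minimal Apriori-style frequent itemset search, treating each product as a transaction and its raters as items: any group of raters that co-rates many products is a candidate colluding group. This is only a hedged sketch of the general technique, not the paper's actual algorithm; the data, names, and `min_support` threshold below are illustrative assumptions.

```python
from itertools import combinations

# Hypothetical toy data: product -> set of raters. Each product acts as a
# "transaction" and the raters are the "items"; none of this comes from the
# paper's real or synthetic datasets.
ratings = {
    "p1": {"u1", "u2", "u3"},
    "p2": {"u1", "u2", "u3"},
    "p3": {"u1", "u2"},
    "p4": {"u4"},
}

def frequent_rater_groups(transactions, min_support=2, max_size=3):
    """Apriori-style search for groups of raters who co-rate at least
    `min_support` products: these are candidate collusion groups."""
    def support(group):
        # Number of products rated by every member of the group.
        return sum(1 for raters in transactions.values() if group <= raters)

    items = sorted({u for raters in transactions.values() for u in raters})
    frequent = {}
    # Start from frequent single raters (Apriori's pruning base).
    current = [frozenset([u]) for u in items
               if support(frozenset([u])) >= min_support]
    size = 1
    while current and size < max_size:
        size += 1
        # Join step: combine frequent (k-1)-groups into k-candidates.
        candidates = {a | b for a, b in combinations(current, 2)
                      if len(a | b) == size}
        current = [g for g in candidates if support(g) >= min_support]
        for g in current:
            frequent[tuple(sorted(g))] = support(g)
    return frequent

print(frequent_rater_groups(ratings))
# {u1, u2, u3} co-rate two products, so the full trio (and its pairs)
# surfaces as a candidate group, while the lone rater u4 does not.
```

In the paper's pipeline, such candidate groups would then be scored by the additional indicators (e.g. degree of fairness) to decide which ones are genuine collusions and how destructive they are.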

Original language: English
Pages (from-to): 68-88
Number of pages: 21
Journal: Computers and Security
Publication status: Published - 1 Mar 2014
Externally published: Yes


  • Biclique
  • Collusion
  • Degree of fairness
  • Rating system
  • Reputation
  • Unfair evaluation


