Computational Trust Models and Machine Learning / Edition 1
Barnes and Noble
Current price: $125.00
Computational Trust Models and Machine Learning provides a detailed introduction to the concept of trust and its application in various computer science areas, including multi-agent systems, online social networks, and communication systems. Identifying trust modeling challenges that cannot be addressed by traditional approaches, this book:
Explains how reputation-based systems are used to determine trust in diverse online communities
Describes how machine learning techniques are employed to build robust reputation systems
Explores two distinctive approaches to determining credibility of resources—one where the human role is implicit, and one that leverages human input explicitly
Shows how decision support can be facilitated by computational trust models
Discusses collaborative filtering-based, trust-aware recommendation systems
Defines a framework for translating a trust modeling problem into a learning problem
Investigates the objectivity of human feedback, emphasizing the need to filter out outlying opinions
Computational Trust Models and Machine Learning effectively demonstrates how novel machine learning techniques can improve the accuracy of trust assessment.