
Adversarial machine learning / Anthony D. Joseph, University of California, Berkeley, Blaine Nelson, Google, Benjamin I.P. Rubinstein, University of Melbourne, J.D. Tygar, University of California, Berkeley.

By: Joseph, Anthony D. | Nelson, Blaine | Rubinstein, Benjamin I. P. | Tygar, J. D. [authors]
Material type: Text
Language: English
Publication details: Cambridge, United Kingdom; New York, NY: Cambridge University Press, 2019
Description: pages; cm
ISBN: 9781107043466 (hardback)
Subject(s): Machine learning | Computer security | COMPUTERS / Security / General
DDC classification: 006.31
Other classification: COM053000
Contents:
1. Introduction
2. Background and notation
3. A framework for secure learning
4. Attacking a hypersphere learner
5. Availability attack case study: SpamBayes
6. Integrity attack case study: PCA detector
7. Privacy-preserving mechanisms for SVM learning
8. Near-optimal evasion of classifiers
9. Adversarial machine learning challenges.
Summary: "Written by leading researchers, this complete introduction brings together all the theory and tools needed for building robust machine learning in adversarial environments. Discover how machine learning systems can adapt when an adversary actively poisons data to manipulate statistical inference, learn the latest practical techniques for investigating system security and performing robust data analysis, and gain insight into new approaches for designing effective countermeasures against the latest wave of cyber-attacks. Privacy-preserving mechanisms and the near-optimal evasion of classifiers are discussed in detail, and in-depth case studies on email spam and network security highlight successful attacks on traditional machine learning algorithms. Providing a thorough overview of the current state of the art in the field, and possible future directions, this groundbreaking work is essential reading for researchers, practitioners and students in computer security and machine learning, and those wanting to learn about the next stage of the cybersecurity arms race"--
Holdings
Item type: General Books
Current library: CUTN Central Library (Generalia)
Collection: Non-fiction
Call number: 006.31 JOS
Status: Available
Barcode: 36789

This is a searchable open catalogue of all libraries of the Central University of Tamil Nadu.


Includes bibliographical references and index.


Powered by Koha