IC Colloquium: Learning from Strategic Agents

Event details
Date | 12.10.2015
Hour | 16:15 – 17:30
Location |
Category | Conferences - Seminars
By: Stratis Ioannidis, Northeastern University
Video of his talk
Abstract:
Learning from personal or sensitive data is a cornerstone of several experimental sciences, such as medicine and sociology. It has also become a commonplace, and controversial, aspect of the Internet economy. The monetary and societal benefits of learning from personal data are often offset by the privacy costs incurred by participating individuals. In this talk, we study these issues from the point of view of mechanism design. We consider a learner that wishes to regress a linear function over sensitive data provided by strategic agents. We show that, even when agents may misreport their privacy costs or purposefully distort their data out of privacy concerns, a learner with a finite budget can still (a) incentivize truthful reporting and (b) learn models that are asymptotically accurate.
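To make the learning side of this setup concrete, here is a minimal illustrative sketch (not the mechanism from the talk): agents deliberately perturb their responses with noise of varying magnitude, and the learner fits the linear model by weighted least squares, down-weighting noisier reports. All names and parameter values below are assumptions for illustration; the estimate approaches the true model as the number of agents grows, echoing the asymptotic-accuracy claim.

```python
# Illustrative sketch: weighted least squares over agent reports that were
# perturbed with per-agent noise, weighting each report by 1 / variance.
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([2.0, -1.0])            # true linear model (assumed)
n = 5000                                 # number of agents

X = rng.normal(size=(n, 2))              # agents' feature vectors
tau = rng.uniform(0.1, 2.0, size=n)      # per-agent perturbation std. dev.
y = X @ theta + tau * rng.normal(size=n) # reports distorted for privacy

# Weighted least squares: solve (X' W X) theta_hat = X' W y with W = diag(1/tau^2).
w = 1.0 / tau**2
theta_hat = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

print(theta_hat)  # close to theta, and converges as n grows
```

In the talk's setting the weights themselves depend on the mechanism, since agents choose how much to distort their data based on payments; this sketch only shows why heterogeneous noise still permits an asymptotically accurate estimate.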
This is joint work with Rachel Cummings, Thibaut Horel, Patrick Loiseau, S. Muthukrishnan, and Katrina Ligett.
Bio:
Stratis Ioannidis is an Assistant Professor in the ECE Department at Northeastern University. He received his B.Sc. (2002) in Electrical and Computer Engineering from the National Technical University of Athens, and his M.Sc. (2004) and Ph.D. (2009) in Computer Science from the University of Toronto. Prior to joining Northeastern, he was a research scientist at Yahoo Labs in Sunnyvale, CA, and at the Technicolor research centers in Paris, France, and Los Altos, CA.
More information
Practical information
- General public
- Free
- This event is internal
Contact
- Host: Patrick Thiran