Active Research Projects
NSF AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION)
with collaborators at UC Santa Barbara (Giovanni Vigna, Director), Purdue University, UC Berkeley, Georgia Tech, UIC, UIUC, NSU, Rutgers, U. Washington, and the University of Chicago (2023–2028)
ACTION seeks to change the way mission-critical systems are
protected against sophisticated, ever-changing security threats. In cooperation with (and learning from) security operations experts,
intelligent agents will use complex knowledge representation, logical
reasoning, and learning to identify flaws, detect attacks, perform
attribution, and respond to breaches in a timely and scalable fashion.
Center for Trustworthy Machine Learning (CTML)
This frontier project establishes the Center for Trustworthy Machine
Learning, a large-scale, multi-institution, multi-disciplinary
effort whose goal is to develop a scientific understanding of the risks
inherent to machine learning, and to develop the tools, metrics, and
methods to manage and mitigate them.
Econometrically Inferring and Using Individual Privacy Preferences
with Denis Nekipelov (UVA Economics) (NSF SaTC EAGER, 2019–2023)
This project combines research on mechanism design and econometrics to
provide a new perspective on privacy. Our goal is to develop methods
that use ideas from econometrics to reveal concrete privacy preferences
for individuals and aggregate distributions, and connect those
preferences to formal privacy models, including differential
privacy.
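Differential privacy, one of the formal models the project connects preferences to, can be illustrated with the standard Laplace mechanism. This is a generic textbook sketch, not code from the project; the function name and parameters are illustrative:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise scaled to sensitivity/epsilon,
    giving epsilon-differential privacy for the released statistic."""
    scale = sensitivity / epsilon
    # Inverse-transform sampling from Laplace(0, scale).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

# A smaller epsilon encodes a stronger privacy preference and yields
# noisier answers; econometrically inferred preferences could, in
# principle, calibrate this trade-off for individuals.
noisy_count = laplace_mechanism(true_value=100, sensitivity=1, epsilon=0.5)
```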
Previous Research Projects
These projects are no longer active, but current projects build on many
of the ideas and tools they developed.
Adversarial Machine Learning
An evolutionary framework based on genetic programming for automatically
finding variants that evade detection by machine learning-based malware
classifiers.
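The search can be sketched as a black-box loop that mutates a sample and keeps changes that lower the classifier's score. This toy hill-climbing illustration is not the project's framework (real genetic programming maintains a population with crossover, and the real system also verifies that each variant preserves its malicious behavior):

```python
import random

def evolve_evasive_variant(sample, classifier, mutate, rounds=100):
    """Repeatedly mutate a sample, keeping any mutation that lowers the
    (black-box) classifier's malware score."""
    best, best_score = sample, classifier(sample)
    for _ in range(rounds):
        candidate = mutate(best)
        score = classifier(candidate)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

# Toy demonstration: the "classifier" scores samples by the density of a
# byte pattern "X"; mutation rewrites one position at random.
toy_classifier = lambda s: s.count("X") / max(len(s), 1)

def toy_mutate(s):
    i = random.randrange(len(s))
    return s[:i] + ("Y" if s[i] == "X" else s[i]) + s[i + 1:]

variant, score = evolve_evasive_variant("XXAXXBXX", toy_classifier, toy_mutate)
```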
Secure Computation
Privacy-preserving machine learning combining secure multi-party
computation with differential privacy and other privacy techniques.
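A minimal building block of secure multi-party computation is additive secret sharing, which lets parties compute on data no single party can see. This is the generic textbook construction, not the project's protocol:

```python
import random

PRIME = 2**61 - 1  # field modulus for shares

def share(secret, n=3):
    """Split a secret into n additive shares that sum to it mod PRIME;
    any n-1 shares reveal nothing about the secret."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Additive sharing is linear: parties can add their shares locally, and
# the result is a valid sharing of the sum of the secrets.
a_shares, b_shares = share(12), share(30)
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
```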
Web/Mobile Application Security
An integrated suite of techniques for protecting
applications and their data from hostile environments.
Quantifying the risks of side-channel leaks in web
applications using a dynamic, black-box approach.
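The black-box idea can be illustrated by estimating how much an eavesdropper learns from observable response sizes alone. The `leaked_bits` helper below is a hypothetical simplification, not the project's metric:

```python
import math

def leaked_bits(responses_by_secret):
    """Group secret inputs by the response-size signature an eavesdropper
    would observe; k distinguishable signatures leak up to log2(k) bits."""
    signatures = {tuple(sorted(sizes))
                  for sizes in responses_by_secret.values()}
    return math.log2(len(signatures))

# Three secret inputs, but only two observable size patterns: even over
# an encrypted connection, an observer learns at most 1 bit per request.
bits = leaked_bits({"alice": [1402], "bob": [1730], "carol": [1402]})
```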
GuardRails
with Jonathan Burket, Austin DeVinney, Casey Mihaloew (part of AFOSR MURI)
A secure web application framework that provides rich data policies for Ruby on Rails.
Mechanisms that allow clients to enforce meaningful security policies on
untrusted content in mashup web pages.
Protecting privacy for social network applications using privacy-by-proxy.
Security through Diversity
Designing for Measurable Security
with Sal Stolfo and Steve Bellovin (Columbia University) (Air Force Office of Scientific Research)
Protecting systems from sophisticated and motivated adversaries by
automatically and continuously changing the attack surface of a running
system.
Using structured artificial diversity
to provide high security assurances against large classes of attacks.
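One way structured diversity yields high assurance is the N-variant idea: run automatically diversified variants with complementary weaknesses on the same input and treat any divergence as evidence of an attack. A minimal sketch, with the monitor reduced to an equality check:

```python
def n_variant_execute(input_data, variants):
    """Run the same input through diversified variants and accept the
    output only when every variant agrees; a divergence means an attack
    succeeded against one variant but not its complement."""
    results = [variant(input_data) for variant in variants]
    if any(r != results[0] for r in results[1:]):
        raise RuntimeError("variant divergence: possible attack detected")
    return results[0]
```

In a real deployment the variants would be diversified builds of the same program (e.g., with disjoint address spaces or complementary data encodings), so that no single exploit can succeed against all of them simultaneously.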
Genesis
with Jack Davidson, John Knight, and Anh Nguyen-Tuong
(DARPA)
Using automatically generated diversity at
various levels of abstraction to protect computer systems.
Physical Security
New approaches to cryptography, protocol, and system
design to provide adequate security on low-power devices.
How computing in the physical world impacts security.
Getting sensible behavior from collections of unreliable, unorganized
components.
Program Analysis
Techniques for automatically inferring temporal properties of
real-world software using dynamic analysis.
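The style of inference can be sketched by mining "a always precedes b" rules from observed executions. This is a toy illustration under the assumption that traces are sequences of event names, not the project's actual algorithm:

```python
def always_precedes(a, b, traces):
    """True if, in every observed trace, some a occurs before the first b."""
    for trace in traces:
        for i, event in enumerate(trace):
            if event == b:
                if a not in trace[:i]:
                    return False
                break
    return True

# Example traces of API calls observed during test runs.
traces = [["open", "read", "close"], ["open", "write", "close"]]
events = {e for t in traces for e in t}
rules = [(a, b) for a in sorted(events) for b in sorted(events)
         if a != b and always_precedes(a, b, traces)]
```

Properties that hold in every observed trace (here, "open always precedes close") become candidate specifications; because the analysis is dynamic, they are likely but not guaranteed to hold in general.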
Protecting vulnerable programs by storing security-critical data in a
separate protected store.
Reducing the cost and improving the scalability of program analysis using
lightweight static analysis (Splint).
Malware
Using the disk processor to improve virus detection and response by
recognizing viruses through their disk-level activity.
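Disk-level recognition can be sketched as matching the observed stream of block operations against an activity signature. A toy sketch; real signatures and the on-disk monitor would be far richer:

```python
def matches_disk_signature(operations, signature):
    """Check whether the signature appears as an in-order subsequence of
    the observed (operation, block) stream seen by the disk processor."""
    stream = iter(operations)
    return all(step in stream for step in signature)

# Hypothetical trace: a pattern of writes to low-numbered blocks (e.g.,
# boot-sector regions) in order could indicate infection activity.
ops = [("read", 0), ("write", 7), ("write", 0), ("write", 1)]
suspicious = matches_disk_signature(ops, [("write", 0), ("write", 1)])
```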