AAAI 2021 Workshop: Towards Robust, Secure and Efficient Machine Learning

Venue: Online
Machine learning technology improves with every passing day and has been applied to nearly every corner of society, offering substantial benefits to our daily lives. However, machine learning models face various threats. For example, machine learning models are known to be vulnerable to adversarial examples. Their existence reveals that current machine learning models can be easily fooled, leading to serious security concerns in machine learning systems such as autonomous driving vehicles or face recognition systems.
More recently, due to both data privacy requirements, as specified in the European Union's General Data Protection Regulation (GDPR), and the limitations of computation power, the training of machine learning models has extended from centralized to decentralized settings (i.e., distributed or federated learning), where the model suffers from even more threats. For example, in a federated learning setting, every client can mount various attacks, such as backdoor attacks, on the global model, since clients have direct access to it. How to prevent privacy leakage during the information exchange of a decentralized training method is also a critical issue.
At the same time, computation efficiency is a major concern for modern deep learning, in both inference and training. For inference, running models on edge devices is preferable for privacy, but edge devices have very limited computational resources. For training, gradient or weight exchange is necessary in decentralized settings, but this exchange requires communication, which may be slow. Furthermore, models that are robust to adversarial attacks usually require longer training time and orders of magnitude more FLOPs than standard networks.
This one-day workshop intends to bring together experts from the machine learning, security, and federated learning communities to work more closely on addressing these concerns. Specifically, we seek to study threats to, and defenses of, machine learning not only in a single-node setting but also in a distributed setting. In summary, we seek a holistic solution for robust, secure and efficient machine learning.
Time (PST) | Activity |
---|---|
15:45 – 16:00 | Presenters to connect and test the system |
16:00 – 16:05 | Opening Remarks by Prof. Qiang Yang [Video] |
16:05 – 16:35 | Keynote Session 1: Efficiency is the Key to Privacy (and Security) by Prof. Kurt Keutzer [PDF] [Video] |
16:35 – 17:15 | Technical Talks Session 1 (2 talks, 20 mins each including Q&A) |
17:15 – 17:20 | Break (Presenters should connect and test the system) |
17:20 – 17:50 | Keynote Session 2: Vertical Federated Kernel Learning by Prof. Heng Huang [PDF] [Video] |
17:50 – 18:30 | Technical Talks Session 2 (2 talks, 20 mins each including Q&A) |
18:30 – 18:35 | Break (Presenters should connect and test the system) |
18:35 – 19:05 | Keynote Session 3: On Private Prediction and Certified Removal by Dr. Laurens van der Maaten [Video] |
19:05 – 19:45 | Technical Talks Session 3 (2 talks, 20 mins each including Q&A) |
19:45 – 21:00 | Poster Session |
21:00 | End of Workshop |
Submissions can be a full technical paper (up to 8 pages) or a short paper (up to 4 pages), excluding references and supplementary materials.
Authors should rely on the supplementary material only for minor details that do not fit in the main paper.
Submissions must be anonymized for double-blind review.
The workshop will not have formal proceedings.
Please follow the AAAI 2021 LaTeX style for paper formatting.
The final submission must be in PDF. Please submit your paper at https://easychair.org/conferences/?conf=rseml2021
General Chair
Program Chair
Program Committee
Industrial Chair
Publicity Chair
Please do not hesitate to contact us or Kam Woh if you have questions. This website is also available at http://federated-learning.org/rseml2021/.
The webpage template is by the courtesy of ICCV 2019 Tutorial on Interpretable Machine Learning for Computer Vision.