The International Conference on Machine Learning (ICML) is held every year and is one of the leading meetings in machine learning and artificial intelligence, bringing together researchers, practitioners and industry professionals from all over the world to advance research in the field.
ICML is renowned for publishing cutting-edge research on all aspects of machine learning used in closely related fields such as artificial intelligence, statistics and data science, as well as important application areas such as computer vision, computational biology, speech recognition and robotics.
On the occasion of the 40th edition of ICML, to be held in Hawaii from July 23 to 29, 2023, Foxstream, a member of the Vitaprotech group, is proud that a paper written by two of its researchers, Blaise Delattre and Quentin Barthélemy, in partnership with Paris Dauphine University and New York University, has been selected for presentation: “Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration”.
This recognition by one of the world’s leading conferences dedicated to machine learning reflects the scientific rigor Foxstream applies to continuously improving its solutions. The work undertaken by Blaise Delattre and Quentin Barthélemy aims to reinforce confidence in the deep learning algorithms used in the Foxstream solution, an essential requirement in the security sector.
Vitaprotech is proud to support its companies in their research and innovation work, and thus help build a safer world for all.
Find out more about the article “Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration”:
The Lipschitz constant of a neural network is a number characterizing its stability; the smaller the number, the more stable the neural network.
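In more formal terms, a function f, such as a neural network, is said to be L-Lipschitz when, for any two inputs x and y:

‖f(x) − f(y)‖ ≤ L · ‖x − y‖

The constant L therefore bounds how much the output can move when the input moves: the smaller L is, the less a small perturbation of the input can change the output.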
Neural networks are algorithmic models capable of learning from data and performing tasks such as object classification, segmentation or intrusion detection. Despite their incredible success in these tasks, neural networks can also be fooled or corrupted by bad data or noise. For security, autonomous driving or medical applications, it is crucial to control their stability in order to trust the system’s output. Ensuring stability means making sure that a network does not change its behavior too much when something small changes in its inputs or parameters.
The paper introduces an efficient method, based on a procedure called Gram iteration, for computing an upper bound on the Lipschitz constant of convolutional layers.
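To give an intuition of the underlying idea, here is a minimal sketch, assuming a plain matrix rather than the convolutional layers handled in the paper, of how repeated Gram products can bound the spectral norm (a Lipschitz constant of a linear layer); the function name and iteration count are illustrative choices, not the authors’ implementation:

```python
import numpy as np


def gram_iteration_bound(W, n_iter=6):
    """Illustrative upper bound on the spectral norm of a matrix W.

    Each Gram product W <- W.T @ W squares the singular values; rescaling by
    the Frobenius norm keeps the values finite. Since the spectral norm never
    exceeds the Frobenius norm, taking the appropriate root of the final
    Frobenius norm (corrected for the rescalings) upper-bounds sigma_max(W).
    """
    W = np.asarray(W, dtype=np.float64)
    log_rescale = 0.0
    for _ in range(n_iter):
        frob = np.linalg.norm(W)              # Frobenius norm of current iterate
        W = W / frob                          # rescale to avoid overflow
        log_rescale = 2.0 * (log_rescale + np.log(frob))
        W = W.T @ W                           # Gram product: squares singular values
    log_bound = (np.log(np.linalg.norm(W)) + log_rescale) / 2.0 ** n_iter
    return float(np.exp(log_bound))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 64))
    print("Gram iteration bound:", gram_iteration_bound(A))
    print("Exact spectral norm :", np.linalg.norm(A, 2))
```

Because each Gram product squares the singular values, the largest one quickly dominates and the bound tightens after only a few iterations, which is what makes this kind of approach both precise and fast.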
This article was written in partnership with Paris Dauphine University and New York University.
Foxstream authors: Blaise Delattre, Quentin Barthélemy.
Academic authors: Professor Alexandre Allauzen (Paris Dauphine University), Alexandre Araujo (New York University).