About me
Welcome to my site. My name is Grigoris and I am an Assistant Professor at the University of Wisconsin-Madison.
My research focuses on reliable machine learning, in particular the development of robust models that perform well under noise and out-of-distribution data. Concretely:
- Architecture design: I have worked extensively on polynomial networks (PNs), which capture high-degree interactions between inputs (a minimal code sketch of this multiplicative structure appears after this list). My short-term goal is to understand the inductive bias and properties of existing architectures through empirical and theoretical studies; longer term, I am interested in a complete theoretical understanding of (neural/polynomial) networks, including their expressivity, trainability, generalization properties, and inductive biases. Our recent work provided the first characterization of the generalization of this class of functions and of the spectral bias of high-degree polynomials, highlighting how PNs can learn higher-frequency functions faster than regular feed-forward networks.
- Trustworthy models: My goal is to understand and enhance the performance of existing networks, particularly with respect to their extrapolation abilities and their robustness to malicious (adversarial) attacks. I am interested in both discriminative and generative models, including Large Language Models. In our recent work, we have studied adversarial attacks and defenses in the text domain, where exciting questions lie ahead. Our long-term goal is to develop models that are robust, fair, and capable of generalizing well to unseen combinations, with strong extrapolation abilities.
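To make the multiplicative structure of polynomial networks concrete, here is a minimal PyTorch sketch, loosely in the spirit of Hadamard-product-based formulations such as Π-nets. It is an illustrative toy under stated assumptions, not the exact architecture from any of the papers above; the class and parameter names (ToyPolynomialNet, hidden, degree) are hypothetical.

```python
import torch
import torch.nn as nn

class ToyPolynomialNet(nn.Module):
    """Toy polynomial network: each step takes the Hadamard (elementwise)
    product of a linear map of the input with the running representation,
    so `degree` multiplicative steps capture degree-`degree` interactions
    of the input coordinates, without any elementwise activation function.
    Illustrative sketch only, not the exact architecture from the papers."""

    def __init__(self, in_dim: int, hidden: int, out_dim: int, degree: int = 3):
        super().__init__()
        self.first = nn.Linear(in_dim, hidden)   # produces the degree-1 term
        self.maps = nn.ModuleList(                # one linear map per extra degree
            nn.Linear(in_dim, hidden) for _ in range(degree - 1)
        )
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x = self.first(z)
        for lin in self.maps:
            x = lin(z) * x + x   # Hadamard product raises the polynomial degree
        return self.head(x)

model = ToyPolynomialNet(in_dim=16, hidden=32, out_dim=10, degree=3)
print(model(torch.randn(4, 16)).shape)  # -> torch.Size([4, 10])
```

Note that the expressivity here comes from the multiplicative interactions rather than from nonlinear activations, which is precisely what makes the spectral and generalization properties of PNs interesting to study.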
News
- October 2024: We are organizing a workshop titled 'ColorAI: Connecting Low-Rank Representations in AI' in conjunction with AAAI'25. We welcome your submissions throughout October.
- September 2024: The following paper has been accepted at NeurIPS 2024: Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization.
- September 2024: I am delivering a tutorial titled 'Architecture design: from neural networks to foundation models' in conjunction with DSAA 2024 on 8th October. Additional info on the tutorial site.
- July 2024: We are organizing a workshop titled 'Fine-Tuning in Modern Machine Learning: Principles and Scalability' in conjunction with NeurIPS'24.
- July 2024: The following paper has been accepted at ECCV 2024: Multi-Identity Gaussian Splatting via Tensor Decomposition.
- July 2024: We are delivering a tutorial titled 'Scaling and Reliability Foundations in Machine Learning' in conjunction with ISIT 2024 on 7th July.
- June 2024: Grateful to Google and OpenAI for their grants supporting our research on trustworthy Large Language Models (LLMs).
- May 2024: The following papers have been accepted at ICML 2024:
- April 2024: We are organizing a tutorial titled 'Scaling and Reliability Foundations in Machine Learning' in conjunction with ISIT 2024 on 7th July.
- January 2024: The following papers have been accepted at ICLR 2024:
- January 2024: The following paper has been accepted at Transactions on Machine Learning Research (TMLR): 'PNeRV: A Polynomial Neural Representation for Videos'.
- November 2023: I was recognized as a top reviewer at NeurIPS 2023.
- October 2023: The following papers have been accepted at NeurIPS 2023: 'Maximum Independent Set: Self-Training through Dynamic Programming' and 'On the Convergence of Encoder-Only Shallow Transformers'.
- June 2023: The slides and the recording of our tutorial 'Deep Learning Theory for Vision' at CVPR'23 are available. More information: https://dl-theory.github.io/.
- May 2023: The following paper has been accepted at Transactions on Machine Learning Research (TMLR): 'Federated Learning under Covariate Shifts with Generalization Guarantees'.
- April 2023: The following paper has been accepted at ICML 2023: 'Benign Overfitting in Deep Neural Networks under Lazy Training'.
- April 2023: Awarded the DAAD AInet Fellowship, which recognizes outstanding early-career researchers. Topic: generative models in ML.
- March 2023: The following paper has been accepted at CVPR 2023: 'Regularization of polynomial networks for image recognition'.
- February 2023: Organizer of the tutorial on 'Polynomial Nets' in conjunction with AAAI'23: https://polynomial-nets.github.io/.
- January 2023: The following paper has been accepted at Transactions on Machine Learning Research (TMLR): 'Revisiting adversarial training for the worst-performing class'.
- December 2022: The following paper has been accepted at IEEE Transactions on Pattern Analysis and Machine Intelligence: 'Linear Complexity Self-Attention with 3rd Order Polynomials'.
- October 2022: I was recognized as a best reviewer at NeurIPS 2022.
- September 2022: The following papers have been accepted at NeurIPS 2022:
- 'Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)',
- 'Generalization Properties of NAS under Activation and Skip Connection Search',
- 'Sound and Complete Verification of Polynomial Networks',
- 'Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study'.
- August 2022: The slides used in the tutorial on polynomial networks (organized at CVPR'22) have been released.
- July 2022: I was awarded a best reviewer award (top 10%) at ICML 2022.
- July 2022: The following papers have been accepted at ECCV 2022: 'Augmenting Deep Classifiers with Polynomial Neural Networks' and 'MimicME: A Large Scale Diverse 4D Database for Facial Expression Analysis'. More information soon.
- June 2022: Organizer of the tutorial on 'Polynomial Nets' in conjunction with CVPR'22: https://polynomial-nets.github.io/previous_versions/index.html.
- April 2022: I was awarded a highlighted reviewer award at ICLR 2022.
- March 2022: The following paper has been accepted at CVPR 2022: 'Cluster-guided Image Synthesis with Unconditional Models'.
- February 2022: My talk on polynomial networks at the UCL Centre for Artificial Intelligence has been uploaded online.
- January 2022: The following papers have been accepted at ICLR 2022: 'Controlling the Complexity and Lipschitz Constant improves Polynomial Nets' and 'The Spectral Bias of Polynomial Neural Networks'.
Funding Acknowledgement
I would like to acknowledge the following organizations, which have generously supported various events or projects in the past. I am very thankful for their support:
- 2024: Google and OpenAI: grants on trustworthy Large Language Models (LLMs).
- 2024: ELISE Fellows Mobility Program: travel grant for a short-term visit to an ELLIS lab.