Exploiting Information Equivalence and Interchangeability between ML Representation Spaces with Nikhil Krishnaswamy

Date: 3/31

Time: 2:30pm

Location: Sage 4101


Abstract:

The ability of machine learning (ML) techniques to uncover patterns implicit in data at scale has catapulted artificial intelligence (AI) into mainstream use as a transformative technology. Despite this, much remains unknown about how exactly machine learning, particularly deep learning with neural networks, creates and exploits its high-dimensional representations. Current state-of-the-art (SOTA) AI approaches are task-specific and often use radically different architectures and training regimes to process different data. The resulting AI representations are not easily communicable to humans, leading to problems in interpretability, explainability, and ultimately trust. Moreover, vector representations that reside in different high-dimensional embedding spaces are not directly comparable. Put another way, AI representations are not only misaligned with human expectations; they are not even aligned with other AIs' representation spaces. Previous work has typically addressed these challenges with joint training methods, which can be expensive and time-consuming. In this talk, I will present novel methods of exploiting information equivalence between models to align diverse representations. In particular, I will present work that uses identical affine transformation techniques for face identification in vision models, for coreference resolution and cognate detection in language models, and for language grounding across modalities. That the same technique works for such diverse tasks across different model and architecture types suggests a strong information-level equivalence between representation subspaces, with potentially profound implications for ML model training and augmentation in low-compute and low-resource settings.
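The abstract does not spell out the implementation details of these affine transformation techniques, but the general idea of affinely aligning two embedding spaces can be illustrated with a minimal sketch. The snippet below (Python with NumPy, using entirely synthetic data and hypothetical dimensions) fits an affine map W, b from paired anchor vectors in a "source" model's space to a "target" model's space via ordinary least squares; it is an illustration of the generic technique, not a reconstruction of the speaker's method.

# Minimal illustrative sketch (synthetic data, not the speaker's actual code):
# learn an affine map W, b that carries vectors from one embedding space into
# another, given paired anchor points, using ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "source" embeddings and a hidden ground-truth affine relationship
# producing noisy "target" embeddings in a space of different dimensionality.
n, d_src, d_tgt = 500, 64, 48
X = rng.normal(size=(n, d_src))                   # source-space anchor vectors
W_true = rng.normal(size=(d_src, d_tgt))
b_true = rng.normal(size=d_tgt)
Y = X @ W_true + b_true + 0.01 * rng.normal(size=(n, d_tgt))  # target anchors

# Fit the affine map by augmenting X with a bias column and solving least squares.
X_aug = np.hstack([X, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)
W_hat, b_hat = coef[:-1], coef[-1]

# Map held-out source vectors into the target space and check alignment quality.
X_test = rng.normal(size=(100, d_src))
Y_test = X_test @ W_true + b_true
Y_pred = X_test @ W_hat + b_hat
print("mean alignment error:", np.mean(np.linalg.norm(Y_pred - Y_test, axis=1)))

In this toy setting the learned map recovers the ground-truth transformation almost exactly; the interesting empirical claim in the talk is that comparably simple affine maps suffice to align representation subspaces of real, independently trained models.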

Bio:

Nikhil Krishnaswamy is an Assistant Professor of Computer Science at Colorado State University. His research focuses on multimodal interactive agents, language and conceptual grounding in situated environments, and the properties of high-dimensional deep learning embedding spaces; his work has been funded by NSF, DARPA, and ARO. He holds a Ph.D. in Computer Science from Brandeis University and has won multiple research and demo awards, including at premier conferences such as AIEd, HCII, and ICAT-EGVE. He has served as an area chair for the COLING conference and on the program committees of many prestigious venues, including AAAI, ACL, CogSci, EACL, EMNLP, and NAACL.
