Research projects

My research spans speech synthesis, automated machine learning, and large language models.

TTS

Developed high-quality text-to-speech synthesis systems, focusing on HMM-based and unit-selection methods for real-time speech generation.

Recent Advances in Google Real-time HMM-driven Unit Selection Synthesizer

Real-time speech synthesis combining HMM and unit selection for production-quality TTS.
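The core of unit selection is a dynamic-programming search over recorded units: each candidate pays a target cost (mismatch against the HMM-predicted specification) plus a join cost (discontinuity with its neighbor). A minimal sketch of that search, with illustrative 1-D "pitch" specs standing in for real acoustic features:

```python
# Toy sketch of unit selection: choose one recorded unit per target position
# by minimizing target cost + join cost via dynamic programming (Viterbi).
# The 1-D pitch values and cost functions below are purely illustrative.

def select_units(targets, candidates, target_cost, join_cost):
    """targets: per-position specs; candidates: per-position lists of units."""
    n = len(targets)
    # best[i][j] = (cumulative cost, backpointer) for candidate j at position i
    best = [[(target_cost(targets[0], c), None) for c in candidates[0]]]
    for i in range(1, n):
        row = []
        for c in candidates[i]:
            tc = target_cost(targets[i], c)
            cost, back = min(
                (best[i - 1][k][0] + join_cost(prev, c), k)
                for k, prev in enumerate(candidates[i - 1])
            )
            row.append((cost + tc, back))
        best.append(row)
    # Backtrack the cheapest path of unit indices.
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = best[i][j][1]
        path.append(j)
    return list(reversed(path))

# Target cost = distance to the target pitch; join cost penalizes pitch jumps.
targets = [100, 120, 110]
candidates = [[95, 130], [118, 90], [112, 140]]
path = select_units(targets, candidates,
                    target_cost=lambda t, c: abs(t - c),
                    join_cost=lambda a, b: 0.1 * abs(a - b))
```

In production the costs are vector distances over spectral and prosodic features, but the search structure is the same.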

Towards high-quality next-generation text-to-speech synthesis: A multidomain approach by automatic domain classification

Multi-domain TTS approach using automatic domain classification for improved synthesis quality.

AutoML

Pioneered adaptive neural architecture learning through ensemble methods and automated machine learning frameworks.

AdaNet: Adaptive Structural Learning of Artificial Neural Networks

Framework for automatically learning neural network architectures through adaptive ensemble learning.
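A rough sketch of the adaptive-ensemble idea (my simplified reading, not the released library): grow the ensemble one subnetwork at a time, and accept a candidate only if it lowers a regularized objective of training loss plus a complexity penalty.

```python
import numpy as np

# AdaNet-style sketch: iteratively add subnetworks to an ensemble, keeping a
# candidate only when it improves loss + lambda * complexity. The random-
# feature "subnetworks" and the depth-as-complexity measure are illustrative.

rng = np.random.default_rng(0)

def subnetwork(depth, dim):
    """A tiny random-feature subnetwork; deeper candidates are more complex."""
    ws = [rng.normal(size=(dim, dim)) for _ in range(depth)]
    def f(x):
        h = x
        for w in ws:
            h = np.tanh(h @ w)
        return h.mean(axis=1)  # scalar output per example
    return f, depth  # depth doubles as the complexity measure

def objective(preds, y, complexity, lam=0.001):
    return np.mean((preds - y) ** 2) + lam * complexity

# Toy regression data.
x = rng.normal(size=(200, 5))
y = np.sin(x[:, 0])

ensemble, weights, total_cx = [], [], 0
best = objective(np.zeros(len(y)), y, 0)
for _ in range(10):
    depth = max((d for _, d in ensemble), default=0) + 1
    # Candidates: one subnetwork at the current depth, one a layer deeper.
    for cand_depth in (max(depth - 1, 1), depth):
        f, cx = subnetwork(cand_depth, x.shape[1])
        preds = sum(w * g(x) for (g, _), w in zip(ensemble, weights))
        fx = f(x)
        # Fit the new subnetwork's mixture weight by 1-D least squares.
        w_new = ((y - preds) @ fx) / (fx @ fx + 1e-8)
        new_obj = objective(preds + w_new * fx, y, total_cx + cx)
        if new_obj < best:  # accept only if the regularized objective improves
            ensemble.append((f, cand_depth)); weights.append(w_new)
            total_cx += cx; best = new_obj
            break
```

The key design point is that architecture search and weight learning share one objective, so the ensemble stops growing once extra capacity no longer pays for its complexity penalty.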

AdaNet: A Scalable and Flexible Framework for Automatically Learning Ensembles

Scalable implementation of AdaNet for production machine learning systems.

Agnostic Learning with Multiple Objectives

Multi-objective optimization framework for agnostic learning scenarios.
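In the spirit of that setting, a hedged sketch of a min-max scheme: find parameters that do well on the worst convex combination of several losses, alternating gradient descent on the model with multiplicative-weight updates on the mixture. The losses and step sizes below are illustrative.

```python
import numpy as np

# Min-max sketch: min over theta of max over simplex weights lam of the
# lam-weighted losses, via alternating descent/ascent. Illustrative data.

rng = np.random.default_rng(1)
x = rng.normal(size=(100, 3))
y1 = x @ np.array([1.0, 0.0, 0.0])   # objective 1: fit this target
y2 = x @ np.array([0.0, 1.0, 0.0])   # objective 2: fit a conflicting target

def losses(theta):
    p = x @ theta
    return np.array([np.mean((p - y1) ** 2), np.mean((p - y2) ** 2)])

def grads(theta):
    p = x @ theta
    return np.stack([2 * x.T @ (p - y1) / len(x),
                     2 * x.T @ (p - y2) / len(x)])

theta = np.zeros(3)
lam = np.array([0.5, 0.5])               # mixture weights on the simplex
for _ in range(500):
    theta -= 0.05 * lam @ grads(theta)   # descend the lam-weighted loss
    lam *= np.exp(0.1 * losses(theta))   # upweight the currently worse loss
    lam /= lam.sum()

worst = losses(theta).max()              # worst-case loss after training
```

Because the two targets conflict, the solution hedges between them rather than minimizing either loss alone.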

LLMs

Research on efficient training methods and on the theoretical foundations of large language models and transformers.

Deep Fusion: Efficient Network Training via Pre-trained Initializations

Novel approach to efficiently train deep networks by fusing pre-trained model initializations.
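One simple way to picture fusing pre-trained initializations (my illustration, not the paper's exact operator): place two trained width-d layers on the block diagonal of a width-2d layer, so the wide model starts out computing both small models side by side.

```python
import numpy as np

# Block-diagonal fusion sketch: initialize a wide layer from two pre-trained
# narrow layers so training starts from their combined function.

def fuse(w_a, w_b):
    """Block-diagonal fusion of two (d_in, d_out) weight matrices."""
    d_in, d_out = w_a.shape
    w = np.zeros((2 * d_in, 2 * d_out))
    w[:d_in, :d_out] = w_a
    w[d_in:, d_out:] = w_b
    return w

rng = np.random.default_rng(0)
w_a = rng.normal(size=(4, 4))   # stand-ins for two pre-trained layers
w_b = rng.normal(size=(4, 4))
w_big = fuse(w_a, w_b)

# The fused layer reproduces both small layers on their input slices.
x = rng.normal(size=(3, 4))
big_out = np.hstack([x, x]) @ w_big
```

Starting from such an initialization, rather than from random weights, is what makes the subsequent training of the larger network cheaper.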

Learning without training: The implicit dynamics of in-context learning

Theoretical insights into how transformers learn in-context without explicit parameter updates.
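A standard toy illustration of this phenomenon (not the paper's own construction): on in-context linear-regression pairs, a frozen linear-attention readout of the query equals the prediction of a model after one explicit gradient-descent step on those same pairs, so the context acts like an implicit weight update.

```python
import numpy as np

# Frozen attention vs. explicit gradient step on in-context regression pairs.
# All sizes and the step size eta are illustrative.

rng = np.random.default_rng(2)
d, n, eta = 4, 16, 0.1
w_true = rng.normal(size=d)
xs = rng.normal(size=(n, d))      # in-context example inputs
ys = xs @ w_true                  # in-context example labels
x_q = rng.normal(size=d)          # query token

# (1) Linear attention with identity key/query maps and labels as values:
#     prediction = eta * sum_i y_i * <x_i, x_q>
attn_pred = eta * ys @ (xs @ x_q)

# (2) One explicit gradient step on the squared loss starting from w = 0:
#     w_1 = eta * sum_i y_i * x_i   (up to scaling)
w_1 = eta * ys @ xs
gd_pred = w_1 @ x_q

# The two predictions coincide, with no parameters ever being updated.
```

The theoretical work asks when and why trained transformers realize updates of this kind implicitly, rather than by construction as in this toy.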

Transmuting prompts into weights

Understanding the relationship between prompts and model parameters in modern transformers.