Research projects
My research spans speech synthesis, automated machine learning, and large language models.
TTS
Developed high-quality text-to-speech synthesis systems, focusing on HMM-based and unit selection methods for real-time speech generation.
Real-time speech synthesis combining HMM and unit selection for production-quality TTS.
Multi-domain TTS approach using automatic domain classification for improved synthesis quality.
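The hybrid systems above rest on a unit-selection search: pick one recorded unit per target position so that the total of a target cost (fit to the HMM-predicted acoustics) and a join cost (smoothness between consecutive units) is minimal. A toy Viterbi-style sketch of that search, with scalar stand-ins for acoustic features and hypothetical cost functions:

```python
# Toy sketch of unit-selection search: choose one candidate unit per
# target position, minimizing target cost (fit to the HMM prediction)
# plus join cost (smoothness across the join). Scalars stand in for
# real acoustic feature vectors; costs are illustrative.

def select_units(targets, candidates, target_cost, join_cost):
    """Viterbi search over candidate units, one list per target position."""
    # best[i][j] = cheapest total cost of a path ending in candidates[i][j]
    best = [[target_cost(targets[0], c) for c in candidates[0]]]
    back = []
    for i in range(1, len(targets)):
        row, ptr = [], []
        for c in candidates[i]:
            tc = target_cost(targets[i], c)
            costs = [best[i - 1][k] + join_cost(p, c)
                     for k, p in enumerate(candidates[i - 1])]
            k = min(range(len(costs)), key=costs.__getitem__)
            row.append(tc + costs[k])
            ptr.append(k)
        best.append(row)
        back.append(ptr)
    # Trace back the cheapest path.
    j = min(range(len(best[-1])), key=best[-1].__getitem__)
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [candidates[i][j] for i, j in enumerate(path)]

# Hypothetical 1-D example: targets are HMM-predicted values.
targets = [1.0, 2.0, 3.0]
candidates = [[0.9, 1.5], [1.8, 2.6], [2.9, 3.5]]
picked = select_units(targets, candidates,
                      target_cost=lambda t, c: abs(t - c),
                      join_cost=lambda a, b: 0.1 * abs(b - a))
```

Production systems search over lattices of real speech units with multi-dimensional costs; the dynamic program above is the same shape.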
AutoML
Pioneered adaptive neural architecture learning through ensemble methods and automated machine learning frameworks.
Framework for automatically learning neural network architectures through adaptive ensemble learning.
Scalable implementation of AdaNet for production machine learning systems.
Multi-objective optimization framework for agnostic learning scenarios.
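The adaptive ensemble idea can be sketched in miniature: each round proposes candidate subnetworks of increasing "depth" and keeps the one that most lowers a complexity-penalized training objective. This is an illustrative toy, not the published AdaNet algorithm; polynomial degree stands in for subnetwork depth, and the penalty weight is arbitrary.

```python
# Toy AdaNet-style growth: greedily add the candidate predictor whose
# penalized objective (MSE + lam * total complexity) is lowest, fitting
# each candidate to the current ensemble's residual.

def fit_const(xs, ys):
    m = sum(ys) / len(ys)
    return lambda v: m

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) or 1e-12
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return lambda v: a + b * v

def grow_ensemble(xs, ys, rounds=3, lam=0.05):
    ensemble = []  # list of (predictor, complexity)

    def predict(v):
        return sum(f(v) for f, _ in ensemble)

    def objective(ens):
        mse = sum((sum(f(v) for f, _ in ens) - t) ** 2
                  for v, t in zip(xs, ys)) / len(xs)
        return mse + lam * sum(c for _, c in ens)

    for _ in range(rounds):
        residual = [t - predict(v) for v, t in zip(xs, ys)]
        # "Shallow" vs "deeper" candidates, charged by complexity.
        candidates = [(fit_const(xs, residual), 1),
                      (fit_linear(xs, residual), 2)]
        best = min((ensemble + [c] for c in candidates), key=objective)
        if ensemble and objective(best) >= objective(ensemble):
            break  # no candidate improves the penalized objective
        ensemble = best
    return predict

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # y = 2x + 1
model = grow_ensemble(xs, ys)
```

Here the linear candidate fits the data exactly in round one, and the penalty then stops further growth, illustrating how the objective trades accuracy against architecture size.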
LLMs
Research on efficient training methods for large language models and on the theoretical foundations of transformers.
Novel approach to efficiently train deep networks by fusing pre-trained model initializations.
Theoretical insights into how transformers learn in-context without explicit parameter updates.
Understanding the relationship between prompts and model parameters in modern transformers.
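The in-context learning result above has a compact numerical illustration: with a suitable linear-attention construction, the prediction on a query token equals the prediction of a linear model after one gradient step on the in-context examples, with no explicit parameter update. A 1-D sketch under illustrative data and step size:

```python
# Linear attention vs. one explicit gradient step on the in-context
# examples. The equality below is the point: the attention layer
# "performs" the gradient step implicitly. 1-D toy construction.

def attention_prediction(examples, x_query, eta):
    """Linear attention with key = x, value = eta * y:
    output = sum_i (x_query * x_i) * (eta * y_i)."""
    return sum((x_query * x) * (eta * y) for x, y in examples)

def one_gd_step_prediction(examples, x_query, eta):
    """Start from w = 0, take one gradient step on
    L(w) = 0.5 * sum_i (w * x_i - y_i)^2, then predict w * x_query."""
    grad = sum((0.0 * x - y) * x for x, y in examples)  # dL/dw at w = 0
    w = 0.0 - eta * grad                                # = eta * sum x*y
    return w * x_query

examples = [(1.0, 2.0), (2.0, 3.9), (-1.0, -2.1)]
eta = 0.1
a = attention_prediction(examples, 1.5, eta)
b = one_gd_step_prediction(examples, 1.5, eta)
# a and b agree: prompt tokens act like training data for an
# implicit update, without touching the model's parameters.
```

This is the smallest instance of the prompts-as-implicit-parameters view: the in-context examples enter the computation exactly where a weight update otherwise would.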