AI becomes truly useful when it recognizes its own boundaries and asks clarifying questions instead of giving confident but erroneous answers. This philosophy guides my research: I am convinced that understanding the limitations of ML models is essential for building trustworthy, robust, and deployable AI systems in high-stakes environments.

Hence, I study topics such as probabilistic machine learning for generative modeling, uncertainty quantification, and anomaly detection. I thrive at the intersection of Physics, Deep Learning, and Software Engineering, where I translate abstract probabilistic theory into reliable systems that respect physical constraints.

See Google Scholar for a full list of publications.