My research aims to build more reliable, general, and accessible AI systems for software engineering. More specifically, I focus (currently or previously) on the following Research Problems (RPs):
RP1: How to ensure the correctness of AI infrastructure (PyTorch, TVM, vLLM, etc.)?
RP2: How to unleash the power of LLMs in specific domains (fuzzing, etc.)?
RP3: How to use LLM-based approaches to provide quality assurance for software communities and repositories (GitHub, Stack Overflow, etc.)?