About Me

I'm Binglin (Kevin) Ji. I recently completed my master's degree in Electrical Engineering and Computer Engineering at Washington University in St. Louis. I'm currently a member of the Stream Based Supercomputing Lab, advised by Prof. Roger Chamberlain. Before coming to WashU, I worked at Lenovo Research, where I developed deep learning algorithms for computer vision problems and built a container-based MLOps system. Prior to that, I worked at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, where I conducted research on deep learning-based video segmentation.

Research

My research interests lie in Machine Learning (Generative Modeling), Parallel Computing (AI Inference Acceleration), and Probabilistic Sampling.

๐Ÿ” Online Active Target Discovery with Generative Model

Strategically sampling unobserved regions under a limited sampling budget is essential in various scientific and engineering domains. We model this problem as Active Target Discovery (ATD) and introduce novel frameworks that leverage diffusion dynamics to solve ATD problems.

Active Target Discovery under Uninformative Prior

Project Page Assuming zero domain knowledge, and inspired by neuroscience, we introduce EM-PTDM to solve the online-feedback ATD problem. More details in our paper: Active Target Discovery under Uninformative Prior: The Power of Permanent and Transient Memory (NeurIPS 2025) 🚀

Diffusion-guided Active Target Discovery

Project Page Given sufficient domain-knowledge data, we introduce DiffATD, the first approach to solve the online-feedback Active Target Discovery problem in partially observable environments. More details in our paper: Online Feedback Efficient Active Target Discovery in Partially Observable Environments (NeurIPS 2025) 🚀

โš™๏ธ Optimizing GCN Inference on Multi-Core Systems

Project Page Existing standard GNN libraries face performance and scalability challenges on modern multi-core systems, especially for large graphs (more than 100,000 vertices) with heavy embeddings. We optimized GCN inference with different parallel strategies chosen according to graph properties, taking into account the design trends of multi-core architectures. As a result, we achieved up to 2.64x inference speedup compared to DGL v2.4.0 (Deep Graph Library) and 3.36x compared to PyG v2.6.1 (PyTorch Geometric), both using PyTorch v2.3.1 as the backend. More details in our paper: FGI: Fast GNN Inference on Multi-Core Systems (IPDPS 2025 Workshops) 🚀

🎧 Outside of research, I enjoy rock music

You should check this out, one of my all-time favorites: Bon Jovi - Livin' on a Prayer (Hyde Park 2011)