Chanwoo Park

Seoul, South Korea

I’m Chanwoo Park. My work focuses on building pre-/post-training datasets that unlock various capabilities of large language models. I am also interested in building end-to-end AI solutions for the financial domain. With experience optimizing DNN training on GPU clusters, I am always open to research opportunities that involve computational challenges.

news

Jan 07, 2026 SK Telecom released A.X K1, a 519B-parameter MoE LLM that outperforms DeepSeek-V3.1 on math and coding benchmarks. As an intern at the Omnimodal Foundation Model Office, I contributed to enhancing the model’s reasoning capabilities.
Oct 29, 2025 In November 2025, I will start a research internship at SK Telecom! I hope to have a great experience with the Omnimodal Foundation Model Office! :sparkles:
Oct 29, 2025 Our papers, “Beyond Line-Level Filtering for the Pretraining Corpora of LLMs” and “Ko-MuSR: A Multistep Soft Reasoning Benchmark for LLMs Capable of Understanding Korean,” are now available on arXiv! :sparkles:
Jul 02, 2025 Our Korean-specialized LLM research was featured in DigitalToday! We developed Llama-Thunder-LLM, the Thunder-Tok tokenizer (44% token reduction), and a Korean benchmark. :sparkles:
Jun 27, 2025 Our paper, “UDC-VIT: A Real-World Video Dataset for Under-Display Cameras,” has been accepted to ICCV 2025! :sparkles:

latest posts

selected publications