Chanwoo Park

Research intern at the Omnimodal Foundation Model Office, SK Telecom.


Seoul, South Korea

I’m Chanwoo Park, a research intern at the Omnimodal Foundation Model Office at SK Telecom. My work focuses on building pre-/post-training datasets that unlock various capabilities of large language models. I am also interested in building AI solutions for the financial domain, with end-to-end solutions in mind. With experience optimizing DNN training on GPU clusters, I am always open to research opportunities that involve computational challenges.

news

Oct 29, 2025 From November 2025, I will start a research internship at SK Telecom! I hope to have a great experience with the Omnimodal Foundation Model Office! :sparkles:
Oct 29, 2025 Our papers, “Beyond Line-Level Filtering for the Pretraining Corpora of LLMs” and “Ko-MuSR: A Multistep Soft Reasoning Benchmark for LLMs Capable of Understanding Korean”, are now available on arXiv! :sparkles:
Jul 02, 2025 Our Korean-specialized LLM research was featured in DigitalToday! We developed Llama-Thunder-LLM, the Thunder-Tok tokenizer (44% token reduction), and a Korean benchmark. :sparkles:
Jun 27, 2025 Our paper, “UDC-VIT: A Real-World Video Dataset for Under-Display Cameras”, has been accepted to ICCV 2025! :sparkles:

latest posts

selected publications