Accelerated Computing Systems Lab (ACSys) is affiliated with CS, Yonsei University. We conduct research on system and architectural issues for accelerating various applications such as deep learning, compression algorithms and graph processing.
ACSys Lab is always looking for talented students (new M.S./Ph.D. students and undergraduate interns) on a rolling basis.
Please contact leejinho@yonsei.ac.kr if you are interested.
Oct. 2020: Our paper GradPIM: A Practical Processing-in-DRAM Architecture for Gradient Descent has been accepted at HPCA 2021
Jul. 2020: Deokki Hong and Kanghyun Choi join the Lab. Welcome!
Jul. 2020: Our paper FlexReduce: Flexible All-reduce for Distributed Deep Learning on Asymmetric Network Topology is published at DAC 2020
Feb. 2020: Hohyun Kim joins the Lab. Welcome!
Oct. 2019: Our paper In-memory database acceleration on FPGAs: a survey is published in the VLDB Journal
Sep. 2019: Jinho Lee joined CS, Yonsei University as an assistant professor.
Aug. 2019: Our paper Accelerating conversational agents built with off-the-shelf modularized services is published in IEEE Pervasive Computing
Jun. 2019: Our demo received the Best Demo Award at ACM MobiSys 2019
We conduct research on system and architectural issues for accelerating various applications such as deep learning, compression algorithms, and graph processing, especially on FPGAs and GPUs. Some of the ongoing research topics are listed below, but you are also welcome to bring your own exciting topic.
Without a doubt, the most popular accelerator in use today is the GPU. However, the world is heading toward the next step: domain-specific accelerators. There is still much room for improvement in accelerator design, for example for graph processing, sparse training, or processing-in-memory techniques.
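To give a flavor of why such workloads are hard to accelerate, below is a minimal sketch of the sparse matrix-vector multiply (SpMV) kernel that underlies many graph algorithms, written in plain Python over a toy CSR-format graph. The arrays and values are illustrative only; the irregular, data-dependent accesses to x are exactly what specialized accelerator and processing-in-memory designs try to serve more efficiently than a general-purpose GPU.

```python
# Toy sparse matrix-vector multiply (SpMV) over a CSR-format graph.
# The graph and values below are illustrative, not from any specific system.

def spmv_csr(row_ptr, col_idx, values, x):
    """Compute y = A @ x for a sparse matrix A stored in CSR format."""
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(row_ptr) - 1):
        # Non-zeros of this row live in values[row_ptr[row]:row_ptr[row+1]].
        for k in range(row_ptr[row], row_ptr[row + 1]):
            y[row] += values[k] * x[col_idx[k]]   # irregular access to x
    return y

# 3x3 example with edges (0,1), (1,0), (1,2), (2,2)
row_ptr = [0, 1, 3, 4]
col_idx = [1, 0, 2, 2]
values  = [1.0, 1.0, 1.0, 1.0]
print(spmv_csr(row_ptr, col_idx, values, [1.0, 2.0, 3.0]))  # [2.0, 4.0, 3.0]
```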
Designing a neural architecture, especially in conjunction with specialized accelerators (e.g., NPUs), is a difficult and time-consuming task. Neural architecture search (NAS) aims to solve this problem in a way that everyone had in mind: designing DNNs using DNNs.
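To make the idea concrete, here is a deliberately tiny sketch of a search loop: sample candidate architectures from a search space and score them under an accuracy/latency trade-off. The search space, the proxy estimators, and plain random search are placeholders for illustration only; real hardware-aware NAS uses far richer spaces, trained predictors, and smarter search strategies.

```python
# Minimal sketch of hardware-aware NAS as random search over a toy space.
# Search space and scoring proxies are placeholders, not a real NAS system.
import random

SEARCH_SPACE = {
    "depth":  [2, 4, 8],
    "width":  [64, 128, 256],
    "kernel": [3, 5, 7],
}

def sample_architecture():
    """Pick one option per architectural knob."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def estimate_score(arch):
    # Placeholder: a real system would train/evaluate the candidate (or use
    # a learned predictor) and measure latency on the target NPU.
    accuracy_proxy = arch["depth"] * arch["width"] * 1e-4
    latency_proxy  = arch["depth"] * arch["kernel"] * arch["width"] * 1e-5
    return accuracy_proxy - latency_proxy   # reward accuracy, penalize latency

best = max((sample_architecture() for _ in range(100)), key=estimate_score)
print("best candidate:", best)
```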
To utilize multiple devices (e.g., GPUs) for high-speed DNN training, it is common to employ distributed learning. There are still many ways to improve current distributed learning methods: devising new communication algorithms, smartly pipelining the jobs, or changing how the devices synchronize.
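For context, the sketch below is a single-process simulation of ring all-reduce (reduce-scatter followed by all-gather), a standard baseline communication algorithm for gradient synchronization that work in this space aims to improve on. The worker count and gradient values are illustrative, and this is not meant to describe any particular algorithm of ours.

```python
# Toy single-process simulation of ring all-reduce:
# a reduce-scatter phase followed by an all-gather phase.

def ring_allreduce(grads):
    """grads: one gradient list per worker; the length must be divisible by
    the number of workers. Returns the per-worker buffers after all-reduce."""
    n = len(grads)                      # number of workers on the ring
    chunk = len(grads[0]) // n          # each worker "owns" one chunk
    buf = [list(g) for g in grads]

    def span(c):                        # index range of chunk c
        return range(c * chunk, (c + 1) * chunk)

    # Phase 1: reduce-scatter. After n-1 steps, worker i holds the fully
    # reduced chunk (i + 1) % n.
    for s in range(n - 1):
        for i in range(n):
            c = (i - s) % n             # chunk worker i passes to its neighbor
            for j in span(c):
                buf[(i + 1) % n][j] += buf[i][j]

    # Phase 2: all-gather. Each reduced chunk travels around the ring.
    for s in range(n - 1):
        for i in range(n):
            c = (i + 1 - s) % n
            for j in span(c):
                buf[(i + 1) % n][j] = buf[i][j]
    return buf

workers = [[1.0, 2.0, 3.0], [10.0, 20.0, 30.0], [100.0, 200.0, 300.0]]
print(ring_allreduce(workers))  # every worker ends with [111.0, 222.0, 333.0]
```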
Most cloud companies now provide deep-learning-as-a-service (DLaaS). A significant issue is that many customers want to keep their information (networks and datasets) private. Even with the strong security offered by cloud providers, customers may not want to expose this information to the cloud itself. We study ways to keep the customer's network or dataset private while still utilizing the cloud-provided computing power.
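As a toy illustration of the flavor of this problem, the sketch below shows additive secret sharing, one generic building block used by some privacy-preserving inference and training schemes: the input is split into random-looking shares, each cloud party computes on its share, and linear operations combine correctly on reconstruction. This is only a generic example, not a description of our methods; the modulus and values are illustrative.

```python
# Toy additive secret sharing over the ring of integers mod 2**32.
# Illustrative only; not any particular privacy-preserving DLaaS system.
import random

MOD = 2**32

def share(x, n_parties=2):
    """Split integer x into n additive shares; any n-1 shares alone look random."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

secret_pixel = 200                       # e.g., one pixel of a private input
s = share(secret_pixel)
print(s)                                 # two random-looking values
print(reconstruct(s))                    # 200

# Linear operations can be done share-wise by the cloud parties:
a, b = share(3), share(4)
print(reconstruct([(x + y) % MOD for x, y in zip(a, b)]))  # 7
```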