Accelerated Computing Systems Lab (ACSys) is affiliated with the Department of Computer Science at Yonsei University. We conduct research on system and architectural issues for accelerating various applications such as deep learning, compression algorithms, and graph processing.
ACSys Lab is currently looking for talented students (graduate students and undergraduate interns).
Please contact email@example.com if you are interested.
We recruit new M.S./Ph.D. students and undergraduate interns on a rolling basis. Interested students, please contact firstname.lastname@example.org.
May 2021: Our paper Making a Better Use of Caches for GCN Accelerators with Feature Slicing and Automatic Tile Morphing has been accepted at CAL. Congratulations!
May 2021: Our paper AutoReCon: Neural Architecture Search-based Reconstruction for Data-free Compression has been accepted at IJCAI 2021.
Feb. 2021: We have two papers accepted to DAC 2021. Congratulations, authors!
Feb. 2021: Mingi Yoo joins the Lab. Welcome!
Oct. 2020: Our paper GradPIM: A Practical Processing-in-DRAM Architecture for Gradient Descent has been accepted at HPCA 2021.
Jul. 2020: Deokki Hong and Kanghyun Choi join the Lab. Welcome!
Jul. 2020: Our paper FlexReduce: Flexible All-reduce for Distributed Deep Learning on Asymmetric Network Topology has been published at DAC 2020.
We conduct research on system and architectural issues for accelerating various applications such as deep learning, compression algorithms, and graph processing, especially on FPGAs and GPUs. Some of the ongoing research topics are listed below; however, you are free to bring your own exciting topic.
Without a doubt, the most popular accelerator for AI today is the GPU. However, the world is heading toward the next step: AI-specific accelerators. There is still much room for improvement in accelerator design, for example optimizing dataflow, exploiting sparse network structures, or applying processing-in-memory techniques.
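As a small illustration of why sparsity matters, the sketch below (a plain-Python toy, not any particular accelerator design) multiplies a weight matrix stored in CSR form by a vector; just as in a sparse NPU dataflow, work is performed only for nonzero weights. The matrix values here are made up for the example.

```python
import numpy as np

def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x with A stored in CSR form: zero weights are never visited."""
    y = np.zeros(len(row_ptr) - 1)
    for row in range(len(row_ptr) - 1):
        # Iterate only over the nonzeros of this row.
        for k in range(row_ptr[row], row_ptr[row + 1]):
            y[row] += values[k] * x[col_idx[k]]
    return y

# A 3x3 weight matrix with only 4 nonzeros (~56% sparsity).
values  = np.array([2.0, 1.0, 3.0, 4.0])
col_idx = np.array([0, 2, 1, 2])
row_ptr = np.array([0, 2, 3, 4])
x = np.ones(3)
print(csr_matvec(values, col_idx, row_ptr, x))  # [3. 3. 4.]
```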
Designing a neural architecture, especially in relation to specialized accelerators (e.g., NPUs), is a difficult and time-consuming task. Neural architecture search (NAS) aims to solve this problem in a way everyone had in mind: designing DNNs using DNNs.
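To make the idea concrete, here is a minimal sketch of the simplest NAS baseline, random search over a toy search space. The space, the search budget, and the `train_and_evaluate` stub are all hypothetical; real NAS replaces the random sampler with a learned controller or a differentiable relaxation and actually trains each candidate.

```python
import random

# Hypothetical toy search space: each architecture is one choice per knob.
SEARCH_SPACE = {
    "depth":  [2, 4, 8],
    "width":  [64, 128, 256],
    "kernel": [3, 5, 7],
}

def sample_architecture():
    return {knob: random.choice(opts) for knob, opts in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    # Stand-in for the expensive part: training `arch` (or querying a
    # weight-sharing supernet) and returning validation accuracy.
    return random.random()

best_arch, best_score = None, float("-inf")
for _ in range(20):                       # search budget
    arch = sample_architecture()
    score = train_and_evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score
print(best_arch, best_score)
```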
To utilize multiple devices (e.g., GPUs) for high-speed DNN training, it is common to employ distributed learning. There are still many ways to improve current distributed learning methods: devising new communication algorithms, smartly pipelining jobs, or changing the way devices synchronize.
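For intuition about the communication-algorithm side, below is a minimal in-process simulation of the classic ring all-reduce, the baseline that work like FlexReduce improves on. Real libraries run the two phases as actual point-to-point transfers and overlap them with computation; this toy just performs the same 2(N-1)-step schedule on NumPy arrays.

```python
import numpy as np

def ring_allreduce(grads):
    """Sum equal-length gradient vectors, one per simulated device."""
    n = len(grads)
    chunks = [np.array_split(g.astype(float), n) for g in grads]
    # Phase 1 (reduce-scatter): at step t, device i passes chunk (i - t) % n
    # to its ring neighbor, which accumulates it.
    for t in range(n - 1):
        for i in range(n):
            c = (i - t) % n
            chunks[(i + 1) % n][c] = chunks[(i + 1) % n][c] + chunks[i][c]
    # Device i now holds the fully summed chunk (i + 1) % n.
    # Phase 2 (all-gather): forward the finished chunks around the ring.
    for t in range(n - 1):
        for i in range(n):
            c = (i + 1 - t) % n
            chunks[(i + 1) % n][c] = chunks[i][c].copy()
    return [np.concatenate(ch) for ch in chunks]

grads = [np.arange(8.0) * (i + 1) for i in range(4)]   # 4 simulated devices
assert all(np.allclose(g, sum(grads)) for g in ring_allreduce(grads))
```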
Most cloud companies now provide deep-learning-as-a-service (DLaaS). One significant issue is that many customers want to keep their information (networks, datasets) secure. Even with the strong security offered by cloud providers, customers may still want their information kept private from the provider itself. We study ways to keep the customer's network or dataset private while still being able to utilize the cloud-provided computing power.
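One well-known building block in this space is additive secret sharing, sketched below under strong simplifying assumptions: two non-colluding servers, a single linear layer, and made-up weights. Because the layer is linear, each server can compute on a random share without learning the input; nonlinear layers need heavier machinery (e.g., MPC protocols or homomorphic encryption), which is where many of the interesting systems problems live.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # model weights held in the cloud (made up)
x = rng.standard_normal(8)        # customer's private input

# The client splits x into two random shares; each share alone looks random.
# (Real protocols share over a finite ring for exact secrecy; floats are
# used here only for illustration.)
r = rng.standard_normal(8)
share_a, share_b = x - r, r

# Two non-colluding servers each compute on one share, never seeing x.
y_a = W @ share_a
y_b = W @ share_b

# The client adds the partial outputs to recover the true result,
# since W @ (x - r) + W @ r == W @ x for a linear layer.
assert np.allclose(y_a + y_b, W @ x)
```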