6.5930/1 Hardware Architecture for Deep Learning - Spring 2024


6.5930/1 Spring 2024 Paper Review List
  • Information on Paper Review (pdf)
Topic Papers
PC1 - Dataflows and Mapping
  • Heterogeneous Dataflow Accelerators for Multi-DNN Workloads
  • Wire-Aware Architecture and Dataflow for CNN Accelerators
  • Shortcut Mining: Exploiting Cross-layer Shortcut Reuse in DCNN Accelerators
  • MAERI: Enabling Flexible Dataflow Mapping over DNN Accelerators via Reconfigurable Interconnects
  • X-Cache: A Modular Architecture for Domain-Specific Caches
  • FLAT: An Optimized Dataflow for Mitigating Attention Bottlenecks
  • Sigma: Compiling Einstein Summations to Locality-Aware Dataflow
  • ZigZag: Enlarging Joint Architecture-Mapping Design Space Exploration for DNN Accelerators
  • Understanding Reuse, Performance, and Hardware Cost of DNN Dataflows: A Data-Centric Approach
  • CoSA: Scheduling by Constrained Optimization for Spatial Accelerators
  • A Full-Stack Search Technique for Domain Optimized Deep Learning Accelerators
  • Leveraging Domain Information for the Efficient Automated Design of Deep Learning Accelerators
  • DeFiNES: Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators through Analytical Modeling
PC2 - Sparsity
  • SparTen: A Sparse Tensor Accelerator for Convolutional Neural Networks
  • CANDLES: Channel-Aware Novel Dataflow-Microarchitecture Co-Design for Low Energy Sparse Neural Network Acceleration
  • Ristretto: An Atomized Processing Architecture for Sparsity-Condensed Stream Flow in CNN
  • Anticipating and Eliminating Redundant Computations in Accelerated Sparse Training
  • S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration
  • DPACS: Hardware Accelerated Dynamic Neural Network Pruning through Algorithm-Architecture Co-design
  • Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design
  • MatRaptor: A Sparse-Sparse Matrix Multiplication Accelerator Based on Row-Wise Product
  • SIGMA: A Sparse and Irregular GEMM Accelerator with Flexible Interconnects for DNN Training
  • Dual-Side Sparse Tensor Core
  • SPADA: Accelerating Sparse Matrix Multiplication with Adaptive Dataflow
  • Flexagon: A Multi-Dataflow Sparse-Sparse Matrix Multiplication Accelerator for Efficient DNN Processing


This is a tentative list. We will finalize the paper list after collecting the survey responses.