Conference Overview: ASPLOS 2020

Session 4A: Huge Memories and Distributed Databases

0sim: Preparing System Software for a World with Terabyte-scale Memories

  • Problem: today's systems do not scale to huge (terabyte-scale) memories
  • Proposes 0sim, a simulator that can model TB-scale memories by compressing the target's memory (data is replaced with zeros so it compresses well), achieving about a 150x memory reduction; see the sketch after this list
  • Case studies:
    • interactively debugging a memcached OOM on a >= 2TB system
    • irreversible memory fragmentation even with 100s of GBs free
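
A minimal sketch (not 0sim's actual implementation) of why zeroing the target's data makes terabyte-scale simulation practical: zero-filled pages compress to almost nothing, so the host can back a simulated multi-TB address space with a small amount of real memory.

```python
# Sketch only: compare how well a zeroed page vs. an ordinary page compresses.
import os
import zlib

PAGE_SIZE = 4096

def compressed_size(page: bytes) -> int:
    return len(zlib.compress(page))

zero_page = bytes(PAGE_SIZE)          # 0sim-style "zeroed" target data
random_page = os.urandom(PAGE_SIZE)   # worst case: incompressible data

print("zero page:  ", PAGE_SIZE, "->", compressed_size(zero_page), "bytes")
print("random page:", PAGE_SIZE, "->", compressed_size(random_page), "bytes")
# The ~150x reduction reported in the talk relies on the target's pages
# being overwhelmingly compressible in this way.
```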

Session 5A: Frameworks for Deep Learning

Interstellar: Using Halide’s Scheduling Language to Analyze DNN Accelerators

  • The design space of DNN accelerators can be viewed as loop transformations: blocking (split/reorder), dataflow (unroll), and resource allocation (memory); see the sketch after this list
  • Halide's scheduling language can be used to simulate and analyze the accelerator design space
  • Many different dataflows reach similar energy efficiency, because most of the energy is spent in the register file (RF) and DRAM
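
A minimal sketch of the "design space = loop transformations" view, using a plain matrix multiply instead of a real DNN layer; the tile size T and the loop order are illustrative choices, not schedules from the paper. Halide's split/reorder primitives select among loop nests like these.

```python
import numpy as np

def matmul_naive(A, B):
    # Untransformed loop nest.
    N, K = A.shape
    _, M = B.shape
    C = np.zeros((N, M))
    for i in range(N):
        for j in range(M):
            for k in range(K):
                C[i, j] += A[i, k] * B[k, j]
    return C

def matmul_blocked(A, B, T=8):
    # "split" each loop into (outer, inner) and "reorder" so a T x T tile of C
    # stays resident, mimicking blocking for an on-chip buffer / register file.
    N, K = A.shape
    _, M = B.shape
    C = np.zeros((N, M))
    for i0 in range(0, N, T):
        for j0 in range(0, M, T):
            for k0 in range(0, K, T):
                for i in range(i0, min(i0 + T, N)):
                    for j in range(j0, min(j0 + T, M)):
                        for k in range(k0, min(k0 + T, K)):
                            C[i, j] += A[i, k] * B[k, j]
    return C

A = np.random.rand(16, 16)
B = np.random.rand(16, 16)
assert np.allclose(matmul_naive(A, B), matmul_blocked(A, B))
```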

DeepSniffer: A DNN Model Extraction Framework Based on Learning Architectural Hints

  • Uses run-time sequence identification (analogous to speech recognition): GPU kernel features (latency / Rv / R/W / kdd) are fed into a kernel model and a context model, and a CTC decoder produces the final layer sequence (a toy decoding sketch follows this list)
  • A neural network's run-time execution stack can be learned and used to predict the network architecture
  • Using context information makes model extraction more powerful
  • The network architecture matters for targeted attacks (even without the dimensions and parameters)
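
A toy sketch of the final decoding step only, assuming per-kernel layer-type probabilities have already been produced by the kernel/context models; the layer names and numbers below are made up, not DeepSniffer's. Greedy CTC decoding collapses repeated labels and drops blanks to recover the layer sequence.

```python
import numpy as np

LAYERS = ["<blank>", "conv", "relu", "pool", "fc"]

def ctc_greedy_decode(probs):
    """probs: (num_kernels, num_labels) array of label probabilities."""
    best = probs.argmax(axis=1)
    decoded, prev = [], None
    for idx in best:
        if idx != prev and idx != 0:   # collapse repeats, drop the blank label
            decoded.append(LAYERS[idx])
        prev = idx
    return decoded

# Hypothetical probabilities for a 6-kernel GPU trace.
probs = np.array([
    [0.1, 0.7, 0.1, 0.05, 0.05],   # conv
    [0.1, 0.6, 0.2, 0.05, 0.05],   # conv (repeat, collapsed)
    [0.1, 0.1, 0.7, 0.05, 0.05],   # relu
    [0.7, 0.1, 0.1, 0.05, 0.05],   # blank
    [0.1, 0.05, 0.05, 0.7, 0.1],   # pool
    [0.1, 0.05, 0.05, 0.1, 0.7],   # fc
])
print(ctc_greedy_decode(probs))    # ['conv', 'relu', 'pool', 'fc']
```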

