echo "Hello, World!"

I’m Michael Davies, and this is my website!

Obligatory Elevator Speech

I’m a 6th (final) year Ph.D. student in computer architecture working with Prof. Karu Sankaralingam at the University of Wisconsin-Madison, and I plan to defend in March 2024.

My research focuses on accelerators for deep learning, primarily on the intersection of, and tradeoffs between, the software stack, architecture and microarchitecture. My work includes the first longitudinal study of popular deep learning workloads on GPUs, which uncovered five key insights (ASPLOS’24). Building on those insights, my advisor and I have developed a new spatially pipelined execution model for GPUs, backed by a queue library for inter-SM communication, that paves the way for higher-performance deep learning (in submission). We are currently developing follow-on work that explores co-scheduling heterogeneous workloads on GPUs (in preparation). In other work, I have unpacked the twin roles of architecture design and technology in building high-performance deep learning chips (in submission). I’ve also collaborated with deep learning researchers on new techniques that replace dense GEMM-based operators with low-compute counterparts, preserving accuracy while shifting the hardware demand from compute to DRAM bandwidth (Zeng et al., ICML’23).

At a very broad level, I am interested in topics spanning architecture, programming languages and operating systems with an eye towards how abstractions at different layers of the technology stack can be crafted to help deliver performance and efficiency by construction – for deep learning and beyond.

Resume

Grab the latest copy of my CV here!