echo "Hello, World!"

I’m Michael Davies, and this is my website!

Obligatory Elevator Speech

I’m a Research Scientist in computer architecture at NVIDIA.

My research focuses on accelerators for deep learning, primarily the intersection of, and tradeoffs between, the software stack, architecture, and microarchitecture. My work includes the first longitudinal study of popular deep learning workloads on GPUs, which uncovered five key insights (ASPLOS’24). Building on those insights, I developed a new spatially pipelined execution model for GPUs, along with a queue library that facilitates inter-SM communication, paving the way for higher-performance deep learning (in submission). I am currently developing follow-on work that explores co-scheduling heterogeneous work on GPUs (in preparation). In other work, I have unpacked the twin roles of architecture design and technology in building high-performance deep learning chips (in submission). I have also collaborated with deep learning researchers on new techniques that replace dense GEMM-based operators with low-compute counterparts, preserving accuracy while shifting the hardware bottleneck from compute to DRAM bandwidth (Zeng et al., ICML’23). My dissertation, titled “Composable Architecture Primitives for the Era of Efficient Generalization”, ties these works together and explores what they mean for the future of computer architecture research and accelerator design.

At a broad level, I am interested in topics spanning architecture, programming languages, and operating systems, with an eye toward how abstractions at different layers of the technology stack can be crafted to deliver performance and efficiency by construction, for deep learning and beyond.

Resume

Grab the latest copy of my CV here!