High Energy Physics Seminar
Abstract:
The computing demands of Large Hadron Collider (LHC) physics are already intensive and are expected to increase tremendously in the coming years. Combined with deep learning algorithms, parallel processing architectures, in particular Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs), have been shown to deliver large computational speedups over conventional CPUs. In this talk, I'll demonstrate that accelerating artificial intelligence (AI) inference as a web service offers a heterogeneous computing solution for particle physics experiments. I'll present a comprehensive exploration of several realistic examples, achieving up to a factor of 175 improvement in model inference latency over traditional CPU inference. This opens a new strategy for the seamless integration of coprocessors so that the LHC can maintain, if not exceed, its current performance throughout its operation. Finally, I'll discuss how AI-as-a-Service can bring together disparate communities linked by common data-intensive grand challenges to accelerate discovery in science and engineering.