HEART2019

Keynote Speakers


Prof. Christian Plessl
Paderborn University, Germany

Bringing FPGAs to Production HPC Systems and Codes

Summary

[Photo] Prof. Christian Plessl
FPGA architectures and development tools have made great strides towards becoming a platform for high-performance and energy-efficient computing, competing head to head with other processor and accelerator technologies. While we have seen the first large-scale deployments of FPGAs in public and private clouds, FPGAs have yet to make inroads into general-purpose HPC systems. At the Paderborn Center for Parallel Computing, we are at the forefront of this development and have recently put "Noctua", our first HPC cluster with 32 BittWare 520N boards with Intel Stratix 10 FPGAs, into production. In this talk, I will share some of the experiences we gained on our journey from planning to procurement to installation of the Noctua cluster, highlight aspects that are critical for FPGAs, and describe how we addressed them. I will present results from ongoing work on porting libraries and scientific applications from electrodynamics and ab-initio molecular dynamics to our FPGA cluster. Finally, I will discuss the potential of direct FPGA-to-FPGA networks for parallel applications and present preliminary results.

Bio

Christian Plessl is professor for High-Performance IT Systems at Paderborn University, Germany. He has led and been involved in numerous research projects studying reconfigurable architectures, design flows, runtime systems, and the application of FPGAs in HPC. His research has been recognized with several awards, e.g., the ReConFig Best Paper Awards in 2014 and 2012 and the FPL Significant Paper Award in 2015. Christian is also the director of the Paderborn Center for Parallel Computing (PC2), Paderborn University's HPC center, which provides computing resources and services for computational sciences at Paderborn University and Germany-wide. Leveraging Paderborn's longstanding expertise in FPGA acceleration, PC2 has recently deployed FPGAs for the first time in an HPC production system. The Noctua installation is currently one of the largest and most modern FPGA installations at an academic HPC center.

Prof. Hiroki Nakahara
Tokyo Institute of Technology, Japan

Deep Learning Accelerator for an Intelligent Camera

Summary

[Photo] Prof. Hiroki Nakahara
Convolutional neural networks (CNNs) are essentially a cascade of pattern recognition filters trained on big data. They enable us to solve complex computer vision problems, such as object recognition, segmentation, and pose estimation, and to move toward even more complex tasks. Since these vision applications demand ever higher accuracy and smarter behavior, modern CNNs contain millions of floating-point parameters and require billions of floating-point operations, and recent CNNs designed by AI researchers tend to be massive. We must therefore consider power, cost, and real-time constraints, which makes research on deep learning accelerators all the more important. In this talk, I will introduce optimization techniques for CNN hardware accelerators and show how to design a high-performance, power-efficient accelerator for a surveillance camera system. Next, I will apply more complex CNNs to more intelligent tasks. I will also explain probabilistic CNNs for more efficient accelerators. Finally, I will discuss which platform should be adopted and share future research topics.
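
One widely used optimization of this kind is binarization: constraining weights and activations to {-1, +1} so that a multiply-accumulate collapses into an XNOR and a popcount, which map onto FPGA logic far more cheaply than floating-point multipliers. The following minimal C sketch (an illustration of the general technique, not code from the talk; the names and the 32-element packing are assumptions) shows the idea:

/* Binarized dot product: weights and activations in {-1, +1} are
 * packed one per bit (bit = 1 encodes +1, bit = 0 encodes -1), so a
 * 32-element multiply-accumulate becomes XNOR + popcount. Compile
 * with GCC or Clang (__builtin_popcount). Illustrative sketch only. */
#include <stdint.h>
#include <stdio.h>

int binary_dot32(uint32_t w, uint32_t x) {
    uint32_t agree = ~(w ^ x);                /* XNOR: bits where signs match */
    int matches = __builtin_popcount(agree);  /* count of +1 products */
    return 2 * matches - 32;                  /* matches - (32 - matches) */
}

int main(void) {
    uint32_t w = 0xF0F0F0F0u;  /* example packed weight vector */
    uint32_t x = 0xFF00FF00u;  /* example packed activation vector */
    printf("binary dot product = %d\n", binary_dot32(w, x));  /* prints 0 */
    return 0;
}

On an FPGA, each such dot product costs a handful of LUTs instead of a row of DSP multipliers, which is one reason binarized CNNs are attractive for low-power camera systems.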

Bio

Hiroki Nakahara received the B.E., M.E., and Ph.D. degrees in computer science from Kyushu Institute of Technology, Fukuoka, Japan, in 2003, 2005, and 2007, respectively. He has held research/faculty positions at Kyushu Institute of Technology, Iizuka, Japan; Kagoshima University, Kagoshima, Japan; and Ehime University, Ehime, Japan. He is currently an associate professor at Tokyo Institute of Technology, Japan. He was the Workshop Chairman of the International Workshop on Post-Binary ULSI Systems (ULSIWS) in 2014, 2015, 2016, and 2017, and served as Program Chairman of the 8th International Symposium on Highly-Efficient Accelerators and Reconfigurable Technologies (HEART) in 2017. He received the 8th IEEE/ACM MEMOCODE Design Contest 1st Place Award in 2010, the SASIMI Outstanding Paper Award in 2010, the IPSJ Yamashita SIG Research Award in 2011, the 11th FIT Funai Best Paper Award in 2012, the 7th IEEE MCSoC-13 Best Paper Award in 2013, and the ISMVL 2013 Kenneth C. Smith Early Career Award in 2014. His research interests include logic synthesis, reconfigurable architectures, digital signal processing, embedded systems, and machine learning. He is a member of the IEEE, the ACM, and the IEICE.

Prof. Ken Oyama
Nagasaki Institute of Applied Science, Japan

FPGA-accelerated HPC for Experimental Physics

Summary

[Photo] Prof. Ken Oyama
Modern high energy nuclear and particle physics experiments, using particle colliders at unprecedented energies and intensities, produce tremendous amounts of data that must be processed online to reduce the data volume without losing important physics information. The ALICE experiment at the CERN Large Hadron Collider is one such challenging experiment. Physicists have used FPGAs for decades in trigger systems and data handling, but rarely for data analysis, which has so far been performed on recorded data with conventional CPUs. In ALICE, however, more than 3 TB/s of data streams continuously out of the detector, which can no longer be recorded directly. The data must instead be analyzed and compressed in real time, and FPGA acceleration is expected to deliver two orders of magnitude better performance per node. In this presentation, I will discuss a bit of the history of FPGA use cases in high energy physics, review the problems we are facing now, and present our planned solution using FPGA acceleration.
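
To make the data reduction problem concrete: the simplest form of online compression in detector readout is zero suppression, which discards the mostly-baseline samples and keeps only above-threshold runs. The C sketch below illustrates the general idea under an assumed record and sample format; it is not the ALICE compression scheme, and in hardware the loop would be a pipelined streaming kernel rather than CPU code:

/* Zero suppression sketch: emit (offset, length, samples...) records
 * for runs of samples above a threshold. The record and sample
 * formats are hypothetical, chosen for illustration only. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

size_t zero_suppress(const uint16_t *in, size_t n, uint16_t thresh,
                     uint16_t *out) {
    size_t w = 0, i = 0;
    while (i < n) {
        if (in[i] > thresh) {
            size_t start = i;
            while (i < n && in[i] > thresh) i++;   /* find end of run */
            out[w++] = (uint16_t)start;            /* run offset */
            out[w++] = (uint16_t)(i - start);      /* run length */
            for (size_t j = start; j < i; j++)     /* run payload */
                out[w++] = in[j];
        } else {
            i++;                                   /* drop baseline sample */
        }
    }
    return w;  /* number of 16-bit words written */
}

int main(void) {
    uint16_t samples[12] = {0, 1, 0, 57, 80, 62, 0, 0, 12, 0, 0, 0};
    uint16_t packed[24];
    size_t words = zero_suppress(samples, 12, 10, packed);
    printf("12 samples -> %zu words\n", words);  /* 12 -> 8 here */
    return 0;
}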

Bio

Ken Oyama is a professor of electrical and electronics engineering at the Nagasaki Institute of Applied Science, Japan. His field of specialty is high energy nuclear experimental physics. He received his Ph.D. from the Graduate School of Science, University of Tokyo, in 2003 for studies of quark-gluon matter through particle production measurements at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in the US. He continued his work as a researcher at the Physics Institute of the University of Heidelberg, where he contributed to the development and operation of the large, complex particle detector systems of the ALICE project at the LHC at CERN. He served as technical coordinator of one of the detector subsystems (the TRD) in ALICE, and later as trigger coordinator of the ALICE collaboration. For the last five years he has been leading the development of the next-generation data acquisition system for the ALICE upgrade, in which huge amounts of detector data must be processed online to achieve high-precision studies of quark matter at unprecedented statistics.
Copyright 2010-2019 HEART Steering Committee   