11:15 – CERN: ARM64/AArch64 for Scientific Computing at the CERN CMS Particle Detector
Speaker: David Abdurachmanov, Software Engineer/Consultant at CERN
Abstract: The purpose of this talk is to provide an overview of efforts at CERN (the European Laboratory for Particle Physics, in Geneva, Switzerland) to introduce ARMv8 64-bit (aka AArch64) for large-scale scientific computing. The objective is to crunch data from the 14 000 ton CMS particle physics detector located 100 meters underground on a 27-kilometer-long circular particle accelerator (the Large Hadron Collider, LHC) running under the Switzerland-France border. The CMS and ATLAS experiments at CERN announced the discovery of the Higgs boson in 2012, leading to the awarding of the 2013 Nobel Prize in Physics.
Both the Field and, more recently, the LAVA lab have worked on HOWTOs to fill the need for getting-up-and-running documentation alongside the more mature reference information. Using material from both of these, the session will show how an evaluation installation of LAVA, along with a qemu target, can be assembled in a VM. The target audience is member engineers or managers who are aware of LAVA and would consider evaluating a pilot local installation.
We will discuss plans in the PM community to improve genpd, runtime PM, and related frameworks that will lead to better dynamic power management of peripherals and shared resources (e.g. caches) and reduce platform-specific code. Please attend if you want to learn about dynamic power management for the whole SoC and would like to discuss the problems currently faced.
Due to team size and dynamics, the LAVA dispatcher refactoring has been a slow and gradual push to replace the original device communication logic, and we are getting closer to completion. Join us for an update and discussion on the current state of the effort, to see how current use cases will migrate and how new use cases can be adopted.
11:45 – ScaleMark: Understanding Performance Results for Servers in the Data Center
Speaker: Markus Levy, EEMBC President and CEO
Abstract: Workloads for the Cloud and associated data centers are putting unique demands on the SoCs and other system-level hardware being integrated into these scale-out servers. Traditional benchmarks, such as EEMBC® CoreMark®, SPECInt® 2006, and SPECFP® 2006, address the compute complexity of different workloads and the suitability of processors for different tasks. However, when looking at the system level (which is required for comprehending the performance of servers in data centers), many factors contribute to the performance of the system as a whole – memory, disks, operating system, network interfaces, network stack, and more. In addition, the manner in which workloads are generated can significantly affect the results. In this session, using a case study from Cavium’s ARM-based ThunderX system and EEMBC ScaleMark (a cloud and server benchmark suite), results will be presented that demonstrate how subtle variations in the test environment can obfuscate benchmark results and how a properly designed benchmark suite can overcome these obstacles.
An introductory session giving a system-level overview of the Power State Coordination Interface (PSCI)
* Focus on ARMv8
* Goes top-down from ACPI
* A demo based on the current code in qemu (and/or 96Boards)
* The specifications are very dynamic - what’s ongoing for ACPI and PSCI