11:45 – ScaleMark: Understanding Performance Results for Servers in the Data Center
Speaker: Markus Levy, EEMBC President and CEO
Abstract: Workloads for the cloud and its associated data centers are putting unique demands on the SoCs and other system-level hardware being integrated into scale-out servers. Traditional benchmarks, such as EEMBC® CoreMark®, SPECint® 2006, and SPECfp® 2006, address the compute complexity of different workloads and the suitability of processors for different tasks. However, at the system level (which is where the performance of servers in data centers must be understood), many factors contribute to the performance of the system as a whole: memory, disks, the operating system, network interfaces, the network stack, and more. In addition, the manner in which workloads are generated can significantly affect the results. In this session, using a case study of Cavium's Arm-based ThunderX system and EEMBC ScaleMark (a cloud and server benchmark suite), results will be presented that demonstrate how subtle variations in the test environment can obfuscate benchmark results and how a properly designed benchmark suite can overcome these obstacles.
SFO15-304 Server Ecosystem Day Part 2b
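The following toy benchmark (not part of ScaleMark, just an illustration) shows the effect the abstract describes: run-to-run variation from the test environment can easily exceed the difference being measured, so single-shot results mislead unless the suite repeats runs and reports statistics.

```python
# Minimal illustration (not ScaleMark): environment noise vs. measured signal.
import statistics
import time

def workload(n: int = 200_000) -> int:
    # Stand-in compute kernel; any CPU-bound loop shows the same effect.
    total = 0
    for i in range(n):
        total += i * i
    return total

def benchmark(runs: int = 20) -> None:
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    # A large coefficient of variation means the environment (scheduler,
    # frequency scaling, caches) is drowning out the quantity under test.
    print(f"mean={mean * 1e3:.2f} ms  stdev={stdev * 1e3:.2f} ms  "
          f"cv={100 * stdev / mean:.1f}%")

if __name__ == "__main__":
    benchmark()
```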
Supporting Complex MIPI DSI Bridges in a Linux System
Friday, September 24, 2021
Display interface solutions are often critical to a design due to a mismatch between a System on Chip (SoC) and its associated application-specific display devices. A display interface bridge resolves this mismatch by converting...
BKK19-402 - Inferencing at the Edge and Fragmentation Challenges
Tuesday, April 16, 2019
As deep learning (DL) expands its application into ever more areas, DL at the edge has become an area of rapid innovation and has also become highly fragmented. This creates a challenge in the ecosystem for framework providers that want to take advantage of specialized hardware, and an equal challenge for SoC providers and makers of DL accelerators that need to support various frameworks, customer innovations, device constraints, and more. This talk will explore what constitutes DL at the edge, highlight recent trends in this area, from runtimes and compilers to model formats, and explore the challenges and scalability needs of collaborative solutions.
LTD20-304 Improved Android Testing in LAVA with Docker
Monday, March 30, 2020
In this talk we will review the newly added LAVA feature that uses Docker containers for host-side operations (such as calling adb and fastboot). We will cover the issues with the previous approach of using lxc containers, the advantages of the new approach, and a how-to on using the new Docker support.
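As a rough sketch of the idea (not LAVA's actual job syntax or implementation), the host-side tool runs inside a throwaway container with the USB bus passed through; the image name here is hypothetical.

```python
# Hypothetical sketch: run a host-side tool such as adb inside a fresh
# Docker container instead of a long-lived lxc. Image name is assumed.
import subprocess

DOCKER_IMAGE = "example/adb-fastboot:latest"  # hypothetical image

def run_in_container(*tool_args: str) -> str:
    """Run a host-side tool inside a disposable container with USB access."""
    cmd = [
        "docker", "run", "--rm",
        "--device-cgroup-rule", "c 189:* rwm",  # allow USB character devices
        "-v", "/dev/bus/usb:/dev/bus/usb",      # pass the USB bus through
        DOCKER_IMAGE,
        *tool_args,
    ]
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout

if __name__ == "__main__":
    print(run_in_container("adb", "devices"))
```

Because the container is created per operation and removed afterwards, there is no persistent container state to maintain, which is one of the advantages over the lxc approach.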
Partnership in Open Design and Manufacturing: How Universities can Contribute with Developers Communities - BUD17-511
Tuesday, February 28, 2017
The University of Sao Paulo, with the support of LSITEC (an NGO design house), has all of the necessary equipment to design and manufacture 96Boards computers and mezzanine boards. Working with...
SAN19-424 - Event Tracing and Pstore with a Pinch of Dynamic Debug
Friday, October 4, 2019
Event tracing is one of the most powerful debug features available in the Linux kernel as part of Ftrace. Pstore, or persistent storage, is on the other hand a boon for finding the cause of the kernel's dying breath, as it has aptly been described, and is widely used in production environments. When these two features are combined with a pinch of dynamic debug, we get a complete recipe for debugging problems in the Linux kernel.
This presentation talks about integrating event tracing with pstore to identify and root-cause problems by analyzing the last few events before the kernel says goodbye. In addition, we add dynamic debug support to filter out unwanted logs and limit tracing to specific files or directories, which helps narrow problems down to specific subsystems and is something Ftrace does not currently support.
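For orientation, the sketch below exercises the standard upstream knobs the talk builds on: enabling an Ftrace event and adding a dynamic-debug filter, then listing pstore records left over from a previous crash. It must run as root, and the pstore/event-tracing integration itself (the talk's contribution) is not shown.

```python
# Sketch of the upstream interfaces involved (run as root).
from pathlib import Path

TRACEFS = Path("/sys/kernel/tracing")
DYNDBG = Path("/sys/kernel/debug/dynamic_debug/control")
PSTORE = Path("/sys/fs/pstore")

def enable_event(subsystem: str, event: str) -> None:
    # Enable a single Ftrace trace event, e.g. sched:sched_switch.
    (TRACEFS / "events" / subsystem / event / "enable").write_text("1")

def add_dyndbg_filter(spec: str) -> None:
    # A dynamic-debug query such as "file drivers/usb/* +p" limits extra
    # logging to one subsystem, narrowing the problem down.
    DYNDBG.write_text(spec + "\n")

def read_pstore_records() -> list[str]:
    # After a crash, pstore records from the dying kernel appear here
    # on the next boot.
    return [p.name for p in PSTORE.iterdir()]

if __name__ == "__main__":
    enable_event("sched", "sched_switch")
    add_dyndbg_filter("file drivers/usb/* +p")
    print(read_pstore_records())
```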
SAN19-215 - AI Benchmarks and IoT
Friday, October 4, 2019
There are several mobile and server AI benchmarks in use today, and some new ones on the horizon. Which of these, or others, are applicable to IoT use cases? How do you meaningfully compare AI performance across the wide range of IoT hardware, with widely varying cost, memory, power, and thermal constraints, and with the accuracy tradeoffs of quantized versus non-quantized models? This talk will discuss these topics and some of the possible ways to address the issues.
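As a toy illustration of one tradeoff the abstract raises, the following pure-Python sketch (no framework assumed, not any benchmark's actual methodology) quantizes a few weights to int8 and measures the worst-case error that quantization introduces.

```python
# Toy illustration: int8 quantization shrinks and speeds up a model but
# introduces error that a fair benchmark must account for.
def quantize_int8(values: list[float]) -> tuple[list[int], float]:
    """Symmetric linear quantization of floats to int8."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized: list[int], scale: float) -> list[float]:
    return [q * scale for q in quantized]

if __name__ == "__main__":
    weights = [0.03, -1.2, 0.75, 2.5, -0.001]
    quantized, scale = quantize_int8(weights)
    restored = dequantize(quantized, scale)
    worst = max(abs(a - b) for a, b in zip(weights, restored))
    # Worst-case error is bounded by scale/2; whether that is acceptable
    # depends on the model and the IoT use case.
    print(f"scale={scale:.5f}  worst-case error={worst:.5f}")
```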