Kernelci.org has been hailed as “the most successful public build and test system for Linux in the world”. To keep that reputation, we need to do more testing, faster, and we believe LAVA v2 will help us get there. Come learn about the effort underway to transition the testing system from LAVA v1 to LAVA v2, and the benefits it will bring.
In this session we will review the new architecture for distributed testing that LAVA v2 will enable, and report on what is working today and what is still in progress. Come be a part of the solution and help make the Linux kernel the best it can be!
The TI SimpleLink CC32xx family of MCUs provides an SoC and supporting SDK which completely offloads the WiFi stack onto an integrated network coprocessor. The SimpleLink SDK currently has no explicit support for the Zephyr IoT OS, but is designed to be portable. A native IP stack for Zephyr is currently under development, which includes an experimental IP offload option. This session reviews the challenges of integrating a vendor TCP/IP offload engine into an existing OS IP stack in general, and in particular, evaluates options for integrating the TI SimpleLink WiFi stack into Zephyr.
Per-Entity Load Tracking (PELT) is a cornerstone of task placement in the scheduler, but it suffers from a number of weaknesses, if not outright bugs. At the last LPC, it was decided to fix all pending PELT issues before considering another load tracking mechanism for the scheduler and/or EAS. This session will show the improvements made since the last Connect and LPC, as well as the next steps. We will also look at the RT class, which still lacks good load tracking.
With the exponential rise in quantity of data to manage, the modern data centre is increasingly limited by the capacity of individual machines. Since storage and compute demand more capacity than can be provided by a single machine, we distribute both over large clusters and use the network to transfer data between where it is stored and where it is processed. Moving all that data around uses deep storage stacks which incur a significant performance impact. If we could somehow flatten the storage stack and provide applications with direct access to data, then we could improve performance by orders of magnitude.
Hewlett Packard Enterprise recently demonstrated that we can do exactly that with their research project, "The Machine". Instead of moving data around over a network, The Machine uses multiple terabytes of persistent memory and a next-generation fabric-attached memory interconnect to provide a single pool of storage that any processor in the cluster can access. It shows that we can give applications immediate load/store access to huge data sets, in a model called Memory-Driven Computing.
Proof in hand, it is now time to bring Memory-Driven Computing to the data centre. Gen-Z is an open systems interconnect designed to provide memory-semantic access to data and devices via direct-attached, switched, or fabric topologies. HPE has joined the Gen-Z consortium and is using the knowledge gained from The Machine to help shape Gen-Z and set the stage for true Memory-Driven Computing. Putting memory at the centre enables us to overcome the limitations of today's computing systems and power new innovations.
This session will cover two topics. It will start with a status update on The Machine and an overview of how it works, then move on to an introduction to Gen-Z and how it can reshape the architecture of computing in the years to come.
This demo shows one way to use DPDK as a network accelerator. The Nginx HTTP server is ported to run on top of a high-performance mTCP stack. The whole system can run on a physical machine, or in a VM as a VNF interconnected by a virtual switch.