YVR18-332: TVM compiler stack and ONNX support
As an open-source deep learning compiler driven by the community, TVM is evolving quickly and is well received by industry. This session first introduces the architecture of the TVM stack, including important recently added features such as AutoTVM and support for VTA (Versatile Tensor Accelerator). It then covers building and deploying deep learning models with TVM; ONNX (Open Neural Network eXchange) is one of the model formats supported by the TVM stack. Beyond a unified model format and operator definitions, ONNXIFI (ONNX Interface for Framework Integration) is another initiative from the ONNX community to define a cross-platform API, and how the TVM stack fits into ONNXIFI is an interesting topic to discuss as well.
Supporting Complex MIPI DSI Bridges in a Linux System
Friday, September 24, 2021
Display interface solutions are often critical to a design due to a mismatch between a System on Chip (SoC) and its associated application-specific display devices. A display interface bridge resolves this mismatch by converting...
HKG18-HK16 - PMWG Hacking: Big/Little Capacity Awareness
Wednesday, April 11, 2018
Session ID: HKG18-HK16
Session Name: HKG18-HK16 - PMWG Hacking: Big/Little Capacity Awareness
Speaker: Vincent Guittot
Track: Power Management
## Session Summary
big/LITTLE capacity awareness
## Resources
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-hk16/...
SAN19-106 - What’s new in VIXL 2019?
Friday, October 4, 2019
VIXL is an Armv8 runtime code generation library which contains three components:
- Programmatic assemblers to generate A64, A32 or T32 code at runtime.
- Disassemblers that can print any instruction emitted by the assemblers.
- A simulator that can simulate any instruction emitted by the A64 assembler on x86 and Arm platforms. It is configurable (for example, the vector length for SVE) and supports register tracing during execution.
In this talk, we’re going to introduce:
- What is VIXL? It is already deployed and considered “mature”; for example, it has been adopted by the Android ART compiler for its Arm backends, AArch64 and AArch32.
- CPU feature management and detection.
- Support for new Armv8.x instructions, e.g. BTI and PAuth.
- New SVE (Scalable Vector Extension) support.