As an open-source deep learning compiler driven by the community, TVM is evolving quickly and has been well received by industry. This session will first introduce the architecture of the TVM stack, including recently added features such as AutoTVM and VTA (Versatile Tensor Accelerator) support. It will then cover building and deploying deep learning models with TVM; ONNX (Open Neural Network Exchange) is one of the model formats supported by the TVM stack, as sketched below. Beyond a unified model format and operator definitions, ONNXIFI (ONNX Interface for Framework Integration) is another initiative from the ONNX community that defines a cross-platform API, and how the TVM stack can fit into ONNXIFI is an interesting topic to discuss as well.
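To make the ONNX-to-TVM path concrete, here is a minimal sketch of importing an ONNX model through TVM's Relay frontend and compiling it for a CPU target. The file name "model.onnx", the input tensor name "input", and the input shape are placeholder assumptions for illustration; the API calls themselves (relay.frontend.from_onnx, relay.build, graph_executor) are part of the TVM Python package.

```python
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load an ONNX model (hypothetical file name and input signature).
onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}  # assumed input name and shape

# Import the ONNX graph into Relay, TVM's high-level IR.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a generic CPU target; GPUs or VTA would use a different target.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Deploy with the graph executor and run one inference on random data.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))

import numpy as np
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
output = module.get_output(0).numpy()
```

The compiled artifact (lib) can also be exported with lib.export_library(...) and loaded on a deployment target without the original framework, which is one of the deployment workflows the session touches on.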