References

  1. Deep Learning Compiler Middleware NNVM (Part 1): Introduction
  2. Deep Learning Compiler Middleware NNVM (Part 2): Building & Installing
  3. Compile MXNet Models — http://nnvm.tvmlang.org/tutorials/from_mxnet.html
  4. MXNet Python API reference — https://mxnet.incubator.apache.org/api/python/index.html
  5. MXNet Model API — http://blog.csdn.net/qq_25491201/article/details/51386435
  6. NNVM deployment tutorial (Python and C++ interfaces) — http://nnvm.tvmlang.org/how_to/deploy.html

After completing the steps in reference 2, the MXNet, NNVM, and TVM Python packages are installed on the host. We can now write a Python script on the host to compile the model.

Model Compilation

Before writing the Python script, first adjust two things in the TVM Python package: the LLVM compile options for the rasp target, and the compiler used to produce the final .so file.
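As a minimal sketch of that target configuration (the helper name `make_aarch64_target` is hypothetical; the flags mirror the commented-out target string in the script below, and the exact `export_library` cross-compiler keyword depends on the TVM version):

```python
# Hypothetical helper: compose the LLVM target string for an aarch64
# Cortex-A53 board such as the Allwinner H6. The stock tvm.target.rasp()
# is tuned for the 32-bit Raspberry Pi, so an explicit string may be needed.
def make_aarch64_target(mcpu="cortex-a53", attrs=("+neon",)):
    return ("llvm -target=aarch64-linux-gnu"
            " -mcpu={} -mattr={}".format(mcpu, ",".join(attrs)))

target = make_aarch64_target()
# Pass `target` to nnvm.compiler.build(...) in place of tvm.target.rasp(),
# and point the .so generation at the cross toolchain, e.g. (version-dependent):
# lib.export_library("mobilenet_deploy.so", cc="aarch64-linux-gnu-g++")
```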

import mxnet as mx
import tvm
import nnvm
import nnvm.compiler

# the target device is an Allwinner H6 (4 x Cortex-A53 @ 1.8 GHz)
# target = "llvm -target=aarch64-linux-gnu -mcpu=cortex-a53 -mattr=+neon"

# Method 1
# load mxnet model from Gluon Model Zoo
from mxnet.gluon.model_zoo.vision import get_model
from mxnet.gluon.utils import download

block = get_model('mobilenet1.0', pretrained=True)
nnvm_sym, nnvm_params = nnvm.frontend.from_mxnet(block)
nnvm_sym = nnvm.sym.softmax(nnvm_sym)

# set the basic data workload
batch_size = 1
image_shape = (3, 224, 224)
data_shape = (batch_size,) + image_shape

# compile the mobilenet mxnet model
with nnvm.compiler.build_config(opt_level=2):
    graph, lib, params = nnvm.compiler.build(
        nnvm_sym, tvm.target.rasp(), shape={"data": data_shape}, params=nnvm_params)

# save the deployed module
path_lib = "mobilenet_deploy.so"
lib.export_library(path_lib)
with open("mobilenet_deploy.json", "w") as fo:
    fo.write(graph.json())
with open("mobilenet_deploy.params", "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))
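The `shape={"data": data_shape}` argument above means the compiled module expects a float32 NCHW input. A minimal sketch of turning an HxWx3 uint8 image into that layout (scaling to [0, 1] only; the mean/std normalization used at training time is an extra step omitted here):

```python
import numpy as np

def to_nchw(img):
    """Convert an HxWx3 uint8 image into the (1, 3, 224, 224) float32
    batch expected by the compiled graph."""
    x = img.astype("float32") / 255.0       # uint8 [0, 255] -> float [0, 1]
    x = np.transpose(x, (2, 0, 1))          # HWC -> CHW
    return np.expand_dims(x, axis=0)        # add the batch dimension

data = to_nchw(np.zeros((224, 224, 3), dtype="uint8"))
# data.shape == (1, 3, 224, 224)
```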

Deployment
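Reference 6 covers deployment in detail; below is a hedged sketch of the Python loading path on the target board. The function wrapper and the (1, 1000) output shape are assumptions (mobilenet1.0 is an ImageNet classifier), and `graph_runtime` is the NNVM-era runtime module name:

```python
def run_deployed(lib_path, graph_path, params_path, data):
    """Load the three artifacts saved during compilation and run one
    inference. Assumes the NNVM-era TVM runtime is installed on the
    target board, hence the imports are kept inside the function."""
    import tvm
    from tvm.contrib import graph_runtime

    lib = tvm.module.load(lib_path)
    with open(graph_path) as f:
        graph_json = f.read()
    with open(params_path, "rb") as f:
        params_bytes = bytearray(f.read())

    module = graph_runtime.create(graph_json, lib, tvm.cpu(0))
    module.load_params(params_bytes)
    module.set_input("data", tvm.nd.array(data))
    module.run()
    # mobilenet1.0 classifies into the 1000 ImageNet categories
    out = tvm.nd.empty((1, 1000))
    module.get_output(0, out)
    return out.asnumpy()
```

On the board this would be called with the three files produced above, e.g. `run_deployed("mobilenet_deploy.so", "mobilenet_deploy.json", "mobilenet_deploy.params", data)`.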
