ONNX Explained

This post explains the ONNX format and how to install and use it.

Introduction

The Open Neural Network Exchange, known as ONNX (https://onnx.ai/), is an open ecosystem that empowers AI developers to choose the right tools as their project evolves.

ONNX is the result of a collaboration between AWS, Facebook and Microsoft to allow deep learning models to be transferred between different frameworks.

Data scientists use multiple frameworks to develop deep learning models, such as Caffe2, PyTorch, Apache MXNet, the Microsoft Cognitive Toolkit and TensorFlow. The choice of framework depends on many constraints (existing developments, team skills…).

Switching between frameworks creates operational challenges that slow down the start-up phase of a project, and more and more vendors are looking for ways to break this deadlock; a common exchange format such as ONNX is one answer.
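As an illustration, here is a minimal sketch of exporting a model from PyTorch to the ONNX format. The model, input shape and file name are arbitrary examples, not part of any particular project.

# Minimal sketch: export a small PyTorch model to ONNX
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 1))
model.eval()

# A dummy input is needed to trace the graph; its shape is only an example
dummy_input = torch.randn(1, 10)
torch.onnx.export(model, dummy_input, "model.onnx", input_names=["input"], output_names=["output"])

The resulting model.onnx file can then be loaded by any framework or runtime that understands the ONNX format.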

Install ONNX

First, build protobuf locally by cloning the GitHub project:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout 3.9.x
cd cmake
# Explicitly set -Dprotobuf_MSVC_STATIC_RUNTIME=OFF to make sure protobuf does not statically link to the runtime library
cmake -G "Visual Studio 15 2017 Win64" -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir>
msbuild protobuf.sln /m /p:Configuration=Release
msbuild INSTALL.vcxproj /p:Configuration=Release

Second, build ONNX itself:

# Get ONNX
git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
 
# Set environment variables so the build can find protobuf, and turn off static linking of ONNX to the runtime library.
# A better option is to add it to the user or system PATH, so this step only needs to be performed once.
# For more details check https://docs.microsoft.com/en-us/cpp/build/reference/md-mt-ld-use-run-time-library?view=vs-2017
set PATH=<protobuf_install_dir>\bin;%PATH%
set USE_MSVC_STATIC_RUNTIME=0
 
# Optional: set the environment variable `ONNX_ML=1` for onnx-ml
 
# Build ONNX
python setup.py install

Third, check that ONNX can be imported:

python -c "import onnx"
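Once the import succeeds, the Python API can be used to load and validate a model. The sketch below assumes a model file named model.onnx (for example, the one exported in the introduction); the file name is only illustrative.

# Minimal sketch: load an ONNX model and check that it is well formed
import onnx

model = onnx.load("model.onnx")  # hypothetical file name
onnx.checker.check_model(model)  # raises an exception if the model is invalid
print(onnx.helper.printable_graph(model.graph))  # human-readable view of the graph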

Finally, test the installation by installing the test dependencies and running the test suite:

pip install pytest nbval
pytest

ONNX Runtime

ONNX Runtime is a separate inference engine for ONNX models that supports CUDA, MLAS and MKL-DNN for compute acceleration. It is released as a Python package: onnxruntime-gpu supports GPUs, while onnxruntime is the CPU-only release.
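Below is a minimal inference sketch with the CPU package. The model file name and input shape are assumptions carried over from the earlier export example, not fixed by ONNX Runtime itself.

# Minimal sketch: run an exported model with ONNX Runtime (pip install onnxruntime)
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")   # hypothetical file name
input_name = session.get_inputs()[0].name      # discover the graph's input name
x = np.random.randn(1, 10).astype(np.float32)  # example input matching the export sketch
outputs = session.run(None, {input_name: x})   # None returns all outputs
print(outputs[0])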