Facebook Releases PyTorch 1.3

The latest release of PyTorch, version 1.3, brings mobile deployment, quantization, and named tensors.

Recently, Facebook announced the release of PyTorch 1.3, featuring experimental support for mobile device deployment, model quantization, the ability to name tensors, and many other front-end improvements.
 
Facebook said that it has also launched a number of additional tools and libraries to support model interpretability and to bring multimodal research to production.
 
Facebook has also collaborated with Google and Salesforce to add broad support for Cloud Tensor Processing Units (TPUs). According to Facebook, this will provide a significantly accelerated option for training large-scale deep neural networks.
 
Until now, dimensions had to be named and accessed by comment:
  # Tensor[N, C, H, W]
  img = torch.randn(32, 3, 56, 56)
  img.sum(dim=1)
  img.select(dim=1, index=0)
Now you can name dimensions explicitly for more readable and maintainable code:
  NCHW = ['N', 'C', 'H', 'W']
  images = torch.randn(32, 3, 56, 56, names=NCHW)
  images.sum('C')
  images.select('C', index=0)
With 1.3, PyTorch now supports 8-bit model quantization using the familiar eager-mode Python API. It makes use of the FBGEMM and QNNPACK quantized kernel back ends for x86 and ARM CPUs, respectively. Both are integrated with PyTorch and share a common API.
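As a quick illustration, the snippet below is a minimal sketch of one variant of that API, post-training dynamic quantization; the toy model, its layer sizes, and the example input are placeholders chosen only for this example.
  import torch

  # A small example model; the architecture is arbitrary and only for illustration.
  model = torch.nn.Sequential(
      torch.nn.Linear(56 * 56, 128),
      torch.nn.ReLU(),
      torch.nn.Linear(128, 10),
  )

  # Dynamically quantize the Linear layers to 8-bit integers (qint8).
  # The quantized kernels dispatch to FBGEMM on x86 or QNNPACK on ARM.
  quantized_model = torch.quantization.quantize_dynamic(
      model, {torch.nn.Linear}, dtype=torch.qint8
  )

  # The quantized model is called just like the original one.
  output = quantized_model(torch.randn(1, 56 * 56))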
 
In order to enable more efficient on-device ML, PyTorch 1.3 supports an end-to-end workflow from Python to deployment on iOS and Android.
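The Python end of that workflow is exporting a serialized TorchScript model; the sketch below uses a pretrained torchvision ResNet-18 and the file name "model.pt" purely as examples.
  import torch
  import torchvision

  # Any traceable model works; a pretrained ResNet-18 is used here as an example.
  model = torchvision.models.resnet18(pretrained=True)
  model.eval()

  # Trace the model with an example input to produce a TorchScript module.
  example = torch.rand(1, 3, 224, 224)
  traced_module = torch.jit.trace(model, example)

  # Save the serialized module; the resulting file can be bundled with an
  # iOS or Android app and loaded there with the PyTorch Mobile runtime.
  traced_module.save("model.pt")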
 
A new tool, Captum, helps developers understand their models' outputs. It provides state-of-the-art algorithms to understand how the importance of specific neurons and layers affects the predictions made by the models. Captum's algorithms include integrated gradients, conductance, SmoothGrad and VarGrad, and DeepLift.
Source: Facebook 
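As an illustration of how Captum is used, the sketch below applies integrated gradients to a toy classifier; the model definition, input shape, and target class are assumptions made only for the example.
  import torch
  from captum.attr import IntegratedGradients

  # A toy model standing in for a real classifier; only for illustration.
  model = torch.nn.Sequential(
      torch.nn.Linear(4, 8),
      torch.nn.ReLU(),
      torch.nn.Linear(8, 3),
  )
  model.eval()

  # Attribute the score of class 0 to the input features with integrated gradients.
  ig = IntegratedGradients(model)
  inputs = torch.randn(1, 4, requires_grad=True)
  attributions, delta = ig.attribute(inputs, target=0, return_convergence_delta=True)

  print(attributions)  # per-feature attribution scores
  print(delta)         # approximation error of the integral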
 
To learn more, you can visit the official announcement here.