[N] Microsoft Incorporates Graphcore AI Chips in Azure Cloud

Graphcore’s AI accelerator chip, the Colossus intelligence processing unit (IPU), is now available for customers to use as part of Microsoft’s Azure cloud platform.

This is the first time a major cloud service provider has publicly offered customers the opportunity to run workloads on an accelerator from one of the dozens of AI chip startups, and as such it represents a big win for Graphcore. Microsoft has said access will initially be prioritised for customers who are “pushing the boundaries of machine learning”.

Microsoft and Graphcore have been working together for two years to develop cloud systems and to build enhanced vision and natural language processing models for the Graphcore IPU. A particular focus has been Google’s natural language processing (NLP) model BERT (bidirectional encoder representations from transformers), which is currently very popular with search engines, including Google itself.
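
For context, running BERT inference looks something like the minimal sketch below. It uses the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which the article mentions; this is purely illustrative of the model class, not of Microsoft’s or Graphcore’s actual setup.

```python
# Illustrative only: minimal BERT inference with Hugging Face `transformers`.
# The library and checkpoint are assumptions, not from the article.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("Graphcore IPUs are now available on Azure.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embedding per input token: (batch, seq_len, hidden_size)
print(outputs.last_hidden_state.shape)
```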

Using eight Graphcore IPU processor cards (each with a pair of Colossus accelerators), BERT can be trained in 56 hours, similar to the result for a GPU running PyTorch, though faster than a GPU running TensorFlow. Graphcore says customers are seeing BERT inference throughput increase threefold, with a 20% improvement in latency.

Given the level of hype surrounding Graphcore — the company is valued at $1.7 billion — these performance improvements seem rather modest. It remains to be seen whether the promised improvement is enough to tempt customers into optimising their models for the IPU.

Advanced models
At the same time, Graphcore has also released some results on more advanced models, where it showed more dramatic performance improvements.

Inference on the image-processing model ResNeXt was accelerated 3.4x in throughput, at 18x lower latency, compared with a GPU solution consuming the same amount of power. ResNeXt uses a technique called grouped convolutions, which splits convolution filters into smaller independent blocks to increase accuracy while reducing the parameter count. This approach is well suited to the IPU, Graphcore says, because of the chip’s massively parallel processor architecture and more flexible, high-throughput memory; smaller blocks of data can be mapped to thousands of fully independent processing threads.
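
To make the grouped-convolution idea concrete, here is a minimal PyTorch sketch (the framework choice is an assumption; the article does not say how these blocks were implemented). It compares a dense 3x3 convolution with a 32-group version of the kind ResNeXt uses, showing the parameter reduction:

```python
import torch
import torch.nn as nn

channels = 256

# Standard 3x3 convolution: every output channel sees every input channel.
dense_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

# Grouped 3x3 convolution (ResNeXt-style, 32 groups): channels are split
# into 32 independent blocks, each convolved separately.
grouped_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                         groups=32)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense_conv))    # 590,080 parameters
print(count(grouped_conv))  # 18,688 parameters (~1/32 of the weights)

x = torch.randn(1, channels, 32, 32)
assert dense_conv(x).shape == grouped_conv(x).shape  # same output shape
```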

Graphcore also showed good results for Markov chain Monte Carlo (MCMC)-based models, a class of probabilistic algorithms used for modelling financial markets. This type of model has been out of reach for many in the finance industry, as it was previously considered too computationally expensive to use, said Graphcore. Early-access IPU customers in the finance sector have been able to train their proprietary, optimised MCMC models in 4.5 minutes on IPUs, compared with over two hours on their existing hardware, a 26x speed-up in training time.
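
As an illustration of the MCMC family referenced above, below is a minimal random-walk Metropolis-Hastings sampler in NumPy. The target density is a stand-in standard normal, not anyone’s proprietary financial model:

```python
import numpy as np

def log_density(x):
    # Toy target: a standard normal (stand-in for a real model's posterior).
    return -0.5 * x ** 2

def metropolis_hastings(n_samples, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x = 0.0
    for i in range(n_samples):
        proposal = x + step * rng.normal()  # random-walk proposal
        # Accept with probability min(1, p(proposal) / p(x)).
        if np.log(rng.uniform()) < log_density(proposal) - log_density(x):
            x = proposal
        samples[i] = x
    return samples

samples = metropolis_hastings(50_000)
print(samples.mean(), samples.std())  # should approach 0 and 1
```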

Reinforcement learning (RL), another popular technique in modern AI development, can also be accelerated compared with typical existing solutions. Graphcore cited a tenfold improvement in throughput for RL models, even before they are optimised for the IPU.
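
For readers unfamiliar with RL, the sketch below shows the core of one classic method, tabular Q-learning, on a toy chain environment. It is a generic textbook example; the article gives no detail about which RL workloads were benchmarked:

```python
import numpy as np

# Generic tabular Q-learning, purely illustrative.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    # Toy environment: moving "right" toward the last state pays off.
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection.
    action = rng.integers(n_actions) if rng.uniform() < epsilon else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning update: move Q(s, a) toward the bootstrapped return.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = 0 if next_state == n_states - 1 else next_state

print(Q)  # "right" actions should dominate after training
```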

https://www.eetimes.com/document.asp?doc_id=1335297#

submitted by /u/downtownslim