NVIDIA GTC China: TensorRT 3.0, AI Machine Processor Xavier, and China Partnerships

The annual NVIDIA GTC conference opened in Beijing on September 26th. The much-anticipated GPU developers’ conference showcased the company’s latest endeavours in AI, deep learning, healthcare, VR, and self-driving cars. In addition to the deep learning engine TensorRT 3.0, NVIDIA introduced the HGX-1 hyperscale GPU accelerator powered by Tesla V100 for AI cloud computing, and the […]

Continue reading


Huawei Announces Kirin 970: AI In Your Phone

China’s biggest domestic mobile phone maker, Huawei, is now powering its phones with artificial intelligence (AI) engines. The Chinese tech giant today unveiled Kirin 970, an SoC (system on a chip) for mobile phones, at IFA (Internationale Funkausstellung Berlin), one of the oldest industrial exhibitions in Germany. The chipset is powered by HiAI, […]

Continue reading


Deep Learning in Real Time – Inference Acceleration and Continuous Training

Introduction Deep learning is revolutionizing many areas of computer vision and natural language processing (NLP), infusing intelligence capabilities into a growing number of consumer and industrial products, with the potential to impact people’s everyday experience and standard industry practices. At a high level, deep learning, similar to any automated system based on […]

Continue reading


Nvidia’s Volta: A Game Changer for AI?

As increasingly complex artificial intelligence research places greater demands on computer processing power, more and more tech companies are seeking ways to improve hardware performance. Nvidia’s latest play is Volta. Each May, Nvidia hosts the GTC (GPU Technology Conference) in San Jose. The conference introduces technology breakthroughs and new products, and showcases application and software […]

Continue reading


Make AI Computing 100 Times Faster

Chinese AI talent in the US – UIUC Professor Wen-Mei Hwu The world is at the point where virtually all technologies rely on computing. And nowhere is the importance of computing more pronounced than in the arena of artificial intelligence. The adoption of GPUs (graphics processing units) for general-purpose processing enabled AlexNet, a convolutional neural […]

Continue reading


How to Train a Very Large and Deep Model on One GPU?

Problem: GPU memory limitation I believe I no longer need to explain how powerful a GPU can be for training deep neural networks. With a popular ML framework, it is much more convenient to assign the computations to one or more GPUs than to do everything from scratch. However, there is one thing that could create nightmare scenarios […]
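
To make that framework convenience concrete, here is a minimal sketch of assigning a model and its inputs to a GPU in PyTorch (the framework choice and all names below are my own illustration; the post does not prescribe them):

```python
import torch
import torch.nn as nn

# Use the GPU when one is visible; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A deliberately small model; the "very large and deep" models the post targets
# are exactly the ones whose weights and activations exhaust GPU memory.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 10),
).to(device)

x = torch.randn(32, 1024, device=device)  # create the batch directly on the device
logits = model(x)                          # the forward pass now runs on the GPU
print(logits.shape, logits.device)
```

That convenience is also the catch the post describes: moving everything onto the GPU takes one line, but the card’s memory, not your code, sets the ceiling on how large and deep the model can be.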

Continue reading


Deep Learning on GPUs without the Environment Setup

We have seen an explosion of interest among data scientists who want to use GPUs for training deep learning models. While the libraries that support this (e.g., Keras and TensorFlow) have become very powerful, data scientists are still plagued by configuration issues that limit their productivity. For example, a recent post on the Keras blog […]
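
As one illustration of the kind of sanity check those configuration headaches lead to (the snippet is my own, not from the post, and assumes a TensorFlow 2.x API), a first test is simply asking whether the installed framework can see a GPU at all:

```python
import tensorflow as tf

# List the GPUs this TensorFlow installation can actually use. An empty list
# usually points to a driver/CUDA/cuDNN mismatch, and training silently
# falls back to the CPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"TensorFlow sees {len(gpus)} GPU(s): {gpus}")
else:
    print("No GPU visible to TensorFlow; check the driver, CUDA, and cuDNN versions.")
```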

Continue reading


Making Data Science Fast: Survey of GPU Accelerated Tools

This talk took place at the Domino Data Science Pop-up in Austin, TX, on April 13, 2016. In it, Mazhar Memon, CEO and co-founder of Bitfusion.io, surveys the landscape of GPU-accelerated data science tools, from hardware to languages to machine learning and deep learning frameworks. The video includes a demo of GPU-accelerated graph processing. The […]

Continue reading


What Does It Take For Intel To Seize The AI Market?

Introduction During the first quarter of 2017, Nvidia’s revenue was driven by 63% year-over-year growth in its data center business. This impressive growth was largely due to technology companies such as Google and Amazon, which have accelerated their AI cloud products, most of which are based on Nvidia’s GPU hardware. By contrast, Intel, the company that once dominated […]

Continue reading