
Fujitsu Develops Technology to Automatically Adjust Computing Accuracy to Accelerate AI Processing up to 10-Fold

Posted October 25, 2019

Fujitsu Laboratories, Ltd. has announced the development of new “Content-Aware Computing” technology that can control accuracy while increasing computing speeds. The technology was developed in response to the increasing demand for computing power accompanying the evolution and popularization of AI technologies.

Applying this new technology to deep learning tasks promises to accelerate computing speeds by up to ten times, making it easier to utilize AI for an increasing variety of future applications.

Figure 1. Speed-up achieved by narrowing the bit width to match the progress of training. Image credit: Fujitsu Ltd

Development Background

In recent years, the spread of AI technologies in areas like image recognition and speech translation has contributed to increasing demands for processing power, straining existing technologies to their limits. New types of GPUs and specialized processors optimized for AI applications have been developed in response to this trend. AI tasks are calculated in a variety of environments depending on the specific application, ranging from cloud environments to edge computing contexts. Technologies that offer both stability and fast processing will prove increasingly necessary to deliver the computing power required for demanding AI tasks.

Issues

Graphics processing units (GPUs) and dedicated processors have improved computing performance, but they have not caught up with the computational demands of AI. As a means of further streamlining performance for AI tasks, there is a growing interest in technologies that lessen the computational burden and increase speed.

In calculating neural network algorithms in deep learning, for instance, one method for achieving higher speeds is to reduce the operation precision from 32 bits to 8 bits and carry out four operations in parallel. Unfortunately, the accuracy of the results also deteriorates if the operation precision is uniformly lowered.
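The trade-off described above can be sketched in a few lines. This is a generic illustration, not Fujitsu's method: casting 32-bit values down to 8-bit integers quadruples the number of values that fit in the same register width, but introduces quantization error.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=1024).astype(np.float32)

# Simple symmetric linear quantization to signed 8-bit integers.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale

# Four int8 values fit in the space of one float32 value...
assert weights.itemsize // q.itemsize == 4

# ...at the cost of a small but nonzero reconstruction error.
error = float(np.abs(weights - dequantized).max())
print(f"max quantization error: {error:.6f}")
```

Hardware with low-bit SIMD or tensor units exploits exactly this packing to run more operations per cycle, which is why uniformly lowering precision is tempting despite the accuracy loss.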

For this reason, a trained expert must painstakingly determine, by trial and error, which parts of the calculation can tolerate reduced accuracy. This method proves time-consuming to adjust, requiring readjustment whenever the input data or execution environment changes.

About the Newly Developed Technology

To overcome these challenges, Fujitsu has developed “Content-Aware Computing” technology that automatically controls calculation accuracy to speed up processing. This allows for faster AI processing on a variety of execution platforms, including GPUs and CPUs, in both cloud and edge environments. The features of this technology are as follows.

1. Automatic bit-width reduction technology

Neural networks generally exhibit a numerical range in each layer that converges toward similar values as learning progresses. Based on the distribution of the numerical range of each layer during calculation, the calculation accuracy is adjusted to the state of learning: a wide bit width is used while the distribution is still wide, and a narrow bit width once learning has advanced and the values have converged (Figure 1). This allows deep learning to run up to three times faster than before while minimizing degradation of the calculation results.
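A minimal sketch of this idea, assuming a simple range-based heuristic (the thresholds and the `choose_bit_width` helper are illustrative, not Fujitsu's actual algorithm): inspect the numerical spread of a layer's values and pick a bit width accordingly, keeping more bits while the distribution is wide and fewer once it has converged.

```python
import numpy as np

def choose_bit_width(values: np.ndarray) -> int:
    """Pick a bit width from the dynamic range of `values` (heuristic)."""
    spread = float(values.max() - values.min())
    if spread > 1.0:        # early training: values still widely spread
        return 32
    elif spread > 0.01:     # partially converged
        return 16
    else:                   # converged: a narrow range suffices
        return 8

rng = np.random.default_rng(1)
early = rng.normal(0.0, 2.0, 1000)      # wide distribution, early in training
late = rng.normal(0.5, 0.0005, 1000)    # tightly converged distribution

print(choose_bit_width(early))  # 32
print(choose_bit_width(late))   # 8
```

In a real training loop a policy like this would be re-evaluated per layer and per iteration, so the bit width tracks the learning state automatically instead of being tuned by hand.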

2. Synchronization-mitigation technology that enables high-speed execution in parallel environments with performance fluctuations

In a cloud environment or other setting where many applications share the CPU, some nodes may respond with significant delay due to communication conflicts, interrupt handling, and similar causes. This technology estimates, for each operation in a parallel computation, how much processing time would be saved by terminating it early and how strongly early termination would affect the result; it then controls the termination time of each operation so that processing time is reduced as much as possible without degrading the result. This enables faster parallel processing, with a confirmed speed-up of up to 3.7 times in deep learning calculations.
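A simplified sketch of this synchronization-relaxation idea, under the assumption that a fixed deadline stands in for the estimated cut-off time (illustrative only, not Fujitsu's implementation): instead of waiting for every worker, collect whatever results arrive within the deadline and abandon stragglers.

```python
import concurrent.futures as cf
import time

def worker(node_id: int, delay: float) -> int:
    time.sleep(delay)  # simulate variable node response time
    return node_id

def gather_with_deadline(delays, deadline):
    """Return results from workers that finish within `deadline` seconds."""
    pool = cf.ThreadPoolExecutor(max_workers=len(delays))
    futures = [pool.submit(worker, i, d) for i, d in enumerate(delays)]
    finished, pending = cf.wait(futures, timeout=deadline)
    results = sorted(f.result() for f in finished)
    # Stop waiting on stragglers; don't block on their completion.
    pool.shutdown(wait=False, cancel_futures=True)
    return results

# Node 3 is a straggler (2 s delay) and is dropped at the 0.5 s deadline.
print(gather_with_deadline([0.05, 0.05, 0.05, 2.0], deadline=0.5))
```

The press release describes something more refined: the cut-off is chosen per operation from an estimate of the time saved versus the impact on the result, rather than from a fixed deadline as here.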

Effects

Ultimately, the newly developed technology “Content-Aware Computing” can accelerate processing for AI tasks by up to 10 times. By incorporating this technology into AI frameworks and libraries, it becomes possible to speed up AI processing in cloud environments and data centers using GPUs and CPUs with built-in low-bit computing functions.

Future Plans

Moving forward, Fujitsu aims to incorporate this technology into a widely used AI framework, which will serve as a foundation for executing AI services that use deep learning.

Source: ACN Newswire
