The adoption of in-memory computing is increasing as businesses seek quick, easy access to data and analytics to inform their decisions. In-memory computing offers the insights they need to increase efficiency in operations, finance, marketing and sales.
It does this by providing a different way to store and process data. As the technology improves, it is becoming more affordable and easier to implement, making widespread adoption all but inevitable.
According to a Gartner report, the in-memory computing (IMC) market will reach roughly $15 billion by 2021, a significant increase from $6.8 billion.
A shift away from past limitations
In-memory computing developed because traditional solutions were no longer adequate. Using disk-first architecture meant that when data in storage had to be accessed, bottlenecks reduced speed, even when using the fastest hard disks. As quantities of data grew, the time required to access data, let alone analytics, increased.
In-memory computing delivers fast performance and real-time scaling because data is stored in RAM across multiple computers instead of in a centralized, disk-based database.
When data is stored in this distributed fashion, parallel distributed processing lets many computers, potentially in different locations, share the work of processing it. Because each computer holds its data in memory, that data is readily available, can be accessed almost instantaneously, and makes scalability possible.
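The divide-and-combine pattern behind parallel distributed processing can be sketched in a few lines. The example below is purely illustrative: the partitions, the `local_aggregate` helper and the thread pool are stand-ins for real cluster nodes, each of which holds its own shard entirely in memory and processes it independently before the partial results are combined.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical dataset, partitioned across three "nodes".
# Each shard lives entirely in memory (here, a plain Python list).
partitions = [
    list(range(0, 1000)),      # shard held by node 1
    list(range(1000, 2000)),   # shard held by node 2
    list(range(2000, 3000)),   # shard held by node 3
]

def local_aggregate(shard):
    # Each node processes only its own in-memory shard.
    return sum(shard)

# Run the per-node work in parallel, then combine the partial results.
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(local_aggregate, partitions))

total = sum(partials)
print(total)  # 4498500
```

Because no node ever waits on a disk read, the only serial step is the final combination of partial results, which is why adding nodes scales throughput almost linearly.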
Memory-centric architecture is coming, even if the timing is uncertain. This is an important evolution in technology, offering more flexibility and improved return on investment for a whole range of use cases.
The expectation is that multiple NVRAM technologies will be successful, based on characteristics such as durability, cost, application, and performance. Architects will be able to tweak systems in ways that are not possible now.
With memory-centric architecture, datasets can exceed the amount of available RAM because all data is persisted on disk: data with high value and demand also resides in memory, while data with low value and demand resides only on disk.
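As a rough illustration of this tiering (not any particular vendor's design), a store can keep every record on "disk" while bounding the hot set held in RAM. In the sketch below, the disk tier is simulated with a plain dict and the hot tier is a small LRU cache:

```python
from collections import OrderedDict

class TieredStore:
    """Illustrative sketch: all data persists on 'disk' (simulated with
    a dict), while a bounded hot tier is kept in RAM with LRU eviction."""

    def __init__(self, ram_capacity):
        self.ram = OrderedDict()   # hot tier, bounded in size
        self.disk = {}             # full copy of all data
        self.ram_capacity = ram_capacity

    def put(self, key, value):
        self.disk[key] = value            # all data is on disk
        self.ram[key] = value             # hot data also lives in memory
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_capacity:
            self.ram.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key in self.ram:               # fast path: served from memory
            self.ram.move_to_end(key)
            return self.ram[key]
        value = self.disk[key]            # slow path: fetched from disk
        self.put(key, value)              # promote to the hot tier
        return value
```

A `get()` that misses RAM pays the disk cost once and then promotes the entry, so frequently demanded data migrates into memory on its own, which is exactly the cost/performance balance the architecture aims for.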
Advantages of In-Memory Computing Platforms
When RAM was costly and servers were slower, caching and processing data in RAM was very limited. Distributed processing solutions were used to scale available RAM and CPU power, but the cost of RAM remained a challenge.
More recently, the cost of RAM decreased and APIs and 64-bit processors allowed integration of in-memory data grids with existing data layers to deliver speed, high availability and scalability.
At the same time, new in-memory databases were developed to replace existing disk-based databases. These developments were all positive steps in the right direction. However, the IMC market was still complex and fragmented.
Over the past few years, we have seen the emergence of integrated in-memory computing platforms that are easier to deploy and have driven down operating costs.
They bring different components, such as in-memory data grids and streaming analytics, together into a unified platform, making it simpler to speed up and scale out existing applications and to build new memory-driven applications.
One of the greatest advantages of these in-memory computing platforms is their speed. Businesses are able to get real-time insights and use them to formulate new strategies. They can perform complex queries in minutes, rather than having to rely on information that may already be out of date. It is possible to analyze whole sets of data and make decisions based on all the facts.
Memory-centric architecture helps businesses to balance costs and performance, accelerates recovery in the event of a system crash, and allows for high availability despite the growth of data.
How In-Memory Computing Platforms are being used
In-memory computing is no longer just for big companies as costs come down and implementing solutions has become more manageable. Companies across a broad range of industries are adopting in-memory computing platforms. Some of the use cases are in financial services, software, SaaS, retail, healthcare, IoT, and more.
The banking, financial services and insurance sectors are likely to experience the highest growth in the use of in-memory computing platforms.
By 2020, people are expected to make 726 billion transactions using digital payment technologies. Peer-to-peer money-transfer apps, "tap and go" payments, bitcoin and mobile wallets are transforming the way we pay. Instant payment capabilities are on the rise as customers demand personalized, agile, real-time payment services.
Transitioning to digital payment technologies has its challenges. Service providers have to be able to ensure a level of reliability and scale capacity ahead of demand. They need to use sophisticated analytics to identify actionable insights from data, manage risks, ensure regulatory compliance and prevent fraud. An in-memory computing platform helps them by offering low latency, scalability and resilience.
In the health industry, e-Therapeutics, an Oxford-based company, is using an in-memory computing platform to dramatically speed up computational drug discovery projects, enabling it to discover new and better drugs more effectively.
Retailers today face challenges such as large volumes of customer retention and sales data, latency in supply and demand intelligence, and the need to manage inventory flow in real time. In-memory computing provides a solution, giving retailers the ability to act on real-time information. With insight into consumer sentiment and sales patterns, they can run better marketing campaigns, giving them a competitive advantage.
Zalando is an online clothing retailer with millions of website visitors. It is a fast-growing business that handles increasing data volumes. By implementing an in-memory database, it is benefiting from more flexibility.
It can retrieve information about whether products are available in real-time and optimize its stock. Customers who buy items receive personalized emails with offers based on their shopping behavior.
Today many businesses are using streams of information for all kinds of purposes – to inform and protect us, make us healthier and improve our lives. The technology that supports this is in-memory computing.
Some of the top concerns about in-memory computing from an IT perspective have been the high price tag, the ability to handle multiple data types and the ability to integrate disparate data sources. The solutions available today answer these challenges.
In-memory computing is of no use unless it is implemented for the right reasons. It does not help to be able to perform analysis speedily if the data being analyzed is incorrect.
Companies need to be able to collect relevant data with a level of standardization and consistency. The key should be on how to apply in-memory computing solutions to bring real-world improvements to business processes.
Transformations to expect
Over the next decade, more and more businesses will start using comprehensive in-memory computing platforms.
Non-volatile memory (NVM) will become the preferred method for storage, with hybrid models used to store very large datasets. Volatile memory, such as RAM, loses all stored data when the power is switched off.
Unlike volatile memory, non-volatile memory does not need a continuous power supply to retain data stored in the computing device. A number of NVMs have emerged in recent years, with the most well-known being NAND flash memory.
New developments are taking place that will allow companies to refine and expand their digital transformation initiatives. One such development is in the field of artificial intelligence and machine learning. In-memory computing platform vendors are beginning to add machine learning capabilities to their in-memory computing platforms.
Integrating a machine learning library with an in-memory computing platform will support the retraining of machine learning models in real time based on new operational data. Models can thus evolve continuously to support mission-critical applications such as price setting, credit approvals, fraud detection and package routing.
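As a minimal, vendor-neutral sketch of real-time retraining, the snippet below updates a toy perceptron in place as each new event arrives, so the in-memory model adjusts continuously instead of waiting for a separate batch job. The feature vectors and labels are invented purely for illustration:

```python
# Illustrative online learning (not any vendor's API): a perceptron whose
# weights are adjusted incrementally as each operational event streams in.

def update(weights, bias, features, label, lr=0.1):
    # Predict with the current in-memory model.
    score = sum(w * x for w, x in zip(weights, features)) + bias
    pred = 1 if score > 0 else 0
    # Nudge the weights only when the prediction was wrong.
    if pred != label:
        weights = [w + lr * (label - pred) * x
                   for w, x in zip(weights, features)]
        bias += lr * (label - pred)
    return weights, bias, pred

# Hypothetical stream of (features, label) events, e.g. fraud signals.
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0),
          ([1.0, 0.2], 1), ([0.1, 1.0], 0)]

w, b = [0.0, 0.0], 0.0
for x, y in stream:
    w, b, _ = update(w, b, x, y)   # model evolves with every event
```

Because each update touches only the in-memory weights, the model is never taken offline, which is what makes this pattern viable for mission-critical, low-latency decisions.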
Further integrations between in-memory platforms and deep learning systems will allow companies to feed operational data directly into the deep learning platforms so they won’t have to create and maintain separate analytical architecture. This will reduce the complexity and cost of using operational data to train artificial intelligence models.
We are inevitably moving towards widespread adoption of in-memory computing. Organizations that collect large quantities of data will have little choice but to adopt it if they want to continue to function efficiently. Without in-memory computing solutions, they may not be able to maintain a competitive advantage.
As the technology matures, the price of high-performance computing solutions will continue to fall. For smaller companies, the costs of in-memory computing may still outweigh the benefits, but memory-centric architecture may provide a cost-effective way for them to move forward.