Until recently, advances in storage technology were measured primarily in terms of capacity and speed. No longer: sophisticated new technologies and methodologies have augmented those steadfast benchmarks, making storage more flexible, smarter, and much easier to manage.
The year 2020 promises to bring even more disruptive technologies to the storage market as IT leaders look for more efficient ways of coping with the flood of data generated by AI, IoT devices, and other sources.
Here, we look at five storage technologies that will create the most significant disruption in 2020.
1) Software-defined Storage
Attracted by the promise of flexibility, automation, increased storage capacity, and improved staff efficiency, a large number of enterprises are considering a transition to software-defined storage, commonly termed SDS.
This technology separates storage resources from their underlying hardware. Compared with conventional network-attached storage (NAS) or storage area network (SAN) systems, SDS can run on industry-standard hardware. You will also benefit from smarter interactions between storage and workloads, real-time scalability, and agile storage consumption.
So what does SDS technology do, precisely? It virtualizes the available storage resources and provides a management interface that presents the various storage pools as a unified storage resource.
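That pooling idea can be sketched as a toy model (all class and method names here are illustrative, not any vendor's actual API): heterogeneous backends sit behind one management interface that reports and provisions capacity as a single logical resource.

```python
# Toy sketch of SDS-style pooling: heterogeneous backends are
# presented to consumers as one unified storage resource.
# Names and placement policy are illustrative only.

class Backend:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb

class UnifiedPool:
    """Management interface that hides the underlying hardware."""
    def __init__(self, backends):
        self.backends = backends

    def total_free_gb(self):
        return sum(b.free_gb() for b in self.backends)

    def provision(self, size_gb):
        # Naive policy: place the volume on the emptiest backend.
        target = max(self.backends, key=Backend.free_gb)
        if target.free_gb() < size_gb:
            raise RuntimeError("pool exhausted")
        target.used_gb += size_gb
        return target.name

pool = UnifiedPool([Backend("nas-1", 100), Backend("san-1", 500)])
print(pool.provision(200))   # lands on san-1, the emptiest backend
print(pool.total_free_gb())  # 400
```

A real SDS layer does far more (data services, telemetry, failure handling), but the consumer-facing abstraction, capacity in, volumes out, is the core of it.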
You will also benefit from mobility, abstraction, virtualization, storage resource management, and optimization. Using this technology requires managers to shift their view of hardware from the most crucial element of storage to a supporting player. In 2020, managers will deploy SDS technology for a variety of reasons.
SDS can reduce operating expenses and the administrative effort that storage has required to date. SSD (solid-state drive) technologies are changing the way organizations use and manage their storage, which makes SSD-heavy shops top candidates for adopting SDS. These technologies also give enterprises greater control and the ability to configure the right levels of performance and capacity while improving utilization and slashing management costs.
However, achieving better performance and capacity, and selecting the least disruptive approach to SDS, requires a clear and thorough understanding of application requirements. Prospective adopters also need an honest assessment of their organization's real ability to manage an SDS environment. Depending on the level of in-house expertise, an SDS appliance that packages software and hardware together often offers the smoothest path to adoption.
2) NVMe and NVMe-oF
Until now, flash drives have been connected via the legacy SATA or SAS interfaces, which were established decades ago for HDDs (hard disk drives).
NVMe (Non-Volatile Memory Express) is a communications protocol that runs over the PCIe (Peripheral Component Interconnect Express) bus. It is a much more powerful protocol, targeted especially at high-speed flash storage systems.
NVMe supports low-latency commands and parallel queues; it is designed to make better use of high-end SSDs. The technology not only offers higher performance and lower latency than legacy protocols but also opens up new capabilities for real-time data processing in data center, cloud, and edge environments. These capabilities can help businesses stand out from competitors in big data, so NVMe will be especially valuable for data-driven companies, particularly those that need real-time analytics or are built on emerging technologies.
The NVMe protocol is not restricted to connecting flash drives; it also serves as a networking protocol. With the arrival of NVMe-oF (NVMe over Fabrics), organizations can create a high-performance storage network with latency that rivals DAS (direct-attached storage).
Consequently, flash drives can share data among servers whenever required. Paired together, NVMe and NVMe-oF represent a leap forward, delivering higher performance and lower latency than their predecessors, SATA and SAS.
Deployed together, these two technologies enable new and better solutions, applications, and use cases that were previously unattainable or cost-prohibitive.
So far, a lack of robustness and maturity has restricted the adoption of NVMe and NVMe-oF. But with enhancements such as the newly introduced NVMe over TCP, we can expect adoption across new applications and use cases to accelerate dramatically in 2020.
3) Computational Storage
Computational storage, a methodology that enables some processing to be done at the storage level rather than by the host CPU in main memory, is luring a growing number of IT leaders.
With the boom in AI and IoT applications, we now require far greater amounts of high-performance storage, as well as other peripheral computing resources, yet transferring data to the host processor is inherently inefficient and costly. Moving compute closer to storage has been a trend for several years now, driven by the high performance of SSDs (solid-state drives).
Computational storage serves several different purposes: small edge devices that filter data before sending it to the cloud, storage arrays that sort data for databases, and rack-level systems that transform large datasets for big data applications.
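The edge-filtering case can be illustrated with a toy comparison (the functions and data here are made up for illustration; real products expose vendor-specific APIs): instead of shipping every record to the host and filtering there, the device runs the filter itself and only the matches cross the bus.

```python
# Toy illustration of the computational-storage idea: filter where
# the data lives, so only matching records are transferred.
# Records, thresholds, and function names are illustrative only.

records = [{"sensor": i, "temp": 20 + (i % 50)} for i in range(10_000)]

def host_side_filter(device_records):
    # Conventional path: move everything, then filter on the host CPU.
    transferred = len(device_records)
    hot = [r for r in device_records if r["temp"] > 60]
    return hot, transferred

def in_storage_filter(device_records):
    # Computational-storage path: the device filters in place and
    # returns only the matches.
    hot = [r for r in device_records if r["temp"] > 60]
    return hot, len(hot)

hot_a, moved_a = host_side_filter(records)
hot_b, moved_b = in_storage_filter(records)
assert hot_a == hot_b            # same answer either way
print(moved_a, moved_b)          # 10000 records moved vs 1800
```

The result is identical; what changes is how much data travels to the host, which is exactly the cost that computational storage attacks.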
NVMe and containers are the primary enablers of computational storage, so IT managers who have not already done so should plan their transition to container-based infrastructure and NVMe. Managers can also identify the applications that stand to benefit most from computational storage and engage with the appropriate vendors.
4) Storage-class memory
The widespread adoption of SCM (storage-class memory) has been foreseen for years, and 2020 may be the year it finally happens. While Toshiba XL-Flash, Intel Optane, and Samsung Z-SSD memory modules have been available for some time, their impact has so far been modest.
The big difference will come once Intel has its Optane DCPMM (Data Center Persistent Memory Module) version working. This step would be a game-changer. The Intel device merges the characteristics of fast but volatile DRAM with those of slower but persistent NAND storage. The combination aims to boost users' ability to work with bigger datasets, offering speed approaching that of DRAM alongside the greater capacity and persistence of NAND.
SCM is not just faster than NAND-based flash alternatives; it is on the order of 1,000 times faster: microsecond latency, not millisecond. It is hard to wrap our heads around what this will mean for our infrastructure and applications. The first and most prominent use will be extending memory; third parties already enable in-memory applications to use Optane to reach footprints of up to 768TB.
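To make the microsecond-versus-millisecond claim concrete, here is some rough arithmetic. The latency figures are illustrative orders of magnitude, not measured numbers, but they show what the gap means for a workload issuing many dependent reads.

```python
# Rough latency comparison (illustrative orders of magnitude only):
# total time for one million dependent reads on each medium.

hdd_ms = 5.0          # seek-bound hard disk read, ~milliseconds
nand_ssd_us = 100.0   # NAND flash read, ~100 microseconds
scm_us = 1.0          # storage-class memory, ~1 microsecond

reads = 1_000_000
print(reads * hdd_ms / 1000, "seconds on HDD")            # 5000.0
print(reads * nand_ssd_us / 1_000_000, "seconds on NAND") # 100.0
print(reads * scm_us / 1_000_000, "seconds on SCM")       # 1.0
```

A workload that waits well over an hour on disk, or minutes on NAND flash, finishes in about a second on SCM, which is why it blurs the line between storage and memory.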
In practice, data centers' adoption of SCM will be limited to deployment on servers that use the newest-generation Intel CPUs (Cascade Lake), which will mute the technology's immediate impact. But an irresistible ROI can drive data centers to embrace the clear opportunities associated with this significant change.
5) Intent-based storage management
Built on SDS and other recent storage innovations, intent-based storage management is expected to improve the planning, design, and implementation of storage architecture in 2020 and beyond, especially for organizations that run mission-critical environments.
Intent-based approaches can also deliver the same benefits we have observed in networking, such as operational agility, rapid scaling, and the ability to embrace emerging technology for existing and new applications. Furthermore, the approach compresses deployment time and managerial effort compared with conventional storage administration, and it is less error-prone.
With intent-based storage management, a developer defines the desired outcome rather than being consumed by administrative overhead, and can therefore provision containers, microservices, or conventional applications quickly.
It becomes easier for infrastructure operators to express the needs of the application and the developer, including availability, efficiency, performance, and data placement, and to let software intelligence optimize the data environment to meet those needs. Furthermore, rather than spending days manually tuning each array, a developer can simply adjust storage policies.
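The declare-the-outcome idea can be sketched in a few lines (the tiers, fields, and thresholds below are entirely made up for illustration): the developer states what the application needs, and a policy engine, not the developer, decides which storage tier satisfies it.

```python
# Toy sketch of intent-based provisioning: the developer states an
# outcome ("what"), a policy engine picks the storage tier ("how").
# Tiers, field names, and thresholds are illustrative only.

TIERS = [  # ordered from most to least capable (and expensive)
    {"name": "nvme-mirror", "max_latency_ms": 1,  "replicas": 2},
    {"name": "ssd-pool",    "max_latency_ms": 5,  "replicas": 2},
    {"name": "hdd-archive", "max_latency_ms": 50, "replicas": 1},
]

def resolve(intent):
    """Return the cheapest tier that satisfies the stated intent."""
    for tier in reversed(TIERS):  # try cheapest first
        if (tier["max_latency_ms"] <= intent["latency_ms"]
                and tier["replicas"] >= intent["min_replicas"]):
            return tier["name"]
    raise ValueError("no tier satisfies this intent")

# The developer declares the outcome, not the array settings:
print(resolve({"latency_ms": 5, "min_replicas": 2}))   # ssd-pool
print(resolve({"latency_ms": 60, "min_replicas": 1}))  # hdd-archive
```

A production system would resolve intents continuously against live telemetry rather than a static table, but the contract is the same: policies change, arrays are never hand-tuned.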
Intent-based storage is made possible by an autonomous, continuous cycle of deployment, telemetry, analytics, and consumption built on SDS technology, which ensures that the customer-specified intent is fulfilled. It also allows the intent to be adjusted non-disruptively as an AI/ML engine provides feedback to improve the customer's environment.
The disadvantage of intent-based storage management is the deployment hurdle relative to the promised value. It is also not a one-size-fits-all technology: it offers the best value in disaggregated, mission-critical, at-scale environments, where delivering operational agility and developer velocity will have the most significant business impact. For smaller firms or infrastructures, or for less critical situations, direct-attached or hyper-converged infrastructure approaches are generally sufficient.
These five storage technologies will surely cause disruption in the world of data storage in 2020.