Convergence and virtualization trends in modern Data Center storage are extending their scope to meet next-generation requirements. This article gives a brief overview of current practices and of the future technologies that will meet the demands of advanced Data Center storage.

Modern storage architectures

There are three main storage architectures: Direct-Attached Storage (DAS), Network-Attached Storage (NAS), and Storage Area Network (SAN). The choice among them for the Data Center is application-dependent.

For non-mission-critical archival applications, DAS running an object-storage protocol is often preferred. Object storage is not tied to any operating system (OS) and can be used to access any storage device without OS intervention. Each storage object can be located anywhere in the world through a unique ID assigned to it based on its metadata.
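The object-storage access model described above can be sketched in a few lines. This is a minimal in-memory illustration, not a real protocol implementation: the `ObjectStore` class and the choice of deriving the ID from a metadata hash are assumptions for demonstration (real systems typically use UUIDs or content hashes).

```python
import hashlib
import json

# Illustrative sketch of object-storage semantics: each object is addressed
# by a flat, globally unique ID rather than an OS file path, and the ID here
# is derived from the object's metadata.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # object_id -> (metadata, data)

    def put(self, data: bytes, metadata: dict) -> str:
        # Derive a unique ID from the metadata (a simplifying assumption).
        object_id = hashlib.sha256(
            json.dumps(metadata, sort_keys=True).encode()
        ).hexdigest()
        self._objects[object_id] = (metadata, data)
        return object_id

    def get(self, object_id: str) -> bytes:
        # Retrieval needs only the ID -- no directory traversal, no
        # OS-specific file-path semantics.
        return self._objects[object_id][1]

store = ObjectStore()
oid = store.put(b"archived scan", {"name": "scan-001", "owner": "lab-3"})
print(store.get(oid))  # b'archived scan'
```

The flat ID namespace is what decouples object storage from the OS: any client holding the ID can retrieve the object, regardless of where or on what device it physically resides.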

For high-performance applications, NAS or SAN is the better fit. NAS uses OS-dependent file storage, while SAN uses block storage. Both offer high performance and low latency in access and retrieval, but their use cannot be extended beyond the localized setting of a private cloud.
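The file-level versus block-level distinction can be made concrete with a small sketch. This is illustrative only: a plain local file stands in for a SAN block device, and the filenames are made up for the example.

```python
import os

# File-level access (NAS-style): the client names a file; the server's OS
# resolves the path and manages the on-disk layout.
with open("report.txt", "w") as f:
    f.write("quarterly numbers")
with open("report.txt") as f:
    print(f.read())                    # quarterly numbers

# Block-level access (SAN-style): the client reads and writes fixed-size
# blocks at byte offsets; any filesystem lives on the client side.
# "block_device.img" is a stand-in for a raw LUN.
BLOCK_SIZE = 512
fd = os.open("block_device.img", os.O_RDWR | os.O_CREAT)
try:
    os.pwrite(fd, b"ledger".ljust(BLOCK_SIZE, b"\x00"), 0 * BLOCK_SIZE)
    block = os.pread(fd, BLOCK_SIZE, 0 * BLOCK_SIZE)
    print(block.rstrip(b"\x00"))       # b'ledger'
finally:
    os.close(fd)
```

The block-level path skips filesystem semantics entirely, which is one reason SAN can deliver low latency: the client addresses raw capacity directly.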

Future storage technologies

Modern technologies like 5G, IoT, facial recognition, and deep learning place ever-growing demands on storage performance and capacity. Such requirements are driving companies to think outside the box.

The same technology revolution is influencing SSD caching as well. SSD caching has been re-engineered at the basic storage-cell level, combining the benefits of Persistent Memory (PM) and dense media such as 3D QLC NAND Flash with random-access semantics to give rise to Non-Volatile Random Access Memory (NVRAM).

NVRAM is faster than flash and takes on the characteristics of standard volatile Dual In-line Memory Modules (DIMMs), yet it is persistent: it retains data when power is turned off. NVRAM, or "local PM", fits into standard DIMM slots and co-exists with the standard DRAM of the HCI unit. It uses Persistent Memory over Fabric (PMoF), a protocol that fuses the benefits of kernel-bypass technologies like Remote Direct Memory Access (RDMA) and Non-Volatile Memory Express over Fabrics (NVMe-oF), to access "remote PM", or warm-aisle NVMe storage. This provides the ground-breaking latency typically needed by e-commerce transactions.
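The programming model NVRAM enables can be sketched as follows: data is written with ordinary memory-mapped load/store access and made durable with an explicit flush, instead of traversing the block-I/O stack. This is a minimal sketch under the assumption that an ordinary file stands in for an NVDIMM region; real PM code would map a device-DAX region and use cache-line flush instructions.

```python
import mmap
import os

PM_FILE = "pm_region.bin"   # stand-in for a persistent-memory region
REGION_SIZE = 4096

# Create and size the backing region.
with open(PM_FILE, "wb") as f:
    f.write(b"\x00" * REGION_SIZE)

fd = os.open(PM_FILE, os.O_RDWR)
try:
    with mmap.mmap(fd, REGION_SIZE) as pm:
        # Byte-addressable store: no block read-modify-write cycle.
        pm[0:5] = b"order"
        # Persistence point (analogous to a cache-line writeback + fence
        # on real persistent memory).
        pm.flush()
finally:
    os.close(fd)

with open(PM_FILE, "rb") as f:
    print(f.read(5))  # b'order'
```

PMoF extends exactly this model across the fabric: RDMA-style kernel bypass lets a client perform the equivalent of these load/store operations against remote PM without involving the remote CPU's I/O stack.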

There is talk of using x86 storage servers to overcome the scale-out limitations of HCI and create super-converged systems. These take the storage element out of HCI to make upgrades easier, and use advanced Redundant Array of Independent Disks (RAID) configurations to boost overall redundancy, reliability, and performance.

Amphenol ICC is future-ready with pre-planned offerings fine-tuned to meet current and future Data Center storage needs. Cool Edge hybrid card edge connectors will take on upcoming SSD technologies over NVMe like EDSFF, NGSFF, SFF-TA-1001, SFF-TA-1002, and Gen-Z. Storage Device I/O connectors are also ready to meet SATA 5.0, SAS 4.0, and PCIe 5.0 requirements. The Mini-SAS HD internal and external connectors and cable assemblies, available in copper and optical versions, meet or exceed SAS Gen 4 requirements. The M.2 connector is capable of connecting modules packed with high-density 3D QLC Flash and NVDIMM for advanced server caching.

For more information, visit