Mellanox Reference Architecture for Red Hat Enterprise Linux OpenStack Platform 4.0, Rev 1.1
2 Storage Acceleration Using Mellanox Interconnect
Data centers rely on communication between compute and storage nodes, as compute servers constantly read data from and write data to the storage servers. To maximize the servers' application performance, communication between the compute and storage nodes must have the lowest possible latency, the highest possible bandwidth, and the lowest possible CPU utilization.
Figure 2: OpenStack Based IaaS Cloud POD Deployment Example
Storage applications that use iSCSI over TCP are processed by the CPU. Data center applications that rely heavily on storage communication therefore suffer from reduced CPU availability, because the CPU is busy moving data to the storage servers. The data paths for protocols such as TCP, UDP, NFS, and iSCSI must all wait in line with other applications and system processes for their turn on the CPU. This not only slows down the network, but also consumes system resources that could otherwise be used to run applications faster.
The Mellanox OpenStack solution extends the Cinder project by adding iSCSI running over RDMA (iSER). Leveraging RDMA, the Mellanox OpenStack solution delivers 5X better data throughput (for example, increasing from 1 GB/s to 5 GB/s) and requires up to 80% less CPU utilization (see Figure 3).
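As a concrete illustration, the fragment below sketches how the storage node's Cinder service could be switched to the iSER transport. It is a minimal sketch, assuming the Havana-era LVMISERDriver that shipped with this generation of OpenStack; the IP address and IQN prefix are placeholders, and option names should be checked against the release actually deployed.

    # /etc/cinder/cinder.conf -- illustrative iSER sketch, not a verified configuration
    [DEFAULT]
    # LVM driver variant that exports volumes over iSER instead of TCP iSCSI
    volume_driver = cinder.volume.drivers.lvm.LVMISERDriver
    # prefix used when generating IQNs for iSER targets (placeholder value)
    iser_target_prefix = iqn.2010-10.org.openstack.iser:
    # address of the RDMA-capable interface the targets listen on (placeholder)
    iser_ip_address = 192.168.10.5
    iser_port = 3260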
Mellanox ConnectX®-3 adapters bypass the operating system and CPU by using RDMA, allowing much more efficient data movement paths. iSER capabilities are used to accelerate hypervisor traffic, including storage access, VM migration, and data and VM replication. RDMA offloads data movement to the Mellanox ConnectX-3 hardware, which delivers SCSI payloads to the application with zero-copy message transfers, producing significantly faster performance, lower network latency, lower access time, and lower CPU overhead. iSER can provide 6X faster performance than traditional TCP/IP-based iSCSI. It also consolidates the efforts of the Ethernet and InfiniBand communities, and reduces the number of storage protocols a user must learn and maintain.
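On the initiator side, the switch from TCP iSCSI to iSER is visible in the standard open-iscsi tooling, which can bind a session to the iser transport interface. The commands below are a minimal sketch with a placeholder portal address and target IQN, shown only to make the transport change concrete; they assume an RDMA-capable fabric and an iSER target already exported as above.

    # discover targets through open-iscsi's predefined "iser" interface (placeholder portal)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.5:3260 -I iser

    # log in to a discovered target over RDMA instead of TCP (placeholder IQN)
    iscsiadm -m node -T iqn.2010-10.org.openstack.iser:volume-00000001 \
             -p 192.168.10.5:3260 -I iser --login

    # confirm the active session is using the iser transport
    iscsiadm -m session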