Red Hat APPLICATION STACK 1.2 RELEASE User Manual, Page 19
Chapter 3. Storage and File Systems
3.1. RAID
Upgrades
Performing an upgrade from a dmraid set to an mdraid set is not supported. A warning will be
displayed when an upgrade of this type is attempted. Upgrades from existing mdraid sets and
creation of new mdraid sets are possible.
The new default superblock can cause problems when upgrading sets. This new superblock format
(used on all devices except when creating a RAID1 /boot partition) is now at the beginning of the array,
and any file system or LVM data is offset from the beginning of the partition. When the array is not
running, LVM and file system mount commands may not detect the device as having a valid volume
or file system data. This is intentional, and means that if you want to mount a single disk in a RAID1
array, you need to start the array with only that single disk in it, then mount the array. You cannot
mount the bare disk directly. This change was made because mounting a bare disk directly can silently
corrupt the array if a resync is not forced.
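As a sketch of the procedure above, a single member of a RAID1 array can be brought up as a degraded array and then mounted; the device names here (/dev/md/home, /dev/sda1, /mnt) are illustrative only:

```shell
# Assemble the array from a single member and force it to run degraded.
mdadm --assemble --run /dev/md/home /dev/sda1

# Mount the array device, never the bare member disk.
mount /dev/md/home /mnt
```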
On subsequent reboots, the RAID system may then consider the disk that was not included in the
array as being incompatible, and will disconnect that device from the array. This is also normal. When
you are ready to re-add the other disk back into the array, use the mdadm command to hot add the
disk into the array, at which point a resync of the changed parts of the disk (if you have write intent
bitmaps) or the whole disk (if you have no bitmap) will be performed, and the array will once again
be synchronized. From this point, devices will not be disconnected from the array, as the array is
considered to be properly assembled.
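The hot add described above might look like the following; the array and partition names are assumptions for illustration:

```shell
# Hot add the previously disconnected member back into the array.
# mdadm then resyncs either the changed regions (if a write intent
# bitmap is present) or the whole disk (if not).
mdadm /dev/md/home --add /dev/sdb1

# Watch the resync progress.
cat /proc/mdstat
```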
The new superblock supports the concept of named mdraid arrays. Dependency on the old method
of array enumeration (for instance, /dev/md0 then /dev/md1, etc.) for distinguishing between arrays
has been dropped. You can now choose an arbitrary name for the array (such as home, data, or
opt). Create the array with your chosen name using the --name=opt option. Whatever name is
given to the array, that name will be created in /dev/md/ (unless a full path is given as the name, in
which case that path will be created; or unless you specify a single number, such as 0, in which case
mdadm will start the array using the old /dev/mdx scheme). The Anaconda installer does not currently allow for
the selection of array names, and instead uses the simple number scheme as a way to emulate how
arrays were created in the past.
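For example, creating a named array with --name=opt might look like this; the RAID level and member partitions are assumptions for illustration:

```shell
# Create a named RAID1 array; mdadm exposes it as /dev/md/opt.
mdadm --create /dev/md/opt --name=opt --level=1 --raid-devices=2 \
    /dev/sda2 /dev/sdb2
```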
The new mdraid arrays support the use of write intent bitmaps. These help the system identify
problematic parts of an array, so that in the event of an unclean shutdown, only the problematic parts
need to be resynchronized, and not the entire disk. This drastically reduces the time required to
resynchronize. Newly created arrays will automatically have a write intent bitmap added when suitable.
For instance, arrays used for swap and very small arrays (such as /boot arrays) do not benefit from
having write intent bitmaps. It is possible to add a write intent bitmap to your previously existing arrays
after the upgrade is complete via the mdadm --grow command on the device. However, write intent
bitmaps do incur a modest performance hit (about 3-5% at a bitmap chunk size of 65536, which can
increase to 10% or more at small bitmap chunk sizes such as 8192). This means that if a write intent
bitmap is added to an array, it is best to keep the chunk size reasonably large; the recommended size
is 65536.
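Adding a bitmap to an existing array with mdadm --grow might look like the following; the array name is illustrative, and the interpretation of the chunk-size unit follows mdadm's --bitmap-chunk option rather than anything stated in this manual:

```shell
# Add an internal write intent bitmap to an existing array, using the
# recommended bitmap chunk size of 65536.
mdadm --grow /dev/md/data --bitmap=internal --bitmap-chunk=65536
```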