LVM on Top of RAID (Original)
Author: 曲文庆  Date: 2008-10-11 10:01
 RAID (Redundant Array of Inexpensive Disks) combines multiple disks into one array that is used as a single disk. Data is spread across the disks in stripes, so several disks can be read and written at the same time to reduce access time, and various techniques can add redundancy, so that even if one disk fails, all of the data can be recovered from the remaining disks. In short, the benefits are better safety, higher speed, and larger capacity.
 RAID arrays are divided into levels (RAID levels) according to the technique used; the levels in common use today are RAID 0, RAID 1, RAID 5, and RAID 10. RAID 0 joins several disks into one large disk: data is split into chunks according to the number of disks and written to all of them at once. Of all the levels, RAID 0 is the fastest, but it has no redundancy; if any disk in the array fails, all data is lost. RAID 1 uses disk mirroring: while data is written to one disk, an identical copy is written to another. Thanks to the mirror copy, RAID 1 offers the best data safety of all RAID levels. Writes are slower, but because reads can be served from either of the mirrored disks, read performance is close to that of RAID 0. RAID 5 protects data with parity; instead of keeping the parity on a dedicated disk, it interleaves parity blocks across all member disks, so if any one disk fails, the lost data can be rebuilt from the parity on the others. It also reads and writes in parallel, so performance is high. RAID 10 combines the advantages of RAID 0 and RAID 1.
 LVM (Logical Volume Manager) is a mechanism for managing disk partitions under Linux. It adds a logical layer on top of disks and partitions to make partition management more flexible.
 One of the most common and hardest decisions when installing Linux is how to size each partition so that disk space is allocated sensibly. When a partition later runs out of space, the usual workarounds are symbolic links or partition-resizing tools (such as PartitionMagic), but these are temporary fixes that do not solve the underlying problem. With the arrival of logical volume management in Linux these problems disappear: partitions can be resized conveniently without taking the system down.
 Given the distinct strengths of RAID and LVM, how can their advantages be combined effectively? This article takes a rough look at LVM built on top of RAID (RAID 0 and RAID 1).
 Operating system: CentOS 5.0
 System environment (installed package groups): "Development Libraries", "System Tools", "X Software Development", "Server Configuration Tools", "Administration Tools"
 #fdisk -l
 Disk /dev/hda: 163.9 GB, 163928604672 bytes
 255 heads, 63 sectors/track, 19929 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot       Id  System
 /dev/hda1   *        fd  Linux raid autodetect
 /dev/hda2            fd  Linux raid autodetect
 /dev/hda3            fd  Linux raid autodetect
 /dev/hda4             5  Extended
 /dev/hda5            fd  Linux raid autodetect
Disk /dev/hdc: 163.9 GB, 163928604672 bytes
 255 heads, 63 sectors/track, 19929 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot       Id  System
 /dev/hdc1   *        fd  Linux raid autodetect
 /dev/hdc2            fd  Linux raid autodetect
 /dev/hdc3            fd  Linux raid autodetect
 /dev/hdc4             5  Extended
 /dev/hdc5            fd  Linux raid autodetect
Disk /dev/md0: 4293 MB, 4293459968 bytes
 2 heads, 4 sectors/track, 1048208 cylinders
 Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1: 1077 MB, 1077411840 bytes
 2 heads, 4 sectors/track, 263040 cylinders
 Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2: 6292 MB, 6292242432 bytes
 2 heads, 4 sectors/track, 1536192 cylinders
 Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/md3: 6292 MB, 6292242432 bytes
 2 heads, 4 sectors/track, 1536192 cylinders
 Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md3 doesn't contain a valid partition table
 #df
 Filesystem         Mounted on
 /dev/md0           /
 tmpfs              /dev/shm
 /dev/md2           /usr
 /dev/md3           /var
Use fdisk to create two new partitions on each of hda and hdc, setting the partition type to fd (Linux raid autodetect); a sketch of the interactive session follows.
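 A minimal sketch of the fdisk session, assuming the new partitions end up as hda6 and hda7 (repeat the same steps for /dev/hdc); the exact prompts may differ slightly between fdisk versions:
 # fdisk /dev/hda
 Command (m for help): n                         (create a new logical partition, accept or enter the size)
 Command (m for help): t                         (change the partition type)
 Partition number (1-7): 6                       (the partition just created)
 Hex code (type L to list codes): fd             (fd = Linux raid autodetect)
 Command (m for help): w                         (write the table and exit)
 # partprobe                                     (have the kernel re-read the partition table, or reboot)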
 #fdisk -l
 Disk /dev/hda: 163.9 GB, 163928604672 bytes
 255 heads, 63 sectors/track, 19929 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot       Id  System
 /dev/hda1   *        fd  Linux raid autodetect
 /dev/hda2            fd  Linux raid autodetect
 /dev/hda3            fd  Linux raid autodetect
 /dev/hda4             5  Extended
 /dev/hda5            fd  Linux raid autodetect
 /dev/hda6            fd  Linux raid autodetect
 /dev/hda7            fd  Linux raid autodetect
Disk /dev/hdc: 163.9 GB, 163928604672 bytes
 255 heads, 63 sectors/track, 19929 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot       Id  System
 /dev/hdc1   *        fd  Linux raid autodetect
 /dev/hdc2            fd  Linux raid autodetect
 /dev/hdc3            fd  Linux raid autodetect
 /dev/hdc4             5  Extended
 /dev/hdc5            fd  Linux raid autodetect
 /dev/hdc6            fd  Linux raid autodetect
 /dev/hdc7            fd  Linux raid autodetect
 Next we create RAID 0 and RAID 1 arrays (the goal is to test LVM on RAID 1 and LVM on RAID 0).
 Create the RAID 1 array
 mdadm --create --verbose /dev/md4 --level=1 --raid-devices=2 /dev/hda6 /dev/hdc6
 If this reports "mdadm: error opening /dev/md4: No such file or directory", the device node has to be created with mknod first:
 # ls -al /dev/md*
 brw-r----- 1 root disk 9, 0 Sep 25 16:20 /dev/md0
 brw-r----- 1 root disk 9, 1 Sep 26  2008 /dev/md1
 brw-r----- 1 root disk 9, 2 Sep 25 16:20 /dev/md2
 brw-r----- 1 root disk 9, 3 Sep 25 16:20 /dev/md3
 # mknod /dev/md4 b 9 4
 # mknod /dev/md5 b 9 5
 Change the owner and permissions of md4 and md5 to match the other md device nodes, for example as shown below.
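 A sketch, copying the owner, group, and mode from the existing nodes listed above:
 # chown root:disk /dev/md4 /dev/md5
 # chmod 640 /dev/md4 /dev/md5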
 Create the RAID 0 array
 #mdadm --create --verbose /dev/md5 --level=0 --raid-devices=2 /dev/hda7 /dev/hdc7
 Two RAID arrays now exist: /dev/md4 (RAID 1) and /dev/md5 (RAID 0).
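 At this point it is worth checking the rebuild status and, optionally, recording the arrays in /etc/mdadm.conf so they are assembled under the same names after a reboot. A sketch (whether and how you maintain an mdadm.conf is a local choice):
 # cat /proc/mdstat                            (watch the RAID 1 resync progress)
 # mdadm --detail --scan >> /etc/mdadm.conf    (append ARRAY lines for the new arrays)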
 Next we build LVM on top of these RAID devices.
 The LVM commands can be run directly at the shell prompt or inside the interactive lvm shell. Because the commands are conveniently grouped there, we work inside the lvm shell; for reference, a shell-prompt equivalent is sketched below.
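 For reference only, the same core steps from the shell prompt would be (a sketch; the interactive lvm session below is what we actually use):
 # pvcreate /dev/md4
 # vgcreate vg00 /dev/md4
 # lvcreate -l 982 -n lv00 vg00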
 #lvm
 lvm>help
   Available lvm commands:
   Use 'lvm help <command>' for more information
   
   dumpconfig      Dump active configuration
   formats         List available metadata formats
   help            Display help for commands
   lvchange        Change the attributes of logical volume(s)
   lvconvert       Change logical volume layout
   lvcreate        Create a logical volume
   lvdisplay       Display information about a logical volume
   lvextend        Add space to a logical volume
   lvmchange       With the device mapper, this is obsolete and does nothing.
   lvmdiskscan     List devices that may be used as physical volumes
   lvmsadc         Collect activity data
   lvmsar          Create activity report
   lvreduce        Reduce the size of a logical volume
   lvremove        Remove logical volume(s) from the system
   lvrename        Rename a logical volume
   lvresize        Resize a logical volume
   lvs             Display information about logical volumes
   lvscan          List all logical volumes in all volume groups
   pvchange        Change attributes of physical volume(s)
   pvresize        Resize physical volume(s)
   pvcreate        Initialize physical volume(s) for use by LVM
   pvdata          Display the on-disk metadata for physical volume(s)
   pvdisplay       Display various attributes of physical volume(s)
   pvmove          Move extents from one physical volume to another
   pvremove        Remove LVM label(s) from physical volume(s)
   pvs             Display information about physical volumes
   pvscan          List all physical volumes
   segtypes        List available segment types
   vgcfgbackup     Backup volume group configuration(s)
   vgcfgrestore    Restore volume group configuration
   vgchange        Change volume group attributes
   vgck            Check the consistency of volume group(s)
   vgconvert       Change volume group metadata format
   vgcreate        Create a volume group
   vgdisplay       Display volume group information
   vgexport        Unregister volume group(s) from the system
   vgextend        Add physical volumes to a volume group
   vgimport        Register exported volume group with system
   vgmerge         Merge volume groups
   vgmknodes       Create the special files for volume group devices in /dev
   vgreduce        Remove physical volume(s) from a volume group
   vgremove        Remove volume group(s)
   vgrename        Rename a volume group
   vgs             Display information about volume groups
   vgscan          Search for all volume groups
   vgsplit         Move physical volumes into a new volume group
   version         Display software and driver version information
 Create a physical volume
 lvm>pvcreate /dev/md4
   Physical volume "/dev/md4" successfully created
 lvm>pvdisplay
   --- NEW Physical volume ---
   PV Name               /dev/md4
   VG Name              
   PV Size               3.84 GB
   Allocatable           NO
   PE Size (KByte)       0
   Total PE              0
   Free PE               0
   Allocated PE          0
   PV UUID               s7WjkQ-L6Ij-YAN2-cAHY-SDj1-zsPp-rSigaw
 Create a volume group
 lvm> vgcreate vg00 /dev/md4
 lvm>pvdisplay
   --- Physical volume ---
   PV Name               /dev/md4
   VG Name               vg00
   PV Size               3.84 GB / not usable 1.81 MB
   Allocatable           yes
   PE Size (KByte)       4096
   Total PE              982
   Free PE               982
   Allocated PE          0
   PV UUID               s7WjkQ-L6Ij-YAN2-cAHY-SDj1-zsPp-rSigaw
 lvm>vgdisplay
   --- Volume group ---
   VG Name               vg00
   System ID            
   Format                lvm2
   Metadata Areas        1
   Metadata Sequence No  1
   VG Access             read/write
   VG Status             resizable
   MAX LV                0
   Cur LV                0
   Open LV               0
   Max PV                0
   Cur PV                1
   Act PV                1
   VG Size               3.84 GB
   PE Size               4.00 MB
   Total PE              982
   Alloc PE / Size       0 / 0  
   Free  PE / Size       982 / 3.84 GB
   VG UUID               fn8mmz-wNoY-Kurv-HtDq-3ghG-rCVI-zAyPQS
 Activate the volume group
 lvm>vgchange -ay vg00
   0 logical volume(s) in volume group "vg00" now active
 Create a logical volume, using all of the free space in the volume group (982 extents).
 lvm> lvcreate -l982 vg00 -n lv00
   Logical volume "lv00" created
 lvm>lvdisplay
   --- Logical volume ---
   LV Name                /dev/vg00/lv00
   VG Name                vg00
   LV UUID                7CeDHk-wqdY-60jk-3iis-3XWy-eCaE-q7gtq1
   LV Write Access        read/write
   LV Status              available
   # open                 0
   LV Size                3.84 GB
   Current LE             982
   Segments               1
   Allocation             inherit
   Read ahead sectors     0
   Block device           253:0
 The output shows that the logical volume is named /dev/vg00/lv00. Next, create a file system on it.
 #mkfs.ext3 /dev/vg00/lv00
 mke2fs 1.39 (29-May-2006)
 Filesystem label=
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 502944 inodes, 1005568 blocks
 50278 blocks (5.00%) reserved for the super user
 First data block=0
 Maximum filesystem blocks=1031798784
 31 block groups
 32768 blocks per group, 32768 fragments per group
 16224 inodes per group
 Superblock backups stored on blocks:
         32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done                           
 Creating journal (16384 blocks): done
 Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 39 mounts or
 180 days, whichever comes first.  Use tune2fs -c or -i to override.
Once this finishes, the volume can be mounted and used; it can also be added to /etc/fstab so that it is mounted automatically at boot, as sketched below.
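 For example (a sketch; /data is a hypothetical mount point):
 # mkdir -p /data
 # mount /dev/vg00/lv00 /data
 A matching /etc/fstab entry would look like:
 /dev/vg00/lv00        /data        ext3    defaults        1 2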
 The next steps show where LVM really pays off. Suppose the logical volume /dev/vg00/lv00 has filled up and needs more space. The LVM setup so far uses only md4; now we add md5 to it.
 First put some data onto /dev/vg00/lv00 (mount it, copy some files in, then umount, for example as sketched below).
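 A sketch, using /mnt as a temporary mount point and /etc as sample data:
 # mount /dev/vg00/lv00 /mnt
 # cp -a /etc /mnt/
 # umount /mnt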
 Create a physical volume
 lvm> pvcreate /dev/md5
   Physical volume "/dev/md5" successfully created
 lvm> pvdisplay
   --- Physical volume ---
   PV Name               /dev/md4
   VG Name               vg00
   PV Size               3.84 GB / not usable 1.81 MB
   Allocatable           yes (but full)
   PE Size (KByte)       4096
   Total PE              982
   Free PE               0
   Allocated PE          982
   PV UUID            s7WjkQ-L6Ij-YAN2-cAHY-SDj1-zsPp-rSigaw
   
   --- NEW Physical volume ---
   PV Name               /dev/md5
   VG Name              
   PV Size               7.68 GB
   Allocatable           NO
   PE Size (KByte)       0
   Total PE              0
   Free PE               0
   Allocated PE          0
   PV UUID            qJOfAV-OKKs-wTjY-48Av-2My6-7P5c-VfysmX
 Add the new physical volume to the volume group
 lvm> vgextend vg00 /dev/md5
   Volume group "vg00" successfully extended
 lvm>vgdisplay
   --- Volume group ---
   VG Name               vg00
   System ID            
   Format                lvm2
   Metadata Areas        2
   Metadata Sequence No  3
   VG Access             read/write
   VG Status             resizable
   MAX LV                0
   Cur LV                1
   Open LV               0
   Max PV                0
   Cur PV                2
   Act PV                2
   VG Size               11.51 GB
   PE Size               4.00 MB
   Total PE              2946
   Alloc PE / Size       982 / 3.84 GB
   Free  PE / Size       1964 / 7.67 GB
   VG UUID            fn8mmz-wNoY-Kurv-HtDq-3ghG-rCVI-zAyPQS
 As the output shows, the size of vg00 is now the sum of md4 and md5.
 Extend the logical volume to 5 GB.
 lvm> lvextend -L 5G /dev/vg00/lv00
   Extending logical volume lv00 to 5.00 GB
   Logical volume lv00 successfully resized
 lvm>lvdisplay
   --- Logical volume ---
   LV Name                /dev/vg00/lv00
   VG Name                vg00
   LV UUID           7CeDHk-wqdY-60jk-3iis-3XWy-eCaE-q7gtq1
   LV Write Access        read/write
   LV Status              available
   # open                 0
   LV Size                5.00 GB
   Current LE             1280
   Segments               2
   Allocation             inherit
   Read ahead sectors     0
   Block device           253:0
 Alternatively, -L +2G grows lv00 by 2 GB relative to its current size, as in the example below.
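 For example, inside the lvm shell (relative sizes are written with a leading +):
 lvm> lvextend -L +2G /dev/vg00/lv00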
 After enlarging the logical volume, the file system has to be resized so that the extra space can actually be used.
 #e2fsck -f /dev/vg00/lv00
 e2fsck 1.39 (29-May-2006)
 Pass 1: Checking inodes, blocks, and sizes
 Pass 2: Checking directory structure
 Pass 3: Checking directory connectivity
 Pass 4: Checking reference counts
 Pass 5: Checking group summary information
 /dev/vg00/lv00: 39/502944 files (2.6% non-contiguous), 34318/1005568 blocks
 #resize2fs /dev/vg00/lv00
 resize2fs 1.39 (29-May-2006)
 Resizing the filesystem on /dev/vg00/lv00 to 1310720 (4k) blocks.
 The filesystem on /dev/vg00/lv00 is now 1310720 blocks long.
 Mount /dev/vg00/lv00 again and check: the file system is now 5 GB, and the data written earlier is still there (a sketch of the check follows).
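 A sketch of the check, again using /mnt as a hypothetical mount point:
 # mount /dev/vg00/lv00 /mnt
 # df -h /mnt          (should now report roughly 5.0G)
 # ls /mnt             (the data copied earlier is still there)
 # umount /mnt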
 Summary
 As the walkthrough above shows, LVM is flexible and convenient: volume groups and logical volumes can be resized easily, and the file system can then be grown to match. RAID 1 provides good data safety, and RAID 0 provides good performance. Building LVM on top of RAID combines the strengths of both: the setup keeps the properties of RAID while also solving the problem of partition scalability.