[root@localhost ~]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart
Partition name?  []? sdb1
File system type?  [ext2]? ext2
Start? 1M
End? 10000M
(parted) p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  10.0GB  9999MB  ext4         sdb1

(parted) q
Information: You may need to update /etc/fstab.
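As a sanity check, the Size column in the print output is consistent with the bounds entered at the mkpart prompts: 10000 MB end minus 1 MB start leaves 9999 MB. A quick arithmetic check (plain shell, no device access needed):

```shell
# Partition bounds entered at the mkpart prompts, in MB
start_mb=1
end_mb=10000

# Expected partition size, as shown in the parted print output
size_mb=$((end_mb - start_mb))
echo "expected size: ${size_mb}MB"   # 9999MB, matching the Size column
```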
[root@localhost ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
610800 inodes, 2441216 blocks
122060 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
75 block groups
32768 blocks per group, 32768 fragments per group
8144 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
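The mke2fs figures are internally consistent: 5% of the 2441216 blocks are reserved for the super user, and at 4096 bytes per block the filesystem comes out just under 10 GB. A quick check, using only the numbers printed above:

```shell
blocks=2441216    # total blocks reported by mke2fs
block_size=4096   # bytes per block

# 5% of all blocks are reserved for the super user (integer part)
reserved=$((blocks * 5 / 100))
echo "reserved blocks: $reserved"      # 122060, as reported

# Total filesystem size in MB (blocks * 4 KiB, converted to MiB)
size_mb=$((blocks * block_size / 1024 / 1024))
echo "filesystem size: ${size_mb} MB"  # 9536 MB
```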
4. Create a mount point and mount the device
[root@localhost ~]# mkdir /sdb1
[root@localhost ~]# mount /dev/sdb1 /sdb1/
#
# /etc/fstab
# Created by anaconda on Tue Sep 18 09:05:06 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root                     /       xfs    defaults   0 0
UUID=13d5ccc2-52db-4aec-963a-f88e8edcf01c   /boot   xfs    defaults   0 0
/dev/mapper/centos-swap                     swap    swap   defaults   0 0
[root@localhost ~]# mount -o remount,usrquota,grpquota /dev/sdb1
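The remount above only lasts until the next boot. To make the quota options persistent, an fstab entry along these lines could be added (device and mount point follow the example above; using the partition's UUID instead of /dev/sdb1 would be more robust):

```
/dev/sdb1    /sdb1    ext4    defaults,usrquota,grpquota    0 0
```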
7. Generate the quota files: quotacheck -ugv [partition]
[root@localhost ~]# quotacheck -ugv /dev/sdb1
quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.
quotacheck: Scanning /dev/sdb1 [/sdb1] done
quotacheck: Cannot stat old user quota file /sdb1/aquota.user: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /sdb1/aquota.group: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old user quota file /sdb1/aquota.user: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /sdb1/aquota.group: No such file or directory. Usage will not be subtracted.
quotacheck: Checked 3 directories and 0 files
quotacheck: Old file not found.
quotacheck: Old file not found.
8. Edit the limits: edquota -ugtp [user/group]
Set a soft limit of 200M and a hard limit of 500M for user lyshark.
[root@localhost ~]# edquota -u lyshark
Disk quotas for user lyshark (uid 1000):
  Filesystem   blocks   soft   hard   inodes   soft   hard
  /dev/sdb1         0   200M   500M        0      0      0
(the blocks soft/hard pair limits capacity; the inodes soft/hard pair limits file count)
Set a soft limit of 100M and a hard limit of 200M for group temp.
[root@localhost ~]# edquota -g temp
Disk quotas for group temp (gid 1001):
  Filesystem   blocks     soft     hard   inodes   soft   hard
  /dev/sdb1         0   102400   204800        0      0      0
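Note that the two edquota screens use different notations for the same thing: the blocks columns are in 1 KiB units, so the temp group's 102400/204800 are exactly the requested 100M/200M. A quick conversion check:

```shell
soft_blocks=102400   # soft limit in 1 KiB blocks, as shown by edquota
hard_blocks=204800   # hard limit in 1 KiB blocks

echo "soft: $((soft_blocks / 1024)) MB"   # 100 MB
echo "hard: $((hard_blocks / 1024)) MB"   # 200 MB
```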
9. Turn quotas on or off: quotaon/quotaoff -augv
[root@localhost ~]# quotaon -augv
/dev/sdb1 [/sdb1]: group quotas turned on
/dev/sdb1 [/sdb1]: user quotas turned on
10. Check the quota of a given user or group: quota -ugvs
[root@localhost ~]# quota -ugvs
Disk quotas for user root (uid 0):
     Filesystem   space   quota   limit   grace   files   quota   limit   grace
      /dev/sdb1     20K      0K      0K               2       0       0
Disk quotas for group root (gid 0):
     Filesystem   space   quota   limit   grace   files   quota   limit   grace
      /dev/sdb1     20K      0K      0K               2       0       0
[root@localhost ~]# mkdir /LVM                      #first, create a mount point
[root@localhost ~]#
[root@localhost ~]# mkfs.ext4 /dev/my_vg/my_lv      #format the LVM volume
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# lvextend -L +5G /dev/my_vg/my_lv        #grow the LV, taking 5G from the VG
  Size of logical volume my_vg/my_lv changed from 10.00 GiB (2560 extents) to 15.00 GiB (3840 extents).
  Logical volume my_vg/my_lv successfully resized.
[root@localhost ~]# resize2fs -f /dev/my_vg/my_lv        #grow the filesystem
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/my_vg/my_lv is mounted on /LVM; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 2
The filesystem on /dev/my_vg/my_lv is now 3932160 blocks long.
[root@localhost ~]# resize2fs -f /dev/my_vg/my_lv 10G        #shrink the filesystem (10G is the reduced size)
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/my_vg/my_lv to 2621440 (4k) blocks.
The filesystem on /dev/my_vg/my_lv is now 2621440 blocks long.
[root@localhost ~]# lvreduce -L 10G /dev/my_vg/my_lv        #shrink the LV
  WARNING: Reducing active logical volume to 10.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce my_vg/my_lv? [y/n]: y        #enter y
  Size of logical volume my_vg/my_lv changed from 15.00 GiB (3840 extents) to 10.00 GiB (2560 extents).
  Logical volume my_vg/my_lv successfully resized.
[root@localhost ~]# mount /dev/my_vg/my_lv /LVM/        #mount it again
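Shrinking is the one order-sensitive operation here: the filesystem must be reduced before the LV, and resize2fs refuses to shrink a mounted ext4 filesystem, so the transcript above implies an unmount and a forced fsck first. A dry-run sketch of the full sequence (the run helper only echoes each command; drop it to execute for real; the LV path and target size are taken from the example above):

```shell
LV=/dev/my_vg/my_lv
MNT=/LVM
NEW_SIZE=10G

# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

run umount "$MNT"                  # resize2fs cannot shrink a mounted ext4
run e2fsck -f "$LV"                # a forced check is required before shrinking
run resize2fs "$LV" "$NEW_SIZE"    # shrink the filesystem first...
run lvreduce -L "$NEW_SIZE" "$LV"  # ...then the logical volume, never the reverse
run mount "$LV" "$MNT"             # remount
```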
[root@localhost ~]# mdadm --detail /dev/md0              #show array details
/dev/md0:                                                ←device file name
        Version : 1.2
  Creation Time : Fri Sep 21 23:19:09 2018               ←creation time
     Raid Level : raid5                                  ←RAID level
     Array Size : 20953088 (19.98 GiB 21.46 GB)          ←usable space
  Used Dev Size : 10476544 (9.99 GiB 10.73 GB)           ←usable space per device
   Raid Devices : 3                                      ←number of RAID devices
  Total Devices : 4                                      ←total number of devices
    Persistence : Superblock is persistent

    Update Time : Fri Sep 21 23:19:26 2018
          State : clean, degraded, recovering
 Active Devices : 3                                      ←active disks
Working Devices : 4                                      ←working disks
 Failed Devices : 0                                      ←failed disks
  Spare Devices : 1                                      ←spare disks
         Layout : left-symmetric
     Chunk Size : 512K

Consistency Policy : resync

 Rebuild Status : 34% complete

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : 2ee2bcd5:c5189354:d3810252:23c2d5a8    ←UUID of this array
         Events : 6
    Number   Major   Minor   RaidDevice   State
       0       8      16         0        active sync        /dev/sdb
       1       8      32         1        active sync        /dev/sdc
       4       8      48         2        spare rebuilding   /dev/sdd

       3       8      64         -        spare              /dev/sde
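The Array Size line follows directly from the RAID-5 geometry shown: with n member devices, one device's worth of space goes to parity, leaving (n-1) × per-device size usable. Checking against the numbers above (sizes in KiB, as mdadm reports them):

```shell
raid_devices=3        # "Raid Devices" from mdadm --detail
dev_size_k=10476544   # "Used Dev Size" in KiB

# RAID-5 dedicates one device's worth of space to parity
array_size_k=$(( (raid_devices - 1) * dev_size_k ))
echo "usable array size: ${array_size_k} KiB"   # 20953088, matching Array Size
```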
Format /dev/md0 and mount it for use
[root@localhost ~]# mkfs -t ext4 /dev/md0        #format the array
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
1310720 inodes, 5238272 blocks
261913 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
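To have the array assembled and mounted automatically at boot, two persistent entries are commonly added: an ARRAY line in /etc/mdadm.conf (normally taken from the output of mdadm --detail --scan) and an fstab entry. A sketch of the two lines involved (the ARRAY line is abridged to the UUID shown above, and the /raid mount point is only an example):

```
# /etc/mdadm.conf
ARRAY /dev/md0 UUID=2ee2bcd5:c5189354:d3810252:23c2d5a8

# /etc/fstab
/dev/md0    /raid    ext4    defaults    0 0
```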