Oracle introduced ASM in 10g. In 10g, if a disk became inaccessible because of a failure (a fibre cable fault, a controller fault, an HBA fault, or anything else that cuts off access to the disk), Oracle simply dropped the disk. In 11g Oracle added the disk_repair_time parameter, which is part of the "Oracle ASM Fast Mirror Resync" feature. With it, when a failure occurs (other than a failure of the disk itself) and the disk comes back online within disk_repair_time, Oracle resynchronizes only the extents that could not be written to the disk during the outage, instead of copying all of the data on the disk, and so avoids the performance impact of a full rebalance. If the fault is not repaired within disk_repair_time, Oracle drops the disk. The default is 3.6 hours, which is adequate for most environments; adjust it to suit your own.
Two conditions must be met to use this feature:
1. The disk group COMPATIBLE attributes must be at least 11.1 (the COMPATIBLE attributes affect the disk group format, metadata, AU handling, and so on).
2. The disk group redundancy must be NORMAL or HIGH.
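Both prerequisites can be checked from the standard ASM views before relying on the feature; a minimal sketch against the DB2 disk group used in this test (V$ASM_ATTRIBUTE only returns rows once compatible.asm is 11.1 or higher):

-- redundancy type plus the compatibility and repair-time attributes of one disk group
SQL> select g.name, g.type, a.name attribute, a.value
       from v$asm_diskgroup g, v$asm_attribute a
      where a.group_number = g.group_number
        and g.name = 'DB2'
        and a.name in ('compatible.asm', 'compatible.rdbms', 'disk_repair_time');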
Note: if the disk itself has failed (with NORMAL/HIGH disk group redundancy), that disk has to be dropped; after a replacement disk is added, Oracle rebalances automatically. With EXTERNAL redundancy a disk failure takes the disk group offline, and the database must be restored from backup. Exadata uses at least NORMAL redundancy, mirrored at the ASM level, so losing any single cell node does not interrupt the database.
This article creates a disk group db2 on 11g ASM, takes one of its disks offline, lets disk_repair_time expire, and observes what happens to the disk.
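The creation of the test disk group is not captured in the session below; a statement along the following lines (normal redundancy over the two ASMLib disks listed later) would produce the starting point assumed here:

-- hypothetical creation of the test disk group; each disk becomes its own failure group
SQL> create diskgroup db2 normal redundancy
       disk '/dev/oracleasm/disks/ASMDISK11',
            '/dev/oracleasm/disks/ASMDISK13';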
ASMCMD> lsattr -G db2 -l
Name                     Value
access_control.enabled   FALSE
access_control.umask     066
au_size                  1048576
cell.smart_scan_capable  FALSE
compatible.asm           11.2.0.0.0
compatible.rdbms         10.1.0.0.0
disk_repair_time         3.6h
sector_size              512
ASMCMD> setattr -G db2 disk_repair_time 5h
ORA-15032: not all alterations performed
ORA-15242: could not set attribute disk_repair_time
ORA-15283: ASM operation requires compatible.rdbms of 11.1.0.0.0 or higher (DBD ERROR: OCIStmtExecute)

Note: compatible.rdbms must be at least 11.1 before this attribute can be changed.
ASMCMD> setattr -G db2 compatible.rdbms 11.2
ASMCMD> lsattr -G db2 -l
Name                     Value
access_control.enabled   FALSE
access_control.umask     066
au_size                  1048576
cell.smart_scan_capable  FALSE
compatible.asm           11.2.0.0.0
compatible.rdbms         11.2
disk_repair_time         3.6h
sector_size              512
For the test, disk_repair_time is set to 5 minutes. The unit m means minutes and h means hours; if no unit is given, hours are assumed.
ASMCMD> setattr -G db2 disk_repair_time 5m
The ASM alert log shows:
SQL> /* ASMCMD */ALTER DISKGROUP DB2 SET ATTRIBUTE 'disk_repair_time' = '5m'
SUCCESS: /* ASMCMD */ALTER DISKGROUP DB2 SET ATTRIBUTE 'disk_repair_time' = '5m'
ASMCMD> lsattr -G db2 -l
Name                     Value
access_control.enabled   FALSE
access_control.umask     066
au_size                  1048576
cell.smart_scan_capable  FALSE
compatible.asm           11.2.0.0.0
compatible.rdbms         11.2
disk_repair_time         5m
sector_size              512
ASMCMD>
Check the disk group information and the disk headers:
ASMCMD> lsdsk -G db2
Path
/dev/oracleasm/disks/ASMDISK11
/dev/oracleasm/disks/ASMDISK13
ASMCMD> lsdsk -G db2 --statistics
Reads  Write  Read_Errs  Write_Errs  Read_time  Write_Time  Bytes_Read  Bytes_Written  Voting_File  Path
191    1068   0          0           .965827    39.09044    1794048     4374528        N            /dev/oracleasm/disks/ASMDISK11
166    1068   0          0           1.040196   38.521265   684032      4374528        N            /dev/oracleasm/disks/ASMDISK13
ASMCMD>
[oracle@ohs1 ~]$ kfed read /dev/oracleasm/disks/ASMDISK11
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1549816565 ; 0x00c: 0x5c6052f5
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKASMDISK11 ; 0x000: length=17
kfdhdb.driver.reserved[0]:   1145918273 ; 0x008: 0x444d5341
kfdhdb.driver.reserved[1]:    827020105 ; 0x00c: 0x314b5349
kfdhdb.driver.reserved[2]:           49 ; 0x010: 0x00000031
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                DB2_0000 ; 0x028: length=8
kfdhdb.grpname:                     DB2 ; 0x048: length=3
kfdhdb.fgname:                 DB2_0000 ; 0x068: length=8
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33036942 ; 0x0a8: HOUR=0xe DAYS=0x14 MNTH=0x6 YEAR=0x7e0
kfdhdb.crestmp.lo:           3147311104 ; 0x0ac: USEC=0x0 MSEC=0x20a SECS=0x39 MINS=0x2e
kfdhdb.mntstmp.hi:             33036942 ; 0x0b0: HOUR=0xe DAYS=0x14 MNTH=0x6 YEAR=0x7e0
kfdhdb.mntstmp.lo:           3163762688 ; 0x0b4: USEC=0x0 MSEC=0xcc SECS=0x9 MINS=0x2f
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    2447 ; 0x0c4: 0x0000098f
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             33036942 ; 0x0e4: HOUR=0xe DAYS=0x14 MNTH=0x6 YEAR=0x7e0
kfdhdb.grpstmp.lo:           3147119616 ; 0x0e8: USEC=0x0 MSEC=0x14f SECS=0x39 MINS=0x2e
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
[oracle@ohs1 ~]$ kfed read /dev/oracleasm/disks/ASMDISK13
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483649 ; 0x008: disk=1
kfbh.check:                  1549816567 ; 0x00c: 0x5c6052f7
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKASMDISK13 ; 0x000: length=17
kfdhdb.driver.reserved[0]:   1145918273 ; 0x008: 0x444d5341
kfdhdb.driver.reserved[1]:    827020105 ; 0x00c: 0x314b5349
kfdhdb.driver.reserved[2]:           51 ; 0x010: 0x00000033
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        1 ; 0x024: 0x0001
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                DB2_0001 ; 0x028: length=8
kfdhdb.grpname:                     DB2 ; 0x048: length=3
kfdhdb.fgname:                 DB2_0001 ; 0x068: length=8
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33036942 ; 0x0a8: HOUR=0xe DAYS=0x14 MNTH=0x6 YEAR=0x7e0
kfdhdb.crestmp.lo:           3147311104 ; 0x0ac: USEC=0x0 MSEC=0x20a SECS=0x39 MINS=0x2e
kfdhdb.mntstmp.hi:             33036942 ; 0x0b0: HOUR=0xe DAYS=0x14 MNTH=0x6 YEAR=0x7e0
kfdhdb.mntstmp.lo:           3163762688 ; 0x0b4: USEC=0x0 MSEC=0xcc SECS=0x9 MINS=0x2f
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    2447 ; 0x0c4: 0x0000098f
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             33036942 ; 0x0e4: HOUR=0xe DAYS=0x14 MNTH=0x6 YEAR=0x7e0
kfdhdb.grpstmp.lo:           3147119616 ; 0x0e8: USEC=0x0 MSEC=0x14f SECS=0x39 MINS=0x2e
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000

ASMCMD> iostat -G db2
Group_Name  Dsk_Name  Reads    Writes
DB2         DB2_0000  1859584  5001216
DB2         DB2_0001  749568   5001216
ASMCMD> offline -G db2 -D DB2_0001
Diskgroup altered.
ASMCMD>
The ASM alert log shows that the disk will be dropped in 5 minutes (300 seconds):
WARNING: Disk 1 (DB2_0001) in group 1 will be dropped in: (300) secs on ASM inst 1
Mon Jun 20 15:55:23 2016
After 5 minutes the disk is force-dropped by ASM (alert log):
SQL> alter diskgroup DB2 drop disk DB2_0001 force /* ASM SERVER */
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
Mon Jun 20 16:01:36 2016
GMON updating for reconfiguration, group 1 at 26 for pid 33, osid 8507
NOTE: cache closing disk 1 of grp 1: (not open) DB2_0001
NOTE: group DB2: updated PST location: disk 0000 (PST copy 0)
NOTE: group 1 PST updated.
Mon Jun 20 16:01:36 2016
NOTE: membership refresh pending for group 1/0xe0485f13 (DB2)
GMON querying group 1 at 27 for pid 19, osid 5801
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DB2
SUCCESS: refreshed membership for 1/0xe0485f13 (DB2)
NOTE: starting rebalance of group 1/0xe0485f13 (DB2) at power 1
SUCCESS: alter diskgroup DB2 drop disk DB2_0001 force /* ASM SERVER */
SUCCESS: PST-initiated drop disk in group 1(3762839315))
Starting background process ARB0
Mon Jun 20 16:01:39 2016
ARB0 started with pid=34, OS id=9355
NOTE: assigning ARB0 to group 1/0xe0485f13 (DB2) with 1 parallel I/O
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0xe0485f13 (DB2)
NOTE: Attempting voting file refresh on diskgroup DB2
Mon Jun 20 16:01:43 2016
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
GMON updating for reconfiguration, group 1 at 28 for pid 34, osid 9361
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DB2
NOTE: group DB2: updated PST location: disk 0000 (PST copy 0)
NOTE: group 1 PST updated.
WARNING: offline disk number 1 has references (51 AUs)
NOTE: membership refresh pending for group 1/0xe0485f13 (DB2)
Mon Jun 20 16:01:49 2016
GMON querying group 1 at 29 for pid 19, osid 5801
NOTE: cache closing disk 1 of grp 1: (not open) _DROPPED_0001_DB2
Mon Jun 20 16:01:49 2016
SUCCESS: refreshed membership for 1/0xe0485f13 (DB2)
NOTE: Attempting voting file refresh on diskgroup DB2
The lsdg output now shows Offline_disks = 1 for disk group db2:
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576      2447     2394                0            1197              1             N  DB2/
MOUNTED  EXTERN  N         512   4096  1048576      4894     4786                0            4786              0             N  OHSDBA/
MOUNTED  NORMAL  N         512   4096  1048576      7341     6410             2447            1981              0             Y  SYSTEMDG/
ASMCMD>
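The same check is available from SQL; OFFLINE_DISKS is a regular column of V$ASM_DISKGROUP:

-- number of offline disks per mounted disk group
SQL> select name, type, offline_disks from v$asm_diskgroup;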
After the offline disk has been dropped, read its header again; the header has not changed:
[oracle@ohs1 ~]$ kfed read /dev/oracleasm/disks/ASMDISK13
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483649 ; 0x008: disk=1
kfbh.check:                  1549816567 ; 0x00c: 0x5c6052f7
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKASMDISK13 ; 0x000: length=17
kfdhdb.driver.reserved[0]:   1145918273 ; 0x008: 0x444d5341
kfdhdb.driver.reserved[1]:    827020105 ; 0x00c: 0x314b5349
kfdhdb.driver.reserved[2]:           51 ; 0x010: 0x00000033
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        1 ; 0x024: 0x0001
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                DB2_0001 ; 0x028: length=8
kfdhdb.grpname:                     DB2 ; 0x048: length=3
kfdhdb.fgname:                 DB2_0001 ; 0x068: length=8
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33036942 ; 0x0a8: HOUR=0xe DAYS=0x14 MNTH=0x6 YEAR=0x7e0
kfdhdb.crestmp.lo:           3147311104 ; 0x0ac: USEC=0x0 MSEC=0x20a SECS=0x39 MINS=0x2e
kfdhdb.mntstmp.hi:             33036942 ; 0x0b0: HOUR=0xe DAYS=0x14 MNTH=0x6 YEAR=0x7e0
kfdhdb.mntstmp.lo:           3163762688 ; 0x0b4: USEC=0x0 MSEC=0xcc SECS=0x9 MINS=0x2f
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    2447 ; 0x0c4: 0x0000098f
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             33036942 ; 0x0e4: HOUR=0xe DAYS=0x14 MNTH=0x6 YEAR=0x7e0
kfdhdb.grpstmp.lo:           3147119616 ; 0x0e8: USEC=0x0 MSEC=0x14f SECS=0x39 MINS=0x2e
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000

V$ASM_DISK, however, now lists the disk only as the placeholder _DROPPED_0001_DB2, with no path:

SQL> select name, path from v$asm_disk;

NAME                                     PATH
---------------------------------------- ------------------------------------------------------------
                                         /dev/oracleasm/disks/ASMDISK14
                                         /dev/oracleasm/disks/ASMDISK13
_DROPPED_0001_DB2
DB2_0000                                 /dev/oracleasm/disks/ASMDISK11
OHSDBA_0001                              /dev/oracleasm/disks/ASMDISK10
OHSDBA_0000                              /dev/oracleasm/disks/ASMDISK9
DATA_PGOLD_0004                          /dev/oracleasm/disks/ASMDISK8
DATA_PGOLD_0003                          /dev/oracleasm/disks/ASMDISK7
DATA_PGOLD_0002                          /dev/oracleasm/disks/ASMDISK6
DATA_PGOLD_0001                          /dev/oracleasm/disks/ASMDISK5
DATA_PGOLD_0000                          /dev/oracleasm/disks/ASMDISK4
SYSTEMDG_0002                            /dev/oracleasm/disks/ASMDISK3
SYSTEMDG_0001                            /dev/oracleasm/disks/ASMDISK2
SYSTEMDG_0000                            /dev/oracleasm/disks/ASMDISK1

14 rows selected.

SQL>
After the disk has been dropped, an attempt to bring it back online fails:
ASMCMD> online -G db2 -D DB2_0001
ORA-15032: not all alterations performed
ORA-15054: disk "DB2_0001" does not exist in diskgroup "DB2" (DBD ERROR: OCIStmtExecute)
ASMCMD>
Next, UNDROP is attempted; it raises no error, but the disk is not brought back either:
[oracle@ohs1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 20 16:50:17 2016
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Real Application Clusters and Automatic Storage Management options
SQL> alter diskgroup db2 undrop disks;

Diskgroup altered.

SQL>

The ASM alert log for the undrop shows a membership refresh and a rebalance, but disk 1 is still _DROPPED_0001_DB2:

SQL> alter diskgroup db2 undrop disks
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=2
Mon Jun 20 16:50:35 2016
GMON updating for reconfiguration, group 2 at 17 for pid 28, osid 16254
NOTE: cache closing disk 1 of grp 2: (not open) _DROPPED_0001_DB2
NOTE: group DB2: updated PST location: disk 0000 (PST copy 0)
NOTE: group 2 PST updated.
Mon Jun 20 16:50:36 2016
NOTE: membership refresh pending for group 2/0xfb327d85 (DB2)
GMON querying group 2 at 18 for pid 19, osid 13443
NOTE: cache closing disk 1 of grp 2: (not open) _DROPPED_0001_DB2
SUCCESS: refreshed membership for 2/0xfb327d85 (DB2)
NOTE: starting rebalance of group 2/0xfb327d85 (DB2) at power 1
SUCCESS: alter diskgroup db2 undrop disks
Starting background process ARB0
Mon Jun 20 16:50:38 2016
ARB0 started with pid=29, OS id=16331
NOTE: assigning ARB0 to group 2/0xfb327d85 (DB2) with 1 parallel I/O
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 2/0xfb327d85 (DB2)
NOTE: Attempting voting file refresh on diskgroup DB2
Mon Jun 20 16:50:41 2016
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=2
GMON updating for reconfiguration, group 2 at 19 for pid 29, osid 16336
NOTE: cache closing disk 1 of grp 2: (not open) _DROPPED_0001_DB2
NOTE: group DB2: updated PST location: disk 0000 (PST copy 0)
NOTE: group 2 PST updated.
WARNING: offline disk number 1 has references (51 AUs)
NOTE: membership refresh pending for group 2/0xfb327d85 (DB2)
Mon Jun 20 16:50:48 2016
GMON querying group 2 at 20 for pid 19, osid 13443
NOTE: cache closing disk 1 of grp 2: (not open) _DROPPED_0001_DB2
Mon Jun 20 16:50:48 2016
SUCCESS: refreshed membership for 2/0xfb327d85 (DB2)
NOTE: Attempting voting file refresh on diskgroup DB2
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     12235    10514                0           10514              0             N  DATA_PGOLD/
MOUNTED  NORMAL  N         512   4096  1048576      2447     2394                0            1197              1             N  DB2/
MOUNTED  EXTERN  N         512   4096  1048576      4894     4786                0            4786              0             N  OHSDBA/
MOUNTED  NORMAL  N         512   4096  1048576      7341     6410             2447            1981              0             Y  SYSTEMDG/
ASMCMD>
V$ASM_DISK still shows only the _DROPPED_0001_DB2 placeholder for the dropped disk:

NAME                           PATH
------------------------------ ------------------------------------------------------------
                               /dev/oracleasm/disks/ASMDISK14
                               /dev/oracleasm/disks/ASMDISK13
_DROPPED_0001_DB2
DB2_0000                       /dev/oracleasm/disks/ASMDISK11
OHSDBA_0001                    /dev/oracleasm/disks/ASMDISK10
OHSDBA_0000                    /dev/oracleasm/disks/ASMDISK9
DATA_PGOLD_0004                /dev/oracleasm/disks/ASMDISK8
DATA_PGOLD_0003                /dev/oracleasm/disks/ASMDISK7
DATA_PGOLD_0002                /dev/oracleasm/disks/ASMDISK6
DATA_PGOLD_0001                /dev/oracleasm/disks/ASMDISK5
DATA_PGOLD_0000                /dev/oracleasm/disks/ASMDISK4
SYSTEMDG_0002                  /dev/oracleasm/disks/ASMDISK3
SYSTEMDG_0001                  /dev/oracleasm/disks/ASMDISK2
SYSTEMDG_0000                  /dev/oracleasm/disks/ASMDISK1

14 rows selected.

SQL>
Because the header of the offlined disk was left unchanged when it was dropped, adding the original disk back produces an error:
alter diskgroup db2 add disk '/dev/oracleasm/disks/ASMDISK13'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15033: disk '/dev/oracleasm/disks/ASMDISK13' belongs to diskgroup "DB2"
SQL>
Clear the ASM disk header and add the disk again.
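The wipe command itself was not captured in the session; judging by the "1024 bytes copied" message below, it was presumably a dd of the first kilobyte of the disk, something along these lines (hypothetical reconstruction, and destructive, so only run it on a disk you really intend to re-add):

# overwrite the first 1 KB of the disk (the ASM disk header) with zeros
[oracle@ohs1 ~]$ dd if=/dev/zero of=/dev/oracleasm/disks/ASMDISK13 bs=1024 count=1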
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.00966501 seconds, 106 kB/s
[oracle@ohs1 ~]$ kfed read /dev/oracleasm/disks/ASMDISK13
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
B7F60200 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]
[oracle@ohs1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 20 17:13:37 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup db2 add disk '/dev/oracleasm/disks/ASMDISK13';

Diskgroup altered.

SQL>

Note: after the disk is dropped, its header is left untouched. If the disk itself is healthy and you want to reuse it, you must clear the header before the disk can be added back.
Oracle ASM Fast Mirror Resync
http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm#OSTMG10044
disk group attribute
http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm#OSTMG10045
http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm#OSTMG137
http://docs.oracle.com/database/121/HABPT/config_storage.htm#HABPT4813
http://www.oratea.com/2017/05/19/disk_repair_time%E4%BB%8B%E7%BB%8D/
Introduction to DISK_REPAIR_TIME
1. Introduction
When an ASM disk is dropped, ASM starts a rebalance to restore redundancy for the extents that were on the dropped disk. A rebalance takes a long time and causes a large amount of I/O on the other disks. Sometimes, though, a disk is offline only temporarily, for maintenance or some other reason; rebalancing after a short outage, and then rebalancing again when the disk is added back, is hard to accept. ASM therefore provides the fast mirror resync feature: while a disk is OFFLINE, ASM records which of its extents have changed, and when the disk comes back, ASM quickly resynchronizes only those changed extents.
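The planned-maintenance flow this feature enables looks roughly like the following sketch (disk group and disk names are placeholders):

-- take the disk offline before maintenance; ASM starts tracking the extents it misses
SQL> alter diskgroup data offline disk data_0001;
-- ... do the maintenance, then bring the disk back; only the tracked extents are resynchronized
SQL> alter diskgroup data online disk data_0001;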
Keeping a disk OFFLINE for a long time is risky: with NORMAL redundancy, the extents on the offline disk exist in only one remaining copy. ASM therefore provides the disk group attribute DISK_REPAIR_TIME, which defines how long a disk in that group may stay OFFLINE before it is dropped and the rebalance begins. The default is 3.6 hours.
For a planned offline, 3.6 hours may not be enough, for example during an Exadata cell node upgrade, so the value may need to be raised beforehand.
-- While the disk group is healthy, the attribute is set as follows:
SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'disk_repair_time'= '36h';
But if the disks are already offline, how do you control how long they may remain offline before being dropped? The command is slightly different.
2. Determining how long until a disk is dropped
-- The ASM alert log shows the countdown before the disk is dropped, for example:
WARNING: Disk 0 (DATA_CD_00_DMORLCEL08) in group 1 will be dropped in: (12960) secs on ASM inst 1
WARNING: Disk 1 (DATA_CD_01_DMORLCEL08) in group 1 will be dropped in: (12960) secs on ASM inst 1
WARNING: Disk 2 (DATA_CD_02_DMORLCEL08) in group 1 will be dropped in: (12960) secs on ASM inst 1
-- Check disk_repair_time:
SQL> column name format a30
SQL> column value format a30
SQL> select name,value from v$asm_attribute where group_number=1 and name like '%disk_repair_time%';
NAME VALUE
------------------------------ ------------------------------
disk_repair_time 3.6h
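Besides the alert log, the remaining countdown can also be read from V$ASM_DISK; its REPAIR_TIMER column reports the number of seconds left before an offline disk is dropped (a sketch):

-- seconds remaining before each disk in group 1 is dropped (0 = not being dropped)
SQL> select name, mode_status, repair_timer from v$asm_disk where group_number = 1;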
3. Extending the drop timer for offline disks
-- If a failgroup has failed and all of its disks are already OFFLINE, extend the timer with:
SQL> ALTER DISKGROUP <DISKGROUP NAME> OFFLINE DISKS IN FAILGROUP <FAILGROUP NAME> DROP AFTER 5H;
-- Then check the ASM alert log to confirm that the new value took effect, for example:
WARNING: Disk 2 (DATA_CD_02_DMORLCEL08) in group 1 will be dropped in: (18000) secs on ASM inst 1
WARNING: Disk 3 (DATA_CD_03_DMORLCEL08) in group 1 will be dropped in: (18000) secs on ASM inst 1
WARNING: Disk 4 (DATA_CD_04_DMORLCEL08) in group 1 will be dropped in: (18000) secs on ASM inst 1
WARNING: Disk 5 (DATA_CD_05_DMORLCEL08) in group 1 will be dropped in: (18000) secs on ASM inst 1
-- If only one disk is offline, the command is:
SQL> ALTER DISKGROUP <DISKGROUP NAME> OFFLINE DISK <DISK NAME> DROP AFTER 5H;
-- To check the disk names:
SQL> col path format a59
SQL> set lines 200
SQL> set pagesi 400
SQL> select path, name, header_status, mode_status, mount_status, state, failgroup from v$asm_disk order by path;
4. Dropping a disk immediately
-- If the disk repair is expected to take a long time, you can drop the disk immediately and start the rebalance right away, instead of waiting for DISK_REPAIR_TIME to expire.
If a whole failgroup has failed, the command is:
ALTER DISKGROUP <DISKGROUP NAME> DROP DISKS IN FAILGROUP <FAILGROUP NAME> FORCE;
If a single disk has failed, the command is:
ALTER DISKGROUP <DISKGROUP NAME> DROP DISK <DISK NAME> FORCE;