Tag archive: asm mount
ORA-15130: diskgroup “ORADATA” is being dismounted
The disk group is dismounted again almost immediately after it mounts. The ASM alert log shows:
Sat Dec 25 17:48:45 2021
SQL> alter diskgroup ORADATA mount
NOTE: cache registered group ORADATA number=5 incarn=0xd4b7ac6a
NOTE: cache began mount (first) of group ORADATA number=5 incarn=0xd4b7ac6a
NOTE: Assigning number (5,24) to disk (/dev/mapper/data31)
NOTE: Assigning number (5,26) to disk (/dev/mapper/data33)
NOTE: Assigning number (5,21) to disk (/dev/mapper/data29)
NOTE: Assigning number (5,23) to disk (/dev/mapper/data30)
NOTE: Assigning number (5,25) to disk (/dev/mapper/data32)
NOTE: Assigning number (5,19) to disk (/dev/mapper/data27)
NOTE: Assigning number (5,20) to disk (/dev/mapper/data28)
NOTE: Assigning number (5,18) to disk (/dev/mapper/data26)
NOTE: Assigning number (5,14) to disk (/dev/mapper/data22)
NOTE: Assigning number (5,17) to disk (/dev/mapper/data25)
NOTE: Assigning number (5,16) to disk (/dev/mapper/data24)
NOTE: Assigning number (5,15) to disk (/dev/mapper/data23)
NOTE: Assigning number (5,13) to disk (/dev/mapper/data21)
NOTE: Assigning number (5,12) to disk (/dev/mapper/data20)
NOTE: Assigning number (5,10) to disk (/dev/mapper/data19)
NOTE: Assigning number (5,9) to disk (/dev/mapper/data18)
NOTE: Assigning number (5,8) to disk (/dev/mapper/data17)
NOTE: Assigning number (5,3) to disk (/dev/mapper/data12)
NOTE: Assigning number (5,22) to disk (/dev/mapper/data3)
NOTE: Assigning number (5,2) to disk (/dev/mapper/data11)
NOTE: Assigning number (5,7) to disk (/dev/mapper/data16)
NOTE: Assigning number (5,28) to disk (/dev/mapper/data5)
NOTE: Assigning number (5,32) to disk (/dev/mapper/data9)
NOTE: Assigning number (5,6) to disk (/dev/mapper/data15)
NOTE: Assigning number (5,5) to disk (/dev/mapper/data14)
NOTE: Assigning number (5,4) to disk (/dev/mapper/data13)
NOTE: Assigning number (5,1) to disk (/dev/mapper/data10)
NOTE: Assigning number (5,30) to disk (/dev/mapper/data7)
NOTE: Assigning number (5,29) to disk (/dev/mapper/data6)
NOTE: Assigning number (5,31) to disk (/dev/mapper/data8)
NOTE: Assigning number (5,11) to disk (/dev/mapper/data2)
NOTE: Assigning number (5,27) to disk (/dev/mapper/data4)
NOTE: Assigning number (5,0) to disk (/dev/mapper/data1)
Sat Dec 25 17:48:52 2021
NOTE: GMON heartbeating for grp 5
GMON querying group 5 at 153 for pid 32, osid 68608
NOTE: cache opening disk 0 of grp 5: ORADATA_0000 path:/dev/mapper/data1
NOTE: F1X0 found on disk 0 au 2 fcn 0.0
NOTE: cache opening disk 1 of grp 5: ORADATA_0001 path:/dev/mapper/data10
NOTE: cache opening disk 2 of grp 5: ORADATA_0002 path:/dev/mapper/data11
NOTE: cache opening disk 3 of grp 5: ORADATA_0003 path:/dev/mapper/data12
NOTE: cache opening disk 4 of grp 5: ORADATA_0004 path:/dev/mapper/data13
NOTE: cache opening disk 5 of grp 5: ORADATA_0005 path:/dev/mapper/data14
NOTE: cache opening disk 6 of grp 5: ORADATA_0006 path:/dev/mapper/data15
NOTE: cache opening disk 7 of grp 5: ORADATA_0007 path:/dev/mapper/data16
NOTE: cache opening disk 8 of grp 5: ORADATA_0008 path:/dev/mapper/data17
NOTE: cache opening disk 9 of grp 5: ORADATA_0009 path:/dev/mapper/data18
NOTE: cache opening disk 10 of grp 5: ORADATA_0010 path:/dev/mapper/data19
NOTE: cache opening disk 11 of grp 5: ORADATA_0011 path:/dev/mapper/data2
NOTE: cache opening disk 12 of grp 5: ORADATA_0012 path:/dev/mapper/data20
NOTE: cache opening disk 13 of grp 5: ORADATA_0013 path:/dev/mapper/data21
NOTE: cache opening disk 14 of grp 5: ORADATA_0014 path:/dev/mapper/data22
NOTE: cache opening disk 15 of grp 5: ORADATA_0015 path:/dev/mapper/data23
NOTE: cache opening disk 16 of grp 5: ORADATA_0016 path:/dev/mapper/data24
NOTE: cache opening disk 17 of grp 5: ORADATA_0017 path:/dev/mapper/data25
NOTE: cache opening disk 18 of grp 5: ORADATA_0018 path:/dev/mapper/data26
NOTE: cache opening disk 19 of grp 5: ORADATA_0019 path:/dev/mapper/data27
NOTE: cache opening disk 20 of grp 5: ORADATA_0020 path:/dev/mapper/data28
NOTE: cache opening disk 21 of grp 5: ORADATA_0021 path:/dev/mapper/data29
NOTE: cache opening disk 22 of grp 5: ORADATA_0022 path:/dev/mapper/data3
NOTE: cache opening disk 23 of grp 5: ORADATA_0023 path:/dev/mapper/data30
NOTE: cache opening disk 24 of grp 5: ORADATA_0024 path:/dev/mapper/data31
NOTE: cache opening disk 25 of grp 5: ORADATA_0025 path:/dev/mapper/data32
NOTE: cache opening disk 26 of grp 5: ORADATA_0026 path:/dev/mapper/data33
NOTE: cache opening disk 27 of grp 5: ORADATA_0027 path:/dev/mapper/data4
NOTE: cache opening disk 28 of grp 5: ORADATA_0028 path:/dev/mapper/data5
NOTE: cache opening disk 29 of grp 5: ORADATA_0029 path:/dev/mapper/data6
NOTE: cache opening disk 30 of grp 5: ORADATA_0030 path:/dev/mapper/data7
NOTE: cache opening disk 31 of grp 5: ORADATA_0031 path:/dev/mapper/data8
NOTE: cache opening disk 32 of grp 5: ORADATA_0032 path:/dev/mapper/data9
NOTE: cache mounting (first) external redundancy group 5/0xD4B7AC6A (ORADATA)
Sat Dec 25 17:48:52 2021
* allocate domain 5, invalid = TRUE
kjbdomatt send to inst 2
Sat Dec 25 17:48:52 2021
NOTE: attached to recovery domain 5
NOTE: starting recovery of thread=1 ckpt=92.6417 group=5 (ORADATA)
NOTE: advancing ckpt for group 5 (ORADATA) thread=1 ckpt=92.6418
NOTE: cache recovered group 5 to fcn 0.9502919
NOTE: redo buffer size is 256 blocks (1053184 bytes)
Sat Dec 25 17:48:52 2021
NOTE: LGWR attempting to mount thread 1 for diskgroup 5 (ORADATA)
NOTE: LGWR found thread 1 closed at ABA 92.6417
NOTE: LGWR mounted thread 1 for diskgroup 5 (ORADATA)
NOTE: LGWR opening thread 1 at fcn 0.9502919 ABA 93.6418
NOTE: cache mounting group 5/0xD4B7AC6A (ORADATA) succeeded
NOTE: cache ending mount (success) of group ORADATA number=5 incarn=0xd4b7ac6a
Sat Dec 25 17:48:53 2021
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 5
SUCCESS: diskgroup ORADATA was mounted
SUCCESS: alter diskgroup ORADATA mount
Sat Dec 25 17:48:53 2021
NOTE: diskgroup resource ora.ORADATA.dg is online
WARNING: cache read a corrupt block: group=5(ORADATA) dsk=5 blk=2 disk=5(ORADATA_0005) incarn=2406 au=0 blk=2 count=1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_48956.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
NOTE: a corrupted block from group ORADATA was dumped to /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_48956.trc
WARNING: cache read (retry) a corrupt block: group=5(ORADATA) dsk=5 blk=2 disk=5(ORADATA_0005) incarn=2406 au=0 blk=2 count=1
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_48956.trc:
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
ERROR: cache failed to read group=5(ORADATA) dsk=5 blk=2 from disk(s): 5(ORADATA_0005)
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
NOTE: cache initiating offline of disk 5 group ORADATA
NOTE: process _rbal_+asm1 (48956) initiating offline of disk 5.240607694 (ORADATA_0005) with mask 0x7e in group 5
NOTE: initiating PST update: grp = 5, dsk = 5/0xe5761ce, mask = 0x6a, op = clear
GMON updating disk modes for group 5 at 155 for pid 18, osid 48956
ERROR: Disk 5 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 5)
Sat Dec 25 17:48:55 2021
NOTE: cache dismounting (not clean) group 5/0xD4B7AC6A (ORADATA)
WARNING: Offline for disk ORADATA_0005 in mode 0x7f failed.
Sat Dec 25 17:48:55 2021
NOTE: halting all I/Os to diskgroup 5 (ORADATA)
NOTE: messaging CKPT to quiesce pins Unix process pid: 22744, image: oracle@wxzldb1 (B000)
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_48956.trc (incident=1289754):
ORA-15335: ASM metadata corruption detected in disk group 'ORADATA'
ORA-15130: diskgroup "ORADATA" is being dismounted
ORA-15066: offlining disk "ORADATA_0005" in group "ORADATA" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:26368] [endian_kfbh] [2147483653] [2] [0 != 1]
Incident details in: /u01/app/grid/diag/asm/+asm/+ASM1/incident/incdir_1289754/+ASM1_rbal_48956_i1289754.trc
NOTE: LGWR doing non-clean dismount of group 5 (ORADATA)
NOTE: LGWR sync ABA=93.6418 last written ABA 93.6418
kjbdomdet send to inst 2
detach from dom 5, sending detach message to inst 2
Sat Dec 25 17:48:56 2021
List of instances: 1 2
Dirty detach reconfiguration started (new ddet inc 1, cluster inc 4)
Sat Dec 25 17:48:56 2021
Sweep [inc][1289754]: completed
Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 5 invalid = TRUE
41 GCS resources traversed, 0 cancelled
Dirty Detach Reconfiguration complete
freeing rdom 5
System State dumped to trace file /u01/app/grid/diag/asm/+asm/+ASM1/incident/incdir_1289754/+ASM1_rbal_48956_i1289754.trc
WARNING: dirty detached from domain 5
NOTE: cache dismounted group 5/0xD4B7AC6A (ORADATA)
The problem is fairly clear: block 2 of AU 0 on disk 5 is corrupt, which is why the disk group goes bad right after mounting. Use kfed to inspect that block:
C:\Users\XFF>kfed read h:\temp\asmdisk\data14.dd|more
kfbh.endian:                         1 ; 0x000: 0x01
kfbh.hard:                         130 ; 0x001: 0x82
kfbh.type:                           1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                         1 ; 0x003: 0x01
kfbh.block.blk:                      0 ; 0x004: blk=0
kfbh.block.obj:             2147483653 ; 0x008: disk=5
kfbh.check:                  314993330 ; 0x00c: 0x12c66ab2
kfbh.fcn.base:                       0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                       0 ; 0x014: 0x00000000
kfbh.spare1:                         0 ; 0x018: 0x00000000
kfbh.spare2:                         0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:        ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:           0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:           0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:           0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:           0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:           0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:           0 ; 0x01c: 0x00000000
kfdhdb.compat:               186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                       5 ; 0x024: 0x0005
kfdhdb.grptyp:                       1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                       3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:           ORADATA_0005 ; 0x028: length=12
kfdhdb.grpname:                ORADATA ; 0x048: length=7
kfdhdb.fgname:            ORADATA_0005 ; 0x068: length=12

C:\Users\XFF>kfed read h:\temp\asmdisk\data14.dd aun=0 blkn=2|more
kfbh.endian:                         0 ; 0x000: 0x00
kfbh.hard:                           0 ; 0x001: 0x00
kfbh.type:                           0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                         0 ; 0x003: 0x00
kfbh.block.blk:                      0 ; 0x004: blk=0
kfbh.block.obj:                      0 ; 0x008: file=0
kfbh.check:                          0 ; 0x00c: 0x00000000
kfbh.fcn.base:                       0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                       0 ; 0x014: 0x00000000
kfbh.spare1:                         0 ; 0x018: 0x00000000
kfbh.spare2:                         0 ; 0x01c: 0x00000000
0066D8200 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]
The kfed output confirms that the block is indeed corrupt. This block belongs to the disk's allocation table and mainly records how AUs are allocated. If the disk group's space usage does not change and no rebalance runs, ASM normally never reads this block, and as long as the block is not read the disk group will not be dismounted. Following that idea, we applied a patch-style fix so that the ORADATA disk group no longer rebalances or allocates/frees space, which keeps it stably mounted.
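As a sanity check for this approach, here is a minimal kfed scan sketch that reads allocation-table block 2 of AU 0 on every ORADATA disk and flags the ones that do not parse as valid ASM metadata. The /dev/mapper/data1-data33 device names are taken from the alert log above, and kfed is assumed to be run from the Grid Infrastructure home:

#!/bin/bash
# Read the allocation table block (au=0, blk=2) on every ORADATA disk with kfed
# and flag blocks that do not parse as valid ASM metadata.
for d in /dev/mapper/data{1..33}; do
  out=$(kfed read "$d" aun=0 blkn=2 2>&1)
  if echo "$out" | grep -qE 'KFBTYP_INVALID|KFED-00322'; then
    echo "CORRUPT allocation table block: $d"
  else
    echo "ok: $d ($(echo "$out" | awk '/kfbh.type/{print $NF}'))"
  fi
done

In this case the block on /dev/mapper/data14 (disk 5, ORADATA_0005) is the one that fails, matching the alert log and the kfed dump above.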
The database then opened directly, with zero data loss.
Posted in Oracle ASM, Oracle backup and recovery
Tagged asm mount, ORA-15066, ORA-15130, ORA-15196, ORA-15335, WARNING: cache read a corrupt block
ORA-600 kfrValAcd30 recovery
A customer's storage controller failed; after the controller was repaired, ASM could no longer mount the disk group and reported ORA-600 [kfrValAcd30], so they asked us for technical support.
Errors in the ASM alert log:
Wed Apr 03 16:50:57 2019
SQL> alter diskgroup data mount
NOTE: cache registered group DATA number=1 incarn=0x14248741
NOTE: cache began mount (first) of group DATA number=1 incarn=0x14248741
NOTE: Assigning number (1,0) to disk (ORCL:DATAVOL1)
Wed Apr 03 16:51:03 2019
NOTE: start heartbeating (grp 1)
kfdp_query(DATA): 15
kfdp_queryBg(): 15
NOTE: cache opening disk 0 of grp 1: DATAVOL1 label:DATAVOL1
NOTE: F1X0 found on disk 0 au 2 fcn 0.0
NOTE: cache mounting (first) external redundancy group 1/0x14248741 (DATA)
Wed Apr 03 16:51:04 2019
* allocate domain 1, invalid = TRUE
Wed Apr 03 16:51:04 2019
NOTE: attached to recovery domain 1
NOTE: starting recovery of thread=1 ckpt=27.2697 group=1 (DATA)
Errors in file /u01/app/grid/diag/asm/+asm/+ASM2/trace/+ASM2_ora_15951.trc (incident=23394):
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [27], [2697], [28], [2697], [], [], [], [], []
Incident details in: /u01/app/grid/diag/asm/+asm/+ASM2/incident/incdir_23394/+ASM2_ora_15951_i23394.trc
Abort recovery for domain 1
NOTE: crash recovery signalled OER-600
ERROR: ORA-600 signalled during mount of diskgroup DATA
ORA-00600: internal error code, arguments: [kfrValAcd30], [DATA], [1], [27], [2697], [28], [2697], [], [], [], [], []
ERROR: alter diskgroup data mount
NOTE: cache dismounting (clean) group 1/0x14248741 (DATA)
NOTE: lgwr not being msg'd to dismount
freeing rdom 1
Wed Apr 03 16:51:05 2019
Sweep [inc][23394]: completed
Sweep [inc2][23394]: completed
Wed Apr 03 16:51:05 2019
Trace dumping is performing id=[cdmp_20190403165105]
NOTE: detached from domain 1
NOTE: cache dismounted group 1/0x14248741 (DATA)
NOTE: cache ending mount (fail) of group DATA number=1 incarn=0x14248741
kfdp_dismount(): 16
kfdp_dismountBg(): 16
NOTE: De-assigning number (1,0) from disk (ORCL:DATAVOL1)
ERROR: diskgroup DATA was not mounted
NOTE: cache deleting context for group DATA 1/337938241
Related MOS note: ORA-600 [KFRVALACD30] in ASM (Doc ID 2123013.1)
ORA-00600 [kfrValAcd30] can be caused by a bug or by a hardware fault. Given this customer's history, the most likely cause is that the hardware failure left the disk group's ACD (Active Change Directory, ASM's redo-like metadata) inconsistent, so crash recovery of the disk group cannot complete and the mount fails; the error arguments appear to echo the recovery checkpoint seen in the alert log (27.2697 versus 28.2697). With some luck a kfed-based repair can get the disk group to mount again; otherwise the data files can be pulled off the disks at a low level to recover as much data as possible.
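For the low-level extraction route, a hedged sketch using amdu, the utility shipped with Grid Infrastructure for reading ASM disks without mounting the group. The disk string follows the ORCL:DATAVOL1 label from the alert log; the ASM file number 256 and the output file name are placeholders that have to be looked up (for example from a surviving controlfile or from the amdu report) before extracting:

# 1. Produce a report and metadata dump of the unmountable DATA disk group
amdu -diskstring 'ORCL:DATAVOL*' -dump 'DATA'

# 2. Extract one data file by its ASM file number (256 is a placeholder)
amdu -diskstring 'ORCL:DATAVOL*' -extract 'DATA.256' -output system01.dbf -noreport -nodir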
Recovery from ORA-15042 at ASM startup caused by missing disks under /dev/rdisk on HP-UX
An old friend contacted me about a customer database failure: ASM could not mount a disk group and complained that two disks were missing, and he asked whether it could be recovered. Since the system is on an isolated internal network, the analysis below is pieced together from the scattered information he was able to send over.
Errors in the ASM alert log:
Fri Aug 12 16:03:12 EAT 2016
SQL> alter diskgroup DGROUP1 mount
Fri Aug 12 16:03:12 EAT 2016
NOTE: cache registered group DGROUP1 number=1 incarn=0xf6781b5c
Fri Aug 12 16:03:12 EAT 2016
NOTE: Hbeat: instance first (grp 1)
Fri Aug 12 16:03:16 EAT 2016
NOTE: start heartbeating (grp 1)
Fri Aug 12 16:03:16 EAT 2016
NOTE: cache dismounting group 1/0xF6781B5C (DGROUP1)
NOTE: dbwr not being msg'd to dismount
ERROR: diskgroup DGROUP1 was not mounted
A foreground attempt to mount the ASM disk group fails with ORA-15042.
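To see which disk numbers ASM can still discover (a disk it cannot discover simply will not show up in the listing), a small sketch run against the ASM instance; the +ASM1 SID is an assumption, and 11g+ would connect as sysasm instead of sysdba:

export ORACLE_SID=+ASM1
sqlplus -s / as sysdba <<'EOF'
-- Disks of a group that fails to mount typically show group_number = 0;
-- any member disk ASM cannot discover is absent from this output entirely.
select group_number, disk_number, header_status, mount_status, path
  from v$asm_disk
 order by disk_number;
EOF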
The ORA-15042 errors make it clear that the disk group cannot be mounted because ASM disks 15 and 16 are missing. The best way to recover it is to find those two disks. Analyzing the disks that are still visible with kfed, we found that ASM disk 14 maps to OS device disk160 and ASM disk 17 maps to disk163, so the natural suspects are disk161 and disk162. The data-center team checked the hardware and found no alarms at all.
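To double-check the ASM-disk-number to OS-device mapping described above, a kfed sketch over the devices that are still visible; the /dev/rdisk paths are the ones from this case and kfed is assumed to be run from the ASM Oracle home:

for d in /dev/rdisk/disk160 /dev/rdisk/disk163; do
  echo "== $d"
  # kfdhdb.dsknum is the ASM disk number (14 and 17 here), kfdhdb.dskname/grpname
  # confirm which disk group and member the device belongs to.
  kfed read "$d" | egrep 'kfdhdb.dsknum|kfdhdb.dskname|kfdhdb.grpname|kfdhdb.hdrsts'
done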
OS-level analysis
(Records not relevant to this conclusion are omitted.)
ls -l /dev/rdisk
crw-rw----   1 oracle   dba    13 0x000070 Jan  1  2016 disk160
crw-rw----   1 oracle   dba    13 0x000073 Jan  1  2016 disk163

ls -l /dev/disk
brw-r-----   1 bin      sys     1 0x000070 Jan 13  2015 disk160
brw-r-----   1 bin      sys     1 0x000071 Jan 13  2015 disk161
brw-r-----   1 bin      sys     1 0x000072 Jan 13  2015 disk162
brw-r-----   1 bin      sys     1 0x000073 Jan 13  2015 disk163
Here we can see that on HP-UX all of the disks exist under /dev/disk, but some are missing under /dev/rdisk. Continue the analysis with ioscan:
ioscan -fNnkC disk
disk  160  64000/0xfa00/0x70  esdisk  CLAIMED  DEVICE  HP  OPEN-V
           /dev/disk/disk160  /dev/rdisk/disk160
disk  161  64000/0xfa00/0x71  esdisk  CLAIMED  DEVICE  HP  OPEN-V
           /dev/disk/disk161
disk  162  64000/0xfa00/0x72  esdisk  CLAIMED  DEVICE  HP  OPEN-V
           /dev/disk/disk162
disk  163  64000/0xfa00/0x73  esdisk  CLAIMED  DEVICE  HP  OPEN-V
           /dev/disk/disk163  /dev/rdisk/disk163
This basically confirms that the device files under /dev/rdisk are the ones that have gone missing. Going one step further, since the /dev/rdisk entries are aggregated (persistent) device files, check whether the legacy device files behind them are still intact:
ioscan -m dsf
/dev/rdisk/disk160    /dev/rdsk/c29t12d4
                      /dev/rdsk/c28t12d4
/dev/rdisk/disk163    /dev/rdsk/c29t12d7
                      /dev/rdsk/c28t12d7

ls -l /dev/rdsk
crw-r-----   1 bin   sys   188 0x1dc000 Apr 22  2014 c29t12d0
crw-r-----   1 bin   sys   188 0x1dc100 Apr 22  2014 c29t12d1
crw-r-----   1 bin   sys   188 0x1dc300 Jan 13  2015 c29t12d3
crw-r-----   1 bin   sys   188 0x1dc400 Jan 13  2015 c29t12d4
crw-r-----   1 bin   sys   188 0x1dc500 Jan 13  2015 c29t12d5
crw-r-----   1 bin   sys   188 0x1dc600 Jan 13  2015 c29t12d6
crw-r-----   1 bin   sys   188 0x1dc700 Jan 13  2015 c29t12d7
crw-r-----   1 bin   sys   188 0x1cc100 Apr 22  2014 c28t12d1
crw-r-----   1 bin   sys   188 0x1cc300 Jan 13  2015 c28t12d3
crw-r-----   1 bin   sys   188 0x1cc400 Jan 13  2015 c28t12d4
crw-r-----   1 bin   sys   188 0x1cc500 Jan 13  2015 c28t12d5
crw-r-----   1 bin   sys   188 0x1cc600 Jan 13  2015 c28t12d6
crw-r-----   1 bin   sys   188 0x1cc700 Jan 13  2015 c28t12d7
From this we can reasonably conclude that /dev/rdsk/c28t12d5, /dev/rdsk/c28t12d6, /dev/rdsk/c29t12d5 and /dev/rdsk/c29t12d6 are the legacy device files behind the missing /dev/rdisk/disk161 and /dev/rdisk/disk162. In other words, only the character devices under /dev/rdisk are broken; everything else is normal.
Recreate the missing device files with system commands:
insf -e -H 64000/0xfa00/0x71
insf -e -H 64000/0xfa00/0x72
/dev/rdisk/disk161 and /dev/rdisk/disk162 are now visible again, so at the OS level the device files have been restored. Fix the permissions and ownership:
chmod 660 /dev/rdisk/disk161
chmod 660 /dev/rdisk/disk162
chown oracle:dba /dev/rdisk/disk161
chown oracle:dba /dev/rdisk/disk162
ASM then started normally, the disk group mounted, and the database opened.
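A minimal verification sketch once the device files are back; the +ASM1 SID is an assumption, and 11g+ would use the sysasm role instead of sysdba:

export ORACLE_SID=+ASM1
sqlplus -s / as sysdba <<'EOF'
-- Remount the repaired disk group and confirm every member disk is visible again
alter diskgroup DGROUP1 mount;
select disk_number, mount_status, header_status, path
  from v$asm_disk
 where path like '/dev/rdisk/%'
 order by disk_number;
EOF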
This recovery was diagnosed and solved entirely at the operating-system level, giving a clean database recovery with zero data loss. A similar recovery case: an unrecognized partition preventing an ASM diskgroup from mounting.
If you run into this kind of situation and cannot resolve it yourself, contact us for professional Oracle database recovery support.
Phone: 17813235971    QQ: 107644445    E-Mail: dba@xifenfei.com