Tag Archive: asm恢复 (ASM recovery)
ERROR: diskgroup XXXX was not mounted
On a 2-node 10.2.0.5 RAC on AIX, node 2's system disk failed. The system was rebuilt by mirroring node 1's system disks and copying them over to node 2, but the resulting disk ordering on node 2 did not match node 1. After the AIX engineer performed some remedial operations, the datadg disk group could no longer be mounted once node 1 was rebooted:
SQL> alter diskgroup datadg mount
Mon Jun 10 23:23:46 CST 2019
NOTE: cache registered group DATADG number=1 incarn=0x8cf61164
Mon Jun 10 23:23:46 CST 2019
NOTE: Hbeat: instance first (grp 1)
Mon Jun 10 23:23:50 CST 2019
NOTE: start heartbeating (grp 1)
Mon Jun 10 23:23:50 CST 2019
NOTE: cache dismounting group 1/0x8CF61164 (DATADG)
NOTE: dbwr not being msg'd to dismount
ERROR: diskgroup DATADG was not mounted
Checking the datadg disk group information (from an earlier successful mount in the alert log):
Tue Jan 29 19:21:45 CST 2019
NOTE: start heartbeating (grp 2)
NOTE: cache opening disk 0 of grp 2: DATADG_0000 path:/dev/rhdisk6
Tue Jan 29 19:21:45 CST 2019
NOTE: F1X0 found on disk 0 fcn 0.0
NOTE: cache opening disk 1 of grp 2: DATADG_0001 path:/dev/rhdisk7
NOTE: cache opening disk 2 of grp 2: DATADG_0002 path:/dev/rhdisk8
NOTE: cache opening disk 3 of grp 2: DATADG_0003 path:/dev/rhdisk9
NOTE: cache mounting (first) group 2/0x60E59155 (DATADG)
* allocate domain 2, invalid = TRUE
Tue Jan 29 19:21:45 CST 2019
NOTE: attached to recovery domain 2
Tue Jan 29 19:21:45 CST 2019
NOTE: cache recovered group 2 to fcn 0.849668
Tue Jan 29 19:21:45 CST 2019
NOTE: LGWR attempting to mount thread 1 for disk group 2
NOTE: LGWR mounted thread 1 for disk group 2
NOTE: opening chunk 1 at fcn 0.849668 ABA
NOTE: seq=21 blk=5394
Tue Jan 29 19:21:46 CST 2019
NOTE: cache mounting group 2/0x60E59155 (DATADG) succeeded
SUCCESS: diskgroup DATADG was mounted
This shows that the datadg disk group is made up of four disks, rhdisk6 through rhdisk9. Querying the related disk information revealed:
rhdisk7 was identified as the abnormal disk, so kfed was used to examine it:
D:\BaiduNetdiskDownload\xifenfei>kfed read rhdisk7.dd
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                           34 ; 0x001: 0x22
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                   49407 ; 0x004: blk=49407
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                    58396 ; 0x010: 0x0000e41c
kfbh.fcn.wrap:                   131072 ; 0x014: 0x00020000
kfbh.spare1:                 4294967064 ; 0x018: 0xffffff18
kfbh.spare2:                 2105310074 ; 0x01c: 0x7d7c7b7a
005918A00 00002200 0000C0FF 00000000 00000000  [."..............]
005918A10 0000E41C 00020000 FFFFFF18 7D7C7B7A  [............z{|}]
005918A20 00000000 00000000 00000000 00000000  [................]
  Repeat 253 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

D:\BaiduNetdiskDownload\xifenfei>kfed read rhdisk7.dd blkn=1
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
006EF8A00 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

D:\BaiduNetdiskDownload\xifenfei>kfed read rhdisk7.dd blkn=2|more
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            3 ; 0x002: KFBTYP_ALLOCTBL
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                33554432 ; 0x004: blk=33554432
kfbh.block.obj:                16777344 ; 0x008: file=128
kfbh.check:                  3844041089 ; 0x00c: 0xe51f6981
kfbh.fcn.base:               1297484544 ; 0x010: 0x4d560b00
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdatb10.aunum:                       0 ; 0x000: 0x00000000
kfdatb10.shrink:                  49153 ; 0x004: 0xc001
kfdatb10.ub2pad:                  20555 ; 0x006: 0x504b
kfdatb10.auinfo[0].link.next:      2048 ; 0x008: 0x0800
kfdatb10.auinfo[0].link.prev:      2048 ; 0x00a: 0x0800
kfdatb10.auinfo[0].free:              0 ; 0x00c: 0x0000
kfdatb10.auinfo[0].total:         49153 ; 0x00e: 0xc001
kfdatb10.auinfo[1].link.next:      4096 ; 0x010: 0x1000
kfdatb10.auinfo[1].link.prev:      4096 ; 0x012: 0x1000
kfdatb10.auinfo[1].free:              0 ; 0x014: 0x0000
kfdatb10.auinfo[1].total:             0 ; 0x016: 0x0000
kfdatb10.auinfo[2].link.next:      6144 ; 0x018: 0x1800
kfdatb10.auinfo[2].link.prev:      6144 ; 0x01a: 0x1800
kfdatb10.auinfo[2].free:              0 ; 0x01c: 0x0000
kfdatb10.auinfo[2].total:             0 ; 0x01e: 0x0000
kfdatb10.auinfo[3].link.next:      8192 ; 0x020: 0x2000
kfdatb10.auinfo[3].link.prev:      8192 ; 0x022: 0x2000
kfdatb10.auinfo[3].free:              0 ; 0x024: 0x0000
To assess the likely extent of the damage: on AIX, ASM disk metadata blocks generally begin with the signature 0082, so the disks were opened with a hex tool and searched for this marker to compare the healthy and damaged disks.
Normal disk:
Abnormal disk:
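The manual signature comparison described above can be sketched in code. This is a minimal illustration only: the 4096-byte ASM metadata block size is an assumption, and the health check uses the kfbh.hard = 0x82 byte (offset 1) visible in the kfed output above.

```python
# Sketch: count metadata blocks in a disk dump that do / do not carry the
# 0x82 marker at byte 1 (kfbh.hard in the kfed dump). A wiped block reads
# zeros there. BLOCK_SIZE = 4096 is an assumption, not confirmed by the post.

BLOCK_SIZE = 4096

def classify_blocks(image_path, max_blocks=256):
    """Return (valid, invalid) counts over the first max_blocks blocks."""
    valid = invalid = 0
    with open(image_path, "rb") as f:
        for _ in range(max_blocks):
            block = f.read(BLOCK_SIZE)
            if len(block) < BLOCK_SIZE:
                break
            if block[1] == 0x82:   # byte 1 is kfbh.hard per the kfed output
                valid += 1
            else:
                invalid += 1
    return valid, invalid
```

Running a scan like this over rhdisk7.dd versus a dump of a healthy member disk would quantify how far the metadata damage extends beyond blocks 0 and 1.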
From the above analysis, the metadata damage on rhdisk7 clearly extends beyond blocks 0 and 1, so manually repairing the disk and continuing to use it was unlikely to succeed. Since the customer's database was small, the chosen approach was to copy the data files, redo logs, and control files out to a file system and open the database there.
Luck was on our side: the recovery was complete, with zero data loss.
WARNING: Read Failed. causes an ASM disk group failure
A customer expanded an ASM disk group; some time later the data disk group abruptly dismounted:
Wed May 29 18:37:25 2019
SUCCESS: ALTER DISKGROUP DATA ADD DISK '/dev/oracleasm/disks/DATA_0028' SIZE 511993M ,
'/dev/oracleasm/disks/DATA_0027' SIZE 511993M , '/dev/oracleasm/disks/DATA_0026' SIZE 511993M ,
'/dev/oracleasm/disks/DATA_0025' SIZE 511993M /* ASMCA */
NOTE: starting rebalance of group 1/0x9e18e2f1 (DATA) at power 1
Wed May 29 18:37:26 2019
Starting background process ARB0
Wed May 29 18:37:26 2019
ARB0 started with pid=34, OS id=96638
NOTE: assigning ARB0 to group 1/0x9e18e2f1 (DATA) with 1 parallel I/O
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
cellip.ora not found.
Wed May 29 19:21:43 2019
WARNING: Read Failed. group:1 disk:27 AU:0 offset:360448 size:4096
WARNING: cache failed reading from group=1(DATA) dsk=27 blk=88 count=1 from disk=27 (DATA_0027)
kfkist=0x20 status=0x02 osderr=0x0 file=kfc.c line=11596
ERROR: cache failed to read group=1(DATA) dsk=27 blk=88 from disk(s): 27(DATA_0027)
ORA-15080: synchronous I/O operation to a disk failed
ORA-27072: File I/O error
Linux-x86_64 Error: 5: Input/output error
Additional information: 4
Additional information: 704
Additional information: -1
NOTE: cache initiating offline of disk 27 group DATA
NOTE: process _user31879_+asm1 (31879) initiating offline of disk 27.3915911747 (DATA_0027) with mask 0x7e in group 1
NOTE: initiating PST update: grp = 1, dsk = 27/0xe9681243, mask = 0x6a, op = clear
Wed May 29 19:21:43 2019
GMON updating disk modes for group 1 at 10 for pid 35, osid 31879
ERROR: Disk 27 cannot be offlined, since diskgroup has external redundancy.
ERROR: too many offline disks in PST (grp 1)
Wed May 29 19:21:43 2019
NOTE: cache dismounting (not clean) group 1/0x9E18E2F1 (DATA)
NOTE: messaging CKPT to quiesce pins Unix process pid: 90256, image: oracle@ftz-db-o1 (B000)
Wed May 29 19:21:43 2019
NOTE: halting all I/Os to diskgroup 1 (DATA)
WARNING: Offline for disk DATA_0027 in mode 0x7f failed.
Wed May 29 19:21:43 2019
NOTE: LGWR doing non-clean dismount of group 1 (DATA)
NOTE: LGWR sync ABA=27.3207 last written ABA 27.3207
Wed May 29 19:21:43 2019
ERROR: ORA-15130 thrown in ARB0 for group number 1
Errors in file /oracle/grid_base/diag/asm/+asm/+ASM1/trace/+ASM1_arb0_96638.trc:
ORA-15130: diskgroup "" is being dismounted
ORA-15130: diskgroup "DATA" is being dismounted
Wed May 29 19:21:43 2019
NOTE: stopping process ARB0
Subsequent attempts to mount the DATA disk group succeeded, but it dismounted again almost immediately; the same errors repeated in the alert log.
The root cause of this failure: after new disks were added, the ASM disk group started a rebalance, but I/O read errors on disk 27 prevented the rebalance from ever completing, so the DATA disk group could not stay mounted. The fix was to patch the ASM disk group metadata to suppress the rebalance, so that DATA would stop dismounting, and then carry out the rest of the recovery.
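As a quick consistency check on the alert log above, the failing block number reported by the cache layer matches the byte offset inside the AU, assuming the usual 4096-byte ASM metadata block size:

```python
# From the alert log: "Read Failed. group:1 disk:27 AU:0 offset:360448 size:4096"
# and "cache failed reading ... blk=88". With 4096-byte metadata blocks
# (an assumption; it is the common default) the two numbers agree:
METADATA_BLOCK_SIZE = 4096
failed_offset = 360448
print(failed_offset // METADATA_BLOCK_SIZE)  # -> 88, matching blk=88
```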
ASM disk partition loss recovery
A customer reported that after setting up active-active replication on their xx storage and rebooting the host, GI would not start. Analysis showed that the partition information on every disk from that storage had been lost, so asmlib could not discover the disks (partitions were used as the ASM disks).
Errors similar to the following were seen (disk partitions lost):
--fdisk -l (partial output)
Disk /dev/mapper/datahds1: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

--ls -l /dev/mapper/ shows no partition entries
lrwxrwxrwx 1 root root 7 May 6 03:44 datahds1 -> ../dm-1
lrwxrwxrwx 1 root root 7 May 6 03:26 datahds2 -> ../dm-3
lrwxrwxrwx 1 root root 7 May 6 03:26 datahds3 -> ../dm-8
lrwxrwxrwx 1 root root 7 May 6 03:26 ocrhds1 -> ../dm-0
lrwxrwxrwx 1 root root 7 May 6 03:26 ocrhds2 -> ../dm-2
lrwxrwxrwx 1 root root 7 May 6 03:26 ocrhds3 -> ../dm-4
The ASM log shows:
SUCCESS: diskgroup DATADG was mounted
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 3
SUCCESS: diskgroup OCRHDS was mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA"
Analyzing the system log:
May 6 02:23:27 db2 kernel: sdb: unknown partition table
May 6 02:23:27 db2 kernel: sde: unknown partition table
May 6 02:23:27 db2 kernel: sdc: unknown partition table
May 6 02:23:27 db2 kernel: sdf: unknown partition table
May 6 02:23:27 db2 kernel: sdd: unknown partition table
May 6 02:23:27 db2 kernel: sdj:Dev sdj: unable to read RDB block 0
May 6 02:23:27 db2 kernel: unable to read partition table
May 6 02:23:27 db2 kernel: sdi: sdi1
May 6 02:23:27 db2 kernel: sdk: sdk1
May 6 02:23:27 db2 kernel: sdg: unknown partition table
May 6 02:23:27 db2 kernel: sdl: sdl1
May 6 02:23:27 db2 kernel: sdm:Dev sdm: unable to read RDB block 0
May 6 02:23:27 db2 kernel: unable to read partition table
May 6 02:23:27 db2 kernel: sdo:Dev sdo: unable to read RDB block 0
May 6 02:23:27 db2 kernel: unable to read partition table
May 6 02:23:27 db2 kernel: sdn:Dev sdn: unable to read RDB block 0
May 6 02:23:27 db2 kernel: unable to read partition table
May 6 02:23:27 db2 kernel: sdp:Dev sdp: unable to read RDB block 0
May 6 02:23:27 db2 kernel: unable to read partition table
May 6 02:23:27 db2 kernel: sds:Dev sds: unable to read RDB block 0
May 6 02:23:27 db2 kernel: unable to read partition table
May 6 02:23:27 db2 kernel: sdh:
May 6 02:23:27 db2 kernel: sdt: sdt1
May 6 02:23:27 db2 kernel: sdv:Dev sdv: unable to read RDB block 0
May 6 02:23:27 db2 kernel: unable to read partition table
May 6 02:23:27 db2 kernel: sdq:Dev sdq: unable to read RDB block 0
May 6 02:23:27 db2 kernel: unable to read partition table
May 6 02:23:27 db2 kernel: sd 1:0:1:9: [sdr] Very big device. Trying to use READ CAPACITY(16).
May 6 02:23:27 db2 kernel: sdr:Dev sdr: unable to read RDB block 0
May 6 02:23:27 db2 kernel: unable to read partition table
May 6 02:23:27 db2 kernel: sd 2:0:0:9: [sdab] Very big device. Trying to use READ CAPACITY(16).
May 6 02:23:27 db2 kernel: sdab: unknown partition table
May 6 02:23:27 db2 kernel: sdac: unknown partition table
May 6 02:23:27 db2 kernel: sdw: sdw1
May 6 02:23:27 db2 kernel: sdu:Dev sdu: unable to read RDB block 0
May 6 02:23:27 db2 kernel: unable to read partition table
May 6 02:23:27 db2 kernel: sdx: sdx1
May 6 02:23:27 db2 kernel: sdy: sdy1
May 6 02:23:27 db2 kernel: sdaa: sdaa1
May 6 02:23:27 db2 kernel: sdz: sdz1
May 6 02:23:27 db2 kernel: sdae: unknown partition table
May 6 02:23:27 db2 kernel: sdaf: unknown partition table
May 6 02:23:27 db2 kernel: sdag: unknown partition table
May 6 02:23:27 db2 kernel: sdai:
May 6 02:23:27 db2 kernel: sdah: unknown partition table
May 6 02:23:27 db2 kernel: sdad: unknown partition table
May 6 02:23:28 db2 mcelog: failed to prefill DIMM database from DMI data
The error here is unambiguous: unknown partition table. The disks' partition information is damaged, and fdisk can no longer find the partitions.
partprobe did not help either:
[root@db2 oracle]# partprobe /dev/mapper/ocrhds3
[root@db2 oracle]#
[root@db2 oracle]# ls -l /dev/mapper/ocrhds3*
lrwxrwxrwx 1 root root 7 May 6 07:30 /dev/mapper/ocrhds3 -> ../dm-4
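A quick way to confirm that a DOS partition table is really gone, before reaching for partprobe, is to check the standard MBR boot signature: a valid MBR ends sector 0 with the bytes 0x55 0xAA at offset 510. A minimal sketch; the device path shown in the comment is illustrative:

```python
# Sketch: check for the 0x55AA DOS boot signature at the end of sector 0.
# Missing signature is consistent with the kernel's "unknown partition table"
# messages and fdisk's "Disk identifier: 0x00000000" above.

def has_mbr_signature(path):
    """True if sector 0 of `path` ends with the 0x55AA boot signature."""
    with open(path, "rb") as f:
        sector0 = f.read(512)
    return len(sector0) == 512 and sector0[510:512] == b"\x55\xaa"

# e.g. has_mbr_signature("/dev/mapper/datahds1")  # illustrative path
```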
From the information above, the partition table had evidently been destroyed. All we could do at this point was hope for some luck: that the actual data inside the partitions had not been damaged.
Analyzing the actual partition data on disk:
[root@db2 ~]$ dd if=/dev/mapper/datahds1 of=/tmp/datahds1.dd bs=1024k count=50
[root@db2 ~]$ dd if=/tmp/datahds1.dd of=/tmp/xff01.dd bs=3225 skip=1
[grid@db2 ~]$ kfed read /tmp/xff01.dd |more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3110278718 ; 0x00c: 0xb963163e
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKHDSDATA1 ; 0x000: length=16
kfdhdb.driver.reserved[0]:   1146307656 ; 0x008: 0x44534448
kfdhdb.driver.reserved[1]:    826364993 ; 0x00c: 0x31415441
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:             DATADG_0000 ; 0x028: length=11
kfdhdb.grpname:                  DATADG ; 0x048: length=6
kfdhdb.fgname:              DATADG_0000 ; 0x068: length=11
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33050696 ; 0x0a8: HOUR=0x8 DAYS=0x2 MNTH=0x4 YEAR=0x7e1
kfdhdb.crestmp.lo:           3813740544 ; 0x0ac: USEC=0x0 MSEC=0x44 SECS=0x35 MINS=0x38
kfdhdb.mntstmp.hi:             33050701 ; 0x0b0: HOUR=0xd DAYS=0x2 MNTH=0x4 YEAR=0x7e1
kfdhdb.mntstmp.lo:            411385856 ; 0x0b4: USEC=0x0 MSEC=0x150 SECS=0x8 MINS=0x6
From this we can make a preliminary judgment that the partition contents are very likely intact: the ASM disk header is good, and since corruption generally overwrites a disk from front to back, if the header survived, the chance that later blocks were overwritten is very small.
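When the partition table is gone, the ASM header can also be located by searching the raw device for the asmlib provision string prefix ORCLDISK (visible above as kfdhdb.driver.provstr: ORCLDISKHDSDATA1). A minimal sketch; the 50 MB search limit is an arbitrary assumption:

```python
# Sketch: find the ASM disk header block inside a raw dump by searching for
# the asmlib "ORCLDISK" provision-string prefix. Per the kfed offsets above,
# kfdhdb (which begins with the provision string) follows the 32-byte kfbh
# header, so the block itself starts 0x20 bytes before the magic.

MAGIC = b"ORCLDISK"

def find_asm_header_block(image_path, limit=50 * 1024 * 1024):
    """Return the offset of the ASM disk header block, or None."""
    with open(image_path, "rb") as f:
        data = f.read(limit)
    pos = data.find(MAGIC)
    return pos - 0x20 if pos >= 0x20 else None
```

The offset found this way, relative to the start of the whole disk, indicates where the lost partition used to begin.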
New disks were prepared, and the partition contents were dd'd directly onto the new devices:
dd if=/dev/mapper/ocrhds1 of=/dev/mapper/ocrhdsnew1 skip=1 bs=3225
dd if=/dev/mapper/ocrhds2 of=/dev/mapper/ocrhdsnew2 skip=1 bs=3225
dd if=/dev/mapper/ocrhds3 of=/dev/mapper/ocrhdsnew3 skip=1 bs=3225
dd if=/dev/mapper/datahds1 of=/dev/mapper/datahdsnew1 skip=1 bs=3225
dd if=/dev/mapper/datahds2 of=/dev/mapper/datahdsnew2 skip=1 bs=3225
dd if=/dev/mapper/datahds3 of=/dev/mapper/datahdsnew3 skip=1 bs=3225
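After copies like these, it is worth verifying that the bytes landed where expected before handing the new disks to asmlib. A minimal sketch, reusing the paths and the 3225-byte skip offset from the dd commands above; md5 is used here purely as an integrity check:

```python
# Sketch: compare a checksum of the first few MB of the source (read from
# the dd skip offset) and the target (read from offset 0). Paths in the
# illustrative usage below mirror the dd commands above.
import hashlib

def md5_of(path, offset, length):
    """md5 of `length` bytes starting at `offset` in `path`."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        f.seek(offset)
        h.update(f.read(length))
    return h.hexdigest()

# Illustrative usage mirroring the first datahds dd command:
# assert md5_of("/dev/mapper/datahds1", 3225, 4 * 1024 * 1024) == \
#        md5_of("/dev/mapper/datahdsnew1", 0, 4 * 1024 * 1024)
```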
Rescan the disks with asmlib:
[root@db1 disks]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "HDSOCR3"
Instantiating disk "HDSDATA2"
Instantiating disk "HDSDATA1"
Instantiating disk "HDSDATA3"
Instantiating disk "HDSOCR1"
Instantiating disk "HDSOCR2"
[root@db1 disks]# ls -ltr
total 0
brw-rw---- 1 grid asmadmin 8, 160 May 6 13:49 HDSOCR3
brw-rw---- 1 grid asmadmin 8, 192 May 6 13:49 HDSDATA2
brw-rw---- 1 grid asmadmin 8, 176 May 6 13:49 HDSDATA1
brw-rw---- 1 grid asmadmin 8, 208 May 6 13:49 HDSDATA3
brw-rw---- 1 grid asmadmin 8, 128 May 6 13:49 HDSOCR1
brw-rw---- 1 grid asmadmin 8, 144 May 6 13:49 HDSOCR2
Verify the copied partition with kfed:
[root@db2 tmp]# /oracle/app/11.2.0/grid_1/bin/kfed read /dev/oracleasm/disks/HDSDATA1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3110278718 ; 0x00c: 0xb963163e
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKHDSDATA1 ; 0x000: length=16
kfdhdb.driver.reserved[0]:   1146307656 ; 0x008: 0x44534448
kfdhdb.driver.reserved[1]:    826364993 ; 0x00c: 0x31415441
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:             DATADG_0000 ; 0x028: length=11
kfdhdb.grpname:                  DATADG ; 0x048: length=6
kfdhdb.fgname:              DATADG_0000 ; 0x068: length=11
kfdhdb.capname:                         ; 0x088: length=0
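The same identity check kfed performs can be done by hand on a raw header block. Per the offsets in the dump above, kfdhdb.dskname sits at 0x028 and kfdhdb.grpname at 0x048 inside kfdhdb, which itself follows the 32-byte kfbh header. A sketch, assuming null-padded 32-byte name fields (an assumption inferred from the 0x20-byte spacing of the name offsets):

```python
# Sketch: extract the disk and group names directly from a raw ASM disk
# header block, using the kfed field offsets shown above.

KFBH_SIZE = 0x20  # kfbh fields span 0x000-0x01f in the kfed dump

def header_names(block):
    """Return (dskname, grpname) from a raw ASM disk header block."""
    base = KFBH_SIZE
    dskname = block[base + 0x028: base + 0x048].split(b"\x00")[0].decode()
    grpname = block[base + 0x048: base + 0x068].split(b"\x00")[0].decode()
    return dskname, grpname
```

On the recovered HDSDATA1 device this should yield ('DATADG_0000', 'DATADG'), matching the kfed output.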
ASM and the database then started normally:
[grid@db2 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576   3145710  2378034                0         2378034              0             N  DATADG/
MOUNTED  NORMAL  N         512    4096  1048576     15342    14416             5114            4651              0             Y  OCRHDS/
ASMCMD>

[oracle@db2 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Sat May 6 13:54:21 2017
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 3.6077E+10 bytes
Fixed Size                  2260648 bytes
Variable Size            7247757656 bytes
Database Buffers         2.8723E+10 bytes
Redo Buffers              104382464 bytes
Database mounted.
Database opened.
SQL>
With this recovery, the lost ASM disk partitions were restored with zero data loss.
If you run into this kind of situation and cannot resolve it, contact us for professional Oracle database recovery support.
Phone: 17813235971    QQ: 107644445    E-Mail: dba@xifenfei.com