Tag archive: ASM disk header repair
pvid=yes causes ASM diskgroup mount failure
Early this morning a customer called in a recovery request. On an AIX RAC system whose storage is mirrored across two IBM arrays, a storage disaster-recovery drill changed the disk device names. The customer then set a pvid on one of the disks, and one ASM diskgroup could no longer come up. They went on to delete those devices on the AIX side and rescan them, after which not just that one diskgroup failed to mount: the entire GI stack would no longer start. At that point they asked us for technical support.
Check the ASM log to confirm the ASM disk layout
From the log we can confirm that there are two ASM diskgroups, each with two disks: hdisk2 and hdisk3 make up HISDATA, and hdisk4 and hdisk5 make up EMRDATA.
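For reference, the membership usually shows up in the disk-discovery lines of the ASM alert log; a hedged sketch (the log path is an assumption, adjust to your diagnostic destination):

grep "Assigning number" /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
# typical lines:
#   NOTE: Assigning number (1,0) to disk (/dev/rhdisk2)
#   NOTE: Assigning number (1,1) to disk (/dev/rhdisk3)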
Analyze the disk headers with kfed
dd if=/dev/rhdisk2 of=/tmp/xifenfei/rhdisk2.dd bs=1024k count=10
dd if=/dev/rhdisk3 of=/tmp/xifenfei/rhdisk3.dd bs=1024k count=10
dd if=/dev/rhdisk4 of=/tmp/xifenfei/rhdisk4.dd bs=1024k count=10
dd if=/dev/rhdisk5 of=/tmp/xifenfei/rhdisk5.dd bs=1024k count=10

--copied to my own machine for analysis
C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk2.dd|grep name
kfdhdb.dskname:            HISDATA_0000 ; 0x028: length=12
kfdhdb.grpname:                 HISDATA ; 0x048: length=7
kfdhdb.fgname:             HISDATA_0000 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0

C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk3.dd|grep name
kfdhdb.dskname:            HISDATA_0001 ; 0x028: length=12
kfdhdb.grpname:                 HISDATA ; 0x048: length=7
kfdhdb.fgname:             HISDATA_0001 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0

C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk4.dd|grep name
kfdhdb.dskname:            EMRDATA_0000 ; 0x028: length=12
kfdhdb.grpname:                 EMRDATA ; 0x048: length=7
kfdhdb.fgname:             EMRDATA_0000 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0

C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk5.dd|grep name

C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk5.dd
kfbh.endian:                        201 ; 0x000: 0xc9
kfbh.hard:                          194 ; 0x001: 0xc2
kfbh.type:                          212 ; 0x002: *** Unknown Enum ***
kfbh.datfmt:                        193 ; 0x003: 0xc1
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
000000000 C1D4C2C9 00000000 00000000 00000000  [................]
000000010 00000000 00000000 00000000 00000000  [................]
  Repeat 254 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][212]

C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk5.dd blkn=2|grep kfbh
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            3 ; 0x002: KFBTYP_ALLOCTBL
kfbh.datfmt:                          2 ; 0x003: 0x02
kfbh.block.blk:                33554432 ; 0x004: blk=33554432
kfbh.block.obj:                16777344 ; 0x008: file=128
kfbh.check:                  2654889601 ; 0x00c: 0x9e3e6681
kfbh.fcn.base:               1696071680 ; 0x010: 0x65180000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000

C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk5.dd blkn=510|grep name
kfdhdb.dskname:            EMRDATA_0001 ; 0x028: length=12
kfdhdb.grpname:                 EMRDATA ; 0x048: length=7
kfdhdb.fgname:             EMRDATA_0001 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0
The analysis above essentially confirms that setting a pvid on hdisk5 destroyed that ASM disk's header: the first bytes of block 0 (0xC9 0xC2 0xD4 0xC1, EBCDIC "IBMA") are the record AIX writes when a pvid is assigned. Because the backup header at blkn=510 is still intact, the header can be repaired directly from it (remember to clear the pvid first).
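A minimal sketch of the fix on AIX, assuming a kfed new enough to carry the repair subcommand (11g era); the device names are the ones from this case:

# clear the pvid that overwrote block 0 of the ASM disk (AIX)
chdev -l hdisk5 -a pv=clear
# rewrite block 0 from the backup header; with 1MB AUs and 4KB metadata
# blocks the backup sits at absolute blkn=510, which is what was read above
# (pass ausz=<bytes> if the diskgroup uses a non-default AU size)
kfed repair /dev/rhdisk5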
C:\Users\FAL>kfed read H:\temp\xifenfei\tmp\xifenfei\rhdisk5.dd |grep name
kfdhdb.dskname:            EMRDATA_0001 ; 0x028: length=12
kfdhdb.grpname:                 EMRDATA ; 0x048: length=7
kfdhdb.fgname:             EMRDATA_0001 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0
Analyzing the errors when CRS startup stalled at the cssd process
1. Deleting and rescanning the devices left hdisk[2-5] with the wrong ownership and group permissions.
2. Deleting and rescanning the disks also left the wrong disk sharing (reserve) mode. Both fixes are sketched below.
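A hedged sketch of fixing both items on AIX (owner, group, and reserve policy values are the usual ones for a GI install, not taken from this case; substitute your own):

# restore the ownership and permissions GI expects on the character devices
chown grid:asmadmin /dev/rhdisk2 /dev/rhdisk3 /dev/rhdisk4 /dev/rhdisk5
chmod 660 /dev/rhdisk2 /dev/rhdisk3 /dev/rhdisk4 /dev/rhdisk5
# disable SCSI reservations so both RAC nodes can open the disks
for d in hdisk2 hdisk3 hdisk4 hdisk5; do
  chdev -l $d -a reserve_policy=no_reserve
done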
After repairing the disk header and correcting these two issues, GI started normally, the diskgroups mounted, and the database opened with zero data loss: a complete recovery.
A similar customer recovery case: recovery of an ASM diskgroup that could not mount after a pvid was mistakenly set on an ASM disk.
If you run into this kind of situation and cannot resolve it, contact us for professional ORACLE database recovery support.
Phone: 17813235971   QQ: 107644445   E-Mail: dba@xifenfei.com
ADHU (ASM Disk Header Utility): an ASM disk header backup and recovery tool
adhu (ASM Disk Header Utility) ranks alongside kfed and amdu as one of the three classic ASM recovery tools, but Oracle never promoted it widely (it is an internal tool). As kfed gained features and automatic ASM disk header backups matured, Oracle essentially stopped developing and supporting adhu; it remains useful as a disk header protection tool for ASM versions before 10.2.0.5.
adhu overview
adhu is invoked here through utildhu, a shell wrapper that makes the tool friendlier and easier to automate; it accepts three command arguments: install, check, and repair.
[root@xff1 tmp]# su - grid
xff1:/home/grid> cd /tmp/adhu/
xff1:/tmp/adhu> ls -l
total 68
-rwxr-xr-x 1 grid oinstall 18902 Nov  1  2008 adhu
-rw-r--r-- 1 grid oinstall  1970 Nov  1  2008 README
-rwxr-xr-x 1 grid oinstall  6964 Mar 21 16:20 utildhu
-rw-r--r-- 1 root root     12634 Mar 21 16:05 utildhu.zip
xff1:/tmp/adhu> ./utildhu
Usage: utildhu install/check/repair [device name]

$utildhu install
  Will gather a list of member ASM disks and create the backup directory ./HeaderBackup
  The ./HeaderBackup directory will contain the backup header of every asm disk in this database

$utildhu check
  Will run /tmp/adhu/adhu for every disk discovered by $utildhu install
  and will email recipients configured in RECIPIENTS if there are errors in the disk header
  It is hoped that the user will enter valid RECIPIENT email addresses,
  and will place this utility in $ORA_ASM_HOME

$utildhu repair <device name>
  Will repair the device provided using the backup header blocks that have been copied previously.
  This does assume that you have backup header blocks in ./HeaderBackup

Sample crontab entry to run a check every 5 minutes
#Minute (0-59) Hour (0-23) Day of Month (1-31) Month (1-12 or Jan-Dec) Day of Week (0-6 or Sun-Sat) Command
0,5,10,15,20,25,30,35,40,45,50,55 * * * * * utildhu check

Please read the README for more information
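One caution on the README excerpt above: the sample crontab entry carries six time fields, one more than cron accepts. A standard five-field equivalent would be (a sketch, assuming utildhu lives in /tmp/adhu):

*/5 * * * * /tmp/adhu/utildhu check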
adhu install
install generates the utildhu.config configuration file and takes the first backup of every ASM disk header.
xff1:/tmp/adhu> ./utildhu install
xff1:/tmp/adhu> ls -l
total 64
-rwxr-xr-x 1 grid oinstall 18902 Nov  1  2008 adhu
drwxr-xr-x 2 grid oinstall  4096 Mar 21 16:23 HeaderBackup
-rw-r--r-- 1 grid oinstall  1117 Mar 21 16:23 persistent-log.utildhu
-rw-r--r-- 1 grid oinstall  1970 Nov  1  2008 README
-rwxr-xr-x 1 grid oinstall  6964 Mar 21 16:20 utildhu
-rw-r--r-- 1 grid oinstall   243 Mar 21 16:23 utildhu.config
-rw-r--r-- 1 grid oinstall   710 Mar 21 16:23 utildhu.out
-rw-r--r-- 1 root root     12634 Mar 21 16:05 utildhu.zip
xff1:/tmp/adhu> cd HeaderBackup/
xff1:/tmp/adhu/HeaderBackup> ls -ltr
total 12
-rw-r--r-- 1 grid oinstall 4096 Mar 21 16:23 oradata1p1
-rw-r--r-- 1 grid oinstall 4096 Mar 21 16:23 oradata2p1
-rw-r--r-- 1 grid oinstall 4096 Mar 21 16:23 ocrvotep1
xff1:/tmp/adhu/HeaderBackup> more ../utildhu.config
/dev/mapper/oradata1p1
/dev/mapper/oradata2p1
/dev/mapper/ocrvotep1
xff1:/tmp/adhu/HeaderBackup> more ../persistent-log.utildhu
Mon Mar 21 16:23:29 CST 2016
ASM Disk Header Check Utility Installed on
Devices configured are:
/dev/mapper/oradata1p1
/dev/mapper/oradata2p1
/dev/mapper/ocrvotep1
ADHU: /dev/mapper/oradata1p1: Status 0x01 Mon Mar 21 16:23:29 2016
ADHU: /dev/mapper/oradata1p1: Diskgroup:DATA Disk:DATA_0000 #0
ADHU: /dev/mapper/oradata1p1: valid disk header found
ADHU: /dev/mapper/oradata1p1: backup block updated
###ADHU check run at Mon Mar 21 16:23:29 CST 2016  NO ERRORS FOUND
adhu check
Against healthy ASM disks, check mainly serves to write fresh disk header backups.
xff1:/tmp/adhu> ls -l HeaderBackup/
total 16
-rw-r--r-- 1 grid oinstall 4096 Mar 21 23:04 ocrvotep1
-rw-r--r-- 1 grid oinstall 4096 Mar 21 23:08 oradata1p1
-rw-r--r-- 1 grid oinstall 4096 Mar 21 23:08 oradata1p1.corrupt
-rw-r--r-- 1 grid oinstall 4096 Mar 21 23:04 oradata2p1
xff1:/tmp/adhu> ./utildhu check
xff1:/tmp/adhu> ls -l HeaderBackup/
total 16
-rw-r--r-- 1 grid oinstall 4096 Mar 21 23:11 ocrvotep1
-rw-r--r-- 1 grid oinstall 4096 Mar 21 23:11 oradata1p1
-rw-r--r-- 1 grid oinstall 4096 Mar 21 23:08 oradata1p1.corrupt
-rw-r--r-- 1 grid oinstall 4096 Mar 21 23:11 oradata2p1
adhu repair
repair rewrites the disk header: when an ASM disk header has been corrupted, this command restores it from the backup.
xff1:/tmp/adhu> dd if=/dev/zero of=/dev/mapper/oradata1p1 bs=4096 count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.000282838 s, 14.5 MB/s
xff1:/tmp/adhu> ./utildhu check
xff1:/tmp/adhu> tail -f persistent-log.utildhu
###ADHU check run at Mon Mar 21 23:04:04 CST 2016  ERRORS FOUND
ADHU: /dev/mapper/oradata1p1: Status 0x08 Mon Mar 21 23:04:04 2016
ADHU: /dev/mapper/oradata1p1: Diskgroup:DATA Disk:DATA_0000 #0
ADHU: /dev/mapper/oradata1p1: corrupt disk header encountered
ADHU: /dev/mapper/oradata1p1: valid backup block found
ADHU: /dev/mapper/oradata1p1: CORRUPT HEADER NOT REPAIRED
xff1:/tmp/adhu> kfed read /dev/mapper/oradata1p1
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
7F6BD9981400 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]
xff1:/tmp/adhu> sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Mar 21 23:07:25 2016
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup data dismount;
alter diskgroup data dismount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DATA" precludes its dismount

SQL> alter diskgroup data mount;
alter diskgroup data mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15013: diskgroup "DATA" is already mounted

SQL> alter diskgroup data dismount force;
Diskgroup altered.

SQL> alter diskgroup data mount;
alter diskgroup data mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15040: diskgroup is incomplete

xff1:/tmp/adhu> ./utildhu repair /dev/mapper/oradata1p1
DEVICE /dev/mapper/oradata1p1 REPAIRED AT Mon Mar 21 23:06:06 CST 2016
ADHU: /dev/mapper/oradata1p1: Status 0x04 Mon Mar 21 23:06:06 2016
ADHU: /dev/mapper/oradata1p1: Diskgroup:DATA Disk:DATA_0000 #0
ADHU: /dev/mapper/oradata1p1: corrupt disk header encountered
ADHU: /dev/mapper/oradata1p1: valid backup block found
ADHU: /dev/mapper/oradata1p1: disk header repaired
xff1:/tmp/adhu> sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Mar 21 23:08:48 2016
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup data mount;
Diskgroup altered.
Using adhu directly
adhu [-dir dirname ] [-repair] [-quiet] [-readonly] [-syslog mask ] devname
- By default adhu writes the disk header backup file into the current directory; the -dir option selects a different backup directory.
- Use the -repair option when adhu should repair a corrupted ASM disk header.
- The -quiet option suppresses all normal output; a successful run prints nothing.
- The -readonly option opens the disk device read-only: the backup block on disk is not updated, while the backup file is still written when possible.
- The -syslog option controls whether results are written to the system log and to standard output.
- devname is the device file of the ASM disk; the header backup file is named after the device and placed in the current directory or the directory given with -dir. A short usage sketch follows this list.
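A short usage sketch tying these options together (the device path and backup directory are illustrative assumptions, not taken from this post):

./adhu -dir /backup/asmheader /dev/mapper/oradata1p1             # back up the disk header into /backup/asmheader
./adhu -readonly -dir /backup/asmheader /dev/mapper/oradata1p1   # verify the header without writing the backup block
./adhu -dir /backup/asmheader -repair /dev/mapper/oradata1p1     # rewrite a corrupt header from the backup file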
Manually repairing a corrupted ASM DISK HEADER
Today a reader ran fdisk against a disk that ASM was using and broke the ASM disk; the problem was solved by manually repairing the ASM DISK HEADER. The scenario is reproduced in a lab below as a reminder: partition disks that belong to ASM with great care, and keep routine backups of your ASM disk headers (a scripted example follows).
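A minimal sketch of scripting such header backups for every asmlib disk (paths match this lab; run as root, and keep the backups off the disks themselves):

#!/bin/sh
# copy the 4KB header block (block 0) of each asmlib disk to a backup directory
mkdir -p /backup/asmheader
for disk in /dev/oracleasm/disks/*; do
  dd if="$disk" of="/backup/asmheader/$(basename "$disk").dd" bs=4096 count=1
done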
Initial environment
SQL> select * from v$version;
BANNER
----------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

SQL> show parameter instance_name;
NAME                                 TYPE       VALUE
------------------------------------ ---------- ----------------
instance_name                        string     +ASM1

SQL> select group_number,DISK_NUMBER,PATH,HEADER_STATUS from v$asm_disk;
GROUP_NUMBER DISK_NUMBER PATH                           HEADER_STATUS
------------ ----------- ------------------------------ ------------------------
           1           1 /dev/oracleasm/disks/VOL2      MEMBER
           1           0 /dev/oracleasm/disks/VOL1      MEMBER
           2           1 /dev/oracleasm/disks/VOL4      MEMBER
           2           0 /dev/oracleasm/disks/VOL3      MEMBER

[grid@rac1 ~]$ /etc/init.d/oracleasm querydisk -d VOL3
Disk "VOL3" is a valid ASM disk on device [8,17]
[grid@rac1 ~]$ ll /dev |grep 8,|grep 17
brw-r----- 1 root disk 8, 17 Apr 17 11:37 sdb1
[grid@rac1 ~]$ /etc/init.d/oracleasm querydisk -d VOL4
Disk "VOL4" is a valid ASM disk on device [8,18]
[grid@rac1 ~]$ ll /dev |grep 8,|grep 18
brw-r----- 1 root disk 8, 18 Apr 17 11:37 sdb2
Back up the ASM DISK HEADER
[root@rac1 backup_asmheader]# dd if=/dev/sdb1 of=vol3header.dd bs=4096 count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.000143581 seconds, 28.5 MB/s
[root@rac1 backup_asmheader]# dd if=/dev/sdb2 of=vol4header.dd bs=4096 count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.000147727 seconds, 27.7 MB/s
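The matching restore is the same dd reversed; a sketch (only ever write back a backup taken from that same disk):

[root@rac1 backup_asmheader]# dd if=vol3header.dd of=/dev/sdb1 bs=4096 count=1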
Corrupt the ASM DISK HEADER
[root@rac1 ~]# dd if=/dev/zero of=/dev/sdb1 bs=4096 count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 4.4421e-05 seconds, 92.2 MB/s
[grid@rac1 ~]$ kfed read /dev/oracleasm/disks/VOL3
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
B4C83200 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

SQL> select group_number,DISK_NUMBER,PATH,HEADER_STATUS from v$asm_disk;
GROUP_NUMBER DISK_NUMBER PATH                           HEADER_STATUS
------------ ----------- ------------------------------ ------------------------
           1           1 /dev/oracleasm/disks/VOL2      MEMBER
           1           0 /dev/oracleasm/disks/VOL1      MEMBER
           2           1 /dev/oracleasm/disks/VOL4      MEMBER
           2           0 /dev/oracleasm/disks/VOL3      CANDIDATE
remount diskgroup
SQL> alter diskgroup xifenfei dismount;
Diskgroup altered.

SQL> alter diskgroup xifenfei mount;
alter diskgroup xifenfei mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "XIFENFEI" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "XIFENFEI"
Inspect another disk in the same DISKGROUP with kfed
[grid@rac1 ~]$ kfed read /dev/oracleasm/disks/VOL4
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483649 ; 0x008: disk=1
kfbh.check:                   349717291 ; 0x00c: 0x14d8432b
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:     ORCLDISKVOL4 ; 0x000: length=12
kfdhdb.driver.reserved[0]:    877416278 ; 0x008: 0x344c4f56
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        1 ; 0x024: 0x0001
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:           XIFENFEI_0001 ; 0x028: length=13
kfdhdb.grpname:                XIFENFEI ; 0x048: length=8
kfdhdb.fgname:            XIFENFEI_0001 ; 0x068: length=13
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             32967790 ; 0x0a8: HOUR=0xe DAYS=0x3 MNTH=0x3 YEAR=0x7dc
kfdhdb.crestmp.lo:           2015933440 ; 0x0ac: USEC=0x0 MSEC=0x22d SECS=0x2 MINS=0x1e
kfdhdb.mntstmp.hi:             32969260 ; 0x0b0: HOUR=0xc DAYS=0x11 MNTH=0x4 YEAR=0x7dc
kfdhdb.mntstmp.lo:           3109835776 ; 0x0b4: USEC=0x0 MSEC=0x315 SECS=0x15 MINS=0x2e
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    3788 ; 0x0c4: 0x00000ecc
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      0 ; 0x0d4: 0x00000000
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             32967790 ; 0x0e4: HOUR=0xe DAYS=0x3 MNTH=0x3 YEAR=0x7dc
kfdhdb.grpstmp.lo:           2015746048 ; 0x0e8: USEC=0x0 MSEC=0x176 SECS=0x2 MINS=0x1e
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000
  ... (kfdhdb.ub4spare[1] through kfdhdb.ub4spare[53] are all 0)
kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000
Deriving the VOL3 header text from the VOL4 kfed dump
[grid@rac1 ~]$ cat vol3.txt
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
*kfbh.block.obj:             2147483648 ; 0x008: disk=0
*kfbh.check:                  332940500 ; 0x00c: 0x13d844d4
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
*kfdhdb.driver.provstr:    ORCLDISKVOL3 ; 0x000: length=12
*kfdhdb.driver.reserved[0]:   860639062 ; 0x008: 0x334c4f56
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
*kfdhdb.dsknum:                       0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
*kfdhdb.dskname:          XIFENFEI_0000 ; 0x028: length=13
kfdhdb.grpname:                XIFENFEI ; 0x048: length=8
*kfdhdb.fgname:           XIFENFEI_0000 ; 0x068: length=13
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             32967790 ; 0x0a8: HOUR=0xe DAYS=0x3 MNTH=0x3 YEAR=0x7dc
kfdhdb.crestmp.lo:           2015933440 ; 0x0ac: USEC=0x0 MSEC=0x22d SECS=0x2 MINS=0x1e
kfdhdb.mntstmp.hi:             32969260 ; 0x0b0: HOUR=0xc DAYS=0x11 MNTH=0x4 YEAR=0x7dc
kfdhdb.mntstmp.lo:           3109835776 ; 0x0b4: USEC=0x0 MSEC=0x315 SECS=0x15 MINS=0x2e
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
*kfdhdb.dsksize:                   2353 ; 0x0c4: 0x00000931
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
*kfdhdb.f1b1locn:                     2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             32967790 ; 0x0e4: HOUR=0xe DAYS=0x3 MNTH=0x3 YEAR=0x7dc
kfdhdb.grpstmp.lo:           2015746048 ; 0x0e8: USEC=0x0 MSEC=0x176 SECS=0x2 MINS=0x1e
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000
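The lines marked with * are the fields edited relative to VOL4: disk number (kfbh.block.obj and kfdhdb.dsknum), provision string, driver.reserved[0], disk name, failgroup name, disk size, f1b1locn, and the recalculated block checksum kfbh.check. A sketch of how such an editable dump can be produced (plain stdout redirection; kfed merge reads this same text format back):

[grid@rac1 ~]$ kfed read /dev/oracleasm/disks/VOL4 > vol3.txt   # dump the healthy header as text
[grid@rac1 ~]$ vi vol3.txt                                      # change every *-marked field to its VOL3 value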
Follow-up
--import the edited header with kfed
[grid@rac1 ~]$ kfed merge /dev/oracleasm/disks/VOL3 text=vol3.txt
--mount the diskgroup
SQL> alter diskgroup xifenfei mount;
Diskgroup altered.
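A quick verification sketch after the merge (the same commands used earlier in this post; the expected values are the ones written into vol3.txt):

[grid@rac1 ~]$ kfed read /dev/oracleasm/disks/VOL3 | grep -E "kfbh.type|dskname"
SQL> select group_number,DISK_NUMBER,PATH,HEADER_STATUS from v$asm_disk;
-- VOL3 should again report KFBTYP_DISKHEAD / XIFENFEI_0000 and HEADER_STATUS = MEMBER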
If you run into this kind of situation and cannot resolve it, contact us for professional ORACLE database recovery support.
Phone: 17813235971   QQ: 107644445   E-Mail: dba@xifenfei.com