Tag archive: ORA-600 3020
ORA-600 3020
Official description of ORA-600 [3020]
ERROR:
  Format: ORA-600 [3020] [a] [b] [c] [d] [e]

VERSIONS:
  version 6.0 and above

DESCRIPTION:
  This is called a 'STUCK RECOVERY'. There is an inconsistency between the
  information stored in the redo and the information stored in a database
  block being recovered.

ARGUMENTS:
  For Oracle 9.2 and earlier:
    Arg [a] Block DBA
    Arg [b] Redo Thread
    Arg [c] Redo RBA Seq
    Arg [d] Redo RBA Block No
    Arg [e] Redo RBA Offset
  For Oracle 10.1:
    Arg [a] Absolute file number of the datafile
    Arg [b] Block number
    Arg [c] Block DBA

FUNCTIONALITY:
  kernel cache recovery parallel

IMPACT:
  INSTANCE FAILURE during recovery.

SUGGESTIONS:
  There have been cases of receiving this error when RECOVER has been issued,
  but either some datafiles were not restored to disk, or the restoration has
  not finished. Therefore, ensure that the entire backup has been restored and
  that the restore has finished PRIOR to issuing a RECOVER DATABASE command.
  If problems continue, consider restoring from a backup and doing a
  point-in-time recovery to a time PRIOR to the one implied by the
  ORA-600 [3020] error. Example:
    SQL> recover database until time 'YYYY-MON-DD:HH:MI:SS';
  This error can also be caused by a lost update. During normal operations,
  block updates/writes are being performed to a number of files including
  database datafiles, redo log files, archived redo log files etc. This error
  can be reported if any of these updates are lost for some reason. Therefore,
  thoroughly check your operating system and disk hardware. In the case of a
  lost update, restore an old copy of the datafile and attempt to recover and
  roll forward again.
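As the SUGGESTIONS above stress, the most common benign cause is issuing RECOVER before the restore has actually finished. A minimal SQL*Plus sketch for checking this from a mounted instance, using only standard v$ views (nothing below is specific to any particular database):

SQL> select file#, online_status, error, change# from v$recover_file;
SQL> select file#, status, fuzzy, checkpoint_change#, checkpoint_time from v$datafile_header order by file#;

Files whose headers still show FUZZY = YES or much older checkpoint SCNs still need media recovery; large discrepancies between files are a hint that the restore may not yet be complete or consistent.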
Related bug information
NB | Prob | Bug | Fixed | Description
--- | --- | --- | --- | ---
 | III |  | 11.2.0.4.6, 11.2.0.4.BP15, 12.1.0.2, 12.2.0.0 | RMAN bad backup – causes recovery to fail with ORA-600 [3020]
 | III |  | 11.2.0.3.BP22, 11.2.0.4, 12.1.0.1.2, 12.1.0.2 | Exadata cell optimized incremental backup can skip some blocks to backup
 | I |  | 12.2.0.0 | ORA-753 ORA-756 or ORA-600 [3020] with KCOX_FUTURE after RMAN Restore / PITR with BCT after Open Resetlogs in 12c
 | II |  | 12.1.0.2.160719, 12.2.0.0 | ORA-600 [3020] KCOX_FUTURE by RECOVERY for KTU UNDO BLOCK SEQ:254 sometime after RMAN Restore of UNDO datafile in Source Database
 | III |  | 11.2.0.4.BP20, 12.1.0.2.160119, 12.1.0.2.DBBP09, 12.2.0.0 | ORA-600 [3020] / ORA-752 Wrong Results after Parallel Direct Load or RMAN ORA-600 [krcrfr_nohist] in RAC (caused by fix for bug 9962369)
 | II |  | 12.2.0.0 | ORA-1172 or ORA-600 [3020] Stuck recovery in RAC after attempted block rebuild
 | III |  | 11.2.0.4.BP13, 12.1.0.2, 12.2.0.0 | ORA-600 [3020] on ASSM blocks in Standby Database after CONVERT TO PHYSICAL or ORA-8103 ORA-600 [4552] in non-standby after FLASHBACK
 | III |  | 12.1.0.2, 12.2.0.0 | ORA-600 [3020] ORA-10567 and ORA-10560: block type '0' / ORA-600 [kdBlkCheckError] ORA-600 [ktfbbset-2] after flashback which reversed a datafile extend – superseded
 | I |  | 11.2.0.4, 12.1.0.2, 12.2.0.0 | ORA-600 [3020] or ORA-752 if db_lost_write_protect is enabled. Bystander standby recovers wrong redo log after switchover or failover.
+ | III |  | 11.2.0.3.9, 11.2.0.3.BP22, 11.2.0.4.2, 11.2.0.4.BP03, 12.1.0.1.3, 12.1.0.2 | ORA-600 [kclchkblkdma_3] ORA-600 [3020] or ORA-600 [kcbchg1_16] Join of temp and permanent table in RAC might lead to corruption – superseded
 | I |  | 11.2.0.3.9, 11.2.0.3.BP22, 11.2.0.4, 12.1.0.2 | ORA-600 [3020] after flashback database in a RAC
 | III |  | 11.2.0.3.BP24, 11.2.0.4, 12.1.0.2 | ORA-600 [kcl_sanity_check_cr_1] ORA-600 [kclchkblkdma_3] in RAC or ORA-752 ORA-600 [3020] during recovery
 | II |  |  | ORA-752 or ORA-600 [3020] on recovery of Block Cleanout Operation OP:4.6
 | I |  |  | Session hang after applying the patch for Bug 9587912 which causes ORA-600 [3020]
+ | III |  | 11.2.0.2.9, 11.2.0.2.BP15, 11.2.0.3.3, 11.2.0.3.BP04, 11.2.0.4, 12.1.0.1 | Join of temp and permanent tables in RAC might cause corruption of permanent table. Regression by bug 10352368
E | - |  | 11.2.0.2.BP11, 11.2.0.3.BP01, 11.2.0.4, 12.1.0.1 | ORA-600 [3020] / ORA-333 Recovery of datafile or async transport do not read mirror if there is a stale block
 | II |  | 11.2.0.3.8, 11.2.0.3.BP18, 11.2.0.4, 12.1.0.1 | Direct NFS appears to be sending zero length windows to storage device. It may also cause Lost Writes
 | I |  | 11.2.0.3, 12.1.0.1 | ORA-8103/ORA-600 [3020] on RMAN recovered locally managed tablespace
P | I |  | 12.1.0.1 | EXADATA LSI firmware for lost writes
 | III |  | 11.2.0.2.5, 11.2.0.2.BP13, 11.2.0.2.GIPSU05, 11.2.0.3, 12.1.0.1 | ORA-600 [3020] during recovery after datafile RESIZE (to smaller size)
+ | II |  | 11.2.0.3, 12.1.0.1 | Stale data blocks may be returned by Exadata FlashCache
 | - |  | 11.2.0.1.BP10, 11.2.0.2.2, 11.2.0.2.BP03, 11.2.0.2.GIBUNDLE02, 11.2.0.2.GIPSU02, 11.2.0.3, 12.1.0.1 | Lost write in ASM with multiple DBWs and a disk is offlined and then onlined
 | II |  | 11.2.0.2.2, 11.2.0.2.BP02, 11.2.0.3, 12.1.0.1 | ORA-600 [3020] during recovery / on standby
+ | II |  | 11.1.0.7.7, 11.2.0.1.BP08, 11.2.0.2.1, 11.2.0.2.BP02, 11.2.0.2.GIBUNDLE01, 11.2.0.3, 12.1.0.1 | ORA-1578 / ORA-600 [3020] Corruption. Misplaced Blocks and Lost Write in ASM
* | III |  | 11.2.0.1.6, 11.2.0.1.BP09, 11.2.0.2.2, 11.2.0.2.BP04, 11.2.0.3, 12.1.0.1 | ORA-600 / corruption possible during shutdown in RAC
 | II |  | 11.2.0.2.4, 11.2.0.2.BP09, 11.2.0.3, 12.1.0.1 | Block change tracking on physical standby can cause data loss
 | - |  | 11.2.0.2.BP02, 11.2.0.3, 12.1.0.1 | Lost write / ORA-600 [kclchkblk_3] / ORA-600 [3020] in RAC – superceded
 | - |  | 11.2.0.2, 12.1.0.1 | ORA-600 [3020] in datafile that went offline/online in a RAC instance
 | - |  | 11.2.0.1.2, 11.2.0.1.BP06, 11.2.0.2, 12.1.0.1 | OERI[3020] reinstating primary
+ | III |  | 11.2.0.2, 12.1.0.1 | ORA-600 [kcbzib_5] on multi block read in RAC. Invalid lock in RAC. ORA-600 [3020] in Recovery
P | II |  | 10.2.0.5, 11.2.0.2, 12.1.0.1 | Solaris: directio may be disabled for RAC file access. Corruption / Lost Write
+ | II |  | 11.2.0.1.BP06, 11.2.0.2, 12.1.0.1 | Lost Write in ASM when normal redundancy is used
+E | II |  | 11.2.0.3.9, 11.2.0.3.BP22, 11.2.0.4.2, 11.2.0.4.BP03 | ORA-600 [kclchkblkdma_3] ORA-600 [3020] RAC diagnostic/fix to avoid a block being modified in Shared Mode and prevent corruption – Superseded
 | III |  | 10.2.0.5, 11.2.0.2 | ORA-600 [3020] for block type 0x3a (58) during recovery for block restored by RMAN backup
 | I |  | 11.2.0.4 | ORA-600 [kjbmpocr:alh] ORA-600 [kclchkblkdma_3] by LMS in RAC which may lead to corruption
 | - |  | 11.2.0.1 | ORA-600 [3020] on standby involving "BRR" redo when db_lost_write_protect is enabled
 | - |  | 10.2.0.4.1, 10.2.0.5, 11.1.0.7.1, 11.2.0.1 | Physical standby media recovery gets OERI[krr_media_12]
+ | II |  | 10.2.0.5, 11.1.0.7.1, 11.2.0.1 | ORA-600 [kclexpandlock_2] in LMS / instance crash. Incorrect locks in RAC. ORA-600 [3020] in recovery
 | II |  | 10.2.0.3, 11.1.0.6 | IMU transactions can produce out-of-order redo (OERI [3020] on recovery)
 | - |  | 9.2.0.8, 10.2.0.2, 11.1.0.6 | Write IO error can cause incorrect file header checkpoint information
 | - |  | 10.2.0.2, 11.1.0.6 | OERI:3020 / corruption errors from multiple FLASHBACK DATABASE
 | III |  | 10.2.0.4.1, 10.2.0.5 | Standby Recovery session cancelled due to ORA-600 [3020] "CHANGE IN FUTURE OF BLOCK"
 | II |  | 10.2.0.5 | MRP terminated by ORA-600[krr_media_12] / OERI:3020 after flashback
 | - |  | 9.2.0.7, 10.1.0.4, 10.2.0.1 | ALTER DATABASE RECOVER MANAGED STANDBY fails with OERI[3020]
 | I |  | 10.2.0.1 | OERI[3020] stuck recovery under RAC
 | - |  | 9.2.0.5, 10.1.0.3, 10.2.0.1 | ALTER SYSTEM KILL SESSION of recovery slave causes stuck recovery
* | - |  | 10.2.0.1 | Backups from RAC DB before Data Guard Failover cannot be used
 | - |  | 9.2.0.6, 10.1.0.4 | OERI[3020] / ORA-10567 from RAC with standby in max performance mode
 | - |  | 9.2.0.8, 10.1.0.2 | Incorrect checkpoint possible in datafile headers
 | - |  | 9.2.0.6, 10.1.0.4 | Stuck recovery (OERI:3020) / ORA-1172 on startup after a crash
 | II |  | 9.2.0.1 | OERI:3020 possible on recovery of LOB DATA
P+ | - |  | 7.3.3.4, 7.3.4.0, 8.0.3.0 | AlphaNT only: Corrupt Redo (zeroed byte) OERI:3020
Recovering a database that would not start after a forced shutdown
A customer contacted me on QQ, saying a friend had recommended me and asking for help recovering their database: after a forced shutdown, the database would no longer start normally.
RECOVER DATABASE fails
Mon Mar 28 10:20:33 2016
ALTER DATABASE RECOVER database
Media Recovery Start
started logmerger process
Parallel Media Recovery started with 32 slaves
Mon Mar 28 10:20:36 2016
Recovery of Online Redo Log: Thread 1 Group 2 Seq 18686 Reading mem 0
  Mem# 0: E:\ORACLE_DATA\YCCY\REDO02.LOG
Recovery of Online Redo Log: Thread 1 Group 3 Seq 18687 Reading mem 0
  Mem# 0: E:\ORACLE_DATA\YCCY\REDO03.LOG
Recovery of Online Redo Log: Thread 1 Group 1 Seq 18688 Reading mem 0
  Mem# 0: E:\ORACLE_DATA\YCCY\REDO01.LOG
Mon Mar 28 10:20:38 2016
Hex dump of (file 45, block 7431) in trace file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0q_2968.trc
Corrupt block relative dba: 0x0b401d07 (file 45, block 7431)
Mon Mar 28 10:20:38 2016
Hex dump of (file 45, block 7836) in trace file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr01_2220.trc
Bad header found during media recovery
Corrupt block relative dba: 0x0b401e9c (file 45, block 7836)
Data in bad block:
Bad header found during media recovery
 type: 0 format: 0 rdba: 0x1d070000
 last change scn: 0x4917.f8dc0b40 seq: 0x0 flg: 0x00
 spare1: 0x6 spare2: 0xa2 spare3: 0xc7f7
 consistency value in tail: 0x06010000
 check value in block header: 0x601
 block checksum disabled
Reading datafile 'E:\ORACLE_DATA\YCCY\DT_SYS_IDX12.DBF' for corruption at rdba: 0x0b401d07 (file 45, block 7431)
Reread (file 45, block 7431) found valid data
Repaired corruption at (file 45, block 7431)
Hex dump of (file 45, block 7556) in trace file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0q_2968.trc
Corrupt block relative dba: 0x0b401d84 (file 45, block 7556)
Bad header found during media recovery
Data in bad block:
 type: 106 format: 3 rdba: 0x1d840000
 last change scn: 0x461d.391a0b40 seq: 0x0 flg: 0x00
 spare1: 0x6 spare2: 0xa2 spare3: 0x2499
 consistency value in tail: 0x06013999
 check value in block header: 0x401
 block checksum disabled
Reading datafile 'E:\ORACLE_DATA\YCCY\DT_SYS_IDX12.DBF' for corruption at rdba: 0x0b401d84 (file 45, block 7556)
Reread (file 45, block 7556) found valid data
Repaired corruption at (file 45, block 7556)
Mon Mar 28 10:20:38 2016
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x1334748, kcbzfw()+3094]
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0k_3900.trc (incident=131189):
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131189\yccy_pr0k_3900_i131189.trc
ERROR: Unable to normalize symbol name for the following short stack (at offset 199):
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0r_3060.trc (incident=131245):
ORA-07445: exception encountered: core dump [kcbzfw()+3094] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x1334748] [UNABLE_TO_READ] []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 169345, file offset is 1387274240 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: data file 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131245\yccy_pr0r_3060_i131245.trc
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C, kcbzdh()+942]
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0d_2112.trc (incident=131133):
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131133\yccy_pr0d_2112_i131133.trc
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0e_3260.trc (incident=131141):
ORA-00600: internal error code, arguments: [3020], [5], [163457], [21134977], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 163457, file offset is 1339039744 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: data file 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131141\yccy_pr0e_3260_i131141.trc
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr04_3980.trc (incident=131021):
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131021\yccy_pr04_3980_i131021.trc
Data in bad block:
 type: 0 format: 0 rdba: 0x1e9c0000
 last change scn: 0x4915.f8320b40 seq: 0x0 flg: 0x00
 spare1: 0x6 spare2: 0xa2 spare3: 0x8029
 consistency value in tail: 0x0602e40c
 check value in block header: 0x602
 block checksum disabled
Reading datafile 'E:\ORACLE_DATA\YCCY\DT_SYS_IDX12.DBF' for corruption at rdba: 0x0b401e9c (file 45, block 7836)
Reread (file 45, block 7836) found valid data
Repaired corruption at (file 45, block 7836)
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0f_816.trc (incident=131149):
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131149\yccy_pr0f_816_i131149.trc
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C, kcbzdh()+942]
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0i_2132.trc (incident=131173):
ORA-00600: internal error code, arguments: [3020], [5], [154240], [21125760], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 154240, file offset is 1263534080 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: data file 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131173\yccy_pr0i_2132_i131173.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0k_3900.trc (incident=131190):
ORA-07445: exception encountered: core dump [kcbzdh()+942] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131190\yccy_pr0k_3900_i131190.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr01_2220.trc (incident=131037):
ORA-00600: internal error code, arguments: [kcbrapply_14], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131037\yccy_pr01_2220_i131037.trc
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C, kcbzdh()+942]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0f_816.trc (incident=131150):
ORA-07445: exception encountered: core dump [kcbzdh()+942] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131150\yccy_pr0f_816_i131150.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr01_2220.trc (incident=131038):
ORA-07445: exception encountered: core dump [kcbzdh()+942] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbrapply_14], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131038\yccy_pr01_2220_i131038.trc
Mon Mar 28 10:20:39 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0h_4036.trc (incident=131165):
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131165\yccy_pr0h_4036_i131165.trc
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C, kcbzdh()+942]
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B, kcbzpnd()+299]
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x1351BB9, kcbs_dump_adv_state()+1529]
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B, kcbzpnd()+299]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0h_4036.trc (incident=131166):
ORA-07445: exception encountered: core dump [kcbzdh()+942] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC62C] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbr_validate_read_1], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131166\yccy_pr0h_4036_i131166.trc
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B, kcbzpnd()+299]
Mon Mar 28 10:20:40 2016
Checker run found 60 new persistent data failures
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0d_2112.trc (incident=131134):
ORA-07445: exception encountered: core dump [kcbzpnd()+299] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131134\yccy_pr0d_2112_i131134.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr04_3980.trc (incident=131022):
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+1529] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x1351BB9] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [kcbrapply_12], [], [], [], [], [], [], [], [], [], [], []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131022\yccy_pr04_3980_i131022.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0e_3260.trc (incident=131142):
ORA-07445: exception encountered: core dump [kcbzpnd()+299] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [3020], [5], [163457], [21134977], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 163457, file offset is 1339039744 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: data file 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131142\yccy_pr0e_3260_i131142.trc
Mon Mar 28 10:20:41 2016
Trace dumping is performing id=[cdmp_20160328102041]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr0i_2132.trc (incident=131174):
ORA-07445: exception encountered: core dump [kcbzpnd()+299] [ACCESS_VIOLATION] [ADDR:0xFFFFFFFFFFFFFFFF] [PC:0x12EC13B] [UNABLE_TO_READ] []
ORA-00600: internal error code, arguments: [3020], [5], [154240], [21125760], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 154240, file offset is 1263534080 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: data file 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131174\yccy_pr0i_2132_i131174.trc
Mon Mar 28 10:20:41 2016
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0, 0000000074CAE3F0]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pr06_2684.trc (incident=131077):
ORA-07445: exception encountered: core dump [PC:0x74CAE3F0] [ACCESS_VIOLATION] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0] [UNABLE_TO_READ] []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131077\yccy_pr06_2684_i131077.trc
Mon Mar 28 10:20:42 2016
Exception [type: ACCESS_VIOLATION, UNABLE_TO_WRITE] [ADDR:0x0] [PC:0x4D20D2, kslgetl()+54]
Mon Mar 28 10:20:42 2016
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_pmon_3856.trc (incident=130853):
ORA-07445: exception encountered: core dump [kslgetl()+54] [ACCESS_VIOLATION] [ADDR:0x0] [PC:0x4D20D2] [UNABLE_TO_WRITE] []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_130853\yccy_pmon_3856_i130853.trc
Trace dumping is performing id=[cdmp_20160328102042]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_131077\yccy_pr06_2684_i131077.trc:
ORA-00607: Internal error occurred while making a change to a data block
ORA-00602: internal programming exception
ORA-07445: exception encountered: core dump [PC:0x74CAE3F0] [ACCESS_VIOLATION] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0] [UNABLE_TO_READ] []
Process debug not enabled via parameter _debug_enable
Trace dumping is performing id=[cdmp_20160328102043]
Mon Mar 28 10:21:01 2016
RECO (ospid: 3524): terminating the instance due to error 472
Instance terminated by RECO, pid = 3524
From this log we can see that although file 45 reported corrupt blocks, the rereads ultimately confirmed they were valid (e.g. "Reread (file 45, block 7836) found valid data"); the real problem is file 5, which raised a large number of ORA-600 [3020] errors.
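The second and third arguments of ORA-600 [3020] are the absolute file number and block number (here file 5, blocks 163457 and 154240), so once the data dictionary is readable the affected segment can be identified. A hedged sketch; dba_extents needs an open database, so in a case like this it can only be run once the database has finally been opened:

-- map an ORA-600 [3020] file#/block# pair back to its owning segment
select owner, segment_name, segment_type
  from dba_extents
 where file_id = 5
   and 163457 between block_id and block_id + blocks - 1;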
Recovering the datafiles one by one
SQL> startup mount;
ORACLE 例程已经启动。

Total System Global Area 1.7103E+10 bytes
Fixed Size 2192864 bytes
Variable Size 9059699232 bytes
Database Buffers 8019509248 bytes
Redo Buffers 21762048 bytes
数据库装载完毕。
SQL> recover datafile 1;
完成介质恢复。
SQL> recover datafile 2;
ORA-03113: 通信通道的文件结尾
进程 ID: 1652
会话 ID: 551 序列号: 55
SQL> recover datafile 3;
完成介质恢复。
SQL> recover datafile 4;
完成介质恢复。
SQL> recover datafile 5;
ORA-03113: 通信通道的文件结尾
进程 ID: 4900
会话 ID: 551 序列号: 56131
SQL> recover datafile 6;
完成介质恢复。
…………
SQL> recover datafile 63;
完成介质恢复。
SQL> recover datafile 64;
完成介质恢复。
Apart from datafiles 2 and 5, every other file recovered successfully. (Generating the per-file RECOVER commands can also be scripted, as sketched below.)
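Rather than typing each RECOVER by hand, the per-file pass can be generated from v$datafile; a small sketch (the spool path d:\recover_all.sql is only an example):

SQL> set pagesize 0 feedback off heading off
SQL> spool d:\recover_all.sql
SQL> select 'recover datafile '||file#||';' from v$datafile order by file#;
SQL> spool off
SQL> @d:\recover_all.sql

Reviewing the spooled script before running it makes it easy to leave out the files already known to fail (here 2 and 5) and handle them separately.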
Attempting to deal with file 2
It could not be recovered, so it was set aside for the time being; the plan was to offline the file, open the database, and then force the file back online later.
SQL> recover datafile 2 ;
ORA-03113: 通信通道的文件结尾
进程 ID: 5020
会话 ID: 551 序列号: 3

Mon Mar 28 10:47:12 2016
ALTER DATABASE RECOVER datafile 2
Media Recovery Start
Serial Media Recovery started
Recovery of Online Redo Log: Thread 1 Group 1 Seq 18688 Reading mem 0
  Mem# 0: E:\ORACLE_DATA\YCCY\REDO01.LOG
Exception [type: ACCESS_VIOLATION, UNABLE_TO_READ] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0, 0000000074CAE3F0]
Errors in file d:\oracle\diag\rdbms\yccy\yccy\trace\yccy_ora_3508.trc (incident=143022):
ORA-07445: 出现异常错误: 核心转储 [PC:0x74CAE3F0] [ACCESS_VIOLATION] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0] [UNABLE_TO_READ] []
Incident details in: d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_143022\yccy_ora_3508_i143022.trc
Errors in file d:\oracle\diag\rdbms\yccy\yccy\incident\incdir_143022\yccy_ora_3508_i143022.trc:
ORA-00607: 当更改数据块时出现内部错误
ORA-00602: 内部编程异常错误
ORA-07445: 出现异常错误: 核心转储 [PC:0x74CAE3F0] [ACCESS_VIOLATION] [ADDR:0x2E7FFFFFE] [PC:0x74CAE3F0] [UNABLE_TO_READ] []
Dealing with file 5
SQL> recover datafile 5;
ORA-00283: 恢复会话因错误而取消
ORA-00600: 内部错误代码, 参数: [3020], [5], [163457], [21134977], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 163457, file offset is 1339039744 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: 数据文件 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'

SQL> recover datafile 5 allow 1 corruption;
ORA-00283: 恢复会话因错误而取消
ORA-00600: 内部错误代码, 参数: [3020], [5], [162433], [21133953], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 162433, file offset is 1330651136 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: 数据文件 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'

SQL> recover datafile 5 allow 1 corruption;
ORA-00283: 恢复会话因错误而取消
ORA-00600: 内部错误代码, 参数: [3020], [5], [166272], [21137792], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 166272, file offset is 1362100224 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: 数据文件 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'

SQL> recover datafile 5 allow 1 corruption;
ORA-00283: 恢复会话因错误而取消
ORA-00600: 内部错误代码, 参数: [3020], [5], [169346], [21140866], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 5, block# 169346, file offset is 1387282432 bytes)
ORA-10564: tablespace DT_SYS_DAT
ORA-01110: 数据文件 5: 'E:\ORACLE_DATA\YCCY\DT_SYS_DAT.ORA'
ORA-10560: block type 'FIRST LEVEL BITMAP BLOCK'

SQL> recover datafile 5 allow 1 corruption;
完成介质恢复。
Opening the database and bringing datafile 2 online
SQL> startup pfile='d:/pfile.txt' mount;
ORACLE 例程已经启动。

Total System Global Area 1.7103E+10 bytes
Fixed Size 2192864 bytes
Variable Size 9059699232 bytes
Database Buffers 8019509248 bytes
Redo Buffers 21762048 bytes
数据库装载完毕。
SQL> alter database datafile 2 offline;
数据库已更改。
SQL> alter database open;
数据库已更改。
SQL> shutdown immediate;
ORA-03113: 通信通道的文件结尾
SQL> conn / as sysdba
已连接到空闲例程。
SQL> startup pfile='d:/pfile.txt' mount;
ORACLE 例程已经启动。

Total System Global Area 1.7103E+10 bytes
Fixed Size 2192864 bytes
Variable Size 9059699232 bytes
Database Buffers 8019509248 bytes
Redo Buffers 21762048 bytes
数据库装载完毕。
SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         3 INACTIVE
         2 CURRENT

SQL> recover database until cancel;
ORA-00279: 更改 1226478477 (在 03/28/2016 20:23:37 生成) 对于线程 1 是必需的
ORA-00289: 建议: D:\ORACLE\FLASH_RECOVERY_AREA\YCCY\ARCHIVELOG\2016_03_28\O1_MF_1_18689_%U_.ARC
ORA-00280: 更改 1226478477 (用于线程 1) 在序列 #18689 中

指定日志: {<RET>=suggested | filename | AUTO | CANCEL}
E:\ORACLE_DATA\YCCY\REDO02.LOG
已应用的日志。
完成介质恢复。
SQL> alter database datafile 2 online;
数据库已更改。
SQL> alter database open resetlogs;
数据库已更改。
The database is now essentially open and healthy; once the blocks behind the ORA-600 [3020] errors are dealt with, everything is basically fine. Verifying what is still marked corrupt is worthwhile, as sketched below.
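Since the open relied on ALLOW ... CORRUPTION, it is worth checking what is still flagged as corrupt before handing the database back; a hedged post-open check using standard views and the usual RMAN VALIDATE (nothing case-specific):

SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;
RMAN> validate datafile 5;

VALIDATE repopulates v$database_block_corruption for the file that needed ALLOW CORRUPTION; any remaining entries point at objects that still need to be rebuilt or recreated.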
ORA-10562 recovery: allow 1 corruption
A friend's database hung instantly after a storage change and then crashed outright; it could not start normally afterwards, and he asked for technical support.
The database reports an ORA-600 [2131] error
The database could not be mounted; this was resolved by recreating the controlfile (a placeholder skeleton of that step follows the alert log below).
Mon Nov 30 20:35:38 2015
alter database mount
Mon Nov 30 20:35:38 2015
NOTE: Loaded library: System
Mon Nov 30 20:35:38 2015
SUCCESS: diskgroup DATADG was mounted
Mon Nov 30 20:35:38 2015
NOTE: dependency between database xifenfei and diskgroup resource ora.DATADG.dg is established
Errors in file /u01/app/oracle/diag/rdbms/xifenfei/xifenfei1/trace/xifenfei1_ora_26450.trc (incident=3032256):
ORA-00600: internal error code, arguments: [2131], [33], [32], [], [], [], [], [], [], [], [], []
ORA-600 signalled during: alter database mount...
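The mount was obtained by recreating the controlfile. A heavily hedged skeleton of that step: every limit, size, character set and file name below is a placeholder (the sysaux and redo paths are taken from the logs in this post, the rest is illustrative), and the authoritative LOGFILE/DATAFILE lists should come from an earlier 'alter database backup controlfile to trace' script or the ASM directories. For this RAC database, thread 2 redo would be added back and re-enabled after open.

-- placeholder skeleton only: list every datafile and use the real sizes/character set
startup nomount
CREATE CONTROLFILE REUSE DATABASE "XIFENFEI" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 192
    MAXDATAFILES 1024
    MAXINSTANCES 32
LOGFILE
  GROUP 4 '+DATADG/xifenfei/redo04.log' SIZE 512M,
  GROUP 5 '+DATADG/xifenfei/redo05.log' SIZE 512M,
  GROUP 6 '+DATADG/xifenfei/redo06.log' SIZE 512M
DATAFILE
  '+DATADG/xifenfei/datafile/system.xxx.xxxxxxxxx',
  '+DATADG/xifenfei/datafile/sysaux.265.861925867'
CHARACTER SET ZHS16GBK;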
Attempting to recover the database
Mon Nov 30 20:45:53 2015
ALTER DATABASE RECOVER database
Media Recovery Start
started logmerger process
Parallel Media Recovery started with 80 slaves
Mon Nov 30 20:45:56 2015
Recovery of Online Redo Log: Thread 2 Group 11 Seq 617 Reading mem 0
  Mem# 0: +DATADG/xifenfei/redo011.log
Recovery of Online Redo Log: Thread 1 Group 4 Seq 5410 Reading mem 0
  Mem# 0: +DATADG/xifenfei/redo04.log
Recovery of Online Redo Log: Thread 1 Group 5 Seq 5411 Reading mem 0
  Mem# 0: +DATADG/xifenfei/redo05.log
Mon Nov 30 20:46:07 2015
Recovery of Online Redo Log: Thread 1 Group 6 Seq 5412 Reading mem 0
  Mem# 0: +DATADG/xifenfei/redo06.log
Mon Nov 30 20:46:13 2015
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0xC] [PC:0x95FB502, kdxlin()+4088] [flags: 0x0, count: 1]
Errors in file /u01/app/oracle/diag/rdbms/xifenfei/xifenfei1/trace/xifenfei1_pr13_30480.trc (incident=3032568):
ORA-07445: 出现异常错误: 核心转储 [kdxlin()+4088] [SIGSEGV] [ADDR:0xC] [PC:0x95FB502] [Address not mapped to object] []
Mon Nov 30 20:46:17 2015
Sweep [inc][3032568]: completed
Sweep [inc2][3032568]: completed
Mon Nov 30 20:46:31 2015
Slave exiting with ORA-10562 exception
Errors in file /u01/app/oracle/diag/rdbms/xifenfei/xifenfei1/trace/xifenfei1_pr13_30480.trc:
ORA-10562: Error occurred while applying redo to data block (file# 2, block# 165054)
ORA-10564: tablespace SYSAUX
ORA-01110: 数据文件 2: '+DATADG/xifenfei/datafile/sysaux.265.861925867'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 271
ORA-00607: 当更改数据块时出现内部错误
ORA-00602: 内部编程异常错误
ORA-07445: 出现异常错误: 核心转储 [kdxlin()+4088] [SIGSEGV] [ADDR:0xC] [PC:0x95FB502] [Address not mapped to object] []
Mon Nov 30 20:46:31 2015
Recovery Slave PR13 previously exited with exception 10562
Mon Nov 30 20:46:33 2015
Checker run found 28 new persistent data failures
Mon Nov 30 20:46:35 2015
Media Recovery failed with error 448
Errors in file /u01/app/oracle/diag/rdbms/xifenfei/xifenfei1/trace/xifenfei1_pr00_30400.trc:
ORA-00283: 恢复会话因错误而取消
ORA-00448: 后台进程正常结束
ORA-10562 signalled during: ALTER DATABASE RECOVER database ...
From this we can see that during recovery the redo could not, for some reason, be applied to file 2, block 165054, which caused RECOVER DATABASE to fail.
Recovering datafile by datafile
SQL> recover datafile 1;
Media recovery complete.
SQL> recover datafile 3,4,5,6,7;
Media recovery complete.
SQL> recover datafile 8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28;
Media recovery complete.
SQL> recover datafile 2;
ORA-00283: recovery session canceled due to errors
ORA-10562: Error occurred while applying redo to data block (file# 2, block# 165054)
ORA-10564: tablespace SYSAUX
ORA-01110: data file 2: '+DATADG/xifenfei/datafile/sysaux.265.861925867'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 271
ORA-00607: Internal error occurred while making a change to a data block
ORA-00602: internal programming exception
ORA-07445: exception encountered: core dump [kdxlin()+4088] [SIGSEGV] [ADDR:0xC] [PC:0x95FB502] [Address not mapped to object] []
The error is the same as with RECOVER DATABASE, so the only option is to have recovery skip this block and carry on; experience suggests that data object# 271 is not a core system object and will not prevent the database from opening.
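Before allowing corruption it helps to confirm what actually lives at file 2, block 165054. A hedged sketch; like the dba_objects lookup shown further below, this needs an open (or otherwise queryable) dictionary, which is why at this point the call had to be made from experience:

-- which segment owns the block that redo cannot be applied to?
select owner, segment_name, segment_type
  from dba_extents
 where file_id = 2
   and 165054 between block_id and block_id + blocks - 1;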
Skipping the corrupt block and continuing recovery
SQL> recover datafile 2 allow 1 corruption;
ORA-00283: recovery session canceled due to errors
ORA-00600: internal error code, arguments: [3020], [2], [69793], [8458401], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 2, block# 69793, file offset is 571744256 bytes)
ORA-10564: tablespace SYSAUX
ORA-01110: data file 2: '+DATADG/xifenfei/datafile/sysaux.265.861925867'
ORA-10561: block type 'TRANSACTION MANAGED INDEX BLOCK', data object# 272

SQL> recover datafile 2 allow 1 corruption;
Media recovery complete.
SQL> alter database open;
Database altered.
An ORA-600 [3020] appeared as well; skipping that corrupt block too, the database then opened cleanly. Don't forget to add the tempfile back (a sketch follows).
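Because the controlfile was recreated, tempfiles are not carried over and have to be added back once the database is open. A sketch, assuming the temporary tablespace is called TEMP and reusing the DATADG diskgroup seen in the logs (the size is a placeholder):

-- re-add a tempfile after open; adjust tablespace name, diskgroup and size to suit
alter tablespace temp add tempfile '+DATADG' size 4g autoextend on;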
Dealing with the affected objects
SQL> select object_name,object_type from dba_objects where data_object_id in(272,271);

OBJECT_NAME
--------------------------------------------------------------------------------
OBJECT_TYPE
-------------------
SMON_SCN_TIME_TIM_IDX
INDEX

SMON_SCN_TIME_SCN_IDX
INDEX

SQL> select index_name from dba_indexes where table_name='SMON_SCN_TIME';

INDEX_NAME
------------------------------
SMON_SCN_TIME_TIM_IDX
SMON_SCN_TIME_SCN_IDX

SQL> set pages 1000
SQL> set long 1000
SQL> Select dbms_metadata.get_ddl('TABLE','SMON_SCN_TIME','SYS') FROM DUAL ;

DBMS_METADATA.GET_DDL('TABLE','SMON_SCN_TIME','SYS')
--------------------------------------------------------------------------------
  CREATE TABLE "SYS"."SMON_SCN_TIME"
   (    "THREAD" NUMBER,
        "TIME_MP" NUMBER,
        "TIME_DP" DATE,
        "SCN_WRP" NUMBER,
        "SCN_BAS" NUMBER,
        "NUM_MAPPINGS" NUMBER,
        "TIM_SCN_MAP" RAW(1200),
        "SCN" NUMBER DEFAULT 0,
        "ORIG_THREAD" NUMBER DEFAULT 0  /* for downgrade */
   ) CLUSTER "SYS"."SMON_SCN_TO_TIME_AUX" ("THREAD")

SQL> analyze table smon_scn_time validate structure cascade online;
analyze table smon_scn_time validate structure cascade online
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 2, block # 165054)
ORA-01110: data file 2: '+DATADG/xifenfei/datafile/sysaux.265.861925867'

SQL> truncate CLUSTER "SYS"."SMON_SCN_TO_TIME_AUX";

Cluster truncated.
For more on handling SMON_SCN_TIME, see the earlier post 关于SMON_SCN_TIME若干问题说明 (notes on several SMON_SCN_TIME questions). At this point the recovery was essentially complete, and we were very lucky: it went perfectly, with zero data loss.
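After truncating the cluster, the earlier ANALYZE can be re-run to confirm that nothing still references the corrupt blocks; a brief sketch (SMON repopulates SMON_SCN_TIME on its own over time):

-- re-validate the cluster table and its indexes after the truncate
analyze table sys.smon_scn_time validate structure cascade;
analyze index sys.smon_scn_time_tim_idx validate structure;
analyze index sys.smon_scn_time_scn_idx validate structure;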