Tag archive: ora-600 kccsbck_first
Control file mount id mismatch! troubleshooting
Talking with the customer, we confirmed that an active-active storage failure had left the business running on the primary storage array. After the other array was repaired, active-active synchronization was resumed, and during that process the database hit "Control file mount id mismatch!" and crashed.
2023-05-03T20:21:07.446873+08:00
Archived Log entry 491897 added for T-1.S-246903 ID 0x97d92f0b LAD:1
2023-05-03T20:47:53.902701+08:00
Error: 2141 Control file mount id mismatch!
fhmid: 2592441863, SGA mid: 2624617448
Requesting DIAG on each RAC instance to dump the control file header block
2023-05-03T20:47:55.906490+08:00
Errors in file /opt/rac/oracle/diag/rdbms/xff/xff1/trace/xff1_rms0_20989.trc:
2023-05-03T20:47:56.521500+08:00
RMS0 (ospid: 20989): terminating the instance
2023-05-03T20:47:56.610656+08:00
System state dump requested by (instance=1, osid=20989 (RMS0)), summary=[abnormal instance termination].
System State dumped to trace file /opt/rac/oracle/diag/rdbms/xff/xff1/trace/xff1_diag_20912_20230503204756.trc
2023-05-03T20:47:58.480397+08:00
License high water mark = 395
2023-05-03T20:48:02.600203+08:00
Instance terminated by RMS0, pid = 20989
2023-05-03T20:48:02.601563+08:00
Warning: 2 processes are still attach to shmid 393226:
 (size: 28672 bytes, creator pid: 19941, last attach/detach pid: 20912)
2023-05-03T20:48:03.481726+08:00
USER (ospid: 967): terminating the instance
2023-05-03T20:48:03.483351+08:00
Instance terminated by USER, pid = 967
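The alert log shows the DIAG process being asked to dump the control file header on each RAC instance; the mount id recorded in that header (fhmid) can then be compared with the SGA mid reported in the error. A minimal sketch of dumping the header by hand, assuming an instance that can still read the control file (standard SQL*Plus / oradebug commands; trace file locations are instance specific):

sqlplus / as sysdba
oradebug setmypid
oradebug dump controlf 2      -- level 2 includes the control file header with the mount id
oradebug tracefile_name       -- shows which trace file received the dump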
The node then restarted automatically and reported ORA-600 kccsbck_first
2023-05-03T20:48:34.870435+08:00
NOTE: ASMB mounting group 2 (FRA)
NOTE: ASM background process initiating disk discovery for grp 2 (reqid:0)
NOTE: Assigning number (2,1) to disk (/dev/asm_data0g)
NOTE: Assigning number (2,0) to disk (/dev/asm_data0f)
SUCCESS: mounted group 2 (FRA)
NOTE: grp 2 disk 1: FRA_0001 path:/dev/asm_data0g
NOTE: grp 2 disk 0: FRA_0000 path:/dev/asm_data0f
2023-05-03T20:48:34.919965+08:00
NOTE: dependency between database xff and diskgroup resource ora.FRA.dg is established
2023-05-03T20:48:38.983416+08:00
Errors in file /opt/rac/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_2436.trc (incident=1333249):
ORA-00600: ??????, ??: [kccsbck_first], [1], [2624617448], [], [], [], [], [], [], [], [], []
Incident details in: /opt/rac/oracle/diag/rdbms/xff/xff1/incident/incdir_1333249/xff1_ora_2436_i1333249.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
ORA-600 signalled during: ALTER DATABASE MOUNT /* db agent *//* {0:8:116} */...
Restarting the database again reported ORA-00742 and ORA-00312
2023-05-04T08:18:59.635790+08:00
Aborting crash recovery due to error 742
2023-05-04T08:18:59.635897+08:00
Errors in file /opt/rac/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_80855.trc:
ORA-00742: ??????? 2 ?? 244996 ? 8262 ??????????
ORA-00312: ???? 7 ?? 2: '+FRA/xff/ONLINELOG/group_7.446.1059323695'
ORA-00312: ???? 7 ?? 2: '+DATA/xff/ONLINELOG/group_7.272.1059323695'
Abort recovery for domain 0, flags 4
2023-05-04T08:18:59.647994+08:00
Errors in file /opt/rac/oracle/diag/rdbms/xff/xff1/trace/xff1_ora_80855.trc:
ORA-00742: ??????? 2 ?? 244996 ? 8262 ??????????
ORA-00312: ???? 7 ?? 2: '+FRA/xff/ONLINELOG/group_7.446.1059323695'
ORA-00312: ???? 7 ?? 2: '+DATA/xff/ONLINELOG/group_7.272.1059323695'
ORA-742 signalled during: ALTER DATABASE OPEN /* db agent *//* {2:37368:2} */...
2023-05-04T08:19:00.820708+08:00
License high water mark = 33
2023-05-04T08:19:00.820936+08:00
USER (ospid: 82788): terminating the instance
2023-05-04T08:19:01.827132+08:00
Instance terminated by USER, pid = 82788
Clearly the database was performing instance recovery at startup and found that redo writes had been lost, which prevented it from opening normally. We have handled quite a few failures of this kind, for example the posts below (a rough command sketch follows the list):
ORA-00742 ORA-00312 fault recovery - 1
ORA-00742 ORA-00312 fault recovery - 2
ORA-00742: Log read detects lost write in thread %d sequence %d block %d
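The exact repair depends on how much redo was lost; the posts linked above walk through complete cases. As a rough sketch of the usual direction only, assuming a full backup of all current files has been taken first (these are not the literal commands used in this incident):

SQL> startup mount;
SQL> recover database until cancel;    -- apply whatever redo is still readable, then answer CANCEL
SQL> alter database open resetlogs;
-- if the open still fails because of the lost writes, some cases additionally need hidden
-- parameters such as _allow_resetlogs_corruption; that path risks logical corruption and is
-- normally followed by a full export and rebuild of the database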
ORA-00600 [kccsbck_first] caused by a control file problem
Today a friend asked for urgent help: the database behind their HIS system was inaccessible. Logging in to the database, I gathered the following information:
Operating system: Windows 2003
Database: 8.1.7
DR setup: two-node HA cluster + EMC storage mirroring
Backup: no database backup of any kind
Starting to mount reported an error like this:
ORA-00600: internal error code, arguments: [kccsbck_first], [1], [4141358753], [], [], [], [], []
I had run into this problem once during a recovery last week, also an HA cluster case (see: ORA-00600 [kccsbck_first] when mounting the database in an HA cluster). With that case in mind, I initially suspected the customer's HA cluster as well, but they said the cluster had been shut down six months earlier and never started since. Because I am not very familiar with HA clustering on Windows, and was worried the cluster software might have started the other system automatically and taken over Oracle, I asked the customer to check the other machine and confirm that it had not started or taken over the Oracle service.
Searching MOS then revealed a Windows-specific quirk: the failure is caused by control file corruption (which dbv cannot detect), and all three control files had the same problem.
The MOS note reads as follows (the same problem also exists on 8.1.7) [ID 291684.1]
Applies to:
Oracle Server - Enterprise Edition - Version: 9.0.1.5 and later [Release: 9.0.1 and later]
Information in this document applies to any platform.
***Checked for relevance on 09-APR-2012***

Symptoms
Alter database mount exclusive results in
ORA-00600: internal error code, arguments: [kccsbck_first], [1], [2141358753], [], [], [], [], []
The description of the error is: 'We receive this error because we are attempting to be the first thread/instance to mount the database and cannot because it appears that at least one other thread has mounted the database already'.
However in this case the database was a standalone database on Windows. It had only one oracle service running. The operating system was rebooted, the oracle service was deleted and a new service created. Even then the error persisted.

Cause
There was some corruption present in the controlfile.

Solution
In this case the problem was resolved by:
+ Taking a backup of the old control file
+ Recreating the control file using the following document: How to Recreate a Controlfile [Document 735106.1]
Because the database could not even be mounted, 'alter database backup controlfile to trace' was not available. Since this was a Windows system with no control file backup of any kind, the only option was to copy a control file to a Linux machine, run it through the strings command, and hand-edit a create-controlfile script (NORESETLOGS). The script was executed to recreate the control file, RECOVER DATABASE applied the redo, and the database was then opened with RESETLOGS. Recovery succeeded (note: on 8i there is no need to add temp files afterwards).
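In outline, the procedure looks like the sketch below. Every name, path and size here is an illustrative placeholder; the real script must list exactly the log files and data files recovered from the strings output of the actual control file:

# on Linux, pull readable file names out of a copy of the control file
strings control01.ctl | grep -iE '\.dbf|\.log'

-- hand-written script run in SQL*Plus on the Windows server (names invented for illustration)
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "HIS" NORESETLOGS NOARCHIVELOG
  MAXLOGFILES 16
  MAXDATAFILES 254
LOGFILE
  GROUP 1 'D:\ORACLE\ORADATA\HIS\REDO01.LOG' SIZE 50M,
  GROUP 2 'D:\ORACLE\ORADATA\HIS\REDO02.LOG' SIZE 50M
DATAFILE
  'D:\ORACLE\ORADATA\HIS\SYSTEM01.DBF',
  'D:\ORACLE\ORADATA\HIS\USERS01.DBF';
RECOVER DATABASE;
ALTER DATABASE OPEN;    -- or OPEN RESETLOGS if recovery has to stop short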
ORA-00600 [kccsbck_first] when mounting the database in an HA cluster
A friend's database could not be started to the mount state during a recovery. The alert log showed the following errors:
Mon Aug 27 10:00:18 2012
ALTER DATABASE MOUNT
Mon Aug 27 10:00:23 2012
Errors in file /oracle/admin/wf2009/udump/orcl_ora_7042.trc:
ORA-00600: internal error code, arguments: [kccsbck_first], [1], [1208656276], [], [], [], [], []
Mon Aug 27 10:00:23 2012
ORA-600 signalled during: ALTER DATABASE MOUNT...
Searching MOS turned up the following explanation:
The ORA-600 [kccsbck_first] error occurs when Oracle detects that another instance has this database already mounted. For some reason, Oracle already sees a thread with a heartbeat. This could be the expected behaviour if running OPS. In such a case the parallel_server parameter needs to be set. In cases where Parallel Server is not linked in, this is not the expected behaviour.
In a non-cluster environment, this error can occur when the database has already been started on one node and another node then tries to start it.
Checking the environment showed a RoseHA two-node cluster. When the database on the current node was shut down, the other node decided Oracle was down and automatically tried to start the database there; when the current node then tried to mount the database, this error was raised because the other node had already mounted it. The storage resource configuration also had a flaw: before taking over resources, the cluster should first analyze and sort out the storage mounts, rather than mounting the storage on the standby side without unmounting it on this side (i.e. leaving the storage mounted on both nodes at once). A quick way to verify the situation is sketched below.
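A hedged checklist for confirming that no second instance is holding the database before retrying the mount (the grep patterns and the /oradata mount point are assumptions, adjust to the real environment):

# on each node: are Oracle background processes for this instance still running?
ps -ef | grep -E 'ora_pmon|ora_smon' | grep -v grep

# is the shared storage mounted on both nodes at the same time?
df -h | grep /oradata

-- from a node where the instance is up, confirm which host has it mounted
SQL> select instance_name, host_name, status from v$instance;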
Solution
The root cause is simply that both HA nodes started the database; shutting down RoseHA on one side, or shutting down that host, resolves the problem.
When performing database recovery, rule out interference from other factors as far as possible, for example: work on only one node of a RAC, and shut down the cluster software in an HA environment.