Integrating SpringBoot + KAFKA on Windows, Part 2: Prelude to Writing the Code

那年夏天0 2020-01-06

Getting ready to write some test code.

I stared at the docs for quite a while without really getting them (I'm basically still a beginner; the fact that I'm running the Windows build of Kafka at all probably says it all).

Reading the docs got boring, and I happened to come across a Windows management program for KAFKA, so I installed one to try it out. It felt pretty junky as well and I was planning not to use it, but then I noticed it could manage topics.
So, just as an experiment, I used the tool to delete my custom test topic. I clicked through it quickly and didn't bother to check the result; it was all throwaway stuff anyway.
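(For reference, the GUI tool was presumably doing something roughly equivalent to a delete call through Kafka's AdminClient. A minimal sketch, assuming a single broker on the default localhost:9092 and the topic name test used in this post:)

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DeleteTestTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: broker on the default port from a stock server.properties
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Topic deletion is asynchronous on the broker side; it is this delete
            // that later triggers the rename-and-remove seen in the stack trace below.
            admin.deleteTopics(Collections.singletonList("test")).all().get();
        }
    }
}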

A phone call interrupted me in the middle of all this. When I got back I planned to write a fresh Java test class from scratch, figured those test topics weren't worth anything anyway, and decided I might as well just restart zookeeper/KAFKA;
so I closed the two CMD windows and started the two services again;
zookeeper started up fine;
KAFKA did not, and I thought: seriously? How can it break this easily??

[2020-01-06 16:38:27,384] TRACE [Broker id=0] Handling stop replica (delete=true) for partition test-0 (state.change.logger)
[2020-01-06 16:38:27,429] ERROR [Broker id=0] Ignoring stop replica (delete=true) for partition test-0 due to storage exception (state.change.logger)
org.apache.kafka.common.errors.KafkaStorageException: Error while renaming dir for test-0 in log dir D:\kafka_2.12-2.4.0\kafka-logs
Caused by: java.nio.file.AccessDeniedException: D:\kafka_2.12-2.4.0\kafka-logs\test-0 -> D:\kafka_2.12-2.4.0\kafka-logs\test-0.a66dcd7640b1444b86fd2d4cbafe30d2-delete
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:795)
at kafka.log.Log.$anonfun$renameDir$2(Log.scala:966)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at kafka.log.Log.maybeHandleIOException(Log.scala:2316)
at kafka.log.Log.renameDir(Log.scala:964)
at kafka.log.LogManager.asyncDelete(LogManager.scala:925)
at kafka.cluster.Partition.$anonfun$delete$1(Partition.scala:479)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:261)
at kafka.cluster.Partition.delete(Partition.scala:470)
at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:360)
at kafka.server.ReplicaManager.$anonfun$stopReplicas$2(ReplicaManager.scala:404)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:97)
at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:402)
at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:235)
at kafka.server.KafkaApis.handle(KafkaApis.scala:131)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)
at java.lang.Thread.run(Thread.java:745)
Suppressed: java.nio.file.AccessDeniedException: D:\kafka_2.12-2.4.0\kafka-logs\test-0 -> D:\kafka_2.12-2.4.0\kafka-logs\test-0.a66dcd7640b1444b86fd2d4cbafe30d2-delete
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
at java.nio.file.Files.move(Files.java:1395)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:792)
... 17 more

The AccessDeniedException on renaming the test-0 directory to its *-delete name is the classic Kafka-on-Windows problem: the broker cannot rename or remove a log directory while it still holds open handles on the files inside. I didn't know that yet, so my first instinct was simply to delete the files under the two log folders:
D:\kafka_2.12-2.4.0\kafka-logs
D:\kafka_2.12-2.4.0\logs

Restarted KAFKA: still broken, same error in the log.
My scalp was starting to tingle. Come on, I deleted the logs; that should be as good as a fresh deployment. This shouldn't be happening.

Cue half an hour of frantic flailing (ten thousand words omitted here), until I finally landed on this:

https://blog.csdn.net/dsjtlmy/article/details/88557324

Fine, let's give it a try.

Delete KAFKA's log files, then go to the data folder configured for zookeeper (dataDir) and delete the files in there as well;
restart;
everything back to normal.
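If you end up doing this more than once, the cleanup is easy to script. A throwaway sketch of that step; the kafka-logs path is the one from this post, while D:\zookeeper-data stands in for whatever dataDir points to in your zookeeper.properties (hypothetical path, check your own config), and both services must be stopped first:

import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.stream.Stream;

public class WipeKafkaState {
    public static void main(String[] args) throws IOException {
        // log.dirs from server.properties (path used in this post)
        wipe(Paths.get("D:\\kafka_2.12-2.4.0\\kafka-logs"));
        // dataDir from zookeeper.properties (hypothetical path, adjust to your config)
        wipe(Paths.get("D:\\zookeeper-data"));
    }

    // Delete everything inside dir, deepest entries first, keeping dir itself.
    static void wipe(Path dir) throws IOException {
        if (!Files.exists(dir)) return;
        try (Stream<Path> entries = Files.walk(dir)) {
            entries.sorted(Comparator.reverseOrder())
                   .filter(p -> !p.equals(dir))
                   .forEach(p -> {
                       try {
                           Files.delete(p);
                       } catch (IOException e) {
                           System.err.println("could not delete " + p + ": " + e);
                       }
                   });
        }
    }
}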

Summary:
I had forgotten that KAFKA always runs on top of zookeeper, so when KAFKA misbehaves, zookeeper gets to share the blame (just kidding).
Because the setup is distributed, zookeeper keeps its own records: the topic and broker metadata live on the zookeeper side. Even if you wipe the entire log directory on the KAFKA side, zookeeper still holds state that no longer matches, and KAFKA will keep tripping over it. So if you are going to clean up, clean up both sides together, otherwise it won't work.
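To see that for yourself: the topic registrations live in znodes under /brokers/topics, not in kafka-logs. A quick sketch that asks zookeeper what it still knows about, assuming the default clientPort of 2181 and the org.apache.zookeeper client jar on the classpath:

import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ListTopicsInZookeeper {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Assumption: zookeeper on the default localhost:2181
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10_000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        // Kafka (zookeeper-backed, as in 2.4.0) registers every topic under /brokers/topics
        List<String> topics = zk.getChildren("/brokers/topics", false);
        System.out.println("Topics zookeeper still knows about: " + topics);
        zk.close();
    }
}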
