class java.nio.HeapByteBuffer cannot be cast to class org.apache.avro.generic.GenericRecord #675
Comments
Then don't use AvroFormat. Use ByteArrayFormat.
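A minimal sketch of the relevant setting on the source connector, assuming the Confluent S3 source connector and a bucket that was written as raw bytes (names here are placeholders):

```properties
# Read back the raw bytes exactly as the sink wrote them,
# instead of trying to decode each S3 object as an Avro container file.
format.class=io.confluent.connect.s3.format.bytearray.ByteArrayFormat
```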
Worth mentioning that this isn't exactly a proper backup strategy. You've only configured the values of the records to be saved, not the timestamp, headers, or key (or the Protobuf schema itself, assuming you are using the Schema Registry).
What would a proper backup strategy look like?
I personally haven't come across any decent way to back up Kafka in a streaming fashion without using tools like MirrorMaker2 to replicate to a warm standby cluster with increased retention periods. And I didn't even mention compacted topics... they would obviously not compact in S3, and therefore would not be restored in a compacted form. Ultimately, the last option is static disk snapshots on a regular basis, which you could upload to S3 separately, if needed.
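For context, the warm-standby approach mentioned above boils down to a MirrorMaker2 config along these lines (a sketch with placeholder addresses, run via connect-mirror-maker.sh):

```properties
# Two clusters: the live one and the standby acting as the "backup"
clusters = primary, backup
primary.bootstrap.servers = primary-kafka:9092
backup.bootstrap.servers = backup-kafka:9092

# Replicate all topics from primary to the standby cluster
primary->backup.enabled = true
primary->backup.topics = .*

# Longer retention on the standby (the recovery window) is configured
# per topic on the backup cluster itself, not in this file.
```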
Thank you so much for that writeup! I guess then that this issue could be closed as "user error".
If you want to do some more research into the topic (pun intended): I've tried to implement this at my last job and used the kafka-connect-transform-archive SMT.
Related issue - jcustenborder/kafka-connect-transform-archive#6
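That transform wraps each record's key, value, topic, and timestamp into a single struct before it is written, so more than just the value survives the round trip. A sketch of how it would be wired into the sink config, assuming the transform's jar is on the worker's plugin path:

```properties
# Archive each record (key, value, topic, timestamp) into one struct
# so the backup keeps more than just the record value.
transforms=archive
transforms.archive.type=com.github.jcustenborder.kafka.connect.archive.Archive
```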
Yeah, well, I'll have to tweak the requirements a little, because just "do backups on Kafka" is apparently way too broad. Just focusing on the messages and ignoring the other parts will probably be enough for my use case. Some messages sit in our Kafka for a week or two before they get processed. In case of a disaster recovery, we need to ensure those weeks-old messages also get restored, in addition to our other non-Kafka databases.
Original issue:
Hello! I'm trying to do basic message backups to S3 using https://docs.confluent.io/kafka-connectors/s3-sink/current/overview.html#schema-evolution
When trying to use the s3-source to restore messages into a brand new Kafka, I get the error:
class java.nio.HeapByteBuffer cannot be cast to class org.apache.avro.generic.GenericRecord
Configs for cluster 1
Where I'm trying to back up messages using the s3-sink, running in a separate pod from Kafka, using:
connect-standalone.properties
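The original file contents aren't shown here; a minimal standalone worker config consistent with a byte-level backup pipeline would look roughly like this (addresses and paths are placeholder assumptions):

```properties
bootstrap.servers=kafka-cluster-1:9092

# Pass keys and values through as raw bytes; no schema decoding
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter

# Standalone mode stores connector offsets in a local file
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=/usr/share/java,/usr/share/confluent-hub-components
```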
sink.properties
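Likewise, a minimal S3 sink config matching this setup might look like the following sketch (bucket, region, and topic names are placeholders):

```properties
name=s3-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=my-topic

s3.bucket.name=my-kafka-backup
s3.region=eu-west-1
storage.class=io.confluent.connect.s3.storage.S3Storage

# Write the raw record bytes as-is; no Avro/Parquet re-encoding
format.class=io.confluent.connect.s3.format.bytearray.ByteArrayFormat
flush.size=1000
```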
Configs for cluster 2
Where I'm trying to restore messages using the s3-source, running in a separate pod from Kafka, using:
connect-standalone.properties
(same as cluster 1)
source.properties
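A sketch of a source config that would reproduce the reported error: if format.class is set to (or defaults to) AvroFormat while the bucket contains raw bytes, the connector tries to cast the buffered bytes to an Avro GenericRecord and fails. Matching the sink's ByteArrayFormat avoids that (names are placeholders):

```properties
name=s3-source
connector.class=io.confluent.connect.s3.source.S3SourceConnector
s3.bucket.name=my-kafka-backup
s3.region=eu-west-1

# Must match the format the sink wrote. Pointing this at AvroFormat
# while the objects hold raw bytes produces the
# HeapByteBuffer -> GenericRecord ClassCastException.
format.class=io.confluent.connect.s3.format.bytearray.ByteArrayFormat
```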
What am I doing wrong here?
Btw the messages are protobuf encoded, but I don't want to lock in the message format with the protobuf converter. To my understanding, I just want to use the ByteArrayConverter, as I just want to back up and restore the messages as-is.