I am using the S3 sink connector to read from a Kafka Avro topic and write to S3 in Parquet format. It works fine when compression is `none`, but when I use gzip compression I get the following error:
{"source_host":"connector-avro-parquet-0","method":"put","level":"ERROR","ctx":{"stacktrace":"org.apache.kafka.connect.errors.RetriableException: org.apache.kafka.connect.errors.DataException: Multipart upload failed to complete.\n\tat io.confluent.connect.s3.TopicPartitionWriter.commitFiles(TopicPartitionWriter.java:524)\n\tat io.confluent.connect.s3.TopicPartitionWriter.commitOnTimeIfNoData(TopicPartitionWriter.java:303)\n\tat io.confluent.connect.s3.TopicPartitionWriter.write(TopicPartitionWriter.java:194)\n\tat io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:191)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:563)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)\n\tat java.base\/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base\/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base\/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base\/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base\/java.lang.Thread.run(Thread.java:834)\nCaused by: org.apache.kafka.connect.errors.DataException: Multipart upload failed to complete.\n\tat io.confluent.connect.s3.storage.S3OutputStream.commit(S3OutputStream.java:169)\n\tat io.confluent.connect.s3.storage.S3ParquetOutputStream.close(S3ParquetOutputStream.java:36)\n\tat org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:865)\n\tat org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)\n\tat org.apache.parquet.hadoop.ParquetWriter.close(ParquetWriter.java:310)\n\tat io.confluent.connect.s3.format.parquet.ParquetRecordWriterProvider$1.commit(ParquetRecordWriterProvider.java:108)\n\tat io.confluent.connect.s3.TopicPartitionWriter.commitFile(TopicPartitionWriter.java:544)\n\tat io.confluent.connect.s3.TopicPartitionWriter.commitFiles(TopicPartitionWriter.java:514)\n\t... 14 more\nCaused by: org.apache.kafka.connect.errors.ConnectException: Expected compressionFilter to be a DeflaterOutputStream, but was passed an instance that does not match that type.\n\tat io.confluent.connect.s3.storage.CompressionType$1.finalize(CompressionType.java:77)\n\tat io.confluent.connect.s3.storage.S3OutputStream.commit(S3OutputStream.java:161)\n\t... 21 more","exception_class":"org.apache.kafka.connect.errors.RetriableException","exception_message":"org.apache.kafka.connect.errors.DataException: Multipart upload failed to complete."}
The same issue happens when I use the Avro format class with compression enabled.
Connector config:
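A minimal sketch of the relevant settings (topic, bucket, region, and registry URL are placeholders showing the shape of the config, not the exact file; the failing combination is `format.class=ParquetFormat` with `s3.compression.type=gzip`):

```json
{
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "format.class": "io.confluent.connect.s3.format.parquet.ParquetFormat",
  "topics": "example-avro-topic",
  "s3.bucket.name": "example-bucket",
  "s3.region": "us-east-1",
  "flush.size": "1000",
  "s3.compression.type": "gzip",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://schema-registry:8081"
}
```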
Is there any way to resolve this?
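If I read the trace correctly, the last `Caused by` comes from `CompressionType$1.finalize`, which casts the compression filter to a `DeflaterOutputStream`; the Parquet (and Avro) writers wrap the S3 stream with their own writer, so the cast fails. A sketch of settings that should move compression into the format writer instead, assuming your connector version supports the format-level codec options (`parquet.codec` for Parquet, `avro.codec` for Avro; only the one matching your `format.class` applies):

```json
{
  "s3.compression.type": "none",
  "parquet.codec": "gzip",
  "avro.codec": "deflate"
}
```

(Avro has no gzip codec; `deflate` is its closest equivalent.)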
Thanks in advance!