Hello,
Is there a chunking mechanism for S3 file downloads, similar to the multipart upload to S3?
I’m trying to stream a large file from S3, and the underlying consumer seems to be timing out (or something along those lines), resulting in an error like this:
```
[WARN] [04/15/2019 14:59:00.553] [default-akka.actor.default-dispatcher-14] [default/Pool(shared->https://s3-eu-west-1.amazonaws.com:443)] [0 (WaitingForEndOfResponseEntity)] Ongoing request [GET /bucket/file Empty] was dropped because pool is shutting down
```

and right after that:

```
[INFO] [04/15/2019 14:59:32.033] [default-akka.actor.default-dispatcher-3] [akka://default/system/IO-TCP/selectors/$a/0] Message [akka.io.TcpConnection$Unregistered$] from Actor[akka://default/system/IO-TCP/selectors/$a/0#-1268872956] to Actor[akka://default/system/IO-TCP/selectors/$a/0#-1268872956] was not delivered. [1] dead letters encountered. If this is not an expected behavior, then [Actor[akka://default/system/IO-TCP/selectors/$a/0#-1268872956]] may have terminated unexpectedly, This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
```
and the stream terminates after that. I guess I could define the ByteRange manually and loop through the ranges, but that seems less stream-like, or a dirty way of achieving it.
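For concreteness, here is a minimal sketch of that ranged-download idea, assuming the Alpakka S3 connector (`akka.stream.alpakka.s3.scaladsl.S3`). `rangedDownload`, `totalSize`, and the 8 MB chunk size are placeholders I made up; in practice the object size would come from `S3.getObjectMetadata`:

```scala
import akka.NotUsed
import akka.http.scaladsl.model.headers.ByteRange
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.Source
import akka.util.ByteString

// Hypothetical helper: download bucket/key as a series of ranged GETs,
// concatenated into one continuous stream of bytes.
def rangedDownload(bucket: String,
                   key: String,
                   totalSize: Long,
                   chunkSize: Long = 8L * 1024 * 1024): Source[ByteString, NotUsed] =
  Source(0L until totalSize by chunkSize).flatMapConcat { start =>
    // ByteRange is inclusive on both ends
    val range = ByteRange(start, math.min(start + chunkSize, totalSize) - 1)
    S3.download(bucket, key, range = Some(range)).flatMapConcat {
      case Some((data, _)) => data // the bytes of this range
      case None            => Source.failed(new RuntimeException(s"$bucket/$key not found"))
    }
  }
```

Each chunk would be a fresh ranged GET, so a slow consumer only ever holds one small response entity open instead of the whole file, but it still feels like the manual loop dressed up as a single Source.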
Any thoughts on this?