Christian Bjørnbak
2017-03-01 06:50:35 UTC
Hi,
I am trying to upload a directory containing 60 GB of JPEGs, mostly 3-6 KB
each, to Ceph storage.
First I tried using sync:
s3cmd sync -P /path-to-src/directory s3://bucket
It takes 24+ hours and at some point the process is killed. I tried a
couple of times and noticed that while it is running it uses all of the
source server's memory and swap.
I'm syncing from a 16 GB RAM / 16 GB swap server.
I thought sync might keep the files in memory to compare them, so I
switched to put:
s3cmd put -P --recursive /path-to-src/directory s3://bucket
But I experience the same thing - s3cmd uses all the memory.
Is there a memory leak in s3cmd, so that it does not release a file from
memory after it has been uploaded?
Med venlig hilsen / Kind regards,
Christian Bjørnbak
Chefudvikler / Lead Developer
TouristOnline A/S
Islands Brygge 43
2300 København S
Denmark
TLF: +45 32888230
Dir. TLF: +45 32888235