Good. As expected, I/O was the biggest bottleneck. The provisioned IOPS are a welcome improvement. It’s down to 10 minutes.
18 seconds feels too fast. Something else must be going on here. The OS has a lot of optimizations in this area, including some very specific ones for the copy routine, so it must be doing something beyond actually reading and copying the bytes. I can’t believe it can copy 10 GB spread across hundreds of thousands of files in 18 seconds. It’s not the size I find hard to believe, it’s the number of files. I think the OS must be caching the data. Hugo could possibly be benefiting from that cache too, but we aren’t doing anything specific to make that happen.
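One way to test that caching suspicion, assuming a Linux host, is to flush the page cache between runs so each measurement starts cold. The snippet below is only a sketch of that idea; it isn’t part of the original benchmark, and it needs root to write to `/proc/sys/vm/drop_caches`.

```python
import os

# Flush dirty pages to disk first, then drop the page cache, dentries and
# inodes. Writing "3" to /proc/sys/vm/drop_caches is Linux-specific and
# requires root; run this between benchmark iterations.
os.sync()
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3")
```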
Did you remove the destination directory first?
To do a proper benchmark you should remove the destination directories every time.
In your case, for the copy this would be the content.test directory, and for the generation it would be the public directory.
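A minimal benchmark loop along those lines might look like the sketch below. The directory names (content, content.test, public) come from the conversation; the timing harness itself is an assumption, not the setup actually used.

```python
import shutil
import subprocess
import time

SRC = "content"           # source tree being copied
COPY_DEST = "content.test"  # destination of the raw copy benchmark
HUGO_DEST = "public"        # Hugo's default output directory

def timed(label, fn):
    # Time a single step and print the wall-clock duration.
    start = time.monotonic()
    fn()
    print(f"{label}: {time.monotonic() - start:.1f}s")

for run in range(3):
    # Remove the destination directories so every run starts from scratch.
    shutil.rmtree(COPY_DEST, ignore_errors=True)
    shutil.rmtree(HUGO_DEST, ignore_errors=True)

    timed(f"copy run {run}", lambda: shutil.copytree(SRC, COPY_DEST))
    timed(f"hugo run {run}", lambda: subprocess.run(["hugo"], check=True))
```

Combined with dropping the page cache between iterations, this keeps one run from silently warming the next one up.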