Some discussion on intermediate images here:
@andrewd72 Thanks. Looks like intermediate resources aren’t meant to be published, but they currently are due to a bug. I wanted to avoid bloating public files with useless resources, but I’ll just assume this will be fixed in the future.
Your final quality maths won’t be as simple for two “quality 75” operations.
If you mean that the final quality won’t be exactly 75% × 75%, but roughly that, then I understand. I only used that math to illustrate that the quality is reduced twice.
For best quality for multiple operations you could use a tiff intermediate or a higher quality jpg.
To resize to multiple sizes, you might as well start from the original source for each case.
I’m writing an image helper that processes an image with whatever options you want, then optionally generates breakpoint-sized versions of it for you. When the only operation is a resize, I generate the breakpoint versions from the original image. The problem is that the user can apply processing methods other than resize (like crop), in which case the breakpoint versions have to be generated from the processed image. In that case, it seems I have to process the original image again with the same options but at quality 100, and then generate the breakpoint images from that intermediate with the original quality applied.
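To make the trade-off concrete, here is a minimal sketch of the two pipelines. This is a toy fidelity model, not Hugo’s API or how codecs actually behave: it just assumes each lossy re-encode at quality q retains roughly q% of what it was given, which is enough to show why a q=100 intermediate avoids compounding the loss. All names are illustrative.

```python
def reencode(fidelity: float, quality: int) -> float:
    """Crude model: a lossy save at `quality` retains roughly quality% of fidelity."""
    return fidelity * (quality / 100)

# Naive path: crop at q=75, then resize each breakpoint at q=75.
# The quality reduction is applied twice.
naive = reencode(reencode(1.0, 75), 75)

# Two-pass path: crop at q=100 (near-lossless intermediate),
# then resize each breakpoint at q=75. The loss is applied once.
two_pass = reencode(reencode(1.0, 100), 75)

print(naive)     # 0.5625
print(two_pass)  # 0.75
```

In practice the numbers won’t multiply this cleanly (as noted above), but the ordering holds: breakpoints generated from an already-compressed intermediate always start from degraded data.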
If you repeatedly compress a WebP image using the same q value, the image size will decrease each time.
@jmooring Thanks, good to have confirmation!
If you repeatedly compress a JPEG image using the same q value, the image size will not decrease each time.
By size, I assume you mean file size. But the quality will still go down, correct? (E.g. more pixelated, loss of detail.)