robbo,
you're combining quality and size, and they are two different things. Size is the pixel dimensions of the picture. Quality is how much image detail the compression keeps. Let's leave the size the same for now (large JPEG). When the camera takes a picture it uses all of the pixels in the sensor. When it converts to JPEG, it throws away some of that detail; how much depends on the processing and the quality level. (Strictly speaking, JPEG works on 8x8 blocks of pixels and discards fine detail within each block rather than whole pixels, but the idea is the same.) As a rough analogy, say you have a 3x3 grid of pixels that all have roughly the same RGB values. On the highest quality, the conversion keeps all 9 pixels. At a slightly lower quality, the algorithm might drop the middle pixel from the file. When a program opens that file, it is missing the data for the middle pixel, so it makes a guess based on the pixels around it. If all of the surrounding pixels are similar, it's an easy guess.
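To make the "guess" concrete, here's a toy sketch of that 3x3 example: the middle value is pretended lost and rebuilt by averaging its eight neighbours. The numbers are made up and this is not real JPEG internals, just the interpolation idea.

```python
# Toy illustration (not actual JPEG internals): reconstruct a "dropped"
# middle pixel of a 3x3 grid by averaging its 8 neighbours.
grid = [
    [200, 201, 199],
    [202, 205, 200],  # middle value 205 is the one we pretend was discarded
    [198, 200, 201],
]

# collect the 8 surrounding values, skipping the middle (row 1, col 1)
neighbours = [grid[r][c] for r in range(3) for c in range(3) if (r, c) != (1, 1)]
guess = round(sum(neighbours) / len(neighbours))

print("original middle:", grid[1][1])  # 205
print("reconstructed:  ", guess)       # 200
print("error:          ", abs(grid[1][1] - guess))
```

Because all nine values are similar, the guess lands close (off by 5 out of 255 here), which is why smooth areas compress so well.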
When you start using a lower quality, maybe the algorithm deletes 4 of the 9 pixels, and again the opening program has to guess at what data should be there.
Each time the opening program guesses, it may or may not guess correctly, so there will be slight differences between the new image and the original. When you save the new one, you're saving the guesses as the new data, and the cycle continues. That's not much of a problem for pictures with smooth gradients of color, but around sharp edges you'll see the changes accumulate.
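The cycle above can be simulated crudely: each "save/open" below drops alternate samples from a row of pixel values and re-guesses them from their neighbours. This is a made-up stand-in for JPEG, not the real algorithm, but it shows a sharp edge smearing a little more with every generation while a smooth gradient comes through almost untouched.

```python
# Toy simulation (not real JPEG) of generation loss: each save/open
# cycle drops alternate interior samples and re-guesses them from
# their neighbours.
def lossy_cycle(vals, phase):
    out = list(vals)
    for i in range(1, len(vals) - 1):
        if i % 2 == phase:  # this sample was "thrown away"; guess it
            out[i] = (vals[i - 1] + vals[i + 1]) / 2
    return out

def generations(vals, n):
    for g in range(n):
        vals = lossy_cycle(vals, g % 2)  # alternate which samples drop
    return vals

edge = [100.0] * 4 + [200.0] * 4                 # sharp edge
ramp = [100.0 + i * 100 / 7 for i in range(8)]   # smooth gradient

print(generations(edge, 6))  # the edge smears further each cycle
print(generations(ramp, 6))  # the ramp survives almost unchanged
```

The gradient survives because averaging two neighbours of a linear ramp reproduces the missing value exactly; at the edge there is no good guess, so error creeps in and gets re-saved each time.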
Just shoot RAW, use a non-destructive editor, and be done with it.
thanks
rick