It's really about the accuracy of the data (bit depth) rather than the format. A camera may put out as little as 6 bits of usable data at high ISOs or as much as 14 bits at low ISOs. Whatever the camera produces is then either stored in a 16-bit container (raw/DNG/TIFF) or converted and stored in an 8-bit container (JPEG). 8-bit resolution is generally more detail than we can discern, and it may even be more precision than the camera actually generated, so that in itself isn't the problem.

The problem comes when you start to manipulate the data/image heavily. Editing is essentially a mathematical process, and when you start with less accurate numbers, you wind up with greater errors/inaccuracy in the results.
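A minimal sketch of that idea, assuming a smooth gradient of tones as the "scene" and a typical gamma curve as the edit (the function names and the 2.2 gamma value are my own illustration, not from any particular camera or editor):

```python
import numpy as np

# The same smooth gradient, quantized to 8-bit vs 14-bit precision,
# then pushed through a gamma (levels) adjustment.
# Fewer starting levels -> larger rounding errors in the result.
gradient = np.linspace(0.0, 1.0, 10_000)   # "true" scene values, 0..1

def quantize(x, bits):
    levels = 2**bits - 1
    return np.round(x * levels) / levels   # snap values to the bit-depth grid

def gamma_adjust(x, g=2.2):
    return x ** (1.0 / g)                  # a typical gamma/levels curve

ideal = gamma_adjust(gradient)             # edit done on the un-quantized data
err_8  = np.abs(gamma_adjust(quantize(gradient, 8))  - ideal).max()
err_14 = np.abs(gamma_adjust(quantize(gradient, 14)) - ideal).max()

print(f"max error starting from 8-bit data:  {err_8:.6f}")
print(f"max error starting from 14-bit data: {err_14:.6f}")
```

Running this shows the 8-bit starting point producing errors orders of magnitude larger than the 14-bit one, for the exact same edit.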

The easiest way to demonstrate this is with edits/histograms.

This image is a TIFF converted from a raw file, retaining the raw file's 16-bit information (it could instead have been converted to 8-bit).

I then applied a heavy levels (dynamic range/gamma) adjustment to it, which requires a lot of recalculation/math. That's the lower histogram on the right.

If you look at the upper histogram (color/luminance) you will see it is complete, with no gaps.

Here I did exactly the same thing to a JPEG converted from the same raw file, which is an 8-bit file.

If you look at the upper histogram you will see a lot of missing data: the vertical lines and gaps are tonal values the 8-bit math could no longer produce.
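The histogram gaps above can be reproduced numerically. This is a hypothetical sketch, not the exact edit used on the screenshots: it treats a smooth ramp of tones as the image, applies a gamma move as the "levels adjustment," and counts how many 8-bit display values end up unused:

```python
import numpy as np

# A smooth ramp of tones standing in for a real image.
ramp = np.linspace(0.0, 1.0, 200_000)

def levels_adjust(x, g=2.2):
    # a heavy gamma/levels move, computed in floating point
    return x ** (1.0 / g)

def empty_bins(x):
    # render to 8-bit for display, then count unused histogram bins
    display = np.round(x * 255).astype(np.uint8)
    hist = np.bincount(display, minlength=256)
    return int((hist == 0).sum())

# 16-bit workflow: the edit sees the high-precision data.
hi = levels_adjust(np.round(ramp * 65535) / 65535)
# 8-bit workflow: the data was already rounded to 256 levels before the edit.
lo = levels_adjust(np.round(ramp * 255) / 255)

print("empty histogram bins, 16-bit source:", empty_bins(hi))
print("empty histogram bins, 8-bit source: ", empty_bins(lo))
```

The 8-bit workflow leaves dozens of histogram bins empty, which is exactly what the gaps/comb pattern in the screenshot shows; the 16-bit workflow leaves the histogram essentially continuous.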

An easy way to think of it is with simple math. Suppose a low-precision (8-bit) data set can only capture the values 0, 1, 2, 4, 8, while a higher-precision (14-bit) data set captures every step: 0, 1, 2, 3, 4, 5, 6, 7, 8. Apply a simple calculation of +1 to each value and keep the results alongside the original data: the first set gives you 0, 1, 2, 3, 4, 5, 8, 9 and the second gives you 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Notice that 6 and 7 are missing from the first set of results... those are the gaps you see in the histogram.
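The same toy example, written out as code (the +1 "edit" and the two sample sets are straight from the thought experiment above):

```python
# Low-precision data skips values; high-precision data has every step.
coarse = {0, 1, 2, 4, 8}
fine   = {0, 1, 2, 3, 4, 5, 6, 7, 8}

# Apply the +1 edit and keep the originals alongside the results.
coarse_after = coarse | {v + 1 for v in coarse}
fine_after   = fine   | {v + 1 for v in fine}

print(sorted(coarse_after))  # [0, 1, 2, 3, 4, 5, 8, 9] -> 6 and 7 can never appear
print(sorted(fine_after))    # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -> no gaps
```

Those unreachable values (6 and 7) are the digital equivalent of the empty vertical gaps in the 8-bit histogram.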