Also, despite many statements to the contrary, working on a JPEG file, even over several edits, did not produce visibly reduced quality in my test. I posted comparison shots of a crop from an original JPEG (which, I can confirm, matched the equivalent TIFF version), then edited it slightly ten times, doing a save-as after each edit.
Good JPG editing programs employ fancy perceptual tricks that can be amazingly effective for some subjects. I suspect you used a great JPG engine on a subject that was an ideal test case. JPG rendering programs attempt to undo the compressor's tricks, and if you save and load with the same engine, it has an even better chance of doing that well. Still, there will be generational losses. It's a lossy format, and that's the very thing that gives it its amazing flexibility.
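The save/load behaviour described above can be sketched with a toy quantizer standing in for JPEG's DCT-coefficient quantization. This is not a real codec, just an illustration of why an identical re-save loses little while an edit between saves triggers fresh loss (`lossy_save` and the step size are invented for the example):

```python
# Toy model of lossy compression: snap values to a fixed grid,
# roughly the way JPEG quantizes DCT coefficients.

def lossy_save(values, step=8):
    """Simulate saving with a lossy codec: quantize each value."""
    return [round(v / step) * step for v in values]

original = [3, 10, 17, 100, 205]

gen1 = lossy_save(original)      # first save: information is lost
gen2 = lossy_save(gen1)          # re-save with no edit: same grid

print(gen1 != original)          # True: the first generation loses data
print(gen2 == gen1)              # True: an identical re-save loses nothing more

# Edit between saves and the values leave the grid, so the
# next save quantizes (and loses) again.
edited = [v + 3 for v in gen1]   # e.g. a small brightness shift
gen3 = lossy_save(edited)
print(gen3 != edited)            # True: part of the edit's precision is gone
```

A real JPEG engine adds color subsampling and perceptual tuning on top of this, which is why results vary so much between engines and subjects.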
So the point still stands: do all of your intermediate saves in a lossless format. For printing, the case for avoiding JPG is a wide-gamut image (shoot raw, process 16-bit in a wide colorspace). The further your image falls outside the sRGB colorspace (or Adobe RGB, if your printer supports it), the more likely you are to see an objectionable color or saturation loss or shift. Photoshop's gamut warning tools can help spot problems in advance.
Otherwise, a JPG saved at a high quality setting by a good rendering engine is going to work well for the majority of images.
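A gamut warning of the kind Photoshop offers boils down to converting each pixel into the destination space and flagging values that land outside it. Here is a minimal sketch using the published Adobe RGB (1998) and sRGB conversion matrices; the function name, sample pixels, and tolerance are illustrative, not any product's actual API:

```python
# Flag linear-light Adobe RGB pixels that fall outside the sRGB gamut --
# the idea behind a "gamut warning". Matrices are the standard
# Adobe RGB (1998) -> XYZ and XYZ -> sRGB (D65) conversions.

ADOBE_TO_XYZ = [
    (0.5767, 0.1856, 0.1882),
    (0.2973, 0.6274, 0.0753),
    (0.0270, 0.0707, 0.9911),
]
XYZ_TO_SRGB = [
    ( 3.2406, -1.5372, -0.4986),
    (-0.9689,  1.8758,  0.0415),
    ( 0.0557, -0.2040,  1.0570),
]

def _mul(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

def out_of_srgb_gamut(adobe_rgb, tol=1e-3):
    """True if a linear Adobe RGB pixel has no in-range sRGB equivalent."""
    srgb = _mul(XYZ_TO_SRGB, _mul(ADOBE_TO_XYZ, adobe_rgb))
    return any(c < -tol or c > 1 + tol for c in srgb)

print(out_of_srgb_gamut((0.0, 1.0, 0.0)))  # saturated green: True
print(out_of_srgb_gamut((0.5, 0.5, 0.5)))  # neutral grey: False
```

Saturated greens and cyans are classic out-of-gamut cases: converted to sRGB they need a negative red or blue component, which no monitor or sRGB print path can produce, so they get clipped or shifted.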
People do seem to forget that years ago, when hard drives were so expensive, many of us used utilities such as DriveSpace to make a small hard drive hold roughly 40% more by compressing and uncompressing programmes on the fly. I well remember my massive 40MB hard drive holding around 67MB with that (or a similar) utility. Even today, Microsoft operating systems can compress drives to do much the same thing.
In the beginning there was Stac. It was a lossless compression format, just like a compressed TIFF. Some modern SSDs use the same trick in hardware today, both to save space and to reduce wear on the flash memory.
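That lossless round trip is easy to demonstrate with Python's built-in zlib, which uses the same DEFLATE family of algorithms found in compressed TIFF and PNG (the sample data is made up for the example):

```python
import zlib

# Repetitive data compresses well; "lossless" means the round trip
# is bit-for-bit exact, unlike JPEG.
data = b"the quick brown fox " * 500

packed = zlib.compress(data, level=9)
restored = zlib.decompress(packed)

print(restored == data)         # True: nothing was lost
print(len(packed) < len(data))  # True: and it still saved space
```

This is exactly the property DriveSpace and Stac relied on: your programmes came back byte-for-byte identical, or they wouldn't run at all.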
In the early days of Windows, demand for drive capacity had exploded, but capacity per dollar had been stagnant for a while, as had read speed. Processor speed had increased, so PCs of the day could spend CPU time compressing data before saving it and actually finish faster. As a user you got increased apparent capacity without a speed penalty. The downside was that compression massively amplified the damage caused by a bad spot on the drive, so backups were even more important then.
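The amplification effect is easy to see: flip a single byte in a compressed stream and the whole block decodes wrong or not at all, where the same flip in uncompressed data would damage just one byte. A minimal sketch with zlib (the simulated "bad spot" is of course contrived):

```python
import zlib

data = b"important records " * 200
packed = bytearray(zlib.compress(data))

packed[len(packed) // 2] ^= 0xFF   # simulate one bad byte on the disk

try:
    recovered = zlib.decompress(bytes(packed))
    damaged = recovered != data    # decoded, but not to the original
except zlib.error:
    damaged = True                 # the stream won't decode at all
print(damaged)                     # one bad byte ruined the whole block
```

With DriveSpace-style whole-volume compression, a single bad sector could take out every file stored in that compressed region, which is why backups mattered so much.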