Outside your workflow, the final output is always an 8-bit JPEG, or a print with a smaller gamut than a computer screen, because a raw file cannot be displayed directly.
The difference between out-of-camera JPEGs and "developing" the raw on your computer is where, by whom, and how the processing is done.
Raw processors show you a JPEG-like preview of what the final output might be; they cannot show the 14 bits per colour. Moving the sliders is just choosing which information will be kept (which is, of course, easier than trying to extrapolate it when processing an OOC JPEG).
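A rough sketch of that idea in Python (illustrative only, not any real raw converter's code): mapping 14-bit raw values down to 8-bit output values, where a simple gamma "slider" decides which part of the tonal range keeps the most levels.

```python
# Hypothetical toy example: reducing a 14-bit raw sample (0..16383)
# to an 8-bit JPEG value (0..255). The gamma parameter stands in for
# the sliders: it chooses which tones get the surviving levels.

def develop(raw_value, gamma=2.2, raw_bits=14, out_bits=8):
    """Map one linear raw sample to an 8-bit output value."""
    raw_max = (1 << raw_bits) - 1        # 16383 for 14-bit raw
    out_max = (1 << out_bits) - 1        # 255 for 8-bit JPEG
    normalized = raw_value / raw_max     # 0.0 .. 1.0
    encoded = normalized ** (1.0 / gamma)  # gamma encoding lifts shadows
    return round(encoded * out_max)

# The 64 deepest raw shadow levels collapse into far fewer 8-bit
# values; the gamma choice decides how many of them survive.
shadow_levels = sorted({develop(v) for v in range(64)})
print(shadow_levels)
```

The point is only that the 14-to-8-bit reduction is irreversible: once the converter has committed to a curve, the discarded levels cannot be recovered from the JPEG.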
In camera, processing time and power are limited by the current state of the art and by the need to clear the buffer for the next shots.
However, the increase in in-camera processing power and the cumulative improvements of the processing algorithms help produce OOC JPEGs that are much better than those of a few years ago. The higher pixel count also helps the processing software, which has more information with which to analyse the scene and adapt the way it works.
The ability of modern cameras to output high-quality 4K video attests to the power of modern in-camera processors and the efficiency of their software.
The main limitations of today's OOC JPEGs are high-contrast scenes and low-light, high-ISO scenes. And, of course, the choice between a true-to-life rendering and an exaggerated ambiance for artistic purposes.
In high-contrast scenes, because, up to now, the camera cannot guess whether a high-dynamic-range scene will look better with a standard tone curve (very dark shadows and a pleasing mid-tone contrast, at the risk of highlight clipping), or whether it will benefit from lifting the shadows and protecting the highlights, at the risk of a final output that looks dull or unnatural.
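To make that trade-off concrete, here is a toy comparison (my own made-up curves, not any manufacturer's algorithm) of the two renderings applied to the same normalized luminance values in 0.0..1.0:

```python
# Illustrative only: two hypothetical tone curves for a high-contrast
# scene, applied to normalized luminance (0.0 = black, 1.0 = white).

def standard_curve(x):
    """Punchy S-like curve: deep shadows, but highlights may clip."""
    return min(1.0, 1.15 * x ** 1.3)

def shadow_lift_curve(x):
    """Lift shadows and roll off highlights: safer, but flatter."""
    return x ** 0.6 * (1.0 - 0.15 * x)

for x in (0.05, 0.5, 0.95):
    print(f"in {x:.2f} -> standard {standard_curve(x):.2f}, "
          f"lifted {shadow_lift_curve(x):.2f}")
```

With the standard curve the brightest input clips to pure white, while the shadow-lift curve keeps it below clipping at the cost of brighter, lower-contrast shadows, which is exactly the choice the camera cannot make for you.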
In low-light, high-ISO scenes, because denoising to get a cleaner picture without washing out details is very difficult, and, today, the most advanced denoising software needs more processing power and time than is available in cameras.
But maybe tomorrow this processing will be possible in-camera, just as lens distortion, corner fall-off, chromatic aberrations and even diffraction can now be corrected automatically in cameras using lens profiles that previously could only be applied in post-production.
Thus, IMO, the main difference is not the cooker but the cook:
In camera, everything is processed automatically, exactly as the manufacturer's engineers designed it.
You can only fine-tune the output settings to your taste: white balance, hue, saturation, contrast, sharpening, and highlight/shadow rendering. It is like having dinner in a restaurant: you can choose the restaurant and thus the cook, the menu, the sauces that go with it and the wine, but you are not the cook.
Moreover, you must choose the JPEG settings before shooting; it is a one-shot, no-second-chances affair, as most of the raw information will be thrown away.
Unless you shoot RAW+JPEG: the saved RAW will allow you a second try once back home.
Processing the RAW at home is like cooking your own meal: you can really customize everything to your taste and your present mood (which may vary from day to day).
And you may spend minutes or even hours running sophisticated post-production programs, starting over and trying different settings until you get the final JPEG output that best suits this specific scene.
--
Tatouzou,
https://www.flickr.com/photos/70066783@N06/