In an image, it is the differences between the outputs from the pixels. These can be analysed as spatial waveforms, just as digital sound signals can be analysed as temporal waveforms. (Both are limited by the sampling frequency.)
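As a minimal sketch of that analogy (the numbers here are illustrative, not from any particular sensor): a single row of pixels can be read as a 1-D spatial waveform and analysed with the same tools as an audio sample stream, with the sampling frequency setting the same Nyquist limit on recoverable frequencies.

```python
import numpy as np

# A single row of pixel values read as a 1-D spatial waveform.
# Sample a sinusoidal brightness pattern at pixel positions; the
# highest spatial frequency that survives is half the sampling
# rate (the Nyquist limit), just as with audio samples in time.
pixels_per_unit = 100           # spatial sampling frequency (hypothetical)
x = np.arange(0, 1, 1 / pixels_per_unit)

cycles = 10                     # well below Nyquist (50 cycles/unit)
row = 128 + 100 * np.sin(2 * np.pi * cycles * x)

# The dominant spatial frequency is recoverable from the samples.
spectrum = np.abs(np.fft.rfft(row - row.mean()))
peak = int(np.argmax(spectrum))  # bin index = cycles per unit length
print(peak)                      # 10
```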
The engineering definition of 'signal' is 'a representation of information'. You're confusing the carrier of the 'signal' with the 'signal' itself. That is very common.
But at least you gave a definition, unlike the person of whom the question was asked.
Once one has a correct definition of 'signal' it's clear that the idea that the 'signal' is amplified is absurd. The carrier of the signal is amplified. It makes no difference to the information which the signal represents.
You play on words a bit too much, saying that everybody is wrong; it becomes a bit irritating. Especially when it is nitpicking.
It's not nitpicking. You're actually just plain wrong. ISO is not 'amplification'. So rather than just tell you 'you're wrong' (oh well, I have now), it's better for you to work through it yourself and understand why you're wrong. See above: what is 'the signal'? You will find that you can't give a meaningful answer, because you're already in meaningless territory.
If you multiply all the numbers representing samples in a wave, what happens to the amplitude of the wave?
As I said above, the carrier is amplified. The 'signal' is unaffected, because the information represented does not change.
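The point being argued can be sketched numerically (a toy example, not anyone's actual pipeline): multiplying every sample by a constant gain scales the amplitude of the carrier, but the relative pattern the samples represent is unchanged, so normalising both waves gives identical results.

```python
import numpy as np

# One cycle of a sine wave, sampled at 64 points.
wave = np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))

# Apply a constant gain: the carrier's amplitude is multiplied...
amplified = 4.0 * wave
print(amplified.max() / wave.max())   # 4.0

# ...but the information represented is the same: normalising each
# wave to unit peak gives identical sample values.
same = np.allclose(wave / np.abs(wave).max(),
                   amplified / np.abs(amplified).max())
print(same)                           # True
```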
I will continue using "pp" because lots of people understand it. Try using "p" instead; good luck!
Again, different things. The latent image needs 'processing'. Once you've 'processed' you can apply 'post processing' operations. The distinction is important, because changing the lightness in post-processing can have very different effects from setting the lightness in processing.
I don't quite get that distinction. You load a raw file into a conversion program and it does things like dealing with the colour filter array, and it alters the numbers according to, for instance, the white balance given in the metadata. You can immediately alter that WB if you want. Where is the sharp line between processing and PP? -- especially for a monochrome sensor.
Typically, non-linear encoding. Plus, with each additional stage you go through, you lose information. So, once you demosaic, make colour transformations (which are often not reversible) and so on, you've lost information. Always best to develop straight from the Bayer file to your desired brightness/tone curve/WB etc. and save 'post processing' for image alteration.
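A toy illustration of that information loss (the bit depths and gamma value here are assumptions for the sketch, not any specific camera or converter): taking a 12-bit linear ramp to a gamma-encoded 8-bit output directly preserves more distinct tonal values than going through an intermediate 8-bit linear stage first, because the intermediate quantisation collapses distinct raw values together.

```python
import numpy as np

def gamma8(linear, max_in):
    """Encode linear values to 8-bit with an assumed 2.2 gamma."""
    return np.round(255 * (linear / max_in) ** (1 / 2.2)).astype(int)

raw = np.arange(0, 4096)              # 12-bit linear 'raw' ramp

# Develop straight from the raw values in one step.
direct = gamma8(raw, 4095)

# Go through an intermediate 8-bit linear stage first, then encode.
linear8 = np.round(255 * raw / 4095)
staged = gamma8(linear8, 255)

# The staged pipeline ends up with fewer distinct output codes,
# especially in the shadows where the gamma curve is steepest.
print(len(np.unique(direct)), len(np.unique(staged)))
```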
Conflating different ideas into the same word is the root of confusion. Don't do it, especially around beginners.
But also, don't assume that everyone agrees with your definitions of words.
I don't. That's why I rarely make my own definitions; I use the ones that are current in the field, so long as they have been properly defined. I also like to avoid conflating two different concepts into one word. For instance, the difference between 'processing' and 'post processing'. If you say they are the same thing, it becomes impossible to give people the good advice I gave above.
They often don't, and people argue over meanings for centuries.
Not really. In science and engineering, once a concept has been properly defined, it stays defined. Some people might prefer different terms, and different terms often circulate, but they are clearly and formally defined, so people don't argue about the definitions.