ChatGPT tells you is bigger camera sensor better?

Me: "Fully describe Einstein's thinking on the Theory of Relativity."

ChatGPT: "Albert Einstein's theory of relativity is one of the most important scientific theories of the 20th century, and it fundamentally transformed our understanding of space, time, and gravity. There are two main components to Einstein's theory: the special theory of relativity and the general theory of relativity.

The special theory of relativity was first published by Einstein in 1905. The central idea of the special theory of relativity is that the laws of physics are the same for all observers in uniform motion relative to each other. This means that if two people are moving at a constant speed relative to each other, then they will see the same physical phenomena and measure the same physical quantities. This principle is called the principle of relativity.

However, Einstein also showed that the speed of light is constant for all observers, regardless of their motion relative to the light source. This means that if two people are moving at different speeds, they will measure the same speed of light. This led Einstein to the conclusion that time and space are not absolute, but rather they are relative to the observer's motion.

The special theory of relativity also introduced the concept of time dilation, which means that time passes more slowly for objects moving at high speeds relative to a stationary observer. This effect has been experimentally verified many times, and it has important implications for high-speed travel and the behavior of subatomic particles.

The general theory of relativity, which Einstein published in 1915, built on the ideas of the special theory of relativity and extended them to include the effects of gravity. Einstein proposed that gravity is not a force that acts between objects, but rather it is a curvature of spacetime caused by the presence of mass and energy. In other words, objects with mass or energy cause a distortion in the fabric of spacetime, and this curvature affects the motion of other objects in the vicinity.

The general theory of relativity made several predictions that have been confirmed by experimental observation, including the bending of light by gravity, the precession of the orbit of Mercury, and the existence of black holes. It also provides a framework for understanding the large-scale structure and evolution of the universe.

Overall, Einstein's theory of relativity revolutionized our understanding of space, time, and gravity, and it continues to be one of the most important and influential theories in physics today."
 
RedDog Steve aptly mentioned information, which reminded me of information theory.

According to this theory, the value of information is inversely related to its likelihood/certainty.
No. The value of information is based on three qualities: relevance, accuracy and timeliness.
Sure, but you're confusing quality and value.

Two pieces of information that have exactly the same qualities, i.e. both 100% accurate, relevant, etc., may differ hugely in value. Value is what one will be able to do with the information.
 
No it doesn't. It identifies some benefits of a bigger sensor as well as some benefits of a smaller sensor without picking a winner. In fact, if you had bothered to read to the end of the ChatGPT output you posted, you would have read this: "However, it's important to note that a bigger sensor is not always better."
See my post above. The problem with this answer is that it attributes properties associated with larger sensors solely to the sensor without mentioning the lens choice. Someone reading this answer would have no clue that to gain the purported benefits you would also need to use a larger lens opening and, therefore, a larger lens.

ChatGPT is regurgitating what it reads without a deep understanding of the underlying tech. It should not be regarded as a source of fact.
 
[image: ChatGPT answer screenshot]
What was the question?

If you ask it to tell you why FF is better, then you get the answer you would expect.
If you ask it to tell you why m4/3 is better and it still tells you FF is better, then you have a problem.

Just posting an answer without the question is manipulative at best and absolutely useless for making a point against ChatGPT.

--
The most important thing for photography is located between your ears and not in front of your face! - Jürgen Becker
 
No it doesn't. It identifies some benefits of a bigger sensor as well as some benefits of a smaller sensor without picking a winner. In fact, if you had bothered to read to the end of the ChatGPT output you posted, you would have read this: "However, it's important to note that a bigger sensor is not always better."
See my post above. The problem with this answer is that it attributes properties associated with larger sensors solely to the sensor without mentioning the lens choice. Someone reading this answer would have no clue that to gain the purported benefits you would also need to use a larger lens opening and, therefore, a larger lens.

ChatGPT is regurgitating what it reads without a deep understanding of the underlying tech. It should not be regarded as a source of fact.
My point was that the subject line "ChatGPT tells you is bigger camera sensor better?" is nowhere asserted by the answer. This point is invariant of lens choice.
 
RedDog Steve aptly mentioned information, which reminded me of information theory.

According to this theory, the value of information is inversely related to its likelihood/certainty.
No. The value of information is based on three qualities: relevance, accuracy and timeliness.
Sure, but you're confusing quality and value.
I wasn't, but I understand. Perhaps I should have used the word "characteristics" instead of "qualities".
Two pieces of information that have exactly the same qualities, i.e. both 100% accurate, relevant, etc., may differ hugely in value. Value is what one will be able to do with the information.
True if those two pieces of information pertain to different applications; not so true otherwise.

But even so: you've asserted that the piece of information with the lower likelihood/certainty is the more valuable of the two. In practice, with low-likelihood information we're talking confidence intervals. And the lower the likelihood, the bigger the confidence intervals. How much value would you place on information with big confidence intervals? More than zero?
 
RedDog Steve aptly mentioned information, which reminded me of information theory.

According to this theory, the value of information is inversely related to its likelihood/certainty.
No. The value of information is based on three qualities: relevance, accuracy and timeliness.
Sure, but you're confusing quality and value.
I wasn't, but I understand. Perhaps I should have used the word "characteristics" instead of "qualities".
No, you were correct: accuracy is certainly a quality... and "quality" is certainly more accurate a term than "characteristic"!

Quality is the ability of a product/service to meet our expectations. Preferably, quality should be measurable. What do I expect from my camera? Become operational within one second when I switch it on, handle more than 10 stops of DR, be able to produce 10x15" prints at ISO 6400 with no visible aberration, ...

Value is another concept: it's the benefit one expects to get from a service/product. I enjoy going out to take pics, but as an amateur, if my combo breaks down or gets stolen, that would be no big deal. Pros who make a living with the same quality gear probably grant it a much higher value than I do.
Two pieces of information that have exactly the same qualities, i.e. both 100% accurate, relevant, etc., may differ hugely in value. Value is what one will be able to do with the information.
True if those two pieces of information pertain to different applications; not so true otherwise.

But even so: you've asserted that the piece of information with the lower likelihood/certainty is the more valuable of the two. In practice, with low-likelihood information we're talking confidence intervals. And the lower the likelihood, the bigger the confidence intervals. How much value would you place on information with big confidence intervals? More than zero?
A classic example of "unlikely" information value is the stock exchange. If 99 market analysts share the same opinion, e.g. that Nikon shares will rise by 5% this year, the only one who rightly predicts the opposite will have provided much more valuable information. Remember the old saying about Rockefeller? He "made his fortune by selling too early" :-)

Now, the question of the confidence interval: in order to make decisions - which is where information is key - one has to factor in another element: the impact of events to come. If an event is deemed very unlikely but its impact would be huge, the potential impact is potential cost x likelihood. If the information is not 100% certain, then one should factor in the error margin and adjust the impact accordingly. This method (with more sophisticated parameters) is used, for instance, to drive public investment decisions, namely to select among various projects/scenarios. It's also used for risk analysis. Of course, I simplified a bit; other elements have to be taken into account, such as the cost of the scenarios and their expected efficiency.
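The impact calculation described in that post can be sketched in a few lines of Python (a toy illustration only; the function name and all numbers are invented for the example, not taken from any actual public-investment methodology):

```python
def expected_impact(cost, likelihood, error_margin=0.0):
    """Potential impact = potential cost x likelihood, with the probability
    estimate widened by an error margin when the information is not certain."""
    low = max(0.0, likelihood - error_margin)
    high = min(1.0, likelihood + error_margin)
    return cost * low, cost * likelihood, cost * high

# A very unlikely event with a huge impact still carries a real expected cost:
lo, mid, hi = expected_impact(cost=1_000_000_000, likelihood=1e-4, error_margin=5e-5)
print(f"{lo:.0f} .. {mid:.0f} .. {hi:.0f}")  # 50000 .. 100000 .. 150000
```

The low/high pair is the crude stand-in for the confidence interval the two posters are arguing about: the less certain the probability estimate, the wider the band of plausible impacts.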
 
No it doesn't. It identifies some benefits of a bigger sensor as well as some benefits of a smaller sensor without picking a winner. In fact, if you had bothered to read to the end of the ChatGPT output you posted, you would have read this: "However, it's important to note that a bigger sensor is not always better."
See my post above. The problem with this answer is that it attributes properties associated with larger sensors solely to the sensor without mentioning the lens choice. Someone reading this answer would have no clue that to gain the purported benefits you would also need to use a larger lens opening and, therefore, a larger lens.

ChatGPT is regurgitating what it reads without a deep understanding of the underlying tech. It should not be regarded as a source of fact.
My point was that the subject line "ChatGPT tells you is bigger camera sensor better?" is nowhere asserted by the answer. This point is invariant of lens choice.
Fair enough.
 
RedDog Steve aptly mentioned information, which reminded me of information theory.

According to this theory, the value of information is inversely related to its likelihood/certainty.
No. The value of information is based on three qualities: relevance, accuracy and timeliness.
Sure, but you're confusing quality and value.
I wasn't, but I understand. Perhaps I should have used the word "characteristics" instead of "qualities".
No, you were correct: accuracy is certainly a quality... and "quality" is certainly more accurate a term than "characteristic"!

Quality is the ability of a product/service to meet our expectations. Preferably, quality should be measurable. What do I expect from my camera? Become operational within one second when I switch it on, handle more than 10 stops of DR, be able to produce 10x15" prints at ISO 6400 with no visible aberration, ...

Value is another concept: it's the benefit one expects to get from a service/product. I enjoy going out to take pics, but as an amateur, if my combo breaks down or gets stolen, that would be no big deal. Pros who make a living with the same quality gear probably grant it a much higher value than I do.
All true for products and services. But in this thread we're talking about valuing information. That's a little less tangible. What is a piece of information worth? I assert that its worth is based on its efficacy in influencing or confirming decisions, and that efficacy is based on the intersection of relevance, accuracy and timeliness.
Two pieces of information that have exactly the same qualities, i.e. both 100% accurate, relevant, etc., may differ hugely in value. Value is what one will be able to do with the information.
True if those two pieces of information pertain to different applications; not so true otherwise.

But even so: you've asserted that the piece of information with the lower likelihood/certainty is the more valuable of the two. In practice, with low-likelihood information we're talking confidence intervals. And the lower the likelihood, the bigger the confidence intervals. How much value would you place on information with big confidence intervals? More than zero?
A classic example of "unlikely" information value is the stock exchange. If 99 market analysts share the same opinion, e.g. that Nikon shares will rise by 5% this year, the only one who rightly predicts the opposite will have provided much more valuable information. Remember the old saying about Rockefeller? He "made his fortune by selling too early" :-)
His Standard Oil monopoly may have also contributed to his fortune.
Now, the question of the confidence interval: in order to make decisions - which is where information is key - one has to factor in another element: the impact of events to come.
Now you're introducing a new element. I suppose I should have cited that most famous of economists' qualifiers: ceteris paribus.
If an event is deemed very unlikely, but its impact would be huge,
Ok, let's consider a couple of examples of such events: (1) winning the lottery, (2) getting struck by lightning.
the potential impact is potential cost x likelihood.
No! This is the problem with the concept of expected value (or potential impact in your words). Most events are discrete, not continuous. You either get struck by lightning or you don't. The National Weather Service in the U.S. says a person has a 1 in 15,300 chance of getting struck by lightning, but no one goes around saying they're 0.0065% dead from lightning strikes.
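For what it's worth, the percentage in that reply does follow from the NWS odds it quotes; the conversion is just:

```python
p = 1 / 15_300            # the 1-in-15,300 lifetime odds quoted above
print(f"{p * 100:.4f}%")  # 0.0065%
```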
If the information is not 100% certain, then one should factor in the error margin and adjust the impact accordingly. This method (with more sophisticated parameters) is used, for instance, to drive public investment decisions, namely to select among various projects/scenarios. It's also used for risk analysis. Of course, I simplified a bit; other elements have to be taken into account, such as the cost of the scenarios and their expected efficiency.
Again, all true. But also again: nothing here provides evidence against my original assertion that the value of information is a function of its relevance, accuracy and timeliness.
 
[...] Value is another concept: it's the benefit one expects to get from a service/product. I enjoy going out to take pics, but as an amateur, if my combo breaks down or gets stolen, that would be no big deal. Pros who make a living with the same quality gear probably grant it a much higher value than I do.
All true for products and services. But in this thread we're talking about valuing information. That's a little less tangible. What is a piece of information worth? I assert that its worth is based on its efficacy in influencing or confirming decisions, and that efficacy is based on the intersection of relevance, accuracy and timeliness. [...]
Tangible/intangible is not relevant to defining what a product is. Software is intangible, yet its quality can be measured (reliability, response time, ...).
If one agrees with your 'formula' above, one can measure accuracy, relevance, and timeliness; therefore, you should be able to compute information value?
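Taking that rhetorical question literally for a moment, one naive way to turn the three measured factors into a single score would be a product (purely illustrative; nobody in the thread proposed this exact formula, and scaling each factor to [0, 1] is an assumption of the sketch):

```python
def information_value(relevance, accuracy, timeliness):
    """Naive 'intersection' scoring: each factor scaled to [0, 1];
    the product collapses to zero if any single factor is missing."""
    return relevance * accuracy * timeliness

print(information_value(1.0, 1.0, 0.0))           # 0.0 -- perfectly accurate but hopelessly stale
print(round(information_value(0.9, 0.8, 1.0), 2))  # 0.72
```

A multiplicative score captures "intersection" better than a sum would: a sum of 1.0 + 1.0 + 0.0 still looks respectable, while the product correctly values stale-but-accurate information at zero.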
Now, the question of the confidence interval: in order to make decisions - which is where information is key - one has to factor in another element: the impact of events to come.
Now you're introducing a new element. I suppose I should have cited that most famous of economists' qualifiers: ceteris paribus.
Yep. Their best disclaimer ever! Economists use it to explain why their predictions prove wrong. That's what makes most of their predictions useless: they don't bother to assess alternative scenarios... and when they do, they won't publish the results. Actually, that brings grist to my mill.
[...] This is the problem with the concept of expected value (or potential impact in your words). Most events are discrete, not continuous. You either get struck by lightning or you don't. The National Weather Service in the U.S. says a person has a 1 in 15,300 chance of getting struck by lightning, but no one goes around saying they're 0.0065% dead from lightning strikes.
That's simply because when you mentioned information effectiveness, you forgot to consider the interests of the involved parties. Of course, for the individual, this is hardly information - we all know that we are very unlikely to win the lottery or get struck by lightning - but for the execs in charge, it is another story.

There's a road in my neighbourhood which has the highest rate of accidents in the area. I may have a 0.00000001 chance of dying there, so I don't worry much (actually, I drive more carefully there), but I can tell you that the local authorities don't think the same. This road has been modified several times, taking into account the cost of works and the value of expected lives spared. Each human life is valued at 3 million euros here (I know it's more in Canada and the USA, 5 million USD IIRC). So the impact is quite simple to compute, and it is used to decide about investment.
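The road-investment trade-off described above boils down to a comparison like the following sketch (the 3-million-euro figure is the one quoted in the post; the project numbers are made up for illustration):

```python
VALUE_OF_LIFE_EUR = 3_000_000  # the per-life figure quoted for this jurisdiction

def road_project_net_benefit(cost_of_works, lives_saved_per_year, years):
    """Monetised value of expected lives spared over the horizon,
    minus the cost of the works; approve the project when positive."""
    benefit = lives_saved_per_year * years * VALUE_OF_LIFE_EUR
    return benefit - cost_of_works

# Hypothetical scheme: 5M euros of works, 0.1 expected lives saved/year, 30 years
print(road_project_net_benefit(5_000_000, 0.1, 30) > 0)  # True
```

Note that the 0.1 lives/year input is itself an expected value (accident probability x exposure), which is exactly why the authority and the individual driver value the same information so differently.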
Again, all true. But also again: nothing here provides evidence against my original assertion that the value of information is a function of its relevance, accuracy and timeliness.
I won't disagree with this wording, all I said is: there are other factors to consider.

In addition, working out an equation that would translate this function is no piece of cake. I wonder how it would handle both true information and fake news? Fake news seems to be much more effective than real news, judging by the amount of effort spent by some governments and corporations... The same goes if one sets up a fake image, or publishes a real image with a false statement (back to photography).

Interesting discussion, anyways!

regards, Cherry
 
It would be interesting if this whole thread were debated using AI... might be one for the future, that.
 
Agree, someone should ask ChatGPT which camera companies are involved in M4/3 - surely we will get "Olympus" and "a few others" - mainly OMDS. :)
Q: "What camera companies are involved in Micro Four Thirds?"

ChatGPT: "The Micro Four Thirds (MFT) system was jointly developed by Panasonic and Olympus. Therefore, the major camera companies involved in the MFT system are Panasonic and Olympus.

However, there are also other companies that produce cameras and lenses for the Micro Four Thirds system, including:
  • Blackmagic Design
  • DJI
  • Kodak
  • Sigma
  • Voigtlander
  • Yongnuo
These companies offer a range of products for the Micro Four Thirds system, including cameras, lenses, and accessories."
Very correct, yet it missed the nuances. Olympus has gone, OMDS has appeared, and Panasonic yawns..... the others are hardly noticed by the bulk of M4/3 users.

Mention the Chinese lens makers that make affordable and quite usable MF lenses in M4/3 mount, and the chatbot ignores them just as much as most posters on this forum, who say "what are MF lenses?" :) (My little joke of course)

Not sure that I would seek out this service for deep and meaningful answers, but it seems OK on plain, commonly understood facts - something that any interested person could find out quite easily under their own steam.

... and so the world might stumble from flashes of pure elegant thought to somewhere around safe comfortable everyday average.

At least it is somewhat better than previous robot-chat-speak standard phraseology.
 
No it doesn't. It identifies some benefits of a bigger sensor as well as some benefits of a smaller sensor without picking a winner. In fact, if you had bothered to read to the end of the ChatGPT output you posted, you would have read this: "However, it's important to note that a bigger sensor is not always better."
See my post above. The problem with this answer is that it attributes properties associated with larger sensors solely to the sensor without mentioning the lens choice. Someone reading this answer would have no clue that to gain the purported benefits you would also need to use a larger lens opening and, therefore, a larger lens.

ChatGPT is regurgitating what it reads without a deep understanding of the underlying tech. It should not be regarded as a source of fact.
Selective fact? If there are 100 opinions then publish the most common one held as absolute truth?

There are many reasons why this could be quite dangerous and more dangerous than the trivial pursuit of arguing over sensor size.

How many times have we heard someone say "My (insert professional) says .... blah blah ..." where hearsay as interpreted via the speaker is given the aura of expert substance for others to believe.

Now we are going to have the same phrase adjusted to "My Chatbot authority says believe me it is right".

This is surely going to elevate social media opinions to a whole new level of subjective propaganda.
 
ChatGPT is trained only on a data set that ends in September 2021. This pretty much explains why OMDS is not mentioned. Yes, this is a limitation.
 
ChatGPT is trained only on a data set that ends in September 2021. This pretty much explains why OMDS is not mentioned. Yes, this is a limitation.
So the chatbot is limited to providing answers that will always be at least a year and a half out of date?

That is truly a critical limitation and a dealbreaker for anyone who needs answers to be reliably "correct".
 
ChatGPT is trained only on a data set that ends in September 2021. This pretty much explains why OMDS is not mentioned. Yes, this is a limitation.
So the chatbot is limited to providing answers that will always be at least a year and a half out of date?

That is truly a critical limitation and a dealbreaker for anyone who needs answers to be reliably "correct".
This is the limitation of this chatbot and the data it has been trained on, yes. In other words, ChatGPT has limitations. Who said it doesn't? The attitude seems to be that if a chatbot is not 100% reliable, it's totally useless. This is ridiculous.

I find it remarkable that AI can give much better and more concise answers than most of the people in this forum. It has been proven to produce computational code much faster than humans, though at the moment humans are required to verify and, in many cases, correct minor errors in the code. Still, it can reduce coding time from 1 hour to approx. 20 minutes.

AI can be scary and this can lead to an existential crisis for those who believe mind is beyond the physical world. Personally I think we are not supernatural.
 
ChatGPT is trained only on a data set that ends in September 2021. This pretty much explains why OMDS is not mentioned. Yes, this is a limitation.
So the chatbot is limited to providing answers that will always be at least a year and a half out of date?

That is truly a critical limitation and a dealbreaker for anyone who needs answers to be reliably "correct".
Not only that, but it is likely to regurgitate the common consensus, whether it is right or wrong.

Leading to the medium of mediocrity - it must be right, as the most popular opinion is obviously correct and no thinking outside the box is allowed? ChatGPT said it.
 
What is breathtaking is not so much the quality of the information (which is actually no worse than a lot of stuff on the internet), but the polished and coherent way in which it's expressed. OK so it's only regurgitating what it's harvested from the web, but it has done it in a more articulate way than a lot of real humans could have done.

Quite disturbing in a way - it is becoming difficult to know what has been written by a real person and what has been written by... what? Easily exploited by those with sinister intentions.
 
Garbage in, garbage out still applies to AI systems and when it comes to sensor size there sure is a lot of garbage on the internet for it to collect.
 
