ChatGPT

Ching-Kuang Shene (Guest)
Hank asked ChatGPT about adapted lenses, and I have tried to fool ChatGPT about cameras. Here are some questions:
  • I asked "Contax Cameras". The following is the answer. Apparently, ChatGPT missed the East German Contax SLR line completely. The CZJ Contax was the first modern SLR, released in 1949, and the last one was the Contax F.
[screenshots of ChatGPT's answer]
  • My next question was as simple as "Mirotar".
[screenshot of ChatGPT's answer]
  • OK, I will try one more question on mirror lenses. I typed in "CZJ Spieglobjektiv 1000mm", for which I have a post on this forum. Here is the answer. Did you notice the similarity between "CZJ Spieglobjektiv 1000mm" and "Mirotar"? Toby would bring this 13 kg big tank for wildlife or sports photography. None of the CZJ Spiegelobjektiv mirror lenses had f/8: the 500mm is f/4 and the 1000mm is f/5.6. During its lifespan, CZJ never made T-mount lenses. So this is another piece of junk.
[screenshot of ChatGPT's answer]
  • I also tried "Nikon AF 85mm f/2.8 for F3" and the answer is shown below. Another piece of junk information. The first two paragraphs are generic. The third paragraph is completely wrong, because the F3 is a full-frame SLR, and when the F3 was current the DX format did not exist. Well, you may want to say that could be the APS format. Yes, the size is similar, but Nikon had to wait a couple of years before introducing its APS cameras. ChatGPT does not seem to know the F3 is an MF camera.
[screenshot of ChatGPT's answer]
  • My next question was "First autofocus SLR" and the answer is shown below. Obviously, the answer "Konica C35 AF" is wrong: the C35 AF was a compact camera, not an SLR. The first AF SLR was a Polaroid, while the first AF SLR with interchangeable lenses was the Minolta Maxxum 7000.
[screenshot of ChatGPT's answer]
  • My next question was "who is Robion Kirby?" Prof. Kirby has contributed very significantly to the mathematical field of topology and solved a number of long-standing problems. Missing Prof. Kirby is unforgivable. Here is the Wikipedia page about Prof. Kirby.
[screenshot of ChatGPT's answer]

I don't trust any form of AI. AI is just a computer program, and there are problems that cannot be solved by any computer program, such as Turing's Halting Problem. A computer program is based on mathematical logic, and in any logic system powerful enough to count there exists a proposition that cannot be proved within the system. This is Gödel's Incompleteness Theorem. Therefore, any logical system that can count has a proposition that is not provable within it, and there are problems that are not computable by any computer program as long as the computer architecture being used is still of the Turing type.
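To make the Halting Problem argument concrete, here is a minimal sketch of Turing's diagonalization in Python. The halts() decider is purely hypothetical (that is the whole point of the argument), and all names are made up for illustration:

```python
# Assume, for contradiction, a total decider halts(program, data)
# that always answers whether program(data) eventually stops.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no such total decider can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running 'program' on its own source.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    return            # halt immediately if the oracle says "loops"

# Now consider paradox(paradox): halts(paradox, paradox) can return
# neither True nor False without contradicting itself, so the
# assumed decider halts() cannot exist.
```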

In the 1950's and early 1960's there was a huge debate regarding the capability of AI. We perhaps should look back before being too blindly led by AI.

CK
 
  • I asked ChatGPT this question, "Nikon 2000mm f/11", and got the following ridiculous answer. This answer is really FUNNY. The Nikon 2000mm f/11 was discontinued decades before the Nikon Z was released. Moreover, most Nikon Z cameras are full-frame cameras, so there is no "equivalent focal length" to speak of, unless the lens is being used on a Z50 or Z30, which are hobbyist cameras.
[screenshot of ChatGPT's answer]
  • Next, the answer to "Mirotar 500mm f/8" is shown below. The CZ Mirotar 500mm f/8 appeared rather late and was made for the Contax RTS SLR rather than the 1970s Contarex. BTW, no one would say a mirror lens has beautiful bokeh. ;-)
[screenshot of ChatGPT's answer]
  • It appears that ChatGPT does not know the "Nikon Fun Fun Lens":
[screenshots of ChatGPT's answer]
  • The answer to "Miroflex" goes as follows.
[screenshot of ChatGPT's answer]
  • I tried to help ChatGPT a little and used "Ikon Miroflex". Then I got this answer. If you are not familiar with this old camera, please take a look at this page: Zeiss Ikon Miroflex. The Miroflex was neither a TLR nor a 120 roll-film camera. The years were also wrong.
[screenshot of ChatGPT's answer]

OK, enough is enough. We computer scientists used to say: Garbage In, Garbage Out. If an AI system is trained on so much junk information, the output will likely be junk as well. Judging by the answers I got from ChatGPT, I believe its knowledge of cameras and lenses may be below average. Frequently, a well-educated brain is much better than an AI-trained artificial brain.

CK
 
CK,

The shorter your prompt, the less info ChatGPT has to filter down what it constructs.

It also maintains context, so asking a prompt by itself gives very different answers from asking a prompt after a few other prompts. This is why it tends to keep similar elements across prompts.

The basic tech in ChatGPT is creating a random set of features that it effectively constrains by using the prompts. There is no attempt made to understand any of the text; ChatGPT is just following statistical patterns from the training data in that feature space. This is why the responses generally read as good English but suffer from statistically correlated information being thrown in where it doesn't apply and is often completely incorrect. As such, this class of AI will always get things wrong while sounding highly credible -- I can't imagine a tech more suited to a world in which unvetted hallucinations often get amplified more than truths do. :-(
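As a toy illustration of that claim (this is not ChatGPT's actual architecture, which is a transformer network; the three-sentence corpus is made up), here is what purely statistics-driven generation looks like:

```python
import random
from collections import defaultdict

# Toy bigram model: the next word is chosen purely by how often it
# followed the current word in the training text. There is no
# understanding anywhere, yet the output reads like plausible
# English -- and may well be false.

training_text = (
    "the contax was the first modern slr "
    "the contax f was the last model "
    "the mirotar was a mirror lens for the contax"
)

follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=10):
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))  # sample by raw frequency
    return " ".join(out)

print(generate("the"))
# e.g. "the contax f was the first modern slr" -- fluent, and wrong.
```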

Personally, the only trained AI I really trust is Genetic Programming (GP), as defined by Koza. The reason is simply that, unlike other trained AI, GP produces an algorithm that can then be analyzed, understood, and even revised by humans.
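For readers who haven't met GP, here is a drastically simplified sketch of the idea. Real Koza-style GP evolves a population with subtree crossover and mutation; this toy (with a made-up target function) just randomly searches small expression trees, but the key property survives: the result is a readable program.

```python
import random

# Toy sketch in the spirit of Genetic Programming: search for an
# arithmetic expression tree that fits sample data.

OPS = ("+", "*", "-")
TERMS = ("x", "1", "2")

def random_expr(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(OPS),
            random_expr(depth - 1),
            random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, str):      # numeric terminal
        return int(expr)
    op, left, right = expr
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a * b if op == "*" else a - b

def error(expr):
    # Made-up target behavior f(x) = x*x + x, known only via samples.
    return sum(abs(evaluate(expr, x) - (x * x + x)) for x in range(-5, 6))

best = random_expr()
for _ in range(20000):
    candidate = random_expr()
    if error(candidate) < error(best):
        best = candidate

# Unlike a neural network's opaque weights, the winner is an explicit
# expression tree a human can read, audit, and revise, for example:
# ('+', ('*', 'x', 'x'), 'x')
print(best, error(best))
```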
 
CK,

The shorter your prompt, the less info ChatGPT has to filter down what it constructs.
Yes, you are right. On the other hand, when the given word or words are rather specific, the answer should be reasonably accurate. For example, "Mirotar" and "CZJ Spieglobjektiv" do have a specific meaning, just like a person's name.
The basic tech in ChatGPT is creating a random set of features that it effectively constrains by using the prompts. There is no attempt made to understand any of the text; ChatGPT is just following statistical patterns from the training data in that feature space. This is why the responses generally read as good English but suffer from statistically correlated information being thrown in where it doesn't apply and is often completely incorrect. As such, this class of AI will always get things wrong while sounding highly credible -- I can't imagine a tech more suited to a world in which unvetted hallucinations often get amplified more than truths do. :-(
That does not matter. If a great figure like Robion Kirby becomes unknown, it has a problem. This is why I said earlier that a well-educated brain is frequently better than a grab-trained artificial brain.
Personally, the only trained AI I really trust is Genetic Programming (GP), as defined by Koza. The reason is simply that, unlike other trained AI, GP produces an algorithm that can then be analyzed, understood, and even revised by humans.
I worked on neural networks when I was a graduate student at JHU. However, I just don't trust "training" by whatever means or mechanism. It all imitates how the human brain works, but human beings cannot understand their own brains completely and effectively, so the imitation cannot be faithful. Consider Gödel's Incompleteness Theorem and Turing's Halting Problem. These are two obstacles for any von Neumann/Turing architecture-based computer to imitate human intelligence.

CK
 
AI is obviously a surge toward the super-bland that knows everything and nothing at the same time. It obviously cannot innovate anything that is not already in its database, but provide it with a correction and it might remember it in future.

It might make life easier but it is a great danger to inspiration.

Hey, but most people prefer bland .... it is much safer to look, talk, wear, and buy things that are the same as everyone else's choice.
 
Tom,

I am afraid that once these bland systems become very widespread, incorrect or biased information can be spread widely. This can be used in information wars to mislead uninitiated people.

I am trying to archive my posts here. In fact, my plan started two years ago, and I actually purchased a domain name. However, I have not been able to add any articles to that blog. The question in my mind is: is there a trustworthy site where I can archive my images?

CK
 
CK

You can ask to be supplied with a file of all your posts to dpreview. Lodging a request is done from the dpreview notice box in the pane on the right-hand side ("After 25 years of operation ....").

As far as storing data goes, I don't really trust 'the cloud', as it can be shut down at any time, just like dpreview, and it is even more anonymous. I used hard-drive backups, but this became too complex, and I eventually switched to a multi-drive RAID setup.

I'm not sure anyone will be interested in raking through my melange after me, but at least I have tried.

In the final analysis, dpreview was a site for bloggers who could not be bothered running their own site ..... :)

Tom
 
CK

Here is a link that you will find useful:

https://www.dpreview.com/forums/post/66963710
 
Your well-chosen examples are excellent demonstrations of the limits of automatic learning. They illustrate the danger of posing as an expert while knowing just a little.

The Gödel reference could be expanded to point out that sufficiently powerful formal systems either contain internal contradictions or undecidable statements, but if one adds a random component it might (MIGHT) hit upon a correct answer.

The chatbot is not trying to solve mathematical problems, so its performance as a lexicon could be improved by forcing it to read all reputable source material. A tip that should be forwarded to Microsoft, which is playing its tutor.

Do not expect it to solve issues where sources diverge and peter out (such as what Tomioka was up to before, and for a short while after, it was bought by Yashica, or whether Anaximander (or the zetetic sceptics) really were the founders of empirical science). The polite banter it wraps its answers in will remain.

p.
 
Hank asked ChatGPT about adapted lenses, and I have tried to fool ChatGPT about cameras.
Mh...
Here are some questions:
  • I asked "Contax Cameras".
This is not even a question.
  • The following is the answer. Apparently, ChatGPT missed the East German Contax SLR line completely. The CZJ Contax was the first modern SLR, released in 1949, and the last one was the Contax F.
Here's how it treated me.

[screenshot of ChatGPT's answer]

Alright, I don't know if that's correct. I take ChatGPT as something that can give clues, not an expert in any field. It's a master at reading and making inferences, not an SME. It has never seen any camera, much less touched one. It's a LANGUAGE AI.

[screenshot of ChatGPT's answer]

It misses the Contax S. So, without telling it that it missed anything, I just ask:
Me: What about the Zeiss Contax SLR?
[screenshot of ChatGPT's answer]

It probably makes many mistakes. But we have access to things like Klingsdale's book, etc., and ChatGPT is a language model, not a camera historian.
  • Another piece of junk from ChatGPT. My question: "Contaflex", and the answer is shown below.
"Contaflex" is not a question. Now, you typed "Contaflex", and Zeiss made the Contaflex 126 has a 135 lens (while different than the I, II, Super, etc). As you can see, the language models have to try to guess as to what you want, since as a category of things called Contaflex, its answer is correct. But yours isn't, because there is the 115mm front part for later models.
  • My next question was as simple as "Mirotar".
"Mirotar" is not a question. However, GPTChat is polite. Now, you are using an old version of GPT Chat. Version 4 has higher reasons abilities as compared to version 3 and 3.5.

[screenshot of ChatGPT's answer]
  • OK, I will try one more question on mirror lenses. I typed in "CZJ Spieglobjektiv 1000mm", for which I have a post on this forum. Here is the answer. Did you notice the similarity between "CZJ Spieglobjektiv 1000mm" and "Mirotar"?
Of course I noticed. Did you KNOW you can talk to ChatGPT in ANY language? You can actually mix and match words in any language, or use structures from more than one language even within one sentence. I tried one that used Hindi, Japanese, Italian, Spanish, Turkish, and English, and ChatGPT didn't blink; it answered the question, explaining the subject of my statement.

Therefore, when you ask it, in chat, about "Zeiss Spiegelobjektiv 1000mm", it will understand it semantically, literally as if you had just said "MIRROR LENS", which is what it means in German. It then makes sense for it to assume you are talking about the lens you had identified before.

So the problem we have is that we use "mirror lens" as a type of lens and, at the same time, as a name for a line of lenses.

Now, if you ask politely, it will have more context and be a bit closer to what you wanted.

[screenshot of ChatGPT's answer]

Did CZJ ever make a 1000mm f/4? I could not find anything online. Tom makes reference to having seen one listed in a 2018 post. Here are the ones it thinks it knows about:

[screenshot of ChatGPT's answer]
  • I also tried "Nikon AF 85mm f/2.8 for F3" and the answer is shown below.
It's clear you are good at fooling it about what you want. Could it be because you are using GPT-3, and trying things in English, which is your second language, and also because you want it to be fooled? Maybe it has figured out you love being fooled, and it's just pleasing you so you don't have to worry that much.

Here's what it said. Note I say "the F3" because, while we are omitting the word camera, we are referring to a camera and not an aperture. Is any of this more or less accurate? It shouldn't be, but maybe some of it is:

[screenshot of ChatGPT's answer]

Now, I would expect anyone to be ultra-confused about all the things released for cameras over the years. The naming conventions are so poor and ambiguous, and they vary by region as chosen by marketing departments. It's a minefield of problems.
  • My next question was "First autofocus SLR" and the answer is shown below. Obviously, the answer "Konica C35 AF" is wrong: the C35 AF was a compact camera, not an SLR. The first AF SLR was a Polaroid, while the first AF SLR with interchangeable lenses was the Minolta Maxxum 7000.
Any chance you are also confused? I'd expect GPT to be wrong here, but I did some research, and there's a chance you made some mistakes here too. Here's my chat about it.



[screenshot of ChatGPT's answer]



It's clear it knows about the Polaroid SX and believes it's an SLR. I don't even know if it existed or if this is true; I assumed it would not. But could it be that the OneStep was released in 1978? If so, then the Konica P&S would be the first AF camera.



[screenshot of ChatGPT's answer]

It's not clear to me that GPT-4 has no clue. It may be reflecting that the 77 didn't have it until '88. The first one, though not an SLR, was the Konica. The Maxxum was a later camera and was the first to combine TTL phase detection and AF, but the Pentax ME F came earlier.
  • My next question was "who is Robion Kirby?"
His full name is Robion Cromwell Kirby, not just Robion Kirby. You provided no context, used a different name than the full name, and there are hundreds of people with those names. And then you expected it to agree on the level of importance of the person relative to every other contribution, and felt offended. But here's the answer:

[screenshot of ChatGPT's answer]

I don't trust any form of AI.
I wouldn't either.
AI is just a computer program, and there are problems that cannot be solved by any computer program, such as Turing's Halting Problem.
We are a program, and can't solve the halting problem either. So what?
A computer program is based on mathematical logic, and in any logic system powerful enough to count there exists a proposition that cannot be proved within the system.
Gödel was a fool, and did not end well. All it means is that in languages you may have structures that allow you to state things that don't make sense. That's why some forms of recursion are not allowed in some systems. There's nothing "incomplete". The halting problem, though, is a different thing, and we have to live with it.
This is Gödel's Incompleteness Theorem. Therefore, any logical system that can count has a proposition that is not provable within it, and there are problems that are not computable by any computer program as long as the computer architecture being used is still of the Turing type.
So?
In the 1950's and early 1960's there was a huge debate regarding the capability of AI. We perhaps should look back before being too blindly led by AI.
We should. It may end up resulting in our extinction. But I think trying to "fool" the AI only fools you. Even at version 4, and being just a language model (there are many more models: visual ones like Diffusion are coming, etc.), it is a remarkable achievement. It can be made to produce nonsense, but it's amazingly powerful, in proportion to one's ability to provide the right texts to it. The system doesn't reason as a human; it is not a scientist; it just has a very good command of languages.
 
Alright, I don't know if that's correct.
No, it's not. Exakta is probably right; the Varex wasn't their first SLR, but it did get a pentaprism. Similarly, the "Graflex Speed Graphic" has a focal-plane shutter but no mirror SLR view: http://camera-wiki.org/wiki/Graflex_reflex_models
... It probably makes many mistakes. But we have access to things like Klingsdale's book, etc., and ChatGPT is a language model, not a camera historian.
Yup. It produces correct English and profusely apologizes when it gets things wrong.

It's really just statistically-driven phrase generation, filtered by how well it thinks your prompts describe the generated content.

In sum, lots of truthiness, less truth. ;-)
 
Alright, I don't know if that's correct.
No, it's not.
But it’s more right than CK’s.
Exakta is probably right; the Varex wasn't their first SLR, but it did get a pentaprism.
Not bad.
Similarly, the "Graflex Speed Graphic" has a focal-plane shutter but no mirror SLR view: http://camera-wiki.org/wiki/Graflex_reflex_models
But is ChatGPT wrong, or the article? The article has "reflex" in the URL and starts with “The Graflex is a large single-lens reflex camera”.
... It probably makes many mistakes. But we have access to things like Klingsdale's book, etc., and ChatGPT is a language model, not a camera historian.
Yup. It produces correct English and profusely apologizes when it gets things wrong.
If everyone could!
It's really just statistically-driven phrase generation, filtered by how well it thinks your prompts describe the generated content.
Isn’t the world statistically driven? Isn’t inference?
In sum, lots of truthiness, less truth. ;-)
 
Similarly, the "Graflex Speed Graphic" has a focal-plane shutter but no mirror SLR view: http://camera-wiki.org/wiki/Graflex_reflex_models
But is ChatGPT wrong, or the article? The article has "reflex" in the URL and starts with “The Graflex is a large single-lens reflex camera”.
The SLR version was called "Graflex," like the company, with a model number after it. The catch is, ChatGPT saw that "Graflex" in the context of cameras is nearly always followed by "Speed Graphic", so it threw that in. It's pure n-gram statistics, not really a higher-level language-understanding model.
... It probably makes many mistakes. But we have access to things like Klingsdale's book, etc., and ChatGPT is a language model, not a camera historian.
Yup. It produces correct English and profusely apologizes when it gets things wrong.
If everyone could!
The first part will simply make grading papers harder. ;-)

Now, when a student gets something wrong, commonly the entire document about it is consistently wrong in most ways, including poor writing quality. With ChatGPT, the wrong stuff is wrapped in highly credible text with many credible "facts" thrown in. For example, ChatGPT readily fabricates quotes from famous people on any topic you want, and the synthesized quotes sound credible, but how do you prove that the person quoted NEVER actually said that quote? To prove the negative, you'd need to check everything that person ever said (or wrote), which is literally impossible.
It's really just statistically-driven phrase generation, filtered by how well it thinks your prompts describe the generated content.
Isn’t the world statistically driven? Isn’t inference?
At the quantum level, maybe, but I'm an Engineer. The way engineering works is by knowing a "safe range of circumstances" in which our approximate understanding of the world is valid and can be applied to make desired things reliably happen.

It is true that statistical properties of human language in general, and even of individual documents, are surprisingly stable. That was largely recognized and characterized in research out of Bell Labs over half a century ago. However, it's all about correlation with no concept of causality, and causality is most of what we teach and use in engineering.
In sum, lots of truthiness, less truth. ;-)
This is the part that really scares me: going forward, how do we establish trust?

When I was a kid, we knew folks like Walter Cronkite (and a fleet of reporters and fact-checkers on the other side of the camera) said things that were generally worthy of our trust, and that scientists were careful about confirming things before publishing on them. Now, "reporter" is nearly dead as a profession and we have the Silicon Valley motto of "fake it 'till you make it" driving a lot of the "science" we see. Things like ChatGPT take faking things to a new level of ease and credibility... so, what can we trust?

For example, the images returned by cell phones only use scene-capture data as a kernel, with an increasing fraction of the synthesized image being only loosely related to the actual scene content. And when you go to things like Midjourney, nothing is real.

Going forward, how do we know what information to trust?
 
Similarly, the "Graflex Speed Graphic" has a focal-plane shutter but no mirror SLR view: http://camera-wiki.org/wiki/Graflex_reflex_models
But is ChatGPT wrong, or the article? The article has "reflex" in the URL and starts with “The Graflex is a large single-lens reflex camera”.
The SLR version was called "Graflex," like the company, with a model number after it. The catch is, ChatGPT saw that "Graflex" in the context of cameras is nearly always followed by "Speed Graphic", so it threw that in. It's pure n-gram statistics, not really a higher-level language-understanding model.
OK, but it’s not just that. Graflex is at the same time the company name. I’d say GPT was onto something: Graflex naming is one of the most confusing schemes and wins the Guinness record for being the messiest and most ambiguous. I made my own back for my Wisner 4x5, and experts are still confused as to which name was what and which back would take which size or type of film.
This is the part that really scares me: going forward, how do we establish trust?
I agree with everything, including this being a great concern. It’s not just Midjourney (which is highly artistic); we cannot trust texts, quotes, photos, images, sound recordings, anything, as evidence that something actually happened. And systems like ChatGPT will increase the (entropy) by orders of magnitude.

It will be an area of research: systems that offer guarantees of non-tampering. Can a system or procedure be devised such that, if an image is presented, one could know it has not been tampered with?

Note the trend in security toward Zero Trust. It’s a glimpse of how much trust we can have in the digital age.
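On the tamper-detection question above, here is a minimal sketch of what cryptography can already offer, with the caveat that it only proves the bytes were not modified after signing, under the assumption that the signing key (hypothetical here) stays secret. It cannot prove the signed image depicted a real scene in the first place.

```python
import hashlib
import hmac

# Hypothetical signing key; in a real camera it would live in secure
# hardware and never leave the device.
SIGNING_KEY = b"camera-secret-key"

def sign(image_bytes: bytes) -> str:
    """Tag the image so any later modification is detectable."""
    return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    """True iff the bytes are exactly what was signed."""
    return hmac.compare_digest(sign(image_bytes), tag)

original = b"...raw image bytes..."
tag = sign(original)

print(verify(original, tag))            # True: untouched since signing
print(verify(original + b"!", tag))     # False: tampering detected
```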
 
It will be an area of research: systems that offer guarantees of non-tampering. Can a system or procedure be devised such that, if an image is presented, one could know it has not been tampered with?
I can easily prove to you that no such thing exists. Many years ago, people dreamed of a universal anti-virus system, much like today's ChatGPT hype. But Turing's Halting Problem is not computable, meaning no computer program can determine, by scanning another program's source code, whether that program will eventually stop running, and the same argument proves there can be no universal anti-virus program. A generalization of Turing's result is Rice's Theorem, which states that virtually every non-trivial question about what a program does is not computable. A very simple example: given two programs A and B, determine whether A and B generate exactly the same output whenever they are given the same input.
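A sketch of why that example is undecidable, by reduction from the Halting Problem; both equivalent() and the helper names here are hypothetical:

```python
# If a decider equivalent(A, B) existed that answered "do A and B
# produce the same output on every input?", it would solve halting.

def equivalent(a, b):
    """Hypothetical decider required by the claim."""
    raise NotImplementedError("ruled out by Rice's theorem")

def always_zero(x):
    return 0

def make_probe(program, data):
    # probe behaves exactly like always_zero *iff* program(data) halts;
    # otherwise probe(x) never returns at all.
    def probe(x):
        program(data)   # runs forever when program(data) does not halt
        return 0
    return probe

def halts(program, data):
    # program(data) halts  <=>  probe matches always_zero on every
    # input, so a working equivalent() would decide the Halting Problem.
    return equivalent(make_probe(program, data), always_zero)
```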
Note the trend in security toward Zero Trust. It’s a glimpse of how much trust we can have in the digital age.
Just as I said in the previous paragraph: there is no computer program that can determine whether another program is trustworthy. The same argument as for Turing's Halting Problem applies here.

On the other hand, to solve this problem we would need a computer architecture that is NOT the same as today's computers, which are theoretically Turing machines. Quantum computers are still a kind of Turing machine, or just very fast Turing machines that can process the information stored in multiple bits at the same time. So they are fast, but not more powerful, computability-wise. Decades ago, people dreamed about DNA computers, which use DNA sequences to store information and chemical processes to compute. They proved to be not very successful.

Please note that we are talking about computability rather than computing power. When we talk about computing power, the problem is already computable; but there are non-computable problems. So, problems can be divided into the following categories:
  • All problems can be divided into computable and non-computable. Non-computable problems are problems that cannot be solved on Turing's architecture at all.
  • Computable problems can be divided into efficiently computable and non-efficiently computable problems (I am not using technical terms here). Efficiently computable problems include sorting, matrix inversion, etc. Non-efficiently computable problems include factoring a large number into prime factors, which, if it could be done quickly, would break the RSA algorithm. However, determining whether a large integer is a prime number IS efficiently computable; see the sketch after this list. Simply speaking, if a problem can, so far, only be solved by algorithms that take on the order of 2^n steps, where n is the input size, it is very likely a non-efficiently computable problem.
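Here is the promised sketch contrasting the two sides of that split. The Miller-Rabin test below runs in time polynomial in the number of digits of n, whereas no known algorithm can factor a number of comparable size efficiently:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: polynomial in the number of digits of n."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:           # write n - 1 as d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)        # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # 'a' witnesses that n is composite
    return True

# Deciding primality of a ~180-digit number takes milliseconds here,
# yet no known program can factor a product of two such primes in any
# feasible time: both tasks are computable, but only one efficiently.
print(is_probable_prime(2**607 - 1))   # True: a known Mersenne prime
```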
In the 1950's and 1960's, philosophers, computer scientists, and mathematicians debated whether AI can simulate human intelligence. The answer was NO, as mentioned in one of my earlier posts. Today's development is trying to approximate human intelligence as closely as possible. Because a human brain is a much more powerful system than a Turing machine, a human being can do things a computer cannot, even if slowly to very slowly.

CK
 
Never a truer word, ProfHank.

Everything is to be made easier, from ChatGPT to self-driving cars. The end is merging downright naivety, innovation, and learned skills into some sort of mush in the middle that works but does not advance.

Mobile phones = easy pics; no skills necessary, and they ain't necessarily bad.

Self-driving cars: no driving skills necessary, gets you there just the same.

So where is the joy of the skill well learned and practised?

Dumbing everything down to the safe, bog-standard mundane?

The craftsman with wood has his or her skills replicated by a machine quickly and precisely - even too precisely perhaps.

In my own profession, the hard-learned bookkeeping skills are now replicated by software, where someone can throw information at the computer and it will manage to make visual sense out of it; but where, oh where, in there did the data thrown in land? Opportunities for error abound, and the accountant in charge arguably no longer understands the process of getting there. "Just hope for the best and accept what the computer says." That sort of belief, "I did it, therefore it must be right", is a bubble that many freshman young accountants with manual bookkeeping systems had pricked by their experienced mentor.

Such experienced mentors might be getting very hard to find these days. In everything.
 
CK, I wish you to be right. You are more religious than I would have guessed. Ultimately, Turing et al. say: if it can be explained, it is computable.
 
