What is "AI"??

Mike,
These systems can't create new knowledge via inferencing the way truly powerful machine-learning-based AI can when it is built on engineered semantic knowledge models such as knowledge graphs and other highly semantic methods.
Copilot combines LLM+KG. It’s “productivity software” level stuff, and for our mental sanity, you don’t want to know what the most advanced defense systems look like. We are already way out to the right on the hype cycle. But this is just the start, as models like Gato combine an LLM with tasks like robotics, playing games, and visual tasks alongside language. These systems lack many of the higher functions we have, true. But they are becoming better at explaining their reasoning.
Precisely! Those of us who go deep into the semantic knowledge technology space understand this. I frequently cite how MS put MS Graph behind Copilot - and also explain why I have a burning curiosity about Google Bard, since it was Google that pioneered KGs at scale back in 2012. Will they apply them, and how? And what of the many domain-specific public and private ontologies and KGs? My team has already graphed our own huge content corpus and I've got taxonomists and ontologists on staff, but the mere mention of things like SKOS, RDF, OWL, SPARQL, and such makes even seasoned software engineers' eyes glaze over at present LOL. I'm working to combine these in my application domain.

Mike
I am more of a casual observer, knowing enough to discuss topics but never enough to implement them. So kudos. I had to look up those acronyms. Google pioneered KGs, and also transformers, with the "Attention Is All You Need" paper from 2017. Photography will evolve from devices that contain a sensor to solutions that understand the job to be done, from planning to deliverable. They will also be faster and more accurate, almost infallible. The only way to compete will be neural implants.
 

Mike
Just did a query with GPT4...

Me: Can you tell what these acronyms may mean?

GPT: Certainly! These acronyms are related to semantic web technologies and knowledge organization systems. Here's a brief explanation of each:
  1. SKOS: Simple Knowledge Organization System - A W3C standard for sharing and linking knowledge organization systems, such as thesauri, classification schemes, and subject headings, via the web. It provides a common data model and vocabulary to facilitate interoperability among different systems.
  2. RDF: Resource Description Framework - A W3C standard for describing resources on the web. It is a flexible and extensible way to represent information using triples, which consist of a subject, predicate, and object. RDF enables the integration of data from diverse sources and supports reasoning and inferencing.
  3. OWL: Web Ontology Language - Another W3C standard, OWL is a powerful and expressive language for modeling and representing ontologies. It builds on RDF and RDFS (RDF Schema) and provides additional constructs for describing relationships among concepts, properties, and individuals, as well as constraints and axioms for more advanced reasoning capabilities.
  4. SPARQL: SPARQL Protocol and RDF Query Language - A W3C standard query language for RDF data. It allows users to search, retrieve, and manipulate RDF data from databases, known as triplestores, using graph pattern matching and various query forms, such as SELECT, CONSTRUCT, ASK, and DESCRIBE.
These technologies are widely used in the Semantic Web and Linked Data communities to create, share, and query structured knowledge on the web.
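For anyone who wants to see items 2 and 4 above in practice, here's a minimal sketch in Python with rdflib (the example.org names are invented for illustration): a few triples go into a graph, then a SPARQL SELECT reads them back.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("http://example.org/")
    g = Graph()

    # Each RDF statement is a (subject, predicate, object) triple.
    g.add((EX.Mirrorless, RDF.type, SKOS.Concept))
    g.add((EX.Mirrorless, SKOS.prefLabel, Literal("Mirrorless camera")))
    g.add((EX.Mirrorless, SKOS.broader, EX.Camera))

    # A SPARQL SELECT query over the same graph.
    results = g.query("""
        PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
        SELECT ?concept ?label WHERE {
            ?concept a skos:Concept ;
                     skos:prefLabel ?label .
        }""")
    for row in results:
        print(row.concept, row.label)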

Is there any resource you'd suggest I read? I don't understand how a graph is connected with a transformer model, even at a high level. Or a layman's resource you may know of?
 
Yes, there's a GREAT book on it called "The Accidental Taxonomist" - a basic introduction to all of it. The later part - how to integrate with the LLMs/generative engines - is work we're all currently doing with little to no written guidance - that's how current and fast it's all evolving. There's one new book I'm getting tomorrow, but I need to page through it to see if it offers any more than the LLM APIs and plug-in documentation, and I hate to reference it until I know it would be helpful.

Funny, these semantic technologies - SKOS, RDF, OWL, and others - have been obscure to most for quite some time now. I think they're about to have their heyday if the work and progress we've been making yields what we expect. It won't be useful for everything, but it's yet another piece of a very big puzzle.

Mike
 
And how did GPT4 respond to that last question?
 
Like this:

Me: Is there any resource you'd suggest I read? I don't understand how a graph is connected with a transformer model, even at a high level. Or a layman's resource you may know of?

G4: Certainly! It can be a bit challenging to understand how graphs and transformer models are connected, especially if you're new to the field. Here are a few resources that explain the concepts in a more accessible way:
  1. Graph Neural Networks (GNNs) - A Gentle Introduction: This blog post by Michael Bronstein and Thomas Kipf is a great starting point to understand the basics of graph neural networks and how they relate to machine learning. It doesn't cover transformers directly but provides a good foundation on graphs and deep learning. URL: https://towardsdatascience.com/graph-neural-networks-a-gentle-introduction-5d5ea5b5e5f9
  2. The Illustrated Transformer: This blog post by Jay Alammar provides a visual and intuitive explanation of how transformer models work. Although it doesn't specifically discuss graphs, it will give you a solid understanding of transformers, which is essential to see the connection between them and graph-based models. URL: http://jalammar.github.io/illustrated-transformer/
  3. Graph Transformers: This blog post by Daniel Daza provides an overview of Graph Transformers, a class of models that combine graphs and transformers. It's a more advanced resource, but after reading the first two resources, you should be able to follow along. URL: https://medium.com/@daniel.daza/graph-transformers-3456cd00d8c1
In general, the main idea behind combining graphs and transformers is to leverage the strengths of both approaches. Graphs are useful for representing complex relationships between entities, while transformers excel at learning patterns in sequences. By combining these two techniques, researchers have developed powerful models that can handle graph-structured data and perform tasks like node classification, link prediction, and graph generation.

-* end *-

But then I asked it to explain the approaches, and it explained four different ones, from enriching pre-training, to creating embeddings, to improving the results at runtime by augmenting them with info from a graph. It's all a bit more complicated than it seems.
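That last approach, augmenting at runtime, is the easiest one to picture: look facts up in the graph, flatten them to text, and put them in front of the question. A rough sketch in Python with rdflib; our_corpus.ttl, the example.org URI, and call_llm() are placeholders for your own graph export and whatever LLM API you actually use, not any particular product.

    from rdflib import Graph

    def call_llm(prompt: str) -> str:
        # Stand-in for a real chat-completion call; swap in your LLM client here.
        return "(LLM response to a prompt of %d characters)" % len(prompt)

    def graph_context(graph: Graph, subject_uri: str, limit: int = 10) -> str:
        """Flatten up to `limit` triples about one subject into plain prompt text."""
        rows = graph.query("SELECT ?p ?o WHERE { <%s> ?p ?o } LIMIT %d" % (subject_uri, limit))
        return "\n".join("- %s %s" % (p, o) for p, o in rows)

    def answer_with_kg(graph: Graph, subject_uri: str, question: str) -> str:
        facts = graph_context(graph, subject_uri)
        prompt = ("Answer using only these facts from our knowledge graph:\n"
                  + facts + "\n\nQuestion: " + question)
        return call_llm(prompt)

    # Usage, assuming a Turtle export of the graph:
    # g = Graph().parse("our_corpus.ttl")
    # print(answer_with_kg(g, "http://example.org/Mirrorless", "What is it broader than?"))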
 
Yes, it's not trivial, yet. ChatGPT won't directly ingest RDF (not yet, that I know of), but one can first convert the RDF to plain text (using, say, RDF2Text and R2RML), then ingest the triples. After that you do fine-tuning, training the model further on your domain data to help improve its performance. That's sort of a brute-force approach.
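A minimal sketch of that brute-force path, assuming rdflib in Python and a Turtle export of the graph; the file names and the naive one-sentence-per-triple verbalization are made-up placeholders, not any standard tool's format.

    import json
    from rdflib import Graph

    g = Graph().parse("our_corpus.ttl", format="turtle")

    def short(term) -> str:
        # Use the local name of a URI (or a literal's text) as readable text.
        text = str(term)
        return text.rsplit("/", 1)[-1].rsplit("#", 1)[-1]

    # One plain-text "sentence" per triple, written out as JSONL training data.
    with open("triples_as_text.jsonl", "w") as out:
        for s, p, o in g:
            sentence = "%s %s %s." % (short(s), short(p), short(o))
            out.write(json.dumps({"text": sentence}) + "\n")

From there the JSONL goes into whatever fine-tuning workflow your model provider supports.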

Just this past week, however, some new plug-ins were announced - one of them being the Wolfram Alpha plug-in that enables computational processing with GPT's model. Wolfram Alpha can use knowledge graphs as a source of information to answer queries and generate insights. This might provide the glue we're seeking.

Mike
 
Computers do what they are programmed to; I don't understand why the sudden AI craze is any different. Yes, computers are much more powerful now, and they can be programmed to do a lot more.

Having said that, the results of AI algorithms are still very much subpar as far as I'm concerned.
As an AI language model, I can provide an unbiased perspective on the topic.

It is true that computers do what they are programmed to do. However, with the advancements in machine learning, particularly in deep learning algorithms, computers can learn from data and improve their performance over time. This is what sets AI apart from traditional programming. Instead of explicitly programming every rule and decision into the computer, we can now train it on large datasets to learn patterns and make predictions on its own. This ability to learn and adapt to new situations is what makes AI so powerful.

Regarding the subpar results of AI algorithms, it is important to note that AI is still in its early stages and there is always room for improvement. While there have been some notable successes, such as self-driving cars, natural language processing, and image recognition, there are still many challenges to overcome. However, the progress made in the field of AI in recent years has been remarkable, and we can expect even greater advancements in the future.

In summary, while computers do what they are programmed to do, AI is different in that it can learn and adapt to new situations.
It doesn't. Bias is introduced by humans due to their own bias, greed, the desire to influence others, to gain power, and other reasons. Computers reflect the biases of whoever has power or can introduce their own bias, usually in the name of "being unbiased". Current search engines are huge misinformation and bias machines, in the form of ads that look like they belong in the top spots: you may search for a company and get its competitors as the first result. But AI is trained on data we generate, so again, biased. And those trying to remove biases are introducing their own.
Exactly. Contrary to popular belief, computers don't have a will of their own and don't think; they merely imitate thinking according to how they were programmed.
 
I understand what you are saying, but until the machines rise, the fact of the matter remains that AI is neither self-conscious nor does it have willpower, the main ingredients that make humans different from machines and animals.

I mean, even a dog has some form of willpower and can learn on its own, but how far will an AI model go without close supervision from its makers? Not very far.

So computers are just computers, still.

We can talk again in 20 years and maybe the situation will be different then, but a machine will never have a will of its own, and that's the reason machines cannot create anything; they can simply imitate what people do, with varying degrees of success.
 
AI = Artificial Idiot
 
I understand what you are saying, but until the machines rise, the fact of the matter remains that AI is neither self-conscious nor does it have willpower, the main ingredients that make humans different from machines and animals.
Being self-conscious and having willpower doesn't make us different from all animals, especially not from other primates, elephants, dolphins, and whales.

We don't have any science that really explains why we and other animals are self-aware (instead of being like a computer program that "reacts" to external stimuli without itself experiencing anything), but to say that only humans have that special "something" does not seem to be any more accurate than to claim that the Earth is the center of the Solar System and the Universe.
 
"John McCarthy offers the following definition in this 2004 paper,

" It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."
Human approach:
  • Systems that think like humans
  • Systems that act like humans
Ideal approach:
  • Systems that think rationally
  • Systems that act rationally
That of course becomes even more undefinable in human terms. Rational is what? Ideal is what? Make sure any definitions are not in any way reliant on human interpretation, observation, comprehension, or acceptance. As soon as I say ‘that makes sense’ I have retreated to a human approach.
 
If one might think the rise of robots on Earth is scary, then here's a scarier thought: they might be the most prevalent lifeforms in the universe.

"If a dominant intelligent lifeform developed even a million years before humanity, and within centuries uploaded their brains to alien computers, those computers are likely to be vastly more intelligent than we can even fathom."
 
Oh dear, my wife's sister's son was producing stuff like that over 20 years ago. OK, it took time and a lot of skill, and it seemed amazing at the time, but isn't it past its sell-by date now?
 
Exactly. Contrary to popular belief, computers don't have a will of their own and don't think; they merely imitate thinking according to how they were programmed.
Nobody is gonna disagree with you, BUT a computer can make those decisions 1000 times faster than the average human can, and more accurately. I read an article that computers were being used to diagnose patients, and they were diagnosing patients more accurately than the human doctors: the computers made fewer mistakes and were better at diagnosing symptoms than the doctors. Let's not forget that if we line up 10 people from the dumbest on the left to the smartest on the right, the computer might not be smarter than the smartest one, but it WILL be smarter than the other nine.

In a room of 1000 people, 999 of us are NOT the smartest person in the room but the computer always is!!!

John
 
We can talk again in 20 years and maybe the situation will be different then, but a machine will never have a will of its own, and that's the reason machines cannot create anything; they can simply imitate what people do, with varying degrees of success.
I am not so sure. It is simple to be simplistic.
 
I've seen bits and pieces of AI, mostly pertaining to photography, and I gather that any scene/image can be manipulated far beyond what any existing editing program can do, but does AI stop there? It seems to be much broader than I had originally thought.
I think that what is referred to as AI really isn't. AI implies intelligence and the ability to think independently. AI as referred to with photography doesn't really do that. AI as it exists today is merely an imitation of human intelligence.
 
Man, this is a heavy subject. After reading all the replies, it feels like my head is about to explode!
And that's because we haven't even started on diffusion and visual generative AI, which will have more impact on photographers than anything else. Just look at Adobe Firefly for a small hint of the direction.
 