Logical Graphs • Interpretive Duality 1
• https://inquiryintoinquiry.com/2023/10/26/logical-graphs-interpretive-duali…
All,
The duality between Entitative and Existential interpretations
of logical graphs is a good example of a mathematical symmetry,
in this case a symmetry of order two. Symmetries of this and
higher orders give us conceptual handles on excess complexity
in the manifold of sensuous impressions, making it well worth
the effort to seek them out and grasp them where we find them.
Both Peirce and Spencer Brown understood the significance of
the mathematical unity underlying the dual interpretation of
logical graphs. Peirce began with the Entitative option and
later switched to the Existential choice, while Spencer Brown
exercised the Entitative option in his Laws of Form.
In that vein, here's a Rosetta Stone to give us a grounding in
the relationship between boolean functions and our two readings
of logical graphs.
Boolean Functions on Two Variables
• https://inquiryintoinquiry.files.wordpress.com/2020/11/boolean-functions-on…
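Here is a minimal sketch in Python of how the two readings diverge on one
and the same graph. The nested-list encoding, the evaluate function, and
the sample graph are illustrative assumptions of mine, not Peirce's or
Spencer Brown's notation and not the content of the table linked above.

    # A graph is a variable name (str), a list of juxtaposed subgraphs, or a
    # ("cut", subgraph) pair that negates its contents.
    def evaluate(graph, assignment, reading="existential"):
        # Existential reading: juxtaposition = AND, blank (empty list) = True.
        # Entitative reading:  juxtaposition = OR,  blank (empty list) = False.
        if isinstance(graph, str):
            return assignment[graph]
        if isinstance(graph, tuple) and graph[0] == "cut":
            return not evaluate(graph[1], assignment, reading)
        values = [evaluate(g, assignment, reading) for g in graph]
        return all(values) if reading == "existential" else any(values)

    # The same shape, "(p) q" in text form, reads as "not p and q" under the
    # existential interpretation and as "not p or q", that is p => q, under
    # the entitative interpretation.
    graph = [("cut", ["p"]), "q"]
    for p in (False, True):
        for q in (False, True):
            row = {"p": p, "q": q}
            print(p, q,
                  "existential:", evaluate(graph, row, "existential"),
                  "entitative:", evaluate(graph, row, "entitative"))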
Regards,
Jon
cc: https://www.academia.edu/community/5k4z9V
Peirce's Law • 1
• https://inquiryintoinquiry.com/2023/10/19/peirces-law-1/
A Curious Truth of Classical Logic —
Peirce's law is a propositional calculus formula which
states a non‑obvious truth of classical logic and affords
a novel way of defining classical propositional calculus.
Introduction —
Peirce's law is commonly expressed in the following form.
• ((p ⇒ q) ⇒ p) ⇒ p
Peirce's law holds in classical propositional calculus but
not in intuitionistic propositional calculus. The precise
axiom system one chooses for classical propositional calculus
determines whether Peirce's law is taken as an axiom or proven
as a theorem.
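A quick brute-force check of the tautology (a small sketch of my own, not
part of the original post; the helper name implies is an assumption added
for illustration):

    from itertools import product

    def implies(a: bool, b: bool) -> bool:
        # Material implication: a => b is false only when a is true and b is false.
        return (not a) or b

    # ((p => q) => p) => p comes out true on all four truth assignments.
    assert all(implies(implies(implies(p, q), p), p)
               for p, q in product((False, True), repeat=2))
    print("Peirce's law is a classical tautology.")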
History —
Here is Peirce's own statement and proof of the law:
❝A “fifth icon” is required for the principle of excluded middle
and other propositions connected with it. One of the simplest
formulae of this kind is:
• {(x ‒< y) ‒< x} ‒< x.
❝This is hardly axiomatical. That it is true appears as follows.
It can only be false by the final consequent x being false while
its antecedent (x ‒< y) ‒< x is true. If this is true, either its
consequent, x, is true, when the whole formula would be true, or its
antecedent x ‒< y is false. But in the last case the antecedent of
x ‒< y, that is x, must be true.❞ (Peirce, CP 3.384).
Peirce goes on to point out an immediate application of the law:
❝From the formula just given, we at once get:
• {(x ‒< y) ‒< α} ‒< x,
❝where the α is used in such a sense that (x ‒< y) ‒< α means that
from (x ‒< y) every proposition follows. With that understanding,
the formula states the principle of excluded middle, that from the
falsity of the denial of x follows the truth of x.❞ (Peirce, CP 3.384).
Note. Peirce uses the “sign of illation” “‒<” for implication.
In one place he explains “‒<” as a variant of the sign “≤” for
“less than or equal to”; in another place he suggests that
A ‒< B is an iconic way of representing a state of affairs
where A, in every way that it can be, is B.
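A tiny check of the "less than or equal to" reading (my own illustrative
addition, not part of Peirce's text as quoted): over the booleans ordered
False < True, material implication coincides with <=.

    for p in (False, True):
        for q in (False, True):
            # p => q holds exactly when p <= q in the order False < True.
            assert ((not p) or q) == (p <= q)
    print("p => q agrees with p <= q on all boolean pairs.")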
References —
• Peirce, Charles Sanders (1885), “On the Algebra of Logic :
A Contribution to the Philosophy of Notation”, American Journal
of Mathematics 7 (1885), 180–202. Reprinted (CP 3.359–403),
(CE 5, 162–190).
• Peirce, Charles Sanders (1931–1935, 1958), Collected Papers
of Charles Sanders Peirce, vols. 1–6, Charles Hartshorne and
Paul Weiss (eds.), vols. 7–8, Arthur W. Burks (ed.), Harvard
University Press, Cambridge, MA. Cited as (CP volume.paragraph).
• Peirce, Charles Sanders (1981–), Writings of Charles S. Peirce :
A Chronological Edition, Peirce Edition Project (eds.), Indiana
University Press, Bloomington and Indianapolis, IN. Cited as
(CE volume, page).
Resources —
Logic Syllabus
• https://oeis.org/wiki/Logic_Syllabus
Logical Graphs
• https://oeis.org/wiki/Logical_Graphs
Peirce's Law
• https://oeis.org/wiki/Peirce%27s_law
Metamath Proof Explorer
• https://us.metamath.org/
Peirce's Axiom
• https://us.metamath.org/mpeuni/peirce.html
Regards,
Jon
cc: https://www.academia.edu/community/V1grBl
The attached Section 5 of the article I'm writing includes new material about the linguist Michael Halliday. I was not sure whether to include a discussion of his work, because the connection to Peirce was unclear. But after studying a diagram I include as Figure 12, I realized that it could be interpreted as a major contribution to phaneroscopy. In fact, I believe it is an important step toward Peirce's goal of phaneroscopy as “a strong and beneficent science.”
Comments, suggestions, and criticisms are welcome.
John
Amit and anybody who did or did not attend today's talk at the Ontology Summit session,
All three of the questions below involve metalevel issues about LLMs and about reasoning with and about generative AI. The first and most important question applies to anything generated by LLMs: Is it true, false, or possible? After that come How? Why? and How likely?
The biggest limitation of LLMs is that they cannot do any reasoning by themselves. But they can often find some reasoning by some human in some document from somewhere. If they find something similar, they can apply it to solve the current problem. But the word 'similar' raises critical questions: How similar? In what way is it similar? Is that kind of similarity relevant to the current question or problem?
For example, the LLMs trained on the WWW must have found textbooks on Euclidean geometry. If some problem is stated in the same terminology as the books on geometry, the LLMs might find an answer and apply it.
But more likely, the problem will be stated in terms of the subject matter, such as building a house, plowing a field, flying an airplane, or surveying the land rights in a contract dispute. In those cases, the statement of the same geometrical problem may have few or no words in common with Euclid's terminology, and the terminology of each application will differ from the others.
For these reasons, a generative AI system, by itself, is unreliable for any mission-critical application. It is best used under the control and supervision of some system that uses trusted methods of AI and computer science to check, evaluate, and supplement whatever the generative AI happens to generate.
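Here is a purely hypothetical sketch of that generate-then-verify pattern. The names call_llm and trusted_checks are placeholders assumed for illustration; they are not Permion or VivoMind interfaces.

    from typing import Callable, Optional

    def supervised_answer(question: str,
                          call_llm: Callable[[str], str],
                          trusted_checks: list[Callable[[str, str], bool]],
                          max_attempts: int = 3) -> Optional[str]:
        # Accept a generated answer only if every trusted check approves it;
        # otherwise retry, and finally fall back to a human or a deterministic solver.
        for _ in range(max_attempts):
            candidate = call_llm(question)
            if all(check(question, candidate) for check in trusted_checks):
                return candidate
        return None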
As an example of the kinds of systems that my colleagues and I have been developing, see https://jfsowa.com/talks/cogmem.pdf, Cognitive Memory For Language, Learning, and Reasoning, by Arun K. Majumdar and John F. Sowa.
See especially slides 44 to 64. They show three applications for which precision is essential. There are no LLM systems today that can do anything useful with those applications or anything similar. Today, we have a new company, Permion.ai LLC, which has developed new technology that takes advantage of BOTH LLMs and the 60+ years of earlier AI research.
The often flaky and hallucinogenic LLMs are under the control of technology that is guaranteed to produce precisely controlled reasoning and evaluations. Metalevel reasoning is its forte. It evaluates and filters out whatever may be flaky, hallucinogenic, or inconsistent with the given facts.
John
----------------------------------------
From: "Sheth, Amit" <AMIT(a)sc.edu>
There has been a lot of discussion on LLMs and GenAI on this forum.
I would like to share papers related to three major challenges:
1. Is it Human or AI?
Counter Turing Test CT^2: AI-Generated Text Detection is Not as Easy as You May Think —
Introducing AI Detectability Index
2. Measuring, characterizing and countering Hallucination (Hallucination Vulnerability Index)
The Troubling Emergence of Hallucination in Large Language Models – An Extensive Definition, Quantification, and Prescriptive Remediations
3. Fake News/misinformation
FACTIFY3M: A Benchmark for Multimodal Fact Verification with Explainability through 5W Question-Answering
Introduction/details/links to papers (EMNLP 2023):
https://www.linkedin.com/feed/update/urn:li:activity:7117565699258011648
I think this community won’t find this perspective alien:
Data-driven-only approaches can't and won't address these challenges well;
we need to understand the duality of data and knowledge.
Knowledge (including KGs/ontologies/world models/structured semantics) and
neuro-symbolic AI (arXiv), which draw on a variety of relevant knowledge
(linguistic, common sense, domain-specific, etc.), will play a critical role
in addressing these challenges. The same goes for three of the most important
requirements, where knowledge will play a critical role in making progress:
grounding, intractability, and alignment.
More to come on this from #AIISC.
Cheers,
Amit
Amit Sheth LinkedIn, Google Scholar, Quora, Blog, Twitter
Artificial Intelligence Institute; NCR Chair
University of South Carolina
#AIISConWeb, #AIISConLinkedIn, #AIISConFB
Andrea, Dan, Doug, Alex,
As I keep repeating, I am enthusiastic about the ongoing research on generative AI and the LLMs that support it. But as I also keep repeating, it's impossible to understand the full potential of any computational or reasoning method without understanding its limitations.
I explicitly address that issue for my own work. In my first book, Conceptual Structures, the final chapter 7 had the title "Limits of Conceptualization". Following is the opening paragraph: "No theory is fully understood until its limitations are recognized. To avoid the presumption that conceptual mechanisms completely define the human mind, this chapter surveys aspects of the mind that lie beyond (or perhaps beneath) conceptual graphs. These are the continuous aspects of the world that cannot be adequately expressed in discrete concepts and conceptual relations."
One of the reviewers, who wrote a favorable review of the book, said that he was surprised that Chapter 7 refuted everything that went before. But actually, it's not a refutation. It just itemizes the many complex issues about human thought and thinking that go beyond what can be handled by conceptual graphs (and related AI methods, such as semantic networks and knowledge graphs). Those are very important research areas, and it's essential to understand what can and cannot be done with current technology. For a copy of that chapter, see https://jfsowa.com/pubs/cs7.pdf
As another example, the AI Journal devoted an entire issue in 1993 to a review of a book on Cyc by Lenat & Guha. Lenat told me that my review was the most accurate, but it was also the most frustrating because I itemized all the difficult problems that they had not yet solved. Following is a copy of that review: https://jfsowa.com/pubs/CycRev93.pdf
Lenat did not hold that review against me. In 2004, the DoD, which had invested a great deal of funding in the Cyc project, held a 20-year evaluation to determine whether and how much it should continue to invest. And Lenat recommended me as one of the members of the review committee. Our unanimous recommendation was that (1) Cyc had produced a great deal of important research, which should be documented and made available to the public; (2) future development of Cyc should be funded mostly by commercial applications of Cyc technology; and (3) government funding should be continued during the documentation stage and during the transition to funding by applications. Those goals were achieved, and Cyc continued to be funded by applications for another 19 years.
So when I write about the limitations of generative AI and the LLM technology, I am writing exactly what must be done in any review of any project of any kind. A good review of any development must ALWAYS evaluate the strengths and limitations.
But many (most? all?) of the people who are working on LLMs don't ask questions about the limitations. For example, I have a high regard for Geoffrey Hinton, who has been one of the most prominent pioneers in this area. But in an interview on 60 Minutes last Sunday, he said nothing about the limitations. He even suggested that there were no limits. For that interview, see https://www.cbs.com/shows/video/L25QUOdr6apMNr0ZWqDBCo9uPMd_SBWM/
As a matter of fact, many of the limitations I discussed in cs7.pdf also apply to the limitations of LLMs. In particular, they are the limitations of representing and reasoning about the continuous aspects of the world and their translations to and from a discrete, finite vocabulary of any language, natural or artificial.
Andrea> I agree with the position of using LLMs wherever they are appropriate, researching the areas where they need more work, supplementing them where other technologies are strong, and (in general) "not throwing the baby out with the bath water".
Yes indeed.
Dan> The ability of these systems to engage with human-authored text in ways highly sensitive to their content and intent is absolutely stunning. Encouraging members of this forum to delay putting time into learning how to use LLMs is doing them no favours. All of us love to feel we can see through hype, but it’s also a brainworm that means we’ll occasionally miss out on things whose hype is grounded in substance.
I certainly agree. I'm not asking anybody to stop doing their R & D. But I am asking people who promote LLMs to look at where they are running up against the limits of current versions and what can be done to go beyond those limits.
Doug F> Note that much of the left hemisphere has nothing to do with language. In front of the language areas are strips for motor control of & sensory input from the right side of the body. The frontal lobe forward of those strips does not deal with language. The occipital lobe at the rear of the brain does not deal with language, either. The visual cortex in the temporal lobe also does not deal with language. This means that most of the 8 billion neurons in the cerebral cortex have nothing to do with language.
I agree with that point. But I believe that the LLM proponents would also agree. They would say that those areas of the cortex are necessary for mapping language-based LLMs to and from perception and action. What they fail to recognize is the importance of the 90% of the neurons that do not do anything directly related to language.
Alex> My proposal: let’s first agree that ANN is far from being only an LLM. LLM is by far the most noisy and unexpected of ANN applications. The question can be posed this way: we know about the Language Model, but what other models using ANN exist?
I agree that we should explore the many ways that artificial NNs relate to the NNs in various parts of the brain. It's also important to recognize that there are many different kinds of NNs in different areas of the brain, and they are organized in ways that are very different from the currently popular ANNs.
In summary, there is a lot more research that remains to be done. I'm not telling anybody to stop what they're doing. I'm just recommending that they look at what more needs to be done before claiming that LLMs can do everything.
John
As I have said in recent notes sent to three groups (Ontolog Forum, Peirce List, and CG list), Peirce's work on diagrammatic reasoning is at the forefront of current research on Generative AI and related applications.
In some of my notes on this topic, I have included excerpts from an article I'm writing, which explains the connections to Peirce's writings, especially in the last decade of his life. But I have also included some further discussions on Ontolog Forum, which do not indicate any connection to Peirce.
Gary Richmond reported that some subscribers to P-List have complained. And I admit that some of those notes addressed some technical issues that are not directly relevant to CSP. Therefore, I'll limit my cc's for those notes to CG list. The only notes I'll cc to P-list are the ones that explicitly cite or discuss Peirce's writings.
Anybody who wishes to see the other notes can subscribe to CG list or to Ontolog Forum. (CG list has very little traffic, so it won't fill up anyone's mailbox.)
John
Stephen,
The six branches of the cognitive sciences (philosophy, psychology, linguistics, AI, neuroscience, and anthropology) have an open-ended variety of unanswered questions. That is the nature of every active branch of science. The reason why researchers in those six sciences formed the coalition called cognitive science is that cutting-edge research in each of them has strong implications and valuable results for each of the others. In fact, prominent leaders in AI were very active in founding the Cognitive Science journal and conferences.
There is a huge amount of fundamental research about the multiplicity of very different "languages" of thought. These results are well established with solid evidence about the influences. Natural languages are valuable for communication, but they are not the best or even the most general foundation for thinking about most of the things we do in our daily lives -- or in our most complex activities.
You can't fly an airplane, drive a truck, thread a needle, paint a picture, ski down a mountain, or solve a mathematical problem if you have to talk to yourself (vocally or silently) about every detail. You might do that when you're first learning something, but not after you have mastered the subject.
Compared to those results, the writings by many prominent researchers on LLMs are naive. They know how to play with LLMs, but they don't know how to solve the very serious tasks that AI researchers have been implementing and using successfully for years. As just some examples that my colleagues and I have implemented successfully, see https://jfsowa.com/talks/cogmem.pdf
Look at the examples in the final section (slides 44 to 64). The current LLM technology cannot even begin to meet the requirements that the VivoMind technology could implement in 2010. Nobody writing about LLMs can show how to handle those requirements by using LLMs.
And those examples are just a small sample of successful applications. Most of the others were proprietary for our customers, who did not want to have their solutions publicized. That was fundamental science applied to mission-critical applications.
John
----------------------------------------
From: "Stephen Young" <steve(a)electricmint.com>
Sent: 10/8/23 7:13 PM
To: ontolog-forum(a)googlegroups.com, Stephen Young <steve(a)electricmint.com>
Subject: Re: [ontolog-forum] Addendum to (Generative AI is at the top of the Hype Cycle. Is it about to crash?
John, we've known since the 50s that the right brain has a significant role in understanding language. We also know that there is a ton of neural real estate between Wernicke's and Broca's areas that must be involved in language processing. They're like the input and output layers of the 98-layer GPT model. And we call them large language models, but they also "understand" vision.
Using our limited understanding of one black box to try to justify our assessment of another black box is not going to get us anywhere.
On Mon, 9 Oct 2023 at 08:23, John F Sowa <sowa(a)bestweb.net> wrote:
Alex,
Thanks for the list of applications of LANGUAGE-based LLMs. It is indeed impressive. We all agree on that. But mathematics, physics, computer science, neuroscience, and all the branches of cognitive science have shown that natural languages are just one of an open-ended variety of left-brain ways of thinking. LLMs haven't scratched the surface of the methods of thinking by the right brain and the cerebellum.
The left hemisphere of the cerebral cortex has about 8 billion neurons. The right hemisphere has another 8 billion neurons that are NOT dedicated to language. And the cerebellum has about 69 billion neurons that are organized in patterns that are totally different from the cerebrum. That implies that LLMs are only addressing 10% of what is going on in the human brain. There is a lot going on in that other 90%. What kinds of processes are happening in those regions?
Science makes progress by asking QUESTIONS. The biggest question is how to handle the open-ended range of thinking that is not based on natural languages. Ignoring that question is NOT scientific. As the saying goes, when the only tool you have is a hammer, all the world is a nail. We need more tools to handle the other 90% of the brain -- or perhaps updated and extended variations of tools that have been developed in the past 60+ years of AI and computer science.
I'll say more about these issues with more excerpts from the article I'm writing. But I appreciate your work in showing the limitations of the current LLMs.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
English LLMs are the flower at the tip of the iceberg. Multilingual LLMs are also being created. The Chinese certainly train more than just English-language LLMs. You can see the underwater structure of the iceberg, for example, here: https://huggingface.co/datasets
Academic objections to the inventors are always possible. But you know the inventors' answer: it works!
It's funny that before this hype, LLM meant Master of Laws :-)
Alex
--
Stephen Young