Relations & Their Relatives • 1
• https://inquiryintoinquiry.com/2024/07/31/relations-their-relatives-1-a/
All,
Sign relations are special cases of triadic relations in much
the same way binary operations in mathematics are special cases
of triadic relations. It amounts to a minor complication that
we participate in sign relations whenever we talk or think about
anything else, but it still makes sense to try to tease the separate
issues apart as much as we possibly can.
As far as relations in general go, relative terms are often
expressed by means of slotted frames like “brother of __”,
“divisor of __”, and “sum of __ and __”. Peirce referred to
these kinds of incomplete expressions as “rhemes” or “rhemata”
and Frege used the adjective “ungesättigt” or “unsaturated” to
convey more or less the same idea.
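By way of illustration, the following minimal sketch in Python
(the relation names and the toy fact base are hypothetical, not
anything from Peirce or Frege) treats a rheme as an unsaturated
function whose open slots get filled, or saturated, by partial
application.

    # A rheme as an unsaturated function: argument slots left open.
    from functools import partial

    BROTHERS = {("Orion", "Artemis")}   # hypothetical fact base

    def brother_of(x, y):
        """x is a brother of y."""
        return (x, y) in BROTHERS

    def sum_of(x, y):
        """sum of __ and __"""
        return x + y

    # "brother of __": one slot filled, one still open.
    brother_of_artemis = partial(brother_of, y="Artemis")
    print(brother_of_artemis("Orion"))   # True: both slots now filled

    # "sum of __ and __": filling both slots saturates the rheme.
    print(sum_of(3, 4))                  # 7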
Switching the focus to sign relations, it's fair to ask what kinds
of objects might be denoted by pieces of code like “brother of __”,
“divisor of __”, and “sum of __ and __”. And while we're at it, what
is this thing called “denotation”, anyway?
Resources —
Relation Theory
• https://oeis.org/wiki/Relation_theory
Triadic Relations
• https://oeis.org/wiki/Triadic_relation
Sign Relations
• https://oeis.org/wiki/Sign_relation
Survey of Relation Theory
• https://inquiryintoinquiry.com/2024/03/23/survey-of-relation-theory-8/
Peirce's 1870 Logic Of Relatives
• https://oeis.org/wiki/Peirce%27s_1870_Logic_Of_Relatives_%E2%80%A2_Overview
Regards,
Jon
cc: https://www.academia.edu/community/Vj80Dj
Relations & Their Relatives • Discussion 24
• https://inquiryintoinquiry.com/2024/07/28/relations-their-relatives-discuss…
Re: Daniel Everett • June 20, 2024
• https://www.facebook.com/permalink.php?story_fbid=pfbid02oCRz4EYHAtbrJeAzzo…
Daniel Everett remarks:
❝Among the several ideas Peirce and Frege came up with was the idea
of a predicate before and after it is linked to its arguments. Frege
called the unlinked predicate unsaturated. But Peirce built this into
a theory of valency. An unsaturated predicate in Frege's system is a
generic term, a rheme, in Peirce's system. So in Peirce's theory all
languages need generic terms (rhemes) to exist. Additionally, through
his reduction thesis (a theorem proved separately by various logicians)
Peirce set both the upper and lower bounds on valency which — even to
this day — no other theory has done.❞
Dear Daniel,
In using words like “predicate” or “relation” some people mean an item of
syntax, say, a verbal form with blanks substituted for a number of subject
terms, and other people mean a mathematical object, say, a function f from
a set X to a set B = {0, 1} or a subset L of a cartesian product X₁ × … × Xₖ.
It would be a great service to understanding if we had a way to negotiate
the gap between the above two interpretations.
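One way to picture the gap is to write the same relation both
ways and convert between them. The following Python sketch uses
an illustrative "divides" relation on small domains; it is a toy,
not a standard construction from the literature.

    # Reading (a): a characteristic function f : X1 x X2 -> {0, 1}.
    # Reading (b): a subset L of the cartesian product X1 x X2.
    X1 = range(1, 7)
    X2 = range(1, 7)

    def f(x, y):                 # (a) "x divides y" as a function
        return 1 if y % x == 0 else 0

    L = {(x, y) for x in X1 for y in X2 if f(x, y)}   # (a) -> (b)

    def f_from_L(x, y):          # (b) -> (a)
        return 1 if (x, y) in L else 0

    # The two readings determine each other exactly.
    assert all(f(x, y) == f_from_L(x, y) for x in X1 for y in X2)

The syntactic reading names the rule; the mathematical reading is
the set of tuples the rule carves out of the product space.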
To be continued …
Resources —
Relation Theory
• https://oeis.org/wiki/Relation_theory
Survey of Relation Theory
• https://inquiryintoinquiry.com/2024/03/23/survey-of-relation-theory-8/
Regards,
Jon
cc: https://www.academia.edu/community/lzAqAe
Alex,
The article by Gary Marcus that you quoted is consistent with recent publications about Generative AI. See the excerpt below, which I strongly agree with.
Many investors realize that Generative AI is useful for supporting NL interfaces to complex computer systems (Wolfram, Kingsley, and the Permion.ai company that I am working with are examples).
But by itself, Generative AI is too unreliable for applications that require accuracy and precision. See the longer commentary by Gary M and many of the articles I have cited in recent notes.
As for a collaborative project by Ontolog Forum, this is a discussion group, not a development group. If anybody wants to form a separate project to do anything further, I suggest that they start a separate email list for people on that project. I have too many unfinished projects of my own to work on. And I suspect that is also true of many other subscribers.
John
__________________
An excerpt from the article by Gary Marcus: https://open.substack.com/pub/garymarcus/p/alphaproof-alphageometry-chatgpt…
I think that GenAI is vastly overrated and overhyped, but fear that its collapse may well lead to an AI winter of sorts, like what happened in the mid-1980s, when AI “expert systems” rapidly ascended and rapidly fell.
That said, I am certain that the impending collapse won’t lead to the absolute death of AI. There is too much at stake.
What the collapse of generative AI might lead to, after a quiet period, is a renaissance. Generative AI may well never be as popular as it has been over the last year, but new techniques will come, new techniques that will work better, and that address some of the failings of generative AI.
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
"In the final analysis, expecting AI to “solve” AGI without “System II” mechanisms for symbol-manipulation is like expecting bears to solve quantum mechanics."
IMHO GM and GDM are on the way to formalizing the theoretical knowledge of the sciences and technologies. Huge work for many mathematicians over many years. Let's do it.
Alex
Another of the many reasons why Generative AI requires other methods -- such as the 70 years of AI and computer science -- to test, evaluate, and correct anything and everything that it "generates":
As the explanation below says, it does not "UNDERSTAND" what it is doing. It just finds and reproduces patterns in its huge volume of data. Giving it more data gives it more patterns to choose from. But it does nothing to help it understand any of them.
This method enables it to surpass human abilities on IQ tests, law exams, medical exams, etc. -- for the simple reason that the answers to those exams can be found somewhere on the WWW. In other words, Generative AI does a superb job of CHEATING on exams. But it is hopelessly clueless at solving problems whose solution depends on understanding the structure and the goal of the problem.
For similar reasons, the article mentions that self-driving cars fail in complex environments, such as busy streets in city traffic. The number and kinds of situations are far more varied and complex than anything they have been trained on. Carnegie Mellon University is involved in more testing of self-driving cars because Pittsburgh has the most complex and varied patterns. It has more bridges than any other city in the world. It also has three major rivers, many hills and valleys, steep winding roads, complex intersections, tunnels, foot traffic, and combinations of any or all of the above.
Drivers who test self-driving cars in Pittsburgh say that they can't go for twenty minutes without having to grab the steering wheel to prevent an accident. (By the way, I learned to drive in Pittsburgh. Then I went to MIT and Harvard, where the Boston patterns are based on 300-year-old cow paths.)
John
________________________________________________
AI-Generated Code Has A Staggeringly Stupid Flaw
It simply doesn’t work.
https://medium.com/predict/ai-generated-code-has-a-staggeringly-stupid-flaw…
. . .
So, what is the problem with AI-generated code?
Well, one of the internet’s favourite developers, Jason Thor Hall of Pirates Software fame, described it best in a recent short. He said, “We have talked to people who’re using AI-generated code, and they are like, hey, it would take me about an hour to produce this code and like 15 minutes to debug. And then they are like, oh, the AI could produce it in like 1 minute, and then it would take me like 3 hours to debug it. And they are like, yeah, but it produced it really fast.”
In other words, even though AI can write code way faster than a human programmer, it does such a poor job that making the code useful actually makes it far less efficient than getting a qualified human to just do the job in the first place.
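(Taking the quoted figures at face value -- the arithmetic is added here, not in the article -- the human path costs roughly 60 + 15 = 75 minutes, while the AI path costs roughly 1 + 180 = 181 minutes, so the "really fast" generation step leaves the whole job about 2.4 times slower.)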
. . .
Well, AI doesn’t actually understand what it is doing. These generative AI models are basically overly developed predictive-text programs. They use statistics based on a stupidly large pool of data to figure out what the next character or word is. No AI actually ‘knows’ how to code. It isn’t cognitively trying to solve the problem; it instead finds an output that matches the statistics of the data it has been trained on. As a result, it constantly gets things massively wrong, because the AI isn’t actually trying to solve the problem you think it is. Even when the coding problem you are asking the AI to solve is well represented in its training data, it can still fail to generate a usable solution, simply because it doesn’t actually understand the laws and rules of the coding language. The issue gets even worse when you ask it to solve a problem it has never seen before: the statistical models it uses simply can’t be extrapolated that far, causing the AI to produce absolute nonsense.
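To see what "overly developed predictive text" means in practice, here is a toy bigram model (an illustration added for concreteness, not code from the article; real LLMs use neural networks over tokens, but the generate-from-statistics loop is the same in spirit):

    # Count which word follows which in a tiny corpus, then sample.
    from collections import Counter, defaultdict
    import random

    corpus = "the cat sat on the mat the cat ate the rat".split()
    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1            # pure frequency statistics

    def next_word(word):
        options = follows[word]
        if not options:
            return "the"              # arbitrary restart word
        words, weights = zip(*options.items())
        return random.choices(words, weights=weights)[0]

    word, output = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))   # locally plausible, globally unaware

Nothing in that loop represents what any sentence means; it only reproduces co-occurrence statistics, which is the article's point.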
This isn’t just a problem with AI-generated code but every AI product, such as self-driving cars. Moreover, this isn’t a problem that can be easily solved. You can’t just shove more training data into these AIs, and we are starting to hit a point of diminishing returns when it comes to AI training. So, what is the solution?
Well, when we treat AI as it actually is, a statistical model, we can have tremendous success. For example, AI structural designs, such as those in the Czinger hypercar, are incredibly efficient and effective. But it falls apart when we treat AI as a replacement for human workers. Despite its name, AI isn’t intelligent, and we shouldn’t treat it as such. [End]
Pragmatic Truth • 1
• http://inquiryintoinquiry.com/2024/07/08/pragmatic-truth-1/
All,
Questions about the “pragmatic conception of truth” have broken out
in several quarters, asking in effect, “What conceptions of truth
arise most naturally from and are best suited to pragmatic ways
of thinking?” My best thoughts on that score were written out
quite a few years ago, in an article I originally wrote for
Wikipedia. I haven't dared look at what's become of it on
that site — linked below is my current fork on another wiki.
Pragmatic Theory Of Truth
• https://oeis.org/wiki/Pragmatic_Theory_Of_Truth
It begins as follows …
❝“Pragmatic theory of truth” refers to those accounts, definitions,
and theories of the concept “truth” distinguishing the philosophies
of pragmatism and pragmaticism. The conception of truth in question
varies along lines reflecting the influence of several thinkers,
initially and notably, Charles Sanders Peirce, William James, and
John Dewey, but a number of common features can be identified.
❝The most characteristic features are (1) a reliance on the
“pragmatic maxim” as a means of clarifying the meanings of
difficult concepts, truth in particular, and (2) an emphasis
on the fact that the product variously branded as belief,
certainty, knowledge, or truth is the result of a process,
namely, “inquiry”.❞
Et sic deinceps …
Resources —
Logic Syllabus
• https://inquiryintoinquiry.com/logic-syllabus/
Pragmatic Maxim
• https://inquiryintoinquiry.com/2023/08/07/pragmatic-maxim-a/
Truth Theory
• https://oeis.org/wiki/Truth_theory
Pragmatic Theory Of Truth • Document History
• https://oeis.org/wiki/Pragmatic_Theory_Of_Truth#Document_history
Correspondence Theory Of Truth
• https://oeis.org/wiki/Correspondence_Theory_Of_Truth
Regards,
Jon
cc: https://www.academia.edu/community/5AEygK
Alex and Gary,
The article cited in your notes is based on logical methods that go far beyond anything that is being done with LLMs and their applications to generative AI. Such developments are important for evaluating and testing the accuracy of the output of OpenGPT and related systems.
I won't say anything in detail about the cited article. But I noticed that some of the researchers mentioned in the article participated in the IKRIS project, which was developing metalanguage extensions to Common Logic. For a summary of that project with multiple references for further study, see https://jfsowa.com/ikl .
The documents cited there include a list of the participants and the topics they were working on. The issues they discuss are vitally important for testing and evaluating the results generated by LLMs. Without such evaluation, the output generated by LLMs cannot be trusted for any critical applications.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
Subject: Re: [ontolog-forum] A formalized approach to consider agents' norms and values
Gary, thank you!
I have sent it to my favorite formal philosophers: https://www.facebook.com/share/SaGkSXTmVF2HcJp9/
Alex
Tue, 16 Jul 2024 at 18:12, Gary Berg-Cross <gbergcross(a)gmail.com>:
Ken Forbus posted this elsewhere but it should be of interest to this community:
"How can an AI system build up and maintain an accurate mental model of people's norms, in order to avoid social friction? This is difficult because norms not only vary between groups but also evolve over time. Taylor Olson's approach is to develop a formal defeasible deontic calculus, building on his prior work on representing social and moral norms, which enables resolving norm conflicts in reasonable ways. This paper appeared at the Advances in Cognitive Systems conference in Palermo last month."
https://arxiv.org/abs/2407.04869
Gary Berg-Cross
Pragmatic Truth • Discussion 25
• https://inquiryintoinquiry.com/2024/07/15/pragmatic-truth-discussion-25/
Re: OEIS Wiki | Correspondence Theory Of Truth
• https://oeis.org/wiki/Correspondence_Theory_Of_Truth
All,
Richard Saunders writes:
❝Given that “facts are basically combinations of objects together
with their properties or relations; so the fact that Fido barks
is the combination of an object (i.e., Fido) with one of Fido's
properties (that he barks)”, if the object and the property are
real, then the correspondence theory of truth seems adequate for
most purposes. But the question remains, what is “real”? I like
Philip K. Dick's suggestion that reality is what remains when you
stop believing in it.❞
Dear Richard,
Let me clear up a few things about that section of the Correspondence Theory
article you quote above. The style of it tells me other Wikipedians probably
had a bigger hand in it than I did — for my part I most likely took it as a
thumbnail sketch of the conventional view, a sop to the two‑headed dogma of
analytic philosophy, if you will.
Pragmatic treatments of truth begin from a decidedly different standpoint
and make a radical departure from correspondence accounts. But there is
nothing new about the pragmatic view, as we can see from the way Kant and
even the Ancients had already criticized correspondence theories.
Regards,
Jon
cc: https://www.academia.edu/community/L6pMQA
Alex and Eric,
Re intelligence, general or not:
As I replied to Phil in another branch of this thread, my grandmother asked the right question: "How much bread can you bake with that?"
For a more scientific answer, please note the cartoon that Peter L. found (copy below).
Everything that LLMs do is generate and process a pile of linear algebra. That is a very useful operation for many kinds of problems. It can indeed bake bread (among millions of other tasks). But there is no magic intelligence in it. The cartoon below says it all.
John
_______________________________________
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
And just one more question to C3S.
Q
Consider this proposition: "Conscience can only come with internal mirrors systems." What does "internal mirrors" mean here?
Tue, 16 Jul 2024 at 18:24, 'Eric BEAUSSART' via ontology-summit <ontology-summit(a)googlegroups.com>:
Hello All !
"Garbege un, garbage out !" ... a long ago this was told !
Anyway, <How "Intelligence", artificial or natural can come without ""Psychic" and "Body control" sytems" (even if the "body" is "Data center" !) ! ???>. "Conscience" can only come with "internal "mirrors"" sytems ! And real "Learning" in living bodies can only come other individuals of the same specie (or close enough !) (especially parents of course !) !!!
Even "rats" (maybe "fishes also !), without "social" interactions develop psychic anomalies !
Regards.
E. B.
_____________________________
David,
Goedel showed that there are infinitely many undecidable propositions in first-order logic.
The example you cited, “this statement cannot be proved”, cannot be stated in FOL because it requires metalanguage -- a method for talking about statements. FOL, as it is usually defined, does not contain any operator or method that would enable a statement to talk about other statements, or even about itself. Therefore, that is not one of the statements that Goedel was proving theorems about.
Puzzles that contain metalanguage were debated by the Greeks thousands of years ago. A famous example is "All Cretans are liars." That statement was uttered by a Cretan.
If that statement is false because it was uttered by a Cretan, it would imply that the Cretan who said it was not a liar. But that would imply that the statement is true.
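The case analysis can even be mechanized. Below is a small enumeration in Python, added for illustration, under the toy assumptions that a "liar" utters only falsehoods and that the Cretan's remark is the only utterance on record:

    from itertools import product

    def consistent(liar):
        # liar[i] is True iff Cretan i is a liar; Cretan 0 uttered
        # S = "All Cretans are liars". S's truth value must agree
        # with the speaker's status: liars speak only falsehoods.
        s_true = all(liar)
        return s_true == (not liar[0])

    for n in (1, 2, 3):
        worlds = [w for w in product([False, True], repeat=n)
                  if consistent(w)]
        print(n, "Cretans:", worlds)

    # With one Cretan there is no consistent assignment (the pure
    # liar paradox); with two or more, S can consistently be false
    # when the speaker lies but some other Cretan does not.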
John
----------------------------------------
From: "poole" <poole(a)cs.ubc.ca>
Goedel’s theorem does not “show that certain very complex propositions stated in first-order logic are undecidable”.
The proposition is “this statement cannot be proved”
If it is true, the logic is incomplete. If it is false, the logic is unsound.
(It doesn’t look very complex to me. I doubt that “no logician had ever written or encountered” this proposition, as other similar “paradoxes” were common).
The only way to get around Goedel’s theorem is to make the logic too weak to state this. What Goedel proved was that any logic that can represent arithmetic can represent this. His proof was complicated because he had to invent programming and theorem proving. Now the proof should be straightforward, as we can take computers and theorem provers as given, and a computer is just a big arithmetic operation (the memory of a computer is just a large integer).
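As a minimal illustration of the "memory is just a large integer" point (a sketch added here, not Goedel's actual numbering scheme), any byte string, and hence any machine state, round-trips through a single natural number:

    def memory_to_int(mem: bytes) -> int:
        # Prefix a 1-byte so leading zero bytes survive the round trip.
        return int.from_bytes(b"\x01" + mem, "big")

    def int_to_memory(n: int) -> bytes:
        return n.to_bytes((n.bit_length() + 7) // 8, "big")[1:]

    state = b"registers + RAM snapshot"   # stand-in for real memory
    n = memory_to_int(state)
    assert int_to_memory(n) == state
    # One machine step is then a function N -> N, which is the sense
    # in which arithmetic can represent computation.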
I agree with the comment on OWL. Restricting a logic to be decidable does not make it efficient; it just means you can state less. There are things you just cannot state.
David