Logical Graphs • Formal Development 1
• https://inquiryintoinquiry.com/2023/09/15/logical-graphs-formal-development…
Recap —
A first approach to logical graphs can be found in the article linked below.
Logical Graphs • First Impressions
• https://inquiryintoinquiry.com/2023/08/24/logical-graphs-first-impressions/
That article introduces the initial elements of logical graphs and hopefully supplies
the reader with an intuitive sense of their motivation and rationale.
Formal Development —
Logical graphs are next presented as a formal system by going back to the
initial elements and developing their consequences in a systematic manner.
The next order of business is to give the precise axioms used to develop
the formal system of logical graphs. The axioms derive from C.S. Peirce's
various systems of graphical syntax via the “calculus of indications”
described in Spencer Brown's “Laws of Form”. The formal proofs to follow
will use a variation of Spencer Brown's annotation scheme, marking each step
of a proof with the axiom invoked to license the corresponding syntactic
transformation, whether it applies to graphs or to strings.
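As an illustration (not part of the formal development itself), the two
arithmetic initials of the calculus of indications can be read as
string-rewriting rules on parenthesized expressions. A minimal Python
sketch, assuming Spencer Brown's parenthesis notation for the mark:

# Arithmetic initials of "Laws of Form", read as rewrite rules:
#   I1 (calling):  ()() = ()
#   I2 (crossing): (()) =      (the unmarked state, here the empty string)

def reduce_expression(s: str) -> str:
    """Apply I1 and I2 anywhere in the string until neither rule matches."""
    rules = [("()()", "()"), ("(())", "")]
    changed = True
    while changed:
        changed = False
        for pattern, result in rules:
            if pattern in s:
                s = s.replace(pattern, result, 1)
                changed = True
    return s

print(reduce_expression("(()())"))  # -> ""   (unmarked state)
print(reduce_expression("((()))"))  # -> "()" (the mark)

Under those two rules, every variable-free expression reduces to either the
mark () or the unmarked state, which is the arithmetic on which the algebraic
axioms are built.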
Regards,
Jon
cc: https://www.academia.edu/community/VrW8bL
cc: https://mathstodon.xyz/@Inquiry/111070230310739613
Additional excerpts from Sections 1, 2, and 3 can clarify the issues discussed in Section 6. See the attached Section6.pdf, which also includes the excerpts from the previous version.
John
Alex,
The issues you mention happen to be topics covered in the article I'm writing. I'll add some excerpts from other sections to Section6.pdf and send the update to these lists.
Alex> pictures that are recorded in our memory and are more or less accessible depending on access to visual memory.
Certainly. And not just visual memory: memory of all perceptions, internal and external, is fundamental to what Peirce called the phaneron. All those perceptions are continuous, while language consists of a string of discrete words that represent concepts people consider significant. Diagrams constructed as patterns of concepts and relations serve as the intermediate stage between continuous perceptions and discrete strings of words.
The mappings go both ways: Perceptions -> images -> mental diagrams -> languages (spoken, signed, and artificial). And languages -> mental diagrams -> images -> actions in and on the continuous world (or some local part of it). The mental diagrams (and representations in one, two, three, or more dimensions) are an essential stage in those mappings.
C. S. Peirce recognized the importance of diagrams, and he had plans to extend them to "stereoscopic moving images". At the language end, nodes of the diagrams can be mapped to and from discrete words or concepts. At the image end, the nodes of diagrams can be moved and mapped to the parts of an image, either static or dynamic, which they represent.
Alex> A broad interpretation of the term diagram is possible. This is somewhat reminiscent of systems engineering. Consider "system thinking" vs "diagrammatic thinking".
Of course. The only change I would make is to replace "vs" with "and". Systems thinking is, and must be, carried out in terms of diagrams that relate one-dimensional specifications (in words or other kinds of symbols) to the three-dimensional moving systems that engineers design and build.
Alex> It is important that a mind is able to store and operate with visual images - this is cooler than diagrams.
I'm not sure about the temperature. But human memory (and probably the mental imagery of other animals) can include imagery from perception as well as imagery from imagination. (In another note, I'll tell you the story about how Yojo the cat dreamt that there were monsters under the bed.)
Alex> LLM for me is an engineering invention around which there is a lot of noise, because it unexpectedly turned out to be capable of simulating many mental activities.
Yes. I'm glad that you used the critical word "many". The crucial addition is "but not all." I believe that LLMs are valuable for what they do. But as discrete patterns that support a limited set of operations, they are limited in what they can do.
Alex> Interesting topics include visual thinking and movie thinking.
Yes. That's what Peirce wrote in 1911, when he mentioned "stereoscopic moving images." In December of that year, he introduced an extension of his existential graphs called Delta graphs. Unfortunately, he had an accident before he finished writing his MS about them. But what he did write seems to be along the lines we have been discussing.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
There is an important difference between a diagram and a visual representation (pictures that are recorded in our memory and are more or less accessible depending on access to visual memory):
you need to be able to read diagrams.
A broad interpretation of the term diagram is possible. This is somewhat reminiscent of systems engineering. Consider "system thinking" vs "diagrammatic thinking".
It is important that a mind is able to store and operate with visual images - this is cooler than diagrams.
LLM for me is an engineering invention around which there is a lot of noise, because it unexpectedly turned out to be capable of simulating many mental activities.
I'm slowly learning how LLMs work. There are a lot of surprises there.
As far as I know, there are GPTs that can build diagrams and even accept them as input. After all, the first ANN layer is better suited to an image than to text.
Interesting topics include visual thinking and movie thinking.
Alex
Tue, 26 Sep 2023 at 23:35, John F Sowa <sowa(a)bestweb.net>:
Alex,
The only relevant item in that reference is a publication that is cited before the paywall: https://arxiv.org/pdf/2309.06979.pdf
What they prove is that you can train a system of LLMs to simulate a Turing machine. But that proves nothing. Almost every AI system designed in the past 60 years can be "trained" to simulate a Turing machine.
Every LLM that is trained on natural language data is limited to the kind of "thinking" that is done in natural languages. As I pointed out in Section6.pdf (and many other publications), NL thinking is subject to all the ambiguities and limitations of NL speaking. In human communication, NLs must be supplemented by context, shared background knowledge, and gestures that indicate or point to non-linguistic information.
The great leap of science by the Egyptians, Stone-hengers, Babylonians, Chinese, Indians, Greeks, Mayans, etc., was to increase the precision and accuracy of their thinking by going beyond what can be stated in ordinary languages. And guess what their magic happens to be? It's DIAGRAMS!!!!
Translating thoughts from diagrams to words is a great leap in communication. But it cannot replace the precision and generality of the original thinking expressed in the original diagrams.
As I said, you cannot design the great architectures of ancient times, the complex machinery of today, or any of the great scientific innovations of the past 500 years without geometrical diagrams that are far more complex than anything you can state in humanly readable natural language.
I admit that it is possible to translate any geometrical design or any bit pattern in a digital computer into a specification that uses the words and syntax of a natural language. But what you get is an immense amount of verbiage that no human could read and understand.
That is the most important message that we can get across in the forthcoming mini-summit. LLMs trained on NL input cannot go beyond NL thinking, and they cannot do any thinking that goes beyond thoughts expressible in NLs. To test that statement, show somebody (anybody you know) a picture and have them describe it; have somebody else draw or explain what they heard; and have a fourth person compare the original to the result. (By the way, my previous sentence would be much clearer if I had included a drawing.)
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
Hi Andrea,
The topic you touched on is so hot that much of the survey work remains to be done. For me, one source is the Medium portal. Unfortunately, it's a bit paywalled :-(
Have a look at the newest one [1].
Alex
[1] https://medium.com/@paul.k.pallaghy/llms-like-gpt-do-understand-agi-implica…
Some people have claimed that generative AI based on LLMs can support a path toward Artificial General Intelligence. But the best that can be said is that LLMs are useful for machine translation (especially for Standard Average European), for NL interfaces to computer systems, and for various kinds of text processing, summarization, and generation -- provided that they are supplemented with suitable methods for error checking.
I am currently writing an article that explains why LLMs are not capable of supporting the full range of human thinking, reasoning, and language understanding. In fact, they could not replace the brain of a dog, a raven, or a horse on the tasks those animals do best. See the attached Section6.pdf, which includes Section 6 of the article and the opening paragraphs of Section 7. At the end, it includes two references covering some of the material in the preceding five sections of the article I'm writing.
John
I'm sorry that the 2007 version of CL is not available. The 2018 version has some complex extensions that nobody uses. The IKL extensions are simpler and more useful. Unfortunately, the 2018 ISO standard does not separate the extensions from the much simpler and more useful core.
There were some debates about the usefulness of the different versions. Unfortunately, the other one became the official ISO standard. For the IKL extension and many related publications, see https://jfsowa.com/ikl/
John
----------------------------------------
From: "James Davenport' via ontolog-forum" <ontolog-forum(a)googlegroups.com>
That page has gone and has been replaced by the 2018 version:
https://standards.iso.org/ittf/PubliclyAvailableStandards/c066249_ISO_IEC_2…
James Davenport
Hebron & Medlock Professor of Information Technology, University of Bath
National Teaching Fellow 2014; DSc (honoris causa) UVT
Former Fulbright CyberSecurity Scholar (at New York University)
Former Vice-President and Academy Chair, British Computer Society
Apparently, the operations supported by DNA can serve as the basis of a biological computer. Excerpts below.
www.newscientist.com/article/2391747-dna-based-computer-can-run-100-billion…
John
_________________
DNA-based computer can run 100 billion different programs
Mixing and matching various strands of DNA can create versatile biological computer circuits that can take the square roots of numbers or solve quadratic equations.
A liquid computer can use strands of DNA to run over 100 billion different simple programs. It could eventually be used for diagnosing diseases within living cells.
Fei Wang at Shanghai Jiao Tong University in China and his colleagues set out to make circuits similar to those on a computer chip, except with DNA molecules acting as wires and instructing the wires to configure in certain ways.
When you enter a command on a conventional computer, it instructs electrons to flow through a specific path on a silicon chip. These circuit configurations each correspond to different mathematical operations – adding functions to chips means adding such paths.
To replace the wiring with DNA, Wang and his team modelled how to combine short segments of DNA into larger structures that could serve as circuit components, like wires, or function to direct those wires to form different configurations. They put this into practice by filling tubes with DNA strands and a buffer fluid and letting them attach to each other, combining into larger molecules through chemical reactions. The researchers also equipped all the molecules with fluorescence markers so they could keep track of what the circuit was doing based on how its parts were glowing....
Alex,
Those things were done and published years ago. They are not research issues, and there is nothing controversial about them. They were published in an official ISO standard: ISO/IEC 24707 for Common Logic. The latest version was published in 2018, but it is more complex, and the subset that was defined in 2007 is the only version that has been implemented and used. Even more important, it can be downloaded for free: http://standards.iso.org/ittf/PubliclyAvailableStandards/c039175_ISO_IEC_24…
The ISO standard for Common Logic specifies the core semantics in an abstract syntax that is independent of any readable notation of any kind. It then states that any concrete syntax (linear or diagrammatic) that has a formally defined mapping to the abstract syntax may be called a dialect of Common Logic. Three different concrete syntaxes are specified in the appendices: (1) the Common Logic Interchange Format (CLIF), which has a LISP-like syntax; (2) the Conceptual Graph Interchange Format (CGIF); and (3) an XML-based notation (XCL).
In that standard, the core semantics is formally equivalent to Peirce's existential graphs. The formal name for the notation is "core CGIF", but I use the name EGIF (Existential Graph Interchange Format) because the core can be mapped to and from the graphic notation for EGs. Anything stated in the full CLIF or CGIF or XCL dialects can be mapped to CGIF and then to the core EGIF. The mappings are defined in that standard.
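For example, assuming the CLIF conventions of the standard and the EGIF conventions described above (the sentences themselves are not quoted from either document), "Some cat is on a mat" and "Every cat is an animal" could be written as:

  CLIF:  (exists (x y) (and (Cat x) (Mat y) (On x y)))
  EGIF:  [*x] [*y] (Cat ?x) (Mat ?y) (On ?x ?y)

  CLIF:  (forall (x) (if (Cat x) (Animal x)))
  EGIF:  ~[ [*x] (Cat ?x) ~[ (Animal ?x) ]]

In EGIF, juxtaposition expresses conjunction, [*x] is an existential quantifier, and ~[ ... ] is a negated context; the universally quantified sentence is expressed by the nest of two negations.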
For more details about the full graph notation plus extensions, see the peer-reviewed research publication in the International Journal of Applied Logics: Sowa, John F. (2018) Reasoning with diagrams and images, http://www.collegepublications.co.uk/downloads/ifcolog00025.pdf . That issue of the journal contains several articles presented at a conference in Bogotá, Colombia. My article is the second one. It defines an extension to EGs that also supports mappings to and from images.
But before reading all those formal publications, I recommend the slides from the talk that I presented at the European Semantic Web Conference in 2020: https://jfsowa.com/talks/escw.pdf .
These slides present a simpler overview, which may help smooth the way toward the more detailed formalism. They also contain more links to other publications and presentations that can add useful background. See the links at the bottom of most slides, and the suggested readings in the last slide.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
For me, the next steps are:
- to find axiomatic theories of EG and CG in your egtut.pdf [0] or other papers;
- to wait for the development of [1];
- to continue with E2HOL [2], where we need algorithms that take a string as input and produce a graph or diagram as output (see the sketch after this list).
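A minimal sketch in Python of the kind of algorithm I mean (the EGIF-style input format is just an assumption, not taken from [0] or [1]):

import re

def string_to_graph(text: str) -> dict:
    """Parse relation strings such as '(Cat ?x) (On ?x ?y)' into a
    bipartite graph mapping each relation node to its variable nodes."""
    graph = {}
    pattern = r"\((\w+)((?:\s+\?\w+)*)\)"
    for i, (name, args) in enumerate(re.findall(pattern, text)):
        graph[f"{name}#{i}"] = re.findall(r"\?(\w+)", args)
    return graph

print(string_to_graph("(Cat ?x) (Mat ?y) (On ?x ?y)"))
# {'Cat#0': ['x'], 'Mat#1': ['y'], 'On#2': ['x', 'y']}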
I am happy that we are aligning our terminology.
Alex
[0] https://jfsowa.com/pubs/egtut.pdf
[1] https://inquiryintoinquiry.com/2023/09/15/logical-graphs-formal-development…
[2] https://www.researchgate.net/publication/366216531_English_is_a_HOL_languag…