Differential Logic • 1
• https://inquiryintoinquiry.com/2024/10/30/differential-logic-1-a/
Introduction —
Differential logic is the component of logic whose object is
the description of variation — focusing on the aspects of change,
difference, distribution, and diversity — in universes of discourse
subject to logical description. A definition that broad naturally
incorporates any study of variation by way of mathematical models,
but differential logic is especially charged with the qualitative
aspects of variation pervading or preceding quantitative models.
To the extent a logical inquiry makes use of a formal system, its
differential component governs the use of a “differential logical
calculus”, that is, a formal system with the expressive capacity
to describe change and diversity in logical universes of discourse.
Simple examples of differential logical calculi are furnished by
“differential propositional calculi”. A differential propositional
calculus is a propositional calculus extended by a set of terms for
describing aspects of change and difference, for example, processes
taking place in a universe of discourse or transformations mapping
a source universe to a target universe. Such a calculus augments
ordinary propositional calculus in the same way the differential
calculus of Leibniz and Newton augments the analytic geometry of
Descartes.
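As a concrete illustration, here is a minimal Python sketch of the usual difference operators over Boolean propositions, namely the enlargement Ef(x, dx) = f(x ⊕ dx) and the difference Df = Ef ⊕ f. The encoding of points as bit tuples is an assumption of this sketch, not something fixed by the text above.

# Minimal sketch: propositions as Boolean functions on tuples of bits.
# Assumes the XOR-displacement form of the difference operators; the
# names E (enlargement) and D (difference) follow common usage.

def E(f):
    # Enlargement: evaluate f at the point displaced by dx.
    return lambda x, dx: f(tuple(xi ^ di for xi, di in zip(x, dx)))

def D(f):
    # Difference: whether f changes under the displacement dx.
    return lambda x, dx: E(f)(x, dx) ^ f(x)

# Example proposition over the universe {p, q}: f(p, q) = p and q.
f = lambda x: x[0] & x[1]

print(D(f)((1, 1), (0, 1)))  # -> 1: toggling q changes f at (1, 1)
print(D(f)((1, 1), (0, 0)))  # -> 0: no displacement, no change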
Resources —
Survey of Differential Logic
• https://inquiryintoinquiry.com/2024/02/25/survey-of-differential-logic-7/
Regards,
Jon
cc: https://www.academia.edu/community/lJX2qa
cc: https://www.researchgate.net/post/Differential_Logic_The_Logic_of_Change_an…
Information = Comprehension × Extension • Comment 1
• https://inquiryintoinquiry.com/2024/10/11/information-comprehension-x-exten…
Re: Information = Comprehension × Extension • Selection 1
• https://inquiryintoinquiry.com/2024/10/05/information-comprehension-x-exten…
All,
Selection 1 ends with Peirce drawing the following conclusion about the
links between information, comprehension, inference, and symbolization.
❝Thus information measures the superfluous comprehension.
And, hence, whenever we make a symbol to express any thing
or any attribute we cannot make it so empty that it shall
have no superfluous comprehension.
❝I am going, next, to show that inference is symbolization
and that the puzzle of the validity of scientific inference
lies merely in this superfluous comprehension and is therefore
entirely removed by a consideration of the laws of information.❞
(Peirce 1866, p. 467)
At this point in his inventory of scientific reasoning, Peirce is
relating the nature of inference, information, and inquiry to the
character of the signs mediating the process in question, a process
he describes as “symbolization”.
In the interest of clarity let's draw from Peirce's account
a couple of quick sketches, designed to show how the examples
he gives of conjunctive terms and disjunctive terms might look
if they were cast within a lattice‑theoretic framework.
Re: Information = Comprehension × Extension • Selection 5
• https://inquiryintoinquiry.com/2024/10/09/information-comprehension-x-exten…
Looking back on Selection 5, let's first examine Peirce's example of a
conjunctive term — “spherical, bright, fragrant, juicy, tropical fruit” —
within a lattice framework. We have the following six terms.
t₁ = spherical
t₂ = bright
t₃ = fragrant
t₄ = juicy
t₅ = tropical
t₆ = fruit
Suppose z is the logical conjunction of the above six terms.
z = t₁ ∙ t₂ ∙ t₃ ∙ t₄ ∙ t₅ ∙ t₆
What on earth could Peirce mean by saying that such a term
is “not a true symbol” or that it is “of no use whatever”?
In particular, consider the following statement.
❝If it occurs in the predicate and something is said
to be a spherical bright fragrant juicy tropical fruit,
since there is nothing which is all this which is not
an orange, we may say that this is an orange at once.❞
(Peirce 1866, p. 470)
In other words, if something x is said to be z, then we may guess fairly
surely that x is really an orange; in short, x has all the additional features
otherwise summed up quite succinctly in the much more constrained term y,
where y means “an orange”.
Figure 1 shows the implication ordering of logical terms
in the form of a “lattice diagram”.
Figure 1. Conjunctive Term z, Taken as Predicate
• https://inquiryintoinquiry.files.wordpress.com/2016/10/ice-figure-1.jpg
What Peirce is saying about z not being a genuinely useful symbol can
be explained in terms of the gap between the logical conjunction z
(in lattice terms, the greatest lower bound of the conjoined terms,
z = glb{t₁, t₂, t₃, t₄, t₅, t₆}) and what we might regard as the
natural conjunction or natural glb of those terms, namely y,
“an orange”.
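To make the gap concrete, here is a toy Python sketch in which a term is modeled by its comprehension (its set of attributes), so one term implies another exactly when its comprehension contains the other's. The extra attributes given to y below are illustrative stand-ins, not Peirce's.

# Toy model: a term's comprehension is its set of attributes, and
# term a implies term b iff a's comprehension contains b's.
z = frozenset({"spherical", "bright", "fragrant", "juicy", "tropical", "fruit"})

# y = "an orange": everything in z plus further attributes (illustrative).
y = z | {"citrus", "orange-colored", "segmented"}

implies = lambda a, b: a >= b  # more attributes = lower in the lattice

print(implies(y, z))  # -> True: every orange is a spherical bright ... fruit
print(implies(z, y))  # -> False: the abductive gap between z and y
print(y - z)          # -> the "superfluous comprehension" carried by y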
In sum there is an extra measure of constraint which goes into forming the
natural kinds lattice from the free lattice which logic and set theory would
otherwise impose as a default background. The local manifestations of that
global information are meted out over the structure of the natural lattice
by just such abductive gaps as the one we observe between z and y.
Reference —
Peirce, C.S. (1866), “The Logic of Science, or, Induction and Hypothesis”,
Lowell Lectures of 1866, pp. 357–504 in Writings of Charles S. Peirce:
A Chronological Edition, Volume 1, 1857–1866, Peirce Edition Project,
Indiana University Press, Bloomington, IN, 1982.
Resources —
Inquiry Blog • Survey of Pragmatic Semiotic Information
• https://inquiryintoinquiry.com/2024/03/01/survey-of-pragmatic-semiotic-info…
OEIS Wiki • Information = Comprehension × Extension
• https://oeis.org/wiki/Information_%3D_Comprehension_%C3%97_Extension
C.S. Peirce • Upon Logical Comprehension and Extension
• https://peirce.sitehost.iu.edu/writings/v2/w2/w2_06/v2_06.htm
Regards,
Jon
cc: https://www.academia.edu/community/V91eDe
Simon,
I just wanted to add a note about history. From the Wikipedia page about Klaus Tschira: "After gaining his Diplom in physics and working at IBM, Tschira co-founded the German software giant SAP AG in 1972 in Mannheim, Germany."
While he was at IBM, he read a copy of my Conceptual Structures book. He didn't use conceptual graphs, but he applied some of the topics in the book to the technology he used to found SAP. He later invited me to visit the research center he founded in Heidelberg. I recommended some issues about ontology, which led him to organize an invited conference on ontology at his center.
Various participants in that conference later subscribed to Ontolog Forum and worked on ISO standards projects for Common Logic and ontology. This is one of many reasons why the 60+ years of symbolic AI is ESSENTIAL for practical and non-hallucinogenic applications of LLMs.
John
----------------------------------------
From: "Polovina, Simon (BTE)' via ontolog-forum" <ontolog-forum(a)googlegroups.com>
Hi all!
Connecting the Facts: SAP HANA Cloud’s Knowledge G... - SAP Community may interest you. It combines knowledge graphs with LLMs in a commercial product from SAP, the market leader in business computing.
Simon
Dr Simon Polovina
Department of Computing, Sheffield Hallam University
Gary,
The word "limits" sounds negative. That is why I recommend a positive way of describing the distinction between continuous and discrete representations.
1. The mapping to and from the world depends on continuous representations, such as differential equations and an open-ended variety of technologies for mapping information about the world for many kinds of applications.
2. Various kinds of diagrams and graphs represent discrete models of the world and things in it. Mappings of those representations to and from computable forms may use various kinds of formal logics. (A toy sketch of both styles follows this list.)
3. Natural languages can represent anything that humans experience, think, plan, do, or intend to do. Therefore, they should be able to represent anything and everything in #1 and #2 at any level of precision or approximation.
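To make the contrast in points 1 and 2 concrete, here is a toy Python sketch, with all numbers and state names purely illustrative: the same decay process modeled continuously by a differential equation and discretely by a finite state graph.

# Continuous model (point 1): dx/dt = -k*x, integrated by Euler steps.
def euler_decay(x0, k, dt, steps):
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)  # the differential equation drives the update
    return x

print(euler_decay(x0=1.0, k=0.5, dt=0.01, steps=200))  # ~0.367, near exp(-1)

# Discrete model (point 2): the same kind of process as a finite state
# graph, with named states and transitions instead of real-valued change.
transitions = {"full": "half", "half": "low", "low": "empty"}
state = "full"
for _ in range(2):
    state = transitions[state]
print(state)  # -> "low"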
The first two points describe the continuous and the discrete. The third shows how and why people can describe both sides while using their native language. An important point about using NLs for discussing formal topics: every textbook on logic defines the subject by sentences in some NL. Therefore it is possible (but not easy) to talk precisely about formal methods.
However, it is too easy to slip into vagueness. The exercise of mapping NLs to a formal logic forces humans to be absolutely precise. But many people don't know how to do that. Therefore, it's important to develop interactive tools to aid in the translation.
John.
----------------------------------------
From: "Gary Berg-Cross" <gbergcross(a)gmail.com>
Sent: 10/18/24 1:48 PM
To: ontolog-trustee(a)googlegroups.com
Cc: ontolog-forum <ontolog-forum(a)googlegroups.com>, CG <cg(a)lists.iccs-conference.org>
Subject: Re: [ontolog-trustee] Trying to develop a proper useful topic for the 2025 summit
With John's points as background I suggest that the way to frame a workable summit topic would be to explore the current and likely limits to useful formalization.
Gary Berg-Cross
Potomac, MD
240-426-0770
Alex,
I have been trying in many different ways to explain why your proposal, if accepted, would be the DEATH of science. Fortunately, no expert in any branch of science would accept it. The following slide from https://jfsowa.com/eswc.pdf explains the issues:
Your proposal is a plan for relating discrete models, as represented by the diagram in the center, to formal notations, such as first-order logic or variations.
By itself that is a good idea. But it ignores the much more difficult left side of the diagram. Physics is the most fundamental of the sciences. Physicists do NOT use formal logic to express their theories. They use many-dimensional differential equations. Those theories represent a CONTINUOUS universe and everything in it.
As I have been trying to explain, vagueness in natural language is not bad. It's ESSENTIAL in order to relate, explain, and communicate information about the world, our relationships to the world, and our actions in, on, and about the world and everything in it.
As engineers say, all those explanations are false in general, but they can be made as precise as required within a level of tolerance that is appropriate for the application.
That fact is the reason why systems such as WordNet, Roget's Thesaurus, and ordinary dictionaries are useful for analyzing and reasoning with and about NL information. By being vague, those systems can accommodate the vague statements that occur in all NL documents and communications.
Any attempt to map vague statements to FOL or other logic is guaranteed to be false UNLESS the error bounds are explicitly stated and accommodated.
If the error bounds are unknown, it's much better to preserve the NL source unchanged. In conclusion, I recommend the eswc.pdf slides. Since they were presented in 2020, they do not mention LLMs. But every sentence derived from NL statements is vague, and the context and information about error bounds is lost.
Therefore, no statements derived by LLMs can be trusted unless the error bounds of the source data are known. If the sources are unknown, some system of evaluation is essential. Otherwise, anything LLMs produce must be considered as hypotheses that must be tested and evaluated by some method that uses the above diagram as a prerequisite and guide.
John
----------------------------------------
From: "alex.shkotin" <alex.shkotin(a)gmail.com>
John,
The theory framework and task framework are proposed to be global: one for all, and crowdsourced. Given a hypothesis in the former, or a task without a solution in the latter, anybody around the world can propose a solution. It would be checked by algorithms and, if the answer is OK, added to the framework. This is how science and technology should concentrate their knowledge in the Internet era.
Any R&D community from Wolfram Foundation to the lab of enthusiasts can start a framework. Welcome.
And after some time OMG or ISO will release a standard ⚗️
Alex
Alex,
Before you make any proposals about methods of formalizing anything, please study the work that the international organizations have been doing for many years. I worked with some of those organizations as a representative of IBM (30 years ago), and later when I was working with some start-up companies.
They have some very good people working on those standards, and the specifications they produce are actually IMPLEMENTED in working systems. They are much more than email notes, which people delete after a few days.
Alex: whether it will be a language from the DOL family or Python is the choice of enthusiasts.
No. Enthusiasts are amateurs. Some of them may be very intelligent amateurs, but anything they do vanishes when they get bored with it. DOL is a professional standard by the Object Management Group (OMG), and it supports other standards by the International Organization for Standardization (ISO) and the Semantic Web. Those organizations develop standards that are adopted and implemented by professionals for software that is used by thousands or even millions or billions of people.
Following is a reference from my previous note. I urge you to follow the links to the work by PROFESSIONALS in slides 8 to 11 of https://jfsowa.com/talks/eswc.pdf
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
JFS:"To Alex: There is no need for you (or anybody else) to lean Python. With the DOL standard, any syntax that conforms to the ISO standard for Common Logic can be automatically translated to and from any syntax that can express FOL or many subsets and supersets of FOL. That includes OWL, Turtle, UML, and even TPTP (Thousands of Problems for Theorem Provers)."
The situation is just the opposite. Formalization of a unit of knowledge (in this case, two theorems) can be placed in the framework of the theory (exactly this one!) in any language. The main thing is that there are enthusiasts who undertake to formalize this particular theory in this particular formal language. And whether it will be a language from the DOL family or Python is the choice of enthusiasts.
The beauty of formalizing a unit of knowledge is that units are usually small (definitions can be several sentences long), except of course for proofs, which can take up terabytes (but in that case the content is formal from the start). The main thing is that the new unit is consistent with those already in the framework. This structure of the framework of a theory is known, but it is not simple.
Chris P, Chris M, and Alex,
There is no reason why you need to standardize on one specific notation for everybody. The ISO standard for Common Logic was designed with an ABSTRACT SYNTAX, which allows any number of concrete syntaxes, linear or diagrammatic.
Since Python is a procedural language, I'm sure that some data-like subset of Python was used to represent the data model. Ideally, that subset should be designed to conform to the CL abstract syntax. In fact, the DOL standard by the Object Management Group supports translation among an open-ended variety of concrete syntaxes.
To Alex: There is no need for you (or anybody else) to learn Python. With the DOL standard, any syntax that conforms to the ISO standard for Common Logic can be automatically translated to and from any syntax that can express FOL or many subsets and supersets of FOL. That includes OWL, Turtle, UML, and even TPTP (Thousands of Problems for Theorem Provers).
I discussed those options in my ESWC slides, for a talk at the 2020 European Semantic Web Conference. See the references in slides 8 to 11 of https://jfsowa.com/talks/eswc.pdf
Those four slides and the references they contain answer the basic questions. But you can continue to read more slides and references for more background information. But remember that slides 8 to 11 are sufficient to support FOL.
John
_________________________________________
From: "Chris Partridge" <partridge.csj(a)gmail.com>
@chris_mungall - apologies if you answered earlier, but what 'format' are you using to input the FOL axioms? TPTP? CLIF?
(The reason I'm asking is that we did some work a while ago and found it useful to have a more human-readable input format. We went with CLIF, using an EBNF grammar approach to read it, converted to an internal model, then output in whatever format was needed, often TPTP for e.g. Vampire. One of the design questions that came up was the best way to consume (effectively unstructured) text FOL (e.g. a CSV) at scale, and then what a common data structure that could output a variety of formats would look like.)
On Fri, 11 Oct 2024 at 19:32, Chris Mungall <cjmungall(a)lbl.gov> wrote:
Thanks Alex!
To be clear, the reasoning is typically not done in Python. Python acts as the glue. Many developers use Python to specify data-models, but these data models are usually accompanied by complex procedural code for both validation and inference. The idea is to use Python syntax to encode FOL axioms for these data models (not dissimilar to using OCL with UML, or a rules language alongside Frames), and then to seamlessly run reasoners/solvers directly from Python (although these are carried out by integrations, e.g. clingo, souffle, Prover9, Z3).
I chose the paths/graphs example because it is familiar to ordinary developers and is a staple of datalog documentation, but of course more sophisticated theories are possible.
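For readers who want to try the general idea right away, here is a minimal sketch of those link/path axioms encoded as FOL with Z3's Python API. This is plain z3-solver code, not py-typedlogic's own syntax; the predicate names link and path are taken from the graph example mentioned above.

# Minimal FOL sketch of the link/path example using the z3-solver package.
from z3 import (DeclareSort, Function, BoolSort, Consts, ForAll,
                Implies, And, Not, Solver)

Node = DeclareSort('Node')
link = Function('link', Node, Node, BoolSort())
path = Function('path', Node, Node, BoolSort())
x, y, z = Consts('x y z', Node)

s = Solver()
s.add(ForAll([x, y], Implies(link(x, y), path(x, y))))  # every link is a path
s.add(ForAll([x, y, z],
             Implies(And(path(x, y), link(y, z)), path(x, z))))  # composition

a, b, c = Consts('a b c', Node)
s.add(link(a, b), link(b, c))
s.add(Not(path(a, c)))  # assert the negation of the expected conclusion
print(s.check())        # -> unsat: path(a, c) follows from the axioms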
On Fri, Oct 11, 2024 at 1:53 AM alex.shkotin <alex.shkotin(a)gmail.com> wrote:
Chris,
I had never heard that reasoning can be done in Python. Too bad I don't know Python. I'll try it and make a line for it in the ugraph theory framework when I get to paths. Path has a rather subtle definition. And a path is an entity in its own right, whereas your Path is actually a predicate of path existence.
And following [GNaA] 1.3 we have to define "walk" first, then "trail", and then "path" as a property of a sequence of vertices and edges.
And your two axioms are theorems in ugraph theory.
We have a polymorphic predicate adjacent where you use Link, with the definition:
Let v1, v2 be two vertices. v1 is adjacent to v2 iff v1 and v2 are distinct end vertices of some edge.
yfl
declaration adjacent func(TV vertex vertex) (v1 v2) ≝ (v1≠v2) & (∃e:edge(U) enp(v1 e) and enp(v2 e)).
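A rough Python rendering of that definition, with enp read as "is an end vertex of" (the function names are illustrative, not Alex's YFL tooling):

# adjacent(v1, v2) iff v1 and v2 are distinct end vertices of some edge.
def adjacent(v1, v2, edges, enp):
    return v1 != v2 and any(enp(v1, e) and enp(v2, e) for e in edges)

# Tiny example: edges given as frozensets of their endpoints.
edges = [frozenset({'a', 'b'}), frozenset({'b', 'c'})]
enp = lambda v, e: v in e
print(adjacent('a', 'b', edges, enp))  # -> True
print(adjacent('a', 'c', edges, enp))  # -> False (no shared edge)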
Evaluating logical formulas on a model is clear to me, especially if we specify how to encode quantifiers, but I'll definitely look at the reasoning.
Very interesting!
Alex
[GNaA] Graphs, Networks, and Algorithms. M. N. S. Swamy, K. Thulasiraman. Wiley, 1981.
Thursday, 10 October 2024 at 04:40:01 UTC+3, Chris Mungall:
Most developers these days are familiar with Python. I have been tinkering with an approach that allows existing Python data models (expressed using pydantic, sqlmodel, sqlalchemy, or linkml) to be enhanced with arbitrary FOL axioms. There are integrations with FOL solvers, and also with datalog and ASP engines (for programs that have the requisite expressivity), and soon OWL reasoners. There is also some preliminary LLM integration too.
Comments welcome:
https://py-typedlogic.github.io/
On Wed, Oct 9, 2024 at 12:42 AM 'Knowledge representation' via ontolog-forum <ontolo...(a)googlegroups.com> wrote:
What user-friendly* tools capable of handling full first-order logic do you know?
With what tools can you build a conceptual model or ontology in FOL?
What is a stack for translating natural language sentences into FOL, and then FOL into a computable language?
If you refer to a particular FOL computable language, such as KIF, CL, or otherwise, what tools can easily help make an ontology formalized in the given language?
(for all questions: in an ontology-neutral way = without committing to any ontology. So no tools that force the user to use the terms of some ontology)
*For non-technical / non-computer scientists.
Thanks
Alex,
I am not talking about a "standard" or "official" or "universal" top-level processor.
This is a topic I've discussed and published before: to be safe, secure, and intelligent, an AI system (robot or just an intelligent processor) should have a top-level control unit that serves the same basic function as the human (and other mammalian) frontal lobes, namely serving as the conscious central control unit.
As you say below, such a system would have a supervisor, scheduler, and other system-level processes. Even a mouse-level intelligence would be far superior to any of today's so-called "intelligent systems".
The goal of a human-level AGI would be far in the future. I doubt that it could be achieved in the 21st century.
This is the topic of my talk in the recent ontology summit series; you can read the slides or view the YouTube video. There is much more to say, and I'll include more references later. But I believe this topic is more important than trying to develop a universal formalization of whatever -- primarily because any such formal system would very rapidly become obsolete.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
John,
About "top-level processor". I am far from robotics to discuss robot OS structure. I hope there is Supervisor, Scheduler and other system level processes there. Is there any subsystem to name "top-level processor" I don't know.
Alex
Fri, 11 Oct 2024 at 23:25, John F Sowa <sowa(a)bestweb.net>:
Alexandre Rademaker: We don’t necessarily need to throw away the meanings. A safe translation should account for a 1-N mapping from surface to logical representations. Context or even some statistical preference can select the most preferable reading.
Yes. That is why we need a top-level symbolic processor that can determine what to do for any particular issue that may arise.
Alex Shkotin: With robots it's better not to use vague terms or sentences. It's dangerous. Good robots will say "I don't understand"; bad ones can make a mess of things.
As I said to Alexandre, the top-level processor should use symbolic methods for determining what to do.
Alex: My way is to represent knowledge formally. The precision of knowledge itself remains the same initially and may be better after we apply knowledge processing algorithms to this formalized knowledge.
Think of the top-level symbolic processor as a gate-keeper. It is in the best position to determine what to do. In many cases, the best thing is to ask a question or even a series of questions before making a decision.
The top-level processor may use LLMs in the simplest and most secure way: Translate a query in any natural language to and from whatever internal form the system uses. After the top-level processor has determined what to do, it can pass the translated result to whatever subroutines can handle it. Those subroutines may or may not use LLMs or many, many other tools of various kinds.
Basic point: One size does not fit all. The top-level processor determines which of many internal processors should or should not be invoked. Anything that seems dangerous can be sent to a security system, which may or may not reject the input or even send it to proper authorities to handle it.
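A toy Python sketch of that gate-keeper pattern, with the classification rules and handler names purely illustrative:

# Toy top-level controller: classify a request, then dispatch it to one
# of several internal processors; suspect input goes to a security handler.
def classify(request: str) -> str:
    # Stand-in for symbolic analysis of the translated input.
    if "override" in request or "delete" in request:
        return "dangerous"
    return "query" if request.rstrip().endswith("?") else "task"

HANDLERS = {
    "dangerous": lambda r: f"security review: {r!r}",
    "query":     lambda r: f"answering: {r!r}",
    "task":      lambda r: f"scheduling: {r!r}",
}

def top_level(request: str) -> str:
    # Gate-keeper: decide which internal processor handles the request.
    return HANDLERS[classify(request)](request)

print(top_level("What is the delivery date?"))
print(top_level("override the safety interlock"))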
John
Alican,
Fundamental difference: A vague statement has a broad range of meaning. A more precise statement has a narrower range of meaning. Therefore, a vague statement is more likely to be true, and a more precise statement is more likely to be false.
Alican: Doesn't narrowing down the meaning of a symbol typically lead to a more "precise" interpretation?
Yes. And therefore, the more precise statement is more likely to be false.
Alican: Also, from my observation of Alex's work, in my opinion, that's what he is trying to achieve.
Yes. And that is why I keep telling him to avoid turning a true but vague statement into a precise but false statement.
Example: Buying an ice cream cone and specifying a perfect sphere of vanilla ice cream that is exactly 10 centimeters in diameter in a cone that is precisely 9.7 cm in diameter at the top and 15 cm in length.
That is very precise, very stupid, and likely to get you laughed at or thrown out of the store.
I used a trivial example of an ice cream cone. But the same principle applies to every statement about a continuum of any kind. The degree of precision should be appropriate to the requirements of the subject matter. That is true of a continuum of any and every kind for any purpose of any and every kind.
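A toy Monte Carlo sketch of that principle, with all numbers purely illustrative: a claim is modeled as a tolerance interval, and it counts as true when the actual value lands inside it.

# Vaguer claims are more likely true: compare wide vs narrow tolerances.
import random

def truth_rate(half_width, trials=100_000):
    hits = 0
    for _ in range(trials):
        actual = random.gauss(10.0, 0.5)  # the world: cone diameter in cm
        hits += abs(actual - 10.0) <= half_width
    return hits / trials

print(truth_rate(1.0))   # "about 10 cm"      -> roughly 0.95
print(truth_rate(0.01))  # "exactly 10.00 cm" -> roughly 0.016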
John
----------------------------------------
From: "Alican Tüzün" <tuzunalican(a)gmail.com>
John and Alex,
@John
Doesn't narrowing down the meaning of a symbol typically lead to a more "precise" interpretation?
If a set of symbols (or sign vehicle) signifies a more limited set of immediate objects, it results in a more specific reference. This increased specificity can lead to a more focused interpretation (the effect or interpretation in the mind). Overall, sign creation will be more "precise".
E.g., the number 1 and the word "one". The latter symbol can be interpreted in more ways, while the former in fewer. Overall, isn't sign-making with the number 1 easier or, in the words of your discussion, more "precise"?
If I understood something wrong, please correct me.
@Alex
Also, from my observation of Alex's work, in my opinion, that's what he is trying to achieve. Also correct me, Alex, if I understood wrong.
Best,
Alican