Differential Propositional Calculus • 15
• https://inquiryintoinquiry.com/2023/12/04/differential-propositional-calcul…
Fire over water:
The image of the condition before transition.
Thus the superior man is careful
In the differentiation of things,
So that each finds its place.
— I Ching ䷿ Hexagram 64
Differential Extension of Propositional Calculus —
This much preparation is enough to begin introducing my
subject, if I excuse myself from giving full arguments
for my definitional choices until a later stage.
To express the goal in a turn of phrase, the aim is to
develop a differential theory of qualitative equations,
one which can parallel the application of differential
geometry to dynamical systems. The idea of a tangent
vector is key to the work and a major goal is to find
the right logical analogues of tangent spaces, bundles,
and functors. The strategy adopted is to look for the
simplest versions of those constructions that can be
discovered within the realm of propositional calculus,
so long as they serve to fill out the general theme.
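
To make the tangent-vector idea concrete, here is a minimal Python sketch of my own (an illustration, not the author's code), using the enlargement and difference operators as defined in the "Differential Logic and Dynamic Systems" notes linked under Resources:

from itertools import product

# For a proposition f : {0,1}^n -> {0,1}, the enlargement
# Ef(x, dx) = f(x + dx) evaluates f at the displaced point, and the
# difference Df(x, dx) = f(x + dx) XOR f(x) records whether the change
# dx flips the value of f: a qualitative analogue of a derivative.

def enlargement(f, x, dx):
    """Ef(x, dx) = f(x XOR dx): the proposition at the displaced point."""
    return f(tuple(xi ^ di for xi, di in zip(x, dx)))

def difference(f, x, dx):
    """Df(x, dx) = f(x XOR dx) XOR f(x): true when the change dx alters f."""
    return enlargement(f, x, dx) ^ f(x)

# Example: f(p, q) = p AND q.  Df holds exactly for those changes dx
# that carry x across the boundary of the region where f is true.
f = lambda x: x[0] & x[1]
for x in product((0, 1), repeat=2):
    for dx in product((0, 1), repeat=2):
        if difference(f, x, dx):
            print(f"Df at x={x} holds under dx={dx}")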
Reference —
Wilhelm, R., and Baynes, C.F. (trans.), The I Ching,
or Book of Changes, Foreword by C.G. Jung, Preface
by H. Wilhelm, 3rd edition, Bollingen Series XIX,
Princeton University Press, Princeton, NJ, 1967.
Resources —
Differential Logic and Dynamic Systems
• https://oeis.org/wiki/Differential_Logic_and_Dynamic_Systems_%E2%80%A2_Part…
Differential Extension of Propositional Calculus
• https://oeis.org/wiki/Differential_Logic_and_Dynamic_Systems_%E2%80%A2_Part…
Regards,
Jon
cc: https://www.academia.edu/community/l7pvk5
But a Turing machine that is connected to the WWW is an oracle machine:
MN> Yes, Turing describes NON-algorithmic machines (like the oracle machine—the o-machine as he called it)—but so far we are stuck in the algorithmic.
The following article gives a good, readable historical development of the issues. (Here 'readable' means that if you studied some of these topics many years ago and have forgotten almost everything, the article has enough clear discussion that you won't need any further study elsewhere. If you remember a little more, you can flip through quite fast. That is not something you can say about most publications on these topics.)
Turing Oracle Machines, Online Computing, and Three Displacements in Computability Theory
Robert I. Soare, http://www.people.cs.uchicago.edu/~soare/History/turing.pdf
Following is the final paragraph on p. 60:
Conclusion 14.4. For pedagogical reasons with beginning students it is
reasonable to first present Turing a-machines and ordinary computability.
However, any introductory computability book should then present as soon
as possible Turing oracle machines (o-machines) and relative computability.
Parallels should be drawn with offline and online computing in the real world.
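
To illustrate the distinction in code, here is a toy sketch of my own (not Soare's): an a-machine is an ordinary program, while an o-machine is the same kind of program granted a black-box oracle that it may query but could not compute itself.

# Toy sketch of relative computability (my illustration, not Soare's).
# The oracle stands in for an uncomputable set such as the halting set,
# or, per the "online computing" parallel, an answer fetched from the WWW.

def o_machine(n, oracle):
    """Algorithmic core plus oracle queries: computable *relative to* oracle."""
    # Every local step is ordinary computation; only oracle(n) is external.
    return n * 2 if oracle(n) else n

# With a computable oracle the o-machine collapses to an ordinary a-machine:
print(o_machine(21, oracle=lambda n: n % 2 == 1))  # -> 42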
John
----------------------------------------
From: "Nadin, Mihai" <nadin(a)utdallas.edu>
There is NO general intelligence that is good for everything; rather, intelligence is concrete, with characteristics defined by its context.
I hope that these notes explain my invitation to my respected colleagues to read Hilbert’s challenge and Turing’s paper. Yes, Turing describes NON-algorithmic machines (like the oracle machine—the o-machine as he called it)—but so far we are stuck in the algorithmic.
Best wishes for a happy and healthy 2024
Mihai Nadin
I agree with Mihai Nadin "that AGI is yet another of those impossible to achieve tasks." I have repeatedly said that it won't be achieved in the 21st century, but I won't make any predictions about the 22nd. So far, nobody has produced the slightest shred of evidence for any kind of AGI any sooner. The best summary of the issues: "AGI is 30 years in the future, always was and always will be." There are still some diehards who claim that the prediction from the year 2000 will come to pass in the next 6 years, but the hopes for generative AI are already dying.

But there are many useful applications for better natural-language interfaces to all kinds of systems, not just AI.
Dan Brickley dug up some excellent references on predictive coding, and Karl Friston is one of the pioneers in the field (see below). A recent book (2022) from MIT Press, co-authored by Friston, covers the field: "Active Inference: The Free Energy Principle in Mind, Brain, and Behavior." Chapters of that book can be downloaded for free, and Appendix C has an annotated example of the MATLAB code.
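
For readers who want the gist of the mechanism, here is a toy sketch of a predictive-coding update in Python (my simplification, not the book's MATLAB code): a belief about a hidden cause is adjusted to reduce precision-weighted prediction error, which amounts to a crude gradient step on free energy.

import random

# Toy predictive-coding loop (a simplified sketch, not the book's code).
# The generative model predicts the observation as g(mu) = mu; the belief
# mu descends the gradient of precision-weighted squared prediction error,
# a stand-in for minimizing variational free energy.

def infer(observations, mu=0.0, precision=1.0, lr=0.1):
    for y in observations:
        error = y - mu                 # prediction error: sensed minus predicted
        mu += lr * precision * error   # update belief to reduce surprise
    return mu

data = [random.gauss(2.0, 0.5) for _ in range(200)]
print(infer(data))  # converges near the hidden cause, 2.0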
I believe that this is the approach and the software techniques that Verses AI has adopted. I don't know how well Friston and his colleagues can develop this approach, but I strongly suspect that some of the co-authors and/or their colleagues and students will be working with them. However, practical applications always take more time and more investment than was predicted. (I worked at IBM R & D for 30 years, and I know the issues from close observation and participation.)
Ricardo Sanz: Friston's work is OK. Neuroscience, statistics, and optimal control. Good ol' classic math. VERSES' narrative is classic bullshit. Not "breakthrough" bullshit; just classic bullshit. In my opinion, anthropocentrism, the intelligence=brain fallacy, and biomesmerization are the biggest roadblocks on the way to AGI.
Neuroscience is much broader than anthropomorphism. Living things from bacteria on up are far more successful in complex behavior than any of the latest and greatest driverless cars. Furthermore, very few of the people who have been working on generative AI know anything about neuroscience or the other branches of cognitive science. Therefore, none of the work in those fields could deter (or inspire) them. And it shows.
I won't defend the claims by Verses AI unless and until they come up with software that implements their promises. But I love their criticisms of generative AI. I can't see how anybody could claim that it's on a path toward AGI.
John
----------------------------------------
From: "Dan Brickley" <danbri(a)danbri.org>
For an implementation-oriented survey, see https://github.com/BerenMillidge/Predictive_Coding_Papers and, in general, work under the "predictive processing" and "predictive coding" banners.
Also, this book has PDFs available:
https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Ene…
and it gets pretty specific, e.g. ch. 8 on continuous-time dynamical-systems representation; see
https://doi.org/10.7551/mitpress/12441.003.0012
Dan
After a bit of searching, I found more info about Verses AI and their new chief scientist. I like the approach they're taking: putting more emphasis on the natural thinking processes studied in neuroscience. And their new chief scientist has publications that would lead them in that direction. The ideas look good, and I would recommend them. But I don't know how far he and his colleagues have gone in implementing them, or how long it will take for anything along those lines to be running in a practical system.
However, it's unlikely that any company would hire somebody as chief scientist without a considerable amount of prior work. And I doubt that any company would make an announcement in a full-page ad in the New York Times unless they already had some kind of prototype.
Following is a list of theoretical publications by Karl Friston: https://www.fil.ion.ucl.ac.uk/~karl/#_Computational_neuroscience
None of them describe an implementation. But it's possible that he and his colleagues (and/or graduate students) have implemented something that Verses AI wanted.
And by the way, one reason why I like this approach is that it's related to methods that Peirce was suggesting. He is famous for his innovations in logic, but he also had many ideas about biosemiotics and reasoning methods in living things down to the level of insects and plants. He even mentioned possible aliens in outer space as agents that might continue research if humans didn't survive.
Although I don't know whether Verses AI will succeed with their plans, I believe that the direction they're taking is more promising than anything OpenAI or Google is doing. I believe that any design that ignores neuroscience is a dead end for AGI.
John
___________________
An excerpt from https://www.verses.ai/press-2/vers-karl-friston
“It is with great enthusiasm and excitement that we welcome Karl Friston to VERSES as our Chief Scientist,” said Gabriel René, Founder, and CEO of VERSES. “Dr. Friston’s breakthrough work in neuroscience and biologically-inspired AI, known as Active Inference, aligns beautifully with our vision and mission to enable a “smarter world” where AI powers the applications of the 21st century. As the originator of this principle, it is only fitting that Karl has a significant role in VERSES AI research and development all the way through their applied uses in product commercialization.”
Friston, who was ranked the #1 most influential neuroscientist in the world by Semantic Scholar in 2016, has had an illustrious and decorated scientific career. He became a Fellow of the Royal Society in 2006 and of the Royal Society of Biology in 2012, received the Weldon Memorial Prize and Medal in 2013 for his remarkable contributions to mathematical biology, and was elected a member of EMBO in 2014 and of the Academia Europaea in 2015. He was the 2016 recipient of the Charles Branch Award for unparalleled breakthroughs in Brain Research and the Glass Brain Award from the Organization for Human Brain Mapping. He holds honorary doctorates from the universities of York, Zurich, and Liège, and from Radboud University.
“I am delighted and honored to join VERSES. I have seldom met such a friendly, focused, committed, and right-minded group of colleagues. On a personal note, my appointment as Chief Scientist is exactly the kind of dénouement of my academic career I had hoped for – a dénouement that marks the beginning of a new and exciting journey of discovery and enabling.” said Karl Friston.
Verses AI took out a full-page ad in the NY Times that criticizes and debunks generative AI and proposes an alternative. I agree with their criticism, but I don't know enough about the alternative to make any further comments. If anybody has difficulty reaching the website below, an excerpt without the graphics follows.
In any case, it confirms my basic point: the technology based on LLMs is valuable for many purposes, especially translations between and among languages, natural and artificial. But there is a huge amount of intelligent behavior (by humans and other living things) that it cannot reproduce. Google and others supplement LLMs with different technologies.
How much and what kind of other technology is needed remains an open question. The reference below is a suggestion.
John
_______________________
https://medium.com/aimonks/verses-ai-announces-agi-breakthrough-invokes-ope…
In an unprecedented move, VERSES AI today announced a breakthrough revealing a new path to AGI based on 'natural' rather than 'artificial' intelligence: VERSES took out a full-page ad in the NY Times with an open letter to the Board of OpenAI appealing to their stated mission "to build artificial general intelligence (AGI) that is safe and benefits all of humanity."
Specifically, the appeal addresses a clause in the OpenAI Board's charter concerning their mission "to build artificial general intelligence (AGI) that is safe and benefits all of humanity" and the concern about late-stage AGI becoming a "competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."
What Happened?
VERSES has achieved an AGI breakthrough within their alternative path to AGI, Active Inference, and they are appealing to OpenAI "in the spirit of cooperation and in accordance with [their] charter."
According to their press release today, “VERSES recently achieved a significant internal breakthrough in Active Inference that we believe
addresses the tractability problem of probabilistic AI. This advancement enables the design and deployment of adaptive, real-time Active
Inference agents at scale, matching and often surpassing the performance of state-of-the-art deep learning. These agents achieve superior
performance using orders of magnitude less input data and are optimized for energy efficiency, specifically designed for intelligent computing
on the edge, not just in the cloud.”
In a video published as part of today's announcement, titled "The Year in AI 2023," VERSES looks at the incredible journey of AI acceleration over the past year and what it suggests about the current path from Artificial Narrow Intelligence (where we are now) to Artificial General Intelligence, or AGI (the holy grail of AI automation). It notes that all of the major players in Deep Learning technology have publicly acknowledged over the course of 2023 that "another breakthrough" is needed to get to AGI. For many months now, there has been an overwhelming consensus that machine learning/deep learning cannot achieve AGI. Sam Altman, Bill Gates, Yann LeCun, Gary Marcus, and many others have publicly stated so.
Just last month, Sam Altman declared at the Hawking Fellowship Award event at Cambridge University that “another breakthrough is needed”
in response to a question asking if LLMs are capable of achieving AGI.
[See graphic in article]
Even more concerning are the potential dangers of proceeding in the direction of machine intelligence, as evidenced by the "Godfather of AI," Geoffrey Hinton, a pioneer of backpropagation and deep learning, withdrawing from Google early this year over his concerns about the potential harm to humanity of continuing down the path to which he had dedicated half a century of his life.
So What Are The Potential Dangers of Deep Learning Neural Nets?
The many problems posing potential dangers along the current path of generative AI are compelling and quite serious.
· Black box problem
· Alignment problem
· Generalizability problem
· Hallucination problem
· Centralization problem — one corporation owning the AI
· Clean data problem
· Energy consumption problem
· Data update problem
· Financial viability problem
· Guardrail problem
· Copyright problem
All Current AI Stems from This ‘Artificial’ DeepMind Path
[see graphics and much more of this article]
. . .
For months, I have been criticizing LLM technology for ignoring the 60+ years of developments in AI and computer science.
But finally, they can now call a subroutine to do elementary arithmetic. That might not sound like much, but it opens the door to EVERYTHING. It means that LLMs can now invoke a subroutine that can do anything and everything that any computer program has been able to do for over 70 years.
Previous applications could combine LLMs with other software by putting a conventional program in charge and calling LLM-based systems as subroutines. That is still possible with Q* systems. But the option of allowing LLMs themselves to call external subroutines provides greater flexibility. See below for excerpts from https://www.digitaltrends.com/computing/what-is-project-q/
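
As a purely hypothetical sketch of the pattern (the message format and names are my own assumptions for illustration, not any vendor's actual API): the model emits a structured request naming a subroutine, and the host program executes it and hands the result back.

import json

# Hypothetical tool-dispatch loop. Instead of guessing at arithmetic,
# the LLM emits a structured call; the host runs real code and returns
# the result for the model to continue with. Any conventional program
# could sit behind the TOOLS table.

TOOLS = {
    "add": lambda a, b: a + b,
}

def handle(model_output):
    """Execute a tool call if the model emitted one; else pass text through."""
    msg = json.loads(model_output)
    if "tool" in msg:
        result = TOOLS[msg["tool"]](*msg["args"])
        return f"tool result: {result}"   # fed back into the model's context
    return msg["text"]

print(handle('{"tool": "add", "args": [2, 2]}'))  # -> tool result: 4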
However, there are still some things left to criticize and more work to be done before humans become obsolete.
John
___________________________
What is Project Q*?
Before moving forward, it should be noted that all the details about Project Q*, including its existence, come from fresh reports following the drama around Altman's firing. Reporters at Reuters said on November 22 that they had been given the information by "two people familiar with the matter," providing a peek behind the curtain of what was happening internally in the weeks leading up to the firing.
According to the article, Project Q* was a new model that excelled in learning and performing mathematics. It was still reportedly only at the level of solving grade-school mathematics, but as a beginning point, it looked promising for demonstrating previously unseen intelligence from the researchers involved.
Seems harmless enough, right? Well, not so fast. The existence of Q* was reportedly scary enough to prompt several staff researchers to write a letter to the board to raise the alarm about the project, claiming it could “threaten humanity.”
On the other hand, other attempts at explaining Q* aren’t quite as novel — and certainly aren’t so earth-shattering. The Chief AI scientist at Meta, Yann LeCun, tweeted that Q* has to do with replacing “auto-regressive token prediction with planning” as a way of improving LLM (large language model) reliability. LeCun says all of OpenAI’s competitors have been working on it, and that OpenAI made a specific hire to address this problem.
[Note by JFS: "auto-regressive token prediction" is jargon for what LLMs do by themselves. Planning is an example of GOFAI (Good Old Fashioned AI). The Q* breakthrough allows LLMs to call GOFAI subroutines. That might not sound like much, but it's the critical innovation that enables integration of old and new AI methods.]
One of the main challenges to improve LLM reliability is to replace Auto-Regressive token prediction with planning. Pretty much every top lab (FAIR, DeepMind, OpenAI etc) is working on that and some have already published… — Yann LeCun (@ylecun) November 24, 2023
[JFS: The verb 'replace' is inaccurate. The original methods for using LLMs are still available. A better term is 'integrate'.]
LeCun’s point doesn’t seem to be that such a development isn’t important — but that it’s not some unknown development that no other AI researchers aren’t currently discussing. Then again, in the replies to this tweet, LeCun is dismissive of Altman, saying he has a “long history of self-delusion” and suggests that the reports around Q* don’t convince him that a significant advancement in the problem of planning in learned models has been made.
[JFS: In one sense, that's true, since integration was possible with the older methods. But the Q* options enable a smoother and more flexible integration of LLMs with the methods of GOFAI and other branches of computer science.]
Mike, James, Alex, and anybody who claims that GPT is a dependable source of information,
Analogies are very useful for abduction (Peirce's word for educated guesses). They must be checked for internal consistency by deduction, tested for consistency with the subject matter by deduction from currently established theories, and tested by induction against observations that are not adequately explained by current theories.
The best that can be said about comparing quantum mechanics to pointillism is that it is a clever idea that supports some interesting comparisons. But an enormous amount of work is required to develop a detailed mathematical theory of that comparison, check the parts of the theory for internal consistency, and check the whole for external consistency with the well-established theories of QM. If both of those checks pass, the idea would be an interesting example for teaching QM.
But those checks would not add anything to existing theory; they would just add new ways of talking about it. If those ways lead to simpler math and better teaching methods, they would be a very useful addition to current explanations of QM.
But to make new contributions to QM, the new theory must make further predictions that go beyond current theories. That requires induction to test the applicability of the theory beyond what is already known.
From the current evidence, the pointillist theory hasn't yet reached the first step. It's possible that it might be developed into a good teaching tool for current QM, but there's a huge amount of work to be done.
As for the new theory leading to totally new discoveries, that is an enormously difficult issue. Since over a century of research by the best physicists in the world has gone into the current theories, I strongly doubt that the new version would go beyond what is currently known.
Note that the above comments are written by somebody (me) who took some courses in QM and advanced QM theories and applications in the 1970s. I have done a fair amount of reading of popular sources, such as Scientific American, since then, but I have not done any detailed R & D on QM since the 1970s. Even so, I was able to give a far more detailed analysis of the issues than anything that GPT could produce.
Conclusion: Wikipedia (which is updated by leading professionals in every branch of science) is a far, far better source for answers (with reliable references for further study) than anything produced by GPT. If you're searching for solid advice on an advanced topic in any field, Wikipedia (and more advanced reference documents on the WWW) are much, much more trustworthy than anything you get from GPT.
And that is true of every advanced topic in any branch of science, philosophy, engineering, etc. Don't trust anything generated by generative AI without doing at least a Wikipedia search.
John
----------------------------------------
From: "Alex Shkotin" <alex.shkotin(a)gmail.com>
BINGO!
Sat, 2 Dec 2023 at 12:33, 'James Davenport' via ontolog-forum <ontolog-forum(a)googlegroups.com>:
See also, from https://scienceexchange.caltech.edu/topics/quantum-science-explained/ask-ex…
Sometimes I think about the quantum world as a pointillism painting. When you look at the painting from a distance, of course, it just looks like an ordinary painting. But as you start to zoom in to it on smaller and smaller scales, you start to notice that there's more structure there. And, in fact, rather than it being a continuous object, you start to notice that it's actually made up of individual points. And as you zoom in further and further, you can see the individual points, the quanta, that make up that painting. And that's what we do in particle physics. We're zooming in on smaller and smaller structures, smaller and smaller scales.
James Davenport
Hebron & Medlock Professor of Information Technology, University of Bath
National Teaching Fellow 2014; DSc (honoris causa) UVT
Former Fulbright CyberSecurity Scholar (at New York University)
Former Vice-President and Academy Chair, British Computer Society
----------------------------------------