
Is an LLM system good enough to help solve the UFT problem, given the right series of questions or observations?

anotherneil

Well-Known Member
LLM: large language model
UFT: unified field theory

Will an LLM system be able to solve the UFT problem? I just did something with ChatGPT that makes me believe it could be done.

It probably won't be easy, and it'll probably require some guidance and steering by the brightest minds alive today, but if this is tried, I think there's a good chance of it being able to accomplish this.

Heck, I wonder if someone reading this post right now, who knows just the right questions, or comments, or observations to point out to ChatGPT, would be able to solve it.

An LLM system may have the raw ingredients needed to solve it, but I don't think it would automatically try to solve it on its own; with some guidance from someone who both has a strong enough grasp of the problem and knows how to use an LLM fairly well, the system could probably serve as a catalyst for reaching the solution.

I'm also willing to accept the possibility that I'm just being ridiculous or silly about this, so what do you think?
 

Mock Turtle

Oh my, did I say that!
Premium Member
LLM: large language model
UFT: unified field theory

Will an LLM system be able to solve the UFT problem? I just did something with ChatGPT that makes me believe it could be done.

It probably won't be easy, and it'll probably require some guidance and steering by the brightest minds alive today, but if this is tried, I think there's a good chance of it being able to accomplish this.

Heck, I wonder if someone reading this post right now, who knows just the right questions, or comments, or observations to point out to ChatGPT, would be able to solve it.

An LLM system may have the raw ingredients needed to solve it, but I don't think it would automatically try to solve it on its own; with some guidance from someone who both has a strong enough grasp of the problem and knows how to use an LLM fairly well, the system could probably serve as a catalyst for reaching the solution.

I'm also willing to accept the possibility that I'm just being ridiculous or silly about this, so what do you think?
I've not bothered with AI so far, but these LLM versions don't actually do any thinking, from what I understand, so why would we expect results better than from those who actually do much thinking - that is, humans?
 

Brickjectivity

Veteran Member
Staff member
Premium Member
Will an LLM system be able to solve the UTF problem? I just did something with ChatGPT that makes me believe it could be done.
Not as such. An LLM could be part of a system that worked on the problem with repeated attempts, but an LLM is limited. You need at least something that deals with the real world a little bit and understands things like measurement. An LLM does not comprehend something as simple as measurement or motion or color, etc. It cannot walk around and be in the environment. What you'd want is something that could do all that and would then use the LLM as a resource of human knowledge, and even that alone would only be part of it. It would need some hand-holding.
 

anotherneil

Well-Known Member
I've not bothered with AI so far, but these LLM versions don't actually do any thinking, from what I understand, so why would we expect results better than from those who actually do much thinking - that is, humans?
Depends on what you mean by "thinking", but for the sake of argument, let's go with your assertion - they're not thinking themselves, but they do help us with doing our thinking. That's the point of computer technology, just as automation helps us with doing the work & labor.

A simple electronic calculator doesn't do any "thinking", but it greatly streamlines the work & effort we need to put in to complete our thinking much more quickly, easily, and accurately; the same goes for a spreadsheet app, and for computer software in general. AI or LLMs are really no different in this respect, though that is a bit of an oversimplification of the issue.
 

anotherneil

Well-Known Member
Not as such. An LLM could be part of a system that worked on the problem with repeated attempts, but an LLM is limited. You need at least something that deals with the real world a little bit and understands things like measurement. An LLM does not comprehend something as simple as measurement or motion or color, etc. It cannot walk around and be in the environment. What you'd want is something that could do all that and would then use the LLM as a resource of human knowledge, and even that alone would only be part of it. It would need some hand-holding.
I'm not talking about just winding it up, sending it off on its own to do some sort of brute-force analysis or exploration, and wishing it good luck - though that is another approach worth trying. I'm referring to sitting down at a ChatGPT prompt and engaging in an interactive session with a bit of back-and-forth: questions, comments, and suggestions. Like I said, it was something else I had already done with ChatGPT that made it occur to me that this same approach - or strategy, or whatever you want to call it - could work on something like solving the UFT problem.
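
To make that concrete, here's a rough sketch of the kind of human-guided, back-and-forth loop I'm describing, using the OpenAI Python client as an illustration. The model name, the system prompt, and the whole setup are just placeholders for the sake of the example, not something I've actually pointed at the UFT problem:

# Rough sketch of an interactive, human-steered session with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable; "gpt-4o" is a placeholder model name.
from openai import OpenAI

client = OpenAI()

# The human steers with questions and observations; the full conversation
# history is kept so each new turn builds on everything said so far.
history = [
    {"role": "system",
     "content": "You are assisting a physicist exploring approaches to a unified field theory."},
]

while True:
    prompt = input("Your question/observation (blank line to stop): ").strip()
    if not prompt:
        break
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)

The point isn't the code itself - it's that a person with a grasp of the problem stays in the loop, deciding what to ask or point out next based on each answer.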
 

Eddi

Christianity
Premium Member
Ok, in either case, do you have a position or opinion specifically pertaining to the thread topic question? Would an LLM be able to help us with solving the UFT problem?
As I understand it, LLMs can't do science and can't actually understand stuff, so no

They imitate intelligence, without being intelligent
 

anotherneil

Well-Known Member
As I understand it, LLMs can't do science and can't actually understand stuff, so no

They imitate intelligence, without being intelligent
I don't think it's necessary for an LLM to be able to do science or understand anything in order to still be useful in some way, just as a simple electronic calculator has been shown to be useful - very useful - in some way.
 

Eddi

Christianity
Premium Member
I don't think it's necessary for an LLM to be able to do science or understand anything in order to still be useful in some way, just as a simple electronic calculator has been shown to be useful - very useful - in some way.
Yes, it could conceivably be a tool to make some things easier for humans

But I very much doubt it can make any progress on its own
 

anotherneil

Well-Known Member
Yes, it could conceivably be a tool to make some things easier for humans

But I very much doubt it can make any progress on its own
That's basically what I've been saying about it.

The same can be said about any books or documents on a shelf or in a folder, whether it's science textbooks, instruction manuals, an encyclopedia set, the Bible, a dictionary, or a science fiction story - nothing happens if no one reads them.
 

Eddi

Christianity
Premium Member
That's basically what I've been saying about it.

The same can be said about any books or documents on a shelf or in a folder, whether it's science textbooks, instruction manuals, an encyclopedia set, the Bible, a dictionary, or a science fiction story - nothing happens if no one reads them.
Maybe in the future it will become able to do such things
 

Brickjectivity

Veteran Member
Staff member
Premium Member
I'm not talking about just winding it up, sending it off on its own to do some sort of brute-force analysis or exploration, and wishing it good luck - though that is another approach worth trying. I'm referring to sitting down at a ChatGPT prompt and engaging in an interactive session with a bit of back-and-forth: questions, comments, and suggestions. Like I said, it was something else I had already done with ChatGPT that made it occur to me that this same approach - or strategy, or whatever you want to call it - could work on something like solving the UFT problem.
The problem is the data set. It's all books and words. In the early days of computer science, people hoped, and some believed, that intelligence was an expression of language, but this turned out not to be the case. It did encode some knowledge but was not itself something that could grant intelligence. Some great results came out of that time, though, such as the Structured Query Language, which everyone now uses in databases, as well as some great programming languages such as LISP variants and ML. These were explorations of the idea you are talking about, even though at the time they did not have something as awesome as an LLM.

A wonderful read that gives an understanding of the potential, early history, and limitations of AI is the book Apprentices of Wonder. It's easy to read, talks about an early AI design that could pronounce words from print, and goes over some of what I have already mentioned. Trigger warning that the book presumes evolution is a fact, but I don't think that's an issue for you. If that does bother you, it's not central to understanding the technology. It's a great book, giving you a base to get a grasp of all this AI talk, fuzzy logic, etc.
 

Kathryn

It was on fire when I laid down on it.
LLM: large language model
UFT: unified field theory

Will an LLM system be able to solve the UFT problem? I just did something with ChatGPT that makes me believe it could be done.

It probably won't be easy, and it'll probably require some guidance and steering by the brightest minds alive today, but if this is tried, I think there's a good chance of it being able to accomplish this.

Heck, I wonder if someone reading this post right now, who knows just the right questions, or comments, or observations to point out to ChatGPT, would be able to solve it.

An LLM system may have the raw ingredients needed to solve it, but I don't think it would automatically try to solve it on its own; with some guidance from someone who both has a strong enough grasp of the problem and knows how to use an LLM fairly well, the system could probably serve as a catalyst for reaching the solution.

I'm also willing to accept the possibility that I'm just being ridiculous or silly about this, so what do you think?
I think I have no idea what you are on about.
 

Terrywoodenpic

Oldest Heretic
The problem is reconciling a UFT with quantum theory.
No progress on this has been made in one hundred years.
They know that quantum theory is incomplete, i.e. wrong.
So at the moment they are wasting their time.

Unfortunately, although both theories are most likely wrong, they are actually useful, so no one has the courage to start over again. Gravity and dark matter are also flies in the ointment.
One of the problems is that maths is not physics, and the two do not agree.
Roger Penrose is pretty scathing about progress in these fields.
 

anotherneil

Well-Known Member
The problem is the data set. It's all books and words. In the early days of computer science, people hoped, and some believed, that knowledge was contained in language, but this turned out not to be the case. It did encode some knowledge but was not itself something that could grant intelligence. Some great results came out of that time, though, such as the Structured Query Language, which everyone now uses in databases, as well as some great programming languages such as LISP variants and ML. These were explorations of the idea you are talking about, even though at the time they did not have something as awesome as an LLM.

A wonderful read that gives an understanding of the potential, early history, and limitations of AI is the book Apprentices of Wonder. It's easy to read, talks about an early AI design that could pronounce words from print, and goes over some of what I have already mentioned. Trigger warning that the book presumes evolution is a fact, but I don't think that's an issue for you. If that does bother you, it's not central to understanding the technology. It's a great book, giving you a base to get a grasp of all this AI talk, fuzzy logic, etc.
I have degrees in computer science and electrical engineering, and that included several courses in AI, robotics, and automation. I'm even working on a project right now involving an approach to obtaining a training data set (a very large one) for something unrelated to LLMs - not at my day job, but with others as a "hobby" project. The point is that I'm aware of the importance of the data set involved.

However, I'm not familiar with early-day CS people hoping that knowledge was contained in language; that strikes me as a little surprising, since CS involves the study of how to process a language and the concept of interpretation. By studying those concepts back then, with the computer system hardware that existed at the time, I would think they would have had a good grasp that it wasn't the case - at least at the time, anyway.

I think it could be the case someday, but even with AI as it is now, I don't think I would agree or concede that it can do this, even with the environment-interacting systems we have today. I think that would take some sort of advanced environmental-interaction system, with advanced concepts and principles for observing, absorbing, interpreting, and synthesizing both the environment and itself; that's what living organisms are to me, and such a form of technology would be more like artificial or synthetic life.

I don't think it's necessary for LLMs to have a data set with knowledge contained in language to help with solving the UFT problem, but I do think that a good data set as opposed to a weak one, or the right data set, can make a big difference in helping.
 