
Is an LLM system good enough to help solve the UTF problem, given the right series of questions or observations?

Heyo

Veteran Member
LLM: large language model
UTF: unified field theory

Will an LLM system be able to solve the UTF problem? I just did something with ChatGPT that makes me believe it could be done.

It probably won't be easy, and it'll probably require some guidance and steering by the brightest minds alive today, but if this is tried, I think there's a good chance of it being accomplished.

Heck, I wonder if someone reading this post right now, who knows just the right questions, or comments, or observations to point out to ChatGPT, would be able to solve it.

An LLM system may have the raw ingredients needed to solve it, but I don't think it's something that would automatically try to solve it on its own; with some guidance from someone who both has a strong enough grasp of the problem, and who knows how to use an LLM fairly well, the system could probably serve as a catalyst to reaching the solution.

I'm also willing to accept the possibility that I'm just being ridiculous or silly about this, so what do you think?
LLMs are basically good at repeating what they were fed. There will be no new ideas outside the dataset, only recombinations of previous thoughts. So unless someone has actually solved UTF and we have missed it, LLMs will only parrot old ideas.
The best that could happen is that an LLM will help a human solve the problem by acting as a discussion partner.

There are other kinds of models for dealing with problems like these; inference engines are more useful. Marry one to an LLM and maybe their offspring will be a better help.
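To make the suggestion concrete: an inference engine is a symbolic system that derives new facts from explicit rules rather than from statistical text patterns. Below is a minimal toy sketch of forward chaining in Python; the rule names are purely illustrative and not from any real library or physics formalism.

```python
# Toy forward-chaining inference engine: repeatedly apply rules of the
# form (premises -> conclusion) until no new facts can be derived.
# This is the kind of symbolic component one might pair with an LLM.

def infer(facts, rules):
    """Return the closure of `facts` under `rules`.

    `rules` is a list of (premises, conclusion) pairs, where
    `premises` is a list of facts that must all hold.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative (made-up) rules, just to show the mechanics:
rules = [
    (["has_mass", "has_charge"], "feels_em_force"),
    (["has_mass"], "feels_gravity"),
]
print(infer(["has_mass", "has_charge"], rules))
```

Unlike an LLM, every conclusion such a system reaches can be traced back to the exact rules and premises that produced it, which is why hybrids of the two are an active research direction.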
 

Mock Turtle

Oh my, did I say that!
Premium Member
Depends on what you mean by "thinking", but for the sake of argument, let's go with your assertion: they're not thinking themselves, but they do help us with our thinking. That's the point of computer technology, just as automation helps us with work and labor.

A simple electronic calculator doesn't do any "thinking", but it greatly streamlines the work and effort needed to complete our thinking more quickly, easily, and accurately; the same goes for a spreadsheet app, and for computer software in general. AI and LLMs are really no different in this respect, though that is a bit of an oversimplification of the issue.
I think the main problem with such AIs, from my understanding, is that they mainly just go with the numbers when producing results, so they are liable to the ad populum fallacy, which many on RF commit all too often: all these people can't be wrong, so-and-so has been believed for such a long time, and so on. That is why, when asked, an LLM will likely say God exists, created the universe, and other such widely held beliefs.
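The "going with the numbers" worry can be sketched in a few lines. Real LLMs learn far richer statistics than a simple frequency count, but the majority-wins flavour of the objection looks roughly like this (the example data is invented for illustration):

```python
from collections import Counter

# Toy illustration of answering "by the numbers": given many observed
# answers in the training data, always return the most frequent one.
# A majority view wins regardless of whether it is correct.

def most_common_answer(observations):
    counts = Counter(observations)
    return counts.most_common(1)[0][0]

# Hypothetical distribution of answers seen in a corpus:
answers = ["yes"] * 7 + ["unclear"] * 2 + ["no"] * 1
print(most_common_answer(answers))
```

A system built this way can only echo whatever position dominates its data, which is the heart of the ad populum concern.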

LLMs apparently do well perhaps because only the winners surface and get recognised, as when such LLMs outperform students on some particular exam, for example.

Probably these types of AI can bring a lot of evidence together, and might even provide hints toward solving some particular problem, but I'm not sure the current level of AI will actually produce answers that humans could not have obtained. I'm not a computer or AI expert, of course, although I have read some works on the future of AI, and I tend to agree that future AI will be a lot better than today's and hence might (and probably will) solve things that humans cannot.
 

wellwisher

Well-Known Member
The biological or human brain evolved within the physical laws of the real world. Its coding is a reflection of these laws, which is why the brain has a better chance of solving problems concerning natural laws. We can relate to nature since the brain can empathize with the patterns it evolved within. The brain and its operating system are based on the patterns of natural laws. Computers and AI are based more on man-made rules and languages, which add subjectivity or limit them to certain logics.

I like to play with an AI art program. What I notice is that if you try to get the AI to create an action, like a track star running, it adds an extra leg or two, as if trying to render motion blur. The figure ends up with three or four legs, half of them not fully developed but placed in the reflected position of time-delayed motion.

The human brain can sense motion and notice that this is not exactly how the natural laws work, although you can also appreciate what the AI is trying to do without fully understanding it. A man-made program cannot think of everything, and the program is not designed to simplify, to make it much easier for the AI to extrapolate from a core.

There is open AI art software to which people can add features on top of the core program, such as a landscape specialty. This allows the base AI to render landscapes even better; more details for the AI to work with so it has to think less.

The brain works differently. For example, we have four forces of nature, four directions on a map (N, S, E, W), four seasons, four bases in DNA, and space-time is based on four units (x, y, z, t). There is a recurring theme of four. There are four psychological orientation functions: emotion, intellect, intuition, and sensation. Carbon forms four bonds and water forms four hydrogen bonds. This pattern is more integral, which is the area under the curve, from force to hydrogen bonding. The brain's operating system uses simplifying patterns, and we notice these first.

Although much of the hardware appears to follow the number-two pattern: symmetry. Good and evil, cause and effect, electron and proton, two eyes, two sides of the brain. The brain works better within the natural symmetry of natural laws. My belief is that the brain has the answers reflected within its operating system. However, the hardware alone may not tell us all we need to know about the brain's software.
 

Yerda

Veteran Member
Heyo said:
Will an LLM system be able to solve the UTF problem? I just did something with ChatGPT that makes me believe it could be done.
From what I've gathered from friends who are academically interested in very advanced maths and physics, the usefulness of the LLMs out there tapers off very quickly as you get deeper into the weeds. ChatGPT can dazzle you with clear explanations of objects and quantities in textbook science, but if you ask it about Ed Witten's work, for example, it will eventually start making mistakes and even invent references for fake results.

Though I can't verify the above as I know almost nothing about GR or QFT beyond the pop science stuff.
 