anotherneil
Well-Known Member
Do you know anything about the unified field theory problem? If not, you'd first have to familiarize yourself with that, before you can have any idea.

I think I have no idea what you are on about.
LLMs are basically good at repeating what they were trained on. There will be no new ideas outside the dataset, only recombinations of previous thoughts. So, unless someone has actually solved the UFT and we have missed it, LLMs will only parrot old ideas.

LLM: large language model
UFT: unified field theory

Will an LLM system be able to solve the UFT problem? I just did something with ChatGPT that makes me believe it could be done.
It probably won't be easy, and it'll probably require some guidance and steering by the brightest minds alive today, but if this is tried, I think there's a good chance of it being able to accomplish this.
Heck, I wonder if someone reading this post right now, who knows just the right questions, comments, or observations to point out to ChatGPT, would be able to solve it.
An LLM system may have the raw ingredients needed to solve it, but I don't think it would automatically try to solve it on its own. With guidance from someone who both has a strong enough grasp of the problem and knows how to use an LLM fairly well, the system could probably serve as a catalyst to reaching the solution.
I'm also willing to accept the possibility that I'm just being ridiculous or silly about this, so what do you think?
I think the main problem with such AIs, from my understanding of them, is that they mainly just go with the numbers in how they produce results, and so are liable to the ad populum fallacy, which many on RF use all too often: all these people can't be wrong, so-and-so has been believed for such a long time, etc. Hence, when asked, an LLM will likely say God exists, created the universe, and other such widely held beliefs.

Depends on what you mean by "thinking", but for the sake of argument, let's go with your assertion: they're not thinking themselves, but they do help us with doing our thinking. That's the point of computer technology, just like automation helps us with the work & labor.
A simple electronic calculator doesn't do any "thinking", but it greatly streamlines the work & effort we need in order to complete our thinking more quickly, easily, and accurately; the same goes for a spreadsheet app & for computer software in general. AI or LLMs are really no different in this respect, though that is a bit of an oversimplification of the issue.
Yeah, I think I'll have to take some college classes myself, that is, if there are even college courses on this.
From what I've gathered from friends who are academically interested in very advanced maths and physics, the usefulness of the LLMs out there tapers off very quickly as you get deeper into the weeds. ChatGPT can dazzle you with clear explanations of objects and quantities in textbook science, but if you ask it about Ed Witten's work, for example, it will eventually start making mistakes and even invent references for fake results.