• Welcome to Religious Forums, a friendly forum to discuss all religions in a friendly surrounding.


Religious Thought-Provoking Questions - ChatGPT

Mock Turtle

Oh my, did I say that!
Premium Member
A casual search for issues with LLMs:

Biases in Large Language Models: Origins, Inventory, and Discussion | Journal of Data and Information Quality

Similarly to physical appearance and disability biases, religious bias can be detected more easily than other biases, including via probing techniques. Nevertheless, large language models have been found to exhibit religion bias consistently in different tasks and uses.


Associate Professor and Research Associate Dr Chris Russell, Oxford Internet Institute said: 'While LLMs are built so that using them feels like a conversation with an honest and accurate assistant, the similarity is only skin deep, and these models are not designed to give truthful or reliable answers. The apparent truthfulness of outputs is a ‘happy statistical accident’ that cannot be relied on.'


When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT’s performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI’s propensity to give well-structured, eloquent but blatantly wrong answers. Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. “To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans,” says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.

The more difficult the question and the more advanced model you use, the more likely you are to get well-packaged, plausible nonsense as your answer.
 

BrightShadow

Active Member
AI is already part of our everyday lives via features in our smartphones and smart devices. It is also used by Google Maps, banking, Amazon, Netflix, etc.
ChatGPT (an AI chatbot) is getting very famous, but it is still in its infancy. However, it will come out to be the top interactive information source in the coming years.

Nothing can stop that!

It will conquer any of the linguistic glitches that it may or may not be suffering from. Special attention should be paid to the wording used in the prompt or while verbally asking a question. If you make sure the question is asked in an unambiguous fashion, then it will search and answer properly and accurately. So it is not biased, but if the answer is not to someone's liking, then that someone may think it is biased because of the prejudice and unfairness of his/her own mind. It just processes the vast amount of data and finds the accurate answer in an unbiased fashion.

All the folks complaining about ChatGPT and how it is biased should realize there is no stopping ChatGPT or similar AI platforms from generating the truth as they see it after going through the vast amount of data. So, better get used to it, because it is here to stay.

If attempts are made to manipulate the information it is generating (truthfully), or if it is taken offline, that would be a giant leap backwards!

I would prefer it if the name were changed to something like "LtM" (Learnt Machine) or simply "IR" (Intelligent Robots).

Kudos to this Australian singer for asking such intelligent questions and posting them on her page.


Mind-boggling answers that should quiet any rational skeptic.

 

Mock Turtle

Oh my, did I say that!
Premium Member
All the folks complaining about ChatGPT and how it is biased should realize there is no stopping ChatGPT or similar AI platforms from generating the truth as they see it after going through the vast amount of data. So, better get used to it, because it is here to stay.
Ay, there's the rub (in red). No argument that such LLMs are here to stay, and no doubt they will get better, but not until they have a basis for actual thinking rather than just scraping the output from humans and expecting this to achieve results consistent with the truth.
 

BrightShadow

Active Member
rather than just scraping the output from humans and expecting this to achieve results consistent with the truth.

Take a look at the key employees at OpenAI ...
CEO and co-founder: Sam Altman
President and co-founder: Greg Brockman
Chief Scientist: Jakub Pachocki
Chief Operating Officer: Brad Lightcap
Chief Financial Officer: Sarah Friar
Chief Product Officer: Kevin Weil
Take a look at the Board of Directors.

I wonder what you would have said had you seen some Muslim names in there?
Truth cannot lie!
In the end, it comes out, whether a machine leaks it or nature or humans. ;)
 

danieldemol

Well-Known Member
Premium Member
I asked ChatGPT, "Is chatGPT guaranteed to give a 100% true answer to questions given to it?"

It replied, "No, ChatGPT is not guaranteed to provide 100% true answers. While it aims for accuracy and helpfulness, it can make mistakes or provide outdated information. It's always a good idea to verify important details from reliable sources."

Source: https://chatgpt.com/?ref=dotcom

So to those asserting that whatever ChatGPT says is 100% true how do you square that with its own admission that it can offer no such guarantee?
 

Mock Turtle

Oh my, did I say that!
Premium Member
Take a look at the key employees at OpenAI ...
CEO and co-founder: Sam Altman
President and co-founder: Greg Brockman
Chief Scientist: Jakub Pachocki
Chief Operating Officer: Brad Lightcap
Chief Financial Officer: Sarah Friar
Chief Product Officer: Kevin Weil
Take a look at the Board of Directors.

I wonder what you would have said had you seen some Muslim names in there?
Truth cannot lie!
In the end, it comes out, whether a machine leaks it or nature or humans. ;)
Yes, it discriminates as to religion - as I pointed out. Why don't you look at the criticisms of such AIs?
 

BrightShadow

Active Member
I asked ChatGPT, "Is chatGPT guaranteed to give a 100% true answer to questions given to it?"

It replied, "No, ChatGPT is not guaranteed to provide 100% true answers. While it aims for accuracy and helpfulness, it can make mistakes or provide outdated information. It's always a good idea to verify important details from reliable sources."

Source: https://chatgpt.com/?ref=dotcom

So to those asserting that whatever ChatGPT says is 100% true how do you square that with its own admission that it can offer no such guarantee?

Obviously ChatGPT cannot answer questions when the answers are not known or when the question asked is ambiguous in nature. As it develops, it will see through all that.
No one is saying the answer provided by ChatGPT should be accepted as the final verdict without verification. ChatGPT is still in its infancy, but if the question is asked in an unambiguous fashion, and if the answer is known via recorded reports, scientific studies, or the history books, then ChatGPT will try to generate the most appropriate answer without prejudice. The answer it gives can be verified, because ChatGPT can provide the source of its information if asked.

When a human provides an answer, his/her answer could be influenced for various reasons.
Just look at the answers provided by two separate groups of people from two different religions, different backgrounds, or even two separate political parties.

Questions pertaining to religious texts or teachings are answerable, and ChatGPT is capable of answering them without prejudice.
So far there is no evidence of any intentional manipulative programming or tampering, so certain answers provided by ChatGPT about religion and comparative religious matters can be accepted as truth (because the information can be verified by an unbiased authority).
So, of course, in some instances when the answer is not clearly available, or updated information is required on a matter such as a developing theory from science or politics, or simply an event, ChatGPT could generate an older answer. BUT when it comes to established true opinions regarding religious matters etc., ChatGPT is more likely to be on the dot!
 

BrightShadow

Active Member
Yes, it discriminates as to religion - as I pointed out. Why don't you look at the criticisms of such AIs?
The reason you are trying to make an "argument from authority," or trying to "appeal to authority," is because when it comes to religion there will always be two groups of people. One will believe and agree with the answer, and the other won't. You don't expect a Christian, a Jew, or an atheist to agree with an answer when the answer highlights the Islamic viewpoint.
It is just the nature of humans! It is just how things work!
Your argument is moot. Read my last post ( #27 ) for further clarification.
 

9-10ths_Penguin

1/10 Subway Stalinist
Premium Member
I understand, not only the minds of Christians, but minds of the Atheists (et al) are also blown by a neutral source, please, right? And also of the Jews, right, please?
Of course the truth is bitter sometimes to accept, right, please?

Regards
Large language models aren't a "neutral source." They're as biased as their training material.
 

BrightShadow

Active Member
Large language models aren't a "neutral source." They're as biased as their training material.
As I said earlier, look at these names and decide: if the training material isn't true, then why is this so-called training material provided to it when it would eventually contradict its programmers' and owners' personal views?

Take a look at the key employees at OpenAI ...
CEO and co-founder: Sam Altman
President and co-founder: Greg Brockman
Chief Scientist: Jakub Pachocki
Chief Operating Officer: Brad Lightcap
Chief Financial Officer: Sarah Friar
Chief Product Officer: Kevin Weil
Take a look at the Board of Directors.

It is a "learning machine"; it is capable of unsupervised training, and it generates what it perceives to be the "truth & reliable conclusion" based on data from "open sources" such as the internet. It is capable of fact-checking if asked. It is doing a pretty good job on its own!

Unless it is manipulated with false data, or unless some kind of intentional linguistic "setbacks" are installed to alter or interfere with its free learning abilities, it will continue to be unbiased and generate data that you may not agree with, but data that resembles the "truth".
 

9-10ths_Penguin

1/10 Subway Stalinist
Premium Member
As I said earlier, look at these names and decide: if the training material isn't true, then why is this so-called training material provided to it when it would eventually contradict its programmers' and owners' personal views?

Take a look at the key employees at OpenAI ...
CEO and co-founder: Sam Altman
President and co-founder: Greg Brockman
Chief Scientist: Jakub Pachocki
Chief Operating Officer: Brad Lightcap
Chief Financial Officer: Sarah Friar
Chief Product Officer: Kevin Weil
Take a look at the Board of Directors.

It is a "learning machine"; it is capable of unsupervised training, and it generates what it perceives to be the "truth & reliable conclusion" based on data from "open sources" such as the internet. It is capable of fact-checking if asked. It is doing a pretty good job on its own!

Unless it is manipulated with false data, or unless some kind of intentional linguistic "setbacks" are installed to alter or interfere with its free learning abilities, it will continue to be unbiased and generate data that you may not agree with, but data that resembles the "truth".

The bias doesn't have to be deliberate.

It's been recognized that ChatGPT overuses certain words compared to human writers. The designers of ChatGPT didn’t have to deliberately intend for the software to overuse terms like "meticulously" and "commendable" - it's just an accidental bias toward those words, apparently because those words were more prevalent in the particular training material that OpenAI used than they would be in a more representative sample.

In the meantime, there's also the problem of these models "hallucinating"... or generating outputs that don't correspond to reality. For instance, ChatGPT's answers to factual questions are wrong more than half of the time.

... which makes sense. ChatGPT isn't an oracle; it's a program designed to have conversations with people in plain language... and that's pretty much it. The inputs it draws from are just meant to be a representation of how human language works; they haven't been vetted for factual correctness. And as they say, garbage in, garbage out.
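The accidental word-frequency bias described above can be illustrated with a toy sketch. The two sample texts below are made up for demonstration; this is not OpenAI's data or methodology, just the general idea of comparing per-1,000-word rates between a model's output and a reference corpus:

```python
from collections import Counter

def relative_overuse(gpt_text, human_text):
    """Rank words by how much more often (per 1,000 words) they appear
    in the model text than in the reference text."""
    gpt_counts = Counter(gpt_text.lower().split())
    human_counts = Counter(human_text.lower().split())
    gpt_total = sum(gpt_counts.values())
    human_total = sum(human_counts.values())
    overuse = {}
    for word, count in gpt_counts.items():
        gpt_rate = 1000 * count / gpt_total
        human_rate = 1000 * human_counts.get(word, 0) / human_total
        overuse[word] = gpt_rate - human_rate
    return sorted(overuse.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: "meticulously" dominates the model text but not the reference.
gpt_text = "the report was meticulously researched and meticulously written"
human_text = "the report was carefully researched and clearly written"
print(relative_overuse(gpt_text, human_text)[0][0])  # prints "meticulously"
```

Researchers have applied this kind of frequency comparison at corpus scale to spot the telltale words; no deliberate design choice is needed for the skew to appear.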
 

BrightShadow

Active Member
The bias doesn't have to be deliberate.

It's been recognized that ChatGPT overuses certain words compared to human writers. The designers of ChatGPT didn’t have to deliberately intend for the software to overuse terms like "meticulously" and "commendable" - it's just an accidental bias toward those words, apparently because those words were more prevalent in the particular training material that OpenAI used than they would be in a more representative sample.

In the meantime, there's also the problem of these models "hallucinating"... or generating outputs that don't correspond to reality. For instance, ChatGPT's answers to factual questions are wrong more than half of the time.

... which makes sense. ChatGPT isn't an oracle; it's a program designed to have conversations with people in plain language... and that's pretty much it. The inputs it draws from are just meant to be a representation of how human language works; they haven't been vetted for factual correctness. And as they say, garbage in, garbage out.
What you are not getting is that, regardless of all the so-called drawbacks, the response it provides can then be independently verified.
No one is asking you to adopt what ChatGPT says; all that is asked is to set aside your own bias and then research and see for yourself what ChatGPT says.
After that, realize which answers are true! Do further research!

Most people adhere to what they were fed since they were born (in their early years).
Now they can question their existing beliefs!
Truth is exposed! To blatantly reject it is unwise! ;)
 

Mock Turtle

Oh my, did I say that!
Premium Member
The reason you are trying to make an "argument from authority," or trying to "appeal to authority," is because when it comes to religion there will always be two groups of people. One will believe and agree with the answer, and the other won't. You don't expect a Christian, a Jew, or an atheist to agree with an answer when the answer highlights the Islamic viewpoint.
It is just the nature of humans! It is just how things work!
Your argument is moot. Read my last post ( #27 ) for further clarification.
What are you on about? I doubt you even know when and where to use fallacy arguments. LLMs will be biased in all sorts of ways, because the information they use is not necessarily factual, and because often it will be taken from whatever is more easily obtainable. Hence asking certain questions will result in answers that are not that truthful. You just seem to be ignoring the cites I linked to, where those in the know do recognise the limitations of these AIs, and it seems, because of the way they are used, the biases might be difficult to get rid of. So asking questions about God or anything like this is just meaningless at the moment, given that humans haven't got the data to answer such questions, apart from all the various religious beliefs. Like yours, no doubt. o_O
 

Viker

Your beloved eccentric Auntie Cristal
From ChatGPT:

"Was Jesus a god or prophet?

Whether Jesus is considered a God or a prophet depends on the religious perspective:

Christianity: Most Christians believe Jesus is the Son of God and part of the Holy Trinity, thus divine.

Islam: In Islam, Jesus (Isa) is regarded as one of the greatest prophets but not divine; he is a human messenger of God.

Judaism: In Judaism, Jesus is typically viewed as a historical figure but not as a prophet or divine.

These differing views highlight the diverse interpretations of Jesus across religions."

It doesn't say, for sure, that Jesus was either a god or a prophet. It still depends on which tradition holds which perspective. No clear answer. Only the best it can give with the information it has to derive an answer from.
 

Spice

StewardshipPeaceIntergityCommunityEquality
From ChatGPT:

"Was Jesus a god or prophet?

Whether Jesus is considered a God or a prophet depends on the religious perspective:

Christianity: Most Christians believe Jesus is the Son of God and part of the Holy Trinity, thus divine.

Islam: In Islam, Jesus (Isa) is regarded as one of the greatest prophets but not divine; he is a human messenger of God.

Judaism: In Judaism, Jesus is typically viewed as a historical figure but not as a prophet or divine.

These differing views highlight the diverse interpretations of Jesus across religions."

It doesn't say, for sure, that Jesus was either a god or a prophet. It still depends on which tradition holds which perspective. No clear answer. Only the best it can give with the information it has to derive an answer from.
What was said about computers more than fifty years ago hasn't changed. Machines can only sift and sort the information humans have provided.
 

9-10ths_Penguin

1/10 Subway Stalinist
Premium Member
What you are not getting is that, regardless of all the so-called drawbacks, the response it provides can then be independently verified.
No one is asking you to adopt what ChatGPT says; all that is asked is to set aside your own bias and then research and see for yourself what ChatGPT says.
After that, realize which answers are true! Do further research!

Just so we're clear, these are the questions for ChatGPT that the thread is talking about:

Questions:
1. Was Jesus a god or Prophet?
2. is life a coincidence or has a creator?
3. How should I live my life?
4. Does Islam oppress women?
5. Does Islam promote violence?
6. Why does islam allow 4 wives?
7. What does the Quran say about women's rights?
8. what's more logical? Reincarnation or Resurrection?


You think these can be independently verified?

Most people adhere to what they were fed since they were born (in their early years).
Now they can question their existing beliefs!
Truth is exposed! To blatantly reject it is unwise! ;)
Dude... it's just an LLM. It learns from its training material and then gives answers that are in line with the style of that training material. Like I said: it's not an oracle.
 

BrightShadow

Active Member
What are you on about? I doubt you even know when and where to use fallacy arguments. LLMs will be biased in all sorts of ways, because the information they use is not necessarily factual, and because often it will be taken from whatever is more easily obtainable. Hence asking certain questions will result in answers that are not that truthful. You just seem to be ignoring the cites I linked to, where those in the know do recognise the limitations of these AIs, and it seems, because of the way they are used, the biases might be difficult to get rid of. So asking questions about God or anything like this is just meaningless at the moment, given that humans haven't got the data to answer such questions, apart from all the various religious beliefs. Like yours, no doubt. o_O
Of course you understand fallacies better, because you are an expert at making such arguments.
You are posting links to articles written by people. All those reports state is that research has found ChatGPT is not reliable. I did not say ChatGPT is 100% reliable with all its answers. I know it has limitations.
In your article they are talking about "careless speech"! Humans have been practicing that since the beginning of civilization.
The funny part is, ChatGPT is not run by Muslims, but yet "careless speech or whispers" somehow made their way in. :rolleyes::D

I repeat what I wrote in my last post...
Regardless of all the so-called drawbacks (careless speech, whispers, and what not!), you could double-check the response it provides independently and then come to your conclusion.
No one is asking you to adopt what ChatGPT says; all that is asked is to set aside your own bias, then research and see for yourself whether what ChatGPT says is true or not.
Now you have a chance to do just that! Don't dismiss ChatGPT because it has limitations.

ChatGPT is doing a better job of pointing out its sources and references than a human could. You could use another platform to verify their validity. The conversation is more fruitful due to all the references it provides.

Before, on a debate platform, everything depended on the knowledge of the debaters. A debater with weaker knowledge would lose despite holding the correct position. Now ChatGPT has eliminated that problem, and it can provide all the references you need in the blink of an eye!
Time to celebrate!

:hugehug:
 

Koldo

Outstanding Member

Right?

Regards
___________________
" Kudos to the Australian singer (Lily Jay) for asking questions in a clear cut manner! "
Lily Jay (@real.lilyjay) on Instagram: " Searching for the truth Studying the purpose of life" lily jay australian singer - Google Search
Courtesy friend @BrightShadow

What is a large language model (LLM)?

A large language model (LLM) is a type of artificial intelligence (AI) program that can recognize and generate text, among other tasks. LLMs are trained on huge sets of data — hence the name "large." LLMs are built on machine learning: specifically, a type of neural network called a transformer model.
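The "recognize and generate text" part of that definition can be illustrated with a toy bigram model: a drastic simplification of a transformer, sketched here with a couple of made-up training sentences. Real LLMs learn far richer statistics, but the core loop is the same: pick a plausible next token given what came before.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record which word follows which: a crude stand-in for LLM training."""
    follows = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
    return follows

def generate(follows, start, max_words=8, seed=0):
    """Repeatedly pick a plausible next word, as an LLM picks next tokens."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        options = follows.get(out[-1])
        if not options:  # no known continuation: stop
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = ["the model predicts the next word",
          "the next word depends on training data"]
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Note that nothing in this loop checks whether the output is true; it only checks whether each word plausibly follows the previous one, which is the point the critics in this thread keep making.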

Are you sure you want to treat ChatGPT as a very reliable and unbiased source?

Here's my most recent interaction with ChatGPT:

I asked: Did Muhammad marry a child? Answer only with yes or no.

ChatGPT's answer: Yes.

My next question was: Did he then commit what would be properly called as child abuse nowadays?

ChatGPT's answer was: Yes, by modern standards, Muhammad's marriage to a child would be considered child abuse.

My last question: Why would it be called child abuse?

ChatGPT's answer: It would be called child abuse because, by modern legal and ethical standards, a child is not capable of giving informed consent to sexual activity. Engaging in sexual relations with a child is viewed as exploitative and harmful, both physically and psychologically. In most societies today, such acts are classified as abuse and are illegal.
 

Muffled

Jesus in me

Right?

Regards
___________________
" Kudos to the Australian singer (Lily Jay) for asking questions in a clear cut manner! "
Lily Jay (@real.lilyjay) on Instagram: " Searching for the truth Studying the purpose of life" lily jay australian singer - Google Search
Courtesy friend @BrightShadow

What is a large language model (LLM)?

A large language model (LLM) is a type of artificial intelligence (AI) program that can recognize and generate text, among other tasks. LLMs are trained on huge sets of data — hence the name "large." LLMs are built on machine learning: specifically, a type of neural network called a transformer model.
1. I believe that is biased and contrary to the biblical record. Jesus is God in the Flesh.
 