
can you prove there isn't a deity?

LegionOnomaMoi

Veteran Member
Premium Member
I was thinking about this a little more (especially as I wrote the first draft reply long after midnight and the second while others were getting up). I missed some very simple ways in which variance among manuscripts can be measured (e.g., not just lexical variants but syntax, lexical additions, lexical omissions, etc.). There is little in the literature about syntactic difference (i.e., difference in word order). This is very, very good, because trying to account for it would be either very easy and very wrong (e.g., by treating manuscripts as sets of lexemes and using power sets to yield variance measures) or extremely complex (deriving numerous parameters to come up with a set of models that are then tested for predictive power, finally giving us basically what we'd get without factoring in syntax).

The standard (and usually poor) method of computing variance, squared deviations from the mean, requires a mean (obviously), but the ways in which manuscripts differ along several dimensions make computing such a mean difficult and the result a poor basis for comparison.
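For reference (and as the baseline the later measures improve on), the intro-stats variance just mentioned is, in LaTeX notation,

s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,

which only makes sense when a single well-defined mean exists, which is exactly what heterogeneous manuscripts fail to give us.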

We have some ~6,000 Greek manuscripts, but these vary in “size” (by size I refer to the amount of NT material in them, not physical size), from the earliest scrap (P52, a fragment of John from the first half of the 2nd century with a few lines) to those like Sinaiticus and Vaticanus. How might we compare P52 to the other two? We can look at the lines included in P52 and for each one count how many lexical variants, omissions, and/or additions there are relative first to Sinaiticus and then to Vaticanus. However, this doesn’t take into account variance between Sinaiticus and Vaticanus. The problem is that were we to repeat the same procedure and compare the two manuscripts line by line, some variants will already have been accounted for. If a word in P52 is identical to the corresponding word in Sinaiticus but not Vaticanus, then counting that same difference when we compare Sinaiticus and Vaticanus would be double-dipping.

The various algorithms generally associated with “artificial intelligence”, such as pattern recognition, machine learning, artificial neural networks, etc., are actually used mostly for statistical methods and data analysis. The reason is that we can use fairly simple algorithms that begin with useless measures but provide increasingly accurate measures over sets of trials (in this case, the data set from a manuscript). Although we can extrapolate to a hypothesized population, here we are lucky (this almost never happens) because the sample required to derive manuscript variation IS our population.

Although we don’t wish to use the words found in a modern Greek NT, we do wish to use the structure of the modern NT itself. This allows us greater flexibility in the ways in which we can feed data into some computational model (it provides us a way to compare manuscripts without having to represent them as existing in some configuration space or matrix representation in which every word has an exact position for that manuscript, which would bring us back to the issue of syntax we already determined would artificially inflate variance). However, it doesn’t tell us how we should represent the data. This depends in part on the method/model used.

So what are our options? There are two main approaches that would probably work best (accurate but not overly complicated). One would be to use algorithms that evaluate data points sequentially using some similarity/dissimilarity measure (i.e., a “distance” metric). In fact, the basic intro-stats measure of variance relies on a distance metric, but it is the most primitive one, and others are far more flexible in how they can be used. For some examples:
[Image: examples of dissimilarity/distance measures]
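To make the idea concrete, here is a minimal sketch in Python (the word-presence vectors are invented, not real collation data) of three common dissimilarity measures one might try:

# A toy sketch: three dissimilarity measures applied to invented
# word-presence vectors for the "same" line in two hypothetical
# manuscripts (1 = word attested as in the base text, 0 = not).
import numpy as np

ms_a = np.array([1, 1, 1, 0, 1, 1, 0, 1])  # hypothetical manuscript A
ms_b = np.array([1, 0, 1, 0, 1, 1, 1, 1])  # hypothetical manuscript B

hamming = np.sum(ms_a != ms_b) / len(ms_a)               # share of differing slots
euclidean = np.sqrt(np.sum((ms_a - ms_b) ** 2))          # geometric distance
jaccard = 1 - np.sum(ms_a & ms_b) / np.sum(ms_a | ms_b)  # overlap of attested words

print(hamming, euclidean, jaccard)

Hamming counts differing slots, Euclidean treats the vectors geometrically, and Jaccard looks only at overlap; which (if any) is appropriate depends entirely on how the manuscripts end up being encoded.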


The other main approach involves a subtype of classification and clustering techniques which I’ll call component analyses (most, though not all, rely on some form of component analysis). Even scraps like P52 have ~50 data points, and these are being compared to ~6,000 other manuscripts. That’s a lot of data points. Something like principal component analysis reduces the data while maintaining the important information. The most relevant example is, perhaps, the screen you’re looking at. Anybody who has taken pictures using even a fairly new digital camera knows that these picture files can be huge. Image compression works by identifying points in an image where the variance is minimal (and similarity maximal) and projecting these points onto a new “space” as one point. A simpler example is GPAs: various grades for numerous courses are combined into a single GPA value. This is analogous to, and more intuitive than, an actual example of projections from Rn to Rm.
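As a rough sketch of the dimension-reduction idea (the matrix is invented purely for illustration: rows are hypothetical manuscripts, columns are counts of some variant feature per passage):

# A minimal PCA sketch via the SVD; the numbers are made up.
import numpy as np

X = np.array([[2., 0., 1., 3.],
              [1., 1., 1., 2.],
              [0., 2., 3., 1.],
              [2., 1., 0., 3.]])

Xc = X - X.mean(axis=0)                  # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                   # project onto the first 2 components
explained = S**2 / np.sum(S**2)          # proportion of variance per component

print(scores)
print(explained)

The first couple of components usually capture most of the spread, which is the same trick image compression and GPAs exploit.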

Both approaches actually have a great deal in common, and a method like multidimensional scaling (MDS) often works by using dissimilarities between paired data points (something akin to the first approach) and then projecting these onto a lower dimensional space (as in the second). Because the second approach is harder to describe even loosely, I'll go with the first and in general terms.

The advantage of using a more sequential method is that we can pick manuscripts at random. Although in actual practice I wouldn’t do it this way, for simplicity imagine each manuscript as a vector in n-dimensional space (n is the number of words in the Greek NT, which is one reason I wouldn’t actually do it this way). Let’s imagine our first trial (manuscript) is P52. Each word of John in P52 exists as a value 1 in the appropriate “slot” (an element/entry in the P52 manuscript vector, using the Greek NT as an index). All other values are NaN (not a number; it really doesn’t matter for our purposes). Now we select another manuscript at random, represent it as a vector, and then add it to the P52 manuscript vector. Most of the values in the combined vector will be 1 (or NaN), but if there are omissions, variants, or additions we increase the value for that entry in the combined vector. Although (again) this isn’t how I’d actually do it, for simplicity we add 1 to the existing entry for any omission or variant. Additions, however, add .5 to the entry before and after the added one. So even though we only have a representation of two manuscripts, a single element in our NT vector can have a value indicating more than 2 variants.
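Here is a toy sketch of that encoding in Python (the slot indices and counts are invented, and, as noted above, this is not how I'd actually do it):

# Toy version of the scheme described above: a combined vector indexed by
# the words of a base Greek NT. 1 = attested as in the base text,
# NaN = not extant in any manuscript seen so far; each omission/variant
# adds 1 to its slot, each addition adds 0.5 to the neighbouring slots.
import numpy as np

def start_vector(n_words, extant_slots):
    v = np.full(n_words, np.nan)
    v[extant_slots] = 1.0
    return v

def combine(combined, variants=(), omissions=(), additions=()):
    out = combined.copy()
    for i in set(variants) | set(omissions):
        out[i] = 1.0 if np.isnan(out[i]) else out[i] + 1.0
    for i in additions:                  # word not in the base text:
        for j in (i - 1, i + 1):         # spread 0.5 over its neighbours
            if 0 <= j < len(out):
                out[j] = 0.5 if np.isnan(out[j]) else out[j] + 0.5
    return out

nt = start_vector(20, extant_slots=range(5, 15))   # pretend this is P52
nt = combine(nt, variants=[6], omissions=[10], additions=[12])
print(nt)

NaN marks slots the manuscripts seen so far simply don't contain, so missing text never gets counted as agreement or as variation.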

We continue combining manuscript vectors to make an “NT vector”, but we keep the original manuscript vectors too (the “NT vector” is only one measure of variance, as it sums variants sequentially rather than comparing each manuscript’s variants to every other’s, so we’ll want the originals). Once we have gone through all the manuscripts, we have a total count of variations relative to the number of points (and as vector length is by itself a measure of variance, we already have something useful). However, what we want is an indication of variance that also tells us something about the reliability of our textual attestation relative to the differences.

To start, we make a set of “variant vectors” from the entries of our “NT vector” (I’m avoiding matrix algebra terms). These can be represented in a multidimensional space along with the entire “NT vector” as well as the set of manuscript vectors. To get a visual idea, even bad graphics can help:

[Image: expanding 3D vector field]


In reality, the vectors would differ in size and probably share an origin point:
[Image: vectors sharing a common origin]


We want to know how we can understand the variations among manuscripts in useful ways. One class of methods was developed ~80 years ago but couldn't be used until recently: permutation methods. To understand why, consider dealing with the number of permutations of a 52-card deck before we had good computers: 80658175170943878571660636856403766975289505440883277824000000000000
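That figure is just 52! (52 factorial), which is trivial to confirm today:

import math
print(math.factorial(52))
# 80658175170943878571660636856403766975289505440883277824000000000000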

To compare, e.g., the manuscript vectors to the “variant vectors” and the NT vector we can use multivariate randomized block procedures (MRBP). This can not only yield a statistic for comparing our vector sets (within and between sets), but also give us far more accurate measures of dispersal (variance) for the entirety of the sets. It can also give us a wonderful estimate of manuscripts we don’t even have, via a mathematically sound form of “double-dipping” that creates additional observations from those we have. This is also where appropriate use of distance metrics allows us to compare average dissimilarity between manuscript vectors, quantify the variance vectors, and measure the total degree of similarity vs. dissimilarity for our NT vector.

As there’s no point in more inadequate descriptions of possible methods, let’s recall that the main point is to consider the ways that our vector sets can be compared. Each manuscript vector has an entry that can be compared to every other, as well as to the sum total of variance of that entry in the NT vector. Also, clusters of high variability allow us to easily spot areas in which, e.g., a line has high variability among all manuscripts, indicating that any “base” we might construct to compare variants to would be unreliable. But clusters aren’t necessarily bad, as variance concentrated at certain points, rather than approaching a random dispersal, means that most manuscripts attest to the same reading at most points.

Most importantly, statistical learning/pattern recognition/etc. methods that iterate pairwise comparisons (or multilevel comparisons) using distance metrics allow us to compare, e.g., every manuscript vector to every variant vector; these comparisons can be used to create a better “NT vector 2.0” that represents the ways in which variations are or aren’t cancelled out, or at least dampened, by textual attestation.
 

1robin

Christian/Baptist

Do you actually understand what you have posted? I have a math degree and can't get it all. I am not disagreeing with it but there are few people around that could have understood just the vector mechanics you mentioned.

To add something to your permutations concerning card decks. If everyone on Earth counted one combination for a million years we would still have less than 1 billionth of a percent counted. The chances that I could predict a certain hand from 1-52 cards is still far greater than the chance life would form on its own.

I also had no idea what your conclusion was. Reliable or not?
 

LegionOnomaMoi

Veteran Member
Premium Member
Do you actually understand what you have posted?

Yes. If you have any doubts, questions, or issues with what I stated I would be more than happy to clear them up.

I have a math degree and can't get it all.
Years ago I went to a neighbor of mine who is a professor of mathematics at Brown. He couldn't even recall one of the topics I broached. This was not because his knowledge of mathematics lacked in any way, but because mathematics is vastly more diverse than it was a century ago. In fact, the current approach to the foundations of most mathematics (calculus) is taught in a completely outdated way while the original impetus and intuitive concept was sufficiently rendered rigorous decades ago.

I am not disagreeing with it but there are few people around that could have understood just the vector mechanics you mentioned.

Vectors are related to mechanics only insofar as they are used to represent things like displacement. "Vector mechanics" is like "quam numbers". It's worthless without context.

To add something to your permutations concerning card decks. If everyone on Earth counted one combination for a million years we would still have less than 1 billionth of a percent counted.
If a few million people alive today flipped a coin over and over again for the rest of their lives, and the results were finally tabulated, the probability of the result would be unbelievably, incredibly, astronomically small. Yet such an outcome would be guaranteed.

far greater than the chance life would form on its own.

The only possible way to know this is to know the probability space. If you know it, please share.

Reliable or not?

Until you are capable of determining even Ehrman's views (let alone the state of scholarship), whatever your opinions, they are not yet reliable. We are all of us ignorant of many more things than those we are not.
 

1robin

Christian/Baptist
Yes. If you have any doubts, questions, or issues with what I stated I would be more than happy to clear them up.
I am familiar with many of the numbers concerning probability and have seen most of the equations. I do not understand what you're proving. I saw no conclusions.


Years ago I went to a neighbor of mine who is a professor of mathematics at Brown. He couldn't even recall one of the topics I broached. This was not because his knowledge of mathematics lacked in any way, but because mathematics is vastly more diverse than it was a century ago. In fact, the current approach to the foundations of most mathematics (calculus) is taught in a completely outdated way while the original impetus and intuitive concept was sufficiently rendered rigorous decades ago.
In my case I am ignorant. I got a math degree because I wanted out of engineering. I had already taken so many math classes I almost had the degree anyway so I finished it up and forgot most of it. I have been working in military aviation since then.



Vectors are related to mechanics only insofar as they are used to represent things like displacement. "Vector mechanics" is like "quam numbers". It's worthless without context.
I meant the mechanics of vector analysis. I actually took a class in it alone. I actually liked it. It was far more intuitive than partial DE or discrete.


If a few million people alive today flipped a coin over and over again for the rest of their lives, and the results were finally tabulated, the probability of the result would be unbelievably, incredibly, astronomically small. Yet such an outcome would be guaranteed.
I do not get it. It would be close to 50% and almost exactly what was predicted. Also Biblical probabilities are multiplicative. They have many improbabilities that came true in succession. In a claim to complete truth (at least beyond scribal error which is at worst 5%) they must be multiplicative. They are not the chances a guy wins a lottery. They are the chances the same guy wins it a thousand times. You go from sharpshooter fallacy to astronomical naturalistic absurdity real quick.



The only possible way to know this is to know the probability space. If you know it, please share.
Simply the expansion rate that would permit any life is 1 in billions of trillions. I can look up the exact number Hawking gave if you want but I meant only a ballpark. Plus once the other equally impossible naturalistic chances are MULTIPLIED, as are needed for life to arise on its own, you go from absurdity to insanity at the starting gate.



Until you are capable of determining even Ehrman's views (let alone the state of scholarship), whatever your opinions, they are not yet reliable. We are all of us ignorant of many more things than those we are not.

I know what Ehrman's numbers are. I use them more than any critic in this forum. I was asking if your premise agreed with his reasonable estimates.

I use Ehrman's numbers because they cut down on meaningless contentions from non-theists. My personal conclusion is that Bible accuracy is around 97%. That is halfway between good scholars like Ehrman at 95% and good traditional theologians at 99.5%. The Dead Sea Scrolls affirm this.
 

LegionOnomaMoi

Veteran Member
Premium Member
I am familiar with many of the numbers concerning probability and have seen most of the equations. I do not understand what you're proving. I saw no conclusions.

The entirety was devoted to the ways in which this (particularly the bolded part) is not true:
However note this. The number of errors in a tradition increases consistently with the number of copies. If the Bible had been burned (like the Quran) and only one copy left as a source, then no errors would exist, nor any reliability. So the more copies, the more reliable, and the more errors. High numbers of errors are simply inherent with the enormous (more than any other work in ancient history) volume in a tradition.

Apart from the fact that a single manuscript can show us errors in a manuscript tradition we don't have, the real issue is "consistent". This posits a linear relationship (or at least approximately so) and there are more ways in which this can fail to hold true than can hold true.

This:

Number of errors in the entire textual tradition, the number of manuscripts in existence, the number of average words in each manuscript.
cannot be true (not in the sense that it isn't true as nobody knows the number of errors, but that it isn't a viable model). It assumes we can determine "error" when all we can actually determine is whether variants exist and how. There is no hard and fast method for arguing whether a particular variant is more likely to trace back to the autograph.

I have been working in military aviation since then.
Sounds interesting. Not something I'm overly familiar with (apart from the International Symposium's UAV proceedings, a few monographs or volumes from e.g., Springer Tracts in Advanced Robotics, etc.; not only was I more focused on the computational aspects and the HCI issues, I'm sure a lot of the more interesting material is classified).



I meant the mechanics of vector analysis.
Mechanics? Vector analysis I know. Are you using mechanics in some colloquial sense (e.g., akin to "methods to solve")? Or do you mean the application of vector analysis to mechanics? Sorry, it's late.

It was far more intuitive than partial DE or discrete.
A central operator in vector analysis is the Del (or Nabla) operator; in R3 it is defined in terms of partial derivatives and extends to Rn in the same way. I'm not sure how you are defining vector analysis.
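For reference, in LaTeX notation:

\nabla = \left(\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\right) \text{ in } \mathbb{R}^{3}, \qquad \nabla = \sum_{i=1}^{n} \mathbf{e}_{i}\,\frac{\partial}{\partial x_{i}} \text{ in } \mathbb{R}^{n}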



I do not get it. It would be close to 50%
That would be if we were treating the outcomes in terms of a ratio of frequencies of heads vs. tails. That's not the outcome; that's a function of the outcome. The outcome is a set of sequences. Flip a coin a million times, and the particular sequence you get always has probability (1/2)^1,000,000. A few million people flipping a coin millions and millions of times and you get a set of sequences, each with probability (1/2)^N for however many N times they flipped. Altogether, the probability that you'd get exactly that set of sequences is astronomically tiny.
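Put as a formula: for n independent fair flips, any particular sequence s has probability

P(s) = \left(\tfrac{1}{2}\right)^{n}, \qquad \text{so } n = 10^{6} \;\Rightarrow\; P(s) = 2^{-10^{6}} \approx 10^{-301030},

yet some sequence is observed with probability 1, which is the point.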

Also Biblical probabilities are multiplicative.
They aren't. At least not in any way that matters.
In a claim to complete truth (at least beyond scribal error which is at worst 5%) they must be multiplicative.
There's that 5% again, only nobody knows the number of errors nor is there any clear way to define variation or errors.

They are not the chances a guy wins a lottery.
That's true. Those chances are static, easily derived, etc. Defining variance among manuscripts and what constitutes error is nowhere near as easy.

You go from sharpshooter fallacy to astronomical naturalistic absurdity real quick.

No, as I haven't given any probabilities. I have simply shown that
1) Your statement about "consistent" error increase with the increase of manuscripts is clearly and completely wrong
2) That variance and errors can be defined in a number of ways, linear or multiplicative being one of the poorest.


Simply the expansion rate that would permit any life is 1 in billions of trillions.
This is not the probability space.

I can look up the exact number Hawking gave if you want

Don't bother. Google the Drake equation or better yet see Bayesian analysis of the astrobiological implications of life’s early emergence on Earth

&
The Habitability of Our Earth and Other Earths: Astrophysical, Geochemical, Geophysical, and Biological Limits on Planet Habitability

I can provide more if need be, but that should be sufficient for a baseline or foundation for discourse.


Plus once the other equally impossible naturalistic chances are MULTIPLIED, as are needed for life to arise on its own, you go from absurdity to insanity at the starting gate.

I'm not sure how any of this makes any sense. Could you rephrase?





I know what Ehrman's numbers are.
Ehrman has repeatedly published that nobody knows what the numbers are. In Misquoting Jesus he gives more of a range than in his academic work, but he still specifically states that the number of variations is unknown.

I use Ehrman's numbers

You don't.
 

1robin

Christian/Baptist
The entirety was devoted to the ways in which this (particularly the bolded part) is not true:
I assume you meant the bolded part in the next statement I had made. If so then something has gone horribly wrong here. The fact that total mistakes increase with the number of copies that exist is simply reality. This is not a hypothetical, a mathematical prediction, or a deductive argument. It is a brute fact. I must have misunderstood.


Apart from the fact that a single manuscript can show us errors in a manuscript tradition we don't have, the real issue is "consistent". This posits a linear relationship (or at least approximately so) and there are more ways in which this can fail to hold true than can hold true.
We are not discussing a hypothetical and using mathematical models to predict things about it. We are looking at actual manuscripts.

This:


cannot be true (not in the sense that it isn't true as nobody knows the number of errors, but that it isn't a viable model). It assumes we can determine "error" when all we can actually determine is whether variants exist and how. There is no hard and fast method for arguing whether a particular variant is more likely to trace back to the autograph.
I think you must be speaking of error as it would apply to differences between reality and what is said about it, but that is not the context Ehrman and Wright are speaking in. They are talking about textual accuracy, not historical accuracy. They are two very different issues. Textual accuracy is very easy to compute given a tradition as astronomical as the Bible's. The best we can do historically is probability, best fit, most comprehensive, etc.


Sounds interesting. Not something I'm overly familiar with (apart from the International Symposium's UAV proceedings, a few monographs or volumes from e.g., Springer Tracts in Advanced Robotics, etc.; not only was I more focused on the computational aspects and the HCI issues, I'm sure a lot of the more interesting material is classified).
UAVs are the future. If I had anything to invest it would all be in unmanned aircraft. Many think we may have already built the last airframe that will require a pilot. Aviation is fascinating but I have lately been stuck integrating automated test equipment into a machine built in the 80s to service the venerable F-15. It is nothing short of exasperating. Actually classified stuff is boring. The classified aspects of equipment are usually data sets or code and I hate both.



Mechanics? Vector analysis I know. Are you using mechanics in some colloquial sense (e.g., akin to "methods to solve")? Or do you mean the application of vector analysis to mechanics? Sorry, it's late.
I mean it in a common language use way. Mechanics in engineering and other subjects is simply meant to indicate mode of operation or functional methodology. It is almost a slang term.


A central operator in vector analysis is the Del (or Nabla) operator; in R3 it is defined in terms of partial derivatives and extends to Rn in the same way. I'm not sure how you are defining vector analysis.
My experience was in the use of integration and derivation of trigonometric identities associated with graphs produced by functions or plotted by data. Whatever you are referring to sounds extremely complex and not something I would look forward to.




That would be if we were treating the outcomes in terms of a ratio of frequencies of heads vs. tails. That's not the outcome; that's a function of the outcome. The outcome is a set of sequences. Flip a coin a million times, and the particular sequence you get always has probability (1/2)^1,000,000. A few million people flipping a coin millions and millions of times and you get a set of sequences, each with probability (1/2)^N for however many N times they flipped. Altogether, the probability that you'd get exactly that set of sequences is astronomically tiny.
I do not see the significance but will address it anyway, because it is a sharpshooter fallacy that does not apply to biblical probability. You have a 100% chance of getting some result. Whatever result you got was trivial and did not defy probability. Biblical predictions, and those concerning the Bible, are far different. For example, you had no predictable goal to compare results to. In the case of life-permitting universes I do.

Only a vanishingly tiny band of circumstances will allow a life-permitting universe. First I must have a universe at all. There is 0 probability that nothing will produce anything. Then I need an extremely specific universe given 1 in trillions of billions (expansion rates, nuclear forces, gravity, etc. must all be extremely fine-tuned), then I need a whole range of initial specific conditions that will support human life in a universe that is hostile to it. These are all a priori needs. My worldview does not allow for whatever happened to occur. My claims are not satisfied by just anyone winning the lottery. I must have the same man win the lottery time after time after time. No natural explanations exist if that occurred.

In other realms like prophecy I need a whole string of improbable things to occur that were predicted beforehand (which is what your example does not do). Ezekiel's Tyre prophecy, for example, required a single man to attack it; he could only do a specific amount of damage; he could not have discovered any of the riches known to be in Tyre; he had to then leave and go to Egypt for the specific purpose of gaining wealth to pay his soldiers. I then needed another force to arrive (this one unnamed) to destroy the island fortress and produce an uncommon level of destruction. Then I needed the Phoenicians to give up completely on ever rebuilding the city. All these occurred. Claiming afterwards that any series of events is improbable is a meaningless claim. Having a list of a priori necessities that were improbable but still occurred is a whole different matter.

Continued below:
 

1robin

Christian/Baptist
They aren't. At least not in any way that matters.
The one above certainly is, and it only has maybe a dozen necessities. The 350 predictions concerning Jesus include hundreds made beforehand that were improbable but occurred.

There's that 5% again, only nobody knows the number of errors nor is there any clear way to define variation or errors.
There are easy ways to determine textual errors. Those scholars most capable of knowing how to go about it (Ehrman, White, Wright, etc.) all get extremely similar numbers and claim they are extremely certain. In a weird irony, the more copies you have the more of both errors and certainty you have. I have seen comparisons for many errors. One I remember was:

Tradition 1. Jesus the Christ did X ....
Tradition 2. Christ Jesus did X ......
Tradition 3. The Lord did X .......
Tradition 4. Jesus (called the Christ) did X .......

I have four variants but near certainty concerning who did what.


That's true. Those chances are static, easily derived, etc. Defining variance among manuscripts and what constitutes error is nowhere near as easy.
I think you're talking about historical error and that is not part of a textual accuracy debate. They are very distinct issues.



No, as I haven't given any probabilities. I have simply shown that
1) Your statement about "consistent" error increase with the increase of manuscripts is clearly and completely wrong.
Textually speaking almost every textual scholar on either side would disagree with you. Historically speaking there would be much more inconsistency among scholars.

2) That variance and errors can be defined in a number of ways, linear or multiplicative being one of the poorest.
I did not understand this.



This is not the probability space.
No it is not; that was only the possibilities concerning expansion rates alone. Rates which seem to be independent of initial conditions and natural law, I might add. That is only one of thousands of improbable things that must occur to get life to arise on its own. Another would be the chance of getting a universe from nothing, and that one has 0 probability.



Don't bother. Google the Drake equation or better yet see Bayesian analysis of the astrobiological implications of life’s early emergence on Earth

&
The Habitability of Our Earth and Other Earths: Astrophysical, Geochemical, Geophysical, and Biological Limits on Planet Habitability

I can provide more if need be, but that should be sufficient for a baseline or foundation for discourse.
I have seen many of these computations and they stretch over a huge range. The problem with the more probable ones is they only concern some of the factors necessary for life. For example they never compute the probability of getting a universe from nothing. They usually start at a point after a huge number of improbable things are said to have taken place on their own and then evaluate a tiny microcosm of what life must have done on its own. For example they may tackle the left-handed protein issue, the DNA/RNA issue, and a couple more, but even these astronomical numbers come after a long, almost inexhaustible string of improbabilities. I think all together the probability that life arose on its own is equivalent to zero, but I have never seen a comprehensive probability less than 1 in 10^50.




I'm not sure how any of this makes any sense. Could you rephrase?
Forgetting for a minute getting everything from nothing, which is a logical absurdity and also a necessity, the expansion rate necessary for a structured universe is by itself a probabilistic absurdity:

"Why did the universe start out with so nearly the critical rate of expansion that separates models that recollapse from those that go on expanding forever, that even now, 10 thousand million years later, it is still expanding at nearly the critical rate? If the rate of expansion one second after the Big Bang had been smaller by even one part in 100 thousand million million, the universe would have collapsed before it ever reached its present size."
Stephen Hawking

Add in or multiply in nuclear forces, the mass ratio of an atom's components, gravity, etc., and long before we can even get to where most evolutionists begin we have absurdity times absurdity times absurdity, which = hyperbolic absurdity.



Ehrman has repeatedly published that nobody knows what the numbers are. In Misquoting Jesus he gives more of a range than in his academic work, but he still specifically states that the number of variations is unknown.
Of course an exact number is unknown. However a useful ballpark is easily derived. You can even buy software that will take all major Bible versions and find every single difference between them all and total them.



You don't.
Ehrman always claims between 300,000 and 400,000. I use his 400,000 number just to limit contention.

Instead of amplifying uncertainty well beyond its established level, let's see what Ehrman's conclusion is from all of these numbers.

Most of these differences are completely immaterial and insignificant; in fact most of the changes found in our early Christian manuscripts have nothing to do with theology or ideology. Far and away the most changes are the result of mistakes, pure and simple — slips of the pen, accidental omissions, inadvertent additions, misspelled words, blunders of one sort or another. When scribes made intentional changes, sometimes their motives were as pure as the driven snow. And so we must rest content knowing that getting back to the earliest attainable version is the best we can do, whether or not we have reached back to the “original” text. This oldest form of the text is no doubt closely (very closely) related to what the author originally wrote, and so it is the basis for our interpretation of his teaching.

The gentleman that I’m quoting is Bart Ehrman in Misquoting Jesus. [audience laughter]

Did the Bible Misquote Jesus Debate
“Can the New Testament Be Inspired in Light of Textual Variation?”
Dr. James White vs. Dr. Bart Ehrman
January 21, 2009


Or if you wish an even more rigorous examination from men who may be history's greatest experts on testimony and evidence, see Simon Greenleaf or Lord Lyndhurst.
 

Sha'irullah

رسول الآلهة
Gods and men are vital to each other, as each plays a significant role in their mutual reliance: man likes to worship and gods love to be worshiped. Man loves to obey and god loves to dictate.
Kathenotheism is a very peculiar entry in the field of the existence of deities, and when further applied to the philosophical aspects of solipsism and idealism it holds significant credibility in this regard, as it provides more theological basis for the nature of a multiplicity of "supreme" gods. I myself prefer to apply idealism, as it does not make an equally brash statement coupled with the existence of a god.
All gods are only relative to the individual and only take effect in the individual's mind.

....In short, it is all in your head.
 

LegionOnomaMoi

Veteran Member
Premium Member
easy ways to determine textual errors.
"It has been estimated that no two manuscripts of the New Testament are identical in all respects...Many of our text-critical decisions concern issues of fundamental importance for the interpretation and meaning of the text, and they often impinge on basic issues for Christian doctrine. The wording of the Lord’s Prayer in Matthew’s Gospel differs within the manuscript tradition; Jesus’ words instituting the Last Supper in Luke’s Gospel are not firmly established; the well-known story of the Woman taken in Adultery, normally printed within John’s Gospel, is absent from some manuscript witnesses. The ending to Mark’s Gospel is disputed; manuscripts deemed important omit the last twelve verses. The verses in Luke 22 about Jesus’ bloody sweat in Gethsemane are not in all our manuscripts. The Parable of the Two Boys in Matthew 21 circulated in three diametrically opposed forms. We can trace these variants to the second century. At Hebrews 2:9 did the author write that Jesus died ‘without God’ or ‘by the grace of God’? The answer depends on which manuscript one is reading. Likewise did Paul confidently tell the readers at Romans 5:1 that ‘we have peace’ or was he exhorting them with the words ‘let us have peace’? The Greek varies in the manuscript tradition. At 1 Cor 15:51 did Paul write that at the end time ‘We shall all die but we shall not all be changed’ or ‘We shall not all die but we shall all be changed’?"
Elliott, J. K. (Ed.). (2010). New Testament Textual Criticism: The Application of Thoroughgoing Principles: Essays on Manuscripts and Textual Variation (Vol. 137 of Supplements of Novum Testamentum). Brill.

"The clarification, definition, and delimitation of the term "textual variant" are vastly more complex and difficult matters than at first would appear. The common or surface assumption is that any textual reading that differs in any way from any other reading in the same unit of text is a "textual variant," but this simplistic definition will not suffice...Here the complexities multiply."
Epp, E. J., & Fee, G. D. (1993). Studies in the theory and method of New Testament textual criticism (Vol. 45 of Studies and Documents). Wm. B. Eerdmans Publishing

"First, the number of variants at any one of the places discussed is due to many factors, such as

the length of the variant, since a longer one will attract more subvariants in the form of spelling variations and accidents of different kinds (although the longest variants in Luke and Galatians are only a word different in length)
the difficulty of the passage, and its significance
the number of forms already in existence, leading to confused recollections in the scribal mind; that is to say, where confusion starts it will multiply

Secondly, there is a procedural question as to whether one should be looking for the number of variants and ignoring subvariants, since many of the latter seem rather trivial alterations.
Thirdly, while the number of readings known to us is derived from extant copies, the degree of variety between those copies must be at least partly due to the degree of variation between the copies which are lost.
Fourthly, I have to ask whether the data I have used is fit for this purpose...Unfortunately, the only way in which we could find that out would be by a full collation of the entire text in every manuscript, and it is the impracticability of that which provided us with Text und Textwert in the first place.
Fifthly, it is not at all clear how different texts, different copying traditions and different variations can be compared."

Parker, D. C. (2008). An introduction to the New Testament manuscripts and their texts. Cambridge University Press.

"It is generally agreed that the fourth gospel is based to a significant extent on some form of predecessor, an older document...There is deep uncertainty however concerning the identity of that predecessor."
Brodie, T. L. (1993). The Quest for the Origin of John's Gospel: A Source-Oriented Approach. Oxford University Press.

"the nature of New Testament textual transmission virtually precludes any ultimate identification of ‘earliest attainable’ with ‘the original'"
Epp, E. J. (2007) “It’s All about Variants: A Variant-Conscious Approach to New Testament Textual Criticism,” HTR 100: 275-308


"If 1 Cor 15:29-34 is in fact a non-Pauline interpolation, as I have argued, questions immediately arise as to when the interpolation was made, by whom, why, and why at this particular place in the Pauline correspondence? The short answer to each of these questions, of course, is that we simply do not know."

Walker, W. O. (2007). 1 Corinthians 15: 29-34 as a non-pauline interpolation. CQB, 69(1), 84-103.

Those scholars most capable of knowing how to go about it (Ehrman, White, Wright, etc.)...
Neither White nor Wright is a textual critic. What scholars?


the more of both errors and certainty you have.
The number of ways it can fail or hold true is a simple combinatorics problem, and the evidence for how it does can be seen in real scholarship.

I think you're talking about historical error
I'm not. I have several versions of the Greek NT and various critical editions to individual letters & gospels. Each of the latter includes a critical apparatus (and textual critical notes), while for the UBS Greek NT I have an entire textual critical companion to accompany its critical apparatus.

To be clear, when I say "critical editions to individual letters & gospels" I mean volumes like
Schmid, U.B., Elliott W.J., & Parker, D.C. (2007) The New Testament in Greek IV The Gospel According to St. John: Volume Two: The Majuscules (vol. 37 of New Testament Tools, Studies and Documents)


I did not understand this.
I devoted an entire post to it. I would ask (for the sake of simplicity and economy of effort) that you refer to that post (here) before I attempt an explanation.



that was only the possibilities concerning expansion rates
There is no possible way that what you said could even in theory describe a probability space. First, you stated "Simply the expansion rate that would permit any life is 1 in billions of trillions". As there are infinitely many possible rates, the probability would be the same as with any single value given a continuous interval: for every continuous pdf, the probability of any particular value is always 0 (see the note after the third point). In fact, even for some probability distributions which only approximate continuity, the probability of any particular outcome is 0.
Second, you give no justification for your "1 in billions of trillions", nor is such a figure even well-formed a priori given continuity.
Third, you have not given the rate of expansion, which would at least provide some connection between the dependent and independent variables.
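To spell the first point out: for any continuous random variable X with density f,

P(X = a) = \int_{a}^{a} f(x)\,dx = 0 \quad \text{for every single value } a,

so "the probability of this exact rate" is only meaningful relative to an interval and a specified distribution over possible rates, neither of which has been given.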


they stretch over a huge range.
One of them was a Bayesian analysis.


The problem with the more probable ones is they only concern some of the factors necessary for life.
The problem with all of them is that we don't know these.

For example they never compute the probability of getting a universe from nothing.
That could be because in modern cosmology, theoretical physics, and quantum physics, astrophysics, etc., the idea of "from nothing" has no meaning and is a relic from an Aristotelian view of causality that was understandably adopted/adapted by e.g., Anselm and Aquinas but which has no place after the development of QM and systems sciences (and the motivating factors for the latter).


Why did the universe start...
Among other misrepresentations present in your source, found in Hawking's book after "to make possible the development of life", is the following:
"Of course, there might be other forms of intelligent life, not dreamed of even by writers of science fiction, that did not require the light of a star like the sun or the heavier chemical elements that are made in stars and are flung back into space when the stars explode"

The other major misrepresentation is that your source cuts out Hawking's relatively extensive reasoning against positing "a divine purpose".

Add in or multiply in nuclear forces
...and we're back at the coin toss sequence issue.

software that will take all major Bible versions

That isn't textual criticism. Any and all versions are so useless from a text-critical point of view that it couldn't matter less if you had every version in every language (nor do I know what such a comparison is, apart from a very, very poor method people who can't read Greek or Hebrew use to understand particular lines in the Bible).

Ehrman always claims
Estimates. And as this estimation doesn't actually give you the number of variations it is useless without understanding at least the basics of textual criticism.

let's see what Ehrman's conclusion is from all of these numbers.

You claim he asserts there are no significant doctrinal issues in the errors, while he wrote an entire monograph devoted to doctrinal issues in manuscript errors.

Moreover, you have continually ignored my quotations of Ehrman as if I didn't know whom you were speaking of. Rather, I don't base my understanding of a scholar by quoting dialogues dumbed down for a public audience; I rely on their scholarship.


see Simon Greenleaf or Lord Lyndhurst.

Neither have published anything that isn't mostly outdated by research in several fields, including my own.
 

1robin

Christian/Baptist
"It has been estimated that no two manuscripts of the New Testament are identical in all respects. The verses in Luke 22 about Jesus’ bloody sweat in Gethsemane are not in all our manuscripts. The Parable of the Two Boys in Matthew 21 circulated in three diametrically opposed forms. We can trace these variants to the second century. At Hebrews 2:9 did the author write that Jesus died ‘without God’ or ‘by the grace of God’? The answer depends on which manuscript one is reading. Likewise did Paul confidently tell the readers at Romans 5:1 that ‘we have peace’ or was he exhorting them with the words ‘let us have peace’? The Greek varies in the manuscript tradition. At 1 Cor 15:51 did Paul write that at the end time ‘We shall all die but we shall not all be changed’ or ‘We shall not all die but we shall all be changed’?"
Elliott, J. K. (Ed.). (2010). New Testament Textual Criticism: The Application of Thoroughgoing Principles: Essays on Manuscripts and Textual Variation (Vol. 137 of Supplements of Novum Testamentum). Brill.
This lines up exactly with what I claimed and what I would expect. The Bible has errors. It has such a rich tradition that it allows almost all errors to be identified, which allowed you to post some of the well-known examples. I see no conflict between what you stated above and my claims. In fact, in hundreds of hours watching debates on the accuracy of the Bible, I have never heard a single claim of error of which the Biblical scholar was not aware and for which he did not have the detailed history. If we know all the errors, or virtually all of them, there exists no problem. Every modern Bible footnotes these exact errors you mention, and even Ehrman admits there are none in essential doctrine. So even admitting the errors I see no real problem.

"The clarification, definition, and delimitation of the term "textual variant" are vastly more complex and difficult matters than at first would appear. The common or surface assumption is that any textual reading that differs in any way from any other reading in the same unit of text is a "textual variant," but this simplistic definition will not suffice...Here the complexities multiply."
Epp, E. J., & Fee, G. D. (1993). Studies in the theory and method of New Testament textual criticism (Vol. 45 of Studies and Documents). Wm. B. Eerdmans Publishing

"First, the number of variants at any one of the places discussed is due to many factors, such as

the length of the variant, since a longer one will attract more subvariants in the form of spelling variations and accidents of different kinds (although the longest variants in Luke and Galatians are only a word different in length)
the difficulty of the passage, and its significance
the number of forms already in existence, leading to confused recollections in the scribal mind; that is to say, where confusion starts it will multiply

Secondly, there is a procedural question as to whether one should be looking for the number of variants and ignoring subvariants, since many of the latter seem rather trivial alterations.
Thirdly, while the number of readings known to us is derived from extant copies, the degree of variety between those copies must be at least partly due to the degree of variation between the copies which are lost.
Fourthly, I have to ask whether the data I have used is fit for this purpose...Unfortunately, the only way in which we could find that out would be by a full collation of the entire text in every manuscript, and it is the impracticability of that which provided us with Text und Textwert in the first place.
Fifthly, it is not at all clear how different texts, different copying traditions and different variations can be compared."
I can pretty much agree with this but it does not seem to add anything all that meaningful to my claims. I can literally take out all of the well-known variants that make any significant impact (or even have the theoretical possibility of doing so) from any one Bible and have vastly more left than is necessary to justify Christian faith in its essential doctrines. Even included, these uncertainties do not render faith unreliable.







Parker, D. C. (2008). An introduction to the New Testament manuscripts and their texts. Cambridge University Press.

"It is generally agreed that the fourth gospel is based to a significant extent on some form of predecessor, an older document...There is deep uncertainty however concerning the identity of that predecessor."
Brodie, T. L. (1993). The Quest for the Origin of John's Gospel: A Source-Oriented Approach. Oxford University Press.
That was a minimalistic conclusion. I think all the Gospels, plus much of Paul's writings, either include earlier traditions (and I mean early), were written with them in mind, or were compiled and added to by the apostles. This increases their validity and is a net positive. Many of the sources commonly cited for Paul are within a few years of Christ's death. Granting that they were used or accounted for by Paul only adds to confidence.

"the nature of New Testament textual transmission virtually precludes any ultimate identification of ‘earliest attainable’ with ‘the original'"
Epp, E. J. (2007). “It’s All about Variants: A Variant-Conscious Approach to New Testament Textual Criticism.” HTR 100: 275-308.
This is something that does not really affect faith; it is only a procedural issue within textual criticism. If this were a secular class on textual criticism it would be meaningful, but this is a discussion about the justification for faith, and there it is not.


"If 1 Cor 15:29-34 is in fact a non-Pauline interpolation, as I have argued, questions immediately arise as to when the interpolation was made, by whom, why, and why at this particular place in the Pauline correspondence? The short answer to each of these questions, of course, is that we simply do not know."

Walker, W. O. (2007). 1 Corinthians 15:29-34 as a non-Pauline interpolation. CBQ, 69(1), 84-103.
These amplifications of uncertainty get washed out by other factors. These claims were made during the lifetimes of eyewitnesses, were agreed to by all the apostles, and no surviving record exists of any "I was there and this did not happen" counter-claim, which you would expect to have if they were untrue. There was no institution at the time capable of suppressing counter-claims. If I have only Paul's claims, I have more than enough justification for faith; adding in the possibility that he may have made them with knowledge of prior claims only adds to confidence. I do not need a name attached to them. Signing a name to a document adds nothing to reliability unless the details of that person's life are known, and known well.


Neither White nor Wright is a textual critic. What scholars?
Checking a box saying "textual critic" does not make anyone more or less capable than anyone else. This is an elitist and arbitrary criterion. For example, I think White has personally held more extant biblical manuscripts than anyone, and Wright is a textual legend, referenced constantly as a respected source by his opposition as much as any scholar. In fact I know of no scholar more quoted on textual issues.


The reasons for how many ways it can fail or hold true are a simple combinatorics problem, and the evidence for how it does can be seen in real scholarship.
I do not have time to backtrack manually at this time. I cannot remember the context here.


I'm not. I have several versions of the Greek NT and various critical editions to individual letters & gospels. Each of the latter includes a critical apparatus (and textual critical notes), while for the UBS edition I have an entire textual critical companion to accompany the critical apparatus in the UBS Greek NT.

To be clear, when I say "critical editions to individual letters & gospels" I mean volumes like
Schmid, U. B., Elliott, W. J., & Parker, D. C. (2007). The New Testament in Greek IV: The Gospel According to St. John, Volume Two: The Majuscules (Vol. 37 of New Testament Tools, Studies and Documents).

I think your original point was about accuracy. Textual accuracy is reasonably possible to calculate in absolute terms. In fact you could even use Ehrman's worst case scenario numbers like 400,000 and still wind up with a very high reliability factor for core Christian doctrine. So I had to conclude you meant historical accuracy, which is far harder to ascertain. You say that is not it, so I am left without an explanation for what you claimed. If I ranged virtually all reliable and competent scholars' numbers together, I would have a very narrow range of between 93% and 99.5% textual accuracy for the bible. The best example is the Dead Sea Scrolls, which produced over 98% accuracy for Isaiah (if I am remembering correctly). Add in the fact that essential doctrine is virtually free of error, and textual accuracy simply dissolves as an impediment to faith. Just guessing, you would need at least a 25% level of inaccuracy to begin to invalidate faith (on a textual basis). Even given the uncertainties that do exist, that number is impossible to even come near in terms of textual error.
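For concreteness, here is a minimal sketch of one naive way such a percentage can be produced from a raw variant count. The word count, the manuscript count, and the choice to treat each variant as touching a single word are round illustrative assumptions, not figures or methods taken from any particular scholar:

```python
# A minimal sketch of one naive way a "textual accuracy" percentage can be
# produced from a raw variant count. Every number below is an illustrative
# placeholder, not a figure taken from the scholars cited in this thread.

NT_WORD_COUNT = 138_000      # rough word count of a Greek NT (assumption)
MANUSCRIPT_COUNT = 5_800     # rough number of Greek manuscripts (assumption)
VARIANT_COUNT = 400_000      # the "worst case scenario" figure quoted above

# Total words copied across the whole manuscript tradition.
words_copied = NT_WORD_COUNT * MANUSCRIPT_COUNT

# Treat every variant as affecting a single word somewhere in that tradition,
# and every variant as equally (in)significant.
fraction_affected = VARIANT_COUNT / words_copied
naive_accuracy = 1 - fraction_affected

print(f"Fraction of copied words affected: {fraction_affected:.4%}")
print(f"Naive 'textual accuracy': {naive_accuracy:.2%}")

# The obvious objection (raised later in this thread) is that such a figure
# treats a doctrinally significant variant and a spelling slip identically,
# so the percentage by itself says very little.
```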


Continued below:
My computer crashed and erased my continued response. I will try to go back and redo it soon.
 

Sha'irullah

رسول الآلهة (Messenger of the Gods)
All gods exist, but not objectively; any deity man has conceived is only relative to his own consciousness and mental capabilities. To disprove a god is to attempt to disprove the existence of another mind. The only way to disprove a god, though, is to disprove the necessity of a god to alter objective planes of perception such as the natural world. So far no such thing has occurred, nor have the objectively existing texts that are the supposed result of such deities been validated as anything other than the work of another human mind.

The basis of a deity that abides by the natural world cannot be disproved, though; it exits the categorical label of belief and enters individual acknowledgement. I can say that Baal exists... and he does, along with Dhul-Halassa, but the instant I say that Baal did this to such an item, and place that existence in the natural world, the existence of Baal is subject to criticism.
Any deity imaginable is as it is supposed to be: imaginable. Asserting natural effects that disregard natural laws is asserting the nonexistence of a deity (all gods are nonexistent, by the way). The existence of any deity is purely a personal and conscious addition to the human experience, not a physical or supernatural extension to what we already perceive in our own world.

Thomas Paine - The Age of Reason...
"As it is necessary to affix right ideas to words, I will, before I
proceed further into the subject, offer some other observations on the
word revelation. Revelation, when applied to religion, means
something communicated immediately from God to man.

No one will deny or dispute the power of the Almighty to make such
a communication, if he pleases. But admitting, for the sake of a case,
that something has been revealed to a certain person, and not
revealed to any other person, it is revelation to that person only. When
he tells it to a second person, a second to a third, a third to a fourth,
and so on, it ceases to be a revelation to all those persons. It is
revelation to the first person only, and hearsay to every other, and
consequently they are not obliged to believe it."​
 

LegionOnomaMoi

Veteran Member
Premium Member
I see no conflict between what you stated above and my claims.
I tried to quote more to give additional context, but had to cut out a lot, which no doubt caused confusion (my apologies).
even Ehrman admits there are none in essential doctrine
He devoted one of his major works to the ways in which doctrinal controversies "impacted the surviving literature on virtually every level."

Alas, I cannot reproduce it all but I can provide some of what he concludes:

"The textual problems we have examined affect the interpretation of many of the familiar and historically significant passages of the New Testament: the birth narratives of Matthew and Luke, the prologue of the Fourth Gospel, the baptismal accounts of the Synoptics, the passion narratives, and other familiar passages in Acts, Paul, Hebrews, and the Catholic epistles. In some instances, the interpretations of these passages—and the books within which they are found—hinge on the textual decision; in virtually every case, the variant readings demonstrate how the passages were understood by scribes who "read" their interpretations not only out of the text but actually into it, as they modified the words in accordance with what they were taken to mean.
It might also be observed that a number of these textual problems affect broader issues that have occupied New Testament scholars for the better part of our century. The following list is suggestive rather than exhaustive: Do the preliterary creedal and hymnic fragments cited by the New Testament authors preserve an adoptionistic Christology? Conversely, do they portray Jesus, already in the 30s or 40s C. E., as divine? How does Mark entitle his Gospel? How does he understand Jesus' baptism at the beginning of his narrative, or the cry of dereliction near the end? Does Luke have a doctrine of atonement? Does he envisage a "passionless Passion"? Just how "high" is the Christology of the Fourth Gospel? Why did the secessionists leave the Johannine community? Is Jesus ever actually called God in the New Testament?"


Ehrman, B. D. (1993). The Orthodox corruption of scripture: The effect of early Christological controversies on the text of the New Testament. Oxford University Press.
Here as elsewhere Ehrman not only covers doctrinal variants still contested, but, perhaps more importantly, the fact that we know that during the main centuries of Christian theological and Christological controversies the scribes were altering the texts. However, we have almost no manuscripts from this entire period, just the aftereffects.


To be perfectly clear: a textual critical issue means the experts disagree about which variant (if any) is likely the "correct" one. When such variants concern lines that "impinge on basic issues for Christian doctrine", it means the specialists have not resolved textual critical issues that concern "error in essential doctrine".

Take Hebrews 2:9
ὅπως χωρὶς θεοῦ ὑπὲρ παντὸς γεύσηται θανάτου/hopos choris theou huper pantos geusetai thanatou (SBL)
["so that he [Jesus] would experience death apart from God for all"]

ὅπως χάριτι θεοῦ ὑπὲρ παντὸς γεύσηται θανάτου/hopos chariti theou huper pantos geusetai thanatou (UBS)
["so that by the grace of God he would experience death for all"]

Both editions are produced by the world's leading experts. So why the divergence?

Here's (part of) why:

Bruce, F. F. (1999). “Textual Problems in the Epistle to the Hebrews.”

It seems to me that Jesus dying away from/apart from/without God is pretty significant, especially relative to Jesus' dying "by the grace of God." And there are plenty of other examples.
Wright is a textual legend
Here's his CV. Among his countless publications, degrees, awards, positions, etc., can you point to a single one on textual criticism?
In fact in hundreds of hours watching debates on the accuracy of the bible
That might be the problem. First, these debates are not intended to settle anything other than the opinions of non-specialist audiences. Scholars debate via journals, conferences, monographs, etc. (there's a reason academic conferences involve "talks", not debates). Second, it means that you are being denied access to the actual scholarly debates, such as those in the sources I cited or even the link above. As you can see, however, these sources expect you to know Greek (and often Syriac, Latin, and Coptic, as early witnesses are not all in Greek), to be familiar with textual critical notations, and in general to have a level of background on the subject that few outside specialists, or those studying to be specialists, possess. Third, such debates artificially dichotomize issues that are far more nuanced. Fourth, they often don't even include actual specialists. Finally, the sample of scholars one is exposed to by watching such debates is heavily skewed: not only are most positions not represented, but the leaders in the various fields these debates concern are almost never present.


I have never heard a single claim of error

I've been trying to work with your term but at this point I think it's impeding dialogue. The idea that textual criticism (NT, Biblical, Medieval, Classical, etc.) concerns "errors" is at best highly misleading. To the extent "errors" exist they are usually issues of grammar (and typically foreign to OT/NT criticism). The issue is variants. What we have are places in the NT where two or more variations are attested to by large numbers of witnesses (I use this term as it is used in NT textual criticism in particular, as in this case it includes manuscripts, translations, patristic quotations, etc.). They could all be errors in some cases.
To be more concrete, consider 1 Cor. 15:51:

πάντες οὐ κοιμηθησόμεθα πάντες δὲ ἀλλαγησόμεθα
οὐ κοιμηθησόμεθα, οὐ πάντες δὲ ἀλλαγησόμεθα
οὖν κοιμηθησόμεθα, οὐ πάντες δὲ ἀλλαγησόμεθα
κοιμηθησόμεθα πάντες δὲ ἀλλαγησόμεθα
αναστησομεθα, οὐ πάντες δὲ ἀλλαγησόμεθα

Each one of these is represented in multiple sources, and in multiple different kinds of sources. For example, one is attested to by Sinaiticus, another by Vaticanus. The earliest version differs from both. Which one is correct?
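Purely as an illustration of how one might tabulate the surface differences among these readings without deciding among them, here is a minimal sketch; the tokenization and the use of a generic sequence diff are my own arbitrary choices, not the methodology of any critical edition or apparatus:

```python
# A minimal sketch: pairwise word-level comparison of the readings of
# 1 Cor. 15:51 listed above. The diff method (difflib.SequenceMatcher) is an
# arbitrary illustrative choice, not the methodology of any critical apparatus.
from difflib import SequenceMatcher
from itertools import combinations

readings = [
    "πάντες οὐ κοιμηθησόμεθα πάντες δὲ ἀλλαγησόμεθα",
    "οὐ κοιμηθησόμεθα, οὐ πάντες δὲ ἀλλαγησόμεθα",
    "οὖν κοιμηθησόμεθα, οὐ πάντες δὲ ἀλλαγησόμεθα",
    "κοιμηθησόμεθα πάντες δὲ ἀλλαγησόμεθα",
    "αναστησομεθα, οὐ πάντες δὲ ἀλλαγησόμεθα",
]

def word_diff(a: str, b: str) -> int:
    """Count word-level insertions, deletions, and substitutions between two readings."""
    # Strip commas so punctuation does not count as a lexical difference.
    wa, wb = a.replace(",", "").split(), b.replace(",", "").split()
    matcher = SequenceMatcher(None, wa, wb)
    return sum(max(i2 - i1, j2 - j1)
               for tag, i1, i2, j1, j2 in matcher.get_opcodes()
               if tag != "equal")

for (i, a), (j, b) in combinations(enumerate(readings, start=1), 2):
    print(f"reading {i} vs reading {j}: {word_diff(a, b)} differing word(s)")
```

Counting word-level differences like this says nothing about which reading, if any, is original; that is precisely the question the editions leave open.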


Every modern bible footnotes these exact errors you mention
None of them do.


That was a minimalistic conclusion.
Virtually all NT scholars believe not only that there was more than one author of John, but that the authors continually added to or changed the Johannine texts. The problem is that while our evidence for this is strong, it suggests there wasn't any one original text, and it provides us with only crude generalities as to the nature of its composition.



These amplifications of uncertainty get washed out
That is one of the leading experts in the world on Pauline textual criticism referring to a current textual issue. What was "washed out"?

This is an elitist and arbitrary criterion.
It isn't arbitrary. My only degree in languages is in ancient Greek & Latin, but I know how to read several more languages and have a background in linguistics. However, reading actual Greek manuscripts ranges for me from "difficult" to "impossible". You're talking about going so far beyond reading such manuscripts that it requires extensive training (in fact, textual critics rely on paleographers for the real nuances of the manuscripts, like dating).

In fact I know of no scholar more quoted on textual issues.

You don't read scholarship.

Ehrman's worst case scenario numbers like 400,000
Here's Ehrman in one of his academic works:
"significance cannot simply be quantified; it is pointless, for example, to calculate the number of words of the New Testament affected by such variations or to determine the percentage of known corruptions that are theologically related"
(The Orthodox Corruption of Scripture)
Could you tell me why, for your "worst case scenario", you cite a figure that he explicitly states, both in the academic book above and in two others, doesn't even matter, while the "importance of theologically oriented variations, on the other hand, far outweighs their actual numerical count"?


I would have a very narrow range of between 93% and 99.5% textual accuracy

No, you would have a meaningless figure.
 