Just curious....asking an irreligious.....
which part of the gospels.....count?
Count as what?
Just curious....asking an irreligious.....
which part of the gospels.....count?
I was thinking about this a little more (especially as I wrote the first draft reply long after midnight and the second while others were getting up). I missed some very simple ways in which variance among manuscripts can be measured (e.g., not just lexical variants but syntax, lexical additions, lexical omissions, etc.). There is little in the literature about syntactic difference (i.e., difference in word order). This is very, very good, because trying to account for it would be either very easy and very wrong (e.g., by treating manuscripts as sets of lexemes and using power sets to yield variance measures) or extremely complex (deriving numerous parameters to come up with a set of models that are then tested for predictive power, finally giving us basically what we'd get without factoring in syntax).
We have some ~6,000 Greek manuscripts, but these vary in size (by size I refer to the amount of NT material in them, not physical size), ranging from the earliest scrap (P52, a fragment of John from the first half of the 2nd century with a few lines) to those like Sinaiticus and Vaticanus. How might we compare P52 to the other two? We can look at the lines included in P52 and for each one count how many lexical variants, omissions, and/or additions there are relative first to Sinaiticus and then to Vaticanus. However, this doesn't take into account variance between Sinaiticus and Vaticanus, and the problem is that were we to repeat the same procedure and compare those two manuscripts line by line, some variants would be ones we have already accounted for. If a word in P52 is identical to the corresponding word in Sinaiticus but not Vaticanus, then counting that same difference again when we compare Sinaiticus and Vaticanus would be double-dipping.
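To make the double-dipping point concrete, here is a minimal Python sketch (not a real collation tool; the witnesses, slots, and readings are all invented) of counting each variation unit once across several witnesses rather than once per pairwise comparison:

# Toy sketch: count variant readings across hypothetical aligned witnesses
# without double-counting the same variant.  Each "manuscript" is a dict
# mapping a shared (verse, slot) index to a word; missing slots are lacunae.
from collections import defaultdict

p52 = {(1, 0): "ο", (1, 1): "ιησους"}                                # made-up readings
sinaiticus = {(1, 0): "ο", (1, 1): "ιησους", (1, 2): "ειπεν"}
vaticanus = {(1, 0): "ο", (1, 1): "ις", (1, 2): "ειπεν"}             # invented variant

witnesses = {"P52": p52, "Sinaiticus": sinaiticus, "Vaticanus": vaticanus}

# Collect, for every slot, the set of distinct readings attested anywhere.
readings = defaultdict(set)
for ms in witnesses.values():
    for slot, word in ms.items():
        readings[slot].add(word)

# A slot contributes (number of distinct readings - 1) variation units, so the
# same difference is counted once, not once per pairwise comparison.
variant_units = sum(len(r) - 1 for r in readings.values())
print("variation units:", variant_units)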
Although we don't wish to use the words found in a modern Greek NT, we do wish to use the structure of the modern NT itself. This allows us greater flexibility in the ways in which we can feed data into some computational model (it provides a way to compare manuscripts without having to represent them as existing in some configuration space or matrix representation in which every word has an exact position for that manuscript, which would bring us back to the issue of syntax we already determined would artificially inflate variance). However, it doesn't tell us how we should represent the data. This depends in part on the method/model used.
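For what "using the structure of the modern NT as an index" might look like in practice, here is a minimal sketch; the reference scheme and the readings are placeholders I made up, not real collation data:

# Sketch: key every witness by a shared reference scheme (book, chapter, verse,
# word slot) instead of by its own physical layout.  All references and
# readings below are invented placeholders.
shared_index = [("John", 18, 31, i) for i in range(5)]

fragment = {("John", 18, 31, 2): "αυτοις"}               # a scrap preserving one word
codex = {ref: w for ref, w in zip(shared_index,
                                  ["ειπεν", "ουν", "αυτοις", "ο", "πιλατος"])}

# Because both are keyed to the same index, comparison is just a lookup,
# with None marking material the fragment simply does not preserve.
comparison = {ref: (fragment.get(ref), codex.get(ref)) for ref in shared_index}
print(comparison)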
So what are our options? There are two main approaches that would probably work best (accurate but not overly complicated). One would be to use algorithms which evaluate data points sequentially using some similarity/dissimilarity measure (i.e., a distance metric). In fact, the basic, intro-stats measure of variance relies on a distance metric, but it is the most primitive, and others are far more flexible in how they can be used. For some examples, see the sketch below:
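A minimal sketch of interchangeable distance measures (the vectors are arbitrary stand-ins, not encodings of real manuscripts), plus the sense in which ordinary variance is itself built on a distance metric:

# Toy illustration of swappable distance metrics on made-up vectors.
import numpy as np
from scipy.spatial import distance

a = np.array([1, 1, 0, 1, 2, 1])
b = np.array([1, 1, 1, 1, 1, 1])

print(distance.euclidean(a, b))        # ordinary straight-line distance
print(distance.cityblock(a, b))        # Manhattan / L1 distance
print(distance.hamming(a, b))          # fraction of positions that disagree
print(distance.jaccard(a > 0, b > 0))  # set-style dissimilarity on presence/absence

# The textbook variance of a sample is itself built on squared Euclidean
# distance from the mean:
x = np.array([3.0, 5.0, 4.0, 8.0])
print(np.mean((x - x.mean()) ** 2), np.var(x))   # identical by construction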
The other main approach involves a subtype of classification and clustering techniques which I'll call component analyses (most, though not all, rely on some such component analysis). Even scraps like P52 have ~50 data points, and these are being compared to ~6,000 other manuscripts. That's a lot of data points. Something like principal component analysis reduces the data while maintaining the important information. The most relevant example is, perhaps, the screen you're looking at. Anybody who has taken pictures using even a fairly new digital camera knows that these files can be huge. Image compression works by identifying points in an image where variance is minimal (and similarity maximal) and projecting these points onto a new space as one point. A simpler example is GPAs: various grades for numerous courses are combined into a single GPA value. This is analogous to, and more intuitive than, an actual example of projections from R^n to R^m.
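A minimal PCA sketch via the SVD, using random numbers as stand-ins for manuscript data (nothing here is real); the point is only the projection from many dimensions down to a few that retain most of the variance:

# Minimal PCA via SVD on random filler data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))          # 100 "witnesses" x 50 "features"
Xc = X - X.mean(axis=0)                 # center the columns

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)         # share of variance per component
X2 = Xc @ Vt[:2].T                      # 2-D projection, like a GPA summarizing grades

print(explained[:2], X2.shape)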
Both approaches actually have a great deal in common, and a method like multidimensional scaling (MDS) often works by using dissimilarities between paired data points (something akin to the first approach) and then projecting these onto a lower dimensional space (as in the second). Because the second approach is harder to describe even loosely, I'll go with the first and in general terms.
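A sketch of that hybrid character of MDS, using scikit-learn on a random, made-up dissimilarity matrix; the only point is that it starts from pairwise dissimilarities and ends with a low-dimensional embedding:

# MDS from a precomputed dissimilarity matrix (random filler, not real data).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
D = rng.random((8, 8))
D = (D + D.T) / 2            # symmetrize
np.fill_diagonal(D, 0.0)     # zero self-dissimilarity

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(D)
print(coords.shape)          # (8, 2): each "manuscript" as a point in the plane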
The advantage of using a more sequential method is that we can pick manuscripts at random. Although in actual practice I wouldn't do it this way, for simplicity imagine each manuscript as a vector in n-dimensional space (n is the number of words in the Greek NT, which is one reason I wouldn't actually do it this way). Let's imagine our first trial (manuscript) is P52. Each word of John in P52 exists as a value of 1 in the appropriate slot (an element/entry in the P52 manuscript vector, using the Greek NT as an index). All other values are NaN (not a number; it really doesn't matter for our purposes). Now we select another manuscript at random, represent it as a vector, and then add it to the P52 manuscript vector. Most of the values in the combined vector will be 1 (or NaN), but if there are omissions, variants, or additions we increase the value for that entry in the combined vector. Although (again) this isn't how I'd actually do it, for simplicity the new value is a simple addition of 1 (on top of the 1 already in the previous vector) for any omission or variant. Additions, however, add .5 to the entry before and after the added one. So even though we only have a representation of two manuscripts, a single element in our NT vector can have a value indicating more than 2 variants.
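Here is a literal toy rendering of that bookkeeping (again, not how I'd actually do it): a tiny made-up "NT index" of ten slots, NaN for missing material, +1 for a variant or omission, and +0.5 spread to the neighbouring slots for an addition. The fragment, slots, and variant positions are all invented.

# Toy version of the combination scheme described above.
import numpy as np

N = 10                                   # pretend the Greek NT had 10 word slots

def ms_vector(attested_slots):
    v = np.full(N, np.nan)
    v[list(attested_slots)] = 1.0        # 1 where the witness preserves a word
    return v

def fold_in(combined, attested_slots, variants=(), additions=()):
    """Add one more witness: +1 per variant/omission, +0.5 around an addition."""
    for s in attested_slots:
        combined[s] = 1.0 if np.isnan(combined[s]) else combined[s]
    for s in variants:
        combined[s] += 1.0
    for s in additions:                  # an added word bumps its two neighbours
        if s - 1 >= 0: combined[s - 1] += 0.5
        if s + 1 < N:  combined[s + 1] += 0.5
    return combined

combined = ms_vector({2, 3, 4})          # start from a fragment, e.g. "P52"
combined = fold_in(combined, {0, 1, 2, 3, 4, 5}, variants=[3], additions=[4])
print(combined)
print(np.linalg.norm(combined[~np.isnan(combined)]))   # vector length as a crude variance measure

The final line anticipates the next paragraph: the length of the combined vector is already a rough summary of how much variation has accumulated.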
We continue combining manuscript vectors to make an NT vector, but we keep the original manuscript vectors too (the NT vector is only one measure of variance, as it sums variants sequentially rather than comparing each manuscript's variants to every other's; we'll want the originals). Once we have gone through all the manuscripts, we have a total count of variations relative to the number of points (and as vector length is by itself a measure of variance, we already have something useful). However, what we want is an indication of variance that also tells us something about the reliability of our textual attestation relative to the differences.
To start, we make a set of variant vectors from the entries of our NT vector (I'm avoiding matrix algebra terms). These can be represented in a multidimensional space along with the entire NT vector as well as the set of manuscript vectors. To get a visual idea, even bad graphics can help.
In reality, the vectors would differ in size and probably share an origin point.
We want to know how we can understand the variations among manuscripts in useful ways. One class of methods was developed ~80 years ago but couldn't be used until recently: permutation methods. To understand why, consider dealing with the number of permutations of 52 cards before we had good computers: 80658175170943878571660636856403766975289505440883277824000000000000
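That figure is just 52!, and a permutation test is the same combinatorial idea put to statistical work: recompute a statistic under many random shufflings and see where the observed value falls. The two small groups below are made-up numbers, purely for illustration:

# 52! and a minimal permutation test on invented data.
import math, random

print(math.factorial(52))                # the 68-digit number quoted above

a = [2.9, 3.1, 3.4, 2.8, 3.0]            # made-up group measurements
b = [3.6, 3.9, 3.5, 3.8, 3.7]
observed = abs(sum(a)/len(a) - sum(b)/len(b))

pooled, count, trials = a + b, 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    left, right = pooled[:len(a)], pooled[len(a):]
    if abs(sum(left)/len(left) - sum(right)/len(right)) >= observed:
        count += 1
print(count / trials)                    # approximate permutation p-value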
To compare, e.g., the manuscript vectors to the variant vectors & the NT vector we can use multivariate randomized block procedures (MRBP). These can not only yield a statistic for comparing our vector sets (within and between sets), but also give us far more accurate measures of dispersal (variance) for the sets in their entirety. They can also give us a wonderful estimate of manuscripts we don't even have, via a mathematically sound form of double-dipping that creates additional observations from those we have. This is also where appropriate uses of distance metrics allow us to compare average dissimilarity between manuscript vectors, quantify the variant vectors, and measure the total degree of similarity vs. dissimilarity for our NT vector.
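MRBP proper involves more machinery than I can sketch here, but the underlying move, comparing an observed within-set average distance to what random regrouping of the same vectors would produce, can be shown with a crude stand-in (random filler data, not the published procedure):

# Crude permutation-based within-group comparison (a stand-in, not MRBP itself).
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 6))             # 12 "vectors", 6 dimensions
labels = np.array([0]*6 + [1]*6)         # two pretend sets
D = squareform(pdist(X))

def mean_within(D, labels):
    out = []
    for g in np.unique(labels):
        idx = np.where(labels == g)[0]
        sub = D[np.ix_(idx, idx)]
        out.append(sub[np.triu_indices(len(idx), 1)].mean())
    return np.mean(out)

observed = mean_within(D, labels)
null = [mean_within(D, rng.permutation(labels)) for _ in range(2000)]
print(observed, np.mean(np.array(null) <= observed))  # small value => groups tighter than chance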
As there's no point in more inadequate descriptions of possible methods, let's recall that the main point is to consider the ways our vector sets can be compared. Each manuscript vector has entries that can be compared to every other manuscript's, as well as to the sum total of variance for that entry in the NT vector. Also, clusters of high variability allow us to easily spot areas in which, e.g., a line has high variability among all manuscripts, indicating that any base we might construct to compare variants against would be unreliable. But clusters aren't necessarily bad: variance concentrated at certain points, rather than approaching a random dispersal, means that most manuscripts attest to the same reading at most points.
Most importantly, statistical learning/pattern recognition/etc. methods that iterate pairwise comparisons (or multilevel comparisons) using distance metrics allow us to compare, e.g., every manuscript vector to every variant vector, and that can be used to create a better NT vector 2.0: a representation of the ways in which variations are or aren't cancelled out, or at least dampened, by textual attestation.
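The pairwise machinery itself is not exotic; with toy 0/1/2 codes standing in for readings (invented, not real data), a full witness-by-witness distance matrix and a per-slot disagreement score look like this:

# Pairwise distances between toy "witness" rows, plus per-slot disagreement.
import numpy as np
from scipy.spatial.distance import pdist, squareform

readings = np.array([[0, 1, 0, 2, 0],
                     [0, 1, 0, 0, 0],
                     [0, 2, 0, 1, 0],
                     [0, 1, 0, 2, 0]])

pairwise = squareform(pdist(readings, metric="hamming"))
per_slot = np.array([len(np.unique(col)) - 1 for col in readings.T])

print(pairwise)        # how far each witness sits from every other
print(per_slot)        # slots 1 and 3 carry all the disagreement here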
Do you actually understand what you have posted?
Years ago I went to a neighbor of mine who is a professor of mathematics at Brown. He couldn't even recall one of the topics I broached. This was not because his knowledge of mathematics lacked in any way, but because mathematics is vastly more diverse than it was a century ago. In fact, the current approach to the foundations of most mathematics (calculus) is taught in a completely outdated way while the original impetus and intuitive concept was sufficiently rendered rigorous decades ago.
I have a math degree and can't get it all.
I am not disagreeing with it but there are few people around that could have understood just the vector mechanics you mentioned.
If a few million people alive today flipped a coin over and over again for the rest of their lives, and the results were finally tabulated, the probability of the result would be unbelievably, incredibly, astronomically impossible. Yet such an outcome would be guaranteed.
To add something to your permutations concerning card decks: if everyone on Earth counted one combination for a million years we would still have less than 1 billionth of a percent counted.
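A quick back-of-envelope check of the card-deck remark, assuming (my assumption; the post gives no rate) one arrangement counted per person per second and roughly eight billion people:

# Back-of-envelope check of the card-deck claim under an assumed counting rate.
import math

people = 8e9
seconds_per_year = 3.156e7
counted = people * 1e6 * seconds_per_year      # a million years of counting
fraction = counted / math.factorial(52)
print(fraction)        # ~3e-45, vastly below a billionth of a percent (1e-11)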
far greater than the chance life would form on its own.
Reliable or not?
I am familiar with many of the numbers concerning probability and have seen most of the equations. I do not understand what you're proving. I saw no conclusions.
Yes. If you have any doubts, questions, or issues with what I stated I would be more than happy to clear them up.
In my case I am ignorant. I got a math degree because I wanted out of engineering. I had already taken so many math classes I almost had the degree anyway, so I finished it up and forgot most of it. I have been working in military aviation since then.
Years ago I went to a neighbor of mine who is a professor of mathematics at Brown. He couldn't even recall one of the topics I broached. This was not because his knowledge of mathematics lacked in any way, but because mathematics is vastly more diverse than it was a century ago. In fact, the current approach to the foundations of most mathematics (calculus) is taught in a completely outdated way while the original impetus and intuitive concept was sufficiently rendered rigorous decades ago.
I meant the mechanics of vector analysis. I actually took a class in it alone. I actually liked it. It was far more intuitive than partial DE or discrete.
Vectors are related to mechanics only insofar as they are used to represent things like displacement. "Vector mechanics" is like "quam numbers". It's worthless without context.
I do not get it. It would be close to 50% and almost exactly what was predicted. Also, Biblical probabilities are multiplicative. They have many improbabilities that came true in succession. In a claim to complete truth (at least beyond scribal error, which is at worst 5%) they must be multiplicative. They are not the chances a guy wins a lottery. They are the chances the same guy wins it a thousand times. You go from sharpshooter fallacy to astronomical naturalistic absurdity real quick.
If a few million people alive today flipped a coin over and over again for the rest of their lives, and the results were finally tabulated, the probability of the result would be unbelievably, incredibly, astronomically impossible. Yet such an outcome would be guaranteed.
Simply the expansion rate that would permit any life is 1 in billions of trillions. I can look up the exact number Hawking gave if you want, but I meant only a ballpark. Plus, once the other equally impossible naturalistic chances are MULTIPLIED, as are needed for life to arise on its own, you go from absurdity to insanity at the starting gate.
The only possible way to know this is to know the probability space. If you know it, please share.
Until you are capable of determining even Ehrman's views (let alone the state of scholarship), whatever your opinions, they are not reliable; yet. We are all of us ignorant of many more things than those we are not.
I am familiar with many of the numbers concerning probability and have seen most of the equations. I do not understand what you're proving. I saw no conclusions.
However, note this: the number of errors in a tradition increases consistently with the number of copies. If the Bible (like the Quran) had been burned and only one copy left as a source, then no errors would exist, nor any reliability. So the more copies, the more reliable, and the more errors. High numbers of errors are simply inherent in the enormous (more than any other work in ancient history) volume of a tradition.
cannot be true (not in the sense that it isn't true as nobody knows the number of errors, but that it isn't a viable model). It assumes we can determine "error" when all we can actually determine is whether variants exist and how. There is no hard and fast method for arguing whether a particular variant is more likely to trace back to the autograph.
Number of errors in the entire textual tradition, the number of manuscripts in existence, the number of average words in each manuscript.
Sounds interesting. Not something I'm overly familiar with (apart from the International Symposium's UAV proceedings, a few monographs or volumes from e.g., Springer Tracts in Advanced Robotics, etc.; and not only was I more focused on the computational aspects and the HCI issues, I'm sure a lot of the more interesting material is classified).
I have been working in military aviation since then.
Mechanics? Vector analysis I know. Are you using mechanics in some colloquial sense (e.g., akin to "methods to solve")? Or do you mean the application of vector analysis to mechanics? Sorry, it's late.
I meant the mechanics of vector analysis.
A central operator in vector analysis is the Del or Nabla operator; in R^3 it is defined in terms of partial derivatives and extends to R^n the same way. I'm not sure how you are defining vector analysis.
It was far more intuitive than partial DE or discrete.
That would be if we were treating the outcomes in terms of a ratio of frequencies of heads vs. tails. That's not the outcome; that's a function of the outcome. The outcome is a set of sequences. Flip a coin a million times and you always get a result whose probability was 1 in 2^1,000,000. A few million people flipping a coin millions and millions of times each and you get a set of sequences, each with probability 1 in 2^n for however many flips n they made. Altogether, the probability that you'd get each particular sequence is astronomically tiny.
I do not get it. It would be close to 50%
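The distinction drawn in the reply above can be put in a few lines of Python: any one exact sequence of fair flips is fantastically improbable, while "close to half heads" is collectively near-certain (the numbers are illustrative only):

# Probability of one exact flip sequence vs. probability of "roughly half heads".
from math import comb, log10

n = 1_000_000
print(f"any one exact sequence has probability about 1 in 10^{n * log10(2):.0f}")

n_small = 1000                                            # smaller n so comb() stays cheap
near_half = sum(comb(n_small, k) for k in range(490, 511)) / 2 ** n_small
print(near_half)                                          # ~0.49: within 1% of half heads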
They aren't. At least not in any way that matters.
Also Biblical probabilities are multiplicative.
There's that 5% again, only nobody knows the number of errors, nor is there any clear way to define variation or errors.
In a claim to complete truth (at least beyond scribal error, which is at worst 5%) they must be multiplicative.
That's true. Those chances are static, easily derived, etc. Defining variance among manuscripts and what constitutes error is nowhere near as easy.
They are not the chances a guy wins a lottery.
You go from sharpshooter fallacy to astronomical naturalistic absurdity real quick.
This is not the probability space.
Simply the expansion rate that would permit any life is 1 in billions of trillions.
I can look up the exact number Hawking gave if you want
Plus, once the other equally impossible naturalistic chances are MULTIPLIED, as are needed for life to arise on its own, you go from absurdity to insanity at the starting gate.
Ehrman has repeatedly published that nobody knows what the numbers are. In his Misquoting Jesus he gives more of a range than in his academic work, but he still specifically states that the number of variations is unknown.
I know what Ehrman's numbers are.
I use Ehrman's numbers
I assume you meant the bolded part in the next statement I had made. If so then something has gone horribly wrong here. The fact that total mistakes increase given the copies that exist is simply reality. This is not a hypothetical, a mathematical prediction, or a deductive argument. It is a brute fact. I must have misunderstood.
The entirety was devoted to the ways in which this (particularly the bolded part) is not true:
We are not discussing a hypothetical and using mathematical models to predict things about it. We are looking at actual manuscripts.
Apart from the fact that a single manuscript can show us errors in a manuscript tradition we don't have, the real issue is "consistent". This posits a linear relationship (or at least approximately so), and there are more ways in which this can fail to hold true than can hold true.
I think you must be speaking of error as it would apply to differences between reality and what is said about it, but that is not the context Ehrman and Wright are speaking in. They are talking about textual accuracy, not historical accuracy. They are two very different issues. Textual accuracy is very easy to compute given a tradition as astronomical as the Bible's. The best we can do historically is probability, best fit, most comprehensive, etc.
This:
cannot be true (not in the sense that it isn't true as nobody knows the number of errors, but that it isn't a viable model). It assumes we can determine "error" when all we can actually determine is whether variants exist and how. There is no hard and fast method for arguing whether a particular variant is more likely to trace back to the autograph.
UAVs are the future. If I had anything to invest it would all be in unmanned aircraft. Many think we may have already built the last airframe that will require a pilot. Aviation is fascinating, but I have lately been stuck integrating automated test equipment into a machine built in the 80s to service the venerable F-15. It is nothing short of exasperating. Actually, classified stuff is boring. The classified aspects of equipment are usually data sets or code, and I hate both.
Sounds interesting. Not something I'm overly familiar with (apart from the International Symposium's UAV proceedings, a few monographs or volumes from e.g., Springer Tracts in Advanced Robotics, etc.; and not only was I more focused on the computational aspects and the HCI issues, I'm sure a lot of the more interesting material is classified).
I mean it in a common language use way. Mechanics in engineering and other subjects is simply meant to indicate mode of operation or functional methodology. It is almost a slang term.
Mechanics? Vector analysis I know. Are you using mechanics in some colloquial sense (e.g., akin to "methods to solve")? Or do you mean the application of vector analysis to mechanics? Sorry, it's late.
My experience was in the use of integration and derivation of trigonometric identities associated with graphs produced by functions or plotted by data. Whatever you are referring to sounds extremely complex and not something I would look forward to.
A central operator in vector analysis is the Del or Nabla operator; in R^3 it is defined in terms of partial derivatives and extends to R^n the same way. I'm not sure how you are defining vector analysis.
I do not see the significance but will address it anyway, because it is a sharpshooter fallacy that does not apply to biblical probability anyway. You have a 100% chance of getting some result. Whatever result you got was trivial and did not defy probability. Biblical predictions and those concerning it are far different. For example, you had no predictable goal to compare results to. In the case of life-permitting universes I do. Only a vanishingly tiny band of circumstances will allow a life-permitting universe. First I must have a universe at all; there is 0 probability that nothing will produce anything. Then I need an extremely specific universe, given 1 in trillions of billions (expansion rates, nuclear forces, gravity, etc. must all be extremely fine-tuned). Then I need a whole range of specific initial conditions that will support human life in a universe that is hostile to it. These are all a priori needs. My worldview does not allow for whatever happened to occur. My claims are not satisfied by just anyone winning the lottery. I must have the same man win the lottery time after time after time. No natural explanations exist if that occurred. In other realms like prophecy I need a whole string of improbable things to occur that were predicted beforehand (which is what your example does not do). Ezekiel's Tyre prophecy, for example, required a single man to attack it; he could only do a specific amount of damage; he could not have discovered any of the riches known to be in Tyre; he had to then leave and go to Egypt for the specific purpose of gaining wealth to pay his soldiers. I then needed another force to arrive (this one unnamed) to destroy the island fortress and produce an uncommon level of destruction. Then I needed the Phoenicians to give up completely ever rebuilding the city. All these occurred. Claiming afterwards that any series of events is improbable is a meaningless claim. Having a list of a priori necessities that were improbable but still occurred is a whole different matter.
That would be if we were treating the outcomes in terms of a ratio of frequencies of heads vs. tails. That's not the outcome; that's a function of the outcome. The outcome is a set of sequences. Flip a coin a million times and you always get a result whose probability was 1 in 2^1,000,000. A few million people flipping a coin millions and millions of times each and you get a set of sequences, each with probability 1 in 2^n for however many flips n they made. Altogether, the probability that you'd get each particular sequence is astronomically tiny.
The one above certainly is, and it only has maybe a dozen necessities. The 350 predictions concerning Jesus have hundreds made beforehand that were improbable but occurred.
They aren't. At least not in any way that matters.
There are easy ways to determine textual errors. Those scholars most capable of knowing how to go about it (Ehrman, White, Wright, etc.) all get extremely similar numbers and claim they are extremely certain. In a weird irony, the more copies you have, the more of both errors and certainty you have. I have seen comparisons for many errors. One I remember was.
There's that 5% again, only nobody knows the number of errors, nor is there any clear way to define variation or errors.
I think you're talking about historical error and that is not part of a textual accuracy debate. They are very distinct issues.
That's true. Those chances are static, easily derived, etc. Defining variance among manuscripts and what constitutes error is nowhere near as easy.
Textually speaking, almost every textual scholar on either side would disagree with you. Historically speaking, there would be much more inconsistency among scholars.
No, as I haven't given any probabilities. I have simply shown that
1) Your statement about "consistent" error increase with the increase of manuscripts is clearly and completely wrong.
I did not understand this.
2) That variance and errors can be defined in a number of ways, linear or multiplicative being one of the poorest.
No it is not; that was only the possibilities concerning expansion rates alone. Rates which seem to be independent of initial conditions and natural law, I might add. That is only one of thousands of improbable things that must occur to get life to arise on its own. Another would be the chance of getting a universe from nothing; that one has 0 probability.
This is not the probability space.
I have seen many of these computations and they stretch over a huge range. The problem with the more probable ones is they only concern some of the factors necessary for life. For example, they never compute the probability of getting a universe from nothing. They usually start at a point after a huge number of improbable things are said to have taken place on their own, and then evaluate a tiny microcosm of what life must have done on its own. For example, they may tackle the left-handed protein issue, the DNA/RNA issue, and a couple more, but even these astronomical numbers come after a long, almost inexhaustible string of improbabilities was required. I think all together the probability that life arose on its own is equivalent to zero, but I have never seen a comprehensive probability less than 1 in 10^50.
Don't bother. Google the Drake equation or better yet see Bayesian analysis of the astrobiological implications of life's early emergence on Earth
&
The Habitability of Our Earth and Other Earths: Astrophysical, Geochemical, Geophysical, and Biological Limits on Planet Habitability
I can provide more if need be, but that should be sufficient for a baseline or foundation for discourse.
Forgetting for a minute getting everything from nothing, which is a logical absurdity and also a necessity, the expansion rate alone, which is necessary for a structured universe, is a probabilistic absurdity.
I'm not sure how any of this makes any sense. Could you rephrase?
Of course an exact number is unknown. However, a useful ballpark is easily derived. You can even buy software that will take all major Bible versions, find every single difference between them all, and total them.
Ehrman has repeatedly published that nobody knows what the numbers are. In his Misquoting Jesus he gives more of a range than in his academic work, but he still specifically states that the number of variations is unknown.
Ehrman always claims between 300,000 and 400,000. I use his 400,000 number just to limit contention.
You don't.
"It has been estimated that no two manuscripts of the New Testament are identical in all respects... Many of our text-critical decisions concern issues of fundamental importance for the interpretation and meaning of the text, and they often impinge on basic issues for Christian doctrine. The wording of the Lord's Prayer in Matthew's Gospel differs within the manuscript tradition; Jesus' words instituting the Last Supper in Luke's Gospel are not firmly established; the well-known story of the Woman taken in Adultery, normally printed within John's Gospel, is absent from some manuscript witnesses. The ending to Mark's Gospel is disputed; manuscripts deemed important omit the last twelve verses. The verses in Luke 22 about Jesus' bloody sweat in Gethsemane are not in all our manuscripts. The Parable of the Two Boys in Matthew 21 circulated in three diametrically opposed forms. We can trace these variants to the second century. At Hebrews 2:9 did the author write that Jesus died 'without God' or 'by the grace of God'? The answer depends on which manuscript one is reading. Likewise did Paul confidently tell the readers at Romans 5:1 that 'we have peace' or was he exhorting them with the words 'let us have peace'? The Greek varies in the manuscript tradition. At 1 Cor 15:51 did Paul write that at the end time 'We shall all die but we shall not all be changed' or 'We shall not all die but we shall all be changed'?"
easy ways to determine textual errors.
Neither White nor Wright is a textual critic. What scholars?
Those scholars most capable of knowing how to go about it: Ehrman, White, Wright, etc...
The reasons for how many ways it can fail or hold true are a simple combinatorics problem, and the evidence for how it does can be seen in real scholarship.
the more of both errors and certainty you have.
I'm not. I have several versions of the Greek NT and various critical editions of individual letters & gospels. Each of the latter includes a critical apparatus (and textual critical notes), while for the UBS' I have an entire textual critical companion to accompany the critical apparatus in the UBS' Greek NT.
I think you're talking about historical error
I devoted an entire post to it. I would ask (for the sake of simplicity and economy of effort) that you refer to that post (here) before I attempt an explanation.
I did not understand this.
There is no possible way that what you said could even in theory describe a probability space. First, you stated "Simply the expansion rate that would permit any life is 1 in billions of trillions". As there are infinitely many possible rates, the probability would be the same as with any single value given a continuous interval. For every single continuous pdf, the probability of any particular value is always 0. In fact, even for some probability distributions which only approximate continuity, the probability of any particular outcome is 0.
that was only the possibilities concerning expansion rates
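The fact being invoked here is standard and easy to see numerically; the normal distribution below is just an arbitrary continuous example (not a model of expansion rates):

# For a continuous distribution, a single exact value has probability zero,
# because it is an integral over an interval of width zero.
from scipy.stats import norm

a = 1.234
print(norm.cdf(a) - norm.cdf(a))                 # P(X == a): exactly 0.0
print(norm.cdf(a + 0.01) - norm.cdf(a - 0.01))   # only intervals get positive probability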
One of them was a Bayesian analysis.
they stretch over a huge range.
The problem with all of them is that we don't know these.
The problem with the more probable ones is they only concern some of the factors necessary for life.
That could be because in modern cosmology, theoretical physics, quantum physics, astrophysics, etc., the idea of "from nothing" has no meaning and is a relic from an Aristotelian view of causality that was understandably adopted/adapted by e.g., Anselm and Aquinas, but which has no place after the development of QM and the systems sciences (and the motivating factors for the latter).
For example they never compute the probability of getting a universe from nothing.
Among other misrepresentations present in your source, found in Hawking's book after "to make possible the development of life" is the following:
Why did the universe start...
...and we're back at the coin toss sequence issue.
Add in or multiply in nuclear forces
software that will take all major Bible versions
Estimates. And as this estimation doesn't actually give you the number of variations, it is useless without understanding at least the basics of textual criticism.
Ehrman always claims
Let's see what Ehrman's conclusion is from all of these numbers.
see Simon Greenleaf or Lord Lyndhurst.
Of course there is deity, only one god; people who like to follow their desires remove god.
This lines up exactly with what I claimed and what I would expect. The bible has errors. It has such a rich tradition that it allows almost all errors to be identified. This allowed you to post some of the well-known examples. I see no conflict between what you stated above and my claims. In fact, in hundreds of hours watching debates on the accuracy of the bible, I have never heard a single claim of error of which the Biblical scholar was not aware and for which he did not have the detailed history. If we know all the errors, or virtually all of them, there exists no problem. Every modern bible footnotes these exact errors you mention, and even Ehrman admits there are none in essential doctrine. So even admitting the errors, I see no real problem.
"It has been estimated that no two manuscripts of the New Testament are identical in all respects. The verses in Luke 22 about Jesus’ bloody sweat in Gethsemane are not in all our manuscripts. The Parable of the Two Boys in Matthew 21 circulated in three diametrically opposed forms. We can trace these variants to the second century. At Hebrews 2:9 did the author write that Jesus died ‘without God’ or ‘by the grace of God’? The answer depends on which manuscript one is reading. Likewise did Paul confidently tell the readers at Romans 5:1 that ‘we have peace’ or was he exhorting them with the words ‘let us have peace’? The Greek varies in the manuscript tradition. At 1 Cor 15:51 did Paul write that at the end time ‘We shall all die but we shall not all be changed’ or ‘We shall not all die but we shall all be changed’?"
Elliott, J. K. (Ed.). (2010). New Testament Textual Criticism: The Application of Thoroughgoing Principles: Essays on Manuscripts and Textual Variation (Vol. 137 of Supplements of Novum Testamentum). Brill.
I can pretty much agree with this, but it does not seem to add anything all that meaningful to my claims. I can literally take out all of the well-known variants that make any significant impact (or even have the theoretical possibility of doing so) from any one Bible and have vastly more left than is necessary to justify Christian faith in its essential doctrines. Even included, these uncertainties do not render faith unreliable.
"The clarification, definition, and delimitation of the term "textual variant" are vastly more complex and difficult matters than at first would appear. The common or surface assumption is that any textual reading that differs in any way from any other reading in the same unit of text is a "textual variant," but this simplistic definition will not suffice... Here the complexities multiply."
Epp, E. J., & Fee, G. D. (1993). Studies in the theory and method of New Testament textual criticism (Vol. 45 of Studies and Documents). Wm. B. Eerdmans Publishing
"First, the number of variants at any one of the places discussed is due to many factors, such as
the length of the variant, since a longer one will attract more subvariants in the form of spelling variations and accidents of different kinds (although the longest variants in Luke and Galatians are only a word different in length)
the difficulty of the passage, and its significance
the number of forms already in existence, leading to confused recollections in the scribal mind; that is to say, where confusion starts it will multiply
Secondly, there is a procedural question as to whether one should be looking for the number of variants and ignoring subvariants, since many of the latter seem rather trivial alterations.
Thirdly, while the number of readings known to us is derived from extant copies, the degree of variety between those copies must be at least partly due to the degree of variation between the copies which are lost.
Fourthly, I have to ask whether the data I have used is fit for this purpose...Unfortunately, the only way in which we could find that out would be by a full collation of the entire text in every manuscript, and it is the impracticability of that which provided us with Text und Textwert in the first place.
Fifthly, it is not at all clear how different texts, different copying traditions and different variations can be compared."
That was minimalistic conclusion. I think all the Gospels plus much of Paul's writings either include earlier traditions (and I mean early), were written with them in mind, or compiled and added to by the apostles. This increases their validity and is a net positive. Many of the sources commonly sited for Paul are within a few years of Christ's death. Granting they were used or accounted for by Paul only adds to confidence.Parker, D. C. (2008). An introduction to the New Testament manuscripts and their texts. Cambridge University Press.
"It is generally agreed that the fourth gospel is based to a significant extent on some form of predecessor, an older document...There is deep uncertainty however concerning the identity of that predecessor."
Brodie, T. L. (1993). The Quest for the Origin of John's Gospel: A Source-Oriented Approach. Oxford University Press.
This is something that does not really affect faith; it is only a procedural issue concerning textual criticism. If this were a secular class on textual criticism it would be meaningful, but this is a discussion concerning the justification for faith, and it isn't.
"the nature of New Testament textual transmission virtually precludes any ultimate identification of ‘earliest attainable’ with ‘the original'"
Epp, E. J. (2007) “It’s All about Variants: A Variant-Conscious Approach to New Testament Textual Criticism,” HTR 100: 275-308
These amplifications of uncertainty get washed out by other factors. These claims were made during the lifetime of eyewitnesses, agreed to by all the apostles, and no surviving record exists of an "I was there and this did not happen" claim at all, which you would expect to have if untrue. There was no institution at the time capable of suppressing counterclaims. If I only have Paul's claims, I have more than enough justification for faith; adding in the possibility he may have made them with the knowledge that prior claims existed only adds to confidence. I do not need a name attached to them. Signing a name to a document adds nothing to reliability unless the details of that person's life are known and known well.
"If 1 Cor 15:29-34 is in fact a non-Pauline interpolation, as I have argued, questions immediately arise as to when the interpolation was made, by whom, why, and why at this particular place in the Pauline correspondence? The short answer to each of these questions, of course, is that we simply do not know."
Walker, W. O. (2007). 1 Corinthians 15:29-34 as a Non-Pauline Interpolation. CBQ, 69(1), 84-103.
Checking a box saying "textual critic" does not make anyone less or more capable than any other. This is an elitist and arbitrary criterion. For example, I think White has personally held more extant biblical manuscripts than anyone, and Wright is a textual legend referenced constantly as a respected source by his opposition as much as any scholar. In fact I know of no scholar more quoted on textual issues.
Neither White nor Wright is a textual critic. What scholars?
I do not have time to backtrack manually at this time. I cannot remember the context here.
The reasons for how many ways it can fail or hold true are a simple combinatorics problem, and the evidence for how it does can be seen in real scholarship.
I'm not. I have several versions of the Greek NT and various critical editions of individual letters & gospels. Each of the latter includes a critical apparatus (and textual critical notes), while for the UBS' I have an entire textual critical companion to accompany the critical apparatus in the UBS' Greek NT.
To be clear, when I say "critical editions to individual letters & gospels" I mean volumes like
Schmid, U.B., Elliott W.J., & Parker, D.C. (2007) The New Testament in Greek IV The Gospel According to St. John: Volume Two: The Majuscules (vol. 37 of New Testament Tools, Studies and Documents)
I tried to quote more to give additional context, but had to cut out a lot, which no doubt caused confusion (my apologies).
I see no conflict between what you stated above and my claims.
He devoted one of his major works to the ways in which doctrinal controversies "impacted the surviving literature on virtually every level."
even Ehrman admits there are none in essential doctrine
Here's his CV. Among his countless publications, degrees, awards, positions, etc., can you point to a single one on textual criticism?
Wright is a textual legend
That might be the problem. First, these debates are not intended to settle anything other than the opinions of the non-specialist audiences. Scholars debate via journals, conferences, monographs, etc. (there's a reason that academic conferences involve "talks", not debates). Second, it means that you are being denied access to the actual scholarly debates, such as those in the sources I cited or even the link above. As you can see, however, these sources expect you to know Greek (and often Syriac, Latin, and Coptic, as early witnesses are not all in Greek), to be familiar with textual-critical notations, and in general they require a level of background on the subject that few outside specialists or those studying to be so possess. Third, such debates artificially dichotomize issues that are far more nuanced. Fourth, they often don't even include actual specialists. Finally, the sample of scholars that one is exposed to by watching such debates is heavily skewed (not only are most positions not represented, but the leaders in the various fields these debates concern are almost never present).
In fact in hundreds of hours watching debates on the accuracy of the bible
I have never heard a single claim of error
None of them do.
Every modern bible footnotes these exact errors you mention
Virtually all NT scholars believe that there was not only more than one author of John, but that the authors continually added to or changed the Johannine texts. The problem is that while our evidence for this is great, it suggests there wasn't any original text and provides us with only crude generalities as to the nature of its composition.
That was a minimalistic conclusion.
That is one of the leading experts in the world on Pauline textual criticism referring to a current textual issue. What was "washed out"?
These amplifications of uncertainty get washed out
It isn't arbitrary. My only degree in languages is in ancient Greek & Latin, but I know how to read several more and have a background in linguistics. However, reading actual Greek manuscripts ranges from "impossible" to "difficult" for me. You're talking about going so far beyond reading such manuscripts that it requires extensive training (in fact, textual critics rely on paleographers for the real nuances of the manuscripts, like dating).
This is an elitist and arbitrary criterion.
In fact I know of no scholar more quoted on textual issues.
Here's Ehrman in one of his academic works:
Ehrman's worst case scenario numbers like 400,000
I would have a very narrow range of between 93% and 99.5% textual accuracy