Warning, lengthy post ahead!
I haven't watched that video yet, but I'll be able to do so tonight. Once I've seen it, I'll try to address the issues it raises more specifically. One thing I can address from the preview: "Fossils date the rocks and rocks date the fossils" is only circular reasoning if you take it as a stand-alone statement without outside context. The statement actually means "if you date a fossil, then you know the age of the rock it was contained in" and "if you date the rock, you know the age of the fossil contained within it". The independent anchor that breaks the circle is radiometric dating: index fossils were originally tied to absolute ages by dating the igneous layers above and below the strata that contain them.
I'll see what I can address about common creationist concerns regarding radiometric dating:
(1) We don't know what the original isotope content of a rock was when it first formed.
Scientists actually do have ways of knowing what the original isotopic ratios were in a rock when it first formed. There are two things that can tell us the answer: (1) basic chemical and physical laws, and (2) isochron plots.
Chemical and Physical Laws -
When a rock first forms, it condenses from a molten state into a solid state. Let's take lava flows from a volcano as an example. Lava contains a variety of chemicals, including both radioactive and stable elements. Potassium-40, a radioactive form of potassium, decays into either calcium-40 (about 89% of the time) or argon-40 (about 11% of the time). Argon is a noble gas which almost never reacts with other elements (this is only known to occur under special laboratory conditions). When potassium decays into argon while the lava is still liquid, the gaseous argon bubbles out of the lava and escapes into the atmosphere. As long as the lava is molten, argon cannot accumulate in it. This is how we know that volcanic rocks contained no argon-40 when they first formed, and it has been confirmed by tests on recently solidified lava flows: no argon is present in such fresh deposits.
Once lava cools into solid rock, however, the situation changes. Decaying potassium still produces argon, but it is trapped within the rock and accumulates over time. Since potassium-40 decays into argon-40 at a constant rate (it is not affected by humidity, temperature, pressure, etc.), one can use the ratio of argon-40 to potassium-40 in a rock as a measure of how long argon-40 has been accumulating in the rock.
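The accumulation described above is exactly what the standard K-Ar age equation inverts. Here is a minimal sketch in Python; the decay constant and branching fraction are the commonly cited values, but the function name and the synthetic example are mine, not from any particular lab's procedure:

```python
import math

LAMBDA_TOTAL = 5.543e-10  # total K-40 decay constant, per year (commonly cited value)
BRANCH_TO_AR = 0.1048     # fraction of K-40 decays that yield Ar-40 (rest yield Ca-40)

def k_ar_age(ar40_per_k40):
    """Age in years from a measured Ar-40/K-40 ratio, assuming zero argon
    at solidification and no argon gain or loss since (a closed system)."""
    # Ar/K = BRANCH * (e^(lambda*t) - 1), solved for t:
    return math.log(1.0 + ar40_per_k40 / BRANCH_TO_AR) / LAMBDA_TOTAL

# Synthetic rock: compute the ratio a 1-billion-year-old sample would show,
# then invert it with the age equation.
sample_ratio = BRANCH_TO_AR * (math.exp(LAMBDA_TOTAL * 1.0e9) - 1.0)
age = k_ar_age(sample_ratio)  # recovers 1.0e9 years
```

The key point the code makes concrete is that only the *ratio* of argon to potassium matters, not the absolute amounts, which is why the zero-argon starting condition is what makes the clock readable.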
Uranium-lead dating also has chemical laws working in its favor. When zircon crystals first form, uranium atoms can take the place of zirconium atoms in the crystal matrix and form impurities. Lead atoms, however, cannot do this and are actively excluded from zircon crystals during formation. It's like trying to mix water and oil; it doesn't work. For this reason, we know that freshly-formed zircon crystals are lead-free.
Unlike potassium-argon dating, uranium-lead dating can utilize two separate decay chains: uranium-238 decaying to lead-206 and uranium-235 decaying to lead-207. Since the half-life of uranium-238 (4.47 billion years) is very different from that of uranium-235 (704 million years), measuring both isotopic ratios in a single sample provides an internal cross-check: if the U-238 and U-235 methods agree on the age of the rock, one can be confident that the age is correct. If the techniques were faulty, the ages given by the two chains would be expected to disagree.
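This cross-check is easy to sketch numerically. The snippet below uses the standard pairings and half-lives (U-238 → Pb-206 at 4.468 Gyr; U-235 → Pb-207 at 704 Myr); the function and variable names are my own illustration:

```python
import math

LAM_238 = math.log(2) / 4.468e9  # U-238 decay constant, per year
LAM_235 = math.log(2) / 7.04e8   # U-235 decay constant, per year

def age_from_ratio(daughter_per_parent, lam):
    """Closed-system age from a radiogenic-daughter/parent ratio."""
    return math.log(1.0 + daughter_per_parent) / lam

# Simulate one zircon with a true age of 2.0 billion years, then date it
# independently with each decay chain.
t_true = 2.0e9
pb206_per_u238 = math.exp(LAM_238 * t_true) - 1.0
pb207_per_u235 = math.exp(LAM_235 * t_true) - 1.0

age_from_238 = age_from_ratio(pb206_per_u238, LAM_238)
age_from_235 = age_from_ratio(pb207_per_u235, LAM_235)
# Both chains recover the same 2.0-Gyr age; a disagreement would flag
# contamination, lead loss, or a faulty assumption.
```

Because the two chains run at very different speeds, almost any disturbance would throw them out of agreement by different amounts, which is what makes concordant ages meaningful.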
Isochron Plots
Another way of knowing the original isotope concentrations in a rock is to make an isochron plot (here I will use the example of a uranium-lead isochron plot). First, many different samples are taken from the same source bedrock and their isotopic ratios are measured. Ideally, the samples contain several different types of minerals. Since each mineral has its own unique chemical properties, some of these minerals incorporate uranium more readily than others, and some incorporate lead more readily (this is an occasion where lead being present in the rock is acceptable).
A graph is drawn with two axes. On one axis (the Y-axis) goes the ratio of radiogenic lead to non-radiogenic lead — radiogenic lead being isotopes of lead produced by the decay of uranium (such as lead-206), and non-radiogenic lead being isotopes which are not (such as lead-204). On the other axis goes the ratio of uranium to non-radiogenic lead. The ratios measured from the samples are then plotted on the graph.
Now, if the samples are uncontaminated and have not lost lead over time due to damage to the rocks, then all of the samples will fall on a straight line on the plot. If there is contamination, or if weathering or geological events have caused lead to be lost, then the samples will not fall on a straight line. This is very important, because it reveals whether the dates can be trusted or not. Geologists therefore take pains to examine rocks carefully and exclude those that have experienced weathering or geologic disturbance. If the samples all fall on a line, they have passed the first test of trustworthiness and the next step can be taken (I'll get to that a little later).

One of the important attributes of the isochron plot is that it can be used to establish the original isotope content of the rock when it first formed. As rocks age, their positions on the graph change, with points farther from the Y-axis moving more than those close to it. In fact, a sample sitting exactly on the Y-axis would not move at all, no matter how old the samples become. The point where the line intercepts the Y-axis therefore gives the original lead isotope ratio the rocks had when they first formed:
Here is a graph showing this. Look at the red dot. It represents the original isotope ratio and does not change no matter how old the samples become over time: An Animated Isochron Diagram
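The whole procedure — plot the samples, fit a line, read the age from the slope and the initial ratio from the Y-intercept — can be sketched in a few lines of Python. Everything below is a synthetic illustration: the initial ratio, sample compositions, and true age are made-up values, and the fit is a plain least-squares line:

```python
import math

LAM = math.log(2) / 4.468e9  # U-238 decay constant, per year

# Synthetic cogenetic samples: all share the same (hypothetical) initial
# Pb-206/Pb-204 ratio R0, but formed with different uranium contents.
T_TRUE = 1.5e9
R0 = 18.0
u_now = [5.0, 12.0, 20.0, 33.0, 50.0]  # present-day U-238/Pb-204 per sample
pb_now = [R0 + u * (math.exp(LAM * T_TRUE) - 1.0) for u in u_now]

# Ordinary least-squares fit of the line pb = intercept + slope * u:
n = len(u_now)
mx, my = sum(u_now) / n, sum(pb_now) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(u_now, pb_now))
         / sum((x - mx) ** 2 for x in u_now))
intercept = my - slope * mx

# slope = e^(lambda*t) - 1 gives the age; the intercept is the red dot:
age = math.log(1.0 + slope) / LAM  # recovers T_TRUE
initial_ratio = intercept          # recovers R0
```

Note what falls out of this for free: the age comes from the slope alone, and the initial isotope ratio comes from the intercept alone — no assumption about the starting lead content had to go in, which is precisely the objection the isochron method answers.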
(2) We do not know that decay rates have always been constant.
Actually, we've got a lot of evidence that decay rates are constant. Since atomic nuclei are sealed away deep within the atom, external factors such as chemical corrosion, pressure, temperature, and humidity do not affect the rate at which they decay. In order to affect the decay rate, you have to affect the nucleus itself, and for pressure and temperature to do that you would need extraordinary circumstances capable of breaching the protective electron shell that surrounds it. These kinds of conditions may occur in the super-heated plasma of the Sun's core, but you're not going to have that happen naturally on Earth.
Now, is there anything that can affect the decay rate of nuclei? The answer is yes. Radiation, for example, can do it under the right circumstances: neutrons released during nuclear fission can induce further fission, massively increasing the rate at which nuclei break apart. An out-of-control fission chain reaction is basically what makes an atomic bomb explode. However, these events require extremely specific circumstances which almost never occur naturally on Earth. (There is the so-called "Oklo natural reactor," which is thought to have sustained natural fission in the distant past; the evidence it left behind is a depletion of U-235. If this had happened commonly on Earth, we'd expect to find U-235 depletion in many other rocks. Also, non-fissile radioactive isotopes such as potassium-40 would not be subject to this process.)
An external source of extreme cosmic radiation might conceivably speed up decay rates. However, this would leave behind testable evidence. For example, rocks on the bottom of the oceans would be much more heavily shielded from cosmic rays than those on the surface, which would make sub-oceanic rocks look significantly younger than surface rocks. Likewise, rocks deeper in the Earth would be less affected by the radiation, causing deep rocks to appear younger than shallow rocks. The opposite has been measured: in practice, deep rocks date as older than surface rocks (Are Radioactive Dates Consistent?).
Since different elements absorb neutrons at different rates, different isotopes would have their decay accelerated by different amounts. If such a radiation event had occurred in the past, then different dating methods performed on the same rock samples would consistently disagree with each other. Yet different dating methods do agree with each other: Are Radioactive Dating Methods Consistent?.
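A quick numerical sketch shows why isotope-dependent acceleration would be self-exposing. The "boost" factors below are made-up values purely for illustration; the decay constants are the standard ones:

```python
import math

LAM_K = 5.543e-10               # K-40 total decay constant, per year
LAM_U = math.log(2) / 4.468e9   # U-238 decay constant, per year
BRANCH_AR = 0.1048              # fraction of K-40 decays yielding Ar-40

def apparent_age(daughter_per_parent, lam):
    """Age inferred under the assumption of a constant decay rate."""
    return math.log(1.0 + daughter_per_parent) / lam

# A rock that is really 1.0 Gyr old, but whose decay was (hypothetically)
# accelerated by different factors for the two isotopes:
t_true, boost_k, boost_u = 1.0e9, 3.0, 1.5
ar_per_k = BRANCH_AR * (math.exp(LAM_K * boost_k * t_true) - 1.0)
pb_per_u = math.exp(LAM_U * boost_u * t_true) - 1.0

age_kar = apparent_age(ar_per_k / BRANCH_AR, LAM_K)  # reads as 3.0 Gyr
age_upb = apparent_age(pb_per_u, LAM_U)              # reads as 1.5 Gyr
# The two clocks now disagree by a factor of two. Real cross-dated
# samples agree, which rules this scenario out.
```

In other words, accelerated decay doesn't just shift all the clocks; it shifts each clock by a different amount, and the agreement we actually observe between independent methods is exactly what that scenario cannot produce.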
Radiation from space of such massive intensity would also have killed all life on Earth. It's apparent from these facts that there was no extreme radiation event in the past that accelerated decay.