Sure. Given that the behaviour of elementary particles can be calculated probabilistically with extraordinary accuracy, then whether wave function, elementary particle, or quantum field, the subatomic world is clearly governed by precise and immutable natural laws.
1) We do not have any method for determining which particles are “elementary”. This is part of the reason that different accounts of the standard model of particle physics give different numbers of types of particles (other, related reasons include where one decides to draw the line on how many different descriptions of essentially the same entity in the SM Lagrangian to include under one name rather than counting each separately; in the extreme case this would entail counting e.g. the photon twice, since the massless interaction boson of QED is its own antiparticle, so its “particle vs. antiparticle” pair is photon vs. photon, the counterpart of electron vs. positron).
2) We do not calculate their behavior probabilistically, because unlike in quantum mechanics (where at least we do have a probabilistic description of “physical” systems), the consequences of incorporating relativistic symmetries and spacetime into quantum theory destroy our ability to describe the kinds of entities (those famous wave-like and particle-like systems, such as non-relativistic electrons) one has in QM. The relativistic equation for a quantum-mechanical wave function didn’t make sense when Schrödinger first wrote it down, which is why he ended up with the equation that bears his name, obtained as the non-relativistic limit of the relativistic one. That relativistic equation, rediscovered and now named after its rediscoverers as the Klein-Gordon equation, still doesn’t make sense. So Dirac worked on the problem and obtained his hole theory and the Dirac equation governing its dynamics. It also doesn’t make sense. By “make sense” here I don’t mean it is counter-intuitive or complicated or hard to conceptualize. I mean that it has solutions that have no physical meaning in any physical theory and CANNOT have any meaning without reinterpreting the most basic laws and (already highly abstract) physical quantities of modern physics.
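For concreteness, the equations at issue, in their standard textbook forms (units ℏ = c = 1):

\[
(\partial_t^2 - \nabla^2 + m^2)\,\phi = 0 \qquad \text{(Klein-Gordon, from } E^2 = \vec p^{\,2} + m^2\text{)}
\]
\[
(i\gamma^\mu \partial_\mu - m)\,\psi = 0 \qquad \text{(Dirac)}
\]

Both admit negative-energy solutions \(E = \pm\sqrt{\vec p^{\,2} + m^2}\), and the Klein-Gordon “probability density” is not positive-definite. That is the sense in which, read as wave equations for a single particle’s wave function, they have solutions with no physical meaning.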
Dirac’s hole theory, however, hit on the “solution” that is today modern particle physics. Simplistically, since we can’t have a relativistic theory of quantum mechanics either by so-called second quantization or by forcing something like the Schrödinger equation to behave under the appropriate symmetries, we reinterpret the basic entities in the theory, as well as the very nature of the space(time) in which these entities were supposed to have been described. We take the operators that, in quantum mechanics, are supposed to represent physical, measurable quantities, and we reinterpret them in terms of particles. These particles are undetectable by any means. But we force them to behave locally by reinterpreting the “space” in which they are detected in terms of underlying fields which are assumed to exist, and we build our theory and our experiments around such sets of assumptions in order to obtain measured values from experiments.
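The reinterpretation is visible in the standard free-field expansion (in one common convention), where what used to be a wave function becomes an operator built out of particle creation and annihilation operators:

\[
\phi(x) = \int \frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2E_{\vec p}}}\left(a_{\vec p}\,e^{-ip\cdot x} + a^\dagger_{\vec p}\,e^{ip\cdot x}\right), \qquad E_{\vec p} = \sqrt{\vec p^{\,2} + m^2}.
\]

“Particles” are then, by definition, whatever \(a^\dagger_{\vec p}\) creates acting on the vacuum state, and detector clicks are interpreted as localized excitations of the underlying field.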
3) One problem with this method of reinterpreting equations that don’t make any sense physically (demoting some elements, promoting others, and then ascribing the term “particles” to observables that act locally on regions of spacetime we identify with “fields”) is that the field theories are still problematic and incapable of yielding anything other than infinitely wrong predictions. One reason for this is the famous divergences one encounters whenever one tries to do any calculation in quantum field theory. So, in order to make these fantastically accurate predictions, we make the measurements first. We obtain numbers. We take these numbers and divide up the theoretical entities in the QFTs of particle physics into “bare” particles and “dressed” particles. The rationale (put incredibly simplistically) is that, because there is no way to actually isolate any of these particles from their fields (particles are interpreted in terms of detector clicks of what are deemed field excitations), we declare the measured quantities to be those of the “dressed” particles, which we treat as something like “bare particle + self-interaction = dressed particle”. We then wrap up certain infinite results and set infinity equal to the measured value we started with. We then try to calculate what are ultimately diverging integrals and so forth using lattice gauge theory or perturbation theory (having already employed some regularization scheme) so that we can extract the “incredibly accurate” predictions, which are “measured value we started with + cancelling infinities up to an arbitrary cut-off scale”.
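A toy sketch of the regularize-and-match step described above (the numbers are made up, and the integrand is just a stand-in with the same logarithmic divergence one meets at one loop, not any particular QFT calculation):

```python
import numpy as np

m = 1.0  # stand-in "mass" setting the low-energy scale

def self_energy(cutoff):
    """Toy 'self-energy': the logarithmically divergent integral
    int_0^cutoff k / (k^2 + m^2) dk, evaluated in closed form.
    It grows without bound as the regulator is removed (cutoff -> inf)."""
    return 0.5 * np.log((cutoff**2 + m**2) / m**2)

measured = 2.5  # pretend: the experimentally measured value we start from

for cutoff in [1e2, 1e4, 1e8, 1e16]:
    sigma = self_energy(cutoff)
    # Matching step, schematically: choose the "bare" parameter so that
    # bare + self-interaction reproduces the measured number at this cutoff.
    bare = measured - sigma
    print(f"cutoff={cutoff:7.0e}  self-energy={sigma:7.3f}  "
          f"bare={bare:8.3f}  dressed={bare + sigma:.3f}")
```

The bare parameter runs off to minus infinity as the cutoff is removed, while the “dressed” combination sits at the measured input by construction; whatever predictive content remains lives in the relations among other quantities once this matching has been done.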
4) Of course, in order to do all of this we also have to postulate into existence several mathematical entities and procedures which may not “exist”, in the sense that we have no proof that the mathematics of the theory can actually be formulated the way we use it: we don’t know that there is any mathematical definition underlying certain of the objects involved, nor whether the procedures built on them are mathematically legitimate.
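The canonical example is the path integral, in which expectation values are written formally as

\[
\langle \mathcal{O} \rangle = \frac{\int \mathcal{D}\phi\;\mathcal{O}[\phi]\,e^{iS[\phi]}}{\int \mathcal{D}\phi\;e^{iS[\phi]}},
\]

where the “measure” \(\mathcal{D}\phi\) over all field configurations has no known rigorous definition for interacting theories in four dimensions. The expression is used as a generating device for perturbation theory, not as a mathematically defined integral.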
All this to obtain a procedure that can calculate detector clicks beyond our current ability to probe at those scales, so that we can shove most of our ignorance about what is going on down past the scales we can deal with at all; and while we are at it, we can reinterpret the whole business in a manner that allows us to deal with theories that are infinitely wrong everywhere (non-renormalizable theories) using the current “effective field theory” approach. Oh, and because we can’t understand even the basics of subatomic processes after all of this without positing constituent models of e.g. protons and neutrons, we invented a sophisticated (though not fully rigorous) mathematical method of explaining why we haven’t ever observed quarks: confinement and asymptotic freedom force quarks to behave like constituents of e.g. protons right up until we try to force them to behave in a manner that would allow us to detect them, at which point the strength of their interaction (mediated by gluons) forces them to behave as a single entity.
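Asymptotic freedom is usually summarized by the one-loop running of the strong coupling,

\[
\alpha_s(Q^2) = \frac{12\pi}{(33 - 2n_f)\,\ln\!\left(Q^2/\Lambda_{\mathrm{QCD}}^2\right)},
\]

with \(n_f\) the number of quark flavours. The coupling shrinks at large momentum transfer \(Q\) (quarks inside a proton act nearly free) and blows up as \(Q\) approaches \(\Lambda_{\mathrm{QCD}}\), which is precisely the regime one enters when trying to knock a quark loose. Perturbation theory fails exactly where confinement is supposed to be demonstrated, which is why the argument leans on lattice methods and remains “not fully rigorous”.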
What, precisely, is so “extraordinary” about the accuracy of the values we predicted by setting infinity equal to the measured values so that we could calculate higher-order corrections by cancelling infinities up to an ignorance parameter, only to end up with a theory we interpret as fundamentally phenomenological (i.e., as a theory that describes how to predict what detectors will do) up to a scale at which even this fails? How is this “clearly governed by precise and immutable natural laws”?