As a graduate student during the 1980s, I became fascinated by the colorful but seemingly chaotic wave patterns exhibited by electricity in plasma bulbs. I soon concluded that the erratic wave patterns provided intuitive evidence of indeterminism in nature (though not proof). Thirty years later, I am still persuaded that indeterminism trumps determinism as a guiding philosophical proposition. This conclusion eventually shaped my views and approach to financial economics, leading me to advocate stochastic modeling methodologies in research and practice.
I recorded the video that follows using a plasma lamp from my study and am posting it here for others to view and ponder. Notice how the waves respond as my finger touches the globe.
Follow the link below for primers on determinism and indeterminism, as well as metaphysics.
6 comments:
What you are seeing is an example of chaotic/complex behavior. It is only indeterminate to you because you don't have all of the information that would lead to a description and successful modeling of its behavior.
Various particle and flow problems used to be viewed in much the same way. Now we model things as complex as explosions, crack propagation, and various other phenomena with reasonable levels of success (sometimes aided by empirically derived statistics, but often using only fundamental mathematics that predicts the empirical statistics we observe).
The words determinism and deterministic are generally used in philosophy and physics to describe processes that have no stochastic variables. They will, however, have unknown and/or uncertain variables. To my knowledge, no one in physics has yet convincingly demonstrated a truly non-deterministic fundamental physical process. Without such a foundation for stochasticism, the indeterminism of any given system will always simply be a reflection of our lack of knowledge and not a reflection of inherent stochasticism. That doesn't mean that statistics and statistical modeling shouldn't be used when appropriate. It just means that we shouldn't mistake chaos or complexity for stochastic behavior.
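As a toy illustration of that distinction, consider the logistic map in Python (my own contrived example, not anything from the physics literature): the rule is completely deterministic, yet for r = 4 its output looks like noise, and two nearly identical starting points diverge within a couple dozen iterations.

# Logistic map: x -> r*x*(1 - x), a one-line deterministic rule.
def orbit(x0, r=4.0, n=25):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points that differ by one part in a million...
a = orbit(0.200000)
b = orbit(0.200001)

# ...track each other at first, then diverge completely, even though
# no randomness is involved anywhere in the computation.
for i, (x, y) in enumerate(zip(a, b)):
    print(i, f"{x:.6f}", f"{y:.6f}")

An observer shown only the output would be tempted to call it random; an observer who knows the rule knows it is chaos, not stochasticism.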
Hi Mike, thanks for the incisive comment. Regarding the matter of non-deterministic physical systems, we should consider Bell's theorem and experiments as potential evidence. Recall that Bell's theorem posits that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. If we accept Bell's theorem as true (though some remain skeptical), then we appear to have convincing evidence of indeterminism in a physical system. As an aside, stochastic methods seek to model chaotic and complex behaviors, much like probability distributions seek to describe the characteristics of a given population. Chaos and complexity are the realities, while stochastics are simply a modeling methodology. I will concede, however, that some thinkers have still not given up on local hidden variables in advocacy of deterministic models. Thanks for the opportunity to respond to your comment...
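To put a number on what Bell's theorem rules out, here is a small Python sketch (my own illustration, using the textbook CHSH angle choices): any local hidden-variable theory must satisfy |S| <= 2, while the quantum singlet-state correlation E(a,b) = -cos(a - b) yields |S| = 2*sqrt(2), which is what the experiments keep observing.

import math

# Singlet-state correlation predicted by quantum mechanics
# for detector angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angle choices (in radians).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # ~2.828, i.e. 2*sqrt(2)
print(2.0)     # the ceiling no local hidden-variable model can exceed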
William,
Read the following article related to the illusion of entanglement:
http://philsci-archive.pitt.edu/archive/00004595/01/hopf.pdf
I must admit that the mathematics used in this paper is presently beyond me, but I understand the gist of the arguments made by the author (and I'm certain that you and other readers of this blog can as well).
It may take years or decades before the physics and mathematics communities resolve this debate (which has been ongoing for several decades already), but acausal and indeterministic processes have clearly not been proven to exist and Bell's Theorem will absolutely not be the last word on the subject.
With respect to stochasticism, as a computer programmer, I can create any number of random number generators that appear to the less knowledgeable to be stochastic. The more complex and convoluted I make the algorithms, the better I can hide the underlying method used to produce a given series of numbers. But just because the data produced appears to be stochastic (i.e., truly random) does not mean that it is. The series is completely deterministic to the person who encoded the algorithm.
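For example, here is a bare-bones linear congruential generator in Python (a deliberately simple sketch; production generators use stronger algorithms, but the principle is the same):

# A linear congruential generator: x -> (a*x + c) mod m.
# The stream looks random, but anyone who knows (a, c, m, seed)
# can reproduce it exactly -- it is completely deterministic.
class LCG:
    def __init__(self, seed, a=1103515245, c=12345, m=2**31):
        self.state = seed % m
        self.a, self.c, self.m = a, c, m

    def next(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m  # value in [0, 1)

gen = LCG(seed=42)
print([round(gen.next(), 4) for _ in range(5)])

# Re-seeding gives the identical "random" series.
gen2 = LCG(seed=42)
print([round(gen2.next(), 4) for _ in range(5)])

To a consumer of the first print line, the numbers look unpredictable; to the holder of the seed, the second line is a foregone conclusion.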
Once again, it is the lack of knowledge and understanding that leads one to conclude that a process is random. Admittedly, in practice, we all use the words random and stochastic to represent apparently stochastic or random processes (http://en.wikipedia.org/wiki/Stochastic_process). But we are much wiser never to conclude that apparent or observed randomness, or the effects of uncertain initial parameters played out over time, equates to true randomness.
William,
Although not directly applicable to this discussion, you and other readers of this blog might find the Feynman Messenger Lectures interesting.
http://research.microsoft.com/apps/tools/tuva/index.html
In his lecture on "The Relation of Mathematics and Physics", starting at chapter 9, he talks about action at a distance vs. local field theory vs. minimization of path integrals, and he concludes in chapter 12 with his view that the fundamental mechanisms of physics (when/if we finally discover them) are likely to be simple, and that all of the complexities we see come from the size of the problems that we try to understand (i.e., the number of variables, interactions, unknowns, etc.).
Hi Mike, thanks again for your comments. Based on our discussion here, I can see why some view simulation methods as a "blunt instrument" approach to probability...
William,
I don't see statistical simulations as blunt. They can be very powerful ways of analyzing something that might be, given our current level of knowledge, tremendously more difficult to analyze any other way.
However, if you assume that no more knowledge about a given system is attainable, then you have effectively guaranteed that more knowledge will not be discovered, because you stop looking for it, and so the basic phenomenon will need to smack you on the head to alert you to its presence.
So my view is that statistical tools are powerful and appropriate for use when the detailed knowledge of a system is limited, but I never rule out the possibility that additional information might be attainable that would explain, in perhaps a more basic way, how the statistics I observe are produced.
Here is a pretty simple example. You are my customer and I sell you some parts. You measure some characteristic of my parts and find that I have a bimodal distribution. You want to know how my bimodal distribution affects your product/process. So you take the bimodal distribution, run some simulations against it, and determine how it impacts your product/process. If you are happy that there is no real problem, then you stop there and assume that my product variation is an apparently stochastic variable with a particular distribution.

However, if the bimodal distribution is a problem for you, you might ask me to figure out how it is occurring. And I look at my process and discover that, by selecting the best product (the middle of the bell curve) to sell to those customers who are willing to pay more, what I am selling to the rest of my customers (including you) is bimodally distributed. The bimodal distribution was introduced by way of selective pressures. Or perhaps it occurs because my process has a sinusoidal variable with significant influence, and the U-shaped distribution of the sinusoidal output helps to produce (in combination with other sources) a bimodal distribution. Or perhaps it is some other reason, such as a bias introduced by some binary operator (parts produced during day versus night, etc.). Or maybe it is a combination of all of the above.

The point is that there is likely to be a more basic explanation for every statistical distribution that we observe. We may or may not ever know the mechanism, but it is there doing its thing whether or not we uncover it. A sketch of the first mechanism follows below.
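Here is that first mechanism in Python (the spec numbers are entirely made up for illustration): draw parts from a bell curve, skim the middle off for the premium customers, and what ships to everyone else comes out bimodal.

import random

random.seed(0)

# Parts come off the line normally distributed (hypothetical spec:
# mean 100, standard deviation 5).
parts = [random.gauss(100.0, 5.0) for _ in range(100_000)]

# Premium customers get the middle of the bell curve...
premium = [x for x in parts if 97.0 <= x <= 103.0]

# ...so the remainder shipped to everyone else is bimodal.
remainder = [x for x in parts if x < 97.0 or x > 103.0]

# A crude text histogram makes the two humps visible.
for lo in range(80, 120, 2):
    n = sum(1 for x in remainder if lo <= x < lo + 2)
    print(f"{lo:3d}-{lo + 2:<3d} {'#' * (n // 300)}")

Nothing stochastic was added to create the second mode; it falls out of a perfectly ordinary selection rule applied to an ordinary bell curve.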
In the Feynman lecture on Probability and Uncertainty (follow the link that I gave earlier), he explains the double slit experiment and its quantum physics interpretation. And he takes the position that there is no way out, that that is just the way nature appears to work. With that mindset (i.e., I have no other knowledge and I can't see getting any more knowledge), one has no option but to analyze the wave-particle duality probabilistically. Yet the arguments and proofs in favor of inherent indeterminism in nature (at the quantum level) have weaknesses which are still being uncovered after 50+ years. The reason is that, even if it is true that there is inherent indeterminism, some people aren't ready to throw in the towel (and it's a good thing, because they are still finding valid concerns in the proofs). Those people are trying to find a more satisfying answer, and I believe that a more satisfying answer exists.
Anyway, I see no problem with physicists, economists, engineers, doctors, risk managers, or anyone else using statistics where their knowledge has reached some limit or where the statistical approach leads to a tremendous simplification of the analysis (most of our bulk laws are of this sort), but I do have difficulty accepting (given what we know today) that there is inherent indeterminism in any system or that it exists as a fundamental proposition of nature.