Classical logic, in turn, assigns truth values to all propositions, and is thus not adequate for propositions about a quantum system, where the empirical content of propositions is relevant when applying the rules of logic. He first called attention to the fact that classical logic implicitly assumes the commensurability of any two propositions. Then, starting from elementary propositions that assert that a system has a certain property, and that can be evaluated by testing the property in an experiment, he introduced the concept of the dialog-game.
Several kinds of compound propositions may be defined by specifying the dialog-game. By means of this concept of commensurability, the dialog-game gives a complete frame for argumentation [, Ch. ].

During the eighties and nineties, in line with the neo-Kantian QL line of research, the French philosopher Michel Bitbol analyzed the different alternatives of the language of physical properties and their role in objectivity.
First, contextuality is singled out as the main characteristic to focus on when applying the program. In the classical case, this being done, one may consider that the new range of possible compound phenomena is relative to a single ubiquitous context which is not even worth mentioning. In classical physics, the rules of classical logic hold in every context, but they also hold when merging the contexts.
This is not the case in QM. Although Boolean algebra and the corresponding laws of classical logic may be used to deal with propositions about qualities in each context, when all contexts are considered together the structure is that of L(H). To manifestly show how the different languages link together, classical languages using classical connectives are implemented in each context, and a meta-language is then constructed using a relation of implication: one language implies another if and only if every sentence of the first is also a sentence of the second.
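The contrast can be made concrete: in the lattice of subspaces of a Hilbert space, the distributive law already fails in two dimensions. A minimal numpy sketch (the vectors are arbitrary illustrations, not from the text), comparing dimensions on both sides of the law:

```python
import numpy as np

def join_dim(*vecs):
    """Dimension of the span (lattice join) of the given vectors."""
    return np.linalg.matrix_rank(np.column_stack(vecs))

def meet_dim(a, b):
    # dim(A ^ B) = dim A + dim B - dim(A v B) for subspaces of R^n
    return join_dim(a) + join_dim(b) - join_dim(a, b)

q = np.array([1.0, 0.0])   # the x-axis
r = np.array([0.0, 1.0])   # the y-axis
p = np.array([1.0, 1.0])   # the diagonal line

# p ^ (q v r): since q v r is all of R^2, the meet is p itself (dimension 1)
lhs = join_dim(p) + join_dim(q, r) - join_dim(p, q, r)

# (p ^ q) v (p ^ r): both meets are the zero subspace, so the join has dimension 0
rhs = meet_dim(p, q) + meet_dim(p, r)

print(lhs, rhs)  # 1 0 -> p ^ (q v r) != (p ^ q) v (p ^ r)
```

The same computation done with commuting (classical) propositions always balances, which is the sense in which each context alone remains Boolean.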
The combination of contexts has more consequences than the ones that occur when they are used separately. This construction is shown to be nothing but an orthocomplemented non-distributive lattice [42, Annexe I]. In this sense, one can say that QL has been derived by means of a transcendental argument: it is a condition of possibility of a meta-language able to unify context-dependent experimental languages.

The Birkhoff-von Neumann paper initiated the search for an axiomatic theory in which the (physically unjustified) Hilbert space structure would be derived from a set of physically motivated axioms, giving particular importance to the concept of experimental propositions.
Thus, Mackey referred to the propositions affiliated with a physical system as questions [, p. ]. Gleason then proved that, in any separable Hilbert space of dimension at least three, whether real or complex, every measure on the closed subspaces is generated by a density operator. For example, some mathematical aspects of the notion of probability carried by the density operator have been studied by Veeravalli Varadarajan.
But it was the representation theorem of Constantin Piron  which clarified the field. The theorem states that if L is a complete orthocomplemented atomic lattice which is weakly modular and satisfies the covering law, then each irreducible component of the lattice L can be represented as the lattice of all biorthogonal subspaces of a vector space V over a division ring K. In the sixties, Jauch and Piron [, ] also aimed at reconstructing the formalism of QM from first principles with special interest in the relation between concepts and real physical operations that can be performed in the laboratory.
The distinction between the system and its states cannot be maintained under all circumstances with the precision implied by this definition. The reason is that systems which we regard under normal circumstances as different may be considered as two different states of the same system.
An example is positronium and a system of two photons. Moreover, its purpose is to attempt to give an independent motivation for the general program to understand QM.

One of the main results of the operational line of research is due to Aerts. Orthodox QL faces a deep problem in treating composite systems.
In fact, when considering two classical systems, it is meaningful to organize the whole set of propositions about them in the corresponding Boolean lattice built up as the Cartesian product of the individual lattices. Informally one may say that each factor lattice corresponds to the properties of each physical system. But the quantum case is completely different. When two or more systems are considered together, the state space of their pure states is taken to be the tensor product of their Hilbert spaces. But it is not true, as a naive classical analogy would suggest, that any pure state of the compound system factorizes after the interaction in pure states of the subsystems, and that they evolve with their own Hamiltonian operators.
A non-separability theorem by Aerts shows that the classical procedure fails: taking the tensor product of the lattices of the properties of two systems does not yield the lattice of the properties of the composite [5, 6, 8, 57, , ].
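The failure of the classical product intuition is already visible at the level of states: an entangled vector, reshaped into its coefficient matrix, has rank greater than one and therefore admits no factorization into pure subsystem states. A minimal numpy sketch (the states are standard illustrations, not taken from the text):

```python
import numpy as np

# Bell state in C^2 (x) C^2: (|00> + |11>) / sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# A product state a (x) b reshapes to a rank-1 coefficient matrix;
# rank > 1 means no factorization into pure subsystem states exists.
print(np.linalg.matrix_rank(psi.reshape(2, 2)))   # 2 -> entangled

# Compare with a genuine product state |0> (x) |+>
prod = np.kron([1.0, 0.0], np.array([1.0, 1.0]) / np.sqrt(2))
print(np.linalg.matrix_rank(prod.reshape(2, 2)))  # 1 -> factorizes
```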
Attempts to vary the conditions that define the product of lattices have been made, but in all cases the result is that the Hilbert lattice factorizes only when one of the factors is a Boolean lattice, or when the systems have never interacted.

During the late sixties and beginning of the seventies there was a radical philosophical view, initiated by David Finkelstein [, ] and Hilary Putnam [, ], arguing that logic is in a certain sense empirical.
Finkelstein highlighted the abstractions we make in passing from mechanics to geometry to logic, and suggested that the dynamical processes of fracture and flow already observed at the first two levels should also arise at the third. Putnam, on the other hand, argued that the metaphysical pathologies of superposition and complementarity are nothing more than artifacts of logical contradictions generated by an indiscriminate use of the distributive law. We live in a world with a non-classical logic. Inasmuch as this picture of physical properties is confirmed by the empirical success of QM, this view means we must accept that the way in which physical properties actually hang together is not Boolean.
Since logic is, for Putnam, very much the study of how physical properties actually hang together, he concludes that classical logic is simply mistaken: the distributive law is not universally valid.

The study of the modal character of QM was explicitly formalized in the seventies and eighties by a group of physicists and philosophers of science. Bas van Fraassen was the first to formally include the reasoning of modal logic in QM.
He presented a modal interpretation (MI) of QL in terms of its semantical analysis [, , , ], the purpose of which was to clarify which properties, among those of the complete set structured in the lattice of subspaces of Hilbert space, pertain to the system. In , Simon Kochen presented his own modal version at one of the famous conferences on the foundations of QM organized by Kalervo Laurikainen in Finland. This interpretation of QM also has a direct link to the discussions between the founding fathers of the theory.
We consider it an illuminating clarification of the mathematical structure of the theory, especially apt to describe the measuring process. We would, however, feel that it means not an alternative but a continuation of the Copenhagen interpretation of Bohr and, to some extent, Heisenberg. Taking the work done by van Fraassen as a standpoint, Dieks went further in relation to the metaphysical presuppositions involved, making explicit the idea that MIs [94, 95, 96, 97] could also be considered, from a realist stance, as describing systems with properties.
Of course, the way in which MIs attack the problem rests on the distinction between the realms of possibility and actuality. As noted by Dirac in the first chapter of his famous book , the existence of superpositions is responsible for the striking difference between quantum and classical behavior. Superpositions are also central when dealing with the measurement process, where the various terms associated with the possible outcomes of a measurement must be assumed to be present together in the description.
This fact leads van Fraassen to the distinction between value-attributing propositions and state-attributing propositions, between value-states and dynamic-states. In other words, the state delimits what can and cannot occur, and how likely it is (it delimits possibility, impossibility, and probability of occurrence), but does not say what actually occurs. So, van Fraassen distinguishes propositions about events from propositions about states. Value-states are specified by stating which observables have values and what these values are.
Dynamic-states specify how the system will develop, and are endowed with a modal interpretation, which informs the consideration of possibility in the realm of QL [, chapter 9]. Logical operations among value-attributing propositions are then defined, and the resulting structure may be enriched so as to approach the lattice of subspaces of Hilbert space. One may recognize a modal relation between the two kinds of propositions. For example, one starts by denying the collapse in the measurement process and recognizing that the observable has one of its possible eigenvalues.
It may then be asked what can be inferred with respect to those values when one knows the dynamic state. Thus, the logic of V is that of P, that is, QL. Endowed with these tools, van Fraassen gives an interpretation of the probabilities of the measurement outcomes which is in agreement with the Born rule.

The MI proposed by Kochen and Dieks (K-D, for short) uses the so-called biorthogonal decomposition theorem (also called the Schmidt theorem) in order to describe the correlations between the quantum system and the apparatus in the measurement process.
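Numerically, the biorthogonal (Schmidt) decomposition is just the singular value decomposition of the state's coefficient matrix. A hedged sketch, with arbitrarily chosen coefficients a, b:

```python
import numpy as np

# Joint state of a two-level system and a two-level apparatus pointer after a
# measurement interaction: psi = a|0>|A0> + b|1>|A1> (a, b arbitrary here).
a, b = 0.6, 0.8
psi = a * np.kron([1, 0], [1, 0]) + b * np.kron([0, 1], [0, 1])

# The biorthogonal (Schmidt) decomposition is the SVD of the coefficient matrix:
# the singular values are the Schmidt coefficients, and the columns of U (rows
# of Vh) give the paired system/apparatus bases in which the correlations are
# diagonal -- on the K-D reading, the bases of definite properties.
U, s, Vh = np.linalg.svd(psi.reshape(2, 2))
print(np.round(s, 3))  # [0.8 0.6]
```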
From a realist perspective, an interpretational issue which MIs need to take into account is the assignment of definite values to properties. But if we try to interpret eigenvalues which pertain to different sets of observables as the actual pre-existent values of the physical properties of a system, we are faced with all kinds of no-go theorems that preclude this possibility. Regarding the specific scheme of the MI, Bacciagaluppi and Clifton were able to derive KS-type contradictions in the K-D interpretation which showed that one cannot extend the set of definite-valued properties to non-disjoint sub-systems [26, 56].
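The flavor of such KS-type no-go results can be checked by brute force. In the Peres-Mermin square (a standard example, not discussed in the text above), nine observables must each receive a value ±1 with the products along the three rows equal to +1 and along the columns equal to +1, +1, -1; no assignment satisfies all six constraints:

```python
from itertools import product

# Index the 3x3 square of observables 0..8.
rows = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
cols = [(0, 3, 6), (1, 4, 7), (2, 5, 8)]

# QM fixes the row products to +1 and the column products to +1, +1, -1.
constraints = [(t, 1) for t in rows] + [(c, 1) for c in cols[:2]] + [(cols[2], -1)]

def satisfies(v):
    return all(v[i] * v[j] * v[k] == target for (i, j, k), target in constraints)

solutions = [v for v in product([1, -1], repeat=9) if satisfies(v)]
print(len(solutions))  # 0 -> no noncontextual value assignment exists
```

The obstruction is visible by hand too: multiplying all row constraints gives +1 for the product of the nine values, while multiplying the column constraints gives -1.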
Bub's proposal singles out a preferred observable R; in this way one can avoid KS contradictions and maintain a consistent discourse about statements which pertain to the sublattice determined by R. It is this distinction between property states and dynamical states which, according to Bub, provides the modal character of the interpretation. In precise terms, as L(H) does not admit a global family of compatible valuations, and thus not all propositions about the system are determinately true or false, probabilities defined by the pure state cannot be interpreted epistemically [, p. ].
So, dynamical states do not coincide with property states. The determinate sublattice, which changes with the dynamics of the system, is a partial Boolean algebra, that is, the union of a family of Boolean algebras pasted together in such a way that the maximum and minimum elements of each one, and eventually other elements, are identified and, for every n -tuple of pair-wise compatible elements, there exists a Boolean algebra in the family containing the n elements.
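A minimal instance of such a pasting can be exhibited in C²: the projection lattice contains, among others, two maximal Boolean blocks, one generated by the spin-z projectors and one by the spin-x projectors, sharing only 0 and I. Projections commute within a block but not across blocks (a sketch using the usual matrix representations):

```python
import numpy as np

I2 = np.eye(2)
Zero = np.zeros((2, 2))
Pz = np.array([[1.0, 0.0], [0.0, 0.0]])        # projector onto |0> (spin-z up)
Px = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # projector onto |+> (spin-x up)

# Two Boolean blocks of the projection lattice of C^2, pasted at 0 and I:
block_z = [Zero, Pz, I2 - Pz, I2]
block_x = [Zero, Px, I2 - Px, I2]

def commute(a, b):
    return np.allclose(a @ b, b @ a)

print(all(commute(a, b) for a in block_z for b in block_z))  # True
print(commute(Pz, Px))  # False: the blocks cannot merge into one Boolean algebra
```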
Thus constructed, the structure avoids KS-type theorems. Given a system S and a measuring apparatus M, the quantum state can then be interpreted as assigning probabilities to the different possible ways in which the set of determinate quantities can have values, where one particular set of values represents the actual but unknown values of these quantities. The problem with this interpretation is that, in the case of an isolated system, no single element in the formalism of QM allows us to choose one observable R rather than another.
This is why the move seems flagrantly ad hoc. Were we dealing with an apparatus, there would be a preferred observable, namely the pointer position; but the quantum wave function contains in itself mutually incompatible representations (choices of apparatuses), each of which provides non-trivial information about the state of affairs.

The authors of this work have also contributed to the understanding of modality in the context of orthodox QL [, , , ]. From our investigation several conclusions can be drawn. We started our analysis with a question regarding the contextual aspect of possibility.
As is well known, the KS theorem does not talk about probabilities, but rather about the constraints the formalism places on actual definite-valued properties considered from multiple contexts. What we found via the analysis of possible families of valuations is that a theorem which we called, for obvious reasons, the Modal KS (MKS) theorem can be derived, which proves that quantum possibility, contrary to classical possibility, is also contextually constrained.
This means that, regardless of its use in the literature, quantum possibility is not classical possibility. In a paper written in , we concentrated on the analysis of actualization within the orthodox frame and interpreted, following the structure, the logical realm of possibility in terms of ontological potentiality.

The study of the structure of tensor products [57, , , , ] motivated a fruitful development of different algebraic structures that could represent quantum propositions, which in turn became a line of investigation by itself.
Beginning with the proposal of test spaces by Foulis and Randall [, , , , , , ], which are related to orthoalgebras, the theory of structures such as orthomodular lattices, partial Boolean algebras, orthomodular posets, effect algebras, quantum MV-algebras and the like became widely discussed.
The weakened structures allow consideration of unsharp propositions related, not to projections, but to the elements of the more general set of bounded linear operators (called effects) over which the probability measure given by the Born rule may be defined.

An important line of research in the subject of quantum structures is the application of QL methods to languages of information processing and, more specifically, to quantum computational logic (QCL) [53, 80, , 82, , , , , , , ].
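The notion of an effect mentioned above admits a two-line check: an effect is a positive operator E with 0 ≤ E ≤ I that need not be a projection, and the Born rule extends to it as p = Tr(ρE). A sketch with arbitrarily chosen weights:

```python
import numpy as np

P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])

# An unsharp spin-z test: a convex mixture of the two sharp outcomes.
E = 0.8 * P0 + 0.2 * P1

print(np.allclose(E @ E, E))                 # False: E is an effect, not a projection
w = np.linalg.eigvalsh(E)
print(bool(w.min() >= 0 and w.max() <= 1))   # True: 0 <= E <= I

# The Born rule extends to effects: p = Tr(rho E)
rho = 0.5 * np.eye(2)                        # maximally mixed state
print(round(float(np.trace(rho @ E)), 3))    # 0.5
```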
In this way several logical systems associated with quantum computation were developed. They provide a new form of quantum logic strongly connected with the fuzzy logic of continuous t-norms. The groups in Firenze (directed by Dalla Chiara) and Cagliari (directed by Giuntini) have also developed different languages for quantum computation. A sentence in QL may be interpreted as a closed subspace of H.
Instead, the meaning of an elementary sentence in QCL is a quantum information quantity encoded in a collection of qubits (unit vectors in a tensor product of two-dimensional complex Hilbert spaces) or qmixes (positive semi-definite Hermitian operators of trace one over Hilbert space). Conjunction and disjunction are not associated with the join and meet lattice operations.

On the one hand, non-reflexive logic (NRL) is, in a wide sense, a logic in which the relation of identity or equality is restricted, eliminated, replaced, at least in part, by a weaker relation, or employed together with a new non-reflexive implication or equivalence relation.
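The QCL treatment of conjunction mentioned above can be sketched with the reversible Toffoli gate, which writes AND(x, y) onto the target qubit of |x, y, 0⟩. This is a minimal numpy illustration of the idea, not the full QCL semantics:

```python
import numpy as np

# Toffoli gate on C^8: flips the third (target) qubit iff both controls are 1.
T = np.eye(8)
T[6:8, 6:8] = [[0, 1], [1, 0]]

def qubit(b):
    return np.array([1.0, 0.0]) if b == 0 else np.array([0.0, 1.0])

def toffoli_and(x, y):
    """Read AND(x, y) off the target qubit of T|x, y, 0>."""
    state = np.kron(np.kron(qubit(x), qubit(y)), qubit(0))
    return int(np.argmax(T @ state)) % 2   # last bit of the output basis index

for x in (0, 1):
    for y in (0, 1):
        print(x, y, toffoli_and(x, y))     # reproduces the truth table of AND
```

Because the gate is unitary, it also acts on superpositions of the inputs, which is where the logic departs from the lattice operations of orthodox QL.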
There are other versions in higher-order logic, in which higher-order variables appear. Some of the above principles are not in general valid in non-reflexive logics: they are totally or partially eliminated, restricted, or not applied to the relation that is employed instead of identity. Several of these principles are the motivations for the development of non-reflexive logics. In the Congress, Manin proposed, as one of a new set of problems for the next century: 'New quantum physics has shown us models of entities with quite different behaviour.'
Within this context, the weakening of the concept of identity, substituted by that of indiscernibility, allows the development of such non-reflexive logics [68, 73, , 75]. There are also different approaches to the logic related to quantum set theories.
Gaisi Takeuti proposed a quantum set theory developed in the lattice-of-projections-valued universe [, ], and Satoko Titani formulated a lattice-valued logic, corresponding to general complete lattices, developed in classical set theory based on classical logic.

On the other hand, paraconsistent logics (PL) are the logics of inconsistent but non-trivial theories. The origins of PL go back to the first systematic studies dealing with the possibility of rejecting the principle of non-contradiction (PNC). PL was elaborated, independently, by Stanisław Jaśkowski in Poland and by Newton da Costa in Brazil, around the middle of the last century (on PL see, for example, ).
A theory T is called trivial if any sentence of its language is also a theorem of T; otherwise, T is said to be non-trivial. In classical logic, and in most usual logics, a theory is inconsistent if, and only if, it is trivial. A logic L is paraconsistent when it can be the underlying logic of inconsistent but non-trivial theories. Clearly, no classical logic is paraconsistent.

The notion of complementarity was developed by Bohr in order to accommodate the contradictory wave and corpuscular representations found in the double-slit experiment (see, for example, ).
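The definitions above can be illustrated with Priest's three-valued logic LP, a standard paraconsistent system (introduced here for illustration, not discussed in the text): a valuation can designate a contradiction without thereby designating every sentence, so inconsistency does not force triviality.

```python
# Priest's three-valued logic LP: values 't', 'b' ("both"), 'f';
# 't' and 'b' are the designated values (they count as "holding").
order = ['f', 'b', 't']
NOT = {'t': 'f', 'b': 'b', 'f': 't'}
AND = lambda x, y: min(x, y, key=order.index)
designated = {'t', 'b'}

A, B = 'b', 'f'                     # A is "both true and false"; B is plain false
contradiction = AND(A, NOT[A])      # evaluates to 'b'
print(contradiction in designated)  # True: the inconsistent premise holds...
print(B in designated)              # False: ...yet B does not follow (no explosion)
```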
There is a great amount of work in progress in QL, from new quantum structures to the use of non-reflexive logics, paraconsistent logics, dynamical logics, etc. In the following section we shall review some of these advances in relation to QM. The International Quantum Structures Association (IQSA) gathers experts on quantum logic and quantum structures from all over the world under its umbrella. In fact, in the subject of quantum structures, MV-algebras, effect algebras, pseudo-effect algebras and related structures are being developed in relation to their use in QM.
See [55, , , , , , ], just to cite a few examples. As mentioned above (see Section 4), the standpoint of this approach is the observation that QL is essentially a dynamical logic: it is about actions rather than propositions. Smets, together with Alexandru Baltag, has proposed two axiomatizations of the logic of quantum actions.

De Broglie proposed that electrons, like photons (particles of light), manifest both particle-like and wave-like properties.
Building on this proposal, he suggested that an analysis of orbiting electrons from a wave perspective, rather than a particle perspective, might make more sense of their quantized nature. Indeed, another breakthrough in understanding was reached. The atom according to de Broglie consisted of electrons existing as standing waves, a phenomenon well known to physicists in a variety of forms. De Broglie envisioned electrons around atoms standing as waves bent around a circle, as in Figure below: only at radii where a whole number of wavelengths fits around the circumference does the wave reinforce itself. At any other radius, the wave destructively interferes with itself and thus ceases to exist.
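De Broglie's standing-wave condition can be made quantitative: requiring n wavelengths around the orbit (nλ = 2πr, with λ = h/mv) together with the Coulomb force balance mv²/r = ke²/r² yields the allowed radii r_n = n²ħ²/(m_e k e²). A numerical check using CODATA constants:

```python
hbar = 1.054571817e-34    # reduced Planck constant, J s
m_e  = 9.1093837015e-31   # electron mass, kg
e    = 1.602176634e-19    # elementary charge, C
k    = 8.9875517923e9     # Coulomb constant, N m^2 / C^2

# n*lambda = 2*pi*r with lambda = h/(m v), combined with m v^2/r = k e^2/r^2,
# gives the allowed radii r_n = n^2 * hbar^2 / (m_e * k * e^2).
def bohr_radius(n):
    return n ** 2 * hbar ** 2 / (m_e * k * e ** 2)

print(f"{bohr_radius(1):.3e} m")  # ~5.292e-11 m: the Bohr radius a0
print(f"{bohr_radius(2):.3e} m")  # 4 * a0: the radii scale as n^2
```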
According to the new quantum theory, it was impossible to determine the exact position and exact momentum of a particle at the same time. The startling implication of quantum mechanics is that particles do not actually have precise positions and momenta, but rather balance the two quantities in such a way that their combined uncertainties never diminish below a certain minimum value. In simple terms, the more precisely we know a wave's constituent frequency(ies), the less precisely we know its amplitude in time, and vice versa. To quote myself: a waveform of infinite duration (an infinite number of cycles) can be analyzed with absolute precision, but the fewer cycles available to the computer for analysis, the less precise the analysis.
The fewer times that a wave cycles, the less certain its frequency is. This principle is common to all wave-based phenomena, not just AC voltages and currents. In order to precisely determine the amplitude of a varying signal, we must sample it over a very narrow span of time. Thus, we cannot simultaneously know the instantaneous amplitude and the overall frequency of any wave with unlimited precision.
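This trade-off is easy to demonstrate numerically: truncating a tone to fewer cycles widens its spectral peak. A sketch using an FFT (the sample rate and frequency are arbitrary choices):

```python
import numpy as np

fs, f0 = 1000, 50                       # sample rate (Hz) and tone frequency (Hz)
t = np.arange(0, 1, 1 / fs)

def spectral_width(n_cycles):
    """Number of FFT bins above half the peak for a truncated sine burst."""
    sig = np.where(t < n_cycles / f0, np.sin(2 * np.pi * f0 * t), 0.0)
    spec = np.abs(np.fft.rfft(sig))
    return int(np.sum(spec > 0.5 * spec.max()))

# A 2-cycle burst smears across many bins; a 50-cycle tone is sharply peaked.
print(spectral_width(2) > spectral_width(50))  # True
```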
Stranger yet, this uncertainty is much more than observer imprecision; it resides in the very nature of the wave. It is not as though it would be possible, given the proper technology, to obtain precise measurements of both instantaneous amplitude and frequency at once. Quite literally, a wave cannot have both a precise, instantaneous amplitude, and a precise frequency at the same time. It was, after all, this discovery that led to the formation of quantum theory to explain it. However, the quantized behavior of electrons does not depend on electrons having definite position and momentum values, but rather on other properties called quantum numbers.
In essence, quantum mechanics dispenses with commonly held notions of absolute position and absolute momentum, and replaces them with absolute notions of a sort having no analogue in common experience. Any electron in an atom can be described by four numerical measures (the previously mentioned quantum numbers), called the Principal, Angular Momentum, Magnetic, and Spin numbers. Principal Quantum Number: Symbolized by the letter n, this number describes the shell that an electron resides in. The principal quantum number must be a positive integer (a whole number, greater than or equal to 1).
These integer values were not arrived at arbitrarily, but rather through experimental evidence of light spectra: the differing frequencies (colors) of light emitted by excited hydrogen atoms follow a sequence mathematically dependent on specific integer values, as illustrated in Figure previous. Each shell has the capacity to hold multiple electrons.
An analogy for electron shells is the concentric rows of seats of an amphitheater. As in amphitheater rows, the outermost shells hold more electrons than the inner shells. Also, electrons tend to seek the lowest available shell, as people in an amphitheater seek the seat closest to the center stage. The higher the shell number, the greater the energy of the electrons in it (Figure below). Electron shells in an atom were formerly designated by letter rather than by number. Angular Momentum Quantum Number: A shell is composed of subshells.
One might be inclined to think of subshells as simple subdivisions of shells, like lanes dividing a road, but the subshells are much stranger. The first subshell is shaped like a sphere (Figure below, s), which makes sense when visualized as a cloud of electrons surrounding the atomic nucleus in three dimensions. The other subshell shapes are reminiscent of graphical depictions of radio antenna signal strength, with bulbous lobe-shaped regions extending from the antenna in various directions (Figure below, d). Valid angular momentum quantum numbers are positive integers, like principal quantum numbers, but also include zero. These quantum numbers for electrons are symbolized by the letter l. An older convention for subshell description used letters rather than numbers: the letters come from the words sharp, principal (not to be confused with the principal quantum number, n), diffuse, and fundamental.
Magnetic Quantum Number: The magnetic quantum number for an electron describes the orientation of its subshell shape. These different orientations are called orbitals. Think of three dumbbells intersecting at the origin, each oriented along a different axis in a three-axis coordinate space. Valid numerical values for this quantum number are integers ranging from -l to l, symbolized as m l in atomic physics and l z in nuclear physics.
Spin Quantum Number: Like the magnetic quantum number, this property of atomic electrons was discovered through experimentation. Spin quantum numbers are symbolized as m s in atomic physics and s z in nuclear physics. The physicist Wolfgang Pauli developed a principle explaining the ordering of electrons in an atom according to these quantum numbers. His principle, called the Pauli exclusion principle , states that no two electrons in the same atom may occupy the exact same quantum states. That is, each electron in an atom has a unique set of quantum numbers. This limits the number of electrons that may occupy any given orbital, subshell, and shell.
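The counting implied by these rules can be verified directly: each subshell l holds 2(2l + 1) electrons (2l + 1 orbitals, two spin states per orbital), so shell n holds 2n². A short sketch:

```python
# Electron capacities implied by the quantum-number rules plus the Pauli
# exclusion principle: one orbital per m_l value, two spin states per orbital.
def subshell_capacity(l):
    orbitals = 2 * l + 1              # m_l = -l, ..., 0, ..., +l
    return 2 * orbitals

def shell_capacity(n):
    return sum(subshell_capacity(l) for l in range(n))  # l runs from 0 to n-1

print([subshell_capacity(l) for l in range(4)])   # [2, 6, 10, 14] -> s, p, d, f
print([shell_capacity(n) for n in range(1, 5)])   # [2, 8, 18, 32], i.e. 2n^2
```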
A common method of describing this organization is by listing the electrons according to their shells and subshells in a convention called spectroscopic notation. In this notation, the shell number is shown as an integer, the subshell as a letter (s, p, d, f), and the total number of electrons in the subshell (all orbitals, all spins) as a superscript.

He was largely correct: Matter is necessarily connected due to the Spherical Standing Wave Structure of Matter; but due to lack of knowledge of the system as a whole (the Universe), and the fact that it is impossible to determine an infinite system of which our finite spherical universe is a part (see the article on Cosmology), there arises the chance and uncertainty found in Quantum Theory.
QED is founded on the assumption that charged 'particles' somehow generate spherical electromagnetic vector In- and Out-Waves (a dynamic version of Lorentz's Theory of the Electron, as Feynman uses spherical electromagnetic waves rather than static force fields). It is important to realise, though, that like most post-modern physicists, Richard Feynman was a Logical Positivist. Thus he did not believe in the existence of either particles or waves; he simply used this conceptual language as a way of representing how matter behaves in a logical way.
This explains why he had such success and such failure at the same time: he had the correct spherical wave structure of Matter, but he continued with two further errors, the existence of the particle, and the use of vector 'electromagnetic' waves (mathematical waves of force) rather than the correct scalar 'quantum' waves.
It is this error of Feynman's that ultimately led Wolff to make his remarkable discoveries of the WSM. Firstly, there is the problem of 'renormalisation': Feynman must assume finite dimensions for the particle, else the spherical electromagnetic waves would reach infinite field strengths as the radius of the spherical electromagnetic waves tends to zero.
There must be some non-zero cut-off that is arbitrarily introduced by having a 'particle' of a certain finite size. Effectively, Feynman gets infinities in his equations, then subtracts infinity from infinity and puts in the correct empirical answer, which is not good mathematics, but it does then work extraordinarily well! Secondly, it is a mathematical fact that there are no vector wave solutions of the Maxwell Equations, which found the electromagnetic field, in spherical co-ordinates!
These are profound problems that have caused contradiction and paradox within Quantum Theory to the present day, and have led to the self-fulfilling belief that we can never correctly describe and understand Reality. So theoretical physics has given up on that. In fact Nature behaves in a very sensible and logical way, which explains why mathematical physics exists as a subject and can describe so many phenomena, and also explains how we 'humans' have been able to evolve a logical aspect to our minds!
That it is not Nature which is strange, but our incorrect conceptions of Nature! Most importantly, the simple, sensible solutions to these problems can be easily understood once we know the correct Wave Structure of Matter. Richard Feynman's PhD thesis with J. Wheeler used spherical IN (advanced) and OUT (retarded) e-m waves to investigate this spherical e-m field effect around the electron, and how accelerated electrons could emit light (e-m radiation) to be absorbed by other electrons at a distance in space.
One vexing problem of this e-m field theory was that it led to infinitely high fields (singularities) at the center of the point-particle electron. This was avoided with a mathematical process called renormalisation, whereby infinity was subtracted from infinity and the correct experimental result was substituted into the equation. It was Dirac who pointed out that this is not good mathematics, and Feynman was well aware of this!
In , Paul Dirac wrote: I must say that I am very dissatisfied with the situation, because this so-called good theory does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small, not neglecting it just because it is infinitely great and you do not want it! (Dirac, )

Richard Feynman was obviously also aware of this problem, and had this to say about renormalisation.
But no matter how clever the word, it is what I call a dippy process! Having to resort to such hocus pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. I suspect that renormalisation is not mathematically legitimate. (Feynman, )

The inadequacy of this point of view manifested itself in the necessity of assuming finite dimensions for the particles in order to prevent the electromagnetic field existing at their surfaces from becoming infinitely large.
Feynman's spherical IN-OUT wave theory is largely correct, and of course explains his success, but his error of using vector e-m waves resulted in infinities at the point particle as the radius tended to zero, and this led to the errors of renormalisation. In reality, Matter, as a structure of scalar spherical quantum waves, has a finite wave amplitude at the Wave-Center (as observed), and thus eliminates the infinities and the problems of renormalisation found in Feynman's Quantum Electrodynamics (QED).
James Maxwell worked from the experimental (empirical) results of Faraday, Coulomb, etc. Maxwell was correct that light is a wave traveling with velocity c, but it is a wave developed from the interaction of the IN and OUT waves of two spherical standing waves whose Wave-Centers are bound in resonant standing wave patterns.
Thus it is the interaction of four waves which probably explains why there are four Maxwell Equations (M.E.). It is a mathematical fact that there are no spherical wave solutions of the vector M.E.; only the scalar 'quantum' wave equation has spherical wave solutions. The failure of the M.E. on a spherical surface means that if you attempt to comb down an E field (the hair representing the electric vector) everywhere flat onto a tennis ball (a spherical surface), you must create a 'cowlick' somewhere on the ball which frustrates your attempt to comb it. The questions arise: why did theorists continue to favour the e-m field, the photon, and the M.E.?
Why were alternative descriptions of nature not sought? We suspect the answer is because it worked once the errors were removed with a bit of 'hocus pocus' mathematics and the aid of empirical data. Unfortunately, this logical positivist view to retain the point particle and vector force fields has been the root cause of the many paradoxes and mysteries surrounding quantum theory. The resulting confusion has been increasingly exploited in the popular press. Instead of searching for the simple behaviour of nature, the physics community found that 'wave-particle duality' was an exciting launching pad for more complex proposals that found support from government funding agencies.
The search for truth was put into limbo and wave-particle duality reigned. Once we understand, though, that the particle theory of matter is a mathematical logical positivist description of nature, then it becomes less confusing. Essentially the particle is a mathematical construction to describe energy exchange. It says nothing about the energy exchange mechanism, and thus makes no comment about how the particle exists, how it moves through Space, what the Space around the particle is made of, or how matter particles 'emit' and 'absorb' photon particles with other matter particles distant in Space.
Let us then consider one fundamentally important argument of Feynman's that light must be a particle: light behaves as particles. When experiments were developed that were sensitive enough to detect a single photon, the wave theory predicted that the clicks of a photomultiplier would get softer and softer, whereas they stayed at full strength - they just occurred less and less often. No reasonable model could explain this fact.
This state of confusion was called the wave-particle duality of light. Feynman, though, is incorrect in two ways. Firstly, he is making unjustified assumptions beyond what is observed: it is true that light energy is emitted and absorbed in discrete amounts between two electrons, but this does not require that a separate photon particle travels between them. Secondly, the solution is to realize that the Spherical Standing Wave Structure of Matter actually demands that all energy exchanges for light be of discrete amounts, because this is what occurs for 'Resonant Coupling', and for standing Wave interactions in general.
So now, I present to you the three basic actions from which all the phenomena of light and electrons arise. Action 2: an Electron resonantly couples with another Electron ('emits or absorbs a photon').
Once we realise that there are no separate electron or photon particles, we remove the problem of how an electron particle can interact with a separate photon particle! This solution is thus actually more consistent and simpler than Feynman's QED, particularly when we consider Feynman's further explanation of a positron as an electron which goes backwards in Time. The backwards-moving electron, when viewed with time moving forwards, appears the same as an ordinary electron, except that it is attracted to normal electrons - we say it has a positive charge.
For this reason it's called a positron. The positron is a sister particle to the electron, and is an example of an anti-particle. This phenomenon is general. Every particle in Nature has an amplitude to move backwards in time, and therefore has an anti-particle. As Wolff explains, this is simply a mathematical truth caused by the fact that a negative time in the wave equations changes the phase of the standing waves to be equal and opposite, which corresponds to antimatter.
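Wolff's point that negative time simply flips the phase can be illustrated with a short symbolic check (sympy; the standard complex spherical wave forms below are my own choice of illustration, not taken from the text): substituting t with -t conjugates the temporal phase factor, and turns an out-wave into an in-wave, with no particle 'moving backwards in time' required.

```python
import sympy as sp

r, t, k, w = sp.symbols('r t k w', positive=True)

# Illustrative spherical out-wave and in-wave with explicit complex phase
out_wave = sp.exp(sp.I * (k * r - w * t)) / r
in_wave  = sp.exp(sp.I * (k * r + w * t)) / r

# Reversing time maps the out-wave onto the in-wave...
print(sp.simplify(out_wave.subs(t, -t) - in_wave))  # 0

# ...because t -> -t conjugates the temporal phase factor
time_factor = sp.exp(-sp.I * w * t)
print(sp.simplify(time_factor.subs(t, -t) - sp.conjugate(time_factor)))  # 0
```

So 'negative time' amounts to an equal-and-opposite phase, exactly the mathematical fact Wolff identifies with antimatter.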
Antimatter does not move 'backwards in time'! Further, notice what Feynman says about photons, which are treated as particles in QED: by Feynman's logic there should also be anti-photons. The WSM is clear on this point - there are anti-electrons (positrons), which are opposite-phase Spherical Standing Waves, but there are no separate photon particles, and thus no anti-photons!
And what about photons? Photons look exactly the same in all respects when they travel backwards in time, so they are their own anti-particles. You see how clever we are at making an exception part of the rule! While it may be clever, it is not good philosophy, and it has led to a very confused and absurd modern physics. Surely it is time for physicists to start considering the fundamental theoretical problems of the existing theories and to appreciate that the Metaphysics of Space and Motion and the Spherical Wave Structure of Matter is a simple, sensible, and obvious way to solve these problems!
Finally, let us explain how we can experimentally confirm the Spherical Wave Structure of Matter, which would obviously be very convincing to the skeptics! In 1935, Albert Einstein, Podolsky, and Rosen (EPR) put forward a gedanken (thought) experiment whose outcome they thought was certain to show that there existed natural phenomena that quantum theory could not account for. The experiment was based on the concept that two events cannot influence each other if the distance between them is greater than the distance light could travel in the time available.
In other words, only local events inside the light sphere can influence one another. Their experimental concept was later used by John Bell to frame a theorem which showed that either the statistical predictions of quantum theory or the Principle of Local Events is incorrect. It did not say which one was false, but only that both cannot be true, although it was clear that Albert Einstein expected the Principle to be affirmed. The Principle of Local Events failed, forcing us to recognize that the world is not the way it appears. What then is the real nature of our world? The important impact of Bell's Theorem and the experiments is that they clearly thrust a formerly purely philosophical dilemma of quantum theory into the real world.
They show that post-modern physics' ideas about the world are somehow profoundly deficient. No one understood these results, and only scant scientific attention has been paid to them. Bell's theorem relates to the results of an experiment like the one shown in Figure 1, in which coincidences (simultaneous detections) are recorded and plotted as a function of the angular difference between the two settings of the polarization filters.
At opposite sides are located two detectors of polarized photons. The polarization filters of each detector can be set parallel to each other, or at some other angle, freely chosen. It is known that polarizations of paired photons are always parallel to each other, but random with respect to their surroundings. So, if the detector filters are set parallel, both photons will be detected simultaneously. If the filters are at right angles, the two photons will never be detected together. The detection pattern for settings at intermediate angles is the subject of the theorem.
Bell and Albert Einstein, Podolsky, and Rosen assumed that the photons arriving at each detector could have no knowledge of the setting of the other detector. This is because they assumed that such information would have to travel faster than the speed of light - prohibited by Albert Einstein's Special Relativity. Their assumption reflects the Principle of Local Causes, that is, only events local to each detector can affect its behaviour. Based on this assumption, Bell deduced that the relationship between the angular difference between detector settings and the detected coincidences of photon pairs was linear, like line L in Figure 1.
His deduction comes from the symmetry and independence of the two detectors, as follows: a setting difference of X at one detector has the same effect as a difference X at the other detector. Hence if both are moved by X, the total angular difference is 2X and the total effect is twice as much - a linear relationship. The experimental results agree with the curve QM predicted by quantum mechanics, and do not agree with the line L predicted by Albert Einstein's concept of causality.
This was a big surprise, because the failure of causality suggests that the communication is taking place at speeds greater than the velocity of light. The curved line is the calculation obtained from standard quantum theory. Bell, Albert Einstein, Podolsky, and Rosen, or anyone who does not believe in superluminal speeds, would expect to find line L. In fact, the experiments yielded points R, which agreed with line QM. The predictions of quantum theory had destroyed the assumptions of Albert Einstein, Podolsky and Rosen!
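The two competing predictions can be made concrete in a few lines of code (the function names are mine; rates are normalized so that parallel filters give 1 and crossed filters give 0, matching the description of Figure 1). Quantum mechanics predicts a cos-squared dependence on the filter angle difference, while Bell's local-realist line L runs straight from 1 at 0 degrees to 0 at 90 degrees:

```python
import math

def qm_coincidence(delta_deg):
    """Quantum prediction: coincidence rate falls as cos^2 of the
    angular difference between the two polarization filters."""
    return math.cos(math.radians(delta_deg)) ** 2

def local_linear(delta_deg):
    """Bell's local-realist line L: linear from 1 at 0 deg to 0 at 90 deg."""
    return 1.0 - delta_deg / 90.0

# The two models agree at 0, 45 and 90 degrees but diverge in between;
# experiment follows the QM curve
for d in (0.0, 22.5, 45.0, 67.5, 90.0):
    print(f"{d:5.1f} deg  QM = {qm_coincidence(d):.4f}  L = {local_linear(d):.4f}")
```

The divergence is largest near 22.5 and 67.5 degrees (about 0.85 versus 0.75, and 0.15 versus 0.25), which is where the experimental points R discriminate between the two lines.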
The results of these experiments were so disbelieved that they were repeated by other groups, using different photon sources as well as particles with paired spins. The most recent experiment, by Aspect, Dalibard, and Roger, used acousto-optical switches at a frequency of 50 MHz, which shifted the settings of the polarizers during the flight of the photons in order to completely eliminate any possibility of local effects of one detector on the other.
Bell's Theorem and the experimental results imply that parts of the universe are connected in an intimate way, i.e. non-locally.
How can we understand them? Those authors tend to agree on the following description of the non-local connections:
1. They link events at separate locations without known fields or matter.
2. They do not diminish with distance; a million miles is the same as an inch.
3. They appear to act with speed greater than light.
Clearly, within the framework of science, this is a perplexing phenomenon. In some mysterious quantum way, communication does appear to take place faster than light between the two detectors of the apparatus. These results showed that our understanding of the physical world is profoundly deficient. The Spherical Wave Structure of Matter, particularly the behaviour of the In and Out Waves, is able to resolve this puzzle so that the appearance of instant communication is understood and yet neither Albert Einstein nor QM need be wrong.
Remember that for resonant coupling it is necessary for the In and Out Waves of both electrons to interact with one another. The passage of both In-Waves through both Wave-Centers precedes the actual frequency shifts of the source and detector. The usual photo-detector apparatus has no means to detect this first-passage event, which therefore remains totally unnoticed.
But the In-Waves are symmetrical counterparts of the Out-Waves and carry the information of their polarization state between parts of the experimental apparatus before the Out-Waves cause a departing photon event. The IN-waves travel with the speed of light so there is no violation of relativity. At this point you may be inclined to disbelieve the reality of the In-Wave. But there is other evidence for it.
Remember, it explains the de Broglie wavelength and thereby QM. It is necessary to explain the relativistic mass increase of a moving object and the symmetry in its direction of motion. It is responsible for the finite force of the SR electron at its center. Are all of these merely coincidence? In particular, it is the combination of In and Out Waves which explains these laws, not just the In-Waves.