Yang-Mills quantum field theory dynamics predicting Standard Model forces, particle masses, general relativity, and cosmology, with empirical evidence and proofs of mechanisms which make other predictions, allowing independent verification (this page is in revision as of 14 December 2006; see https://nige.wordpress.com and http://electrogravity.blogspot.com for recent updates which will be included)
*****
In 1954, Chen Ning Yang and Robert Mills developed a theory of photon (spin-1 boson) mediator interactions in which the spin of the photon changes the quantum state of the matter emitting or receiving it by inducing a rotation in a Lie group symmetry. The amplitude for such emissions is forced, by an empirical coupling constant insertion, to give the measured Coulomb value for the electromagnetic interaction. Gerardus 't Hooft and Martinus Veltman in 1970 argued that the Yang-Mills theory is the only model for Maxwell's equations which is consistent with quantum mechanics and the empirically validated results of relativity. The photon Yang-Mills theory is U(1). Equivalent Yang-Mills interaction theories of the strong force, SU(3), and the weak force, SU(2), in conjunction with the U(1) force, result in the symmetry group product SU(3) x SU(2) x U(1), which is the Standard Model. Here the SU(2) group must act only on left-handed spinning fermions, breaking the conservation of parity.
Mediators conveying forces are called gauge bosons: 8 types of gluon for the SU(3) strong force, 3 particles (Z, W+, W-) for the weak force, and 1 type of photon for electromagnetism. The strong and weak forces are empirically known to be very short-ranged, which implies they are mediated by massive bosons, unlike the photon, which is said to lack mass although it really carries momentum and has mass in a sense. The correct distinction is not concerned with 'the photon having no rest mass' (because it is never at rest anyway), but with velocity: the photon actually goes at light velocity, while all the other gauge bosons travel slightly more slowly. Hence there is a total of 12 different gauge bosons. The problem with the Standard Model at this point is the absence of a model for particle masses: SU(3) x SU(2) x U(1) does not describe mass, and so is an incomplete description of particle interactions. In addition, the exact mechanism which breaks the electroweak symmetry SU(2) x U(1) at low energy is speculative. The force-carrying radiation emission and reception which drives forces is distinct from heat: Coulomb's law is no different in a bonfire, and you need 90 GeV collisions before Coulomb's law becomes 7% stronger (Levine, Koltick, et al., PRL, 1997). The Standard Model is the most accurately tested predictive theory in history: it predicts the magnetic moments of the electron and muon, and the Lamb shift, to around 10 decimal places, and it predicts the decay rates of particles.
‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space [emphasis added] is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, Not Even Wrong, Cape, London, 2006, p189.
Loop quantum gravity (LQG) is compatible with Yang-Mills quantum field theory where the loop is due to the exchange of force-causing gauge bosons from one mass to another and back again. Over vast distances in the universe, this predicts that redshift of the gauge bosons weakens the gravitational coupling constant. Hence it predicts the need to modify general relativity in a specific way to incorporate quantum gravity: cosmic-scale gravity effects are weakened. This indicates that gravity isn't slowing the recession of matter at great distances, which is confirmed by observations. As Nobel Laureate Professor Philip Anderson stated: '… the flat universe is just not decelerating, it isn't really accelerating …' - http://cosmicvariance.com/2006/01/03/dangerphilanderson
Things accelerated by a gravity field are losing gravitational potential energy and gaining kinetic energy, so the exchange radiation carries energy. If the LQG spin-foam vacuum does describe a Yang-Mills energy exchange scheme, you can get solid, checkable predictions by taking account of the effect of the expansion of the universe on these conserved gravity field mediators.
In May 1996 it was recognised that a physical mechanism of gravity could be constructed which predicts general relativity with corrections, because gravity is the result of the recession of the surrounding mass of the universe outward from us in space and time. This modification predicted that gravitational attraction does not slow down the distant matter, contrary to the standard general relativity then in use, and it also gave hope of predicting the strength of gravity. It was published in the October 1996 issue of Electronics World.
‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ – Hermann Minkowski, 1908.
When we look to greater distances, we're seeing earlier times. So an increase in velocity with earlier times (greater distances) can legitimately be interpreted objectively, from our frame of reference, as a variation of velocity as a function of time. The Standard Model is the best-tested physical theory: forces result from radiation exchange in spacetime. Mass recession speeds run from 0 to c in a spacetime of 0 to 15 billion years, so the outward force F = m.dv/dt ~ m(c - 0)/(age of universe) = mc/t ~ mcH ~ 10^{43} N (H is Hubble's parameter). Newton's 3rd law implies an equal inward force which, according to the possibilities implied by the Standard Model, is carried by exchange radiation. This causes gravity, the contraction in general relativity, etc. The actual calculation is complicated by the fact that the distant (and early) universe, which contributes most to gravity, introduces modifications: (1) the gauge boson radiation carries less energy per unit time, and contributes less effect, when it becomes excessively redshifted from very early times after the big bang, and (2) the density of the earlier universe, which contributes to the gravity we perceive, was greater than the present density of the universe. Each effect offsets the other: (1) tends to reduce the overall inward force of gauge bosons, while (2) tends to increase it. The exact offset may be calculated, however, by using divergence to model redshift as photon 'stretching' and by using a density variation which goes as the inverse cube of the radius of the universe. The result predicts the gravitational constant G to within 2%: F = (3/4)mMH^{2}/(Pi*Rho*r^{2}*e^{3}) ~ 6.7 x 10^{-11} mM/r^{2} newtons, correct within 2% for consistent (interdependent observational) values of the Hubble constant and density. A fundamental particle of mass M has a cross-sectional space pressure shielding area of Pi*(2GM/c^{2})^{2}. This makes many predictions, which will be summarised in section 1.2.
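As a quick sanity check on the claimed 2% agreement, here is a minimal Python sketch of the prediction G = (3/4)H^{2}/(Pi*Rho*e^{3}). The input values are illustrative assumptions of mine (H ~ 70 km/s/Mpc and a visible-matter density of ~9.2 x 10^{-28} kg/m^3), not figures quoted in this page:

import math

# Illustrative inputs (my assumptions, not the author's exact data):
H = 70e3 / 3.0857e22     # Hubble parameter ~70 km/s/Mpc, converted to 1/s
rho_local = 9.2e-28      # assumed local (visible-matter) density, kg/m^3

# The text's prediction: G = (3/4) H^2 / (pi * rho_local * e^3)
G_predicted = 0.75 * H**2 / (math.pi * rho_local * math.e**3)

print(f"predicted G = {G_predicted:.3g} m^3 kg^-1 s^-2")   # ~6.65e-11
print("measured  G = 6.674e-11 m^3 kg^-1 s^-2")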
The radiation is received by mass almost equally from all directions, coming from other masses in the universe; the radiation is in effect reflected back the way it came if there is symmetry that prevents the mass from being moved. The result is then a mere compression of the mass by the amount mathematically predicted by general relativity, i.e., the radial contraction is by the small distance MG/(3c^{2}) = 1.5 mm for the contraction of the spacetime fabric by the mass of the Earth. Plotting the earth and the average-distance circles of the observable distant receding matter (not to scale), the geometry of the mechanism becomes clear:
A local mass shields the force-carrying radiation exchange, because the distant masses in the universe have high-speed recession, but the nearby mass is not receding significantly. By Newton's 2nd law, the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = m.dv/dt = mv/(x/c) = mcv/x = 0, since v = 0. Hence, by Newton's 3rd law, the inward force of gauge bosons coming towards you from that mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you, so you get pushed towards it. This is why apples fall.
Illustration above: the exchange (gauge boson) radiation force cancels out (although there is a compression equal to the contraction predicted by general relativity) in symmetrical situations outside the cone area, since the net sideways force is the same in each direction unless there is a shielding mass intervening. Shielding is caused simply by the fact that nearby matter is not significantly receding, whereas distant matter is receding. Gravity is the net force introduced where a mass shadows you, namely in the double-cone areas shown above. In all other directions the symmetry cancels out and produces no net force. Hence gravity can be quantitatively predicted using only well-established facts of quantum field theory, recession, etc.
Gravity is not due to a surface compression but instead is mediated through the void between fundamental particles in atoms by exchange radiation, which does not recognise macroscopic surfaces but only interacts with the subnuclear particles associated with the elementary units of mass. The radial contraction of the earth's radius by gravity, as predicted by general relativity, is 1.5 mm. [This contraction of distance hasn't been measured directly, but the corresponding contraction, or rather 'dilation', of time has been accurately measured by atomic clocks which have been carried to various altitudes (where gravity is weaker) in aircraft. Spacetime tells us that where distance is contracted, so is time.] This contraction is not caused by a material pressure carried through the atoms of the earth, but is instead due to the gravity-causing exchange radiation, which is carried through the void (nearly 100% of atomic volume is void). Hence the contraction is independent of the chemical nature of the earth. (Similarly, the contraction of moving bodies is caused by the same exchange radiation effect, and so is independent of the material's composition.)
The effective shielding radius of a black hole of mass M is equal to 2GM/c^{2}. A shield, like the planet earth, is composed of very small, subatomic particles. The very small shielding area per particle means that there will be an insignificant chance of the fundamental particles within the earth ‘overlapping’ one another by being directly behind each other.
The total shield area is therefore directly proportional to the total mass: the total shield area is equal to the area of shielding by 1 fundamental particle, multiplied by the total number of particles. (Newton showed that a spherically symmetrical arrangement of masses, say in the earth, gives by the inverse-square gravity law the same gravity as the whole mass located at the centre, because the mass within a shell depends on its area and the square of its radius.) In the Standard Model, the earth's mass is due to the field particles associated with its up and down quarks: the Higgs field.
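To illustrate why 'overlap' is negligible, here is a rough Python sketch using the text's effective shielding radius 2GM/c^{2}, with the electron mass chosen (my assumption, purely for illustration) as the fundamental particle:

import math

G, c = 6.674e-11, 2.998e8
m_e = 9.109e-31                          # electron mass, kg (illustrative particle)

r_shield = 2 * G * m_e / c**2            # the text's shielding radius, 2GM/c^2
area = math.pi * r_shield**2             # shielding cross-section per particle

M_earth = 5.972e24
N = M_earth / m_e                        # number of electron-mass units in the earth
total_shield = N * area
earth_disc = math.pi * 6.371e6**2        # earth's geometric cross-section

print(f"shield radius per particle : {r_shield:.2e} m")      # ~1.4e-57 m
print(f"total shield area (earth)  : {total_shield:.2e} m^2") # ~3.8e-59 m^2
print(f"fraction of earth's disc   : {total_shield/earth_disc:.1e}")  # ~3e-73: no overlap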
From the illustration here, the total outward force of the big bang is (total outward force) = ma = (mass of universe).(Hubble acceleration, a = Hc, because a = dv/dt = (c - 0)/t = c/(1/H) = Hc), while the gravity force is the shielded inward reaction (by Newton's 3rd law the outward force has an equal and opposite reaction):
F = (total outward force).(cross-sectional area of shield projected to radius R) / (total spherical area with radius R).
The cross-sectional area of the shield projected to radius R is equal to the area of the fundamental particle (Pi multiplied by the square of the radius of the black hole of similar mass), multiplied by (R/r)^{2}, which is the inverse-square law for the geometry of the implosion. The total spherical area with radius R is simply 4*Pi*R^{2}. Inserting the simple Hubble law results c = RH and R/c = 1/H gives us F = (4/3)Pi*Rho*G^{2}M^{2}/(Hr)^{2}. (Notice the presence of M^{2} in this equation, instead of mM or similar; this is evidence that gravitational mass is ultimately quantized into discrete units of M; see this post for evidence, i.e., this schematic idealized diagram of polarization - in reality there are not two simple shells of charges, but that is similar to the net effect for the purpose required for the coupling of masses - and this diagram showing numerically how observable particle masses can be built up on the polarization shielding basis. More information is in posts and comments on this blog.) We then set this equal to F = Ma and solve, getting G = (3/4)H^{2}/(Pi*Rho). When the effect of the higher density in the universe at the great distance R is included, this becomes
G = (3/4)H^{2}/(Pi*Rho_{local}e^{3}).
This appears to be very accurate. Newton’s gravitation says the acceleration field a = MG/r^{2}.
STEP 1: Pressure is force/area. By geometry (illustrated here), the scaled area of shielding below you is equal to the area of space pressure above that is pushing you down. The shielded area of the sky is 100% if the shield mass is the mass of the universe, so: A_{shielding} = A_{r} M / M_{universe}.
Force, F = P_{space} A_{shielding} = (F_{space} /A_{r}).(A_{r}M/M_{universe}) = F_{space}.M/M_{universe}
Next (see step 2 below): introduce F_{space} = m_{space} a_{H}. Here, Hubble velocity variation in spacetime (v = HR) implies an acceleration equal to: a_{H} = dv/dt = c/t = c/(1/H) = cH = RH^{2}, while m_{space} = m(A_{R}/A_{r}) = m(R/r)^{2}, and the mass of the universe is its density, Rho, multiplied by its spherical volume, (4/3)*Pi*R^{3}.
Hence, F = F_{space}.M/M_{universe} = (m_{space} a_{H})M/M_{universe} = m(R/r)^{2}(RH^{2})M/ [(Rho*4*Pi*R^{3} /3)]
STEP 2: Air flows around you like a wave as you walk down a corridor (an equal volume goes in the other direction at the same speed, filling in the volume you are vacating as you move). It is not possible for the surrounding fluid to move in the same direction as you, or a void would form behind you and fluid pressure would continuously increase in front until motion stopped. Therefore, an equal volume of the surrounding fluid moves in the opposite direction at the same speed, permitting uniform motion to occur! Similarly, as fundamental particles move in space, a similar amount of mass-energy in the fabric of space (spin-foam vacuum field) is displaced as a wave around the particles in the opposite direction, filling in the void volume being continuously vacated behind them. For the mass of the big bang, the mass-energy of the mass-causing field of virtual particles in the moving fabric of space is similar to the mass of the universe. As the big bang mass goes outward, the fabric of space goes inward around each fundamental particle, filling in the vacated volume. (This inward-moving fabric of space exerts pressure, causing the force of gravity.)
‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp. 32-3.
The effective mass of the spacetime fabric moving inward which actually produces the gravity effect is equal to that which is exactly shielded by the mass (illustrated here). So m_{space} = m, but we also have to allow for the greater distance of the mass which is producing the gravity force by implosion. To take account of focussing due to the ‘implosion’ of space fabric pressure (see diagram) converging in to us in step 1 above (illustration above), we scale the mass to the shielding area because mass is due to shielding area per fundamental particle: m_{space} /m = A_{R} / A_{r}. Hence: m_{space} = mA_{R} / A_{r} = m(R/r)^{2}. This is because nearby areas on which force acts to produce pressure are much smaller than the area of sky at the very great distances where the recession and density are high and produce the source of space pressure and thus gravity.
The big bang recession velocities vary from 0 to c as the observable time after the big bang varies from 15,000 million years (nearby) towards zero (at the greatest distances), so the matter of the universe has an effective outward acceleration of c divided by the age of the universe. This acceleration, a = c/t = cH = RH^{2}, where H is the Hubble constant (from v = HR), is so small that its effects are generally undetectable. (Notice that if we could see and experience forces instantly, the universe would not show this acceleration. The acceleration is only real because we can't see the universe at a uniform age of 15 Gyr irrespective of distance. By Newton's 2nd law, the actual outward force, when properly allowing for the varying effective density of the observed universe as a function of spacetime, is large, and by Newton's 3rd law it has an equal and opposite reaction: the inward force which, where shielded, is gravity.)
F = m(R/r)^{2}(RH^{2})M/ [(Rho*4*Pi*R^{3} /3)] = (3/4)mMH^{2}/(Rho*Pi*r^{2})
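The algebra of this simplification (the R factors cancelling) can be machine-checked; a small sympy sketch, added here purely as verification:

import sympy as sp

m, M, R, r, H, rho = sp.symbols('m M R r H rho', positive=True)

F = m * (R/r)**2 * (R*H**2) * M / (sp.Rational(4, 3) * sp.pi * rho * R**3)
print(sp.simplify(F - sp.Rational(3, 4)*m*M*H**2/(rho*sp.pi*r**2)))  # -> 0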
Next, for mass continuity: d(Rho)/dt = div.(Rho*v) = 3*Rho*H. Hence Rho = Rho_{local}*e^{3} (the early visible universe has higher density). The reason for multiplying the locally measured density of the universe up by a factor of about 20 (the number e^{3}, the cube of the base of natural logarithms) is that it is the denser, more distant universe which contains most of the mass producing most of the inward pressure. Because we see further back in time with increasing distance, we see a more compressed age of the universe. The gravitational push comes to us at light speed, with the same velocity as the visible light that shows the stars. Therefore we have to take account of the higher density at earlier times. What counts is what we see, the spacetime in which distance is directly linked to time past, not the simplistic picture of a universe at constant density, because we can never see or experience gravity from such a thing, due to the finite speed of light. The mass continuity equation d(Rho)/dt = div.(Rho*v) is simple hydrodynamics based on Green's theorem, and allows the Hubble law (v = HR) to be inserted and solved. An earlier method of calculation, in the notes of CERN preprint EXT-2004-007, is to set up a formula for the density at any particular time past, so as to calculate redshifted contributions to inward spacetime fabric pressure from a series of shells surrounding the observer. This gives the same result, Rho = Rho_{local}*e^{3}.
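A short sympy sketch (my addition) of the continuity relation as written in the text, showing where the factor e^{3} ~ 20 comes from when integrated over one Hubble time t = 1/H:

import sympy as sp

t, H = sp.symbols('t H', positive=True)
rho = sp.Function('rho')

# the text's mass-continuity relation: d(rho)/dt = 3*rho*H (density grows looking back in time)
sol = sp.dsolve(sp.Eq(rho(t).diff(t), 3*H*rho(t)), rho(t))
print(sol)                 # rho(t) = C1*exp(3*H*t)
print(float(sp.exp(3)))    # over one Hubble time t = 1/H: factor e^3 ~ 20.1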
F = (3/4)mMH^{2}/(Pi*r^{2}*Rho_{local}*e^{3}) = mMG/r^{2}, where G = (3/4)H^{2}/(Pi*Rho_{local}*e^{3}) = 0.0119H^{2}/Rho_{local} = 6.7 x 10^{-11} N m^{2} kg^{-2}, already accurate to within 1.65% using reliable supernovae data reported in Physical Review Letters! If there were any other reason for gravity with similar accuracy, the strength of gravity would then be twice what we measure, so this is a firm, testable prediction/confirmation that can be checked even more delicately as more evidence becomes available from current astronomy research.
Let's go right through the derivation of the Einstein-Hilbert field equation in a non-obfuscating way. To start with, the classical analogue of general relativity's field equation is Poisson's equation:

div^{2}E = 4*Pi*Rho*G

Here div^{2} (the divergence of the gradient) is just the Laplacian operator (well known in heat diffusion) acting on E, and for radial symmetry (r = x = y = z) of a field it implies:

div^{2}E = d^{2}E/dx^{2} + d^{2}E/dy^{2} + d^{2}E/dz^{2} = 3*d^{2}E/dr^{2}
To derive Poisson's equation in a simple way (not mathematically rigorous), observe that for non-relativistic situations the energy per unit mass of a test particle is:

E = (1/2)v^{2} = MG/r

(The kinetic energy gained per unit mass by a test particle falling to distance r from mass M is simply the gravitational potential energy per unit mass gained at that distance by the fall!)

Now, observe that for spherical geometry and uniform density (where density Rho = M/[(4/3)*Pi*r^{3}]):

4*Pi*Rho*G = 3MG/r^{3} = 3[MG/r]/r^{2}

So, since E = (1/2)v^{2} = MG/r:

4*Pi*Rho*G = 3[(1/2)v^{2}]/r^{2} = (3/2)(v/r)^{2}

Here the ratio v/r = dv/dr when translating to a differential equation, and as already shown, div^{2}E = 3*d^{2}E/dr^{2} for radial symmetry, so:

4*Pi*Rho*G = (3/2)(dv/dr)^{2} = div^{2}E

Hence we have Poisson's gravity field equation:

div^{2}E = 4*Pi*Rho*G.
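The standard textbook check of Poisson's equation, which this heuristic is aiming at, can be verified symbolically. The sketch below (my addition) uses the interior potential of a uniform-density sphere, up to an additive constant, and the radial part of the Laplacian in spherical coordinates, rather than the heuristic route above:

import sympy as sp

r, R, M, G = sp.symbols('r R M G', positive=True)

rho = M / (sp.Rational(4, 3) * sp.pi * R**3)    # uniform density of a sphere of radius R
phi = G * M * r**2 / (2 * R**3)                 # interior potential (up to a constant)

laplacian = sp.diff(r**2 * sp.diff(phi, r), r) / r**2    # radial Laplacian in spherical coords
print(sp.simplify(laplacian - 4*sp.pi*rho*G))            # -> 0, i.e. Poisson's equation holds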
To get this expressed as tensors, you begin with a Ricci tensor R_{uv} for curvature (a contracted Riemann tensor) and guess at a field equation of the form:

R_{uv} = k*T_{uv},

where T_{uv} is the energy-momentum tensor, which includes potential energy contributions due to pressures but is analogous to the density term Rho in Poisson's equation (the density of mass can be converted into energy density simply by using E = mc^{2}), and k is a constant to be fixed by the Newtonian (Poisson) limit.

However, this equation R_{uv} = k*T_{uv} was found by Einstein to be a failure, because the divergence of T_{uv} should be zero if energy is conserved. (A uniform energy density will have zero divergence, and T_{uv} is of course a density-type parameter. The energy potential of a gravitational field doesn't have zero divergence, because it diverges - falls off - with distance, but a uniform density has zero divergence simply because it doesn't fall with distance!)

The only way Einstein could correct the equation (so that the divergence of the source term is zero) was by replacing T_{uv} with T_{uv} - (1/2)(g_{uv})T, where g_{uv} is the metric tensor and T is the trace of the energy-momentum tensor. Requiring the Newtonian limit then fixes the constant at k = 8*Pi*G, because the trace term halves the effective source for weak fields, recovering Poisson's 4*Pi*Rho*G:

R_{uv} = 8*Pi*G*[T_{uv} - (1/2)(g_{uv})T]

which is equivalent (by taking the trace, where R is the trace of the Ricci tensor) to:

R_{uv} - (1/2)Rg_{uv} = 8*Pi*G*T_{uv}

This is the full general relativity field equation (ignoring the cosmological constant and dark energy, which is incompatible with any Yang-Mills quantum gravity because, to use an oversimplified argument, the redshift of gravity-causing exchange radiation between receding masses over long ranges cuts off gravity, negating the need for dark energy to explain observations).
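The trace rearrangement in the last step can be verified mechanically. This sympy sketch (my addition) checks the identity for a symmetric T_{uv} with a Minkowski metric, treating the constant k = 8*Pi*G symbolically:

import sympy as sp

# Check: R_uv = k*(T_uv - (1/2)*g_uv*T)  <=>  R_uv - (1/2)*R*g_uv = k*T_uv
k = sp.symbols('k')
g = sp.diag(1, -1, -1, -1)          # Minkowski metric (diagonal, so traces are simple sums)
g_inv = g.inv()

T = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'T{min(i, j)}{max(i, j)}'))  # symmetric T_uv
T_trace = sum(g_inv[i, i] * T[i, i] for i in range(4))

R_uv = k * (T - sp.Rational(1, 2) * g * T_trace)
R_scalar = sum(g_inv[i, i] * R_uv[i, i] for i in range(4))

lhs = R_uv - sp.Rational(1, 2) * R_scalar * g
print(sp.simplify(lhs - k * T))     # -> zero matrix, confirming the equivalence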
There are no errors as such in the above, but there are errors in the way the metric is handled and in the ignorance of quantum gravity effects on the gravity strength constant G. I've received nothing but ignorant pseudo-personal abuse from string theorists. An early version on CERN can't be updated via arXiv, apparently because arXiv is controlled in the relevant section by mainstream string theorists. Newton never expressed a gravity formula with the constant G, because he didn't know what the constant was (that was measured by Cavendish much later).
Newton did have empirical evidence, however, for the inverse-square law. He knew the earth has a radius of 4,000 miles and the moon is a quarter of a million miles away; hence, by the inverse-square law, gravity should be (250,000/4,000)^{2} = 3,900 times weaker at the moon than the 32 ft/s/s at the earth's surface. Hence the gravitational acceleration due to the earth's mass at the moon is 32/3,900 = 0.008 ft/s/s.

Newton's formula for the centripetal acceleration of the moon is a = v^{2}/(distance to moon), where v is the moon's orbital velocity, v = 2*Pi*[250,000 miles]/[27 days] ~ 0.67 mile/second; hence a = 0.0096 ft/s/s.

So Newton had evidence that the gravity from the earth at the moon's radius is approximately the same (0.008 ft/s/s ~ 0.0096 ft/s/s) as the centripetal acceleration of the moon. The gravity law we have derived from experimental facts is a complete mechanism that predicts gravity and the contraction of general relativity.
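Newton's arithmetic is easy to reproduce; a short Python check, in the same units as the text:

import math

# Newton's consistency check (miles, feet, seconds)
g_surface = 32.0                     # ft/s^2 at the earth's surface
r_earth = 4000.0                     # miles
r_moon = 250000.0                    # miles

g_moon = g_surface * (r_earth / r_moon)**2
print(f"inverse-square prediction : {g_moon:.4f} ft/s^2")    # ~0.008

v = 2 * math.pi * r_moon / (27 * 24 * 3600.0)                # orbital speed, miles/s (~0.67)
a = v**2 / r_moon                                            # centripetal acceleration, miles/s^2
print(f"centripetal acceleration  : {a * 5280:.4f} ft/s^2")  # ~0.0096 (5,280 ft per mile)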
The naïve application of general relativity to a so-called 'flat' spacetime cosmology (one which is just balanced between eventual collapse and eternal expansion, so that the expansion rate is forever falling) gives rise to the Friedmann critical density (ignoring the small effect of the pseudo dark energy and its pseudo cosmological constant, Lambda): Rho = 3H^{2}/(8*Pi*G). In this model the retarding effect of gravity is to make the expanding radius of the matter of the universe proportional to the two-thirds power of time, R ~ t^{2/3}, with the current age of the universe t = (2/3)/H, where H is the Hubble parameter given by H = v/R. This falsely assumes that gravity is actually slowing down the expansion of the universe, which is why the 2/3 fraction is there. However, experimental evidence shows that there is no gravitational retardation. So the correct age of the universe is t = 1/H, and the correct expansion rate is R ~ t, not R ~ t^{2/3}.
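Numerically, the difference between the two ages is large. A quick Python comparison, assuming (my choice, purely for illustration) H ~ 70 km/s/Mpc:

# Age of the universe: decelerating Friedmann model (t = (2/3)/H) vs. the text's t = 1/H
H = 70e3 / 3.0857e22                  # assumed H ~ 70 km/s/Mpc, converted to 1/s
sec_per_Gyr = 3.156e16

print(f"t = 1/H     : {1/H/sec_per_Gyr:.1f} Gyr")       # ~14 Gyr
print(f"t = (2/3)/H : {(2/3)/H/sec_per_Gyr:.1f} Gyr")   # ~9.3 Gyr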
The reason for the lack of observed gravitational retardation is 'explained' by the ad hoc value of the epicycle of dark energy (which powers the cosmological constant) in the quantum vacuum. However, the first observations of this came in 1998, whereas in 1996 Electronics World had published a paper with the non-ad hoc prediction that expansion powers gravitation and that expansion is not retarded by gravitation. Therefore this successful prediction should be impressive, as is the fact that the actual value of the universal gravitational constant G and various other parameters can be obtained by this mechanism and its extensions to other forces. However, it was removed from the arXiv.org server within a few seconds, without being read. This is apparently the fascism of 'string theory' at work to destroy any hope of progress. (Contrary to Popper and Kuhn, the mainstream will hold on to a non-checkable error - proved an error by the correctness of a totally different model - that pays them well, as long as the public enjoys entertainment by extra-dimensional string theory speculation and fantasy. The string theory landscape, with 10^{350} solutions to be found for the cosmological constant, can keep millions of sheep-like PhD students engaged in 'research' virtually forever, which is what a stringy community needs.)
The biggest error in general relativity as applied to cosmology (the cosmological constant, or Lambda, in the Lambda Cold Dark Matter model, assumes the universe is accelerating just enough to wipe out the gravitational deceleration) is the assumption that general relativity is the final theory of quantum gravity. It isn't. The nucleosynthesis of light elements in the first few minutes of the big bang indicates only that the ratio of gravity to electromagnetism was the same then as now. It doesn't indicate that gravity didn't change; merely that the ratio of gravitational to electromagnetic strength between protons was within 10% of today's value. The absolute strength could have been immensely different. This is because the fusion rate depends on the approach of charged nuclei, chiefly protons, caused by gravity (causing compression) as offset against Coulomb electric repulsion. The stronger gravity was, the more fusion would occur; but the stronger the electric force (Coulomb's law) was, the less the fusion rate. The determination, from the relative abundance of light elements, that gravity was the same in the first few minutes is purely due to the assumption that Coulomb's law was the same. All that determination proves is that the ratio gravity/electromagnetism was similar a minute after the big bang to today's value. If gravity had been a million times weaker, for example, the fusion rate would still have been the same, because electromagnetism is in fixed ratio to gravity, and the weaker Coulomb repulsion between approaching charges would have made fusion easier, counteracting the lower gravity strength's effect on the fusion rate. Fusion occurs when particles approach close enough - against electric repulsion (which is an inverse-square-of-distance law, similar in that sense to gravitation) - to be fused together by the short-ranged strong nuclear attractive force which exists between nucleons. So claims like 'G is observed to be constant', instead of the correct statement that 'the G/electromagnetism ratio is observed to be constant', are based on complete ignorance and are false.
{LIMIT OF EDITED SECTION AS OF 14 DECEMBER 2006. THE REMAINDER WILL BE EDITED LATER.}
Particle mass model
Above: polarisation of the vacuum around an electron core screens the core electric charge by the factor alpha (i.e., the observable charge is just 1 part in 137.036 of the full bare core charge). Below: how vacuum shielding and geometric factors account for the masses of all isolated observable particles.
‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle "is" an irreducible representation of the symmetry group of the universe.’ Wiki.
[Fig. 1 illustration omitted: ASCII diagram contrasting discrete (particulate) quantum fields with continuous classical fields.]
Fig. 1 - The incompatibility between the quantum fields of quantum field theory (which are discontinuous, particulate) and the continuous fields of classical theories like Einstein's general relativity and Maxwell's electromagnetism. The incompatibility between quantum field theory and general relativity is due to the Dirac sea, which imposes discrete upper and lower limits (called the UV/ultraviolet and the IR/infrared cutoffs, respectively) on the strengths of fields originating from particles.
Vacuum polarization picture: zone A is the UV cutoff, while zone B is the IR cutoff around the particle core; see http://thumbsnap.com/vf/FBeqR0gc.gif and http://photos1.blogger.com/blogger/1931/1487/1600/PARTICLE.4.gif. See also http://electrogravity.blogspot.com/2006/06/moreonpolarizationofvacuumand.html and http://nige.wordpress.com/2006/10/09/16/ for more information. To find out how to calculate the 137 polarization shielding factor (1/alpha), scroll down to the section below in this post headed 'Mechanism for the strong nuclear force.'
RENORMALIZATION AND LOOP QUANTUM GRAVITY
Dirac's sea correctly predicted antimatter and allows the polarization of the vacuum required in the Standard Model of particle physics, making thousands of accurate predictions. Einstein's spacetime continuum of general relativity allows only a very few correct predictions and has a large 'landscape' of ad hoc cosmological models (i.e., a large number of unphysical, or at least uncheckable, solutions, making it an ugly physics model). In addition, it is false insofar as it fails to naturally explain or incorporate the renormalization of force field charges due to polarization of the particulate vacuum, and it also fails to even model the long-range gauge boson exchange radiation (which may be non-oscillatory radiation for the long-range force fields) of the Yang-Mills quantum field theories which successfully comprise the Standard Model of electroweak and strong interactions. For example, Einstein's general relativity is disproved by the fact that it contains no natural mechanism to allow for the redshift, or related depletion of energy, in the gauge boson exchange radiation causing forces across the expanding universe! For these reasons, it is necessary to rebuild general relativity on the basis of quantum field theory. Smolin et al. show using Loop Quantum Gravity (LQG) that a Feynman path integral is a summing over the full set of interaction graphs in a Penrose spin network. The result gives general relativity without a metric (i.e., background independent).
Regarding the physics of the metric: in 1949 some kind of crystal-like Dirac sea was shown to mimic the special relativity contraction and mass-energy variation; see C. F. Frank, 'On the equations of motion of crystal dislocations', Proceedings of the Physical Society of London, A62, 1949, pp. 131-4: 'It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 - v^{2}/c^{2})^{1/2}, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E(o)/(1 - v^{2}/c^{2})^{1/2}, where E(o) is the potential energy of the dislocation at rest.' Specifying that the distance/time ratio = c (the constant velocity of light) then tells you that the time dilation factor is identical to the distance contraction factor. Next, you simply have to make gravity completely consistent with Standard Model-type Yang-Mills QFT dynamics to get predictions.
(See the Woit quotation on holonomy in loop quantum gravity, given near the start of this page, and the discussion following it: the loop is due to the exchange of force-causing gauge bosons from one mass to another and back again, and the redshift of those gauge bosons over vast distances weakens the gravitational coupling constant, so that gravity isn't slowing the recession of matter at great distances.)
THE ULTRAVIOLET (UV) CUTOFF AND THE INFRARED (IR) CUTOFF
Fig. 2 - The large void represents simply an enlargement of part of the left-hand side of Fig. 1. The particulate nature of the Dirac sea explains the physical basis for the UV (ultraviolet) cutoff in quantum field theories such as the successful Standard Model. As you reduce the volume of space under consideration (i.e., as you collide particles at higher energies, so that they approach so closely that there is very little distance between them) to such small volumes that it is unlikely the small space will contain any background Dirac sea field particles at all, it is obvious that no charge polarization of the Dirac sea is possible. This is due to the Dirac sea becoming increasingly coarse-grained when magnified excessively. To make this argument quantitative and predictive is easy (see below). The error in existing quantum field theories which requires manual renormalization (upper and lower cutoffs) is the statistical treatment in the equations, which are continuous equations: these are only valid where large numbers of statistics are involved, and they break down when pushed too far, thus requiring cutoffs.
The UV cutoff is explained in Fig. 2: Dirac sea polarization (leading to charge renormalization) is only possible in volumes large enough to be likely to contain some discrete charges! The IR cutoff has a different explanation. It is required physically in quantum field theory to limit the range over which the vacuum charges of the Dirac sea are polarized, because if there were no limit, then the Dirac sea would be able to polarize sufficiently to completely eradicate the entire electric field of all electric charges. That this does not happen in nature shows that there is a physical mechanism in place which prevents polarization beyond the range of the IR cutoff, which is about 10^{-15} m from an electron, corresponding to something like 10^{20} volts/metre electric field strength.
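For orientation, the Coulomb field strength at the quoted IR cutoff distance can be computed directly; this sketch (my addition, standard constants) gives ~10^{21} V/m at exactly 10^{-15} m, so the ~10^{20} V/m threshold is reached at a radius a few times larger:

# Coulomb field strength near the text's IR cutoff distance from an electron
e = 1.602e-19                 # electron charge, C
k = 8.988e9                   # 1/(4*pi*eps0), N m^2/C^2
r = 1e-15                     # m, the text's ~10^-15 m IR cutoff (approximate)

E_field = k * e / r**2
print(f"E ~ {E_field:.1e} V/m")   # ~1.4e21 V/m, same order as the ~10^20 V/m threshold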
Clearly, the Dirac sea is physically:
(a) disrupted from bound into freed charges (pair production effect) above the IR cutoff (threshold for pair production),
(b) given energy in proportion to the field strength (by analogy to Einstein's photoelectric equation, where there is a certain minimum amount of energy required to free electrons from their bound state, and further energy above that minimum then goes into increasing the kinetic energy of those particles; except that in this case the indeterminacy principle, due to scattering indeterminism, introduces statistics and makes it more like a quantum tunnelling effect, and the extra field energy above the threshold can also energise ground-state Dirac sea charges into more massive loops in progressive states: i.e., 1.022 MeV delivered as two particles colliding with 0.511 MeV each - the IR cutoff - can create an e- and e+ pair, while a higher loop threshold will be 211.2 MeV delivered as two particles colliding with 105.6 MeV or more, which can create a muon+ and muon- pair, and so on; see the previous post for explanation of a diagram explaining mass by 'doubly special supersymmetry', where charges have a discrete number of massive partners located either within the close-in UV cutoff range or beyond the perimeter IR cutoff range, accounting for masses in a predictive, checkable manner), and
(c) the quantum field is then polarized (shielding electric field strength).
These three processes should not be confused, but are generally confused by the use of the vague term ‘energy’ to represent 1/distance in most discussions of quantum field theory. For two of the best introductions to quantum field theory as it is traditionally presented see http://arxiv.org/abs/hepth/0510040 and http://arxiv.org/abs/quantph/0608140
We only see 'pair-production' of Dirac sea charges becoming observable in creation-annihilation 'loops' (Feynman diagrams) when the electric field is in excess of about 10^{20} volts/metre. This very intense electric field, which occurs out to about 10^{-15} metres from a real (long-observable) electron charge core, is strong enough to overcome the binding energy of the Dirac sea: particle pairs then pop into visibility (rather like water boiling off at 100 C).
The spacing of the Dirac sea particles in the bound state below the IR cutoff is easily obtained. Take the energy-time form of Heisenberg's uncertainty principle and put in the energy of an electron-positron pair, and you find it can exist for ~10^{-21} second; the maximum possible range is therefore this time multiplied by c, i.e., ~10^{-12} metre.
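The arithmetic behind these two order-of-magnitude estimates, as a short Python check (standard constants; the factor-of-a-few slack in 'maximum possible range' is glossed over, as in the text):

hbar = 1.055e-34                     # J s
c = 2.998e8                          # m/s
E_pair = 2 * 0.511e6 * 1.602e-19     # rest energy of an e-/e+ pair, in joules

t = hbar / E_pair                    # lifetime allowed by the energy-time uncertainty relation
print(f"t ~ {t:.1e} s")              # ~6.4e-22 s, i.e. of order 10^-21 s
print(f"range ~ {c * t:.1e} m")      # ~1.9e-13 m, i.e. of order 10^-13 to 10^-12 m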
The key thing to do would be to calculate the transmission of gamma rays in the vacuum. Since the maximum separation of charges is 10^{-12} m, the vacuum contains at least 10^{36} charges per cubic metre. If I can calculate that the range of gamma radiation in such a dense medium is 10^{-12} metre, I'll have substantiated the mainstream picture. Normally you get two gamma rays when an electron and positron annihilate (the gamma rays go off in opposite directions), so the energy of each gamma ray is 0.511 MeV, and it is well known that the Compton effect (a scattering of gamma rays by electrons as if both are particles, not waves) predominates for this energy. The mean free path for scatter of gamma ray energy by electrons and positrons depends essentially on the density of electrons (the number of electrons and positrons per cubic metre of space). However, the data come from either the Klein-Nishina theory (an application of quantum mechanics to the Compton effect) or experiment, for situations where the binding energy of electrons to atoms or whatever is insignificant compared to the energy of the gamma ray. It is perfectly possible that the binding energy of the Dirac sea would mean that the usual radiation attenuation data are inapplicable!
Ignoring this possibility for a moment, we find that for 0.5 MeV gamma rays, Glasstone and Dolan (page 356) state that the linear absorption coefficient of water is u = 0.097 (cm)^{-1}, where the attenuation is exponential as e^{-ux}, with x the distance. Each water molecule has 10 electrons, and we know from Avogadro's number that 18 grams of water contains 6.0225 x 10^23 water molecules, or about 6.0 x 10^24 electrons. Hence, 1 cubic metre of water (1 metric ton, or 1 million grams) contains about 3.35 x 10^29 electrons. The reciprocal of the linear absorption coefficient u, i.e., 1/u, tells us the 'mean free path' (the best estimate of effective 'range' for our purposes here), which for water exposed to 0.5 MeV gamma rays is 1/0.097 = 10.3 cm = 0.103 m. Hence, the number of electrons and positrons in the Dirac sea must be vastly larger than in water, in order to keep the range down (we don't observe any vacuum gamma radioactivity, which only affects subatomic particles). Normalising the mean free path to 10^{-12} m, to agree with the Heisenberg uncertainty principle, we find that the density of electrons and positrons in the vacuum would be: {the electron density in 1 cubic metre of water, 3.35 x 10^29} x 0.103/[10^{-12}] = 3.4 x 10^40 electrons and positrons per cubic metre of Dirac sea. This agrees with the estimate previously given from the Heisenberg uncertainty principle that the vacuum contains at least 10^{36} charges per cubic metre. However, the binding energy of the Dirac sea is being ignored in this Compton effect shielding estimate. The true separation distance is smaller still, and the true density of electrons and positrons in the Dirac sea is still higher.
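The scaling argument can be reproduced in a few lines of Python (my addition; the inverse proportionality of the mean free path to electron density is the assumption doing the work here):

# Scaling the Compton mean free path from water to the postulated Dirac sea density
N_A = 6.0225e23
electrons_per_molecule = 10                 # H2O: 8 (oxygen) + 2 (hydrogen)
n_water = (1e6 / 18.0) * N_A * electrons_per_molecule   # electrons per m^3 of water

mfp_water = 1 / 0.097 / 100                 # mean free path for 0.5 MeV gammas, m (u = 0.097 /cm)
mfp_vacuum = 1e-12                          # m, normalisation target from the uncertainty estimate

# mean free path is inversely proportional to electron density:
n_vacuum = n_water * mfp_water / mfp_vacuum
print(f"n_water  ~ {n_water:.2e} electrons/m^3")   # ~3.3e29
print(f"n_vacuum ~ {n_vacuum:.2e} charges/m^3")    # ~3.4e40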
Obviously the graining of the Dirac sea must be much finer than 10^{-12} m, because we have already said that it exists down to the UV cutoff (very high energy, i.e., very small distances of closest approach). The amount of 'energy' in the Dirac sea is astronomical if you calculate the rest mass equivalent, but you can similarly produce silly numbers for the energy of the earth's atmosphere: the mean speed of an air molecule is around 500 m/s, and since the atmosphere is composed mainly of air molecules (with a relatively small amount of water and dust), we can get a ridiculous energy figure for the air by multiplying the mass of air by 0.5*(500^2) to obtain its kinetic energy. Thus, 1 kg of air (with all the molecules going at a mean speed of 500 m/s) has an energy of 125,000 joules. But this is not useful energy, because it can't be extracted: it is totally disorganised. The Dirac sea 'energy' is similarly massive but useless.
REPRESENTATION THEORY AND THE STANDARD MODEL
Woit gives an example of how representation theory can be used in low dimensions to reduce the entire Standard Model of particle physics to a simple expression of Lie spinors and Clifford algebra, on page 51 of his paper http://arxiv.org/abs/hepth/0206135. This is a success in terms of what Wigner wants (see the top of this post for the vital quote from Wikipedia), and there is then the issue of the mechanism for electroweak symmetry breaking, for mass/gravity fields, and for the 18 parameters of the Standard Model. These are not extravagant, seeing that the Standard Model has made thousands of accurate predictions with them, and all of those parameters are either already, or else in principle, mechanistically predictable by the causal Yang-Mills exchange radiation model and a causal model of renormalization and gauge boson energy-sharing based unification (see previous posts on this blog, and the links in the 'about' section on the right-hand side of this blog, for further information).
Additionally, Woit has stated other clues about chiral symmetry: 'The SU(2) gauge symmetry is supposed to be a purely internal symmetry, having nothing to do with spacetime symmetries, but left and right-handed spinors are distinguished purely by their behavior under a spacetime symmetry, Lorentz symmetry. So SU(2) gauge symmetry is not only spontaneously broken, but also somehow knows about the subtle spin geometry of spacetime.'
For the background to Lie spinors and Clifford algebras, Baez has an interesting discussion of some very simple Lie algebra physics here and here, and of representation theory here; Woit has extensive lecture notes here; and Tony Smith has a lot of material about Clifford algebras here and spinors here. The objective is a simple unified model of the particles which can explain the detailed relationship between quarks and leptons and predict checkable things about unification. The short-range forces for quarks are easily explained by a causal model of polarization shielding by lepton-type particles in proximity (pairs or triads of 'quarks' form hadrons, and the pairs or triads are close enough to all share, to a large extent, the same polarized vacuum veil, which makes the polarized vacuum generally stronger, so that the effective long-range electromagnetic charge per 'quark' is reduced to a fraction of that for a lepton, which consists of only one core charge: see this comment on the Cosmic Variance blog).
I've given some discussion of the Standard Model at my main page (which is now partly obsolete and in need of a major overhaul to include many developments). Woit gives a summary of the Standard Model in a completely different way, which makes the chiral symmetries clear, in Fig. 7.1 on page 93 of Not Even Wrong (my failure to understand this before made me very confused about chiral symmetry, so I didn't mention or consider its role):
'The picture [it is copyright, so get the book: see Fig. 7.1 on p. 93 of Not Even Wrong] shows the SU(3) x SU(2) x U(1) transformation properties of the first generation of fermions in the standard model (the other two generations behave the same way).
‘Under SU(3), the quarks are triplets and the leptons are invariant.
‘Under SU(2), the particles in the middle row are doublets (and are lefthanded Weylspinors under Lorentz transformations), the other particles are invariant (and are righthanded Weylspinors under Lorentz transformations).
‘Under U(1), the transformation properties of each particle is given by its weak hypercharge Y.’
This makes it easier to understand: the QCD colour force of SU(3) controls triplets of particles ('quarks'), whereas SU(2) controls doublets of particles.
But the key thing is that the hypercharge Y is different for differently handed quarks of the same type: a right-handed down-quark (electric charge -1/3) has a weak hypercharge of -2/3, while a left-handed down-quark (the same electric charge as the right-handed one, -1/3) has a different weak hypercharge: +1/3 instead of -2/3!
Clearly this weak hypercharge effect is what has been missing from my naive causal model (where the observed long-range quark electric charge is determined merely by the strength of vacuum polarization shielding of the closely confined electric charges). Energy is not merely being shared between the QCD SU(3) colour forces and the U(1) electromagnetic forces: there is also energy present in the form of weak hypercharge forces, which are determined by the SU(2) weak nuclear force group.
Let’s get the facts straight: from Woit’s discussion (unless I’m misunderstanding), the strong QCD force SU(3) only applies to triads of quarks, not to pairs of quarks (mesons).
The binding of pairs of quarks is by the weak force only (which would explain why they are so unstable: they are only weakly bound, and so decay more easily than triads, which are strongly bound). The weak force also has effects on triads of quarks.
The weak hypercharge of a down-quark in a meson containing 2 quarks is Y = +1/3, compared to Y = -2/3 for a down-quark in a baryon containing 3 quarks.
Hence the causal relationship holds true for mesons. Hypothetically, 3 right-handed electrons (each with weak hypercharge Y = -2) will become right-handed down-quarks (each with hypercharge Y = -2/3) when brought close together, because they then share the same vacuum polarization shield, which is 3 times stronger than that around a single electron, and so attenuates more of the electric field, reducing the apparent charge from 1 unit per electron when widely separated to 1/3 unit per particle when brought close together (forgetting the Pauli exclusion principle for a moment!).
Now, in a meson you only have 2 quarks, so you might think that on this model the down-quark would have electric charge -1/2 and not -1/3, but that anomaly only exists when ignoring the weak hypercharge! For a down-quark in a meson, the weak hypercharge is Y = +1/3, instead of the Y = -2/3 which the down-quark has in a baryon (triad). The changed hypercharge (which corresponds physically to the weak force field that binds the meson) offsets the electric charge anomaly. The handedness switchover, in going from considering quarks in baryons to those in mesons, automatically compensates the electric charge, keeping it the same!
The details of how handedness is linked to weak hypercharge are found in the dynamics of Pauli's exclusion principle: adjacent particles can't have a full set of the same quantum numbers, like the same spin and charge. Instead, each particle has a unique set of quantum numbers. Bringing particles together and having them 'live together' in close proximity forces them to arrange themselves with suitable quantum numbers. The Pauli exclusion principle is simple in the case of atomic electrons: each electron has four quantum numbers, describing orbit configuration and intrinsic spin, and each adjacent electron has opposite spin to its neighbours. The spin alignment here can be understood very simply in terms of magnetism: it needs the least energy to have an anti-parallel alignment (having similar spins would be an addition of magnetic moments, so that north poles would all be adjacent and south poles would all be adjacent, which requires more energy input than having adjacent magnets parallel with opposite poles nearest). In quarks, the situation regarding the Pauli exclusion principle mechanism is slightly more complex, because quarks can have similar spins if their colour charges are different (electrons don't have colour charges, which are an emergent property of the strong fields that arise when two or three real fundamental particles are confined at close quarters).
Obviously there is a lot more detail to be filled in, but the main guiding principles are clear now: every fermion is indeed the same basic entity (whether quark or lepton), and the differences in observed properties stem from vacuum properties such as the strength of vacuum polarization, etc. The fractional charges of quarks always arise from the use of some electromagnetic energy to create other types of short-range forces (the testable prediction of this model is the forecast that detailed calculations will show that perfect unification arises on such energy conservation principles, without requiring the 1:1 boson-to-fermion 'supersymmetry' hitherto postulated by string theorists). Hence, in this simple mechanism, the +2/3 charge of the up-quark is due to a combination of strong vacuum polarization attenuation and hypercharge (the down-quark we have been discussing is just the clearest case).
So, regarding unification, we can get hard numbers out of this simple mechanism. We can see that the total gauge boson energy for all fields is conserved, so when one type of charge (electric charge, colour charge, or weak hypercharge) varies with collision energy or distance from the nucleus, we can predict that the others will vary in such a way that the total gauge boson energy (which mediates the charges) remains constant. For example, we see a reduced electric charge at long range because some of that energy is attenuated by the vacuum and is being used for weak and (in the case of triads of quarks) colour charge fields. So as you get to ever higher energies (smaller distances from the particle core), you will see all the forces equalizing naturally, because there is less and less polarized vacuum between you and the real particle core to attenuate the electromagnetic field. Hence, the observable strong charge couplings have less supply of energy (which comes from attenuation of the electromagnetic field), and start to decline. This causes the asymptotic freedom of quarks, because the decline in the strong nuclear coupling at very small distances is offset by the geometric inverse-square law over a limited range (the range of asymptotic freedom). This is what allows hadrons to have a much bigger size than the tiny quarks they contain.
MECHANISM FOR THE STRONG NUCLEAR FORCE
We're in a Dirac sea, which undergoes various phase transitions, breaking symmetries, as the strength of the field is increased. Near a real charge, the electromagnetic field within 10^{-15} metre exceeds 10^{20} volts/metre, which causes the first phase transition, like ice melting or water boiling. The freed Dirac sea particles can therefore exert a short-range attractive force by the Le Sage mechanism (which of course does not apply directly to long-range interactions, because the 'gas' effect fills in Le Sage shadows over long distances, so the attractive force is short-ranged: it is limited to a range of about one mean free path for the interacting particles in the Dirac sea). The Le Sage gas mechanism represents the strong nuclear attractive force mechanism. Gravity and electromagnetism, as explained in the previous posts on this blog, are both due to the Yang-Mills 'photon' exchange mechanism (because Yang-Mills exchange 'photon' radiation - or any other radiation - doesn't diffract into shadows, it doesn't suffer the short-range issue of the strong nuclear force; the short range of the weak nuclear force, due to shielding by the Dirac sea, may be quite a different mechanism for having a short range).
You can think of the strong force like the short-range forces due to normal sea-level air pressure: the air pressure of 14.7 psi (101 kPa) is big, so you can prove the short-range attractive force of air pressure by using a set of rubber 'suction cups' strapped to your hands and knees to climb a smooth surface like a glass-fronted building (assuming the glass is strong enough!). This force has a range of the order of the mean free path of air molecules. At bigger distances, air pressure fills the gap, and the force disappears. The fall-off is of course statistical; instead of the short-range attraction suddenly becoming zero at exactly one mean free path, it drops (in addition to geometric factors) exponentially, by the factor exp(-ux), where u is the reciprocal of the mean free path and x is distance (in air, of course, there are also weak attractive forces between molecules, the Van der Waals forces). Hence the force is short-ranged, due to the scatter of charged particles dispersing forces in all directions (unlike radiation):
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’
– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
(Note that statistical scatter gives the energy form of Heisenberg's equation, since the vacuum is full of gauge bosons carrying momentum like light, which above the IR cutoff start to exert vast pair-production loop pressure; this gives the foam vacuum.)
Now for the formulae! The reason for the radioactivity of heavy elements is linked to the increasing difficulty the strong force has in offsetting electromagnetism as you get towards 137 protons, accounting for the shorter half-lives. So here is a derivation of the strong nuclear force (mediated by pions) law, including the natural explanation of why it is 137 times stronger than electromagnetism at short distances:
Heisenberg’s uncertainty says p*d = h/(2*Pi), if p is uncertainty in momentum, d is uncertainty in distance.
This comes from the resolving power of Heisenberg’s imaginary gamma-ray microscope, and is usually written as a minimum (instead of with ‘=’ as above), since there will be other sources of uncertainty in the measurement process. The factor of 2 would be a factor of 4 if we considered the uncertainty in one direction only, about the expected position; because the uncertainty applies in both directions, it becomes a factor of 2 here.
For light-wave momentum p = mc, pd = (mc)(ct) = Et, where E is uncertainty in energy (E = mc^2) and t is uncertainty in time. OK, we are dealing with massive pions, not light, but this is close enough, since they are relativistic:
Et = h/(2*Pi)
t = d/c = h/(2*Pi*E)
E = hc/(2*Pi*d).
Hence we have related distance to energy: this result is the formula used even in popular texts to show that an 80 GeV W+/- gauge boson will have a range of the order of 10^{-17} m. So it’s OK to do this (ie, it is OK to take the uncertainties of distance and energy to be the real range and energy of the gauge bosons which cause fundamental forces).
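As a quick check of that number (standard constants, nothing else assumed):

hbar_c = 197.327          # MeV·fm, the standard value of (h/(2*Pi))*c
E = 80.0e3                # MeV; an 80 GeV weak gauge boson
d_fm = hbar_c / E         # range in femtometres, from d = hc/(2*Pi*E)
print(f"range = {d_fm:.2e} fm = {d_fm * 1e-15:.2e} m")   # ~2.5e-18 m

which is within a factor of a few of the 10^{-17} m figure quoted in the popular texts.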
Now, the work equation E = F*d (a scalar product: ‘work is the product of force and the distance moved in the direction of the force’), where again E is uncertainty in energy and d is uncertainty in distance, implies:
E = hc/(2*Pi*d) = Fd
F = hc/(2*Pi*d^2)
Notice the inverse square law resulting here!
This force is 137.036 times higher than Coulomb’s law for unit fundamental charges! This is the value usually given for the ratio between the strong nuclear force and the electromagnetic force. (I’m aware the QCD inter-quark gluon-mediated force takes different and often smaller values than 137 times the electromagnetic force; that is due to vacuum polarization effects, including the effect of charges in the vacuum loops coupling to and interfering with the gauge bosons of the QCD force.)
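It is easy to verify that ratio with standard constants: F = hc/(2*Pi*d^2) divided by Coulomb’s law for unit charges, F_C = e^2/(4*Pi*eps0*d^2), gives hbar*c*(4*Pi*eps0)/e^2 = 1/alpha, independent of distance:

import math

hbar = 6.62607e-34 / (2 * math.pi)   # J s
c = 2.99792458e8                     # m/s
e = 1.602177e-19                     # C
eps0 = 8.854188e-12                  # F/m

ratio = (hbar * c) / (e**2 / (4 * math.pi * eps0))
print(f"F / F_Coulomb = {ratio:.3f}")    # -> 137.04 = 1/alpha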
This 137-times-stronger force corresponds to the bare core charge of any particle, ignoring the vacuum polarization which extends out to 10^{-15} metres and shields the electric field by a factor of 137 (the number 1/alpha); ie, the vacuum is attenuating 100(1 - alpha)% = 99.27% of the electric field of the electron. This energy is going into nuclear forces in the short-range vacuum polarization region (ie, massive loops, virtual particles, W+/-, Z_0 and ‘gluon’ effects, which don’t have much range because they are barred by the high density of the vacuum; this is the obvious mechanism of electroweak symmetry breaking, regardless of whether there is a Higgs boson or not).
The electron has the characteristics of a gravity-field-trapped energy current, a Heaviside energy current loop of black hole size (radius 2GM/c^2) for its mass, as shown by gravity mechanism considerations (see the ‘about’ information on the right hand side of this blog for links). The looping of energy current, basically a Poynting-Heaviside energy current trapped in a small loop, causes a spherically symmetric E-field and a toroidal B-field, which at great distances reduces (because of the effect of the close-in radial electric fields on transverse B-fields in the vacuum polarization zone within 10^{-15} metre of the electron black hole core) to a simple magnetic dipole field (those B-field lines which are parallel to E-field lines, ie, the polar B-field lines of the toroid, obviously can’t ever be attenuated by the radial E-field). This means that, since the E- and B-fields in a photon are related simply by E = c*B, the vacuum polarization reduces only E by a factor of 137, and not B! This has long been evidenced in practice, as Dirac showed in 1931:
‘When one considers Maxwell’s equations for just the electromagnetic field, ignoring electrically charged particles, one finds that the equations have some peculiar extra symmetries besides the wellknown gauge symmetry and spacetime symmetries. The extra symmetry comes about because one can interchange the roles of the electric and magnetic fields in the equations without changing their form. The electric and magnetic fields in the equations are said to be dual to each other, and this symmetry is called a duality symmetry. Once electric charges are put back in to get the full theory of electrodynamics, the duality symmetry is ruined. In 1931 Dirac realised that to recover the duality in the full theory, one needs to introduce magnetically charged particles with peculiar properties. These are called magnetic monopoles and can be thought of as topologically nontrivial configurations of the electromagnetic field, in which the electromagnetic field becomes infinitely large at a point. Whereas electric charges are weakly coupled to the electromagnetic field with a coupling strength given by the fine structure constant alpha = 1/137, the duality symmetry inverts this number, demanding that the coupling of the magnetic charge to the electromagnetic field be strong with strength 1/alpha = 137. [This applies to the magnetic dipole Dirac calculated for the electron, assuming it to be a Poynting wave where E = c*B and E is shielded by vacuum polarization by a factor of 1/alpha = 137.]
‘If magnetic monopoles exist, this strong [magnetic] coupling to the electromagnetic field would make them easy to detect. All experiments that have looked for them have turned up nothing…’ - P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, pp. 138-9. [Emphasis added.]
The Pauli exclusion principle normally makes the magnetic moments of all electrons undetectable on a macroscopic scale (apart from magnets made from iron, etc.): the magnetic moments usually cancel out because adjacent electrons always pair with opposite spins! If there are magnetic monopoles in the Dirac sea, there will be as many ‘north polar’ monopoles as ‘south polar’ monopoles around, so we can expect not to see them because they are so strongly bound!
CAUSALITY IN QUANTUM MECHANICS
Professor Jacques Distler has an interesting, thoughtful, and well-written post called ‘The Role of Rigour’ on his Musings blog, where he brilliantly argues:
‘A theorem is only as good as the assumptions underlying it. … particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigourous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’
Jacques also summarises the issues for theoretical physics clearly in a comment there.
I’ve explained there to Dr ‘stringhype Haelfix’ that people should be working on non-rigorous areas like the derivation of the Hamiltonian in quantum mechanics, which would increase the rigour of theoretical physics, unlike string theory. I earlier explained this kind of thing (the need for checkable research, not speculation about unobservables) on the opinion page of the October 2003 Electronics World issue, but was ignored, so clearly I need to move on to stronger language, because stringers don’t listen to such polite arguments as those I prefer using! Feynman writes in QED, Penguin, London, 1985:
‘When a photon comes down, it interacts with electrons throughout the glass, not just on the surface. The photon and electrons do some kind of dance, the net result of which is the same as if the photon hit only the surface.’
There is already a frequency of oscillation in the photon before it hits the glass, and in the glass due to the sea of electrons interacting via Yang-Mills force-causing radiation. If the frequencies clash, the photon can be reflected or absorbed. If they don’t interfere, the photon goes through the glass. Some of the resonant frequencies of the electrons in the glass are determined by the exact thickness of the glass, just like the resonant frequencies of a guitar string are determined by the exact length of the string. Hence the precise thickness of the glass controls some of the vibrations of all the electrons in it, including the surface electrons on the edges of the glass. Hence, the precise thickness of the glass determines the amplitude for a photon of given frequency to be absorbed or reflected by the front surface of the glass. It is indirect in so much as the resonance is set up by the thickness of the glass long before the photon even arrives (other possible oscillations, corresponding to a non-integer number of wavelengths fitting into the glass thickness, are killed off by interference, just as a guitar string doesn’t resonate well at non-natural frequencies).
What has happened is obvious: the electrons have set up an equilibrium oscillatory state, dependent upon the total thickness, before the photon arrives. There is nothing mysterious to this: consider how a musical instrument works, or even just a simple tuning fork or solitary guitar string. The only resonant vibrations are those which contain an integer number of wavelengths. This is why metal bars of different lengths resonate at different frequencies when struck: changing the length of the bar slightly completely alters its resonance to a given wavelength! Similarly, the photon hitting the glass has a frequency itself. The electrons in the glass as a whole are all interacting (they’re spinning and orbiting with centripetal accelerations which cause radiation emission, so all are exchanging energy all the time, which is the force mechanism in Yang-Mills theory for U(1) electromagnetism), so they have a range of resonances that is controlled by the number of integer wavelengths which can fit into the thickness of the glass, just as the range of resonances of a guitar string is determined by the wavelengths which fit into the string length resonantly (ie, without suffering destructive interference).
Hence, the thickness of the glass predetermines the amplitude for a photon of given frequency to be either absorbed or reflected. The electrons at the glass surface are already oscillating with a range of resonant frequencies depending on the glass thickness, before the photon even arrives. Thus, the photon is reflected (if not absorbed) only from the front face, but its probability of being reflected depends on the total thickness of the glass. Feynman also explains:
‘when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’
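To illustrate the thickness-dependence numerically, here is a sketch of Feynman’s own simplified two-arrow model from QED (front- and back-surface reflections as phasors; the 4% single-surface reflection and n = 1.5 are standard values for glass, and I’ve absorbed the sign conventions into the difference):

import math, cmath

n = 1.5               # refractive index of glass
r = 0.2               # single-surface amplitude, |r|^2 = 4%
wavelength = 500e-9   # metres, in vacuum

def reflection_probability(thickness):
    # relative phase of the back-surface arrow after the round trip in glass
    phase = 2 * math.pi * (2 * n * thickness) / wavelength
    amplitude = r * (1 - cmath.exp(1j * phase))
    return abs(amplitude) ** 2

for t_nm in (0, 83, 167, 250, 333):
    p = reflection_probability(t_nm * 1e-9)
    print(f"thickness {t_nm:3d} nm -> reflection {100 * p:5.1f} %")

The probability cycles between 0% and 16% as the thickness changes by fractions of a wavelength, which is exactly the resonance-with-thickness behaviour argued for above.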
More about this here (in the comments; but notice that Jacques’ final comment on the thread of discussion about rigour in quantum mechanics is discussed by me here), here, and here. In particular, Maxwell’s equations assume that real electric current is dQ/dt (where Q is charge), a continuous equation being used to represent a discontinuous situation (particulate electrons passing by): it works approximately for large numbers of electrons, but breaks down for small numbers passing any point in a circuit per second! It is a simple mathematical error, which needs correcting to bring Maxwell’s equations into line with modern quantum field theory. A more subtle error in Maxwell’s equations is his ‘displacement current’, which is really just Yang-Mills force-causing exchange radiation, as explained in the previous post and on my other blog here. This is what people should be working on to derive the Hamiltonian: the Hamiltonian in both Schroedinger’s and Dirac’s equations describes energy transfers as wavefunctions vary in time, which is exactly what the corrected Maxwell ‘displacement current’ effect is all about (take the electric field here to be a relative of the wavefunction). I’m not claiming that classical physics is right! It is wrong! It needs to be rebuilt, and its limits of applicability need to be properly accepted:
Bohr simply wasn’t aware that Poincare chaos arises even in classical systems with three or more bodies, so he foolishly sought to invent metaphysical thought structures (the complementarity and correspondence principles) to isolate classical from quantum physics. This means that chaotic motions on atomic scales can result from electrons influencing one another, and from the randomly produced pairs of charges in the loops within 10^{-15} m of an electron (where the electric field exceeds about 10^20 V/m) causing deflections. The failure of determinism (ie, closed orbits, etc.) is present in classical, Newtonian physics. It can’t even deal with a collision of 3 billiard balls:
‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of prequantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’
– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.
The Hamiltonian time evolution should be derived rigorously from the empirical facts of electromagnetism: Maxwell’s ‘displacement current’ describes energy flow (not real charge flow) due to a time-varying electric field. Clearly it is wrong, because the vacuum doesn’t polarize below the IR cutoff, which corresponds to about 10^20 volts/metre, and you don’t need that electric field strength to make capacitors, radios, etc., work.
So you could derive the Schroedinger equation from a corrected Maxwell ‘displacement current’ equation. This is just one example of what I mean by deriving the Schroedinger equation. Alternatively, a computer Monte Carlo simulation of electrons in orbit around a nucleus, being deflected by pair production in the Dirac sea, would provide a check on the mechanism behind the Schroedinger equation, so there is a second way to make progress.
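As a very rough sketch of what such a Monte Carlo could look like (all values here are arbitrary toy numbers in dimensionless units, chosen only to show the orbit acquiring a statistical spread; this is not a calibrated model of the Dirac sea):

import math, random

random.seed(1)
# initial circular orbit: radius 1, speed 1, inverse-square attraction
x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
dt, kick, soft = 0.001, 0.02, 0.01   # step, r.m.s. random impulse, softening

radii = []
for _ in range(200_000):
    r2 = x * x + y * y + soft        # softened radius^2, keeps the toy stable
    a = -1.0 / r2 ** 1.5
    vx += a * x * dt + random.gauss(0.0, kick) * math.sqrt(dt)
    vy += a * y * dt + random.gauss(0.0, kick) * math.sqrt(dt)
    x += vx * dt
    y += vy * dt
    radii.append(math.sqrt(r2))

mean = sum(radii) / len(radii)
spread = (sum((r - mean) ** 2 for r in radii) / len(radii)) ** 0.5
print(f"mean radius = {mean:.2f}, spread = {spread:.2f}")

With the random impulses switched on, the radius takes a statistical distribution of values instead of a fixed Keplerian ellipse; a real check would compare that distribution against the Schroedinger equation’s radial probability density.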
HOW SHOULD CENSORSHIP PRESERVE QUALITY?
‘Here at Padua is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass which he pertinaciously refuses to do. Why are you not here? What shouts of laughter we should have at this glorious folly! And to hear the professor of philosophy at Pisa labouring before the Grand Duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky.’  Letter of Galileo to Kepler, 1610, http://www.catholiceducation.org/articles/science/sc0043.html
‘There will certainly be no lack of human pioneers when we have mastered the art of flight. Who would have thought that navigation across the vast ocean is less dangerous and quieter than in the narrow, threatening gulfs of the Adriatic, or the Baltic, or the British straits? Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes. In the meantime, we shall prepare, for the brave sky travelers, maps of the celestial bodies - I shall do it for the moon, you, Galileo, for Jupiter.’ - Letter from Johannes Kepler to Galileo Galilei, April 1610, http://www.physics.emich.edu/aoakes/letter.html
Kepler was a crackpot/noise maker: despite his laws and discovery of elliptical orbits, he got the biggest problem wrong, believing that the earth - which William Gilbert had discovered to be a giant magnet - was kept in orbit around the sun by magnetic force. So he was a noise generator, a crackpot. If you drop a bag of nails, they don’t all align to the earth’s magnetism, because it is so weak, but they do all fall - because gravity is relatively strong due to the immense amounts of mass involved. (For unit charges, electromagnetism is stronger than gravity by a factor like 10^{40}, but that is not the right comparison here, since the majority of the magnetism in the earth due to fundamental charges is cancelled out by the fact that charges are paired with opposite spins, cancelling out their magnetism. The tiny magnetic field of the planet earth is caused by some kind of weak dynamo mechanism due to the earth’s rotation and the liquid nickel-iron core of the earth, and the earth’s magnetism periodically flips and reverses naturally - it is weak!) So just because a person gets one thing right, or one thing wrong, or even not even wrong, that doesn’t mean that all their ideas are good/rubbish.
As Arthur Koestler pointed out in The Sleepwalkers, it is entirely possible for there to be revolutions without any really fanatic or even objective/rational proponents (Newton was a total crackpot alchemist who also faked the first ‘theory’ of sound waves). My own view of the horrible Dirac sea (Oliver Lodge said: ‘A fish cannot comprehend the existence of water. He is too deeply immersed in it,’ but what about flying fish?) is that it is an awfully ugly empirical fact that is
(1) required by the Dirac equation’s negative energy solution, and which is
(2) experimentally demonstrated by antimatter.
My personal interest in the subject is more to do with a personal, bitter vendetta against string theorists, who are turning physics into a religion and a laughing stock in Britain, than because I have the slightest interest in how the big bang came about or what will happen in the distant future. I don’t care about that, just about understanding what is already known, and promoting the hard, experimental facts. Maybe when time permits, some analysis of what these facts say about the early time of the big bang and the future of the big bang will be possible (see my controversial comment here). I did touch on these problems in an eight-page initial paper which I wrote in May 1996 and which was sold via the October 1996 issue of Electronics World (see the letters pages for the Editor’s note). However, that paper is long obsolete, and the whole subject needs to be carefully analysed before coming to important conclusions. But the main problem is the one that Woit summarises on p. 259 of the UK edition of the brilliant book Not Even Wrong:
‘As long as the leadership of the particle theory community refuses to face up to what has happened and continues to train young theorists to work on a failed project, there is little likelihood of new ideas finding fertile ground in which to grow. Without a dramatic change in the way theorists choose what topics to address, they will continue to be as unproductive as they have been for two decades, waiting for some new experimental result finally to arrive.’
John Horgan’s excellent 1996 book The End of Science, which Woit argues is the future of physics if people don’t keep to explaining what is known (rather than speculating about unification at energies higher than can ever be seen, speculating about parallel universes, extra dimensions, and other non-empirical drivel), states:
‘A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fretting about the meaning of quantum mechanics. The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.’
This post is updated as of 26 October 2006, and will be further expanded to include material such as the results here, here, here, here and here.
I’ve not included gravity, electromagnetism or mass mechanism dynamics in this post; for these, see the links in the ‘about’ section on the right hand side of this blog, and the previous posts on this blog. The major quantitative predictions and successful experimental tests are summarized in the old webpage at http://feynman137.tripod.com/#d, apart from the particle masses, which are dealt with in the previous post on this blog. It is not particularly clear whether I should spend spare time revising outdated material or studying unification and Standard Model details further. Obviously, I’ll try to do both as far as time permits.
L. Green, "Engineering versus pseudoscience", Electronics World, vol. 110, number 1820, August 2004, pp523:
‘… controversy is easily defused by a good experiment. When such unpleasantness is encountered, both warring factions should seek a resolution in terms of definitive experiments, rather than continued personal mudslinging. This is the difference between scientific subjects, such as engineering, and nonscientific subjects such as art. Nobody will ever be able to devise an uglyometer to quantify the artistic merits of a painting, for example.’ (If string theorists did this, string theory would be dead, because my mechanism, published in the October 1996 Electronics World and February 1997 Science World, predicts the current cosmological results, which were discovered about two years later by Perlmutter.)
‘The ability to change one’s mind when confronted with new evidence is called the scientific mindset. People who will not change their minds when confronted with new evidence are called fundamentalists.’  Dr Thomas S. Love, California State University.
This comment from Dr Love is extremely depressing; we all know today’s physics is a religion. I found this out after email exchanges with, I believe, Dr John Gribbin, the author of numerous crackpot books like The Jupiter Effect (claiming Los Angeles would be destroyed by an earthquake in 1982), and quantum books trying to prove Lennon’s claim that ‘nothing is real’. After I explained the facts to Gribbin, he emailed me a question something like (I have archives of the emails, by the way, so I could check the exact wording if required): ‘you don’t seriously expect me to believe that or write about it?’
‘… a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.’  Max Planck.
But, being anti-belief and against religious intrusion into science, I’m not interested in getting people to believe truths but, on the contrary, to question them. Science is about confronting facts. Dr Love suggests a U(3,2)/U(3,1)xU(1) alternative to the Standard Model, which provides a test of my objectivity. I can’t understand his model properly, because it reproduces particle properties in a way I don’t understand, and doesn’t appear to yield any of the numbers I want, like force strengths, particle masses, and causal explanations. Although he has a great many causal explanations in his paper, which are highly valuable, I don’t see how they connect to his alternative to the Standard Model. He has an online paper on the subject as a PDF file, ‘Elementary Particles as Oscillations in Anti-de-Sitter Space-Time’, with which I have several issues: (1) anti-de-Sitter spacetime is a stringy assumption to begin with, and (2) I don’t see checkable predictions. However, maybe further work on such ideas will produce more justification for them; they haven’t had the concentration of effort which string theory has had.
There are no facts in string ‘theory’ (there isn’t even a theory - see the previous post), which is merely speculation. The gravity strength prediction I give is accurate and compatible with the Yang-Mills exchange radiation Standard Model and the validated aspects of general relativity (not the cosmic-landscape epicycles rubbish). Likewise, I correctly predict the ratio of electromagnetic strength to gravity strength (previous post), and the ratio of strong to electromagnetic, which means that I predict three forces for the price of one. In addition (see previous post) I predict the masses of all directly observable particles (the masses of isolated quarks are not real as such, because quarks can’t be isolated: the energy required to separate them exceeds the energy required to create new quark pairs).
Don’t believe this; it is not a faith-based religion. It is just plain fact. The key questions are the accuracy of the predictions and the clear presentation of the mechanisms. Unlike string theory, this is falsifiable science which makes many connections to reality. However, as Ian Montgomery, an Australian, aptly expressed the political state of physics in an email: ‘… we up Sh*t Creek in a barbed wire canoe without a paddle …’ I think that is a succinct summary of the state of high energy physics at present, and of the hope of making progress. There is obviously a limit to what a handful of ‘crackpots’ outside the mainstream can do, with no significant resources compared to stringers.
[Regarding the ‘spin 2 graviton’, see an interesting comment on Not Even Wrong: ‘LDM Says: October 26th, 2006 at 12:03 pm: Referring to footnote 12 of the physics/0610168 about string theory and GR… If you actually check what Feynman said in the "Feynman Lectures on Gravitation", page 30… you will see that the (so far undetected) graviton does not, a priori, have to be spin 2, and in fact, spin 2 may not work, as Feynman points out. This elevation of a mere possibility to a truth, and then the use of this truth to convince oneself one has the correct theory, is a rather large extrapolation.’
Note that I also read those Feynman lectures on gravitation when Penguin Books brought them out in paperback a few years ago, and saw the same thing, although I hated reading the abject speculation in them where Feynman suggests that the strength ratio of gravity to electromagnetism is like the ratio of the radius of the universe to the radius of a proton, without any mechanism or dynamics. Tony Smith quotes a bit of them on his site, which I requote on my home page. The spin depends on the nature of the radiation: if it is non-oscillating, then it can only propagate via a two-way mode, like electric/Heaviside-Poynting energy current (two non-oscillating energy currents going in opposite directions), for the same reason that infinite self-inductance prevents it working in a one-way mode; and this affects what you mean by spin.
On my home page there are three main sections dealing with the gravity mechanism dynamics, namely near the top of http://feynman137.tripod.com/ (scroll down to the first illustration), at http://feynman137.tripod.com/#a, and, for technical calculations predicting the strength of gravity accurately, at http://feynman137.tripod.com/#h. The first discussion, near the top of the page, explains how shielding occurs: ‘… If you are near a mass, it creates an asymmetry in the radiation exchange, because the radiation normally received from the distant masses in the universe is redshifted by high speed recession, but the nearby mass is not receding significantly. By Newton’s 2nd law the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = mv/t = mv/(x/c) = mcv/x = 0. Hence by Newton’s 3rd law, the inward force of gauge bosons coming towards you from that mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you, creating an asymmetry. So you get pushed towards the shield. This is why apples fall. …’ This brings up the issue of how electromagnetism works. Obviously, the charges of gravity and electromagnetism are different: masses don’t have the symmetry properties of the electric charge. For example, mass increases with velocity, while electric charge doesn’t. I’ve dealt with this in the last couple of posts on this blog, but unification physics is a big field and I’m still making progress. One comment about spin. Fermions have half-integer spin, which means they are like a Mobius strip, requiring 720 degrees of rotation for a complete exposure of their surface. Fermi-Dirac statistics describe such particles. Bosons have integer spin, and spin-1 bosons are relatively normal in that they only require 360 degrees of rotation for a complete revolution. Spin-2 gravitons presumably require only 180 degrees of rotation per revolution, so they appear stringy to me. I think the exchange radiation of gravity and electromagnetism is the same thing - based on the arguments in previous posts - and is spin-1 radiation, albeit continuous radiation. It is quite possible to have continuous radiation in a Dirac sea, just as you can have continuous waves composed of molecules in a water-based sea.]
‘A fruitful natural philosophy has a double scale or ladder ascendant and descendant; ascending from experiments to axioms and descending from axioms to the invention of new experiments.’ - Francis Bacon, Novum Organum.
This would allow LQG to be built as a bridge between path integrals and general relativity. I wish Smolin or Woit would pursue this.
Light ... "smells" the
neighboring paths around it, and uses a small core of nearby space. (In the same
way, a mirror has to have enough size to reflect normally: if the mirror is too
small for the core of nearby paths, the light scatters in many directions, no
matter where you put the mirror.)
 Feynman, QED, Penguin, 1990, page
54.
That's wave-particle duality explained. The path integrals don't mean that the photon goes on all possible paths; as Feynman says, it uses only a "small core of nearby space".
The double-slit interference experiment is very simple: the photon has a transverse spatial extent. If that overlaps two slits, then the photon gets diffracted by both slits, displaying interference. This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says. It doesn't take every path: most of the energy is transferred along the classical path, and near it.
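That "small core" statement is easy to demonstrate numerically. Summing unit phasors exp(iS) over paths displaced by y from the classical line, with the phase growing quadratically in y (the leading behaviour near a stationary-phase path; the coefficient here is an arbitrary toy value):

import cmath

def path_sum(y_max, a=200.0, n=200_001):
    # sum of exp(i*a*y^2) over paths with |y| <= y_max (toy action)
    total = 0.0
    for k in range(n):
        y = -y_max + 2.0 * y_max * k / (n - 1)
        total += cmath.exp(1j * a * y * y)
    return total * (2.0 * y_max / (n - 1))

core = abs(path_sum(0.5))   # only the near-classical paths
full = abs(path_sum(5.0))   # including paths far from the classical one
print(f"|sum| core only = {core:.4f}, all paths = {full:.4f}")

The two magnitudes agree to within a couple of percent: the far-away paths cancel each other in pairs, so almost the whole amplitude comes from the narrow core around the classical path.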
Similarly, you find people saying that QFT says the vacuum is full of loops of annihilation-creation. When you check what QFT says, it actually says that those loops are limited to the region between the IR and UV cutoffs. If loops existed everywhere in spacetime, ie below the IR cutoff or beyond about 1 fm from a charge, then the whole vacuum would be polarized enough to cancel out all real charges. If loops existed beyond the UV cutoff, ie down to zero distance from a particle, then the loops would have infinite energy and momenta, and the effects of those loops on the field would be infinite, again causing problems.
So the vacuum simply isn't full of loops (they only extend out to about 1 fm around particles). Hence there is no dark energy mechanism.
String theory
Mainstream string theory or M-theory (due to Witten, 1995) is the 10-dimensional superstring / 11-dimensional supergravity unification, which can't predict anything potentially checkable. It says that there are 10 dimensions of particle physics, predicting 10^500 or so different Standard Models (because particle properties can take many values due to the many parameters of size and shape for the complex 6-dimensional Calabi-Yau manifold, which compactifies 6 of the 10 dimensions to give 4-d spacetime), each in a parallel universe! M-theory says that 10-dimensional superstring theory is a (mem)brane on the 11-dimensional hyperspace of supergravity, like a 2-dimensional flat credit card containing a 3-dimensional hologram, or 3-dimensional space containing ‘curvature’ due to time dimension(s). Despite all the ad hoc speculation, M-theory can’t give any checkable physics!
The unobservable extra dimensions are curled up into an imaginary Planck-scale Calabi-Yau manifold, and 1:1 boson:fermion supersymmetric partners are postulated for all Standard Model particles, to achieve ever-unobservable unification at the Planck scale. Watch how string theory dances around to impress the public without giving any real physics! It cannot ever go away, because it is not a falsifiable theory. So after being ridiculed and dismissed, it always survives and comes back again to sneer at alternatives which are checkable!
Euclidean geometry is disproved by the curvature caused by gravitational fields. The best example of this, which helps to clearly explain the entire problem, is not the deflection of light - after all, bullets can be similarly deflected by wind, but that is obviously not taken to disprove Euclid - but the contraction implied by general relativity. The radius of the earth is contracted by (1/3)MG/c^{2} = 1.5 millimetres, but the circumference - because it is orthogonal to the gravitational field lines - suffers no contraction. Since circumference divided by diameter equals the ratio π, it follows that for this ratio to be unaffected by contraction there must be a fourth dimension, so that the three observable dimensions are distorted by curvature. This is by analogy to the way that two-dimensional geometrical diagrams drawn on a curved background suffer distortions. For example, try drawing a geometric diagram on the surface of a globe; the rules of Euclidean plane geometry for the relationship between angles and lengths will generally be inaccurate and need corrections.
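Checking the 1.5 millimetre figure (standard values for G, the mass of the earth, and c):

G = 6.674e-11    # m^3 kg^-1 s^-2
M = 5.972e24     # kg, mass of the earth
c = 2.998e8      # m/s

contraction = G * M / (3 * c**2)
print(f"Earth radial contraction ~ {1000 * contraction:.2f} mm")   # ~1.48 mm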
Another, physically equivalent, way of interpreting the contraction and all the other effects of general relativity is by the causal mechanism of Yang-Mills exchange radiation in just three dimensions. This mechanism is completely compatible with the mathematical theory of general relativity. In this situation, there are no extra dimensions. The contraction term in general relativity - which causes all of the departures from the predictions of Newtonian three-dimensional gravitation - is then due to physical compression along radial lines. Because there is no transverse (circumference) contraction, the reduction in radius can be interpreted as a predictable change in the observable value of π, should it be possible to measure this.
However, the extra-dimensional speculation on general relativity, reinforced by confirmation of general relativity in various experimental tests, has led to a hardening of orthodoxy in favour of the real existence of extra dimensions. Although general relativity is 3 + 1 dimensional, the extra dimension (time) being treated as a resultant, the Kaluza-Klein theory adds still another (fifth) dimension, which gives a way of combining electromagnetism and gravitation qualitatively (it makes no checkable predictions) through general relativity. The extra dimension was supposed to be rolled up into a small loop that constitutes a particle of matter. Vibrations of the loop or closed string allow it to represent different energy states, each corresponding to a different fundamental particle. There is no checkable prediction from this theory, not even the size of the loop, which is postulated to be Planck size due to Planck's fame. Planck's length - which he based on arbitrary dimensional analysis - is far bigger (G^{1/2}h^{1/2}c^{-3/2} ~ 10^{-35} m) than the black hole radius of an electron (2GM/c^{2} = 1.3 x 10^{-57} m), so it is highly suspect whether the dimensional-analysis numerology of the Planck size represents any real physics. The rest-mass energies of particles cannot be predicted from string theory. Later, the ad hoc suggestion was made that the Calabi-Yau six-dimensional manifold be included in the string theory, leading to 10/11-dimensional superstrings/supergravity (unified by ideas like Witten's M-theory and the holographic conjecture) with a 'landscape' of 10^{350} or so values of the quantum field theory vacuum energy ground state.
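The two length scales compared in that sentence are easy to reproduce (I use hbar in the Planck length here; the dimensional analysis gives ~10^{-35} m with either h or hbar):

import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
hbar = 6.626e-34 / (2 * math.pi)   # J s
m_e = 9.109e-31    # kg, electron mass

planck_length = math.sqrt(G * hbar / c**3)   # ~1.6e-35 m
electron_bh = 2 * G * m_e / c**2             # ~1.35e-57 m
print(f"Planck length        = {planck_length:.2e} m")
print(f"2Gm/c^2 for electron = {electron_bh:.2e} m")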
The correct way to predict gravity is to build upon experimental facts. At the time general relativity was built, in November 1915, by Hilbert and Einstein, it was not known that the matter of the universe is receding in all directions, nor that the recession is not being slowed by gravity. Einstein, in his 1917 reconciliation of general relativity with cosmology, adopted a 'steady state' theory which has subsequently been disproved by observations. There are many cranks who don't like nature the way observation shows it to be, and don't like the big bang in any form. Generally they prefer to invent a completely speculative theory that redshifted spectra are 'somehow' being redshifted by a cause other than recession, and that the universe is in a steady state. In fact, none of these theories is consistent with the observations. The spectrum of light made red by gas or dust scattering is entirely different from the uniform, frequency-independent redshift seen in the recession of distant clusters of galaxies. The recession redshift theory is easily proved experimentally to be correct by the fact that recession of a light source does cause the light received to be redshifted in exactly the same way as the redshift from distant clusters of galaxies. The alternative (steady-state) theories all involve inventing unobserved, unscientific 'explanations' and ignoring the proved (recession) mechanism. Professor Ned Wright has stated: 'There is no known interaction that can degrade a photon's energy without also changing its momentum, which leads to a blurring of distant objects which is not observed. The Compton shift in particular does not work.'
The correct theory of quantum gravity to describe general relativity, applied to cosmology, must discriminate between the big-bang-induced cosmic expansion and the contraction of the dimensions describing matter due to gravity. There are three expanding dimensions in the big bang cosmology, and three dimensions for matter that are contracted by motion and by gravitation.
Yang-Mills quantum field theory is abstract, yet it suggests physical dynamics: the exchange of gauge bosons causes forces. This is clearly displayed by the familiar Feynman diagrams depicting fundamental force exchange radiations. Via the October 1996 issue of the journal Electronics World, a mechanism was made available in an eight-page paper.
Neither the equations of quantum mechanics nor Alain Aspect's experiments disprove causality proper or prove the Copenhagen philosophy/politics/religion.
Dr Thomas Love has proved that the entanglement philosophy is just a statement of the mathematical discontinuity between the time-dependent and time-independent Schroedinger wave equations when a measurement is taken. There’s no evidence for metaphysical wave function collapse in the authority of Niels Bohr, the Solvay Congress of 1927, or Alain Aspect’s determination that the polarizations of photons emitted in opposite directions by an electron correlate when measured metres apart.
Copenhagen quantum mechanics is speculative. So don’t build it up as a pet religion. The uncertainty principle in the Dirac sea has a perfectly causal explanation: on small distance scales, particles get randomly accelerated/decelerated/deflected by the virtual particles of the spacetime vacuum. This is like Brownian motion. On large scales, the interactions cancel out. If so, then photon polarizations correlate not because of metaphysical ‘wavefunction entanglement’, but because the uncertainty principle doesn’t apply to measurements on light-speed bosons, only to massive fermions, which are still there after you actually detect them.
A loop is a rotational transformation in the vacuum. The loop is physically the exchange of energy-delivering field radiation from one mass to another, and back to the first mass again. Like the exchange radiation in Yang-Mills (Standard Model) theories, but with the added restriction of the conservation (looping between masses) of the exchange radiation? Things accelerated by a gravity field are losing gravitational potential energy and gaining kinetic energy, so the exchange radiation carries energy. If the LQG spin-foam vacuum does describe a Yang-Mills energy exchange scheme, you can get solid checkable predictions by taking account of the effect of the expansion of the universe on these conserved gravity field mediators.
If you observe two supernovae at the same time, you can in fact determine which occurred first by simply noting from their redshifts how far they are from you in time and space, and hence how long after the big bang each occurred. Hence there is an absolute time scale. Special relativity as usually taught denies absolute chronology, which doesn’t work where you can place an absolute chronology on events like supernovae. A better theory will clearly separate the treatment of the expanding big bang spacetime dimensions (which measure the volume of the vacuum) from the local, contractable, time-dilatable dimensions used for matter like clocks and rulers. Matter is contracted (in spacetime) by motion and gravity, but the big bang’s spacetime continues expanding. Hence the mathematical treatment of the universe needs to clearly distinguish between the 3 perpetually expanding spacetime dimensions for the volume of the universe, and the 3 contractable dimensions used to describe matter. When Einstein and Hilbert built general relativity in November 1915, they simply didn’t know that the volume of the vacuum was perpetually expanding. People thought it was static.
Objective: Using empirical facts, to prove mechanisms that make other predictions, allowing independent verification
Outdated preliminary outline publications: CERN Document Server preprint, with updates here: http://electrogravity.blogspot.com/. My CERN paper can’t be updated, since the CERN server now only accepts uploads automatically from arXiv.org, which is controlled by 10/11-dimensional string theorists who mislead the public by claiming to predict gravity to an infinite number of significant figures; see Sir Josephson’s string-theory-compatible telepathy paper hosted by the arXiv: http://arxiv.org/abs/physics/0312012 (the string theory basis of which is of course warmly welcomed by Professors Randall and Motl at Harvard University; see their book Warped Passages).
‘An important part of all totalitarian systems is an efficient propaganda machine. ... to protect the ‘official opinion’ as the only opinion that one is effectively allowed to have.’  STRING THEORIST Dr Lubos Motl, http://motls.blogspot.com/2006/01/powerofpropaganda.html. My papers in Walter Babin’s General Science Journal: 1, 2, 3, 4, 5. Bigotry of string theorists and introduction to latest developments.
The prediction compares well with reality, but it is banned by string theorists like Lubos Motl, who say that everyone who points out errors in speculative string theory, and everybody who proves the correct explanation, should be hated.
Now, what part can’t you ‘mental vacuum state’ paranormal string theorists understand? Don’t you believe in geometry? Don’t you believe in Newton’s laws? Don’t you believe in gravity? Don’t you believe in the force-causing exchange radiation of the quantum field theory that predicts more particle physics than any other physical theory in the history of science? Don’t you believe in cosmic expansion? In that case you are religious, since religion concerns ‘beliefs’. Science concerns verified facts, not groupthink-like, religious-style beliefs.
Shielding: since most of the mass of atoms is associated with the fields of gluons and virtual particles surrounding quarks, these are the gravity-affected parts of atoms, not the electrons or quarks themselves.
The mass of a nucleon is typically 938 MeV, compared to just 0.511 MeV for an electron and 3-5 MeV for one of the three quarks inside a neutron or a proton. Hence the actual charges of matter aren't associated with much of the mass of material. Almost all the mass comes from the massive mediators of the strong force fields between quarks in nucleons, and between nucleons in nuclei heavier than hydrogen. (In the well-tested and empirically validated Standard Model, charges like fermions don't have mass at all; the entire mass is provided by a vacuum 'Higgs field'. The exact nature of such a field is not predicted, although some constraints on its range of properties are evident.)
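The rough arithmetic behind that statement, taking a few MeV per light quark (the precise current-quark masses are not needed for the point):

m_nucleon = 938.0      # MeV
m_quarks = 3 * 4.0     # MeV; three light quarks at roughly 4 MeV each
print(f"quark share of nucleon mass ~ {100 * m_quarks / m_nucleon:.1f} %")

So only about 1% of a nucleon's mass sits in the quark charges themselves; the rest resides in the fields.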
General relativity, absolute causality
Professor Georg Riemann (1826-66) stated in his 10 June 1854 lecture at Göttingen University, On the hypotheses which lie at the foundations of geometry: ‘If the fixing of the location is referred to determinations of magnitudes, that is, if the location of a point in the n-dimensional manifold be expressed by n variable quantities x_{1}, x_{2}, x_{3}, and so on to x_{n}, then … ds = √[∑(dx)^{2}] … I will therefore term flat these manifolds in which the square of the line-element can be reduced to the sum of the squares … A decision upon these questions can be found only by starting from the structure of phenomena that has been approved in experience hitherto, for which Newton laid the foundation, and by modifying this structure gradually under the compulsion of facts which it cannot explain.’
Riemann’s suggestion of summing dimensions using the Pythagorean sum ds^{2} = ∑(dx^{2}) could obviously include time (if we live in a single-velocity universe), because the product of velocity, c, and time, t, is a distance, so an additional term d(ct)^{2} can be included with the other dimensions dx^{2}, dy^{2}, and dz^{2}. There is then the question as to whether the term d(ct)^{2} will be added or subtracted from the other dimensions. It is clearly negative, because it is, in the absence of acceleration, a simple resultant, i.e., dx^{2} + dy^{2} + dz^{2} = d(ct)^{2}, which implies that d(ct)^{2} changes sign when passed across the equality sign to the other dimensions: ds^{2} = ∑(dx^{2}) = dx^{2} + dy^{2} + dz^{2} - d(ct)^{2} = 0 (for the absence of acceleration, therefore ignoring gravity, and also ignoring the contraction/time-dilation in inertial motion). This formula, ds^{2} = ∑(dx^{2}) = dx^{2} + dy^{2} + dz^{2} - d(ct)^{2}, is known as the ‘Riemann metric’ of Minkowski spacetime. It is important to note that it is not the correct spacetime metric, which is precisely why Riemann did not discover general relativity back in 1854. [The algebraic Newtonian-equivalent (for weak fields) approximation in general relativity is the Schwarzschild metric, ds^{2} = (1 - 2GM/(rc^{2}))^{-1}(dx^{2} + dy^{2} + dz^{2}) - (1 - 2GM/(rc^{2})) d(ct)^{2}. This reduces to the special relativity metric for the case M = 0, i.e., the absence of gravitation. However, this does not imply that general relativity proves the postulates of special relativity. For example, in general relativity the velocity of light changes as gravity deflects light, but special relativity denies this. Because the deflection of light, and hence the velocity change, is an experimentally validated prediction of general relativity, that postulate of special relativity is inconsistent and in error. For this reason, it is misleading to begin teaching physics using special relativity.]
Professor Gregorio Ricci-Curbastro (1853-1925) took up Riemann’s suggestion and wrote a 23-page article in 1892 on ‘absolute differential calculus’, developed to express differentials in such a way that they remain invariant after a change of coordinate system. In 1901, Ricci and Tullio Levi-Civita (1873-1941) wrote a 77-page paper on this, Methods of the Absolute Differential Calculus and Their Applications, which showed how to represent equations invariantly of any absolute coordinate system. This relied upon summations of matrices of differential vectors. Ricci expanded Riemann’s system of notation to allow the Pythagorean dimensions of space to be defined by a line element or ‘Riemann metric’ (named the ‘metric tensor’ by Einstein in 1916):
g = ds^{2} = g_{μν} dx_{μ} dx_{ν}.
The meaning of such a tensor is revealed by the subscript notation, which identifies the rank of the tensor and its type of variance.
‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant). … We call four quantities A_{ν} the components of a covariant four-vector, if for any arbitrary choice of the contravariant four-vector B^{ν}, the sum over ν, ∑A_{ν}B^{ν} = Invariant. The law of transformation of a covariant four-vector follows from this definition.’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.
When you look at the mechanism for the physical contraction, you see that general relativity is consistent with FitzGerald’s physical contraction, and I’ve shown this mathematically on my home page. Special relativity, according even to Albert Einstein, is superseded by general relativity - a fact that Lubos Motl may never grasp; like other ‘string theorists’, he calls everyone interested in Feynman’s objective approach to science a ‘science-hater’. To a string theorist, a lack of connection to physical fact is ‘science-loving’, while a healthy interest in supporting empirically checked work is ‘science-hating’. (String theorists borrowed this idea from KGB propaganda, as explained by George Orwell as ‘doublethink’ in the novel 1984.) Because string theory agrees with special relativity, crackpots claim falsely that general relativity is based on the same basic principle as special relativity. That is a lie, because special relativity is distinct from general covariance, which is the heart of general relativity:
‘... the law of the constancy of the velocity of light. But ... the general theory of relativity cannot retain this law. On the contrary, we arrived at the result that according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’ - Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p. 111.
‘... the principle of the constancy of the velocity of light in vacuo must be modified, since we easily recognise that the path of a ray of light … must in general be curvilinear...’ - Albert Einstein, The Principle of Relativity, Dover, 1923, p. 114.
‘According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Sidelights on Relativity, Dover, New York, 1952, p. 23.
‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A. S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.
The rank is denoted simply by the number of letters of subscript notation, so that X_{a} is a ‘rank 1’ tensor (a vector sum of first-order differentials, like net velocity or gradient over the applicable dimensions), and X_{ab} is a ‘rank 2’ tensor (for second-order differential vectors, like acceleration). A ‘rank 0’ tensor would be a scalar (a simple quantity without direction, such as the number of particles you are dealing with). A rank 0 tensor is defined by a single number (scalar); a rank 1 tensor is a vector, described by four numbers representing components in three orthogonal directions and time; a rank 2 tensor is described by 4 x 4 = 16 numbers, which can be tabulated in a matrix. By definition, a covariant tensor (say, X_{a}) and a contravariant tensor of the same variable (say, X^{a}) are distinguished by the way they transform when converting from one system of coordinates to another, a vector being defined as a rank 1 covariant tensor. Ricci used lower indices (subscript) to denote the matrix expansion of covariant tensors, and denoted a contravariant tensor by superscript (for example x^{n}). But even when bold print is used, this is still ambiguous with power notation, which of course means something completely different (the contravariant x^{n} denotes the set of components x^{1}, x^{2}, x^{3}, ... x^{n}, whereas as a power x^{n} means x multiplied by itself n times). [Another step towards ‘beautiful’ gibberish then occurs whenever a contravariant tensor is raised to a power, resulting in, say, (x^{2})^{2}, which a logical mortal (whose eyes do not catch the bold superscript) immediately ‘sees’ as x^{4}, causing confusion.] We avoid the ‘beautiful’ notation by using a negative subscript to represent contravariant notation; thus x_{-n} is here the contravariant version of the covariant tensor x_{n}.
Einstein wrote in his original paper on the subject, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916: ‘Following Ricci and Levi-Civita, we denote the contravariant character by placing the index above, and the covariant by placing it below.’ This was fine for Einstein, who had by that time been working with the theory of Ricci and Levi-Civita for five years, but it does not have the clarity it could have. (A student who is used to indices from normal algebra finds the use of index notation for contravariant tensors absurd, and it is sensible to be as unambiguous as possible.) If we expand the metric tensor for μ and ν able to take values representing the four components of spacetime (1, 2, 3 and 4 representing the ct, x, y, and z dimensions), we get the awfully long summation of 16 terms, added up like a 4-by-4 matrix (notice that according to Einstein’s summation convention, indices which appear twice are to be summed over):
g = ds^{2} = g_{μν} dx_{μ} dx_{ν} = ∑(g_{μν} dx_{μ} dx_{ν}) = (g_{11} dx_{1} dx_{1} + g_{21} dx_{2} dx_{1} + g_{31} dx_{3} dx_{1} + g_{41} dx_{4} dx_{1}) + (g_{12} dx_{1} dx_{2} + g_{22} dx_{2} dx_{2} + g_{32} dx_{3} dx_{2} + g_{42} dx_{4} dx_{2}) + (g_{13} dx_{1} dx_{3} + g_{23} dx_{2} dx_{3} + g_{33} dx_{3} dx_{3} + g_{43} dx_{4} dx_{3}) + (g_{14} dx_{1} dx_{4} + g_{24} dx_{2} dx_{4} + g_{34} dx_{3} dx_{4} + g_{44} dx_{4} dx_{4})
The first dimension has to be defined as negative since it represents the time component, ct. We can however simplify this result by collecting similar terms together and introducing the defined dimensions in terms of number notation, since the term dx_{1} dx_{1} = d(ct)^{2}, while dx_{2} dx_{2} = dx^{2}, dx_{3} dx_{3} = dy^{2}, and so on. Therefore:
g = ds^{2} = g_{ct} d(ct)^{2} + g_{x} dx^{2} + g_{y} dy^{2} + g_{z} dz^{2} + (a dozen trivial first-order cross terms).
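As a concrete check of the 16-term sum, here is a small numpy sketch using the flat metric of this section, diag(-1, +1, +1, +1) in (ct, x, y, z) order: the twelve off-diagonal g components vanish, so the double sum collapses to -d(ct)^2 + dx^2 + dy^2 + dz^2 (the displacement values are arbitrary sample numbers):

import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])    # metric tensor components g_{mu nu}
dx = np.array([2.0, 1.0, 1.0, 1.0])   # sample (d(ct), dx, dy, dz)

ds2 = sum(g[m, n] * dx[m] * dx[n] for m in range(4) for n in range(4))
print(ds2)                            # -(2^2) + 1 + 1 + 1 = -1.0
print(np.isclose(ds2, dx @ g @ dx))   # the matrix form gives the same number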
It is often asserted that Albert Einstein (1879-1955) was slow to apply tensors to relativity, resulting in the 10-year delay between special relativity (1905) and general relativity (1915). In fact, you could more justly blame Ricci and Levi-Civita, who wrote the long-winded paper about the invention of tensors (hyped under the name ‘absolute differential calculus’ at that time) and their applications to physical laws to make them invariant of absolute coordinate systems. If Ricci and Levi-Civita had been competent geniuses in mathematical physics in 1901, why did they not discover general relativity, instead of merely putting into print some new mathematical tools? Radical innovations on a frontier are difficult enough to impose on the world for psychological reasons, without this being done in a radical manner. So it is rare for a single group of people to have the stamina both to invent a new method and to apply it successfully to a radically new problem. Sir Isaac Newton used geometry, not his invention of calculus, to describe gravity in his Principia, because an innovation expressed using new methods makes it too difficult for readers to grasp. It is necessary to use familiar language and terminology to explain radical ideas rapidly and successfully.
Professor Morris Kline describes the situation after 1911, when Einstein began to search for more sophisticated mathematics to build gravitation into spacetime geometry: ‘Up to this time Einstein had used only the simplest mathematical tools and had even been suspicious of the need for "higher mathematics", which he thought was often introduced to dumbfound the reader. However, to make progress on his problem he discussed it in Prague with a colleague, the mathematician Georg Pick, who called his attention to the mathematical theory of Ricci and Levi-Civita. In Zurich Einstein found a friend, Marcel Grossmann (1878-1936), who helped him learn the theory; and with this as a basis, he succeeded in formulating the general theory of relativity.’ (M. Kline, Mathematical Thought from Ancient to Modern Times, Oxford University Press, 1990, vol. 3, p. 1131.)
Let us examine the developments Einstein introduced to accomplish general relativity, which aims to equate the mass-energy in space to the curvature of the motion (acceleration) of a small test mass, called the geodesic path. Readers who want a good account of the full standard tensor manipulation should see the page by Dr John Baez or Sir Kevin Aylward. There is also a good book by Sean Carroll, Spacetime and Geometry: An Introduction to General Relativity.
We will give perhaps a slightly more practical and physical interpretation of the basics here. Ricci introduced a tensor, the Ricci tensor, which deals with a change of coordinates by using the FitzGerald-Lorentz contraction factor, γ = (1 − v^{2}/c^{2})^{1/2}. For understanding the physics involved in general relativity, it is useful to define a new geometric tensor in the form R_{μν} = c^{2}(dγ/dx_{μ})(dγ/dx_{ν}), which equals zero in space which contains absolutely no mass or energy fields (Karl Schwarzschild produced a simple solution to the Einstein field equation in 1916 which shows the effect of gravity on spacetime, and which reduces to the line element of special relativity for the impossible hypothetical case of zero mass), and to define the scalar R = c^{2}d^{2}γ/ds^{2}; in each case the resulting dimensions are (acceleration/distance) = (time)^{-2}, assuming we can treat the tensors as real numbers (which, as Heaviside showed, is often possible for operators). Reduced to the simple case of three dimensions in spherical symmetry (x = y = z), tensor R_{μν} = R_{00} = a/x + a/y + a/z = 3a/r. Einstein played about with these tensors, building a representation of Isaac Newton’s gravity law a = MG/r^{2} (inward acceleration being defined as positive) in the form R_{μν} = 4πGT_{μν}/c^{2}, where T_{μν} is the mass-energy tensor, T_{μν} = ρu_{μ}u_{ν}. If we consider just a single dimension for low velocities (γ = 1), and remember E = mc^{2}, then T_{μν} = T_{00} = ρu^{2} = ρ(γc)^{2} = E/(volume). Thus, T_{μν}/c^{2} is the effective density of matter in space (the mass equivalent of the energy of electromagnetic fields). (We ignore pressure here.)
To get solutions, the source of gravity, such as the energy of the electromagnetic field, can in general relativity be treated as a ‘perfect fluid’ with no drag properties. Since the gravity source is conveyed by an intervening medium (the spacetime fabric, which we show to be based on dynamical Yang-Mills exchange radiation), this medium, when considered as an electromagnetic field, causes gravity by behaving as a perfect fluid.
According to most statements of Newton’s second law and the universal gravitation law, F = ma = mMG/r^{2}, but a serious flaw here is that F = ma is not an accurate statement, because during acceleration the mass m varies with speed (mass increases dramatically at relativistic velocities, i.e., velocities approaching c). A more accurate version of Newton’s second law is therefore his original formulation, F = dp/dt, where p is momentum (for low velocities only, p ≈ mv). Even for the low-velocity case where p ≈ mv, this law expands by the product rule of calculus to F = dp/dt ≈ d(mv)/dt = (m.dv/dt) + (v.dm/dt). For the situation where m is a variable (relativistic velocities), the gravity law will therefore be more complicated than Newton’s universal gravitational law (F = mMG/r^{2}). The Poisson equation for the Newtonian potential is ∇^{2}Φ = 4πρG, where ρ is density. The Laplacian operator ∇^{2} signifies the sum of second-order differentials of Φ; because there are three terms, they add up (in spherical symmetry) to give 3a/r, where a is the gravitational acceleration along radius r. To convert ∇^{2}Φ = 4πρG into the Einstein field equation requires replacing the mass density ρ by the energy-momentum tensor T_{μν}, so that field energy and pressure energy are included along with the energy equivalent of the mass density, and also replacing ∇^{2}Φ by a rank-2 tensor. The rank-4 Riemann tensor is contracted to the rank-2 Ricci tensor, R_{μν}, giving R_{μν} = kT_{μν}, where k is a constant. This generalisation of the Newtonian gravitational potential ∇^{2}Φ = 4πρG is inadequate because it defies the conservation of energy; correcting for energy conservation reduces the value of the Ricci tensor by the amount ½g_{μν}R, giving the Einstein-Hilbert equation of general relativity: R_{μν} − ½g_{μν}R = 8πGT_{μν}/c^{2}.
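As a check on the 3a/r statement, here is a sketch of my own (using the textbook interior potential of a uniform-density sphere, which is an assumption brought in for illustration, not a formula from this page):

import sympy as sp

r, R, G, rho = sp.symbols('r R G rho', positive=True)
# standard interior potential of a uniform-density sphere of radius R
Phi = sp.Rational(2, 3) * sp.pi * G * rho * (r**2 - 3 * R**2)
laplacian = sp.diff(r**2 * sp.diff(Phi, r), r) / r**2   # Laplacian in spherical symmetry
print(sp.simplify(laplacian))        # -> 4*pi*G*rho, i.e. Poisson's equation
a = sp.diff(Phi, r)                  # gravitational acceleration along radius r
print(sp.simplify(3 * a / r))        # -> 4*pi*G*rho, the 3a/r form quoted above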
Einstein’s method of obtaining the final answer involved trial and error and the equivalence principle between inertial and gravitational mass, but using Professor Roger Penrose’s approach: Einstein recognised that while this equation reduces to Newton’s law for low speeds, it is in error because it violates the principle of conservation of mass-energy, since a gravitational field has energy (i.e., ‘potential energy’) and vice-versa. Einstein discovered that this error is cancelled out if a term equal to −½g_{μν}R is added on to R_{μν}. At low speeds, −½g_{μν}R = R_{μν}, so the total curvature is −½g_{μν}R + R_{μν} = 2R_{μν} = 2(4πGT_{μν}/c^{2}) = 8πGT_{μν}/c^{2}. Dividing all this out by two gives R_{μν} = 4πGT_{μν}/c^{2}. But at the speed of light −½g_{μν}R = 0, so then R_{μν} = 8πGT_{μν}/c^{2}. Therefore, light is accelerated by gravity exactly twice as much as predicted by Newton’s law, i.e., a = 2MG/r^{2} rather than a = MG/r^{2}. General relativity is a mathematical accounting system, and this factor of two comes into it from the energy considerations ignored by Newtonian physics, due to the light speed of the gravitational field itself.
During deflection by the sun, the average angle between a passing ray of light and the line to the sun’s centre of gravity is a right angle. When gravity deflects an object with rest mass that is moving perpendicularly to the gravitational field lines, it speeds up the object as well as deflecting its direction. But because light is already travelling at its maximum speed (light speed), it simply cannot be speeded up at all by falling. Therefore, the half of the gravitational potential energy that normally goes into speeding up an object with rest mass cannot do so in the case of light, and must go instead into causing additional directional change (downward acceleration). This is the mathematical physics reasoning for why light is deflected by precisely twice the amount suggested by Newton’s a = MG/r^{2}.
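To put numbers on the factor of two, here is a minimal sketch of my own, using standard solar constants (the 1.75 arcsecond figure is the value famously confirmed by the 1919 eclipse measurements):

import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg
R_sun = 6.96e8      # m, grazing-incidence impact parameter

newton = 2 * G * M_sun / (c**2 * R_sun)     # deflection implied by Newton's a = MG/r^2
einstein = 4 * G * M_sun / (c**2 * R_sun)   # general relativity: exactly twice as much
to_arcsec = (180 / math.pi) * 3600
print(newton * to_arcsec, einstein * to_arcsec)   # ~0.87 and ~1.75 arcseconds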
General relativity is an energy accountancy package, but you need physical intuition to use it. This reason is more of an accounting trick than a classical explanation. As Penrose points out, Newton’s law as expressed in tensor form with E = mc^{2} is fairly similar to Einstein’s field equation: R_{μν} = 4πGT_{μν}/c^{2}. Einstein’s result is: −½g_{μν}R + R_{μν} = 8πGT_{μν}/c^{2}. The fundamental difference is due to the inclusion of the contraction term, −½g_{μν}R, which doubles the value of the other side of the equality.
In an article in the book It Must Be Beautiful, Penrose explains the tensors of general relativity physically: ‘… when there is matter present in the vicinity of the deviating geodesics, the volume reduction is proportional to the total mass that is surrounded by the geodesics. This volume reduction is an average of the geodesic deviation in all directions … Thus, we need an appropriate entity that measures such curvature averages. Indeed, there is such an entity, referred to as the Ricci tensor, constructed from [the big Riemann tensor] R_abcd. Its collection of components is usually written R_ab. There is also an overall average single quantity R, referred to as the scalar curvature.’
Einstein’s field equation states that the Ricci tensor, minus half the product of the metric tensor and the scalar curvature, is equal to 8πGT_{μν}/c^{2}, where T_{μν} is the mass-energy tensor, which is basically the energy per unit volume (this is not so simple when you include relativistic effects and pressures). The key physical insight is the volume reduction, which can only be mechanistically explained as a result of the pressure of the spacetime fabric.
To solve the field equation, use is made of the simple concepts of proper lengths and proper times. The proper length in spacetime is equal to c∫(g_{μν} dx_{μ} dx_{ν})^{1/2}, while the proper time is ∫(g_{μν} dx_{μ} dx_{ν})^{1/2}. Notice that the ratio of proper length to proper time is always c.
Now, −½g_{μν}R + R_{μν} = 8πGT_{μν}/c^{2} is usually shortened to the vague and therefore unscientific and meaningless ‘Einstein equation’, G = 8πT. Teachers who claim that the ‘conciseness’ and ‘beautiful simplicity’ of ‘G = 8πT’ is a ‘hallmark of brilliance’ are therefore obfuscating. A year later, in his paper ‘Cosmological Considerations on the General Theory of Relativity’, Einstein force-fitted it to the assumed static universe of 1916 by inventing a new cosmic ‘epicycle’, the cosmological constant, to make gravity weaken faster than the inverse-square law, become zero at a distance equal to the average separation distance of galaxies, and become repulsive at greater distances. In fact, as later proved, such an epicycle, apart from being merely wild speculation lacking a causal mechanism, would be unstable: the universe would collapse into one lump. Einstein finally admitted that it was ‘the biggest blunder’ of his life.
There is a whole industry devoted to ‘G = 8πT’, which is stated as meaning ‘curvature of space = mass-energy’ in an attempt to obfuscate, so as to cover up the fact that Einstein had no mechanism of gravitation. In fact, of course, Einstein admitted in 1920 in his inaugural lecture at Leyden that the deep meaning of general relativity is that, in order to account for acceleration, you need to dump the baggage associated with special relativity and go back to having what he called an ‘ether’, or a continuum/fabric of spacetime. Something which doesn’t exist can hardly be curved, can it? The Ricci tensor is in fact a shortened form of the big rank-4 Riemann tensor (the expansions and properties of which are capable of putting anyone off science). To be precise, R_{μν} = R_{μανβ}g^{αβ}, while R = R_{μν}g^{μν}. No matter how many times people hype up gibberish with propaganda labels such as ‘beautiful simplicity’, Einstein lacked a mechanism of gravity, and his equation fails to fit the big bang universe without force-fitting using ad hoc ‘epicycles’. The original epicycle was the ‘cosmological constant’, Λ. This was falsely used to keep the universe stable: G + Λg_{μν} = 8πT. This sort of thing, while admitted in 1929 to be an error by Einstein, is still being postulated today, without any physical reasoning and with just ad hoc mathematical fiddling to justify it, to ‘explain’ why distant supernovae are not being slowed down by gravitation in the big bang. In 1996 I predicted a small positive effective cosmological constant (hence the value of the ‘dark energy’) by showing that there is no long-range gravitational retardation of distant receding matter; that is a prediction of the gravity mechanism on this page, published via the October 1996 issue of Electronics World (letters page). Hence ‘dark energy’ is speculated as an invisible, unobserved epicycle to maintain ignorance. There is no ‘dark energy’, but you can calculate and predict the amount there would be from the fact that the expansion of the universe isn’t slowing down: just accept that the expansion goes as Hubble’s law with no gravitational retardation, and when you normalise this with the mainstream cosmological model (which falsely assumes retardation) you ‘predict’ the ‘right’ values for the fictitious cosmological constant and the fictitious dark energy.
Light has momentum and exerts pressure, delivering energy. Continuous exchange of highenergy gauge bosons can only be detected as the normal forces and inertia they produce.
GENERAL RELATIVITY’S HEURISTIC PRESSURE-CONTRACTION EFFECT AND INERTIAL ACCELERATION-RESISTANCE CONTRACTION
Penrose’s Perimeter Institute lecture is interesting: ‘Are We Due for a New Revolution in Fundamental Physics?’ Penrose suggests quantum gravity will come from modifying quantum field theory to make it compatible with general relativity. I like the questions at the end, where Penrose is asked about the ‘funnel’ spatial pictures of black holes, and points out that they’re misleading illustrations, since you’re really dealing with spacetime, not a hole or distortion in 2 dimensions. The funnel picture really shows a 2-dimensional surface distorted into 3 dimensions, whereas in reality you have a 3-dimensional surface distorted into 4-dimensional spacetime. In his essay on general relativity in the book ‘It Must Be Beautiful’, Penrose writes: ‘… when there is matter present in the vicinity of the deviating geodesics, the volume reduction is proportional to the total mass that is surrounded by the geodesics. This volume reduction is an average of the geodesic deviation in all directions … Thus, we need an appropriate entity that measures such curvature averages. Indeed, there is such an entity, referred to as the Ricci tensor …’ Feynman discussed this simply as a reduction in radial distance around a mass of (1/3)MG/c^{2} = 1.5 mm for the Earth. It’s such a shame that the physical basics of general relativity are not taught, and the whole thing gets abstruse. The curved-space or 4-d spacetime description is needed to avoid π varying, due to gravitational contraction of radial distances but not circumferences.
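Feynman’s figure is easy to check; a minimal arithmetic sketch of my own, using standard values for G, c and the Earth’s mass:

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_earth = 5.972e24   # kg

contraction = G * M_earth / (3 * c**2)   # Feynman's (1/3)MG/c^2 radial reduction
print(contraction * 1000, 'mm')          # ~1.5 mm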
The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, by Newton’s law will be v = (2GM/x)^{1/2}, so v^{2} = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed that a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v.
By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v^{2} = 2GM/x) into the FitzGerald-Lorentz contraction, giving γ = (1 − v^{2}/c^{2})^{1/2} = [1 − 2GM/(xc^{2})]^{1/2}.
However, there is an important difference between this gravitational transformation and the usual FitzGerald-Lorentz transformation, since length is only contracted in one dimension with velocity, whereas length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!) with spherically symmetric gravity. Using the binomial expansion to the first two terms of each:
FitzGerald-Lorentz contraction effect: γ = x/x_{0} = t/t_{0} = m_{0}/m = (1 − v^{2}/c^{2})^{1/2} = 1 − ½v^{2}/c^{2} + ...
Gravitational contraction effect: γ = x/x_{0} = t/t_{0} = m_{0}/m = [1 − 2GM/(xc^{2})]^{1/2} = 1 − GM/(xc^{2}) + ...,
where for spherical symmetry (x = y = z = r), we have the contraction spread over three perpendicular dimensions, not just one as is the case for the FitzGerald-Lorentz contraction: x/x_{0} + y/y_{0} + z/z_{0} = 3r/r_{0}. Hence, spreading the contraction 1 − GM/(xc^{2}) over three dimensions, the radial contraction of space around a mass is r/r_{0} = 1 − GM/(3rc^{2}).
Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3) GM/c^{2}. This physically relates the Schwarzschild solution of general relativity to the special relativity line element of spacetime.
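A quick numerical sanity check of the binomial approximation at the Earth’s surface (my own sketch; the values of G, c, and the Earth’s mass and radius are standard constants, not figures from this page):

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M = 5.972e24         # Earth's mass, kg
x = 6.371e6          # Earth's surface radius, m

exact = (1 - 2 * G * M / (x * c**2)) ** 0.5   # full gravitational contraction factor
approx = 1 - G * M / (x * c**2)               # first two binomial terms
print(exact, approx)   # both ~ 1 - 7e-10; the binomial form is ample for weak fields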
This is the 1.5 mm contraction of the Earth’s radius which Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the FitzGerald-Lorentz contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without viscosity. Feynman was unable to proceed with the Le Sage gravity mechanism and gave up on it in 1965. However, we have a solution…
Above: mechanism of attraction and repulsion in electromagnetism, and the capacitor summation of displacement current energy flowing between accelerating (spinning) charges as gauge bosons (by analogy to Prevost’s 1792 model of constant temperature as a radiation equilibrium). The net exchange is like two machine gunners firing bullets at each other; they recoil apart. The gauge bosons pushing them together are redshifted, like nearly spent bullets coming from a great distance, and are not enough to prevent repulsion. In the case of attraction, the same principle applies. The two opposite charges shield one another and get pushed together. Although each charge is radiating and receiving energy on the outer sides, the inward push is from redshifted gauge bosons, and the emission is not redshifted. The result is just like two people, standing back to back, firing machine guns. The recoil pushes them together, hence the attraction force.
UPDATED DISCUSSION from http://electrogravity.blogspot.com/ 14 December 2006:
The capacitor QFT model in detail:
At every instant, you have a vector sum of the possible electric fields across the universe. The fields are physically propagated by gauge boson exchange. The gauge bosons must travel between all charges; they can’t tell that an atom is ‘neutral’ as a whole, they just travel between the charges. Therefore, even though the electric dipole created by the separation of the electron from the proton in a hydrogen atom at any instant is randomly orientated, the gauge bosons can also be considered to be doing a random walk between all the charges in the universe.
The random-walk vector sum for the charges of all the hydrogen atoms is the voltage for a single hydrogen atom (the real charged mass in the universe is something like 90% hydrogen), multiplied by the square root of the number of atoms in the universe. This allows for the angles of each atom being random. If you have a large row of charged capacitors randomly aligned in a series circuit, the average voltage resulting is obviously zero, because you have the same number of positive terminals facing one way as the other.
So there is a lot of inefficiency, but in a two- or three-dimensional set-up, a drunk taking an equal number of steps in each direction does make progress. Taking 1 step per second, he goes an average net distance from the starting point of t^0.5 steps after t seconds. For air molecules the same occurs, so instead of staying in the same average position after a lot of impacts, they diffuse gradually away from their starting points.
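The t^0.5 claim is easy to verify by simulation; here is a minimal Monte Carlo sketch of my own (a two-dimensional drunkard’s walk with unit steps):

import math, random

def net_distance(steps):
    x = y = 0.0
    for _ in range(steps):
        angle = random.uniform(0, 2 * math.pi)   # each step in a random direction
        x += math.cos(angle)
        y += math.sin(angle)
    return math.hypot(x, y)

t = 10000
walks = [net_distance(t) for _ in range(200)]
rms = math.sqrt(sum(d * d for d in walks) / len(walks))
print(rms, t ** 0.5)   # rms net distance comes out close to t**0.5 = 100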
Anyway, for the electric charges comprising the hydrogen and other atoms of the universe, each atom is a randomly aligned charged capacitor at any instant of time. This means that the gauge boson radiation being exchanged between charges to give electromagnetic forces in Yang-Mills theory will have the drunkard’s-walk effect, and you get a net electromagnetic field of the charge of a single atom multiplied by the square root of the total number in the universe.
Now, if gravity is to be unified with electromagnetism (also basically a long-range, inverse-square law force, unlike the short-ranged nuclear forces), and if gravity is due to a geometric shadowing effect (see my home page for the Yang-Mills Le Sage quantum gravity mechanism with predictions), it will depend on only a straight-line charge summation. In an imaginary straight line across the universe (forget about gravity curving geodesics, since I’m talking about a non-physical line for the purpose of working out the gravity mechanism, not a result from gravity), there will be on average almost as many capacitors (hydrogen atoms) with the electron-proton dipole facing one way as the other, but not quite the same numbers!
You find that statistically, a straight line across the universe is 50% likely to have an odd number of atoms falling along it, and 50% likely to have an even number of atoms falling along it. Clearly, if the number is even, then on average there is zero net voltage. But in all the 50% of cases where there is an ODD number of atoms falling along the line, you do have a net voltage. The situation in this case is that the average net voltage is 0.5 times the net voltage of a single atom. This causes gravity. The exact weakness of gravity as compared to electromagnetism is now predicted.
Gravity is due to 0.5 × the voltage of 1 hydrogen atom (a ‘charged capacitor’). Electromagnetism is due to the random-walk vector sum between all charges in the universe, which comes to the voltage of 1 hydrogen atom (a ‘charged capacitor’) multiplied by the square root of the number of atoms in the universe. Thus, the ratio of gravity strength to electromagnetism strength between an electron and a proton is equal to: 0.5V/(V.N^0.5) = 0.5/N^0.5, where V is the voltage of a hydrogen atom (a charged capacitor in effect) and N is the number of atoms in the universe. This ratio is equal to about 10^-40, which is the correct figure within the experimental errors involved.
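For comparison, a sketch of my own: N ~ 10^80 is an assumed round figure for the number of charges, and the ‘measured’ line is the standard electron-proton force ratio computed from textbook constants (neither number comes from this page):

G = 6.674e-11                      # m^3 kg^-1 s^-2
k = 8.988e9                        # Coulomb constant, N m^2 C^-2
m_e, m_p = 9.109e-31, 1.673e-27    # electron and proton masses, kg
e = 1.602e-19                      # elementary charge, C

measured = G * m_e * m_p / (k * e**2)   # gravity/Coulomb force ratio, independent of distance
N = 1e80                                # assumed number of charges in the universe
predicted = 0.5 / N ** 0.5              # the 0.5/N^0.5 estimate argued above
print(measured, predicted)              # ~4.4e-40 versus 5e-41

The two figures agree to within an order of magnitude, which is the sense in which the estimate is ‘correct within the experimental errors’ claimed above.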
OLDER MATERIAL FOLLOWS:
Heuristically, gauge bosons (virtual photons) transfer between charges to cause electromagnetic forces, and those gauge bosons don’t discriminate against charges in neutral groups like atoms and neutrons. The Feynman diagrams show no way for the gauge bosons/virtual photons to stop interactions. Light then arises when the normal exchange of gauge bosons is upset from its equilibrium. You can test this heuristic model in some ways. First, most gauge bosons are going to be exchanged in a random way between charges, which means the simple electric analogue is a series of randomly connected charged capacitors (positive and negative charges, with a vacuum 377-ohm dielectric between the ‘plates’). Statistically, if you connect an even number of charged capacitors at random along a line across the universe, the sum will on average be zero. But if you have an odd number, you get an average of 1 capacitor unit. On average, any line across the universe will be as likely to have an even as an odd number of charges, so the average charge sum will be the mean, (0 + 1)/2 = 1/2 capacitor. This is weak and always attractive, because there is no force at all in the sum = 0 case and an attractive force (between oppositely charged capacitor plates) in the sum = 1 case. Because it is weak and always attractive, it is gravitation. The other way the charges can add is in a perfect summation where every charge in the universe appears in the series + − + −, etc. This looks improbable, but is statistically a drunkard’s walk, and by the nature of path integrals gauge bosons do take every possible route, so it WILL happen. When capacitors are arranged like this, the potential adds like a statistical drunkard’s walk because of the random orientation of ‘capacitors’, the diffusion weakening the summation from the total number to just the square root of that number because of the angular variations (two steps in opposite directions cancel out, as does the voltage from two charged capacitors facing one another). This vector sum of a drunkard’s walk is the average step times the square root of the number of steps, so for ~10^{80} charges, you get a resultant of ~10^{40}. The ratio of electromagnetism to gravity is then (~10^{40})/(1/2). Notice that this model shows gravity is electromagnetism, caused by gauge bosons: it does away with gravitons. The distances between the charges are ignored. This is explained because on average half the gauge bosons will be going away from the observer, and half will be approaching the observer. The fall due to the spread over larger areas with divergence is offset by the concentration due to convergence.
ALL electrons are emitting, so all are receiving. Hence they don’t slow down; they just get juggled around and obey the chaotic Schrödinger wave formula instead of a classical Bohr orbit. ‘Arguments’ against the facts of emission without net energy loss also ‘disprove’ real heat theory. According to the false claim that radiation leads to net energy loss, because everything is emitting heat radiation (separately from force-causing radiation), everything should quickly cool to absolute zero. This is wrong for the same reason as above: if everything is emitting heat, you can have equilibrium at constant temperature.
On the small positive value of the CC, see Phil Anderson’s comment on Cosmic Variance: ‘the flat universe is just not decelerating, it isn’t really accelerating’ - Prof. Phil Anderson, http://cosmicvariance.com/2006/01/03/dangerphilanderson/#comment10901
The point is that general relativity is probably not a complete theory of gravity, as it doesn’t have quantum field theory in it. Assume Anderson is right, that gravity simply doesn’t decelerate the motion of distant supernovae. What value of the CC does this predict quantitatively? Answer: the expansion rate without gravitational retardation is just Hubble’s law, which predicts the observed result to within experimental error! Hence the equivalent CC is predicted ACCURATELY. (Although Anderson’s argument is that no real CC actually exists, a pseudo-CC must be fabricated to fit observation if you FALSELY assume that there is a gravitational retardation of supernovae naively given by Einstein’s field equation.)
Theoretical explanation: if gravity is due to momentum from gauge boson radiation exchanged with the mass in the expanding universe surrounding the observer, then in the observer’s frame of reference a distant receding supernova geometrically can’t be retarded much.
The emphasis on theoretical predictions is important. I’ve shown that the correct quantum gravity dynamics (which predict G accurately) give the effective or AVERAGE radius of the gravity-causing mass around us (i.e., the average range of the receding mass which is causing the gravity we experience) as (1 − e^-1)ct ~ 0.632ct ~ 10,000 million light-years in distance, where t is the age of the universe. Hence a supernova which is that distance from us, approximately 10,000 million light-years away, is not affected at all by gravitational retardation (deceleration), as far as we, as observers of its motion, are concerned. (Half of the gravity-causing mass of the universe, as far as our frame of reference is concerned, is within a radius of 0.632ct of us, and half is beyond that radius. Hence the net exchange-radiation gravity at that distance is zero. This calculation already has the redshift correction built into it, since that is used to determine the 0.632ct effective radius.) This model in a basic form was predicted in 1996, two years before supernovae data confirmed it. Alas, bigots suppressed it, although it was published via the October 1996 issue of Electronics World magazine.
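Numerically (a sketch of my own; the age t ~ 15,800 million years is an assumed illustrative value of roughly 1/H, not a figure from this page):

import math

t_years = 15.8e9                          # assumed age of universe ~ 1/H, in years
radius = (1 - math.exp(-1)) * t_years     # effective radius in light-years
print(radius / 1e9)                       # ~10 (thousand million light-years)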
This is a very important difference between the proper mechanism of gravity and the mainstream approach: the mechanism predicts Einstein’s field equation (Newton’s law written in tensor notation with spacetime, plus a contraction term to keep the mass-energy conservation accurate) as a physical effect of exchange-radiation-caused gravitation!
The Standard Model tells us gravity and electromagnetic forces are caused by light-speed exchange radiation. Particles exchange the radiation and recoil apart. This process is like radiation being reflected by the mass carriers in the vacuum with which charged particles (electrons, quarks, etc.) associate. The curvature of spacetime is caused physically by this process.
Radiation pressure causes gravity, contraction in general relativity, and other forces (see below), in addition to avoiding the dark matter problem. The Standard Model is the best-tested physical theory in history: forces are due to Feynman-diagram radiation exchange in spacetime. There are 3 expanding spacetime dimensions in the big bang universe, which describe the universe on a large scale, and 3 contractable dimensions of matter, which we see on a small scale.
Force strengths, nuclear particle masses and the elimination of dark matter and energy by a mechanism of the Standard Model, using only established, widely accepted, peer-reviewed facts published in Physical Review Letters.
High-energy unification just implies unification of forces at small distances, because particles approach closer when collided at high energy. So really, unification at extremely high energy is suggesting that even at low energy, forces unify at very small distances.
There’s empirical evidence that the strong force becomes weaker at higher energies (shorter distances) and the electroweak force becomes stronger (the electric charge between electrons is 7% stronger when they’re collided at 90 GeV), so there is likely some kind of unified force near the bare core of a quark. As you move away from the core, the intervening polarised vacuum shields the bare core electric charge by a factor that seemingly increases toward 137, and the strong force falls off because it is mediated by massive gluons which are short-ranged.
If you consider energy conservation of the vector bosons (photons, Z, W+, W− and gluons), you would logically expect quantitative force unification where you are near the bare charge core: the QFT prediction (which doesn’t predict unification unless you have SUSY) seems to neglect this in producing a prediction that electric charge increases as a weak function (some logarithm) of interaction energy.
The reason why energy conservation will produce unification is this: the increasing force of electroweak interactions with increased energy (or smaller distances from the particle core) implies more energy is in the gauge bosons delivering the momentum which produces the forces. This increased gauge boson energy is completely distinct from the kinetic energy (although a moving mass gains mass-energy by the Lorentz effect, it does not gain any charge by this route).
Where does the extra gauge boson energy that increases the effective charge come from when you get closer to a particle core? Answer: the fall in the strong force around a quark core as distance decreases implies a decrease in the amount of energy in short-ranged strong force gauge bosons (gluons). The fall in the energy available in gluons is, by conservation of energy, what is powering the increase in the energy of electroweak gauge bosons at short ranges.
Levine’s PRL data from 1997 show that the Coulomb force increases by 7% as collision energy rises to 90 GeV (PRL, v.78, 1997, no.3, p.424).
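The 7% figure follows directly from the two coupling values quoted in this section; a quick arithmetic check of my own:

alpha_low = 1 / 137.036    # measured low-energy electromagnetic coupling
alpha_90GeV = 1 / 128.5    # measured effective coupling in ~90 GeV collisions
print((alpha_90GeV / alpha_low - 1) * 100)   # ~6.6 per cent rise in effective charge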
Look at it physically: you are getting past the polarised shield of vacuum charges. Once you are really near the core, so that there are no intervening vacuum charges, the electric charge has no mechanism to carry on increasing further, so the QFT prediction is false. The QFT formulae are for continuous variables, and therefore don’t deal with the quantum (discrete) nature of vacuum charges properly at extremely high energy: you can’t physically have a fraction of a polarised vacuum charge pair between you and the quark core! Hence the usual mathematical model, which doesn’t allow for the discreteness (i.e., the fact that you need at least one pair of vacuum charges between you and the quark core for there to be any shielding at all), is the error in the mainstream QFT treatment which, together with avoiding the conservation of energy for gauge bosons, creates the unification ‘problem’ which is then falsely ‘fixed’ by the speculative introduction of SUSY (supersymmetry, a new unobserved particle for every observed particle).
Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp. 75-6:
'The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy ... The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].
'Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled 'negative energy sea' the complete theory (hole theory) can no longer be a single-particle theory.
'The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.
'In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].
'For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the 'crowded' vacuum is to change these to new constants e' and m', which must be identified with the observed charge and mass. ... If these contributions were cut off in any reasonable manner, m' − m and e' − e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.
'All this means that the present theory of electrons and fields is not complete. ... The particles ... are treated as 'bare' particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example ... the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger's coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001...'
VACUUM POLARISATION, AN EMPIRICALLY DEFENDABLE FACT
What is the mechanism for the empirically confirmed 7% increase in electric charge as energy increases to 90 GeV (PRL, v.78, 1997, no.3, p.424)? There is something soothing in the classical analogy of a sea of charge polarising around particle cores and causing shielding. While the electroweak forces increase with interaction energy (and thus increase closer to a particle core), data on the strong force show it falls from alpha = 1 at low energy to ~0.15 at 90 GeV.
If you think about the conservation of gauge boson energy, the true (core) charge is independent of the interaction energy (although the mass rises with velocity), so the only physical mechanism by which the electroweak forces can increase as you approach the core is by gaining energy from the gluon field, which has a falling alpha closer to the core. If true, then this logic dispenses with SUSY, because perfect unification due to energy conservation will be reached at extremely high energy, when the polarised vacuum is penetrated.
RECAP
The strong force coupling constant decreases from 1 at low energy to 0.15 at about 90 GeV, and over this same range the electromagnetic force coupling increases from 1/137 to 1/128.5 or so. This is empirical data. The 7% rise in electric charge as interaction energy rises to 90 GeV (Levine et al., PRL, v.78, 1997, no.3, p.424) is the charge polarisation (core shield) being broken through as the particles approach very closely.
This experiment used electrons, not quarks, because of course you can’t make free quarks. But quarks have electric charge as well as colour charge, so the general principle will apply: the electric charge of a quark will increase by about 7% as energy increases to 90 GeV, while the strong nuclear (colour) charge will fall by 85%. So what is it that supplies the additional energy for the field mediators? It must be the decrease in the strength of the short-range strong force in the case of a quark.
Get close to a quark (inside the vacuum polarisation veil) and the electric charge increases toward the bare core value, while the colour charge diminishes (allowing asymptotic freedom over the size of a nucleon). The energy to drive the electromagnetic coupling increase must come from the gluon field, because there is simply nowhere else for it to come from. If so, then when the polarised vacuum around the core is fully penetrated, the strong force will be in equilibrium with the electroweak force, and there will be unification without SUSY.
Galactic rotation rates and other popular controversies
I’ve complained bitterly about Jeremy Webb before. He is the editor of New Scientist, and regularly publishes articles which falsely make comments to the effect that ‘nobody has predicted gravity, its mechanism is not understood by anyone (and being a scientist, I’m not arm-waving here, I’ve personally interviewed all of the billions of people in the world) ...’
These comments appear in articles discussing the fundamental forces, string theory, etc. The latest was an editorial on page 5 of the 29 April 2006 issue, which is unsigned but may be by him. The first email he sent me was on a Monday or Tuesday evening in December 2002 (I can search it out if needed), and complained that he had to write the editorial for the following morning. (The second email, a few months later, complained that he had just returned from holiday and was therefore not refreshed and able to send me a reply to my enquiry letter...)
Anyway, in the editorial he (or whoever he gets to do his work for him, should he have been on holiday again, which may well be the case) writes: ‘The most that can be said for a physical law is that it is a hypothesis that has been confirmed by experiment so many times that it becomes universally accepted. There is nothing natural about it, however: it is a wholly human construct. Yet we still baulk when somebody tries to revoke one.’
This is very poorly written. Firstly, mathematically based laws can be natural (Feynman argued that physical laws have a naturally beautiful simplicity, and people such as Wigner argued, less convincingly, that because Pi occurs in some geometric integrals relating to natural probability, the mathematics is natural, and the universe is based on mathematics rather than being merely incompletely modelled by it in some quantitative aspects, depending on whether you consider string theory to be pseudoscience or genius).
Secondly, ‘a miss is as good as a mile’: even if science is about falsifying well-established and widely accepted facts (such activity is deemed crackpot territory according to John Baez and many other mainstream scientists), then failing to produce the results required, failing to deliver the goods, is hardly exciting. If someone tries to revoke a law and doesn’t succeed, they don’t get treated in the way Sir Karl Popper claimed they do. Popper claimed basically that ‘science proceeds by falsification, not by proof’, which is contrary to Archimedes’ proofs of the laws of buoyancy and so on. Popper was seriously confused, because nobody has won a mainstream prize for just falsifying an established theory. Science simply is not done that way. Science proceeds constructively, by doing work. The New Scientist editorial proceeds:
‘That is what is happening to the inverse-square law at the heart of Newton’s law of gravitation. ... The trouble is that this relationship fails for stars at the outer reaches of galaxies, whose orbits suggest some extra pull towards the galactic centre. It was to explain this discrepancy that dark matter was conjured up [by Fritz Zwicky in 1933], but with dark matter still elusive, another potential solution is looking increasingly attractive: change the law.’
This changed-law programme is called ‘MOND: modified Newtonian dynamics’. It is now ten years since I first wrote up the gravity mechanism in a long paper. General relativity in the usual cosmological solution gives
a/R = (1/3)(‘cosmological constant’, if any) − (4/3)πG(ρ + 3p)
where a is the acceleration of the universe, R is the radius, ρ is the density, and p is the pressure contribution to expansion: p = 0 for non-relativistic matter; p = ρc^2/3 for relativistic matter (such as energetic particles travelling at velocities approaching c in the earliest times in the big bang). Negative pressure produces accelerated expansion.
The Hubble constant, H, more correctly termed the Hubble ‘parameter’ (the expansion rate evolves with time and only appears a constant because we are seeing the past with time as we look to greater distances), in this model is
H^2 = (v/R)^2 = (1/3)(‘cosmological constant’, if any) + (8/3)πGρ − k/(XR)^2 ~= (1/3)(‘cosmological constant’, if any) + (8/3)πGρ
where k is the geometry of the expansion curve for the universe (k = −1, 0, or +1; WMAP data show k ~ 0, which in general relativity jargon is a very ‘flat’ geometry of spacetime) and X is the radius of the curvature of spacetime, i.e., simply the radius of the circle that a photon of light would travel around the universe due to being trapped by gravity (this is the geodesic).
Because the cosmological constant and the third term on the right-hand side are generally negligible (especially if exponential inflation occurs at the earliest absolute time in the expansion of the universe), this gives the usual Friedmann prediction for density, approximately:
Density, ρ = (3/8)H^2/(πG).
This is the required density for the WMAP observations of a flat spacetime. This formula overestimates the observed density of discovered matter in the universe by an order of magnitude!
The gravity mechanism I came up with, which was first written up about 10 years ago today and first published via the October 1996 letters pages of Electronics World, gives a different formula, which (unlike the mainstream equation above) makes the right predictions:
Density, ρ = (3/4)H^2/(πGe^3).
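To compare the two density formulae numerically (a sketch of my own; H ~ 70 km/s/Mpc is an assumed illustrative value, not a figure from this page):

import math

H = 70 * 1000 / 3.086e22    # Hubble parameter ~70 km/s/Mpc, converted to 1/s
G = 6.674e-11               # m^3 kg^-1 s^-2

rho_friedmann = 3 * H**2 / (8 * math.pi * G)              # mainstream critical density
rho_mechanism = 3 * H**2 / (4 * math.pi * G * math.e**3)  # the formula above
print(rho_friedmann, rho_mechanism, rho_mechanism / rho_friedmann)
# ratio = 2/e^3 ~ 0.1, i.e. the mechanism's density is an order of magnitude lower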
It also predicted in 1996 that the universe is not slowing down, a prediction confirmed by observation in 1998! It also leads to a wide range of other useful and accurate predictions for Standard Model parameters.
Normally in science you hear people saying that the one thing which is impressive is good predictions! However, my work was simply suppressed by Nature and other journals, and the new 1998 observations were taken into cosmology by a mechanism-less fudge, an arbitrary adjustment of the equations to artificially force the model to fit the new facts! This is called the Lambda-CDM (Lambda-Cold Dark Matter) model, and it disproves Kuhn’s concept of scientific revolutions. Faced with a simple mechanism, they prefer to ignore it and go for a crackpot approach based on faith in the unseen, unobservable, unpredictable type(s) of ‘dark matter’, which is really contrary to science.
If they had a prediction of the properties of this matter, so that there was a possibility of learning something, then it would at least have a chance of being tested. As it is, it cannot be falsified, and by Popper’s nonsense it is simply not science by their own definition (it is the mainstream, not me, that clings on to Popper’s criterion while repeatedly ignoring my submissions and refusing to read and peer-review my proof of the mechanism and correct equations; so their hypocrisy is funny).
The Lambda-CDM model says over 90% of the universe is composed of a mixture of invisible ‘dark matter’ and otherwise unobserved ‘dark energy’. As with Ptolemy’s epicycles and other non-scientific mainstream theories such as vortex atoms, ghosts, ESP, the paranormal, Josephson, Brian, etc., it doesn’t predict the properties of either, or say how we can test it or learn anything from it. This puts it well into the realms of the string theory crackpots, who suggest that we need 10/11 dimensions in order to account for gravitons, which have never been observed, and which do not predict anything. Wolfgang Pauli called such speculations ‘not even wrong’. However, the taxpayer funds string theory and this sort of thing, and people buy New Scientist.
There is no mechanism possible to hound these pseudosciences either out of control of science, or into taking responsibility for investigating the facts. The situation is like that of the mountaineer who reaches the summit for the first time ever in history. He takes photos to prove it. He returns and stakes his claim in the records. The reactions are as follows:
1. ‘Show your proof to someone else, I’m not interested in it.’
2. ‘The editor won’t print this sort of nonsense.’
3. ‘Anybody can climb any mountain, so who cares?’
4. ‘Feynman once said nobody knows how to climb that mountain, so there!’
No individual has the power to publish the facts. The closest I came was an email dialogue with Stanley Brown, editor of Physical Review Letters. On the one hand I’m disgusted by what has happened, but on the other hand it was to be expected. There is a danger that too much of the work is being done by me: in 1996 my paper was mainly an outline of the proofs, but today it has hardened into a more rigorous mathematical proof. If it had been published in Nature, even as just a very brief letter in 1996, then the editor of Nature would have allowed other people to work on the subject, by providing a forum for discussion. Some mainstream mathematical physicists could have worked on it and taken the credit for developing it into a viable theory. Instead, I’m doing all the work myself because that forum was denied. This is annoying, because it is not the community spirit of a physics group which I enjoyed at university.
I’ve repeatedly tried to get others interested via internet Physics Forums and similar, where the reaction has been abusive and downright fascist. Hitting back with equal force, which the editor of Electronics World (Phil Reed) suggested to me as a tactic in 2004, on the internet just resulted in my proof being dishonestly called speculation and my being banned from responding. It is identical to the Iraqi ‘War’, a fighting of general insurgency which is like trying to cut soup with a knife: impossible. Just as in such political situations, there is prejudice which prevents reasoned settlement of differences. People led by Dr Lubos Motl and others have, by and large, made up their minds that string theory settles gravity, that the Lambda-CDM model settles cosmology, and that unobserved 10/11-dimensional spacetime, unobserved superpartners for every visible partner, and unobserved dark energy and dark matter making up 90% or more of the universe are settled facts. To prove contrary facts is like starting a religious heresy: people are more concerned with punishing you for the heresy than with listening and being reasonable.
If you have a particular identifiable opponent, then you know who to fight to get things moving. But since there is a general fascist-style bigotry to contend with, everyone is basically the enemy for one reason or more: the substance of your scientific work is ignored, and personal abuse against you is directed from many quarters, from people who don’t know you personally.
The cosmic background radiation maps from COBE and more recently WMAP show a large-scale anisotropy or variation in temperature across space which is primordial (from the early big bang), and smaller-scale variation which is again due to variations from acoustic waves (sound) in the compressed big bang fireball, but at later times (the cosmic background radiation was emitted 400,000 years after the big bang).
Because this can be plotted out into a wavy curve of wave amplitude versus effective frequency, the standard dark matter model was often presented as a unique fit to the observations. However, in 2005 Constantinos Skordis used relativistic MOND to show that it produced a similar fit to the observations! (This is published on page 54 of the 29 April 2006 issue of New Scientist.) Doubtless other models with parameters that offset those of the mainstream will also produce a similarly good fit (especially after receiving the same amount of attention and funding that the mainstream has had all these years!).
The important thing here is that the most impressive claims for the uniqueness of the arbitrary-fudge Lambda-CDM model are fraudulent. Just because the uniqueness claims are fraudulent does not in itself prove that another given model is right, because the other models are obviously then not unique in themselves either. But it should make respectable journal editors more prone to publish the facts!
Exact statement of Heisenberg’s uncertainty principle
I’ve received some email requests for a clarification of the exact statement of Heisenberg’s uncertainty principle. If uncertainty in distance can occur in two different directions, then the uncertainty is only half of what it would be if it could only occur in one direction. If x is the uncertainty in distance and p is the uncertainty in momentum, then xp is at least h-bar, providing that x is always positive. If distance can be positive as well as negative, then the minimum of xp is half h-bar. The uncertainty principle takes on different forms depending on the situation which is under consideration.
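For a concrete sense of scale (a sketch of my own; the 0.1 nm confinement distance is just an illustrative choice, roughly an atomic radius):

hbar = 1.055e-34            # J s
dx = 1e-10                  # metres; roughly an atomic radius (illustrative)
print(hbar / (2 * dx))      # minimum dp for the half-h-bar form: ~5.3e-25 kg m/s
print(hbar / dx)            # minimum dp for the one-direction form: ~1.1e-24 kg m/s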
http://www.math.columbia.edu/~woit/wordpress/?p=389:
knotted string Says: May 15th, 2006 at 9:50 am
Can I suggest two new laws to speed up progress?
1. Any theory that predicts anything already known is ad hoc rubbish.
2. A theory that predicts something that is not already known is speculative rubbish.
;)
Wolfgang Pauli’s letter of Dec 4, 1930 to a meeting of beta radiation specialists in Tübingen:
‘Dear Radioactive Ladies and Gentlemen, I have hit upon a desperate remedy regarding ... the continuous betaspectrum ... I admit that my way out may seem rather improbable a priori ... Nevertheless, if you don’t play you can’t win ... Therefore, Dear Radioactives, test and judge.’
(Quoted in footnote of page 12, http://arxiv.org/abs/hepph/0204104.)
Pauli's neutrino was introduced to maintain conservation of energy in observed beta spectra where there is an invisible energy loss. It made definite predictions. Contrast this to stringy SUSY today.
Although Pauli wanted renormalization in all field theories and used this to block nonrenormalizable theories, others like Dirac opposed renormalization to the bitter end, even in QED where it was empirically successful! (See Chris Oakley’s home page.)
Unlike mainstream people (such as stringy theorists Professor Jacques Distler, Lubos Motl, and other loud ‘crackpots’), I don’t hold on to any concepts in physics as religious creed. When I state ‘big bang’, I’m referring only to those few pieces of evidence which are secure, not to all the speculative conjectures which are usually glued to it by the high priests.
For example, inflationary models are speculative conjectures, as is the current mainstream mathematical description, the Lambda-CDM (Lambda-Cold Dark Matter) model: http://en.wikipedia.org/wiki/LambdaCDM_model
That model is false, as it fits the general relativity field equation to observations by adding unobservables (dark energy, dark matter) which would imply that 96% of the universe is invisible; and this 96% is nothing to do with the spacetime fabric or aether, it is supposed to speed up expansion to fit observation and bring the matter up to critical density.
There is no cosmological constant: lambda = zero, nada, zilch. See Prof. Phil Anderson, who happens to be a Nobel Laureate (if you can’t evaluate science yourself and instead need or want popstar-type famous expert personalities with bells and whistles to announce scientific facts to you): ‘the flat universe is just not decelerating, it isn’t really accelerating’ http://cosmicvariance.com/2006/01/03/dangerphilanderson/ Also my comment there: http://cosmicvariance.com/2006/01/03/dangerphilanderson/#comment16222
The ‘acceleration’ of the universe observed is the failure of the 1998 mainstream model of cosmology to predict the lack of gravitational retardation in the expansion of distant matter (the observations were of distant supernovae, using automated CCD telescope searches).
Two years before the first experimental results proved it, back in October 1996, Electronics World published (via the letters column) my paper which showed that the big bang prediction was false, because there is no gravitational retardation of distant receding matter. This prediction comes very simply from the pushing mechanism for gravity in the receding-universe context.
The current cosmological model is like Ptolemy’s work. Ptolemy’s epicycles are also the equivalent of ‘string theory’ (which doesn’t exist as a theory, just as a lot of incompatible speculations with different numbers of ‘branes’ and different ways to account for gravity strength without dealing with it quantitatively): they were ad hoc and did not predict anything. Ptolemy’s model simply had no mechanism, and ‘predictions’ don’t include data put into it (astronomical observations). Mine does: http://feynman137.tripod.com/ This has a mechanism and many predictions. The most important prediction is in the paper published via the Oct. 1996 issue of Electronics World: that there is no slowing down. This was discovered two years later, but the good people at Nature, CQG, PRL, etc., suppressed both the original prediction and the fact that the experimental confirmation confirms it. Instead, in 1998 they changed cosmology to the lambda (cosmological constant) dark energy model, adding epicycles with no mechanism. Notice also the ‘arbitrary constants’ of the Standard Model: I predict these, so they are no longer arbitrary. The world is full of bigotry and believes that religious assertions and political mainstream groupthink are a substitute for science. Personal relationships are a kind of substitute for science: they think science is a tea party, or a gentleman’s club. This is how the mainstream is run: ‘You pat my back and I’ll pat yours.’ Cozy, no doubt. But it isn’t science.
Caloric and phlogiston were replaced by two mechanisms and many laws of thermodynamics. The crucial steps were:
(1) the discovery of oxygen (proof of correct facts);
(2) Prevost’s key discovery of 1792 that constant temperature is possible even if everything is always emitting heat (exchanging energy at the same rate). This was equilibrium theory, permitting the kinetic theory and radiation theory (proof of correct facts).
How will string theory and the dark energy cosmology be replaced? By you pointing out that gravity is already in the Standard Model as proved on this page. But be careful: nobody will listen to you if you have nothing at stake, and if you have a career in science you will be fired. So try to sabotage the mainstream bigots subversively.
Gravity is a residual of the electromagnetic force. If I have two hydrogen atoms a mile apart, they are exchanging radiation, because the electron doesn’t stop doing this just because there is a proton nearby, and vice versa. There is no mechanism for the charges in a neutral atom to stop exchanging radiation with other charges in the surrounding universe; the atom is neutral because the attractive-repulsive Coulomb force is cancelled out by the two exchanges, not because the exchanges suddenly stop when an electron and a proton form an "uncharged" atom. This is fact: if you dispute it, you must supply a mechanism which stops the exchange of force-causing radiation from occurring when an electron is near a proton.
The addition of "charged capacitors" which are overall "neutral" (i.e., atoms) in space can take two different routes with severely different net voltages. A straight line across the universe encounters randomly orientated atoms, so if there is an even number of atoms the average net voltage will be zero, like a circuit with an equal number of charged capacitors pointed both ways in series. 50% of such lines have even numbers of atoms, and 50% have odd. This is all simple fact from simple statistics, not speculation. Learn it at kindergarten. The 50% of lines across the universe which have an odd number of randomly orientated atoms in series will have a voltage equal to that from a single charged atom.
The mean voltage is then [(odd) + (even)]/2 = [(1) + (0)]/2 = 1/2 of the voltage of a single charged atom, i.e., half of one electron or proton unit of charge. This force, because it always results from the odd atom (where there is always attraction), is always attractive.
Now the sum for the other network of charges in the universe is the random walk between all charges all over space (counting each charge once only), which statistically adds to the value of 1 charge multiplied by the square root of the total number of charges. This can be either attractive or repulsive, as demonstrated below [scroll down to the paragraph beginning 'Heuristically, gauge boson (virtual photon) transfer between charges to cause electromagnetic forces, and those gauge bosons don't discriminate against charges in neutral groups like atoms and neutrons. …'].
The ratio of the random sum to the straight line sum is the square root of the number of charges in the universe. So the relationship between gravity and electromagnetism is established.
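Here is a minimal Python sketch which simply encodes the two sums claimed above; the cancellation rule for lines and the square-root rule for the random walk are taken as given from the text, and the charge count N ~ 10^80 is an assumed round figure, not a value derived here:

import math

# Straight-line sum: per the text, a line crossing an even number of randomly
# orientated atoms cancels to 0 units of voltage, while a line crossing an
# odd number leaves 1 unit; lines are 50% even and 50% odd.
even_line, odd_line = 0.0, 1.0
straight_line_mean = (even_line + odd_line) / 2   # 0.5 unit, always attractive

# Random-walk sum over all N charges: magnitude ~ sqrt(N) units,
# attractive or repulsive.
N = 1e80                          # assumed number of charges in the universe
random_walk_sum = math.sqrt(N)    # ~1e40 units

ratio = random_walk_sum / odd_line   # sqrt(N): electromagnetism/gravity ratio
print(straight_line_mean)            # 0.5
print(f"{ratio:.1e}")                # ~1e40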
Note the recent paper at arXiv by Dr Mario Rabinowitz which discredits the notion that gravity is normal quantum field theory, http://arxiv.org/abs/physics/0601218: "A Theory of Quantum Gravity may not be possible because Quantum Mechanics violates the Equivalence Principle. Quantum mechanics clearly violates the weak equivalence principle (WEP). This implies that quantum mechanics also violates the strong equivalence principle (SEP), as shown in this paper. Therefore a theory of quantum gravity may not be possible unless it is not based upon the equivalence principle, or if quantum mechanics can change its mass dependence. Neither of these possibilities seem likely at the present time. Examination of QM in n-space, as well as relativistic QM equations, does not change this conclusion."
So the graviton concept is a fraud even in 11-dimensional supergravity, which is the limit of M-theory linking strings to quantum gravitation! Spin-2 gravitons don't exist. All you have is two routes by which electromagnetism can operate, based on analysis of Catt's "everything is a capacitor" concept.
The weak route is always attractive but is weaker by a factor of about 10^40 than the strong route of summation, which can be attractive or repulsive.
See the abstract-level unification of general relativity and the electromagnetic force by Danny Ross Lunsford at http://cdsweb.cern.ch/search.py?recid=688763&ln=en. Lunsford's unification is not explained in causal terms in that paper, but the implication is clear: there is no quantum gravity. The unification requires 3 extra dimensions, which Lunsford attributes to "coordinatized matter".
Physically what is occurring is this: Einstein's general relativity fails to discriminate between spacetime scales which are expanding (time, etc.) and contractable dimensions describing matter (which are contracted by gravity fields; for example, GR shows the earth's radius is contracted by 1.5 mm by gravity, like an all-round pressure acting not on the surface of the earth but directly on the subatomic matter throughout the volume of the earth).
By lumping all the expanding dimensions together as time in the metric, Einstein gets the right answers. But really the expansion of spacetime occurs in three dimensions (x, y, z, with time in each case being the dimension divided by velocity c), while the contraction due to fields occurs in three overlapping dimensions. These are not the same mathematical dimensions, because one set is expanding at light speed and the other is contractable!
So physically there are three expanding spacetime dimensions describing time and three contractable dimensions describing matter. Saying there is one expanding time dimension ignores spacetime, which shows that any distance can be expressed as a time past. So the correct symmetry orthogonal group is, as Lunsford says, SO(3,3): a total of 6 dimensions divided equally into two sets of three each.
All the speculation about 10-dimensional superstrings and 11-dimensional supergravity is pure trash, with no mechanism, prediction or anything: http://www.amazon.com/gp/product/0224076051/10370284624323819?v=glance&n=283155
Catt's co-author Walton emailed me and cc'd Catt in 2001 that a TEM wave is not a good name for the Heaviside slab of electromagnetic energy, because nothing need have a periodic "wave": the energy can just flow at 10 volts in a slab without waving. So basically you are replacing the electron not just with a TEM wave but with a non-waving energy block. The de Broglie frequency of an electron is zero (i.e., it is not a wave at all) if its propagation velocity is zero. In order to reconcile the Heaviside energy current with an electron's known properties, the propagation of the electron (at less than c) occurs in a direction orthogonal to the energy current. By Pythagoras, the velocity sharing between propagation speed v and energy current speed x is then (v^2) + (x^2) = (c^2), so the energy current goes at light speed when v = 0, but is generally x/c = [1 - (v^2)/(c^2)]^(1/2), which is the Lorentz contraction factor. Since all time is measured by velocities (pendulum, clock motor, electron oscillation, etc.), this is the time-dilation law, and by spacetime x = ct and x' = vt, we get the length contraction in the propagation direction.
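A quick numerical check, in Python, that this velocity-sharing rule reproduces the Lorentz contraction factor (the sample speeds are arbitrary illustrative values):

import math

c = 299_792_458.0   # speed of light, m/s

def energy_current_fraction(v):
    """Velocity sharing v^2 + x^2 = c^2: the internal energy-current speed x
    falls as the propagation speed v rises; returns x/c."""
    return math.sqrt(c**2 - v**2) / c

for v_frac in (0.0, 0.6, 0.8, 0.99):
    v = v_frac * c
    lorentz = math.sqrt(1.0 - (v / c)**2)   # Lorentz contraction factor
    assert abs(energy_current_fraction(v) - lorentz) < 1e-12
    print(f"v = {v_frac:.2f}c  ->  x/c = {energy_current_fraction(v):.6f}")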
From: "Nigel Cook"
To: "Brian Josephson" <bdj10@cam.ac.uk>
Sent: Wednesday, May 17, 2006 11:23 AM
Subject: Feynman, light reflection and impedance mismatch!

Thanks! I notice that they say "Light wave falling on a glass surface suffers reflection because of impedance mismatch". The impedance is determined by the thickness of the glass before the light strikes it. This entirely explains the problem Feynman writes about at great length from the quantum field theory standpoint in his book QED.

Of course you won't be satisfied with any non-paranormal explanation, Josephson. Far better for you if string theory and ESP, spoon-bending and dousing are mentioned in the explanation. If the Templeton Foundation offered you an award for promoting religion in science, would you accept? Have you any interest in Dr Thomas Love of California State University, who has disproved quantum entanglement by showing it is a mathematical fiddle introduced by the switch-over between time-dependent and time-independent forms of Schroedinger's equation at the moment of measurement, which introduces "wave function collapse" manure? He is being suppressed.

Nigel
----- Original Message -----
From: "Brian Josephson" <bdj10@cam.ac.uk>
To: <Monitek@aol.com>; <ivorcatt@hotmail.com>; <imontgomery@atlasmeasurement.com.au>; <forrestb@ix.netcom.com>; <epola@tiscali.co.uk>
Cc: <sirius184@hotmail.com>; <ernest@cooleys.net>; <nigelbryancook@hotmail.com>; <ivor@ivorcatt.com>; <andrewpost@gmail.com>; <geoffrey.landis@sff.net>; <jvospost2@yahoo.com>; <jackw97224@yahoo.com>; <graham@megaquebec.net>; <pwhan@atlasmeasurement.com.au>
Sent: Wednesday, May 17, 2006 10:36 AM
Subject: Re: The aether

> Since people get worked up about these things, I should just note that
> Monitek was actually quoting Catt there. Just in case I didn't do a
> reply to all, let me just note that I was referring people to
> <http://physics.usask.ca/~hirose/ep225/emref.htm>
> where the 377 ohms is mentioned and used in physics dept. lecture notes.
>
> =b=
>
> * * * * * * * Prof. Brian D. Josephson :::::::: bdj10@cam.ac.uk
> * Mind-Matter * Cavendish Lab., JJ Thomson Ave, Cambridge CB3 0HE, U.K.
> * Unification * voice: +44(0)1223 337260 fax: +44(0)1223 337356
> * Project * WWW: http://www.tcm.phy.cam.ac.uk/~bdj10
> > The Standard Model, which predicts all decay rates of elementary
> > particles very accurately (not nuclei), is composed of the symmetry
> > groups SU(3) x SU(2) x U(1).
> >
> > They are Yang-Mills theories. They describe spin, charge, etc., but NOT
> > MASS. This is why Ivor Catt's "trapped TEM wave" model for the electron
> > is COMPATIBLE with the Standard Model. The mass is added in by the
> > vacuum field, not by the actual particles. (But ... throw Catt a
> > lifeline and he automatically rejects it, so I've given up trying to
> > explain anything to him. He just doesn't want to know.)
> >
> > In addition, only the electromagnetism unit U(1) is a renormalisable
> > quantum field theory (so by fiddling it so that the force coupling
> > strength from the exchange of photons gives Coulomb's law, it then
> > predicts other things accurately, like the Lamb frequency shift and the
> > magnetic moment measured for an electron).
> >
> > The SU(3) and SU(2) symmetries, for strong and weak nuclear forces,
> > respectively, describe the force-carrying mediators to be short ranged
> > (which is why they only participate in nuclear-sized interactions, and
> > we only see electromagnetism and gravity at the macroscopic scale).
> >
> > The short range is caused by the force mediators having mass. For a
> > proton, only 11 MeV of the 938 MeV mass is due to quarks. Hence the
> > force mediators and the effect of the polarised vacuum multiply the
> > mass by about a factor of 85. The actual quarks themselves have ZERO
> > mass, the entire mass being due to the vacuum field which "mires" them
> > and creates inertia.
Gerald 't Hooft and Martinus Veltman in 1970 showed that Yang-Mills theory is the only way to unify Maxwell's equations and QED, giving the U(1) group of the Standard Model.
In electromagnetism, the spin-1 photon interacts by changing the quantum state of the matter emitting or receiving it, via inducing a rotation in a Lie group symmetry. The equivalent theories for the weak and strong interactions are respectively the isospin rotation symmetry SU(2) and the colour rotation symmetry SU(3).
Because the gauge bosons of SU(2) and SU(3) have limited range and therefore are massive, the field obviously carries most of the mass; so the field there is not just a small perturbation, as it is in U(1).
E.g., a proton has a rest mass of 938 MeV, but the three real quarks in it contribute only 11 MeV, so the field contributes 98.8% of the mass. In QED, by contrast, the field contributes only about 0.116% of the magnetic moment of an electron.
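The arithmetic behind those two percentages, as a trivial Python check:

proton_mass_MeV = 938.0
quark_mass_MeV = 11.0    # the three real quarks in the proton
field_fraction = (proton_mass_MeV - quark_mass_MeV) / proton_mass_MeV
print(f"{field_fraction:.1%}")   # 98.8%: proton mass contributed by the field

qed_correction = 0.00116         # first-order QED vacuum correction, alpha/(2.Pi)
print(f"{qed_correction:.3%}")   # 0.116%: field part of the electron's moment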
I understand the detailed calculations involving renormalization; in the usual treatment of the problem there is an infinite shielding of a charge by vacuum polarisation at low energy, unless a limit or cutoff is imposed to make the charge equal the observed value. This process can be viewed as a 'fiddle' unless you can justify exactly why the vacuum polarisation is limited to the required value.
Hence Dirac's reservations (and Feynman's, too). On the other hand, by just that one 'fiddle', QED gives a large number of different, independent predictions: the Lamb frequency shift, the anomalous magnetic moment of the electron, etc.
The equation is simple (page 70 of http://arxiv.org/abs/hepth/0510040) for modeling one corrective Feynman diagram interaction. I've read Peter Woit say (I think) that the further couplings, which are progressively smaller (a convergent series of terms) for QED, instead become a divergent series for field theories with heavy mediators. The mass increase due to the field mass-energy is a factor of 85 for the quark-gluon fields of a proton, compared to only a factor of 1.00116 for virtual charges interacting with an electron.
So there are many areas where the calculations of the Standard Model could be further studied, but string theory doesn't even begin to address them. Other examples: the masses and the electroweak symmetry breaking in the Standard Model are barely described by the existing speculative (largely non-predictive) Higgs mechanism.
Gravity, the ONE force which hasn't even entered the Standard Model, is being tackled by string theorists, who, like babies, always want to try to run before learning to walk. Strings can't predict any features of gravity that can be compared to experiment. Instead, string theory is hyped as being perfectly compatible with non-observed, speculative gravitons, superpartners, etc. It doesn't even scientifically 'predict' the unnecessary gravitons or superpartners, because it can't be formulated in a quantitative way. Dirac and Pauli made predictions that were scientific, not stringy.
Dirac made exact predictions about antimatter. He predicted the rest mass-energy of the positron and its magnetic moment, so that quantitative comparisons could be done. There are no quantitative predictions at potentially testable energies coming out of string theory.
Theories that 'predict' unification at 10^{16} times the maximum energy you can achieve in an accelerator are not science.
I just love the fact that string theory is totally compatible with special relativity, the one theory which has never produced a unique prediction that hadn't already been made by Lorentz et al. on the basis of physical local contraction of instruments moving in a fabric spacetime.
It really fits with the overall objective of string theory: the enforcement of genuine groupthink by a group of bitter, mainstream losers.
Introduction to quantum field theory (the Standard Model) and General Relativity.
Mainstream ten- or eleven-dimensional 'string theory' (which makes no testable predictions) is being hailed as consistent with special relativity. Do mainstream mathematicians want to maintain contact with physical reality, or have they accidentally gone wrong due to 'groupthink'? What will it take to get anybody interested in the testable unified quantum field theory in this paper?
Peer-review is a sensible idea if you are working in a field where you have enough GENUINE peers that there is a chance of interest and constructive criticism. However, string theorists have proved to be controlling, biased and bigoted groupthink-dominated politicians, who are not simply 'not interested in alternatives' but take pride in sneering at things they don't have time to read!
Dirac's equation is a relativistic version of Schroedinger's time-dependent equation. Schroedinger's time-dependent equation is a general case of Maxwell's 'displacement current' equation. Let's prove this.
First, Maxwell's displacement current is i = dD/dt = ε.dE/dt. In a charging capacitor, the displacement current falls as a function of time as the capacitor charges up, so:
displacement current i = ε.d(v/x)/dt, [equation 1]
where E has been replaced by the gradient of the voltage along the ramp of the step of energy current which is entering the capacitor (illustration above). Here x is the step width, x = ct, where t is the rise time of the step.
The voltage of the step is equal to the current step multiplied by the resistance: v = iR. Maxwell's concept of 'displacement current' is to maintain Kirchhoff's and Ohm's laws of continuity of current in a circuit across the gap interjected by a capacitor, so by definition the 'displacement current' is equal to the current in the wires which is causing it.
Hence [equation 1] becomes:
i = ε.d(iR/x)/dt = -(εR/x).di/dt,
the minus sign expressing the fact that the charging current falls with time. The solution of this equation is obtained by rearranging it to yield (1/i).di = -x.dt/(εR), integrating so that the left-hand side becomes the natural logarithm of i and the right-hand side becomes -xt/(εR), and making each side a power of e to remove the logarithm:
i_{t} = i_{o}e^{-xt/(εR)}.
Now ε = 1/(cZ), where c is light velocity and Z is the impedance of the dielectric, so:
i_{t} = i_{o}e^{-xcZt/R}.
Capacitance per unit length of capacitor is defined by C = 1/(xcZ), hence:
i_{t} = i_{o}e^{-t/(RC)},
which is the standard capacitor charging result. This physically correct proof shows that the displacement current is a result of the varying current in the capacitor, di/dt: i.e., it is proportional to the acceleration of charge, which is identical to the emission of electromagnetic radiation by accelerating charges in radio antennae. Hence the mechanism of 'displacement current' is energy transmission by electromagnetic radiation: Maxwell's 'displacement current' i = ε.dE/dt by electromagnetic radiation induces the transient current i_{t} = i_{o}e^{-t/(RC)}. Now consider quantum field theory.
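The exponential result can be checked numerically; here is a minimal Python sketch integrating di/dt = -i/(RC) (the component values are assumed illustrative ones, not taken from the text):

import math

R = 1.0e3    # ohms (assumed)
C = 1.0e-6   # farads (assumed)
i0 = 1.0e-3  # initial charging current, amps (assumed)

dt = 1.0e-7  # integration time step, s
i, t = i0, 0.0
while t < 5 * R * C:            # run for five time constants
    i += dt * (-i / (R * C))    # di/dt = -i/(RC), derived above
    t += dt

analytic = i0 * math.exp(-t / (R * C))
print(i, analytic)              # both ~ i0 * e^-5 ~ 6.7e-6 A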
Schroedinger's time-dependent equation is essentially saying the same thing as this electromagnetic energy mechanism of Maxwell's 'displacement current': Hψ = iħ.dψ/dt = (½ih/π).dψ/dt, where ħ = h/(2π). The energy flow is directly proportional to the rate of change of the wavefunction.
The energy-based solution to this equation is similarly exponential: ψ_{t} = ψ_{o} exp[-2πiH(t - t_{o})/h].
The non-relativistic hamiltonian is defined as:
H = ½p^{2}/m.
However, it is of interest that the 'special relativity' prediction,
H = [(mc^{2})^{2} + p^{2}c^{2}]^{1/2},
was falsified by the fact that, although the total mass-energy is then conserved, the resulting Schroedinger equation permits an initially localised electron to travel faster than light! This defect was averted by the Klein-Gordon equation, which states:
-ħ^{2}.d^{2}ψ/dt^{2} = [(mc^{2})^{2} + p^{2}c^{2}]ψ.
While this is physically correct, it has the drawback of dealing only with second-order variations of the wavefunction.
Dirac's equation simply makes the time-dependent Schroedinger equation (Hψ = iħ.dψ/dt) relativistic, by inserting for the hamiltonian (H) a totally new relativistic expression which differs from special relativity:
H = αpc + βmc^{2},
where p is the momentum operator. The values the constants α and β can take are represented by 4 x 4 matrices (16 components each), which act on a four-component wavefunction called the Dirac 'spinor'.
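The reason 4 x 4 matrices are needed is the step the next paragraph relies on: squaring H must reproduce the Klein-Gordon energy relation, which forces α and β to anticommute. This standard property can be verified directly in Python with the usual Dirac-Pauli representation (included here purely as a check):

import numpy as np

# Standard Dirac-Pauli representation of alpha_i and beta.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

beta = block(I2, Z2, Z2, -I2)
alphas = [block(Z2, s, s, Z2) for s in (sx, sy, sz)]

I4 = np.eye(4)
for i, ai in enumerate(alphas):
    assert np.allclose(ai @ ai, I4)                 # alpha_i^2 = 1
    assert np.allclose(ai @ beta + beta @ ai, 0)    # {alpha_i, beta} = 0
    for j, aj in enumerate(alphas):
        if i != j:
            assert np.allclose(ai @ aj + aj @ ai, 0)   # {alpha_i, alpha_j} = 0
assert np.allclose(beta @ beta, I4)                 # beta^2 = 1
print("(alpha.pc + beta.mc^2)^2 reduces to (mc^2)^2 + p^2c^2")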
The justification for Dirac's equation is both theoretical and experimental. Firstly, it yields the Klein-Gordon equation for second-order variations of the wavefunction. Secondly, it predicts four solutions for the total energy of a particle having momentum p:
E = ± [(mc^{2})^{2} + p^{2}c^{2}]^{1/2}.
Two solutions to this equation arise from the fact that momentum is directional and so can be positive or negative. The spin of an electron is ±½ħ = ±h/(4π). This explains two of the four solutions. The other two solutions become evident when considering the case of p = 0, for then E = ±mc^{2}.
This equation shows the fundamental distinction between Dirac's theory and Einstein's special relativity. Einstein's equation from special relativity is E = mc^{2}. The fact that in reality E = ±mc^{2} proves the physical shallowness of special relativity, which results from its lack of physical mechanism.
'… Without a well-defined Hamiltonian, I don't see how one can address the time evolution of wave functions in QFT.' – Eugene Stefanovich.

You can do this very nicely by grasping the mathematical and physical correspondence of the time-dependent Schroedinger equation to Maxwell's displacement current, i = dD/dt. The former is just a quantized complex version of the latter. Treat the Hamiltonian as a regular quantity, as Heaviside showed you can do for many operators. Then the solution to the time-dependent Schroedinger equation is: wavefunction at time t after initial time = initial wavefunction.exp(-iHt/[h bar]).

This is a general analogy to the exponential capacitor charging you get from displacement current. Maxwell's displacement current is i = dD/dt, where D is the product of the electric field (v/m) and the permittivity. There is electric current in conductors, caused by the variation in the electric field at the front of a logic step as it sweeps past the electrons (which can only drift at a net speed of up to about 1 m/s) at light speed. Because the current flowing into the first capacitor plate falls off exponentially as it charges up, there is radio transmission transversely, like radio from an antenna (radio power is proportional to the rate of change of current in the antenna, which can be a capacitor plate). Hence the reality of displacement current is radio transmission. As each plate of a circuit capacitor acquires equal and opposite charge simultaneously, the radio transmission from each plate is an inversion of that from the other, so the superimposed signal strength away from the capacitor is zero at all times. Hence radio losslessly performs the role of induction which Maxwell attributed to aetherial displacement current. Schroedinger's time-dependent equation says the product of the hamiltonian and the wavefunction equals i[h bar].d[wavefunction]/dt, which is a general analogy to Maxwell's i = dD/dt. The Klein-Gordon equation, and also Dirac's equation, are relativized forms of Schroedinger's time-dependent equation.
Maxwell never got this right in his paper. He failed to recognise that any electric current involves electrons accelerating, which in turn results in electromagnetic radiation. This in turn induces a current in the opposite direction in another conductor. If the other conductor is charging as the first conductor is discharging, then the conductors swap electromagnetic energy simultaneously. There is no loss externally as electromagnetic radiation, because the superimposed electromagnetic radiation signals from each conductor exactly cancel to zero: http://electrogravity.blogspot.com/2006/04/maxwellsdisplacementandeinsteins.html. The charge polarisation of the vacuum is a response to the charging of the conductors, not a cause of it, and it is trivial and slow. Displacement current doesn't do what Maxwell claimed it does.
The magnetic force is very important; notice the Pauli exclusion principle and its role in chemistry. Every electron has spin and hence a magnetic moment, which is predicted by Dirac's equation to within 0.116%. The 0.116% correction factor is given by the first vacuum (aether) coupling correction of the quantum field theory of Schwinger, Feynman and Tomonaga: 1/(2π x 137) = 0.00116.
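A one-line check of that correction factor in Python (using the measured value 1/alpha ~ 137.036):

import math

alpha = 1 / 137.036                # fine-structure constant (measured value)
schwinger = alpha / (2 * math.pi)  # first vacuum coupling correction
print(f"{schwinger:.5f}")          # ~0.00116, i.e. the 0.116% correction
print(f"{1 + schwinger:.5f}")      # first-order magnetic moment factor, ~1.00116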
The magnetic field of the electron is always present; it is co-eternal with the electric field. Regrettably, Ivor Catt refuses to accept this or even to listen to the evidence! It is easy to do the Stern-Gerlach experiment, which gives direct evidence of the magnetic moment of the electron. There are other experiments too. It is simply an experimental fact.
The mechanism of what is attributed to aetherial charge 'displacement' (and thus 'displacement current') is entirely compatible with existing quantum field theory, the Standard Model. The problem is that there is another mechanism, called electromagnetic radiation, which is also real. It predicts a lot of things, and induces currents and charges like 'displacement current'. Of course there is some charge polarisation and displacement of the vacuum charges between two charged plates. However, that is a secondary effect. It doesn't cause the charging in the first place. It is slow, not light speed.
Maxwell had Faraday's 1846 'Thoughts on Ray Vibrations' and Weber's 1856 discovery that 1 over the square root of the product of the electric and magnetic force law constants is the velocity of light.
This was his starting point. He was trying to fiddle around until he came up with an electromagnetic theory of light which produced Weber's empirical equation from a chain of theoretical reasoning, based on Faraday's electromagnetic light ray argument that a curling field produces a time-varying field, etc. Half the theoretical input to the theory is Faraday's own empirical law of induction, curl.E = -dB/dt.
The other half would obviously, by symmetry, have to be a law of the form curl.B = μ.dD/dt, where μ is permeability and D is electric displacement, D = εE, where ε is permittivity and E is electric field strength. The inclusion of the constant μ is implied by the fact that the solution to the two equations (Faraday's law and this new law) must give the Weber light-speed law. So you have to normalise (fiddle) the new law to produce the desired result, Weber's empirical relationship between the electric and magnetic constants and light velocity.
Maxwell then had to come up with an explanation of the new law, and found that dD/dt has the units of current: 'displacement current'. Don't worry about what he claims to have done in his papers; just concentrate on the maths and physics he knew and what his reasoning was: no scientist who gets published puts the true 'eureka' bathtime moments into the paper; they rewrite the facts as per Orwell's '1984', so that the theory appears to result in a logical way from pure genius.
In a photon in the vacuum, the peak amplitude is always the same, no matter how far it goes from its source. The height, and also the energy, of a water wave (which is a transverse wave like light, the oscillatory motion of the matter being perpendicular to the direction of energy flow) decreases as it spreads out. Photons of light don't behave like transverse waves in this sense: the peak electric field in the light oscillation remains fixed. Gamma rays, for example, remain of the same amplitude and energy; they don't decay into visible light inversely with distance or with the square of distance.
If light were a water-type transverse wave, then as you move away from a radioactive source the gamma rays would become x-rays, then ultraviolet, then violet, then other visible light, then infrared, then microwaves, then radio waves, in accordance with the water wave equation. This doesn't happen.
The Standard Model
Quantum field theory describes the relativistic quantum oscillations of fields. The case of zero spin leads to the Klein-Gordon equation. However, everything tends to have some spin. Maxwell's equations for electromagnetic propagating fields are compatible with an assumption of spin h/(2π); hence the photon is a boson, since it has integer spin in units of h/(2π). Dirac's equation models electrons and other particles that have only half a unit of spin, as known from quantum mechanics. These half-integer-spin particles are called fermions, and they have antiparticles with opposite spin. Obviously you can easily make two electrons (neither the antiparticle of the other) have opposite spins, merely by having their spin axes pointing in opposite directions: one pointing up, one pointing down. (This is totally different from Dirac's antimatter, where the opposite spin occurs while both matter and antimatter spin axes are pointed in the same direction.) This enables the Pauli pairing of adjacent electrons in the atom with opposite spins, and makes most materials non-magnetic (since all electrons have a magnetic moment, everything would be potentially magnetic in the absence of the Pauli exclusion process).
Two heavier, unstable (radioactive) relatives of the electron exist in nature: muons and tauons. They, together with the electron, are termed leptons. They have electric charge and spin identical to the electron's, but larger masses, which help the nature of matter to be understood, because muons and tauons decay by neutrino emission into electrons. Neutrinos and their antiparticles are involved in the weak force; they carry energy but not charge or detectable mass, and are fermions (they have half-integer spin).
In addition to the three leptons (electron, muon, and tauon) there are six quarks in three families, as shown in the table below. The existence of quarks is an experimental fact, empirically confirmed by the scattering patterns when high-energy electrons are fired at neutrons and protons. There is also empirical evidence for the three colour charges of quarks in the fact that high-energy electron-positron collisions actually produce three times as many hadrons as predicted when assuming that there are no colour charges.
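That factor of three can be made concrete with a short Python sketch of the standard hadron-to-muon-pair production ratio R (a textbook result, included here only as an illustration; below the charm threshold only the u, d, s quarks contribute):

# R = (hadron production)/(muon pair production) in e+e- collisions equals the
# sum of squared quark charges, multiplied by the number of colour charges.
quark_charges = [2/3, -1/3, -1/3]                     # u, d, s in units of e
R_without_colour = sum(q**2 for q in quark_charges)   # 2/3
R_with_three_colours = 3 * R_without_colour           # 2, as observed
print(R_without_colour, R_with_three_colours)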
The 12 Fundamental Fermions (half-integer spin particles) of the Standard Model

Electric charge:  +2e/3 (quarks)    0 (neutrinos)     -e/3 (quarks)   -e (leptons)
Family 1:         u (3 MeV)         v(e) (~0 MeV)     d (5 MeV)       e (0.511 MeV)
Family 2:         c (1200 MeV)      v(mu) (~0 MeV)    s (100 MeV)     mu (105.7 MeV)
Family 3:         t (174,000 MeV)   v(tau) (~0 MeV)   b (4200 MeV)    tau (1777 MeV)
Notice that the major difference between the three families is the mass. The radioactivity of the muon (mu) and tauon (tau) can be attributed to their being high-energy vacuum states of the electron (e). The Standard Model in its present form cannot predict these masses. Family 1 is the vital set of fermions at low energy, and thus for human life at present. Families 2 and 3 were important in the high-energy conditions existing within a fraction of a second of the Big Bang event that created the universe. Family 2 is also important in nuclear explosions such as supernovae, which produce penetrating cosmic radiation that irradiates us through the earth's atmosphere, along with terrestrial natural radioactivity from uranium, potassium-40, etc. The t (top) quark in Family 3 was discovered as recently as 1995. There is strong evidence from energy conservation and other indications that there are only three families of fermions in nature.
The Heisenberg uncertainty principle is often used as an excuse to avoid worrying about the exact physical interpretation of the various symmetry structures of the Standard Model quantum field theory: the wave function is philosophically claimed to be in an indefinite state until a measurement is made. Although, as Thomas S. Love points out, this is a misinterpretation based on the switch-over of Schroedinger wave equations (time-dependent and time-independent) at the moment when a measurement is made on the system, it keeps the difficulties of the abstract field theory to a minimum. Ignoring the differences in the masses between the three families (which have a separate mechanism), there are four symmetry operations relating the Standard Model fermions listed in the table above:
Hermann Weyl and Eugene Wigner discovered that Lie groups of complex symmetries represent the symmetries of quantum field theory.
Colour rotation leads directly to the Standard Model symmetry unitary group SU(3), i.e., rotations in an imaginary space with 3 complex coordinates, generated by 8 operations: the strong force gluons.
Isospin rotation leads directly to the symmetry unitary group SU(2), i.e., rotations in an imaginary space with 2 complex coordinates, generated by 3 operations: the Z, W+, and W- gauge bosons of the weak force.
Phase rotation leads to the electromagnetic symmetry unitary group U(1), generated by a single operation: the photon. Together these give the Standard Model group SU(3) x SU(2) x U(1), as described at the start of this page.
If renormalization is kicked out by Yang-Mills theory, then the impressive results which depend on renormalisation (the Lamb shift, and the magnetic moments of the electron and muon) are lost. SU(2) and SU(3) are not renormalisable.
Gravity is of course required in order to describe mass, owing to Einstein's equivalence principle, which states that gravitational mass is identical to, and indistinguishable from, inertial mass. The existing mechanism for mass in the Standard Model is the speculative (non-empirical) Higgs field mechanism. Peter Higgs suggested that the vacuum contains a spin-0 boson field which at low energy breaks the electroweak symmetry between the photon and the weak force Z, W+ and W- gauge bosons, as well as causing all the fermion masses in the Standard Model. Higgs did not predict the masses of the fermions, only the existence of an unobserved Higgs boson. More recently, Rueda and Haisch showed that Casimir-type radiation in the vacuum (which is spin-1 radiation, not Higgs' spin-0 field) explains inertial and gravitational mass. The problem is that Rueda and Haisch could not make particle mass or force strength predictions, and did not explain how the electroweak symmetry is broken at low energy. Rueda and Haisch have an incomplete model. The vacuum has more to it than simply radiation, and may be more complicated than the Higgs field. Certainly any physical mechanism capable of predicting particle masses and force strengths must be more sophisticated than the existing Higgs field speculations.
Many set out to convert science into a religion by drawing up doctrinal creeds. Consensus is vital in politics, and also in teaching subjects in an orthodox way (teaching syllabuses, textbooks, etc.), but this consensus should not be confused with science: it doesn't matter how many people think the earth is flat or that the sun goes around it. Things are not determined in science by what people think or what they believe. Science is the one subject where facts are determined by evidence and even absolute proof, which is possible, contrary to Popper's speculation; see Archimedes' proof of the law of buoyancy, for example. See also the letter from Einstein to Popper that Popper reproduces in his book The Logic of Scientific Discovery. Popper falsely claimed to have disproved the idea that statistical uncertainty can emerge from a deterministic situation.
Einstein disproved Popper with the case of an electron revolving in a circle at constant speed: if you lack exact knowledge of the initial conditions and cannot see the electron, you can only statistically calculate the probability of finding it at any section of the circumference of the circle. Hence statistical probabilities can emerge from completely deterministic systems, given merely uncertainty about the initial conditions. This is one argument of many by which Einstein (together with Schroedinger, de Broglie, Bohm, etc.) argued that determinism lies at the heart of quantum mechanics. However, all real 3+ body interactions in classically 'deterministic' mechanics are non-deterministic in practice, because of the perturbations introduced as chaos by more than two bodies. So there is no ultimate determinism in real-world many-body situations. What Einstein should have said he was looking for is causality, not determinism.
The uncertainty principle, which is just modelling scattering-driven reactions, shows that the higher the mass-energy equivalent, the shorter the lifetime. Quarks and other heavy particles last for a fraction of 1% of the time that electrons and positrons in the vacuum last before annihilation. The question is, why do you get virtual quarks around real quark cores in QCD, and virtual electrons and positrons around real electron cores? It is probably a question of the energy density of the vacuum locally around a charge core. The higher energy density due to the fields around a quark will both create and attract more virtual quarks than an electron, which has weaker fields.

In the case of a nucleon, neutron or proton, the quarks are close enough that the strong core charge, and not the shielded core charge (reduced by 137 times due to the polarised vacuum), is responsible for the interquark binding force. The strong force seems to be mediated by eight distinct types of gluon. (There is a significant anomaly in the QCD theory here, because there are physically 9 types of green, red, blue gluons, but you have to subtract one variety from the 9 to rule out a reaction which doesn't occur in reality.)

The gluon clouds around quarks are overlapped and modified by the polarised veils of virtual quarks, which is why it is just absurd to try to get a mathematical solution to QCD in the way you can for the simpler case of QED. In QED, the force mediators (virtual photons) are not affected by the polarised electron-positron shells around the real electron core, but in QCD there is an interaction between the gluon mediators and the virtual quarks.
You have to think also about the electroweak force mediators, the W, Z and photons, and how they are distinguished from the strong force gluons. For the benefit of cats who read: the W, Z and photon have more empirical validity than Heaviside's energy current speculation; they were discovered experimentally at CERN in 1983. At high energy, the W (massive and charged positive or negative), the Z (massive and neutral), and the photon all have symmetry and infinite range, but below the electroweak unification energy the symmetry is broken by some kind of vacuum attenuation (Higgs field or other vacuum field miring/shielding) mechanism, so the W and Z have a very short range but photons have infinite range. To get unification qualitatively as well as quantitatively, you have to not only make the extremely high energy forces all identical in strength, but also consider qualitatively how the coloured gluons are related to the W, Z, and photon of electroweak theory. The Standard Model particle interaction symmetry groupings are SU(3) x SU(2) x U(1), where U(1) describes the photon, SU(2) the W and Z of the weak force (hence SU(2) x U(1) is electroweak theory, requiring a Higgs or other symmetry breaking mechanism to work), and SU(3) describes gluon-mediated forces between three strong-force colour charges of quarks: red, green and blue, or whatever.

The problem of the gluons having 3 x 3 = 9 combinations but empirically requiring only 8 does indicate that the concept of the gluon is not the most solid part of the QCD theory. It is more likely that the gluon force is just the unshielded core charge force of any particle (hence unification at high energy, where the polarised vacuum is breached by energetic collisions). (The graviton I've proved to be a fiction; it is the same as the gauge boson photon of electromagnetism: it does the work of both the always-attractive force and a force root-N times stronger, which is attractive between unlike charges and repulsive between like charges. N is the number of charges exchanging force mediator radiation. This proves why the main claim of string theory is entirely false. There is no separate graviton.)
The virtual quarks, as you say, contribute to the (1) mass and (2) magnetic moment of the nucleon. In the same way, virtual electrons increase the magnetic moment of the electron by 0.116% in QED. QCD just involves a larger degree of perturbation due to the aether than QED does.

Because the force mediators in QCD interact appreciably with the virtual quarks of the vacuum, the Feynman diagrams indicate a very large number of couplings with similar coupling strengths in the vacuum that are almost impossible to calculate. The only way to approach this problem is to dump perturbative field theory and build a heuristic semi-classical model which is amenable to computer solution. I.e., you can simulate quarks and the polarised clouds of virtual charges surrounding them using a classical model adjusted to allow for the relative quantum mechanical lifetimes of the various virtual particles, etc. QCD has much larger perturbative effects due to the vacuum, with the vacuum in fact contributing most of the properties attributed to nucleons. In the case of a neutron, you would naively expect zero magnetic moment because there is no net charge, but in fact there is a magnetic moment about two thirds the size of the proton's.
Ultimately what you have to do, having dealt with gravity (at least showing electrogravity to be a natural consequence of the big bang), is to understand the Standard Model. In order to do that, the particle masses and force coupling strengths must be predicted. In addition, you want to understand more about electroweak symmetry breaking, gluons, and the Higgs field, if it actually exists as postulated (it may be just a false model based on ignorant speculation, like graviton theory), etc. I know string theory critic Dr Peter Woit (whose book Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics will be published in London on 1 June and in the USA in September) claims in Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hepth/0206135, that it is potentially possible to deal with electroweak symmetry without the usual stringy nonsense.
Hydrogen doesn't behave as a superfluid. Helium is a superfluid at low temperatures because it has two spin-½ electrons in its outer shell that collectively behave as a spin-1 particle (boson) at low temperatures.

Fermions have half-integer spin, so hydrogen with a single electron will not form boson-like electron pairs. In molecular H_{2}, the two electrons shared by the two protons don't have the opportunity to couple together to form a boson-like unit. It is the three-body problem: two protons and a coupled pair of electrons, so perturbative effects continuously break up any boson-like behaviour.

The same happens to helium itself when you increase the temperature above the superfluid temperature. The kinetic energy added breaks the electron pairing that forms a kind of boson, so just the Pauli exclusion principle pairing remains at higher temperature.

You have to think of the low-temperature Bose-Einstein condensate as the simple case, and note that at higher temperatures chaos breaks it up. Similarly, if you heat up a magnet you increase entropy, introducing chaos by allowing the domain order to be broken up by the random jiggling of particles.
Newton's Opticks and Feynman's QED book both discuss how the reflection of a photon of light by a sheet of glass depends on the glass thickness, although the photon is reflected as if from the front face. Newton, as always, fiddled an explanation based on metaphysical "fits of reflection/transmission" by light, claiming that light actually causes a wave of some sort (aetherial, according to Newton) in the glass which travels with the light and controls reflection off the rear surface.

Actually Newton was wrong, because you can measure the time it takes for the reflection from a really thick piece of glass, and that shows the light reflects from the front face. What is happening is that energy (gauge boson electromagnetic energy) is going at light speed in all directions within the glass normally, and is affected by the vibration of the crystalline lattice. The normal internal "resonant frequency" depends on the exact thickness of the glass, and this in turn determines the probability that a photon hitting the front face is reflected or transmitted. It is purely causal.
Electrons have quantized charge and therefore a quantized electric field, hardly a description of the "light wave" we can physically experiment with and measure as 1 metre (macroscopic) wavelength radio waves. The peak electric field of radio is directly proportional to the orthogonal acceleration of the electrons which emit it. There is no evidence that the vacuum charges travel at light speed in a straight line. An electron is a trapped negative electric field. To go at light speed, its spin would have to be annihilated by a positron to create gamma rays. Conservation of angular momentum forbids an electron from going at light speed: the spin is at light speed, and the electron can't maintain angular momentum without being supplied with increasing energy as the overall propagation speed rises. Linear momentum and angular momentum are totally separate. It is impossible to have an electron and positron pair going at light speed, because the real spin angular momentum would then be zero: the total internal speed can't exceed c, and it is exactly c if electrons are electromagnetic energy in origin (hence the vector sum of propagation and spin speeds, Pythagoras' sum of squares of speeds if the propagation and spin vectors are orthogonal, implies that the spin slows from c towards 0 as the electron's propagation velocity increases from 0 towards c). The electron would therefore have to be supplied with increasing mass-energy to conserve angular momentum as it is accelerated towards c.
People like Josephson take the soft quantum mechanical approach of ignoring spin, assuming it is not real. (This is usually defended by appealing to the switch-over from the time-dependent to the time-independent version of the Schroedinger equation when a measurement is taken, which defends a metaphysical requirement for the spin to remain indeterminate until the instant of being measured. However, Love of California State University proves that this is a mathematical confusion between the two versions of Schroedinger's equation, and is not a real physical phenomenon. It is very important to be specific about where the errors in modern physics are, because most of it is empirical data from nuclear physics. Einstein's special relativity isn't worshipped for enormous content, but for fitting the facts. The poor presentation of it as being full of amazing content is crazy. It is respected by those who understand it because it has no content and yet produces empirically verifiable formulae for local mass variation with velocity, local time rate variation (or rather, uniform motion rate variation), E = mc^2, etc. Popper's analysis of everything is totally bogus; he defends special relativity as being a falsifiable theory, which it isn't, as it was based on empirical observations; special relativity is only a crime for not containing a mechanism and for not admitting the change in philosophy. Similarly, quantum theory is correct so far as the equations are empirically defensible to a large degree of accuracy, but it is a crime to use this empirical fit to claim that mechanisms or causality don't exist.)
The photon certainly has electromagnetic energy with separated negative and positive electric fields. The question is: is the field the cause of charge, or vice-versa? Catt says the field is the primitive. I don't like Catt's arguments for the most part (political trash), but he has some science mixed in there too, or at least Dr Walton (Catt's co-author) does. Fire two TEM (transverse electromagnetic) pulses, guided by two conductors, through one another from opposite directions, and there is no measurable resistance while the pulses overlap. Electric current ceases. The primitive is the electromagnetic field.
To model a photon out of an electron and a positron going at light velocity is false, for the reasons I've given. If you are going to say the electron-positron pairs in the vacuum don't propagate at light speed, you are more sensible, as that will explain why light doesn't scatter around within the vacuum due to the charges in it hitting other vacuum charges, etc. But then you are back to a model in which light moves like transverse (gravity) surface waves in water, but with the electron-positron ether as the sea. You then need to explain why light waves don't disperse. In a photon in the vacuum, the peak amplitude is always the same, no matter how far it goes from its source. Water waves, however, lose amplitude as they spread. Any pressure wave that propagates (sound, sonar, water waves) has an excess pressure and a rarefaction (under-normal pressure) component. If you are going to claim instead that a displacement current in the aether is working with Faraday's law to allow light propagation, you need to be scientific and give all the details; otherwise you are just repeating what Maxwell said 140 years ago, with your own gimmicks about dipoles and double helix speculation.
Maxwell's classical model of a light wave is wrong for several reasons.
Maxwell said light is positive and negative electric field, one behind the other (the variation from positive to negative electric field occurring along the direction of propagation). This is a longitudinal wave, although it was claimed to be a transverse wave because Maxwell's diagram in his Treatise on Electricity and Magnetism plots the strengths of the E field and B field on axes transverse to the direction of propagation. However, this is just a graph which does not correspond to 3-dimensional space, only to fields along 1 dimension of space, the x direction. The axes of Maxwell's light wave graph are the x direction, E field strength, and B field strength. If Maxwell had drawn axes x, y, z, then he could claim to have shown a transverse wave. But he didn't; he had axes x, E, B: one-dimensional, with two field amplitude plots.
Heaviside's TEM wave guided by two conductors has the negative and positive electric fields one beside the other, i.e., orthogonal to the direction of propagation. This makes more sense to me as a model for a light wave: Maxwell's idea of having the different electric fields of the photon (positive and negative) one behind the other is bunk, because both are moving forward at light speed in the x direction and so cannot influence one another (without exceeding light speed).
Theorems
Results and predictions
Many prefer to treat science as a brand of religion and to dismiss empirically based facts as personal pet theories. (Instead, many believe in speculative 'mainstream' schemes.) Since 2004, updates, revisions and improvements have been published on the internet. From 1996 to 2004 they were published in technical journals. If you dismiss the facts because you want to call them a particular person's pet theory, or because you have a religious-style belief in a 'mainstream' political-style 'theory' like string speculation, you may be a fascist, but you are not scientific.
Maxwell died believing that radiation travels through the vacuum because there is virtual charge in space to form a displacement current, so that Ampere’s law completes the electromagnetic cycle of Faraday’s law of induction. I’ve not seen anybody refute this.
Dirac predicted antimatter from a vacuum sea of electrons. Knock one out and you create a hole which is a positron. Einstein chucked out SR when he developed GR:
‘… the law of the constancy of the velocity of light. But … the general theory of relativity cannot retain this law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’  Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p111.
‘… the principle of the constancy of the velocity of light in vacuo must be modified, since we easily recognise that the path of a ray of light … must in general be curvilinear…’  Albert Einstein, The Principle of Relativity, Dover, 1923, p114.
‘The special theory of relativity … does not extend to nonuniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.
‘According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Sidelights on Relativity, Dover, New York, 1952, p23.
The falsity of 'dark energy': since gravity is a response to the surrounding matter, distant galaxies in the explosion are not slowed down by gravity, so the supernova data showing an 'acceleration' offsetting the fictional gravitational retardation (i.e., showing no departure from the Hubble law) was predicted and published in 1996, before observation confirmed it (because the prediction was suppressed, the observations are force-fitted to a fraudulent Ptolemaic epicycle-type farce instead).
http://cosmicvariance.com/2006/01/03/dangerphilanderson/:
‘the flat universe is just not decelerating, it isn’t really accelerating’  Phil Anderson
‘As far as explaining what the dark energy is, I certainly won’t kid you, I have no idea! (Likewise inflation.) I’m extremely interested in alternatives, including modified gravity and backreaction of perturbations, and openminded about different candidates for dark energy itself.’  Sean
Look, Phil Anderson’s comment is exactly the correct prediction made via the October 1996 issue of Electronics World, which was confirmed experimentally two years later by Perlmutter’s observations.
The lack of deceleration is because the expansion causes general relativity: http://feynman137.tripod.com/#h
This existing paradigm tries to take general relativity (as based on local observations, including Newtonian gravity as a limit) to the universe, and force it to fit.
The reality is that gravity and contraction (general relativity) are predicted accurately from the big bang dynamics in a quantum field theory and spacetime context. There is nothing innovative here, it’s old ideas which have been ignored.
As Anderson says, the universe is 'just not decelerating, it isn't really accelerating', and that's due to the fact that gravity is a proven effect of the surrounding expansion:
http://electrogravity.blogspot.com/
This isn’t wrong, it’s been carefully checked by peerreviewers and published over 10 years. This brings up Sean’s point about being interested in this stuff. It’s suppressed, despite correct predictions of force strengths, because it doesn’t push string theory. Hence it was even removed from arXiv after a few seconds (without being read). There is no ‘new principle’, just the existing wellknown physical facts applied properly.
The Standard Model is the best-tested physical theory in history. Forces are due to radiation exchange in spacetime. The big bang has recession speeds from 0 to c over times past of 0 toward 15 billion years, giving outward force F = ma = mc/t. Newton's 3rd law gives an equal inward force, carried by gauge bosons, which are shielded by matter; see the proof of G within 2%. Radiation pressure causes gravity, the contraction in general relativity, and the other forces.
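As a rough numerical illustration of F = ma = mc/t in Python (the ~15 billion year age is the figure stated above; the mass of the universe is an assumed order-of-magnitude round figure, not a value given here):

c = 3.0e8                        # m/s
t = 15e9 * 365.25 * 24 * 3600.0  # ~15 billion years in seconds, ~4.7e17 s
a = c / t                        # outward acceleration, a = Hc ~ c/t
m = 3e52                         # kg, assumed round figure for universe mass
print(a, m * a)                  # a ~ 6.3e-10 m/s^2, outward force ~ 2e43 N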
The mainstream approach is to take GR as a model for the universe, which assumes gravity is not a QFT radiation pressure force. But if you take the observed expansion as primitive, then you get a mechanism for local GR as the consequence, without the anomalies of the mainstream model, which requires a cosmological constant (CC) and inflation.

Outward expansion in spacetime by Newton's 3rd law results in inward gauge boson pressure, which causes the contraction term in GR as well as gravity itself.

GR is best viewed simply as Penrose describes it:

(1) the tensor field formulation of Newton's law, R_uv = 4.Pi.(G/c^2).T_uv, and

(2) the contraction term, which leads to all departures from Newton's law (apart from the CC).

Putting the contraction term into the Newtonian R_uv = 4.Pi.(G/c^2).T_uv gives the Einstein field equation without the CC:

R_uv - (1/2).R.g_uv = 8.Pi.(G/c^2).T_uv

Feynman explains very clearly that the contraction term can be considered physical, e.g., the Earth's radius is contracted by the amount ~(1/3)MG/c^2 = 1.5 mm.
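Feynman's contraction figure is easy to verify in Python, using standard values for G, the Earth's mass, and c:

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth mass, kg
c = 2.998e8     # light speed, m/s

contraction = (1.0 / 3.0) * G * M / c**2
print(contraction * 1000.0, "mm")   # ~1.5 mm, as stated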
This is like radiation pressure squeezing the earth at the subatomic level (not just the macroscopic surface of the planet), and this contraction in space also causes a related gravitational reduction in time: gravitational time-dilation.
A shield, like the planet earth, is composed of very small, subatomic particles. The very small shielding area per particle means that there will be an insignificant chance of the fundamental particles within the earth 'overlapping' one another by being directly behind each other.
The total shield area is therefore directly proportional to the total mass: the total shield area is equal to the area of shielding by 1 fundamental particle, multiplied by the total number of particles. (Newton showed that the inverse-square gravity from a spherically symmetrical arrangement of masses, say in the earth, is similar to the gravity from the same mass located at the centre, because the mass within a shell depends on its area and the square of its radius.) The earth's mass in the standard model is due to particles associated with up and down quarks: the Higgs field.
From the illustration above, the total outward force of the big bang is

(total outward force) = ma = (mass of universe).(Hubble acceleration, a = Hc; see detailed discussion and proof further on below),

while the gravity force is the shielded inward reaction (by Newton’s 3rd law the outward force has an equal and opposite reaction):

F = (total outward force).(cross-sectional area of shield projected to radius R) / (total spherical area with radius R).
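To make the geometry concrete, here is a minimal numerical sketch of the stated relation. The mass of the universe, the Hubble parameter, and the use of the black-hole radius r = 2Gm/c^2 (discussed later on this page) as the shield cross-section per particle are all assumed illustrative inputs, not derived values; the page's full derivation includes further geometric factors:

import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
H = 2.3e-18        # Hubble parameter, s^-1 (~70 km/s/Mpc); assumed value
M_universe = 9e52  # kg; a commonly quoted rough figure, assumed here
m_e = 9.109e-31    # electron mass, kg

# Outward force of the big bang: F = ma with a = Hc, as stated above.
F_out = M_universe * H * c

# Shield cross-section per particle: black-hole radius r = 2Gm/c^2.
r_shield = 2 * G * m_e / c**2
sigma = math.pi * r_shield**2

# Inward reaction force intercepted by one electron 'shield' seen from
# distance R (here R = 1 m), per the formula above.
R = 1.0
F_mechanism = F_out * sigma / (4 * math.pi * R**2)

# Compare with Newtonian gravity between two electrons at the same R.
F_newton = G * m_e**2 / R**2

print(f"F_out       = {F_out:.2e} N")
print(f"F_mechanism = {F_mechanism:.2e} N")
print(f"F_newton    = {F_newton:.2e} N")  # same order of magnitude

With these rough inputs the shielding estimate lands within a factor of ~2 of the Newtonian force between two electrons, the kind of agreement the fuller treatment (with its geometric corrections) is claimed to tighten to within 2%.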
The vacuum has both massive particles and radiation. The pressure from radiation exchange between mass-causing Higgs field particles causes electromagnetic forces with the inverse-square law by a quantum field theory version of LeSage’s mechanism, because the radiation cannot ‘diffract’ into ‘shadows’ behind shielding particles. However, the massive particles in the vacuum are more like a gas and scatter pressure randomly in all directions, so they do ‘fill in shadows’ within a short distance, resulting in the extremely short range of the nuclear forces.
The relationship between the strength of gravity and electromagnetism emerges when you analyse how the potential (voltage) adds up between capacitor plates with a vacuum dielectric, when they are aligned at random throughout space instead of being in a nice series circuit. You also have to understand an error in the popular interpretation of the crucial ‘displacement current’ term in Maxwell’s equation for the curl of a magnetic field (the term added to Ampere’s law ‘for mathematical consistency’): it is not the whole story.
‘We have to study the structure of the electron, and if possible, the single electron, if we want to understand physics at short distances.’ – Professor Asim O. Barut, On the Status of Hidden Variable Theories in Quantum Mechanics, Apeiron, 2, 1995, pp. 97-8. (Quoted by Dr Thomas S. Love.)
PARTICLE MASS PREDICTIONS. The gravity mechanism implies (see analysis further on) quantized unit masses. As proved further on, the 1/alpha or ~137 factor is the electromagnetic shielding of any particle core charge by the surrounding polarised vacuum. When a mass-giving black hole (gravitationally trapped) Z-boson (this is the Higgs particle) with 91 GeV energy is outside an electron core, both its own field (it is similar to a photon, with equal positive and negative electric field) and the electron core have 137 shielding factors, and there are also smaller geometric corrections for spin loop orientation, so the electron mass is: [Z-boson mass]/(3/2 x 2.Pi x 137 x 137) ~ 0.51 MeV. If, however, the electron core has more energy and can get so close to a trapped Z-boson that both are inside and share the same overlapping polarised vacuum veil, then the geometry changes so that the 137 shielding factor operates only once, predicting the muon mass: [Z-boson mass]/(2.Pi x 137) ~ 105.7 MeV. The muon is thus an automatic consequence of a higher energy state of the electron. As Dr Thomas Love of California State University points out, although the muon doesn’t decay directly into an electron by gamma ray emission, apart from its higher mass it is identical to an electron, and the muon can decay into an electron by emitting electron and muon neutrinos.

The general equation for the mass of all particles apart from the electron is [electron mass].[137].n(N+1)/2 ~ 35n(N+1) MeV. (For the electron, the extra polarised shield occurs, so this should be divided by the 137 factor.) Here the symbol n is the number of core particles like quarks, sharing a common, overlapping polarised electromagnetic shield, and N is the number of Higgs or trapped Z-bosons. Lest you think this is all ad hoc coincidence (as occurred in criticism of Dalton’s early form of the periodic table), remember we have a mechanism, unlike Dalton, and below we make additional predictions and tests for all the other observable particles in the universe, and compare the results to experimental measurements:
Comparison of mass formula, M = [electron mass].[137].n(N+1)/2 = [Z-boson mass].n(N+1)/[3 x 2Pi x 137] ~ 35n(N+1) MeV, against experimental data
‘… I do feel strongly that this [string theory] is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. … why are the masses of the various particles such as quarks what they are? All these numbers … have no explanations in these string theories - absolutely none! …’ – Feynman in Davies & Brown, ‘Superstrings’, 1988, at pages 194-195, http://www.math.columbia.edu/~woit/wordpress/?p=272#comment5295
N = number of black hole (gravitationally trapped) Z-bosons associated with each core; n = number of fundamental particles per observable particle core (n = 1: leptons; n = 2: mesons, i.e. 2 quarks; n = 3: baryons, i.e. 3 quarks).

N = 1 ..... Leptons: Electron, 0.511 MeV measured; [Z-boson mass]/(3/2 x 2.Pi x 137 x 137) = 0.51 MeV predicted.
N = 1 ..... Mesons: Pions, 139.57 and 134.96 MeV measured; 35n(N+1) ~ 140 MeV predicted.
N = 2* .... Leptons: Muon (most stable particle after the neutron), 105.66 MeV measured; 35n(N+1) ~ 105 MeV predicted.
N = 6 ..... Mesons: Kaons, 493.67 and 497.67 MeV measured; 35n(N+1) ~ 490 MeV predicted.
N = 7 ..... Mesons: Eta, 548.8 MeV measured; 35n(N+1) ~ 560 MeV predicted.
N = 8* .... Baryons: Nucleons (VERY STABLE), 938.28 and 939.57 MeV measured; 35n(N+1) ~ 945 MeV predicted.
N = 10 .... Baryons: Lambda and Sigmas, 1115.6, 1189.36, 1192.46 and 1197.34 MeV measured; 35n(N+1) ~ 1155 MeV predicted.
N = 12 .... Baryons: Xi, 1314.9 and 1321.3 MeV measured; 35n(N+1) ~ 1365 MeV predicted.
N = 15 .... Baryons: Omega, 1672.5 MeV measured; 35n(N+1) ~ 1680 MeV predicted.
N = 50* ... Leptons: Tauon, 1784.2 MeV measured; 35n(N+1) ~ 1785 MeV predicted.
Only particles with lifetimes above 10^-23 s are included above. The blank spaces predict other particles. The integer formula is very close, as statistical tests show. (Notice that the periodic table of chemistry did not explain discrepancies from integer masses until mass defect due to binding energy, isotopic composition, and other factors were discovered long after the periodic table was widely accepted. Doubtless there is some similar ‘noise’ in the measurements due to field interactions.)
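A minimal Python sketch to reproduce the comparison column of the table (the measured masses are the values quoted above; the electron is excluded because it carries the extra 1/137 shielding factor):

# Predicted mass M ~ 35 * n * (N + 1) MeV, from the formula above.
def predicted_mass_mev(n, N):
    return 35.0 * n * (N + 1)

# (name, n, N, measured masses in MeV) as quoted in the table above.
rows = [
    ("Pions",           2,  1, [139.57, 134.96]),
    ("Muon",            1,  2, [105.66]),
    ("Kaons",           2,  6, [493.67, 497.67]),
    ("Eta",             2,  7, [548.8]),
    ("Nucleons",        3,  8, [938.28, 939.57]),
    ("Lambda & Sigmas", 3, 10, [1115.6, 1189.36, 1192.46, 1197.34]),
    ("Xi",              3, 12, [1314.9, 1321.3]),
    ("Omega",           3, 15, [1672.5]),
    ("Tauon",           1, 50, [1784.2]),
]

for name, n, N, measured in rows:
    pred = predicted_mass_mev(n, N)
    avg = sum(measured) / len(measured)
    print(f"{name:16s} predicted {pred:7.1f} MeV, "
          f"measured mean {avg:7.1f} MeV ({100 * (pred - avg) / avg:+.1f}%)")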
These facts on gravity above are all existing accepted orthodoxy: the Feynman diagrams are widely accepted, as is the spacetime (time after big bang decreasing with increasing observed distance); Newton’s laws of motion, geometry, and applied physics are not controversial. However, have you seen this mechanism in any scientific journal? No? But you have seen string theory, which predicts nothing testable and a whole load of unobservables (superpartners, supersymmetry at energy far beyond observations, 6/7 extra dimensions, strings of Planck size without any evidence, etc.)? Why? Why won’t they tell you the facts? The existing ‘string theory’ gravity is ‘speculative gibberish’: untestable hocus-pocus!
The administrators of arXiv.org still won’t publish this, preferring the embarrassment of it being dismissed as a mere ‘alternative’ to mainstream (M) theory of strings, which can vaguely predict anything that is actually observable, by being non-specific. ArXiv.org say: ‘You should know the person that you endorse or you should see the paper that the person intends to submit. We don’t expect you to read the paper in detail, or verify that the work is correct, but you should check that the paper is appropriate for the subject area. You should not endorse the author … if the work is entirely disconnected with current [string theory] work in the area.’ Hence innovation is suppressed. ArXiv rely entirely on suppression via guilt by association or lack of association, and as just quoted, they don’t care whether the facts are there at all! Recent improvements here are due mainly to the influence of Woit’s good weblog. I’d also like to acknowledge encouragement from fellow Electronics World contributor A. G. Callegari, who has some interesting electromagnetic data.
Peter Woit in http://arxiv.org/abs/hep-th/0206135 put forward a conjecture: "The quantum field theory of the standard model may be understood purely in terms of the representation theory of the automorphism group of some geometric structure." Using spinors and Clifford algebras he comes up with an illustrative model on page 51, which looks as if it will do the job, but then adds the guarded comment:

"The above comments are exceedingly speculative and very far from what one needs to construct a consistent theory. They are just meant to indicate how the most basic geometry of spinors and Clifford algebras in low dimensions is rich enough to encompass the standard model and seems to be naturally reflected in the electroweak symmetry properties of Standard Model particles."

This guarded approach needs to be contrasted with the hype surrounding string theory.
"How in the world is it possible to sort out the
crackpots from the legitimate researchers if you lack the time,
background, mathematical sophistication, etc. to master the topic?" 
Daryl
You demand to see something called evidence. You
examine the evidence. If it consists solely of unobserved gravitons and
unobserved superpartners, you have to conclude that it fits into the
category of speculations which also contains organised moneymaking
religion. If the evidence is convincing and the theory is not otherwise in
contradiction of reality, then you have to scientifically appreciate that
it is a real possibility.
String theorists call all alternatives crackpot. Alternatives to failed mainstream ideas are not automatically wrong. Those who are censored for being before their time, or for contradicting mainstream non-tested speculation, are hardly crackpot. As a case in point, see http://cdsweb.cern.ch/search.py?recid=688763&ln=en which was peer-reviewed and published but censored off arXiv according to the author (presumably for contradicting stringy speculation). It is convenient for Motl to dismiss this as crackpot by personally-abusive name-calling, giving no reason whatsoever. Even if he gave a 'reason', that would not mean anything, since these string theorists are downright ignorant. What Motl would have to do is not just call names, or even provide a strawman-type 'reason', but actually analyse and compare alternative theories objectively to mainstream string theory. This he won't do. It is curious that nobody remembers the problems that Einstein had when practically the entire physics establishment of Germany in the 1930s was coerced by fascists into calling him a crackpot. I think Pauli's categories of "right", "wrong", and "not even wrong" are more objective than calling suggestions "crackpot".

If you live in a society where unobserved gravitons and superpartners are believed to be "evidence" that string theory unifies Standard Model forces and "has the remarkable property of predicting gravity" {quoted from stringy M-theory originator Edward Witten, Physics Today, Apr 96}, then your tendency to ignore it is no help. You have to point out that it is simply vacuous.
String theory lacks a specific quantum field theory vacuum, yet as Lunsford says, that doesn’t stop string theory from making a lot of vacuous "predictions". String theory allows 10^500 or so vacua, a whole "landscape" of them, and there is no realistic hope of determining which is the right one. So it is so vague it can’t say anything useful. The word "God" has about 10^6 different religious meanings, so string theory is (10^500)/(10^6) = 10^494 times more vague than religion.
Feynman’s statements in Davies & Brown, ‘Superstrings’, 1988, at pages 194-195: ‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. … why are the masses of the various particles such as quarks what they are? All these numbers … have no explanations in these string theories - absolutely none! …’ - http://www.math.columbia.edu/~woit/wordpress/?p=272#comment5295
Thomas Larsson has listed the following more recent experts:

Sheldon Glashow: "string theory has failed in its primary goal" - http://www.pbs.org/wgbh/nova/elegant/viewglashow.html
Martinus Veltman: "string theory is a figment of the theoretical mind" - http://www.amazon.ca/exec/obidos/ASIN/981238149X/70155274959406712
Phil Anderson: "string theory a futile exercise as physics" - http://www.edge.org/q2005/q05_10.html#andersonp
Bob Laughlin: "string theory a 50-year-old woman wearing way too much lipstick" - http://sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2005/03/14/MNGRMBOURE1.DTL
Dan Friedan: "string theory is a complete scientific failure" - http://www.arxiv.org/abs/hep-th/0204131
Also note that even Dr Lubos Motl has expressed concerns with the ‘landscape’ aspect of string theory, while Dr Peter Woit in his 2002 paper pointed out the problem that string theory doesn’t actually sort out gravity:

‘It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.’ – Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135

In addition, Sir Roger Penrose analysed the problems with string theory at a technical level, concluding: ‘in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory.’ - The Road to Reality, 2004, page 896.
So how did string theorists dupe the world?

'In the first section the history of string theory starting from its S-matrix bootstrap predecessor up to Susskind’s recent book is critically reviewed. The aim is to understand its amazing popularity which starkly contrasts its fleeting physical content. A partial answer can be obtained from the hegemonic ideological stance which some of its defenders use to present and defend it. The second section presents many arguments showing that the main tenet of string theory which culminated in the phrase that it represents "the only game in town" is untenable. It is based on a wrong view about QFT being a mature theory which (apart from some missing details) already reached its closure. ...

'A guy with the gambling sickness loses his shirt every night in a poker game. Somebody tells him that the game is crooked, rigged to send him to the poorhouse. And he says, haggardly, I know, I know. But it's the only game in town. - Kurt Vonnegut, The Only Game in Town [13]

'This is a quotation from a short story by Kurt Vonnegut which Peter Woit recently used in one of the chapters in his forthcoming book entitled Not Even Wrong: The Failure of String Theory & the Continuing Challenge to Unify the Laws of Physics (using a famous phrase by which Wolfgang Pauli characterized ideas which either had not even the quality of being wrong in an interesting way or simply lacked the scientific criterion of being falsifiable).' - Professor Bert Schroer, arXiv:physics/0603112, p. 1.
Predictably, Dr Motl has launched into a paranoid attack on Professor Bert Schroer, just because of a poem in the paper which happened to mention someone called Motl: http://motls.blogspot.com/2006/03/bertschroerspaper.html. But, alas, the issues are real:

'I argue that string theory cannot be a serious candidate for the Theory of Everything, not because it lacks experimental support, but because of its algebraic shallowness. I describe two classes of algebraic structures which are deeper and more general than anything seen in string theory...' - T. A. Larsson, arXiv:math-ph/0103013, p. 1.

'The history of science is full of beautiful ideas that turned out to be wrong. The awe for the math should not blind us. In spite of the tremendous mental power of the people working in it, in spite of the string revolutions and the excitement and the hype, years go by and the theory isn’t delivering physics. All the key problems remain wide open. The connection with reality becomes more and more remote. All physical predictions derived from the theory have been contradicted by the experiments. I don’t think that the old claim that string theory is such a successful quantum theory of gravity holds anymore. Today, if too many theoreticians do strings, there is the very concrete risk that all this tremendous mental power, the intelligence of a generation, is wasted following a beautiful but empty fantasy. There are alternatives, and these must be taken seriously.' - Carlo Rovelli, arXiv:hep-th/0310077, p. 20.
SO WHY ISN’T THIS IN THE PHYSICAL REVIEW LETTERS? http://www.math.columbia.edu/~woit/wordpress/?p=273
‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ G. Orwell, 1984, Chancellor Press, London, 1984, p225.
‘(1). The idea is nonsense. (2). Somebody thought of it before you did. (3). We believed it all the time.’  Professor R.A. Lyttleton's summary of inexcusable censorship (quoted by Sir Fred Hoyle in ‘Home is Where the Wind Blows’ Oxford University Press, 1997, p154). Example: recent correspondence with Josephson.
Latest developments: see http://electrogravity.blogspot.com/ and http://lqg.blogspot.com/
Below is from a review of Thomas R. Love’s preprint ‘Towards an Einsteinian Quantum Theory’, Departments of Mathematics and Physics, California State University:
Einstein, in his 1936 paper Physics and Reality, argued that quantum mechanics is merely a statistical means of accounting for the average behaviour of a large number of particles. In a hydrogen atom, presumably, the three-dimensional wave behaviour of the electron would be caused by the interaction of the electron with the particles and radiation of the quantum mechanical vacuum or Dirac sea, which would continuously be disturbing the small-scale motion of subatomic sized particles, by analogy to the way air molecules cause the jiggling or Brownian motion of very small dust particles. Hence there is chaos on small scales due to a causal physical mechanism, the quantum foam vacuum. Because of the Poincare chaos which the electromagnetic and other fields create in 3+ body interactions, probability and statistics rule the small scale. Collisions of particles in the vacuum by this mechanism result in the creation of other virtual particles for a brief time, until further collisions annihilate the latter particles. Random collisions of vacuum particles and unstable nuclei trigger the Poisson statistics behind exponential radioactive decay, by introducing probability. All of these phenomena are real, causal events, but like the well-known Brownian motion chaos of dust particles in air, they are not deterministic.
Love has a vast literature survey and collection of vitally informative quotations from authorities, as well as new insights from his own work in quantum mechanics and field theory. He quotes, on page 8, from Asim O. Barut's paper, On the Status of Hidden Variable Theories in Quantum Mechanics (Apeiron, 2, 1995, p. 97): "We have to study the structure of the electron, and if possible, the single electron, if we want to understand physics at short distances."
String theory claims to study the electron by vibrating extra-dimensional strings of Planck scale, but there is not a shred of evidence for this. I'd point out that the Planck scale is meaningless, since the black-hole radius for an electron mass (R = 2GM/c^2) is a lot smaller than the Planck size, so why choose to speculate that strings are Planck size? (Planck was only fiddling around with dimensional analysis, and falsely believed he had found the smallest possible length scale, when in fact the black-hole size of an electron is a lot, lot smaller!)
On page 9, Love points out that "The problem is that quantum mechanics is mathematically inconsistent...", and compares the two versions of the Schroedinger equation on page 10. The time-independent and time-dependent versions disagree, and this disagreement nullifies the principle of superposition and consequently the concept of wavefunction collapse being precipitated by the act of making a measurement. The failure of superposition discredits the usual interpretation of the EPR experiment as proving quantum entanglement. To be sure, making a measurement always interferes with the system being measured (by recoil from firing light photons or other probes at the object), but that is not justification for the metaphysical belief in wavefunction collapse.
Page 40: "There is clearly a relationship between the
mass of an elementary particle and the interactions in which it
participates."
To combine the heuristic quantum field theory
physical ideas with general relativity, matter causes the curvature of
test particle geodesics via radiation (vector boson) exchange. The
pressure causes GR curvature, the GR contraction of masses (squeezing by
radial radiation pressure from the surrounding universe), GR gravity
(LeSage shielding of the radiation pressure).
On page 40 Love finds that "the present work implies that the curvature of the spacetime is caused by the rotation of something..." We know the photon has spin, so can we create a spin foam vacuum from radiation (photons)? Smolin is interested in this.
Page 41: Muon as a heavy electron. Love says that "Barut argues that the muon cannot be an excited electron since we do not observe the decay muon -> electron + gamma ray." Love argues that in the equation muon -> electron + electron neutrino + muon neutrino, the neutrino pair "is essentially a photon." It does seem likely from experimental data on the properties of the electron and muon that the muon is an electron with extra energy which allows it to associate strongly with the Higgs field.
Traditionally the Higgs field is introduced into electroweak theory partly to give the neutral Z-boson (91 GeV) a limited range at low energy, compared to the infinite range of photons. Now let's look at the mainstream heuristic picture of the electron in the Dirac sea of QFT, which is OK as far as it goes, but doesn't go far enough:
Most of the charge is screened out by polarised charges in the vacuum around the electron core: '... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).' - arXiv:hep-th/0510040, p. 71.

'All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.' - I. Levine, D. Koltick, et al., Physical Review Letters, v. 78, 1997, no. 3, p. 424.
Koltick found a 7% increase in the strength of the Coulomb/Gauss force field law when colliding electrons at an energy of 80 GeV or so. The coupling constant for electromagnetism is 1/137 at low energies but was found to be 1/128.5 at 80 GeV or so. This rise is due to the polarised vacuum being broken through. We have to understand Maxwell's equations in terms of the gauge boson exchange process for causing forces, and the polarised vacuum shielding process for unifying forces into a unified force at very high energy. The minimal SUSY Standard Model shows the electromagnetic force coupling increasing from alpha of 1/137 to alpha of 1/25 at 10^16 GeV, and the strong force falling from 1 to 1/25 at the same energy, hence unification. The reason why the unification superforce strength is not 137 times electromagnetism but only 137/25, or about 5.5 times electromagnetism, is heuristically explicable in terms of potential energy for the various force gauge bosons. If you have one force (electromagnetism) increase, more energy is carried by virtual photons, at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you may learn that SUSY just isn't needed or is plain wrong, or else you will get a better grip on what is real and make some testable predictions as a result.
It seems that the traditional role of the Higgs field in giving mass to the 91 GeV Z-boson to limit its range (and to give mass to Standard Model elementary particles) may be back-to-front. If Z-bosons can be trapped by gravity into loops, like the model for the electron, they can numerically account for mass. Think of the electron as a bare core of charge 137e, surrounded by a shell of polarised vacuum which reduces the core charge to e. A Z-boson, while electrically neutral as a whole, is probably an oscillating electromagnetic field like a photon, being half positive and half negative electric field. So if as a loop it is aligned side-on it can be associated with a charge, providing mass. The point of this exercise is to account for recently observed empirical coincidences of masses:
Neutral Z-boson: 91 GeV.

Muon mass: 91,000/(2.Pi x 137 shielding factor) = 105.7 MeV. => Muon is an electron core associated with a Z-boson which has a polarised shield around its own core.

Electron mass: Muon mass/(1.5 x 137) = 0.511 MeV. => Electron is like a muon, but there are two polarised shields weakening the association (one polarised shield around the electron core and one around the Z-boson core).
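Those two division chains are quick to check numerically (a minimal sketch; 137.036 is the standard 1/alpha value, and agreement with the measured masses is at the ~1% level):

import math

m_Z = 91000.0        # Z-boson mass in MeV (91 GeV), as stated above
alpha_inv = 137.036  # the 1/alpha shielding factor used on this page

# Muon: one 137 shielding factor plus the 2.Pi spin/loop factor.
m_muon = m_Z / (2 * math.pi * alpha_inv)
print(f"muon:     {m_muon:.1f} MeV (measured 105.66)")

# Electron: a second shielding factor and the 1.5 geometry factor.
m_electron = m_muon / (1.5 * alpha_inv)
print(f"electron: {m_electron:.4f} MeV (measured 0.511)")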
So the Z-boson, muon, and electron masses are physically related by just multiplying by 1/137 factors, depending on how many polarised shields are involved (i.e., on whether the cores of the electron and Z-boson are close enough for the polarised veils of the Dirac sea to overlap, or not). The 2Pi shielding factor above is explained as follows: the spin of a fermion is half-integer, so it rotates 720 degrees (like a Mobius strip with a half turn), so the average exposed side-on loop field area is half what you would have if it had spin 1. (The twist in a Mobius strip loop reduces the average area you see side-on; it is a simple physical explanation.) The Pi factor comes from the fact that when you look at any charged loop side-on, you are subject to a field intensity Pi times less than if you look at the field from the loop perpendicularly.
The 1.5 factor arises as follows. The mass of any individually observable elementary particle (quarks aren't separable, so I'm talking of leptons, mesons and baryons) is heuristically given by:

M = {electron mass}.{137 polarised dielectric correction factor; see below for proof that this is the shielding factor}.n(1/2 + N/2).

In this simple formula, the 137 correction factor is not needed for the electron mass, so for an electron, M = {electron mass}.n(1/2 + N/2) = {electron mass}. Here n stands for the number of charged core particles like quarks (n = 1 for leptons, n = 2 for mesons, n = 3 for baryons), and N is the number of vacuum particles (Z-bosons) associated with the charge. I've given a similar argument for the causal mechanism of Schwinger's first corrective radiation term for the magnetic moment of the electron, 1 + alpha/(2.Pi), on my page. The heuristic explanation for the (1/2 + N/2) factor could be the addition of spins.
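The Schwinger term just mentioned is a one-line check (this is the standard first-order QED result, computed with the usual value of alpha):

import math

alpha = 1 / 137.036
# First-order QED (Schwinger) correction to the electron magnetic moment.
moment = 1 + alpha / (2 * math.pi)
print(f"1 + alpha/(2*pi) = {moment:.7f} magnetons")  # ~1.0011614; measured ~1.0011597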
The problem is that whatever the truth is, whether string theory or LQG, some kind of connection of these numbers with reality is needed. You have three leptons and three families of quarks. The quark masses are not "real" in the sense that you can never in principle observe a free quark (the energy needed to break a pair or triad of quarks apart is enough to form new pairs of quarks). So the real problem is explaining the observable facts relating to masses: the three lepton masses (electron, muon, tauon, respectively about 0.511, 105.66 and 1784.2 MeV, or 1/137.0..., 1.5 and 25.5 respectively if you take the 1/137.0... as the electron mass), and a large amount of hadron data on meson (2 quarks each) and baryon (3 quarks each) masses. When you multiply the masses of the hadrons by alpha (1/137.0...) and divide by the electron mass, you get, at least for the long-lived hadrons (half-lives above 10^-23 second), pretty quantized (near-integer) sized masses:
Mesons: charged pions = 1.99; neutral pions = 1.93; charged kaons = 7.05; neutral kaons = 7.11; eta = 7.84.

Hyperons: Lambda = 15.9; Sigma+ = 17.0; Sigma0 = 17.0; Sigma- = 17.1; Xi(0) = 18.8; Xi(-) = 18.9; Omega = 23.9.
Of course the exceptions are the nucleons, neutrons and protons, which both have masses on this scale of around 13.4. It is a clue to why they are relatively stable compared to all the other hadrons, which all have half-lives of a tiny fraction of a second (after the neutron, the next most stable hadron found in nature is of course the pion, which has a half-life of 2.6 shakes; 1 shake = 10^-8 second).
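These near-integer ratios are easy to reproduce (a minimal sketch; the masses are the values quoted above, and the nucleon entry is the neutron/proton average):

alpha = 1 / 137.036
m_e = 0.511  # electron mass, MeV

# Measured masses (MeV) quoted above; compute m * alpha / m_e.
hadrons = {
    "charged pion": 139.57, "neutral pion": 134.96,
    "charged kaon": 493.67, "neutral kaon": 497.67,
    "eta": 548.8, "Lambda": 1115.6, "Sigma+": 1189.36,
    "Sigma0": 1192.46, "Sigma-": 1197.34, "Xi(0)": 1314.9,
    "Xi(-)": 1321.3, "Omega": 1672.5, "nucleon": 938.92,
}
for name, m in hadrons.items():
    print(f"{name:13s} {m * alpha / m_e:5.2f}")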
All these particle masses are produced by the semi-empirical formula {M ~ 35n(N + 1) MeV} above to within 2% error, which is strong statistical evidence for quantization (similar to, if not better than, Dalton's evidence for periodicity of the elements in the early nineteenth century; note that Dalton was called a crackpot by many):
N (number of Z-bosons associated with the core) versus n (n = 1: leptons; n = 2: mesons, 2 quarks; n = 3: baryons, 3 quarks):

N = 1 .... Electron (lepton); Pions (mesons)
N = 2 .... Muon (lepton)
N = 6 .... Kaons (mesons)
N = 7 .... Eta (meson)
N = 8 .... Nucleons (baryons)
N = 10 ... Lambda, Sigmas (baryons)
N = 12 ... Xi (baryons)
N = 15 ... Omega (baryon)
N = 50 ... Tauon (lepton)
As you can see from the "periodic table" based on masses above, there are a lot of blanks. Some if not all of these are doubtless filled by the shorter-lived particles.

What needs to be done next is to try to correlate the types of quarks with the apparent integer number of vacuum particles N they associate with, in each meson and baryon. I seem to recall from a course in nuclear physics that the numbers 8 and 50 are "magic numbers" in nuclear physics, which may explain the nucleons having N = 8 and the tauon having N = 50. This is probably the "selection principle" needed to go with the formula to identify predictions of masses of relatively stable particles. (As you comment, there is no real difference between nuclear physics and particle physics.) I know Barut made some effort to empirically correlate lepton masses in his paper in PRL, v. 42 (1979), p. 1251, and Feynman was keener for people to find new ways to calculate data than to play with string theory:
‘… I do feel strongly that this [superstring theory stuff] is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. … why are the masses of the various particles such as quarks what they are? All these numbers … have no explanations in these string theories - absolutely none! …’ - R. P. Feynman, quoted in Davies & Brown, ‘Superstrings’, 1988, at pages 194-195 (quotation provided by Tony Smith).

The semi-empirical formula is not entirely speculative, as the shielding factor 137 can be justified as you may have seen on my pages:
Heisenberg's uncertainty principle says pd = h/(2.Pi), where p is uncertainty in momentum and d is uncertainty in distance. This comes from his imaginary gamma-ray microscope, and is usually written as a minimum (instead of with "=" as above), since there will be other sources of uncertainty in the measurement process. For light-wave momentum p = mc, we get pd = (mc)(ct) = Et, where E is uncertainty in energy (E = mc^2) and t is uncertainty in time. Hence Et = h/(2.Pi), so t = h/(2.Pi.E), and d = ct = hc/(2.Pi.E). This result is used to show that an 80 GeV energy W or Z gauge boson will have a range of about 10^-17 m. So it's OK. Now, E = Fd implies F = E/d, and substituting d = hc/(2.Pi.E) gives F = hc/(2.Pi.d^2). This force is 137.036 times higher than Coulomb's law for unit fundamental charges. Notice that in the last sentence I've suddenly gone from thinking of d as an uncertainty in distance to thinking of it as the actual distance between two charges; but the gauge boson has to go that distance to cause the force anyway.
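Both numerical claims in that paragraph can be checked directly (a minimal sketch using standard reference constants; only the final comparison is specific to this page's argument):

import math

hbar = 1.0546e-34  # J s
c = 2.998e8        # m/s
e = 1.602e-19      # C
eps0 = 8.854e-12   # F/m

# Range of an 80 GeV gauge boson: d = hbar*c/E.
E = 80e9 * e  # 80 GeV in joules
print(f"range d = {hbar * c / E:.2e} m")  # ~2.5e-18 m, the quoted ~10^-17 m order

# Ratio of F = hbar*c/d^2 to the Coulomb force e^2/(4*pi*eps0*d^2);
# d cancels, leaving 4*pi*eps0*hbar*c/e^2 = 1/alpha.
ratio = 4 * math.pi * eps0 * hbar * c / e**2
print(f"F / F_Coulomb = {ratio:.2f}")  # ~137.07 with these rounded constants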
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.
Clearly what's physically happening is that the true force is 137.036 times Coulomb's law, so the real core charge is 137.036e. All the detailed calculations of the Standard Model are really modelling the vacuum processes for different types of virtual particles and gauge bosons. The whole mainstream way of thinking about the Standard Model is related to energy. What is really happening is that at higher energies you knock particles together harder, so their protective shield of polarised vacuum particles gets partially breached, and you can experience a stronger force mediated by different particles! This is reduced by the correction factor 1/137.036 because most of the charge is screened out by polarised charges in the vacuum around the electron core.

The problem is that people are used to looking to abstruse theory, due to the success of QFT in some areas, and looking at the data is out of fashion. If you look at the history of chemistry, there were particle masses of atoms, and it took school teachers like Dalton and Mendeleev to work out periodicity, because the bigwigs were obsessed with vortex atom maths, the 'string theory' of that age. Eventually the obscure school teachers won out over the mathematicians, because the vortex atom (or string theory equivalent) did nothing, while empirical analysis did stuff. It was eventually explained theoretically!
It seems that there are two distinct mechanisms for forces to be propagated via quantum field theory. The vacuum propagates long-range forces (electromagnetism, gravity) by radiation exchange, as discussed in earlier papers kindly hosted by Walter Babin, while short-range forces (strong and weak nuclear interactions) are due to the pressure of the spin foam vacuum. The vacuum is viewed below by analogy to an ideal gas, in which there is a flux of shadowed radiation and also dispersed particle-caused pressure.

The radiation has an infinite range and its intensity decreases from geometric divergence. The material pressure of the spin foam vacuum is like an ideal gas, with a small mean free path, and produces an attractive force with a very short range (like air pressure pushing a suction plunger against a surface, if the gap is too small to allow air to fill the gap). The probabilistic nature of quantum mechanics is then due to the random impacts from virtual particles in the vacuum on a small scale, which statistically average out on a large scale.
There is strong evidence showing that Maxwell's light photon theory is not only drivel, but can be corrected by modern data from electromagnetism. First consider what electricity is. If you charge up an x metre long transmission line to v volts, energy enters at the speed of light. When you discharge it, you (contrary to what you may expect) get a light-speed pulse out of v/2 volts with a duration of 2x/c seconds, which of course implies a pulse 2x metres long. Nobody has ever proposed a mechanism whereby energy travelling at light speed can magically stop when a transmission line charges up, and magically restart when it is allowed to discharge.
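A minimal way to see the v/2, 2x/c discharge pulse is to model the charged line as two counter-propagating light-speed energy currents of v/2 each (the Heaviside picture used on this page); the cell count and the matched-load assumption are illustrative choices:

# Discharge of a charged transmission line modelled as two
# counter-propagating light-speed energy currents of v/2 each.
v = 10.0    # charged line voltage (illustrative)
cells = 50  # line divided into cells of equal light-speed transit time

right = [v / 2] * cells  # right-moving energy current
left = [v / 2] * cells   # left-moving energy current

output = []
for step in range(3 * cells):
    output.append(left[0])   # matched load at x = 0 absorbs left-movers
    left = left[1:] + [0.0]  # left-movers advance toward the load
    reflected = right[-1]    # open far end reflects right-movers...
    right = [0.0] + right[:-1]
    left[-1] += reflected    # ...back into left-movers

# The load sees v/2 for exactly 2*cells steps (i.e. 2x/c), then zero.
print(output[0], output[2 * cells - 1], output[2 * cells])  # 5.0 5.0 0.0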
Static electrons are therefore to be viewed as trapped electromagnetic field energy. Because there is no variation in voltage in a static charged conductor, there is no electric drift current, and no resistance or net magnetic field from current, yet energy is still going at light speed.
Because we know a lot about the electron, namely its electric charge in interactions at different energy, its spin, and its magnetic dipole, we can use Heaviside's model of energy current to obtain a model for an electron: it's just Heaviside-Poynting energy current trapped in a loop by the only force that will do that, gravity. I discussed this in ten pages of articles in Electronics World, August 2002 and April 2003, which are both now cited on Google Scholar (despite the abuse from string theorists). This tells us the loop size is black-hole sized. This in turn allows a mechanism for LeSage gravity to be tested (although the calculations of the mechanism can also be done in another way that doesn't depend on an assumed black-hole-sized shield area for a fundamental particle). Maxwell had no idea that electricity is related to light in speed, or he would probably have grasped that electrons are spinning at light speed:
James Clerk Maxwell, Treatise on Electricity and Magnetism, 3rd ed., Article 574: "... there is, as yet, no experimental evidence to shew whether the electric current... velocity is great or small as measured in feet per second."

James Clerk Maxwell, Treatise on Electricity and Magnetism, 3rd ed., Article 769: "... we may define the ratio of the electric units to be a velocity... this velocity [of light, because light was the only thing Maxwell then knew of which had a similar speed, due to his admitted ignorance of the speed of electricity!] is about 300,000 kilometres per second."

So Maxwell was just guessing that he was modelling light, because he didn't guess what Heaviside knew later on (1875): that electricity is suspiciously similar to what Maxwell was trying to model as "light".
More important, a photon of over 2 electron rest masses in energy interacts with heavy nuclei (of high atomic number) by pair production. Hence a 1.022 MeV gamma ray with spin 1 can be converted into a 0.511 MeV electron of spin 1/2 and a 0.511 MeV positron of spin 1/2. Since a classical light ray is a variation in electromagnetic field, usually drawn as being half negative electric field and half positive, the direct causal model of pair production is the literal splitting or fission of a gamma ray by the curvature of spacetime in the strong field near an atomic nucleus. The two fragments gain potential energy from the field and become trapped by gravity. The wavelength of a gamma ray of >1 MeV is very small. (It's a tragedy that pair production was only discovered in 1932, well after the Bohring revolution, not in 1922 or before.)
Additional key evidence linking these facts directly to the Standard Model is that particles in the Standard Model don't have mass. In other words, elementary particles are like photons: they have real energy but are massless. The mass in the Standard Model is supplied by a mechanism, the Higgs field. This model is compatible with the Standard Model. Furthermore, predictions of particle masses are possible, as discussed above.
Page 49: Love gives strong arguments that forces arise from the exchange of real particles. Clearly, from my position, all attractive forces in the universe are due to recoil from shielded pressure. Two nuclear particles stick together in the nucleus because they are close enough to partly shield each other from the vacuum particles. If they are far apart, the vacuum particles completely fill the gap between them, killing the short-range forces completely. Gravity and electromagnetism are different in that the vector bosons don't interact or scatter off one another, but just travel in straight lines. Hence they simply cannot disperse into LeSage "shadows" and cancel out, which is why they only fall off by the inverse-square law, unlike material-carried short-range nuclear forces.
P. 51: Love quotes a letter from Einstein to Schrodinger written in May 1928: 'The Heisenberg-Bohr tranquilizing philosophy - or religion? - is so delicately contrived that, for the time being, it provides a gentle pillow for the true believer from which he cannot easily be aroused. So let him lie there.'
P. 52: "Bohr and his followers tried to cut off free enquiry and say they had discovered ultimate truth - at that point their efforts stopped being science and became a revealed religion with Bohr as its prophet." Very good. Note that the origin of Bohr's paranoid religion is Maxwell's classical equations (which say that a centripetally accelerated charge in an atom radiates continuously, so the charge spirals into the nucleus; a problem which Bohr was unable to resolve when Rutherford wrote to him about it in 1915 or so). Being unable to answer such simple questions, Bohr simply resorted to inventing a religion to make the questions a heresy. (He also wanted his name in lights for all time.)
P. 55: excellent quotation from Hinton! But note that the vortex theory of the atom was never applied to electrons; it was already heresy by the time the discovery of radioactivity "disproved" it.
Although the application of GR via 'cosmological constant' fiddles to the big bang has been a repeated failure of predictions for decades, as new data arise, the basic observed Hubble law of big bang expansion, nuclear reaction rates, etc., are OK. So only part of GR is found wanting!
P. 72: "In order to quantize charge, Dirac had to postulate the existence of magnetic monopoles." Love points out that magnetic monopoles have never been found in nature. Heaviside, not Maxwell, first wrote the equation div.B = 0, which is logical and only permits magnetic dipoles! Hence it is more scientific to search for UFOs than for magnetic monopoles.
P. 93: it is interesting that Love has the source for the origin of the crackpot claim that radioactive events have no cause as Gurney and Condon, 1929. I will get hold of that reference to examine in detail whether a satire can be made of their argument. But I suppose Schrodinger did that in 1935, with his cat paradox?
P. 94: Particles and radiation in the vacuum create the random triggers for radioactive decays. Some kind of radiation, or vacuum particles, triggers decays statistically. Love gives arguments for neutrinos and their antiparticles being involved in triggering radioactivity, but that would seem to me to only account for long half-lives, where there is a reasonable chance of an interaction with a neutrino, and not for short half-lives, where the neutrino/antineutrino flux in space is too small and other vacuum particles are more likely to trigger decays (vector bosons, or other particles in the quantum foam vacuum). The more stable a nuclide is, the less likely it is that an impact will trigger a decay, but due to chaotic collisions there is always some risk. I agree with Love that quantum tunnelling is not metaphysical (p. 95), but due to real vacuum interactions.
The problem is that to get a causal mechanism for radioactive decay triggering taken seriously, some statistical calculations and hopefully predictions are needed, and before you do that you might want to understand the masses of elementary particles and how the exact mass affects the half-life of the particle. Probably it is a resonance problem. I know the Standard Model does predict a lot of half-lives, but I've only studied radioactivity in nuclear physics so far, not particle physics in depth.
P. 99: "It is interesting ... when a philosopher ... attacked quantum field theory, the response was immediate and vicious. But when major figures from within physics, like Dirac and Schwinger spoke, the critics were silent." Yes, and they were also polite to Einstein when he spoke, but called him an old fool behind his back.
P. 106: O'Hara quotation: "Bandwagons have bad steering, poor brakes, and often no certificate of roadworthiness."
The vector boson radiation of QFT works by pushing things together. 'Caloric', the fluid heat theory, eventually gave way to two separate mechanisms, kinetic theory and radiation. This was after Prevost in 1792 suggested that constant temperature is a dynamic system, with emission in equilibrium with the reception of energy. The electromagnetic field energy exchange process is not treated with a causal mechanism in current QFT, which is the cause of all the problems. All attractive forces are things shielding one another and being pushed together by the surrounding radiation pushing inward where not shadowed, while repulsion is due to the fact that in the mutual exchange of energy between two objects which are not moving apart, the vector bosons are not redshifted, whereas those pressing in on the far sides are redshifted by the big bang, as they come from immense distances. I've a causal mechanism which works for each fundamental force, although it is still sketchy in places.
P. 119: "In the Standard Model, the electron and the neutrino interact via the weak force by interchanging a Z. But think about the masses ... Z is about 91 GeV". I think this argument of Love's is very exciting because it justifies the model of masses above: the mass-causing Higgs field is composed of Z particles in the vacuum. It's real.
P. 121: Matter is trapped electromagnetic field energy; this is also justified by the empirical electromagnetic data I've been writing about for a decade in EW.
Spin-spin interaction (the Pauli exclusion force?) is clearly caused by some kind of magnetic anti-alignment or pairing. When you drop two magnets into a box, they naturally pair up not end to end, but side by side, with the north poles pointing in opposite directions. This is the most stable situation. The same happens to electrons in orbits: they are magnets, so they generally pair up with opposite orientation to their neighbour. Hence Pauli's law for paired electrons.
P. 130: Vitally important, excellent quotation from Dirac about physics developing by big jumps when prejudices are overcome!

'When one looks back over the development of physics, one sees that it can be pictured as a rather steady development with many small steps and superimposed on that a number of big jumps.... These big jumps usually consist in overcoming a prejudice.' - P. A. M. Dirac, 'Development of the Physicist's Conception of Nature', in J. Mehra (ed.), The Physicist's Conception of Nature, D. Reidel Pub. Co., 1973.
RELATIONSHIP OF THE CAUSAL ELECTRIC FORCE FIELD MECHANISM TO THE GRAVITY MECHANISM AND THE MAGNETIC FORCE FIELD
It seems that the electromagnetic force-carrying radiation is also the cause of gravity, via particles which cause the mass of charged elementary particles.

The vacuum particles ("Higgs particles") that give rise to all mass in the Standard Model haven't been observed officially yet, and the official prediction of the energy of the particle is very vague, similar to the top quark mass, 172 GeV. However, my argument is that the mass of the uncharged Z-boson, 91 GeV, determines the masses of all the other particles. It works. The charged cores of quarks, electrons, etc., couple up (strongly or weakly) with a discrete number of massive trapped Z-bosons which exist in the vacuum. This mechanism also explains QED, such as the magnetic moment of the electron, 1 + alpha/(2Pi) magnetons.

Literally, the electromagnetic force-causing radiation (vector bosons) interacts with charged particle cores to produce EM forces, and with the associated "Higgs bosons" (gravitationally self-trapped Z-bosons) to produce the correct inertial masses and gravity for each particle.

The lepton and hadron masses are quantized, and I've built a model, discussed there and on my blog, which takes this model and uses it to predict other things. I think this is what science is all about. The mainstream (string theory, CC cosmology) is too far out, and unable to make any useful predictions.
As for the continuum: the way to understand it is through correcting Maxwell's classical theory of the vacuum. Quantum field theory accounts for electrostatic (Coulomb) forces vaguely with a radiation-exchange mechanism. In the LeSage mechanism, the radiation causing Coulomb's law causes all forces by pushing. I worked out the mechanism by which electric forces operate in the April 2003 EW article; attraction occurs by mutual shielding as with gravity, but is stronger due to the sum of the charges in the universe. If you have a series of parallel capacitor plates with different charges, each separated by a vacuum dielectric, the total (net) voltage needs to take into account the orientation of the plates.

The vector sum is the same as a statistical random walk (drunkard's walk): the total is equal to the average voltage between a pair of plates, multiplied by the square root of the total number (this allows for the angular geometry dispersion, not distance, because the universe is spherically symmetrical around us - thank God for keeping the calculation very simple! - and there is as much dispersion outward in the random walk as there is inward, so the effects of inverse-square law dispersions and concentrations with distance both exactly cancel out).
Gravity is the force that comes from a straight-line sum, which is the only other option than the random walk. In a straight line, the sum of charges is zero along any vector across the universe if that line contains an equal average number of positive and negative charges. However, it is equally likely that a straight radial line drawn at random across the universe contains an odd number of charges, in which case the average charge is 2 units (2 units is the difference between 1 negative charge and 1 positive charge). Therefore the straight-line sum has two options only, each with 50% probability: an even number of charges, and hence zero net result; or an odd number of charges, which gives 2 unit charges as the net sum. The mean for the two options is simply (0 + 2)/2 = 1 unit. Hence electromagnetism is the square root of the number of charges in the universe, times the weak-option force (gravity).
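The sqrt(N) scaling of the random-walk sum, which carries the whole electromagnetism/gravity strength ratio here, is easy to demonstrate numerically (a minimal sketch; the charge counts are arbitrary illustrative values):

import random

# RMS sum of N random +1/-1 unit charges grows as sqrt(N):
# the 'drunkard's walk' addition used in the argument above.
def rms_sum(n_charges, trials=2000):
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n_charges))
        total += s * s
    return (total / trials) ** 0.5

for n in (100, 400, 1600):
    print(f"N = {n:5d}: RMS sum = {rms_sum(n):6.1f}, sqrt(N) = {n ** 0.5:5.1f}")

On this argument, with something like 10^80 charges in the universe the random-walk (electromagnetic) sum would exceed the straight-line (gravity) sum by sqrt(10^80) = 10^40, the order of the observed strength ratio.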
Thus, electromagnetism and gravity are different ways that charges add up. Electric attraction is, as stated, simply a mutual blocking of EM "vector boson" radiation by charges, like LeSage gravity. Electric repulsion is an exchange of radiation. The charges recoil apart because the underlying physics in an expanding universe (with "redshifted", or at least reduced-energy, radiation pressing in from the outside, due to receding matter in the surrounding universe) means their exchange of radiation results in recoil away from one another (imagine two people firing guns at each other, for a simple analogy; they would recoil apart).

Magnetic force is apparently, as Maxwell suggested, due to the spins of the vacuum particles, which line up.
There is no such thing in the world as a charge with a mass.
1. Mass
No charges have masses: the masses come from the vacuum (the Higgs field, or whatever explanation you prefer). This is a fact according to the well-tested Standard Model. The mass you measure for the electron varies with its velocity, implying radiation resistance. Special relativity is just an approximation; general relativity is entirely different and more accurate, and allows absolute motion (i.e., in general relativity the velocity of light depends on the absolute coordinate system, because it is bent by the spacetime fabric, but special relativity ignores this). Quantum field theory shows that the vacuum particles look different to the observer depending on the state of motion of the observer. This actually provides the mechanism for the contraction and mass increase seen in the Michelson-Morley experiment (contraction) and in particle accelerators (mass increase). In order to explain the actual variation in mass, you need a vacuum spacetime fabric theory. Mass arises due to the work needed to contract a charge in the direction of motion as you accelerate it. It's physically squashed by the radiation resistance of the vacuum, and that's where the energy resides that is needed to accelerate it. Its mass increases because it gains extra momentum from this added electromagnetic energy, which makes the charge couple more strongly to the vacuum (Higgs or whatever) field particles, which provide inertia and gravity, hence mass.
2. Charge
Just as charges don't directly have mass (the mass arises from vacuum interactions), there is no such thing as an electric charge (as in Coulomb's law) by itself. Electric charge always exists with light-speed spin and with a dipole magnetic field. All electrons have spin and a magnetic moment. In addition, Coulomb's law is just an approximation: the electron core has an electric field strength about 137 times that implied by Coulomb's law. The nature of an electron is a transverse electromagnetic (Heaviside-Poynting) energy current trapped in a loop.
Science is not a belief system. The Maxwell div.E equation (or rather the Gauss electric field law, behind the Coulomb electric force "law") is wrong because electric charge increases in high-energy collisions. It is up by something like 7% in 90 GeV collisions between electrons. The reason for this is that the polarised charges of the vacuum shield over 99% of the core charge of the electron. Again, I've gone into this at http://feynman137.tripod.com/. So there is no exact Coulomb law; it's just a low-energy approximation.
David Tombe, sirius184@hotmail.com, has written a paper describing an intricate magnetic field mechanism, http://www.wbabin.net/science/tombe.pdf. It is unsatisfying in that it doesn't make any predictions, and it is probably wrong, yet it does contain some interesting and important insights and mathematics.
It's clear from quantum electrodynamics that his basic point is correct: the vacuum is full of charges. Normally people refuse simple models of rotating particles in the vacuum using some arm-waving principle like the principle of superposition, whereby the spin state of any particle is supposed to be indeterminate until it is measured. The measurement is supposed to collapse the wavefunction. However, Dr Thomas Love of California State University sent me a paper showing that the principle of superposition is just a statement of mathematical ignorance, because there are two forms of Schroedinger's equation (time-dependent and time-independent), and when a measurement is taken you are basically switching mathematical models. So superposition is just a mathematical model problem and is not inherent in the underlying physics. So the particles in the vacuum can have a real spin and motion even when not being directly observed.

So I agree that some kind of vacuum dynamics is crucial to understanding the physics behind Maxwell's equations. I agree that in a charged capacitor the vacuum charges between the plates will be affected by the electric field.
It seems to me that when a capacitor charges up, the time-variation in the electric current flowing along the capacitor plates causes the emission of electromagnetic energy sideways (like radio waves emitted from an aerial in which the applied current varies with time). Therefore, the energy of 'displacement current' can be considered electromagnetic radiation similar in principle to radio.

Maxwell's model says the changing electric field in a capacitor plate causes displacement current in the vacuum, which induces a charge on the other plate. In fact, the changing electric field in one plate causes a changing current in that plate, which implies charge acceleration, which in turn causes electromagnetic energy transmission to the other plate.

So I think Maxwell's equations cover up several intermediate physical mechanisms. Any polarisation of the vacuum may be a result of the energy transmission, not a cause of it.
I am very interested in your suggestion that you get a pattern of rotating charges in a magnetic field, and in the issues of gyroscopic inertia. I'll read your paper carefully before replying in more detail. Gyroscopes are good toys to play with. Because they resist changes to their plane of spin, if you let one fall while holding the axis in a pivoted way, so that it is forced to tilt in order to fall, it will appear to lose weight temporarily. What actually happens is that gravitational potential energy is used up doing work changing the plane of the gyroscope: part of the gravitational potential energy the gyroscope gains as it falls is being used simply to change the plane of the gyroscope's spin. You need to do work to change the plane of a spinning body, because the circular motion of the mass implies a centripetal acceleration.

So some of the gravitational work energy (E = Fs = mgs) is used up in simply changing the plane of the spin of the gyroscope, rather than causing the whole thing to accelerate downward at 9.8 m/s^2. Gravity can often appear to change because of energy conservation effects: light passing the sun is deflected by twice the amount you'd expect from Newton's law (for slow-moving objects), because light can't speed up (unlike a slow-moving object). Half the energy gained by a bullet passing the sun would be used in increasing the speed of the bullet and half in deflecting its direction; since light cannot speed up, the entire gravitational potential energy gained goes into deflection (hence twice the deflection implied by Newton's law).
"You must remember though that Kirchhoff derived the EM
> wave
equation using the exact same maths in 1857."
I think the maths is
botched because it doesn't correspond to any physics.
Real light
doesn't behave like Maxwell's light. You have to remember that
there's
radiation exchange between all the charges all the time. If I have
two
atoms separated by 1 metre, the charges are going to be
exchanging
energy not just between nearby charges (within each atom)
but with the
charges in the other atom. There is no mechanism to
prevent this. The
vector bosons causing forces take all conceivable
routes as Feynman showed
in the path integrals approach to quantum
field theory, which is now
generally recognised as the easiest to deal
with. From
http://feynman137.tripod.com/:
It seems that the electromagnetic force-carrying radiation is also the cause of gravity, via particles which cause the mass of charged elementary particles.

The vacuum particles ("higgs particle") that give rise to all mass in the Standard Model haven't been observed officially yet, and the official prediction of the energy of the particle is very vague, similar to the Top Quark mass, 172 GeV. However, my argument is that the mass of the uncharged Z-boson, 91 GeV, determines the masses of all the other particles. It works. The charged cores of quarks, electrons, etc., couple up (strongly or weakly) with a discrete number of massive trapped Z-bosons which exist in the vacuum. This mechanism also explains QED, such as the magnetic moment of the electron, 1 + alpha/(2.Pi) magnetons.

Literally, the electromagnetic force-causing radiation (vector bosons) interacts with charged particle cores to produce EM forces, and with the associated "higgs bosons" (gravitationally self-trapped Z-bosons) to produce the correct inertial masses and gravity for each particle.

The lepton and hadron masses are quantized, and I've built a model, discussed there and on my blog, which takes this model and uses it to predict other things. I think this is what science is all about. The mainstream (string theory, CC cosmology) is too far out, and unable to make any useful predictions.
As for the continuum: the way to understand it is through correcting Maxwell's classical theory of the vacuum. Quantum field theory accounts for electrostatic (Coulomb) forces vaguely with a radiation-exchange mechanism. In the LeSage mechanism, the radiation causing Coulomb's law causes all forces by pushing. I worked out the mechanism by which electric forces operate in the April 2003 EW article; attraction occurs by mutual shielding as with gravity, but is stronger due to the sum of the charges in the universe. If you have a series of parallel capacitor plates with different charges, each separated by a vacuum dielectric, the total (net) voltage needs to take into account the orientation of the plates.

The vector sum is the same as a statistical random walk (drunkard's walk): the total is equal to the average voltage between a pair of plates, multiplied by the square root of the total number (this allows for the angular geometry dispersion, not distance, because the universe is spherically symmetrical around us - thank God for keeping the calculation very simple! - and there is as much dispersion outward in the random walk as there is inward, so the effects of inverse-square-law dispersions and concentrations with distance both exactly cancel out).
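The square-root scaling of the random-walk sum is easy to check by simulation. A minimal Monte Carlo sketch (Python; the unit voltage and trial counts are arbitrary illustrative values):

    import math, random

    def rms_net_voltage(N, V=1.0, trials=2000):
        # Sum N contributions of random sign and magnitude V; return the RMS net value.
        total = 0.0
        for _ in range(trials):
            s = sum(V if random.random() < 0.5 else -V for _ in range(N))
            total += s * s
        return math.sqrt(total / trials)

    for N in (100, 400, 1600):
        print(N, rms_net_voltage(N), math.sqrt(N))   # RMS net ~ V * sqrt(N)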
Gravity is the force that comes from a straight-line sum, which is the only other option than the random walk. In a straight line, the sum of charges is zero along any vector across the universe, if that line contains an equal average number of positive and negative charges. However, it is equally likely that a straight radial line drawn at random across the universe contains an odd number of charges, in which case the average net charge is 2 units (2 units is the difference between 1 negative charge and 1 positive charge). Therefore the straight-line sum has only two options, each with 50% probability: an even number of charges, hence zero net result, and an odd number of charges, which gives 2 unit charges as the net sum. The mean for the two options is simply (0 + 2)/2 = 1 unit. Hence electromagnetism is the square root of the number of charges in the universe, times the weak-option force (gravity).

Thus, electromagnetism and gravity are different ways that charges add up. Electric attraction is, as stated, simply a mutual blocking of EM "vector boson" radiation by charges, like LeSage gravity. Electric repulsion is an exchange of radiation. The charges recoil apart because the underlying physics in an expanding universe (with "redshifted" or at least reduced-energy radiation pressing in from the outside, due to receding matter in the surrounding universe) means their exchange of radiation results in recoil away from one another (imagine two people firing guns at each other, for a simple analogy; they would recoil apart).

Magnetic force is apparently, as Maxwell suggested, due to the spins of the vacuum particles, which line up.
Consider the most important and practical problem.

1. A helicopter works by spinning blades which push down the medium around it, creating an upward reaction force (lift).

2. From quantum field theory and general relativity, there is a vacuum field, spacetime fabric or Dirac sea/ether.

Is it possible at the fundamental particle level to use magnetism to align electrons or protons like tiny helicopter blades, and then use them in the same way as helicopter blades, to push the spacetime fabric and create a reaction? This would not violate Newton's 3rd law, because the force the machine experiences would result in an equal and opposite reaction on the spacetime fabric (in the same way that a helicopter forces air downwards in order to recoil upwards against gravity). If this is possible, and if the required magnetic fields were not too large, it could probably be engineered into a practical device once the physical mechanism was understood properly.

(However, electromagnetic vacuum radiation does not diffuse in all directions like the air downdraft from a helicopter. The downdraft from a helicopter doesn't knock people down because it dissipates in the atmosphere over a large area, until it is trivial compared to the normal 14.7 psi air pressure. If it were possible to create something using vacuum force radiation in place of air with the helicopter principle, anyone standing directly underneath it would - regardless of the machine's altitude - get the full weight of the extra downward radiation, just as if the helicopter had landed on him. Such a flying device - if it were possible and made - would leave a trail of havoc on the ground below, with the same diameter as the machine, just like a steam roller. So it would not really be practical. The minimum amount of energy needed would be basically the same, because the gravitational work energy is unchanged.)
By changing the axis of rotation of a gyroscope you can temporarily create a reaction force against the vacuum, but you pay for it later, because the inertial resistance becomes momentum as it begins to accelerate, and you then have to put a lot of energy in to return it to the state it was in before. If you rotate the spin axis of a spinning gyroscope through 360 degrees, the net force you experience is zero, but while you are doing this you experience forces in all directions.

So it is impossible to get a motion in space from ordinary gyroscopes alone, but it might be possible in combination with magnetism, if that would help get a directional push against the Dirac sea of the vacuum. This might be useful for space vehicles, because the temperature in space is generally low enough for superconductivity, and hence for the creation of intense magnetic fields very cheaply.
"I suspect that we will both agree on the following points.
Correct
me if I'm wrong.
(1) There is a dielectric medium pervading what
the
establishment consider to be empty vacuum.
(2) Electromagnetic
waves (TEM's) propagate in this medium at
the speed of light and
transfer energy while doing so.
(3) Cables act as wave guides for
TEM's.
Are we in agreement about these two points?"
My
comments:
(Aside: The "dielectric medium" is accepted to be the
quantum field theory
vacuum in modern physics. It is just convention
not to call it ether or
"Dirac sea", and to call it vacuum instead.
This is a bit silly, because it
exists in the air as well as in vacuum.
It is not a fight with the
establishment to show that SR is false,
because Einstein's GR of 1915
already says as much. Einstein admitted
that SR is wrong in 1916 and 1920,
because the spacetime fabric of GR
has absolute coordinates, you cannot
extend SR to accelerations, you
must abandon it and accept general
covariance of the laws of nature
instead  which is entirely different from
relativity. Obviously there
are plenty of physicists/authors/teachers who
don't know GR and who
defend SR, but absolute accelerations in GR proves
they are
ignorant.)
Maxwell's model for electromagnetic radiation is bogus. Quantum theory (Planck's and Bohr's) contradicted Maxwell's model.

The true dynamics are like this: forces are caused by continuous radiation exchange between all charges (http://feynman137.tripod.com/). Light waves are caused by asymmetries in the normal continuous radiation exchange between charges. Such asymmetries occur when you accelerate a charge. This is why light appears to take all possible routes (path integrals) through space, etc.

The normal radiation exchange has no particular oscillatory frequency. When you emit radio waves, you're creating a net periodic force variation in the existing exchange of vacuum radiation between the charges in the transmitter aerial and any receiver aerial. The same occurs whether you are emitting radio waves by causing electrons to accelerate in an aerial, or by causing individual electrons in atoms to change energy levels in a jump.
http://electrogravity.blogspot.com/2006/01/solutiontoproblemwithmaxwells.html. Science doesn't progress by admiring a package or rejecting it, but by taking it to pieces and understanding what is useful and correct, and what is less useful and wrong. Catt thinks that the correct way to approach science is to decide whether personalities are crackpots or geniuses, isolating the "genius" rather than building upon the work. This whole classification system is quite interesting, but contains no science whatsoever. Catt is orthodox in wanting the political type of kudos from innovation: praise, money, prizes, fame and all the rest of it. Science, however, seeks as its aim one thing alone: understanding.
Sadly, Catt thinks he can get away with anything because he is a genius for some computer design or some principle that he stumbled on 30 years ago. Similarly, Brian Josephson's brain went off** when he won a Nobel Prize and he became a mind-matter unificator; see the abstract and unscientific (non-mathematical, non-physical) "content" of Josephson's paper:
Catt and Forrest are wrong to say that there are "geometric details" separating the conceptual capacitor from the conceptual transmission line.
There is no defined geometry for a "capacitor" besides two conductors of any shape separated by an insulator such as vacuum.
Catt, Davidson and Walton refuse to use down-to-earth language, which means nobody will ever know what they are talking about.
Capacitors don't have to be pie-shaped or circular discs; they can be any shape you can imagine. They can be two wires or two plates; the plates can be insulated and then rolled up into a "swiss roll" to create the drum-type capacitors with high capacitance. The physics is not altered in basic principle. Two wires are a capacitor: connect them simultaneously to the two terminals of a battery, and they charge up as a capacitor. All the talk of the energy having to change direction by 90 degrees when entering the capacitor comes from the conventional symbol used in circuit design, but it doesn't make any difference if the angle is zero. Catt introduces red herrings with "pie shaped" or disc-shaped capacitor plates and with the 90-degrees angle issue.
_________ This is a capacitor () with the 90-degree direction change.

Another capacitor:

_________________
          __________________

The overlapped part of the two-wire system above is a capacitor without the 90-degree direction change. (This one will annoy Ivor!)
I described and solved the key error in Catt's anomaly which deals with the capacitor=transmission line issue, in the March 2005 issue of Electronics World.
Catt's interest in science is limited to simple algebra, i.e., non-mathematical stuff: stuff without Heaviside's operational calculus, without general relativity, and without any quantum mechanics, let alone path integrals or other quantum field theory.
He is not interested in physics because, as seen above, he dismisses "ideas" as personal pet theories. He would have done the same if Galileo or whoever had come along; Catt simply has no interest. Of course, I should not complain about Catt's false statement that I (not Catt) am confused; I should just put up with him making offensive false statements. Kepler had a theory that the planets are attracted to the sun by magnetism, with the earth's magnetism as "evidence" in addition to Kepler's own laws. So if Newton had been Kepler's assistant, instead of arriving on the scene much later, he could have been accused by Kepler of confusing Kepler's package.

(If anyone doesn't like the references to Galileo and Kepler as being too conceited, then think of someone more suitable, like Aristarchus, who had the solar system with circular orbits. If he had had an associate of Kepler's skill, he would have been able to dismiss Kepler's ellipses as imperfect, a confusion of Aristarchus' scheme with nonsense.)
There is current flowing in a wire provided there is a voltage variation along the wire to cause the flow.
In the plates of the capacitor, the incoming energy causes a field rise along the plate of 0 to say 9 volts initially. (Once the capacitor is half charged up, the rise is only from 4.5 to 9 volts, so the variation in the step voltage is half.)
If the rise time of this 0 to 9 volts is 1 ns, then the distance along the capacitor plate over which the voltage varies from 0 to 9 volts is ct = 30 cm. Catt ignores this, but you can see that the physical size of the step front is appreciable in comparison to the size of a capacitor plate (even if it is a fat swiss roll). So you can write the field E = 9/0.3 = 30 V/m along the plate. This causes an electron drift current. In addition, from the time-variation aspect in the capacitor plate, the increase in electric field from 0 to 30 V/m over the time of 1 ns causes the current to increase from 0 to its peak value, before dropping as the field drops from 30 V/m to 0 V/m when the back part of the logic step (with steady 9 volts, hence E = 0 V/m) arrives.
What is important is to note that the varying electric current makes the capacitor plates behave like radio transmission aerials. The power radiated transversely from a time-varying current (i.e., an accelerated electron), in watts, for a non-relativistic (slowly drifting) charge is simply P = (e^2)(a^2)/[6(Pi).(Permittivity).c^3], where e is the electric charge, a is the acceleration, and c is the velocity of light. The radiation occurs perpendicular to the direction of the acceleration.
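A quick numerical check of the figures above (Python; the acceleration in the last step is an arbitrary illustrative assumption, since the text does not specify one):

    import math

    c = 2.998e8        # velocity of light, m/s
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    e = 1.602e-19      # electron charge, C

    rise_time = 1e-9              # 1 ns rise time, as in the text
    front = c * rise_time         # step-front length: ~0.30 m, i.e. ct = 30 cm
    E_field = 9.0 / front         # ~30 V/m along the plate
    print(front, E_field)

    a = 1e15                      # m/s^2, assumed acceleration for illustration
    P = (e**2) * (a**2) / (6 * math.pi * eps0 * c**3)
    print(P)                      # watts radiated per accelerated electron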
This is what provably creates the current and induces the charge in the other plate: http://electrogravity.blogspot.com/2006/01/solutiontoproblemwithmaxwells.html
"Displacement current" is radio. This is hard, proved fact. It disproves the entire approach of Maxwell, which was to falsely claim there is dE/dt causes a current, when the actual mechanism is that the current variation di/dt (caused by dE/dt) accelerates charge causing electromagnetic radiation across the vacuum.
Maxwell: capacitor charges because dE/dt causes displacement current.
Fact: capacitor charges because dE/dt causes di/dt which causes electrons in the plate to accelerate and emit electromagnetic radiation transversely (to the other plate).
This does not disprove the existence of vacuum charges which may be polarised by the field. What it does show is the mechanism for what polarises the charges in the vacuum: light-speed radiation.
Maxwell's model of electromagnetic radiation, which consists of his equation for "displacement current" added to Faraday's law of induction, is long known to be at odds with quantum theory, so I'm not going to say any more about it.
The great danger in science is where you get hundreds of people speculating without facts, and then someone claims to have experimentally confirmed one of the speculations. Hertz claimed to have proved the details of Maxwell's model by discovering radio. Of course, Faraday had predicted radio without Maxwell's theory back in 1846, when Maxwell was just a small boy. See Faraday's paper "Thoughts on Ray Vibrations", 1846.
Take the +9 volt logic step entering and flooding a transmission line at light speed.
At the front end, the step rises from 0 volts to 9 volts. Thereafter, the voltage is 9 volts.
Hence, there is no electric current - at least, there is no electric field mechanism for the electrons to drift along: the electrons aren't gaining any electric potential energy, so they can't accelerate up to any drift speed. Electric current may be caused, however, by the effect of the magnetic field from the opposite conductor of the transmission line. http://electrogravity.blogspot.com/2006/01/solutiontoproblemwithmaxwells.html discusses the mechanism.
Charge is not the primitive: trapped light-speed Poynting-Heaviside energy constitutes charge. I proved this in the April 2003 EW. Don't believe that the superposition principle of quantum mechanics magically prevents real electron spin when you are not measuring the electron: the collapse of the wavefunction is a mathematical artifact arising from the distinction between the two versions of Schroedinger's equation, time-dependent and time-independent:
http://electrogravity.blogspot.com/2006/03/copiesofmycommentstodrdantass.html:
Dr Thomas Love of California State University last week sent me a preprint, "Towards an Einsteinian Quantum Theory", where he shows that the superposition principle is a fallacy, due to the two versions of the Schroedinger equation: a system described by the time-dependent Schroedinger equation isn't in an eigenstate between interactions.

"The quantum collapse occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction, we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics; it is not inherent in the physics."
Electric charge is only detected via its electric field effect. The quantization of charge into electron-size units (and sub-units for quarks, which can never be observed by themselves, because the energy needed to separate a quark exceeds that needed to produce a new pair of quarks from the Dirac sea/ether) has a mechanism.
It is curious that a gamma ray with 1.022 MeV has exactly the same electric field energy as an electron plus a proton. Dirac's quantum field theory mechanism for pair-production, which is the really direct experimental physics evidence for E = mc^2 (discovered by Anderson in 1932, passing gamma rays through lead), is that the vacuum is full of virtual electrons, and a gamma ray with at least 1.022 MeV of energy knocks a virtual electron out of the vacuum. The energy it is given makes it a real electron, while the "hole" it leaves in the ether is a positive charge, a positron.

Dirac's equation is the backbone of quantum field theory, but his ether process is just conceptual. Pair-production only occurs when a gamma ray enters the strong field near a nucleus with high atomic number, like lead. This of course is one reason why lead is used to shield gamma rays with energy over 1 MeV, such as those from Co-60. (Gamma rays from Cs-137 are on average only 0.66 MeV, so they are shielded by Compton scattering, which just depends on electron abundance in the shield, not on the atomic number. Hence for gamma rays below 1 MeV, shielding depends on getting as many electrons between you and the source as possible, while for gamma rays above 1 MeV it is preferable to take advantage of pair production, using the nuclear properties of elements of high atomic number like lead. The pairs of electrons and positrons are stopped very easily because they are charged, unlike Compton-scattered gamma rays.)
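The threshold and the two shielding regimes quoted above can be checked directly: 1.022 MeV is just twice the 0.511 MeV electron rest energy, the Cs-137 line (0.66 MeV) falls below it, and the two Co-60 lines (1.17 and 1.33 MeV, standard values) fall above it. A small sketch (Python):

    m_e_c2 = 0.511                 # electron rest energy, MeV
    threshold = 2 * m_e_c2         # pair-production threshold: 1.022 MeV
    print(threshold)
    for E in (0.66, 1.17, 1.33):   # Cs-137 and Co-60 gamma lines, MeV
        regime = "pair production possible" if E > threshold else "Compton scattering only"
        print(E, regime)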
Dirac's sea is naive in the sense that the vacuum contains many forms of radiation mediating different forces, and not merely virtual electrons. You can more easily deal with pair-production by pointing out that a gamma ray is a cycle of electromagnetic radiation consisting 50% of negative electric field and 50% of positive.

A strong field deflects radiation, and 1.022 MeV is the threshold required for the photon to break up into two opposite "charges" (opposite electric field portions). Radiation can be deflected into a curved path by gravity. The black hole radius, 2GM/c^2, is smaller than the Planck size for an electron mass. Conservation of momentum of the radiation is preserved as the light-speed spin. Superposition/wavefunction collapse is a fallacy introduced by the mathematical discontinuity between the time-dependent and time-independent forms of Schroedinger's equation when taking a measurement on a system.
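The claim that the 2GM/c^2 radius for an electron mass lies far below the Planck size is easy to verify numerically (Python, standard constants):

    G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8                 # velocity of light, m/s
    m_e = 9.109e-31             # electron mass, kg
    planck_length = 1.616e-35   # m

    r = 2 * G * m_e / c**2
    print(r, planck_length, r < planck_length)   # ~1.35e-57 m, vastly below the Planck size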
In the Heaviside light-speed energy current, the electric field is equal in magnitude to c times the magnetic field, E = cB, where each term is a vector orthogonal to the others. We already know from Feynman-Schwinger renormalisation that the measured charge and mass of the electron are smaller than the effective core values, which are shielded by the polarisation of charges around the core in Dirac's vacuum. The correct value of the magnetic moment of the electron arises from this model. You cannot have charge without a magnetic dipole moment, because the electron is a Heaviside light-speed negative electric field energy current trapped in a small loop. The electric field from this is spherically symmetric, but the magnetic field lines form a dipole, which is the observed fact. Fundamental charged particles have a magnetic moment in addition to "electric charge".
Comparison of a Heuristic (Trial-and-Error Model) Spin Foam Vacuum to String Theory
Lee Smolin, in recent Perimeter Institute lectures, Introduction to Quantum Gravity, showed how to proceed from Penrose's spin network vacuum to general relativity by a sum over histories, with each history represented geometrically by a labelled diagram for an interaction. This gets from a quantum theory of gravity (a spin foam vacuum) to a background-independent version of general relativity, which dispenses with the restricted/special relativity used as a basis for general relativity by string theorists (string theory being the alternative to the spin foam vacuum explored by Smolin and others). See http://christinedantas.blogspot.com/2006/02/handofmasterparts1and2.html
It seems that there are two distinct mechanisms for forces to be propagated via quantum field theory. The vacuum propagates long-range forces (electromagnetism, gravity) by radiation exchange, as discussed in earlier papers kindly hosted by Walter Babin, while short-range forces (strong and weak nuclear interactions) are due to the pressure of the spin foam vacuum. The vacuum is viewed below by analogy to an ideal gas, in which there is a flux of shadowed radiation and also dispersed particle-caused pressure.
The radiation has an infinite range and its intensity decreases through geometric divergence. The material pressure of the spin foam vacuum is like that of an ideal gas, with a small mean free path, and produces an attractive force with a very short range (like air pressure pushing a suction plunger against a surface, if the gap is too small to allow air to fill it). The probabilistic nature of quantum mechanics is then due to the random impacts from virtual particles in the vacuum on a small scale, which statistically average out on a large scale. This model predicts the strength of gravity from established facts, and the correct mechanism for force unification at high energy, which does not require supersymmetry: http://nigelcook0.tripod.com/, http://electrogravity.blogspot.com/2006/02/heuristicexplanationofshortranged_27.html
Conservation of energy for all the force-field mediators would imply that the fall in the strength of the strong force is accompanied by the rise in the strength of the electroweak force (which increases as the bare charge is exposed when the polarised vacuum shield breaks down in high-energy collisions); this implies that the forces unify exactly, without needing supersymmetry (SUSY). For the strength of the strong nuclear force at low energies (i.e., at room temperature):
Heisenberg's uncertainty principle says

pd = h/(2.Pi)

where p is the uncertainty in momentum and d is the uncertainty in distance. This comes from his imaginary gamma-ray microscope, and is usually written as a minimum (instead of with "=" as above), since there will be other sources of uncertainty in the measurement process.

For light-wave momentum p = mc,

pd = (mc)(ct) = Et, where E is the uncertainty in energy (E = mc^2) and t is the uncertainty in time. Hence:

Et = h/(2.Pi)
t = h/(2.Pi.E)
d/c = h/(2.Pi.E)
d = hc/(2.Pi.E)

This result is used to show that an 80 GeV W or Z gauge boson will have a range of about 10^-17 m. So it's OK.
Now, E = Fd implies

d = hc/(2.Pi.E) = hc/(2.Pi.Fd)

Hence

F = hc/(2.Pi.d^2)

This force is 137.036 times higher than Coulomb's law for unit fundamental charges. (Notice that in the last sentence I've suddenly gone from thinking of d as an uncertainty in distance to thinking of it as the actual distance between two charges; but the gauge boson has to travel that distance to cause the force anyway.)
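Both numbers in this derivation can be checked directly: the ratio of F = hc/(2.Pi.d^2) to the Coulomb force between unit charges is 1/alpha = 137.036 (independent of d, which cancels), and d = hc/(2.Pi.E) gives the quoted range for an 80 GeV boson. A sketch (Python):

    import math

    h = 6.626e-34       # Planck's constant, J s
    c = 2.998e8         # velocity of light, m/s
    e = 1.602e-19       # unit charge, C
    eps0 = 8.854e-12    # vacuum permittivity, F/m

    d = 1e-15           # metres; arbitrary, since it cancels in the ratio
    F_heisenberg = h * c / (2 * math.pi * d**2)
    F_coulomb = e**2 / (4 * math.pi * eps0 * d**2)
    print(F_heisenberg / F_coulomb)          # ~137.0

    E_boson = 80e9 * e                       # 80 GeV in joules
    print(h * c / (2 * math.pi * E_boson))   # ~2.5e-18 m, of order 10^-17 m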
Clearly what's physically happening is that the true force is 137.036 times Coulomb's law, so the core charge is 137.036 times the observed charge. The observed value is reduced by the correction factor 1/137.036 because most of the core charge is screened out by polarised charges in the vacuum around the electron core:
"... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electronpositron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies)."  arxiv hepth/0510040, p 71.
The unified Standard Model force is F = hc/(2.Pi.d^2)
That's the superforce at very high energies, in nuclear physics. At lower energies it is shielded by the factor 137.036 for photon gauge bosons in electromagnetism, or attenuated by exp(-d/x) for vacuum attenuation by short-ranged nuclear particles, where x = hc/(2.Pi.E).
All the detailed calculations of the Standard Model really model the vacuum processes for the different types of virtual particles and gauge bosons. The whole mainstream way of thinking about the Standard Model is in terms of energy. What is really happening is that at higher energies you knock particles together harder, so their protective shield of polarised vacuum particles gets partially breached, and you experience a stronger force mediated by different particles.
Quarks have asymptotic freedom because the strong force and the electromagnetic force cancel where the strong force is weak, at around the distance of separation of quarks in hadrons. That's because of interactions with the virtual particles (fermions, quarks) and the field of gluons around quarks. If the strong nuclear force fell off by the inverse-square law with an exponential quenching, then hadrons would have no volume, because the quarks would sit on top of one another (the attractive nuclear force being much greater than the electromagnetic force).
It is well known that you can't isolate a quark from a hadron, because the energy needed is more than that which would produce a new pair of quarks. So as you pull a pair of quarks apart, the force needed increases, because the energy you are using is going into creating more matter. This is why the quark-quark force doesn't obey the inverse-square law. There is a pictorial discussion of this in a few books (I believe it is in "The Left Hand of Creation", which says the heuristic explanation of why the strong nuclear force gets weaker when the quark-quark distance decreases is to do with the interference between the clouds of virtual quarks and gluons surrounding each quark). Between nucleons, neutrons and protons, the strong force is mediated by pions and simply decreases with increasing distance by the inverse-square law times an exponential term, exp(-x/d), where x is distance and d = hc/(2.Pi.E) from the uncertainty principle.
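Applying the same d = hc/(2.Pi.E) range formula to the pion-mediated nucleon-nucleon force (taking the charged pion rest energy, 139.6 MeV, a standard figure not given in the text) yields roughly the size of a nucleon:

    hbar_c = 197.33   # hbar*c in MeV*fm, so that d = hc/(2*pi*E) = hbar_c/E
    E_pion = 139.6    # charged pion rest energy, MeV
    print(hbar_c / E_pion)   # ~1.4 fm: the expected range of the nucleon-nucleon force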
The mainstream M-theory of strings extrapolates the well-tested Standard Model into the force-unification domain of 10^16 GeV and above, using unobserved extra dimensions and unobserved supersymmetric (SUSY) partners to the normal particles we detect. The Standard Model achieved a critical confirmation with the detection of the short-ranged neutral Z and charged W particles at CERN in 1983. This confirmed the basic structure of electroweak theory, in which the electroweak forces have a symmetry and long range above 250 GeV, which is broken by the Higgs field mechanism at lower energies, where only the photon (out of the electroweak force mediators: photon, Z, W+ and W-) continues to have infinite range.
In 1995, string theorist Edward Witten used Mtheory to unify 10 dimensional superstring theory (including SUSY) with 11 dimensional supergravity as a limit. In the April 1996 issue of Physics Today Witten wrote that ‘String theory has the remarkable property of predicting gravity’. Sir Roger Penrose questioned Witten’s claim on page 896 of Road to Reality, 2004: ‘in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory’.
The other uses of string theory are in providing a quantum gravity framework (it allows a spin-2, unobserved graviton-type field, albeit without any predictive dynamics), and SUSY allows unification of nuclear and electromagnetic forces at an energy of 10^16 GeV (way beyond any possible high-energy experiment on Earth).
In summary, string theory is not a scientific predictive theory, let alone a tested theory. The spin foam vacuum extension of quantum field theory, as currently discussed by Smolin and others, is limited to the mathematical connection between the framework of a quantum field theory and general relativity. I think it could be developed into a predictive unified theory very easily, as the components in this and earlier papers predict new phenomena and are also consistent with those theories of modern physics which have been tested successfully. There is no evidence that string theory is predictive of anything that could be objectively checked. Peter Woit of Columbia University has come up against difficulty in making the string theory mainstream listen to objective criticism of the scientific failures of string theory; see: http://www.math.columbia.edu/~woit/arxivtrackbacks.html.
The string theory approach to QFT (quantum gravity, superforce unification, SUSY) is extremely illucid and disconnected from reality. I've quoted a section from an old (1961) book on 'Relativistic Electron Theory' at http://electrogravity.blogspot.com/2006/02/standardmodelsaysmasshiggsfield.html:
'The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p. 360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy ... It will be apparent that a hole in the negative energy states is equivalent to a particle with the same mass as the electron ... The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p. 491, 1933].
'Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled 'negative energy sea' the complete theory (hole theory) can no longer be a singleparticle theory.
'The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.
'In a similar way, it can be shown that an electron acquires infinite inertia (selfenergy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].
'For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the 'crowded' vacuum is to change these to new constants e' and m', which must be identified with the observed charge and mass. ... If these contributions were cut off in any reasonable manner, m' - m and e' - e would be of order alpha ~ 1/137. No rigorous justification for such a cutoff has yet been proposed.
'All this means that the present theory of electrons and fields is not complete. ... The particles ... are treated as 'bare' particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example ... the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger's coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001...'
This kind of clear-cut physics is more appealing to me than string theory about extra dimensions and such like. There is some evidence that the masses of the known particles can be described by a two-step mechanism. First, virtual particles in the vacuum (most likely trapped neutral Z particles, of 91 GeV mass) interact with one another by radiation to give rise to mass (a kind of Higgs field). Secondly, real charges can associate with a trapped Z particle either inside or outside the polarised veil of virtual charges around the real charge core: http://electrogravity.blogspot.com/2006/02/standardmodelsaysmasshiggsfield.html
The polarised charge around a trapped Z particle (OK, it is neutral overall, but so is the photon, and the photon's EM cycle is half positive electric field and half negative in Maxwell's model of light, so a neutral particle still has electric fields when considering the close-in picture) gives a shielding factor of 137, with an additional factor of twice Pi for some sort of geometric reason, possibly connected to spin/magnetic polarisation. If you spin a loop, as seen edge-on the exposure it receives per unit area falls by a factor of Pi, compared to a non-spinning cylinder, and we are dealing with exchange of gauge bosons, like radiation, to create forces between spinning particles. The electron loop has spin 1/2, so it rotates 720 degrees to cover a complete revolution, like a Mobius strip loop. Thus it has a reduction factor of twice Pi as seen edge-on, and the magnetic alignment which increases the magnetic moment of the electron means that the core electron and the virtual charge in the vacuum are aligned side-on.

Z-boson mass: 91 GeV. Muon mass (an electron with a Higgs boson/trapped Z-boson inside its veil): 91 GeV/(2.Pi.137) = 105.7 MeV. Electron mass (an electron with a Higgs boson/trapped Z-boson outside its veil): 91 GeV/[(1.5).(137).(2.Pi.137)] = 0.51 MeV.
Most hadron masses are describable by (0.511 MeV).(137/2).n(N + 1) = 35n(N + 1) MeV, where n and N are integers, with a similar sort of heuristic explanation (as yet incomplete in details): http://feynman137.tripod.com/
Supersymmetry can be completely replaced by physical mechanism and energy conservation of the field bosons. Supersymmetry is not needed at all, because the physical mechanism by which nuclear and electroweak forces unify at high energy automatically leads to perfect unification, due to conservation of energy: as you smash particles together harder, they break through the polarised veil around the cores, exposing a higher core charge, so the electromagnetic force increases. My calculation at http://electrogravity.blogspot.com/2006/02/heisenbergsuncertaintysayspdh2.html suggests that the core charge is 137 times the observed (long-range) charge of the electron. However, simple conservation of potential energy for the continuously-exchanged field of gauge bosons shows that this increase in electromagnetic field energy must be compensated for by a reduction in other fields as collision energy increases. This will reduce the core charge (and the associated strong nuclear force) from 137 times the low-energy electric charge, compensating for the rising amount of energy carried by the electromagnetic field of the charge at long distances.

Hence, in sufficiently high-energy collisions, the unified force will be somewhere intermediate in strength between the low-energy electromagnetic force and the low-energy strong nuclear force. The unified force will be attained where the energy is sufficient to completely break through the polarised shield around the charge cores, possibly at around 10^16 GeV as commonly suggested. A proper model of the physical mechanism would get rid of the Standard Model's problems of unification (which are due to the incomplete approximations used to extrapolate to extremely high energy): http://electrogravity.blogspot.com/2006/02/heuristicexplanationofshortranged_27.html

So I don't think there is any scientific problem with sorting out force unification without SUSY in the Standard Model, or with including gravity (http://feynman137.tripod.com/).
The problem lies entirely with the mainstream preoccupation with string theory. Once the mainstream realises it was wrong, instead of admitting it was wrong, it will just use its preoccupation with string theory as the excuse for having censored alternative ideas.

The problem is whether Dr Peter Woit can define crackpottery so as both to include mainstream string theory and to exclude some alternatives which look far-fetched or crazy but have a more realistic chance of being tied to facts, and of making predictions which can be tested. With string theory, Dr Woit finds scientific problems. I think the same should be true of alternatives, which should be judged on scientific criteria. The problem is that the mainstream stringers don't use scientific grounds to judge either their own work or alternatives. They say they are right because they are a majority, and alternatives are wrong because they are in a minority.
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.
I've just updated a previous post here with some comments on the distinction between the two aspects of the strong nuclear force: that between quarks (where the physics is very subtle, with interactions between the virtual quark field and the gluon field around quarks leading to a modification of the strong nuclear force and to the asymptotic freedom of quarks within hadrons), and that between one nucleon and another.

Nucleons are neutrons and protons, each containing three quarks, and the strong nuclear force between nucleons behaves as if neutrons are practically identical to protons (the electric charge is an electromagnetic force effect). Between individual quarks, the strong force is mediated by gluons and is more complex, due to the screening effects of the colour charges of the quarks, but between nucleons it is mediated by pions, and is very simple, as my previous post shows.
Consider why the nuclear forces are short-ranged, unlike gravity and electromagnetism. The Heisenberg uncertainty principle in its time-energy form sets a limit to the amount of time a certain amount of energy (that of the force-mediating particles) can exist. Because of the finite speed of light, this time limit is equivalent to a distance limit, or range. This is why nuclear forces are short-ranged. Physically, the long-range forces (gravity and electromagnetism) are radiation-exchange effects which aren't individually attenuated with distance, but just undergo geometric spreading over wider areas due to divergence (giving rise to the inverse-square law).

But the short-ranged nuclear forces are physically equivalent to a gas-type pressure of the vacuum. The 14.7 pounds/square inch air pressure doesn't push you against the walls, because air exists between you and the walls, and disperses kinetic energy as pressure isotropically (equally in all directions) due to the random scattering of air molecules. The range over which you can be attracted to a wall by air pressure is around the average distance air molecules travel between random scattering impacts, which is the mean free path of an air molecule, about 0.1 micron (micron = micrometre).
This is why, to get 'attracted' to a wall using air pressure, you need a very smooth wall and a clean rubber suction cup: it is a short-ranged effect. The nuclear forces are similar to this in their basic mechanism, with a short range because of the collisions and interactions of the force-mediating particles, which are more like gas molecules than the radiations which give rise to gravity and electromagnetism. We know this from electroweak theory, where at low energies the W and Z force mediators are screened by the foam vacuum of space, while the photon isn't.
Deceptions used to attack a predictive, testable physical understanding of quantum mechanics:

(1) Metaphysically-vague entanglement of the wavefunctions of photons in Alain Aspect's ESP/EPR supposed experiment, which merely demonstrates a correlation in the measured polarisation of photons emitted from the same source in opposite directions. This correlation is expected if Heisenberg's uncertainty principle does NOT apply to photon measurement. We know Heisenberg's uncertainty principle DOES apply to measuring electrons and other non-light-speed particles, which have time to respond to the measurement by being deflected or changing state. Photons, however, must be absorbed and then re-emitted to change state or direction. Therefore, correlation of identical photon measurements is expected based on the failure of the uncertainty principle to apply to the measurement process of photons. It is hence entirely fraudulent to claim that the correlation is due to metaphysically-vague entanglement of the wave functions of photons metres apart travelling in opposite directions.

(2) Young's double-slit experiment: Young falsely claimed that light somehow cancels out at the dark fringes on the screen. But we know energy is conserved. Light simply doesn't arrive at the dark fringes (if it does, what happens to it, especially when you fire one photon at a time?). What really happens with light is interference near the double slits, not at the screen, which is not the case for water-wave-type interference (water waves are longitudinal, so they interfere at the screen; light waves have a transverse feature which allows interference to occur even when a single photon passes through one of two slits, if the second slit is nearby, i.e., within a wavelength or so!).

(3) Restricted ('Special') Relativity:
"General
relativity as a generalization of special relativity
"Some
people are extremely confused about the nature of special relativity and
they will tell you that the discovery of general relativity has revoked
the constraints imposed by special relativity. But that's another
extremely deep misunderstanding of physics. General relativity is called
general relativity because it generalizes special relativity; it does not
kill it. One of the fundamental pillars of general relativity is the
equivalence principle that states that in locally inertial frames, the
laws of special relativity must be satisfied by all local
phenomena."
I just don't believe you [Lubos Motl] don't
understand that general covariance in GR is the important principle, that
accelerations are not relative and that all motions at least begin and end
with acceleration/deceleration.
The radiation (gauge bosons) and virtual particles in the vacuum exert pressure on moving objects, compressing them in the direction of motion. As FitzGerald deduced in 1889, this is not a mathematical effect, but a physical one. Mass increase occurs because of the snowplow effect of the Higgs bosons (mass) ahead of you when you move quickly, since the Higgs bosons you are moving into can't instantly flow out of your path, so there is mass increase. If you were to approach c, the particles in the vacuum ahead of you would be unable to get out of your way, you'd be going so fast, so your mass would tend towards infinity. This is simply a physical effect, not a mathematical mystery. Time dilation occurs because time is measured by motion, and if, as the Standard Model suggests, fundamental spinning particles are just trapped energy (the mass being due to the external Higgs field), that energy is going at speed c, perhaps as a spinning loop or vibrating string. When you move that at near speed c, the internal vibration and/or spin speed will slow down, because c would be violated otherwise. Since electromagnetic radiation is a transverse wave, the internal motion at speed x is orthogonal to the direction of propagation at speed v, so x^2 + v^2 = c^2 by Pythagoras. Hence the dynamic measure of time (vibration or spin speed) for the particle is x/c = (1 - v^2/c^2)^(1/2), which is the time-dilation formula.
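The Pythagorean picture above reproduces the standard time-dilation factor exactly. A minimal check (Python):

    import math

    def internal_rate(v_over_c):
        # x/c from x^2 + v^2 = c^2: the fraction of the internal
        # spin/vibration rate remaining at propagation speed v.
        return math.sqrt(1.0 - v_over_c**2)

    for beta in (0.0, 0.6, 0.8, 0.99):
        print(beta, internal_rate(beta))   # matches (1 - v^2/c^2)^(1/2)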
As Eddington said, light speed is absolute, but undetectable in the Michelson-Morley experiment, owing to the fact that the instrument contracts in the direction of motion, allowing the slower light beam to cross a smaller distance and thus catch up.

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A. S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.
Einstein said the same:

‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities... According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Leyden University lecture on ‘Ether and Relativity’, 1920. (Einstein, A., Sidelights on Relativity, Dover, New York, 1952, pp. 15-23.)
Maxwell failed to grasp that radiation (gauge bosons) was the mechanism for electric force fields, but he did usefully suggest that:

‘The ... action of magnetism on polarised light [discovered by Faraday, not Maxwell] leads ... to the conclusion that in a medium ... is something belonging to the mathematical class as an angular velocity ... This ... cannot be that of any portion of the medium of sensible dimensions rotating as a whole. We must therefore conceive the rotation to be that of very small portions of the medium, each rotating on its own axis [spin] ... The displacements of the medium, during the propagation of light, will produce a disturbance of the vortices ... We shall therefore assume that the variation of vortices caused by the displacement of the medium is subject to the same conditions which Helmholtz, in his great memoir on Vortex-motion, has shewn to regulate the variation of the vortices [spin] of a perfect fluid.’ – Maxwell’s 1873 Treatise on Electricity and Magnetism, Articles 822-3.
Compare this to the spin foam vacuum, and the fluid GR model:

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that “flows”... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.
Einstein admitted SR was tragic:

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.
To understand what the vector boson radiation is (photons having spin 1, and the stringy, speculative "gravitons" having spin 2), we need to understand Maxwell's electromagnetic unification. It is all fine except the "displacement current" term, which is added to Ampere's current to complete the continuity of circuit current in a charging capacitor with a vacuum dielectric.

The continuum is composed of radiation! There are also trapped particles in the vacuum which are responsible for the quantized masses of fundamental particles: the leptons and the pairs and triads of quarks in hadrons. The change in my approach is due to a physical understanding of the displacement current term in Maxwell's equations. Since about 2000 I've been pushing this way, hoping Catt would help, but he is not interested in progress beyond Heaviside's model. See my recent blog post: http://electrogravity.blogspot.com/2006/03/electromagnetismandstandardmodel_10.html and the links to my earlier posts and Catt's critical paper.

Maxwell supposed that the variation in voltage (hence electric field strength) in a capacitor plate causes an ethereal "displacement current". Mathematically Maxwell's trick works, since you put the "displacement current" law together with Faraday's law of induction, and the solution is Maxwell's light model, predicting the correct speed of light. However, this changes when you realise that displacement current is itself really electromagnetic radiation, acting at 90 degrees to the direction light propagates in Maxwell's model. Maxwell's model is entirely self-contradictory, so his unification of electricity and magnetism is not physical! Maxwell's unification is wrong, because the reality is that the "displacement current" effects result from electromagnetic radiation emitted transversely when the current varies with time (hence when charges accelerate) in response to the time-varying voltage. This completely alters the picture we have of what light is!
It seems that the electromagnetic forcecarrying radiation is also
the cause
of gravity, via particles which cause the mass of charged
elementary
particles.
The vacuum particles ("higgs
particle") that give rise to all mass in the
Standard Model haven't
been observed officially yet, and the official
prediction of the
energy of the particle is very vague, similar to the Top
Quark
mass, 172 GeV. However, my argument is that the mass of the
uncharged
Zboson, 91 GeV, determines the masses of all the other
particles. It
works. The charged cores of quarks, electrons, etc.,
couple up (strongly or
weakly) with a discrete number of massive
trapped Zbosons which exist in
the vacuum. This mechanism also
explains QED, such as the magnetic moment
of the electron 1 +
alpha/(2Pi) magnetons.
Literally, the electromagnetic
forcecausing radiation (vector bosons)
interact with charged
particle cores to produce EM forces, and with the
associated "higgs
bosons" (gravitationally selftrapped Zbosons) to produce
the
correct inertial masses and gravity for each particle.
The
lepton and hadron masses are quantized, and I've built a
model,
discussed there and on my blog, which takes this model and
uses it to
predict other things. I think this is what science is
all about. The
mainstream (string theory, CC cosmology) is too far
out, and unable to make
any useful predictions.
As for the continuum: the way to understand it is through correcting Maxwell's classical theory of the vacuum. Quantum field theory accounts for electrostatic (Coulomb) forces vaguely with a radiation-exchange mechanism. In the LeSage mechanism, the radiation causing Coulomb's law causes all forces by pushing. I worked out the mechanism by which electric forces operate in the April 2003 EW article; attraction occurs by mutual shielding as with gravity, but is stronger due to the sum of the charges in the universe. If you have a series of parallel capacitor plates with different charges, each separated by a vacuum dielectric, the total (net) voltage needs to take into account the orientation of the plates.

The vector sum is the same as a statistical random walk (drunkard's walk): the total is equal to the average voltage between a pair of plates, multiplied by the square root of the total number of plates (this allows for the angular geometry dispersion, not distance, because the universe is spherically symmetrical around us - thank God for keeping the calculation very simple! - and there is as much dispersion outward in the random walk as there is inward, so the effects of inverse square law dispersions and concentrations with distance both exactly cancel out).
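As a quick Monte Carlo check of the drunkard's-walk claim (a Python sketch; the plate counts and the pair voltage are arbitrary illustration values, not data from the article):

```python
import math
import random

def net_voltage(n_pairs, v_pair=1.0):
    """Sum n_pairs pair-voltages, each randomly oriented (+v or -v)."""
    return sum(random.choice((+v_pair, -v_pair)) for _ in range(n_pairs))

# The RMS of the net sum over many trials approaches v_pair * sqrt(N),
# the square-root scaling claimed above for randomly oriented plates.
for n in (100, 2_500, 10_000):
    trials = 500
    rms = math.sqrt(sum(net_voltage(n) ** 2 for _ in range(trials)) / trials)
    print(f"N = {n:6d}: rms net voltage = {rms:7.1f}, sqrt(N) = {math.sqrt(n):7.1f}")
```

The root-mean-square of the random sum grows as the square root of the number of contributions, which is the statistical basis of the argument.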
Gravity is the force that comes from a straight-line sum, which is the only other option than the random walk. In a straight line, the sum of charges is zero along any vector across the universe, if that line contains an average equal number of positive and negative charges. However, it is equally likely that the straight radial line drawn at random across the universe contains an odd number of charges, in which case the average charge is 2 units (2 units is equal to the difference between 1 negative charge and 1 positive charge). Therefore the straight-line sum has two options only, each with 50% probability: an even number of charges and hence zero net result, or an odd number of charges which gives 2 unit charges as the net sum. The mean for the two options is simply (0 + 2)/2 = 1 unit. Hence electromagnetism is the square root of the number of charges in the universe, times the weak option force (gravity).
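Putting the two summation modes together (a sketch assuming, as a rough standard estimate not stated at this point in the text, that the universe contains something like 10^80 charges; only the scaling matters):

```python
import math

N_CHARGES = 1e80  # assumed rough number of charges in the universe
# Random-walk (EM) sum is sqrt(N) times the straight-line (gravity) unit:
print(f"EM/gravity strength ratio ~ sqrt(N) = {math.sqrt(N_CHARGES):.0e}")  # ~1e+40
```

A ratio of order 10^40 is broadly the enormous disparity observed between electric and gravitational forces acting on fundamental particles.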
Thus, electromagnetism and gravity are different ways that charges add up. Electric attraction is, as stated, simply a mutual blocking of EM "vector boson" radiation by charges, like LeSage gravity. Electric repulsion is an exchange of radiation. The charges recoil apart because the underlying physics in an expanding universe (with "redshifted" or at least reduced-energy radiation pressing in from the outside, due to receding matter in the surrounding universe) means their exchange of radiation results in recoil away from one another (imagine two people firing guns at each other, for a simple analogy; they would recoil apart).

Magnetic force is apparently, as Maxwell suggested, due to the spins of the vacuum particles, which line up. We'll examine the details further on.
http://cosmicvariance.com/2006/01/25/general-relativity-as-a-tool/#comment-15326

The best way to understand that the basic field equation of GR is empirical fact is extending Penrose's arguments:

(1) Represent Newton's empirical gravity potential in the tensor calculus of Gregorio Ricci-Curbastro and Tullio Levi-Civita: R_uv = 4.Pi(G/c^2)T_uv, which applies to low speeds/weak fields.

(2) Consider objects moving past the sun, gaining gravitational potential energy, and being deflected by gravity. The mean angle of the object to the radial line of the gravity force from the sun is 90 degrees, so for slow-moving objects, 50% of the energy is used in increasing the speed of the object, and 50% in deflecting the path. But because light cannot speed up, 100% of the gravitational potential energy gained by light on its approach to the sun is used for deflection, and this is the mechanism by which light suffers twice the deflection suggested by Newton's law. Hence for light deflection: R_uv = 8.Pi(G/c^2)T_uv.

(3) To unify the different equations in (1) and (2) above, you have to modify (2) as follows: R_uv - 0.5Rg_uv = 8.Pi(G/c^2)T_uv, where g_uv is the metric. This is the Einstein-Hilbert field equation.

At low speeds and in weak gravity fields, R_uv = -0.5Rg_uv, so the equation becomes the Newtonian approximation R_uv = 4.Pi(G/c^2)T_uv.
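For clarity, the low-speed reduction claimed in that last step can be written out explicitly (my LaTeX transcription of the plain-text equations above; nothing new is assumed):

```latex
R_{\mu\nu} - \tfrac{1}{2}R g_{\mu\nu} = 8\pi \frac{G}{c^2} T_{\mu\nu},
\qquad\text{with } R_{\mu\nu} = -\tfrac{1}{2}R g_{\mu\nu} \text{ at low speeds}
\;\Rightarrow\; 2R_{\mu\nu} = 8\pi \frac{G}{c^2} T_{\mu\nu}
\;\Rightarrow\; R_{\mu\nu} = 4\pi \frac{G}{c^2} T_{\mu\nu}.
```

So the contraction term -0.5Rg_uv halves the effective coupling at low speeds, recovering the Newtonian form (1).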
GR is based entirely on empirical facts. Speculation only comes into it after 1915, via the "cosmological constant" and other "fixes". Think about the mechanism for the gravitation and the contraction which constitute pure GR: it is quantum field theory, radiation exchange.

Fundamental particles have spin which in an abstract way is related to vortices. Maxwell in fact argued that magnetism is due to the spin alignment of tiny vacuum field particles.

The problem is that the electron is nowadays supposed to be in an almost metaphysical superposition of spin states until measured, which indirectly (via the EPR-Bell-Aspect work) leads to the entanglement concept you mention. But Dr Thomas Love of California State University last week sent me a preprint, "Towards an Einsteinian Quantum Theory", where he shows that the superposition principle is a fallacy, due to the two versions of the Schroedinger equation: a system described by the time-dependent Schroedinger equation isn't in an eigenstate between interactions.

"The quantum collapse occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics."
Maxwell failed to grasp that radiation (gauge bosons) was the mechanism for electric force fields, but he did usefully suggest that:

‘The ... action of magnetism on polarised light [discovered by Faraday, not Maxwell] leads ... to the conclusion that in a medium ... is something belonging to the mathematical class as an angular velocity ... This ... cannot be that of any portion of the medium of sensible dimensions rotating as a whole. We must therefore conceive the rotation to be that of very small portions of the medium, each rotating on its own axis [spin] ... The displacements of the medium, during the propagation of light, will produce a disturbance of the vortices ... We shall therefore assume that the variation of vortices caused by the displacement of the medium is subject to the same conditions which Helmholtz, in his great memoir on Vortex-motion, has shewn to regulate the variation of the vortices [spin] of a perfect fluid.’ – Maxwell’s 1873 Treatise on Electricity and Magnetism, Articles 822-3.
Compare this to the spin foam vacuum, and the fluid GR model:

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all anti-slipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.
Einstein admitted SR was tragic: ‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities... According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Leyden University lecture on ‘Ether and Relativity’, 1920. (Einstein, A., Sidelights on Relativity, Dover, New York, 1952, pp. 15-23.)

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A. S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.
The radiation (gauge bosons) and virtual particles in the vacuum exert pressure on moving objects, compressing them in the direction of motion. As FitzGerald deduced in 1889, it is not a mathematical effect, but a physical one. Mass increase occurs because of the snowplough effect of the Higgs bosons (mass ahead of you) when you move quickly: the Higgs bosons you are moving into can't instantly flow out of your path, so there is mass increase. If you were to approach c, the particles in the vacuum ahead of you would be unable to get out of your way, you'd be going so fast, so your mass would tend towards infinity. This is simply a physical effect, not a mathematical mystery. Time dilation occurs because time is measured by motion, and if, as the Standard Model suggests, fundamental spinning particles are just trapped energy (mass being due to the external Higgs field), that energy is going at speed c, perhaps as a spinning loop or vibrating string. When you move that at near speed c, the internal vibration and/or spin speed will slow down, because c would be violated otherwise. Since electromagnetic radiation is a transverse wave, the internal motion at speed x is orthogonal to the direction of propagation at speed v, so x^2 + v^2 = c^2 by Pythagoras. Hence the dynamic measure of time (vibration or spin speed) for the particle is x/c = (1 - v^2/c^2)^{1/2}, which is the time-dilation formula.

As Eddington said, light speed is absolute but undetectable in the Michelson-Morley experiment owing to the fact that the instrument contracts in the direction of motion, allowing the slower light beam to cross a smaller distance and thus catch up.
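A quick numerical check of the Pythagorean time-dilation argument (a Python sketch; the chosen speeds are arbitrary illustration values):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def internal_rate(v):
    """x/c from x^2 + v^2 = c^2: the claimed internal spin/vibration rate."""
    return math.sqrt(1.0 - (v / C) ** 2)

for frac in (0.1, 0.5, 0.866, 0.99):
    print(f"v = {frac:5.3f}c -> internal clock rate x/c = {internal_rate(frac * C):.4f}")
```

At v = 0.866c the internal rate is 0.50, i.e. internal processes run at half speed: the same factor (1 - v^2/c^2)^{1/2} as the standard time-dilation formula, as the text states.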
Dr Love helpfully quotes Einstein's admissions that the covariance of the general relativity theory violates the idea in special relativity that the velocity of light is constant:

'This was ... the basis of the law of the constancy of the velocity of light. But ... the general theory of relativity cannot retain this law. On the contrary, we arrived at the result that, according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.' - Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p. 111.

So general relativity conflicts with, and supersedes, special relativity. General relativity says goodbye to the law of the invariant velocity of light which was used in a fiddle, special relativity:

'... the principle of the constancy of the velocity of light in vacuo must be modified, since we easily recognise that the path of a ray of light ... must in general be curvilinear...' - Albert Einstein, The Principle of Relativity, Dover, 1923, p. 114.
The error with special relativity (which is incompatible with general relativity, since general relativity allows the velocity of light to depend on the coordinate system, and special relativity does not) is therefore the assumption that the spacetime reference frame changes when contraction occurs. In fact, the matter just contracts, due to the vacuum (gauge boson) force mechanism of quantum field theory, so you need to treat the spacetime of the vacuum separately from that of the matter. In general, Walter Babin's point is valid where he suggests that special relativity's insistence upon an invariant velocity of light and a variable coordinate system should be replaced, for covariance in general relativity, by a fixed coordinate system with the variable being the velocity of light (which does vary according to general relativity, the more general theory):
From: Nigel Cook To: Walter Babin Sent: Tuesday, March 07, 2006 8:17 PM Subject: Special relativity
Dear Walter,
My feeling on special relativity has never been completely clear, but your new paper http://www.wbabin.net/babin/redux.pdf is excellent.
Your first argument, which suggests that a "constant speed of light + varying reference frame" is at best equivalent to the more sensible "varying speed of light + fixed reference frame", intuitively appeals to me.
The contraction of a 1 kg ruler 1 metre long to 86.6 centimetres in the direction of motion when travelling at c/2 is a local contraction of the material making up the ruler. The energy that causes the contraction must be the energy injected. Inertia is the force needed to overcome the pressure of the spacetime fabric. The mass increase of that ruler to 1.15 kg is explained by the spacetime fabric, which is limited in speed to a maximum of c and can't flow out of the way fast enough when you approach c, so the inertial resistance (and hence inertial mass) increases. The Standard Model of nuclear physics already says that mass is entirely caused by the vacuum "Higgs field". This seems already to violate the meaning commonly given to E=mc^2, since if m is due to the Higgs field surrounding matter possessing electromagnetic field energy E, then mass and energy are not actually identical at all, but one is simply associated with the other, just as a man and a woman are associated by marriage!
What is really going on is that objects are physically contracting in the direction of their motion when accelerated. You use 50 Joules of energy to accelerate a 1 kg mass up to 10 m/s. Surely the energy you need to start something moving is physically the energy used to contract all the atoms in the direction of motion?
Length contraction is real, but it is the physical material of the Michelson-Morley instrument that is being contracted, and so light speed is absolute, because (as FitzGerald showed) the Michelson-Morley result is explained by contraction in the direction of motion for a Maxwellian absolute speed of light (Maxwell predicted an absolute speed of light in the Michelson-Morley experiment, which he suggested, although he died long before the experiment was done). Nowadays, history is so "revised" that some people claim falsely that Maxwell predicted relativity from his flawed classical mathematical model of a light wave!
Special relativity is a mathematical obfuscation used to get rid of the mechanical basis FitzGerald gave in 1889 for the length contraction formula implied by the Michelson-Morley experiment. Furthermore, Einstein claims Maxwell's equations suggested relativity, but Maxwell was an aether theorist and interpreted his equations the opposite way. Of course Maxwell didn't predict the FitzGerald contraction, because his aether model was wrong. Joseph Larmor published a mechanical aetherial prediction of the time-dilation formula in his 1900 book "Aether and Matter". Larmor is remembered in physics today only for his equation for the spiral of electrons in a magnetic field.
I was surprised a decade ago to find that Eddington dismissed special relativity in describing general relativity in his 1920 book. Eddington says special relativity is wrong because accelerative motion is absolute (as measured against approximately "fixed" stars, for example): you rotate a bucket of water and you can see the surface indent. We know special relativity is just an approximation and that general relativity is deeper, because it deals with accelerations which are always needed for motion (for starting and stopping, before and after uniform motion).
In general relativity, the spacetime fabric pressure causes gravity by some kind of radiation LeSage mechanism, and the same mechanism causes the contraction term.
Einstein said that spacetime is fourdimensional and curved.
The Earth is contracted by 1.5 mm due to the contraction term in general relativity, which is given mathematically in the usual treatment by energy conservation of the gravitational field. But you can physically calculate the general relativity contraction from the FitzGerald contraction of length by the factor (1 - v^2/c^2)^{1/2} = [1 - 2GM/(xc^2)]^{1/2}. I obtain this starting with the Newtonian approximate empirical formula, which gives the square of escape velocity as v^2 = 2GM/x, and the logical fact that the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v. By Einstein's principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v^2 = 2GM/x) into the FitzGerald-Lorentz contraction (1 - v^2/c^2)^{1/2}, which gives the gravitational contraction [1 - 2GM/(xc^2)]^{1/2} ~ 1 - GM/(xc^2), using the first two terms in the binomial expansion.
This is a physical mechanism for the essential innovation of general relativity, the contraction term in the Einstein-Hilbert field equation. Because the contraction due to motion is physically due to head-on pressure (like wind pressure on your windscreen at high speed) from the spacetime fabric, it occurs only in the direction of motion, say the x direction, leaving the size of the mass in the y and z directions unaffected.

For gravity, the mechanism of spacetime fabric pressure causes contraction in the radial directions, outward from the centre of mass. This means the amount of contraction is, as Feynman calculated, about (1/3)GM/c^2 = 1.5 mm for the Earth.
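The 1.5 mm figure is easy to check numerically (a Python sketch using standard values for G, the Earth's mass and c):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of the Earth, kg
C = 2.998e8         # speed of light, m/s

# Feynman's radial contraction term (1/3)GM/c^2, quoted above:
contraction_m = G * M_EARTH / (3 * C**2)
print(f"Earth radial contraction ~ {contraction_m * 1e3:.2f} mm")  # ~1.48 mm
```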
If you look at Feynman's account of this, which is one of the most physically real, he gets his equation confused in words: Professor Feynman makes a confused mess of it in the relevant volume of his Lectures (chapter 42, page 6), where he gives his equation 42.3 correctly, for excess radius equal to predicted radius minus measured radius, but then on the same page in the text says ‘… actual radius exceeded the predicted radius …’ Talking about ‘curvature’ when dealing with radii is not helpful and probably caused the confusion.
‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A. S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation, Cambridge University Press, Cambridge, 1921, pp. 20, 152.
I think Eddington's comments above are right. The speed of light is absolute, but this is covered up by the physical contraction of the Michelson-Morley apparatus in the direction of motion, so the result was null. What I want to ask is whether special relativity is self-contradictory here, because special relativity has both contraction and an invariant speed of light, which taken together look incompatible with the Michelson-Morley result.
To be clear, FitzGerald's empirical theory is: "physical contraction due to ether pressure + Michelson-Morley result => variable speed of light depending on motion".

Special relativity is: "Michelson-Morley result => invariant speed of light; invariant speed of light + laws of nature independent of inertial motion => contraction".
So special relativity is ad hoc and is completely incompatible with FitzGerald's prior analysis. Since experimental data only verifies the resulting equations, Ockham's razor tells us to accept FitzGerald's simple analysis of the facts, and to neglect the speculation of special relativity. Furthermore, even Einstein agrees with this:
‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.
We know that there is a background force-causing spacetime radiation fabric from quantum field theory and from the correction of Maxwell's extra term (allegedly vacuum "displacement current", but actually electromagnetic radiation: Maxwell thought that "displacement current" is due to the variation in voltage or electric field, when it is really electromagnetic radiation emitted due to the variation in electric current on a charging capacitor plate, which behaves a bit like a radio aerial; see http://electrogravity.blogspot.com/2006/01/solution-to-problem-with-maxwells.html).
One question I do have, Walter, is what we are trying to get out of this. I think it is going to be a very hard job to oust special relativity, for numerous reasons. However, it is necessary to get quantum gravity resolved and to dispense with outspoken pro-special-relativity string theorists.

The equations from special relativity, insofar as they can also be obtained by other arguments from observations (FitzGerald, Lorentz, Larmor, etc.), are useful.
Personally, I think the public relations aspect is paramount. Probably it is an error to attack Einstein or to disprove special relativity without giving a complete mathematical replacement. I do know that quantum field theory says that the virtual particles of the vacuum look different to observers in different motion, violating special relativity's "Lorentzian invariance" unless that invariance specifically applies to the real contraction of material moving within the spacetime fabric and the slowing down of physical processes, plus the piling up of the Higgs field at the "bow" of a relativistic particle to cause mass increase. This is extremely heretical, as I will show.
Certainly nobody in any position of influence in physics wants to lose that position of influence by being name-called a 'crackpot', as Professor Lubos Motl of Harvard has done to me and others today at http://www.math.columbia.edu/~woit/wordpress/?p=357#comment-9009:
"I am at least trying to inhibit the kind of 'discussion' in the direction of ... Nigel Cook, and so many others... what these crackpots are saying..."
Notice that string theory is entirely speculative, but Professor Lubos Motl states that it is not crackpot, without providing any evidence for 10 dimensions and unobserved superpartners, or gravitons. It is entirely consistent that taxpayer-funded people, who get money for speculation dressed up as science (the old name for such people in the medical arena is quack), align themselves with others. On Motl's blog, Michael Varney, a graduate research student who co-authored the paper published by Nature, "Upper limits to submillimetre-range forces from extra spacetime dimensions" (which neither confirms nor denies the abject speculation of string theory), used that paper to assert his right to call other people crackpots. He cites as authority a crank.net internet site run by Erik Max Francis, described impressively by Bonnie Rothman Morris in the New York Times of Dec. 21, 2000 as 'not a scientist, and has taken only a handful of classes at a community college'. Erik has a list of sites of people suppressed by the mainstream for many reasons, and labels them crackpot or crank. He does not include his own claim to have proved Kepler's laws from other laws based on Kepler's laws (a circular crackpot argument), but presents his crackpotism separately. The New York Times article, which generally supports bigotry (http://www.greatdreams.com/nyt10198.htm), mentions that: 'Phil Plaitt, the Web master of Bad Astronomy started his site (www.badastronomy.com) ... [is] an astronomer and a friend of Mr. Francis.' This association with bigotry agrees with my experience of being suppressed from Plaitt's discussion forum two years ago by bigots, supported by the moderator (whoever that anonymous person was), who didn't accept any part of the big bang, despite the facts here: http://www.astro.ucla.edu/~wright/tiredlit.htm
We already know from the +/- 3 mK cosine variation in the 2.7 K microwave background that there is a motion of the galaxy at 400 km/s toward Andromeda. This is probably largely due to the gravity of Andromeda, but it does indicate a kind of absolute motion. Taking the 400 km/s as an order-of-magnitude figure for the motion of matter in the Milky Way in the 1.5 x 10^10 yr since the big bang, that indicates we've moved 0.1% of the radius of the universe since the big bang. Hence, we are near the middle if we treat the big bang as a type of explosion. You know the regular "expanding cake" model, which tries to mathematically fit cosmology to general relativity equations without gravity dynamics (quantum gravity), with everything receding from everything else; but the problem is that nobody has ever looked at the universe from different places, so they really don't know, they're just speculating. The fact that so many epicycles are required using that approach (evolving dark energy being the latest), and are resolved in the correct mechanism, shows the need to employ the dynamics of quantum gravity to obtain the general relativity result, as shown here. Spacetime is still being created in the big bang.

Clearly, Walter, there is a mixture of outright bigotry and ignorance in the scientific community, in addition to the usual attitude that most genuine physicists know there are problems between general relativity and quantum field theory, but keep silent if they don't have any constructive ideas on resolving these problems. The dynamics predict general relativity and the gravity constant G within 2%, as shown on my home page and in one of the papers you kindly host.
Yours sincerely,
Nigel
General relativity has to somehow allow the universe's spacetime to expand in 3 dimensions around us (big bang) while also allowing gravitation to contract the 3 dimensions of spacetime in the earth, causing the earth's radius to shrink by 1.5 millimetres, and (because of spacetime) causing time on the Earth to slow down by 1.5 parts in 6,400,000,000 (i.e., 1.5 mm in the Earth's radius of 6,400 km). This is the contraction effect of general relativity, which contracts distances and slows time.
The errors of general relativity being force-fitted to the universe as a whole are obvious: the outward expansion of spacetime in the big bang causes the inward reaction on the spacetime fabric, which causes the contraction as well as gravity and other forces. Hence, general relativity is a local-scale resultant of the big bang, not the cause or the controlling model of the big bang. The conventional paradigm confuses cause for effect; general relativity is an effect of the universe, not the cause of it. To me this is obvious; to others it is heresy.
What is weird is that Catt clings on to horseshit from crackpots which is debunked (rather poorly) here: http://www.astro.ucla.edu/~wright/tiredlit.htm. A better way to debunk all the horseshit anti-expansion stuff is to point out that the cosmic background radiation measured, for example, by the COBE satellite in 1992 is BOTH (a) the most redshifted radiation known (it is redshifted by a factor of 1000, from 3000 K infrared to 2.7 K microwaves), and (b) the most perfect blackbody (Planck) radiation spectrum ever observed. The only mechanism for a uniform redshift by the same factor at all frequencies is recession, for the known distribution of masses in the universe. The claim that the perfectly sharp and uniformly shifted light from distant stars has been magically scattered by clouds of dust, without diffusing the spectrum or image, is horseshit, like claiming the moon landings are a hoax.
The real issue is that the recession speeds are observations which apply to fixed times past, as a certain fact, not to fixed distances. Hence the recession is a kind of acceleration (velocity/time) for the observable spacetime which we experience. This fact leads to outward force F = ma = 10^43 N, and by Newton's 3rd law an equal inward force, which predicts gravity via an improved, QFT-consistent LeSage mechanism.
The mechanism behind the deflection of light by the sun is that everything, including light, gains gravitational potential energy as it approaches a mass like the sun.

Because the light passes perpendicularly to the gravity field vector at closest approach (the average deflection position), the increased gravitational energy of a slow-moving body would be used equally in two ways: 50% of the energy would go into increasing the speed, and 50% into changing the direction (bending it towards the sun).

Light cannot increase in speed, so 100% of the gained energy must go into changing the direction. This is why the deflection of light by the sun is exactly twice that predicted for slow-moving particles by Newton's law. All GR is doing is accounting for energy.
This empiricist model accurately predicts the value of G using cosmological data (Hubble constant and density of the universe), eliminating most dark matter in the process. It gets rid of the need for inflation, since the effective strength of gravity at 300,000 years was very small, so the ripples were small.

=> No inflation needed. All forces (nuclear, EM, gravity) are in constant ratio because all have interrelated QFT energy-exchange mechanisms. Therefore the fine structure parameter 137 (ratio of strong force to EM) remains constant, and the ratio of gravity to EM remains constant.

The sun's radiating power and the nuclear reactions in the first three minutes are not affected at all by variations in the absolute strengths of all the fundamental forces, since they remain in the same ratio.

Thus, if you double gravity and the nuclear and EM force strengths are also doubled, the sun will not shine any differently than now. The extra compression due to an increase in gravity would be expected to increase the fusion rate, but the extra Coulomb repulsion between approaching protons (due to the rise in EM force) cancels out the gravitational compression.

So the ramshackle-looking empiricist model does not conflict at all with the nucleosynthesis of the BB, or with stellar evolution. It does conflict with the CC and inflation, but those are just epicycles in the mainstream model, not objective facts.
http://cosmicvariance.com/2006/03/16/wmap-results-cosmology-makes-sense/

This is hyped up to get media attention: the CBR from 300,000 years after the BB says nothing of the first few seconds, unless you believe their vague claims that the polarisation tells something about the way the early inflation occurred. That might be true, but it is very indirect.

I do agree with Sean on CV that n = 0.95 may be an important result from this analysis. I'd say it's the only useful result. But the interpretation of the universe as 4% baryons, 22% dark matter and 74% dark energy is a nice fit to the existing Lambda-CDM epicycle theory from 1998. The new results on this are not too different from previous empirical data, but this 'nice consistency' is a euphemism for 'useless'.

WMAP has produced more accurate spectral data of the fluctuations, but that doesn't prove the ad hoc cosmological interpretation which was force-fitted to the data in 1998. Of course the new data fits the same ad hoc model. Unless there was a significant error in the earlier data, it would do. Ptolemy's universe, once fiddled, continued to model things, with only occasional 'tweaks', for centuries. This doesn't mean you should rejoice.

Dark matter, dark energy, and the tiny cosmological constant describing the dark energy remain massive epicycles in current cosmology. The Standard Model has not been extended to include dark matter and energy. It is not hard science; it's a very indirect interpretation of the data. I've got a correct prediction, made without a cosmological constant and published in '96, years before the ad hoc Lambda-CDM model. Lunsford's unification of EM and GR also dismisses the CC.
http://cosmicvariance.com/2006/03/18/theres-gold-in-the-landscape/

They were going to name the flower "Desert Iron Pyrites", but then they decided "Desert Gold" is more romantic ;)
Dr Peter Woit has kindly removed the following comments as requested, which I made on the subject of physicist John Barrow's $1.4 million prize for religion (see second comment below): http://www.math.columbia.edu/~woit/wordpress/?p=364

anon Says: March 18th, 2006 at 3:47 am

Secret milkshake, I agree! The problem religion posed in the past to science was insistence on the authority of scripture and accepted belief systems over experimental data. If religion comes around to looking at experimental data and trying to go from there, then it becomes more scientific than certain areas of theoretical physics. Does anyone know what Barrow has to say about string theory?
I learnt a lot of out-of-the-way 'trivia' from 'The Anthropic Cosmological Principle', particularly the end notes, e.g.:

'… should one ascribe significance to empirical relations like m(electron)/m(muon) ~ 2{alpha}/3, m(electron)/m(pion) ~ {alpha}/2 … m(eta) - 2m(charged pion) = 2m(neutral pion), or the suggestion that perhaps elementary particle masses are related to the zeros of appropriate special functions?'

By looking at numerical data, you can eventually spot more 'coincidences' that enable empirical laws to be formulated. If alpha is the core charge shielding factor by the polarised vacuum of QFT, then it is possible to justify particle mass relationships; all observable particles apart from the electron have masses quantized as M = [electron mass].n(N+1)/(2.alpha) ~ 35n(N+1) MeV, where n is 1 for leptons, 2 for mesons and naturally 3 for baryons. N is also an integer, and takes values of the 'magic numbers' of nuclear physics for relatively stable particles: for the muon (the most stable particle after the neutron), N = 2; for nucleons, N = 8; for the tauon, N = 50.
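As a check on the proposed quantization (a Python sketch; the observed masses in the comments are standard quoted values, added for comparison):

```python
ALPHA = 1 / 137.036  # fine structure constant
M_E = 0.511          # electron rest-mass energy, MeV

def mass_mev(n, N):
    """The proposed spectrum M = m_e * n(N+1)/(2*alpha) ~ 35n(N+1) MeV."""
    return M_E * n * (N + 1) / (2 * ALPHA)

print(f"muon    (n=1, N=2):  {mass_mev(1, 2):7.1f} MeV (observed 105.7)")
print(f"nucleon (n=3, N=8):  {mass_mev(3, 8):7.1f} MeV (observed ~939)")
print(f"tauon   (n=1, N=50): {mass_mev(1, 50):7.1f} MeV (observed 1777)")
```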
Hence, there's a selection principle allowing masses of relatively stable particles to be deduced. Since the Higgs boson causes mass and may have a value like that of the Z boson, it's interesting that [Z-boson mass]/(3/2 x 2.Pi x 137 x 137) = 0.51 MeV (electron mass), and [Z-boson mass]/(2.Pi x 137) ~ 105.7 MeV (muon mass). In an electron, the core must be quite distant from the particle giving the mass, so there are two separate vacuum polarisations between them, weakening the coupling to just alpha squared (times a geometrical factor). In the muon and all other particles than the electron, there is extra binding energy and so the core is closer to the mass-giving particle; hence only one vacuum polarisation separates them, so the coupling is alpha.
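Numerically (a Python sketch of the two coincidences just stated, taking the measured Z-boson mass as 91,187.6 MeV):

```python
import math

M_Z = 91_187.6  # Z-boson mass, MeV

# One vacuum polarisation veil (coupling alpha): the muon-like scale.
print(f"m_Z/(2.Pi.137)       = {M_Z / (2 * math.pi * 137.036):7.1f} MeV (muon 105.7)")

# Two veils plus the 3/2 geometry factor: the electron-like scale.
print(f"m_Z/(1.5x2.Pi.137^2) = {M_Z / (1.5 * 2 * math.pi * 137.036**2):7.3f} MeV (electron 0.511)")
```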
Remember that Schwinger's coupling correction in QED increases Dirac's magnetic moment of the electron to about 1 + alpha/(2.Pi). When you think outside the box, sometimes coincidences have a reason.
Ivor Catt, who published in IEE Trans. EC-16 and IEE Proc. 83 and 87 evidence proving that electric energy charges a capacitor at light speed and can't slow down afterwards (hence electric energy has light speed), is wondering whether to throw a celebration on 26/28 May 2006 to mark the most ignored paradigm-shift in history. Catt is the discoverer of so-called Theory C (no electric current), which is only true in a charged capacitor (or other static charge). However, Catt fails to acknowledge that his own evidence for a light-speed (spin) electron is a massive advance. In the previous posts, I've quoted results from Drs. Thomas Love, Asim O. Baruk, and others showing that the principle of superposition (which is one argument for ignoring the reality of electron spin in quantum mechanics) is a mathematical falsehood resulting from a contradiction in the two versions of the Schroedinger equation (Dr Love's discovery), since you change equations when dealing with taking a measurement!

Hence, a causal model of spin, such as a loop of gravitationally self-trapped (i.e., black hole) Heaviside electric 'energy current' (the Heaviside vector, describing light-speed electric energy in conductors - Heaviside worked on the Newcastle-Denmark Morse Code telegraph line in 1872), is the reality of the electron. You can get rid of the half-integer spin problem by having the transverse vector rotate half a turn during a revolution, like the Moebius strip of geometry. It is possible for a person to be so skeptical that they won't listen to anything. Science has to give sensible reasons for dismissing evidence. An empirical model based on facts which predicts other things (gravitation, all forces, all particle masses) is scientific. String 'theory' isn't.
On the subject of drl versus the cosmological constant: Dr Lunsford outlines problems in the 5-d Kaluza-Klein abstract (mathematical) unification of Maxwell's equations and GR, and Lunsford published his own unification in Int. J. Theor. Phys., v 43 (2004), No. 1, pp. 161-177. This peer-reviewed paper was submitted to arXiv.org but was removed from arXiv.org by censorship, apparently because it investigated a 6-dimensional spacetime, which is not consistent with Witten's speculative 10/11-dimensional M-theory. It is however on the CERN document server at http://doc.cern.ch//archive/electronic/other/ext/ext-2003-090.pdf, and it shows the errors in the historical attempts by Kaluza, Pauli, Klein, Einstein, Mayer, Eddington and Weyl. It proceeds to the correct unification of general relativity and Maxwell's equations, finding 4-d spacetime inadequate:

'Gravitation and Electrodynamics over SO(3,3)', CERN document server, EXT-2003-090: 'an approach to field theory is developed in which matter appears by interpreting source-free (homogeneous) fields over a 6-dimensional space of signature (3,3), as interacting (inhomogeneous) fields in spacetime. The extra dimensions are given a physical meaning as 'coordinatized matter'. The inhomogeneous energy-momentum relations for the interacting fields in spacetime are automatically generated by the simple homogeneous relations in 6D. We then develop a Weyl geometry over SO(3,3) as base, under which gravity and electromagnetism are essentially unified via an irreducible 6-calibration invariant Lagrange density and corresponding variation principle. The Einstein-Maxwell equations are shown to represent a low-order approximation, and the cosmological constant must vanish in order that this limit exist.'
It is obvious that there are 3 expanding spacetime dimensions describing the evolution of the big bang, and 3 contractable dimensions describing matter. Total: 6 distinguishable dimensions to deal with.
Lunsford begins with an enlightening overview of attempts to unify electromagnetism and gravitation:

'The old goal of understanding the long-range forces on a common basis remains a compelling one. The classical attacks on this problem fell into four classes:

'1. Projective theories (Kaluza, Pauli, Klein)
'2. Theories with asymmetric metric (Einstein-Mayer)
'3. Theories with asymmetric connection (Eddington)
'4. Alternative geometries (Weyl)

'All these attempts failed. In one way or another, each is reducible and thus any unification achieved is purely formal. The Kaluza theory requires an ad hoc hypothesis about the metric in 5D, and the unification is non-dynamical. As Pauli showed, any generally covariant theory may be cast in Kaluza's form. The Einstein-Mayer theory is based on an asymmetric metric, and as with the theories based on asymmetric connection, is essentially algebraically reducible without additional, purely formal hypotheses.

'Weyl's theory, however, is based upon the simplest generalization of Riemannian geometry, in which both length and direction are non-transferable. It fails in its original form due to the non-existence of a simple, irreducible calibration invariant Lagrange density in 4D. One might say that the theory is dynamically reducible. Moreover, the possible scalar densities lead to 4th order equations for the metric, which, even supposing physical solutions could be found, would be differentially reducible. Nevertheless the basic geometric conception is sound, and given a suitable Lagrangian and variational principle, leads almost uniquely to an essential unification of gravitation and electrodynamics with the required source fields and conservation laws.'

Again, the general concepts involved are very interesting: 'from the current perspective, the Einstein-Maxwell equations are to be regarded as a first-order approximation to the full calibration-invariant system.

'One striking feature of these equations that distinguishes them from Einstein's equations is the absent gravitational constant – in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behaviour. The ratio has conformal weight 1 and so G has a natural dimensionfulness that prevents it from being a proper coupling constant – so the theory explains why general relativity, even in the linear approximation and the quantum theory built on it, cannot be regularised.'
A causal model for GR must separate out the description of matter from the expanding spacetime universe. Hence you have three expanding spacetime dimensions, but matter itself is not expanding, and is in fact contracted by the gravitational field, the source of which is vector boson radiation in QFT.
The CC is used to cancel out gravitational retardation of supernovae at long distances. You can get rid of the CC by taking the Hubble expansion as primitive, and gravity as a consequence of expansion in spacetime. Outward force F = ma = mc/(age of universe) => inward force (3rd law). The inward force, according to the Standard Model possibilities of QFT, must be carried by vector boson radiation. So causal shielding (LeSage) gravity is a result of the expansion. Thus, quantum gravity and the CC problem are dumped in one go.

I personally don't like this result; it would be more pleasing not to have to do battle with the mainstream over the CC. But frankly I don't see how an ad hoc model composed of 96% dark matter and dark energy is defended to the point of absurdity by suppressing workable alternatives which are more realistic.

The same has happened in QFT due to strings. When I was last at university, I sent Stanley Brown, editor of Physical Review Letters, my gravity idea, a really short concise paper, and he rejected it for being an "alternative" to string theory! I don't believe he even bothered to check it. I'd probably have done the same thing if I was flooded by nonsense ideas from outsiders, but it is a sad excuse.
Lee Smolin, in starting with known facts of QFT and building GR from them, is an empiricist, in contrast to the complete speculation of string theorists.

We know some form of LQG spin foam vacuum is right, because vector bosons (1) convey force, and (2) have spin.

For comparison, nobody has evidence for superpartners, extra dimensions, or any given stringy theory.

Danny Lunsford unites Maxwell's equations and GR using a plausible treatment of spacetime where there are exactly twice as many dimensions as observed, the extra dimensions describing non-expanding matter while the normal spacetime dimensions describe the expanding spacetime. Because the expanding BB spacetime is symmetrical around us, those three dimensions can be lumped together.

The problem is that the work by Smolin and Lunsford is difficult for the media to report, and is not encouraged by string theorists, who have too much power.

Re inflation: the observed CBR smoothness "problem" at 300,000 years (the very tiny size scale of the ripples across the sky) is only a problem for seeding galaxy formation in the mainstream paradigm for GR.
1. General relativity and elementary particles
‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 8990.

Woit's book is due out on 1 June 2006. Because of Drs Susskind and Witten, the media have let string theory go on without asking for definite testable predictions. I don't think the layman public takes much notice of 'theory' it can't understand. There are three types of not-yet-falsified theory: 1. Experimentally confirmed but mathematically abstract and possibly incomplete (Standard Model, relativity, quantum mechanics, etc.). 2. Not experimentally confirmed but popularised with best-selling books, though possibly testable (Hawking radiation, gravity waves, etc.). 3. Untestable/not falsifiable (overhyped string theory's vague landscape 'predicting' 10^500 vacua, 10/11 dimensions, vague suggestions of superpartners without predicting their energy to show if they can be potentially checked or not, 'prediction' of unobservable gravitons without any testable predictions of gravity).
Gravity is the force of Feynman-diagram gauge bosons coming from distances/times in the past. In the Standard Model, the quantum field theory of electromagnetic and nuclear interactions which has made numerous well-checked predictions, forces arise by the exchange of gauge bosons. This is well known from the pictorial 'Feynman diagrams' of quantum field theory. Gravitation, as illustrated by this mechanism and proved below, is just this exchange process. Gauge bosons hit the mass and bounce back, like a reflection. This causes the contraction term of general relativity, a physical contraction of radius around a mass: (1/3)GM/c^2 = 1.5 mm for Earth. Newton's gravity law is (written in tensor calculus notation): R_uv = 4.Pi(G/c^2)T_uv. Einstein's result is: R_uv - 0.5Rg_uv = 8.Pi(G/c^2)T_uv. Notice that the special term introduced is the contraction term, -0.5Rg_uv. Mass (which by the well-checked equivalence principle of general relativity is identical for inertial and gravitational forces) arises not from the fundamental core particles of matter themselves, but by a miring effect of the spacetime fabric, the 'Higgs bosons'. Forces are exchanges of gauge bosons: the pressure causes the cosmic expansion. The big bang observable in spacetime has speeds from 0 to c with times past of 0 toward 15 billion years, giving an outward force of F = ma = m.(variation in speeds from 0 to c)/(variation in times from 0 to age of universe) ~ 7 x 10^43 Newtons. Newton's 3rd law gives an equal inward force, carried by gauge bosons, which are shielded by matter. The gauge bosons interact with uniform-mass Higgs field particles, which do the shielding and have mass. Single free fundamental rest-mass particles (electrons, positrons) can only associate with other particles by electromagnetism, which is largely shielded by the veil of polarised vacuum charges surrounding the fundamental particle core. Quarks only exist in pairs or triplets, so the fundamental particles are close enough that the intervening polarised vacuum shield effect is very weak, so they have stronger interactions.
Correcting the Hubble expansion parameter for spacetime: at present, recession speeds are divided into observed distances, H = v/R. This is ambiguous, because it ignores time! The distance R is increasing all the time, so it is not time-independent. To get a proper Hubble 'constant' you therefore need to replace distance with time, t = R/c. This gives the recession constant as v/t, which equals v/t = v/(R/c) = vc/R = cH. So the correct spacetime formulation of the cosmological recession is v/t = cH = 6 x 10^-10 m/s^2. Outward acceleration! This means that the mass of the universe has a net outward force of F = ma = 7 x 10^43 N. (Assuming that F = ma is not bogus!) Newton's 3rd law says there is an implosion inward of the same force, 7 x 10^43 N. (Assuming that Newton's 3rd law is not bogus!) This predicts gravity as the shielding of this inward force of gauge boson radiation, to within existing data! (Assuming that the inward force is carried by the gauge bosons which cause gravity.)
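Numerically (a Python sketch: I take H = 1/t with t = 15 Gyr as used on this page, and a density near the critical value as an illustrative assumption, since the density figure is only pinned down further below):

```python
import math

C = 2.998e8                  # speed of light, m/s
T_UNIVERSE = 15e9 * 3.156e7  # 15 Gyr in seconds
H = 1 / T_UNIVERSE           # Hubble parameter as 1/t, s^-1
R = C / H                    # Hubble radius, m

a = C * H                    # the outward acceleration derived above
print(f"a = cH = {a:.1e} m/s^2")  # ~6e-10 m/s^2, as stated

RHO = 9.5e-27                # assumed density of the universe, kg/m^3
M = RHO * (4 / 3) * math.pi * R**3
print(f"F = Ma = {M * a:.1e} N")  # ~7e43 N, matching the figure above
```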
Causal approach to loop quantum gravity (spin foam vacuum): a volume contains matter and spacetime fabric, which behaves as the perfect fluid analogy to general relativity. As particles move in the spacetime fabric, it has to flow out of the way somewhere. It goes into the void behind the moving particle. Hence, spacetime fabric filling a similar volume goes in the opposite direction to moving matter, filling in the void behind. Two analogies: (1) 'holes' in semiconductor electronics go the other way to electrons, and (2) a 70 litre person walking south along a corridor is matched by 70 litres of air moving north. At the end, the person is at the other end of the corridor to the end he started in, and 70 litres of air has moved up to fill in the space he vacated. Thus, simple logic and facts give us a quantitative and predictive calculating tool: an equal volume of the fluid goes in the opposite direction with the same motion, which allows the inward vacuum spacetime fabric pressure from the big bang to be calculated. This allows gravity to be estimated the same way, with the same result as the other method. Actually, boson radiations spend part of their existence as matter-antimatter pairs, so the two calculations do not duplicate each other. If the fraction due to radiation (boson) pressure is f, that due to perfect fluid pressure is 1 - f. The total remains the same: (f) + (1 - f) = 1.
The net force is simply the proportion of the force from the projected cone (in the illustrations below), which is due to the asymmetry introduced by the effect of mass on the Higgs field (reflecting inward directed gauge bosons back). Outside the cone areas, the inward gauge boson force contributions are symmetrical from opposite directions around the observer, so those contributions all cancel out! This geometry predicts the strength of gravity very accurately…
A shield, like the planet earth, is composed of very small, subatomic particles. The very small shielding area per particle means that there will be an insignificant chance of the fundamental particles within the earth ‘overlapping’ one another by being directly behind each other.
The total shield area is therefore directly proportional to the total mass: the total shield area is equal to the area of shielding by 1 fundamental particle, multiplied by the total number of particles. (Newton showed that a spherically symmetrical arrangement of masses, say in the earth, by the inverse-square gravity law is similar to the gravity from the same mass located at the centre, because the mass within a shell depends on its area and the square of its radius.) The earth's mass in the standard model is due to particles associated with up and down quarks: the Higgs field.
From the illustration above, the total outward force of the big bang,
(total outward force) = ma = (mass of universe).(Hubble acceleration, a = Hc, see detailed discussion and proof further on below),
while the gravity force is the shielded inward reaction (by Newton's 3rd law the outward force has an equal and opposite reaction):
F = (total outward force).(cross-sectional area of shield projected to radius R) / (total spherical area with radius R).
PROOF (1) BY RADIATION PRESSURE: There is strong evidence from electromagnetic theory that every fundamental particle has a black-hole cross-sectional shield area for the fluid analogy of general relativity. (Discussed further on.) The effective shielding radius of a black hole of mass M is equal to 2GM/c^2.

The cross-sectional area of the shield projected to radius R is equal to the area of the fundamental particle (Pi multiplied by the square of the black-hole radius for a similar mass), multiplied by (R/r)^2, which is the inverse-square law for the geometry of the implosion. The total spherical area with radius R is simply 4.Pi multiplied by the square of R. Inserting the simple Hubble law results c = RH and R/c = 1/H gives us F = (4/3).Pi.rho.G^2.M^2/(Hr)^2. We then set this equal to the Newtonian gravity force F = GM^2/r^2 and solve, getting G = (3/4)H^2/(Pi.rho). When the effect of the higher density of the universe at the great distance R is included, this becomes G = (3/4)H^2/(Pi.rho_local.e^3).

Feynman discusses the LeSage gravity idea in his 1965 BBC lectures, 'The Character of Physical Law', with a diagram showing that if there is a pressure in space, shielding masses will create a net push.

'If your paper isn't read, they are ignorant of it. It isn't even a put-down, just a fact.' – my comment on Motl's blog. The next comment was from Peter Woit: 'in terms of experimentally checkable predictions, no one has made any especially significant ones since the standard model came together in 1973 with asymptotic freedom.' Woit has seen the censorship problem!

Via the October 1996 Electronics World letters, this mechanism – which Dr Philip Campbell of Nature had said he was 'not able' to publish – correctly predicted that the universe would not be gravitationally decelerating. This was confirmed two years later experimentally by the discovery of Perlmutter, which Nature did publish, although it omitted to say that it had been predicted.
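A numerical sketch of the final formula (Python; H is again taken as 1/(15 Gyr), and the local density figure here is my illustrative assumption, standing in for the supernova-derived data cited below):

```python
import math

H = 1 / (15e9 * 3.156e7)  # Hubble parameter as 1/(15 Gyr), s^-1
RHO_LOCAL = 8e-28         # assumed local density of the universe, kg/m^3

# G = (3/4)H^2/(Pi.rho_local.e^3), the result of proof (1):
G_predicted = 0.75 * H**2 / (math.pi * RHO_LOCAL * math.e**3)
print(f"G predicted = {G_predicted:.2e} m^3 kg^-1 s^-2 (measured: 6.674e-11)")
```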
PROOF (2) BY THE SPACETIME FOAM FABRIC: Apples fall because of gauge boson shielding by nuclear atoms (mainly void space). The same pressure causes the general relativity contraction term.

STEP 1: Pressure is force/area. By geometry (illustrated here), the scaled area of shielding below you is equal to the area of space pressure above that is pushing you down. The shielded area of the sky is 100% if the shield mass is the mass of the universe, so: A_{shielding} = A_{r}M/M_{universe}. (1) Force, F = P_{space}A_{shielding} = (F_{space}/A_{r}).(A_{r}M/M_{universe}) = F_{space}M/M_{universe}. Next (see step 2 below): introduce F_{space} = m_{space}a_{H}. Here, the Hubble velocity variation in spacetime (v = HR) implies an acceleration equal to: a_{H} = dv/dt = c/t = c/(1/H) = cH = RH^{2}, while m_{space} = m(A_{R}/A_{r}) = m(R/r)^{2}, and the mass of the universe is its density, ρ, multiplied by its spherical volume, (4/3)πR^{3}. (2) F = F_{space}M/M_{universe} = (m_{space}a_{H})M/M_{universe} = m(R/r)^{2}(RH^{2})M/[(4/3)πR^{3}ρ].

STEP 2: Air flows around you like a wave as you walk down a corridor (an equal volume goes in the other direction at the same speed, filling in the volume you are vacating as you move). It is not possible for the surrounding fluid to move in the same direction, or a void would form BEHIND you and fluid pressure would continuously increase in FRONT until motion stopped. Therefore, an equal volume of the surrounding fluid moves in the opposite direction at the same speed, permitting uniform motion to occur! Similarly, as fundamental particles move in space, a similar amount of mass-energy in the fabric of space (spin foam vacuum field) is displaced as a wave around the particles in the opposite direction, filling in the void volume being continuously vacated behind them. For the mass of the big bang, the mass-energy of Higgs/virtual particle field particles in the moving fabric of space is similar to the mass of the universe. As the big bang mass goes outward, the fabric of space goes inward around each fundamental particle, filling in the vacated volume. (This inward moving fabric of space exerts pressure, causing the force of gravity.)

‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … “Good question,” says [Steven] Weinberg. “The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.” [Martin] Rees agrees wholeheartedly. “Expanding space is a very unhelpful concept”.’ – New Scientist, 17 April 1993, pp. 32-3.

The effective mass of the spacetime fabric moving inward which actually produces the gravity effect is equal to that which is exactly shielded by the mass (illustrated here). So m_{space} = m, but we also have to allow for the greater distance of the mass which is producing the gravity force by implosion. To take account of the focussing due to the ‘implosion’ of space fabric pressure (see diagram) converging in on us in step 1 above, we scale: m_{space}/m = A_{R}/A_{r}. Hence: m_{space} = mA_{R}/A_{r} = m(R/r)^{2}. This is because nearby areas on which force acts to produce pressure are much smaller than the area of sky at the very great distances where the recession and density are high and produce the source of space pressure and thus gravity.
The big bang recession velocities vary from 0 to c over observable times ranging from 15,000 million years towards zero, so the matter of the universe has an effective outward acceleration of c divided by the age of the universe. This acceleration, a = c/t = cH = RH^{2}, where H is the Hubble constant (in v = HR), is so small that its effects are generally undetectable. (Notice that if we could see and experience forces instantly, the universe would not show this acceleration. This acceleration is only real because we can’t see the universe at an age of 15 Gyr irrespective of distance. By Newton’s 2nd law, the actual outward force, when properly allowing for the varying effective density of the observed universe as a function of spacetime, is large, and by Newton’s 3rd law it has an equal and opposite reaction: an inward force which, where shielded, is gravity.)

(3) F = m(R/r)^{2}(RH^{2})M/[(4/3)πR^{3}ρ] = (3/4)mMH^{2}/(πρr^{2}).

Next: for mass continuity, dρ/dt = -∇.(ρv) = -3ρH. Hence, the effective density is ρ = ρ_{local}e^{3} (the early visible universe has higher density). The reason for multiplying the local measured density of the universe up by a factor of about 20 (the number e^{3}, the cube of the base of natural logarithms) is that it is the denser, more distant universe which contains most of the mass producing most of the inward pressure. Because we see further back in time with increasing distance, we see a more compressed age of the universe. Gravitational push comes to us at light speed, with the same velocity as the visible light that shows the stars. Therefore we have to take account of the higher density at earlier times. What counts is what we see, the spacetime in which distance is directly linked to time past, not the simplistic picture of a universe at constant density, because we can never see or experience gravity from such a thing due to the finite speed of light. The mass continuity equation dρ/dt = -∇.(ρv) is simple hydrodynamics based on Green’s theorem and allows the Hubble law (v = HR) to be inserted and solved. An earlier method of calculation, used in the notes of CERN preprint EXT-2004-007, is to set up a formula for the density at any particular time past, so as to calculate redshifted contributions to inward spacetime fabric pressure from a series of shells surrounding the observer. This gives the same result, ρ = ρ_{local}e^{3}.

(4) F = (3/4)mMH^{2}/(πr^{2}ρ_{local}e^{3}) = mMG/r^{2}, where G = (3/4)H^{2}/(πρ_{local}e^{3}) = 0.0119H^{2}/ρ_{local} = 6.7 x 10^{-11} N m^{2} kg^{-2}, already accurate to within 1.65% using reliable supernovae data reported in Physical Review Letters! If there were some other cause of gravity with similar accuracy, the strength of gravity would then be twice what we measure, so this is a firm testable prediction/confirmation that can be checked even more delicately as more evidence becomes available from current astronomy research…
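As a quick numerical check, here is a minimal Python sketch (my own illustration; Python and the rounded CODATA value of G are my choices, not part of the original calculation) evaluating G = (3/4)H^{2}/(πρ_{local}e^{3}) with the Hubble constant and local density quoted later in this document:

import math

# Inputs quoted in this document: H for 50 km.s^-1.Mpc^-1, and the local
# density from supernova data reported in Physical Review Letters.
H = 1.62e-18          # Hubble constant, s^-1
rho_local = 4.7e-28   # local density of the universe, kg/m^3

# G = (3/4) H^2 / (pi * rho_local * e^3)
G_predicted = 0.75 * H**2 / (math.pi * rho_local * math.exp(3))
G_measured = 6.674e-11   # rounded CODATA value, N.m^2.kg^-2

print(f"predicted G = {G_predicted:.3e}")   # ~6.6e-11
print(f"measured  G = {G_measured:.3e}")
print(f"deviation   = {100 * (1 - G_predicted / G_measured):.1f} %")

With these particular rounded inputs the deviation comes out below the 1.65% quoted above; the exact figure depends on which supernova-derived density value is used.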
SYMBOLS
F = force = ma = PA
M = mass of Earth
P = force / area = F/A = ‘pressure’
A = surface area of a sphere, 4π times (radius squared)
r = distance from person to centre of mass of shield (Earth)
R = radius to big bang gravity source
H = Hubble constant = apparent speed v of galaxy clusters radially from us divided by their distance R when the light was emitted = v/R, hence v = HR = dR/dt, so dt = dR/(RH):
a_{H} = dv/dt = [d(RH)]/[dR/(RH)] = RH.d(RH)/dR = RH^{2} = cH; a constant (Hubble saw light coming from fixed times past, not from stars at fixed distances).
ρ = density of universe (higher at great distances in spacetime, when the age was less and it was more compressed): dρ/dt = -∇.(ρv) = -3ρH. So: ρ = ρ_{local}e^{3}, see below
G = universal gravitational constant (previously impossible to predict from general relativity or string theory)
π = circumference divided by the diameter of a circle, approx. 3.14159265…
e = base of natural logarithms, approx. 2.718281828…
Mass continuity equation (for the galaxies in the spacetime of the receding universe): dρ/dt + ∇.(ρv) = 0. Hence: dρ/dt = -∇.(ρv). Now around us, dx = dy = dz = dr, where r is radius. Hence the divergence term is: ∇.(ρv) = 3d(ρv)/dx. For spherical symmetry the Hubble equation is v = Hr. Hence dρ/dt = -∇.(ρv) = -∇.(ρHr) = -3d(ρHr)/dr = -3ρH(dr/dr) = -3ρH. So dρ/dt = -3ρH. Rearranging: -3H dt = (1/ρ)dρ. Integrating gives: -3Ht = (ln ρ_{1}) – (ln ρ), where ρ_{1} is the local density at time t and ρ is the earlier, higher density. Using the base of natural logarithms (e) to get rid of the ln’s: e^{-3Ht} = ρ_{1}/ρ, the density ratio. Because H = v/r = c/(radius of universe) = 1/(age of universe, t), we have: e^{-3Ht} = (ratio of the current density to the earlier, higher effective density) = e^{-3(1/t)t} = e^{-3} = 1/20. All we are doing here is focussing on spacetime in which density rises back in time, but the outward motion or divergence of matter due to the Hubble expansion offsets this at great distances. So the effective density doesn’t become infinite, only e^{3}, or about 20 times, the local density of the universe at the present time. The inward pressure of gauge bosons from greater distances initially rises because the density of the universe increases at earlier times, but then falls because of divergence, which causes energy reduction (like redshift) of the inward coming gauge bosons.
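To see the integration numerically, here is a minimal sketch (my own illustration, in Python) which integrates dρ/dt = -3ρH over one Hubble time and compares the result with the closed-form ratio e^{-3} = 1/20 derived above:

import math

# Integrate d(rho)/dt = -3*rho*H over one Hubble time t = 1/H, and compare
# with the closed-form density ratio e^-3 ~ 1/20 derived in the text.
H = 1.0             # arbitrary units; only the product H*t matters
t_end = 1.0 / H     # one Hubble time
steps = 100000
dt = t_end / steps

rho = 1.0
for _ in range(steps):
    rho += -3.0 * rho * H * dt   # simple Euler step

print(f"numerical rho(t)/rho(0) = {rho:.5f}")           # ~0.0498
print(f"closed form e^-3        = {math.exp(-3):.5f}")  # 0.04979, i.e. ~1/20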
The proof [above] predicts gravity accurately, with G = (3/4)H^{2}/(πρe^{3}). Electromagnetic force (discussed in the April 2003 Electronics World article, reprinted below with links to illustrations) in quantum field theory (QFT) is due to ‘virtual photons’ which cannot be seen except via the forces they produce. The mechanism is continuous radiation from spinning charges; the centripetal acceleration a = v^{2}/r causes energy emission which is naturally in exchange equilibrium between all similar charges, like the exchange of thermal radiation between bodies at constant temperature. This exchange causes a ‘repulsion’ force between similar charges, due to recoiling apart as they exchange energy (two people firing guns at each other recoil apart). In addition, an ‘attraction’ force occurs between opposite charges that block energy exchange, and are pushed together by energy being received from other directions (shielding-type attraction). The attraction and repulsion forces are equal for similar net charges (as proved in the April 2003 Electronics World article reprinted below). The net inward radiation pressure that drives electromagnetism is similar to gravity, but the addition is different. The electric potential adds up with the number of charged particles, but only in a diffuse scattering-type way like a drunkard’s walk, because straight-line additions are cancelled out by the random distribution of equal numbers of positive and negative charge. The addition only occurs between similar charges, and is cancelled out on any straight line through the universe. The correct summation is therefore statistically equal to the square root of the number of charges of either sign, multiplied by the gravity force proved above.
Hence F(electromagnetism) = mMGN^{1/2}/r^{2} = q_{1}q_{2}/(4πεr^{2}) (Coulomb’s law)
where G = (3/4)H^{2}/(πρe^{3}) as proved above, and N is, as a first approximation, the mass of the universe (4πR^{3}ρ/3 = 4π(c/H)^{3}ρ/3) divided by the mass of a hydrogen atom. This assumes that the universe is hydrogen. In fact it is 90% hydrogen by atomic abundance as a whole, although less near stars (only 70% of the solar system is hydrogen, due to fusion of hydrogen into helium, etc.). Another problem with this way of calculating N is that we assume the fundamental charges to be electrons and protons, when in fact a proton contains two up quarks (each +2/3) and one down quark (-1/3), so there are twice as many fundamental particles. However, the quarks remain close together inside a nucleon and behave for most electromagnetic purposes as a single fundamental charge. With these approximations, the formulae above yield a prediction of the strength factor ε in Coulomb’s law of:
ε = q_{e}^{2}e_{2.7…}^{3}[ρ/(12πm_{e}^{2}m_{proton}Hc^{3})]^{1/2} F/m.
Testing this with the PRL and other data used above (ρ = 4.7 x 10^{-28} kg/m^{3} and H = 1.62 x 10^{-18} s^{-1} for 50 km.s^{-1}Mpc^{-1}) gives ε = 7.4 x 10^{-12} F/m, which is only 17% low compared to the measured value of 8.85419 x 10^{-12} F/m. This relatively small error reflects the hydrogen assumption and the quark effect. Rearranging this formula to yield ρ, and rearranging also G = (3/4)H^{2}/(πρe^{3}) to yield ρ, allows us to set both results for ρ equal and thus to isolate a prediction for H, which can then be substituted into G = (3/4)H^{2}/(πρe^{3}) to give a prediction for ρ which is independent of H:
H = 16π^{2}Gm_{e}^{2}m_{proton}c^{3}ε^{2}/(q_{e}^{4}e_{2.7…}^{3}) = 2.3391 x 10^{-18} s^{-1}, or 72.2 km.s^{-1}Mpc^{-1}, so 1/H = t = 13.55 Gyr.
ρ = 192π^{3}Gm_{e}^{4}m_{proton}^{2}c^{6}ε^{4}/(q_{e}^{8}e_{2.7…}^{9}) = 9.7455 x 10^{-28} kg/m^{3}.
Again, these predictions of the Hubble constant and the density of the universe from the force mechanisms assume that the universe is made of hydrogen, and so are first approximations. However, they clearly show the power of this mechanism-based predictive method.
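The arithmetic chain can be checked with a short Python sketch (my own illustration; the rounded textbook particle constants below are assumptions, since the exact inputs used above are not listed):

import math

# Rounded SI constants (assumed; the text does not list its exact inputs)
q_e = 1.602e-19        # electron charge, C
m_e = 9.109e-31        # electron mass, kg
m_p = 1.6726e-27       # proton mass, kg
c   = 2.998e8          # speed of light, m/s
G   = 6.674e-11        # gravitational constant
eps_measured = 8.85419e-12   # measured permittivity, F/m
e3, e9 = math.exp(3), math.exp(9)

# Inputs quoted above
rho = 4.7e-28          # local density, kg/m^3
H   = 1.62e-18         # Hubble constant for 50 km.s^-1.Mpc^-1

# epsilon = q_e^2 e^3 [rho/(12 pi m_e^2 m_p H c^3)]^(1/2)
eps = q_e**2 * e3 * math.sqrt(rho / (12 * math.pi * m_e**2 * m_p * H * c**3))
print(f"predicted epsilon = {eps:.2e} F/m")   # ~7.4e-12, about 17% low

# Eliminating rho between the two formulae gives H, then rho:
H_pred = 16 * math.pi**2 * G * m_e**2 * m_p * c**3 * eps_measured**2 / (q_e**4 * e3)
rho_pred = 192 * math.pi**3 * G * m_e**4 * m_p**2 * c**6 * eps_measured**4 / (q_e**8 * e9)
print(f"predicted H   = {H_pred:.4e} s^-1")      # ~2.34e-18
print(f"predicted rho = {rho_pred:.4e} kg/m^3")  # ~9.75e-28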
The outward force of the big bang equals the inward force, which is exactly c^{4} / (e^{3} G) = 6.0266 x 10^{42} Newtons.
This inward force is a bit like air pressure, in the sense that you don’t ‘feel’ it. Air pressure is 10 metric tons per square metre, or 14.7 pounds per square inch. Since the human body area is about 2 square metres, the total air force on a body is 2 x 10 = 20 metric tons, or 9.8 x 20,000 = 196,000 Newtons. The nuclear ‘implosion’ bomb works the same way as the big bang: TNT creates equal inward and outward forces, so the plutonium in the middle is compressed by the TNT explosion until its surface area shrinks, reducing neutron loss and causing a chain reaction.
So yes, unless the shell you refer to below has such great strength that it could magically resist the 6.0266 x 10^{42} Newtons inward force, it would be accelerated inwards (collapse). The object in the middle would however be exchanging gauge bosons with the shell, which in turn would be exchanging them with the surrounding universe. You have to deal with it step by step.
Why should Heaviside energy, trapped by gravity (the only force which can bend light, and which is generated by energy as well as mass according to general relativity), not have a black hole shielding area? Your question seems falsely to assume that Planck dimensions, obtained by fiddling about with assumed ‘fundamental constants’ using dimensional analysis with no empirical support or proof, are somehow real science, when there is no evidence whatsoever to justify any of them. I live in a quite different world of science, where every statement needs evidence or proof, not the authority of Planck or someone else to substantiate the guess. Authority is not a safe guide. The black hole electron core is proved by Catt’s observation that static electric charge is Heaviside electromagnetic energy and thus has a spin at light speed (see quote below). Planck dimensions are obtained from dimensional analysis, and are assumed to apply to strings by string theorists who don’t have contact with the real world. Gravity is the shielding of gauge boson pressure. The shielding area of a fundamental particle is the area of a black hole of similar mass, which for electrons etc. is far smaller than the Planck area. So even for the planet earth, most of the gravity-causing gauge boson radiation is not stopped.
The error in quantum gravitation which disproves Witten’s claim that string theory ‘predicts gravity’
In quantum gravity, the big error in physics is that Edwin Hubble in 1929 divided the Doppler-shift-determined recession speeds by the apparent distances to get his constant, v/R = H. In fact, the distances increase while the light and gravity effect are actually coming back to us. What he should have done is to represent it as a variation in speed with time past. The whole point about spacetime is precisely that there is equivalence between seeing things at larger distances and seeing things further back in time. You cannot simply describe the Hubble effect as a variation in speed with distance, because time past is involved! Whereas H has units of s^{-1} (1/age of universe), the directly observed Hubble ratio is equal to v/t = RH/(1/H) = RH^{2} (and therefore has units of m.s^{-2}: acceleration). In the big bang, the recession velocities from here outward vary from v = 0 towards v = c, and the corresponding times after the big bang vary from 15,000 million years (t = 1/H) towards zero time. Hence, the apparent acceleration as seen in spacetime is
a = (variation in velocity)/(variation in time) = c/(1/H) = cH = 6 x 10^{-10} ms^{-2}.
Although a small acceleration, a large mass of the universe is involved, so the outward force (F = ma) is very large. The 3^{rd} law of motion implies an equal inward force, like an implosion, which in LeSage gravity gives the right value for G, disproving the ‘critical density’ formula of general relativity by a factor of ½e^{3} ≈ 10. This disproves most speculative ‘dark matter’. Since gravity is the inward push caused by the graviton/Higgs field flowing around the moving fundamental particles to fill in the void left in their wake, there will only be a gravitational ‘pull’ (push) where there is a surrounding expansion. Where there is no surrounding expansion there is no gravitational retardation to slow matter down. This is in agreement with observations that there is no slowing down (a fictitious acceleration is usually postulated to explain the lack of slowing down of supernovae).
The density correction factor (e^{3} ≈ 20), explained: for mass continuity of any expanding gas or explosion debris in hydrodynamics, dρ/dt = -∇.(ρv) = -3ρH. Inserting the Hubble expansion rate v = HR and solving gives ρ = ρ_{local}e^{3} for the effective density of the earlier, denser universe whose light and gauge boson radiation reach us now. The full reasoning, and the equivalent shell-summation calculation of CERN preprint EXT-2004-007, are given above.
I don’t have a model, just the facts, which are based on Catt’s experiments. Catt puts Heaviside energy into a conductor, which charges up with energy at light speed; the energy has no mechanism to slow down. The nature of charge in a spinning fundamental particle is therefore likely to be energy. This associates with vacuum particles, Higgs bosons, which give rise to the mass. All this is Standard Model stuff. All I’m doing is pushing the causal side of the Standard Model to the point where it achieves success. Gauge bosons are radiated continuously from spinning charge, and carry momentum. The momentum of light has been measured; it is fact. It is being radiated by all charges everywhere, not just at great distances.
If the volume of the universe is (4/3)πR^{3} and the expansion is R = ct, then density varies as t^{-3}, and for a star at distance r, the absolute time after the big bang will be t – r/c (where t is our local time after the big bang, about 15 Gyr). So the density of the universe at the absolute age corresponding to visible distance r, divided by the density locally at 15 Gyr, is [(t – r/c)/t]^{-3} = (1 – rc^{-1}t^{-1})^{-3}, which is the factor needed to multiply up the nearby density to give that at earlier times corresponding to large visible distances. This formula gives infinite density at the finite radius of the universe, whereas an infinite density only exists in a singularity; this requires some dismissal of special relativity, either by saying that the universe underwent a faster-than-c expansion at early times (Guth’s special relativity violating inflationary universe), or else by saying that the redshifted radiation coming to us is actually travelling very slowly (which is more heretical than Guth’s conjecture). Setting this equal to the density factor e^{3}, we see that 1 – r/(ct) = 1/e. Hence r = 0.632ct. This means that the effective distance at which the gravity mechanism source lies is at 63.2% of the radius of the universe, R = ct. At that distance, the density of the universe is 20 times the local density where we are, at a time of 15,000,000,000 years after the big bang. Therefore, the effective average distance of the gravity source is 9,500,000,000 light-years away, corresponding to 5,500,000,000 years after the big bang.
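The geometry just described solves in one line; a Python sketch (my own illustration):

import math

# Solve (1 - r/(c*t))^-3 = e^3 for the effective gravity-source distance r,
# taking t = 15 Gyr as in the text.
t_gyr = 15.0
fraction = 1.0 - 1.0 / math.e            # r/(c*t) = 1 - 1/e
print(f"r = {fraction:.3f} c.t")                                    # 0.632
print(f"effective source distance = {fraction * t_gyr:.1f} Gly")    # ~9.5
print(f"corresponding epoch = {(1 - fraction) * t_gyr:.1f} Gyr after big bang")  # ~5.5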
Light has momentum and exerts pressure, delivering energy. The pressure towards us due to the gauge bosons (forcecausing radiation of quantum field theory), produces the contraction effect of general relativity and also gravity by pushing us from all directions equally, except where reduced by the shielding of the planet earth below us. Hence, the overriding push is that coming downwards from the stars above us, which is greater than the shielded effect coming up through the earth. This is the mechanism of the acceleration due to gravity. We are seeing the past with distance in the big bang! Gravity consists of gauge boson radiation, coming from the past just like light itself. The big bang causes outward acceleration in observable spacetime (variation in speed from 0 toward c per variation of times past from 0 toward 15,000,000,000 years), hence force by Newton’s empirical 2^{nd} law, F = ma. The 3^{rd} empirical law of Newton says there’s equal inward force, carried by gauge bosons that get shielded by mass, proving gravity to within 1.65%.
The proofs show that the local density (i.e., the density at 15,000,000,000 years after the origin) of the universe is: ρ_{local} = 3H^{2}/(4πe^{3}G). The mechanism also shows that because gravity is an inward push in reaction to the surrounding expansion, there is asymmetry at great distances and thus no gravitational retardation of the expansion (predicted via the October 1996 issue of Electronics World, before experimental confirmation by Perlmutter using automated CCD observations of distant supernovae). Because there is no slowing down due to the mechanism, the application of general relativity to cosmology is modified slightly, and the radius of the universe is R = ct = c/H, where H is the Hubble constant. The observable recession acceleration in spacetime is a = dv/dt = c/t = Hc.
Hence, outward force of big bang: F = Ma = [(4/3)πR^{3}ρ_{local}].[Hc] = c^{4}/(e^{3}G) = 6.0266 x 10^{42} Newtons. Notice the permitted high accuracy, since the force is simply F = c^{4}/(e^{3}G), where c, e (a mathematical constant) and G are all well known. (The density and the Hubble constant have cancelled out.) When you put this result for the outward force into the geometry in the lower illustration above, and allow for the effective outward force being e^{3} times stronger than the apparent force (on account of the higher density of the earlier universe, since we are seeing – and being affected by – radiation from the past; see calculations later on), you get F = Gm^{2}/r^{2} Newtons, if the shielding area is taken as the black hole area (radius 2Gm/c^{2}). Why m^{2}? Because all mass is created by the same fundamental particles, the ‘Higgs bosons’ of the standard model, which are the building blocks of all mass, inertial and gravitational! This is evidence that mass is quantized, hence a theory of quantum gravitation.
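The cancellation of the density and the Hubble constant can be demonstrated directly; this Python sketch (my own illustration) computes F = Ma for two different assumed values of H and gets the same c^{4}/(e^{3}G) either way:

import math

# F = M*a with rho_local = 3H^2/(4 pi e^3 G) is independent of H and
# equals c^4/(e^3 G), as stated in the text.
c = 2.998e8
G = 6.674e-11
e3 = math.exp(3)

for H in (1.6e-18, 2.3e-18):            # two arbitrary Hubble constants
    rho_local = 3 * H**2 / (4 * math.pi * e3 * G)
    R = c / H                           # radius of universe, R = c/H
    M = (4.0 / 3.0) * math.pi * R**3 * rho_local
    F = M * (H * c)                     # outward force, a = Hc
    print(f"H = {H:.1e} s^-1: F = {F:.4e} N")

print(f"c^4/(e^3 G) = {c**4 / (e3 * G):.4e} N")   # ~6.03e42 N either way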
The heuristic explanation of this 137 anomaly is just the shielding factor by the polarised vacuum:
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v. 78, 1997, no. 3, p. 424.
Heisenberg's uncertainty principle says pd = h/(2π), where p is the uncertainty in momentum and d is the uncertainty in distance. This comes from his imaginary gamma-ray microscope, and is usually written as a minimum (instead of with "=" as above), since there will be other sources of uncertainty in the measurement process.

For light-wave momentum p = mc: pd = (mc)(ct) = Et, where E is the uncertainty in energy (E = mc^{2}) and t is the uncertainty in time. Hence, Et = h/(2π), so t = h/(2πE), d/c = h/(2πE), and d = hc/(2πE). This result is used to show that an 80 GeV W or Z gauge boson will have a range of the order of 10^{-17} m. So it's OK.

Now, E = Fd implies d = hc/(2πE) = hc/(2πFd). Hence F = hc/(2πd^{2}). This force is 137.036 times higher than Coulomb's law for unit fundamental charges.
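The 137.036 ratio and the gauge boson range are easy to check numerically; a Python sketch with rounded constants (my own illustration):

import math

h = 6.626e-34        # Planck's constant, J.s
c = 2.998e8          # speed of light, m/s
q = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # permittivity of free space, F/m

d = 1e-15            # any distance; the ratio below is independent of d

F_uncertainty = h * c / (2 * math.pi * d**2)       # F = hc/(2 pi d^2)
F_coulomb = q**2 / (4 * math.pi * eps0 * d**2)     # Coulomb force, unit charges
print(f"ratio = {F_uncertainty / F_coulomb:.2f}")  # ~137.0, i.e. 1/alpha

# Range of an 80 GeV gauge boson, d = hc/(2 pi E):
E = 80e9 * q                                       # 80 GeV in joules
print(f"range = {h * c / (2 * math.pi * E):.2e} m")

The computed range comes out at a few times 10^{-18} m, consistent with the order-of-magnitude figure quoted above.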
Notice that in the last sentence I've suddenly gone from thinking of d as an uncertainty in distance to thinking of it as the actual distance between two charges; but the gauge boson has to go that distance to cause the force anyway. Clearly what's physically happening is that the true force is 137.036 times Coulomb's law, so the real charge is 137.036 times the observed charge. This is reduced by the correction factor 1/137.036 because most of the charge is screened out by polarised charges in the vacuum around the electron core:

"... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies)." – arXiv: hep-th/0510040, p. 71.

The unified Standard Model force is F = hc/(2πd^{2}). That's the superforce at very high energies, in nuclear physics. At lower energies it is shielded by the factor 137.036 for photon gauge bosons in electromagnetism, or by exp(-d/x) for vacuum attenuation by short-ranged nuclear particles, where x = hc/(2πE).

This is dealt with at http://einstein157.tripod.com/ and the other sites. All the detailed calculations of the Standard Model really model the vacuum processes for different types of virtual particles and gauge bosons. The whole mainstream way of thinking about the Standard Model is related to energy. What is really happening is that at higher energies you knock particles together harder, so their protective shield of polarised vacuum particles gets partially breached, and you can experience a stronger force mediated by different particles!
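As a sketch of this picture (my own illustration, with an assumed ~80 GeV attenuation energy), the three force laws named above can be tabulated at a few sample distances:

import math

# Unshielded 'superforce' F = hc/(2 pi d^2); electromagnetism is 137.036
# times weaker; a short-range force is attenuated by exp(-d/x), x = hc/(2 pi E).
h, c, q = 6.626e-34, 2.998e8, 1.602e-19
E_w = 80e9 * q                       # assumed ~80 GeV gauge boson energy, J
x = h * c / (2 * math.pi * E_w)      # attenuation range, ~2.5e-18 m

for d in (1e-18, 5e-18, 1e-17):      # sample distances, metres
    F_super = h * c / (2 * math.pi * d**2)
    F_em = F_super / 137.036
    F_short = F_super * math.exp(-d / x)
    print(f"d = {d:.0e} m: super {F_super:.2e} N, em {F_em:.2e} N, short {F_short:.2e} N")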
Maxwell supposed that the variation in voltage (hence electric field strength) in a capacitor plate causes an ethereal "displacement current". Mathematically Maxwell's trick works, since you put the "displacement current" law together with Faraday's law of induction and the solution is Maxwell's light model, predicting the correct speed of light. However, this changes when you realise that displacement current is itself really electromagnetic radiation, and acts at 90 degrees to the direction light propagates in Maxwell's model. Maxwell's model is entirely self-contradictory, and so his unification of electricity and magnetism falls apart.

Maxwell's unification is wrong, because the reality is that the "displacement current" effects result from electromagnetic radiation emitted transversely when the current varies with time (hence when charges accelerate) in response to the time-varying voltage. This completely alters the picture we have of what light is!

Comparison:

(1) Maxwell's displacement current: voltage varying with time creates an ethereal "displacement current" (not mere electromagnetic radiation) in the vacuum.

(2) True model to fully replace "displacement current": voltage varying with time accelerates charges in the conductor, which as a result emit radiation transversely.

I gave logical arguments for this kind of thing (without the full details I have recently discovered) in my letter published in the March 2005 issue of Electronics World. Notice that Catt uses a completely false picture of electricity with discontinuities (vertically abrupt rises in voltage at the front of a logic pulse) which don't exist in the real world, so he does not bother to deal with the facts and missed the mechanism. However, Catt is right to argue that the flaw in Maxwell's classical electromagnetism stems from the ignorance Maxwell had of the way current must spread along the plates at light speed.
Physically, Coulomb's law and Gauss's law come from the SU(2)xU(1) portion of the Standard Model: the breakdown of electroweak theory occurs because the vacuum rapidly attenuates the gauge bosons of the weak force (W and Z) over short ranges at low energy, but merely shields the electromagnetic force gauge boson (photon) by a factor of 1/137 at low energy, and
"... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electronpositron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies)."  arxiv hepth/0510040, p 71 [12]
You have to include the Standard Model to allow for what happens in particle accelerators when particles are fired together at high energy. The physical model above does give a correct interpretation of QFT and is also used in many good books (including Penrose's Road to Reality). However, as stated [13], the vacuum particles look different to observers in different states of motion, violating the postulate of special/restricted relativity (which is wrong anyway for the twins paradox, i.e., for ignoring all accelerating motions and spacetime curvature). This is why it is a bit heretical. Nevertheless it is confirmed by Koltick's experiments in 1997, published in PRL.
As proved, the physical nature of "displacement current" is gauge boson/radio wave energy exchange in the Catt anomaly [16]. Catt has no idea what the Standard Model or general relativity are about, but that is what his work can be used to understand, by getting to grips with what "displacement current" really is (radio), as distinct from the fantasy Maxwell developed, in which "displacement current" is not radio but is involved in radio together with Faraday's law, both acting at 90 degrees to the direction of propagation of the radio wave. Maxwell's light is a complete fantasy that has been justified by a falsified history which Maxwell and Hertz invented.
When Catt's TEM wave is corrected to include the fact that the step has a finite, not zero, rise time, there is electromagnetic radiation emission sideways. Each conductor emits an inverted mirror image of the electromagnetic radiation pulse of the other, so the conductors swap energy. This is the true mechanism for the "displacement current" effect in Maxwell's equations. The electromagnetic radiation is not seen at a large distance because, when the distance from the transmission line is large compared to the gap between the conductors, there is perfect interference, so no energy is lost by radiation externally from the transmission line. Also, the electromagnetic radiation or "displacement current" is the mechanism of forces in electromagnetism. It shows that Maxwell's theory of light is misplaced, because Maxwell has light propagating in a direction at 90 degrees to "displacement current". Since light is "displacement current", it goes in the same direction, not at 90 degrees to it.
The minimal SUSY Standard Model shows the electromagnetic force coupling increasing from alpha of 1/137 to alpha of 1/25 at 10^{16} GeV, and the strong force falling from 1 to 1/25 at the same energy, hence unification. The reason why the unification superforce strength is not 137 times electromagnetism but only 137/25, or about 5.5 times electromagnetism, is heuristically explicable in terms of potential energy for the various force gauge bosons.

If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain, and allow predictions to be made of, the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you may learn that SUSY just isn't needed or is plain wrong, or else you will get a better grip on what is real and make some testable predictions as a result.

I frankly think there is something wrong with the depiction of the variation of weak force strength with energy shown in Figure 66 of Lisa Randall's "Warped Passages". The weak strength is extremely low (alpha of about 10^{-10}) normally, say for beta decay of a neutron into a proton plus electron and antineutrino. This force coupling factor is given by π^{2}hM^{4}/(Tc^{2}m^{5}), where h is Planck's constant from Planck's energy equation E = hf, M is the mass of the proton, T is the effective energy release 'life' of the radioactive decay (i.e. the familiar half-life multiplied by 1/ln 2 = 1.44), c is the velocity of light, and m is the mass of an electron.

The diagram seems to indicate that at low energy the weak force is stronger than electromagnetism, which seems to be an error. The conventional QFT treatments show that electroweak forces increase as a weak logarithmic function of energy. See arXiv: hep-th/0510040, p. 70.
http://electrogravity.blogspot.com/
There is a lot of obfuscation introduced by maths even at low levels of physics. Most QED calculations completely cover up the problems between SR and QED, e.g. that the virtual particles in the vacuum look different to observers in different states of motion.
In Coulomb's law, the QED vector boson ("photon") exchange force mechanism will be affected by motion, because photon exchanges along the direction of motion will be slowed down. Whether the FitzGerald-Lorentz contraction is physically due to this effect, or to a physical compression/squeeze from other force-carrying radiation of the vacuum, is unspeakable in plain English. The problem is dressed up in fancy maths, so people remain unaware that SR became obsolete with GR covariance in 1915. On the spin foam vacuum of LQG, the vacuum is full of all kinds of real and imaginary particles with various spins: virtual fermions, vector bosons, the speculative Higgs field and superpartners.

First of all, take the simple question of how the vacuum allows photons to propagate any distance, but quickly attenuates W and Z bosons. Then you are back to the two equations for a transverse light-wave photon: Faraday's law of electric induction, and Maxwell's vacuum displacement current in Ampere's law. Maxwell (after discarding two mechanical vacuums as wrong) wrote that the displacement current in the vacuum was down to tiny spinning "elements" of the vacuum (Maxwell, Treatise, Art. 822; based partly on the effect of magnetism on polarised light).
I cannot see how loop quantum gravity can be properly understood unless the vacuum spin network is physically understood with some semi-classical model. People always try to avoid any realistic discussion of spin by claiming that because electron spin is half a unit, the electron would have to spin around twice to look like one revolution. This isn't strange, because a Möbius strip with half a turn on the loop has the same property (because both sides are joined, a line drawn around it is twice the length of the circumference). Similarly, the role of the Schroedinger/Dirac wave equations is not completely weird, because sound waves are described by wave equations while being composed of particles. All you need is a lot of virtual particles in the vacuum interacting with the real particle, so that it is jiggled around as if by Brownian motion.
It's really sad that virtually nobody is interested in pursuing this line of research, because everyone is brainwashed by string theory. I don't have the time or resources to do anything, and am not an expert in QFT. But I can see why nobody in the mainstream is looking in the right direction: it's simply because fact is stranger than fiction. They're all more at home in the 11th dimension than anywhere else...
Black holes are the spacetime fabric perfect fluid. The lack of viscosity is the lack of continuous drag. You just get a bulk flow of the spacetime fabric around the fundamental particle. The Standard Model says the mass has a physical mechanism: the surrounding Higgs field. When you move a fundamental particle in the Higgs field and approach light speed, the Higgs field has less and less time to flow out of the way, so it mires the particle more, increasing its mass. You can't move a particle at light speed, because the Higgs field would have ZERO time to flow out of the way (since Higgs bosons are limited to light speed themselves), so the inertial mass would be infinite. The increase in mass due to a surrounding fluid is known in hydrodynamics:
‘In this chapter it is proposed to study the very interesting dynamical problem furnished by the motion of one or more solids in a frictionless liquid. The development of this subject is due mainly to Thomson and Tait [Natural Philosophy, Art. 320] and to Kirchhoff [‘Ueber die Bewegung eines Rotationskörpers in einer Flüssigkeit’, Crelle, lxxi. 237 (1869); Mechanik, c. xix]. … it appeared that the whole effect of the fluid might be represented by an addition to the inertia of the solid. The same result will be found to hold in general, provided we use the term ‘inertia’ in a somewhat extended sense.’ – Sir Horace Lamb, Hydrodynamics, Cambridge University Press, 6th ed., 1932, p. 160.

(Hence, the gauge boson radiation of the gravitational field causes inertia. This is also explored in the works of Drs Rueda and Haisch: see http://arxiv.org/abs/physics/9802031 , http://arxiv.org/abs/grqc/0209016 , http://www.calphysics.org/articles/newscientist.html and http://www.eurekalert.org/pub_releases/200508/nsijv081005.php .)
The black holes of the spacetime fabric are the virtual fermions, etc., in the vacuum, which are different from the real electron, because the real electron is surrounded by a polarised layer of vacuum charges and the Higgs field, which gives the mass.

The field which is responsible for associating the Higgs field particles with the mass can be inside or outside the polarised veil of dielectric, right? If the Higgs field particles are inside the polarised veil, the force between the fundamental particle and the mass-creating field particle is very strong, say 137 times Coulomb's law. On the other hand, if the mass-causing Higgs field particles are outside the polarised veil, the force is 137 times less than the strong force. This implies how the 137 factor gets in to the distribution of masses of leptons and hadrons.
A revised treatment of some of the following material can be found at http://electrogravity.blogspot.com/2006/02/geometryofmagneticmomentcorrection.html
Geometry of the magnetic moment correction for the electron: the reason for the number 2.

Magnetic moment of electron = Dirac factor + 1st virtual particle coupling correction term = 1 + 1/(2π x 137.0...) = 1.00116 Bohr magnetons to 6 significant figures (more coupling terms are needed for greater accuracy). The 137.0... number is usually signified by 1/alpha, but it is clearer to use the number than to write 1 + alpha/(2π).

The 1 is the magnetic contribution from the core of the electron. The second term, alpha/(2π) or 1/(2π x 137), is the contribution from a virtual electron which is associated with the real electron core via the shielded electric force. The charge of the core is 137e, the shielding due to the veil of polarised vacuum virtual charges around the core is 1/137, so the observed charge outside the veil is just e.

The core magnetism of 1 Bohr magneton predicted by Dirac's equation is too low. The true factor is nearer 1.00116, and the additional 0.116% is due to the vacuum virtual particles.
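Numerically (a trivial Python check, my own illustration):

import math

# Dirac term plus the first vacuum coupling correction, written with the
# 137.036 shielding factor as in the text: 1 + 1/(2 pi x 137.036).
alpha_inverse = 137.036
moment = 1 + 1 / (2 * math.pi * alpha_inverse)
print(f"magnetic moment = {moment:.6f} Bohr magnetons")   # 1.001161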
In other words, the vacuum reduces the electron's electric field, but increases its magnetic field! The reason for the increase in the magnetic field by the addition of alpha/(2π) = 1/(2π x 137.0...) is simply that a virtual particle in the vacuum pairs up with the real particle via the electric field. The contribution of the second particle is smaller than 1 Bohr magneton by three factors: 2, π, and 137.0... Why? Well, heuristic reasoning suggests that the second particle is outside the polarised shield, and is thus subject to a shielding of 1/137.

The magnetic field from the real electron core which is transverse to the radial direction (i.e., think of the magnetic field lines over the earth's equator, which run at 90 degrees to the radial direction) will be shielded by the 137 factor. But the magnetic field that is parallel to the radial direction (i.e., the magnetic field lines emerging from the earth's poles) is completely unshielded.

Whereas an electric field gets shielded where it is parallel to another electric field (the polarised vacuum field arrow points outward because virtual positrons are closer to the negative core than virtual electrons, so this outward arrow opposes the inward arrow of the electric field towards the real electron core, causing attenuation), steady-state magnetic fields only interact with steady-state electric fields where specified by Ampere's law, which is half of one of Maxwell's four equations.
Ampere's law states that a curling magnetic field causes an electric current, just as an electric field does. Normally, to get an electric current you need an electric potential difference between the two ends of a conductor, which causes electrons to drift. But a curling magnetic field around the conductor does exactly the same job. Therefore, a curling magnetic field around a conductor is quite indistinguishable from an electric field which varies along a conductor. You might say no, because the two are different, but you'd be wrong. If you have an electric field variation, then the current will (by conventional theory) cause a curling magnetic field around the conductor.

At the end of the day, the two situations are identical. Moreover, conventional electric theory has some serious issues, since Maxwell's equations assume instantaneous action at a distance (such as a whole capacitor plate being charged up simultaneously), which has been experimentally and theoretically disproved, despite the suppression of this fact as 'heresy'.

Maxwell's equations have other issues as well; for example Coulomb's law, which is expressed in Maxwell's equations as the electric field from a charge (Gauss's law), is known to be wrong at high energies. Quantum field theory, and the experiments confirming it published by Koltick in PRL in 1997, show that electric forces are 7% higher at 80 GeV than at low energies. This is because the polarised vacuum is like a sponge-foam covering on an iron cannon ball. If you knock such foam-covered balls together very gently, you don't get a metallic clang or anything impressive. But if you fire them together very hard, the foam covering is breached by the force of the impact, and you experience the effects of the strong cores to a greater degree!

The polarised vacuum veil around the real electron core behaves a bit like a shield of foam rubber around a steel ball, protecting it from strong interactions if the impacts are low energy, but breaking down in very high-energy impacts.

The Schwinger correction term, 1/(2π x 137), contains 137 because of the shielding by the polarised vacuum veil.
The coupling is physically interpreted as a Pauli-exclusion-principle type magnetic pairing of the real electron core with one virtual positron just outside the polarised veil. Because the spins are aligned to some extent in this process, the magnetic field which is of importance between the real electron core and the virtual particle is the transverse magnetic field, which is (unlike the polar magnetic field) shielded by the 137 factor like the electric field.

So that explains why the magnetic contribution from the virtual particle is 137 times weaker than that from the real electron core: because the transverse magnetic field from the real electron core is reduced by 137 times, and that is what causes the Pauli exclusion principle spin alignment. The two other reduction factors are 2 and π. These are there simply because each of the two particles is a spinning loop and has its equator on the same plane as the other. The amount of field each particle sees of the other is 1/π of the total, because a loop has a circumference of π times the diameter, and only the diameter is seen edge-on, which means that only 1/π of the total is seen edge-on. Because the same occurs for each of the two particles (the one real particle and the virtual particle), the correct reduction factor is twice this. Obviously, this is heuristic, and by itself doesn't prove anything. It is only when you add this explanation to the prediction of meson and baryon masses by the same mechanism of 137, and the force strengths derivation, that it starts to become more convincing. Obviously, it needs further work to see how much it says about further coupling corrections, but its advantage is that it is a discrete picture, so you don't have to artificially and arbitrarily impose cutoffs to get rid of infinities, like those of existing (continuous integral, not discrete) QFT renormalisation.
One thing more I want to say after the latest post (a few back, actually) here on deriving the strong nuclear force as 137 times Coulomb's law at low energies. The Standard Model does not indicate perfect force unification at high energy unless there is supersymmetry (SUSY), which requires superpartners which have never been observed, and whose energy is not predictable.

The minimal theory of supersymmetry predicts that the strong, weak and electromagnetic forces unify at 10^{16} GeV. I've mentioned already that Koltick's experiments in 1997 were at 80 GeV, and that was pushing it. There is no way you can ever test a SUSY unification theory by firing particles together on this planet, since the planet isn't big enough to house or power such a massive accelerator. So you might as well be talking about UFOs as SUSY, because neither is observable scientifically in any conceivable future scenario of real science.

So let's forget SUSY and just think about the Standard Model as it stands. This shows that the strong, weak, and electromagnetic forces become almost (but not quite) unified at around 10^{14} GeV, with an interaction strength around alpha of 0.02, but that electromagnetism continues to rise at higher energy, becoming 0.033 at 10^{20} GeV, for example. Basically, the Standard Model without SUSY predicts that electromagnetism continues to rise as a weak (logarithmic-type) function of energy, while the strong nuclear force falls. Potential energy conservation could well explain why the strong nuclear force must fall when the electromagnetic force rises. The fundamental force is not the same thing as the particle kinetic energy, remember. Normally you would expect the fundamental force to be completely distinct from the particle energy, but there are changes because the polarised vacuum veil around the core is progressively breached in higher-energy impacts.
The muon is 1.5 units on the scale of (electron mass) x 137 discussed below, but this is heuristically explained by a coupling of the core (mass 1) with a virtual particle, just as the electron couples, increasing its magnetic moment to about 1 + 1/(2π x 137). The mass increase of a muon is 1 + 1/2 because the π is due to spin, and the 137 shielding factor doesn’t apply to bare particle cores in proximity, as it is due to the polarised vacuum veil at longer ranges. This is why unification of forces is approached in higher-energy interactions, which penetrate the veil.
The mechanism is that the 137 number is the ratio between the strong nuclear and electromagnetic force strengths, which is a unification arising from the polarisation of the vacuum around a fundamental particle core. Therefore, the Coulomb force near the core of the electron is the same as the strong nuclear force (137 times the observed Coulomb force), but 99.27% of the core force is shielded by the veil of polarised vacuum surrounding the core. Therefore, if the mass-causing Higgs bosons of the vacuum are outside the polarised veil, they couple weakly, giving a mass 137 times smaller (the electron mass), and if they are inside the veil of polarised vacuum, they couple 137 times more strongly, giving higher-mass particles like muons, quarks, etc. (depending on the discrete number of Higgs bosons coupling to the particle core). The formula for all directly observable elementary particle masses (quarks are not directly observable, only as mesons and baryons) is (0.511 MeV).(137/2)n(N + 1) = 35n(N + 1) MeV.
This idea predicts that a particle core with n fundamental particles (n = 1 for leptons, n = 2 for mesons, and obviously n = 3 for baryons) coupling to N virtual vacuum particles (N is an integer) will have an associative inertial mass of Higgs bosons of:

(0.511 MeV).(137/2)n(N + 1) = 35n(N + 1) MeV,

where 0.511 MeV is the electron mass. Thus we get everything from this one mass plus the integers 1, 2, 3, etc., with a mechanism. We test this below against data for the mass of the muon and all ‘long-lived’ hadrons.
The problem is that people are used to looking to abstruse theory, owing to the success of QFT in some areas, and looking at the data is out of fashion. If you look at the history of chemistry, there were particle masses of atoms, and it took school teachers like Dalton and a Russian to work out periodicity, because the bigwigs were obsessed with vortex atom maths, the ‘string theory’ of that age. Eventually, the obscure school teachers won out over the mathematicians, because the vortex atom (the string theory equivalent of the time) did nothing, while empirical analysis did stuff. It was eventually explained theoretically!
There was a crude empirical equation for lepton masses by A. O. Barut, PRL, v. 42 (1979), p. 1251. The basic idea can be extended to hadrons, as set out above.
Accuracy tested against data for the mass of the muon and all ‘long-lived’ hadrons:

LEPTON (n = 1)

Muon (N = 2): 105 MeV (105.66 MeV measured), 0.6% error!

HADRONS

Mesons (contain n = 2 quarks):

Pions (N = 1): 140 MeV (139.57 and 134.96 actual), 0.3% and 3.7% errors!
Kaons (N = 6): 490 MeV (493.67 and 497.67 actual), 0.7% and 1.6% errors!
Eta (N = 7): 560 MeV (548.8 actual), 2% error!

Baryons (contain n = 3 quarks):

Nucleons (N = 8): 945 MeV (938.28 and 939.57 actual), 0.7% and 0.6% errors!
Lambda (N = 10): 1155 MeV (1115.60 actual), 3.5% error!
Sigmas (N = 10): 1155 MeV (1189.36, 1192.46, and 1197.34 actual), 3.0%, 3.2% and 3.7% errors!
Xi (N = 12): 1365 MeV (1314.9 and 1321.3 actual), 3.8% and 3.3% errors!
Omega (N = 15): 1680 MeV (1672.5 actual), 0.4% error!
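The table can be regenerated from the formula with a short Python sketch (my own illustration; the measured masses are those quoted above):

# Check the 35*n*(N+1) MeV mass formula against the measured values quoted
# in the text (n = fundamental particles in the core, N = vacuum couplings).
particles = [
    ("Muon",     1,  2, [105.66]),
    ("Pions",    2,  1, [139.57, 134.96]),
    ("Kaons",    2,  6, [493.67, 497.67]),
    ("Eta",      2,  7, [548.8]),
    ("Nucleons", 3,  8, [938.28, 939.57]),
    ("Lambda",   3, 10, [1115.60]),
    ("Sigmas",   3, 10, [1189.36, 1192.46, 1197.34]),
    ("Xi",       3, 12, [1314.9, 1321.3]),
    ("Omega",    3, 15, [1672.5]),
]

for name, n, N, measured in particles:
    predicted = 35 * n * (N + 1)    # MeV
    errors = ", ".join(f"{100 * abs(predicted - m) / m:.1f}%" for m in measured)
    print(f"{name:9s} predicted {predicted:5d} MeV, errors: {errors}")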
The mechanism is that the charge of the bare electron core is 137 times the Coulomb (polarisation-shielded) value, so vacuum interactions of the bare cores of fundamental particles attract 137 times as much virtual mass from the vacuum, increasing the inertia similarly. It is absurd to suggest that these close fits, with only a few percent deviation, are random chance, and this can be shown by statistical testing using random numbers as the null hypothesis. So there is empirical evidence that this heuristic interpretation is on the right lines, whereas ‘renormalisation’ is bogus: http://www.cgoakley.demon.co.uk/qft/
Masses of mesons (in units of 137 x the electron mass):
Pions = 1.99 (charged), 1.93 (neutral)
Kaons = 7.05 (charged), 7.11 (neutral)
Eta = 7.84
Masses of baryons (in units of 137 x the electron mass):
Nucleons = 13.4
Lambda = 15.9
Sigmas = 17.0 (positive and neutral), 17.1 (negative)
Xi = 18.8 (neutral), 18.9 (negative)
Omega = 23.9
The masses above for all the major long-lived hadrons are in units of (electron mass) x 137. A statistical chi-squared correlation test, against random numbers as the null hypothesis, indeed gives positive statistical evidence that they are close to integers. Leptons and nucleons are the things most people focus on, and these are not integers when the masses are expressed in units of (electron mass) x 137: the muon is about 1.5 units on this scale, which is explained by the core coupling with a virtual particle discussed above.
To recap, the big bang has an outward force of 6.0266 x 10^{42} Newtons (by Newton’s 2nd law), which results in an equal inward force (by Newton’s 3rd law), and this causes gravity as a shielded inward Higgs field or, rather, gauge boson pressure. This is based on standard heuristic quantum field theory (the Feynman path integral approach), where forces are due not to empirical equations but to the exchange of gauge boson radiation. Where partially shielded by mass, the inward pressure causes gravity. Apples are pushed downwards towards the earth, a shield: ‘… the source of the gravitational field [gauge boson radiation] can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.
LeSage in 1748 argued that there is some kind of pressure in space, and that masses shield one another from the space pressure, thus being pushed together by the unshielded space pressure on the opposite side. Feynman discussed LeSage in the November 1964 lectures, The Character of Physical Law, and elsewhere explained that the major advance of general relativity, the contraction term, shortens the radius of every mass, like the effect of a pressure mechanism for gravity! He did not derive the equation, but we have done so above.
The magnetic force in electromagnetism results from the spin of vacuum particles, and this seems to be one thing about Maxwell’s spacetime fabric that was possibly not entirely wrong:
Maxwell’s 1873 Treatise on Electricity and Magnetism, Articles 822-3: ‘The ... action of magnetism on polarised light [discovered by Faraday not Maxwell] leads ... to the conclusion that in a medium ... is something belonging to the mathematical class as an angular velocity ... This ... cannot be that of any portion of the medium of sensible dimensions rotating as a whole. We must therefore conceive the rotation to be that of very small portions of the medium, each rotating on its own axis [spin] ... The displacements of the medium, during the propagation of light, will produce a disturbance of the vortices ... We shall therefore assume that the variation of vortices caused by the displacement of the medium is subject to the same conditions which Helmholtz, in his great memoir on Vortex-motion [of 1858; sadly Lord Kelvin in 1867, without a fig leaf of empirical evidence, falsely applied this vortex theory to atoms in his paper ‘On Vortex Atoms’, Phil. Mag., v4, creating a mathematical cult of vortex atoms just like the mathematical cult of string theory now; it created a vast amount of prejudice against ‘mere’ experimental evidence of radioactivity and chemistry that Rutherford and Bohr fought], has shewn to regulate the variation of the vortices [spin] of a perfect fluid.’
‘In this chapter it is proposed to study the very interesting dynamical problem furnished by the motion of one or more solids in a frictionless liquid. The development of this subject is due mainly to Thomson and Tait [Natural Philosophy, Art. 320] and to Kirchhoff [‘Ueber die Bewegung eines Rotationskörpers in einer Flüssigkeit’, Crelle, lxxi. 237 (1869); Mechanik, c. xix]. … it appeared that the whole effect of the fluid might be represented by an addition to the inertia of the solid. The same result will be found to hold in general, provided we use the term ‘inertia’ in a somewhat extended sense.’ – Sir Horace Lamb, Hydrodynamics, Cambridge University Press, 6^{th} ed., 1932, p. 160. (Hence, the gauge boson radiation of the gravitational field causes inertia. This is also explored in the works of Drs Rueda and Haisch: see http://arxiv.org/abs/physics/9802031 , http://arxiv.org/abs/gr-qc/0209016 , http://www.calphysics.org/articles/newscientist.html and http://www.eurekalert.org/pub_releases/2005-08/ns-ijv081005.php .)
So the Feynman problem of virtual particles in the spacetime fabric retarding motion does indeed cause the FitzGerald-Lorentz contraction, just as they cause the radial, gravitationally produced contraction of distances around any mass (equivalent to the effect of the pressure of space squeezing things and impeding accelerations). What Feynman thought might cause difficulties is really the mechanism of inertia!
Einstein’s greatest achievement, the proof of a fabric of space in general relativity (known as the dielectric of the vacuum in electronics, and called the continuum by Einstein in his inaugural lecture at Leyden University in 1920), is a neglected concept. While Einstein’s proof of the properties of the fabric of space is theoretical, Ivor Catt and others developed experimental evidence while working with sampling oscilloscopes and pulse generators on the electromagnetic interconnection of ICs. Notice that air pressure is 10 metric tons per square metre, yet nobody ridicules it on the grounds that we can’t feel the 14.7 pounds per square inch. People ridicule the idea that gravity is a pushing effect, claiming that if it were so then an umbrella would somehow stop it (despite the fact that x-rays and other radiation penetrate umbrellas, showing they are mainly void). These people ignore the fact that the same false argument would equally ‘disprove’ pulling gravity ideas, since standing on the umbrella would equally stop ‘attraction’ …
‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities... According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Leyden University lecture on ‘Ether and Relativity’, 1920. (Einstein, A., Sidelights on Relativity, Dover, New York, 1952, pp. 15, 16, and 23.)
‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), MA, MSc, FRS, Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.
‘Some distinguished physicists maintain that modern theories no longer require an aether… I think all they mean is that, since we never have to do with space and aether separately, we can make one word serve for both, and the word they prefer is ‘space’.’ – A.S. Eddington, ‘New Pathways in Science’, v2, p39, 1935.
‘The idealised physical reference object, which is implied in current quantum theory, is a fluid permeating all space like an aether.’ – Sir Arthur S. Eddington, MA, DSc, LLD, FRS, Relativity Theory of Protons and Electrons, Cambridge University Press, Cambridge, 1936, p. 180.
‘Looking back at the development of physics, we see that the ether, soon after its birth, became the enfant terrible of the family of physical substances. … We shall say our space has the physical property of transmitting waves and so omit the use of a word we have decided to avoid. The omission of a word from our vocabulary is of course no remedy; the troubles are indeed much too profound to be solved in this way. Let us now write down the facts which have been sufficiently confirmed by experiment without bothering any more about the ‘er’ problem.’ – Albert Einstein and Leopold Infeld, Evolution of Physics, 1938, pp. 184-5; written quickly to get Jewish Infeld out of Nazi Germany and accepted as a worthy refugee in America.
So the contraction of the Michelson-Morley instrument made it fail to detect absolute motion. This is why special relativity needs replacement with a causal general relativity:
‘… with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?,’ Nature, v168, 1951, p906. (If you have a kid playing with magnets, how do you explain the pull and push forces felt through space? As ‘magic’?) See also Dirac’s paper in Proc. Roy. Soc. v.A209, 1951, p.291.
‘It seems absurd to retain the name ‘vacuum’ for an entity so rich in physical properties, and the historical word ‘aether’ may fitly be retained.’ – Sir Edmund T. Whittaker, A History of the Theories of the Aether and Electricity, 2^{nd} ed., v1, p. v, 1951.
‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties... It has specific inductive capacity and magnetic permeability.’  Professor H.A. Wilson, FRS, Modern Physics, Blackie & Son Ltd, London, 4th ed., 1959, p. 361.
‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. ... What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. ... Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’ – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermionantifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.
The physical content of GR is the OPPOSITE of SR:
Newton’s laws of motion and gravity expressed in general relativity, together with Maxwell’s equations of electromagnetism, are the core of classical physics. Einstein’s general relativity was inspired by the failure of special relativity to deal with accelerations (the ‘twin paradox’) and gravity. Einstein’s equivalence principle argues that inertial and gravitational forces are equivalent.
In 1917, Einstein applied general relativity to the universe, claiming to model the static universe idea. He simply added a ‘cosmological constant’ term that made the strength of gravity fall faster than the inverse-square law, becoming zero at the average intergalactic distance and repulsive at greater distances. He claimed this would keep all galaxies at the same average distance from one another. Later it was proved (1) that Einstein’s model was unstable and would lead to galaxies clumping together into compressed lumps over time, and (2) that the universe is not static. The failure of the prediction of the static universe was exposed by the discovery of strong evidence for a big bang. Despite this, speculative efforts were made by Kaluza and Klein, and later by ‘string’ theorists, to unify forces using extra dimensions. This approach is refuted by Danny Ross Lunsford:
D.R. Lunsford has a paper on ‘Gravitation and Electrodynamics over SO(3,3)’ on the CERN document server, EXT-2003-090: ‘an approach to field theory is developed in which matter appears by interpreting source-free (homogeneous) fields over a 6-dimensional space of signature (3,3), as interacting (inhomogeneous) fields in spacetime. The extra dimensions are given a physical meaning as ‘coordinatized matter’. The inhomogeneous energy-momentum relations for the interacting fields in spacetime are automatically generated by the simple homogeneous relations in 6D. We then develop a Weyl geometry over SO(3,3) as base, under which gravity and electromagnetism are essentially unified via an irreducible 6-calibration invariant Lagrange density and corresponding variation principle. The Einstein-Maxwell equations are shown to represent a low-order approximation, and the cosmological constant must vanish in order that this limit exist.’ Lunsford begins with an enlightening overview of attempts to unify electromagnetism and gravitation:
‘The old goal of understanding the long-range forces on a common basis remains a compelling one. The classical attacks on this problem fell into four classes:
‘1. Projective theories (Kaluza, Pauli, Klein)
‘2. Theories with asymmetric metric (Einstein-Mayer)
‘3. Theories with asymmetric connection (Eddington)
‘4. Alternative geometries (Weyl)
‘All these attempts failed. In one way or another, each is reducible and thus any unification achieved is purely formal. The Kaluza theory requires an ad hoc hypothesis about the metric in 5D, and the unification is non-dynamical. As Pauli showed, any generally covariant theory may be cast in Kaluza’s form. The Einstein-Mayer theory is based on an asymmetric metric, and as with the theories based on asymmetric connection, is essentially algebraically reducible without additional, purely formal hypotheses.
‘Weyl’s theory, however, is based upon the simplest generalization of Riemannian geometry, in which both length and direction are non-transferable. It fails in its original form due to the non-existence of a simple, irreducible calibration invariant Lagrange density in 4D. One might say that the theory is dynamically reducible. Moreover, the possible scalar densities lead to 4^{th} order equations for the metric, which, even supposing physical solutions could be found, would be differentially reducible. Nevertheless the basic geometric conception is sound, and given a suitable Lagrangian and variational principle, leads almost uniquely to an essential unification of gravitation and electrodynamics with the required source fields and conservation laws.’ Again, the general concepts involved are very interesting: ‘from the current perspective, the Einstein-Maxwell equations are to be regarded as a first-order approximation to the full calibration-invariant system.
‘One striking feature of these equations that distinguishes them from Einstein’s equations is the absent gravitational constant – in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behaviour. The ratio has conformal weight 1 and so G has a natural dimensionfulness that prevents it from being a proper coupling constant – so the theory explains why general relativity, even in the linear approximation and the quantum theory built on it, cannot be regularised.’ [Lunsford goes on to suggest gravity is a residual of the other forces, which is one way to see it.]
Danny Ross Lunsford’s major paper, published in Int. J. Theor. Phys., v43 (2004), No. 1, pp. 161-177, was submitted to arXiv.org but was removed from arXiv.org by censorship, apparently because it investigated a 6-dimensional spacetime, which again is not exactly worshipping Witten’s 10/11 dimensional M-theory. It is however on the CERN document server at http://doc.cern.ch//archive/electronic/other/ext/ext-2003-090.pdf , and it shows the errors in the historical attempts by Kaluza, Pauli, Klein, Einstein, Mayer, Eddington and Weyl. It proceeds to the correct unification of general relativity and Maxwell’s equations, finding 4-dimensional spacetime inadequate:
‘… We see now that we are in trouble in 4-d. The first three [dimensions] will lead to 4th order differential equations in the metric. Even if these may be differentially reduced to match up with gravitation as we know it, we cannot be satisfied with such a process, and in all likelihood there is a large excess of unphysical solutions at hand. … Only first in six dimensions can we form simple rational invariants that lead to a sensible variational principle. The volume factor now has weight 3, so the possible scalars are weight 3, and we have the possibilities [equations]. In contrast to the situation in 4-d, all of these will lead to second order equations for the g, and all are irreducible – no arbitrary factors will appear in the variation principle. We pick the first one. The others are unsuitable … It is remarkable that without ever introducing electrons, we have recovered the essential elements of electrodynamics, justifying Einstein’s famous statement …’
D.R. Lunsford shows that 6 dimensions in SO(3,3) should replace the Kaluza-Klein 5-dimensional spacetime, unifying GR and electromagnetism.
LOOP QUANTUM GRAVITY: SPIN FOAM VACUUM
The fabric of spacetime is a sea in which boson radiations spend part of their time converted into a perfect fluid of matterantimatter.
‘In 1986, Abhay Ashtekar reformulated Einstein’s field equations of general relativity using what have come to be known as Ashtekar variables, a particular flavor of Einstein-Cartan theory with a complex connection. He was able to quantize gravity using gauge field theory. In the Ashtekar formulation, the fundamental objects are a rule for parallel transport (technically, a connection) and a coordinate frame (called a vierbein) at each point. Because the Ashtekar formulation was background-independent, it was possible to use Wilson loops as the basis for a non-perturbative quantization of gravity. Explicit (spatial) diffeomorphism invariance of the vacuum state plays an essential role in the regularization of the Wilson loop states. Around 1990, Carlo Rovelli and Lee Smolin obtained an explicit basis of states of quantum geometry, which turned out to be labelled by Penrose’s spin networks.’ – Wikipedia.
In the letters page of the October 1996 issue of Electronics World, the basic mechanism was first released, with further notices placed in the June 1999 and January 2001 issues. Two articles, in the August 2002 and April 2003 issues, were followed by letters in various issues. In 2004, the result r = r_{local} e^{3} was obtained using the mass continuity equation of hydrodynamics and the Hubble law, allowing for the higher density of the earlier-time big bang universe with increasing distance (divergence in spacetime, or redshift of gauge bosons, prevents the increase in effective observable density from going to infinity with increasing distance/time past!). In 2005, a radiation pressure-based calculation was added and many consequences were worked out. The first approach worked on is the ‘alternative proof’ below, the fluid spacetime fabric: the fabric of spacetime described by the Feynman path integrals can be usefully modelled by the ‘spin foam vacuum’ of ‘loop quantum gravity’.
The observed supernova dimming was predicted via the October 1996 Electronics World magazine, ahead of its discovery by Perlmutter et al. The mechanism omitted (above) from general relativity does away with ‘dark energy’ by showing that gravity, generated by the mechanism of the expansion itself, does not slow down the recession. In addition, it proves that the ‘critical density’ obtained by general relativity while ignoring the gravity mechanism above is too high by a factor of half the cube of the mathematical constant e, in other words a factor of about 10. The prediction was not published in PRL, Nature, CQG, etc., because of bigotry toward ‘alternatives’ to vacuous string theory.
2. Quantum mechanics and electromagnetism
Equations of Maxwell’s ‘displacement current’ in a vacuum, Schroedinger’s time-dependent waves in space, and Dirac.
‘I think the important and extremely difficult task of our time is to try to build up a fresh idea of reality.’ – W. Pauli, letter to Fierz, 12 August 1948.
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.)
‘... the view of the status of quantum mechanics which Bohr and Heisenberg defended – was, quite simply, that quantum mechanics was the last, the final, the never-to-be-surpassed revolution in physics ... physics has reached the end of the road.’ – Sir Karl Popper, Quantum Theory and the Schism in Physics, Rowman and Littlefield, NJ, 1982, p. 6.
‘To try to stop all attempts to pass beyond the present viewpoint of quantum physics could be very dangerous for the progress of science and would furthermore be contrary to the lessons we may learn from the history of science … Besides, quantum physics … seems to have arrived at a dead end. This situation suggests strongly that an effort to modify the framework of ideas in which quantum physics has voluntarily wrapped itself would be valuable …’ – Professor Louis de Broglie, Foreword to Dr David Bohm’s book, Causality and Chance in Modern Physics, Routledge and Kegan Paul, London, 2^{nd} ed., 1984, p. xiv.
‘Niels Bohr brainwashed a whole generation of physicists into believing that the problem had been solved fifty years ago.’ – Murray Gell-Mann, in The Nature of the Physical Universe, Wiley, New York, 1979, p. 29.
STRING THEORY: Every age has scientists abusing personal pet speculations without evidence, relying on mathematics to dupe the public. Maxwell had the mechanical aether, Lord Kelvin the vortex atom, J.J. Thomson the plum pudding atom. Wolfgang Pauli called this kind of thing ‘not even wrong’.
Maxwell's displacement current states: displacement current = [permittivity of space].dE/dt. This is similar in a sense to the time-dependent quantum mechanical Schroedinger equation: H[psi] = (ih/(2.Pi)).(d[psi]/dt). Here H is the Hamiltonian, [psi] is the wave function, i = (-1)^0.5, and Pi = 3.14... The product H[psi] determines energy transfer. This Schroedinger equation thus says that energy transfer occurs in proportion to the rate of change of the wave function, just as the displacement current equation says that electric energy transfer occurs in proportion to the rate of change of electric field strength. Dirac's equation is just an ingenious relativistic generalisation of Schroedinger's time-dependent equation. Since there is much evidence that 'displacement current' energy flow is radio-type electromagnetic energy, all the 'weird' consequences of quantum mechanics and quantum electrodynamics are down to 'displacement current' gauge bosons. Electrons are always emitting and receiving this force-causing energy, which causes the wave-type diffraction effects.
Dr Arnold Lynch worked for BT on microwave transmission and interference, and wanted to know what happens to electromagnetic energy when it apparently "cancels out" by interference. Energy is conserved, so you can't cancel out the energy, although you can indeed cancel out the electromagnetic fields. This is of course the case with all matter, where opposite charges combine in atoms to give neutral atoms, and electrons magnetically pair up with opposite spins (Pauli exclusion principle) so that there is usually no net magnetism.
The contraction of materials only in the direction of their motion through the physical fabric of space, and their contraction due to the space pressure of gravity in the outward (radial) direction from the centre of a mass, indicate a physical nature of space consistent with the 377 ohm impedance of the vacuum in electronics. Feynman’s approach to quantum electrodynamics, showing that interference creates the illusion that light always travels along the shortest route, accords with this model of space. However, Feynman fails to examine radio wave transmission, which cannot be treated by quantum theory, as the waves are continuous and of macroscopic size, easily examined and experimented with. The emission of radio is due to the acceleration of electrons as the electric field gradient varies in the transmitter aerial. Because electrons are naturally spinning, even still electrons have centripetal acceleration and emit energy continuously. The natural exchange of such energy creates a continuous, non-periodic equilibrium that is only detectable as electromagnetic forces. Photon emission as described by Feynman is periodic emission of energy. Thus in a sheet of glass there are existing energy transfer processes passing energy around at light speed before light enters. The behaviour of light therefore depends on how it is affected by the existing energy flow inside the glass, which depends on its thickness. Feynman explains in his 1985 book QED that ‘When a photon comes down, it interacts with electrons throughout the glass, not just on the surface. The photon and electrons do some kind of dance, the net result of which is the same as if the photon hit only the surface.’ Feynman in the same book concedes that his path-integrals approach to quantum mechanics explains the chaos of the atomic electron as being simply a Bohm-type interference phenomenon: ‘when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’ Thus Feynman suggests that a single hydrogen atom (one electron orbiting a proton, which can never be seen without an additional particle as part of the detection process) would behave classically, and it is the presence of a third particle (say in the measuring process) which interrupts the electron orbit by interference, creating the 3+ body chaos of the Schroedinger wave electron orbital.
QFT is heuristically explained with a classical model of a polarised virtual charge dielectric
Bootstrap model with a big TOE, booted of course, accelerated at high energy towards M-theory
‘As I proceeded with the study of Faraday, I perceived that his method of conceiving the phenomena was also a mathematical one, though not exhibited in the conventional form of mathematical symbols. I also found that these methods were capable of being expressed in the ordinary mathematical forms … For instance, Faraday, in his mind’s eye, saw lines of force traversing all space where the mathematicians saw centres of force attracting at a distance: Faraday saw a medium where they saw nothing but distance: Faraday sought the seat of the phenomena in real actions going on in the medium, they were satisfied that they had found it in a power of action at a distance…’ – Dr J. Clerk Maxwell, Preface, A Treatise on Electricity and Magnetism, 1873.
‘In fact, whenever energy is transmitted from one body to another in time, there must be a medium or substance in which the energy exists after it leaves one body and before it reaches the other… I think it ought to occupy a prominent place in our investigations, and that we ought to endeavour to construct a mental representation of all the details of its action…’ – Dr J. Clerk Maxwell, conclusion, A Treatise on Electricity and Magnetism, 1873 edition.
Analogy of the ‘string theory’ to ‘Copenhagen Interpretation’ quantum mechanics math
‘Statistical Uncertainty. This is the kind of uncertainty that pertains to fluctuation phenomena and random variables. It is the uncertainty associated with ‘honest’ gambling devices…

‘Real Uncertainty. This is the uncertainty that arises from the fact that people believe different assumptions…’ – H. Kahn & I. Mann, Techniques of Systems Analysis, RAND, RM-1829-1, 1957.
Let us deal with the physical interpretation of the periodic table using quantum mechanics very quickly. Niels Bohr in 1913 came up with an orbit quantum number, n, which comes from his theory and takes positive integer values (1 for the first or K shell, 2 for the second or L shell, etc.). In 1915, Arnold Sommerfeld (of 137-number fame) introduced an elliptical-shape orbit number, l, which can take values of n - 1, n - 2, n - 3, … 0. Back in 1896 Pieter Zeeman introduced orbital direction magnetism, which gives a quantum number m with possible values l, l - 1, l - 2, …, 0, …, -(l - 2), -(l - 1), -l. Finally, in 1925 George Uhlenbeck and Samuel Goudsmit introduced the electron’s magnetic spin direction effect, s, which can only take values of +1/2 and -1/2. (In 1896, Zeeman had observed the phenomenon of spectral lines splitting when the atoms emitting the light are in a strong magnetic field, which was later explained by the spin of the electron. Other experiments confirm electron spin. The actual spin is in units of h/(2.Pi), so the actual amounts of angular spin are +(1/2)h/(2.Pi) and -(1/2)h/(2.Pi).) To get the periodic table we simply work out a table of consistent unique sets of quantum numbers. The first shell then has n, l, m, and s values of 1, 0, 0, +1/2 and 1, 0, 0, -1/2. The fact that each electron has a different set of quantum numbers is called the ‘Pauli exclusion principle’, as it prevents electrons duplicating one another. (Proposed by Wolfgang Pauli in 1925; note the exclusion principle only applies to fermions with half-integral spin, like the electron, and does not apply to bosons, which all have integer spin, like light photons and gravitons. While you use Fermi-Dirac statistics for fermions, you have to use Bose-Einstein statistics for bosons, on account of spin. Non-spinning particles, like gas molecules, obey Maxwell-Boltzmann statistics.) Hence, the first shell can take only 2 electrons before it is full. (This is physically due to a combination of magnetic and electric force effects from the electron, although the mechanism must be officially ignored by order of the Copenhagen Interpretation ‘Witchfinder General’, like the issue of the electron spin speed.)
For the second shell, we find it can take 8 electrons, with l = 0 for the first two (an elliptical subshell, if we ignore the chaos effect of wave interactions between multiple electrons), and l = 1 for the other 6; the short script below confirms the shell capacities.
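As a minimal sketch of that counting rule (nothing assumed beyond the quantum number ranges just stated; the function name is illustrative), enumerating the allowed (n, l, m, s) sets reproduces the shell capacities:

# Sketch: count the allowed electron states per shell from the quantum
# number rules above.
def shell_states(n):
    """Enumerate all allowed (n, l, m, s) quantum number sets for shell n."""
    states = []
    for l in range(n):                # l = 0, 1, ..., n - 1
        for m in range(-l, l + 1):    # m = -l, ..., 0, ..., +l
            for s in (+0.5, -0.5):    # spin up or spin down
                states.append((n, l, m, s))
    return states

for n in (1, 2, 3, 4):
    print(f"shell n = {n} holds {len(shell_states(n))} electrons")
# Prints 2, 8, 18 and 32: no two electrons may share the same set of numbers.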
Experimentally we find that elements with closed, full shells of electrons, i.e., a total of 2 or 8 electrons in these shells, are very stable. Hence, helium (2 electrons) and neon (2 electrons in the first shell and 8 electrons filling the second shell) will not burn. Now read the horses*** from ‘expert’ Sir James Jeans:
‘The universe is built so as to operate according to certain laws. As a consequence of these laws atoms having certain definite numbers of electrons, namely 6, 26 to 28, and 83 to 92, have certain properties, which show themselves in the phenomena of life, magnetism and radioactivity respectively … the Great Architect of the Universe now begins to appear as a pure mathematician.’ – Sir James Jeans, MA, DSc, ScD, LLD, FRS, The Mysterious Universe, Penguin, 1938, pp. 20 and 167.
One point I’m making here, aside from the simplicity underlying the use of quantum mechanics, is that it has a physical interpretation for each aspect (it is also possible to predict the quantum numbers from abstract mathematical ‘law’ theory, which is not mechanistic, so it is not enlightening). Quantum mechanics is only statistically exact if you have one electron, i.e., a single hydrogen atom. As soon as you get to a nucleus plus two or more electrons, you have to use mathematical approximations or computer calculations to estimate results, which are never exact. This problem is not the statistical problem (uncertainty principle), but a mathematical problem of applying the theory exactly to difficult situations. For example, if you estimate a 2% probability with the simple theory, it is exact provided the input data is reliable. But if you have 2 or more electrons, the calculations estimating where the electron will be have an inherent uncertainty, so you might get 2% +/- a factor of 2, or something like that, depending on how much computer power and skill you use to do the approximate solution.
Derivation of the Schroedinger equation (an extension of a Wireless World heresy of the late Dr W. A. Scott-Murray), a clearer alternative to Bohm’s ‘hidden variables’ work…
1. The equation for waves in a three-dimensional space, extrapolated from the equation for waves in gases:

∇²ψ = -ψ(2πf/v)²

where ψ is the wave amplitude. Notice that this sort of wave equation is used to model waves in particle-based situations, i.e., waves in situations where there are particles of gas (gas molecules, sound waves). So we have particle-wave duality resolved by the fact that any wave equation is a statistical model for the orderly/chaotic group behaviour of particles (3+ body Poincare chaos). The term ∇²ψ is just a shorthand (the ‘Laplacian operator’) for the sum of second-order differentials: ∇²ψ = d²ψ/dx² + d²ψ/dy² + d²ψ/dz². (Another popular use for the Laplacian operator is heat diffusion when convection doesn’t happen, such as in solids, since the rate of change of temperature is dT/dt = (k/C_v).∇²T, where k is thermal conductivity and C_v is specific heat capacity measured under fixed volume.) The symbol f is the frequency of the wave, while v is the velocity of the wave. Now 2π is in there because f/v has units of reciprocal metres, so 2π is needed to turn ‘reciprocal metres’ into ‘reciprocal wavelength’ in radian measure. Get it?
2. All waves obey the wave axiom, v = λf, where λ is the wavelength. Hence:

∇²ψ = -ψ(2π/λ)².
3. Louis de Broglie, who invented ‘wave-particle duality’ (as waves in the physical, real ether, but that part was suppressed), gave us the de Broglie equation for momentum: p = mc = (E/c²)c = [(hc/λ)/c²]c = h/λ. For a particle of velocity v this reads p = mv = h/λ. Hence:

∇²ψ = -ψ(2πmv/h)².
4. Isaac Newton’s theory suggests the equation for kinetic energy E = (1/2)mv² (although the term ‘kinetic theory’ was, I think, first used in an article published in a magazine edited by Charles Dickens, a lot later). Hence, v² = 2E/m. So we obtain:

∇²ψ = -8ψmE(π/h)².
5. Finally, the total energy, W, for an electron is in part electromagnetic energy U, and in part kinetic energy E (already incorporated). Thus, W = U + E. This rearranges using very basic algebra to give E = W - U. So now we have:

∇²ψ = -8ψm(W - U).(π/h)².
This is Schroedinger’s basic (time-independent) equation for the atomic electron! The electromagnetic energy is U = -q_e²/(4πεR), where q_e is the charge of the electron and ε is the electric permittivity of the spacetime vacuum or ether (the sign is negative because the electron is bound to the nucleus). By extension of Pythagoras’ theorem into 3 dimensions, R = (x² + y² + z²)^{1/2}. So now we understand how to derive Schroedinger’s basic wave equation, and as Dr Scott-Murray pointed out in his Wireless World series of the early 1980s, it’s child’s play. It would be better to teach this to primary school kids to illustrate the value of elementary algebra, than to hide it as heresy or unorthodoxy, contrary to Bohr’s mindset!
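As a rough numerical check of the derived equation (a sketch, assuming the standard hydrogen ground state ψ = exp(-R/a0) with total energy W = -13.6 eV and the usual negative Coulomb energy convention; the helper names are illustrative), the two sides can be compared at sample radii:

import math

h = 6.626e-34        # Planck's constant, J.s
m = 9.109e-31        # electron mass, kg
q = 1.602e-19        # electron charge, C
eps = 8.854e-12      # vacuum permittivity, F/m
a0 = 5.292e-11       # Bohr radius, m
W = -13.6 * q        # ground state total energy, J

def psi(R):
    return math.exp(-R / a0)  # unnormalised ground state wavefunction

def laplacian(R, dR=1e-15):
    # Radial Laplacian of a spherically symmetric function:
    # (1/R).d^2(R.psi)/dR^2, evaluated by finite differences.
    f = lambda r: r * psi(r)
    return (f(R + dR) - 2 * f(R) + f(R - dR)) / (R * dR**2)

for R in (0.5 * a0, a0, 2 * a0):
    U = -q**2 / (4 * math.pi * eps * R)  # Coulomb potential energy (negative)
    lhs = laplacian(R)
    rhs = -8 * psi(R) * m * (W - U) * (math.pi / h)**2
    print(f"R = {R/a0:.1f} a0:  LHS = {lhs:.3e},  RHS = {rhs:.3e}")
# The two sides agree, confirming the equation derived above.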
Let us now examine the work of Erwin Schroedinger and Max Born. Since the nucleus of hydrogen is 1836 times as massive as the electron, it can in many cases be treated as at rest, with the electron zooming around it. Schroedinger in 1926 took the concept of particlewave duality and found an equation that could predict the probability of an electron being found within any distance of the nucleus. The full theory includes, of course, electron spin effects and the other quantum numbers, and so the mathematics at least looks a lot harder to understand than the underlying physical reality that gives rise to it.
First, Schroedinger could not calculate anything with his equation, because he had no idea what the hell he was doing with the wavefunction ψ. Max Born suggested, naively perhaps, that it is like water waves, where it is the amplitude of the wave that needs to be squared to get the energy of the wave, and thus a measure of the mass-energy to be found within a given space. (Likewise, the ‘electric field strength’ (volts/metre) from a radio transmitter mast falls off generally as the inverse of distance, although the energy intensity (watts per square metre) falls off as the inverse-square law of distance.)
Hence, by Born’s conjecture, the energy per unit volume of the electron around the atom is E ~ ψ². If the volume is a small, 3-dimensional cube in space, dx.dy.dz in volume, then the proportion of (or probability of finding) the electron within that volume will thus be: ψ² dx.dy.dz / [∫∫∫ ψ² dx.dy.dz], where each integral runs from 0 to infinity. Thus, the relative likelihood of finding the electron in a thin shell between radii of r and a will be the integral of the product of surface area (4πr²) and ψ², over the range from r to a. The number we get from this integral is converted into an absolute probability of finding the electron between radii r and a by normalising it: in other words, dividing it by the similarly calculated relative probability of finding the electron anywhere between radii of 0 and infinity. Hence we can understand what we are doing for a hydrogen atom.
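For instance, a sketch (again assuming the hydrogen ground state ψ = exp(-r/a0); the integration routine is a deliberately simple trapezoidal rule) that performs this shell integration numerically:

import math

a0 = 1.0  # work in units of the Bohr radius

def integrand(r):
    psi = math.exp(-r / a0)
    return 4 * math.pi * r**2 * psi**2   # shell area times psi squared

def integrate(f, lo, hi, n=10000):
    # Simple trapezoidal rule.
    dr = (hi - lo) / n
    return sum(f(lo + i * dr) + f(lo + (i + 1) * dr) for i in range(n)) * dr / 2

total = integrate(integrand, 0.0, 50.0)        # 50 a0 is effectively infinity here
inner = integrate(integrand, 0.0, 1.0)         # shell between r = 0 and r = a0
print(f"P(0 < r < a0) = {inner / total:.3f}")  # about 0.323

So the normalised probability of finding the ground-state electron within one Bohr radius comes out at about 32%.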
The version of Schroedinger’s wave equation above is really a description of the time-averaged (or time-independent) chaotic motion of the electron, which is why it gives a probability of finding the electron in a given zone, not an exact location for the electron. There is also a time-dependent version of the Schroedinger wave equation, which can be used to obfuscate rather well. But let’s have a go anyhow. To find the time-dependent version, we need to treat the electromagnetic energy U as varying in time. If U = hf, from de Broglie’s use of Planck’s equation, and because the electron obeys the wave equation, its time-dependent frequency satisfies: f² = (2πψ)^{-2}(dψ/dt)², where f² = U²/h². Hence, U² = h²(2πψ)^{-2}(dψ/dt)². To find U we need to remember from basic algebra that we will lose possible mathematical solutions unless we allow for the fact that U may be negative. (For example, if I think of a number, square it, and then get 4, that does not mean I thought of the number 2: I could have started with the number -2.) So we need to introduce i = √(-1). Hence we get the solution: U = ih(2πψ)^{-1}(dψ/dt). Remembering E = W - U, we get the time-dependent Schroedinger equation.
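A quick symbolic check of that last step (a sketch using sympy, with the assumed trial function ψ = exp(-2πift), for which the expression should return U = hf):

import sympy as sp

t, f, h = sp.symbols('t f h', positive=True)

# Trial wavefunction oscillating at frequency f:
psi = sp.exp(-2 * sp.pi * sp.I * f * t)

# The expression derived above: U = ih/(2.pi.psi) times dpsi/dt
U = sp.I * h / (2 * sp.pi * psi) * sp.diff(psi, t)

print(sp.simplify(U))  # prints f*h, i.e. U = hf, as required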
Let us now examine how fast the electrons go in their orbits in the atom, neglecting spin speed. Assuming simple circular motion to begin with, the inertial ‘outward’ force on the electron is F = ma = mv²/R, which is balanced by the electric ‘attractive’ inward force F = q_e²/(4πεR²). Hence, v = (1/2)q_e/(πεRm)^{1/2}.
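Evaluating this at the Bohr radius with standard SI constants (a minimal sketch), the speed comes out at about 2.2 x 10^6 m/s, i.e. roughly c/137:

import math

q = 1.602e-19    # electron charge, C
m = 9.109e-31    # electron mass, kg
eps = 8.854e-12  # vacuum permittivity, F/m
R = 5.292e-11    # Bohr radius, m

v = 0.5 * q / math.sqrt(math.pi * eps * R * m)
print(f"orbital speed v = {v:.3e} m/s")   # about 2.19e6 m/s
print(f"v/c = 1/{3.0e8 / v:.1f}")         # about 1/137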
Now for Werner Heisenberg’s ‘uncertainty principle’ of 1927. This is mathematically sound in the sense that the observer always disturbs the signals he observes. If I measure my car tyre pressure, some air leaks out, reducing the pressure. If you have a small charged capacitor and try to measure the voltage of the energy stored in it with an old-fashioned analogue voltmeter, you will notice that the voltmeter itself drains the energy in the capacitor pretty quickly. A digital meter contains an amplifier, so the effect is less pronounced, but it is still there. A Geiger counter held in a fallout area absorbs some of the gamma radiation it is trying to measure, reducing the reading, as does the presence of the body of the person using it. A blind man searching for a golf ball by swinging a stick around will tend to disturb what he finds. When he feels and hears the click of the impact of his stick hitting the golf ball, he knows the ball is no longer where it was when he detected it. If he prevents this by not moving the stick, he never finds anything. So it is a reality that the observer always tends to disturb the evidence by the very process of observing the evidence. If you even observe a photograph, the light falling on the photograph very slightly fades the colours. With something as tiny as an electron, this effect is pretty severe. But that does not mean that you have to make up metaphysics to stagnate physics for all time, as Bohr and Heisenberg did when they went crazy. Really, Heisenberg’s law has a simple causal meaning to it, as I’ve just explained. If I toss a coin and don’t show you the result, do you assume that the coin is in a limbo, indeterminate state between two parallel universes, in one of which it landed heads and in the other of which it landed tails? (If you believe that, then maybe you should have yourself checked into a mental asylum, where you can write your filthy equations all over the walls with a crayon held between your big ‘TOEs’ or your ‘theories of everything’.)
For the present, let’s begin right back before QFT, in other words with the classic theory back in 1873:
Fiat Lux: ‘Let there be Light’
Michael Faraday, Thoughts on Ray Vibrations, 1846. A prediction of light without numbers, by the son of a blacksmith who became a bookseller’s delivery boy aged 13 and went on to invent the electric motor, generator, etc.
James Clerk Maxwell, A Dynamical Theory of the Electromagnetic Field, 1865. Fiddles with numbers.
I notice that the man (J.C. Maxwell) most often credited with Fiat Lux wrote in the final (1873) edition of his book A Treatise on Electricity and Magnetism, Article 110:
‘... we have made only one step in the theory of the action of the medium. We have supposed it to be in a state of stress, but we have not in any way accounted for this stress, or explained how it is maintained...’
In Article 111, he admits further confusion and ignorance:
‘I have not been able to make the next step, namely, to account by mechanical considerations for these stresses in the dielectric [spacetime fabric]... When induction is transmitted through a dielectric, there is in the first place a displacement of electricity in the direction of the induction...’
First, Maxwell admits he doesn’t know what he’s talking about in the context of ‘displacement current’. Second, he just talks more! Now Feynman has something about this in his lectures about light and electromagnetism, where he says the idler wheels and gear cogs are replaced by equations. So let’s check out Maxwell’s equations.
One source is A.F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9). Chalmers quotes Orwell’s novel 1984 to illustrate how the tale was fabricated:
‘… history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’
Maxwell deliberately fixed his original calculation in order to obtain the anticipated value for the speed of light, as Part 3 of his paper, On Physical Lines of Force (January 1862), proves; Chalmers explains:
‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of √2 smaller than the velocity of light.’
It took three years for Maxwell to finally force-fit his ‘displacement current’ theory to take the form which allows it to give the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’
Weber, not Maxwell, was the first to notice that, by dimensional analysis (which Maxwell popularised), 1/(square root of product of magnetic force permeability and electric force permittivity) = light speed.
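Checking that relation with modern SI values (a minimal sketch):

import math

mu0 = 4 * math.pi * 1e-7   # magnetic permeability of space, H/m
eps0 = 8.854e-12           # electric permittivity of space, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(f"c = 1/sqrt(mu0.eps0) = {c:.4e} m/s")  # about 2.998e8 m/s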
Maxwell, after a lot of failures (like Kepler’s trial-and-error road to the planetary laws), ended up with a cyclical light model in which a changing electric field creates a magnetic field, which creates an electric field, and so on. Sadly, his picture of a light ray in Article 791, showing in-phase electric and magnetic fields at right angles to one another, has been accused of causing confusion and of being incompatible with his light-wave theory (the illustration is still widely used today!).
In empty vacuum, the divergences of magnetic and electric field are zero as there are no real charges.
Maxwell’s equation for Faraday’s law: dE/dx = -dB/dt

Maxwell’s equation for displacement current: dB/dx = -με.dE/dt

where μ is the magnetic permeability of space, ε is the electric permittivity of space, E is electric field strength, and B is magnetic field strength. To solve these simultaneously, differentiate both:

d²E/dx² = -d²B/(dx.dt)

d²B/(dx.dt) = -με.d²E/dt²

Since d²B/(dx.dt) occurs in each of these equations, they combine to give d²E/dx² = με.d²E/dt², a wave equation with dx²/dt² = 1/(με); so Maxwell got c = 1/√(με) = 300,000 km/s. Eureka! This is the lie, the alleged unification of electricity and magnetism via light. I think ‘Fiat Lux’ is a good description of Maxwell’s belief in this ‘unification’. Maxwell arrogantly and condescendingly tells us in his Treatise that ‘The only use made of light’ in finding μ and ε was to ‘see the instrument.’ Sadly it was only in 1885 that J.H. Poynting and Oliver Heaviside independently discovered the ‘Poynting-Heaviside vector’ (Phil. Trans., 1885, p. 277). Ivor Catt (http://www.ivorcatt.org/) has plenty of material on Heaviside’s ‘energy current’ light-speed electricity mechanism, as an alternative to the more popular ~1 mm/s ‘electric current’. The particle-wave problem of electricity was suppressed by mathematical obfuscation, and ignorant officialdom still ignores the solution which Catt’s work ultimately implies (that the electron core is simply a light-speed, gravitationally trapped TEM wave). We can see why Maxwell’s errors persisted:
‘Maxwell discussed … in terms of a model in which the vacuum was like an elastic … what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false … If we take away the model he used to build it, Maxwell’s beautiful edifice stands…’ – Richard P. Feynman, Feynman Lectures on Physics, v3, c18, p2.
‘The creative period passed away … The past became sacred, and all that it had produced, good and bad, was reverenced alike. This kind of idolatry invariably springs up in that interval of languor and reaction which succeeds an epoch of production. In the mindhistory of every land there is a time when slavish imitation is inculcated as a duty, and novelty regarded as a crime… The result will easily be guessed. Egypt stood still… Conventionality was admired, then enforced. The development of the mind was arrested; it was forbidden to do any new thing.’ – W.W. Reade, The Martyrdom of Man, 1872, c1, War.
‘What they now care about, as physicists, is (a) mastery of the mathematical formalism, i.e., of the instrument, and (b) its applications; and they care for nothing else.’ – Karl R. Popper, Conjectures and Refutations, R.K.P., 1969, p100.
‘The notion that light possesses gravitating mass, and that therefore a ray of light from a star will be deflected when it passes near the sun, was far from being a new one, for it had been put forward in 1801 by J. Soldner…’ – Sir Edmund Whittaker, A History of the Theories of Aether and Electricity: Modern Theories, 19001926, Nelson and Sons, London, 1953, p40.
It doesn't take genius for me to see that general relativity deals with absolute acceleration, while special relativity doesn't, so special relativity is incomplete and therefore wrong if misused. Some of the crackpots have some useful ideas scattered in their papers, which is exactly the case with Kepler. Kepler thought magnetism held the earth in orbit around the sun, and was wrong. He also earned a living by astrology, and his mother was prosecuted on a charge of witchcraft. But instead of calling Kepler a complete 100% crackpot, Newton had the wit to focus on what Kepler had done right, the three laws of planetary motion, and used them to get the correct law of gravity for low speeds and weak fields (the limit in general relativity). I don't think anyone will go down as a good person for calling misguided people crackpots. The harder task is making sense of it, not blacklisting people because they make some errors or don't have the benefit of a good education! In fact, there are not millions of crackpots with testable mechanisms that seem to be consistent with major physics. The number is about 5, and includes D.R. Lunsford and Tony Smith, both censored off arXiv.org. Ivor Catt has a little useful material on electromagnetism from experiments, but mixes it with a lot of political diatribe. Basically, Catt's experimental work is an extension of Oliver Heaviside's 1893 work on the light-speed model of electric energy transfer. Walter Babin has some correct ideas too, in particular the idea that there is a superforce which is basically electrical. However, he has not made as much of this idea as he could. Because the core electric force of the electron is 137 times Coulomb's observed electric force for an electron, unification should be seen as the penetration of the virtual polarised charge shield, which reduces the core strength by the factor 1/137.
Darwin was trying to assert a simple model which was far from new. All Darwin had was 'technical' evidence. It was the sum of the evidence, added together, which made the simplicity convincing. Aristotle was of course a theorist, but he did not dig deeply enough. In his work Physics of 350 BC, Aristotle argued using logic. I don't think Darwin would like to be compared to Aristotle, or even Maxwell for that matter. Faraday would be a better alternative, because experiments and observations were more in Darwin's sphere than fiddling with speculative models that turned out to be false (the elastic aether and the mechanical gear cogs and idler wheel aether, in Maxwell's theory). Darwin would be more interested in unifying a superforce using all the available evidence, than in guessing.
The unshielded electron core charge, Penrose speculates in The Road to Reality, is 11.7 times the observed Coulomb force. His guess is that because the square root of 137.0... is used in quantum mechanics, that is the factor involved. Since the Heisenberg uncertainty formula d = hc/(2.Pi.E) works for d and E as realities in calculating the ranges of forces carried by gauge bosons of energy E, we can introduce work energy as E = Fd, which gives us the electron core (unshielded) force law: F = hc/(2.Pi.d^2). This is 137.0... times Coulomb. Therefore, Penrose's guess is wrong. Penrose has a nice heuristic illustration on page 677 of his tome, The Road to Reality. The illustration shows the electron core with the polarised sea of virtual charges, so that the virtual positrons are attracted close to the real electron core, while the virtual electrons are repelled further from the real core: ‘Fig. 26.10. Vacuum polarisation: the physical basis of charge renormalisation. The electron [core] E induces a slight charge separation in virtual electron-positron pairs momentarily created out of the vacuum. This somewhat reduces E’s effective charge [seen at a long distance] from its bare value – unfortunately by an infinite factor, according to direct calculation.’ Penrose gets it a bit wrong on page 678, where he says ‘the electron’s measured dressed charge is about 0.0854 [i.e., 1/square root of 137], and it is tempting to imagine that the bare value should be 1, say.’
In fact, the bare value in these units is 11.7, not 1, because the ratio of bare to veiled charge is 137, as the bare core electric force is hc/(2.Pi.x^2), proved on my home page, which is 137 times Coulomb. But the bare core charge is not completely ‘unobservable’, since in high energy collisions a substantial reduction of the 137 factor has been experimentally observed (Koltick, Physical Review Letters, 1997), showing a partial penetration of the polarised vacuum veil. The bare core of the electron, with a charge 137 times the vacuum-shielded one, is a reality. At early times in the big bang, collisions were energetic enough to penetrate through the vacuum to bare cores, so the force strengths unified. So we can use the heuristic approach to understand how strongly the polarised vacuum protects the electron (or other fundamental particle) core force strength; these are the numbers which are given for unification energy by abstract quantum field theory calculations. (You can’t dismiss the electron core model as being not directly observable, unless you want to do the same for atomic nuclei!)
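That claimed factor of 137 is easy to check directly (a minimal sketch with standard SI constants; the distance d cancels in the ratio):

import math

h = 6.626e-34     # Planck's constant, J.s
c = 2.998e8       # speed of light, m/s
e = 1.602e-19     # electron charge, C
eps0 = 8.854e-12  # vacuum permittivity, F/m

d = 1e-15  # any distance; d^2 cancels in the ratio below
F_core = h * c / (2 * math.pi * d**2)           # claimed unshielded core force
F_coulomb = e**2 / (4 * math.pi * eps0 * d**2)  # observed Coulomb force

print(f"ratio = {F_core / F_coulomb:.2f}")  # about 137, i.e. 1/alpha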
The physical mechanism does give rise to a lot of mathematics, but not the same type of useless mathematics that ‘string theory’ generates. Because ‘string theory’ is falsely worshipped as a religion, naturally the productive facts are ridiculed. The accurate predictions include the strengths of the gravity, electroweak and strong nuclear forces, as well as solutions to the problems of cosmology and the correct ratios of some fundamental particles. Feynman correctly calculates the huge ratio of the gravitational attraction force to the repulsive force of electromagnetism for two electrons as 1/(4.17 x 10^{42}). He then says: ‘It is very difficult to find an equation for which such a fantastic number is a natural root. Other possibilities have been thought of; one is to relate it to the age of the universe.’ He then says that the ratio of the time taken by light to cross the universe to the time taken by light to cross a proton is about the same huge factor. After this, he chucks out the idea because gravity would vary with time, and the sun’s radiating power varies as the sixth power of the gravity constant G. The error here is that there is no mechanism for Feynman’s idea about the times for light to cross things. Where you get a mechanism is for the statistical addition of electric charge (virtual photons cause electric force) exchanged between similar charges distributed around the universe. This summation does not work in straight lines, as equal numbers of positive and negative charges will be found along any straight line. So only a mathematical drunkard’s walk, where the net result is the charge of one particle times the square root of the number of particles in the universe, is applicable: http://members.lycos.co.uk/nigelbryancook/Image11.jpg.
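A tiny Monte Carlo sketch of that drunkard's-walk addition (a toy model: N unit charges of random sign; all names here are illustrative), showing the net result scales as the square root of N:

import random

def rms_net_charge(n, trials=500):
    """RMS of the sum of n random +1/-1 unit charges over many trials."""
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, +1)) for _ in range(n))
        total += s * s
    return (total / trials) ** 0.5

for n in (100, 2500, 10000):
    print(f"N = {n:6d}: net charge ~ {rms_net_charge(n):7.1f} units, sqrt(N) = {n**0.5:7.1f}")
# The net sum grows as sqrt(N), not N: the random-walk addition described above.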
This means that the electric force is equal to gravity times the square root of the number of particles. Since the number of particles is effectively constant, the electric force varies with the gravity force! This disproves Feynman: suppose you double the gravity constant. The sun is then more compressed, but does this mean it releases 2^{6} = 64 times more power? No! It releases the same. What happens is that the electric force between protons – which is called the Coulomb barrier – increases in the same way as the gravity compression. So the rise in the force of attraction (gravity) is offset by the rise in the Coulomb repulsion (electric force), keeping the proton fusion rate stable! However, Feynman also points out another effect, that the variation in gravity will also alter the size of the Earth’s orbit around the sun, so the Earth will get a bit hotter due to the distance effect if G rises, although he admits: ‘such arguments as the one we have just given are not very convincing, and the subject is not completely closed.’ Now the smoothness of the cosmic background radiation is explained by the lower value of G in the past (see the discussion of the major predictions, further on). The gravity constant G is directly proportional to the age of the universe, t. Let’s see how far we get playing this game (I’m not really interested in it, but it may help to test the theory even more rigorously). The gravity force constant G, and thus t, are proportional to the electric force, so that if charges are constant, the electric permittivity varies as 1/t, while the magnetic permeability varies directly with t. By Weber and Maxwell, the speed of light is c = 1/(square root of the product of the permittivity and the permeability). Hence, c is proportional to 1/[square root of {(1/t).(t)}] = constant. Thus, the speed of light does not vary in any way with the age of the universe. The strong nuclear force strength, basically F = hc/(2.Pi.d^2) at short distances, varying like the gravity and electroweak forces, implies that h is proportional to G, and thus also to t.
Many ‘tests’ for variations in G assume that h is a constant. Since this is not correct, and G is proportional to h, the interpretations of such ‘tests’ are total nonsense, much as the Michelson-Morley experiment does not disprove the existence of the sea of gauge bosons that cause fundamental forces! At some stage this model will need to be applied rigorously to very short times after the big bang by computer modelling. For such times, the force ratios vary not merely because the particles of matter have sufficient energy to smash through the shielding veils of polarised virtual particles which surround the cores of particles, but also because the number of fundamental particles was increasing significantly at early times! Thus, soon after the big bang, the gravity and electromagnetic forces would have been similar. The strong nuclear force, because it is identical in strength to the unshielded electroweak force, would also have been the same strength, because the energy of the particles would break right through the polarised shields. Hence, this is a unified force theory that really works! Nature is beautifully simple after all. Lunsford’s argument that gravity is a residual of the other forces is right.
Predicted masses of all nuclear particles
http://cosmicvariance.com/2005/11/14/our-first-guest-blogger-lawrence-krauss/:
The whole basis of the energy-time version of the uncertainty principle is going to be causal (random interactions between the gauge boson radiation, which constitutes the spacetime fabric).
Heuristic explanations of the QFT are required to further the
basic understanding of modern physics. For example, Heisenberg’s minimum uncertainty (based on the impossible gamma-ray microscope thought experiment) is pd = h/(2π), where p is the uncertainty in momentum and d is the uncertainty in distance. The product pd is physically equivalent to Et, where E is uncertainty in energy and t is uncertainty in time. Since, for light speed, d = ct, we obtain: d = hc/(2πE). This is the formula the experts generally use to relate the range of a force, d, to the energy of the gauge boson, E. Notice that both d and E are really uncertainties in distance and energy, rather than real distance and energy, but the formula works for real distance and energy, because we are dealing with a definite ratio between the two. Hence for the 80 GeV mass-energy W and Z intermediate vector bosons, the force range is on the order of 10^-17 m. Since the formula d = hc/(2πE) therefore works for d and E as realities, we can introduce work energy as E = Fd, which gives us the strong nuclear force law: F = hc/(2πd^2). This inverse-square law is 137 times Coulomb’s law of electromagnetism.
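A quick check of this range formula, as a sketch in Python with standard SI constants (the printed range comes out around 2.5 x 10^-18 m for an 80 GeV boson, i.e. the weak-force scale):

```python
# Sketch: force range d = hc/(2*pi*E) for an 80 GeV gauge boson.
import math

h = 6.62607e-34            # Planck's constant, J.s
c = 2.99792e8              # speed of light, m/s
E = 80e9 * 1.60218e-19     # 80 GeV converted to joules

d = h * c / (2 * math.pi * E)
print(f"force range d = {d:.2e} m")   # ~2.5e-18 m
```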
History of gravity mechanism
Gravity is the effect of inward-directed graviton radiation pressure: the fabric of spacetime flows inward to fill the volume left empty by the outward acceleration of galaxies in the big bang. LeSage-Feynman shadowing of the spacetime fabric – which is light-velocity radiation in the 4-dimensional spacetime we observe – pushes us downward. You can’t stop space with an umbrella, as atoms are mainly void, through which the space pressure propagates!
Newton’s 3rd empirical law states that an outward force has an equal and opposite reaction (an inward or implosive force). The bomb dropped on Nagasaki used TNT around plutonium, an ‘implosion’ bomb. Half the force acted inward, an implosion that compressed the plutonium. The inward or implosion force of the big bang is apparently physical space pressure. Fundamental particles (electrons, quarks) behave as small black holes which shield space pressure. They are therefore pressed from all sides equally except the shielded side, so they are pushed towards masses. The proof (below) predicts gravity. A calculation using black hole electrons and quarks gives identical results.
This inward pressure makes the radius of the earth contract by a distance of 1.5 mm. This was predicted by Einstein’s general relativity, which Einstein in 1920 at Leyden University said proved that: ‘according to the general theory of relativity, space without ether [physical fabric] is unthinkable.’ The radius contraction, discussed further down this page, is GM/(3c^2). (Professor Feynman makes a confused mess of it in the relevant volume of his Lectures, chapter 42, page 6, where he gives his equation 42.3 correctly for the excess radius being equal to the predicted radius minus the measured radius, but then on the same page in the text says ‘… actual radius exceeded the predicted radius …’ Talking about ‘curvature’ when dealing with radii is not helpful and probably caused the confusion. The use of Minkowski light ray diagrams and string ‘theory’ to obfuscate the cause of gravity with talk of ‘curved space’ stems from the false model of space as the surface of a waterbed, on which heavy objects roll towards one another. When this model is extended to real, volume-filling space, it shows that space has a pressurised fabric which is shielded by mass, causing gravity.) But despite this insight, Einstein unfortunately overlooked the Hubble acceleration problem and failed to make the link with the big bang, the mechanism of gravity, which is proved below experimentally with step by step mathematics. The gravitational contraction is radial only, not affecting the circumference, so there is a difference between the true radius and that calculated by Euclidean geometry. Thus you can describe curved space using non-Euclidean geometry, or you can seek the physical basis of the pressure in the surrounding universe.
Georges Louis LeSage, between 1747-82, explained gravity classically as a shadowing effect of space pressure by masses. The speculative, non-quantitative mechanism was published in French and is available online (G. L. LeSage, Lucrece Newtonien, Nouveaux Memoires De L’Academie Royal de Sciences et Belle Letters, 1782, pp. 404-31). Because gravity depends on the mass within the whole earth’s volume, LeSage predicted that the atomic structure was mostly void, a kind of nuclear atom, which was confirmed by Rutherford’s work in 1911.
Taking things simply, the virtual vacuum surrounding each charge core is polarised, which screens the core charge. This is geometrical. The virtual positron-electron pairs in the vacuum are polarised: the virtual positive charges are attracted closer to the negative core than the virtual electrons, which are repelled to greater distances. Hence the real negative core has a positive virtual shell just around it, with a negative virtual shell beyond it, which falls off to neutral at great distances. This virtual particle or heuristic (trial and error) explanation is used in the Feynman approach to quantum field theory, and was validated experimentally in 1997, by firing leptons together at high energy to penetrate the virtual shield and observe the greater charge nearer the bare core of an electron.
Some 99.27% of the inward-directed electric field from the electron core is cancelled by the outward-directed electric field due to the shells of virtual charges polarised in the vacuum by the electron core. Traditionally, the mathematics of quantum field theory has had to be ‘renormalised’ to stop the electron core from interacting with an infinite number of virtual charges. The renormalisation process force-fits limits on the size of the integral for each coupling correction, which would otherwise be infinite. Heuristically, renormalisation is limiting each coupling correction (Feynman diagram) to one virtual charge at one time. Hence, for the first coupling correction (which predicts the electron’s magnetism correctly to 5 decimals or 6 significant figures), the electron core charge is weakened by the polarised charge (positron shell) and is 137 times weaker when associating with 1 virtual electron in the space around the positive shell. The paired magnetic field is 1 + 1/(2π·137) = 1.00116 Bohr magnetons, where the first term is the unshielded magnetism of the real electron core, and the second is the contribution from the paired virtual electron in the surrounding space, allowing for the transverse direction of the core magnetic field lines around the electron loop equator (the magnetic field lines are radial at the poles). My understanding now is that the transverse magnetic field surrounding the core of the electron is shielded by the 137 factor, and it is this shielded transverse field which couples with a virtual electron. The radial magnetic field lines emerging from the electron core poles are of course not attenuated, since they don’t cross electric field lines in the polarised vacuum, but merely run parallel to electric field lines. (This is a large step forward in heuristic physics from that of a couple of weeks back.)
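The arithmetic of that first coupling correction takes only a couple of lines to check; a sketch, with the measured figure being the standard electron magnetic moment in Bohr magnetons:

```python
# Sketch: heuristic first coupling correction 1 + 1/(2*pi*137.036),
# compared with the measured electron magnetic moment.
import math

shielding = 137.036                      # vacuum shielding factor quoted above
moment = 1 + 1 / (2 * math.pi * shielding)
measured = 1.00115965                    # measured value, Bohr magnetons

print(f"heuristic: {moment:.8f}")        # 1.00116141...
print(f"measured : {measured:.8f}")      # agreement to about 5 decimals
```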
The pairing is the Pauli exclusion process. Because an electron has a spin, it is a magnet. Every two adjacent electrons in the atom have opposite spin directions (up or down). There are two natural ways you can put two magnets together, end to end or side to side. The side to side arrangement, with one North pole facing up and the other down, is most stable, so it occurs in the atom where the electrons are in chaotic orbits. The only way you can measure the spin of an electron is by using a magnetic field, which automatically aligns the electron, so the spin can only take two possible values (up or down), so the magnetism is either adding to or subtracting from the background field. You can flip the electron over by adding the energy needed for it to add to the magnetic field. None of this is mystical, any more than playing with magnets and finding they naturally align in certain (polar) ways only. The Pauli exclusion principle states that the four quantum numbers (including spin) are unique for every electron in the atom. Spin was the last quantum number to be accepted.
In order to heuristically explain the abstruse 1 + 1/(2π·137) = 1.00116 first coupling correction for the electron’s magnetism in QED, we suggested on Motl’s blog that the electron core magnetism is not attenuated by the polarised vacuum of space, while the electric field is attenuated by a factor of 137. The 2π factor comes from the way a virtual electron in the vacuum couples with the real core electron, both of which are spinning. (Magnetism is communicated via the spin of virtual particles in the vacuum, according to Maxwell’s electromagnetism.) The coupling is related to the mechanism of the Pauli exclusion principle. The coupling is weakened by the 137 factor because the polarisation of virtual charges creates an inner virtual positron shell around the real electron core, with an outer virtual electron shell. The polarised vacuum shields the core charge by a factor of 137.
The extra mass-energy of a muon means that it interacts not only with virtual electrons and positrons, but also with more energetic virtual particles in the vacuum. This very slightly affects the measured magnetic moment of the muon, since it introduces extra coupling corrections that don’t occur for an electron.
Could it be that the effect on the electron’s mass is greater for the same reason, but that the effect for mass is greater than for the magnetic field, because it doesn’t involve the 137-attenuation factor? Somehow you get the feeling that we are going towards a ‘bootstrap’ physics approach; the muon is about 207 times more massive than the electron, because the greater mass causes it to interact more with the spacetime fabric, which adds mass! (‘I pulled myself upward by my own bootstraps.’) I’ll come back to this at the end of this paper, with a list of tested predictions of particle masses that it yields.
The gravity mechanism has been applied to electromagnetism that has
both attractive and repulsive forces, and nuclear attractive forces.
These are all powered by the gravity mechanism in a simple way. Spinning
charges in heuristic quantum field theory all radiate and exchange
energy as virtual photons, which get redshifted when travelling large
distances in the universe, due to the big bang. As a result, the
exchange of energy between nearby similar charges, where the expansion
of the universe does not occur between the charges, is strong and they
recoil apart (repulsion), like two people accelerating in opposite
directions due to exchanging streams of lead bullets from machine guns!
(Thank God for machine guns and big bangs, or physics would seem daft.)
As a virtual photon leaves any electron, the electron must recoil, like
a rifle firing a bullet. According to the uncertainty principle, the
range of the virtual photon is half its wavelength. Since the
inversesquare law is simple geometric divergence (of photons over
increasing areas) with no range limit (infinite range), the wavelength
of the virtual photons in electromagnetism is infinite. Hence, they are
continuous energy flow, not oscillating. This is why you can’t hear
steady electromagnetic forces on a radio: there is no oscillation to
jiggle the electrons and introduce a resonant current. (Planck’s formula
E = hf implies that zero net energy is carried when f = 0, which is due
to the Prevost exchange mechanism of 1792 that also applies to quantum
energy exchange at constant temperatures, where cooling objects are in
equilibrium, receiving as much as they radiate each second.) When we
accelerate a charge, we then get a detectable photon with a definite
frequency. The spin of a loop electron is continuous, not a periodic phenomenon, so it radiates energy with no frequency, just like a trapped electric TEM wave in a capacitor plate.
Electric attraction
occurs between opposite charges, which stop virtual photons from each
other’s direction, and so are pushed together like gravity, but the
force is multiplied up from gravity by a factor of about 10^40, due to
the drunkard’s walk (statistical zigzag path) of energy between similar
charges in the universe. This ‘displacement current’ of electromagnetic
energy can’t travel in a straight line or it will statistically
encounter similar numbers of equal and opposite charges, cancelling out
the net electric field. Thus mathematical physics only permits a
drunkard’s walk, in which the sum is gravity times the square root of
the number of similar charges in the universe. A diagram here http://members.lycos.co.uk/nigelbryancook/Image11.jpg
proves that the electric repulsion force is equal to the attraction
force for equal charges, but has opposite directions depending on
whether the two charges are similar in sign or different:
Hence F(electromagnetism) = mMG√N/r^2 = q_1 q_2 /(4πεr^2) (Coulomb’s law)
where G = (3/4)H^2/(πρe^3) as proved above (here e = 2.718…, the base of natural logarithms), and N is, as a first approximation, the mass of the universe (4πR^3 ρ/3 = 4π(c/H)^3 ρ/3) divided by the mass of a hydrogen atom. This assumes that the universe is hydrogen. In fact it is 90% hydrogen by atomic abundance as a whole, although less near stars (only 70% of the solar system is hydrogen, due to fusion of hydrogen into helium, etc.). Another problem with this way of calculating N is that we assume the fundamental charges to be electrons and protons, when in fact protons contain two up quarks (each +2/3) and one down quark (-1/3), so there are twice as many fundamental particles. However, the quarks remain close together inside a nucleon and behave for most electromagnetic purposes as a single fundamental charge. With these approximations, the formulae above yield a prediction of the strength factor ε in Coulomb’s law of:
ε = q_e^2 e^3 [ρ/(12π m_e^2 m_proton H c^3)]^(1/2) F/m, where e = 2.718…
Testing this with the PRL and other data used above (ρ = 4.7 x 10^-28 kg/m^3 and H = 1.62 x 10^-18 s^-1 for 50 km.s^-1.Mpc^-1) gives ε = 7.4 x 10^-12 F/m, which is only 17% low as compared to the measured value of 8.85419 x 10^-12 F/m. This relatively small error reflects the hydrogen assumption and quark effect. Rearranging this formula to yield ρ, and rearranging also G = (3/4)H^2/(πρe^3) to yield ρ, allows us to set both results for ρ equal and thus to isolate a prediction for H, which can then be substituted into G = (3/4)H^2/(πρe^3) to give a prediction for ρ which is independent of H:
H = 16π^2 G m_e^2 m_proton c^3 ε^2 /(q_e^4 e^3) = 2.3391 x 10^-18 s^-1, or 72.2 km.s^-1.Mpc^-1, so 1/H = t = 13.55 Gyr.
ρ = 192π^3 G m_e^4 m_proton^2 c^6 ε^4 /(q_e^8 e^9) = 9.7455 x 10^-28 kg/m^3.
Again, these predictions of the Hubble constant and the density of the universe from the force mechanisms assume that the universe is made of hydrogen, and so are first approximations. However they clearly show the power of this mechanism-based predictive method.
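The three numbers quoted above are straightforward to reproduce; here is a sketch in Python using standard SI constants and the ρ and H inputs stated in the text:

```python
# Sketch reproducing the permittivity, Hubble constant and density figures
# quoted above from the stated formulae (all quantities in SI units).
import math

G = 6.673e-11          # gravitational constant
c = 2.99792e8          # speed of light
q_e = 1.60218e-19      # electron charge
m_e = 9.10938e-31      # electron mass
m_p = 1.67262e-27      # proton mass
eps_measured = 8.85419e-12   # measured permittivity, F/m

# Permittivity prediction from rho = 4.7e-28 kg/m^3 and H for 50 km/s/Mpc:
rho, H = 4.7e-28, 1.62e-18
eps = q_e**2 * math.e**3 * math.sqrt(rho / (12 * math.pi * m_e**2 * m_p * H * c**3))
print(f"eps = {eps:.2e} F/m")    # ~7.4e-12, below the measured value

# Hubble constant prediction (uses the measured permittivity, not H or rho):
H_pred = 16 * math.pi**2 * G * m_e**2 * m_p * c**3 * eps_measured**2 / (q_e**4 * math.e**3)
print(f"H = {H_pred:.4e} /s = {H_pred * 3.0857e19:.1f} km/s/Mpc")  # ~72.2

# Density prediction, independent of H:
rho_pred = 192 * math.pi**3 * G * m_e**4 * m_p**2 * c**6 * eps_measured**4 / (q_e**8 * math.e**9)
print(f"rho = {rho_pred:.4e} kg/m^3")   # ~9.75e-28
```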
Tony Smith has predictions of quark masses: http://www.valdostamuseum.org/hamsmith/
and http://www.valdostamuseum.org/hamsmith/jouref.html#arxivregreq2002
See also http://www.valdostamuseum.org/hamsmith/MENTORxxxarXiv.html
The root of Tony Smith’s ‘suppression’ by arXiv.org seems to be his defence in one paper of 26-dimensional string theory, which is now officially replaced by 10/11-dimensional M-theory: http://cdsweb.cern.ch/search.py?recid=730325&ln=en
In the 1960s while at Motorola, Catt (born 1935, B.Eng. 1959) charged up a 1 m length of coaxial cable to 10 volts, and then discharged it, measuring with a Tektronix 661 sampling oscilloscope with 4S1 and 4S2 (100 picosecond) plug-ins, finding an output pulse of 5 v that was 2 m long. In any static charge, the energy is found to be moving at the speed of light for the adjacent insulator; when discharged, the 50% of the energy already moving towards the exit point leaves first, while the remaining 50% first goes in the opposite direction, reflects back off the far end, and then exits, creating a pulse of half the voltage and twice the duration needed for light to transit the length. Considering a capacitor reduced to simply two oppositely charged particles separated by a vacuum, e.g., an atom, we obtain the particle spin speed.
So the electromagnetic energy of charge is trapped at light speed in any ‘static’ charge situation. David Ash, BSc, and Peter Hewitt, MA, in their 1994 book reviewing electron spin ideas, The Vortex (Gateway, Bath, page 33), stated: ‘… E = mc^2 shows that mass (m) is equivalent to energy (E). The vortex goes further: it shows the precise form of energy in matter. A particle of matter is a swirling ball of energy … Light is a different form of energy, but it is obvious from Einstein’s equation that matter and light share a common movement. In E = mc^2, it is c, the speed of light, which related matter to energy. From this, we can draw a simple conclusion. It is obvious: the speed of movement in matter must be the speed of light.’ However, Ash and Hewitt don’t tackle the big issue: ‘It had been an audacious idea that particles as small as electrons could have spin and, indeed, quite a lot of it. … the ‘surface of the electron’ would have to move 137 times as fast as the speed of light. Nowadays such objections are simply ignored.’ – Professor Gerard ’t Hooft, In Search of the Ultimate Building Blocks, Cambridge University Press, 1997, p27. In addition, quantum mechanical spin, given by Lie’s mathematics, is generally obscure, and different fundamental particles have different spins. Fermions have half-integer spin while bosons have integer spin. Neutrinos and antineutrinos do have a spin around their propagation axis, but the maths of spin for electrons and quarks is obscure. The twisted paper loop, the Mobius strip, illustrates how a particle can have different quantum mechanical spins in a causal way. If you half twist a strip of paper and then glue the ends, forming a loop, the result has only one surface: in the sense that if you draw a continuous line on the looped paper, you find it will cover both sides of the paper! Hence, a Mobius strip must be spun around twice to get back where it began! The same effect would occur in a spinning fundamental particle, where the trapped energy vector rotates while spinning.
Magnetism, in Maxwell’s mechanical theory of spinning virtual particles in space, may be explained akin to vortices, like whirlpools in water. If you have two whirlpools of similar spin (either both being clockwise, or both being anticlockwise), they attract. If the two whirlpools have opposite spins, they repel. In 1925, Samuel Goudsmit and George Uhlenbeck introduced the spin quantum number. But under Bohr’s and Heisenberg’s ‘Machian’ (‘non-observables like atoms and viruses are not real’) paranoid control, it was subsumed into Lie algebra as a mathematical trick, not a physical reality, despite Dirac’s endorsement of the ‘aether’ in predicting antimatter. Apart from the spin issue above, which we resolved by the rotation of the Heaviside-Poynting vector like a Mobius strip, there is also the issue that the equator of the classical spherical electron would revolve 137.03597 times faster than light. Taking Ivor Catt’s work, the electron is not a classical sphere at all, but a Heaviside-Poynting energy current trapped gravitationally into a loop, and it goes at light speed, which is the ‘spin’ speed.
If the electron moves at speed v as a whole in a direction orthogonal (perpendicular) to the plane of the spin, then the c speed of spin will be reduced according to Pythagoras: v^2 + x^2 = c^2, where x is the new spin speed. For v = 0 this gives x = c. What is interesting is that this model gives rise to the Lorentz-FitzGerald transformation naturally, because: x = c(1 - v^2/c^2)^(1/2). Since all time is defined by motion, this (1 - v^2/c^2)^(1/2) factor of reduction of fundamental particle spin speed is therefore the time-dilation factor for the electron when moving at speed v. So there is no metaphysics in such ‘time travel’! Mass increase occurs due to the snowplough effect of the fabric of spacetime ahead of the particle, since it doesn’t have time to flow out of the way when the speed is great.
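A short sketch confirming that this Pythagorean reduction of spin speed is numerically identical to the standard time-dilation factor:

```python
# Sketch: spin speed x from v^2 + x^2 = c^2 versus the standard
# Lorentz-FitzGerald time-dilation factor sqrt(1 - v^2/c^2).
import math

c = 2.99792e8
for frac in (0.0, 0.5, 0.9, 0.99):
    v = frac * c
    x = math.sqrt(c**2 - v**2)              # reduced spin speed (Pythagoras)
    dilation = math.sqrt(1 - v**2 / c**2)   # standard time-dilation factor
    print(f"v = {frac:4.2f}c: x/c = {x/c:.5f}, factor = {dilation:.5f}")
```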
The light photon has a spin angular momentum of cmr, where the effective mass m is of course the energy equivalent, m = E/c^2 (from E = mc^2). Using Planck’s E = hf = hc/λ, where f is frequency and λ is wavelength (λ = 2πr), we find that the spin angular momentum is cmr = h/(2π), which is well verified experimentally. Since the unit of atomic angular momentum is h/(2π), we find the light boson has a spin of 1 unit, i.e. it is a spin-1 boson, obeying Bose-Einstein statistics. The electron, however, has only half this amount of spin, so it is like half a photon (the negative electric field oscillation of a 1.022 MeV gamma ray, to be precise). The electron is called a fermion as it obeys Fermi-Dirac statistics, which applies to half-integer spins. (The spins of two fermions can, of course, under some special conditions ‘add up’ to behave as a boson, hence the ‘Bose-Einstein condensate’ at very low temperatures.)
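The algebra cancels neatly, since m = hf/c^2 and r = (c/f)/(2π); a numeric sketch makes the point (any frequency gives the same answer):

```python
# Sketch: cmr with m = E/c^2 and r = wavelength/(2*pi) always equals
# h/(2*pi), i.e. one unit of spin, whatever the photon frequency.
import math

h = 6.62607e-34
c = 2.99792e8

for f in (1e9, 5e14, 1e20):        # radio, visible, gamma-ray frequencies
    E = h * f                      # photon energy
    m = E / c**2                   # effective mass
    r = (c / f) / (2 * math.pi)    # loop radius from wavelength = 2*pi*r
    print(f"f = {f:.0e} Hz: cmr = {c*m*r:.4e}, h/(2*pi) = {h/(2*math.pi):.4e}")
```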
The only widely known attempt to introduce some kind of causal fluid dynamics into quantum mechanics was by Professor David Bohm and Professor J. P. Vigier in their paper ‘Model of the Causal Interpretation of Quantum Theory in Terms of a Fluid with Irregular Fluctuation’ (Physical Review, v 96, 1954, p 208). This paper showed that the Schroedinger equation of quantum mechanics arises as a statistical description of the effects of Brownian motion impacts on a classically moving particle. However, the whole Bohm approach is wrong in detail, as is the attempt of de Broglie (his ‘non-linear wave mechanics’) to guess a classical potential that mimics quantum mechanics on the small scale and deterministic classical mechanics at the other size regime. The whole error here is due to the Poincaré chaos introduced by the three-body problem, which destroys determinism (but not causality) in classical, Newtonian physics:
‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Tim Poston and Ian Stewart, Analog, November 1981.
So it is not quantum physics that is the oddity, but actually classical physics. The normal teaching of Newtonian physics at low levels falsely claims that it allows the positions of the planets to be exactly calculated (determinism) when it does not. Newton’s laws do not contain any exact solution for more than two bodies, and there are more than two bodies in our solar system. So the problem to address is the error in classical, Newtonian physics, which explains why quantum mechanics is the way it is. Bohm’s approach was to try to obtain a classical model of quantum mechanics, which is the wrong approach, since classical physics is the fiddle. What you first have to admit is that Newton only dealt with two bodies, so his laws simply don’t apply to reality.
Henri Poincaré’s work shows that in any atom, you will have chaos whenever you observe it, even in the Newtonian mechanics framework. The simplest atom is hydrogen, with an electron going around a proton. As soon as you try to observe it, you must introduce another particle like a photon or electron, which gives rise to a 3-body situation! Therefore the chaotic, statistical behaviour of the situation gives rise to the statistical Schroedinger wave equation of the atom without any need to introduce explanations based on ‘hidden variables’. The only mediation is the force gauge boson, which is well known in quantum field theory, and is not exactly a ‘hidden variable’ of the sort Bohm looked for. Newton’s error is restricting his theory to the oversimplified case of only two bodies, when in fact this is a bit like Euclidean geometry, missing a vital ingredient. (Sometimes you do really have to deepen the foundations to build a taller structure.)
In 1890, Poincaré published a 270-page book, On the Problem of Three Bodies and the Equations of Dynamics. He showed that two bodies of similar mass have predictable, deterministic orbital motion because their orbits trace out closed, repeating loops in space. But he found that three bodies of similar mass in orbit trace out irregular, continuously changing unclosed loops and tangles throughout a volume of space, not merely in the flat plane they began in. The average radius of a chaotic orbit is equal to the classical (deterministic) radius, and the probability of finding the particle beyond the average radius diminishes, so giving the basis of the Schroedinger model, where the probability of finding the electron peaks at the classical radius and diminishes gradually elsewhere. Computer programs approximate chaotic motion roughly by breaking up a three-body problem, ABC, into steps AB, AC, and BC, and then cyclically calculating the motions of each pair of bodies for a brief period of time while ignoring the other body for that brief period, as in the sketch below. This is not exact, but is a useful approximation for understanding how chaos occurs and what statistical variations are possible over a period of time. It disproves determinism! Because most of the physicists working in quantum mechanics have not studied the mathematical application of chaos to classical atomic electrodynamics, they have no idea that Newtonian physics is crackpot off the billiard table, and can’t describe the solar system in the way it claims, and that the ‘contradiction’ usually presented as existing between classical and quantum physics is not a real contradiction but is down to the falsehood that classical physics is supposed to be deterministic, when it is not.
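To make the pairwise-splitting scheme concrete, here is a toy sketch in Python; the masses, starting positions and step size are made up purely for illustration:

```python
# Toy sketch of the pairwise-splitting scheme: each step advances pairs
# AB, AC, BC in turn under their mutual Newtonian gravity, ignoring the
# third body during each sub-step. Crude, but it shows the idea.
import math

G = 1.0   # gravitational constant in toy units
bodies = {  # name: [mass, position [x, y], velocity [vx, vy]]
    'A': [1.0, [1.0, 0.0], [0.0, 0.5]],
    'B': [1.0, [-1.0, 0.0], [0.0, -0.5]],
    'C': [1.0, [0.0, 1.5], [0.3, 0.0]],
}

def pair_step(a, b, dt):
    """Advance bodies a and b under their mutual gravity only."""
    ma, pa, va = bodies[a]
    mb, pb, vb = bodies[b]
    dx, dy = pb[0] - pa[0], pb[1] - pa[1]
    r = math.hypot(dx, dy) + 1e-6            # softened to avoid blow-ups
    f = G * ma * mb / r**2                   # inverse-square attraction
    fx, fy = f * dx / r, f * dy / r
    va[0] += fx / ma * dt; va[1] += fy / ma * dt
    vb[0] -= fx / mb * dt; vb[1] -= fy / mb * dt
    for pos, vel in ((pa, va), (pb, vb)):
        pos[0] += vel[0] * dt; pos[1] += vel[1] * dt

dt = 0.001
for step in range(5000):
    for pair in (('A', 'B'), ('A', 'C'), ('B', 'C')):
        pair_step(*pair, dt)

for name, (m, pos, vel) in bodies.items():
    print(f"{name}: position ({pos[0]:+.3f}, {pos[1]:+.3f})")
```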
Feynman explains in his 1985 book QED that ‘When a photon comes down, it interacts with electrons throughout the glass, not just on the surface. The photon and electrons do some kind of dance, the net result of which is the same as if the photon hit only the surface.’ Feynman in the same book concedes that his path-integrals approach to quantum mechanics explains the chaos of the atomic electron as being simply a Bohm-type interference phenomenon: ‘when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’ Thus Feynman suggests that a single hydrogen atom (one electron orbiting a proton, which can never be seen without an additional particle as part of the detection process) would behave classically, and it is the presence of a third particle (say in the measuring process) which interrupts the electron orbit by interference, creating the 3+ body chaos of the Schroedinger wave electron orbital.
Fundamental particles – the leptons, quarks, neutrinos and gauge bosons of the Standard Model
In 1897 J. J. Thomson showed that deflected cathode rays (electrons) in an old-type TV-style cathode ray tube have a fixed mass to charge ratio. When the quantum unit of charge was later measured by Millikan, by using a microscope to watch tiny charged oil drops just stopped from falling under gravity by an electric field, the mass of an electron could be calculated by multiplying the quantum unit of charge by Thomson’s mass to charge ratio!
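The calculation is one line; a sketch with modern rounded values:

```python
# Sketch: electron mass from Millikan's charge quantum and Thomson's
# charge-to-mass ratio for cathode rays.
e_charge = 1.60218e-19    # C, quantum unit of charge (Millikan)
e_over_m = 1.7588e11      # C/kg, Thomson's charge-to-mass ratio

m_electron = e_charge / e_over_m
print(f"electron mass ~ {m_electron:.3e} kg")   # ~9.11e-31 kg
```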
From 1925-7, a quantum mechanical atomic theory was developed in which the probability of finding an electron at any place is proportional to the product of a small volume of space and the square of a wavefunction in that small volume. Integrating this over the whole atomic space allows the total probability to be ‘normalised’ to 1 unit per electron. The normalisation factor therefore allows calculation of the absolute probability of finding the electron at any place. Various complications for orbits and spin are easily included in the mathematical model.
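As a sketch of what ‘normalisation’ means in practice, here is a numeric version for the hydrogen ground state, ψ ∝ exp(-r/a0), whose analytic normalisation factor is 1/√(πa0^3):

```python
# Sketch: fix the normalisation constant N so the total probability,
# the integral of |psi|^2 * 4*pi*r^2 dr over all space, equals 1.
import numpy as np

a0 = 5.29177e-11                        # Bohr radius, m
r = np.linspace(1e-15, 30 * a0, 200000)
dr = r[1] - r[0]
psi = np.exp(-r / a0)                   # unnormalised 1s wavefunction

total = np.sum(psi**2 * 4 * np.pi * r**2) * dr   # rectangle-rule integral
N = 1.0 / np.sqrt(total)

print(f"numeric  N = {N:.4e}")
print(f"analytic N = {1.0 / np.sqrt(np.pi * a0**3):.4e}")   # 1/sqrt(pi*a0^3)
```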
In 1928, Dirac found a way to combine quantum mechanics (wave statistics) with relativity for an electron, and the equation had two solutions. Dirac predicted ‘antimatter’ using the extra solution. The anti-electron, the positron, was discovered in 1932.
From 1943-9, Feynman and others worked on the problem of calculating how the electron interacts with its own field, as propagated by virtual particles in the spacetime fabric. The effect is a 0.116% increase in the magnetic moment calculated by Dirac. Because the field equation is continuous and can increase to infinity, a cutoff is imposed to prevent the nonsensical answer of infinity. This cutoff is decided by a trick called ‘renormalisation’, which consists of subtracting the unwanted infinity. Physically, this can only be interpreted as implying that the electron is coupling up with one virtual electron in the vacuum at a time, not all of them!
WOIT AND THE STANDARD MODEL
Tony Smith’s CERN document server paper, EXT-2004-031, uses the Lie algebra E6 to avoid 1-1 boson-fermion supersymmetry: ‘As usually formulated string theory works in 26 dimensions, but deals only with bosons … Superstring theory as usually formulated introduces fermions through a 1-1 supersymmetry between fermions and bosons, resulting in a reduction of spacetime dimensions from 26 to 10. The purpose of this paper is to construct … using the structure of E6 to build a string theory without 1-1 supersymmetry that nevertheless describes gravity and the Standard Model…’
Peter Woit goes in for a completely non-string approach based on building from the quantum field theory of spinors, http://www.arxiv.org/abs/hepth/0206135.
Woit has some sensible ideas on how to proceed with the Standard Model: ‘Supersymmetric quantum mechanics, spinors and the standard model’, Nuclear Physics, v. B303 (1988), pp. 329-42; ‘Topological quantum theories and representation theory’, Differential Geometric Methods in Theoretical Physics: Physics and Geometry, Proceedings of NATO Advanced Research Workshop, Ling-Lie Chau and Werner Nahm, Eds., Plenum Press, 1990, pp. 533-45:
‘… [it] should be defined over a Euclidean signature four dimensional space since even the simplest free quantum field theory path integral is ill-defined in a Minkowski signature. If one chooses a complex structure at each point in spacetime, one picks out a U(2) [is a proper subset of] SO(4) (perhaps better thought of as a U(2) [is a proper subset of] Spin^c (4)) and … it is argued that one can consistently think of this as an internal symmetry. Now recall our construction of the spin representation for Spin(2n) as Λ*(C^n) applied to a ‘vacuum’ vector.
‘Under U(2), the spin representation has
the quantum numbers of a standard model generation of leptons… A
generation of quarks has the same transformation properties except that
one has to take the ‘vacuum’ vector to transform under the U(1) with
charge 4/3, which is the charge that makes the overall average U(1)
charge of a generation of leptons and quarks to be zero. The above
comments are … just meant to indicate how the most basic geometry of
spinors and Clifford algebras in low dimensions is rich enough to
encompass the standard model and seems to be naturally reflected in the
electroweak symmetry properties of Standard Model
particles…
‘For the last eighteen years particle theory has been
dominated by a single approach to the unification of the Standard Model
interactions and quantum gravity. This line of thought has hardened into
a new orthodoxy that postulates an unknown fundamental supersymmetric
theory involving strings and other degrees of freedom with
characteristic scale around the Planck length. …It is a striking fact
that there is absolutely no evidence whatsoever for this complex and
unattractive conjectural theory. There is not even a serious proposal
for what the dynamics of the fundamental ‘Mtheory’ is supposed to be or
any reason at all to believe that its dynamics would produce a vacuum
state with the desired properties. The sole argument generally given to
justify this picture of the world is that perturbative string theories
have a massless spin two mode and thus could provide an explanation of
gravity, if one ever managed to find an underlying theory for which
perturbative string theory is the perturbative expansion.’ – Dr P. Woit,
Quantum Field Theory and Representation Theory: A Sketch (2002),
http://arxiv.org/abs/hepth/0206135.
Heuristic explanation of QFT
The problem is that people are used to looking to abstruse theory due to the success of QFT in some areas, and looking at the data is out of fashion. If you look at the history of chemistry, there were particle masses of atoms, and it took school teachers like Dalton and a Russian (Mendeleev) to work out periodicity, because the bigwigs were obsessed with vortex atom maths, the ‘string theory’ of that age. Eventually, the obscure school teachers won out over the mathematicians, because the vortex atom (or string theory equivalent) did nothing, but empirical analysis did stuff.
QUARKS
Like electrons, a quark core is surrounded by
virtual particles, namely gluons and pairs of quarks and their
antiquarks. Because of the strong nuclear force, the virtual gluons,
unlike photons, do have a strong force charge called ‘colour’ charge (to
distinguish it from electric charge). This means that both the virtual
gluon cloud and the overlapping cloud of quark and antiquark pairs
interfere with the forces away from the core of a quark. While there are
two types of electric charge (arbitrarily named positive and negative),
there are three types of nuclear colour charge (arbitrarily named red,
green, and blue in quantum chromodynamics, QCD). If the quark core
carried ‘red’ charge, then in the surrounding cloud of virtual quark
pairs, the virtual antired quarks will be attracted to the red quark
core, while the virtual red quarks will be repelled to a greater average
distance. This effect shields the colour charge of the quark core, but
the overlapping cloud of virtual gluons has colour charge and has the
opposite effect. The overall effect is to diffuse the colour charge of
the quark core over a volume of the surrounding virtual particle cloud.
Therefore, the net colour charge decreases as you penetrate through the
virtual cloud, much as the earth’s net gravity force falls if you were
to go down a tunnel to the earth’s core. Thus, if quarks are collided
with higher and higher energies, they will penetrate further through the
virtual cloud and experience a reduced colour charge. When quarks are
bound close together to form nucleons (neutrons and protons), they
therefore interact very weakly because their virtual particle clouds
overlap, reducing their net colour charge to a very small quantity. As
these trapped quarks move apart, the net colour charge increases,
increasing the net force, like stretching a rubber band! This makes it
impossible for any quark to escape from a neutron or proton. Simply put,
the binding energy holding quarks together is more than the energy needed to create a pair or triad of quarks, so you can never isolate a single
quark. Attempts to separate quarks by collisions require so much energy
that new pairs (mesons) or triads (baryons and nucleons) of quarks are
formed, instead of breaking individual quarks loose.
COLOUR CHARGES
A nucleon, that is a neutron or proton, has no
overall ‘colour’ charge, because the ‘colour’ charges of the quarks
within them cancel out exactly. Pairs of quarks, mesons, contain one
quark with a given colour charge, and another quark with the anticharge
of that. Triads of quarks, baryons and nucleons, contain three quarks,
each with a different colour charge: red (R), blue (B) and green (G).
There are also anticolours, AR, AB, and AG. Common sense tells you that the gluons will be 9 in number: RAR, RAB, and RAG, as well as BAR, BAB, and BAG, and finally GAR, GAB, and GAG, a 3 x 3 = 9 result matrix.
If you search the internet, you find a page dated 1996 by
Dr James Bottomley and Dr John Baez which addresses this question: ‘Why
are there eight gluons and not nine?’ They point out first that mesons
are composed of quark and antiquark pairs, and that baryons (neutrons,
protons, etc.) are triads of quarks. Then they argued that the
combination RAR + BAB + GAG ‘must be non-interacting, since otherwise
the colourless baryons would be able to emit these gluons and interact
with each other via the strong force – contrary to the evidence. So
there can be only eight gluons.’ Fair enough, you subtract one gluon
without saying which one (!), to avoid including a general possibility
that makes the colour charge false. (Why does the term ‘false epicycle’
spring to mind?) I love the conclusion they come to: ‘If you are
wondering what the hell I am doing subtracting particles from each
other, well, that’s quantum mechanics. This may have made things seem
more, rather than less, mysterious, but in the long run I'm afraid this
is what one needs to think about.’
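The counting itself is mechanical, as this sketch shows; the physics is all in the decision that the one colourless mixture must be discarded:

```python
# Sketch: nine colour-anticolour pairings, minus the single colourless
# combination (RAR + GAG + BAB) that baryons could otherwise emit,
# leaves eight gluons.
colours = ['R', 'G', 'B']
pairs = [c + 'A' + a for c in colours for a in colours]   # 'RAR', 'RAG', ...
print(pairs)                        # all 3 x 3 = 9 combinations
colourless_mixtures = 1             # the non-interacting singlet
print(f"gluons = {len(pairs) - colourless_mixtures}")   # 8
```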
All quantum field theories are based ultimately upon simple extensions of Dirac’s mathematical work in attempting to unify special relativity with quantum mechanics in the late 1920s. People such as Dr Sheldon Glashow and Dr Gerard ’t Hooft developed the framework. A quantum field theory, the ‘Standard Model’ [gauge groups SU(3) x SU(2) x U(1)], is built on a unitary group, U(1), as well as two special unitary groups, SU(2) and SU(3).
U(1) describes electric charge (having a
single vector field or gauge boson, the photon). Because bosons are spin
1, the force can be attractive or repulsive, depending on the signs of
the charges. (To have a charge which is always positive or attractive,
like gravity, would require a spin 2 boson which is why the postulated
quantum gravity boson, the unobserved graviton, is supposed to have a
spin of 2.)
SU(2) describes weak isospin interactions (having 3 vector fields or 3 gauge bosons: Z, W+, W-). Electroweak theory is SU(2) x U(1), with 4 gauge bosons.
SU(3) describes the strong
nuclear force, the 'colour charge' interactions (having 8 vector fields
or 8 gauge bosons: gluons). Gauge bosons are force mediators, 'gauge'
coming from the size scale analogy of railway line gauges, and 'boson'
coming from Einstein's collaborator Bose, who worked out the statistical
distribution describing a gas of light photons.
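The boson counts quoted here follow directly from the group dimensions, 1 for U(1) and n^2 - 1 for SU(n); a sketch:

```python
# Sketch: number of gauge bosons = group dimension, 1 for U(1) and
# n^2 - 1 for SU(n), totalling the 12 mediators of the Standard Model.
def gauge_bosons(group: str) -> int:
    if group == 'U(1)':
        return 1
    n = int(group[3])         # e.g. 'SU(2)' -> 2 (single-digit n only)
    return n * n - 1

groups = ('U(1)', 'SU(2)', 'SU(3)')
for g in groups:
    print(g, gauge_bosons(g))                    # 1, 3, 8
print('total:', sum(map(gauge_bosons, groups)))  # 12
```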
SU(2) allows left-handed fields to form doublets, while left-handed fields in SU(3) allow triplets of quarks (baryons like the neutron and proton) and singletons (leptons like the electron and muon) to form. The right-handed fields are the same for SU(3) but only form a pair of two singlets (mesons) for SU(2).
To work, mass must be provided by an uncharged massive
particle, the 'Higgs field boson'. SO(3) is another symmetry group which
describes the conservation of angular momentum for 3 dimensional
rotations. Is the Standard Model a worthless heap of trash, as it
requires the existence of an unobserved Higgs field to give rise to
mass? No, it is the best available way of dealing with all available
physics data, and the Higgs field is implied as a type of ether. If you
see an inconsistency between the use of special relativity in quantum
field theory and the suggestion that it implies an ether, you need to
refresh yourself on the physical interpretation of general relativity,
which is a perfect fluid (ether/spacetime fabric) theory according to
Einstein. General relativity requires an additional postulate to those
of special relativity (which is really a flat earth theory, as it does not allow for curved geodesics or gravity!), but gives rise to the same
mathematical transformations as special relativity.
Spin in quantum field theory is described by ‘spinors’, which are
more sophisticated than vectors. The story of spin is that Wolfgang
Pauli, inventor of the phrase ‘not even wrong’, in 1924 suggested that
an electron has a ‘two-valued quantum degree of freedom’, which in
addition to three other quantum numbers enabled him to formulate the
‘Pauli exclusion principle’. (I use this on my home page to calculate
how many electrons are in each electron shell, which produces the basic
periodic table.)
Because the idea is experimentally found to sort
out chemistry, Pauli was happy. In 1925, Ralph Kronig suggested the reason for the two degrees of freedom: the electron spins and can be orientated with either North Pole up or South Pole up. Pauli initially
objected because the amount of spin would give the old spherical model
of the electron (which is entirely false) an equatorial speed of 137
times the speed of light! However, a few months later two Dutch
physicists, George Uhlenbeck and Samuel Goudsmit, independently published the idea of electron spin, although they got the answer wrong by a factor (the g-factor) of 2.00232 (this is just double the 1.00116 factor for the magnetic moment of the electron). The first attempt to
explain away this factor of 2 was by Llewellyn Thomas and was of the
abstract variety (put equations together and choose what you need from
the resulting brew). It is called the ‘Thomas precession’. Spin-caused
magnetism had already been observed as the anomalous Zeeman effect
(spectral line splitting when the atoms emitting the light are subjected
to an intense magnetic field). Later the Stern-Gerlach experiment
provided further evidence. It is now known that the ordinary magnetism
of iron bar magnets and magnetite is derived from electron spin
magnetism. Normally this cancels out, but in iron and other magnetic metals it does not completely cancel out in each atom, and this fact allows permanent magnets. Anyway, in 1927 Pauli accepted spin, and introduced the
‘spinor’ wave function. In 1928, Dirac introduced special relativity to
Pauli’s spinor, resulting in ‘quantum electrodynamics’ that correctly
predicted antimatter, first observed in 1932.
The Special Orthogonal group in 3 dimensions, or SO(3), allows spinors (via its double cover, SU(2)). It is traced back to Sophus Lie, who in 1870 introduced special manifolds to study the symmetries of differential equations. The Standard Model, with its special unitary groups SU(3)xSU(2)xU(1), is a development and application of spinor mathematics to physics. SU(2) is not actually the weak nuclear
force despite having 3 gauge bosons. The weak force arises from the
mixture SU(2)xU(1), which is of course the electroweak theory. Although
U(1) described aspects of electromagnetism and SU(2) aspects of the weak
force, the two are unified and should be treated as a single mix,
SU(2)xU(1). Hence there are 4 electroweak gauge bosons, not 1 or 3. One
whole point of the Higgs field mechanism is that it is vital to shield
(attenuate) some of those gauge bosons, so that they have a short range
(the weak force), unlike electromagnetism.
On the other hand, for
interactions of very high energy, say 100 GeV, the weak force influence
SU(2) vanishes and SU(3)xU(1) takes over, so the strong nuclear force
and electromagnetism then dominate.
History of quantum field theory
‘I must say that I am very dissatisfied with the situation, because this socalled ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small – not neglecting it when it is infinitely great and you do not want it! … Simple changes will not do … I feel that the change required will be just about as dramatic as the passage from the Bohr theory to quantum mechanics.’ – Paul A. M. Dirac, lecture in New Zealand, 1975 (quoted in Directions in Physics).
The following list of developments is excerpted from a longer one given in Dr Peter Woit’s notes on the mathematics of QFT (available as a PDF from his home page). Dr Woit says:
‘Quantum field theory is not a subject which is at the point that it can be developed axiomatically on a rigorous basis. There are various sets of axioms that have been proposed (for instance Wightman’s axioms for nongauge theories on Minkowski space or Segal’s axioms for conformal field theory), but each of these only captures a limited class of examples. Many quantum field theories that are of great interest have so far resisted any useful rigorous formulation. …’
He lists the major events in QFT to give a sense of chronology to the mathematical developments:
‘1925: Matrix mechanics version of quantum mechanics (Heisenberg)
‘1925-6: Wave mechanics version of quantum mechanics, Schroedinger equation (Schroedinger)
‘1927-9: Quantum field theory of electrodynamics (Dirac, Heisenberg, Jordan, Pauli)
‘1928: Dirac equation (Dirac)
‘1929: Gauge symmetry of electrodynamics (London, Weyl)
‘1931: Heisenberg algebra and group (Weyl), Stone-von Neumann theorem
‘1948: Feynman path integrals formulation of quantum mechanics
‘1948-9: Renormalised quantum electrodynamics, QED (Feynman, Tomonaga, Schwinger, Dyson)
‘1954: Non-abelian gauge symmetry, Yang-Mills action (Yang, Mills, Shaw, Utiyama)
‘1959: Wightman axioms (Wightman)
‘1962-3: Segal-Shale-Weil representation (Segal, Shale, Weil)
‘1967: Glashow-Weinberg-Salam gauge theory of weak interactions (Weinberg, Salam)
‘1971: Renormalised non-abelian gauge theory (’t Hooft)
‘1971-2: Supersymmetry
‘1973: Non-abelian gauge theory of strong interactions, QCD (Gross, Wilczek, Politzer)
(I’ve omitted the events on Dr Woit’s list after 1973.)
Dr Chris Oakley has an internet site about renormalisation in quantum field theory, which is also an interest of Dr Peter Woit. Dr Oakley starts by quoting Nobel Laureate Paul A.M. Dirac’s concerns in the 1970s:
‘[Renormalization is] just a stopgap procedure. There must be some fundamental change in our ideas, probably a change just as fundamental as the passage from Bohr’s orbit theory to quantum mechanics. When you get a number turning out to be infinite which ought to be finite, you should admit that there is something wrong with your equations, and not hope that you can get a good theory just by doctoring up that number.’
The Nobel Laureate Richard P. Feynman did two things: he described the accuracy of the prediction of the magnetic moment of leptons (electron and muon) and the Lamb shift, and he identified two major problems of QFT, namely ‘renormalisation’ and the unknown rationale for the ‘137’ electromagnetic force coupling factor:
‘… If you were to measure the distance from Los Angeles to New York to this accuracy, it would be exact to the thickness of a human hair. That’s how delicately quantum electrodynamics has, in the past fifty years, been checked … I suspect that renormalisation is not mathematically legitimate … we do not have a good mathematical way to describe the theory of quantum electrodynamics … the observed coupling … 137.03597 … has been a mystery ever since it was discovered … one of the greatest damn mysteries …’ – QED, Penguin, 1990, pp. 7, 128-9.
Dr Chris Oakley writes: ‘… I believe we already have all the ingredients for a compact and compelling development of the subject. They just need to be assembled in the right way. The important departure I have made from the ‘standard’ treatment (if there is such a thing) is to switch round the roles of quantum field theory and Wigner’s irreducible representations of the Poincaré group. Instead of making quantising the field the most important thing and Wigner’s arguments an interesting curiosity, I have done things the other way round. One advantage of doing this is that since I am not expecting the field quantisation program to be the last word, I need not be too disappointed when I find that it does not work as I may want it to.’
Describing the problems with ‘renormalisation’, Dr Oakley states: ‘Renormalisation can be summarised as follows: developing quantum field theory from first principles involves applying a process known as ‘quantisation’ to classical field theory. This prescription, suitably adapted, gives a full dynamical theory which is to classical field theory what quantum mechanics is to classical mechanics, but it does not work. Things look fine on the surface, but the more questions one asks the more the cracks start to appear. Perturbation theory, which works so well in ordinary quantum mechanics, throws up some higherorder terms which are infinite, and cannot be made to go away.
‘This was known about as early as 1928, and was the reason why Paul Dirac, who (along with Wolfgang Pauli) was the first to seriously investigate quantum electrodynamics, almost gave up on field theory. The problem remains unsolved to this day. Perturbation theory is done slightly differently, using an approach based on the pioneering work of Richard Feynman, but, other than that, nothing has changed. One seductive fact is that by pretending that infinite terms are not there, which is what renormalisation is, the agreement with experiment is good. … I believe that our failure to really get on top of quantum field theory is the reason for the depressing lack of progress in fundamental physics theory. … I might also add that the way that the whole academic system is set up is not conducive to the production of interesting and original research. … The tone is set by burnedout old men who have long since lost any real interest and seem to do very little other than teaching and politickering. …’
Actually, the tragedy started when two rival approaches to the development of Isaac Newton’s gravitational theory were almost simultaneously proposed: one mathematical horses*** which became popular, and the other the LeSage physical mechanism, which was ignored. The mathematical horses*** ‘theory’ was proposed by the Jesuit theologian Roger J. Boscovich, a Fellow of the Royal Society, in 1758, in his Theory of Natural Philosophy. This ‘theory’ was just a kind of distorted sine-wave curve, with a crackpot claim that it showed numerically how the unexplained force of nature ‘oscillates’ between attraction and repulsion with increasing distance between ‘points of matter’. This started the cult pseudoscience of guessed non-theoretical crackpot stuff that has led to 11-dimensional M-theory (it might one day be defendable mathematically after a lot more work, but there is no evidence, and even if it is right, it can’t predict forces, let alone particle masses). Whenever Boscovich’s ‘force’ (a line on a graph) crossed over from ‘attraction’ to ‘repulsion’, there was supposedly a stable point where things like molecules, water drops, and planets could be stable without collapsing or exploding. This led Einstein to do the same to keep the universe ‘static’ with the cosmological constant, which he later admitted was his ‘biggest blunder’. The cosmological constant makes gravity zero at the distance of the average separation between galaxies, simply by making gravity fall off faster than the inverse square law, become zero at the galactic interspersion distance, and become repulsive at greater distances. However, Einstein was not merely wrong to follow Boscovich because of the lack of a gravitational mechanism and the 1929 evidence for the big bang, but also because, even neglecting these, the solution would not work. There is no stability in such a solution, since the nature of the hypothetical force when crossing over from attraction to repulsion is to magnify any slight motion, enhancing instability, so there is no real stability. Hence it is entirely fraudulent, both scientifically and mathematically.
In article 111 of his Treatise on Electricity and Magnetism, 1873, Maxwell says: ‘When induction is transmitted through a dielectric [like space or glass or plastic], there is in the first place a displacement of electricity…’
Catt seems to question the details of this claim here: http://www.ivorcatt.org/icrwiworld78dec1.htm and http://www.ivorcatt.org/icrwiworld78dec2.htm. Maxwell imagined that volume is filled with a physical space, a sea of charge that becomes polarised in the space-filled gap between two plates of a capacitor. This analogy (which is from chemical electrolysis – electroplating and battery reactions) ignores the mechanism by which the capacitor charges up. But weirdly, as we now know from evidence in QED, the ‘ether’ really does contain virtual charges that get polarised around the electron core. This shields the core charge by a factor of 137, giving an electric force 137 times weaker than the strong nuclear force. (Dirac used the sea of virtual particles to help him visually in using his equations to predict antimatter, which is weird, since Dirac was relying on an ether theory to unify quantum theory and special relativity, which most people think says there is no ether! Dirac was ethical enough that he later published in Nature that there is definitely an ‘aether’, which helped his arcane reputation no end!)
It is interesting that Dirac’s conceptual model of the ether for pair-production (matter-antimatter production when a suitably energetic gamma ray is distorted by the strong field near a nucleus) may be wrong, just as Maxwell’s aether model for the charging capacitor is wrong. The equations can appear right experimentally under some conditions, but it is possible that reality is slightly different to what first appears to be the case. The first real problem with Maxwell’s theory arose in 1900, with Planck’s quantum theory, which predicted the spectrum of light properly, unlike Maxwell’s theory. Planck’s theory has light emitted in steps or quanta, named ‘photons’. This is far from the continuous emission of radiation predicted by Maxwell.
‘Our electrical theory has grown like a ramshackle farmhouse which has been added to, and improved, by the additions of successive tenants to satisfy their momentary needs, and with little regard for the future.’ – H.W. HeckstallSmith, Intermediate Electrical Theory, Dent, London, 1932, p283.
‘a) Energy current can only enter a capacitor at the speed of light.
‘b) Once inside, there is no mechanism for the energy current to slow down below the speed of light…’
 Ivor Catt, Electromagnetism 1, Westfields Press, St Albans, 1995, p5.
In the New Scientist, Professor Leonard Susskind is quoted as having said he wants to outlaw all use of the word ‘real’ in physics [metaphysics?]. Why not apply to have string theory receive recognised religious status, so it is protected from the fact it can’t make testable predictions? Freedom does not extend to criticisms of religious faiths, so then ‘heretics’ will be burned alive or imprisoned for causing a nuisance.
Religious Creed or ‘Confession of Faith’ of a Mainstream Crackpot
I believe in one way forward, string theory.
In 11 dimensions, Mtheory was made.
All equations bright and beautiful,
String theory on the Planck scale made them all.
All epicyclical cosmic matter of the darkness,
All ad hoc dark energies of the night,
Are entirely right.
Amen.
‘Teachers of history, philosophy, and sociology of science … are up in arms over an attack by two Imperial College physicists … who charge that the plight of … science stems from wrongheaded theories of knowledge. … Scholars who hold that facts are theoryladen, and that experiments do not give a clear fix on reality, are denounced. … Staff on Nature, which published a cutdown version of the paper after the authors’ lengthy attempts to find an outlet for their views, say they cannot recall such a response from readers. ‘It really touched a nerve,’ said one. There was unhappiness that Nature lent its reputation to the piece.’ – Jon Thurney, Times Higher Education Supplement, 8 Jan 88, p2. [This refers to the paper by T. Theocharis and M. Psimopoulos, ‘Where Science Has Gone Wrong’, Nature, v329, p595, 1987.]
Consider the wonderful exchange between science writer John Horgan (who I’m starting to admire for common sense views on modern physics, although I didn’t much like his attack a while back on a U.S. weapons scientist on the basis that the scientist had unorthodox ideas in other, unrelated areas).
The debate occurs in ‘Edge’ (http://www.edge.org/) issue 165, 15 Aug 05:
John Horgan (Author of ‘The End of Science’), In Defence of Common Sense: ‘… I feel compelled to deplore one aspect of Einstein’s legacy: the widespread belief that science and common sense are incompatible … quantum mechanics and relativity shattered our commonsense notions about how the world works. [No, no, no: Einstein himself pointed out in a lecture at Leyden University in 1920 that according to general relativity, gravity is caused by ether, the continuum or spacetime fabric (and he dumped special relativity as just a mathematically ‘restricted’ approximation in 1916 when he developed general relativity); he also in 1935 published a paradox in the crazy ‘Copenhagen Interpretation’ of quantum mechanics!] … As a result, many scientists came to see common sense as an impediment to progress … Einstein’s intellectual heirs have long been obsessed [‘interested’ is a more polite term than ‘obsessed’, Horgan!] with finding a single ‘unified’ theory that can embrace quantum mechanics, which accounts for electromagnetism and the nuclear forces [quantum field theory], and general relativity, which describes gravity. … The strings … are too small to be discerned by any buildable instrument, and the parallel universes are too distant. Common sense thus persuades me that these avenues of speculation will turn out to be dead ends. [Right result, but inadequate reasoning!] … ultimately, scientific truth must be established on empirical grounds. [Spot on!] … Einstein … could never fully accept the bizarre implication of quantum mechanics that at small scales reality dissolves into a cloud of probabilities. [No, no, no: Einstein supported the pilot-wave theory of de Broglie, in which particles cause waves in the surrounding ‘ether’ as they move, causing the wave-type diffraction and uncertainty effects, and Einstein also tried to get Dr David Bohm, the hidden variables theorist, to be his assistant at the Institute of Advanced Study in Princeton, but was blocked by the director Dr Robert Oppenheimer (who was better at blowing people up than being constructive, as Dr Tony Smith has pointed out on Not Even Wrong).]’
Dr Leonard Susskind (Felix Bloch Professor of Theoretical Physics, Stanford University), In Defence of Uncommon Sense: ‘John Horgan … has now come forth to tell us that the world’s leading physicists and cognitive scientists are wasting their time. … Every week I get several angry email messages … [I wonder why?] … as Horgan tells us, it’s a dangerous sea where one can easily lose one’s way and go right off the deep end. [Easily??!!??] But great scientists are, by nature, explorers. To tell them to stay within the boundaries of common sense may be like telling Columbus that if he goes more than fifty miles from shore he’ll get hopelessly lost. [So Dr Susskind is like Columbus, which probably means that he will be convinced that he has found Western India, when really he is on a different continent, America.] Besides, good old common sense tells us [Susskind] that the Earth is flat. …’
My quotation of Dr Susskind above omits a lot of more sensible, yet irrelevant, comments. However, like other people who complain that ‘good old common sense tells us that the Earth is flat’, he is not helping physics by defending science fiction (string theory). If I look at pictures of the Earth taken from the Moon, common sense tells me the Earth is approximately spherical. Newton stated it is nearer an oblate spheroid, but others (with more accurate data) show it is ‘pear-shaped’ (which seems ‘common sense’ to those of us with a sense of humour). The fact that there is a horizon in every direction, and that you see a greater distance when you get higher up, suggests that there is some kind of sloping off of the ground in every direction. It was the fact that ships disappeared gradually over the visible horizon, but later returned, that suggested a difficulty in the flat-earth theory. Dr Susskind would have done better by saying that common sense tells us that the sun orbits the earth, but possibly he feared absolute motion (preferring the crazy idea that Copernicus didn’t work on ‘the solar system’, but had instead discredited absolute motion, paving the way for relativism). I suggest to Dr Susskind and Mr Horgan that they debate ‘causality’ instead of ‘common sense’, and do it in a real forum with plenty of custard pies available for observers to use against the loser. The basic problem is that Dr Susskind is so busy defending prejudice against far-out ideas that he forgets he and other string theorists are creating just a little indirect prejudice against more classical or testable new ideas which might be able to sort out problems in physics.
Dr Peter Woit is a Columbia University mathematician who runs the weblog ‘Not Even Wrong’ about string theory – physicist Pauli deemed speculative belief systems like strings which predict nothing and cannot be tested or checked ‘not even wrong’. He has written a book that will sort out the nonsense in physics.
Nigel Says: January 14th, 2006 at 2:18 pm

Some kind of loop quantum gravity is going to be the right theory, since it is a spin foam vacuum. People at present are obsessed with the particles that string theory deals with, to the exclusion of the force mediating vacuum. Once prejudices are overcome, proper funding of LQG should produce results.
Lee Smolin Says: January 14th, 2006 at 4:41 pm

... Thanks also to Nigel for those supporting comments. Of course more support will lead to more results, but I would stress that I don’t care nearly as much that LQG gets more support as that young people are rewarded for taking the risk to develop new ideas and proposals. To go from a situation where a young person’s career was tied to string theory to one in which it was tied to LQG would not be good enough. Instead, what is needed overall is that support for young scientists is not tied to their loyalty to particular research programs set out by we older people decades ago, but rather is on the basis only of the quality of their own ideas and work as well as their intellectual independence. If young people were in a situation where they knew they were to be supported based on their ability to invent and develop new ideas, and were discounted for working on older ideas, then they would themselves choose the most promising ideas and directions. I suspect that science has slowed down these last three decades partly as a result of a reduced level of intellectual and creative independence available to young people.

Thanks,
Lee
Sadly then, Dr Lubos Motl, string ‘theorist’ and assistant professor at Harvard, tried to ridicule this approach by the false claim that Dirac’s quantum field theory disproves a spacetime fabric, as it is allegedly a unification of special relativity (which denies a spacetime fabric) and quantum mechanics. Motl tried to ridicule me with this, although I had already explained the reason to him! "An important part of all totalitarian systems is an efficient propaganda machine. ... to protect the 'official opinion' as the only opinion that one is effectively allowed to have." – STRING THEORIST Dr Lubos Motl: http://motls.blogspot.com/2006/01/powerofpropaganda.html Here is a summary of the reasons why Dirac’s unification is only of the mathematics of special relativity, not of the principle of no-fabric. In fact Dirac was an electrical engineer before becoming a theoretical physicist, and later wrote:
‘… with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?,’ Nature, v168, 1951, p906. (If you have a kid playing with magnets, how do you explain the pull and push forces felt through space? As ‘magic’?) See also Dirac’s paper in Proc. Roy. Soc. v.A209, 1951, p.291.
Thankfully, Peter Woit has retained so far a comment on the discussion post for loop quantum gravity which points out that Motl is wrong:
http://www.math.columbia.edu/~woit/wordpress/?p=330
anonymous Says: January 21st, 2006 at 1:19 pm

Lumos has a long list of publications about speculation on unobservables. So I guess he’s well qualified to make vacuous assertions. What I’d like to see debated is the fact that the spin foam vacuum is modelling physical processes KNOWN to exist, as even the string theorist authors of http://arxiv.org/abs/hep-th/0601129 admit, p14:

‘… it is thus perhaps best to view spin foam models … as a novel way of defining a (regularised) path integral in quantum gravity. Even without a clear-cut link to the canonical spin network quantisation programme, it is conceivable that spin foam models can be constructed which possess a proper semiclassical limit in which the relation to classical gravitational physics becomes clear. For this reason, it has even been suggested that spin foam models may provide a possible ‘way out’ if the difficulties with the conventional Hamiltonian approach should really prove insurmountable.’

Strangely, the ‘critics’ are ignoring the consensus on where LQG is a useful approach, and are just trying to ridicule it. In a recent post on his blog, for example, Motl states that special relativity should come from LQG. Surely Motl knows that GR deals better with the situation than SR, which is a restricted theory that is not even able to deal with the spacetime fabric (SR implicitly assumes NO spacetime fabric curvature, to avoid acceleration!).
When asked, Motl responds by saying Dirac’s equation in QFT is a unification of SR and QM. What Motl doesn’t grasp is that the ‘SR’ EQUATIONS are the same in GR as in SR, but the background is totally different:
‘The special theory of relativity … does not extend to nonuniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of coordinates, that is, are covariant with respect to any substitutions whatever (generally covariant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.
What a pity Motl can’t understand the distinction and its implications.
(See also http://nigelcook0.tripod.com/ and scroll down; http://lqg.blogspot.com/, and http://electrogravity.blogspot.com/)
http://christinedantas.blogspot.com/2006/02/doesquantummechanicsapplyto.html
Ordinary QM makes no attempt to deal with spacetime or the spacetime fabric, but QFT does.
Dirac's equation is Schrodinger's time-dependent equation with a term for mass-energy added (the E=mc^2 result comes from the Lorentz mass increase formula, which came a decade before Einstein derived it from SR; so, contrary to Lubos Motl, Dirac is not tied to a no-ether SR).
It is interesting that Maxwell's special term, added to the "Maxwell equations" (Heaviside's equations) for Ampere's law, is ultimately describing the same thing as the time-dependent Schrodinger equation which is the basis of Dirac's equation too. The energy Schrodinger's equation describes is electromagnetic, as is that of Maxwell's equation.

They describe energy by the Hamiltonian, while the field or the wave function varies with time. Maxwell's equation is stated as being for 'displacement current' flowing from one conductor to another in a charging capacitor, across a vacuum dielectric. However, the real process is induction, electromagnetic energy flowing from one conductor to the other.
See http://electrogravity.blogspot.com/2006/01/solutiontoproblemwithmaxwells.html for a basic situation Maxwell missed.
To get a detailed understanding of QM for spacetime, you need to stop thinking about abstract wavefunctions and field strengths, and rewrite the equations as energy exchange processes (for Feynman diagrams).
Because quantum field theory is more complete than QM, surely Feynman's sum over histories approach (path integrals), introduces spacetime properly into quantum mechanics? I'm sure the QFT equations will be impractical to use for QM, but the principle holds.
http://motls.blogspot.com/2006/02/buydanishproducts.html:

Dear Lubos,

You claim that Dirac's theory unifies SR and QM, when in fact Dirac's equation (which is his theory) is an expansion of the time-dependent Schroedinger equation to include the mass-energy result, which comes from electromagnetism (there are dozens of derivations of E=mc^2, not merely SR). The time-dependent Schroedinger equation is similar to Maxwell's "displacement current", which actually doesn't describe real electric current but the energy flow in the vacuum when a capacitor or suchlike charges by induction.

Maxwell's theory of "displacement current" was a spin foam vacuum. Maxwell’s 1873 Treatise on Electricity and Magnetism, Articles 822-3: ‘The ... action of magnetism on polarised light [discovered by Faraday not Maxwell] leads ... to the conclusion that in a medium ... is something belonging to the mathematical class as an angular velocity ... This ... cannot be that of any portion of the medium of sensible dimensions rotating as a whole. We must therefore conceive the rotation to be that of very small portions of the medium, each rotating on its own axis [spin] ... The displacements of the medium, during the propagation of light, will produce a disturbance of the vortices ... We shall therefore assume that the variation of vortices caused by the displacement of the medium is subject to the same conditions which Helmholtz, in his great memoir on Vortex-motion [of 1858; sadly Lord Kelvin in 1867, without a fig leaf of empirical evidence, falsely applied this vortex theory to atoms in his paper ‘On Vortex Atoms’, Phil. Mag., v4, creating a mathematical cult of vortex atoms just like the mathematical cult of string theory now; it created a vast amount of prejudice against ‘mere’ experimental evidence of radioactivity and chemistry that Rutherford and Bohr fought], has shewn to regulate the variation of the vortices [spin] of a perfect fluid.’

Lorentz invariance is, as the name suggests, Lorentz invariance, not SR invariance. Lorentz invariance is aetherial. Even if you grasp this and start calling the contraction a metaphysical effect unrelated to physical dynamics of the quantum vacuum, you don't get anywhere.

Feynman's innovation was introducing spacetime pictures, because you need to see what you are doing clearly when using mathematics. The increase in the magnetic moment of an electron that Feynman, Schwinger and Tomonaga came up with is 1 + 1/(2.Pi.137), where the first term is from Dirac's theory and the second is the increase due to the first Feynman coupling correction to the vacuum.

The 1/(2.Pi.137) is from a renormalised or cut-off QFT integral, but the heuristic meaning is clear. The core of the electron has a charge 137 times the observed charge, and this is shielded by the polarised vacuum, as Koltick's 1997 PRL-published experiments confirm (the 1/137 factor changes to 1/128.5 as collision energy goes to 100 GeV or so; at unification energy it would be 1/1, corresponding to completely breaking through the veil of polarised vacuum).

Renormalisation is limiting the interaction physically to 1 vacuum particle rather than an infinite number, and that particle is outside the veil, so the association is 137 times weaker at low energies, and the geometry causes a further reduction by 2.Pi (because the exposed length of a spinning loop particle seen as a circle is 2.Pi times the side-on or diameter size). So that is physically what is behind adding 1/(2.Pi.137), or 0.00116, to the core's magnetic moment (which is unshielded by the polarised veil, because that only attenuates the electric field).
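As a quick arithmetic check of the figures just quoted, here is a short Python sketch; the constants are standard, while the 'core charge shielding' reading of them is of course the argument of this page:

```python
import math

# Schwinger's first-order correction to the Dirac magnetic moment:
alpha = 1 / 137.036                 # fine structure constant at low energy
print(1 + alpha / (2 * math.pi))    # ~1.00116, i.e. 1 + 1/(2.Pi.137)

# Koltick et al., PRL 1997: the effective coupling runs from ~1/137 toward
# ~1/128.5 at collision energies around 100 GeV:
print((1 / 128.5) / alpha - 1)      # ~0.066, i.e. roughly 7% stronger
```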
In addition, the same mechanism explains the differing masses of different fundamental particles. If the Standard Model mass-causing particle (Higgs field particle) is inside the polarised veil, it experiences the core strength, 137 times Coulomb, and is strongly associated with the particle core, increasing the mass.

But if the Higgs field particle is outside the polarised veil, it is subject to the shielded strength, 137 times less than the core charge, so the coupling is weaker and the effective miring mass by the Higgs field is 137 times weaker.

This idea predicts that a particle core with n fundamental particles (n = 1 for leptons, n = 2 for mesons, and obviously n = 3 for baryons) coupling to N virtual vacuum particles (N is an integer) will have an associative inertial mass of Higgs bosons of:

(0.511 Mev).(137/2).n(N + 1) = 35n(N + 1) Mev,

where 0.511 Mev is the electron mass. Thus we get everything from this one mass plus integers 1, 2, 3, etc., with a mechanism.
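For illustration only, here is a minimal Python sketch evaluating this conjectured mass formula for small integers n and N. The identification of any particular output with a particular real particle (for example n = 1, N = 2 landing near the muon mass) is the conjecture being advanced here, not established physics:

```python
M_E = 0.511  # electron mass-energy, MeV

def mass_mev(n, N):
    """Conjectured associative inertial mass: (0.511 MeV).(137/2).n(N + 1)."""
    return M_E * (137 / 2) * n * (N + 1)

for n in (1, 2, 3):          # n = 1 leptons, 2 mesons, 3 baryons (as above)
    for N in range(1, 4):    # N = number of coupled virtual vacuum particles
        print(f"n={n}, N={N}: {mass_mev(n, N):.0f} MeV")
# For instance n=1, N=2 gives ~105 MeV, close to the muon (105.66 MeV).
```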
Many of these ideas are equally applicable to string theory or LQG, since they're dealing with practical problems.

Tell me if you would have dismissed Feynman's diagrams in 1948 as crackpot, like Oppenheimer did at first.

Best wishes,
Nigel

Nigel Cook | Homepage | 02.06.06 - 3:25 am
Claims that we have a good idea that many combinations of ‘laws’ won’t allow life are disproved by cosmologists claiming that the successful calculation of fusion of light elements in the big bang demonstrates G didn’t vary by more than 20% from 3 minutes to today. Actually, it could have varied enormously without affecting fusion in stars or indeed the big bang, simply by the constancy of the ratio of Coulomb to gravity. If G was a million times weaker at 3 minutes, compression and the fusion rate would be less. But if the ratio of gravity to Coulomb was constant, the weaker Coulomb repulsion between nuclei would make fusion more likely, offsetting the reduced compression.

Conclusion: the widely held claim that the anthropic principle can dismiss many different combinations of laws has been abused. It is a false claim that the anthropic principle is useful; it is arguing from plain ignorance, which is the kind of philosophising that supposedly went out of fashion with the scientific revolution.
Spacetime says distance is light speed multiplied by the time in the past that the event occurred. So the recession of galaxies is varying with time, in the framework of spacetime that we can actually see and experience with measurements. A speed varying linearly with time is an acceleration, a = Hc ~ 6 x 10^-10 ms^-2; hence the outward force of the big bang is the mass of the universe multiplied by this acceleration. By the 3rd law of motion, you then get an inward force, the gauge boson exchange force, which causes the general relativity contraction and also gravity.
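A back-of-the-envelope Python check of this acceleration and the resulting outward force. The mass of the universe used below is an assumed round figure of order 10^53 kg (not a value given on this page), so only the orders of magnitude matter:

```python
c = 3e8                     # light speed, m/s
t = 15e9 * 3.156e7          # 15,000,000,000 years expressed in seconds
a = c / t                   # ~6e-10 m/s^2, the Hc acceleration above

m_universe = 1e53           # kg; assumed order-of-magnitude figure
F = m_universe * a          # outward force; Newton's 3rd law gives an
print(a, F)                 # equal inward reaction (~6e-10 m/s^2, ~6e43 N)
```

The figure of 7 x 10^43 newtons quoted later on this page includes a correction for the higher density of the universe at earlier times, which this crude sketch omits.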
Kepler was an astrologer; since it's just calculating where the planets are in the sky it doesn't matter how you do the calculations. Nobody says that because Kepler speculated the planets orbited the sun due to magnetism, or because he was an astrologer, he should be banned from the history of science for what he got right (the planetary laws the alchemist Newton used).
Another guy of like mind was Maxwell, who had various elastic and mechanical aethers in his papers of 1862-5. However, his equations are compatible with the FitzGerald-Lorentz contraction of 1889-93, or the same formulae from 'special'/restricted relativity.
I read a 1933 book, 'My Philosophy' by Sir Oliver Lodge, a physicist who pioneered radio. It's a mix of physics and pseudoscience, just like the modern stringy stuff. Lodge was clinging on to Maxwell's ether, and trying to popularise it with telepathy, etc. Very sad.
Then he quoted a chunk of Einstein's 1920 lecture 'Ether and Relativity' which pointed out that GR deals with absolute motion like acceleration and is compatible with an ether that behaves like a perfect flowing fluid.... which is sensible in view of quantum mechanical vacuum.
If the vacuum is a fluid of virtual particles, a particle of matter moving in it is going to be compressed slightly in the direction of motion, gain inertial mass from the fluid (like a boat moving in water), and create waves. Wonder why the film-makers prefer metaphysical entanglement, when aetherial entanglement is for sale dirt cheap?
Professor Paul Davies very indirectly and obscurely (accidentally?) defends Einstein's 1920 ‘ether and relativity’ lecture …

In 1995, physicist Professor Paul Davies, who won the Templeton Prize for religion (I think it was $1,000,000), wrote on pp. 54-7 of his book About Time:

‘Whenever I read dissenting views of time, I cannot help thinking of Herbert Dingle... who wrote ... Relativity for All, published in 1922. He became Professor ... at University College London... In his later years, Dingle began seriously to doubt Einstein's concept ... Dingle ... wrote papers for journals pointing out Einstein’s errors and had them rejected ... In October 1971, J.C. Hafele [used atomic clocks to defend Einstein] ... You can't get much closer to Dingle's ‘everyday’ language than that.’

Now, let's check out J.C. Hafele. J. C. Hafele is against crackpot science: Hafele writes in Science vol. 177 (1972) pp. 166-8 that he uses ‘G. Builder (1958)’ for analysis of the atomic clocks. G. Builder (1958) is an article called 'ETHER AND RELATIVITY' in the Australian Journal of Physics, v11, 1958, p279, which states:

‘... we conclude that the relative retardation of clocks... does indeed compel us to recognise the CAUSAL SIGNIFICANCE OF ABSOLUTE velocities.’
Einstein himself slipped up in one paper when he wrote that a clock at the earth’s equator, because of the earth’s spin, runs more slowly than one at the pole. One argument, see http://www.physicstoday.org/vol58/iss9/p12.html, is that the reason why special relativity fails is that gravitational ‘blueshift’ given by general relativity cancels out the time dilation: ‘The gravitational blueshift of a clock on the equator precisely cancels the time dilation associated with its motion.’
It is true that general relativity is involved here, see the proof below of the general relativity gravity effect from the Lorentz transformation using Einstein’s equivalence principle. The problem is that there are absolute velocities, and special relativity by itself gives the wrong answers! You need general relativity, which introduces absolute motion, because it deals with acceleration like rotation, and observers can detect rotation as a net force, if in a sealed box that is rotating. It is not subject to the principle of relativity, which does not apply to accelerations. Other Einstein innovations were also confused: http://www.guardian.co.uk/print/0,3858,3928978103681,00.html, http://www.italianamerican.com/depretreview.htm, http://home.comcast.net/~xtxinc/prioritymyth.htm.
‘Einstein simply postulates what we have deduced … I have not availed myself of his substitutions, only because the formulae are rather complicated and look somewhat artificial.’ – Hendrik A. Lorentz (discoverer of time-dilation in 1893, and rediscoverer of George FitzGerald’s 1889 formula for contraction in the direction of motion due to aether).
‘You sometimes speak of gravity as essential & inherent to matter; pray do not ascribe that notion to me, for ye cause of gravity is what I do not pretend to know, & therefore would take more time to consider of it… That gravity should be innate inherent & essential to matter so yt one body may act upon another at a distance through a vacuum wthout the mediation of any thing else by & through wch their action or force may be conveyed from one to another is to me so great an absurdity ...’ – Sir Isaac Newton, Letter to Richard Bentley, 1693. ‘But if, meanwhile, someone explains gravity along with all its laws by the action of some subtle matter, and shows that the motion of planets and comets will not be disturbed by this matter, I shall be far from objecting.’ – Sir Isaac Newton, Letter to Gottfried Wilhelm Leibniz, 1693.
The two curl ‘Maxwell’ (Heaviside) equations are unified by the Heaviside vector relation E = cB, where E is electric field strength and B is magnetic field strength, and all three vectors E, c, and B are orthogonal, so the curl (difference in gradients in perpendicular directions) can be applied simply to this unique E = cB:

curl.E = c.curl.B

curl.B = (1/c).curl.E

Now, because any field gradient or difference between gradients (curl) is related to the rate of change of the field by the speed of motion of the field (e.g., dB/dt = -c.dB/dr, where t is time and r is distance), we can replace a curl by the product of the reciprocal of c and the rate of field change:

curl.E = c.[-(1/c).dB/dt] = -dB/dt (Faraday’s law of induction)

curl.B = (1/c).[(1/c).dE/dt] = (1/c^2).dE/dt (‘displacement current’ ???)
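As a sanity check that these two relations really do reproduce Faraday's law and the Ampere-Maxwell 'displacement current' term for a light-velocity wave, here is a small symbolic verification in Python (sympy). The particular plane wave (E along y, B along z, travelling along x with E = cB) is my own choice of example:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
c, E0, k = sp.symbols('c E0 k', positive=True)
w = c * k                          # light-speed dispersion: omega = c.k

Ey = E0 * sp.sin(k*x - w*t)        # electric field
Bz = (E0 / c) * sp.sin(k*x - w*t)  # Heaviside relation B = E/c

# For fields varying only with x: (curl E)_z = dEy/dx and (curl B)_y = -dBz/dx
curlE_z = sp.diff(Ey, x)
curlB_y = -sp.diff(Bz, x)

# Faraday: curl.E = -dB/dt; Ampere-Maxwell: curl.B = (1/c^2).dE/dt
print(sp.simplify(curlE_z + sp.diff(Bz, t)))         # prints 0
print(sp.simplify(curlB_y - sp.diff(Ey, t) / c**2))  # prints 0
```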
Notice that all electrons have a magnetic field as well as an electric field, see for example: http://photos1.blogger.com/blogger/1931/1487/1600/electron.gif; a Heaviside energy vector trapped by gravitation is an electron (the magnetic field dipole mechanism can be seen here: http://members.lycos.co.uk/nigelbryancook/Image11.jpg). The eternal magnetic fields of charges are normally cancelled out by the pairing of electrons with opposite spins (Pauli exclusion principle). In the final (1873) edition of his book A Treatise on Electricity and Magnetism, Article 110, Maxwell admits:

‘... we have made only one step in the theory of the action of the medium. We have supposed it to be in a state of stress, but we have not in any way accounted for this stress, or explained how it is maintained...’

In Article 111, he admits further confusion and ignorance:

‘I have not been able to make the next step, namely, to account by mechanical considerations for these stresses in the dielectric [spacetime fabric]... When induction is transmitted through a dielectric, there is in the first place a displacement of electricity in the direction of the induction...’

First, Maxwell admits he doesn’t know what he’s talking about in the context of ‘displacement current’. Second, he talks more! Now Feynman has something about this in his lectures about light and EM, where he says idler wheels and gear cogs are replaced by equations. So let’s check out Maxwell's equations.
One source is A.F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9). Chalmers states that Orwell’s novel 1984 helps to illustrate how the tale was fabricated:

‘… history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’

Maxwell tried to fix his original calculation deliberately in order to obtain the anticipated value for the speed of light, proven by Part 3 of his paper, On Physical Lines of Force (January 1862), as Chalmers explains:

‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of root 2 smaller than the velocity of light.’

It took three years for Maxwell to finally force-fit his ‘displacement current’ theory to take the form which allows it to give the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’ Weber, not Maxwell, was the first to notice that, by dimensional analysis (which Maxwell popularised), 1/(square root of the product of magnetic force permeability and electric force permittivity) = light speed.

Maxwell’s innovation was: total current = electric current + displacement current. But he didn’t understand what the terms were physically! Really atoms are capacitors themselves, not solids as Maxwell thought in 1873 (X-rays and radioactivity only confirmed the nuclear atom in 1912). So the light speed mechanism of electricity is associated with ‘displacement current’, and electric current results from the electric field induced by ‘displacement current’.
In March 2005, Electronics World carried a longish letter from me pointing out that the error in the Heaviside/Catt model of electricity is the neglect of the energy flowing in the direction of displacement current. We know energy flows between the conductors from Feynman’s correct heuristic interpretation of Dirac’s quantum electrodynamics. Gauge bosons, photons, are exchanged to cause forces, and we know that energy flows ‘through’ a charging/discharging capacitor, appearing on the opposite side of the circuit. Catt/Heaviside proclaim that nothing (including energy) flows from one plate to the other, which is false, like their ignorance of electrons in the conductors.

A radio transmitter aerial and receiver aerial form a capacitor arrangement.

Catt is right at http://www.ivorcatt.com/2604.htm to point out that Maxwell ignored the flow of light speed energy along the plate connected to a charge. He is wrong to ignore my statement to him, based on Feynman's heuristic quantum mechanics and my fairly deep mechanistic knowledge of radio from experimenting with it myself instead of reading equations and theories from armchair experts in books (I read the books after experimenting, and found a lot of ignorance).

[The original page illustrated this with small diagrams: a radio transmitter aerial; a radio receiver aerial; transmitter and receiver aerials arranged for strong reception; transmitter and receiver aerials arranged for zero reception; and transmitter and receiver aerials in the more usual situation, the receiver picking up a much weaker field than that transmitted.]

Hence, a radio link is a capacitor, with radio waves the ‘displacement current’. This is the simplest theory which fits the experimental facts of radio! It was Prevost in 1792 who discovered that if a cooling object is also receiving energy in equilibrium, you don’t measure a temperature fall.
Charges radiate continuous displacement current energy and receive energy; where these are in equilibrium, no net effect is measured. Where equilibrium doesn't occur, you have forces resulting, potential energy changes, and so on. Displacement current as Maxwell formulated it only occurs while ‘charging/discharging’. In any case, it is not the flow of real charge, only energy. The electromagnetic field of displacement current is really energy, and this is what propagates through space, causing the long range fundamental forces. It’s easier to write articles or books, or make wormhole movies, than to explain work tied to the facts! You don’t see many popular books about the standard model or quantum loop gravity, and nothing on TV or in the cinema about it (unlike adventures in wormholes, parallel universes, and backward time travel). The myth is that any correct theory will be either built on string theory mathematics, or will be obviously correct. To give an off-topic example, since Maxwell’s aether was swept away (rightly in many ways, as his details were bogus), nobody has tried to explain what his ‘displacement current’ energy flow is. It is energy flowing from one parallel conductor to another across a vacuum in a charging/discharging capacitor, just like radio waves. If so, then spinning (accelerating) charges are exchanging non-oscillatory ‘displacement current’ with one another all the time, as the gauge boson of electromagnetism.
‘String/M-theory’ of mainstream physics is falsely labelled a theory because it has no dynamics and makes no testable predictions; it is abject speculation, unlike tested theories like General Relativity or the Standard Model, which predict nuclear reaction rates and unify fundamental forces other than gravity. ‘String theory’ is more accurately called ‘STUMPED’: STringy, Untestable M-theory ‘Predictions’, Extra-Dimensional. Because these ‘string theorists’ suppressed the work below within seconds of it being posted to arXiv.org in 2002 (without even reading the abstract), we should perhaps politely call them the acronym of ‘very important lofty experts’, or even the acronym of ‘science changing university mavericks’. There are far worse names for these people.
HOW STRING THEORY SUPPRESSES REALITY USING PARANOIA ABOUT ‘CRACKPOT’ ALTERNATIVES TO MAINSTREAM
‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media ... the majority of the rank and file prefer to face the wall while the jackbooted gentlemen ride by. ... But I do not believe the innate decency of the British people has gone. Asleep, sedated, conned, duped, gulled, deceived, but not abandoned.’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.
‘The creative period passed away … The past became sacred, and all that it had produced, good and bad, was reverenced alike. This kind of idolatry invariably springs up in that interval of languor and reaction which succeeds an epoch of production. In the mind-history of every land there is a time when slavish imitation is inculcated as a duty, and novelty regarded as a crime… The result will easily be guessed. Egypt stood still… Conventionality was admired, then enforced. The development of the mind was arrested; it was forbidden to do any new thing.’ – W.W. Reade, The Martyrdom of Man, 1872, c. 1, ‘War’.
‘Whatever ceases to ascend, fails to preserve itself and enters upon its inevitable path of decay. It decays … by reason of the failure of the new forms to fertilise the perceptive achievements which constitute its past history.’ – Alfred North Whitehead, F.R.S., Sc.D., Religion in the Making, Cambridge University Press, 1927, p. 144.
‘What they now care about, as physicists, is (a) mastery of the mathematical formalism, i.e., of the instrument, and (b) its applications; and they care for nothing else.’ – Sir Karl R. Popper, Conjectures and Refutations, R.K.P., 1969, p100.
‘... the view of the status of quantum mechanics which Bohr and Heisenberg defended – was, quite simply, that quantum mechanics was the last, the final, the never-to-be-surpassed revolution in physics ... physics has reached the end of the road.’ – Sir Karl Popper, Quantum Theory and the Schism in Physics, Rowman and Littlefield, NJ, 1982, p6.
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.)
SUMMARY OF RESULTS PROVING CAUSALITY
http://motls.blogspot.com/2005/11/supernovaelambdaisconstant.html:

Nigel said...

'Dark energy' comes originally from Saul Perlmutter's experimental results of 1997-8 on supernovae recession at vast distances obeying the Hubble law, and not slowing down due to gravity.

In the October 1996 letters page of Electronics World, http://www.softcopy.co.uk/electronicsworld/ , the editor sold a paper of mine at £4.50 a copy. This paper gave the mechanism whereby the inward reaction to the outward expansion in the big bang is the cause of gravity. This model is like an explosion (see explosion physics).

You get an outward force of the big bang of F = ma, where a is the linear variation in recession speeds (c - 0) divided by the linear variation in times past (15,000,000,000 years - 0) = 6 x 10^-10 m/s^2. Multiply this by the mass of the surrounding universe around us and you get an outward force of 7 x 10^43 newtons. (I've allowed for the increased density at great distances, due to the earlier time in the big bang when the universe was denser, by the factor I derive here.)

This outward force has an equal and inward reaction, due to Newton's 3rd law of motion. The screening of this inward force by fundamental particles geometrically gives gravity to within 1.7%, as shown here.
Feynman said, in his 1964 Cornell lectures (broadcast on BBC2 in 1965 and published in his book Character of Physical Law, pp. 171-3):

"The inexperienced, and crackpots, and people like that, make guesses that are simple, but [with extensive knowledge of the actual facts rather than speculative theories of physics] you can immediately see that they are wrong, so that does not count. ... There will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving in on a territory."

On page 38 of this book, Feynman has a diagram which looks basically like this: >E S<, where E is earth and S is sun. The arrows show the push that causes gravity. This is the LeSage gravity scheme, which I now find Feynman also discusses (without the diagram) in his full Lectures on Physics. He concludes that the mechanism in its form as of 1964 contradicted the no-ether relativity model and could not make any valid predictions, but finishes off by saying (p. 39):

"'Well,' you say, 'it was a good one, and I got rid of the mathematics for a while. Maybe I could invent a better one.' Maybe you can, because nobody knows the ultimate. But up to today [1964], from the time of Newton, no one has invented another theoretical description of the mathematical machinery behind this law which does not either say the same thing over again, or make the mathematics harder, or predict some wrong phenomena. So there is no model of the theory of gravitation today, other than the mathematical form."

Does this mean Feynman is after a physical mechanism, or is happy with the mathematical model? The answer is there on pages 57-8:

"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities."

Best wishes,
Nigel
Nigel said...
The problem with "dark energy" that I predicted via Oct. 96 EW was
that it is false.
The prediction of GR that gravity slows the big
bang expansion is wrong because it ignores the mechanism for gravity,
which says gravity is the asymmetry in a push.
The inward push is
due to surrounding expansion. Since supernovae at great distances have
less mass of the universe beyond them, there is less inward push from
that direction, so they aren't slowed down. This is what drives the
expansion as observed.
Hence the whole dark energy thing is a
myth due to assuming gravity is not caused by the expansion.
I
predicted this before Perlmutter made his "discovery", in EW, which was
why I tried to get the idea published.
http://cosmicvariance.com/2005/12/19/theuniverseisthepoormansparticleaccelerator:

Sean claims (falsely) that big bang nucleosynthesis validates the universal gravitational constant as having had the same value during fusion of the light elements, within the first few minutes of the big bang, as today. The exchange in the comments there ran as follows.

"When you claim +/- 10% agreement on G could you provide a reference please? (The one big failure of general relativity for the big bang is that the gravitational effect is out by a factor of 10, implying unobserved dark matter.)"

"Best I could find after 30 seconds of searching was this paper, which puts a 20% limit on the variation of G. So I edited the post, just to be accurate."

"Thanks! Fusion rate would increase (due to compression) if G rises, but would be reduced if the Coulomb repulsion between protons also rises: the two effects offset one another. So G will appear constant if it is really varying and you ignore a similar variation with time of Coulomb's law. The strong nuclear force can't cause fusion beyond a very limited range, so the longer range forces control the fusion rate."

"On quantum gravity: the error is not necessarily in quantum mechanics, but in the incompleteness of GR. My point is that if the difference between electromagnetic force and gravity is the square root of the number of charges in the universe (and there is only one proof of this available), this ratio is nearly fixed to the present-day figure within 1 second of the big bang by the creation of quarks. If gravity was x times weaker during the fusion of nucleons into light elements in the first few seconds to minutes, then Coulomb's law would also be x times weaker. So the gravitational compression would be less, but so would the Coulomb barrier which hinders fusion. The two effects cancel each other out. Therefore you have no basis whatsoever to claim G only varies by 20%. What you should say is that fusion calculations validate that the ratio of gravity to electromagnetism was the same to within 20% of today's ratio."

Hence G could have been any number of times weaker when fusion occurred and the same fusion would have resulted, simply because Coulomb's law is linked to G!
H = 16.Pi^2.G.m_e^2.m_proton.c^3.e0^2/(q_e^4.e^3) = 2.3391 x 10^-18 s^-1, i.e. 72.2 km.s^-1.Mpc^-1, so 1/H = t = 13.55 Gyr.

rho = 192.Pi^3.G.m_e^4.m_proton^2.c^6.e0^4/(q_e^8.e^9) = 9.7455 x 10^-28 kg/m^3.

(Here e0 is the permittivity of free space, q_e is the electron charge, and e = 2.718... is the base of natural logarithms.)
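These two formulae are easy to check numerically; here is a short Python sketch using standard SI constants (the formulae are this page's conjecture, only the constants are textbook values):

```python
import math

G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
m_e  = 9.109e-31    # electron mass, kg
m_p  = 1.6726e-27   # proton mass, kg
c    = 2.998e8      # speed of light, m/s
eps0 = 8.854e-12    # permittivity of free space, F/m
q_e  = 1.602e-19    # electron charge, C
e    = math.e       # 2.718..., base of natural logarithms

H   = 16 * math.pi**2 * G * m_e**2 * m_p * c**3 * eps0**2 / (q_e**4 * e**3)
rho = 192 * math.pi**3 * G * m_e**4 * m_p**2 * c**6 * eps0**4 / (q_e**8 * e**9)

print(H)                  # ~2.34e-18 s^-1
print(H * 3.0857e19)      # ~72 km/s/Mpc (3.0857e19 km per megaparsec)
print(1 / H / 3.156e16)   # ~13.5 Gyr (3.156e16 seconds per Gyr)
print(rho)                # ~9.7e-28 kg/m^3
```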
Heisenberg’s uncertainty principle (based on the impossible gamma ray microscope thought experiment) states: pd = h/(2.Pi), where p is uncertainty in momentum and d is uncertainty in distance. The product pd is physically equivalent to Et, where E is uncertainty in energy and t is uncertainty in time. Since, for light speed, d = ct, we obtain: d = hc/(2.Pi.E). This is the formula the experts generally use to relate the range of the force, d, to the energy of the gauge boson, E.

Notice that both d and E are really uncertainties in distance and energy, rather than real distance and energy, but the formula works for real distance and energy, because we are dealing with a definite ratio between the two. Hence for 80 GeV mass-energy W and Z intermediate vector bosons, the force range is on the order of 10^-17 m.

Since the formula d = hc/(2.Pi.E) therefore works for d and E as realities, we can introduce work energy as E = Fd, which gives us the strong nuclear force law: F = hc/(2.Pi.d^2). The range of this force is of course d = hc/(2.Pi.E).
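A quick numerical illustration of d = hc/(2.Pi.E) and of the force law in Python, using standard constants; the choice of 1 fm as a 'typical nuclear distance' in the last step is mine, for illustration:

```python
import math

h = 6.626e-34     # Planck's constant, J.s
c = 2.998e8       # light speed, m/s
GeV = 1.602e-10   # joules per GeV

E = 80 * GeV                       # W/Z boson mass-energy
d = h * c / (2 * math.pi * E)      # gauge boson force range
print(d)                           # ~2.5e-18 m, of the order quoted above

F = h * c / (2 * math.pi * (1e-15)**2)  # F = hc/(2.Pi.d^2) at d = 1 fm
print(F)                                # ~3e4 newtons
```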
Now let us 'see' a picture of a fundamental particle core: http://members.lycos.co.uk/nigelbryancook/Image11.jpg. Surrounding this core is a veil of virtual particles in the spacetime fabric 'quantum foam', with virtual positrons attracted closer to the negative electron core than virtual electrons. Now I've said on Dr Motl's blog that the 1 + 1/(2.Pi.137) = 1.00116 correction of the Dirac magnetic moment of the electron arises because a virtual particle is associated with the electron core by Pauli's exclusion (pairing) principle. The 1 is the core magnetism, which is unshielded by the radially polarised veil of virtual charges, but the electric field is attenuated by 137 times, so the virtual particle which pairs with the core is paired with a weakening factor of 137 times, while the 2.Pi factor comes in from the relative spin + orbit speeds involved (or wavelength). I'm still vague. But let's now try to build a physical picture of the polarised shells of aether.
The vacuum is full of 'virtual' particles, particles that we feel only as forces such as inertial resistance to acceleration, nuclear confinement of charged particles, electromagnetism and gravity.

Around any particle core some of these particles will be attracted, forming a polarised veil which acts like a dielectric, shielding the charge of the core. Electron cores tend to attract virtual positrons, leaving behind an outer zone of virtual electrons. This shields the real electron core charge by a factor of 137, the 'magic number' of QED. For the massive quarks, you get virtual quark pairs polarising around them. This limits the range of the colour charge of quantum chromodynamics. Gravity is just the shielding of a background pressure of the virtual particles in the vacuum. Because all energy has speed c, as per Einstein's E=mc^2, gravity goes at c.

We can visualise an emerging unified force as a progressive shielding effect by the polarised vacuum on the particle core. The strong nuclear force is the basic force, and gets progressively filtered down by the polarised virtual charges of the vacuum around the particle core until we get through electromagnetism, weak force, and finally gravity. No extra dimensions!
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermionantifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.
Dr Peter Woit has very interesting ideas on the problem of the actual particles themselves: http://www.arxiv.org/abs/hep-th/0206135
Traditionally, the ‘delta-double-plus’ particle is used to justify the existence of three types of quark charge, which are referred to arbitrarily as ‘colours’. This was proposed because the delta-double-plus contains three identical quarks spinning the same way, which violates the Pauli exclusion principle (which prohibits two or more particles from having the same set of quantum numbers, which include spin). This appears to be correct because the mechanism for the Pauli exclusion principle may be linked to the intrinsic magnetism of spinning charged particles forcing adjacent particles to be paired up with opposite spin (however, there are problems to be resolved). According to quantum chromodynamics (QCD, the analogy to quantum electrodynamics, QED), any proton contains one green, one red, and one blue quark, making the proton as a whole colourless, the nuclear force equivalent of an atom with equal positive and negative charge. The three quarks also have electric charges of +2/3, +2/3 and –1/3 (two up-quarks and one down-quark), so the net electric charge is +1. The strong force depends on the colour charge, which is short ranged and operates somewhat like the short-range ‘Van der Waals force’ of chemistry that binds some electrically neutral atoms together. (This is really a geometric effect of the locations of the shells of charge, which can be polarised to allow a net electromagnetic force to be felt over short distances.) While the QED force is mediated by photons, the QCD force is mediated by ‘gluons’, of which 8 different types exist. Gluons attract (or shield) each other as well as the quarks. There is also a field of other virtual particles around each quark, just as there is a field of ‘virtual’ fermions around an electron, which partly shield the charge of the core. Colour charge has very indirect evidence. While it is a fine example of how usefully predictive ad hoc modifications can be made to physics, it is crass to take it as evidence discrediting causality. A clearer illustration is to consider the weak nuclear force, which has been unified with the electromagnetic force. QCD may similarly be unified with the electroweak force when more is known. The problem with colour charge is that it is vague with respect to which of the two otherwise-identical up-quarks in a proton has which colour and why: colour charge is not a fundamental property like electric charge. Is it real, or just a useful corrective epicycle for some more subtle physics?
The first thing to do is understand the causal mechanism behind the exclusion principle, the spin thing, which is a magnetism force effect. The electrons and quarks are small normal dipole magnets and align in pairs, cancelling each other's magnetism out, just as when you have a pile of magnets. They don't align themselves naturally into one super magnet, but into pairs pointing opposite ways, because the entropy increases that way. Electrons in an atom feel each other's magnetism as well as the electric force. In fact the polar (radial) magnetic field from the electron core won't be shielded by the polarised vacuum, so it will produce greater magnetic force effects than the electric field from the core which is reduced by a factor of 137 by the polarised vacuum shield.
(Dr Leonard Susskind is quoted in a recent issue of New Scientist suggesting that the Casimir force reality issue should be solved by banning physicists from using the word ‘real’, which is kind of sweet, useful to him and other string theorists! While you are about it, why also not ‘ban’ all physics experiments and people that might possibly damage string theory? Dr D.R. Lunsford objects in English and also in Latin.)
Extracts from http://en.wikipedia.org/wiki/Ivor_Catt:
The proved experimental facts of electromagnetism, which contradicted the textbook formula for Maxwell’s displacement current but allowed Ivor Catt to design the world’s first wafer-scale integration product (a 160 MB solid state memory, in 1988), won Catt the ‘Product of the Year Award’ from the U.S. journals Electronic Design [14] (on 26 October 1989) and also from Electronic Products [15] (in January 1990), after Sir Clive Sinclair’s offshoot computer company, Anamartic, invested £16 million.
"Ivor Catt [is] an innovative thinker whose own immense ability in electronics has all too often been too far ahead of conventional ideas to be appreciated..."  Wafers herald new era in computing, New Scientist, 25 February 1989, p75[1].
The Sinclair team has developed the ideas of a British inventor, Ivor Catt, who tried to get British firms to listen to him. On that point this newspaper must admit to the British disease – we didn’t have the bottle to write about Catt then, in part because the technological establishment dismissed his notions. On the risk front, Sinclair has tackled, via Catt, the fundamental breakthrough of the microchip business. ... A whole new range of opportunities for computer use come forward.
– Hamish McRae, The Guardian, 13 March 1985, p23
I entered the computer industry when I joined Ferranti (now ICL) in West Gorton, Manchester, in 1959. I worked on the SIRIUS computer. When the memory was increased from 1,000 words to a maximum of 10,000 words in increments of 3,000 by the addition of three free-standing cabinets, there was trouble when the logic signals from the central processor to the free-standing cabinets were all crowded together in a cableform 3 yards long. ... Sirius was the first transistorised machine, and mutual inductance would not have been significant in previous thermionic valve machines... In 1964 I went to Motorola to research into the problem of interconnecting very fast (1 ns) logic gates ... we delivered a working partially populated prototype high speed memory of 64 words, 8 bits/word, 20 ns access time. ... I developed theories to use in my work, which are outlined in my IEEE Dec 1967 article (EC-16, no. 6) ... In late 1975, Dr David Walton became acquainted ... I said that a high capacitance capacitor was merely a low capacitance capacitor with more added. Walton then suggested a capacitor was a transmission line. Malcolm Davidson ... said that an RC waveform [Maxwell’s continuous ‘extra current’ for the capacitor, the only original insight Maxwell made to EM] should be ... built up from little steps, illustrating the validity of the transmission line model for a capacitor [charging/discharging]. (This model was later published in Wireless World in Dec 78.)’
Extract from Electromagnetic Theory Volume 2, Ivor Catt, St Albans, 1980, pp. 207-15 [4]. Catt's paper in effect multiplies Maxwell's continuous "displacement current" formula by the stepwise factor (1 - 2Z/R)^n, which is approximately e^(-2Zn/R); this introduces a discrete (quantised) theory into classical electromagnetism, so, since the atom is a kind of capacitor, quantum theory arises naturally. The stepwise (true) charging of a capacitor/transmission line can give rise to glitches and other unexpected interference in vital electronics.
Catt stated in the Electronics World September 2003 issue, "EMC - A Fatally Flawed Discipline", pages 44-52:
during the Falklands War, the British warship HMS Sheffield had to switch off its radar looking for incoming missiles ... This is why it did not see incoming Exocet missiles, and you know the rest. How was it that after decades of pouring money into the EMC community, this could happen ... that community has gone into limbo, sucking in money but evading the real problems, like watching for missiles while you talk to HQ.
This is confirmed on a recent official Ministry of Defence internet site [12]: "Mutual Interference between the SATCOM and EW systems onboard HMS Sheffield resulted in the inability to detect an incoming exocet missile during the Falklands war resulting in the loss of the ship and the lives of 20 sailors." However the BBC report [13] ignores the underlying cause by saying: "The Exocet missile is designed to skim the sea to avoid radar detection." The facts show that Catt is not putting forward a point of view but exposing a conspiracy of technical incompetence.
Below: Oliver Heaviside showed, when signalling with Morse Code in the undersea cable between Newcastle and Denmark in 1875, that electrical energy is transmitted at the speed of light for the insulator of a cable: it ‘charges up’ just like a capacitor! The apparent charge is just photon-type energy (the photon ‘gauge bosons’ of quantum field theory), because it takes more time for charge to flow. The light-speed ‘Heaviside energy current’ consists of photon radiation at 90 degrees to the traditional (Maxwellian) light wave. In Heaviside’s energy current, the ‘photon’ or electromagnetic energy has positive and negative fields side-by-side, instead of one behind the other as in Maxwell’s theory. Ivor Catt, Malcolm Davidson and David S. Walton on 28 May 1976 proved that Maxwell’s theory of ‘unified electromagnetism’ is wrong, because the Heaviside charging of a capacitor violates Maxwell’s equation for ‘displacement current’. Where you have equal and opposite electric fields in space, you do not necessarily violate the principle of conservation of electric charge. This is because all electromagnetic energy consists of such fields, even photons, and the varying electric field throughout a photon does not violate conservation of charge, because photon number is not conserved in quantum mechanics! All forces are created by the continuing exchange of gauge bosons in quantum field theory, the experimentally-validated Feynman diagram approach to forces, which allows unification.
Above: Maxwell’s electromagnetic theory innovation, called ‘displacement current’, is wrong: i = e(dE/dt). Maxwell’s approach covers up the fact that a capacitor is really identical to a radio transmitter and receiver aerial system (two parallel conductors, with a delay time in the energy transfer due to a current in one plate occurring first) and also to a power transmission line, which ‘charges up’ when energy enters it even if the power line is an open circuit. People die because a false mathematical theory is used to hoodwink the public into accepting the competence of speculative and false physics built upon Maxwell’s incorrect ‘electromagnetic unification’. Energy must flow along a capacitor plate, and it increases stepwise when it reflects back off the end (the last electron is bound and bounces back, like the end ball bearing in the toy called Newton’s cradle).
At the front of the logic step, electrons are accelerated where the voltage is rising. Whenever charge accelerates, it emits electromagnetic radiation transversely, which happens in the rise portion of the front of the TEM wave. Transverse radiation is important at the front of the logic step, which the Catt anomaly deals with, although where the voltage is steady the important energy flow of electricity is longitudinal.
Catt and Walton, in their Dec 78 Wireless World article ‘Displacement Current’ ( http://www.ivorcatt.org/icrwiworld78dec2.htm ), show that a battery of potential V volts charges up a capacitor like a transmission line via a resistor R. Assuming that the series resistance was much higher than the impedance of the line (R >> Z), they found that VZ/(R + Z) volts starts off into the transmission line after passing through the resistor; when it reflects at the open end of the transmission line (i.e. the edge of the capacitor plate) it returns, adding to further entering energy to give a total voltage of 2VZ/(R + Z); then it reflects off the other edge of the capacitor, and so on. The total voltage in the capacitor therefore rises in steps as the light speed energy traverses the length of the capacitor plate. Since capacitors are small and the speed of light is large, billions of steps occur in a second, so the true voltage, V[1 - (1 - 2Z/R)^n], where n = ct/(2x) and x is the length of the capacitor plate, looks when measured with small capacitors just like the false standard exponential formula for a charging capacitor (based on Maxwell's false displacement current formula), V[1 - e^(-t/RC)].
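To see how the stepwise law approaches the familiar exponential, here is a small Python comparison. The component values (V, R, Z and the plate length) are illustrative assumptions only, and the standard transmission line relation T = ZC is used to convert step number n into time t:

```python
import math

V0 = 10.0     # battery voltage, volts
R  = 1000.0   # series resistance, ohms (R >> Z as assumed above)
Z  = 50.0     # impedance of the line/capacitor, ohms
x  = 0.1      # plate length, metres
c  = 3e8      # signal speed for a vacuum dielectric, m/s
T  = x / c    # one-way transit time along the plate
C  = T / Z    # total line capacitance, from T = Z.C

print(" n  stepwise  exponential")
for n in (1, 2, 5, 10, 20, 50):
    t = n * 2 * T                               # n round trips: t = 2nx/c
    v_step = V0 * (1 - (1 - 2*Z/R)**n)          # Catt/Walton step model
    v_exp  = V0 * (1 - math.exp(-t / (R * C)))  # classical exponential
    print(f"{n:2d}  {v_step:8.4f}  {v_exp:11.4f}")
```

The two columns agree closely because (1 - 2Z/R)^n tends to e^(-2Zn/R) = e^(-t/RC) when Z/R is small, which is why Maxwell's continuous formula looks right for small capacitors.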
But after Sinclair sold out, Catt was again suppressed! (See the Hamish McRae Guardian quotation above.)
Although in the Electronics World article I did not use the example of air traffic control problems in the 9/11 disaster, the resulting abuse from the media was still so bad (claiming it is all crackpot and refusing to print a single word) that I followed that article up by writing the ‘Comment’ column on page 3 of the August 2003 issue of Electronics World, vol 109, number 1808:
‘Throwing Stones in Glasshouses. ... Faced with evidence of a problem, a group of idiots have a choice between ignoring it, which is a short-term option only, or trying to discredit it by foul means. ... every point raised is seen to be such a danger to a fragile subject that it must be guarded against the slightest inspection. ... Physics has gained a reputation for gobbledegook. It is losing its gloss, not gaining respect, by hanging everything on the last word of gentlemen popularising without proof or evidence multiple universes, 11-dimensional space, etc. Anyone who points out a contradiction or gives a mathematical proof for a simpler, less obscure mechanism is (1) ignored, (2) becomes a target for a repeat of the gobbledegook they are replacing, or (3) gets misquoted ... in an effort to claim that the person who is defending science against gobbledegook is actually attacking science.’ (More on this: http://nigelcook0.tripod.com/.)
Electronics World’s editor, Phil Reed, next received two abusive letters, whose authors are apparently PhD students of a Nottingham University professor of electromagnetism. They will keep throwing stones in glasshouses, relying 100% on the respect of the media, which makes more people die needlessly.
http://www.ivorcatt.org/icrwiworld78dec1.htm
http://www.ivorcatt.org/icrwiworld78dec2.htm
Let’s all compare the two simple equations, shall we?
Maxwell's innovation to Ampere’s law: curl.B = u.i + u.e.dE/dt, where B is magnetic field strength, i electric current, u permeability, e permittivity, and E electric field.
Inductive charging of any object occurs in DISCRETE STEPS, as light-speed energy reflects off the end of the capacitor plate and nearly doubles, not in the continuous curve of Maxwell. Although Maxwell’s formula approximates reality for small-sized capacitors, inductors, transformers, etc., it fails miserably for large systems (see the comparison at http://www.ivorcatt.org/icrwiworld78dec2.htm). Maxwell ignored the fact that electricity flows at light speed, and hence rises stepwise in capacitor, inductor and transformer situations. Maxwell overlooked the fact that current cannot instantly move along the whole capacitor plate, but flows along it like a radio transmitter aerial. This is the error: the equation of ‘displacement current’ is incomplete. The ‘displacement current’ is radio wave energy. The entire electromagnetic theory of Maxwell’s light/radio is false. (References: I. Catt et al., IEEE Trans., vol. EC-16, no. 6, Dec. 1967; Proc. IEE, June 1983; Proc. IEE, June 1987; IEE paper HEE/26, 1998. For Catt's experimental proof, see the end of his March 1983 Wireless World article ‘Waves in Space’.)
NEW INTERNET PAGE ON RADIO DETAILS: http://feynman137.tripod.com/radio.htm:
Maxwell’s Equations, Transmission lines, the Capacitor, Radio and the Atom as Charged Capacitor
Nigel Cook
Maxwell ignored the spread of charge along the capacitor plates during charging, and merely postulated a vacuum ‘displacement current’ flowing from one plate to the other during charging. This ‘displacement current’ is i = e.dE/dt, where e is permittivity and E is electric field (volts/metre).
Catt proved Maxwell’s error (see http://www.ivorcatt.org/icrwiworld78dec1.htm and particularly the very interesting mathematics and graphical comparison on the next page) and tried to correct it by showing that the spread of charge along the plates of a capacitor can be treated using Heaviside’s transmission line theory. This treatment shows that the capacitor charges in discrete steps, as energy reflects off the far end of the capacitor plates and adds to further inflowing energy, nearly doubling the voltage at each step.
However, Catt’s treatment contains three major interrelated errors, inherited from Heaviside’s treatment. First, it ignores the fact that the inflowing electricity has a rise-time and is not a true mathematical discontinuity. So at any given time there is a distance over which the voltage rise occurs (from 0 to v volts before the first reflection), and also a variation in current (from 0 to i in this example), which causes a radio energy transmission from one plate to the other.
Second, it ignores any transverse action, i.e., between the capacitor plates (by assuming that energy only flows parallel to them). Third, it assumes that both capacitor plates charge at the same time, ignoring the mechanism for the delay which occurs when one capacitor plate charges first and induces charge in the second plate.
In summary, Catt failed to build a correct model by ignoring the mechanism, just as Maxwell had. Whereas Maxwell ignored the fact that the whole plate of a capacitor does not charge simultaneously, Catt ignored the mechanism by which energy is transferred from one plate to another to appear in the rest of the circuit. The reason why Catt ignored the facts is that he was using Heaviside’s simple mathematical model of a transmission line, which contains no mechanism of electromagnetism and ignores all transverse motions. The ‘transverse electromagnetic wave’ of Poynting and Heaviside is, ironically, longitudinal, not transverse; the real electromagnetic wave is transverse. Therefore the Poynting-Heaviside vector, while useful for some types of calculation, is false: it ignores the transverse exchanges of energy that cause the electric force, and it says nothing about the mechanisms.
Main source: http://electrogravity.blogspot.com/ (post dated 4 January, scroll down to the comments on capacitors):
"As long as you have some asymmetry in the current, any conductor can
be made to work, with the radio emission occurring in a direction
perpendicular to the varying current. A spherical conductor with a central
feed would not emit radio waves, because there would be no net current in
any direction, but you can use a cylindrical conductor in coax as an
aerial.
"Catt's analysis applies to the case where the capacitor
plates are close together in comparison to the length of the plates. For
all capacitors used in electronics, this is true, since only a thin
insulating film separates the foil plates, which are long and are
generally rolled up. In this situation, any delay from one plate to the
other is small.
"But if you separate the plates by a large distance
in the air, the capacitor appears more like a radio, with an appreciable
delay time. The signal induced the second plate (receiver aerial) is also
smaller than that in the first plate (transmitter aerial) because of the
dispersion of energy radiated from the first plate. The second plate
(receiver aerial) responds with a timelag of x/c seconds(where
x is the distance between the aerials or plates), and with a
voltageof vy/(y + x), where v is the value in the first
plate, y is the length ofthe plates (assuming both are parallel),
and x is the distance between the plates. This formula is the
simplest possible formula that reduces to vvolts when the ratio
x/y is small (normal capacitors) and but becomes vy/x
volts for radio systems (so that the radio signal strength in volts/metre
falls off inversely with distance of the constant length receiver
aerialfrom the transmitter). …
"In normal radio transmission the signal frequency is obviously matched to the aerial like a tuning fork, with a loading coil as necessary. So the dE/dt due to the radio feed would govern the transmission, not steps. Catt's stepwise curve kicks in where you have a constant step applied to the aerial, like a capacitor plate charging up. dE/dt then becomes very high while the pulse is reflecting (and this adding to more incoming energy) at the end of the aerial or capacitor plate. Obviously any real signal will have a rise time, so dE/dt will not be infinite.
"The actual value of dE/dt will gradually fall as the capoacitor charges and equal to approximately (assuming uniform rise): v/(XT) where X is the distance over which voltage step v rises, X = cT where T is the risetime of the Heaviside signal. Hence, dE/dt ~ v/(XT) = v/(cT^{2}). …
"Radio emission results when the current in the aerial varies with time, ie if di/dt is not zero (this is equivalent to saying that radio emission results from the acceleration of charge). There is a variation in the Efield along the conductor, even in direct current, over the small distance at the front of the step where the voltage rises from 0 to v. The current similarly rises from 0 to i. So there is radio energy transfer in a charging capacitor.
"(1) In order to detect radio energy, you need to have an oscillatory wave. Feynman says the normal forces of electromagnetism (for example, attraction between the two charged capacitor plates) is some kind of exchange of forcecarrying energy (photons called gauge bosons). Feynman does not say any more about the dynamics. However, the continuous action of such forces implies a continuous exchange of energy. This is like Prevost’s breakthrough in thermodynamics of 1792, when he realised that in the case now called oscillatory photons (infrared radiation), there is a continuous exchange at constant temperature.
"(2) Point (1) above says that energy is being continuous exchanged as shown by the Feynman diagram quantum field theory. This is not a heresy. Heuristic development of the subject in a physical way is a step forward. Oscillatory photons carry heat, others carry forces. Proof: http://feynman137.tripod.com/"
Catt does not deal with a single conductor charging first and inducing charge on the other after the appropriate time delay. This leads to understanding radio. There is no mention of radio on Catt's website, but lots of bogus claims about Catt's theory being relevant to electromagnetism, which is therefore incomplete.
In particular, Catt’s whole treatment (including the ‘Catt Anomaly’) is based on treating the Heaviside slab of energy as having a discontinuity at the front end, and no transverse energy delivery. This is a false model, as there will always be a rise-time. During the rise-time of the current, the current varies; hence there is transverse radio energy emission. This treatment allows the whole problem to be formulated correctly. Maxwell’s error of ignoring current spreading at light speed along capacitor plates and reflecting back upon further incoming current is only partially corrected by Catt’s approach; Catt ignores the radio emission.
Developing the mechanistically correct model of capacitor charging is important for analysing the disagreement between classical and quantum electrodynamics, particularly the stepwise energy levels of the atom. It could be that the idea that the atomic energy levels are caused by capacitor-style charging (steps) is wrong, but for the moment that remains to be seen.
Obviously the step rise-time voltage variation at the front operates all the time the capacitor is charging, not just at reflections. All that reflections signify is the complete coverage of the plates. The voltage rises to nearly 2v after the first reflection, but this is caused by addition, so the increase just after reflection is still by the same amount, v. Before reflection the step is 0 to v (a change of v), and just after reflection it is v to 2v (again, a change of v).
What actually happens, therefore, is that "displacement current" flows continuously (with the usual exponential fall in the case of a capacitor), not in pulses. It flows only in the small zone at the front of the energy current in which the voltage rises from 0 to v.
From: Nigel Cook
To: Brian Josephson ; jonathan post ; Forrest Bishop ; George Hockney
Cc: Ivor Catt ; CooleyE@everestvit.com
Sent: Tuesday, January 10, 2006 10:21 AM
Subject: Errors of the Catt Anomaly
http://www.electromagnetism.demon.co.uk/catanoi.htm
Errors in Catt Anomaly
1. The Catt anomaly diagram shows a true step, which can't occur in reality. There is always a rise-time, albeit a short one. During the rise-time T, the voltage and current vary gradually from 0 to the peak current i, so the current variation is of the order di/dT, which is a charge acceleration that causes radio emission with frequency f ~ 1/T (a numeric illustration follows this message). (The Catt anomaly diagram shows zero rise-time, so di/dT would be INFINITE, resulting in an infinitely powerful burst of radio energy of infinitely high frequency, which is absurd.)
2. When you correct the Catt anomaly diagram, you realise that there is radio emission in the direction of traditional displacement current, which Catt fails to show.
3. You also notice that the radio energy emission depends on di/dt, which only occurs while the logic step voltage and current are varying, like displacement current.
4. Catt's diagram of the Catt anomaly is also wrong for a completely different reason: it shows displacement current continuing after the logic step has passed, in other words in the part of the step to the left, where Catt shows the voltage as steady.
This is a LIE, because displacement current i = [permittivity].dE/dt = [permittivity].d^{2}v/(dt.dx). This shows that displacement current flows ONLY where the voltage varies with distance along the transmission line (x) and with time (t).
Catt should delete all the displacement current arrows (labelled D) which point downwards in the second diagram, and show displacement current only where the step rise occurs! Catt will then notice that he has discovered the correct disproof of Maxwell's radio theory. Maxwell had displacement current at 90 degrees to the direction of radio propagation, but the two are actually the same thing, so Maxwell's theory of radio is false. Will Catt publish this?
Nigel Cook
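To put numbers on point 1 of the message above, here is a trivial Python sketch of the f ~ 1/T estimate (the rise-times chosen are illustrative assumptions):

    # A logic step with rise-time T implies a charge acceleration, and hence
    # radio emission at a frequency of the order f ~ 1/T.
    for T in (1e-6, 1e-9, 1e-12):   # illustrative rise-times, seconds
        print(f"rise-time T = {T:.0e} s  ->  emission frequency f ~ {1/T:.0e} Hz")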
An isolated radio aerial can radiate energy; the radio emission is proportional to the di/dt fed into the aerial. An isolated conductor connected to a charge (battery terminal) radiates energy as it charges up. It behaves like a radio aerial, and as the current varies in it during charging, radio emission occurs. The current falls off rapidly along the wire because of this emission of energy. Hence you cannot transfer significant energy with a single wire. For a pair of conductors connected to the two terminals of a battery (Catt's anomaly situation), the current in each conductor is the opposite of the other, so the radio emission cancels out beyond the system, and no energy is wasted by radio – it all goes into the opposite conductor, so the two conductors help each other out.
This occurs during the rise-time portion of the TEM wave; see the illustration: http://electrogravity.blogspot.com/2006/01/solutiontoproblemwithmaxwells.html
(1) An aerial is a single conductor, and it does radiate radio as the applied current varies; we know a single conductor can't propagate a constant current because its inductance is infinite (which is a mechanism for Kirchhoff's law). (2) If the capacitor is a transmission line, as stated before, the radio emission due to each conductor (capacitor plate) is the exact opposite of the other, and cancels out as seen from a distance.

What I'm saying is that to resolve the Catt anomaly, the TEM wave step needs to be analysed in two parts: first where the current is increasing (which is omitted from today's treatment), and second where the current is constant (which the current treatment does describe, using steady magnetic and electric fields). If the current rise (step front) were vertical, the "displacement current" there (however you think of it) would be infinite, and since "displacement current" is an invention by Maxwell to retain continuity of current flow across the vacuum, you would then have the paradox of a finite current flowing along one wire, turning into an infinite "displacement current" across the vacuum, and then returning to a finite current in the other wire. The true rise is not vertical, because the current does not rise from 0 to i instantly at any point on a conductor as the step passes by, although the gradient can be very great. The standard treatment of radio shows that radio emission is proportional to the variation rate of the net current, di/dt, in a conductor.

"Displacement current" is the radio exchange process whereby the front of the TEM wave in each conductor swaps energy by radio (or by electromagnetic pulse, if you prefer to reserve "radio" for sine-wave-shaped electromagnetic waves, as some do). The wires must swap energy across the vacuum to propagate; each one induces the current in the other one. This is why the TEM wave goes at the speed of light in the vacuum between the conductors. We get electromagnetic radiation (radio emission) from a net time-varying current in a conductor. If you have two such conductors, each carrying an inverted form of the signal in the other, they exchange energy which induces the current in the other. But there is no long-distance propagation of this energy, due to exact interference, so the coupling is perfect. This is how the front of the logic step propagates: each conductor causes the current in the opposite conductor by simple electromagnetic radiation due to the time-varying current as it rises. Catt simply missed out this mechanism (see the sketch below).
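As a toy illustration of the cancellation argument (the geometry below is an assumption chosen purely for the sketch, with each conductor idealised as a source whose field falls as 1/r), the net field of two opposite-phase conductors falls off roughly as 1/r^2 rather than 1/r:

    # Toy sketch of the two-conductor cancellation: equal and opposite
    # currents separated by spacing d give fields of opposite sign, so the
    # net amplitude at distance r falls roughly as d/r^2, i.e. much faster
    # than the 1/r field of a single conductor.
    d = 0.01   # conductor spacing, metres - illustrative assumption

    for r in (0.1, 1.0, 10.0, 100.0):
        single = 1.0 / r                          # one conductor alone
        pair = 1.0/(r - d/2) - 1.0/(r + d/2)      # opposite-phase pair
        print(f"r = {r:6.1f} m: single {single:.2e}, pair {pair:.2e}")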
A time-varying current results in radio emission. Neither Catt nor anyone else has measured the fields in the space between two conductors as a TEM wave passes: they have only measured induced currents in other conductors. The diagrams ignore the radio emission occurring at the front of a logic step! Catt got the "Catt anomaly" wrong by relying on a book published in 1893 that ignored the step effects at the front of the TEM wave. Asserting ignorance is wrong. At the front of a logic step, the current rises (in the accepted picture), and this results in radio emission. Since each conductor is oppositely charged and carries an opposite current, the radio emission from each conductor (acting as an aerial) is exactly out of phase with that from the other, and so completely cancels it as seen at a large distance. So there is no energy radiated to large distances! The only radio emission of energy occurs from each conductor to the other.
Maxwell wrote ‘displacement current’ in terms of electric field strength. However, as the voltage rises at the front of the logic step, the current rises. Maxwell should have written the extra-current (displacement current in vacuum) equation in terms of the ordinary (conductor-based) current, which means ‘displacement current’ is radio. Maxwell: displacement current i = e.dE/dt = e.v/(ct^{2}), where v is a uniform voltage rise over time t, and e is permittivity. What I'm saying is that the mutual radio emission causes the front of the logic step (the rising part) to propagate. Each conductor induces current in the other! It is a fact that the inverse-square law doesn’t apply: there is no net radio transmission beyond the system, because of perfect interference, as the current rise in each conductor is the exact opposite of that in the other one, so the radio transmission from each conductor exactly cancels the other outside the transmission line!
Electricity has two components: (1) the Heaviside electric field, the light-speed force-carrying gauge boson radiation transmitted all the time by charges spinning at light speed (which is why it goes at that speed), and (2) the 1 mm/second (or so) drift current, which is the response of charges to the electric field variation along the wires that has already been set up by the Heaviside electric field. (A numeric check of the drift speed is sketched below.)
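The slow drift figure can be checked from the standard relation v = I/(nAe); the current and wire cross-section below are illustrative assumptions, and the exact figure depends on both:

    # Electron drift speed v = I/(n*A*e) in a copper wire, as an
    # order-of-magnitude check on the slow-drift figure quoted above.
    I = 1.0        # current, amps - illustrative assumption
    A = 1.0e-6     # cross-section, m^2 (1 mm^2) - assumption
    n = 8.5e28     # conduction electrons per m^3 in copper
    e = 1.602e-19  # electron charge, coulombs

    v = I / (n * A * e)
    print(f"drift speed = {v:.2e} m/s = {v*1000:.3f} mm/s")

This gives a fraction of a millimetre per second for one amp in a one-millimetre-square wire: utterly negligible beside the light-speed energy current.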
‘(a) Energy can only enter the capacitor at the speed of light. (b) Once inside, there is no mechanism for the energy to slow down… [magnetic field curls due to equal amounts of light speed energy going in each direction cancel out, while electric fields add up] … The dynamic model is necessary to explain the new feature to be explained, the charging and discharging of a capacitor …’ – Ivor Catt.
‘It had been an audacious idea that particles as small as electrons could have spin and, indeed, quite a lot of it. … the ‘surface of the electron’ would have to move 137 times as fast as the speed of light. Nowadays such objections are simply ignored.’ – Professor Gerard ’t Hooft, In Search of the Ultimate Building Blocks, Cambridge University Press, 1997, p. 27. (’t Hooft won the Nobel Prize in physics for electroweak force theory unification work.)
Charge is not conserved, which is why the fifth Maxwell equation was dropped when the creation of charge from gamma rays exceeding the energy equivalent of two electrons was discovered in 1932. The abuse from ignorant crackpots is well documented. Catt himself refuses to concentrate on the facts. The ‘displacement current’ is radio wave energy. The entire electromagnetic theory of Maxwell’s light/radio is false.
Mention of charge conservation is wholly vacuous here. Charge is only conserved in the sense that the total sum is always zero. A 1.022 MeV uncharged boson, a gamma ray, when stopped by a lead nucleus (or another nucleus of high atomic number), can give rise to pair production, creating a negative charge (electron) and a positron (positive electron). Since photon number is not conserved, the total amount of charge is also not conserved; only the balance between positive and negative is. This was discovered in 1932 by Carl Anderson, who won a Nobel Prize for confirming Dirac’s prediction of positrons. As a result, the fifth Maxwell-Heaviside equation, stating conservation of charge, was dropped around 1932, in favour of the fact that charge is created, for example, in the shielding of the well-known nuclide cobalt-60, which emits gamma rays of average energy 1.25 MeV. It’s a pity most people’s knowledge of charge conservation predates pair production. The Heaviside energy current, the electromagnetic gauge bosons guided by the conductors, consists of a negative electric field (the half of it that is around the negative conductor) and a positive electric field around the other conductor. Conservation of charge is maintained because the sum of the positive and negative charge, or rather electric field (but you can’t tell the difference), is zero. The Heaviside energy current is photons, the gauge bosons of electromagnetism. The apparent charge of the energy current is not conserved, any more than the negative and positive electric fields of a light-ray photon are conserved. All that conservation can do is say they are created in equal amounts – any amounts.
Maxwell’s equations fail to predict light properly (they predict that electrons in an atom will radiate continuously and spiral into the nucleus, instead of gaining and losing energy in discrete steps like Catt’s capacitor – an atom is basically a capacitor, separated positive and negative charge), so they predict a false light spectrum in which power becomes infinite at short wavelengths. These failures of Maxwell were known from the early development of quantum theory, 1900-16. Despite this, crackpots still falsely claim they are correct, and that the failure lies in quantum mechanics. Are electrons electromagnetic energy trapped by gravity – black holes? Below we will test the conclusion that gravity-trapped (black hole) Heaviside-Poynting energy forms an electron. This disproves the crackpot claim of string theorists, based on Planck’s dimensional analysis, that the size of fundamental particle cores is the Planck size: the black hole size is much smaller. The entire philosophy of string theory is dictatorial nonsense with zero evidence.
The solution to the ‘Catt anomaly’ is that Heaviside’s electromagnetic radiation, the light-speed force-causing gauge boson radiation of quantum field theory, is the precursor to electric current, which is a drift at about 1 millimetre/second carrying negligible kinetic energy. The light-speed Heaviside electromagnetic radiation is like a photon moving sideways: the positive and negative electric fields of the energy sit side by side, not one behind the other as Maxwell envisaged. This is why the electric fields propagate at light speed in the ‘Catt anomaly’ even though the electron current flows at only 1 mm/second. (Conservation of charge is not violated in Heaviside’s light-speed energy current, because the electric fields moving at light speed are due to radiation, not charges. To claim Catt’s anomaly violates conservation of charge is like saying a photon does so, just because it contains electric fields.)
‘It always bothers me that, according to the [path integral] laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – Professor Richard P. Feynman, Character of Physical Law, Penguin, 1992, pp. 57-8.
In the same book Feynman discusses the pushing-gravity mechanism as of November 1964, when it was in crisis. Feynman’s error was ignoring his own heuristic Feynman diagram approach: ignoring the physical reality of gauge bosons in space causing forces, ignoring the reality of particle spin which continuously radiates gauge bosons, and ignoring the Prevost equilibrium process which allows radiating charges to receive energy. The ‘drag’ he objected to is the very radiation pressure of gauge bosons which causes the FitzGerald-Lorentz contraction in the direction of motion, as well as the general relativity contraction term, a physical contraction around mass caused by radiation pressure, as demonstrated below. In a fluid, pressure is scattered and dispersed in all directions by molecular collisions (so air pressure horizontally is similar to that vertically), but the vacuum does not scatter and disperse gauge boson radiation. Consequently, gauge boson pressure acts in the radial direction from and towards masses, and ‘shadowing’ causes fundamental forces.
Dr Luboš Motl, Harvard University: ‘… quantum mechanics is perhaps the deepest idea we know. It is once again a deformation of a conceptually simpler picture of classical physics.’
The danger of the speculation in physics about ‘string theory’ approaches to quantum gravity is discussed in a weblog by Dr Peter Woit:
‘the danger is that there may be lots of ways of ‘quantizing gravity’, and with no connection to experiment you could never choose amongst them. String theory became so popular partly because it held out hope for being able to put the standard model and gravity into the same structure. But there’s no reason to believe it’s the only way of doing that, and people should be trying different things in order to come up with some new ideas.’
Apart from a few nearby galaxies such as Andromeda, all galaxies have a red shift. While there are speculations that the red shift may be ‘tired light’, there is no mechanism for this and no evidence of it from the spectrum of the red-shifted light. In fact, the best experimental black body radiation spectrum ever obtained was measured by the Cosmic Background Explorer satellite in 1992, from the 2.7 K (microwave) red-shifted 3,000 K (infrared) big bang radiation flash. This frequency spectrum was uniformly reduced by over 1,000 times by red shift, not by the scattering of radiation (scattering is frequency-dependent). It was emitted about 300,000 years after the big bang. The three pieces of evidence for the big bang, namely (1) red shifts, (2) the microwave background spectrum, and (3) the abundance of hydrogen, deuterium and helium in the universe, are conclusive proof of the big bang in general. The purpose of this paper is to establish a fourth piece of evidence, and to clarify what more we can learn from the big bang by proved experiments rather than by speculation.
Dr John Gribbin writes a mixed bunch of books, some of which are good and some of which contain nonsense. His In Search of Schroedinger’s Cat, 1984, states on page 2: ‘... what quantum mechanics says is that nothing is real ... Schroedinger’s mythical cat was invoked to make the differences between the quantum world and the everyday world clear.’ This is false, since Schroedinger was just pointing out the metaphysics of the Copenhagen Interpretation (see Schroedinger’s paper in Naturwissenschaften, v. 23, p. 812)! On page 3 he says that the experimental proof by Aspect in 1982 is ‘unambiguous’: the Copenhagen Interpretation is verified (in fact it is false). On page 4 he claims: ‘Some interaction links the two [photons] inextricably, even though they are flying apart at the speed of light, and relativity theory tells us that no signal can travel faster than light. The experiments prove that there is no underlying reality to the world.’ No, they don’t prove anything, because they rely on the assumption that Heisenberg’s uncertainty principle somehow applies to photon polarisation, when it only works for electrons and other particles with rest mass! Polarisation works like Young’s double-slit experiment (the slits must be close enough that the wave overlaps, so interference occurs), which has nothing to do with wavefunction collapse or quantum entanglement metaphysics. In fact, you cannot introduce uncertainty into the polarisation of a photon by measuring it, since the photon is going at light speed: your measuring particle would have to cross the transverse extent of the photon instantly (faster than the speed of light) for the polarisation to be changed, which it simply cannot do. No metaphysics and no faster-than-light influence. All that happens is that the measurement doesn’t alter the polarisation of the photon being measured; hence there is correlation.
The metaphysical nonsense of the 1982 Aspect experiment was a perverse repetition of the crackpot Michelson-Morley controversy of 1887: ‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A. S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), MA, MSc, FRS, Space Time and Gravitation: General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152. (Earth’s ‘absolute motion’ in the universe is about 390 km/s, as measured by the ± 0.003 Kelvin cosine variation in the 2.7 Kelvin microwave background radiation. This blueshift/redshift effect is well known; an article on it appeared in Scientific American in around 1977, titled ‘The New Aether Drift’, and it is always removed from the cosmic microwave background before the data are processed to find ripples, which are millions of times smaller in size than the relatively massive effect due to the earth’s motion.) The metaphysics of quantum entanglement is due to the incompetence of Bohm and Bell, who tried to test false ‘hidden variables’ theories against the Copenhagen Interpretation, when both are wrong. There are no mysterious ‘hidden variables’, just gauge bosons, which are known from quantum field theory (Feynman diagrams) anyway. Einstein, with B. Podolsky and N. Rosen, in 1935 wrote ‘Can quantum-mechanical description of physical reality be considered complete?’, Physical Review, v. 47, p. 777. Bohm and Bell tried to reapply it from molecules to light photons, which was a mistake.
‘An Electronic Universe’, N. Cook, Electronics World, Vol. 109, No. 1804 (2003): downloads of two articles under that title, the first part from August 2002 and the second part from April 2003, containing illustrations of the mechanism of the electron and of electromagnetic forces:
(For the diagrams, get the PDF version of the April 2003 ‘Electronic Universe’ article – note that the earlier Part 1, dated August 2002, is also available under the same title – from the Electronics World website at http://www.softcopy.co.uk/electronicsworld/ or see http://members.lycos.co.uk/nigelbryancook/Penrose.htm for a low-quality idea of what they look like. The version below is the draft version with amendments; it was condensed to 6 pages of print by the magazine’s editor, Phil Reed.)
The nature of matter
The speed of electric energy entering and leaving a pair of wires is that of light in the medium between the wires: the speed of electric energy is identical to the speed of light. Energy entering and leaving a capacitor has no mechanism by which to slow down, and maintains light speed. Cutting up the capacitor, each unit is found to hold light-speed energy as ‘static’ electricity, right down to the electron. The phenomena of transmission lines, capacitors, inductors, tuned circuits, and static electricity have been experimentally and theoretically proven to be entirely the result of 300,000 km/s electromagnetic waves. Everything in existence is based on these waves: we live in a ‘single velocity universe’. When a conductor x metres long is charged to v volts via a resistor, the energy flows into it at light speed for the surrounding medium, and there is no way for the energy to slow down. Catt and Dr Walton measured the discharge with a sampling oscilloscope, and found, as they had predicted, that the energy comes out in a pulse 2x/c seconds long and at only v/2 volts. This is because energy inside a charged conductor bounces around, passing through itself equally in both directions.
The magnetic curls from the two equal and opposite halves of the energy, going up and down the charged conductor, oppose one another and cancel out, but the electric fields add up, giving v/2 + v/2 = v volts, as observed. No resistance is encountered while equal amounts of energy pass at light speed through one another, so no heat is generated in the statically charged object. When discharged suddenly, the half of the energy which is already going towards the discharge point exits first, while the remainder reflects off the far end before exiting, so the output is a pulse 2x metres (2x/c seconds) long at v/2 volts.
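A minimal sketch of the discharge result just described (line length and charge voltage are illustrative assumptions):

    # The Catt-Walton discharge: a line of length x charged to v volts holds
    # two counter-propagating v/2 energy currents.  On discharge, the half
    # already travelling towards the output exits during 0..x/c, the other
    # half reflects off the far end and exits during x/c..2x/c, so the
    # output is a pulse 2x/c seconds long at v/2 volts.
    x = 0.3     # line length, metres - illustrative assumption
    v = 10.0    # charge voltage, volts - assumption
    c = 3.0e8   # energy speed (vacuum dielectric assumed), m/s

    print(f"output pulse: {v/2:.1f} V for {2*x/c:.2e} seconds")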
Ivor Catt’s experimental work, with theoretical contributions by Malcolm Davidson, David Walton, and Mike Gibson (see Part 1, August 2002), rigorously proves that a charged particle is a standing wave (circular motion) of transverse electromagnetic (TEM) energy, always having electric field E, magnetic field B, and speed c at right angles to one another (Fig. 1). This TEM wave, trapped by its own gravity, has a spin which produces a spherically symmetrical electric field and a magnetic dipole, explaining the wave packet in physics. The ‘particle’ is a trapped transverse electromagnetic wave.
If a particle propagates along its spin axis, it can spin either clockwise or anticlockwise around that axis relative to the direction of propagation. This gives rise to the two forms of charge, positive (positrons, up-quarks, etc.) and negative (electrons, down-quarks, etc.). The orbital directions of the electrons can be such as either to add to or to cancel out the magnetic fields resulting from the spin, which gives rise to the pairing of adjacent electrons in the Pauli exclusion principle. Arrows can be drawn in opposite directions from charges to distinguish positive and negative. The light speed of the TEM waves which constitute charges gives rise to electricity which travels at the same speed. For copper wire, the outer electrons have a spin at about 99% of light speed, and a chaotic orbital motion at about 1% of light speed at right angles to the spin. For the magnetic field to occur as observed around a wire carrying a current, the field is caused not by the electrons themselves but by energy that is carried like ‘pass-the-parcel’ by electrons at light speed. [The electron’s magnetic moment, as determined to 13 significant figures theoretically by QED calculation, is normally cancelled by Pauli (exclusion principle) ‘pairing’ of adjacent electrons.]
Everything seems to be either static or in motion at relatively slow speeds. The idea that everything is always in constant 300,000 km/s motion, together with the statement that there is no significant energy transfer by the electric drift current, sounds ridiculous. But we make use of such motion all the time, in the sense that sight itself necessitates 300,000 km/s electromagnetic energy entering the eyes. The origin of the constant speed of light lies in the matter which physically emits the light, and this offers an obvious and simple mechanical explanation for the phenomena of relativity.
The solution to the biggest problem, particle-wave duality, is that the electron is the negative half-oscillation of a light wave, proven by Anderson in 1932 when he stopped gamma rays with matter in a cloud chamber and got two particles from the light/gamma ray, each curling in opposite directions in the magnetic field – the pair-production process. The transformation of light of 1.022 MeV energy or more into an electron and a positron was a great discovery in science.
Unification of quantum mechanics and relativity
When drifting along at right angles to its plane of spin, the constant circular spin speed of 300,000 km/s at each point on the ring electron will be reduced, because part of the spin speed is then diverted into drift motion. Pythagoras’ theorem gives us: x^{2} + v^{2} = c^{2}, where x is the speed of spin of the electron, v is the drift speed, and c is the velocity of light. If an electron moves at a speed approaching the velocity of light, the equation above, rearranged, shows that the measure of time for the electron (the relative spin speed) is: x/c = (1 – v^{2}/c^{2})^{1/2}.
This formula, derived from Ivor Catt’s proof of the single-velocity universe, is identical to the time-dilation formula predicted in an abstract mathematical manner a century ago by Einstein. Further, it follows from Catt’s work that if time is dilated, then distance must be contracted by the same factor, in order that the velocity of light (the ratio of distance to time) remains constant.
[A more mechanical explanation of time-dilation, based on the fabric of space being real, is that all time is measured by motion, such as a pendulum, a mechanical clock, or an electron oscillating. When you are moving fast in absolute space, the matter has to cover more absolute distance to complete a single swing of a pendulum or oscillation of an atom. For example, if you swim back and forth across a river when there is no current, the time you take is minimal; but if there is a current, you take longer to complete each crossing. The mathematics of this is the Pythagorean sum, which gives the same factor, x/c = (1 – v^{2}/c^{2})^{1/2}. In this model of time-dilation, the slowing down is caused not by a variation in time but by an increase in the distance travelled, which means that it takes longer for a moving clock to tick than for a stationary one. Since all time is defined by motion, there is absolutely no difference between this physical explanation and the mathematical prediction from special relativity.]
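The Pythagorean factor evaluated at a few drift speeds (a simple check of the formula above, nothing more):

    # Evaluate the spin-speed (time-rate) factor x/c = (1 - v^2/c^2)^(1/2)
    # derived above, for a few values of v/c.
    import math

    for beta in (0.1, 0.5, 0.9, 0.99):   # beta = v/c
        print(f"v/c = {beta:4.2f}: relative spin speed x/c = {math.sqrt(1 - beta**2):.4f}")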
Mass increase similarly comes from electromagnetic theory, since mass depends on contracted distance. Therefore, Catt’s work implies not only the correct time-dilation formula but also the distance-contraction formula previously postulated to explain the inability of the Michelson-Morley experiment to detect our motion through the dielectric of the vacuum (an effect of the contraction of the measuring instrument by precisely the same factor as the change in the velocity of light).
The equation E = mc^{2} is far better derived from Ivor Catt’s single-velocity universe (see Part 1) than from Einstein’s postulates. Dr Arnold Lynch suggested to the writer that where observed fields apparently ‘cancel out’, the energy still needs to be accounted for. Since electrons pair up with opposite spins, the magnetic fields cancel out at long distances, but the magnetic field energy is still there. Since half of the TEM wave energy is magnetic, we see the reason behind the factor of 1/2 in the familiar equation for kinetic energy, which is absent from Einstein’s relation. In the constant-velocity universe, the product of acceleration and time is at = c, so:
E = Fd = (ma).(ct) = (mc).(at) = (mc).(c) = mc^{2}.
The time-dilation (slower clock electrons or energy oscillations in an atomic or quartz clock, and slower motion of atoms and molecules, etc.) due to motion at high speeds is set up during the acceleration period, so at = c tells us exactly how acceleration and changes in the rate of time flow are related, e.g., t = c/a. The absence of this from special relativity caused Einstein problems when a critic around 1915 pointed out the ‘twins paradox’ of relativity. The solution to the twins paradox is that acceleration needs to be taken into account, and general relativity applies to accelerated motion. As with the application of quantum mechanics to chemistry, the application of general relativity to real-life problems is impractical because of the difficulty or impossibility of finding analytical solutions (although many ingenious approximations provide valuable rough quantitative estimates for computer calculation).
The reader interested in this aspect can consult the official U.S. Atomic Energy Commission compilation, edited by Dr Glasstone, Sourcebook on Atomic Energy (Van Nostrand, London, 1967, pp. 88-9). This provides two derivations of E = mc^{2}: one uses the binomial theorem and the other uses calculus; neither contains any physical explanation of how the speed of light is related to matter. Catt’s theory dispenses with Einstein’s mathematical postulates of relativity by providing a physical mechanism. The big bang universe has an absolute age, and hence an absolute chronology and size, which is incompatible with special relativity, which assumed the steady-state, infinite, eternal universe that was the acceptable theory back in 1905.
The four fundamental forces in the universe
‘In many interesting situations… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all anti-slipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.
Dr Albert Einstein stated in his inaugural lecture at Leyden University in 1920: ‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities.’ – A. Einstein, ‘Ether and Relativity’, Inaugural Lecture, 1920; reprinted in A. Einstein, Sidelights on Relativity, Dover Books, New York, 1952, p. 23.
Fig. 2 gives the mathematical derivation of gravity using the dielectric of electronics, based on Ivor Catt’s analysis of the physical properties of the 377-ohm dielectric or continuum of the vacuum. The dielectric is a continuous medium embedded with atoms of matter. The continuity of dielectric with matter means that there is no continuous dissipation of energy due to particle collisions, as occurs in material fluids, and therefore resistance only to acceleration (thus providing the explanation of inertia). If we accept that the stars are receding, as modern astronomy shows, we must accept that the electronic dielectric of the vacuum moves in the opposite direction (towards us), maintaining continuity of volume. If you walk down a corridor, your volume of matter V will move in one direction, and will be continuously balanced by a volume of air, also V, moving in the opposite direction at the same time; this is why you do not leave a vacuum in your wake. (Without the opposite motion of an equal volume of the surrounding medium, flowing around you as a wave in space, motion would be impossible.)
Since distance is proportional to time past (the sun being 8 light-minutes away, the next star 4.3 light-years, etc.), the statement of the Hubble constant as velocity divided by observed distance is misleading (the stars will recede by a further amount during the interval in which the light is travelling to us); a true ‘constant’ is the speed of recession divided by the time taken for light to reach us (which is also the time past when the light was actually emitted). Correcting Hubble’s presentation thus gives us a constant which has the units of acceleration, and which leads directly to gravity. This shows G to vary depending on the place in the universe, analogous for our purposes to a 10^{55} megatons nuclear explosion in outer space (like a scaled-up version of the 9 July 1962 Starfish Prime 1.4 megaton test at 400 km altitude).
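A numerical sketch of the corrected constant (the Hubble constant value below is an assumption, chosen at roughly the currently accepted magnitude): writing v = Hct, with t the light travel time, gives a = dv/dt = Hc.

    # Restating the Hubble law as v = H*c*t (t = light travel time) gives a
    # constant with units of acceleration, a = dv/dt = H*c.
    H = 70.0e3 / 3.086e22   # Hubble constant, ~70 km/s/Mpc converted to 1/s
    c = 3.0e8               # speed of light, m/s

    a = H * c
    print(f"H = {H:.2e} /s, so a = H*c = {a:.2e} m/s^2")

With these inputs the acceleration comes out a little under 10^{-9} m/s^2.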
There are four fundamental forces:
The short-ranged strong nuclear force holds the nuclei of atoms together against the immense Coulomb repulsion of many positive, closely-confined protons, and this strong attraction is explained by the vacuum flux of ‘virtual’ particles being exchanged between nucleons (neutrons and protons). At short ranges (adjacent nucleons), the strong nuclear force is 137 times stronger than electromagnetism, because that is the ratio between the force given by Heisenberg’s uncertainty equation (in its energy, and hence momentum, exchange version) and Coulomb’s law of electric force for unit charges. (Since the nucleons shield one another, the force within the nucleus drops exponentially with distance, in addition to the inverse-square law.) The duration of time, t, for which the energy can be ‘borrowed’ to create virtual charges sets a limit to the light-speed range of the strong nuclear force, d = ct, so it does not extend to infinite distances, unlike electromagnetism and gravity. The same is true of the weak nuclear force. (Traditionally, the weak nuclear force is said to be a weaker version of the quantum electromagnetic force, which contains an arbitrary factor of 137 to make it work. In fact, the weak nuclear force is a weak version of the strong nuclear force, which is itself the so-called quantum electromagnetic force without the arbitrary force-fitting factor of 137. Hence, the so-called ‘electroweak’ unification is actually a half-baked ‘strong-weak’ unification.)
(You sometimes get people using the uncertainty principle to invent virtual particles like ‘gravitons’ for speculations about quantum gravity in the fabric of space. The uncertainty principle states that the product of the uncertainties in momentum and distance is at least h divided by twice pi. The product of momentum and distance is dimensionally equivalent to the product of energy and time, so energy can be borrowed from the vacuum to form virtual particles for a time that is inversely proportional to the energy borrowed. This works well for the nuclear forces, which are caused by relatively heavy particles that can therefore only exist for a tiny amount of time. It doesn’t work for Coulomb’s law or gravity, because any mass-energy of the carrier particles would limit the range instead of allowing an inverse-square law! People who fiddle the facts and suppress the falsehood of ‘graviton’ particle quantum gravity, such as the editor of Classical and Quantum Gravity, Inst. Phys., who tried to suppress the proof in this paper, are just paranoid bigots; it’s not as if the title of their journal suggests pro-graviton prejudice, does it now?)
We can prove this. Werner Heisenberg’s uncertainty relation for energy is simply: Et = h/(2.Pi), where E is the uncertainty in the energy and t is the uncertainty in the time. This can be rewritten in the form E/t by rearranging to give 2.Pi/h = 1/(Et) and then multiplying both sides by E^{2}, which yields E/t = 2.Pi.E^{2}/h.
Force is given by:
F = ma = dp/dt = d(mc)/dt = c.(dm/dt) = (1/c).(dE/dt).
Substituting the uncertainty principle (in the form E/t = dE/dt = 2.Pi.E^{2}/h), together with the definition of work energy as force multiplied by distance moved in the direction of the force (E = Fd), into this result gives us:
F = (1/c).(dE/dt) = 2.Pi.E^{2}/(hc) = 2.Pi.(Fd)^{2}/(hc).
Because this result contains F on both sides, we must cancel terms and rearrange, which gives us the strength of the strong nuclear force at short distances:
F = hc/(2.Pi.d^{2}).
This result is stronger than Coulomb’s law of electromagnetism for unit charges by the ratio 2hc.e_{0}/e^{2} (where e_{0} is the permittivity of free space and e is the electron charge), which equals about 137.036. Therefore, we have the explanation for the traditionally mysterious constant of physics, 137.
Here’s another proof of the same result. Heisenberg's uncertainty relation (based on the impossible gamma-ray microscope thought experiment) is pd = h/(2.Pi), where p is the uncertainty in momentum and d is the uncertainty in distance. The product pd is physically equivalent to Et, where E is the uncertainty in energy and t is the uncertainty in time. Since, for light speed, d = ct, we obtain: d = hc/(2.Pi.E). This is the formula the experts generally use to relate the range of a force, d, to the energy of the gauge boson, E. Notice that both d and E are really uncertainties in distance and energy, rather than real distance and energy, but the formula works for real distance and energy, because we are dealing with a definite ratio between the two. Hence for the 80 GeV mass-energy W and Z intermediate vector bosons, the force range is of the order of 10^{-17} m. Since the formula d = hc/(2.Pi.E) therefore works for d and E as realities, we can introduce work energy as E = Fd, which gives us the strong nuclear force law: F = hc/(2.Pi.d^{2}). The range of this force is of course x = hc/(2.Pi.E). When you take the statistical nature of the uncertainty principle into account, you may find that the ‘range’ is a relaxation length, so that it gives an exponential attenuation to the force, like e^{-d/x}, so the strong force is F = hc.e^{-d/x}/(2.Pi.d^{2}).
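Both results above can be evaluated with standard constants; a Python check (the formula d = hc/(2.Pi.E) gives about 2 x 10^{-18} m for an 80 GeV boson, consistent in order of magnitude with the range quoted above):

    # Check (a) the ratio of F = hc/(2*pi*d^2) to Coulomb's law for unit
    # charges, F_C = e^2/(4*pi*eps0*d^2), which is 2*eps0*h*c/e^2 ~ 137;
    # and (b) the range d = hc/(2*pi*E) for an 80 GeV gauge boson.
    import math

    h = 6.626e-34      # Planck's constant, J s
    c = 2.998e8        # speed of light, m/s
    e = 1.602e-19      # electron charge, C
    eps0 = 8.854e-12   # permittivity of free space, F/m

    print(f"2*eps0*h*c/e^2 = {2*eps0*h*c/e**2:.3f}")   # ~137.0

    E = 80e9 * e                                       # 80 GeV in joules
    print(f"range d = {h*c/(2*math.pi*E):.2e} m")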
[Relationship of momentum and energy: Heisenberg says the product of the momentum and position (distance) uncertainties is
ps = h/(2.Pi),
where momentum p = mc, while s = ct.
Hence ps = (mc)(ct) = (mc^{2})t = Et = h/(2.Pi).
Thus we have proved the really useful formula Et = h/(2.Pi) from Catt’s experimentally proved single-velocity universe (v = c), without the dimensional analysis that Heisenberg's followers use to get the same equation. This means it is rigorous. (The problem with saying dimensionally that ps = Et is that a dimensionless constant may be missed out: thus Newton got a speed equation for sound from dimensional analysis, but missed out the adiabatic effect and the gamma = 1.4 dimensionless constant; similarly, Maxwell's January 1862 displacement current formula has a problem with a factor of the square root of 2, because he had a false mechanism.)]
The nuclear-force virtual photons that are exchanged must be half-cycle photons, with a net charge, and not neutral full-cycle photons, for the attraction effect to work by polarisation of charge and molecular-type cohesive bonding. Full-cycle photons can only cause repulsive forces by delivering momentum, although Fig. 3 of this article proves how electromagnetic attraction occurs from opposite charges blocking, in each other’s direction, the momentum exchange coming in from the surrounding universe. It is essential to be clear that the inverse-square law forces are simple momentum delivery: by continuous energy exchange between spinning, radiating particles, and, for gravity, by the continuum or dielectric inward pressure caused by the Hubble motion of matter outwards from us. The nuclear forces have an exponential component that limits their range to nuclear dimensions, in addition to the inverse-square law. Because pulling quarks out of a nucleon requires enough energy to create still more quarks, the overall force may be even more complicated when these possibilities are included in the force law, so the force may actually increase with distance.
The weak nuclear force consists basically of Coulomb’s force law of electromagnetism, with a 10^{-10} multiplication factor for the phase space of the beta radioactivity of neutrons which are free to decay into protons. This factor is given by Pi^{2}.h.M^{4}/(Tc^{2}m^{5}), where h is Planck’s constant from Planck’s energy equation E = hf, M is the mass of the proton, T is the effective energy-release ‘life’ of the radioactive decay (i.e., the familiar half-life multiplied by 1/ln 2 = 1.44), c is the velocity of light, and m is the mass of an electron. In beta radioactivity, neutrons decay into protons by releasing an electron (beta particle) and an antineutrino. The fact that the controlling force is smaller than the Coulomb force gives the ‘weak’ force its name. It is still very much stronger than gravity.
Electromagnetic forces are caused by a continuous (non-quantum) electromagnetic energy emission and reception at the speed of light. Electric attraction is caused by opposite charges blocking the reception of energy from each other’s direction, so that they recoil towards one another, pushed by the energy received from every other direction. Repulsion is caused by similar charges which exchange more energy with each other than they receive from the expanding universe around them, so they recoil apart (the momentum, p, delivered by electromagnetic energy is p = mv = mc = E/c, from E = mc^{2}).
A continuous electromagnetic energy transfer in a theoretically static situation would lead to equilibrium, with no net gain or loss of energy, but the big bang universe is expanding. The energy input from distant receding charges is red-shifted. This energy-exchange process is distinct from the quantum theory of radiation which explains thermodynamics. In 1792 Prevost first introduced the modern thermodynamic idea that all objects at constant temperature are in thermal equilibrium, receiving just as much energy as they radiate to their surroundings. The expansion of the universe disturbs the thermal equilibrium between galaxies by red-shifting energy, which is why less energy is received than is radiated into space, providing a useful heat sink.
Fig. 3, using the principle that energy exchange transmits momentum, proves the mechanism behind electrostatic attraction and repulsion, and proves that the magnitude of the attractive force is equal to that of the repulsive force. (Fig. 3 shows that the energy hitting a particle on all sides from the surrounding universe will be slightly less intense, i.e. red-shifted, compared with the energy the particle is radiating out. This is due to the big bang. It means that two similar charges nearby will exchange more energy between one another than is hitting them on their unshielded sides, and thus recoil apart, causing ‘repulsion’. Two dissimilar charges do the opposite, because they block momentum exchange between one another to the same extent.) The impedance of the fabric of space in the vacuum is 377 ohms, not ohms/metre, so there is no attenuation with distance. Hence the force of electromagnetism is numerically the same as the gravity mechanism already explained, multiplied up by the statistical vector summation – a random walk (mathematically similar to molecular diffusion) of energy between all of the surrounding charges.
In the universe there are N particles distributed randomly with spherical symmetry around us, so the contributions from all the particles in the universe must be added together. The greatest contribution comes from near the outer edge of the universe, where both the recession speed and the number of particles are greatest, and where the distribution is most symmetrical around us. The contribution from nearby objects is insignificant. Hence, the forces of electromagnetism are equal to the attractive force of gravity for each particle, multiplied up by a factor that allows for the number of particles in the surrounding universe.
Since opposite charges block each other (by cancelling each other’s field), there can be no straight-line addition: a continuous-mode TEM wave travelling in a straight line will encounter equal numbers of positive and negative charges in the universe, which will block each other, statistically cancelling out.
Instead, the continuous-mode TEM wave has an effective strength due only to travel along a ‘random walk’ between particles of similar charge in the universe. This situation must therefore be analysed by random-walk statistics, which for N particles contributing equally, distributed in space along perpendicular directions x, y, and z, predicts a vector sum by Pythagoras’ theorem (assigning a positive sign to the 50% of energy contributions directed towards us and a negative sign to the 50% directed away, so that the cross-terms in the expansion of each bracket cancel down to just one term per particle): [X/(gravitational acceleration)]^{2} = (1_{x1} + 1_{x2} + …)^{2} + (1_{y1} + 1_{y2} + …)^{2} + (1_{z1} + 1_{z2} + …)^{2} = N. Hence, X = (gravitational acceleration).N^{1/2}.
N can be calculated from the known density of matter: the average mass of a star, multiplied by the number of stars in a galaxy, divided by the average volume of space which each galaxy has to itself, gives the density; this is multiplied by the volume of a sphere with a radius equal to the distance travelled by light during the age of the universe, which roughly gives the entire mass of the universe; and this in turn is divided by the mass of a hydrogen atom to give the number of particles of either charge, because about 90% of the atoms in the universe are hydrogen. The final result is 10^{80} particles, so N^{1/2} is about 10^{40}, which is indeed the factor we need.
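For transparency, here is the same order-of-magnitude recipe laid out as a calculation (a sketch: every input below is a round-number assumption of the kind described in the text, not a precise measurement):

# Order-of-magnitude estimate of N, following the recipe in the text
m_star = 2e30              # average star mass, kg (about one solar mass)
stars_per_galaxy = 1e11    # rough count
mpc = 3.086e22             # one megaparsec in metres
galaxy_volume = mpc**3     # assume roughly one galaxy per cubic megaparsec

density = m_star * stars_per_galaxy / galaxy_volume   # ~7e-27 kg/m^3

c = 3e8                    # speed of light, m/s
age = 4.3e17               # age of universe, s (roughly 13.7 Gyr)
R = c * age                # light-travel radius, ~1.3e26 m
mass_universe = density * (4.0/3.0) * 3.14159 * R**3  # ~6e52 kg

m_hydrogen = 1.67e-27      # kg; ~90% of atoms are hydrogen
N = mass_universe / m_hydrogen
print(N, N**0.5)           # of order 10^80 and 10^40 respectively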
The ratio of electromagnetic to gravitational force is thereby proven to be N^{1/2}, where N is the number of like charges in the universe. Calculation proves that this rigorous theory is correct, giving us unified electromagnetism and gravity. The electric charges which participate in electric attraction and repulsion are those on the outside surfaces of conductors, since the electromagnetic radiation which produces the forces by delivering momentum cannot penetrate conductors. This is the reason why gravitation, being due to the dielectric pressure, affects all the subatomic particles in the entire internal volume of an object composed of atoms, whereas Coulomb’s law is found to only apply to the outer ‘charged’ surface layer of atoms.
Proving the basic equations of electromagnetism
‘From a long view of the history of mankind – seen from, say, ten thousand years from now – there can be little doubt that the most significant event of the 19^{th} century will be judged as Maxwell’s discovery of the laws of electrodynamics. The American Civil War will pale into provincial insignificance in comparison with this important scientific event of the same decade.’ – R.P. Feynman, R.B. Leighton, and M. Sands, Feynman Lectures on Physics, vol. 2, AddisonWesley, London, 1964, c. 1, p. 11.
James Maxwell translated Faraday’s empirical law of induction into the mathematical form, curl.E = -dB/dt. Here, E is electric field, B is the magnetic field, t is time, and ‘curl.E’ is a simple mathematical operator: the difference between the gradients (variations with distance) of E in two perpendicular directions. It is evident that curl.E can be constant only if the electric field line has a constant curvature, a circular shaped field line, whence the vector operator’s name ‘curl’. This is why electric generators work on the principle of varying magnetic fields in coils of wire.
Maxwell then sought to correct Ampere’s incorrect law of electricity, which states that the strength of the magnetic field curling around a wire is simply proportional to the current, I: curl.B = μI, where μ is the magnetic constant (permeability). He realised that a vacuum-dielectric capacitor, while either charging or discharging, constitutes a physical break in the electric circuit.
He tried to explain this by studying capacitors with a liquid dielectric. Particles of the liquid, charged ions, drift towards oppositely charged capacitor plates, creating a ‘displacement current’ in the liquid. Maxwell naturally assumed that the fabric of space permits a similar phenomenon, with virtual charges forming a displacement current in a vacuum. He therefore decided to add a term for displacement current of his own invention to the current I in Ampere’s law to correct it: curl.B = μ(I + ε.dE/dt). So Maxwell’s equation for displacement current is the rate of change of electric field (dE/dt) times the electric constant ε (permittivity).
Before I start applauding Maxwell for either physical or mathematical insight, it is worth considering historian A.F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9).
Chalmers states that Orwell’s novel 1984 helps to illustrate how the tale was fabricated: ‘history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’
Maxwell deliberately fiddled his original calculation in order to obtain the anticipated value for the speed of light, as Part 3 of his paper, On Physical Lines of Force (January 1862), proves; Chalmers explains:
‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of 2^{1/2} smaller than the velocity of light.’
It took three years for Maxwell to finally force-fit his ‘displacement current’ theory to take the form which allows it to give the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’
Maxwell never summarised the four so-called ‘Maxwell equations’: he produced chaos with hundreds of equations, and never wrote them in their final form. It took the self-taught, deaf mathematician Oliver Heaviside to identify them. Heaviside was partly suppressed, like his acolyte Ivor Catt a century later. In 1875 Heaviside worked with his brother on the undersea telegraph cable between Newcastle and Denmark.
Contrary to 1984-type popular ‘official’ history, which says falsely that Bell invented the telephone, Bell actually just made a handset and intercom system, and failed to invent a long-distance telephone at all. Electronics engineers had a massive job getting signals out of long lines without distortion, regardless of amplifier or signal strength. The speech was distorted! [High frequencies suffered more than lower frequencies.] William Preece (head engineer of UK Post Office telephone research) suppressed the mathematician Heaviside by not reading his work, and by claiming Heaviside to be a country bumpkin. This meant that the entire world had to wait 20 extra years for digital signals (Morse code) to be supplemented by understandable speech in long telephone lines!
Preece had his own ‘official’ mad guesswork theory that Faraday inductance in long transmission lines caused distortion and should somehow be reduced by making special expensive cables. But Heaviside worked on both Maxwell’s equations and the Newcastle-Denmark undersea cable in 1875, and discovered experimentally as well as theoretically that Preece was wrong, and that the way to sort out the problem was to increase the inductance by adding loading coils to the telephone lines periodically, a bit like adding springs to a mechanical system [helping the higher frequencies to propagate, because the loading coils had a high resonant frequency, like a tuning fork in music]! Preece’s attacks on, and hatred of, Oliver Heaviside failed to ‘kill off’ the bumpkin, and ended up backfiring on Preece, who was exposed as a bit of a charlatan. What a pity he would not listen to a country yokel who had discovered the answer through hard work! Heaviside afterwards wrote about it, bitterly pointing out Preece’s quackery!
Heaviside became famous for successfully predicting the ionosphere! Poor old Preece, what a laughing stock he became! The ionosphere is a charged layer of ions caused by solar radiation in the upper atmosphere. Being electrically conducting, it reflects back almost all radio below 100 MHz, allowing Marconi to contact America by radio from Europe and rubbish the claims of hundreds of theoretical physicists who falsely sneered ‘radio only travels in straight lines’. They did not understand that radio could be reflected back by the conducting surface of the ionosphere! In fact, despite the alleged progress in atomic physics in 1913 by the ‘great’ Bohr, it was not until the 1920s that the theoretical physicist Appleton first worked out the maths of ionosphere reflection. Even now, ignorant writers regularly claim that ‘radio waves have been leaking out across space since the 1920s’, blissfully unaware of the ionosphere blocking all frequencies below UHF, and of the work NASA had to do to find reliable portable S-band radio gear for Moon contact! Here are Heaviside’s (‘Maxwell’s’) equations (the divergence operator, div., is just the sum of the gradients of the field over the perpendicular directions of space): div.E = ρ/ε (Gauss’ law, ρ being charge density); div.B = 0 (no magnetic monopoles); curl.E = -dB/dt (Faraday’s law of induction); and curl.B = μ(I + ε.dE/dt) (Ampere’s law with Maxwell’s displacement current term).
The crucial two curl equations can be derived using Catt’s discovery that the fundamental entity of the universe is the eternally c-speed transverse electromagnetic (TEM) wave, which has the simple property E = cB. Each term here is a vector, since they are at right angles to each other, but bold print is not necessary if you remember this. (It is permissible to take curls of a perpendicular vector equation, since the curl operator itself is defined as the difference in gradients of the field between perpendicular directions.) I emailed my derivation to Catt’s co-author Dr David Walton, who agreed with the mathematics. Start by taking the curls of both sides of E = cB and its equivalent, B = (1/c).E, giving:
curl.E = c.curl.B
curl.B = (1/c).curl.E
Now, because any field gradient or difference between gradients (curl) is related to the rate of change of the field by the speed of motion of the field (e.g., dB/dt = -c.dB/dr for a wave moving at speed c, where t is time and r is distance), we can replace each curl by -(1/c) times the rate of field change, the sign being fixed by the relative orientation of the fields and the direction of propagation:
curl.E = c.[-(1/c).dB/dt] = -dB/dt (Faraday’s law of induction)
curl.B = (1/c).[(1/c).dE/dt] = (1/c^{2}).dE/dt (Maxwell’s ‘displacement current’ term)
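The derivation can be checked symbolically for the situation it describes, an arbitrary waveform travelling at speed c. Below is a sketch using Python’s sympy library (the one-dimensional spatial gradient stands in for the curl, and the sign convention follows dB/dt = -c.dB/dr for a wave moving towards positive r):

import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
f = sp.Function('f')     # arbitrary waveform

E = f(x - c*t)           # field pattern moving at speed c
B = E / c                # the TEM-wave relation E = cB

# curl.E + dB/dt in this 1D stand-in; zero means Faraday's law
# (curl.E = -dB/dt) holds
faraday = sp.diff(E, x) + sp.diff(B, t)
print(sp.simplify(faraday))   # prints 0 for any waveform f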
We can therefore see how the universe is controlled by the c-speed TEM wave, eternally in motion, and eternally having both magnetic field B and electric field E related by E = cB. From this fundamental building block of the universe spring electricity and the law of electromagnetic induction.
We also see that the speed c is actually the speed of electricity. What Maxwell thought of as the exception to the rule (displacement current in a vacuum) is actually the rule: the normal mechanism of energy transfer in electricity, since a capacitor is a transmission line. Teachers like Nobel Laureate Feynman (see the quotation above), who never discovered the mechanisms for the forces of nature, have nevertheless done useful mathematical work. However, Feynman is popularly quoted as saying that, because something has not yet been done, it will therefore always be impossible to do it: ‘nobody understands quantum mechanics.’
This is most frequently quoted by the establishment professors who squander taxpayers’ money on efforts to promote the mistaken results of their obfuscating mathematical trivia. Instead of using remarks like Feynman’s to hold back progress, we should read them as proof of the ignorance and apathy behind the equations. I disagree with Ivor Catt and Dr Walton, who say that modern physics will collapse and should be ignored. The mathematics of modern physics (quantum mechanics, electromagnetism, and relativity) is correct. The present theory started out as an idea based on the nature of physical space and its reaction to the big bang. A rigorous mathematical proof was then formulated. The first version predicted specifically that the outer regions of the universe would not be slowed down by gravity, long before Saul Perlmutter discovered from supernovae redshifts that there is, as predicted, no long-range gravitational retardation of the big bang. So we have a prediction preceding experimental confirmation.
FIGURE LEGENDS (SEE PRINT VERSION FOR COPYRIGHT ILLUSTRATIONS: illustrations are reproduced in a PDF version of the APRIL 2003 AN ELECTRONIC UNIVERSE article – note that there is also the earlier part 1 dated August 2002 available under the same title! – from the Electronics World website at http://www.softcopy.co.uk/electronicsworld/ ):
Fig 1. The Transverse Electromagnetic (TEM) wave electron. Shows how a spherically symmetrical electric field (E) and a dipole magnetic field (B) result from gravitationally trapped light. If you have a light photon which is an oscillating electric field with accompanying magnetic field, half the cycle is negative and half positive. In pair production, electron and positron pairs (negative and positive electron pairs) are created when a gamma ray of energy exceeding the equivalent of 2 electron rest masses disappears in the strong, space warping fields near a heavy nucleus. This was photographed and proved (using radioactive sources in Wilson cloud chambers) by Anderson and Blackett in the 1930s.
Fig.2. The 16 step proof of gravity’s cause and detailed mechanism.
Fig. 3. The mechanisms of electromagnetic forces. (The original legend repeats verbatim the discussion given at the start of this section: redshift of the energy exchanged with the receding universe makes two similar nearby charges recoil apart while dissimilar charges shield one another, with attraction and repulsion equal in magnitude; and the random-walk vector summation over the N like charges of the universe multiplies the gravity mechanism by N^{1/2}, about 10^{40}.)
It is interesting that Hawking has shown that at least large (compared to fundamental particle) black holes emit black body radiation, with an effective emission temperature of T = hc^{3}/(16π^{2}kGM), where h is Planck’s constant and k is Boltzmann’s constant. For black holes far smaller than the normal big astronomical ones, radiation is mainly in the gamma ray region, since the emission temperature is so high. The problem with detecting Hawking radiation is not that it is hard to detect gamma rays, but that outer space is flooded with gamma rays far more intense than Hawking’s prediction. The natural background gamma ray ‘noise’ from supernova explosions and radioactivity swamps detection of Hawking radiation, which undoubtedly exists on the basis of Hawking’s mathematical proof, despite there being no experimental proof for astronomical black holes. You can see from the Hawking formula that the effective emission temperature increases as the mass of the black hole, M, is reduced. For a black hole the mass of an electron, the assumptions Hawking made do not apply, but if the formula did apply, the effective radiating temperature would be exceedingly high. Sufficiently high-energy gamma rays are actually very hard to detect, because the penetrating power increases with the energy. If you have gamma rays of great penetrating power, they will penetrate instruments without electron interaction (Compton effect and photoelectric effect), and will go undetected, apart from nuclear effects like the momentum they impart to the nuclei of matter. The greater the energy, and hence penetrating power, of a gamma ray, the less energy it is likely to deposit in material, and the less damage it does. So gravity ‘gravitons’ would be Hawking radiation with energies exceeding gamma radiation. The apparent spin of the predicted ‘graviton’ could be an artifact of interaction between the Hawking gravity radiation and the Higgs boson. This gravity radiation is a quantum energy exchange process, distinct from the classical-type mechanism for electromagnetic forces discussed on this page. The fundamental particles emitting the gravity-causing Hawking radiation are the ‘virtual’ particles that fill the fabric of space. (The contribution from the real particles of matter, which fill only a tiny proportion of space, will be insignificant by comparison, since there is so little real matter in comparison to the virtual matter or ‘spacetime fabric’ filling the volume of space.)
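For concreteness, here is the Hawking formula evaluated numerically (a sketch; the electron-mass case is the naive extrapolation which, as just noted, lies outside the assumptions of Hawking’s proof):

import math

h = 6.626e-34   # Planck's constant, J.s
c = 2.998e8     # speed of light, m/s
G = 6.674e-11   # gravitational constant
k = 1.381e-23   # Boltzmann's constant

def hawking_temperature(M):
    # T = h.c^3 / (16.pi^2.k.G.M), as quoted in the text
    return h * c**3 / (16 * math.pi**2 * k * G * M)

print(hawking_temperature(2e30))      # ~6e-8 K for a solar-mass black hole
print(hawking_temperature(9.11e-31))  # ~1.4e53 K naively applied to an electron mass

The numbers make the point in the text explicit: the effective temperature rises steeply as M falls, so only very small black holes would radiate in or beyond the gamma ray region.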
Back in 1996, ‘popular physics’ authors were flooding the media with hype about backward time travel, 10-dimensional strings, parallel universes and Kaku flying saucer speculation, and were obviously lying that such unpopular, non-testable guesses were science. A-level physics uptake now falls by 4% per year!
In February 1997, I set up the peer-reviewed journal Science World, ISSN 1367-6172, which Dr Arnold Lynch, Catt’s co-author of the 1998 IEE HEE/26 paper ‘A difficulty in electromagnetic theory’, expressed critical interest in. Lynch criticised my initial approach to electron spin. He had been taught about the discovery of the electron by its discoverer, J.J. Thomson. Lynch encouraged me to break away from the Popper-Feynman (experimentally confirmed speculation) approach to physics, and instead to prove results using experimental facts. This new approach immediately began to generate the desired interest. I also set up an internet page, making the facts freely available.
How could experimentallybased lifesaving improvements to Air Traffic Control generate abuse?
Trying to promote Catt’s useful work (http://www.ivorcatt.com/3ew.htm), I came across a few people who have nothing better to do with their time than defend the ‘string theory’ status quo. These few egotists are sufficiently persistent that they can spoil discussions by making personal comments about how Catt and I must be egotists! They offer nothing constructive, just sneers. At a deeper level, I think Catt is wrong in having innate faith in society. People enjoy bashing new ways of assembling the facts that are struggling to gain acceptance. They enjoy sneering, and not just when asked for help. People will actively attack innovation while pretending to defend the status quo against anti-science fanatics. The only people who can really escape this are, unfortunately, politicians. ‘Innovation = egotism. Science = status quo. Change = attack on science.’ – The paranoia of egotists who think they are defending science by ignorantly attacking life-saving technology as laughable.
Safety innovation suppressed: http://www.ivorcatt.com/3ew.htm
Electronics World: http://www.softcopy.co.uk/electronicsworld/
Hydrodynamics (sound and shock waves): http://einstein157.tripod.com/
Fundamental physics: http://feynman137.tripod.com/
http://members.lycos.co.uk/nigelbryancook/
http://electrogravity.blogspot.com/
http://glasstone.blogspot.com/
Analytical mathematics for physical understanding, versus abstract numerical computation
‘Nuclear explosions provide access to a realm of high-temperature, high-pressure physics not otherwise available on a macroscopic scale on earth. The application of nuclear explosions to destruction and warfare is well known, but as in many other fields of research, out of the study of nuclear explosions other new and seemingly unrelated developments have come.’ – Dr Harold L. Brode, ‘Review of Nuclear Weapons Effects’, Annual Review of Nuclear Science, Vol. 18, 1968, pp. 153-202.
Introduction. The sound wave is longitudinal and has pressure variations. Half a cycle is compression (overpressure) and the other half cycle of a sound wave is underpressure (below ambient pressure). When a spherical sound wave goes outward, it exerts outward pressure which pushes on your eardrum to make the noises you hear. Therefore the sound wave has outward force F = PA, where P is the sound wave pressure and A is the area it acts on.
Note the outward force and equal and opposite inward force. This is Newton’s 3rd law. The same happens in explosions, except the outward force is then a short, tall spike (due to air piling up against the discontinuity and going supersonic), while the inward force is a longer but lower pressure. A nuclear implosion bomb relies upon Newton’s 3rd law for the TNT surrounding a plutonium core to compress the plutonium. The same effect in the Higgs field surrounding outward-going quarks in the ‘big bang’ produces an inward force which gives gravity, including the compression of the earth's radius by (1/3)MG/c^{2} = 1.5 mm (the contraction term effect in general relativity). Fundamental physical force mechanisms have been developed in consequence.
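The quoted 1.5 mm contraction is easy to verify with standard values (a sketch):

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
c = 2.998e8     # speed of light, m/s

contraction = G * M / (3 * c**2)    # the (1/3)MG/c^2 term in the text
print(contraction * 1000, 'mm')     # ~1.5 mm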
Sir G. I. Taylor, in his analysis of the Trinity nuclear test, observed in 1950: ‘Within the radius 0.6R the gas has a radial velocity which is proportional to the distance from the centre…’ (Proc. Roy. Soc., v. 201A, p. 181.) This is just like Hubble’s ‘big bang’ recession. The writer came across this effect in the computer outputs published by H. L. Brode in ‘Review of Nuclear Weapons Effects’, Annual Review of Nuclear Science, v. 18, pp. 153-202 (1968), and decided to study the big bang with the mass-causing gauge boson or Higgs field as a perfect fluid analogy to air. The result is a prediction of gravity and other testable data.
History. Archimedes’ book On Floating Bodies neglected fluid mechanisms and used a mathematical trick to ‘derive’ the principle of fluid displacement he had empirically observed: you first accept that the water pressure at equal depths, both below a floating object and in open water, is the same. Since the pressure is caused by the mass above it, the mass of water displaced by the floating object must therefore be identical to the mass of the floating object. Notice that this neat proof includes no dynamics, just logical reasoning.
MECHANISM OF BUOYANCY: HOW BALLOONS RISE DUE TO GRAVITY
Archimedes did not know what pressure is (force/area). All he did was to prove the results in an ingenious but mathematically abstract way, which limits understanding. There was no concept of air pressure until circa 1600, and the final proof of the nature of air is falsely credited to Maxwell's treatise on the kinetic theory of gases.
In water, where you sit in a bath like Archimedes, you just observe that the water displaced is equal in volume to your volume when you sink, or is proportional to your mass if you float. Archimedes proves these observed facts using some clever arguments, but he does not say that it is the variation in pressure with depth which causes buoyancy. For a 70 kg person, 70 litres of air (84 grams) is displaced so a person’s net weight in air is 69.916 kg, compared to 70 kg in a vacuum.
A balloon floats because the air pressure nearer the earth (at the bottom of the balloon) is greater than the pressure higher up, at the top, so the net force is upward. If you have a narrow balloon, the smaller cross-sectional area is offset by the greater vertical extent (greater pressure difference between top and bottom). This is the real cause of buoyancy. Buoyancy is proportional to the volume of air displaced by an object because the upthrust is the pressure difference between the top and bottom of the object, times the horizontal cross-sectional area, a product which is proportional to volume. If you hold a balloon near the ground, it cannot become buoyant unless air pushes in underneath it: the cause of the buoyancy is the air displaced from above the balloon to beneath it, which then pushes it upwards. Similarly, a floating object shields water pressure, which pushes it upward to keep it floating. This is not a bare ‘law of nature’ but is due to a physical process.
In water, the water pressure in atmospheres is 0.1D, where D is the depth in metres. You are pushed up by water pressure because it is bigger further down, because of gravity; this causes buoyancy. Air density and pressure (ignoring small effects from temperature variations) fall by half for each 4.8 km increase in altitude.
Air pressure falls with altitude by 0.014% per metre, like air density (ignoring temperature variation). Take a 1 m diameter cube-shaped balloon. If the upward pressure on the base of it is 14.7 psi (101 kPa), then the downward pressure on the top will be only 14.698 psi (100.9854 kPa). Changing the shape but keeping the volume constant will of course keep the total force constant, because the force is proportional not just to the horizontal area but also to the difference in height between top and bottom, so the total force is proportional to volume. Archimedes’ upthrust force therefore equals the displaced mass of air multiplied by gravitational acceleration. Mechanisms are left undiscovered in physics because of popular obfuscation by empirical ‘laws’. Someone discovers a formula and gets famous for it, blocking further advance. The discoverer has at no stage proved that the empirical mathematical formula is a God-given ‘law’. In fact, there is always a mechanism.
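The two ways of computing the upthrust, pressure difference times cross-section versus weight of displaced air, can be compared directly. This sketch uses a round sea-level density of 1.2 kg/m^3 (the 0.014% per metre figure above corresponds to a slightly larger effective density; the point here is only that the two computations agree identically):

rho_air = 1.2   # sea-level air density, kg/m^3 (round-number assumption)
g = 9.81        # m/s^2

h = 1.0         # height of the cube-shaped 'balloon', m
area = 1.0      # horizontal cross-section, m^2

dP = rho_air * g * h                         # hydrostatic pressure difference, ~12 Pa
upthrust = dP * area                         # net upward force from the pressure gradient
weight_displaced = rho_air * g * (area * h)  # Archimedes' statement of the same thing

print(upthrust, weight_displaced)            # identical: ~11.8 N each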
Sir Isaac Newton in 1687 made the crucial first step of fluid dynamics by fiddling the equation of sound speed, using dimensional analysis lacking physical mechanism, to give the ‘right’ (already assumed) empirical result. Laplace later showed that Newton had thus ignored the adiabatic effect entirely, which introduces a dimensionless factor of 1.4 into the equation: the compression in the sound wave (which is a pressure oscillation) increases its temperature, so the pressure rises more than inversely as the reduction in volume. The basic force physics of sound waves was ignored by mathematicians from S.D. Poisson [‘Mémoire sur la théorie du son’, Journal de l’école polytechnique, 14ème Cahier, v. 7, pp. 319-93 (1808)] to J. Rayleigh [‘Aerial plane waves of finite amplitude’, Proceedings of the Royal Society, v. 84, pp. 247-84 (1910)], who viewed nature as a mathematical enterprise rather than a physical one.
In 1848, the failure of sound waves to describe abrupt explosions was noticed by G. G. Stokes, in his paper on the subject, ‘On a difficulty in the theory of sound’, Philosophical Magazine, v. 33, pp. 349-56. It was noticed that the ‘abrupt’ or ‘finite’ waves of explosions travel faster than sound. Between 1870 and 1889, Rankine and Hugoniot developed the ‘Rankine-Hugoniot equations’ from the mechanical work done by an explosion in a cylinder upon a piston (the internal combustion engine). [W. J. M. Rankine, ‘On the thermodynamic theory of waves of finite longitudinal disturbance’, Transactions of the Royal Society of London, v. 160, pp. 277-88 (1870); H. Hugoniot, ‘Sur la propagation du mouvement dans les corps et spécialement dans les gaz parfaits’, Journal de l’école polytechnique, v. 58, pp. 1-125 (1889).] As a result, experiments in France about 1880 by Vieille, Mallard, Le Chatelier, and Berthelot demonstrated that a tube filled with inflammable gas burns supersonically, a ‘detonation wave’, which Chapman in 1899 showed was a hot Rankine-Hugoniot shock wave which almost instantly burned the gas as it encountered it. These studies, on the effects of explosions upon pistons in cylinders, led to modern motor transport, which cleverly utilises the rapid sequence of explosions of a mixture of petrol vapour and air to mechanically power cars. Similarly, another ‘purely’ scientific instrument, the type of crude vacuum tube screen used by J.J. Thomson to ‘discover the electron’, was later used as the basis for the first television picture tubes.
Lord Rayleigh (1842-1919) is the author of the currently existing textbook approach to ‘sound waves’, and nowhere does he worry about sound waves really being composed of air molecules, particles! Rayleigh in fact corresponded with Maxwell, who developed the kinetic theory to predict the spectrum of air molecule speeds (the Maxwell distribution; Maxwell-Boltzmann statistics), but their discussion of the physics of sound was limited to musical resonators and did not include the mechanism for sound transmission in air. Maxwell’s distribution was based on flawed assumptions, which make it only approximate. For example, collisions of air molecules are not completely elastic, because collision energy is partly converted into internal heat energy of molecules, and thermal radiation results; in addition, molecules attract when close.
Rayleigh’s non-molecular ‘theory of sound’ was first published in two volumes in 1877, titled (inaccurately and egotistically) The Theory of Sound. It is no more a ‘theory of sound’ than Maxwell’s elastic solid aether is the theory of electromagnetism. But it won Rayleigh the fame he wanted: when Maxwell died in 1879, his position as professor of experimental physics at the Cavendish lab was given to Rayleigh. We remember the lesson of Socrates, that the recognition of ignorance is valuable because it shows what we need to find out. Aristotle in 350 BC had proposed that sound is due to a motion of the air, but like Rayleigh, Aristotle ignored the subtle or trivial technical problem of working out and testing a complete mechanism! Otto von Guericke (1602-86) claimed to have disproved Aristotle’s idea that air carries sound, experimentally. This is another lesson: experiments can be wrong. Von Guericke’s experiment was a fraud because he pumped out air from a jar containing a bell, and continued to hear the bell ring through the vacuum. In fact, the bell was partly connected to the jar itself, which transmitted vibrations, and the resonating jar caused sound. Another scientist, Athanasius Kircher, in 1650 repeated the bell-in-vacuum experiment and confirmed von Guericke’s finding; this time the error was a very imperfect vacuum in the jar, due to an inefficient air pump. (More recently, in 1989, a professor of physical chemistry at Southampton University allegedly picked up a neutron counter probe that was sensitive to heat, and obtained a gradual reading due to the effect of hand heat on the probe when placing it near a flask, claiming to have detected neutrons from ‘cold fusion’.)
Only in 1660 did Robert Boyle obtain the first reliable evidence that air carries sound. He did this by observing the decrease in the sound of a continually ringing bell as air was efficiently pumped out of the jar. Gassendi in 1635 measured the speed of sound in air, obtaining the high value of 478 m/s. He ignored temperature and the effect of the wind, which adds to or subtracts from the sound speed. He also found that the speed is independent of the frequency of the sound. In 1740, Bianconi showed that the speed of sound depends on air temperature.

Newton in Principia, 1687, tried to get a working sound wave theory by fiddling the theory to fit the inaccurate experimental ‘facts’. Newton says the air oscillation is like a pendulum, which has a velocity of v = (gh)^{1/2}, so for air of pressure p = ρhg, the sound velocity is (p/ρ)^{1/2}. Lagrange in 1759 disproved Newton’s theory, and in 1816 Laplace produced the correct equation. Although Newton’s formula for sound speed is dimensionally correct, it omits a dimensionless constant, the ratio of specific heat capacities for air, γ = 1.4. This is because, as Laplace said in 1816, the sound wave has pressure (force/area), which alters the temperature. The sound wave is therefore not at constant temperature (isothermal), and the actual temperature variation with pressure increases the speed of sound. This is the adiabatic effect; the specific heat capacity of a unit mass of air at constant temperature differs from that at constant pressure, which Laplace deals with in his Mécanique Céleste of 1825. Newton in effect falsely assumes the isothermal relationship p/p_{o} = ρ/ρ_{o} (where subscript o signifies the value for the normal, ambient air outside the sound wave), when the correct adiabatic equation is p/p_{o} = (ρ/ρ_{o})^{γ}. Hence the Newtonian speed of sound, (p/ρ)^{1/2}, is false, and the correct speed of sound is higher, (γp/ρ)^{1/2}. Of course, it is still heresy to discuss Newton’s fiddles and ignorance objectively, just as it is heresy to discuss Einstein’s work objectively. Criticisms of ‘heroes’ upset cranks.
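A two-line check of the isothermal (Newton) versus adiabatic (Laplace) results, with round sea-level values assumed:

import math

p = 101325    # ambient air pressure, Pa
rho = 1.2     # air density, kg/m^3
gamma = 1.4   # ratio of specific heats for air

newton = math.sqrt(p / rho)            # isothermal result, ~291 m/s
laplace = math.sqrt(gamma * p / rho)   # adiabatic result, ~344 m/s
print(newton, laplace)                 # the ratio is sqrt(1.4), about 18%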
Lord Rayleigh’s biggest crime was to reject Rankine’s 1870 derivation of the conservation of mass and momentum across a shock front. Rayleigh objected that energy cannot be conserved across a discontinuity, but Hugoniot in 1889 correctly pointed out that the discontinuity in entropy between the ambient air and the shock front changes the equation of state. Sadly, Rayleigh’s false objection was accepted by the textbook author Sir Horace Lamb and incorporated in editions of Lamb’s Hydrodynamics (first published in 1879) up to and including the final (sixth) edition of 1932 (chapter X). Such was the influence of Lord Rayleigh that Sir Horace Lamb merely mentioned Hugoniot’s solution in a footnote, where he dismisses it as physically suspect. Lord Rayleigh repaid Lamb by writing an enthusiastic review of the 4^{th} edition of Hydrodynamics in 1916 which described other books on the subject as ‘arid in the extreme’, stated ‘to almost all parts of the subject he [Lamb] has made entirely original contributions’, and misleadingly concluded: ‘the reader will find expositions which could scarcely be improved.’
Perhaps the greatest challenge to common sense by a mathematician working in fluid dynamics is Daniel Bernoulli’s 1738 Hydrodynamica, which mathematically related pressure to velocity in a fluid. This ‘Bernoulli law’ was later said to explain how aircraft fly. The experimental demonstration is that if you blow between two sheets of paper, they move together, instead of moving apart as you might naively expect. The reason is that a faster airflow leads to lower pressure in the perpendicular direction. The myth of how an aircraft flies thus goes like this: over the curved upper surface of an aircraft wing, the air travels a longer distance in the same time as the air flowing along the straight lower surface of the wing. Therefore, Bernoulli’s law says there is faster flow on the top, with lower pressure down against the wing (perpendicular to the air flow), so the wing is pushed up by the higher upward-directed pressure on the lower side of the wing due to the slower airflow: http://quest.arc.nasa.gov/aero/background/. But to teach this sort of crackpot ‘explanation’ as real physics is as sadistic as saying Newton created gravitation, see http://www.textbookleague.org/105wing.htm:
‘That neat refutation of ‘the common textbook explanation’ comes from an article that Norman F. Smith, an aeronautical engineer, contributed to the November 1972 issue of The Physics Teacher. The article was called ‘Bernoulli and Newton in Fluid Mechanics’. Smith examined Bernoulli’s principle, showed it was useless for analyzing an encounter between air and an airfoil, and then gave the real explanation of how an airfoil works:
Newton has given us the needed principle in his third law: if the air is to produce an upward force on the wing, the wing must produce a downward force on the air. Because under these circumstances air cannot sustain a force, it is deflected, or accelerated, downward.
‘There was nothing new about this information, and Smith demonstrated that lift was correctly explained in contemporary reference books. Here is a passage which he quoted from the contemporary edition of The Encyclopedia of Physics:
The overwhelmingly important law of low speed aerodynamics is that due to Newton. . . . Thus a helicopter gets a lifting force by giving air a downward momentum. The wing of a flying airplane is always at an angle such that it deflects air downward. Birds fly by pushing air downward. . . .
‘Nearly 30 years later, fake ‘science’ textbooks continue to dispense pseudo-Bernoullian fantasies and continue to display bogus illustrations which deny Newton's third law and which teach that wings create lift without driving air downward.’ Also: http://www.lerc.nasa.gov/WWW/K-12/airplane/wrong1.html
The ‘Force of sound’
As noted in the Introduction above, the sound wave is longitudinal: half a cycle is compression (overpressure), half is underpressure, and an outgoing spherical sound wave exerts an outward force F = PA, where P is the sound wave pressure and A is the area it acts on. When you read Rayleigh’s textbook on ‘sound physics’ (or whatever dubious title it has), you see the fool fits a wave equation from transverse water waves to longitudinal waves, without noting that he is creating particle-wave duality by using a wave equation to describe the gross behaviour of air molecules (particles). Classical physics thus has even more wrong with it, because of mathematical fudges, than modern physics. But the point I’m making here is that sound has an outward force and an equal and opposite inward force following it. It is this oscillation which allows the sound wave to propagate, instead of just dispersing like air blown out of your mouth.
Why not fit a wave equation to the group behaviour of particles (molecules in air) and talk of sound waves? It is far easier than dealing with the fact that the sound wave has an outward pressure phase followed by an equal underpressure phase, giving an outward force and an equal-and-opposite inward reaction, which allows music to propagate. Nobody hears any music, so why should they worry about the physics? Certainly they can't hear any explosions, where the outward force has an equal and opposite reaction too, which in the case of the big bang gives us gravity.
http://www.math.columbia.edu/~woit/wordpress/?p=348
Nearly everyone is out to destroy science for their own ends, so Susskind isn’t alone. Newton had to overcome all kinds of bigotry.
Newton is the founder of laws of nature. He discovered an inverse square law proof (for circular orbits only) in 1666, but only published his book in 1687. The major delay was doing extra work and finding a framework to avoid three kinds of objectors:
(1) Religious objections (hence the religious style of ‘laws’ of nature/God);
(2) Petty colleagues who would ridicule any errors or omissions;
(3) ‘Little smatterers’ (Newton’s term) who were against innovation (hence writing in Latin).
It would have been delayed longer if Halley had not funded the printing from his own pocket when he did.
Spacetime is not regarded as crackpot, but it links time and distance. This implies that the big bang recession velocity can be expressed as a function of time, not just of distance. Hence there is an acceleration, which allows the outward big bang force to be calculated from F = ma, and you then get the magnitude of the inward vector boson force that causes gravity from Newton’s 3rd law. This is non-speculative, unless you disagree with the concept of spacetime, or with F = ma.
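The arithmetic described in this paragraph can be sketched in a few lines (the age and mass figures are round assumptions; the mass of the universe in particular is only known to an order of magnitude):

c = 3e8       # speed of light, m/s
t = 4.3e17    # age of universe, s (roughly 13.7 Gyr; an assumed round figure)

a = c / t     # acceleration implied by expressing recession velocity against time
print(a)      # ~7e-10 m/s^2

M = 3e52      # rough mass of the universe, kg (an assumption)
print(M * a)  # ~2e43 N outward force; Newton's 3rd law gives an equal inward force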
First of all, take the simple question of how the vacuum allows photons to propagate any distance, but quickly attenuates W and Z bosons. Then you are back to the two equations for a transverse light wave photon: Faraday's law of electric induction, and Maxwell's vacuum displacement current in Ampere's law. Maxwell (after discarding two mechanical vacuums as wrong) wrote that the displacement current in the vacuum was down to tiny spinning "elements" of the vacuum (Maxwell, Treatise, Art. 822; based partly on the effect of magnetism on polarised light).
I cannot see how loop quantum gravity can be properly understood unless the vacuum spin network is physically understood with some semiclassical model. People always try to avoid any realistic discussion of spin by claiming that because electron spin is half a unit, the electron would have to spin around twice to look like one revolution. This isn't strange, because a Mobius strip with half a turn in the loop has the same property (because both sides are joined, a line drawn around it is twice the length of the circumference). Similarly, the role of the Schroedinger/Dirac wave equations is not completely weird, because sound waves are described by wave equations while being composed of particles. All you need is a lot of virtual particles in the vacuum interacting with the real particle, so that it is jiggled around as if by Brownian motion.
But there is a lot of obfuscation introduced by maths even at low levels of physics. Most QED calculations completely cover up the problems between SR and QED: the virtual particles in the vacuum look different to observers in different states of motion, etc. In Coulomb's law, the QED vector boson "photon" exchange force mechanism will be affected by motion, because photon exchanges along the direction of motion will be slowed down. Whether the FitzGerald-Lorentz contraction is physically due to this effect, or to a physical compression/squeeze from other force-carrying radiation of the vacuum, is unspeakable in plain English. The problem is dressed up in fancy maths, so people remain unaware that SR became obsolete with GR covariance in 1915.
http://motls.blogspot.com/2006/02/andyspubliclecture.html:
For the universe to be a black hole, R = 2GM/c^2. With M = [density].(4/3).Pi.R^3, we get

R = 2G.[density].(4/3).Pi.R^3/c^2 = (8/3).G.[density].Pi.(R^3)/c^2

Hence: density = 3c^2/(8.Pi.G.R^2). If the Hubble constant is H = c/R, then

density = 3H^2/(8.Pi.G)

This is the formula for the "critical density", which is about 10 times or so higher than the observed density.
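As a quick numerical check of this formula (a sketch; H = 70 km/s/Mpc is an assumed round value for the Hubble constant):

import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
H = 70e3 / 3.086e22   # 70 km/s/Mpc converted to SI units, s^-1

rho_crit = 3 * H**2 / (8 * math.pi * G)
print(rho_crit)       # ~9e-27 kg/m^3, the 'critical density'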
The point I'm getting at is that existing cosmology, which uses the standard solution of general relativity, implicitly ASSUMES that there is no mechanism for gravity within the universe (i.e., it assumes that the universe is a black hole).

If there is a mechanism for gravity which has any analogy to other forces (vector boson exchange between charges in electroweak theory, for instance), then the universe can't be a black hole, because it can be considered a single mass.

If gravity is due to exchange of gauge boson radiation (spin-2 gravitons in string theory, for example), then the black hole in the middle of the Milky Way is there because gravitons are being exchanged between it and the surrounding matter. This can't happen if the whole universe is the black hole, unless you are going to picture a lot more universes around our own!

Gravitons aren't stopped as light is stopped by a black hole. Gravitons, if there are such things, must be exchanged between all masses, including black holes. Therefore, if you have just a single black hole, it will lose energy by radiating away gravitons without exchange (i.e. without receiving any gravitons back from other masses). So all this speculation is ignorant of energy conservation, not to say the basic premises of quantum gravity.

Nigel Cook | Homepage | 02.09.06 | 11:34 am | #
Non-perturbative spacetime fabric! This is vital for physically understanding QFT results in terms of the Dirac sea, and for understanding how QFT may be unified with general relativity (and the Lorentz transformation etc. of "restricted relativity") by the spacetime fabric: the polarised dielectric of the vacuum around the core of a fundamental particle (be that a string or a loop), which shields the core force.

Unification physically occurs when you knock particles together so hard that they penetrate the polarised dielectric which is shielding most of the electric field from the core, so that the stronger electric field of the core is then involved in the reaction. OK, QCD involves gluons, not photons, as the mediator, but the STRENGTH of the forces becomes equal if you smash the particles together with enough energy to break through the polarised spacetime fabric around a fundamental particle.

Suppose the Standard Model QFT is right, and mass is due to the Higgs field. In that case, the Higgs field particles associating with the core of a fundamental particle give rise to mass, OK? The field which is responsible for associating the Higgs field particles with the mass can be inside or outside the polarised veil of dielectric, right? If the Higgs field particles are inside the polarised veil, the force between the fundamental particle and the mass-creating field particle is very strong, say 137 times Coulomb's law. On the other hand, if the mass-causing Higgs field particles are outside the polarised veil, the force is 137 times less than the strong force. This implies how the 137 factor gets into the distribution of masses of leptons and hadrons.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v. 78, 1997, no. 3, p. 424.
D R Lunsford Says: January 31st, 2006 at 1:04 am

John - even Penrose is ignored. He gives the simplest conceivable argument showing that inflation, to take an example, not only doesn’t fix what it’s supposed to fix, it actually makes it worse*. The alternative for it is to resort to anthropism. Since no one seems to be willing to abandon inflation, we can only assume that anthropism has already taken a seat.
Penrose’s argument can be understood by an undergraduate. So why doesn’t it have a prominent place?
drl
* http://www.princeton.edu/WebMedia/lectures/
Look for Penrose lecture "Fantasy"
science Says: January 31st, 2006 at 5:49 am
String theory has ‘queered the pitch’ for physics ideas. Few people have the time or inclination to study the details of the maths of stringy M-theory. String theory hype says they can ‘predict’ gravity, the Standard Model, and unify everything.
This is equivalent to saying: ‘all alternatives are unnecessary’ or, more clearly, ‘all “alternatives” are crackpot junk that is not science.’
Notice that the first line of defence ignorant people have against criticism of mainstream is to claim there is no alternative. When you point out that you are critical BECAUSE alternatives are being suppressed, they then sneer at the alternatives because they haven’t had the level of funding of mainstream string theory for 2 months, let alone 20 years…
You just need to think clearly about what QFT says, which is that forces are due to exchanges of some kind of bosons between masses. The cosmological constant (CC) attempts to explain the extreme redshift without considering whether the force-causing bosons of gravity are themselves affected by redshift at extreme distances, near the horizon.
CONCLUSION
Scientists have no more control over discoveries of nature than they have over whom they marry. You cannot simply choose whom you marry, because it is a joint decision, depending on the other person. Scientists can present the illusion that untested extra-dimensional, unobservable graviton-‘explaining’ string theories are useful or real, just as they could present the illusion that they are in love with somebody because of a single-sided decision they made. But nature decides if your model is right, so it is not just your decision. Just as ‘single-sided love’ is self-deception, so is theorising, unless you have evidence.
Of course, sometimes there is more than one suggested theory that fits parts of nature. Obviously the best theory is then the one that fits nature best, having the fewest disagreements and the greatest number of agreements. It is quite commonplace throughout history for people to try to defend existing theories as perfect, so as to ridicule ‘alternatives’. Many scientists are double-faced, hypocritical monsters: they ridicule alternatives as unnecessary, while declaring in the media that the existing theory is still incomplete or contains anomalies. This is the political or human edge, controversy:
Copy of a fast series of comments from: http://motls.blogspot.com/2006/02/albionandrgflow.html
Lubos,

Heisenberg's uncertainty says pd = h/(2.Pi), where p is the uncertainty in momentum and d is the uncertainty in distance. This comes from his imaginary gamma ray microscope, and is usually written as a minimum (instead of with "=" as above), since there will be other sources of uncertainty in the measurement process. For light wave momentum p = mc,

pd = (mc)(ct) = Et

where E is the uncertainty in energy (E = mc^2) and t is the uncertainty in time. Hence Et = h/(2.Pi), so t = h/(2.Pi.E), d/c = h/(2.Pi.E), and d = hc/(2.Pi.E). This result is used to show that an 80 GeV W or Z gauge boson will have a range of about 10^-17 m. So it's OK.

Now, E = Fd implies d = hc/(2.Pi.E) = hc/(2.Pi.Fd). Hence

F = hc/(2.Pi.d^2)

This force is 137.036 times higher than Coulomb's law for unit fundamental charges. Notice that in the last sentence I've suddenly gone from thinking of d as an uncertainty in distance to thinking of it as the actual distance between two charges. I don't think this affects the result, because the gauge boson has to go that distance to cause the force anyway. Clearly what's physically happening is that the true force is 137.036 times Coulomb's law, so the real (bare) charge is 137.036 times the observed charge. This is reduced by the correction factor 1/137.036 because most of the charge is screened out by polarised charges in the vacuum around the electron core:

"... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies)." - arxiv hep-th/0510040, p. 71.

I just think there is a wrong attitude in modern physics that simple ideas are crackpot.

Nigel

Nigel Cook | Homepage | 02.17.06 | 10:32 am | #
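The 137.036 ratio claimed in the comment above is straightforward to verify (a sketch; the distance d cancels in the ratio, which is just the reciprocal of the fine-structure constant, 1/alpha):

import math

hbar = 1.0546e-34   # J.s, i.e. h/(2.Pi)
c = 2.998e8         # m/s
e = 1.6022e-19      # C
eps0 = 8.8542e-12   # F/m

d = 1e-15           # any distance will do, m
F_heisenberg = hbar * c / d**2                   # F = hc/(2.Pi.d^2) = hbar.c/d^2
F_coulomb = e**2 / (4 * math.pi * eps0 * d**2)   # Coulomb force, unit charges
print(F_heisenberg / F_coulomb)                  # ~137.04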
Nigel,

Humm... http://www.crank.net/physics.html

Guess who leads the pack? Simple ideas are not synonymous with crackpot physics. Yours are.

Michael Varney | Homepage | 02.17.06 | 11:17 am | #
Varney to the crackpot rescue... I was just talking about you the other day in context with a crank website that is maintained by your old buddy, "nemesis"... hahaha. LTNS, Mikey... I hope that you haven't turned into a string theory crackpot, like Lumo...

island | Homepage | 02.17.06 | 2:07 pm | #
Hi Michael Varney,

Glad you checked the maths. Notice that the guy who runs "Crank Dot Net" is a certain Erik Max Francis: "Mr. Francis, 29, is not a scientist, and has taken only a handful of classes at a community college." - Bonnie Rothman Morris in The New York Times of Dec. 21, 2000; http://en.wikipedia.org/wiki/Tal...i/Talk:Symmetry

Notice that this Erik does not include himself, despite having links to his own pages claiming to have proved Kepler's laws from other stuff which is itself based on Kepler's laws!!! (Hint: crackpot circular argument.) Notice that this Erik does not include Tony Blair for the dodgy dossier theory that Saddam would wipe out Earth in 3 seconds, or Hitler's cranky schemes. If he likes mass murderers and so does not include them in his list of cranks, then I'm glad to be in the list and not favoured by the man. …
Nature Physical Sciences Editor Karl Ziemelis’ 26 November 1996 letter to NC: ‘… a review article on the unification of electricity and gravity… would be unsuitable for publication in Nature.’
Recent discussion with Professor Josephson (see also here, here, and here)
From: Nigel Cook To: Brian Josephson Sent: Tuesday, February 21, 2006 10:54 AM Subject: Re: wheat and chaff
Maxwell had no idea what electricity is (although he did, I found, speculate on the electron), so he assumed it flowed INSTANTLY along wires and capacitor plates. Maxwell is totally bunk, except for one useful equation, which he finally got in 1865 by working backwards from the already-known answer (namely, Weber's 1856 empirical equation for the speed of light, as the square root of the reciprocal of the product of the electric and magnetic force constants):
Maxwell, Treatise..., 1873 ed., Article 610: "One of the chief peculiarities of this treatise is the doctrine which asserts, that the true electric current, I, that on which the electromagnetic phenomena depend, is not the same thing as i, the current of conduction, but... I = i + dD/dt (Equation of True Currents)."
Maxwell is a proven crackpot, not just for his succession of mechanical aethers and his failure to apply vector calculus to "Maxwell's equations" (actually Gauss' equations, Ampere's equations), which Heaviside, without any college education, did, going without credit for doing so. As I've said before, Maxwell lied in his January 1862 paper "On Physical Lines of Force, Part 3". His lie is getting the right speed of light from false working, using an elasticity factor which is wrong. If Maxwell had not fiddled the calculation, he would have got the speed of light wrong by the square root of 2. Maxwell did not make an error: he provably and deliberately fiddled the whole model to get the equation for the speed of light which had been discovered empirically in 1856 by Weber.
It took three years for Maxwell to finally force-fit his ‘displacement current’ theory to take the form which allows it to give the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’ (Source: A.F. Chalmers’ article, ‘Maxwell and the Displacement Current’, Physics Education, vol. 10, 1975, pp. 45-9. Chalmers states that Orwell’s novel 1984 helps to illustrate how the tale was fabricated: ‘… history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’ [8])
James Clerk Maxwell, Treatise on Electricity and Magnetism, 3rd ed., Article 574: "... there is, as yet, no experimental evidence to shew whether the electric current... velocity is great or small as measured in feet per second."
The Maxwell equations are simply rough approximations which omit the mechanism (quantum field theory). The normal "Maxwell" equations are only valid at low energy, and are wrong at high energy, where Gauss's law (the Maxwell equation for div E) shows a stronger effective electron charge.
You have to include the Standard Model to allow for what happens in particle accelerators when particles are fired together at high energy. The physical model above does give a correct interpretation of QFT, and is also used in many good books (including Penrose's Road to Reality). However, as stated [13], the vacuum particles look different to observers in different states of motion, violating the postulate of special/restricted relativity (which is wrong anyway for the twins paradox, i.e., for ignoring all accelerated motions and spacetime curvature). This is why it is a bit heretical. Nevertheless, it is confirmed by Koltick's experiments of 1997, published in PRL.
When Catt's TEM wave is corrected to include the fact that the step has a finite, not zero, rise time, there is electromagnetic radiation emitted sideways. Each conductor emits an inverted mirror image of the electromagnetic radiation pulse of the other, so the conductors swap energy. This is the true mechanism for the "displacement current" effect in Maxwell's equations. The electromagnetic radiation is not seen at a large distance because, when the distance from the transmission line is large compared to the gap between the conductors, there is perfect interference, so no energy is lost by radiation externally from the transmission line. Also, this electromagnetic radiation or "displacement current" is the mechanism of forces in electromagnetism. It shows that Maxwell's theory of light is misplaced, because Maxwell has light propagating in a direction at 90 degrees to the "displacement current". Since light is "displacement current", it goes in the same direction, not at 90 degrees to it.
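A minimal numerical sketch of that interference argument (my own toy model, not Catt's: the two conductors are treated simply as two opposite-polarity sources a gap s apart, each with amplitude falling as 1/distance, ignoring retardation). The superposed field at range r is then proportional to 1/r - 1/(r + s), and its ratio to the field of a single conductor alone is s/(r + s), which vanishes as r becomes large compared with the gap:

# Toy superposition of two opposite-polarity conductors separated by gap s.
s = 0.001  # assumed conductor gap, metres (illustrative value)
for r in [0.001, 0.01, 0.1, 1.0, 10.0]:
    single = 1.0 / r                    # field amplitude of one conductor alone
    residual = 1.0 / r - 1.0 / (r + s)  # superposed (cancelling) amplitude
    print(f"r = {r:6.3f} m: residual/single = {residual / single:.1e}")

The ratio is 0.5 when r equals the gap, but only about 1e-4 at r = 10 m, showing how the external radiation disappears at distances large compared with the conductor spacing.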
The minimal SUSY Standard Model shows the electromagnetic force coupling increasing from an alpha of 1/137 to an alpha of 1/25 at 10^16 GeV, and the strong force falling from 1 to 1/25 at the same energy, hence unification. The reason why the unification superforce strength is not 137 times electromagnetism, but only 137/25 or about 5.5 times electromagnetism, is heuristically explicable in terms of the potential energy of the various force gauge bosons.
If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain, and allow predictions to be made of, the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you may learn that SUSY just isn't needed or is plain wrong, or else you will get a better grip on what is real and make some testable predictions as a result.
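To illustrate the kind of logarithmic running involved (a sketch under assumed endpoint values, not the actual MSSM beta functions): suppose 1/alpha for electromagnetism falls linearly in the logarithm of energy, from 137 near the electron-mass scale to 25 at 10^16 GeV. The coupling at intermediate energies then follows immediately:

import math

# Assumed endpoints for illustration only:
E0, inv_alpha0 = 0.511e-3, 137.0  # electron-mass scale (GeV) and 1/alpha there
E1, inv_alpha1 = 1e16, 25.0       # assumed unification scale (GeV) and 1/alpha there

slope = (inv_alpha1 - inv_alpha0) / math.log(E1 / E0)  # change in 1/alpha per e-fold

def inv_alpha(E):
    # Crude one-loop-style running: 1/alpha taken linear in ln(E).
    return inv_alpha0 + slope * math.log(E / E0)

for E in [0.511e-3, 91.0, 1e16]:  # electron mass, Z mass, unification scale
    print(f"E = {E:9.3e} GeV: 1/alpha ~ {inv_alpha(E):6.1f}")

This crude interpolation only shows the shape of the energy dependence; the measured value near the Z mass is about 1/128, so the real running is not a single straight line but a sum of contributions from different particle species with mass thresholds.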
I frankly think there is something wrong with the depiction of the variation of weak force strength with energy shown in Figure 66 of Lisa Randall's "Warped Passages". The weak strength is normally extremely low (an alpha of ~10^-10), say for beta decay of a neutron into a proton plus electron and antineutrino. This force coupling factor is given by (π^2)hM^4/(Tc^2 m^5), where h is Planck's constant from Planck's energy equation E = hf, M is the mass of the proton, T is the effective energy-release 'life' of the radioactive decay (i.e. the familiar half-life multiplied by 1/ln 2 = 1.44), c is the velocity of light, and m is the mass of an electron.
The diagram seems to indicate that at low energy the weak force is stronger than electromagnetism, which seems to be in error. The conventional QFT treatments show that electroweak forces increase as a weak logarithmic function of energy. See arXiv hep-th/0510040, p. 70.
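For concreteness, the quoted coupling expression can be evaluated directly for neutron beta decay (my own numerical check with standard SI constants; the neutron half-life of roughly 611 s is an assumed input, and T is taken as the half-life times 1/ln 2 as defined above). With these inputs the expression comes out near 10^-9, within an order of magnitude of the 'extremely low' alpha quoted above; the exact value depends on the conventions assumed:

import math

h = 6.62607e-34   # Planck's constant, J s
c = 2.99792458e8  # speed of light, m/s
M = 1.67262e-27   # proton mass, kg
m = 9.10938e-31   # electron mass, kg
t_half = 611.0    # assumed neutron half-life, s
T = t_half / math.log(2)  # effective life = half-life * 1/ln 2 ~ 1.44 * half-life

# The quoted dimensionless weak coupling factor: pi^2 h M^4 / (T c^2 m^5)
alpha_weak = (math.pi ** 2) * h * M ** 4 / (T * c ** 2 * m ** 5)
print(f"{alpha_weak:.1e}")  # ~1.0e-9 with these inputs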
http://electrogravity.blogspot.com/
----- Original Message -----
From: "Nigel Cook" <nigelbryancook@hotmail.com>
To: "Brian Josephson" <bdj10@cam.ac.uk>
Sent: Tuesday, February 21, 2006 10:39 AM
Subject: Re: wheat and chaff

> Maxwell never wrote: "I'm so thick I can't imagine electricity goes at light speed, any more than I can express my 20 differential equations as 4 vector calculus equations."
>
> That doesn't mean he didn't make implicit assumptions which were false.
>
> ----- Original Message -----
> From: "Brian Josephson" <bdj10@cam.ac.uk>
> To: "Nigel Cook" <nigelbryancook@hotmail.com>
> Sent: Sunday, February 19, 2006 1:31 PM
> Subject: Re: wheat and chaff
>
> > I see no assumption that electricity goes at infinite speed there ...
> >
> > bdj
> >
> > On 19 February 2006 12:07:19 +0000 Nigel Cook <nigelbryancook@hotmail.com> wrote:
> >
> > > James Clerk Maxwell, Treatise on Electricity and Magnetism, 3rd ed., Article 574:
> > > "... there is, as yet, no experimental evidence to shew whether the electric current... velocity is great or small as measured in feet per second."
> > >
> > > James Clerk Maxwell, Treatise on Electricity and Magnetism, 3rd ed., Article 769:
> > > "... we may define the ratio of the electric units to be a velocity... this velocity [of light, because light was the only thing Maxwell then knew of which had a similar speed, due to his admitted ignorance of the speed of electricity!] is about 300,000 kilometres per second."
> > >
> > > ----- Original Message -----
> > > From: "Brian Josephson" <bdj10@cam.ac.uk>
> > > To: "Nigel Cook" <nigelbryancook@hotmail.com>
> > > Sent: Saturday, February 18, 2006 10:30 PM
> > > Subject: Re: wheat and chaff
> > >
> > >> On 18 February 2006 22:13:52 +0000 Nigel Cook <nigelbryancook@hotmail.com> wrote:
> > >>
> > >> > Maxwell assumed energy flows in both wires and capacitor plates at
> > >> > infinite speed, and that only light goes at light speed.
> > >>
> > >> !!???!! Really? You have a source for this?
> > >>
> > >> =b=
> > >>
> > >> * * * * * * * Prof. Brian D. Josephson :::::::: bdj10@cam.ac.uk
> > >> * Mind-Matter * Cavendish Lab., JJ Thomson Ave, Cambridge CB3 0HE,
> > >> U.K. * Unification * voice: +44(0)1223 337260 fax: +44(0)1223
> > >> 337356 * Project * WWW: http://www.tcm.phy.cam.ac.uk/~bdj10
> > >> * * * * * * *
> >
> > * * * * * * * Prof. Brian D. Josephson :::::::: bdj10@cam.ac.uk
> > * Mind-Matter * Cavendish Lab., JJ Thomson Ave, Cambridge CB3 0HE, U.K.
> > * Unification * voice: +44(0)1223 337260 fax: +44(0)1223 337356
> > * Project * WWW: http://www.tcm.phy.cam.ac.uk/~bdj10
> > * * * * * * *
I’m no longer doing this for love of science or for pride, but out of annoyance at ‘who’s funding you’-type political and ignorant censorship. I’ve had this club-mainstream message since 1996: preaching their holy opinions, while ignoring facts as if they were personal speculations. This is not just a problem with Brian ‘ESP-string theory’ Josephson, but extends to ignorant ‘critics’ of the mainstream like my former friend Ivor Catt, who are religiously bigoted against quantum mechanics and relativity.
Ivor Catt has always ignored all scientific comments from me or others (please correct me if necessary) concerning his own work; he immediately takes any criticism which is constructive as a personal insult. He now sends me regular insulting and frankly inaccurate emails and refuses to respond to corrections. If the criticism is toned down, he simply ignores it, or tries to swamp it with vague and non-scientific political-style prejudices he has against all facts which he (often inaccurately) considers contrary to his inaccurate window-dressing model of electricity.
Catt’s complaint of suppression is not totally wrong. Unfortunately he is now in the state (of paranoia, or however you should describe it) where he considers all discussion a personal criticism, and considers his suppression to be a conspiracy of paranoid ignorance, which is only partially true. People are suppressing him mainly because they do not understand him, but there is also a lot of bigotry, and of course ignorance on the censors’ part should not be an excuse to censor people. (Many great advances have been suppressed for a long time for this reason; would you have suppressed general relativity in 1915 because it was hard for you personally to understand? Or would you have accepted it because the name of the author was Albert Einstein or David Hilbert? Must all abstract papers come from people who are already famous in order to merit serious attention?)
I think his basic scientific results deserve attention, and they have not really been suppressed anyway (being published in IEEE Transactions on Electronic Computers, EC-16, 1967, numerous issues of Wireless World and its descendant Electronics World, and also in IEE Proceedings in 1984 and 1987). Catt is complaining not because he does not have articles published, but because he claims nobody builds on them or sees their implications for physics! Yet when I did precisely that, he claimed that any reconciliation between his experimental work in electromagnetic crosstalk (mutual inductance) and quantum field theory would ‘contaminate’ his work with ‘mathematical garbage’.
I did my best to educate Ivor Catt on the importance of radioactivity, particle and nuclear physics for understanding the deep problems in physics, and on the role of mathematics in the symmetry laws which govern interactions, but he was not interested. From the various comments of Ivor Catt on modern physics (some of which are on video), he understands absolutely none of the quantum field theory and relativity facts and has no interest in listening to them carefully; his knowledge is similar to that dished out by John Gribbin in equationless books about the weirdest interpretations of the Aspect entanglement experiments (alleged evidence for multiple universes, etc.). Sadly, Catt thinks he is superior to quantum field theory and general relativity because he has never seen any of it justified by energy conservation and other theoretical arguments, let alone by experimental facts which have no other meaningful explanation.
The true facts of modern physics are not 10/11 dimensions, unobserved gravitons, an unobserved superpartner for each observed particle, dark energy, dark matter, many universes, baby universes, strings, etc. The real facts are the mathematical models that have been experimentally validated, and those facts are restricted to the domain in which each model is valid and unique. If two or more mathematical models can give rise to the same predictions, then experimental confirmation of one of the models is obviously not evidence that it is right, and the scientist should proceed with great caution and without making misleading and vacuous claims. It is tragic that physics is in such a mess.
Exact statement of Heisenberg's uncertainty principle
I've received some email requests for a clarification of the exact statement of Heisenberg's uncertainty principle. If the uncertainty in distance can occur in two different directions, then the uncertainty is only half of what it would be if it could occur in only one direction. If x is the uncertainty in distance and p is the uncertainty in momentum, then xp is at least h-bar, providing that x is always positive. If the distance can be negative as well as positive, then the product is at least half of h-bar. The uncertainty principle takes on different forms depending on the situation under consideration.
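For reference, the textbook statement in standard notation (my own summary, which matches the factor-of-two distinction drawn above): with the uncertainties defined as standard deviations, so that deviations of either sign about the mean count, the exact inequality is

\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}

whereas the rougher order-of-magnitude form \Delta x \, \Delta p \gtrsim \hbar corresponds to taking the position spread as one-sided (positive only).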
Obsolete 21 June 2006 version of this page (contains links at end to an archive of previous versions): http://feynman137.tripod.com/draft.htm
ARCHIVES OF OLD MATERIAL DELETED FROM THIS MAIN PAGE ON 15 February 2006: http://members.lycos.co.uk/nigelbryancook/archive2.htm
ARCHIVES OF OLD MATERIAL DELETED FROM THIS MAIN PAGE ON 9 December 2005: http://members.lycos.co.uk/nigelbryancook/archive.htm
FURTHER MATERIAL: http://members.lycos.co.uk/nigelbryancook/discussion.htm
DISCUSSION BLOGS: http://electrogravity.blogspot.com/, http://lqg.blogspot.com/, http://glasstone.blogspot.com/, http://nige.wordpress.com/