There are two symbiotic strands to my research. The first involves testing MiHsC against all the data I can find, and I've just submitted a paper testing MiHsC successfully against Milky Way dwarf galaxies: low-acceleration systems ideal for testing MiHsC (summary) and ones in which the dark matter hypothesis becomes ridiculous (it has to be concentrated too much to be consistent).

The other strand is trying to understand MiHsC in a deeper way. This strand involves a lot of visualisation and maths. Often the maths, in its logical, plodding manner, can take me further than visualisation, and only later do the pictures catch up. Some maths I've been playing with gets me close to familiar formulae, such as gravity, in oddly two-dimensional ways. I've just derived something close enough to MiHsC to be plausible using information theory. I do not want to say too much because my paper is in review, but I can show that if you assume that it is not mass-energy that is conserved, but Energy plus Mass plus horizon-Information (EMI), then you get MiHsC. In the diagram below, when a spaceship accelerates, the Rindler horizon it sees comes closer and shrinks. This deletes information stored on the horizon, which appears as inertia in the MiHsC framework:
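Here is a rough numerical sketch of that geometry (an illustration only, not the derivation in the paper under review): assume the horizon sits at a distance d = c^2/a behind the ship and, as comes up in the comments below, one bit per Planck area. Then higher acceleration means a closer, smaller horizon, and fewer stored bits:

```python
import math

# Illustrative sketch only: Rindler horizon at d = c^2/a, with the
# assumption (discussed in the comments) of one bit per Planck area.
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s
planck_area = hbar * G / c**3   # ~2.6e-70 m^2

def horizon_distance(a):
    """Distance to the Rindler horizon for proper acceleration a."""
    return c**2 / a

def horizon_bits(a):
    """Bits on a sphere of horizon radius, at one bit per Planck area."""
    d = horizon_distance(a)
    return 4 * math.pi * d**2 / planck_area

# Higher acceleration -> closer, smaller horizon -> fewer stored bits.
for a in (9.8, 1e15):
    print(f"a = {a:.1e} m/s^2: horizon at {horizon_distance(a):.2e} m, "
          f"~{horizon_bits(a):.2e} bits")
```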
To those wondering where all the energy needed for the new effects predicted by MiHsC comes from: my original way of explaining it (which is still valid) is to say that information horizons create gradients in the usually uniform zero point field / Unruh radiation, from which new energy can be extracted. The second (equally valid) way of explaining it is that mass-energy appears from the destruction of horizon-information. Understanding leads to control, and this is a potential new energy source.
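A back-of-the-envelope version of this bookkeeping, combining two standard results (this is not the derivation in the paper under review): each bit deleted at the Unruh temperature T_U releases the Landauer energy k_B T_U ln 2, so

```latex
\begin{equation}
  T_U = \frac{\hbar a}{2\pi c k_B}, \qquad
  E_{\mathrm{bit}} = k_B T_U \ln 2 = \frac{\hbar a \ln 2}{2\pi c}.
\end{equation}
```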
16 comments:
Yesterday I received an interesting email from Robert Ludwick, so with his permission I reproduce it here:
"Hi Dr. McCulloch,
Please excuse my use of email instead of replying on the blog in the normal fashion, as I have no ‘identity’ acceptable to the reply filter. So, as usual, I’ll just use my name.
Your theory has at least one handicap in its battle against dark matter/dark energy. And it is enormous. Literally.
The theory of dark matter/dark energy practically DEMANDS a multi-billion dollar, multi-year, multi-PhD, multi-minion research effort to DETECT said dark matter/dark energy.
Your theory demands that competent persons compare the observed data with the mathematics of MiHsC and determine whether MiHsC explains the data better than 'settled theory', whether it accurately predicts additional observations that conflict with accepted theory, or whether it contains flaws that produce predictions contradicting other observations better explained by the settled science (which was promoted from theory to axiom years ago). No expensive new hardware required.
The handicap, plus the fact that the competing, accepted science is infinitely adjustable and therefore non-refutable, will almost certainly prove fatal to your theory.
Unfortunately.
Bob Ludwick"
Find an expensive experiment for MiHsC? :)
Maybe I should emphasise that MiHsC is a potential new source of energy, get the oil companies mad at me, and then reap the sympathy vote? :)
For the hard-of-thinking, can you use this approach to explain the EM-drive results?
As for the multi-billion-dollar thing, it does indeed seem that dark matter would be a lot less popular without the LHC, which holds out the possibility of finding Something Odd which the DM theorists can latch onto. If LHC reaches its design energy without finding Something Odd - and indeed potentially excluding a number of the simpler DM candidates - we may see a wave of restructuring pass through the theoretical community. Or if it does find Something Odd, at least we'll have something to navigate by.
Heh. What should we be looking for in the LHC data to validate MiHsC? :)
AdamW: Well, I'm a great fan of CERN. The experimentalists there are doing a fantastic and unique job by probing a new high-acceleration/energy regime. Usually MiHsC shows itself at very low accelerations, but I published a nice CERN test for MiHsC a few years ago (see EPL, 90, 29001, 2010). The idea is that the LHC can accelerate objects so fast that the Unruh radiation they see, which usually has a wavelength of light years, becomes short enough in wavelength to be interfered with by our technology (eg: very long radio waves). Can we modify a particle's inertia electromagnetically? I gave a talk on that up in St Andrews a while ago.
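For reference, the wavelength scale here follows from combining the Unruh temperature with Wien's displacement law (standard results; the factor of roughly 8 below is 4π²/x with Wien's constant x ≈ 4.965):

```latex
\begin{equation}
  T_U = \frac{\hbar a}{2\pi c k_B}, \qquad
  \lambda_{\mathrm{peak}} = \frac{2\pi\hbar c}{x\, k_B T_U}
                          = \frac{4\pi^2 c^2}{x\, a}
                          \approx \frac{8c^2}{a}.
\end{equation}
```

At everyday accelerations (a ≈ 9.8 m/s^2) this gives a wavelength of order 10^16 m, i.e. several light years, as stated above.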
Would the Unruh radiation be easily distinguishable from the normal synchrotron radiation? Is there something you could do at Diamond?
I forgot about Diamond... Let's see: the Diamond Light Source has electrons travelling close to c in a ring of 89 m radius, so (forgetting about the corners for now) they accelerate at about 10^15 m/s^2, and the Unruh waves they see should shorten to a tractable 720 m or so, quite distinct from the synchrotron radiation. These waves will only be seen by the electrons, not by us, but if we take account of relativity and apply radio waves of the right wavelength, so that in the electrons' frame of reference they are 720 m long, then we could damp the electromagnetic component of the electrons' Unruh waves and so change their inertia. This will have to be calculated for the vertices of the polygon: the acceleration will be higher there.
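A quick sanity check of those numbers (my own check, using the λ ≈ 8c²/a peak wavelength quoted above; note that for a ring of radius r at v ≈ c this reduces to λ ≈ 8r):

```python
# Check the Diamond Light Source figures quoted in the comment above.
c = 2.998e8      # speed of light, m/s
r = 89.0         # storage-ring radius, m (from the comment)

a = c**2 / r     # centripetal acceleration for electrons at v ~ c
lam = 8 * c**2 / a   # Unruh peak wavelength; algebraically lam = 8*r

print(f"a      ~ {a:.2e} m/s^2")   # ~1.0e15 m/s^2, as stated
print(f"lambda ~ {lam:.0f} m")     # ~712 m, i.e. the 'about 720 m' above
```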
I would be very, VERY careful about pursuing the "free energy" publicity and funding route. There have been far, FAR too many frauds and charlatans on that path. Jim Woodward's method with his "Mach Effect Thrusters" is a much better way to build quiet credibility: make limited predictions, no promises, adhere strictly to the scientific method, and open the books at all times so that anyone can examine your procedures and database. And do as much as possible in test-stand hardware; show tangible results, not the kind of mathematized navel-gazing metaphysical pseudo-philosophy that produced dark energy and dark matter in the first place.
Hi Duane. Good point. The road to truth is not gold-paved, but the view is better.
Wouldn't the proposed "information deletion" violate the quantum no-deleting theorem?
Before answering any "deletion" arguments, I think one should first make the notion of "information storage" more precise. Ideally, this appears to require at least some binary medium. In such a medium, any deletion/erasure act would invoke Landauer's principle on irreversibility, thus raising the entropy of the medium. But there are trickier ways to store information in analog media, via Fourier encodings. So the exact storage mechanism here does matter.
I've assumed that one bit is one Planck area, since that is the smallest detectable area (digital model). I think that's sensible, but feel free to comment.
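(For concreteness, the Planck area being assumed here is the standard one:)

```latex
\begin{equation}
  l_P^2 = \frac{\hbar G}{c^3} \approx 2.6 \times 10^{-70}\ \mathrm{m}^2.
\end{equation}
```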
WRT one bit per Planck area: is this right? Wouldn't quantum effects allow more bits per unit area?
Stuart: Good comment. I'll have to look at that, since I always get an unwanted factor when I derive MiHsC from information.
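(A possibly relevant standard result: the Bekenstein-Hawking entropy assigns one bit not per Planck area but per 4 ln 2 Planck areas, which may be the unwanted factor in question:)

```latex
\begin{equation}
  S = \frac{k_B A}{4 l_P^2}
  \quad\Rightarrow\quad
  N_{\mathrm{bits}} = \frac{S}{k_B \ln 2} = \frac{A}{4 \ln 2\, l_P^2}.
\end{equation}
```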
Allow me to add that the "single bit" proposal has already been put forth by Zeilinger and some followers in connection with the "ur"-objects (primordial building blocks), which should have a minimal information capacity. The same objection raised by Stuart of course applies to the case of the simplest particles as 1-bit objects. After all, there is no way to know the lower bound necessary for nature to start up, and the only way to find it would be an exact match with a generic automaton, or network of automata, that perfectly mimics actual phenomena. This brings up the totally "infamous" Zuse-Fredkin-Wolfram Conjecture, which is nothing but a rather contrived version of the old Church-Turing Thesis. (Infamous because after 2000 people associated it with "simulationist" arguments like Bostrom's.) Too much water has run over this mill from the 60s onwards without any clear result, but there were some very interesting attempts that have now been largely forgotten. For instance, who remembers nowadays the work by Pierre Noyes on the combinatorial hierarchy?
https://en.wikipedia.org/wiki/Combinatorial_hierarchy
https://en.wikipedia.org/wiki/Bit-string_physics
Even worse, there are things that have been swept under the carpet like the isomorphism of certain dualities and concurrencies with the algebraic structure of QM, first shown by Vaughan Pratt with the aid of some rather deep categorical mathematics in "Chu Spaces".
http://chu.stanford.edu/guide.html
So the quest continues, but the true reason for my previous comment is the existence of analog encodings, which are just as good as "binary" ones and hence put into trouble all this talk about an absolute distinction between "discrete" and "continuous". In fact, what I find unnerving is how much (bad) engineering (say "Gates" or just "It's-Better-Manually") may become an evil influence on physicists. Once I came across a blog article by a "stringy" theorist who was beating down two guys by throwing in their faces a 1961 paper by Landauer, claiming that his principle should be promoted to an axiom of nature. I could not conceive of a bigger misdirection, given that there are already various methods for reversible computing (see the sketch after this comment). But that's not the worst of it. Recently they managed to do computing with just "schmutzdecke"!
https://www.elsevier.com/about/press-releases/research-and-journals/computing-with-slime
So, the one thing I would need to know is the encoding, the basis of everything else.
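To make the reversible-computing remark above concrete, here is a minimal sketch (illustrative only): the Fredkin (controlled-swap) gate is universal for Boolean logic yet bijective on its inputs, so no bits are erased and no Landauer cost is forced.

```python
# The Fredkin gate: swap a and b iff control bit c is 1. It is its own
# inverse, so the mapping on 3-bit states is a bijection -- nothing erased.
def fredkin(c, a, b):
    """Controlled swap: returns (c, b, a) when c == 1, else (c, a, b)."""
    return (c, b, a) if c else (c, a, b)

for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    once = fredkin(*bits)
    assert fredkin(*once) == bits  # applying it twice recovers the input
    print(bits, "->", once)
```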