You can download a copy of this essay for free here, or the even more thorough PowerPoint presentation here, to help spread the word. Additional links for downloading related articles are at the bottom of the essay.
The Core Essay: Rethinking Physics, Metaphysics, Material Philosophy, and Cosmology
by Chris Freely
The nature of the universe is something that I have studied all my life. In the course of my studies I came up with new ways of looking at the way things operate. In high school I came up with a set of principles that had to be true about matter as we thought we understood it through quantum physics. This theory I called field theory.
It was my strongest opinion that matter had to be conscious. This was implied in the quantum physics worldview that is still the only version of reality most people know. The reason was that charge could not operate as a condition in a vacuum. Hypothetically, protons and electrons had to "know" they had entered the field of a particle of the opposite polarity in order to react to the change in conditions.
What this meant, when I created this idea, was that matter operated according to a set of rules. Those rules were based on interaction with fields that obviously had to surround particles in order for these properties to exist the way we understood them. Those properties are, of course, the four fundamental forces: electromagnetism, gravity, the weak force, and the strong force. Up until I read Mark McCutcheon's The Final Theory, these old ideas I came up with were my basic philosophy of how matter operated in the quantum physics paradigm.
In my old theory a conscious matter particle would enter a field, upon which a transformation would be induced in its operation, a process I termed "boundary field conditions change". When boundary field conditions change, the electron enters the sphere of influence of the proton and its actions change. It does this because that is what it knows; that is its nature. I used this theory to label the processes I was learning in chemistry, biology, and physics later on, until I understood how to identify what was going on in any particular chemical or physical reaction.
Another element I created covered what happens when boundary fields separate, as when an electron leaves an orbit, for example. The resulting process I called "boundary field separation", and its result was that the electron returned to its prior state of not being attracted: where it had been pulled and distorted in the direction of the polarity when it entered the field of a positively charged nucleus, it would no longer be pulled and distorted. I assumed the electron had something like a desire, and that it had some sort of will, since it did all these things. The same with any other particle. It never occurred to me to examine things past this level and think of a physical mechanism to explain the behavior of particles. Perhaps it would have in time, but by luck I was led to Mark McCutcheon's physical theory that fuses the four forces of nature.
Now, of course, with Mark McCutcheon's Final Theory all the rules changed, and the concept of my old field theory must be discarded. However, a new problem emerges that is just as perplexing, which I began to address in my Cosmogenesis Notes, posted under that title on this blog, which anyone can now download and read (if you can get past the somewhat unfinished nature of this business of coming up with new theories of everything, which is very much evident in my notes). The new problem in the new theory, in terms of explanation, is this: how does the electron, which I refer to in the Cosmogenesis Notes as the quantum electron, know how to interact with other electrons? Where does it get its properties from? More properly, these questions must be stated in the form: what is the nature of matter, where does it come from, how does it interact, and in what capacity does it do what it does?
For those of you who haven't read Mark's book, the summary of what he discovered and puts forward in this regard is that the electron is the fundamental particle of the universe. Everything is made of electrons. And what creates the four forces of nature is the expansion of these electrons into the space around them. So if you take this theory to be true, several questions arise immediately. What is the electron, precisely?
At the most basic level of our observation, the electron is a three-dimensional sphere that expands. How does this expansion work? Mark brought up the idea that the electron expands into the space around it based on an idea he called primordial time, which existed in a dimension that supports the laws of the expanding electron. I believe he even postulated that this could all be a Matrix-like computer program rather than what I would call a field universe (one built on transdimensional but natural fields, insofar as "natural" can be defined as not being of intelligent design). From my general analysis, which is in the Cosmogenesis Notes PDF, I concluded that the electron had to expand in such a way that accounted for its properties of mass. Because the electron does not change its expansion rate in our physical universe, as we can readily see from the properties of matter around us, I decided that the expansion had to be an accelerated expansion, meaning that the equation governing it must take the form of a doubling in electron size every exact interval of time, which I called the universal time interval, or uT for short.
This uT concept retained time as a basic property of the universe not related to the expansion of the electron. Mark's concept is that all of what we experience as time is not real except for the expansion of the electron, and as such he invented the concept of primordial time to try to explain how the electron was expanding as a serial sequence of events from outside the physical universe, in a sort of cosmic law space that built the system we live in. Now, of course, there are plenty of pitfalls here for people to pick holes in either of our concepts. I essentially decided I wasn't going to accept the idea of time as being completely unreal except for the electron's expansion. This gets into a nasty debate about the nature of consciousness, which is what we use to measure time, and the true nature of consciousness gets us right back to the old chicken-and-egg question: what came first, the universe or consciousness?
What prevented me from accepting the idea of a primordial time directly was simple geometric thinking. A sphere expanding at a steady rate will ALWAYS see a very rapid decline in the total amount of force it is exerting, as far as I could tell in my own thought experiment when thinking about the electron and how fast it must be expanding. The result would be a rapid and precipitous decline in the expansion pressure of the electron, which of course cannot be occurring. This assumes, of course, that the electron can truly be said to have any true volume at all, which is another point of debate! (I promise you this is just the beginning of these little issues.)
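To put that thought experiment in rough mathematical terms (my own sketch, not anything from Mark's book): if the radius grew at a constant rate k, so that r(t) = r0 + kt, then the fractional growth rate of the volume would be (1/V)(dV/dt) = 3k/(r0 + kt), which falls toward zero as time passes. Under the doubling law described below, V(t) = V0 X 2^(t/uT), the fractional growth rate is (1/V)(dV/dt) = ln(2)/uT, a constant, which is what a steady expansion pressure seems to require.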
We should return now to our analogy of a 3d sphere, with some sort of 3d volume at least relative to other electrons in its universe, expanding in 3d space. We know that the expansion rate must be pretty high if we use our mathematical models to account for the force that keeps the nucleus together. This means that the hypothetical volume of these spheres must be increasing at a really high rate. What's more, every time the volume increases it must keep increasing at an increasing rate to maintain the same pressure exerted. This last point is the critical one, and the reason I settled on an equation where the volume of the electron increases at a steadily accelerated rate: V = V0 X 2^(n(uT)), where V is the electron's volume, V0 its initial volume, and n(uT) the number of uT intervals elapsed.
This universal time interval could hypothetically be any amount. If the electron doubled its volume every 5 seconds, it would be 5 seconds. If the electron doubled its volume every 0.5 seconds, it would be 0.5 seconds. This doubling rate determines the properties of everything in the universe.
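As a minimal illustration of this doubling law in code (the initial volume and the uT value here are arbitrary placeholders, since nothing above pins down their real magnitudes):

import math

V0 = 1.0   # initial electron volume, arbitrary units (hypothetical)
uT = 0.5   # universal time interval in seconds (hypothetical)

def electron_volume(t):
    """Volume after t seconds: one doubling per elapsed uT interval."""
    return V0 * 2 ** (t / uT)

for t in [0, 0.5, 1.0, 1.5, 2.0]:
    print(t, electron_volume(t))   # prints 1.0, 2.0, 4.0, 8.0, 16.0

# The fractional growth rate (dV/dt)/V stays constant at ln(2)/uT per second:
print(math.log(2) / uT)

Whatever uT actually is, the point is that the curve never flattens: each interval adds as much volume, proportionally, as the one before.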
Now come the nasty philosophical questions. What causes an accelerated expansion naturally, and why? The only means we know of doing this is to create a computer algorithm. What is it, exactly, that is expanding? The answer seems to be vibration in the form of a 3d sphere expanding from a hypothetical point at its center. So then the electron is vibration. Vibration in what, though? This brings up the idea of space as ether in which these vibrations occur. Ether is of course the old concept of metaphysicians and occultists describing an invisible fluid, and it was considered possibly real by physicists until Einstein's concepts became widely accepted. But this brings up more questions (see, these little issues keep getting more common and bigger, just like I said). Is this ether a 3rd or a 4th dimensional fluid? Before I get to ether there are several more concepts that need considering.
In the physical universe (at least), mass is the resistant property of accelerating matter, and energy is a transferable property of accelerating matter. Energy, at a basic level, is vibration. Physical mass always has energy because it always has a relative motion in 3d space relative to all other mass. Why does energy move mass? Because mass is resistant to energy. This is a vital point in understanding the way in which electrons, and all objects made of electrons, operate, and also in considering what is actually happening in the relationship between matter, mass, energy, and motion. In analysis, mass is separate from energy and transforms it through resistance; however, the two are linked together 100% in the physical universe through the expanding electron.
The radical hypothesis presented here is that the accelerating expansion of the electron is what creates mass and energy as emergent properties of accelerated expansion itself. This property, in this holistic concept of mass, is the reason that vector motion, the motion of atoms and objects, is transferred upon impact between objects and, by extension, electrons. The conservation of vector motion is a major problem in thinking when considering expansion theory and acceleration. Why is motion outside the electron conserved relative to the tremendous internal accelerated expansion (or is it?)? Must we include additional properties of the electron/matter, and definitions like force, power, or even substance, to account for this? More thought experiments are required here.
Returning to the idea of a 4th dimensional ether, or 4th dimensional etheric field: it was a concept I considered due to the nature of the universe and its properties. Mark, in his book, describes the possibility of changes in the expansion rate of the electron, which would change the properties of subatomic particles, atoms, gravity, etc. The idea that the ether was what the electron was expanding into had to be considered. I imagine the ether as a 4th dimensional field with properties something like a fluid, though what its properties might be is highly speculative at this point, beyond the fact that they are possible to hypothetically imagine. Obviously the existence of a 4th dimensional ether encourages us to think of a 5th dimensional field, and on up.
Now the properties of the electron's expansion into physical space, as the only physical particle and that of which all physical objects are made, could be considered in light of something else that could hypothetically modulate the rate of this expansion. The property I came up with first I called etheric density. My hypothesis is that expansion is hypothetically infinite, and that the bounds of this infinite expansion should be modifiable based on the properties of whatever substance or field the expansion occurs in. I did not go as far as to think that space itself was the ether; however, this is another hypothesis that may be considered. Yet another hypothesis to consider is that the ether, and any higher dimensional field beyond it, may in fact be expanding itself. This provides an interesting take on the concept of etheric density, as etheric expansion would create etheric density.
This leads us to traditional metaphysical concepts of the levels of creation. An early attempt to synergize these concepts is available in the Thought, Idea, Mind, and Ego file, downloadable from this website in the post entitled the same, and in Biology and Consciousness Field Theory (a process-level analysis of reality entirely my own), also downloadable from the post by that title. The concepts in Thought, Idea, Mind, and Ego were influenced by metaphysical writings, in particular John Gordon's Egypt: Child of Atlantis (though I don't believe in the existence of Atlantis), which described the Egyptian systemology of the hierarchy of consciousness very well, and by meditations on my teacher's ideas concerning our nature as human beings. Any serious student of "new age" occult thought, or of eastern traditions of spirituality, knows about the chakras and the hypothetical planes of being, and this is where we return to Mark's theory to attempt to create a physical-metaphysical synthesis.
The concept of the etheric as a 4th dimensional field into which physical electrons expand, which itself possibly has an expansion rate, means that the spiritual concept of the etheric body has a new conceptual means by which it may be conceived in this model. The hypothetical idea involves the concept of the ether as something akin to substance. If this ether surrounds matter, then as matter expands it may create a wake, like a ship passing through the ocean, ahead of its expansion in the etheric. This wake is a holistic impression on the etheric substance of the 4th dimensional field surrounding the 3rd dimensional plane.
Similarly, we must consider the idea that if the etheric is expanding itself, it must also create a wake in the 5th dimensional field surrounding it! In New Age thought and other metaphysical schools this 5th dimensional level is called the astral. From there, the same or similar explanations can be used to create a working model of all the higher planes within the self, a complete model of the human being in transdimensional space, and of course a generally complete model of any other being or phenomenon in the cosmos. The levels of consciousness currently listed by New Age thinkers in their descriptions of higher planes are, in order of complexity and possibly dimensionality: physical, etheric, astral, mental, buddhic, atmic, and adi. The concepts in John Gordon's book were more descriptive in terms of planes: physical, astral, mental, plane of the spiritual soul, spiritual, semidivine, and divine.
Attempting to isolate the planar constituents of experience in terms of their actual derived processes is why I started working on the Thought, Idea, Mind, Ego essay. The most recent version of this idea map is in that very post and is available for download.
If you read the whole mess of ideas you will get a general picture of what the universal meta-analysis looks like. You will see the complexity of the world we live in when attempting to define levels of experience (especially intellectually). You will also note that if you continue the analysis logically, the number of hypothetical explanations for processes and how they operate should diminish considerably, because the number of valid explanations becomes limited to the constraints of this new model, which is based on evidence and observation of our reality. Spiritual questions become constrained when considering plausible hypotheses of eternal soul development. With higher planar systemology, however, the limits of what is possible become clearer (at least to me).
Somewhere along the way from simple expanding electrons in a physical universe constrained by a simple hypothetically and mathematically derived 4th dimensional field space we have arrived at a place where deeper questions of spiritual reality become valid again outside the scientific mainstream and outside the purely materialistic school of thought. There are additional questions of the nature of the constraints upon matter and expansion of the electron itself. I will elaborate upon this by beginning an analysis of Mark's concepts concerning the limits of physical properties of the expanding electron in the universe.
Mark's contention is that all electrons must have the same expansion rate in the universe, because if they did not we would soon be looking at electrons the size of universes. There are some hypothetical issues with this idea, though it does seem scientifically valid (I tend to agree with Mark). However, playing devil's advocate at times, I propose that, depending on the definitions, Mark may not be correct, and that different areas of the cosmos may have different expansion pressures and still occupy the same space.
I began to define this idea in the Dimensionality and Cosmogenesis file (though some of the ideas there are still quite crude), which again is downloadable under a post by the same title. Thought experiment: at a basic level it should be hypothetically possible for an expanding universe to be embedded in another expanding universe if the expanding ether forms an equalizing expansion pressure shell surrounding the universe with the different expansion rate. In my thought experiment, however, this only naturally worked one way, where the universe with the higher expansion pressure and lower etheric density/etheric expansion revolved around the universe with the lower expansion pressure and higher etheric density. This model, which is the opposite of the one in the Dimensionality and Cosmogenesis file, would suggest that the etheric expansion into space would create an effect similar to the gravitational orbits described in Mark's Final Theory (you'll have to read Mark's book here, but basically physically expanding electrons create the gravity effect indirectly through the boundary of the atom expanding, so that objects moving in a straight line end up moving in a curve around planets because of planetary expansion; note this obviously applies to any object made of atoms, not just the giant ones).
For those who have read Mark's book the leap of logic here is probably manageable, but it involves a lot of questions, which is why this idea of universes with different expansion pressures revolving around each other is just that: a hypothesis. It can also be considered a model of higher dimensional interaction if no interaction is possible between two universes with two different electron expansion rates. I had to bridge several difficult barriers in thinking to even suggest this idea, but hypothetically these things must be considered in general.
First, the actual volume of the electron isn't technically proven 100%. This is because the nature of space is not fully defined. Space exists, but we don't know what it is exactly. Also, we don't know the basis of the electron's expansion. As such, hypothetical relationships between these different elements of our not-knowing must be bridged slowly and compared with evidence to see which is true and which is false. In this regard even the spiritual ideas I have presented here must still be considered hypothetical if we lack evidence (notwithstanding any of our abilities to have dreams about events before they happen, as many people have). What this means is that we don't know enough to say what is really, really going on yet.
Returning to the idea of the problem of accelerated expansion that I mentioned earlier, the cause is indeterminate. The possibility of complex relations must be considered between higher dimensional fields and ourselves. Even creationist musings of some nature whether by more evolved beings than ourselves or by a classic deity cannot be discounted 100% as false even though reason and common sense say they are highly implausible.
The correct bias in science is always towards natural explanations, and explanations of non-interference, as much as they can be conceived. This means that we must first search every avenue of natural explanation for what we observe before we turn to either supernatural or intelligent manipulation of the cosmos. My early works, if you read them, may seem very much in favor of the idea of intelligent manipulation at times; however, the way I envisioned this was in terms of a spiritual reality within ourselves that saw, before its own incarnation, what it was creating. Today, while I still believe in the precognition of higher dimensional states of consciousness a priori to the incarnation of our beings in the physical universe, I do not consider it likely that the operation of planetary bodies is in fact consciously directed. However, I don't know for sure; it just seems that the mechanisms would be difficult to conceive and highly unnecessary. In other words, I doubt intelligent design by way of interference in nature from a transdimensional being, as such a process has no purpose. The only exception to this idea is the creation of the universe itself, which would make sense; however, we must first rule out nature as the answer.
What is nature, though? We study the laws of nature, but we don't know 100% what nature is. In my Principles file (my most current and developed body of notes), which, again, can be downloaded from the post titled Principles: The Philosophy of Knowledge and Extended Topics, I discuss this complex issue briefly. I propose the philosophical notion, which is the basis of the answer, that nature has no limits but evolves within them. We experience nature, but life is itself beyond definition.
Knowing this we come to a difficult question. Is it possible for nature to define itself? If the answer is yes, then we may consider consciousness the emergent property of a system, nature, which requires an agent to examine its existence. This is a common spiritual hypothesis among many thinkers. However, if the answer is no, then we must ask what is it that defines nature herself?
If we go back to the simple fact of the expanding physical electron, we see the basis for all of the nature we see outside ourselves. But does the physical electron define our consciousness? Does it define our reality internal to our experience as human beings living in a physical universe made of these expanding electrons? Or is it in fact part of something bigger? Is it part of a holistic field of consciousness whose parts cannot be understood outside the whole experience? It is this hypothesis of the holistic field of consciousness that deals with transdimensional field theory and the nature of the universe as an internal subjective experience of learning and evolving.
In the imagination we can create any possibility, given enough information and enough power. Our computers prove this. We can create any universe with any type of physics imaginable, so long as it is internally consistent. While the universe we live in may be highly controlled and regulated by the invisible rules that govern its existence, our spirits certainly yearn for more possibilities. Perhaps we are all gluttons for choice, but such is the nature of desire and will. I believe it was in this spirit of desiring more that I expanded Mark's theory into the nature of a more interesting Cosmos than the one we read about in our books, as fascinating as that quantum universe was.
The new cosmology is born from the intersection between expansion theory and the imagination. In all its possibilities it must give birth to a new science, built on a new quest to find the ultimate answers for a new generation of scientists and thinkers. I submitted in Cosmogenesis Notes #1 the suggestion that our stellar model is incorrect. Why? For one, because the basis for the mechanism seems faulty. The erroneous interpretation of Heisenberg's uncertainty principle is the basis for the concept of quantum tunneling, and the basis for Heisenberg's principle is the old quantum model of the atom, where the electron is a probability cloud surrounding the nucleus. Mark shows this model to be utterly false and demonstrates that the electron is a solid particle that bounces off the expanding nucleus, which creates the false notion of a probability cloud in extended analysis, and only then if one assumes that a probability cloud can exist anywhere outside the human imagination. Physicality does not work that way, even if you are drunk.
What this means is that to break past the Coulomb barrier (another idea in physics that needs reexamining in light of Mark's expansion theory), an old-style proton had to jump over the barrier caused by what was once considered polarity. Since it was (and is) widely believed that protons repelled other protons based on charge rather than expansion, this Coulomb barrier was the imaginary resistance the proton had to overcome to fuse with another proton and go from being hydrogen to being deuterium, when the 2nd proton decayed into a neutron upon fusion. The only known mechanism that hypothetically made this possible was the probability cloud hypothesis of Heisenberg (the uncertainty principle) in combination with Schrodinger's wave equation. Somehow, by probability, the proton acquired (from where, they don't say) the energy to overcome the barrier. This is how it was hypothesized that the Sun could get its power from H-H fusion. But no amount of laboratory science has ever made basic one-proton hydrogen fuse below 2-3 billion degrees or so.
The idea suggested was that protons under massive pressure inside the star would be able to "jump" this barrier through what appears to be random chance at what was guesstimated as the actual internal temperature of the Sun, 15 million degrees (almost common knowledge in our 2016 world). However, we can see immediately that assumptions built on assumptions don't prove anything. We have never replicated these conditions. When we do use nuclear power in fusion we do not attempt to fuse hydrogen with hydrogen, but rather the rare isotopes deuterium and tritium, which have 1 and 2 neutrons respectively. This may be because H-H fusion cannot occur with a net liberation of energy at all. If this is true, then the case of H-H fusion powering stars falls flat on its face. An analysis of what I believe is actually going on is elaborated in the Appendix (1).
A simple summary is that the current theory of physics states that because there is a mass difference between 4 basic hydrogens and helium, the difference in mass, when the hydrogens are fused, is converted to energy according to Einstein's equation E=MC^2, and this accounts for the creation of energy. However, this is also an unproven hypothesis, and Einstein's equation has been shown to be a forgery by Mark McCutcheon's simple mathematical analysis. Logically, the definition of energy here isn't even valid, which is something Mark addresses as well. Tesla, apparently, may have realized some of this but didn't elaborate in detail, merely stating that Einstein was a mathematical charlatan in general.
Mark's alternative hypothesis, which is where I depart from some of his original conclusions, is that if electrons are freed from the inner atomic realm and expand in the open atomic realm, they are automatically converted into light by means of the incredible internal expansion of the electron. However, this ignores a few problems, the most egregious of which is that it ignores vector motion and the fact that the universe is teeming with protons and other nuclei, seemingly stripped of their electrons in the form of cosmic rays, that aren't converted to light. Even plasma states in labs on Earth may disprove this idea of Mark's that it simply takes removing protons/neutrons/other subatomic particles from the inner atomic realm to create light instantaneously as a function of expansion pressure. The jury is still out on that one, pending some more nasty thought experiments. I suspect many of you are past the point where your brains might already be full after all I just said.
From here, the hypothesis is that electrons must still obey vector motion, and that energy isn't simply the conversion of inner expansion pressure to a light/energy effect in the real world, because motion imparted to objects still has a real-world energy effect. If this hypothesis is true, the reason may be something like the conservation of vector motion in accelerated expansion theory (that's the version where the electron expansion is a hypothetical volumetric doubling based on a universal time interval, mentioned earlier), due to other, as-yet-undefined properties of the electron/matter/mass in general (see the discussion above mentioning force, power, and substance).
Going along these lines we get, finally, to the fun part for a lover of astronomy: new theories of how stars, planets, galaxies, and on up, work. The first and primary idea presented, in the absence of a fusion model of stars, is a new model. If one is familiar with the Electrical Universe alternative to the standard quantum model, one has a beginning of an understanding of this new hypothesis. In the Electrical Universe, stars are powered externally by charge from currents. Obviously, Mark's theory makes the concept of charge irrelevant, so that theory may be partially discarded. Also, the currents available in the known local universe (the heliospheric current sheet, for instance), when measured by our current science, come out woefully inadequate in terms of the power transfer necessary to fuel the Sun's massive 4 X 10^26 watts of output.
There is, however, an alternative plausible source of power, oddly enough suggested indirectly by Nikola Tesla years ago (he either didn't get the mechanism or just didn't say anything about it). The hypothesis involves examining the nature of magnetic fields. Mark demonstrates that magnetic fields are in fact constructed of expanding electrons that are wrapped around their originating body; they are essentially outer extensions of electron sheets within the body of the object in which they are embedded and from which they originate.
Magnetic fields are physical extensions of electron sheets, either in the subatomic realm or from the electrons floating about in electrical fields above the atom's bouncing electron shell.
If this magnetic field is bombarded externally by something that imparts pressure and energy into it, the result is a transfer of energy from the bombarding agent (another magnetic object, light, or, most significantly for our analysis, cosmic rays) to the magnetic field, and by extension to the star, planet, or other body in which it is embedded. A cosmic ray is a nucleus of an atom moving at very high speed through physical 3d space. The amount of cosmic rays is extraordinary. And the amount of energy hypothetically available to be imparted by cosmic rays into a magnetic field is determined by the field's size and its ability to absorb the energy of the impact of these rays through resistance. Hypothetically, a magnetic field could also get energy from resistance to other magnetic fields, to light (made of electron clusters, again, in Mark's theory), and to other small subatomic electron clusters (what Mark calls what we currently call subatomic particles).
To calculate the cosmic ray flux (the amount of cosmic rays passing a particular area per second) bombarding the outer magnetic field of the Sun, we need only check online. We find from numerous sources that the cosmic ray flux on Earth at sea level is something like 1 GeV/square cm/sec. To determine the cosmic ray flux in space around Earth we multiply by about 50, because the Earth's atmosphere and magnetic field block out the rest, a figure I read on a NASA website, I believe. Additional proof is found in this article, which states a cosmic ray flux near Earth that matches this figure exactly. Here we only have the cosmic ray flux for particles above 200-300 MeV, because the rest is long since blocked out by the Sun's massive magnetic field, as indicated by this rather complex scientific paper on the subject. To get the actual total potential, you must account for the whole flux bombarding the outer shell of the Sun's magnetic field.
The first mathematical step is to determine the number of particles in the primary range we are using to calculate the flux, which is approximately 2 GeV-200 MeV. Because a large number of particles are deflected before they even reach Earth, we must use the Voyager 1/2 data to increase the flux estimate for particles in this energy range, which this website and graph show is at least approximately 500%. The next step is to estimate the flux across the range of energies in the remaining cosmic ray bands.
The general rule I read on this website for estimating cosmic ray energy flux is that for every factor of 10x downward in energy you must increase the total number of particles by a factor of 50, so that the total energy available increases 5-fold for every 10-fold decrease in the energy of each individual cosmic ray. As a general example, if the energy flux for particles from 2 GeV to 200 MeV is 5 GeV/square cm/sec, then the amount for particles from 200 MeV to 20 MeV is 25 GeV/square cm/sec. This is true down to a certain level, below which it is untrue. It appears to hold at least down to the level of 1.5 MeV, as indicated by the Voyager 1 cosmic ray subsystem data indicated here. The amount indicated in the data is a little over half the amount required, but because of intervening magnetic fields between Voyager and true open interstellar space (if such a thing even exists), this is easily accounted for by the "low energy" nature of cosmic rays at that level, as they are easily deflected by magnetic fields.
What is the end of the analysis to determine the amount of energy hypothetically available to power the Sun, or any other star or celestial body? The outer boundary of the Sun's magnetic field and its surface area must be determined, and the flux of cosmic rays must be determined, to see if the energy is equal to or greater than the Sun's output of 4 X 10^26 watts. For the boundary I used the Voyager 1 data placing the heliopause out at 121 AU (the distance from Earth to Sun x 121), which is also a very conservative approach. You must also take into account that the Sun's magnetic field has a tail that may increase the surface area as much as 10X, and plausible interactions with the neighboring discovered bubble magnetic fields, as they may feed energy into the Sun's field, which would increase the possible surface area outwards. However, every time I ran the calculation, it turned out that more than enough energy was available to power the Sun from the cosmic ray flux bombarding its outer magnetic field, assuming that the 5-fold increase in available energy held true down to about 200 KeV and that the remaining cosmic rays below 200 KeV contributed enough additional energy to double the total (a conservative estimate).
The Math for This Analysis
Approximate Surface Area of Solar Magnetic Field in square cm
Radius - 121 A.U. = 1.81 X 10^15 cm
Surface Area = 4 * 3.14 * (1.81 X 10^15 cm)^2 = 4.115 X 10^31 square cm
Flux per square cm
1 GeV/sec/square cm - sea level cosmic ray flux on Earth
1 GeV/sec/square cm X 50 = 50 GeV/sec/square cm - cosmic ray flux in space near Earth
50 GeV/sec/square cm X 5 = 250 GeV/sec/square cm - cosmic ray flux at edge of solar magnetic field for cosmic ray range of 2 GeV-200 MeV
250 GeV/sec/square cm X 5 X 5 X 5 = 31250 GeV/sec/square cm = cosmic ray flux at edge of solar magnetic field for cosmic ray range of 200 MeV - 200 KeV
31250 GeV/sec/square cm X 2 = 62500 GeV/sec/square cm = cosmic ray flux including the doubling from the range 200 KeV to 1 KeV
62500 GeV/sec/square cm = 1.001 X 10^-5 joules/sec/square cm = 1.001 X 10^-5 watts/square cm
Total Cosmic Ray Energy Available For Conversion To Solar Power
(1.001 X 10^-5 watts/square cm) (4.115 X 10^31 square cm) = 4.12 X 10^26 watts
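For anyone who wants to double-check this chain of arithmetic, here is a minimal script that reproduces it step by step. The inputs are just the estimates quoted above (the sea-level flux, the 50x and 5x correction factors, and the 121 A.U. heliopause radius), none of which are precise measurements:

import math

AU_CM = 1.496e13              # one astronomical unit in centimeters
GEV_TO_JOULES = 1.602e-10     # one GeV expressed in joules

# Surface area of the solar magnetic shell at the heliopause (121 A.U.).
radius_cm = 121 * AU_CM                   # ~1.81 X 10^15 cm
area_cm2 = 4 * math.pi * radius_cm ** 2   # ~4.1 X 10^31 square cm

# Flux chain, exactly as in the table above (all in GeV/sec/square cm).
flux = 1.0        # sea-level cosmic ray flux on Earth
flux *= 50        # space near Earth (atmosphere and field block the rest)
flux *= 5         # Voyager 1/2 correction for the 2 GeV - 200 MeV band
flux *= 5 ** 3    # 5-fold per decade of energy down to 200 KeV
flux *= 2         # doubling for everything below 200 KeV
print(flux)       # 62500.0

watts_per_cm2 = flux * GEV_TO_JOULES      # ~1.001 X 10^-5 watts/square cm
total_watts = watts_per_cm2 * area_cm2
print(f"{total_watts:.2e} W")             # ~4.12e+26 W, vs ~4 X 10^26 W solar output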
We can reach this final number, which as you can see is the amount required to power the Sun, in several ways. If the amounts of cosmic rays at the lower end, or at any point in the analysis, aren't available, then the power could come from the Sun's extended magnetic heliotail, which could potentially add as much as 10X to the surface area required. We could also extend the surface area of the magnetic field outwards in our analysis by assuming that the bubble magnetic fields surrounding the heliopause are able to transmit the energy they absorb to the solar magnetic field through whatever means we can conceive (pressure, direct energy transfer through magnetic field lines overlapping, etc.). Some study of magnetic field energy transfer may be required here to determine the exact nature of this transfer and how it might occur.
Not only is it apparently true that the cosmic ray flux bombarding the Sun's magnetic field is equal to or greater than the Sun's actual output; this is also true of the energy being released by the gas giant planets Jupiter and Saturn, if we use the solar ray flux in their case instead of the cosmic ray flux used for the Sun. It may even be that the amount of energy Earth emits suggests our own planet's magnetic field is converting mostly solar rays into Earth's natural internal heat. Do the math, and you will see the rather startling relationship.
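As a sketch of what "do the math" might look like for Jupiter, here is the same style of estimate. All three input figures are my own round, commonly quoted assumptions, not numbers from this essay or from Mark's book: roughly 50 watts per square meter of sunlight at Jupiter's distance (~5.2 A.U.), a magnetospheric cross-section of very roughly 75 Jupiter radii, and an internal heat excess of roughly 3.4 X 10^17 watts. Treat the result as a plausibility check only:

import math

SUNLIGHT_W_PER_M2 = 50.0   # assumed solar irradiance near 5.2 A.U.
R_JUPITER_M = 7.15e7       # Jupiter's radius in meters
MAGNETOSPHERE_RJ = 75      # assumed magnetospheric radius, in Jupiter radii
EXCESS_HEAT_W = 3.4e17     # commonly quoted internal heat excess of Jupiter

# Solar power crossing the magnetosphere's cross-sectional disk.
r_m = MAGNETOSPHERE_RJ * R_JUPITER_M
intercepted_w = math.pi * r_m ** 2 * SUNLIGHT_W_PER_M2
print(f"intercepted sunlight: {intercepted_w:.1e} W")   # ~4.5e+21 W
print(f"internal heat excess: {EXCESS_HEAT_W:.1e} W")   # ~3.4e+17 W

# The intercepted flux exceeds the excess heat by roughly four orders of
# magnitude, so on this hypothesis only a tiny conversion efficiency is needed.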
So if it is true that cosmic rays bombard the magnetic fields of planets and stars, and that the energy of these impacts, along with possible other galactic magnetic field transfer, flows into the planet's or star's magnetic field, then how is the energy transferred to the core of the Sun? The hypothetical answer is physically, through pressure, directly to the core in some fashion, where it can be released through some sort of magnetic reconnection or other hypothetical mechanism, of which I suggested a couple in Cosmogenesis Notes #1.
The cosmic ray powered universe is very different from the one we learn about in standard physics class, but it does bear a decent resemblance to the Electrical Universe model, because in both models planets and stars divide by fission, just like cells, in what we call novas and supernovas. The mechanisms differ, however, between the Electrical Universe model and the cosmic ray powered universe. In the cosmic ray powered universe, what occurs has one of several possible causes. If a magnetic field becomes too large relative to the underlying mass of a star, the result is that the electrical field of the star becomes heated to a very great degree. At a certain level of energy it is possible that the field would simply overload due to the amount of energy being utilized, similar to any electrical overload. Perhaps a massive charged energy buildup accompanies this. If necessary, an even more powerful process, described below, could be responsible for novas and supernovas.
The existence of hypothetical ultra-dense matter in the cores of stars is highly plausible in the new model. This ultra-dense matter would be in the form of very high-level transuranium atoms that would only be stable under the high pressures inside stellar cores. If this pressure were removed, because the layers above melted due to the massive energy of a huge heated electrical field supported by a massive magnetic field, then at some point a massive fission explosion would occur, possibly blowing the stellar core in two, depending on which layers melted and where they were located.
At first this may appear to be a very exotic explanation. However, there are so many strange and unusual events reported in space news that confirm that supernovas and novas occur in very different ways from what is expected in current stellar models. In fact, if one looks for discrepancies between theory and observation, there are quite a few. What about hyperdense matter? We already see examples of that in standard theory in the form of degenerate matter, what white dwarfs are supposedly made of, and neutronium, the ultra-hyperdense theoretical substance that makes up the hypothetical neutron stars. Between that ridiculous density of 1 billion tons per teaspoon and the heaviest metal we now know, there is quite a range of possible transuranium atoms that could potentially fit and be stable under extreme pressures. We are talking about a very, very big periodic table under high pressures here.
Let's return a bit to Mark's contentions in the Final Theory to see some very interesting facts he points out as a result of his theory. The most interesting, I found, was his suggestion that since gravity is no longer a force that holds objects together by invisibly pulling them towards each other, gravity is determined by the size of the object, not its mass, because the expansion of the electron is the cause of gravity. What this does in astronomy is nothing short of amazing. The conclusion we must draw is that we do not know the mass of any body in the universe by measuring its gravity. The Sun could have a neutron star at its core (I don't believe this, but take it as an example) and we wouldn't know it from the gravity. In fact, because gravity is determined not by mass but by size, the theories of black hole gravity and neutron star gravity are utterly false. While superdense bodies made of ultra-dense matter are possible, they would not exhibit any gravitational effect different from a body of the same size made of Styrofoam.
When I was considering the stellar model in light of the new theory this concept of ultra-dense cores that were gravitationally undetectable was rather tantalizing. Our concepts of stars and planets would have to be thoroughly re-imagined to account for this possibility. However, during my general thinking meditation about how stars, planets, and "black holes" would work in a magnetic field powered solar model I found that the ideas flowed rather smoothly.
The new stellar model is built upon the idea of the external powering of systems except where they have stored sufficient energy from the external power source to power themselves for a certain duration. Our Sun then is powered by the rest of the galaxy primarily in the form of cosmic rays from other high driving force stars (driving force in this case is the energy that pushes cosmic rays out from the surface of a star) and natural synchrotron radiation sources such as large galactic magnetic fields or what we call "black holes"/"neutron stars". In this model the ultimate source of energy is always external to the system in question whether it is a planet, star, "neutron star", "black hole", galaxy, galaxy cluster, and on up the scalar chain of possible systems.
The first realization we come to is the idea that a system in a cosmic void is in big trouble. If there is no source of external energy in a void, then a system that is crossing it will soon begin to suffer from a lack of external energy. The result is that the system will begin to use up its internal stores of energy at a rapid pace. If the system is to survive it must make it out of the void as fast as possible before its stores of internal energy are used up. We will return to this idea soon, an idea I refer to as void evaporation.
Considering again the origin of planets and stars, the idea must be considered that some or most planets are birthed from stars. However, it is impossible for this to be true for all planets, because the known mechanism of planetary condensation would create what I refer to as dirt planets: planets at low "normal" density that form from gas clouds and collisions between asteroids, as described in our current model. The question arises at some point, "is Earth a dirt planet, or a planet with a hyperdense core ejected from another planet?" The answer could only be determined by discovering the true mass within Earth's inner solid core. So far, not enough evidence is available, and we cannot determine this by gravity, as described in McCutcheon's initial foray into expansion theory.
A planet with a hyperdense core would be an interesting beast. As would a star. What would happen over time as the planetary or stellar magnetic field was bombarded by solar rays, cosmic rays, and the pressure of external magnetic fields smacking into it? The amount of energy in the field would continually increase so long as the pressure remained constant or increasing. The star or planet would gradually increase the amount of energy it contained, so that the star would become brighter and the planet would become more and more geologically active. This would not always be the case, depending on many, many factors, but the general trend would hold.
From what we know of Earth, the amount of volcanism on Earth has declined in general; however, this does not mean it will continue to do so. More information is required.
What would happen as a star's magnetic field energy increased? Unless there is a source of additional matter, mass loss would slightly reduce the amount of matter within the star. There are two plausible sources of new matter to consider. The first is obvious: a star could collide with another body to increase its mass, such as absorbing a planet or colliding with another star. The 2nd, which is less obvious, is the possibility of a star absorbing mass through a process of matter subduction.
What is the hypothesis of matter subduction? Because in Mark McCutcheon's expansion theory every form of matter, including magnetic fields, is made of the same basic particle, the electron, a hypothesis may be suggested that under some state of magnetic "vibration" normal physical matter, such as protons or cosmic rays, might be subducted into the magnetic field of planets, stars, or any conceivable magnetic system or object. This would cause the proton or other matter particle(s) in question to "disappear" into the magnetic field that surrounds them, and of course transfer any energy they have to the field in question as well. Certain conditions would have to hold, and a very long thought experiment, along with a clear model and supporting evidence, would be required to verify whether this is even possible. But if it is, then we have a source of matter replenishment for large-scale systems such as stars that does not require them to absorb more matter through collisions with other celestial bodies.
A 3rd possible source of matter in this system is highly implausible, but again should be considered. This is the possibility of the creation of matter from motion or energy under certain conditions, what I refer to as a type of field copy hypothesis. This would envision the universe as a sort of "free energy" "perpetual motion machine", however, and implies a different ruleset than what we would consider thermodynamically correct. It would also need to be fully fleshed out as a theoretical framework, but because it sounds "too good to be true", the idea must be considered highly unlikely. Because the full nature of the electron is not known, however, it must be examined as a distant possibility. In such a hypothetical system, matter and energy are byproducts of motion, and as such they are automatically created as time progresses in nature.
Returning to our basic scientific model of the new planet/stellar idea: as a star's magnetic field expands under bombardment from external sources, even with low mass loss except when the star absorbs another object, the magnetic field is going to become stronger and larger. The hypothetical mechanism here is the speed of the electrons in the magnetic field, and by extension in the electrical field inside the star. The electrons will move faster and faster, and thus the amount of energy in the star will increase. The star's magnetic field will expand, allowing more energy to enter from outside, since the larger the surface area of the field, the more energy it collects, all other processes being equal. The star becomes brighter as a result. This process can continue so long as the star is stable.
When a star becomes unstable it is because it can no longer support the massive magnetic field it has generated, either because there is an absolute size the magnetic field can reach before the amount of energy causes a type of overload, or because the heat of the star's internal environment has melted too much of its outer core, exposing the hyperdense matter underneath to too low a pressure and resulting in a fission reaction that goes critical.
Hyperdense matter can only be stable under great pressures in nature, based on what we know about radioactivity and heavy elements. We see in this article that rates of radioactive decay do vary based on external pressure, and there is other evidence that this is true.
When a star goes critical it produces either a nova or a supernova. Current observations of novas and supernovas show that they do not follow the conventional models in intensity or longevity, and that they vary considerably in scope in other ways. This lends credence to the idea that not only can novas/supernovas occur at any range of energy levels, but that many of the smaller explosions do not even get registered at all.
What do I mean here? If the key property of an overload of a magnetic field powered star or planet is the mass underlying the system core, and the critical explosion is caused by some combination of magnetic field overload and a fission explosion triggered by hyperdense matter's exposure to lower external pressures, then this could occur at any range of possible power levels, depending on the underlying system's nature. It means that hypothetically Jupiter, Saturn, Neptune, or Uranus could explode tomorrow if it had a relatively small core and suddenly reached its critical state. This is, of course, highly implausible; however, because we can't tell what stage of stability the core of any particular system is in, until we know more about the signs of unstable cores any planet or star could in principle go critical. Since every planet in the system appears stable, we can assume that none of them is about to go critical.
In order to determine whether a system is near its absolute maximum carrying capacity as a magnetic field bearing celestial object, we must find out the rule-set upon which this is based. In my original analysis, which I have long since lost since I conducted it about six years ago just in my own mind with internet sources as a guide, I loosely estimated (and this is a very loose estimate based upon a lot of assumptions about supernova explosions) that there was a ratio of nova/supernova luminosity to underlying mass that should reveal the true mass of the underlying core AFTER it went supernova/nova. The figure I came up with was about 160,000 L per solar mass (L = 1 solar luminosity, or 4 X 10^26 watts; remember that figure from earlier in our analysis?), which again is just a really loose guesstimate. The real figure could be higher or lower, more probably higher.
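Stated compactly (my own restatement of that guesstimate, not a derived law): core mass in solar masses ≈ peak nova/supernova luminosity in L divided by 160,000. So, on this hypothesis, observing the brightness of the explosion would tell you the mass of the core that produced it.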
We're quite deep into hypotheticals at this point of the analysis, but the 160,000 L figure was based on the luminosity that what we currently consider one solar mass would put out as a nova if it went critical. If the theory of cosmic ray powered stars is true, and the mechanism of overload is mostly magnetic field overload based on the mass to magnetic field ratio, then a Sun-like star as conceived by our current stellar models, without a hyperdense core (a "dirt star"), would, when it goes nova, simply puff up with its built-up energy and then collapse into a dense object like a "white dwarf", much as conventional theory sort of suggests. But this assumes the star is formed exactly in the manner described by current solar models: a star created from a contracting gas cloud with no hyperdense core at its center.
So if the Sun were such a star hypothetically reaching the end of its life, it would puff out to 160,000 L, which is 160,000 times its current luminosity. Its magnetic field would then have dissipated, and it would shrink down into a planet-sized ball, because it would no longer have the magnetic field to absorb the external cosmic ray and magnetic field pressure coming from the rest of the galaxy.
This is obviously not what I think is going to happen to the Sun. There are no signs of the instability we would expect if such a thing were happening, but the possibility of stars forming from gas clouds still exists in this new model, and this is what we might expect if such a "dirt star" were to form and live out its life in such a way.
If you continue this analysis you will notice that what I have just said implies that most stars in our galaxy are much, much more massive than we currently think possible, because of these hyperdense cores. Applied to a large supernova of, say, 100 billion L at maximum luminosity, the 160,000 L per solar mass figure would suggest the star in question was over half a million solar masses (10^11 / 1.6 X 10^5 = 625,000) according to what we currently think the Sun weighs. But such a star would be among the largest in the galaxy. Nor should this figure be all that amazing, considering that current astrophysics places some galactic core "black holes" at 10 billion solar masses.
As we extend our new model further, we see that hyperdense cores are what make big, hot, massive stars possible and keep them stable. They also make quasars possible and could keep them stable, though the evidence on quasars isn't complete enough to truly define them correctly, or even to determine whether they exist at all. The closest quasar is something like 2.4 billion light years from here, and the lack of one close by may indicate that quasars are nothing other than optical illusions caused by a misreading of redshift distances, as suggested by some Electrical Universe proponents. However, active galactic nuclei are real, and many exist close by. More evidence is needed to establish whether quasars exist at all. The new theory suggests they are possible, but that they would be rare features of very large core galaxies.
Dealing with the new stellar model creates a big headache for anyone trying to retool the universe according to the concept of magnetic field powered stars. The current interpretation of the Hertzsprung-Russell diagram presents the biggest challenge. In the current theory, stars burn out their core hydrogen, jump off the main sequence to the giant or supergiant branch, and then burn out their remaining nuclear fuel, turning into white dwarfs, or exploding in supernovas and turning into neutron stars and black holes, depending on how much mass they have and what type of nuclear fuel they can burn in their cores according to standard astrophysics models.
The Hertzsprung-Russell Diagram of 22,000 Nearby Stars - The Bane of Alternative Theories, As This Is What Must Be Explained
In addition to explaining the standard Hertzsprung-Russell diagram of star magnitude and temperature, a successful alternative theory must overcome the hurdle of the discovered variations on the standard diagram, explained in various ways by conventional astronomy, such as the fact that globular cluster and open cluster Hertzsprung-Russell diagrams vary considerably from the "normal" galactic diagrams.
In the magnetic field powered star theory, the explanation for differences in stellar output comes from the fact that as a stellar magnetic field grows, the star itself increases in luminosity, so it moves towards the upper left of the diagram, the blue end of what is called the main sequence, the main band in the diagram above. However, if we think about the way stars are observed in the sky, a star could hypothetically also evolve towards the giant branch. What we tend to observe as astronomers is that nearly all O and B stars occur in tight groups, generally referred to as open clusters, but the vast majority of giant stars occur by themselves in normal-density stellar space.
What this suggests is that O and B stars, as well as their supergiant cousins, are stars that are very ancient and very massive in their internal core hyperdensity. It has generally been noticed that giants and supergiants are unstable, suggesting that these are stars that are reaching their magnetic field limits and are closer to going critical. Massive stars wouldn't have this problem, because their hyperdense cores can take so much more external cosmic ray/magnetic field flux, so they stay on the main sequence, being much more stable at higher levels of magnetic field bombardment.
Why would they tend to gather in groups? The hypothesis here is that an already existing massive star, when undergoing binary fission through the nova/supernova process, would fission to create large stars instead of small ones. In addition, another hypothesis of stellar creation must be considered: the multiple binary fission of large "black holes" ejected from the central galactic "black hole" or similar very large objects. In summary, large massive stars come from other large massive stars and from "black holes", so they tend to hang around each other.
So then what's a "black hole"? The answer is that it is most likely another giant stellar object, one that puts out most of its energy outside the visible range in the form of x-rays, gamma rays, and cosmic rays. From this point forward we will refer to "black holes" as dark quasars to differentiate them from the theory of black holes as suggested by quantum physics, which we debunked with Mark McCutcheon's help at the beginning of this discussion.
Returning to the analysis of star groups, we can now explain the Hertzsprung-Russell diagram of globular clusters and open clusters with what is called their turnoff point from the main sequence. The reason these groups of stars have a truncated turnoff point is that we are witnessing the exact stellar development of a particular group of stars whose largest members are only so large and have not achieved the hyperdense core mass required to become any larger than they are.
Open Cluster Turnoff Point for Two Open Clusters
In the diagram above, the turnoff point for both of these open clusters is very well defined. Stars in our magnetic field powered star concept would begin their development down on the right hand side and, depending on how much mass their cores contained, would proceed to develop in the direction of the turnoff, turning towards the giant branch when they began to reach the limits of their magnetic field to core mass ratio. The cluster doesn't have stars larger than the turnoff, except maybe a few of the outliers to the upper left, because the largest stars in this group do not have the core mass required to support the higher levels of luminosity acquired from the magnetic field pressure of external cosmic rays and magnetic fields.
What about stars down in the red dwarf to K dwarf range (what we currently call low mass main sequence stars)? If a star has relatively low amounts of hyperdense matter, it may never reach the magnetic field size required to become a giant. Instead, a low mass star with relatively low levels of hyperdense mass would simply become a regular A-, F-, or G-class star (probably an unstable one) as its final stage, and then puff out to 160,000 L per solar mass when it was done and had reached its magnetic field to mass ratio limit. We would hardly even notice this, it would occur so rarely; someone would only see a brief temporary star show up and disappear. Who would even notice something like this against the background of hundreds of millions of stars visible from Earth?
What happens when our magnetic field powered stars collapse after their magnetic fields have been blown out by a nova/supernova? Sometimes, but not always, they would have split in two, but in either case they are likely not dead in the way we think of stars today. A "white dwarf" is not a dead star! It's just a core that needs time to come back to life again. How would this happen?
White dwarfs would be composed of ultradense matter (oddly enough, just like today) except that the matter would be far denser the closer we got to the core. Because there are still radioactive hyperdense metals in the core, heat would still be generated. Over time, that heat would begin the process of melting. Hyperdense elements, in addition, could begin to slowly fission, releasing less dense matter above them. This would result in a slow process whereby the star begins to regenerate itself through melting, release of lighter elements from fission, and other processes. Given sufficient time, all that is required for it to start serious regrowth is to get its magnetic field active again, sufficient to support the process of capturing energy from cosmic rays and external magnetic fields. How does it do this?
The main driver of magnetic field expansion in star sized objects is the massive plasma envelope that surrounds the nucleus of the star. Once a star possessed this, it would again be capable of generating the size of magnetic field it needed to fuel its continued growth. This creates the idea of a solar cycle whereby a star goes to completion, novas or supernovas, returns to the basic "white dwarf" configuration, and then slowly regenerates its outer shell through the release of lighter elements from the nuclear decay of hyperdense matter and heavy radioactive metals.
What we can envision is a planetary-stellar cycle which goes through the classical four elements. Beginning as a quiescent cold white dwarf, the star core slowly decays, releasing enough matter to have an atmosphere again and becoming a liquid covered white dwarf looking something like a small gas giant such as Neptune or Uranus. Unless you had exact magnetic field and mass readings it would be difficult to tell at a distance what the object was at the core. Then, as the process continued over presumably a very long time, perhaps billions of years, the core would continue to decay until the old star looked like a typical gas giant such as Jupiter. The star continues this development over a long course of time until it is a brown dwarf, and then it is again on the main sequence as a red dwarf.
This process is very likely to vary considerably based on the mass of the hyperdense core left over after the nova/supernova. Very large stars may restart as what we call "neutron stars", though they certainly wouldn't be 15 km across. There needs to be more investigation of the claims of 15 km wide neutron stars. We can't see objects the size of Earth at these distances, yet astronomers claim to see 15 km wide stars whose boundaries cannot be determined at this range. In any case, the larger the original star, the more likely that it starts its ascent back to its maximum power somewhere other than at the baseline of being a "white dwarf". What this means is that it is conceivable for a very large star to blow off its magnetic field and start its new lifecycle as something as large as a G-type main sequence star like our Sun. It all depends on how large the core is and how much mass it contains.
Galaxies are powered externally by cosmic rays and external magnetic fields as well. All of the power of the galaxy is centered on its giant central dark quasar (remember, this is what we call a supermassive galactic black hole in current astrophysics). Think of the amount of power available to a galactic magnetic field. The field is ridiculously enormous! All this energy is channeled to the central body in the galaxy. The central dark quasar must be immense and contain immense amounts of hyperdense matter. If it puts out most of its energy as cosmic rays, the central dark quasar feeds the galaxy just as the galaxy gets its power from external sources like the entire local supercluster. In an infinite universe there is no limit to how large these scales can go.
Globular clusters can be explained in the new theory as either an ejected dark quasar core that divided through binary fission, or the core of a galaxy that was stripped by the larger galaxy and also divided by binary fission. Either theory works, but the first requires a quasar-like process, as current theories describe, in order to account for the force of projection necessary to expel such a large core from the central dark quasar. Another hypothesis is that globular clusters have simply developed on their own, slowly growing while orbiting the galaxy. The current theory of these clusters being the cores of galaxies that have been mostly stripped of their stars is probably the most likely hypothesis in general.
If stars become planets at the end of their normal lives, don't planets become stars? Yes, in the new concept they do. Over time a planet would also reach its critical limit, explode, go back to its own baseline of development, and start over. Assuming it wasn't absorbed by another system by crashing into it, it would just keep on developing. Earth could be just another gas giant that blew out its outer magnetic fields and is now quietly building its magnetic field back up, or it could be a dirt planet that formed from asteroids crashing into each other and gas condensing from a "pre-planetary" nebula. We don't know what's at the Earth's core to tell us.
All this comes down to the idea that a planet or star evolves larger and larger with each growth cycle it completes. Hypothetically, as each grand planetary/stellar cycle is completed, so long as the general amount of available external cosmic rays and magnetic fields didn't go down, the star or planet would in general, though not every time, start its next cycle with slightly more matter in its hyperdense core than the last time. Over a very long time scale, celestial objects would continually grow in general, but again not in every case every time. This process would slowly grow the inner hyperdense core. The main exception to this rule is if the system were thrown into a void where cosmic ray pressure or external magnetic fields were insufficient. In that case the system would tend to evaporate over time, in the process called void evaporation, unless it was able to cross the void before it ran out of the internal energy stored in its hyperdense core in the form of hyperdense radioactive elements.
As a review, we are analyzing two elements of the lives of planets, stars, neutron stars, and dark quasars here. The first is the short term life cycle: starting from being birthed by binary fission from another star or planet in a nova/supernova, or from having just reduced its power by nova/supernova, and running until the next nova/supernova. The second is the long term life cycle, which continues each time this process completes and restarts, until the day the celestial body either evaporates in a void or collides with a larger body, thus ending its cycle of growth. Each short term cycle completes and, in general, assuming external energy is available in the form of cosmic rays and external magnetic fields, the hyperdense core grows more massive with each cycle. This occurs either because of planetary/stellar collisions that add mass to the core, most likely during its expansion into a giant/supergiant, or because of matter subduction, where the magnetic field absorbs mass from its surroundings.
As we observe a star in its day to day, year to year normal operations, the star's magnetic field would also respond to changes in its cosmic ray and external magnetic field environment. This is short term planetary/stellar magnetic field change. In general, if the external cosmic ray and magnetic field environment became poorer, the field would respond by expanding outwards over time. Support for this process can be found in this article. The expansion would very quickly match the absorption of energy required to continue the star or planet's energy "consumption" from cosmic rays or external magnetic fields. External cosmic ray and magnetic conditions vary in the short term depending on surrounding stars and the type of space environment.
Different space environments in a galaxy could act as a magnetic field/cosmic ray conveyor belt for the stars passing through them, and for cosmic rays as they move from one region of space to another, empowering regional galactic magnetic fields as they move around. A hypothesis to consider is that, in general, the denser the medium a celestial body is passing through, the more of the energy is contained in the form of magnetic fields instead of cosmic rays. Regions of space that are molecular clouds or cold neutral medium are denser than areas that are warm neutral medium or hot neutral medium. The former would tend to have more energy available to be absorbed from magnetic fields, the latter more in the form of cosmic ray pressure. This is purely hypothetical, but whatever the case, a star passing through areas with more available external energy will see its magnetic field shrink temporarily, while one that enters an area with less pressure will tend to see its magnetic field expand. The general hypothesis is that more energy is available in denser mediums because they are better able to store it: cosmic rays would be slowed down passing through such mediums, transferring their energy to the local gas cloud magnetic field and making these regions something like cosmic battery parks. These would be difficult to detect; even the Voyager probes could not detect magnetic field changes until they passed through them, in areas of space that would be considered highly visible to detection.
Globular clusters, redder dwarf galaxies, and elliptical galaxies share the similar property of having fewer supernovas and fewer blue stars compared with the disk of a spiral galaxy or bluer irregular and dwarf galaxies. This property has to be accounted for in the new theory. The most probable cause here is the presence of large amounts of gas and dust in the galaxies that have more blue stars. If the gas and dust clouds are related to the magnetic conveyor belt phenomenon, then the gas and dust are essentially accelerating the stars through their growth cycles because of the huge amount of stored cosmic ray pressure contained within the clouds' magnetic fields. With accelerated growth of the stars, the number of supernovas would increase in proportion to the energy stored in the magnetic fields of the gas and dust clouds.
The partial lack of blue stars in globular clusters indicates that the stars in question are not especially large. This does not mean that there are no blue stars at the core, where one would expect them; the claim that there are none has been shown to be false, especially when we look at the cores of elliptical and spiral galaxies where the star density is greatest. Also, many so-called blue stragglers have been found in elliptical galaxies, which our current theories try to explain as being caused by star collisions. Yet the evidence shows that the collision theory in our current models cannot explain the way in which blue stragglers appear in a globular cluster, with a clear positive relationship between the apparent mass of the cluster in our current ideas and the number of blue stragglers. In addition, some authorities state that there are literally no known globular clusters without these blue stars that shouldn't be there!
The general conclusion in our new theory is that blue stars exist in globular clusters because they are meant to. The dusty central cores of all of these galaxy types, as well as the abundance of blue stars at those cores in most surveys I have read on the matter, support the magnetic field/cosmic ray powered model of stars as I have outlined it. The difference in the numbers of blue stars compared to the disk population (the stars in the areas of a spiral galaxy outside the core) is rooted in the dynamics of gas/dust conveyor clouds and their effects on stellar magnetic fields.
I believe that two possibilities, not totally mutually exclusive, could account for the nature of these stellar populations. The first possibility is that the division of stars is hampered in the gas/dust clouds of dusty irregular galaxies, blue dwarf galaxies, and spiral disks, hypothetically due to the speed at which energy accumulates at the cores of the stars through the bombardment effect of the external magnetic fields and cosmic rays. As a result of this hampering of division, the stars in dusty regions tend to be larger than the ones outside because they divide less frequently. Consequently the large explosions that occur tend to be larger and more visible.
The other possibility, which I find slightly more plausible, is that larger stars in groups simply create dust clouds because they produce more dust, and this creates the environment we observe, allowing them, through the magnetic field conveyor belt process, to regrow themselves back to full size more easily when there are large numbers of them close together. This, of course, is what creates the much larger number of supernovas seen in spiral galaxies and in dusty dwarf and dwarf elliptical galaxies. In elliptical and redder galaxies this dynamic is not present, because the stellar populations in these galaxies generally consist of smaller stars with less massive hyperdense matter cores, as envisioned, except at the very cores.
Why would this be the case? It makes sense that the gas that supports the magnetic field/cosmic ray conveyor belt phenomenon cannot exist in the tighter, more compact globular clusters and elliptical galaxies. The stars literally burn away the gas clouds with their light. Without the gas clouds, the blue stars in these galaxies would not be able to grow as fast, and perhaps this would cause them to divide more, due to the way the hyperdense cores of these stars are affected by different rates of cosmic ray/external magnetic field pressure. This idea, that stars would tend to divide more under lower cosmic ray/external magnetic field pressure and divide less under higher pressure, would account for the differences between the apparent stellar populations of the different galaxy and cluster types. It is supported by the fact that the very core of any galaxy or cluster, regardless of type, looks similar in its stellar population to the dusty regions of galaxies, only with somewhat less dust.
This would, incidentally, not affect the fact that each star has an ultimate size it can sustain under cosmic ray bombardment and external magnetic field stress, beyond which it would explode as a nova/supernova. It would, however, mean that the ratio of luminosity to mass when the nova/supernova occurred would vary based on the external cosmic ray and magnetic field pressure.
So then why would stars divide in this way? I believe it has to do with the structure of the hyperdense core. These hyperdense cores grow in a certain way. Perhaps the pressure of the external energy sources is required to keep them stable so that they can grow larger. Perhaps the liquefaction of the outer parts of the core, which I see as solid hyperdense matter, occurs quicker at lower cosmic ray pressures, making it more likely that a nova/supernova results in a binary fission. There is a huge amount to think about for future scientists just on this one question.
Another process similar to the conveyor belt effect happens when a star enters a void. The star's magnetic field expands as far as it can in order to gather as much energy as it can. However, if it cannot find enough, the star should begin to lose energy slowly as the radioactive hyperdense core continues to decay, releasing energy. Eventually even a core explosion is hypothetically possible, if magnetic field pressure assists in some way in stabilizing hyperdense cores. In order to understand the process better, thought experiments have to be carried out considering the relationship between the magnetic field supporting plasma surrounding the hyperdense core, the hyperdense core itself and how it responds to the magnetic field of its own star, and what allows for the continual stability of the hyperdense core under various conditions. So hypothetically a hyperdense core could explode either at the end of a short term stellar cycle or during a trip through a void. The former is more likely than the latter, but until the relationships are fully understood, either must be considered possible.
Let's deal with time scales here. What appears clear is that stars live a great deal longer in this idea than in our current models. It is generally estimated that the Sun has slowly increased in strength over the course of its lifetime based on what we know. This is not by much, but the increase suggests a gradual growing process of the magnetic field as a result of a slight excess of cosmic ray and external magnetic field flux. This excess, I figured, had to be very small but sufficient for the Sun to increase its luminosity at perhaps the rate current fusion star models suggest, which is around 10% per billion years. We don't really have enough data to say. Perhaps the process is even slower; the idea that ancient Earth was hotter than today is generally supported by the fossil record, but it can be largely accounted for by CO2 levels in the atmosphere. More studies are required to find out for sure, and examination of other stars might reveal more.
So, in hypothetical territory, let's say every 10 billion years or so a star's luminosity doubles. We can calculate the cosmic ray and magnetic field flux required for this to be true. Over time the star's luminosity keeps doubling until it reaches critical mass and exits along the giant branch from the main sequence, until its magnetic field reaches critical and it novas or supernovas. We know the time it takes for a star's luminosity to double can't be very short in our solar neighborhood, because Earth's incoming solar radiation apparently hasn't been much lower in the past. Maybe even the 10 billion year timetable is too short.
The number of times a star or planet could successfully double its luminosity would determine the length of the full short term cycle of its life. Say each doubling took 10 billion years and a star could double 20 times before reaching its critical magnetic field to core mass ratio; then each short cycle in the star's life would take about 200 billion years to complete. That is a lot of time, and it shows how different the new model is from the current one in terms of how old celestial objects are.
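A quick sketch of this hypothetical timetable, assuming the essay's own figures of roughly 10% brightening per billion years, a rounded 10 billion year doubling time, and 20 doublings per short cycle (all assumptions, not measurements):

```python
import math

# Sketch of the hypothetical long-cycle arithmetic above. The growth rate
# and doubling limit are this essay's assumptions, not measured values.
growth_per_gyr = 0.10          # assumed fractional luminosity gain per billion years
doubling_time = math.log(2) / math.log(1 + growth_per_gyr)
print(f"Doubling time at 10%/Gyr: {doubling_time:.1f} billion years")  # ~7.3

doublings_per_cycle = 20       # assumed limit before the field goes critical
cycle_length = 10 * doublings_per_cycle  # using the rounded 10 Gyr doubling time
print(f"Short-cycle length: {cycle_length} billion years")             # 200
```

Note that steady 10%-per-billion-year compounding actually doubles a star in about 7.3 billion years, so the 10 billion year figure used above is, if anything, on the slow side of its own assumption.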
When a star goes nova and releases some of its core mass, creating a planet nearby, the planet temporarily increases the star's potential to gather mass, as long as the planet is reabsorbed into its parent star when that star becomes a giant or supergiant. The recent discovery of large numbers of big superjupiters close to their stars supports the theory that this happens quite frequently. A planet would generally start close to a star, unless it came from a very large core overload explosion, and would then be pushed out slowly by the star's radiant pressure from the light and solar cosmic rays it emits. How does the planet increase the potential for a star to gather mass? Because the planet, moving away from the parent star, has a chance to sweep up other bodies that the star might have captured into its gravitational expansion, and then, when the star goes giant and supergiant, the planet is pulled back into the star. In this way many solar systems can be seen as extended parts of their parent star that will be reabsorbed when it reaches its expanded phases at the end of the short phase of the solar lifecycle. This hypothesis can be called planetary swarm mass capture re-absorption. All of this would also be true of smaller stars ejected from larger stars in close proximity to them.
Let's talk about another topic, galaxy evolution. In the new model, galaxies merge and galaxies break apart, similar to the way stars form from other stars and planets and then crash into them as well. In our new model there is a continual process in the cosmos of celestial bodies and systems colliding and merging, and also dividing and separating. The model for galactic separation involves the dark quasar's light phase, if such a phase exists. A quasar, then, is a galaxy in the process of being about to divide. Another possibility is that the massive gamma-ray bursters hypothetically discovered by current astrophysics are supermassive supernovas of dark quasars at the cores of galaxies that have just undergone a binary fission into two dark quasars while going through a magnetic field collapse core overload.
Two dark quasars would generally push each other apart by radiating x-rays, gamma rays, and cosmic rays, resulting in the slow division of the galaxy, in many ways like a dividing cell in biology. Another possibility is that the force of the supermassive gamma-ray burst supernova propels the smaller part of the dark quasar expelled during a core overload explosion out into the outer regions of the galaxy, resulting in a big mess as stars adjust to the new gravitational regime.
Our new model is looking more complete the more we think about it. It's a bit odd at first and many questions exist. Some inconsistencies may be present, but the general ideas are all here. Stars and planets are powered externally by cosmic rays and external magnetic fields. They grow in the short term by expanding their magnetic fields through cosmic ray bombardment and external magnetic field interaction. They absorb mass through either collisions or the subduction of matter particles into their magnetic fields. They grow until they reach critical mass and then explode, often in a binary fission that produces a new planet or star. Sometimes they may expel some of their matter midway through a cycle if the conditions are right, as possibly suggested by hot jupiters, because a temporary minor core instability has developed from normal changes to their cores.
Each short cycle ends in a return to the baseline system with a minimum magnetic field and a dense core, usually in the form of a hot rocky planet-like body such as a white dwarf, but generally increasing in mass and energy output each cycle as more and more matter is added to the core through collisions or subduction. Eventually the star may live so long that it grows into a dark quasar and hosts its own galaxy. If not, it "dies" in a collision with another star or dark quasar, or evaporates in a void if it is flung out into the depths of space beyond galaxies and cannot find a new galactic home before it runs out of hyperdense core fuel.
Galaxies can be any size conceivable, as the central dark quasar has no limit on its own size. As we explore the extended cosmos we may find larger and larger systems the more we look. In our area of the universe there may even be a super core for the local space out to 10 billion light years. At its center is likely a truly gargantuan galaxy with a truly gargantuan central dark quasar. The idea of our local cosmos having a core of sorts has recently been supported by new evidence, as this Wikipedia page demonstrates. It can easily be hypothesized that the more we explore, the larger the structures we find will be. If this is an infinite physical universe, then this has no limits.
A few more odd realizations follow from what this new model could mean closer to home. If most celestial objects are created by binary fission, it is entirely plausible that the Moon was created by an ejection event from the Earth during a magnetic field overload of some kind. The current hypothesis is that the Moon was kicked out by a Mars sized body that impacted Earth and sent some of the debris into orbit, which eventually condensed into our Moon. Again, we don't have enough evidence to support either theory yet. There is also the old "dirt planet"/"dirt moon" hypothesis, that the Earth and the Moon condensed from a planetary nebula of some sort around the Sun, perhaps when the Sun was moving through a molecular cloud. Once again, data and investigation through observation and reasoning will reveal which of these hypotheses is most likely.
If the Earth was created from a planetary fission event, Jupiter or Saturn is the likely parent of Earth, not the Sun, as the Sun would tend to release a Jupiter or Saturn sized body on a binary fission core overload. If this is the case, how did Earth get to its present orbit in the solar system? A weird hypothesis in this regard is that Jupiter may have been the central star of the solar system before the Sun reached its current size. The Sun, during such an earlier phase in the Solar System, would have been a slightly smaller star. This sort of trade in roles is possible between different stars in different life cycle phases, and it is another intriguing feature of the new model. If this is true, then Jupiter would have undergone a core overload at some point and is now back to being a gas giant, perhaps having passed through another phase between its last core overload and now.
If this is true, then we would expect some sort of evidence in the form of differential radioactive isotopes on the surfaces of the different planets of our solar system. The planets' surfaces would essentially have different ages. The only reason they wouldn't, in this model, is if their surfaces were melted by Jupiter's core overload, during which a nova would have engulfed the surfaces of the inner solar system (which of course would have looked very different than today, as everything would have been rotating around a solar Jupiter instead of the Sun). This could be considered to have happened approximately 4.6 billion years ago. This interesting speculation is unlikely, though, as Jupiter would probably have taken longer to get back to being a gas giant, but it shouldn't be completely ruled out just yet. Incidentally, the idea that Earth was birthed from a gas giant is one I read from the Electrical Universe proponent Velikovsky, who proposed that the Earth was birthed from Saturn.
As you can see, there is no big bang in this model. The cosmos is likely a very, very large place, if not infinite, but the systems in it basically grow slowly and contract periodically. Where did all this come from? What is the ultimate source for our cosmos? Transdimensional theory, if you remember from the beginning of this essay, is where we must turn to consider the nature of the system at higher levels of purpose, nature, and being. But a few paradoxes need resolving first. The first is a very perplexing problem that can be referred to as the infinite physical cosmos external energy source paradox.
In this thought experiment we see that the cosmos is infinite and that the infinite becomes its own source. This immediately runs into the fallacy of magical thinking, as it is basically the perpetual motion machine. The thought begins with the idea that because the cosmos is infinite, more energy is always available from the next level above in the infinite series of "layers" between here and infinity. This idea is sort of the ultimate cosmic free lunch. We can balance it by suggesting that the cosmos is in balance in infinity, so that energy does come from everywhere in the form of cosmic rays and light, and it just sort of flows around the cosmos creating periodic shortages in some parts and periodic booms in others.
This easy solution doesn't seem too problematic on the surface; however, we never address whether the infinite paradox could in fact be valid as a source of external power, as the opposite is impossible to prove through serial logic. We have to come up with a theory of the infinite in order to frame our infinite cosmic concept and apply additional rational limits to the idea to make it sensible. Otherwise we are left with the free lunch problem, which is obviously ridiculous.
The problem in the analysis doesn't come from this, but from another, at first seemingly unrelated, problem in metaphysics. That problem is the problem of soul growth. Before I get to that area of analysis, however, I want to return to the model of the atom and go into chemistry, because what is greatest about our new expanding electron model is that it offers a new way of seeing chemistry, and biological processes rooted in chemistry, that could potentially offer major cures to diseases down the line. That is more important, and so I will cover these concepts now.
Atomic Orbitals as Currently Envisioned by Standard Chemistry Theory Presumably Through Extensions of Schrodinger's Wave Equation Which Cannot Be Used as Predictive Tools in Light of The New Electron Expansion Theory
The property of differential atomic bonds must be considered. Based on expansion theory, electrons expand and bounce off the expanding nucleus, which is also made of electrons. If neutrons consist of expanding electrons, why isn't the number of neutrons important in determining the chemical nature of the atom? Why do isotopes exist whose chemical properties are often nearly indistinguishable from one another?
The explanation must be that protons are the only subatomic particles that create the structure of the nucleus off which bouncing electrons form the electron cloud surrounding it. Protons are responsible for nearly all chemical properties of the vast majority of atoms, while neutrons are responsible for almost none of them but add mass to the nucleus. If electrons are bouncing off the nucleus, this means that neutrons cannot contribute their true electrons (of which they are made) to the expanding nucleus, or they would have an effect on the chemical properties of an element. So neutrons must exist in a manner that makes them "invisible" to the bouncing electrons.
My first solution was to consider the possibility that neutrons exist not as solid particles in the nucleus but as fields similar to magnetic fields; this can be called the neutron nuclear magnetic field hypothesis. My second thought was that if neutrons formed a shell whose size was determined by the protons in some fashion, then the electrons would be bouncing off that neutron shell, though I couldn't quite work out in my head how such a system would operate. My third and final thought was that protons could form the outer shell of the nucleus with the neutrons inside them, where their mass/volume would not affect the electron clouds bouncing off the nucleus. This is the idea which makes the most sense in terms of physical consistency with what we know so far, and it can be called the proton nuclear shell hypothesis. There is also the consideration of exotic explanations, such as the idea that the neutron is an energy phantom of internal processes inside the atom, though more analysis would be required to ascertain its exact nature.
Another part of the theory of new chemistry revolves around explaining the differential properties between atoms that we learn about in modern chemistry without expansion theory. We know certain atoms are more electronegative than others. We know that the right side of the periodic table has elements that tend to absorb electrons, while the metals on the left side tend to give electrons. The new physical theory of chemistry suggests a plausible explanation.
Firstly, if different atomic nuclei had different sizes, it would account for the different properties very clearly. A large atomic nucleus would have a higher nuclear expansion pressure and would tend to push electrons away. So metals should generally have larger nuclei, as they tend to donate electrons. Non-metals then must have smaller nuclei, and the differential between the expansion pressure of metals and that of non-metals explains why metals push their electrons towards the non-metals once these atom specific electron clouds overlap. In this concept, the larger the nucleus, the more expansion pressure exists in the atomic envelope surrounding it, because a larger nucleus expands with greater force than a smaller one. The nucleus is doubling every uT interval, so a larger nucleus doubles in the same time a smaller one does, creating more pressure because of its larger expansion area. This is related to the concept Mark mentions of larger planets having greater gravitational expansion based on their volume, not their mass. It means that the reason alkali metals give their electrons up so easily is that their nuclei are volumetrically larger than those of other atoms, so they literally push their electrons out of their envelopes due to the higher pressure within from the volumetrically larger expanding nucleus. Fluorine, oxygen, chlorine, and the other electronegative halogens and non-metals are electronegative because their relatively small nuclei do not create as much pressure in their atomic envelopes, which means electrons easily enter them because there is less resistance.
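The volume argument can be illustrated with a toy calculation. This is only a sketch of the scaling claim, assuming, as the paragraph above does, that every nucleus doubles in size (taken here as radius) over the same uT interval; the "pressure" proxy used is simply the volume swept out per interval, which is not a quantity formally defined by expansion theory.

```python
import math

# Toy illustration of the scaling claim: if every nucleus doubles in radius
# over the same uT interval, a volumetrically larger nucleus sweeps out far
# more volume per interval. Swept volume stands in for "expansion pressure"
# here; it is an illustrative proxy, not a defined physical quantity.
def swept_volume_per_uT(radius):
    """Volume gained in one uT if the radius doubles in that interval."""
    v_start = (4 / 3) * math.pi * radius**3
    v_end = (4 / 3) * math.pi * (2 * radius)**3
    return v_end - v_start              # always 7x the starting volume

small, large = 1.0, 3.0                 # arbitrary relative nuclear radii
print(swept_volume_per_uT(large) / swept_volume_per_uT(small))  # 27.0
# A nucleus with 3x the radius gains 27x the volume per uT, which is the
# sense in which a volumetrically larger nucleus would push its electron
# envelope outward harder.
```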
Additionally, the new model of the atom would have to take into account some interesting facts concerning isotopes. There is a small chemical difference between isotopes, with one major general exception: hydrogen's three isotopes. Deuterium and tritium are said to react somewhat differently from basic protium, which is single proton hydrogen. Deuterium and tritium have one and two neutrons respectively. Both are said to form generally stronger bonds than ordinary hydrogen. With a complete suite of properties comparing deuterium and tritium, we could find the physical changes in the nucleus that correspond to the chemical properties observed.
This would also be the easiest starting point in our thought experiment about the nature of the neutron - proton relationship within the atom as hydrogen is the simplest atom known.
A major problem exists with the definition of the electron in our current particle physics models that must be addressed. The particle our scientists call the electron is not the electron of expansion theory. The current particle we call the electron is understood as having a mass of 9.10938215(45)×10^-31 kg. However, this must be wrong in expansion theory if we examine the entire breadth of the theory. Electrons compose the clusters which make up light packets. If their mass were so high, then photons (the electron clusters in light) would weigh significantly more than they do, and would consequently have a much higher energy. Using Planck's and Einstein's equations, a photon of 500 nm wavelength (blue-green in color) has a mass of 4.417 × 10^-36 kg, which is over 200,000 times smaller than the mass of the official electron. The same light frequency, if we use the quantum as the basic particle of the light cluster/photon, uses up approximately 1.5 million protons per second to maintain (see the mass of the true electron in the Appendix). This means it would take 12 billion years for a mole of protons (about 1 gram of mass) to be used up generating this wavelength of light using the quantum, the true electron, as the basic particle (see the later discussion concerning the mass of the true electron).
If we use the current particle physics electron, on the other hand, assuming an equivalence of one electron mass to one quantum, we get that the same mole will last about 0.003 seconds; in one second the weakest light of 500 nm would use 325 grams of matter. This clearly cannot be the case. Continuing this analysis, the same light would require over 10 million kilograms, or 10,000 tons, to shine for one year. If this were the case, then shining a flashlight using even the weakest possible beam of light should vaporize the entire flashlight into light in about 3 or 4 seconds. Clearly, that does not happen. Photons thus cannot be made of electrons which are much, much more massive than they are, which is how we understand them currently, but photons can be made of quanta without any of these issues for the larger subatomic particle we now call the electron. This means that the particle described in contemporary particle physics experiments as the electron is not in fact McCutcheon's electron, which I have referred to as the true electron to avoid confusion. The current particle we call the electron must in fact be another particle made of true electrons that is stable in its configuration, much as protons and neutrons are. We must also now rename the electron to avoid confusion with the true electron, but first its true particle nature and properties must be explored to contextualize our new understanding of the old electron and its part in the subatomic zoo of particles (electron clusters) made of true electrons.
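These figures can be checked directly. The following Python sketch reproduces the numbers above from Planck's and Einstein's relations, using the rounded c = 3×10^8 m/s that the quoted figures imply; the beam model (ν clusters per second, with one photon per cluster in the true-electron picture, or ν standard electron masses per cluster in the other) is my reading of the argument, not an established formula.

```python
# Numerical check of the two mass budgets compared above.
h = 6.626e-34          # J*s, Planck's constant
c = 3.0e8              # m/s, speed of light (rounded, as the essay's figures imply)
m_e = 9.10938215e-31   # kg, standard particle-physics electron mass
m_p = 1.6726e-27       # kg, proton mass
N_A = 6.022e23         # particles per mole
year = 3.156e7         # seconds per year

nu = c / 500e-9                     # frequency of 500 nm light: 6e14 per second
m_photon = h * nu / c**2            # mass-equivalent of one photon
print(m_photon)                     # ~4.417e-36 kg, as quoted

# True-electron picture: nu photons per second, each of mass m_photon
mass_per_s = nu * m_photon
print(mass_per_s / m_p)                  # ~1.6 million protons' worth per second
print(N_A / (mass_per_s / m_p) / year)   # ~12 billion years per mole of protons

# Standard-electron picture: nu quanta per cluster, nu clusters per second
mass_per_s_std = nu**2 * m_e
print(mass_per_s_std * 1e3)         # ~328 g per second (the ~325 g quoted)
print(mass_per_s_std * year)        # ~1.0e7 kg, i.e. ~10,000 tons per year
```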
Concerning the electron as we know it today, it may be that the electron is a ball of true electrons, consisting of the whole body of true electrons bouncing off the nucleus, either in its entirety or within what is called a shell. A shell may or may not be a separate layer (in reference to standard chemical orbital theory) above the nucleus; it may simply be an amount of true electrons that can be expelled from the true electron envelope by nuclear expansion or by true electron cluster impact and remain stable as a cluster outside the nucleus. If this is the case, there would be some connection between the mass of the electron, the number of true electrons involved, and the discoveries of quantum physics with regard to the quantum condition and the angular momentum connection to the mass of the current electron (as used by Bohr in his equation).
McCutcheon's definition of electron clusters doesn't add up completely. If the quantum is a single electron, it should have a unit of MASS, not a unit of impulse as implied by Planck's equation E = hν. Based on his description of electron clusters, the amount of energy needed to create the smaller electron clusters is not reflected in the mass/energy ratio of the higher end of the EM spectrum. If a large electron cluster and a small electron cluster pass the same point at the speed of light, the larger cluster should have more energy than the small one, assuming they are moving at the same speed. The wavelength would increase, but the amount of mass passing a given point of space in the same time would remain the same, or would decrease if we use basic geometry. This is the opposite of what we should expect if the smaller clusters are at the high end of the spectrum. There is something obviously missing here. The problem may be with the way we interpret the E = hν experiment itself, or Mark may have been mistaken about which electron clusters of light are larger, as explained later.
Mark's idea that light is composed of electron clusters is correct once we see that the quantum is in fact the true electron, which is the basic particle of all matter and of light photons. I believe that all matter is composed of these true electrons, and I believe these true electrons are expanding at a fixed accelerating rate. I also believe it is possible to ascertain the nature of the true electron from what we know about physical constants so far, including Planck's constant.
I have found the mass of the true electron using Einstein's equation (which still applies to light itself as a measure of its energy and matter content) and Planck's equation. This mass is approximately 7.3622 × 10^-51 kg. Using Einstein's and Planck's equations it is possible to calculate the mass of each true electron cluster/spiral (photon) which composes an individual true electron cluster of EM radiation, from the terahertz band (the boundary, in Mark's description of EM radiation, between true electron clusters (photons) and the true electron bands of microwave/radio radiation) up through the high end gamma rays. Incidentally, this means that a single true electron travelling at the speed of light, if impacting another electron or an object at rest, would impart 6.626 × 10^-34 joules of energy. All of this, however, must still be reexamined, because we need a perfect understanding of all these properties first.
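For the record, the quoted figure follows from treating a single true electron moving at the speed of light as carrying one quantum of energy, numerically equal to Planck's constant, and applying Einstein's E = mc². A minimal check, again with the rounded c the figure implies:

```python
# Minimal check of the true-electron mass: one true electron at speed c is
# assumed to carry an energy numerically equal to Planck's constant, and
# E = mc^2 then gives its mass (c rounded to 3e8 m/s).
h = 6.626e-34        # J, energy ascribed to one true electron at speed c
c = 3.0e8            # m/s

m_true = h / c**2
print(m_true)        # ~7.3622e-51 kg, the figure quoted above
print(m_true * c**2) # ~6.626e-34 J imparted on impact, as stated
```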
The application of these equations, using mass as an indicator of the true electron, shows that Mark may have been mistaken when he assumed the gamma ray end of the spectrum contained smaller true electron clusters than the lower end. This is based on the fact that, as light travels at the speed of light when released from a source (from the point of view of the source), if a given photon has a higher energy, then it must have a higher mass in order to account for that energy. It has been well established through Planck's equation that gamma and x-rays have higher energies individually than EM radiation lower down the spectrum. The explanations that can be accepted given our current understanding of physics are these: the clusters at the high end are larger; or they are possibly not clusters but in fact twists on a corkscrew shaped stream of electrons; or the clusters are not actually physically touching (cannot physically touch/can overlap), so that more of them can be compressed into a single stream from a source, since clusters not in contact with one another can be squeezed tighter together. A further possibility is that the clusters are all the same size and pass a given point in larger numbers because they are not physically touching or can potentially overlap without disruption. In any case the energy and mass constraints of the system must not be violated, and only studies can determine which of these possibilities is real. It may be possible, with the large body of already existing data, to determine this without any additional new experiments on light.
After examining Planck's experiment, I concluded that Planck made a fundamental error similar to the errors Mark describes Einstein and Newton making in their equations: the insertion of an assumption into the equation as mathematical fact. This assumption is the existence of an actual wavelength. Mark demonstrates in his model and theory that wavelength is meaningless when measuring subatomic electrons or electron clusters, because length is only useful in the atomic realm when measuring objects made of atoms. Mark himself states that the electron clusters that make up light are in fact subatomic structures by definition, and as such their relative size in the physical is meaningless.
The result of this analysis is that the concept of wavelength has no meaning when applied to light. Logically, the old science assumed that light was similar to radio waves and that it propagated through the ether proposed in that time period. This is the mental environment that classical physics existed in, and it is the reason Planck assumed that wavelength existed. As a result the original black body radiation equation was stated in the form E = hν = hc/λ, where h is Planck's constant, ν is frequency, and λ is wavelength. But wavelength here is assumed in the sense of waves propagating in ether (the old concept of ether, which is similar though not exactly the same as the one promoted earlier in this essay), which Mark shows can only apply to the microwave and radio bands of electrons expanding outwards from an electrical field generating source in the macroatomic realm of molecules and larger.
If we make this mathematical shift in Planck's equation, then Planck's constant itself must be changed, because it is no longer true that E = hν = hc/λ; you will get two different units for h without the wavelength. The resulting idea also changes the way in which we can interpret this new situation. The first realization is that the frequency is no longer the number of electron clusters passing per second, hence it is no longer light's frequency. It is actually the number of electrons passing per second, because h, the quantum, is specifically the electron moving at the speed of light.
Only two possible interpretations of the concept of light can now be considered. The first is that light is electron clusters moving at the speed of light in a classical physics manner, expanding in the atomic realm with conservation of vector motion (my general thinking on the matter). The second is that light is electron clusters expanding in the atomic realm without classical vector motion, solely through expansion pressure, which is Mark's concept.
Deciding between these two ways of looking at the expanding electron concept comes down to which fits the data better. Right now I'm obviously partial to my interpretation, because the data says to me that electron clusters must be larger the closer one gets to the gamma ray end of the EM spectrum. If we have in fact eliminated wavelength as a measure of the size of electron clusters, because Planck was originally in error in including wavelength as an assumption (incidentally, Mark's original thinking was that wavelength was a measure of how large the electron clusters were), we are left with the startling conclusion that electron clusters are probably larger at the higher end of the EM spectrum, and that we can drop our ideas about electron clusters being spirals while preserving the whole electron cluster model by inverting the size ratios of the non banded EM spectrum (that which is above the mid terahertz range).
Let us expand now on the difference between light moving solely through expansion and light moving in a classical manner with vector motion conserved through the properties of matter/expanding electrons. The model of physicality is the issue, as partially mentioned earlier when dealing with the possible properties of matter and the electron specifically. In a mathematically expanding model of the electron it may be possible that Mark is correct, but even here we would need to run a computer simulation to see how the effects work in order to be sure. For the most part, the substantiation of properties could be a feature of physicality that is non-mathematical. The issue comes down to one of abstraction versus reality. Mark's expanding electron clusters, based solely on mathematical expansion, may only be valid in abstract thinking. The electron may not be abstract, and as such may have properties of a field, depending on whether fields are part of the complete picture. This is what I refer to as holistic field theory, and it is the basis of a complex view of reality where systems exist that are whole, with properties that are defined as extensions of universal logic. Mark's idea of a purely mathematical system of expanding electrons, while elegant and simple, needs to be proven in a simulation, and that simulation must match our observations.

Another issue regarding true electron clusters is the manner in which they are ejected from electrical fields as light by atoms. Mark believes that the ejection occurs in the gaps between atoms. While this analogy may work in the case of metals and electrical current in a wire, there are problems with the idea as it applies to ionized gases, because the gaps between the molecules of these gases would not be the same size. It seems likely that the atoms themselves may in fact eject the true electrons (in the form of clusters/spirals) from the true electron envelope surrounding the nucleus.
When a true electron or a true electron cluster strikes a nucleus, it imparts momentum and energy into the true electron envelope that surrounds the nucleus. The result is that the envelope may expand because vector motion is conserved inside the envelope when it is transferred from a bombarding outer source into the envelope itself. The phenomenon would be equivalent to heating gas and the gas expanding because of the additional energy contained in the gas system.
Returning just momentarily to astronomy, consider redshift and the Doppler effect. Redshift and blueshift on local scales are still due to the fact that if the source of electromagnetic radiation is moving towards or away from you, the number of clusters per second passing a given point is altered: more clusters per second arrive from an approaching source, increasing the apparent frequency, and fewer arrive from a receding source, decreasing the apparent frequency. Long distance redshift could still be caused by Mark's idea that electron clusters passing through magnetic fields and matter may be gaining or losing mass and increasing or decreasing in size, or perhaps losing speed. This idea has actually been around for a while and has been suggested by other opponents of the Big Bang Theory. Indeed, light does not have to travel at the speed of light according to this interpretation of electron expansion theory; it could conceivably travel at any speed. Light moving slower would have a lower frequency, while light moving faster would have a higher frequency, because the number of clusters passing per second would be greater regardless of the size of the clusters. We must assume that it is still the number of clusters passing per second that determines the frequency, rather than their size.
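In this cluster-counting picture, the local Doppler shift reduces to simple rate arithmetic. Here is a minimal sketch, assuming the classical first order form in which the arrival rate scales with the radial velocity over the speed of light; the function name and numbers are illustrative only:

```python
# Minimal sketch of the cluster-counting Doppler picture described above.
# Classical first-order form assumed: arrival rate scales as (1 + v/c).
def observed_cluster_rate(f_emit, v_radial, c=3.0e8):
    """Clusters per second seen by an observer. v_radial > 0 means the
    source approaches (rate rises, blueshift); v_radial < 0 means it
    recedes (rate falls, redshift)."""
    return f_emit * (1 + v_radial / c)

f_500nm = 6.0e14                               # clusters/s for 500 nm light
print(observed_cluster_rate(f_500nm, +3.0e6))  # approaching at 3000 km/s: +1%
print(observed_cluster_rate(f_500nm, -3.0e6))  # receding at 3000 km/s:  -1%
```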
Below the level of protons exist many different true electron clusters of smaller and smaller sizes that are not incorporated into light beams as light clusters/spirals. Current science has the concept of a tiny particle called the neutrino, which can pass through huge amounts of matter without interacting with it. While the neutrino is probably not what we currently think it is in the standard model, small true electron clusters hurled at tremendous speed would have a certain degree of penetrative power, either directly or through the secondary, tertiary, etc. clusters created by impact with a wall or deep object. While it is questionable whether a small true electron cluster of this nature could pass through miles upon miles of lead, as is alleged, it could very conceivably pass through a few hundred feet if it has sufficient energy on impact, or if it arrived in large waves/groups.
During Supernova 1987A, neutrino detectors registered a huge spike of neutrinos through secondary decay (secondary particles created upon impact) in large underground water tanks. It would be expected, in the event of a tremendous number of small true electron clusters arriving en masse from a supernova, that they could indeed create such an effect, passing through several hundred feet of concrete and causing such a mass release of secondary decay particles. These tanks also detect other decay events, though some scientists have questioned whether these are in fact neutrinos as the standard model describes them, and not some radioactive byproduct of the surrounding ground and walls. During the arrival of the energetic true electron clusters from Supernova 1987A, there was most definitely a penetration into the water tanks by secondary rays caused by the arrival of these particles. According to current stellar models, neutrinos in fact represent 99%+ of a supernova's energy output.
While the current models are questionable in many regards, the idea that much of the energy of such an explosion could be released in the form of clusters of true electrons must be considered as a possible explanation for the existence of what we call neutrinos. These clusters would not be identified as light; instead they would be freely expanding true electrons outside the framework of recognizable light beams or normal EM radiation, because they would exist not as a stream but as individual true electron clusters expelled from a core explosion of an electromagnetic and/or nuclear variety. Such neutrinos would come in all manner of sizes, from 1 true electron all the way up to perhaps the size of a proton (though at that size they would qualify as cosmic rays), and thus would have a large range of masses and energies. They would still be absorbed by a sufficiently thick slab of matter, which would likely be far thinner than the amount currently assumed. These true electron clusters, which we may call non-photon sub-proton true electron clusters, would be very important in determining invisible fluxes of energy in the Cosmos, because they would be present just about everywhere and could alter the equations of magnetic field absorption of cosmic rays if they are responsible for transferring large amounts of energy into magnetic fields. These largely invisible types of true electron clusters are certainly part of the general energy-matter flux of the cosmos.
The final topic in our general science roundup is possible future technology. The same process of electrical and magnetic field alteration of matter that occurs in stars and planets under sufficient magnetic field stress can be applied in a laboratory. It is likely that matter can be copied or transformed under the right magnetic and electrical field stress. Magnetic field and electrical field streams are composed of the same electrons as all matter. If we have sufficient electrons in a group we have a proton; if we have a great deal more we end up with an atom. Electrons in a magnetic field sheet are the same as electrons in an electric current, which are the same as the ones in atoms (if they are true electrons according to the new definition). This means that if we are able to condense a sheet or current into the sphere of a proton and successfully eject it from the electrical/magnetic field, we have manufactured a proton from an electrical or magnetic field.
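Using the true electron mass derived earlier, we can at least put a number on what "sufficient electrons in a group" would mean. This is purely illustrative arithmetic under this essay's assumptions:

```python
# If a proton is simply a large cluster of true electrons, the true-electron
# mass derived earlier (h / c^2, an assumption of this essay) implies roughly
# how many would have to be condensed out of a field to make one proton.
m_p = 1.6726e-27      # kg, proton mass
m_true = 7.3622e-51   # kg, the essay's true-electron mass

print(m_p / m_true)   # ~2.3e23 true electrons per manufactured proton
```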
In a similar manner, we could potentially form an atom or even a molecule from an electromagnetic mold at the proper "frequency"/current/voltage (inner magnetic or electrical field) by molding the shape of an atom (using the atom itself) against the background of the current/magnetic field and cutting out an exact shape, true electron for true electron, under perfect conditions. We could even add matter to atoms to change them from one type to another. Of course, this is all hypothetical and would require a great deal more knowledge than we currently have.
Thus, under the right conditions, and with sufficient energy, we could indeed convert lead to gold if we understood the new science completely. We could also make more of any elemental substance we required, simply by finding the magnetic field stress, combined with a proper electrical current of the right voltage and amperage, that removes true electrons from the electrical or magnetic stream and stores them in the elemental crystal lattice. This would obviously take a great deal of energy and would likely be very destructive in terms of the energy released when a stream of electrons traveling at the speed of light is brought to an abrupt halt inside a crystal lattice; a rough estimate appears below. This process would likely grow elements only very slowly in any appreciable amounts, at least at first, but it could very likely be used to prove certain theories concerning the development of stellar cores. It could, however, be made to work if there is a way to recycle at least some of the energy released into the crystal lattice when the stream condenses out of it.
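To give a sense of scale, here is a back-of-the-envelope sketch in Python. It assumes, purely for illustration, that the ordinary classical kinetic energy formula (½mv²) applies to the condensing stream and that the stream moves at light speed; the new theory may ultimately demand different bookkeeping.

```python
# Rough, illustrative estimate of the energy deposited when a stream of
# electrons traveling at light speed condenses into one kilogram of new
# elemental matter.  Classical kinetic energy (0.5*m*v^2) is assumed to
# apply here as an illustration, not as a result of the theory.
C = 2.99792458e8        # speed of light, m/s
MASS_GROWN_KG = 1.0     # one kilogram of newly condensed matter

kinetic_energy_j = 0.5 * MASS_GROWN_KG * C**2
tnt_megatons = kinetic_energy_j / 4.184e15   # 1 megaton TNT = 4.184e15 J

print(f"Energy released per kg grown: {kinetic_energy_j:.2e} J")
print(f"Roughly {tnt_megatons:.0f} megatons of TNT equivalent")
```

Even at this crude level, the figure (tens of megatons per kilogram grown) makes clear why recycling the released energy would be essential to any practical version of the process.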
If we could reliably use this approach to create any element, we would have no need to mine asteroids or dig further into the Earth in search of fresh metals. Our current supplies of many metals are estimated to begin running out in just 30 years. Finding an easy solution to this shortage may simply be a matter of building sufficient renewable energy resources and the necessary machinery to replicate any element that we may require. Do we have the science and the models yet? No, but in the next 20 years, as computer science advances and new scientists come forward to work on these problems, this solution to our material shortage may become viable. It is certainly worth looking into.
According to our current theories it is impossible to go faster than light. According to both electron expansion theory and the derived science we have been describing, faster than light travel is very much possible. Several new ideas concerning potential energy storage technologies emerge within the context of the new science.
Our current technologies are limited to the periodic table as we know it, as well as electromagnetism as we know it. The highest densities obtainable according to our current ideas are only a little more than what is available in the Earth's crust. Under higher pressure, possibly both electromagnetic and gravitic, elements such as uranium can become stable, and elements higher in the periodic table can also enter the realm of stability. According to the best guesstimate our current science allows, the highest possible density will be neutron-star-matter density under normal conditions, though higher levels may be possible. This means that elements should exist with larger and larger nuclei up to near neutron star density. It also means that matter can be compressed to much higher densities under sufficient pressure. It is this hyperdense matter, which exists at the core of stars and planets, that can be manufactured in a laboratory under the proper conditions and could allow us to power starships able to reach other stars relatively quickly.
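For scale, the following minimal comparison uses standard reference densities: typical crust rock, osmium as the densest known element, and a commonly quoted neutron star core figure. The hyperdense matter proposed here is hypothetical; these numbers only bracket the size of the jump involved.

```python
# Standard reference densities in kg/m^3, used purely for scale.
EARTH_CRUST = 2.7e3      # typical continental crust rock
OSMIUM = 2.26e4          # densest naturally occurring element
NEUTRON_STAR = 4.0e17    # commonly quoted core density estimate

print(f"Osmium vs crust rock:   {OSMIUM / EARTH_CRUST:.0f}x denser")
print(f"Neutron star vs osmium: {NEUTRON_STAR / OSMIUM:.1e}x denser")
```

The densest ordinary element is less than ten times denser than crust rock, while neutron star matter is some thirteen orders of magnitude beyond that, which is the range the proposed intermediate hyperdense materials would occupy.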
Just as a star or planet can possibly manufacture new elemental matter and condense matter in its core through electrical and magnetic activity under cosmic ray bombardment pressure, so too can a miniature-scale system recreate this process in a laboratory. The same processes that work in planets and stars can work to create ultra dense matter, preferably in a laboratory on the far side of the Moon, using particle accelerators to bombard magnetic fields, electrical currents, or magnetic field pressure. All that is required is a complete model of the proposed electromagnetic alterations involved and an equipped lab to do the work. Of course, safety needs to be a priority here, as a core overload of hyperdense transuranium metals would make the Fukushima disaster look like a picnic by comparison. Perhaps the experiments can be conducted in the future in space, far, far away from our delicate planet (as far away as another planet if necessary), as these technologies would be spaceship technologies for the future.
Ultra dense matter, if it is in metallic form, would serve as a sort of ultra-fission material that could be used in starships as a reactor, similar to a radioisotope thermoelectric generator but much, much more powerful. Another possibility is that the reaction could be controlled in the manner of a star or planet: in a controlled environment, the magnetic field of the reactor core would be kept stable by embedding it in a much larger expanded electromagnetic field created by a plasma fluid; then, by modulating that magnetic field, power could be drawn from the electrical field in the reactor core itself. This parallels our new stellar model, which holds that when cosmic ray bombardment drops, stellar magnetic fields expand and begin drawing energy from the electrical fields in the core of the star, which in turn draws matter out of the core, where it emits energy mostly through radioactive decay.
Such elemental matter in a reactor core would have to be kept stable with continual electrical and magnetic field pressure to prevent it from exploding with a much, much greater force than an atomic bomb. Also, when modulating the core, it would be very important to know exactly how much modulation of the larger plasma matrix field would be safe, to prevent a reactor core overload. After all, we would be playing with the same force that is likely responsible for some novae and supernovae somewhere in the cosmos. As such, the development of this technology will have to wait until we can safely contain it and until we have a sufficiently large clean energy resource for generating the power to create the system. I have called the hypothetical engine system derived from this technology a Magnetically Stabilized Micro Stellar Core Fission Reactor Engine (MSMSCFRE).
For a safer, more stable means of powering propulsion, hyperdense crystals of ordinary substances composed of several atoms, such as quartz, could be designed and created in a lab: crystals sufficiently dense that they could store light, or perhaps electrical energy, in their hyperdense lattices. The crystal would be etched as it is created with a pathway to release this energy (if it is in the form of light) slowly from the core. The long-term goal of such a project would be to create crystals of hundreds or thousands of times normal density that could hold and contain additional energy in a small volume. Because these crystals would not be explosive in the way that a very heavy element would be, they would be relatively safe to handle outside a stabilizing magnetic and/or electrical field. The hypothetical engine system designed around this type of reactor would be called a Hyper Dense Light Storage Crystal Reactor Engine (HDLSCRE), or a Hyper Dense Crystal Capacitor Reactor Engine (HDCCRE) if electricity is stored instead of light.
A third hypothetical idea is to use the property of spin, which is probably the safest and easiest way to store energy, as long as the spin does not tear the core apart. We see spin in the "neutron stars" we call pulsars, holding together incredibly powerful magnetic fields at a distance. Using physical spin and containing the core with magnetic fields and various safety features may be the best physical way to store energy, since spin can hypothetically be much faster than the speed of light. If you run the analysis for the likely properties of "neutron stars" (as envisioned in the Cosmogenesis #1 file, where neutron stars are actually about the size of white dwarves or slightly bigger, but composed of much higher density hyperdense matter), the math shows the surface speed of the pulsar spin would exceed the speed of light by quite a bit, as sketched below. Could we use ordinary matter instead of hyperdense matter for safety purposes in such a system? It would depend on how well that matter held up under such immense pressure; the answer is probably not, for very, very high rates of speed. But down the line we could potentially use the property of spin to add much more energy to a system at the limits of what could be created in terms of density, such as one we would refer to as solid neutronium (neutron star matter). Such a system is far, far into the future of our potential technology, of course.
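Here is a minimal sketch of that math, assuming a white-dwarf-sized radius of roughly 7,000 km (per the Cosmogenesis #1 picture) and the roughly 1.4-millisecond period of the fastest known pulsar. Both inputs are stand-in values chosen for illustration.

```python
import math

# Equatorial surface speed of a spinning sphere: v = 2*pi*R / P.
RADIUS_M = 7.0e6     # assumed white-dwarf-scale radius (Cosmogenesis #1)
PERIOD_S = 1.4e-3    # approximate period of the fastest known pulsar
C = 2.99792458e8     # speed of light, m/s

surface_speed = 2 * math.pi * RADIUS_M / PERIOD_S
print(f"Equatorial surface speed: {surface_speed:.2e} m/s")
print(f"About {surface_speed / C:.0f} times the speed of light")
```

Under these assumptions the equator would be moving at roughly one hundred times light speed, which is the sense in which the pulsar spin "exceeds the speed of light by quite a bit" in this model. Under the standard model's 10 to 12 km radius, by contrast, the same period stays well below light speed.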
The property of spin, though, offers what I believe is the best avenue for attempting to design systems at our current technological level, even with ordinary-density matter; a simple flywheel calculation below illustrates the scaling. In addition, other types of energy storage systems can be envisioned using the speeds of electrons internal to magnetic fields. Speed stores energy; it is as simple as that. And if we are getting into deep space, we are going to need it one day.
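As a grounded illustration of how speed stores energy, the standard flywheel formula for a solid spinning disk shows stored energy growing with the square of rim speed. The mass, radius, and rim speed below are arbitrary example values.

```python
# Kinetic energy of a spinning solid disk: E = 0.5 * I * w^2 with
# moment of inertia I = 0.5 * m * r^2, which reduces to
# E = 0.25 * m * v_rim^2.  Example values chosen only for illustration.
MASS_KG = 1000.0         # flywheel mass
RIM_SPEED_M_S = 500.0    # rim speed, m/s

energy_j = 0.25 * MASS_KG * RIM_SPEED_M_S**2
print(f"Stored energy: {energy_j:.2e} J")

# Doubling the rim speed quadruples the stored energy:
print(f"At twice the rim speed: {0.25 * MASS_KG * (2 * RIM_SPEED_M_S)**2:.2e} J")
```

The quadratic scaling is the whole point: every doubling of speed quadruples the stored energy, which is why pushing spin to its material limits is so attractive as a storage strategy.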
Appendix
1. A hypothetical mechanism can be constructed to account for what might happen when two protons (hydrogen nuclei) collide in nuclear fusion. The energy of the motion becomes locked into the structure of the new deuterium nucleus created in the high energy impact. One plausible hypothesis is that the impact creates a double nucleus with one shell of electrons on the outside and one shell on the inside. Because of this rearrangement, the pent-up motion of the collision is stored within the structure of this double shell in the form of expanding electrons bouncing between the inner shell (neutron?) and the outer shell (proton?). This energy can be liberated in fusion of deuterium or tritium, but not of basic hydrogen, which has no stored motion until the impact creates the new physical configuration in which motion can be stored as this complex structural rearrangement of pressure. Obviously more thought has to go into the idea in order to get a better grasp of the exact workings of this model, or of plausible alternative models.
This inner shell for neutrons and outer shell for protons would explain why only protons are important in determining an atom's primary chemical characteristics: the neutrons would sit inside the shell of protons and so have no impact on the electron cloud bouncing off the nucleus. The exception would be that, because the hydrogen atom is very small, doubling the size of the nucleus from a single proton to a proton plus a neutron would produce some change in chemical properties.
Mass of Proton: 1.672621637(83) × 10⁻²⁷ kg
Mass of True Electron: approximately 7.3622 × 10⁻⁵¹ kg
Number of True Electrons per Proton: approximately 2.2719 × 10²³
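The third figure follows from the first two by simple division, and a one-line check confirms the arithmetic. The proton mass is the standard measured value; the true-electron mass is, of course, the value specific to this essay.

```python
# N = m_proton / m_true_electron, using the values quoted above.
M_PROTON = 1.672621637e-27     # kg (standard measured value)
M_TRUE_ELECTRON = 7.3622e-51   # kg (value specific to this essay)

print(f"True electrons per proton: {M_PROTON / M_TRUE_ELECTRON:.4e}")
# -> approximately 2.2719e+23, matching the figure above
```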
New Essays Concerning the New Science Paradigm