Quotes & Sayings

We, and creation itself, actualize the possibilities of the God who sustains the world, towards becoming in the world in a fuller, deeper way. - R.E. Slater

There is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have [consequential effects upon] the world around us. - Process Metaphysician Alfred North Whitehead

Kurt Gödel's Incompleteness Theorem says (i) all closed systems are unprovable within themselves and, that (ii) all open systems are rightly understood as incomplete. - R.E. Slater

The most true thing about you is what God has said to you in Christ, "You are My Beloved." - Tripp Fuller

The God among us is the God who refuses to be God without us, so great is God's Love. - Tripp Fuller

According to some Christian outlooks we were made for another world. Perhaps, rather, we were made for this world to recreate, reclaim, redeem, and renew unto God's future aspiration by the power of His Spirit. - R.E. Slater

Our eschatological ethos is to love. To stand with those who are oppressed. To stand against those who are oppressing. It is that simple. Love is our only calling and Christian Hope. - R.E. Slater

Secularization theory has been massively falsified. We don't live in an age of secularity. We live in an age of explosive, pervasive religiosity... an age of religious pluralism. - Peter L. Berger

Exploring the edge of life and faith in a post-everything world. - Todd Littleton

I don't need another reason to believe, your love is all around for me to see. – Anon

Thou art our need; and in giving us more of thyself thou givest us all. - Khalil Gibran, Prayer XXIII

Be careful what you pretend to be. You become what you pretend to be. - Kurt Vonnegut

Religious beliefs, far from being primary, are often shaped and adjusted by our social goals. - Jim Forest

We become who we are by what we believe and can justify. - R.E. Slater

People, even more than things, need to be restored, renewed, revived, reclaimed, and redeemed; never throw out anyone. – Anon

Certainly, God's love has made fools of us all. - R.E. Slater

An apocalyptic Christian faith doesn't wait for Jesus to come, but for Jesus to become in our midst. - R.E. Slater

Christian belief in God begins with the cross and resurrection of Jesus, not with rational apologetics. - Eberhard Jüngel, Jürgen Moltmann

Our knowledge of God is through the 'I-Thou' encounter, not in finding God at the end of a syllogism or argument. There is a grave danger in any Christian treatment of God as an object. The God of Jesus Christ and Scripture is irreducibly subject and never made as an object, a force, a power, or a principle that can be manipulated. - Emil Brunner

“Ehyeh Asher Ehyeh” means "I will be that who I have yet to become." - God (Ex 3.14) or, conversely, “I AM who I AM Becoming.”

Our job is to love others without stopping to inquire whether or not they are worthy. - Thomas Merton

The church is God's world-changing social experiment of bringing unlikes and differents to the Eucharist/Communion table to share life with one another as a new kind of family. When this happens, we show to the world what love, justice, peace, reconciliation, and life together is designed by God to be. The church is God's show-and-tell for the world to see how God wants us to live as a blended, global, polypluralistic family united with one will, by one Lord, and baptized by one Spirit. – Anon

The cross that is planted at the heart of the history of the world cannot be uprooted. - Jacques Ellul

The Unity in whose loving presence the universe unfolds is inside each person as a call to welcome the stranger, protect animals and the earth, respect the dignity of each person, think new thoughts, and help bring about ecological civilizations. - John Cobb & Farhan A. Shah

If you board the wrong train it is of no use running along the corridors of the train in the other direction. - Dietrich Bonhoeffer

God's justice is restorative rather than punitive; His discipline is merciful rather than punishing; His power is made perfect in weakness; and His grace is sufficient for all. – Anon

Our little [biblical] systems have their day; they have their day and cease to be. They are but broken lights of Thee, and Thou, O God art more than they. - Alfred Lord Tennyson

We can’t control God; God is uncontrollable. God can’t control us; God’s love is uncontrolling! - Thomas Jay Oord

Life in perspective but always in process... as we are relational beings in process to one another, so life events are in process in relation to each event... as God is to Self, is to world, is to us... like Father, like sons and daughters, like events... life in process yet always in perspective. - R.E. Slater

To promote societal transition to sustainable ways of living and a global society founded on a shared ethical framework which includes respect and care for the community of life, ecological integrity, universal human rights, respect for diversity, economic justice, democracy, and a culture of peace. - The Earth Charter Mission Statement

Christian humanism is the belief that human freedom, individual conscience, and unencumbered rational inquiry are compatible with the practice of Christianity or even intrinsic in its doctrine. It represents a philosophical union of Christian faith and classical humanist principles. - Scott Postma

It is never wise to have a self-appointed religious institution determine a nation's moral code. The opportunities for moral compromise and failure are high; the moral codes and creeds assuredly racist, discriminatory, or subjectively and religiously defined; and the pronouncement of inhumanitarian political objectives quite predictable. - R.E. Slater

God's love must both center and define the Christian faith and all religious or human faiths seeking human and ecological balance in worlds of subtraction, harm, tragedy, and evil. - R.E. Slater

In Whitehead’s process ontology, we can think of the experiential ground of reality as an eternal pulse whereby what is objectively public in one moment becomes subjectively prehended in the next, and whereby the subject that emerges from its feelings then perishes into public expression as an object (or “superject”) aiming for novelty. There is a rhythm of Being between object and subject, not an ontological division. This rhythm powers the creative growth of the universe from one occasion of experience to the next. This is the Whiteheadian mantra: “The many become one and are increased by one.” - Matthew Segall

Without Love there is no Truth. And True Truth is always Loving. There is no dichotomy between these terms but only seamless integration. This is the premier centering focus of a Processual Theology of Love. - R.E. Slater


Note: Generally I do not respond to commentary. I may read the comments but wish to reserve my time to write (or write from the comments I read). Instead, I'd like to see our community help one another and in the helping encourage and exhort each of us towards Christian love in Christ Jesus our Lord and Savior. - re slater

Thursday, July 1, 2021

Whitehead's Process Speculation about Multiverses before there was Speculation


Over the last several days I put together a couple of articles on EM/QED, and then saw Paul's statement below connecting Whitehead's multiverse theory and electromagnetic societies as spacetime singularities:
Alfred North Whitehead, the smartest man who ever lived [in my opinion], foretold of our universe existing as only one of many. Today it is known as the multiverse theory.

Seventy years before modern physics, [mathematician-philosopher] Alfred North Whitehead pioneered the framework of multiverse theory by what he described as a "plurality of cosmic epochs", “the theory of society,” and the notion of "the geometrical society" which harbors the existence of the cosmic epochs - one which [may] contain all possible geometrical configurations, allowing multiple dimensions required by M-theory.

Whitehead also foretold that evidence of “our cosmic epoch” (our universe) is all we would be able to trace.
The phrase "cosmic epoch" is used "to mean the widest society of actual entities whose immediate relevance to ourselves is traceable."

Whitehead also called “our cosmic epoch” an "electromagnetic society" that began as a spacetime singularity - now known as the big bang, which occurred roughly 14 billion years ago; the universe has been expanding and cooling ever since.

Following up Paul's reference turned up this little gem by Leemon McHenry at California State University:

The Multiverse Conjecture: Whitehead’s Cosmic Epochs and Contemporary Cosmology (21 pages)

*Leemon McHenry teaches philosophy at California State University, Northridge, CA 
Abstract: Recent developments in cosmology and particle physics have led to speculation that our universe is merely one of a multitude of universes. While this notion, the multiverse hypothesis, is highly contested as legitimate science, it has nonetheless struck many physicists as a necessary consequence of the effort to construct a final, unified theory. In Process and Reality (1929), his magnum opus, Alfred North Whitehead advanced a cosmology as part of his general metaphysics of process. Part of this project involved a theory of cosmic epochs which bears a remarkable affinity to current cosmological speculation. This paper demonstrates how the basic framework of a multiverse theory is already present in Whitehead’s cosmology and defends the necessity of speculation in the quest for an explanatory description. 


An example of entropy in biological systems

Process as a Continuous State of Unfolding Entropy

For myself, I can see the historical appropriateness of connecting Whitehead's process philosophy to Maxwell's electromagnetic theory (it's easy enough to do between both systems, as I hinted at here). I have no problem with multiverse theory either (it would be highly unusual if our cosmos were the only universe, without simultaneous derivatives of all kinds of universes as proposed under M-theory). But I didn't think the many-worlds concept came out until Hugh Everett proposed it in 1957. Still, process philosophy connects very easily with this theory too, as apparently Whitehead speculated when sensing the flow and rhythm of an organic universe. That is to say, things do not arise by themselves, but in relational communities with one another, which is the nub of multiverse theory.

Said differently, even as evolutionary theory shows the process of entropy attempting to lower its loss of energy (thus a hot earth is cooled by living organic processes), so too would one expect an evolution of cosmoi (the plural of cosmos) which come and go, exploiting all connective opportunities while driven towards novelty and in-state wellbeing.

Thus, this cosmos in its perturbations finds us within it, asking the kinds of questions sentient beings might ask within the framework and conditions of this cosmos. (I lean strongly towards the weak anthropic principle - see here and here: "A Tale of Two Cities" - where there can be no preconditions, no divine fiats or commands overruling the process proceeding from God's Self, only undirected interactions in relational context to the whole. My only argument for the strong anthropic principle lies in the embedding of God's Self and His Love within the process itself, granting a positive creativity and need for wellbeing. Combined, both concepts give a teleology to process theology.)

Does Process Thought Allow for a Teleology?

That said, I also believe God gave to all universes indeterminate freewill underlaid by the process principles of divine love, speaking not only to freedom but to wellbeing (a kind of entropic statism, if you will).
How Do We Explain the Incredible Uniqueness of Our Form of Multiverse?

[Excerpt] "The concept of other universes has been proposed to explain how our Universe appears to be fine-tuned for conscious life as we experience it. If there were a large (possibly infinite) number of universes, each with possibly different physical laws (or different fundamental physical constants), some of these universes, even if very few, would have the combination of laws and fundamental parameters that are suitable for the development of matter, astronomical structures, elemental diversity, stars, and planets that can exist long enough for life to emerge and evolve.

"The weak anthropic principle could then be applied to conclude that we (as conscious beings) would only exist in one of those few universes that happened to be finely tuned, permitting the existence of life with developing consciousness. Thus, while the probability might be extremely small that any particular universe would have the requisite conditions for life (as we understand [carbon-based] life) to emerge and evolve, this would not necessarily require intelligent design per a teleological argument for the strong anthropic principle as the only explanation for the conditions in the Universe which promote our existence in it." - res, June 14, 2012


Summary Speculations

Which means our universe is neither the first nor the last in a long line of novel creations, but is always found in the perpetual state of outcome as "organically relational cosmic entities" developing from states of being towards endless states of becoming. And not simply one after the other in linear progression, but as many, becoming many more, like bubbles shot out of an infinite number of bubble guns!

This is the kind of process creation which I would expect a process God to have created. A God who himself is the first process of all succeeding, subtending processes as they mix and break apart from one another in differing combinations of novelty and creative relational bundling.

And lastly, we live in a much older "-Verse" than mere physical light years can measure when thinking of all the preceding -verses which have come and gone before our own. -Verses which are more than the matter of which they are composed. But a summing up of a panpsychic, panexistential, albeit "spiritual," presence we seldom seem to sense as we bustle about like ants on the ground, but can feel vibrating all around us in the aftermaths of creational spaces.

The vagabond butterfly, the whispering tree, the moving wind... even our own personal beings and presence with one another with nature itself. There is a there, there, which one might call divine or spiritual but we might all call beauty and wonder. And like Whitehead's speculation, God's handiwork extends everywhere... both in this world and far beyond it.

Though a Process God does not determine the future, as the Very Process itself God is steeped within it, infilling all its spaces and relational processes. God does not need to know the future because God is the future. God is the One who ever lives on the edge of the becoming future.

Process Theology then is a different kind of animal than the church has witnessed before, even as are evolutionary processes and the quantum sciences. And it's time to re-read the bible's narratival stories with an eye towards the processes occurring in spacetime amongst the ancients - and even before them in the primal dawns of the living, the lit, and the whole. Peace, my friends. Peace.

July 1, 2021

* * * * * * * *

Are we living in a Multiverse?

A closer look at four different types
of parallel universe(s)

by Prince
March 26, 2018

I have found the multiverse subject extremely enticing, as it provided me a way to reflect upon my existence and forced me to question everything. In this article I want to share and explain from different perspectives (scientific, theological, fictional, philosophical) the four different levels of multiverses suggested by scientists and astronomers.

We usually think of our universe as a vast, nearly endless expanse that contains every star [and] galaxy in existence. But what if there is more than one universe? Could it be that we live in a multiverse?

Our universe, as we know, originated from a huge explosion known as the big bang. In the first split second after the big bang, the universe underwent a phase of extremely rapid expansion known as cosmic inflation.

Over the last 13.7 billion years our universe has expanded enormously from a size smaller than an atom, and [it has kept expanding ever] since. There was even a time when the universe was expanding so rapidly that parts of it were moving away from each other faster than the speed of light.

Why might we be living in a Multiverse?

In ancient times it was believed that the earth [was the center of the universe and the other planets revolved around it. Then], later on, we discovered that the earth revolves around the Sun, which is part of the solar system, and our solar system was found to be part of the Milky Way galaxy. By further observations scientists learned that our universe is composed of billions of such galaxies, each galaxy containing billions of stars. We can only see a small portion of the entire universe, known as the observable universe, which has a diameter of about 93 billion light years and a radius of approximately 46 billion light years.
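As a rough sanity check on those figures, the comoving radius of the observable universe can be estimated by numerically integrating the standard flat ΛCDM expansion history. The parameter values below are my assumptions (approximately the Planck 2018 fit), not figures from the article:

```python
import math

# Assumed flat-LambdaCDM parameters (roughly the Planck 2018 values;
# these are illustrative assumptions, not quoted from the article above)
H0 = 67.7                  # Hubble constant, km/s/Mpc
OMEGA_M = 0.31             # matter density fraction
OMEGA_R = 9.2e-5           # radiation density fraction
OMEGA_L = 0.69             # dark-energy density fraction

C_KM_S = 299792.458        # speed of light, km/s
MPC_TO_GLY = 3.2616e-3     # 1 megaparsec = 0.0032616 billion light years

def comoving_horizon_gly(steps=200_000):
    """Comoving particle-horizon radius in billions of light years.

    D = (c/H0) * integral_0^1 da / (a^2 * E(a)),
    where E(a) = sqrt(OMEGA_R/a^4 + OMEGA_M/a^3 + OMEGA_L).
    Midpoint rule; the integrand stays finite as a -> 0 thanks to radiation.
    """
    da = 1.0 / steps
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) * da
        e = math.sqrt(OMEGA_R / a**4 + OMEGA_M / a**3 + OMEGA_L)
        total += da / (a * a * e)
    return (C_KM_S / H0) * total * MPC_TO_GLY

radius = comoving_horizon_gly()
print(radius, 2 * radius)   # roughly 46 and 93
```

With these parameters the integral gives a radius of roughly 46 billion light years, i.e. a diameter of roughly 93 billion light years, consistent with the figures quoted above.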

According to modern theories of particle physics there might be other parallel universes like ours in a vast collection of universes, the so-called multiverse. Scientists have started taking the idea of parallel universes seriously in recent years, and many cosmologists today entertain the concept of a multiverse, the idea that our universe might not be the only one of its kind. There are a number of theories about what the multiverse could be, and four different levels at which to look at them.

Level I — The Quilted Multiverse

Quilted Universe

There isn’t one single multiverse hypothesis; cosmologist Max Tegmark has proposed four different types of multiverse that might exist. The quilted multiverse model is predicted by the theory of inflation developed by Alan Guth and Andrei Linde, which suggests that space itself is not just big but infinite in size. Beyond the range of our telescopes are other regions of space; those regions are a type of parallel universe with the same physical laws and constants, some similar to ours and some very different, and there is some probability that one of those parallel regions is identical to ours.

From a scientific point of view, we are just a configuration of particles, and matter can be arranged in only finitely many ways within a finite region, after which configurations must repeat. Based on this idea, Dr Tegmark estimates that a Hubble volume identical to ours should lie around 10^(10^118) meters away; beyond such distances configurations must repeat, which means there might be another you in another universe. However, we cannot observe those regions of space with our current technology; the farthest we can observe is about 46 billion light years away, the distance from which light has been able to reach us since the big bang. This quilted multiverse model is not really a theory but a prediction: it is entailed by the theory of inflation, and it is consistent with the data available to cosmologists.
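The repetition claim is, at bottom, the pigeonhole principle: if a finite region admits only finitely many particle configurations, then any collection of more regions than configurations must contain duplicates. A toy sketch (the grid size and "regions" are purely illustrative assumptions, not Tegmark's actual calculation):

```python
import random

# Toy "quilted" repetition: model each Hubble volume as a grid of k cells,
# each cell either occupied (1) or empty (0).
k = 8
n_configs = 2 ** k                      # only 256 distinct configurations exist

random.seed(42)
# Sample one more region than there are possible configurations...
regions = [tuple(random.randint(0, 1) for _ in range(k))
           for _ in range(n_configs + 1)]

# ...so the pigeonhole principle guarantees at least two identical regions.
assert len(set(regions)) < len(regions)
print("duplicate regions are unavoidable:", len(regions), "regions,",
      n_configs, "possible configurations")
```

The same counting argument, scaled up to the roughly 10^118 protons in a Hubble volume, is what produces Tegmark's enormous but finite repetition distance.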

For example, Einstein’s general theory of relativity began as speculation, but it has since been repeatedly tested and confirmed, and scientists now use it as a scientific model to make sense of events, even though it predicts many things which cannot be directly observed, such as what happens if you fall inside a black hole. The quilted model makes sense from a scientific perspective in the same way, in that it follows logically from an established framework. It is not yet a scientific model in its own right, as it is not testable and we have no observational evidence for it, but it is a prediction of one. For a theory to be scientific you don't have to observe everything it predicts, only something it predicts. And the lack of evidence for the existence of something is not evidence of its non-existence.

Level II — Inflationary Multiverse

Inflationary Multiverse

If the level I multiverse model was complicated to comprehend, the level II model forces us to open our imagination to infinite possibilities. The second model is based on the idea of infinite bubble universes and is known as the inflationary multiverse model. To understand this model it is necessary to understand the theory of inflation, which tells us how the big bang unfolded. The inflationary multiverse model suggests that space is infinite in size and that, according to the theory of cosmic inflation, the big bang that created our universe may not have been a one-time event; instead it could have happened again and again, going on forever, a process known as eternal inflation. As you are reading this sentence, another big bang might be happening out in the cosmos, giving birth to other bubble universes. Our universe is one of those infinite bubbles, filled with matter deposited by the energy field that drove inflation, a process that would continue eternally. We would never reach those other bubbles even traveling at the speed of light. The bubbles vary not only in their initial conditions but also in aspects of nature, with different space-time dimensions and different physical constants.

This model is not scientific either, as it lacks observable evidence. The idea is more tractable from a theological perspective, where all the natural laws can be broken and observational evidence is not required to make sense of events; much of modern theology tries to address just such questions concerning our existence.

Theists can easily argue that God is the creator and sustainer of the entire universe. Being omnipotent and omniscient, God has the power to control everything. Hence God may have decided to create not one universe but many, as God would be the one who created space and time, inflation, and the big bang. According to theism, once you have a transcendent source of everything - space and time, matter and energy - then God is free to create any type of physical reality he wants. Therefore theology can accommodate this concept more readily than science can, since science requires evidence where theology does not.

Level III — Quantum Universe

Quantum Universe

The third-level multiverse model is known as quantum many worlds, and it is the most controversial type. This quantum multiverse model rests on quantum mechanics and is very different from the first two models. Quantum mechanics works on probabilities: it states that there is a range of possible observations, each with a different probability. To be concrete: if in this universe you are reading this paper, in another quantum universe you might be reading a different paper; in yet another universe you were offered a job; and perhaps in many you don't exist at all. This idea tells us that there are an infinite number of universes with an infinite number of possible outcomes, where random quantum processes split the universe into multiple copies. At every such event a new universe is created.
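Those probabilities can be made concrete. On a many-worlds reading every outcome occurs in some branch, and a branch's weight is the squared magnitude of its quantum amplitude (the Born rule). A minimal sketch, with amplitudes chosen arbitrarily for illustration:

```python
import random

# A qubit in superposition a|0> + b|1>; the amplitudes are arbitrary choices.
a, b = 0.6, 0.8
assert abs(a * a + b * b - 1.0) < 1e-12     # the state is normalized

# Born rule: outcome 1 occurs with probability |b|^2 = 0.64.
# In a many-worlds reading, BOTH outcomes happen, in branches of these weights;
# repeated measurements sample the branch weights.
random.seed(1)
trials = 100_000
ones = sum(random.random() < b * b for _ in range(trials))
print(ones / trials)    # close to 0.64
```

Whatever the interpretation, it is these squared-amplitude weights, not the raw amplitudes, that match observed measurement statistics.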

This model makes more sense from a fictional point of view, where there are no limits to the realms, events need not make sense, and one can either obey logic or defy it. I am firmly unconvinced by this theory; it remains fictional to me, because I believe we have not yet established how quantum thinking links up with observation. Quantum physics attempts to explain phenomena which cannot be explained by classical physics, and this is perhaps why scientists love the many-worlds idea: it explains mathematically things which are not observable.

I believe that we do not yet understand quantum physics completely. To understand quantum mechanics it is very important to understand how it links up with observation, and that link is still missing. If the ideas which make sense mathematically can be linked up with observation, then perhaps our understanding of the quantum multiverse will improve. Nevertheless, more research needs to be done on this theory before it can be understood completely, which will require more time from physicists.

Level IV- Brane Multiverse

Brane Multiverse

Levels I, II, and III vary from each other, but they are governed by the same fundamental natural laws. The fourth-level multiverse, which revolves around string theory, is called the brane multiverse. This model suggests that universes can differ not only in shape but also in their laws of physics. The brane multiverse suggests that there can be more than three spatial dimensions. We live in a four-dimensional universe including time, but in the brane picture our four-dimensional universe lives on a membrane, or brane, embedded in a space of more than four dimensions. The idea is that our membrane is not the only one; there might be other membranes. Lying outside our space and time, they are almost impossible to visualize.

This model is the most unclear and sounds crazy to me. It is definitely not a scientific model yet, as string theory is not a complete theory. I would view it more from a philosophical point of view, where observational evidence is not required and one can use logic alone to make sense of events. Still, I believe the brane multiverse hypothesis has a real chance of entering the scientific realm in the near future, since predictions based on string theory may be experimentally testable sooner than those of the other models.

String theory states that matter is made up of tiny filaments known as strings which vibrate in different patterns, and according to scientists aspects of this proposal might be tested at the LHC (Large Hadron Collider).

There is no doubt that the concept remains science fiction for now, but the lack of scientific proof should not be a reason to stop questioning. Hence it is important for the concept of parallel universes to be explored fully, even though it lacks observational evidence. One way forward is to work on the multiverse theories which have the highest chance of being tested in the shortest time frame.

I can confidently conclude that none of the multiverse models mentioned above are scientific models, and they remain unproven for now, as they lack observational evidence; but this should not stop science from investigating these ideas further. I am eager to find out what the next big discovery in the multiverse hypothesis will be.

* * * * * * * *

Sean Carroll: Many-Worlds Interpretation of Quantum Mechanics
Nov 5, 2019

The Many Worlds of the Quantum Multiverse | Space Time | PBS Digital Studios
Oct 26, 2016

Parallel Worlds Probably Exist. Here’s Why
Mar 6, 2020

Sean Carroll explains: what is the many-worlds interpretation?
Jan 8, 2020

Roger Penrose - Many Worlds of Quantum Theory
Mar 16, 2020

Sean Carroll: The many worlds of quantum mechanics
Jun 24, 2020

* * * * * * * *

Many-worlds interpretation

The quantum-mechanical "Schrödinger's cat" paradox according to the Many-Worlds interpretation. In this interpretation, every quantum event is a branch point; the cat is both alive and dead, even before the box is opened, but the "alive" and "dead" cats are in different branches of the universe, both of which are equally real, but which do not interact with each other.[a]
The many-worlds interpretation (MWI) is an interpretation of quantum mechanics that asserts that the universal wavefunction is objectively real, and that there is no wavefunction collapse.[2] This implies that all possible outcomes of quantum measurements are physically realized in some "world" or universe.[3] In contrast to some other interpretations, such as the Copenhagen interpretation, the evolution of reality as a whole in MWI is rigidly deterministic.[2]:8–9 Many-worlds is also called the relative state formulation or the Everett interpretation, after physicist Hugh Everett, who first proposed it in 1957.[4][5] Bryce DeWitt popularized the formulation and named it many-worlds in the 1960s and 1970s.[1][6][7][2]

In many-worlds, the subjective appearance of wavefunction collapse is explained by the mechanism of quantum decoherence. Decoherence approaches to interpreting quantum theory have been widely explored and developed since the 1970s,[8][9][10] and have become quite popular. MWI is now considered a mainstream interpretation along with the other decoherence interpretations, collapse theories (including the Copenhagen interpretation), and hidden variable theories such as Bohmian mechanics.
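The decoherence mechanism can be sketched quantitatively. When the system qubit entangles with n environment qubits, the interference term of its reduced density matrix is suppressed by the product of the environment-state overlaps, hence exponentially in n. A toy calculation (the overlap value is an arbitrary illustrative assumption):

```python
# Toy decoherence estimate. Suppose the system qubit has become entangled
# with n environment qubits: (|0>|E0>^n + |1>|E1>^n) / sqrt(2).
# The interference (off-diagonal) element of the system's reduced density
# matrix is then 0.5 * <E1|E0>^n, which dies off exponentially in n.

OVERLAP = 0.9   # assumed per-qubit overlap <E1|E0> of the environment states

def coherence(n):
    """Size of the interference term after entangling with n environment qubits."""
    return 0.5 * OVERLAP ** n

for n in (0, 10, 50):
    print(n, coherence(n))
# n = 0  -> 0.5     : no environment, full interference
# n = 50 -> ~0.0026 : interference is effectively gone; the two branches
#                     behave as separate, non-interacting "worlds"
```

This is why macroscopic superpositions are never seen to interfere: the environment "measures" the system almost instantly, and the branches evolve independently thereafter.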

The many-worlds interpretation implies that there are very many universes, perhaps infinitely many.[11] It is one of many multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realised. This is intended to resolve some paradoxes of quantum theory, such as the EPR paradox[5]:462[2]:118 and Schrödinger's cat,[1] since every possible outcome of a quantum event exists in its own universe.


In 1952, Erwin Schrödinger gave a lecture in Dublin in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". He went on to assert that while the Schrödinger equation seemed to be describing several different histories, they were "not alternatives but all really happen simultaneously". Schrödinger stated that replacing "simultaneous happenings" with "alternatives" followed from the assumption that "what we really observe are particles", calling it an inevitable consequence of that assumption yet a "strange decision". According to David Deutsch, this is the earliest known reference to many-worlds, while Jeffrey A. Barrett describes it as indicating the similarity of "general views" between Everett and Schrödinger.[12][13][14]

MWI originated in Everett's Princeton Ph.D. thesis "The Theory of the Universal Wavefunction",[2] developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 under the title "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state";[15] Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt,[2] who was responsible for the wider popularisation of Everett's theory, which was largely ignored for a decade after publication.[11]

~ The Overview and Science sections are skipped in this post ~


MWI's initial reception was overwhelmingly negative, with the notable exception of DeWitt. Wheeler made considerable efforts to formulate the theory in a way that would be palatable to Bohr, visited Copenhagen in 1956 to discuss it with him, and convinced Everett to visit as well, which happened in 1959. Nevertheless, Bohr and his collaborators completely rejected the theory.[d] Everett left academia in 1956, never to return, and Wheeler eventually disavowed the theory.[11]

One of MWI's strongest advocates is David Deutsch.[64] According to Deutsch, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing,[65] he suggested that parallelism that results from MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". Deutsch has also proposed that MWI will be testable (at least against "naive" Copenhagenism) when reversible computers become conscious via the reversible observation of spin.[66]
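Deutsch's 1985 algorithm is the simplest concrete instance of that quantum parallelism: it decides whether a one-bit function f is constant or balanced with a single oracle query, where any classical procedure needs two. A minimal pure-Python state-vector sketch, using the phase-kickback form of the oracle (no quantum library assumed):

```python
import math

H = 1 / math.sqrt(2)   # Hadamard coefficient

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced with ONE query.

    Simulates the state vector of a single query qubit, with the oracle in
    phase-kickback form: U_f|x> = (-1)^f(x) |x>.
    """
    # Start in |0>, apply Hadamard: (|0> + |1>) / sqrt(2)
    amp = [H, H]
    # One oracle query: phase (-1)^f(x) on each basis state |x>
    amp = [((-1) ** f(x)) * amp[x] for x in (0, 1)]
    # Second Hadamard: a|0> + b|1>  ->  H(a+b)|0> + H(a-b)|1>
    a, b = amp
    amp = [H * (a + b), H * (a - b)]
    # Measuring |0> with certainty means f is constant; |1> means balanced
    return "constant" if abs(amp[0]) ** 2 > 0.5 else "balanced"

print(deutsch(lambda x: 0))      # prints "constant"
print(deutsch(lambda x: 1))      # prints "constant"
print(deutsch(lambda x: x))      # prints "balanced"
print(deutsch(lambda x: 1 - x))  # prints "balanced"
```

In Deutsch's own reading, the single query evaluates f on both inputs "in parallel universes," and the final Hadamard interferes the branches so that one measurement reveals the global property.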

Asher Peres was an outspoken critic of MWI. A section of his 1993 textbook had the title Everett's interpretation and other bizarre theories. Peres argued that the various many-worlds interpretations merely shift the arbitrariness or vagueness of the collapse postulate to the question of when "worlds" can be regarded as separate, and that no objective criterion for that separation can actually be formulated.[67]

Some consider MWI[68][69] unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Others[66] claim MWI is directly testable.

Victor J. Stenger remarked that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes.[70] Collaborating with James Hartle, Gell-Mann had been, before his death, working toward the development of a more "palatable" post-Everett quantum mechanics. Stenger thought it fair to say that most physicists dismiss the many-worlds interpretation as too extreme, while noting it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse".[e]

Philosophers of science James Ladyman and Don Ross state that the MWI could be true, but that they do not embrace it. They note that no quantum theory is yet empirically adequate for describing all of reality, given its lack of unification with general relativity, and so they do not see a reason to regard any interpretation of quantum mechanics as the final word in metaphysics. They also suggest that the multiple branches may be an artifact of incomplete descriptions and of using quantum mechanics to represent the states of macroscopic objects. They argue that macroscopic objects are significantly different from microscopic objects in not being isolated from the environment, and that using quantum formalism to describe them lacks explanatory and descriptive power and accuracy.[71]


A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true".[72]

Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations."[73]

In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory",[74] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"[75]

A 2005 poll of fewer than 40 students and researchers, taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing, University of Waterloo, found "Many Worlds (and decoherence)" to be the least favored.[76]

A 2011 poll of 33 participants at an Austrian conference found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[77] the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll.[77]

Debate whether the other worlds are real

Everett believed in the literal reality of the other quantum worlds.[22] His son reported that he "never wavered in his belief over his many-worlds theory".[78]

According to Martin Gardner, the "other" worlds of MWI have two different interpretations: real or unreal; he claimed that Stephen Hawking and Steven Weinberg both favour the unreal interpretation.[79] Gardner also claimed that most physicists favour the unreal interpretation, whereas the "realist" view is supported only by MWI experts such as Deutsch and DeWitt. Hawking has said that "according to Feynman's idea", all other histories are as "equally real" as our own,[f] and Gardner reports Hawking saying that MWI is "trivially true".[81] In a 1983 interview, Hawking also said he regarded MWI as "self-evidently correct" but was dismissive of questions about the interpretation of quantum mechanics, saying, "When I hear of Schrödinger's cat, I reach for my gun." In the same interview, he also said, "But, look: All that one does, really, is to calculate conditional probabilities—in other words, the probability of A happening, given B. I think that that's all the many worlds interpretation is. Some people overlay it with a lot of mysticism about the wave function splitting into different parts. But all that you're calculating is conditional probabilities."[82] Elsewhere Hawking contrasted his attitude towards the "reality" of physical theories with that of his colleague Roger Penrose, saying, "He's a Platonist and I'm a positivist. He's worried that Schrödinger's cat is in a quantum state, where it is half alive and half dead. He feels that can't correspond to reality. But that doesn't bother me. I don't demand that a theory correspond to reality because I don't know what it is. Reality is not a quality you can test with litmus paper. All I'm concerned with is that the theory should predict the results of measurements. Quantum theory does this very successfully."[83] For his own part, Penrose agrees with Hawking that quantum mechanics applied to the universe implies MWI, but he believes the lack of a successful theory of quantum gravity negates the claimed universality of conventional quantum mechanics.[27]

Quantum Physics - The Electromagnetic Spectrum parallels Process Thought in Theology

Why Theology Must Pay Attention
to the Sciences

by R.E. Slater

I occasionally reach forwards and backwards into non-biblical topics, generally classified by the church as "Natural Theology," to help bring home newer theological ideas that may be unknown to the bible reader; ideas a bit strange and out of the ordinary to the Platonic and related systems we've been raised in as Western thinkers, and perhaps more helpful in discussing the harder subjects of who God is, what God is doing, and how we might operate in God's wonderful-and-terrifying worlds of creation.

The bible tells us about salvation through the humble narratives of those who sought God and of those who did not. These narratives were set in the harsh climes in which those seekers lived, as they saw, or experienced, poverty and misery, injustice and inequality, brutality and cruelty. But the bible was not written to explain the natural world we know as creation. This is the area described by older church ages as "natural theology" - a descriptor for the sciences of all stripes and flavors.

Nor does the bible speak to areas of philosophical concern... primarily because its many narratives were set at one time historically and then later retold from other historical perspectives and era settings. Each setting held its own milieu of philosophical thought which the common people accepted. To tell a narrative from one era-bound anthropological setting carried with it the common perspective of that time. But to then retell it from within another cultural/geographical era gave it a different philosophical meaning than it held in its one-time original context.

Hence, today, as Westernized Christians, we proclaim the bible stories within our own embraced religious beliefs, based upon our perceived and accepted philosophical-cultural perspectives. This is why there are so many different kinds of Christian beliefs and practices within competing cultural perspectives and practices we naively call secular, not realizing our own Christian tenets and dogmas have been similarly secularized.

Embracing the Ancients

All this is said to say that when describing and projecting a Process Theology upon the Christian faith from Whitehead's Process Philosophy, we are attempting to (i) update our thinking of the world of creation even as we are attempting to (ii) recognize the competing philosophies of the bible's narratives, told and retold, through the eras.

Today's quantum sciences are likewise doing the same with older cultural perspectives of the world and man. The old-line thinking of the Ancients v Greeks v Scholastics v Enlightenment v Modernists has fallen away even as remnants of those traditions hang on to pervade our thinking. In a postmodern era it seems that process thought best corresponds with the newer worlds of evolution and the chaotic theories of the maths and sciences.

To now think in terms of process flow and symmetry, rhythm and balance, seems the more apt description of the world altogether. Yet this is not a new idea... the ancients from long ago felt and shared this... from the African to Greek to Indian to Asian to Native American cultures. They each understood the living, restless world they inhabited in terms of liveliness and organic relationships, and as a dynamic force informing human civilizations how to live, and be, and respond.

Similarly, today's quantum physics is walking the classical worlds of mechanistic reductionism back towards the more ancient ideas of flow and rhythm. Or, in evolutionary terms, it describes the classical worlds as progressing from one form or stage of social-biological evolvement to another (whether higher or lower) transitionary form. The world is in itself a living organism. It is neither static nor isolated from its parts. It is one, and whole, and operates together. Welcome to the world of process philosophy... a view as ancient as it is helpfully informative in today's quantum worlds of science.

Stories of Becoming

To describe Process Thought in its religious-Christian derivatives one must pay attention to the newer profound worlds of science and anthropology. Nothing lives isolated from the other. We live in relational worlds of being which are always moving towards becoming or un-becoming. As Westernized thinkers we have never learned to read the bible in this way. Process relational thinking was no less real then than it is today. We just weren't seeing it. Our religious perceptual maps were being informed by other cultural-religious perspectives.

In the stories of salvation of the bible were also stories of real live people caught in their personal trajectories of becoming enlivened to the spirit of God. They lived one way, then became informed in some spiritual sense, to then live another way. Their being was in transition. They were becoming something else because of their encounter with some facet of God or God's creation as they navigated life's many obstacles, wonders, and miracles.

Process Theology, as does the Process Philosophical perspective, says that we are beings who are becoming more than we once were. But if we do not, then we may also transition downwards into fundamentally non-becoming beings. These are the stories of light and dark in the bible: of those who comprehended life as God drew them towards the real meanings of life, versus those who escalated downwards into darkness and away from the light. The cosmos is more than matter and forces. The cosmos is where life and light might be apprehended for those willing to look and to become nestled into the flow and rhythm of the very God whose cosmos lives and breathes movement from disorder. Peace.

R.E. Slater
July 1, 2021

* * * * * * * *

What is the main difference between
classical and quantum physics?

There is a huge difference between Classical and Quantum Theory. In Classical theory, a body always chooses the least action path, and there is only one such path. In Quantum theory, a particle also favors the least action path, but it explores multiple paths simultaneously.

What is the difference between
classical physics and quantum physics?

1. Classical physics is causal - complete knowledge of the past allows computation of the future. Likewise, complete knowledge of the future allows precise computation of the past. (Chaos theory is irrelevant to this statement; it talks about how well you can do with incomplete knowledge.)

Not so in quantum physics. Objects in quantum physics are neither particles nor waves; they are a strange combination of both. Given complete knowledge of the past, we can make only probabilistic predictions of the future.
In classical physics, two bombs with identical fuses would explode at the same time. In quantum physics, two absolutely identical radioactive atoms can and generally will explode at very different times. Two identical atoms of uranium-238 will, on average, undergo radioactive decay separated by billions of years, despite the fact that they are identical.
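The unpredictability of "identical" atoms is easy to see numerically. Below is an illustrative sketch (not from the answer above): quantum mechanics fixes only the decay statistics, an exponential distribution set by the accepted uranium-238 half-life, never the moment any one atom decays.

```python
import math
import random

# Sample decay times for identical radioactive atoms.  The distribution
# is fixed by the half-life, but individual decay times vary wildly.
HALF_LIFE_U238 = 4.468e9                       # years (uranium-238)
MEAN_LIFETIME = HALF_LIFE_U238 / math.log(2)   # mean of the exponential

rng = random.Random(42)
times = [rng.expovariate(1.0 / MEAN_LIFETIME) for _ in range(100_000)]

mean = sum(times) / len(times)
print(f"sample mean lifetime: {mean / 1e9:.2f} Gyr "
      f"(theory: {MEAN_LIFETIME / 1e9:.2f} Gyr)")
print(f"two 'identical' atoms decayed at "
      f"{times[0] / 1e9:.2f} and {times[1] / 1e9:.2f} Gyr")
```

With 100,000 samples the mean converges to half-life/ln(2), yet any two individual atoms generally decay billions of years apart.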
There is a rule that physicists often use to separate classical physics from quantum physics: if Planck's constant appears in the equations, it is quantum physics; if it doesn't, it is classical physics.

Most physicists believe that quantum physics is the right theory, even though many details are yet to be worked out. Classical physics can be derived from quantum physics in the limit that the quantum properties are hidden. That fact is called the "correspondence principle."

2. Quantum physics is the revolution that overthrew classical physics. Describing the difference between them is like describing the difference between the Bolsheviks and the Tsars.

Where do we even begin?

On the one hand, we have the Newtonian picture of a clockwork universe. In this paradigm, all of physical reality is a giant machine that ticks forward in time, changing its configuration predictably according to deterministic laws. Newton saw his God as a mathematician who constructed the cosmos out of physical elements, setting them in motion according to a small set of simple mathematical laws. These laws are ultimately responsible for all the complexity and diversity of natural phenomena. Likewise, all phenomena, no matter how complex, can be understood in terms of these simple laws. "All discord," wrote Alexander Pope, is "harmony not understood."

On the other hand, we have the quantum universe, which, from our perspective, seems to resemble a slot machine more than a clock. In the quantum universe, we see the machinery as fundamentally probabilistic. If there is harmony underlying quantum discord, it is inaccessible to the experimenter.

In fact, the quantum revolution goes much deeper than merely introducing probability as a fundamental feature. It altogether trashes the Newtonian clock, replacing it with a completely alien device built out of much more advanced mathematics. The quantum revolution tells us that the classical perspective isn't just wrong, it is fundamentally unsalvageable.

Let's proceed by discussing some Newtonian components to be thrown in the trash:

1. Particles and fields possess well defined dynamic variables at all times. Dynamic variables are the quantities used to describe the motion of objects, such as position, velocity, momentum, and energy. Classical physics presupposes that the dynamic variables of a system are well defined and can be measured to perfect precision. For example, at any given point in time, a classical particle exists at a single point in space and travels with a single velocity. Even if the exact values of the variables are uncertain, we assume that they exist and only take one specific value.

2. Particles as point-like objects following predictable trajectories. In classical mechanics, a particle is treated as a dimensionless point. This point travels from A to B by tracing out a continuous path through the intermediate space. A billiard ball traces out a straight line as it rolls across the table, a satellite in orbit traces out an ellipse, and so on. The idea of a definite trajectory presupposes well defined dynamic variables, and so once the first point above is abandoned, the idea of a definite trajectory must be discarded as well.

3. Dynamic variables as continuous real numbers. In classical physics, dynamic variables are smoothly varying continuous values. Quantum physics takes its name from the observation that certain quantities, most notably energy and angular momentum, are restricted to certain discrete or 'quantized' values under special circumstances. The in-between values are forbidden.

4. Particles and waves as separate phenomena. Classical physics has one framework for particles and a different framework for waves and fields. This matches the intuitive notion that a billiard ball and a water wave move from A to B in completely different fashions. In quantum physics however, these two phenomena are synthesized and treated under a unified, magnificent framework. All physical entities are particle/wave hybrids.

5. Newton's Second Law. Without the four kinematic features mentioned above, ∑F = ma is more than wrong, it's nonsensical. A radically different dynamics must be developed that is governed by a very different equation of motion.

6. Predictability of measurement outcomes. In classical physics, the outcomes of measurements can be predicted perfectly, assuming full knowledge of the system beforehand. In quantum mechanics, even if you have full knowledge of a system, the outcomes of certain measurements will be impossible to predict.

With the above list in mind, it's no wonder that quantum mechanics took an international collaboration several decades to develop. How do you build a coherent model of the universe without these features?

Well, thankfully, not quite everything from classical physics had to be scrapped. The conservation laws are preserved (or Great Conservation Principles as Feynman called them, always capitalized to highlight their centrality to all areas of physics). Quantum physics conserves things like momentum, energy, and electric charge as perfectly as classical physics.

Also, while Newton's formulation of classical mechanics is completely abandoned, the conservation laws encourage us to adapt tools from the more mathematically elegant Hamiltonian and Lagrangian formulations of classical mechanics. Erwin Schrödinger chose to adapt the Hamiltonian formalism, which led to his eponymous equation. Richard Feynman adapted Lagrangian mechanics, which led to his path integral formulation. Heisenberg developed his own esoteric approach called matrix mechanics.

All three approaches to quantum mechanics are mathematically equivalent and useful in their own right (there are more than three, but these are the standard formulations). Schrödinger's formulation of quantum mechanics is usually the one everyone encounters first, and his is the formalism most widely used in the field.

So let's go back through the list above, and replace Newton's components with Schrödinger's:

1. Particles possess a wave function Ψ(x,t) at all times. The wave function assigns a complex number to each point in space at each moment in time. This function contains within it all available information about the particle. Everything that can be known about the particle's motion is extracted from Ψ(x,t). To recover dynamic information, we use Born's rule and calculate ΨΨ* to get the probability density of the particle's position, and we calculate ϕϕ* to get the probability density of the particle's momentum, where ϕ(p,t) is the Fourier transform of the wave function. This is a radically different approach to kinematics than in classical mechanics, which describes particles by listing off the values of the dynamic variables.
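As a concrete illustration of Born's rule as described above, this sketch (NumPy, with ℏ = 1 and hypothetical packet parameters) discretizes a Gaussian wave function, forms ΨΨ* and ϕϕ*, and checks that both probability densities integrate to 1.

```python
import numpy as np

# Born's rule on a discretized Gaussian wave packet (hbar = 1).
x = np.linspace(-20.0, 20.0, 4096)
dx = x[1] - x[0]
sigma, k0 = 1.0, 2.0   # illustrative width and mean momentum
psi = (2 * np.pi * sigma**2)**-0.25 * np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)

rho_x = (psi * np.conj(psi)).real     # position probability density
norm_x = np.sum(rho_x) * dx
print(norm_x)                         # total probability ~1.0

# Momentum amplitude phi(p) is the Fourier transform of psi(x).
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
dp = 2 * np.pi / (x.size * dx)
norm_p = np.sum((phi * np.conj(phi)).real) * dp
print(norm_p)                         # also ~1.0 (Parseval's theorem)
```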

2. Trajectories are replaced with wave function evolution. As the wave function changes in time, so do the probabilities of observing particular positions and momenta for the particle. The evolution equation is the time-dependent Schrödinger equation: iℏ ∂Ψ(x,t)/∂t = HΨ(x,t), where H is the Hamiltonian operator for the system, i.e. the self-adjoint operator corresponding to the total energy of the system (described in point 3 below).
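The evolution equation above can be integrated numerically. A minimal sketch, assuming a free particle on a grid with ℏ = m = 1 and arbitrary grid parameters: diagonalize a finite-difference Hamiltonian and apply the exact phases exp(-iEt) in its eigenbasis; unitarity shows up as conservation of the norm.

```python
import numpy as np

# Integrate i*dPsi/dt = H*Psi (hbar = m = 1) for a free particle.
n, box = 400, 40.0
x = np.linspace(-box / 2, box / 2, n)
dx = x[1] - x[0]

# Kinetic-energy operator -(1/2) d^2/dx^2 via central finite differences.
H = (np.diag(np.full(n, 1.0 / dx**2))
     - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(n - 1, 0.5 / dx**2), -1))

# H is Hermitian, so it has a real spectrum: H = V diag(E) V^T.
E, V = np.linalg.eigh(H)

psi0 = np.exp(-x**2 + 2j * x)                    # a Gaussian wave packet
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)    # normalize on the grid

t = 1.5
psi_t = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

norm_t = np.sum(np.abs(psi_t)**2) * dx
print(norm_t)   # unitary evolution preserves the norm: ~1.0
```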
3. Dynamic variables are Hermitian matrices. Instead of real-valued, continuously evolving dynamic variables, Schrödinger uses fixed Hermitian matrices (or self-adjoint operators) to represent observable quantities. Each observable, such as position, momentum, or energy, has a corresponding matrix/operator. The eigenvalues of the matrix/operator determine the allowed values of the corresponding observable. The energy levels of atoms, for example, are eigenvalues of the Hamiltonian operator. This is another completely radical shift from how classical physics treats motion.
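The "observables as Hermitian matrices" idea can be demonstrated directly. In this sketch (ℏ = m = ω = 1, grid parameters chosen arbitrarily), a finite-difference harmonic-oscillator Hamiltonian is diagonalized and its lowest eigenvalues come out close to the textbook energy levels n + 1/2.

```python
import numpy as np

# Discretized harmonic-oscillator Hamiltonian H = p^2/2 + x^2/2.
n, box = 1000, 20.0
x = np.linspace(-box / 2, box / 2, n)
dx = x[1] - x[0]

kinetic = (np.diag(np.full(n, 1.0 / dx**2))
           - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
           - np.diag(np.full(n - 1, 0.5 / dx**2), -1))
H = kinetic + np.diag(0.5 * x**2)

E = np.linalg.eigvalsh(H)   # real eigenvalues of a Hermitian matrix
print(E[:4])                # close to [0.5, 1.5, 2.5, 3.5]
```

The eigenvalues are the allowed energies; the in-between values simply do not occur, which is the quantization the previous section described.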

4. Unification of particles and waves. A mathematical analysis of the Schrödinger equation reveals that it has wavelike solutions, and so particles propagate as waves. This means that we shouldn't picture particles as tiny spheres bouncing around their environment. The closest you can get to visualizing a particle is by visualizing its wave function. As stated in the first point above, the wave function assigns a complex number to each point in space. This field of complex numbers evolves in time. What does this evolution look like? Well, if you are familiar with phasors, it looks like a field of rapidly rotating phasors. To be more specific, the field of phasors for a particular particle looks like a screw that twists in the direction of motion.

5. The time-dependent Schrödinger equation replaces Newton's second law.

6. Measurement is random. Even if you have full knowledge of a quantum system prior to measurement (i.e. you know Ψ(x,t)), you still will not be able to predict the outcomes of measurements in general. The outcome of the measurement is probabilistic. The possible outcomes are determined by the eigenvalues of the operator you are observing (see point 3), and the probability of each outcome is determined by the projection of the wave function onto the eigenvectors of that operator.
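Point 6 can be made concrete with the smallest quantum system there is. The sketch below (ℏ = 1; the state is chosen for illustration) measures S_z on a spin-1/2 state pointing along +x: the possible outcomes are the eigenvalues of the S_z matrix, and each probability is the squared projection of the state onto the corresponding eigenvector.

```python
import numpy as np

# Spin observable S_z = (hbar/2) * sigma_z, a Hermitian matrix.
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

outcomes, eigvecs = np.linalg.eigh(Sz)   # eigenvalues -1/2 and +1/2

# A state pointing along +x: equal superposition of up and down.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born's rule: probability = |<eigenvector | state>|^2.
probs = np.abs(eigvecs.conj().T @ state)**2
for outcome, prob in zip(outcomes, probs):
    print(f"S_z = {outcome.real:+.1f}: probability {prob:.2f}")
```

Both outcomes come out at probability 0.50: full knowledge of the state, yet the individual result is irreducibly random.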

So this is a sketch of what Schrödinger's quantum mechanics looks like. Alternate formulations would have different details, but the gist is the same.

Hopefully it is now clear that the differences between classical physics and quantum physics are vast. The quantum revolution is really one of the most stunning intellectual developments of the 20th century, and in many ways the effects of the revolution have yet to be fully felt.
Quantum computing, for example, is one ramification that hasn't quite yet materialized. The philosophical and technological ramifications will most certainly continue to transform the 21st century in extraordinary ways.

3. I am going to make this as simple as possible and not throw a lot of math at you.

In classical physics, there is an "in-principle" determinism. If you had N atoms of neon in a gas canister, and you knew the position and momentum of every one, in principle you could describe their history fully for all time.

That doesn't mean that you can't use statistical methods or treat the motions as random (to treat them deterministically you'd need to keep track of 6N numbers as a function of time!). And in fact, such methods are extremely useful in classical physics. It just means that there are exact, knowable properties such as position and momentum that are measurable to arbitrary accuracy, independent of the process of observation.

In classical physics, things like electrons and atoms were supposed to be treated as strictly particles, and things like light and other forms of electromagnetic radiation treated strictly as waves. (It turns out that there are a lot of things that happen with light and electrons that cannot be properly explained in classical physics!)

In classical physics, each particle has an exact position and momentum. Think of billiards: the pool table has an almost completely uniform coefficient of friction, and the collisions are approximately elastic. True, some of the tricks with backspin appear a little freaky...

In quantum physics, there are properties such as position and momentum that are NOT measurable to arbitrary accuracy, independent of the process of observation. Specifically, in the case of position and momentum, there is a limit on how accurately you can measure both at once.
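That position-momentum trade-off can be checked numerically. In this sketch (ℏ = 1, arbitrary packet width), a Gaussian wave packet's position spread is computed on a grid and its momentum spread via the FFT; their product lands at ℏ/2, the Heisenberg minimum, which a Gaussian saturates.

```python
import numpy as np

# Uncertainty product for a Gaussian wave packet (hbar = 1).
x = np.linspace(-30.0, 30.0, 8192)
dx = x[1] - x[0]
sigma = 1.3
psi = (2 * np.pi * sigma**2)**-0.25 * np.exp(-x**2 / (4 * sigma**2))

rho_x = np.abs(psi)**2
sx = np.sqrt(np.sum(rho_x * x**2) * dx)           # <x> = 0 by symmetry

p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)      # momentum grid
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)   # momentum amplitude
dp = p[1] - p[0]
sp = np.sqrt(np.sum(np.abs(phi)**2 * p**2) * dp)  # <p> = 0 by symmetry

print(sx * sp)   # ~0.5, i.e. hbar/2: the smallest product allowed
```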

You can think of a particle as being described as a wave, which encodes the probability of making a specific measurement. Possible observations are determined by the probabilities, and are not determinate. There is no "trajectory" between subsequent observations.

The variation becomes significant on the atomic scale and below. Large macroscopic objects, which have, say, 7,000,000,000,000,000,000,000,000,000 atoms in them, like you and me, have variations due to quantum uncertainty that are such a tiny fraction of their size that they can effectively be treated as classical objects for almost all purposes. Indeed, the formula for the wave associated with a human body, or a pool ball or table, gives a wavelength so incredibly short that the quantum calculations approximate the classical ones for these large objects to a tremendous degree of accuracy.
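The "incredibly short wavelength" claim follows from the de Broglie relation λ = h/(mv). A quick sketch with round, illustrative numbers (the speeds are assumptions, not from the text above):

```python
# de Broglie wavelength lambda = h / (m * v).
H_PLANCK = 6.626e-34     # Planck's constant, J*s
M_ELECTRON = 9.109e-31   # electron mass, kg

def de_broglie(mass_kg, speed_m_s):
    """Matter wavelength of an object with the given mass and speed."""
    return H_PLANCK / (mass_kg * speed_m_s)

# A fast electron: wavelength on the atomic scale, so quantum effects matter.
print(de_broglie(M_ELECTRON, 3.0e6))   # ~2.4e-10 m (a few angstroms)

# A 70 kg person walking at 1 m/s: an absurdly short wavelength.
print(de_broglie(70.0, 1.0))           # ~9.5e-36 m
```

The twenty-five orders of magnitude between those two numbers are why you and I diffract through nothing.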

The double slit experiment: a wave that strikes a surface with two small nearby openings will interfere with itself, producing interference fringes. In the single-electron version of the experiment, electrons are fired at a pair of slits one at a time. Electrons are definitely particles! Yet the electrons don't seem to follow a definite trajectory, and show up randomly. When a lot have been transmitted, they form interference fringes!
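The fringe pattern itself is straightforward to model. The toy sketch below (arbitrary units, hypothetical slit geometry) adds the waves from two point-like slits and checks the resulting |ψ₁ + ψ₂|² intensity: fully constructive at the center of the screen, nearly zero at the dark fringes.

```python
import numpy as np

# Far-field two-slit interference: slits a distance d apart, wavelength
# lam, screen at distance dist.  Intensity = |e^(i k r1) + e^(i k r2)|^2.
lam, d, dist = 0.01, 1.0, 100.0   # arbitrary illustrative units
k = 2 * np.pi / lam

y = np.linspace(-10.0, 10.0, 2001)        # positions along the screen
r1 = np.sqrt(dist**2 + (y - d / 2)**2)    # path length from slit 1
r2 = np.sqrt(dist**2 + (y + d / 2)**2)    # path length from slit 2
intensity = np.abs(np.exp(1j * k * r1) + np.exp(1j * k * r2))**2

print(intensity[1000])     # center (y = 0): fully constructive, ~4
print(np.min(intensity))   # dark fringes: nearly fully destructive, ~0
print(lam * dist / d)      # small-angle fringe spacing: 1.0
```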

4. Classical physics took form when Newton developed his theory of gravity and the mathematics we commonly know as calculus. Newtonian physics was three-dimensional: width, height, and depth. Quantum physics began with Planck's realization that energy comes in tiny lumps, or packets, where a single packet is a quantum; Planck's ideas were soon called the "quantum theory." Quanta can behave like particles and quanta can behave like waves. It seems counter-intuitive, but light can be both particles and waves, and the difference depends fundamentally on how it is studied.

5. There is a huge difference between Classical and Quantum Theory:
1. In Classical theory, a body always chooses the least action path, and there is only one such path. In Quantum theory, a particle also favors the least action path, but it explores multiple paths simultaneously.

2. If there are 9 boxes and 10 pigeons, then at least one box will end up with two pigeons. This is the pigeonhole principle of Classical Theory. No such constraint need apply in Quantum Theory: arbitrarily many identical bosons (photons, for example) can occupy the same quantum state.

3. We can determine position and velocity of a particle simultaneously with great accuracy in Classical Physics. Quantum Physics follows the Heisenberg Uncertainty Principle.

4. Classical Physics is applicable to macroscopic particles. Quantum Physics is applicable to microscopic particles.

6. Here's a simple analogy.

Suppose you are playing squash with a sponge ball, and you wish to build a machine that can play it with you. The first thing you would need to do is mathematically model the mechanics of the sponge ball so that you can incorporate it into the design of the machine. For this a classical model would suffice.

Now let's go quantum: if you replace the sponge ball with an electron, the classical model of a sponge ball breaks apart. First off, there is no deterministic way of knowing the location of the ball before it hits your bat. Then there is a probability that it will tunnel through your bat even if you got everything right.

And so we have just started on the long list of phenomena unseen in classical mechanics. These phenomena are modelled into the mathematics of QM, and for a probabilistic theory it explains beautifully why things happen.

The problem, however, is that with this new model the world looks like a much stranger place. The ball is no longer a ball, but an *eigenvalue in a wave equation. It's nothing like the world we are familiar with. This poses interesting puzzles about what the mathematics means. It's both mind-bending and confusing to visualize, yet so intriguing because it's very counter-intuitive.

*The word eigen is German for "own", "particular", or "proper", so when combined with value it can be thought of as "the particular value", the value that's "just right" for the situation at hand.

Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for "proper", "characteristic", "own".

Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.

* * * * * * * *

Electromagnetic Spectrum and Quantum Theory

Planck's Quantum Theory | Electromagnetic Waves and Wave Optics | Physics Videos

* * * * * * * *


QUANTUM ELECTRODYNAMICS (QED) - second article



Quantum Physics Perspective on Electromagnetic
and Quantum Fields Inside the Brain

* * * * * * * *

Electromagnetic radiation

A linearly polarized sinusoidal electromagnetic wave, propagating in the direction +z through a homogeneous, isotropic, dissipationless medium, such as vacuum. The electric field (blue arrows) oscillates in the ±x-direction, and the orthogonal magnetic field (red arrows) oscillates in phase with the electric field, but in the ±y-direction.

In physics, electromagnetic radiation (EM radiation or EMR) refers to the waves (or their quanta, photons) of the electromagnetic field, propagating through space, carrying electromagnetic radiant energy.[1] It includes radio waves, microwaves, infrared, (visible) light, ultraviolet, X-rays, and gamma rays. All of these waves form part of the electromagnetic spectrum.[2]

Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields. Electromagnetic radiation or electromagnetic waves are created due to periodic change of electric or magnetic field. Depending on how this periodic change occurs and the power generated, different wavelengths of electromagnetic spectrum are produced. In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted c. In homogeneous, isotropic media, the oscillations of the two fields are perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave. The wavefront of electromagnetic waves emitted from a point source (such as a light bulb) is a sphere. The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves of different frequency are called by different names since they have different sources and effects on matter. In order of increasing frequency and decreasing wavelength these are: radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays.[3]
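The frequency-wavelength bookkeeping in the paragraph above is just c = fλ. A short sketch (the wavelengths are representative orders of magnitude; the band edges are conventions, not sharp physical boundaries):

```python
# c = f * lambda across the electromagnetic spectrum.
C = 2.998e8   # speed of light in vacuum, m/s

bands = {
    "radio":       1.0,       # representative wavelength in metres
    "microwave":   1.0e-2,
    "infrared":    1.0e-5,
    "visible":     5.0e-7,
    "ultraviolet": 1.0e-8,
    "X-ray":       1.0e-10,
    "gamma ray":   1.0e-12,
}

for name, wavelength in bands.items():
    frequency = C / wavelength
    print(f"{name:11s}  lambda = {wavelength:7.0e} m   f = {frequency:7.1e} Hz")
```

The table runs in exactly the order the article lists: increasing frequency, decreasing wavelength.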

Electromagnetic waves are emitted by electrically charged particles undergoing acceleration,[4][5] and these waves can subsequently interact with other charged particles, exerting force on them. EM waves carry energy, momentum and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Electromagnetic radiation is associated with those EM waves that are free to propagate themselves ("radiate") without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field. In this language, the near field refers to EM fields near the charges and current that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena.

In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions.[6] Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level.[7] Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation.[8] The energy of an individual photon is quantized and is greater for photons of higher frequency. This relationship is given by Planck's equation E = hf, where E is the energy per photon, f is the frequency of the photon, and h is Planck's constant. A single gamma ray photon, for example, might carry ~100,000 times the energy of a single photon of visible light.
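Planck's relation can be illustrated numerically (a sketch; the two frequencies below are assumed values, chosen so the gamma photon, at roughly 225 keV, carries about 100,000 times the energy of a green-light photon):

```python
# Planck's relation E = h*f: photon energy grows linearly with frequency.
# The frequencies are illustrative, not from the text.
h = 6.62607015e-34   # Planck's constant, J*s

f_visible = 5.45e14  # green visible light, Hz
f_gamma = 5.45e19    # gamma ray ~100,000x higher in frequency, Hz

E_visible = h * f_visible  # ~3.6e-19 J per photon
E_gamma = h * f_gamma      # ~3.6e-14 J per photon

ratio = E_gamma / E_visible
print(f"energy ratio: {ratio:.0f}")
```

Since h cancels in the ratio, the energy ratio equals the frequency ratio, which is the point of the relation.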

The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of visible or lower frequencies (i.e., visible light, infrared, microwaves, and radio waves) is called non-ionizing radiation, because its photons do not individually have enough energy to ionize atoms or molecules or break chemical bonds. The effects of these radiations on chemical systems and living tissue are caused primarily by heating effects from the combined energy transfer of many photons. In contrast, high frequency ultraviolet, X-rays and gamma rays are called ionizing radiation, since individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds. These radiations have the ability to cause chemical reactions and damage living cells beyond that resulting from simple heating, and can be a health hazard.


Figure: the relative wavelengths of electromagnetic waves of three colours of light (blue, green, and red), with a distance scale in micrometres along the x-axis.

Maxwell's equations

James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave.[9][10] Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves.[11]

Maxwell realized that, since much of physics is symmetrical, there must also be a symmetry between electricity and magnetism. He realized that light is a combination of electricity and magnetism and thus that the two must be tied together. According to Maxwell's equations, a spatially varying electric field is always associated with a magnetic field that changes over time.[12] Likewise, a spatially varying magnetic field is associated with specific changes over time in the electric field. In an electromagnetic wave, the changes in the electric field are always accompanied by a wave in the magnetic field in one direction, and vice versa. This relationship between the two occurs without either type of field causing the other; rather, they occur together in the same way that time and space changes occur together and are interlinked in special relativity. Indeed, magnetic fields can be viewed as electric fields in another frame of reference, and electric fields can be viewed as magnetic fields in another frame of reference; since physics is the same in all frames of reference, the two have equal significance, and the close relationship between space and time changes here is more than an analogy. Together, these fields form a propagating electromagnetic wave, which moves out into space and need never again interact with the source. The distant EM field formed in this way by the acceleration of a charge carries energy with it that "radiates" away through space, hence the term.

Near and far fields

In electromagnetic radiation (such as microwaves from an antenna, shown here) the term "radiation" applies only to the parts of the electromagnetic field that radiate into infinite space and decrease in intensity by an inverse-square law of power, so that the total radiation energy that crosses through an imaginary spherical surface is the same, no matter how far away from the antenna the spherical surface is drawn. Electromagnetic radiation thus includes the far field part of the electromagnetic field around a transmitter. A part of the "near-field" close to the transmitter forms part of the changing electromagnetic field but does not count as electromagnetic radiation.

Maxwell's equations established that some charges and currents ("sources") produce a local type of electromagnetic field near them that does not have the behaviour of EMR. Currents directly produce a magnetic field, but it is of a magnetic dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential (such as in an antenna) produce an electric dipole type electrical field, but this also declines with distance. These fields make up the near-field near the EMR source. Neither of these behaviours is responsible for EM radiation. Instead, they cause electromagnetic field behaviour that only efficiently transfers power to a receiver very close to the source, such as the magnetic induction inside a transformer, or the feedback behaviour that happens close to the coil of a metal detector. Typically, near-fields have a powerful effect on their own sources, causing an increased "load" (decreased electrical reactance) in the source or transmitter, whenever energy is withdrawn from the EM field by a receiver. Otherwise, these fields do not "propagate" freely out into space, carrying their energy away without distance-limit, but rather oscillate, returning their energy to the transmitter if it is not received by a receiver.[citation needed]

By contrast, the EM far-field is composed of radiation that is free of the transmitter in the sense that (unlike the case in an electrical transformer) the transmitter requires the same power to send these changes in the fields out, whether the signal is immediately picked up or not. This distant part of the electromagnetic field is "electromagnetic radiation" (also called the far-field). The far-fields propagate (radiate) without allowing the transmitter to affect them. This causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, is completely independent of both transmitter and receiver. Due to conservation of energy, the amount of power passing through any spherical surface drawn around the source is the same. Because such a surface has an area proportional to the square of its distance from the source, the power density of EM radiation always decreases with the inverse square of the distance from the source; this is called the inverse-square law. This is in contrast to dipole parts of the EM field close to the source (the near-field), which vary in power according to an inverse cube power law, and thus do not transport a conserved amount of energy over distances, but instead fade with distance, with their energy (as noted) rapidly returning to the transmitter or being absorbed by a nearby receiver (such as a transformer secondary coil).
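The conservation argument above can be checked numerically (a sketch assuming an idealized isotropic source; the 100 W figure is illustrative):

```python
# Inverse-square law: for a source radiating total power P, the far-field
# power density at distance r is S = P / (4*pi*r^2), so S times the sphere
# area 4*pi*r^2 is the same at every radius.
import math

P = 100.0  # total radiated power in watts (illustrative)

def power_density(r):
    """Far-field power density (W/m^2) at distance r from an isotropic source."""
    return P / (4.0 * math.pi * r**2)

# Total power crossing spheres of different radii is conserved:
for r in (1.0, 10.0, 1000.0):
    total = power_density(r) * 4.0 * math.pi * r**2
    print(f"r = {r:7.1f} m: density = {power_density(r):.3e} W/m^2, total = {total:.1f} W")
```

Moving ten times farther away cuts the density by a factor of 100, yet the integrated power through each sphere stays at P.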

The far-field (EMR) depends on a different mechanism for its production than the near-field, and upon different terms in Maxwell's equations. Whereas the magnetic part of the near-field is due to currents in the source, the magnetic field in EMR is due only to the local change in the electric field. In a similar way, while the electric field in the near-field is due directly to the charges and charge-separation in the source, the electric field in EMR is due to a change in the local magnetic field. Both processes for producing electric and magnetic EMR fields have a different dependence on distance than do near-field dipole electric and magnetic fields. That is why the EMR type of EM field becomes dominant in power "far" from sources. The term "far from sources" refers to how far from the source (moving at the speed of light) any portion of the outward-moving EM field is located, by the time that source currents are changed by the varying source potential, and the source has therefore begun to generate an outwardly moving EM field of a different phase.[citation needed]

A more compact view of EMR is that the far-field that composes EMR is generally that part of the EM field that has traveled sufficient distance from the source, that it has become completely disconnected from any feedback to the charges and currents that were originally responsible for it. Now independent of the source charges, the EM field, as it moves farther away, is dependent only upon the accelerations of the charges that produced it. It no longer has a strong connection to the direct fields of the charges, or to the velocity of the charges (currents).[citation needed]

In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity, are both associated with the electromagnetic near-field, and do not comprise EM radiation.[citation needed]


Electromagnetic waves can be imagined as a self-propagating transverse oscillating wave of electric and magnetic fields. This 3D animation shows a plane linearly polarized wave propagating from left to right. The electric and magnetic fields in such a wave are in-phase with each other, reaching minima and maxima together.

Electrodynamics is the physics of electromagnetic radiation, and electromagnetism is the physical phenomenon associated with the theory of electrodynamics. Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition.[13] For example, in optics two or more coherent light waves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual light waves.[citation needed]

The electromagnetic fields of light are not affected by traveling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields—these interactions include the Faraday effect and the Kerr effect.[14][15]

In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount.[16]
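Snell's law can be sketched in a few lines (the refractive indices for air and water are typical textbook values, not taken from the text):

```python
# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
import math

def refraction_angle(theta1_deg, n1, n2):
    """Angle of refraction (degrees) for light entering medium 2 from medium 1."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection: no refracted ray exists
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) from air (n ~ 1.00) at 45 degrees
# bends toward the normal:
theta2 = refraction_angle(45.0, 1.00, 1.33)
print(f"refracted angle ~ {theta2:.1f} degrees")
```

Going the other way, from water into air at a steep enough angle, the function returns None, reflecting the total internal reflection case.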

EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. It is not so difficult to experimentally observe non-uniform deposition of energy when light is absorbed, however this alone is not evidence of "particulate" behavior. Rather, it reflects the quantum nature of matter.[17] Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle affair.

Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon.[18] When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once.

The quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics.

Electromagnetic waves can be polarized, reflected, refracted, diffracted or interfere with each other.[19][20][21]

Wave model

Representation of the electric field vector of a wave of circularly polarized electromagnetic radiation.

In homogeneous, isotropic media, electromagnetic radiation is a transverse wave,[22] meaning that its oscillations are perpendicular to the direction of energy transfer and travel. The electric and magnetic parts of the field stand in a fixed ratio of strengths to satisfy the two Maxwell equations that specify how one is produced from the other. In dissipation-less (lossless) media, these E and B fields are also in phase, with both reaching maxima and minima at the same points in space (see illustrations). A common misconception[citation needed] is that the E and B fields in electromagnetic radiation are out of phase because a change in one produces the other, and this would produce a phase difference between them as sinusoidal functions (as indeed happens in electromagnetic induction, and in the near-field close to antennas). However, in the far-field EM radiation which is described by the two source-free Maxwell curl operator equations, a more correct description is that a time-change in one type of field is proportional to a space-change in the other. These derivatives require that the E and B fields in EMR are in-phase (see mathematics section below).[citation needed]
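The fixed amplitude ratio and the in-phase behaviour described above can be illustrated directly (a sketch; the 100 V/m amplitude is an assumed value):

```python
# In a plane EM wave in vacuum the amplitudes stand in the fixed ratio
# |E| / |B| = c, and the two fields peak and vanish together (in phase).
import math

c = 299_792_458.0    # speed of light in vacuum, m/s
E_peak = 100.0       # electric field amplitude, V/m (illustrative)
B_peak = E_peak / c  # corresponding magnetic amplitude, tesla

# In phase means E(t) = c * B(t) at every instant:
for t in (0.0, 0.25, 0.5, 0.75):
    phase = 2.0 * math.pi * t
    e = E_peak * math.cos(phase)
    b = B_peak * math.cos(phase)
    assert abs(e - c * b) < 1e-9  # holds at maxima, minima, and zero crossings

print(f"B_peak = {B_peak:.3e} T")
```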

An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction, a phenomenon known as dispersion.

A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atomic nuclei. Frequency is inversely proportional to wavelength, according to the equation:[23]

v = fλ

where v is the speed of the wave (c in a vacuum or less in other media), f is the frequency and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant.
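The relation v = fλ, and the constancy of frequency across media, can be sketched as follows (the glass index n ≈ 1.5 is an assumed typical value, not from the text):

```python
# v = f * lambda: when a wave enters a slower medium (v = c/n), the frequency
# stays fixed, so the wavelength must shrink by the same factor n.
c = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength(freq_hz, n=1.0):
    """Wavelength (m) in a medium of refractive index n, where v = c/n."""
    return (c / n) / freq_hz

f_green = 5.45e14  # Hz, green light (illustrative)
lam_vacuum = wavelength(f_green)         # ~550 nm in vacuum
lam_glass = wavelength(f_green, n=1.5)   # shorter in glass; f is unchanged
print(f"vacuum: {lam_vacuum*1e9:.0f} nm, glass: {lam_glass*1e9:.0f} nm")
```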

Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude and phase. Such a component wave is said to be monochromatic. A monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its peak amplitude, its phase relative to some reference phase, its direction of propagation, and its polarization.

Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. An example of interference caused by EMR is electromagnetic interference (EMI) or as it is more commonly known as, radio-frequency interference (RFI).[citation needed] Additionally, multiple polarization signals can be combined (i.e. interfered) to form new states of polarization, which is known as parallel polarization state generation.[24]

The energy in electromagnetic waves is sometimes called radiant energy.[25][26][27]

Particle model and quantum theory

An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem unsuccessfully for many years. It later became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, by

E = hf = hc/λ

where h is Planck's constant, λ is the wavelength and c is the speed of light. This is sometimes known as the Planck–Einstein equation.[28] In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave.[29]

Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength:

p = E/c = hf/c = h/λ
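Numerically, with E = hf and p = h/λ (a sketch; the 550 nm wavelength is an illustrative value):

```python
# Photon energy and momentum from the Planck relation:
# E = h*f = h*c/lambda, and p = E/c = h/lambda.
h = 6.62607015e-34  # Planck's constant, J*s
c = 299_792_458.0   # speed of light in vacuum, m/s

lam = 550e-9        # green photon wavelength, m (illustrative)
f = c / lam         # frequency, Hz
E = h * f           # energy, J
p = h / lam         # momentum, kg*m/s

assert abs(p - E / c) < 1e-40  # the two expressions for p agree
print(f"E ~ {E:.3e} J, p ~ {p:.3e} kg*m/s")
```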

The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light to explain the observed effect. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect.[citation needed][30]

As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. Delayed emission is called phosphorescence.[31][32]

Wave–particle duality

The modern theory that explains the nature of light includes the notion of wave–particle duality. More generally, the theory states that everything has both a particle nature and a wave nature, and various experiments can be done to bring out one or the other. The particle nature is more easily discerned using an object with a large mass. A bold proposition by Louis de Broglie in 1924 led the scientific community to realize that matter (e.g. electrons) also exhibits wave–particle duality.[33]

Wave and particle effects of electromagnetic radiation

Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation.[34] An example is the emission spectrum of nebulae.[citation needed] Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature.

These phenomena can aid various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements comprise a particular star. Spectroscopy is also used in the determination of the distance of a star, using the red shift.[35]

Propagation speed

When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the current. In many such situations it is possible to identify an electrical dipole moment that arises from separation of charges due to the exciting electrical potential, and this dipole moment oscillates in time, as the charges move back and forth. This oscillation at a given frequency gives rise to changing electric and magnetic fields, which then set the electromagnetic radiation in motion.[citation needed]

At the quantum level, electromagnetic radiation is produced when the wavepacket of a charged particle oscillates or otherwise accelerates. Charged particles in a stationary state do not move, but a superposition of such states may result in a transition state that has an electric dipole moment that oscillates in time. This oscillating dipole moment is responsible for the phenomenon of radiative transition between quantum states of a charged particle. Such states occur (for example) in atoms when photons are radiated as the atom shifts from one stationary state to another.[citation needed]

As a wave, light is characterized by a velocity (the speed of light), wavelength, and frequency. As particles, light is a stream of photons. Each has an energy related to the frequency of the wave given by Planck's relation E = hf, where E is the energy of the photon, h is Planck's constant, 6.626 × 10−34 J·s, and f is the frequency of the wave.[36]

One rule is obeyed regardless of circumstances: EM radiation in a vacuum travels at the speed of light relative to the observer, regardless of the observer's velocity. (This observation led to Einstein's development of the theory of special relativity.)[citation needed] In a medium (other than vacuum), velocity factor or refractive index are considered, depending on frequency and application. Both of these are ratios of the speed in a medium to speed in a vacuum.[citation needed]

Special theory of relativity

By the late nineteenth century, various experimental anomalies could not be explained by the simple wave theory. One of these anomalies involved a controversy over the speed of light. The speed of light and other EMR predicted by Maxwell's equations did not appear unless the equations were modified in a way first suggested by FitzGerald and Lorentz (see history of special relativity), or else otherwise that speed would depend on the speed of observer relative to the "medium" (called luminiferous aether) which supposedly "carried" the electromagnetic wave (in a manner analogous to the way air carries sound waves). Experiments failed to find any observer effect. In 1905, Einstein proposed that space and time appeared to be velocity-changeable entities for light propagation and all other processes and laws. These changes accounted for the constancy of the speed of light and all electromagnetic radiation, from the viewpoints of all observers—even those in relative motion.

History of discovery

Electromagnetic radiation of wavelengths other than those of visible light was discovered in the early 19th century. The discovery of infrared radiation is ascribed to astronomer William Herschel, who published his results in 1800 before the Royal Society of London.[37] Herschel used a glass prism to refract light from the Sun and detected invisible rays that caused heating beyond the red part of the spectrum, through an increase in the temperature recorded with a thermometer. These "calorific rays" were later termed infrared.[38]

In 1801, German physicist Johann Wilhelm Ritter discovered ultraviolet in an experiment similar to Herschel's, using sunlight and a glass prism. Ritter noted that invisible rays near the violet edge of a solar spectrum dispersed by a triangular prism darkened silver chloride preparations more quickly than did the nearby violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the ultraviolet rays (which at first were called "chemical rays") were capable of causing chemical reactions.[39]

In 1862–64 James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were first produced deliberately by Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves.[40]:286,7

Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. In one month, he discovered X-rays' main properties.[40]:307

The last portion of the EM spectrum to be discovered was associated with radioactivity. Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium. The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third, neutrally charged and especially penetrating, type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays. In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, finding that they were similar to X-rays but with shorter wavelengths and higher frequency, although a 'cross-over' between X-rays and gamma rays makes it possible to have X-rays with a higher energy (and hence shorter wavelength) than gamma rays, and vice versa. The origin of the ray differentiates them: gamma rays tend to be natural phenomena originating from the unstable nucleus of an atom, while X-rays are electrically generated (and hence man-made), unless they result from bremsstrahlung X-radiation caused by the interaction of fast-moving particles (such as beta particles) with certain materials, usually of higher atomic numbers.[40]:308,9

Electromagnetic spectrum

Electromagnetic spectrum with visible light highlighted
γ = Gamma rays

HX = Hard X-rays
SX = Soft X-Rays

EUV = Extreme-ultraviolet
NUV = Near-ultraviolet

Visible light (colored bands)

NIR = Near-infrared
MIR = Mid-infrared
FIR = Far-infrared

EHF = Extremely high frequency (microwaves)
SHF = Super-high frequency (microwaves)

UHF = Ultrahigh frequency (radio waves)
VHF = Very high frequency (radio)
HF = High frequency (radio)
MF = Medium frequency (radio)
LF = Low frequency (radio)
VLF = Very low frequency (radio)
VF = Voice frequency
ULF = Ultra-low frequency (radio)
SLF = Super-low frequency (radio)
ELF = Extremely low frequency (radio)

EM radiation (the designation 'radiation' excludes static electric and magnetic fields and near fields) is classified by wavelength into radio, microwave, infrared, visible, ultraviolet, X-rays and gamma rays. Arbitrary electromagnetic waves can be expressed by Fourier analysis in terms of sinusoidal monochromatic waves, which in turn can each be classified into these regions of the EMR spectrum.
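The wavelength classification can be sketched as a small lookup (an approximate sketch: the band boundaries below are conventional round figures, since the regions shade into one another and no sharp limits are defined):

```python
# Rough classifier mapping a wavelength (in metres) to a spectrum region.
# Boundaries are conventional approximations, not exact physical limits.
def classify(wavelength_m):
    """Name the spectral region for a given wavelength in metres."""
    bounds = [          # (approximate lower wavelength limit in metres, region)
        (1.0,     "radio"),
        (1e-3,    "microwave"),
        (7e-7,    "infrared"),
        (3.8e-7,  "visible"),
        (1e-8,    "ultraviolet"),
        (1e-11,   "X-ray"),
    ]
    for lower, name in bounds:
        if wavelength_m >= lower:
            return name
    return "gamma ray"

print(classify(550e-9))  # green light falls in the visible band
print(classify(0.05))    # a 5 cm wave falls in the microwave band here
```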

For certain classes of EM waves, the waveform is most usefully treated as random, and then spectral analysis must be done by slightly different mathematical techniques appropriate to random or stochastic processes. In such cases, the individual frequency components are represented in terms of their power content, and the phase information is not preserved. Such a representation is called the power spectral density of the random process. Random electromagnetic radiation requiring this kind of analysis is, for example, encountered in the interior of stars, and in certain other very wideband forms of radiation such as the zero-point wave field of the electromagnetic vacuum.

The behavior of EM radiation and its interaction with matter depends on its frequency, and changes qualitatively as the frequency changes. Lower frequencies have longer wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons of higher energy. There is no fundamental limit known to these wavelengths or energies, at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe.

Radio and microwave

Radio waves have the least amount of energy and the lowest frequency. When radio waves impinge upon a conductor, they couple to the conductor, travel along it and induce an electric current on the conductor surface by moving the electrons of the conducting material in correlated bunches of charge. Such effects can cover macroscopic distances in conductors (such as radio antennas), since the wavelength of radiowaves is long.

Electromagnetic radiation phenomena with wavelengths ranging from as long as one meter to as short as one millimeter are called microwaves; with frequencies between 300 MHz (0.3 GHz) and 300 GHz.

At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges which are spread out over large numbers of affected atoms. In electrical conductors, such induced bulk movement of charges (electric currents) results in absorption of the EMR, or else separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven. These interactions produce either electric currents or heat, or both.


Infrared

Like radio and microwave, infrared (IR) is also reflected by metals (as is most EMR, well into the ultraviolet range). However, unlike lower-frequency radio and microwave radiation, infrared EMR commonly interacts with dipoles present in single molecules, which change as atoms vibrate at the ends of a single chemical bond. It is consequently absorbed by a wide range of substances, causing them to increase in temperature as the vibrations dissipate as heat. The same process, run in reverse, causes bulk substances to radiate in the infrared spontaneously (see the thermal radiation section below).

Infrared radiation is divided into spectral subregions. While different subdivision schemes exist,[41][42] the spectrum is commonly divided as near-infrared (0.75–1.4 μm), short-wavelength infrared (1.4–3 μm), mid-wavelength infrared (3–8 μm), long-wavelength infrared (8–15 μm) and far infrared (15–1000 μm).[43]
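The subdivision scheme above can be expressed as a small lookup table with a classification function (a sketch of one common scheme; as noted, others exist):

```python
# Classify a wavelength (in micrometres) into the IR subregions listed above.
IR_BANDS = [  # (name, lower bound inclusive, upper bound exclusive), micrometres
    ("near-infrared", 0.75, 1.4),
    ("short-wavelength infrared", 1.4, 3.0),
    ("mid-wavelength infrared", 3.0, 8.0),
    ("long-wavelength infrared", 8.0, 15.0),
    ("far infrared", 15.0, 1000.0),
]

def ir_subregion(wavelength_um):
    """Return the IR subregion name for a wavelength in micrometres."""
    for name, lo, hi in IR_BANDS:
        if lo <= wavelength_um < hi:
            return name
    return "outside the infrared"

print(ir_subregion(10.0))   # thermal-imaging band
print(ir_subregion(1.55))   # telecom fibre window
```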

Visible light

Natural sources produce EM radiation across the spectrum. EM radiation with a wavelength between approximately 400 nm and 700 nm is directly detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm) are also sometimes referred to as light.

As frequency increases into the visible range, photons have enough energy to change the bond structure of some individual molecules. It is not a coincidence that this happens in the visible range, as the mechanism of vision involves the change in bonding of a single molecule, retinal, which absorbs a single photon. The change in retinal causes a change in the shape of the rhodopsin protein that contains it, which starts the biochemical process that causes the retina of the human eye to sense the light.

Photosynthesis becomes possible in this range as well, for the same reason. A single molecule of chlorophyll is excited by a single photon. In plant tissues that conduct photosynthesis, carotenoids act to quench electronically excited chlorophyll produced by visible light in a process called non-photochemical quenching, to prevent reactions that would otherwise interfere with photosynthesis at high light levels.

Animals that detect infrared make use of small packets of water that change temperature, in an essentially thermal process that involves many photons.

Infrared, microwaves and radio waves are known to damage molecules and biological tissue only by bulk heating, not excitation from single photons of the radiation.

Visible light is able to affect only a tiny percentage of all molecules, and usually not in a permanent or damaging way; rather, the photon excites an electron, which then emits another photon when returning to its original position. This is the source of the color produced by most dyes. Retinal is an exception: when a photon is absorbed, retinal permanently changes structure from cis to trans and requires a protein to convert it back, i.e. to reset it so it can function as a light detector again.

Limited evidence indicates that some reactive oxygen species are created by visible light in skin and that these may play some role in photoaging, in the same manner as ultraviolet A.[44]


Ultraviolet

As frequency increases into the ultraviolet, photons carry enough energy (about three electron volts or more) to excite certain doubly bonded molecules into permanent chemical rearrangement. In DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species produced by ultraviolet A (UVA), whose photons have too little energy to damage DNA directly. This is why ultraviolet at all wavelengths can damage DNA and is capable of causing cancer, and (for UVB) skin burns (sunburn) that are far worse than would be produced by simple heating (temperature increase) effects. This property of causing molecular damage out of proportion to heating effects is characteristic of all EMR with frequencies at the visible light range and above. These properties of high-frequency EMR are due to quantum effects that permanently damage materials and tissues at the molecular level.[citation needed]

At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electronvolts (eV), corresponding to wavelengths shorter than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum, with energies in the approximate ionization range, is sometimes called "extreme UV." Ionizing UV is strongly filtered by the Earth's atmosphere.[citation needed]
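The energy-wavelength correspondence quoted above can be checked with the relation lambda = h*c / E:

```python
# Convert a photon energy in eV to its wavelength in nanometres.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

def wavelength_nm(energy_ev):
    """lambda = h*c / E, returned in nanometres."""
    return h * c / (energy_ev * eV) * 1e9

print(wavelength_nm(10.0))  # ~124 nm: the nominal photoionisation threshold
print(wavelength_nm(33.0))  # ~38 nm: the stricter water-ionisation cutoff
```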

X-rays and gamma rays

Electromagnetic radiation composed of photons that carry at least the minimum ionization energy (which includes the entire spectrum with shorter wavelengths) is therefore termed ionizing radiation. (Many other kinds of ionizing radiation are made of non-EM particles.) Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher frequencies and shorter wavelengths, which means that all X-rays and gamma rays qualify. These are capable of the most severe types of molecular damage, which can happen in biology to any type of biomolecule, including mutation and cancer, and often at great depths below the skin, since the higher end of the X-ray spectrum, and all of the gamma ray spectrum, penetrate matter.

Atmosphere and magnetosphere

Rough plot of Earth's atmospheric absorption and scattering (or opacity) of various wavelengths of electromagnetic radiation

Most UV and X-rays are blocked by absorption, first by molecular nitrogen and then (for wavelengths in the upper UV) by the electronic excitation of dioxygen and finally ozone at the mid-range of UV. Only 30% of the Sun's ultraviolet light reaches the ground, and almost all of this is well transmitted.

Visible light is well transmitted in air, as it is not energetic enough to excite nitrogen, oxygen, or ozone, but too energetic to excite molecular vibrational frequencies of water vapor.[citation needed]

Absorption bands in the infrared are due to modes of vibrational excitation in water vapor. However, at energies too low to excite water vapor, the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves.[citation needed]

Finally, at radio wavelengths longer than 10 meters or so (about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 meters or 3 MHz) to be reflected and results in shortwave radio beyond line-of-sight. However, certain ionospheric effects begin to block incoming radiowaves from space, when their frequency is less than about 10 MHz (wavelength longer than about 30 meters).[45]
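The roughly 10 MHz cutoff for radio waves arriving from space corresponds to the ionosphere's plasma (critical) frequency. A sketch using the standard approximation f_p ≈ 8.98·√N Hz, where N is electron density per cubic metre (the density used below is an order-of-magnitude assumption for the daytime F-layer peak, not a measured value):

```python
import math

def plasma_frequency_hz(electron_density_m3):
    """Ionospheric plasma frequency: f_p ~= 8.98 * sqrt(N) Hz,
    with N in electrons per cubic metre."""
    return 8.98 * math.sqrt(electron_density_m3)

# Assumed daytime F-layer peak electron density, order of magnitude only:
N = 1e12
print(f"f_p ~ {plasma_frequency_hz(N) / 1e6:.1f} MHz")
```

For N ≈ 10¹² m⁻³ this gives roughly 9 MHz, consistent with the ~10 MHz figure above: radio below this frequency is reflected rather than transmitted by the ionosphere.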

Thermal and electromagnetic radiation as a form of heat

The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy, or even kinetic energy, in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, and the photovoltaic effect for ionizing radiation at far ultraviolet, X-ray and gamma wavelengths), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave and radio wave radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire.[46][citation needed]

Ionizing radiation creates high-speed electrons in a material and breaks chemical bonds, but after these electrons collide many times with other atoms eventually most of the energy becomes thermal energy all in a tiny fraction of a second. This process makes ionizing radiation far more dangerous per unit of energy than non-ionizing radiation. This caveat also applies to UV, even though almost all of it is not ionizing, because UV can damage molecules due to electronic excitation, which is far greater per unit energy than heating effects.[46][citation needed]

Infrared radiation in the spectral distribution of a black body is usually considered a form of heat, since it has an equivalent temperature and is associated with an entropy change per unit of thermal energy. However, "heat" is a technical term in physics and thermodynamics and is often confused with thermal energy. Any type of electromagnetic energy can be transformed into thermal energy in interaction with matter. Thus, any electromagnetic radiation can "heat" (in the sense of increasing the temperature of) a material, when it is absorbed.[47]

The inverse or time-reversed process of absorption is thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter. The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material.[48]
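Two standard black-body results quantify this thermal emission: the Stefan-Boltzmann law for total radiated power and Wien's displacement law for the wavelength of peak emission. A brief sketch (the temperatures are illustrative):

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W*m^-2*K^-4
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def radiated_power_w_per_m2(temperature_k):
    """Stefan-Boltzmann law: total emitted power per unit area, P = sigma*T^4."""
    return SIGMA * temperature_k ** 4

def peak_wavelength_um(temperature_k):
    """Wien's displacement law: wavelength of peak emission, in micrometres."""
    return WIEN_B / temperature_k * 1e6

print(peak_wavelength_um(310))   # human body (~310 K): ~9.3 um, long-wave IR
print(peak_wavelength_um(5772))  # solar surface: ~0.50 um, visible light
```

This makes concrete why room-temperature matter radiates in the infrared while the Sun's thermal radiation peaks in the visible.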

The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy.[49]

Biological effects

Bioelectromagnetics is the study of the interactions and effects of EM radiation on living organisms. The effects of electromagnetic radiation upon living cells, including those in humans, depend upon the radiation's power and frequency. For low-frequency radiation (radio waves to visible light) the best-understood effects are those due to radiation power alone, acting through heating when radiation is absorbed. For these thermal effects, frequency is important as it affects the intensity of the radiation and its penetration into the organism (for example, microwaves penetrate better than infrared). It is widely accepted that low-frequency fields too weak to cause significant heating could not possibly have any biological effect.[50]

Despite these commonly accepted results, some research suggests that weaker non-thermal electromagnetic fields (including weak ELF magnetic fields, although the latter do not strictly qualify as EM radiation[50][51][52]) and modulated RF and microwave fields can have biological effects.[53][54][55] The fundamental mechanisms of the interaction between biological material and electromagnetic fields at non-thermal levels are not fully understood.[50]

The World Health Organization has classified radio frequency electromagnetic radiation as Group 2B - possibly carcinogenic.[56][57] This group contains possible carcinogens such as lead, DDT, and styrene. For example, epidemiological studies looking for a relationship between cell phone use and brain cancer development have been largely inconclusive, save to demonstrate that the effect, if it exists, cannot be a large one.

At higher frequencies (visible and beyond), the effects of individual photons begin to become important, as these now have enough energy individually to directly or indirectly damage biological molecules.[58] All UV frequencies have been classed as Group 1 carcinogens by the World Health Organization. Ultraviolet radiation from sun exposure is the primary cause of skin cancer.[59][60]

Thus, at UV frequencies and higher (and probably somewhat also in the visible range),[44] electromagnetic radiation does more damage to biological systems than simple heating predicts. This is most obvious in the "far" (or "extreme") ultraviolet. UV, with X-ray and gamma radiation, are referred to as ionizing radiation due to the ability of photons of this radiation to produce ions and free radicals in materials (including living tissue). Since such radiation can severely damage life at energy levels that produce little heating, it is considered far more dangerous (in terms of damage-produced per unit of energy, or power) than the rest of the electromagnetic spectrum.

* * * * * * * *

Quantum electrodynamics


In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism, giving a complete account of matter and light interaction.

In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.[1]:Ch1


The first formulation of a quantum theory describing radiation and matter interaction is attributed to British scientist Paul Dirac, who (during the 1920s) was able to compute the coefficient of spontaneous emission of an atom.[2]

Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and an elegant formulation of quantum electrodynamics due to Enrico Fermi,[3] physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck,[4] and Victor Weisskopf,[5] in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer.[6] At higher orders in the series infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics.

Difficulties with the theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom,[7] now known as the Lamb shift and magnetic moment of the electron.[8] These experiments exposed discrepancies which the theory was unable to explain.

A first indication of a possible way out was given by Hans Bethe in 1947,[9] after attending the Shelter Island Conference.[10] While he was traveling by train from the conference to Schenectady he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford.[9] Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization.

Feynman (center) and Oppenheimer (right) at Los Alamos.

Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga,[11] Julian Schwinger,[12][13] Richard Feynman[14][15][16] and Freeman Dyson,[17][18] it was finally possible to get fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Shin'ichirō Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded with the 1965 Nobel Prize in Physics for their work in this area.[19] Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent.[17] Renormalization, the need to attach a physical meaning at certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus".[1]:128

QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on the pioneering work of Schwinger, Gerald Guralnik, Dick Hagen, and Tom Kibble,[20][21] Peter Higgs, Jeffrey Goldstone, and others, Sheldon Lee Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force.

Feynman's view of quantum electrodynamics


Near the end of his life, Richard Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter,[1] a classic non-mathematical exposition of QED from the point of view articulated below.

The key components of Feynman's presentation of QED are three basic actions.[1]:85

A photon goes from one place and time to another place and time.
An electron goes from one place and time to another place and time.
An electron emits or absorbs a photon at a certain place and time.
Feynman diagram elements

These actions are represented in the form of visual shorthand by the three basic elements of Feynman diagrams: a wavy line for the photon, a straight line for the electron and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron. These can all be seen in the adjacent diagram.

As well as the visual shorthand for the actions, Feynman introduces another kind of shorthand for the numerical quantities called probability amplitudes. The probability is the square of the absolute value of the total probability amplitude. If a photon moves from one place and time A to another place and time B, the associated quantity is written in Feynman's shorthand as P(A to B). The similar quantity for an electron moving from C to D is written E(C to D). The quantity that tells us about the probability amplitude for the emission or absorption of a photon he calls j. This is related to, but not the same as, the measured electron charge e.[1]:91
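As a minimal toy illustration (not the actual QED calculation), a probability amplitude can be modelled as a complex number whose squared modulus gives the probability:

```python
# A probability amplitude is a complex number ("arrow"); the probability
# of the associated event is the squared modulus of the amplitude.
amp = complex(0.3, 0.4)      # an arbitrary illustrative amplitude
probability = abs(amp) ** 2  # |amp|^2 = 0.3^2 + 0.4^2
print(probability)           # 0.25
```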

QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks and then using the probability amplitudes to calculate the probability of any such complex interaction. It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above (P(A to B), E(C to D) and j) acts just like our everyday probability (a simplification made in Feynman's book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman.

The basic rules of probability amplitudes that will be used are:[1]:93

  1. If an event can happen in a variety of different ways, then its probability amplitude is the sum of the probability amplitudes of the possible ways.
  2. If a process involves a number of independent sub-processes, then its probability amplitude is the product of the component probability amplitudes.
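These two rules can be made concrete with complex numbers standing in for amplitudes (the values below are arbitrary illustrative choices, not physical amplitudes):

```python
# Rule 1: amplitudes for alternative ways ADD.
# Rule 2: amplitudes for independent sub-processes MULTIPLY.
way_1 = complex(0.5, 0.2)
way_2 = complex(-0.1, 0.4)

# Rule 1: add the amplitudes first, then square...
p_event = abs(way_1 + way_2) ** 2
# ...which differs from adding the individual probabilities:
# the cross term is quantum interference.
p_classical = abs(way_1) ** 2 + abs(way_2) ** 2
print(p_event, p_classical)  # 0.52 vs 0.46

# Rule 2: two independent sub-processes -> multiply the amplitudes.
sub_a, sub_b = complex(0.6, 0.0), complex(0.0, 0.5)
print(abs(sub_a * sub_b) ** 2)  # equals |sub_a|^2 * |sub_b|^2 = 0.09
```

The gap between `p_event` and `p_classical` is exactly the interference effect that distinguishes amplitude arithmetic from everyday probability.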

Basic constructions

Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label A) and a photon at another place and time (given the label B). A typical question from a physical standpoint is: "What is the probability of finding an electron at C (another place and a later time) and a photon at D (yet another place and time)?". The simplest process to achieve this end is for the electron to move from A to C (an elementary action) and for the photon to move from B to D (another elementary action). From a knowledge of the probability amplitudes of each of these sub-processes – E(A to C) and P(B to D) – we would expect to calculate the probability amplitude of both happening together by multiplying them, using rule b) above. This gives a simple estimated overall probability amplitude, which is squared to give an estimated probability.[citation needed]

But there are other ways in which the end result could come about. The electron might move to a place and time E, where it absorbs the photon; then move on before emitting another photon at F; then move on to C, where it is detected, while the new photon moves on to D. The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertexes – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of E and F. We then, using rule a) above, have to add up all these probability amplitudes for all the alternatives for E and F. (This is not elementary in practice and involves integration.) But there is another possibility, which is that the electron first moves to G, where it emits a photon, which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again, we can calculate the probability amplitude of these possibilities (for all points G and H). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering.[citation needed]
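The bookkeeping just described — multiply amplitudes along each way, sum over the alternatives — can be sketched as a toy computation. The amplitude function and coupling below are random placeholders, not the real propagators; only the combination rules are the point:

```python
import cmath
import random

random.seed(0)

def amp(_src, _dst):
    """Placeholder amplitude for one elementary action (random small arrow)."""
    return cmath.rect(random.uniform(0.05, 0.15), random.uniform(0, 2 * cmath.pi))

j = 0.1  # placeholder vertex (emission/absorption) amplitude

# Leading estimate: electron A->C and photon B->D directly (rule 2: multiply).
total = amp("A", "C") * amp("B", "D")

# Correction: sum over intermediate points E (absorption) and F (emission),
# per rule 1. A tiny grid stands in for integrating over all of spacetime.
points = range(4)
for E in points:
    for F in points:
        total += (amp("A", E) * amp("B", E) * j   # electron and photon to E; absorb
                  * amp(E, F) * j                 # electron on to F; emit
                  * amp(F, "C") * amp(F, "D"))    # electron to C; new photon to D
p = abs(total) ** 2  # refined estimated probability
print(p)
```

Each correction term contains three electron actions, two photon actions and two vertex factors, matching the Compton-scattering description above; the real calculation replaces the grid with an integral.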

There is an infinite number of other intermediate processes in which more and more photons are absorbed and/or emitted. For each of these possibilities, there is a Feynman diagram describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED. To calculate the probability of any interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude.

That basic scaffolding remains when one moves to a quantum description, but some conceptual changes are needed. One is that whereas we might expect in our everyday life that there would be some constraints on the points to which a particle can move, that is not true in full quantum electrodynamics. There is a possibility of an electron at A, or a photon at B, moving as a basic action to any other place and time in the universe. That includes places that could only be reached at speeds greater than that of light and also earlier times. (An electron moving backwards in time can be viewed as a positron moving forward in time.)[1]:89, 98–99

Probability amplitudes

Feynman replaces complex numbers with spinning arrows, which start at emission and end at detection of a particle. The sum of all resulting arrows represents the total probability of the event. In this diagram, light emitted by the source S bounces off a few segments of the mirror (in blue) before reaching the detector at P. The sum of all paths must be taken into account. The graph below depicts the total time spent to traverse each of the paths above.

Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but probabilities are computed as the square modulus of probability amplitudes, which are complex numbers.

Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams, which are simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow. So, for a given process, if two probability amplitudes, v and w, are involved, the probability of the process will be given either by

P = |v + w|²

or

P = |v × w|².
The rules as regards adding or multiplying, however, are the same as above. But where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes that now are complex numbers.

Addition of probability amplitudes as complex numbers
Multiplication of probability amplitudes as complex numbers

Addition and multiplication are common operations in the theory of complex numbers and are given in the figures. The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the beginning of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction.
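In code, the head-to-tail and length-times-length rules are exactly complex addition and multiplication; the polar form makes "lengths multiply, angles add" explicit (the arrow lengths and angles below are arbitrary examples):

```python
import cmath
import math

u = cmath.rect(2.0, math.radians(30))  # arrow of length 2, turned 30 degrees
v = cmath.rect(0.5, math.radians(45))  # arrow of length 0.5, turned 45 degrees

# Multiplication: lengths multiply, angles add.
product = u * v
print(abs(product), math.degrees(cmath.phase(product)))  # 1.0, 75.0

# Addition: place the second arrow's tail at the first's head;
# the sum runs from the first tail to the second head.
total = u + v
print(abs(total), math.degrees(cmath.phase(total)))
```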

That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, P(A to B) consists of 16 complex numbers, or probability amplitude arrows.[1]:120–121 There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping.

Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at A and B ending at C and D. The amplitude would be calculated as the "difference", E(A to D) × E(B to C) − E(A to C) × E(B to D), where we would expect, from our everyday idea of probabilities, that it would be a sum.[1]:112–113
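This antisymmetry is easy to sketch numerically. `E()` below is a placeholder amplitude table with arbitrary complex values, not the real electron propagator; the point is only the relative minus sign between the direct and exchanged pairings:

```python
def E(src, dst):
    """Placeholder single-electron amplitude (arbitrary illustrative values)."""
    table = {("A", "C"): 0.6 + 0.1j, ("A", "D"): 0.2 - 0.3j,
             ("B", "C"): 0.1 + 0.4j, ("B", "D"): 0.5 + 0.0j}
    return table[(src, dst)]

# Fermi-Dirac statistics: the two pairings are SUBTRACTED, not added.
amplitude = E("A", "D") * E("B", "C") - E("A", "C") * E("B", "D")
print(abs(amplitude) ** 2)
```

Were the two terms added instead (as everyday probability would suggest), the result would differ; the minus sign is what makes identical electrons avoid occupying the same state.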


Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describes the behavior of the electron's probability amplitude, and Maxwell's equations, which describe the behavior of the photon's probability amplitude. These are called Feynman propagators. The translation to a notation commonly used in the standard literature is as follows:

P(A to B) → D_F(x_B − x_A),    E(C to D) → S_F(x_D − x_C),

where a shorthand symbol such as x_A stands for the four real numbers that give the time and position in three dimensions of the point labeled A.

Mass renormalization

A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from A to B, we must take into account all the possible ways: all possible Feynman diagrams with those endpoints. Thus there will be a way in which the electron travels to C, emits a photon there and then absorbs it again at D before moving on to B. Or it could do this kind of thing twice, or more. In short, we have a fractal-like situation in which if we look closely at a line, it breaks up into a collection of "simple" lines, each of which, if looked at closely, are in turn composed of "simple" lines, and so on ad infinitum. This is a challenging situation to handle. If adding that detail only altered things slightly, then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to infinite probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process".[1]:128


Within the above framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem."[1]:152

* * * * * * * *



Malays J Med Sci. 2020 Feb; 27(1): 1–5.
Published online 2020 Feb 27. doi: 10.21315/mjms2020.27.1.1
PMCID: PMC7053547
PMID: 32158340

Quantum Physics Perspective on Electromagnetic and Quantum Fields Inside the Brain

Quantum Physics for the Brain

Quantum physics is the branch of physics that deals with tiny objects and with the quantisation (packets of energy or interaction) of various entities. In contrast to classical Newtonian physics, which deals with large objects, quantum physics or mechanics is a science of small-scale objects such as atoms and subatomic particles. In general, the central elements of quantum physics are:

i) particle-wave duality, for quantum entities such as elementary particles, and even for compound particles such as atoms and molecules;
ii) quantum entanglement, a phenomenon in which the quantum states of two or more objects have to be described with reference to each other, even though the individual objects may be spatially separated;
iii) coherence and decoherence: coherence refers to waves that have a constant phase difference, the same frequency, or the same waveform morphology, whilst decoherence refers to the loss of that coherence;
iv) superposition, a complex property of a wave: within one wave there can be many other smaller waves;
v) quantum tunnelling, a phenomenon in which a quantum particle passes through a barrier;
vi) quantum uncertainty, also known as Heisenberg's uncertainty principle, which states that the more precisely the position of a particle is determined, the less precisely its velocity can be known, and vice versa.

Thus, quantum physics is seen as dealing with ambiguity in the physical world.
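
Points (iii) and (iv) can be illustrated with a standard two-path interference sketch (a textbook picture assumed here, not taken from this editorial): with a fixed relative phase the superposed amplitudes interfere, whilst with a random phase on each trial the interference term averages away, mimicking decoherence.

```python
import cmath
import random

# Two unit-amplitude paths in superposition; the detected intensity is
# the squared magnitude of the summed amplitudes.
def intensity(phase1, phase2):
    """Probability from superposing two unit-amplitude paths."""
    return abs(cmath.exp(1j * phase1) + cmath.exp(1j * phase2)) ** 2

# Coherent case: constant (zero) phase difference -> constructive
# interference, intensity 4 rather than the classical 1 + 1 = 2.
print(intensity(0.0, 0.0))  # 4.0

# Decoherent case: a random relative phase on every trial washes the
# interference term out; the average tends to the classical value 2.
random.seed(0)
trials = [intensity(0.0, random.uniform(0, 2 * cmath.pi))
          for _ in range(100000)]
print(sum(trials) / len(trials))  # ~2.0
```

The coherent case shows superposition doing real work (4, not 2); the decoherent average recovers ordinary probability addition.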

Based upon the first principle, the human brain can be viewed entirely in either particle or wave form. The particle perspective portrays the brain in anatomical form, whilst the wave perspective depicts the brain in wave form. Waves of the brain can be classified further into two main entities: i) brainwaves, which are commonly detected or studied using electroencephalography (EEG) or magnetoencephalography (MEG) and are based on electromagnetic principles; and ii) the wave perspective of the brain's anatomical particles. The first kind, brainwaves, can be described as electric waves with energy or a field in them, and are referred to here as the electromagnetic field (EMF); for the second kind, the term quantum waves or quantum field (QF) is used. Therefore, the brain can be viewed either as i) an anatomical brain with brainwaves (classical), or ii) as all waves with one field or energy, where this field can be divided further into EMF (large-object physics) and QF (small-object physics). In this editorial commentary, the author describes briefly these two concepts of brain fields and invites readers to think of quantum physics as a science capable of describing not only the behaviour of subatomic particles, but also the behaviour of people (the mind).

Electromagnetic and Quantum Fields Inside the Brain

A physiological principle states that neurons communicate with each other using electrical signals. The electrical signal, called an action potential, travels along the axon and triggers neurotransmitter release at the synapse, so that a further electrical signal can be passed on to other neurons. An electrical signal is always accompanied by a magnetic field; thus, this type of communication is known as EMF communication. By contrast, the QF type of communication considers all brain elements to be waves, so the energy is still wave-like (ups and downs) and perhaps diffused in pattern, with more complex networks. In this perspective, the EMF of the brain is viewed as arising from: i) a projected stimulus outside the brain, such as our five senses (seeing, hearing, touching, tasting and smelling); ii) the brain itself, as in virtual reality, dreaming and hypnosis (without external stimuli); or iii) non-cognition, such as pure motor movements. On the other hand, the brain QF is viewed as one field, or wholeness or oneness with our universe; thus, it is commonly regarded as one consciousness. With this understanding, the concept of consciousness in the quantum realm is not restricted to the human brain alone. In other words, we may say QF permeates the whole of our universe. The quantum entity that suits this permeating-energy concept is light, whilst the non-quantum entity (Newtonian physics) that suits the focused or limited projection is electricity. Hence, EMF has an electric feature whilst QF has a light feature. This is summarised in Figure 1 and Table 1.
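
The classical electrical signalling described above is often simplified as a leaky integrate-and-fire neuron; the sketch below uses that standard model with purely illustrative parameters (not physiological measurements from this editorial).

```python
# Leaky integrate-and-fire sketch of classical neuronal signalling:
# the membrane voltage integrates input current while leaking back
# toward rest; crossing threshold emits a spike (action potential)
# and resets the membrane. Parameters are illustrative only.
def simulate(input_current, steps=200, dt=0.1,
             tau=10.0, threshold=1.0, v_reset=0.0):
    """Return the time steps at which the model neuron spikes."""
    v = 0.0
    spikes = []
    for step in range(steps):
        # Leak toward 0 with time constant tau, driven by the input.
        v += dt * (-v / tau + input_current)
        if v >= threshold:      # action potential fires...
            spikes.append(step)
            v = v_reset         # ...and the membrane resets
    return spikes

print(len(simulate(0.05)))  # weak input: never reaches threshold, 0 spikes
print(len(simulate(0.5)))   # strong input: fires repeatedly
```

Weak input saturates below threshold (steady state tau × current = 0.5 here), so no spike ever occurs; stronger input produces a regular spike train, the "electrical signal" passed down the axon.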

Figure 1. Limited consciousness for the brain and limited projection for the universe form a principle for the collapse of the wave function of the particle. The brain and the universe are permeated by the quantum field, whilst the electromagnetic field inside the brain arises from a discrete or limited projection from our universe.

Table 1. General features of brain EMF and brain QF

1. Wave pattern. EMF: presence of a pilot/directional wave. QF: diffused waves.
2. Wave characteristics. EMF: high-frequency wave. QF: low-frequency wave.
3. Wavelength. EMF: short wavelength. QF: long wavelength.
4. Quantum concept. EMF: deterministic (locatable). QF: non-deterministic (unlocatable, varied).
5. Dimension. EMF: high dimension (electric). QF: low dimension (light).
6. Physics concept. EMF: Bohmian mechanics. QF: quantum mechanics.
7. Brain network. EMF: simple network (few nodes). QF: complex network (many nodes/varied).
8. Symmetry. EMF: more asymmetry. QF: more symmetry.
9. Evoked response. EMF: large evoked response with few stimuli (few trials/tests). QF: smaller evoked response, needing a higher number of stimuli (multiple trials/tests).
11. Wholeness/oneness/one-field concept. EMF: no (it is a projected/limited field). QF: yes (spreading or permeating the whole of the universe/field).
12. Relation to psychiatry. EMF: less relevant. QF: more relevant, because QF is related to the wholeness, reality or one-consciousness concept.
13. Way to alter the network. EMF: a focused few electrodes [deep brain stimulation (DBS)-like electrodes, e.g. for Parkinson's disease]. QF: smaller and multiple electrodes (toothbrush-like electrodes).
14. Way to alter the network using frequency. EMF: high-frequency stimulation preferred in most cases (inhibition). QF: low- (stimulation) or high-frequency stimulation depending on clinical manifestations.

Brain function: a combination of EMF and QF (two brain fields/energies)

A. Brain function (motor, sensory, vision, sound, touch) and its impairment. Non-cognitive impairment, such as stroke affecting motor, sensory, vision, sound or touch. EMF is more affected than QF. Associated with degree of impairment.
B. Brain function (language, emotion, memory, attention, planning, etc.) and its impairment. Cognitive impairment of language, emotion, memory, attention, planning, etc. QF is affected significantly, together with EMF. Associated with degree of impairment.
C. Brain function and psychosis. Psychotic manifestations such as auditory or visual hallucinations, thought insertion, delusions, etc. QF is more affected than EMF. Yes or no (present or not; not associated with degree).

The two aforementioned concepts (i.e. projected or internally arising stimuli for EMF, and the brain as part of one consciousness) unintentionally introduce a 'limited' principle for both our universe and our brain. For our universe, the projected stimulus is a limited stimulus from only a certain aspect, area or dimension of the universe; for the brain, the limitation applies to consciousness: our brain is part of one consciousness, or has limited consciousness. With this 'limitation principle of our universe and brain', our brain (with its three-dimensional vision) cannot see the wave function of a particle or atom; we can only see them in particle, atom, molecule, matter or object form. In other words, we may say that the partial consciousness we have collapses the wave function of a particle and thus limits our perception to only three-dimensional vision (particle, atom, molecule, matter).

EMF and QF in Relation to Medicine

In reference to Table 1, brain EMF is based on an electric signal with pilot or directional, high-frequency, short-wavelength waves that are locatable, or can be determined using few stimuli or trials, with a notably large evoked electrical (or magnetic) response. Thus, it seems to cover our five basic senses, with simpler brain networks. Conversely, light (also part of the electromagnetic spectrum) is regarded as the main entity for brain QF: diffused or non-directional, lower-frequency and longer-wavelength waves that are unlocatable, non-deterministic or varied. Other features of QF are greater symmetry (a light feature) and more complex networks. Thus, QF may play a bigger role not in our five common senses but in our brain's cognition. With these features (varied, complex, diffused), neuroplasticity is thought more likely to occur in cognitive functions (language, emotion, memory, attention, etc.) than in our five-sense or motor impairments.
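
The frequency/wavelength contrast drawn here follows the elementary wave relation v = fλ: for a fixed propagation speed, higher frequency means shorter wavelength. A minimal check, with illustrative numbers:

```python
# Basic wave relation: speed = frequency * wavelength, so
# wavelength = speed / frequency. The frequencies below are
# illustrative only, not measurements from this editorial.
def wavelength(speed, frequency):
    return speed / frequency

c = 3.0e8  # m/s, speed of light in vacuum (approximate)

print(wavelength(c, 1e15))  # high frequency -> short wavelength (3e-7 m)
print(wavelength(c, 40.0))  # low frequency  -> long wavelength (7.5e6 m)
```

For any fixed speed the two quantities are strictly inversely related, which is why "high frequency" and "short wavelength" always travel together in rows 2 and 3 of Table 1.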

For the aforementioned reasons, patients who are suffering from motor, sensory or cognitive impairments obviously require EEG and/or MEG in order to arrive at a better diagnosis or to better gauge the extent of impairment. For those suffering from psychiatric disorders or psychosis-spectrum disorders, QF is probably the better energy to study, because of the oneness or wholeness concept of QF: any fragmentation in this one field (loss of contact with reality) likely causes psychotic-like manifestations. Notably, the greater symmetry and lower-frequency wave features of QF may be utilised in making diagnoses and in monitoring psychiatric disorders. In relation to this understanding, one might treat a cognitively impaired or psychotic patient by using more diffused, smaller and multiple electrodes (toothbrush-like electrodes) implanted at certain cognitive or psychosis brain networks.


The universe and the brain are considered the two most complicated entities, and obvious links exist between them. One of those links is the 'limitation principle' that applies to both. The energy in our brain is thought of as a pairing of an obvious EMF with a more hidden QF energy. QF is thought of as a permeating background energy for our brain and the universe, while EMF is a more focused, limited, projection-like brain energy. A greater understanding of QF may open new ways of treating some medical disorders, particularly those related to cognitive impairment and psychiatry.


The author is thankful to Professor Dato' Dr Hj Jafri Malin Abdullah for establishing the Neurosurgery, Clinical and Fundamental Neurosciences Centre at Universiti Sains Malaysia (USM), with modern facilities for studying human brainwaves (EEG, ECoG and MEG).


Conflict of Interest






Articles from The Malaysian Journal of Medical Sciences : MJMS are provided here courtesy of School of Medical Sciences, Universiti Sains Malaysia