AETHER

REG

by Jeff Dunne

Dr Jeff Dunne is an engineer and physicist, chief scientist at the Johns Hopkins University Applied Physics Laboratory, and on the board of trustees at the International Consciousness Research Laboratories (ICRL).


REGs and a Changing Worldview

“What’s an REG? It is nor hand, nor foot, nor arm, nor
naught but a fancy counter of random happenings.”

– nor Shakespeare 

The world is changing.  This is nothing new.  The world has always been changing, and no aspect of it more than humanity.  We are a multi-faceted mystery, with faces that twist and turn, with connecting edges of desires and fears that rise and fall, and all in an endless collection of cycles intrinsically interrelated in such overwhelming complexity that the patterns often seem completely random.  But what does it mean to be random?  The very concept of ‘randomness’ is a matter of information—that is to say, an assessment of randomness is an assessment of information, and vice versa.  In hindsight, we sometimes feel we can piece things together.  It seems possible to trace and account for the swings of culture, our tolerance for the importance (and, in fact, the very realness) of spirituality, and our fascination-turned-obsession with materialism and the elimination of uncertainty.  Perhaps it is possible.  Perhaps not.

Over the past few hundred years, humanity became enchanted with the gifts of science, often to the point of losing the ability to define what science even is, what it can and cannot do.  Some people will say that science is a methodology, others that it is a collection of evidence.  There are those who say science is a model of the universe responsible for describing the wheels and levers that drive reality.  Some say it is infallible, but those people are rarely professional scientists who have borne witness to the foibles and absurdities that arise therein—as they do in every other facet of the organism that is humanity—absurdities that drift like flotsam on a sea of human insecurity.  

So what is the scientific methodology?  It is a chicken-and-egg, spiral dance of observation, hypothesis, and testing.  We see things, we wonder about them and form ideas about what might be going on, and then design and execute tests to see if we are right.  Well, almost.  We create tests to hopefully not be wrong, and that is not quite the same.  Since humans do not have intellectual access to the infinity of all things that make up reality, the best we can do is explore ideas within the constraints of our imagination.  Alas, there is always some young upstart who is slightly more imaginative than the last batch of scientists, uncovering exceptions and pushing beyond limits to break the latest ‘unquestionable truth’.  Let’s remember how long it took for humanity to realise that things don’t always fall downwards when you drop them.  The first hydrogen balloon only floated up and away in defiance of the laws of gravity in 1783.  Yes, yes, hot air balloons beat it by a few months and Da Vinci was being imaginative a few hundred years before that, but you get the point.  Should we articulate it anyway?  Okay, here it is: science never proves things, because we are incapable of testing a theory under every conceivable set of conditions and circumstances.  That doesn’t make those theories useless, of course; it simply means that good scientists are humble in recognising that new discoveries—discoveries that will demand revision of existing world models—are so likely as to be practically guaranteed.  At least that’s how it has worked thus far throughout the entirety of human history.  Have you heard of quantum mechanics?

Quantum mechanics (QM) is now a very popularised topic, having become so universally accepted as ‘really hip’ (a phrase that is, alas, no longer really hip) that people use the term “quantum” to sell nearly anything… regardless of whether there is any quantisation going on at all.  Yet QM is not the most recent revelation in scientific circles.  In the latter half of the 20th century, a significant body of research was performed to validate something that was fully appreciated thousands of years ago: that consciousness plays a critical role in the establishment of the physical universe.  Ironically, mainstream science had been working so hard to downplay the significance of subjective experience, to prove (despite science’s inability to actually prove anything) that the universe is objective and deterministic, that a) a great many people were sold on the idea of a purely mechanistic universe in which consciousness is either irrelevant or simply an emergent peculiarity (or both), and b) most scientists weren’t prepared to listen when the founders of QM were saying that everything starts with consciousness.  It’s there in our life experiences, it’s there in the equations of QM, but we needed more.

In the late 1970’s, a series of curious happenings brought the question of the connection between consciousness and ‘objective’ systems to the attention of a scientist at Princeton University, one Dr. Robert Jahn, then Dean of the School of Engineering.  Dr. Jahn and his research partner, Dr. Brenda Dunne, were far from the first people to become convinced that this connection was worth studying, but they did something very different from past investigators.  Rather than taking the typical approach of focusing on a few ‘big signals’, i.e. a small set of individuals displaying spectacular, atypical psychic capabilities, they went broad.  At the Princeton Engineering Anomalies Research (PEAR) Laboratory, Jahn and Dunne built a research program that collected massive amounts of data from the attempts of ordinary people to influence ‘objective’ systems (meaning ‘systems specifically designed to be unaffected by the thoughts or intentions of people’).

Not surprisingly (at least today in 2023), those studies demonstrated that the conscious intent to influence the behaviour of a system has an (often small but) measurable effect on that system.  But where prior studies had focused on the rare and amazing instances that mainstream science dismissed as flukes (or mistakes, or cheating, or any number of creative ways to justify ignoring the data), the PEAR Laboratory’s results were of a very different character.  Jahn and Dunne amassed an overwhelming body of scientific evidence for the effects of conscious intent, with a consistency and statistical certainty far beyond any reasonable cutoff for dismissing it as ‘a random aberration’.

This immense data set was collected using many operators and many physical systems.  A complete summary of the details of the PEAR research goes far beyond the scope of this article, but they are thoroughly documented in a series of papers and books published by Jahn and Dunne from the 1980’s and into the next century.  What is important here is to note that a) the operators came from all walks of life—young, old, academically-minded and not, scientifically-minded and not, and every other spectrum across which humanity varies—and b) the nature of the equipment was likewise diverse.  PEAR performed research with equipment that was thermal, optical, mechanical, fluid-based… and of course digital.  

Digital systems, the primary one being the REG (Random Event Generator), had a distinct advantage over other systems.  That advantage was not stronger signals, more significant effects, or anything that had to do with the observability of the effects of consciousness.  Rather, it was a matter of immense practicality: it is far easier to collect, store, and analyse data from digital systems than from any other kind.  All other phenomena ultimately had to be transformed into digital content for analysis, so collecting experimental results from digital systems was simply the most efficient way to accumulate large amounts of data.  Face it, if you want a device you can shove into a pocket, you go for a tiny computer and not a Michelson interferometer.  For these reasons, the largest portion of PEAR data was collected using REGs.

Random Event Generator.  REG.  In fact, many researchers refer to these devices as Random Number Generators, or RNGs.  That is understandable, since the core output of such devices is random numbers, but it is not quite accurate.  These devices work by counting events from a specific source of randomness.  This can be achieved in a myriad of ways: monitoring for nuclear decay, measuring quantum tunnelling, examining thermal effects at microscopic levels, etc.  One can design such a device around any random system, but at PEAR (as at essentially all research laboratories of this nature) ‘small’ and ‘portable’ were very valuable traits for experimentation.  Whatever the source of randomness, these devices count events and use those counts to generate numbers.  If an event has a 50/50 chance of occurring in any given time period, then waiting (for example) two hundred time periods should result, on average, in a total of 100 events occurring.  If 99 (or 101) events occur, that is not particularly exciting.  After all, the events are called ‘random’ for a reason.  If, say, 90 or 110 events occur, that is slightly more exciting.  If 15 (or 185) events occur, that is categorised in the scientific literature as ‘really freaking weird’.  Okay, that phrase doesn’t actually appear much, but it is the generally accepted interpretation of ‘significantly unusual behaviour’.
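
To make the counting concrete, below is a minimal sketch (in Python) of the kind of tallying an REG performs.  It is purely illustrative: a software pseudorandom generator stands in for the physical noise source that a real device counts, and the function name is invented for this example.

    import random

    def simulate_reg_trial(periods=200, p_event=0.5):
        """Count how many 50/50 'events' occur across a fixed number of time periods.

        A real REG counts physical events from a noise source (nuclear decay,
        tunnelling, thermal noise); here a fair software coin flip stands in
        for each time period.
        """
        return sum(1 for _ in range(periods) if random.random() < p_event)

    if __name__ == "__main__":
        # A handful of trials: with 200 periods we expect about 100 events,
        # and small deviations (97, 103, 96...) are entirely unremarkable.
        for i in range(5):
            print(f"trial {i + 1}: {simulate_reg_trial()} events (expected ~100)")

Run it a few times and the counts will wander around 100, which is exactly the baseline behaviour against which any influence of intent must be measured.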

Let’s consider an example.  Suppose one waits for 200 time intervals, and measures 103 events when expecting, on average, a total of 100 events to occur.  This is not particularly remarkable.  The next time you run the experiment, you might get 94, or 100, or 101, or 97, etc.  An individual small deviation is simply not a big deal.  And this is another illustration of the distinction between PEAR’s approach and other work that had come before it.  PEAR was not attempting to get huge variations (e.g. measuring 160 when one expects 100).  Instead, Jahn and Dunne focused on the net accumulation of effect.

What?  Let me explain.  If you measure 103 (when expecting 100) once, it is, as described above, no big deal.  If you then do it again and get 102, again it is no big deal.  But… by the time you have repeated the experiment thousands of times, those small variations are supposed to average out against corresponding low numbers, i.e. with as many trials falling above expectation as below.  That is not what happened.  PEAR’s data showed that the application of intent during the experiment consistently resulted in net deviations.  By the time one has repeated the data collection thousands (or millions or billions) of times, the odds of the accumulated result being ‘just a random variation’ become vanishingly small.  If one would have to replay the entire history of the universe multiple times to get such a ‘fluke’, we must accept that something more interesting than random behaviour is going on.  It was this approach that established, to an extraordinary degree of statistical certainty, that conscious intent can affect the behaviour of random systems.
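
A back-of-the-envelope calculation shows why accumulation matters so much.  The sketch below computes how far (in standard deviations) an accumulated deviation sits from chance expectation as the number of trials grows; the 0.1-events-per-trial shift is a hypothetical figure chosen for illustration, not a PEAR-reported value.

    import math

    def z_score(n_trials, bits_per_trial=200, mean_shift_per_trial=0.1):
        """Z-score of an accumulated deviation, assuming each trial counts
        200 fair 50/50 bits (mean 100, binomial variance n*p*(1-p) = 50)."""
        variance_per_trial = bits_per_trial * 0.5 * 0.5
        total_shift = mean_shift_per_trial * n_trials
        total_std = math.sqrt(variance_per_trial * n_trials)
        return total_shift / total_std

    if __name__ == "__main__":
        for n in (100, 10_000, 1_000_000):
            print(f"{n:>9} trials: z = {z_score(n):.2f}")
        # An excess of 0.1 events per 200-bit trial is invisible in any single
        # run, yet after a million trials it sits roughly 14 standard
        # deviations from chance, far beyond any conventional threshold.

This is the sense in which individually unremarkable trials can add up to something that is very remarkable indeed.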

The mechanism of how intent has an impact is still an area of active research, and there are many models attempting to describe the phenomenon.  It is not an easy problem for various reasons:

  • The effects are essentially independent of physical mechanism.  In other words, it doesn’t matter whether you try to make a thermometer read a higher/lower temperature, or a stream of water go turbulent sooner/later, or a photon refract left/right, or whatever… the effect is there.
  • The effect has been repeatedly shown to have no relationship with the spatial separation between the intending operator and the physical system.  The operator can be next to the system, in the adjoining room, or halfway around the world.  The results are the same.
  • The effect has also been shown to have no dependency on temporal proximity.  That is, the operator can have their intent at the time the data is being collected, or days, weeks, or months either before or after the data is collected.

Also, there is strong evidence to indicate that the effect is outcomes-based, not method-based, a point easiest to explain with an example.

When operators were instructed to use their intentions to affect an REG, they were told to try to make the machine ‘go high’ or ‘go low’, i.e. to produce more high numbers or more low numbers.  Most operators were under the impression that the machine’s randomness was such that one would affect the outcome by having ‘more or less’ of something (e.g. more or fewer nuclear decay events).  However, in order to provide robustness against environmental influences, REGs are typically designed to take a series of event/no-event data and compare them with a binary ‘comb’, i.e. 10101010…  If an event occurs when compared with a 1 in the comb, that’s a count.  If an event occurs when compared with a 0, that is not a count.  A non-event with a 1 is also not a count, but a non-event with a 0 is a count.  In other words, even though most operators thought they were trying to create more or fewer of a given type of event, what they were really doing was influencing the system to align its events with an established pattern.  Reiterating, the measured effects were about getting the right end result, not about ‘directing’ the actions/behaviours of the system at a mechanistic level.
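
Here is a minimal sketch of that comparison logic, assuming the simple alternating comb described above (a simplification for illustration, not a schematic of any particular PEAR device):

    import random
    from itertools import cycle

    def comb_count(raw_bits, comb=None):
        """Compare raw event/no-event bits against an alternating 'comb'.

        An event (1) aligned with a comb 1 counts, a non-event (0) aligned
        with a comb 0 counts, and mismatches do not.
        """
        if comb is None:
            comb = cycle([1, 0])          # 1 0 1 0 1 0 ...
        return sum(1 for bit, template in zip(raw_bits, comb) if bit == template)

    if __name__ == "__main__":
        raw = [random.randint(0, 1) for _ in range(200)]   # simulated raw source
        print(f"raw events: {sum(raw)}, comb-aligned count: {comb_count(raw)}")
        # A uniform bias in the raw source (say, slightly more 1s overall)
        # largely cancels here: what moves the count is alignment with the
        # pattern, not the sheer number of events.

Note the design point: the comb makes the output insensitive to simple drifts in the underlying noise source, which is precisely why the operators’ results speak to outcomes rather than mechanisms.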

We’ve noted that there are many traditional expectations for a model of this effect that we must avoid: dependence on time, space, and the idea that one is influencing specific mechanisms to achieve an outcome.  But do not despair; there are also findings that can help guide us in developing an understanding of what is going on.

  • Operators tended to have unique signatures for how they would affect the data.  Yang personalities tended to achieve moderate effects in the intended ‘direction’, whereas yin personalities tended to achieve much stronger effects, but often in the opposite direction of the intent.
  • Pairs of operators had signatures that were different from their independent signatures, and operators with strong emotional connections to each other typically had stronger influences.
  • Effects were stronger when operators felt they established a bond or partnership with the system, i.e. where operator and system were working together as a team.
  • The state of humility of the operator played a very important role.  Complete openness to the possibility of having an effect, coupled with an appreciation that the operator had no idea how to do it, produced very strong results.  Confidence that one had ‘figured it out’ or ‘knew how to make it work for certain’ tended to give poor results.  A belief that ‘this is impossible’ was almost always a guarantee of no results at all.  Note the relationship of this observation to the role of uncertainty, for this will become important in a few paragraphs.

Enumerating some of the personality traits that led to strong results—openness, resonance, humility—one might notice that they are the same traits that many past cultures have encouraged as a recommended path for living a healthy and fulfilling life.  Is it possible that the healthiest people are those who have strong, supportive relationships with all the systems around them (perhaps even including the world itself)?  It is certainly a question worth pondering, and one that perhaps motivates an exploration of the second half of this article’s subject, namely ‘a changing worldview’.  

If our intentions have an influence on the world around us, as the evidence suggests (demands, really), the implications are significant.  Just consider that statement: Our intentions have an impact on the world around us.  We affect things.  Because of our intentions… intentions that are often driven by our attitudes and our needs and our fears.

If you knew that the way you think about things has an effect, would you be more inclined to control your thoughts and emotions, more hesitant to allow them to bounce around randomly in response to anything happening nearby?  And while you practised gaining control over yourself, would you be more deliberate, more purposeful, in deciding where to spend your time, and with whom?  And for that matter, if your intentions are affecting the world around you, is it not reasonable to think that your intentions are affecting your own self in subtle, perhaps unappreciated, ways?  It might sound pedantic to state that our thoughts affect ourselves as if it were some novel, insightful conclusion, yet how often have you encountered someone having trouble remembering it?

Taking this a step further, our ability to have an effect on the world around us means that we are connected to that world.  We are more than in it, we are part of it – and it is part of us.  Said differently-but-equivalently, we are components of a larger ecosystem, cells in a larger organism.  As such, it is in our own self-interest to promote the health and wellness of that larger organism.  If a hand determined it was the only part of the body that mattered, that would not go well.  Whether that hand attempts to conquer and eliminate the rest of the body, or decides that it should dictate how all other parts of the body perform and behave, both courses are equally foolish.

Recognition that we are part of a greater organism comprising things other than ourselves further suggests that there is value in diversity.  Instead of attempting to make all cells evolve into left-pinky-fingernail cells, it is logical to recognise the value to the world of including people who are different, who can think differently, feel differently, perceive differently, solve problems differently.  And not only other people, or even just lifeforms (as we typically classify them).  The systems being affected in the PEAR research were generally not biological ones.  All other aspects of reality are components of this greater organism in the same way.

But what are those other aspects?  If one considers a rock as an accumulation of minerals which are themselves an accumulation of atoms built from elementary particles… which seem to behave according to the expectations of quantum mechanics… we come full circle.  In QM, concepts like space, time, and energy are measurements, the result of an observation on something that does not itself have such properties.  In other words, ideas like time and space result from asking questions about time and space.  They are manifestations of an organisational paradigm that has been laid out by… what?  The founders of QM (and the author of this article) argue that such manifestations are laid out by consciousness.

Suppose for a moment that time and space (as two examples) are organisational constructs.  This suggests that we may need to rethink many aspects of life… including its cessation.  In such a light it is perhaps more natural to view ‘death’ as a phase transition as opposed to a termination of existence.  This is far from an original concept, of course; the idea has been entertained—and embraced—by many cultures throughout human history.

As a final point in considering the implications for changes in worldviews resulting from the PEAR research, it is fitting to conclude with a discussion about uncertainty.  We have likely all had the unpleasant experience of encountering people who are absolutely certain.  Think back to a time when you were talking to someone who had eliminated all uncertainty about their own beliefs.  Did you find them to be open-minded or rigid?  Did the conversation sow the seeds of future discourse, discovery, or exploration, or did it feel fruitless?  How fiercely were they prepared to fight to assert the truth of their perspectives?  And if you continued to question those perspectives, how did the person react?  Did they become angry?  Defensive?  Perhaps even irrational?

Our innate inclination to reduce uncertainty is rather curious, particularly in light of how important uncertainty is for us.  The need for it is everywhere.  Were there to be a sequel to this article, it might be interesting to explore the interconnection between uncertainty and free will, a topic that can be found lurking in the shadows of philosophical discourse over thousands of years.  But here let us focus on the PEAR experiments, and the finding that it is through the filter of uncertainty that we are connected to the rest of reality.  Arguably one of the most profound implications is that there may be value (in so many senses) in identifying the optimal amount of uncertainty for a given situation—enough to allow for an effect, but not so much that we feel like the world is simply an uncontrollable maelstrom beyond our influence.

There are undoubtedly as many ways to achieve that balance as there are people who attempt to do so.  The author has found that the best approach is to consider the universe of our experiences as the intersection of two components: an (inconceivably) immense realm of stuff (another technical term there, although if you are a fan of William James then you might prefer “sensible aboriginal muchness”) and consciousness within it.  This consciousness exists at many levels, with any considered “subset” representing some conscious entity—whether that is a human, a planet, a mountain, or even a rock or an idea.  When that consciousness “touches” some subset of that immense realm—perhaps joins with it in some way—we experience.  And as we have experiences, and organise them into thoughts and words in order to communicate them (whether with others or ourselves), thus do we give form to space and time, and build the construct of an “objective” reality.

But hey, like all models that have come before, this one may be wrong, and is almost certainly incomplete.  Of course, that doesn’t mean it isn’t useful, or that there isn’t some potential for it to guide us towards a healthier future worldview.

Image: Fractal Experiment by Dr Jeff Dunne
