
Published by Saraband, Suite 202, 98 Woodlands Road, Glasgow, G3 6HB, Scotland

Copyright © 2010 Saraband (Scotland) Ltd. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without first obtaining the written permission of the copyright owner.

Edited by Sara Hunt
Design by Deborah White

ISBN-13: 978-188735471-4

Printed and bound in China



CONTENTS

Introduction 6

Mind and Body 10
• A Short History of Me 12
• Beyond the Self 24
• The Universe Within 30
• Wisdom of the World 38
• Pagan Priorities 46
• From the Gods to Gaia 58
• Hereafters: Death and Beyond 68

Wellness, Science and Medicine 76
• Balancing Body and Mind 78
• The Mind at Ease: Meditation 88
• Placebo Effects: Mind over Matter 94
• Healing Hands 104
• Distant Healing 114

Potentialities: Integrating the Conscious and the Physical 124
• Problems, Perceptions 126
• In Search of Psi 138
• The Noosphere: Our Collective Consciousness? 156
• Mind and Matter: Human Consciousness and the Physical World 166
• Persuasion, Peace and War 176

Conclusion 200
Index 204
Bibliography and Sources 207
Acknowledgments 208


Introduction

Can we know something we can’t directly perceive?

Can we communicate that knowledge to other people without words or gestures, by a sheer effort of the will? Are there things we don’t even need to communicate because, in some mysterious way, we all just know? And what of the physical world around us: can we act upon that by the power of the mind alone? Could a collective consciousness be harnessed to transform the world?

Below: The Thinker, as imagined by Auguste Rodin—but is he thinking deeply enough? Might we all be capable of thinking our way to other dimensions, as yet unperceived, or even unimagined?

If we hadn’t heard of noetic science before, it has certainly impinged upon our consciousness in the last couple of years, through Dan Brown’s The Lost Symbol. A new attempt to answer questions that in one form or another have been asked for just about as long as humanity has existed, noetic science sets out to subject what are often seen as fanciful notions to rigorous examination. Is it true? It certainly is in the sense that it’s a genuinely existing (and fast-expanding) field of study. It’s also an increasingly disciplined discipline. At the Institute of Noetic Sciences (IONS) in Petaluma, California, researchers work hard to maintain the sort of scientific standards that would be observed in any university faculty or research unit. But that doesn’t make it all true, of course. Does noetic science deal in real phenomena? Does the emperor have any clothes? Are these “scientists” exploring something that isn’t there?

Nous and Noetics

Ancient Greek in origin, the word nous endures in everyday English in the sense of a sort of commonsense shrewdness. Early thinkers used it very differently, not just from us but from one another: the story of its changing nuances would make a volume in itself. In general, though, it’s fair to say that the nous, though not specifically spiritual itself, was the capacity of the mind or heart to appreciate what was. If it was viewed as knowledge, it was as a knowledge of “higher things”; if seen as perception, it was the perception of things that lay concealed from the senses; where it was wisdom, it was a wisdom that saw more deeply than conventional thinking could. Can the ancient and modern uses be brought together? Can the mysteries of consciousness be reconciled with common sense? That’s precisely what today’s noetic science seeks to do.




Rainbows and Reason

Mainstream Western thought in modern times has taken pride in privileging the idea of the rational over that of the intuitive. Science and logic trump the instinct, we’ve been told, as they do the hunch, the premonition, or any feeling that there might be more to our world than what our senses apprehend. Have we lost something by this sort of thinking? Undoubtedly. As the English Romantic poet John Keats observed:

There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
Philosophy will clip an Angel’s wings,
Conquer all mysteries by rule and line,
Empty the haunted air, and gnomèd mine—
Unweave a rainbow...

And yet you might well ask whether we want our air to be haunted, our mines populated by gnomes; aren’t unwoven rainbows a fair exchange for trolls and devils? When Keats asks (he seems to think rhetorically) “Do not all charms fly / At the mere touch of cold philosophy?”, we might be tempted to retort: “Quite honestly, no, they don’t!”

A Middle Way?

Rainbows, it might be said, are no less awe-inspiring for the knowledge that they’re produced by an interplay of light and moisture; gnomes have their place in fairy stories, angels in scripture (which plenty of people nowadays suspect are the same thing). For most of us, there’s a certain amount of middle ground: we’re intrigued (and awed) by the beauty of the rainbow; we wouldn’t

Above: The rainbow arches gracefully over the supposed gulf between aesthetic appreciation and scientific understanding.

Below: Goblins and fairies have traditionally populated the mysterious recesses of the human mind. Might there be more to the imagination than a reductive rationalism on the one hand or fairytale fantasy on the other?


Mind and Body

A Short History of Me

No subject comes closer to home for any of us than that of our own subjectivity: just what it is that makes us who we are.

Surprisingly, perhaps, given that this is something every human being in history has “known” firsthand, it is something we’re still struggling to understand. And while it feels as though it should be one of the great universals of our existence, it’s been regarded—and maybe experienced—differently in every age. The one great constant down the centuries has been that of disagreement: we’ve never found consensus on what it was to be ourselves.

Page 10: Euclid (or, some say, Archimedes) explains the geometric workings of the world to later scientists in a scene from Raphael’s The School of Athens (c. 1510).

Below: Wisdom from the innards of the earth: the Delphic Oracle was notoriously cryptic. To know anything, it said, it was first necessary to know yourself.

Self-Knowledge

The inscription on the wall of Apollo’s Temple at Delphi, in ancient Greece, threw out a famous challenge: “Know Thyself!” That can be hard enough, even if it’s just a matter of being aware of your limitations and your prejudices, but what if “Know Thyself” means something more?

Who am I? Think about it for a moment and it becomes clear that, while this may appear to be the most absurdly obvious of questions, at the same time it is one of the most puzzling. The body I stand here in is strikingly and solidly me; I can see and feel its physicality. Yet the very statement “I see and feel it” draws a distinction between the “I” who sees and feels and the “me”—the object that is seen and felt. The more deeply we ponder it, the less likely it seems that “I” can really just mean the physical mass of me (am I the more “me” when I have put on weight?). So it seems that, along with my body, I have another, immaterial, sort of selfhood: call it “mind,” “soul,” “spirit,” or what you will. Is this intangible essence the real me?

Body and Soul

Many thinkers have taken the view that the body is no more than a vehicle in which the mind or “spirit” (some untouchable essence) is ferried around. For Plato, writing in the Athens of the fourth century BC, the soul inhabited the living body but existed independently. Aristotle disagreed: for him the soul was not a phantom self inside us but the animating energy that gave the body life and made us do the things we did. An empiricist, he believed that we could only know what we perceived through our senses, and his thinking on the self was of a piece with this. As far as Aristotle was concerned, consciousness was the running sum of our interactions with our world. There was no inner “us” existing separately.



Avicenna Afloat

Born in Bukhara, in what is now Uzbekistan, in around AD 980, Ibn Sina was one of the most important thinkers of his day. Medieval Europe knew him as Avicenna, and valued him not only for his own ideas but for the ancient philosophy and science he channeled in his work. In the violence and anarchy of the Dark Ages, which had followed the fall of Rome in the fifth century, the rich learning and literature of classical Greece had substantially been lost. But it lived on in the Islamic world: the Arab conquerors of Egypt, Byzantium and the Middle East had been scrupulous in preserving the heritage they encountered, including the philosophy, science and medicine of ancient Greece. Ibn Sina is most famous for his work in rehabilitating the forgotten philosophy of Aristotle: it was through his books that Aristotelean teaching reached the western world. But his thinking on the self rejects Aristotle’s empiricist view. It owes more to the Platonic position that the living, physical being has a soul existing independently inside it.

Toward 1020, Ibn Sina had based himself at Hamadan, in western Iran, when he fell foul of the local ruler and was imprisoned. It may have been his own removal from the world that inspired the famous “thought experiment” in which he tried to imagine an individual consciousness held in isolation from all external stimuli of any sort. Imagine floating in the air, removed from all sensory messages, even from those originating in your own body. There would be nothing whatsoever to tell you that you existed—yet you’d know, said Ibn Sina; your innermost consciousness would always be aware.

Modern philosophers have objected that the Floating Man is no more than an analogy, but there’s no doubt that it’s a most persuasive one. It rings true and seems to confirm the view that our self-consciousness exists in its own right, not just in relation to our experiences and perceptions in the world.

Religious writers have often gone further, regarding the physical being as no more than a prison for a soul that by rights belongs in some other, more spiritual dimension beyond our own. The body was a spiritual “house,” wrote the sixteenth-century Spanish mystic, St. John of the Cross: to achieve the blissful union with God it sought, the soul had to leave this dwelling—freeing itself completely from earthly thoughts and appetites. (A succession of theologians went further: they didn’t just think the body was unimportant, but saw it as a burden, its hungers humiliating the spirit, its desires dragging us down into sin. They inveighed against “the world, the flesh and the devil” as a sort of trinity of damning forces.)

Hasidic Jewish mystics hoped to suppress the body and its wants to the point of nullification so that their true selves—their spirits—might be swept up into the divine light of God. To this day, Buddhist believers see the body (and the physical dimension it inhabits) as something the purer soul should strive to escape, and which, with years of good living and meditation, it may eventually transcend.



Einstein’s Brain

What Leonardo da Vinci was to the Renaissance, Einstein has become for the modern age: the iconic intellect, with the brain to end all brains. Einstein was legendary even in his lifetime, and his brain was not to be allowed to rest in peace: it was removed, weighed and dissected within hours of his death in 1955. It was all a bit anticlimactic, however: the brain that had propounded the Theory of Relativity and helped design the atom bomb did not look much different from anyone else’s. A study by researchers from McMaster University, Ontario, in 1999 found no significant difference in weight between the great man’s brain and those of thirty-five men and fifty-six women of average intelligence.

By this time, however, nearly half a century on from Einstein’s death, neurologists had made more progress in the mapping of the brain. The McMaster researchers were able to identify the area believed to be most involved in mathematical speculation and spatial relations, and this did seem to be significantly better developed in Einstein’s case, to the extent that his brain was 15 percent greater in its total width. The brilliant scientist also stood out from the crowd in that a groove that normally runs through this zone was absent in his case: this may have enabled the neurons here to interact more easily, the researchers speculated.

Below: Recent advances in neuroimaging have given us a much better understanding of which bit of the brain does what, blurring the once-sacrosanct boundary between mental “will” and physical action.

Mapping the Brain

In recent decades, techniques of neuroimaging have been developed that have allowed the workings of the brain to be charted with unprecedented accuracy and depth. Functional magnetic resonance imaging (fMRI) provides a way of representing fluctuations in the blood flow in the brain, showing where hotspots of abnormal activity may have arisen owing to damage by injury or disease. Electroencephalography (EEG) and magnetoencephalography (MEG) offer different ways of identifying areas of electrical activity, or the changes in magnetic fields this activity causes. The metabolic (or chemical) mapping of the brain has been transformed by the introduction of positron emission tomography (PET) and single photon emission computed tomography (SPECT) scanning. Radioactive isotopes injected into the bloodstream and then tracked into the brain allow a detailed three-dimensional picture to be built up.

A Collective Consciousness

Exploration of the brain has proceeded apace in recent years, assisted by these advances in technology. Neurologists have been able to identify those zones associated with particular activities and skills. Researchers have even succeeded in pinpointing those neuron clusters responsible for such specific tasks as moving a certain finger or blinking a given eye.


The Universe Within

But the boundaries of these areas have tended not to be clear-cut. Once again, it’s become clear that the brain is better described as a cloud than a clock, its organization “softer,” yet at the same time more flexible, more resilient, than might be thought. Brain damage is rightly feared: often fatal, it can shut down important functions even when it is survived, yet even now it is unpredictable in its effects. Up to a point, the brain has an ability to restore itself: where a given zone has sustained damage through accident or illness, it has frequently managed to find “workarounds,” the neurons of neighboring zones apparently stepping in to take the strain. This cooperative capacity perhaps helps explain what the researchers have so far signally failed to find: any sign of a zone in which the consciousness resides. Selfhood, it seems, is a collective function produced by the activity of all our billions of neurons working side by side.

A Learning Brain?

The ability of the injured brain to restore itself in some cases raises other intriguing possibilities: if it can find ways of working around such injuries, what else might it learn to do? We are accustomed to the idea that as conscious beings we might use our brains for learning, but what if the brain itself were to learn—to be trained to develop new faculties, new skills?

An Intellectual Internet?

The fact that consciousness appears to have no “home,” but to be produced by the activity of the brain as a whole, raises fascinating questions—and not just for the study of the self. It opens the door to the idea that, potentially, at least, there might be a collective consciousness that transcends the individual mind. Just as the Internet links up the world’s computers and enables them to work in concert, could a shared intelligence someday unite humanity? Or is this already happening?


Above: Electrical impulses are transmitted down a nerve in this artist’s impression. Below: Twins are popularly believed to be able to intuit one another’s thoughts. They have been studied extensively by scientists seeking a better understanding of consciousness.



A Tale of Two Hemispheres

The cerebrum, the upper part of our brain, is naturally split in two by what anatomists call the “medial longitudinal fissure.” This has been known for centuries: more recently, however, it has become clear that the left and right hemispheres do slightly different jobs. To some extent they mirror each other’s functions, the left hemisphere controlling the right-hand side of the body while the right hemisphere controls the left-hand side. But their roles are more specialized still: there is evidence that in most people important skills of reasoning and language acquisition are located in the left hemisphere, while those of spatial awareness are to be encountered in the right, along with other nonverbal and intuitive insights.

These well-documented distinctions have in the last few decades given rise to a veritable industry of oversimplifying pop-psychology. This divides the world into coldly logical “left-brained” people and warm, artistic, sensitive “right-brainers.” The latter supposedly see existence in spatial, pictorial terms, relying on intuition and imagination rather than reductive, rule-bound reasoning. They, the cliché has it, are fated to be misunderstood by literal-minded, left-brained teachers and employers. Some writers have even suggested that we have a left-hemisphere-dominated history, in which the abstract and analytic have held undisputed sway.

Phineas the Freak

Phineas Gage was born in 1823 and died in 1860, twelve years after the dreadful accident that should have killed him. The foreman of a railroad gang working with explosives as they dug a cut south of Cavendish, Vermont, he was standing close by when a charge went off. An iron spike weighing over 13 lb (6 kg) was sent flying by the blast: it caught him in the cheek, went clear through the frontal lobe of his brain, exited the top of his head and flew on before falling to earth some 80 feet (25 m) away. Incredibly, Gage didn’t die. Within minutes, indeed, he was sitting up and talking cheerfully. Not long after, he coughed and a bit of his brain fell out. He did suffer in the weeks that followed: he had a series of infections, and he was never to recover the vision in his left eye. But otherwise he made a full recovery. So, at least, it seemed. But there were reports that his personality had changed: his former workmates found him different—and difficult: short-tempered and given to outbursts of bad language.




Left: The respective roles of the brain’s two hemispheres are not yet fully understood.

Below: Mozart wrote sublime music; below him, John Frederick Daniell did pioneering work in chemistry: did these two great minds think more alike than we’ve previously assumed?

These arguments may have the ring of truth—it’s never hard to see more scope for the lyrical and intuitive—but that doesn’t mean that they actually are true. The reality in the brain is much more complex. The different functions may be unevenly distributed between the two hemispheres, but that does not necessarily amount to such a bald division between left and right. The reality in human activities is more complex too: our intellectual activities don’t cleave so neatly between cold, hard analysis and warm, soft intuition. A Mozart piano concerto may be beautifully expressive, but it is also highly regimented (music in general was at first assumed to be a right-brain art). And, even if such linguistic skills as grammar and vocabulary are seen as belonging to the left hemisphere, others like the sense of intonation and rhythm are concentrated on the right.

We should resist being bamboozled, then: the importance of the left-brain / right-brain divide is much exaggerated. What it means for individual minds, their talents and their skill sets tends to be very difficult to read: most of us are drawing on both hemispheres all the time. Research involving talented young musicians, for example, has shown intense activity in both hemispheres, with enhanced communication between the two sides, and resulting improvement to the memory.

But then such surprises are par for the course: research into the brain is still at a comparatively early stage, and its findings are not necessarily those we might expect. We could be forgiven indeed for wondering whether we know anything much at all. The work of John Horgan questions most of the claims made by what might be called “mind sciences” in recent years—not just neurological studies but psychology and psychiatry as well. Such is the “explanatory gap” between what is observed and the ways it has been accounted for, he says, that these studies barely warrant the name of “science” at all. Whether he is right or wrong, the one thing we know for sure is that there’s a great deal more to be discovered.



Colonies or Creatures?

Termite mounds provide another example of eusocial living. A colony may number in the hundreds of thousands, or even millions. As in the beehive, nonbreeding workers gather food for all, and a queen produces eggs; rather than a squadron of drones, however, the termite queen tends to be fertilized by a single male, known, not surprisingly, as the king. Members don’t just differ in their functions, but in their anatomical details. The soldiers, which defend the colony, have armored casing and overdeveloped jaws—so much so, in some cases, that they cannot eat without assistance from workers. They have also evolved specialized behaviors, learning to line up in single file to defend narrow tunnels against marauding ants or to mass into a solid “wall” in wider spaces. So seamless is its organization, so smoothly does it function, that it seems only natural to think of the termite colony as a superorganism.

Below: Lactobacilli like these swarm in the digestive system, playing an important part in the processes by which it extracts energy from food.

But is the superorganism model simply a metaphor? It seems to fall somewhere between the status of persuasive analogy and a real entity. Bees are clearly separate organisms, living and dying individually, however closely they may cooperate. Ten thousand of them surely can’t be seen as somehow a single creature? At the same time, though, they subsist collectively—can only survive collectively—and they reproduce as a hive—and only as a hive. While each performs a relatively simple function, their behavior is collectively complex: altogether, a “swarm intelligence” is shown. Seen in that light, the idea that a hive of honeybees is a superorganism appears neither metaphorical nor far-fetched.

Our Superorganism Selves?

Should we see ourselves as superorganisms? In recent years, scientists have been coming to a fuller understanding of the role played by bacteria in our bodies, in health and in sickness. We’ve long known that microorganisms live on and within us: the germs our mothers are always telling us to wash from our hands; the bacteria in our stomachs. We’ve even come to appreciate that there may be beneficial microbes at work within our guts, helping us to digest our food. Researchers are now going much further, though. It’s not just a matter of microbes piggybacking on our lives as parasites, or helping out as welcome guests. On average, it’s thought, there are 100 trillion bacterial cells in the human body—ten times as many as there are human cells. And they’re doing important work, not only in our digestive tract, but in our skin,


From the Gods to Gaia

our bones, teeth, brains and blood. They’re in symbiotic relationship with us: indeed, it may be misleading to talk of “them” and “us” at all, since “they” may well be an integral part of what “we” are. “Human beings are not really individuals,” the American microbiologist Margaret McFall-Ngai has said. “They’re communities of organisms,” she insists.

Flavor of the Month

Everywhere we look, it seems, we’re seeing superorganisms. In some ways, this is no more than a new perspective on the same old things: our bodies haven’t changed since we saw them in terms of “healthy” humans and “invasive” microbes. But we’re seeing our physical selves in a new light, which is not only enabling us to appreciate the complexities of our bodies’ workings, but which complements the philosophical sense we’ve been acquiring of a “decentered” self (see page 22). This new view also allows us to relate to Lovelock’s argument that the biosphere is a superorganism. Seen in this light, his theory rings true. Scientists who didn’t like the sound of the Gaia Hypothesis, feeling that the

Above: This mistletoe looks utterly at home among the branches, but it’s actually a parasite on a host tree. Below: The human cheek in close-up—just another set of cells.


Wellness, Science and Medicine

Above: The word “om,” as written in Sanskrit.

Below: The lotus position is traditionally believed to be ideally suited to successful meditation.

The Sound of Creation

You’d be doing well to find a much shorter word than om—the sacred exclamation used in so many eastern prayers. A little like amen in the Judeo-Christian tradition, it punctuates prayerful language and has the effect of underlining its religious significance. (Some scholars in fact believe that the holy sound, imported from India into the Middle East, was actually the origin of the Hebrew word.) Where amen is generally used as a conclusion, a sort of signing-off, om is employed as an opening, an introductory calling to attention. This is certainly how it is used in the ancient Vedas and the later Upanishads.

It represents an opening, a start, in a profounder sense as well. The god Vishnu, Hindu tradition has it, blew a mighty blast on his conch-shell trumpet: the word om is the sound that brought the universe into being. Hummed aloud, it is held to recall this “primordial vibration”: in Hinduism, Buddhism and Jainism, it is the word both of creation and of God. This is why, for so many centuries, it has been chanted as a mantra—the word, phrase or formula that is repeated as an aid to meditation. Sometimes longer sections or verses of the Vedas may be recited, but—like the Cloud of Unknowing author—Eastern mystics have found that, in meditation, short is beautiful. And, with all its meanings, the word om is a prayer in itself.

Yoga: A Life’s Work

It’s easy to forget in the Western context, in which it often falls ambiguously somewhere between alternative philosophy and exercise, that the tradition of yoga was originally a religious regimen. Believers pretty much consecrated their whole lives to the pursuit of enlightenment through exercise and meditation—and still do, through much of the world, where the great Indian religions continue to be practiced.

No spoken mantra is necessarily needed. Often counting breaths (anapanasati, or “mindful breathing”) is all that’s required; alternatively, the thoughts can be focused on an object—perhaps an element or color. An experienced yogi may be able to enter the meditative state without any of these aids—but such mastery doesn’t make the act of meditation any less important. Seated meditation—in the lotus position—is at the very center of Zen Buddhism, as it is practiced in the Far East. This sort of sitting meditation is known as zazen. Often, however, practitioners will literally ring the changes, a sounding bell being the signal for zazen to end and for a period of kinhin—walking meditation, with rhythmic breathing—to begin.

Fringe Benefits

By the standards of the dedicated eastern yogi, even the keenest Western, part-time practitioner of yoga is really only dipping


The Mind at Ease: Meditation

into the regime. The benefits of such involvement are likely to be marginal too, though many thousands testify to the sense of wellbeing they gain by doing it. It is hard to measure the results of any holistic health treatment, precisely because it isn’t generally targeted at curing any particular illness, with an identifiable set of symptoms, but at moving toward a situation of all-round wellness. Specific benefits of yoga that we can point to tend to be in such obvious areas as back and muscle pain. Work on yoga’s asanas (sitting and standing positions) promotes good posture. There’s nothing too mysterious about why this might help to ease these problems—though for thousands of sufferers, the improvement is not to be sniffed at. The same can be said for the use of yoga for the elderly and in post-operative care: even if we ignore the spiritual side, it’s easy to see why these exercises would help maintain and restore physical flexibility.

There may well be other benefits too. As we saw in the last chapter, many apparently physical pains and discomforts are actually sparked by stress, and any regime that can facilitate relaxation is likely to bring great relief. There is already sound evidence that yoga can help speed the healing of wounds, and signs as well that it can alleviate high blood pressure, heart disease and the symptoms of asthma. It is believed that it may also help ease the pain of arthritis and multiple sclerosis. Slowly but surely, the medical establishment appears to be overcoming its suspicion of what for many years it saw as an exotic fad and learning to love yoga and associated techniques.

Spiritual or Secular?

Few of those practicing yoga in the west have any serious interest in religious self-purification. It has been embraced as an aspect of secular life—a private pastime, a healthcare treatment. That it has fitted into that role so easily raises the question of how essential the spiritual dimension really is to Eastern-style meditation—or, for that matter, to the meditative practices of medieval Christendom.

Above: The seven chakras, as represented in an Indian manuscript of 1899.


Above: One pill is pretty much like another, as far as we’re concerned. How much does it really matter what they contain?

Below: Jesus heals the sick; but is the relief we get from modern medication really no more than a matter of quasi-religious faith?

The Wright Stuff

In 1957 the California-based German psychologist Bruno Klopfer (1900–71) had a unique opportunity to observe the effects of ups and downs in patient confidence in real time. One male patient, “Mr. Wright,” was suffering from cancer of the lymph nodes and was expected to be dead within the month. That is, until he heard news of an apparent breakthrough in cancer treatment: Krebiozen, then undergoing its preliminary tests in that same hospital, was rumored to be achieving remarkable results. The effect on Mr. Wright was certainly little short of miraculous when he managed to persuade Klopfer to bypass the official testing process and simply try out the new drug on him. Within a few days he was up and about, in high good humor, his tumors having shrunk from something like three inches in diameter to about an inch. In no time at all, it seemed, he had been cured completely.

Just a few weeks later, though, the news came through that Krebiozen had flunked its tests: it wasn’t the wonder-drug they’d all been saying after all. Mr. Wright, cast down, quickly succumbed to the same cancer that had so nearly killed him before; his tumors returned and quickly grew.

Klopfer’s response was arguably unethical. It would break all the regulations in force today. He told Wright that there’d been a mistake in the test findings. The Krebiozen he’d used on him had been taken from a faulty batch: that was why his cure hadn’t worked out. Klopfer then gave his patient another dose—at least, that was what he told him. In fact, though, he was injecting him with water. Once more, the tumors withered away and Mr. Wright seemed to have made a complete recovery.

Seven months went by before the final official findings of the tests on Krebiozen were published. The American Medical Association gave the new drug their definitive thumbs-down, and Klopfer was unable to explain the bad news away. At this point, it all went wrong for Mr. Wright: the tumors came back with a vengeance and within a very few days he was dead.

Mr. Wright’s apparent miracle-cures should be considered with as much caution as Krebiozen. They haven’t been replicated in other patients, nor has there been much sign of similar placebo treatments working in other cancer cases. For reasons we don’t really understand, lymphoma tumors do tend to come and go of their own accord to some extent; it might have been sheer coincidence that Mr. Wright’s disappeared and reappeared when they did. The key contribution of the Wright case was that it won so much publicity for the idea of the placebo effect, which had already been identified by the American researcher Henry K. Beecher (1904–76).


Placebo Effects: Mind over Matter

Saline in the Front Line

Beecher's interest in placebo treatments had first been sparked in the most dramatic of circumstances when he was working in a field hospital in Italy in the final months of World War II. Allied troops had been taking heavy casualties and supplies of morphine had been running low; eventually the moment came when they ran out altogether. The fighting didn't cease, though, and neither did the flow of casualties. One GI was brought in badly wounded, his agony only too obvious for all to see. Necessity was the mother of invention: a nurse grabbed a syringe of saline solution and drove it into the body of the yelling, writhing man. Within moments, he had calmed himself: now, it seemed, he felt no pain. Beecher was even able to operate upon him successfully.

He hadn't set out to conduct an experiment, but that was what he had ended up doing despite himself. And the results had been sensational. The belief that one was being given morphine might be every bit as anesthetizing as the morphine itself could be, Beecher realized. Reduced to the same subterfuge again in the weeks that followed, he had exactly the same success. Fascinated by what he had found, he pursued his researches back in Boston at the Massachusetts General Hospital when the war was over. Examining earlier drug trials, in the course of which placebos had been administered to groups as a "control" in the approved manner, he compared the relative efficacy of the placebos with the drugs actually being tested. He released his results in a paper, "The Powerful Placebo," in 1955.

Above: The wounds sustained in battle could hardly be more real, yet it was in a wartime field hospital that the power of the placebo was first established.


Potentialities: Integrating the Conscious and the Physical

Above: A long exposure over South Africa's savannah reveals what we so easily forget in the city: our lives are caught up in the rhythms of the cosmos as a whole.

Below: Our sense of the universe’s vastness should be frightening, but instead it can bring us peace.


founded his own Integral Institute. Dedicated to developing integral thought and applying it to a wide range of human problems, it sets out "to bring new depth, clarity and compassion to every level of human endeavor—from unlocking individual potential to finding new approaches to global-scale problems."

For decades now, Wilber has worked hard to broaden the base of what he sees as a "narrow science" concerned exclusively with what can be monitored and measured by the senses. Science, he believes, should cast its net of curiosity a great deal wider, bringing in not just logic and reason but insight and intuition. A true science, he says, would take account not just of empirical observation but of the findings of philosophers and the experiences of mystics and religious seers. Although he embraces Aurobindo's idea that we exist at both material and spiritual levels, Wilber sets this sort of holism in a more systematic scheme. Every idea or thing, he says, is fundamentally twofold. It exists, he says, simultaneously as an entity unto itself and as a part of a greater whole. And, ultimately, as an element in a hierarchy of being which weaves the universe together as infinity and as one.

The Universe and I

At the same time, Wilber points out how fluid and indeterminate our selfhood is; how hopelessly reductive the idea of the "I" and of the "Cogito...." "I" alone exist in several different states: waking, sleeping, dreaming. Even in my conscious state, I have different sensory experiences and different moods. Why corral so many different subjectivities into a single "I"? Why not see them as seamless continuations of a complex universe? In this interpretation, the self is, quite naturally and spontaneously, self-transcending; we are "at one with the universe," whether we will it or not.

It's hard to think of lil' ol' me in the context of so vast and totalizing a cosmic vision, of course: this is one of the key challenges we face when we try to apprehend the integral view. Yet if we can come to appreciate that our consciousness is continuous with that of the universe, we may start to see how it might conceivably be possible, in Gandhi's terms, to "be the change we want to see." The gap between perception and problem-solving is partly a function of the gap between ourselves and the world around us—itself a problem of perception, it might be argued. If the integral theorists are to be believed, a better understanding of the universe really might help to bring about a better world.


Problems, Perceptions

A Transcendent Vision

Things are often easier to grasp if we can only get a bit of perspective, but then not too many of us have the chance to see our world from the outside. One man who did—quite literally—is Edgar Mitchell. In 1971, he became the sixth man ever to walk on the Moon, but it was the return voyage through space that changed his life forever. The astronaut has since described in unforgettable terms how,

"Suddenly from behind the rim of the moon, in long, slow-motion moments of immense majesty, there emerges a sparkling, blue and white jewel, a light, delicate sky-blue sphere laced with slow swirling veils of white, rising gradually like a small pearl in a thick sea of black mystery."

"My view of our planet was a glimpse of divinity," he afterwards observed. "I suddenly experienced the universe as intelligent, loving, harmonious." But, more importantly, perhaps, it was also a profound insight into humanity. "It takes more than a moment to fully realize this is Earth—home." "We went to the moon as technicians, we returned humanitarians," he said.

True exploration always has this effect, it might be said. Any science that ignores the human dimension ignores the universal. It can ultimately amount to no more than technical tinkering. Mitchell himself was inspired by his epiphany to do everything he could to understand the universe and our place in it in ways which, while respecting science, were prepared to transcend the conventions which currently limit its scope. The enduring monument to this quest is the Institute of Noetic Sciences itself, endowed and led by Mitchell from its inception.

Above: Earthrise as seen from the surface of the moon. The sight of his home planet from a completely new perspective gave Edgar Mitchell “a glimpse of divinity,” he recalled.



Opposite: Houdini shows how “spirit photographs” purporting to reveal the dead are faked.

Below: A simple double-exposure "spirit" photograph like this would not fool anybody today, but in the early twentieth century it was different.


Smoke and Mirrors

Those who wish us to keep an open mind about psychokinesis, telepathy and all things psychic are up against the fact that we've found ourselves made fools of so many times. Not that it's necessarily easy to tell, at least for the vast majority of us, for whom the mysteries of illusion are almost as unfathomable as those of the occult could ever be. This is why so many of the most successful fraud-busters of the past have had practical experience as illusionists themselves. Édouard Isidore-Buguet (see box, page 144) is a celebrated case, although historians have tended to be cynical about his conversion from fraudster to fraud-buster so late in life.

In our own time, no one has done more to expose imposture than James Randi (b. 1928). "The Amazing Randi" is truly amazing. Not just on account of his astounding stage illusions, but because of his tireless work as an investigator of the pseudo-paranormal. Having made his mark and honed his skills as the world's most successful escapologist, he's been a true successor to Houdini, taking up his campaign to unmask the trickster and disabuse the naïve.

In 1981, for example, he showed how James Hydrick's feats of psychokinesis had been staged, dramatically thwarting a display by the supposed psychic on TV's That's My Line. It took the illusionist's eye to guess that, when Hydrick made the page of the telephone book before him lift itself, he was actually blowing discreetly so it wafted upward. Randi sprinkled polystyrene chips on the table immediately in front of the book and—lo and behold!—Hydrick's psychokinetic powers failed.

Randi's book, The Truth About Uri Geller (1982), wasn't the truth at all, the Israeli psychic claimed, but his libel action against Randi was unsuccessful. Not content with contemporary psychics, Randi turned his attentions to the past: his case against Daniel Dunglas Home (see pages 139–41) is intriguing, but generally considered to be unproven.


In Search of Psi

Friends Fall Out

A younger contemporary of Isidore-Buguet, Harry Houdini (1874–1926) showed more consistency in his campaign against what he saw as wholesale Spiritualist fraud. History's most famous escapologist set his skills in managing illusion to good use. Setting up his own pseudo-séances, he showed his audiences how psychic communication could be faked, much to the outrage of many of those attending.

Just how much of themselves believers invested in these ceremonies became clear when the author Arthur Conan Doyle turned on his former friend, berating him furiously. Houdini, he alleged, was not practicing trickery at all but employing psychic methods. Dishonesty of dishonesties, he was pretending to be a fraud! As so often, the element of personal feeling ensured that things became completely toxic. The fact that Sherlock Holmes' creator had himself introduced the skeptic to leading Spiritualists made this betrayal especially hard to take. Conan Doyle felt he had been played for a fool: Houdini had made him the unwitting instrument of his own attack on the Spiritualism to which Conan Doyle had devoted his later life.

But then, human feelings were always going to run high in discussions of Spiritualism, a field whose whole purpose was to establish communication between the living and their lost loved ones. The emotional dimension made detachment well nigh impossible and debunking hurtful—on occasion, hazardous. Hence the desire of the parapsychologists to find other avenues of research.

British illusionist Derren Brown (b. 1971) and fellow "mentalists" take this demystifying approach a step further, exploiting our desire simultaneously to be wowed by seemingly magical illusions while believing that the practitioner is capable of exerting a scientifically grounded, "authentic" influence over his or her subjects in order to confound and manipulate them. Brown makes no claims of paranormal powers, but suggests that his tricks and stunts are rooted in a combination of psychology and brilliant showmanship. His book Tricks of the Mind (2007) promises to teach us how to harness some of his powers of manipulation and telepathy ourselves, although it's implied that we're unlikely to get beyond a few simple party tricks and shouldn't expect to pull off the dazzling feats of mind control he himself is capable of.

Below: The psychokinetic power that lifts the pages of the book may actually be nothing more than the trickster’s breath.


From Mind to Machine?

"The mind in creation," wrote the Romantic poet Shelley, "is as a fading coal … when composition begins, inspiration is already on the decline." Could it be possible one day to give our mental processes direct-to-digital reality? To cut out the mediating mental work and the mechanics of writing, painting, sculpting? Or even, more prosaically, to take dictation; to record our thoughts; to take notes on our developing ideas? Could we maybe "think" a simple message onto paper (or a digital device) via some mind/machine interface?

Recent advances in this area have brought neuroscientists closer to finding a way for people to communicate via sophisticated computers that read the data transmitted by their brainwaves. Scientists at Orsay's University of Paris-Sud have used scanners to "look" into a subject's brain and watch it working. By placing electrodes on the outside of the intraparietal cortex—part of the brain known to be involved in the conceptualizing and processing of numbers—researchers were able to watch what happened when the subject imagined a number, and attempt to find ways of identifying which number was being thought of.

Neuroscientists at the Jacksonville, Florida, campus of the Mayo Clinic have also revealed findings indicating that people may one day be able to communicate with thought alone. Researchers led by neurologist Jerry Shih tested their fledgling mind/machine interface in patients with epilepsy. Using the direct implant of electrodes onto the brain, like the French researchers (rather than electroencephalography, whereby the electrodes are placed on the skull), they were able to view on a computer screen the letter that their patient was focusing on, achieving almost perfect accuracy.

These findings represent concrete progress toward mind/machine devices that could help millions suffering from severe injuries or the advanced stages of neurological disorders. Amputees might learn to "will" movement in prosthetic limbs, and people suffering from full-body paralysis, unable to speak or even blink an eyelid, could simply "think aloud"—in theory, at least. How might Tony Judt (see page 14) have been cheered by the prospect of retaining the ability to communicate until the very last?

A world where we can communicate by thought alone? The possibilities are endless. But, for now, the scope of this research is limited, and the process of implanting electrodes into the brain is invasive, requiring patients to undergo a craniotomy—so we're unlikely to be dictating our journal entries this way for some time yet.




philosophy," says Shakespeare's Hamlet, and you don't have to be a believer to keep an open mind. From the flat earth to the Newtonian cosmos, after all, the whole history of science can be seen in terms of the overturning of what have seemed self-evident truths.

The Institute of Noetic Sciences has the investigation of psi as one of its long-term aims. Its lab is set up with shielded testing rooms with closed-circuit TV links, so that communication between isolated individuals can be rigorously monitored, as well as some of the most sophisticated EEG brain-mapping equipment available. Increasingly, though, it has been looking beyond its own four walls for its experimental data. The advent of the Internet has allowed surveying to take place on an enormous scale (albeit on a self-selecting sample of what are arguably likely to be sympathetic individuals). A series of online games allows you to test your psi abilities while adding to the Institute's bank of data for ongoing research. Click on the cartoon figure to guess—or divine—the site of his or her sickness; use your powers of concentration to drive out the pain. As players score points, supposedly learning about (and, by all accounts, improving) their psi abilities, IONS collects and crunches all its findings, game by game.

Since the emphasis, as IONS points out, is all on healing and questing in search of the self, these activities start out at an ethical advantage over aggressive conventional car-chase and shoot-'em-up games. But this is by the way, if what we're interested in is the Institute's stated aim of establishing the existence or otherwise of psi. That, it would seem, remains quite some way off.

Data and Doubt

Is it ever going to work? How worthwhile can any data collected be when players may be anywhere in the world, and may be approaching the game in any frame of mind from implicit faith through skepticism to a spirit of sabotage? And how, in any case, are results to be assessed? What can we say other than that, in an online test whose outcomes have essentially to be taken on trust, those disposed to believe will believe, while skeptics will feel skeptical? Did I diagnose my cartoon-partner's intestinal trouble by intuition or a lucky chance? Was his or her illness relieved by my mental efforts, or did it just go away? The skeptic will sense a certain circularity of scientific logic and wonder whether anything substantive is being tested at all.

This, of course, is where we came in: Is there any such thing as psi? It's a question we're clearly going to have to keep on asking.

Above: William Hogarth’s Satire on False Perspective (1753) doubles as a fascinating investigation into the kinds of visual tricks the brain can play. Below: Online tests have become a vital tool of psi research at IONS.



Above and below: Grief comes to us all, we know, but could it be more collective than we’ve realized? Random-number readings have picked up what may be mass emotions.


Not So Random?

A Random Number Generator (RNG) issues an endless stream of ones and zeros in random order. Since 1998, as part of an ongoing Global Consciousness Project coordinated from Princeton, sixty-one such devices have been set to work concurrently in forty-one countries. The results have been recorded and then brought together in a running chart. Most of the time, if truth be told, there's nothing very much to see: the ones and zeros cancel one another out in aggregate and the graph flatlines. Every now and then, however, there are major spikes. While all numbers generated by the RNGs are random, some seem to be more random than others. The suggestion is that significant eruptions of emotional or intellectual energy have a distorting effect that can be measured, and that this in turn shows that individuals and the overall mass of humanity have some sort of connection at the level of consciousness worldwide.

The first significant spike we know of came in 1997, before the project got under way—it was indeed an important inspiration for the founders. It coincided with the funeral of Princess Diana in London, clearly corresponding with a moment of great emotion in which much of the world could be seen to share. It was a transient moment, to be sure, but there's no disputing the scale of the shock at the Princess's death or the mass mourning that followed. If any human feeling was going to rock the world of math, it would be this one. Yet why would it? Why should a "resonance" of this kind have any effect whatever on the flow of random numbers from computers around the world?

Four years later, just as abruptly, there was another spike of wayward randomness. This one came on September 11, 2001—a fateful date indeed. More interestingly (and confusingly), this spike didn't just follow the terrorist attacks on New York and Washington but preceded them by a good four hours. Was the pattern predicting what was going to come? Since the same thing happened in the hours before the Asian Tsunami of 2004, it seems reasonable to ask whether the RNGs "knew" exactly what was afoot.

Intriguing, without a doubt. And yet exactly what are the RNGs supposed to be measuring? Again, there's the question of why any world event—however calamitous—should affect a stream of numbers issuing randomly from a computer. Then there's the fact that so few of us knew about the terrorist attacks or the earthquake in advance. Not, at least, in any normal sense of the verb "to know." Did we intuit the imminence of disaster in these cases, by some sort of precognition? Or was the normal sequence of causality reversed for these (admittedly key) events? Big questions … But they prompt another one: how are we to establish whether a global consciousness exists if we're not even sure what we mean by the word consciousness to start with?

Scientists, not surprisingly, are uncomfortable with a question that seems to pile imponderable upon imponderable in this way: they prefer to engage with no more than one unknown at any given time. Killjoys like Robert Todd Carroll (compiler of the online Skeptics' Dictionary) point to what they see as selective reading of the data: researchers naturally picked out patterns that suited them. Analysis of the raw statistics allows a great deal of interpretive scope when it comes to identifying relevant time windows and what are deemed significant levels of deviation. Scientific skeptics have questioned, too, whether a truly random RNG can actually be created: ultimately, what the device produces is determined by the way it's programmed.
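It helps to see the arithmetic behind a "spike" concretely. The sketch below is a simplified, hypothetical reconstruction of the kind of statistic such a project might use (the function name and parameters are ours, not the project's actual code): each simulated device's run of bits is converted to a z-score against the binomial expectation, and squared z-scores are pooled across devices.

```python
import math
import random

def network_deviation(n_devices=61, bits_per_trial=200, trials=100, seed=1):
    """Simplified network statistic: per trial, each simulated device
    emits a run of bits; its count of ones becomes a z-score against
    the binomial expectation; squared z-scores are summed across
    devices. Under pure chance that sum averages n_devices, so the
    running total of (sum - n_devices) hovers near zero: a flatline."""
    rng = random.Random(seed)
    mean = bits_per_trial / 2.0
    sd = math.sqrt(bits_per_trial / 4.0)  # binomial sd for p = 0.5
    cumulative = 0.0
    for _ in range(trials):
        chisq = 0.0
        for _ in range(n_devices):
            ones = sum(rng.getrandbits(1) for _ in range(bits_per_trial))
            chisq += ((ones - mean) / sd) ** 2
        cumulative += chisq - n_devices  # excess deviation this trial
    return cumulative
```

Over genuinely random input the cumulative figure wanders but stays small relative to its expected scatter; a "spike" would be a run of trials whose excess deviation keeps accumulating. The skeptics' objection survives the sketch intact: what counts as a significant excursion depends entirely on the time window and threshold the analyst chooses.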

Above: A spike in random-number generator readings actually preceded the Asian tsunami of 2004. Was it somehow "known about" before it happened? Left: Random number strings: just so much meaningless gobbledegook, or are there answers to eternal questions here?


Do-It-Yourself Science

It's the common complaint of the crank that the scientific establishment's mind is closed: how else to explain official disdain for well-attested reports of alien abductions? At the same time, though, few would seriously doubt that (like any other institution) academic science may have its own vested interests, its own inertia. It's easy enough to see how scientists might be slow to engage with ideas that threaten to upset the settled assumptions of many decades. "They laughed at the Wright Brothers..."—or so we're told. Hence the concern of Rupert Sheldrake to take science out of the laboratory; but hence too his insistence that the utmost rigor must be maintained. The establishment is quick to cry "mumbo-jumbo" when alternative scientific theories are proposed—and there's no shortage of mumbo-jumbo out there, after all.

His 1994 book Seven Experiments That Could Change the World hasn't yet lived up to its title, but it did offer readers the first few struts of a framework around which a new understanding of the cosmos could be built. Sheldrake's proposed devolution of scientific responsibility to the general public was revolutionary in itself: the experiments he suggested could easily be conducted by ordinary people, without special apparatus of any kind. The collective spirit was crucial, of course: no one reader's individual experience could amount to anything much, but Sheldrake hoped to mobilize an army of researchers, conducting the same experiments many thousands of times over across the world.

But his experiments were radical too in their focus and their emphasis, engaging as they did with issues that conventional science had skirted. Small-scale, simple experiments tackled questions that were quirky on the face of it, but potentially far-reaching in their implications. How did pigeons home? Did your dog really know when you left the office for your journey home? When amputees sense feeling in phantom limbs, are they just imagining things?

One subject—the sense of being stared at—was to give rise to a full-length book in 2003. Again, though, the experimental essentials were straightforward: blindfolded subjects had to indicate when they thought they were being stared at and when they weren't. A steady-ish 60 percent have guessed right in repeats of this experiment over time. Not quite a eureka moment, then—though Sheldrake is happy enough to acknowledge that. Scrupulously modest in his claims, he merely argues that these collective findings point toward psychic connections that conventional science can't account for. He's firmer, though, in challenging those who've rushed to trash his experimental findings: if anyone's short of evidence, he suggests, they are.

In hindsight, it can be seen that Seven Experiments... was an early example of the interactive culture that was to take an increasing hold as the reach of the internet extended. The Institute of Noetic Sciences has been able to call on the contributions of enthusiasts worldwide by conducting online experiments; Sheldrake has been running several experiments on his own website (www.sheldrake.org).
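How far short of a eureka moment is a 60 percent hit rate? A quick binomial calculation puts a number on it. The figures below are purely illustrative (a round 100 trials, which is not a count taken from Sheldrake's data), and the function name is ours:

```python
import math

def hit_rate_z(hits, trials, p_chance=0.5):
    """z-score for an observed hit count against blind guessing,
    using the normal approximation to the binomial distribution."""
    expected = trials * p_chance
    sd = math.sqrt(trials * p_chance * (1 - p_chance))
    return (hits - expected) / sd

# 60 correct guesses in a hypothetical 100 staring trials:
z = hit_rate_z(60, 100)  # (60 - 50) / 5 = 2.0
```

Two standard deviations is suggestive rather than conclusive for a single run of that size; the dispute between Sheldrake and his critics turns less on this arithmetic than on whether the trials were adequately blinded and randomized.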



Mind and Matter: Human Consciousness and the Physical World

Collective Crosswords

Every time somebody solves a clue in a given crossword, Sheldrake's theory implies, that insight becomes part of the collective understanding. That crossword-doer A in Albuquerque has solved it doesn't mean that crossword-doer B in Boston will instinctively and instantly hit on the right answer, but the fact that A has been there before him or her does make the answer in some sense more available to B. Researchers at England's Nottingham University tried this experiment with a yet-to-be-published puzzle provided by London's Evening Standard newspaper. They tested its difficulty with student subjects the day before publication and then again the day after, comparing these results with the subjects' experience of a control crossword, which wasn't published at all during the relevant time. For what it's worth, the study found that, over the 24 hours that followed its publication in the Standard, the puzzle became some 25 percent easier to do.
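The Nottingham design is, in effect, a difference-in-differences comparison: the before/after change for the published puzzle, netted against the change for the unpublished control. A minimal sketch with hypothetical numbers (the study's raw scores are not reproduced here, and the function name is ours):

```python
def morphic_effect(test_before, test_after, ctrl_before, ctrl_after):
    """Difference-in-differences on mean solving scores: how much more
    did the published puzzle ease between sessions than the unpublished
    control did? Positive values are what Sheldrake's theory predicts."""
    return (test_after - test_before) - (ctrl_after - ctrl_before)

# Hypothetical mean scores (clues solved): the test puzzle goes from
# 8.0 to 10.0 after publication while the control drifts 8.0 -> 8.5.
effect = morphic_effect(8.0, 10.0, 8.0, 8.5)  # 2.0 - 0.5 = 1.5
```

Subtracting the control's drift is what guards against the mundane explanation that subjects simply get better at crosswords between sessions.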

Universal Learning

Quite how literally Sheldrake means it when he talks of a collective memory can be startling. Precedent is all, whether consciously experienced or not. Somewhere "out there" everything that happens is recorded. This affects life at all levels, from the cell cluster whose random mutations are channeled down established lines to the fully formed animal or human mind, whose experiences at some level become a part of the common consciousness. When something happens to me, I'm affected, of course; but the universal mind is modified as well: we all live, think, desire differently because of the way our forebears lived, thought, desired. But we're affected by our contemporaries, too, the collective intelligence developing with everything that happens, from the most universal level to the most trivial. What we think as individuals has implications for what is thinkable for our species; skills we've worked to master become easier for others to acquire. When a rat finds its way around a maze in a Milwaukee laboratory, for instance, it becomes a little easier for rats in Manchester or Milan to navigate the same maze.

And since we're talking about a universal intelligence, one that encompasses inanimate as well as living things, each time a particular chemical reaction takes place—anywhere—it becomes a little easier. Imagine that kind of process replicated countless times over at every level of existence: the entire universe evolves this way, learning and developing as it goes. Repetition makes the random habitual and, suggests Sheldrake, those habits that are repeated most often become most habitual, by what amounts to natural selection of a sort. "Animals," Sheldrake argues, "inherit the successful habits of their species as instincts. We inherit bodily, emotional, mental and cultural habits, including the habits of our languages."

Below: South America’s tufted capuchin is among the most conspicuously intelligent of primates: tool use is routine, young monkeys quickly copying their parents. The capuchin’s capacity to learn is extraordinary: recent studies have suggested that, after training over a period of months, it can take on so comparatively elusive a concept as that of monetary exchange.

Mind Over Matter: Noetic Science and the Question of Consciousness  
