Beyond Magazine - InDesign Class Project

Page 1

APRIL 2016

MAGAZINE

2018: The Year of the James Webb Space Telescope

VR TIMELINE With predictions of our future

3D-PRINTED PROSTHETICS Changing the way you see amputees




PROSPECTS

Simon Stålenhag’s Sci-Fi Suburbia | Featured Artist | By Dante D’Orzario

The artwork is impactful as a result of this juxtaposition between the harsh realities of life and the sci-fi technologies of our dreams. It’s reminiscent of worlds like the one so effectively portrayed in games like Half-Life 2.

Welcome to rural Sweden, sometime in the late ’80s. Citizens go about their mundane lives and children explore the countryside. But something isn’t quite right. Robots and hovercraft are commonplace, and decaying science facilities sprout from the harsh Scandinavian landscape. There’s even a rumor circulating that dinosaurs have returned from the dead after some failed experiment. This is the world that exists in artist Simon Stålenhag’s mind, and it’s only accessible through his paintings.

The alternate universe he’s created is inspired by the sci-fi movies he watched as a kid growing up in the rural areas around Stockholm. As he explains to The Verge, “The only difference in the world of my art and our world is that ... ever since the early 20th century, attitudes and budgets were much more in favor of science and technology.” So boxy Volvos, Volkswagens, and Mercedes share the landscape with robots. But science has lost some of its luster. In Sweden, a massive government science facility (equipped with an underground particle collider, of course) is long past its glory days in the field of “experimental physics.”

The artwork is impactful as a result of this juxtaposition between the harsh realities of life and the sci-fi technologies of our dreams. It’s reminiscent of worlds like the one so effectively portrayed in games like Half-Life 2, and as in such great video games, the universe created by the artist seems to continue well beyond the edge of the canvas.

The acclaimed artist, concept designer and author of Ur Varselklotet (2014), Simon Stålenhag (b. 1984) is best known for his highly imaginative images and stories portraying illusive sci-fi phenomena in mundane, hyper-realistic Scandinavian landscapes. Ur Varselklotet was ranked by The Guardian as one of the ‘10 Best Dystopias’, in company with works such as Franz Kafka’s The Trial and Andrew Niccol’s Gattaca.

PAGE_2 |BEYOND MAGAZINE



CURIOSITY

Birthday in Space, RSVP | Curiosity Rover Approaches 4th Year on Mars | By Mike Wall

Curiosity’s handlers think they know how to avoid the types of terrain that inflict the most dings and dents, and they’re making some changes to the software that drives the wheels, he explained.

NASA’s Mars rover Curiosity has now been trundling across the Red Planet for three very productive and eventful years. Curiosity landed on the night of Aug. 5, 2012, pulling off a dramatic and unprecedented touchdown with the aid of a rocket-powered “sky crane” that lowered the 1-ton rover gently to the Martian surface via cables. The six-wheeled robot then set out to determine if its immediate environs, a 96-mile-wide (154 kilometers) crater named Gale, could ever have supported microbial life. That work and more are chronicled in a new NASA video on Curiosity’s discoveries on the Red Planet.

Curiosity quickly succeeded in this main task. The rover’s observations of rocks at an area near its landing site called Yellowknife Bay allowed mission scientists to deduce that Gale Crater supported a potentially habitable lake-and-stream system for long stretches in the ancient past, perhaps for millions of years at a time. Curiosity departed the Yellowknife Bay area in July 2013, making tracks toward the foothills of the towering Mount Sharp, which rises 3.4 miles (5.5 km) into the Martian sky from Gale’s center. Mount Sharp’s base has been Curiosity’s primary destination since before the $2.5 billion mission’s November 2011 launch. The rover team wants Curiosity to climb up through the mountain’s lower reaches, reading a history of Mars’ changing environmental conditions in the rocks.

Artist’s concept depicts the NASA Mars Science Laboratory Curiosity rover, a nuclear-powered mobile robot for investigating the Red Planet’s past or present ability to sustain microbial life.

“It’s been an adventure, partly because we’re on the mountain now, and driving is much more challenging,” said Ashwin Vasavada, Curiosity project scientist at NASA’s Jet Propulsion Laboratory.

Such mountaineering will take time — time that the mission team does not officially have at the moment. Curiosity is about halfway through its first two-year extended mission, which NASA approved after the two-year prime mission ended in 2014. The rover’s handlers plan to keep applying for additional two-year extensions for the foreseeable future, Vasavada said.

Vasavada said he thinks the team will have a very good case for at least the next four years, because Curiosity remains productive and in good health. The rover team has made a lot of progress in troubleshooting a glitch that recently cropped up in Curiosity’s drilling mechanism, and concerns about the mounting damage to the rover’s six wheels have abated recently, Vasavada said. Curiosity’s handlers think they know how to avoid the types of terrain that inflict the most dings and dents, and they’re making some changes to the software that drives the wheels, he explained. “The combination of all those things makes us confident now that the wheels are going to last as long as we need them to in the mission that we have planned to get higher up on Mount Sharp,” he said.

PAGE_3 |BEYOND MAGAZINE



PLUTO

NOW IN VIVID COLOR

ILLUSTRATED VIEW OF PLUTO FROM ONE OF ITS MOONS

Findings from the “NEW HORIZONS” mission By Mike Wall

When NASA researchers launched New Horizons back in 2006, they didn’t know what they would encounter on Pluto – some theorized that because of its distance from the sun, the planet could have cooled down long ago and ceased to have significant geologic changes. But it turns out that Pluto’s landscape is home to incredible variety – everything from wide, flat plains to soaring, icy mountains. And now maybe ice volcanoes to boot.

PHOTOS COURTESY OF NASA

NASA’s New Horizons spacecraft has sent back the first in a series of the sharpest views of Pluto it obtained during its July flyby – and the best close-ups that humans may see for decades. Each week the piano-sized New Horizons spacecraft transmits data stored on its digital recorders from its flight through the Pluto system on July 14. These latest pictures are part of a sequence taken near New Horizons’ closest approach to Pluto, with resolutions of about 250-280 feet (77-85 meters) per pixel – revealing features less than half the size of a city block on Pluto’s diverse surface. In these new images, New Horizons captured a wide variety of cratered, mountainous and glacial terrains.

“These close-up images, showing the diversity of terrain on Pluto, demonstrate the power of our robotic planetary explorers to return intriguing data to scientists back here on planet Earth,” said John Grunsfeld, former astronaut and associate administrator for NASA’s Science Mission Directorate.

Images from NASA’s New Horizons mission continue to astound – this time, with a startling phenomenon researchers had never seen in our solar system before: two possible ice volcanoes on the surface of Pluto. When NASA researchers launched New Horizons back in 2006, they didn’t know what they would encounter on Pluto – some theorized that because of its distance from the sun, the planet could have cooled down long ago and ceased to have significant geologic changes. But it turns out that Pluto’s landscape is home to incredible variety – everything from wide, flat plains to soaring, icy mountains. And now maybe ice volcanoes to boot.

One of the features scientists are studying is Wright Mons, which was named in honor of the Wright Brothers. It’s a huge feature about 90 miles across and 2.5 miles tall. If it is in fact an ice volcano, it would be the largest such feature discovered in the outer solar system.

PAGE_4 |BEYOND MAGAZINE



JWST

HUBBLE’S little brother

ILLUSTRATIONS COURTESY OF NASA

The James Webb Space Telescope is on track to launch in 2018. How will Hubble’s successor match up? By Lee Billings

If the Hubble Space Telescope’s 2.4 meter mirror were scaled to be large enough for Webb, it would be too heavy to launch into orbit. The Webb team had to find new ways to build the mirror so that it would be light enough – only one-tenth of the mass of Hubble’s mirror per unit area – yet very strong.

In a gymnasium-sized cleanroom dominated by a laser-guided robotic arm on mustard-yellow scaffolding, bunny-suited technicians are on the verge of completing the primary mirror of the James Webb Space Telescope, a $9 billion orbital observatory planned to lift off in 2018, being built by NASA in partnership with the European and Canadian Space Agencies. Gripped by the robotic arm, the last of the mirror’s 18 lightweight hexagons of gold-coated beryllium hovers, ready for installation.

Each segment is as big as a coffee table but hollowed out to weigh only 20 kilograms. The entire mirror spans 6.5 meters edge to edge. After being mated to the rest of the telescope, which is still under construction, it will ultimately be launched to its deep-space destination some 1.5 million kilometers from Earth. There, at a point of gravitational quiescence called L2, Webb will begin what astronomers say will be revolutionary studies of the universe.

Once the final segment is mounted, it will mark the most tangible milestone yet in the observatory’s multi-decadal path to launch.

Turned skyward and concave in a supportive cobweb of carbon fiber called a backplane, the mirror looks like the giant, unblinking compound eye of an insect. For Alan Dressler, a senior astronomer at the Carnegie Institution for Science, Webb’s completed mirror brings other anatomies to mind.

“To finish Webb’s mirror is to hear the first heartbeat of the magnificent creature that will carry us to when the universe that bore us was itself born.”

CONTINUED ON PAGE_11


BUILDING THE

ARTIFICIAL

BRAIN



CAN WE CREATE MACHINES WHO LEARN LIKE WE DO? By Kate McAlpine (photo illustration by Joseph Xu)

We’re summoning the demon. That’s what Elon Musk, serial entrepreneur on a cosmic scale, said about AI research to an audience at MIT last October. “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. … In all those stories where there’s the guy with the pentagram and the holy water, it’s like, he’s sure he can control the demon. Doesn’t work out.”

The British astrophysicist and brief historian of time Stephen Hawking broadly agrees. In December, he told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

Wow. That’s heavy. But this isn’t the first time that AI has seen a lot of hype. And last time, it wasn’t the human race that disappeared.

Wei Lu, maker of brain-inspired computer hardware and professor of electrical engineering and computer science



“AI has gone through different stages,” said Wei Lu, an associate professor of electrical engineering and computer science at U-M. “It has died twice already!”

AI’s deaths – or its winters, as insiders more generally refer to the field’s downturns – are characterized by disillusionment, criticism and funding cuts following periods of excitement, money and high expectations. The first full-scale funeral, around 1974, was caused in part by a scathing report on the field as well as drastically reduced funding from what is now the Defense Advanced Research Projects Agency (DARPA). The second, around 1987, also involved heavy cuts from DARPA.

But hey, it’s now summertime for AI, and there is serious money at play. Google snapped up the company DeepMind for a reported $400 million and Facebook has started its own AI lab. Ray Kurzweil, futurist and AI don for Google, said machines could match human intelligence by 2029. Technology certainly seems smarter than it was a decade ago. Most smartphones can now listen and talk. Computers are getting much better at interpreting images and video. Facebook, for instance, can recognize your face if you’re tagged in enough photos. These advances are largely thanks to machine learning, the technique of writing algorithms that can be “trained” to recognize images or sounds by analyzing many examples (see The Perception Problem sidebar with this story).

In spite of the AI optimism (to the point of existential pessimism), the field might best be described as a hot mess. Hot because it’s a whirlwind of activity – you’ve got self-driving cars and virtual assistants. You’ve got artificial neural networks parsing images, audio and video. Computer giants are starting to make special chips to run the artificial neural networks. But the challenge of organizing these pieces into an intelligent system has taken a back seat to the development of the new techniques. That’s the mess part.

NEURAL HARDWARE

Current AI systems are great at the tasks for which they have been programmed, but they are missing our flexibility, says John Laird, a professor of computer science and engineering at U-M. He cites IBM’s Jeopardy-winning Watson. Drawing on an extensive stockpile of knowledge, it’s tops for answering questions (or questioning answers, if you prefer). “But you can’t teach it Tic Tac Toe,” Laird pointed out. (Some solace for human champion Ken Jennings.)

Humans can learn how to do new things in a variety of ways – through conversations, demonstrations and reading, to name a few. No one has to go in and hand-code our neurons. But how do you get computers to learn like that? Researchers come at it in a variety of ways. Lu’s pioneering work with a relatively new electrical component feeds into a bottom-up strategy, with circuits that can emulate the electrical activity of our neurons and synapses. Build a brain, and the mind will follow. Laird, on the other hand, is going straight for the mind. He is a leader in cognitive architectures – systems inspired by psychology, with memories and processing behaviors designed to mirror the functional aspects of how humans learn. These aren’t the only approaches (after all, who said the human brain was the ideal model for intelligence?), but these opposing philosophies represent a central question in the development of AI: if it is going to live up to its reputation, how brain-like does AI have to be?

Let’s start at the bottom. Much is known about the brain’s connections at the cellular level. Each neuron gathers electrical signals from those to which it is connected. When the total incoming current is high enough, it sends out an electrical pulse. When a neuron fires, it provides input to the neurons connected to its outgoing “wires.” In terms of computers, that’s a processing behavior – the neurons filter their incoming signals and decide when to send one out. But the connections between the neurons, called synapses, change depending on the pulses that came before. Some pathways get stronger while others get weaker. And that, in computer terms, is memory.

Simple enough, right? Except the best estimates for the number of neurons in an adult human brain are in the tens of billions, with thousands of synapses per neuron. Yet there are people trying to simulate that in all its complicated glory. They call themselves the Human Brain Project. Led by the Swiss Federal Institute of Technology in Lausanne, Switzerland, the group claims that it will model the brain down to its chemistry on a supercomputer. Europe’s scientific funding agency made a $1.3 billion bet on them in 2013.
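That cellular picture – inputs weighted by synapses, a firing threshold, and pathways that strengthen with use and weaken without it – can be sketched in a few lines. This is an illustrative toy only (the class name, constants, and learning rule are my own, not from any of the research described here), but it shows how "processing" and "memory" both fall out of the same simple rules:

```python
# Toy model of a neuron with plastic synapses (illustrative sketch).
# step() sums weighted input spikes, fires if the total crosses the
# threshold, then updates the synaptic weights: a synapse that was
# active when the neuron fired is strengthened; otherwise it decays.

class ToyNeuron:
    def __init__(self, n_inputs, threshold=1.0):
        self.weights = [0.5] * n_inputs   # synaptic strengths
        self.threshold = threshold

    def step(self, inputs, learn_rate=0.1, decay=0.01):
        """inputs: list of 0/1 spikes from upstream neurons."""
        total = sum(w * x for w, x in zip(self.weights, inputs))
        fired = total >= self.threshold
        for i, x in enumerate(inputs):
            if fired and x:
                # active synapse on a firing neuron gets stronger
                self.weights[i] += learn_rate * (1.0 - self.weights[i])
            else:
                # otherwise the synapse slowly weakens
                self.weights[i] -= decay * self.weights[i]
        return fired
```

Feed the same input pattern repeatedly and those synapses grow stronger – the network's "memory" is nothing more than the current set of weights.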


Last summer, less than a year into the project, 500 neuroscientists around Europe condemned the cause as a waste of money and time. They say neuroscience doesn’t yet have the background knowledge for an accurate simulation. “The question is where will the function come from?” said Lu. “Neuroscientists still don’t understand that, so there’s no guarantee that putting a system together with a large number of neurons and synapses, regulated by chemistry at the molecular level, actually will be very intelligent.”

In 2012, Chris Eliasmith of the University of Waterloo in Ontario, Canada, made the news with a million-neuron simulation of a brain. It won’t be taking over the world any time soon, but it can recognize handwritten numbers, remember sequences, and make predictions about numerical patterns. The virtual brain, called Spaun, can turn a handwritten image into the idea of the number, explained Eliasmith. “Spaun knows that it’s a number 2 and that 2 comes before 3 and after 1. All that kind of conceptual information is brought online,” he said.

Spaun can then use that idea to answer questions, some of which you might find on an IQ test. For instance, in one of the challenges highlighted in the team’s videos, Spaun is presented with 4:44:444, 5:55:? “Spaun figures out the pattern in the digits – an abstract pattern that only humans can recognize,” said Eliasmith. It responds 555 by drawing three 5s with a virtual arm, translating the idea into a sequence of motions.

OK, so that might not be the most impressive thing you ever saw a computer do, but what distinguishes Spaun is how it operates. Spaun is still the largest working brain simulation, and Eliasmith and his team are continuing to expand its abilities. “Our real goal here is not to take a whole mess of neurons and see what happens. It’s to take neurons that are organized in a brain-like way and see how those neurons might control behavior,” said Eliasmith.

Ensuring that machine intelligence works for the good of humanity needs to be one of our primary goals

A DIFFERENT KIND OF BRAIN

One of Spaun’s upcoming features is a hippocampus, which Eliasmith describes as the part of the brain that records our experiences and extracts facts from them. The team has also improved learning in the spiking neural networks, creating a two-tier system. While the higher tier is focused on learning the goal, the lower tier is concerned with the steps needed to reach that goal. The hierarchy makes it possible to apply previously learned skills to new tasks.

To test the system, the team first had it learn how to move an avatar to a target location in a virtual room. They used reinforcement learning, giving the avatar positive points when it reached the target and taking away a fraction of a point for every step it took to get there. The avatar doesn’t just want to win by reaching the target – it wants the max score. Then, with the same rules in place, the network learned to pick up an item from one spot and deliver it to another. When the researchers made the system start cold with the delivery task, it only managed 76 percent of the possible points. But if the system first learned the simpler task of how to reach a location, it scored 96 percent, meaning it chose more efficient paths. “This ability to use what was learned in the past to perform better in the future is a hallmark of biological cognitive systems,” said Eliasmith.

After Eliasmith’s group has finished developing these and other abilities, it will integrate them into Spaun. Part of the reason why the team first develops smaller models is because Spaun runs very slowly on the computers available to the team. It takes 2.5 hours of processing time to get 1 second of the full simulation – and that’s before the upgrades. But there may soon be a better way.
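The scoring scheme Eliasmith’s team used – a reward for reaching the target minus a fraction of a point per step – is the classic shape of reinforcement learning. As a hedged sketch (this is standard tabular Q-learning on a toy one-dimensional corridor, with names and numbers of my own choosing, not anything from Spaun’s spiking networks), the idea looks like this:

```python
# Tabular Q-learning sketch: an agent on a 1-D corridor earns +1 for
# reaching the target cell and loses 0.05 points per step, so the
# highest-scoring policy is the shortest path to the target.
import random

def greedy(q, s):
    # pick the action with the highest learned value in state s
    return max((-1, 1), key=lambda act: q[(s, act)])

def train(n_cells=5, target=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    q = {(s, act): 0.0 for s in range(n_cells) for act in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != target:
            # explore occasionally, otherwise exploit current knowledge
            act = random.choice((-1, 1)) if random.random() < eps else greedy(q, s)
            s2 = min(max(s + act, 0), n_cells - 1)
            r = 1.0 if s2 == target else -0.05   # step penalty
            nxt = 0.0 if s2 == target else max(q[(s2, -1)], q[(s2, 1)])
            q[(s, act)] += alpha * (r + gamma * nxt - q[(s, act)])
            s = s2
    return q
```

Early episodes wander; as value propagates back from the target, the greedy policy settles on moving straight toward it – the same "learn the simpler task first, score better on the harder one" effect the team reported.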

“Hardware is facing bottlenecks because we can’t keep making devices faster and faster,” said Lu. “This has forced people to reexamine neuromorphic approaches.” Qualcomm and IBM are right on this with the new chips Zeroth and TrueNorth. These chips are hardwired with versions of neurons and synapses in traditional computing parts, but a relatively new electronic component enables a more direct parallel with biology.

Memristors, only around since 2008, can play the role of the synapse. Like a synapse, a memristor modulates how easily an electrical current can pass depending on the current that came before. The memristor naturally allows the current flow between the wires to vary on a continuum. In contrast, the transistors in traditional processors either allow current to pass or they don’t. The different levels of resistance enable memristors to store more information in each connection.

The neuromorphic networks made by Lu’s group don’t look much like the wild web of cells in a biological neural network. Instead, the team produces orderly layers of parallel wires, with each layer running perpendicular to its neighbors. Memristors sit at the crossing points, linking the wires. The neurons are at the edges of the grid, communicating with one another through the wires and memristors. The circuits can be designed so that, like their biological counterparts, the electronic neurons only send out an electrical pulse after reaching a certain level of current input.
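One way to see why the crossbar layout matters: with a voltage on each row wire and a memristor conductance at each crossing point, the current collected on each column wire is a sum of voltage-times-conductance terms – so the grid physically computes, in one step, the matrix-vector product at the heart of a neural network layer. A simple software model of that idea (my own illustration, not Lu’s actual circuit design):

```python
# Software model of a memristor crossbar: applying voltages to the
# row wires and reading currents from the column wires performs a
# matrix-vector multiply, because each column current is the sum of
# voltage * conductance at every crossing point (Ohm's law).

def crossbar_output(voltages, conductances):
    """voltages: one value per row wire.
    conductances: grid[row][col] of memristor conductances.
    Returns the current flowing out of each column wire."""
    n_cols = len(conductances[0])
    return [
        sum(v * row[col] for v, row in zip(voltages, conductances))
        for col in range(n_cols)
    ]
```

In hardware the multiply-accumulate happens in the analog physics itself rather than in a loop, which is why memristor grids are attractive for running neural networks efficiently.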



“We Need To Talk About Annika” by Simon Stålenhag



JWST

| CONTINUED FROM PAGE_4

That may sound overly dramatic, but, then again, the universe is a dramatic place, and Dressler has been waiting for this moment for more than twenty years. In the early 1990s, after the launch of NASA’s Hubble Space Telescope, he chaired an influential committee that recommended the agency’s next great observatory should do something Hubble could not—stare back all the way to nearly the beginning of time, to “first light,” an era more than 13 billion years ago when stars and galaxies first coalesced and ended the primordial dark ages of the cosmos. Witnessing that first light, astronomers could then retrace the universe’s evolution in unprecedented detail, watching the assembly and growth of galaxies, the emergence of subsequent generations of stars and the births of planetary systems. Now, finally, the Webb telescope is poised to do just that.

To see all the way back to the first lights turning on in the universe, the telescope needs a mirror much bigger than Hubble’s 2.4-meter disk of silvered glass. It must also be very cold. As it streams through the expanding universe, the visible light emitted by those very first luminous objects is stretched like taffy, becoming a ghostly infrared glow that can only be seen—felt, really—by something cooled close to absolute zero, the coldest temperature there is. To conceive of how difficult it is to glimpse these distant galaxies, imagine looking up at the moon and trying to find the faint glow of a child’s nightlight on its surface. In its search for first light, Webb’s planners say, it will be trying to see galaxies twenty times dimmer than that.

A TANGLED WEBB

As Webb edges closer to ushering in a new era of space science, one might wonder where it came from, what exactly it will do, and what, if anything, might come after its mission is done. As the last segments of the observatory’s giant mirror were being set into place, I visited Goddard to talk with the people who know Webb best—the scientists and engineers who have brought it to life through a long, rocky and multigenerational gestation.

Webb’s foremost scientific champion is arguably John Mather, a slender and soft-spoken astrophysicist at Goddard who wears sensible shoes and a toothy grin. He speaks about Webb like a grandfatherly Scoutmaster would about building fires or tying knots—there is a soothing, almost wholesome patience in his diction, and he savors cutting through difficult details with short, simple summaries. It also does not hurt that he has a Nobel Prize in physics, which he won for groundbreaking studies of the cosmic microwave background, the faint afterglow of the Big Bang that constitutes the universe’s first and earliest baby picture. Mather was still in his forties when he became Webb’s senior project scientist in 1995, and today is nearing 70. In all that time, he has been working toward placing the next set of pictures in his cosmic photo album, the telescope’s promised images from the mysterious era of first light.

“TO FINISH WEBB’S MIRROR IS TO HEAR THE FIRST HEARTBEAT OF THE MAGNIFICENT CREATURE THAT WILL CARRY US TO WHEN THE UNIVERSE THAT BORE US WAS ITSELF BORN.”

Instead, Mather says, the first light could have come from supermassive black holes that were messy eaters. Found at the centers of most galaxies, these billion-solar-mass behemoths must have plumped up by swallowing immense volumes of gas in the primordial universe, building up white-hot accretion disks around their maws like barbeque sauce on the jowls of a competitive eater. The universe’s first light could have come via glowing crumbs from a black hole’s table, and if so, Webb could tell us.

MAKING THE MIRRORS

These days, however, Mather is most excited about what Webb can reveal of our local, present-day corner of the cosmos, rather than its far-off past. The observatory’s keen infrared vision can peer into the dust-shrouded centers of molecular clouds and circumstellar disks to watch as worlds grow like embryos in a womb. And Webb’s giant mirror, he says, is just big enough to spy signs of water vapor—evidence of possible oceans—upon a few favorably positioned small, just-maybe rocky and Earth-like planets around our nearest neighboring stars.

“Ever since I was six or seven years old, I’ve been wondering ‘how did we get here?’ but I couldn’t get the answers,” Mather says. “We didn’t know anything about the primordial universe. We didn’t know if planets were unique to our sun or common. We still don’t know if life is unique or common. Every step along that path is something Webb can work on.”

As other, smaller NASA astrophysics projects suffered postponements and cancellations to offset Webb’s swelling costs and delays, the observatory earned a reputation as “the telescope that ate astronomy.” The troubles culminated in 2010 and 2011, when fed-up members of Congress threatened to defund Webb entirely. “... tell Congress the budget wasn’t enough, and they were either going to kill Webb or make it right. From my perspective, a miracle happened.”

PAGE_11 |BEYOND MAGAZINE


Embracing Adaptive Technology | 3 tech companies seeking to change the way we approach disabilities | suitX Exoskeletons | Bespoke Innovations | Enabling the Future


Patients who are missing limbs are generally prescribed socket-type prostheses that are custom-made to fit over their residual arm or leg. Despite two million people in the United States living with the loss of a limb, very little has changed about prosthetic technology over the years. While standard socket prosthetic devices are common, they are not without their flaws and drawbacks. The cost can often be tens of thousands of dollars, much of which is not covered by most healthcare plans. There also tends to be a lack of stability when the prosthetic is used, which can result in falls or accidents. And ill-fitting prosthetics can lead to complications like painful sores and ongoing, chronic pain. Curiously, 3D printing may be able to provide solutions to all of these problems. – by Scott J. Grunewald


Helping Hands From e-NABLE

Sadly, the FDA has not approved implanted prosthetic sockets for general use; it has only granted exceptions for research purposes. The primary reasons are the various complications, which can cause the patient stress and leave them open to ancillary injuries. Ruppert and his fellow researchers are focusing their efforts on addressing these drawbacks, however, and hope to find ways to reduce the rehabilitation period and eliminate the worry over infection. They are also focusing heavily on finding ways of merging the implant and the patient’s specific anatomy, and 3D printing is looking to be a viable option. The research team is also exploring the effects of using a therapeutic process called low intensity pulsed ultrasound (LIPUS) on the implant/bone connection. The process involves exposing the patient’s entire body to low-magnitude and high-frequency vibrations at a very specific amplitude range.






Scott Summit’s 3D Printed Prosthetics


The suitX “Phoenix” | The World’s Lightest Exoskeleton

The high cost of prosthetic devices is caused by the need for each device to be custom made for the patient. A socket that attaches to the amputee’s residual limb needs to be fit exactly and held in place with painful straps or braces. Even with a professional fit, sores and wounds are a common occurrence for those wearing the device. 3D scanning has greatly improved the ability to have prosthetic devices made for a specific patient, however, and 3D printing is threatening to drop the price of prosthetic devices to a fraction of the cost of traditionally made prosthetics. 3D printing is also being explored as a way to make custom-fit prosthetic implants that are surgically implanted directly into the patient’s bone, so uncomfortable or improperly fit sockets are eliminated and the prosthetic will attach directly to the patient’s body.

Research into osseointegration – the surgical integration of prosthetic implants with the amputee’s remaining bone structure, or direct skeletal prosthesis – has been explored for a few years now, but recently it has begun gaining popularity again. The implant will penetrate the skin and connect directly to the bone, so a prosthetic limb can physically be attached to the patient’s body. But the socket would be the only permanent implant, meaning the limb could easily be removed as needed. This direct prosthesis-to-bone connection offers a more stable connection to the patient’s body, giving them greater control of the prosthesis, and virtually eliminates any lingering pain or sores that traditional prosthetics can cause. It also offers the user more sensory feedback, so they can more directly interact with their environment.

PAGE_18 |BEYOND MAGAZINE




Preliminary research has shown that LIPUS can increase the density of the bone around the implant and minimize the loss of bone during the rehabilitation process. Ruppert and his fellow researchers recently presented their findings at the Annual Meeting of the Orthopaedic Research Society, an international organization that seeks to study, support and advocate for new musculoskeletal research findings. The Orthopaedic Research Society was founded back in 1954, and is currently the world’s largest forum for presenting musculoskeletal breakthroughs.

Photos Courtesy of suitX Exoskeletons | Bespoke Innovations | Enabling the Future


“It can be like a piece of jewelry or a tattoo. It can become a fashion statement. It can be beautiful.” – Scott Summit




THE HISTORY OF

VIRTUAL REALITY From stereoscopes to the Oculus Rift, we’ve compiled VR’s past, present, and future

By Matthew Schnipper

The promise of virtual reality has always been enormous. Put on these goggles, go nowhere, and be transported anywhere. It’s the same escapism peddled by drugs, alcohol, sex, and art: throw off the shackles of the mundane through a metaphysical transportation to an altered state. Born of technology, virtual reality at its core is an organic experience. Yes, it’s man meets machine, but what happens is strictly within the mind.

It had its crude beginnings. A definition of virtual reality has always been difficult to formulate; the concept of an alternative existence has been pawed at for centuries. But the closest modern ancestor came to life in the fifties, when a handful of visionaries saw the possibility of watching things on a screen that never ends, though the technology wasn’t yet good enough to justify the idea. The promise of the idea was shrouded, concealed under clunky visuals. But the concept was worth

pursuing, and others did (especially the military, which has used virtual reality technology for war simulation for years). The utopian ideals of a VR universe were revisited by a small crew of inventors in the late ’80s and early ’90s. At the time the personal computer was exploding, and VR acolytes found a curious population eager to see what the technology had to offer. Not enough, it turned out. Though a true believer could immerse him- or herself in the roughly built digital landscape, the chasm between that crude digital experience and the powerful subtlety of real life was too great. The vision simply did not match the means. In the mid-’90s, VR as an industry basically closed up shop. Though still used in the sciences, those eager to bring VR to the masses found themselves overshadowed by a glitzier, more promising technological revolution: the internet.



YESTERDAY

1838|STEREOSCOPIC PHOTOS & VIEWERS In 1838 Charles Wheatstone’s research demonstrated that the brain processes the different two-dimensional images from each eye into a single object of three dimensions. Viewing two side-by-side stereoscopic images or photos through a stereoscope gave the user a sense of depth and immersion. The later development of the popular View-Master stereoscope (patented 1939) was used for “virtual tourism”. The design principles of the stereoscope are used today in the popular Google Cardboard and other low-budget VR head-mounted displays for mobile phones.

1793|PANORAMIC PAINTINGS


1929|LINK TRAINER, THE FIRST FLIGHT SIMULATOR In 1929 Edward Link created the “Link Trainer” (patented 1931), probably the first example of a commercial flight simulator, which was entirely electromechanical. It was controlled by motors linked to the rudder and steering column to modify the pitch and roll, and a small motor-driven device mimicked turbulence and disturbances. Such was the need for safer ways to train pilots that the US military bought six of these devices for $3,500; in 2015 money that was just shy of $50,000. During World War II, over 10,000 “blue box” Link Trainers were used by more than 500,000 pilots for initial training and improving their skills.

1950s|MORTON HEILIG’S “SENSORAMA” In the mid-1950s cinematographer Morton Heilig developed the Sensorama (patented 1962), an arcade-style theatre cabinet designed to stimulate all the senses, not just sight and sound. It featured stereo speakers, a stereoscopic 3D display, fans, smell generators and a vibrating chair, all intended to fully immerse the individual in the film. Heilig also created six short films for his invention, all of which he shot, produced and edited himself. The Sensorama films were titled Motorcycle, Belly Dancer, Dune Buggy, Helicopter, A Date with Sabina and I’m a Coca-Cola Bottle!

1930s|SCI-FI WRITER STANLEY G. WEINBAUM PREDICTS VR



TODAY

1991|“VIRTUALITY” GROUP ARCADE MACHINES The early ’90s saw the first virtual reality devices to which the public had access, although household ownership of cutting-edge virtual reality was still far out of reach. The Virtuality Group launched a range of arcade games and machines. Players would wear a set of VR goggles and play on gaming machines with real-time (less than 50 ms latency) immersive stereoscopic 3D visuals. Some units were also networked together for a multi-player gaming experience.

1995|THE NINTENDO “VIRTUAL BOY” The Nintendo Virtual Boy (originally known as VR-32) was a 3D gaming console hyped as the first portable console that could display true 3D graphics. It was first released in Japan and North America at a price of $180, but it was a commercial failure despite price drops. The reported reasons for this failure were the lack of colour in its graphics (games were in red and black), a lack of software support, and the difficulty of using the console in a comfortable position. Nintendo discontinued its production and sale the following year.

1999|“THE MATRIX” IS RELEASED



2014|FACEBOOK BUYS OCULUS RIFT, DARPA COMMISSIONS IT Facebook acquired virtual-reality technology company Oculus VR for $2 billion. Oculus makes the Oculus Rift, a virtual-reality headset originally funded on Kickstarter. The United States military constantly wants to outmaneuver its enemies, and with the battle moving into more advanced technological realms, the Pentagon’s Defense Advanced Research Projects Agency (DARPA) is looking for answers. It may have found a partner: reports suggest the Oculus Rift virtual-reality headgear could be heading to a soldier near you.

2016|VR HEADSETS TO GENERATE $895M IN REVENUE



TOMORROW

* As predicted by Google futurist Ray Kurzweil

2020s|VR HEADSETS ARE REPLACED WITH BRAIN IMPLANTS

2030s|DIGITALLY UPLOADING CONSCIOUSNESS BECOMES POSSIBLE


2040s|

HUMANS SPEND MOST OF THEIR TIME IN VIRTUAL REALITY

The first fifteen years of the 21st century have seen major, rapid advancement in the development of virtual reality. Computer technology, especially small and powerful mobile technology, has exploded while prices are constantly driven down. The rise of smartphones with high-density displays and 3D graphics capabilities has enabled a generation of lightweight and practical virtual reality devices. The video game industry has continued to drive the development of consumer virtual reality unabated. Depth-sensing camera sensor suites, motion controllers and natural human interfaces are already a part of daily computing tasks. Recently, companies like Google have released interim virtual reality products such as the Google Cardboard, a DIY headset driven by a smartphone. Companies like Samsung have taken this concept further with products such as the Gear VR, which is mass-produced and contains “smart” features such as gesture control.

Developer versions of final consumer products have also been available for a few years, so there has been a steady stream of software projects creating content for the imminent market entrance of modern virtual reality. It seems clear that 2016 will be a key year for the virtual reality industry: multiple consumer devices that seem to finally answer the unfulfilled promises made by virtual reality in the 1990s will come to market. These include the pioneering Oculus Rift, which was purchased by social media giant Facebook in 2014 for the staggering sum of $2 billion, an incredible vote of confidence in where the industry is set to go. When the Oculus Rift releases in 2016 it will compete with products from Valve and HTC, Microsoft, and Sony Computer Entertainment. These heavyweights are sure to be followed by many other enterprises, should the market take off as expected.


