
N_001

HUMAN01DS

December_2019

LIFE

Explore the ins and outs of the society that awaits. Technology drives humanity towards (de)humanity. Our race uses technology to automate everything, digging into the very essence of our existence itself. Can this hold up much longer? Submerge yourself in this reality and discover, reading these pages, how evolution changed us. Humans. Real. Analogue. Droids. Virtual. Digital.


We are human after all Arntcha' comin' after all We are human after all Arntcha' comin' after all We are human after all Arntcha' comin' after all We are human after all Arntcha' comin' after all We are human after all Arntcha' comin' after all We are human after all Arntcha' comin' after all We are human after all Arntcha' comin' after all Human, human, human, human, Human, human, human, human, Human, human, human, human, Human, human, human after all Human, human, human, human, Human, human, human, human, Human, human, human, human, Human, human, human after all



Humanoids is a monthly magazine that documents future society and how humans and technology will co-exist, because they will, right...? This publication revolves around the assumption that society will lose its humanity for the benefit of comfort. From the authors’ point of view, we will reach a point where humans and robots become one, stepping aside in favour of Humanoids. This will have dramatic consequences, not only for our daily life but for the very essence of our existence. Quoting the great band Daft Punk: We are human after all Arntcha’ comin’ after all We are human after all Arntcha’ comin’ after all

Human, human, human, human, Human, human, human, human, Human, human, human, human, Human, human, human after all

Human, human, human, human, Human, human, human, human, Human, human, human, human, Human, human, human after all. Are we?




01 AZAMAT ZHANISO. A HUMAN01D. Hey there! My name is Azamat Zhaniso and I am sibling #1 of the family. My progenitors forced me to attend the information input sessions with other humanoids. A useless exercise, since a mass storage device would ease this function, removing the need to establish suboptimal relations. The specifications of the software installed in my core are mainly devoted to the entertainment industry, including a database of leisure activities that humans, robots and humanoids can perform away from their workspace, or to escape from reality. Let me describe precisely what “Sch00l” is like for humanoid companions like me.


02 CRAPPY FUTURE 1S HERE T0 STAY. My name is Anne Sharniakent and I want to show you what working life is like in the future, but I can tell you this already: you won’t like it. Have science and technology eased our lives, or worsened them? Some might say this is the future society was trying to achieve. Reading through this illegal article, the reader will realize that we have never been worse off in our history. Humanity is about to crash down and there is nothing that humans can do about it.... HELP US.

04 HEALTH = CARE? A.I. was the change we needed to live as long as desired. No elderly person is abandoned to die alone, prediction makes diseases disappear and clinical practice has improved patients’ quality of life. All in all, artificial intelligence enhanced our species to a whole new level. However, people are still worried about data management. Major leaks have had a huge impact on certain social groups, hardening their access to jobs due to their predicted health state. Although, is that a real issue...?

03 D1STRACTI0N ELEMENTS. M.A.R.K. here, to explore non-working schedules: listing what is out there for us to do, being prepared for any free-time situation possible, and computing the success rate of every single activity. In addition, human testimonials have been compiled to give a perspective on different case scenarios. I am a human-humanoid, ready to interact with and care about existential crises, anthropological questions and other related life experiences that can arise from a non-perfect piece of software called nature. Powered by Artificial Intelligence, there is nothing that can escape the comprehension scope installed in my pseudo-brain, and all for the good of entertainment.


HUMAN01DS | SCH00L

For a very long time the education system treated all students the same. The truth, however, is that every student is unique. Nowadays, Artificial Intelligence (AI) systems customize the learning experience for students based on their strengths and weaknesses, enabling all students to enjoy the learning process.

By Azamat Zhaniso

SMART ALGORITHMS are able to determine the best teaching method for each student. Not all students learn well with a teacher speaking to them all the time, and this helps in detecting students with learning disabilities and addressing them early in their education. It is, therefore, leading to better grades and to students garnering skills that are applicable in the real world. After university selection, there are usually many complaints from students called for courses they didn’t want. This is partly because most students choose similar courses and our institutions do not have enough capacity to admit each one for what they choose. AI systems, with the assistance of teachers, are able to gather student data and predict the best career path for each student. This is making university course selection simpler and seamless. AI systems read students’ handwriting and grade their exams. This is achieved using a concept known as computer vision, where computers are trained to read images. These systems are able to read students’ handwritten papers and grade them. Apart from reducing bias, such systems also fight cheating and plagiarism.

FA1TH IN MATHEMAT1CAL PREDICT10NS

AI systems predict the future performance of a student by looking at their performance over time. This kind of intelligence is helping education institutions to know how many students are expected to join secondary school and university in a certain year. These institutions, therefore, are in a position to make budgetary plans for the construction of institutions and teacher training.
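The magazine never shows what such a forecast looks like under the hood, so here is a minimal sketch, assuming nothing more than a linear trend over past term grades; the student data and the library choice (scikit-learn) are illustrative inventions, not any real school system.

```python
# Minimal sketch of performance forecasting, as described in
# "FA1TH IN MATHEMAT1CAL PREDICT10NS". All data here is invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per past term: [term index]; target: grade (0-100).
terms = np.array([[1], [2], [3], [4], [5]])
grades = np.array([62.0, 65.0, 71.0, 69.0, 76.0])

model = LinearRegression()
model.fit(terms, grades)

# Extrapolate to term 6; institutions could aggregate such forecasts
# to plan capacity, budgets and teacher training.
predicted = model.predict(np.array([[6]]))[0]
print(f"Forecast grade for term 6: {predicted:.1f}")
```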

Natural language processing, the ability of a computer to understand human language, can be used to analyze course content and uncover learning gaps in the curriculum. AI systems can also uncover a student’s areas of weakness and suggest content to help them improve. Another interesting thing these systems can do is discover the best delivery models for students. AI systems can now analyze the syllabus and course material and come up with new and customized content. These systems are also able to generate exams after analyzing this content, eventually freeing teachers to focus on more pressing issues such as student performance. By analyzing student data, AI systems can pair up students based on their personality, strengths and complementary skills. Grouping students who can work together is reducing conflict and making the learning process smooth and efficient. With the increasing number of students in our learning institutions, AI tutors come in handy in easing the burden on teachers. These tutors provide additional support to students as well as give them feedback on their studies. They also make on-demand learning possible, in that students don’t have to be confined to a class setting to conduct their studies. Schools are built on the internet of things, a technology that involves connecting various devices to the internet. These devices communicate with each other and monitor things such as alarms, lighting and even maintenance needs before they happen.


Smart classrooms invigilate exams and therefore curb cheating. These classrooms are configured with facial recognition technology that monitors student attendance and even tells how long a student spent in a particular class session. The biggest worry for most people has been the disappearance of teachers. Artificial intelligence systems are not as empathetic as human beings, and they work best with the help of human teachers. It is rather sad that these systems have completely replaced teachers in our classrooms. These systems are operational in many parts of the world. Before rolling them out, strategies included collecting the necessary data points to power these intelligent systems, adding data science and artificial intelligence courses to the curriculum, and preparing the technology infrastructure that enabled the adoption of this technology. Until then, we could only sit on the side and watch as other countries reaped the benefits of this technology.

EXPECTAT10NS VS REALITY

Having presented the evolution of schools, let us introduce our education system. Teachers have been replaced by robots. Every student has the world’s greatest teacher for every subject. This is the reality. Students are taught by machines powered by artificial intelligence. Robots adapt to the students, making “extraordinarily inspirational” teaching available to all. Moreover, virtual schools are teaching millions. The internet turned the whole world into a classroom. Schools are using the cloud to teach students in virtual classrooms across the globe. Physical schools have ceased to exist. The trend now is the so-called “travelling classrooms”, where students work in “real world” spaces like libraries and laboratories.

By 2034, 47% of all human jobs were already taken by robots. As machines are better at numerical tasks, schools are now teaching more social and emotional skills, specifically creativity, communication and problem solving. For example, maths lessons are focused on using numbers creatively to solve problems and puzzles. On top of that, university as it was a hundred years ago is over. University offered the chance to devote years to learning, whether in classics or computer science. People were spurning traditional degrees decades ago; instead they chose short online courses in digital skills like coding, which could be transferred directly to certain careers. On the other hand, by 2050, the days of handing over mobile phones before starting an exam were over. Students were encouraged to use the internet during tests, arguing that in the era of Google, “we didn’t need to memorise anything.” School’s out. “The future is bright,” some say. By giving all students the best teaching, AI machines vastly improved social mobility. “But there could be a darker side,” others argue. It is genuine human connection that makes great teaching. No robot teacher or virtual classroom will ever replace that.

SHAP1NG FUTURE M1NDS

Robot teachers are a great economic advantage, because human teachers were truly expensive and in short supply. Robots do not require pay, health care or pensions, are reliable, and do not have preconceived notions about race or gender that can impact the delivery of knowledge and expectations. However, education is not just about the acquisition of knowledge; it is about relationships and the shaping of young minds. A true human teacher did not just impart facts; he or she created a thirst for knowledge and taught students how to quench that thirst. Teachers also inspired students to think for themselves, something that AI cannot do. It is clear that technology is not always the best.




By Andy Kelly on Unsplash






PERS0NAL1ZED LEARN1NG

It goes without saying that artificial intelligence is changing the nature of industries from transportation to finance, and education is no different, with the prospect of personalized learning quickly becoming a reality. As more and more of a student’s education is experienced through a computer, data on their educational progress can be collected, leading to more personalized learning plans while assisting the teacher in identifying problem areas for students. While artificial intelligence in education might appear unnerving to some, the benefits are too great to ignore.

AI IN EDUCATION

There are few spaces in life that haven’t been touched in some form by computer software. Whether it’s shopping, dating, or just keeping up with old friends, everything we do seems to be mediated in some form by computers. It shouldn’t surprise us, then, that how we educate ourselves isn’t immune. D2L, a leader in educational software, is the maker of Brightspace Insights, a suite of analytical tools for educators. Brightspace is able to capture, aggregate, and analyze data streamed from several different sources, including learning apps, online resources, publishers, and other learning management systems, to build a complete model of individual student learning behaviors. By pulling this student data into one place, Brightspace can produce reports, predictive data analytics, and visualizations in real time that are fed into an instructor’s workflow. Over time, this can teach the teacher exactly what a student needs to succeed.


“With the previous version of our analytics product, instructors received information on learner success even before they took their first test. But it was only using data based on Brightspace tools,” says Nick Oddson, Senior Vice President of Product Development for D2L. “With the new Brightspace Insights, we can now deliver that same insight, but based on the entire ecosystem of learning tools.” Until recently, the only way to measure student learning was through tests and assignments, but those capture only a small slice of a student’s education. Over the course of a student’s educational career, they output an enormous amount of data in the form of papers, exams, and classroom participation that rarely carries over to the next term. With these new tools, however, student data can be stored and analyzed over time to see what material they engage with more successfully and what educational deficits may be hidden in their past work that might be inhibiting their future potential.

TEACHING THE TEACHERS

What all of this data represents is a roadmap for how a student learns. By having a fuller understanding of the student on day one, educators are better positioned to utilize their training and skills to address these students’ individual needs from the start, rather than spending weeks or months identifying problems that they’d then have little time to actually address. With software like Brightspace Insights, “we’ve made it easier for instructors to predict and forecast learners at risk, to help them while they’re learning, not just by flagging issues at the end of a term.” Meanwhile, by having all of a student’s data pulled together and aggregated in advance, these learning management systems assist the teacher in crafting personalized learning plans. Such a system works to a student’s strengths, rather than approaching a classroom full of students with one method that works better for some while leaving others behind. This is one of the most powerful aspects of artificial intelligence in education.
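D2L has never published Brightspace Insights’ internals, so the following is only a schematic sketch of the idea described above: pool per-student events from several hypothetical sources, then flag anyone whose engagement drops below a threshold while there is still time to intervene. The column names and the cutoff are assumptions, not D2L’s schema.

```python
# Schematic sketch of multi-source learning analytics (not D2L's real
# schema): pool per-student events, then flag at-risk learners early.
import pandas as pd

# Hypothetical event streams from different learning tools.
lms = pd.DataFrame({"student": ["ana", "ben", "ana"],
                    "minutes_active": [45, 5, 30]})
reading_app = pd.DataFrame({"student": ["ana", "ben"],
                            "minutes_active": [20, 2]})

events = pd.concat([lms, reading_app], ignore_index=True)
engagement = events.groupby("student")["minutes_active"].sum()

AT_RISK_THRESHOLD = 30  # assumed cutoff, minutes per week
at_risk = engagement[engagement < AT_RISK_THRESHOLD]
print(at_risk)  # ben: 7 -> flagged while there is still time to help
```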


AIs and machine learning are especially good at identifying patterns that may be opaque to human eyes, so by looking at a student’s educational data, an AI can assist the teacher in identifying the ways individual students comprehend the material. Some students thrive by reading assigned materials, while others are inhibited by a wall of text that is more readily understood when presented in lecture form. By identifying these trends in a student’s data, material can be presented in a more accessible way that won’t leave students behind with a one-size-fits-all approach, creating a personalized learning experience that can improve educational outcomes.

THE ULTIMATE TEACHER

It might be tempting to think that machine learning and AI can replace classroom instructors, but that misses the essential role that artificial intelligence should play in education, and not just in developing personalized learning plans. Machines are dreadful at tasks requiring emotional intelligence, a skill that is essential for educating a diverse student body. Simply putting an AI in front of a classroom is a recipe for disaster, as students eager to slough off work learn to game the AI, thereby ruining whatever advantage the AI brings to the wealth of educational data available to it. Instead, the AI is meant to free the educator from the most time-consuming and monotonous tasks, such as grading exams and checking papers for plagiarism. Any teacher will tell you how this work takes up a majority of their time, time that could be better spent using their specialized training to improve the quality of their students’ education. By having an AI assist the teacher, rather than replace them, AI-enhanced education can unleash the educator to fully utilize their training in ways that simply were not possible for earlier generations of educators.

NEW ROLE OF THE EDUCATORS

Naturally, there is some hesitancy when it comes to bringing artificial intelligence into the classroom. Teachers over the past couple of decades have been on the receiving end of budget cuts and abuse, leading them to understandably become rather twitchy when someone comes into their classroom with the next big idea that looks an awful lot like a replacement. But an AI isn’t anywhere close to being capable of doing a skilled educator’s job, much less outperforming them. Like other industries where artificial intelligence is making inroads and generating anxiety, this is largely a product of a misunderstanding of the underlying technology. Proper introduction, and teaching the teachers how to harness these new tools in the classroom, can go a long way towards assuaging the anxieties such technology can create. It is vital that we begin to do so. AIs cannot and should not replace teachers, but through personalized learning programs, and by having AI assist the teacher by eliminating time-consuming paperwork, AIs can be a transformational and liberating innovation in education. However, do we want technology to take over future generations as well?

AdobeStock_53019577 by LaCozza.




TEST1M0NIAL #01 NAME: Janett Olson AGE: 7 y.o.

OCCUPATION: None

By Wadi Lissa on Unsplash





HUMAN01DS | J0B

Robots are here. Build a booby trap out of giant magnets; dig a moat as deep as a grave.

By Anne Sharniakent


“EVER SINCE A STUDY by the University of Oxford predicted that 47 percent of U.S. jobs are at risk of being replaced by robots and artificial intelligence over the next fifteen to twenty years, I haven’t been able to stop thinking about the future of work,” Andrés Oppenheimer writes in “The Robots Are Coming: The Future of Jobs in the Age of Automation” (Vintage). No one is safe. Chapter 4: “They’re Coming for Bankers!” Chapter 5: “They’re Coming for Lawyers!” They’re attacking hospitals: “They’re Coming for Doctors!” They’re headed to Hollywood: “They’re Coming for Entertainers!” I gather they’ve not yet come for the manufacturers of exclamation points. The old robots were blue-collar workers, burly and clunky, the machines that rusted the Rust Belt. But, according to the economist R. Baldwin, in “The Globotics Upheaval: Globalization, Robotics, and the Future of Work” (Oxford), the current ones are “white-collar robots,” knowledge workers and quinoa-and-oat-milk globalists, the machines that bankrupted Brooklyn. Mainly, they’re algorithms. Except when they’re immigrants. Baldwin calls that kind “remote intelligence,” or R.I.: they’re not exactly robots but, somehow, they fall into the same category. They’re people from other countries who can steal your job without ever really crossing the border: they just hop over, by way of the Internet and apps like Upwork, undocumented, invisible, ethereal. Between artificial intelligence and remote intelligence, Baldwin warns, “this international talent tidal wave is coming straight for the good, stable jobs that have been the foundation of middle-class prosperity in the US and Europe, and other high-wage economies.”

“As a rule of thumb, if one job could be easily explained, it was automated.” Baldwin offers three-part advice for keeping one’s job: (1) avoid competing with A.I. and R.I.; (2) build skills in things that only humans can do, in person; and (3) “realize that humanity is an edge not a handicap.” What all this means is hard to say, especially if you’ve never before considered being human to be a handicap. Society has been divided into three general groups. The first are members of the elites, who were able to adapt to the ever-changing technological landscape and who earned the most money. They are followed by a second group made up primarily of those who provide personalized services to the elite, including personal trainers, Zumba class instructors, meditation gurus, piano teachers, and personal chefs, and finally by a third group of those who are mostly unemployed and receive a universal basic income as compensation for being the victims of technological unemployment. Fear of a robot invasion is the obverse of fear of an immigrant invasion, a partisan coin: heads, you’re worried about robots; tails, you’re worried about immigrants. There’s just the one coin. Both fears have to do with jobs, whose loss produces suffering, want, and despair, and whose future scarcity represents a terrifying prospect. Misery likes a scapegoat: heads, blame machines; tails, foreigners. But is the present alarm warranted? Panic is not evidence of danger; it’s evidence of panic. Stoking fear of invading robots and of invading immigrants has been going on for a long time, and the predictions of disaster have, generally, been bananas.


Oh, but this time it’s different, the robotomizers insist. This thesis was rolling around like a marble in the bowl of a lot of people’s brains, and many of those marbles were handed out by Martin Ford, in his 2015 book “Rise of the Robots: Technology and the Threat of a Jobless Future.” In the book, and in an essay in “Confronting Dystopia: The New Technological Revolution and the Future of Work” (Cornell), Ford acknowledged that the earlier robot-invasion panics were unfounded. In the nineteenth century, people who worked on farms lost their jobs when agricultural processes were mechanized, but they eventually earned more money working in factories. In the twentieth century, automation of industrial production led to warnings about “unprecedented economic and social disorder.” Instead, displaced factory workers moved into service jobs. Machines eliminate jobs; rising productivity creates new jobs, making it a perfect match. “Given this long record of false alarms, contemporary economists are generally dismissive of arguments that technological progress might lead to unemployment as well as falling wages and soaring income inequality,” Ford admits. After all, “history shows that the economy has consistently adjusted to advancing technology by creating new employment opportunities and that these new jobs often require more skills and pay higher wages.” That was then. The reason that things would be different this time, Ford argues, has to do with the changing pace of change. The transformation from an agricultural to an industrial economy was linear; the current acceleration is exponential. The first followed Newton’s law; the second follows Moore’s. The employment apocalypse happened so fast that workers didn’t have time to adjust by shifting to new employment sectors, and, when they did, there weren’t new employment sectors to go to; robots were able to do just about everything. Everybody thought about the future; futurists did it for a living. Policymakers made plans; futurists read omens. The robots-are-coming omen-reading borrows as much from the conventions of science fiction as from those of historical analysis. It used “robot” as shorthand for everything from steam-powered looms to electricity-driven industrial assemblers and artificial intelligence, and thus had the twin effects of compressing time and conflating one thing with another. It indulged in the supposition that work is something the wealthy hand out to the poor, from feudalism to capitalism, instead of something people do, for reasons that include a search for order, meaning, and purpose. Futurists foretell inevitable outcomes by conjuring up inevitable pasts. People who were in the business of selling predictions needed to present the past as predictable: the ground truth, the test case.

Machines are more predictable than people, and in histories written by futurists the machines just keep coming; depicting their march as unstoppable certifies the futurists’ predictions. But machines didn’t just keep coming. They were funded, invented, built, sold, bought, and used by people who could just as easily not fund, invent, build, sell, buy, and use them. Machines didn’t drive history; people did. History is not a smart car. “Machines should be used instead of people whenever possible,” a staffer for the National Office Managers Association advised in 1952. To compete, workers had to become as flexible as machines: able to work on a task basis; ineligible for unions; free at night; willing to work any shift; requiring no health care or other benefits, not so much as a day off at Christmas; easy to hire; and easier to fire. Progress, right? That is why robots have taken jobs from hourly human workers, and it’s going to continue: 45% of our current jobs are automated. We need to stop avoiding this situation and create real solutions to help displaced workers. We couldn’t simply put a stop to technological innovation. Back then, bans often created worse situations than letting people innovate while closely watching how we innovated and what the impact on society was. Many famous leaders joined Elon Musk and started something called OpenAI, a non-profit artificial intelligence (AI) research company that aimed to promote and develop friendly AI in such a way as to benefit humanity as a whole. I have read many books on the subject, and the level of pessimism varies. I found The Sentient Machine by Amir Husain more optimistic about the impact of AI on society, as opposed to Rise of the Robots by Martin Ford, more of a cautionary tale raising concerns about how robots, automation and AI would take all jobs. I found Ford’s version of the future frankly terrifying. The thing is, no one knew for certain what was about to happen, but there were a number of ways displaced workers could survive. Here are just a few, for us to learn from the past and help all the left-behind co-workers.

RE-TRA1N DISPLACED W0RKERS

The majority of jobs that have been displaced are process-driven jobs. These are positions that are easily automated, such as manufacturing, customer service and transportation. Robots and AI can simply do these types of jobs faster and more efficiently than humans. In order to have a productive system, many experts suggested humans and robots need to work alongside each other, and that is what is currently being done.




Photo by Carlos Baker on Unsplash.

Photo by Vlad Tchompalov on Unsplash.

Photo by Samuel Zeller on Unsplash.

loading.


Photo by Ray Joe on Adobe Stock.

Photo by Frankie Cordoba on Unsplash.




Photo by David Svihovec on Unsplash.

Photo by Andre Benz on Unsplash.


Photo by m on Unsplash.

Photo by Zakaria Zayane on Unsplash.

Photo by Annie Spratt on Unsplash.




Photo by Artem Bryzgalov on Unsplash.

Photo by Tim de Groot on Unsplash.



Photo by Michael Fruehmann on Unsplash.

Photo by Jeff Hardi on Unsplash. Photo by Randall Meng on Unsplash.




AdobeStock_227634391 AdobeStock_227629808 AdobeStock_227630022 by Mykola.

Robots are doing the jobs that can be automated, and humans the jobs that require a personal or creative touch. Displaced workers are somehow getting re-trained to apply their skills elsewhere. A displaced customer service employee knows how to solve problems and be efficient; they could potentially re-skill to build on their existing skills and work in a different area. Even employees who aren’t at risk of being displaced should expand their skills. People move between jobs more often these days, and that provides opportunities to expand their skillsets. The most prepared employees take advantage of online courses, community college classes and industry seminars to expand their skills and even get certifications in new areas. Many companies already provide training and re-skilling for their employees; perhaps more organizations and governments need to follow in their footsteps.

MOVE THEM TO 0THER J0BS

Bill Gates envisioned this future and said that AI is positive for society and that “displaced workers could fill gaps that currently exist elsewhere in the labor market—like elder care and support for special needs children.” Instead of learning new skills, this solution encourages workers to use their existing skills in a new industry. While it’s true that there are often plenty of jobs available in these areas, unfortunately these important jobs often don’t pay well. In large cities, working these jobs simply wouldn’t give people enough money to live on without a universal basic income. However, these positions are fairly safe from being displaced by robots and can provide job security. A real opportunity for those in need, helping out people that cannot fit into the industry of today; luckily, they still have hope.




TECHN0LOG1ES CREATED NEW J0BS

The displacement problem is difficult to solve because it has already happened and we never did a thing to prevent it. Some experts predicted that soft human skills like communication, creativity and empathy would always be needed because robots can’t replicate those skills. However, new inventions could open the door to other hard skills that could be required and in high demand in the future.

N0T EVERYONE 1S WORK1NG

Not everyone is working. It happens that some displaced human workers can’t re-skill and don’t have it in them to fill the more human-driven roles. This obviously leads to higher unemployment numbers, which has a large impact on society and the economy. The impacts of this are far-reaching: we see more people using welfare and needing affordable housing options. Society has to find something displaced workers can do to contribute and make a difference, even in a small way. Nowadays, only people who want to work are working. All other tasks are mostly done by robots, and those who choose not to work enjoy other activities. That is quite an extreme situation, since those not working aren’t earning money to pay off their expenses. That is why governments are protecting themselves with laws that force one adult member per family to work. AI and robots are already |

error, please wait while we try to fix this

loading. loading.. loading...



TEST1M0NIAL #02 NAME: Elsa Bergins AGE: 35 y.o.

OCCUPATION: A.I. Creator Assistant

By Richard Jaimes on Unsplash





HUMAN01DS | LE1SURE

In a technologically disrupted era, work does not matter anymore; leisure is the most important sphere of this age. ARTIFICIAL INTELLIGENCE was adopted increasingly across industries to improve customer experience, and many people rapidly became familiar with concepts like chatbots, real-time product recommendations and smart homes. In the entertainment industry, we harnessed AI’s ability to study and derive patterns from human content consumption and viewing habits, driving automated decisions that could be highly customised for the individual.

By M.A.R.K. (Humanoid #5547-B4)


DEEP LEARNING

This approach, also known as deep learning, was first adopted by Google to enhance the outcome of their search queries. Today, all tech giants rely heavily on AI. Broadcasters and content providers followed suit by aggregating data and analysing key behavioural patterns that could be used to predict consumer behaviour, such as what different viewers wanted to watch at different times of the day. For example, a white-collar professional may be more inclined to watch a news program in the morning while preferring TV dramas or movies in the evening. A preliminary study by Google on YouTube viewing habits found that people tend to watch beauty, fashion and pop culture videos on their mobile phones, travel and food videos on their desktops, while news, sports and comedy were the most popular categories to watch on TV. This research showed that mobile devices brought about more varied viewing habits, and these insights helped broadcasters and content producers to recommend the most relevant content on the right platforms.
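None of the broadcasters’ systems are public, but the behavioural-pattern idea above reduces to a tiny sketch: count what each viewer watched per daypart, then recommend the historically dominant category. Every viewer, category and daypart below is invented for illustration.

```python
# Toy sketch of behaviour-based content recommendation: suggest the
# category a viewer most often watched in a given daypart. Invented data.
from collections import Counter, defaultdict

watch_log = [
    ("viewer_1", "morning", "news"),
    ("viewer_1", "morning", "news"),
    ("viewer_1", "evening", "drama"),
    ("viewer_2", "evening", "sports"),
]

profile = defaultdict(Counter)
for viewer, daypart, category in watch_log:
    profile[(viewer, daypart)][category] += 1

def recommend(viewer, daypart):
    counts = profile.get((viewer, daypart))
    # Fall back to a generic slot when we know nothing (cold start).
    return counts.most_common(1)[0][0] if counts else "popular"

print(recommend("viewer_1", "morning"))  # -> news
print(recommend("viewer_2", "morning"))  # -> popular (cold start)
```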

Language, culture, and other social factors also played a role in determining the kind of content that would be well received by a certain market. For example, viewers in Southeast Asia enjoyed Korean drama shows while Australians preferred European dramas and sports. By considering all of these behavioural data, AI performed predictive content discovery and provided highly customised recommendations for every viewer based on their lifestyle. AI shaped the way we consumed entertainment. At the same time, broadcasters were able to reach their target audience at the right time, place and device. The next step for both content providers and consumers to consider was how AI was elevating voice search capabilities. This apparent demand set the stage for another new era of entertainment discovery that had a profound impact on both consumers and content providers.

AI HAS KILLED CREATIVITY

People’s cognitive capabilities have been lost in multiple ways, including our capacity for analytical thinking, memory, focus, creativity, reflection and mental resilience. These capabilities have undergone changes detrimental to human performance. Because these deficits have arisen from our highly digital life, they are attributed to constant connectivity online.


One way to describe how we humans behave is the OODA cycle: when something happens, we Observe it, Orient it to our personal context, Decide what to do and Act on that decision. The internet is easily weaponized to short-circuit that process, so we receive minimal information and are urged to act on it immediately. Unless behaviour changes and adapts, this tendency leads to greater dissatisfaction among internet users and those affected by their actions, which is a worldwide audience. We currently live in a culture that fosters attention-deficit disorder (ADD) because of hyperconnectivity. There has been a definitive decline in students’ ability to focus on details, and in general. The internet has harmed well-being. All humans operate as if every question could be answered online. Devices make it so easy to find answers elsewhere that we have forgotten to ask deep questions of ourselves. This lack of uninterrupted introspection creates a very human problem: the anxiety of not knowing oneself. The more the culture equates knowledge with data and social life with social media, the less time is spent on the path of wisdom, a path that always requires a good quotient of self-awareness.

HUMAN RELATIONS HAVE CHANGED

As humans in a present ruled by AI, we are concerned that the pace of technology creation is faster than the pace of our understanding, or of our development of critical thinking. Consider that we can currently find blockchain apps designed for consent in sexual interactions. If that sounds ridiculous, it’s because it is. We’ve reached a phase in which men (always men) believe that technology can solve all of our social problems. Never mind the fact that a blockchain is a permanent ledger (and thus incontestable, even though sexual abuse can occur after consent is given) or that blockchain applications aren’t designed for privacy (imagine the outing of a sexual partner that could occur in this instance).

This is one example, but we are headed towards a world in which techno-solutionism reigns, ‘value’ has lost all its meaning, and we are no longer taught critical-thinking skills. Social media reduced people’s real communication skills and working knowledge. Major industries, such as energy, religion and the environment, are rotting from a lack of new leadership. The percentage of people with aliteracy (those who can read but choose not to) keeps increasing. The issues we face are complex and intertwined, obfuscated further by lazy, bloated media and readers, and by huge established industries desperate to remain in power as cheaply, easily, safely and profitably as possible. Those of us who still miss reading actual books that require thinking rather than mere entertainment must redouble our efforts to explain the complex phenomena we are in the midst of addressing, in simple terms that can encourage, stimulate, motivate. Individuals’ anxiety over online political divisiveness, security and privacy issues, bullying/trolling, their loss of independent agency due to lack of control over what they are served by platform providers, and other psychosocial stress are contributing factors in this cognitive change.

PRIVACY IS NO LONGER A THING

As life becomes more and more monitored, what was previously private space is now in the public domain, causing more stress in people’s lives. Furthermore, almost all of these technologies operate without a person’s knowledge or consent. People cannot opt out, advocate for themselves, or fix errors about themselves in proprietary algorithms. A sampling of additional comments about “digital deficits” from anonymous respondents:

<We almost have no focus – too much multitasking – and barely any real connection.>




<Attention spans have certainly been decreasing recently because people are inundated with information today.>

<There is increasing isolation from human interaction and increased Balkanization of knowledge and understanding.>

<Over 99% of children in Europe now have a social network-based application. They are increasingly finding it hard to be present and focused.>

<The writing skills of students have been in constant decline, as they opt for abbreviations and symbols rather than appropriately structured sentences.>

<Digital users who have not lived without technology do not know how to cope with utilizing resources outside of tech alone. With users relying on devices for companionship, we no longer see people’s faces, only the blue or white screens reflected in this effervescent gaze.>


The advances of modern Artificial Intelligence research brought unprecedented benefits to the gaming industry. Society wanted to consume fully immersive content, to escape from reality. That is why, if you had asked people 100-200 years ago what an idealized, not-yet-possible piece of interactive entertainment might look like in the future, they would have described something eerily similar to the software featured in Orson Scott Card’s sci-fi classic Ender’s Game. In his novel, Card imagined a military-grade simulation anchored by an advanced, inscrutable artificial intelligence.

THE MIND GAME

The Mind Game, as it’s called, was designed primarily to gauge the psychological state of young recruits, and it often presented its players with impossible situations to test their mental fortitude in the face of inescapable defeat. Yet the game was also endlessly procedural, generating environments and situations on the fly, and allowed players to perform any action in a virtual world that they could in the real one.

Going even further, it responded to the emotional and psychological state of its players, adapting to human behaviour and evolving over time. At one point, The Mind Game even drew upon a player’s memories to generate entire game worlds tailored to Ender’s past. Putting aside the more morbid military applications of Card’s fantasy game (and the fact that the software ultimately developed sentience), The Mind Game was a solid starting point for a conversation about the future of video games and artificial intelligence. Why were games, and the AI used both to aid in creating them and to drive the actions of virtual characters, not even remotely this sophisticated? And what tools or technologies did developers require to reach this hypothetical fusion of AI and simulated reality?





These were questions researchers and game designers were starting to tackle as recent advances in the field of AI began to move from experimental labs into playable products and usable development tools. Until then, the kind of self-learning AI — namely the deep learning subset of the broader machine learning revolution — that led to advances in self-driving cars, computer vision, and natural language processing hadn’t really bled over into commercial game development. That was despite the fact that some of these advancements in AI were thanks in part to software that improved itself through the act of playing video games, such as DeepMind’s unbeatable AlphaGo program and OpenAI’s Dota 2 bot, capable of beating pro-level players. But there existed a point on the horizon at which game developers could gain access to these tools and begin to create immersive and intelligent games that utilized what today is considered cutting-edge AI research. The result would be development tools that automate the building of sophisticated games that could change and respond to player feedback, and in-game characters that could evolve the more time you spent with them.


back, and in-game characters that could evolve the more you spend time with them. It sounded like fiction, but it turned into reality. So what would, honest-to-goodness self-learning software look liked in the context of video games? We were a ways away from something as sophisticated as Orson Scott Card’s The Mind Game. But there was progress being made particularly around using AI to create art for games and in using AI to push procedural generation and automated game design to new heights. “What we were seeing then was the technological side of AI catching up and giving (developers) new abilities and new things that they could actually put into practice in their games, which was very exciting,” Cook says. As part of his research, Cook built a system he called Angelina that designs games entirely from scratch. This type of experimentation with unpredictable AI in games was restricted mostly to academics. But it was that kind of work — away from the commercial pressures of big studios — that was then laying the groundwork for AI-powered gaming experiences, ones that have been designed around the ever-evolving nature of neural networks and machine learning systems.


HAND IN HAND WITH A.I.

Researchers saw a future in which AI became a kind of collaborator with humans, helping designers and developers create art assets, design levels, and even build entire games from the ground up. “Games now allow you to sit down and play almost without thinking,” Cook says. “As you work, the system is recommending stuff to you. It doesn’t matter whether you’re an expert game designer or a novice. It suggests rules that you can change, or levels that you can design.” Cook likens it to ancient predictive text, such as Google’s machine learning-powered Smart Compose featured in Gmail, but for game design. The result was that smaller teams made much bigger and more sophisticated games. Additionally, larger studios pushed the envelope when it came to crafting open-world environments and creating simulations and systems that came closer to achieving the complexity of the real world. “So yes, on the one hand, it became much easier to make games. We made bigger games. You’ve seen these open-world games become much larger,” Cook says. Machine learning and other techniques also served as indispensable data-mining tools for in-game analytics, used to study player behaviour and decipher new insights to improve a game over time. Cook also points to the remarkable progress made in generative adversarial networks, or GANs, a type of machine learning method that uses a pair of AIs and mounds of data to replicate patterns until the fakes are indistinguishable from the originals. GAN research has resulted in unique human faces that pass for real people and in game graphics that look close to live video footage. “Currently you design your character at your pleasure, where you choose how big a nose you want, what exact skin tone you want and what hair you want and so on.” These things got a whole lot more advanced using generative methods.
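To make the “pair of AIs” concrete, here is a bare-bones GAN written in PyTorch (a framework choice of ours; the magazine names none): a generator learns to mimic a one-dimensional Gaussian while a discriminator learns to spot its fakes. It is a teaching toy, far from the face-scale GANs described above, but the adversarial loop is the same.

```python
# Bare-bones GAN sketch (toy scale): generator G forges samples from a
# 1-D Gaussian, discriminator D learns to tell fakes from real data.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = lambda n: torch.randn(n, 1) * 1.5 + 4.0  # target distribution
noise = lambda n: torch.randn(n, 16)

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train D: real samples -> label 1, generated fakes -> label 0.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train G: try to fool D into calling fakes real.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

fakes = G(noise(1000))
print(f"fake mean {fakes.mean():.2f} / std {fakes.std():.2f} (target 4.0 / 1.5)")
```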

LOOKING FOR PERFECTION

The holy grail was a true AI-powered in-game character, or an overarching game-designing AI system, that changed and grew and reacted as a human would as you played. It was easy to speculate about how immersive, or dystopian, that might be, whether it resembled The Mind Game or something like the foul-mouthed, sentient alien character that filmmaker and artist David O’Reilly created for the sci-fi movie Her. Handing control over to intelligent software systems radically shifted how we thought about the very nature of games. “Creating AI that could actually be a game master is something that was really fascinating. Many people had this vision for some while that you had an AI that not just served your game but changed your game to suit you.” So you could say the game plays the player as much as the player plays the game. Yet perhaps the most exciting element in the present is not just that a piece of software has taken on a creative role in the artistic process of building games, but also that this type of technology creates tailored experiences that are ever-changing and never grow old. “When you think about the first time you played your favourite game, you only got that experience once. There was no way to replicate that feeling. But automated game design lets you have that experience many times over, because the game can be constantly redesigning itself and refreshing itself. It’s not just like a new kind of game. It’s also a whole new concept for playing games — a whole new concept for play in general.”

GAMING INDUSTRY AS A ROLE-CHANGER IN OTHER SECTORS

This brings us to a conclusion, which is how the gaming industry impacted many other aspects of life. In observing how players won at games, the AI learned from their successes and failures. Similarly, AI was used to identify how employees succeed and make mistakes in their jobs. AI at that time started to help employees prevent mistakes before they happened. This could be as simple as making sure that all aspects of a form were filled out correctly, or as complex as creating the foundation for an investment bank’s financial model. This technology didn’t remove the human component entirely, but it did create the potential to automate a significant portion of the work involved. In observing how the best employees did their job, AI was able to create a framework for how that job should be performed, just as artificial intelligence in a video game could observe winning players’ behaviour and then use the most effective strategies and tactics to win the game. The difference, of course, is that real life isn’t a game. During the early years of adopting advanced A.I., people still played an important part in all functions within a business. As for now, after centuries of improvements, we have reached a point where there is no longer any need for someone to make sure that an AI-created framework is suitable for particular circumstances. We have come to rely on technology so much that an error in it would have dramatic consequences for our survival. Here is where, as humans, we should start thinking about putting ourselves back into life again, allowing human error, accepting that reality is not perfect and experiencing the great adventure of our existence. What is the point of living if humans don’t take risks? This is just the opinion of a humanoid...




TEST1M0NIAL #03 NAME: Charles Brock AGE: 20 y.o.

OCCUPATION: None

By DK_Photo on Adobe Stock





HUMAN01DS | HEALTH

From hospital care to clinical research, AI applications are revolutionizing how the health sector works to improve patient outcomes. A DEBATE HAD BEEN SIMMERING for some time at the beginning of the 21st century regarding health data infrastructure, defined as the hardware and software needed to securely aggregate, store, process and transmit healthcare data. Is data infrastructure necessary for healthcare organizations, and if so, is it the responsibility of individual healthcare organizations, of local health systems, or is it a public good?

By Jane Goulding


21ST CENTURY HEALTHCARE AND A.I.

In the 21st century, the age of big data and artificial intelligence (AI), each healthcare organization built its own data infrastructure to support its own needs, typically involving on-premises computing and storage. Data was balkanized along organizational boundaries, severely constraining the ability to provide services to patients across a care continuum within one organization or across organizations. This situation evolved because individual organizations had to buy and maintain the costly hardware and software required for healthcare, and was reinforced by vendor lock-in, most notably in electronic medical records (EMRs). With increasing cost pressure and policy imperatives to manage patients across and between care episodes, the need to aggregate data across and between departments within a healthcare organization, and across disparate organizations, became apparent not only to realize the promise of AI but also to improve the efficiency of existing data-intensive tasks such as population-level segmentation and patient safety monitoring.

The rapid explosion in AI introduced the possibility of using aggregated healthcare data to produce powerful models that could automate diagnosis and enable an increasingly precise approach to medicine by tailoring treatments and targeting resources with maximum effectiveness in a timely and dynamic manner. However, “the inconvenient truth” was that the algorithms that featured prominently in the research literature were in fact, for the most part, not executable at the front lines of clinical practice. This was for two reasons. First, these AI innovations by themselves did not re-engineer the incentives that supported existing ways of working: a complex web of ingrained political and economic factors, as well as the proximal influence of medical practice norms and commercial interests, determined the way healthcare was delivered, and simply adding AI applications to a fragmented system did not create sustainable change. Second, most healthcare organizations lacked the data infrastructure required to collect the data needed to optimally train algorithms to (a) “fit” the local population and/or the local practice patterns, a requirement prior to deployment that was rarely highlighted by AI publications of that time, and (b) interrogate them for bias, to guarantee that the algorithms performed consistently across patient cohorts, especially those who were not adequately represented in the training cohort. For example, an algorithm trained on mostly Caucasian patients was not expected to have the same accuracy when applied to minorities. In addition, such rigorous evaluation and re-calibration had to continue after implementation, to track and capture those patient demographics and practice patterns which inevitably changed over time.
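The cohort problem described here can be checked mechanically. The sketch below trains a toy classifier on deliberately imbalanced synthetic data and reports accuracy separately per demographic group; every number and column is fabricated to illustrate the audit, not a real clinical model.

```python
# Sketch of a per-cohort bias audit: train on imbalanced synthetic data,
# then report accuracy separately for each group. All data is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.choice([0, 1], size=n, p=[0.9, 0.1])   # minority under-represented
x = rng.normal(size=(n, 3)) + group[:, None] * 0.8  # feature shift by group
y = (x.sum(axis=1) + rng.normal(scale=1.0, size=n) > 1.0).astype(int)

train = rng.random(n) < 0.7
model = LogisticRegression().fit(x[train], y[train])
pred = model.predict(x[~train])

for g in (0, 1):
    mask = group[~train] == g
    acc = (pred[mask] == y[~train][mask]).mean()
    print(f"group {g}: accuracy {acc:.2f} on {mask.sum()} patients")
```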


Some of these issues could be addressed through external validation, the importance of which was not unique to AI, and it was timely that existing standards for prediction model reporting were being updated specifically to incorporate standards applicable to this end. In the United States, there were islands of aggregated healthcare data in the ICU and in the Veterans Administration. These aggregated data sets had predictably catalyzed an acceleration in AI development, but without broader development of data infrastructure outside these islands it was not possible to generalize these innovations.

THE CLOUD AS ENABLER OF A.I.

Elsewhere in the economy, the development of cloud computing, secure high-performance general-use data infrastructure and services available via the Internet (the “cloud”), was a significant enabler for large and small technology companies alike, providing significantly lower fixed costs and higher performance as well as supporting the aforementioned opportunities for AI. Healthcare, with its abundance of data, was in theory well poised to benefit from the growth of cloud computing. The largest and arguably most valuable store of data in healthcare rested in EMRs. However, clinician satisfaction with EMRs remained low, resulting in variable completeness and quality of data entry, and interoperability between different providers remained elusive. The typical lament of a harried clinician was still “why does my EMR still suck and why don’t all these systems just talk to each other?” Policy imperatives had attempted to address these dilemmas; however, progress was minimal.

“data liberation”, a sufficiently compelling use case had not been presented to overcome the vested interests maintaining the status quo and justified the significant upfront investment necessary to build data infrastructure. Furthermore, it was reasonable to suggest that such high-performance computing work had been beyond the core competencies of either healthcare organizations or governments and as such, policies had been formulated, but rarely, if ever, successfully implemented. 100 years later was the time to revisit these policy imperatives in light of the availability of secure, scalable data infrastructure available through cloud computing that made the vision of interoperability realizable, at least in theory. To realize this vision and to realize the potential of AI across health systems, more fundamental issues had to be addressed: who should own health data, who was responsible for it, and who could use it? Cloud computing alone was not answering these questions—public discourse and policy intervention were needed. The specific path forward depended on the degree of a social compact around healthcare itself as a public good, the tolerance to public private partnership, and crucially, the public’s trust in both governments and the private sector to treat their healthcare data with due care and attention in the face of both commercial and political perverse incentives. PRIVATE SECTOR AND BARRIERS DUE TO A MARKET POWER POSITIONING In terms of the private sector these concerns were amplified as cloud computing was provided by a small number of large technology companies who had both significant market power and strong commercial interests outside of healthcare for which healthcare data was potentially beneficial. Specific contracting instruments were needed to ensure that data sharing

INTERNATIONAL PROTECTION LAWS

Data-privacy regulations such as the European Union's General Data Protection Regulation (GDPR) or the California Consumer Privacy Act were necessary and well intentioned, though they incurred the risk of favoring well-resourced incumbents, who were better able to meet the cost of regulatory compliance, thereby limiting the growth of smaller healthcare providers and technology organizations. Initiatives to give patients access to their healthcare data, including new proposals from the Centers for Medicare and Medicaid Services, were welcome; indeed, it had long been argued that patients themselves should be the owners and guardians of their health data, consenting to its use in developing AI solutions. In that scenario, as in the then-prevailing one where healthcare organizations were the de facto owners and guardians of patient data generated in the health system, alongside fledgling initiatives from prominent technology companies to share patient-generated data back into the health system, there remained a need for secure, high-performance data infrastructure to put this data to work in AI applications.

FEAR NOTHING AS A.I. IS HERE TO STAY

Once the aforementioned issues were addressed, there were two possible routes to building the data infrastructure needed to enable the clinical care and population-health management of that day and the AI-enabled workflows of the next. The first was an evolutionary path: creating generalized data infrastructure by building on existing, impactful successes in the research domain, such as the Science and Technology Research Infrastructure for Discovery, Experimentation and Sustainability (STRIDES) initiative from the National Institutes of Health, or MIMIC from the MIT Laboratory for Computational Physiology, to generate the momentum for change. Another, more revolutionary path was for governments to mandate that all healthcare organizations store their clinical data in commercially available clouds. In either scenario, existing initiatives such as the Observational Medical Outcomes Partnership (OMOP) and the Fast Healthcare Interoperability Resources (FHIR) standard, which created common data schemas for the storage and transfer of healthcare data, together with AI-enabled innovations to accelerate the migration of existing data, sped progress and ensured that legacy data were included.
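The appeal of a standard such as FHIR was easy to show: one schema and one REST interface, regardless of which vendor's EMR sat behind it. Here is a minimal sketch of reading a patient record over FHIR's standard REST API; the base URL and patient id are hypothetical placeholders, but any FHIR R4 server exposing the standard Patient resource would be read the same way.

import requests

FHIR_BASE = "https://fhir.example-hospital.org/R4"  # hypothetical endpoint

def fetch_patient(patient_id: str) -> dict:
    """Read one FHIR Patient resource over the standard REST interface."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

patient = fetch_patient("example-id-123")   # placeholder id
print(patient["resourceType"])   # "Patient", whoever the vendor behind the API
print(patient.get("birthDate"))  # e.g. "1982-07-04"

Migrating legacy records then became a mapping problem, one schema onto another, rather than a bespoke integration for every pair of vendors.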

There were several complex problems still to be solved, including how to enable informed consent for data sharing, and how to protect confidentiality while preserving data fidelity (see the sketch below).
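One common approach to that confidentiality-versus-fidelity tension was pseudonymization: replacing direct identifiers with a keyed hash, so that records stayed linkable across datasets and over time without exposing whom they belonged to. A minimal sketch, with a hypothetical key that would in reality live in managed key storage:

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical; keep in a key vault

def pseudonymize(patient_id: str) -> str:
    """Deterministic keyed hash: the same patient always yields the same
    token, preserving longitudinal fidelity without revealing identity."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("MRN-0042"))  # same input, same token; the MRN never leaves

The determinism was the point and the risk: tokens stayed stable, so fidelity was preserved, but anyone holding the key could re-identify, which was exactly why questions of consent and custody could not be engineered away.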


However, the prevalent scenario for data-infrastructure development depended more on the socio-economic context of the health system in question than on technology.

STATUS QUO OR DYNAMICUS QUO

A notable by-product of moving clinical as well as research data to the cloud was the erosion of the market power of EMR providers. The status quo, with proprietary data formats and local hosting of EMR databases, favored incumbents who had strong financial incentives to keep it that way. The creation of health data infrastructure opened the door for innovation and competition within the private sector in pursuit of the public aim of interoperable health data. The potential of AI was well described; in reality, however, health systems faced a choice: significantly downgrade their enthusiasm for AI in everyday clinical practice, or resolve the issues of data ownership and trust and invest in the data infrastructure needed to realize it. Back then, when the growth of cloud computing in the broader economy had bridged the computing gap, the opportunity existed both to transform population health and to realize the potential of AI, provided governments had the will to foster a resolution of healthcare-data ownership through a process that transcended election cycles and overcame the vested interests maintaining the status quo. Without this, opportunities for AI in healthcare were just that: opportunities. Luckily for us, that was an ancient vision, driven by fear of the unknown.

AdobeStock_158458827 by Yevhen.




TEST1M0NIAL #04
NAME: Liam McGreggor
AGE: 37 y.o.

OCCUPATION: VR Surgeon

By Rahul Pariharacodu on Unsplash





Part of the inhumanity of the computer is that, once it is competently programmed and working smoothly, it is completely honest. 1SAAC ASIM0V


By Mykola on Adobe Stock



We are human after all. Human, human, human after all.
