
GEEK GAZETTE ISSUE 20 AUTUMN 2018

COVER STORY : MUSIC AND LINGUISTICS

EDITORIAL

BIG STORY

INTERVIEW

Welcome to the Machine

Creating New Languages

Hack the Planet

acm Chapter

Association for Computing Machinery Indian Institute of Technology Roorkee


“What you wear, Matters”

Extra 5% discount for those who come along with this magazine.

Suiting and Shirting #110, B.T. Ganj, Roorkee

Readymade Wear

+91-1332-266595

10 Civil Lines, PMR Plaza, Prem Mandir Road, Roorkee

atams.roorkee@gmail.com

+91-1332-270008


CONTENTS

The Plight of the Fallible
Unrequited Affections
What is !QC
Designing Languages
Hack the Planet with INFOSEC IITR
Huge, next-level brain thingy
No More Men‽
Gut Feeling
Music and Linguistics
War of Words
Picasso's Nightmare
Cogito Ergo Sum
Falling Down to Earth
To Infinity and Beyond
Parallelism and Chaos
Welcome to the Machine


GEEK SPEAK

If our world's a blank, crisp, white canvas, then its music and languages, certainly, are the ablaze pastels and jazzy acrylics that lend it a spirited texture. They form those inevitable facets of our lives without which imagining a civilization is next to impossible. But given our potpourri of music and languages, we need not put our minds through that strain. Appreciating this premise, we as Editors of Geek Gazette bring forth in our current issue the theme 'Music and Linguistics'. Ever wondered what goes on in the mind of a linguist? How does one come up with a whole new dialect of one's own, when the process is a complex labyrinth of vocabulary, grammar, and semantics? In our article Creating New Languages, we throw light on this intricate task through the medium of some well-known conlangs of pop culture. Our cover story, Music and Linguistics, on the other hand, draws a comparison between music and language, taking in their use, structure, cognitive demands, and the way meaning is derived from the two. War of Words is our attempt to bring out the "friction" that exists between two seemingly different fronts of literature: poetry and lyrics. Like our previous issues, this one too stays loyal to, but is not limited by, the thematic content. What is !QC and Parallelism and Chaos uncover the latent truths behind the promising field of quantum computing, with the latter also surfacing the pandemonium associated with parallel computation. To Infinity and Beyond hovers over the eccentric realm of infinity, while some exotic ideas are brainstormed under The Next-Level Brain Thingy. Given a choice, would you replace your hand, an arrangement of flesh and bones, with a mechanized prosthetic one?
Your answer may be a vehement no or a reluctant yes to this esoteric question; whichever it is, you would surely think again after walking through the article Welcome to the Machine. Taking a different road, we have also incorporated ideas on human behavioural traits and psychology. The articles Unrequited Affections and The Plight of the Fallible look into an entirely different category of human relations and intelligence. The eye-popping titles Gut Feeling and No More Men embody some obscure biological phenomena. For the interview section, we had a long heart-to-heart with the InfoSec IITR team, who deciphered our various queries on information security and gave us insights into their much-lauded team's success, ideals, and inception.

Team Geek


THE PLIGHT OF THE FALLIBLE

Ever since Darwin first proposed his theory of evolution, people have tried to explain a lot of societal trends and phenomena using the phrase 'survival of the fittest'. Some have even used it to justify their otherwise irrational bigotry towards a particular physical or biological trait. In an effort to better understand this changing society of humans, the expression has also been applied to the social peculiarities of individuals. Being part of a particular clan might ease someone's existence in general. Looking at the times, another important fitness class that can enhance one's lifestyle is psychological. Possessing a high intelligence quotient is a very efficacious weapon for securing better resources and opportunities. To discern these variations better, let's wind our clocks back to the 1950s. Being ethical, dedicated, hardworking, and well-groomed escalated someone's chances of landing a job. Few employers looked for 'smart' employees, or at best looked for high SAT scores, which have since become a pivotal part of many job requirements. Zooming forward to the 2010s, the job market is filled with profiles expecting individuals with validated high intelligence in the form of SAT scores, IQ test scores, or educational degrees. The appearance of robots and software has simultaneously led to the disappearance of low-skilled jobs where repetitive, monotonous physical labour is needed. The number of people required to wait tables, manage shops, move goods between assembly lines, fill kiosks, drive vehicles, and so on has decreased drastically. Even the job profiles that have recently started to expect educational achievements, mostly in the form of degrees, have not become any harder to perform.

Apart from job prospects, this selection is also visible in other fields. Affinity towards intelligence is deep-rooted within our lifestyles, where the most basic form of disapproval is stating the opposing person or idea to be stupid. Smart people are hunted for advice, decision-making, and relationships. Popular entertainment media and art openly mock stupid people and their decisions while, on the contrary, worshipping intelligence to the extent of a superpower!

If this trend continues, we might develop new social classes on the basis of IQ, or start trolling influential people every time they use the S-word on someone. Even though the latter seems more likely, given our fundamentally hypocritical goal of equality, an amalgamation of both seems a convincing prediction. Working towards decreasing the gap between the classes on the basis of cognitive abilities is becoming progressively necessary, as we are subconsciously increasing it. As of now, very few steps have been taken to increase the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience in young children. The major step would be to take all kids, right from a very early age, to schools and programmes working towards cognitive and emotional development in an effective manner. One obstacle is to break the correlation between cognitive deficit and economic deficit and make sure the conventionally privileged do not get an unfair advantage, as they mostly do. Another change we should strive for is to not create any social difference on the basis of intelligence. After all, Albert Einstein, considered to be one of the most intelligent humans, is said to have remarked, "The measure of intelligence is the ability to change."


UNREQUITED AFFECTIONS

BBC presenter, newsreader, and journalist Jill Dando, upon her demise, was mourned by thousands who had known her only through the media. In reality, what had Jill Dando done, apart from hosting a TV programme and reaching out to the masses through the media? In the course of our lives, we've all experienced a sense of fascination and infatuation with public figures. As it turns out, this is a pervasive and ubiquitous phenomenon in our society. One of the striking characteristics of newer forms of mass media, television, social networking applications, and the like, is that they create an illusion of face-to-face interaction between the audience and the performer. This might put individuals under a cloak of illusion that they fully comprehend the personality and are acquainted with the idiosyncrasies of their favourite performer. In reality, it is the image these big shots have created, bearing little resemblance to their real persona, that people come to accept as their real selves. The epiphany of the existence of such interactions dawned on Donald Horton and R. Richard Wohl back in 1956, when they coined the term 'parasocial relationship' to describe such one-sided, unrequited associations. In their paper titled Mass Communication and Para-Social Interaction: Observations on Intimacy at a Distance, they stated, "The conditions of response to the performer are analogous to those in a primary group. The most remote and illustrious men are met as if they were in the circle of one's peers; the same is true of a character in a story who comes to life in these media in an especially vivid and arresting way." Back in their time, Horton and Wohl noticed a significant change in the way media personas relayed their ideas and information to the general public. For instance, American television personality Dave Garroway looked directly into the lens of the camera, without addressing his audience as "ladies and gentlemen", knowing that his audience extended to the proletarians watching from their homes, as opposed to just a handful seated in front of him. Nowadays, the presence of parasocial relationships is more palpable than ever before. YouTube sensations, Instagram blogging, and other forms of social networking have raised parasocial interaction to a whole new level. PewDiePie, Jacksepticeye, Troye Sivan, and Jenna Marbles are a few of the many YouTubers who personally engage with fans and convey their 'love' to them. Nearly 62% of Americans aged 13-24 claim to love YouTubers because they are easy to relate to and make them feel good about themselves. Often, personalities formulate special lingo and names for their fan-bases: fans of Beyoncé are popularly called the 'BeyHive', and those of Justin Bieber, 'Beliebers'. In such cases, the bond of intimacy is nothing more than an illusion, and reciprocity non-existent. If we were to dig deeper into this, we'd realize that we've been passively surrounded by TV shows that promoted


parasocial relations since time immemorial. Jennifer Barnes, a psychologist and writer, made a conservative estimate in 2012 of how much time people had invested in the Harry Potter books and movies, assuming that only half the books bought had been read, each in three hours, that no one ever read the same book twice, that no two people read the same book, and that the movies had only been watched in theatres. Interestingly, the time spent came to a whopping 235,000 years! In another survey, she interviewed 134 participants, asking them how devastated they would feel if a fictional character they loved died, and if an acquaintance died.

Astonishingly, female participants reported more grief at the death of a fictional character than at that of an acquaintance, while males reported no significant difference between the two; in neither group were participants more devastated by the death of the acquaintance. A 2008 study set out to analyze 'social facilitation', the phenomenon that makes people perform better on simple tasks and worse on complex ones when they're around others. Researchers wondered: would social facilitation occur when people viewed images of their favourite television characters? Sure enough, it did; the more they liked the character, the more their performance on various tasks reflected that the character might as well have been in the room with them. Likewise, studies have shown that people anticipate the same negative emotions during "parasocial breakups", that is, a character leaving their favourite show, as during the end of a real friendship. Hence, we can safely conclude that fictional stories are extremely efficient at engaging emotions. It doesn't take a genius to understand that parasocial relations are being used as a tool to control mass behaviour; due to a lack of proper awareness, poor people are coerced into believing in leaders who are popular among the majority. The target audience is often adolescents, who have impressionable minds and are naïve enough to take things at face value. Because people invest so much emotion in these bonds, influential figures can foster affection and later exploit it for monetary gain. It is off-putting and disturbing when such deliberate manipulations take place so cynically.

Having said that, one might hastily conclude that there is nothing beneficial one can extract from a parasocial relationship. However apparent the negative repercussions of parasocial relationships, we cannot ignore their positive facets. People with low self-esteem often use parasocial relations as a mechanism to become more like the person they admire by emulating them. Such persons might not have been able to do so with the assistance of a real-life relationship that constantly reminds them of their deficiencies. Also, the seriously ill benefit from parasocial relations by watching shows like Ellen and Oprah; they find in them a friend with whom they can laugh and relate, and rejuvenate themselves. Since reading fiction also increases the empathy of readers and makes them care more about the people around them, leveraging the affection for fictional characters and transferring it to the real world is one way people can use parasocial relationships to better society. The Harry Potter Alliance is one such organization that promotes activism through the power of fiction. From blind obsession and idealization of public and imaginative figures, to working on a poor self-image, propagating social activism, and an open mind, it is undoubtedly the individual's choice how they utilize parasocial relationships. There is a very thin line that separates abusive from healthy parasocial relations, which some agents of the media often use as a skipping rope. Like any other phenomenon in society, parasocial relationships need to be dealt with carefully. Hence, being mindful of the kind and amount of media one consumes would lead to healthy parasocial relationships that benefit both the individual and society as a whole.


WHAT IS !QC



With the world advancing towards the death of Moore's Law (the observation that transistor counts, and with them computing power, roughly double every two years thanks to hardware miniaturization), researchers are constantly looking for better computing paradigms that can withstand the booming demand for increased computing power. Of the many proposed solutions, the one which appears to hold the highest potential is quantum computing, as it marks the shift from classical theories towards modern physics, taking into account the effects of computation at the smallest scales. But this paradigm comes with its own set of rules and challenges, both of which need to be identified and addressed through a modern way of thinking.


Quantum computing is obviously different from classical computing. While the basic logic gates of classical computing are the AND, OR, XOR, and NOT gates, the set of basic gates of quantum computing is almost completely different, because unlike classical gates, quantum gates, and hence all quantum operations, need to be reversible in nature. Reversibility implies a unique input for every output, which leads to very different (and at first sight absurd) kinds of algorithms in the QC world. As an example of a reversible gate, consider the Controlled-NOT gate, which takes two inputs (X, Y) and returns the output (X, X XOR Y); this mapping is invertible (in fact, it is its own inverse).
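The reversibility of the Controlled-NOT gate can be checked in a few lines. The sketch below treats it purely as a classical operation on bit pairs (a toy illustration, not a quantum simulation):

```python
# Minimal classical sketch of the Controlled-NOT gate on a bit pair (x, y).
def cnot(x: int, y: int) -> tuple[int, int]:
    """Return (x, x XOR y): flip the target y iff the control x is 1."""
    return x, x ^ y

# Reversibility: applying CNOT twice recovers the original input,
# so the gate is its own inverse and no information is lost.
for x in (0, 1):
    for y in (0, 1):
        assert cnot(*cnot(x, y)) == (x, y)
```

Contrast this with a classical AND gate: from the output 0 alone you cannot tell whether the input was (0, 0), (0, 1), or (1, 0), so AND is not reversible.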


Quantum computing derives its real strength from the famous superposition and entanglement principles of the quantum world. While a classical byte (8 bits) can represent only a single state at a time, a quantum byte (8 qubits) can be in a superposition of 2^8 (= 256) states at a single instant. Algorithms need to be designed so that they can efficiently exploit this 'quantum parallelism' to apply the same operation to different states in one go. This leads to a very common misconception about QC: that we can compute and read off the results of calculations on different states simultaneously, which is only half true. Quantum computing is not parallel processing. We can prepare a superposition of states and perform the same computation on all of them simultaneously, but the daunting task is to extract a meaningful result. Measurement is the crucial step: while the qubits can be in a superposition of states, measurement is classical in nature and yields only a single state, with a certain probability. The key task in designing QC algorithms is to push the probability of a desired output to (almost) 1, or to ensure that only meaningful outputs occur with non-zero probability.
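The state-vector picture above can be sketched classically (an illustrative toy, not a real quantum simulator): an n-qubit register holds 2^n amplitudes at once, yet a measurement returns a single basis state, with probability given by the squared magnitude of its amplitude.

```python
import random

# Toy state-vector sketch: an n-qubit register is a list of 2**n complex
# amplitudes; measuring it returns ONE basis state, chosen with
# probability equal to the squared magnitude of that state's amplitude.
def uniform_superposition(n_qubits: int) -> list[complex]:
    dim = 2 ** n_qubits
    amp = 1 / dim ** 0.5          # equal weight on every basis state
    return [amp] * dim

def measure(state: list[complex]) -> int:
    probs = [abs(a) ** 2 for a in state]
    return random.choices(range(len(state)), weights=probs)[0]

state = uniform_superposition(8)   # a "quantum byte": 2**8 = 256 states
print(len(state))                  # 256 amplitudes held at once...
print(measure(state))              # ...but one measurement yields just one
```

This is exactly why naive "try everything at once" intuitions fail: the 256 amplitudes exist, but a single run hands back one outcome, so algorithms must steer probability toward the answer before measuring.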

Measurement is the only part of a quantum algorithm that is classical in nature and hence irreversible. Once measured, all the information stored in the superposition of different states is destroyed forever. This is because "measurement" actually means measuring the spin of an electron or the polarisation of a photon, and one can find (i.e. measure) the electron in only a definite spin, or a photon in only a definite polarisation. It is exactly like the Schrödinger's Cat thought experiment, in which the cat is simultaneously dead and alive (in superposition) until the box is opened and checked (measured), after which the cat is either only dead or only alive. This aspect of measurement leads to a very important application of QC: communication. To obtain the information being sent, an adversary needs to measure the state in a certain basis. But once measured, the information is changed, as it is no longer in a superposition of various states but in the measured state alone. Therefore, any malicious activity on the network can be identified by checking the state of the information.

Quantum computing has led to the creation of 'efficient' solutions to a few problems considered very difficult for classical computers. The standard encryption scheme of the internet (RSA) depends on a hard problem (called 'factoring') which cannot be solved efficiently by a classical computer; a quantum computer, however, can solve it efficiently, breaking our current security until we switch to 'quantum-safe' methods, which are currently a work in progress. But it has not yet been theoretically proven that quantum computing is strictly more powerful than classical computing; such a proof would be nothing less than a breakthrough in the history of computer science. There are also many problems for which quantum computing does not provide any speedup over its classical counterpart: for most problems, there is not yet any quantum algorithm significantly faster than the best classical one. Quantum computing is definitely not the ultimate solution to all our computational problems.

Quantum computing is the future, but that future also has classical computing in it.

It is crucial to understand the limitations of a resource to realize its full potential. Just as the advent of quantum physics did not result in the death of classical physics, quantum computing does not mean the death of classical computing. Quantum processors can perform certain tasks better than classical ones, and while it would not be wrong to replace all classical processors with quantum ones, it is unnecessary. We do not expect to replace classical processors with quantum ones, but to see them working synchronously and harmoniously with each other in a single computer. It is only after we understand the true nature of quantum computation, what it is and what it is not, that we can expand its horizons into a multitude of fields like cloud computing, machine learning, and many more. In conclusion, we can certainly say that quantum computing is !(complete) yet, but definitely !(boring).
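The 'factoring' problem that RSA's security rests on can be felt concretely with a toy sketch: classical trial division takes on the order of sqrt(N) steps, which is exponential in the number of digits of N, while Shor's quantum algorithm factors in polynomial time. A minimal, illustrative sketch:

```python
# Toy classical factoring by trial division: the naive approach whose
# slowness RSA leans on. It needs on the order of sqrt(N) steps, which is
# exponential in the bit-length of N; Shor's quantum algorithm, by
# contrast, factors in time polynomial in the bit-length.
def trial_division(n: int) -> int:
    """Return the smallest prime factor of n (n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

print(trial_division(3233))  # 3233 = 53 * 61, a textbook-sized "RSA modulus"
```

For a 15-digit modulus this loop already needs tens of millions of iterations; real RSA moduli run to hundreds of digits, which is why breaking them classically is considered infeasible.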


DESIGNING LANGUAGES

“Vedui’il’er! Lle n'vanima ar' lle atara lanne”

This melodic ovation is assured to sweep the audience off their feet and fling them swooshing to the Grey Havens. Its English translation, however, isn't so canorous: "Greetings everyone! You're ugly and your mother dresses you funny." Belonging to the mystical dialect of the Elves of Middle-earth, the phrase is an example of an intricate and beguiling art form: conlanging.


According to the Oxford dictionary, a conlang, short for constructed language, is a language that has been artificially created. The art of creating conlangs is known as conlanging. There can be various purposes behind the creation of languages, be it pleasure or passion, and distinct motives lead to multiple categories of classification, the most popular and alluring being fictional languages. Created to be uttered by the inhabitants of a fictional setting such as a movie, book, comic, video game, television show, or toy, fictional languages have set, and continue to set, the stage for lingual masters to showcase their dexterity with words. The most prominent fictional languages in existence are probably the Elvish tongues developed by J.R.R. Tolkien for his fictional world, Middle-earth. In his works, including The Lord of the Rings, The Hobbit, and The Silmarillion, Tolkien worked religiously to develop his imaginary world in as much detail as possible. The languages are an integral part of the different cultures that are pivotal components of Middle-earth. From a very young age, Tolkien loved to study languages, and he developed his first conlang at the age of thirteen. It is due to his fondness for languages and his devotion to building an entirely new world that he took to creating these languages from scratch and developed his own scripts. He created around twenty languages in all, of which the two most extensive are Quenya and Sindarin. The use of inflection, i.e. modification of a word with a prefix, suffix, or infix to express different grammatical categories, in Finnish, one of Tolkien's favourites, gave rise to Quenya. Sindarin, on the other hand, draws inspiration from Welsh phonology. Both languages use a fictional script devised by Tolkien himself, called Tengwar. The detailed peculiarities of these languages, along with others, are profoundly important to the depth that the meticulous artist poured into his works.

Contrasting with these well-developed languages is Minionese, the language spoken by the Minions, in the sense that it is mostly gibberish. Irrespective of the fact that the language is a combination of frivolous terms, phrases, and noises, it is possible to fathom the Minions and follow their conversations. Its creator, director Pierre Coffin, who also voices the Minions in the Despicable Me series and Minions, admitted to having used random words from


languages across the world. The words are used so that the dialogue sounds sterling and funny without generating a specific pattern, grammar, or meaning. This multilingualism also suits the fact that the little round-headed yellow beings have been around forever, serving masters from all over the world. Even though the language lacks meaningful sentences, it is possible to understand the "Banana Language" because it is strongly associated with the body language, tone, and visual context presented by the director. That 'popoy' means a toy, or that the Italian word 'gelato' means ice cream, is coherent and lucid even to someone with no knowledge of Italian because of the way the words are used in the movie.

Even though Minionese is not a fully formed language, it is still an artistic formulation. The case is quite similar when an artist creates a slang, as Anthony Burgess did in his novel A Clockwork Orange. The novelist, also a linguist, created Nadsat, a fictional cryptic register used by the teenage narrator and his 'droogs'. The name Nadsat itself comes from the Russian suffix of the numbers from 11 to 19, the equivalent of '-teen'; similarly, 'droog' is Russian for 'close friend'. With its origin in the Russian language, the fictional argot also borrows heavily from Cockney rhyming slang, the King James Bible, German, and some words fabricated by the polyglot Burgess himself. Many words are derived by blending multiple words or by clipping others. The author uses the argot to depict the antihero Alex's indifference to societal norms (he is perfectly capable of speaking proper English) and to represent the youth subculture in the dystopian world of his book. Another motive was to prevent his work from becoming dated, by avoiding the contemporary modes of speech prevalent at the time. One good example is the use of 'ptitsa' in place of 'girl'; it translates to 'bird' in Russian, which is itself a common British slang term for a young woman.

Another utterly unique disposition, altering an existing language rather than devising an entirely new one, is presented by George Orwell in his literary masterpiece Nineteen Eighty-Four. In his dystopian novel, Orwell stages an altered form of regular English called Newspeak. In the novel, the language is controlled by the ruling party of the totalitarian state of Oceania. Characterized by limited vocabulary and restricted grammar, Newspeak provides a captivating intersection between psychology and linguistics. The aim of the state is to suppress the freedom of thought, individualism, and self-expression of the inhabitants of the fictional anti-utopian state by fixing the usage of suffixes and prefixes. This is achieved by removing all synonyms and antonyms, so that 'bad' becomes 'ungood' and 'warm' becomes 'uncold'. Deciding which antonym is to be used, that is, 'ungood' versus 'unbad', is in the hands of the Party. While 'un-' is added for antonyms, 'plus-' and 'doubleplus-' create the emphatic and superlative effect. Similarly, the suffixes '-ful', '-ed', and '-wise' are used for adjectives, the past tense, and adverbs respectively. Newspeak intends not only to limit the thoughts of the populace to conform to the ruling party and Big Brother, but to destroy their ability to conceive of any other point of view.

Conlanging is an art with a wide spectrum. It can be used to satisfy a particular purpose, to ornament the process of fulfilling that purpose, or just for the unexplainable satisfaction associated with most art forms. The Language Creation Society (LCS) is an organization for conlanging experts as well as conlanging wannabes, whose primary purpose is the promotion and furthering of the art, craft, and science of language creation (conlanging) through conferences, books, journals, outreach activities, and other means. There is also a subreddit dedicated entirely to this art, r/conlangs, and Omniglot serves as an online encyclopedia of languages, conlangs included. The extent of resources available to start or contribute to the craft is tremendous. So, whether it is for pretty polly or filly, working with languages is a doubleplusgood and papaya experience.
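Newspeak's affix rules are regular enough to fit in a few lines of code. The Python sketch below applies the 'un-', 'plus-', and 'doubleplus-' rules; the word table is illustrative, not Orwell's canonical vocabulary:

```python
# Toy sketch of Newspeak's affix rules; the word table below is
# illustrative, not Orwell's canonical vocabulary.
OPPOSITE_ROOT = {"bad": "good", "warm": "cold", "dark": "light"}

def negate(word: str) -> str:
    """'un-' plus the Party-approved root: 'bad' -> 'ungood'."""
    return "un" + OPPOSITE_ROOT.get(word, word)

def intensify(word: str, double: bool = False) -> str:
    """'plus-' strengthens a word; 'doubleplus-' strengthens it further."""
    return ("doubleplus" if double else "plus") + word

print(negate("bad"))             # ungood
print(negate("warm"))            # uncold
print(intensify("good", True))   # doubleplusgood
```

Note that the choice baked into OPPOSITE_ROOT, mapping 'bad' to 'good' rather than keeping 'bad' as a root, is exactly the Party's prerogative the novel describes.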



HACKING THE PLANET WITH

INFOSEC IITR InfoSec IITR is IIT Roorkee's very own group of information security and hacking enthusiasts. They have achieved remarkable success in various CTF events and competitions since their inception three years ago. Geek Gazette had the pleasure of interviewing the team for an inside look at its workings. For obvious reasons, the members' names are not revealed.


GG: Let's start with a bit of the group's history. How did it come into existence, and what were the ideals behind it?
IS: Back in 2015, some seniors got together to participate in CTF competitions, and they were quite good. They then prompted others to start InfoSec as a separate group for the campus as well. These founding members introduced the group to the rest of the campus mostly through in-person interactions. A few of us interested people from Rajendra Bhawan used to go to Rajiv for regular meetups, where they would point us to resources and guide us through security challenges. So that's basically how it started. We participated in the meetings and CTFs, and were added to a Slack group for communication. I guess our ideals are mainly promoting knowledge about information security and a culture of openness and unrestricted information, which is reflected in the fact that this is an open group and we don't have any recruitments. Anyone can join any meeting at any point of time in the year.

GG: You guys are obviously privacy advocates. How much do you rely on resources like Google and Facebook? Or do you use alternatives which are better?
IS: There are alternatives which are better. There is Twitter's alternative, Mastodon, which anyone can host. But these alternatives fall short with regard to usability and outreach. Not everyone will migrate to that Twitter clone, and if not everyone, then it will just be a bubble of privacy-focused individuals talking to one another. We need the companies that have built the widely used platforms, Google, Facebook, Twitter, to have a certain sense of responsibility. They should be concerned about the user's privacy, while the rest of the tech community should come up with privacy-focused tools that do not prove to be a greater hurdle for the layman. Although it is probably an unrealistic dream.

GG: What is the process behind tackling an insecure system? What are you guys on the lookout for, and what do you do after finding something meaningful?
IS: We search for vulnerabilities while analysing the system. It is very much possible to exploit a vulnerability and get access to data, but upon finding any such security lapse, our first step is to report the issue to the administrator of the system. If the administrator does not


do anything in terms of securing their data, we deliberately exploit the system (*laughter*). Not with malicious intentions, but so that it gets noticed and rectified.

GG: So, as a complete beginner, how should one start to learn more about this field, i.e. tackling insecure systems?
IS: You cannot exploit or find vulnerabilities in a system unless you completely understand how that website or system works. A beginner must understand how programs run and how network protocols work, and that is not a simple task. We ourselves have not completely understood all of it, in spite of how much we want to. Starting from the bottom is important, and it takes a lot of effort to gain enough knowledge to start finding security lapses. But perhaps that is why infosec is a very diverse field, wherein you'll get to learn about software development, networks, operating systems, low-level architectures, etc.

GG: What are Capture The Flag (CTF) events, and what role do they play in infosec?
IS: CTF events are basically gamifications of real-world security challenges. They offer a safe space for people to practice their skills in an arena where innovative exploits are encouraged, unlike in the real world, where hackers might be forced to operate secretively. The bugs and security vulnerabilities in CTF challenges are often similar to the ones found in the real world. However, while attempting a CTF challenge one knows that a vulnerability definitely exists, whereas there is no such assurance in real-world scenarios.

GG: InfoSec has been known to participate in several CTF competitions, and you guys recently placed first in CSAW CTF '17. Any memorable experiences?

IS: The experience was as good as any other CTF. It is, unfortunately, one of the only CTFs in India with an onsite round. Microsoft and Deloitte used to organize CTFs a few years back, but they've stopped. We have won the CSAW CTF for two consecutive years, and it's now pretty much the only onsite CTF organized in India for us to look forward to. This year as well, we will be going to IIT Kanpur from the 8th to the 10th of November.

GG: Is there a lack of academic focus on InfoSec in our college? Are there any plans to address this?

IS: There is an elective course on Information Security in the CSE Department. It's mostly based on crypto, but the practical aspect is missing at our college. Other colleges have courses like Modern Binary Exploitation (RPISEC) and the Binary Bomb Lab (CMU). It is probably due to the existence of such courses that teams from these universities perform better in CTFs.

GG: What should be a beginner's first step towards pursuing InfoSec?

IS: Firstly, as with all other fields of interest, this also requires curiosity, an urge to learn more, and letting the passion in you strive further. There may be hurdles in the process, but you mustn't give up. Secondly, ask questions, every time. InfoSec is an open group where anyone can ask questions, and we encourage people to be inquisitive. Learning is obviously the primary objective, not winning CTFs.

GG: Computing and hacking attract many aspirants, partly because of their portrayal in popular culture. So what do you guys know about ‘hacker culture’, and any suggestions for people who want to dive into this culture as well?

IS: Yes, hacker culture is something we really like to indulge in. There is undoubtedly some overlap with the cyberpunk, neo-noir, and sci-fi genres. Popular TV shows include Mr. Robot and Black Mirror, and popular movies include Hackers (1995), Sneakers (1992), the Matrix series, the Blade Runner series, etc. Books like “Ghost in the Wires: My Adventures as the World's Most Wanted Hacker” and “Hackers: Heroes of the Computer Revolution” detail the origins of the hacker culture, and others like PoC||GTFO, while deeply informative, still manage to preserve that culture.

GG: That about wraps it up. Thanks for the great interview; we hope you find success in your upcoming CTFs and other ventures!


HUGE, NEXT-LEVEL BRAIN THINGY

On the scale of the Universe, humans are so hopelessly minuscule that the tiny human brain cannot even begin to comprehend it. And yet, from the time they first appeared, humans have always looked outward and tried to expand and explore. The day the Earth is well and truly conquered is not far, and humanity will then turn its eyes upwards, to the skies. Quoting Star Trek: “Space, the final frontier”. There is no doubt that someday humans will be a space-faring species, and once they reach the stars, asteroids, and planets, they will claim them for themselves, plant their flags and, as they do wherever they go, exploit them. Today, imagining what this would be like seems like science fiction, but considering the possibilities of what could be achieved with that level of resources and energy is simply breathtaking.


Physicist Robert Bradbury introduced the concept of an advanced supercomputer powered by an entire star. In his own words: “Advances in computer science and programming methodologies are increasingly able to emulate aspects of human intelligence. Continued progress in these areas leads to a convergence which results in megascale superintelligent thought machines. These machines, referred to as Matrioshka Brains, consume the entire power output of stars (~10^26 W), consume all of the useful construction material of a solar system (~10^26 kg), have thought capacities limited by the physics of the universe and are essentially immortal.”

Theoretical physics and astro-engineering have introduced a hypothetical class of devices known as stellar engines, classified into three main classes. Class A engines act as sails, using a star's asymmetrical radiation pressure to modify its trajectory. Class B engines convert the star's energy to work, and Class C engines are a mixture of the other two. In his 1960 paper, physicist Freeman Dyson formally proposed a Class B stellar engine that came to be called a Dyson Sphere: a hollow shell around a star, likely made of “computronium” - a hypothetical material engineered to maximize its use as a computing substrate. Nest a bunch of these to extract every last drop of star-juice and voila! You get a Matrioshka Brain! The design resembles the nested Russian Matryoshka dolls from which it inherits its name.

While the idea of the Matrioshka Brain violates none of the currently known laws of physics, the engineering details of building such a structure would be staggering. To begin with, mankind is a very long way from even considering building something of this scale. To build a Dyson Sphere, we must rank at least Type II on the Kardashev Scale. In 1964, Russian astrophysicist Nikolai Kardashev proposed a scale to rank civilisations based on their energy consumption. A Type I civilisation can harness and store all the energy available on its own planet, Type II can harness the energy of its host star, while Type III controls all the energy in its galaxy. Carl Sagan later modified this scale to a logarithmic one to include intermediate values.


Currently, humans rate 0.724 on the Sagan-Kardashev scale. Scientists believe it will take a few hundred years to reach Type I, a few thousand for Type II, and a million more to reach Type III. To build even a single Dyson Sphere, the materials needed would deplete entire planets. It would have to be considerably bigger than the star itself, or the material it is made of would melt. Consecutive spheres would then need even larger radii, to maintain a considerable temperature difference between them (per the Stefan-Boltzmann law) and thus a reasonable conversion efficiency. Consider a sphere of radius 0.25 AU (37.5 million km) and thickness 1 cm made of, for illustration, steel. The volume of this shell would be 1.76 x 10^20 m³. Assuming steel has 1% carbon and 80% of the Earth's core is iron, we would need 23 Earths to build this. That's not even accounting for the computing machinery! All this raw material would then have to be shipped across the solar system to the build site and actually assembled in space. The logistics become unspeakably worse if the civilisation decides to use some other star system instead of its own. Alternative designs, like swarms or even a single belt of smaller panels, have been suggested.
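The numbers above are easy to check with a short back-of-the-envelope script. This is only a sketch under the article's stated assumptions (a 1 cm shell at 0.25 AU; Sagan's logarithmic Kardashev formula K = (log10 P − 6)/10); the 1.9 × 10^13 W figure for humanity's power use is a standard estimate, not from the article.

```python
import math

def kardashev(power_watts):
    """Sagan's logarithmic Kardashev rating: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use, ~1.9e13 W, lands near the article's 0.724
print(round(kardashev(1.9e13), 2))  # → 0.73

# Thin-shell volume: surface area * thickness = 4 * pi * r^2 * t
AU = 1.496e11            # metres in one astronomical unit
r = 0.25 * AU            # shell radius from the article
t = 0.01                 # 1 cm thickness
volume = 4 * math.pi * r**2 * t
print(f"{volume:.2e} m^3")  # → 1.76e+20 m^3, matching the article
```

The thin-shell approximation (area times thickness) is valid here because 1 cm is utterly negligible against a radius of tens of millions of kilometres.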

It is likely that the entire construction would be done using self-replicating robots. These robots would mine a planet and set up smaller panels around the star to power their operations. Then, using the materials available on the planet, they would clone themselves, so the workforce and construction efficiency would grow exponentially.

Assuming a civilisation was advanced enough and inclined to build such a megastructure, what would they do with it? A Matrioshka Brain can perform more computations per microsecond than all the computers on Earth do in a year. A computing system such as this would be capable of the collective intelligence of hundreds of billions of human civilisations - perhaps more. It would be a godlike entity. One cannot begin to imagine the kind of thoughts such a brain would have. It would be so powerful that it could run simulations of entire civilisations so effectively that they would never even realise it. In fact, that is its most likely purpose, as suggested by Charles Stross in his book Accelerando. Instead of exploring further outwards in the “real” universe, this advanced civilisation could just move inwards, uploading its consciousness into the Brain and living on in a perfectly crafted simulation for seemingly longer than the outside world. As the Brain completely surrounds the star, it wouldn't even be visible from a distance, shielding it from prying eyes. Its energy requirements could also be manipulated, as its entire world is now “just code”.

However, this brings us to a popular and much-debated question: are humans in a simulation? If so, it would explain the Fermi Paradox, as many, including Elon Musk, believe. The paradox states that given that there are billions of stars and planets in the universe, and that the universe has been around for billions of years, it is very unlikely that intelligent life did not develop anywhere else. If some other civilisation had developed before humans, it would have left some detectable physical evidence. Yet nothing has been found. Could this be because there isn't anything out there, because there is no one else inside this simulation? If humanity is indeed in a simulation, then how did it get here? Is the “real” world anything like this one? Can humanity go back? Should it? Stay tuned and find out, when The Human Show returns! The galaxy is watching!


NO MORE MEN‽

“Man in his arrogance thinks himself a great work, worthy the interposition of a deity.”
- Charles Darwin, The Descent of Man

Since time immemorial, we as humans have kept validating our own existence by claiming that we run the world. There are moments when the sight of frolicking squirrels makes us think, “Lucky creatures, they don't have to go to school”. Biology classes in school were not really pleasant, considering the number of times we were reminded that the tiny Archaebacteria have stood the test of time better than we ever could. However, the journey to modern-day Homo sapiens hasn't been easy either. While Bacteria and Archaebacteria stuck to changes at the molecular genetics level, the evolution of the frontal lobe and the neocortex regions of the brain helped increase the depth of planning, communication, and problem-solving skills in humans. Developing such advanced cognitive functions, compared to the other living beings on Earth, drove the species towards massive lifestyle changes. This seems to have been disadvantageous for the species altogether: humans additionally had to hunt for better food sources, fire, and tools for survival and protection.

Nature, having been exploited beyond limits by humans to keep up with their lifestyle, seems to have taken up vengeance as its current aim. Scientists have discovered that the male sex chromosome, or the Y-chromosome, is rapidly losing its genes, which might eventually lead to its complete disappearance, and consequently, the complete disappearance of males. This problem might have many reasons, not all of which are exactly known. One guess is the fundamental flaw of the Y-chromosome: while every other chromosome has a pair, the Y-chromosome is all alone. It is singularly responsible for imparting masculine characteristics to humans, which differentiate males from the opposite sex. This implies that it is also entirely responsible for the evolution of males into a fitter species. The singularity of the Y-chromosome means that the genes it carries cannot undergo ‘genetic recombination’, the shuffling of genes that occurs in each generation and helps eliminate damaging gene mutations. Deprived of the perks of recombination, the genes on the Y-chromosome degrade over time and are eventually lost from the genome. Hence, in its relentless efforts to make males fitter, the chromosome keeps losing genes at an extremely rapid pace. The other reason can be seen as analogous to the fact that the human pancreas has decreased in size over generations. Early Man would consume raw meat and other foods that contained a lot of fat, and hence the pancreas had to be more efficient than its current version. With time, as eating habits changed, we no longer needed the pancreas to be that potent, which led to its shrinkage. Likewise, we can assume that the genes lost from the Y-chromosome were not really essential for survival.

The male sex chromosome, or the Y-chromosome, is rapidly losing its genes, which might eventually lead to its complete disappearance, and consequently, the complete disappearance of males.

Another possible outcome might be humans ultimately evolving into hermaphrodites. Hermaphrodites are individuals that have both male and female sex organs; they still require a partner for reproduction, in which either of them can act as the female or the male, but there are no separate sexes. Although androgynous humans all over the planet might seem rather monotonous and boring, this prognosis has societal advantages: there would be no gender to discriminate against, and no sexual orientation to be frowned upon. It is also possible that neither of the mentioned outcomes actually takes place. There are always infinite possibilities, and it will be truly fascinating to wonder about all of them while we wait and watch.


Another evolutionary phenomenon that has left scientists perplexed is the ‘gay gene’, based on the idea that homosexuality might be a genetically determined trait. Dr. D.H. Hamer and his team at the Laboratory of Biochemistry, National Institutes of Health, performed DNA linkage analyses on homosexual brothers and found the region Xq28 on the X-chromosome to be the candidate region in male homosexuals. Although these findings have been difficult to replicate, the work has received much recognition from other scientists in the field. Nonetheless, it is no surprise that homosexual individuals are less likely to reproduce. But then, why does a non-reproductive trait even persist? What is more confusing is the fact that the very factor that determines male homosexuality also makes the women carrying it better breeders. Extensive research into the genetics of ‘personality’ suggested that more than one gene is responsible for conferring homosexuality on an individual. Dr. Edward M. Miller and his team at the University of New Orleans found that a single allele of these genes makes for greater sensitivity, empathy, tender-mindedness, and kindness in females, making them more attractive mates and ultimately resulting in reproductive success. The presence of duplicate alleles, however, produces homosexual males, whose brains are markedly similar to those of heterosexual females. This ‘evolutionary paradox of homosexuality’ is the new concern for molecular geneticists. Considering the increased fecundity of these females, however, we also need to remember that they will pass on the gene to more offspring, and hence there are greater chances of producing homosexual children.

All of these phenomena might have multiple outcomes that we cannot possibly predict. However, it is always amusing to make logical assumptions. Apart from the assumption that humans will ultimately reach extinction, there are a few other outcomes we might consider. The complete degradation of the Y-chromosome might make way for a finer sex with just the right amount of capabilities, producing individuals that are reproductively different from the current ones. It would be exciting to have new humans on the planet, though. We might witness egg-laying humans - would the egg be bigger than the ostrich's or smaller than the bee hummingbird's? We can also consider humans evolving into marsupials: the modern perambulators might get replaced by natural ‘pouches’, and mothers would not have to worry anymore about their babies running away.



GUT FEELING


The saying ‘You are what you eat’ was popularised in the 1960s to encourage healthy and organic eating habits which would help keep diseases at bay. But the origins of this saying date back to the 1800s. Jean Anthelme Brillat-Savarin, a French gastronome famous for writing La Physiologie du Goût (The Physiology of Taste), wrote in his book, “Dis-moi ce que tu manges, je te dirai ce que tu es.”, which translates to ‘Tell me what you eat and I will tell you what you are.’ At first glance, this statement might seem like an exaggeration and rather far-fetched. But when one takes into consideration the organ most of us know nothing about - rightfully referred to as ‘the forgotten organ’ - the statement becomes plausible. The gut microbiota (the forgotten organ) collectively refers to all the bacteria that take abode in our gut. The relationship between a person and their uninvited microscopic guests can be either symbiotic or parasitic, depending upon the type of interaction they have with the body. It has been found that if one were to compress this microbial biome into a solid structure, it would be as big as the liver, and if one were to try to make a sheet out of it, the sheet would have the dimensions of a basketball court. Quantitatively measured, there exist nine bacteria for every human cell. Thus it would be naive to overlook the effects of this biome on our bodies. It was initially deemed impossible for the microbes to interact with the brain due to the presence of the blood-brain barrier (a barrier to fend off infections). But prolonged observation and research led to the astonishing discovery of two-way communication between the gut and the brain through the vagus nerve (a nerve connecting the brain to most organs). This communication might have an impact on an individual's physical and mental response to stimuli.

The physical effect of the gut bacteria was observed in a woman who went through a Fecal Microbiota Transplant - the transfer of good microbes from a donor to an acceptor through fecal matter - after she was diagnosed with a Clostridium difficile infection. After a successful transplant, her infection subsided, but the woman gained 15 kilograms in a matter of 16 months, despite a supervised liquid-protein diet and robust exercise routines. The cause for this is believed to be her overweight daughter, who had offered to be the donor. Similarly, a microbiota transplant from a lean mouse to a relatively plump mouse resulted in weight loss or absence of weight gain in the latter. This unusual transfer of body types might make one wonder whether a microbiota transplant would be a feasible replacement for liposuction, making the process of fat removal low-risk and safe. The microbes in your gut affecting your eating habits and your body type seems rather fitting, given their area of residence, so one might wonder how they affect the brain. It was observed that 80-90% of the signals in the vagus nerve actually travel from gut to brain, making the


phenomenon of your brain activity being affected by your gut response entirely possible. The enteric nervous system (the nervous system of the gut) doesn't require the vagus nerve to function, indicating the presence of its own complex neural interlinking, making it almost a second brain. The activity of this second brain was observed in an experiment conducted at McMaster University in 2011. The experiment compared two different types of mice, an anxious (A) type and an extroverted (E) type, whose level of anxiety was measured by how long they took to get off an elevated platform. Initially, E took no longer than a few seconds, whereas A took close to 4.5 minutes, to get off the platform. When the gut microbiota of these mice were interchanged, E took an entire minute, whereas A took a full minute less than before, to get off the platform. It has also been observed that mice become attracted to cats when colonised by the parasite Toxoplasma gondii, making their lives as volatile as dry ice at room temperature. This shows how big a role the second brain plays in our perception of things.

Tell me what you eat and I will tell you what you are.

Dr. Derrick McFabe, while affiliated with the University of Calgary, wrote a paper in 2013 describing the behaviour of mice when subjected to propionic acid (PPA). It was observed that the mice demonstrated behaviours very similar to those seen in autism spectrum disorders. Coincidentally, PPA happens to be a fermentation product of Clostridia, bacteria which aren't targeted by most antibiotics. The biome of an autistic subject has been found to be rich in Clostridium bacteria, and the feces rich in PPA. A pilot study run by Dr. Finegold and Dr. R. Sandler administered vancomycin - an antibiotic that particularly targets Clostridia - to children with autism. It was observed that 80% of the group showed transient but drastic changes, hinting at the possibility that excessive use of antibiotics at an early age is a cause of autism, as parents usually report that they first observed autism after a few courses of antibiotics. Although nothing concrete can be said about the topic of autism, studying the gut-brain axis could lead to revolutionary changes in our approach to mental illness and diet, opening up an entirely new approach to the field of medicine.



Scientists also feel a proper understanding of the gut-brain axis could help with our interpretation of mental illness. It has been observed that 90% of serotonin (the neurotransmitter responsible for happiness) is actually produced in the gut. The gut is also responsible for the production of 50% of dopamine (the neurotransmitter responsible for motivation), as well as GABA (the neurotransmitter responsible for relaxing and anti-anxiety effects). Rodents, when subjected to probiotics (substances which help the growth of certain useful microbes), showed more activity, opening up the possibility of probiotics having antidepressant and anti-anxiety effects. A study showed that the bacterium Bifidobacterium infantis had the same antidepressant effect as the drug citalopram. Various bacteria are responsible for the production of these neurotransmitters in our gut; it is believed that ingesting probiotics that cultivate them might help in understanding, and possibly curing, mental illness.

Our gut biome is a warzone where good bacteria are pitted against the bad, and antibiotics are a nuke that kills a lot of bad bacteria but takes some good bacteria along too. Prolonged and frequent use of antibiotics thus harms the good bacteria in our body as well. In the US, it was found that 1365 courses of antibiotics are prescribed per 1000 babies under the age of two. Although antibiotics, when administered at appropriate times, are good, such excessive usage could cause major harm to a baby. When a baby is delivered through C-section (Cesarean section), the chances of the baby being diagnosed with obesity, breathing disorders, immune deficiencies, etc. increase by up to 25%. A baby delivered by C-section gets coated with microbes from the hospital environment instead of its mother's birth canal, which results in a considerably weaker microbiota. This shows that the absence of certain bacteria in our gut makes us prone to certain diseases and disorders; hence, the excessive use of antibiotics clearly poses harm to the health of a child.


MUSIC AND LINGUISTICS


Ever since the dawn of civilisations, humans have been using sounds to communicate. This ability to express one's thoughts using sounds has manifested into two seemingly intertwined and elemental components of human society - Language and Music. Some form of both can be found across cultures, ranging from the early civilisations of Mesopotamia to the complex cross-cultural modern society, which is strong evidence for our innate competence for both musical and linguistic expression. There are striking similarities between the two: both need a shared understanding, a common ground between speaker and listener, and in both the learning process involves imitation followed by calibration based on feedback. There's even a shared area of the brain involved in their interpretation. This has prompted people to think that music and language are not very different after all. Numerous musicians and linguists alike advocate the idea of music being a universal language that transcends linguistic barriers, which begs the question of whether the age-old cliché holds any water, or is just blatant romanticisation of music. In other words, is William Shakespeare's Hamlet not all that different from Chopin's Nocturnes as an artwork?

A more philosophical take on the problem is questioning the nature of meaning derived from music. Meaning is a crucial property of language, and the way we derive meaning from a language is fairly obvious: spoken language is essentially a mapping between sounds and propositional or conceptual thought. Such a mapping, and the resulting objective sense of meaning, are absent in music. Leonard Bernstein, a prodigious composer and a celebrated conductor, argues that meaning in music comes from the comparison between its different elements. One note on its own carries no meaning; only with another note to compare it to and contextualise it with does a perceived meaning arise. A simple example is the most well-known song in the world - Happy Birthday. The notes of the melody are G G A G C B; on their own, the notes carry no value, but as soon as G is established as the root, the notes A and C - the second and the fourth notes in the G major scale - acquire a certain tension which resolves when we go to B, the major third of G and the trademark interval of all that is happy in the world, making it “Happy Birthday”. This process of comparing things that aren't the same but might have an underlying connection is almost metaphorical, and Bernstein considers metaphor to be the key to understanding music. “In any sense in which music can be considered a language, it is a totally metaphorical one. Considering the etymology of the word metaphor (meta - beyond, and pherein - to carry), it means carrying meaning beyond the literal, the tangible, beyond the grossly semantic,” said Bernstein in his Harvard lecture series titled “The Unanswered Question”, while he tried to grapple with his idea of a universal musical grammar, rekindled by Noam Chomsky's pioneering work in linguistics based on the idea of an innate human competence for grammar.
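Bernstein's point that musical meaning is relative to an established root can be made concrete. The sketch below is an illustration of the article's Happy Birthday example (not from Bernstein's lecture): it maps the melody notes onto their scale degrees in G major, making the “tense” degrees 2 and 4 and the resolving major third (degree 3) visible as plain numbers.

```python
# The G major scale, listed as scale degrees 1 through 7.
G_MAJOR = ["G", "A", "B", "C", "D", "E", "F#"]

def scale_degree(note, scale=G_MAJOR):
    """Return the 1-based scale degree of a note within the given scale."""
    return scale.index(note) + 1

# Opening phrase of "Happy Birthday", as given in the article.
melody = ["G", "G", "A", "G", "C", "B"]
degrees = [scale_degree(n) for n in melody]
print(degrees)  # → [1, 1, 2, 1, 4, 3]: tension on 2 and 4, resolution on the major third
```

The same notes analysed against a different root would yield different degrees, which is exactly the contextual, comparative sense of meaning the paragraph describes.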



To answer the question, we have to break it down into simpler sub-questions and compare different properties of music and language beyond the superficial level. An important question to ask at this point is whether the similarities shared by language and music are genuinely distinct from other human activities, and comparing the cognitive capacities involved in the acquisition and use of the two might be a good starting point. Substantial memory capacity is required for both language and music, but it isn't unique to them: the ability to remember the appearances and behaviours of things we interact with, and to memorise the detailed geography of one's environment, gives a clear evolutionary advantage. Also, the perception and comprehension of novel stimuli, and the setting up of expectations based on previous encounters with similar stimuli, is common to both, but this too is shared with our visual system. The fine motor skills required for both music and language are also employed in drawing and tool-making, which humanity has been doing ever since its inception. The desire and ability to imitate others, and the ability to engage in jointly intended activities, are not only the cornerstones of musical and linguistic learning but are also responsible for the evolution of culture, which includes but isn't limited to music and language. The cognitive abilities involved aren't unique to music and language, and this suggests that music is perhaps an evolutionary by-product, or as Steven Pinker puts it, “auditory cheesecake”, an exquisite confection crafted to tickle the sensitive spots of our mental faculties.


Bernstein's approach of studying music under the same intellectual paradigm as linguistics offers many interesting ideas about the structural and syntactical similarities between music and language. One of the important discoveries of the generative theory of tonal music was the extent to which phonology and music are structured rhythmically by very similar metrical systems, both based on a metrical grid. Speech patterns in most spoken languages are found to have a rhythmic element to them. This is an important formal parallel between the two domains, perhaps shared only by music and language. The structural similarity manifests in the middle grounds between the two: poetry employs isochrony, or strict rhythm, which brings it closer to the metrical nature of music, while rap music, which probably lies somewhere towards the more musical side of the spectrum, strongly hinges on the creative and cleverly structured use of language.

Another front for exploration is the use of language and music. The organic use of music, as in lullabies, celebratory music, and songs of work and rebellion, differs greatly from the highly commercialised and quasi-manufactured form of popular music today, just as paintings in museums differ from the doodles on cave walls by early men. But the constant in that evolution has been the ability of music to evoke emotion in the listener; it can be safely said that the affective use of music has been preserved. Language has a more propositional use: it can be used to make declarations or ask questions. It can even be used to make statements that don't make any semantic sense but still carry meaning, like “The window to the world can be covered by a newspaper”. Music, on the other hand, is good at something language isn't: it can convey abstract concepts with ease. For example, Steve Vai playing Tender Surrender conveys the emotion of ecstasy much more effectively than the statement “I am ecstatic”. The comparison leads to the inference that music and language are very different in terms of their uses and consumption.

Though music and language share a multitude of similarities at the surface, the claim that music is a language in a formal sense is fairly ignorant, since the similarities are also shared by numerous other human activities and the differences between the two are too significant to be ignored. But they don't need to be binary, separate entities, as they are almost always found complementing each other. Every work of art can be put on a spectrum with ill-defined boundaries, or the labels can be omitted altogether as far as art is concerned.


PUZZLE CORNER

Listed aside are some of the lesser-known albums of popular artists across a variety of genres. Can you identify the artist based on their album art?


Brooks Brothers A men’s clothing store

B.T. Ganj Gurudwara Road, Near Gurudwara

Wholesale factory outlet products

20% discount for all IITians

+91 701-794-9318 +91 730-275-2377


WAR OF WORDS

'Why look'st thou so?'—With my cross-bow
I shot the ALBATROSS.
- The Rime of the Ancient Mariner, Part I

The Ancient Mariner shot the Albatross. The closing monostich of the first of seven parts of Samuel Coleridge's greatest work could leave a reader with a series of interpretations to ponder. Was it grief? Was it dread? Or was it because the albatross “spoke” to the mariner? There is no perfect answer. In fact, a unique answer would only have been a surreptitious attempt by the poet to banish readers from their imaginative world. On the contrary, Coleridge grants his readers the freedom to choose the melancholy, rage, regret, or joy of the Mariner over the death of his good omen, the albatross. But when the shouted lyrics and frantic guitars lament the death of the albatross in Iron Maiden's song, The Rime of the Ancient Mariner, the scenario is not the same. When “The Mariner kills the bird of Good Omen” pierces the ears of its listeners, it is laden with predetermined emotions and is devoid of distractions. Its rhythm binds the listeners to the provoked emotions that the lyric has to offer, thus shrinking the multitude of emotions that Coleridge had offered into a minuscule feeling. The hazy demarcation that sets the poet and the band apart lies in two ubiquitous entities - poetry and lyrics. Superficially, the two may be regarded as being as different as a book is from its adaptation into a script, but at a deeper level the two art forms of literature are tangled and untangled in their own mysterious ways, oblivious to many.

Poetry is an art of expression. A poet is an artist who adroitly carves words into gems. Gems, which are woven together to form ornamental lines. Lines, whose apt arrangements transform themselves into jeweled octaves, gilded sonnets, or festooned couplets. But a poem is always a poet's incomplete work. It is an intentional reticence on the part of the poet that leaves the finishing touches to the readers' interpretation of the work. When Robert Frost praises the woods in his famous work, Stopping by Woods on a Snowy Evening, he lays a structure for his readers to furnish from their assimilation of his temptations, responsibilities, and obligations. This is what a poem offers its readers - words that take place against a context of silence. Poetry's condescended counterpart, lyrics, on the other hand, are guided by a lot of calculated musical information - melody, rhythm, instrumentation, the quality of the singer's voice, and the quality of the recording as a whole. They are sometimes designed, intentionally, to adhere to the tune they are meant to serve. They are dependent. Melody fills them with emotions, symphony kindles their soul; music breathes life into them. I stopped an old man along the way, hoping to find some old forgotten words or ancient melodies - the line would have been no different from an ordinary line were it not part of Africa by Toto, and that has made all the difference.

A poem works on the page. The literary mastery of its creator only comes into play when it is read. This necessity doesn't apply to lyrics; reading them is an option a reader chooses as a trivial activity. This contributes to the way poetry and lyrics present themselves to readers. It would not take a reader a second glance to appreciate the line uniformity Shakespeare achieves in his Sonnet 18, or the meticulously calculated syllables in John Keats' Ode on a Grecian Urn. If we apply the same criterion to John Lennon's masterpiece, Imagine, or Pink Floyd's Time, they certainly would be also-rans in the race for visual mastery.

The cohesion of music and poetry, in fact, is a reason behind the birth of contemporary art forms of rap and slam poetry.

Lyrics and poetry do not place themselves on opposite poles, albeit marked by differences. Many musicians present their songs' lyrics in the form of poetry, whereas many poems, especially those tagged as "nursery rhymes", owe their grace to rhythm or a particular rhyming scheme. Musical-poetic collaboration is not a phenomenon of yesterday. Missy Mazzoli and Gabriel Kahane are among those who have successfully set poems by contemporary poets to music, taking circumspect care to design music that moves around the poem rather than torturing it into overly strained forms to follow a musical structure. An understanding of the language that runs through a poem plays a crucial part.

Rap, often glossed as "rhythm and poetry", is in a crude sense poetry piggybacked upon the delivery, beats, and vocals of its writer. It is generally performed over a backbeat or musical accompaniment. Although a form of spoken-word poetry, rap is often downgraded in terms of literary standard because of its strong language, anger, and street origins. On similar lines, slam poetry too places strong emphasis on speech and delivery. It speaks into the emotional space of its listeners by binding an aura of melody. Many of its delivery styles closely resonate with the vocal delivery found in hip-hop music, and it is often accompanied by beatboxing, foot-tapping, claps, and snaps to set the rhythm. Neither slam nor rap is an independent art form; their dependence is a direct product of the coalescence of music and poetry. Not only do these art forms translate the linguistics of poetry into vocals, they also bridge the gap between lyrics and poetry. If one contemplates, the basic thinking procedure of a lyricist and a poet(ess) revolves around the same elements—rephrasing a basic idea in creative ways, weaving words into a pattern of rhyme, or arranging sections into a logical, seamless whole. Yet their final products are unique in their own ways. While the former garners outros, bridges, choruses, and verses, the latter treasures ballads, haikus, and limericks. But then why is it always the poet(ess) who is showered with the highest accolades of literature, while the musician is marooned, forced to unfold their art under the veil of poetry just so the audience will take it "seriously"? It seems an absurd notion to contend that lyrics have less literary merit than poetry, to tag the work of a lyricist as a cakewalk, or to insist that lyrics do not deserve the glorified title of "poetry". Right from childhood, poetry has been served to us on a silver plate; it has always been an integral part of our classroom studies. Lyrics usually did not find a place in our curriculum, and even when they did, it was not for their literature but for cultural propaganda or patriotism.

It is important to understand that lyrics and poems are two different genres of literature, even if they share similar literary structures. Both possess idiosyncrasies and distinctions of their own, and one must view each of these media through a different lens to truly appreciate the greatness brimming within each.

GEEK GAZETTE


PICASSO’S NIGHTMARE

The Dancing Salesman Problem by The Painting Fool (AI)

Historians agree that after early man found his first refuge from territorial predators and satisfied the drive for copulation, visual art was the first art form to originate, as he picked up a nearby stone to doodle on his cave walls. It would only be poetic to constrain ourselves to this very first art form for this article.


‘Art’ has always been far from the easily describable concepts of human society. Connoisseurs and philosophers have long debated, and failed, to draw a boundary around the concept. The largely accepted ‘requirements’ for an art piece are that it expresses an idea, an emotion or, more generally, a world-view. Thus, calling the cathedral paintings of the Basilica of Notre-Dame a "superior" artwork to the miniature buffalos and human hands covering the walls of Indonesian caves would contradict the definition itself. ‘The Arts’ refers to the theory and physical expression of creativity found in human societies and cultures. It encompasses a diverse range of human activities, creations and ways of expression, including music, literature, film, sculpture, and painting. With the emergence of Contemporary art in the twentieth century, it became increasingly difficult to trace a path for the evolution of the visual arts, their very perception in society having been radically altered. Moving first to larger contextual frameworks such as personal and cultural identity, artists became more interested in expressing basic human emotions such as doom, tragedy, ecstasy, and so on. An utter departure from reality was observed with Abstract art. Artists started experimenting along the continuum between reality and total abstraction; rather than pushing the boundaries established by traditional art forms farther, they began to question the very existence of these boundaries.

In 2015, Google released DeepDream, a software suite for visual content recognition and generation. At first, the images seem to have passed through nothing more than a psychedelic Instagram filter, but there is plenty more happening behind the curtains. The machine learning algorithms rely on a technique Google likes to call 'Inceptionism'. Google's artificial neural network functions like a computer brain, inspired by the central nervous system of animals. When the engineers feed the network an image, the first layer of 'neurons' processes it. This layer then 'talks' to the next layer, which then has a go at processing the image. This process is repeated 10 to 30 times, with each layer identifying key features and isolating them until the network has figured out what the image is. The method gives little success when implemented for image recognition and can only be


considered as a cheap copy of the human brain. But when put in reverse, i.e. when the knowledge of what objects look like is used to locate and generate their appearance in an image, it surpasses all human capacity. In a landscape where human eyes would see little more than a cloudy sky, striking patterns of flying squirrels and dogfishes could be detected. It seems the machines are much better at recognizing the abstract.

The news currently making headlines is the auction of Portrait of Edmond de Belamy (2018), an uncanny, algorithm-created rendering of an aristocratic gentleman. Obvious, a Paris-based collective of artists and machine learning researchers, claims to have fed the algorithm a training dataset of more than 15,000 portraits created between the 14th and 20th centuries. Using these images, the algorithm was able to "generate" new images similar to the ones it had been fed. The technique seems capable of passing the "Turing test for paintings": a generative adversarial network, trained on a database of over a million artworks from the 15th to the 20th century, that can detect whether an artistic style is "original". The software does not place existing styles like the Renaissance or the Impressionist in this category.

One would be hard-pressed to answer the question of the creator of the painting. Is it Mr. Goodfellow, who wrote the algorithm? Is it the people who gave the algorithm the initial dataset to learn from? Or is the algorithm to be considered an entity in itself, one capable of creating auctionable artworks?

Subtle mimicry from which the illusion of creativity originates

There has not been a better time in history to realize that humans and machines, both being pattern-recognition entities, are similar in striking fashions. AI algorithms are "trained" through repetitive actions while making small tweaks in the external "stimuli". This is quite similar to how an artist's brain works, albeit on a much superior scale. Our actions and experiences always leave an impression on our minds, seldom consciously. And when one picks up a paintbrush to materialize the "creative" idea bubbling through his consciousness, it is nothing but a combination of the stimuli he has acquired across his lifetime.

Most would expect the fields of art and creativity to be the last ones to be supplemented by AI. It is easy to imagine medical diagnosis or financial planning through an AI assistant, but when it starts writing songs or painting pictures, it comes deviously close to a Black Mirror episode. But then, perhaps it is a reflection on the current human perception of art and beauty itself. Perhaps we are not yet at the point in history where we can comfortably look at it the other way round; comfortably acknowledge an artist's inspirations and creativity as nothing more than a dataset continually expanding through his lifetime.

So can the DeepDream-generated images truly be called art? It is hard to argue against it from the common definition we know. The technology and datasets are the key constraints on the generator. It might just be difficult for us to ignore the present level of blatant imitation in the results, as opposed to the subtle mimicry from which the illusion of creativity originates. But then, perhaps it is just a stepping stone into the next revolution in Art.


COGITO ERGO SUM

GHOST IN THE SHELL (1995)
Genre: Drama/Fantasy
IMDb: 8.0
Rotten Tomatoes: 96%

“What if a cyber brain could possibly generate its own ghost, create a soul all by itself? And if it did, just what would be the importance of being human then?”

Based on a manga by the legendary artist Masamune Shirow, Ghost in the Shell is widely renowned as one of the greatest animated films of all time, partially because it was one of the first films to explore what it means to be human in a world where the boundary between man and machine has all but been erased. The production of the movie is on par with big-budget Hollywood movies, with tense, beautifully animated action sequences, an immersive soundtrack, a wide variety of engaging characters, and philosophical deliberations far ahead of its time. The movie went on to inspire many famous directors and creators, the most notable example being the creators of The Matrix trilogy, which stages several scenes as direct tributes to it.

The story is set in a dystopian future in the year 2029, where advanced cybernetic and augmentation technology has eliminated the need for the human body. One may choose to transfer their consciousness to a 'cyberbrain' capable of accessing different networks, effectively becoming a 'ghost' inside a synthetic 'shell'. Data is as valuable as currency in this technologically adept society, and hackers are seen as the new breed of terrorists. The story revolves around Major Motoko Kusanagi, a cyborg operative working for one of the Public Security Sections of New Port City, in Japan. Upon investigating an incident involving a major political figure, she and her team learn about a mysterious hacker known as the Puppet Master, the 'most dreaded cyber-criminal of all time'. As Kusanagi pursues the hacker, her discovery of several shocking truths brings forth more questions about identity, humanity, and consciousness.

Of course, it is not just the animation and far-sightedness of the creators that set the movie amongst the greats. The philosophical ideals it discusses are questions which will need to be answered sooner or later as society progresses towards a more digital era. Major Kusanagi, a pure cyborg, remains mostly unclothed throughout the movie, indicative of a future where gender has become completely irrelevant thanks to customisable cybernetic 'shells'. One of the major plot points is the protagonist's constant questioning of her own humanity, and the looming question that, if the parameters of humanity can be calculated, could the result be an entity perhaps more human than the rest? Sentient machines and AIs are multiplying and becoming commonplace. The definitions of humanity must be reconstituted as the similarities and differences between humans and machines become nigh indistinguishable. The movie might feel a bit perplexing due to its sparse world-building and backstory, but viewers must look past the story and into the ideals conveyed through it. Overall, the movie is a pioneer of the cyberpunk dystopian-future genre, and a must-watch for all enthusiasts of the genre.


FALLING DOWN TO EARTH

THE MOUNTAIN
Artist: Haken
Genre: Progressive Rock/Progressive Metal
Release Year: 2013
Length: 62:05 minutes

The Mountain is a 2013 album by the progressive metal band Haken, and arguably the best prog album of that year. The album was acclaimed by fans and critics alike and won the band a loyal fan base. While it cannot be considered a true concept album, it does reference the Greek king Sisyphus, who was punished to roll a stone to the top of a mountain; the stone was cursed to roll back down, rendering his punishment eternal and all of his efforts futile. Another underlying theme in the album is the course of events in one's life: from the optimistic view of life in 'The Path' to the heart-wrenching pleas in 'Somebody', everything in between can be attributed to various vices and events in life. However, the concept is not all there is; each song is a delicacy in itself.

The album's most exceptional quality is perhaps its ingenious and seamless incorporation of rhythmic and harmonic complexity, which keeps the album interesting at every point and the listener engaged till the end. Haken fuses elements from jazz, metal, soul, progressive rock, even genres like 80s synthpop and glitch-hop, coherently, in a fantastic display of musicianship. The album begins with 'The Path', a vocal-centric track about the vulnerable yet optimistic state of mind in the early days of one's life. The next song, 'Atlas Stone', is perhaps the strongest song on the album and a great example of the ease with which Haken uses odd time signatures and makes them sound natural. 'Cockroach King', the most popular track on the album, boasts a very elaborate arrangement with a rich a cappella segment, heavy metal riffs and an eerie sonic palette. The video suggests that this was Haken's attempt at a progified "Bohemian Rhapsody". 'Because It's There', the central ballad, stands out because of its brilliant bass playing and unique percussion. The album takes a dark turn in the middle of 'Falling Back to Earth', and the downward spiral is equally captivating.

Falling down the road comes 'Pareidolia', an oddity in a series of oddities with an exotic opening bass riff, syncopated guitar riffs, and incredible drum work. The song checks off all the boxes for a prog-metal classic. The final track, 'Somebody', is a ballad so powerful and fitting for the end that it gives all you have waited for and more. The haphazard-sounding polymetric vocal line repeating the phrase "I wish I could have been somebody" conveys the surreal and overwhelming sense of despair followed by acceptance that describes the emotional state of the concluding phase of one's life, making for a very fitting outro. The influence of several other artists is apparent, but Haken manages to preserve a signature sound that is instantly recognisable. Infusing elements from such a diverse range of genres and striking the right balance between technicality and feel is truly a herculean task, and Haken has indeed risen to the challenge they set for themselves.


TO INFINITY AND BEYOND

With an existence that has spanned over 2000 years, the notion of infinity is often credited with taking digs at the limitations of our understanding of the Universe. It continues to be an enigma to scientists everywhere, seducing them with its mesmerising intricacy and rendering mathematicians and philosophers flabbergasted with its fascinating features. Through the ages, endless interdisciplinary debates and disputes have created divisions among mathematicians and thinkers. Such discussions trace back to the question of whether our universe is spatially finite or infinite, a dubiety whose conclusion we may never draw. The quest for understanding infinity underwent an abrupt change when scientists and philosophers began pondering whether the comparison of infinities was possible. Mathematicians challenged the received wisdom that the whole cannot be the same size as the part, a point ingeniously illustrated by Hilbert's paradox of the infinite hotel. In the course of exploring the universe and infinity, scientists and philosophers have often met with a


dilemma: whether or not to model something as a continuum. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies, leaving no cracks or discontinuities. Having dealt with the hideous complications of understanding infinity, scientists have begun to argue over the veracity of this untested assumption, which forms the basis for most modern theories in physics and cosmology, up to and including cosmic inflation. Ironically, the assumption that something truly infinite exists underlies almost every physics course we are taught. Georg Cantor, in a momentous discovery, introduced transfinite cardinal numbers (numbers describing the sizes of infinite sets) and showed, using his diagonal argument, that some infinite sets are greater than others. His pursuit brought forth the pioneering idea of using a bijection to compare the sizes of two infinite sets. But what followed was a series of counterintuitive results. For instance, the set of natural numbers and the set of even numbers can be put in one-to-one correspondence, or bijection, as each element in the first


set pairs off with a corresponding element in the other, and vice versa. Although the second set is a subset of the first, the two remain the same size, justifying the claim that the whole can indeed be the same size as a part. The second question Cantor considered was the cardinality of the real numbers. Covering all the points on the continuous number line, they can never be put in bijection with a countably infinite set, and are hence referred to as uncountably infinite.

Whole can indeed be the same size as a part.

That being said, it is clear that these overwhelmingly massive uncountable sets are larger than countable ones. This knowledge led mathematicians to wonder: if there are big and small infinite sets, can there be medium infinities too? This question is the continuum hypothesis, and it occupies the vaunted topmost spot on David Hilbert's list of 23 of the most important problems in mathematics. Disproving the continuum hypothesis would mean that there are medium-sized infinities; proving it would mean there are only the bigs and the smalls. In 1940, the mathematician Kurt Gödel showed that the continuum hypothesis cannot be disproved from the usual axioms of mathematics. In the 1960s, the mathematician Paul Cohen showed that it cannot be proved from them either. This won Cohen the Fields Medal, the highest honour in mathematics, and the unresolvable continuum hypothesis was added to the catalogue of the limits of our knowledge.

Another eminent flaw posed by the idea of infinity lies in the domain of inflationary cosmology. Inflation successfully explains the beginning and smoothness of the universe, with quantum fluctuations during inflation guiding the formation of galaxies and large structures. Quantum fluctuations ensure that this process continues forever, leading to the 'eternal inflation' model. This, in turn, fosters a multiverse. While opinions vary widely over the acceptance and flaws of the multiverse model, a number of problems in physics and astronomy could be solved effortlessly by ingesting the assumption that our universe is merely a tiny speck hovering in the entirety of everything. With all the infinite realms out there, every possibility is played out infinitely many times across this extensive multiverse. In our particular universe, two-headed men are rarer than single-headed men. In an infinitely stretching multiverse, both kinds are infinite in number. So what does the ratio correspond to? With cosmic inflation theory advocating the idea of the multiverse, we are robbed of any ability to uniquely explain the properties of nature. Every event we describe as beyond the bounds of possibility in our universe is a trivial reality in infinitely many other universes on the canvas of the multiverse. This perplexity in measuring the relative odds of different occurrences gives rise to ambiguities in extracting meaningful measurements and making predictions on a cosmological scale. We may speak of infinite volumes with infinitely many galaxies, but our observable universe contains only about two trillion galaxies. If space is a true continuum, then describing even something as simple as the distance between two points requires an infinite amount of information, specified by a number with infinitely many decimal places. If the eternal inflation model is accepted, statistical analysis and cosmological predictions no longer make sense. Anticipating the culmination of years of exploration, we risk much of our knowledge being sabotaged by this untested assumption of infinity. Apparently, it is high time we canvassed the roots of, and alternatives to, this outwardly convenient construct with uncertain prospects, one that is making us delve into the frontiers of a non-empirical science.


PARALLELISM AND CHAOS

The answers to some of the most basic questions in computer science can be counter-intuitive. Consider an example: the case of incrementing a variable. Today, an everyday computer can compute a double increment over a variable incorrectly if due care is not taken; the result can be an unexpected single increment! But how can simple addition lead to incorrect results?

It all started with the introduction of commercial uniprocessors in the 70s, which ran a single operating system efficiently. Since all commercially available processors were single-core, application programmers wrote "serial" programs without worrying much about the pace at which they ran. This confidence was attributed to a famous observation made by Gordon Moore in his paper "Cramming More Components onto Integrated Circuits", better known as Moore's law: the number of transistors on a chip would double every eighteen months. The additional speedups that came with each processor update guaranteed programmers performance and rendered them oblivious to the hardware advancements. Decades passed with incremental speedups, allowing application programmers to focus on a better overall software experience. The cycle broke in the late 90s, when achieving speedups in clock frequency became inherently difficult because of the ever-increasing power consumption that accompanied each marginal increase in clock frequency. Higher power consumption leads to heating losses, and thus better cooling becomes necessary. For instance, to run a modern CPU at clock speeds around 7 GHz, one would need not only sub-zero temperatures but liquid nitrogen for optimum cooling; even then, the CPU would not run for more than a few minutes at best. Such instability is what led chip manufacturers to seek a different model for providing speedups. A completely new computer architecture model was developed, taking inspiration from parallel supercomputers, which had been advancing since their inception in the 70s. In the early 2000s, CPUs with multiple cores entered the market. While most of the methodology was adopted from advances in supercomputer parallelism, it was completely new for the application programmer developing everyday applications for the general consumer. With multicore processors came a variety of issues. Marginal speedups in single-core performance meant that the old serial programs required a complete overhaul to keep up with current standards. To put it in perspective, the single-core speedup from the 7th to the 8th generation Core i7 is around 10%, while the multicore speedup ranges from 50-90%. To clearly define the difference between a serial and a "concurrent" program, some knowledge of threading is required. A thread is the smallest sequence


of programmed instructions that can be managed independently by the operating system: an independent strip of code that can be scheduled and executed on its own. A serial program runs on a single thread of execution, while a "parallel" program runs on multiple threads of execution simultaneously. This allows multiple instructions to run concurrently, yielding the speedups that come from multicore execution.

Now that the idea behind a multicore processor is clear, we come back to the initial problem case. Parallel programs require the programmer to be alert about the execution scheme, because a program might not run in the written program order at all! Let the variable discussed above be 'x', with an initial value of 0, and let two threads, each performing an increment on 'x' (i.e. ++x, x++ or x += 1, depending on the programming language), run concurrently. The value of 'x' expected by any layman would be 2, but the operation can very well end up producing 1 (since when did 1 + 1 = 1?). Such non-deterministic results arise because the seemingly single increment instruction can be further divided into a read, an increment, and a write, each a separate instruction. Indivisible instructions are called atomic instructions and execute for a single thread at a time. Suppose thread 1 reads the variable first and thread 2 reads it just after; both threads then operate on the same value of 'x', i.e. 0. The final computed output will thus be 1, and not 2. Such scenarios are common in concurrent programming and are collectively known as "race conditions", since the threads race to get hold of the same shared data. Because making a computer "prove" that 1 + 1 = 1 is this easy, a solution is necessary. It is achieved with the help of "mutual exclusion", a method used to control concurrency and allow serial execution in a concurrent setting. Mutual exclusion can be implemented in a variety of ways, the easiest and most intuitive being a lock. Allowing a thread to acquire a lock before executing a non-atomic operation makes sure that only a single thread executes the operation at a time. The second thread of execution then reads the updated value, i.e. 1, when it executes the operation, and the final computed result is deterministic.

"The biggest sea change in software development since the OO revolution is knocking at the door, and its name is concurrency."
- Herb Sutter (2005)

With the introduction of concurrency, parallel algorithms boomed. Any parallel algorithm or data structure can be categorized as blocking or non-blocking. Blocking algorithms call the system's lock functions to acquire locks and are the easiest and most intuitive to implement. Non-blocking algorithms do not use locks at all, allowing multiple threads to make progress simultaneously; they are considerably harder to implement and require great care to ensure that race conditions are avoided. To minimize the overall strain, modern languages come with concurrency primitives and thread-safe implementations of various functions. Programmers have also collaborated to provide APIs (Application Programming Interfaces) like OpenMP and OpenMPI, and runtime libraries like HPX. These libraries help the programmer focus more on the algorithm and less on the underlying difficulties of implementing concurrency. The burden still remains on the programmer not to mess up a program's execution. So the next time you see a blue screen of death on your Windows OS, you will know that badly implemented concurrent code may be to blame!


WELCOME TO THE MACHINE

There is an indescribable desire amongst humanity to seek methods to expand our relatively limited capabilities. The most obvious source of this desire seems to be our awareness of the limited timespan of our existence, and of the inadequacy of our frail bodies as a means of transcending our natural confines. Since the Renaissance, scientists and philosophers have dreamed of a time when human beings, through the use of novel sciences and innovations, are no longer controlled by the laws of nature, but rather achieve mastery over these laws and utilise them to achieve a better standard of living. Transhumanism, a philosophical movement that aims to improve the human condition through the use of modern technology, has its roots in this thought. The ultimate goal of the transhumanist movement is to help envision and establish a future where, through the use of innovative science and technology, human abilities are enhanced to such great extents that we need not be at the mercy of the world around us, ultimately attaining the status of a posthuman. This term has many potential interpretations, but the underlying principle of each remains the same—posthumanism is a state beyond being human with respect to social systems, communication, physical and intellectual abilities, ethics, and philosophy. Transhumanism, meanwhile, has a narrower focus on the biological, behavioural, and intellectual enhancement of the individual, as a transitional state towards posthumanism. The term was first used by the English biologist Julian Huxley, in a 1957 essay, to refer to the belief that the human species will transcend to something greater, not just individually but as a collective, through the use of science and technology.

AUTUMN 2018



The most fundamental obstacle to progress on any astronomically relevant scale is death. The fear and anxiety of a looming demise naturally push us to ignore long-term goals in favour of short-term satiations. Therefore, one of the primary aims of the transhumanist movement is to eventually turn death from an inevitable certainty into a choice. FM-2030, an Iranian-American author, philosopher, and futurist, was a very strong advocate of the transhumanist movement. He was originally named Fereidoun M. Esfandiary, which he legally changed to FM-2030 for two reasons—first, in the hope of living to celebrate his hundredth birthday in 2030, and second, to break free of the traditional naming conventions that he thought were a restrictive mark of society's collective identity—gender, ancestry, nationality, religion—which he strongly believed to be an unpleasant relic of humanity's brutal past. According to him, as we use technology to transcend our animalistic traits, “survival emotions” like fear, love, jealousy, and competitiveness would no longer control our actions and restrict our viewpoint to short-term, ephemeral ambitions. We could choose how long we wish to live, and when we want to end our existence—the point being that death would no longer be an inevitability that controls us, but our own choice. He was not wrong regarding this prediction—biological, medical, and technological advancement is proceeding at a massive scale. Every day, researchers are discovering new techniques to prevent and cure the most untreatable illnesses, be it the common cold or cancer. After all, death is simply a consequence of the human body's internal mechanisms breaking down and losing functionality due to ageing, which results in an onset of diseases and maladies. So if cellular ageing and breakdown could somehow be completely eliminated, the result would be the prevention of death itself. The idea may seem improbable, but surprising progress has been made in this field over the past few years. Cutting-edge research in biotechnology has led to the identification, through genome sequencing, of genes responsible for ageing. The targeted replacement of these genes through CRISPR-Cas9 genome editing has shown great potential as a method to slow down, or entirely prevent, ageing. Another tool, the human-on-a-chip, is essentially a collection of microchips lined with living cells that replicate major human organ systems like the lungs, stomach, and heart; assembled together, these chips collectively model the human body. It may be too big a leap to say that we are close to eluding death completely and achieving immortality, but through anti-ageing technology we may eventually obtain the freedom to choose when we want to die, becoming the first generation of amortal human beings. Despite his long-sought wish for that very liberty of choice, FM-2030 succumbed to pancreatic cancer at the age of 69. What is interesting to note, however, is that his body currently lies in cryonic suspension. Perhaps one day, through the advancement of medical sciences and cybernetics, we will be technologically adept enough to prevent the degradation of human bodies and, as a consequence, extend our relatively short lifespans; maybe even reverse death itself. FM-2030 may yet walk the Earth again, finally observing a society whose inception he had predicted.

"If it is natural to die, then to hell with nature. Why submit to its tyranny? We must rise above nature. We must refuse to die." ~ FM-2030

The exact characteristics of a posthuman society are pretty much unimaginable at this point in time. For all we know, by the next millennium we may be nothing but nodes in a singular consciousness network, without any need for a physical form, or we could be ethereal entities whose constant physical form lingers till the end of eternity. But it is not difficult to visualize the ways in which we are taking the first steps towards becoming a transhumanist society. With the advancements in technology and medical science, we are already witnessing a standard of living much better than that of previous generations. Mortality rates, compared to fifty years ago, are at an all-time low. People who suffer from physical or mental disabilities receive the sort of care and treatment that did not exist even twenty years ago. For instance, in the 1940s, prosthetics for soldiers who had lost their limbs on the battlefield were often simply made from wood or tin, serving little more than a cosmetic purpose. Today, prosthetics have come a long way in terms of utility, production cost, and ease of replacement. Taking the example of prosthetic hands, electrodes attached to a key nerve ending can easily detect the action an individual intends to make and can allow for individual finger control, something which is markedly revolutionary. The loss of a limb suddenly does not feel like that big a deal. There will come a time when the functionality of a real human hand and a prosthetic limb is completely indistinguishable. And of course, prosthetic research will only spiral to new heights. Soon we will witness a wave of prosthetics which functionally surpass what a normal human body could ever achieve—be it lifting very heavy loads, or painting the most delicate strokes. This raises a very interesting question, the answer to which would no doubt spark several controversies—would you replace your perfectly fine, normal hand with a superior prosthetic one? At this point in time, the idea definitely seems unnatural or unthinkable to most. But on closer scrutiny, the notion of humans using technology to modify themselves synthetically and functionally is not unfamiliar at all. Individuals with broken limbs are provided with crutches or wheelchairs to enable mobility. If we face vision problems, we use glasses or contact lenses. People with weak hearts are implanted with pacemakers to prevent arrhythmia. Considering all these cases, we exited the natural cycle of organic beings a long time ago. We have surpassed natural evolution in favour of an artificial evolution suited to our desires. And if we continue along this path, we can hope to achieve complete independence from the constraints that Nature imposes on organic beings. With the growing popularity of the idea amongst the masses, transhumanism has become a recurring trope in various forms of media. Authors such as Isaac Asimov, Arthur C. Clarke, William Gibson, and Dan Brown have explored the life of humans in a transhuman society. Movies, TV series, and video games with techno-dystopian envisionings of transhumanist societies have steadily risen in popularity over the past decade. The common themes explored in these works are usually associated with the darker side of transhumanism—human bioengineering gone wrong, the division of society between transhumanist factions and “purist” factions (composed of individuals who do not support the ideas of transhumanism), or how a transhumanist society would further demarcate societal strata, as new technology can only be afforded by the rich, effectively robbing poorer individuals of the benefits of transhumanism. These are all valid arguments, and no doubt make for compelling stories. However,

these reasons are definitely not enough to deter our advancement into a transcendent age, where we are no longer loose puppets on the strings of some primeval force, but masters of our own destiny, able to choose which strings to cut and which to reinforce. It is no secret that at this point humanity's future looks bleak—years of natural resource exploitation have finally started affecting the planet irreversibly. Rising sea levels could leave several civilizations submerged within the next hundred years, and the ever-growing population will need to expand, most likely to suitable nearby planets. We cannot assume we will immediately find a planet with ecological conditions as favourable as the Earth's, so should we wait another two million years to evolve into a species better suited to a new planet's atmosphere? Or should we use our technological advancements to accelerate this evolution to a more tangible timeline? The answer is obvious. Cybernetics, prosthetics, and transhumanism must be embraced, for they are the only way humanity will come close to transcending the infinite universe of which we are an inconsequential part.

DAISY BELL (comic strip)


TEAM GEEK

Faculty Advisor: Dr. P. Sateesh Kumar
President: Samar Singh Karnawat
Vice Presidents: Pratyush Singhal, Morvi Bhojwani
Editor-In-Chief: Rishi Ratan
Design Head: Darshan Kumawat
Finance Head: Karandeep Singh
Web Manager: Yash Agrawal

Editorial & News

Design

Finance

Web

Abhishek Talesara

Chirag Sharma

Deepti Srivastava

Mehak Mittal

Rajsuryan Singh

Kunal Satpal

Saloni Agarwal

Yash Dev Lamba

Shashank PM

Aasiya Mansoori

Swapnesh Kumar

Vivek Chand

Abhay Mudgal

Bhavya Sihmar

Ekta Singhai

Supratik Das

Mohd. Arbab

Gouranshi Choudhary

Nitya Meshram

Aniket Kumar

Natansh Mathur

Harshvardhan

Rushabh Zambad

Niharika Agrawal

Sanjana Srivastava

Naveen Dara

Saumya Gupta

Ritvik Jain

Nikunj Gupta

Twarit Waikar

Shubham Joshi

Shivanshi Tyagi

Raghav Dhingra

Yash Khandelwal

Akshat Khandelwal

Vishal Goddu

Aastik A Tripathi

Aanand Vishnu

Aditya Ramkumar

Akshay Kamath

Ankita Gurung

Anurag Mukherjee

Idika Verma

Anmol Chalana

Arnav Sambhare

Kashish Jagyasi

Aviral Gangwar

Karma Dolkar

Parth Bahuguna

Harshil Mendiratta

Murtaza Bookwala

Shriya Ramchandani

Rhythm Gothwal

Nirbhay Nandal

Shruti

Sejal Gupta

Pushpam Choudhary

Vidhit Jain

Tiasa Sen

Executive Members

Abhishek Gupta

Harshit Sharma

Pragya Choudhary

Siddharth Saravanakumar

Akshat Bharadwaj

Kshitija Saharan

Pramit Singhi

Sushmita Senapati

Aman Tandon

Naini Panchal

Rinkle Jain

Tanmay Joshi

Ankita Bansal

Nikhil Yadav

Rohith A.S.R.K.

Utkarsh Gupta

Apoorva Agarwal

Prafulla Anurag

Shreya Jain

Vinam Arora


Enliven Salon Take Fresh

Hair

Body

Skin

Eat Fresh

Tattoo

VEG

NON-VEG

30% discount for all IITians Prem Mandir Road, Civil Lines, Roorkee

895-479-8973 789-519-2599

72/6, Opp. Woodland Showroom,

+91 976-099-9209

Haridwar Road, Roorkee

www.desitadkaroorkee.com

REAL HYDERABADI BIRYANI A Family Restaurant

Veg

Non-Veg

NOW OPEN For Birthday, Ring Ceremony, Kitty and other get togethers on DISCOUNTED RATES

Happy Hours 10% Discount between 3:00PM to 6:00PM Welcome Again Offer 10% Discount on presenting previous bill

Opp. Tanishq Jewellers, Near Center Point Hotel, II Floor, Clark Tower, Civil Lines, Roorkee

9758344844, 9758344944


US POLO

Parker The New You.

We are pleased to announce the opening of Parker Showroom for men’s ready made garments

Shirts start from 250/-

Shirts • Trousers • T-Shirts • Blazers Jeans • Suit • 3-Piece • Waistcoat

+91 992-716-6679 Radhav Madhav Apparels, Near Sudershan Plaza, Civil Lines, Roorkee


Special Discounts and Offers for IITians Financing Facilities Available

New Dell Exclusive Store

GANGOTRI ENTERPRISES ROORKEE Desktops

1519 Chaw Mandi, Railway Road Near Malviya Chowk, Roorkee

Laptops

Accessories

dellstore.ge@gmail.com

+91 976-071-9195 +91 812-660-4448

