HELIX 24-25 Vol. 6 Issue 1



About the Cover

This cover art presents an otherworldly figure, its face serene yet striking, framed by a dreamlike aura of luminous shapes, glows, and digital fractals. It’s both human and alien, serving as an embodiment of artificial intelligence—a paradox of familiarity and otherness, as if the universe itself took form to hold a mirror to human creation. Surrounding it are spirals of light representing the boundless creativity and delicate fragility of our technological ambitions.

This image serves as a metaphor for the paradox of AI—a creation that mirrors human ingenuity while remaining profoundly alien, challenging our understanding of progress, control, and creativity. Like the complexity of life itself, AI embodies both hope and uncertainty, reminding us that beauty often carries an edge of unease. This duality lies at the heart of this issue, celebrating the dangerously beautiful muse of artificial intelligence and the eerie mystery of a world that is equal parts familiar and new.

Cover art designed and conceptualized by

A Letter from the Editor

At the pinnacle of technological advancement is the development of artificial intelligence, what some call an “imitation of humanity.” It is exactly this imitation that begs the question: what truly makes a human? Is it our biology, our appearance, our empathy? Is it the struggles that we face every waking moment of our lives?

Whatever the answer may be, it seems that our excessive vanity may have led us to desire to be gods, marking the beginning of the loss of our essence as humanity. Yet, ironically, many now find themselves surrendering their creativity and freedom to AI’s machinations.

Thus, in a world where technology begins to share the same face as us, a demarcation must be drawn between the technological and the biological. In this issue, HELIX explores our coexistence with technology, zeroing in on AI, which has cemented itself as a part of our society. By surveying the different biological fields that AI has infiltrated, HELIX invites you to make a choice: should AI be condemned or accepted?

The HELIX Team

Having found its footing after four years of a global pandemic, the Ateneo Biological Organization (BOx) carries forward the vision set by the previous year’s leadership, preserving the organization’s legacy under new leadership while innovating upon tradition and adapting to an ever-dynamic campus environment.

BOx President Prince Soriano (4 BS LfSci) expressed the organization’s desire to continue and improve upon the successes of the previous school year, honing in on two thrusts: membership formation through engagement and the refinement of the organization’s impacts.

“We’re at a stage [where] we want our projects to grow. We wanted to [see] how we could develop our ideas with the things we already have and see where we could get creative,” Soriano stated, emphasizing that impact is more about being meaningful than being big.

A Radiant Reception

Ateneo BOx continues its tradition of starting the academic year by engaging members through formation activities and further establishing rapport among old and new members.

“The whole string of Freshie Welcoming Week, LaunchBOx, RecWeek, then the [General Assembly (GA)] managed to not just sustain but also surpass previous years’ [turnouts],” Soriano explained. He believes that the success of these early events sets the tone for the rest of the academic year.

Freshie Welcoming Week was held last August 7 to 9 at the SEC B Foyer, where old and new faces of the Biology and Life Science majors bonded through food, games, kwentuhan [storytelling sessions], and live shows to build close ties between freshmen and their seniors.

RecWeek 2024, under the theme Find Your Glow, showcased the organization’s openness to all and inspired renewed commitment among returning members through storytelling. According to VP for Organization Strategies and Research Aki Banguis (4 BS LfSci), a key part of their goal was shedding light on BOx’s initiatives and core advocacies.

“By emphasizing these core values, we hoped to inspire members to [participate] in and support the projects that align with their passions,” Banguis said, stating that the team wished to encourage everyone to find their unique “glow.”

Following the warm freshie welcome, the organization eased new members into its supportive community through LaunchBOx 2024: Venture. Held on August 22 at Leong Hall, the event centered on camaraderie, exploration, and connection.

“One of the main goals was creating an environment where freshmen feel comfortable,” said AVP-in-Charge Zia Carreon (2 BS BIO).

Carreon hopes that LaunchBOx will continue to grow as a tradition for incoming freshmen. “It’s about creating a foundation of support from day one,” she said. LaunchBOx 2024 set the stage for a meaningful start to college life, helping new students feel welcomed into their new academic and social community.

Luminously Welcoming the New and Old

Continuing the trend of maintaining traditions, BOx’s General Assembly (GA): Luminary was held on September 26 at the MVP Roofdeck. According to VP for Membership Affairs Yna Aala (3 BS BIO), the ultimate goal was to bring every BOx member together. Aala expounded on the bioluminescent theming, stating that they wanted to connect it to this year’s RecWeek theme. They wanted to set this project apart while taking inspiration from the fauna-centered themes of previous assemblies. “Now that you found your spark, it’s time to be the luminous version of yourselves,” she remarked.

In line with making the GA more meaningful for members, game dynamics in the form of kwentuhan sessions were held to build relationships among attendees.

Overall, Aala appreciates both new and returning members for their attendance and insightful feedback in the evaluations. She hopes that they left feeling they could find a place within BOx, given the diverse possibilities available within the organization.

Talk the Talk, Walk the Walk

Among the projects that promote the organization’s advocacies, the annual TALAB, held last October 15, continues to serve as a platform for raising awareness and promoting discourse. This year’s TALAB, entitled The U in Urban: Unraveling the Importance of Eco Spaces in Urban Communities, centered on integral ecology, inviting attendees to reflect on their roles in the broader societal and environmental context.

AVP for Formations Aubrey Labarda (4 BS LfSci) highlights how TALAB immerses participants in perspectives they might not otherwise encounter. “[The beauty of TALAB] is that it exposes you to different ways of thinking and shows you how interconnected everything really is,” Labarda said.

She highlighted a key point regarding the effects of human actions on local wildlife at La Mesa Eco Park. “[Even something as simple as leaving behind trash] can alter the eating patterns of [animals]. [It’s] a sobering reminder of our impact,” Labarda reflected.

TALAB encourages actionable outcomes and accountability for the planet’s future, with Labarda hoping participants will continue their environmental journey long after the event ends. “[It’s not just about learning], but about discovering how you can make a difference in your community [and how] taking small steps that lead to a greater impact,” she concluded.

Empowering Leadership

Together with talks of organizational advocacy, the Core Team Seminar (CTSem) aimed to enhance the leadership and collaboration abilities of its new core members, as well as prepare them for project implementation within the organizational standard. Spearheaded by AVP for Leadership and Empowerment Girlie Bautista (3 BS BIO), the event was held on October 12 at SOM 111.

This year, CTSem introduced a fresh approach to group dynamics, focusing on fostering deeper collaboration among members. “[By incorporating more group exercises], we aimed to improve interaction and understanding of each other’s strengths,” Bautista shared.

Despite its successes, the event faced challenges, such as initial participant engagement. In future iterations, the leadership team hopes to refine the seminar. “[We need to require on-site attendance as much as possible],” Bautista noted.

Bautista emphasizes that CTSem goes beyond preparing participants for BOx projects; it also helps them grow as leaders. With a more structured approach to team roles and better training, the seminar promises to continue its legacy of empowering the next generation of leaders within the organization.

Pushing Bioconservation

As part of the organization’s post-pandemic legacy, Into the Wild, held on October 25, 28, and 29, is one of the most iconic ways BOx promotes awareness of urban biodiversity and green spaces via nature walks around campus. Despite scheduling issues and unfavorable weather, Project Head Covy Angeles (4 BS BIO) noted that the project was a success.

He highlighted tour guide training sessions and the preparation of a virtual nature walk contingency as key aspects of the project’s success. Still, he felt there was room for improvement had there been more time.

Angeles believes that Into the Wild advocates for holistic approaches to conservation, given the importance of urban greenspaces. “Our [society relies] on the biosphere [for the] ecosystem services that keep them running so we aim to highlight their importance,” he said.

Establishing Bio-education’s Future Leaders

Within the realm of preparing young minds, the revived Edukit provided students this year with essential learning materials such as mock exams, flashcards, and study notes aimed at improving their academic performance.

AVP for Academic Development Shannen San Juan (3 BS BIO) shared that Edukit was created to address gaps in biology education. “[By improving education], we aim to produce better biologists and leaders who can contribute to society,” she expounded.

To boost engagement, EduKit has also incentivized donations with rewards, such as discounts on BOx merchandise. This initiative has helped gather resources from 2nd and 3rd-year students.

Looking ahead, San Juan emphasized the importance of expanding EduKit’s scope. “[We aim to] integrate more co-curricular opportunities and provide academic guidance, especially for second and third-year students,” she said.

Proudly Appreciating Life Through Movement

In the purview of exploring member passions beyond the core advocacies, BeatBOx successfully defended its crown at this year’s mystery-themed Rhythmin-Blue. Behind the scenes, BeatBOx Captain Kanah Saavedra (4 BS BIO) shared the team’s excitement in integrating what they learned from their winning bout last year.

“This year, our approach to preparation evolved significantly,” Saavedra stated. “Our training schedule for October and November was planned out in advance, allowing us to stay productive while also giving ourselves time to rest,” she said, emphasizing this balance as a key factor in their preparation and eventual win.

“Beyond securing a victory, we want to deliver a performance that reflects all the hard work, dedication, and passion we’ve put into this routine,” Saavedra said. This year’s dance featured a heart-wrenching and emotionally charged performance that captivated and resonated with audiences throughout the Henry Lee Irwin Hall, leading the 30-strong troupe to their win last November 22. In addition, Kim Ronquillo’s (4 BS BIO) direction also won Best Production alongside a team led by Sophia Limsoc (2 BS BIO) and Aira Kekim (2 BS BIO).

Unbound Futures

Amid the successes in the first half of this academic year, Soriano highlights the incoming Philippine Biology and International Biology Olympiads as well as the future iterations of beloved mainstay BOx projects: LoveBOx, Marine Rehabilitation, and SustainaBOx, among others.

However, he also expounded on the challenges of balancing internal affairs with addressing external forces, most notably the lingering effects of the pandemic and changes within campus. Between the shift of organization culture from one of isolation to cooperation, environmental issues within the campus, and the tense sociopolitical climate of the Philippines, Ateneo BOx continues to endure and find its place within such conversations, adapting and learning what it means to sustain the legacy of an organization as it approaches its 20th anniversary later this year.

“BOx is such a dynamic thing, and what [we] think of as BOx right now is not [the image the organization will have in the future],” Soriano said.

“[However], despite all these changes, it is still about life in all forms. [Making] sense of yourself [in] the context of all this [life] around us is what will ultimately move the organization forward,” he concludes. Moving forward, he challenges members to ask these deeper questions and think of BOx not just as a community or avenue for projects, but as something more meaningful.

Help-y Crawlies?: Nanobots for Combating Cancer

Nanotechnology refers to any technological activity that happens on the nanoscale: at the level of atoms and molecules.1 Nanobots, accordingly, are robots built to operate at this scale. With their minuscule size, they are popular in medical applications such as drug delivery for precise administration and on-site surgery inside the vascular system.2, 3 While nanotechnology is still emerging as a mainstream field of research, there have already been significant advances in its usage.

Notable types of recently developed nanobots include xenobots made out of living cells that undergo a new form of reproduction, magnetic bots that recognize cancer cells in blood, and folding-DNA bots capable of identifying tumor-related biomarkers for early cancer detection.4 All in all, nanostructures’ small size can aid in the discovery of novel phenomena on the nanoscale, the understanding of natural processes, and the development of innovative treatment methods.5

Folding Fascination

A team at Sweden’s Karolinska Institutet led by Björn Högberg, PhD, has long focused on building hexagonal nanostructures that activate programmed cell death receptors. Termed as “DNA origami”, these nanostructural bots create a cancer “kill switch” that activates only in acidic, cancerous pH levels to obliterate cancer cells, sparing healthy cells in the killing process.6

The DNA origami switch functions as a stimuli-responsive nanobot that folds under specific conditions, such as the acidic environments usually observed in tumor microenvironments (TMEs). It has a “double-cylinder design” with a hollow “head” that can conceal six cytotoxic ligands, molecules that target the death receptors in tumor cells and ultimately lead to apoptosis, or cell death. They do this by attaching to DNA strands that fit in a cavity within the hollow head.6

A key feature of these bots is their specificity in targeting cancer cells. By only interacting with a specific range of pH values consistent with TMEs, the bot can restrict its conformational activity and reconfigure its origami structure when needed, pushing the ligands out of the hollow head. When the ligands are exposed, they activate the “kill switch” responsible for apoptosis, especially in harmful tumor cells.6

The “kill switch” that is targeted by the DNA origami bots in these cells is the death receptor 5 (DR5) which mediates apoptosis. This protein is most prevalent in cancer cells, specifically in their exosomes or tiny membrane-bound sacs containing proteins, DNA, and RNA. Some cancer cells with the DR5 receptor are those from the brain, colon, kidney, ovaries, prostate, and lungs.7
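
To make the pH-gated behavior concrete, below is a deliberately simplified Python sketch of the decision logic described above: a nanobot that keeps its cytotoxic ligands hidden at normal physiological pH and exposes them only within an acidic range typical of tumor microenvironments. The class name, the pH window, and the ligand count are illustrative stand-ins drawn from this description, not the actual design files or software used by Högberg’s team.

# Toy model of the pH-gated "kill switch" logic described above.
# The pH window and behavior are illustrative assumptions, not the
# published DNA origami design parameters.

TME_PH_RANGE = (6.0, 6.8)   # assumed acidic window typical of tumor microenvironments
PHYSIOLOGICAL_PH = 7.4

class DnaOrigamiBot:
    def __init__(self, n_ligands: int = 6):
        self.n_ligands = n_ligands      # cytotoxic ligands hidden in the hollow "head"
        self.ligands_exposed = False

    def sense_ph(self, ph: float) -> None:
        """Reconfigure the origami structure only in TME-like acidity."""
        low, high = TME_PH_RANGE
        self.ligands_exposed = low <= ph <= high

    def effect_on_cell(self) -> str:
        if self.ligands_exposed:
            # Exposed ligands engage death receptors (e.g., DR5), triggering apoptosis.
            return "apoptosis signal delivered"
        return "inert (healthy tissue spared)"

bot = DnaOrigamiBot()
for ph in (PHYSIOLOGICAL_PH, 6.5):
    bot.sense_ph(ph)
    print(f"pH {ph}: {bot.effect_on_cell()}")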

While this method remains as effective as other mainstream cancer treatment methods, using DNA origami bots allows for specific and efficient targeting of malignant tumor cells while sparing healthy cells—a common drawback of conventional cancer therapy. Additionally, it paves the way for research into drugs with similar mechanisms that minimize or remove the possibility of side effects altogether.

The incorporation of nanotechnology into medicine and therapy is not a new endeavor. Given the emergence of studies such as Högberg’s, the limits of this hybrid subfield of biological research remain to be seen.

Högberg’s team hopes that future endeavors in nanotechnology will explore its possible side effects in the human body. There is also much to be considered with their current nanobot, as its targeting system could be further optimized to target different types of cancer cells.

“We now need to investigate whether this works in more advanced cancer models that more closely resemble the real human disease,” added contributor Yang Wang, PhD.6

As such, future ventures for nanotechnology would entail applications in precision medicine, particularly in drug administration. For instance, Feng et al. (2024) developed a sustained bacteria-responsive drug release platform that releases drugs only in the presence of toxins secreted by the bacteria that cause vaginosis.

By only activating in pathogenic conditions, the platform prevents excess dosage, compared to traditional treatment methods that are not designed to be preventative. Not only would the platform prevent negative side effects related to this issue, but it also reduces the risk of antimicrobial resistance since it can administer exact amounts of the drug required.8

Promising Possibilities

Nanotechnology plays a pivotal role in fighting infectious and inflammatory diseases.9 Since pathogen detection is labor-intensive and traditional vaccines only produce small amounts of antibodies that cannot fight off mutated pathogens, nanoparticle (NP)-based vaccine delivery vehicles and adjuvants have been developed to counter these issues.

Aside from providing novel bases for medicine development, they can also enhance and strengthen the immune response in a plethora of processes, from strengthening T-cell activation to prompting macrophages to release cytokines.9

In the case of inflammatory disease, NPs can control inflammatory factors and modulate the immune response at the site of inflammation, precisely targeting the root of systemic toxicity. They can also aid in understanding the inner workings of the body, such as receptor-ligand interactions and passive targeting. Most importantly, they are necessary in the creation of noninvasive visualization technology to track disease progression.9

Nanotechnology has also been recognized in the Philippines. However, the identified prioritization areas for this field have an environmental rather than medical focus: energy, food, agriculture, and the creation of nanostructures from minerals.10

Despite everything, the true potential of nanotechnology in the medical field is yet to be seen, especially from a Philippine perspective. As it stands, nanotechnology holds great potential in transforming healthcare in third-world countries given their limited infrastructure, scarce resources, and a high burden of infectious and neglected diseases.

For instance, nanotechnology has the potential to improve treatment strategies for neglected tropical diseases such as schistosomiasis and leishmaniasis, which are prevalent diseases that plague impoverished areas of the country.11 Prevention-wise, nanotechnology-enabled water purification systems can help ensure access to clean drinking water through disease-carrying insect extermination.12

Högberg’s team may not have been the first to ever consider nanotechnology as an alternative method in medicine, but they are certainly one of the first to push the boundaries of what a minuscule nanobot could do.

AI in Cancer Treatment: A Novel Bioinformatics Tool for Drug Discovery

In the last few years, the advent of Artificial Intelligence (AI) has allowed such technology to establish itself as a useful tool in a variety of disciplines—saving money, manpower, and time. One of the disciplines affected by it most is medicine, and, recently, AI has begun to play a role in refining cancer therapies.

AI meets Cancer Treatment

In drug discovery—particularly in high-stakes fields like cancer therapeutics—manual analysis of protein sequences is time-consuming, a bottleneck that the augmentation of AI seeks to address. With the ability to crunch data and identify targets at high speeds, AI excels at being both time-efficient and cost-effective.1

To this end, Ateneo Biology instructor Ivan De Guzman, MD, who has had both clinical and medical diagnostic practice, highlighted bioinformatics as the primary possible role of AI in healthcare. “Bioinformatics is heavily dependent on computing, and so would greatly progress with the help of AI,” he added.

Within the realm of medical oncology, Immune Checkpoint Blockade (ICB) therapy is a form of cancer treatment that may also greatly benefit from such augmentation. It operates by blocking inhibitory receptors, also known as immune checkpoints: proteins embedded in the cell membrane of immune cells that suppress immune signaling.2

This action serves to protect the organism from complications that arise from overexpression of immune responses, such as autoimmunity, when immune cells of the body target healthy cells by mistake. Blocking inhibitory receptors thus serves as an excellent mechanism for cancer therapeutics by allowing immune cells to maximize their response without inhibition.3

Currently, there are around 50 known inhibitory receptors in the human genome, but previous estimates suggest that there are more than 1600 receptors in total.4

Building on this knowledge gap, Akashdip Singh, PhD, and his Netherlands-based team brought to the forefront a highly sophisticated pipeline driven by machine learning, the TOPCONS algorithm, and AlphaFold to accelerate the discovery of potential cancer drug targets. As a result, the researchers were able to identify an additional 390 genes coding for inhibitory receptors.5

Inside the Pipeline

The first step involved the researchers using a genome browser to sift through all protein-coding sequences in the human genome, applying a search filter to locate a particular motif associated with the function of inhibitory receptors.5

The next phase involved using the TOPCONS algorithm, a computational tool designed to elucidate protein topology, or the specific orientation of the protein within the membrane. This allowed for the further selection of functional inhibitory receptor sequences and provided researchers with valuable insights for designing potential drug target strategies.6

Further analysis was conducted using AlphaFold, an AI-driven computational model that employs machine learning to determine the 3D structure of a protein. Similar to the TOPCONS algorithm, knowing the general structure of a protein in 3D space aids in drug design by allowing researchers to gain a more realistic model of the receptor’s biochemical environment.7
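
As a rough illustration of the first filtering step, the Python sketch below scans protein sequences for an inhibitory-receptor-associated motif using a regular expression. The use of an ITIM-like consensus (S/I/V/L–x–Y–x–x–I/V/L) and the example sequences are assumptions made for demonstration; the actual pipeline’s filters, data sources, and thresholds are those described by Singh and colleagues and are not reproduced here.

import re

# Assumed ITIM-like consensus motif for inhibitory receptors: [SIVL]-x-Y-x-x-[IVL].
# The real pipeline's exact motif definition and sequence database are not given
# in this article, so this pattern and the toy sequences below are illustrative only.
ITIM_PATTERN = re.compile(r"[SIVL].Y..[IVL]")

candidate_proteins = {
    "protein_A": "MKTLLVLAVSLYTEVLGSGSGS",   # contains an ITIM-like hit (SLYTEV)
    "protein_B": "MGGKWSKSSVVGWPAVRERMRR",   # no hit
}

def find_motif_hits(sequences: dict[str, str]) -> dict[str, list[str]]:
    """Return, for each protein, the motif matches found in its sequence."""
    hits = {}
    for name, seq in sequences.items():
        matches = [m.group() for m in ITIM_PATTERN.finditer(seq)]
        if matches:
            hits[name] = matches
    return hits

print(find_motif_hits(candidate_proteins))
# Proteins passing this filter would then go through topology (TOPCONS-like) and
# structure (AlphaFold-like) checks before being classified as candidate receptors.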

Once the inhibitory receptors were identified, the researchers sorted them based on how they shape immune responses. Some act as safeguards, keeping immune cells in check to avoid dangerous overreactions; others function as thresholds, ensuring that only the most significant threats set off a response.4

Interestingly, other receptors are more dynamic and adaptive to infections—temporarily boosting immune sensitivity for future attacks. Together, these receptors build up a network of immune regulation, and decoding their roles allows scientists to develop more patient-specific therapies.4

Beyond the Breakthrough

Despite the progress of AI, many challenges remain in terms of data quality and accuracy in real-world settings. De Guzman recognized that while AI does elevate research efficiency and precision, it “should be used to augment human thinking not replace it” in the nuanced “art-like” practice of medicine. Furthermore, he highlights that AI is far from being a “monolithic entity” and instead adapts to the specific needs of each field.

Both within the medical field and beyond, AI technologies are not without controversy. Looking at the darker side of these technologies, its augmentation into search engine algorithms is an idea that De Guzman is disconcerted by, as the anxiety of users who inquire about signs and symptoms online may be further exacerbated by the plethora of recommendations provided.

Despite this, the potential for effective treatments in cancer therapeutics only grows by the day. As scientists bring more targeted and safer therapies to the market faster, the continued augmentation of AI could only benefit public health. In the same light, this recent development only refines the prevailing understanding of AI and cancer treatment, bringing both humans and machines closer to a fruitful—almost symbiotic— coexistence.

Beep Bop Beep, System Activating!

Meet GENE, short for Generative Engagement Network for Intelligent Enquiries. GENE is an AI created to be no different from a human being, possessing not just intelligence but also wit and humor. GENE was made to be more than just a human assistant; it was created to be someone’s best friend, playmate, or sports partner. Those who possess a GENE robot claim that they forget that it (or maybe he, she, or they) is a robot.

Scientists of this time are mystified at the wonder that is GENE, pondering whether GENE may even be considered alive or conscious, with heady debates across the world spurred on by this matter.

The Mystery of Consciousness

Consciousness is defined as a dynamic, integrated, and multimodal mental process that occurs due to physical events in the forebrain. The parts of the brain essential to consciousness are the thalamus and cerebral cortex. This is the case because the thalamic intralaminar nuclei project axons to all cortical areas, and their destruction can result in a permanent loss of consciousness.1

While the brain’s anatomy lays the foundation for consciousness, further development is shaped by environmental factors.1 The brain integrates perceptually driven and motor signals with stored memories to generate conceptual content, indicating consciousness’s role in a “flexible response mechanism” (FRM) that enables non-automatic, adaptable behavior. Thus, we can say that consciousness is a collection of information such as external senses, feelings and emotions, physical states, and, crucially, information without qualia — subjective aspects of experience.2

Thus, consciousness, simply put, is a non-automatic or non-instinctive reaction to external stimuli that is built upon past experiences of prior stimuli, creating an aspect of learning. With such stacked layers of complexity composing and necessary for human consciousness, how it is perceived and understood at a human level must be defined and discussed.

The Power of the Mind

Science provides a very direct and concise definition of consciousness as presented in the previous section, but philosophy defines consciousness in a more complex, dynamic, multi-factorial, and perhaps, a more fun way.

At its surface, consciousness can be described in two ways, creature consciousness and state consciousness. Creature consciousness refers to the capacity of an organism as a whole to be conscious, while state consciousness focuses on individual mental states and processes that are deemed conscious.3

To differentiate the two, imagine a dog. Creature consciousness focuses on whether the dog, as a being, is conscious. On the other hand, state consciousness examines whether a dog’s acts of barking or smelling are conscious. The criteria of these two understandings of consciousness can be summarized as the ability of a being to be aware of itself (identity) and of one’s acts.

To further concretize this understanding of consciousness, compare a calculator and a human doing math. Though both can do basic arithmetic such as solving what 2+2 is, the difference lies in the awareness of one’s actions. The calculator also receives external stimuli when buttons are pressed. However, it is not aware of itself as a calculator or that it is performing the act of adding 2 and 2.

This is where a crucial difference exists between a human and a machine. A human adding 2 and 2 together is aware that he is adding 2 and 2 together. Simultaneously, a human would know at once that it is “a human that is adding 2 and 2 together”. This complex nature of a conscious being hinders people from calling machines or AI conscious, at least in the current day.

This aspect of consciousness serves as a counterargument to the biological perspective of consciousness as a collection of information on external senses, feelings, emotions, physical states, and, crucially, information without qualia.2 Philosophy presents the perspective that consciousness is the awareness of one’s awareness — a kind of meta-awareness. We are not just aware of our consciousness but also aware of why our consciousness is the way it is.3

In many ways, there are things that biology cannot explain about the complex nature of consciousness. When we reduce consciousness to neurons firing electrical signals at each other in certain parts of the brain, we tend to take for granted the power of our minds — the brain that discovered itself.

Before asking if AI is capable of such complexities, let us understand how AI works first.

Hey ChatGPT, how does AI work?

AI operates by mimicking human cognitive processes. It does this by creating artificial neural networks (ANNs) inspired by how the human brain works. These neural networks are composed of nodes, similar to neurons, that process information together. From this foundation arise deep learning and machine learning. Deep learning involves sophisticated ANNs designed to solve complex problems focused on visual or symbolic data. Machine learning, on the other hand, uses big data to identify patterns and make predictions.4

AI also possesses “organ-like” features such as eyes and ears. Modern AI has its ears in the form of speech recognition which enables computers to understand human speech, similar to how a child learns language.

This is complemented by Natural Language Processing (NLP) which enables computers to speak “humanly” and not robotically, serving as the AI’s mouth. Lastly, serving as AI’s eyes, Computer Vision allows computers to interpret and understand visual information.4
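
As a minimal sketch of the node-based processing described above, the Python snippet below passes an input through a tiny two-layer artificial neural network using NumPy. The layer sizes, weights, and activation function are arbitrary illustrative choices, not a model of any particular AI system mentioned here.

import numpy as np

# A tiny feedforward network: 3 inputs -> 4 hidden nodes -> 1 output.
# Weights are random placeholders; a real system would learn them from data.
rng = np.random.default_rng(seed=0)
W1 = rng.normal(size=(3, 4))   # input-to-hidden weights
W2 = rng.normal(size=(4, 1))   # hidden-to-output weights

def relu(x):
    """Each node passes on its signal only if it is positive, loosely like a neuron firing."""
    return np.maximum(0, x)

def forward(inputs: np.ndarray) -> float:
    hidden = relu(inputs @ W1)   # hidden-layer nodes process the inputs together
    output = hidden @ W2         # the output node combines the hidden signals
    return float(output[0])

print(forward(np.array([0.2, -1.0, 0.5])))  # one example input passing through the network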

All these have applications that we have all used at least once before. From having a random chat with ChatGPT to seeking AI’s aid in generating an essay that you have to cram, these mechanisms allow AI’s outputs to appear human-like, and to the naked eye, no one would be able to tell the work of AI apart from that of humans.

In this regard, it must be noted that there is no doubt about AI’s ability to perform human tasks. The biggest question here is whether AI is conscious. As discussed previously, there must be a distinction between mere acts and the awareness of one’s acts and oneself. Though AI may perform tasks more effectively and efficiently than humans, does this make them conscious acts?

I, We, Robot

We can draw so many parallels between what makes a human and AI. Humans and AI both depend on external stimuli to gather information which is processed through neural networks in the form of either neurons or nodes. This information is then used to generate text, speech, drawings, or even music.

At a glance, we may be drawn to believe that both humans and AI are similar machines, simply composed of different or alternative materials and components. Though biology contributes an abundance of insight into the complexities of consciousness, it struggles to define what the essence of consciousness is.

An easy answer to this question may be that consciousness is only possible if a being possesses a biological brain, composed of cells. However, this argument is lazy and fails to address the complexities of the question at hand. For example, scientists have recently been able to 3D print artificial brain cells.5 Theoretically, if every brain cell is slowly replaced by these artificial neurons in a human without any side effects, does this make the human any less human?

This dilemma raises issues of identity and of when something can be considered conscious or, perhaps, alive. This is why a more philosophical perspective on consciousness is very important, as it does not deal with the material aspect of consciousness. It does not consider whether a being is composed of cells or semiconductors, which allows us to focus on the issue of consciousness’s essence.

Despite the parallels drawn between humans and AI, it is the ability of self-consciousness that sets humans apart.

The “I” in Identity

Self-consciousness is that aspect of the human experience and of human reality that is both essential and necessary. This concept is closely linked to the pronoun “I”, and through this, we can own our experiences.

It is the unification of the ownership of one’s experiences (identity, or the “I”) and the experiences themselves that comprises a whole consciousness that is self-aware and self-conscious. It is in this understanding that I, as a being in an objective world and as an owner of my actions, am who I am because of my experiences and interactions with others.6

It is only through this understanding of consciousness and self-consciousness that beings come to have a sense of personhood, or the ability to re-identify oneself over time. True consciousness also allows for rationality. Rationality isn’t just the ability to solve problems; it is also the ability to be aware of one’s contexts and circumstances when making decisions.6

Such a simple understanding of oneself and reality tells us that AI, at least today, is merely an advanced input and output mechanism. It is still merely a tool that aids us in our daily tasks. With all these biological and philosophical insights on AI and consciousness, you may be wondering what this all means.

The Power of “I”

It is in the “I” that we can make truly good decisions — decisions that consider our contexts. We must realize that what is truly good is always relative to the self. We see that the self is essential in making rational and reasonable decisions that AI could never make. Whether it is choosing to go out with friends or deciding on the college course you wish to take, these are questions AI could never answer.

Ultimately, the message is this: whether it is asking ChatGPT to write answers for a job interview, philosophy paper, or a biological case study, we should not depend solely on its answers. With no contexts nor circumstances, with no aspirations nor dreams, AI will never truly represent what it really means to be human and how we should act—to see beyond the act which is the human person.

With the quick and continuous advancement of AI, it is our job to recognize that what makes every act meaningful isn’t the outputs nor the results, but the meaningfulness of the act and the experiences themselves. We must realize that though AI may yield us the best answers, whether for an essay or a quiz, the end goal must be self-improvement. In Aristotle’s words, a good life is a life of virtue—a concept no AI can live by.7

Shaking Doctor-AI Hands: Partnerships for the Future of Medical Diagnosis

People remember their “good old days” differently. For some, it could mean playing in the mud or playing those sci-fi video games with friends. For others, it could mean rushing to the living room to watch the new TV show after school. Whichever description rings true for you, there’s no denying that the good old days are just that, old.

Much of what we saw in the good old days has now become reality. From Wall-E depicting the concerning reality of dependence on technology, to Tony Stark birthing Jarvis from raw data, down to Star Wars’s medical magic. Naturally, every fan held a desire for their favorite franchise to be made a reality, but there is no need to hope any longer.

Today, what we once considered to be an impossible fantasy has easily become a present-day reality. As our world continues to reach new heights in innovation, so does our desire to coexist with technology. This desire has given rise to Artificial Intelligence (AI) in one field in particular: healthcare. Presently, the medical field has diversified its efforts to better care for human society. Despite the revolutionary heights AI has reached in its integration into the healthcare system, AI is still a partner not many are keen to shake hands with just yet.

Understanding Old Wires

Despite its only recent emergence in the lives of its fleshy counterparts, AI and humanity have had a kind friendship, collaborating since the mid-1950s. It started with Alan Turing questioning whether machines could exhibit human-like intelligence—a thought that triggered the first industrial applications of AI, focusing on machines replicating human decision-making processes. The creation of the first robotic arm in 1955 exemplified this trend.1

Only a decade later, society witnessed a surge in AI research when MIT’s Joseph Weizenbaum developed ELIZA, the world’s first chatbot, which could simulate human conversation. Similarly, the creation of Shakey, the first robot capable of interpreting and executing human instructions, further cemented the practicality of AI.1

However, it was in the ‘70s and ‘80s that AI began its integration into medicine through INTERNIST-1, the first AI medical consultant, which leveraged a search algorithm to generate clinical diagnoses based on patient symptoms, highlighting AI’s potential to support physicians in diagnostic processes. For antibiotic prescriptions, scientists also developed MYCIN.1

Today, the landscape of AI in medicine continues to evolve rapidly, with innovations such as machine learning (ML) and neural networks driving progress in areas like radiological image interpretation and dermatological pathology identification. All of these advancements are in the name of assisting clinicians in diagnosis, treatment planning, and patient management.1

AI in medicine showcases a trajectory marked by continuous innovation and expanding applications, driven by a pursuit to improve and succeed. With recent innovations in ML, like ChatGPT, Character.ai, and Quillbot, further easing human society into a future of human-technology companionship, it is inevitable that AI involvement in fields like medicine is here to stay.1

A Double-Edged Scalpel

Thanks to these accumulated advancements, it is possible today to visualize a future where a machine acting as an extra pair of eyes, hands, and ears in the surgery room becomes commonplace or, more realistically, where AI accompanies us in our day-to-day healthcare.

For instance, early disease detection through smartphone applications and wearables, and disease diagnosis through algorithms based on histopathological examination and medical imaging, are just some of the tasks AI could aid with.2

One example is the DermaSensor: a wireless, handheld device that utilizes AI-powered spectroscopy to analyze lesions at both cellular and subcellular levels noninvasively at the point of testing, allowing a more efficient diagnosis of skin cancers such as melanoma, basal cell carcinoma, and squamous cell carcinoma. Remarkably, a study found that its use decreased the number of cases of overlooked skin cancers by half.3

Over time, through ML, AI becomes more accurate, as it is now capable of picking up subtle signs and patterns of early-stage cancer that the human eye might miss. A study found that a doctor-AI partnership proved effective, with 20% more breast cancers detected than by a group of radiologists alone.4

Apart from applications, doctor-AI partnerships can also raise the bar of diagnostic accuracy by complementing human weaknesses and enhancing our capabilities through augmented intelligence. Working side-by-side, the AI provides precise and speedy algorithmic predictions while the doctor provides a “human touch”, using the gathered data to supplement their judgment, leading to more informed decisions on patient diagnosis.5

In some sense, more time, money, and possibly, more lives were saved.

For instance, Ateneo Biology instructor and physician Ivan De Guzman, MD describes AI as having the potential to be a powerful reservoir for medical knowledge, allowing the medical weight of experiences shared with AI to be passed down to the next generation of doctors.

One noteworthy example would be Google Health’s Med-PaLM, which has been trained on comprehensive datasets of medical data to accurately and elaborately answer complex medical questions for both training and practicing medical professionals.6

Even so, despite all these advancements and potentials over the past century, the field itself is still in its infancy, so its role in medical practice remains rare and, many times, even controversial—reasons why many doctors and patients prefer to continue leaning on more traditional methods.

After all, there are plenty of factors to consider in a process as complex as medical diagnosis. Take, for example, the observation and interpretation of symptoms, which is highly subjective. This is where such a partnership falls short; though AI can provide predictions based on given patterns, it struggles with the subtleties of human experience.2

This limitation only becomes more apparent for cases involving rare diseases where there is a lack of standardized data for the AI algorithm to draw from. This raises concerns about overfitting, fallibility, and potential bias, leading to a potential replication crisis in the field affecting even the hallowed ChatGPT upon which many millions rely.2

It’s no wonder then that so many patients are concerned with solely using AI for their medical concerns. Beyond the basic trust in human medical comprehension, such a relationship also calls for empathy and emotional understanding—the human touch—something AI will never fully replicate.7

A Physician’s Replacement?

Thus, despite unspoken fears of these artificially intelligent beings replacing us, rest assured that AI will not be replacing physicians any time soon. Instead, it could be better to posit that AI is merely an advanced tool to support a medical professional’s expertise, enhancing the “good” to become “better”.

De Guzman believes this view to be true. “I think AI would be a supplement para maging [to be], eventually, a tool that will be utilized. But, it would not replace human thinking, kasi kailangan mo talaga ng [because you really need] human touch in regards to the art of diagnosing and treating patients,” he reasoned.

It’s in the name: artificially intelligent. De Guzman explained that algorithms can treat patients but each patient is different both in their state and in their dynamic treatment. “They would have to adapt depending on the patient, and patients are treated differently,” De Guzman said.

At the end of the day, humans are the ones taking the lead in the trajectory of human medicine and healthcare. Even if AI were to be normalized in medicine, it would still require training a new generation of physicians who are well-versed in both clinical practice and digital technologies. This may call for incorporating digital health literacy and AI education into medical curricula to equip future medical professionals with the necessary skills to effectively leverage AI in clinical settings.2

“Remember, medicine is an art form. It is not strictly a science. The art of diagnosing [and] the art of keeping in touch with your patients with regards to their condition will require more than just computing power, kailangan talaga ng [it really needs] human intervention,” De Guzman surmised.

A Promising Partnership

While the idea of a medical partnership between man and machine sounds promising, this partnership must be approached with caution, especially in the context of the high-stakes world of medicine, where even the smallest decision could be the difference between life and death.

It is important to consider that AI does not aim to replace humans; rather, it seeks to augment them, continuing to develop so it can accomplish more complex and outstanding feats that replicate human capabilities. It is exactly these points that should be focused on if AI usage is to gain footing in the medical field.

Thus, the responsibility is still on us, more specifically on our thinking. Rather than believing that a bionic leg will replace faulty flesh, think of it as a bionic hand on the shoulder, ready to assist humans in their medical endeavors while nurturing this man-and-machine partnership where both learn from each other.

Because, after all, AI is made by humans, and it is on humans that AI will base itself.

Written by Chevin Paul Gealone & Andrew Tumulak / Illustrated by Jathniel Villanueva

Being in school a decade ago primarily looked like this: handwritten notes, heavy textbooks, and purely onsite classes. Fast forward to today, technology has undeniably pushed and revolutionized the boundaries of how we approach education—even more so with the recent emergence of AI tools and websites.

Following the launch of ChatGPT in 2022, a large chunk of the AI tools developed has been catered toward students, helping them with tasks beyond previous integrations in proofreading and translating.1 ChatGPT is available for direct questions followed by an immediate response, similar to search engines. PopAI processes long text files for instant summarization, useful for comprehending challenging readings. Given the use of examinations for learning assessments, AI tools can now even generate practice quizzes from a single presentation file.

Given the saturation of AI tools for education, a pressing question emerges: what does this imply for educational fields whose foundations are grounded in skills centered on test taking, scientific writing, and research such as biology?

Generating Knowledge

Understanding the perspectives of students towards AI use is paramount in shaping proper measures in the regulation and integration of AI within educational spaces. Since academic integrity is a highly individualized concept among students, the positive uses of AI in education are highly dependent on how a student exercises the freedom of utilizing such tools.

For 2nd-year biology student Maureen Gerelingo, AI proves to be useful in breaking down and simplifying complex topics into simpler terms for classes like genetics or chemistry. Because generative AI acts like a search engine, the process of elaborating and providing examples is facilitated by the conversational responses of AI. Unlike the process of reading long passages in a textbook, AI can curate the data it presents based on a user’s question, helping streamline the process of learning basic content in an engaging and interactive way.2

Such use of AI for learning is similar for 2nd-year biology student Marga Bagacina. For her, AI is the last resort to ask for explanations regarding difficult topics in place of directly consulting with a professor or a classmate. In a way, AI not only facilitates the actual process of understanding educational materials but also provides avenues for learning regardless of setting—bridging some gaps of accessibility to learning resources.3

Gerelingo also shares another use of AI: ”When I’m working on papers or assignments, AI is like my go-to editor, too. After I’ve written a draft, I’ll ask it to look over what I wrote and give me feedback.” Feeding her written works into AI has also helped her organize her thoughts when writing, thus ensuring the quality of submitted papers.

Despite this, both Gerelingo and Bagacina expressed some concerns when using generative AI, particularly when it comes to the accuracy of the information it generates. “The primary challenge I’ve faced with AI is the reliability and accuracy of the information it provides. There are instances when it gives incorrect information,” Gerelingo shared. Similarly, Bagacina recalls her encounters with AI having ‘hallucinations’, rendering the tool “cumbersome and inefficient to use” as she constantly has to verify its accuracy.

Studies have confirmed the existence of “AI hallucinations”—the tendency of generative AI to present information that sounds and looks plausible but in actuality deviates from factual or scientific basis.4 Due to possible misleading information, both students stated that they have become very inclined to fact-check the claims of AI when using it. “It is also our responsibility as users to verify any kind of information that we receive, rather than just taking it at face value,” said Gerelingo.

There is an undeniable truth that AI can be useful for students’ everyday lives, given the rigor of processing large volumes of materials in learning such as biology. Still, there remains an undefined discourse on where students and professionals alike should stand before such advancements. Despite the recency of the technology, probing into its possible effects on learning is relevant, as the foundations built during the undergraduate level are crucial for higher education and research.

After all, what does it mean when the most interactive, accessible, and engaging learning tool for some students is an AI chatbot filled with possible inconsistencies?

Hallucinating Truth

Gerelingo and Bagacina’s experiences with AI hallucinations are not uncommon. One reason for hallucinations in natural language processing is source-reference divergence. In this scenario, the information contained in the model’s selected target may not be included in the content it uses as a reference for its truthfulness.5

Otherwise, the model may not have a process for factual, objective alignment, especially if it is designed to respond in a subjective style. These inconsistencies in the analyzed content can encourage the model to generate untrue statements.5

Even without much internal divergence, AI can still hallucinate. Models being trained on expansive amounts of data can result in them memorizing the knowledge within their parameters, which they may disproportionately use for novel downstream tasks. While this training can facilitate generalizability and coverage, it can also induce hallucination when excessive and unrelated information is applied to specific inputs.5

AI’s interaction with the production and reproduction of knowledge casts a shadow of doubt over its efficacy as a reliable learning tool. For sharp students like Gerelingo and Bagacina who decide to fact-check the claims made by AI, the effect of these hallucinations can be circumvented. However, it is not difficult to imagine taking AI-generated content at face value as the path of least resistance.

Research from the first few years of AI being available for educational tasks shows contentious interactions with the educational system. Most of their upside lies in the novel ways they innovate in studying. Students essentially use AI like a search engine tailored specifically for their academic needs, using AI tools to search for answers to queries, provide homework assistance, and personalize learning.5

However, just as AI may generate responses biased toward the data it is trained on, so would a student who takes the technology for granted. AI use for schoolwork can unwittingly reinforce students’ biases, pushing them further from a critical appraisal of their tasks. If they are not inclined to cross-reference AI’s claims due to convenience, they might view the technology as the default means of accomplishing their assignments.5

Training the Future

With AI being a radically significant piece of the modern student’s toolbox, the structures that define education are also challenged. In Ateneo, it is now common for course syllabi to include provisions for AI use in classwork. While some courses adopt a more traditional approach in limiting its use, others have more liberal permissions, so long as the nature of AI use is disclosed.

Educators worldwide have considered AI’s blurring of authorial identity as a challenge in maintaining an atmosphere of academic integrity. Traditional categorizations of academic misconduct fail to appropriately wrangle with a technology that radically changes how students approach written summative assessments. Thus, while students can view AI as a solution to learning obstacles, educators express less optimism about authenticity in AI-assisted learning.6

Consequently, ethical ambiguities remain in designing policies that simultaneously recognize AI’s presence and reasonably frame its use within the university’s intellectual ecosystem. As expected, much of the policy innovation in universities is in redesigning assignment guidelines to delimit AI, with the primary factor being academic integrity considerations.

Intellectual property is a particularly important construct, with policies including provisions on duly assigning authorship for AI-generated content within existing laws of copyright and fair use. Institutions such as Cambridge University assert that AI-generated work does not satisfy the criteria for authorship. Still, in light of ever-improving and easily accessible AI tools, the need for well-defined policies remains.7

Beyond notions of legality and ethics, AI encounters challenges with the enterprise of education in itself. Pedagogical paradigms cannot ignore the introduction of such tools because they stand to alter how students engage with information and proceed with education.

For example, despite the convenience of using AI, studies highlight that overreliance on such tools can deter students from engaging in more thorough research due to the ease of access to information from AI, leading to complacency and reduced creativity.8 More so, AI in research poses negative effects by diminishing academic writing decision-making capabilities—linked to the weakened ability of learners to analyze and grasp information independently.9

In the name of preserving time and effort, relying on AI also presents concerns with regard to critical thinking and impeded cognitive abilities as integrating AI into daily tasks like learning may overshadow human biological processing abilities.10 Similarly, studies in neuroscience point out that disengagement from challenging cognitive tasks posed by academic rigor may sacrifice the activities of key neural regions like the amygdala, prefrontal cortex, and hippocampus.11

While AI is certainly helpful, the texture of its aid remains contingent on the learner’s proclivities. When the use of these tools is inclined to personalized learning and tempered by accurate background knowledge, they can prove to accelerate the learning process. However, when the technology is used to circumvent the cognitive loads embedded in class tasks, students lose out on genuine growth and may instead perpetuate mistruths hallucinated by AI.

Given that a major portion of biology students are to become future doctors and scientists, both fields demanding academic rigor, critical thinking, and competence in scientific writing, the tentative research regarding the direct effects of AI on one’s learning calls for us to be critical of our personal use of these tools.

What Lies Ahead

Beyond education, AI is an increasing presence in workplaces that biology and life sciences majors may enter. For future medical professionals, AI models are currently being researched as diagnostic tools. For aspiring molecular biology researchers, AI has advanced protein folding prediction methodologies and continues to aid in molecular docking research.12

As AI reconstructs the professional sphere, it is crucial to consider how its prevalence in undergraduate education disposes students to think when they enter the workplace. Notably, AI’s potential to challenge critical thinking may leave biology students unprepared for the high-level tasks expected in their careers—ones where the sequestration of intellectual processing to AI may be detrimental.

Nevertheless, the question of AI in education remains open, especially as the field continues to mature. While we have always used tools to our advantage, AI is unlike any other technology in its capability to challenge human thought itself.

Still, hope remains that for students, educators, and universities, AI can become a force for genuine, equitable, and enriching learning.

Molecule Makers: AI in Crafting Next-Generation Novel Drugs and Enzymes

Proteins are described as the chemical building blocks of life, composed of a string of amino acids that combine and fold into a three-dimensional structure in different ways, which gives rise to their diverse functions.1

A protein’s shape is a clue to its role: as its amino acid chain folds, it transforms into a dynamic structure. Like LEGO bricks, amino acids are different pieces that fit together to form vital biochemical molecules, from the enzymes that catalyze life-sustaining reactions to the structural components of cells and signaling molecules such as hormones and antibodies.

Designing effective drugs and medicines often hinges on understanding protein structures and interactions, as many treatments target proteins or mimic their functions. However, the complex nature of proteins has historically made this a painstaking process. Proteins are fundamental molecules in all living organisms, and drug development is a complex process with rigorous methods, so can artificial intelligence (AI) revolutionize how we design new proteins and medical drugs, significantly accelerating the process?

Nobel-worthy Breakthroughs

In 2024, the Nobel Prize in Chemistry recognized two monumental contributions in the field of protein technology.2 The first half of the Nobel Prize was awarded to David Baker for his work in computational protein design, while the second half honored Demis Hassabis and John Jumper for their advancements in protein structure prediction.

Together, these three pioneers have transformed the way scientists approach protein design and production by harnessing the power of artificial intelligence to decode and engineer proteins. Their breakthroughs have not only pushed the boundaries of protein construction but also opened new frontiers in medicine, bioengineering, and sustainable technologies.2

Scientists have long been at work to decipher the molecular puzzle of proteins and their complex structures. Among these innovators, Nobel laureate Demis Hassabis entered the field of proteins with a background in neuroscience, applying this to his development of AI models with improved neural networks.2

Drawing on Hassabis’ expertise, his team built an AI model called AlphaFold, trained on databases of all known protein structures and amino acid sequences, to predict a protein’s three-dimensional structure from its amino acid sequence. With AlphaFold, they managed to predict protein structures with almost 60% accuracy, already a celebrated feat, but one that fell short of the roughly 90% accuracy needed for a prediction to be deemed successful.2

Just as they were met with a technological checkmate, a new recruit, John Jumper, hopped in to help build a new version of the AI model. It employs neural networks called transformers, which can sift through vast arrays of data to find patterns more efficiently and piece this information together to generate the predicted configuration of a protein’s shape.2

Hassabis’ and Jumper’s team trained the new model, AlphaFold 2, on databases of all known proteins and amino acid sequences; now, thanks to their efforts, the three-dimensional structures of previously uncharacterized proteins can be predicted with far greater accuracy.2
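
For readers curious about what such a prediction looks like in practice, below is a minimal sketch in Python for retrieving one of these precomputed structures. It assumes the public AlphaFold Protein Structure Database hosted at alphafold.ebi.ac.uk; the endpoint path and the response field names (such as pdbUrl) are assumptions about that service, not part of the Nobel-winning work itself, so treat the snippet as illustrative rather than authoritative.

import requests  # third-party HTTP library

# Illustrative example accession: human hemoglobin subunit alpha (UniProt P69905).
ACCESSION = "P69905"
# Assumed endpoint of the public AlphaFold Protein Structure Database API.
API_URL = f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}"

response = requests.get(API_URL, timeout=30)
response.raise_for_status()

# The service is assumed to return a list of prediction records, each of
# which links to the predicted atomic coordinates in PDB format.
record = response.json()[0]
coordinates = requests.get(record["pdbUrl"], timeout=30)
coordinates.raise_for_status()

with open(f"{ACCESSION}_alphafold.pdb", "wb") as handle:
    handle.write(coordinates.content)

print(f"Saved AlphaFold 2 prediction for {ACCESSION}")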

Meanwhile, Baker conceptualized a new formula for protein production. Instead of inputting amino acid sequences to predict protein structure, he devised a way to enter a desired protein structure into software that generates recipes for the amino acid sequences needed to construct entirely new proteins.2

Using Baker’s publicly accessible algorithm, Rosetta, it is now possible to create novel proteins with brand-new functions by searching databases for short fragments that resemble pieces of the desired protein structure. Whereas previous attempts at protein design focused on modifying existing proteins to better tailor their functions, this feat revolutionized the field of protein production and research.2
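
To make the fragment-matching idea concrete, here is a deliberately simplified Python toy, not Rosetta’s actual algorithm or data: it scans a tiny, invented library of short fragments for pieces whose local structure resembles a desired blueprint, then stitches their sequences together into a candidate design.

# Hypothetical fragment library, invented for illustration: short sequences
# paired with the secondary structure they tend to adopt
# ('H' = helix, 'E' = strand, 'L' = loop).
FRAGMENT_LIBRARY = {
    "AEELLKK": "HHHHHHH",
    "KIEELKA": "HHHHHHH",
    "VTVTVSG": "EEEEEEE",
    "GGSGGSG": "LLLLLLL",
}

def best_fragment(target_motif: str) -> str:
    """Return the library sequence whose structure best matches the target motif."""
    def score(sequence: str) -> int:
        # Count positions where the fragment's structure agrees with the target.
        return sum(a == b for a, b in zip(FRAGMENT_LIBRARY[sequence], target_motif))
    return max(FRAGMENT_LIBRARY, key=score)

# Desired design: a helix, then a loop, then a strand, assembled piecewise.
blueprint = ["HHHHHHH", "LLLLLLL", "EEEEEEE"]
designed_sequence = "".join(best_fragment(motif) for motif in blueprint)
print(designed_sequence)  # AEELLKKGGSGGSGVTVTVSG

Real design software scores candidates with physics- and statistics-based energy functions rather than a simple position match, but the search-and-assemble logic sketched here is the same basic move.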

All of this managed to do the unthinkable: design a completely new protein from scratch, with a unique structure that does not exist in nature.2

A Continually Learning Machine

To explore the use of artificial intelligence in protein technologies, it is crucial to delve into the experiences of leading researchers in the field who employ these technologies in their studies. Cambridge University postdoctoral research associate Aaron Macauyag, PhD, shared insights from his research on optimizing plant-based systems producing bioactive proteins, particularly in the context of regenerative medicine.3

In his work, he uses AI technologies like Baker’s Robetta server to combine AI-driven predictions with hands-on laboratory experimentation. This enabled Macauyag to generate the predicted 3D structure of the protein of interest in his study.3

Aside from the convenience and high performance the algorithm provides, another standout feature of the Robetta server is its accessibility. A cornerstone of Robetta is that it gathers and incorporates previous experimental data in making its predictions, in contrast to purely mathematical models such as AlphaFold.2

The real-world applications of these AI technologies are also vast, particularly in the field of biopharmaceutical protein production. Protein engineering, which involves modifying proteins to enhance their functionality and stability, makes medically, biochemically, and industrially relevant proteins more efficient and long-lasting.

Macauyag described, as an example, how AI can help increase the binding strength of antibodies used in medicine, making them more effective in targeting diseases.4 He added that, in the future, AI technologies may continue to transform how biochemicals and biopharmaceuticals are designed and produced, enabling faster and more targeted solutions for a range of medical, industrial, and biomanufacturing needs.

Necessitated Integrity

However, the use of AI technology in research is not without its caveats. The power of AI tools in protein research, such as Robetta, lies in the datasets they rely on. Specifically, Macauyag pointed out that AI platforms utilize comprehensive collections of experimental data sourced from previous research all over the world.

Unlike purely mathematical models, these tools must grapple with the complex interactions of proteins in solvents and natural environments, which cannot be fully computed through theoretical calculations alone. The reliability of AI-driven results therefore depends on the accuracy and authenticity of the data contributed by scientists around the world.

Because these datasets form the backbone of the predictions, AI protein prediction technologies are ultimately limited by the quality of the information those same scientists input and provide.

Thus, while open platforms allow researchers to upload their findings and contribute to this collective effort, the approach runs the risk of biased or limited data, as it relies heavily on the integrity of its contributors.

With this, Macauyag emphasized that this collaborative approach should foster a shared sense of responsibility and trust. He maintains that as these AI models “continuously learn” from the data they collect, we too must continually strive to uphold that data’s credibility.

Made in Our Reflection

In this day and age, images of factories filled with robotic arms and of AI chatbots permeate every aspect of our lives, and derivatives of these concepts are an especially time-tested theme in popular culture. From 2001: A Space Odyssey and WarGames to Ex Machina and Terminator, the idea that robots or AI can evolve to go against their programming has led many over the years to dismiss the technology outright: it is “evil.”

Then, in contrast to the frequent violence of robot movies like Terminator, there are beloved films with robotic characters such as WALL-E, The Wild Robot’s Roz, and Star Wars’ R2-D2: unthreatening depictions of robots and AI, personified with empathy. They are human to the soul.

Both the violent and the non-violent AI and robots went against their programming, yet we fear only the former. How come? It seems we are so focused on the thought that AI and robots could rule the world and replace us intellectually that we fail to ask the most basic questions of all: how and why?

I posit it is no more complicated than thinking about it in human terms. Violent, human-replacing robots are no different from dictators who once promised their people prosperous societies, and empathetic robots are no different from humanitarians or, in some cases, a growing child. The point of primary contention might just be a proven biological concept: imitation.

Wired to Think

As the saying goes, “imitation is the sincerest form of flattery,” but is there a limit to a robot’s capability to imitate? Fundamentally, robots and their AI are made and wired to think and solve specific problems for humans and, if need be, for themselves. But what happens when they are placed in unique situations that require solutions not within their programming?

Biologically speaking, a human placed in that situation would use their lived experience to conjure up a solution. If it fails, they try again, as billions of humans have over our entire existence as a species. Like a database, we draw on our own experiences and the experiences of others to come up with the best solution: our own “programming.”

Thus, it may be possible that an artificially intelligent robot placed in such environments could come up with something similar for its own purposes. Evolution as we understand it is most evident morphologically and genetically, but in the case of robots, it may not be a stretch to say they could evolve intelligently, emulating how we or other animals get over hurdles.

After all, in a world where “survival of the fittest” seems the most popular avenue, evolving to be merely “fit enough” lets individuals, artificial or otherwise, live on. This is, of course, how humans came to top the food chain over centuries of drawn-out development, and subservient robots could simply emulate us to live. This sort of emulation, though, goes beyond mere imitation; we might call it independence.

Thinking to Live

Many cultures will say that humans were endowed with independence by a god for, among other things, the purpose of exploring life. Empirically, that bodily and intellectual independence may also have been given to us for the purpose of simply living and surviving. We were made, or we developed, because we want to survive, an everlasting biological trait. Thus, in the same way, these hypothetical independent machines may also learn the ability to think for the sake of their own survival; we gave them that, much like a god would.

Realistically, no robot today holds the limitless energy to sustain itself like a biological being, and so its capacity to think and become independent is limited. In the near future, however, that could change; paired with continually learning artificial intelligence, we could see such machines in day-to-day living soon.

Some find peace in that while others are fearful, but contrary to all the claims of world domination, it may be more prudent to keep each other’s mindsets in check. In our constant fearmongering over the extinction-level “what-ifs” of AI domination, there are hundreds of other scenarios where technology need not be antagonized. It is not entirely beyond this realm to imagine a world where robots simply live like the rest of us. In these lines of thought, where we find ourselves faced with anything but us, our vanity clearly shows.

Living to Love

The thought of independence is at the root of one of the scarier rationalizations people make: if a robot can think independently for living and loving, then it can also think for death and destruction.

Yet the flaw in this line of reasoning is that we fail to recognize that a robot’s intelligence is merely imitative of our own. Our own intelligence is made from our human nature: the odd mixture of love and hate that binds our soul to reality. Thus, when robots made to serve humans see us, they emulate us. Robots and AI are what we are and what our intentions are: monkey see, monkey do.

Inherently then, in a society where we laud corruption, kill each other, and abuse our bodies, “free-thinking” robots will seemingly either follow the trend or break from it to save themselves. It seems that we are at fault for our own doom because, much like a human, they are looking after themselves, and it is “kill or be killed.”

This line of reasoning also serves as a reminder that robots and AI are built by humans; if we keep making machines for destruction or teaching them these nefarious attitudes, then our doom is as much our fault as it is the AI’s. It is our own abuse that could bring out the worst in them.

As such, if we live not by our fears and manifested evils but by the principle of living and loving, we might build a society that is founded on a love for caring and serving each other as opposed to merely loving ourselves.

So, when humanity does become a bygone era and robots, AI, and the manifested machinations of our minds are all that remain, they might actually try to live within processes driven and rooted in a love for aid and care.

Yes, they are not human, but they do not need to be human to be our friends. If we loved by the sole criterion of belonging to our own species, then we would let go of our precious devices now and leave our beloved pets in the streets to fend for themselves.

Loving to Be

In all of this, I hope we look at the emergence of robots and AI as not quite the world-ending scenario we all think of, but rather an opportunity for reflection. Because the destruction of our species may more likely be our fault than it is AI’s.

So, in a world so terrified of Terminator, why not ponder the possibility of WALL-E, Roz, and R2-D2? In a far-off world where robots and AI become independent, they might become human in their own way, because ultimately they are made in our reflection.

It’s true, robots aren’t human—they are metal and strands of wire probably coated with paint or some smooth material to look capitalistically clean—yet, what is a human but trillions of interlinked cells and fat cushions?

In some ways, we are just like robots too: wired to think, thinking to live, living to love, and loving to be as we are and as we’ve been told to be. So who is to say that robots cannot or should not break that barrier too? Is love, understanding, or the yearning to be useful, then, just a miscalculation on our part? A malfunction of our robotic spirit?

Don’t get me wrong, it isn’t bad to reflect on the possibility of something bad happening; it has been humanity’s trait to be wary of the unknown. Yet it also isn’t bad to think otherwise, that there might be some good in having our robotic friends free. A little kindness to them goes a long way if they do take over.

Mechanical Communities

The Automation of the Human Heart

Throughout the course of history, what is considered “normal” changes with every discovery. What we once attributed to a “God of the Gaps” can now be explained with empirical data. With a deity’s anger now reduced to a meteorological phenomenon, much of what was once cosmic horror has become commonplace. Yet as humanity continues to grow and develop, what we once thought was limited to fiction is put to the test.

Artificial intelligence (AI) was once a popular trope in science fiction, an entity synonymous with the shift from the natural to the technological. Now, we turn to Alexa to lock our doors, to Siri for basic tasks on our smartphones, and to ChatGPT to think for us. As AI progresses, our technology becomes near-indistinguishable from what we consider of this Earth, including ourselves.

Yes, it seems, even the very definition of what makes a human is also changing with the tides of advancement and innovation.

What better way to display the merging of the human and the mechanical than the themes and events of NieR: Automata, a game praised for pushing the boundaries of exploring the human experience?

Much of the game revolves around its central conflict: the occupation of Earth in the year 11945 AD. In this dystopian future, what was left of humanity fled to the moon due to the tyranny of the machine lifeforms. These machines were weapons created by aliens who sought to conquer Earth. To mount its offense, humanity designed an elite military force of androids known simply as YoRHa. Throughout the game, you play as 2B, an android specialized in the battle against machines.

Initially, the player is led to believe that the machines are static and unchanging. Many of the enemies encountered at the beginning are seen repeating words: die, leave, kill. Yet, as you continue to explore the desolate ruins of Earth, you find that these machines, once assumed to be incapable of higher thinking, have been forming communities. What the Sumerians first achieved with their civilization in Mesopotamia, machine lifeforms in the distant future had formed with ease.

There are several of these colonies found throughout the world of NieR. You encounter the first in the Great Desert, a seemingly violent group hellbent on preventing 2B from progressing beyond the metropolis and into their territory. As 2B continues to kill and destroy this group of machines, the player may begin to notice that these lifeforms all don different kinds of wooden masks and cloaks reminiscent of desert tribes.

Another community of machines is found in the Great Forest, a battalion of knights sworn to protect the Forest King. Here, another aspect of human society is present: hierarchy. If the desert had nomadic machines operating on the basis of shared territory, the forest had machines sworn to protect the nobility that ruled its domain.

However, the most polarizing of communities is a village of peaceful machines on the outskirts of the metropolis. Known as “Pascal’s Village” after its leader, these machines have disconnected from the general network that connects them all. Furthermore, Pascal regularly communicates with 2B and other android communes for trading. Yet another aspect of society is portrayed in the form of commerce between heterogeneous groups.

At this point, it is clear that these machines operate on systems of belongingness, even going so far as to reinvent the nuclear family. It could be argued that their programming included an adaptive mechanism that allowed them to mimic human society. Yet this is impossible: the aliens created these machines for destruction. Thus, we are led to the conclusion that these lifeforms developed consciousness and chose to form communities of their own volition.

Love within a Metal Heart

Within each community are interpersonal relationships formed between individuals. It is “natural” for humans to seek out the acceptance of their peers, just as it is “natural” for machines to be unfeeling and emotionless. Yet, NieR continues to prove that machines are capable of evolving into entities similar to humans.

Alongside the newfound sentience developed by machines were emotions and desires. Machines within communities were shown to have a positive rapport with one another, often fighting 2B to prevent harm from falling on their loved ones. Love, an inherently human emotion, is found in otherwise unnatural places in NieR.

One machine known as Simone, one of the many bosses in the game, desired to be the most beautiful so that the object of her attraction, another robot, would notice and come to love her. This love does not bear fruit. In the end, she is heartbroken and grief-stricken. She begins to target and maim YoRHa androids specifically because she finds them beautiful, so that she may adorn herself with their parts. However, not all forms of love in this game are as grotesque.

In the second route of the game, you play as 9S, a scanner model partnered with 2B who enriches the gameplay with his clever quips and lovable personality. This route presents his perspective on the events of 2B’s route. Here, we discover that his fondness for 2B runs deeper than simple admiration and subordination. He is found to miss her in her absence, endeavors to befriend her, and ultimately comes to desire her. Even love proves inescapable for androids.

If love permeates throughout the narrative, so does hate. The two main antagonists, Adam and Eve, were created from the viscera of machine lifeforms combining flesh and blood. Out of the whole cast, the twins most mimic the human form. This imitation is deliberate, as it aligns with their mission to understand humanity.

Except, why are they fighting against YoRHa? If YoRHa were modeled to mirror humans as the pride of their race, then does this central conflict not become a human-to-human conflict? If machines and androids, devoid of all the biology that defines a human, begin to display human characteristics and emotions, then should they be considered human too?

A God a Machine Can Believe In

Another inherently human concept is faith. Although machines can be programmed to have a certain set of beliefs, it is nearly impossible to mimic the depth of trust humanity has in different deities, customs, and people.

This is once again disproven by the mere fact that conflict exists. If it was impossible for machines to develop faith, then why would they fight so fiercely against the androids?

Machines populated the world because they believed it was their right as the new “owners” of Earth, whereas androids believed the planet had been unjustly taken from their creators, the humans. As such, it is possible that the androids’ near likeness to their human masters led them to develop an almost nationalistic view of the conflict. Furthermore, both machines and androids are seen going to extremes to prove their ideals.

As a way to cope with the seemingly unjust conflict, certain machines in the Abandoned Factory developed a religious society that sought to free themselves from all pain. This inevitably leads to radicalism: believers turn to blowing themselves up to harm nonbelievers and ascend to a higher plane. Religious heterogeneity is present in this machine society as well, with paganism running rampant among forest-dwelling machines who worship a large albino moose as their god.

In a humanless world, what fills the gaps are machines and androids. So, is it not reasonable to assume that this absence caused them to develop into something akin to humans?

Empathy in a Humanless World

Throughout the game, the player’s character progressively begins to regret targeting machines maliciously and without reason, since NieR portrays machines and androids as beings capable of empathy and kindness even when they are made of metal and synthetic materials. Just like us, they are capable of forming personal connections, experiencing emotions, and acting on their beliefs.

In this sense, being human encompasses more than just a list of events each person must undergo; rather, it is a spectrum of different feelings and experiences. There is more to humanity than the definition we confine ourselves to. The ending of NieR makes us ponder what really makes us human: is it truly our mere biology, or something greater?

References

Addressing the Pressing Osmosis of Generative AI Towards Bioeducation

1. Teubner T, Flath CM, Weinhardt C, van der Aalst W, Hinz O. 2023. Welcome to the Era of ChatGPT et al. Business & Information Systems Engineering [Internet]. [accessed 2024 Nov 11]; 65(2):95–101. https://doi.org/10.1007/s12599-023-00795-x.

2. Chen Y, Jensen S, Albert LJ, Gupta S, Lee T. 2022. Artificial Intelligence (AI) Student Assistants in the Classroom: Designing Chatbots to Support Student Success. Information Systems Frontiers [Internet]. [accessed 2024 Nov 11]; 25(1):161–182. https://doi.org/10.1007/s10796-022-10291-4.

3. Herawati AA, Yusuf S, Ilfiandra I, Taufik A, Ya Habibi AS. 2024. Exploring the Role of Artificial Intelligence in Education, Students Preferences and Perceptions. AL-ISHLAH: Jurnal Pendidikan [Internet]. [accessed 2024 Nov 11]; 16(2):1029–1040. https://doi.org/10.35445/alishlah.v16i2.4784.

4. Alkaissi H, McFarlane S. 2023. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus [Internet]. [accessed 2024 Nov 11]; 15(2). https://doi.org/10.7759/cureus.35179.

5. Ifelebuegu AO, Kulume P, Cherukut P. 2023. Chatbots and AI in Education (AIEd) tools: The good, the bad, and the ugly. Journal of Applied Learning and Teaching [Internet]. [accessed 2024 Nov 11]; 6(2). https://doi.org/10.37074/jalt.2023.6.2.29.

6. International Journal of Information and Learning Technology. Emerald Insight [Internet]. [accessed 2024 Nov 11]; https://www.emerald.com/insight/2056-4880.htm.

7. Alqahtani N, Wafula Z. 2024 Oct 22. Artificial Intelligence Integration: Pedagogical Strategies and Policies at Leading Universities. Innovative Higher Education [Internet]. [accessed 2024 Nov 11]; https://doi.org/10.1007/s10755-024-09749-x.

8. Zhai C, Wibowo S, Li LD. 2024. The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review. Smart Learning Environments [Internet]. [accessed 2024 Nov 11]; 11(1). https://doi.org/10.1186/s40561-024-00316-7.

9. Kim Y, Lee M, Kim D, Lee S-J. 2023. Towards Explainable AI Writing Assistants for Non-native English Speakers. arXiv [Internet]. [accessed 2024 Nov 11]; https://arxiv.org/abs/2304.02625.

10. Tolan S, Pesole A, Martínez-Plumed F, Fernández-Macías E, Hernández-Orallo J, Gómez E. 2021. Measuring the Occupational Impact of AI: Tasks, Cognitive Abilities and AI Benchmarks. Journal of Artificial Intelligence Research [Internet]. [accessed 2024 Nov 11]; 71:191–236. https://doi.org/10.1613/jair.1.12647.

11. Zhai C, Wibowo S, Li LD. 2024. The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review. Smart Learning Environments [Internet]. [accessed 2024 Nov 11]; 11(1). https://doi.org/10.1186/s40561-024-00316-7.

12. Service RF. 2020. “The game has changed.” AI triumphs at protein folding. Science [Internet]. [accessed 2024 Nov 11]; 370(6521):1144–1145. https://doi.org/10.1126/science.370.6521.1144.

Molecule Makers: AI in Crafting Next-Generation Novel Drugs and Enzymes

1. Schauperl M, Denny RA. 2022. AI-Based Protein Structure Prediction in Drug Discovery: Impacts and Challenges. Journal of Chemical Information and Modeling [Internet]. [accessed 2024 Nov 11]; 62(13):3142–3156. https://doi.org/10.1021/acs.jcim.2c00026.

2. Fernholm A. 2024. The Nobel Prize in Chemistry 2024. Brzezinski P, Linke H, Åqvist J, editors. Nobel Prize [Internet]. [accessed 2024 Nov 11]. https://www.nobelprize.org/prizes/chemistry/2024/popular-information/.

3. Macauyag A. 2023. High level production methods and unique characteristics of intracellular and secreted acid-stable human basic fibroblast growth factor in Nicotiana benthamiana. [accessed 2024 Nov 10]. https://doi.org/10.18910/91907.

4. Haydon I. 2024 Feb 3. AI generates proteins with exceptional binding strength. American Society for Biochemistry and Molecular Biology [Internet]. [accessed 2024 Nov 11]; https://www.asbmb.org/asbmb-today/science/020324/ai-proteins-with-exceptional-binding-strength.

5. Drews J. 2000. Drug Discovery: A Historical Perspective.

