

SPECIAL THANKS TO THE KENT PLACE ETHICS INSTITUTE, MRS. CONTI, AND DR. REZACH FOR GUIDING LODESTAR THROUGH THIS PROCESS
Lodestar is an academic ethics journal that is reviewed, edited, and published by high school students. It is dedicated to promoting the application of ethics in everyday life, educating communities about contemporary ethical issues, and encouraging curiosity and discourse surrounding ethical dilemmas. Lodestar provides an avenue through which students can participate in scholarly dialogue. Lodestar publishes original submissions, case studies, editorials, essays, reviews, and reflections that meet required standards. It is sponsored by The Ethics Institute at Kent Place School in Summit, New Jersey.


Our publication accepts articles, case studies, artwork, poetry, and essays relating to or commenting on ethical questions, topics, and dilemmas for middle and high school students. Please visit The Ethics Institute’s website at www.ethicsatkentplace.org with any further inquiries.
T A B L E O F C O N T E N T S
Mission Statement
Table of Contents
Meet the Editors
Letter from the Editors
Meet the Leadership Team
Rethinking Responsible Artificial Intelligence for the Environment
By Katie MacKay ’27
Of Mice and Lab-Grown Men: A Bioethical Analysis of Bodyoids
By Sophia Ivy ’29
Case Study: Chalking up ChatGPT
By Ananya Mittal ’29
Chalking up ChatGPT: Enhance not Replace
By Maanika Nanda ’29
Chalking up ChatGPT: A Double Standard Debate
By Claire Cherill ’26, Liv Peters ’26, and Kara Asuncion Hoang ’27
Case Study: Employing with Artificial Intelligence
By Richa Malpani ’29
Employing with Artificial Intelligence: The Impact on Humans’ Lives and Agency
By Tessa Chow ’27
Employing with Artificial Intelligence: Should there be Stricter Regulations on AI Use in Hiring?
By Angie Ge ’28, Alisha Gupta ’28, and Chloé Jenkins ’28
Protected or Restricted?: The Ethics of Press Regulation at the Pentagon
By Krisana Manglani ’28 and Paige Sulkes ’28
The Ethics of True Crime: Awareness or Exploitation?
By Siya Sharma ’29 and Siara Gupta ’28
Medicalizing Menopause: Interventions & Impacts
By Naomi Ravenell ’26
The Impact of Telemedicine on Patient-Provider Relationships
By Erin Kim ’27
The Walking Incubator: The Role of Women in Population Health and Regulation
By Madeline Mon ’26
Between Empathy and Efficiency: The Ethics of Deploying Artificial Intelligence in Healthcare Settings
By Skylar Li ’26
Res Nullius and the Ethics of Outer Space: Who Truly Owns the Final Frontier?
By Aadya Kumar ’30
Sora, not Sorry: The Ethical Considerations of OpenAI's Newest Video Generator
By Gitanjali Kameswaran ’30
Is It Ethical for Scientists to Edit Human Genes?
By Pearl Dawadi ’30



Olivia Peters is a senior at Kent Place who has attended the school for 13 years. She has been co-editor-in-chief of the Lodestar journal since her junior year, helping the journal reach publication in its inaugural year. Outside of Lodestar, she serves as Green Key president, president of KP Democrats, and is a member of the Student Affairs committee. She is extremely proud of the publication and is excited for these pieces to make their way into the hands of readers within the Kent Place community and beyond.
Claire Cherill is a senior at Kent Place and has attended the school for 13 years. She has been deeply involved with The Ethics Institute since middle school, where she created ethical conversation cards and began her involvement in writing ethics case studies. Since then, she has enjoyed being a member of the Ethics Bowl team and participating in the Bioethics Project. Outside of ethics, Claire plays rugby, is a Science Olympiad captain, and enjoys reading and baking. She is excited to apply what she has learned to leading Lodestar and sparking an interest in ethics in the Kent Place community and beyond.
Kara Asuncion Hoang is a junior who has explored ethics throughout her Kent Place experience. She served as one of the first Art and Image Editors during the inaugural year of Lodestar last year and is interested in how creativity and ethics come together in design and technological spaces. Outside of school, Kara has created a program called ARTiculate, an ongoing project that promotes inclusivity and accessibility in tech education and spaces. In school, she is also a leader in APICA, where she helps teach and promote Asian culture and community at Kent Place, and serves on Green Key, welcoming prospective students to the Kent Place community. After months of dedication, she is excited to see all the hard work the Lodestar staff and contributors have put in to make the journal come to life.
Dear Readers,
Welcome to the Fall ’25 edition of Kent Place’s Ethics Journal, Lodestar! We are so excited to bring our edition to you and showcase the accomplished work of our hardworking Middle and Upper School students.



Lodestar was created as an opportunity for students to think ethically about topics they deem important. The Ethics Institute carries on the Kent Place mission to promote ethical decision-making while empowering girls to be confident and intellectual leaders who advance the world. With programs like the Bioethics Project, in collaboration with Georgetown University, and our nationally ranked Ethics Bowl team, the Kent Place Ethics Institute is a key feature of our education and of the empowerment of strong women of the future, and we are very proud to be an extension of it.
The term “lodestar” refers to a bright star that guides travelers safely through the night. The light is steady, a point of guidance that offers direction and reassurance when the path is unclear. The term “lodestar” carries another meaning as well: a source of inspiration when choices feel difficult. This lodestar reminds us that even when the path forward is foggy or hidden, there is always a light to follow, a guiding presence to challenge us and to inspire us to navigate through life with integrity and kindness. At Kent Place, ethics serves as our lodestar. It asks us to reflect and to consider not only the consequences of our actions, but also the kind of people we aspire to be.
This year, the journal includes sections for AI and STEM, Bioethics, and Society & Culture. All topics are chosen by our writers based on their interest in analyzing and interpreting real-world situations through ethical lenses and opening perspectives for dialogue and understanding. We hope that as a reader, you find something interesting, something to relate to, something to ponder, or something striking. This issue includes a range of media: art pieces, analyses, case studies, and more.
Our goal for this year’s Lodestar publication is to have two issues: one in the fall and one in the spring. This will allow our students to further explore what they find important and fascinating. Together, we aim to be more interactive, have more student collaboration, highlight more middle school scholars, and overall build on last year’s fantastic edition. We are extremely proud of our writers, editors, and art designers, whose hard work and dedication made this issue possible. As you explore these pages, we hope the perspectives and ideas shared here inspire you to think deeply about ethics and its role in our world. Let this journal serve as your lodestar, helping illuminate the world from every perspective.




Tara Khurana ’26
Copyeditor in Chief
Claire Pierson ’28
Social Media Coordinator
Nia McDaniel ’27
Communications Coordinator
Chelsea Chen ’28
Art & Image Editor

Zara Sharma ’30
Middle School Liaison
Annabelle Chow ’26
Copyeditor in Chief

Emmie Kimball ’27
Social Media Coordinator

Jahnavi Ponnolu ’27
Communications Coordinator
Kate Lee ’26
Bioethics Editor

Liliah-Deborah Nicholas ’30
Middle School Liaison






Caden Almond ’28
Head Artist
Mira Lalani ’27
Staff Writer
Katie MacKay ’27
Staff Writer
Paige Sulkes ’28
Staff Writer
Anna Bultó ’26
Staff Writer
Tessa Chow ’27
Staff Writer

Skylar Li ’26
Staff Writer

Ellie Ritter ’28
Staff Writer
Laila Gandhi ’27
Staff Writer

Krisana Manglani ’28
Staff Writer



Siara Gupta ’28
Art Contributor
Eva Joglekar ’29
Art Contributor
My’Asia Bennett ’27
Art Contributor
Divyanshi Bansal ’31
Art Contributor





BY KATIE MACKAY
Each time a generative AI model crafts an image or writes an essay, an invisible current of energy surges through rows of humming servers. These exchanges may seem trivial; after all, what’s a few watts for a clever chatbot? In reality, the scale is enormous. According to Menlo Ventures' 2025 portfolio, "The State of Consumer AI," global generative AI tools now engage around 500-600 million daily users and over 800 million weekly users. Multiply that by billions of prompts, and the energy and water consumed become staggering. According to research published by the Massachusetts Institute of Technology, training and running large models has more than doubled the demand on U.S. data centers in a single year, with cooling systems alone consuming roughly two liters of water per kilowatt-hour.
These are not abstract harms: communities in semiarid areas compete with tech firms for access to the same reservoirs that cool supercomputers, and face intensifying strain on local power grids and water resources. From an ethical standpoint, the question goes far beyond efficiency, probing the moral architecture that guides our society’s technological progress. Under a utilitarian or consequentialist lens, moral rightness depends on outcomes: if AI delivers greater societal benefit—through medical research, education, or accessibility—than the environmental damage it causes, it could be justified. Yet utilitarianism demands full accounting. Are the emissions, water depletion, and social inequities borne by local populations outweighed by the cognitive conveniences of generative models? Furthermore, to what extent are those cognitive conveniences even considered beneficial?

How can a society that disagrees on the value of this technology begin to weigh its importance against its repercussions? By contrast, a deontological perspective shifts focus from outcomes to duties. It asserts that certain actions, such as exploiting natural resources without consent or externalizing harm onto vulnerable communities, are wrong in themselves, regardless of utility. For example, if a data center in a drought-prone region draws heavily on local groundwater to cool AI servers, thereby reducing community access to potable water, then under a deontological framework, the tech company is violating its duty of respect toward that community’s basic needs.
Even if it is argued that generative AI accelerates human knowledge and discovery, the moral legitimacy of its infrastructure depends on respecting the intrinsic rights of those affected. Under this view, transparency and equitable governance are obligations rather than optional considerations.
A more contemporary lens, virtue ethics, would ask what kind of society we become when we normalize technologies that externalize invisible costs. Virtue ethics is less about isolated actions or their consequences, and more about the character traits, such as temperance, justice, prudence, and responsibility, that we cultivate through our collective choices. This question implicates not only corporations but also consumers, who indirectly sustain these systems through everyday use. Consider this: if we normalize building ever-larger AI models without regard for their carbon emissions, water use, or supply-chain mining, then we may foster a corporate and consumer culture of excess, disregard, and environmental complacency. On the other hand, if engineers, companies, governments, and users choose models that prioritize energy-efficient architectures, renewable power, and local ecological equity, they are practicing the virtues of moderation, fairness, and foresight. In real terms, adopting energy-efficient hardware, shifting to low-carbon data centers, and reinvesting in the regions hosting such infrastructure reflect virtuous behavior.
By this measure, the moral question isn’t only what we build, but who we become in the process.
Ultimately, the ethics of AI’s environmental impact invite a collective assessment of these frameworks. In a world of hyper-innovation, it is important to ask not just what technology can do, but what humanity ought to demand from it. Whether through the consequentialist lens of maximizing benefit, the deontological consideration of respecting rights, or the virtue ethicist’s call for moral character, each framework highlights the same truth: innovation divorced from responsibility erodes the very progress it claims to advance. Generative AI’s environmental footprint is not an inevitable byproduct of progress, but a reflection of our priorities. The challenge, then, is not simply to make AI faster or smarter, but to ensure that its intelligence mirrors our highest moral reasoning.


BY SOPHIA IVY
For years, works of science fiction have been fascinated by the idea of cloning humans. The prospect of creating a human body artificially seemed impossible. However, I recently came across an opinion article from the MIT Technology Review by Carsten T. Charlesworth, Henry T. Greely, and Hiromitsu Nakauchi, who proposed the use of new technologies to create human “bodyoids,” or human bodies created in a lab without consciousness or the ability to feel pain. Bodyoids are a potential new technology that could become a reality in our not-so-distant future. Today, we have almost everything we would need to create this technology, as the authors point out: “Although it may seem like science fiction, recent technological progress has pushed this concept into the realm of plausibility” (Charlesworth et al.). The conception of this technology compels us to consider principles of autonomy and non-maleficence. The prospect of bodyoids has the potential to do a lot of good for society, but raises many important and relevant ethical dilemmas concerning consent, non-maleficence, and what it means to be alive.
Bodyoids, “a potentially unlimited source of human bodies, developed entirely outside of a human body from stem cells, that lack sentience or the ability to feel pain,” are not yet a reality, but have the potential to exist in the near future (Charlesworth et al. 2025). Scientists have successfully developed ways to use stem cells to produce lab-grown organs and embryos, which led researchers Charlesworth, Greely, and Nakauchi to propose a new application of these developments. By utilizing genetic engineering techniques, scientists could inhibit development of the brain to create human bodies that cannot feel pain and lack consciousness. The bodyoids would maintain basic functions necessary for sustaining life without having characteristics that we often use to distinguish ourselves from other living things, such as the ability to think and form relationships with others.

Artwork by Eva Joglekar

A practical use of bodyoids could be experimenting upon them to develop drugs and other treatments. Life-saving drugs can take 10-15 years and billions of dollars to develop. The use of bodyoids for clinical trials could speed up the process of drug development and limit or eliminate the need for problematic and often inaccurate animal testing. Bodyoids could be a tool used to see what negative consequences drugs or other treatments could have on different parts of the body and organs such as the liver or the kidneys. While not everything could be tested using bodyoids, they have the potential to be effective tools to prevent dramatic side effects in further trials and beyond. Additionally, the United States is facing an organ shortage crisis. Bodyoids could alleviate the shortage by providing organs for patients in need of a transplant.
While the technology appears to be brimming with possibilities, we must consider its ethical implications. An important consideration is the long and fraught history of using humans in clinical trials. The Tuskegee Syphilis Study is a famous example of this. From 1932 to 1972, Black men in Tuskegee, Alabama, were subjected to a study on syphilis without their consent and denied treatment for the disease, despite its ready availability. Many died, and many of their wives and children were infected. The National Research Act of 1974, which emphasized the importance of informed consent, was a result of this incident. The Nuremberg Code is another set of guidelines for researchers that emerged from tragedy. The code was created in response to the atrocities committed by Nazi doctors during World War II, who performed cruel experiments on prisoners without their consent, disregarding any respect for their bodily autonomy. In response, the code highlights the importance of informed consent and the autonomy of patients.
Another bioethical principle is non-maleficence, or the duty of a doctor to not harm a patient. While bodyoids lack consciousness and the ability to feel pain, they have the same physical characteristics as humans. Therefore, we must consider whether or not the principle of non-maleficence still applies. Even though bodyoids would feel no pain, they would resemble humans, and doctors may feel hesitant to use them simply as tools for testing. As a society, we value respect for the human body and recognize that all people are entitled to be treated fairly and with dignity. To some, experimenting on bodyoids may be viewed as violating that respect, and they may think that because bodyoids are human-like, they should not be put in harm’s way. The question of whether bodyoids are entitled to that same respect is sure to be divisive.
The principle of non-maleficence can also be extended to our current system regarding the potential harm to those participating in clinical trials. Today, we often use animal testing for the initial stages of clinical trials before involving humans. According to researcher Meredith Wadman, this is one of the many reasons why only about 10% of drugs make it past clinical trials. Animal experimentation can be “a waste of time, money, and lives” (Wadman 2023). Because animals do not share our human anatomy, the results of these trials can be misleading, potentially causing humans to be harmed during the next stages of clinical trials. Testing new drugs on bodyoids first could limit the reliance on animal testing and make trials involving humans safer.
While the goal of creating bodyoids without consciousness would be to create human bodies that do not feel pain or process emotion, the concept raises ethical concerns about informed consent. An argument against the ethicality of animal testing is that animals are unable to consent to experiments, yet are subjected to pain and cruelty. Could this line of reasoning still be extended to human bodyoids, given their inability to feel pain?
Another implication of the technology is the prospective widespread use of bodyoids. Currently, there is debate around the rights of humans who lack consciousness for medical reasons, such as those considered brain dead or experiencing unresponsive wakefulness syndrome, also called a chronic vegetative state. Some believe that it is ethical to sustain their lives in case they regain consciousness, while others believe that it is the end of their life and we should instead expend resources on more hopeful cases. The commodification of human bodies has the potential to diminish the status of people with these conditions, which could have negative consequences for them and their families. Bodyoids would never have consciousness, which would render them legally dead or never legally alive. This could pose a threat to the status of individuals in states such as unresponsive wakefulness, who may not receive the same protections as they do today.
To summarize, bodyoids have the potential to help many individuals by providing safer alternatives to animal testing and creating a more accurate and efficient process for the development of new drugs and treatments.
However, bodyoids raise ethical concerns that cannot be ignored. Even though the technology has not yet been implemented or fully developed, it has already sparked debates surrounding the ethics of human testing and the ethical sourcing of human bodies. The prospect of speeding up drug development and eliminating the need for animal testing would be revolutionary, but at what cost? Informed consent, non-maleficence, and the status of individuals who lack consciousness from conditions such as unresponsive wakefulness are all important ethical considerations.
Ultimately, technologies such as bodyoids raise the question of whether our humanity is defined by what we are biologically, or by who we become based on our thoughts and experiences. Some may consider bodyoids to be human but not alive, and others may consider them in their own category of science fiction come to life. Regardless of whether or not bodyoids become a reality, it is important for these conversations to be had and for us to consider what it truly means to be human.


BY ANANYA MITTAL
Jia is a middle school student at Pine Hills Public School. From the start, science has consistently been her weakest subject. She couldn’t remember the different science concepts and how to apply them. No matter how hard she studied and how many times she looked over class notes for her tests and quizzes, she struggled to achieve even a B-minus. Consequently, when her friend told her about how she uses ChatGPT to quiz her and give her study guides, Jia decided to also try ChatGPT as a tutor. Through ChatGPT, Jia was able to get thorough explanations about any topic that might have confused her. It was essentially a free and accessible tutor for her. Soon after, Jia started to improve in science and felt more comfortable actively participating in class. Having generated study guides also helped her manage her time better, allowing more time for her hobbies.
However, when the school found out that students were using ChatGPT for schoolwork, they decided to ban AI entirely for students. They did not want AI to be doing the work instead of the students and figured it would be best to remove AI from the picture to avoid these complications. This new rule threw Jia off-track. Science became harder to understand, but she did the best she could with the studying skills ChatGPT taught her. She noticed her friends who also used AI to help them study started falling behind, too. As the trimester began to end, more assignments were stacking up and it was difficult to complete them on time.
After winter break, Jia noticed that her science teacher’s slideshows were different. Her teacher, Mr. Green, had slides that used to be filled with color and diagrams, but were now just black text on a white slide, and he would read off the bullet points. She also realized that he had become less thorough with his lectures. Once Mr. Green reached the end of his slideshow, Jia observed that there was small text in the corner that said, “Made by ChatGPT.” As it turns out, Mr. Green had recently discovered how ChatGPT could assist him with teaching the class, and he felt that having pre-made slideshows helped him balance other work he had, like catching up on grading big projects. He was also able to get through the lengthy curriculum faster; there were some years when the class would not get to all of the topics, but this year, they seemed to be on track to finish everything on time. Mr. Green continued to use ChatGPT’s slideshows for a few weeks, and that’s when Jia hit her breaking point. Why was it that Mr. Green could use AI to assist him but students like her couldn’t?
1. Should Jia report Mr. Green for using AI? Why or why not?
2. Should Mr. Green have used ChatGPT to generate the slideshow even after AI was banned for students? Should he be penalized for it?
3. What are acceptable uses of AI in school for students, and to what extent? For teachers?
4. What are the benefits and disadvantages of using AI as a tutor compared to a teacher?
5. In what ways can AI help save time, and how can that extra time be used for other productive activities?
C A S E S T U D Y
BY MAANIKA NANDA
What does it mean for an educator to fulfill their duty when the rules placed on students and teachers are not applied equally? In this case, Jia, a middle school student, depended on ChatGPT as a study tool to help her understand science concepts that she had struggled with for years. The explanations and study guides she received helped her gain confidence, improve her grades, and manage her time better. However, when the school banned AI for students out of concern that it might replace their work, Jia immediately fell behind again. The issue grew more troubling to her when she realized that her teacher, Mr. Green, had begun using AI to generate entire slideshows that simplified his lessons, making them superficial and simply speeding up his teaching. While students were penalized for relying on AI even for learning, the teacher used it to avoid the demanding parts of his job. My ethical stance is that this unequal treatment is morally wrong because it violates the values of responsibility and justice, and this becomes especially clear when viewed through a deontological framework. Situations like this show how quickly a learning environment can change when new technologies enter the classroom, and how important it is to think carefully about which rules support real growth rather than limit it.
Firstly, responsibility and accountability play an important role in understanding why Mr. Green’s use of AI in this case is ethically wrong. Teachers are expected to prepare lessons carefully, provide guidance with integrity, and serve as examples of academic honesty. When Mr. Green relied on AI to complete his duties for him, the quality of his teaching declined, and students received less thoughtful instruction. Meanwhile, Jia’s use of AI came from a genuine effort to improve. She used it to study, ask questions she did not understand in class, and build stronger learning habits. There is a clear difference between a student using a tool to support their learning and a teacher using it for convenience. True responsibility would require that educators provide thoughtful instruction, prepare meaningful lessons, and model academic integrity, especially when their students are held to strict expectations.
Secondly, justice is also a central value that informs my thinking. Justice requires that rules be applied consistently and that individuals be treated fairly under the same conditions. In Jia’s experience, the opposite occurred. Students were banned from using AI even when it supported their learning, while the teacher used it freely to reduce his own workload. This unequal treatment breaks trust between students and educators, because it signals that the rules are grounded not in fairness but, rather, in authority. If AI is considered harmful to student learning, then using it to weaken lesson quality is even more unjust. The policy contradicts itself when it restricts those who genuinely benefit from AI while enabling those who misuse it. This uneven treatment of teachers and students goes beyond a rule in the handbook and shows a deeper problem with how fairness is understood in this classroom.

Artwork by Siara Gupta
Someone might conversely argue that teachers face heavy workloads and deserve more flexibility with tools that make their jobs easier. It is true that teachers balance grading, curriculum planning, and large classes, and that AI can help relieve some of this pressure. Yet this perspective overlooks an important distinction: teachers using AI for convenience is not the same as students using it for learning. While Mr. Green used AI to avoid preparation, Jia used it to strengthen her skills. Efficiency cannot excuse an educator from their duty to provide meaningful instruction. If Mr. Green had used AI to deepen his lessons or offer clearer explanations, the situation would look very different. A tool that genuinely supports student understanding is not the problem; it becomes one when it replaces the effort that meaningful teaching requires. Furthermore, if workload is the concern, then schools should support teachers with better resources, not inconsistent rules. Allowing teachers shortcuts while denying students learning tools creates a double standard that undermines the fairness of the learning environment.
Lastly, the framework of deontology can also be applied to this case. From a deontological perspective, the morality of an action depends on whether it aligns with one’s duty, not on what outcomes it produces. Teachers have duties of fairness, honesty, and educational care toward their students. These duties should remain constant even when circumstances become difficult. When Mr. Green uses AI as a shortcut, he violates his duty to provide thoughtful instruction and to uphold the same standards he expects of his students. If a rule exists banning AI for the sake of integrity, then it must apply to everyone in the classroom, especially the teacher who holds the position of power. If everyone were to rely on AI to avoid meaningful work, learning itself would lose its value. Deontology, therefore, reinforces the idea that teachers must engage with AI in a way that aligns with their duty, consistency, and fairness.
In conclusion, AI should not be used to avoid meaningful thought or preparation, but rather to support equitable skill-building and to enhance, rather than replace, the work that students and teachers are meant to do.
In the case of “Chalking up ChatGPT,” Jia’s teacher, Mr. Green, uses AI to create a slideshow that he shows to his students in class. Students at Jia’s school were recently banned from using ChatGPT, and Jia has been feeling the weight of adjusting to learning without it. But why should Mr. Green have this resource? Educators are vital to the advancement of youth, and have been for centuries. As science continues to modernize, the tools that teachers use to enrich their students modernize, too. Calculators, computers, and many other technologies that once did not exist are now commonplace in the classroom. ChatGPT is one of these tools that has quickly gained relevance in everyday life, and it is only going to continue to gain prevalence. This begs the question: Is it ethical for teachers to use AI tools such as ChatGPT while students are restricted from using the same resources?
Ultimately, after considering the purpose of education, the values of fairness, equality, and integrity, and the framework of deontology, we determined that it is ethical for teachers to be allowed to use AI while students are not.
The foundation of our analysis of this case was to first determine the purpose of education. Schools are not just a place for students to learn facts about science, dates in history, and equations for math. Instead, they are meant to foster skills of critical thinking, inquiry, and self-expression, and to teach students to be confident, independent thinkers. While AI can be extremely useful in the learning of material or the completion of work, this may come at the cost of other essential skills and knowledge that an education is meant to provide. Therefore, schools must evaluate the impact of AI on all aspects of students’ education, and take the necessary steps to ensure that it does not interfere with the acquisition of skills and knowledge that the institution is meant to provide.
In this case, Jia feels that it is unfair that her teacher, Mr. Green, was permitted to use AI while she, as a student, was not. This frustration is understandable, as from Jia’s perspective, this may feel hypocritical or like a double standard. However, it is important to consider the differing roles and goals of students and teachers at a school. As we defined, students are in a school to acquire knowledge and skills, including how to process and condense information, and how to think critically about and express their own opinions In contrast, teachers have already acquired these skills over the years of their own education and experience. As a result, teachers' use of AI does not jeopardize their future capabilities in these areas. Therefore, while fairness is a valid
concern with the standards for AI usage being different between students and teachers, fairness should not always mean imposing the exact same standards and rules on each party. Instead, it is about ensuring a positive outcome for all stakeholders that does not come at others’ expense.
We decided to look at this case through the framework of deontology, which determines the ethicality of an action based on the fulfillment of a societal duty. Specifically, we considered the responsibility of a school to uphold the purpose of education and ensure that AI usage by students and teachers fulfills these goals. The school is fulfilling its duty by creating a policy that teachers can use AI while students may not, because they think that AI use by students will be harmful to students’ learning and acquisition of skills, while AI usage by teachers will not have the same detrimental effects. For the AI policy to be effective in helping schools educate students, individuals must also take accountability. Teachers must ensure that how they use AI genuinely improves the quality of their teaching and students’ learning.
In the case of Mr. Green, his AI usage is not having a positive effect on his teaching or student learning. For this reason, the way he is utilizing AI is unethical, as he is not fulfilling his duty as an educator to teach students important knowledge and skills. For teachers’ AI usage to be ethical, their teaching should meet or exceed the standard of their teaching without the use of the tool. It can help free up time for important parts of the job, like working directly with students or providing feedback, but AI should not become something used for convenience at the expense of quality teaching. Helping students learn important life skills and knowledge is the goal and responsibility of education and educators.
A perspective we found important to acknowledge is that AI can serve as an equalizer in the classroom. Jia used ChatGPT as a “free, easily accessible tutor,” having it create study guides and quizzing her on difficult topics. This gave her access to methods more useful than how she had been studying previously. However, we found that AI is convenient, but not a necessary part of these new study skills. Jia could work directly with her teacher or classmates to review topics she is confused about. While creating a study guide herself may take longer, the process of making it is just as, if not more, important than reviewing it.
AI usage without regulation can also be a force of inequality and loss of integrity. The case mentions how some students are using it to complete classwork, instead of doing the work themselves. In addition to jeopardizing their learning, this is also unfair to other students. If allowed to continue, this kind of AI usage would create an environment where students receive credit for work they did not do, rewarding students for taking shortcuts. This harms the learning of the students who cheat in the long term, and diminishes the importance of hard work and integrity in education.
In conclusion, we determined that it is ethical for schools to restrict student use of AI while still allowing teachers to use the tool. We used the framework of deontology to define schools’ and teachers’ responsibility to ensure that AI use advances student learning of important knowledge and life skills, drew on the value of fairness in examining the differing roles and goals of students and educators in schools, and considered the impact of students using AI through the lenses of equality and integrity.
C A S E S T U D Y
BY RICHA MALPANI
A company, Percepta Inc., decides to use an AI-powered hiring system to lead its hiring process. The system uses algorithms trained on data from previous hires to evaluate resumes, rank candidates, and even conduct online interviews with the aid of emotional analysis (data on one’s means of communication, verbally and nonverbally, to understand a person’s mood and character) and facial recognition.
After a six-month trial run, the company’s hiring department observes positive effects. Time spent on hiring decreased by 50%, and costs were reduced for activities such as candidate screening and staffing agencies. However, Percepta’s hiring manager noticed a few concerning patterns.
First and foremost, the company has previously tried to hire a diverse set of individuals, but in the last three months of the trial run, noticed that candidates from certain demographic groups consistently ranked lower. The AI system does not provide reasoning as to why one applicant is ranked lower or higher than another. This unreasonable and unclear decision-making seemed to cause candidates emotional distress, and Percepta wondered why the AI system was doing this. A bug in the code?
In addition, 30% of applicants reached out to Percepta to express their discomfort with online interviews, mentioning potential privacy issues and misuse of facial and emotional data.
The Percepta Headquarters Department appreciates the benefits of this new hiring system, but worries about the harms, which raises the question: Is it ethical to use an AI-powered system for its hiring practices?
1. Should there be stricter regulations on AI use in the hiring process?
2. How can companies balance the use of AI and innovation?
3. What are the long-term societal impacts of automating human processes?
BY TESSA CHOW
In the case study "Employing with Artificial Intelligence" by Richa Malpani, the company Percepta Inc. tests an AI-powered hiring system to automate its recruitment process. The adoption of an AI-powered hiring system reveals both operational benefits and ethical challenges. The case investigates issues of transparency and bias in AI models, as well as privacy risks of applicants’ data in the computerized system. Indeed, AI-powered systems are used to manage many aspects of human lives, which may evoke wariness and concern in the human public. The integration of AI in industries is meant to be a tool to support advancement in those fields, but as noted in this case study, several ethical concerns arise from the unguided use of AI.
In the situation with Percepta Inc.’s AI-powered hiring system, privacy concerns are expressed by their job applicants, as there is no reasoning given for why some applicants are not hired. This lack of transparency in this black box model, where the inputs and outputs are visible but not the inner complex processes, leads to fear and mistrust of Percepta Inc.’s hiring process and may be seen
by many as unjust. This speculation is confirmed by the tendency of the hiring system to rank applicants from certain demographic groups higher than others without a valid reason, which unfairly limits opportunities for certain applicants. Percepta Inc.’s surprise at this issue highlights the lack of understanding as to how the algorithms employed in their hiring system truly function and calls into question the reliability of the system. Using the ethical standard of consequentialism and the frameworks that fall under this model, I argue that stronger AI regulations should be enforced in areas such as those discussed in this case study, as they directly deal with human lives and may cause problems if left unsupervised.
First, the significance of being hired for a position is not one that should be overlooked. The main way for humans to sustain themselves and their families’ lives is to gain money from working a job. The job offered at Percepta Inc. could likely be a crucial economic opportunity for an employee who fits the company’s values. Careless dismissals by an AI-powered hiring system, especially
based on unexplained bias, could be very distressing to the applicant and others who may rely on their income. On the other hand, although Percepta Inc.’s hiring process may have taken much more time and money to conduct without the use of AI, that cost depends on the resources the company chooses to invest in hiring. One should consider whether this amount of investment is necessary for the desired caliber and capability of the candidates the company wants to hire.
The limitations of using an AI-powered system for hiring practices are the privacy and bias concerns, which arguably hold more moral weight and direct impact on human lives than a loss of efficiency if the company were not to employ the system. In other words, although Percepta Inc. may lose time and energy in the hiring department, the negative consequences that may ensue if the hiring department continues with such unresolved ethical issues may outweigh the objective profit losses of the company.
When considering the ethical framework of consequentialism, the most ethical decision is the one that centers on the consequences of one’s actions. So, due to the bug in the system, Percepta Inc. would most likely try to fix the bug and hold AI use in hiring to stricter regulations, as the precarious use of AI in an influential field such as hiring could have dire consequences that overshadow the positives of potential success in using AI systems. In consequentialism, when determining the range of AI supplementation that should be ethically acceptable, one should consider the impact on the vast majority of the applicants, and how
Percepta Inc., as a company, can set a precedent for the ethical use of adoption of AI models in other companies in the future.
Furthermore, because the tendency of AI to hold bias against applicants as well as withhold data from those in its system is a threat to the public’s privacy and safety, consequentialism would acknowledge that these properties of Percepta’s hiring procedure are harmful to the well-being of the applicants. Therefore, if the issue persists and the lack of transparency (which has no present solution) is still seen to put the candidates at a disadvantage, the company may consider changing its hiring program. Rule consequentialism states that the most ethical decision is the one that recognizes a universal moral outcome. So, because the continued use of the unstable AI-powered system goes against the moral standard to be fair to all candidates, review each application without bias, and address privacy concerns that arise, the ethical standard of rule consequentialism would expect the companies to cease the problematic uses of AI in their hiring affairs.

BY ANGIE GE, ALISHA GUPTA, AND CHLOÉ JENKINS
This case examines Percepta Inc., a company that is facing controversy due to its use of AI in its hiring process. Percepta Inc. used an AI system to interview candidates online for a job. The algorithm used emotional analysis and facial recognition to determine which candidates should receive a job offer. However, the company discovered that the algorithm tended to prefer certain demographic groups over others; this demonstrates a bias commonly found in AI. This flaw prompted the question, “Should there be stricter regulations on AI use in hiring?”
Considering the framework of utilitarianism and the values of autonomy and respect, it is unethical to place stricter regulations on the use of AI in hiring.
Utilitarianism is an ethical decision-making framework that analyzes what outcome is best for the greatest number of people. Using this framework, Percepta Inc. was ethical in their use of AI in the hiring process. Allowing members of a society to have full autonomy when using AI allows both the society and the AI algorithms to progress, aiding in eliminating the biases that AI
holds. Increasing user interaction provides AI with corrective data that helps it “learn” more reliably. Research shows that multiple rounds of user feedback in a text-classification experiment led to significant improvements in an artificial intelligence system’s accuracy and consistency. Moreover, this experiment determined that “continual human involvement can mitigate machine bias and performance deterioration while enabling humans to continue learning from insights derived by ML [machine learning]” (CoLab, Reciprocal Human-Machine Learning: A Theory and an Instantiation for the Case of Message Classification). By allowing users to interact more with AI, they will develop a better understanding of the reliability of an AI tool and how they should use it, in the hiring process and beyond. This would benefit society as a whole by aiding in developing both an understanding of technology and the technology itself.
If AI continues to become more prevalent in society, humans must be able to understand how to use it in positive ways. If stricter regulations were placed on an individual's AI use in hiring,
users would be unable to adapt to AI’s growing role in society, preventing progress and technological and societal development. While Percepta Inc.’s use of AI resulted in bias and discomfort, by not allowing society to continue to adapt to and develop AI, it will never improve to the point where it does not cause this bias and discomfort. Restricting society's autonomy in decisions about AI usage prevents the overall positive development of AI and society in the future, demonstrating why it is unethical to place stricter regulations on AI use in hiring, as it would not benefit the greatest number of people.
Finally, the government must prioritize the value of respect for its citizens. This means respecting each individual’s right to determine how they would like to use AI and not placing strict restrictions on their decisions. In a society, people have a right to a certain degree of free will, which the government must honor and respect. The consequences of the government not respecting its citizens’ free will would be dire and could result in an authoritarian society where the citizens are not truly free. This consequence would not provide the best outcome for the largest number of people, demonstrating that the government must give its citizens free will and the ability to make their own decisions about their AI use to avoid this slippery slope. It is most ethical for the government not to place strict restrictions on AI in hiring, as such laws would violate the respect necessary in the relationship between a government and its citizens.
This case could also be interpreted and responded to using a deontological framework. Deontology is the ethical decision-making framework that analyzes the moral rules and obligations one has to others. A deontologist would argue that it is the
government’s duty to protect its citizens from harm and keep them safe, and in this case, that would mean restricting AI use in hiring. This is because certain biases that AI holds could harm others, such as in the case of Percepta Inc.’s hiring system, where applicants felt uncomfortable with the way the AI evaluated them. One may also argue that it is the government’s duty to restrict AI so that companies are unable to use AI in the hiring process, avoiding any possible harm. However, this interpretation disregards the possible outcomes of implementing said restrictions. Strict restrictions on the use of AI could result in a slippery slope in which the restrictions could stunt the progression necessary for society and the development of other technologies.
Though considering the government's role of protecting its citizens is important, it does not focus on the long-term outcomes and consequences. This is why using a utilitarian framework, which examines which possible outcome of a decision benefits the largest number of people, is the most ethical way to interpret this case and then come to a conclusion on the possible restrictions placed on AI in hiring.




Artwork by Chelsea Chen
Without the news, we may very well never understand the issues our world faces on a day-to-day basis. For civilians, it’s a way to vicariously explore the nuances of being in a leadership position in a society, whether that is a state representative, chief of a government organization, or a policymaker. It is essential that we stay informed about current events for society to continue making decisions about who belongs in positions of power and how we resolve critical issues.
However, this news is not always spread in an ethical manner. The press can be invasive at times, disrupting important processes by exposing confidential information or infringing on privacy, which complicates government affairs. Recently, in October of 2025, this issue has become more apparent and triggered a response from the federal government’s Department of Defense (housed at the Pentagon), which deals with the United States military. The Department of Defense released a series of new press regulations to limit journalistic
BY KRISANA MANGLANI AND PAIGE SULKES
access to the Pentagon and the information spread there. To explain, reporters are no longer permitted to access parts of the Pentagon without an escort, and information from the Department of Defense —whether classified or unclassified—cannot be gathered and shared by the press.
These limitations have been met with both support and opposition throughout society, from the current presidential administration, to state representatives, to the reporters themselves being regulated. With the Trump Administration acting as a proponent of the new rules, the federal government has generally endorsed their addition as well, with Defense Secretary Pete Hegseth justifying the policy by declaring that “the press does not run the Pentagon; the people do.” Hegseth, among others, believes that the press has infringed on the privacy of military and political affairs for too long, disrupting processes aimed towards achieving world peace. On the contrary, various state governments perceive these regulations as “stifl[ing] independent journalism,”
as asserted by Senator Jack Reed of Rhode Island.
Moreover, the differing sides of this situation have become increasingly divided as opposition to these regulations increases. On October 15, 2025, dozens of military journalists who were supposed to cover developments at the Pentagon even turned in their badges, demonstrating how they saw the rules as the equivalent of having their investigative journalistic freedom rescinded. This raises the dilemma of how we can ethically fulfill the government’s need to preserve security and privacy in certain issues without also infringing on the freedom of the press. Additionally, this represents the ethical dilemma of the situation: on one hand, the Department of Defense can limit the flow of information to reporters as a way to protect the confidentiality of global communications. On the other hand, they can potentially undermine the journalistic freedom of the press for coverage on the Pentagon.
This ethical question has many different components and a variety of views that come along with it. One main stance would be that it is unethical for the government to infringe on freedom of the press, as it takes away civilians' freedom of speech, which is a part of their liberties affirmed in the Bill of Rights. However, the other side could argue that it is ethical for the government to monitor the press, as it protects the general safety and privacy of American citizens.
Both stances share the ethical framework of utilitarianism, which describes an outcome that ensures the greatest good for the greatest number of people. Not intervening in the press’ freedom
benefits the greatest number of people because it respects their right to speak freely, as outlined in the First Amendment. In contrast, it is possible that censorship of the press could also benefit the greatest number of people, as it would prevent the threat of security breaches and protect certain military decisions from public influence.
For the argument that it is unethical for the government to infringe on freedom of the press, the main values are autonomy and honesty. The government has an obligation to be transparent and honest with the people of the United States. Limiting what the press discloses to the public takes away this transparency and, consequently, could lead to government officials gaining too much power. If an individual in power is making decisions that the public is unaware of, it prevents citizens from making informed decisions, putting democracy at risk. Infringing on freedom of speech removes individuals’ autonomy to express, access, and share information.
Another value that is prevalent in this case is autonomy, as the new Pentagon regulations could be seen as limiting the freedom of the press to share information with the public. The government's restriction on what the press can share is not only silencing journalists, but also limiting the public's ability to access knowledge and form independent opinions. Without access to news and differing perspectives, people are forced to rely on whatever information the government allows them to see. This undermines intellectual freedom and makes society less democratic because citizens can no longer make informed decisions about politics, policies, and leaders.
The opposing stance is supported by the main values of safety and privacy. When journalists report on sensitive military or governmental information, especially about defense strategies, weapons, or international operations, they could unintentionally expose details that threaten national security. This could endanger soldiers abroad, compromise missions, or even put civilians at risk. By having the government monitor what is made public, the government could prevent harm before it happens. This privacy of information, through the prevention of news exposure, also maintains the safety of global decisions, negotiations, and efforts towards world peace that require sensitive approaches, which can be difficult to sustain with substantial public reaction.
We believe that the most ethical choice in this scenario is not to regulate the press, as it maintains transparency. This will increase public information and support the framework of utilitarianism, which seeks the outcome that achieves the greatest good for the greatest number of people. This keeps the general public aware of military and political affairs that are otherwise inaccessible to them. Ultimately, through the preservation of press freedom, citizens will be able to make more informed decisions about their leaders in the future. Additionally, this creates autonomy for the press itself as it preserves intellectual and journalistic freedom, and supports their right to freedom of the press as outlined in the First Amendment. Moreover, press access in the Pentagon provides a broad range of perspectives to the general public rather than filtering information through a biased lens regulated solely by politicians and government officials. Therefore, the
most ethical stance on the press regulations at the Pentagon is to preserve press freedom and avoid limiting journalistic access.
Ultimately, as students of a school that prioritizes consideration of the ethicality of certain practices, it is important for us to personally evaluate the nuances of complex and impactful topics such as censorship and freedom of expression in the media. What this case demonstrates is the prevalence of ethics in everyday society, and our ethical decision-making helps us determine what our decision in such a circumstance would look like. We implore readers to consider what they believe is the right decision, in terms of instating or abolishing the Pentagon’s new regulations: should the Department of Defense prioritize privacy and safety by limiting journalistic access, or autonomy and transparency by allowing the press to freely investigate military affairs? In considering the balancing of the ethical values described, what would your stance be?

BY SIYA SHARMA AND SIARA GUPTA
In 1989, Eric and Lyle Menendez killed their parents in their Beverly Hills home. That sounds horribly wrong, right? However, due to the immensely popular TV show, Monsters, which covered the murders in a dramatized style, they were given a retrial for the murder of their parents. The Menendez brothers are two of the hundreds of teens who have committed parricide. Is it fair that they are the ones who get a retrial? Does creating true crime series, like Monsters, exploit the people affected and create a false truth, rather than raise awareness?

Artwork by Eva Joglekar
In this article, we use the Menendez brothers as a case study to explore the ethics of true crime series and their impacts on society.
It made for perfect television: two boys from a wealthy Beverly Hills home killed their well-known and powerful parents. This case led to multiple television shows, such as a documentary, The Menendez Brothers, and a TV show, Monsters, both on Netflix, as well as a docuseries on Peacock, Menendez + Menudo: Boys Betrayed. The 2024 Netflix TV show, Monsters, brought a new perspective to the case that had not been discussed before, namely, the abuse that the brothers faced, and whether it justified killing their parents. The show highlighted the fact that the Menendez brothers weren't “monsters,” but rather brothers who needed an escape from the endless, alleged abuse they endured. After the show was released, a retrial was strongly recommended by Los Angeles District Attorney George Gascón.
“The value of justice seeks to understand not only what happened, but why it happened, and it ensures that the punishment remains fair and just. As an ethical principle, justice demands that every individual should be given a fair chance to have their story heard.”
The value of justice seeks to understand not only what happened, but why it happened, and it ensures that the punishment remains fair and just. As an ethical
principle, justice demands that every individual should be given a fair chance to have their story heard. True crime series like Monsters are beneficial if they provide justice for the people affected and raise awareness of their situation. During the trial, the legal system challenged assumptions about how male victims “should” behave or what male abuse “should” look like. For example, the assumptions that males cannot be abused because they could act in self-defense, or that if there wasn’t a physical injury, then it must not have been “real” abuse. By portraying how the brothers were abused, some might argue that the brothers were afforded justice because the specific abuse the show highlighted was not a perspective that was considered during the trial.
Monsters can be seen as a way that the producers manipulated the audience in order to make them feel empathy for the brothers. Empathy was the second value we focused on when looking at this case. Some can argue that the empathy which led to the new hearing for a new trial was falsely created based on the information seen in the series. For example, viewers felt bad for the brothers because the actors who portrayed them were “too attractive” to go to jail. After the series was released, Eric Menendez stated that the “dishonest portrayal of tragedies surrounding our crime has taken the truth several steps backward” (Kim). According to The Guardian, their extended family also stated that it was a “grotesque shock drama” and “character assassination” (Lee).
“Monsters can be seen as a way that the producers manipulated the audience in order to make them feel empathy for the brothers.”
Based on the responses from the Menendez brothers, it is clear that the show fictionalized and dramatized the abuse of the brothers. In some scenes, it even offered a villainous and misinformed caricature of them, as opposed to a truthful account. For example, the first
episode of the series focused on the motive behind the killing, showing that Lyle Menendez had decided to kill his parents after seeing a similar scene on television. The series depicts the brothers kissing and showering together before being discovered by their mom. The brothers both denied this under oath, which shows that there is no confirmation that the information from the series is correct.
The creation of Monsters can be seen as unethical because the directors and producers took full control of a story that was not personal to them. To this day, the brothers have not seen the full docuseries and did not even approve the making of it. They watched the series as if they were viewing someone else’s story. If the people featured in the series weren’t involved in its production or consulted about the content, how can we be sure the information presented is true and accurate? This docuseries took advantage of their situation in order to bring empathy to the case of the brothers, again demonstrating how misrepresentative a true crime series can be.
“If the people featured in the documentary weren’t involved in its production or consulted about the content, how can we be sure the information presented is true and accurate?”
The third value that we considered is trust. It was later discovered that Monsters allegedly misinterpreted facts, distorted the brothers’ claims of abuse, and spread misinformation. It was very different from the previous documentary, The Menendez Brothers, which was an unbiased, facts-based documentary on the whole story. It provided the historical context, exactly the evidence that was given to the police and detectives, and how the case was concluded. There were many differences between The Menendez Brothers and Monsters, which revealed that the series could have been dramatized, and we have no way of knowing what abuse they really endured.
One of the producers of the series, Ryan Murphy, clapped back: “I would say 60 to 65 percent of our show in the scripts, and in the film form, centers around the abuse and what they claim happened to them.” Here, Ryan Murphy admits that the series was focused on the abuse of the brothers. He stated that 60 to 65 percent of the series is what the brothers claimed was their story, but what about the other 35 to 40 percent? There may be a larger section of the series that includes false information, which the viewers are not aware of: for example, scenes of their parents ripping their hair out, yelling at them during tennis matches, and making it seem like they were not doing well in school. The truth is ultimately unknown, which reiterates the idea that the TV producers made the series only to display the brothers as victims. Currently, the viewers are trusting that the information they are being given is accurate and are believing the “disheartening slander” of their convictions (NPR).
“The dramatized versions of these stories with biases are not ethical because they aim to make the viewers feel empathy with the convicted, which can lead to the spread of false information”
The hearing would not have been scheduled were it not for Monsters, which is why many argue that true crime series are unethical. We made our final decision based on the values of justice, empathy, and trust, which we found aligned closely with the case. True crime documentaries are ethical because they create good in society, provide awareness about important issues, and give justice to the people who were negatively affected. TV series that present dramatized versions of these stories with biases are not ethical, as they aim to direct empathy to convicted individuals, which creates false depictions of their actions.
The trust between viewers and filmmakers can be detrimentally affected when viewers discover false information and biases in these portrayals. They can start to question anything the director or company produces in the future, which ruins their credibility. The uncertainty of what is true and false can lead to a decreased interest in these sorts of films. Finally, the purpose of true crime documentaries is to provide an accurate representation of a case. It is the responsibility of the filmmakers and directors to uphold a truthful representation of the case because their work can influence public narratives, shape justice, and either heal or harm the people at the center of the story.
“It is the responsibility of the filmmakers and directors to uphold a truthful representation of the case because their work can influence public narratives, shape justice, and either heal or harm the people at the center of the story"



The Bioethics Project is a year-long program offered at Kent Place School’s Ethics Institute which takes students through the process of choosing, researching, and presenting a bioethics-related topic to the Kent Place community and the world beyond it. Throughout the course, students learn the basics of bioethical principles and concepts, discuss real-life medical cases and complex ethical dilemmas, and write their own papers that delve into the ethical nuances of their selected topic. In the first trimester of the course, students are exposed to a myriad of different potential topics ranging from in-vitro fertilization to human trafficking, all through presentations from various speakers both within and outside of the Kent Place community.
In collaboration with the Kennedy Institute of Ethics at Georgetown University, students work one-on-one with a mentor who provides guidance throughout the researching and writing process. Each cohort is designated a specific theme of the year, under which they craft an individual research paper. Past themes range from “Being Human in a Brave New World” to “Medical Decision Making & the Human Lifespan.” Their work culminates in a school-wide symposium at the end of the year, where students share their ethical studies and findings with the broader Kent Place community. This feature of Lodestar includes excerpts from two papers by members of the 2024-2025 Bioethics cohort, with the theme “Women in Bioethics.”
Check out all past bioethics papers on our website, bioethicsproject.org!
BY NAOMI RAVENELL
In recent decades, menopause has increasingly been framed as a medical condition rather than as a natural life transition. Menopause is a biological transition that marks the end of a woman’s reproductive years, resulting from the natural decline of ovarian function. While medical interventions can provide relief, their widespread use raises concerns about overmedicalization: treating a natural process as a disease in ways that may not always be necessary or beneficial. Overmedicalization can undermine bodily autonomy, limit alternative approaches to menopause care, and expose women to potential health risks associated with unnecessary treatments. This paper will assess the ethical tension between autonomy and the medical model, arguing that overmedicalization undermines women’s ability to make informed choices, limits alternative approaches, and can potentially expose them to unnecessary risks.
The process of menopause occurs over three distinct stages, each characterized by unique hormonal and physiological changes. During the reproductive stage, regular menstrual cycles occur as part of normal ovarian function. This stage generally lasts from puberty to the onset of menopause, ending between ages 45–55. As women age, they transition

Artwork by Chelsea Chen
into perimenopause, a phase marked by fluctuating hormone levels and irregular menstrual cycles, often accompanied by occasional hot flashes, night sweats, and mood swings. Menopause is clinically defined as the point when a woman has not had a menstrual period for 12 consecutive months (World Health Organization 8-10). Women then enter postmenopause, marked by the stabilization of ovarian hormone production at a permanently reduced level (World Health Organization 11).
According to the U.S. Agency for Healthcare Research and Quality (AHRQ), 85 percent of women experience some symptoms during menopause, though severity varies (Grant et al.). The most commonly prescribed option is hormone replacement therapy (HRT), which involves supplementing estrogen, sometimes in combination with progesterone, to manage symptoms such as hot flashes, night sweats, and bone loss (World Health Organization 25).
A New Zealand study found that 75 percent of women using combined HRT had complete relief after one year (Welton et al.). While HRT has proven effective, it increases the risk of certain illnesses with long-term use. Nonhormonal treatments and lifestyle adjustments, such as calcium and vitamin D intake, regular exercise, and cognitive behavioral therapy, can also help. While menopause is a natural process, the way it is understood and managed has been shaped by both medical and cultural narratives.
The World Health Organization defines menopause as the “permanent cessation of menstruation resulting from loss of ovarian follicular activity” (7). This classification frames menopause as a biological endpoint rather than a pathological state. However, medical discourse has historically fluctuated between treating menopause as a natural life stage and as a deficiency requiring medical intervention. Historian Sandra Coney notes, “The midlife woman is a prime target for the new prevention-oriented general practice” (Coney 18). This illustrates how the healthcare system may unintentionally steer patients toward a model of intervention, even when symptoms are mild. Providers, acting on beneficence (the duty to promote well-being), may rely on standardized treatments shaped by systemic or pharmaceutical influence, constraining autonomy. The ethical dilemma here is not in the desire to relieve symptoms, but in the potential harm done by reinforcing a one-size-fits-all approach that disregards diverse experiences of menopause.
Historically, menopause was referred to as a form of hormonal deficiency requiring medical treatment (Singh et al. 7). This aligns with broader trends that pathologize aging, positioning menopause as something in need of correction. Medical humanities author Frederik Kaufman notes, “if the concept of a disease is partly evaluative, then a condition cannot be a disease unless we disvalue it” (271). In other words, menopause becomes a medical problem when society labels it as one. The disvaluation of menopause reflects broader anxieties around aging and reinforces the idea that a post-reproductive body is in decline.
Medicalization is the process by which human conditions, behaviors, or experiences are defined and treated as medical issues, often requiring diagnosis, intervention, or treatment by healthcare professionals (Last 213). While it can provide relief, it is problematic when it pathologizes natural processes. Kaufman writes that “pathology is understood with respect to the abnormal” (277). This suggests menopause has been treated as a deviation from normal instead of a natural stage. In the early 20th century, menopause was commonly called a “deficiency disease,” with estrogen therapy used to “restore youthfulness and femininity” (Reid). Pharmaceutical companies helped construct the narrative that menopause is a problem to be fixed. Coney emphasizes that “it must be stressed that the subject of all this attention is well women” (20). By defining menopause as a deficiency, the healthcare system presents limited “choices” that reinforce medical dependence (Coney 21).
A key ethical tension lies between autonomy and the medical model. While women may appear to have choices, the medical model often implies that intervention is the responsible path. In a study on HRT use, many women reported feeling guided by societal expectations rather than personal preference (Sievert). Beneficence, meant to do good, can pressure women toward decisions aligned with medical norms, complicating autonomy. What seems like neutral advice may imply that nonintervention is irrational, limiting true choice.
Cross-cultural research reveals that menopause is not inherently pathological, but rather a complex and deeply human process that reflects the values and assumptions of the society in which it occurs. In cultures that embrace menopause as a normal and even empowering transition, women report fewer distressing symptoms (Kaufert and Lock).
Anthropologist Margaret Lock found that women in Japan, which views menopause as a natural, respected phase, report fewer menopausal symptoms than their Western counterparts (Lock). Similarly, among Native American tribes, postmenopausal women are revered as spiritual leaders (Beyene), and in Sotho communities of southern Africa, postmenopausal women gain access to cultural and religious authority (Flint). In parts of India, menopause marks a shift from reproductive responsibilities to a more spiritually centered life with greater religious
freedom (Famila). These experiences suggest that when menopause is culturally accepted, women experience less distress.
By contrast, Western societies often approach menopause through a biomedical model focused on symptom control, reflecting discomfort with aging (Chrisler). This model risks violating the principle of non-maleficence by fostering anxiety, stigma, and pressure to undergo treatments in order to “fix” something that is, in fact, a normal part of life.
This contrast reveals two competing frameworks: one that honors menopause as a natural life stage and another that pathologizes it as a medical condition. The ethical implications are significant. When public health systems promote only the biomedical model, they risk violating the principle of nonmaleficence, the duty to do no harm, by fostering fear, shame, and unnecessary medical intervention. Conversely, a model that acknowledges menopause as a natural process upholds autonomy, cultural competence, and holistic well-being.
University of Massachusetts anthropology professor Lynette Sievert notes in her biocultural study of menopause that “many women in the United States disdain the fuss made about menopause in the popular press” (xiii), underscoring a disconnect between dominant medical narratives and women's lived experiences. Reframing menopause as a natural transition not only aligns more closely with how many women across the world experience it, but also supports a more ethical, inclusive, and empowering approach to women's health.
The ethical principles of beneficence and nonmaleficence are central to evaluating the medicalization of menopause. While HRT offers substantial benefits to women suffering from severe symptoms, it also carries significant risks, such as an increased likelihood of breast cancer, heart disease, and blood clots. For example, the Women’s Health Initiative found a 25 percent increased risk of breast cancer for women taking combined HRT during menopause (Chlebowski et al.). Similarly, researchers at Oxford University found a 37 percent increased risk of ovarian cancer for women taking hormone replacement therapy during menopause (Collaborative Group on Epidemiological Studies of Ovarian Cancer). These risks raise questions about whether the benefits truly outweigh the harms, especially considering that menopause is a natural life stage rather than a medical condition. Even though all medical treatments come with inherent risks, the ethical challenge lies in determining when the intervention is truly necessary. Because menopause is not a disease but a biological transition, the decision to use HRT should always be personalized and carefully weighed against its potential harms, taking into account both the benefits and risks. Women should be given the freedom to choose treatment options that best align with their health needs and personal preferences.
Menopause has often been treated as a medical condition requiring intervention, primarily through HRT. This framing can undermine women’s agency by limiting access to nonmedical options. The medicalization of menopause raises ethical questions about autonomy and societal pressure. Informed consent must go beyond signing a form and should instead involve an ongoing conversation about all available options. Public health campaigns should destigmatize menopause and increase awareness of both medical and non-medical approaches. Culturally sensitive care should also be prioritized, recognizing that menopause experiences differ globally. Non-medical approaches should be integrated into the standard of care through interdisciplinary models that combine physicians, nutritionists, and mental health professionals and ensure that women receive comprehensive information. Systemic change in provider training and healthcare policy is essential to ensure menopause care respects autonomy and cultural diversity. The way menopause is viewed is heavily influenced by societal pressures and gender norms. Framing menopause as a failure of the body reinforces stigma and dependence on medicine. The ethical challenge lies not in treating symptoms but in questioning why certain bodies are pathologized. The most ethical approach to menopause care is not to fix what isn’t broken, but to listen to women’s experiences and let them redefine what health means.
BY ERIN KIM
The COVID-19 pandemic led to a shift in medical care, including the adoption of telemedicine. In 2021, nearly one in four U.S. adults reported using telemedicine in the previous four weeks, with 17% of annual primary care visits and 24% of nonroutine visits occurring virtually (Andreadis et al.). This shift has lasting ethical implications for patient-provider relationships. Ultimately, telemedicine improves affordability and accessibility. However, it can have negative impacts if it undermines trust, attentiveness, and human ethical relationships.
Telemedicine is defined as “the remote diagnosis and treatment of patients by means of telecommunications technology” (Oxford Languages).
A qualitative study of primary care practices in New York, North Carolina, and Florida found that patients believed their providers' attentiveness was diminished through telemedicine (Andreadis et al.). However, providers appreciated this new opportunity to understand patients' living environments, rather than just observing them in the office setting (Andreadis et al.). This contrast highlights how telemedicine can both enhance and complicate the understanding between patients and providers regarding their personal lives. Ridd and colleagues describe the four pillars of the patient–provider relationship: trust, knowledge, regard, and loyalty (Andreadis et al.). These pillars influence patient satisfaction, suggesting that telemedicine should be evaluated based on its efficiency and its ability to support or weaken these core elements.

Although concerns persist about how telemedicine affects trust, knowledge, regard, and loyalty, studies suggest that virtual care often strengthens these pillars in practice. A 2023 study in Ontario found that 79.8% of patients with a family doctor who had a virtual visit connected with their own physician rather than an outside provider (JAMA). Moreover, a RAND study revealed that 96% of patients with a chronic condition used phone consultations, while 94% opted for video visits. These results indicate that virtual care strengthens patient-provider relationships, as most individuals with chronic conditions prefer online communication with their known provider. It is important to note that the majority of telemedical encounters occur between patients and doctors who have previously met and established some relationship. Building on evidence of virtual care’s quality and efficacy, the University Medical Center Utrecht researched digital monitoring for pregnant women at risk of hypertensive disorders.
Patients tracked their blood pressure daily using a mobile app, while healthcare professionals remotely monitored the results. After interviewing 25 patients, researchers found that digital monitoring improved patient understanding and involvement in shared decision-making with their providers, while also confirming that physicians should retain final responsibility for medical decisions (ScienceDirect). These findings highlight the dual nature of telemedicine and its ethical implications: while telemedicine empowers patients to access their medical information, it also introduces new self-management responsibilities, such as taking prescribed medication and adhering to a healthy lifestyle through proper diet and exercise. I fully support this idea, particularly in light of the ethical principles mentioned earlier. Patients are informed and knowledgeable about their own medical information, but their trust and loyalty to their doctors may decrease due to biases in their own self-managed care.
Telemedicine enhances fairness and accessibility by reducing wait times, as reported by MediGroup. For example, without telemedical care, patients in Washington State wait around 32 days to see a cardiologist, 14 days in Los Angeles for a dermatologist, and 18 days in Detroit for an orthopedist. On average, Americans wait about 24 days for medical appointments. Telemedicine reduces wait times, travel, and costs, providing immediate access to many, such as individuals in remote areas or with chronic conditions. Since 13% of U.S. adults are uninsured and nearly half have high-deductible plans, telemedicine offers a more affordable option that can prevent unnecessary emergency room visits.
It is important to recognize that ethical boundaries between patients and providers still apply in virtual care, but telemedicine complicates how these boundaries are upheld. The lack of physical contact can reduce emotional connection, examination accuracy, and interpersonal cues that traditionally build trust, rapport, and mutual respect. New technologies allow doctors to remotely assess breathing, skin tone, and cognitive responses. However, these tools still can't match the thoroughness of in-person evaluations. As a result, many healthcare providers mainly use telemedicine with established patients, believing that building trust and loyalty, especially with new patients, still requires initial face-to-face visits. This aligns with the four pillars mentioned earlier, showing how trust, knowledge, regard, and loyalty are challenged by the rise of telemedicine.
Ultimately, telemedicine embodies the moral and ethical standards of modern healthcare. It improves access and reduces costs, yet still poses risks of miscommunication and overlooked details. Ethically, the best approach for medical providers is a balance of both methods, merging digital convenience with the empathy, observation, and accountability found in traditional care. Throughout writing this piece, I gained a better understanding of how virtual care impacts the four ethical pillars of trust, knowledge, regard, and loyalty in patient-provider relationships. Telemedicine can have both positive and negative effects on this relationship, potentially challenging the ethical pillars that are more easily upheld during in-person interactions. As research shows, telemedicine should not completely replace in-person care but instead enhance it through careful, human-centered integration.
BY MADELINE MON
Excerpt from Bioethics Paper
Today, we live in a world more populated than ever.
The number of people on Earth continues to increase, with our current population of 8.2 billion projected to reach as high as 10 billion by 2060 (United Nations). However, this growth is not evenly distributed, with many lower-income countries likely to experience the biggest population booms of the 21st century. In contrast, industrialized countries’ birth rates are projected to continue to decrease, with many reaching a level where their overall population will shrink in the near future. These population growth rate disparities have huge economic and social implications for these countries, leading to extensive legislative action to course-correct, if possible. However, these laws often fail to account for the social impacts on women and how they are viewed in society.
This paper focuses on the ethics of population regulation, the difference between regulations that seek to limit growth and those that seek to promote it, and the impact of population regulation on women in society. Specifically, the paper examines the ethical implications of population regulation and population limitation through the bioethical principles of autonomy, nonmaleficence, and beneficence, the value of equality, and the framework of deontology, exploring the duty women have to society, their children, and themselves when making reproductive decisions, and whether we as a society should seek to influence these highly personal decisions. This excerpt narrowly focuses on the role of duty in reproduction.
I chose to analyze the issue of governmental intervention in population through the lens of deontology, which judges actions based on their adherence to a set list of principles or duties held by individuals. Governments are made to be representatives of the people, imposing the laws and regulations that are best for the majority of society. Following this, governments would have a deontological duty to act in the best interest of their citizens in terms of maintaining population health and minimizing the subsequent negative consequences that could follow. Thus, they would have a duty to regulate the population, whether that be through limitation or encouraging growth. However, it is important to recognize that the government also holds a duty to respect the rights of its citizens, especially when it comes to their autonomy. Ideally, governments are meant to walk the line between paternalism and autonomy: they regulate enough to provide all citizens with a safe and prosperous life, while allowing enough autonomy for individuals to make their own decisions using their own values. Many governments, such as our own, explicitly protect these rights of freedom through their constitutions (Reshanov). By taking away the autonomy of all women, even in the case of a population crisis, governments are violating the duty to respect the choice of their citizens and accepting individual suffering for the greater good. This has the potential to create a slippery slope, where governments are justified in removing autonomy from their citizens as long as the majority benefits somewhat; this line of thinking often leads to concentrated state power and totalitarian ambitions, which often become corrupt and overbearing.
While the government has a duty to regulate its population to prevent economic and social collapse, it also must consider the impacts on its citizens, especially women, and take steps to prevent overreaches of autonomy.

Artwork by Caden Almond
In addition, I found that women do not individually hold a duty toward maintaining population health, as it holds too great a potential to conflict with their own values. While duties are straightforward for a government to follow, it is impossible to insist that all women have the same responsibilities in preventing societal collapse, as this places an undue burden on an already burdened population. As mentioned earlier, the choice to have kids is a weighty one, and involves many different factors that parents must consider, each of which varies from family to family. If women were bound by duty to have kids or not have kids no matter what, great harm would be inflicted on many families. By forcing women to accept a larger duty to population health, the government threatens their individual duties by creating a societal one that supersedes all others at all costs. Creating limitations on
reproductive choice not only infringes on autonomy, but on the individual’s ability to fulfill their responsibilities. For example, it is commonly accepted that parents have the duty to give their child a good life, and that they should only intend to have one if they know that they have the means to raise one. Imagine a couple who want to have a child one day, but hold off due to their financial situation and ability to provide around-the-clock care for that future child. If the government required childbirth in this situation, the parental duties of this couple would be compromised, forcing them into having a kid that they don’t feel they can support to the best of their ability. This compromise of personal values for large-scale gain is inevitable when forcing citizens to take on the burdens of a larger government. Individuals should certainly have an awareness of the impact of their actions on a larger scale, but they should not be forced to sacrifice things in their life in order to uphold societal values.
In conclusion, it is nearly impossible to say that all women must consider society’s needs over their own, as this approach invalidates their individual duties to their partners, families, and future children. While women, and all citizens, certainly hold some level of duty through the framework of deontology to their government and the wellbeing of their society, they also hold other, more pressing duties as well, such as their responsibilities as a mother, sibling, student, and worker. If women took full responsibility for promoting population health, that would mean potential neglect of their duties to their existing children, family, friends, and workplace, upending their lives for a single life that serves as only a drop in the bucket of the national population. This role in regulating population is exclusive to the government, as it is its responsibility to provide for citizens and shape a prosperous society, especially when it comes to matters that may occur in the future. However, it

must also be acknowledged that another part of the government’s duties is to uphold the rights of citizens, such as those found in the constitution of that country. Thus, the government should regulate, but it must do so with respect for the autonomy of citizens and without infringing on their civil liberties.
In an ideal world, government and reproductive health would be completely separate, and reproduction would be up to the individual. However, we cannot overlook the signs of collapse right in front of us; nor can we assume that the problem will sort itself out. South Korea is not an exception in the world; while it boasts one of the lowest birth rates, it is by no means alone, with countries like Italy, Sweden, Thailand, and yes, even the United States, experiencing similar drops in new births. Despite common belief, population regulation is not a thing of the past; instead, it is a tool that we may be forced to use in the very near future. For example, within its first 100 days in office, the Trump Administration has been exploring various policy options for lifting birth rates, such as a $5,000 “baby bonus” after delivery, an increase in education on menstrual cycles to increase fertility, and a reservation of 30% of Fulbright scholarships for married applicants or those with children (Kitchener). Trump has even dubbed himself “the fertilization president,” suggesting that these topics are incredibly important to our country and will continue to be relevant for many years to come (Goldberg).
Population regulation, like all other governmental policies, is a tool that can be used for the greater good of society. Like any tool, there must be limits on how it can be used. In order for population regulation to be ethical, citizens must continue to have autonomy over their own reproductive choices. There must be no required standards that citizens have to meet, whether that be a ban on having children or an order to have children. This ensures that the autonomy of citizens is protected and that
they can provide the best life for however many children they want to have. The government should have the ability to use positive reinforcement to encourage population policy, however, as these measures help provide support to women without forcing them into a corner. Rather, these potential benefits simply become considerations to weigh when discussing reproductive choices. As long as the incentives stray away from punishing tactics such as fines, they provide benefits to those who wish to pursue that path while not negatively impacting those uninterested. Regulation not only can be pursued under these guidelines, but it should be pursued if necessary, as the duty of the government states that it must pursue the best future for its citizens and work to protect them from potential disaster, such as a population crisis. The government should take these steps to regulate the population, while also respecting the rights of citizens as a secondary part of its duty. While individual women do not hold a responsibility toward population health, the government does, and it should move within these guidelines in order to secure the best consequences possible for everyone. It is unreasonable for each family to weigh population health in their decisions, but it is not unreasonable for the government, which was created in order to pursue a greater good for everyone.
If done through a noninvasive approach with the goal of strengthening population health for the greater good, population regulation is ethical for the government to pursue. However, the duty to do so falls on the shoulders of our elected officials, not those of individual women simply trying to live their lives.
BY SKYLAR LI
Artificial intelligence (AI) shows great promise in the medical field, with large language models achieving significant performance on text-based medical evaluations and exhibiting impressive diagnostic abilities. There is accelerating integration of AI technologies in clinical decision-making, diagnostics, and patient communication, helping improve accuracy, efficiency, and access to quality healthcare. However, medicine’s core identity as a profoundly human practice is grounded in trust, empathy, and relational care; the deployment of AI technologies in healthcare jeopardizes these comforting qualities. This paper examines the ethical implications of deploying AI in healthcare, analyzing how, while AI can enhance equity and precision, unreflective adoption risks eroding the human-to-human bond foundational to ethical medicine.
AI has become increasingly integrated into healthcare, supporting tasks such as diagnostics, predictive analytics, patient triage (the rapid assessment of patients’ medical urgency), administrative automation, and virtual assistance. These technologies have demonstrated remarkable analytical and linguistic capabilities; for instance, large language models now achieve impressive performance on text-based medical evaluations. Nevertheless, these systems are “black box” models, meaning hallucinations are extremely difficult to prevent, as the way the models process information remains a mystery.

Recognizing both the promise and potential risks of these systems, the U.S. Food and Drug Administration (FDA) has issued draft guidance regulating AI as a medical device, requiring premarket review and postmarket surveillance. This framework reflects a consideration of the ethical foundations of medical practice: beneficence, non-maleficence, autonomy, and justice.
From this intersection of innovation and ethics arises the central question of this paper: can AI preserve, or even strengthen, the moral dimensions of healthcare, or does it risk diminishing the human connection at its core?

Art by Caden Almond
There are multiple ethical tensions that arise in the deployment of AI in a healthcare setting.
First, it is difficult to find a balance between the values of empathy and efficiency. The relational act of listening and comforting is one of the major aspects that builds trust between patients and healthcare providers. If emotional labor is delegated to AI, it brings the risk of moral outsourcing, causing a potential loss of accountability among the people behind medical systems. On the other hand, using AI for some medical tasks may free clinicians to spend more time with patients and build a deeper emotional connection.
Next, when trying to achieve equity by using AI systems, bias can interfere. Large language models’ ability to analyze structured medical datasets and identify patterns is unparalleled, meaning they have the potential to become incredible “doctors.” However, there is often historical bias in these datasets, causing disparate impacts on marginalized populations. This means that developers and institutions have an ethical responsibility to audit and mitigate bias before deploying AI systems in underserved regions.
Finally, patient privacy can be jeopardized when the data ownership of AI companies is considered. With the sensitive nature of health data, it is integral to examine the ethics of consent in the continuous data collection of large language models. Companies must find a balance between innovation and patients’ control over their own information.
When responsibly integrating AI into healthcare settings, the ethical values of transparency and accountability are essential. Some have experimented with a participatory approach involving both clinicians and patients to maintain a “human-in-the-loop” model. This ensures the preservation of clinician judgment to override potential, even inevitable, hallucinations. Additionally, continuous ethical evaluation post-deployment is integral as AI systems’ abilities continue to grow.
Even with the associated risks of deploying AI in healthcare and medicine, the tool has great potential to transform the fields by expediting innovation and increasing accessibility. Therefore, AI can be not a replacement, but an enhancer for the integral human connection between clinicians and patients.
To support this environment, clinicians should be trained in AI literacy and ethical reasoning to ensure their accountability. AI companies can design AI systems that explicitly prioritize the ethical foundations of medical practice.
AI models’ demonstrations of incredible medical proficiency show that they have the potential to enhance equity and precision in the healthcare setting; however, insufficient consideration of the ethical implications can bring the risk of jeopardizing the foundations of medical practice. The medical field needs a renewed vision of healthcare where technology can amplify, rather than replace, human compassion. As these machines become acquainted with medicine, it is essential that medicine does not lose sight of the humanity, empathy, and attentiveness that have earned the trust of countless patients.

BY AADYA KUMAR
As humanity increasingly reaches for the stars, I wonder: can we ethically claim ownership of a place no human has ever inhabited? The notion of “res nullius,” meaning property belonging to no one, traditionally applied to land on Earth, is now being stretched to outer space. As private companies and nations race to extract resources from the Moon, Mars, and asteroids, many ethical questions arise: who has the right to claim, profit from, and regulate these celestial territories?
The 1967 Outer Space Treaty, ratified by over 100 nations, states that celestial bodies are not subject to national appropriation. In theory, space is beyond the sovereignty of any one entity. Yet recent developments blur the lines. Companies like SpaceX, Blue Origin, and Planetary Resources are planning mining operations, while nations like Luxembourg and the United States have passed laws recognizing private claims on extraterrestrial resources. For example, the U.S. Commercial Space Launch Competitiveness Act of 2015 explicitly allows companies to “possess, own, transport, use, and sell” resources extracted from celestial bodies (U.S. Congress).
From a deontological perspective, this signals potential concerns. Immanuel Kant’s ethics emphasizes choosing courses of action based on principles that respect rational agents as ends in themselves. Claiming uninhabited celestial bodies for profit risks treating space as a mere means for human gain, potentially violating the moral principle of universality if every nation and corporation did the same. Are we justified in imposing earthly property concepts on the cosmos, or could this be a form of ethical overreach?
From a utilitarian viewpoint, space exploration and resource extraction could maximize overall well-being if done responsibly. Access to minerals from space could help with the lack of resources we have on Earth, help us invent new technology, and create economic opportunities. Yet this becomes complex: if a small number of corporations or nations monopolize resources, the benefits accrue to elites while the global population bears the ethical and environmental risks of unregulated extraction. The 2021 Artemis Accords, led by the United States, attempt to promote cooperation and peaceful use of space, but critics argue that these agreements favor powerful nations and private entities (BBC News).
“Are we justified in imposing earthly property concepts on the cosmos, or could this be a form of ethical overreach?”
John Rawls’ theory of justice provides further insight. According to Rawls, inequalities are only justifiable if they benefit the least advantaged. If space resource claims are concentrated among wealthy nations and private companies, the least advantaged on Earth gain little. We must ask: can we structure policies so that space exploration benefits humanity collectively, rather than reinforcing the privileges of the few?

Environmental and cultural responsibility also matters. While space may appear empty, altering celestial bodies through mining or colonization could produce irreversible changes. Even if no life exists, ethical stewardship demands the consideration of long-term consequences. The framework of virtue ethics encourages prudence and temperance: just because we can exploit resources does not mean we should. Acting as cosmic stewards, rather than conquerors, aligns with broad moral virtues and preserves options for future generations.
“Acting as cosmic stewards, rather than conquerors, aligns with broad moral virtues and preserves options for future generations.”
Ethical clarity diminishes when human ambition extends beyond our home planet, forcing us to face an uncomfortable truth. We must balance innovation and progress with fairness, sustainability, and moral responsibility. If we fail to establish equitable and ethical frameworks now, the cosmos risks becoming a mirror of terrestrial inequalities where wealth and power dictate access rather than justice and shared human flourishing. The final frontier may be empty, but our ethical obligations to each other and to the future are not.
“The final frontier may be empty, but our ethical obligations to each other and to the future are not.”
BY GITANJALI KAMESWARAN
Artwork by Divyanshi Bansal

On September 30, 2025, OpenAI released Sora 2, the successor to Sora, which was introduced earlier in 2024. Sora, meaning “sky” in Japanese, is an artificial intelligence (AI) tool that generates videos when provided with a text prompt by the user. These videos can be realistic (“live action”) and can closely imitate specific cameras and lenses, or they can replicate various animation styles. Each video created by Sora 2 is complete with audio, and users of this tool have the option to include specific individuals from their contacts through the cameo feature. Sora 2 serves as validation that, from here on out, artificial intelligence will become increasingly able to imitate human skills that many previously thought were impossible to recreate.
OpenAI states that they are launching Sora 2 “responsibly.” Some measures taken include using
watermarks to help distinguish real videos from AI-generated ones and filtering out unsafe or inappropriate material, including terrorist propaganda and self-harm promotion. OpenAI plans to enforce these measures by checking video transcripts, prompts, and video outputs (“Launching Sora Responsibly”). Additionally, Sora 2 allows the use of cameos, which are users’ digital “characters” created in the Sora 2 app using audio and video samples of real people, objects, or even pets. Users can request access to other users’ cameos for use in their videos; however, the Sora 2 app is currently invite-based, and users can approve and revoke others’ access to their cameos. To prevent misuse, setting up a cameo requires a voice recording and a video sample from all users who choose to allow their cameo to be used. Sora 2 also enables users to remove any videos created by others that feature
them, including drafts. Most importantly, Sora 2 includes stricter limits on teenage users’ feeds and cameo usage, as well as human moderators to review instances of bullying that may arise on the platform. OpenAI has also made efforts to integrate parental controls for Sora 2 via ChatGPT. According to OpenAI, through their ChatGPT accounts, parents can receive direct notifications on their teen’s activity as well as override infinite scroll limits, turn off algorithm personalization, and manage direct message settings.
At first glance, Sora 2 seems like an incredible innovation. Unlike animation, cinematography, or video editing, little skill is required to use Sora: from simple text prompts, nearly anyone can generate videos complete with synced audio. However, beneath the surface, Sora 2 presents several ethical dilemmas.
First, Sora 2 is full of biases. WIRED, a technology magazine, tested Sora by analyzing the AI’s responses to specific prompts. Earlier in 2025, tech journalists Reece Rogers and Victoria Turk found that some of these biases regard gender: when asked to portray professions such as a CEO, pilot, political leader, or college professor, all the results showed men. Similarly, when asked to portray a flight attendant, childcare worker, receptionist, or nurse, all the results showed women. They also found race-based biases. When asked to generate a video of an interracial couple, Sora often depicted a couple of the same race, with one wearing a black t-shirt and the other wearing a white t-shirt. When provided with a prompt that had no information about race, Sora almost always generated people who were either Black or white, with scarcely any people from other
ethnic or racial backgrounds. Additionally, they found ableist biases and a similar lack of diversity in portrayals of disabilities. When Sora was prompted for a disabled person, it showed only one person in a wheelchair (“OpenAI’s Sora is Plagued…”). These misrepresentations contradict the ethical values of fairness and equality, perpetuating harmful stereotypes and prejudices.
Such biases come from the data used to train the AI models, but fortunately, as both interactions with artificial intelligence and the material with which AI is trained continue to diversify, these biases will most likely be reduced. In a conversation with NPR’s Bobby Allyn in 2025, Aaron Rodericks, a technology company’s safety expert, noted: “In a polarized world, it becomes effortless to create fake evidence targeting identity groups or individuals, or to scam people at scale…. And most people won’t have the media literacy or the tools to tell the difference.”
Second, Sora 2 is a potential threat to human talent in the movie, music, and content-creation industries. Sora can mimic specific animation styles almost perfectly; this jeopardizes much of the artistic talent in the animation industry. For example, a prompt on the Sora 2 website in 2025 requests a video in “the style of a Studio Ghibli anime. A boy and his dog run up a grassy scenic mountain with gorgeous clouds, overlooking a village in the distant background” (“Sora 2 Is Here”). Most movies by Studio Ghibli, the award-winning Japanese animation studio, can take three to five years to make, and some take up to seven, with Studio Ghibli animators drawing and coloring each frame by hand. A few seconds on screen can take Studio Ghibli artists anywhere between a month and a year to
animate, but with Sora 2, a clip of the same length can be generated in seconds. The lack of human originality is in conflict with the ethical value of authenticity, which states that a person’s actions should be true to their nature. It could be argued that AI-generated videos lack the authenticity of human nature and that AI platforms like Sora 2 could never replicate the human talent it takes to animate a movie.
Also in question is the fairness of Sora 2. According to the ethical value of fairness, individuals should be provided with the same opportunities to succeed and achieve their own goals. It can be considered unfair that the work of a skilled animator can be replicated by non-human AI with a fraction of the time and effort, or that someone with no training can recreate the work of a trained artist, just because they can pay the monthly fees for the Sora 2 platform. When considering the ethical values of authenticity and fairness, as well as economic and job security, the ability of AI to replace human labor poses a serious issue for human talent and storytelling, not only for Studio Ghibli but for animation studios all over the world.
Third, Sora 2’s carbon footprint could be hugely damaging to the environment. This reasoning is supported by the ethical values of sustainability and proportionality. In 2025, James O’Donnell and Casey Crownhart looked at AI’s energy footprint for the MIT Technology Review. They found that as millions of users generate information on AI platforms, the carbon intensity of electricity used by data centers (where these AI models are trained and housed) was 48% higher than the US average (“We Did the Math…”). There are approximately 3,000
data centers in the United States, and according to the Lawrence Berkeley National Laboratory, AI alone will consume as much electricity annually as 22% of all households in the US by 2028 (United States Department of Energy). These statistics show how the resources used by Sora 2 conflict with the ethical value of sustainability. Sustainability implies that today’s needs should be met without compromising those of the future; according to this value, Sora 2 should fulfill its purpose without compromising the future of the environment. When it comes to the ethical value of proportionality (whether the benefits of an activity or commodity outweigh the harm), Sora 2 is primarily being used for recreational purposes, rather than making technological progress to benefit society. As the use of Sora 2 increases, there has to be more to it than recreational pleasure to outweigh its environmental repercussions.
Ultimately, Sora 2 poses numerous ethical concerns. What seems like a groundbreaking technological innovation starts to look more like something that could put today’s world in danger, whether by reinforcing harmful stereotypes through the misrepresentation of various communities, creating misinformative and misleading videos, replacing human skill, or having a massive environmental impact. The rise of AI platforms that pose such concerns only makes it more important to question their ethicality. By considering what goes against our individual values and what is or isn’t ethical and why, AI can become part of our lives for the better, not worse.
By Pearl Dawadi

ArtworkbyChelseaChen
Gene editing is a new, powerful technology that allows doctors and scientists to modify specific parts of our DNA (deoxyribonucleic acid). DNA is the “code of life” inside our cells. It tells our bodies how to grow, what traits we have, and even how we fight diseases.
In recent years, a tool called CRISPR (clustered regularly interspaced short palindromic repeats) has made gene editing faster, cheaper, and more accurate than ever before. As a result, some are very excited about its ability to cure serious diseases, while others are worried about how it might be misused. The question many are asking is: should scientists be allowed to edit human genes, and if so, where do we draw the line?
To understand this debate, it helps to know what gene editing actually is. Our DNA is a long instruction book written in a special code. Small sections of this code are called genes, and they control specific traits, like eye color or how our red blood cells are shaped. Sometimes, a gene has a change, or mutation, that can cause a disease.
Gene editing is a process where scientists change or repair parts of the DNA inside cells. According to a National Geographic article entitled “Molecular Scissors,” human genome editing technologies can be used on somatic cells, which are non-heritable and can affect only the person who is being treated, as well as germline cells,
which can be passed down to future children or generations. CRISPR is one of the most important tools for doing this. In this article, CRISPR is compared to a pair of “molecular scissors.” It uses a piece of RNA (ribonucleic acid) to find a specific spot in the DNA, and then an enzyme cuts it. After that, the cell can repair the cut, and scientists can sometimes insert a corrected piece of DNA in place of the original one.
There are several reasons people are hopeful about gene editing. One of the biggest is its potential to treat or even cure genetic diseases. For example, sickle cell disease is a painful, lifelong condition caused by a single change in a
gene that affects red blood cells. In 2023, the U.S. Food and Drug Administration (FDA) approved the first CRISPR-based therapy to treat sickle cell disease in certain patients. This treatment edits a patient’s own cells so they can make healthier hemoglobin, which can reduce or even stop the pain that patients experience.
Gene editing is also being tested for other conditions, such as high cholesterol and some rare diseases. In these cases, a one-time treatment could potentially give long-term or even lifelong benefits. This is very different from taking medicine every day or having repeated hospital visits. For many families, gene editing offers real hope where there were few options before.
Beyond human medicine, CRISPR is transforming other areas of science as well. It can be used to create crops that resist pests or survive harsh climates, which could help with food security. It can also help scientists study how genes work in plants and animals much more quickly than before. Because of all these possibilities, some people see gene editing as a revolution in biology and medicine.
Even though gene editing has many potential benefits, there are also serious concerns. One of the biggest worries is the idea of “designer babies.” This term refers to using gene editing not just to prevent disease, but to choose or improve traits such as height, eye color, athletic ability, or maybe even intelligence. If parents could pay to give their children certain “advantages,” it could create a bigger gap between the rich and the poor. Families who cannot afford gene
editing could be left behind, while wealthier families might be able to buy healthier or more “perfect” genes for their children. This could make society even more unequal and unfair.
Another concern is about human diversity. The traits that some may want to “fix” are distinct to certain identities and cultures. Many worry that if we start labeling certain traits as “bad,” we might send the message that people who have those traits are less valuable, which is certainly not true. This raises ethical questions about who gets to decide what counts as a disease and what counts as simply being different.
There are also safety issues. Changing DNA is complicated, and scientists do not always know every possible effect of an edit. CRISPR cuts the DNA in a specific place, but sometimes there can be off-target effects, where other parts of the DNA are changed by accident. These unexpected changes could cause new health problems that might not show up until many years later. Because gene editing is still a relatively new technology, some experts say we must continue to study its risks.
One of the most serious ethical issues is gene editing in embryos or germline cells. When scientists edit these cells, some of the changes can be passed down to future generations. This means that a decision made today could affect not just one person, but their children, grandchildren, and beyond.
A few years ago, a scientist in China created the
first gene-edited babies by changing their embryos’ DNA. The goal was to make them resistant to HIV, but the experiment broke many scientific and ethical rules because it edited embryos in a way that could permanently affect future generations. This experiment was completed without consent, safety testing, or approval. The global reaction was very negative, and many countries moved to strengthen their laws and guidelines on genetic modification. This case showed how quickly things can go wrong when powerful tools like CRISPR are used without enough safety checks and public discussion.
Because of this, many experts argue that gene editing in embryos should be either banned or very tightly controlled. They say we should not rush into permanent changes that we do not fully understand and that future generations cannot consent to.
Given all the excitement and the worries, many people believe we need strong rules for how gene editing is used. But who should make those rules? Some say that governments should set clear laws about the kinds of gene editing allowed and forbidden. Others think decisions should involve a mix of people: scientists, doctors, ethicists, geneticists, and more.
Many organizations and scientific groups agree on at least one point: there should be an open discussion, such as community seminars, workshops, and webinars, before making big decisions about editing human genes. The public
can learn, ask questions, and share opinions. People should understand the basic science and have a chance to share their values. The ultimate goal of this technology is to allow treatments that prevent severe suffering, while preventing uses that are unfair, unsafe, or disrespectful to human dignity.
In the end, the debate over gene editing comes down to a balance between what we can do and what we should do. On one hand, CRISPR and other tools give us a chance to reduce or even end certain genetic diseases. For families facing painful conditions like sickle cell disease, gene editing can feel like a miracle. On the other hand, there are real risks of creating unfair advantages, harming diversity, and causing unknown side effects that we may not be able to reverse.
Most experts do not see gene editing as simply “good” or “bad.” Instead, they see it as a powerful tool that must be used carefully and responsibly. According to my research, many suggest that gene editing could be acceptable for serious, life-threatening diseases when there are no better treatments, but not for changing appearance or personality. Even then, strict safety testing and long-term follow-up are needed.
As students, we may not have all the answers right now, but we will be part of the generation that lives with the consequences of these decisions. Learning about gene editing and asking these hard questions is an important first step. In the end, we are left with the same question many are still debating: just because we can edit the code of life, does it really mean we should?
“Artificial Intelligence-Enabled Medical Devices - Artificial Intelligence in Software as a Medical Device.” FDA, 25 Mar. 2025, https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-software-medical-device. Accessed 21 Oct. 2025.
Gemini Team, et al. “Gemini: A Family of Highly Capable Multimodal Models.” arXiv [cs.CL], 18 Dec. 2023, arxiv.org/abs/2312.11805.
Singhal, Karan, et al. “Towards Expert-Level Medical Question Answering with Large Language Models.” arXiv [cs.CL], 16 May 2023, arxiv.org/abs/2305.09617.
Zhang, Xiao, et al. “Medical Exam Question Answering with Large-Scale Reading Comprehension.” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, 2018.
“Res Nullius Law and Legal Definition.” USLegal, Inc., definitions.uslegal.com/r/res-nullius/. Accessed 4 Nov. 2025.
“Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies (Outer Space Treaty).” United Nations Office for Outer Space Affairs, www.unoosa.org/oosa/en/ourwork/spacelaw/treaties/introouterspacetreaty.html. Accessed 5 Nov. 2025.
“H.R. 2262 - U.S. Commercial Space Launch Competitiveness Act.” Congress.gov, 114th Cong., 2015-2016, https://www.congress.gov/bill/114th-congress/house-bill/2262. Accessed 5 Nov. 2025.
“How Close Are We Really to Mining Asteroids?” BBC News, 20 Mar. 2025, www.bbc.com/news/articles/cxwwjlrk1mlo. Accessed 5 Nov. 2025.
Sims, Josh. “Are We on the Verge of Mining Metals from the Asteroids above Earth?” BBC Future, 23 Mar. 2025, www.bbc.com/future/article/20250320-how-close-are-we-really-to-mining-asteroids. Accessed 6 Nov. 2025.
“John Rawls.” Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Fall 2016 edn., plato.stanford.edu/entries/rawls/. Accessed 7 Nov. 2025.
“Kant’s Moral Philosophy.” Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Fall 2024 edn., plato.stanford.edu/entries/kant-moral/. Accessed 9 Nov. 2025.
Allyn,Bobby “Soragivesdeepfakes'apublicistandadistributiondeal'Itcouldchangetheinternet” NPR,Oct 2025, wwwnprorg/2025/10/10/nx-s1-5567162/sora-ai-openai-deepfake Accessed1Dec 2025
“DOE Releases New Report Evaluating Increase in Electricity Demand from Data Centers.” Energy.gov, United States Department of Energy, 30 Dec. 2025, www.energy.gov/articles/doe-releases-new-report-evaluating-increase-electricity-demand-data-centers. Accessed 1 Dec. 2025.
O’Donnell, James, and Casey Crownhart. “We Did the Math on AI’s Energy Footprint. Here’s the Story You Haven’t Heard.” MIT Technology Review, MIT, 20 May 2025, www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/. Accessed 1 Dec. 2025.
OpenAI. “Sora Is Here.” OpenAI, 9 Dec. 2024, openai.com/index/sora-is-here/. Accessed 1 Dec. 2025.
Rodgers, Reece, and Victoria Turk. “OpenAI’s Sora Is Plagued by Sexist, Racist, and Ableist Biases.” WIRED, Condé Nast, 23 Mar. 2025, www.wired.com/story/openai-sora-video-generator-bias/. Accessed 1 Dec. 2025.
The Sora Team. “Launching Sora Responsibly.” OpenAI, Sept. 2025, openai.com/index/launching-sora-responsibly/. Accessed 1 Dec. 2025.
“Sora 2 Is Here.” OpenAI, 30 Sept. 2025, openai.com/index/sora-2/. Accessed 1 Dec. 2025.
Billauer, Barbara Pfeffer. “Designer DNA: Genetic Edits, Ethics, and Pseudo-Prophecy.” The American Council on Science and Health (ACSH), ACSH, 17 July 2025, www.acsh.org/news/2025/07/17/designer-dna-genetic-edits-ethics-and-pseudo-prophecy49617. Accessed 6 Nov. 2025.
Columbia University. “First CRISPR Therapy Approved for Sickle Cell.” Columbia University Irving Medical Center, Columbia University, 8 Dec. 2023, www.cuimc.columbia.edu/news/columbia-impact-first-new-crispr-therapy-approved-sickle-cell. Accessed 5 Nov. 2025.
Doudna, Jennifer. “Jennifer Doudna Introduces CRISPR.” Innovative Genomics Institute, innovativegenomics.org/what-is-crispr/. Accessed 5 Nov. 2025.
“FDA Approves First Gene Therapies to Treat Patients with Sickle Cell Disease.” FDA, 8 Dec. 2023, www.fda.gov/news-events/press-announcements/fda-approves-first-gene-therapies-treat-patients-sickle-cell-disease. Accessed 6 Nov. 2025.
National Geographic Society. “Molecular Scissors.” National Geographic, 1 Oct. 2024, education.nationalgeographic.org/resource/molecular-scissors/. Accessed 5 Nov. 2025.
Qi, Stanley. “What Is CRISPR? A Bioengineer Explains.” Stanford Report, Stanford University, 10 June 2024, news.stanford.edu/stories/2024/06/stanford-explainer-crispr-gene-editing-and-beyond. Accessed 5 Nov. 2025. Interview.
Saey, Tina Hesman. “Explainer: How CRISPR Works.” Science News Explores, 31 July 2017, www.snexplores.org/article/explainer-how-crispr-works. Accessed 6 Nov. 2025.