

Dear Reader,
The selection of research in this journal contributes to historical memory by illuminating lesser-known narratives and presenting arguments that arise from new perceptions of underlying contexts. This is the most extensive historical project many of us have conducted, and we are grateful to have had such an opportunity in high school. Our work has also laid the foundations for the editorial boards and issues in years to come.
Each of the contained papers bridges disciplines to provide perspectives that contribute to a more comprehensive understanding of past figures, events, and regions. As our school name would suggest, many of us have taken a historical approach to scientific subjects, while others have taken an empirical approach to history.
The journal’s name, Quetzalcoatl, was proposed by our advisor and instructor, Dr. Marcelo Aranda, and inspired by a family painting in his childhood home. The cover, which resembles the painting, is a representation of the Aztec god Quetzalcoatl (“Feathered Serpent”). The deity’s importance to Latin American tradition reflects the cultural connections that inspired many of our chosen topics. Quetzalcoatl is the god of knowledge and learning. We, as editors and researchers, have conducted interviews, peer-reviewed our drafted research, orally presented our findings, and finally merged the products into an extensive body of knowledge—this publication.
We would like to acknowledge Dr. Marcelo Aranda for his guidance and flexibility while we pursued the topics that excited our curiosity, and the Chair of Humanities at NCSSM-Morganton, Dr. Sarah Booker, for granting our request to publish and for laying out the logistics to achieve this goal. The Research in Humanities class and editorial board have given significant effort to composing this journal (so much so that our work warranted homemade cookies and celebratory trips for tacos).
In your hands is the first NCSSM-Morganton Humanities Research Journal, which we hope provokes new questions and draws further attention in historical scholarship to the subjects it contains.
Sincerely,
The Research in Humanities
Founding Editorial Board:
Erin Collins
Sydney Covington
Kimberly Gómez-González
Nitya Kapilavayi
Rucha Padhye
Armed Spaniards approached hundreds of Aztecs who had gathered to welcome them to the New World. After a series of performances and ceremonies, the Aztecs led the Spanish across a lengthy bridge. Moctezuma, the ruler of the Aztec empire, and Hernán Cortés, a Spanish conquistador, were suddenly only a couple of meters apart when Moctezuma descended from his litter. Approaching Cortés, the two men briefly embraced, representing the first direct contact of the Spanish with Nahua civilization. Between them stood Malinche.1 From then on, she would become known as his interpreter, advisor, lover, mother of his child, and the greatest asset in the conquest of the Aztec empire.
Before her life took an extraordinary turn, Malinche was on track to fulfill her role in society as a woman. In Aztec society, the birth of a child was celebrated throughout the empire. As the baby left the mother at birth, the midwife released a series of battle cries in praise of the mother’s strength and her enduring labor for her child. The midwife would continue by informing the newborn of its good fortune in belonging to the family. She would then assure the newborn that a man or woman had many duties, but that stewardship of the empire and admiration of the gods would take priority for as long as they lived. Afterward, the midwife conducted the bathing ceremony in the presence of the child’s extended family. During this, she laid a basin of water on a reed mat and surrounded it with miniature tools appropriate to the baby’s sex and the family’s standing and profession. For example, boys who had been destined for the army by the tonalamatl priests would be laid with a small bow and arrow on top of an amaranth dough tortilla to represent a warrior’s shield. A female baby would be
1 Bernal Diaz, “Historia Verdadera: AHA,” Historia Verdadera | AHA, accessed November 14, 2023, https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/historia-verdadera.
laid with a tiny skirt together with spinning instruments to represent a woman’s work.2
The Aztec empire placed a great deal of importance on education. At around the age of three, daughters were taught to grind maize and to make tortillas. At the same age, boys began fetching and carrying water. When boys reached the age of six or seven years, they began to assist their fathers, learning to fish and gather reeds. During early adolescence, children of commoners went to the telpochcalli (‘young person’s house’). At these institutions, girls and boys were taught separately. However, they were both taught history, public speaking, dancing, and singing and received religious instruction. Boys also underwent rigorous military training, while girls learned to serve in the temples of the city. The Aztec education system emphasized a young person’s responsibilities: he or she was encouraged to find fulfillment in serving the gods, the state, the calpulli (‘family homes’), and the family’s livelihood. Disparities in education were largely based on socioeconomic status. For example, the sons and daughters of the nobility were educated separately in establishments called calmecac. Children of nobility would learn military theory, agriculture and horticulture, astronomy, history, and the skills of architecture and arithmetic in addition to the topics commoners learned. They would also be instructed in religious rituals, learning how to follow the sacred calendar, interpret the tonalamatl, and understand the organization of the main festivals of the Aztec religious year.3
Within the domestic spaces of the empire, women would educate their daughters on the social etiquette specific to their gender in preparation for courtship and marriage.
2 Inga Clendinnen, Aztecs: an Interpretation (Cambridge, MA: Cambridge University Press, 1991).
3 Inga Clendinnen, Aztecs: an Interpretation (Cambridge, MA: Cambridge University Press, 1991).
An example of this is showing her how, when meeting a man on a path, she should turn away and present her back to him, so that he could pass easily and undisturbed.4 Once young people were in their late teens, professional matchmakers would pair young men with their chosen young women. The matchmaker presented his case on four consecutive days, and then the woman’s parents would be expected to announce whether they accepted the marriage offer. Marriage was every young Aztec girl’s dream and her only option in life if she wished to be respected. The marriage ceremony was held in the house of the bride’s father. The mother of the bride spoke to her new son-in-law, asking him to remember that he owed his wife a duty of love, hard work, and self-sacrificing attention. After the wedding, the new husband and wife lived in the house or the group of houses occupied by the bride’s family for about six or seven years. The husband’s new in-laws benefited from his labor throughout this time and, indeed, by custom could evict him from their home compound if he refused to work.5
After marriage, a wife was expected to go to the market to negotiate the prices of goods and to keep up the household, in addition to participating in social events. Ultimately, if the marriage did not work out, both parties could petition the court to have the marriage nullified and separate from their partners.6 If there were younger children involved, they would typically remain with the mother, while older children tended to go with the same-sex parent. This social norm resulted in men divorcing their wives if they proved unable to have children or if they failed in duties such as preparing their evening baths. Women of this period had a fair degree of independence and agency, in part because smaller populations lessened the need for specialization of the sexes.7
Born as Malinalli, called Malintzin in Nahuatl, and ultimately pronounced Malinche by the Spaniards, she was reborn when she was baptized and received the name Doña Marina around 1519.8 According to caciques, or native chiefs,
4 Ibid.
5 Inga Clendinnen, “Wives,” Aztecs: an Interpretation (Cambridge, MA: Cambridge University Press, 1991).
6 Ibid.
7 Charles Phillips, “Aztecs,” essay, in The Complete Illustrated History of the Aztec & Maya: The Definitive Chronicle of the Ancient People of Central America & Mexico--Including the Aztec, Maya, Olmec, Mixtec, Toltec & Zapotec (London: Hermes House, 2010), 349.
8 Bernal Diaz, “Historia Verdadera: AHA,” Historia Verdadera | AHA, accessed November 14, 2023, https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/historia-verdadera.
near Coatzacoalcos, Malinche was raised by her two parents, who were themselves Aztec caciques. After her father passed away when she was young, her mother sold her to the Xicalango so that her stepbrother would be the sole heir of the estate.9 This anecdote about Malinche’s upbringing is recounted in detail in Bernal Diaz’s Historia Verdadera de la Conquista de la Nueva España, otherwise known as the True History of the Conquest of New Spain. According to Diaz, her mother “gave the little girl to some Indians from Xicalango at night so no one would see them, and told people that she had died.” Working for the Xicalango and Tabasco, Malinche picked up Yucatec, the language of the Maya civilization, which would later contribute to her value in aiding Spanish expansion. In 1519, while she was in her late teens, Malinche, along with nineteen other native women, was given to the Spaniards by the Chontals, a Mayan people who occupied Tabasco, in an attempt to create a peace agreement. The Spaniards, recognizing the ease with which she communicated, put Malinche to work with the Spanish conquistador Alonso Hernandez de Portocarrero as a servant and mistress until she was sent to work with Hernán Cortés.10
Diaz states, “... they were baptized, and the name Doña Marina was given to one Indian woman, who was given to us and who truly was a great Cacique…[She was of] the first women to become Christians in New Spain.” When the conquistadors discovered the ease with which she communicated, they partnered her with Geronimo de Aguilar, a Spaniard who spoke Mayan, to translate for Cortés. Aguilar, as it happened, had been stranded on the Yucatán coast for eight years until Cortés rescued him to form part of his team. La Malinche quickly picked up Spanish and arguably overshadowed Aguilar. Diaz would later write that “...Doña Marina had shown herself to be such an excellent woman and fine interpreter throughout the wars in New Spain, Tlaxcala, and Mexico, as I shall show later on, Cortés always took her with him…And Doña Marina was a person of the greatest importance, and was obeyed without question by the Indians throughout New Spain.”11 Because she became an invaluable member of the expedition, the Spaniards addressed her as Doña Marina as a sign of respect, while the natives added the honorific suffix -tzin, making her Malintzin. The Spaniards then misheard Malintzin as Malinche.
9 Oswaldo Estrada, Troubled Memories: Iconic Mexican Women and the Traps of Representation (State University of New York Press, 2019).
10 Bernal Diaz, “Historia Verdadera: AHA,” Historia Verdadera | AHA, accessed November 14, 2023,https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/historia-verdadera.
11 Ibid.
La Malinche began her involvement with Spanish expansion as an understated intermediary, a role crucial to every growing empire’s development. The ruler, in this case, was Charles V, who was both the Holy Roman Emperor and the King of Spain during the time of Hernán Cortés and La Malinche’s partnership. Alternatively, Moctezuma, the emperor of the Aztec empire, could also be identified as the ruler. An example of a regional leader in this instance was Hernán Cortés, who had direct orders from Charles V and was leading other Spaniards in the conquest of the Aztec empire. Intermediaries, such as La Malinche, were translators, advisors, and essentially anyone native to the desired area. Intermediaries were used as a way to connect to natives and gain trust through an individual’s social connections.12 To ensure reliance on and devotion to the empire, slaves and other people of low socioeconomic status would be selected as intermediaries.13 Because of this, and because of her linguistic faculties, La Malinche was the perfect candidate to be an ally of the Spanish crown. Regardless of the strategy with which they were used, and despite the empire’s perspective of the natives, all empires utilized intermediaries in one form or another because of their effectiveness in aiding the mission of expansion.
Fulfilling her role as an intermediary, La Malinche was particularly useful in gaining native allies to fight for the Spaniards against the Aztec empire. Malinche stood alongside Cortés in Cempoala as the Mexicas sent tax collectors and warned the Cempoalans against receiving the Spaniards. Malinche identified the tensions between the Cempoalans and the Mexica, and Cortés offered to protect the Cempoalans, many of whom accompanied him as he marched toward Tenochtitlan. Simultaneously, he released the Mexica tax collectors he was holding, telling them that he hoped that Moctezuma was well and eagerly awaited visiting him.14 This allowed Cortés to play to both sides until he could determine whom to trust to help him conquer the land.
Because he was so heavily dependent on Malinche, as seen through images in the Florentine Codex and cacique accounts, Cortés was called Señor Malinche, or even just Malinche, by natives, making him seem one with his interpreter.15 Hernán Cortés had previously gained fame from an expedition to Cuba, which earned him support for an expedition westward in 1519, when Spain heard murmurings of gold in the territory occupied by the Aztec empire, in what is now Mexico. He was appointed head of an expedition of eleven ships and five hundred men to Mexico. Curiously, Cortés only ever mentions La Malinche twice in the letters he routinely sent to the King while on the prolonged expedition. In 1520 he wrote, “my interpreter, who is an Indian woman,” and in 1526 he wrote, “Marina, who traveled always in my company after she had been given to me as a present.”16 An integral part of the conquest was these native allies, who had weapons, knew the land, and knew the weaknesses of the Mexica. La Malinche was Cortés’s only mode of communication with these communities, and so it can arguably be said that without La Malinche’s interpretation, the Spanish would have been unable to conquer the Aztec empire. Indigenous allies of the Spanish are seldom spoken of or given glory by the Spanish crown. Instead, Cortés was rewarded with fame and fortune in Spain, while the Mexica placed sole responsibility for the conquest and massacre on Malinche.
12 Jane Burbank and Frederick Cooper, “The Empire Effect,” Public Culture 24, no. 2 (2012): 239–47, https://doi.org/10.1215/08992363-1535480.
13 Ibid.
14 “Divisions and Conflicts between the Tlaxacalans and the Mexicas: AHA,” Divisions and Conflicts between the Tlaxacalans and the Mexicas | AHA, accessed November 14, 2023, https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/narrative-overviews/divisions-and-conflicts-between-the-tlaxacalans-and-the-mexicas; Bernal Diaz, “Historia Verdadera: AHA,” Historia Verdadera | AHA, accessed November 14, 2023, https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/historia-verdadera.
Upon arrival at Tenochtitlan, the capital of the Aztec empire, Moctezuma gave an elaborate and honorific speech to his visitors. Like those given to ambassadors, this speech referred to the visitors as “my child,” demonstrating respect, especially in elite Aztec circles. Malinche was the only Nahua-speaking individual among the conquistadors, and she chose to speak bluntly and directly, in the absence of any honorific titles.17 We will never know if Moctezuma said something worthy of this rude act of resistance. However, shortly thereafter, Cortés reported to Charles V the initiation of the conquest of the inhabitants of Mesoamerica. According to Spanish law, the Spanish King had no right to demand that foreign peoples become subjects of the Spanish empire, but he did have the power to take control over rebels against the Spanish crown. By documenting the initial alliance of the Aztecs to the Spanish, they could later be identified
15 Oswaldo Estrada, Troubled Memories: Iconic Mexican Women and the Traps of Representation (State University of New York Press, 2019).
16 Hernán Cortés, “Letters from Hernán Cortés,” American Historical Association, accessed November 15, 2023, https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/letters-from-hernan-cortes.
17 Camilla Townsend, “Malintzin’s Choices: An Indian Woman in the Conquest of Mexico,” Internet Archive, January 1, 1970, https://archive.org/details/malintzinschoice0000town.
as rebels and ‘rightfully’ overthrown.18 This theory is further supported by the fact that Cortés did not write about his meeting with Moctezuma until 1552, and the first Indigenous account did not appear until the later 1550s. Bernal Diaz, a Spanish conquistador and close colleague of Cortés, wrote about how the Aztecs viewed the myths with which they have been associated. When the Spanish heard stories that they were almighty gods, Diaz claimed that Moctezuma dismissed the stories by saying, “You must take them as a joke, as I take the story of your thunders and your lightnings.”19 This misconception could have led the Spanish to view the Aztec empire as a threat and to begin the undertaking of justifying colonization.
After the conquest of the Aztec empire, Hernán Cortés fathered La Malinche’s first son out of wedlock. Their son, known as Martin Cortés or El Mestizo, is remembered as the first mestizo; ‘mestizo’ refers to an individual of both European and Indigenous descent. While it is not completely clear what the genuine nature of their relationship had been, she went on to live with Cortés in a very lavish home in the later years of her life. Although he lived with multiple concubines, their long-standing proximity supports the claim that they had a close relationship. This lasted until her skills were no longer needed and she was married off to a colleague of Cortés. Shortly thereafter, Cortés moved back to Spain with Martin by his side to become the “Commander of Santiago,” according to Diaz.20 According to Bernal Diaz, La Malinche would outwardly exclaim how lucky she was to be a Christian woman working for Hernán Cortés, saying, “Even if they were to make her mistress of all the provinces of New Spain…she would refuse the honor, for she would rather serve her husband and Cortés than anything else in the world.”21
During the beginning stages of the conversion of New Spain, the Spanish dedicated themselves to destroying temples, idols, and symbols of Indigenous religions. This included closing religious schools, murdering priests, and disposing of evidence of religious human sacrifice. In 1525, missionaries were sent to implement systematic evangelization. During this same time, cathedrals were built atop temples, and thousands of
18 Octavio Paz, Sor Juana, Or, The Traps of Faith (Cambridge, MA: Harvard University Press, 1988).
19 Bernal Diaz, “Historia Verdadera: AHA,” Historia Verdadera | AHA, accessed November 14, 2023, https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/historia-verdadera.
20 Matthew Restall, Seven Myths of the Spanish Conquest (New York, N.Y: Oxford University Press, 2021).
21 Oswaldo Estrada, Troubled Memories: Iconic Mexican Women and the Traps of Representation (State University of New York Press, 2019).
Indigenous peoples were baptized.22 Some of these missionaries dedicated themselves to studying the languages and religions of the Aztecs in order to communicate better. Among these missionaries was Fray Bernardino de Sahagún, who arrived in 1529 and is known today as history’s first anthropologist. He compiled information about the conquest from native students and elders into the Florentine Codex, or The General History of the Things of New Spain. While he worked to convert the natives to Christianity, he gained a comprehensive understanding of the ways of the Aztec people, accompanied by 2,468 images that uniquely blend European and Nahua styles of drawing.23 Diaz later writes, “…with Aguilar as interpreter, [they shared] many good things about our holy faith to the twenty Indians [women], who had been given us, telling them not to believe in the Idols, which they had been trusted in, for they were evil things and not gods, and that they should offer no more sacrifices to them for they would lead them astray, but that they should worship our Lord Jesus Christ.”24 This referred to the first twenty women given to the Spaniards by the Tabascan Indigenous peoples, Malinche among them.
The stark contrast between the gender roles that Christianity assigned to the two sexes was completely alien to the beliefs of the Aztecs. This is especially reflected in the Aztec religion. Xochiquetzal, the deity of sexual love, artistry, and delight, has historically been depicted as having multiple sexual relationships with deities. Deities with close connections to nature were particularly androgynous and could be depicted as either male or female. For example, Tlaltecuhtli, or ‘Earth Lord,’ and the Earth Monster were deities who would often fluctuate between being depicted as male or female. Furthermore, select deities were depicted as gender-twinned, such as Xochiquetzal and Xochipilli, and Chalchiuhtlicue and Tlaloc. Centeotl, the deity representative of maize, was depicted as having a lifecycle resembling humans’ and changed sex throughout the year. Xilonen, the deity representative of the young maize, would embody the slender cob, milky kernels, and long, silky corn tassels, becoming either the feminine or masculine version of Centeotl. Young
22 Margarita Zires, “Los Mitos de La Virgen de Guadalupe. Su Proceso de Construcción y ...,” JSTOR, 1994, https://www.jstor.org/stable/1051899.
23 Bernardino de Sahagún, “General History of the Things of New Spain: The Florentine Codex.,” The Library of Congress, accessed November 14, 2023, https://www.loc.gov/item/2021667837/.
24 Bernal Diaz, “Historia Verdadera: AHA,” Historia Verdadera | AHA, accessed November 14, 2023, https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/historia-verdadera.
Lord Maize Cob was muscular and hardened like the cob and was mainly depicted as male.25
Queerness amongst the holy gods was not uncommon, with female deities being shown as lovers even more often than as sisters. There was a greater value associated with the social division of roles than with the sexual relationship. The war god Huitzilopochtli is said to have had intimate relations with his mother Coatlicue and his sister, as well as his adversary Coyolxauhqui. The intensity of the battle came from their relationship as siblings, not as sexual antagonists. The notion of a ‘war between the sexes’ and of violent sexual acts, which is now completely ingrained in Mexican society, is entirely unassociated with the gods worshipped for hundreds of years by the Aztecs.26 Attitudes toward androgyny within the Aztec empire are difficult to determine. At birth, there would be no identification of anatomically ambiguous individuals. Additionally, transvestism during the Aztec empire supports a direct contrast between male and female individuals rather than a blurring of femininity and masculinity. While homosexuality was recognized, some scholars believe that it was largely deplored, especially in the case of homosexual women, whose sexuality was abused through prostitution.27
Religion continues to be an integral part of Mexico since it is attached to the birth of the Mexican nation. Celebrating the Virgin Mary has become a national project that facilitates the spread of Christianity and transcends the colonial era, the war of independence, the Mexican revolution, and the long twentieth century of church-state tensions, social activism, Marxism, economic development, human rights, and liberation theology. Media regarding Malinche’s relationship to Christianity were published to improve relations between the Catholic Church and Mexico and to enhance the administration’s image with foreign lenders and private investors whose support would be crucial in a neoliberal era. Diplomatic relations between Mexico and the Vatican began in 1992 and continue into the present day, with many projects and investments now accomplished in the country.28
The association of certain Aztec religious traditions with Catholic traditions was permitted as a means of creating a smoother transition to Catholicism.29 An example of this is
25 Inga Clendinnen, “Wives,” Aztecs: An Interpretation (Cambridge, MA: Cambridge University Press, 1991).
26 Inga Clendinnen, “Wives,” Aztecs: An Interpretation (Cambridge, MA: Cambridge University Press, 1991).
27 Ibid.
28 Ibid.
29 Margarita Zires, “Los Mitos de La Virgen de Guadalupe. Su Proceso de Construcción y ...,” JSTOR, 1994, https://www.jstor.org/stable/1051899.
a variety of gods and goddesses becoming associated with saints. Most notably, the goddess Tonantzin was associated with La Virgen de Guadalupe, the Virgin Mary.30 The Aztecs believed their gods led them to settle their new city near a lake where an eagle would stand on a cactus with a snake in its beak. They settled Tenochtitlan in 1325 on an island in a large salty lake in the Valley of Mexico, in what is now Mexico City. This story is even reflected in Mexico’s flag today. However, to facilitate spiritual national unity and move away from ‘demonic’ Aztec traditions, the story of the apparition of the Virgin Mary at Tepeyac was born and popularized by Fray Alonso de Montúfar. The story of the Virgin Mary known to Spanish settlers was set in Europe. In contrast, the stories given to the Indigenous people during this transitional period were much different, as she represented a foreign land.
Before religious conversion, there was no particular social capital placed upon people who decided to be celibate. Interestingly, sex workers were respected and could only be hired by Aztec warriors of high rank after a generous payment to the owner of these “state-controlled brothels or ‘Houses of Joy.’” Additionally, they were not cut off from female society; instead, they worked alongside other women in the market and were highly sought-after curers. It was not uncommon for men and women to be celibate as a devotion to their religion, or even simply to a particular deity. Spiritual purity was not solely based on virginity, as a ruler tells his young virgin daughter: “‘There is still jade in your heart, turquoise. It is still fresh, it has not been spoiled… nothing has twisted precious green stone’ and it is ‘still virgin, pure, undefiled.’”31 This supports the idea that purity for Aztec women was based on things entirely unrelated to their sexual activity. In contrast, the Roman Catholic Church has always idealized the Virgin Mary for her chastity, motherly nature, and selflessness. Once Aztec society had been converted, La Malinche was judged to embody the polar opposite of these traits, and the hatred already brewing toward her helped solidify sexist ideologies in the Mexican psyche.
Modern scholarship has given La Virgen, La Malinche, and Sor Juana Ines de la Cruz recognition for their contributions to what we know today as the Mexican consciousness. La Virgen de Guadalupe is the first archetype of Mexican women: subordinate to the patriarchy, sacrificing herself for the salvation of humanity through her son.
30 Irene Lara, “Goddess of the Américas in the Decolonial Imaginary: Beyond the Virtuous Virgen/Pagan Puta Dichotomy,” JSTOR, 2008, https://www.jstor.org/stable/20459183.
31 Inga Clendinnen, “Wives,” Aztecs: an Interpretation (Cambridge, MA: Cambridge University Press, 1991).
La Malinche is the second archetype: the Indigenous young woman whose collaboration and personal involvement with Hernán Cortés is said to be the reason for the empire’s demise. As the story of La Malinche has been interpreted by many scholars throughout the ages, she has been described in multiple ways. In non-Indigenous stories, she is portrayed through the perspective of La Chingada Madre, or ‘the fucked mother,’ which describes the stance of multiple communities toward Malinche in which she is depicted as a hypersexual being. These perspectives are reinforced by telenovelas such as La Rosa de Guadalupe, with steamy scenes of sexual encounters presented as a form of agency. In various renditions of Malinche’s story, she is depicted with Spanish conquistadors, losing her virginity, participating in painful sex, and professing her undying love. This frames her participation as the result of blind love for these conquistadors, rendering her almost innocent because of her endless love for these men. In Indigenous stories, by contrast, Malinche is depicted as a political agent for change. Regardless of whether what she did was correct or moral, these accounts recognize the agency Malinche demonstrated through her work as an intermediary. This comparison is critical because it reflects how women’s agency is understood in both Indigenous and non-Indigenous communities.
According to Bernal Diaz, La Malinche said that she “would give up the entire world just to serve her husband and Hernán Cortés.” It could also be theorized that her participation in the conquest of the Mexica was a means of transgressing her marginal situation as a slave and thus becoming an indispensable woman in the conquest of Mexico. Regardless, we cannot confirm or deny that she enjoyed her sexual encounters with any of these conquistadors, including Hernán Cortés, as we have no firsthand accounts from her.
While many consider the origins of what we now know as Mexico to lie in the Aztec empire, others believe they lie in the son La Malinche had with Hernán Cortés. In Masculinidad, Imperio y Modernidad en Cartas de Relación de Hernán Cortés, Rubén Medina explores the relationship between masculinity and imperialism. Given the stark contrast between the gender roles of Europe and those of the Aztec empire, the need to spread traditional Christian gender roles acted as a motivator to conquer Mesoamerica. La Malinche and Cortés’s son was of both Indigenous and European descent, considered to be the first mestizo. In Martyrs of Miscegenation: Racial and National Identities in Nineteenth-Century Mexico, Lee Skinner explores the relationship between racial and national identities. These identities were social structures directly threatened by La Malinche, Hernán Cortés, and her mestizo son, causing uncertainty for the future. Hatred toward La Malinche, coupled with efforts to convert the Indigenous to Christianity, is what created Marianismo and subsequently Malinchismo and
its dyadic relationship to Machismo. As the antithesis of La Virgen de Guadalupe, she had a substantial negative impact on the perception of the female gender.
The third archetype is Sor Juana Ines de la Cruz, the young mestiza scholar who openly defied gender stereotypes but constantly grappled with society’s expectations for her as a woman of the veil. Sor Juana Ines de la Cruz was the illegitimate daughter of Creole Spanish parents from the pueblo of Chimalhuacan, near the Valley of Mexico. Identified as a child prodigy, she went to live with relatives in the capital at the age of eight. Her beauty, wit, skill at poetry, and amazing knowledge of books and ideas made her an instant celebrity at court. When she was fifteen, the admiring viceroy and his wife sent her before a panel of learned professors at the University of Mexico, which at the time admitted only men; women were not permitted to study there. The professors gave her an oral exam with questions ranging across physics, mathematics, theology, and philosophy, and they failed to stump her. Before her sixteenth birthday, she entered a Carmelite convent, soon joining the Jeronymite Order as a nun, where she stayed the rest of her life. Around the same time, Sor Juana stated that she held a “total disinclination to marriage”; the convent allowed her to remain faithful to her religion while pursuing her passion for study.32
While marriage and religious seclusion were both common paths for colonial Spanish women, misogyny still ran rampant in the clergy throughout the seventeenth century. Women were seen as daughters of Eve, temptresses who invited sin and damnation. The majority of Sor Juana’s works were written in these spaces. From the convent she connected with literary friends and scientists in the vibrant city of Mexico, and even abroad, until 1694, when she gave away her books and her scientific and musical instruments and, in her own blood, wrote out an unqualified renunciation of her learning: “I, Sor Juana Ines de la Cruz, the worst in the world…. Christ Jesus came into the world to save sinners, of whom I am the worst of all.” Shortly thereafter she passed away while taking care of her fellow sisters, who had contracted smallpox.33
While we can get a clearer sense of men’s perceptions of women through nonfictional works, women often used fiction as a safer medium through which to spread their political thought. Sor Juana’s style of writing was reflective
32 William B. Taylor, Kenneth Mills, and Lauderdale Sandra Graham, Colonial Latin America: a Documentary History (Vancouver: Crane Library at the University of British Columbia, 2009).
33 William B. Taylor, Kenneth Mills, and Lauderdale Sandra Graham, Colonial Latin America: a Documentary History (Vancouver: Crane Library at the University of British Columbia, 2009).
of her time, regularly employing the ornate, obscure styles then in fashion, as seen in her Christmas carols, morality plays, and allegorical pieces. While the subjects she explored were not necessarily a demonstration of rebellion, she held beliefs that went against conventional boundaries for women’s lives and spiritual activity. El Divino Narciso, Los Empeños de una Casa, and La Respuesta were all written by Sor Juana de la Cruz. As an archetype of Mexican feminist literature, her works have been instrumental in learning about the internal struggle of mestizas. El Divino Narciso is a humorous play based on the story of the Holy Trinity. Many writers of Baroque Mexico wrote such stories surrounding the Holy Trinity, but Sor Juana’s rendition, like many of her works, was influenced by her political and religious beliefs. Los Empeños de una Casa is another example of her writing being used as a means of spreading political messages. While the play centers on two couples continuously pining for one another, through humor Sor Juana critiques gender roles and social dynamics.
However, La Respuesta is thought to be her most iconic contribution to feminist Mexican Baroque literature. Responding to a letter from a bishop who wrote under a feminine pen name, Sor Juana recounts her internal struggle with her identity as an intellectual and a woman of the cloth. Sor Juana de la Cruz is evidence of the women who grew up in this society and struggled internally between their Indigeneity and the religion that had become so embedded in it.
Within the hallowed halls of mestizo family households, “¡Hija de tu madre!” is yelled at a person exhibiting undesirable traits. In English, the phrase translates to “daughter of your mother!” While the phrase always mentions “madre,” or mother, “hija” is exchanged for “hijo” depending on the person addressed, keeping the emphasis on the individual’s relation to the mother. Similarly, the term “malinchista” has been associated with the feminine. The term is derived from “La Malinche” and means “a person who adopts foreign values, assimilates to a foreign culture, or serves foreign interests”; it is essentially used to insult someone for being a traitor to their people.34 This term accurately reflects just how the Mexica thought of La Malinche after the Spanish conquest. The ‘a’ at the end of ‘malinchista’ genders the term, permanently associating the ‘betrayal’ of the Aztec empire with women and inherently saying that betrayal of the highest degree is a feminine trait. Marianismo is defined as “an idealized traditional feminine gender role characterized by submissiveness, selflessness,
chastity, hyper femininity, and acceptance of machismo in males.”35 The term is rooted in ‘Maria,’ in reference to La Virgen Maria, the Virgin Mary, and reflects the qualities of the saint that the Catholic Church deems desirable. Machismo is rooted in the word ‘macho,’ meaning male, and is defined as “a set of expectations for males in a culture where they exert dominance and superiority over women.” Marianismo and Machismo have a dyadic relationship, essentially upholding one another.36 For example, without Marianismo there cannot be Machismo, since men need women to be submissive in order to be superior. Because of this, the two sexes are forced to follow the gender roles society has predestined them to follow.
I still hear the words echoing in my head as I am transported to my childhood home, tears blurring my vision as a broken vase lies at my feet. As I grew older, I would associate the traits of my mother with what was wrong with me as a person. As I have grown into myself and my femininity, I reflect on this quote from Bonnie Burstow: “Often father and daughter look down on mother (woman) together. They exchange meaningful glances when she misses a point. They agree that she is not as bright as they are, cannot reason as they do. This collusion does not save the daughter from the mother’s fate.”37 For so long, women have rejected their likeness to their mothers and perpetuated injustice by upholding the patriarchy, white settler colonialism, and greater systems of oppression. My cheeks are stained in empathy for the fate of my mother, La Malinche, La Virgen de Guadalupe, and Sor Juana Ines de la Cruz.
Barrientos, Joaquín Álvarez. “El Modelo Femenino En La Novela Española Del Siglo XVIII.” Hispanic Review 63, no. 1 (1995): 1–18. https://doi.org/10.2307/474375.
Bernardino de Sahagún. “General History of the Things of New Spain: The Florentine Codex.” The Library of Congress. Accessed November 14, 2023. https://www.loc.gov/item/2021667837/.
Burstow, Bonnie. Radical feminist therapy: Working in the Context of Violence. Newbury Park: Sage Publications, 1992.
35 Emma Garcia, “Blending the Gender Binary: The Machismo-Marianismo Dyad as a Coping Mechanism,” Digital Commons, 2021, https://digitalcommons.iwu.edu/cgi/viewcontent.cgi?article=1023&context=phil_honproj.
34 Inga Clendinnen, “Wives,” Aztecs: An Interpretation (Cambridge, MA: Cambridge University Press, 1991); Mary Louise Pratt, “‘Yo Soy La Malinche’: Chicana Writers and the Poetics of Ethnonationalism,” JSTOR, accessed November 15, 2023, https://www.jstor.org/stable/2932214?typeAccessWorkflow=login.
36 Julia L. Perilla, “Domestic Violence as a Human Rights Issue: The Case of Immigrant Latinos,” Hispanic Journal of Behavioral Sciences 21, no. 2 (1999): 107–33, https://doi.org/10.1177/0739986399212001.
37 Bonnie Burstow, Radical Feminist Therapy: Working in the Context of Violence (Newbury Park: Sage Publications, 1992).
Ceballos, Miriam. "Machismo: A Culturally Constructed Concept," California State University, 2013. https://scholarworks.calstate.edu/downloads/rj430597t.
Clendinnen, Inga. “Wives.” Aztecs: An Interpretation. Cambridge, MA: Cambridge University Press, 1991.
Cortés, Hernán. “Cartas de Relación.” Archive, 1519. https://archive.org/details/cartasderelacion0000cort.
Cortés, Hernán. “Letters from Hernán Cortés.” American Historical Association. Accessed November 15, 2023. https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/letters-from-hernan-cortes.
Diaz, Bernal. “Historia Verdadera: AHA.” Historia Verdadera | AHA. Accessed November 14, 2023. https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/historia-verdadera.
“Divisions and Conflicts between the Tlaxacalans and the Mexicas: AHA.” Divisions and Conflicts between the Tlaxacalans and the Mexicas | AHA. Accessed November 14, 2023. https://www.historians.org/teaching-and-learning/teaching-resources-for-historians/teaching-and-learning-in-the-digital-age/the-history-of-the-americas/the-conquest-of-mexico/narrative-overviews/divisions-and-conflicts-between-the-tlaxacalans-and-the-mexicas.
Estrada, Oswaldo. Troubled Memories: Iconic Mexican Women and the Traps of Representation. State University of New York Press, 2019.
Garcia, Emma. “Blending the Gender Binary: The Machismo-Marianismo Dyad as a Coping Mechanism.” Digital Commons, 2021. https://digitalcommons.iwu.edu/cgi/viewcontent.cgi?article=1023&context=phil_honproj.
Hind, Emily. “The Sor Juana Archetype in Recent Works by Mexican Women Writers.” Hispanófila, no. 141 (2004): 89–103. http://www.jstor.org/stable/43807179.
Lara, Irene. “Goddess of the Américas in the Decolonial Imaginary: Beyond the Virtuous Virgen/Pagan Puta Dichotomy.” Feminist Studies 34, no. 1/2 (2008): 99–127. http://www.jstor.org/stable/20459183.
Medina, Rubén. “Masculinidad, Imperio y Modernidad En Cartas de Relación de Hernán Cortés.” Hispanic Review 72, no. 4 (2004): 469–89. http://www.jstor.org/stable/3247142.
Paz, Octavio. Sor Juana, Or, The Traps of Faith. Cambridge, MA: Harvard University Press, 1988.
Perilla, Julia L. “Domestic Violence as a Human Rights Issue: The Case of Immigrant Latinos.” Hispanic Journal of Behavioral Sciences 21, no. 2 (1999): 107–33. https://doi.org/10.1177/0739986399212001.
Phillips, Charles. “Aztecs.” Essay in The Complete Illustrated History of the Aztec & Maya: The Definitive Chronicle of the Ancient People of Central America & Mexico--Including the Aztec, Maya, Olmec, Mixtec, Toltec & Zapotec, 349. London: Hermes House, 2010.
Pratt, Mary Louise. “‘Yo Soy La Malinche’: Chicana Writers and the Poetics of Ethnonationalism.” Callaloo 16, no. 4 (1993): 859–73. https://doi.org/10.2307/2932214.
Restall, Matthew. Seven Myths of the Spanish Conquest. New York, N.Y: Oxford University Press, 2021.
Skinner, Lee. “Martyrs of Miscegenation: Racial and National Identities in Nineteenth-Century Mexico.” JSTOR, May 2001. https://scholarship.claremont.edu/cgi/viewcontent.cgi?article=1364&context=cmc_fac_pub.
Taylor, William B., Kenneth Mills, and Lauderdale Sandra Graham. Colonial Latin America: A Documentary History. Vancouver: Crane Library at the University of British Columbia, 2009.
Townsend, Camilla. “Malintzin’s Choices : An Indian Woman in the Conquest of Mexico.” Internet Archive, January 1, 1970. https://archive.org/details/malintzinschoice0000town.
Zires, Margarita. “Los Mitos de La Virgen de Guadalupe. Su Proceso de Construcción y Reinterpretación En El México Pasado y Contemporáneo.” Mexican Studies/Estudios Mexicanos 10, no. 2 (1994): 281–313. https://doi.org/10.2307/1051899.
Logan Reich
Tracing back to ancient history, Jewish communities have constantly been persecuted and removed from many of the places they have settled. This continued during the Spanish Inquisition in 1492, when the Spanish crown signed the Alhambra Decree to “Banish… Jews from [their] kingdoms” (The Alhambra Decree) and scattered Jewish communities across the Atlantic. Spain took action to enforce the “separation of the said Jews in all the cities, towns, and villages of our kingdoms and lordships.” The decree removed Jews from across Spain in the name of preventing people from “Judaiz[ing] and apostatiz[ing] [the] holy Catholic faith.” The separation of Jews from their homes and spaces resulted in many communities losing access to culture, tradition, and faith. Because Spain’s intentions were rooted in the Catholic Church’s goal of spreading Catholicism, the influence of these actions spread beyond the Alhambra Decree. This persecution of Jewish communities continued in 1654, when the Portuguese crown reclaimed the small portion of Brazil that Jews had settled in and displaced them once again. Other Jews had found space in different areas across Europe, but many were barred from full citizenship and freedoms. Accordingly, many Jews died in pogroms, and religious tolerance was rare throughout the continent.
Eventually, the Sephardim, Jews with lineage to North Africa and the Iberian Peninsula, came to the colonies, where their initial goal was not to grow the largest Jewish community in the world but, instead, to be safe in their new homes. These Jews experienced the process of acculturation: the cultural modification of an individual, group, or people by adapting to or borrowing traits from another culture (Merriam-Webster). The distinction from assimilation is that the Jewish community acculturated by becoming part of a larger society without sacrificing what made them Jewish.
This essay focuses on Newport, a historically significant American Jewish community. As one of the oldest Jewish communities in the country, Newport exemplified American Jewry for generations.
The primary sources engaged throughout the essay highlight the history of Rhode Island’s establishment and the interactions of the Jewish community in Newport. The Rhode Island Parliamentary Patent, which established religious tolerance in Rhode Island in 1643 (Charles), and George Washington’s letter of 1790 bookend this part of history by marking the Jews’ acceptance (Washington). Further background on Rhode Island comes from Jonathan Beecher Field’s Errands Into the Metropolis, an academic study of New England’s colonial history (Field). Secondary sources regarding Jewish settlement in Newport include information on population numbers, occupations, and background. Acceptance and Assimilation by Daniel Ackermann provides context and details on the integration process undergone by Jews in Newport (Ackermann). This is continued in The Jews of Rhode Island by George Goodwin and Ellen Smith, who provide figures on the community’s establishment and a longer timeline of Jews in Newport (Goodwin, Smith). All of the previous sources explain how Jews acculturated, but they lack an architectural context. The next sources elaborate on Jewish architecture. Laura Leibman and Steven Fine examine the origins of the Touro synagogue’s architecture in their respective articles “Sephardic Sacred Space in Colonial America” and “Jewish Religious Architecture.” They mostly examine architectural components that were prominent in Europe or emphasized in Touro. These sources lack a holistic examination of how these concepts of Jewish identity are related to the architecture of these synagogues. Using architecture to highlight the establishment of American Jewish identity will help share their story.
Through religion and culture, the Jewish people upheld their values and priorities as they acculturated into Rhode Island. While adapting to the norms that American colonists and countrymen expected of the new community, the early Jews of
Newport maintained what made them distinguishably Jewish while finding a place in the larger context of the city. By examining the interactions between other religious groups in Rhode Island and the Jewish community, and looking at the architecture of Jewish spaces of the time, we can understand how they established themselves while also maintaining their identities.
In 1658, the first Jews settled in the young colony of Rhode Island in hopes that the emerging trade city of Newport would offer them the religious protection they sought. They quickly found footing despite the challenges one might assume they would face. The Jews’ reason for going to Newport was the same reason Rhode Island was founded in the first place: religion. In 1636, Roger Williams founded Rhode Island on the basis of religious tolerance after the Massachusetts Bay colony “ordered his immediate arrest” (Beecher Field 29) for his vocal dissent on the role Massachusetts played in religion. Just seven years later, in 1643, King Charles I signed the parliamentary patent that established Rhode Island as a colony (Parliamentary Patent, 1643). The charter says that the laws in Providence Plantations would be similar to those in England “so far as the nature and constitution of the place will admit.” This clause meant Rhode Island needed laws similar to England’s, but where the colony’s principles and intentions differed from England’s, it had liberty to diverge. That liberty extended to religion in the Rhode Island Colony. This establishment was markedly different from those of other colonies in that it left room for Rhode Island to serve its purpose of offering the tolerance that Jews needed. Newport welcomed Jews, as well as other religious minorities, to make Rhode Island their home.
The community that Jews made in Newport became strong quickly. In 1677, they built a cemetery to establish themselves in the city. The construction of the cemetery was very basic and resembled that of the Christian cemeteries in Newport, as well as that of other Sephardic cemeteries back in Europe (Leibman 22). The cemetery’s construction indicated Jewish integration into Newport, but nothing about the architecture of the entrance or layout can be identified as being influenced by the colonists. As these people settled in, they gained footing and integrated into the economy fairly quickly. In 1686, Rhode Island made an agreement with the Jewish community to let them conduct business with full protection of the law as “resident strangers” (Touro). Throughout this time they continued to grow, congregating in the homes of Jewish leaders throughout Newport until they hit their “golden age” in the 1740s. They took advantage of these liberties to work as traders, merchants, and farmers. Jews made up about 10 percent of the mercantile population in Newport in 1740 (Goodwin 2). This growth of
the community meant not only a cemetery but a synagogue as well.
In 1759, the Touro synagogue, the second oldest synagogue in the United States, was completed and became home to the two hundred Jews living in Newport. Whether through ritual, the passing down of cultural traditions, or the fostering of discussion around issues pertinent to Jewish people, synagogues have historically served as both a meeting place and a house of worship. The synagogue was an integral part of the Jewish community because it served as the home base for those establishing themselves in this new location, while also being the most prominent visual representation of who the Jews were to the larger community. How the synagogue in Newport was so integral to this concept will be expanded upon below.
In Europe, there were common features found amongst the Sephardic synagogues constructed around the time that Jews began leaving for the Americas. Aside from their architecture, the buildings themselves were often in predominantly Jewish areas of their towns and not necessarily integrated into the rest of the community, but Newport diverged from this pattern. The Touro Synagogue’s location in the middle of the town, near the water, and not far from the cemetery indicates that the Jewish people had become a part of the community in Newport (Ackermann 200). The synagogue’s central location differed from that of many of the churches in Newport at the time and reflects the deep respect and strong relationships the Jews of Newport had built with the larger community.
The theme of integration continued in the architectural features of the Touro synagogue as well. When evaluating which architectural features were incorporated because of larger influences in Newport and which followed what was traditionally built, context plays a large role. The facade of Touro draws the eye to the high arching windows, the first defining feature. Sephardic European synagogues often had tall arches, found as windows or in the interior supports of the buildings (Fine 224). This is seen on the exterior of the Portuguese Synagogue in Amsterdam (Figure 1) and in the diagram of the building’s interior (Figure 2). The high arches can be attributed to the influence of other architecture surrounding the Jewish communities but were prominent in the majority of European synagogues. With no religious significance, they were simply a cultural norm amongst the Jews of Europe. The prominence of arches in European synagogues could be attributed to the fact that stained glass was very rarely found in Sephardic synagogues. Whether due to religious tradition or the cost of materials, elaborate glass mosaics were
rarely found in European Sephardic spaces. It would make sense that, as an alternative to that art, they used these arches. The way Touro worked arches into its facade perfectly sums up what Jewish acculturation at the time meant: a balancing act between working within the standards of the community and maintaining individual tradition. The windows
are still arched, but instead of long, stretching panes, they are the same size as those of the other buildings, with the curve on top. This is seen in Figures 3 and 4, where the front sides of the buildings are almost identical, including the arched windows. Unlike many churches, there is no indication or outstanding detail that labels the space as a specifically religious building; Touro did not have any Hebrew or even the building’s name on its exterior.
The next feature was the incorporation of the mechitza, the partition between men and women during the time of ritual, into their sanctuaries. The tradition in the Sephardic community was to build a balcony that overlooked the central prayer space on the bottom floor. Women would pray from the top while the men prayed on the floor of the sanctuary. The mechitza in Touro was, in principle, similar to what was
prominent in other Sephardic synagogues; however, the balcony space was much smaller than in many of its European counterparts (Ackermann, 198). In Figure 2, you can see two rows of seats in a seemingly larger sanctuary, while the interior of the Touro synagogue has a narrow space above with room for only one row of seats. This could indicate shifting ritual priorities as the community settled into a new environment with different religious influences. With the surrounding branches of Christianity being less ritually focused, it would make sense for some practices within Judaism to lose some prominence. It also could have simply been a matter of space, given that there was more room to build the synagogue on the property, as seen in Figure 3.
Shifting the focus further, understanding how the layout of the sanctuary, the central location of ritual in Judaism, was preserved furthers the argument. The floor plan and layout of the synagogue were virtually unchanged from what was traditional in Europe. The bimah, a platform in the middle of the sanctuary from which prayers are chanted and ritual is led, plays a central role in the orientation of the space. The bimah traditionally worked with the “Divine ratio,” which divided the synagogue in a 37:100 ratio. Jewish law did not require this; however, in the common arrangement, the thirty-seven part lay behind the central bimah and the one-hundred part stretched between the bimah and the ark where the Torah was housed. This ratio was found in the Amsterdam Portuguese Synagogue and London’s Bevis Marks Synagogue, amongst many other synagogues constructed in the fourteenth and fifteenth centuries throughout Europe (Leibman, 27). This distinction, the appearance of the synagogue changing while the central space remained unchanged, is a testament to how the Jewish community acculturated.
The Jewish community in Newport also built relationships with other religious groups, with effects that reached people throughout Newport and beyond. The Seventh Day Baptist Meeting House, one of the oldest surviving Baptist churches in the country, sits next door to the Touro Synagogue. Notably, its front awning looks extremely similar to Touro’s, and the overall shape of the two buildings is almost identical. These similarities indicate some interaction between the groups; the interior balcony in the meeting house, which directly mirrors the mechitza inside the Touro synagogue, further supports the idea that these communities engaged with one another both socially and ritually. Although the synagogue was built after the church, their geographic proximity and the range of architectural similarities make it reasonable to conclude that each strongly influenced the other’s construction. The Jewish community, although small, branched out to establish relations and build bonds that persisted through whatever tension existed between the Jewish community and other religious groups.
One of the more notable interactions in the Newport Jewish community’s history was its correspondence with President George Washington. Like many religious organizations throughout the United States, the Touro congregation wrote a letter congratulating the President on his election and his new role in leading the country. In his response, he placed a heavy emphasis on religious freedom, writing that “It is now no more that toleration is spoken of as if it were the indulgence of one class of people that another enjoyed the exercise of their inherent natural rights” (Letter from George Washington to Hebrew Congregation at Newport 1790). On the surface, the significance is executive support for expanded religious freedom. The deeper impact is that in so many of the countries from which Jews had been expelled, no such connection existed between Jewish communities and those in power. In his letter, Washington committed to ensuring that the “Government of the United States [would] give bigotry no sanction, and to persecution no assistance” (Letter from George Washington). Even the first President, in other words, stood behind Jewish safety and culture within the United States.
The Jews of Newport, Rhode Island may have taken a long road to get there, but their determination to establish themselves in the groundwork of the colony, and the larger Jewish community in the framework of this country, is well documented. Their story is visible in the culture they developed and the structures they built. This group of two hundred Jews balanced preserving their ritual and culture with becoming genuine members of the society into which they had to incorporate. While documents quantify the ways in which the Jewish people acculturated into the colony, the synagogue they left behind is much more than the facade of a building on the corner of Touro Street; it tells the story of so many.
Bibliography
“Acculturation Definition & Meaning.” Merriam-Webster, 6 November 2023, https://www.merriam-webster.com/dictionary/acculturation. Accessed 27 November 2023.
Ackermann, Daniel Kurt. “Acceptance and Assimilation.” Magazine Antiques, vol. 183, no. 1, Jan. 2016, pp. 196–201. EBSCOhost, research.ebsco.com/linkprocessor/plink?id=81dc46af-e82d-3215-84d9-07de31e72e8c.
Charles I, King. Parliamentary Patent, 1643. https://docs.sos.ri.gov/documents/civicsandeducation/teacherresources/Parliamentary-Patent.pdf.
Everett. “History.” Touro Synagogue, https://tourosynagogue.org/history/. Accessed 5 December 2023.
Field, Jonathan Beecher. Errands Into the Metropolis: New England Dissidents in Revolutionary London. Dartmouth College Press, 2012. Accessed 30 November 2023.
Finch, John. “Seventh Day Baptist Meeting House Exterior.” Trace Your Past, 2018, www.traceyourpast.com/locations/rhode-island.
Fine, Steven, and Ronnie Perelis. Jewish Religious Architecture: From Biblical Israel to Modern Judaism. Brill, 2020, pp. 221-237. EBSCOhost, research.ebsco.com/linkprocessor/plink?id=1e1104c5-3fd9-3282-83a8-e90c6d12c8d1.
Goodwin, George M., and Ellen Smith, editors. The Jews of Rhode Island. Brandeis University Press, 2004. Accessed 8 November 2023, TheJewsofRhodeIsland.pdf (brandeis.edu).
Hoberman, Michael. New Israel / New England: Jews and Puritans in Early America. University of Massachusetts Press, 2011. ProQuest Ebook Central, https://ebookcentral-proquest-com.proxy216.nclive.org/lib/ncssm/detail.action?docID=4532908.
Leibman, Laura. “Sephardic Sacred Space in Colonial America.” Jewish History, vol. 25, no. 1, 2011, pp. 13-41. ProQuest, https://www.proquest.com/scholarly-journals/sephardic-sacred-space-colonial-america/docview/824413491/se-2, doi:https://doi.org/10.1007/s10835-010-9126-7.
Queen Isabella I, and King Ferdinand II. Alhambra Decree. 31 March 1492, https://www.fau.edu/artsandletters/pjhr/chhre/pdf/hh-alhambra-1492-english.pdf.
Yasmine, Sarah. “Touro Synagogue Newport Rhode Island 1.” Wikimedia Commons, commons.wikimedia.org/w/index.php?curid=18002825. Accessed 17 Apr. 2024.
Veenhuysen, Jan. “Interieur van de Portugese Synagoge Aan de Houtgracht Te Amsterdam.” Rijksmuseum, 2010, www.rijksmuseum.nl/en/collection/RP-P-AO-24-28.
Washington, George. Letter from George Washington to the Hebrew congregation at Newport. 1790, https://teachingamericanhistory.org/document/letter-to-the-hebrew-congregation-at-newport/.
William, Ridden. “Portuguese Synagogue.” Flickr, Yahoo!, 17 Apr. 2024, www.flickr.com/photos/97844767@N00/8780652959.
Sugar is a word used constantly in today’s society. With some type of sugar in almost every food and drink consumed, its effects need to be discussed so that the public understands what it is ingesting. The same thinking extends to learning the history of sugar and how it shaped the world. Unsurprisingly, the world would look different had sugar never been discovered, yet its impacts are more significant and far-reaching than one might believe. Since its discovery, sugar has left its mark on the world in a variety of ways, and many of those impacts still resonate today. Two major developments stand out among the consequences of sugar’s entry into everyday life: when sugar was introduced into the New World, it brought with it a distinctive form of the plantation system, and it gave rise to the West African Slave Trade, two developments that heavily influenced the history and future of the world.
Even though the major changes for sugar did not occur until it reached the New World, the craving for sugar began long before the first sugar plantations appeared in the Caribbean in the fifteenth century. In fact, it was about 3,000 years ago that sugar was first refined into the granules so sought after today. Sugar was first processed in Asia, and at a gathering at Jundi Shapur, an Iranian university, “one of the most important seminars the world has ever seen” occurred (Rook). There, “...Greek, Christian, Jewish and Persian scholars gathered in around 600 A.D. and wrote about a powerful Indian medicine, and how to crystallise it” (Rook). Around this time, Hippocratic medicine was the dominant medical model in Europe. It focused on keeping a person’s body in balance, largely through diet; sugar served as an aid for ailments because it helped with stomach-related problems and dietary issues, and it was also known to relieve a cough and sore throat, so many doctors of the era used it. After sugar became more popular in Europe, however, it changed from a purely medicinal substance into a spice and sweetener for newly discovered foods.
The reason behind the demand for sugar changed along with the European diet. After the Crusades, Europeans began to change how they ate, and most importantly, they wanted more sugar. New goods were arriving in Europe, notably coffee, and Europeans wanted to sweeten this newfound commodity. This renewed yearning made sugar a highly sought-after product, and many planters began to notice. In fact, “Beginning in the twelfth century, Venetians established sites of sugar production in the Mediterranean” (Kernan). Yet even with production closer to home, “they were limited to small quantities due to climate, space, and labor” (Kernan). With sugar production now much nearer to most Europeans, prices fell, allowing more people to buy the sweet treat; as more people tried sugar, demand only increased. New production facilities continued to be built along the Mediterranean to meet that demand, but eventually there was not enough space for more plantations, so a new place had to be found to keep growing the crop. The Mediterranean climate was also far from optimal: in the subtropics, sugarcane required far more labor than usual for irrigation and fertilization. At the end of the fifteenth century, with the discovery of a new continent, things began to change in the sugar industry. The discovery of America transformed sugar from a scarce treat reserved for the wealthy into the most exported commodity from the Caribbean within the span of a few decades. Sugar demand was ever-growing, and during the fifteenth century the leading nations of Europe were competing to see which was the most powerful. These powers believed that investing in sugar would help them become the dominant force in Europe, with Prince Henry of Portugal stating that “sugar production held the key to success for his Atlantic acquisitions” (Hancock). Portugal and Spain were the first countries to establish sugar plantations in the Atlantic islands and, later, the Caribbean, and these plantations quickly became profitable. One instance is the island of Madeira, where “the first sugar was refined…in 1432, and by 1460 the island was the world’s largest sugar producer” (Hancock). In a span of just twenty-eight years, sugar had gone from a substance only for the wealthy to one of the most influential cash crops in the world. After Madeira, many more sugar plantations were established on the Caribbean islands, and sugar became the region’s main export. The production and exportation of sugar was making the islands, and therefore the colonizing European nations, very wealthy, so they continued to expand their industries and make them more cost-efficient. The sugar industry grew more efficient by implementing two world-changing practices: the plantation system and the Transatlantic slave trade.
When the Caribbean islands were first colonized, European settlers grew sugar on relatively small estates. As time went on, however, those fields could not keep up with the ever-increasing demand for the sweet substance, and the plantation system emerged. Although the plantation system was known to the world before its implementation on the sugar estates, its introduction into the Caribbean brought changes that shaped the future of agricultural practice. It quickly grew in popularity and lasted for centuries, and it was distinctive in that it introduced monoculture to the Caribbean, the practice of growing a single crop on a large scale. The plantation system also required each island to become essentially self-sustaining. Sugarcane is a highly perishable crop, spoiling just days after being harvested, and given the sheer amount of cane the fields produced, there was no time to ship it elsewhere for processing; all of the refining had to be done on the islands. This raised the cost of founding and supporting a sugar plantation, as “the cost of building plantation works to process the sugar put the financial endeavour beyond the reach of all but the wealthiest merchants or landowners” (“Sugar Plantations”). Because each island had to have its own processing facilities, costs became unaffordable for most of the smaller planters who had established their own plantations, and the larger plantations run by wealthier planters began to take over. In effect, the demand for sugar had concentrated the industry in the hands of the few wealthy planters who could afford the initial costs of establishing a plantation. The rising costs not only weeded out the small planters but also pushed the wealthier ones to cut expenses elsewhere, and they found the easiest place to cut was their workforce.
When colonists from Europe first came to the Americas to establish sugar farms, they relied on the labor of indentured servants, or of the natives if the island had been previously inhabited. The planters, however, were dissatisfied with this labor force. Indentured servants were expensive to maintain, gained their freedom in five to seven years, and received land after their release. This not only forced the planters to replace their workforce frequently, but the freed workers then became an obstacle to expanding the industry, both physically and economically, as they turned into competitors. The enslavement of the natives did not work either, as the harsh labor conditions caused many of them to die. After seeing this, the planters wanted to find people they believed were suited to the tropical environment and the harsh labor of their plantations, and their eyes landed upon Africans. The planters thought that “Africans would be more suited to the conditions…as the climate resembled that [of] the climate of their homeland in West Africa” (“Slavery in the Caribbean”). The plantation owners also realized that “Enslaved Africans were also much less expensive to maintain than indentured European servants or paid wage labourers” (“Slavery in the Caribbean”). In the planters’ eyes, West Africans were better suited to the work than the previous laborers, so they kept importing ever larger numbers of enslaved people. Thus began the West African Slave Trade, an event spanning centuries that displaced millions of Africans from their homes. The importation of slaves into the Americas continued as the sugar industry expanded, and “some 40 per cent of enslaved Africans were shipped to the Caribbean Islands, which, in the seventeenth century… [became] the principal market for enslaved labour” (Sherman-Peter). Though slaves were shipped all over the Americas, they were used primarily in the Caribbean, where sugar plantations continued to expand in size, number, and efficiency. The introduction of slaves into the Caribbean also created the Triangular Trade, which involved shipping slaves from Africa to the Caribbean, shipping sugar and molasses from the Caribbean to other American colonies, where molasses was distilled into rum, and then shipping that rum back to Africa to trade for more slaves. The sugar and rum reached markets elsewhere as well, but it was the Triangular Trade that kept the sugar industry sustainable for many years. In addition to all of this, the introduction of enslaved Africans into America led to long-term issues regarding race and white supremacy in the United States.
Slavery has been present through most of human history, so there was nothing new about slaves being used to work fields in the Caribbean. What made slavery in the New World distinctive was that it became based on race. The European colonists viewed Africans as an inferior people because of their skin color and believed that enslavement was their natural state. Brenda Wilson puts it plainly when she states that “ideas about white supremacy and inferiority of non-whites justified Black African enslavement and exploitation for the production of the luxury of sugar…” (Wilson 4). This racial hierarchy was most prevalent in the United States, though it was disputed for as long as slavery was practiced in the region, even before it became a sovereign nation. The arguments over the morality of slavery grew so heated that a civil war broke out in the United States in 1861, and they continued up until the passage of the Thirteenth Amendment, which freed all of the slaves in the United States. Even after the Civil War and the abolition of slavery, however, the ideology of white supremacy and black inferiority continued to dominate the Jim Crow South for nearly a hundred years. Only with the Civil Rights Movement of the 1960s did Black Americans gain broader rights, and around sixty years later they still face societal prejudice solely because of their race. It has been roughly 160 years since slavery was abolished in the United States, but its effects are still widely felt, and it has taken, and will continue to take, the efforts of many people to combat these persistent prejudices. All of these issues stemmed from “the production of the luxury of sugar” (Wilson). It is difficult to imagine that a sweet treat could cause such turmoil and violence in history, yet the seemingly innocent granule has given rise to numerous societal problems regarding race. The rise of sugar not only caused violence and turmoil among races; it also introduced environmental problems through the practices of the plantation system and their resemblance to modern factories.
The demand for sugar continued growing, and this unceasing craving forced plantations to become more efficient. In fact, “By the 1700s, sugar was the most important internationally traded commodity and was responsible for a third of the whole European economy” (Hancock). Sugar had become one of the most sought-after commodities globally, and its value as a cash crop never diminished. As a result, plantations kept improving their production and processing so that they could export as much sugar as possible. The greatest increase in efficiency came with the Industrial Revolution. The mechanization of processes around the world made production remarkably more efficient, and sugar plantations were no exception. One instance was in Cuba, where “in 1827 only 2.5 percent of the 1,000 ingenios [sugar mills] in Cuba were steam-powered…yet they accounted for 15 percent of the island’s sugar crop” (Tomich). With industrialization, sugar plantations could export more sugar and earn more profit. Through this mechanization, however, the plantations began to resemble the factories that emerged from the Industrial Revolution, which raises the question of whether sugar plantations were the antecedents of mechanized factories.
The simplest way to describe modern factories is that they make a product in the most efficient way possible, without regard to the external consequences of their production. It can be argued that sugar plantations functioned in the same way. In fact, many researchers now examine sugar plantations through the lens of the Plantationocene, which “is a way of conceptualizing the planetary impacts of the exploitation of natural resources, monoculture expansion, and forcible labor” (Edwards, Part II). The expansion of monoculture and the use of forced labor have already been discussed, and they are what most people would associate with plantations. The exploitation of natural resources on plantations is less widely known, but it has had a large impact on agricultural practice ever since. The expansion of monoculture produced environmental effects that still resonate today. Sugarcane is a very labor-intensive crop, and “repeated sugarcane plantings depleted the soil of nutrients, [so] intensive preparation of the fields was crucial” (Kernan). Because the soil was stripped of nutrients, the fields had to be heavily fertilized before each planting for the sugarcane to grow. The planters did not care about these environmental effects; they were solely concerned with making as much profit as possible. This idea of arriving at a place and exploiting its natural resources is also known by another name: imperialism.
Imperialism has been a major force in history, affecting many people. From the first ventures of Europeans to America to present-day invasions, places have been occupied and their resources exploited. Sarah Kernan states that “the logics of imperialism are rooted in the 1493 papal decree that granted rights to all lands in the western and southern hemispheres to Spain and Portugal.” Imperialism may have existed before this decree in an unofficial sense, but the decree, together with the Treaty of Tordesillas that followed in 1494, allowed for the legal invasion of lands and the exploitation of their natural resources. The drive toward imperialism can also be linked to the rise of the sugar industry because, as previously mentioned, the European nations were competing to expand their power in the early fifteenth century. The prevailing powers of Europe needed a great deal of money to be able to call themselves dominant, and imperialism stood out because the only initial investment was founding colonies or settlements in the new lands; the rest of the monetary dealings were profit. The nations then focused on settling the islands of the Caribbean and establishing sugar plantations as a source of stable and profitable income. The settlement of these islands and their subsequent exploitation, which had monumental consequences for the later actions of nations, was driven by the desire for money, and that money was earned from the exportation of sugar.
The sweet treat has had a far greater impact on the world than one might think. From its first uses in Asia as a medicine to its presence in most foods eaten today, sugar has left its mark on the world. It was present at, and often the cause of, major events that changed the trajectory of history. In the fifteenth century, Europeans began competing for dominance, and the discovery of a new continent added fuel to the fire of their ambitions. Their power came from money, and sugar was a stable, profitable crop that could be grown on a large scale, so the nations looked to the Caribbean to establish the plantations that would supply it. Once those plantations were established, they changed the world. The demand for sugar was ever-increasing, and the plantations were owned by planters eager for profit. This was the perfect recipe for exploiting both the production and the processing of sugar through the rise of the plantation system and the West African Slave Trade. The plantation system spread the practice of monoculture, which is still widely used today, and the West African Slave Trade has had monumental impacts on society ever since it began, with a resonance that continues to affect many. Overall, the sugar industry has been the origin of many events in history whose effects are still felt in the present world.
Edwards, Justin D., et al., editors. Dark Scenes from Damaged Earth: The Gothic Anthropocene. University of Minnesota Press, 2022, https://manifold.umn.edu/read/dark-scenes-from-damaged-earth/section/d1a9b759-2cfa-4469-8294-fb00221fcd36#cvi. Accessed 6 December 2023.
Hancock, James. “Sugar & the Rise of the Plantation System.” World History Encyclopedia, 18 June 2021, https://www.worldhistory.org/article/1784/sugar--the-rise-of-the-plantation-system/. Accessed 30 November 2023.
Kernan, Sarah Peters. “Sugar and Power in the Early Modern World – Digital Collections for the Classroom.” Digital Collections for the Classroom, 18 March 2021, https://dcc.newberry.org/?p=16944. Accessed 30 November 2023.
Rook, Hugo. “The Sugar Series: The History of Sugar.” Czarnikow, 20 December 2019, https://www.czarnikow.com/blog/the-history-of-sugar. Accessed 5 December 2023.
Sherman-Peter, A. Missouri. “The Legacy of Slavery in the Caribbean and the Journey Towards Justice | United Nations.” United Nations, 24 March 2022, https://www.un.org/en/un-chronicle/legacy-slavery-caribbean-and-journey-towards-justice. Accessed 6 December 2023.
“Slavery in the Caribbean.” National Museums Liverpool, https://www.liverpoolmuseums.org.uk/archaeologyofslavery/slavery-caribbean. Accessed 6 December 2023.
“Sugar plantations.” National Museums Liverpool, https://www.liverpoolmuseums.org.uk/archaeologyofslavery/sugar-plantations. Accessed 6 December 2023.
Tomich, Dale. “World Slavery and Caribbean Capitalism: The Cuban Sugar Industry, 1760-1868.” Theory and Society, vol. 20, no. 3, 1991, pp. 297–319. JSTOR, http://www.jstor.org/stable/657555. Accessed 30 Nov. 2023.
Wilson, Brenda K. “'When There is No Money, That is When I Vomit Blood': The Domino Effect and the Unfettered Lethal Exploitation of Black Labor on Dominican Sugar Plantations.” Globalization and Health, vol. 19, 2023, pp. 1-13. ProQuest, https://www.proquest.com/scholarly-journals/when-there-is-no-money-that-i-vomit-blood-domino/docview/2865406401/se-2, doi:https://doi.org/10.1186/s12992-023-00963-4.
Erin Collins
This research is a study of the Tremé, America’s oldest African American neighborhood, in relation to the long-lasting socio-cultural impacts the French had on it. New Orleans was established under French rule in the early eighteenth century; the colony later passed to Spain, returned briefly to France, and then went to the Americans following the Louisiana Purchase in 1803. Under French rule, New Orleans was a melting pot of cultures, and the Tremé was central to it. Not only did enslaved people and African Americans live and gather there, but so did the French, Italians, Germans, Spanish, Haitians, and more.1 In the eighteenth and early nineteenth centuries, when slavery was still prominent in Louisiana, free people of color and slaves who bought their freedom could purchase property in the Tremé. Admittedly, the French were known to be more tolerant than the Anglo-Americans who took over New Orleans after the Louisiana Purchase. Yet the French still had demeaning social constructs of race that ultimately defined life in New Orleans.
As early as 1685 under French colonial rule, black codes, legislation limiting black life in colonized territories, were in force. Despite this, New Orleans under French rule had an extremely diverse community, rooted in immigration that only intensified after the Louisiana Purchase. This paper examines the political organization and legislation of New Orleans, and more specifically the Tremé, from the time of French colonization in the late seventeenth and early eighteenth centuries to the development of Anglo-America in the early nineteenth century. I focus on this period in order to see how political formation and acts of legislation shaped the conditions and culture of black people in the region. The impacts of this transformation are visible today, with the Tremé serving as a place of social and cultural refuge for oppressed peoples; by observing this history, we can better understand how culture is shaped. The primary research questions are: How did legislation contribute to altering social organization? Why is there a stark disconnect between what the law stated in regard
1 Spear, Jennifer M. 2009. Race, Sex, and Social Order in Early New Orleans. Baltimore: Johns Hopkins University Press. Accessed September 24, 2023. ProQuest Ebook Central.
to governing people of color and how they were treated in practice? I explore the contrast between the legislation of New Orleans and what social organization actually looked like.
New Orleans has always been a melting pot of cultures, dating back to French colonialism beginning in the early 1700s. The Tremé was the center not only of African American culture in New Orleans but also of French, Italian, Irish, Spanish, and many other communities. The writer Frederick Law Olmsted once remarked, “I doubt if there is a city in the world, where the resident population has been so divided in its origin” as New Orleans.2 From its inception in the 1700s, New Orleans, and the Tremé in particular, was a place of opportunity and refuge for people of color. There, free people of color could purchase property and start businesses, a rarity elsewhere in the world during the era of slavery. Prior to the Louisiana Purchase, New Orleans had an increasingly diverse population due to waves of immigration driven by conflicts around the world. To name one, there was a mass migration of white, black, and enslaved Haitian immigrants fleeing to New Orleans during and after the Haitian Revolution of 1791-1804. New Orleans was a prime destination because the majority of its society spoke French and practiced Catholicism. Immigration from around the globe to New Orleans doubled after the Louisiana Purchase as people sought a chance to be free and to have rights under the government. Jennifer Spear notes in her book that at the time of the Louisiana Purchase, one in five people in New Orleans was a free person of color. Because of events like these and the prospect of a new life, New Orleans became the melting pot that it remains today. New Orleans under French rule is often described as a tolerant city, and I will not entirely refute that. However, despite their tolerance, the
2 Olmsted, Frederick Law, and University of North Carolina at Chapel Hill University Library. A Journey in the Seaboard Slave States: With Remarks on Their Economy. New York: Dix & Edwards; London: Sampson Low, Son & Co., 1856. Internet Archive, https://archive.org/details/journeyinseaboarolms/page/582/mode/2up.
French had demeaning social constructs of race that ultimately defined subsequent life in New Orleans for people of color.3
During French colonial rule, black codes, legislation limiting black life in colonized territories, were in effect. The Code Noir, or Black Code, enacted by the French in 1685, detailed restrictions on black life in all French colonies.4 Among other things, it forbade slaves from gathering, holding a government position, representing themselves in court, and more. Examining the legislation of New Orleans helps paint a picture of how societal beliefs shaped daily life. There are, in fact, similarities between French rule and American rule. When America took over Louisiana through the purchase, legislation was passed to bar free black men from entering the territory in response to the influx of immigrants.5 Although never fully enforced, it is telling of social beliefs. Measures such as the Louisiana Purchase and the black codes are crucial to examining social-cultural systems from the time of French colonization to Anglo-America. This all serves the purpose of discovering how systems were reversed and why a strongly culturally diverse society remained in spite of the rules and regulations governing life for people of color.
Frederick Law Olmsted’s accounts of New Orleans are essential for understanding the city’s racial organization and how the mixture of blacks, natives, and French were each given names of classification and a rank within society.6 The memoirs of Pierre Clément, a French commissioner at the time of the Louisiana Purchase, offer a great deal of perspective on social order in New Orleans. He details living conditions in the city, the issue of slavery, institutional patterns, and much more.7 He spent a great deal of time in New Orleans, and his experiences give insight into the social scene of the period. How can the origins and evolution of this culture be tracked through an in-depth study of New Orleans and the Tremé? How did the social constructs created by the French at the time shape life there? The broader field that this
3 Spear, Race, Sex, and Social Order in Early New Orleans, 2009.
4 “Code Noir (1685).” August 21, 2022. https://slaverylawpower.org/code-noir-1685/.
5 “Acts Passed at the First Session of the First Legislature of the Territory of Orleans : Begun and Held in the City of New Orleans, on the 25th Day of January, ...” n.d. HathiTrust. Accessed September 24, 2023. https://hdl.handle.net/2027/mdp.35112203962842.
6 Olmsted, A Journey in the Seaboard Slave States: With Remarks on Their Economy, Internet Archive, 1856.
7 Pierre-Clément De Laussat. 1978. Memoirs of My Life to My Son during the Years 1803 and After, Which I Spent in Public Service in Louisiana as Commissioner of the French Government for the Retrocession to France of That Colony and for Its Transfer to the United States. Baton Rouge: Published For The Historic New Orleans Collection By The Louisiana State University Press.
project aims to study is evolving colonialism and its long-lasting effects.
Primary accounts of New Orleans and the territorial legislature convey an understanding of the social beliefs of New Orleans and the Tremé under French rule. The Tremé is truly one of a kind because it served as a hub of opportunity where free blacks could own property and create a livelihood. Examining how the French systems evolved tells the story of the Tremé and its success. The Tremé itself was founded by free blacks as an opportunity to buy property and make a living, and as many free blacks migrated to New Orleans and the Tremé, legislation was enacted to restrict the success of people of color. This all serves the purpose of discovering how systems were reversed and why a strongly culturally diverse society remained in spite of the rules and regulations governing life for people of color. This research can lead to further discussion of how historians track the social-cultural character of an area and how it can be defined in the past and present.
Race, Sex, and Social Order in Early New Orleans by Jennifer Spear details the evolving culture of New Orleans in the years before and after the Louisiana Purchase. At the time of the Louisiana Purchase in 1803, there were more than thirteen hundred free people of color in New Orleans, accounting for nearly a fifth of the city’s population, and the majority were born into freedom.8 Spear describes just how diverse the area was. In the 1790s there was a large influx of immigrants of African descent who spoke French and were Catholic refugees escaping their homelands.9 The population of free blacks only increased. With the transition from French New Orleans to Anglo-American New Orleans, there were significant changes in how society was organized. However, the beliefs of the Americans were not that different from those of the French.
Long before Louisiana became American territory, the French held similar ideas about slavery. Black codes, or the Code Noir, were enacted: legislation meant to limit black life and opportunity. The codes were issued as early as 1685 and again in 1724, as the French responded to slavery’s rapid emergence in lower Louisiana. The slave codes were a means of placing “restrictions on slaves’ behavior” and governing the “everyday treatment of slaves or delineated the legal status of slaves.”10 The articles defined slaves as movable property, restricted them from gathering by day or night, and more. The economy was changing, and the French believed slavery and plantations were a way to respond to
8 Spear, Race, Sex, and Social Order in Early New Orleans, 2009.
9 Ibid., 184.
10 Spear, Race, Sex, and Social Order in Early New Orleans, 2009, 60.
that.11 Thus, “the 1724 Code Noir reflected the transition from a status-based hierarchy to one rooted in race.”12
New Orleans was a slave-owning society, and the number of slaves outnumbered Euro-Louisianans.13 Slaves performed a wide variety of labor, from agriculture to nearly every other sector of the economy. Still, it took well over a century for Louisiana to become “a full blown slave society.”14 Under French rule Louisiana was used primarily as a military outpost meant to defend French colonies, and Jennifer Spear notes that as a military base it had “little economic value itself.”15 Antoine Crozat, a Frenchman, first held a monopoly over the colony’s trade; the monopoly then passed to John Law’s Company of the West, which was later reorganized as the Company of the Indies. Under this new management, people realized it would take a great deal of money to invest in Louisiana.16 Many believed that slaves were the key to bringing success to New Orleans, and some were brought as early as 1706, though the process was extremely slow because slaves were not in high demand until about a decade later.
In 1706, colonists were clamoring for slaves because laborers were scarce and mortality rates were high. France, however, was unwilling to spend a great deal of money on African slaves and worried about revolts: Father Pierre François Xavier worried that nègres were “always Foreigners” who would never think of Louisiana as their native land and were “attached to us only by fear.”17 This led to the use of Indian slaves, since they were the original inhabitants of the land, but their numbers were kept limited out of fear of revolt. There was not yet approval for African slaves, but the first were brought to the colony in 1709. In 1719, tobacco was being cultivated, and the large-scale importation of African slaves began. For Louisiana, 1722 was the “height of the slave trade,” and “the desire for slaves far outstripped colonizers’ ability to pay for them.”18 On average, a slave cost 1,000 livres, “the same price it charged the much wealthier planters in the French Caribbean.”19 The French colonists could not afford such prices, so compromises were made that allowed far larger purchases of slaves. When slaves arrived in Louisiana, they mostly worked in warehouses or on the water. There were some
11 Ibid., 179.
12 Ibid.
13 Ibid.
14 Spear, Race, Sex, and Social Order in Early New Orleans, 2009, 56.
15 Ibid.
16 Ibid., 57.
17 Spear, Race, Sex, and Social Order in Early New Orleans, 2009, 54.
18 Ibid., 57.
19 Ibid., 57-58.
who worked on plantations beginning in 1721, clearing land to produce crops such as “tobacco, silk, indigo, cotton and rice.”20 Sugar and cotton would become staples of Louisiana many decades later. By the 1750s many plantations were making money, but, as Commissaire Ordonnateur Honoré Michel de la Rouvillière noted, “the seeds…are rather difficult to detach.”21 Slavery in New Orleans did not truly take off until the 1800s, which is consistent with that remark about the difficulty of removing cotton seeds: the cotton gin was not invented until 1793, and its arrival shaped the development of slavery in New Orleans.
Memoirs of My Life by Pierre Clément offers perspective on the legalities of the Louisiana Purchase and on social order in New Orleans. Clément was the French commissioner and prefect of Louisiana at the time of the purchase in 1803 and was involved in the transfer of Louisiana from Spain to France and from France to the United States. In his memoir, he discusses the culture of life in New Orleans, the laws in place, and the impact the purchase had on both government and society.22 Book 1 covers January 1803 to December 1803 and details life in colonial Louisiana and the Louisiana Purchase.23 In early chapters, Clément describes the infrastructure of New Orleans and the port that sustains its bustling city, stating that New Orleans “is destined to become, very shortly, one of the most populated, most productive, most lively, and richest countries in the world.”24 Clément describes the atmosphere of New Orleans as vibrant and joyful: the men are seen as very frank, the women as beautiful and polite, everyone has a fondness for pleasure, and traditional toasts and songs are common at every meal.25 New Orleans had a rich history and a culture drawn from the French, the Spanish, and more.
Pierre Clément was one of the highest officials in Louisiana, presiding over French affairs. In his memoir, he recounts his experiences in conversation with other countries’ officials, providing real context for the conflicts and a firsthand account of what the French were thinking in the period leading up to the Anglo-American takeover of Louisiana. In just the first fifty pages, he describes the social atmosphere of New Orleans, the bustling economy, and the responses to his ceding the colony
20 Ibid., 58.
21 Ibid.
22 Dargo et al., “Memoirs of My Life to My Son during the Years 1803 and after, Which I Spent in Public Service in Louisiana As Commissioner of the French Government For the Retrocession to France of That Colony and for Its Transfer to the United States.,” November 1, 1978, xi.
23 Ibid., 3.
24 Ibid., 27.
25 Ibid., 20.
to the Americans. He wrote decrees and was continuously called to meetings to finalize details, and the press of government affairs left him isolated from the Louisianians. Residents saw their “prosperity, [their] wealth, [their] good fortune…in his hands,” and that was taken away as a result of the purchase.26
On May 30, 1802, Clément received rumors of disagreements among British and American officials over who should rule Louisiana.27 General Dayton, an American who had signed the U.S. Constitution in 1787, met with Clément about the idea of cession. Clément discusses the American rationale, and the fear of taking the colony, which rested on the power of the French and the prospect of divisive wars. As the French prefect, he explained France’s policies and how it wanted Louisiana. He suggested that the French could become part of America and would contribute four categories of men: black slaves, highly trained army men, farmers, and passionate men of the Republic, who together would build up a majority of the population.28 Essentially, Clément proposed that the French and Americans could live as wary neighbors. He also argued that the French were not at fault for America being so divided at the time and needing so much land.29 While Clément provides clarity on attitudes within law and society, the writer Frederick Law Olmsted provides further context on society and beliefs about race following the Louisiana Purchase.
A Journey in the Seaboard Slave States: With Remarks on Their Economy by Frederick Law Olmsted offers a firsthand perspective on New Orleans’s culture in the 1850s. Although the source comes long after the Louisiana Purchase, it serves as a point of comparison between Clément’s experiences in New Orleans before and during the purchase and Olmsted’s account of ‘New Louisiana.’ What is here called New Louisiana is the already diverse colony with English-speaking settlers now controlling it. Although it was a ‘New Louisiana,’ it remained a culturally French place. When Olmsted arrived at a terminus in New Orleans, he was met by carriages “in the style of Paris,” French smells, and French signs.30 This echoes Clément’s description of the people of New Orleans as having an “extremely remarkable clever-
26 Ibid., 47.
27 Ibid., 33.
28 Ibid.
29 Dargo et al., “Memoirs of My Life to My Son during the Years 1803 and after, Which I Spent in Public Service in Louisiana As Commissioner of the French Government For the Retrocession to France of That Colony and for Its Transfer to the United States.,” November 1, 1978, 35.
30 Olmsted, A Journey in the Seaboard Slave States: With Remarks on Their Economy, Internet Archive, 1856, 580, https://archive.org/details/journeyinseaboarolms/page/582/mode/2up.
ness” and a luxury “wardrobe [resembling] that of Paris.”31 The people of New Orleans carried a French ‘air’ in their elegance, clothing, and attitudes. Walking into the city’s marketplace, one found “not only the pure old Indian Americans, and the Spanish, French, English, Celtic, and African” but nearly every mixture of cultures.32 In discussing New Orleans, Olmsted notes how the mixture of blacks, natives, and French each had names of classification and a rank within society.33 The French organized race through “various grades of colored people” sorted into subcategories based on how much negro blood a person carried.34 He then remarks that the mixture of “French and Spanish with the African produces a finer and healthier result than that of Northern European races,”35 and adds that he would not be surprised if scientific observation were to “show them to be more vigorous than either of the parent races.”36 Olmsted was known to be strongly anti-slavery. However, his claim that people of mixed African descent were more vigorous or stronger shares the same sentiment as many pro-slavery advocates of the time, who justified enslaving Africans on the grounds that they could endure more pain or were physically more agile. The idea that those with recent African ancestry were more resilient further suggests how Olmsted’s American views likely resembled those of ‘New Louisiana.’ That point is proved further when Olmsted recounts the comments of passersby about slaves.
As Olmsted walked through New Orleans, he saw a row of twenty-two black men standing outside a clothing store, each dressed in a blue suit and black hat, shoes in hand. A passerby, who supposed the men’s owner was inside the store looking for clothes, spoke to Olmsted: “Dam’d if they aint just the best gang of cotton-hands ever I see,” the man said.37 Another man said that those slaves would fetch at least twelve hundred each, and that twenty thousand would not be an issue. Then a man
31 Dargo et al., “Memoirs of My Life to My Son during the Years 1803 and After, Which I Spent in Public Service in Louisiana As Commissioner of the French Government For the Retrocession to France of That Colony and for Its Transfer to the United States.,” November 1, 1978, 20.
32 Olmsted, A Journey in the Seaboard Slave States: With Remarks on Their Economy, Internet Archive, 1856, 583, https://archive.org/details/journeyinseaboarolms/page/582/mode/2up.
33 Ibid.
34 Ibid.
35 Ibid.
36 Ibid.
37 Olmsted, A Journey in the Seaboard Slave States: With Remarks on Their Economy, Internet Archive, 1856, 585, https://archive.org/details/journeyinseaboarolms/page/582/mode/2up.
said, “Give me half on em’, I’d sign off.”38 ‘New Louisiana’ was the product of Americans taking over after French rule. The first thought of people seeing neatly dressed black men in the street was their worth as slaves; onlookers were so impressed that they remarked they would not mind paying thousands, or that the men would sell high at an auction block. Although this scene came long after Clément’s account, it was an extension of French New Orleans, whose influence was always present. Olmsted’s American views resemble both new and old Louisiana. Through easy banter, Americans appraised the worth of other human beings. Similar attitudes, during this same era of slavery, appear in Clément’s own proposal as French prefect in 1802, before any American intervention: one of his suggestions was that black slaves would be the first key element of French assimilation with America. Black slaves propped up the society’s economy, and holding them in New Orleans was quite common, despite the French frequently claiming that slavery was not part of their culture.
Olmsted thus provides further context on society and beliefs about race. New Orleans remained a French city despite being under American rule, yet Louisiana was no longer the same after the Louisiana Purchase. Olmsted’s account sheds firsthand light on the social-cultural attitudes of New Orleans in the 1800s, and it shows a contrast: under French rule, diversity was part of the culture, while the English-speaking Americans displayed it in a different way.
Clément’s memoirs of life in Louisiana detail the French response to America’s proposal to take the land. He speaks briefly about the social scene in Louisiana and how there is no finer place; Olmsted shares this sentiment, which Clément had voiced in writing that New Orleans “is destined to become, very shortly, one of the most populated, most productive, most lively, and richest countries in the world.”39 Examining these sources together helps paint the picture of New Orleans’s evolving culture in relation to its social organization and the legislation in place at the time. There was a shift when Anglo-Americans took over Louisiana, as everything became defined by race. Prior to that, the French had their own ways of identifying and categorizing people of different races, even though there was no precise way to distinguish them. Under the French, Louisiana had slave codes from the moment slaves were introduced. Slaves, however, did not work strictly on plantations; their jobs spanned manufacturing and more. It was not until the very late eighteenth century, with the invention of the cotton gin and the cultivation of tobacco and sugar around New Orleans, that slavery truly accelerated.
In eighteenth-century Louisiana, slaves were given Sundays off, a result of the Code Noir, the black codes establishing laws for slaves. Despite this allowance, slaves were still barred from congregating in large groups under any circumstance. They gathered in public areas nonetheless, one of them being Congo Square, located in the Tremé, America’s oldest African American neighborhood. The Tremé was also where free people of color, and slaves who could buy their freedom, could purchase property. Congo Square was a “slave and Indian marketplace where African religious and musical customs were practiced during slaves’ free time.”40 Under French rule, slaves would dance, sing, and sell items in Congo Square in order to make money and purchase their freedom; the Tremé was thus the place where slaves could buy freedom and build lives for themselves, a trend that continued under American control following the Louisiana Purchase. After the purchase, Congo Square became even more prominent, receiving visitors from all over the U.S. because many Protestant colonies and states banned African music.41 Another factor was the rapid growth of the African-descended population as a result of immigration and of refugees from the Haitian Revolution; because of the revolution, New Orleans received thousands of immigrants of color, further contributing to the cultural explosion.
In the eighteenth and nineteenth centuries, the Tremé was a place of opportunity for slaves, blacks, and people of color. In contemporary times, the Tremé is seen as a low-income African American neighborhood that is “affected by the same social conditions that persist in similar urban communities across the United States: low educational attainment, low-wage occupations, unemployment, violent crime, and drugs.”42 However, very few know that the Tremé “was the site of significant economic, cultural, political, social and legal events that
38 Ibid.
39 Dargo et al., “Memoirs of My Life to My Son during the Years 1803 and after, Which I Spent in Public Service in Louisiana As Commissioner of the French Government For the Retrocession to France of That Colony and for Its Transfer to the United States.,” November 1, 1978, 27.
40 Michael Crutcher, Tremé: Race and Place in a New Orleans Neighborhood, 2010, 11, https://doi.org/10.1353/book11480.
41 Vlach, The Afro-American Tradition in Decorative Arts, The University of Georgia Press, 1990, pp. 26, 135.
42 Michael E. Crutcher Jr, Tremé: Race and Place in a New Orleans Neighborhood (University of Georgia Press, 2010).
have literally shaped the course of events in Black America for the past two centuries.”43
The writer Michael Crutcher states that the New Orleans Tremé has three dominant identities: it is a place with “unique African American cultural performance traditions, significant African American political achievement and historic architecture.”44 Of these identities, the primary one is the neighborhood’s culture. The Tremé has “colorful parades and funerals” along with jazz music, Mardi Gras, and more festivities.45 The second-line is prominent in Tremé’s culture: a parade with no separation “between parade and audience,” in which people enjoy music and dance.46 Many of the second-line parades and events are sponsored by descendants of the benevolent societies that began in the early nineteenth century. Another aspect of the Tremé’s culture is the “relationship that African Americans and Native Americans developed as oppressed peoples during the city’s colonial and early American periods.”47 During Mardi Gras, neighborhood tribes dressed as Indians would engage in “ritualized combat” with the “big chiefs” as a way of celebrating their cultures.48 From the hard times African Americans faced in the colonial period sprouted immense culture and progress.
In the early 1800s following the Louisiana Purchase, New Orleans was Americanized, and the social organization of society drastically changed. Free creoles of color demanded equal citizenship under American rule and were met with rejection. They were denied citizenship and were “systemically and aggressively attacked” because whites were afraid of a “liberal social system” that allowed blacks and whites in the same space.49 It is worth noting that creoles of color were the children of a mulatto, a person of mixed black and white ancestry, and a white person. Because of this, some of those of mixed race who could racially pass did so, “living as a white person” in order to reap “the social or economic benefits of whiteness.”50 The majority of creoles of color were free; however, this meant little to the Americans, as free creoles were still denied citizenship. It was in fact many creoles and blacks from the Tremé who demanded freedom, and in this effort they created newspapers. For example, the creoles created the “Comité
43 Ibid.
44 Ibid., 15.
45 Ibid.
46 Crutcher, Tremé: Race and Place in a New Orleans Neighborhood, 16.
47 Ibid., 17.
48 Ibid.
49 Ibid., 26.
50 Ibid., 17.
des Citoyens (Citizens Committee)” that later sponsored “the landmark segregation case Plessy v. Ferguson (1896).”51
Following the Louisiana Purchase, New Orleans’ attitudes towards slaves and free people of color shifted under an Anglo-American state. The New Orleans Catholic Church continued “its liberal stance towards blacks,” but outside the church “repressive policies took hold immediately.”52 New Orleans already had slave codes and policies, but under American rule slaves could no longer own property or hold any legal standing. In 1808, an ordinance was enacted prohibiting slave dancing except on Sundays, and then only in specific locations. Congo Square, located in the Tremé, was the gathering place for slaves. Its primary use was as a market where slaves could sell items; music and dance were important to its identity as well, but were mainly a byproduct of the market.53
Over the following decades, competing markets emerged, leading to a decline in Congo Square’s market. One competitor, the French Market, “had little effect on the slave market until 1823,” when it added “vegetable vending [facilities].”54 Another reason for the dwindling slave market was the new presence of “boat vendors docked nearby at the Carondelet Canal’s basin.”55 What truly sealed the slave market’s fate, however, was the establishment of the Tremé Market in 1839. Meanwhile, police ordinances continued to focus on limiting slave gatherings. The ordinance of 1817, for example, prohibited the use of firearms, fireworks, and even bonfires unless permitted by the Mayor: no one “shall let off any fireworks, or shall make any bonfire” in any street during “festivals or public rejoicings”; the penalty was “five to ten dollars upon each offender,” and if the offender was a slave, thirty lashes.56 “With each restrictive ordinance, the square became increasingly associated with music and dancing.”57
51 Ibid.
52 Crutcher, Tremé: Race and Place in a New Orleans Neighborhood, 27.
53 Ibid.
54 Ibid.
55 Ibid.
56 Dukelawweb, “Ordinances Ordained and Established by the Mayor & City Council of the City of New Orleans Page 68, Image 68 (1817) Available at The Making of Modern Law: Primary Sources. | Duke Center for Firearms Law,” Duke Center for Firearms Law, November 2, 2022, https://firearmslaw.duke.edu/laws/ordinances-ordained-and-established-by-the-mayor-city-council-of-the-city-of-new-orleans-page-68-image-68-1817-available-at-the-making-of-modern-law-primary-sources/.
57 Crutcher, Tremé: Race and Place in a New Orleans Neighborhood, 27.
Congo Square was central to New Orleans culture, including its dance and music. The ordinance of 1817 prohibiting fireworks and bonfires during festivals and public rejoicings was targeted at the slave population and people of color under American rule. Everything it restricted was essential to the performances held there, and those performances were so prominent that people from all around the U.S. traveled to see them. The legislation altered the behavior of social systems in New Orleans and the Tremé. Prior to American rule, the market was how slaves made money through sales and performances; following the Louisiana Purchase, the market grew but eventually dwindled, and the square became closely associated with music and dance as a result of the legislation.
What defines a place as distinct from another? Why are some geographical areas in America favored more than others? What special qualities are embedded into the fabric of such places? New Orleans happens to be such a place. It is and has always been known for its diverse culture and traditions, such as Mardi Gras, a parade in which people dance, play music, dress in colorful costumes, and more, with krewes, the organized groups that stage much of the parade, at its center. New Orleans is a birthplace of African American culture and a melting pot of cultures that dates back to the beginning of French colonialism in the early 1700s. Within New Orleans is the Tremé, America’s oldest African American neighborhood and a place of opportunity for many. The Tremé was a center not only of African American culture in New Orleans, but also of French, Italian, Irish, Spanish, and many other cultures. The Tremé was where slaves who purchased their freedom and people of color could buy and own property in the eighteenth and nineteenth centuries. It housed Congo Square, a market that allowed slaves to make money. It was where jazz and many other kinds of music were born. The Tremé was the site of significant change not only in African American culture, but in American culture.
Studying New Orleans and the Tremé before and after the Louisiana Purchase helps paint a clearer picture of the society’s evolving culture. The Tremé is truly one of a kind because it served as a hub of opportunity where free blacks could own property and create a livelihood. Under French rule, New Orleans and the Tremé offered considerable diversity and opportunity to people of color; following the transition to American rule, society shifted from being based on status to being rooted in race. Few places have sat at the intersection of so many cultures, from enslaved Africans to Creoles and Europeans, and New Orleans and the Tremé were the heart of that exchange. Examining how the French systems evolved tells the story of the Tremé and its success. As many free blacks migrated
to New Orleans and the Tremé, legislation was enacted that restricted the success of people of color. The Tremé is proof of the immense culture that developed as a result of segregation and in spite of slavery and restrictive slave codes. That culture was shaped by the relegation of blacks to lower societal roles, but their impact far surpassed those constraints. Observing this is necessary for a better understanding of a space significant to black culture and of how it developed out of the historical and economic treatment of black people.
Olmsted, Frederick Law. A Journey in the Seaboard Slave States: With Remarks on Their Economy. 1856. Internet Archive. https://archive.org/details/journeyinseaboarolms/page/582/mode/2up.
Crutcher, Michael E., Jr. Tremé: Race and Place in a New Orleans Neighborhood. University of Georgia Press, 2010.
Dargo, George, Pierre Clement De Laussat, Agnes-Josephine Pastwa, and Robert D. Bush. “Memoirs of My Life to My Son during the Years 1803 and after, Which I Spent in Public Service in Louisiana As Commissioner of the French Government For the Retrocession to France of That Colony and for Its Transfer to the United States.” Journal of Southern History 44, no. 4 (November 1, 1978). https://doi.org/10.2307/2207622.
Dukelawweb. “Ordinances Ordained and Established by the Mayor & City Council of the City of New Orleans Page 68, Image 68 (1817) Available at The Making of Modern Law: Primary Sources. | Duke Center for Firearms Law.” Duke Center for Firearms Law, November 2, 2022. https://firearmslaw.duke.edu/laws/ordinances-ordained-and-established-by-the-mayor-city-council-of-the-city-of-new-orleans-page-68-image-68-1817-available-at-the-making-of-modern-law-primary-sources/.
“Slave Policies in French Louisiana.” JSTOR, n.d. https://www.jstor.org/stable/4231982.
Slavery Law & Power in Early America and the British Empire. “Code Noir (1685).” Slavery Law & Power in Early America and the British Empire - Documents and Images From the Seventeenth and Eighteenth Centuries, October 11, 2023. https://slaverylawpower.org/code-noir-1685/.
Srihas Surapaneni
The period in American history spanning from the early 17th to the late 19th century was marked by the brutal realities of slavery and the social stratification of African-American populations. While the institution of slavery and plantation culture expanded throughout the early years of America’s development, social movements such as abolitionism grew alongside them, and much progress was made in the journey toward abolition and freedom for African-American people. Though their stories are not as widely told, many African Americans succeeded in making their escape to freedom. These escapees came to be known as maroons. The practice of marronage, or becoming a maroon, is widely associated with the maroon communities of Jamaica, whose people are commemorated and honored, even appearing on the country’s currency. But escaped slaves, or maroons, found themselves displaced and scattered throughout the Americas, which gave rise to the meaning of the word “marooned.” The Great Dismal Swamp, a vast expanse of marsh and wilderness stretching across North Carolina and Virginia, harbored the largest and longest-standing maroon community in the southern U.S. and provided a home to thousands of displaced African people. The Great Dismal and the history of marronage in the United States played a significant role in the development and synthesis of various African diasporic cultures and in furthering the abolitionist movement and the fight for African-American freedom, such as through the Underground Railroad. This essay examines the role of the Great Dismal Swamp as a sanctuary in the history of marronage, analyzing its impact on the abolitionist movement, the development of African-American cultural and religious identity, and the factors that contributed to the unlikely success of this community.
The unique geography and landscape of the Great Dismal Swamp made it an ideal destination for the thousands of maroons who escaped to the region. Surrounded by thick bogs, with a floor so weak that one wrong step could mean drowning in the mud, it is no wonder that runaways to the region were rarely captured or even pursued. An anthropologist studying the region observed that for these desperate maroons and runaways, facing the heat, bugs, bears, snakes, quicksand, or anything of the sort was a small price to pay for the salvation that lay ahead if they succeeded (Blackburn 49). To these maroons, escape to the Dismal was worth anything they might face on the journey. In contrast, in the eyes of slave owners and white people of the period, the Dismal was a horrible place filled with dangers. In a poem about a slave’s escape to the Dismal, Longfellow describes it as a place “Where hardly a human foot could pass, Or a human heart would dare” (Longfellow). The Dismal became known to outsiders as a dangerous region, and many believed that life there was impossible even for the enslaved escapees who reached it. Slave hunters found attempting to infiltrate the dense growth surrounding the Swamp a fruitless endeavor, while the runaways would pay any price to get there. This contrast in perceptions worked in the runaways’ favor, making the Dismal an ideal location for escape.
Though sources on the Dismal Swamp are scarce, those that do confront the history of the community often insinuate that the Swamp was a sanctuary solely for maroons or escaped slaves. Contrary to this belief, however, evidence shows that displaced indigenous tribes, among others, had been living in the Dismal Swamp region almost a decade before the first African-American maroons arrived around
1619. Of the many native tribes in the Americas, it has been inferred that members of the Chesapeake, Nansemond, and other tribes were present in the region. Furthermore, throughout the history of the Great Dismal, there were numerous instances in which European laborers and escaped convicts also fled to the safety of the swamp. The Great Dismal Swamp became a refuge for all who sought self-removal from their world (Sayers 87). The Dismal was already a well-known retreat for runaways by the time marronage came to the region, and it only continued to grow as a community and as a sustainable haven for its residents.
The sustainability of the Dismal Swamp as a place to live surprised many onlookers. At first glance, even setting aside the constant threat of capture and the dangers lurking in every corner, the Swamp offered little opportunity for growing, hunting, or gathering food and other materials. In reality, the Swamp’s sustainability came from relations with the outside world. According to an enslaved man working near the Dismal Swamp, the maroons often emerged from the Swamp, surviving off wages, food, clothes, and lumber given to them by the lumbermen living nearby. These were often poor white men looking for cheap labor, which the desperate maroons provided. The maroons were also often employed by enslaved people living near the region, given clothes, food, and even money in exchange for gathering lumber and performing other tasks that would normally be asked of the enslaved themselves. The enslaved man who dictated this information had in fact “been himself quite intimate with them” (Olmsted 160). The maroons of the Dismal Swamp were not entirely isolated from the rest of the world; this and other accounts indicate that they leveraged relationships with outsiders to gain the resources they needed, which explains how the Dismal community was able to survive and even thrive in the region for as long as it did.
The growth of the Dismal Swamp’s popularity became a problem for slave owners across the United States. Even so, the Dismal remained impenetrable to outsiders, though not for lack of attempts. John Washington, the brother of George Washington, was one of many notable slave owners affected by the appeal of the Great Dismal Swamp to maroons. A slave named Tom, whom Washington owned, escaped in April of 1767. Washington put out an advertisement for the runaway offering a reward of three pounds for whoever could apprehend the man and take him away from “the proprietors of the Dismal Swamp” (Washington). Three pounds in the colonies in 1768, when the ad was placed, had roughly the same buying power as
approximately 600 U.S. dollars today. Washington was only one of many who lost slaves to the Great Dismal, and as its fame grew, so did its population; yet throughout its history it remained untouched by outsiders.
Despite its growing fame as a refuge for all who sought its protection, the Great Dismal Swamp community made no effort to keep itself apart from the conflicts of the outside world. Its inhabitants became strong supporters of the abolitionist movement, not just in word but in deed. First and foremost, the Dismal Swamp was one of, if not the only, water-based stops on the Underground Railroad, helping to shelter and prepare escapees for their journey to Canada or the Northern states, or to settle down in the Dismal region. In addition, the people of the Dismal Swamp are believed to have been involved in several major slave rebellions before the Civil War. The Dismal’s history of rebellion began with the Chesapeake Rebellion in 1730, after which many of the displaced survivors fled to the swamp, marking both the start of that rebellious history and the arrival of many African American escapees in the swamps. The event led to a sharp increase in the community’s population.
The next major uprising connected to the Swamp after the community’s formation around 1730 came in August of 1800 with Gabriel’s Rebellion. An enslaved man from Virginia named Gabriel led the revolt, which also became known as Gabriel’s Conspiracy. Gabriel’s ability to rally people was remarkable and is one of the reasons that, though unsuccessful, his conspiracy is still known today. At its peak, Gabriel had rallied nearly every free and escaped African American in Virginia, the region of his planned insurrection, and was able to strategically recruit only those who would benefit the cause (Egerton 196-199). Furthermore, Gabriel deliberately timed his insurrection to the 1800 presidential election between Thomas Jefferson and John Adams, taking advantage of two Frenchmen assisting in his conspiracy in order to leverage his position against the Republican Party if the rebellion went badly (Egerton 213). Though Gabriel’s rebellion never came to fruition, its success in rallying the enslaved people of the Americas came as a shock to white colonists and contributed to a widespread fear of maroons and their capabilities. This fear extended to the Great Dismal Swamp, whose people are believed to have played a part in the insurrection since it was centered in Virginia (Day 44). The event proved to be a turning point in public opinion of the Dismal Swamp maroons, demonstrating their capability.
Soon after the news of Gabriel’s Rebellion had calmed, a second major slave rebellion was put into motion, known far and wide as Turner’s Rebellion. Led by Nat Turner, it once again took place in Virginia. When Turner began planning and recruiting for his revolt, he did so through what he called neighborhoods, communities with large slave populations. After rallying his men, Turner launched his rebellion in August of 1831, resulting in the deaths of around sixty people at the hands of the rebels (Kaye 715). Like Gabriel’s Rebellion, Turner’s Rebellion ended in failure; however, it lasted far longer and was much more successful in drawing attention to the capabilities of African American people, including maroons. During his travels and recruitment, Turner is said to have spent a substantial period in the Great Dismal Swamp, where he was able to gain the help of the maroons hiding in the region. Many accounts indicate that Turner truly believed in the Dismal’s importance to his insurrection, implying that its refuge and people would be irreplaceable to the success of his revolution and that Turner’s insurrectionists had indeed been sheltered in the Swamp for a substantial period (Sayers 104). Though it is difficult to quantify the exact extent of the assistance provided by the Great Dismal Swamp maroons, it is known that the assistance they did provide was substantial enough to leave a lasting impression on the people of the United States. In fact, the thought of the maroons in the Swamp remained a threat to the outside world well into the nineteenth century (Day 44). The role of the Dismal Swamp maroons in assisting the abolitionist movement before the Civil War is undeniable, and the success of such movements can be attributed in part to the Dismal Swamp. Furthermore, the widespread fear and infamy that these revolts brought to the maroons of the United States contributed greatly to their ability to remain hidden away and thrive. Later, as the Civil War came upon the United States around 1861, many of the maroons hidden in the Great Dismal Swamp emerged from their haven and took up arms, joining the North in the fight for abolition and proving once again their determination to earn freedom not just for themselves but for all of their displaced African American brethren.
The Dismal Swamp’s African-American population was the largest free African population in the New World. Among such a group of displaced individuals, it is natural that cultural and religious practices, as well as the languages spoken, would be reshaped through the synthesis of their various cultures. One such group associated with the Dismal Swamp is the Gullah people. They are often known for their close association with rice plantation
slaves in the New World; however, it is also widely held that the Gullah were one of the many groups that resided in the Great Dismal Swamp for a period of time. As a direct result of their isolation and freedom to adapt, the Gullah people developed a rich culture with deep African origins, including distinctive arts, crafts, food, and music. The Gullah language they developed is a combination of the various languages and dialects brought together through their displacement (Gullah Geechee Cultural Heritage Corridor). Like the Gullah, many of the Dismal Swamp’s maroons blended their various cultures, forming new customs and Creole, or “combination,” languages. The Dismal Swamp fostered the growth of many African-American diasporic cultural practices that are still observed today.
The Great Dismal Swamp was a haven for self-liberated African Americans and others, including Native Americans, runaways, and outcasts, who separated themselves from their pasts in the outside world within its isolated marshes. Its people survived by trading with neighboring towns and plantations. The Dismal Swamp also supported major slave uprisings, giving organizers like Gabriel and Nat Turner a place to hide. Due to the fear that their rising notoriety caused among slave owners, North Carolina passed legislation in 1847 aimed specifically at the Dismal Swamp maroons, requiring that all Dismal Swamp slaves be registered (Blythe). Despite their notoriety, the maroons were shielded by the Dismal terrain for around a century before many eventually emerged to enlist in the Union Army. Their seclusion from the outside world after being displaced from their homelands also encouraged the creation and synthesis of many African-American diasporic cultural practices. The Dismal Swamp maroons formed one of the first independent black societies on American soil, and their story, though somewhat lost to time, deepens our understanding of the fight for emancipation and the extent of its impact.
Blackburn, Marion. “American Refugees.” Archaeology, vol. 64, no. 5, 2011, pp. 49–58. JSTOR, http://www.jstor.org/stable/41780729. Accessed 5 Dec. 2023.
Blythe, John. “Remembering the Runaways in the Great Dismal Swamp.” NC Miscellany, 5 July 2011, blogs.lib.unc.edu/ncm/2011/07/05/remembering-the-runaways-in-the-great-dismal-swamp/.
Day, Thomas. “Mired Memory: ‘Marronage’ in The Great Dismal Swamp.” Social and Economic Studies, vol. 67, no. 1, 2018, pp. 33–47. JSTOR, http://www.jstor.org/stable/45174649. Accessed 5 Dec. 2023.
Egerton, Douglas R. “Gabriel’s Conspiracy and the Election of 1800.” The Journal of Southern History, vol. 56, no. 2, 1990, pp. 191–214. JSTOR, https://doi.org/10.2307/2210231. Accessed 5 Dec. 2023.
Olmsted, Frederick Law. A Journey in the Seaboard Slave States: With Remarks on Their Economy. New York: Dix & Edwards, 1856. https://docsouth.unc.edu/nc/olmsted/olmsted.html.
Kaye, Anthony E. “Neighborhoods and Nat Turner: The Making of a Slave Rebel and the Unmaking of a Slave Rebellion.” Journal of the Early Republic, vol. 27, no. 4, 2007, pp.
705–20. JSTOR, http://www.jstor.org/stable/30043545. Accessed 6 Dec. 2023.
Longfellow, Henry Wadsworth. “The Slave in the Dismal Swamp.” Henry Wadsworth Longfellow: Selected Works. Lit2Go Edition. 1866. Web. https://etc.usf.edu/lit2go/71/henry-wadsworth-longfellow-selected-works/5034/the-slave-in-the-dismal-swamp/. December 05, 2023.
Sayers, Daniel. A Desolate Place for a Defiant People: The Archaeology of Maroons, Indigenous Americans, and Enslaved Laborers in the Great Dismal Swamp. University Press of Florida, 2014. ProQuest Ebook Central, https://ebookcentral-proquest-com.proxy216.nclive.org/lib/ncssm/detail.action?docID=1887344.
“The Gullah Geechee.” Gullah Geechee Cultural Heritage Corridor: Where Gullah Geechee Culture Lives, 5 Aug. 2019, gullahgeecheecorridor.org/thegullahgeechee/.
Washington, John. Virginia Gazette. The Geography of Slavery, June 23, 1768. http://www2.vcdh.virginia.edu/gos/browse/browse_ads.php?state=&locale=&placetype=all&yer=&month=&rows=10&numResults=4490&page=57
Nitya Kapilavayi
During British rule in India, under both the East India Company and the British Raj, the nation suffered through near-constant famine, which has left lasting effects to this day. From around 1850 to 1900 alone, twenty-four major famines occurred in the span of fifty years, resulting in some fifteen million deaths. While many famines began with weather conditions such as drought, British administration exacerbated and prolonged them because proper aid was not given. Even after India gained independence and large-scale famine ended, the effects of the famines persist in South Asians’ lives today, because they face much higher risks of developing cardiometabolic diseases, especially type 2 diabetes.
In October 1943, politicians and citizens of India demanded an inquiry into the famine then unfolding in Bengal when the new viceroy, Field Marshal Archibald Wavell, arrived. The Secretary of State for India, Leopold Amery, however, argued against the idea because an inquiry would reveal that the government was at fault for much of what had happened.1 In an effort to appease the politicians while also shifting blame, the Famine Inquiry Commission reported false and misleading information so that blame could be shifted from the British administration to the forces of nature.
The Famine Inquiry Commission issued a final report on the famine in 1945, focusing mainly on Bengal, as the politicians had wanted. In this report, the commissioners discuss the short-term aspects of the famine and potential measures that could be taken to improve the situation and decrease mortality. They start by establishing Bengal as a ‘deficit’ area, among others like Bihar, Madras, Bombay, and Bijapur, and go on to say that the main source of famine in these areas was drought. Monsoons that were supposed to occur failed, resulting in drought, as
1 Madhusree Mukerjee, “Bengal Famine of 1943: An Appraisal of the Famine Inquiry Commission,” Economic and Political Weekly 49, no. 11 (2014): 71–75, https://www.jstor.org/stable/24479300.
monsoon rains were concentrated into a few months and, outside of those months, conditions were relatively dry. If a monsoon failed, the whole year’s crop could fail, because at that time it was the only dependable source of water. As a result, 1.2 million people in these areas were affected, with only 400,000 receiving relief from the government.2
The relief they received included over 20,000 tons of millet and 2,18,00,000 rupees (about 21.8 million rupees) spent on other relief operations, and the Famine Inquiry Commission writes that these measures were the main reason famine did not cause large-scale mortality.3 In Bijapur, rationing was introduced alongside other relief works as prescribed by the Famine Code, so that by the end of 1942 agricultural conditions had greatly improved, the official declaration of famine was withdrawn, and exceptional mortality was prevented.
But could all of this data be trusted? The Nanavati papers prove otherwise. The Nanavati papers are the unpublished transcripts of the secret meetings held by the members of the Famine Inquiry Commission as they wrote the 1945 report.4 The papers were ordered destroyed so that no one would know the true nature of the information behind the report, but Judge Nanavati kept his copy. Reading them shows that the Bengal and Indian governments anticipated famine in advance and warned the War Cabinet in London several times, yet it failed to act. In 1943, the British were holding 29 million tons of wheat that they refused to provide to India, preferring to save it as a reserve for the future instead of addressing the ongoing famine.5 In War Cabinet meetings, Churchill even said that he only wanted to provide aid to Indians who were directly contributing to the war effort.6
2 John Woodhead, “Famine Inquiry Commission Final Report, 1945,” Indian Culture, 1945, https://indianculture.gov.in/reports-proceedings/ famine-inquiry-commission-final-report-1945.
3 Woodhead, “Famine Inquiry Commission Final Report, 1945.”
4 Mukerjee, “Bengal Famine of 1943,” 71–75.
5 Ibid.
6 Ibid.
For example, the third chapter of the 1945 report focuses specifically on food administration in India during the war. The 400 million people of India are split into three classes: first, those who grow more food than they need and sell the surplus; second, those who grow less food than they need and buy the rest from markets; and third, those who buy their entire food supply from markets.7 In areas with a surplus, the government introduced Grain Purchase Officers to buy the surplus at ceiling prices fixed by the government, and in areas with a deficit, it distributed that surplus through rationing.8 All private trade was prohibited, and the government held a monopoly on purchase. These were the measures the government took, but it was the government’s own failures that made them necessary in the first place.
According to the Nanavati papers, London was the main culprit: it pressured India to export rice for the war effort instead of sending wheat to relieve the famine.9 Obviously, India was in no state to export rice when it could not even feed its own population, but none of this made it into the 1945 Famine Inquiry Commission report. The report portrays the government as a hero that implemented these measures in spite of famine to help the Indians, but in reality the government was doing the bare minimum, shuffling around food already present in India rather than providing additional aid.
Another relief operation was the Grow More Food campaign, created with the help of the Indian Central Cotton Committee and the Advisory Board of the Imperial Council of Agricultural Research.10 The first measure in the campaign was to increase the area under food crops through three methods. The first method was to bring new land, including fallow land, under cultivation. Fallow land, however, has low fertility and needs time to rest between crops, which is why it is left to lie fallow, and new land could not easily be cultivated because uncultivated land generally suffered from unhealthy conditions, deep-rooted grasses and weeds, low fertility, salinity or alkalinity, liability to damage from wild animals, or a lack of water or drainage. The second method was double-cropping, which was reported to have worked in all provinces except Bombay and the North-West Frontier Province. The third method was to divert land from non-food crops to food crops, which meant reducing the land used for cotton. The Indian Central Cotton Committee recom-
7 Woodhead, “Famine Inquiry Commission Final Report, 1945.”
8 Ibid.
9 Mukerjee, “Bengal Famine of 1943,” 71–75.
10 Woodhead, “Famine Inquiry Commission Final Report, 1945.”
mended that the 24 million acres originally used for cotton be reduced to 16 million, with the 8 million acres freed up used to grow bajra (pearl millet) and jowar (sorghum/broomcorn).11
The second measure in the campaign was to increase the supply of water for irrigation by improving and extending existing irrigation canals and systems and constructing additional wells. The third measure was to increase the use of manure for fertilization, but there was little success in increasing the production and application of different types of manure. Compost, another type of manure, seemed promising, but highly trained staff were required to complete the process, so the Government of India issued a grant of 2,25,000 rupees to train staff and develop production methods.12 A restriction was also placed on artificial fertilizers to reduce harmful environmental effects.
The fourth and final measure was to increase the supply of improved seeds, which can raise yields by 5-10%.13 Seed farms were established in Balochistan, European vegetables were acclimatized to the Indian climate in Kashmir, and seed was imported from America so that improved seed could be produced on a large scale and distributed to increase farmers’ yields nationwide.14
Together, these measures were reported by the Famine Inquiry Commission to have increased the amount of food and thereby decreased mortality. But once again, the Nanavati papers prove otherwise. The numbers published in the report were false, as they were merely predictions, not actual measurements. The report claims that only 1 million people died in the Bengal famine, crediting the relief operations in part, but research documents from the Indian Statistical Institute show that 1.5 million people died in the first year of the famine, with twice as many dying in the second year.15 These figures do not match those in the report.
The members of the Famine Inquiry Commission were hand-picked by the Secretary of State for India precisely because they would not examine the matter rigorously and would blame the famine on anything other than the government.16 This helps explain the many discrepancies between the data published in the 1945 report and the actual data available from other sources: the commission members simply did not analyze the evidence they were given.
11 Woodhead, “Famine Inquiry Commission Final Report, 1945.”
12 Ibid.
13 Ibid.
14 Ibid.
15 Mukerjee, “Bengal Famine of 1943,” 71–75.
16 Ibid.
As a result, everything in the report was skewed: the politics of famine barely made their way into it, even though they were a vital part of the famines’ cause.
Britain followed a classical famine policy, which held that government activity would cause famine rather than relieve it, and this explains the reasoning behind its hands-off actions and mindset.17 The British did not prioritize saving the lives of their subjects, especially if it cost a great deal of money, because they regarded the starving as disgusting, lazy creatures who did not deserve help. The idea was popularized by Adam Smith, the Scottish philosopher who wrote The Wealth of Nations, in which he argues that government regulation would turn food scarcity into famine, using Bengal as an example.18 In his view, drought there became famine because of government activity, so the government should minimize its role. Followers of Smith preached this non-interventionism, policymakers followed, and many deaths from starvation resulted. Whenever a politician spoke up against the classical famine policy and proposed strategies like price controls, the non-interventionists prevailed.
Even in Ireland, when it was under British rule, British newspapers shifted the blame for famine onto the Irish poor themselves, accusing them of being ungrateful for the resources they had and too stubborn.19
Government aid was provided only to those deemed useful, much as Churchill argued, because the British believed resources and help should be earned through work. Indians who were seen as idling lazily were judged undeserving and unworthy of aid. Many Indians were forced to take laborious jobs on public works projects in exchange for wages and food, and to obtain these jobs they underwent several tests designed to determine how desperate they were for sustenance.20 Could they travel long distances to eat a small quantity of food? Could they live in a poorhouse instead of their home in order to work? Could they work for only the bare minimum of wages? Could they eat food cooked by someone of a different caste? The British assumed that only the truly starving would pass these tests and take the job.
The classical famine policy finally came under attack after articles exposed the true nature of conditions in India. Critics argued that exporting grain during a famine contradicted the very principles of political economy and that the British should instead distribute their surplus resources within India. Famine codes were established, stating that anyone who could work would be given a job at reasonable wages and anyone who could not work would still be offered relief and aid.21 This was the start of famine prevention, though the codes were not enough on their own.
If this is how poorly the British handled famine in their colonies, what would a more just response have looked like? The Qing in China are a prime example, as they had one of the most effective famine relief systems of the same period. The difference was one of perspective: the Qing prioritized saving their subjects, no matter the cost. Policies in China were also very different: extra grain was set aside in times of surplus and distributed during famine.22 Preventative measures of this kind were absent from British administration.
The Qing were also motivated to help their subjects because they viewed natural disasters like drought as a warning that the rulers had displeased heaven and should reflect on their policies.23 The British, by contrast, viewed drought as a punishment of the citizens, not the government, or as a means of population control. Because of this, the classical famine policy was very ineffective, and had the British paid more attention to the Chinese strategies reported in the newspapers instead of mocking them, famine in India might have been prevented, or at the very least countless deaths deferred.
The main factor behind the famine in Bengal was the war, as food was exported in times of scarcity instead of remaining to feed the starving local population. What happened in Bengal can be classified as a “man-made” famine: although natural factors contributed, it was the government that produced many of the worst effects by failing to take proper preventive measures to relieve the situation.24 Because the Bengal famine was so severe, it has gained prominence in famine history and has come to stand in for the causes of all of India’s other famines, which is misleading. Although the
17 Kathryn Edgerton-Tarpley, “Tough Choices: Grappling with Famine in Qing China, the British Empire, and Beyond,” Journal of World History 24, no. 1 (2013): 135–76, http://www.jstor.org/stable/43286248.
18 Edgerton-Tarpley, “Tough Choices,” 135–76.
19 Edgerton-Tarpley, “Tough Choices,” 135–76.
20 Ibid.
21 Ibid.
22 Ibid.
23 Edgerton-Tarpley, “Tough Choices,” 135–76.
24 Tirthankar Roy, “Famines in India: Enduring Lessons,” Economic and Political Weekly (2021), https://www.proquest.com/magazines/famines-india-enduring-lessons/docview/2554956584/se-2.
Bengal famine occurred for one set of reasons, other famines in India had separate causes and should be treated as such.
The Deccan famines of 1630 to 1632 are a prime example of famines in which water problems were the main factor, as in many others. Issues with water included scarcity of drinking water, contamination, cholera outbreaks, and migration driven by the lack of water. Unlike Bengal, one of India’s few water-abundant areas, the Deccan’s famines affected tens of millions of people because of water scarcity.
Many of India’s famines can be classified as dryland famines, which are caused by a shortage of water and moisture and, as a result, of food.25 Water supply failure is the root of food supply failure, so famines caused by drought are driven more by water problems than by food problems. The theory of dryland famines explains this. Because of India’s tropical monsoon climate, it is more prone to drought than regions with a temperate climate. If a monsoon fails, water shortages follow, leading to food shortages; extreme heat then causes surface water to evaporate faster than normal, and as this process repeats over several years, drought sets in. Most famines before 1900 occurred because of drought. Rainfall records show that 1876 was the driest year in over a century, and similar conditions were observed in 1886, 1899, and 1918.26 Because the Deccan lies in southern India, its canals and rivers could not be replenished like northern canals, which are fed by snowmelt from the Himalayas. Rivers such as the Krishna, Tungabhadra, Godavari, and Bheema nearly dried out completely, and water across the Deccan all but disappeared.27 In some extreme cases, whole villages were abandoned because they had no water.
Famine is by definition an acute food shortage, but what causes food shortages, and how can we confirm that one actually exists? A two-pronged approach addresses this: because food shortages in dryland famines stem from water shortages, preventive measures should focus on the water supply, while mitigation measures should focus on the food supply.28 If the water problem is fixed first, the resulting food problem will not arise.
The government, however, shied away from solving the water shortage problem. While this was the root cause of famine, and famine
25 Roy, “Famines in India: Enduring Lessons.”
26 Roy, “Famines in India: Enduring Lessons.”
27 Ibid.
28 Ibid.
would have been relieved had it been solved, it was genuinely difficult to address. Well water was divided by caste and religion and was private property, so the government’s first priority was to protect the integrity of that private property rather than to make water more accessible to the people.29
Unlike food, water was not something whose cost of transportation could be reduced. Food problems had solutions, so water problems were overlooked. Even what little surface water remained was badly contaminated, leading to repeated cholera outbreaks, so much so that in the late nineteenth century many Indians died of cholera and other waterborne diseases rather than of malnutrition or starvation.30 And even when food distribution systems were put in place, they could only mitigate the effects of famine, not prevent them. The water famine was simply too hard to solve, so the main factor in dryland famines was water, not food.
Although British classical famine policy called for little to no intervention in the prevention and mitigation of dryland famines, Sir Edwin Arnold projected the opposite image of famine to the American public.31 As news spread throughout America that the British government was hindering progress in India, Arnold published an account laying out the “facts” of the famine so that Americans would not prematurely judge the government but would instead have more information with which to form their opinions.
To do this, he focuses on deficient rainfall as the natural and main source of famine.32 All countries depend on rain to water their crops, so naturally drought will cause famine. He claims that farming methods in India cannot be made more efficient to increase crop yields because Indians, too stubborn to change their ways and, in his telling, lacking both the money and the brains to switch, refuse to use Western machinery or more scientific methods, thereby portraying Indians to Americans as backwards.
According to Arnold, the British government does all that it can to help the Indians, which is not entirely true based on the Nanavati papers and the classical famine policy. They rule “for the sake of the Indians first, and for revenue and rep-
29 Ibid.
30 Ibid.
31 Edwin Arnold (1832-1904) was an English journalist and poet who was honored as a Knight Commander of the Order of the Indian Empire. He was best known for his poem about the Buddha, The Light of Asia.
32 Edwin Arnold, “The Famine in India,” The North American Review 164, no. 484 (1897): 257–72, http://www.jstor.org/stable/25118780.
utation and power afterwards,” and create departments and funds in pursuit of their ideal of saving life with all their power, which is also false in light of the classical famine policy.33 By writing this, Arnold paints the British government in a positive light to Americans, who were forming their opinions with minimal information. It would also prevent Indians from being recognized as needing additional support and food, because Americans would be under the impression that the British government was already helping.
To Arnold, starvation is just a slow disease, and people who are starved are essentially already ‘gone’ even though technically alive.34 He reinforces the image of Indians as physically weak by saying that the Hindu race is not strong but rather innocent and timid because their meals consist mainly of vegetables, a characterization that can be attributed to undernutrition and the Indian diet.
With prolonged famine prevailing throughout India, undernutrition became widespread as traditional diets were abandoned under famine conditions. As defined by the United Nations, undernutrition is the outcome of undernourishment, poor absorption, and/or poor biological use of nutrients consumed as a result of repeated infectious disease.35 Undernutrition includes being underweight, being too short, being deficient in vitamins and minerals (micronutrient malnutrition), or being undernourished. Malnutrition, however, can be either undernutrition or overnutrition, leading to out-of-range weight and BMI. Undernourishment, as defined by the Food and Agriculture Organization of the UN, is a state lasting at least one year in which a person is unable to acquire enough food, meaning a level of food intake insufficient to meet dietary energy requirements.36 In 2015, India had the highest estimated number of undernourished people in the world, falling short of its Millennium Development Goal target.37
These conditions persist because of the experience of surviving 190 years of famine, which led diets in India to become primarily carbohydrate-based. What began as unnatural famine diets hardened into strict dietary tradition. For example, the Amer-
ican Diabetes Association defines the ideal amounts of macronutrients as:
For a 2000-calorie diet, 250g digestible carbohydrates, 100g protein, and 66g fats; for a 1400-calorie diet, 175g digestible carbohydrates, 70g protein, and 46g fats. The Indian Diabetes Association, by comparison, defines the ideal amounts as follows: for a 2000-calorie diet, 275g digestible carbohydrates, 75g protein, and 66g fats; for a 1400-calorie diet, 192g digestible carbohydrates, 52g protein, and 46g fats.38
Comparing these templates shows that the corresponding diets (2000- and 1400-calorie) contain the same amounts of fat, but the Indian versions have less protein and more digestible carbohydrates. A large percentage of Indians are vegetarians, and vegetarian diets typically contain more carbohydrates.
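To make the comparison concrete, the two 2000-calorie templates can be converted into approximate shares of total calories. The short sketch below is only an illustrative check, assuming the conventional factors of 4 kcal per gram of carbohydrate and protein and 9 kcal per gram of fat (values not taken from either association); the gram figures come from the templates quoted above.

```python
# Illustrative check of the macronutrient templates above.
# Gram values come from the 2000-calorie templates quoted in the text;
# the 4/4/9 kcal-per-gram conversion factors are the conventional
# Atwater values, an assumption rather than a figure from either
# diabetes association.
KCAL_PER_GRAM = {"carbohydrates": 4, "protein": 4, "fat": 9}

templates = {
    "American (2000 kcal)": {"carbohydrates": 250, "protein": 100, "fat": 66},
    "Indian (2000 kcal)": {"carbohydrates": 275, "protein": 75, "fat": 66},
}

for name, grams in templates.items():
    kcal = {nutrient: grams[nutrient] * KCAL_PER_GRAM[nutrient] for nutrient in grams}
    total = sum(kcal.values())
    shares = ", ".join(f"{nutrient} {100 * value / total:.0f}%" for nutrient, value in kcal.items())
    print(f"{name}: about {total} kcal -> {shares}")
```

Under those assumptions, the Indian template draws roughly 55 percent of its calories from digestible carbohydrates and about 15 percent from protein, versus roughly 50 and 20 percent in the American template, with the share from fat essentially unchanged.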
A continuing diet deficient in protein and high in refined carbohydrates has led to an increasing prevalence of insulin resistance syndrome in India, a condition in which cells throughout the body do not respond to insulin as they should, so glucose remains in the blood and blood sugar levels rise. Insulin resistance syndrome encompasses a wide range of health conditions, including diabetes, dyslipidemia, heart disease, obesity, PCOS (polycystic ovarian syndrome), and sleep apnea, and some of these diseases raise the risk of others, as PCOS does for cardiovascular disease. India currently has the second-highest prevalence of diabetes in the world, according to the International Diabetes Federation, meaning that insulin resistance syndrome is on the rise.39
Insulin resistance syndrome will continue to be passed on to future generations of South Asians, and the associated health risks will remain, as shown in a study conducted by Brown University.
Focusing on the Chinese famine of 1959-1961, which affected 600 million people, the study drew its participants entirely from the Suihua Beilin region of Heilongjiang province in northeastern China.40 Information including sex, age, smoking, physical activity, body weight, BMI, a food-frequency questionnaire (FFQ), an oral glucose tolerance test (OGTT), and serum
33 Arnold, “The Famine in India.
34 Arnold, “The Famine in India.
35 Manoshi Bhattacharya, “A Historical Exploration of Indian Diets and a Possible Link to Insulin Resistance Syndrome,” Appetite 95 (2015): 421–54, https://doi.org/10.1016/j.appet.2015.07.002.
36 Bhattacharya, “Historical Exploration of Indian Diets,” 421–45.
37 Ibid.
38 Bhattacharya, “Historical Exploration of Indian Diets,” 421–45.
39 Ibid.
40 Jie Li et al., “Prenatal Exposure to Famine and the Development of Hyperglycemia and Type 2 Diabetes in Adulthood across Consecutive Generations: A Population-Based Cohort Study of Families in Suihua, China,” The American Journal of Clinical Nutrition 105, no. 1 (2017): 221–27, https://doi.org/10.3945/ajcn.116.138792.
blood glucose was collected.41 A food-frequency questionnaire (FFQ) is a checklist of common foods and beverages on which participants indicate how often they consumed each item within a certain timeframe; from this information, each participant’s nutrient intake can be calculated. An oral glucose tolerance test (OGTT) measures how well a participant can process a large amount of sugar. In this study, diabetes was defined as a fasting blood glucose (FBG) concentration of at least 7.0 mmol/L, following the 1999 WHO OGTT criteria, and hyperglycemia (high blood sugar) was defined as an FBG concentration of at least 5.6 mmol/L.42
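As a rough illustration of how those two cutoffs partition a fasting reading, the short sketch below classifies an FBG value against the thresholds quoted above (at least 7.0 mmol/L for diabetes, at least 5.6 mmol/L for hyperglycemia). It simply restates the definitions in code; the function name and the sample readings are hypothetical and are not part of the study’s own analysis.

```python
# Minimal sketch of the fasting-blood-glucose (FBG) cutoffs quoted in the
# text: >= 7.0 mmol/L is classed as diabetes, >= 5.6 mmol/L as
# hyperglycemia, anything lower as normal. The function name and the
# sample readings are hypothetical illustrations, not the study's code.
def classify_fbg(fbg_mmol_per_l: float) -> str:
    if fbg_mmol_per_l >= 7.0:
        return "diabetes"
    if fbg_mmol_per_l >= 5.6:
        return "hyperglycemia"
    return "normal"

for reading in (5.1, 5.9, 7.3):
    print(f"FBG {reading} mmol/L -> {classify_fbg(reading)}")
```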
[Table: Prenatal Exposure to Famine]43
[Table: Presence of Conditions]44
Families were examined as wholes in order to look at two generations: the parents and their offspring. Parents exposed to famine prenatally were typically older and had higher BMIs, FBG concentrations, and 2h-Glu values (a two-hour glucose test that measures blood glucose levels). Offspring of the famine-exposed parents had higher FBG and 2h-Glu values compared with the offspring
from non-famine-exposed parents. In the F1 generation, prenatal exposure to famine was associated with a BMI higher by 0.39, an FBG concentration higher by 0.27 mmol/L, and a 2h-Glu value higher by 0.32 mmol/L.45 In the F2 generation, maternal or paternal prenatal exposure to famine was not significantly associated with higher FBG or 2h-Glu concentrations. The results of the study are summarized in the tables above.
Prenatal exposure to famine increased the risk of hyperglycemia across two consecutive generations, which shows that the adverse effects of prenatal exposure on glucose metabolism can be passed on to subsequent generations. It also shows that prenatal nutrition is very important for controlling the risk of type 2 diabetes in later generations. The results clearly show how exposure to famine in the womb, or shortly after birth, can affect the risk of developing type 2 diabetes or hyperglycemia. Although the study was based on a Chinese famine, its results can be applied to the famines in India because the famine conditions were the same: undernutrition.
Because the effect of famine is carried across generations, this can explain why many South Asians face higher risks of type 2 diabetes and hyperglycemia. Since many areas of India were afflicted with famine, descendants from across India and South Asia are affected today. And because the effect is carried across generations, it can never be eliminated completely, no matter how closely one controls diet or physical activity; the risk will always be present due to generational exposure to famine.
Moreover, many generations of Indians were exposed to famine over the course of several hundred years, so the combined effect is likely more severe than that of the single famine examined in the study. Famines occurred one after another without a break, so many generations experienced both prenatal and postnatal exposure throughout their lives, which may further worsen the risk of hyperglycemia and type 2 diabetes in later generations.
41 Li et al., “Prenatal Exposure to Famine,” 221–27.
42 Li et al., “Prenatal Exposure to Famine,” 221–27.
43 Ibid.
44 Ibid.
Along with the Indian diet and generational exposure to famine, the main legacy of the famines is the set of starvation-adaptations that many South Asians carry today. These starvation-adaptations cover a wide range, but the overarching outcome many of them induce is insulin resistance,
45 Li et al., “Prenatal Exposure to Famine,” 221–27.
which is a major risk factor for developing type 2 diabetes. As shown earlier, insulin resistance also increases the risk of many cardiometabolic diseases other than type 2 diabetes, such as obesity, hyperglycemia, cardiovascular disease, and coronary artery disease. Diet and undernourishment also affect starvation-adaptations, and as shown earlier, Indian diets involve a higher intake of refined carbohydrates, saturated fats, and processed foods, and less protein.46 These starvation-adaptations are typically favorable in times of famine, as they promote fat and calorie retention when food is scarce, but they are harmful today because most South Asians no longer live in famine conditions and have no need to retain the excess fat and calories.
Most South Asians with type 2 diabetes have a higher rate of DNA methylation, an established indicator of epigenetic change.47 DNA methylation occurs when methyl groups are added to DNA, making it less accessible for use and changing gene expression by inhibiting the production of certain proteins those genes encode. This is harmful when those proteins are needed to regulate the onset of conditions like type 2 diabetes. But because methylation is such a strong indicator of epigenetic change, it could be used in the future as a screening tool, allowing intervention before the onset of diabetes so that patients can act proactively.48
Due to the starvation-adaptation, South Asians have a thin-fat phenotype, in which a disproportionate amount of fat is concentrated in the abdominal region even if the person is relatively thin overall.49 In the United States especially, South Asians have the highest body-fat percentages and lowest lean muscle masses, which can make them more prone to obesity. The higher body fat seen in South Asians includes increased deep and visceral fat as well as ectopic fat, in which fat is stored not in adipose tissue, where it belongs, but in places where it should be minimal, such as the heart, pancreas, skeletal muscle, and liver.50
South Asians also face a condition called ethnic lipodystrophy, so their adipokine profile (which consists of cytokines in fat cells that control metabolism, energy, and inflammation) is
46 Mubin Syed, “The Susceptibility of South Asians to Cardiometabolic Disease as a Result of Starvation Adaptation Exacerbated during the Colonial Famines,” Endocrinology, Diabetes and Metabolism Journal 6, no. 2 (October 7, 2022): 1–9, https://doi.org/10.31038/edmj.2022621.
47 Syed, “Susceptibility of South Asians,” 1–9.
48 Ibid.
49 Ibid.
50 Syed, “Susceptibility of South Asians,” 1–9.
less favorable.51 Resistin, one of these adipokines, is found in higher concentrations in South Asians and promotes insulin resistance and obesity. Adiponectin, another adipokine, which promotes peripheral insulin sensitivity, is found in lower concentrations. Myostatin, a protein found at higher concentrations in South Asians’ skeletal muscle, inhibits muscle growth, resulting in the lower lean muscle masses discussed earlier.52 Lean mass, especially in the organs and muscles, burns calories at a faster rate than fat, so low amounts of it mean fewer calories burned, and lower lean muscle mass can increase the risk of insulin resistance and cardiovascular disease.
Energy burned at rest, or thermoneutral resting energy expenditure, is typically 32% lower in South Asians than in Caucasians, meaning that fewer calories are burned at rest and more are retained by the body. This is helpful during famine, because calories are not wasted at rest and can be saved, but harmful now, because too few calories are being burned. Brown adipose (fat) tissue, which specializes in generating heat and usually accounts for 20% of total energy expenditure, is 34% lower in South Asians than in Caucasians; this lack of brown adipose tissue increases the risk of developing cardiometabolic disease in times of prosperity. South Asians also need roughly 80% more exercise than Caucasians to see the same effects, requiring around 232-266 minutes of moderate-intensity exercise while Caucasians need only 150 minutes of exercise at the same intensity.53
The MC4R gene is important because mutations in it account for the most common form of obesity. A single-nucleotide polymorphism of this gene is highly prevalent in South Asians and increases visceral fat and insulin resistance; in addition, South Asians carry at least six more gene variants associated with insulin resistance.54 These variants promote fat retention, which would be favorable during famine but is harmful now because the body unnecessarily holds onto fat. Excess fat in South Asians is stored in superficial subcutaneous adipose tissue and is metabolically inert.55 Because this storage space is smaller in South Asians, it is overfilled earlier than in Caucasians, resulting in conditions like dysglycemia and dyslipidemia. Dysglycemia is any abnormality in blood sugar stability, and dyslipidemia is any imbalance of lipids such as cholesterol,
51 Ibid.
52 Ibid.
53 Ibid.
54 Syed, “Susceptibility of South Asians,” 1–9.
55 Ibid.
LDL-C (low-density lipoprotein cholesterol), triglycerides, and HDL (high-density lipoprotein).
Vitamin C was one of the nutrients scarce during famine, and lipoprotein(a), or Lp(a), protects against vitamin C deficiency and acts as an analog for vitamin C, so it is found at higher levels in South Asians.56 While Lp(a) aids wound healing and protects against scurvy, which was once prevalent in India, it is damaging to the body today. Vitamin C deficiency can weaken the walls of the arteries, since the vitamin is used to maintain their collagen framework. Lp(a) then acts as a replacement for vitamin C, but it carries extra LDL cholesterol into arterial walls and builds up plaque. Because of this, many South Asians have high cholesterol levels, specifically LDL-C. Lp(a) is injurious in the modern day as one of the strongest independent risk factors for premature cardiovascular disease: plaque buildup in the arteries can block or reduce the flow of blood to the heart, which in turn has to pump harder, raising blood pressure and potentially leading to heart attacks. Forty-five percent of South Asians have abnormal Lp(a) levels, one of the highest prevalence rates of any ethnic group in the world.57
The British government played an active yet inactive role in the famines in India: it actively sought to blame nature and drought for the famines through its Famine Inquiry Commission reports, yet it was inactive in providing proper and sufficient aid to Indians to relieve them. Although it was within their capacity to provide resources, the British simply did not, resulting in millions of unnecessary deaths that could have been prevented. Paying no mind to the value of Indian lives, they failed to consider the disastrous consequences of the famines, consequences that continue to affect many South Asians today through the adaptations gained from sustained periods of starvation. Even though South Asians have escaped the direct control of the British empire, the British continue to shape a part of all South Asians’ lives through the health conditions they are inherently at risk for as a result of epigenetic change. Because of this, South Asians must pay special attention to their health and remain aware, cautious, and alert. These effects cannot be undone, and there is little anyone can do to rid South Asians of them, because of how the British administration chose to respond to the famines.
56 Ibid.
57 Ibid.
Bibliography
Arnold, Edwin. “The Famine in India.” The North American Review 164, no. 484 (1897): 257–72. http://www.jstor.org/stable/25118780.
Bhattacharya, Manoshi. “A Historical Exploration of Indian Diets and a Possible Link to Insulin Resistance Syndrome.” Appetite 95 (December 2015): 421–54. https://doi.org/10.1016/j.appet.2015.07.002.
Edgerton-Tarpley, Kathryn. “Tough Choices: Grappling with Famine in Qing China, the British Empire, and Beyond.” Journal of World History 24, no. 1 (March 2013): 135–76. http://www.jstor.org/stable/43286248.
Li, Jie, Simin Liu, Songtao Li, Rennan Feng, Lixin Na, Xia Chu, Xiaoyan Wu, et al. “Prenatal Exposure to Famine and the Development of Hyperglycemia and Type 2 Diabetes in Adulthood across Consecutive Generations: A Population-Based Cohort Study of Families in Suihua, China.” The American Journal of Clinical Nutrition 105, no. 1 (January 2017): 221–27. https://doi.org/10.3945/ajcn.116.138792.
Mukerjee, Madhusree. “Bengal Famine of 1943: An Appraisal of the Famine Inquiry Commission.” Economic and Political Weekly 49, no. 11 (March 2014): 71–75. https://www.jstor.org/stable/24479300.
Roy, Tirthankar. “Famines in India: Enduring Lessons.” Economic and Political Weekly (June 2021). https://www.proquest.com/magazines/famines-india-enduring-lessons/docview/2554956584/se-2.
Syed, Mubin. “The Susceptibility of South Asians to Cardiometabolic Disease as a Result of Starvation Adaptation Exacerbated During the Colonial Famines.” Endocrinology, Diabetes and Metabolism Journal 6, no. 2 (October 2022): 1–9. https://doi.org/10.31038/EDMJ.2022621.
Woodhead, John. “Famine Inquiry Commission Final Report, 1945.” Indian Culture, 1945. https://indianculture.gov.in/reports-proceedings/famine-inquiry-commission-final-report-1945.
Mannah Patel
When someone envisions Indian medicine today, images of herbal concoctions, skin-lightening pastes, and perhaps even viral TV hoaxes may come to mind. In a contemporary context, these practices are often dismissed as pseudoscience, relegated to the realms of traditionalism and superstition. However, it is critical to recognize that what is now labeled as “pseudoscience” was, for centuries, the cornerstone of healthcare in India. These practices, deeply ingrained in the cultural fabric, were considered pure and were highly esteemed within South Asian society.
The exploration of Indian indigenous medicine unveils a rich tapestry woven over centuries of complex South Asian history, blending Hindu Ayurvedic medicine, Muslim/Mughal Unani medicine, Tamil Dravidian Siddha medicine, and homeopathy into a holistic indigenous approach often referred to as AYUSH (Ayurveda, Yoga and Naturopathy, Unani, Siddha, and Homeopathy). Within this small-scale industry, Vaidyas, Hakims, and other local practitioners held profound respect in society. Health was measured by instinct, and medical traditions were passed down through the generations of a family, creating a kinship-like relationship between patients and practitioners.
However, underlying this apparent unity were disagreements among AYUSH’s sub-communities, such as conflicts between Ayurveda and Unani doctors. These differences, while contributing to the vibrancy of indigenous medical practices, also sowed the seeds of discord within the community. Against the backdrop of this diverse and divided medical tapestry, the British colonial era unfolded. As the British established their dominance in India, they viewed the indigenous medical systems with skepticism, pointing out perceived problems in the fundamental science—or lack thereof—underlying Indian medicine. The lack of solidarity within the AYUSH communities also presented a vulnerability that the British exploited to usher in a significant transformation.
Gradually and systematically, the British colonial authorities replaced the existing indigenous medical structures, from pharmaceutical administration to clinic setups, with Western medicine. The shift was profound, impacting not only
the healthcare landscape but also the very essence of traditional practices that had been the lifeblood of Indian society for centuries.
Understanding this historical transition is imperative because it bears significant implications for the modern Indian healthcare system. India, now the world’s most populous country, grapples with the consequences of a transformation that occurred under colonial rule. The complex interplay of cultural traditions, colonial interventions, and contemporary challenges in healthcare makes this historical journey important for South Asian historians and medical professionals alike to understand and incorporate into their ethos today.
In order to accurately map out the transformation of Indian healthcare, it is necessary to understand the motivations behind Britain’s insistence on replacing indigenous Indian medicine, the mechanisms by which they did so, and the implications of their actions on Indian patients today.
The British motivation to uproot indigenous medicine from India was a complex interplay of factors, ranging from racist mischaracterizations of Indian culture to a modern scientific urge for standardization, all fueled by the subcontinent’s dire need for far-reaching public health measures. For centuries, indigenous medicine had played a crucial role in meeting the diverse medical needs of the Indian populace. Whether through cured neem leaf tonics or antibiotic turmeric pastes, AYUSH had visible medical credibility and leverage in day-to-day Indian life. However, with the introduction of British racism and Western biases into the Indian healthcare system, the entire identity of indigenous medicine was quickly reduced to the assumption that it blindly followed Indian scriptures and word-of-mouth fairytales.
The British colonial perspective could not grasp how a scientific system rooted in religion could positively impact the human body. “The Madras Government and Indigenous Systems of Medicine,” a 1924 article in The British Medical Journal, outlines these same ideologies. Its author, R. H. Elliot, shares his opinions on the effectiveness of indigenous medicine after the local government in Madras agreed to recognize indigenous systems in the town’s hospitals and medical schools. Summarizing the origins of AYUSH, Elliot (1924, 787) contends that “the familiar features of the phase of development reached by Ayurveda [at the time]” emulate “the history of his own [Western] science from centuries ago.” This is a revealing window into how, even after almost 200 years of the occupation of India, British officials still failed to understand the progress and potential of local medicine. Elliot takes jabs at AYUSH’s strong dependence on scriptures and spirituality, echoing the prejudice of thousands when asserting that Indian science has not yet “emerged from the metaphysical state” and “need[s] to understand the difference between metaphysical and positive knowledge” in order to form an even remotely valid medical system. Taking further issue with the government’s “unfortunate” validation of AYUSH within new British hospitals, Elliot stood by the belief that the “efficiency of the hospitals will be seriously diminished” if kept subject to the “incomprehensibly backwards” pathologies and education that “resembles that which was current in the time of Aristotle” (1924, 788). It is important to call attention to the hypocrisy of this British perspective, considering that European medicine stemmed from and did not differ much from Greek medicine until the 1500s.
Traditional healing methods were reduced to racist stereotypes, with the British unable or unwilling to understand the sophisticated methodologies behind indigenous pharmaceuticals and healing procedures. Elliot not only questioned the modern scientific relevance of indigenous medicine but, by conflating genuine indigenous procedures with racist stereotypes about Indian culture, also demonstrated the deep influence that British racism and a sense of superiority had on perceptions of Indian medicine. Instead of highlighting effective practices such as cured neem tonics or antibiotic turmeric pastes, Elliot chose to focus on “the dung of the sacred cow, or virgin’s urine” as the warped keystones of indigenous practice (Elliot 1924, 787). His stance shows the widespread ignorance shared by the British medical community at that time and how it fueled efforts to eliminate indigenous practices.
Not only was AYUSH deemed to be stuck in the past, but indigenous pharmaceuticals and procedures were deemed too unstandardized to be considered a valid, modern form of science. Typically botanical in nature, indigenous medicine relied on an estimation culture to create specific prescriptions from locally available herbs. It worked for the people of India for centuries, but the British seemed to be taken aback by the lack of standardization. Coming from a scientific culture that revolved around the scientific method of ask, experiment, analyze, and repeat, British doctors found it almost impossible to validate clinical practices that weren’t consistently replicable
and data-driven.1 Their already racially tinged skepticism of indigenous treatment was heightened further by the finding that, by Western scientific standards, indigenous treatment plans were essentially random and chance-based.
Additionally, there was virtually no national medical community actively working on expanding the allegedly underdeveloped anatomical and disease textbooks or educational material. In fact, the first medical board in India didn’t exist until 1823 under the French takeover of Pondicherry.2 This lack of apparent drive to explore the human body like the West did—with microscopes, bacterial and viral analyses, and cadavers—confused the British. Why, they wondered, were the Indians sticking to what looked “only like superstition” and “found[ing] a medical school on those antiquated lines when the new was so easily in the reach of all?”3
Beyond their notions about the capabilities of Indian scientists, the British also felt that the foundations of indigenous medicine were too narrow to support the growing needs of a modern, increasingly diverse Indian subcontinent. Amid all the British concerns surrounding indigenous medicine, some justifiable public health motives did exist behind discrediting it. One shortcoming of AYUSH, stemming from its limited anatomical knowledge, was the complete absence of departmentalization in local clinics. Where Western hospitals were split into specialized departments of pediatrics, obstetrics, cardiology, and immunology, standard indigenous clinics were one-room, grassroots establishments.4 There would be one healer, Vaidya, or fakir providing treatments that approached the body as one faulty machine rather than as a delicate balance of several body systems. As shown in Figure 1, clinics were often based out of doctors’ homes, leaving little to no room for sterilization or the development of hygiene standards.5 Without the budget of a large hospital or the support of a concrete govern-
1 Anshu, Dr., and A Supe. “Evolution of Medical Education in India: The Impact of Colonialism.” J Postgrad Med 62, no. 4 (October 2018). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5105212/.
2 Mushtaq, Muhammad Umair. “Public Health in British India: A Brief Account of the History of Medical Services and Disease Prevention in Colonial India.” Indian Journal of Community Medicine 34, no. 1 (January 1, 2009): 6. https://doi.org/10.4103/0970-0218.45369.
3 India Office Library and Records. “Report on Native Papers for the Week Ending May 02, 1896.” Report on Native Papers for the Week Ending ..., May 2, 1896. https://jstor.org/stable/saoa.crl.25636124.
4 Rastogi, Sanjeev. “Emanating the Specialty Clinical Practices in Ayurveda: Preliminary Observations from an Arthritis Clinic and Its Implications.” Journal of Ayurveda and Integrative Medicine 12, no. 1 (January 1, 2021): 52–57. https://doi.org/10.1016/j.jaim.2019.09.009.
5 National Library of Scotland. Report on the Working of the Government Medical School, Rangoon, 1924. Accessed September 2, 2023. https://digital.nls.uk/indiapapers/browse/archive/74983096.
ment health sector, AYUSH doctors had virtually no money or incentive to develop surgical and procedural technology beyond skin-level treatments. Invasive procedures of any kind—especially long-term surgeries—were therefore completely out of the picture in AYUSH’s healthcare model.6 With the advent of colonial bioexchanges and the consequent influx of disease, wartime injuries, and widespread famine and malnutrition, the short-term, quick-fix care that AYUSH promised was no longer adequate for the growing, more diversified Indian population.
British officials with years of big-city, large-population healthcare expertise saw this pitfall, insisting that large-scale departmentalized hospitals were the only way to meet India’s medical needs.7 A wounded soldier could not be fixed with herbal paste—he needed surgery. An older man in his 80s suffering from cancer could not go back and forth to these hole-in-the-wall clinics—he needed long-term, consistent bedside care in a brick-and-mortar hospital. However, AYUSH had nowhere near enough technological development to accommodate the procedural needs of colonial Indians. In the 1921 Usman Report, Sir Mahomed Usman, despite being a strong proponent of combining Indian and Western medicine, conceded that while AYUSH practitioners were “self sufficient and efficient in medicine,” “in surgery they are not.” The reality of colonizers settled in the subcontinent also being subject to the
6 Rastogi, Sanjeev. “Emanating the Specialty Clinical Practices in Ayurveda: Preliminary Observations from an Arthritis Clinic and Its Implications.” Journal of Ayurveda and Integrative Medicine 12, no. 1 (January 1, 2021): 52–57. https://doi.org/10.1016/j.jaim.2019.09.009
7 Anshu, Dr., and A Supe. “Evolution of Medical Education in India: The Impact of Colonialism.” J Postgrad Med 62, no. 4 (October 2018). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5105212/.
same small-scale healthcare system further ignited British aspirations for long-term, preventative, surgical, and targeted treatments (Rastogi 2021, 56).
The only way, in British eyes, to achieve this larger style of healthcare was through large-scale, departmentalized hospitals. Their road to these establishments was peppered with bigotry, ignorance, and genuine public health pitfalls, and took a variety of systemic vehicles and methods to navigate.
In the wake of the British decision to replace indigenous medical practices, a formidable challenge emerged: seamlessly integrating Indian doctors and patients into the new healthcare systems. Central to this integration was the crucial role of Indian doctors in drawing patients to British hospitals. The British devised two primary strategies to redirect indigenous practitioners from their existing clinics: persuading them of the superiority of the British approach and creating systemic conditions that made medical practice in India difficult beyond the boundaries of British infrastructure. In their colonial pursuit of power retention, British medical officials ingeniously employed both approaches to redefine clinical systems.
Around the mid-19th century, British efforts to eliminate indigenous medicine were well in action. Indigenous practitioners found themselves having to make an incredibly difficult decision. They had to choose between preserving their medical traditions and risking exclusion from the colonial medical system entirely or assimilating into Western practices to safeguard their medical authority. In most cases, practitioners opted for the latter, sacrificing aspects of their cultural heritage to sustain their careers and livelihoods in a changing healthcare landscape.
The British spearheaded the transition of indigenous practitioners to Western medicine with manipulative promises of financial gain and professional advantage. In cities like Bombay and Delhi, the newly established large hospitals began awarding local doctors university certificates for their training in modern medicine. Securing one of these certificates gave locally trained doctors the same prescription and surgical authority as their British counterparts, allowing them to charge exponentially larger fees for their medical services and enjoy legal protection from regional British medical boards. In 1860s Punjab, local hakims located outside mosques were provided with three-month courses in Western chemistry and pharmacy to prepare them for
high-demand dispensary jobs.8 In 1882, the Central British Medical Board registered all indigenous practitioners holding university certificates in the same legal group as British doctors, granting them the same rights to government certification, clinic ownership, litigation, and fees and salaries.9 This strategy proved effective until around the 1890s, fostering a significant generation of Indian doctors who, in their pursuit of medicine, chose to relinquish their cultural heritage and family careers in small increments.
As the process of Indians seeking certifications gained momentum due to its relative ease and obvious benefits, many Indian doctors began advocating for equal pay with their British counterparts. As Kumar (1997, 168) highlights, “one such development was the Indian doctors trained in modern medicine seeking freedom from and even parity with the superior Indian medical service [British doctors serving in India].” As expected, however, their requests were met with rejection by British authorities, who argued that Indian doctors, regardless of their certification status, still retained elements of “primitive, prehistoric treatments used by hakims and others” (Kumar 1997, 168). Consequently, Indians realized they could not straddle the line between their indigenous Ayurvedic and Unani practices and Western medicine; they were coming to the British-induced realization that they could not have the best of both worlds. As colonial India rolled into the 20th century, growing frustration about this neither-here-nor-there situation led a significant number of Indians to enroll directly in British medical schools, bypassing initial training in indigenous medicine entirely.10 It quickly became evident that the British plan to shepherd future generations of Indian doctors out of homegrown indigenous training and into British medical schools was going to succeed.
Faced with rising enrollment of Indian students in these new medical schools, British officials encountered a pivotal challenge in sustaining this momentum. Quickly recognizing the significance of fostering inclusion, officials emphasized the integral role of Indian students as providers
8 Kumar, Deepak. “Medical Encounters in British India, 1820-1920.” Economic and Political Weekly 32, no. 4 (1997): 166–70. http://www.jstor.org/stable/4405022.
9 Mushtaq, Muhammad Umair. “Public Health in British India: A Brief Account of the History of Medical Services and Disease Prevention in Colonial India.” Indian Journal of Community Medicine 34, no. 1 (January 1, 2009): 6. https://doi.org/10.4103/0970-0218.45369.
10 Anshu, Dr., and A Supe. “Evolution of Medical Education in India: The Impact of Colonialism.” J Postgrad Med 62, no. 4 (October 2018). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5105212/.
of, rather than mere subjects in, the emerging healthcare infrastructure. This strategic approach was reflected in the composition of the instructional landscape, which prominently featured Indian doctors and professors recruited by colonial medical boards. This not only cultivated a sense of comfort and trust among students but also conveyed the implicit message that climbing the colonial hierarchy and achieving excellence was within their grasp, so long as they adhered to Western expectations.
Madras Medical College, along with institutions in Bombay (Mumbai) and Delhi, played a pivotal role in the British strategy to assimilate Indian doctors into Western medical practices.11 As these medical colleges thrived, British officials acknowledged the imperative of sustaining momentum in transforming Indian medical education. Pioneering academics were hired as professors to set an example for Indian students: sacrifice your culture, secure your career.12 The symbiotic relationship between British medical officials and Indian academics showcased the delicate balance sought in integrating Western medical principles while putting up a facade of respecting the Indian people.
However, the cultural congruence for medical students largely ended with their instructors. While the presence of Indian authority figures was crucial for a semblance of familiarity, British institutions in India were often residential, intentionally mirroring the daily life and medical education practices of the UK. From mandatory sports and etiquette practices like tea time to social events such as balls and galas, every aspect of the daily routine was carefully tailored to an environment that made Indian students feel integrated into Western life.13 These institutions operated with “the twin purpose of impressing upon the natives the value of Western thought and of preparing them for taking up jobs to assist in the [medical] administration of the country” (Kochhar 1992, 1). This subtle ideological shift on a personal level steadily facilitated the students’ professional acceptance of the drastically different British medical curricula.
By the mid-19th century, British endeavors to eradicate indigenous medicine were well underway, with notable instances such as the establishment of the Madras Medical College in
11 Kumar, Deepak. “Medical Encounters in British India, 1820-1920.” Economic and Political Weekly 32, no. 4 (1997): 166–70. http://www.jstor.org/stable/4405022.
12 Pati, Biswamoy, and Mark Harrison. The Social History of Health and Medicine in Colonial India, 2008. https://doi.org/10.4324/9780203886984.
13 Kochhar, R. K. “English Education in India: Hindu Anamnesis versus Muslim Torpor.” Economic and Political Weekly 27, no. 48 (1992): 2609–16. http://www.jstor.org/stable/4399193.
1835 serving as landmarks.14 In these new schools, perhaps the most cumbersome adaptation for Indian students was the abrupt adoption of new, entirely Westernized curricula. The transformation of British-Indian medical education was best seen through Sir Edward Winter, a British medical administrator who led a systematic reformation of medical curricula with an emphasis on anatomy, physiology, and surgical principles. Initially motivated by the plight of British soldiers on duty in Southern India, Winter ran a national board campaign urging “a hospital [system] and regimented [departmentalized and government-run] healthcare” that he felt “were necessary to treat the English soldiers” (Chakrabarti 2006). It is important to note that Winter’s ideal of hospital standardization in India was motivated not by the needs of locals but by the needs of colonial residents. It was perhaps this English-centered sentiment that resonated with the Central Government’s medical boards.
By the mid-1880s, Winter’s emphasis on anatomical and educational standardization had infiltrated the content of Indian medical education, encompassing standardized textbooks, examinations, and treatment protocols.15 The curriculum delved into intricate details of anatomy and physiology, guiding students through a system-by-system, layer-by-layer dissection of the human body. It introduced them to the rigors of medical ethics, midwifery techniques, principles of hygiene, and the intricacies of pathology and insanity.
While the systematic approach sought to ensure a consistent quality of medical training, it gradually eroded the diversity and richness inherent in traditional Indian medical practices. The shift for Indian students was especially arduous; they had to transition from AYUSH’s whole-body perspective to treating specific parts of the human anatomy. British officials sought to move from the previous model, in which all indigenous doctors had a general understanding of various body systems, toward modern medical specialization that facilitated a more comprehensive and advanced level of care within specific domains. Curricula emphasized in-depth knowledge of specific specialties, such as cardiology, respiratory care, and gynecology. This departure meant that physicians were encouraged to become experts in a particular field, allowing
14 Mushtaq, Muhammad Umair. “Public Health in British India: A Brief Account of the History of Medical Services and Disease Prevention in Colonial India.” Indian Journal of Community Medicine 34, no. 1 (January 1, 2009): 6. https://doi.org/10.4103/0970-0218.45369.
15 Mushtaq, Muhammad Umair. “Public Health in British India: A Brief Account of the History of Medical Services and Disease Prevention in Colonial India.” Indian Journal of Community Medicine 34, no. 1 (January 1, 2009): 6. https://doi.org/10.4103/0970-0218.45369.
them to explore the intricacies of their chosen domain.16 The efficiency of Western medicine was finally being put into action. Procedures like knee surgery and Cesarean births, previously unknown to the older generation of Indian doctors, became standard practice. The impact of more time-efficient and streamlined treatments resonated with the specific needs of colonial India.
Beyond anatomical and technical knowledge, the establishment of the Central Board of Nursing and Midwifery in India in 1902 introduced novel, specific subjects such as midwifery, hygiene, and bedside care into the fabric of Indian healthcare.17 The focus of students’ training “increasingly shifted from hands-on training in the field toward theoretical training with emphasis on preventive care services” (Prasad and Dasgupta 2013, 10). In the past, a patient encounter involved a visit to a Hakim or Vaidya, where an examination was conducted, a prescription provided, and the patient quickly departed. The introduction of these interactive, soft-skills-based fields, in which physicians opened up their hearts and ears to patients over the course of hours, days, or even months, marked an important new emphasis on a physician’s ability to provide sustained, long-term, and preventative care.
Within large hospitals, the integration of midwifery meant that expectant mothers now had access to a continuum of care, extending from prenatal to postnatal periods. Surgical patients, once confined to the procedural aspects of their treatment, could now benefit from comprehensive pre- and post-operational care, fostering a more holistic approach to medical interventions. The elderly, previously lacking dedicated assistance during extended hospital stays, now had 24/7 access to attending nurses.
Standing as a testament to these foundational medical changes, the establishment of the Burma Government Medical School in Rangoon between 1907 and 1922 heralded a transformative era in Burmese and South Asian medical history. This pioneering institution, rather than solely functioning as an educational entity, meticulously documented the novel and dynamic shifts in medical practices of the time period.
16 Chatterjee, S. K., Ramdip Ray, and Dilip K. Chakraborty. “Medical College Bengal—A Pioneer over the Eras.” Indian Journal of Surgery 75, no. 5 (August 3, 2012): 385–90. https://doi.org/10.1007/s12262-012-0714-2; Report on the Working of the Government Medical School, Rangoon. (1924). National Library of Scotland. Retrieved September 2, 2023, from https://digital.nls.uk/indiapapers/browse/archive/74983096.
17 Prasad, Rupa, and Rajib Dasgupta. “Missing Midwifery: Relevance for Contemporary Challenges in Maternal Health.” Indian J Community Med 38, no. 1 (March 2013): 9–14. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3612303/.
Figure 2: Report on the Working of the Government Medical School, Rangoon. (1924). National Library of Scotland. Retrieved September 2, 2023, from https://digital.nls.uk/indiapapers/browse/archive/74983096
Figure 3: Report on the Working of the Government Medical School, Rangoon. (1924). National Library of Scotland. Retrieved September 2, 2023, from https://digital.nls.uk/indiapapers/browse/archive/74983096
Illustrated in Figure 2 is the introduction of specialized courses, a noteworthy development. Of particular interest is the diverse faculty leading these courses, comprising both Indian and British professors occupying varying positions within the hospital hierarchy.18 This unique combination of cultural perspectives within the academic sphere added a layer of richness—and alleged diversity—to the educational experience.
Complementing traditional classroom instruction, Figure 3 underscores the institution’s commitment to practical training within the operating room. This experiential component, emerging as a requisite co-curricular activity, epitomized the institution’s dedication to nurturing well-rounded and proficient medical professionals who could carry on the Western commitment to surgery and biotechnology.19
The records of the Burma Government Medical School offer a comprehensive overview of British India’s academic landscape. Student reports provide meticulous details on student pass rates, shedding light on academic achievements. Class and instructor codes delineate the organizational structure, emphasizing the collaborative efforts of individuals from diverse cultural backgrounds. Discipline records underscore the institution’s uncompromising stance on professionalism among its student body. Furthermore, a thorough examination of the history of skills testing reveals the school’s evolution of competency standards within the medical curriculum.
18 Report on the Working of the Government Medical School, Rangoon. (1924). National Library of Scotland. Retrieved September 2, 2023, from https://digital.nls.uk/indiapapers/browse/archive/74983096
19 Report on the Working of the Government Medical School, Rangoon. (1924). National Library of Scotland. Retrieved September 2, 2023, from https://digital.nls.uk/indiapapers/browse/archive/74983096
The Burma Government Medical School emerged as a symbol of innovation, where the convergence of diverse cultural perspectives and Western educational approaches laid the groundwork for a new era in medical education. The documentation itself was a starkly Western method of recording the formative years of British-Indian medical education.
By the early 1900s, the British had almost permanently revolutionized the structure of Indian medicine via institutions such as the one in Rangoon. The remainder of Britain’s time in the subcontinent was marked by new strides in public health, patient coverage, and social implications of their new cross-cultural hospitals.
British healthcare revealed its true impact during times of crisis. In the face of small-scale local epidemics or natural disasters, the emergence of British hospitals equipped to house entire communities underscored the resilience of British healthcare infrastructure compared with indigenous clinics. Single British facilities, capable of providing beds for affected populations, stood as beacons of support, in stark contrast to the limitations of small indigenous clinics. And so, as British medical education finally solidified within Indian society around the early 1900s, the next step was to permanently embed Western public health into the subcontinent’s healthcare. Recall that starting in 1757, all hospitals and clinics had been under the control of the Central British-Indian Government.20 However, as new hospitals became self-sufficient, local medical systems began to transfer into provincial governments’ control in 1919, leading to “the transfer of public health, sanitation, and vital statistics to the provinces” (Mushtaq 2009). Each provincial hospital system had its own Inspector General, Sanitary Commissioner, and Surgeon General. This structure helped close loopholes exploited by illicit AYUSH clinics, enforce hygiene and sanitary procedures in British hospitals, and sustain the constant development of Indian medical technology. It was enforced at the hospital and medical school level through Deputy Surgeon Generals and Hospital Directors, whose regional placement allowed medical leaders to cater to a specific population’s needs without several additional legal layers in the central government.21
Perhaps one of the best examples of this regionalization was the Madras Province. In Madras, a hub of medical controversy in the early stages of colonial takeover, the Madras Public Health Act of 1939 laid down city regulations covering food, drink, clinical hygiene protocol, neighborhood sanitation, and more. Considering that Madras was one of the largest cities straddling both agrarian and urban Indian society at the time, this act sent a profound message to the entire nation, establishing public health as a non-negotiable aspect of Indian healthcare and law.
This had not always been the case. The Madras Famine, also known as the Great Famine of 1876-1878, was arguably one of the worst public health disasters in pre-independence India. It is estimated that around 8.2 million people died within those two years, with thousands of citizens resorting to cannibalism, inter-caste warfare, and other extreme measures.22 The famine occurred at a point in the region’s medical development when the British public health boards had not yet been established.23 Medical schools and hospitals had nowhere near enough bed space, there was a severe lack of British-trained Indian doctors whom the people of Madras would trust, and the entire region was devastated. The Madras Famine was a blaring red signal to the British administration, cementing their suspicions that indigenous systems as they existed in the 19th century were simply not capable of tending to India’s needs.
With the Madras Public Health Act and subsequent bureaucratic measures, however, India’s public health capacities metamorphosed. Between 1948 and 1960, after India’s independence from Britain, Madras experienced one of the most severe tuberculosis outbreaks in Asia. With indigenous models, it was nearly impossible to determine where this outbreak was coming from. Through a Western-style public health analysis of Madras citizens’ immune systems, however, the TB Division of the Ministry of Health found that “the high incidence of TB in Madras city and state was due to inadequacies of a rice-based South Indian diet which had a limited nutritional value” (Neelakantan 2017). A carbohydrate-focused diet meant that the people of Madras did not have enough nutrients in their bodies to fight diseases like TB. The TB Division soon began distributing fresh produce to TB patients and neighborhoods with high incidence rates, eventually curbing the disease by the end of 1960. Madras had
20 Mushtaq, Muhammad Umair. “Public Health in British India: A Brief Account of the History of Medical Services and Disease Prevention in Colonial India.” Indian Journal of Community Medicine 34, no. 1 (January 1, 2009): 6. https://doi.org/10.4103/0970-0218.45369.
21 Food and Agricultural Organization. “The Tamil Nadu Public Health Act, 1939,” n.d. https://faolex.fao.org/docs/pdf/ind195028.pdf.
22 Wikipedia contributors. “Great Famine of 1876–1878.” Wikipedia, October 27, 2023. https://en.wikipedia.org/wiki/Great_Famine_of_1876%E2%80%931878#:~:text=The%20excess%20mortality%20in%20the,the%20Madras%20famine%20of%201877.
23 Anshu, Dr., and A Supe. “Evolution of Medical Education in India: The Impact of Colonialism.” J Postgrad Med 62, no. 4 (October 2018). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5105212/.
become a beacon of public health in Asia.24 It soon became clear that this revolutionary disease prevention was possible only because of the large hospitals, public health structure, and legal medical bureaucracy the British had endowed.
This marked a shift in colonizer-colonized relationships, as the first few generations of Indian students and physicians witnessed the tangible benefits of adopting the colonial medical framework. The clear positive impact fueled a shift in allegiance, with Indians becoming increasingly, and often blindly, willing to abandon their indigenous medical practices in favor of the British system.
As the colonizers would have hoped, gratitude and a sense of indebtedness soon began to flow through the Indian community towards British medical leaders. This sentiment, once established, acted as a catalyst for an expanding dependence on the new infrastructure. The cycle of gratitude and dependency became a reinforcing loop, solidifying the perception that Western medicine was non-negotiable for the well-being and advancement of healthcare in India.
These historical instances and the contributions of specific academics underscore the nuanced and dynamic nature of the transformation in British Indian medical education. The interplay between colonial policies, the aspirations of Indian students, and the evolving curriculum shaped a cross-cultural narrative that significantly impacted the final trajectory of healthcare in post-colonial and modern India.
The transformation of Indian healthcare under British colonial rule stands as a complex historical journey with profound implications for the present-day Indian healthcare system. What began as a clash of worldviews and a dismissal of indigenous medical practices evolved into a systematic overhaul that replaced traditional systems with Western medicine. The motivations behind this colonial intervention were driven by a mix of racial biases, a desire for standardization, and genuine concerns for public health. However, the consequences of this transformation were far-reaching, impacting not only the healthcare landscape but also the cultural fabric of Indian society.
The British colonial authorities, blinded by their own prejudices, failed to recognize the value embedded in indigenous medical traditions. The clash of medical ideologies was, at its core, a collision of two vastly different perspectives on healthcare. The rich tapestry of AYUSH was marginalized and
24 Neelakantan, Vivek. “Tuberculosis Control in Postcolonial South India and Southeast Asia: Fractured Sovereignties in International Health, 1948-1960.” Wellcome Open Res 2, no. 4 (2017). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5920540/.
replaced with Westernized medical practices. The dismissal of holistic, family-centric healthcare in favor of large-scale departmentalized hospitals marked a significant shift that sought to address the diverse medical needs of an increasingly complex Indian subcontinent.
The strategies employed by the British to integrate Indian practitioners into the new healthcare system further highlighted the power dynamics at play. Promises of financial gain and professional advancement lured many indigenous doctors into adopting Western practices, leading to a gradual erosion of cultural heritage. The intentional anglicization of medical education and the imposition of standardized Western curricula dismantled the inherent diversity of traditional Indian medicine.
The implications of this transformation extended beyond medical procedures to include a shift in the socio-cultural fabric of healthcare. In light of the new hospitals’ capacity to provide better public health, Western hospitals were strategically depicted as sanctuaries for the suffering, particularly within the increasingly impoverished colonial Indian communities. Exploiting their locations in areas such as rural Burma and famine-stricken Madras, the British forged a narrative that positioned their colonial presence as a beacon of medical hope, the sole symbol of relief and progress in communities suffering from problems brought on by that very colonialism. The narrative of gratitude and dependency that emerged further solidified the perception that Western medicine was indispensable for the well-being and advancement of healthcare in India.
As the British colonial era came to an end, the impact of their interventions persisted. The regionalization of medical systems and the establishment of public health measures such as those in Madras and Burma became cornerstones of post-independence India. The transformation of Indian healthcare under British colonial rule was a calculated arrangement of cultural clashes, power dynamics, and genuine public health concerns. The legacy of this historical journey continues to shape the modern Indian healthcare system, demanding a nuanced understanding of the past from both South Asian historians and medical professionals. A critical examination of the duplicitous motivations behind colonial interventions is essential for fostering a healthcare ethos that is inclusive, culturally sensitive, and responsive to the diverse needs of the South Asian population.
Bibliography
Anshu, Dr., and A Supe. “Evolution of Medical Education in India: The Impact of Colonialism.” J Postgrad Med 62, no. 4 (October 2018). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5105212/.
Chakrabarti, Pratik. “'Neither of Meate nor Drinke, but What the Doctor Alloweth': Medicine amidst War and Commerce in Eighteenth-Century Madras.” Bull Hist Med 80, no. 1 (2006). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2630004/#b4-80.1chakrabarti.
Chatterjee, S. K., Ramdip Ray, and Dilip K. Chakraborty. “Medical College Bengal—A Pioneer over the Eras.” Indian Journal of Surgery 75, no. 5 (August 3, 2012): 385–90. https://doi.org/10.1007/s12262-012-0714-2.
Dhwty. “Ayurvedic Medicine: A Traditional Knowledge of Life from India That Has Endured the Passage of Time.” Ancient Origins Reconstructing the Story of Humanity’s Past, December 7, 2015. https://www.ancient-origins.net/history-ancient-traditions/ayurvedic-medicine-traditional-knowledge-life-india-has-endured-passage-020647#google_vignette.
Elliot, R. H. “The Madras Government and Indigenous Systems of Medicine.” The British Medical Journal 2, no. 3330 (1924): 786–88. http://www.jstor.org/stable/20438202.
Fatima, Zareen. “The History of Western Medicine and Its Rise in Colonial India.” Heritage Times, September 8, 2023. https://www.heritagetimes.in/the-history-of-western-medicine-and-its-rise-in-colonial-india/.
Fecht, Sarah. “What Caused the Great Famine?” State of the Planet, December 15, 2017. https://news.climate.columbia.edu/2017/12/15/causes-great-famine-drought/.
Food and Agriculture Organization. “The Tamil Nadu Public Health Act, 1939,” n.d. https://faolex.fao.org/docs/pdf/ind195028.pdf.
India Office Library and Records. “Report on Native Papers for the Week Ending May 02, 1896.” Report on Native Papers for the Week Ending ..., May 2, 1896. https://jstor.org/stable/saoa.crl.25636124.
Kochhar, R. K. “English Education in India: Hindu Anamnesis versus Muslim Torpor.” Economic and Political Weekly 27, no. 48 (1992): 2609–16. http://www.jstor.org/stable/4399193.
Kumar, Deepak. “Medical Encounters in British India, 1820-1920.” Economic and Political Weekly 32, no. 4 (1997): 166–70. http://www.jstor.org/stable/4405022.
Mahammadh, V. Raj. “Plague Mortality and Control Policies in Colonial South India, 1900–47.” South Asia Research 40, no. 3 (September 1, 2020): 323–43. https://doi.org/10.1177/0262728020944293.
Mamatha, K. “Institutionalisation of Health Care System in Colonial Malabar.” Proceedings of the Indian History Congress 75 (2014): 848–59. http://www.jstor.org/stable/44158469.
Mushtaq, Muhammad Umair. “Public Health in British India: A Brief Account of the History of Medical Services and Disease Prevention in Colonial India.” Indian Journal of Community Medicine 34, no. 1 (January 1, 2009): 6. https://doi.org/10.4103/0970-0218.45369.
National Library of Scotland. Report on the Working of the Government Medical School, Rangoon, 1924. Accessed September 2, 2023. https://digital.nls.uk/indiapapers/browse/archive/74983096.
Neelakantan, Vivek. “Tuberculosis Control in Postcolonial South India and Southeast Asia: Fractured Sovereignties in International Health, 1948-1960.” Wellcome Open Res 2, no. 4 (2017). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5920540/.
Patel, Dinyar. “Viewpoint: How British Let One Million Indians Die in Famine.” BBC News, June 10, 2016. https://www.bbc.com/news/world-asia-india-36339524.
Pati, Biswamoy, and Mark Harrison. The Social History of Health and Medicine in Colonial India, 2008. https://doi.org/10.4324/9780203886984.
Prasad, Rupa, and Rajib Dasgupta. “Missing Midwifery: Relevance for Contemporary Challenges in Maternal Health.” Indian J Community Med 38, no. 1 (March 2013): 9–14. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3612303/.
Rastogi, Sanjeev. “Emanating the Specialty Clinical Practices in Ayurveda: Preliminary Observations from an Arthritis Clinic and Its Implications.” Journal of Ayurveda and Integrative Medicine 12, no. 1 (January 1, 2021): 52–57. https://doi.org/10.1016/j.jaim.2019.09.009.
“Six-Day Clinical Trial Finds Integrative Medicine Program Alters Blood Serum,” September 9, 2016. https://today.ucsd.edu/story/six_day_clinical_trial_finds_integrative_medicine_program_alters_blood_seru.
Wikipedia contributors. “Great Famine of 1876–1878.” Wikipedia, October 27, 2023. https://en.wikipedia.org/wiki/Great_Famine_of_1876%E2%80%931878#:~:text=The%20excess%20mortality%20in%20the,the%20Madras%20famine%20of%201877.
Rucha Padhye
Stereotypically, many people associate Indians with education and high intelligence, which derives from the intense higher education culture in India. Students typically apply for college after 10th grade and attend a two-year college program before going to university. The selection process for universities is extremely rigorous, with board exams being exceptionally difficult and the application pool very competitive.
Currently, about 41.4 million students are enrolled in higher education in India, an increase in enrollment of 21% since 2014, making India the third largest higher education system in the world.1 These figures are impressive: India produces hard-working college graduates who go on to join the workforce and help the country grow socioeconomically. Before independence from the United Kingdom, India was forced into an educational system in which the British used higher studies to control what was taught to Indians and to instill British ideals. Because of this, many age-old Indian practices associated with higher education—such as outdoor classes, more philosophical thinking, and a holistic method of learning—were abandoned and replaced by British methods, with English as the primary language, since the British believed Indian languages had no practical application and were too complex.2 Additionally, only Brahmins and other upper-caste Indians were allowed to attend higher education, which created a socioeconomic divide between the educated and uneducated. The methods of teaching also changed greatly, as traditional Indian higher education was primarily taught outdoors and carried more philosophical and religious implications than British education.
After independence, India was on its own to restore progress and form a new, powerful higher education system to build back centuries of wealth lost to the British. With the imple-
1 Shefali Anand, “India: What You Need to Know About the World’s Largest Workforce,” SHRM, 08/03/23, https://www.shrm.org/topics-tools/news/india-need-to-know-worlds-largest-workforce; Ministry of Education, “All India Survey on Higher Education,” PIB, 09/21/23, https://pib.gov.in/PressReleasePage.aspx?PRID=1894517.
2 Rekha Reddy, “Impact of Colonialism on Indian Education,” Rekha Reddy, 09/02/21, http://www.rekhareddy.com/impact-of-colonialism-on-indian-education/q1.
mentation of many different policies and a strategic college system, India began to have great success: those who attended university were well taught and ready to work and earn a living. Because the rigorous education system instilled hard work and discipline in its students, higher education was able to flourish.
A holistic view of Indian higher education requires attention to chronology. Immediately after independence, in 1948, the Ministry of Education produced the “Report of the University Education Commission,” which delves into all the aspects needed for higher education in India. The Ministry’s focus was to bring proper higher education to the country so that it could prosper and grow. As the report states,
They must enable the country to attain, in as short a time as possible, freedom from want, disease, and ignorance, by the application and development of scientific and technical knowledge. India is rich in natural resources and her people have intelligence and energy and are throbbing with renewed vigour. It is for the universities to create knowledge and train minds who would bring together the two, material resources and human energies. If our living standards are to be raised, a radical change of spirit is essential.3
Since higher education in India’s colonial period was limited to those the British chose, it was imperative that the government articulate its goal for future education. The report also placed a heavy emphasis on interdisciplinary education: “It is wrong, to think that the more intelligent go to the universities and the less intelligent to technical schools. Success in a technical school requires as high an intelligence as success in a purely literary or scientific
3 Ministry of Education. “The Report of the University Education Commission: India: Manager Government For India Press”, 1963. https://www.educationforallinindia.com/1949%20Report%20of%20 the%20University%20Education%20Commission.pdf
course. It may be of a different kind even as pupils are of different kinds, meditative or mechanical, scientific or artistic. Bookishness or the manipulation of concepts is not the only kind of intelligence.” This quote captures the approach Indian higher education took from the 1950s onward: students were encouraged to pursue what they wished and were given the opportunity to choose their path within universities.
Secondly, the Ministry wrote, “There is a great disparity between what our country requires and what our education offers. We produce a large number of arts and law graduates, but not enough teachers, administrators, doctors, engineers, technicians, scientific researchers, and the like. On account of their expensive character, we have neglected the scientific and technical Courses.” After independence, there was a greater emphasis on STEM-related education to balance out the overproduction of arts and law graduates. This is a key point in the chronology: in current times, most Indians choose to pursue a STEM-related education, and stereotypically, India is known for its doctors and engineers rather than its lawyers and artists. Indian education has come far, and this doctrine was successful.
The Ministry also discusses the importance of good instructors, arguing that emphasis should be placed on their qualifications and engaging methods of teaching rather than their ability to lecture. The report also required written exercises, such as practice problems, to ensure that students had a chance to practice every topic before the final examinations, suggesting that these be assigned once a week or every fortnight. Interestingly, it downplays the usefulness of textbooks, recommending that they be used as supplemental knowledge whose implications should be discussed in class. Attendance was also mandatory for 75% of classes. The report then emphasizes the importance of tutorial hours, or after-school help where students can go to professors with questions. All of these methods combined were expected to deliver a quality education, and such engaging teaching likely increased enrollment and graduation rates, as students are more likely to be interested in the material being taught.
Lastly, the report discussed equity across the states. The Ministry wanted to expand education to all scheduled castes and to women. Newly independent India was in the process of outlawing caste discrimination (untouchability was formally abolished in the constitution), so it was important to address this in every new policy. However, no concrete method of implementation was stated; the report simply emphasized the importance of an educated population.
To gain a firsthand perspective on education in this period, I interviewed two people who attended university shortly after the 1950s. Their accounts show how far the policies described above were actually implemented and what their implications were. Each interview followed the same set of questions, which I asked of everyone I interviewed:
• What college did you go to for 11th-12th? Why did you choose that school? What did you study?
• What university did you go to and why? What did you study?
• What did you think of your overall education experience? Did you think it was too rigorous?
• How much studying did you do every day?
• What would you want to change? Classes/campus life?
• What do you like about your experience?
• America vs India, which college education system do you think is better or what do you think about American education in general?
• Is there anything you would like to add?
The first interview I conducted was with Mangal Joshi, who graduated from GS Science, Arts, and Commerce College in the early 1960s. She mentioned that there was no separate college-then-university system at the time; students went straight into university after 12th grade. She studied PCB, which is physics, chemistry, and biology. When asked about study time, she explained that she usually studied only when exams approached, which happened twice a semester. Her schedule was rigorous when she did: she slept from 8 p.m. to midnight and then studied through the night. If she could change something about her college experience, she would have wanted more choice in the classes she took; she was required to take many zoology classes and wished she could have taken more chemistry courses instead. She said there was not much of a campus life. She started with 13 close friends, but over the course of her time there, 10 of them either flunked out or dropped out because the program was so difficult; only three of them, including Mangal, graduated. When asked to compare the American higher education system based on what she knows of it, she said she thinks the Indian system is more effective.
The next interview I conducted was with Mukund Joshi, who graduated from VMV College of Science and Arts in the early
1960s. Again, he explained that there wasn’t really a college for 11th-12th; there was something called pre-college, which was essentially high school. For his undergraduate degree at VMV, he studied PCM: physics, chemistry, and math. He moved around a lot when he was in school, and VMV was near where he had just moved, which is why he chose to go there. He didn’t have much work outside of school except around exams, of which there were a few per semester; those required a great deal of studying and took up a lot of time. He didn’t have much of an opinion on what should have been changed; he simply did what the requirements asked. He didn’t live on campus, but he said he had a good number of friends in college. When asked to compare the American and Indian higher education systems, he said he doesn’t know enough about the American system to judge which is better.
Based on the interviews, it is clear that a PCM/PCB track system had been introduced, requiring students to choose between physics-chemistry-math and physics-chemistry-biology in 11th grade. The scores they received in 12th grade would determine which universities they could get into. It is also notable that there was little homework, but two major exams per semester carried great weight. “The Report of the University Education Commission” emphasized giving students a chance to practice before the final exams, which seems consistent with what the interviewees said, as they had small assignments leading up to the final examinations.4 The report also calls for increasing the prevalence of STEM education in higher education institutions. This change was clearly made, as both Mangal and Mukund attended colleges and universities focused on STEM and even had specific tracks to follow. Around ten years after the report, its main goals began to manifest in the higher education institutions of India.
Next, a National Policy on Education was written in 1968, building on the Ministry’s earlier report. It discusses primary and secondary education before turning to higher education. It sets standards for establishing a university: one should be founded only if there is enough funding to make it a good, successful institution.5 This is a key reason why education at established Indian universities is so strong: hardly any universities are founded on a weak footing. On the other hand, this may also be why universities are scarcer in scheduled-caste areas, which often lack the funding to start them.
The next portion of the Policy is vital to the current structure of Indian education. It introduces the 10+2+3 structure: 10 years of school, two years of college, and three years of university. In India, the two years of college do not confer a degree; the three years of university afterward are what earn the bachelor’s. This structure is still widely followed today, which matters because it created a standardized path across the nation, making it easier to move from state to state if needed.
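As a quick arithmetic aside (my own comparison, not drawn from the policy documents), the 10+2+3 path works out to

\[
10 + 2 + 3 = 15 \ \text{years to a bachelor's degree,}
\]

one year shorter than the typical American path of 12 years of school plus a four-year bachelor’s (16 years), which is relevant to the fixed three-year degree discussed in the interviews below.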
To gain a firsthand perspective on this era, I interviewed two people who attended university in the 1990s, asking the same questions listed above.
4 Ministry of Education, “The Report of the University Education Commission,” 1963.
5 Ministry of Education, “National Policy on Education,” 1968, https://www.education.gov.in/sites/upload_files/mhrd/files/document-reports/NPE-1968.pdf
The first person I interviewed was Sandeep Padhye. Sandeep attended the Institute of Chemical Technology (previously UDCT) and graduated in the early 1990s. For his 11th-12th he went to Shivaji Science College, Amravati, where he focused on PCM and took an elective in electronics; he mentioned that at the time, the only electives offered at science schools were electronics, computer science, or biology. Sandeep chose UDCT because it was one of the top chemical engineering colleges in the state. He had also gotten into one of the best colleges for computer engineering, BITS Pilani (Birla Institute of Technology & Science), but since it was out of state, the tuition was far too expensive and he could not take out a loan, so he decided on UDCT. His experience in college was good: the teachers were extremely qualified, he received a strong education, he had plenty of support from teachers when needed, and he had a large friend group. However, he found the atmosphere very competitive. Since it was a very good school, students had access to many courses, but they were all focused on chemical engineering; he wished more arts and sciences had been offered that weren’t related to engineering. He had only semester exams, so he did not have much work during the semester, but the finals were very stressful and exhausting. He would change the campus life experience and incorporate more sports and extracurricular activities, which were almost nonexistent. Something he emphasized was having classes and clubs that actually teach skills. He highlighted how Indian education was focused
on learning the content, but there was no real application of it until people entered jobs. Graduates were very knowledgeable in their fields, but because of their lack of hands-on experience, the skills required to actually work in the field had to be developed later on. When asked to compare the American and Indian higher education systems, Sandeep argued that American education lets students gain skills in college, whether public speaking, communication, or hands-on skills, so they are prepared when entering the workforce. In that respect, he thinks American education is better, because India lacks that preparedness.
The next interview I conducted was with Aparna Padhye. She went to VJTI for her degree in computer technology; for college she attended Biyani College, where she also completed PCM with an elective in electronics. She chose VJTI because it was the best government school for engineering. The tuition was cheap, and women’s tuition was discounted at engineering schools to encourage women to attend college. Up until 12th grade her schooling was free, as girls received free tuition at public schools through 12th grade. She said that if she had not gotten into VJTI she would have gone to art school; for most people there, art school is treated as a last resort or safety school. Her college experience was good, but she said studying came in “pockets”: there was a lot of studying around exams, which came once in a while, and otherwise roughly one to two hours of homework daily. If she could change something, she wished there had been art classes at her school. She described her final year as “burned out,” and being able to take classes besides computer science would have helped balance her courses. There was a good amount of support from the teachers when needed. When asked to compare Indian and American education, she described how each system could benefit from incorporating aspects of the other. In India, she said, students need to learn how to apply the skills they learn in college much earlier; she only applied classroom knowledge for the first time when she entered the workforce, which made it much harder to adjust, and she thinks America does a good job of preparing students for that. She also thinks India does a good job of instilling strong study habits from a young age and of requiring students to take certain classes that make them very good at a topic. American students, she explained, sometimes have too many choices, which can make it hard to decide what to do with their education; Indian education eliminates that confusion and provides a clear-cut path that could guarantee a job for the student.
These interviews are interesting to compare with the National Policy on Education of 1968, because many of the goals highlighted in the NPE show up in them. First, the NPE emphasized strong, well-functioning institutions, and the two universities Sandeep and Aparna attended, UDCT and VJTI, seem to exhibit what a good university looked like to the Ministry of Education. Both described strong support from teachers, which the NPE emphasizes, and both universities appear to have had the kinds of resources the NPE outlined. This is also the period in which the 10+2+3 structure was implemented: both Aparna and Sandeep attended college for two years before three years of university. This contrasts with Mangal and Mukund’s experience of going straight to university, which makes sense given that the NPE came out in 1968 and took a few years to implement, after their time. Lastly, Aparna’s account of women’s tuition being subsidized in higher education to encourage their participation aligns with the NPE’s goal of promoting equity and access to higher education for everyone in India.
The most recent National Policy on Education was written in 2020. It dives into the current problems that need to be addressed: “(b) less emphasis on the development of cognitive skills and learning outcomes; (c) a rigid separation of disciplines, with early specialization and streaming of students into narrow areas of study; (d) limited access particularly in socio-economically disadvantaged areas, with few HEIs that teach in local languages.”6 Judged against the previous national policies, there has evidently been no drastic improvement in socioeconomic disparities, which have needed addressing since the first University Education Commission report. The first two points also call into question the standard of education established in India since that report. The lack of emphasis on developing cognitive skills is striking because Indian education is built on memorization rather than application, which has started to backfire in weaker creative and critical thinking skills. India may therefore have to shift how it teaches specific subjects to ensure content is understood and not just memorized. Next, the early specialization and streaming of students into narrow
6 Ministry of Education, “National Policy on Education,” 2020, https://www.education.gov.in/sites/upload_files/mhrd/files/NEP_Final_English_0.pdf
areas is interesting because it follows from the current 10+2+3 structure, under which students must choose a specialized area of study when they are only in 10th grade.
The policy also states that it wants colleges to take a more interdisciplinary approach rather than focusing on a single discipline. This could transform Indian higher education, since so many of its successful engineers and doctors have been trained in those specialized fields for years. It raises the question of whether specialized education is more valuable than interdisciplinary education, and whether offering a larger variety of classes at an institution would also increase students’ overall enthusiasm by giving them different avenues to explore their passions.
The last interview I conducted was with Gauri Parnaik, who graduated from Bedekar University in 2022 with a degree in psychology. She chose Bedekar because it is a good school and close to home. Her university years were stressful: there was a lot of work, and the last year was especially difficult, with many students dropping out or failing. However, she said the teacher and staff support was amazing, and because of it she was able to deepen her passion for psychology. The workload was at least an hour of study per day. Some teachers were unsupportive and hard on students, which Gauri wished would change; she also wished the pressure to do well would decrease, as professors placed many expectations on students that were hard to keep up with. When asked about American versus Indian higher education, she found the American system more effective. India, she explained, is very focused on years, so she needed a full three years to complete her bachelor’s; in America it is more about credits, so someone with the right number of credits for their major can graduate early, which she considered a much better system because it gives students freedom in what they wish to study. Gauri also thinks India should place more emphasis on extracurricular activities, since there is hardly any. Lastly, she appreciates that American students can take whatever classes they want in college rather than only classes specific to their major, whereas in India students can take only major-related courses.
Gauri’s interview gives great insight into what the National Policy on Education 2020 is describing; the problems it highlights had already surfaced in the interviews. The first issue, “less emphasis on the development of cognitive skills and learning outcomes,” appears in both Aparna and Sandeep’s interviews, as they explained how they would like Indian universities to teach real skills in class before students go out into the workforce. The NPE calls attention to this same lack of emphasis on skill development; since Aparna and Sandeep’s time in university, the problem has only grown, and millions of students are left with a great deal of information but hardly any skills. Additionally, Gauri emphasizes the choice American students have over their classes and would like to see that choice in India as well. This makes sense: Indian students are forced to choose their field before college and then, based on that choice, attend a university focused on the same subjects. This can become redundant, and Gauri attests to the value of a more interdisciplinary approach rather than focusing on one subject for three years. The NPE frames this problem as the narrowing of pathways, which can be changed if institutions change the courses they offer. Lastly, Gauri talks about the pressure she received from professors at university, a potential source of demotivation when expectations placed on students cannot be met, eventually leading to burnout.
In Govardhan Wankhede’s “Higher Education and the Scheduled Castes in Maharashtra,” he discusses the ostracization lower-caste students still face today, as they are not treated equally by teachers or their peers. Interestingly, of the 1,312 scheduled-caste students interviewed in Wankhede’s survey, only 414 were going to continue into higher education. One male student described his experience: “I am pursuing a BE in electronic engineering from a private unaided college. I feel lost, find no support and guidance from anyone. I find the course very expensive and the subjects are hard to understand. The teachers do not make any extra effort to teach students like me.”7 There are several more quotes like this in the study that exemplify the experience of a scheduled-caste member in India’s higher education system. Even though they are offered “reserved” spots in the colleges, they are not treated equally or given the same enthusiasm or opportunities when it comes to learning. This is why an implicit caste system persists in India: the opportunities young people have to get an education and break out of their cycle of oppression
7 Govardhan Wankhede, “Higher Education and the Scheduled Castes in Maharashtra,” Economic and Political Weekly 51, no. 6 (2016): 83–86, JSTOR, http://www.jstor.org/stable/44004358
under their caste are undermined by the higher education system, which may end up further disadvantaging them by leaving them in debt and without a good education.
Additionally, financing trends are leaving people in lower-income areas with less access to higher education. “Trends in Growth and Financing of Higher Education in India” notes that “the disbursement of funds by the UGC is uneven and the bulk of it goes to the central universities and their affiliated colleges and to a few deemed to be universities. A vast majority of universities and other degree-awarding institutions are not even eligible to receive any kind of grants from the UGC. In all, only 158 out of 348 universities are eligible to receive grants from the UGC.” This is key because the money given to higher education by the government is being distributed unevenly by the UGC toward central universities. The article also discusses the types of universities receiving most of this funding: “... understandably goes to Indian Institutes of Technology (IITs). The Indian Institutes of Management (IIMs), Indian Institute of Science (IISc), National Institutes of Technology (NITs), and All India Council for Technical Education (AICTE).”8 Funding shapes what succeeds in India. Because the UGC focuses on these STEM institutions and more “prestigious” schools, lower-income areas are not getting the funding they need, which makes it difficult for them to host higher education institutions. This interacts with the NPE of 1968, which held that institutions should be established only if they have the facilities to be good universities, creating a barrier between richer and poorer areas, since poorer areas cannot afford to build a “good facility.”
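To put the UGC eligibility figure in the quotation above into proportion (my own arithmetic, not a figure stated in the source):

\[
\frac{158}{348} \approx 45\% \ \text{eligible}, \qquad \frac{348 - 158}{348} = \frac{190}{348} \approx 55\% \ \text{ineligible for UGC grants.}
\]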
Overall, Indian higher education began by producing mostly law and arts graduates and has since swung in the opposite direction, with most institutions today focused on STEM. It has evolved through the many policies India has implemented, specifically the University Education Commission report of 1948, the National Policy on Education of 1968, and the National Policy on Education of 2020. The way history repeats itself is what makes this research significant. India has now gone too far toward a single-subject focus, which is costing students critical skills necessary for work and depriving them of the interdisciplinary education that, based on the interviews, they wish for. Going forward, it is important for India to find a middle point between STEM and arts education, as well as to emphasize equity and accessibility, which have made little progress since the very first report on the universities. Once these underlying problems start to be addressed, India has the potential to create one of the greatest university systems in the world.
Agrawal, Pawan. “Indian Higher Education: Envisioning the Future.” India: SAGE Publications, 2009. https://sk.sagepub.com/books/indian-higher-education
Anand, Shefali. “India: What You Need to Know About the World’s Largest Workforce.” SHRM, August 3, 2023. https://www.shrm.org/topics-tools/news/india-need-toknow-worlds-largest-workforce
Avanse. “Top 5 Emerging Trends in Higher Education in India.” Avanse, 2021. https://www.avanse.com/blog/top-5emerging-trends-in-higher-education-in-india
Ministry of Education. “All India Survey on Higher Education.” PIB, September 21, 2023. https://pib.gov.in/PressReleasePage.aspx?PRID=1894517
Ministry of Education. “National Policy on Education.” 1968. https://www.education.gov.in/sites/upload_files/mhrd/files/document-reports/NPE-1968.pdf
Ministry of Education. “National Policy on Education.” 2020. https://www.education.gov.in/sites/upload_files/mhrd/files/NEP_Final_English_0.pdf
Ministry of Education. “The Report of the University Education Commission.” India: Manager, Government of India Press, 1963. https://www.educationforallinindia.com/1949%20Report%20of%20the%20University%20Education%20Commission.pdf
Nagarajan, Rema. “Only 10% of Students Have Access to Higher Education in Country.” Times of India, January 15, 2014. https://timesofindia.indiatimes.com/education/news/only-10-of-students-have-access-to-higher-education-in-country/articleshow/28420175.cms
Prakash, Ved. “Trends in Growth and Financing of Higher Education in India.” Economic and Political Weekly 42, no. 31 (2007): 3249–58. JSTOR, http://www.jstor.org/stable/4419875.
8 Prakash, Ved. “Trends in Growth and Financing of Higher Education in India.” Economic and Political Weekly 42, no. 31 (2007): 3249–58. JSTOR, http://www.jstor.org/stable/4419875.
Reddy, Rekha. “Impact of Colonialism on Indian Education.” rekhareddy, September 2, 2021. https://www.rekhareddy.com/impact-of-colonialism-on-indian-education/
Wankhede, Govardhan. “Higher Education and the Scheduled Castes in Maharashtra.” Economic and Political Weekly 51, no. 6 (2016): 83–86. JSTOR, http://www.jstor.org/stable/44004358.
Sudiksha Battineni
India is one of the world’s fastest-growing countries. With its enormous population of 1.433 billion (as of November 15, 2023) and a rising growth rate, this South Asian subcontinent is predicted to surpass every other country in population.1 For a country that is still developing and whose youth share of the population keeps rising, it is important to pay attention to how those young people are distributed throughout the country.
With an increasing population, the country also needs more energy capacity. India is fairly new to the renewable energy sector, and little is known about how the sector’s growth affects the sociology and human geography of the country. It is therefore important to investigate the relationship between population demographics and renewable energy growth, to fill in knowledge gaps at the level of citizens, communities, and cities.
India’s renewable energy sector has grown exponentially over the past two decades. With this growth come many factors that the Indian government and its citizens should consider, including jobs, new forms of renewable energy, renewable energy potential, competing industries, and India’s overall energy capacity. There are also factors that can prevent or limit the sector’s growth in this diverse nation, factors that may at first seem “unrelated.” India prides itself on both preserving its history and advancing toward Westernization, which means it contains a vast mix of religions, tribal groups, and ethnic cultures. Incorporating culturally as well as ecologically important details allows more accurate predictions to be made about companies, jobs, and population growth.
The starting point of understanding the renewable energy sector in India is to first understand energy consumption and
1 “India Population (2023).” n.d. Worldometer. Accessed November 15, 2023. https://www.worldometers.info/world-population/india-population/.
where the power currently being used is coming from. India has a very established energy consumption profile: the country is the third largest energy consumer in the world, after China and the United States.2 In renewable energy specifically, India is among the leading countries in installed capacity. “India stands 4th globally in Renewable Energy Installed Capacity (including Large Hydro), 4th in Wind Power capacity & 4th in Solar Power capacity (as per REN21 Renewables 2022 Global Status Report).”3 The Indian government has also set a goal of 500 GW of non-fossil-fuel-based capacity by 2030, which makes this initiative the world’s biggest renewable energy expansion plan.
Ever since India gained independence from the British in 1947, the country has worked to rapidly increase its overall power capacity, aiming to build socioeconomic strength through energy efficiency and facilities. On December 31, 1947, India had 1,362 megawatts (MW) of installed capacity.4 By 2014, the country had amassed 248.5 GW of grid-connected capacity, the majority of it from coal. Although the nation has now set the plan to reach 500 GW of non-fossil-fuel-based capacity by 2030, it has implemented similar (if less ambitious) plans in the past: multiple five-year plans (FYPs) each targeted a certain amount of new capacity over five years. The last FYP, India’s 12th, covered the years 2012 to 2017 and aimed to install 118.53 GW. Overall, India has enormous potential for
2 “Electricity consumption by country 2021.” 2023. Statista. https://www.statista.com/statistics/267081/electricity-consumption-in-selected-countries-worldwide/.
3 “Renewable Energy in India: Investment Opportunities in the Pow...” n.d. Invest India. Accessed October 20, 2023. https://www.investindia.gov.in/sector/renewable-energy.
4 The United States had around 800,400 MW of electrical energy in 1947. Source: Waring, John A. n.d. “Historical Statistics of the United States, Colonial Times to 1957.” Accessed December 4, 2023. https://www2.census.gov/library/publications/1960/compendia/hist_stats_colonial-1957/hist_stats_colonial-1957-chS.pdf.
renewable energy. As of 2023, the country has a capacity of 416 GW of grid-connected energy.5 Among renewable sources, the potential is highest for solar energy, followed by wind energy, small hydro power, biomass, and bagasse cogeneration, which uses the coarse material left over after extracting juice from sugar cane.6 Figure 1 shows the renewable energy potential for each type of sustainable energy source.
Figure 1: “Renewable Energy Potential in India,” graph from Tushar Sud et al., Case Study: India’s Accelerated Depreciation Policy for Wind Energy (International Institute for Sustainable Development, 2015). Wind data are from the National Institute of Wind Energy (2014); small hydro data are from the Ministry of New and Renewable Energy (2014c); biomass and bagasse data are from the Ministry of New and Renewable Energy (2014a).
There is also a newer source of renewable energy that is not commonly talked about: offshore wind energy, a variation of wind energy. In India, offshore wind energy is predicted to dominate the renewable energy sector. As part of the wind energy program, sites suitable for building offshore wind facilities were identified in Gujarat and Tamil Nadu. These two states were chosen specifically because of their proximity to the coastline and accessibility to bodies of water. To identify more specific locations, light detection and ranging (LiDAR) resources and bathymetry (the study of ocean beds and their depths) were used.7 With these two resources, eight sites were identified along the coast of Gujarat at the Gulf of Khambhat as potential locations for these farms, along with eight sites along the coast of Tamil Nadu at the Gulf of Mannar.8 The two states were selected to build offshore wind farms due to their accessibility to bodies of water and their populations.
India’s peninsula-shaped landform will help advance the offshore wind sector. “India’s 7,600 kilometer-wide coastline has the potential to generate approximately 140 Gigawatt (GW) of electricity from offshore wind.”9 The estimated offshore wind capacity for Tamil Nadu and Gujarat combined is 70 GW, enough to power 50 million houses in the two states.
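As a rough sanity check on the figures above (my own arithmetic, not a calculation from the cited sources), 70 GW spread across 50 million houses implies

\[
\frac{70 \times 10^{9} \ \text{W}}{50 \times 10^{6} \ \text{houses}} = 1.4 \ \text{kW of installed capacity per house,}
\]

though the energy actually delivered would depend on the capacity factor the wind farms achieve.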
5 “Renewables, hydro cross 40 per cent of India’s installed power generation capacity in FY23: Report.” 2023. ET EnergyWorld. https://energy.economictimes.indiatimes.com/news/renewable/renewables-hydro-cross-40-per-cent-of-indias-installed-power-generation-capacity-in-fy23-report/100151725.
6 “Bagasse Power.” 2011. Biomass Magazine. https://biomassmagazine.com/articles/bagasse-power-5247.
7 “What is bathymetry?” n.d. National Ocean Service. Accessed October 18, 2023. https://oceanservice.noaa.gov/facts/bathymetry.html.
8 Sengupta, Amit. 2023. “Offshore wind energy: India set to harness coastal breezes.” YouTube, 0:26–0:31. https://www.youtube.com/watch?v=rrIRZ2tVxOY.
Having surveyed the main avenues through which renewable energy can grow in India, we can make informed assumptions about which specific sectors will grow most. According to a report by the Council on Energy, Environment and Water’s Centre for Energy Finance (CEEW-CEF), another 83 GW of renewable capacity was set to be implemented as of February 2023. In addition, 17 GW of capacity was added in total in FY23, and 92% of it came from renewable sources. Although the chart in Figure 1 does not include solar energy, 84% of the 17 GW added came from installed solar power.10 Renewable energy now accounts for around 11.8% of all of India’s power.
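For concreteness, the percentages above imply roughly the following shares of the 17 GW added in FY23 (my own rounding; the cited report may state slightly different values):

\[
0.92 \times 17 \ \text{GW} \approx 15.6 \ \text{GW from renewables}, \qquad 0.84 \times 17 \ \text{GW} \approx 14.3 \ \text{GW from solar.}
\]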
On the topic of growth, India has seen enormous expansion in its renewable energy sector over the past two decades. Previous research shows that wind energy capacity increased almost exponentially after 2002 compared with the cumulative gigawatts installed before 2002. Around 2015, India’s estimated renewable energy potentials were 749 GW from solar, 103 GW from wind, 25 GW from bio-energy, and 20 GW from small hydropower.11 Estimates also include 350 gigawatts (GW) from the offshore wind sector, which will be largest in Tamil Nadu and Gujarat; this is a sector the Indian government is trying to expand and build larger capacity in for the future. In terms of hydropower, the country has been deploying “small hydro projects.” These are being implemented all over the nation, but especially in
9 “India Offshore Wind Energy.” 2023. International Trade Administration. https://www.trade.gov/market-intelligence/india-offshore-wind-energy.
10 “Renewables, hydro cross 40 per cent of India’s installed power generation capacity in FY23: Report.” 2023. ET EnergyWorld. https://energy.economictimes.indiatimes.com/news/renewable/renewables-hydro-cross-40-per-cent-of-indias-installed-power-generation-capacity-in-fy23-report/100151725.
11 Kar, Sanjay Kumar. “Renewable Energy Market Developments: A Study of India.” Renewable Energy Law and Policy Review 6, no. 4 (2015): 238–47. http://www.jstor.org/stable/26256467.
Karnataka. Cumulatively, there are over 1,000 of these projects all over India. In terms of solar power, there are four states leading that energy sector: Rajasthan, Jammu and Kashmir, Maharashtra, and Madhya Pradesh.
Figure 2. Mukhopadhyay, Subrata. n.d. “Wind Energy map of India | Download Scientific Diagram.” ResearchGate. Accessed December 4, 2023.
Although these projects are being implemented in many different states, how is the government subsidizing the industries and corporations that choose to expand each renewable energy sector? The government of India planned to build 60 solar cities and a hundred thousand solar pumps, and it also wanted to deploy solar energy along the India-Pakistan border and along canals. For subsidies, the government offers tax breaks and relief for renewable project developers and component manufacturers. Solar subsidies flow through three channels: customers, channel partners, and DISCOMs (distribution companies).12 Customer subsidies are given only on residential homes, as a percentage that depends on the system’s capacity. Channel partners work with customers and the government to speed up the process of acquiring a subsidy for the customer; in other words, channel partners are the real estate agents of the solar energy world.
12 Chandra, Nishi. 2023. “Rooftop Solar Subsidy in India 2023: How Much, How & Where to Get.” Loom Solar. https://www.loomsolar.com/blogs/collections/solar-panel-subsidy-in-india.
Figure 3. “Hydroelectric power plants in India.” n.d. Optimize IAS. Accessed December 4, 2023.
They bridge the “buyer” (customer) and the “seller” (government) and govern the process. The last subsidy is for distribution companies of solar panels. These companies increase outreach and send solar panels to commercial businesses.
Wind energy producers receive incentives such as tax holidays, and they do not have to pay taxes on certain materials used to construct wind-operated generators. Small hydro power projects also have incentives: along with preferential tariff treatment, they get a 10-year tax holiday to attract investors to the sector.
Renewable energy does not only strengthen the economy through subsidy-driven growth; it also increases employment. More than anything, growth in the solar and wind energy sectors is opening new jobs and opportunities for Indian citizens. In financial year 2022 (FY22), 99% of the renewable energy sector’s growth came from solar.13 Of the new solar capacity, 42% came from rooftop solar panels and the remaining 57%
13 Council on Energy, Environment and Water, Natural Resources Defense Council, and Skill Council for Green Jobs. n.d. “India’s Expanding Clean Energy Workforce.” NRDC India. Accessed October 11, 2023. https://www.nrdcindia.org/pdf/NRDC-Jobs%20report-Feb-2023.pdf.
came from utility-scale solar panels. Together, these two segments created 52,700 jobs. By the end of FY22 there were 164,000 workers in the solar and wind energy sectors, a 47% increase over FY21. “India’s renewable energy sector continues to grow steadily and create employment opportunities. Our earlier studies have showcased the potential to employ 1 million people in the sector as India marches towards the 2030 ambitions.”14
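As a quick consistency check on these employment figures (my own arithmetic, not stated in the cited report), a 47% increase to 164,000 workers implies a FY21 workforce of roughly

\[
\frac{164{,}000}{1.47} \approx 111{,}600, \qquad 164{,}000 - 111{,}600 \approx 52{,}400 \ \text{added workers,}
\]

which lines up closely with the 52,700 new jobs cited above.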
Another blog post from the same website, based on projections from the same “India’s Expanding Clean Energy Workforce” report, estimated that over 3.4 million jobs will be created by 2030. The Indian government has set a goal that by 2030 the country should have 500 GW of non-fossil generation capacity.15 To support the growth of renewable energy jobs, the government has been training citizens under Suryamitra, a national-level solar energy training program: “Over 78,000 workers have been trained till date by the government.”16 If this pattern continues, specialized training programs like this may be just what the country needs to further encourage the growth of the renewable energy sector among its citizens.
There are clear benefits, both economic and ecological, to an expanding renewable energy sector, but what are its limits? Religion, land, and culture. Although predominantly Hindu, India encompasses a wide spread of faiths and beliefs.17 The country was the birthplace of four religions: Hinduism, Buddhism, Jainism, and Sikhism.18 Islam and Christianity are also two major religions in India. These religions are practiced throughout the country, some more concentrated in particular areas than others. For example, Sikhism is concentrated in Punjab more than in any other state, Jainism is commonly practiced in Gujarat, and Christianity
14 Quote by Mr. Neeraj Kuldeep, Senior Programme Lead, CEEW, in Kwatra, Sameer. 2023. “Renewable Energy: A Driver for Job Growth.” NRDC. https://www.nrdc.org/bio/sameer-kwatra/renewable-energy-driver-job-growth.
15 Golchha, Akanksha. 2022. “India Could Create Millions of Jobs Through Renewable Energy.” NRDC. https://www.nrdc.org/bio/charlotte-steiner/india-could-create-millions-jobs-through-renewable-energy.
16 Golchha, Akanksha. 2022. “India Could Create Millions of Jobs Through Renewable Energy.” NRDC. https://www.nrdc.org/bio/charlotte-steiner/india-could-create-millions-jobs-through-renewable-energy. Quote from Dr. Praveen Saxena, CEO of the Skill Council for Green Jobs.
17 “Religion in India: Tolerance and Segregation.” 2021. Pew Research Center. https://www.pewresearch.org/religion/2021/06/29/religion-in-india-tolerance-and-segregation/.
18 “Religion | The Story of India - Photo Gallery.” n.d. PBS. Accessed April 2, 2024. https://www.pbs.org/thestoryofindia/gallery/photos/14.html.
is concentrated in Tamil Nadu and Kerala. There is also the caste system, a social hierarchy originally tied to Hinduism that has become widespread across all religions. Within the country there are religious and caste-based tensions, and high-ranking elected officials, including the prime minister, are mostly Hindu. When it comes to which groups benefit from renewable energy, whether in jobs or in power supply, there is therefore a source of bias: decisions about resources and locations can differ heavily from one religious or caste community to another. An area with a large population of Hindu Brahmins may be more likely to get resources and support from the government than an area with a large Muslim population. This limits the states and regions in which renewable energy can be implemented.
Rural education programs, discussed further below, are valuable in part because they lay the groundwork for reducing rural-to-urban migration. The gap between rural and urban India began with “modernization.”
Modernization in India is said to have begun after British rule. Before British colonization, Indian society was organized around caste defined at birth, self-sufficient governance, and agriculture with an emphasis on craftsmanship.19 After India gained independence, the country as a whole took its first steps toward modernization, a term that in India often covered the shift toward Westernized beliefs. Western values and ways of thinking became more prominent after British rule in four specific areas.20
The first shift was in production, from craftsmanship to machines. Rather than relying on human workers in assembly lines and sweatshops, machine use became more common, and the need for large numbers of physical laborers and skilled artisans decreased. The second shift was in the social system. The caste system heavily followed in India had been built on birthright; Western thought, by contrast, placed heavy emphasis on the social standing of an individual
19 “British Colonial Rule: India Before and After Colonization with Examples.” n.d. Toppr. Accessed November 1, 2023. https://www.toppr.com/guides/economics/indian-economy-on-the-eve-of-independence/british-colonial-rule/.
20 Sheth, N. R. “Modernization and the Urban-Rural Gap in India: An Analysis.” Sociological Bulletin 18, no. 1 (1969): 16–34. http://www.jstor.org/stable/23618701.
being dependent on that person’s achievements. The third shift was in politics: India had been largely self-governing at the local level, while Westernization brought democracy as the system of government. The fourth and final shift was in the scientific outlook of the citizens. Indians who had simply gone about their daily lives shifted toward a mindset of innovation and invention; science was adopted and advanced as people across the subcontinent began competing with Western nations for recognition in the field.
Although all these societal shifts happened after independence, much of the Indian way of thinking remains the same, especially in the social system. Caste, sadly, is still a deeply valued part of Indian society. It is seen in the treatment of the “untouchables” by their wealthier counterparts, and it is observed in marriage proposals, where certain families look for brides or grooms for their children from within the same caste. This is what modernization looks like in India: even as the country pushes to progress, ideologies like the caste system have withstood the pressures of Western society and resisted modernization.
How does this connect to urban and rural areas? Urban areas accept modernization much more readily than rural communities: modernization happens quickly in cities because education is more widespread and urban communities are more accepting of Western values. This widens the already existing gap between the two geographically and demographically separated areas.
How are rural and urban communities distinguished? The question has a complicated answer because of the variety of communities. The first definition of urban areas rests on urbanism,21 “according to which any local community which shows predominance of non-agricultural economy and acceptance of certain standards and value patterns of social life is regarded as urban.” Another criterion comes from the Indian government census, under which communities of 5,000 or more are regarded as urban. Yet another definition counts as urban any area with a population density of over 400 people per square kilometer.
These gaps occur because rural areas face far more problems than urban areas in terms of accessibility, including issues with medical care, sanitation, governance, and income. Another large issue is the education system: in most villages around the country, schooling typically ends after grade 10. Most students in these villages end up moving to the city or going to a “hostel” (the term for a boarding school) in another state to complete their intermediate education (the equivalent of 11th and 12th grade in America) and to continue their studies beyond it. Chandra B. P. Singh believes there is a baseline list of what the poor in these villages need: “(a) rapid and sustainable agricultural growth, (b) employment intensive non-agricultural growth, (c) relative stability of food prices, (d) a sound system of food security, (e) good health, and (f) quality education.”22
Poverty in India is measured by comparing the normative minimum calorie intake of the country to how much the citizens in the area are actually consuming.23 This definition of poverty goes hand in hand with Singh’s belief that poor villages need food security and a stable income.
The Indian government has made many attempts to adopt programs or pass legislation to change how rural India functions, but many of these programs have failed to deliver the economic progress they promised, for a variety of reasons. Singh describes how rural people’s abilities are often assessed by how well they can perform “our” (urban) tasks rather than their own: “This misfit between the nature of the cognitive resources and strategies available in rural society and the nature of interventions induced to the rural people contributed to the failure of numerous rural development projects.”24
Another article blamed rural poverty on the composition of the nation’s agrarian structures.25 The relationship between the two opens up ways to think about the origins of poverty and what keeps it constant. Shergill blames existing agrarian production relations for why so many laborers in rural India remain so poor. These populations center their income on agriculture, and the market is what gives them their profit; produce is sold for less than its value, and the farmers who worked hard to grow the crops are paid less than they should be. A vast majority of the time, that is how
21 Wirth, Louis. 1938. “Urbanism as a Way of Life.” American Journal of Sociology 44, no. 1: 1–24. Cited within the source in footnote 20.
22 Singh, Chandra B. P. “Rural Psychology in India: Issues and Approaches.” Indian Journal of Industrial Relations 37, no. 3 (2002): 404–19. http://www.jstor.org/stable/27767798.
23 Richard Palmer-Jones and Kunal Sen. “On India’s Poverty Puzzles and Statistics of Poverty.” Economic and Political Weekly 36, no. 3 (2001): 211–17. http://www.jstor.org/stable/4410196.
24 Singh, Chandra B. P. “Rural Psychology in India: Issues and Approaches.” Indian Journal of Industrial Relations 37, no. 3 (2002): 404–19. http://www.jstor.org/stable/27767798.
25 H. S. Shergill. “Agrarian Structure as a Factor in Rural Poverty: Some Cross-Section Evidence.” Economic and Political Weekly 24, no. 12 (1989): A9–12. http://www.jstor.org/stable/4394563.
production relations in rural areas prevent monetary benefits from reaching the people at the origin of the produce market.
With all these pieces of history in place, viewing urban migration as a whole becomes a simpler task. Urban migration is the movement of people from rural areas to larger cities.26 When India is studied for rural development, it is also examined for its relationship to urban migratory patterns. This migration results from both rural push factors and urban pull factors: common push factors are unemployment, poverty, and the land-locked nature of village life, while common pull factors are education, industry, proximity, and accessibility. A common way to prevent or slow this migration is to increase development in rural areas, which most of the time means agricultural development.27 Although this may seem to pertain only to farmers, the umbrella of agricultural development also takes in irrigation, improved crop seed varieties, and better marketing arrangements, among other things.
Another way to shift the pattern of rural-urban migration is to add social services that make rural places more accessible and better to live in overall. Services such as family planning, education, health care, and high-quality protective forces like police departments are predicted to slow this common trend of migration. In migration studies, the factor most strongly correlated with migration was the education gap, which suggests that expanding rural education would reduce the number of individuals moving to urban areas for schooling. Another point is that a standard curriculum does not work in rural communities; rural citizens need education based on their needs and surroundings. Family planning is another major element of the rural-urban migration gap: the idea is that increased family planning and contraceptive education will reduce the pressure caused by growing populations in these communities.
The next umbrella is rural health services. This is not limited to physical wellness but also encompasses sanitation, proper irrigation, septic systems, disease eradication, and nutrition, among other things. In the article “Rural Development and Urban Migration: Can We Keep Them Down on the Farm?,” pages 55 to 59 contain a large table that lists possible development activities alongside the effects each activity would have on the rural population and on migration.28
In India, every child has a right to education, yet the country as a whole struggles with illiteracy. One might attribute this to the high percentage of the population involved in agriculture, but India’s demographics have never been skewed toward citizens over the age of 65. Although literacy rates should be higher in rural areas, they are significantly lower. Even though children do go to school and attend classes, what they learn does not yield the same benefits urban students receive, because rural families, compared with urban ones, provide less support for their children’s education; their households lack the resources and the schooling to support them. Many of these students drop out because of the distance to school or to support their families.
To help solve this issue, many programs and organizations have formed with the common goal of changing education in rural India. One of these organizations is Transform Rural India (TRI).29 “[TRI enables] the most impoverished rural communities in India to achieve their full potential by ensuring they have equal access to the opportunities available to everyone else.”30 TRI uses a network of engaged teachers and volunteers to ensure that both students and their parents are involved in the process of education. Another top non-governmental organization (NGO), Bal Utsav, was established in 2009.31 It is centered on two programs: Sampoorna Shaala, dedicated to rural institutions that educate over 500 students, and iShaala, which uses technology to revitalize smaller schools averaging about 40 students.
Apart from the many long-standing NGOs working on this issue, the government has also started programs with the same idea in mind. An example is Samagra
26 “Urban Migration definition in American English | Collins English Dictionary.” n.d. Collins Dictionary. Accessed November 3, 2023. https://www.collinsdictionary.com/us/dictionary/english/urban-migration.
27 Rhoda, R. (1983). “Rural Development and Urban Migration: Can We Keep Them Down on the Farm?” The International Migration Review, 17(1), 34–64. https://doi.org/10.2307/2545923
28 Rhoda, R. (1983). “Rural Development and Urban Migration: Can We Keep Them Down on the Farm?” The International Migration Review, 17(1), 34–64. https://doi.org/10.2307/2545923
29 “Rural Education in India: Education Programs for Rural Development | TRI.” n.d. Transform Rural India. Accessed October 25, 2023. https://www.trif.in/education/#.
30 “About TRI - Transforming Rural India Foundation.” n.d. Transform Rural India. Accessed October 25, 2023. https://www.trif.in/about-us/.
31 Sharma, Bhunesh, and Neha Sharma. 2022. “Top 10 NGOs Providing Education to Kids in Rural Areas-NGOBOX-06 Oct. 2022.” ngobox. https://ngobox.org/full-news_Top-10-NGOs-Providing-Education-to-Kids-in-Rural-Areas-NGOBOX_24672.
Shiksha, launched between 2018 and 2019.32 This initiative aims to equalize schooling across the country at every grade level. Another is the Shiksha Karmi Project, launched in 1987 in Rajasthan to rebuild the education systems of villages where schooling had become deeply dysfunctional. Apart from in-person initiatives and projects, the government is also using media and technology to make resources more readily available across the country.
The government of India has realized that to strengthen its economy, the nation needs to advocate for career-oriented education in rural areas. A step below that is providing basic education for students of all ages rather than cutting off schooling at the 10th standard (the equivalent of 10th grade in the States) in villages. Based on data from 2021, 65% of India’s nearly 1.4 billion people live in rural areas, and of that rural population, 47% depend on agriculture for their living.33 Many children growing up in rural areas have no choice but to move to urban areas to continue their higher education. When they do, they enter highly competitive environments where their skills lag behind those of their urban peers. This creates an employment problem, and eventually many of them return to their villages or hometowns to pursue agriculture as their families have done for generations. To combat this general lack of education, more colleges and universities have been founded in rural areas for rural audiences over the past two decades.
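To translate those percentages into rough headcounts (my own arithmetic, not figures stated in the cited release):

\[
0.65 \times 1.4 \ \text{billion} \approx 910 \ \text{million rural residents}, \qquad 0.47 \times 910 \ \text{million} \approx 430 \ \text{million}
\]

people depending on agriculture, if the 47% is read as a share of the rural population.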
Even with these colleges and universities, rural students are at a significant disadvantage when competing for jobs, and largely for one reason: soft skills. Soft skills are arguably the most important body of knowledge a person can have; some would say they matter even more than formal education. These skills only reach their potential when they are applicable to the environment around them.34 This is one form of education that rural Indian children and students are not receiving, and it is one of the most important. There are reasons why this kind of non-formal education is more practical and more urgent for rural students than formal education. One is time. Formal education needs allotted time: years and years of it. On the other
32 “Rural Education – Integral To India’s Progress.” 2022. IBEF. https://www.ibef.org/blogs/rural-education-integral-to-india-s-progress.
33 “Press Information Bureau.” 2023. Press Information Bureau. https://pib.gov.in/PressReleasePage.aspx?PRID=1894901.
34 Nayar, D. P. “Non-Formal Education and Rural Development in India.” Community Development Journal 14, no. 1 (1979): 48–54. http://www.jstor.org/stable/44272785.
hand, non-formal education about the surrounding environment can be taught and put to use immediately, based on need. Moreover, some skills taught in school are not immediately necessary or applicable to the environment in which rural students live, whereas non-formal skills teach students how to take care of themselves and their surroundings, allowing them to put what they learn to immediate use. This makes a strong case for the government to extend soft-skill education to all ages in rural areas. A further advantage of non-formal education over formal education is that it can reach every person, not only students or the young.
Further research surfaced the term “barefoot engineering.”35 Barefoot engineering refers to the engineering that rural citizens learn in order to develop their skills and aid the places they come from. One article on the subject focused on the livelihood of a barefoot engineer named Hemlata Naik, who married and had children after completing grade 12. Through a new on-site training program, the Barefoot Technician (BFT) program, she learned a range of skills: “I learnt doing layouts, measurements, and planning rural infrastructure. I am proud to work as a barefoot technician. I will be working in my village, for my community while improving my family’s economic situation.” This three-month program for rural citizens who have completed up to grade 10 has already trained 5,000 BFTs and plans to train at least 3,000 more. Programs such as BFT are what truly prove valuable to rural communities.
Although there are multiple sources documenting renewable energy in India, there is not enough readily available data on the geographic locations of renewable energy plants. With better knowledge of where these plants are most prominent, it would be easier to piece together the connections between jobs, youth migration patterns, the types of jobs being created, the increase in students studying subjects related to renewable energy, and other trends. If data sets included the locations where this sector is expected to grow, connections could be drawn to nearby big cities. For the power plants, dams, and wind turbines that already exist, collecting data on their energy capacity, how many homes they supply, the big cities in the nearby vicinity, employee counts, and similar measures would also make analysis easier. However, since renewable energy has expanded rapidly only
35 “‘Barefoot Engineering’: How to Boost Rural Development and Local Youth Employment.” 2017. ILO. https://www.ilo.org/newdelhi/info/public/fs/WCMS_575653/lang--en/index.htm.
in the past two decades, there has not been much prior research or data on this aspect. The last two decades prompted the sector’s growth, but keeping official documents and data is something that still needs to be done. The Indian government has also been taking various steps to increase the presence of green energy in the nation; growing the sector is step one, and systematic documentation and data collection is step two.
Another gap is the lack of opinions from rural Indians themselves. Most rural Indians have little formal education, which can make them reluctant to share their feelings and opinions with the general public, foreign or domestic. When researching migration patterns between rural and urban India, it would be very helpful to have interviews or documented oral histories on how these people feel about renewable energy, education, or working in the green energy sector. The same goes for Indians living in urban parts of the country, because it is important to understand how people’s view of living in rural areas might shift if the government made the changes discussed above to education, social systems, and accessibility.
Overall, the biggest knowledge gaps in this project were the lack of data on current wind farms, dams, and power plants, as well as the lack of interviews with citizens of rural and urban India who either disagree with the growth of the renewable energy sector or believe it will harm their livelihood.
Renewable energy has been expanding rapidly over the past few decades. The Indian government has taken up multiple initiatives to increase the nation’s renewable energy capacity for future generations. As part of these plans, India is also testing the boundaries of offshore wind farms, a form of renewable energy far less common than onshore wind. Only 18 countries around the world have implemented this form of renewable energy, and over 61% of them (roughly 11 countries) have fewer than five offshore wind farms.36 With India’s scouted sites and access to large bodies of water, the country can steadily grow its renewable energy sector through this avenue. In the past couple of years, India has also recorded and collected data on the avenues in which it can create the most capacity for sustainable energy. With this information, the government has been able to partner with companies and big industry players to bring some of these visions to life.
The growth of this sector also creates a large number of jobs in India. Since the majority of these power-generating farms will be located in rural areas, it is important to determine the effect they will have on Indians living there. The biggest connection between the two is education and employment opportunity. With growing industry in areas with smaller populations, residents have more incentive to stay where they are instead of moving to urban areas for jobs and education. This will, slowly but surely, decrease the rural-urban migration rate, which will in turn contribute to a more evenly spread population.
Rural-urban migration exists because of many factors: medical care, sanitation, governance, and income issues, to name a few. One of the biggest drivers of youth migration is education. Students and their families migrate to bigger cities for a better education, which will then lead to a better job and a stable income. This domino effect is initiated by one thing: education. The problem is that students who move from rural to urban areas in the name of education are at a severe disadvantage when competing with students who grew up in urban areas. They have not had as strong an education as their urban counterparts, and they often lacked educational support at home, since many of their parents were not educated themselves. Now that jobs are being created in rural areas through renewable energy, the one variable still lagging is education.
Rural education is a major issue that the government and many non-governmental organizations (NGOs) are coming together to address. The government has implemented new rural education programs. These differ from a “normal urban” education, because that type of formal schooling does little for rural students or for rural development. Learning about their environment and the problems that surround them is what makes the biggest difference in their education. This approach also shapes the development of rural areas: it produces not only well-educated students but also a whole task force of citizens who understand the importance of fixing issues such as water quality, irrigation, and sanitation. Another strength of rural education programs is that they are not limited to young people still in school; they also reach older residents who never had the chance to finish school or could not afford it.
36 Fernández, Lucía. n.d. “Offshore wind farms by key country 2023.” Statista. Accessed November 15, 2023. https://www.statista.com/statistics/264257/number-of-offshore-wind-farms-worldwide-by-country/.
How do rural education and renewable energy connect? Renewable energy is a new field, and implementing new education systems in rural areas is an approach that the Indian government and many NGOs have only just started taking. As government bodies such as the National Institute of Wind Energy and the Ministry of New and Renewable Energy scout sites for new renewable energy farms, the sector will only continue to grow. Methods of collecting sustainable energy, such as hydropower dams, offshore wind farms, and onshore wind farms, are commonly implemented in rural rather than urban areas: cities do not have enough land to place a wind farm in their centers, which is why rural land matters. When these farms are constructed, they need staff and employees to support the systems, and this is where rural communities come into play. If all of the youth migrate to the cities, there will not be enough educated citizens to reliably run these power plants. This is why, as the implementation of renewable energy continues to grow in India, the government will have to strengthen rural education programs. The idea also works the other way around. If the government places a higher emphasis on the immediate education of the rural population and the higher education of rural youth, more areas will be open to hosting renewable energy farms. Farmers and their children will be educated on the importance of renewable energy, and this educational path will make them more willing to shift their lifestyle little by little. A majority of farmers in India are unwilling to give up their land to be turned into a sustainable energy plant, but educating their communities will make it easier for the government to find land in rural areas. As more wind farms and dams appear, literacy rates will rise in rural areas, and if students have a way of completing their postsecondary education in their own villages, there is not much of an external pull factor drawing them into cities. This is a pattern India will continue to see as it tries to rise to the top of the global charts as a renewable energy leader.
However, there may be resistance from religious and tribal groups. India is home to multiple religions and many distinct communities, and the national government consists mostly of Hindu individuals. This, along with longstanding religious tensions between Hindus and Muslims, raises questions as to whether predominantly Muslim areas would receive the same resources or consideration for a renewable energy project as states with larger Hindu populations. The same applies to areas associated with higher and lower castes. Tribal lands and farmers’ lands, for their part, require more education among the population before a renewable energy plant can be proposed and built. This is an obstacle the government has to account for when choosing locations to expand the energy sector.
In conclusion, as a result of India’s growing population and youthful demographics, the country has a greater need than ever to strengthen employment opportunities and energy capacity. Over the past few decades, the country has seen a substantial increase in renewable energy capacity from various sources. With this increase come employment opportunities in rural areas, which can help counter the concentration of Indian youth in large cities rather than smaller villages. Addressing this starts with the education-driven rural-urban migration pattern. Through NGOs, government initiatives, and specific rural education programs, citizens are being educated to benefit the areas they live in while learning skills directly applicable to their everyday lives. This increase in job offerings and this better approach to education will help slow the migration of youth into big cities, creating a strong reason to expand both renewable energy and educational outreach programs for undereducated citizens.
“About TRI - Transforming Rural India Foundation.” n.d. Transform Rural India. Accessed October 25, 2023. https://www.trif.in/about-us/.
Shergill, H. S. “Agrarian Structure as a Factor in Rural Poverty: Some Cross-Section Evidence.” Economic and Political Weekly 24, no. 12 (1989): A9–12. http://www.jstor.org/stable/4394563.
“Bagasse Power.” 2011. Biomass Magazine. https://biomassmagazine.com/articles/bagasse-power-5247.
“‘Barefoot Engineering’: How to Boost Rural Development and Local Youth Employment.” 2017. ILO. https://www.ilo.org/newdelhi/info/public/fs/WCMS_575653/lang--en/index.htm.
“British Colonial Rule: India Before and After Colonization with Examples.” n.d. Toppr. Accessed November 1, 2023. https://www.toppr.com/guides/economics/indian-economy-on-the-eve-of-independence/british-colonial-rule/.
Chandra, Nishi. 2023. “Rooftop Solar Subsidy in India 2023: How Much, How & Where to Get.” Loom Solar. https://www.loomsolar.com/blogs/collections/solar-panel-subsidy-in-india.
“Electricity consumption by country 2021.” 2023. Statista. https://www.statista.com/statistics/267081/electricity-consumption-in-selected-countries-worldwide/.
Fernández, Lucía. n.d. “Offshore wind farms by key country 2023.” Statista. Accessed November 15, 2023. https://www.statista.com/statistics/264257/number-of-offshore-wind-farms-worldwide-by-country/.
“Hydroelectric power plants in India.” n.d. Optimize IAS. Accessed December 4, 2023. https://optimizeias.com/hydroelectric-power-plants-in-india/.
Golchha, Akanksha. 2022. “India Could Create Millions of Jobs Through Renewable Energy.” NRDC. https://www.nrdc.org/bio/charlotte-steiner/india-could-create-millions-jobs-through-renewable-energy.
Council on Energy, Environment, and Water, Natural Resources Defense Council, and Skill Council for Green Jobs. n.d. “India’s Expanding Clean Energy Workforce.” NRDC India. Accessed October 11, 2023. https://www.nrdcindia.org/pdf/NRDC-Jobs%20report-Feb-2023.pdf.
“India Offshore Wind Energy.” 2023. International Trade Administration. https://www.trade.gov/market-intelligence/india-offshore-wind-energy.
“India Population (2023).” n.d. Worldometer. Accessed November 15, 2023. https://www.worldometers.info/world-population/india-population/.
Mukhopadhyay, Subrata. n.d. “Wind Energy map of India | Download Scientific Diagram.” ResearchGate. Accessed December 4, 2023. https://www.researchgate.net/figure/Wind-Energy-map-of-India_fig5_261245790.
Nayar, D. P. “Non-Formal Education and Rural Development in India.” Community Development Journal 14, no. 1 (1979): 48–54. http://www.jstor.org/stable/44272785.
Palmer-Jones, Richard, and Kunal Sen. “On India’s Poverty Puzzles and Statistics of Poverty.” Economic and Political Weekly 36, no. 3 (2001): 211–17. http://www.jstor.org/stable/4410196.
“Press Information Bureau.” 2023. Press Information Bureau. https://pib.gov.in/PressReleasePage.aspx?PRID=1894901.
“Religion in India: Tolerance and Segregation.” 2021. Pew Research Center. https://www.pewresearch.org/religion/2021/06/29/religion-in-india-tolerance-and-segregation/.
“Religion | The Story of India - Photo Gallery.” n.d. PBS. Accessed April 2, 2024. https://www.pbs.org/thestoryofindia/gallery/photos/14.html.
Kwatra, Sameer. 2023. “Renewable Energy: A Driver for Job Growth.” NRDC. https://www.nrdc.org/bio/sameer-kwatra/renewable-energy-driver-job-growth.
“Renewable Energy in India: Investment Opportunities in the Pow...” n.d. Invest India. Accessed October 20, 2023. https://www.investindia.gov.in/sector/renewable-energy.
Kar, Sanjay Kumar. “Renewable Energy Market Developments: A Study of India.” Renewable Energy Law and Policy Review 6, no. 4 (2015): 238–47. http://www.jstor.org/stable/26256467.
“Renewables, hydro cross 40 per cent of India’s installed power generation capacity in FY23: Report.” 2023. ET EnergyWorld. https://energy.economictimes.indiatimes.com/news/renewable/renewables-hydro-cross-40-per-cent-of-indias-installed-power-generation-capacity-in-fy23-report/100151725.
Rhoda, R. (1983). Rural Development and Urban Migration: Can We Keep Them Down on the Farm? The International Migration Review, 17(1), 34–64. https://doi.org/10.2307/2545923
“Rural Education in India: Education Programs for Rural Development | TRI.” n.d. Transform Rural India. Accessed October 25, 2023. https://www.trif.in/education/#.
“Rural Education – Integral To India’s Progress.” 2022. IBEF. https://www.ibef.org/blogs/rural-education-integral-to-india-s-progress.
Singh, Chandra B. P. “Rural Psychology in India: Issues and Approaches.” Indian Journal of Industrial Relations 37, no. 3 (2002): 404–19. http://www.jstor.org/stable/27767798.
Sharma, Bhunesh, and Neha Sharma. 2022. “Top 10 NGOs Providing Education to Kids in Rural Areas-NGOBOX-06 Oct. 2022.” ngobox. https://ngobox.org/full-news_Top-10-NGOs-Providing-Education-to-Kids-in-Rural-Areas-NGOBOX_24672.
Sheth, N. R. “Modernization and the Urban-Rural Gap in India : An Analysis.” Sociological Bulletin 18, no. 1 (1969): 16–34. http://www.jstor.org/stable/23618701.
Sud, Tushar, Rajneesh Sharma, Radhika Sharma, and Lucy Kitson. “Overview of India Electricity and Renewable Energy Landscape.” Case Study: India’s Accelerated Depreciation Policy for Wind Energy. International Institute for Sustainable Development (IISD), 2015. http://www.jstor.org/stable/resrep14787.5.
Image compiled from the National Institute of Wind Energy and the Ministry of New and Renewable Energy.
“Urban Migration definition in American English | Collins English Dictionary.” n.d. Collins Dictionary. Accessed November 3, 2023. https://www.collinsdictionary.com/us/dictionary/english/urban-migration.
Waring, John A. n.d. “Historical statistics of the United States, Colonial Times to 1957.” Accessed December 4, 2023. https://www2.census.gov/library/publications/1960/compendia/hist_stats_colonial-1957/hist_stats_colonial-1957-chS.pdf.
“What is bathymetry?” n.d. National Ocean Service. Accessed October 18, 2023. https://oceanservice.noaa.gov/facts/bathymetry.html.
Wirth, Louis. 1938. “Urbanism as a Way of Life.” American Journal of Sociology XLV: 1–24.
Sengupta, Amit. 2023. “Offshore wind energy: India set to harness coastal breezes.” YouTube, 0:26–0:31. https://www.youtube.com/watch?v=rrIRZ2tVxOY.
E.G. Palmer
North Carolina officially established South Mountains State Park as public land in November 1975. According to forester Rob Amberg, parts of the South Mountain range were being considered for National Forest purchases as far back as 1911.1 Officials surveyed the land for potential state park development as early as 1942. However, it was not until the environmental decade of the 1970s that park development began to be seriously considered. Local groups discussed who would conserve the South Mountains and what should be conserved, and later debated the appropriate level of human involvement in the park. These debates took the form of editorials in the local newspapers, park planning meetings attended by hundreds, and letters written to government officials. The specific conclusions reached in these debates could not have existed without the historical context of the 1970s. Drawing on the rhetoric and aesthetics of wilderness from the Romantic era, the ecology movement peaked in the 1970s, leading people to value environmental relationships more than in the past. Through examining primary sources documenting the park’s creation and considering environmental trends from the 1970s, it becomes clear that the creation of South Mountains State Park reflected the ecological values of the time.
The most revealing feature of the debates surrounding the creation of the park was that all parties believed the area should be conserved; they disagreed only about who was best suited to do it. When South Mountains State Park was created, two main groups opposed the park’s creation: independent small landowners and a corporation, Pine Mountain Lakes. These two opponents of the state park agreed that this part of the South Mountains should be conserved, but they each made cases for why they should be the ones
responsible for conserving it. In addition to questions about who should conserve the land, the community also questioned who should have access to the land. Yet, notably absent from these discussions were the voices of timber and mining interests. This may have been a reflection of the pro-environment climate of the 1970s, when many political figures agreed that conservation was important and even passed impactful environmental legislation, such as the Clean Air Act (1970) and Clean Water Act (1972). In previous decades, logging and mining interests (who would likely argue against environmental protection) would have had a much larger influence.
One of the main catalysts for the creation of the park was encroaching residential development in the region, namely upscale development that would prohibit public access. In 1973, newspapers began to report on a new development that was under construction in the South Mountains. This planned community was called Pine Mountain Lakes, and it would become synonymous with all development in the South Mountains. Pine Mountain Lakes was intended to be an enormous project. It was originally planned to house 10,000 families (although its size was later reduced). It was also intended to be a “recreation resort community,” complete with a lodge that “will resemble a large tri-level country club with outdoor swimming pool, a restaurant, game rooms, several quiet areas, and tennis and paddle tennis courts.”2 Most relevantly to the public, it was intended to be (and still is to this day) a gated community, with no access allowed to non-residents. Many people were upset by this. Shelby citizen Lester Roark wrote, “This nation has long been dedicated to the proposition that areas of ‘unusual scenic beauty’ should be set aside for the enjoyment of ALL our people- young and old, rich and poor.”3 Equitable access regardless of social class is an especially important idea here, considering that Pine Mountain Lakes was (and still is) a development intended for the rich. The development would have
closed off a significant amount of land in the Jacob’s Fork area of the park for private use if it had fulfilled all of its plans. Even though the land wouldn’t be available for public use, Pine Mountain Lakes claimed to be committed to conservation in the region. According to the Morganton News Herald, Pine Mountain Lakes hired a team of professionals “including geologists, engineers, ecologists, and land-planning experts” in order to better construct the development.5 In his letter to the Shelby newspaper, titled “Developers, Park Can Coexist,” Lester Roark mentioned Pine Mountain Lakes in a positive way, writing “THEY ARE DOING AN EXCELLENT JOB and should be commended for their efforts to protect the landscape.” Although Roark was against Pine Mountain Lakes closing the land off to the public, he believed that they were committed to conserving the land. At least publicly, Pine Mountain Lakes seemed to be committed to protecting the South Mountains in some way.
Although Pine Mountain Lakes claimed to be committed to conservation, the development is alleged to have been less than careful about the ecological balance of the region. Art Linkletter, TV and radio personality turned public relations representative for the development, is quoted as saying, “Ecology is like any crusade of its type. It will reach an intense pitch and then subside.”6 It is relevant that Linkletter used the word “ecology” here. Ecology was a very new idea in the 1970s. As opposed to earlier ideas like “conservation” that emphasized sustainable development, ecology emphasized the interconnectedness of different parts of nature. Although Pine Mountain Lakes was outwardly committed to conservation, it also engaged in environmentally irresponsible behaviors that may have harmed the ecology of the region. Alan Eakes, chief of planning and interpretation for the state parks division, said of Pine Mountain Lakes, “It’s just frightening to see what they are doing. [The trees cleared for roads] were being burned like crazy in the middle of last week. It looked from a distance like the whole forest was on fire.”7 While there was other development going on in the South Mountains at the time, the scale of the Pine Mountain Lakes development seemed especially threatening to the ecology of the region. According to the article “Mining Under Way?” Pine Mountain Lakes was granted a permit to mine gravel in specific areas, but not in the Jacob’s Fork River. However, some residents accused Pine Mountain Lakes of mining in the Jacob’s Fork River, which caused the river to look “like chocolate milk.”8 Allegedly, the mining killed over 200 fish. While Pine Mountain Lakes conserved parts of the landscape, some of the development’s actions had damaging ecological impacts.
The small property owners within the area of the proposed park added another dimension to the debate about the rightful ownership of the land. Some of these property owners,
mostly from families who had lived in the South Mountains for many generations, strongly opposed the park’s creation. Some argued that they stewarded the land better than the government could. One area resident, Faye Deviney, told a local newspaper that “people mean pollution” and that the park would attract “undesirables” to the area and “alter (the community’s) way of life.”9 She believed that the more people invited into the area, the more polluted and dangerous it would become. This sentiment was echoed by other residents of the South Mountains. M.H. Smith, a farmer from the Burke County community of Casar, wrote to North Carolina Governor James Holshouser concerning the park. Smith argued, “The landowners have carefully conserved the natural resources and the natural beauty of these hills. We are extremely proud of what we have created and maintained.”10 Smith opposed using the area he lived in for a park, but did not seem against the idea of a park altogether. He wrote, “There are other sites more readily available and more suited for a park.”11 This makes sense in the context of the 1970s, when there was widespread support for conservation. Despite the locals’ assurances that the land was well taken care of, it was transferred to the government by 1975. Although several factors likely led to this decision (such as the aforementioned encroaching development), it also reflects another important idea in American conservation: technocracy, the belief that land is best managed in the hands of experts and scientists rather than the public. Even though the landowners may have been conserving the land, others believed it could be conserved best in the hands of the government, where scientists and park officials could monitor it more easily.
Romanticism was one of the most powerful environmental philosophies of the 19th century. Its fascination with sublime landscapes—those that inspire both fear and amazement—was still relevant to 20th-century environmentalism. The idea of sublime landscapes has historically had religious connotations. Romantics believed that by experiencing sublime landscapes (such as waterfalls and mountain vistas) people could become closer to God. These ideas have been ingrained in American culture so deeply that they are echoed in the writing about South Mountains State Park, and likely played a small part in conservation decisions. When the state purchased land for the park in 1975, Thomas Ellis, the Superintendent of State Parks at the time, remarked, “I think some things ought to belong to all people. The High Shoals area is one of the God given works that the good Lord has blessed us with, and it should belong to everybody.”12 Ellis refers to the waterfall as a blessing, which is significant because a “blessing” has a connotation of something that everyone should enjoy. By describing the waterfall this way, Ellis appeals to a philosophy of conservation that has been responsible for many historic national parks.
Waterfalls were one of the main sources of the sublime for 19th century writers. It is no coincidence that when Pine Mountain Lakes was buying land in the South Mountains prior to the park’s formation, the feature people cared the most about was the waterfall. Many letters and newspapers referenced the High Shoals Waterfall on Jacob’s Fork River as an object of special concern. Pine Mountain Lakes had purchased a lot of the land around the waterfall, and citizens were concerned that there would not be enough free land around the waterfall to make a park. One article reported that “The state park may have to be discarded despite the hopes and aspirations of residents of Cleveland, Burke, and Rutherford counties.”13 Several letters and statements from organizations all over the region about the park paid special attention to the High Shoals Waterfall. For example, the Cecilia Music Club of Shelby sent out a statement saying “We respectfully request that the state of North Carolina move swiftly to buy the Jacob’s Fork High Shoals as a national wilderness area.”14 William Cronon wrote about the tendency to preserve sublime landscapes, saying: “God was on the mountaintop, in the chasm, in the waterfall, in the thundercloud, in the rainbow, in the sunset. One has only to think of the sites that Americans chose for their first national parks—Yellowstone, Yosemite, Grand Canyon, Rainier, Zion—to realize that virtually all of them fit one or more of these categories.”15 People chose the first national parks, especially Yellowstone and Yosemite, because of their sublime landscapes. Waterfalls, such as High Shoals Falls, are the quintessential representation of sublime landscapes. This helps explain why people wrote about the falls in a Romantic way. It also helps explain why this rhetoric was effective for conservation, and why so many people cared about preserving the falls for public access.
American conservation has historically been fascinated with the idea of a “pure” or “pristine” wilderness. The most widely read articles about South Mountains State Park are probably those that appeared in the magazine Our State (or The State, as it was known prior to 1996). South Mountains State Park was featured in at least two articles, and both provide notable examples of rhetoric related to purity. The earlier article, written by Edgar Abernathy, describes South Mountains State Park as “rugged” and “pristine…uncluttered by tourists.”16 This reinforces the idea that to be pristine or pure, humans have to stay away from the wilderness. The later article, written by Marshall Ellis and appearing in the March 1999 issue of Our
State, also included references to the park’s “pristine streams.”17 These articles were relatively accessible to the general public, so it is significant that this was the rhetoric promoted about the park. It shows how South Mountains State Park’s purity was one of its most press-worthy features.
The ecological perspectives of the time also supported the idea that environments work best without any human interference. In the 1970s and 80s, the dominant environmental idea was that, left alone, nature would always reach a climax community. A climax community can best be explained in the words of Eugene Odum, a biologist who pioneered the field of ecosystem ecology. Odum wrote that all ecosystems are fundamentally “directed toward achieving as large and diverse an organic structure as is possible within the limits set by the available energy input and the prevailing physical conditions of existence.”18 Odum’s ideas were highly influential in the early 1970s, and people believed that ecosystems would, without human interference, always move toward a climax (or K-selected) community. From an ecological standpoint, non-interference was considered the right thing to do. Although this viewpoint changed after the 1980s, it influenced how the park was created. According to a 1978 memorandum, there was a debate about whether or not to build an artificial lake. The report says one of the proposed plans involved a “12 acre lake with a swimming beach, fishing, row boating and canoeing” that would be created by damming the Jacob’s Fork River.19 The public was not in favor of this proposal, and it never happened. Local newspapers were thrilled about the decision. The Shelby Daily Star published an article called “Keeping Park Natural” praising the decision to limit development in the park. The article claimed, “With the lesser development plan approved, fewer funds should be needed so that the minimal development should be able to be undertaken quickly. More important, the natural beauty of South Mountains State Park will have been preserved.”20 Although funding appeared to be part of the issue, to the community the decision was about preserving the “natural beauty” of the park. Residents thought that a lake would make the park artificial, and therefore less beautiful. This reinforces the concept of wilderness as something as far removed from human intervention as possible.
As mentioned before, many people lived within the boundaries of what became South Mountains State Park before its creation. As with many state and national parks, these people had to be moved before the park was created because they were seen as incompatible with the idea of wilderness. There is an interesting historical precedent for this decision. The Great Smoky Mountains National Park, authorized in 1926, was the first national park created from land the government did not already own. The government had to decide what to do with the people who lived there. If the park “was to be seen as wilderness, [it] could not contain farmers whose families had lived there for hundreds of years.”21 Again, the idea appears that true wilderness and humans are incompatible. The National Park Service never “intended to allow the mountain dwellers to stay in the park.”22 They were all forced out, likely using the powers of eminent domain granted to North Carolina and Tennessee. By setting this precedent, the National Park Service sent a clear message about the place of human inhabitants in the wilderness, one that was followed by future state and national parks.
Environmental historian William Cronon once wrote, “To the extent that we celebrate wilderness as the measure with which we judge civilization, we reproduce the dualism that sets humanity and nature at opposite poles. We thereby leave ourselves little hope of discovering what an ethical, sustainable, honorable human place in nature might actually look like.”23 Cronon’s quote speaks to the importance of environmental history. Nature cannot be fully understood without considering human relationships to it. Environmental history is increasingly important now that the Anthropocene has bound the human world and the non-human world more closely together than ever before. The creation of South Mountains State Park is an important example of how the human world and the environment were connected in the historical context of the 1970s. By understanding why the park was established, what was conserved, and how it was managed, we can better understand how human culture and scientific thought shape environmental management.
Notes:
1. Amberg, Rob. Contact about Old Surveys of Rollins, Etc. 8 Aug. 2007. South Mountains State Park.
2. Houser, Troy. “New Burke Community Born.” The News Herald, 88th ed., 6 July 1973. Burke County Library, Pine Mountain Lakes.
3. Roark, Lester. “Developers, Park Can Coexist.” The Shelby Daily Star. South Mountains State Park.
4. Roark, Lester. “Developers, Park Can Coexist.” The Shelby Daily Star. South Mountains State Park.
5. 10,000 Families Expected in New Burke Development. Burke County Library, Pine Mountain Lakes.
6. Inman, Bill. “A Star Without a Mike.” Shelby Daily Star, 3 June 1974. South Mountains State Park.
7. Weathers, Jim. “Park May Lose Jacob Fork Area.” Shelby Daily Star, 11 Mar. 1974.
8. Weathers, Jim, and Joe DePriest. “Mining Under Way?” Shelby Daily Star.
9. Nicholas, Wayne. “Lonely, Silent Mountains Wait as Developers Begin to Hum.” The Charlotte Observer. South Mountains State Park, Formation of the Park.
10. Smith, M. H. 11 Feb. 1974. South Mountains State Park, Formation of Park.
11. Smith, M. H. 11 Feb. 1974. South Mountains State Park, Formation of Park.
12. Hall, Ted. “State Purchases South Mtn. Park.” The News Herald, 19 Dec. 1975. Burke County Library.
13. South Mountain Park Now in Real Danger. 12 Mar. 1974. South Mountains State Park.
14. Mrs. Lee Gilliatt. Statement. 5 Dec. 1973. South Mountains State Park.
15. Cronon, William. The Trouble with Wilderness; or, Getting Back to the Wrong Nature. https://www.williamcronon.net/writing/Trouble_with_Wilderness_Main.html. Accessed 15 June 2023.
16. Abernathy, Edgar. “The South Mountains.” The State. Burke County Archive.
17. Ellis, Marshall. “Call of the Wild.” Our State, Mar. 1999, pp. 60–64. Burke County Library.
18. Worster, Donald. “The Ecology of Order and Chaos.” Environmental History Review, vol. 14, Spring–Summer 1990, pp. 1–18, https://www.jstor.org/stable/3984623.
19. Weaver, Steve and Fred Hagenberger. Additional Public Input Needed Regarding South Mountains State Park Master Plan. 18 Apr. 1978. South Mountains State Park, Formation of the Park.
20. “Keeping Park Natural.” The Shelby Daily Star, 14 Apr. 1978. South Mountains State Park.
21. Cantrill, James G., and Christine L. Oravec, editors. The Symbolic Earth: Discourse and Our Creation of the Environment. 1st ed., University Press of Kentucky, 1996. JSTOR, http://www.jstor.org/stable/j.ctt130j1tg. Accessed 23 June 2023.
22. Cantrill, James G., and Christine L. Oravec, editors. The Symbolic Earth: Discourse and Our Creation of the Environment. 1st ed., University Press of Kentucky, 1996. JSTOR, http://www.jstor.org/stable/j.ctt130j1tg. Accessed 23 June 2023.
23. Cronon, William. The Trouble with Wilderness; or, Getting Back to the Wrong Nature. https://www.williamcronon.net/writing/Trouble_with_Wilderness_Main.html. Accessed 15 June 2023.
10,000 Families Expected in New Burke Development. Burke County Library, Pine Mountain Lakes.
Abernathy, Edgar. “The South Mountains.” The State. Burke County Archive.
Amberg, Rob. Contact about Old Surveys of Rollins, Etc. 8 Aug. 2007. South Mountains State Park.
Cantrill, James G., and Christine L. Oravec, editors. The Symbolic Earth: Discourse and Our Creation of the Environment. 1st ed., University Press of Kentucky, 1996. JSTOR, http:// www.jstor.org/stable/j.ctt130j1tg. Accessed 23 June 2023.
Cronon, William. The Trouble with Wilderness; or, Getting Back to the Wrong Nature. https://www.williamcronon.net/writing/Trouble_with_Wilderness_Main.html. Accessed 15 June 2023.
Crystal Waters Fouled By Sand. South Mountain State Park.
Ellis, Marshall. “Call of the Wild.” Our State, Mar. 1999, pp. 60–64. Burke County Library.
Hall, Ted. “State Purchases South Mtn. Park.” The News Herald, 19 Dec. 1975. Burke County Library.
Houser, Troy. “New Burke Community Born.” The News Herald, 88th ed., 6 July 1973. Burke County Library, Pine Mountain Lakes.
Inman, Bill. “A Star Without a Mike.” Shelby Daily Star, 3 June 1974. South Mountains State Park.
“Keeping Park Natural.” The Shelby Daily Star, 14 Apr. 1978. South Mountains State Park.
Mrs. Lee Gilliatt. Statement. 5 Dec. 1973. South Mountains State Park.
Nicholas, Wayne. “Lonely, Silent Mountains Wait as Developers Begin to Hum.” The Charlotte Observer. South Mountains State Park, Formation of the Park.
Roark, Lester. “Developers, Park Can Coexist.” The Shelby Daily Star. South Mountains State Park.
Smith, M. H. 11 Feb. 1974. South Mountains State Park, Formation of Park.
South Mountain Park Now in Real Danger. 12 Mar. 1974. South Mountains State Park.
Weathers, Jim. “Park May Lose Jacob Fork Area.” Shelby Daily Star, 11 Mar. 1974.
Weathers, Jim, and Joe DePriest. “Mining Under Way?” Shelby Daily Star.
Weaver, Steve, and Fred Hagenberger. Additional Public Input Needed Regarding South Mountains State Park Master Plan. 18 Apr. 1978. South Mountains State Park, Formation of the Park.
Worster, Donald. “The Ecology of Order and Chaos.” Environmental History Review, vol. 14, Spring–Summer 1990, pp. 1–18, https://www.jstor.org/stable/3984623.
Sydney Covington
The grandness and military strength of the atomic bomb evoke a cautious excitement in American society and reflect uniquely on the attributes of its creators, specifically emphasizing traditionally masculine characteristics such as daring, raw power, and patriotism. Substantial attention is cast on eminent male scientists from the mid-20th century through books, cinema, and scholarly papers. Works such as American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer or Einstein and Oppenheimer: The Meaning of Genius inadvertently distract from other narratives by dramatically reintroducing the lives of select scientists to American consciousness. Oppenheimer’s popularized image as a modern Prometheus, first introduced in Scientific Monthly, grants him a distinguished, self-sacrificial portrayal as a target of 20th-century atomic bomb controversy and anticommunist political oppression.1 Furthermore, descriptions of scientists’ appearances and eccentric personas arouse public interest. Accounts of Oppenheimer’s charisma are accompanied by descriptions of him as “a tall, thin chain smoker, who often neglected to eat during periods of intense concentration.”2 Meanwhile, few accounts detail the lives of female nuclear scientists—neither their seemingly mundane characteristics, nor their contextualized intellectual contributions. Rather, numerous women who influenced the atomic bomb’s construction, like those residing and working at Los Alamos, are highlighted only as secretaries and wives. The book Their Day in the Sun by Howes and Herzenberg reflects on female scientists in the
1 F. L. Campbell, “Atomic Thunderbolts,” The Scientific Monthly, Vol. 61, 3 (1945): 234.
2 Bird, Kai, and Martin J. Sherwin. American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer (New York: A.A. Knopf, 2005).
Manhattan Project, raising them as commendable figures in American history and shaping “Los Alamos as a success story.”3 “Our Day in Their Shadow,” an article by Lee-Anne Broadhead, criticizes Howes and Herzenberg’s portrayal of these women as role models, claiming that “such an effort—to the extent it accepts and endorses the historical, political and scientific legitimacy of the Project—is . . . misguided and dangerous,” while asserting that accounts should focus only on those who refused to contribute to the bomb.4 However, it is necessary to recognize the complexity of these women’s lives and the factors that prompted their varied responses, whether they aided in bomb development or voiced opposition.
This research focuses on four female nuclear scientists—Melba Phillips, Lise Meitner, Lilli Hornig, and Maria Goeppert-Mayer—and the tension between their discoveries and the contemporary political transformations in Western society. These women stood between pressure to yield their life’s work and comply with restrictive policies, and a desire to assert themselves in their fields. However, unlike Oppenheimer (who, targeted by government officials as a security risk, was eventually compelled to suppress his opinions and retreat from his prominent standing), Phillips, Meitner, Hornig, and Goeppert-Mayer were influenced by the prevalent uncertainty and secretiveness to protect their values. Discriminatory societal pressures combined with the uncertainties of the Manhattan Project motivated them to defend their lives’ work; however,
3 Temple University Press “Their Day in the Sun.”
4 Lee-Anne Broadhead, “Our Day in Their Shadow,” Peace and Conflict Studies, Vol. 15, 2 (2009): 38.
the public has misconstrued their contributions while expounding those of their male counterparts.
In her 1952 article “Dangers Confronting American Science,” Melba Phillips5 expressed apprehension toward the developments in her field. She recognized heightened government interference and a distortion in the desires of scientists who sought monetary rewards or recognition—aims that Phillips called “paths of least resistance.”6 The self-interest of the nation and the growing prominence of militarism in science enforced these changes, ultimately obstructing the explorations and subsequent discoveries of scientists. During the McCarthy era, paranoia among U.S. political officials led to the establishment of new national security requirements that provoked fear and unease. The FBI monitored the activities of academics and conducted background examinations, gathering evidence that could suggest a subversion of national values. This included virtually any progressive actions, such as speaking for the Congress of American Women or sponsoring a Bill of Rights Conference.7 Many of those targeted were subjected to trials. Nevertheless, Phillips’s foremost values lay in exposing the dependence of true and effective learning on political freedom. In “Dangers Confronting American Science,” she called her audience to action, claiming “[I] believe that it is the responsibility of scientists, as citizens and as beneficiaries of the humanistic traditions of our culture, to guard and uphold the standards by which they live. [I] do not think it possible to maintain anything of value while yielding these standards since they are themselves among the ultimate values.”8 The role she assumed in promoting the education of American citizens exhibited her adherence to this statement. According to “Professional and Personal Coherence: The Life and Work of Melba Newell Phillips” by Watkins and Neuenschwander, “In exhorting scientists to engage the broader public, to listen as
5 Sallie Watkins, an author of “Professional and Personal Coherence: The Life and Work of Melba Newell Phillips,” and Phillips’s close friend for three decades, recalls the theoretical physicist for her caring and dignified demeanor. Oppenheimer, her thesis advisor at Berkeley, described her as “a person [who] can appreciate more than most of us the advantages of a high academic environment.” Phillips’s work involved magnetron theory, atomic core polarization, and the Oppenheimer-Phillips process, which applied quantum theory to the analysis of nuclear reactions.
6 Melba Phillips, “Dangers Confronting American Science,” Science Vol. 116, 3017 (1952): 439–440.
7 Dwight Neuenschwander and Sallie Watkins, “Professional and Personal Coherence: The Life and Work of Melba Newell Phillips,” 329.
8 Phillips, “Dangers Confronting American Science,” 441.
well as to speak, Melba led by personal example. She was first and always a teacher.”9 Her priorities resided in academia and in revealing the risks of forcing scholars into precarious positions. Meanwhile, another nuclear physicist, Lise Meitner, was marginalized by her nation’s rising political forces, but fought, as she had since the beginning of her career, to secure scientific freedom.
Lise Meitner, who lived in Berlin during the establishment of the Third Reich and was labeled a Jew according to the Nuremberg Laws, would have been threatened with extreme consequences had she voiced her opinions with the resolution and clarity of Melba Phillips.10 Unlike Phillips, she largely limited her moral stance to guiding her individual actions. Understanding that her discoveries in physics would soon benefit a powerful and corrupt nation, Meitner felt compelled to distance herself and her professional work from Germany. However, her scientific career had developed in Berlin and her life was firmly established there. She conducted experiments in chemistry at the Kaiser Wilhelm Institute with the technical guidance of Otto Hahn11 and Fritz Strassmann,12 both of whom relied on her expertise in physics and theoretical interpretation. The promising scientists formed a close group. Upon leaving, Meitner would be sacrificing her collaborators, laboratory space, and equipment. Referring to her physics research, Meitner explains: “I built it from its very first little stone; it was, so to speak, my life’s work, and it seemed so terribly hard to separate myself from it.”13 Ruth Sime, the author of Lise Meitner: A Life in Physics, reflects on Meitner’s hesitations when she asks, “If dismissed, where would she go? If permitted to stay, should she resign anyway, on principle?”14 Initially, Meitner’s response was to disregard the moral and professional crossroads at which she stood. Gradually, however, her stance grew stronger
9 Neuenschwander and Watkins, “Professional and Personal Coherence: The Life and Work of Melba Newell Phillips,” 327.
10 Meitner, although a “shy young woman,” was an “assertive professor” and is teasingly described by her nephew (with whom she maintained a close relationship) as “short, dark and bossy.” Her numerous archived letters also illuminate this contrast between shy and assertive: she was apologetic and careful to voice her feelings, but confidently communicated her scientific opinions and findings. She obtained her doctorate at the University of Vienna, Austria, before moving to Berlin, Germany. She successfully isolated the isotope protactinium-231, and studied nuclear isomerism and beta decay. Lise Meitner, herself, did not identify with her family’s Jewish background.
11 The German chemist Otto Hahn (1879-1968) met Lise Meitner in 1907 at the University of Berlin, and they later worked together at the Kaiser Wilhelm Institute in Dahlem, Berlin.
12 Fritz Strassmann (1902-1980), a German radiochemist, began assisting Meitner and Hahn with their investigations in 1934.
13 Sime. Lise Meitner: A Life in Physics, 148.
14 Sime. Lise Meitner: A Life in Physics, 139.
and Sime elaborates on her considerations: “The political situation was bad but would surely improve. She would not leave until she had lost everything and was driven out. Until then, Germany gave her what she thought was essential: her work was untouched, her position was the same, most of her friends were still there.”15 Despite the trajectory of her nation’s politics, Meitner’s utmost priorities lay in protecting her relationships and scientific research.
However, as her situation worsened following the Nazi annexation of Austria in 1938, Meitner’s attitude shifted and she became fearful that her decision to remain in Germany would have significant consequences. Nazis within the Kaiser Wilhelm Institute began showing hostility toward her, and she lost the reassurance and security once provided by her Austrian citizenship. Furthermore, she was denied a passport by the Reichsführer-SS and Chief of German Police.16 As the danger escalated, she received generous letters from the distinguished Danish physicist Niels Bohr, who offered her assistance with accommodations and employment outside of Germany. After considerable struggle, Meitner eventually accepted a position at the Manne Siegbahn Institute in Sweden and departed Germany for Stockholm. Meanwhile, she desperately sought to maintain her correspondence with Hahn and Strassmann. Initially, the two men consoled her. They eased her relentless curiosity by sharing the results of their findings and prompting her for assistance. However, her desire to return to the group intensified upon hearing of mystifying results in Hahn’s experiments bombarding uranium with neutrons: he had found barium atoms present among the products. From afar, Meitner contemplated theoretical explanations and, with the assistance of her nephew Otto Frisch, made a significant breakthrough, of which she immediately notified Hahn, by attributing the results to the previously unheard-of process they termed “fission.” The addition of a neutron to the massive uranium nucleus had initiated the atom’s division: a process that, by moving the fragments toward greater nuclear stability, could release massive amounts of energy.
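The energy bookkeeping behind this claim can be sketched with a worked equation. The fission channel shown here is a standard textbook illustration rather than the specific isotopes Hahn and Strassmann identified (they detected barium chemically), and the roughly 200 MeV figure is the commonly quoted approximate energy release per fission, offered only to make the mass-energy argument concrete:

\[
n + {}^{235}_{92}\mathrm{U} \;\longrightarrow\; {}^{141}_{56}\mathrm{Ba} + {}^{92}_{36}\mathrm{Kr} + 3n,
\qquad
E_{\text{released}} = \Delta m \, c^{2} \approx 200\ \mathrm{MeV}
\]

Because the fragments are more tightly bound per nucleon than the original uranium nucleus, the small mass difference Δm appears as the large energy release that Meitner and Frisch estimated.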
As Meitner’s new career at the Manne Siegbahn Institute in Stockholm strayed into science reliant on mechanical equipment and experimentation, she expressed her complete disinterest; she favored creating hypotheses and deriving conclusions based on conceptual interpretations. This is evident from Sime’s explanation that “Lise Meitner wanted very little to do with the apparatus at first.” She continues by quoting Meitner: “‘In the whole big institute, only five academic people are employed, and they too work almost entirely on problems of apparatus. Scientifically I am completely isolated, for months
15 Sime. Lise Meitner: A Life in Physics, 149.
16 Sime. Lise Meitner: A Life in Physics.
I speak with no one about physics.’”17 Meitner clearly despised this scientific environment. Like Phillips when she became a target of American McCarthyite scrutiny and accusations, Meitner was cast out as a result of Europe’s accelerating political instability, which appeared detrimental to her pursuit of pure science. She wished to return to the Berlin group and their laboratory as they had been before the Third Reich’s control. Meitner fought to maintain connections to the work she left behind in Germany, but when these were severed, it seemed she had lost the battle to protect her profession and her future.
While Phillips viewed humanist science as nearly synonymous with pure science, and therefore loudly voiced her support for the separation of America’s scientific advancements from the ongoing international conflict, Meitner saw pure science through a more personal lens, associating it with her degree of freedom to explore experimental phenomena and physical theory. However, both interpretations fell under the realm of basic scientific research. In his book Science – The Endless Frontier, published in 1945, Vannevar Bush describes basic science as being “performed without thought of practical ends, . . . [resulting] in general knowledge.”18 Phillips and Meitner leveraged this form of science, and in doing so, tried to escape the risks and political stigma of direct application. Later, during the months preceding the bombings of Hiroshima and Nagasaki, the women’s decision to distance themselves from the Manhattan Project allowed them to circumvent the “growing ethical qualms about the continued development of the ‘gadget’” experienced by numerous Manhattan Project scientists.19 However, it is impossible to entirely isolate “basic” from applied scientific research, as the two approaches are largely interdependent. Meitner would have understood the crossover between pure scientific discoveries and weapons advancements, but initially turned a blind eye in favor of continuing her professional correspondence with Hahn, who she knew was an ardent German nationalist. “Our Day in their Shadow” praises women who conscientiously abandoned their jobs as physicists entirely during World War II, implying an unavoidable contamination of the field by military applications.20 However, Bush writes: “it is certain that important and highly useful discoveries will result from some fraction of the
17 Sime. Lise Meitner: A Life in Physics, 280.
18 Vannevar Bush was an American engineer who headed the U.S. Office of Scientific Research and Development (OSRD) during World War II. Vannevar Bush and Rush Holt, Science – the Endless Frontier (Washington, D.C.: National Science Foundation, 1945), 18.
19 Bird and Sherwin. American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer
20 Broadhead. “Our Day in Their Shadow: Critical Remembrance, Feminist Science and the Women of the Manhattan Project”
undertakings in basic science; but the results of any one particular investigation cannot be predicted with accuracy."21 It seems that Phillips and Meitner were prepared to rely on these uncertainties in order to defend themselves and their stances. Referring to universities, Bush explains that "it is chiefly in these institutions that scientists may work in an atmosphere which is relatively free from the adverse pressure of convention, prejudice, or commercial necessity. At their best they provide the scientific worker with a strong sense of solidarity and security, as well as a substantial degree of personal intellectual freedom." As a professor and academic, Phillips had multiple ties to universities and sought refuge in classrooms and lecture halls. Similarly, Meitner escaped to the laboratory. When it came to nuclear weapons, Phillips and Meitner defended their pacifist views by exaggerating the discontinuities between pure and applied science. Additionally, Phillips could demonstrate dedication to her bold political choices by countering U.S. security measures that favored extreme secrecy, while Meitner harnessed the moral and political ambiguities of her residence in Germany (she initially banked on the assumption that the political situation "would surely improve") to avoid compromising her relationships and her scientific promise.
Melba Phillips was a scientist subjected to America's demeaning governmental policies and a woman fixed under the public's critical stare. She pursued her passion for physics from a young age and earned her Ph.D. in the field; however, she immediately faced the challenge of making a living, a problem exacerbated by her decision to remain unmarried. She was prevented from holding advanced teaching positions at universities: many refused to hire her on the grounds of her gender. Thus, like those of numerous other female academics, the opportunities available to her were at women's schools such as Bryn Mawr and Connecticut College. Furthermore, Phillips's situation as an activist and educator was complicated by the post-World War II anticommunist zeitgeist and an accompanying domestic movement that responded to supposed threats to the "family way" of life. The author Mary Brennan explores this atmosphere surrounding much of America's female population in her book Wives, Mothers, and the Red Menace. Brennan explains that "A deeply entrenched and comfortable image of middle-class prosperity became a valuable tool for both men and women . . . as they worked to enlist the majority of American women in their campaign against communism." She goes on to say that this movement "reinforced the current
domestic ideal of femininity as the only viable option."22 Furthermore, it connected communism to minorities that were often viewed with hostility or mistrust, like homosexuals and feminists.23 Phillips, who was neither a wife nor a mother, but rather someone who deconstructed the housewifely female image, would have been viewed as a threat for her rhetoric. However, despite her courageous rejection of McCarthyism, she was not free from the pressure of prevailing conservative expectations for women. Because she upheld her values as a scientist and a politically independent individual, Phillips was viewed critically in the public eye. There was no way to earn the respect of the majority while simultaneously protecting her central beliefs.
Meitner, too, was betrayed by her own country; however, she was hurt most by her group at the Kaiser Wilhelm Institute. The fissure that developed in her life can be partially attributed to Otto Hahn's cautious protection of his public image. Lise Meitner forged close connections with him, both professionally and personally, only to be cast aside. Meitner's identity as a female physicist and a Jew gradually exposed Hahn's insecurities about his career. His response is clearly laid out in Lise Meitner: A Life in Physics. Sime explains that he experienced "self-deception brought on by fear: fear of his collaborators . . . of anyone and everyone who was poised to take advantage of his political vulnerability."24 As Meitner's Jewish heritage became a growing liability, Hahn's paranoia dictated his actions. Sime observed that, "When Lise's presence seemed to endanger the [Kaiser Wilhelm] institute, he had instinctively distanced himself from her."25 Meanwhile, after the Nazi annexation of Austria suddenly stripped Meitner of the protection her Austrian citizenship had afforded, she described Hahn's reaction: "'He has, in essence, thrown me out.'"26 After settling in Stockholm, Meitner fell into isolation and depression. Furthermore, her new discoveries, which arose from her profound dedication to physical theory, would soon be held against her.
The public's notions of women's capabilities in scientific fields fed into the classification of physics as a more refined science than chemistry. The source "Reaping the Benefits of Collaboration While Avoiding Its Pitfalls" observes that describing women as chemists would often imply an inferiority, while their husbands were subtly elevated as
22 Mary Brennan, Wives, Mothers, and the Red Menace: Conservative Women and the Crusade Against Communism, (Boulder: University Press of Colorado, 2008), 9.
23 Brennan, Wives, Mothers, and the Red Menace, 4.
24 Sime. Lise Meitner: A Life in Physics, 256.
25 Sime. Lise Meitner: A Life in Physics, 256.
26 Sime. Lise Meitner: A Life in Physics, 185.
physicists, highlighting Marie and Pierre Curie as an example.27 Chemistry was strongly associated with experimentation and steady labor. Physics, on the other hand, was associated with greater intelligence and strong problem-solving abilities, which imbued male physicists' public images with a farseeing, "lost in the mind" appeal. It was in this context that Otto Hahn sought to reserve fission for himself alone. Meitner would be distanced from the findings she shared with Hahn due to her identity as an immigrant, a woman, a Jew, and even a physicist. Sime writes that Hahn "was irked to find fission so thoroughly dominated by physics—and by physicists with no special connection to him."28 In response, he desperately sought to exclude physics from the revolutionary discovery, and in doing so he dismissed Meitner's contributions. Hahn's reaction only intensified as renowned physicists seized every opportunity to explore the findings, and Meitner was increasingly dissociated from fission and from her career as a physicist.
Like Phillips, Meitner was targeted by the paranoia of others who sought to adhere to national values and maintain stability in their own lives despite the rapidly changing political situation. Otto Hahn prioritized his profession and his status as the head of the radiochemistry department at the Kaiser Wilhelm Institute. Meanwhile, for many American women, the rejection of communism promised security in the roles American tradition had taught them to value, and assurance that they, too, would not become targets. The discrimination toward Meitner for her Jewish heritage, the distrust of Phillips for her political activism, and the disparagement of both women for their independence and intelligence motivated their strong-willed resistance.
Melba Phillips's response to the state of affairs was guided by her moral virtues: her navigation of government pressure is evident in her testimony before the Senate Internal Security Subcommittee and a letter she subsequently composed. Her acceptance of struggle in order to amplify her voice illuminates the passion and dedication behind her beliefs. Phillips's testimony records her invocation of the Fifth Amendment and her steadfast resistance to the transient demands and insecurities of those in power. When asked if she had been a member of the Communist Party, she replied, "My response to that question is dictated by my view of professional and personal ethics, first to do my professional job as well as it is humanly possible, and second, to defend and maintain my individual and
27 Pycior, Helena M. “Reaping the Benefits of Collaboration While Avoiding Its Pitfalls.”
28 Sime. Lise Meitner: A Life in Physics, 262.
personal right which I thought was my right so long as I was a law-abiding citizen."29 Following decades of effort to secure teaching positions, Phillips was finally awarded sought-after roles at Columbia University and Brooklyn College; however, in her writing, she shows awareness that these would be jeopardized by her decision to stand up to Congress. As she expected, following her refusal to incriminate herself before the subcommittee, she was promptly notified of her termination from both schools. The article "Professional and Personal Coherence" describes the years following her testimony as "Melba's days in the wilderness," referring to the valuable jobs she lost and the strained circumstances that followed. In a letter written in 1953, Phillips explained, "I am not going to worry much about expenses for a year, by which time my savings will be so low that I may have to, but on the other hand I may have enough odd jobs by that time to live."30 Her stoic acceptance of these hardships indicates her channeled attention toward problems she viewed as greater threats. As Watkins and Neuenschwander deduce, "Melba and her friends had a life of the mind that transcended their personal difficulties."31 Rather than being overcome by irresolution, the sources indicate that Phillips formed clear priorities.
Figure 1. Portrait of Melba Phillips. Maurice L. Lehv Photographers, 1966-67. Photograph. Emilio Segrè Visual Archives General Collection, Niels Bohr Library & Archives, American Institute of Physics.
Lise Meitner's response to her situation was, by contrast, guided by an apprehension that her circumstances would negate the work she had so relentlessly performed for decades of her life. She understood the moral weight of her decision to leave or remain in Berlin but largely resisted confronting it. Her position at the Kaiser Wilhelm Institute was representative of her efforts since young adulthood to overcome professional barriers. Meanwhile, the cumulative effects of the discrimination and restrictions she faced were reflected
29 Testimony of Miss Melba Phillips, Accompanied by Counsel, Cammer and Shapiro, 284.
30 Neuenschwander and Watkins, "Professional and Personal Coherence: The Life and Work of Melba Newell Phillips," 336.
31 Ibid.
inward, upsetting her emotional and psychological state. In one letter to her brother, she disclosed her feelings: "I have no self confidence, and when I once thought I did things well, now I don't trust myself. . . . I don't fit in here at all, and although I try not to show it, my inner insecurity is painful and prevents me from thinking calmly. Hahn has just published absolutely wonderful things based on our work together . . . many people here must think I contributed nothing to it."32 Multiple factors contributed to Meitner's condition following 1938: isolation and disorientation from her journey abroad, an invisibility that separated her from her discoveries, pressure to abandon theoretical science, and lost faith in Otto Hahn.
It is well known that Oppenheimer suffered under a similar mental state; however, his experiences suggest a personal divide from his field, rather than one induced by society at large. As a student, Oppenheimer found himself inept in the laboratory, and his close relationships worsened as he was overwhelmed by existential questions. His biographers recount the visibility of his depressive moods: for example, while at Cambridge, a friend recalled seeing Oppenheimer lying facedown on the floor, groaning in obvious misery, or once collapsing onto the floor of the laboratory.33 After falling into an especially profound despair, Oppenheimer read Proust's À la recherche du temps perdu. In American Prometheus, Sherwin and Bird explain that a new philosophical mentality, combined with the emerging radical ideas behind quantum mechanics, caused Oppenheimer to reach a personal awakening and re-establish his lost assurances in physics. Oppenheimer's dramatic responses amassed attention and ultimately humanized him as a scientist, exposing his personal characteristics and emphasizing his inner turmoil. The public formed distinct connections with him through these displays.
In contrast, Meitner coped inwardly. Attributing blame to herself despite her lack of control over the situation, she limited her response to reserved letters she wrote to her brother and Otto Hahn. She expressed her feelings indirectly and largely obscured the criticism she felt toward others through her affectionate wording. In one letter to Hahn she wrote, “Dear Otto! . . . It would have been so nice for me if you had just written that we—independently of your wonderful findings—had come upon the necessity for the existence of the Kr Rb Sr series.”34 However, Meitner, like Phillips, never lost her admiration for physics and her hope of gaining unrestricted access to research. Phillips exchanged her faculty positions, her salary,
32 Sime. Lise Meitner: A Life in Physics, 255.
33 Bird and Sherwin. American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer, 43.
34 Sime. Lise Meitner: A Life in Physics, 254.
and her public image to stand up for academic freedom; Meitner likewise could not abandon her life's work despite her deteriorating livelihood. As a result, she adamantly rejected involvement with the atomic bomb. She knew little of the bomb's steady progress, and when news came that it had been dropped, her shock turned to torment.
The public's seizure of control over Meitner's image distinguishes her from many of her counterparts. This widespread response took little account of Meitner's own priorities or of the way she chose to present her professional life. Sime stresses that "She knew none of the intricacies of bomb physics, none of the tricks the bomb builders had used to multiply neutrons or the specifics of bomb assembly or detonation; she did not know how the Americans had managed to separate the 235U they used over Hiroshima, or the details of the reactor that bred the plutonium that destroyed Nagasaki," demonstrating that Meitner sought to remain true to her word after initially declaring that she wanted "nothing to do with the bomb."35 Addressing Meitner's grievances and despair that resulted from her celebrity upon the detonation of "Little Boy," as the uranium bomb dropped on Hiroshima was named, Sime writes that Meitner's "efforts were futile; the press went after her with an intensity that bordered on assault."36 As the great world conflict ceased,
35 Sime. Lise Meitner: A Life in Physics, 314.
36 Sime. Lise Meitner: A Life in Physics, 313.
the American public sought out a distinct hero and found appeal in a "fleeing Jewess" image they attributed to Meitner. Meitner was referred to as "the Mother of the Atomic Bomb," a title that she vehemently rejected for the rest of her life. Sime explains that "hero worship, prejudice, and gender bias wiped out her independent scientific record with a single word." Furthermore, Otto Hahn would continue to keep her in his shadow, negating her efforts to receive the form of praise she valued: societal recognition for discoveries in physics, free from militarism and mass death and destruction.
The nuclear scientists Maria Goeppert-Mayer and Lilli Hornig also sought out fulfilling professional lives; however, they wished to simultaneously uphold their commitments as wives and mothers.37 Maria Goeppert was in the midst of pursuing a Ph.D. in physics when she met Joseph Mayer, whom she would soon marry. Upon receiving her degree, she was immediately subjected to the challenges of asserting herself as a married female physicist in a skeptical and unwelcoming male-dominated field. Lilli Hornig also became a wife before receiving her degree in chemistry from Harvard University. When her husband, Don Hornig, was offered a position in the Manhattan Project at Los Alamos, Lilli had to be promised a career in the Los Alamos chemistry laboratory before she welcomed this abrupt and mysterious change. The situation is described in her Atomic Heritage Foundation interview: after multiple phone calls from Los Alamos to persuade the young couple to come, she inquired: “What am I going to do there?” Her question received a reassuring response: “Oh we’re scouring the country for people—anybody with a Master’s in chemistry, especially from Harvard, is going to be more than welcome.”38 This implies that Lilli was intent on following her husband, but only if the change would also advance her own professional career.
Goeppert-Mayer and Hornig sought to follow their scientific endeavors despite the contemporary expectations surrounding motherhood. In 1942, in the midst of the Manhattan Project’s
37 Goeppert-Mayer was recognized by her students and acquaintances for her "chain-smoking, chalk-waving intensity" and "excited, rapid fire oral delivery." As a student, she attended Göttingen and Cambridge University. Her later work involved uranium isotope separation and the revolutionary nuclear shell structure model. Lilli Hornig is noted for her outspoken feminist responses to the assumptions of Harvard University's physical chemistry department and Los Alamos's personnel office. She attended Bryn Mawr and Harvard. Her work involved testing the solubility of plutonium salts and theoretical explanations of shock fronts.
38 The Atomic Heritage Foundation, “Lilli Hornig’s Interview.”
development, Joseph accepted work at the Aberdeen Proving Ground in Maryland. Meanwhile, Maria remained behind, caring for their children and performing full-time research at Columbia's Substitute Alloy Materials (SAM) Laboratory. In a letter written in 1944, she explained, "Spending at least 40 hours a week in the laboratory is rather strenuous when combined with two children."39 Her efforts during this time reveal her insistence on bridging the traditionally separated roles of mother and scholar. Lilli Hornig, meanwhile, was fired from her laboratory work due to the risks of reproductive harm that experimenting with an isotope of plutonium posed. In her interview she recalls, "I tried delicately to point out that [my supervisor] might be more susceptible than I was; that didn't go over well."40 Like Goeppert-Mayer, Hornig was desperate to avoid having her gender and potential motherhood obstruct the work she was capable of accomplishing as a scientist; in doing so, she was willing to compromise her own well-being.
During the Great Depression, married women were frequently viewed with hostility for their "selfishness" in seeking out jobs while there were few vacancies. Anti-nepotism rules were defended as a policy to prevent colleges from employing the under-qualified spouses of faculty members in place of more experienced candidates. However, this justification was manipulated under societal pressure and discrimination against women in the workplace. The policy exacerbated the challenges of upholding multiple commitments as a woman. As a result of anti-nepotism rules in the early to mid-1900s, women who continued their work were frequently relegated to inferior conditions and received little to no compensation, deprived of acknowledgement of their expertise and labor. While her husband held a tenure-track position at Johns Hopkins University, Goeppert-Mayer took on an unpaid teaching position at the school that permitted her only an attic workspace. Furthermore, she could not be officially associated with her husband under their shared surname.41 Joe eventually left the university; however, the reason for this is unclear: some sources attribute it to economic issues, and others explain that he was terminated due to his German roots and Maria's presence. This pattern repeated itself numerous times, even when Goeppert-Mayer was hired to work in a separate department from her husband. In one letter, she wrote that she had no choice but to follow Joe
39 Sarah Lawrence College Archives. Maria Goeppert Mayer to Constance Warren
40 The Atomic Heritage Foundation, “Lilli Hornig’s Interview.”
41 Center of History for Physics at AIP. “Struggle for Employment: Anti-Nepotism Laws in the Academy,” 2.
when he received a new position.42 Overshadowed by the scientific recognition of her husband, Lilli Hornig also had to defend herself against society's attempts to relegate her to a menial professional status. Despite the earlier phone call urging her to come to Los Alamos, she was first offered a position as a secretary. She insisted, however, that her talents would be better utilized in the laboratory.
Maria Goeppert-Mayer and Lilli Hornig upheld their personal values in their relationships and in their scientific research; however, in the process they, too, had to make distinct sacrifices. Anti-nepotism policies imposed such great hindrances on women who were both professors and wives that many resorted to hiding their marriages from their employers.43 Additionally, Goeppert-Mayer's strenuous efforts to care for her two children while balancing her professional responsibilities compromised her physical and mental health. She contracted pneumonia, which debilitated her for a time. However, by insisting on continuing her work at Columbia's laboratory, she avoided conceding the remaining decades of her career to volunteer professorships. Goeppert-Mayer likely valued the university for the opportunity to advance her profession without abandoning her familial duties. Additionally, her prior position at Sarah Lawrence, a women's college, may have shaped her desire to seize an offer situated within the larger, male-dominated physics community. The position at Columbia promised connections to the world's leading physicists, allowing her, as an individual, the chance to demonstrate her expertise in physics. Through her new position, she was able to become acquainted with many of the century's best scientific minds, including Niels Bohr and Enrico Fermi. Most significantly, she also developed her relationship with Edward Teller, who played a leading role in the creation of the hydrogen bomb and who ultimately encouraged her to pursue her Nobel Prize-winning work on the nuclear shell model.
Lilli Hornig also sought opportunities to obtain a comprehensive understanding of the recent developments in her field, which required her to step outside the isolation of the laboratory and the routine tasks she performed. Her husband was permitted to be an eyewitness of the atomic bomb test detonation near the Trinity site; however, Hornig secretly accessed a viewing location in the Sandia Mountains.44 If she had not demonstrated this curiosity regarding the impact
of her work, her ability to take an informed stance on the bomb would have been limited; instead, she likely would have been influenced by the opinion of her husband, an explosives engineer.45 The Szilard petition, though never delivered, was created with the intention of convincing Washington to allow the Japanese to witness an atomic bomb test in order to demonstrate its destructive power and give them the opportunity to surrender. Lilli Hornig recalled signing the Szilard petition in the absence of her husband. Her mention of this implies Don Hornig's own steadfast viewpoint, which is supported in his biography by Zuoyue Wang, who explains that he was "convinced of [the bomb's] necessity in ending the war, and with a brother serving in the Navy, [he] endorsed [its] use in Japan."46 Lilli's individual voice, as well as those of many other scientists at this time, held little significance in the scope of the project because, as she explains, "the military . . . made the decision well before that they were going to use it no matter what."47 However, scientists are remembered and humanized for their responses to tumultuous circumstances. Had she not shown this determination to comprehend the applications and overall significance of her work, Hornig's viewpoint would have been presumed to align with her husband's.
Lilli Hornig and Goeppert-Mayer either suppressed or did not comprehend the full moral implications of participating in the construction of the bomb until very late into their work. The unpredictable nature of Hornig and Goeppert-Mayer's Manhattan Project assignments may have contributed to the women's willingness to carry them out. Hornig explained that there was "essentially nothing known about plutonium chemistry at the time," indicating that successful applications did not seem imminent. Plutonium was first synthetically produced in 1941, and only a very limited amount of the element was available for experimentation. Only after being pressured to abandon this position did Lilli begin working alongside her husband on lensing, a method of shaping shockwaves around the core of the bomb so as to control its implosion. Meanwhile, Maria Goeppert-Mayer worked on photochemically separating uranium, a task that demonstrated little practicality, before moving on to a separation technique
42 Chia, Jing Min. “Maria Goeppert Mayer: Revisiting Science at Sarah Lawrence College.”
43 The Atomic Heritage Foundation, “Lilli Hornig’s Interview.”
44 The Atomic Heritage Foundation, “Lilli Hornig’s Interview.”
45 The Atomic Heritage Foundation, “Lilli Hornig’s Interview.”
46 The Atomic Heritage Foundation, “Lilli Hornig’s Interview.”; American National Biography “Hornig, Donald F,” 1.
47 The Atomic Heritage Foundation, “Lilli Hornig’s Interview.”
involving gaseous diffusion.48 Although these women's experiments held scientific value, their work initially seemed unlikely to produce results of significant promise to the Manhattan Project. However, plutonium-239 would eventually form the core of two successful nuclear bombs, including the one tested at Trinity, and lensing was a fundamental component of the complex implosion mechanism. Additionally, the successful gun-type bomb design used a uranium-235 core, with uranium enriched in part through gaseous diffusion. Hornig's perspective concerning the impact of her work was influenced by the specialization of her assignments, which obstructed her understanding of the scope and progress of the project as a whole. In her interview, Hornig described her work at Los Alamos: "there was one other woman in the division; she and I worked together and we had our little cubby hole and did our little procedures and put them under the Geiger counter. It wasn't terribly inspiring and nobody actually really spoke to us."49 The description of her daily routine demonstrates the confinement of her work. Compartmentalization was heavily enforced at Los Alamos by authorities like General Leslie Groves, and numerous employees were informed about other aspects of the project on a strictly "need-to-know" basis. Additionally, the Manhattan Project was enormous and dispersed throughout the country; unless a scientist occupied a directorial position, predicting its progress in its successive stages would be nearly impossible.
Immigration may also have been a key influence in Hornig's perception of the bomb. Hornig was born in 1921 in Czechoslovakia. In 1929 her family moved to Berlin, and four years later they immigrated again, this time to the United States. She explained that "after Hitler came to power, my father was actually being threatened with being taken off to a concentration camp. And he spent several weeks sleeping at friends' houses so he wouldn't be
found."50 The suffering that her family endured is reflected in her statement that "many of us had really worked on it—on the bomb with the thought that it might deter Hitler."51 Profound fear and anger toward Nazi Germany, arising from directly experiencing Hitler's control, motivated many immigrant scientists to contribute their expertise to the Manhattan Project; Lilli was one of them. Once the war in Europe was over, Hornig felt it was too late: as long as the war continued, the use of the bomb could not be prevented. She explains, "and we thought in our innocence—of course it made no difference—but if we petitioned hard enough they might do a demonstration test or something."52
Goeppert-Mayer's motivation for pursuing work on nuclear weaponry remains largely ambiguous. She immigrated to the United States in 1930, three years before Hitler's appointment as chancellor of Germany. However, she had a clear understanding of Nazi Germany's violence and oppression from her efforts helping fellow scientists find refuge. Additionally, many of her friends and family members remained in Germany. In 1941, shortly before the United States entered the Second World War, Goeppert-Mayer secured her first paid position at Sarah Lawrence College, an institution that offered Goeppert-Mayer, a longtime victim of anti-nepotism policies, newfound recognition, respect, and resources. However, in 1943, she was given leave from the college to work for the Manhattan Project within Columbia's Substitute Alloy Materials (SAM) Laboratory, studying uranium isotopes. It seems that this change promised her no clear benefits: the pay was no better, and, because her husband had accepted work in Maryland, it interfered with the domestic responsibilities she had previously been committed to upholding. Additionally, as the project stretched on, she risked termination from her valuable Sarah Lawrence position. Unlike Hornig, Goeppert-Mayer does not mention that her work on the bomb was initially guided by the hope of defeating Hitler and Nazi Germany. Instead, she reveals that her willingness to experiment with uranium fission for SAM was dependent on her understanding that the atomic bomb would not be completed before the war's end. However, following the atomic bombings, she continued research for Edward Teller's Opacity Project, which he created in pursuit of a hydrogen bomb.
Maria Goeppert-Mayer's close relationships with Enrico Fermi, Harold Urey, and Edward Teller offered her attractive prospects: assistance in recognizing unsolved scientific
48 Hadley Hershey, "Maria Goeppert Mayer," Los Alamos National Laboratory.
49 The Atomic Heritage Foundation, “Lilli Hornig’s Interview."
50 The Atomic Heritage Foundation, “Scientist Refugees and the Manhattan Project”
51 The Atomic Heritage Foundation, “Lilli Hornig’s Interview.”
52 The Atomic Heritage Foundation, “Lilli Hornig’s Interview.”
problems with great potential, and an elevated position within the male-dominated physics community.53 As a result, Goeppert-Mayer strove to remain in close proximity to these scientists and their work. She had been introduced to Urey and Fermi in 1939 after taking a position at Columbia University. Fermi asked her to investigate the valence shell of the transuranic elements (elements with atomic numbers greater than that of uranium), and after her success in revealing fundamental characteristics of these elements, she was elected a Fellow of the American Physical Society. In 1942, Goeppert-Mayer became involved in the Manhattan Project after Urey sought her expertise for a position at SAM. She worked on devising methods for the separation of the uranium-235 isotope, which was no minor objective in the construction of the bomb. In 1944, Goeppert-Mayer accepted a new role at Columbia, where she would explore the properties of matter and radiation at high temperatures for Teller's Opacity Project: a mission to construct a "Super" bomb (later known as a thermonuclear or hydrogen bomb). Less than two years later, when Joe took a job at the Institute for Nuclear Studies at the University of Chicago, she began serving as a volunteer Associate Professor of Physics at the university. Teller had also taken a position there, and she was able to continue her Opacity work with him.
Today, Edward Teller receives a level of historical recognition barely below Oppenheimer's. Although a "brilliant man," Teller was thought by many scientists at the time to be paranoid, volatile, and, later, dishonest.54 Goeppert-Mayer, however, "considered [him] to be one of the world's most stimulating collaborators."55 In a Scientific American article, the author Ashutosh Jogalekar explains that "level headed decision making and the ability to be a team player" were not qualities that Teller embodied. He goes on to describe Teller's background, stating, "[a] combined double blow brought about by the cruelties of communism and Nazism seems to have dictated almost every one of [his] major decisions," and, with the Cold War on the horizon, "[he took] advantage of the worsening political situation and his own growing prominence in the scientific community" to promote nuclear weaponry for the next fifty years.56 Goeppert-Mayer's connections to this scientist suggest that her aim to achieve freedom and prominence in the physics community overshadowed considerations of pure science and pacifism. Success as a physicist, which she perceived to be attainable through such connections, took precedence. Success had a very different meaning for Goeppert-Mayer than it did for Phillips or Meitner; however, her decisions paved the way for her Nobel Prize.
Later in her life, addressing her work for the Manhattan Project, Goeppert-Mayer explained, "we found nothing, and we were lucky . . . we escaped the searing guilt felt to this day by those responsible for the bomb."57 Her work on the bomb was perhaps overshadowed by the credit and renown that accompanied the Nobel Prize. In 1948 she proposed the nuclear shell model, which theorized that, like electrons, protons and neutrons occupy certain layers, or energy levels, within the atomic nucleus that influence its stability. This revolutionary discovery and her Manhattan Project contributions, however, were inseparable in history. Her relationships with Teller, Fermi, and Urey revealed discrepancies and gaps in physicists' understanding and gave her confidence in her ability to address them. She could not have been elevated in the male-dominated physics community without these connections to prominent men. It is difficult to gauge the truth behind scientists' personal stories, which recount their feelings
53 Enrico Fermi (1901-1954) was an Italian-American physicist. His work involved the first chain reaction of nuclear fission. He was also awarded the Nobel Prize in 1938. Harold Urey, an American chemist, received the Nobel Prize in Chemistry in 1934 for his discovery of deuterium. Edward Teller (1908-2003), a Hungarian-American theoretical physicist, was among the first male scientists recruited to work on the Manhattan Project at Los Alamos.
54 Ashutosh Jogalekar. “The Many Tragedies of Edward Teller.” The Curious Wavefunction.
55 Sachs, Robert. Maria Goeppert-Mayer, 322.
56 Jogalekar. “The Many Tragedies of Edward Teller.” The Curious Wavefunction.
57 The Nobel Prize, “Women Who Changed Science: Maria Goeppert Mayer”
and hesitations, due to the controversy and social stigma surrounding the decisions of this era. Scientists like Hornig may have adjusted their individual stories to correspond to evolving public perceptions. "Our Day in Their Shadow" rightfully asserts that not all female scientists of the Manhattan Project should be elevated as role models for their great contributions. However, citizens and researchers, as grantors of historical recognition, must also acknowledge women like Lilli Hornig and Maria Goeppert-Mayer for the unique circumstances in which they found themselves.
Phillips, Meitner, Goeppert-Mayer, and Hornig assumed conflicting roles as female scientists. Tenaciously defending their personal values meant sacrificing their hopes of cultivating praised public images. Meitner and Phillips, like Hornig and Goeppert-Mayer, were subjected to the strict divide between female domesticity and scientific professions, especially with regard to women's perceived capabilities in physics. Their decisions ultimately stemmed from leveraging the uncertainties posed by the size and inherent secretiveness of the Manhattan Project, the general unpredictability of scientific applications, and Germany's political path. However, while Meitner and Phillips used this unpredictability to distance themselves and their research from the global turmoil, instead pursuing "pure" science at the cost of national recognition, Hornig and Goeppert-Mayer leveraged it to assert themselves in the midst of their fields and advance their careers.
It is necessary to recognize that female scientists' perspectives evolved as they encountered numerous barriers and as moral issues grew in prominence. Manhattan Project experiments continually gauged the likelihood of completing an atomic bomb and demonstrated that the U.S. could not be guaranteed a weapon within the time constraints imposed by the race against the German bomb project. American scientists experienced heightened paranoia and anxiety from this uncertainty. The pressures and ambiguities of the time presented Phillips, Meitner, Hornig, and Goeppert-Mayer with imperfect choices. However, they are not given recognition comparable to that of their male counterparts for their scientific contributions and multilayered circumstances. Historians and the general public must acknowledge the impact of these women's intelligence, as well as their personal values connected to their intersecting identities.
American National Biography, s.v. “Hornig, Donald F,” accessed December 7, 2023. https://www.anb.org/.
Bird, Kai, and Martin J. Sherwin. American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer. London: Atlantic Books, 2023.
Brennan, Mary C. Wives, Mothers, and the Red Menace: Conservative Women and the Crusade Against Communism. Boulder, CO: Univ. Press of Colorado, 2008.
Bush, Vannevar, and Rush D. Holt. Science – the Endless Frontier. Princeton University Press, 2021. https://doi.org/10.2307/j.ctv15r5879.
Campbell, F. L. “Atomic Thunderbolts.” The Scientific Monthly 61, no. 3 (1945): 233–34. http://www.jstor.org/stable/18583.
"Eyewitness Accounts of the Explosion at Trinity on July 16, 1945." The Trinity Test: Historical Documents, atomicarchive.com. Accessed December 7, 2023. https://www.atomicarchive.com/resources/documents/trinity/weisskopf.html.
Hershey, Hadley. "Maria Goeppert Mayer." Los Alamos National Laboratory, August 29, 2022. https://discover.lanl.gov/publications/the-vault/the-vault-2022/maria-goeppertmayer/.
Jogalekar, Ashutosh. "The Many Tragedies of Edward Teller." The Curious Wavefunction, January 15, 2014. http://wavefunction.fieldofscience.com/2015/01/the-many-tragedies-of-edward-teller.html.
Kelly, Cindy, and Lilli Hornig. "Lilli Hornig's Interview." The Atomic Heritage Foundation, November 4, 2011.
Neuenschwander, Dwight E., and Sallie A. Watkins. “Professional and Personal Coherence: The Life and Work of Melba Newell Phillips.” Physics in Perspective 10, no. 3 (2008): 295–364. https://doi.org/10.1007/s00016-007-0373-z.
Phillips, Melba. "Dangers Confronting American Science." Science 116, no. 3017 (October 24, 1952): 439–43. https://doi.org/10.1126/science.116.3017.439.
Pycior, Helena M. “Reaping the Benefits of Collaboration While Avoiding Its Pitfalls: Marie Curie’s Rise to Scientific Prominence.” Social Studies of Science 23, no. 2 (1993): 301–23. http://www.jstor.org/stable/285481.
Sime, Ruth Lewin. Lise Meitner: A Life in Physics. California Studies in the History of Science. Berkeley: University of California Press, 1996. https://search-ebscohost-com.proxy216.nclive.org/login.aspx?direct=true&db=nlebk&AN=8685&site=ehost-live.
Testimony of Miss Melba Phillips, New York, N.Y., Accompanied by Counsel, Cammer and Shapiro, New York, N.Y., Before the Senate Internal Security Subcommittee, 82nd Cong. 284 (1952) (statement of Melba Phillips)
"Their Day in the Sun." Temple University Press. Accessed December 7, 2023. https://tupress.temple.edu/books/their-day-in-the-sun.
"Women Who Changed Science: Maria Goeppert Mayer." The Nobel Prize. Accessed December 7, 2023. https://www.nobelprize.org/womenwhochangedscience/stories/maria-goeppert-mayer.
Annabella Botta
The U.S. government has struggled with racial classification systems for around a century. Though it has greatly improved its data collection process, many would argue that it has continually missed the mark on appropriate racial and ethnic terms. It has provided categories that are at once broad and restrictive in hopes of "properly" assessing the demographics of the citizens within its borders. As a result of these terms and the lack of information surrounding them, many people do not know how to identify themselves. In the case of Latinos and Hispanics, the U.S. government has managed to both unify and divide them. Having two terms stand for one group of people has only led to more confusion over how members of the ethnic group identify themselves. One of the main issues the government has encountered with Latinos and Hispanics is the two-step racial question: the first part asks whether the person is Latino or Hispanic, and the second asks about their race. However, this approach has proven more problematic than simply listing Latino or Hispanic as a racial category. Through the improper accounting of their population and the resulting confusion about what to mark, the Latino and Hispanic population has influenced the design of the decennial census. With their population increasing and the definitions of race fluctuating, the changes the census may undergo, such as new racial and ethnic categories or specific definitions of each race, may determine whether the U.S. can properly account for its citizens.
For over two centuries the census has been used by the Federal government to collect different types of data, including accounting for all citizens within the United States. Still, it
was not until 93 years ago that Hispanics and Latinos were given an opportunity to represent themselves on the form. In 1930, the United States census questionnaire asked "Color or Race," which census takers would fill in. For that decade, the Census Bureau added the category "Mexican" to its database, counting Mexicans as their own race. This showed some improvement, as "prior to the 1930 Census, Mexicans had been categorized as White."1 While including Mexicans in the census could be considered a milestone for recognizing Hispanics and Latinos in the United States, it may have caused more harm than good. Being placed in a new racial category opened the door to discrimination, which, unfortunately, many experienced. As a result of this new racial category, many Mexicans and Mexican-Americans were deported in the following years. According to the U.S. Citizenship and Immigration Services report, "during the peak years of the repatriation campaigns (1929-1935), the INS formally removed approximately 82,000 Mexicans."2 That number accounts only for the Mexicans who were recorded by the Immigration and Naturalization Service (INS), meaning that there were likely thousands more deported. Facing these new challenges, activist groups began lobbying the Census Bureau to remove the
1 US Census Bureau, "U.S. Decennial Census Measurement of Race and Ethnicity across the Decades: 1790–2020," Census.gov, August 3, 2021, https://www.census.gov/library/visualizations/interactive/decennial-census-measurement-of-race-and-ethnicity-across-the-decades-1790-2020.html.
2 "INS Records for 1930s Mexican Repatriations | USCIS," www.uscis.gov, July 29, 2020, https://www.uscis.gov/about-us/our-history/history-office-and-library/featured-stories-from-the-uscis-history-office-and-library/ins-records-for-1930s-mexican-repatriations.
category from the census. In their article, Kim Parker, Juliana Menasce Horowitz, Rich Morin, and Mark Hugo Lopez describe these actions: "but Mexican Americans (helped by the Mexican government) lobbied successfully to eliminate it in the 1940 census and revert to being classified as white, which gave them more legal rights and privileges."3 Their main goal was to remove the category so that they would be marked as white and thus would no longer face discrimination. The Census Bureau was convinced and removed the category from the racial question, taking the only representation that Latinos and Hispanics had out of the census for forty years.
Even though they were left out of the census, Latinos and Hispanics remained a topic of debate, as many in and out of the community believed that they should be given their own racial/ethnic category. The census sheets continued to change; for example, the first mail-out census was conducted in 1960. As the census continued to change, so did the United States. In 1965 Lyndon B. Johnson signed the Immigration and Nationality Act, or Hart-Celler Act. This act was revolutionary, as it would change immigration into the United States forever and thus influence the ethnic and racial groups within the country. With the old national-origins quotas removed, people from all over the world, including Latin America, could immigrate to the United States.4 To account for the increase in the Hispanic and Latino population, along with the push for representation from the ethnic group, the United States decided to change the census once more. The U.S. Census Bureau described the change as "the first time a separate question on Hispanic origin or descent was asked, but only of a 5 percent sample of the population."5 With only five percent of the population receiving the question, it was evident that this was a trial to see whether the category would stay. On the 1970 census sheet, the question asked, "Is this person's origin or descent—" and then listed the options Mexican, Puerto Rican, Cuban, Central or South American, Other Spanish, or No, none of these. Even though the census thereby asked, in effect, whether the person was Hispanic or Latino, it did not explicitly use either term and was therefore unable to unify the community under one category. Although the format seemed
3 Kim Parker et al., "Race and Multiracial Americans in the U.S. Census," Pew Research Center's Social & Demographic Trends Project, June 11, 2015, https://www.pewresearch.org/social-trends/2015/06/11/chapter-1-race-and-multiracial-americans-in-the-u-s-census/.
4 Census History Staff US Census Bureau, “1970 Overview - History - U.S. Census Bureau,” www.census.gov, accessed September 27, 2023, https://www.census.gov/history/www/through_the_decades/overview/1970.html.
5 Census History Staff US Census Bureau, “1970 Overview - History - U.S. Census Bureau,” www.census.gov, accessed September 27, 2023, https://www.census.gov/history/www/through_the_decades/overview/1970.html.
somewhat understandable, there were many discrepancies in the data collected, since many respondents found the question confusing. Toward the end of the decade, Edward R. Roybal, a California representative, advocated for a law requiring the government to gather information about households and citizens whose country of origin was Spanish-speaking.6 As a result, on May 12, 1977, the Office of Management and Budget (OMB) adopted Directive No. 15, Race and Ethnic Standards for Federal Statistics and Administrative Reporting, which officially added the term "Hispanic" to the census as its own ethnic category.7 According to the Office of Management and Budget, a "Hispanic" is defined as "A person of Mexican, Puerto Rican, Cuban, Central or South American or other Spanish culture or origin, regardless of race." With a set definition and category, the ethnic group now had more accurate representation in the census and other federal forms. The term first appeared on the 1980 census sheet.8 The racial/ethnic background question was again multiple-choice, but the options were different, including "No (not Spanish/Hispanic)," "Yes, Mexican, Mexican-Amer., Chicano," "Yes, Puerto Rican," "Yes, Cuban," and "Yes, other Spanish/Hispanic." Although only slightly different, the inclusion of this term was a major step for the ethnic group's representation, and it also aided in accounting for the number of U.S. citizens who were part of the ethnic group.
Giving Hispanics their own ethnic category was only part of the challenge for the U.S. Census Bureau; the other part was getting citizens who belonged to this group to mark it on the form. In an effort to get Hispanics to fill out the form, Univision (the largest Spanish-language media provider in the U.S.) created census ads for the 1980 and 1990 censuses.9 These ads featured famous athletes or characters speaking Spanish, asking listeners and viewers to participate in that year's census and noting that doing so would greatly help the community by providing it with more advantages later on. For example, in the 1980 Univision census ad, Efren
6 George Ramos, "From the Archives: Pioneer in Latino Politics in Los Angeles," Los Angeles Times, October 26, 2005, https://www.latimes.com/local/obituaries/archives/la-me-edward-r-roybal-20051026-story.html.
7 Centers for Disease Control and Prevention, "OMB Directive 15: Race and Ethnic Standards for Federal Statistics and Administrative Reporting," wonder.cdc.gov, November 19, 2019, https://wonder.cdc.gov/wonder/help/populations/bridged-race/directive15.html.
8 US Census Bureau, "1980 Census Short-Form Questionnaire," US Census Bureau, 1980, https://www.census.gov/content/dam/Census/programs-surveys/decennial/technical-documentation/questionnaires/1980_short_questionnaire.pdf.
9 “‘Destino 80’ Univision Census Ads 1980,” www.youtube.com, accessed September 27, 2023, https://www.youtube.com/watch?v=viyPQ1WnrFc&t=37s; “Univision Census Ad Series 1990,” www.youtube.com, accessed September 27, 2023, https://www.youtube.com/watch?v=aSe0RkH4tMo
Herrera from the Seattle Seahawks is seen with his family filling out the census. Herrera turns to the camera and explains that it is important to "all of us," as it could help them gain "schools, hospitals, work, and representation." Because the entire ad is in Spanish, it could reach a larger audience and therefore encourage higher participation from the Hispanic community. These ads proved helpful, as there was an apparent increase in the recorded number of Hispanics, and in 1990 a slight change to the census improved representation of the ethnic group even further by adding a failsafe for determining whether a respondent was of Hispanic or Spanish descent. The 1990 census sheet used a format similar to the 1980 sheet, but the "Yes, other Spanish/Hispanic" category included an open response in which to write the country of origin. With this information, the Census Bureau could determine who was and was not Hispanic even if a respondent was confused. As a result of the 1990 census, a large population of Hispanics was recorded. Between the 1990 and 2000 censuses, the Hispanic category underwent another change. In 1997 the OMB released Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity.10 These revisions changed the ethnicity question on the 2000 census sheet to ask whether a person was of Spanish/Hispanic/Latino origin and specified that the term Hispanic should be changed to "Hispanic or Latino."11 According to the OMB, the term Latino was added because "regional usage of the terms differs—Hispanic is commonly used in the eastern portion of the United States, whereas Latino is commonly used in the western portion." Integrating both terms was a way to accommodate the enormous scope of the United States, allowing the Bureau to collect more information, since there would be less confusion about the category. On the most recent census sheet, in 2020, the ethnic category was formatted the same way, but the racial question also asked respondents to write in a country of origin. Doing so may assist the government in collecting conclusive data about Hispanics and Latinos, as many mark the "some other race" category. In their article, Paul Taylor, Mark Hugo Lopez, Jessica Martínez, and Gabriel Velasco explain that "half (51%) of Latinos identify their race as 'some other
race.'"12 This identification does not give much insight to government officials, as the category was not meant to be used so frequently. A write-in option would eliminate confusion and better represent specific races within the United States, contributing more valid data. Even though having both terms within the racial/ethnic question has, on a surface level, made the census less confusing for those filling it out, many still do not identify with the categorization. Mark Hugo Lopez, Jens Manuel Krogstad, and Jeffrey S. Passel shared the results of a 2019 survey asking how Latino and Hispanic people identify themselves. The results showed that 47% of Hispanics identify by country of origin, 39% as Hispanic/Latino, and 14% as American.13 Since a plurality identify themselves by their country of origin, this raises the question of whether the ethnic category represents the community effectively or is too broad.
As the census changes, new data and patterns will be collected, unifying or separating the vast community of Hispanics and Latinos in the United States. According to the Office of Management and Budget, "racial and ethnic categories set forth in the standards should not be interpreted as being primarily biological or genetic in reference" but instead should be considered "in terms of social and cultural characteristics as well as ancestry."14 This means that anyone could be regarded as Hispanic or Latino, provided they fit the requirements that the U.S. Census Bureau has laid out. As the 2030 census approaches, however, there have been new talks of changing the Latino and Hispanic category again, this time treating it as a race. Based on the previous history of the census, this could be monumental for the community but also detrimental in terms of societal acceptance.
With the number of changes made to the race/ethnicity question in the census and the lack of information about each change, it is understandable that many do not know the difference between race and ethnicity. This confusion may
10 Office of Management and Budget, "Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity," October 30, 1997, https://www.govinfo.gov/content/pkg/FR-1997-10-30/pdf/97-28653.pdf.
11 US Census Bureau, "2000 Census Long-Form Questionnaire," US Census Bureau, 2000, https://www.census.gov/content/dam/Census/programs-surveys/decennial/technical-documentation/questionnaires/2000_long_form.pdf.
12 Paul Taylor et al., "When Labels Don't Fit: Hispanics and Their Views of Identity," Pew Research Center's Hispanic Trends Project, April 4, 2012, https://www.pewresearch.org/hispanic/2012/04/04/when-labels-dont-fit-hispanics-and-their-views-of-identity/.
13 Mark Hugo Lopez, Jens Manuel Krogstad, and Jeffrey S. Passel, "Who Is Hispanic?," Pew Research Center, September 5, 2023, https://www.pewresearch.org/short-reads/2022/09/15/who-is-hispanic.
14 Office of Management and Budget, "Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity," October 30, 1997, https://www.govinfo.gov/content/pkg/FR-1997-10-30/pdf/97-28653.pdf.
lead to inaccurate information being filled out by citizens, specifically Latinos and Hispanics. The confusion can be explained by the fact that Latinos and Hispanics are often treated as a race when, by the government's definition, they are an ethnicity. Because of how they are treated, they often overlook the government's definition of their categorization and consider themselves their own race.15 Another aspect of the census that many struggle to understand is the individual meaning of each of the terms "Latino" and "Hispanic." The terms are frequently used interchangeably, and their differences often go unnoticed. To explore these issues, interview questions were asked of known Latino/Hispanic faculty and students at the North Carolina School of Science and Mathematics (NCSSM) Morganton campus. These questions focused on topics relating to the census and the participants' input. Gathering this information may give additional insight into how the community feels about the terms, their categorization by the government, and whether members have gained any benefits from marking this category in the past. Additionally, the information received can help generate new hypotheses about how larger groups of Latinos and Hispanics may feel and possibly support changes that the government plans to make to the census in the future.
Interviews were scheduled whenever the interviewee and the interviewer had aligned schedules. Each interviewee was informed of the purpose of the interview in the email asking whether they would be willing to participate. The interviews opened with a “thank you” and were followed by the 10–11 questions (staff and faculty were asked an additional question). The length of each interview varied with the participants’ responses. The following questions were asked:
1. Do you know what the difference is between ethnicity/race? If yes, can you explain it to me?
The Census Bureau currently uses a two-step question asking for a person’s race and ethnicity. When referring to Hispanics and Latinos, however, many consider the group its own separate race rather than the ethnicity that the classification states.16 Asking this question may help test the claim that many people do not understand the difference between race and ethnicity, which can cause problems in understanding the two-step racial question.
Across the interviews, it was interesting to note the similarities and differences in the answers. Of the 14 people interviewed, eight were able to describe both terms in an attempt to explain the difference, while the other six explained that they could not say what the difference is. This result was especially surprising, as I had assumed that most people would not know the difference since the terms are so commonly used together rather than separately. Of those who did say yes, only a couple felt entirely confident in their answers, while others began with phrases such as “I think…” or “I’m pretty sure…” before providing their explanation. Based on these results, I would expect the majority of individuals in a larger group to provide, or at least attempt to provide, definitions of race and ethnicity.
2. Do you know what the difference is between Hispanic and Latino? If yes, can you explain it to me?
Another source of confusion for many is the difference between the terms “Latino” and “Hispanic” themselves. Because of this, many do not know what to mark on the form and will oftentimes leave the box unmarked. Asking this question can either rule out this possible source of confusion or account for why there are discrepancies in census collection.
Results
Of the 14 individuals interviewed, 11 were able to provide a description of both Latino and Hispanic, but only six were able to describe each term properly. Initially, I expected around half of the total participants to be able to describe the terms properly. Given that a little less than half provided the correct definitions, my results matched expectations, and if more participants were asked this question there would likely be similar numbers. Surprisingly, though, not all of the adults accurately understood the terms or knew the difference.
15 Edward Telles, “Latinos, Race, and the U.S. Census,” The Annals of the American Academy of Political and Social Science 677 (2018): 153–64, https://www.jstor.org/stable/26582325.
16 Edward Telles, “Latinos, Race, and the U.S. Census,” The Annals of the American Academy of Political and Social Science 677 (2018): 153–64, https://www.jstor.org/stable/26582325.
One of the main reasons proposed for the discrepancies seen in Latino/Hispanic census recordings has essentially been supported. Many scholars have claimed that the discrepancies seen in the data are likely caused by a lack of understanding of the terms. Only six of the 14 participants could accurately describe each term, showing that there is evidently a lack of understanding. This was only a small portion of a very large population, so there are likely many citizens who remain confused, which in turn affects their responses for self-identification.
3. With what term do you identify yourself and why?
Reasoning and Prediction
This question is specifically meant to gather information about which term Latinos and Hispanics prefer (assuming that the majority of those interviewed fall into both categories). The results could then be compared to the information presented in Pew Research’s “Who Is Hispanic?” article, which states that in a 2020 survey 50% of Hispanics/Latinos preferred the term Hispanic.17 The information recorded from this question can also be used to either support or refute the OMB’s claim that the term “Hispanic” is more frequently used on the East Coast.18
After the interviews, my results varied more than I thought they would. Of those interviewed, six said they prefer “Hispanic,” five said they prefer “Latino,” and three said “both.” Their explanations were primarily related to their upbringing and understanding of the terms. The results were consistent with my prediction in that the largest share of participants preferred the term “Hispanic,” likely due to regional location. Compared to Pew Research’s finding that 50% of Hispanics/Latinos prefer the term “Hispanic,” 43% of participants in these interviews preferred it. If a larger group of Hispanics/Latinos were interviewed, a plurality would still likely prefer “Hispanic,” again likely explained by location (East Coast).
4. Do you think Latinos and Hispanics should be considered their own race and why?
For many years there have been discrepancies in the census because many Latinos/Hispanics mark “other race” on the census sheet.19 This is a result of confusion in the community, as there is a common conception that Latinos and Hispanics consider themselves their own race.20 Given how many discrepancies this causes, there may be reason to consider Latinos and Hispanics their own racial category. Gathering this information can either back that claim or refute it.
I previously thought there would be a strong majority of people who believed that Latinos and Hispanics should be considered their own race, based on how Latinos/Hispanics are treated within the United States. Of the participants interviewed, only four said that they would like to see Latinos/Hispanics as a race; six said they would not; two said they would consider Latino a race and keep Hispanic an ethnicity; one said they would like to get rid of racial classification altogether; and one said they “don’t have a particular point of view in that.” I found it surprising that the largest share of those I interviewed did not want Latinos/Hispanics to be considered a race. Their reasoning was mainly that the category would be “too broad,” or they had concerns about how multiracial and multiethnic people would be able to identify themselves. These answers completely derailed my original hypothesis, as I had expected the contrary. Those who said they would consider Latino a race but Hispanic an ethnicity seemed to have a better understanding of what each term meant than others who simply said “yes” or “no,” as they were able to fully defend their answers with definitions. If this question were asked of a larger group of people, I would now assume that there would be a greater percentage
17 Mark Hugo Lopez, Jens Manuel Krogstad, and Jeffrey S. Passel, “Who Is Hispanic?,” Pew Research Center, September 5, 2023, https://www.pewresearch.org/short-reads/2022/09/15/who-is-hispanic.
18 Office of Management and Budget (OMB), “Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity,” October 30, 1997, https://www.govinfo.gov/content/pkg/FR-1997-10-30/pdf/97-28653.pdf.
19 Paul Taylor et al., “When Labels Don’t Fit: Hispanics and Their Views of Identity,” Pew Research Center’s Hispanic Trends Project, April 4, 2012, https://www.pewresearch.org/hispanic/2012/04/04/when-labels-dont-fit-hispanics-and-their-views-of-identity/.
20 Edward Telles, “Latinos, Race, and the U.S. Census,” The Annals of the American Academy of Political and Social Science 677 (2018): 153–64, https://www.jstor.org/stable/26582325.
of people who would say that Hispanics/Latinos should not be considered their own race.
5. The U.S. Census said the following regarding race and ethnicity, “The racial and ethnic categories set forth in the standards should not be interpreted as being primarily biological or genetic in reference. Race and ethnicity may be thought of in terms of social and cultural.” Based on this description, how would you describe Latinos and Hispanics?
Again, one of the most prominent issues within the community is a lack of understanding of the difference between race and ethnicity. Even in the OMB’s definitions of race and ethnicity, the terms are still used interchangeably.21 This question allows interviewees to give their input on how they would define Latinos and Hispanics. With this information, it may be easier to draw conclusions about what the Census Bureau should change when gathering racial and ethnic demographics.
Results
A majority, 13 of the 14 participants, agreed that Latinos and Hispanics are usually categorized in a cultural and social sense rather than a biological sense, though one participant said the contrary. I expected most people to respond this way, but it is surprising that only one person felt that Hispanics/Latinos are divided based on biological standards. I had expected more participants to answer that way because of the often harsh stereotypes surrounding Latinos and Hispanics in the United States. The media has promoted a certain “look” that a Hispanic/Latino person should resemble and a “persona” they should follow; for example, many have the misconception that all Latinos should have a darker complexion and that women should have a curvy figure. In this sense, it can seem that Latinos/Hispanics are divided strictly in terms of physical appearance rather than their culture. If a larger pool were interviewed, there would likely be a similar distribution of results.
6. Do you think there should be stronger separation between the terms “Latinos” and “Hispanics” since they are different by definition?
Reasoning and Prediction
Because the terms “Latino” and “Hispanic” are not clearly separated, there is a strong indication that additional confusion surrounds the ethnic category. Many consider the lumping of the terms to be a source of problems: respondents feel they cannot properly fit into either category when the other is included, not understanding that it is an “either/or” question rather than an “and” question.22 Feedback from the demographic may provide better explanations of why the U.S. government needs to change its racial groupings.
Given the confusion surrounding the terms “Latino” and “Hispanic,” and the fact that they are different, my initial expectation was that everyone would agree that the terms should be separated. Of the 14 interviews, 11 participants said that they do feel there should be a stronger separation between the terms, while three said that because the difference is so minimal, there is not really a need for separation. I do find these results surprising, because I would have assumed that every member of the Latino/Hispanic community would want to separate the terms. Of the responses that said no, I found it particularly interesting that Cannon Rich, the one Brazilian participant I interviewed, said there was not really a need for separation. Being Brazilian means that Cannon can only mark Latino; because the terms are used interchangeably, I would have assumed that he would want them separated so that he could be marked strictly as Latino rather than as Hispanic. The main argument from those who favored separation was that it would provide better representation for strictly Latino groups, strictly Hispanic groups, and Hispanic-Latino groups, while the counterargument was that separation would divide people even further and cause more issues. Regardless, I expect that the majority of any following group of participants would continue to say that the terms should be separated.
21 Office of Management and Budget (OMB), “Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity,” October 30, 1997, https://www.govinfo.gov/content/pkg/FR-1997-10-30/pdf/97-28653.pdf.
22 Edward Telles, “Latinos, Race, and the U.S. Census,” The Annals of the American Academy of Political and Social Science 677 (2018): 153–64, https://www.jstor.org/stable/26582325.
7. Have you ever felt worried about marking Latino/Hispanic on any government, academic, or medical sheet?
The history of the term “Hispanic” in the census helps explain the gap in its representation. In many instances, census takers marked Hispanics/Latinos as “White,” which gave them additional benefits in life. After they were separated into their own category, they often faced discriminatory actions such as deportation, as was the case for many Mexicans after the term was added as its own census category.23 Even today, many people do not mark their true race/ethnicity in hopes of receiving better treatment from others, including the government.24 This question may help support or debunk these ideas, or lead to other conversations about worries that Hispanics/Latinos have regarding government representation.
Results
When asking this question, I assumed that at least a small portion of participants would have hesitated to mark those terms in the past, based on experiences I have heard from others, especially in academic settings with programs such as ESL (English as a Second Language). Of the 14 people interviewed, 11 said that they have not felt worried about marking either term, two said that they have, and one said they sometimes felt unsure whether they could even count as Hispanic. I did expect a majority of the participants to say that they have never felt worried, but I still assumed more would have felt worried about marking the terms. The two who said they have felt worried in the past both said that they were more worried about the questions that followed: one was worried about questions asking about citizenship and being “first generation,” while the other was worried about the racial questions that followed. The participant who at times felt confused about whether she could identify with the term Hispanic/Latino said she felt she could not, based on her appearance and her inability to speak Spanish like the rest of her Hispanic peers. Given this information, I would assume
23 “INS Records for 1930s Mexican Repatriations | USCIS,” www.uscis.gov, July 29, 2020, https://www.uscis.gov/about-us/our-history/history-office-and-library/featured-stories-from-the-uscis-history-office-and-library/ins-records-for-1930s-mexican-repatriations.
24 Michel Gobat, “The Invention of Latin America: A Transnational History of Anti-Imperialism, Democracy, and Race,” The American Historical Review 118, no. 5 (2013): 1345–75, https://www.jstor.org/stable/23784580.
that if a larger pool of participants were interviewed, the majority would express that they have not felt worried before.
8. Have you seen any benefit from marking Latino/Hispanic on any government, academic, or medical sheet?
One of the U.S. government’s biggest arguments for marking race and ethnicity in the census is that it will provide communities with more representation in government, financial aid, and other resources.25 This question gives the Hispanic/Latino community an opportunity to describe any benefits they have seen, providing a real perspective on whether the government has followed through on that mission.
This question has been one of the most impactful that I have asked, and issues facing Hispanic/Latino students are reflected in the responses I collected. The majority of responses said that participants have not seen benefits, but some students said that they have in an academic setting, through things such as affirmative action. Of the responses saying they have seen benefits, however, the students almost see the “benefit” as a negative: they feel they did not “truly” get into programs, or feel they need to question whether acceptance was based on their own merit or on being a diversity acceptance. Hearing these terms in everyday life, I did expect the student participants to describe the academic benefits they have seen. Unfortunately, after interviewing these students, many question whether they have actually shown academic prowess or have been accepted based on a single box they marked. One participant explained that a close friend had told him he was most likely accepted into the school only because he is part of the Hispanic/Latino community. Of the individuals who said no, some gave additional comments regarding academic advantages as well but left their responses rather dry. As I continue to interview students, I expect more responses to revolve around academic advantages, focusing on affirmative action and the controversial “diversity acceptance” argument from others.
25 Jaime Raigoza, “U.S. Hispanics: A Demographic and Issue Profile,” Population and Environment 10, no. 2 (1988): 95–106, https://www.jstor.org/stable/27503098.
9. Are you familiar with the two-step racial question on the census? If yes, do you think it should remain a two-step question or should it be reverted to a one-step question?
In the current census format, the government asks about racial and ethnic demographics in a two-step question, first introduced in 1980.26 The first part asks whether a person is of Latino/Hispanic origin or descent, and the second asks their race. One of the most prevalent problems, though, is that many people consider Hispanics/Latinos their own race and therefore do not feel represented in the “race” portion of the question.27 Adding Latinos/Hispanics as their own race would essentially remove the need for a two-step question, making it easier for both the government and the demographic to understand the census and its results. The information from this question can be used to gauge what format Latinos/Hispanics would prefer for the census’s racial question.
Seven participants said that the two-step question should stay but that the categories for race and ethnicity should be modified, five said that it should be reverted to a one-step question, and two said “I don’t know.” These results fit what I was expecting, as I figured most participants would want to change the question already in place rather than completely changing the structure of the race/ethnicity question on the census sheets. There is, however, an obvious lack of information about what the two-step question actually is, as almost all of the participants did not know what it was and needed an explanation in order to answer the question.
10. How do you think the U.S. government could improve its racial and ethnic demographic collection for the U.S. census?
There are many things the U.S. government has to change in the census in order to properly represent all of the people residing within its borders.28 This question gives Hispanics/Latinos the opportunity to express any other concerns they have and what they would like to see changed in the census. The information gathered here may also help predict what will change on future census sheets. It may also support the idea that Latinos/Hispanics have the power to change the census, depending on whether the collected results align with what other Latinos/Hispanics have said and with what the government plans to change for the 2030 census.29
Of the participants interviewed, 11 agreed that there should be better definitions of what each category means and who would be considered what race, possibly with more terms added to the questions; two mentioned simply asking country of origin to avoid further discrimination; and one said that the government should try to collect demographic data for noncitizens as well. As expected, many believed that the lack of definitions for each category has made it difficult for respondents to fill out the census sheets accurately, furthering the claim that the racial and ethnic terms set forth in the standards are a large source of confusion.
Regardless of the many changes the U.S. government has made to its census sheet, a lack of inclusivity is still prominent within the two-step racial question. As a result of this lack of inclusivity, many feel that they are not properly represented by the census categories in place. Two of the main groups that have felt this are multiracial citizens and Hispanics/Latinos. The number of multiracial people within the United States has grown significantly over the past few decades, and multiracial citizens have continuously expressed their inability
26 Kim Parker et al., “Race and Multiracial Americans in the U.S. Census,” Pew Research Center’s Social & Demographic Trends Project, June 11, 2015, https://www.pewresearch.org/social-trends/2015/06/11/chapter-1-race-and-multiracial-americans-in-the-u-s-census/.
27 Edward Telles, “Latinos, Race, and the U.S. Census,” The Annals of the American Academy of Political and Social Science 677 (2018): 153–64, https://www.jstor.org/stable/26582325.
28 Kenneth Prewitt, “Racial Classification in America: Where Do We Go from Here?,” Daedalus 134, no. 1 (2005): 5–17, https://www.jstor.org/stable/20027956.
29 Ian Haney López, “Race on the 2010 Census: Hispanics & the Shrinking White Majority,” Daedalus 134, no. 1 (2005): 42–52, https://www.jstor.org/stable/20027959.
to feel “properly” represented by the census. With confusion also high among the Hispanic/Latino community, questions remain as to why the U.S. government has not implemented improvements.30 If the government continues its system of demographic collection without providing better representation for multiracial citizens, could this be considered a form of discrimination? Improving the categories in the census would produce the inclusivity that the government has struggled to achieve.31
Like much of the multiracial population in the United States, Hispanics/Latinos have also expressed dissatisfaction with the census, feeling that the racial question does not properly “fit” them. Without their own category in the census, Hispanics/Latinos have felt the need to mark “other” as their race. As a result, this category has grown significantly in percentage terms, even though it was never meant to serve as a main category.32 The growth in this category has caused the government a multitude of issues, as it is not receiving “accurate” data representing the racial demographics of the United States. Since most Hispanics/Latinos consider themselves their own race, there are pushes for the government to reevaluate the category.33 While many Hispanics/Latinos mark “other” as their race, a large number mark “white” as well.34 As a result of immigration to the United States, there has been stark growth in the number of Hispanics/Latinos, making them the fastest-growing ethnic group. If Hispanics/Latinos were to be considered their own race, the percentage of white
people within the United States would decrease and would eventually be surpassed by that of Latinos/Hispanics.35
Based on the interviews, a conclusion can be drawn that, in order to gain more accurate information, the government should separate the terms Hispanic and Latino into different categories, such as “Latino,” “Hispanic,” and “Hispanic-Latino,” to minimize additional confusion among respondents. Further separating these terms would also help provide opportunities for each specific group rather than lumping all accomplishments into one. For example, scholarships are often designated for “Hispanic students” but have been given to students who are Latino rather than Hispanic. In specific instances like that of Cannon Rich, a strictly Latino student (of Brazilian descent, for example) may receive a “Hispanic Recognition Award” even though they are not Hispanic. To prevent this from happening, a more in-depth separation has to be made. Another improvement suggested by these interviews was adding clear, visible definitions of what defines each race and ethnicity presented in the questions.
30 Natalie Eilbert, “Revealing Racial Complexity, Diversity: The Multiracial Population Has Been Growing for Decades, and the US Census Finally Figured That Out,” ProQuest, March 7, 2022, https://www.proquest.com/docview/2636156939.
31 Kenneth Prewitt, “Racial Classification in America: Where Do We Go from Here?,” Daedalus 134, no. 1 (2005): 5–17, https://www.jstor.org/stable/20027956.
32 Mark Hugo Lopez, Jens Manuel Krogstad, and Jeffrey S. Passel, “Who Is Hispanic?,” Pew Research Center, September 5, 2023, https://www.pewresearch.org/short-reads/2022/09/15/who-is-hispanic.
33 Edward Telles, “Latinos, Race, and the U.S. Census,” The Annals of the American Academy of Political and Social Science 677 (2018): 153–64, https://www.jstor.org/stable/26582325.
34 Mark Hugo Lopez, Jens Manuel Krogstad, and Jeffrey S. Passel, “Who Is Hispanic?,” Pew Research Center, September 5, 2023, https://www.pewresearch.org/short-reads/2022/09/15/who-is-hispanic.
The birth of the census allowed the United States to account for the demographics of its people; whether the intent was to better minority communities or to exploit them, the most stressed part of census collection is racial identification. Although significant emphasis is placed on these terms, the racial categories the government provides have continuously been a source of controversy. Many have expressed discontent with the census because they do not feel fully represented or do not understand the difference between race and ethnicity. Latinos and Hispanics have continuously been misrepresented in the census due to a lack of separation between the terms. The development of these terms has unified Latinos and Hispanics through their struggles but has also divided them, as many identify with one term over the other. As a whole, they are often treated as their own race, but by government definition they are an ethnic group. Thus, the two-step race question format on the census has led to mismarkings and confusion, as many Latinos and Hispanics have expressed that they do not feel represented by the racial categories in place. In order for the census to provide proper social recognition and economic support to the ethnic and racial groups within the country’s borders, its classifications need to improve. Hispanic and Latino should be considered different categories since they
35 Ian Haney López, “Race on the 2010 Census: Hispanics & the Shrinking White Majority,” Daedalus 134, no. 1 (2005): 42–52, https://www.jstor.org/stable/20027959.
are different in definition, and the qualifications for each category should be defined upfront to prevent uncertainty among respondents. Further improvements to these categories will give the government better data to draw from and will hopefully allow minority groups in the U.S. to receive the benefits they may need.
Brown, Anna. “The Changing Categories the U.S. Census Has Used to Measure Race.” Pew Research Center, February 25, 2020. https://www.pewresearch.org/short-reads/2020/02/25/the-changing-categories-the-u-s-has-used-to-measure-race.
Bureau, US Census. “2000 Census Long-Form Questionnaire.” US Census Bureau, 2000. https://www.census.gov/content/dam/Census/programs-surveys/decennial/technical-documentation/questionnaires/2000_long_form.pdf.
———. “2020 Census Informational Questionnaire.” US Census Bureau, 2020. https://www2.census.gov/programs-surveys/decennial/2020/technical-documentation/questionnaires-and-instructions/questionnaires/2020-informational-questionnaire-english_DI-Q1.pdf.
Bureau, US Census. “Hispanic or Latino Origin.” Census.gov. Accessed September 27, 2023. https://www.census.gov/acs/www/about/why-we-ask-each-question/ethnicity/.
———. “Improvements to the 2020 Census Race and Hispanic Origin Question Designs, Data Processing, and Coding Procedures.” The United States Census Bureau, August 3, 2021. https://www.census.gov/newsroom/blogs/random-samplings/2021/08/improvements-to-2020-census-race-hispanic-origin-question-designs.html.
———. “U.S. Decennial Census Measurement of Race and Ethnicity across the Decades: 1790–2020.” Census.gov, August 3, 2021. https://www.census.gov/library/visualizations/interactive/decennial-census-measurement-of-race-and-ethnicity-across-the-decades-1790-2020.html.
———. “Univision.” Census.gov, 2020. https://www.census. gov/library/spotlights/2020/univision-tv.html.
Bureau, US Census. “1980 Census Short-Form Questionnaire.” US Census Bureau, 1980. https://www.census.gov/content/dam/Census/programs-surveys/decennial/technical-documentation/questionnaires/1980_short_questionnaire.pdf.
Bureau, US Census, Beverly M. Pratt, Lindsay Hixson, and Nicholas A. Jones. “Measuring Race and Ethnicity across the Decades: 1790–2010 - U.S. Census Bureau.” www.census.gov. Accessed September 27, 2023. https://www.census.gov/data-tools/demo/race/MREAD_1790_2010.html.
CDC, Centers for Disease Control and Prevention. “Hispanic Origin - Health, United States.” www.cdc.gov, August 8, 2022. https://www.cdc.gov/nchs/hus/sources-definitions/hispanic-origin.htm.
CDC, Centers for Disease Control and Prevention. “OMB Directive 15: Race and Ethnic Standards for Federal Statistics and Administrative Reporting.” wonder.cdc.gov, November 19, 2019. https://wonder.cdc.gov/wonder/help/populations/ bridged-race/directive15.html.
Cohn, D’vera. “Census History: Counting Hispanics.” Pew Research Center’s Social & Demographic Trends Project, March 3, 2010. https://www.pewresearch.org/social-trends/2010/03/03/census-history-counting-hispanics-2/.
Eilbert, Natalie. “Revealing Racial Complexity, Diversity: The Multiracial Population Has Been Growing for Decades, and the US Census Finally Figured That Out.” ProQuest, March 7, 2022. https://www.proquest.com/ docview/2636156939.
Gibson, Campbell, and Kay Jung. “Population Division Historical Census Statistics on Population Totals.” Accessed September 27, 2023. https://www.census.gov/content/dam/Census/library/working-papers/2002/demo/POP-twps0056.pdf.
Gobat, Michel. “The Invention of Latin America: A Transnational History of Anti-Imperialism, Democracy, and Race.” The American Historical Review 118, no. 5 (2013): 1345–75. https://www.jstor.org/stable/23784580.
Haub, Carl. “Changing the Way U.S. Hispanics Are Counted.” PRB, November 7, 2012. https://www.prb.org/ resources/changing-the-way-u-s-hispanics-are-counted/.
Indianapolis Star. “First Rallies, Now Voter Registration: Activists Seek to Use Passion Ignited by Debate on Immigration to Get Hispanics to Cast Ballots.” ProQuest, May 21, 2006. https://www.proquest.com/docview/240832495.
James, Meg. “Univision Promotes the Census to Latino Audiences.” Los Angeles Times, April 1, 2010. https://www.latimes.com/archives/la-xpm-2010-apr-01-la-fi-ct-facetime1-2010apr01-story.html.
Kohut, Andrew. “From the Archives: In ’60s, Americans Gave Thumbs-up to Immigration Law That Changed the Nation.” Pew Research Center, September 20, 2019. https://www.pewresearch.org/short-reads/2019/09/20/ in-1965-majority-of-americans-favored-immigration-and-nationality-act-2/.
López, Ian Haney. “Race on the 2010 Census: Hispanics & the Shrinking White Majority.” Daedalus 134, no. 1 (2005): 42–52. http://www.jstor.org/stable/20027959.
Lopez, Mark Hugo, Jens Manuel Krogstad, and Jeffrey S. Passel. “Who Is Hispanic?” Pew Research Center, September 5, 2023. https://www.pewresearch.org/short-reads/2022/09/15/who-is-hispanic.
National Archives. “1790 Census Records,” October 28, 2020. https://www.archives.gov/research/census/1790.
OMB, Office of Management and Budget. “Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity,” October 30, 1997. https://www.govinfo.gov/content/pkg/FR-1997-10-30/pdf/97-28653.pdf.
Parker, Kim, Juliana Menasce Horowitz, Rich Morin, and Mark Hugo Lopez. “Race and Multiracial Americans in the U.S. Census.” Pew Research Center’s Social & Demographic Trends Project, June 11, 2015. https://www.pewresearch.org/social-trends/2015/06/11/chapter-1-race-and-multiracial-americans-in-the-u-s-census/.
Pew Research Center. “Race and Ethnicity in the U.S. Census,” February 6, 2020. https://www.pewresearch.org/ interactives/what-census-calls-us.
Prewitt, Kenneth. “Racial Classification in America: Where Do We Go from Here?” Daedalus 134, no. 1 (2005): 5–17. https://www.jstor.org/stable/20027956.
Raigoza, Jaime. “U.S. Hispanics: A Demographic and Issue Profile.” Population and Environment 10, no. 2 (1988): 95–106. https://www.jstor.org/stable/27503098.
Ramos, George. “From the Archives: Pioneer in Latino Politics in Los Angeles.” Los Angeles Times, October 26, 2005. https://www.latimes.com/local/obituaries/archives/la-me-edward-r-roybal-20051026-story.html.
Taylor, Paul, Mark Hugo Lopez, Jessica Martínez, and Gabriel Velasco. “When Labels Don’t Fit: Hispanics and Their Views of Identity.” Pew Research Center’s Hispanic Trends Project, April 4, 2012. https://www.pewresearch.org/hispanic/2012/04/04/when-labels-dont-fit-hispanics-and-their-views-of-identity/.
TBD, Oral History on Racial Categories in US Census
Telles, Edward. “Latinos, Race, and the U.S. Census.” The Annals of the American Academy of Political and Social Science 677 (2018): 153–64. https://www.jstor.org/stable/26582325.
The White House. “Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity.” Accessed September 27, 2023. https://obamawhitehouse.archives.gov/ omb/fedreg_1997standards/.
The White House. “Standards for the Classification of Federal Data on Race and Ethnicity.” Accessed September 27, 2023. https://obamawhitehouse.archives.gov/omb/fedreg_ race-ethnicity.
US Census Bureau. “About Hispanic Origin.” Census.gov, March 7, 2018. https://www.census.gov/topics/population/ hispanic-origin/about.html.
US Census Bureau, Census History Staff. “1970 Overview - History - U.S. Census Bureau.” www.census.gov. Accessed September 27, 2023. https://www.census.gov/history/www/ through_the_decades/overview/1970.html.
Wang, Hansi Lo. “New ‘Latino’ and ‘Middle Eastern or North African’ Checkboxes Proposed for U.S. Forms.” NPR, January 26, 2023. https://www.npr.org/2023/01/26/1151608403/mena-race-categories-us-census-middle-eastern-latino-hispanic.
Wang, Hansi Lo. “Biden Officials May Change How the U.S. Defines Racial and Ethnic Groups by 2024.” NPR, June 15, 2022, sec. Race. https://www.npr. org/2022/06/15/1105104863/racial-ethnic-categories-omb-directive-15.
www.uscis.gov. “INS Records for 1930s Mexican Repatriations | USCIS,” July 29, 2020. https://www.uscis.gov/about-us/our-history/history-office-and-library/featured-stories-from-the-uscis-history-office-and-library/ins-records-for-1930s-mexican-repatriations.
www.youtube.com. “‘Destino 80’ Univision Census Ads 1980.” Accessed September 27, 2023. https://www.youtube. com/watch?v=viyPQ1WnrFc&t=37s.
www.youtube.com. “Univision Census Ad Series 1990.” Accessed September 27, 2023. https://www.youtube.com/ watch?v=aSe0RkH4tMo.
Bianca Chan
Since the beginning of time, women have struggled to justify their existence and gain true power in a patriarchal and oppressive society. While gender has largely been the basis for this inequality, the discrimination these women face overlaps with other aspects of their backgrounds, such as cultural, social, and economic factors. The interconnectedness of these categorizations pushes society to explore how these forms of discrimination exacerbate each other. To form an inclusive understanding of gender inequality and feminism, one must delve into a variety of literary mediums. In particular, this paper uncovers the voice of a pansexual African American singer-songwriter, Janelle Monae, through the visual aesthetics and lyrics of the music videos in her film Dirty Computer. Using the motifs of the science fiction genre, Monae explores her perspective as a member of a dystopian society in which marginalized individuals rebel against systems of de-individuality. While the message of this film emphasizes the representation of female bodies and perspectives, it can also serve as a platform to highlight the alienation experienced by all ostracized communities. Notably, Monae argues that though African American LGBTQ women are socially restrained from embracing their femininity and queerness due to society’s judgments and ideals, they can and should take agency over their bodies and actions.
Janelle Monae uses sensual imagery and vivid lyrics to expose the stigmatized public attitudes that non-heterosexual individuals have faced. Firstly, Monae achieves this by exposing the harmful effects of adhering to societal expectations of sexuality. The film begins with a dark atmosphere, showcasing the gloomy space she lies in and the unembellished, plain suit she is wearing (0:00-3:00). In contrast to this empty, cold computer factory controlled by white men, her memories are filled with vivid, bold colors and lively movement (3:50-4:50). This juxtaposition between the warm and cold tones of the two settings, symbols of positivity and darkness respectively, shows that Monae has been forced to relinquish the authentic identities that brought her joy and energy. Just as the regime she is confined to is devoid of color, she is deprived of her freedom. She then delves into the roots of her oppression in her song “Crazy, Classic, Life” (4:13-9:50). By pointing out that she is labeled “the American nightmare” while referencing the “American Dream” (6:13-6:17), she critiques the legal landscape that the American Constitution has set for those with marginal identities. While the nation preaches its commitment to equality in sexual orientation, she argues that this rhetoric is not put into practice; in fact, many micro-aggressive behaviors and inherent biases remain prevalent. This notion is also portrayed by another LGBTQ individual, David M. Despite working at a Fortune 500 company with a formal nondiscrimination policy, he claims that he would “lower my voice in meetings to make it sound less feminine and avoid wearing anything but a black suit” (Singh and Durso). By exposing the hypocrisy of America’s commitment to LGBTQ equality, David and Jane are testaments to how sexual orientation still routinely affects their ability to access broader opportunities in life. Moreover, in her song “Take a Byte,” Janelle Monae sings, “Your code is programmed not to love me” (12:50-12:51). Her use of the technical language “programmed” shows that an individual’s expression of sexuality is often dictated by society rather than something that comes from, and can be manifested from, within. As a result of these external pressures, individuals lack the ability to resist traditional gender binaries. The societal attitude that non-heterosexual orientation is deviant is also reflected in “My Words to Victor Frankenstein above the Village of Chamounix: Performing Transgender Rage,” in which the transgender author Susan Stryker states that she is a “technological construction” whose “flesh [was] torn apart and sewn together again in a shape other than that in which it was born” (Stryker 238). By incorporating this vivid, gory language and comparing transsexual bodies to that of Frankenstein’s creature, she prompts the reader to visualize how LGBTQ people are widely characterized as anomalies and even reduced to monstrosities. Ultimately, the language used by Monae, Stryker, and David reveals that members of the LGBTQ community lacked autonomy over their lives and were reduced to inhumane terms. They were unable to pursue the freedom that America had promised.
Opposing the rejection of LGBTQ individuals, Janelle Monae deconstructs the stigma toward her own LGBTQ identity and empowers others in her community to stand against the hatred. Monae declares that “we don’t need another ruler/All of my friends are kings” (6:04-6:08). In another song, “Django Jane,” Monae sits on a throne in a black suit (20:23) while proclaiming, “Yeah, this is my palace” (18:59-19:00). In both of these scenes, by exalting herself as the king, a masculine symbol of power, she outwardly rebels against patriarchal control and liberates herself from inferiority. Later in the song, Monae uses the phrase “hit the mute button/let the vagina have a monologue” (21:05-21:08). By personifying a female characteristic, the vagina, she asserts the fierceness and sovereignty of the female voice, suggesting that it should no longer be silenced and ignored by the male-dominant voices of society. By incorporating the phrase “hit the mute button,” she also argues that women should embrace their own unique perspectives instead of aligning their actions, interests, and styles of thinking with men. This fervent desire for power is similarly shown when she refers to herself as “James Bond and not Jane Doe.” A fictional character, James Bond is a representation of hypermasculinity, the ideal man who is competent, powerful, and charismatic; “Jane Doe,” on the other hand, is a pseudonym used to conceal someone’s true identity. Her juxtaposition of these labels shows that Monae does not characterize herself as an ordinary person without an individual sense of being who must conceal her identity, but rather as an individual who has control over her actions. She believes that embodying these traits, courage and independence, would raise women from a powerless, primitive state to a state of power. Another text, “Poem about My Rights,” also depicts Black LGBTQ women’s open resistance to patriarchy. June Jordan, a self-identified bisexual and Black woman, asserts that “I am not wrong: Wrong is not my name. My name is my own my own my own” (Jordan 139-140). By intentionally and repeatedly rejecting the label “Wrong,” Jordan draws a contrast between herself and her heterosexual white counterparts, whose actions and words are viewed more positively and favorably by society. She ultimately declares that she is not defined by the sexual abuse and colorism imposed on her life, despite the victim-blaming she faces from society. Additionally, through her repetition of the phrase “my own,” she unapologetically breaks free from these oppressive patterns and emphasizes that she strives to define her own experiences. This outward form of opposition is also portrayed by another Black LGBTQ poet, Audre Lorde, who writes that “our words will not be heard/nor welcomed/but when we are silent we are still afraid/So it is better to speak remembering we were never meant to survive” in “A Litany for Survival” (35-41). Lorde argues that though these vulnerable groups often maintain their silence in hopes of protecting themselves, silence only reinforces their inferiority. Therefore, instead of shielding herself by hiding in the shadow of marginalization, she openly defies these societal norms and encourages others to do the same. In essence, Janelle Monae, along with other Black LGBTQ women, actively pushes for members of her community to claim power over their own unique contributions and achieve upward mobility in the social hierarchy.
Moreover, Janelle Monae promotes self-empowered female body acceptance and positivity, a message depicted in the visuals and lyrics of the music video for her song “PYNK.” The video begins with Janelle Monae and background dancers wearing anatomical costumes that represent a clitoris (24:10-24:20). As the video goes on, a woman puts her head through Janelle’s pants, portraying the concept of birth and procreation (24:30-24:34). Through these outfits, Janelle readily defies the shame often associated with talking about genitals and sex. Instead, she openly welcomes these conversations and believes in the necessity of raising people’s awareness of female bodies. Women’s opposition to ignorance about their own anatomy and physiology is also reflected in the film She’s Beautiful When She’s Angry, in which a woman, Vilunya, says that the Boston Women’s Health Book Collective, frustrated at their lack of knowledge, “made a list of subjects they wanted information about… anatomy, birth control, pregnancy, postpartum…” (46:40-46:50). Their initiative to create these educational works on female health portrays women’s desire to educate themselves and share knowledge about their bodies. Beyond equipping women with knowledge of their sexual health, Janelle Monae also believes in outwardly celebrating their natural anatomy. Traditionally, biological characteristics are often seen as taboo, as portrayed in “Dreaded ‘Otherness’: Heteronormative Patrolling in Women’s Body Hair Rebellions,” which claims that “women disguise and conceal their ‘natural’ bodies and undergo a variety of bodily modifications, procedures, grooming habits, and maintenance behaviors” (Fahs). However, Monae actively challenges these conventions by zooming in on her underwear so that the audience can see pubic hair clearly sticking out (25:43). She proudly asserts that “Pynk is the truth you cannot hide,” alluding to the vagina (28:25-28:27). This image directly defies the norm that women need to be completely shaven in order to appear more feminine and beautiful; she demands autonomous decision-making concerning women’s physical appearance. Moreover, the text on her underwear reads “sex cells,” a play on words that can be interpreted literally as sperm and egg, or as “sex sells,” a popular phrase in advertising alluding to the use of sex to draw attention. Through this double meaning, Janelle Monae rebukes the phrase and argues that women should not be motivated to use their bodies only to satisfy others’ gazes. Monae emphasizes that it is the fundamental right of women to make autonomous decisions about their own biological anatomy, their “sex cells.” Ultimately, by incorporating these visuals and lyrics into Dirty Computer, Janelle Monae empowers women to claim agency over their bodies.
Through Dirty Computer, a story that sheds light on a community whose individuality is actively being erased, Janelle Monae exposes the discriminatory attitudes that continue to marginalize the lives of Black LGBTQ women. Rather than approaching the stigma with negativity, however, Monae encourages women to speak out against these oppressive forms of authority by embracing their own voices and body positivity. Through her artistic representations and musical storytelling, Janelle readily defies perceived heterosexual gender binaries and feminine social standards. By bringing to light the experiences of a community with overlapping identities, Black, LGBTQ, and female, Janelle Monae is a testament to intersectionality in action. Additionally, by exploring what it means to be an unconventional individual as defined by societal standards, Janelle Monae pushes the audience to be more empathetic and cognizant of society’s micro- and macro-aggressions toward individuals in marginalized communities. Promoting a sense of community, Monae encourages her audience to appreciate the views and actions that bind us together in a universal humanity while unapologetically expressing the perspectives that distinguish us as unique beings. Ultimately, in a world where the social and political landscape continues to be polarized, Janelle Monae’s message in Dirty Computer encourages others to embrace their own unique identities instead of molding them into social binaries.
Fahs, Breanne. “Dreaded ‘Otherness’: Heteronormative Patrolling in Women’s Body Hair Rebellions.” Gender and Society, vol. 25, no. 4, 2011, pp. 451–72. JSTOR, http://www. jstor.org/stable/23044206. Accessed 18 Oct. 2023.
Jordan, June. “Poem about My Rights”. Poetryfoundation.org. https://www.poetryfoundation.org/poems/48762/poem-about-my-rights
Lorde, Audre. “A Litany for Survival”. Poetryfoundation.org. https://www.poetryfoundation.org/poems/147275/a-litany-for-survival
Monae, Janelle, et al. Dirty Computer. Bad Boy Records, 2018.
She’s Beautiful When She’s Angry. Directed by Mary Dore. International Film Circuit, 2014. https://www.youtube.com/watch?v=e-n829QzZ58
Singh, Sejal, et al. “Widespread Discrimination Continues to Shape LGBT People’s Lives in Both Subtle and Significant Ways”. Center for American Progress. 2 May 2017.
Stryker, Susan. “My Words to Victor Frankenstein Above the Village of Chamounix: Performing Transgender Rage”. GLQ, vol. 1, no. 3, 1 June 1994, pp. 237–254. doi: https://doi.org/10.1215/10642684-1-3-237
Research or inquiry (historia in ancient Attic Greek) did not originate with the modern sciences, but instead has a long history dating back to the earliest attempts by humans to make sense of the natural world. Hunter-gatherers deciphering the migratory patterns of prey animals and the potential utility or danger of various plants and fungi engaged in a type of inquiry, as did ancient stargazers tracking the wanderers (planetes) across the sky. Some of the earliest written accounts of what can be regarded as humanities research date back to Ancient Greece and China, when historians like Herodotus and Sima Qian used their research not just to chronicle events, but also to understand their causes. In the Histories (Historíai), Herodotus sought to explain the rise of the Persian Empire, its invasion of Greece, and its ultimate defeat by a coalition of Greek city-states. Similarly, in the Records of the Grand Historian (Shiji), Sima Qian documented the Warring States era and the eventual unification of China under Qin Shi Huang.
What distinguishes the modern process of research from these ancient efforts is that it is now a systematized process of inquiry engaging with past scholarship, exploring new questions and communicating those results to others. This is true not only in the sciences, but also in the humanities. Modern research is a collective enterprise and by participating in that process, our students at NCSSM-Morganton have joined a larger community of scholars.
The majority of the works in this journal were produced in the year-long Research in Humanities sequence at NCSSM-Morganton. Students can elect to take the first class, Research Experience in the Humanities, during the spring of their junior year. RexHUM focuses on empires and colonialism over the past five centuries, and many of the students used it as an opportunity to study histories, cultures, and topics that had not been part of their previous coursework. The second class, Research in the Humanities, is taken in the fall semester of their senior year, in which students expand or finish an existing research project that they designed and explored the previous semester. The longer works in this journal were written by RHUM students, while the five shorter articles were the result of research that came out of American Studies and Bioethics classes and a 2023 Summer Research and Innovation Program on South Mountains State Park.
The articles are grouped into three clusters that highlight their shared commonalities and the temporal breadth of the students’ research into topics ranging from colonialism in the early modern Americas and South Asia to contemporary music and looming changes to racial categories in the US Census. Herodotus and Sima Qian sought to understand the historical and political settings in which they lived, and our students have similarly tried to make sense of their own socio-political context. By engaging in humanities research, our students have taken another step in preparing themselves to be future leaders of North Carolina. They have learned to see how society, politics, and culture shape the science they study in other departments and how to consider the long-term causes and consequences of the choices we make.
Marcelo Aranda, Morganton, NC