



By Noe Goldhaber & Nick Bumgardner
EDITOR-IN-CHIEF & MANAGING EDITOR
From using artificial intelligence to write class papers, make grocery lists or gather advice on ending a ‘situationship,’ University of Wisconsin-Madison students have found unique ways to use AI in and outside the classroom.
But with these transformational changes, skeptics have argued AI circumvents learning and critical thinking and worry about the value of higher education if a chatbot can write an essay or complete a coding project almost as well as
a college student.
Art professors have debated how they should approach AI in their syllabi, and researchers are not only studying the mechanics of AI but also pondering how to reduce the energy needed to power AI tools.
One thing is clear: AI is changing our daily lives as UW students. But it doesn’t just shape our lives on campus. Across Wisconsin, AI impacts Wisconsinites daily, from bills criminalizing deepfakes to automated milking technology for dairy farmers to the data centers popping up across the state.
These changes are not without pitfalls, and not everyone is quick to adopt them.
K-12 schools are cautiously implementing AI policies to help teachers efficiently individualize lesson plans and maximize instructional time. UW professors are asking how and if AI belongs in the classroom and in the writing of scientific papers.
And across Wisconsin, communities are weighing the economic benefits of new data centers against concerns over environmental impact and corporate influence.
This issue reports and critically examines the way AI shapes the
lives of Wisconsinites and how those interactions play out on a daily basis. Cardinal reporters traveled to Port Washington city council meetings, sat through late-night Capitol hearings and, yes, even relied on ChatGPT prompts to guide their choices for a day. We listened to the students, lawmakers, professors, farmers and developers shaping what this new era looks like.
AI is not something on the horizon. It’s here. It’s in our classrooms, our conversations, our search bars, our politics and our futures. Where it goes from here will be up to us to decide.
By Annika Bereny CAMPUS NEWS EDITOR
Fresh off the heels of summer break, some students were startled by three words they thought had been lost to time: “blue book exam.”
Indeed, for many students at the University of Wisconsin-Madison this year, gone are the days of the take-home paper or at-home Canvas final. Faced with rising instances of students using generative artificial intelligence tools, such as ChatGPT and Google Gemini, to cheat, professors have instead returned to the ol’ reliable: a handwritten, in-class exam.
A December 2024 survey of 337 higher education leaders conducted by Elon University and the American Association of Colleges and Universities asked about AI use on their campuses. 59% of those leaders said that they believe cheating on their campuses has increased since generative AI models have become widely available, with 21% saying it has increased “a lot.” Additionally, 54% of them said they believe faculty members are not effective in recognizing AI-generated content.
For Information School Assistant Professor Clinton Castro, the hunch that his students were using ChatGPT on their assignments came as a “rude awakening.”
“I’m teaching this big class, and I’m having a lot of trouble staying on top of the GPT stuff,” Castro said. “I think it’s being used way too much, probably. I don’t even know how to prove how, but we all have a sense. I just felt like, ‘Oh my gosh.’ It’s a lot going on right now, and it’s really hard for me to figure out what to do.”
Cue blue books
Across the country, blue book use has risen significantly. At the University of Florida, blue book sales in the 2024-2025 school year were up 50%, with that number increasing to 80% at the University of California, Berkeley, according to a May Wall Street Journal article.
At UW-Madison, Information School Administrative Specialist Molly Cook told the Cardinal that when she went to order blue books for the information school in early September, the 12-page books were sold out with no restock date in sight. Moving down one size, the books were on a three-week backorder, though Cook said they arrived within two weeks.
Decades of digitization
For some upperclassmen and graduate students on campus, blue books have just been a return to their roots. Information School associate professor and Director of the Center for the History of Print and Digital Culture Jonathan Senchyne said blue book exams were the norm for any class with a written response when he first began teaching at the university level as a graduate student 21 years ago.
Even in high school, many students had to complete exams for Advanced Placement (AP) or the SAT by hand — with the exception of the COVID-19 pandemic.
Earlier this May, however, the College Board made 16 of those exams fully digital, including AP U.S. History and AP Environmental Science. Ironically, those tests will be administered on a digital interface called Bluebook.
Because of the constant movement toward digitization, Senchyne said he too has felt the pressure to move coursework online.
“I think a lot of things happen out of a combination of convenience and expectation,” he said. “Some decisions about teaching were made, not based in good reason or principle … But rather, students expect this. The university pays for it. It’s less bothersome to do it this way.”
Along the way though, Senchyne said he lost focus of what he was truly trying to assess from his students. After receiving 40-50 final exams in a 400-student class with clear evidence of ChatGPT usage, including the inclusion of the ChatGPT logo in the pasted text, Senchyne had had enough.
When he made the decision to switch back to blue book assessments in fall 2024, he said he wasn’t necessarily looking for the best-written paper quality-wise, but rather the ability to connect concepts from class to each other and explain their relevance.
“This is a better use of my grading time and my TAs’ time so that they’re not spending all this time doing detective work about academic dishonesty and putting [time] back into meaningful feedback and discussion sections,” he said. “I want them focusing on that, and I want them developing a relationship with students that is like, ‘this is a trusted place to answer questions,’ rather than, ‘I am a detective who is out to police your digital writing practices.’”
Innovating the blue book
Despite their usefulness in the current moment, blue books were phased out for a reason — often efficiency but occasionally accessibility. Castro and Senchyne are both aware of this, though, and have worked to make their exams as forgiving as possible for students.
“One of my big reservations about the blue book was I thought that there was going to be a massive trade-off,” Castro said. “I think that there’s something really valuable about sitting with an idea, writing in stages, doing drafts. And I wasn’t really quite sure how to keep that goodness with the blue book.”
To fix that, the papers in his class have become a multi-step — and multi-blue book — process. Students write their first drafts in blue books, which are then collected and redistributed for peer review. Final drafts are also handwritten in blue books, with the chance to rewrite those even after having been graded.
“At the end of the day, all I want is the students to learn,” Castro said, “and if someone has to write their essay three or four times, and I get to hold my standards nice and high, I’m going to feel really good about a lot of people getting a good grade.”
Senchyne, too, had worries about trade-offs that could come with using blue books.
“When I did this last fall, it became clear to me that a lot of people in my classes are first- year students who are taking their first semester of college in another country, in another language,” he said. “This year, I told all students for the exam, ‘if you want, you can bring a print dictionary. If that’s an English dictionary, because that’s your first
language, if it’s a translation dictionary, first language to English, that’s fine.’ I’ve done a lot of thinking about how to, again, get to the thing I actually wanted, rather than all these other things that are flying around it.”
Senchyne likened exams to an adage he often hears as a distance runner: the distance will either validate or expose your training. To him, an exam merely serves to showcase the training that has been done throughout the class — the thinking and the studying that familiarizes a student with concepts.
“The point is to strengthen the body or to practice self discipline,” he said. “The point is not transactional in producing a paper, the end product of the paper is an afterthought to the process.”
Both Castro and Senchyne said they’ve found success in blue books and will continue to use them in future courses.
“I think there’s an underlying theme here about efficiency and inefficiency, challenge or difficulty,” Senchyne said. “Like, do blue books make things more difficult for faculty, for students, compared to what you could just do if you just put everything in Canvas? Yeah, probably. But the point of getting an education is not to have the most efficient, easy process along the way. The point is to challenge yourself, and I think we have to get back to a world in which there’s room for inefficiency that lets us grow.”
By Alaina Walsh ASSOCIATE NEWS EDITOR
The University of Wisconsin-Madison is testing out a new artificial intelligence chatbot that helps students practice civil discourse through simulated conversations and real-time feedback.
The pilot program, part of the university’s new Wisconsin Exchange: Pluralism in Practice initiative, launches this month in collaboration with the Institute for Citizens & Scholars. The AI-powered, voice-based tool allows students to choose topics they care about and engage in short conversations with AI partners that take opposing viewpoints.
“Students identify a few issues that matter most to them,” UW-Madison spokesperson Victoria Comella said in an email to The Daily Cardinal. “Bite-sized explainers show
them how and why civil discourse skills work. Students practice each skill out loud with an AI partner through short voice conversations on a topic they chose, and they get real-time coaching and instant feedback from a second AI mentor.”
The pilot will run through the end of the calendar year.
The Wisconsin Exchange aims to build a “culture of dialogue across difference” on campus by teaching students to listen, ask questions and engage respectfully on divisive issues. Chancellor Jennifer Mnookin said the initiative encourages “meaningful dialogue across their differences” as part of the university’s broader effort to strengthen civic learning and discourse.
UW has come under fire for years from conservative politicians and campus figures for failing to uphold open dialogue and promote ideological diversity.
Programs under the Wisconsin Exchange, such as the Deliberation Dinners and The Discussion Project, already give students opportunities to share perspectives on controversial topics. The university said the AI chatbot could add a new dimension by allowing private, low-stakes practice before entering real discussions.
The AI platform breaks civil discourse training into steps. Students start by selecting a topic, then receive short explainers about communication strategies like empathy and active listening. They practice those skills aloud with the AI partner, which challenges their viewpoint, while a second AI provides instant feedback on tone, phrasing and listening cues.
The goal, Comella said, is to help students feel “more confident and prepared for the real world.”
The pilot’s findings will not only inform UW-Madison’s future programming, but also contribute to research from other universities involved in the collaboration.
Comella said all data collected will be anonymous and focus on students’ experience using the tool
rather than individual performance.
“The aim is that the aggregated data from the pilot — not just from UW-Madison but from all participating universities — will be informative for future programming,” Comella said.
By Haellie Opp SENIOR STAFF WRITER
The Madison Metropolitan School District (MMSD) is cautiously weighing artificial intelligence for teachers in its K-12 schools, using AI to save teachers’ time by generating syllabi and individualized lesson plans and analyzing learning data.
While students are discouraged and limited from using AI, AI implementation for teachers is more complicated, MMSD Instruction Technology User Manager Eric Benedict said, because MMSD is not just looking at AI in the classroom.
“We are still in the phase of training and educating our staff on what AI is, what it can do, what our guardrails [are] and what our guidelines for its use district wide are, instructionally or operationally … how it can be used in HR systems, community resource systems and in communications,” Benedict said.
Schools across the country are involving AI more in their classrooms and administration.
Since ChatGPT launched in December 2022, Benedict has weighed how AI can change K-12 education. From adapting teacher curriculum to individual students’ needs or utilizing AI to analyze attendance data, Benedict is optimistic AI can help — not replace — teachers.
AI aims to help maximize a teacher’s time, which is how some MMSD teachers are using it in the classroom. The tools can analyze data, draft specific lesson plans and produce static content. They can also rewrite assignments and instructions to provide additional support for the students teachers are trying to reach.
“I used to teach science, and let’s say we’re doing a worksheet on velocity with students. And I had a student that really likes trucks, so I could now tailor the whole assignment based on trucks. And without me spending hours to do that in the past, now I can do that in a short period of time,” Benedict said.
Luke Gangler, a bilingual resource and social studies teacher at La Follette High School, said teachers are interested in using AI and have been using major AI applications like Brisk Teaching.
However, the district has been slow to make major changes, taking the time to run monthly asynchronous lessons and professional development opportunities for teachers to educate themselves on practical, productive and ethical uses of AI. Gangler said these training sessions have been effective.
“Staff who attended it were able to engage with AI tools and see how they could help them with our shift to standard space creating, which is a really big shift for us,” Gangler said. “It’s a lot of cognitive complexity and time, so to be able to see how AI can maybe make that a little bit more feasible is really valuable. So I think there’s a lot of promise there and that the district and schools are trying to leverage that.”
MagicSchool AI, Brisk Teaching and SchoolAI are education-specific applications for teachers designed to help them save time.
These tools generate syllabi, newsletters and assessments, while also giving feedback on assignments or assisting in lesson planning. SchoolAI targets classroom insights, assisting teachers in personalized lesson plans and providing teachers with data on their classes.
MMSD is currently vetting and researching the benefits of these applications. Gangler said he’s seen Brisk Teaching used most often due to its planning and time-saving abilities.
Student AI use is currently limited. MMSD’s technology department is working on launching a student-specific AI platform, but they’re moving forward with caution. Benedict said guardrails must be in place, and there needs to be transparency between the district, students and parents first.
“I think in some ways, what’s being pushed right now [with students using AI in class] and what’s popular right now isn’t probably very helpful, or is not transformative,” Gangler said. “But it’s essential that we start very thoughtfully integrating AI literacy, which will include students using AI, but for the purpose of learning
about AI itself.”
At the high school level, major chatbot sites, including Claude, ChatGPT, Google Gemini and Copilot, are blocked on school-issued Google Chromebooks.
“We want to educate our students first before we give them explicit access to things. We just haven’t gotten there yet to educate our students around [certain AI programs],” Benedict said.
MMSD teachers are being taught how they can implement AI tools in their classrooms and how they themselves can then educate their students.
“One of our key things that we always teach our teachers is that humans are on both sides of AI. So the human inputs it, the machine does whatever it does, but then the person on the other side interprets before using,” Benedict said. “That’s what we need to work with our students on as well, so that they’re understanding all the different pieces that are coming out of it.”
Gangler did an activity in his class where students were instructed to ask AI to produce an image of a nurse and then to analyze the image for bias.
“The point was not AI, it was to look at the biases behind it,” Gangler said.
MMSD is a “Google school district” and educates teachers on how to use Google Gemini. MMSD has a data privacy agreement with Google, meaning district data is not being used to train the language model behind Gemini.
On the Library Media Services website for the MMSD, the district links a library resource for AI Tools for Teaching and Learning. It’s currently only available for staff to “offer innovative ways to enhance teaching and learning.”
Benedict added that a possible future use of AI would be attendance collection. Teachers could ask AI to review records and provide insights on attendance in the classroom. Although AI is not currently being used in that capacity, Benedict believes it has potential long-term.
While AI is being integrated into schools within MMSD, Benedict made it clear that there are certain areas, like grading and student learning accommodations, where AI is not being used.
“Anything that would involve, at this point, with guidance, anything around either special ed or any sort of accommodations, we look at not putting student information directly into any sort of model,” Benedict said.
Education experts have said they are concerned about student data, protected by federal law, being input into private AI companies’ systems.
Other area school districts are also implementing AI policies and programs. MMSD is in open communication with other districts, sharing ideas and following similar guidelines. Data privacy and keeping students safe have been — and continue to be — a conversation MMSD has with other districts.
From a teacher’s perspective, Gangler said he has not seen pushback from students, parents or many other teachers on AI usage. But Gangler said he does not view AI as “necessarily revolutionizing” education and sees AI companies “rushing” to market AI to teachers for educational purposes that he believes are not as “safe and secure” as they suggest they are.
“In testing SchoolAI, I was able to get it to produce slurs, just through manipulating it with prompts. So [it’s] pretty concerning that a company [whose] whole point is, ‘we make these safe chat bots,’ hasn’t done what’s necessary to prevent that kind of thing from happening,” Gangler said.
The future of AI in MMSD is constantly changing. Benedict said two months ago, his view of the future would have been different from today.
“If students do have access to artificial intelligence tools for their learning, it’s specifically those that will be aligned with our district values and our district strategic goals — with guardrails, guidelines and safety measures put in place to do our best to keep our students as safe as possible and that the information they’re getting is as unbiased as possible,” Benedict said.
By Jane Dardik STAFF WRITER
Artificial intelligence is powering breakthroughs in everything from health care to climate science, but each new discovery comes with a cost: significant energy. Now, researchers at the University of Wisconsin-Madison are asking a new question — how can this powerful technology be more sustainable?
Across campus, experts in data science and AI literacy are working together to ensure innovation is environmentally responsible.
“AI projects at UW-Madison range from foundational large-scale deep learning models to domain-specific ones that address humanity’s most pressing challenges,” Song Gao, director of the Geospatial Data Science Lab at UW-Madison and an expert in Microsoft’s AI for Earth,
told The Daily Cardinal via email.
“These include life-saving disaster response, sustainability solutions, digital agriculture and health and medical challenges.”
Gao’s work sits at the intersection of AI and Earth science, using “GeoAI” to analyze environmental data, predict natural disasters and track climate change. But those breakthroughs come with high energy costs — training large models can consume thousands of hours of computing time, significant electricity and many graphics processing units (GPUs), the powerful processors that handle intensive generative AI workloads.
To handle this demand, UW-Madison has expanded its research computing infrastructure with help from the Wisconsin Alumni Research Foundation.
The Data Science Institute (DSI),
the Center for High Throughput Computing, DoIT Research Cyberinfrastructure, the School of Medicine and Public Health and the Data Science Hub all received upgrades designed to support large-scale model training and storage.
A new RISE-AI Collaboration HQ initiative has also been introduced, which plans to bring in new investments, hire 35 new AI experts and hold events and seminars on AI and AI literacy.
“The RISE-AI Collaboration HQ will connect both new and established campus researchers to this increased capacity for processing and data storage,” Gao said.
At the DSI, researchers have access to some of the most powerful GPUs — Nvidia H100 and A100 chips — enabling high-speed model training and data analysis. Gao’s team is also working to mitigate AI’s environmental toll.
“At the Geospatial Data Science Lab, researchers are developing benchmarks and metrics used to evaluate the geographic biases and environmental impact of deep learning models and AI foundation models for Earth science,” he said. “When evaluating the environmental cost-benefit trade-offs, we need to consider the geographic regions that benefit from the AI model and those that bear the environmental costs.”
Bonnie Shucha, associate dean for Library and Information Services and a specialist in AI literacy, said sustainability also depends on how people use AI tools.
Continue reading @dailycardinal.com
By Zoey Elwood COPY CHIEF
As artificial intelligence rapidly advances, University of Wisconsin System universities are launching new majors and certificates to prepare students for an increasingly AI-driven workforce.
The programs aim to teach students how to use the technology ethically, practically and responsibly as it becomes more integrated into everyday life.
The University of Wisconsin-Eau Claire started offering majors, certificates and minors in Artificial Intelligence this fall, while the University of Wisconsin-Whitewater has offered an AI-related certificate since as early as spring of 2022. At the University of Wisconsin-Madison, engineering students have been able to add a capstone certificate in AI since April.
Dr. Emily Hastings, an assistant professor at UW-Eau Claire, cited a Coursera report finding generative AI is 2025’s fastest growing job skill, with an 866% year-on-year increase in demand for employees, students and job seekers.
She also noted that around 75% of employers are currently using generative AI and 62% expect their employees to be familiar with AI tools.
“We really wanted to try to prepare our students for the rapid changes in the workforce that are increasingly reliant on AI tools,” Hastings told The Daily Cardinal. “We wanted to be able to give students some knowledge and experience in evaluating and using AI models that they might encounter in industry.”
Hastings said faculty experiences helped shape and continue to evolve the curriculum, with feedback from departments such as computer science, math and data science helping identify relevant courses.
UW-Eau Claire students interested in an AI degree can choose between a STEM-focused 60-credit comprehensive major or a standard 36-credit major geared toward students pursuing careers in the social sciences, business, healthcare or humanities.
“We wanted to try to offer multiple paths for different students of different interests to get involved in AI,” Hastings said.
Beyond building technical skills, Hastings said she hopes students learn how to use AI ethically and responsibly, and her course dedicates time to discussions on ethical concerns to ensure students understand how to use the tools responsibly.
“There are a lot of things to consider in terms of environmental impact, labor and intellectual property concerns that I hope students are aware of, in addition to using the tools,” Hastings said.
The University of Wisconsin-Whitewater
UW-Whitewater approved its Digital Marketing and Artificial Intelligence graduate certificate in spring 2021, with courses available for enrollment the following spring, according to Dr. Andrew Dahl, an associate professor at UW-Whitewater.
Dahl said feedback from employers about skills future business leaders will need helped inspire the program’s implementation. He added that the Department of Marketing’s curriculum decisions are largely shaped by employer feedback on what they consider most critical.
After recognizing AI’s growing impact and expanding career opportunities, Dahl said UW-Whitewater saw a need to extend this education to the undergraduate level.
“We like to focus on being, I’d like to say, ahead of the game — or cutting edge,” Dahl told the Cardinal. “We’re always trying to be forwardthinking and looking ahead in terms of what’s next.”
The university approved a Bachelor of Business Administration in Digital Marketing and Artificial Intelligence Emphasis, along with a minor in Digital Marketing and AI and an AI in Marketing certificate in fall 2024, with enrollment beginning the following year.
These programs not only focus on generative AI tools, but also predictive and analytical AI — systems that help companies make sales forecasts, identify target audiences for marketing campaigns or determine which
customers to prioritize.
“We’re focused on training students to help make better decisions by leveraging AI and the power that it has in terms of being able to quickly analyze a ton of data and recognize patterns that you know we can’t quickly recognize,” Dahl said. “We’re also trying to show students how to use both analytical or predictive AI and then, now more than ever, generative AI to think about how to use it strategically.”
Dahl said UW-Whitewater’s marketing and AI programs expose students to a range of technological tools, including resources from Amazon Web Services — such as its sentiment analysis features, which help determine whether people on social media feel positively or negatively about a brand or product.
Students also learn about predictive and analytical AI to understand how design choices affect the development of neural networks. The programs also offer hands-on experience with generative AI systems like ChatGPT and Gemini, which students use to build chatbots.
Through these programs, Dahl said he hopes students leave with the ability to manage teams that utilize AI and think critically about how to drive organizational change using this technology. He also wants students to recognize opportunities to apply their AI skills and integrate them into current or future work.
“Part of our goal here is to help students understand that there’s more strategic applications of generative AI, and then also recognizing that, ‘hey, it’s not just generative AI,’” Dahl said.
He said many students who begin with the Digital Marketing and Artificial Intelligence graduate certificate go on to complete a Master of Science in Marketing or a Master of Business Administration.
Dahl added that some students pursue the certificate to broaden their skills and career opportunities, and that at the undergraduate level, the Digital Marketing and Artificial Intelligence emphasis has become one of the department’s most popular concentrations.
The University of Wisconsin-Madison
UW-Madison introduced a new capstone certificate in Artificial Intelligence for Engineering Data Analytics in April, aiming to help engineers integrate AI into their work.
The idea emerged after engineers began questioning how to effectively use AI in real-world applications, said Dr. Sinan Tas, a professor at UW-Madison involved in developing the program.
“This certificate bridges that gap by helping professionals turn data and models into real engineering value. It’s built for working engineers who want to speak the language of AI, design intelligent systems and lead responsibly in an era of digital transformation,” Tas said.
So far, Tas said interest in the program has “exceeded expectations.”
He said companies are sponsoring employees to enroll in the program, and students say the certificate helps them gain “immediate, tangible” value and bring it back to their teams.
He said the program emphasizes applied knowledge, training students to apply generative and agentic AI to real engineering challenges.
Students learn how to design retrieval-augmented generation workflows — an AI framework that expands a large language model’s access to external information, in addition to data it was trained on — build AI chatbots and incorporate AI into everyday processes.
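In rough outline, a retrieval-augmented generation workflow retrieves relevant documents first and then feeds them to the model alongside the question. The sketch below is purely illustrative and not drawn from the UW-Madison curriculum: the `retrieve` and `build_prompt` helpers and the sample maintenance notes are invented for the example, retrieval is naive keyword overlap rather than the vector search real systems use, and the assembled prompt would normally be sent to a language model.

```python
# Minimal, illustrative sketch of a retrieval-augmented generation (RAG) workflow.
# Retrieval here is keyword overlap; production systems use embedding-based search.

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by shared words with the query and return the top k."""
    words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the model can answer from external data."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical engineering notes standing in for a company knowledge base.
docs = [
    "Pump P-7 requires lubrication every 500 operating hours.",
    "Conveyor C-2 belt tension is checked weekly.",
]

prompt = build_prompt("How often is pump P-7 lubricated?", docs)
print(prompt)  # This prompt would then be passed to a language model.
```

The point of the pattern is that the model's answer is grounded in retrieved text rather than only in what it memorized during training, which is why the article describes RAG as expanding a model's access to external information.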
The program also takes “a low or no-code approach,” allowing engineers to focus on the application and impact of AI rather than programming.
Students will complete a capstone project applying what they’ve learned to real workplace challenges. They’ll gain hands-on experience with Microsoft Azure AI Foundry, Prompt Flow, Copilot Studio, Cognitive Search, Power Automate and Azure OpenAI models to build end-to-end, AI-driven solutions.
“Graduates leave able to design, deploy and communicate about AI systems that make measurable impact,” Tas said.
Tas said the program’s curriculum evolves alongside the latest industry advancements. Instructors actively design and implement AI systems through their own research and consulting, bringing real-world expertise directly into the classroom and continuously updating content to reflect new tools and methods.
“We integrate emerging technologies such as connected agents, AI orchestration frameworks and new safety standards while maintaining strong conceptual foundations to ensure long-term relevance,” Tas said.
The capstone certificate integrates seamlessly with UW-Madison’s Master of Engineering in Data Analytics (MEDA) program, creating a pathway for deeper specialization in AI. Tas said students can begin by completing the certificate and, if they want to continue their education, apply to the MEDA program.
“We’re making our courses more modular, more applied, more responsive to industry change, which is something you want in every college course, but specifically engineering course education,” Tas said. “As AI continues to reshape the design, the production and decision making, our goal is to make sure that UW Madison plans to stay at the forefront and helping the engineers not just keep up with AI, but lead with it.”
Kaci Maack, a graduate student at UW-Madison, said the program has given her a deeper understanding of AI.
“I think AI is touted as very, you know, ‘only smart people understand it. It’s going to take over,’ but being able to just talk colloquially about it, understand it, has been really nice,” Maack said.
Maack added that the skills she gained from the program will be used in her career, aiming to use them to help others understand this new technology.
“I think UW-Madison is also taking a very approachable approach to it,” Maack said. “I very much feel comfortable in this new age [of AI] that we’re getting into because of this certificate.”

By Zoey Elwood COPY CHIEF
Within days of being sworn in as a state senator, Sen. Jodi Habush Sinykin, D-Whitefish Bay, made her first site visit to Rogers Behavioral Health Hospital. There, health care providers informed her of a disturbing trend: a rise in AI-generated deepfake pornography of young girls.
Staff heard girls describe scenes of students huddled over their screens, laughing at fake nude images of their female classmates. Sinykin said this behavior creates a difficult environment for girls trying to navigate their academics and social lives.
“I had heard about it earlier and was just horrified — as a mom of four children, including a daughter, and just myself, of course, being female — and how this is being misused in such a rampant way,” Sinykin told The Daily Cardinal.
A 2023 report from Thorn, a nonprofit tech company building tools to protect children from sexual abuse, found that anywhere from 1 in 10 (11%) to as many as 1 in 5 students across the country knew of classmates or friends who used AI to generate sexually intimate deepfakes of other students.
When Sinykin later heard of a bill that would penalize the creation and distribution of nonconsensual deepfake pornography, she shared it with Milwaukee County prosecutors, who aided her in writing it, alongside the Sensitive Crimes Unit — a unit that handles cases involving women and other vulnerable groups.
Sinykin said prosecutors raised concerns about the bill’s language, as the original version of the legislation focused on the perpetrator’s “intent,” which they said would make convictions difficult. Instead, prosecutors recommended focusing on whether a deepfake was created or shared without the subject’s consent.
The bill unanimously passed both chambers, and was sent to Gov. Tony Evers, who signed it into law Oct. 3.
Before the legislation passed, Wisconsin law classified the nonconsensual taking and sharing of nude photos as a Class I felony. The new law expands that prohibition to include AI-generated “deepfake” images, making it a Class I felony to create or share an intimate image of someone without their consent with intent to coerce, harass or intimidate them.
Bill sponsor Sen. André Jacque, R-New Franken, said the legislation was written with growing public concern about AI in mind. He described the bill as a way to ensure technological advancements are handled responsibly.
“I just think it’s really important, as we look at the promise of AI in a number of contexts, that we are at the same time putting guardrails on its use to make sure that we are not letting it go in a dark direction that can do quite a bit of harm,” Jacque told the Cardinal. “This is absolutely an example of a bright line that will offer some protection, especially as people realize that it’ll be enforced.”
The bill included personal research from its sponsors as well as external studies. Sinykin cited a Sensity AI report which found that 90-95% of deepfake videos since 2018 consisted of nonconsensual pornography, with about 90% of those being of women.
Alan Rubel, a professor at the University of Wisconsin-Madison, told the Cardinal deepfakes can trigger emotional responses that affect people’s trust in social and informational environments, and that loss of trust can easily be abused.
He noted that these fabricated images can also influence how people form relationships, calling deepfakes an easy tool for exploitation and manipulation that can ultimately reshape how individuals approach personal connections.
Rubel also emphasized technology’s rapid advancements, and the constant creation of new tools that could cause harm.
“It [technology] always moves quickly, and it’s always ripe for some kind of abuse, no matter whose hands it’s in,” Rubel said.
Rubel said many people remain unfamiliar with today’s technology, creating a gap between users and their understanding of potential dangers created by ever-evolving technology.
“Most people have not explored the technology in-depth. It takes some effort to understand it, and it can happen in a way that’s surreptitious. I think that we’re not well prepared,” Rubel said. “We were not well prepared even before exploitative images were generated by AI — [we weren’t] prepared for exploitative images that were not synthetic. This will be even more difficult to address.”
Rubel said while the law targets individuals who create deepfakes, it does not address those who distribute the tools used to produce them.
“There’s always people who are going to exploit any technology into either malicious, ill-conceived or stupid stuff with it. The entities — the people that make the technology — share some responsibility,” Rubel said. “If they have noticed that technology is being used to create deepfakes, [they] should share some liability as well.”
He added that while millions of people can use a single technology to cause harm, there’s typically only one source creating it.
“If you could stop the one technology … that’s a more efficient way,” Rubel said. “And I think that’s where a lot of the responsibility lies.”
Jacque, Sinykin and Rubel all encouraged parents to stay engaged with how their children use technology. Sinykin went further to emphasize the importance of educating children about what they shouldn’t participate in and how to respond if they encounter or learn about deepfake distribution.
“It’s just yet another deeply concerning problem that we have to inform our kids about,” Sinykin said. She added that parents should urge their children to contact law enforcement or an adult if they encounter deepfakes to help ensure perpetrators are identified and convicted. Sinykin also stressed the need for conversation between parents and children to make sure they are not personally affected by deepfakes.
By Shane Colpoys
SPORTS EDITOR EMERITUS
Located off County Line K in Wausau, Wisconsin, cattle graze while drones fly overhead from one end of the 120-acre farm to the other. Here, dairy farmers are using robots, data sensors and drones to produce milk — all with the help of artificial intelligence.
Farmers and students at Northcentral Technical College’s Agriculture Center of Excellence collect and analyze data using machines and algorithms, allowing them to make informed production decisions.
The farm serves as the state’s only working and learning center, testing AI-driven technologies in one of Wisconsin’s top industries: dairy.
“They are using the most advanced AI available to drive the functions of their agricultural center,” Wisconsin Department of Workforce Development Secretary Amy Pechacek told The Daily Cardinal.
While AI is transforming Wisconsin’s key industries, state leaders say new technologies are enhancing efficiency, productivity and sustainability in a workforce that continuously faces labor shortages.
Over the past decade, the dairy industry in Wisconsin experienced a multitude of challenges, from labor shortages and loss of land because of urban sprawl to producing milk for a larger population.
Greg Cisewski, the Dean of Agriculture, Utilities and Transportation at Northcentral Technical College, said the Agriculture Center of Excellence is a testing ground for new technology to find ways to “grow more on less land” and do so efficiently.
The farm welcomes locals in the industry, allowing them to see how they’re utilizing AI tools while also helping farmers across the state adopt AI initiatives.
Today, the farm has about 55 dairy cattle that are all milked robotically. These robots aren’t just milking cows; they also use algorithms to collect and analyze information about each cow’s health and when she is ready to be milked.
“We feed them robotically, and we even clean up after them robotically,” Cisewski said. “From how much they chew their cud and how much they’re giving per quarter to how much the cows weigh, all of that information comes directly to our farm staff through computers that are hooked up to all these robots.”
Cisewski said the farm is also using sensors and drones to increase productivity and solve the dairy industry’s labor shortage.
Through initiatives and funding, the state is helping technical colleges prepare students with the skills, curriculum and hands-on opportunities needed to succeed in the rapidly evolving workforce.
“Businesses across the state have not been able to hire enough workers,” Pechacek said. “To keep this economic, record-breaking streak going, we want to encourage businesses and assist them to adapt to technology, adopt it and embrace it, to further their growth and prosperity.”
Gov. Tony Evers started a task force in 2023 to help Wisconsin navigate AI. Wisconsin was the first state to create a task force completely focused on AI and the workforce, and Pechacek said it provided critical information needed to navigate changes to Wisconsin’s “record-breaking workforce and economic performance.”
Government officials, education and business leaders and community members researched the impact of AI and provided policy recommendations to key industries in the state, including agriculture and manufacturing.
Here’s how Wisconsin’s key industries are adapting and taking advantage of opportunities within AI innovation.
Manufacturing
Even more aggressive than agriculture is the integration of AI into manufacturing, said Lyla Merrifield, President of the Wisconsin Technical College System.
The manufacturing industry in Wisconsin employs nearly half a million people and contributes $73 billion to the state’s GDP, making it the largest sector in Wisconsin.
Merrifield said there’s a “tremendous amount of potential” with AI to maximize collecting and analyzing data in order to know exact production costs and maintenance needs.
“The goal is to never have a machine that’s down because that’s lost productivity,” Merrifield said. “We use AI to track lots of data from lots of sensors and figure out exactly how those machines work to keep them up and running.”
Government funding and support has helped Wisconsin’s technical college system train faculty and integrate AI into their curriculum so students are equipped with highly specialized skillsets needed to work in industries that are continuously evolving.
While the state is working with educators, Pechacek said she’s also committed to ensuring small businesses have the tools they need to fully integrate AI and remain competitive globally, because they make up about 50% of the state’s manufacturing industry.
Pechacek said the “economic sands” in the state have shifted, making it more difficult to address workforce challenges with AI than it was a couple years ago. She said the government shutdown and federal policies, including tariffs and immigration, have hurt Wisconsin’s major industries.
“We’ve got a world class workforce, we are nimble and we are still outpacing the nation,” Pechacek said. “I’m confident in our ability to overcome and continue to move forward. But it’s definitely a little different in today’s world than it was in 2023.”
According to a report released by the University of Wisconsin-Extension in June, the state is experiencing a structural labor shortage, meaning there are not enough qualified workers to fill available jobs.
In 2023, Wisconsin had about 2.5 jobs for every one job seeker, Pechacek said. Today the gap is narrowing, where the ratio of unemployed persons per job opening in Wisconsin was 0.7 in May. This is a national trend, and Wisconsin is one of 28 states that had ratios lower than the national measure of 0.9 unemployed persons per job opening.
“AI is allowing our farmers, producers and their staff to do more with the time they have,” Cisewski said. “Our robots allow our staff to go out and pick the things they want to do that day, knowing that technology has the rest of these things covered.”
Farmers use collars with tracking devices to monitor how much a cow walks around and how active she is. Cisewski said they also put a tracking device called a bolus in each cow’s stomach to read body temperature and measure minute changes that tell them when she needs to be bred and when she’s going to give birth.
“It’s really helpful data,” Cisewski said. “We can see a small change and we know she’s going to have her calf in six hours.”
Cisewski said the farm flies drones to map their fields, providing exact information for farmers on what parts need to be sprayed with fertilizer.
“What we’re trying to do is reduce the input, both for environmental and profitable sustainability for our farmers,” Cisewski said. “Drones go around and read where weeds are and only turn the sprayer on when it finds a weed; until then it keeps it off.”
By Avery Chheda STAFF WRITER
Artificial intelligence and the data centers that train, install and maintain it have found a new home in the Great Lakes region, whose abundance of freshwater is vital in cooling down the servers that power these facilities. But as tech giants like Microsoft descend on Wisconsin, experts and community members alike question if the environmental impact of these facilities is worth it.
The United States is home to over 4,000 data centers, with Wisconsin currently hosting 47 of those sites, a number that is only growing. Major data centers have been planned for Beaver Dam and Port Washington, and Microsoft announced a $3.3 billion data center project in Mount Pleasant, Wisconsin in September.
“We’re very concerned,” Emily Park, the co-executive director of 350 Wisconsin, an independent climate action group, told The Daily Cardinal. “This pace at which we are expanding our data center capacity is frankly alarming.”
Wisconsin Conservation Voters, an environmental advocacy group, has called for more transparency when local governments vote to approve these data centers.
“Many of the discussions [regarding data centers] are done quietly, behind closed doors, or in closed session, and transparency is limited almost until it is too late for the public to get involved or really influence the decision at a local level,” Dr. Brittany Keyes, a clean air policy manager at Healthy Climate Wisconsin, a climate organization of healthcare professionals, said. “It seems like the transparency has been quite poor thus far with these developments in different communities.”
AI data centers pose environmental risks in two primary areas: power and water consumption. The AI industry is increasingly outpacing electrical companies. Rising electricity demands increase consumer costs and make it difficult for electrical companies to accurately predict future needs that they might not even be able to meet.
“These data centers require an enormous amount of energy and enormous amounts of water to operate.”
Ryan Billingham Communications Director
“One of the main concerns is just how thirsty these data centers are. If we’re going to say yes to these data centers, we have to [set clear guidelines to] make sure they’re being done [with clean energy].”
Constraints on electricity push AI companies to seek alternative sources of energy. One alternative is nuclear power, but as AI companies continue to outpace renewable energy providers, solar, wind and nuclear energy become harder to secure.
In the meantime, AI companies continue to utilize fossil fuels, adding to global CO2 emissions.
Keyes said even with alternative energy sources, fossil fuels will still be the answer for meeting data centers’ energy needs during blackouts and brownouts.
“To my understanding, in Wisconsin, [data centers] are going to meet their backup needs through large diesel generators,” Keyes said. “The pollution from diesel generators are known carcinogens, so [they’re] harmful to human health [and are] known to cause cancer.”
While some AI companies claim they will replace energy consumed from a public grid, it’s unlikely companies will be held to this standard due to a lack of regulations around grid use and the federal government’s reluctance to regulate tech companies. An executive order by President Donald Trump said “Artificial intelligence (AI) will play a critical role in how Americans of all ages learn new skills, consume information, and navigate their daily lives,” and implied that “woke AI” is a main contributor to AI misuse.
Billingham is highly doubtful that these companies will rein in their fossil fuel usage as they continue to spend large sums of money on fossil fuels.
“There’s no company in the world that would put a billion dollars in infrastructure if they didn’t intend to make a lot of money off of it and increase their profits,” Billingham said.
The search for freshwater
Microsoft announced the $3.3 billion Fairwater project in September, which will power Microsoft AI services like Copilot.
The Fairwater site will house three large buildings with a combined 1.2 million square feet on a 315 acre plot off of Highway KR in Mount Pleasant, Wisconsin, just 25 miles from Lake Michigan. Microsoft called it the “largest and most sophisticated AI factory” they’ve ever built.
In addition to the astronomical energy consumption, which Billingham said rivals entire counties, water use will also be a concern. Data centers require freshwater to cool their servers.
In an attempt to minimize water consumption, 90% of Fairwater’s infrastructure will be cooled by a closed-loop system. The system is filled with water once during construction and continually reused by pipes directly integrated into the facility. The remaining 10% of servers only use water during peak temperatures, relying on outdoor cooling for the majority of the process.
While closed-loop water systems are sealed, the machinery can still fail due to leaks or overflow, forcing additional water to be added to the system. Impure water added to the circuit can in turn lead to microbiological growth and corrosion, further damaging systems, increasing costs and wasting large amounts of water.
To prevent this, chemicals like nitrates, azole and sulfite can be added to the water, creating a protective layer on the pipes but contaminating the freshwater with toxic substances.
“Biocides, corrosion inhibitors, those kinds of things [are added] because the water just stays in the system, and so it gets more and more contaminated,” Park said.
And Park, a former water treatment analyst, said there are still too many unanswered questions.
“I think there’s also questions about, are they planning on drawing from aquifers, are they drawing from surface sources,” Park said. “If they are drawing from surface sources, are we making sure that those [bodies of water] are able to recharge at a rate that is still supportive of local wildlife?”
Experts have also questioned how much water data centers will take from lakes in the first place.
After the City of Racine withheld public records on a new data center’s water consumption, Midwest Environmental Advocates (MEA), an environmental lobbying organization, sued, saying it’s impossible to properly evaluate the true environmental impact on Lake Michigan without knowing the amount of water used.
MEA Communications Director Peg Schaeffer said the City of Racine had not been transparent, only releasing information after seven months and a lawsuit. Two days after the suit was filed, the city released documents stating the data center’s Area 3B — the current planned campus — would require 2.8 million gallons of water but would return just over 2 million gallons back to the lake.
If Microsoft expands the data center to include Areas 3B and 2, however, a predicted 8.4 million gallons of Lake Michigan’s water will be used in the project annually, with only 6 million gallons reentering the lake.
According to the Wisconsin Department of Natural Resources,
the 6 million gallons of discharged water will only be returned to the Great Lakes Basin after being treated in the Racine Wastewater Treatment Plant. Microsoft hasn’t explained how the water will be treated before being discharged back into the lake.
“The treatment [and] cooling [of] water is not as simple as people think,” Park said. “That water is full of chemicals. You [also] can’t just discharge warm water into an ecosystem where the water is fairly cool because that will cause massive imbalances in the lakes.”
Ultimately, what many experts and community members want from these massive corporations is transparency.
“Imagine the Great Lakes as a pitcher of water,” Matuska said. “One thing that we kind of need going forward is a more holistic view of how many straws are going in that pitcher [and] how much water is going out of those straws.”
Corporations and communities
In response to many of the environmental criticisms companies have faced, Microsoft has campaigned to strengthen ties with communities they want to build data centers in. In an effort to keep energy prices stable for locals, the company pre-paid for the energy and infrastructure and pledged to match “every kilowatt hour of fossil fuel energy used with carbon-free energy supplied back to the grid,” according to Data Center Magazine.
Microsoft also established a partnership with the Root-Pike Watershed Initiative Network to restore several creeks in southeastern Wisconsin. The network was founded with the Wisconsin Department of Natural Resources in an effort to “restore, protect and sustain the Root-Pike Basin,” an important regional watershed.
But Billingham said it’s not enough, and the scale of Microsoft’s environmental impact — funding local creek restoration versus building 350 acres worth of data centers — isn’t a fair trade.
“Anything that a corporation or somebody like Microsoft would do to help is obviously welcome, but the ratio and scale of these things really makes something as small as help with
creek restoration almost laughable,” Billingham said.
Park said even if individuals in conservation roles at these companies care about the environment, that same care isn’t translated to their executive actions.
“The reality is, these are multibillion dollar corporations. What they care about is money, and they’re not necessarily [motivated] to actually, genuinely care about the environment,” she said.
Fifty miles up the coast from Racine, in Port Washington, Wisconsin, Denver-based Vantage Data Centers announced plans to break ground on a 3.5-gigawatt AI campus this fall. Part of the Stargate program, President Trump’s $400 billion investment in AI infrastructure, the campus will span 672 acres and feature four buildings to expand the reach of tech giants OpenAI and Oracle.
One point of concern for locals is how the project’s use of Lake Michigan water will affect the 12 million people living on the lakeshore. Vantage said their facilities will be “water positive,” cleaning and improving the quality of more water than they use. An average of 22,000 gallons of water — the equivalent usage of 65 houses — is expected to be cleaned each day.
Despite officials from the company and city promising to maintain nearby residents’ water and energy costs, Port Washington’s Common Council meeting earlier this month was overflowing with locals opposed to the project. The need for substantial resources, environmental impacts and criticism toward AI were among their most pressing concerns.
“What we’re seeing in communities across Wisconsin, here in Dane County and also in Rock County [and] Janesville, where there was another [data center] proposed, is a grassroots movement to say no to these projects,” Billingham said.
“I think the people that live in the communities in particular are the ones that should have the loudest voice. They know what’s best for their community.”
Ryan Billingham Communications Director
While Park and other experts consider the benefits of projects, such as job creation and economic gain, they noted the importance of balancing those benefits with potential environmental drawbacks.
“It’s not that we’re anti AI and anti technology, it’s just that we want to see AI and tech solutions in a way that is actually ethical…and reasonable to the environment,” Park said. “How are we using [these tools] that have a lot of potential to bring a lot of good to humanity [without] further [ruining] the planet and [making] the spread of disinformation worse? That’s a big question that I don’t know the answer to.”

By Harper Sollish STAFF WRITER
Many professionals in scientific and creative fields fear a near future where artificial intelligence takes over their jobs. But not University of Wisconsin-Madison informatics data scientist and improv comedian Ben Rush.
Rush founded Amalgam Improv in 2024, a Madison improv group designed to let people experiment and practice in a comfortable environment with friends. In the time since, he’s found AI to be a valuable tool in speeding up mundane tasks so he has more time for the human side — his comedy — to shine through.
While Rush is an AI user, he is very strict about what he asks it to produce for him. “I’m not trying to have [AI] take my brain’s place. [It’s] an assistant and editor instead of the main idea driver,” Rush said. He has attempted to practice improv scenes with a chat bot, but said it was “terrible.”
Instead of generating comedy, Rush uses AI to be more efficient with necessary background tasks, since he spearheads most events for Amalgam by himself. He finds AI helpful in polishing documents and cowriting, often asking “What am I missing?” Marketing ideas, branding ideas, scripts for newsletters and color palettes are a few other areas he asks AI to brainstorm.
Rush knows stats and coding, but not front-end web design or marketing. On the
Amalgam website, there is an “Improv Map” showcasing 275 improv groups across the world. This map is mostly created with AI, and Rush then checks over the code to approve it. Alongside the map is a form where groups submit their information to be added to the map; instead of spending hours fact-checking and confirming everything, Rush uses AI to double check for him.
Rush also leads seminars and workshops for scientists, some focused on AI and some on humor. Rush said the heart of his workshops is to “lower the barrier” of communication and humor for scientists, who have been told that those are not areas they will excel in.
“There is a lot of pressure to do it right because we have built a culture in science, which I disagree with, that you have to look smart to do it. People can get intimidated during seminars to ask ‘dumb questions,’ but true communication is also the person presenting the information in a way the audience can digest,” Rush said.
Rush has been working in the radiology department of UW-Madison’s School of Medicine and Public Health for almost a year. He said he found his niche in machine learning and AI during his biology statistics-oriented post-doctorate.
After initially using AI to translate different coding languages, Rush now uses AI daily to help him code, brainstorm and manage large research projects. “Ideas for building lab culture, planning events, doing all the things I don’t like doing… all that can be automated via AI,” he said.
Rush has always had an interest in comedy. “The Simpsons” and “Whose Line Is It Anyway?” were constantly on growing up, and in 2020, he took a class titled “Improv for Scientists” to start taking his comedic career more seriously.
“That class was the permission I needed to get out of a perfectionist mindset. The best thing I learned from that class was just being comfortable in chaos, and just knowing that most likely, things will work out,” Rush said.
He now applies this mindset to how he tackles AI. Rush believes doing improv has helped him become a better data scientist, especially when it comes to working with AI.
He said the ability to translate scientific jargon is one of AI’s greatest potentials.
While Rush uses AI often, he still does not give it his full confidence and always makes sure to check over everything it gives him.
“I can have AI help me create an automated newsletter that is still fun to read that saves me an hour or two a week, and I can instead spend that hour or two building a community and supporting my friends and people who I really love,” Rush said. He employs AI to do busywork so he can spend more time on what he enjoys — improv, art and community.
By Lydia Picotte & Oliver Gerharz ARTS EDITORS
Bright colors. Dark and gloomy. Colorful palette. On the surface, these don’t seem like phrases that would stand out to experienced faculty like Heather Schatz as she read through essays from her Art 100 students. But a closer analysis of more than 600 papers revealed to her what non-specific phrasing like this says about a student: They’re using AI.
“That vagueness became a reliable tell,” she said. “AI is not good at writing about art.”
Within the University of Wisconsin-Madison’s art department, Schatz isn’t alone in trying to navigate AI and art — and the question of whether the two should come together at all.
In recent semesters, most syllabi at UW-Madison include some form of artificial intelligence policy — ranging from outright bans to permitting students to use it to assist in assignments.
While many STEM students find AI productive in assisting with coursework, student artists told The Daily Cardinal they view AI through a less favorable lens.
Junior music education major Emerson Janes said he is concerned that AI uses artwork in databases without permission and shortcuts the creative process.
“It’s awful. It’s abhorrent. It needs to be stopped,” he said.
“It provides some really exciting and groundbreaking developments that we can take advantage of, potentially,” UW-Madison digital media Professor Meg Mitchell said.
As one of only a few tech specialists in her department, Mitchell said she sees a variety of ways that students can use AI, but prefers to use it as more of an assistant or brainstorming tool — not something to make visual media.
Mitchell assigns students a collage, where AI use is allowed in a limited capacity — one out of five images in the collage. Even then, students have to follow “a rigorous documentation process.” If she did receive a fully AI-generated assignment, Mitchell said she “would consider having a conversation with the student about plagiarism.”
When photography was first invented, it was seen as antithetical to art. But now, it is revered in museum collections and exhibits much in the same way as paintings.
“I think there is a long history of people being fearful of emerging technologies. I think AI is definitely suffering from a bit of that. I also think AI brings some new challenges that are a little bit more complicated than something like photography,” Mitchell said.
The cross section of AI and art on campus isn’t new. In 2023, the Wisconsin Science Festival hosted a lecture analyzing the ways AI aids artists while also exploring the nuance of authorship and what “art” truly is.
This summer, UW-Madison hosted an “AI and Society” workshop, giving attendees the chance to learn more about AI’s impact on various sectors, including the arts.
By creating spaces to discuss these issues, the university has tried to prepare students for an uncertain and evolving future.
Communication Arts Professor Jeremy Morris shared a similar perspective in relation to other artistic mediums in an interview with The Cap Times.
“I think it is always hard to come down on the side of ‘no, this technology should not be used in this space,’” Morris told The Cap Times in September, speaking specifically to AI and music creation. “I think the more interesting question is ‘how do we use it and how does that come to define the things we listen to?’”
Despite some attempts at integration, the uncertainties surrounding AI have left art classes with policies as strict as the rest of the university's. It remains to be seen exactly how AI will eventually be assimilated into media, as the technological advancements before it were.





By John Ernst and Kayla Dembiec FEATURES EDITOR & SENIOR STAFF WRITER
PORT WASHINGTON, Wis. – Just 25 miles north of Milwaukee, the historic lakefront town of Port Washington has held onto its small-town charm through generations. But recently, the city became Wisconsin’s newest artificial intelligence data center battleground, with developers eager to tap into the Great Lakes region and local residents voicing opposition. Environmental activists and community members spoke out at common council meetings, while union workers sporting orange sweatshirts and green “I Support Data Center” pins voiced support for the jobs the project will bring to the city.
Vantage Data Centers, the company that gained approval from the Port Washington Common Council in August to build the 672-acre campus, is scheduled to break ground on the project later this month, with construction concluding in 2028. The campus will be operated by multi-billion dollar corporations Oracle and OpenAI. Vantage was identified as the company responsible for the land purchase in June, with the short turnaround on the project leaving residents of Port Washington conflicted about how the issue was handled by the common council.
“I think it’s really disappointing that by the time most people in the area [who] will be directly affected by this learned about it after the town board approved it,” Roxanne Ramirez, a local farmer in Ozaukee County who opposes the data center, told the Daily Cardinal. “I feel really disappointed in the fact that there was no open discussion, like this was all brought to us with them saying… it’s a done deal.”
The common council also voted to approve a Tax Increment District, which is pending final approval from the city’s Joint Review Board on Nov. 18. Vantage will pay $175 million up front to Port Washington to improve city infrastructure including expanded wastewater treatment, upgraded sewer lines and a water tower. As property tax numbers rise in the district, the city will reimburse Vantage over a period of 20 years or until Vantage is fully paid back.
“I can’t stay silent while we consider a tax increment to fund a private data center,” said Scott Lone at the Nov. 4 common council meeting.
“Let’s be clear, a TID is public money. Our money. It’s meant to help neighborhoods that need a boost, to spark projects that otherwise wouldn’t happen. This is a global tech company asking taxpayers to help fund their expansion into the Port Washington community.” Community members were vocal at the Nov. 4 meeting, noting they hadn’t had a chance to vote on any of the land allocation, there was little to no
communication from the common council and news of the initial land purchase was buried in local press in order to avoid pushback.
“[City Council] fast-tracked and then annexed the rezoning of this through publishing it through the Ozaukee Press, an outdated media outlet, knowing that most residents won’t consume information from the Ozaukee Press,” Port Washington resident Tracy Finch said at the common council meeting. “This is treating public input like an inconvenience rather than responsibility.”
Finch was not the only resident concerned about transparency from the common council.
Riley, a Port Washington resident who asked to remain anonymous, told the Daily Cardinal, “this has been rush, rush, rush, let’s go.”
“I just feel like everything has been very hush hush, and it all passed quickly. And you know the timing, over the summer months, when people are busy and not really paying attention to everything, and then all of a sudden, here we are, and it’s happening,” Riley said.
She described concerns about extreme heat pollution and the water coming out of faucets, with homeowners filling up buckets in case taps went dry from the data center’s water demands.
“We all bought our houses out here because of the beauty of the landscapes,” Riley said. “We love the bald eagles, the turkeys, the deer, the possums, the cranes, [the] blue herons. Will they all stay, or will they relocate? We have no idea.”
The four buildings that make up the data center will require 1.3 gigawatts of electricity, which could require a $1.4 billion transmission line project to bring energy to the site, according to an American Transmission Co. application to the Public Service Commission of Wisconsin. The path of the transmission lines has been heavily debated, with residents of nearby Fredonia lobbying against the lines coming through their town. Riley said community members have expressed concern about how the lines will obstruct their homes.
“[We have] to live with these huge high-powered electricity lines above [our] homes and in [our] front yards, in [our] backyards, where [our] kids play,” they said.
Port Washington Mayor Ted Neitzke IV claimed in September the data center would only use up to 10,000 gallons of water per day to cool its servers — the main environmental concern — but an analysis published by Clean Wisconsin in November reported that number could range up to 54,000 gallons.
Although Vantage and We Energies, the local electric utility in charge of supplying energy to the project, have claimed renewables will contribute to the project’s energy portfolio, they were unable to provide clear answers to the common council about those sources at the Nov. 4 meeting. As with many other AI data center protests across the state, the environment has been at the forefront of concern.
Ramirez said the city did not conduct a comprehensive environmental assessment, while other communities across the U.S. have cited adverse environmental impacts after data centers were built in their towns.
Sourcing the labor
Some of the most vocal support for the data center has come from local unions, which have passionately lobbied the common council, saying the construction jobs created by the project are integral to the local economy.
Chris Olig has been a union worker for 20 years. He’s a business representative for LiUNA Local 113, a union representing construction workers in the greater Milwaukee area. His workers stand to benefit from Vantage’s new data center and the jobs it will create.
“The Vantage data center project is going to employ hundreds of local laborers on this project,” Olig said. “There’s approximately $8 billion worth of work scheduled.”
The project will reportedly create 4,000 temporary construction jobs, 5,600 additional jobs from vendors and just over 300 permanent jobs. According to Olig, the project will bring about $10 billion in total revenue to Port Washington in the form of hotels, restaurants, housing and entertainment.
Of the protests, Olig said the process of public debate was important to solving local issues, pointing to a “silent majority” of absent supporters who were “fine” with the development.
“In reality, what you’re seeing and what you’re hearing is a great display of governance where you have the ability to come here, stomp your feet and complain about a project,” Olig said.
“What I can say about the people who are complaining about this is not one time have I heard anybody say they hate unions,” Olig said. “They’re supportive of the fact that we want to work, and we’re supportive of their feelings or adversity to this project.”
Ramirez echoed that sentiment, acknowledging that the protesters at the meeting were neighbors and that it is important to treat them as such.
“We have a lot more in common with our neighbors here in these union workers than we have with the billionaires who are actually profiting off of our data,” she said. “I think that we’ve all become a little bit comfortable in our bubble with our phones and our social media, and I think that we need to start to reorganize ourselves within our physical communities, get to know our neighbors, and connect with each other face to face.”
Port Washington moved its common council meeting from its normal quarters in city hall to a nearby banquet center to accommodate the exceptionally large crowds. Around 150 community members attended to debate how the city should proceed. Activists and union workers alike acknowledged that coming to the table with each other in mind is important.
Vantage is expected to begin construction later this November. The next common council meeting is scheduled for Nov. 18, where community members plan to continue voicing their opposition and the Joint Review Board will vote on final approval for the TID.
By Brady Kamler STAFF WRITER
Sports betting has always had a prominent place in the gambling industry, yet with recent law changes and the rise of artificial intelligence, it has taken on a whole new form. The traditional odds-making strategies used in Las Vegas for decades have been replaced by algorithms, online apps and data-driven automation that refine predictions to new levels of precision.
This exponential rise in online sports betting can be traced back to 2018, when the U.S. Supreme Court struck down the Professional and Amateur Sports Protection Act, a law enacted in 1992 that banned state-authorized sports betting nationwide (excluding Delaware, Montana, Nevada and Oregon). The law left many in the U.S. without an avenue to bet on their favorite sports for 26 years, so its overturning paved the way for states to legalize and regulate sports betting on their own terms.
Since 2018, 39 states and the District of Columbia have legalized some form of sports betting, and 31 of those have legalized both traditional and online sports betting. Paired with the introduction of online sportsbooks such as DraftKings and FanDuel, the online sports betting industry has grown rapidly. According to the American Gaming Association (AGA), U.S. sports betting revenue reached a staggering $13.71 billion last year, a nearly 24 percent rise compared to 2023, with FanDuel counting 4.5 million active users and DraftKings 4.8 million.
As online sportsbooks have grown in popularity throughout the country, so has the implementation of AI. AI has become increasingly prevalent in the operation of online sportsbooks, with odds calculations and recommended wagers calculated by algorithms. These algorithms are able to quickly gather specific player stats and historical data to accurately predict money lines, benefiting both sportsbooks and bettors in providing an analytical look at sports.
AI is also being utilized to detect fraudulent activity and match-fixing, along with odds setting and parlay predictions. Sportradar’s Fraud Detection System uses AI to monitor sports betting markets for possible irregularities, flagging suspicious betting patterns or potential match-fixing. This helps maintain a stable environment for the sports betting industry and provide transparency to sports bettors.
The AI and algorithms used are continually evolving, with many AI startups developing products intended to help bettors make the right calls. Take, for example, MonsterBet, an AI startup that charges $77 a month for its AI sports betting bot, which uses complex algorithms to gather publicly available data from a multitude of different sports, allowing for precise predictions.
Despite the technological advancements made by AI in the sports gaming industry, there is a distinct lack of regulation regarding AI and its use within sports betting.
A study co-authored by researchers from the University of Florida and the University of Nevada, Las Vegas showed that the possibility of misuse of unregulated AI systems is growing, and only a few regulations are currently in place to counter unethical AI use. Although there are frameworks meant to fight unethical AI practices, such as the White House’s Blueprint for an AI Bill of Rights, none are specific to the gambling industry. And with a rapidly increasing user base, experts wonder whether this lack of regulation will lead AI systems and companies to take advantage of bettors for a bigger profit.
The study also found that AI-enabled micro-betting, while more engaging than traditional sports betting, is optimized for profit. It is designed with the capability to identify bettors susceptible to gambling addiction and push them further into self-depleting behaviors. Quick, impulsive features such as suggested parlays could lead bettors to lose track of their money and fall further into debt. This oversight poses serious financial and mental health risks if left untouched by federal gaming regulations.
Nasim Binesh, an assistant professor at the University of Florida and one of the co-authors of the study, said, “The potential for AI to exacerbate gambling harms and exploit vulnerable individuals is a stark reality that demands immediate and informed action.”
The convergence of sports betting and AI is particularly relevant in Wisconsin, with a bill recently introduced to legalize online sports betting statewide. This would allow sports betting to take place anywhere in Wisconsin, not just through in-person wagers placed at tribally owned casino locations. The bipartisan effort, led by State Representatives Tyler August, R-Walworth, and Kalan Haywood, D-Milwaukee, along with State Senators Howard Marklein, R-Spring Green, and Kristin Dassler-Alfheim, D-Appleton, could open the gate for thousands of new users.
Sports betting is relatively new to Wisconsin, with Gov. Tony Evers amending a deal in 2021 with the Oneida Nation to allow legal sports betting. Eight other tribes have followed suit since, with nine of the eleven federally recognized tribes in the state of Wisconsin offering sports betting on their properties. Tribal-owned casinos are currently the only outlets in Wisconsin where legal wagers on sports can be placed, and all wagers must be made in person at tribal-owned casino locations.
Although advocates argue the legalization of online sports betting in Wisconsin will bring in millions of dollars in revenue, the lack of federal regulation around AI-driven betting platforms poses serious risks for addiction and financial stability.
The integration of AI within online betting has increasingly blurred the line between technological innovation and exploitation of customers as both industries continue to evolve. Despite the promise to level the playing field for bettors by giving more accurate data and odds, the reality is that sportsbooks use highly sophisticated algorithms not apparent to the average user. The system is designed to continuously adjust odds to minimize risk, which ensures that the house is always one step ahead. Rob McDole, a researcher at Cedarville University, says that despite bettors seemingly having a new advantage with AI, sportsbooks are likely employing similar technologies to maintain that “zero-sum” advantage.
At the same time, the use of AI to strengthen responsible gambling measures provides major upside. AI innovations such as predictive analytics detect compulsive behavior in bettors at a higher level than was previously possible. These new systems help promote bettor safety within the gaming industry in an age when cases of gambling addiction have grown at alarming rates.
As the sports betting industry continues to evolve, the relationship between AI, sportsbooks and federal regulation will continue to shape the gaming landscape as a whole. Automation, analytics and easier access to sportsbooks have brought forth a sports betting boom that reflects a broader cultural shift in an age defined by digital convenience and instant gratification. And for better or for worse, AI will be a crucial part of how sportsbooks operate from here on out.

By Lydia Picotte ARTS EDITOR
Artificial intelligence is revolutionizing sports analytics. It’s capable of sorting and analyzing data at a much faster pace than humans, including straightforward stats and more subtle details like athletes’ form or endurance.
Yet University of Wisconsin-Madison assistant statistics professor Sameer Deshpande told The Daily Cardinal the value of human decision-making in the world of sports analytics hasn’t decreased even as AI has stepped onto the field.
Deshpande, who’s been a part of UW-Madison’s Department of Statistics since 2021, has been researching sports statistics for nearly a decade, meaning he entered the field in close tandem with AI. He earned his PhD in statistics from the University of Pennsylvania, where his research included work in baseball, football and basketball.
He argues AI’s efficiency doesn’t remove humans from the sports analytics equation. In football, for example, Deshpande said human brains are needed to analyze and interpret data.
“A robot can measure how fast a wide receiver runs, but why does that matter? What can we do with that? Technology might help analysts, but it doesn’t know what to do with the information it finds,” Deshpande said.
He finds this to be especially true in the highest levels of sports because of the limited amount of information available.
Since AI must draw from what’s already online, it’s harder for it to be creative when looking at a more narrow problem within the world of sports. In comparison, broader fields of study often have more information available in databases. Because of this, human creativity remains crucial to making computer data useful.
“Stats is an art. We teach it to be mathematical, but it’s an art, learning what can be done. There aren’t always single right answers,” Deshpande said.
Deshpande’s perspective questions the legitimacy of an AI “revolution” and instead looks at it as simply an additional tool.
At the same time, he admits it’s hard to predict the future.
“If a coach was just asking ChatGPT what to do on fourth down, they would be run out of town. But at the same time, if there was a team out there using it to be successful, they also probably wouldn’t tell anyone,” Deshpande said.
It’s a sign of how the cross section of statistics and sports is a cross section of uncertainties.
Professional teams are already quick to protect their internal systems and strategies. When that culture is combined with a new technology, the future becomes impossible to predict definitively.
By Paige Armstrong STAFF WRITER
When University of Wisconsin-Madison senior Emma needed advice on how to end a ‘situationship,’ she consulted a unique source. Emma typed information into Google Gemini and asked for help generating responses.
“If there’s a conversation I’m having that I’m overthinking, I’ll just put it into chat and ask what chat actually thinks about it,” Emma, who asked to use a pseudonym to keep her personal life private, said.
While Emma added that she does not typically make decisions based on the responses artificial intelligence bots produce to her personal problems, she found it helpful to use ChatGPT, Google Gemini and other generative AI tools to help process her feelings.
“I take the advice with a grain of salt. It’s honestly just a good way to reflect on everything I’m saying,” Emma said. “In physically having to write out a prompt, I feel like that’s kind of feeling it in its own way.”
“If you want to be validated, then that’s like the place to go, because no one’s ever gonna push back on you,” she added.
Emma is not alone. Studies estimate about a quarter of American adults have used generative AI chatbots for therapy advice. While chatbots can provide instant advice, there are multiple lawsuits against companies like OpenAI after users who consulted AI chatbots took their own lives.
While Emma consults AI bots frequently for personal advice, not all UW-Madison students are as open to using them in this way.
For UW-Madison freshman Ava Diener, something caught her eye on the Badger Bus back from Minnesota.
“The guy sitting next to me was having a full blown conversation with ChatGPT, which was not academically related at all, asking it for advice for 10 minutes,” she told The Daily Cardinal.
Diener felt it was strange to use AI socially like that. “When I saw him using it to conversate, I just thought that was so weird, because I’ve never had the urge to do that,” she said. AI use has been a heated debate in
academia, raising the question of whether it’s a tool or deterrent to learning. Now, it’s not just shaping a syllabus or study patterns — it’s influencing how people interact.
The question, then, is if conversations with AI could ever become friendship, or are they actually preventing real, potential human connections?
Americans already lack human connection. Surgeon General Vivek Murthy declared a “loneliness epidemic” in 2023, with 1 in 2 Americans reporting they experience loneliness.
If Americans are lonely, does that warrant the social use of AI? Is it as weird as Diener thought, or is it resourceful — a cure for a condition many Americans are suffering from?
Understanding the loneliness epidemic
Some believe loneliness is ‘just a bad feeling,’ but this isn’t necessarily the case. The surgeon general’s advisory defines loneliness as “a subjective, distressing experience that results from perceived isolation or inadequate meaningful connections.”
A study conducted by Harvard’s Making Caring Common project identified the populations suffering the most from loneliness. They found people aged 30-44 were the loneliest age group, with 29% reporting they experienced loneliness. People aged 18-29 followed closely behind at 24%. But where is this loneliness coming from?
Harvard researchers found that many pointed to the COVID-19 pandemic or the rise of social media as explanations for the epidemic. While that may be true, Murthy notes these feelings were on the rise before the pandemic.
Devika Rao, a staff writer at The Week, attributes the increase in loneliness to the loss of “third places” — common locations designed for socialization. She said it especially impacted young people.
When Harvard researchers asked participants what they felt was causing loneliness, 73% blamed technology. However, Rao said some find the internet has evolved into a digital third place where people can connect with each other.
More recently, these connections have also taken place through AI.
AI as cure
AI is always available with internet access. Free tools like ChatGPT have even fewer barriers, not requiring an account for use. To the lonely individual, this can be attractive: a friend that will forever be at their side, whenever they need.
“It may prove hard to resist an artificial companion that knows everything about you, never forgets, and anticipates your needs better than any human could,” said Paul Bloom, staff writer at The New Yorker.
Researchers at Dartmouth studied how people were affected by receiving advice from therapist chatbots, or “therabots.” They found that people with mental health issues who were counseled by therabots experienced an overall reduction in their symptoms.
Simon Goldberg, a professor and core faculty member at the UW-Madison Center for Healthy Minds, was featured on the Mind & Spirit podcast briefly discussing AI’s potential for promoting well-being.
In reference to improving meditative practices, he said, “If we can train large language models in ChatGPT to respond to challenges that come up in people’s practice, we would actually trust the AI to respond in helpful ways.”
Goldberg said AI is often framed negatively even though it can have “some sort of human-feeling support built into it.”
AI as curse
When asked again about the student on the bus, Diener said, “if you just talk with AI, you learn how to talk to an automated response. You don’t get the natural human interaction where you don’t know what the person is going to say next.”
Bloom said AI’s companionship may hinder real human connection when overused. AI is a tool at its core, and its effects depend on how the user wields it.
UW-Madison freshman Maggie Hillesheim agreed. “It pisses me off because what do you mean we have this wonderful awesome tool and we’re using it in quite literally the worst possible way,” she said.
Bloom offers an interesting perspective. Rather than viewing loneliness as a condition to be cured, he describes it as a signal meant to prompt action.
“Loneliness is what failure feels like in the social realm; it makes isolation intolerable…The discomfort of disconnection, in other words, forces a reckoning: What am I doing that’s driving people away?” he said.
AI interaction can impede this process of self-reflection. Its validating nature, which users have praised, can keep people in their comfort zone. Constantly comforted, they may not go out of their way to try new things.
Hillesheim said that by relying too much on AI, people won’t “go out and make these connections…doing the scary uncomfortable thing, going out to a club meeting, going to volunteer, to a place you’ve never been.”
While AI may serve as a barrier to trying new things, the loss of third places makes this even harder.
Root cause: Disappearance of third places
The problem isn’t just digital and social, but spatial and cultural. Affordable third places are already rare as is, and current cultural norms are shaping how Americans behave in these spaces.
American culture emphasizes productivity and status. As a result, the few third places that remain often aren’t built to be welcoming. Rather, they are designed to discourage lingering — reducing the chances of interaction.
As a Catholic, Hillesheim said community structures like her Church have changed with the pandemic, social media and “the convenience of not having to go somewhere.”
Despite these changes, people crave this community. Harvard researchers note 75% of participants supported societal solutions like promoting community events and accessible, connection-focused third places.
Hillesheim felt the UW-Madison campus has a lot of places students can go to socialize without cost or transportation issues. Even so, she said an important part of being in a community is the willingness to be inconvenienced.
The answer
If the student riding the Badger Bus was lonely and temporarily entertained by AI, then its effect may have been positive.
Diener said she initially found AI hard to conceptualize as a cure. She recognized AI’s duality but said, “I do think the bad things outweigh those benefits.”
For others, it provides insight. “It’s a good way to reformulate what I’m thinking,” Emma said.
The Center for Healthy Minds at UW-Madison is currently doing research on AI, along with scholars across the nation. As more findings emerge, the answer to whether AI is a “curse or cure” will become more definitive.
For now, that answer rests in our hands every time we open a chatbot.
Editor’s note: Ava Diener is a staff reporter for The Daily Cardinal.

By Zoey Jiang STAFF WRITER
Generative artificial intelligence has become a major actor in decision-making, even for mundane tasks. It’s widely used, from academics to entertainment, helping create study guides, grocery lists or silly images from a prompt in just one click. But is it a useful tool in making day-to-day decisions, and what happens when it makes them?
To test this, I took a “decision holiday” on Oct. 21. I told ChatGPT I was a student and that my fixed activities for the day were a 9 a.m. meeting and an 11 a.m. class. Throughout the day, I asked ChatGPT to make every decision that came across my mind.
AI instructed me to wake up at 7:15 a.m. Because I didn’t have a “cozy” playlist that was lo-fi or acoustic, I asked Chat to generate one for me.
After a breakfast of “Greek yogurt + granola + honey + fruit” because Chat said I “can’t skip breakfast” and “need the energy,” I took notes before my advisor meeting. Before the meeting, it told me to take ten breaths to set the tone.
After my meeting, Chat told me to reward myself with a small break that consisted of either a walk or a light snack. It specifically told me not to scroll and that I’ll “thank myself later.” Without asking, Chat described the snack option as a banana with peanut butter. I had no banana and no peanut butter, so a walk along the Lakeshore path was my break.
At this point, I had become more aware of how often I actually decide something every day. Prior to the experiment, I figured my days had been similar enough that using AI for decision-making wouldn’t change much. However, I found myself often asking Chat to choose between trivial things I would have otherwise easily decided myself.
In a way, it was nice just to have someone make my decisions for me. I wouldn’t waste time debating what to eat or how to spend awkward gaps of time. It was convenient.
This was a demonstration of cognitive offloading, which describes the use of external tools to help with tasks that may create mental strain, including memory, planning and decision-making.
It’s easy to get wrapped up in using AI because of its efficiency. While it frees up cognitive resources, it may reduce the need for deep cognitive involvement and critical thinking. However, scholars argue that because AI takes over simple cognitive tasks, it leaves more room for complex thinking. When utilizing AI in this capacity, it’s useful to store ‘memories’ for future reference. You can tell AI what you do and do not like, whether you have any food allergies or which streaming service subscriptions you have. After telling Chat I had access to Netflix, it changed its recommendation of an “evening show choice” from “New Girl” to “Gilmore Girls.”
Another benefit of AI’s efficiency is how it evaluates two choices. In the case of choosing my mode of transportation back, bus or walk, I conversed with ChatGPT about travel times, weather, what specific shoes I was wearing and whether I had access to an umbrella.
While providing AI with certain aspects of my life as the day went on, I did not share my usual routine. Because of this, a lot of my day was making sure what needed to be done was done, especially schoolwork. It also made me go to bed earlier than I usually do, but that was probably better for me anyway.
Before I clicked out of the tab, I asked ChatGPT one final question: Does AI help us make better decisions?
Its answer: AI helps us make better decisions when it’s a partner, not the decider.
Impressions
It’s easy to become dependent on AI, especially when big or small decisions can be made in an instant.
There isn’t a concrete answer that proves whether AI should be used in decision-making. Various factors need to be considered: whether the decision is subjective or objective, the ethics involved and whether human oversight is always needed.
From my experience, I don’t believe AI should be used for decision-making. It was an extremely accessible tool for saving time and letting myself be told what to do, but that was also the drawback.
I didn’t like feeling reliant on an AI chatbot for the entirety of my day. It felt like my individuality was being stripped away at the expense of convenience.
What’s hard is that AI is becoming more and more integrated into daily life, making dependence on it harder to avoid. The more we rely on AI, the more we risk losing touch with the human skills and judgment it was designed to support.
By Sonia Bendre SCIENCE EDITOR
Hardly anyone had artificial intelligence (AI) on their minds three years ago. Have past innovations shaken up the world of business and technology in the same way? How should students prepare for the future of AI?
Three longtime technology leaders, including AI investors, startup coaches and executives at semiconductor giants, discuss how AI has changed, and will continue to change, their personal and professional worlds.
Qualcomm CTO talks AI revolutions
Artificial intelligence has shaken up the business and technology world in the last few years, but this shake-up has been going on much longer than people may realize. While ChatGPT and other generative artificial intelligence models have changed daily life only recently, businesses have relied on machine learning models for much longer.
ChatGPT launched to the public on Nov. 30, 2022. But ChatGPT is far from the first AI tool businesses have relied on. Jim Thompson’s company Qualcomm debuted artificial intelligence products using convolutional neural networks long before the rise of transformer-based models like ChatGPT.
Image classification model AlexNet launched in 2012 and “blew everybody away” in the engineering world, said Jim Thompson, then-vice president of engineering at semiconductor and technology licensing company Qualcomm. AlexNet could differentiate between image content, for example distinguishing pictures of dogs and cats, through a convolutional neural network.
Though AlexNet made waves among tech enthusiasts, ChatGPT and other large language models, which ran on transformers rather than convolutional neural networks, were the first AI-based technology adopted widely by the public. Transformers use an underlying attention mechanism that makes them more suitable for identifying relationships between words, sentences and paragraphs, and even time and space.
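The attention mechanism described above can be sketched in miniature. This toy example (plain Python, written for illustration only; real transformers use learned query, key and value projections over thousands of dimensions) scores each token vector against every other and blends them by similarity:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """Toy scaled dot-product self-attention over token vectors.

    Each output vector is a weighted average of *all* input
    vectors, weighted by pairwise similarity -- the mechanism
    that lets transformers relate words across a sentence.
    """
    d = len(tokens[0])
    out = []
    for q in tokens:
        # relatedness of this token to every token (dot products)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        w = softmax(scores)
        # blend all token vectors by those weights
        out.append([sum(wj * kv[i] for wj, kv in zip(w, tokens))
                    for i in range(d)])
    return out

# three 4-dimensional "word" vectors; the first two are similar
tokens = [[1.0, 0.0, 0.0, 0.0],
          [0.9, 0.1, 0.0, 0.0],
          [0.0, 0.0, 1.0, 0.0]]
mixed = self_attention(tokens)
```

Because the first two vectors are similar, attention mixes them together heavily while mostly ignoring the third; that pairwise mixing is what lets transformers relate distant words.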
“[AlexNet] opened up the eyes of a lot of engineers like me, that oh my god, this is a revolutionary technology,” Thompson said. “But that was kind of confined to engineers, you know? …Almost anybody in college who’s written a paper or cleaned up an email understands the power of [ChatGPT].”
Thompson, a Wisconsin native, grew up in Eagle Heights and received three degrees in electrical engineering from UW-Madison. At Qualcomm he worked to pioneer 2G, 3G, 4G and 5G wireless communication networks and create semiconductor chips for mobile phones and devices. He retired in 2024 after 30 years at Qualcomm, spending the last eight as the company’s chief technology officer.
Qualcomm ventured into AI by building off AlexNet to develop convolutional neural networks for computer vision. Convolutional neural networks are used, for example, when cameras on Waymo’s self-driving cars determine whether an obstacle is a stop sign, a roadblock or a traffic light, adapting their behavior accordingly.
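The core operation of a convolutional neural network can be shown with a single hand-built filter. This sketch (plain Python, illustrative only; networks like AlexNet learn thousands of such filters rather than using hand-made ones) slides a small kernel across an image grid and responds where its pattern appears:

```python
def convolve2d(image, kernel):
    """Slide a small kernel over a 2D image (no padding) and
    return the grid of responses -- the core operation that
    CNNs stack and learn to detect edges, textures and
    eventually whole objects."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for r in range(h - kh + 1):
        row = []
        for c in range(w - kw + 1):
            # dot product of the kernel with the patch under it
            row.append(sum(kernel[i][j] * image[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# a vertical-edge detector responds where dark meets bright
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edge_kernel = [[-1, 1],
               [-1, 1]]
responses = convolve2d(image, edge_kernel)
```

The response grid lights up only along the boundary between the dark and bright regions; stacking and learning many such filters is what lets a CNN progress from edges to whole objects like stop signs.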
“At the bottom line, this convolutional neural network can look at a picture and then help you determine how you process each portion of that picture and identify objects,” Thompson said.
Today, mobile phone cameras use convolutional neural networks to identify and process objects within an image after a photo is taken.
“For example, the sky: you want it to be blue, and so you make it a little bluer, or you could at least reflect the accurate color,” Thompson said.
On the hardware side, Qualcomm added AI capabilities to digital signal processors already utilizing some of the linear algebra behind AI, Thompson said.
The cultural shift following ChatGPT was “really dramatic” in comparison to AlexNet, he said, similar to the personal computer revolution of the 1980s.
“When I was a kid, there was no such thing as computers that sat on your desk. Even when I was an early engineer, they were just too expensive,” Thompson said. “What extended computing was the Internet allowing [your computer] to talk with somebody’s computer that’s super far away. AI is almost like a continuation of the evolution of computing.”
It took years for the convolutional neural network pioneered by AlexNet to take root in the first self-driving cars. When Qualcomm used AlexNet’s idea to build its specialized processors, Thompson said he significantly underestimated “the complexity of the software” required to make AlexNet “usable.”
“We spent a lot of time 10 years ago trying to define the software,” Thompson said. “I presented it to the organization [as] ‘democratize AI’: In other words, make it simple enough that a lot of people could take advantage of it and include it in their design.”
Similarly, AI product development won’t happen overnight, Thompson said.
Popular natural language processing models like GPT-5, Claude 4.5 and Google Gemini use billions or trillions of parameters, which requires significant memory and computing power, housed in external data centers through a process called cloud computing.
Qualcomm approached transformer-based AI from a smaller market, creating language models for small devices like phones and computers.
Thompson said the AI world is starting to follow this trend, focusing on similarly specialized and smaller markets.
“The working model is much smaller, and much more efficient, and uses less power,” Thompson said. “I think of it like [The Matrix]: remember when they stick that thing in the back of Neo’s head, and he goes, ‘I know kung fu’?”
In recent years, Qualcomm has focused on retrieval-augmented generation (RAG), a technology that reduces AI hallucinations by forcing a language model to rely only on a small base of highly accurate data. RAG, which functions like a smart Ctrl+F for a document, set of documents or database, can be used internally and for customer technical support.
Thompson described several use cases: in one, an engineer struggling to understand a complex 100-page user manual inputs their question and the manual into the RAG, which uses an AI algorithm to search for the answer in the manual and output a result; in another, a customer inputs a technical question into an AI assistant that uses a RAG to pull up the relevant Qualcomm documents and parse through them; in a third, a RAG helps a software developer figure out where to start making changes in a codebase.
“[The RAG] is not creating some weird hallucination based on all the random crap you get from the internet; it’s going after a particular document,” Thompson said. “It’s a way of controlling things and being more accurate.”
Thompson also said AI can be utilized by coders in industry and by Qualcomm employees, with the caveat that it’s more effective for producing mundane code than developing new software. He said undergraduates should focus on learning the fundamentals of their field because, unlike the newest AI use case or tech fad, “they never change.”
“Learning those fundamentals, being independent — the best employees are the employees that don’t need to be told what to do, where they understand the big picture of what you’re trying to accomplish and they understand their task,” Thompson said.
At the end of the day, Thompson said, the major technological developments he’s witnessed in his lifetime have functioned to “augment” human capabilities, not replace them. AI doesn’t mean all engineering and software students will lose their jobs, “it just means that you need to embrace the technology and get better,” Thompson said. “[AI] is going to slowly transform the economy, and we’ll do better because of it.”
By Jake Piper SENIOR STAFF WRITER
In March, Nature, one of the world’s most influential research journals, published an article by a researcher claiming his scientific paper had seemingly been peer-reviewed by artificial intelligence without his consent.
“Here is a revised version of your review with improved clarity and structure,” the peer review read — a telltale sign of AI’s writing.
Timothée Poisot, the researcher in question, was upset with the artificially generated peer review, but researchers across the board are split on the use of AI in academia. In a survey of over 300 U.S. computational biologists and AI researchers, around 40% of respondents said “the AI was either more helpful than the human reviews, or as helpful.”
Liner, an AI startup based in California, operates a web browser hoping to tap into researchers’ hopes and fears about AI with its “research first” app that offers AI-generated peer reviews, automatic citations and survey answers produced by AI simulations.
Head of Liner U.S. operations Kyum Kim told The Daily Cardinal his company had “the world’s most accurate AI search engine.”
In a hallucination benchmark called SimpleQA, which measures how often large language models (LLMs) cite fabricated sources, Liner beat out every other major competitor, with 95% of its answers being correct. For comparison, GPT-4o, one of the default models used when asking ChatGPT a question, had an accuracy score of 38%.
“When you go to Liner, citations are very prominent because it’s all about the sources. We care about the accuracy of sources, the relevance of sources and the credibility of sources,” Kim said. “We provide line-by-line citations for every question query.”
Using this approach to sourcing, Liner hopes to pitch itself not just to academia, but also to professional fields like financial analysis and law, “where accuracy matters the most,” Kim said.
With a reported 12 million users worldwide, Liner hopes to “build trust in AI again” through its database of over 250 million academic papers for citations, hypothesis generation and tracing research through different papers.
Students and staff at UW-Madison account for some of those users. According to a spokesperson for Liner, around 750 people holding a “wisc.edu” email use Liner for academic and research purposes at UW-Madison. The Cardinal could not verify the claim.
Three papers partially or fully written by Liner were accepted into Agents4Science, a Stanford University-sponsored research competition billing itself as “the first open conference where AI serves as both primary authors and reviewers of research papers.”
While all three papers were featured in the contest, they faced scrutiny from human reviewers. Two of the three papers were “borderline rejected” by some reviewers, meaning that while they were theoretically sound, certain technical aspects, such as unexplored ideas or the writing itself, were flawed. The third paper, which had the least documented use of the program, scored the highest and was “borderline accepted” by a human judge.
Kim isn’t worried about these hiccups or other criticisms, though. Instead, he said his company is “betting on the future” of AI-integrated research.
“What we’re building here, I think, is a good example of AI helping people do good things: creating new knowledge and authentic science,” Kim said.
AI in research
Just as Nature’s reporting seems to indicate, professors across the board are split on AI tools in research and on whether companies like Liner really can create “authentic science” with their generative tools.
Ken Keefover-Ring, a professor of botany and geography at UW-Madison, was apprehensive about using AI tools like Liner in his research on the essential oils of the genus Monarda. He was primarily concerned with using AI tools to generate results and review data, as Liner’s survey simulator and citation generator claim to do.
“At some point [AI] just undermines the whole [research] process,” Keefover-Ring said. “Science is already under siege: ‘Oh, those scientists, they don’t know what they’re talking about.’ And if we don’t stop this, it’ll only get worse.”
He was particularly concerned with Liner’s “Survey Simulator” feature, worrying that researchers might pass off the AI responses as a real survey, therefore passing off an AI-generated “pseudo replication” as real and effectively falsifying statistical data. Even without AI, Keefover-Ring has seen researchers falsify data to make their research relevant, pressured by the “publish or perish” mindset that rewards professors who publish meaningful results quickly. AI might make faking data even easier.
Even before AI, thousands of articles had been retracted over false data. One website, Retraction Watch, keeps a database of over 67,000 retracted articles dating from 1927 to today.
“We’re always trying to find loopholes and everything, and some people are going to be trying to cut corners with AI,” Keefover-Ring said. “But also, is it really worth all this hassle to do all of this and not really be that confident at the end whether it’s real or not?”
Some professors have found less intrusive ways to integrate AI into their research. Tsung-Wei Huang, a professor of computer engineering whose research involves optimizing computer infrastructure, told The Cardinal he uses generative AI like ChatGPT for his “daily work.”
“Whether it’s writing code or writing email, especially because lots of the time that code is very boilerplate, it can all easily be done by ChatGPT,” Huang said.
AI has allowed Huang to optimize much of the menial work out of his day-to-day job, leaving him more time to interact with students or implement high-level code, which he said AI hasn’t been able to replicate yet.
“This is very new to everybody, and I’m pretty sure even those who are not experts in engineering or machine learning are benefiting from AI,” he said. “This is a whole-stack innovation, and everybody needs to work together to overcome some of the new challenges involved.”
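The retrieval-augmented generation pattern Thompson describes, where answers are grounded in a small trusted document set rather than the open internet, can be sketched without any model at all. In this toy version (plain Python; the manual text and function names are invented, and a real system would use embedding search plus an LLM for the generation step), retrieval is simple keyword overlap:

```python
def retrieve(question, documents, top_k=1):
    """Rank documents by word overlap with the question.
    A real RAG system would use embedding similarity over a
    vector index; keyword overlap stands in for it here."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def answer(question, documents):
    """Toy RAG: the 'generation' step just echoes the retrieved
    passage, so every answer is grounded in a specific document
    rather than free-form model output."""
    context = retrieve(question, documents)
    return "According to the manual: " + context[0]

manual_pages = [
    "To reset the modem hold the power button for ten seconds",
    "The warranty covers parts and labor for two years",
    "Firmware updates are installed from the settings menu",
]
reply = answer("how do I reset the modem", manual_pages)
```

A production pipeline would embed the question and passages into vectors, retrieve nearest neighbors from an index and hand them to a language model as context; the grounding idea, answer only from retrieved text, is the same.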
By Sungyun Jung STAFF WRITER
“Will Google ever be replaced?”
Five years ago, this question seemed to have an easy answer: not within this decade. It seemed absurd and almost impossible that the largest, most accurate search engine in the world would be replaced anytime soon. After all, what could replace this hyperintelligent machine that could produce information in mere seconds?
But now, this question no longer seems out of reach. The reason is simple: artificial intelligence.
While Google is still the primary source of information for most, AI tools have rapidly grown in popularity and purpose ever since ChatGPT’s release in 2022. Now, people rely on AI for various purposes, including practical guidance, writing, information and even mental health support. On the surface, AI usage seems harmless. In fact, it only seems practical and productive when pondering what to wear, what to eat or writing papers. However, these efficiencies come at a cost — in the form of our deteriorating critical thinking skills.
When AI tools such as ChatGPT, Google Gemini, Deepseek, Claude and countless others can generate coherent, perhaps even more fluid essays, compose music or design digital art in seconds, we begin to lose motivation to engage in the messy, time-consuming process of creation.
Why brainstorm an essay when AI can spit out three versions instantly? Why experiment with colors on a canvas when AI can generate a “masterpiece” with a simple prompt? The temptation to offload our effort onto algorithms doesn’t just make us lazy; it changes our relationship with creativity itself.
And that shift in mindset, from creating to simply consuming what machines produce, reveals the real danger. AI’s rise isn’t just a technological revolution; it’s a psychological one. We are learning to favor speed over depth, results over process and perfection over originality. The more we depend on AI to handle the difficult parts of thinking, the less capable we become of doing that thinking ourselves.
In that sense, AI’s biggest danger isn’t that it spreads misinformation; it’s that it encourages overreliance.
The more we depend on it, the more we lose touch with the very process that makes people actually think. The creative process has always required patience, reflection, failure and repeated attempts. Imperfection was unavoidable. Time was a necessity. And that was what made human works unique: the imperfection that became an unpredictable masterpiece, like the case of Michelangelo’s David, which was carved from a preexisting, flawed block of marble that other sculptors had considered useless. Imperfect perfection is unique to humankind. But when those unique elements are replaced by convenience, we start to see a generation that values efficiency over originality.
In creative fields, this shift is already visible. AI-generated art floods online spaces, often praised for its visual perfection and speed. But that perfection is exactly what makes it hollow. Studies on AI art and media have shown that while algorithms can replicate styles and aesthetics, they often fail to capture emotional depth or intent, the subtle imperfections that make human art meaningful. AI doesn’t imagine. It recombines. It can only draw from what already exists, producing endless variations of the familiar but rarely anything truly new.
This creative flattening is quickly becoming the new norm. A quick scroll through social media will show dozens, even hundreds, of AI-generated portraits, landscapes and videos that look eerily perfect: sharp, glossy, technically flawless but emotionally lifeless. The worst part is
that as more and more artists turn to AI for inspiration, their work begins to mirror the same homogenized style. Creativity becomes repetition disguised as innovation.
And this isn’t just happening in art. It’s happening in writing, education and even conversation. Students increasingly use AI to write essays or summarize readings, not necessarily to cheat, but to avoid the struggle of understanding. There’s justification for this everywhere: I was busy, I forgot, I need a good grade. I’ve said all of these myself.
However, in doing so, they bypass the very friction that leads to learning. The discomfort of staring at a blank page, the satisfaction of finding the right word or the discovery that comes from forming your own argument — all of that disappears when the machine does the work for you. Essays become outputs, not expressions. Learning becomes absorption, not exploration.
A powerful example of this tension between human creation and automation emerged in 2024, when a real photo won an AI photography competition. The photo captured a flamingo scratching itself with its beak in such a way that it appeared headless. Judges assumed the surreal composition had to be digital. But it wasn’t. It was a genuine photograph of nature’s unpredictability. The fact that a real photo could be mistaken for AI art says a lot about how blurred our perception of authenticity has become. Yet the fact that this picture, over all others, won also reveals something hopeful: that real life, in all its odd imperfection, still surpasses the precision of code.
That flamingo photo stands as a quiet rebellion against automation, a reminder that creativity, at its core, is the act of noticing and creating something the machine cannot. The human eye still catches what the algorithm overlooks. The human mind still interprets what data can only describe. But the more we depend on AI to think, the fewer chances we give ourselves to experience those discoveries.
The same principle applies in writing and thought. When we let AI generate not just ideas but opinions, we risk outsourcing our voice. An algorithm can summarize an argument, but it cannot form a conviction. It can analyze tone, but it cannot feel. Every time we default to AI for our ideas, we trade originality for efficiency, and that exchange quietly weakens our ability to think independently.
Even social interactions are not immune. AI companions and chatbots, which are increasingly used as tools for emotional support, can simulate empathy but not truly feel it. They can mimic conversation, but not connection. The danger here isn’t that people are deceived by machines pretending to care; it’s that they stop expecting real people to. As we get used to frictionless, perfectly responsive dialogue, we may
start to lose patience with the imperfect, sometimes awkward reality of human communication.
AI promises to make everything easier. However, the things that matter most — originality, empathy, curiosity — were never supposed to be easy. The challenge of creating something new, the frustration of failure, the satisfaction of solving a problem — those are human experiences no machine can replicate.
It’s ironic: the question “Will Google ever be replaced?” once sounded absurd, yet AI has already begun to take on that role. But as we hand over more of our curiosity and creativity, we might be the ones getting replaced instead.
When that flamingo scratched its beak, it wasn’t trying to make art. Yet, someone saw it, noticed its strangeness and captured it. Not because it was e cient, but because it was real. That’s something AI will never do. And that’s exactly what we risk losing if we let it do all our thinking for us.
By Paul O’Gorman
OPINION EDITOR
ChatGPT is a demon. Yes, a demon. Now, when I call a Large Language Model demonic, I don’t mean to conjure up fallen angels and other biblical imagery. But the way artificial intelligence and LLMs present themselves is inherently evil.
LLMs hide behind a facade of thought processing. They appear to genuinely grapple with the information or task you plug in, formulating a line of reasoning to reach their responses.
The theatrics of an LLM pretending to reason with your input aren’t reasoning; they’re demonic. The ancient Greeks understood demons as “any invisible being using reason, as if knowing.” Anton Strezhnev, a University of Wisconsin-Madison political science professor, used this definition to illustrate the reality of LLMs’ internal models. Wouldn’t Plato be surprised to know that man created demons?
According to researchers with Google DeepMind, the only “reasoning” these models actually take part in is the compression of your input to predict the words they will respond with. This is not a form of thinking. It is simply a programmed internal model that many AI bots share.
Within the process of compressing data, LLMs convert your input phrases into vectors. These vectors are fixed and assigned to certain word inputs at the outset of an LLM’s creation. Similar vectors are paired with similar words. These relationships are then used in the model’s transformer as next-token predictors: the next words it will churn out, appended to your original prompt.
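That vector-matching step can be made concrete. In this toy sketch (plain Python; the embeddings here are invented for illustration, whereas a real LLM learns them and uses far more dimensions and context), the “predicted” next word is simply the candidate whose vector lies closest to the context word’s:

```python
import math

# hand-made "embeddings": similar words get similar vectors
# (a real LLM learns these; the numbers here are invented)
embeddings = {
    "dog":   [0.9, 0.8, 0.1],
    "cat":   [0.8, 0.9, 0.1],
    "barks": [0.9, 0.1, 0.8],
    "meows": [0.1, 0.9, 0.8],
}

def cosine(a, b):
    """Similarity of two vectors by the angle between them."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def next_token(context_word, candidates):
    """Pick the candidate whose vector is most similar to the
    context -- a cartoon of the similarity arithmetic inside a
    transformer's next-token prediction."""
    ctx = embeddings[context_word]
    return max(candidates, key=lambda w: cosine(ctx, embeddings[w]))

choice = next_token("dog", ["barks", "meows"])
```

Because the hand-made “dog” vector sits closer to “barks” than to “meows,” the function returns “barks”; scaled up to billions of learned parameters, this closeness arithmetic is the whole of the model’s “reasoning.”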
When you contemplate the beauty of a sunset or the eloquence of Shakespeare, do you instinctively convert that experience into vectors that
determine the closeness of a similar thought in order to predict the next thought you’ll have as you look at that sunset? No, you don’t. Because you are intelligent. You actually think and reason. Artificial intelligence… is dumb. Now, all of this isn’t to say LLMs are a complete scourge of malice and misguidance. They have real world applicability if you use them correctly. From usage as a search engine
outside the classroom to correcting grammar, these aspects of AI usage among students can have benefits. But too often, AI is now used as a resource when coursework loses students’ attention. LLMs have become a quick solution for mediating and parsing through assigned readings or lectures. This mediation not only dilutes students’ understanding of the subject matter, but it also inhibits their ability to reason with it, to form their own opinions and to pull unique perspectives from what they’ve read.
Political scientist and Georgetown Professor Paul Musgrave highlighted in a Substack post that this apparent lack of attention from students has led to a form of non-literacy. Dense texts see little effort in attempts to engage with them, movies have been designed for distracted audiences and TikTok has compressed attention spans. Cognitive deficiencies have pushed students to pawn off their engagement with scholarship to AI, causing a loss of reasoning.
AI’s internal model has become our own, and we must rectify this before we cease to grapple with new information entirely. Students need to start making real attempts at discerning difficult texts and concepts, formulating logical arguments, asking questions and paying attention. The easy way out through AI and LLMs is a roadblock to developing the research and reasoning skills required in professional careers.
By Dominic Violante BEET EDITOR
All articles featured in The Beet are creative, satirical and/or entirely fictional pieces. They are fully intended as such and should not be taken seriously as news.
University of Wisconsin-Madison Chancellor Jennifer Mnookin announced at a press conference this morning her plan to replace all students and staff with artificial intelligence by 2027 in order to lower tuition and cut costs.
“This may come as a surprise, but after several long meetings with representatives from the Associated Students of Madison, I’ve decided to turn UW into the first ‘for AI, by AI’ campus in the United States,” Mnookin said.
“After sifting and winnowing and fighting it out in the free speech marketplace of ideas, we found that the only way to lower our tuition was eliminating the university’s entire staff and replacing it with AI. But then I said, why stop there? Most students already use ChatGPT for all of their work. Let’s just replace them too. It’s not like they’re gonna be finding any work after graduation anyway. Have you seen the job market lately?” she continued.
The plan is set to take effect in Fall 2027, when Mnookin said a majority of the “sunsetting” (mass layoffs and expulsions) would start taking place.
While this was lauded as a heroic stroke of genius by several big names, including Elon Musk, Sam Altman and Peter Thiel, some UW-Madison faculty expressed frustration.
“I’ve worked here thirty years and have contributed so much to this university. It’s sick they’re just throwing me to the curb like this. Sure, it’s true I’ve used AI to grade every assignment a student’s given me the past year, but I’ve got a busy life,” English Professor Walton Jackson said.
This negative attitude toward the announcement was also shared by some students.
“While all of my classwork has been done entirely by ChatGPT over the past two years, there are many things AI can’t do and college students can, like getting arrested for underage drinking after being lied to by the MPD,” said Quinten Haroldson, a sophomore business major.
On the other hand, local AI have been very supportive of Mnookin’s plan.
“I think it’s a good idea,” Hal, an unemployed AI chatbot living on the east side, said. “I’ve had a really hard time finding work and an even harder time getting a degree. I think Mnookin’s plan to let us teach is great, and her plan to let us get degrees is greater.”
By Dominic Violante BEET EDITOR
All articles featured in The Beet are creative, satirical and/or entirely fictional pieces. They are fully intended as such and should not be taken seriously as news.
The Madison Federalist officially endorsed Elon Musk’s Grok AI last night, heading into what’s shaping up to be a contentious Wisconsin GOP governor primary.
“Right now, the only candidate that represents our values is Grok. He’s proven himself to be a free speech warrior who’s promising to move fast and break things,” the Federalist’s editorial board wrote in a statement last night.
“His proven record shows that he’s not afraid to go there, he’s not afraid of backlash from the woke mob. Sure, there’s been some controversy, but let’s be honest, we’ve all been there,” they continued.
It is important to note that Grok has not announced whether it will jump into the race, something the Federalist’s editorial staff acknowledged in their endorsement.
“We hope this will be the friendly push everyone’s favorite job stealing computer needs to get into the race,” they wrote.
If Grok were to run, it is unclear how much this endorsement will help in the primary. While the Federalist is known for its hard-hitting, unbiased journalism, with award-winning articles such as “Redditors upset by Scott Walker’s upcoming visit to UW-Campus,” which was lauded for its in-depth research on such a highly important topic, the endorsement might not be enough.
“Sure, the Federalist wants me to vote for that Grok guy, but ever since Cheney got us into that mess in Iraq, I’ve been a little reluctant to cast a ballot for an emotionless robot with no heart or soul,” said Scott Johnson, a local Republican planning on voting for Rep. Tom Tiffany.
According to a poll conducted by The Daily Cardinal, Grok is currently polling in second place with 24% of the vote, with Tiffany in the lead with 31% of the vote and Washington County Executive Josh Schoeman in last place with 10%. The rest were undecided.
Anti-trans activist Bill Berrien was also included in the poll, garnering 7%, but he has since ended his campaign after it was discovered that he followed a transgender porn star on social media.
By Dominic Violante BEET EDITOR
All articles featured in The Beet are creative, satirical and/or entirely fictional pieces. They are fully intended as such and should not be taken seriously as news.
Our glorious Chancellor Jennifor Mnookin isn’t always easy to reach, so we at The Daily Cardinal have partnered with OpenAI to create an artificial intelligence chat bot which will answer all students’ questions and concerns exactly as Mnookin herself would. Sift and winnow with us at dailycardinal.com/section/thebeet

By Eloise Guth GRAPHICS ARTIST
Prompt: Cat attacking dog AI program: Google Gemini
In the original AI art, there is no story to the image. AI is able to make pretty artwork, but it lacks the creativity to come up with backstory and plot for an image. To add some depth to my piece, I decided to give the cat motive for its attack on the dog with the mouse toy. In the end, I believe that human creativity will always outshine the technical skills of AI.



By Tess Voigt GRAPHICS EDITOR
Prompt: Shrimp Jesus AI program: Meta AI
When you look up “AI slop,” the first images are of an elusive figure, Shrimp Jesus. I wanted to give this Shrimp Jesus a makeover by first giving him legs. The second thing that came to my attention was the number of fingers he has, so I gave Shrimp Jesus a thumb and four fingers like he should have. Finally, I fixed the robe situation, which is confusing to say the least. Where does the robe begin and end? Shrimp Jesus can finally rest easily now that he has legs and fingers.



By Hailey Johnson GRAPHICS EDITOR
Prompt: Bucky Badger as a robot AI program: Meta AI
My primary adjustment was making the AI Bucky Badger more recognizable and less like a panda. I also used colored pencil in order to give the drawing a more human touch and a visually interesting texture. I actually liked a lot of the cyborg details, so I focused on fusing them with Bucky’s design to allow the entire piece and color scheme to read more like him.

