Institutional Gaps in Regulating Artificial Intelligence in U.S. Higher Education

Victoria Woo

BIO:

Victoria (Hyewon) Woo is from Ulsan, South Korea, and is currently a senior at Whitworth University in Spokane, Washington. She is pursuing a major in Interdisciplinary Computer Science with a minor in Piano Pedagogy, with academic work that bridges music and computing. Her studies reflect an interdisciplinary approach that integrates human-centered artistic practice with computational methods.

MAJOR: Interdisciplinary Computer Science

MINOR: Piano Pedagogy

During her time at Whitworth, she has served as a teaching assistant for Computer Science I and II, as well as for music and seminar courses. She has also completed multiple independent and collaborative projects, including an iOS application (Hike), a web-based JavaScript application (ReadYou), and a summer research internship at Whitworth University. In her research role, she worked on CUDA-based code for a plasma thruster simulation project. Through these experiences, she has developed a strong interest in software development across both technical and interdisciplinary contexts.


Her current academic work centers on artistic AI and natural language processing (NLP). After graduation, she intends to pursue graduate study to further explore these areas and deepen her engagement with AI and human-centered computing.


Project Overview: This research project examines how institutions of higher education in the United States are responding to the rapid integration of artificial intelligence tools in academic environments, and where significant regulatory gaps remain. As generative AI systems become increasingly accessible to students and faculty, universities face growing challenges in maintaining academic integrity, supporting cognitive development, and addressing algorithmic bias.


The central focus of this study is the inconsistency in institutional policies governing AI use in academic work. While some universities have developed formal guidelines, many institutions' policies remain ambiguous or outdated, leaving students and instructors to navigate ethical and practical questions without consistent standards. This project analyzes these discrepancies and their implications for equitable educational practice.


The research also explores how reliance on AI tools may influence student learning, particularly in relation to critical thinking, independent reasoning, and long-term cognitive development. In addition, it addresses concerns regarding bias in AI systems, especially when these tools are used in academic contexts without sufficient oversight or awareness of their limitations.


By synthesizing literature on AI ethics, educational policy, and cognitive science, this project highlights the need for cohesive institutional frameworks. It argues that AI integration must be guided by clearer ethical standards to ensure fairness, academic rigor, and responsible use.