1. Navigating the New Cognitive Frontier
The arrival of generative artificial intelligence represents a fundamental inflection point for higher education, compelling us to redefine the very nature of cognitive labor in our academic mission. Its rapid integration into the tools we use for teaching, learning, and research presents both unprecedented opportunities and significant challenges. The purpose of this paper is to articulate a formal institutional position on the integration of AI—one that is both forward-thinking in its embrace of innovation and cautious in its commitment to our core academic values. Developing a clear framework is of paramount strategic importance, as it will guide our policies, shape our pedagogy, and inform student engagement in this new era.
The central tension of AI integration is best captured by the technology’s own response to inquiry. When asked whether it can make us dumber or smarter, ChatGPT answered, “It depends on how we engage with it: as a crutch or a tool for growth.” The expert consensus is clear that AI is “here to stay,” and its responsible adoption therefore requires a deliberate and principled approach from the academic community. We cannot simply react; we must lead with intention. This begins with confronting the core challenge of our time: understanding and navigating the dual potential of artificial intelligence either to augment and elevate human intellect or to foster dependency and hinder its development.
2. The Core Dichotomy: Augmentation vs. Atrophy
To navigate the integration of AI responsibly, it is strategically essential to define clearly the two primary pathways for its use within an academic context. This section explores the fundamental choice we face as educators and learners: whether to leverage AI to augment and expand human intellect, or to allow it to foster cognitive dependency and eventual atrophy. Our institutional approach must be built upon a balanced understanding of both possibilities.
2.1 The Risk of Cognitive Atrophy
A growing body of analysis cautions against the uncritical use of AI. A recent, though small and not yet peer-reviewed, study from the MIT Media Lab warns that “excessive reliance on AI-driven solutions” may contribute to “cognitive atrophy” and a decline in critical thinking abilities. This concern is echoed by experts who observe how cognitive tools can subtly reshape our thought processes. Philosopher Jeff Behrends notes that the frequent use of general large language models (LLMs) can negatively alter how users approach reasoning tasks, just as predictive text alters our word choices. This effect is a familiar one in the history of cognitive offloading. As literary scholar Karen Thornber observes, turn-by-turn navigation systems have left many of us knowing the streets of our own cities in far less detail than the cities we learned before the advent of GPS.
This risk is particularly acute when AI is used to circumvent, rather than support, the learning process. As education scholar Christopher Dede cautions, using AI to “write the first draft” directly undercuts the development of critical thinking and creativity. Learning specialist Dan Levy reinforces this point, stating that no meaningful learning can occur if a student simply asks an AI for an answer, as the foundational cognitive process of making meaning is bypassed entirely.
2.2 The Promise of Intellectual Augmentation
Conversely, when used with intention, AI holds immense promise as a tool for intellectual growth. Christopher Dede offers a powerful metaphor for this ideal relationship: AI as “the owl on your shoulder,” an assistant that helps us become wiser without doing our thinking for us. This model of augmentation, rather than replacement, is key.
In this framework, AI can become a powerful accelerator for deep learning. As Dan Levy suggests, AI can be a significant “plus” if it saves students time on “grunt work,” thereby freeing them to “devote that time to do more serious learning.” Similarly, Karen Thornber views AI as a potentially “helpful partner in analyzing and inferring,” capable of processing information at a scale that can enhance human discovery. When used deliberately to handle lower-order tasks, AI can create the cognitive space for students and researchers to focus on higher-order thinking and more complex intellectual challenges.
Successfully promoting augmentation while preventing atrophy depends on a clear understanding of what makes machine “intelligence” and human cognition fundamentally different.
3. The Nature of the Intelligences: Differentiating Human and Machine Cognition
A robust and sustainable institutional policy on AI must be grounded in a clear-eyed understanding of the fundamental distinctions between artificial and human cognition. This is not merely a philosophical exercise; it is a practical necessity. By delineating these differences, we can clarify which cognitive tasks are suitable for augmentation by AI and which must remain the distinct and protected domains of human thought and development.
3.1 The Capabilities and Constraints of Artificial Intelligence
Artificial intelligence, particularly in its current generative forms, excels at specific, bounded tasks. Its core strength, as Christopher Dede notes, is in absorbing vast amounts of data and making “calculative predictions.” It is a powerful engine for data processing and statistical analysis. However, its limitations are as significant as its capabilities.
As engineering expert Fawwaz Habbal emphasizes, AI machines are entirely dependent on data created by humans, and their processes are “only recursive.” They do not possess genuine experience, insight, or the capacity for ethical and moral reasoning. Because they draw from similar, vast human-created databases, their outputs often converge, lacking true originality. While AI can assemble information in novel ways, it cannot create genuinely innovative solutions grounded in a human context because it lacks the lived experience and understanding that such innovation requires.
3.2 The Enduring Power of Human Cognition
In contrast, human cognition operates on a different and more complex plane. Education researcher Tina Grotzer argues compellingly that the human mind is “better than Bayesian.” It is capable of making “quick, intuitive leaps” guided by somatic markers and can “detect critical distinctions” and exceptions that a purely statistical model might overlook or average away. This ability to reason analogically—not merely to offer analogies as AI can—and to generate novel conceptual frameworks remains a uniquely human capacity.
In an era of what Karen Thornber calls “cheap intelligence”—the ubiquitous availability of machine-generated text and code—the value of distinctly human skills has never been higher. Competencies such as “discernment, evaluation, judgment, thoughtful planning, and reflection” cannot be outsourced. These are the foundational skills for navigating a complex world, and their cultivation is central to our academic mission. As Fawwaz Habbal definitively states, the complex “human challenges… can be solved only by humans.”
4. Guiding Principles for Responsible AI Integration
Translating this understanding into action requires a clear, actionable framework. The following principles represent the core of this institution’s position on artificial intelligence. They are designed to promote the productive and ethical use of AI as a tool for intellectual augmentation while rigorously safeguarding our fundamental mission: the development of the human intellect.
- Prioritize the Learning Process Over the Final Product. We must relentlessly focus on the cognitive journey, not just the destination. As Dan Levy argues, an “output is just a vehicle through which that learning is going to happen.” True learning requires the brain to be “actively engaged in making meaning and sense.” When AI is used merely to generate a final product, this essential process is bypassed. Our policies and pedagogical practices will therefore emphasize and assess the intellectual work of inquiry, analysis, and synthesis, ensuring that AI serves as an input to this process, not a substitute for it.
- Champion Augmentation, Not Automation, of Thought. Our institutional goal is to use technology to empower, not replace, human thinking. We embrace Christopher Dede’s vision of AI as the “owl on your shoulder,” a tool that enhances wisdom. We must resist the temptation to use AI simply to do “the same old stuff better and quicker.” Instead, we will guide our students and researchers to leverage AI to do “better things”—to ask more ambitious questions, analyze more complex datasets, and explore new frontiers of knowledge that would be otherwise inaccessible.
- Cultivate Intentional and Conscientious AI Literacy. Effective and ethical use of AI requires a deep understanding of its nature. We adopt the charge of the American Historical Association to “support the development of intentional and conscientious AI literacy.” This extends beyond mere technical proficiency. We will teach our students how AI systems work in a “computational/Bayesian sense,” as Tina Grotzer recommends, so they can become “critical and discerning” consumers of its outputs. This includes understanding its limitations, its potential for bias, and its lack of true contextual understanding.
- Elevate and Assess Uniquely Human Cognitive Skills. In an age of powerful AI, our curriculum and assessments must increasingly focus on the cognitive skills that machines cannot replicate. We will intensify our pedagogical focus on the competencies Karen Thornber identifies as newly crucial: “discernment, evaluation, judgment, thoughtful planning, and reflection.” We will further emphasize the development of what Fawwaz Habbal calls “human insight,” “critical thinking,” and “ethics.” These are the enduring skills that create value, drive innovation, and form the basis of leadership.
5. Conclusion: A Commitment to Human-Centered Learning
This institution’s position is clear: we are committed to leveraging artificial intelligence as a powerful tool to augment, but never replace, human intellect. We will approach this new cognitive frontier not with fear, but with a principled intentionality, guided by an unwavering commitment to the core mission of higher education.
Our purpose remains, as it has always been, to help students explore and develop their “incredible minds and abilities,” as Tina Grotzer so aptly puts it. Far from diminishing this mission, the rise of “cheap intelligence” clarifies its purpose and elevates its urgency. We reaffirm that the development of wisdom, the cultivation of critical thinking, and the exercise of ethical leadership remain a fundamentally human enterprise. These are the capacities essential for solving the complex challenges of the future, and we are dedicated to nurturing them in the leaders of tomorrow.
