iMBA Program

Welcome iMBA Students!

The iMBA team is available to assist you at all times. We recommend that you bookmark this page for future reference, as it will be very useful as you complete the program. If you have any questions or issues, please contact us at i-support@illinois.edu.

iConverge

iConverge is our annual on-campus networking and professional development event. It is a chance for students to reconnect with familiar friends, make new ones, and develop professional relationships outside the classroom.

Graduation

Gies College of Business grants degrees three times a year: in May, August, and December. You must submit an application for graduation during your final term to place your name on the degree list and receive your diploma.

News and Events

Dancing in the Dark: Academia’s Reckoning with Generative AI

Aug 27, 2025
Generative AI has shattered the protective walls of academia’s "Ivory Tower," challenging faculty to adapt quickly to a rapidly evolving technological landscape that directly impacts knowledge work.

By Robert Brunner, Professor of Accountancy and Chief Disruption Officer, Gies College of Business

Popular media regularly characterizes academics as occupants of an ivory tower, often with a hint of derision, since its residents are presumed to be disconnected from practical concerns. Historically, faculty were insulated from real-world disruptions caused by technologies such as the computer or the Internet, which diffused from academic labs into industry. The Ivory Tower provided refuge.

But this metaphorical barrier has been pierced frequently of late. The primary drivers behind emerging technology disruptions are no longer academic or government labs but corporations and private firms. Nowhere is this more evident than with Generative AI, which has been developed and disseminated primarily by researchers working for deep-pocketed technology firms[1]. This origin story affects faculty in two important ways. First, Generative AI tools like ChatGPT developed rapidly and were released to an unprepared public, leaving many feeling flat-footed by this seemingly magical technology. Second, and perhaps most importantly, the technology primarily impacts what is known as knowledge work. Simply put, Generative AI cuts to the heart of what it means to be an academic; the Ivory Tower no longer provides refuge.

We are supposed to be experts, yet in this case we are behind the curve, unsure how to react or even what to do. It is as if a universal case of imposter syndrome has descended upon the Ivory Tower as we struggle to understand how to embrace, explain, or use this technology. We are, in effect, dancing in the dark[2]:

  • unsure if we are alone or in a crowd,
  • unsure if we are doing the right things,
  • unsure if we are improving or regressing, and
  • unsure if this is the beginning of the end or the end of the beginning.


The Disruption of Academia

The first interaction for many of us was not a good one: a sudden wave of AI-generated answers to writing assignments (ironically characterized by an excessive use of the em dash) that threatened a crucial assessment technique and, more broadly, the concept of academic integrity. The resulting outcry was swift and perhaps predictable. Many called for bans, some called for patience and reflection, and some invoked the Persian adage: “This too shall pass.”

But in the interim, we have learned several important lessons:

  • Generative AI is not going away.
  • Generative AI is continually improving.
  • Generative AI proficiency is a required skill for our graduates.

Thus, if we are forward-thinking, we must embrace the adoption of AI or risk growing increasingly irrelevant within our ivory towers. Yet we have little clarity or guidance, from either our administrations or our peers, on how best to move into this brave new world: we are still dancing in the dark! And, as knowledge workers, we are further confronted by the discomforting hint of the ephemeral nature of our own careers.

We spend years mastering a subject, becoming the expert in the room. But now Generative AI can summarize an entire research field, customize our course lecture notes for each student, or critique a case study with unnerving fluency. The challenge is less that AI is smarter and more that AI is faster, more convincing, and tireless. The comparison, and the competition, is unsettling. This leads to a new imposter syndrome, where faculty ask, “Am I falling behind?” and “Is my expertise still relevant?”

Alongside these doubts are real ethical concerns. Faculty may be asking:

  • What counts as “my work” if I use AI?
  • Am I allowed to use AI to provide student feedback?
  • Am I violating academic integrity standards if I use AI too much or too little?

Without clear institutional guidance and support, many faculty are adrift in an ethical fog, unsure how, or even whether, they should proceed. And these feelings apply equally across our service, teaching, and research[3] roles.


A Guiding Metaphor

Fortunately, there is a simple step we can take: embrace the future and learn to work effectively with AI. Rather than seeing AI as a threat, view it as a skilled intern that can quickly draft, summarize, analyze, and even organize large corpora. This, in turn, requires us to provide direction, oversight, and judgment. Of course, this is the same advice we should be giving to our students, who are increasingly expected to master these skills to gain employment.

While easy to say, doing this can be discomforting. But the concept is not new. In 1987, Apple released a concept video titled “Knowledge Navigator” that showed a professor interacting with a highly competent digital assistant. This bow-tied AI helped schedule meetings, find articles of interest, and organize research. While inspirational, it was, at the time, science fiction.

But today this dream is alive. Faculty can ask AI to generate lecture outlines, critique writing, summarize recent research, or simulate a conversation in another language. What was once imagined as a distant future is now relatively cheap and ubiquitous. Yet faculty are still struggling as they ask, “How do I get started?”, “What is permitted under university policy?”, and “How do I remain a role model for my students?”

The initial goal shouldn’t be to master AI, but to learn to use it meaningfully. Faculty must be the trusted guides into a world where AI influences how knowledge is produced, shared, and valued. For this to happen, institutions must provide the supporting scaffolding for a confident adoption of AI. They must:

  • Normalize discomfort. Faculty must be able to share their concerns and experiences without fear of judgment.
  • Provide clear guidelines. Institutions should articulate what is allowed and what is not.
  • Promote mentorship and peer learning. Support small groups that share use cases, failures, and lessons.
  • Recognize ethical creativity. Highlight examples of AI-enhanced assignments, not just AI detection tools.
  • Encourage transparency. Let students know when and how AI is used—and expect the same from them.

Just as the Knowledge Navigator inspired an earlier generation, today’s Generative AI tools invite us to rethink how we teach, learn, and lead. We can stop dancing in the dark! With an open mindset, shared wisdom, and institutional support, the lights will come on and our path forward will become not only visible but empowering.


[1] Competition between technology firms (and countries) has created an arms race for talent and resources.

[2] This phrase was popularized by a hit Bruce Springsteen song that dealt with themes of isolation and frustration with work, themes that seem rather relevant to this discussion.

[3] Experimentation with AI-augmented research is already happening; see the Agents4Science conference.