Connecting Business Education to Real-World Transformation | Inside the Launch of BBA Unlocked

Yorkville University recently launched BBA Unlocked – a new virtual forum designed to bring together Business Administration students, alumni, faculty, and industry leaders to explore the most pressing transformations shaping today’s business landscape.

Conceived as a recurring series, each session will focus on a timely topic, creating an ongoing platform that bridges academic learning with real-world application.

The inaugural session, The Role(s) of Ethical AI in Evolving Business, examined how artificial intelligence is reshaping operations, leadership, and strategic decision-making. While AI adoption continues to accelerate, a key theme that emerged throughout the discussion was that efficiency gains alone are not enough – achieving sustainable competitive advantage requires a clear strategic vision, disciplined governance, and strong operational accountability.

Held on April 9, the event featured keynote speaker Dr. Angel Valerio, COO of Clir Renewables and a member of Yorkville’s BBA faculty, alongside host and moderator Katherine Carpenter.

Drawing on his global work in AI-enabled operations, Dr. Valerio emphasized the importance of responsible implementation, operational resilience, and effective leadership in increasingly technology-driven environments.

“The key to any AI implementation is discipline. And discipline starts with defining ‘what problem are we actually trying to solve?’” he said in his closing remarks.

“That definition is what leads us to the right frameworks for security and compliance, the right data practices, and the right partnerships. When those three elements are aligned, AI adoption becomes transparent and responsible.”

Also joining the conversation were faculty panelists Dr. Donna Chowdhury, Dr. Thomas Jones, Dr. Shimaa ElSherif, and Dr. Louise Olivier, who contributed cross-disciplinary perspectives grounded in both academic insight and industry experience. They also analyzed real-world business scenarios and engaged attendees through an interactive Q&A supported by live polling and real-time feedback via Mentimeter.

While time constraints prevented all of the Q&A questions from being addressed during the event, Drs. Chowdhury and ElSherif have since taken time to provide responses.

Below, they share their answers to some of the audience’s most thoughtful and pressing questions:

Donna Chowdhury: From a Project Management perspective, AI gives teams a strong starting point and helps accelerate planning. It can support tasks like developing work breakdown structures, generating timelines, identifying potential risks (based on historical patterns), and even forecasting and scenario analysis to support decision-making.

In many ways, we are now working with an invisible partner – AI. From a ‘posthuman’ perspective, this means intelligence is no longer human alone but shared between people and technology.

However, the biggest mistake I see is people trusting the output too quickly. Just because something sounds confident doesn’t mean it’s correct or relevant. Real-world projects rarely operate under ideal conditions – AI often misses things like team capacity, stakeholder expectations, or unexpected changes. And in project environments, decisions have real consequences. So governance, accountability, and ownership of outcomes still belong to us.

That’s why we encourage students to pause, question what they see, and ask, ‘Does this actually make sense here?’ At Yorkville, we don’t just teach students to use AI – we teach them to think with it. The risk isn’t AI being wrong – it’s people not questioning it. 

I always tell students to think of AI like a GPS – it can suggest a route, but you’re still responsible for where you go.

Shimaa ElSherif: Preparing students to use AI responsibly requires going beyond technical skills to developing strong AI literacy and critical thinking. Students need to understand that AI systems are not always accurate – they are shaped by data and design choices and can reflect existing biases. Integrating questioning into learning is essential, encouraging students to think about assumptions, missing data, and who might be impacted by AI-driven decisions.

Real-world examples make these risks more tangible. For instance, Amazon once developed a hiring algorithm that unintentionally favored male candidates because it was trained on historically biased data. This highlights the importance of not blindly trusting AI outputs.

Equally important is reinforcing a human-in-the-loop approach, where AI supports rather than replaces judgment, and embedding ethical reflection into coursework. Ultimately, the goal is to develop not just AI users, but critical thinkers who can engage with these tools responsibly and thoughtfully.

Shimaa ElSherif: Balancing innovation with governance requires organizations to stop viewing them as competing priorities and instead see governance as an enabler of sustainable innovation. A responsible innovation approach means embedding ethical, legal, and operational considerations into AI development from the outset, rather than addressing issues after deployment.

This includes establishing clear guidelines, assessing risks such as bias, and ensuring appropriate human oversight. For example, organizations using AI in high-impact areas like hiring should proactively test for fairness rather than reacting to issues later.

At the same time, governance should be flexible and risk-based, with stricter controls applied to higher-risk applications and lighter oversight for lower-risk use cases. Ultimately, organizations that integrate responsibility into their innovation processes are better positioned to build trust and achieve long-term success.

Shimaa ElSherif: I would say it really depends on the course content. Not every course needs a dedicated AI component, but wherever it is relevant, integrating AI can be very valuable.

When possible, embedding AI concepts – or even discussions around ethical considerations – helps keep courses current and better aligned with what students will encounter in the workforce. It also gives students a chance to develop practical skills while understanding the broader implications of using these tools.

The goal is not to force AI into every course, but to thoughtfully integrate it in ways that enhance learning and prepare students for an increasingly competitive and technology-driven environment.

Donna Chowdhury: I don’t think every course needs an AI component for the sake of it – but every program should help students understand how AI connects to their field. I’d focus on using AI where it naturally fits – and on making sure students learn how to question it.
