Dr. Michael Hanna

Director of Cybersecurity Operations, Responsible AI, DoD Chief Digital and Artificial Intelligence Office; Author of the Defense AI Guide on Risk (DAGR); ERM Alumnus 2022


What was your interest in applying for GW’s Enterprise Risk Management program?

Just in terms of background, I fell into risk management almost by happenstance. Entering the technology field, specializing in cybersecurity, and operating in both the energy and defense sectors, I have had to analyze risk daily and across every activity. You simply cannot do your job well and effectively if you do not incorporate risk management. In addition to my day-to-day functions, I'm a professor of cybersecurity and risk management, so entering a program such as GW’s ERM program was a valuable developmental experience.

Almost a year ago, I was searching for programs to further develop my risk management skills, and I came across the George Washington ERM in Government Program and immediately thought it was an excellent fit. Not only would it allow me to develop as a risk management professional, but I also knew I would be exposed to top experts in the field, from both academic and professional backgrounds.

And, to go one step further, I was able to participate alongside and learn from other government leaders across multiple agencies and departments, which was one of the reasons the class was so phenomenal.

What are one or two major takeaways from the program for you?

The first takeaway is the need to examine and assess risk from a holistic perspective, and to use data and risk management tools to support the development or refinement of your program through feedback. You can't look at your risk problem only through a subset of your expertise, because that addresses just a sliver of the risk picture. You must look at all of it, because risk consists of many interrelated dynamics.

The second takeaway was the importance of cross-functional collaboration in exploring problems from perspectives we normally wouldn't be exposed to. This was truly interesting to me as a student. While we were exploring the class use cases and listening to feedback from other students and instructors with very diverse backgrounds, we were able to really collaborate and examine risks from different angles. What that teaches you, as you're developing your risk guides or risk programs, is to engage with other stakeholders, because you don't know all the answers by yourself. For an ERM program to be successful, you really need the collaboration and expertise of all the stakeholders.

What are you applying today, and how has it shown up in your work on the AI risk guidance that you help produce?

Recently we released our Responsible AI Toolkit, and a major component of it is the risk guide I wrote, known as the Defense AI Guide on Risk, or DAGR. During the development of this guide, it was important to understand that artificial intelligence is a capability that presents more than just technological risks and opportunities. It has social, technological, operational, political, economic, and sustainability implications, which I call a STOPES analysis.

When I began developing this guide, I first examined the overarching and publicly available guidance, theories, requirements, regulations, and international considerations, which are pretty substantial. Next, I focused on how I could consolidate and abstract that literature into simplified models and processes. The regulations, requirements, and considerations truly come from a wide set of domains: constitutional, legal, environmental, and social equity considerations, to name a few. There are many important concepts that have to be examined, and when I created DAGR, I made sure to account for them. So when we evaluate risk for an AI capability, we are not just looking at technical factors; we are applying the STOPES analysis I highlighted before.

I hope this guide spurs those risk conversations across all the stakeholders and encourages them to really examine the risks of developing and deploying an AI capability. As of right now, the guide and toolkit have been released publicly and across our department, and are on track for NATO acceptance. For organizations that do not yet have a responsible AI toolkit or AI risk program, this guide may be of assistance.

The views expressed here are solely those of the author and do not necessarily reflect those of the Department of the Navy, Department of Defense, or the United States government.