
Brent Packer • 2025-09-08
With AI continuing to be a top priority for companies, some strategy and operations leaders are uncertain about whether and how they can contribute to conversations about AI use cases. By asking and exploring a set of key questions, non-technical leaders can play a critical role in capturing value and mitigating risk in AI initiatives.
Artificial intelligence remains a key priority for companies. As companies explore how to make the most of this technology, they are increasingly relying on technical expertise to evaluate, build, or buy AI solutions and use cases. Non-technical leaders, especially those in strategy and operations roles, may be uncertain how and when they should be involved.
A common misconception is that non-technical leaders mainly contribute in two domains related to AI use cases: upfront use case selection and operational rollout of the tool to users. These are helpful, but there are other critical areas where strategy and ops leaders can and should weigh in. In fact, being non-technical can be a superpower: it frees these leaders to take a higher-level view and ensure AI use cases are as valuable as possible. The following are five key questions strategy and ops leaders should ask and explore:
While it's exciting to work on cutting-edge technology, it's important to cut through the noise and the hype to understand what problem you want the use case to solve and why AI is the best approach for it. This applies whether you build in-house or buy a solution from a vendor. There are many times when a simpler, easier-to-maintain, cheaper, and ultimately more effective solution that doesn't use AI is the right choice over the flashy AI one.
Ensure your teams have done the analysis on alternative, non-AI solutions before starting down the AI path. These could be more traditional rules-based solutions (e.g., if this, then that) or even continuing to have humans do the task.
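To make the "if this, then that" alternative concrete, here is a minimal sketch of a rules-based approach to a task often pitched as an AI use case: routing inbound support tickets. The queue names and keywords are hypothetical placeholders, not a recommendation for any specific system.

```python
def route_ticket(subject: str) -> str:
    """Route a support ticket to a queue using plain keyword rules.

    A rules-based baseline like this is cheap, transparent, and easy
    to maintain; an AI use case should have to beat it to justify
    its added cost and complexity. (Illustrative rules only.)
    """
    text = subject.lower()
    if "refund" in text or "charge" in text:
        return "billing"
    if "password" in text or "login" in text:
        return "account-access"
    return "general"


print(route_ticket("Please refund my last charge"))  # billing
print(route_ticket("I can't reset my password"))     # account-access
```

Even a baseline this simple gives the team a measurable bar: if the AI version can't meaningfully outperform it, the simpler solution wins.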
It comes naturally to strategy and operations experts to obsess over the "so what" of any decision or action. AI is no different. Non-technical leaders can push for clarity on how the company will know whether the use case is bringing value. Before any AI use case is launched, even as a pilot, these leaders can work with their teams to define the metrics, develop a process for gathering them, and set thresholds for expanding or stopping the use case. A key consideration for any use case is cost, since compute and API costs can become unruly and rapidly eat into any value the use case brings. This is a callback to the classic adage: "if you can't measure it, you can't manage it."
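As a sketch of what "metrics with thresholds" can look like in practice, the snippet below encodes a go/no-go check for a pilot. The metric names and threshold values are illustrative assumptions, not benchmarks; the point is that the decision rule is agreed before launch.

```python
def pilot_decision(metrics: dict) -> str:
    """Decide whether to expand, continue, or stop a pilot AI use case
    based on pre-agreed thresholds, including cost per resolved task.
    (All metric names and thresholds are hypothetical.)
    """
    cost_per_task = metrics["total_cost"] / max(metrics["tasks_resolved"], 1)
    if cost_per_task > metrics["max_cost_per_task"]:
        return "stop"  # compute/API costs are eating the value
    if metrics["quality_score"] >= metrics["expand_threshold"]:
        return "expand"
    return "continue-pilot"


decision = pilot_decision({
    "total_cost": 1200.0,        # compute + API spend during the pilot
    "tasks_resolved": 800,
    "max_cost_per_task": 5.0,    # agreed cost ceiling per task
    "quality_score": 0.92,       # e.g., human-reviewed accuracy
    "expand_threshold": 0.90,
})
print(decision)  # expand
```

Writing the thresholds down before the pilot starts keeps the expand/stop conversation objective once results come in.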
AI models require a human touch at various points in their lifecycle. For example, there's a human behind the keyboard selecting the data, training the models, and so on. Humans have blind spots and biases, and these have unsurprisingly appeared in AI models too. While leaders should strive for AI use cases free of bias, it's unrealistic to expect absolutely none. Non-technical leaders can ask questions to better understand where blind spots and bias may come into play and help develop strategies to reduce them. This is not limited to gender, religion, age, race, and the other biases typically associated with the term. It can also include cultural bias (e.g., is the use case's communication and decision-making based on Western norms?), suggestion bias (e.g., is the use case over-suggesting certain products?), and others.
Below is an illustrative example of biases and blind spots in AI models that you may not be aware could exist:
It may be that certain technical biases are appropriate and are part of the design of the tool. For example, a US sports betting AI copilot may work best if its data and training are skewed toward its target audience of 25-40-year-old men living in the United States. Overall, it's important that non-technical leaders dig into what some of these blind spots and biases could be and develop strategies to either address or accept them.
The "intelligence" within AI is its ability to learn, form conclusions, and take actions it was not specifically designed to take. Hallucinations are, in a sense, just as much a feature as a bug of AI. However, there need to be guardrails on AI use cases to keep the AI contained within a set of appropriate outcomes. Knowing where and what types of guardrails to develop is an area where strategy and ops experts should play. This can be done by creating a list of the high-risk outcomes of the AI use case that need to be avoided (e.g., making up new policies, giving incorrect information, using offensive language, using unauthorized data, recommending competitors, capturing medical history, etc.). Once that's done, a significant amount of time and energy should go into pushing the AI use case to its limits in this failure-testing activity. Once failures are discovered, they can be documented and sent back to the technical teams to refine the guardrails.
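The workflow above (list high-risk outcomes, test against them, report failures back) can be sketched as a simple output check. The risk categories and patterns below are hypothetical stand-ins; a real guardrail layer would be far more sophisticated, but the loop is the same.

```python
import re

# Hypothetical high-risk outcomes to avoid, each with a crude
# detection pattern. In practice these would come from the
# failure-testing list the team agrees on.
RISK_PATTERNS = {
    "recommends-competitor": re.compile(r"\bCompetitorCo\b", re.IGNORECASE),
    "captures-medical-history": re.compile(r"\b(diagnosis|prescription)\b", re.IGNORECASE),
}


def check_output(response: str) -> list:
    """Return the list of guardrail violations found in a model response,
    so they can be documented and sent back to the technical team."""
    return [name for name, pattern in RISK_PATTERNS.items()
            if pattern.search(response)]


failures = check_output("You might prefer CompetitorCo for this.")
print(failures)  # ['recommends-competitor']
```

Non-technical leaders don't need to write these checks themselves; their contribution is the list of outcomes the checks encode and the persistence to keep probing until the list is exhausted.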
AI models are nothing without the underlying data powering them. However, different types of data come with different restrictions and disclosure requirements when used to improve an AI use case. It's not the role of strategy and operations leaders to have the deep regulatory expertise to know the nuances of data privacy rules. Instead, they can bring in legal and compliance experts early in the process and act as a conduit to the technical team to resolve data issues. If these issues are not spotted in time and resolved with the right experts, they can cause serious problems. At best, the development team may need more time to redesign the use case. At worst, there may be legal repercussions. Again, the non-technical leader's role is not to resolve these issues but to sense when experts on the topic should be engaged.
Non-technical leaders, especially strategy and operations leaders, have a key role to play in the creation of effective, valuable AI use cases in their companies. Being non-technical allows these leaders to stay out of the data science and development details and instead view the use case at a higher level. It's this viewpoint that is essential to complement the work of the company's technical leaders to, together, leverage AI to make something truly valuable.
Contact NorthLawn to connect with Brent directly or explore how NorthLawn's AI Practice can support your organization's goals.
Copyright © 2026 NorthLawn LP. All rights reserved.