AI Consultant? Or AI Strategist? The Questions We Ask Matter


A recent LinkedIn post challenged so-called “AI consultants” to name all AI lifecycle phases, identify the control points, and map them to system diagrams. The implication was clear: if you cannot do these things on command, your engagement is not worth continuing.

At first glance, the post seems to promote accountability. But dig a little deeper, and two flawed assumptions become apparent. First, the post implies that “AI” has a single, universal lifecycle. Second, it assumes that all AI professionals must function as technical system architects. In truth, both assumptions reveal a misunderstanding of what AI is and what it takes to integrate it responsibly and strategically.

AI Is Not One Thing

Artificial Intelligence is not a single system, tool, or method. It is an umbrella term encompassing a wide range of technologies, including machine learning, expert systems, natural language processing, computer vision, large language models, and intelligent automation. Each of these disciplines brings its own methodologies, applications, and lifecycle considerations.

Throughout my ongoing doctoral research, which focuses specifically on how leaders perceive and implement AI within decision-making and strategic planning, I have reviewed over 1,000 peer-reviewed scholarly articles. These span business, technology, ethics, and organizational leadership. What stands out across this body of research is that there is no single, universally accepted AI lifecycle. Instead, frameworks vary depending on industry, use case, regulatory environment, organizational maturity, and technological scope.

For example, the CRISP-DM model is well-known in data science for guiding structured analytics projects. However, it does not sufficiently address ongoing model monitoring, fairness audits, or adaptive learning that modern AI systems require. The U.S. Department of Defense offers an alternative lifecycle that emphasizes ethical alignment, field testing, and mission impact assessments, which are particularly suitable for defense applications but may not apply to private enterprises.

The World Economic Forum and OECD propose governance-oriented models that highlight iterative development, policy checkpoints, and public trust. One researcher proposes a hybrid framework that integrates technical development phases with leadership oversight, emphasizing the management of ethical risks alongside performance metrics. At the same time, another describes a cyclical lifecycle that blends agile development with structured compliance steps, including post-deployment retraining and stakeholder review.

In short, lifecycle models vary widely. We need a framework flexible enough to accommodate the diverse range of AI technologies while working across industries. A rigid, preconceived lifecycle checklist may satisfy a theoretical exercise, but it does little to support the nuance required to govern real-world AI implementations across diverse business environments and requirements.

Strategy Is Not the Same as Systems Architecture

The second issue with the original post is that it assumes all consultants working in AI should be able to map system subprocesses. This might be reasonable if the consultant’s role is to design the technical infrastructure. However, not everyone in the AI space is an engineer.

I do not position myself as a generic “AI consultant” or as an “AI expert,” a distinction I have discussed in previous articles. I consider myself an AI Business Strategist. My expertise lies in helping organizations align AI investments with their business strategies, establish effective governance structures, and integrate AI into workflows in an ethical manner. I do not build models from scratch or configure machine learning pipelines. Instead, I help leadership teams identify appropriate use cases, evaluate risks and opportunities, and ensure that AI initiatives support broader strategic and cultural goals.

Asking me to draw a system diagram for every AI engagement is like asking a CFO to produce source code for an ERP system. The question itself fails to recognize the diversity of roles involved in successful AI adoption.

The Questions That Matter

Leaders are justified in questioning the capabilities and credibility of those who call themselves AI consultants or experts. But the questions must align with business needs. The most important questions are not about reciting technical lifecycles. Instead, they evaluate whether an advisor can help you understand what AI can do, what it should do, and how to make it sustainable and responsible within your organization.

Can the person guide you through the organizational readiness process? Can they help assess data maturity and governance alignment? Can they articulate the risks of automation and the benefits of augmentation? Can they help design a framework that not only satisfies technical needs but also earns stakeholder trust?

Where Strategy Begins

AI fails in organizations not because someone forgot a lifecycle step but because leadership lacked a clear objective, the culture was not prepared, or the ethical implications were not considered. AI strategy does not begin with models; it begins with intent, governance, and alignment.

Yes, let’s raise the bar for those offering AI services. But let’s do it with precision and understanding. Not every expert needs to be a coder. Some of us work at the intersection of people, process, policy, and performance. And that is where AI, as a business transformation tool, either succeeds or fails.
