
Whoever Writes the Prompt Picks the Strategy

Most companies still think AI is showing them what the data says. But it is not. It is showing them what the prompt asked it to prioritize.

Same data. Same model. Two contradictory strategic recommendations. That is not a flaw in the technology. It is the most important thing to understand about how AI actually works.

We ran a controlled test using brand perception data across five major brands, measured across convenience, quality, customer service, design, heritage, and innovation. We submitted the exact same dataset to the exact same model twice. The only thing we changed was the prompt.

The first prompt asked which brand had the strongest overall perception, using breadth and depth of associations as the lens. The answer was clear and confident: a functional market leader defined by convenience, value, and customer experience. Broad preference. Strong delivery. Obvious winner.

The second prompt asked the model to evaluate the same data through the lens of long-term brand equity: quality, design, heritage, and innovation. A different brand won. And the first winner was reframed as a commoditized incumbent whose advantages any well-funded competitor could replicate overnight.

Same data. Same numbers. Two coherent, well-supported, contradictory strategic recommendations.

We have run variations of this across categories and datasets. The result is the same every time. Change the frame, change the conclusion. And the frame lives in the prompt.
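To make the setup concrete, here is a minimal sketch of that kind of experiment, assuming an OpenAI-style chat API. The brands, scores, prompt wording, and model name below are illustrative stand-ins, not the data or prompts we actually used.

```python
# Minimal sketch of the two-frame experiment (illustrative data and prompts).
from openai import OpenAI

client = OpenAI()

# The same dataset both times: perception scores (0-100) across six attributes.
BRAND_DATA = """\
brand,convenience,quality,customer_service,design,heritage,innovation
Brand A,92,71,88,64,58,62
Brand B,66,89,72,91,87,85
Brand C,74,80,75,78,70,73
"""

FRAME_BREADTH = (
    "Which brand has the strongest overall perception? Use breadth and "
    "depth of associations as your lens. Recommend a strategy."
)
FRAME_EQUITY = (
    "Evaluate this data through the lens of long-term brand equity: quality, "
    "design, heritage, and innovation. Which brand wins? Recommend a strategy."
)

def run(frame: str) -> str:
    """Submit the identical dataset under a different analytical frame."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model; the choice is illustrative
        messages=[
            {"role": "system", "content": "You are a brand strategy analyst."},
            {"role": "user", "content": f"{frame}\n\nData:\n{BRAND_DATA}"},
        ],
    )
    return response.choices[0].message.content

# Same data, same model. Only the frame changes, and so does the winner.
print(run(FRAME_BREADTH))
print(run(FRAME_EQUITY))
```

Nothing about the call changes between runs except the frame. That is the entire experiment.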


The frame is the strategy

Most people think prompting is about getting better answers. It is actually about defining better decisions. That is the difference between using AI to confirm what you already believed and using it to reach a decision you can defend.

Prompting is one of the most talked-about parts of working with AI, and still one of the most poorly executed. Everyone knows prompting matters, yet most people still approach it the wrong way because they misunderstand what prompting actually is.

A prompt is not a search query. It is an interpretive act. Whoever writes it is choosing the analytical framework. They are deciding what matters, what gets prioritized, what gets ignored, and ultimately what the business may decide to do next. If they don’t fully appreciate that, they may never notice that a strategic choice was made by default rather than by design.

Across CPG, financial services, and healthcare, we see the same pattern repeatedly: the teams getting the most value from AI are rarely the ones with the most sophisticated tools. They are the ones with the most disciplined prompting.


AI is a prediction engine, not a source of truth

The clearest framework we’ve found for understanding this comes from Avi Goldfarb, an economist at the University of Toronto and co-author of Prediction Machines. We first met Avi around the time of ChatGPT’s release, which turned out to be fortuitous, and his thinking has significantly shaped how we approach this work ever since.

His argument is simple: AI is fundamentally a prediction tool. Not intelligence. Not reasoning. Prediction.

When you submit a prompt, the model is not thinking through your business problem the way a senior analyst would. It is predicting the most statistically likely response based on the context you provide. That sounds like a technical distinction. It is actually a strategic one.

If you are using AI to summarize information, almost any prompt will work. If you are using AI to weigh alternatives and predict outcomes, which is what real business decisions require, the prompt has to be built differently. Not better worded. Differently built.

The job is not better questions. It is better decision conditions. And that work happens before you ever open the tool.

AI doesn't just preserve the need for judgment. It raises the stakes for it.


Prompting as practiced craft

A disciplined prompt makes several choices before a single word is written. What framework should shape the analysis? Whose perspective should drive the recommendation? Who is the output for? What would a dangerously misleading answer look like?

Each one changes the conclusion. Most teams skip all of them without realizing it.

The how is proprietary — our special sauce. What matters here is the why. An undisciplined prompt asks for analysis. A disciplined one asks for judgment. One reads like a competent summary. The other reads like the recommendation a partner would put in front of a CEO.

The prompt is the last step, not the first. The hard work happens before anyone starts typing.
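As an illustration only, and emphatically not our method, here is what making those choices explicit might look like. Every field name and value below is a hypothetical example, a scaffold rather than a recipe.

```python
# A generic scaffold for a disciplined prompt. All names and values are
# hypothetical examples, not a real client engagement.
from dataclasses import dataclass

@dataclass
class PromptFrame:
    framework: str     # what analytical lens shapes the analysis
    perspective: str   # whose point of view drives the recommendation
    audience: str      # who the output is for
    failure_mode: str  # what a dangerously misleading answer would look like

    def render(self, task: str, data: str) -> str:
        """Assemble a prompt that states its frame instead of hiding it."""
        return (
            f"Analytical framework: {self.framework}\n"
            f"Perspective: {self.perspective}\n"
            f"Audience: {self.audience}\n"
            f"Reject any answer that: {self.failure_mode}\n\n"
            f"Task: {task}\n\nData:\n{data}"
        )

frame = PromptFrame(
    framework="long-term brand equity: quality, design, heritage, innovation",
    perspective="a partner advising the CEO on a three-year horizon",
    audience="the executive team, not the analytics group",
    failure_mode="treats current market share as proof of durable advantage",
)
prompt = frame.render(
    task="Which brand is best positioned, and what should we do about it?",
    data="<the same brand perception table as before>",
)
```

The value is not the template. It is that every field forces a decision someone would otherwise make invisibly.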


The invisible choice

Daniel Kahneman, the psychologist whose work on judgment and decision-making biases helped found behavioral economics, spent his career teaching a version of this truth: we are blind to our own blind spots. Judd Kessler, a behavioral economist at Wharton and a valued member of our Bovitz team, reinforces the same idea. We rarely know how much we are missing. AI inherits that problem and amplifies it, because polished, confident answers make a chosen point of view look like objective fact.

"The data shows" makes the conclusion sound inevitable. "We examined this through the lens of long-term brand equity, which is why we weighted heritage and differentiation above utilitarian performance" makes the choice visible and invites challenge. The second sentence is harder to write. It is also the only honest one.

We used to worry about bad data producing bad answers. The harder problem is good data producing confidently wrong ones because the frame was wrong from the start.


The reviewer's job has changed

When AI generates analysis, checking whether the numbers are right is necessary. It is not enough. The harder question, and the one most teams are still not asking, is whether the frame is right.

AI can produce a confident, polished, internally consistent recommendation that points a client in the wrong direction, not because the data was wrong, but because of whose question the model was actually answering. Before the prompt was written, who decided which framework the model would use? If the answer is "no one consciously," a strategic decision was already made. It just was not visible.

If no one is auditing the frame, the frame is being chosen by whoever happened to type the prompt. That is not an operational detail. That is strategy. It should be made consciously, by the right people, with the tradeoffs visible to everyone downstream.


The widening gap

Teams that build this discipline early will make better decisions faster than teams that do not. That gap will widen over time, not narrow. We will look back in five years and see it as one of the clearest separators between companies that used AI well and companies that simply used AI often.

At Bovitz, we believe the goal is not artificial intelligence. It is augmented intelligence. AI should strengthen human judgment, creativity, and strategic thinking, not replace it. That is where the real advantage lives.

If decisions in your business now depend on AI-generated analysis, the important question is not whether your prompts are good. It is whether anyone is auditing the frame, and whether that frame is the one your business actually requires. We have built a discipline for that. If you would like to see what that looks like inside your category, let's talk.


This is the second post in our series on AI, decision-making, and the discipline that separates teams that think better from those that merely work faster. Next: why pattern recognition is not the same as decision-making — and why that distinction makes skilled decision-makers more valuable in the age of AI, not less.
