The Problem

Here is a conversation I have seen play out more times than I can count.

A leadership team gets excited about AI. They run a workshop, gather ideas from across the business, and end up with a long list - customer chatbot, invoice automation, demand forecasting, resume screening, fraud detection. All valid. All competing for the same budget and team.

Then someone asks: "Great - so where do we start?"

And the room goes quiet. Or the loudest voice wins.

The default answer is usually business impact. Pick the highest-value item and go. It is logical, it is defensible, and in my experience - it is often wrong.

Business impact tells you what is worth doing. It does not tell you what is safe to start with, what you are actually capable of delivering, or what will quietly fail six months in.

Why AI Is Different

A standard digital initiative fails for execution reasons - budget overruns, scope creep, change management gaps. Frameworks like RICE or MoSCoW were built for exactly these failure modes.

AI initiatives fail for all of those reasons, plus a set that generic frameworks miss entirely:

Data that does not exist or is not usable. The model looks great in theory. Then you discover the data is siloed across three systems, nobody has cleaned it in two years, and access requires a legal review that takes months.

Users who do not trust the output. The model is 87% accurate. The team it is meant to help ignores it entirely because they do not understand how it works and do not trust it over their own judgment.

Regulatory exposure nobody stress-tested. The use case touches hiring or lending - and suddenly you are in a conversation about the EU AI Act that nobody budgeted for.

These are not edge cases. They are the most common reasons AI projects stall after they have already been approved.

The Framework

I put together a scoring approach built specifically for AI: six dimensions, each weighted by how much it determines whether an initiative succeeds or fails. It borrows from standard prioritisation thinking, adapted for how AI initiatives actually fail.

Business Impact (30%): Revenue, cost, and CX value created.

Data Readiness (25%): Is data available, clean, and accessible today?

Technical Feasibility (15%): Build vs buy vs API - how hard is it really?

Adoption & Change (15%): Will people trust and use the output?

Ethical & Regulatory Risk (−10%): High risk actively pulls your score down.

Strategic Alignment (5%): Does it build reusable capability?

Data Readiness gets 25% because poor data quality is the most common reason AI projects stall, and it almost never appears in a standard business case. Score this one honestly.

Ethical & Regulatory Risk is negative. A high risk score actively reduces overall priority. If a use case sits in hiring, lending, or healthcare, that should change where it sits in your roadmap - not just get flagged in a risk register nobody reads.
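If you want the logic outside a spreadsheet, the weighted scoring is simple enough to sketch in a few lines of Python. The weights are the ones from the framework above; the tier cut-offs (what counts as Now, Next, or Later) are illustrative assumptions of mine, not something the framework fixes.

```python
# Weights from the framework. Ethical & Regulatory Risk is negative:
# a high risk score actively pulls the overall priority down.
WEIGHTS = {
    "business_impact": 0.30,
    "data_readiness": 0.25,
    "technical_feasibility": 0.15,
    "adoption_change": 0.15,
    "ethical_reg_risk": -0.10,
    "strategic_alignment": 0.05,
}

def priority_score(scores: dict[str, float]) -> float:
    """Weighted sum of 0-10 scores across the six dimensions."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def priority_tier(score: float) -> str:
    """Illustrative cut-offs (my assumption, not part of the framework)."""
    if score >= 6.5:
        return "High Priority - Now"
    if score >= 5.5:
        return "Medium Priority - Next"
    return "Low Priority - Later"
```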

What It Looks Like In Practice

Take AI Resume Screening. Strong business case - faster hiring, reduced workload, consistent screening. Most organisations would instinctively put it high on the list. Run it through the framework:

Business Impact 8/10

Data Readiness 6/10 - CVs exist, fair training data is harder

Technical Feasibility 7/10

Adoption & Change 5/10 - HR teams tend to be cautious

Ethical & Reg. Risk 9/10 - a high-risk use case under the EU AI Act

Strategic Alignment 5/10

The outcome: Low Priority - Later. Not because the idea is wrong, but because the organisation is not ready to do it responsibly yet.
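For transparency, here is the arithmetic behind that outcome, run through the sketch above (the tier cut-offs remain my illustrative assumptions):

```python
resume_screening = {
    "business_impact": 8,
    "data_readiness": 6,
    "technical_feasibility": 7,
    "adoption_change": 5,
    "ethical_reg_risk": 9,   # high score = high risk, so it subtracts
    "strategic_alignment": 5,
}

score = priority_score(resume_screening)
# 8*0.30 + 6*0.25 + 7*0.15 + 5*0.15 - 9*0.10 + 5*0.05 = 5.05
print(round(score, 2), priority_tier(score))  # 5.05 Low Priority - Later
```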

That is a conversation worth having before you have spent six months building it.

The Honest Caveat

No framework makes the decision for you. What it does is force a structured conversation - one where data readiness gets as much airtime as business value, and where ethical risk is on the table from day one.

The scores are only as good as the honesty of the people filling them in. The most valuable thing that comes out of scoring use cases together is often not the final ranking - it is the conversation that happens when two people score the same use case very differently.

Download the free toolkit → If you want to put this into practice, I have built a free Excel file with all six dimensions, weighted scoring, auto-calculated priority tiers, and roadmap placement, plus 8 pre-filled AI use case examples to get you started.

No sign-up needed. AI_Prioritisation_Toolkit_PS_Ver1.0

Next Edition

Before you can prioritise, you need to know how to identify the right candidates in the first place. Next: a framework for spotting AI use cases - and knowing which problems are actually AI problems versus data or process problems wearing an AI costume.

That's a wrap for this week.

Stay tuned for more practical tools for the transformation journey in future editions.

Best,
Piyush

Exploring transformation at the intersection of people and performance.

Thank you for reading. See you in the next edition!

I'd love for you to join me on this journey! I cut through the noise and get straight to the knowledge that matters in just 2 minutes.

📧 Click here to subscribe.
