From 150 AI ideas to the right 10
Most enterprises don't have an AI ambition problem. They have a prioritisation problem.
Walk into almost any boardroom these days and you will find the same scene playing out. The AI strategy is approved. The investment envelope is signed off. The chief executive has made a public commitment. Somewhere in the organisation, a team has been asked to come back with a list of opportunities — and they have. The list runs to a hundred ideas. Sometimes a hundred and fifty. The room nods. And then, slowly, nothing ships at scale.
This is not a story about a lack of vision. It is a story about a missing layer. Between the strategy on the slide and the engineering team standing up the first model, there is a piece of work that almost no organisation does well: the disciplined translation of “everything is possible” into “these are the right ten things to do next.”
That gap — between ambition and prioritised delivery — is where most enterprise AI portfolios stall.
Most boards we work with don’t have an AI ambition problem. They have a prioritisation problem hiding inside an enthusiasm problem. The strategy is fine. The list of ideas is long. The mechanism for choosing the right ten is missing.
Chris Wray, Senior Adviser
The list is not the problem
It is fashionable to blame the absence of a backlog. In our experience, that is rarely the issue. Modern enterprises are good at generating ideas. Strategy teams run innovation sessions. Function heads compile wish lists. Vendors arrive with playbooks. Generative AI tools can produce impressive collections of candidate ideas on demand. Within a few weeks of a board-level commitment to AI, a typical large organisation can surface well over a hundred candidate use cases without breaking a sweat.
The list is the easy part. The hard part is what comes next, and most operating models have no honest answer for it.
How do you compare a fraud-detection model in finance against a copilot for the field engineering team? How do you weigh a customer-service triage capability — high public profile, modest economics — against a back-office automation that nobody will write a press release about but that pays back inside a year? How do you factor in regulatory exposure, data readiness, platform dependencies, change-management burden, and supplier lock-in risk when these dimensions sit across five different functions and are rarely added up against each other?
Most organisations answer those questions informally. Use cases get pitched by the loudest team. Pilots are funded based on energy and influence rather than evidence. Governance and risk are consulted late, often after a vendor is already on the hook. The result is a portfolio of disconnected experiments, very few of which scale, and even fewer of which deliver the outcomes the original strategy promised.
What is missing is not appetite. It is a structured, repeatable way to go from “everything is possible” to “these are the right ten things to do next.”
Why prioritisation is harder than it looks
Prioritisation in an AI context is not the same as ranking a software roadmap. The variables are messier and the asymmetries are greater.
A traditional product backlog can be ordered with reasonable confidence using familiar levers — user value, effort, dependencies. AI use cases bring a wider set of moving parts that interact in ways that defeat a simple scoring sheet:
- Regulatory exposure is non-linear. Two use cases with similar economics can sit on opposite sides of a risk threshold that triggers entirely different governance regimes.
- Data readiness is rarely binary. A use case may be technically viable on the data the organisation already holds, viable in twelve months once remediation work completes, or never viable without a structural change.
- Architecture and reuse matter disproportionately. A use case that builds a capability others can borrow is worth more than its standalone business case suggests. A use case that creates a new bespoke stack is worth less.
- Operational change is often the highest hidden cost. A model that requires a wholesale redesign of a frontline workflow is a different proposition from one that augments an existing one.
- Vendor independence changes the answer. The economics of a use case look very different when the recommendation is shaped by a partner with a horse in the race.
A serious prioritisation method has to incorporate all of these dimensions simultaneously and produce a defensible, ranked output that a finance committee can fund and a delivery organisation can actually start work on.
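To make the shape of the problem concrete, here is a deliberately simplified sketch in Python of what a risk-adjusted, multi-dimensional score might look like. Every field name, weight, and number below is illustrative: a placeholder for the model an organisation would need to negotiate for itself, not a description of Differential or of any client's method.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    annual_value: float     # expected benefit, currency units per year
    delivery_effort: float  # relative effort, e.g. 1 (small) to 10 (large)
    data_readiness: float   # 0.0 (structural gap) to 1.0 (ready today)
    regulatory_tier: int    # 0 = low, 1 = elevated, 2 = triggers a heavier regime
    reuse_factor: float     # > 1.0 if it builds capability other teams can borrow
    change_burden: float    # hidden operational-change cost, same units as effort

# Illustrative penalty table -- non-linear on purpose: tier 2 is a cliff, not a slope.
REGULATORY_PENALTY = {0: 1.0, 1: 0.8, 2: 0.4}

def risk_adjusted_score(uc: UseCase) -> float:
    """A toy risk-adjusted value score. A real model would add dimensions
    (vendor lock-in, platform dependencies) and calibrate weights against evidence."""
    value = uc.annual_value * uc.reuse_factor * uc.data_readiness
    cost = uc.delivery_effort + uc.change_burden
    return (value / cost) * REGULATORY_PENALTY[uc.regulatory_tier]

candidates = [
    UseCase("Fraud detection", 5_000_000, 8, 0.9, 2, 1.2, 3),
    UseCase("Field-engineer copilot", 1_200_000, 4, 0.7, 0, 1.5, 2),
    UseCase("Back-office automation", 900_000, 3, 1.0, 0, 1.0, 1),
]

for uc in sorted(candidates, key=risk_adjusted_score, reverse=True):
    print(f"{uc.name}: {risk_adjusted_score(uc):,.0f}")
```

Even this toy version makes the earlier point concrete: on risk-adjusted numbers, the back-office automation nobody will write a press release about can outrank the flagship fraud model.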
What good looks like
Three things have to be true for prioritisation to work as intended.
Discovery has to be systematic. Identifying AI opportunities should not depend on which team shouts loudest or which supplier got the meeting. The full surface area of the organisation’s operations, regulatory environment, and strategic priorities needs to be examined, and use cases generated against that context — not against a generic playbook. Breadth matters: it forces a comparison, and comparison is where prioritisation begins.
Governance has to come first, not last. Every candidate use case should be assessed against compliance and risk frameworks, and against technical readiness, before it ever gets near a business case. Doing this upstream is dramatically cheaper than discovering eighteen months later that a flagship pilot cannot go live because the data lineage was never sound or the procurement route was wrong. It is also the only way to give a risk committee a portfolio they can sign off on with confidence.
The output has to be a ranked portfolio, not a wish list. Senior decision-makers do not need another long list. They need a defensible, value-ranked, risk-adjusted shortlist with clearly flagged quick wins, realistic effort estimates, and an honest read on which use cases need a platform investment and which can be delivered standalone. That is the document that unlocks funding, procurement, and delivery.
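In the same illustrative spirit, the "governance first" ordering can be expressed as a pipeline in which every candidate passes through the same upstream checks before any business case is built. The gates below are hypothetical placeholders; the point is their position in the sequence, not their content.

```python
from typing import Callable

# Hypothetical gates -- names and checks are placeholders, not a compliance framework.
def data_lineage_gate(uc: dict) -> str | None:
    return None if uc.get("lineage_documented") else "data lineage not evidenced"

def procurement_gate(uc: dict) -> str | None:
    return None if uc.get("procurement_route") else "no compliant procurement route"

def regulatory_gate(uc: dict) -> str | None:
    return None if uc.get("regulatory_tier", 2) < 2 else "triggers a heavier regulatory regime"

GATES: list[Callable[[dict], str | None]] = [data_lineage_gate, procurement_gate, regulatory_gate]

def first_blocking_reason(uc: dict) -> str | None:
    """Return the first gate failure, or None if the use case clears them all."""
    for gate in GATES:
        reason = gate(uc)
        if reason:
            return reason
    return None

def assess(candidates: list[dict]) -> tuple[list[dict], list[tuple[dict, str]]]:
    """Gate every candidate the same way, upstream of any business-case work."""
    cleared, blocked = [], []
    for uc in candidates:
        reason = first_blocking_reason(uc)
        if reason:
            blocked.append((uc, reason))
        else:
            cleared.append(uc)
    return cleared, blocked

cleared, blocked = assess([
    {"name": "Customer-service triage", "lineage_documented": True,
     "procurement_route": "framework", "regulatory_tier": 1},
    {"name": "Flagship pilot", "lineage_documented": False,
     "procurement_route": "framework", "regulatory_tier": 0},
])
for uc, reason in blocked:
    print(f"{uc['name']}: blocked ({reason})")  # the risk committee sees why, up front
```

Only cleared candidates proceed to valuation and ranking, and everything blocked carries an explicit reason. Run at this point, the checks cost almost nothing; discovered eighteen months into a flagship pilot, the same failures are the most expensive items in the programme.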
The discipline is in saying ‘no’ to use cases that do not earn their place, however politically attractive they might be.
Governance has to come first, not last. Every quarter we spend learning that the hard way costs more than the entire prioritisation exercise we should have run up front.
Tuli Faas, Risk & Governance Adviser
A structured layer between strategy and delivery
This is the layer Onepoint Differential™ is designed to be: an AI-powered smart advisor that helps enterprises systematically identify, evaluate, and prioritise AI use cases — delivering a risk-adjusted, value-prioritised, and governance-ready AI portfolio in weeks, not months.
A hands-on walkthrough of Onepoint Differential™ with Hugo Pickford-Wardle, Innovation Adviser at Onepoint
The pattern we see when this layer is in place is quite different from the science-fair model that has dominated the first wave of enterprise AI. Rather than a parallel set of pilots competing for attention, the organisation runs a single managed portfolio. Use cases enter through a structured discovery process. They are assessed against the same governance criteria. They are ranked against the same value model. They progress through gates that the leadership has agreed in advance.
That sounds bureaucratic. In practice, it is the opposite. A managed portfolio frees the organisation to move faster on the use cases that matter, because the time normally lost to relitigating the same arguments — is this safe, is this in scope, is this the right partner — has already been spent up front. Differential typically narrows a hundred-and-fifty-plus opportunity set to a governance-assessed shortlist in a few weeks rather than several months.
That is not a tooling story. The accelerator is the easy part. The harder and more valuable work is the structured engagement with operational leaders, the rapid validation of priority use cases, and the discipline of vendor-independent recommendations — an honest read on whether to build, buy, partner, or wait.
What this means for leaders
If your organisation has approved an AI strategy and is now staring at a hundred ideas, the question is not whether to invest. It is whether you have a defensible mechanism for choosing the ten that matter.
A few practical tests are worth applying:
- Could you defend your current AI portfolio to a risk committee on the basis of evidence rather than enthusiasm?
- Do your top-ranked use cases share a value model and a risk model, or were they chosen by different criteria in different rooms?
- Has anyone independent of the suppliers in the room scored the build-versus-buy question?
- Is the work you have not started — the use cases you said no to — as defensible as the work you have?
Answering those questions honestly is uncomfortable. It is also the work that separates organisations whose AI investment compounds from those whose AI investment scatters.
What it takes is unfashionable: pragmatism over hype, evidence over enthusiasm, governance designed in rather than bolted on, and a portfolio mindset rather than a parade of pilots. The organisations that get this right will not be the ones with the most pilots. They will be the ones that turn strategic intent into outcomes — reliably, transparently, and at a pace the rest of the market will struggle to match.
If you are working on translating an AI strategy into a credible delivery backlog, the Differential proposition is set out in full at onepointltd.com/differential. We would be delighted to compare notes.