Should We Add AI to This Feature Now, or Wait? A Decision Memo Template
Every product team I work with hits the same wall. Someone on the team suggests "we should add AI to this." Another person says "the tech isn't ready yet." A third person mentions they saw a competitor demo something similar. And then the conversation goes in circles for weeks while nothing gets built.
I've been through this at least a dozen times. At GiftPass, we debated for a month about whether to add AI-powered fraud detection to the gift card marketplace. At Khelo India, the question was whether AI could help with athlete performance tracking across 28 states. Each time, we wasted cycles because we didn't have a clear framework for making the decision.
So I built one. This is the decision memo template I now use with every team. It forces you to answer the right questions and makes the decision obvious.
The Core Question
The question isn't "should we add AI?" That's too vague. The real question is: "Will adding AI to this specific feature create enough value to justify the cost and risk, given our current constraints?"
That breaks down into four sub-questions:
- What specific user problem will AI solve better than the current solution?
- Do we have the data and infrastructure to build this?
- What's the cost in time, money, and opportunity?
- What happens if we wait 6 months instead?
If you can't answer all four clearly, you're not ready to decide.
The Scoring Framework
I score each AI feature proposal across five dimensions. Each dimension gets a score from 1 to 5, and the total tells you what to do.
| Dimension | Score 1 | Score 3 | Score 5 |
|---|---|---|---|
| User Value | Nice to have | Saves time weekly | Transforms workflow daily |
| Data Ready | Need to collect | Exists but messy | Clean and accessible |
| Tech Feasibility | R&D required | Needs some work | Standard patterns |
| Competitive Need | No one has it | Some competitors | Table stakes |
| Team Capacity | Fully booked | Can squeeze in | Bandwidth exists |
How to Interpret the Score
A score of 20+ is rare. When you see it, move fast. A score below 10 is a clear no. The tricky ones are in the 12-18 range, and that's where the memo becomes important.
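If it helps to make the bands explicit, here's a minimal sketch in Python. The dimension names and the 20+/below-10/12-18 cutoffs come straight from the framework above; treating the in-between totals (10-11 and 19) as part of the "write the memo" zone is my own reading, not a rule from the framework.

```python
# Minimal sketch of the scoring bands described above.
# Assumption: totals that fall between the stated bands (10-11 and 19)
# are treated as "write the full memo", since the framework doesn't name them.

DIMENSIONS = ("user_value", "data_readiness", "tech_feasibility",
              "competitive_need", "team_capacity")

def score_total(scores: dict) -> int:
    """Sum the five 1-5 dimension scores, validating the inputs."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    for name in DIMENSIONS:
        if not 1 <= scores[name] <= 5:
            raise ValueError(f"{name} must be between 1 and 5")
    return sum(scores[name] for name in DIMENSIONS)

def recommendation(total: int) -> str:
    """Map a total (5-25) onto the decision bands described above."""
    if total >= 20:
        return "Build now - move fast"
    if total < 10:
        return "Clear no - don't build"
    return "Gray zone - write the full memo"
```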
The Decision Memo Template
Here's the actual template I use. Fill this out before any meeting where you'll discuss adding AI to a feature.
1. Problem Statement
[What user problem does this solve? Be specific. "Users spend X minutes doing Y" is good. "It would be cool" is not.]
2. Proposed AI Solution
[What specifically will AI do? What's the input, what's the output? What decisions will it make or assist?]
3. Scoring
User Value: _/5
Data Ready: _/5
Tech Feasibility: _/5
Competitive Need: _/5
Team Capacity: _/5
Total: _/25
4. What We'd Need
[List specific requirements: data pipelines, APIs, team members, timeline]
5. Risks
[What could go wrong? What's the fallback if AI doesn't work?]
6. What Happens If We Wait
[Specific consequences of waiting 3, 6, 12 months. Include competitive risk.]
7. Decision
[Your recommendation with reasoning in 2-3 sentences]
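If your team keeps these memos in a repo or wiki, one option is to treat the template as structured data so the total never drifts from the section 3 scores. Here's a minimal sketch along those lines; the AIDecisionMemo name and its field names are mine, mapped one-to-one onto the seven sections above.

```python
from dataclasses import dataclass, field

@dataclass
class AIDecisionMemo:
    # Sections 1-2: the problem and the proposed AI solution
    problem_statement: str
    proposed_solution: str
    # Section 3: the five 1-5 dimension scores
    scores: dict = field(default_factory=dict)
    # Sections 4-7: requirements, risks, cost of waiting, recommendation
    requirements: list = field(default_factory=list)
    risks: list = field(default_factory=list)
    cost_of_waiting: str = ""
    decision: str = ""

    def total(self) -> int:
        return sum(self.scores.values())

    def to_markdown(self) -> str:
        """Render the memo in the same seven-section layout as the template above."""
        lines = [
            "1. Problem Statement", self.problem_statement,
            "2. Proposed AI Solution", self.proposed_solution,
            "3. Scoring",
        ]
        lines += [f"{name}: {value}/5" for name, value in self.scores.items()]
        lines.append(f"Total: {self.total()}/25")
        lines += ["4. What We'd Need"] + [f"- {item}" for item in self.requirements]
        lines += ["5. Risks"] + [f"- {item}" for item in self.risks]
        lines += ["6. What Happens If We Wait", self.cost_of_waiting]
        lines += ["7. Decision", self.decision]
        return "\n".join(lines)
```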
Real Examples
Let me show you how this works in practice with three real decisions I've made.
Example 1: AI Search for HexaHealth (Score: 22 → Built)
The problem: Patients were struggling to find the right doctors and procedures on our platform. They'd search for "knee pain" and get results for orthopedic surgeons, physical therapists, and general practitioners all mixed together.
The scoring:
- User Value: 5 - Search was the #1 support ticket topic
- Data Ready: 4 - We had procedure data, needed to structure it better
- Tech Feasibility: 5 - Standard semantic search pattern
- Competitive Need: 4 - Competitors had basic search, not AI-powered
- Team Capacity: 4 - Could dedicate 2 engineers for 6 weeks
Decision: Build now. We shipped it in 8 weeks. Search-to-booking conversion went up 34%.
Example 2: AI Performance Predictions for Khelo India (Score: 11 → Delayed)
The problem: Coaches wanted to predict which athletes would perform well at upcoming events so they could focus training resources.
The scoring:
- User Value: 4 - Coaches definitely wanted this
- Data Ready: 1 - Historical performance data was scattered across 28 state systems
- Tech Feasibility: 2 - Would need custom ML models, not just LLM calls
- Competitive Need: 2 - Government project, no direct competitor
- Team Capacity: 2 - Team was fully committed to core platform
Decision: Revisit after data consolidation. We focused on getting clean data pipelines first. The AI feature went on the Year 2 roadmap.
Example 3: AI Fraud Alerts for GiftPass (Score: 18 → Built with constraints)
The problem: Gift card marketplace had fraud attempts that manual review couldn't catch fast enough. We were losing money on fraudulent transactions.
The scoring:
- User Value: 3 - Users don't see fraud prevention directly
- Data Ready: 4 - Transaction logs were clean and comprehensive
- Tech Feasibility: 4 - Pattern detection is well-established
- Competitive Need: 3 - Standard for FinTech
- Team Capacity: 4 - Had a senior engineer available
Decision: Build with constraints. We built a rules-based system with AI assistance for edge cases, rather than fully autonomous AI detection. Shipped in 5 weeks. False positives dropped 60%.
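For what it's worth, plugging these three decisions into the earlier scoring sketch reproduces the calls above; the dictionaries below just restate the bullet scores.

```python
# Scores restated from the three examples above, fed through the earlier sketch.
examples = {
    "HexaHealth AI search": {
        "user_value": 5, "data_readiness": 4, "tech_feasibility": 5,
        "competitive_need": 4, "team_capacity": 4,
    },  # total 22 -> build now
    "Khelo India performance predictions": {
        "user_value": 4, "data_readiness": 1, "tech_feasibility": 2,
        "competitive_need": 2, "team_capacity": 2,
    },  # total 11 -> gray zone; the memo led to a delay
    "GiftPass fraud alerts": {
        "user_value": 3, "data_readiness": 4, "tech_feasibility": 4,
        "competitive_need": 3, "team_capacity": 4,
    },  # total 18 -> gray zone; the memo led to "build with constraints"
}

for name, scores in examples.items():
    total = score_total(scores)
    print(f"{name}: {total}/25 -> {recommendation(total)}")
```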
When to Wait
Sometimes waiting is the right call. Here are legitimate reasons to delay:
Wait If: Your Data Isn't Ready
AI features without good data are just expensive random number generators. If your data readiness score is below 3, spend the quarter fixing that first. The AI feature will be 10x easier to build on clean data.
Wait If: The Tech Is Genuinely Immature
In early 2024, many teams tried to build AI agents that could take autonomous actions. Most failed because the technology wasn't reliable enough. By late 2024, the models improved significantly. Sometimes waiting 6 months means the problem becomes 3x easier to solve.
Wait If: You're Chasing a Competitor Demo
Demos lie. I've seen countless impressive AI demos that fell apart in production. If you're building something just because a competitor showed a demo, take a breath. Talk to their actual users. Often the reality is much less impressive than the marketing.
Don't Wait If: You're Just Scared
This is the trap. Teams delay AI features because they're uncertain about the technology. Uncertainty is uncomfortable. But the only way to learn is to ship something. If your score is above 15, the uncertainty isn't a reason to wait. It's a reason to start small and iterate.
The One-Pager Version
If you don't have time for the full memo, here's the fast version:
1. Can you describe the user problem in one sentence? If no, stop. You're not ready.
2. Do you have the data to train or prompt the AI? If no, fix the data first, then revisit.
3. Can you ship a basic version in 8 weeks? If no, you're overscoping. Simplify or wait for capacity.
4. Do you have a fallback if AI quality is poor? If no, add one. Never ship AI without a safety net.
If you answer yes to all four, build it. If you answer no to any, address that gap first.
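If you want the one-pager as an actual pre-flight check, the four questions reduce to four booleans. A small sketch, with flag names of my own choosing:

```python
def ready_to_build(problem_one_sentence: bool,
                   data_available: bool,
                   shippable_in_8_weeks: bool,
                   fallback_exists: bool) -> str:
    """One-pager gate: every answer must be yes before you build."""
    gaps = []
    if not problem_one_sentence:
        gaps.append("sharpen the problem statement")
    if not data_available:
        gaps.append("fix the data first")
    if not shippable_in_8_weeks:
        gaps.append("simplify the scope or wait for capacity")
    if not fallback_exists:
        gaps.append("add a fallback before shipping")
    return "Build it" if not gaps else "Address first: " + "; ".join(gaps)
```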
Common Mistakes
After using this framework with a dozen teams, I've seen the same mistakes repeatedly:
Mistake 1: Scoring based on excitement, not evidence. User Value should be based on actual user research or support tickets, not what you think users will love. If you're guessing, score it a 3 maximum.
Mistake 2: Ignoring capacity constraints. A great idea with no engineers to build it is just a nice thought. Be honest about what your team can actually take on.
Mistake 3: Waiting for perfect data. Data is never perfect. Score 4 means "good enough to start." You don't need a 5 to begin.
Mistake 4: Building the full vision first. Always scope to the smallest useful version. You can add sophistication later. The first version of every AI feature I've shipped was embarrassingly simple compared to the final product.
The Bottom Line
The decision to add AI to a feature isn't mysterious. It's a prioritization exercise like any other product decision. The framework forces you to be specific about value, honest about constraints, and realistic about timing.
Use the memo template. Score honestly. And remember that "not now" isn't the same as "never." The best AI features I've shipped were ones where we waited until we were genuinely ready, then moved fast.
The right time to add AI is when you can clearly articulate the user problem, you have the data to solve it, and you have the capacity to iterate. If any of those are missing, wait. If all three are present, stop debating and start building.

— Nasr Khan