Should We Add AI To This Feature Now Or Wait: A Decision Memo Template

AI Feature Decision Matrix (user value × feasibility):

  BUILD NOW — high value and feasible: ship this quarter
  PLAN FOR LATER — high value but hard: invest in foundations
  EXPERIMENT — low risk to try: test and learn
  DON'T BUILD — hard and low value: skip it

Every product team I work with hits the same wall. Someone on the team suggests "we should add AI to this." Another person says "the tech isn't ready yet." A third person mentions they saw a competitor demo something similar. And then the conversation goes in circles for weeks while nothing gets built.

I've been through this at least a dozen times. At GiftPass, we debated for a month about whether to add AI-powered fraud detection to the gift card marketplace. At Khelo India, the question was whether AI could help with athlete performance tracking across 28 states. Each time, we wasted cycles because we didn't have a clear framework for making the decision.

So I built one. This is the decision memo template I now use with every team. It forces you to answer the right questions and makes the decision obvious.

The Core Question

The question isn't "should we add AI?" That's too vague. The real question is: "Will adding AI to this specific feature create enough value to justify the cost and risk, given our current constraints?"

That breaks down into four sub-questions:

  1. What specific user problem will AI solve better than the current solution?
  2. Do we have the data and infrastructure to build this?
  3. What's the cost in time, money, and opportunity?
  4. What happens if we wait 6 months instead?

If you can't answer all four clearly, you're not ready to decide.

The Scoring Framework

I score each AI feature proposal across five dimensions. Each gets a score from 1-5. The total tells you what to do.

Dimension        | Score 1         | Score 3           | Score 5
User Value       | Nice to have    | Saves time weekly | Transforms workflow daily
Data Ready       | Need to collect | Exists but messy  | Clean and accessible
Tech Feasibility | R&D required    | Needs some work   | Standard patterns
Competitive Need | No one has it   | Some competitors  | Table stakes
Team Capacity    | Fully booked    | Can squeeze in    | Bandwidth exists

How to Interpret the Score

20-25: Build Now
15-19: Plan This Quarter
10-14: Revisit Later
5-9: Don't Build

A score of 20+ is rare. When you see it, move fast. A score below 10 is a clear no. The tricky ones are in the 12-18 range, and that's where the memo becomes important.
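The rubric is mechanical enough to sketch in code. Here's a minimal Python version of the scoring and banding logic, using the tables above; the dimension keys are my own shorthand, and the example scores are hypothetical (the memo doesn't publish per-dimension breakdowns):

```python
# Score bands from the interpretation table above: (minimum total, verdict).
BANDS = [
    (20, "Build Now"),
    (15, "Plan This Quarter"),
    (10, "Revisit Later"),
    (5, "Don't Build"),
]

DIMENSIONS = ["user_value", "data_ready", "tech_feasibility",
              "competitive_need", "team_capacity"]

def recommend(scores: dict[str, int]) -> tuple[int, str]:
    """Sum the five 1-5 dimension scores and map the total to a verdict."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    if any(not 1 <= scores[d] <= 5 for d in DIMENSIONS):
        raise ValueError("each dimension must score 1-5")
    total = sum(scores[d] for d in DIMENSIONS)
    for floor, verdict in BANDS:
        if total >= floor:
            return total, verdict
    return total, "Don't Build"  # unreachable: minimum possible total is 5

# Hypothetical breakdown for a high scorer (totals 22).
total, verdict = recommend({
    "user_value": 5, "data_ready": 4, "tech_feasibility": 5,
    "competitive_need": 4, "team_capacity": 4,
})
print(total, verdict)  # 22 Build Now
```

Encoding the bands also makes the boundary cases explicit: a 15 is "Plan This Quarter", not "Revisit Later", which is exactly the kind of ambiguity that otherwise eats a meeting.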

The Decision Memo Template

Here's the actual template I use. Fill this out before any meeting where you'll discuss adding AI to a feature.

DECISION MEMO: AI Feature Evaluation
Feature: [Name of feature]
Date: [Today's date]
Author: [Your name]
Recommendation: [BUILD NOW / PLAN FOR Q_ / REVISIT IN 6MO / DON'T BUILD]

1. Problem Statement

[What user problem does this solve? Be specific. "Users spend X minutes doing Y" is good. "It would be cool" is not.]

2. Proposed AI Solution

[What specifically will AI do? What's the input, what's the output? What decisions will it make or assist?]

3. Scoring

User Value: _/5
Data Readiness: _/5
Tech Feasibility: _/5
Competitive Need: _/5
Team Capacity: _/5
Total: _/25

4. What We'd Need

[List specific requirements: data pipelines, APIs, team members, timeline]

5. Risks

[What could go wrong? What's the fallback if AI doesn't work?]

6. What Happens If We Wait

[Specific consequences of waiting 3, 6, 12 months. Include competitive risk.]

7. Decision

[Your recommendation with reasoning in 2-3 sentences]

Real Examples

Let me show you how this works in practice with three real decisions I've made.

Example 1: AI Search for HexaHealth (Score: 22 → Built)

The problem: Patients were struggling to find the right doctors and procedures on our platform. They'd search for "knee pain" and get results for orthopedic surgeons, physical therapists, and general practitioners all mixed together.


Decision: Build now. We shipped it in 8 weeks. Search-to-booking conversion went up 34%.

Example 2: AI Performance Predictions for Khelo India (Score: 11 → Delayed)

The problem: Coaches wanted to predict which athletes would perform well at upcoming events so they could focus training resources.


Decision: Revisit after data consolidation. We focused on getting clean data pipelines first. The AI feature went on the Year 2 roadmap.

Example 3: AI Fraud Alerts for GiftPass (Score: 18 → Built with constraints)

The problem: Gift card marketplace had fraud attempts that manual review couldn't catch fast enough. We were losing money on fraudulent transactions.


Decision: Build with constraints. We built a rules-based system with AI assistance for edge cases, rather than fully autonomous AI detection. Shipped in 5 weeks. False positives dropped 60%.

Pattern
Notice that "User Value" isn't always the highest score on things we build. The fraud detection scored 3 on user value but was critical for business survival. The framework helps you see the full picture.

When to Wait

Sometimes waiting is the right call. Here are legitimate reasons to delay:

Wait If: Your Data Isn't Ready

AI features without good data are just expensive random number generators. If your data readiness score is below 3, spend the quarter fixing that first. The AI feature will be 10x easier to build on clean data.

Wait If: The Tech Is Genuinely Immature

In early 2024, many teams tried to build AI agents that could take autonomous actions. Most failed because the technology wasn't reliable enough. By late 2024, the models improved significantly. Sometimes waiting 6 months means the problem becomes 3x easier to solve.

Wait If: You're Chasing a Competitor Demo

Demos lie. I've seen countless impressive AI demos that fell apart in production. If you're building something just because a competitor showed a demo, take a breath. Talk to their actual users. Often the reality is much less impressive than the marketing.

Don't Wait If: You're Just Scared

This is the trap. Teams delay AI features because they're uncertain about the technology. Uncertainty is uncomfortable. But the only way to learn is to ship something. If your score is above 15, the uncertainty isn't a reason to wait. It's a reason to start small and iterate.

💡 The cost of waiting isn't zero. Every month you delay, competitors learn and users develop workarounds. Factor this into your decision.

The One-Pager Version

If you don't have time for the full memo, here's the fast version:

  1. Can you describe the user problem in one sentence?
     If no, stop. You're not ready.

  2. Do you have the data to train/prompt the AI?
     If no, fix data first. Then revisit.

  3. Can you ship a basic version in 8 weeks?
     If no, you're overscoping. Simplify or wait for capacity.

  4. What's your fallback if AI quality is poor?
     If no fallback, add one. Never ship AI without a safety net.

If you answer yes to all four, build it. If you answer no to any, address that gap first.

Common Mistakes

After using this framework with a dozen teams, I've seen the same mistakes repeatedly:

Mistake 1: Scoring based on excitement, not evidence. User Value should be based on actual user research or support tickets, not what you think users will love. If you're guessing, score it a 3 maximum.

Mistake 2: Ignoring capacity constraints. A great idea with no engineers to build it is just a nice thought. Be honest about what your team can actually take on.

Mistake 3: Waiting for perfect data. Data is never perfect. Score 4 means "good enough to start." You don't need a 5 to begin.

Mistake 4: Building the full vision first. Always scope to the smallest useful version. You can add sophistication later. The first version of every AI feature I've shipped was embarrassingly simple compared to the final product.

The Bottom Line

The decision to add AI to a feature isn't mysterious. It's a prioritization exercise like any other product decision. The framework forces you to be specific about value, honest about constraints, and realistic about timing.

Use the memo template. Score honestly. And remember that "not now" isn't the same as "never." The best AI features I've shipped were ones where we waited until we were genuinely ready, then moved fast.

The right time to add AI is when you can clearly articulate the user problem, you have the data to solve it, and you have the capacity to iterate. If any of those are missing, wait. If all three are present, stop debating and start building.

— Nasr Khan