Why your AI prompts feel generic — and how solo consultants fix it
You ask Claude to draft a section of a strategy memo. What comes back is technically on-topic and unmistakably forgettable. The sentences are grammatical. The paragraphs are organized. But the output couldn't have been written by anyone who's actually been in the meetings, met the client, or formed a point of view about the industry — and that's exactly what the client is paying you for.
So you rewrite it from scratch and chalk it up to AI not being good enough yet.
The output is generic. The reason isn't that AI isn't good enough. It's that the brief was generic, and AI faithfully produced what was asked for. The fix doesn't require a better model; it requires a different prompt structure. There are roughly four ways generic AI output happens, and each has a specific cure.
This article is for solo consultants and small agency owners who use Claude or ChatGPT regularly but keep producing output that gets edited out of every deliverable. If you're new to AI and want the broader frame for which tasks to give it in the first place, start here instead. The rest of this assumes you've already decided this is a Category 2 task and you're trying to get a usable draft.
The four reasons your AI output is generic
Reason 1: You didn't describe the output in 100 words
Most prompts that produce generic text are vague briefs. The prompt asks for "a section about onboarding" or "an executive summary of this engagement," and AI does what any junior writer would do given that brief — produces something competent, on-topic, and unspecific.
The fix is the briefing discipline. Before opening Claude, write a short brief in your head or on paper:
- What is this section's job in the document? Not the topic — the job. Is it the part that builds the case for the recommendation, or the part that pre-handles the obvious objection, or the part that gives the reader the one number they'll quote later?
- What's the one claim? Every well-written section advances one claim. Vague sections advance three loosely-related ones.
- What's the evidence I'll point at? Bullet the facts you actually have. Don't ask AI to make some up — that's a different failure mode (see Reason 4).
- What's the rhythm? A two-paragraph open-and-amplify, a five-paragraph thread, a single deliberate one-sentence conclusion?
Once you've done this — usually 90 seconds — paste the brief in front of the actual ask. The output will be 3× sharper for 1.5× the effort.
Reason 2: You're asking AI to generate instead of expand
This is the failure mode that destroys consulting deliverables. You ask AI to write a 600-word section from a topic prompt. AI doesn't have your data, your interview notes, or your judgment, so it invents. The output is fluent and hollow.
The fix is the inverse workflow: outline + bullets + AI-as-prose-engine, not AI-as-author.
You write the bullets. The bullets are your thinking expressed in shorthand — the specific data, the specific quote from the interview, the specific recommendation, the specific evidence. Then AI turns the bullets into paragraphs. Your output now contains your thinking, in fluent prose, in less than 20% of the time.
The shape of the prompt looks like this:
Turn the following bullets into [section] of [document type]. Output is the
section text only — no headers I didn't include, no commentary, no marketing
voice.
Rules:
- Match the voice of the example below
- Keep all facts and specifics from my bullets. Do not generalize them.
- Do not add facts I didn't include. If a transition needs context I haven't
given you, leave a [TK note] and let me fill it in.
- Sentences should average 15–20 words. Vary length.
- End on a substantive sentence, not a transition.
Voice example (1–2 paragraphs of my actual writing):
[PASTE]
My bullets:
[PASTE]
Two parts of this prompt do most of the work. The voice example pulls AI's prose toward yours. The "do not add facts I didn't include — leave a [TK note]" line stops AI from inventing the things it doesn't have.
This is the prompt that does the most work in any consulting writing assignment. It's covered in detail in Chapter 6 of the Playbook.
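If you'd rather not re-paste this structure into the chat window every time, the same prompt can live in a small script. Below is a minimal sketch using the Anthropic Python SDK; the model name and the draft_section helper are illustrative placeholders, not anything prescribed by the Playbook.
# A minimal sketch of the bullets-to-prose prompt as a reusable script.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set.
import anthropic

PROMPT_TEMPLATE = """Turn the following bullets into {section} of {doc_type}. Output is the
section text only - no headers I didn't include, no commentary, no marketing voice.

Rules:
- Match the voice of the example below.
- Keep all facts and specifics from my bullets. Do not generalize them.
- Do not add facts I didn't include. If a transition needs context I haven't
  given you, leave a [TK note] and let me fill it in.
- Sentences should average 15-20 words. Vary length.
- End on a substantive sentence, not a transition.

Voice example:
{voice_example}

My bullets:
{bullets}
"""

def draft_section(section, doc_type, voice_example, bullets):
    # One call, one section. The rules and the voice example stay fixed;
    # only the bullets change per deliverable.
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder: use the current model name
        max_tokens=1500,
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(
                section=section,
                doc_type=doc_type,
                voice_example=voice_example,
                bullets=bullets,
            ),
        }],
    )
    return message.content[0].text
The point isn't automation for its own sake. It's that the rules and the voice example are written once, and only the bullets change from deliverable to deliverable.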
Reason 3: You didn't give it a voice example
If reasons 1 and 2 are about what AI writes, this one is about how. AI defaults to a particular dialect — slightly hedged, three-part-list-friendly, transition-announcing, summary-paragraph-loving. Anyone who reads consulting writing for a living can spot it in the first sentence.
The fix takes 10 seconds: paste a sample of your own writing into the prompt and tell AI to match the rhythm. Two paragraphs is enough.
What "match the rhythm" actually means in prompt terms:
- Sentence length distribution
- Vocabulary register (do you use "leverage" as a verb? Probably not — and you want AI to avoid it)
- Punctuation habits (are you an em-dash person? AI is by default; if you're not, say so)
- The shape of how you open and close sections (do you front-load the claim, or let it emerge?)
The sample doesn't have to be from the same document. A LinkedIn post you wrote, a paragraph from a past memo, an excerpt from your About page — anything that sounds like you. Pin it to a Claude Project so you don't have to re-paste.
If you're unsure whether AI is matching, run a separate "voice checker" pass on the output. The prompt: "Tell me which sentences in this section read as AI-flavored. Don't edit; identify." Claude is shockingly good at flagging its own tells if you tell it to look.
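If you've scripted the drafting step, the voice-checker pass can run the same way: a second, separate call whose only job is to flag, not edit. A minimal sketch along the same lines (again, the model name is a placeholder):
# Sketch of a separate voice-checker pass: flag AI-flavored sentences, don't edit them.
import anthropic

VOICE_CHECK_PROMPT = (
    "Tell me which sentences in this section read as AI-flavored. "
    "Don't edit; identify. Quote each sentence and name the tell.\n\n"
    "Section:\n{section_text}"
)

def voice_check(section_text):
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder: use the current model name
        max_tokens=800,
        messages=[{
            "role": "user",
            "content": VOICE_CHECK_PROMPT.format(section_text=section_text),
        }],
    )
    return message.content[0].text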
Reason 4: You're trusting AI in your blind spot
Sometimes the output isn't generic — it's confidently wrong. You ask AI a domain question outside your expertise (legal language, technical specifications, regulatory framing) and it produces a fluent, plausible answer that doesn't survive scrutiny. You don't catch it because the language doesn't trigger your "this seems off" instinct.
This isn't a prompting problem. It's a category error. AI is fine for tasks where you can spot drift in the output. It is dangerous for tasks where you can't.
The cure is the second question of the three-question screen: can I tell at a glance whether the output is good? If the answer is no, do the task yourself. Or pay a specialist.
The pattern is most common in proposal-writing, where consultants reach for AI to draft the legal-or-financial-sounding boilerplate. Don't. Either reuse a clause from a vetted past contract, or ask the lawyer.
Putting the four together: a worked example
Suppose you're writing the recommendations section of a strategy memo for a B2B SaaS client whose customer-success metrics dropped after a pricing overhaul. You've done the analysis. You have three recommendations. You need to draft 600 words.
Bad prompt:
Write the recommendations section of a strategy memo for a SaaS client whose
customer success metrics dropped after a pricing change. Make three
recommendations.
This produces three plausible-sounding, totally generic recommendations that don't match your data and could apply to any SaaS company in the world.
Good prompt:
Turn the following bullets into the "Recommendations" section of a strategy
memo for [Client] whose customer-success NPS dropped from 41 to 28 after the
March pricing overhaul.
Rules:
- Match the voice of the example below.
- Keep all facts and specifics. Do not generalize them.
- Do not add facts I didn't include. Use [TK note] for missing transitions.
- Average 15–20 word sentences.
- End each recommendation with the one concrete first action, not a summary.
Voice example: [paste 2 paragraphs of your last memo]
Bullets:
Recommendation 1: Roll back the auto-upgrade to annual on tier 2.
- Discovered in interviews: 7 of 12 tier-2 churners cited "surprise annual
charge" as the trigger
- Cost of rollback: roughly $180k ARR exposed (per Finance, April 12 model)
- Time to implement: 2 weeks if Eng prioritizes; otherwise 6
- First action: brief Sarah (CRO) by April 30 with the churn-interview data
Recommendation 2: ...
[and so on]
This produces the recommendations section in your voice, anchored to your data, with concrete first actions, in three minutes instead of 90.
The bullets are 15 minutes of work. The prompt is reusable. The output requires light editing, not rewriting. That's the inversion.
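For anyone who scripted the prompt earlier, reuse looks like this: the hypothetical draft_section helper from Reason 2 stays the same, and only the inputs (plus any section-specific rules) change per engagement. The file names here are placeholders.
# Reusing the draft_section sketch from Reason 2: only the inputs change.
draft = draft_section(
    section='the "Recommendations" section',
    doc_type=("a strategy memo for [Client] whose customer-success NPS dropped "
              "from 41 to 28 after the March pricing overhaul"),
    voice_example=open("last_memo_excerpt.txt").read(),   # placeholder file names
    bullets=open("recommendation_bullets.txt").read(),
)
print(draft)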
What this looks like as a habit
Most consultants who try AI a few times and conclude "the output is too generic for my work" are correct about the output and wrong about the cause. The fix isn't a better model or more prompts; it's a sharper brief and the bullets-first workflow.
The habit takes about three engagements to internalize. After that, it becomes the default, and you'll notice the time savings concretely — what was a 90-minute deliverable section becomes a 20-minute draft-and-edit.
What to do this week
Pick the next deliverable section you have to write. Before opening Claude, do the 5 minutes of brief-writing and bullet-listing. Open Claude, paste the prompt structure from this article (with your voice example), get the draft. Time the edit pass.
If the result still feels generic, the failure point is one of these four — and it's almost always Reason 1 (the brief was vague) or Reason 2 (you asked AI to generate when you should have asked it to expand).
Going deeper
This article distills the workflow from Chapter 6 of The Solo Consultant's AI Playbook, which adds the outline-review prompt, the voice-checker prompt, and a worked example of writing a documented ops playbook section from interview notes. If deliverable drafting is the workflow you're losing the most hours to right now, the chapter is the deeper version.
The framing for which tasks to give AI in the first place — Category 1 / 2 / 3 — is in the previous article. If you find yourself asking AI to do Category 1 work (the stuff that requires your judgment), no prompt structure on earth will fix the output. That's the upstream filter.
— Digital Kreative