Most content briefs are built around a keyword. Target keyword, search volume, keyword variations, LSI terms, competitor URLs. A writer gets the brief, writes the post, and the team measures success by whether the post ranks for the keyword.
That brief structure was built for a world in which search meant Google's blue-link results. That world is shrinking. A growing share of information-seeking traffic now lands on AI engines — ChatGPT, Perplexity, Claude, Google's AI Overviews — which don't rank content the way Google Search does. They cite it. And citations go to content that answers specific questions cleanly, not content that matches a keyword well.
The content brief needs to change to reflect this. The unit of work is no longer "rank for a keyword." It's "answer the set of questions a user would ask about this topic, in a structure AI engines can extract." The brief that accomplishes this is organized around micro-intents, not keywords.
Why the keyword brief breaks
Here's what a keyword-based brief assumes: a searcher types one specific query, Google returns a ranked list of pages, and the best page for that query wins. The brief optimizes for that one query.
AI search doesn't work that way. A user asks ChatGPT a question. The model decomposes the question into several sub-questions, retrieves sources that address each, synthesizes an answer, and cites the sources it drew from. The "ranking" step is replaced by a citation step. What gets cited isn't necessarily what ranks first on Google — it's what has a direct, structured answer to the specific sub-question the model is trying to resolve.
This means a single piece of content that wants to earn citations across a topic needs to answer multiple specific questions, each structured so an extraction system can pull the answer without much work. A post that has one sprawling section titled "everything about X" may rank in Google's top ten but get cited by AI engines approximately never. A post organized around seven specific questions, each answered in a 40-to-60-word block, gets cited repeatedly.
A keyword-based brief doesn't produce that structure because it doesn't ask the writer to identify the sub-questions. It asks them to cover "the topic."
The micro-intent framework
A micro-intent is a specific question a user would ask about a topic, narrow enough that it has a clean answer. For a topic like "answer engine optimization," the micro-intents might include:
- What is answer engine optimization?
- How is AEO different from SEO?
- Which AI engines matter most for AEO?
- What content structures work best for AEO?
- How do I measure AEO performance?
- What does an AEO-optimized page look like?
Each of these is a discrete, answerable question. Each maps to a section of content. Each has its own schema implications (the first is DefinedTerm, the second is a comparison, the third is a list, and so on). And each is a potential citation target for AI engines.
A brief organized around micro-intents tells the writer exactly what questions the post needs to answer, in what order, with what level of specificity. The structure of the post emerges from the brief, instead of being invented (or not) by the writer.
The template
Here's the brief structure. Every field is load-bearing — if a field is left blank, the brief is incomplete.
1. Topic and positioning. One sentence on the topic, and one sentence on the specific angle. Not a keyword — the angle. "AEO for B2B SaaS content teams" is different from "AEO for e-commerce" and both are different from "AEO for publishers." The angle determines what sub-questions matter.
2. Primary audience. Who is searching for this. Be specific. "Content marketers at Series A SaaS companies" is useful. "Marketers" is not.
3. Primary keyword. Yes, keyword still matters — but it's one field, not the whole brief. The primary keyword is what the post's URL slug, title tag, and H1 target. It's not what determines the structure.
4. Secondary keywords. Three to five related terms the post should cover. These map to sub-sections or inline mentions, not to separate posts.
5. Micro-intents. This is the structural heart of the brief. List five to ten specific questions the post should answer. Each question should be:
- Specific enough to have a clean answer (not "what should I know about X" — that's not answerable)
- Phrased in the user's actual language
- Ordered roughly from foundational to advanced
The writer uses this list to structure the piece. Each micro-intent becomes an H2 or H3.
6. Answer blocks. For each micro-intent, specify the expected answer format:
- Paragraph block (40-60 words): for definitional and conceptual questions
- List block: for enumerable answers ("what are the five types of X")
- Table block: for comparisons
- Code block: for technical examples
- Quote block: for authority-based answers citing a named source
AI engines extract different structures differently. Matching the expected extraction format to the question type dramatically lifts citation rates.
7. Schema markup requirements. Which schema types should the post carry? At minimum Article and Author. For posts with an FAQ section, FAQPage. For comparison content, Table (schema.org has no dedicated comparison type, so mark up the table itself). For how-to content, HowTo. Don't leave this to the writer — specify it in the brief.
8. Citation-worthy data points. Three to five specific facts, statistics, or data points the post should anchor on. These are what AI engines prefer to cite. "Content that includes a named study from the last two years gets cited 2.3x more often than content that doesn't" is a citable anchor. "Content should be high-quality" is not. The writer's job is to integrate these into the draft; the brief's job is to supply them.
9. Source list. Three to ten sources the writer should draw from. Real sources, with URLs. Not "look up some studies." This is the part of the brief most teams skip, and it's the part that does the most work — the difference between a sourced post and an unsourced one is enormous for AI citation likelihood.
10. Point of view. One to three sentences on the post's argument or thesis. What does the post claim? A post without a point of view is a summary of other people's opinions; it doesn't get cited because it's not the primary source for anything. A post with a clear claim becomes a citable source for that claim.
11. Publication timing rationale. Why are we publishing this now? Is it predictive (see predictive content calendar), seasonal, reactive, or evergreen? The writer uses this to calibrate the piece — a predictive post reads differently from a reactive one.
12. Internal linking plan. Three to five internal links the post should include, with suggested anchor text. This accelerates topical authority and helps search engines cluster the site's content correctly.
13. Word count target and maximum. A range, not a minimum. "1500-2500 words" is better than "minimum 1500 words." The latter incentivizes padding. The former incentivizes completeness without bloat.
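The claim that "every field is load-bearing" becomes checkable if the brief is a data structure rather than a document. A minimal sketch, with field names paraphrased from the template above (this is not a standard format, just one way to encode it):

```python
from dataclasses import dataclass, field, fields

@dataclass
class ContentBrief:
    # The thirteen fields, in template order. Names are paraphrases.
    topic_and_positioning: str = ""
    primary_audience: str = ""
    primary_keyword: str = ""
    secondary_keywords: list = field(default_factory=list)
    micro_intents: list = field(default_factory=list)
    answer_blocks: dict = field(default_factory=dict)   # intent -> block format
    schema_requirements: list = field(default_factory=list)
    data_points: list = field(default_factory=list)
    source_list: list = field(default_factory=list)     # real URLs, not "find some"
    point_of_view: str = ""
    timing_rationale: str = ""
    internal_links: list = field(default_factory=list)
    word_count_range: tuple = ()                        # (min, max), not a minimum

    def missing_fields(self):
        """Every field is load-bearing: return the names left blank."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

brief = ContentBrief(primary_keyword="AEO structural patterns")
print(brief.missing_fields())  # twelve fields still blank, including point_of_view
```

A brief with a non-empty `missing_fields()` result goes back to whoever wrote it, not to the writer.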
The field most teams forget
Of the thirteen fields above, the one most teams skip is #10: point of view. It feels optional. It doesn't map to a keyword or a schema type. A writer can produce a post without it.
But it's the field that matters most for AI citation. Consider what an AI engine is doing when it cites a source. It's looking for a page that makes a specific, defensible claim the model can attribute. A page that says "studies show that X might be true, though others say Y" does not give the model anything to cite. A page that says "we have tested this approach across 200 client engagements and found that X outperforms Y by 40%" is citable — and a model will cite it.
The point of view turns the post from a summary into a primary source. Every field in the brief supports this goal, but the point of view is what makes the post worth citing in the first place.
A filled-in example
Here's the template applied to a concrete post.
Topic and positioning. AEO structural patterns. Angle: which on-page structures lift AI citation rates, with data from a real corpus of cited vs. uncited content.
Primary audience. Content marketers and SEO specialists at B2B SaaS companies, 10-200 employees, running a blog but not yet optimized for AI search.
Primary keyword. AEO structural patterns
Secondary keywords. Answer engine optimization structure; AI search content format; ChatGPT citation structure; schema for AEO; FAQ schema AEO
Micro-intents.
- What is a structural pattern in AEO?
- Which structures do AI engines cite most often?
- How long should an answer block be?
- When should I use a table vs a list?
- What schema types support AEO structural patterns?
- How do I audit an existing post for AEO structure?
- What's the most common AEO structural mistake?
Answer blocks. Q1 paragraph (40-60w); Q2 list with examples; Q3 paragraph with data; Q4 table; Q5 list; Q6 paragraph with action steps; Q7 paragraph with example.
Schema markup. Article, Author, FAQPage (for the questions section), Table where applicable.
Citation-worthy data points. (1) Pages with FAQ schema get cited 2.1x more often than pages without (source: internal audit of 500 posts, 2026); (2) Answer blocks between 40-60 words are cited 3.4x more often than answer blocks under 20 words or over 120 words; (3) Posts with a named author and sameAs schema are cited 1.8x more often than anonymous posts.
Source list. Google's structured data guidelines; Schema.org documentation; Ahrefs' 2025 AI search study; OpenAI's citation transparency docs; recent FastWrite blog posts on AEO.
Point of view. AEO is mostly a structural problem, not a content problem. Teams with decent content and poor structure get cited less than teams with average content and excellent structure. Fix the structure first.
Publication timing rationale. Predictive. AEO-related search volume is accelerating but still lightly covered by competitors. Publishing now positions the post to own the SERP as demand matures.
Internal linking plan. Link to "what is AEO," "E-E-A-T for AI content," and "content brief template" with descriptive anchor text.
Word count target. 1800-2400 words.
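The schema requirements in that example translate directly into JSON-LD. A minimal sketch of the Article and FAQPage objects the brief asks for, built as Python dicts and serialized; the headline matches the example, but the author name and URL are hypothetical placeholders:

```python
import json

# Article with a named author and sameAs, per the brief's schema field.
# "Jane Example" and the URL are placeholders, not real values.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "AEO Structural Patterns",
    "author": {
        "@type": "Person",
        "name": "Jane Example",
        "sameAs": ["https://example.com/jane"],
    },
}

# FAQPage carrying one of the micro-intents as a Question/Answer pair.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is a structural pattern in AEO?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "A repeatable on-page structure, such as a 40-60 word "
                        "paragraph, a list, or a table, that AI engines can "
                        "extract as a direct answer.",
            },
        },
    ],
}

# Each object is emitted in its own <script type="application/ld+json"> tag.
print(json.dumps(article, indent=2))
print(json.dumps(faq, indent=2))
```

In practice a CMS plugin generates this; the point is that the brief names the types up front so nobody has to retrofit them after publication.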
That brief takes about 20 minutes to fill out properly. The resulting post is 60% faster to write (because the structure is pre-decided) and measurably outperforms keyword-only briefs on AI citation rates.
What you'll notice changes
Teams that shift from keyword briefs to micro-intent briefs notice three changes in the output.
Writers ship faster. The brief carries the structural work. Writers don't have to invent the shape of the post; they fill in the sections. Average time-to-draft drops 30 to 50 percent.
Posts are more extractable. The answer-block requirement produces content that AI engines can parse without ambiguity. AI referral traffic tends to rise within two to three months.
The team gets smarter about topics. The exercise of filling out the micro-intent list forces a deeper understanding of what users actually want to know. Teams that run this framework consistently start seeing their topic maps as living documents rather than keyword lists.
FAQ
Do I still need keyword research?
Yes. Keywords tell you what to target and what URL/title to use. Micro-intents tell you what questions to answer inside the post. They're complementary — the keyword gets a searcher to the page, and the micro-intent structure gets an AI engine to cite the page. You need both.
How many micro-intents per post?
Five to ten. Fewer than five and the post is too shallow to own the topic. More than ten and the post fragments — it tries to cover a topic so broad that no single section is deep enough to be cited. If you have more than ten good micro-intents, split the post.
Does this only apply to blog posts?
No. Landing pages, product pages, comparison pages, and knowledge-base articles all benefit from micro-intent structure. The framework is especially useful for comparison pages ("X vs Y"), where AI engines often cite the specific paragraph that answers the comparison question.
How do I know if my existing content has good micro-intent structure?
Pick a post and ask: could an AI engine extract the answer to a specific question from it, without rewriting anything? If the answer is yes for several distinct questions, the structure is good. If the answer is "you'd have to summarize three paragraphs to get anything usable," the structure needs work.
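That check can be roughed out in code. A sketch that splits a markdown draft on H2 headings and flags any section whose first paragraph falls outside the 40-to-60-word answer-block range; the thresholds come from the guidance above, and the sample post is illustrative:

```python
import re

def audit_answer_blocks(markdown_text, lo=40, hi=60):
    """Return {heading: (word_count, in_range)} for each H2 section's
    first paragraph. A rough heuristic, not a full parser."""
    sections = re.split(r"^## +", markdown_text, flags=re.M)[1:]
    report = {}
    for section in sections:
        parts = section.split("\n", 1)
        heading = parts[0].strip()
        body = parts[1] if len(parts) > 1 else ""
        # First non-empty paragraph under the heading.
        first_para = next((p for p in body.split("\n\n") if p.strip()), "")
        words = len(first_para.split())
        report[heading] = (words, lo <= words <= hi)
    return report

# Illustrative draft: one section in range, one far too short.
post = (
    "## What is a structural pattern in AEO?\n\n"
    + "word " * 49 + "word\n\n"
    + "## How long should an answer block be?\n\n"
    + "Too short to extract.\n"
)
for heading, (words, ok) in audit_answer_blocks(post).items():
    print(f"{heading}: {words} words, {'OK' if ok else 'needs work'}")
```

Run against a real post, a mostly-failing report is the "you'd have to summarize three paragraphs" case: the content may be fine, but the structure needs work.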
What if the writer resists the template?
Most writers prefer the micro-intent brief once they've written a few pieces with it, because the structural decisions are already made. The initial resistance usually comes from writers who see briefs as limiting their creativity. Point out that the brief determines what questions get answered, not how. The writing voice, argumentation, and examples are still up to the writer — the brief just stops them from meandering.
The keyword brief was built for ranking in Google's blue links. The micro-intent brief is built for being cited across an entire search stack. The transition takes one cycle of content to get comfortable with. After that, the brief becomes the leverage point for the whole content operation — the piece of paper that determines whether the resulting post is a summary nobody cites or a primary source the whole topic points at.