Topics and Prompts Planning
OptimizeGEO Prompt & Topic Development Process
Start With the Client Questionnaire
Purpose: Understand who they want to target, not the whole market.
Key inputs extracted:
* Priority regions
* Competitor list
* Target customer segments
* Explicit non-targets to avoid (an important filter)
Why this matters: This defines the scope so we don’t generate prompts for irrelevant verticals (for example, pharma, R&D, and clinical labs in this case).
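Where it helps, the questionnaire outputs can be captured as a small, explicit scope object so every later step filters against the same non-target list. A minimal sketch, assuming a Python workflow; the class, field names, and example values are illustrative, not taken from any actual questionnaire:

```python
from dataclasses import dataclass, field

@dataclass
class ScanScope:
    """Scope extracted from the client questionnaire (field names are illustrative)."""
    priority_regions: list[str] = field(default_factory=list)
    competitors: list[str] = field(default_factory=list)
    target_segments: list[str] = field(default_factory=list)
    excluded_verticals: list[str] = field(default_factory=list)  # explicit non-targets

    def in_scope(self, vertical: str) -> bool:
        """Reject any candidate topic that belongs to an excluded vertical."""
        return vertical.lower() not in {v.lower() for v in self.excluded_verticals}

# Example values only: the non-targets come straight from the questionnaire
scope = ScanScope(
    priority_regions=["US Northeast"],
    target_segments=["environmental testing labs"],
    excluded_verticals=["pharma", "R&D", "clinical labs"],
)
assert not scope.in_scope("pharma")
```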
Identify Real Workflow Problems (Not Product Features)
We avoid the client’s internal terminology because it usually does not reflect how people actually search.
We instead look at:
* The jobs-to-be-done
* The pain points that drive software adoption
Typical categories to explore:
* Compliance / documentation friction
* Operational bottlenecks
* Data integrity issues
* Reporting requirements
* Staffing and workload constraints
Gather Real Market Language (Not Assumptions)
We research how practitioners talk in the wild.
Use LLMs, Reddit, discussion boards, and Q&A threads:
* Search using workflow pain questions, not product names (for example, “how to maintain chain of custody logs” instead of “LIMS software”).
Outcome:
* A list of real, verbatim questions showing how people actually describe their problems.
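A light script can help collect these verbatim questions at scale. A rough sketch, assuming Python and Reddit’s public JSON search endpoint; the queries and User-Agent string are hypothetical, and in practice this kind of collection may require authenticated API access and rate-limit handling:

```python
import requests

# Pain-phrased queries, not product names (queries shown are illustrative)
QUERIES = [
    "how to maintain chain of custody logs",
    "instrument data transfer to spreadsheet errors",
]

def collect_verbatim_questions(query: str, limit: int = 25) -> list[str]:
    """Pull thread titles from Reddit's public search JSON endpoint.

    The same idea applies to any forum or Q&A source with a search API.
    """
    resp = requests.get(
        "https://www.reddit.com/search.json",
        params={"q": query, "limit": limit, "sort": "relevance"},
        headers={"User-Agent": "geo-research-sketch/0.1"},  # placeholder UA string
        timeout=10,
    )
    resp.raise_for_status()
    posts = resp.json()["data"]["children"]
    # Keep titles verbatim; filtering and de-duplication happen later
    return [p["data"]["title"] for p in posts]

verbatim = [title for q in QUERIES for title in collect_verbatim_questions(q)]
```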
Group the Real Questions Into Themes
We don’t invent themes. We observe patterns:
Example resulting themes:
1. Chain-of-custody & sample tracking
2. Instrument data-transfer issues
3. QA/QC review workflows
4. State/municipal reporting requirements
5. Turnaround-time bottlenecks
These themes reflect recurring real-world pain, not marketing categories.
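Clustering can support the pattern-observation step by giving a reviewer a rough first-pass grouping to read through; it does not name or invent the themes. A minimal sketch, assuming Python with scikit-learn; the group count is a tunable assumption:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def suggest_theme_groups(questions: list[str], n_groups: int = 5) -> dict[int, list[str]]:
    """Rough first-pass grouping of verbatim questions to support manual theme review."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(questions)
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(vectors)
    groups: dict[int, list[str]] = {}
    for question, label in zip(questions, labels):
        groups.setdefault(label, []).append(question)
    # A human still reads each group and names, merges, or discards the theme
    return groups
```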
Normalize the Questions Into Neutral Search Prompts
Goal: Convert messy human phrasing → clean prompts that AI search engines respond
to consistently.
Normalization rules:
* Keep the pain + workflow + constraint
* Remove local context, emotion, and anecdotes
* Avoid vendor names, solution bias, and insider jargon
This step produces the final prompt list for OptimizeGEO scanning.
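The rules can also be enforced mechanically as a final check before a prompt enters the list. A small sketch, assuming Python; the vendor terms and the before/after example are placeholders, not real prompts from an engagement:

```python
import re

# Placeholder vendor/brand terms to keep out of neutral prompts
VENDOR_TERMS = {"acmelims", "examplesoft"}
FIRST_PERSON = re.compile(r"\b(i|we|my|our|me)\b", re.IGNORECASE)

def violates_normalization_rules(prompt: str) -> list[str]:
    """Return the rules a candidate prompt breaks (empty list = keep it)."""
    problems = []
    if any(term in prompt.lower() for term in VENDOR_TERMS):
        problems.append("contains a vendor name")
    if FIRST_PERSON.search(prompt):
        problems.append("keeps personal/anecdotal framing")
    return problems

# Before: messy, anecdotal phrasing pulled from a thread
raw = "We keep losing chain of custody paperwork at our small lab, so frustrating"
# After: pain + workflow + constraint, nothing else
normalized = "how to maintain chain of custody documentation in a small lab"
assert violates_normalization_rules(raw)
assert violates_normalization_rules(normalized) == []
```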
Final Output
A structured topics + prompts set, ready for measurement scans. This set:
* Reflects actual market pain
* Avoids branding bias
* Aligns with target customer segments
* Is generalized enough for AI platform visibility analysis
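One convenient way to hand this set to a scan is a small structured file, with prompts grouped under their themes. A sketch of one possible shape, assuming Python and JSON; the client name, themes, and prompts shown are illustrative only:

```python
import json

# Shape of the deliverable handed to the scan (structure and values are illustrative)
topics_and_prompts = {
    "client": "example-client",
    "topics": [
        {
            "theme": "Chain-of-custody & sample tracking",
            "prompts": [
                "how to maintain chain of custody documentation in a small lab",
                "best way to track samples between collection and reporting",
            ],
        },
        {
            "theme": "Turnaround-time bottlenecks",
            "prompts": [
                "how to reduce lab turnaround time without adding staff",
            ],
        },
    ],
}

with open("topics_and_prompts.json", "w") as f:
    json.dump(topics_and_prompts, f, indent=2)
```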
Why This Works
This approach avoids:
* Guessing
* Overfitting to the client’s worldview
* Feature-based prompts that have no real search demand
Instead, we anchor prompts in:
* Real conversations
* Real workflows
* Real regulatory and operational friction
The result is signal-rich visibility measurement data.