
InsightHub

AI-Assisted Research: What Works and What Doesn't

December 30, 2025
6 min read

The State of AI in Research (2026)

AI capabilities in user research have advanced significantly. 88% of researchers now identify AI-assisted analysis as a major development. But adoption is uneven, and results are mixed.

Here's what actually works—and what doesn't.

What AI Does Well

1. Transcription and Initial Processing

AI excels at:

  • Transcribing interviews with high accuracy
  • Speaker identification and labeling
  • Timestamp generation
  • Initial cleanup of filler words

Benefit: Researchers spend almost no time on transcription itself—a task that used to take 3-4x the interview length—beyond a quick accuracy review.
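Filler-word cleanup is mostly pattern matching. A minimal sketch (the filler list and function name are illustrative, not taken from any specific tool):

```python
import re

# Common filler tokens to strip; extend this for your own transcripts.
FILLERS = r"\b(?:um+|uh+|er+|you know)\b[,.]?\s*"

def clean_fillers(text: str) -> str:
    """Remove filler words and tidy leftover spacing.

    Punctuation cleanup is deliberately partial—commas surrounding a
    removed filler may survive, which is one reason human review stays
    in the loop.
    """
    cleaned = re.sub(FILLERS, "", text, flags=re.IGNORECASE)
    cleaned = re.sub(r"\s{2,}", " ", cleaned)  # collapse doubled spaces
    return cleaned.strip()
```

For example, `clean_fillers("Um, so the export, uh, failed.")` returns `"so the export, failed."`—cleaner, but still worth a human pass.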

2. Summarization and Synthesis

AI can:

  • Summarize long transcripts into key points
  • Extract quotes organized by theme
  • Generate initial interview summaries
  • Compare across multiple interviews

Benefit: First-pass analysis that used to take hours happens in minutes.

3. Pattern Identification

AI helps with:

  • Clustering similar feedback
  • Identifying frequently mentioned topics
  • Detecting sentiment patterns
  • Flagging unusual responses

Benefit: Patterns that humans would miss emerge from large datasets.

4. Question Generation

AI can:

  • Suggest follow-up questions based on discussion guides
  • Generate variations of interview questions
  • Adapt questions based on previous responses
  • Create survey questions from research objectives

Benefit: Researchers focus on research design, not question wording.

What AI Doesn't Do Well (Yet)

1. Understanding Context

AI struggles with:

  • Reading between the lines
  • Understanding organizational politics mentioned implicitly
  • Recognizing when customers don't know what they need
  • Interpreting body language and tone

Human needed: Researchers catch what customers mean, not just what they say.

2. Making Judgment Calls

AI can't determine:

  • Whether feedback represents a real pattern or vocal minority
  • How feedback connects to business strategy
  • Which insights are actionable vs. interesting
  • When to probe deeper vs. move on

Human needed: Research judgment about significance and next steps.

3. Building Rapport

AI can't:

  • Make participants feel comfortable
  • Adapt conversation flow to participant energy
  • Create trust that surfaces honest feedback
  • Navigate sensitive topics with empathy

Human needed: The human connection that makes research valuable.

4. Detecting Its Own Errors

AI doesn't know:

  • When it's hallucinated a conclusion
  • When the training data biases its analysis
  • When the pattern it found is spurious
  • When the summary missed the most important point

Human needed: Quality control and verification.

The Right Model: AI-Assisted, Human-Led

The most effective approach:

AI handles:

  • Transcription
  • Initial organization and tagging
  • First-pass summarization
  • Pattern detection at scale
  • Routine categorization

Humans handle:

  • Research design
  • Interview conduct
  • Interpretation and judgment
  • Connection to strategy
  • Quality verification
  • Final synthesis

The handoff: AI provides a starting point. Humans refine, correct, and contextualize.

Practical Implementation

For interview research:

  1. AI transcribes → Human reviews for accuracy
  2. AI summarizes → Human corrects and adds context
  3. AI tags themes → Human validates and adjusts
  4. AI clusters insights → Human interprets patterns
  5. Human writes final synthesis with AI-assisted components
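One lightweight way to enforce the human-verification steps above is to track review status explicitly, so nothing AI-generated ships unchecked. A sketch (the `Insight` record and its field names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    """One AI-generated output awaiting (or past) human review."""
    interview_id: str
    ai_summary: str
    ai_themes: list[str] = field(default_factory=list)
    human_verified: bool = False  # flipped only by a researcher
    notes: str = ""               # corrections and added context

def pending_review(insights: list[Insight]) -> list[Insight]:
    """Everything AI produced that no human has signed off on yet."""
    return [i for i in insights if not i.human_verified]
```

The point of the explicit flag is that "AI summarized it" and "a researcher verified it" are different states, and the final synthesis should only draw on the second.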

For feedback analysis:

  1. AI categorizes incoming feedback → Human spot-checks
  2. AI detects anomalies → Human investigates
  3. AI generates reports → Human edits and contextualizes
  4. AI tracks trends → Human interprets significance
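The spot-check in step 1 can be automated as far as sampling goes; the review itself stays human. A sketch (the 5% rate and minimum of 10 are arbitrary starting points, not recommendations):

```python
import random

def spot_check_sample(items, rate=0.05, min_n=10, seed=None):
    """Randomly pick ~rate of AI-categorized items (at least min_n)
    for a human to re-check against the source feedback."""
    n = min(len(items), max(min_n, round(len(items) * rate)))
    return random.Random(seed).sample(items, n)
```

Passing a `seed` makes the sample reproducible, which is useful when a second reviewer wants to audit the same subset.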

Tools and Approaches

Transcription:

  • Otter, Rev, Descript (good accuracy, requires review)

Analysis assistance:

  • Dovetail, Condens (research-specific AI)
  • General LLMs with custom prompts (flexible but requires more work)

Pattern detection:

  • Text analytics platforms
  • Custom clustering pipelines
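A custom pipeline doesn't have to be heavyweight. A minimal sketch of similarity-based feedback clustering using word overlap (the tokenization and 0.3 threshold are deliberately naive; real pipelines typically use embeddings):

```python
def tokens(text: str) -> set[str]:
    """Naive tokenizer: lowercase, split on whitespace."""
    return set(text.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Word-overlap similarity between two token sets."""
    return len(a & b) / len(a | b)

def cluster_feedback(snippets, threshold=0.3):
    """Greedy single-pass clustering: each snippet joins the first
    cluster whose seed it overlaps enough with, else starts a new one."""
    clusters = []  # list of (seed_tokens, member_snippets)
    for s in snippets:
        t = tokens(s)
        for seed, members in clusters:
            if jaccard(t, seed) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((t, [s]))
    return [members for _, members in clusters]
```

Even a toy version like this groups "export to csv fails" with "csv export fails on large files" while keeping dark-mode requests separate—and, as the article stresses, the resulting clusters are a starting point for human interpretation, not a finding.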

Key criteria:

  • Can you verify AI conclusions?
  • Does AI show its sources?
  • Can you correct AI mistakes easily?
  • Does AI integrate with your workflow?

Risks and Mitigations

Risk: Over-reliance on AI conclusions
Mitigation: Always verify with source data. Never cite an AI summary without checking the original.

Risk: Bias amplification
Mitigation: Diverse research participants. Review AI outputs for bias patterns.

Risk: Missing nuance
Mitigation: Human review of all final outputs. Don't ship AI-only analysis.

Risk: Privacy concerns
Mitigation: Understand where data goes. Use privacy-compliant tools. Anonymize sensitive content.

The Future (What's Coming)

Emerging capabilities:

  • Real-time analysis during interviews
  • Suggested follow-up questions mid-conversation
  • Automatic connection to prior research
  • Predictive pattern detection (what's likely to emerge)

Human researchers won't be replaced. But researchers who use AI will outperform those who don't.