AI-Powered LOI Estimation: Getting Survey Length Right
The industry rule of thumb is "three questions per minute." That's wrong in almost every case. A single-select question with four options takes about 12 seconds. A grid question with 10 rows and 7 columns takes 90 seconds. An open-ended question takes 45-120 seconds depending on how much you're asking. MaxDiff exercises can run 3-4 minutes for a full set.
When you estimate LOI at "three questions per minute" for a survey with six grids, two open-ends, and a conjoint, you're going to quote 15 minutes when the actual LOI is closer to 25. That means your CPI (cost per interview) is wrong, your budget is wrong, and your respondent dropout rate will be higher than projected, because people who were told a survey takes 15 minutes don't stick around when it actually takes 25.
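The gap between the rule of thumb and a per-type estimate is easy to see in code. This is a minimal sketch: the single-select, grid, and open-end timings come from the figures above, while the conjoint timing and the count of 20 single-selects are assumptions added for illustration.

```javascript
// Per-question timings in seconds. Single-select, grid, and open-end values
// are from the benchmarks above; the conjoint figure is an assumption
// (comparable to a MaxDiff exercise).
const SECONDS_PER_QUESTION = {
  single_select: 12,
  grid: 90,      // 10-row x 7-column matrix
  open_end: 80,  // rough midpoint of the 45-120s range
  conjoint: 180, // assumed
};

function estimateLoiMinutes(questionTypes) {
  const totalSeconds = questionTypes.reduce(
    (sum, type) => sum + (SECONDS_PER_QUESTION[type] ?? 12),
    0
  );
  return totalSeconds / 60;
}

// The survey from the example: six grids, two open-ends, one conjoint,
// padded with 20 single-selects (an assumed count).
const survey = [
  ...Array(6).fill('grid'),
  ...Array(2).fill('open_end'),
  'conjoint',
  ...Array(20).fill('single_select'),
];

const perType = estimateLoiMinutes(survey); // about 18.7 minutes
const ruleOfThumb = survey.length / 3;      // about 9.7 minutes
```

Even with conservative assumptions, the per-type estimate is nearly double what "three questions per minute" predicts for the same questionnaire.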
How the AI handles it
Paste your survey questions into the tool. The Claude API reads each one, classifies it by type (single-select, multi-select, grid/matrix, ranking, open-end, MaxDiff, conjoint, slider, etc.), and applies question-type-specific timing benchmarks. The output is a per-question time estimate plus the total LOI.
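A classification step like this could be structured as a prompt builder plus a strict parser around the model call. This is a hedged sketch, not the tool's actual implementation: the function names, prompt wording, and JSON response format are all assumptions.

```javascript
// Question types the classifier is allowed to return.
const QUESTION_TYPES = [
  'single_select', 'multi_select', 'grid', 'ranking',
  'open_end', 'maxdiff', 'conjoint', 'slider',
];

// Build a prompt that constrains the model to a known type list
// and a machine-readable response. (Wording is an assumption.)
function buildClassificationPrompt(questionText) {
  return [
    `Classify this survey question as one of: ${QUESTION_TYPES.join(', ')}.`,
    'Respond with JSON only, e.g. {"type": "grid"}.',
    '',
    questionText,
  ].join('\n');
}

// Reject anything outside the allowed list so a bad model response
// can't silently produce a bogus timing.
function parseClassification(modelOutput) {
  const parsed = JSON.parse(modelOutput);
  if (!QUESTION_TYPES.includes(parsed.type)) {
    throw new Error(`Unexpected question type: ${parsed.type}`);
  }
  return parsed.type;
}

// The live call would go through the Anthropic Messages API, roughly:
//   const msg = await client.messages.create({
//     model: '<model id>', max_tokens: 50,
//     messages: [{ role: 'user', content: buildClassificationPrompt(q) }],
//   });
//   const type = parseClassification(msg.content[0].text);
```

Constraining the model to a closed type list keeps the downstream timing lookup deterministic even though the classification itself is probabilistic.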
This is one of two tools in the kit that use the Claude API. I went with AI here because question-type classification from raw text is genuinely hard to do with rules-based logic. A question that says "Please rank the following" is a ranking question, but a question that says "Which of these would you be most likely to choose" could be single-select, MaxDiff, or conjoint depending on the response format. The LLM handles that ambiguity better than regex ever could.
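To make the ambiguity concrete, here is what a keyword-based classifier (a hypothetical one, written for contrast) can and cannot do with the two examples above:

```javascript
// A rules-based classifier handles the unambiguous phrasing but has no way
// to disambiguate the hard case without seeing the response format.
function classifyByRules(text) {
  if (/\brank\b/i.test(text)) return 'ranking';
  if (/most likely to choose/i.test(text)) return 'ambiguous'; // single-select, MaxDiff, or conjoint
  return 'unknown';
}

classifyByRules('Please rank the following features');
// -> 'ranking'
classifyByRules('Which of these would you be most likely to choose?');
// -> 'ambiguous'
```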
Tech stack
React 18.2 + Babel CDN for the frontend. Claude API (Anthropic) for question classification. The timing benchmarks are based on published research on survey completion times by question type, calibrated against real-world data from studies I've managed. API calls go directly to Anthropic, no middleware.
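For the "no middleware" part, a direct request to the Messages API looks roughly like the following. The endpoint and the `x-api-key`/`anthropic-version` headers are from Anthropic's public API documentation; the model id is a placeholder, and the request-builder function is an assumption, not the tool's code.

```javascript
// Build a fetch-ready request for the Anthropic Messages API.
// Calling it directly from the browser exposes the key to the client,
// which is a real tradeoff of skipping middleware.
function buildAnthropicRequest(apiKey, prompt) {
  return {
    url: 'https://api.anthropic.com/v1/messages',
    options: {
      method: 'POST',
      headers: {
        'x-api-key': apiKey,
        'anthropic-version': '2023-06-01',
        'content-type': 'application/json',
      },
      body: JSON.stringify({
        model: '<model id>', // placeholder
        max_tokens: 200,
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  };
}

// Usage: const req = buildAnthropicRequest(key, prompt);
// const res = await fetch(req.url, req.options).then((r) => r.json());
```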
Try the Survey LOI Estimator →