Dispatch

Feedback analysis, done in a spreadsheet

A prototype that turns Google Sheets into an AI-powered text analyzer — sentiment scores and categorisation on open-ended survey responses, without leaving a tool most L&D teams already use.

Editor’s note, 2026: The specific OpenAI models and pricing referenced below have moved on since this was written. The pattern — thin integration, familiar surface, opinionated outputs — still holds. I still build this way.

Open-ended feedback is the most useful data most L&D surveys collect, and the least acted on. The quantitative ratings tell you what the average score was; the free-text comments tell you why. But the comments usually die in a spreadsheet column, because nobody wants to read a thousand paragraphs by hand.

I built a small tool to make that column less scary.

The tool

A Google Sheets integration that calls the OpenAI API on each open-ended response and writes two things back into the sheet:

  1. A sentiment score, from −5 to +5, with a one-sentence justification.
  2. A category, picked from a list you define — so the themes you care about surface automatically.
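The post doesn't publish the implementation (the real tool lives inside Sheets, presumably as Apps Script), but the per-response call can be sketched in Python. Everything here is an assumption — the prompt wording, the function names, and the example category list — the point is the shape: one prompt per response, one structured reply parsed back into score, justification, and category:

```python
import json

# Example category list — in the real tool, the user defines this per survey.
CATEGORIES = ["Content", "Facilitation", "Logistics", "Relevance"]

def build_prompt(response: str, categories: list[str]) -> str:
    """Ask the model for a sentiment score, justification, and category as JSON."""
    return (
        "Analyse this survey response.\n"
        f"Response: {response}\n"
        f"Categories: {', '.join(categories)}\n"
        'Reply with JSON: {"score": <-5 to 5>, '
        '"justification": <one sentence>, "category": <one of the list>}'
    )

def parse_analysis(reply: str, categories: list[str]) -> tuple[int, str, str]:
    """Validate the model's JSON reply: clamp the score, reject unknown categories."""
    data = json.loads(reply)
    score = max(-5, min(5, int(data["score"])))
    category = data["category"] if data["category"] in categories else "Other"
    return score, data["justification"], category
```

Clamping the score and falling back to "Other" for off-list categories matters more than the prompt itself: the model occasionally colours outside the lines, and the sheet should never see a value the PivotTables downstream can't group on.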

It lives inside Sheets, which is where most of my colleagues live. No new platform to learn, no subscription to approve, no data leaving the tool they already trust with the raw responses.

Why in a spreadsheet, though

The instinct for a tool like this is to stand up a dedicated dashboard — Streamlit, Retool, something shiny. The problem: nobody opens it. A new URL is a new habit, and most teams don’t need another dashboard.

Sheets works because the workflow already ends there. The team pastes the survey export into tab 1, the analysis shows up in tab 2, and the usual PivotTable / Sort / Filter moves still work. No learning curve.
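The "paste into tab 1, read from tab 2" flow is just an annotate-and-append pass over the export. As a rough Python sketch (the `analyse` stub stands in for the API call from the tool; column names are invented for illustration):

```python
import csv
import io

def analyse(response: str) -> tuple[int, str]:
    """Stub for the model call: returns (sentiment score, category)."""
    return (2, "Content") if "useful" in response.lower() else (0, "Other")

def annotate(raw_csv: str, text_column: str) -> str:
    """Read the survey export, append score/category columns, return new CSV."""
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    for row in rows:
        row["score"], row["category"] = analyse(row[text_column])
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```

Because the output keeps every original column and only appends two new ones, the team's existing PivotTable and filter habits carry over untouched — that is the whole argument for staying in the spreadsheet.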

What it actually changes

Three things I noticed once it was running:

  • Conversations shift from “what did people say?” to “what do we do?” The categorisation flips the discussion from reading quotes out loud to comparing categories against programme outcomes.
  • Outliers stop disappearing. The sentiment score surfaces comments that would otherwise be lost in the middle of the stack. A single −5 with a sharp justification is usually worth more than the 200 neutral responses it’s sitting next to.
  • The bar for running a survey drops. If analysis is cheap, you can afford more qualitative questions. That changes what you ask.

Take the pattern, not the tool

The specifics here matter less than the shape: a thin AI integration sitting inside software the team already uses, producing outputs the team already knows how to read. That pattern works in more places than people think.