About a decade ago, I had an idea I couldn’t build yet.
What if flavor had a visual representation? Not a chart or a wheel — an interactive space where shapes represent sweet, salty, umami, spicy, sour, and bitter. You place an ingredient and watch its flavor profile respond in real time: shapes resize, distort, change color. Swap ingredients in and out of a recipe and see whether the balance works before you ever turn on the stove.
The basic layer teaches the grammar of taste. What happens when you add acid to richness. Why a pinch of salt makes chocolate taste more like chocolate. How brightness emerges from combinations of characteristics you’d never think to pair.
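To make the idea concrete, here is a toy sketch of how that flavor space might start: each ingredient as an intensity vector over the six tastes, blended by amount, with one illustrative interaction rule. Every name and the specific rule here are my own invention for illustration, not anything from TasteBud itself.

```python
from dataclasses import dataclass

TASTES = ("sweet", "salty", "umami", "spicy", "sour", "bitter")


@dataclass
class FlavorProfile:
    """An ingredient's intensity (0.0 to 1.0) along each of the six tastes."""
    values: dict

    def __post_init__(self):
        # Any taste not specified defaults to zero intensity.
        self.values = {t: float(self.values.get(t, 0.0)) for t in TASTES}


def blend(ingredients):
    """Naive weighted average over (profile, amount) pairs -- a stand-in
    for whatever perceptual model a real tool would need."""
    total = sum(amount for _, amount in ingredients)
    mixed = {
        t: sum(p.values[t] * amount for p, amount in ingredients) / total
        for t in TASTES
    }
    # Toy interaction rule: a small amount of salt sharpens perceived
    # sweetness -- the "pinch of salt makes chocolate taste more like
    # chocolate" effect, crudely approximated.
    if 0 < mixed["salty"] < 0.2:
        mixed["sweet"] = min(1.0, mixed["sweet"] * 1.1)
    return FlavorProfile(mixed)


chocolate = FlavorProfile({"sweet": 0.7, "bitter": 0.5})
salt = FlavorProfile({"salty": 1.0})
salted = blend([(chocolate, 0.95), (salt, 0.05)])
```

The interesting design question is exactly the part this sketch dodges: real taste interactions are nonlinear and context-dependent, so the visualization layer would be driving a much richer model than a weighted average with hand-tuned rules.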
Then there’s a layer above that: AI mentors modeled on great food thinkers. Not chatbots wearing their names — genuine teaching paths where Bourdain’s sensibility or Pépin’s technique guides you from simple flavor understanding to creative improvisation. Culinary education that adapts to how you actually learn.
I called it TasteBud. The name works on every level I need it to.
The technology to build this didn’t exist when I first imagined it. Visualization tools were expensive, AI was crude, real-time interaction at this fidelity was a fantasy. Now it’s all available. The constraint was never the idea — it was the infrastructure.
This is the kind of software I care about building. Tools that amplify how humans already think about complex domains rather than replacing that thinking with something flatter and more machine-convenient. The same philosophy that drives everything else in this stream.