
Building Ember Relay: A Brutally Honest Dispatch from the Front Lines of Innovation

  • Writer: Miranda Griffin
  • Jul 3
  • 9 min read
[Cover image: a man and his dog in a truck bed; he works on a laptop among papers while the dog rests beside him, forest backdrop under an overcast sky.]

Introduction: What the Hell Is Ember Relay and Why Build It?

Ember Relay isn’t some slick VC-funded startup or weekend hackathon toy. It’s a battle-forged tool born out of necessity — a lifeline for solo investigators, field reporters, and systems thinkers like me who needed something better, something real. If you’ve ever sat in a parked car with your laptop balanced on your knees, trying to untangle a corporate spiderweb or decode hours of voice notes, this tool was built for you. More specifically, it was built by me, because nothing like it existed.

At its core, Ember Relay is a web-based investigative toolkit that combines transcription, ChatGPT-powered analysis, tagging, weather context, map integration, and exportable reporting — all designed for people doing real-world investigations, often with minimal support. No fluff. No corporate jargon. Just raw function and brutal utility.

But this tool didn’t start with ambition. It started with desperation — and a busted lease.

The first spark came during an investigation into my own apartment complex. Like a lot of renters, I knew something was off. Rent kept climbing. Complaints vanished into a void. And every time I tried to get a straight answer from management, they pointed fingers or threw legalese at me. That led me down the rabbit hole of LLC shells, hidden corporate ownership structures, and deliberately confusing property records.

I realized something sickening: the system was built to be unreadable. Not just by accident — by design.

That was the moment I knew I needed better tools. I started cobbling together transcripts, GPT prompts, and spreadsheets just to keep track of the names, entities, and tricks. Eventually I thought, “Why am I duct-taping this together every time? Why can’t I build something that actually supports investigations like this?”

The answer? I could. Barely.

See, I’m not a professional developer. I’m an ops person — a systems thinker. I knew enough Python and MySQL to get myself into trouble, and just enough curiosity to keep pushing through the fire. What followed was months of trial and error, YouTube rabbit holes, crying in parking lots, surprise moments of genius, and a slowly growing belief that maybe — just maybe — this thing could work.

This post is the full download: the good, the bad, the bullshit, and the breakthroughs of building Ember Relay as a solo creator with limited resources and a lot of rage-fueled determination.


The Spark: Why This Tool Had to Exist

The original problem wasn’t just the LLC maze of my apartment complex — it was what came next. Once I pulled that first thread, I realized how broken the process of doing an investigation was when you don’t have a team, a newsroom, or even a consistent desk.

I was constantly bouncing between tools that weren’t built for field work. I’d record voice notes while driving or walking. I’d dump transcripts into random documents. I’d copy-paste insights into spreadsheets. I’d screenshot public records from my phone while cross-referencing PDFs and trying not to scream. None of it was connected. None of it scaled. And all of it was fragile.

There were tools for journalists. There were tools for lawyers. There were tools for productivity nerds and AI bros. But none of them were designed for a field investigator in a truck bed with a dog at their feet and rain bouncing off the roof.

So I asked myself the systems question: what would a tool look like if it was built by someone who actually works in the field — someone who needs transcription, context, location data, weather info, and GPT analysis, all in one place?

Not a one-size-fits-all solution. Not an app stuffed with productivity jargon. Something lean. Something specific. Something useful.

That was the spark.

At first, I wasn’t thinking about product-market fit or monetization or user journeys. I was thinking: how can I stop spending six hours wrangling notes from a 30-minute interview? How can I search across dozens of recordings and instantly pull up patterns, names, and tags? How can I turn chaos into signal?

That’s when Ember Relay was born — not as a brand, but as a survival instinct. I didn’t start with the idea of launching a business. I started by solving my own problem, as precisely and ruthlessly as possible.

And that meant learning a hell of a lot more than I bargained for.


Day One: Whiteboards, Wild Ideas, and Whisper Transcripts

The first working version of Ember Relay — if you could even call it that — was a local Python script duct-taped to OpenAI’s Whisper model.

I remember that moment clearly. I was sitting in a cheap motel, laptop balanced on my legs, Lucas snoring next to me, a cold coffee getting colder. I had just recorded a voice memo ranting about a local housing scam — nothing polished, just a raw stream-of-consciousness download. I ran it through Whisper and watched the transcript roll out in front of me. It wasn’t perfect, but it was good enough to work with.

Then I fed that transcript into GPT using the API, with a clumsy prompt asking it to extract names, locations, events, and categories. It responded with a surprisingly coherent breakdown. The insight hit me like caffeine to the brain: I just turned a voice note into structured intelligence.

Not a summary. Not a transcript. But usable intelligence.

That moment was a dopamine spike like no other. No bloated UI. No SaaS fees. Just a direct line from a spoken idea to structured information I could act on. It wasn’t pretty. But it was real.
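If you're curious what that first duct-taped version roughly looked like, here's a minimal sketch of the shape of it: local Whisper for the transcript, one GPT call for the extraction. This isn't the exact code (the model name, prompt wording, and helper names here are illustrative), but the bones are the same.

```python
import json
import whisper                      # openai-whisper, running locally
from openai import OpenAI           # OpenAI Python SDK for the GPT call

client = OpenAI()                   # reads OPENAI_API_KEY from the environment

def transcribe(audio_path: str) -> str:
    """Run a local Whisper model over one voice memo and return plain text."""
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    return result["text"]

def extract_intel(transcript: str) -> dict:
    """Ask GPT to pull names, locations, events, and categories out of a transcript."""
    prompt = (
        "Extract the following from this transcript and return JSON with keys "
        "'names', 'locations', 'events', 'categories':\n\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",        # any chat model works; this one is cheap
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model returns clean JSON. Spoiler: it doesn't always.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    text = transcribe("voice_memo.m4a")
    print(json.dumps(extract_intel(text), indent=2))
```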

The next few days were a blur of furious scribbling on notepads, testing random JSON outputs, and refining prompts until they started producing something halfway consistent. I hardcoded tags. I experimented with GPT formats. I started mapping categories to folders so every voice note could be auto-sorted by type: landlord issues, legal leads, policy patterns.

It was simple: speak, transcribe, tag, analyze, save.

I didn’t need to build a monolith. I needed to build a pipeline.
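Stripped down, the pipeline really is just those steps chained in order. A rough sketch, reusing the helpers from above (save_record here is a stand-in for the filing logic that came later; more on that below):

```python
def process_voice_note(audio_path: str) -> dict:
    """One pass through the pipeline: speak, transcribe, tag, analyze, save."""
    transcript = transcribe(audio_path)   # Whisper, as sketched above
    intel = extract_intel(transcript)     # GPT pulls names, places, events, categories
    record = {"source": audio_path, "transcript": transcript, **intel}
    save_record(record)                   # filing logic came later (sketched below)
    return record
```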

And on that first real day of working code — just the basics, just enough to believe in — I realized this could become the tool I’d wished existed for years.


Expansion Pack: Categories, Tags, and the Voice-to-Insight Engine

Once I had the core pipeline — voice to transcript to GPT to tags — I knew I was onto something. But it was still clunky. Half-manual. Not quite reliable enough to use in the field. So the next goal was simple in theory and brutal in practice: make it repeatable.

I began layering in logic for categories: was this audio note about a person, a place, a legal event, a pattern of behavior, a known entity, or a new lead? I didn’t want just a big folder full of files. I wanted structured insight. Something I could search, organize, and act on without remembering what I’d said out loud two weeks ago at a gas station.

That’s when GPT became the engine — not just a passive summarizer, but an active processor. I rewrote prompt after prompt, trying to get it to produce JSON blobs that were actually usable. Sometimes it would hallucinate tags. Sometimes it would return full paragraphs when I needed bullet points. It was like training a wild animal to file paperwork.

Eventually I cracked a method that worked more than 80% of the time. Not perfect — but usable. I added fallback logic, error handling, file renaming based on output, even some basic auto-cleanup for redundant tags. It started feeling real.
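The fallback logic wasn't clever, just stubborn. Roughly, it looked like this (a simplified sketch, not the production code):

```python
import json
import re

def parse_gpt_json(raw: str) -> dict:
    """Try to get usable JSON out of a GPT reply, even when it misbehaves."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # The model often wraps JSON in prose or code fences,
        # so grab the first {...} block and try again.
        match = re.search(r"\{.*\}", raw, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(0))
            except json.JSONDecodeError:
                pass
    # Last resort: keep the raw text so nothing is silently lost.
    return {"categories": ["unparsed"], "raw": raw}

def clean_tags(tags: list[str]) -> list[str]:
    """Basic auto-cleanup: lowercase, strip, and drop duplicate or empty tags."""
    seen, cleaned = set(), []
    for tag in tags:
        t = tag.strip().lower()
        if t and t not in seen:
            seen.add(t)
            cleaned.append(t)
    return cleaned
```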

At the same time, I built out the folder architecture so that every file had a home: transcripts sorted by date, category, location if available. Metadata saved alongside the original. Auto-named files that actually meant something when you looked at them a week later.
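The filing step is the least glamorous part, and that's the point: every note lands somewhere predictable, with its metadata sitting right next to it. A simplified sketch (folder layout and naming scheme are illustrative):

```python
import json
from datetime import datetime
from pathlib import Path

def save_record(record: dict, output_root: str = "ember_relay_notes") -> Path:
    """File a processed note by date and category, with a metadata sidecar."""
    date = datetime.now().strftime("%Y-%m-%d")
    category = (record.get("categories") or ["uncategorized"])[0]
    dest_dir = Path(output_root) / date / category
    dest_dir.mkdir(parents=True, exist_ok=True)

    # Auto-name the file so it still means something a week later.
    stem = Path(record["source"]).stem
    slug = "-".join(n.replace(" ", "-") for n in record.get("names", [])[:2]) or "note"
    base = f"{date}_{category}_{slug}_{stem}"

    (dest_dir / f"{base}.txt").write_text(record["transcript"], encoding="utf-8")
    (dest_dir / f"{base}.json").write_text(json.dumps(record, indent=2), encoding="utf-8")
    return dest_dir / f"{base}.txt"
```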

And then came the moment where I recorded a two-minute rant, ran it through the pipeline, and 15 seconds later had a fully transcribed, tagged, categorized, and filed report sitting in the right folder — complete with a timestamp and export-ready text.

I just sat there and stared at it. Not because it was perfect, but because I’d built it.


UI Pain and GUI Growing Pains

If the backend was a junkyard of logic and persistence, the frontend was an existential crisis made visible.

At first, I didn’t even plan on building a user interface. I figured I’d just keep running it from the terminal forever. But then I tried to demo it to someone — and nothing humbles you faster than watching another human stare blankly at a command line window while you say, “Okay, just run this, then this, then... wait, hang on…”

So I did the unthinkable. I started building a GUI.

I started with Tkinter. It worked — technically — but it looked like Windows XP and acted like it had been dropped on its head. Eventually, I ditched it and committed to React.

Which... was a choice.

React is powerful, but only if you know what you’re doing. I didn’t. I learned on the fly. JSX, props, state management, component trees — it was like trying to assemble IKEA furniture in a hurricane.

Every div felt like betrayal. But eventually, I got a clean layout working: upload an audio file, click a button, watch it process, and export your results — all without touching a terminal.

It didn’t just work — it looked like it worked. That was a first.


Backend Battles

I started with Flask. It was supposed to be simple. It was not.

Setting up routes was easy enough. But connecting them, debugging them, and making sure nothing silently failed? That was war.

One of the most maddening issues was a silent failure that only happened when filenames had special characters — things like ampersands or accents. Whisper would process it. GPT would analyze it. But the final save would quietly fail. No error. Just... nothing.

I tore through permissions. I thought it was a folder path. I even blamed GPT at one point. It took me days to realize the issue was buried in a filename handler.

That’s backend work in a nutshell: 80% of your pain comes from 1% of your assumptions.
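The eventual fix was boring: sanitize every filename before it gets anywhere near the save step. Something along these lines (a sketch, not the exact handler):

```python
import re
import unicodedata

def safe_filename(name: str) -> str:
    """Strip accents and special characters so saves never silently fail."""
    # Decompose accented characters, then drop anything outside ASCII.
    ascii_name = (
        unicodedata.normalize("NFKD", name)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    # Replace anything that isn't a letter, digit, dot, dash, or underscore.
    return re.sub(r"[^A-Za-z0-9._-]+", "_", ascii_name) or "untitled"

# "Café & Records.mp3" -> "Cafe_Records.mp3"
```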

Eventually I separated each major piece of logic into its own clean function: transcription, tagging, summarization, exporting, file saving. It was still fragile, but at least it was organized. It even made it easier to wire up to React later.
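Splitting things out also made the Flask layer almost boring, which is exactly what you want. The route ended up looking roughly like this (endpoint name and helpers are illustrative, reusing the sketches from earlier):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process():
    """Accept an uploaded audio file and run it through the pipeline."""
    upload = request.files["audio"]
    audio_path = f"/tmp/{safe_filename(upload.filename)}"   # sanitized, see above
    upload.save(audio_path)

    transcript = transcribe(audio_path)       # Whisper step
    intel = extract_intel(transcript)         # GPT tagging/analysis step
    saved_to = save_record({"source": audio_path,
                            "transcript": transcript, **intel})

    return jsonify({"saved_to": str(saved_to), **intel})
```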

I had a working engine now. It just needed a launchpad.


Going Live: Deployment Drama and Victory Screams

I picked Render for deployment because it seemed simple. It was not.

First, it failed due to missing environment variables. Fixed it. Then it wanted a gunicorn config. Fixed that. Then the backend crashed with this gem:

“SyntaxError: invalid imaginary literal.”

I stared at the screen like it had insulted my family. After digging through everything, I found the culprit: a rogue j stuck on the end of a number, which Python tried to parse as an imaginary (complex-number) literal.

One typo. Total failure.

Once I fixed that, it booted. The site was live. I sat in silence for a minute, just… breathing.

But it wasn’t over.

Frontend wouldn’t talk to the backend — CORS errors. Then upload limits broke things. Then GPT calls were rate-limited.

One by one, I solved them. I patched, deployed, broke, and rebuilt — until the app could handle uploads, run analysis, and export a report, fully automated, fully online.
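None of those fixes were glamorous either. For the record, they looked roughly like this (flask-cors for the CORS errors, a bigger upload ceiling, and a dumb retry loop for the rate limits; the exact numbers and the frontend URL are illustrative):

```python
import time
from flask_cors import CORS          # pip install flask-cors

# Let the deployed React frontend talk to the Flask backend ("app" from earlier).
CORS(app, origins=["https://ember-relay-frontend.example.com"])   # placeholder URL

# Raise the upload ceiling so longer recordings don't get rejected outright.
app.config["MAX_CONTENT_LENGTH"] = 100 * 1024 * 1024   # 100 MB

def call_gpt_with_retry(prompt: str, attempts: int = 3) -> str:
    """Back off and retry when the OpenAI API rate-limits us ("client" from earlier)."""
    for attempt in range(attempts):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception:                    # e.g. a rate-limit error from the API
            time.sleep(2 ** attempt)         # 1s, 2s, 4s...
    raise RuntimeError("GPT call kept failing after retries")
```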

I tested it on hotel Wi-Fi. On mobile hotspot. At a coffee shop. It held.

It worked.


The Good Stuff: Magic Moments That Made It Worth It

There were moments that made it all feel worth it.

Like testing it from a forest trailhead with no Wi-Fi — and watching the offline version run perfectly.

Or when I ran a five-minute voice memo about a legal case and it auto-generated a CSV report, sorted it into the right folder, and cross-tagged three major patterns I hadn’t noticed until GPT surfaced them.

It wasn’t just a tool. It was useful.

It helped me save time — which, when you’re living day-to-day and problem-solving in real time, is the most valuable currency there is.


The Bad Stuff: Tech Debt, Tool Fatigue, and Emotional Burnout

It wasn’t all triumph.

There was the time I named the folder compenents instead of components and spent an hour wondering why nothing would render.

There was the time the ChatGPT integration “broke” because I referenced the handler by the wrong name inside React.

And there was the emotional burnout. The mental load of switching between frontend, backend, prompts, file structures, error logs, deployment scripts — all while living out of a truck, managing chronic illness, and rationing food money.

I wasn’t just building. I was surviving.

And I started to question if I was insane for trying to do both at once.


What It’s Cost: Time, Energy, Money, Sanity

Ember Relay cost me months of time, hundreds of hours of energy I didn’t have, and money I couldn’t afford to lose.

I’ve coded in parking lots. I’ve debugged in motel rooms. I’ve survived on instant oats and Wi-Fi from coffee shops that didn’t ask questions.

This tool was built in chaos — which is exactly why it works so well in chaos now.


What’s Next: Pro Launch, Dream Clients, and the $1K/Month Goal

Next up: Pro launch.

I’m building this for solo investigators, legal researchers, field reporters — anyone doing high-context work who needs structure and insight without hiring a team.

Pricing is $1,000/month. Why? Because this isn’t a toy. It’s a professional-grade tool that saves time, energy, and mental bandwidth.

I want Ember Relay to support me — and others — doing this kind of work full-time. No merch. No coaching. No distractions. Just serious tools for serious investigations.


Final Thoughts: Build What You Need. Even If It Breaks You a Little.

Ember Relay wasn’t made in a lab. It was made on the road, in the rain, with a deadline no one could see but me.

It broke me a little.

But it also showed me what I could build. What I could survive. And what I could ship, even in imperfect conditions.

So if you’re building something no one else understands yet — keep going. Especially if it’s hard. Especially if it feels like no one else gets it.

Because someone will. And when they do? You’ll be ready.
