Burn the Boats: The Idea That Hit Me Like a Bag of Bricks
Intro
I write this introduction feeling very unsure about posting this. I've been writing these blog posts in Obsidian and then using Claude to make them more HTML-friendly before posting. I have it check spelling and grammar, but it's mostly just converting the format for my blog. So, as someone trying to use AI more, I decided to build a Claude skill to automate this for me. I built the skill, and something unexpected happened: it not only converted the post to a blog-friendly format, it rewrote it to sound much better. That was not intended and was an issue with my prompting. I had specifically asked Claude not to change my writing because I want it to feel authentic.

I know I'm not a very good writer. I haven't actually written like this since college...14 years ago. I'm okay with the fact that my writing isn't great, because I believe that doing it helps me introspect and, over time, become a better writer. My issue is that I eventually want people to read some of these (as of now, no one but my wife is reading them), and when I read what Claude had done, it was way, way better. Even rereading my own writing, I could feel the run-on sentences and the straying of my thoughts. I really liked that Claude's version captured me as a reader, but as the writer, it still sounded like me.

So I post this with this intro to start speculating about whether I want Claude to adjust my writing at all. With everything I'm already learning and working on, it feels like a lot to also be working on improving my writing. I'm mostly writing to document and to self-reflect, not to be the world's best blogger. I would eventually like to know what people prefer, though. However, I will proudly state that I will never post anything essentially written by AI without prefacing it as such. I have zero interest in taking credit for something an AI wrote.
Burn the Boats: The Idea That Hit Me Like a Bag of Bricks
Some days you sit down at a coffee shop, open your laptop, and leave with a completely different life plan. This was one of those days.
The Breakfast That Changed Everything
My wife and I grabbed breakfast, and afterward we both cracked open our computers to get some work done. I couldn't focus. My brain was all over the place, like it usually is when I'm stuck between "what I should do" and "what I actually want to do."
So I did something a little unconventional — I asked Gemini to be my business coach. Not a nice one, either. I specifically told it to be harsh, realistic, and to push me. It did not disappoint.
What surprised me most was how much it already knew about me — my background, my goals, the tension I was feeling between playing it safe and swinging for something bigger. We talked through two options: a "safe" path that stayed close to the restaurant industry, and a riskier, more ambitious path in AI. When I started describing the safe option, Gemini stopped me and said something I didn't expect:
"You seem bored just talking about it. What's going to happen when times actually get tough?"
That landed. Because it was right. I was already dragging my feet just explaining the idea. That's not a great sign.
Burn the Boats
When I told it I didn't want the safe route, it said: "Good. Burn the boats."
It knew I was a David Goggins fan. That was a nice touch.
Then it broke down the real challenge into three parts: The Pivot, The Technical Moat, and The Capital Problem.
The Pivot was about thinking beyond what's trendy right now. LLMs are cool, but the real frontier is world models — AI that tries to understand how the world actually works, not just predict the next word. If I were going to build something meaningful, I needed to think toward the future, not just copy what's already been done.
The Technical Moat hit different. Gemini basically said: certificates and courses are fine, but real founders build things. And here's the part that stung a little — I'm not going to out-math kids from Stanford. I can't compete on pure theory. So instead of trying to, I need to build things from what I already understand deeply. More specifically, I need to build things that probably shouldn't work. That's where the interesting stuff happens.
Then it asked me a question that changed the whole conversation: What complex system do you understand deeply?
The Click
That question sat with me for a second. Rock climbing. Skydiving. High-stakes, real-consequence situations where everything depends on your ability to stay calm under pressure.
And then it hit me.
One thing I've always been quietly proud of is my ability to function in genuinely scary situations. I've had two cutaways skydiving — that's when your main parachute fails and you have to cut it away and deploy your reserve. Both times, I felt completely in control. No panic. No hesitation. The first one was honestly kind of fun.
My wife is the opposite. In scary or dangerous moments, she tends to freeze — and that's completely normal. Most people do. But I started thinking about why that is, and more importantly, what it means.
LLMs are essentially giant prediction machines. World models are trying to understand how the world actually works. So here's the question that hit me like a bag of bricks:
Can we use AI to predict when someone is going to freeze from fear — before it happens?
Think about who that matters for. It's not just skydivers. It's surgeons who have to make a split-second decision in the operating room and hesitate because they're afraid of being wrong. It's soldiers on the battlefield whose freeze could cost them or their whole team. It's anyone who faces a high-stakes moment where the wrong response — or no response — has serious consequences.
What if we could not only predict when someone is going to freeze, but eventually help them not freeze at the moment it matters most?
But How?
I'm not going to pretend I have this figured out. I don't. I have approximately zero clue how to build this at a technical level — at least not yet. But that's kind of the point. Gemini reminded me that's where The Capital Problem comes in: if I can't produce a simplified proof of concept, none of this matters.
The first step isn't building a neural network. It's not even close to that. The first step is collecting data and figuring out if I can even organize it into something useful.
So here's the actual plan: when I'm at the climbing gym, I'm going to start recording data — biometrics, video, whatever I can capture — specifically during moments when I'm scared. Maybe even frozen. Then I want to see if I can write a basic program to parse that data and make sense of it. No AI yet. Just data. Just structure. Just figuring out if there's a signal worth chasing.
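To make "just data, just structure" concrete, here's a minimal sketch of what that first parsing program might look like. Everything in it is a placeholder assumption, not a real methodology: the CSV format, the column names, the window size, and the idea that a heart-rate spike stands in for "scared" are all hypothetical.

```python
# Hypothetical sketch: organize timestamped heart-rate samples into
# fixed-width windows and crudely flag "spike" moments. No AI here,
# just structure. File format and thresholds are assumptions.

import csv
from statistics import mean

def load_samples(path):
    """Read (seconds, bpm) rows from a simple CSV with columns
    `timestamp,heart_rate` -- an assumed format, not a real device export."""
    samples = []
    with open(path) as f:
        for row in csv.DictReader(f):
            samples.append((float(row["timestamp"]), float(row["heart_rate"])))
    return samples

def window(samples, size_s=10.0):
    """Group (timestamp, bpm) samples into consecutive time windows."""
    if not samples:
        return []
    start = samples[0][0]
    windows, current = [], []
    for t, bpm in samples:
        if t - start >= size_s:
            windows.append(current)
            current, start = [], t
        current.append(bpm)
    if current:
        windows.append(current)
    return windows

def flag_spikes(windows, ratio=1.2):
    """Mark windows whose average heart rate jumps past `ratio` times
    the previous window's average -- a crude stand-in for a fear signal."""
    flags, prev = [], None
    for w in windows:
        avg = mean(w)
        flags.append(prev is not None and avg > prev * ratio)
        prev = avg
    return flags
```

That's the whole point of this stage: if even a toy heuristic like this can't separate calm climbing from scared climbing in the recordings, there's no signal worth chasing yet.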
It sounds way more doable than the long-term vision — which is good, because the long-term vision is terrifying.
What Now?
I've also decided to start a deep learning course so I can at least understand how to build a small neural network from scratch. I need the foundation if I'm ever going to get to the interesting stuff.
For now, I'll keep posting here — probably more philosophy, observations, things I find interesting in the AI world. And I'll update everyone on the data collection project as it develops.
I don't know if this idea is going to work. But for the first time in a while, I'm not bored talking about it. And honestly? That feels like a good sign.
"Build something that shouldn't work." That's the assignment.
Let's go.