Insights

Things I've learned, problems I've solved, and a few opinions I'm willing to defend. Not a blog — more like notes from the field.

The SQL Mystery: Why Your Workaround Might Be Making Things Worse

There's a pattern I've seen play out dozens of times across my career, and it almost always starts the same way: something breaks, someone finds a workaround, the workaround becomes the process, and nobody ever goes back to find the actual problem.

I lived this pattern for months with a SQL Server that crashed twice a day.

Every lockup meant the entire office and shipping floor stopped working. No orders processed, no inventory updated, nothing moved. Early on, each crash cost 30 minutes of investigation and recovery. I eventually streamlined it to about 4 minutes — log into the hypervisor, kill the VM, restart — but the real problem was still there. I just got faster at not solving it.

The pragmatic workaround was three scheduled restarts a day. Get ahead of the crashes. Management was satisfied because the visible disruptions went away. The problem, as far as anyone could tell, was "managed."

Then the warehouse shifted to a split schedule, which meant the SQL Server was running for longer continuous periods. The restart workaround stopped working. I had no choice but to find the real answer.

After months of systematic investigation — ruling out hardware, updates, configurations, everything obvious — I traced it to something nobody thought to check: an unused database field. The ERP software was occasionally writing corrupted characters to a field that wasn't being used for anything. SQL Server would eventually hit those characters and lock up. The timing appeared random because it depended on which records got accessed in which order.

Here's the part that still bothers me: the daily restarts were making things worse. When users kept the ERP running during a restart, the software would "burp" and write even more corrupted data. The workaround was actively feeding the problem it was supposedly solving.

The actual fix took an afternoon. I built a filter to intercept bad characters before they could be written, and a purge process to clean the historical corruption. The server went from crashing twice a day to once every few months.
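The real fix lived inside the ERP integration layer, but the core idea is simple enough to sketch. This is a minimal illustration, not the actual code: the field name is hypothetical, and "corrupted" here means non-printable control characters, which is one plausible definition of what the ERP was writing.

```python
import string

# Characters SQL Server can safely store in this context; everything
# else gets stripped before the write ever happens.
PRINTABLE = set(string.printable)

def scrub(value: str) -> str:
    """Filter: drop any character the database could choke on."""
    return "".join(ch for ch in value if ch in PRINTABLE)

def purge_history(records: list[dict], field: str = "unused_field") -> int:
    """Purge: clean already-corrupted rows in place; return count fixed."""
    fixed = 0
    for row in records:
        original = row.get(field, "")
        cleaned = scrub(original)
        if cleaned != original:
            row[field] = cleaned
            fixed += 1
    return fixed
```

The two halves mirror the fix described above: `scrub` intercepts bad characters before they're written, and `purge_history` sweeps the historical damage.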

The lesson isn't about SQL. It's about the hidden cost of workarounds. Every time you build a process around a problem instead of solving it, you're accepting two risks: the problem will get worse in ways you can't predict, and the workaround itself might be contributing to the damage.

The hardest part isn't finding the solution — it's convincing yourself (and everyone around you) that the comfortable workaround isn't good enough.

Why Fewer Orders Made Us More Money

This is my favorite business insight, because it's completely counterintuitive and took me way too long to figure out.

We sold products in three configurations: singles, dozens, and cases. Each had a different markup — roughly 400% on singles, 200% on dozens, and 30% on cases. Discounts kicked in at volume thresholds: buy 5 singles and get 15% off, buy 5 dozen and get 10% off, and so on up the tiers.
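A toy version of that tiering, using the round numbers above and a hypothetical unit cost, just to make the structure concrete:

```python
def unit_price(cost: float, config: str, qty: int) -> float:
    """Price one unit given configuration markup and volume discounts.
    Markups and thresholds are the article's round numbers; `cost` is
    a made-up input for illustration."""
    markup = {"single": 4.00, "dozen": 2.00, "case": 0.30}[config]
    price = cost * (1 + markup)
    # Discounts kick in at the volume thresholds.
    if config == "single" and qty >= 5:
        price *= 0.85   # 15% off
    elif config == "dozen" and qty >= 5:
        price *= 0.90   # 10% off
    return price
```

Notice how steeply margin falls across tiers: a $10-cost item sells for $50 as a single but only $13 by the case. That gradient is what makes the next insight possible.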

For years, the strategy was straightforward: sell more stuff. More orders equals more revenue equals more profit. That's how business works, right?

I built custom queries to look at the relationship between order volume, profit margins, and actual dollar profit — not percentage profit, but real money in the bank. And I included something most pricing analyses skip: labor costs. Every order that gets placed needs to be picked, packed, shipped, and occasionally returned. Those costs are real, and they scale linearly with volume.

What I found was a curve. And on that curve, there was a sweet spot where raising prices AND raising discount thresholds simultaneously caused order volume to drop — but total profit increased.

Read that again. Fewer orders. More money.

The math isn't complicated once you see it: if you lose 100 low-margin orders but the remaining orders have significantly higher margins, and you're spending less on fulfillment labor for those 100 fewer picks, packs, and ships — the net result is more cash. The key was finding exactly where on the curve that crossover happened.
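The crossover is easy to show with made-up numbers (these are illustrative, not the real figures): once per-order labor is in the model, a smaller number of higher-margin orders can out-earn a larger number of thin ones.

```python
def total_profit(orders: int, avg_price: float, avg_cogs: float,
                 labor_per_order: float) -> float:
    """Profit after per-order fulfillment labor, not just gross margin."""
    return orders * (avg_price - avg_cogs - labor_per_order)

# Before: more orders, thin margin per order.
before = total_profit(orders=10_000, avg_price=40.0,
                      avg_cogs=28.0, labor_per_order=6.0)

# After raising prices and discount thresholds: 20% fewer orders,
# but each remaining order carries twice the net margin.
after = total_profit(orders=8_000, avg_price=50.0,
                     avg_cogs=32.0, labor_per_order=6.0)
```

With these inputs, `before` is $60,000 and `after` is $96,000: fewer orders, more money. The real work was finding where on the actual curve that crossover sat.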

It wasn't a one-time adjustment. Pricing and discounts are a moving target. Market conditions shift, competitors adjust, customer behavior changes. I had to continuously recalibrate to maintain the sweet spot. But knowing the sweet spot existed — that was the breakthrough.

The result: 15-25% revenue uplift on an $8M base. Average order value went from $40 to $50. And we achieved 20% year-over-year growth in our best year while maintaining lean operations — we didn't have to add staff to handle it because part of the strategy was deliberately reducing order volume.

The lesson: Most businesses optimize for the metric they can see most easily (revenue, order count, conversion rate). But the metric that matters is profit after all costs are accounted for — including the ones that don't show up on a standard P&L line item. Sometimes the path to more money is less activity, not more.

LLMs Are a Tool, Not a Solution

I've been building with AI for over a year now — multi-LLM provider architectures, RAG pipelines, cost-optimized routing, the works. I've shipped production systems and I write code with AI assistance every day.

And the most valuable thing I've learned is when NOT to use it.

There's a pattern I keep seeing: someone discovers that an LLM can do a thing, and immediately the conclusion is "we should use AI for this." No one stops to ask whether the thing needed to be done differently, whether a simpler tool would work better, or whether the AI is actually improving the output or just making the process feel more innovative.

I watched this happen with my own projects. Early on, I was routing everything through language models because I could. Then I looked at my API bills, looked at the quality of output for simple tasks, and realized I was using a $200 power tool to hammer in a nail.

Now every project starts with the problem, not the technology. I ask three questions:

1. What's the actual problem?

Not "how can AI help?" but "what outcome do we need?" Sometimes the answer leads to AI. Often it leads to a well-structured database query, a simple automation workflow, or a better process.

2. Where does AI genuinely add value?

AI is incredible at pattern recognition, content generation, classification, and handling ambiguity. It's terrible at precision, consistency, and doing the same simple thing reliably 10,000 times. Know which kind of problem you have.

3. What's the cost of being wrong?

If an LLM gives a 90% accurate answer and that's good enough — great, use it. If you need 99.9% accuracy, you probably want deterministic logic with AI as a helper, not the primary decision-maker. And you definitely want a human reviewing the output.

In my own SaaS platform, I use AI at specific decision points and traditional logic everywhere else. The LLM handles content analysis where its pattern-recognition ability genuinely outperforms rule-based approaches. But routing, error handling, cost tracking, and provider failover are all deterministic code — because I need those systems to be predictable, not creative.
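The shape of that split can be sketched in a few lines. Everything here is hypothetical (`call_llm` stands in for whatever provider client you use); the point is the structure: deterministic guardrails first, the model only for the genuinely ambiguous case, and a deterministic failover when the model misbehaves.

```python
from typing import Callable

def analyze_content(text: str, call_llm: Callable[[str], str]) -> str:
    """Route a document: rules handle the predictable cases,
    the LLM handles only the ambiguous middle."""
    # Deterministic guardrails first: cheap, predictable, testable.
    if not text.strip():
        return "empty"
    if len(text) > 100_000:
        return "too_long"
    # Only now does the model get involved.
    try:
        return call_llm(text)
    except Exception:
        # Failover is deterministic too: a safe default,
        # not a retry loop through another model.
        return "needs_human_review"
```

The LLM sits at exactly one decision point; everything around it (validation, limits, error handling) stays boring on purpose.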

The skill that actually matters in 2026 isn't "I know how to use AI." Everyone knows how to use AI — it's designed to be easy. The skill is knowing when to use it, when to use something simpler, and when the honest answer is "we don't need technology for this at all."

That's problem framing. And it's a much harder skill than prompt engineering.

Coming Next

"What 20,000 SKUs Taught Me About Data Quality"

How I turned 12 fields of messy supplier data into 200+ fields of omnichannel-ready product information, and why data quality is a battle you never actually win.

"The Non-Standard Method That Actually Worked"

Why I developed a Pick All/Pack All/Ship All fulfillment process that no textbook would recommend, and how experimentation beats best practices when you know your specific constraints.