What I'm Building
When Nella Cosa transitioned in 2025, I started building. Not courses, not a portfolio of tutorials — actual tools that solve actual problems. Here's what I've been shipping.
AI-Powered Content Platform
The problem: Processing content at scale requires different AI models for different tasks — and the costs add up fast if you're not thoughtful about which model handles what.
What I built: A content processing platform with a multi-LLM provider architecture that routes work across OpenAI, Anthropic, and local models based on task complexity and cost. The system includes automatic provider fallback (if one service goes down, work routes to the next) and "use free credits first" logic that minimizes API spend without sacrificing output quality.
This isn't a ChatGPT wrapper. It's a production-grade API architecture designed for bulk processing with intelligent provider selection — the kind of system that handles thousands of requests without someone babysitting it.
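To make the routing idea concrete, here's a minimal Python sketch of the two pieces described above — an ordered fallback chain that prefers free credits, and a loop that routes around failed providers. The names (`Provider`, `provider_chain`, the complexity tiers) are illustrative, not the platform's actual API.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    cost_per_1k_tokens: float  # USD per 1k tokens; 0.0 for local models
    free_credits: float = 0.0  # remaining promotional credit in USD
    max_complexity: int = 3    # highest task tier this model handles well

def provider_chain(providers, task_complexity, est_cost):
    """Ordered fallback chain: providers whose free credits cover the
    estimated cost come first, then the cheapest paid options."""
    capable = [p for p in providers if p.max_complexity >= task_complexity]
    capable.sort(key=lambda p: (p.free_credits < est_cost, p.cost_per_1k_tokens))
    return capable

def run_with_fallback(providers, task_complexity, est_cost, call):
    """Try each provider in order; if one is down, route work to the next."""
    for provider in provider_chain(providers, task_complexity, est_cost):
        try:
            return call(provider)
        except RuntimeError:
            continue  # provider unavailable: fall through to the next one
    raise RuntimeError("all providers failed")
```

The sort key is the whole trick: a boolean "free credits won't cover this" sorts capable free-credit providers ahead of everyone else, and price breaks the tie.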
Status: Preparing open-source component for community release.
What this demonstrates: I can architect production AI systems — not just prompt engineering, but the infrastructure that makes AI reliable and cost-effective at scale.
Custom RAG Pipeline via MCP
The problem: I needed to reference thousands of pages of developer documentation while coding, but the documentation existed only as web pages. Switching between browser tabs and my editor was killing my workflow.
What I built: A pipeline that scrapes and ingests thousands of documentation pages, processes them for retrieval, and serves them through a Model Context Protocol (MCP) server. Now I can query documentation in real-time from within my development environment — no browser tabs, no context switching.
What this demonstrates: Practical understanding of RAG architecture, vector storage, and MCP protocol. I built this because I had a real problem, not because "RAG" looks good on a resume.
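The retrieval step at the heart of any RAG pipeline can be sketched in a few lines. This toy uses bag-of-words vectors and cosine similarity so it runs with no dependencies — the real pipeline would swap `embed` for an embedding model and the sorted list for a vector store. All names here are illustrative.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a real pipeline calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, query, k=2):
    """Rank ingested documentation chunks against the query; the top-k
    become the context an MCP server hands back to the editor."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

Everything else in the pipeline — scraping, chunking, serving over MCP — is plumbing around this one ranking step.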
"Open Mic" — Adaptive Lyric Scroller (iOS)
The problem: Musicians at open mic nights can't scroll lyrics at a fixed speed. Tempo shifts, dramatic pauses, and personal habits make every performance different. Existing teleprompter apps either scroll too fast or too slow — and you can't exactly reach for your phone mid-song.
What I'm building: A SwiftUI app that listens to live performance audio, determines the playing tempo in real-time, and adjusts lyric/chord scrolling speed accordingly. Users "train" the app across practice sessions so it learns their individual timing on specific songs.
This combines real-time audio analysis, on-device machine learning, and adaptive UX design — and it started because I go to a lot of open mic nights and watched musicians struggle with this exact problem.
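The core control loop is simple once a beat tracker supplies a BPM estimate. Here's a hypothetical sketch in Python (the app itself is SwiftUI): smooth the noisy tempo estimate so a single misdetected beat doesn't jerk the lyrics, then scale the scroll rate, with a per-song bias learned from practice sessions. The function names and the smoothing constant are assumptions, not the app's actual implementation.

```python
def smooth_tempo(prev_bpm, detected_bpm, alpha=0.2):
    """Exponential smoothing: one noisy beat estimate nudges the tempo
    rather than snapping the scroll to it."""
    return (1 - alpha) * prev_bpm + alpha * detected_bpm

def scroll_speed(bpm, base_bpm, base_pixels_per_sec, learned_bias=1.0):
    """Scale the lyric scroll rate to the live tempo. learned_bias is the
    per-song correction learned from the user's practice sessions."""
    return base_pixels_per_sec * (bpm / base_bpm) * learned_bias
```

A dramatic pause shows up as the detected tempo collapsing, which the smoothed estimate follows gradually — so the lyrics slow down instead of scrolling past the singer.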
Status: In active development.
What this demonstrates: Product thinking. I identify real problems that real people have, then figure out whether technology can solve them. Sometimes it can. Sometimes the answer is "just memorize the lyrics" — knowing the difference is the skill.
Other Projects
AirPlay Remote Music Controller (macOS)
Controls music playback on any AirPlay speaker, even from outside its local network. Required solving networking, audio routing, and remote access challenges.
iOS Task Management App
I've deliberately restarted this project multiple times. Each iteration applies architectural lessons from the previous build. Prioritizing clean architecture and learning over shipping speed — because sometimes the most valuable thing you can build is a better understanding of how to build.
Live Music Setlist App (iOS)
Reverse-engineered and rebuilt a production setlist management app. Built it to prove I could ship a complete, functional iOS application. Mission accomplished.
AI Workflow Automation (N8N)
Multi-step automation workflows with LLMs embedded at specific decision points. Key principle: AI at the decision points, traditional logic everywhere else. Because not everything needs a large language model — sometimes a simple if/then is the right answer.
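That principle is easiest to see in code. In this hypothetical sketch (the ticket-routing scenario and function names are mine, not from an actual workflow), deterministic rules handle the unambiguous cases for free, and only the genuinely fuzzy ones pay for an LLM call:

```python
def route_ticket(ticket, classify_with_llm):
    """Route a support ticket. Traditional logic covers the obvious cases;
    the LLM sits at the one genuinely ambiguous decision point."""
    text = ticket.lower()
    # Deterministic rules first: cheap, fast, auditable.
    if "refund" in text:
        return "billing"
    if "password" in text:
        return "auth"
    # Only ambiguous tickets reach the model.
    return classify_with_llm(ticket)
```

The same shape applies inside an N8N workflow: IF nodes for the if/thens, with a single LLM node where judgment is actually required.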
My Philosophy on AI
LLMs are a tool, not a solution.
Every project I build starts with identifying a real problem. AI is one instrument in the toolbox — applied where it creates genuine value, left out where it doesn't. I've seen too many teams bolt AI onto everything because it's trending, then wonder why their costs tripled and their output quality dropped.
The skill isn't knowing how to use AI. The skill is knowing when and why to use it — and having the discipline to choose something simpler when that's the better answer.
The Tech Stack (For the Curious)
Swift/SwiftUI, Python, Docker Compose, N8N, AI/LLM APIs (OpenAI, Anthropic, local models), Model Context Protocol (MCP), RAG pipelines, Proxmox virtualization, Cloudflare (CDN, Workers, Tunnels), Git/GitHub, REST API design, real-time audio processing.
Home lab: Mac Mini servers, Proxmox virtualization, mixed hardware environment — because the best way to learn infrastructure is to break your own.