
Stop Watching. Start Building.

A year ago, I found myself at a crossroads with AI. Like many, I was fascinated, but I didn’t actually know what to do with it.

I did what most people do: I turned to YouTube. I watched endless tutorials and followed step-by-step guides. But I quickly hit a wall. I realized that there is a massive difference between watching someone use AI and actually using it to build something. The “Tutorial Hell” is real; you can follow a video perfectly, but the moment you face a blank screen, the momentum vanishes.

The realization was simple: “Using” is the only way to learn.

The Privacy Pivot

At first, I stayed in the cloud. I used ChatGPT, Copilot, and Perplexity.ai. The code these tools produced was impressive, but as my projects grew more personal, a concern emerged: privacy.

When I was working on a reminder app designed to ensure I’d never forget a birthday again, I realized I was about to upload the names and birth dates of my entire family and friend circle to a corporate server. That didn’t sit right with me. I needed a local solution.

The Local Evolution

My journey into local AI was a series of iterations, each one teaching me something new:

  • Phase 1: The Discovery (LM Studio) 
    I started with LM Studio on my Mac. It was a revelation. Running the qwen2.5-coder-32b model locally allowed me to put the finishing touches on my apps without a single byte of data leaving my machine.
  • Phase 2: The Workflow Shift (VSCodium & Continue) 
    After months of tedious copying and pasting between a chat window and a code editor, I discovered a better way. As a long-time Linux user, I set up VSCodium on my MX Linux PC and added two key extensions: Python and Continue.

By running LM Studio as a server and connecting to it over my LAN, the friction vanished. I no longer had to manually move code; I could watch the Continue extension work directly within my editor. It was like magic and a massive leap in productivity.
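For anyone wanting to reproduce that LAN setup, here is a sketch of what the Continue configuration can look like. LM Studio's server mode exposes an OpenAI-compatible API (port 1234 by default); the IP address below is a placeholder for your own machine's LAN address, and the model title is just a label:

```json
{
  "models": [
    {
      "title": "Qwen2.5 Coder 32B (LM Studio over LAN)",
      "provider": "lmstudio",
      "model": "qwen2.5-coder-32b",
      "apiBase": "http://192.168.1.50:1234/v1"
    }
  ]
}
```

With an entry like this in Continue's config, the extension talks to the Mac running LM Studio instead of any cloud endpoint, so code and prompts never leave the local network.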

  • Phase 3: The Optimization (Ollama) 
    Even with a powerful Mac Studio M4 Max, I felt there was still a more efficient way to handle the backend than LM Studio. That search led me to Ollama. Once I integrated Ollama and downloaded the Gemma models I needed, the setup felt complete. It is leaner, faster, and has become the heart of my current coding workflow.
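The Ollama side of that workflow boils down to a couple of commands. This is a sketch assuming Ollama is already installed and its daemon is running locally; the model name is illustrative, so substitute whichever model you actually use:

```shell
# Download a model's weights to the local machine
ollama pull gemma3

# Quick smoke test from the terminal
ollama run gemma3 "Write a Python function that reverses a string."

# The daemon listens on port 11434; Continue (or anything else
# that speaks its API) can point at it just like LM Studio's server.
curl http://localhost:11434/api/tags   # lists locally installed models
```

Because Ollama runs as a background service, there is no desktop app to keep open, which is a large part of why the setup feels leaner.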

Why this matters for you

I spent a year orbiting these tools—switching OSs, testing LLMs, and fighting configuration errors—so that you don’t have to.

This site isn’t a classroom; it’s a digital workshop. I’m sharing the projects I’ve built and the exact setup I use today (documented in my Homelab section) to save you the months of trial and error I went through.

The tools are here. The paths are documented. Feel free to dive in.