TL;DR
  • AI video tools have crossed the chasm from tech demo to creative playground. The only limit is your ability to describe what you want.

  • I spent this week experimenting with AI platforms to create Nose First, a short film about my dog's perception of space and time.

  • We're quickly moving toward a world where anyone can create anything, with all the creative possibilities and cultural implications that brings.

I've always believed storytelling is one of the most valuable skills you can develop. In college, a documentary filmmaking class taught me more about narrative structure than any writing course ever did. There's something timeless about transforming ideas into stories that move people.

But for most of my life, filmmaking has been expensive, technical, and time-consuming. You needed cameras, lighting, sound equipment, editing software, distribution channels. Most ideas died in the gap between imagination and execution.

This week, I set out to prove what I've suspected for a while now: AI has made that gap disappear.

Creating an AI Movie

The inspiration for this project came from a 2021 camping trip where a psilocybin-fueled campfire debate turned to how dogs experience the world differently from us.

Most people know that dogs possess a sense of smell somewhere between 10,000 and 100,000 times more sensitive than ours. They can detect molecular traces of events that have already happened, essentially smelling the past.

Around the campfire, someone (definitely me) suggested maybe they can smell the future, too.

A ridiculous, beautiful thought that has stuck with me for years.

Lately, AI video has been everywhere, from memes on Instagram to Coca-Cola ads in theaters. After seeing some of the most recent shorts (Air Head and Kalshi’s TV spots are standouts), I felt motivated to jump in and see what I could create.

So this week I gave myself a challenge: bring that campfire conversation to life using 100% AI-generated content. No filming, no actors, no physical production. Just prompts and a creative vision.

Please enjoy the world premiere of Nose First and let me know what you think!!

I hope it’s not too freaky…

Behind the Scenes

For those curious, here's exactly how I made Nose First.

Concept and Script (~1 hour)

  • Started with the core idea: dogs experiencing time through scent

  • Used ChatGPT to explore narrative possibilities and arc

  • Worked with ChatGPT to draft the voiceover script and scene-by-scene storyboard
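
If you'd rather script this step than live in the chat window, here's roughly what it looks like through OpenAI's Python SDK. A minimal sketch; the model name and prompt are illustrative, not the exact ones I used:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Illustrative prompt, not my exact wording
    prompt = (
        "Write a voiceover script and scene-by-scene storyboard for a short film "
        "told from a dog's point of view. The dog experiences time through scent: "
        "smelling the past, and maybe the future."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # any recent chat model works
        messages=[
            {"role": "system", "content": "You are a screenwriter and storyboard artist."},
            {"role": "user", "content": prompt},
        ],
    )

    print(response.choices[0].message.content)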

Voice Creation (~1 hour)

  • Generated a completely synthetic voice for Domino using ElevenLabs

  • Included emotional tags throughout the narration, like [thoughtful], [curious], and [nervous], to capture the tone and beats
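
This step can also be driven from ElevenLabs' Python SDK instead of the web app. A rough sketch, with a placeholder voice ID and an illustrative narration line; model and parameter names may differ between SDK versions:

    import os

    from elevenlabs import save
    from elevenlabs.client import ElevenLabs

    client = ElevenLabs(api_key=os.environ["ELEVENLABS_API_KEY"])

    # Illustrative line with inline emotion tags, not the actual script
    narration = (
        "[thoughtful] The forest smells like yesterday's rain... "
        "[curious] and something that hasn't happened yet."
    )

    audio = client.text_to_speech.convert(
        voice_id="YOUR_VOICE_ID",           # placeholder for Domino's synthetic voice
        model_id="eleven_multilingual_v2",  # assumption: use whichever model supports your tags
        text=narration,
    )

    save(audio, "domino_narration.mp3")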

Visual Production (~8 hours)

  • Broke the script into specific shots using ChatGPT

Example: Cinematic wide shot of a white Australian Cattle Dog with short fur and subtle gray speckling, standing still in a dark, moody pine forest at twilight. The dog has two dark brown ears—one upright, one flopped—and is seen from a slightly off-center, handheld or shaky camera angle, as if being observed from a distance. The camera is not steady; it breathes slightly, mimicking the tension of the moment. Fog drifts low across the forest floor, and the light is dim, cold, and surreal—blue twilight shadows mixed with faint gold flare bleeding in from the left side of frame. The dog tilts his head sharply and slowly in a gesture of confused, deep inquiry, holding it for a long beat. Radial blur and soft focus creep into the edges of the frame. The trees feel tall and distorted by distance. The mood is surreal, reverent, and inquisitive—as if the forest just showed him something he can't explain.

  • Fed these prompts into Google's Veo and OpenAI's Sora, switching between models depending on the shot (a rough API sketch follows this list)

  • Character consistency proved the hardest challenge, requiring reference images and a lot of attention to detail

  • It took hundreds of variations to get the final shots
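
Veo can also be called from Google's google-genai Python SDK instead of the web UI. Treat this as a sketch: the model name, config fields, and polling pattern are my best approximation and may not match the current SDK exactly:

    import os
    import time

    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])

    shot_prompt = (
        "Cinematic wide shot of a white Australian Cattle Dog with subtle gray "
        "speckling, standing still in a dark, moody pine forest at twilight..."
    )

    # Video generation runs as a long-running job: start it, then poll until it finishes.
    operation = client.models.generate_videos(
        model="veo-2.0-generate-001",  # assumption: substitute whichever Veo model you have access to
        prompt=shot_prompt,
        config=types.GenerateVideosConfig(aspect_ratio="16:9"),
    )

    while not operation.done:
        time.sleep(20)
        operation = client.operations.get(operation)

    generated = operation.response.generated_videos[0]
    client.files.download(file=generated.video)
    generated.video.save("shot_01.mp4")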

Post-Production (~2 hours)

  • Used iMovie to piece it all together and get the transitions right

  • Created custom atmospheric sound effects with ElevenLabs: "distant bear growl in forest," "nighttime wind through trees with cricket chorus"
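
The same ElevenLabs SDK covers this too. A quick sketch, assuming the sound-effects endpoint is exposed as text_to_sound_effects (check your SDK version):

    import os

    from elevenlabs import save
    from elevenlabs.client import ElevenLabs

    client = ElevenLabs(api_key=os.environ["ELEVENLABS_API_KEY"])

    # Illustrative prompt from the list above; one clip per call
    effect = client.text_to_sound_effects.convert(
        text="nighttime wind through trees with cricket chorus",
        duration_seconds=10,
    )

    save(effect, "forest_ambience.mp3")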

Total investment: 12 hours across two days and about $150 in AI subscriptions

Creative satisfaction: Immense

Next Up

Creating this video got me thinking about all kinds of big questions:

  • What happens when creation costs approach zero?

  • How does Hollywood adapt when anyone can generate professional-quality content from their laptop?

  • What does infinitely personalized media do to shared cultural experiences?

  • What is the nature of mankind’s eternal struggle with the muse?

I have a lot of thoughts, but to be honest, I burned too many hours this week getting Domino's snout coloring just right. Those deeper themes will have to wait for future newsletters; I need to get back to work.

For now: the tech is basically here, the hurdles are gone, and every story is one prompt away.

Up and to the right.