For the past year, I’ve been turning over in my mind how GenAI might not just shape the future of art, but also help us breathe new life into the creative past. Can the most advanced creative tools we’ve ever built revive the timeless magic of analog storytelling? I kept coming back to the enduring textures of 2D, pre-CGI Disney animation that are forever burned into my childhood memories, wondering if AI could help artists access that kind of style and soul again, at a fraction of the prohibitive cost.
And then I watched Henry Daubrez’s short film Kitsune. It hit me like a memory and answered those exact questions with a resounding yes. So I had to reach out to Henry to chat.
This week, I'm thrilled to share highlights from my conversation with Henry, a creative director, filmmaker, and CEO/COO of DOGSTUDIO/DEPT, a multidisciplinary studio at the intersection of art, design, and technology. Henry is among the first artists to direct a fully generative short film, pushing the boundaries of what's possible with the tech while channeling the nostalgic visual language of classic animation. His films Kitsune and Electric Pink (which premiered at Tribeca this year) aren’t just polished experiments in generative filmmaking. They’re emotionally precise, unmistakably personal, and remind us that in a sea of AI content, it’s not the tools that make the story, it’s the hands that shape it.
If you haven’t seen his films, watch them before or after reading this. You’ll feel them.
This article presents excerpts from our longer conversation. Some responses have been lightly edited for clarity and brevity. For the full discussion:
📺 Watch the full conversation on YouTube or 🎧 Listen on Spotify
Henry Daubrez: The reel before the feature
🪗 The Liberating Constraint: For Henry, generative tools aren't about surrendering control, but about finding freedom in the machine's unexpected decisions. As a "micromanager" by trade, he finds it refreshing to start with something visual and then dissect, remix, and refine it rather than facing a blank page.
🎨 Art Director First: His process is deeply rooted in his art direction instincts. He defines his Creative Firewall™ by identifying the tool's capabilities early in an R&D phase, then using his artistic checklist to curate and refine outputs, prioritizing good storytelling over "pretty but wrong" visuals.
👨🏼‍💻 Iterative Creation: Henry likens his creative process with AI to agile software development. He constantly moves back and forth between generating, editing, and refining, allowing the film to evolve iteratively rather than adhering to a rigid, linear production pipeline. He believes this flexibility leads to a much better final product.
📣 AI as an Amplifier: AI accelerates the process of finding Henry’s voice as a filmmaker, allowing him to explore more stories and gather more reactions in a fraction of the time. It both collapses the creative timeline and expands creative horizons.
Henry’s philosophy in one line:
"The story you want to tell tells more about you as a person than the technology you're using."
Interview Excerpts:
Siddhi: What was your first encounter with the machine? That moment where AI first entered your practice, not just as an abstract idea, but something you touched and used?
Henry: It was three and a half years ago… that was the first time I heard that you could prompt something. Back then, it was something called VQGAN. And I think it was fascinating, because I had the feeling that I was asking for a person and I wasn't really getting the person. It was kind of a weird collage and mix of things, which was both the person I was asking for and at the same time not at all. To me, it was fascinating that the machine made some decisions... I've spent most of my career micromanaging decisions... and it felt kind of refreshing to look at things, see how I feel about them, and just decide if I want to use them or not. Then I would end up cutting pieces of whatever I generated and recomposing them.
I felt refreshed not making those decisions myself, but still actively deciding if they were good for me and what I could do with them.
Siddhi: Let's talk Kitsune. How were you able to achieve such a polished and cohesive look using generative tools?
Henry: Video was definitely not in a place where we could do a lot of things... and I think Veo 2 was this moment where it felt like... it's finally good enough to have natural movement and not just this kind of slow motion. And it wasn't hallucinating everything you were asking, right? Veo 2 and 3 are really like world builders. You can push art direction a lot. So I could start to ask for very precise things like a mood and a color, and everything was directed that way. The challenge was to make sure I had consistency throughout. And I was only relying on text. The issue is that there's only so many words you can give... if you describe too much, for too long... you start losing consistency. You lose action, you lose details. And so I figured out that by using a fox, I could just ask for a fox, and I'm saving all of this work where I don't have to describe a person… I got this fox tattoo just after my mom passed away from cancer, in 2012... that symbol already carried meaning.
With Kitsune I was trying to reach out to something more nostalgic. I wanted it to look like something that could be seen on a TV in the 90s... that feeling of animation that’s not hyper-realistic but still has texture and warmth.
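As an aside, here's a minimal sketch of the consistency trick Henry describes: lock a short, recurring subject and a fixed style block, so the limited prompt budget goes to the action in each shot. The prompt text, names, and helper below are my own illustration under those assumptions, not Henry's actual prompts or workflow.

```python
# Sketch of a text-to-video prompting pattern for shot-to-shot consistency:
# keep the subject and style constant, vary only the action per shot.
# All prompt text here is illustrative, not taken from Kitsune.

STYLE = (
    "hand-drawn 90s TV animation, warm film grain, soft cel shading, "
    "muted autumn palette"
)
SUBJECT = "a red fox"  # one cheap, stable subject instead of a long character sheet


def shot_prompt(action: str) -> str:
    """Compose a prompt that spends its word budget on action, not re-description."""
    return f"{SUBJECT} {action}, {STYLE}"


for action in (
    "running through a snow-covered forest at dusk",
    "pausing at the edge of a frozen lake, breath visible",
):
    print(shot_prompt(action))
```

The design point is that every shot shares the same opening and closing tokens, so the model sees a stable subject and aesthetic while only the middle clause changes.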
Siddhi: What was your secret to maintaining that cohesive vision when you're working across a suite of different generative tools at the same time?
Henry: The thing is, the further you go, the more complicated it is to go backward. When you're shooting, you're not saying, wait, we're changing to a different scenario. It's like, no, it's locked, too expensive to change, too much risk. What I love with AI is that it gives me a chance to constantly go back and improve. It's very much like software development, where you do iterative things, like agile... So the way I do it is that I start generating. I have a core idea, a core script, but not everything is nailed to the word. And I start editing when I'm like 20 percent into generation... I like to see quickly how it feels. So I'll have a few shots, put them in editing software, start putting a rough edit together... a lot of it was filling gaps. I had a film at some point that was like 95% done, but I knew those 5% would be really difficult to get.
To me, it gave me a much better film that way than if I had done like a typical process where like... when it's finished, you look at it and you might be thinking, oh, that could have been better if I had done that.
Siddhi: What did your tool set look like in practice throughout that process?
Henry: So for Kitsune, it was only text to video, and it was only Veo 2 for anything that's animated. Then I was using… something called ScreenFlow, which is normally a screencast tool, basically. But I've been so used to the tool that I'm using it for everything, so I did my editing in it. Color grading was done in ScreenFlow too, making sure everything was matching perfectly. For sound… I used something called MMAudio, which is a library that creates audio directly from a video clip you give it. So you don't have to sync the fox's steps, for example.
And for music, I had two versions of the film: one that was kind of Google-friendly, using stock audio, and one using Udio, one of the tools on the market to create music, which I felt was amazing and better than Suno, one of the competitors at the time. So yeah, that's mostly the stack for this.
Electric Pink is a film that was commissioned by Google for the release of their Flow tool, which is basically a new home for Imagen 3 and 4 and Veo 2 and 3. The big difference with Kitsune, I think, is that I wanted to have more control. I wanted to explore a lot of different aesthetics. And I wanted to avoid having, again, 5,000 to 7,000 generations, and instead focus on having shots I felt were exactly where I wanted them to be and then animate them afterwards.
The difference is that each shot you see in the film was created as a static image first. So I used Imagen 3, which is Google's model... that enabled me to create each shot individually, and then a lot of Photoshop. For example, when you generate an image and there are posters on the walls, the posters don't make any sense. I would remove them fully with Firefly and then Photoshop in actual 80s posters of movies and cartoons and whatever, which was a way for me to place Easter eggs all over.
Siddhi: I think a lot about machines almost like mirrors, in some ways, where these generative systems sometimes echo your unconscious habits back at you. I'm curious if the tools ever taught you something new about your taste or your voice, or a pattern you discovered reflected back after using the technology for a period of time.
Henry: So I think if I look at what I'm doing right now, part of it is also tied to the limitations of the tooling, right? For example, I know that if we look at Veo, it's 720p. That's the max resolution. And so I tend to use a lot of noise, animated noise, and sometimes some kind of cinema scratches as a texture, because I think making it look analog makes it more forgiving of low resolution. That's just, once again, a case of making lemonade. And it becomes, at some point, a signature, because I think this is where we are. I don't like how, if you look at AI outputs, a lot of it still feels synthetic sometimes. And I think adding those analog defects, those small things, removes part of that perfect aspect.
The second thing… I personally go back a lot to my upbringing and what drove me growing up. It does feel a lot like personal therapy these days. I like to look into things that activate your heart in a way, or your feelings, things you can identify with. For a while I wanted to do something on loneliness… and grief; both my parents passed away from cancer. That's something I mentioned in Electric Pink. The story you want to tell tells more about you as a person than the technology you're using.
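A quick technical aside on the analog treatment Henry mentions above: animated grain is straightforward to layer on in post. Below is a minimal sketch using FFmpeg's noise and eq filters, driven from Python; the file names are placeholders, and this is my approximation of the general technique, not Henry's actual pipeline.

```python
import subprocess

# Overlay animated "film grain" on a 720p generated clip so the low
# resolution reads as vintage rather than synthetic (a sketch of the
# general idea; input/output names are placeholders).
subprocess.run(
    [
        "ffmpeg", "-i", "veo_shot_720p.mp4",
        # noise: alls = grain strength, allf=t+u = temporal + uniform,
        # so the grain changes every frame, like film stock.
        # eq: pull saturation down slightly for a faded, analog feel.
        "-vf", "noise=alls=18:allf=t+u,eq=saturation=0.9",
        "-c:a", "copy",  # leave the audio track untouched
        "veo_shot_analog.mp4",
    ],
    check=True,
)
```

The temporal flag is what makes the grain "animated": without it, the noise pattern freezes and reads as a dirty lens rather than moving film texture.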
Siddhi: How does the creative process change for you, Henry, in this generative landscape when you're working alone versus working with a team?
Henry: No, no, 100%. It's definitely the same. It's really like… you're directing a team, or you're a chef in the kitchen... That's why I'm editing, generating, doing design, and retouching pictures at the same time. If there's one minute of generation, I'll shift and retouch whatever I have, or go back to my script. I think that's the feeling: it enables me to direct a lot of things happening at the same time versus having to wait for someone to be done. And it's truly like being on a train at full speed and trying to bring it to your destination, but you can't slow down. You have to keep going. That's both exciting and sometimes really scary, because it's a lot, and it's draining too. Your brain has to do a lot of things at the same time, constantly.
Siddhi: What did it feel like to see an AI-generated film (Electric Pink) included at Tribeca?
Henry: I wasn't part of the audience, but I got feedback from the Google team... I had people walking up to me after the first conference and the screening saying that they cried… and that's definitely what I was trying to achieve with this film. I wanted to strike a chord; I wanted people to identify. The entire film was a personal story, but I wanted to open doors for anyone to see whatever they wanted to see.
And the girl who walked up to me… I saw she was struggling, and she started crying and saying that her grandmother had passed away just before, and the film reminded her of a lot of things. And the mom who came to me... was struggling with her son because he was a perfectionist and he was drawing a lot. Some people came up saying, look, it made me really emotional in that moment. I think that's the best compliment I can get on anything I create.
Especially when you have, you know, extremely respected figures like Guillermo del Toro, who said… an AI film would be just like a screensaver at best, and that it would never create emotion. And I didn't agree. I think stories matter, people have different stories, and technologies don't really matter. This film is a good example: people just, you know, feel something, and it doesn't matter how it was done.
Siddhi: In addition to being an artist you've invested actively in companies like Stability, Leonardo in its early days, Blockade Labs. What connects those investments for you? And what kinds of builders or ideas do you believe are going to shape the future of creative technology?
Henry: My company got acquired three years ago, which gave me a bit of extra cash I never really had access to before. And this happened at the same time I discovered AI, and at the same time these companies were still rare in the market.
After six months of doing this... you quickly see when there's a game changer. And there were a lot fewer than you have today. Today, every two days, there's a new one. So when Stability came, it felt extremely powerful compared to anything else in the market. And I believe in life, if you want something, you can just ask... so I reached out to Emad, who was the CEO back then, and just asked, hey, are you looking for some funding? And they were actually closing their seed round. That's how I got in. Leonardo, kind of the same. I knew one of the founders... and I felt that they had an extremely polished tool very quickly, UI-wise, UX-wise. The tech behind it was basically leveraging other models, but they were smart in how they were building things, and that felt good. I think it's definitely good timing, but it was also about people who were disrupting the market with new things. And since then, I've been looking at companies that are always... AI-enabled somehow.
The last one is Acrylic Robotics. They're using robotic arms to paint with acrylic... and using AI to do computer vision and know how to reproduce something perfectly. And there's a market for that, right? If you look at hospitality... we've been surrounded for years by bad art printed in the cheapest way. And suddenly you have a company that can make actual paintings.
Siddhi: For artists who are just stepping into using GenAI tools and they're feeling curious but cautious, afraid, nervous... what would you tell them to do as a first step?
Henry: I don't think AI is stripping you of creativity, even though the machine is controlling some of the things. I think creativity is how you use it, and how you decide to bridge different tools or whatever you're doing with it. So anything is good. I always tell everyone: just get your hands dirty. It's easy to complain today, but it's not going to stop. That's 100 percent sure. Just get your hands dirty with whatever. Start a Midjourney account and just pump things out. You just have to get started versus just reading headlines and complaining.
Henry's perspective offers a grounded and deeply human vision of what’s possible when artists collaborate with machines. Because in the end, it’s not the tools that carry the story. It’s the voice brave enough to speak it.
✉️ If this reminded you that stories still lead, even when machines assist, subscribe below and share it with someone tracing their own path through the noise.