Presentation
Generative Models for Visual Content Editing and Creation
Description
Generative AI now drives storyboarding, previs, and look-development, yet two gaps slow adoption: artists struggle with opaque tools, while ML engineers lack cinematic grammar. This half-day master class closes both gaps by pairing concise theory with hands-on, human-in-the-loop practice and built-in ethics.
Through an Explain → Show → Do rhythm, each concept moves from a crisp technical snapshot to a live GPU demo and a guided task. Team exercises turn peer critique into a rapid feedback loop, while questions of authorship, bias, and legal clearance surface at every step—embedding responsible practice into real production workflows.
Live demos built on the CineVision pipeline transform a log-line into reference frames, shot lists, and colour-graded contact sheets, showcasing diffusion, LoRA, ControlNet, AnimateDiff, and IP-Adapter in action.
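For a concrete sense of what such a tool-chain involves, the sketch below strings together a diffusion model, a LoRA style adapter, and a depth ControlNet using the open-source Hugging Face diffusers library. It is a minimal illustration of the pattern, not the CineVision pipeline itself; the model IDs, file paths, and prompt are illustrative placeholders.

```python
# Minimal sketch of a diffusion + LoRA + ControlNet step.
# Paths and prompt are placeholders, not actual CineVision assets.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# A depth-conditioned ControlNet keeps the generated frame locked to the
# storyboard's composition; pose or edge variants slot in the same way.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A LoRA adapter supplies the production's look without retraining the base model.
pipe.load_lora_weights("style_lora.safetensors")  # placeholder path

depth_map = load_image("shot_012_depth.png")  # placeholder conditioning image
frame = pipe(
    prompt="night exterior, rain-slicked street, anamorphic flare, teal grade",
    image=depth_map,
    num_inference_steps=30,
).images[0]
frame.save("shot_012_ref.png")
```

Note that composition (ControlNet) and look (LoRA) live in independent components, so either can be swapped without touching the base model; this separation is what keeps creative control in the artist's hands.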
By the end of the session, you will be able to:
Explain how modern diffusion and multimodal generators work.
Customise AI tool-chains without ceding creative control.
Integrate AI assets into coherent, ethically sound sequences.
Assess—and build—production-ready pipelines that enhance director–cinematographer collaboration.
Advisory Committee
Prof. Maneesh Agrawala — Stanford University
Prof. Huamin Qu — The Hong Kong University of Science & Technology
Prof. James Evans — University of Chicago
Prof. Shane Denson — Stanford University
Prof. Tim Gruenewald — The University of Hong Kong
Prof. Bárbara Fernández-Melleda — The University of Hong Kong

Event Type
Courses
Time
Thursday, 18 December 2025, 1:00pm - 4:45pm HKT
Location
Meeting Room S425, Level 4