Haotian Zhang - The University of Hong Kong
July 13, 2024
This is an online four-day workshop consisting of software tutorials and rapid prototyping. Participants will be exposed to a variety of visual-media techniques centering on reality capture, simulation, animation, and generative AI, each of which processes information with its own characteristics.
Register here: https://forms.gle/EnqRSJa1Y1Ln4ZJ79
Instructors: Haotian Zhang, Kaiho Yu, Gabriela Bìlá
In the tutorials, the instructors will introduce both the tools and their specific workflows. The techniques are categorized according to their “materiality,” that is, their association with real-world materials.
Day 1: Line
Medium: motion capture and AI auto-rigging
Software tools: Plask Motion (advanced option: Omniverse), Blender
Demo project: we will transform the movement of a person from camera footage into an avatar's inverse-kinematics joint animation in digital space.
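At its core, inverse kinematics solves for the joint angles that place an end effector (a hand or foot) at a target position; this is what the avatar's rig computes each frame. As a minimal illustrative sketch of the idea — a toy analytic two-joint planar solver, not Plask Motion's or Blender's actual solver, and the function name `two_link_ik` is ours — consider:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-segment limb.

    Given a target (x, y) and segment lengths l1, l2, return the
    (shoulder, elbow) angles in radians that place the limb's tip
    at the target (one of the two possible elbow solutions).
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow bend; clamp for unreachable targets.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))
    elbow = math.acos(cos_elbow)
    # Shoulder angle: aim at the target, then correct for the elbow bend.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)
    )
    return shoulder, elbow
```

A full-body rig chains many such solves (typically iteratively, in 3D, with joint limits), but the geometry is the same.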
Day 2: Solid
Medium: text-to-3D, image-to-3D, and simulation
Software tools: Meshy, Stable Diffusion, Blender
Demo project: we will use the Segment Anything Model to deconstruct a still-life image, translate each of its elements into 3D objects with image-to-3D tools, and then simulate their assembly in Blender.
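Conceptually, this first step decomposes one image into a set of per-element masks, each of which can then be sent to an image-to-3D tool. As a hedged, self-contained stand-in for that decomposition (SAM itself requires model weights; this toy uses 4-connected component labeling on a binary grid, and `label_components` is our own name), the idea looks like:

```python
def label_components(grid):
    """Label 4-connected foreground regions of a binary grid.

    Returns (count, labels) where labels[y][x] is 0 for background
    or a region id starting at 1 -- a toy analogue of the per-object
    masks a segmentation model produces.
    """
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] and not labels[sy][sx]:
                count += 1
                # Flood-fill this region with a fresh label.
                stack = [(sy, sx)]
                labels[sy][sx] = count
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            stack.append((ny, nx))
    return count, labels
```

In the workshop, each mask would crop out one still-life element (a cup, a fruit), which the image-to-3D tool then lifts into a mesh.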
Day 3: Light
Medium: Gaussian Splatting and Neural Radiance Fields
Software tools: Luma AI, instant-ngp, Unreal Engine
Demo project: we will use instant-ngp to process video collages and construct mirrors and portals in Neural Radiance Fields.
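The rendering step shared by NeRFs and Gaussian Splatting is volume rendering: along each camera ray, per-sample densities and colors are composited front to back, with denser samples occluding what lies behind them. A minimal one-ray sketch of that compositing (scalar colors for brevity; `composite` is our own name, not an instant-ngp API):

```python
import math

def composite(densities, colors, step):
    """Front-to-back alpha compositing along a single ray.

    densities: per-sample volume density (sigma), ordered near to far.
    colors:    per-sample radiance (scalar here; RGB in practice).
    step:      spacing between samples along the ray.
    """
    transmittance = 1.0  # fraction of light still unblocked
    out = 0.0
    for sigma, c in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * step)  # opacity of this sample
        out += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return out
```

Mirrors and portals exploit exactly this machinery: because the scene is a continuous field rather than a mesh, rays can be re-aimed or composited across multiple captured volumes.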
Day 4: Particle
Medium: point cloud manipulation with Blender geometry nodes
Software tools: RealityCapture, COLMAP, Blender
Demo project: we will use photogrammetry to create a point cloud of the city and animate it with Blender geometry nodes to visualize the echolocation of a bat.
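The echolocation effect amounts to displacing each point of the cloud as an expanding spherical wavefront passes through it — exactly the kind of per-point computation Blender geometry nodes perform on the GPU. As a hedged pure-Python sketch of that displacement (the name `pulse_offsets` and the Gaussian falloff are our illustrative choices, not the Blender node setup itself):

```python
import math

def pulse_offsets(points, emitter, t, speed=1.0, width=0.3, amplitude=0.2):
    """Radial displacement of a point cloud by an expanding wavefront.

    points:  iterable of (x, y, z) tuples.
    emitter: wavefront origin (the bat's position).
    t:       time since emission; the front sits at radius speed * t.
    Points near the front are pushed outward along the emitter-to-point
    direction, with a Gaussian falloff of the given width.
    """
    offsets = []
    radius = speed * t
    for p in points:
        d = math.dist(p, emitter)
        falloff = math.exp(-((d - radius) / width) ** 2)
        if d > 0:
            scale = amplitude * falloff / d
            offsets.append(tuple(scale * (pi - ei) for pi, ei in zip(p, emitter)))
        else:
            offsets.append((0.0,) * len(p))
    return offsets
```

Animating `t` over the timeline sweeps the pulse through the photogrammetry cloud, making the city "light up" as the bat's call reaches it.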
Methods of Translating Reality is part of the event Atlas of the Unseen Shanghai 未见之图, a collaboration between The University of Hong Kong, MIT City Science Group, and the City Science Lab Tongji.