Imagine synthesizing the video below at comparable visual fidelity under interactive time constraints, perhaps using suitable data embedded in some high-dimensional space. Recall the challenge posed by high-frequency distributions in the nonlinear material and transport space!
How can we achieve synthesis that is both effective and efficient, from a theoretical as well as an artistic perspective?
When I see great images, I feel the urge to simulate them.
Activities & Achievements
Feb. 12, 2015
Won the 2nd Rendering Competition Prize in the course "Computer Graphics" at Saarland University, Germany, during my Master's in Visual Computing.
A path tracer with Russian roulette proportional to the luminance of the path throughput, combined with multiple importance sampling of the BRDF and area light sources, as well as stratified image-plane sampling with Mitchell filtering.
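The termination strategy above can be sketched in a few lines. This is a minimal illustration, not the competition code: the function names and the Rec. 709 luma weights are my assumptions, chosen as a common way to define "luminance" of an RGB throughput.

```python
import random

def luminance(rgb):
    """Rec. 709 luma of an RGB triple -- one common choice of 'luminance'."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def russian_roulette(throughput, rng=random.random):
    """Probabilistically terminate a path; reweight survivors to stay unbiased.

    Returns the reweighted throughput, or None if the path was terminated.
    """
    # Survival probability proportional to throughput luminance, clamped to 1.
    p = min(1.0, luminance(throughput))
    if rng() >= p:
        return None  # path terminated; contributes nothing further
    # Dividing by the survival probability keeps the estimator unbiased:
    # E[contribution] = p * (throughput / p) + (1 - p) * 0 = throughput.
    return tuple(c / p for c in throughput)
```

Dim paths (low luminance) are killed often but survivors are boosted accordingly, so the expected contribution is unchanged while little work is wasted on paths that carry almost no energy.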
Sept. 24, 2014
Incremental Light Propagation Volumes in CAVE-like Installations
Ludwig-Maximilians-University Munich, and
Leibniz Supercomputing Centre in Munich, Germany
Dr. Christoph Anthes
Prof. Dr. Dieter Kranzlmüller
This gave me a lot of experience with CUDA and low-level optimized OpenGL. Sometimes it was just work, but most of the time it felt like fun. Thanks for giving me this opportunity.