The project started with modeling the envisioned scene, followed (and partly accompanied) by programming the required materials and textures. Most of the work, however, went into understanding and implementing path tracing.

Modeling

Having a concept in mind, I started in Autodesk Maya 2015 by modeling the walls of the hall (1 unit = 1 m). I began with the roofs at the sides, using boolean operations on polygon meshes, which require some care to get right.

small dome

Using "Duplicate Special", repeating structures can be modelled easily in Maya. After 1.5-2 full working days, the result looked like this:

scene overview

I should note that during modeling I hid the capitals of the columns to speed up the viewport refresh rate, and a dummy human stood in for the high-resolution statue meshes.

Next, I used Maya's Paint tools to stroke the plants, the grasses, and the tree. Pressure-sensitive strokes from a graphics tablet produced much better results, and the process actually became fun after getting used to it. Here are some wireframe-shaded screenshots from production:

plants

plants

plants

plants

The scene was modelled for a specific camera orientation (note that the final orientation is slightly different). Some intermediate results:

camera orientation

camera orientation

Next I modelled a hole into the roof (again with boolean operations). It is admittedly not very pretty, but it is perfectly fine for the chosen camera orientation (and time was short). Using helper geometry, I let vegetation grow down from the hole, because the scene should look old and decayed.

hole in the roof with plants

Finally I added some assets to make the scene look more natural: fallen capitals from nearby columns (each matches a column) and some old statues on the floor (they are 3D scans, so they look good, and are sometimes fittingly broken). Just look at this image and how comfortably she is resting in the leaves, in a silent corner near a column.

sappho in vine

I should mention that I slightly "fixed" the 3D scans by removing some disturbing artefacts, after scaling them to my SI-based measurement setup (1 unit = 1 m). There are also two glass spheres in the scene which, many years ago, were used by a wizard (or mage; people cannot remember nowadays).

Materials

Now let us have a look at material development. For this I heavily used the well-known "mitsuba" test scene, which is commonly used for this purpose, although materials should really be tested and developed in realistic scenarios. For the path tracer, I had to slightly modify this scene so that it is properly surrounded by a sphere or box (for lighting).

I implemented the marble texture similar to http://lodev.org/cgtutor/randomnoise.html#Marble (it served more as a guideline than as an implementation fitting my renderer). The most difficult task was getting the many parameters involved right (5 parameters, excluding the noise texture itself). Introducing colors is just a matter of linear interpolation, and more complex marbles can be built with layered textures. Along the way, I accidentally arrived at a granite material by misusing a marble texture (nice!).
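The core idea of that tutorial can be sketched in a few lines: a sine pattern over rotated image coordinates, distorted by value-noise turbulence. The following is a Python sketch for illustration only (my renderer is not written in Python); the names and default values (x_period, turb_power, the 64x64 lattice, seed) are assumptions of mine, not the tutorial's exact code:

```python
import math
import random

random.seed(42)
N = 64  # size of the random value-noise lattice (assumption)
noise = [[random.random() for _ in range(N)] for _ in range(N)]

def smooth_noise(x, y):
    # bilinearly blend four neighboring lattice values
    fx, fy = x - int(x), y - int(y)
    x0, y0 = int(x) % N, int(y) % N
    x1, y1 = (x0 + N - 1) % N, (y0 + N - 1) % N
    v = fx * fy * noise[y0][x0]
    v += (1 - fx) * fy * noise[y0][x1]
    v += fx * (1 - fy) * noise[y1][x0]
    v += (1 - fx) * (1 - fy) * noise[y1][x1]
    return v

def turbulence(x, y, size):
    # sum of noise octaves, normalized back into [0, 1)
    value, initial = 0.0, size
    while size >= 1.0:
        value += smooth_noise(x / size, y / size) * size
        size /= 2.0
    return value / (initial * 2.0)

def marble(x, y, x_period=5.0, y_period=10.0,
           turb_power=5.0, turb_size=32.0, w=256, h=256):
    # sine of a rotated coordinate, perturbed by turbulence;
    # the result in [0, 1] is then linearly interpolated between colors
    xy = x * x_period / w + y * y_period / h
    return abs(math.sin((xy + turb_power * turbulence(x, y, turb_size)) * math.pi))
```

Cranking up turb_power pushes the pattern from smooth veins toward the chaotic look that, in my case, produced the accidental granite.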

...
...
...
...

With path tracing, the glass materials look very nice. These are some images from development I would like to share.

...
...

Integration, Sampling

For path tracing with next event estimation, I consulted "Physically Based Rendering", 2nd edition, by Matt Pharr et al., as well as "Advanced Global Illumination" by P. Dutré et al., 2006, which covers the theory well, accompanied by lecture notes I unfortunately no longer remember. Light sources are sampled by area with two different strategies: sample each light source once and average the contributions, or distribute a number of samples over all available light sources, for which I implemented uniform random selection (for simplicity). For BRDFs I use importance sampling over the hemisphere to reduce variance and thus improve convergence; this is particularly helpful for glass materials. The Lambertian surface implements cosine-lobe sampling. The data structures were extended to carry the PDF so the integrator can use it. As termination criterion, I use Russian roulette with a probability proportional to the luminance of the path throughput / 0.01 (see http://www.cs.utah.edu/~thiago/cs7650/hw12/). I have not implemented efficiency-optimized Russian roulette, since time was very limited. I also experimented with bidirectional path tracing (a few days ago), which I have not yet finished (see http://graphics.ucsd.edu/~iman/BDPT/). The implementation was "verified" by comparing renderings of the Cornell box against the original synthetic image (http://www.graphics.cornell.edu/online/box/data.html).
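To make two of these ingredients concrete, here is a minimal Python sketch (for illustration; my renderer is not Python, and all names are mine): cosine-lobe hemisphere sampling via Malley's method, and luminance-based Russian roulette, where I interpret the 0.01 as a lower bound on the survival probability:

```python
import math
import random

def cosine_sample_hemisphere(u1, u2):
    # Malley's method: sample a unit disk uniformly, project up onto
    # the hemisphere; the resulting pdf is cos(theta) / pi
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))
    return (x, y, z)  # local frame, z axis = surface normal

def pdf_cosine(direction):
    # pdf matching the sampler above, stored alongside the sample
    # so the integrator can use it
    return max(0.0, direction[2]) / math.pi

def russian_roulette(throughput, min_prob=0.01):
    # survive with probability proportional to path-throughput luminance,
    # bounded below by min_prob (my reading of the 0.01 in the text)
    r, g, b = throughput
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b
    p = max(min_prob, min(1.0, lum))
    if random.random() > p:
        return None  # terminate the path
    # divide surviving throughput by p to keep the estimator unbiased
    return (r / p, g / p, b / p)
```

The integrator would call russian_roulette once per bounce and stop the path when it returns None; dim paths die early while bright ones survive, which is exactly why this termination rule helps convergence.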

There is one very important detail I really want to share; if at least one person finds it helpful in the future, I am pleased. I thought I had a problem with my implementation, looked everywhere (textbooks, papers, lectures), and it cost me a lot of time, though I also learned a lot. The problem was that I got a very noisy image in a scene made up of only Lambertian surfaces:

error??

The solution: the surface albedo was set to 1, AND the area light was neither close enough to nor far enough from a surface, which resulted in "numerical instability" due to the division by the squared distance when computing irradiance from the light source; in addition, my own implementation is probably not very robust. So please keep this in mind. For those of you willing to understand path tracing, here are some images. Some overly bright pixels are not an implementation error; they just visualize the large variance of my path tracer, because I have implemented neither low-discrepancy (LD) sampling for integration (see future work) nor MLT.
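To illustrate where the instability comes from, here is a hedged sketch of the geometry term used when sampling an area light in next event estimation. The clamping of the squared distance is one possible mitigation, an assumption of mine rather than what my renderer actually does:

```python
def light_contribution(emitted, cos_surface, cos_light, dist, min_dist=1e-3):
    # geometry term of next event estimation: the 1/d^2 factor blows up
    # when a light sample lands very close to the shading point, which
    # produces the bright "firefly" pixels described above
    d2 = max(dist * dist, min_dist * min_dist)  # clamp (hypothetical fix)
    g = max(0.0, cos_surface) * max(0.0, cos_light) / d2
    return emitted * g
```

Without the clamp, a sample at dist close to 0 contributes an enormous value that a full-albedo surface never attenuates, so single samples dominate the pixel estimate and the image stays noisy for a long time.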

...
...
...

The rightmost image is a result of recursive ray tracing. And finally, an image from development which shows that path tracing is not perfect. The lighting situation is very difficult because the only light source is seldom visible from surface patches (even bidirectional path tracing would struggle here). The image uses 1000 samples per pixel; recursive ray tracing, in contrast, produces an almost black image.

difficult light situation

I further employed stratified sampling of the image domain, as well as a Mitchell filter (Mitchell and Netravali, 1988). I also experimented with a box filter and a Gaussian filter, but the Mitchell filter resulted in the least noise.
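As an illustration of both techniques, here is a Python sketch (names are mine): jittered stratification of the unit interval, and the 1D Mitchell-Netravali filter kernel with the recommended B = C = 1/3:

```python
import random

def stratified_samples(n):
    # jittered stratification: one uniform random sample inside each
    # of n equal strata of [0, 1), instead of n independent samples
    return [(i + random.random()) / n for i in range(n)]

def mitchell_1d(x, B=1.0 / 3.0, C=1.0 / 3.0):
    # Mitchell-Netravali cubic filter on support |x| <= 2;
    # B = C = 1/3 are the values recommended in the 1988 paper
    x = abs(x)
    if x < 1.0:
        return ((12 - 9 * B - 6 * C) * x**3
                + (-18 + 12 * B + 6 * C) * x**2
                + (6 - 2 * B)) / 6.0
    elif x < 2.0:
        return ((-B - 6 * C) * x**3
                + (6 * B + 30 * C) * x**2
                + (-12 * B - 48 * C) * x
                + (8 * B + 24 * C)) / 6.0
    return 0.0
```

For image reconstruction the 2D filter weight is the product mitchell_1d(dx) * mitchell_1d(dy) of the sample's offsets from the pixel center; the slight negative lobes of this kernel are what keep edges sharper than a Gaussian while suppressing noise better than a box filter.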

Notable Parts - My Opinion

These images depict the marble texture (still on Lambertian surfaces) and the path tracing. It is fascinating what path tracing can do.

The capitals have enough mesh resolution to look quite convincing.

Here is a fine red marble. It differs in just 4 parameters from the marble used for the columns.

The Greek statue to the left looks quite awesome. I think the parameters of the marble texture are just right.

The "Sappho Head" to the right looks as if she is "resting in peace", with some leaves draped over her face.

If you look closely at the refraction of light through the sphere on the left side of the throne, you can actually see all the red marble columns and the green tree.

On the throne, a plant is growing up along the backrest. Its resolution is high enough for pixel precision (for the chosen camera parameters). Nice work by the Paint tools inside Autodesk Maya.

I really like the grass (green) and the corn-like grass (even with convincing "flowers", in terms of the Maya tool). The shadows of the green plant on the marble are also quite soft; note that the softness depends on the distance from the marble. The shadow is also tinted green by the plant.

I personally like trees, so my opinion is somewhat biased. This one looks nice, although still unnatural (of course; trees are highly complex).

Rendering Facts

Scene size (OBJ files, no textures): 4.4 GB
Scene geometric complexity: approx. 30 million triangles (no instancing, for performance reasons)
Render time for path tracing the scene (image: 573x240, 1000 spp): approx. 40 min
Render time for path tracing the scene (image: 2294x960, 1000 spp): approx. 11 h
Memory consumption during modeling: 12 GB (18 GB during exporting and long working sessions)
Memory consumption during rendering: approx. 17 GB (I know, my code is not production-ready)

 

Thank you for reading this.