In the previous section, we learned how to load a scene from an XML file. Once a scene has been loaded, it can be rendered as follows:
...

# Load the scene from an XML file
scene = load_file(filename)

# Get the scene's sensor (if many, can pick one by specifying the index)
sensor = scene.sensors()[0]

# Call the scene's integrator to render the loaded scene with the desired sensor
scene.integrator().render(scene, sensor)
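If the scene defines more than one sensor, a specific one can be selected by indexing the list returned by scene.sensors(). A minimal sketch (choosing the second sensor here is only illustrative):

# Minimal sketch: pick a specific sensor when several are defined
# (selecting the second sensor is only an example)
sensors = scene.sensors()
sensor = sensors[1] if len(sensors) > 1 else sensors[0]
scene.integrator().render(scene, sensor)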
After rendering, it is possible to write out the rendered data as an HDR OpenEXR file like this:
# The rendered data is stored in the film
film = sensor.film()

# Write out data as high dynamic range OpenEXR file
film.set_destination_file('/path/to/output.exr')
film.develop()
One can also write out a gamma tone-mapped JPEG file of the same rendering:
# Write out a tone-mapped JPG of the same rendering
from mitsuba.core import Bitmap, Struct
img = film.bitmap(raw=True).convert(Bitmap.PixelFormat.RGB, Struct.Type.UInt8, srgb_gamma=True)
img.write('/path/to/output.jpg')
The raw=True argument in film.bitmap() specifies that we are interested in the raw film contents so that we can perform the conversion into the desired output format ourselves. See mitsuba.core.Bitmap.convert() for more information regarding the bitmap conversion routine.
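The same conversion mechanism works for other low dynamic range formats as well; as an illustrative sketch (the output path is hypothetical), an 8-bit sRGB PNG can be written in exactly the same way:

# Illustrative sketch: write the raw film contents as an 8-bit sRGB PNG
# (the output path is hypothetical)
img_png = film.bitmap(raw=True).convert(Bitmap.PixelFormat.RGB, Struct.Type.UInt8, srgb_gamma=True)
img_png.write('/path/to/output.png')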
The data stored in the Bitmap object can also be cast into a NumPy array for further processing:
# Get linear pixel values as a NumPy array for further processing
img = img.convert(Bitmap.PixelFormat.RGB, Struct.Type.Float32, srgb_gamma=False)
import numpy as np
image_np = np.array(img)
print(image_np.shape)
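The reverse direction also works: a processed NumPy array can be wrapped in a new Bitmap and written back out, just like in the earlier examples. A minimal sketch (the darkening factor and output path are only examples):

# Minimal sketch: process the pixel values in NumPy and wrap them in a new Bitmap
# (the 0.5 darkening factor and the output path are only examples)
processed = np.clip(image_np * 0.5, 0.0, 1.0).astype(np.float32)
bmp_out = Bitmap(processed, Bitmap.PixelFormat.RGB)
bmp_out.write('/path/to/processed.exr')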
The full Python script of this tutorial can be found in the file:
In the following section, we show how to use the Python bindings to write a
simple depth map renderer, including ray generation and pixel value splatting,
purely in Python. While this is of course much more work than simply calling
render(), this fine-grained level of control can be useful
in certain applications. Please also refer to the related section on
developing custom plugins in Python.
Similar to before, we import a number of modules and load the scene from disk:
import os
import enoki as ek
import numpy as np
import mitsuba

# Set the desired mitsuba variant
mitsuba.set_variant('packet_rgb')

from mitsuba.core import Float, UInt32, UInt64, Vector2f, Vector3f
from mitsuba.core import Bitmap, Struct, Thread
from mitsuba.core.xml import load_file
from mitsuba.render import ImageBlock

# Absolute or relative path to the XML file
filename = 'path/to/my/scene.xml'

# Add the scene directory to the FileResolver's search path
Thread.thread().file_resolver().append(os.path.dirname(filename))

# Load the scene
scene = load_file(filename)
In this example we use the packet variant of Mitsuba, which means that calls to Mitsuba functions are vectorized, avoiding expensive for-loops in Python. The same code also works with the gpu variants of the renderer.
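Which variants are actually available depends on how Mitsuba was compiled. A minimal sketch that checks the compiled variants before selecting one (the scalar_rgb fallback is only an example, and this assumes mitsuba.variants() is present in your build):

import mitsuba

# Minimal sketch: select 'packet_rgb' only if it was compiled in, otherwise
# fall back to 'scalar_rgb' (the fallback choice is only an example)
available = mitsuba.variants()
mitsuba.set_variant('packet_rgb' if 'packet_rgb' in available else 'scalar_rgb')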
Instead of calling the scene’s existing integrator as before, we will now manually trace rays through each pixel of the image:
# Instead of calling the scene's integrator, we build our own small integrator
# This integrator simply computes the depth values per pixel
sensor = scene.sensors()[0]
film = sensor.film()
sampler = sensor.sampler()
film_size = film.crop_size()
spp = 32

# Seed the sampler
total_sample_count = ek.hprod(film_size) * spp

if sampler.wavefront_size() != total_sample_count:
    sampler.seed(ek.arange(UInt64, total_sample_count))

# Enumerate discrete sample & pixel indices, and uniformly sample
# positions within each pixel.
pos = ek.arange(UInt32, total_sample_count)
pos //= spp
scale = Vector2f(1.0 / film_size[0], 1.0 / film_size[1])
pos = Vector2f(Float(pos % int(film_size[0])),
               Float(pos // int(film_size[0])))

pos += sampler.next_2d()

# Sample rays starting from the camera sensor
rays, weights = sensor.sample_ray_differential(
    time=0,
    sample1=sampler.next_1d(),
    sample2=pos * scale,
    sample3=0
)

# Intersect rays with the scene geometry
surface_interaction = scene.ray_intersect(rays)
After computing the surface intersections for all rays, we extract the depth values:
# Given intersection, compute the final pixel values as the depth t
# of the sampled surface interaction
result = surface_interaction.t

# Set to zero if no intersection was found
result[~surface_interaction.is_valid()] = 0
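For quick inspection it can be handy to look at a normalized copy of these values; a minimal sketch using Enoki's horizontal reduction (this step is optional and not used by the code below):

# Minimal sketch: a normalized copy of the depth values for quick inspection
# (the ImageBlock splatting below still uses the raw 'result')
max_depth = ek.hmax(result)
result_normalized = result / max_depth if max_depth > 0 else result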
We then splat these depth values into an ImageBlock, an image data structure that handles averaging over samples and accounts for the pixel reconstruction filter. The ImageBlock is then converted to a Bitmap object and the resulting image is saved to disk.
block = ImageBlock(
    film.crop_size(),
    channel_count=5,
    filter=film.reconstruction_filter(),
    border=False
)
block.clear()

# ImageBlock expects RGB values (Array of size (n, 3))
block.put(pos, rays.wavelengths, Vector3f(result, result, result), 1)

# Write out the result from the ImageBlock
# Internally, ImageBlock stores values in XYZAW format
# (color XYZ, alpha value A and weight W)
xyzaw_np = np.array(block.data()).reshape([film_size[1], film_size[0], 5])

# We then create a Bitmap from these values and save it out as EXR file
bmp = Bitmap(xyzaw_np, Bitmap.PixelFormat.XYZAW)
bmp = bmp.convert(Bitmap.PixelFormat.Y, Struct.Type.Float32, srgb_gamma=False)
bmp.write('depth.exr')
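The converted bitmap can also be exported in other formats; as an illustrative sketch (the file name is hypothetical), an 8-bit, gamma-corrected preview could be written alongside the EXR:

# Illustrative sketch: additionally write an 8-bit, gamma-corrected preview
# of the depth map (the file name is only an example)
bmp_8bit = bmp.convert(Bitmap.PixelFormat.Y, Struct.Type.UInt8, srgb_gamma=True)
bmp_8bit.write('depth.png')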
The code for this example can be found in