Python API Reference


mitsuba.python.chi2.BSDFAdapter(bsdf_type, extra, wi=[0, 0, 1], ctx=None)

Adapter to test BSDF sampling using the Chi^2 test.

Parameter bsdf_type (string):

Name of the BSDF plugin to instantiate.

Parameter extra (string):

Additional XML used to specify the BSDF’s parameters.

Parameter wi (array(3,)):

Incoming direction, in local coordinates.

class mitsuba.python.chi2.ChiSquareTest(domain, sample_func, pdf_func, sample_dim=2, sample_count=1000000, res=101, ires=4)

Implements Pearson’s chi-square test for goodness of fit of a distribution to a known reference distribution.

The implementation here specifically compares a Monte Carlo sampling strategy on a 2D (or lower dimensional) space against a reference distribution obtained by numerically integrating a probability density function over a grid in the distribution’s parameter domain.

Parameter domain (object):

An implementation of the domain interface (SphericalDomain, etc.), which transforms between the parameter and target domain of the distribution.

Parameter sample_func (function):

An importance sampling function which maps an array of uniform variates of size [sample_dim, sample_count] to an array of sample_count samples on the target domain.

Parameter pdf_func (function):

Function that is expected to specify the probability density of the samples produced by sample_func. The test will try to collect sufficient statistical evidence to reject this hypothesis.

Parameter sample_dim (int):

Number of random dimensions consumed by sample_func per sample. The default value is 2.

Parameter sample_count (int):

Total number of samples to be generated. Larger values provide the test with more statistical evidence. The default value is 1000000.

Parameter res (int):

Vertical resolution of the generated histograms. The horizontal resolution will be calculated as res * domain.aspect(). The default value of 101 is intentionally an odd number to prevent issues with floating point precision at sharp boundaries that may separate the domain into two parts (e.g. top hemisphere of a sphere parameterization).

Parameter ires (int):

Number of horizontal/vertical subintervals used to numerically integrate the probability density over each histogram cell (using the trapezoid rule). The default value is 4.
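As a rough illustration of the cell integration described above, a composite trapezoid rule over ires subintervals can be sketched in plain Python. This is an illustration of the quadrature scheme only, not the Mitsuba implementation, which integrates a 2D density over each histogram cell:

```python
# Composite trapezoid rule over a 1D interval split into `ires` subintervals.
def trapezoid(f, a, b, ires=4):
    h = (b - a) / ires
    total = 0.5 * (f(a) + f(b))     # endpoint contributions
    for i in range(1, ires):
        total += f(a + i * h)       # interior sample points
    return total * h

# Example: integrate x^2 over [0, 1]; the exact value is 1/3.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, ires=4)  # ~0.34375
```

The rule is exact for linear integrands and converges quadratically in ires for smooth ones, which is why a small default of 4 subintervals per cell suffices.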


The following attributes are part of the public API:

messages: string

The implementation may generate a number of messages while running the test, which can be retrieved via this attribute.

histogram: array

The histogram array is populated by the tabulate_histogram() method and stored in this attribute.

pdf: array

The probability density function array is populated by the tabulate_pdf() method and stored in this attribute.

p_value: float

The p-value of the test is computed in the run() method and stored in this attribute.


tabulate_histogram()

Invoke the provided sampling strategy many times and generate a histogram in the parameter domain. If sample_func returns a tuple (positions, weights) instead of just positions, the samples are considered to be weighted.


tabulate_pdf()

Numerically integrate the provided probability density function over each cell to generate an array resembling the histogram computed by tabulate_histogram(). The function uses the trapezoid rule over intervals discretized into self.ires separate function evaluations.

run(significance_level=0.01, test_count=1, quiet=False)

Run the Chi^2 test.

Parameter significance_level (float):

Denotes the desired significance level (e.g. 0.01 for a test at the 1% significance level)

Parameter test_count (int):

Specifies the total number of statistical tests run by the user. This value will be used to adjust the provided significance level so that the combination of the entire set of tests has the provided significance level.

Returns → bool:

True upon success, False if the null hypothesis was rejected.
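The statistic underlying run() can be illustrated with a minimal, self-contained sketch in plain Python (this is not the Mitsuba API): draw samples, histogram them, and compare bin counts against the expected counts under the hypothesized density.

```python
import random

# Minimal Pearson chi-square sketch for a uniform distribution on [0, 1).
# The actual ChiSquareTest instead integrates an arbitrary pdf over each
# histogram cell to obtain the expected counts.
random.seed(0)
sample_count = 100000
res = 10  # number of histogram bins

histogram = [0] * res
for _ in range(sample_count):
    histogram[int(random.random() * res)] += 1

expected = sample_count / res
statistic = sum((h - expected) ** 2 / expected for h in histogram)
dof = res - 1  # degrees of freedom
```

For a well-matched distribution the statistic is comparable in magnitude to dof; converting it into a p-value requires the chi-square CDF.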

class mitsuba.python.chi2.LineDomain(bounds=[-1.0, 1.0])

The identity map on the line.

mitsuba.python.chi2.MicrofacetAdapter(md_type, alpha, sample_visible=False)

Adapter for testing microfacet distribution sampling techniques (separately from BSDF models, which are also tested)

mitsuba.python.chi2.PhaseFunctionAdapter(phase_type, extra, wi=[0, 0, 1])

Adapter to test phase function sampling using the Chi^2 test.

Parameter phase_type (string):

Name of the phase function plugin to instantiate.

Parameter extra (string):

Additional XML used to specify the phase function’s parameters.

Parameter wi (array(3,)):

Incoming direction, in local coordinates.

class mitsuba.python.chi2.PlanarDomain(bounds=None)

The identity map on the plane.


Adapter which permits testing 1D spectral power distributions using the Chi^2 test.

class mitsuba.python.chi2.SphericalDomain

Maps between the unit sphere and a [cos(theta), phi] parameterization.
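A mapping of this kind can be sketched as follows. The convention used here (z = cos(theta), phi measured in the x-y plane) is an assumption for illustration; consult the Mitsuba source for the exact parameterization:

```python
import math

# Map a unit direction to [cos(theta), phi] and back.
def dir_to_params(d):
    x, y, z = d
    return (z, math.atan2(y, x))  # cos(theta) is the z component

def params_to_dir(cos_theta, phi):
    # Recover sin(theta) from cos(theta), clamping to guard against rounding.
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```

This parameterization is convenient for sampling tests because uniform area measure on the sphere corresponds to a uniform density in (cos(theta), phi).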


class mitsuba.python.autodiff.Adam(params, lr, beta_1=0.9, beta_2=0.999, epsilon=1e-08)

Base class: mitsuba.python.autodiff.Optimizer

Implements the Adam optimizer presented in the paper Adam: A Method for Stochastic Optimization by Kingma and Ba, ICLR 2015.

__init__(params, lr, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
Parameter lr:

learning rate

Parameter beta_1:

controls the exponential averaging of first order gradient moments

Parameter beta_2:

controls the exponential averaging of second order gradient moments


step()

Take a gradient step.

class mitsuba.python.autodiff.Optimizer(params, lr)

Base class of all gradient-based optimizers (currently SGD and Adam)

__init__(params, lr)
Parameter params:

dictionary (name: variable) of differentiable parameters to be optimized.

Parameter lr:

learning rate


set_learning_rate(lr)

Set the learning rate.


disable_gradients()

Temporarily disable the generation of gradients.

class mitsuba.python.autodiff.SGD(params, lr, momentum=0)

Base class: mitsuba.python.autodiff.Optimizer

Implements basic stochastic gradient descent with a fixed learning rate and, optionally, momentum [SMDH13] (0.9 is a typical value for the momentum parameter).

The momentum-based SGD uses the update equation

\[v_{i+1} = \mu \cdot v_i + g_{i+1}\]
\[p_{i+1} = p_i + \varepsilon \cdot v_{i+1},\]

where \(v\) is the velocity, \(p\) are the positions, \(\varepsilon\) is the learning rate, and \(\mu\) is the momentum parameter.
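Taking these update equations literally, with g as the step direction (i.e. the negative gradient when minimizing), a scalar sketch looks as follows. This is an illustration of the equations, not the Mitsuba implementation:

```python
# Scalar momentum-SGD sketch following the update equations above.
# Minimizes f(p) = p^2, whose derivative is 2p.
def sgd_minimize(p, lr=0.01, momentum=0.9, steps=300):
    v = 0.0
    for _ in range(steps):
        g = -2.0 * p            # g = -f'(p): step direction for minimization
        v = momentum * v + g    # v_{i+1} = mu * v_i + g_{i+1}
        p = p + lr * v          # p_{i+1} = p_i + eps * v_{i+1}
    return p
```

With momentum=0 this reduces to plain gradient descent; with momentum close to 1, past gradients are exponentially averaged into the velocity, which damps oscillation across narrow valleys of the objective.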

__init__(params, lr, momentum=0)
Parameter lr:

learning rate

Parameter momentum:

momentum factor


step()

Take a gradient step.

mitsuba.python.autodiff._render_helper(scene, spp=None, sensor_index=0)

Internally used function: render the specified Mitsuba scene and return a floating point array containing RGB values and AOVs, if applicable

mitsuba.python.autodiff.render(scene, spp: Union[None, int, Tuple[int, int]] = None, unbiased=False, optimizer: mitsuba.python.autodiff.Optimizer = None, sensor_index=0)

Perform a differentiable rendering of the scene scene, returning a floating point array containing RGB values and AOVs, if applicable.

Parameter spp (None, int, or a 2-tuple (int, int)):

Specifies the number of samples per pixel to be used for rendering, overriding the value that is specified in the scene. If spp=None, the original value takes precedence. If spp is a 2-tuple (spp_primal: int, spp_deriv: int), the first element specifies the number of samples for the primal pass, and the second specifies the number of samples for the derivative pass. See the explanation of the unbiased parameter for further detail on what these mean.

Memory usage is roughly proportional to the spp value; hence this parameter should be reduced if you encounter out-of-memory errors.

Parameter unbiased (bool):

One potential issue when naively differentiating a rendering algorithm is that the same set of Monte Carlo samples is used to generate both the primal output (i.e. the image) and the derivative output. When the rendering algorithm and objective are jointly differentiated, we end up with expectations of products that do not satisfy the equality \(\mathbb{E}[X Y]=\mathbb{E}[X]\, \mathbb{E}[Y]\) due to correlations between \(X\) and \(Y\) that result from this sample re-use.

When unbiased=True, the render() function will generate an unbiased estimate that de-correlates primal and derivative components, which boils down to rendering the image twice and naturally comes at some cost in performance \((\sim 1.6\times)\). Often, biased gradients are good enough, in which case unbiased=False should be specified instead.

The number of samples per pixel per pass can be specified separately for both passes by passing a tuple to the spp parameter.

Note that unbiased mode is only relevant for reverse-mode differentiation. It is not needed when visualizing parameter gradients in image space using forward-mode differentiation.

Parameter optimizer (mitsuba.python.autodiff.Optimizer):

The optimizer referencing relevant scene parameters must be specified when unbiased=True. Otherwise, there is no need to provide this parameter.

Parameter sensor_index (int):

When the scene contains more than one sensor/camera, this parameter can be specified to select the desired sensor.
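The correlation issue described under unbiased can be demonstrated in isolation with plain Python, unrelated to rendering: reusing one set of samples for two estimators biases the expectation of their product, while independent samples do not.

```python
import random

# Estimate E[X] * E[Y] where X = Y = u ~ Uniform(0, 1); the true product of
# expectations is 0.25. Reusing the same sample for both factors instead
# estimates E[u^2] = 1/3, illustrating E[XY] != E[X] E[Y] under correlation.
random.seed(0)
n = 100000

correlated = 0.0
decorrelated = 0.0
for _ in range(n):
    u = random.random()
    correlated += u * u                   # same sample reused for both factors
    decorrelated += u * random.random()   # independent second sample
correlated /= n
decorrelated /= n
```

De-correlating requires a second set of samples, which mirrors why unbiased=True renders the image twice.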

mitsuba.python.autodiff.render_torch(scene, params=None, **kwargs)

mitsuba.python.autodiff.write_bitmap(filename, data, resolution, write_async=True)

Write the linearized RGB image in data to a PNG/EXR/etc. file with resolution resolution.


class mitsuba.python.util.ParameterMap(properties, hierarchy)

Dictionary-like object that references various parameters used in a Mitsuba scene graph. Parameters can be read and written using standard syntax (parameter_map[key]). The class exposes several non-standard functions, specifically torch(), update(), and keep().

__init__(properties, hierarchy)

Private constructor (use mitsuba.python.util.traverse() instead)

torch() → dict

Converts all Enoki arrays into PyTorch arrays and returns them as a dictionary. This is mainly useful when using PyTorch to optimize a Mitsuba scene.

update() → None

This function should be called at the end of a sequence of writes to the dictionary. It automatically notifies all modified Mitsuba objects and their parent objects that they should refresh their internal state. For instance, the scene may rebuild the kd-tree when a shape was modified, etc.

keep(keys: list) → None

Reduce the size of the dictionary by only keeping elements whose keys are part of the provided list keys.
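The keep() semantics amount to in-place key filtering; a minimal sketch using a plain dict subclass (hypothetical stand-in, not the Mitsuba implementation — the real ParameterMap also tracks the scene hierarchy):

```python
# Hypothetical stand-in illustrating keep(): drop every entry whose key is
# not in the provided list, mutating the mapping in place.
class ParameterMapSketch(dict):
    def keep(self, keys):
        for k in list(self):       # copy keys; we mutate while iterating
            if k not in keys:
                del self[k]

# The key names below are made up for illustration.
pmap = ParameterMapSketch({'shape.vertex_positions': 1,
                           'emitter.radiance': 2,
                           'sensor.to_world': 3})
pmap.keep(['emitter.radiance'])
```

Restricting the map this way is useful before an optimization loop, so that only the parameters being optimized are differentiated and updated.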


mitsuba.python.util.traverse(node: mitsuba.core.Object) → mitsuba.python.util.ParameterMap

Traverse a node of Mitsuba’s scene graph and return a dictionary-like object that can be used to read and write associated scene parameters.

See also mitsuba.python.util.ParameterMap.

mitsuba.python.math.rlgamma(a, x)

Regularized lower incomplete gamma function based on CEPHES
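The regularized lower incomplete gamma function P(a, x) is what turns a chi-square statistic into a p-value: the chi-square CDF with k degrees of freedom is P(k/2, x/2). A series-based sketch, valid for moderate x (the CEPHES implementation additionally switches to a continued-fraction expansion when x is large relative to a):

```python
import math

# Regularized lower incomplete gamma P(a, x) via the standard power series
#   P(a, x) = x^a e^{-x} * sum_{n >= 0} x^n / Gamma(a + n + 1).
def rlgamma(a, x):
    if x <= 0.0:
        return 0.0
    # First series term, s_0 = x^a e^{-x} / Gamma(a + 1), computed in log
    # space to avoid overflow.
    term = math.exp(a * math.log(x) - x - math.lgamma(a + 1.0))
    total = term
    n = 1
    while term > total * 1e-15:      # stop once terms are negligible
        term *= x / (a + n)          # s_n = s_{n-1} * x / (a + n)
        total += term
        n += 1
    return total
```

For example, the p-value of a chi-square statistic X² with dof degrees of freedom is 1 - rlgamma(dof / 2, X² / 2).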


Function decorator that adds the mitsuba project root to the FileResolver’s search path. This is useful in particular for tests that e.g. load scenes, and need to specify paths to resources.

The file resolver is restored to its previous state once the test’s execution has finished.

mitsuba.python.test.util.getframeinfo(frame, context=1)

Get information about a frame or traceback object.

A tuple of five things is returned: the filename, the line number of the current line, the function name, a list of lines of context from the source code, and the index of the current line within that list. The optional second argument specifies the number of lines of context to return, which are centered around the current line.
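This description matches Python's standard inspect.getframeinfo, which this utility appears to re-export; a brief self-contained example using the standard library directly:

```python
import inspect

def where_am_i():
    # Inspect the currently executing frame and summarize it.
    frame = inspect.currentframe()
    return inspect.getframeinfo(frame, context=1)

info = where_am_i()
# Fields: info.filename, info.lineno, info.function,
#         info.code_context, info.index
```

Note that code_context may be None when the source file is unavailable (e.g. code executed from a string).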

mitsuba.python.test.util.make_tmpfile(request, tmpdir_factory)


Return a list of records for the stack above the caller’s frame.

mitsuba.python.test.util.tmpfile(request, tmpdir_factory)

Fixture to create a temporary file

mitsuba.python.test.util.wraps(wrapped, assigned=('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'), updated=('__dict__',))

Decorator factory to apply update_wrapper() to a wrapper function

Returns a decorator that invokes update_wrapper() with the decorated function as the wrapper argument and the arguments to wraps() as the remaining arguments. Default arguments are as for update_wrapper(). This is a convenience function to simplify applying partial() to update_wrapper().
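This behaves like the standard functools.wraps; a brief example of the metadata it preserves on the wrapper:

```python
import functools

def logged(func):
    @functools.wraps(func)  # copy __name__, __doc__, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@logged
def area(r):
    """Area of a circle of radius r."""
    return 3.141592653589793 * r * r
```

Without the wraps() decoration, area.__name__ would report 'wrapper' and its docstring would be lost, which breaks introspection-based tooling such as pytest fixtures.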