# Render API Reference

## BSDF

class mitsuba.render.BSDF

Base class: mitsuba.core.Object

Bidirectional Scattering Distribution Function (BSDF) interface

This class provides an abstract interface to all BSDF plugins in Mitsuba. It exposes functions for evaluating and sampling the model, and for querying associated probability densities.

By default, the functions in this class sample and evaluate the complete BSDF, but the interface also allows picking and choosing individual components of multi-lobed BSDFs based on their properties and component indices. This selection is specified using a context data structure that is provided along with every operation.

When polarization is enabled, BSDF sampling and evaluation returns 4x4 Mueller matrices that describe how scattering changes the polarization state of incident light. Mueller matrices (e.g. for mirrors) are expressed with respect to a reference coordinate system for the incident and outgoing direction. The convention used here is that these coordinate systems are given by coordinate_system(wi) and coordinate_system(wo), where ‘wi’ and ‘wo’ are the incident and outgoing direction in local coordinates.
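These local reference frames can be built from a single unit vector. As a standalone illustration (plain Python, not the Mitsuba API; the branchless construction shown here is one common variant and is an assumption of this sketch), a `coordinate_system`-style helper looks like this:

```python
import math

def coordinate_system(n):
    """Given a unit vector n, build two tangent vectors (s, t) such that
    (s, t, n) forms an orthonormal basis."""
    sign = 1.0 if n[2] >= 0.0 else -1.0
    a = -1.0 / (sign + n[2])
    b = n[0] * n[1] * a
    s = (1.0 + sign * n[0] * n[0] * a, sign * b, -sign * n[0])
    t = (b, sign + n[1] * n[1] * a, -n[1])
    return s, t

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))
```

Both returned tangents are orthogonal to the input normal and to each other, which is the property the Mueller-matrix convention above relies on.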

See also: mitsuba.render.BSDFContext, mitsuba.render.BSDFSample3f

__init__(self, props)
Parameter props (mitsuba.core.Properties):

no description available

component_count(self, active=True)

Number of components this BSDF is comprised of.

Parameter active (bool):

Returns → int:

no description available

eval(self, ctx, si, wo, active=True)

Evaluate the BSDF f(wi, wo) or its adjoint version f^{*}(wi, wo) and multiply by the cosine foreshortening term.

Based on the information in the supplied query context ctx, this method will either evaluate the entire BSDF or query individual components (e.g. the diffuse lobe). Only smooth (i.e. non Dirac-delta) components are supported: calling eval() on a perfectly specular material will return zero.

Note that the incident direction does not need to be explicitly specified. It is obtained from the field si.wi.

Parameter ctx (mitsuba.render.BSDFContext):

A context data structure describing which lobes to evaluate, and whether radiance or importance is being transported.

Parameter si (mitsuba.render.SurfaceInteraction3f):

A surface interaction data structure describing the underlying surface position. The incident direction is obtained from the field si.wi.

Parameter wo (enoki.scalar.Vector3f):

The outgoing direction

Parameter active (bool):

Returns → enoki.scalar.Vector3f:

no description available
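To make the contract concrete (the returned value already includes the cosine foreshortening term), here is a standalone sketch of what eval() computes for a smooth diffuse lobe. This is illustrative plain Python assuming a Lambertian BRDF, not the Mitsuba implementation:

```python
import math

def diffuse_eval(albedo, wi, wo):
    """Lambertian BRDF times the cosine foreshortening term, in local
    coordinates where the surface normal is (0, 0, 1). Returns zero when
    either direction lies below the surface, mirroring BSDF.eval()."""
    cos_theta_i, cos_theta_o = wi[2], wo[2]
    if cos_theta_i <= 0.0 or cos_theta_o <= 0.0:
        return 0.0
    return albedo / math.pi * cos_theta_o
```

Note how the incident direction only enters through a sidedness check here, matching the convention that it is carried by si.wi rather than passed separately.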

eval_null_transmission(self, si, active=True)

Evaluate un-scattered transmission component of the BSDF

This method will evaluate the un-scattered transmission (BSDFFlags::Null) of the BSDF for light arriving along the incident direction si.wi. The default implementation returns zero.

Parameter si (mitsuba.render.SurfaceInteraction3f):

A surface interaction data structure describing the underlying surface position. The incident direction is obtained from the field si.wi.

Parameter active (bool):

Returns → enoki.scalar.Vector3f:

no description available

flags(overloaded)
flags(self, active=True)

Flags for all components combined.

Parameter active (bool):

Returns → int:

no description available

flags(self, index, active=True)

Flags for a specific component of this BSDF.

Parameter index (int):

no description available

Parameter active (bool):

Returns → int:

no description available

id(self)

Return a string identifier

Returns → str:

no description available

needs_differentials(self, active=True)

Parameter active (bool):

Returns → bool:

no description available

pdf(self, ctx, si, wo, active=True)

Compute the probability per unit solid angle of sampling a given direction

This method provides access to the probability density that would result when supplying the same BSDF context and surface interaction data structures to the sample() method. It correctly handles changes in probability when only a subset of the components is chosen for sampling (this can be done using the BSDFContext::component and BSDFContext::type_mask fields).

Note that the incident direction does not need to be explicitly specified. It is obtained from the field si.wi.

Parameter ctx (mitsuba.render.BSDFContext):

A context data structure describing which lobes to evaluate, and whether radiance or importance is being transported.

Parameter si (mitsuba.render.SurfaceInteraction3f):

A surface interaction data structure describing the underlying surface position. The incident direction is obtained from the field si.wi.

Parameter wo (enoki.scalar.Vector3f):

The outgoing direction

Parameter active (bool):

Returns → float:

no description available
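As a concrete example of such a density, a diffuse lobe that is importance-sampled with a cosine-weighted distribution reports the following solid-angle density (standalone sketch, not the Mitsuba code path):

```python
import math

def cosine_hemisphere_pdf(wo):
    """Solid-angle density of cosine-weighted hemisphere sampling in local
    coordinates (normal = +z): pdf = cos(theta) / pi, zero below the surface."""
    return max(wo[2], 0.0) / math.pi
```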

sample(self, ctx, si, sample1, sample2, active=True)

Importance sample the BSDF model

The function returns a sample data structure along with the importance weight, which is the value of the BSDF divided by the probability density and multiplied by the cosine foreshortening factor (when applicable; it is omitted for degenerate BSDFs such as smooth mirrors and dielectrics).

If the supplied context data structure selects a subset of the components in a multi-lobe BSDF model, sampling is restricted to that subset. Depending on the provided transport type, either the BSDF or its adjoint version is sampled.

When sampling a continuous/non-delta component, this method also multiplies by the cosine foreshortening factor with respect to the sampled direction.

Parameter ctx (mitsuba.render.BSDFContext):

A context data structure describing which lobes to sample, and whether radiance or importance is being transported.

Parameter si (mitsuba.render.SurfaceInteraction3f):

A surface interaction data structure describing the underlying surface position. The incident direction is obtained from the field si.wi.

Parameter sample1 (float):

A uniformly distributed sample on $[0,1]$. It is used to select the BSDF lobe in multi-lobe models.

Parameter sample2 (enoki.scalar.Vector2f):

A uniformly distributed sample on $[0,1]^2$. It is used to generate the sampled direction.

Parameter active (bool):

Returns → Tuple[mitsuba.render.BSDFSample3f, enoki.scalar.Vector3f]:

A pair (bs, value) consisting of

bs: Sampling record, indicating the sampled direction, PDF values and other information. The contents are undefined if sampling failed.

value: The BSDF value (multiplied by the cosine foreshortening factor when a non-delta component is sampled). A zero spectrum indicates that sampling failed.
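For a smooth diffuse lobe, the returned weight simplifies nicely: with cosine-weighted sampling, value = (albedo/pi) * cos(theta) and pdf = cos(theta)/pi, so value/pdf is exactly the albedo. A standalone sketch of the (bs, value) contract under that assumption (plain Python, not the Mitsuba implementation):

```python
import math

def square_to_cosine_hemisphere(sample2):
    """Warp a uniform [0,1]^2 sample to a cosine-weighted direction on the
    hemisphere around +z."""
    r = math.sqrt(sample2[0])
    phi = 2.0 * math.pi * sample2[1]
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

def diffuse_sample(albedo, sample2):
    """Mimics the (bs, value) return of BSDF.sample() for a Lambertian lobe:
    the weight value/pdf collapses to the albedo."""
    wo = square_to_cosine_hemisphere(sample2)
    pdf = wo[2] / math.pi                       # cosine-hemisphere density
    value = albedo / math.pi * wo[2]            # f * cos(theta_o)
    weight = value / pdf if pdf > 0.0 else 0.0
    return (wo, pdf), weight
```

This cancellation is the reason importance sampling is worthwhile: the returned weight has far lower variance than evaluating f and the density separately.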

class mitsuba.render.BSDFContext

Context data structure for BSDF evaluation and sampling

BSDF models in Mitsuba can be queried and sampled using a variety of different modes – for instance, a rendering algorithm can indicate whether radiance or importance is being transported, and it can also restrict evaluation and sampling to a subset of lobes in a multi-lobe BSDF model.

The BSDFContext data structure encodes these preferences and is supplied to most BSDF methods.

__init__(self, mode=TransportMode.Radiance)

Parameter mode (mitsuba.render.TransportMode):

no description available

__init__(self, mode, type_mask, component)
Parameter mode (mitsuba.render.TransportMode):

no description available

Parameter type_mask (int):

no description available

Parameter component (int):

no description available

property component

Integer value of requested BSDF component index to be sampled/evaluated.

is_enabled(self, type, component=0)

Checks whether a given BSDF component type and BSDF component index are enabled in this context.

Parameter type (mitsuba.render.BSDFFlags):

no description available

Parameter component (int):

no description available

Returns → bool:

no description available

property mode

reverse(self)

Reverse the direction of light transport in the record

Returns → None:

no description available

class mitsuba.render.BSDFFlags

This list of flags is used to classify the different types of lobes that are implemented in a BSDF instance.

They are also useful for picking out individual components, e.g., by setting combinations in BSDFContext::type_mask.

Members:

None

No flags set (default value)

Null

‘null’ scattering event, i.e. particles do not undergo deflection

DiffuseReflection

Ideally diffuse reflection

DiffuseTransmission

Ideally diffuse transmission

GlossyReflection

Glossy reflection

GlossyTransmission

Glossy transmission

DeltaReflection

Reflection into a discrete set of directions

DeltaTransmission

Transmission into a discrete set of directions

Anisotropic

The lobe is not invariant to rotation around the normal

SpatiallyVarying

The BSDF depends on the UV coordinates

NonSymmetric

Flags non-symmetry (e.g. transmission in dielectric materials)

FrontSide

Supports interactions on the front-facing side

BackSide

Supports interactions on the back-facing side

Reflection

Any reflection component (scattering into discrete, 1D, or 2D set of directions)

Transmission

Any transmission component (scattering into discrete, 1D, or 2D set of directions)

Diffuse

Diffuse scattering into a 2D set of directions

Glossy

Non-diffuse scattering into a 2D set of directions

Smooth

Scattering into a 2D set of directions

Delta

Scattering into a discrete set of directions

Delta1D

Scattering into a 1D space of directions

All

Any kind of scattering

__init__(self, arg0)
Parameter arg0 (int):

no description available
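The flags are bit values, so lobes can be described and tested with bitwise operations, e.g. when building a type_mask for BSDFContext. A sketch with illustrative bit values (the actual numeric values are defined by the enumeration, not by this snippet):

```python
# Illustrative bit values; the real ones come from mitsuba.render.BSDFFlags.
DiffuseReflection = 1 << 0
GlossyReflection  = 1 << 1
DeltaReflection   = 1 << 2
FrontSide         = 1 << 3

def has_flag(flags, mask):
    """True if any bit in `mask` is present in `flags` -- the kind of test
    performed by BSDFContext.is_enabled()."""
    return (flags & mask) != 0

# A two-lobe BSDF: diffuse + glossy reflection, front-facing only.
bsdf_flags = DiffuseReflection | GlossyReflection | FrontSide
```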

class mitsuba.render.BSDFSample3f

Data structure holding the result of BSDF sampling operations.

__init__(self)
__init__(self, wo)

Given an outgoing direction wo, create a query record that can be used to evaluate the BSDF or its sampling density.

By default, all components will be sampled regardless of what measure they live on.

Parameter wo (enoki.scalar.Vector3f):

An outgoing direction in local coordinates. This should be a normalized direction vector that points away from the scattering event.

__init__(self, bs)

Copy constructor

Parameter bs (mitsuba.render.BSDFSample3f):

no description available

property eta

Relative index of refraction in the sampled direction

property pdf

Probability density at the sample

property sampled_component

Stores the component index that was sampled by BSDF::sample()

property sampled_type

Stores the component type that was sampled by BSDF::sample()

property wo

Normalized outgoing direction in local coordinates

class mitsuba.render.TransportMode

Specifies the transport mode when sampling or evaluating a scattering function

Members:

Radiance

Importance

Importance transport

__init__(self, arg0)
Parameter arg0 (int):

no description available

class mitsuba.render.MicrofacetDistribution

Implementation of the Beckmann and GGX / Trowbridge-Reitz microfacet distributions and various useful sampling routines

Based on the papers

“Microfacet Models for Refraction through Rough Surfaces” by Bruce Walter, Stephen R. Marschner, Hongsong Li, and Kenneth E. Torrance

and

“Importance Sampling Microfacet-Based BSDFs using the Distribution of Visible Normals” by Eric Heitz and Eugene D’Eon

The visible normal sampling code was provided by Eric Heitz and Eugene D’Eon. An improvement of the Beckmann model sampling routine is discussed in

“An Improved Visible Normal Sampling Routine for the Beckmann Distribution” by Wenzel Jakob

An improvement of the GGX model sampling routine is discussed in “A Simpler and Exact Sampling Routine for the GGX Distribution of Visible Normals” by Eric Heitz

__init__(self, type, alpha, sample_visible=True)
Parameter type (mitsuba.render.MicrofacetType):

no description available

Parameter alpha (float):

no description available

Parameter sample_visible (bool):

no description available

__init__(self, type, alpha_u, alpha_v, sample_visible=True)
Parameter type (mitsuba.render.MicrofacetType):

no description available

Parameter alpha_u (float):

no description available

Parameter alpha_v (float):

no description available

Parameter sample_visible (bool):

no description available

__init__(self, arg0)
Parameter arg0 (mitsuba.core.Properties):

no description available

G(self, wi, wo, m)

Parameter wi (enoki.scalar.Vector3f):

no description available

Parameter wo (enoki.scalar.Vector3f):

no description available

Parameter m (enoki.scalar.Vector3f):

no description available

Returns → float:

no description available

alpha(self)

Return the roughness (isotropic case)

Returns → float:

no description available

alpha_u(self)

Return the roughness along the tangent direction

Returns → float:

no description available

alpha_v(self)

Return the roughness along the bitangent direction

Returns → float:

no description available

eval(self, m)

Evaluate the microfacet distribution function

Parameter m (enoki.scalar.Vector3f):

The microfacet normal

Returns → float:

no description available
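For reference, the isotropic GGX / Trowbridge-Reitz form of this density is compact enough to write out. A standalone sketch of the assumed formula in plain Python (not a call into Mitsuba):

```python
import math

def ggx_ndf(m, alpha):
    """Isotropic GGX microfacet distribution D(m) for a unit half-vector m
    in local coordinates (normal = +z). Zero in the lower hemisphere."""
    cos_theta = m[2]
    if cos_theta <= 0.0:
        return 0.0
    a2 = alpha * alpha
    t = cos_theta * cos_theta * (a2 - 1.0) + 1.0
    return a2 / (math.pi * t * t)
```

At the macroscopic normal (cos(theta) = 1) this evaluates to 1 / (pi * alpha^2), which grows as the surface becomes smoother.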

is_anisotropic(self)

Is this an anisotropic microfacet distribution?

Returns → bool:

no description available

is_isotropic(self)

Is this an isotropic microfacet distribution?

Returns → bool:

no description available

pdf(self, wi, m)

Returns the density function associated with the sample() function.

Parameter wi (enoki.scalar.Vector3f):

The incident direction (only relevant if visible normal sampling is used)

Parameter m (enoki.scalar.Vector3f):

The microfacet normal

Returns → float:

no description available

sample(self, wi, sample)

Draw a sample from the microfacet normal distribution and return the associated probability density

Parameter wi (enoki.scalar.Vector3f):

The incident direction (only relevant if visible normal sampling is used)

Parameter sample (enoki.scalar.Vector2f):

A uniformly distributed 2D sample

Returns → Tuple[enoki.scalar.Vector3f, float]:

The sampled microfacet normal and the associated probability density with respect to solid angle

sample_visible(self)

Return whether only visible normals are sampled

Returns → bool:

no description available

sample_visible_11(self, cos_theta_i, sample)

Visible normal sampling code for the alpha=1 case

Parameter cos_theta_i (float):

no description available

Parameter sample (enoki.scalar.Vector2f):

no description available

Returns → enoki.scalar.Vector2f:

no description available

scale_alpha(self, value)

Scale the roughness values by some constant

Parameter value (float):

no description available

Returns → None:

no description available

smith_g1(self, v, m)

Parameter v (enoki.scalar.Vector3f):

An arbitrary direction

Parameter m (enoki.scalar.Vector3f):

The microfacet normal

Returns → float:

no description available
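smith_g1 is the monodirectional shadowing-masking term. For isotropic GGX it has a well-known closed form; a standalone sketch of that assumed formula (plain Python, omitting the sidedness check against the microfacet normal that the full implementation also performs):

```python
import math

def smith_g1_ggx(v, alpha):
    """Smith masking term G1(v) for isotropic GGX; v is a unit direction in
    local coordinates (normal = +z)."""
    cos_theta = abs(v[2])
    if cos_theta >= 1.0:
        return 1.0
    tan2 = (1.0 - cos_theta * cos_theta) / (cos_theta * cos_theta)
    return 2.0 / (1.0 + math.sqrt(1.0 + alpha * alpha * tan2))
```

G1 equals one at normal incidence and falls off toward grazing angles, faster for rougher surfaces.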

type(self)

Return the distribution type

Returns → mitsuba.render.MicrofacetType:

no description available

class mitsuba.render.MicrofacetType

Supported normal distribution functions

Members:

Beckmann

Beckmann distribution derived from Gaussian random surfaces

GGX

GGX: Long-tailed distribution for very rough surfaces (aka. Trowbridge-Reitz distr.)

__init__(self, arg0)
Parameter arg0 (int):

no description available

## Endpoint

class mitsuba.render.Endpoint

Base class: mitsuba.core.Object

Endpoint: an abstract interface to light sources and sensors

This class implements an abstract interface to all sensors and light sources emitting radiance and importance, respectively. Subclasses implement functions to evaluate and sample the profile, and to compute probability densities associated with the provided sampling techniques.

The name endpoint refers to the property that while a light path may involve any number of scattering events, it always starts and ends with emission and a measurement, respectively.

In addition to Endpoint::sample_ray, which generates a sample from the profile, subclasses also provide a specialized direction sampling method. This is a generalization of direct illumination techniques to both emitters and sensors. A direction sampling method is given an arbitrary reference position in the scene and samples a direction from the reference point towards the endpoint (ideally proportional to the emission/sensitivity profile). This reduces the sampling domain from 4D to 2D, which often enables the construction of smarter specialized sampling techniques.

When rendering scenes involving participating media, it is important to know what medium surrounds the sensors and light sources. For this reason, every endpoint instance keeps a reference to a medium (which may be set to nullptr when it is surrounded by vacuum).

bbox(self)

Return an axis-aligned box bounding the spatial extents of the emitter

Returns → mitsuba.core.BoundingBox3f:

no description available

eval(self, si, active=True)

Given a ray-surface intersection, return the emitted radiance or importance traveling along the reverse direction

This function is e.g. used when an area light source has been hit by a ray in a path tracing-style integrator, and it subsequently needs to be queried for the emitted radiance along the negative ray direction. The default implementation throws an exception, which states that the method is not implemented.

Parameter si (mitsuba.render.SurfaceInteraction3f):

An intersection record that specifies both the query position and direction (using the si.wi field)

Parameter active (bool):

Returns → enoki.scalar.Vector3f:

medium(self)

Return a pointer to the medium that surrounds the emitter

Returns → mitsuba.render.Medium:

no description available

needs_sample_2(self)

Does the method sample_ray() require a uniformly distributed 2D sample for the sample2 parameter?

Returns → bool:

no description available

needs_sample_3(self)

Does the method sample_ray() require a uniformly distributed 2D sample for the sample3 parameter?

Returns → bool:

no description available

pdf_direction(self, it, ds, active=True)

Evaluate the probability density of the direct sampling method implemented by the sample_direction() method.

Parameter ds (mitsuba.render.DirectionSample3f):

A direct sampling record, which specifies the query location.

Parameter it (mitsuba.render.Interaction3f):

no description available

Parameter active (bool):

Returns → float:

no description available

sample_direction(self, it, sample, active=True)

Given a reference point in the scene, sample a direction from the reference point towards the endpoint (ideally proportional to the emission/sensitivity profile)

This operation is a generalization of direct illumination techniques to both emitters and sensors. A direction sampling method is given an arbitrary reference position in the scene and samples a direction from the reference point towards the endpoint (ideally proportional to the emission/sensitivity profile). This reduces the sampling domain from 4D to 2D, which often enables the construction of smarter specialized sampling techniques.

Ideally, the implementation should importance sample the product of the emission profile and the geometry term between the reference point and the position on the endpoint.

The default implementation throws an exception.

Parameter it (mitsuba.render.Interaction3f):

A reference position somewhere within the scene.

Parameter sample (enoki.scalar.Vector2f):

A uniformly distributed 2D point on the domain [0,1]^2

Parameter active (bool):

Returns → Tuple[mitsuba.render.DirectionSample3f, enoki.scalar.Vector3f]:

A DirectionSample instance describing the generated sample along with a spectral importance weight.
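Implementations typically sample a position on the endpoint and convert its area density into a solid-angle density at the reference point; the conversion factor is the standard geometry term. A standalone sketch of that assumed conversion (plain Python, not the Mitsuba implementation):

```python
import math

def pdf_area_to_solid_angle(pdf_area, ref_p, endpoint_p, endpoint_n):
    """Convert an area density on an emitter surface into a solid-angle
    density at the reference point: pdf_w = pdf_A * dist^2 / |cos(theta)|,
    where theta is measured against the emitter's surface normal."""
    d = tuple(q - p for p, q in zip(ref_p, endpoint_p))
    dist2 = sum(c * c for c in d)
    dist = math.sqrt(dist2)
    cos_theta = abs(sum((c / dist) * n for c, n in zip(d, endpoint_n)))
    if cos_theta == 0.0:
        return 0.0
    return pdf_area * dist2 / cos_theta
```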

sample_ray(self, time, sample1, sample2, sample3, active=True)

Importance sample a ray proportional to the endpoint’s sensitivity/emission profile.

The endpoint profile is a six-dimensional quantity that depends on time, wavelength, surface position, and direction. This function takes a given time value and five uniformly distributed samples on the interval [0, 1] and warps them so that the returned ray follows the profile. Any discrepancies between ideal and actual sampled profile are absorbed into a spectral importance weight that is returned along with the ray.

Parameter time (float):

The scene time associated with the ray to be sampled

Parameter sample1 (float):

A uniformly distributed 1D value that is used to sample the spectral dimension of the emission profile.

Parameter sample2 (enoki.scalar.Vector2f):

A uniformly distributed sample on the domain [0,1]^2. For sensor endpoints, this argument corresponds to the sample position in fractional pixel coordinates relative to the crop window of the underlying film. This argument is ignored if needs_sample_2() == false.

Parameter sample3 (enoki.scalar.Vector2f):

A uniformly distributed sample on the domain [0,1]^2. For sensor endpoints, this argument determines the position on the aperture of the sensor. This argument is ignored if needs_sample_3() == false.

Parameter active (bool):

Returns → Tuple[mitsuba.core.Ray3f, enoki.scalar.Vector3f]:

The sampled ray and (potentially spectrally varying) importance weights. The latter account for the difference between the profile and the actual used sampling density function.

set_medium(self, medium)

Set the medium that surrounds the emitter.

Parameter medium (mitsuba.render.Medium):

no description available

Returns → None:

no description available

set_shape(self, shape)

Set the shape associated with this endpoint.

Parameter shape (mitsuba.render.Shape):

no description available

Returns → None:

no description available

shape(self)

Return the shape to which the emitter is currently attached

Returns → mitsuba.render.Shape:

no description available

world_transform(self)

Return the local space to world space transformation

Returns → mitsuba.core.AnimatedTransform:

no description available

## Emitter

class mitsuba.render.Emitter

Base class: mitsuba.render.Endpoint

flags(self, arg0)

Flags for all components combined.

Parameter arg0 (bool):

no description available

Returns → int:

no description available

is_environment(self)

Is this an environment map light emitter?

Returns → bool:

no description available

class mitsuba.render.EmitterFlags

This list of flags is used to classify the different types of emitters.

Members:

None

No flags set (default value)

DeltaPosition

The emitter lies at a single point in space

DeltaDirection

The emitter emits light in a single direction

Infinite

The emitter is placed at infinity (e.g. environment maps)

Surface

The emitter is attached to a surface (e.g. area emitters)

SpatiallyVarying

The emission depends on the UV coordinates

Delta

Delta function in either position or direction

__init__(self, arg0)
Parameter arg0 (int):

no description available

## Sensor

class mitsuba.render.Sensor

Base class: mitsuba.render.Endpoint

film(self)

Return the Film instance associated with this sensor

Returns → mitsuba.render.Film:

no description available

needs_aperture_sample(self)

Does the sampling technique require a sample for the aperture position?

Returns → bool:

no description available

sample_ray_differential(self, time, sample1, sample2, sample3, active=True)
Parameter time (float):

no description available

Parameter sample1 (float):

no description available

Parameter sample2 (enoki.scalar.Vector2f):

no description available

Parameter sample3 (enoki.scalar.Vector2f):

no description available

Parameter active (bool):

Returns → Tuple[mitsuba.core.RayDifferential3f, enoki.scalar.Vector3f]:

no description available

sampler(self)

Return the sensor’s sample generator

This is the root sampler, which will later be cloned a number of times to provide each participating worker thread with its own instance (see Scene::sampler()). Therefore, this sampler should never be used for anything except creating clones.

Returns → mitsuba.render.Sampler:

no description available

shutter_open(self)

Return the time value of the shutter opening event

Returns → float:

no description available

shutter_open_time(self)

Return the length of time for which the shutter remains open

Returns → float:

no description available

class mitsuba.render.ProjectiveCamera

Base class: mitsuba.render.Sensor

Projective camera interface

This class provides an abstract interface to several types of sensors that are commonly used in computer graphics, such as perspective and orthographic camera models.

The interface is meant to be implemented by any kind of sensor, whose world to clip space transformation can be explained using only linear operations on homogeneous coordinates.

A useful feature of ProjectiveCamera sensors is that their view can be rendered using the traditional OpenGL pipeline.

far_clip(self)

Return the far clip plane distance

Returns → float:

no description available

focus_distance(self)

Return the distance to the focal plane

Returns → float:

no description available

near_clip(self)

Return the near clip plane distance

Returns → float:

no description available

## Medium

class mitsuba.render.Medium

Base class: mitsuba.core.Object

eval_tr_and_pdf(self, mi, si, active=True)
Parameter mi (mitsuba.render.MediumInteraction3f):

no description available

Parameter si (mitsuba.render.SurfaceInteraction3f):

no description available

Parameter active (bool):

Returns → Tuple[enoki.scalar.Vector3f, enoki.scalar.Vector3f]:

no description available

get_combined_extinction(self, mi, active=True)
Parameter mi (mitsuba.render.MediumInteraction3f):

no description available

Parameter active (bool):

Returns → enoki.scalar.Vector3f:

no description available

get_scattering_coefficients(self, mi, active=True)
Parameter mi (mitsuba.render.MediumInteraction3f):

no description available

Parameter active (bool):

Returns → Tuple[enoki.scalar.Vector3f, enoki.scalar.Vector3f, enoki.scalar.Vector3f]:

no description available

id(self)

Return a string identifier

Returns → str:

no description available

intersect_aabb(self, ray)
Parameter ray (mitsuba.core.Ray3f):

no description available

Returns → Tuple[bool, float, float]:

no description available

phase_function(self)

Return the phase function of this medium

Returns → mitsuba.render.PhaseFunction:

no description available

sample_interaction(self, ray, sample, channel, active=True)
Parameter ray (mitsuba.core.Ray3f):

no description available

Parameter sample (float):

no description available

Parameter channel (int):

no description available

Parameter active (bool):

Returns → mitsuba.render.MediumInteraction3f:

no description available

use_emitter_sampling(self)

Returns whether this specific medium instance uses emitter sampling

Returns → bool:

no description available

class mitsuba.render.PhaseFunction

Base class: mitsuba.core.Object

eval(self, ctx, mi, wo, active=True)

Evaluates the phase function model

The function returns the value (which equals the PDF) of the phase function in the query direction.

Parameter ctx (mitsuba.render.PhaseFunctionContext):

A phase function sampling context, contains information about the transport mode

Parameter mi (mitsuba.render.MediumInteraction3f):

A medium interaction data structure describing the underlying medium position. The incident direction is obtained from the field mi.wi.

Parameter wo (enoki.scalar.Vector3f):

An outgoing direction to evaluate.

Parameter active (bool):

Returns → float:

The value of the phase function in direction wo
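The simplest case is the isotropic phase function, whose value (and hence PDF) is constant over the sphere of directions; a standalone sketch (plain Python, not the Mitsuba implementation):

```python
import math

def isotropic_phase_eval(wo):
    """Isotropic phase function: the constant 1 / (4*pi), which integrates
    to one over the full sphere of directions."""
    return 1.0 / (4.0 * math.pi)
```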

id(self)

Return a string identifier

Returns → str:

no description available

sample(self, ctx, mi, sample1, active=True)

Importance sample the phase function model

The function returns a sampled direction.

Parameter ctx (mitsuba.render.PhaseFunctionContext):

A phase function sampling context, contains information about the transport mode

Parameter mi (mitsuba.render.MediumInteraction3f):

A medium interaction data structure describing the underlying medium position. The incident direction is obtained from the field mi.wi.

Parameter sample1 (enoki.scalar.Vector2f):

A uniformly distributed sample on $[0,1]^2$. It is used to generate the sampled direction.

Parameter active (bool):

Returns → Tuple[enoki.scalar.Vector3f, float]:

A sampled direction wo
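For the isotropic case, this sampling step reduces to warping the 2D sample into a uniformly distributed direction on the sphere. A standalone sketch of the (wo, pdf) contract under that assumption (plain Python, not the Mitsuba implementation):

```python
import math

def square_to_uniform_sphere(sample2):
    """Warp a uniform [0,1]^2 sample to a uniformly distributed unit
    direction on the sphere."""
    z = 1.0 - 2.0 * sample2[0]
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * sample2[1]
    return (r * math.cos(phi), r * math.sin(phi), z)

def isotropic_phase_sample(sample2):
    """Mimics the (wo, pdf) return of PhaseFunction.sample() for an
    isotropic medium: every direction has density 1 / (4*pi)."""
    wo = square_to_uniform_sphere(sample2)
    return wo, 1.0 / (4.0 * math.pi)
```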

class mitsuba.render.PhaseFunctionContext

reverse(self)

Reverse the direction of light transport in the record

Returns → None:

no description available

property sampler

Sampler object

class mitsuba.render.PhaseFunctionFlags

This enumeration is used to classify phase functions into different types, i.e. into isotropic, anisotropic and microflake phase functions.

This can be used to optimize implementations, e.g. to reduce overhead when the phase function is not a microflake phase function.

Members:

None

Isotropic

Anisotropic

Microflake

__init__(self, arg0)
Parameter arg0 (int):

no description available

## Shape

class mitsuba.render.Shape

Base class: mitsuba.core.Object

Base class of all geometric shapes in Mitsuba

This class provides core functionality for sampling positions on surfaces, computing ray intersections, and bounding shapes within ray intersection acceleration data structures.

bbox(overloaded)
bbox(self)

Return an axis aligned box that bounds all shape primitives (including any transformations that may have been applied to them)

Returns → mitsuba.core.BoundingBox3f:

no description available

bbox(self, index)

Return an axis aligned box that bounds a single shape primitive (including any transformations that may have been applied to it)

Remark:

The default implementation simply calls bbox()

Parameter index (int):

no description available

Returns → mitsuba.core.BoundingBox3f:

no description available

bbox(self, index, clip)

Return an axis aligned box that bounds a single shape primitive after it has been clipped to another bounding box.

This is extremely important for constructing high-quality kd-trees. The default implementation just takes the bounding box returned by bbox(ScalarIndex index) and clips it to clip.

Parameter index (int):

no description available

Parameter clip (mitsuba.core.BoundingBox3f):

no description available

Returns → mitsuba.core.BoundingBox3f:

no description available
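The default clipping behavior described above amounts to a per-axis min/max intersection of two boxes. A standalone sketch with plain Python tuples standing in for BoundingBox3f (illustrative, not the Mitsuba implementation):

```python
def clip_aabb(box, clip):
    """Intersect two axis-aligned boxes, each given as ((min...), (max...)).
    This is what the default bbox(index, clip) overload amounts to: take the
    primitive's bounds and clamp them to the kd-tree node's bounds."""
    lo = tuple(max(a, b) for a, b in zip(box[0], clip[0]))
    hi = tuple(min(a, b) for a, b in zip(box[1], clip[1]))
    return lo, hi
```

Shapes such as triangles can do better by clipping the actual primitive geometry against the node, which is why this method is virtual.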

bsdf(self)
Returns → mitsuba.render.BSDF:

no description available

compute_surface_interaction(self, ray, pi, flags=HitComputeFlags.All, active=True)
Parameter ray (mitsuba.core.Ray3f):

no description available

Parameter pi (mitsuba.render.PreliminaryIntersection3f):

no description available

Parameter flags (mitsuba.render.HitComputeFlags):

no description available

Parameter active (bool):

Returns → mitsuba.render.SurfaceInteraction3f:

no description available

effective_primitive_count(self)

Return the number of primitives (triangles, hairs, ...) contributed to the scene by this shape

Includes instanced geometry. The default implementation simply returns the same value as primitive_count().

Returns → int:

no description available

emitter(self, active=True)
Parameter active (bool):

Returns → mitsuba.render.Emitter:

no description available

exterior_medium(self)

Return the medium that lies on the exterior of this shape

Returns → mitsuba.render.Medium:

no description available

id(self)

Return a string identifier

Returns → str:

no description available

interior_medium(self)

Return the medium that lies on the interior of this shape

Returns → mitsuba.render.Medium:

no description available

is_emitter(self)

Is this shape also an area emitter?

Returns → bool:

no description available

is_medium_transition(self)

Does the surface of this shape mark a medium transition?

Returns → bool:

no description available

is_mesh(self)

Is this shape a triangle mesh?

Returns → bool:

no description available

is_sensor(self)

Is this shape also an area sensor?

Returns → bool:

no description available

parameters_grad_enabled(self)

Return whether the shape’s parameters require gradients (the default implementation returns false)

Returns → bool:

no description available

pdf_direction(self, it, ps, active=True)

Query the probability density of sample_direction()

Parameter it (mitsuba.render.Interaction3f):

A reference position somewhere within the scene.

Parameter ps (mitsuba.render.DirectionSample3f):

A position record describing the sample in question

Parameter active (bool):

Returns → float:

The probability density per unit solid angle

pdf_position(self, ps, active=True)

Query the probability density of sample_position() for a particular point on the surface.

Parameter ps (mitsuba.render.PositionSample3f):

A position record describing the sample in question

Parameter active (bool):

Returns → float:

The probability density per unit area

primitive_count(self)

Returns the number of sub-primitives that make up this shape

Remark:

The default implementation simply returns 1

Returns → int:

no description available

ray_intersect(self, ray, flags=HitComputeFlags.All, active=True)

Test for an intersection and return detailed information

This operation combines the prior ray_intersect_preliminary() and compute_surface_interaction() operations.

Parameter ray (mitsuba.core.Ray3f):

The ray to be tested for an intersection

Parameter flags (mitsuba.render.HitComputeFlags):

Describe how the detailed information should be computed

Parameter active (bool):

Returns → mitsuba.render.SurfaceInteraction3f:

no description available

ray_intersect_preliminary(self, ray, active=True)

Fast ray intersection test

Efficiently test whether the shape is intersected by the given ray, and cache preliminary information about the intersection if that is the case.

If the intersection is deemed relevant (e.g. the closest to the ray origin), detailed intersection information can later be obtained via the compute_surface_interaction() method.

Parameter ray (mitsuba.core.Ray3f):

The ray to be tested for an intersection

Parameter active (bool):

Returns → mitsuba.render.PreliminaryIntersection3f:

no description available

ray_test(self, ray, active=True)
Parameter ray (mitsuba.core.Ray3f):

no description available

Parameter active (bool):

Returns → bool:

no description available

sample_direction(self, it, sample, active=True)

Sample a direction towards this shape with respect to solid angles measured at a reference position within the scene

An ideal implementation of this interface would achieve a uniform solid angle density within the surface region that is visible from the reference position it.p (though such an ideal implementation is usually neither feasible nor advisable due to poor efficiency).

The function returns the sampled position and the inverse probability per unit solid angle associated with the sample.

When the Shape subclass does not supply a custom implementation of this function, the Shape class reverts to a fallback approach that piggybacks on sample_position(). This will generally lead to a suboptimal sample placement and higher variance in Monte Carlo estimators using the samples.

Parameter it (mitsuba.render.Interaction3f):

A reference position somewhere within the scene.

Parameter sample (enoki.scalar.Vector2f):

A uniformly distributed 2D point on the domain [0,1]^2

Parameter active (bool):

Returns → mitsuba.render.DirectionSample3f:

A DirectionSample instance describing the generated sample
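The fallback path that piggybacks on sample_position() must convert a per-unit-area density into the per-unit-solid-angle density seen from the reference point. A minimal scalar sketch of that standard conversion (illustrative only, not Mitsuba's implementation; the function name is hypothetical):

```python
import math

def area_pdf_to_solid_angle(pdf_area, ref_p, sample_p, sample_n):
    """Convert a per-unit-area density into a per-unit-solid-angle
    density as seen from the reference point ref_p:
        p_omega = p_A * dist^2 / |cos(theta)|."""
    d = [s - r for s, r in zip(sample_p, ref_p)]
    dist2 = sum(c * c for c in d)
    dist = math.sqrt(dist2)
    d = [c / dist for c in d]                       # unit direction ref -> sample
    cos_theta = abs(sum(-dc * nc for dc, nc in zip(d, sample_n)))
    return pdf_area * dist2 / cos_theta if cos_theta > 0 else 0.0

# Reference point 2 units above a horizontal surface sampled with p_A = 1:
pdf = area_pdf_to_solid_angle(1.0, (0, 0, 2), (0, 0, 0), (0, 0, 1))
# -> 4.0 (dist^2 = 4, cos(theta) = 1)
```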

sample_position(self, time, sample, active=True)

Sample a point on the surface of this shape

The sampling strategy is ideally uniform over the surface, though implementations are allowed to deviate from a perfectly uniform distribution as long as this is reflected in the returned probability density.

Parameter time (float):

The scene time associated with the position sample

Parameter sample (enoki.scalar.Vector2f):

A uniformly distributed 2D point on the domain [0,1]^2

Parameter active (bool):

Returns → mitsuba.render.PositionSample3f:

A PositionSample instance describing the generated sample

sensor(self)
Returns → mitsuba.render.Sensor:

no description available

surface_area(self)

Return the shape’s surface area.

The function assumes that the object is not undergoing some kind of time-dependent scaling.

The default implementation throws an exception.

Returns → float:

no description available

class mitsuba.render.Mesh

Base class: mitsuba.render.Shape

__init__(self: mitsuba.render.Mesh, name: str, vertex_count: int, face_count: int, props: mitsuba.core.Properties = Properties[

plugin_name = "", id = "", elements = { }

], has_vertex_normals: bool = False, has_vertex_texcoords: bool = False) -> None

Create a new mesh with the given vertex and face data structures

add_attribute(self, name, size, buffer)

Add an attribute buffer with the given name and dimension

Parameter name (str):

no description available

Parameter size (int):

no description available

Parameter buffer (enoki.dynamic.Float32):

no description available

Returns → None:

no description available

attribute_buffer(self, name)

Return the mesh attribute associated with name

Parameter name (str):

no description available

Returns → enoki.dynamic.Float32:

no description available

eval_parameterization(self, uv, active=True)
Parameter uv (enoki.scalar.Vector2f):

no description available

Parameter active (bool):

Returns → mitsuba.render.SurfaceInteraction3f:

no description available

face_count(self)

Return the total number of faces

Returns → int:

no description available

faces_buffer(self)

Return face indices buffer

Returns → enoki.dynamic.UInt32:

no description available

has_vertex_normals(self)

Does this mesh have per-vertex normals?

Returns → bool:

no description available

has_vertex_texcoords(self)

Does this mesh have per-vertex texture coordinates?

Returns → bool:

no description available

ray_intersect_triangle(self, index, ray, active=True)

Ray-triangle intersection test

Uses the algorithm by Moeller and Trumbore discussed at http://www.acm.org/jgt/papers/MollerTrumbore97/code.html.

Parameter index (int):

Index of the triangle to be intersected.

Parameter ray (mitsuba.core.Ray3f):

The ray segment to be used for the intersection query.

Parameter active (bool):

Returns → mitsuba.render.PreliminaryIntersection3f:

Returns an ordered tuple (mask, u, v, t), where mask indicates whether an intersection was found, t contains the distance from the ray origin to the intersection point, and u and v contain the first two components of the intersection in barycentric coordinates
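The Möller–Trumbore algorithm referenced above can be sketched in plain scalar Python as follows (illustrative only; the actual implementation is vectorized over Enoki arrays):

```python
def ray_triangle_intersect(o, d, p0, p1, p2, eps=1e-8):
    """Moeller-Trumbore ray/triangle test. Returns an ordered tuple
    (hit, u, v, t) matching the description above."""
    def sub(a, b):   return [x - y for x, y in zip(a, b)]
    def dot(a, b):   return sum(x * y for x, y in zip(a, b))
    def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0]]

    e1, e2 = sub(p1, p0), sub(p2, p0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                      # ray parallel to the triangle plane
        return False, 0.0, 0.0, 0.0
    inv_det = 1.0 / det
    tvec = sub(o, p0)
    u = dot(tvec, pvec) * inv_det           # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return False, 0.0, 0.0, 0.0
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv_det              # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return False, 0.0, 0.0, 0.0
    t = dot(e2, qvec) * inv_det             # distance along the ray
    return t > eps, u, v, t

hit, u, v, t = ray_triangle_intersect(
    (0.25, 0.25, -1.0), (0.0, 0.0, 1.0),
    (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
# hit is True with u = v = 0.25 and t = 1.0
```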

recompute_bbox(self)

Recompute the bounding box (e.g. after modifying the vertex positions)

Returns → None:

no description available

recompute_vertex_normals(self)

Compute smooth vertex normals and replace the current normal values

Returns → None:

no description available
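As a sketch of what "smooth vertex normals" means here, the following hypothetical helper accumulates area-weighted face normals onto each vertex and then normalizes (the actual implementation may weight contributions differently, e.g. by angle):

```python
import math

def smooth_vertex_normals(positions, faces):
    """Accumulate each (area-weighted) face normal onto its three
    vertices, then normalize the per-vertex sums."""
    def sub(a, b):   return [x - y for x, y in zip(a, b)]
    def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0]]

    normals = [[0.0, 0.0, 0.0] for _ in positions]
    for i0, i1, i2 in faces:
        fn = cross(sub(positions[i1], positions[i0]),
                   sub(positions[i2], positions[i0]))   # length = 2 * face area
        for idx in (i0, i1, i2):
            normals[idx] = [n + f for n, f in zip(normals[idx], fn)]
    out = []
    for n in normals:
        length = math.sqrt(sum(c * c for c in n))
        out.append([c / length for c in n] if length > 0 else n)
    return out

# A single triangle in the z = 0 plane: every vertex normal is +z
n = smooth_vertex_normals([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```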

vertex_count(self)

Return the total number of vertices

Returns → int:

no description available

vertex_normals_buffer(self)

Return vertex normals buffer

Returns → enoki.dynamic.Float32:

no description available

vertex_positions_buffer(self)

Return vertex positions buffer

Returns → enoki.dynamic.Float32:

no description available

vertex_texcoords_buffer(self)

Return vertex texcoords buffer

Returns → enoki.dynamic.Float32:

no description available

write_ply(self, filename)

Export mesh as a binary PLY file

Parameter filename (str):

no description available

Returns → None:

no description available

## Film¶

class mitsuba.render.Film

Base class: mitsuba.core.Object

Abstract film base class - used to store samples generated by Integrator implementations.

To avoid lock-related bottlenecks when rendering with many cores, rendering threads first store results in an “image block”, which is then committed to the film using the put() method.
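The block/put pattern described above can be sketched as follows; TinyFilm and render_block are hypothetical names used for illustration, not part of the Mitsuba API:

```python
import threading

class TinyFilm:
    """Toy film: worker threads accumulate into a local 'image block'
    and commit it once via put(), so the lock is taken per block
    rather than per sample."""
    def __init__(self, size):
        self.pixels = [0.0] * size
        self._lock = threading.Lock()

    def put(self, offset, block):
        with self._lock:                      # single short critical section
            for i, v in enumerate(block):
                self.pixels[offset + i] += v

film = TinyFilm(8)

def render_block(offset):
    block = [0.0] * 4                         # thread-local image block
    for i in range(4):
        block[i] += 1.0                       # deposit samples lock-free
    film.put(offset, block)                   # commit once at the end

threads = [threading.Thread(target=render_block, args=(o,)) for o in (0, 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```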

bitmap(self, raw=False)

Return a bitmap object storing the developed contents of the film

Parameter raw (bool):

no description available

Returns → mitsuba.core.Bitmap:

no description available

crop_offset(self)

Return the offset of the crop window

Returns → enoki.scalar.Vector2i:

no description available

crop_size(self)

Return the size of the crop window

Returns → enoki.scalar.Vector2i:

no description available

destination_exists(self, basename)

Does the destination file already exist?

Parameter basename (mitsuba.core.filesystem.path):

no description available

Returns → bool:

no description available

develop(overloaded)
develop(self)
develop(self, offset, size, target_offset, target)
Parameter offset (enoki.scalar.Vector2i):

no description available

Parameter size (enoki.scalar.Vector2i):

no description available

Parameter target_offset (enoki.scalar.Vector2i):

no description available

Parameter target (mitsuba.core.Bitmap):

no description available

Returns → bool:

no description available

has_high_quality_edges(self)

Should regions slightly outside the image plane be sampled to improve the quality of the reconstruction at the edges? This only makes sense when reconstruction filters other than the box filter are used.

Returns → bool:

no description available

prepare(self, channels)

Configure the film for rendering a specified set of channels

Parameter channels (List[str]):

no description available

Returns → None:

no description available

put(self, block)

Merge an image block into the film. This method should be thread-safe.

Parameter block (mitsuba.render.ImageBlock):

no description available

Returns → None:

no description available

reconstruction_filter(self)

Return the image reconstruction filter (const version)

Returns → mitsuba.core.ReconstructionFilter:

no description available

set_crop_window(self, arg0, arg1)

Set the size and offset of the crop window.

Parameter arg0 (enoki.scalar.Vector2i):

no description available

Parameter arg1 (enoki.scalar.Vector2i):

no description available

Returns → None:

no description available

set_destination_file(self, filename)

Set the target filename (with or without extension)

Parameter filename (mitsuba.core.filesystem.path):

no description available

Returns → None:

no description available

size(self)

Ignoring the crop window, return the resolution of the underlying sensor

Returns → enoki.scalar.Vector2i:

no description available

## Sampler¶

class mitsuba.render.Sampler

Base class: mitsuba.core.Object

Base class of all sample generators.

For each sample in a pixel, a sample generator produces a (hypothetical) point in the infinite dimensional random number cube. A rendering algorithm can then request subsequent 1D or 2D components of this point using the next_1d and next_2d functions.

Scalar and wavefront rendering algorithms need to interact with the sampler interface in slightly different ways:

Scalar rendering algorithm:

1. Before beginning to render a pixel block, the rendering algorithm calls seed to initialize a new sequence with the specific seed offset.

2. The first pixel sample can now be computed, after which advance needs to be invoked. This repeats until all pixel samples have been generated. Note that some implementations need to be configured for a certain number of pixel samples, and exceeding these will lead to an exception being thrown.

3. While computing a pixel sample, the rendering algorithm usually requests batches of (pseudo-) random numbers using the next_1d and next_2d functions before moving on to the next sample.

Wavefront rendering algorithm:

1. Before beginning to render the wavefront, the rendering algorithm needs to inform the sampler of the number of samples rendered in parallel for every pixel in the wavefront. This can be achieved by calling set_samples_per_wavefront.

2. The rendering algorithm should then seed the sampler and set the appropriate wavefront size by calling seed. A different seed value, based on the base_seed and the seed offset, will be used for every sample (of every pixel) in the wavefront.

3. advance can be used to advance to the next sample in the sequence.

4. As in the scalar approach, the rendering algorithm can request batches of (pseudo-) random numbers using the next_1d and next_2d functions.
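The scalar workflow above might look as follows with a hypothetical independent sampler (a sketch for illustration, not the actual Mitsuba plugin):

```python
import random

class IndependentSampler:
    """Minimal sketch of the scalar Sampler workflow: seed once per
    pixel block, request per-sample random numbers via next_1d and
    next_2d, and call advance() between pixel samples."""
    def __init__(self, base_seed=0):
        self.base_seed = base_seed
        self.rng = None

    def seed(self, seed_offset):
        self.rng = random.Random(self.base_seed + seed_offset)

    def advance(self):
        pass   # independent samples: no per-sample state to reset

    def next_1d(self):
        return self.rng.random()

    def next_2d(self):
        return (self.rng.random(), self.rng.random())

sampler = IndependentSampler(base_seed=42)
sampler.seed(seed_offset=7)         # once per pixel block
u = sampler.next_1d()               # 1D component of the current sample
uv = sampler.next_2d()              # 2D component of the current sample
sampler.advance()                   # move on to the next pixel sample
```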

advance(self)

Advance to the next sample. A subsequent call to next_1d or next_2d will access the first 1D or 2D components of this sample.

Returns → None:

no description available

clone(self)

Create a clone of this sampler

The clone is allowed to be different to some extent, e.g. a pseudorandom generator should be based on a different random seed compared to the original. All other parameters are copied exactly.

May throw an exception if not supported. Cloning may also change the state of the original sampler (e.g. by using the next 1D sample as a seed for the clone).

Returns → mitsuba.render.Sampler:

no description available

next_1d(self, active=True)

Retrieve the next component value from the current sample

Parameter active (bool):

Returns → float:

no description available

next_2d(self, active=True)

Retrieve the next two component values from the current sample

Parameter active (bool):

Returns → enoki.scalar.Vector2f:

no description available

sample_count(self)

Return the number of samples per pixel

Returns → int:

no description available

seed(self, seed_offset, wavefront_size=1)

Deterministically seed the underlying RNG, if applicable.

In the context of wavefront ray tracing & dynamic arrays, this function must be called with wavefront_size matching the size of the wavefront.

Parameter seed_offset (int):

no description available

Parameter wavefront_size (int):

no description available

Returns → None:

no description available

set_samples_per_wavefront(self, samples_per_wavefront)

Set the number of samples per pass in wavefront modes (default is 1)

Parameter samples_per_wavefront (int):

no description available

Returns → None:

no description available

wavefront_size(self)

Return the size of the wavefront (or 0, if not seeded)

Returns → int:

no description available

## Scene¶

class mitsuba.render.Scene

Base class: mitsuba.core.Object

bbox(self)

Return a bounding box surrounding the scene

Returns → mitsuba.core.BoundingBox3f:

no description available

emitters(self)

Return the list of emitters

Returns → List[mitsuba.render.Emitter]:

no description available

environment(self)

Return the environment emitter (if any)

Returns → mitsuba.render.Emitter:

no description available

integrator(self)

Return the scene’s integrator

Returns → object:

no description available

pdf_emitter_direction(self, ref, active=True)
Parameter ref (mitsuba.render.Interaction):

no description available

Parameter active (bool):

Returns → float:

no description available

ray_intersect(overloaded)
ray_intersect(self, ray, active=True)

Intersect a ray against all primitives stored in the scene and return information about the resulting surface interaction

Parameter ray (mitsuba.core.Ray3f):

A 3-dimensional ray data structure with minimum/maximum extent information, as well as a time value (which matters when the shapes are in motion)

Returns → mitsuba.render.SurfaceInteraction:

A detailed surface interaction record. Query its is_valid() method to determine whether an intersection was actually found.

Parameter active (bool):

ray_intersect(self, ray, flags, active=True)

Intersect a ray against all primitives stored in the scene and return information about the resulting surface interaction

Parameter ray (mitsuba.core.Ray3f):

A 3-dimensional ray data structure with minimum/maximum extent information, as well as a time value (which matters when the shapes are in motion)

Returns → mitsuba.render.SurfaceInteraction:

A detailed surface interaction record. Query its is_valid() method to determine whether an intersection was actually found.

Parameter flags (mitsuba.render.HitComputeFlags):

no description available

Parameter active (bool):

ray_intersect_naive(self, ray, active=True)
Parameter ray (mitsuba.core.Ray3f):

no description available

Parameter active (bool):

Returns → mitsuba.render.SurfaceInteraction:

no description available

ray_intersect_preliminary(self, ray, active=True)
Parameter ray (mitsuba.core.Ray3f):

no description available

Parameter active (bool):

Returns → mitsuba.render.PreliminaryIntersection:

no description available

ray_test(self, ray, active=True)
Parameter ray (mitsuba.core.Ray3f):

no description available

Parameter active (bool):

Returns → bool:

no description available

sample_emitter_direction(self, ref, sample, test_visibility=True, mask=True)
Parameter ref (mitsuba.render.Interaction):

no description available

Parameter sample (enoki.scalar.Vector2f):

no description available

Parameter test_visibility (bool):

no description available

Parameter mask (bool):

no description available

Returns → Tuple[mitsuba.render.DirectionSample, enoki.scalar.Vector3f]:

no description available

sensors(self)

Return the list of sensors

Returns → List[mitsuba.render.Sensor]:

no description available

shapes(self)
Returns → list:

no description available

shapes_grad_enabled(self)

Return whether any of the shape’s parameters require gradients

Returns → bool:

no description available

class mitsuba.render.ShapeKDTree

Base class: mitsuba.core.Object

Create an empty kd-tree and take build-related parameters from props.

add_shape(self, arg0)

Register a new shape with the kd-tree (to be called before build())

Parameter arg0 (mitsuba.render.Shape):

no description available

Returns → None:

no description available

bbox(self)
Returns → mitsuba.core.BoundingBox3f:

no description available

build(self)

Build the kd-tree

primitive_count(self)

Return the number of registered primitives

Returns → int:

no description available

shape(self, arg0)

Return the i-th shape (const version)

Parameter arg0 (int):

no description available

Returns → mitsuba.render.Shape:

no description available

shape_count(self)

Return the number of registered shapes

Returns → int:

no description available

## Record¶

class mitsuba.render.PositionSample3f

Generic sampling record for positions

This sampling record is used to implement techniques that draw a position from a point, line, surface, or volume domain in 3D and furthermore provide auxiliary information about the sample.

Apart from returning the position and (optionally) the surface normal, the responsible sampling method must annotate the record with the associated probability density and delta.

__init__(self)

Construct an uninitialized position sample

__init__(self, other)

Copy constructor

Parameter other (mitsuba.render.PositionSample3f):

no description available

__init__(self, si)

Create a position sampling record from a surface intersection

This is useful to determine the hypothetical sampling density on a surface after hitting it using standard ray tracing. This happens for instance in path tracing with multiple importance sampling.

Parameter si (mitsuba.render.SurfaceInteraction3f):

no description available

property delta

Set if the sample was drawn from a degenerate (Dirac delta) distribution

Note: we use an array of booleans instead of a mask, so that slicing a dynamic array of PositionSample remains possible even on architectures where scalar_t<Mask> != bool (e.g. Knights Landing).

property n

Sampled surface normal (if applicable)

property object

Optional: pointer to an associated object

In some uses of this record, sampling a position also involves choosing one of several objects (shapes, emitters, ..) on which the position lies. In that case, the object attribute stores a pointer to this object.

property p

Sampled position

property pdf

Probability density at the sample

property time

Associated time value

property uv

Optional: 2D sample position associated with the record

In some uses of this record, a sampled position may be associated with an important 2D quantity, such as the texture coordinates on a triangle mesh or a position on the aperture of a sensor. When applicable, such positions are stored in the uv attribute.

zero(size=1)
Parameter size (int):

no description available

Returns → mitsuba.render.PositionSample3f:

no description available

class mitsuba.render.DirectionSample3f

Base class: mitsuba.render.PositionSample3f

Record for solid-angle based area sampling techniques

This data structure is used in techniques that sample positions relative to a fixed reference position in the scene. For instance, direct illumination strategies importance sample the incident radiance received by a given surface location. Mitsuba uses this approach in a wider bidirectional sense: sampling the incident importance due to a sensor also uses the same data structures and strategies, which are referred to as direct sampling.

This record inherits all fields from PositionSample and extends it with two useful quantities that are cached so that they don’t need to be recomputed: the unit direction and distance from the reference position to the sampled point.
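The two cached quantities are straightforward to derive from the reference point and the sampled position; a sketch with a hypothetical helper function:

```python
import math

def direction_sample_fields(ref_p, sample_p):
    """Compute the two cached DirectionSample quantities: the unit
    direction from the reference point to the sampled point, and the
    distance between them (illustrative only)."""
    delta = [s - r for s, r in zip(sample_p, ref_p)]
    dist = math.sqrt(sum(c * c for c in delta))
    d = [c / dist for c in delta]       # unit direction ref -> sample
    return d, dist

d, dist = direction_sample_fields((0, 0, 0), (3, 0, 4))
# dist = 5.0, d = [0.6, 0.0, 0.8]
```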

__init__(self)

Construct an uninitialized direct sample

__init__(self, other)

Construct from a position sample

Parameter other (mitsuba.render.PositionSample3f):

no description available

__init__(self, other)

Copy constructor

Parameter other (mitsuba.render.DirectionSample3f):

no description available

__init__(self, p, n, uv, time, pdf, delta, object, d, dist)

Element-by-element constructor

Parameter p (enoki.scalar.Vector3f):

no description available

Parameter n (enoki.scalar.Vector3f):

no description available

Parameter uv (enoki.scalar.Vector2f):

no description available

Parameter time (float):

no description available

Parameter pdf (float):

no description available

Parameter delta (bool):

no description available

Parameter object (mitsuba.core.Object):

no description available

Parameter d (enoki.scalar.Vector3f):

no description available

Parameter dist (float):

no description available

__init__(self, si, ref)

Create a position sampling record from a surface intersection

This is useful to determine the hypothetical sampling density on a surface after hitting it using standard ray tracing. This happens for instance in path tracing with multiple importance sampling.

Parameter si (mitsuba.render.SurfaceInteraction3f):

no description available

Parameter ref (mitsuba.render.Interaction3f):

no description available

property d

Unit direction from the reference point to the target shape

property dist

Distance from the reference point to the target shape

set_query(self, ray, si)

Setup this record so that it can be used to query the density of a surface position (where the reference point lies on a surface).

Parameter ray (mitsuba.core.Ray3f):

Reference to the ray that generated the intersection si. The ray origin must be located at the reference surface and point towards si.p.

Parameter si (mitsuba.render.SurfaceInteraction3f):

A surface intersection record (usually on an emitter).

Note: defined in scene.h

Returns → None:

no description available

zero(size=1)
Parameter size (int):

no description available

Returns → mitsuba.render.DirectionSample3f:

no description available

class mitsuba.render.MediumInteraction3f

Base class: mitsuba.render.Interaction3f

Stores information related to a medium scattering interaction

__init__(self)
property medium

Pointer to the associated medium

property sh_frame

to_local(self, v)

Convert a world-space vector into local shading coordinates

Parameter v (enoki.scalar.Vector3f):

no description available

Returns → enoki.scalar.Vector3f:

no description available

to_world(self, v)

Convert a local shading-space vector into world space

Parameter v (enoki.scalar.Vector3f):

no description available

Returns → enoki.scalar.Vector3f:

no description available

property wi

Incident direction in the local shading frame

zero(size=1)
Parameter size (int):

no description available

Returns → mitsuba.render.MediumInteraction3f:

no description available

class mitsuba.render.SurfaceInteraction3f

Base class: mitsuba.render.Interaction3f

Stores information related to a surface scattering interaction

__init__(self)

Construct an uninitialized surface interaction

__init__(self, ps, wavelengths)

Construct from a position sample. Unavailable fields such as wi and the partial derivatives are left uninitialized. The shape pointer is left uninitialized because we can’t guarantee that the given PositionSample::object points to a Shape instance.

Parameter ps (mitsuba.render.PositionSample):

no description available

Parameter wavelengths (enoki.scalar.Vector0f):

no description available

bsdf(overloaded)
bsdf(self, ray)

Returns the BSDF of the intersected shape.

The parameter ray must match the one used to create the interaction record. This function computes texture coordinate partials if this is required by the BSDF (e.g. for texture filtering).

Implementation in ‘bsdf.h’

Parameter ray (mitsuba.core.RayDifferential3f):

no description available

Returns → mitsuba.render.BSDF:

no description available

bsdf(self)
Returns → mitsuba.render.BSDF:

no description available

compute_uv_partials(self, ray)

Computes texture coordinate partials

Parameter ray (mitsuba.core.RayDifferential3f):

no description available

Returns → None:

no description available

property dn_du

Normal partials wrt. the UV parameterization

property dn_dv

Normal partials wrt. the UV parameterization

property dp_du

Position partials wrt. the UV parameterization

property dp_dv

Position partials wrt. the UV parameterization

property duv_dx

UV partials wrt. changes in screen-space

property duv_dy

UV partials wrt. changes in screen-space

emitter(self, scene, active=True)

Return the emitter associated with the intersection (if any). Note: defined in scene.h

Parameter scene (mitsuba.render.Scene):

no description available

Parameter active (bool):

Returns → mitsuba.render.Emitter:

no description available

has_n_partials(self)
Returns → bool:

no description available

has_uv_partials(self)
Returns → bool:

no description available

property instance

Stores a pointer to the parent instance (if applicable)

is_medium_transition(self)

Does the surface mark a transition between two media?

Returns → bool:

no description available

is_sensor(self)

Is the intersected shape also a sensor?

Returns → bool:

no description available

property n

Geometric normal

property prim_index

Primitive index, e.g. the triangle ID (if applicable)

property sh_frame

property shape

Pointer to the associated shape

target_medium(overloaded)
target_medium(self, d)

Determine the target medium

When is_medium_transition() = True, determine the medium that contains the ray(this->p, d)

Parameter d (enoki.scalar.Vector3f):

no description available

Returns → mitsuba.render.Medium:

no description available

target_medium(self, cos_theta)

Determine the target medium based on the cosine of the angle between the geometric normal and a direction

Returns the exterior medium when cos_theta > 0 and the interior medium when cos_theta <= 0.

Parameter cos_theta (float):

no description available

Returns → mitsuba.render.Medium:

no description available

to_local(self, v)

Convert a world-space vector into local shading coordinates

Parameter v (enoki.scalar.Vector3f):

no description available

Returns → enoki.scalar.Vector3f:

no description available

to_local_mueller(self, M_world, wi_world, wo_world)

Converts a Mueller matrix defined in world space to a local frame

A Mueller matrix operates from the (implicitly) defined frame stokes_basis(in_forward) to the frame stokes_basis(out_forward). This method converts a Mueller matrix defined on directions in world-space to a Mueller matrix defined in the local frame.

This expands to a no-op in non-polarized modes.

Parameter M_world (enoki.scalar.Vector3f):

The Mueller matrix in world space.

Parameter wi_world (enoki.scalar.Vector3f):

Incident direction (along the propagation direction of light), given in world-space coordinates.

Parameter wo_world (enoki.scalar.Vector3f):

Outgoing direction (along the propagation direction of light), given in world-space coordinates.

Returns → enoki.scalar.Vector3f:

Equivalent Mueller matrix that operates in local frame coordinates.

to_world(self, v)

Convert a local shading-space vector into world space

Parameter v (enoki.scalar.Vector3f):

no description available

Returns → enoki.scalar.Vector3f:

no description available

to_world_mueller(self, M_local, wi_local, wo_local)

Converts a Mueller matrix defined in a local frame to world space

A Mueller matrix operates from the (implicitly) defined frame stokes_basis(in_forward) to the frame stokes_basis(out_forward). This method converts a Mueller matrix defined on directions in the local frame to a Mueller matrix defined on world-space directions.

This expands to a no-op in non-polarized modes.

Parameter M_local (enoki.scalar.Vector3f):

The Mueller matrix in local space, e.g. returned by a BSDF.

Parameter wi_local (enoki.scalar.Vector3f):

Incident direction (along the propagation direction of light), given in local frame coordinates.

Parameter wo_local (enoki.scalar.Vector3f):

Outgoing direction (along the propagation direction of light), given in local frame coordinates.

Returns → enoki.scalar.Vector3f:

Equivalent Mueller matrix that operates in world-space coordinates.

property uv

UV surface coordinates

property wi

Incident direction in the local shading frame

zero(size=1)
Parameter size (int):

no description available

Returns → mitsuba.render.SurfaceInteraction3f:

no description available

## Polarization¶

mitsuba.render.mueller.absorber(overloaded)
absorber(value)

Constructs the Mueller matrix of an ideal absorber

Parameter value (float):

The amount of absorption.

Returns → enoki.scalar.Matrix4f:

no description available

absorber(value)

Constructs the Mueller matrix of an ideal absorber

Parameter value (enoki.scalar.Vector3f):

The amount of absorption.

Returns → enoki::Matrix<mitsuba.render.Color:

no description available

mitsuba.render.mueller.depolarizer(overloaded)
depolarizer(value=1.0)

Constructs the Mueller matrix of an ideal depolarizer

Parameter value (float):

The value of the (0, 0) element

Returns → enoki.scalar.Matrix4f:

no description available

depolarizer(value=1.0)

Constructs the Mueller matrix of an ideal depolarizer

Parameter value (enoki.scalar.Vector3f):

The value of the (0, 0) element

Returns → enoki::Matrix<mitsuba.render.Color:

no description available

mitsuba.render.mueller.diattenuator(overloaded)
diattenuator(x, y)

Constructs the Mueller matrix of a linear diattenuator, which attenuates the electric field components at 0 and 90 degrees by ‘x’ and ‘y’, respectively.

Parameter x (float):

no description available

Parameter y (float):

no description available

Returns → enoki.scalar.Matrix4f:

no description available

diattenuator(x, y)

Constructs the Mueller matrix of a linear diattenuator, which attenuates the electric field components at 0 and 90 degrees by ‘x’ and ‘y’, respectively.

Parameter x (enoki.scalar.Vector3f):

no description available

Parameter y (enoki.scalar.Vector3f):

no description available

Returns → enoki::Matrix<mitsuba.render.Color:

no description available

mitsuba.render.mueller.linear_polarizer(overloaded)
linear_polarizer(value=1.0)

Constructs the Mueller matrix of a linear polarizer which transmits linear polarization at 0 degrees.

“Polarized Light” by Edward Collett, Ch. 5 eq. (13)

Parameter value (float):

The amount of attenuation of the transmitted component (1 corresponds to an ideal polarizer).

Returns → enoki.scalar.Matrix4f:

no description available

linear_polarizer(value=1.0)

Constructs the Mueller matrix of a linear polarizer which transmits linear polarization at 0 degrees.

“Polarized Light” by Edward Collett, Ch. 5 eq. (13)

Parameter value (enoki.scalar.Vector3f):

The amount of attenuation of the transmitted component (1 corresponds to an ideal polarizer).

Returns → enoki::Matrix<mitsuba.render.Color:

no description available
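
As an illustration of Collett’s formula (Ch. 5 eq. (13)), the matrix can be written out in plain Python. This is a sketch of the math, not Mitsuba’s implementation; `apply` is a small helper defined here:

```python
def linear_polarizer(value=1.0):
    # Mueller matrix of a linear polarizer transmitting 0-degree
    # polarization ("Polarized Light", Collett, Ch. 5 eq. (13)).
    v = 0.5 * value
    return [[v, v, 0.0, 0.0],
            [v, v, 0.0, 0.0],
            [0.0, 0.0, 0.0, 0.0],
            [0.0, 0.0, 0.0, 0.0]]

def apply(M, sv):
    # Multiply a 4x4 Mueller matrix with a Stokes vector.
    return [sum(M[i][j] * sv[j] for j in range(4)) for i in range(4)]

# Unpolarized light [1, 0, 0, 0] becomes horizontally polarized
# light at half intensity: [0.5, 0.5, 0, 0].
out = apply(linear_polarizer(), [1.0, 0.0, 0.0, 0.0])
```

Setting `value` below 1 simply scales the transmitted component, which models a non-ideal polarizer.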

mitsuba.render.mueller.linear_retarder(overloaded)
linear_retarder(phase)

Constructs the Mueller matrix of a linear retarder which has its fast axis aligned vertically.

This implements the general case with arbitrary phase shift and can be used to construct the common special cases of quarter-wave and half-wave plates.

“Polarized Light” by Edward Collett, Ch. 5 eq. (27)

Parameter phase (float):

The phase difference between the fast and slow axis

Returns → enoki.scalar.Matrix4f:

no description available

linear_retarder(phase)

Constructs the Mueller matrix of a linear retarder which has its fast axis aligned vertically.

This implements the general case with arbitrary phase shift and can be used to construct the common special cases of quarter-wave and half-wave plates.

“Polarized Light” by Edward Collett, Ch. 5 eq. (27)

Parameter phase (enoki.scalar.Vector3f):

The phase difference between the fast and slow axis

Returns → enoki::Matrix<mitsuba.render.Color:

no description available
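
The quarter-wave-plate special case can be sketched in plain Python from Collett’s formula (Ch. 5 eq. (27)). This illustrates the math only; sign conventions for the circular component vary between textbooks and may differ from Mitsuba’s:

```python
import math

def linear_retarder(phase):
    # Mueller matrix of a linear retarder with vertical fast axis
    # ("Polarized Light", Collett, Ch. 5 eq. (27)); the sign of the
    # off-diagonal sine terms depends on the handedness convention.
    c, s = math.cos(phase), math.sin(phase)
    return [[1.0, 0.0, 0.0, 0.0],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0,   c,   s],
            [0.0, 0.0,  -s,   c]]

def apply(M, sv):
    return [sum(M[i][j] * sv[j] for j in range(4)) for i in range(4)]

# A quarter-wave plate (phase = pi/2) turns +45-degree linear
# polarization [1, 0, 1, 0] into circular polarization.
out = apply(linear_retarder(math.pi / 2), [1.0, 0.0, 1.0, 0.0])
# out is approximately [1, 0, 0, -1]
```

A half-wave plate corresponds to `phase = math.pi` and flips the sign of both the 45-degree and circular components.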

mitsuba.render.mueller.reverse(overloaded)
reverse(M)

Reverse direction of propagation of the electric field. Also used for reflecting reference frames.

Parameter M (enoki.scalar.Matrix4f):

no description available

Returns → enoki.scalar.Matrix4f:

no description available

reverse(M)

Reverse direction of propagation of the electric field. Also used for reflecting reference frames.

Parameter M (enoki::Matrix<mitsuba.render.Color):

no description available

Returns → enoki::Matrix<mitsuba.render.Color:

no description available

mitsuba.render.mueller.rotate_mueller_basis(overloaded)
rotate_mueller_basis(M, in_forward, in_basis_current, in_basis_target, out_forward, out_basis_current, out_basis_target)

Return the Mueller matrix for some new reference frames. This version rotates the input/output frames independently.

This operation is often used in polarized light transport when we have a known Mueller matrix ‘M’ that operates from ‘in_basis_current’ to ‘out_basis_current’ but instead want to re-express it as a Mueller matrix that operates from ‘in_basis_target’ to ‘out_basis_target’.

Parameter M (enoki.scalar.Matrix4f):

The current Mueller matrix that operates from in_basis_current to out_basis_current.

Parameter in_forward (enoki.scalar.Vector3f):

Direction of travel for input Stokes vector (normalized)

Parameter in_basis_current (enoki.scalar.Vector3f):

Current (normalized) input Stokes basis. Must be orthogonal to in_forward.

Parameter in_basis_target (enoki.scalar.Vector3f):

Target (normalized) input Stokes basis. Must be orthogonal to in_forward.

Parameter out_forward (enoki.scalar.Vector3f):

Direction of travel for output Stokes vector (normalized)

Parameter out_basis_current (enoki.scalar.Vector3f):

Current (normalized) output Stokes basis. Must be orthogonal to out_forward.

Parameter out_basis_target (enoki.scalar.Vector3f):

Target (normalized) output Stokes basis. Must be orthogonal to out_forward.

Returns → enoki.scalar.Matrix4f:

New Mueller matrix that operates from in_basis_target to out_basis_target.

rotate_mueller_basis(M, in_forward, in_basis_current, in_basis_target, out_forward, out_basis_current, out_basis_target)

Return the Mueller matrix for some new reference frames. This version rotates the input/output frames independently.

This operation is often used in polarized light transport when we have a known Mueller matrix ‘M’ that operates from ‘in_basis_current’ to ‘out_basis_current’ but instead want to re-express it as a Mueller matrix that operates from ‘in_basis_target’ to ‘out_basis_target’.

Parameter M (enoki::Matrix<mitsuba.render.Color):

The current Mueller matrix that operates from in_basis_current to out_basis_current.

Parameter in_forward (enoki.scalar.Vector3f):

Direction of travel for input Stokes vector (normalized)

Parameter in_basis_current (enoki.scalar.Vector3f):

Current (normalized) input Stokes basis. Must be orthogonal to in_forward.

Parameter in_basis_target (enoki.scalar.Vector3f):

Target (normalized) input Stokes basis. Must be orthogonal to in_forward.

Parameter out_forward (enoki.scalar.Vector3f):

Direction of travel for output Stokes vector (normalized)

Parameter out_basis_current (enoki.scalar.Vector3f):

Current (normalized) output Stokes basis. Must be orthogonal to out_forward.

Parameter out_basis_target (enoki.scalar.Vector3f):

Target (normalized) output Stokes basis. Must be orthogonal to out_forward.

Returns → enoki::Matrix<mitsuba.render.Color:

New Mueller matrix that operates from in_basis_target to out_basis_target.

mitsuba.render.mueller.rotate_mueller_basis_collinear(overloaded)
rotate_mueller_basis_collinear(M, forward, basis_current, basis_target)

Return the Mueller matrix for some new reference frames. This version applies the same rotation to the input/output frames.

This operation is often used in polarized light transport when we have a known Mueller matrix ‘M’ that operates from ‘basis_current’ to ‘basis_current’ but instead want to re-express it as a Mueller matrix that operates from ‘basis_target’ to ‘basis_target’.

Parameter M (enoki.scalar.Matrix4f):

The current Mueller matrix that operates from basis_current to basis_current.

Parameter forward (enoki.scalar.Vector3f):

Direction of travel for input Stokes vector (normalized)

Parameter basis_current (enoki.scalar.Vector3f):

Current (normalized) input Stokes basis. Must be orthogonal to forward.

Parameter basis_target (enoki.scalar.Vector3f):

Target (normalized) input Stokes basis. Must be orthogonal to forward.

Returns → enoki.scalar.Matrix4f:

New Mueller matrix that operates from basis_target to basis_target.

rotate_mueller_basis_collinear(M, forward, basis_current, basis_target)

Return the Mueller matrix for some new reference frames. This version applies the same rotation to the input/output frames.

This operation is often used in polarized light transport when we have a known Mueller matrix ‘M’ that operates from ‘basis_current’ to ‘basis_current’ but instead want to re-express it as a Mueller matrix that operates from ‘basis_target’ to ‘basis_target’.

Parameter M (enoki::Matrix<mitsuba.render.Color):

The current Mueller matrix that operates from basis_current to basis_current.

Parameter forward (enoki.scalar.Vector3f):

Direction of travel for input Stokes vector (normalized)

Parameter basis_current (enoki.scalar.Vector3f):

Current (normalized) input Stokes basis. Must be orthogonal to forward.

Parameter basis_target (enoki.scalar.Vector3f):

Target (normalized) input Stokes basis. Must be orthogonal to forward.

Returns → enoki::Matrix<mitsuba.render.Color:

New Mueller matrix that operates from basis_target to basis_target.

mitsuba.render.mueller.rotate_stokes_basis(wi, basis_current, basis_target)

Gives the Mueller matrix that aligns the reference frames (defined by their respective basis vectors) of two collinear Stokes vectors.

If we have a Stokes vector s_current expressed in ‘basis_current’, we can re-interpret it as a Stokes vector rotate_stokes_basis(..) * s_current that is expressed in ‘basis_target’ instead. For example: Horizontally polarized light [1,1,0,0] in a basis [1,0,0] can be interpreted as +45˚ linear polarized light [1,0,1,0] by switching to a target basis [0.707, -0.707, 0].

Parameter wi (enoki.scalar.Vector3f):

Direction of travel for the Stokes vector (normalized)

Parameter basis_current (enoki.scalar.Vector3f):

Current (normalized) Stokes basis. Must be orthogonal to wi.

Parameter basis_target (enoki.scalar.Vector3f):

Target (normalized) Stokes basis. Must be orthogonal to wi.

Returns → enoki.scalar.Matrix4f:

Mueller matrix that performs the desired change of reference frames.
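
The worked example above can be reproduced in plain Python: the basis change is just a Mueller rotator by the signed angle between the two bases. This is a sketch of the idea, not Mitsuba’s exact implementation:

```python
import math

def rotator(theta):
    # Mueller rotation matrix (Collett, Ch. 5 eq. (43))
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]]

def apply(M, sv):
    return [sum(M[i][j] * sv[j] for j in range(4)) for i in range(4)]

def rotate_stokes_basis(wi, basis_current, basis_target):
    # Signed angle from basis_current to basis_target around 'wi',
    # recovered from the cross and dot products of the two bases.
    cx = basis_current[1] * basis_target[2] - basis_current[2] * basis_target[1]
    cy = basis_current[2] * basis_target[0] - basis_current[0] * basis_target[2]
    cz = basis_current[0] * basis_target[1] - basis_current[1] * basis_target[0]
    sin_theta = cx * wi[0] + cy * wi[1] + cz * wi[2]
    cos_theta = sum(a * b for a, b in zip(basis_current, basis_target))
    return rotator(math.atan2(sin_theta, cos_theta))

# Horizontal polarization in basis [1, 0, 0] re-expressed in the basis
# [0.707, -0.707, 0] becomes +45-degree polarization.
s = 1.0 / math.sqrt(2.0)
M = rotate_stokes_basis([0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [s, -s, 0.0])
out = apply(M, [1.0, 1.0, 0.0, 0.0])  # approximately [1, 0, 1, 0]
```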

mitsuba.render.mueller.rotate_stokes_basis_m(wi, basis_current, basis_target)

Gives the Mueller matrix that aligns the reference frames (defined by their respective basis vectors) of two collinear Stokes vectors.

If we have a Stokes vector s_current expressed in ‘basis_current’, we can re-interpret it as a Stokes vector rotate_stokes_basis(..) * s_current that is expressed in ‘basis_target’ instead. For example: Horizontally polarized light [1,1,0,0] in a basis [1,0,0] can be interpreted as +45˚ linear polarized light [1,0,1,0] by switching to a target basis [0.707, -0.707, 0].

Parameter wi (enoki.scalar.Vector3f):

Direction of travel for the Stokes vector (normalized)

Parameter basis_current (enoki.scalar.Vector3f):

Current (normalized) Stokes basis. Must be orthogonal to wi.

Parameter basis_target (enoki.scalar.Vector3f):

Target (normalized) Stokes basis. Must be orthogonal to wi.

Returns → enoki::Matrix<mitsuba.render.Color:

Mueller matrix that performs the desired change of reference frames.

mitsuba.render.mueller.rotated_element(overloaded)
rotated_element(theta, M)

Applies a counter-clockwise rotation to the Mueller matrix of a given element.

Parameter theta (float):

no description available

Parameter M (enoki.scalar.Matrix4f):

no description available

Returns → enoki.scalar.Matrix4f:

no description available

rotated_element(theta, M)

Applies a counter-clockwise rotation to the Mueller matrix of a given element.

Parameter theta (enoki.scalar.Vector3f):

no description available

Parameter M (enoki::Matrix<mitsuba.render.Color):

no description available

Returns → enoki::Matrix<mitsuba.render.Color:

no description available

mitsuba.render.mueller.rotator(overloaded)
rotator(theta)

Constructs the Mueller matrix of an ideal rotator, which performs a counter-clockwise rotation of the electric field by ‘theta’ radians (when facing the light beam from the sensor side).

To be more precise, it rotates the reference frame of the current Stokes vector. For example: horizontally linear polarized light s1 = [1,1,0,0] will look like -45˚ linear polarized light s2 = R(45˚) * s1 = [1,0,-1,0] after applying a rotator of +45˚ to it.

“Polarized Light” by Edward Collett, Ch. 5 eq. (43)

Parameter theta (float):

no description available

Returns → enoki.scalar.Matrix4f:

no description available

rotator(theta)

Constructs the Mueller matrix of an ideal rotator, which performs a counter-clockwise rotation of the electric field by ‘theta’ radians (when facing the light beam from the sensor side).

To be more precise, it rotates the reference frame of the current Stokes vector. For example: horizontally linear polarized light s1 = [1,1,0,0] will look like -45˚ linear polarized light s2 = R(45˚) * s1 = [1,0,-1,0] after applying a rotator of +45˚ to it.

“Polarized Light” by Edward Collett, Ch. 5 eq. (43)

Parameter theta (enoki.scalar.Vector3f):

no description available

Returns → enoki::Matrix<mitsuba.render.Color:

no description available
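
The rotation example from the text, together with the rotated_element conjugation above, can be sketched in plain Python. This illustrates the formulas; the sign convention for ‘theta’ in rotated_element may differ from Mitsuba’s:

```python
import math

def rotator(theta):
    # Ideal rotator (Collett, Ch. 5 eq. (43)): rotates the reference
    # frame of a Stokes vector counter-clockwise by 'theta' radians.
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(M, sv):
    return [sum(M[i][j] * sv[j] for j in range(4)) for i in range(4)]

def rotated_element(theta, M):
    # Conjugate an element by rotators to obtain the Mueller matrix of
    # the same element rotated counter-clockwise by 'theta'.
    return matmul(rotator(-theta), matmul(M, rotator(theta)))

# Example from the text: horizontally polarized light [1, 1, 0, 0]
# after a +45-degree rotator looks like -45-degree polarized light.
out = apply(rotator(math.radians(45)), [1.0, 1.0, 0.0, 0.0])
# out is approximately [1, 0, -1, 0]
```

Conjugating a horizontal polarizer with `rotated_element(math.radians(45), M)` yields the Mueller matrix of a +45-degree polarizer, which is a quick sanity check of the convention used here.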

mitsuba.render.mueller.specular_reflection(overloaded)
specular_reflection(cos_theta_i, eta)

Calculates the Mueller matrix of a specular reflection at an interface between two dielectrics or conductors.

Parameter cos_theta_i (float):

Cosine of the angle between the surface normal and the incident ray

Parameter eta (enoki.scalar.Complex2f):

Complex-valued relative refractive index of the interface. In the real case, a value greater than 1.0 means that the surface normal points into the region of lower density.

Returns → enoki.scalar.Matrix4f:

no description available

specular_reflection(cos_theta_i, eta)

Calculates the Mueller matrix of a specular reflection at an interface between two dielectrics or conductors.

Parameter cos_theta_i (enoki.scalar.Vector3f):

Cosine of the angle between the surface normal and the incident ray

Parameter eta (enoki::Complex<mitsuba.render.Color):

Complex-valued relative refractive index of the interface. In the real case, a value greater than 1.0 means that the surface normal points into the region of lower density.

Returns → enoki::Matrix<mitsuba.render.Color:

no description available

mitsuba.render.mueller.specular_transmission(overloaded)
specular_transmission(cos_theta_i, eta)

Calculates the Mueller matrix of a specular transmission at an interface between two dielectrics or conductors.

Parameter cos_theta_i (float):

Cosine of the angle between the surface normal and the incident ray

Parameter eta (float):

Complex-valued relative refractive index of the interface. A value greater than 1.0 in the real case means that the surface normal is pointing into the region of lower density.

Returns → enoki.scalar.Matrix4f:

no description available

specular_transmission(cos_theta_i, eta)

Calculates the Mueller matrix of a specular transmission at an interface between two dielectrics or conductors.

Parameter cos_theta_i (enoki.scalar.Vector3f):

Cosine of the angle between the surface normal and the incident ray

Parameter eta (enoki.scalar.Vector3f):

Complex-valued relative refractive index of the interface. A value greater than 1.0 in the real case means that the surface normal is pointing into the region of lower density.

Returns → enoki::Matrix<mitsuba.render.Color:

no description available

mitsuba.render.mueller.stokes_basis(w)

Gives the reference frame basis for a Stokes vector.

For light transport involving polarized quantities it is essential to keep track of reference frames. A Stokes vector is only meaningful if we also know w.r.t. which basis this state of light is observed. In Mitsuba, these reference frames are never explicitly stored but instead can be computed on the fly using this function.

Parameter w (enoki.scalar.Vector3f):

Direction of travel for Stokes vector (normalized)

Returns → enoki.scalar.Vector3f:

The (implicitly defined) reference coordinate system basis for the Stokes vector travelling along w.
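
Mitsuba derives this basis from coordinate_system(w). A minimal construction of one unit vector orthogonal to w can be sketched as follows; the concrete basis chosen here is illustrative and may differ from the one Mitsuba computes:

```python
import math

def stokes_basis(w):
    # Build some unit vector orthogonal to the (normalized) direction
    # 'w'. Branch on the larger components to avoid a degenerate cross
    # product; any consistent choice yields a valid Stokes basis.
    if abs(w[0]) > abs(w[2]):
        b = [-w[1], w[0], 0.0]
    else:
        b = [0.0, -w[2], w[1]]
    inv_len = 1.0 / math.sqrt(sum(c * c for c in b))
    return [c * inv_len for c in b]

# Basis for light travelling along +z; orthogonal to the direction.
basis = stokes_basis([0.0, 0.0, 1.0])
```

What matters in practice is not which basis is picked, but that the same function is used consistently, so that frames can always be recomputed on the fly.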

mitsuba.render.mueller.unit_angle(a, b)
Parameter a (enoki.scalar.Vector3f):

no description available

Parameter b (enoki.scalar.Vector3f):

no description available

Returns → float:

no description available

## Other¶

class mitsuba.render.HitComputeFlags

Members:

None

No flags set

Minimal

Compute position and geometric normal

UV

Compute UV coordinates

dPdUV

Compute position partials wrt. UV coordinates

dNGdUV

Compute the geometric normal partials wrt. the UV coordinates

dNSdUV

Compute the shading normal partials wrt. the UV coordinates

ShadingFrame

NonDifferentiable

Force computed fields to not be differentiable

All

Compute all fields of the surface interaction data structure (default)

AllNonDifferentiable

Compute all fields of the surface interaction data structure in a non differentiable way

__init__(self, arg0)
Parameter arg0 (int):

no description available

class mitsuba.render.ImageBlock

Base class: mitsuba.core.Object

Storage for an image sub-block (a.k.a render bucket)

This class is used by image-based parallel processes and encapsulates computed rectangular regions of an image. This allows for easy and efficient distributed rendering of large images. Image blocks usually also include a border region storing contributions that are slightly outside of the block, which is required to support image reconstruction filters.

__init__(self, size, channel_count, filter=None, warn_negative=True, warn_invalid=True, border=True, normalize=False)
Parameter size (enoki.scalar.Vector2i):

no description available

Parameter channel_count (int):

no description available

Parameter filter (mitsuba.core.ReconstructionFilter):

no description available

Parameter warn_negative (bool):

no description available

Parameter warn_invalid (bool):

no description available

Parameter border (bool):

no description available

Parameter normalize (bool):

no description available

border_size(self)

Return the border region used by the reconstruction filter

Returns → int:

no description available

channel_count(self)

Return the number of channels stored by the image block

Returns → int:

no description available

clear(self)

Clear everything to zero.

Returns → None:

no description available

data(self)

Return the underlying pixel buffer

Returns → enoki.dynamic.Float32:

no description available

height(self)

Return the bitmap’s height in pixels

Returns → int:

no description available

offset(self)

Return the current block offset

Returns → enoki.scalar.Vector2i:

no description available

put(overloaded)
put(self, block)

Accumulate another image block into this one

Parameter block (mitsuba.render.ImageBlock):

no description available

put(self, pos, wavelengths, value, alpha=1.0, active=True)

Store a single sample / packets of samples inside the image block.

Note: This method is only valid if a reconstruction filter was given at the construction of the block.

Parameter pos (enoki.scalar.Vector2f):

Denotes the sample position in fractional pixel coordinates. It is not checked, and so must be valid. The block’s offset is subtracted from the given position to obtain the final pixel position.

Parameter wavelengths (enoki.scalar.Vector0f):

Sample wavelengths in nanometers

Parameter value (enoki.scalar.Vector3f):

Sample value associated with the specified wavelengths

Parameter alpha (float):

Alpha value associated with the sample

Returns → bool:

False if one of the sample values was invalid, e.g. NaN or negative. A warning is also printed if m_warn_negative or m_warn_invalid is enabled.

Parameter active (bool):

put(self, pos, data, active=True)
Parameter pos (enoki.scalar.Vector2f):

no description available

Parameter data (List[float]):

no description available

Parameter active (bool):

set_offset(self, offset)

Set the current block offset.

This corresponds to the offset from the top-left corner of a larger image (e.g. a Film) to the top-left corner of this ImageBlock instance.

Parameter offset (enoki.scalar.Vector2i):

no description available

Returns → None:

no description available

set_warn_invalid(self, value)

Warn when writing invalid (NaN, +/- infinity) sample values?

Parameter value (bool):

no description available

Returns → None:

no description available

set_warn_negative(self, value)

Warn when writing negative sample values?

Parameter value (bool):

no description available

Returns → None:

no description available

size(self)

Return the current block size

Returns → enoki.scalar.Vector2i:

no description available

warn_invalid(self)

Warn when writing invalid (NaN, +/- infinity) sample values?

Returns → bool:

no description available

warn_negative(self)

Warn when writing negative sample values?

Returns → bool:

no description available

width(self)

Return the bitmap’s width in pixels

Returns → int:

no description available
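
The role of the border region and of filter-weighted accumulation in put() can be illustrated with a pure-Python splat into a padded buffer. The `splat` helper and the tent filter are illustrative choices, not ImageBlock’s actual implementation:

```python
import math

def splat(block, pos, value, border, radius=1.0):
    # Accumulate 'value' at the fractional pixel position 'pos' using a
    # simple tent reconstruction filter of the given radius. The buffer
    # is padded by 'border' pixels on each side so that contributions
    # falling slightly outside the block are not lost -- the same role
    # the border region plays in ImageBlock.
    h, w = len(block), len(block[0])
    px, py = pos
    lo_x = math.floor(px - radius) + border
    lo_y = math.floor(py - radius) + border
    hi_x = math.floor(px + radius) + border + 1
    hi_y = math.floor(py + radius) + border + 1
    for y in range(max(0, lo_y), min(h, hi_y + 1)):
        for x in range(max(0, lo_x), min(w, hi_x + 1)):
            # Pixel centers sit at (index - border + 0.5) in block space
            wx = max(0.0, radius - abs(x - border + 0.5 - px))
            wy = max(0.0, radius - abs(y - border + 0.5 - py))
            block[y][x] += value * wx * wy

border = 1
block = [[0.0] * (4 + 2 * border) for _ in range(4 + 2 * border)]
# A sample near the block edge still deposits its full filter footprint
# thanks to the border padding; the deposited weights sum to the value.
splat(block, (0.2, 0.2), 1.0, border)
```

Without the padding, the part of the footprint that falls outside the 4x4 interior would be clipped and the total deposited energy would no longer match the sample value.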

class mitsuba.render.Integrator

Base class: mitsuba.core.Object

Abstract integrator base class, which does not make any assumptions with regards to how radiance is computed.

In Mitsuba, the different rendering techniques are collectively referred to as integrators, since they perform integration over a high-dimensional space. Each integrator represents a specific approach for solving the light transport equation—usually favored in certain scenarios, but at the same time affected by its own set of intrinsic limitations. Therefore, it is important to carefully select an integrator based on user-specified accuracy requirements and properties of the scene to be rendered.

This is the base class of all integrators; it does not make any assumptions on how radiance is computed, which allows for many different kinds of implementations.

cancel(self)

Cancel a running render job

This function can be called asynchronously to cancel a running render job. In this case, render() will quit with a return value of False.

Returns → None:

no description available

render(self, scene, sensor)

Perform the main rendering job. Returns True upon success

Parameter scene (mitsuba.render.Scene):

no description available

Parameter sensor (mitsuba.render.Sensor):

no description available

Returns → bool:

no description available

class mitsuba.render.Interaction3f

Generic surface interaction data structure

__init__(self)
is_valid(self)

Is the current interaction valid?

Returns → bool:

no description available

property p

Position of the interaction in world coordinates

spawn_ray(self, d)

Spawn a semi-infinite ray towards the given direction

Parameter d (enoki.scalar.Vector3f):

no description available

Returns → mitsuba.core.Ray3f:

no description available

spawn_ray_to(self, t)

Spawn a finite ray towards the given position

Parameter t (enoki.scalar.Vector3f):

no description available

Returns → mitsuba.core.Ray3f:

no description available

property t

Distance traveled along the ray

property time

Time value associated with the interaction

property wavelengths

Wavelengths associated with the ray that produced this interaction

zero(size=1)
Parameter size (int):

no description available

Returns → mitsuba.render.Interaction3f:

no description available

class mitsuba.render.MonteCarloIntegrator

Base class: mitsuba.render.SamplingIntegrator

class mitsuba.render.PreliminaryIntersection3f

Stores preliminary information related to a ray intersection

This data structure is used as return type for the Shape::ray_intersect_preliminary efficient ray intersection routine. It stores whether the shape is intersected by a given ray, and caches preliminary information about the intersection if that is the case.

If the intersection is deemed relevant, detailed intersection information can later be obtained via the create_surface_interaction() method.

__init__(self)
compute_surface_interaction(self, ray, flags=HitComputeFlags.All, active=True)

Compute and return detailed information related to a surface interaction

Parameter ray (mitsuba.core.Ray3f):

Ray associated with the ray intersection

Parameter flags (mitsuba.render.HitComputeFlags):

Flags specifying which information should be computed

Parameter active (bool):

Returns → mitsuba.render.SurfaceInteraction3f:

A data structure containing the detailed information

property instance

Stores a pointer to the parent instance (if applicable)

is_valid(self)

Is the current interaction valid?

Returns → bool:

no description available

property prim_index

Primitive index, e.g. the triangle ID (if applicable)

property prim_uv

2D coordinates on the primitive surface parameterization

property shape

Pointer to the associated shape

property shape_index

Shape index, e.g. the shape ID in shapegroup (if applicable)

property t

Distance traveled along the ray

class mitsuba.render.SamplingIntegrator

Base class: mitsuba.render.Integrator

Integrator based on Monte Carlo sampling

This integrator performs Monte Carlo integration to return an unbiased statistical estimate of the radiance value along a given ray. The default implementation of the render() method then repeatedly invokes this estimator to compute all pixels of the image.

__init__(self, arg0)
Parameter arg0 (mitsuba.core.Properties):

no description available

aov_names(self)

For integrators that return one or more arbitrary output variables (AOVs), this function specifies a list of associated channel names. The default implementation simply returns an empty vector.

Returns → List[str]:

no description available

sample(self, scene, sampler, ray, medium=None, active=True)

Sample the incident radiance along a ray.

Parameter scene (mitsuba.render.Scene):

The underlying scene in which the radiance function should be sampled

Parameter sampler (mitsuba.render.Sampler):

A source of (pseudo-/quasi-) random numbers

Parameter ray (mitsuba.core.RayDifferential3f):

A ray, optionally with differentials

Parameter medium (mitsuba.render.Medium):

If the ray is inside a medium, this parameter holds a pointer to that medium

Parameter active (bool):

A mask that indicates which SIMD lanes are active

Parameter aov:

Integrators may return one or more arbitrary output variables (AOVs) via this parameter. If nullptr is provided to this argument, no AOVs should be returned. Otherwise, the caller guarantees that space for at least aov_names().size() entries has been allocated.

Returns → Tuple[enoki.scalar.Vector3f, bool, List[float]]:

A pair containing a spectrum and a mask specifying whether a surface or medium interaction was sampled. False mask entries indicate that the ray “escaped” the scene, in which case the returned spectrum contains the contribution of environment maps, if present. The mask can be used to estimate a suitable alpha channel of a rendered image.

Remark:

In the Python bindings, this function returns the aov output argument as an additional return value. In other words:  (spec, mask, aov) = integrator.sample(scene, sampler, ray, medium, active) 

should_stop(self)

Indicates whether cancel() or a timeout have occurred. Should be checked regularly in the integrator’s main loop so that timeouts are enforced accurately.

Note that accurate timeouts rely on m_render_timer, which needs to be reset at the beginning of the rendering phase.

Returns → bool:

no description available

class mitsuba.render.Spiral

Base class: mitsuba.core.Object

Generates a spiral of blocks to be rendered.

Author:

Adam Arbree Aug 25, 2005 RayTracer.java Used with permission. Copyright 2005 Program of Computer Graphics, Cornell University

__init__(self, size, offset, block_size=32, passes=1)

Create a new spiral generator for the given size, offset into a larger frame, and block size

Parameter size (enoki.scalar.Vector2i):

no description available

Parameter offset (enoki.scalar.Vector2i):

no description available

Parameter block_size (int):

no description available

Parameter passes (int):

no description available

block_count(self)

Return the total number of blocks

Returns → int:

no description available

max_block_size(self)

Return the maximum block size

Returns → int:

no description available

next_block(self)

Return the offset, size and unique identifier of the next block.

A size of zero indicates that the spiral traversal is done.

Returns → Tuple[enoki.scalar.Vector2i, enoki.scalar.Vector2i, int]:

no description available

reset(self)

Reset the spiral to its initial state. Does not affect the number of passes.

Returns → None:

no description available

set_passes(self, arg0)

Sets the number of times the spiral should automatically reset. Not affected by a call to reset().

Parameter arg0 (int):

no description available

Returns → None:

no description available
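
The traversal described above (center block first, spiraling outward) can be sketched independently of Mitsuba in plain Python. The function name and details are illustrative, not Spiral’s actual code:

```python
def spiral_blocks(width, height, block_size):
    # Yield (x, y) pixel offsets of block-sized tiles covering a
    # width x height image, starting at the center block and walking
    # outward in a spiral (leg lengths 1, 1, 2, 2, 3, 3, ...).
    nx = (width + block_size - 1) // block_size
    ny = (height + block_size - 1) // block_size
    cx, cy = (nx - 1) // 2, (ny - 1) // 2
    x = y = 0
    dx, dy = 1, 0
    steps, emitted, total = 1, 0, nx * ny
    while emitted < total:
        for _ in range(2):
            for _ in range(steps):
                bx, by = cx + x, cy + y
                if 0 <= bx < nx and 0 <= by < ny:
                    # Skip spiral cells that fall outside the grid
                    yield (bx * block_size, by * block_size)
                    emitted += 1
                    if emitted == total:
                        return
                x += dx
                y += dy
            dx, dy = -dy, dx  # turn 90 degrees
        steps += 1

# A 128x96 image with 32-pixel blocks yields 12 tiles, center first.
blocks = list(spiral_blocks(128, 96, 32))
```

Rendering the center of the image first is useful for interactive preview, since the region the user most likely cares about converges earliest.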

class mitsuba.render.Texture

Base class: mitsuba.core.Object

Base class of all surface texture implementations

This class implements a generic texture map that supports evaluation at arbitrary surface positions and wavelengths (if compiled in spectral mode). It can be used to provide both intensities (e.g. for light sources) and unitless reflectance parameters (e.g. an albedo of a reflectance model).

The spectrum can be evaluated at arbitrary (continuous) wavelengths, though the underlying function is not required to be smooth or even continuous.

D65(scale=1.0)
Parameter scale (float):

no description available

Returns → mitsuba.render.Texture:

no description available

eval(self, si, active=True)

Evaluate the texture at the given surface interaction

Parameter si (mitsuba.render.SurfaceInteraction3f):

An interaction record describing the associated surface position

Parameter active (bool):

Returns → enoki.scalar.Vector3f:

An unpolarized spectral power distribution or reflectance value

eval_1(self, si, active=True)

Monochromatic evaluation of the texture at the given surface interaction

This function differs from eval() in that it provides raw access to scalar intensity/reflectance values without any color processing (e.g. spectral upsampling). This is useful in parts of the renderer that encode scalar quantities using textures, e.g. a height field.

Parameter si (mitsuba.render.SurfaceInteraction3f):

An interaction record describing the associated surface position

Parameter active (bool):

Returns → float:

A scalar intensity or reflectance value

eval_3(self, si, active=True)

Trichromatic evaluation of the texture at the given surface interaction

This function differs from eval() in that it provides raw access to RGB intensity/reflectance values without any additional color processing (e.g. RGB-to-spectral upsampling). This is useful in parts of the renderer that encode 3D quantities using textures, e.g. a normal map.

Parameter si (mitsuba.render.SurfaceInteraction3f):

An interaction record describing the associated surface position

Parameter active (bool):

Returns → enoki.scalar.Vector3f:

A trichromatic intensity or reflectance value

is_spatially_varying(self)

Does this texture evaluation depend on the UV coordinates

Returns → bool:

no description available

mean(self)

Return the mean value of the spectrum over the support (MTS_WAVELENGTH_MIN..MTS_WAVELENGTH_MAX)

Not every implementation necessarily provides this function. The default implementation throws an exception.

Even if the operation is provided, it may only return an approximation.

Returns → float:

no description available

pdf_position(self, p, active=True)

Returns the probability per unit area of sample_position()

Parameter p (enoki.scalar.Vector2f):

no description available

Parameter active (bool):

Returns → float:

no description available

pdf_spectrum(self, si, active=True)

Evaluate the density function of the sample_spectrum() method as a probability per unit wavelength (in units of 1/nm).

Not every implementation necessarily provides this function. The default implementation throws an exception.

Parameter si (mitsuba.render.SurfaceInteraction3f):

An interaction record describing the associated surface position

Parameter active (bool):

Returns → enoki.scalar.Vector0f:

A density value for each wavelength in si.wavelengths (hence the Wavelength type).

sample_position(self, sample, active=True)

Importance sample a surface position proportional to the overall spectral reflectance or intensity of the texture

This function assumes that the texture is implemented as a mapping from 2D UV positions to texture values, which is not necessarily true for all textures (e.g. 3D noise functions, mesh attributes, etc.). For this reason, not every plugin will provide a specialized implementation, and the default implementation simply returns the input sample (i.e. uniform sampling is used).

Parameter sample (enoki.scalar.Vector2f):

A 2D vector of uniform variates

Parameter active (bool):

Returns → Tuple[enoki.scalar.Vector2f, float]:
1. A texture-space position in the range $$[0, 1]^2$$

2. The associated probability per unit area in UV space

sample_spectrum(self, si, sample, active=True)

Importance sample a set of wavelengths proportional to the spectrum defined at the given surface position

Not every implementation necessarily provides this function, and it is a no-op when compiling non-spectral variants of Mitsuba. The default implementation throws an exception.

Parameter si (mitsuba.render.SurfaceInteraction3f):

An interaction record describing the associated surface position

Parameter sample (enoki.scalar.Vector0f):

A uniform variate for each desired wavelength.

Parameter active (bool):

Returns → Tuple[enoki.scalar.Vector0f, enoki.scalar.Vector3f]:
1. Set of sampled wavelengths specified in nanometers

2. The Monte Carlo importance weight (spectral power distribution value divided by the sampling density)
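As a hedged illustration of this sampling contract (not the actual implementation), the sketch below draws wavelengths uniformly over an assumed visible range of 360–830 nm; the weight returned by sample_spectrum would then be the spectrum value at the sampled wavelength divided by this pdf:

```python
# Illustrative only: the exact wavelength range is configuration-dependent.
LAMBDA_MIN, LAMBDA_MAX = 360.0, 830.0  # assumed visible range, in nm

def sample_wavelength_uniform(sample):
    """Map a uniform variate in [0, 1) to a wavelength and its pdf (1/nm)."""
    wavelength = LAMBDA_MIN + sample * (LAMBDA_MAX - LAMBDA_MIN)
    pdf = 1.0 / (LAMBDA_MAX - LAMBDA_MIN)
    return wavelength, pdf
```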

mitsuba.render.eval_reflectance(type, alpha_u, alpha_v, wi, eta)
Parameter type (mitsuba.render.MicrofacetType):

no description available

Parameter alpha_u (float):

no description available

Parameter alpha_v (float):

no description available

Parameter wi (mitsuba.render.Vector):

no description available

Parameter eta (float):

no description available

Returns → enoki.dynamic.Float32:

no description available

mitsuba.render.fresnel(cos_theta_i, eta)

Calculates the unpolarized Fresnel reflection coefficient at a planar interface between two dielectrics

Parameter cos_theta_i (float):

Cosine of the angle between the surface normal and the incident ray

Parameter eta (float):

Relative refractive index of the interface. A value greater than 1.0 means that the surface normal is pointing into the region of lower density.

Returns → Tuple[float, float, float, float]:

A tuple (F, cos_theta_t, eta_it, eta_ti) consisting of

F Fresnel reflection coefficient.

cos_theta_t Cosine of the angle between the surface normal and the transmitted ray

eta_it Relative index of refraction in the direction of travel.

eta_ti Reciprocal of the relative index of refraction in the direction of travel. This also happens to be equal to the scale factor that must be applied to the X and Y component of the refracted direction.
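The tuple above can be reproduced with a small pure-Python sketch of the standard dielectric Fresnel equations. The sign convention for cos_theta_t (flipped into the transmission hemisphere) and the handling of rays arriving from the inside (cos_theta_i < 0) are assumptions here; treat this as illustrative rather than Mitsuba's exact code:

```python
import math

def fresnel_dielectric(cos_theta_i, eta):
    """Return (F, cos_theta_t, eta_it, eta_ti) for a dielectric interface."""
    # Flip the relative IOR when the ray arrives from the denser side
    eta_it = eta if cos_theta_i >= 0.0 else 1.0 / eta
    eta_ti = 1.0 / eta_it
    ci = abs(cos_theta_i)
    # Snell's law: sin_theta_t = sin_theta_i / eta_it
    sin2_theta_t = eta_ti * eta_ti * (1.0 - ci * ci)
    if sin2_theta_t >= 1.0:
        return 1.0, 0.0, eta_it, eta_ti  # total internal reflection
    ct = math.sqrt(1.0 - sin2_theta_t)
    # Amplitude reflection coefficients for s- and p-polarization
    r_s = (ci - eta_it * ct) / (ci + eta_it * ct)
    r_p = (eta_it * ci - ct) / (eta_it * ci + ct)
    F = 0.5 * (r_s * r_s + r_p * r_p)
    # Assumed convention: cos_theta_t points into the opposite hemisphere
    cos_theta_t = -math.copysign(ct, cos_theta_i)
    return F, cos_theta_t, eta_it, eta_ti
```

At normal incidence with eta = 1.5 this yields the familiar F = ((eta - 1)/(eta + 1))^2 = 0.04.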

mitsuba.render.fresnel_conductor(cos_theta_i, eta)

Calculates the unpolarized Fresnel reflection coefficient at a planar interface of a conductor, i.e. a surface with a complex-valued relative index of refraction

Remark:

The implementation assumes that cos_theta_i > 0, i.e. light enters from outside of the conducting layer (generally a reasonable assumption unless very thin layers are being simulated)

Parameter cos_theta_i (float):

Cosine of the angle between the surface normal and the incident ray

Parameter eta (enoki.scalar.Complex2f):

Relative refractive index (complex-valued)

Returns → float:

The unpolarized Fresnel reflection coefficient.
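A compact way to sketch the conductor case is to evaluate the same amplitude formulas with complex arithmetic; this illustrative version is not Mitsuba's implementation, but it agrees with the closed-form result ((n-1)^2 + k^2) / ((n+1)^2 + k^2) at normal incidence:

```python
import cmath

def fresnel_conductor(cos_theta_i, eta):
    """Unpolarized conductor Fresnel term; `eta` is the complex relative
    IOR (n + k*i). Assumes cos_theta_i > 0, as noted in the remark above."""
    ci = cos_theta_i
    sin2_theta_i = 1.0 - ci * ci
    # Complex-valued cosine of the (attenuated) transmitted wave
    ct = cmath.sqrt(1.0 - sin2_theta_i / (eta * eta))
    r_s = (ci - eta * ct) / (ci + eta * ct)
    r_p = (eta * ci - ct) / (eta * ci + ct)
    return 0.5 * (abs(r_s) ** 2 + abs(r_p) ** 2)
```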

mitsuba.render.fresnel_polarized(cos_theta_i, eta)

Calculates the polarized Fresnel reflection coefficient at a planar interface between two dielectrics or conductors. Returns complex values encoding the amplitude and phase shift of the s- and p-polarized waves.

This is the most general version, which subsumes all others (at the cost of transcendental function evaluations in the complex-valued arithmetic)

Parameter cos_theta_i (float):

Cosine of the angle between the surface normal and the incident ray

Parameter eta (enoki.scalar.Complex2f):

Complex-valued relative refractive index of the interface. In the real case, a value greater than 1.0 means that the surface normal points into the region of lower density.

Returns → Tuple[enoki.scalar.Complex2f, enoki.scalar.Complex2f, float, enoki.scalar.Complex2f, enoki.scalar.Complex2f]:

A tuple (a_s, a_p, cos_theta_t, eta_it, eta_ti) consisting of

a_s Perpendicularly polarized wave amplitude and phase shift.

a_p Parallel polarized wave amplitude and phase shift.

cos_theta_t Cosine of the angle between the surface normal and the transmitted ray. Zero in the case of total internal reflection.

eta_it Relative index of refraction in the direction of travel

eta_ti Reciprocal of the relative index of refraction in the direction of travel. In the real-valued case, this also happens to be equal to the scale factor that must be applied to the X and Y component of the refracted direction.
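For intuition about the complex-valued return values, here is a simplified pure-Python version of the s- and p-amplitudes (the phase/sign conventions are illustrative and not necessarily Mitsuba's). Under total internal reflection, both amplitudes have unit magnitude, i.e. they encode a pure phase shift:

```python
import cmath

def fresnel_amplitudes(cos_theta_i, eta):
    """Return complex amplitude coefficients (a_s, a_p); `eta` may be
    real (dielectric) or complex (conductor)."""
    ci = abs(cos_theta_i)
    # cmath.sqrt yields an imaginary cosine under total internal reflection
    ct = cmath.sqrt(1.0 - (1.0 - ci * ci) / (eta * eta))
    a_s = (ci - eta * ct) / (ci + eta * ct)
    a_p = (eta * ci - ct) / (eta * ci + ct)
    return a_s, a_p
```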

mitsuba.render.has_flag(overloaded)
has_flag(arg0, arg1)
Parameter arg0 (mitsuba.render.HitComputeFlags):

no description available

Parameter arg1 (mitsuba.render.HitComputeFlags):

no description available

Returns → bool:

no description available

has_flag(arg0, arg1)
Parameter arg0 (int):

no description available

Parameter arg1 (mitsuba.render.BSDFFlags):

no description available

Returns → bool:

no description available

has_flag(arg0, arg1)
Parameter arg0 (int):

no description available

Parameter arg1 (mitsuba.render.PhaseFunctionFlags):

no description available

Returns → bool:

no description available
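Conceptually, has_flag is a bitwise containment test on flag enumerations. The stand-alone sketch below uses hypothetical flag values; the real constants live in mitsuba.render.BSDFFlags and friends:

```python
from enum import IntFlag

class BSDFFlags(IntFlag):
    # Hypothetical bit values, for illustration only
    DiffuseReflection = 0x0001
    GlossyReflection = 0x0002

def has_flag(flags, flag):
    # True if every bit of `flag` is set in `flags`
    return (flags & flag) == flag
```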

mitsuba.render.reflect(overloaded)
reflect(wi)

Reflection in local coordinates

Parameter wi (enoki.scalar.Vector3f):

no description available

Returns → enoki.scalar.Vector3f:

no description available

reflect(wi, m)

Reflect wi with respect to a given surface normal

Parameter wi (enoki.scalar.Vector3f):

no description available

Parameter m (enoki.scalar.Vector3f):

no description available

Returns → enoki.scalar.Vector3f:

no description available
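Both overloads follow from elementary vector algebra; a pure-Python sketch (plain tuples stand in for enoki.scalar.Vector3f):

```python
def reflect_local(wi):
    # In the local shading frame the normal is (0, 0, 1),
    # so reflection simply negates the tangential components.
    return (-wi[0], -wi[1], wi[2])

def reflect(wi, m):
    # Mirror wi about an arbitrary unit normal m: 2 <wi, m> m - wi
    d = wi[0] * m[0] + wi[1] * m[1] + wi[2] * m[2]
    return (2.0 * d * m[0] - wi[0],
            2.0 * d * m[1] - wi[1],
            2.0 * d * m[2] - wi[2])
```

With m = (0, 0, 1), the general overload reduces to the local-frame one.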

mitsuba.render.refract(overloaded)
refract(wi, cos_theta_t, eta_ti)

Refraction in local coordinates

The ‘cos_theta_t’ and ‘eta_ti’ parameters are given by the last two tuple entries returned by the fresnel and fresnel_polarized functions.

Parameter wi (enoki.scalar.Vector3f):

no description available

Parameter cos_theta_t (float):

no description available

Parameter eta_ti (float):

no description available

Returns → enoki.scalar.Vector3f:

no description available

refract(wi, m, cos_theta_t, eta_ti)

Refract wi with respect to a given surface normal

Parameter wi (enoki.scalar.Vector3f):

Direction to refract

Parameter m (enoki.scalar.Vector3f):

Surface normal

Parameter cos_theta_t (float):

Cosine of the angle between the normal and the transmitted ray, as computed e.g. by fresnel.

Parameter eta_ti (float):

Relative index of refraction (transmitted / incident)

Returns → enoki.scalar.Vector3f:

no description available
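A pure-Python sketch of the local-frame overload, wired to the cos_theta_t/eta_ti outputs of fresnel as described above (a plain tuple stands in for Vector3f; the exact sign convention is assumed, not quoted from the implementation):

```python
def refract_local(wi, cos_theta_t, eta_ti):
    # Refraction in the local frame: eta_ti scales the tangential
    # components (Snell's law), and cos_theta_t, which already carries
    # the sign of the transmission hemisphere, becomes the z component.
    return (-eta_ti * wi[0], -eta_ti * wi[1], cos_theta_t)
```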

mitsuba.render.register_bsdf(arg0, arg1)
Parameter arg0 (str):

no description available

Parameter arg1 (Callable[[mitsuba.core.Properties], object]):

no description available

Returns → None:

no description available
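All of the register_* functions below share one shape: they bind a plugin name to a factory that maps a mitsuba.core.Properties object to a plugin instance, which the scene loader then invokes when it encounters that plugin type. The stand-alone sketch below illustrates only the pattern; it is not Mitsuba's internal registry, and the dict-based Properties stand-in is hypothetical:

```python
_registry = {}

def register_bsdf(name, constructor):
    # Associate a plugin name with a factory taking a Properties-like object
    _registry[name] = constructor

def instantiate(name, props):
    # What a loader conceptually does on encountering <bsdf type="name">
    return _registry[name](props)
```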

mitsuba.render.register_emitter(arg0, arg1)
Parameter arg0 (str):

no description available

Parameter arg1 (Callable[[mitsuba.core.Properties], object]):

no description available

Returns → None:

no description available

mitsuba.render.register_integrator(arg0, arg1)
Parameter arg0 (str):

no description available

Parameter arg1 (Callable[[mitsuba.core.Properties], object]):

no description available

Returns → None:

no description available

mitsuba.render.register_medium(arg0, arg1)
Parameter arg0 (str):

no description available

Parameter arg1 (Callable[[mitsuba.core.Properties], object]):

no description available

Returns → None:

no description available

mitsuba.render.register_phasefunction(arg0, arg1)
Parameter arg0 (str):

no description available

Parameter arg1 (Callable[[mitsuba.core.Properties], object]):

no description available

Returns → None:

no description available

mitsuba.render.register_sensor(arg0, arg1)
Parameter arg0 (str):

no description available

Parameter arg1 (Callable[[mitsuba.core.Properties], object]):

no description available

Returns → None:

no description available

mitsuba.render.srgb_model_eval(arg0, arg1)
Parameter arg0 (enoki.scalar.Vector3f):

no description available

Parameter arg1 (enoki.scalar.Vector0f):

no description available

Returns → enoki.scalar.Vector3f:

no description available

mitsuba.render.srgb_model_fetch(arg0)

Look up the model coefficients for an sRGB color value

Parameter arg0 (enoki.scalar.Vector3f):

An sRGB color value where all components are in [0, 1]

Returns → enoki.scalar.Vector3f:

Coefficients for use with srgb_model_eval

mitsuba.render.srgb_model_mean(arg0)
Parameter arg0 (enoki.scalar.Vector3f):

no description available

Returns → float:

no description available