Zahra Montazeri

Assistant Professor (Lecturer)

zahra.montazeri@manchester.ac.uk

Room 2.34, Kilburn Building, Oxford Rd, M13 9PL, Manchester, United Kingdom

+44 161 275 6209

Latest News

  • [July 2024] I served on the program committee for PG 2024!
  • [July 2024] We have three papers accepted to EGSR!
  • [July 2024] We organized the Women in Rendering Dinner at EGSR sponsored by WiGraph!
  • [June 2024] We gave a course lecture at the SGP 2024 Graduate School at MIT!
  • [May 2024] I joined the Metaverse Workshop at ARM, Cambridge!
  • [May 2024] I will serve as Poster Chair for Eurographics 2025!
  • [Apr 2024] I served on the program committees of EG, EGSR, and I3D 2024!
  • [Jan 2023] I served on the program committees of EG and EGSR 2023!
  • [Aug 2022] I gave a seminar talk at Summer Geometry Initiative at MIT!

Bio

Zahra is a Lecturer (Assistant Professor) in the Department of Computer Science at the University of Manchester. Her field of research is physics-based computer graphics, with a focus on photorealistic rendering and appearance modeling for complex materials such as cloth, hair, and fur. She worked as a research consultant at Disney Research and collaborates with Weta Digital on joint research projects. She received a Star Wars screen credit for The Mandalorian, and her research was used in the production of Avatar: The Way of Water. Before joining academia, she worked in Research and Development at Industrial Light & Magic (ILM). She holds a PhD in Computer Science from the University of California, Irvine (UCI), where she was advised by Shuang Zhao and also worked under the supervision of Professor Henrik Wann Jensen. During her studies, she interned at Pixar Animation Studios, DreamWorks Animation, and, for two years, Luxion (makers of KeyShot). She received her M.Sc. in Computer Science from UCI in 2017 and her B.Sc. in Computer Engineering from Sharif University of Technology, Iran, in 2015.


Research

I am honored to work with these exceptional PhD students:

Current:

Alumni:


Publications

A Dynamic By-example BTF Synthesis Scheme
Zilin Xu, Zahra Montazeri, Beibei Wang, Ling-Qi Yan
ACM SIGGRAPH Asia 2024 (conference track full paper), Dec 2024 [Paper]
Measured Bidirectional Texture Functions (BTFs) can faithfully reproduce realistic appearance, but they are costly to acquire and store due to their 6D nature (2D spatial and 4D angular). It is therefore practical and necessary for rendering to synthesize BTFs from a small example patch. While previous methods produce plausible results, they seldom consider the property of being dynamic: a BTF must be synthesized before the rendering process, resulting in limited size, costly pre-generation, and storage issues. In this paper, we propose a dynamic BTF synthesis scheme in which a BTF at any position is synthesized only when it is queried. Our insight is that, with recent advances in neural dimension reduction methods, a BTF can be decomposed into disjoint low-dimensional components. We can perform dynamic synthesis only on the positional dimensions and, during rendering, recover the BTF by querying and combining these low-dimensional functions with the help of a lightweight Multilayer Perceptron (MLP). Consequently, we obtain a fully dynamic 6D BTF synthesis scheme that does not require any pre-generation, enabling efficient rendering of our infinitely large and non-repetitive BTFs on the fly. We demonstrate the effectiveness of our method on various types of BTFs taken from UBO2014.
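
As a rough illustration of the decomposition idea (not the authors' implementation), the sketch below recovers a BTF value by combining a per-position latent, which could be synthesized on demand, with the 4D angular query through a lightweight MLP. TinyBTFDecoder, synthesize_latent, and all dimensions are hypothetical placeholders.

    import torch
    import torch.nn as nn

    # Hypothetical sketch: a BTF value at (uv, wi, wo) is recovered by combining a
    # low-dimensional positional latent (synthesized on demand) with the angular
    # query through a lightweight MLP decoder.
    class TinyBTFDecoder(nn.Module):
        def __init__(self, latent_dim=8, angular_dim=4, hidden=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(latent_dim + angular_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),  # RGB reflectance
            )

        def forward(self, positional_latent, angular_query):
            return self.mlp(torch.cat([positional_latent, angular_query], dim=-1))

    def synthesize_latent(uv):
        # Placeholder for by-example synthesis on the positional dimensions only;
        # here we simply hash uv into a pseudo-random latent for illustration.
        g = torch.Generator().manual_seed(int(1e6 * (uv[0] + 7.0 * uv[1])))
        return torch.randn(8, generator=g)

    decoder = TinyBTFDecoder()
    uv = (0.31, 0.62)                            # shading point, possibly outside the exemplar
    wi_wo = torch.tensor([0.1, 0.8, -0.2, 0.5])  # packed 4D angular query (wi, wo)
    rgb = decoder(synthesize_latent(uv), wi_wo)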


A Multi-scale Yarn Appearance Model with Fiber Details
Apoorv Khattar, Junqiu Zhu, Jean-Marie Aubry, Emiliano Padovani, Marc Droske, Ling-Qi Yan, Zahra Montazeri
Computational Visual Media, 2024 [Paper] [Video] [Slides]
Rendering realistic cloth has always been a challenge due to its intricate structure. Cloth is made up of fibers, plies, and yarns, and previous curve-based models, while detailed, were computationally expensive and inflexible for large cloth. To address this, we propose a simplified approach. We introduce a geometric aggregation technique that reduces ray-tracing computation by using fewer curves, focusing only on yarn curves. Our model generates ply and fiber shapes implicitly, compensating for the lack of explicit geometry with a novel shadowing component. We also present a shading model that simplifies light interactions among fibers by categorizing them into four components, accurately capturing specular and scattered light in both forward and backward directions. To render large cloth efficiently, we propose a multi-scale solution based on pixel coverage. Our yarn shading model achieves 3-5 times faster rendering speed with less memory in near-field views compared to fiber-based models. Additionally, our multi-scale solution offers a 20% speed boost for distant cloth observation.
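
A minimal sketch of the pixel-coverage idea behind the multi-scale solution: the representation is chosen from the projected width of a yarn on screen. The function name and thresholds below are illustrative, not the paper's values.

    def choose_cloth_lod(yarn_radius_world, distance, focal_px, near_threshold_px=2.0):
        """Pick a cloth representation from the projected yarn width in pixels.
        Thresholds here are illustrative placeholders."""
        projected_px = 2.0 * yarn_radius_world * focal_px / distance  # pinhole projection
        if projected_px >= near_threshold_px:
            return "yarn-level model (explicit yarn curves, implicit ply/fiber shading)"
        return "far-field aggregate (surface-like shading for distant cloth)"

    print(choose_cloth_lod(yarn_radius_world=0.0005, distance=0.3, focal_px=1500))
    print(choose_cloth_lod(yarn_radius_world=0.0005, distance=5.0, focal_px=1500))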


A Hierarchical Architecture for Neural Materials
Bowen Xue, Shuang Zhao, Henrik Wann Jensen, Zahra Montazeri
Computer Graphics Forum, 2024 [Paper] [Video]
Neural reflectance models are capable of reproducing the spatially-varying appearance of many real-world materials at different scales. Unfortunately, existing techniques such as NeuMIP have difficulties handling materials with strong shadowing effects or detailed specular highlights. In this paper, we introduce a neural appearance model that offers a new level of accuracy. Central to our model is an inception-based core network structure that captures material appearances at multiple scales using parallel-operating kernels and ensures multi-stage features through specialized convolution layers. Furthermore, we encode the inputs into frequency space, introduce a gradient-based loss, and apply it adaptively according to the progress of the learning phase. We demonstrate the effectiveness of our method using a variety of synthetic and real examples.
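
The frequency-space input encoding mentioned above can be illustrated with a generic sine/cosine band encoding, as commonly used for coordinate networks; the exact encoding, band count, and the gradient-based loss in the paper may differ, so treat this as an assumption-laden sketch.

    import torch

    def frequency_encode(x, num_bands=6):
        """Map inputs (e.g., uv coordinates or light/view directions) into frequency
        space with sine/cosine bands. Generic sketch, not the paper's exact encoding."""
        bands = 2.0 ** torch.arange(num_bands)             # 1, 2, 4, ...
        xb = x[..., None] * bands                          # (..., D, num_bands)
        enc = torch.cat([torch.sin(xb), torch.cos(xb)], dim=-1)
        return enc.flatten(start_dim=-2)                   # (..., D * 2 * num_bands)

    uv = torch.tensor([[0.25, 0.75]])
    print(frequency_encode(uv).shape)   # torch.Size([1, 24])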


Neural Appearance Model for Cloth Rendering
Guan Yu Soh, Zahra Montazeri
Computer Graphics Forum (Proceedings of EGSR 2024) [Paper] [Video]
The realistic rendering of woven and knitted fabrics has posed significant challenges for many years. Previously, fiber-based micro-appearance models have achieved considerable success in attaining high levels of realism. However, rendering such models remains complex due to the intricate internal scattering among hundreds of fibers within a yarn, requiring vast amounts of memory and time to render. In this paper, we introduce a new framework to capture aggregated appearance by tracing many light paths through the underlying fiber geometry. We then employ lightweight neural networks to accurately model the aggregated BSDF, which allows for the precise modeling of a diverse array of materials while offering substantial improvements in speed and reductions in memory. Furthermore, we introduce a novel importance sampling scheme to further speed up the rate of convergence. We validate the efficacy and versatility of our framework through comparisons with preceding fiber-based shading models as well as the most recent yarn-based model.
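
A minimal Monte Carlo sketch of how an aggregated neural BSDF might be used at render time: the network output is weighted by the cosine term and divided by the sampling pdf. Here bsdf_net is a fake placeholder, and a cosine-weighted hemisphere sampler stands in for the paper's learned importance sampling scheme.

    import math
    import torch

    def sample_cosine_hemisphere(n):
        # Stand-in sampler: cosine-weighted directions, pdf = cos(theta) / pi.
        u1, u2 = torch.rand(n), torch.rand(n)
        r, phi = torch.sqrt(u1), 2.0 * math.pi * u2
        wo = torch.stack([r * torch.cos(phi), r * torch.sin(phi),
                          torch.sqrt(torch.clamp(1.0 - u1, min=0.0))], dim=-1)
        return wo, wo[:, 2] / math.pi

    def bsdf_net(wi, wo):
        # Fake stand-in for the learned aggregated yarn BSDF (returns an RGB value).
        return torch.sigmoid((wi * wo).sum(dim=-1, keepdim=True)).repeat(1, 3) / math.pi

    wi = torch.tensor([[0.0, 0.0, 1.0]]).repeat(1024, 1)   # fixed incident direction
    wo, pdf = sample_cosine_hemisphere(1024)
    # Monte Carlo estimate of outgoing radiance under unit constant incoming radiance.
    radiance = (bsdf_net(wi, wo) * wo[:, 2:3] / pdf[:, None]).mean(dim=0)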


ReflectanceFusion: Diffusion-based text to SVBRDF Generation
Bowen Xue, Giuseppe Claudio Guarnera, Shuang Zhao, Zahra Montazeri
Eurographics Symposium on Rendering (EGSR), 2024 [Paper] [Video]
We introduce ReflectanceFusion (Reflectance Diffusion), a new neural text-to-texture model capable of generating high-fidelity SVBRDF maps from textual descriptions. Our method leverages a tandem neural approach, consisting of two modules, to accurately model the distribution of spatially varying reflectance as described by text prompts. Initially, we employ a pre-trained Stable Diffusion 2 model to generate a latent representation that informs the overall shape of the material and serves as our backbone model. Then, our ReflectanceUNet enables fine-tuned control over the material’s physical appearance and generates SVBRDF maps. The ReflectanceUNet module is trained on an extensive dataset comprising approximately 200,000 synthetic spatially varying materials. Our generative SVBRDF diffusion model allows for the synthesis of multiple SVBRDF estimates from a single textual input, offering users the possibility to choose the output that best aligns with their requirements. We illustrate our method’s versatility by generating SVBRDF maps from a range of textual descriptions, both specific and broad. Our ReflectanceUNet model can integrate optional physical parameters, such as roughness and specularity, enhancing customization. When the backbone module is fixed, the ReflectanceUNet module refines the material, allowing direct edits to its physical attributes. Comparative evaluations demonstrate that ReflectanceFusion achieves better accuracy than existing text-to-material models, such as Text2Mat, while also providing the benefits of editable and relightable SVBRDF maps.


Learning to Rasterize Differentiably
Chenghao Wu, Hamila Mailee, Zahra Montazeri, Tobias Ritschel
Computer Graphics Forum (Proceedings of EGSR 2024) [Paper] [Project page]
Differentiable rasterization changes the standard formulation of primitive rasterization by enabling gradient flow from a pixel to its underlying triangles, using distribution functions in different stages of rendering to create a “soft” version of the original rasterizer. However, choosing the optimal softening function that ensures the best performance and convergence to a desired goal requires trial and error. Previous work has analyzed and compared several combinations of softening operations. In this work, we take it a step further and, instead of making a combinatorial choice of softening operations, parameterize the continuous space of common softening operations. We study meta-learning tunable softness functions over a set of inverse rendering tasks (2D and 3D shape, pose, and occlusion) so that they generalize to new and unseen differentiable rendering tasks with optimal softness.
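
As a toy example of the softening idea (not the parameterized family studied in the paper), the coverage of a pixel by a 2D triangle can be made differentiable by passing signed edge distances through a sigmoid with a tunable temperature tau; smaller tau recovers harder edges.

    import torch

    def soft_coverage(pixel, v0, v1, v2, tau=0.05):
        """Toy soft rasterizer: a pixel's coverage of a CCW 2D triangle is the product
        of sigmoids of signed distances to its three edges."""
        def edge(a, b):
            n = torch.stack([a[1] - b[1], b[0] - a[0]])   # inward edge normal (CCW)
            d = (pixel - a) @ n / n.norm()                # signed distance to edge line
            return torch.sigmoid(d / tau)
        return edge(v0, v1) * edge(v1, v2) * edge(v2, v0)

    v0 = torch.tensor([0.0, 0.0], requires_grad=True)
    v1 = torch.tensor([1.0, 0.0])
    v2 = torch.tensor([0.0, 1.0])
    pixel = torch.tensor([0.2, 0.2])

    cov = soft_coverage(pixel, v0, v1, v2)
    cov.backward()                     # gradients flow from the pixel back to the vertex
    print(cov.item(), v0.grad)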


Thinking on Your Feet: Enhancing Foveated Rendering in Virtual Reality During User Activity
David Petrescu, Paul A. Warren, Zahra Montazeri, Gabriel Strain, Steve Pettifer
ACM Symposium on Applied Perception (SAP) 2023 [Paper]
As prices fall, VR technology is experiencing renewed levels of consumer interest. Despite wider access, VR still requires levels of computational ability and bandwidth that often cannot be achieved with consumer-grade equipment. Foveated rendering represents one of the most promising methods for the optimization of VR content while keeping the quality of the user’s experience intact. The user’s ability to explore and move through the environment with 6DOF separates VR from traditional display technologies. In this work, we explore whether the type of movement (Active versus Implied) and attentional task type (Simple Fixations versus Fixation, Discrimination, and Counting) affect the extent to which a dynamic foveated rendering method using Variable Rate Shading (VRS) optimizes a VR scene. Using psychophysical methods, we conduct user studies and recover the Maximum Tolerated Diameter (MTD) at which users fail to notice drops in quality. We find that during self-movement, performing a task that requires more attention masks severe shading reductions and that only 31.7% of the headset’s FOV is required to be rendered at the native pixel sampling rate.


A Practical and Hierarchical Yarn-based Shading Model for Cloth
Junqiu Zhu, Zahra Montazeri, Jean-Marie Aubry, Ling-Qi Yan, Andrea Weidlich
Computer Graphics Forum (Proceedings of EGSR 2023) [Paper] [Video]
EGSR 2023 Best Paper Award and EGSR 2023 Best Visual Effects Award
Realistic cloth rendering is a longstanding challenge in computer graphics due to the intricate geometry and hierarchical structure of cloth: Fibers form plies which in turn are combined into yarns which then are woven or knitted into fabrics. Previous fiber-based models have achieved high-quality close-up rendering, but they suffer from high computational cost, which limits their practicality. In this paper, we propose a novel hierarchical model that analytically aggregates light simulation on the fiber level by building on dual-scattering theory. Based on this, we can perform an efficient simulation of ply and yarn shading. Compared to previous methods, our approach is faster and uses less memory while preserving a similar accuracy. We demonstrate both through comparison with existing fiber-based shading models. Our yarn shading model can be applied to curves or surfaces, making it highly versatile for cloth shading. This duality paired with its simplicity and flexibility makes the model particularly useful for film and games production.


Foveated Walking: Translational Ego-Movement and Foveated Rendering
David Petrescu, Paul A. Warren, Zahra Montazeri, Boris Otkhmezuri, Steve Pettifer
ACM Symposium on Applied Perception (SAP) 2023 [Paper]
The demands of creating an immersive Virtual Reality (VR) experience often exceed the raw capabilities of graphics hardware. Perceptually-driven techniques can reduce rendering costs by directing effort away from features that do not significantly impact the overall user experience while maintaining a high level of quality where it matters most. One such approach is foveated rendering, which allows for a reduction in the quality of the image in the peripheral region of the field-of-view where lower visual acuity results in users being less able to resolve fine details. 6 Degrees of Freedom tracking allows for the exploration of VR environments through different modalities, such as user-generated head or body movements. The effect of self-induced motion on rendering optimization has generally been overlooked and is not yet well understood. To explore this, we used Variable Rate Shading (VRS) to create a foveated rendering method triggered by the translational velocity of the users and studied different levels of shading Level-of-Detail (LOD)...


Velocity-Based LOD Reduction in Virtual Reality: A Psychophysical Approach
David Petrescu, Paul A. Warren, Zahra Montazeri, Steve Pettifer
Eurographics short 2023 [Paper]
Virtual Reality headsets enable users to explore the environment by performing self-induced movements. The retinal velocity produced by such motion reduces the visual system’s ability to resolve fine detail. We measured the impact of self-induced head rotations on the ability to detect quality changes of a realistic 3D model in an immersive virtual reality environment. We varied the Level of Detail (LOD) as a function of rotational head velocity with different degrees of severity. Using a psychophysical method, we asked 17 participants to identify which of the two presented intervals contained the higher quality model under two different maximum velocity conditions. After fitting psychometric functions to data relating the percentage of correct responses to the aggressiveness of LOD manipulations, we identified the threshold severity for which participants could reliably (75%) detect the lower LOD model. Participants accepted an approximately four-fold LOD reduction even in the low maximum velocity condition without a significant impact on perceived quality, suggesting that there is considerable potential for optimisation when users are moving (increased range of perceptual uncertainty). Moreover, LOD could be degraded significantly more (around 84%) in the maximum head velocity condition, suggesting these effects are indeed speed-dependent.
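
The threshold-recovery step can be sketched as fitting a two-alternative forced-choice psychometric function and reading off its 75% point; the data values and the simple logistic form below are made up for illustration and are not taken from the study.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical 2AFC data: proportion of correct interval identifications vs.
    # severity of the LOD reduction (values invented for illustration).
    severity = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
    p_correct = np.array([0.52, 0.55, 0.58, 0.66, 0.78, 0.88, 0.94, 0.97])

    def psychometric(x, mu, s):
        # 2AFC psychometric function: 50% chance floor rising toward 100%.
        return 0.5 + 0.5 / (1.0 + np.exp(-(x - mu) / s))

    (mu, s), _ = curve_fit(psychometric, severity, p_correct, p0=[0.5, 0.1])
    # The 75%-correct point of a 2AFC logistic is its midpoint, i.e. severity = mu.
    print(f"75%-correct threshold severity: {mu:.3f}")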


A Practical Ply-Based Appearance Modeling for Knitted Fabrics
Zahra Montazeri, Soren B. Gammelmark, Henrik Wann Jensen, Shuang Zhao
Eurographics Symposium on Rendering (EGSR), July 2021 [Paper] [Video]
Modeling the geometry and appearance of knitted fabrics has been challenging due to their complex geometries and interactions with light. Previous surface-based models have difficulties capturing fine-grained knit geometries; micro-appearance models, on the other hand, typically store individual cloth fibers explicitly and are expensive to generate and render. Further, neither type of model offers the flexibility to accurately capture both the reflection and the transmission of light simultaneously. In this paper, we introduce an efficient technique to generate knit models with user-specified knitting patterns. Our model stores individual knit plies, with fiber-level details depicted using normal and tangent mapping. We evaluate our generated models using a wide array of knitting patterns. Further, we qualitatively compare renderings of our models to photos of real samples.


A Practical Ply-Based Appearance Model of Woven Fabrics
Zahra Montazeri, Soren B. Gammelmark, Shuang Zhao, Henrik Wann Jensen
ACM Transactions on Graphics (SIGGRAPH Asia 2020), November 2020 [Paper] [Video] [Project page]
Simulating the appearance of woven fabrics is challenging due to the complex interplay of lighting between the constituent yarns and fibers. Conventional surface-based models lack the fidelity and detail needed for realistic close-up renderings. Micro-appearance models, on the other hand, can produce highly detailed renderings by depicting fabrics fiber-by-fiber, but become expensive when handling large pieces of clothing. Further, neither surface-based nor micro-appearance models have been shown in practice to match measurements of complex anisotropic reflection and transmission simultaneously. In this paper, we introduce a practical appearance model for woven fabrics. We model the structure of a fabric at the ply level and simulate the local appearance of the fibers making up each ply. Our model accounts for both reflection and transmission of light and is capable of matching physical measurements better than prior methods, including fiber-based techniques. Compared to existing micro-appearance models, our model is lightweight and scales to large pieces of clothing.


Mechanics-Aware Modeling of Cloth Appearance
Zahra Montazeri, Chang Xiao, Yun Fei, Changxi Zheng, Shuang Zhao
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2019 [Paper] [Video]
Micro-appearance models have brought unprecedented fidelity and details to cloth rendering. Yet, these models neglect fabric mechanics: when a piece of cloth interacts with the environment, its yarn and fiber arrangement usually changes in response to external contact and tension forces. Since subtle changes of a fabric’s microstructures can greatly affect its macroscopic appearance, mechanics-driven appearance variation of fabrics has been a phenomenon that remains to be captured. We introduce a mechanics-aware model that adapts the microstructures of cloth yarns in a physics-based manner. Our technique works on two distinct physical scales: using physics-based simulations of individual yarns, we capture the rearrangement of yarn-level structures in response to external forces. These yarn structures are further enriched to obtain appearance-driving fiber-level details. The cross-scale enrichment is made practical through a new parameter fitting algorithm for simulation, an augmented procedural yarn model coupled with a custom-design regression neural network. We train the network using a dataset generated by joint simulations at both the yarn and the fiber levels. Through several examples, we demonstrate that our model is capable of synthesizing photorealistic cloth appearance in a mechanically plausible way.