Martin Knuth, Jan Bender, Michael Goesele and Arjan Kuijper, Deferred Warping, IEEE Computer Graphics and Applications, 2016

 

PDF BibTex

 


Abstract

We introduce deferred warping, a novel approach for real-time deformation of 3D objects attached to an animated or manipulated surface. Our target application is virtual prototyping of garments, where 2D pattern modeling is combined with 3D garment simulation, allowing an immediate validation of the design. The technique works in two steps: first, the surface deformation of the target object is determined and the resulting transformation field is stored as a matrix texture. Then the matrix texture is used as a look-up table to transform a given geometry onto the deformed surface. Splitting the process into two steps yields great flexibility, since different attachment types can be realized simply by defining specific mapping functions. Our technique can directly handle complex topology changes within the surface. We demonstrate a fast implementation in the vertex shading stage, allowing the use of highly decorated surfaces with millions of triangles in real-time.
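
A minimal CPU-side sketch of the two-step idea (in C++) is given below: a matrix texture produced by the first step, and a per-vertex lookup that warps attached geometry in the second step. In the paper this lookup runs in the vertex shading stage; the names (MatrixTexture, warpVertex), the nearest-neighbour fetch and the affine texel format are illustrative assumptions, not the actual implementation.

#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// One texel of the "matrix texture": a 3x3 part (row-major) plus a translation.
struct Affine {
    float r[9];
    Vec3 t;
    Vec3 apply(const Vec3& p) const {
        return { r[0]*p.x + r[1]*p.y + r[2]*p.z + t.x,
                 r[3]*p.x + r[4]*p.y + r[5]*p.z + t.y,
                 r[6]*p.x + r[7]*p.y + r[8]*p.z + t.z };
    }
};

// Step 1 output: the transformation field of the deformed surface, stored per texel.
struct MatrixTexture {
    int w, h;
    std::vector<Affine> texels;                     // w*h entries
    const Affine& fetch(float u, float v) const {   // nearest-neighbour lookup
        int x = std::min(w - 1, std::max(0, int(u * w)));
        int y = std::min(h - 1, std::max(0, int(v * h)));
        return texels[y * w + x];
    }
};

// Step 2: map a vertex of the attached geometry onto the deformed surface by
// looking up its (u, v) attachment coordinates in the matrix texture.
Vec3 warpVertex(const MatrixTexture& tex, const Vec3& restPos, float u, float v) {
    return tex.fetch(u, v).apply(restPos);
}

int main() {
    MatrixTexture tex;
    tex.w = 1; tex.h = 1;
    // identity rotation plus a translation of (0, 1, 0)
    tex.texels.push_back(Affine{ {1,0,0, 0,1,0, 0,0,1}, {0.f, 1.f, 0.f} });
    Vec3 p = warpVertex(tex, {0.5f, 0.f, 0.f}, 0.5f, 0.5f);
    std::printf("warped vertex: %f %f %f\n", p.x, p.y, p.z);
    return 0;
}

In this reading, the different attachment types mentioned above would correspond to different mapping functions from vertex attributes to the (u, v) lookup coordinates.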


Video

 



Martin Knuth, Christian Altenhofen, Arjan Kuijper and Jan Bender, Efficient Self-Shadowing Using Image-Based Lighting on Glossy Surfaces, In Proceedings of Vision, Modeling and Visualization, 2014

PDF BibTex

 


Abstract

In this paper we present a novel natural illumination approach for real-time rasterization-based rendering with environment-map-based high dynamic range lighting. Our approach supports all kinds of glossiness values for surfaces, ranging continuously from completely diffuse up to mirror-like glossiness. This is achieved by combining cosine-based diffuse, glossy and mirror reflection models in a single lighting model. We approximate this model by filter functions, which are applied to the environment map. This results in a fast, image-based lookup for the different glossiness values, which gives our technique the high performance necessary for real-time rendering. In contrast to existing real-time rasterization-based natural illumination techniques, our method can handle high-gloss surfaces with directional self-occlusion. While previous works replace the environment map by virtual point light sources in the whole lighting and shadow computation, we keep the full image information of the environment map in the lighting process and use virtual point light sources only for the shadow computation. Our technique was developed for use in real-time virtual prototyping systems for garments, since there a small scene is typically lit by a large environment, which fulfills the requirements for image-based lighting. In this application area, high-performance rendering techniques for dynamic scenes are essential, since a physical simulation is usually running in parallel on the same machine. However, other applications can benefit from our approach as well.
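
As a rough illustration of the filtering idea only (not the paper's filter functions, parameterization or shadow handling), the C++ sketch below applies a cosine-power lobe to a small set of directional environment-map samples; a low exponent approximates a diffuse lookup while a high exponent approaches a mirror lookup. All names and constants are assumptions.

#include <cmath>
#include <cstdio>
#include <vector>

struct Dir { float x, y, z; };                        // unit direction
struct EnvSample { Dir d; float radiance[3]; };       // one environment-map texel

float dot(const Dir& a, const Dir& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Filter the environment map around the reflection direction R with a
// cosine-power lobe of exponent n: a small n behaves like a diffuse lookup,
// a large n approaches a mirror reflection. Shadows are not handled here.
void filteredLookup(const std::vector<EnvSample>& env, const Dir& R, float n, float out[3]) {
    float wsum = 0.f;
    out[0] = out[1] = out[2] = 0.f;
    for (const EnvSample& s : env) {
        float c = dot(R, s.d);
        if (c <= 0.f) continue;                       // outside the lobe's hemisphere
        float w = std::pow(c, n);                     // lobe weight
        for (int k = 0; k < 3; ++k) out[k] += w * s.radiance[k];
        wsum += w;
    }
    if (wsum > 0.f)
        for (int k = 0; k < 3; ++k) out[k] /= wsum;   // normalize the filter
}

int main() {
    std::vector<EnvSample> env = {
        { {0.f, 1.f, 0.f}, {1.f, 1.f, 1.f} },         // bright sky from above
        { {1.f, 0.f, 0.f}, {0.2f, 0.1f, 0.1f} },      // dim reddish side light
    };
    float rgb[3];
    filteredLookup(env, /*R=*/{0.f, 1.f, 0.f}, /*exponent=*/32.f, rgb);
    std::printf("filtered radiance: %f %f %f\n", rgb[0], rgb[1], rgb[2]);
    return 0;
}

In a real-time setting such filtering would of course be precomputed into prefiltered environment maps for a set of glossiness values rather than evaluated per lookup.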


Video

 


Images

Cloth
Dragon
Dragon
Teapot

Manuel Scholz, Jan Bender and Carsten Dachsbacher, Real-Time Isosurface Extraction with View-Dependent Level of Detail and Applications, Computer Graphics Forum 34, 1, 2015

 

Preprint BibTex


Abstract

Volumetric scalar datasets are common in many scientific, engineering, and medical applications, where they originate from measurements or simulations. Furthermore, they can represent geometric scene content, e.g. as distance or density fields. Often, isosurfaces are extracted, either for indirect volume visualization in the former case or simply to obtain a polygonal representation in the latter. However, even moderately sized volume datasets can result in complex isosurfaces that are challenging to recompute in real-time, e.g. when the user modifies the isovalue or when the data itself is dynamic. In this paper, we present a GPU-friendly algorithm for the extraction of isosurfaces which provides adaptive level-of-detail rendering with view-dependent tessellation. It is based on a longest-edge-bisection scheme in which the resulting tetrahedral cells are subdivided into four hexahedra, which then form the domain for the subsequent isosurface extraction step. Our algorithm generates meshes with good triangle quality even for highly nonlinear scalar data. In contrast to previous methods, it does not require any stitching between regions of different levels of detail. As all computation is performed at run-time and no preprocessing is required, the algorithm naturally supports dynamic data and allows us to change isovalues at any time.
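
Below is a minimal CPU-side C++ sketch of the view-dependent longest-edge-bisection refinement that drives the level of detail; the split of each tetrahedron into four hexahedra, the crack-free transitions and the actual isosurface extraction are omitted. The error metric, thresholds and names are illustrative assumptions.

#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 mid(Vec3 a, Vec3 b) { return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f }; }
static float len(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

using Tet = std::array<Vec3, 4>;

// Recursively bisect a tetrahedron at the midpoint of its longest edge while
// that edge is still large relative to its distance to the viewer (a simple
// screen-space-error proxy). Leaf cells are collected in `out`.
void refine(const Tet& t, const Vec3& eye, float tau, int depth, std::vector<Tet>& out) {
    int bi = 0, bj = 1;
    float longest = 0.f;
    for (int i = 0; i < 4; ++i)
        for (int j = i + 1; j < 4; ++j) {
            float l = len(sub(t[i], t[j]));
            if (l > longest) { longest = l; bi = i; bj = j; }
        }
    Vec3 m = mid(t[bi], t[bj]);
    float dist = len(sub(m, eye));
    if (depth == 0 || longest / (dist + 1e-6f) < tau) { out.push_back(t); return; }
    Tet a = t, b = t;          // split into two children sharing the edge midpoint
    a[bj] = m;
    b[bi] = m;
    refine(a, eye, tau, depth - 1, out);
    refine(b, eye, tau, depth - 1, out);
}

int main() {
    Tet root = {{ {0,0,0}, {1,0,0}, {0,1,0}, {0,0,1} }};
    std::vector<Tet> cells;
    refine(root, /*eye=*/{0.2f, 0.2f, 2.0f}, /*tau=*/0.25f, /*maxDepth=*/10, cells);
    std::printf("refined into %zu tetrahedra\n", cells.size());
    return 0;
}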


Images

teaser

Manuel Scholz, Jan Bender and Carsten Dachsbacher, Level of Detail for Real-Time Volumetric Terrain Rendering, In Proceedings of Vision, Modeling and Visualization, 2013, Best paper award

 

PDF BibTex


Abstract

Terrain rendering is an important component of many GIS applications and simulators. Most methods rely on heightmap-based terrain, which is simple to acquire and handle but has limited capabilities for modeling features like caves, steep cliffs, or overhangs. In contrast, volumetric terrain models, e.g. based on isosurfaces, can represent arbitrary topology. In this paper, we present a fast, practical and GPU-friendly level-of-detail algorithm for large-scale volumetric terrain that is specifically designed for real-time rendering applications. Our algorithm is based on a longest edge bisection (LEB) scheme. The resulting tetrahedral cells are subdivided into four hexahedra, which form the domain for a subsequent isosurface extraction step. The algorithm can be used with arbitrary volumetric models such as signed distance fields, which can be generated from triangle meshes or discrete volume datasets. In contrast to previous methods, our algorithm does not require any stitching between detail levels and generates crack-free surfaces with good triangle quality. Furthermore, we efficiently extract the geometry at runtime and require no preprocessing, which allows us to render infinite procedural content with low memory consumption.
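
Complementing the refinement sketch above, the following C++ snippet illustrates the kind of volumetric model such an algorithm can consume: a procedural scalar density field sampled purely at run-time, in which a 3D noise term lets the extracted isosurface form overhangs and caves that a plain heightmap cannot represent. The noise function and all constants are illustrative assumptions, not the paper's terrain data.

#include <cmath>
#include <cstdio>

// Cheap deterministic value noise on a 3D lattice (integer hash + trilinear blend).
static float hash3(int x, int y, int z) {
    unsigned h = unsigned(x) * 374761393u + unsigned(y) * 668265263u + unsigned(z) * 974634533u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h & 0xffffffu) / float(0xffffff) * 2.f - 1.f;   // in [-1, 1]
}
static float lerp(float a, float b, float t) { return a + (b - a) * t; }
static float valueNoise(float x, float y, float z) {
    int xi = int(std::floor(x)), yi = int(std::floor(y)), zi = int(std::floor(z));
    float tx = x - xi, ty = y - yi, tz = z - zi;
    float c[2][2][2];
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                c[i][j][k] = hash3(xi + i, yi + j, zi + k);
    float x00 = lerp(c[0][0][0], c[1][0][0], tx), x10 = lerp(c[0][1][0], c[1][1][0], tx);
    float x01 = lerp(c[0][0][1], c[1][0][1], tx), x11 = lerp(c[0][1][1], c[1][1][1], tx);
    float y0 = lerp(x00, x10, ty), y1 = lerp(x01, x11, ty);
    return lerp(y0, y1, tz);
}

// Terrain density: negative inside the ground, the zero isosurface is the terrain.
// The 3D noise term adds volumetric detail that can produce overhangs and caves.
float terrainDensity(float x, float y, float z) {
    float ground = y - 8.f * valueNoise(x * 0.05f, 0.f, z * 0.05f);    // base heightfield
    return ground + 3.f * valueNoise(x * 0.15f, y * 0.15f, z * 0.15f); // volumetric detail
}

int main() {
    std::printf("density at (10, 2, 5) = %f\n", terrainDensity(10.f, 2.f, 5.f));
    return 0;
}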


Video

 


Images

Terrain1
Terrain2
Terrain3
Terrain4

Fabian Bauer, Martin Knuth and Jan Bender, Screen-Space Ambient Occlusion Using A-buffer Techniques, In IEEE Computer-Aided Design and Computer Graphics, 2013

 

PDF BibTex


Abstract

Computing ambient occlusion in screen space (SSAO) is a common technique in real-time rendering applications which use rasterization to process 3D triangle data. However, one of the most critical problems emerging in screen space is the lack of information about occluded geometry, which does not pass the depth test and is therefore not present in the G-buffer. These occluded fragments may have an impact on the proximity-based shadowing outcome of the ambient occlusion pass. This not only decreases image quality but also prevents the application of SSAO on multiple layers of transparent surfaces, where the shadow contribution depends on opacity. We propose a novel approach to the SSAO concept that takes advantage of per-pixel fragment lists to store multiple geometric layers of the scene in the G-buffer, thus allowing order-independent transparency (OIT) in combination with high-quality, opacity-based ambient occlusion (OITAO). This A-buffer concept is also used to enhance overall ambient occlusion quality by providing stable results for low-frequency details in dynamic scenes. Furthermore, a flexible compression-based optimization strategy is introduced to improve performance while maintaining high-quality results.
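
The strongly simplified CPU-side C++ sketch below illustrates how an opacity-aware occlusion term can be accumulated from per-pixel fragment lists instead of a single G-buffer depth. The sampling kernel, data layout and constants are illustrative assumptions; the paper's GPU A-buffer implementation and compression strategy are not reproduced.

#include <cstdio>
#include <vector>

struct Fragment { float depth; float alpha; };          // one geometric layer of a pixel
using PixelList = std::vector<Fragment>;                // per-pixel fragment list
using ABuffer   = std::vector<PixelList>;               // width*height lists

// Occlusion contributed by one neighbouring pixel for a point at depth refDepth:
// every stored layer closer than refDepth (within a range limit) occludes in
// proportion to its opacity; layers accumulate like alpha blending.
float pixelOcclusion(const PixelList& neighbour, float refDepth, float range) {
    float transmittance = 1.f;
    for (const Fragment& f : neighbour)
        if (f.depth < refDepth && refDepth - f.depth < range)
            transmittance *= (1.f - f.alpha);
    return 1.f - transmittance;                          // 0 = unoccluded, 1 = fully blocked
}

// Ambient term for pixel (x, y) at depth refDepth, using a small cross-shaped kernel.
float ambientOcclusion(const ABuffer& ab, int w, int h, int x, int y,
                       float refDepth, float range) {
    const int dx[4] = {-1, 1, 0, 0}, dy[4] = {0, 0, -1, 1};
    float occ = 0.f;
    int n = 0;
    for (int i = 0; i < 4; ++i) {
        int nx = x + dx[i], ny = y + dy[i];
        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
        occ += pixelOcclusion(ab[ny * w + nx], refDepth, range);
        ++n;
    }
    return n ? 1.f - occ / n : 1.f;                      // ambient term in [0, 1]
}

int main() {
    // 3x1 "image": the centre pixel is shaded; its left neighbour holds two
    // semi-transparent layers in front of the shaded depth, the right one is empty.
    int w = 3, h = 1;
    ABuffer ab(w * h);
    ab[0] = { {0.4f, 0.5f}, {0.6f, 0.5f} };
    float ao = ambientOcclusion(ab, w, h, /*x=*/1, /*y=*/0, /*refDepth=*/0.8f, /*range=*/0.5f);
    std::printf("ambient term: %f\n", ao);
    return 0;
}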


Video

 


Jan Bender, Dieter Finkenzeller and Peter Oel, HW3D: A tool for interactive real-time 3D visualization in GIS supported flood modelling, In Proceedings of the 17th International Conference on Computer Animation & Social Agents (CASA), 2004

 

PDF BibTex


Abstract

Large numerical calculations are performed to predict the damage a possible flood would cause. The results of these simulations are used to prevent further flood catastrophes. The more realistic a visualization of these calculations is, the more precautions will be taken by the local authorities and the citizens. This paper describes a tool and techniques for obtaining a realistic-looking, three-dimensional, easy-to-use, real-time visualization despite the huge amount of data produced by the flood simulation process.


Detailed information about this project can be found here:

Real-time 3D visualization in GIS supported flood modelling

Videos

3D visualization of a flood model (DivX, Mpeg)

3D visualization of a flood model (DivX, Mpeg)

The tool

The animations were generated with the program "HW3D", which I developed during my research. The tool can visualize terrain surfaces at high resolution in real-time and also handles large textures. To achieve a realistic visualization, buildings were reconstructed using the original floor plans of the cities. Other GIS data, such as topography points, can also be visualized with the tool.

Further information can be found here:

Real-time 3D visualization in GIS supported flood modelling