Exploring the limits of real-time rendering





Intro//

My name is Rens,

I am a 3D artist, environment artist and technical art director.
I have worked in 3D for 14 years, 8 of them professionally.
Over the years I have worked with various studios, including DICE, Epic Games, Sony and Starbreeze.

I have been part of several projects, including:

Battlefield 3
Battlefield 3 Back to Karkand
Battlefield 3 Close Quarters
Battlefield 3 Aftermath
Battlefield 3 End Game

Battlefield 4
Battlefield 4 China Rising
Battlefield 4 Naval Strike

Battlefield Hardline

Paragon


I am currently working on closing the gap between offline, pre-rendered cinematics
and real-time graphics, exploring what real-time rendering makes possible.








Creative workflow//

Nature has always been important to me. Growing up I spent a lot of time outside or watching National Geographic and Discovery Channel.
I have done a lot with photography, capturing plants, insects and small animals. Every time I opened the pictures on the computer I would see these amazing structures and details.
Having the chance to combine that with my 3D work, striving to create images of the same quality through a digital platform, is a real pleasure.

It makes you appreciate and see the environment around you in a different way. When going for a walk I no longer see just plants or trees. I look at shapes, growth patterns, placement, colors, surface properties, how the light hits them and how they move. This builds up an understanding that is necessary to create environments.




Objective//

What happens when you ignore performance, memory and technical limitations and purely focus on getting it right? Would you still produce roughly the same results? Would you be able to get closer to photorealism? Knowing that the movie industry has managed to fool us more than once with realistic images, what would it take for games to look and feel the same?

These questions have motivated me to produce work I have never done before.

I have spent more than four years iterating, reinventing workflows and learning to develop content whose quality bar and requirements I did not yet know.
Over the past year I developed plants, trees, rocks and environments with good results, and these serve as a solid foundation for delivering a first experience.

A tech demo that combines sci-fi, AI, robotics, nature and a story as a first introduction to the universe I am shaping.
A playable experience showcasing that high-end 4K graphics are the future.

According to the Steam hardware survey, less than 1% of Steam users have a 4K display, as not many games are built to take advantage of the higher resolution ( http://store.steampowered.com/hwsurvey/ ).
Current graphics cards have made a huge impact on making 4K possible, but running heavier content with dynamic lighting is still very demanding.

With a new generation of graphics cards on the horizon, gaming can be brought to a new level.


Technical//
The levels I build come together in many steps; here are a few key components.

All content shown is created by me. I do not buy or use content from other sources or artists.
I take great pride in learning about each step of development and pushing its limits.
Having the quality bar defined by external content would interfere with that process.
On many occasions I deleted my work and started over just to upgrade a single step in the process,
knowing that by doing so I could push the end result a little further.

To run my content and build environments I use Unreal Engine.
It is an incredibly powerful engine to create games and visuals with.

The lighting in my work is dynamic, as opposed to pre-calculated and baked down to light maps or textures.
Though it does not reach the same quality or accuracy that pre-calculated lighting can, dynamic lighting has a lot of advantages.
When the level changes during production, it updates and looks the same as the final product without requiring a new light bake.
With baked lighting, calculation times go up the more content is in a level, and so does texture memory.
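
To give an idea of what fully dynamic lighting means in Unreal Engine C++, here is a rough sketch (ADynamicSunActor is a hypothetical example class, not code from this project): a light with Movable mobility is never baked into lightmaps, so the level can change freely without a new light build.

    // DynamicSunActor.h (sketch). A Movable directional light is fully
    // dynamic: no lightmaps, no light build, shadows rendered at runtime.
    #include "GameFramework/Actor.h"
    #include "Components/DirectionalLightComponent.h"
    #include "DynamicSunActor.generated.h"

    UCLASS()
    class ADynamicSunActor : public AActor
    {
        GENERATED_BODY()

    public:
        ADynamicSunActor()
        {
            Sun = CreateDefaultSubobject<UDirectionalLightComponent>(TEXT("Sun"));
            RootComponent = Sun;

            // Movable mobility keeps lighting live while editing;
            // the trade-off is runtime shadow and lighting cost.
            Sun->SetMobility(EComponentMobility::Movable);
            Sun->SetCastShadows(true);
        }

    private:
        UPROPERTY()
        UDirectionalLightComponent* Sun;
    };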

For global illumination I am using NVIDIA's VXGI solution.

To build life-like plants I had to capture leaves separately.
I had to repeat the process multiple times in order to reach a decent quality level.
Many iterations have come from updating the capture process to get better results.
To get the color information, subsurface, normal and height textures, I photograph a leaf about 10 times under different lighting conditions.
I built my own light rig for capturing and use Photoshop and Allegorithmic Substance Designer to process the photos.
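
To give an idea of the math involved in this kind of multi-light capture, here is a simplified photometric-stereo sketch in C++; the solver and constants are illustrative, not my production pipeline.

    // Simplified per-pixel photometric stereo (illustrative sketch).
    // With N photos under N known light directions, each pixel gives
    // I_i = albedo * dot(L_i, n). Least squares recovers g = albedo * n,
    // then albedo = |g| and the normal n = g / |g|.
    #include <cmath>
    #include <vector>

    struct Vec3 { double x = 0, y = 0, z = 0; };

    static double Det3(const double M[3][3])
    {
        return M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
             - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
             + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]);
    }

    // lights: unit direction toward each light, one per photo.
    // intensities: the same pixel's brightness in each photo.
    Vec3 SolvePixel(const std::vector<Vec3>& lights,
                    const std::vector<double>& intensities)
    {
        // Build the normal equations (L^T L) g = L^T I.
        double A[3][3] = {}, b[3] = {};
        for (size_t i = 0; i < lights.size(); ++i) {
            const double l[3] = { lights[i].x, lights[i].y, lights[i].z };
            for (int r = 0; r < 3; ++r) {
                b[r] += l[r] * intensities[i];
                for (int c = 0; c < 3; ++c) A[r][c] += l[r] * l[c];
            }
        }
        // Solve the 3x3 system with Cramer's rule.
        const double d = Det3(A);
        Vec3 g;
        if (std::fabs(d) < 1e-12) return g; // degenerate light placement
        double M[3][3];
        double* out[3] = { &g.x, &g.y, &g.z };
        for (int col = 0; col < 3; ++col) {
            for (int r = 0; r < 3; ++r)
                for (int c = 0; c < 3; ++c)
                    M[r][c] = (c == col) ? b[r] : A[r][c];
            *out[col] = Det3(M) / d;
        }
        return g;
    }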

Every leaf that you see has been picked up, transported, captured, processed into a texture, cut out in 3D, and modeled and shaped by hand.
To reconstruct the plants and put them back together, I use Exlevel GrowFX.
This allows me to shape each plant in detail, recreating its growth patterns and placing leaves precisely where I need them to be.

For ground and terrain I no longer use just a texture. Instead I use PhysX simulation.
By having several hundred leaves fall to the ground under gravity, they collide with each other and find a natural placement.
As each leaf falls on top of the next, it creates a layered surface with true depth.
This technique does not only apply to terrain; I also use it to layer leaves on top of rocks and structures for extra detail and variation.
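
A small sketch of that scatter step in Unreal Engine C++ (ScatterLeaves is a hypothetical helper, not my actual tool; the engine's PhysX integration does the real simulation): spawn leaf meshes above the surface with physics enabled, let gravity and collision settle them, then lock the result.

    #include "Engine/World.h"
    #include "Engine/StaticMesh.h"
    #include "Engine/StaticMeshActor.h"
    #include "Math/UnrealMathUtility.h"

    void ScatterLeaves(UWorld* World, UStaticMesh* LeafMesh,
                       const FVector& Center, float Radius, int32 Count)
    {
        for (int32 i = 0; i < Count; ++i)
        {
            // Random position and orientation inside a disc above the ground.
            const FVector Offset(FMath::FRandRange(-Radius, Radius),
                                 FMath::FRandRange(-Radius, Radius),
                                 FMath::FRandRange(50.f, 300.f));
            const FRotator Spin(FMath::FRandRange(0.f, 360.f),
                                FMath::FRandRange(0.f, 360.f),
                                FMath::FRandRange(0.f, 360.f));

            AStaticMeshActor* Leaf = World->SpawnActor<AStaticMeshActor>(
                Center + Offset, Spin);
            if (!Leaf) continue;

            Leaf->SetMobility(EComponentMobility::Movable);
            UStaticMeshComponent* Mesh = Leaf->GetStaticMeshComponent();
            Mesh->SetStaticMesh(LeafMesh);

            // Gravity pulls each leaf down; collision lets leaves stack on
            // each other and on rocks, giving natural layered placement.
            Mesh->SetSimulatePhysics(true);
        }
        // Once everything has settled, physics can be disabled (or the
        // transforms exported) to lock the layout in place.
    }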

Photogrammetry is an amazing tool and plays a key part in my work. The ability to translate real-life objects into 3D is an art in itself.

Adding realistic objects changed how I build environments: everything now has to meet a certain level of quality and density.

My material setup is rather simple. It uses a diffuse texture and reuses it for its specular and roughness values.
I have seen people go to extremes to get the most accurate PBR values, but not all content allows for a proper specular or roughness capture.
Instead of filtering and processing the diffuse in Photoshop or Substance, I do so in the material itself.
This saves the memory of two extra textures or channels, which can then be re-purposed, and allows for real-time adjustments.
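
As a sketch of what deriving those values can look like, expressed in plain C++ standing in for the material graph (the weights and constants here are illustrative, not the actual material): specular and roughness come from the diffuse sample's luminance, with bias and scale as live-tweakable knobs.

    #include <algorithm>

    struct RGB { float r, g, b; };

    // Perceptual luminance of the diffuse sample (Rec. 709 weights).
    float Luminance(const RGB& diffuse)
    {
        return 0.2126f * diffuse.r + 0.7152f * diffuse.g + 0.0722f * diffuse.b;
    }

    // Darker, dirtier areas read as rougher; brighter areas as smoother.
    // Bias and scale are the "filtering" knobs, adjustable in real time.
    float RoughnessFromDiffuse(const RGB& diffuse, float bias, float scale)
    {
        return std::clamp(1.0f - Luminance(diffuse) * scale + bias, 0.0f, 1.0f);
    }

    float SpecularFromDiffuse(const RGB& diffuse, float intensity)
    {
        return std::clamp(Luminance(diffuse) * intensity, 0.0f, 1.0f);
    }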








Support//

A big thank you to everyone who has supported me and is making this work possible.

Epic Games (financial, recipient of a Dev Grant)
AMD (hardware)
Valve, SteamVR (developer program, hardware)
NVIDIA (hardware)
Allegorithmic (software)
Exlevel GrowFX (software)
SpeedTree (software)




Funding//

For those interested in supporting me financially:
paypal.me/artbyrens












My setup:


Dual GTX 1080 Ti in SLI
(using a single GPU for real-time work)

Intel Core i7 5960X
Asus X99-E WS
Corsair 64GB DDR4 2400MHz

Acer 28" Predator XB281HK G-Sync 4K