What’s New in RealityServer 5.1 Update 227

RealityServer 5.1 Update 227 has just been released and it has some great new features: a new Iray version with improvements to the AI Denoiser, an easy-to-use compositing system and a bunch of new convenience commands. This post gives an overview of the new functionality; however, the compositing features are so significant that we are currently writing a dedicated article on them, which will be out soon.

Iray Update

This release adds Iray 2017.1.4 build 296300.6298, which brings a number of updates to the hugely popular AI Denoiser.

Here is a quick comparison of the old and new denoiser as well as the same image with denoising off. The difference is subtle, however it is particularly noticeable in the texture of the timber.

Denoiser Comparison: New / Old / Off

We also had some users report a freeze on some scenes with Iray Interactive, which has been fixed in this release. A crash on the very latest Intel CPUs with AVX512 support has also been fixed. Be sure to check the neurayrelnotes.pdf file that ships with RealityServer for full details.

Compositing System

For a long time customers have been asking us for a way to keep the full photorealistic quality produced with RealityServer while reducing the resources needed to deploy applications. Many customers are already using compositing solutions to achieve this by rendering pieces of their image, re-colouring them and combining them back together. Typically this relies on alpha channels and so-called ‘clown masks’ to separate the elements. We believe we have a better way.

Actually, the capabilities needed have been in RealityServer for some time, through a feature of the Iray renderer called Light Path Expressions, or LPEs for short. In contrast to alpha channel and mask based methods, LPEs work by separating the contribution various light paths make to the final image, for example the contribution of a specular reflection from a specific object. This system is extremely powerful but also quite complicated to use, so we decided to wrap it in a much simpler API that both generates all of the components needed for compositing and performs the runtime compositing as well.

To the right you can see an example of building up several LPE components into a final image and then tinting those components. In this example the direct and indirect contributions are separated and can be tinted individually. You might also notice that the compositing in the out-of-focus regions is perfect, something that is impossible with masking based approaches.

LPE Components Being Added Together

The whole system is just three new commands: compositor_render_components, compositor_prepare_composite and compositor_composite_components. You just specify which objects and types of light transport to separate (e.g., diffuse reflection) and it will create and render all of the LPEs for you. It also supports separating the contribution from individual lights or groups of lights. This allows you to selectively recolour parts of your scene, including the indirect contributions, in a consistent way.
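To give a feel for the workflow, here is a minimal sketch of driving the three commands through RealityServer’s JSON-RPC interface. The endpoint URL and all parameter names and structures shown (scene_name, components, tints and so on) are assumptions for illustration only; check the command documentation that ships with Update 227 for the actual signatures.

    // Minimal sketch: calling the compositor commands via JSON-RPC 2.0.
    // Endpoint URL and all parameter names/structures here are assumed.
    async function call(method, params, id = 1) {
        const response = await fetch('http://localhost:8080/', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ jsonrpc: '2.0', method, params, id })
        });
        return (await response.json()).result;
    }

    async function recolourScene() {
        // Render the separated LPE components for the parts we want to tint.
        await call('compositor_render_components', {
            scene_name: 'my_scene',                       // assumed parameter
            components: [                                 // assumed structure
                { object: 'phone_case', transport: 'diffuse_reflection' },
                { object: 'phone_case', transport: 'specular_reflection' }
            ]
        });
        // Prepare the rendered components for runtime compositing.
        await call('compositor_prepare_composite', { scene_name: 'my_scene' });
        // Composite them back together with a per-component tint.
        return call('compositor_composite_components', {
            scene_name: 'my_scene',
            tints: { phone_case: { r: 0.8, g: 0.1, b: 0.1 } } // assumed structure
        });
    }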

When compositing the results together, each component is simply multiplied by its tinting colour and added to the next. There are no masks and no alpha channels, so there are none of the fringing or aliasing problems you see with traditional solutions, and unlike those solutions you can also recolour the indirect illumination. It’s a real game changer; while the functionality has been available for a long time, no one was using it, so we decided to make it more accessible with a new API.
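The per-pixel arithmetic is simple enough to show directly. The sketch below (plain JavaScript, for illustration only; the shipped implementation runs on the GPU) multiplies each HDR component by its tint and accumulates the result:

    // Additive tint compositing: out = sum(component_i * tint_i) per pixel.
    // Each component is { pixels: Float32Array of interleaved RGB, tint: [r, g, b] }.
    function composite(components, pixelCount) {
        const out = new Float32Array(pixelCount * 3);
        for (const { pixels, tint } of components) {
            for (let i = 0; i < pixelCount; i++) {
                out[i * 3]     += pixels[i * 3]     * tint[0];
                out[i * 3 + 1] += pixels[i * 3 + 1] * tint[1];
                out[i * 3 + 2] += pixels[i * 3 + 2] * tint[2];
            }
        }
        return out;
    }

Because no masks are involved, partially covered and out-of-focus pixels composite exactly as the renderer produced them.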

We wanted to take it a step further though. A lot of customers also asked us about not just tinting with a colour, but modulating that tint with an image, so they could change textures as well. For example, let’s say you have a personalisable product, such as a phone case where you want the user to be able to provide their own image and you want to visualise that. Previously there was no way around live rendering that image. With our new compositing solution you can now also re-texture the components.
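Conceptually, re-texturing just swaps the flat tint for a per-pixel sample of the user’s image. A sketch of that variant, under the same illustrative assumptions as above:

    // Re-texturing: modulate a component by a user-supplied image (plus an
    // optional flat tint) instead of a flat tint alone. Illustrative only;
    // the shipped commands do this for you on the GPU.
    function tintAndTexture(component, texture, tint) {
        const out = new Float32Array(component.length);
        for (let i = 0; i < component.length; i++) {
            out[i] = component[i] * texture[i] * tint[i % 3]; // interleaved RGB
        }
        return out;
    }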

At runtime the compositing is all done on RealityServer and is GPU accelerated, but unlike live rendering it needs nowhere near as many resources to service your users. Everything is stored in full HDR as well, so you can even change tone-mapping settings during compositing. This functionality is so significant that we are currently writing another blog post dedicated just to this feature. Watch for it in the coming weeks.
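To see why full HDR matters here, note that tone mapping can be re-applied after the composite rather than being baked into the components. The snippet below uses a simplified exposure-plus-gamma operator purely for illustration; Iray’s actual tone mappers are more sophisticated.

    // Apply a simple exposure + gamma tone map to composited HDR values.
    // A simplified stand-in, not Iray's tone-mapping implementation.
    function toneMap(hdr, exposure, gamma) {
        const scale = Math.pow(2, exposure);
        const out = new Float32Array(hdr.length);
        for (let i = 0; i < hdr.length; i++) {
            out[i] = Math.min(1, Math.pow(hdr[i] * scale, 1 / gamma));
        }
        return out;
    }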

Best of all, if you need to customise our solution, the full source code is provided, since the entire system is implemented as V8 server-side JavaScript commands. We have put together a system which we feel will likely cover about 85% of the use cases we see; for the more specialised cases, you can modify the commands to suit your needs.

New Commands

Aside from the commands added for compositing, we found during development that there were quite a few other convenience commands we would like to have, so we added those as well.

All of these commands are built using the server-side V8 JavaScript API, so you have all of the source code for them with the release.
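If you want to modify the shipped commands or add your own, a V8 command is just a JavaScript file. The skeleton below shows the general shape; the exact metadata fields should be checked against the V8 commands included in your RealityServer installation, as this is an illustrative sketch rather than a verbatim template.

    // Sketch of a user-defined server-side V8 command. Field names follow the
    // pattern of the commands shipped with RealityServer but are illustrative.
    module.exports.command = {
        name: 'my_custom_composite',
        description: 'Example of a user-defined V8 command.',
        groups: ['compositing', 'examples'],
        arguments: {
            scene_name: {
                description: 'Name of the scene to operate on.',
                type: 'String'
            }
        },
        execute: function({ scene_name }) {
            // Call other commands or the neuray API here, then return a
            // JSON-serialisable result to the caller.
            return { scene: scene_name, status: 'ok' };
        }
    };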

Let Us Know

We’d love to hear how you go with this new functionality, particularly the compositing features. Contact us if you have any issues getting up and running or if you have feedback.
