{"id":1912,"date":"2017-11-28T07:12:12","date_gmt":"2017-11-28T07:12:12","guid":{"rendered":"https:\/\/www.migenius.com\/?p=1912"},"modified":"2017-12-04T02:27:12","modified_gmt":"2017-12-04T02:27:12","slug":"realityserver-5-1","status":"publish","type":"post","link":"https:\/\/www.migenius.com\/articles\/realityserver-5-1","title":{"rendered":"RealityServer 5.1 with AI Denoising"},"content":{"rendered":"

RealityServer 5.1 is here and it has something a lot of users have been asking about. This release adds the new AI Denoising algorithm for fast, high quality denoising of your images using state of the art machine learning technology. You really need to try it to fully appreciate the performance benefits, but we'll show you a few images to give you a feeling for what it is capable of. We are also adding support for the new NVIDIA Volta architecture and, as usual, a range of other smaller enhancements.


AI Denoising

Iray Photoreal mode has always had the issue that in quite a few use cases (e.g., complex architectural interiors) some lingering noise can remain in final renders. While the overall image quality is very high, removing this last little bit of noise can take anywhere from minutes to hours depending on your scene. We finally have a solution for this problem.

The new AI Denoiser is based on an artificial intelligence technique known as machine learning. It has been trained to remove noise, but not real detail, using an extensive set of Iray scenes, many of which were provided by migenius customers. While training the denoiser is a time-consuming process, this is performed by NVIDIA on a large number of GPUs, so the denoiser comes pre-trained and you can then run it at near real-time speeds on your own local GPU.

Unlike many traditional offline denoisers, the AI Denoiser is fast enough to be used interactively, typically taking less than 100ms to execute. During rendering some additional data needs to be stored, which increases memory requirements; however, the performance gains are substantial. Below is a constant render time comparison.

\"\"<\/a><\/p>\n

Large Final Frame with AI Denoising (Click to Enlarge)<\/p>\n<\/div>\n<\/div><\/div><\/div>\n\n


Above you can see a crop of the bathroom rendering shown earlier, in an area that exhibits more persistent noise. The top row is a conventional Iray rendering at various render times, while the bottom row shows the same render times with the AI Denoiser activated. These renders were made on a TITAN X (Pascal) GPU. Even at the 30s end, the denoised image gives a much more useful result than the 300s image without denoising, while at the other end the 120s denoised image approaches the quality of the 1200s image without denoising.

In practice the performance of the denoiser is going to depend on your content, but for many use cases we are seeing a 5-10x speed up. Of course there are some trade-offs to get this performance: the denoising can introduce some smoothing and, if there is insufficient information in the source image, other artifacts. In most cases, however, these are preferable to the noise. Here is an example from one of our customers, TapGlance, using their real-world content.

[Image: With AI Denoiser. 120 second render time. Scene by John O'connor. Created in TapGlance.]

[Image: Without Denoiser. 120 second render time. Scene by John O'connor. Created in TapGlance.]

The AI Denoiser can be enabled for all rendering, or only after a certain number of iterations have been computed, which is useful for ensuring there is enough information for the algorithm to work with before applying it. Our pre-release customers have called this feature a game changer for producing final quality images in a fraction of the time. It's also fully automatic, so there are no complex settings to tune: just turn it on and go.
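To give a concrete feel for how this is driven, here is a minimal sketch of the kind of JSON-RPC batch that might switch the denoiser on, assuming a scene options element called my_scene_options; the post_denoiser_* attribute names and types follow Iray's naming but should be verified against the RealityServer command documentation for your release.

```javascript
// Hypothetical sketch: enable the AI Denoiser by setting post-denoiser attributes
// on the scene options element via RealityServer's JSON-RPC interface.
// 'my_scene_options' and the attribute/type names below are assumptions to check
// against the RealityServer and Iray documentation.
const commands = [
  { method: 'element_set_attribute',
    params: { element_name: 'my_scene_options', create: true,
              attribute_name: 'post_denoiser_available',
              attribute_type: 'Boolean', attribute_value: true } },
  { method: 'element_set_attribute',
    params: { element_name: 'my_scene_options', create: true,
              attribute_name: 'post_denoiser_enabled',
              attribute_type: 'Boolean', attribute_value: true } },
  { method: 'element_set_attribute',
    params: { element_name: 'my_scene_options', create: true,
              attribute_name: 'post_denoiser_start_iteration',
              attribute_type: 'Sint32', attribute_value: 32 } }
];

// Post the commands as a JSON-RPC 2.0 batch to the RealityServer HTTP endpoint.
fetch('http://localhost:8080/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(commands.map((cmd, i) => ({ jsonrpc: '2.0', id: i, ...cmd })))
}).then(response => response.json()).then(results => console.log(results));
```

Delaying the denoiser with a start iteration, as in the last command of the sketch, corresponds to the "only after a certain number of iterations" behaviour described above.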

Volta Support

The NVIDIA Volta architecture is finally starting to appear, with cloud providers offering access to the Tesla V100 card. RealityServer 5.1 includes full support for Volta based cards, and our early benchmarking is showing an impressive 50-60% performance improvement over the previous generation Pascal architecture when comparing similar cards (for example, the Tesla P100 against the Tesla V100). Look out for more Volta based cards in the future; once we can get our hands on them, we plan to do more extensive benchmarking.

\"\"<\/p>\n<\/div><\/div><\/div>\n

Upload Server

Many users have asked us how to go about getting data up to RealityServer. We have usually relied on this being done either by programmatically creating the scene data or by uploading it through a side channel to the file system of the server running RealityServer. In RealityServer 5.0 we added the image_reset_from_base64 command, and the import_scene_elements_from_string command was extended to allow importing any data type supported by the standard range of RealityServer importers. However, there are still many cases this doesn't cover, and many applications require a general ability to upload files.

This release adds a new built-in upload server which you can optionally enable in your realityserver.conf to get content onto your server easily over HTTP/HTTPS. You can then use standard form based upload methods to transfer content. We plan to continue extending the upload server with new features in future releases, such as automated unzipping.
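As a simple illustration, a browser-side upload using a standard multipart form POST could look roughly like this; the /uploads/ path and the form field name are placeholders, since the real endpoint depends on how the upload server is configured in your realityserver.conf.

```javascript
// Illustrative sketch: send a file to the RealityServer upload server using a
// standard multipart form POST. The endpoint path and field name are placeholders;
// substitute whatever your upload server configuration actually exposes.
async function uploadToRealityServer(file) {
  const form = new FormData();
  form.append('file', file, file.name);          // standard form-based file field
  const response = await fetch('http://localhost:8080/uploads/', {
    method: 'POST',
    body: form                                   // fetch sets the multipart boundary
  });
  if (!response.ok) {
    throw new Error(`Upload failed with HTTP status ${response.status}`);
  }
  return response;
}
```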

New V8 Canvas Related Features

In RealityServer 5.0 we added the server-side V8 engine, and some basic canvas functionality came with that. This introduced some cool possibilities; however, it became evident pretty quickly that more functionality was needed. In RealityServer 5.1 we have extended the render_to_canvases command to return an array of canvases which can be used directly in V8. This is great for automating the rendering of multiple channels at once, perfect for automating Light Path Expression based compositing (spoiler alert: we are working on something in that area for an update).
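As a rough sketch of how that might be used, a server-side V8 command could render a set of canvases in one call and inspect them directly in JavaScript. The command module layout shown follows the general pattern of RealityServer V8 commands, but the run_command helper and the canvas properties accessed here are assumptions; the V8 API documentation describes the real bindings.

```javascript
// Hypothetical V8 command sketch: call render_to_canvases (which in RealityServer 5.1
// returns an array of canvases usable directly in V8) and report on each channel.
// run_command is a stand-in for however native commands are exposed to V8, and the
// width/height properties on the canvas objects are likewise assumptions.
module.exports.command = {
  name: 'render_channels_example',
  description: 'Renders multiple canvases at once and returns their dimensions.',
  groups: ['examples', 'javascript'],
  arguments: {
    scene_name: { type: 'String', description: 'Name of the scene to render.' }
  },
  execute: function(args) {
    const canvases = run_command('render_to_canvases', { scene_name: args.scene_name });
    // Each entry could be a separate Light Path Expression channel.
    return canvases.map(function(canvas, index) {
      return { index: index, width: canvas.width, height: canvas.height };
    });
  }
};
```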

We also found several use cases where resizing image canvases directly in RealityServer would be useful, so we added a resize method to the Canvas object in V8.

\"\"<\/a><\/p>\n

Cool (But Useless) Demo Using Canvas.resize<\/p>\n<\/div>\n<\/div><\/div><\/div>\n

This can be really useful if you need a smaller version of an image for some reason, for example producing thumbnails or passing to an image processing algorithm that works better at lower resolutions. The above (admittedly useless) example uses the resize method to prepare the canvas for conversion to ASCII art (this example is included in RealityServer 5.1 for your enjoyment).

All of this is great, but we also found that once we had this functionality people wanted to be able to write the resulting canvases out to disk for later use or persistence. We have added a very basic fs module to V8 so you can read and write binary data to disk and create directories. We plan to extend it in the future with additional functionality.
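Putting the resize method and the fs module together, a hedged sketch of producing and persisting a thumbnail might look like the following; the resize signature, the encoding step and the fs.mkdir / fs.write_binary names are placeholders for illustration, so check the V8 API documentation for the actual interfaces.

```javascript
// Hypothetical V8 snippet: shrink a canvas obtained from render_to_canvases (see the
// earlier sketch) and persist it with the basic fs module added in RealityServer 5.1.
// The fs function names and the canvas encoding call are placeholders.
const fs = require('fs');                        // RealityServer's basic fs module

const canvases = run_command('render_to_canvases', { scene_name: 'my_scene' });
const thumbnail = canvases[0];

thumbnail.resize(256, 192);                      // Canvas.resize, new in 5.1
fs.mkdir('thumbnails');                          // placeholder directory helper
fs.write_binary('thumbnails/my_scene.png',       // placeholder binary write
                thumbnail.encode('png'));        // placeholder canvas encoding
```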

Assimp Plugin Importer and Exporter

We have been shipping the Iray Viewer desktop application with RealityServer for some time, and many users noted that it included a nice plugin based on the Assimp library which allows it to import (and export) many file formats. Some of you even found that you could copy this plugin out of Iray Viewer into your RealityServer installation and use it there. Enough people were doing this that we thought: why not include it as a standard feature?

In the process we reworked things a little to make the plugin fully compatible with RealityServer and also updated the version of Assimp being used to 4.0.1. We enabled all formats supported by Assimp for import and export, except for OBJ, which we disabled in favour of our existing OBJ importer, which is more complete. Your mileage with different formats will vary significantly in terms of what gets imported. Some will give reasonable materials and geometry, others may just give you geometry. We still recommend building your system around our .mi file format, but there are many use cases where this plugin can be helpful.
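As a hedged example, once the Assimp plugin is in place a format such as FBX can be brought in through the regular import commands, along these lines; the filename is a placeholder and the parameters of import_scene_elements should be confirmed against the RealityServer command documentation.

```javascript
// Illustrative sketch: import an FBX file through the Assimp plugin using the
// standard import_scene_elements command over JSON-RPC. The filename and parameter
// names are placeholders to verify against the command documentation.
const importRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'import_scene_elements',
  params: { filename: 'scenes/uploaded_model.fbx' }
};

fetch('http://localhost:8080/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(importRequest)
}).then(response => response.json()).then(result => console.log(result));
```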

If you make use of the Assimp plugin we'd encourage you to support the project with a donation, as we have done, or by contributing. It is being actively developed, and details of how to help support the project are available on their GitHub page.

Import formats: