{"id":3691,"date":"2020-08-12T02:18:21","date_gmt":"2020-08-12T02:18:21","guid":{"rendered":"https:\/\/www.migenius.com\/?p=3691"},"modified":"2020-08-12T09:42:38","modified_gmt":"2020-08-12T09:42:38","slug":"realityserver6","status":"publish","type":"post","link":"https:\/\/www.migenius.com\/articles\/realityserver6","title":{"rendered":"What’s New In RealityServer 6.0"},"content":{"rendered":"\n
This is a big one! We’ve been in beta for a while so some of our advanced users have already had a chance to check out the great new features in RealityServer 6.0. Some headline items are a new fibers primitive, matte fog, toon post-processing, a sheen BSDF, a V8 engine version bump, V8 command debugging and much more. Check out the full article for details.<\/p>\n\n\n\n\n\n\n\n
RealityServer 6.0 includes Iray 2020.0.2 build 327300.6313, which contains a lot of cool new functionality. Let’s take a look at a few of the biggest additions.<\/p>\n\n\n\n
In many products this might be called hair rendering; however, fibers as implemented by Iray can be used for almost any fiber-like object, for example hair, grass, carpets, fabric fringes and more.<\/p>\n\n\n\n
Fibers are a new type of element you can create. They consist of a set of points defining a uniform cubic B-spline<\/a> and radii at those points, forming a kind of smooth extruded cylinder with varying thickness. This lightweight primitive is intersected directly during ray-tracing, without creating any explicit polygons or mesh geometry, which allows very large numbers of primitives to be handled.<\/p>\n\n\n\n Think millions. Many places you’d like to use fibers will need lots of them, so this efficiency is essential. In testing we have also seen that RTX based GPUs with RT Core hardware ray-tracing support see even bigger speedups on scenes with a lot of fiber geometry compared to regular scenes. Creating fibers can be tricky since there isn’t much software which authors this type of geometry; the XGen<\/a> feature of Autodesk Maya is one example. However, many of our customers just want to generate a somewhat random distribution of fibers over existing mesh geometry.<\/p>\n\n\n\n To make that simple we now have a new generate_fibers_on_mesh <\/em>command which takes a mesh and some simple parameters and generates fibers for you. The topiary example above was generated this way by passing the geometry of the big 6 into the command. If you want to directly control every aspect of your fibers we also have the generate_fibers <\/em>command which allows you to provide a JSON based or binary based description of the fibers geometry you want to create. We’ve also included V8 wrapper classes for fibers, similar to those used for Polygon_mesh and Triangle_mesh; this is the best way to make use of the binary data input.<\/p>\n\n\n\n Fibers can also be read from .mi files using the hair object supported there since the mental ray days. As various Iray plugins such as Iray for Maya start to support fibers, they will be able to export this data into a .mi file that can be read by RealityServer. Of course you can also create fibers from C++ plugins, so if you want to use your own custom fibers format you can implement that as well. To finish up on fibers, here is another great image, this time with 30M fibers in the scene and the fiber colour varied using textures.<\/p>\n\n\n\n
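For the common case, calling the command from V8 is straightforward. The sketch below follows the RS.<\/em>command pattern used in the other examples in this article; note that apart from the command name, the parameter names shown are illustrative assumptions only, so check the command documentation for the actual generate_fibers_on_mesh <\/em>signature.<\/p>\n\n\n
\nmodule.exports.command = {\n name: 'fibers_example',\n description: 'Scatter fibers over an existing mesh.',\n groups: ['javascript', 'examples'],\n execute: function(args) {\n \/\/ NOTE: the parameter names below are assumptions for illustration,\n \/\/ consult the command reference for the real signature\n return RS.generate_fibers_on_mesh({\n mesh: args.mesh_name, \/\/ mesh to grow the fibers on\n name: args.fibers_name \/\/ name of the fibers element to create\n });\n }\n};\n<\/pre><\/div>\n\n\n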
When rendering larger scenes, the effects of aerial perspective<\/a> can be critically important to getting a realistic result. You probably know this best from the shift towards blue in features as you look out to the horizon over a landscape. In theory you could simulate this in previous releases by enclosing the entire scene in a huge box and applying a material with appropriate volume properties to the box; however, this would significantly increase rendering time. Now with the new matte fog feature there is a much faster way.<\/p>\n\n\n\n In the image above you can see the original scene without matte fog applied on the left and with matte fog enabled on the right. You can move the slider to compare the two results. The matte fog image gives a much more realistic impression of this type of scene; until you see it with the fog it can be hard to put your finger on what is actually wrong with the original image. Enabling matte fog is easy: you just need to enable a few attributes on your scene options (see the Iray documentation for details).<\/p>\n\n\n\n The matte fog technique is applied as a post effect, similar to the bloom feature that was introduced some time ago. It uses depth information combined with the rendering result to apply a distance-based fog. Because it runs as a GPU post process it adds very little time to rendering. Unlike a true volumetric simulation (which, as mentioned earlier, is still possible), matte fog will not produce effects from specific light sources or so-called god rays<\/em>. For those effects you will still need to perform a full volume simulation; however, for a simple aerial perspective effect this feature is perfect.<\/p>\n\n\n\n Another new post-processing effect introduced in this version is toon post-processing. This allows you to produce non-photorealistic rendering results with RealityServer, such as might be used for cartoons, illustrations or diagrams. The toon post-processing effect can be applied to the normal result canvas, the ambient occlusion canvas or the BSDF weight canvas (also introduced in this version). It affects the shading and also adds outlines to the objects. Here is a quick example of what you can achieve.<\/p>\n\n\n\n This image was made by applying the toon effect to the BSDF weight canvas, which encodes the albedo of the materials. It then uses a faux lighting effect to give shading which is quantized to give the banded appearance typical of cartoons. You can control the colour of the outlines and the level of quantization, or choose to show the fully faux-shaded appearance or no shading at all. Object IDs are used to determine where the edges of the objects are located, so where objects sharing the same material need to show edges you should ensure they have unique object IDs assigned with the label <\/em>attribute. You can also set the label <\/em>attribute to 0<\/em> on an object if you want to selectively disable the outlining. Also note that the toon feature works best when progressive_aux_canvas <\/em>is enabled on your scene options.<\/p>\n\n\n\n To support all of these features the old canvas_content <\/em>parameters of our RealityServer render commands have been changed to accept an object in addition to a string. This object is needed since a canvas content type can now carry various parameters. For the toon effects you need to use a V8 command or named canvases so that you can use the render_to_canvases <\/em>command. It’s definitely simplest in V8; here is a quick example command which renders a toon image similar to the one above using one of our default scenes.<\/p>\n\n\n The main difference from how you would have done this before is that in the canvas definitions passed to the render_to_canvases <\/em>command the content <\/em>property is specified as an object. You can see that the second canvas being rendered has three extra parameters. This example renders two canvases: the first is the BSDF weight and the second is the post toon effect, which uses the first canvas to do its work. We then encode and return the second canvas. Changing the scale parameter of the toon effect gives some interesting results; in the example below you can see the difference between setting this to 0 and 4.<\/p>\n\n\n\n Please refer to the Iray documentation for more details on how to use the toon effect feature and what the different parameters do. While RealityServer rendering commands still accept the older canvas_content string definitions, we definitely recommend updating your applications to the new method of specifying canvas contents if you are rendering anything other than the default canvas_content (result), to ensure future compatibility.<\/p>\n\n\n\n
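As a minimal sketch of the change, here is how the default result canvas might be requested with the new object form (assuming the usual render <\/em>command and showing only the relevant arguments; parameterized content types like post_toon <\/em>take their settings in params<\/em>):<\/p>\n\n\n
\nmodule.exports.command = {\n ...\n execute: function(args) {\n \/\/ Previously you would have passed canvas_content: 'result'\n return RS.render({\n scene_name: args.scene_name,\n renderer: 'iray',\n canvas_content: {\n type: 'result', \/\/ equivalent to the old 'result' string\n params: {} \/\/ parameterized content types take their settings here\n }\n });\n }\n};\n<\/pre><\/div>\n\n\n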
Iray 2020 comes with support for MDL 1.6 which adds several new features. One of the major ones is a new BSDF specifically for dealing with sheen. This phenomenon is particularly important for realistic fabrics and in the past other rendering engines would often use non-physical effects such as falloff to fake it. Now with the new sheen_bsdf<\/em> you have a physically based option for proper sheen. Here is an example of how big a difference sheen can make.<\/p>\n\n\n\n In this scene the image on the left is using pure diffuse fabrics while on the right we have added sheen. The effect is subtle and difficult to quantify, but the sheen is often what results in a much more fabric-like appearance. To help you try out this functionality we’ve included a new add_sheen <\/em>MDL material in a new migenius core_definitions <\/em>MDL module. You’ll find this at mdl::migenius::core_definitions::add_sheen<\/em> and it allows you to add a sheen layer on top of any existing material.<\/p>\n\n\n\n The Structural Similarity Index<\/a>, or SSIM for short, helps determine the similarity between an image and a given reference image. How can this help us when rendering, since we don’t have a reference image? In fact that is exactly what we are trying to create by rendering in the first place. Iray 2020 uses deep-learning techniques to predict SSIM values against an imagined converged image, using training data from a large number of pairs of partially and fully converged images generated with Iray. How does knowing the SSIM value actually help us here?<\/p>\n\n\n\n The image above is a heatmap generated by the SSIM predictor in Iray and it shows which parts of the image have converged and which still require further work. The brighter areas are closer to the imagined reference image while the darker areas are further from it in terms of similarity. Using this information it is possible to predict at which iteration Iray will reach a particular target similarity and how long it will take to get there. If you’ve dealt with having to build heuristics for your application to set the iteration count, or with the nebulous quality parameters in Iray, especially for widely varying scenes, then you can probably see where this is going and why you would want it.<\/p>\n\n\n\n In the end the goal will be for you to be able to set a quality value, which is your SSIM target value, and have this control when rendering terminates as well as provide feedback to users on how long rendering is expected to take. Right now, however, this feature should be considered more of a preview of the type of automated termination conditions that are coming in future releases. There are a few important restrictions for now which may mean you cannot use it in your application: in particular, if you want to use the denoiser or work with very large output resolutions it might not be the right solution for you. Right now you would also need to implement the logic for using SSIM to set termination conditions yourself, since at the moment it only reports information to you; it does not take any action. In RealityServer you can see the information coming back from the SSIM predictor by calling the render_get_progress<\/em> command with the area <\/em>parameter set to estimated_convergence_at_sample <\/em>or estimated_convergence_in <\/em>to get the iteration at which the target SSIM value will be reached and the amount of time to get there, respectively. In theory your application could render just enough to get the estimates and then use these to set the termination attributes and report the estimated time.<\/p>\n\n\n\n If you would like to obtain an image like the one shown above, to get some insight into what the SSIM predictor is seeing, you can render out the convergence_heatmap<\/em> canvas type, setting its index <\/em>parameter to the index of the canvas you want to diagnose. This works in the same way as the toon post-processing canvas setup described above. There are various Iray attributes to control the predictor: post_ssim_available <\/em>to allow it to be used, post_ssim_enabled <\/em>to turn it on and post_ssim_predict_target <\/em>to set the target value between 0 and 1, which is essentially the quality. A value of 0.99 would usually yield a production quality image and 0.98 would be suitable for medium quality results. There is also a post_ssim_max_memory <\/em>option to control memory usage, but unless you really need to change this you should let Iray choose the memory to use. All of these options are set on the Options <\/em>element for your scene.<\/p>\n\n\n\n
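Putting that together in V8, a minimal sketch looks like the following. The attribute names come straight from the list above; the attribute type names and the scene_name <\/em>parameter to render_get_progress <\/em>are our assumptions, and in a real application you would render for a while before querying the estimates.<\/p>\n\n\n
\nconst Scene = require('Scene');\n\nmodule.exports.command = {\n ...\n execute: function() {\n let scene = Scene.import('test', 'scenes\/meyemII.mi');\n \/\/ Enable the predictor and target a medium quality result\n scene.options.attributes.set('post_ssim_available', true, 'Boolean');\n scene.options.attributes.set('post_ssim_enabled', true, 'Boolean');\n scene.options.attributes.set('post_ssim_predict_target', 0.98, 'Float32');\n \/\/ ...render some iterations here so the predictor has data...\n \/\/ then read back the estimates (parameters assumed, see the command docs)\n return {\n at_sample: RS.render_get_progress({\n scene_name: scene.name, area: 'estimated_convergence_at_sample' }),\n in_time: RS.render_get_progress({\n scene_name: scene.name, area: 'estimated_convergence_in' })\n };\n }\n};\n<\/pre><\/div>\n\n\n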
Look for further improvements to the SSIM system in future releases. If you try it out in RealityServer 6.0 we’d love to hear about your experiences with this feature.<\/p>\n\n\n\n You might have noticed, particularly when rendering with very low iteration counts, that the shadows on the virtual ground plane in Iray Interactive can have a blotchy appearance, even when using the AI Denoiser. This is because by default a filter is applied to these shadows to smooth them out; however, this was implemented before the AI Denoiser existed. Unfortunately the filter destroys the denoiser’s ability to detect the noise on the ground plane and remove it. There is now a new boolean option irt_ground_shadow_filter<\/em> which, if disabled (it is enabled by default to preserve the current behaviour), will turn off this filter and let the denoiser operate on the ground plane.<\/p>\n\n\n\n In the image above you can see the results with the default settings, where the filter is applied, on the left. Note the blotchiness in the ground shadow while the noise has been removed from the rest of the image. On the right you see the effect of turning off the filtering so that the denoiser can work on the ground as well. Note that both of these images are just 4 iterations with Iray Interactive and the AI denoiser enabled. Of course neither gives a perfect result, and at much higher sample counts the difference is much less obvious; however, many of our customers are creating applications targeting very low rendering times and making heavy use of denoising and low iteration counts. For those cases this can help a lot with ground shadows.<\/p>\n\n\n\n
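Turning the filter off from V8 is a one-liner, following the same attribute pattern as the toon example in this article (here scene <\/em>is assumed to wrap an already imported scene):<\/p>\n\n\n
\n\/\/ irt_ground_shadow_filter defaults to true; disabling it lets the AI\n\/\/ denoiser treat the virtual ground plane like the rest of the image\nscene.options.attributes.set('irt_ground_shadow_filter', false, 'Boolean');\n<\/pre><\/div>\n\n\n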
As with most Iray releases you will also find a lot of bug fixes and smaller improvements incorporated; here are a few relevant ones, but be sure to read the neurayrelnotes.pdf<\/em> file for all the changes.<\/p>\n\n\n\n The AI Denoiser now gives significantly better results for some problematic scenes that were reported by users. While the vast majority of scenes gave good results already, there were a few specific scenes with certain types of features (in particular thin features over other elements) which could cause artifacts. This has been improved a lot in this release.<\/p>\n\n\n\n While the hair BSDF of MDL 1.5 was already present in Iray in the previous release, it wasn’t of much use since there was no hair primitive to use it with; that has of course now changed. More generally, the update to MDL 1.6 brings with it all of the other changes in that version of the language. If you are writing your own MDL content some of these may be important, for example changes to how modules are handled (particularly the removal of weak relative module references). A new multiscattering_tint <\/em>parameter has also been added to all glossy BSDFs to address energy conservation in certain situations. To get a full list of the changes look to the MDL 1.6 specification (you can find it in the RealityServer Document Center) and scroll to the last page, where you will see a list of all changes.<\/p>\n\n\n\n There is a new iray_rt_low_memory<\/em> option for users that are operating in significantly memory-constrained environments; this allows the acceleration structures for ray-tracing to consume less memory. It is only relevant for pre-Turing GPUs at the moment but can be useful for people using older, lower-end cards.<\/p>\n\n\n\n Obviously the update to Iray 2020 brings a lot of new functionality; however, we didn’t stop there. There are plenty of RealityServer-specific updates as well, and while these might not have as many pretty pictures to go with them, we think developers will find them particularly exciting.<\/p>\n\n\n\n While an update to the V8 engine might not seem too significant, it is actually a pretty big jump and adds a lot of core JavaScript language features. Previously we were on V8 5.6, so if you are a JavaScript guru you know this is a big deal. Google have put a lot of focus into the performance of V8 and there have been significant improvements in this area, meaning your custom V8 commands will run faster. If you’ve been waiting for some of the great new language features in JavaScript such as nullish coalescing<\/a>, optional chaining<\/a> and more, then you finally have your chance to use them, as shown in the snippets below. Features like the padStart <\/em>and padEnd <\/em>methods for strings can also come in handy, particularly when making element names. If you’re really brave you can try out WebAssembly<\/a> (WASM) support; yes, even though we have a native C++ API already, you might find reasons to do this, or maybe you’re just adventurous!<\/p>\n\n\n\n
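Here are a couple of quick, plain-JavaScript examples of the kind that come up when writing commands (index <\/em>and args <\/em>are assumed to come from the surrounding command):<\/p>\n\n\n
\n\/\/ Zero-padded element names with padStart\nlet name = 'instance_' + String(index).padStart(4, '0'); \/\/ e.g. 'instance_0042'\n\n\/\/ Optional chaining and nullish coalescing for tidy defaults\nlet max_samples = args.options?.max_samples ?? 100;\n<\/pre><\/div>\n\n\n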
Ok, for anyone that spends a lot of time writing custom V8 JavaScript commands with RealityServer this is huge. In RealityServer 6.0 you can debug your V8 commands using the Chrome DevTools<\/a>. This means you can set breakpoints, inspect values and step through your code, all in a familiar environment. It’s really a game changer for productivity: no more console.log<\/em> debugging your commands! It’s simple to enable: in your realityserver.conf<\/em> file, in the V8 configuration, you’ll see the following line commented out; just un-comment it to enable the inspector.<\/p>\n\n\n When you run RealityServer it will now listen on port 20000 (or whatever you selected) for inspector connections. In Chrome you then just need to go to your address bar and drop in a link like this.<\/p>\n\n\n\n Of course you can replace 127.0.0.1, and the port as well, if your server is remote. Now a word of warning: do not enable this feature on production servers<\/strong>. In order to debug V8 commands, RealityServer switches to a single-threaded model, so only a single command can be in flight at a time. There are obviously also security implications to leaving this port open, so make sure this is not enabled outside debugging. When you navigate to the inspector you’ll be greeted with an interface similar to this one.<\/p>\n\n\n\n You can see a few things going on here: a breakpoint was set and hit, we’ve stepped on one line from there, and we are watching a couple of variables. You can add your V8 directories to the sources explorer to find your commands, and any console output is redirected to the console in the inspector so you can see it there.<\/p>\n\n\n\n You’ve actually always been able to use Promises in V8 commands; however, they probably didn’t work quite the way you would expect. In fact we had never intended to support them; it was more an accident of them being in V8 itself. In RealityServer 6.0 we’ve made Async and Promise support more useful and also included async versions of methods on some of our built-in classes like fs<\/em> and http<\/em>. This means your command’s execute <\/em>function can now be async<\/em> in a useful way. The command itself will not return until all Promises are resolved or rejected, so keep in mind this is not intended for kicking off long-running jobs and walking away. With this new functionality you can now do something like this.<\/p>\n\n\n Both the fs<\/em> and http <\/em>classes now have an async <\/em>module within them which exposes async versions of the regular synchronous methods. You can combine these with language features such as Promise.all()<\/em> and friends. If a Promise is rejected then the command will throw an error; in cases where you want specific behaviour you need to wrap your calls in a try\/catch block.<\/p>\n\n\n There actually are not that many native JavaScript features that make use of Promises, with the notable exception of the WebAssembly API. Without using Promises you can only load trivially small WebAssembly programs; the API for loading larger programs requires the use of Promises. For example, here is a very simple command which loads a .wasm file, compiles and executes it and returns the result.<\/p>\n\n\n WebAssembly is a whole topic in itself of course and we won’t cover it here, however there are some pretty interesting possibilities. You can of course also just make your own Promises if you would like them for some reason.<\/p>\n\n\n Note that, as with the built-in functionality, if the reject path is taken in the case above the command itself will throw an error, so you need to use a try\/catch block there as well if you want different behaviour. We’ll likely need to make more use of async functionality in the future as JavaScript expands and more built-in features use it, so this new feature lays some of the groundwork for that. We also added one more related feature: there is now a global sleep()<\/em> function you can call. You might wonder where you would want this; however, it can be very helpful when you want to implement some kind of retry logic with exponential back-off, for example, as sketched after the sleep example below. 
The sleep function returns a Promise which resolves to undefined <\/em>when the specified time is reached.<\/p>\n\n\n There are still some restrictions on this functionality in RealityServer 6.0. Right now it is not possible to call another V8 command after an await expression, or in the then, catch or finally handlers of a Promise. This restriction will be removed in a future release.<\/p>\n\n\n\n When building V8 commands you often want to share settings between your various commands and allow users to configure these. In the past the only real way to do this was to explicitly load a JSON file or another JavaScript file which contained this information; however, this means mixing configuration and logic in the same place, which is often undesirable. It is now possible to access the RealityServer configuration information from within your V8 commands. Here is a quick example.<\/p>\n\n\n You’ll see here there is a new RS.Configuration<\/em> object which holds all of the information parsed from the RealityServer configuration files (by default realityserver.conf<\/em> and whatever you might include). In the above example we are first retrieving a user<\/em> configuration directive called migenius<\/em> and then an item from within it called my_feature<\/em>. Many of the built-in directives can be accessed directly on RS.Configuration<\/em> and this maps very closely to the internal mi::rswservices::IConfiguration<\/strong> <\/a>class in our C++ API. The easiest way to get a feeling for what is available is to just dump RS.Configuration<\/em> as JSON and take a look through it.<\/p>\n\n\n\n Within V8 you can now crop any Canvas, getting back the cropped version, and optionally convert the pixel type during the cropping operation. We had a few users who were resorting to external tools for this, or writing the cropping code in pure JavaScript which was quite a bit slower, so we thought we would add native support.<\/p>\n\n\n It has been a long-standing issue with Iray, and in turn RealityServer, that when you encounter GPU failures such as a GPU fault, exhausted memory or other issues causing the GPU to stop working, the only place you find out about it is the log output. This meant you had no way to programmatically handle GPU errors or other fatal issues that are encountered. Part of what makes this so difficult to deal with is that Iray has a strict no-exceptions policy (for performance reasons), which means it cannot throw exceptions that could be caught by other code.<\/p>\n\n\n\n To handle this without exceptions, Iray 2020 introduces a new concept of log entry tagging and message details. Log entries now come with a Message_details<\/em> structure, which tells you whether the message was associated with a specific GPU device, and also a Message_tag<\/em>, which gives details on what actually happened. The idea is that these can be acted on programmatically, so there is no need to parse text out of logs or resort to other brittle measures that would break with even minor Iray changes.<\/p>\n\n\n\n In RealityServer 6.0 you can exploit this new functionality by implementing an mi::rswservices::ILog_handler<\/em> in a C++ plugin. We ship a small example in the src\/log_handler<\/em> directory of your RealityServer installation showing how this can be done. If you implement a custom log handler you can inspect all log details and look for specific tags which identify GPU problems or other activity you are interested in, and then take action programmatically. 
This finally provides a mechanism by which you can react to GPU errors.<\/p>\n\n\n\n
Matte Fog<\/h4>\n\n\n\n
Toon Post-processing<\/h4>\n\n\n\n
\nconst Scene = require('Scene');\nconst Camera = require('Camera');\n\nmodule.exports.command = {\n name: 'render_toon',\n description: 'Render a scene with the toon effect.',\n groups: ['javascript', 'examples'],\n execute: function() {\n \/\/ Import the scene and cap the iteration count for a quick render\n let scene = Scene.import('test', 'scenes\/meyemII.mi');\n scene.options.attributes.set(\n 'progressive_rendering_max_samples', 10, 'Sint32');\n let camera = scene.camera_instance.item.as('Camera');\n\n \/\/ First canvas: the BSDF weight (albedo); second: the toon post effect,\n \/\/ whose index parameter points at canvas 0 (the weight canvas)\n let canvases = RS.render_to_canvases({\n canvases: [\n {\n name: 'weight',\n pixel_type: 'Rgba',\n content: {\n type: 'bsdf_weight',\n params: {}\n }\n },\n {\n name: 'toon',\n pixel_type: 'Rgba',\n content: {\n type: 'post_toon',\n params: {\n index: 0,\n scale: 4,\n edge_color: {\n r: 1.0, g: 1.0, b: 0.0\n }\n }\n }\n }\n ],\n canvas_resolution_x: camera.resolution_x,\n canvas_resolution_y: camera.resolution_y,\n renderer: 'iray',\n render_context_options: {\n scheduler_mode: {\n type: 'String',\n value: 'batch'\n }\n },\n scene_name: scene.name\n });\n\n \/\/ Encode and return the toon canvas as a JPEG\n return new Binary(\n canvases[1].encode('jpg', 'Rgb', '90'), 'image\/jpeg');\n }\n};\n<\/pre><\/div>\n\n\n
Sheen BSDF<\/h4>\n\n\n\n
Deep-Learning SSIM Render Predictor<\/h4>\n\n\n\n
Iray Interactive Ground Shadow Filtering<\/h4>\n\n\n\n
Other Iray Changes<\/h4>\n\n\n\n
RealityServer<\/h3>\n\n\n\n
V8 Engine Version 8.1<\/h4>\n\n\n\n
V8 Command Debugging<\/h4>\n\n\n\n
\n<user v8>\n...\ninspector 20000\n...\n<\/user>\n<\/pre><\/div>\n\n\n
devtools:\/\/devtools\/bundled\/inspector.html?v8only=true&ws=127.0.0.1:20000<\/code><\/pre>\n\n\n\n
V8 Async Support<\/h4>\n\n\n\n
\nmodule.exports.command = {\n ...\n execute: async function(args) {\n let data = await fs.async.readFile(args.filename, { encoding: 'utf8' });\n return JSON.parse(data);\n }\n};\n<\/pre><\/div>\n\n\n
\nmodule.exports.command = {\n ...\n execute: async function(args) {\n try {\n let data = await fs.async.readFile(args.filename, { encoding: 'utf8' });\n return JSON.parse(data);\n } catch(error) {\n return({});\n }\n }\n};\n<\/pre><\/div>\n\n\n
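Since these are ordinary Promises they also compose with Promise.all()<\/em>; for example, reading two files concurrently (the filename arguments are illustrative):<\/p>\n\n\n
\nmodule.exports.command = {\n ...\n execute: async function(args) {\n \/\/ Read both files concurrently rather than one after the other\n const [first, second] = await Promise.all([\n fs.async.readFile(args.first_filename, { encoding: 'utf8' }),\n fs.async.readFile(args.second_filename, { encoding: 'utf8' })\n ]);\n return { first: first, second: second };\n }\n};\n<\/pre><\/div>\n\n\n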
\nmodule.exports.command = {\n ...\n execute: async function() {\n const wasm = await fs.async.readFile('main.wasm');\n const module = await WebAssembly.compile(wasm.buffer);\n const instance = await WebAssembly.instantiate(module);\n const result = instance.exports.main();\n return result;\n }\n};\n<\/pre><\/div>\n\n\n
\nmodule.exports.command = {\n ...\n execute: async function(args) {\n let result = await new Promise((resolve, reject) => {\n if (args.input === 'resolve') {\n resolve('I resolved');\n } else {\n reject('I rejected');\n }\n });\n \/\/ If the Promise rejects the command throws, otherwise return the value\n return result;\n }\n};\n<\/pre><\/div>\n\n\n
\nmodule.exports.command = {\n ...\n execute: async function(args) {\n await sleep(5000);\n return 'Waited 5s!';\n }\n};\n<\/pre><\/div>\n\n\n
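One pattern this enables is the retry logic with exponential back-off mentioned earlier. Here is a minimal sketch; http.async.get<\/em> and its arguments are assumptions for illustration, so substitute whatever operation you actually need to retry:<\/p>\n\n\n
\nmodule.exports.command = {\n ...\n execute: async function(args) {\n \/\/ Try up to 5 times, waiting 1s, 2s, 4s, 8s between attempts\n const max_attempts = 5;\n for (let attempt = 0; attempt < max_attempts; attempt++) {\n try {\n return await http.async.get(args.url); \/\/ assumed helper, for illustration\n } catch (error) {\n if (attempt === max_attempts - 1) throw error;\n await sleep(1000 * Math.pow(2, attempt));\n }\n }\n }\n};\n<\/pre><\/div>\n\n\n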
V8 RealityServer Configuration Access<\/h4>\n\n\n\n
\n\/\/ Find the 'migenius' directive among the user configuration entries,\n\/\/ then the 'my_feature' item within it (entries are assumed to carry\n\/\/ name\/value pairs)\nconst config = RS.Configuration.user.find( entry => entry.value === 'migenius');\nconst my_feature = config.subitem.find( entry => entry.name === 'my_feature');\nif (my_feature && my_feature.value === 'on') {\n \/\/ Do something\n}\n<\/pre><\/div>\n\n\n
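As mentioned above, the quickest way to explore what is available is to dump the whole object; a trivial command like this returns everything as JSON:<\/p>\n\n\n
\nmodule.exports.command = {\n ...\n execute: function() {\n \/\/ Return the entire parsed configuration so you can browse it\n return RS.Configuration;\n }\n};\n<\/pre><\/div>\n\n\n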
V8 Canvas Cropping<\/h4>\n\n\n\n
\nlet canvas = image.canvas;\n\/\/ Arguments are xl, yl, xh, yh being the left, bottom, right and top edges of the crop\nlet cropped = canvas.crop(10, 15, 50, 35);\n\/\/ Optionally convert the pixel type while cropping\nlet cropped_rgba = canvas.crop(10, 15, 50, 35, 'Rgba');\n<\/pre><\/div>\n\n\n
Advanced Logging and GPU Error Handling<\/h4>\n\n\n\n
Updated Example Applications<\/h4>\n\n\n\n