The RealityServer Client Library has been the recommended means of using RealityServer when building browser-based applications. However, all of the example applications shipped with RealityServer were still using the legacy client library. RealityServer 6.0 updates all sample applications to use the new, modern JavaScript client library. We've also removed some other examples which didn't represent best-practice use of RealityServer.

#### WebSocket UAC and Scope Integration

UAC (User Access Control) and automatic session creation are incredibly useful features for multi-user RealityServer applications. Unfortunately, in previous versions, while you could use our new WebSocket based client libraries, you still had to send traditional requests to keep sessions alive, as UAC was not aware of what was happening in a WebSocket stream. Now UAC is aware of WebSocket streams, and having an active stream will automatically keep the session alive for you.
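As a rough illustration, the sketch below connects the JavaScript client library and starts a render loop stream; with UAC enabled, keeping that stream open is now enough to keep the session alive. The service URL and render loop name are placeholders, and the stream-creation call is an assumption about the client library API, so treat the details as a sketch and check the realityserver-client documentation for the exact method names.

```js
import { Service } from '@migenius/realityserver-client';

const service = new Service();

async function main() {
  // Open the WebSocket command/streaming connection to RealityServer.
  // The URL is a placeholder for your own deployment.
  await service.connect('ws://localhost:8080/service/');

  // Start streaming an existing render loop (name is a placeholder).
  // Assumption: stream creation may differ in your client library
  // version - consult the realityserver-client documentation.
  const stream = service.create_stream();
  stream.on('image', image => {
    // Display or otherwise process each rendered image as it arrives.
    console.log('received image', image);
  });
  await stream.start({ render_loop_name: 'my_render_loop' });

  // With UAC enabled, keeping this stream open now keeps the session
  // alive - no separate keep-alive requests are required.
}

main().catch(err => console.error(err));
```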
#### Extras Plugins Now Core

Pretty much every customer who uses RealityServer also installs the so-called "Extras" plugins we provide, which added commands such as *camera_frame*, *camera_auto_exposure*, *set_sun_position* and more, as well as plugins to help with setting content-disposition headers for image saving and other handy functionality. These are all now included as core features, so there is no need to install extra plugins when setting up RealityServer. If you're wondering where the extras plugin download has gone, it is no longer needed.

#### Iray Interactive Batch Mode

We've included the V8 command *render_batch_irt* for some time. It basically just renders with Iray Interactive in a loop to simulate the batch rendering that Iray Photoreal enables through the batch scheduler render context option. While Iray Interactive also recognises the batch scheduler mode, it handles it differently: instead of running to termination in one step, it returns after each step and needs to be called repeatedly until it reaches the termination conditions. Our V8 command worked around this but had some annoying consequences; in particular, switching between renderers meant changing the command you called rather than just one parameter. With RealityServer 6.0, if you enable the batch scheduler when using Iray Interactive it now behaves the same way as Iray Photoreal and renders to completion, even when using the standard *render* command (see the sketch below). The *render_batch_irt* command still works and uses the new functionality (which is also faster), however we recommend switching to regular *render* commands now.

#### Lightmapping Improvements

With the new support for Iray Interactive batch rendering, lightmapping with Iray Interactive is now possible as well. While this was technically possible in earlier versions, it would only ever render the first iteration, which wasn't of much value. Now you can use Iray Interactive for lightmapping just like Iray Photoreal. Another feature we added to the lightmap renderer is control over the amount of padding added to the lightmapping elements within the result texture. This is desirable if you want to implement your own inpainting solution or apply an external denoiser to lightmaps (since we don't support AI denoising of lightmaps in RealityServer).
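Returning to the batch scheduler change above, a JSON-RPC request along these lines should now render to completion with Iray Interactive. The scene name is a placeholder, and the exact shape of the *render_context_options* parameter is our best recollection of the command documentation rather than a verified listing, so double-check it against your RealityServer command reference.

```json
{
  "jsonrpc": "2.0",
  "method": "render",
  "params": {
    "scene_name": "my_scene",
    "renderer": "irt",
    "format": "png",
    "render_context_options": {
      "scheduler_mode": { "type": "String", "value": "batch" }
    }
  },
  "id": 1
}
```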
*(Image: lightmap rendered with no padding on the left and with the default 5 padding passes on the right.)*
On the left you can see the lightmap image with no padding applied, and on the right with 5 padding passes (the default value). The padding process extends the edges of the lightmap regions using the boundary pixels and is intended to avoid the edge artifacts that can appear when the textures are used and filtering causes the black areas to bleed into the area being textured. So if you render without padding in order to apply your own post-processing, it is still important to do something similar to your images afterwards to avoid artifacts.
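If you do take the unpadded output and post-process it yourself, a single padding pass is essentially a dilation of the covered texels into their empty neighbours. The sketch below is a minimal, renderer-agnostic illustration of that idea, assuming you have already fetched the lightmap pixels into an RGBA Uint8Array and have some way of knowing which texels the lightmap actually wrote to; it is not the algorithm RealityServer uses internally.

```js
// One padding (dilation) pass over an RGBA lightmap stored in a Uint8Array.
// `covered` is a Uint8Array of 0/1 flags marking texels the lightmap wrote to;
// how you obtain it depends on your own pipeline.
function padOnce(pixels, covered, width, height) {
  const out = pixels.slice();
  const newCovered = covered.slice();
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      if (covered[i]) continue; // texel already has data
      let r = 0, g = 0, b = 0, a = 0, n = 0;
      // Average the covered 8-neighbours, if any.
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          const nx = x + dx, ny = y + dy;
          if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
          const j = ny * width + nx;
          if (!covered[j]) continue;
          r += pixels[j * 4];
          g += pixels[j * 4 + 1];
          b += pixels[j * 4 + 2];
          a += pixels[j * 4 + 3];
          n++;
        }
      }
      if (n > 0) {
        out[i * 4] = Math.round(r / n);
        out[i * 4 + 1] = Math.round(g / n);
        out[i * 4 + 2] = Math.round(b / n);
        out[i * 4 + 3] = Math.round(a / n);
        newCovered[i] = 1;
      }
    }
  }
  covered.set(newCovered); // grow the coverage for the next pass
  return out;
}

// Repeat for the number of passes you want, e.g. the default of 5:
// for (let p = 0; p < 5; p++) pixels = padOnce(pixels, covered, width, height);
```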
#### New Admin Console Pages

If you're a l33t RealityServer developer then you know about the Admin Console served up on port 8081 by default (configurable, of course). In the past this was just a direct exposure of the internal Iray admin page, however in RealityServer 6.0 we have started to extend it with new RealityServer-specific pages. You can now see a list of the active WebSocket streams as well as the active render loops that are running, which can be a great help when diagnosing various issues.
#### Generate Points on a Mesh

Since we already had the logic for the *generate_fibers_on_mesh* command, we decided to create an additional, more generic command called *generate_points_on_mesh*. This takes a *Triangle_mesh* or *Polygon_mesh* object and gives you back a list of random points distributed over the surface, along with normals at those points (also potentially randomised to some degree). This can be great for scattering objects in scenes, for example if you want to randomly place trees over a terrain. Here's a quick test using the *generate_points_on_mesh* command to create transforms for a bunch of cylinders created with the *generate_cylinder* command.

*(Image: cylinders placed using transforms generated with the generate_points_on_mesh command.)*

#### Baking MDL Function Calls

Using the *distill_material* command it was previously possible to indirectly bake the results of an MDL function call with the baking functionality of the distiller, however this required making a placeholder material and attaching the function to it in a way that you knew the distiller would not change (for example attaching it to the emission). There is now an *mdl_bake_function_call* command which allows you to pass any MDL function call into the baker and use just the baking functionality without distilling anything.

There are quite a lot of potential uses for this. For example, you might want to bake a very complex graph of connected functions into a simple texture image, or you might want to write custom MDL functions which perform image manipulation you want to run on the GPU during baking. This command essentially provides a way for you to execute your MDL function over the full UV texture domain and output the results to a static image.
The command also lets you get the results back in a variety of ways: you can have it return the encoded image data, base64 encode it, return a Canvas, or create scene elements such as a Texture or Image for you.
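As a rough sketch of how this might look over JSON-RPC, the request below bakes a named function call to a PNG. The parameter names here (*function_call_name*, *resolution_x*, *resolution_y*, *format*) are hypothetical placeholders based on typical RealityServer command conventions rather than the actual signature, so consult the command documentation for the real parameters.

```json
{
  "jsonrpc": "2.0",
  "method": "mdl_bake_function_call",
  "params": {
    "function_call_name": "my_noise_function_call",
    "resolution_x": 1024,
    "resolution_y": 1024,
    "format": "png"
  },
  "id": 2
}
```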
#### Canvas Combiner

We often see customers wanting to do two types of operations on the canvases they produce: multiplying them for blending, or adding them for Light Path Expression compositing. This could already be done reasonably quickly with V8 commands, but in some cases not quickly enough. To help with that we have added a new *combine_canvases* command. This takes two canvases and an operation (currently only multiply or add, though we are considering adding others in the future), runs the operation and returns a new canvas.

*(Image: photorealistic render on the left, BSDF Weight canvas with a custom text texture in the middle, and the multiplied result on the right.)*

There are quite a few use cases for this. In the example above we render a photorealistic image on the left, while in the middle we render just the BSDF Weight canvas with a custom texture with some text on it, which can be produced much more quickly. By using the canvas combiner we can multiply these two canvases to get the result on the right without re-rendering the more expensive photorealistic image. This can be used for personalisation applications, for example where users can upload their own content to show on products. While our V8 JavaScript implementation was taking around 30ms for a typical image, the *combine_canvases* command took around 3ms, so it can give a significant speedup.

#### Get Canvas Improvements

You may not have experimented with it yet, but using JavaScript typed arrays you can do some fairly efficient image processing in JavaScript V8 commands. One pain point, however, was that there was no way to directly get a named canvas as a JavaScript Canvas object. The *get_canvas* command has now been extended with an *encode* parameter, which is on by default to preserve the current behaviour; if you set it to *false*, *get_canvas* will instead return the actual canvas. This is not of any value in JSON-RPC, however in V8 it allows you to fetch any named canvas into a JavaScript Canvas object.

#### AWS Credential Fetching

Many users who use AWS services from within EC2 instances take advantage of the fact that AWS SDK based tools can retrieve their credentials directly from EC2 instance metadata rather than specifying them explicitly in configuration files. This is more secure and much more flexible, so we have extended the AWS SQS and S3 features used with the Queue Manager functionality to allow automatic credential fetching when you are running on an EC2 instance. To use it, simply omit the credential configuration and RealityServer will automatically try to fetch it. Note that you need to ensure you assign an IAM role to your EC2 instance which has permission to access the services you want to use.
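If you haven't set one of these roles up before, the IAM role you attach to the instance just needs a policy granting the relevant SQS and S3 actions. The snippet below is a minimal illustration with placeholder queue and bucket ARNs and a deliberately narrow set of actions; adjust it to whatever your Queue Manager setup actually uses.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-render-queue"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-render-bucket/*"
    }
  ]
}
```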
#### Client Library Wait for Render

The *RS.Stream.update_camera* method in the client library now supports *wait_for_render* semantics. If this property in the data object is set to true, the returned Promise will not resolve until the camera change is visible in a rendered image. A common problem when implementing a RealityServer application is that after making a change you could briefly get an updated image of the scene in which the change was not yet reflected. This lets you ensure the changes you have made are actually present in the image you are showing.

#### Echo in Configuration Files

As a further convenience for the DevOps types among you, you can now add *echo* statements to your RealityServer configuration files, which can be great for debugging configuration issues. You can try it by adding something like this to your *realityserver.conf* file. Messages are logged to stderr since the full logging system has not started at the time the configuration files are parsed.
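A minimal sketch of the idea is shown below; the comment and quoting conventions of *realityserver.conf* are assumptions here, so match whatever style your existing configuration file already uses.

```
# realityserver.conf (sketch)
echo About to apply custom configuration overrides
# ... your existing directives here ...
echo Custom configuration overrides applied
```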