In React Optimization Techniques for Raytracing on the Web (Part 1), we introduced an operator-overloading scheme for JS. In the second part, we covered the Time Slicing and Streaming Rendering optimization strategies.

Formulas in code are no longer ugly, and the UI thread no longer freezes. By making the first render progressive, users see content sooner. These are all good optimizations.

But we can do more.

Advanced scheme: Schedule

React’s upcoming Concurrent Mode includes the Suspense and Schedule features. What they do is prioritize UI rendering according to the source of the trigger and the importance of the module.

Not all modules of a page are equally important. There are always some modules, such as headings, headers, prices, etc., that are more important than others (such as sidebars, advertising).

Likewise, not every source that triggers a page update is equally urgent. Responding to user input, for example, should take precedence over any other render request. For this reason, Facebook engineers also contributed isInputPending to the Chrome team, an API that reports whether there is currently pending user input, so React can interrupt the current task in time to handle the user’s request.
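As a rough illustration (not React’s actual implementation), a cooperative scheduler can poll this API between units of work and yield whenever input is waiting:

```js
// Minimal sketch of yielding to user input, assuming a Chromium-based browser
// that exposes navigator.scheduling.isInputPending().
function processTasks(taskQueue) {
  while (taskQueue.length > 0) {
    // If the browser reports pending user input, stop and reschedule the
    // remaining work so the input can be handled first.
    if (navigator.scheduling && navigator.scheduling.isInputPending()) {
      setTimeout(() => processTasks(taskQueue), 0);
      return;
    }
    const task = taskQueue.shift();
    task(); // run one unit of work
  }
}
```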

For our ray-tracing scenario, this kind of priority relationship can also be drawn. For example, objects, especially the objects in focus, are obviously more important than the background, so we can spend more light-sampling computation on the pixels they cover.

In particular, the background often doesn’t need its rays recalculated over and over again; it may come out with nearly the same value every time. We can detect these situations, skip them, and put the computing resources into more important pixel locations.

The priority allocation policy varies with the scenario and the requirements. Here I adopted a fairly general approach: compare the Mean Squared Error of each pixel between the previous and the current image, and sort the pixels by error size, largest first. Each pass then takes the first 20,000 and runs ray tracing on them.

In this way, we have added an optimization on top of the ray-tracing algorithm itself. Ray tracing emits many random rays and uses the Monte Carlo method to fit the rendering equation: the simulated statistical average keeps approaching the theoretical value given by the rendering equation. (If you are interested in the Monte Carlo method and how it can be used in game AI, see 40+ Lines of JS Code to Build Your 2048 Game AI.)

By arranging our Monte Carlo sampling positions according to pixel error, we can approximate the theoretical value more efficiently.

First, we can no longer compute every pixel inside a single render function; we need to extract a renderByPosition function, as shown above. With it we can ray trace pixels in priority order instead of mindlessly tracing them in a for loop.
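A rough sketch of its shape (camera, scene and trace are hypothetical helpers standing in for the demo’s ray tracer, not its exact code):

```js
// Sketch of a per-pixel entry point: sample one random ray through pixel (x, y)
// and return its color. `camera`, `scene` and `trace` are assumed helpers.
function renderByPosition(x, y, width, height) {
  // Jitter the sample inside the pixel so repeated calls produce
  // different Monte Carlo samples.
  const u = (x + Math.random()) / width;
  const v = (y + Math.random()) / height;
  const ray = camera.generateRay(u, v);
  const color = trace(ray, scene);      // assumed to return { r, g, b } in [0, 1]
  return [color.r * 255, color.g * 255, color.b * 255, 255]; // RGBA
}
```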

Then we add three new arrays: renderCount, prevImageData and currImageData.

Previously, in the mindless rendering mode, a single numeric variable innerCount applied to all pixels, and we divided by it to get the average. Now, because of the priorities, each pixel may be rendered a different number of times, so each pixel needs its own record.

prevImageData stores the previous color values, and currImageData stores the current color values, so the error between them can be computed.
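A minimal sketch of how these buffers could be set up, assuming an RGBA canvas of width × height (the demo’s actual code may differ):

```js
// Per-pixel bookkeeping for a width × height RGBA canvas.
const pixelCount = width * height;

// How many times each pixel has been ray traced so far.
const renderCount = new Uint32Array(pixelCount);

// Previous and current averaged colors, 4 channels per pixel,
// used later to compute the per-pixel error.
const prevImageData = new Float32Array(pixelCount * 4);
const currImageData = new Float32Array(pixelCount * 4);
```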

The error calculation is simple: implement a function that computes the mean squared error, then for each pixel position compare its color in the previous image with its color in the current image (each color contains four numbers, RGBA), and sort the positions by error size.
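A sketch of that error list, using the buffers above (the sorting direction follows the article: largest error first):

```js
// Mean squared error between two RGBA colors (4 numbers each).
function mse(a, b) {
  let sum = 0;
  for (let i = 0; i < 4; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return sum / 4;
}

// Build a priority list: one entry per pixel, sorted by error, largest first.
function getErrorList(prevImageData, currImageData, pixelCount) {
  const list = [];
  for (let i = 0; i < pixelCount; i++) {
    const prev = prevImageData.slice(i * 4, i * 4 + 4);
    const curr = currImageData.slice(i * 4, i * 4 + 4);
    list.push({ index: i, error: mse(prev, curr) });
  }
  return list.sort((a, b) => b.error - a.error);
}
```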

We also rework renderByPosition so that, besides tracing the pixel, it records and synchronizes the data in renderCount, prevImageData, currImageData and ImageData.data.
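Roughly, the bookkeeping around a single pixel could look like this (a sketch; renderByPosition is the per-pixel tracer sketched earlier, and imageData is assumed to be the canvas ImageData being drawn to the screen):

```js
// Trace one pixel, fold the new sample into its running average,
// and keep all per-pixel records in sync.
function renderPixel(index) {
  const x = index % width;
  const y = Math.floor(index / width);
  const sample = renderByPosition(x, y, width, height);
  const count = renderCount[index];

  for (let c = 0; c < 4; c++) {
    const offset = index * 4 + c;
    // Remember the previous value before updating, for the error calculation.
    prevImageData[offset] = currImageData[offset];
    // Running average over however many samples this pixel has received.
    currImageData[offset] =
      (currImageData[offset] * count + sample[c]) / (count + 1);
    // Keep the on-screen pixels in sync.
    imageData.data[offset] = currImageData[offset];
  }
  renderCount[index] = count + 1;
}
```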

We still use innerCount in the render function to record the total number of full renders. Once it is greater than 2, we have at least two images whose errors can be compared, so instead of recursively calling render, we switch to the scheduleRender function, which traces rays according to priority.
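Sketched out, the hand-off might look like this (renderWholeImage is a hypothetical helper for the mindless full pass; the demo’s actual loop differs):

```js
let innerCount = 0; // total number of full-image passes

function render() {
  renderWholeImage(); // mindless pass: ray trace every pixel once (assumed helper)
  innerCount++;

  if (innerCount > 2) {
    // We now have at least two images to compare, so switch to
    // priority-based rendering.
    scheduleRender();
  } else {
    requestAnimationFrame(render);
  }
}
```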

The scheduleRender function first obtains the error list, takes the 20,000 pixels with the largest errors, and renders them in turn. It also applies Time Slicing and Streaming Rendering to keep the UI smooth.

At the bottom of the scheduleRender function, we count how many times scheduleRender has run. Once the count exceeds 5, we switch back to the render function, which renders the whole image.
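Putting these pieces together, a sketch of the priority-based pass could look like the following (ctx is assumed to be the canvas 2D context; the batch size and pass count follow the article, the time-slice budget of ~10 ms is my assumption):

```js
let scheduleCount = 0;

function scheduleRender() {
  const errorList = getErrorList(prevImageData, currImageData, pixelCount);
  const targets = errorList.slice(0, 20000); // pixels with the largest error
  let cursor = 0;

  function step() {
    const start = performance.now();
    // Time Slicing: only work for a few milliseconds per frame.
    while (cursor < targets.length && performance.now() - start < 10) {
      renderPixel(targets[cursor].index);
      cursor++;
    }
    // Streaming Rendering: show partial progress immediately.
    ctx.putImageData(imageData, 0, 0);

    if (cursor < targets.length) {
      requestAnimationFrame(step);
    } else if (++scheduleCount > 5) {
      scheduleCount = 0;
      render(); // fall back to a whole-image pass to rescue neglected pixels
    } else {
      scheduleRender();
    }
  }
  requestAnimationFrame(step);
}
```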

This is because the Monte Carlo simulation is random, and prioritizing only by the error between the previous and current images means some pixels will probabilistically be neglected. The symptom is that the picture becomes uneven, as if it were full of dead pixels. By rendering the whole image at a certain frequency, the unlucky pixels get a chance to be picked up again.

By alternating between scheduleRender and render, we eliminate statistical bias as much as possible, achieving priority partitioning while keeping the overall rendering even.

The image above shows the result after 1,000 seconds of rendering. The top half is the rendered image and the bottom half shows how many times each pixel has been rendered: the more times, the whiter the color. It is easy to see where our computing resources are allocated.

As we can see from the figure, the lighting at boundaries and in shadows is relatively complex, so we concentrate on fitting the lighting there (whiter). The background and the sky, on the other hand, are a single color, so less light-computing resource is invested in them (darker).

As this goes on, the values in the error list get closer and closer to zero, which means the fit to the theoretical value keeps improving.

As shown above, with the Schedule + MSE optimization strategy, we obtain locally high-definition images in a shorter time, instead of waiting a long time for the whole image to become high-definition.

Advanced scheme: psychological acceleration

Improving rendering performance is not the only way to optimize.

What is physically fast and what feels fast to a person are sometimes not the same thing.

We did the Schedule rendering in the update phase because we needed at least two images to calculate the errors.

But think about it: can’t we prioritize the very first render as well?

From our experience with human-made images, it is easy to conclude that what is in the center of an image is usually more important than what is at the edges. Yet our Streaming Rendering mimics React SSR, which renders HTML from top to bottom.

Rendering an HTML document from top to bottom is fine, but since we are rendering an image, we should start in the middle and expand upward and downward.

As shown above, by simply changing the starting position and direction of rendering, users are more likely to first see something they are interested in, instead of first seeing the empty sky.
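One way to sketch this reordering is to start at the vertical center and alternate outward row by row (a sketch, not the demo’s exact code):

```js
// Produce row indices starting from the middle of the image and expanding
// alternately downward and upward, so the center streams in first.
function centerOutRows(height) {
  const rows = [];
  const mid = Math.floor(height / 2);
  for (let offset = 0; rows.length < height; offset++) {
    if (mid + offset < height) rows.push(mid + offset);            // go down
    if (offset > 0 && mid - offset >= 0) rows.push(mid - offset);  // go up
  }
  return rows;
}

// e.g. for a 5-row image: centerOutRows(5) -> [2, 3, 1, 4, 0]
```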

In addition, human vision automatically completes patterns. We don’t have to render the image in full; as long as the outline of an object can be matched by the eye, the user knows what the image contains. So we can render the image more coarsely, skipping pixels at a certain interval.
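A minimal sketch of “render only half the pixels” is a checkerboard pass: trace only the pixels where x + y is even, and fill in the rest later (this pattern is my interpretation of the interval; the demo’s exact pattern may differ):

```js
// Coarse first pass: trace only half the pixels in a checkerboard pattern.
// The skipped pixels can be filled in by a later pass.
function coarsePassIndices(width, height) {
  const indices = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if ((x + y) % 2 === 0) indices.push(y * width + x);
    }
  }
  return indices;
}
```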

The image above renders only half the pixels in half the time, and we can still make out what is in the picture. In this way we can quickly generate a rough image that gives the user the overall visual layout; combined with the previous trick, we then expand from the middle and fill in the remaining pixels to refine the content for a better visual experience.

As shown above, the user now not only sees the object at the center of the view sooner, but also gets a grasp of the picture as a whole earlier. Our Schedule priority policy has also been successfully applied to the first render phase.

Conclusion

As a review, we can see that the render optimization strategies used in React/Vue also apply elsewhere.

+ - * / compiles into function calls, just as JSX compiles into React.createElement function calls.

Time Slicing can be used wherever long tasks would block the UI main thread.

Streaming Rendering can be used wherever users would otherwise wait a long time for a complete render.

Wherever there is a priority division between modules, Schedule can be applied.

These are the problems I solved while learning ray tracing. For now there are no plans to turn them into open-source Babel plugins or libraries, so I am just sharing the ideas here and hope they help some readers.

It’s worth noting that we haven’t exhausted the available optimizations; the ones presented here are just a few of them. For example, since each pixel’s ray tracing is independent, parallelizing the process on a Web Worker or the GPU can bring a significant improvement in efficiency. If you’re interested, you can explore it yourself.
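As a starting point for the Web Worker direction, one could split the image into horizontal bands and trace each band in its own worker (a sketch only; tracer.js is a hypothetical worker file, not part of the demo):

```js
// Sketch: one Web Worker per horizontal band. Assumes a hypothetical tracer.js
// that receives a band description and posts back { yStart, pixels }, where
// pixels is a Uint8ClampedArray of RGBA rows for that band.
const WORKERS = navigator.hardwareConcurrency || 4;
const bandHeight = Math.ceil(height / WORKERS);

for (let i = 0; i < WORKERS; i++) {
  const worker = new Worker('tracer.js');
  worker.postMessage({
    yStart: i * bandHeight,
    yEnd: Math.min((i + 1) * bandHeight, height),
    width,
    height,
  });
  worker.onmessage = (e) => {
    // Copy the finished band onto the canvas as soon as it arrives.
    const { yStart, pixels } = e.data;
    const band = new ImageData(pixels, width);
    ctx.putImageData(band, 0, yStart);
  };
}
```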

Click to see the online DEMO above. You can also experience it on your phone.