Preface
In 3D data-center visualization, network (IP) cameras are used more and more widely in monitoring systems as networked video surveillance continues to spread; the arrival of the high-definition era in particular has accelerated their development and application.
As the number of surveillance cameras grows, monitoring systems face serious problems: massive video feeds are scattered and isolated, viewing angles are incomplete, and camera positions are unclear. How to manage cameras and visualize live video more intuitively and clearly has therefore become an important topic for increasing the value of video applications, and this project was born to address exactly that problem.

The broader question is how to better manage and exploit the huge amounts of information collected by front-end devices in the service of public safety. Given the trend toward technology convergence, combining advanced video fusion with real-time, dynamic visual monitoring in 3D scenes, so as to identify, analyze, and use massive public-safety data more effectively, has become the direction in which video-surveillance platforms are evolving. In the surveillance industry today, leaders such as Hikvision and Dahua can plan the layout of cameras in parks and other public places in this way: the parameters of the deployed Hikvision or Dahua cameras are applied in the system model, and the visual range and monitoring direction of each camera can be adjusted, making it much easier to understand each camera's coverage area and viewing angle intuitively.
Here is the project address: WebGL custom 3D camera surveillance model based on HTML5
Preview of the results
Overall scene – Camera rendering
Local scene – Camera rendering
Generating the camera model and scene in code
The camera model used in the project was built in 3dMax, which can export OBJ and MTL files; in HT, the camera model in the 3D scene is generated by parsing these OBJ and MTL files.
The scene in the project was built with HT's 3D editor. Some models in the scene were created in HT and others in 3dMax and then imported into HT. The white highlights on the ground are rendered with a ground map in HT's 3D editor.
The cone model
A 3D model is composed of the most basic triangular faces. For example, a rectangle can be composed of 2 triangles, a cube of 6 faces or 12 triangles, and so on; a more complex model is composed of many small triangles. A 3D model definition is therefore a description of all the triangles that make up the model: each triangle is defined by three vertices, and each vertex is determined by its x, y, and z coordinates. HT uses the right-hand rule to determine the front face of the triangle formed by the three vertices.
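As a minimal plain-JavaScript illustration (independent of HT's API), a rectangle can be described with a flat vertex array and a triangle-index array like this:

```javascript
// A rectangle in the z = 0 plane, described as 4 vertices and 2 triangles.
// Each vertex is an (x, y, z) triple; each triangle is 3 indices into the vertex list.
var vs = [
    0, 0, 0,   // vertex 0: bottom-left
    1, 0, 0,   // vertex 1: bottom-right
    1, 1, 0,   // vertex 2: top-right
    0, 1, 0    // vertex 3: top-left
];
var is = [
    0, 1, 2,   // first triangle
    0, 2, 3    // second triangle
];
console.log(vs.length / 3); // 4 vertices
console.log(is.length / 3); // 2 triangles
```

Sharing vertices between the two triangles through the index array is exactly why larger models store vertices and indices separately.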
HT registers custom 3D models through the ht.Default.setShape3dModel(name, model) function, and the cone in front of the camera is generated this way. The cone can be regarded as consisting of 5 vertices and 6 triangles, as shown below:
ht.Default.setShape3dModel(name, model)
The model parameter is a JSON-type object, where vs is the vertex coordinate array, is is the index array, and uv is the texture-coordinate array. To define a face separately, you can use bottom_vs, bottom_is, bottom_uv, top_vs, top_is, top_uv, etc., and then control that face individually through shape3d.top.*, shape3d.bottom.*, and so on.
Here is the code I used to define the model:
```javascript
var setRangeModel = function(camera, fovy) {
    var fovyVal = 0.5 * fovy;
    // 5 vertices: the apex at the origin plus the 4 corners of the base
    var pointArr = [
               0,        0,   0,
        -fovyVal,  fovyVal, 0.5,
         fovyVal,  fovyVal, 0.5,
         fovyVal, -fovyVal, 0.5,
        -fovyVal, -fovyVal, 0.5
    ];
    ht.Default.setShape3dModel(camera.getTag(), [{
        vs: pointArr,                               // all 5 vertices
        is: [2, 1, 0, 4, 1, 0, 4, 3, 0, 3, 2, 0],  // the 4 side triangles
        from_vs: pointArr.slice(3, 15),             // the 4 base vertices
        from_is: [3, 1, 0, 3, 2, 1],                // the base as 2 triangles
        from_uv: [0, 0, 1, 0, 1, 1, 0, 1]           // texture coordinates for the base
    }]);
};
```
I use the current camera's tag value as the model name. In HT, the tag uniquely identifies an element, and users can customize its value. pointArr records the coordinates of the five vertices of the pentahedron. from_vs, from_is, and from_uv construct the bottom face of the pentahedron separately; this face is used to display the image captured by the current camera.
The wf.geometry property of the cone's style is also set in the code. This property adds a wireframe to the model, enhancing its three-dimensional look; the color and thickness of the wireframe can be adjusted with wf.color, wf.width, and other parameters.
The code for setting the style property of the relevant model is as follows:
```javascript
rangeNode.s({
    'shape3d': cameraName,                      // the custom cone model registered above
    'shape3d.color': 'rgba(52, 148, 252, 0.3)', // cone model color
    'shape3d.reverse.flip': true,               // show the front-side content on the reverse side
    'shape3d.light': false,                     // not affected by scene lighting
    '3d.movable': false,                        // the cone cannot be moved
    'wf.geometry': true                         // display the model's wireframe
});
```
Camera image generation principle
Perspective projection
Perspective projection is a method of drawing or rendering on a two-dimensional plane (paper or canvas) so as to obtain a visual effect close to a real three-dimensional object; it is also simply called perspective. In perspective, distant objects appear smaller, near objects appear larger, and parallel lines appear to converge, matching the way humans observe the world.
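The shrinking of distant objects falls out of the projection math directly. The sketch below is illustrative only (HT's Graph3dView performs this internally): with the eye at the origin looking down the z-axis, a point (x, y, z) projects onto the image plane at (f·x/z, f·y/z), where f is the distance to the image plane.

```javascript
// Minimal pinhole-style perspective projection of a 3D point onto a 2D plane.
function project(point, f) {
    var x = point[0], y = point[1], z = point[2];
    return [f * x / z, f * y / z];
}

// The same 2-unit-tall object appears half as large when twice as far away:
console.log(project([0, 2, 10], 1)); // [0, 0.2]
console.log(project([0, 2, 20], 1)); // [0, 0.1]
```

Dividing by z is what makes far objects small, and it is also why the visible region is a frustum rather than a box.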
As the figure above shows, a perspective projection displays only the contents of the view frustum on screen, so Graph3dView provides the eye, center, up, far, near, fovy, and aspect parameters to control the exact extent of the frustum. See the 3D manual of HT for Web for details on perspective projection.
Following the description above, after a camera is initialized in this project, the current eye and center positions of the 3D scene are cached. The scene's eye and center are then set to the camera's position and orientation, and a screenshot of the 3D scene at that moment is taken: this screenshot is the current camera's monitoring image. Finally, the scene's eye and center are restored to the cached positions. With this method a snapshot can be taken from any position in the 3D scene, so camera monitoring images can be generated in real time.
The relevant pseudocode is as follows:
```javascript
function getFrontImg(camera, rangeNode) {
    var oldEye = g3d.getEye();        // cache the current viewpoint
    var oldCenter = g3d.getCenter();
    var oldFovy = g3d.getFovy();
    g3d.setEye(/* camera position */);
    g3d.setCenter(/* camera orientation */);
    g3d.setFovy(/* camera opening angle */);
    g3d.setAspect(/* camera aspect ratio */);
    g3d.validateImp();
    var img = g3d.toDataURL();        // the camera's monitoring image
    g3d.setEye(oldEye);               // restore the cached viewpoint
    g3d.setCenter(oldCenter);
    g3d.setFovy(oldFovy);
    g3d.setAspect(undefined);
    g3d.validateImp();
}
```
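The save-modify-restore pattern in that pseudocode can be exercised with a plain mock object; the names `snapshotAt`, `view`, and `render` below are illustrative stand-ins, not HT APIs:

```javascript
// Generic save/modify/restore pattern used for the snapshot. The real code
// operates on a Graph3dView; this mock only illustrates the state handling.
function snapshotAt(view, cameraState) {
    var oldEye = view.eye, oldCenter = view.center; // 1. cache current state
    view.eye = cameraState.eye;                     // 2. move to the camera
    view.center = cameraState.center;
    var img = view.render();                        // 3. grab the image here
    view.eye = oldEye;                              // 4. restore the cached state
    view.center = oldCenter;
    return img;
}

var view = {
    eye: [0, 100, 100], center: [0, 0, 0],
    render: function() { return 'img@' + this.eye.join(','); }
};
var img = snapshotAt(view, { eye: [5, 5, 5], center: [5, 5, 10] });
console.log(img);       // "img@5,5,5": rendered from the camera position
console.log(view.eye);  // [0, 100, 100]: main-view state restored
```

The restore step is what keeps the user-facing viewpoint untouched; the off-screen approach described next removes even that temporary mutation.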
Testing showed that capturing images this way caused noticeable lag on the page, because it takes a screenshot of the entire current 3D scene: the scene is fairly large, so toDataURL is very slow to produce the image data. I therefore switched to an off-screen approach:

1. Create a new 3D scene with a width and height of 200px whose content is identical to the main-screen scene. In HT, a new scene is created with new ht.graph3d.Graph3dView(dataModel); since the dataModel holds all the primitives of the current scene, the main-screen and off-screen 3D scenes share the same dataModel, which keeps the two scenes consistent.
2. Position the new scene so that it is invisible on screen, and add it to the DOM.
3. Capture images from the off-screen scene instead of the main screen. The off-screen image is much smaller than a main-screen capture, and the off-screen capture does not need to save and restore the original eye and center positions, because the main screen's eye and center are never changed. This removes the overhead of switching back and forth and greatly speeds up image capture.
Here is the code for implementing this method:
```javascript
function getFrontImg(camera, rangeNode) {
    // Hide the cone and its wireframe so they do not appear in the capture
    rangeNode.s('shape3d.from.visible', false);
    rangeNode.s('shape3d.visible', false);
    rangeNode.s('wf.geometry', false);
    var cameraP3 = camera.p3();
    var cameraR3 = camera.r3();
    var cameraS3 = camera.s3();
    var updateScreen = function() {
        // Draw the off-screen canvas into the canvas stored on the camera
        demoUtil.Canvas2dRender(camera, outScreenG3d.getCanvas());
        rangeNode.s({
            'shape3d.from.image': camera.a('canvas')
        });
        rangeNode.s('shape3d.from.visible', true);
        rangeNode.s('shape3d.visible', true);
        rangeNode.s('wf.geometry', true);
    };
    // The lens sits at the front of the camera model
    var realP3 = [cameraP3[0], cameraP3[1] + cameraS3[1] / 2, cameraP3[2] + cameraS3[2] / 2];
    var realEye = demoUtil.getCenter(cameraP3, realP3, cameraR3);

    outScreenG3d.setEye(realEye);
    outScreenG3d.setCenter(demoUtil.getCenter(realEye, [realEye[0], realEye[1], realEye[2] + 5], cameraR3));
    outScreenG3d.setFovy(camera.a('fovy'));
    outScreenG3d.validate();
    updateScreen();
}
```
The getCenter method in the code above returns the position of point B after it is rotated around point A by the given angles. It uses the ht.Math classes encapsulated in HT; the code is as follows:
```javascript
// pointA is the center of rotation
// pointB is the point to be rotated
// r3 is the rotation-angle array [xAngle, yAngle, zAngle], i.e. the rotation around the x, y, and z axes
var getCenter = function(pointA, pointB, r3) {
    var mtrx = new ht.Math.Matrix4();
    var euler = new ht.Math.Euler();
    var v1 = new ht.Math.Vector3();
    var v2 = new ht.Math.Vector3();

    mtrx.makeRotationFromEuler(euler.set(r3[0], r3[1], r3[2]));

    v1.fromArray(pointB).sub(v2.fromArray(pointA)); // v1 = AB
    v2.copy(v1).applyMatrix4(mtrx);                 // v2 = AB after rotation
    v2.sub(v1);                                     // v2 = rotated AB minus original AB

    return [pointB[0] + v2.x, pointB[1] + v2.y, pointB[2] + v2.z];
};
```
The piece of vector arithmetic used is the following identity:

OB - OA = AB
The method is divided into the following steps:
1. var mtrx = new ht.Math.Matrix4() creates a transformation matrix; mtrx.makeRotationFromEuler(euler.set(r3[0], r3[1], r3[2])) turns it into the rotation matrix for rotations of r3[0], r3[1], and r3[2] around the x, y, and z axes.
2. new ht.Math.Vector3() creates the vectors v1 and v2.
3. v1.fromArray(pointB) builds a vector from the origin to pointB.
4. v2.fromArray(pointA) builds a vector from the origin to pointA.
5. v1.fromArray(pointB).sub(v2.fromArray(pointA)) computes OB - OA, i.e. the vector AB; v1 now holds AB.
6. v2.copy(v1).applyMatrix4(mtrx) copies AB into v2 and applies the rotation matrix, so v2 becomes the rotated AB.
7. v2.sub(v1) then yields the displacement from the original point B to its rotated position.
8. By the vector formula, the rotated point is [pointB[0] + v2.x, pointB[1] + v2.y, pointB[2] + v2.z].
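Restricted to rotation about the y-axis, the same steps can be reproduced in plain JavaScript without ht.Math; the helper name rotateAroundY below is illustrative:

```javascript
// Mirrors the steps above for a y-axis-only rotation: build AB, rotate it,
// take the difference, and add the difference back to pointB.
function rotateAroundY(pointA, pointB, angle) {
    // v1 = AB = OB - OA (only x and z matter for a y-axis rotation)
    var v1x = pointB[0] - pointA[0];
    var v1z = pointB[2] - pointA[2];
    // v2 = AB rotated about the y-axis
    var cos = Math.cos(angle), sin = Math.sin(angle);
    var v2x =  v1x * cos + v1z * sin;
    var v2z = -v1x * sin + v1z * cos;
    // rotated point = pointB + (v2 - v1); y is unchanged
    return [pointB[0] + (v2x - v1x), pointB[1], pointB[2] + (v2z - v1z)];
}

// Rotating pointB = [1, 0, 0] around pointA = [0, 0, 0] by 90 degrees
// moves it (up to floating-point error) to [0, 0, -1]:
console.log(rotateAroundY([0, 0, 0], [1, 0, 0], Math.PI / 2));
```

Running the same construction through a full 3-axis rotation matrix is exactly what the ht.Math version does.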
The 3D scene in this project is actually the VR example from Hightopo's HT industrial-internet booth at the recent Guizhou Digital Expo. The public has high expectations for VR/AR, but progress still has to come step by step; even the first product of Magic Leap, which has raised $2.3 billion in financing, fell far short of the hype. That topic will be discussed another time. Here is a video of the scene:
Pasting the 2D image onto the 3D model
From the previous step we can get a screenshot taken from the current camera's position, so how do we paste this image onto the bottom face of the pentahedron built above? As described earlier, from_vs and from_is build the bottom rectangle, so in HT you can set the pentahedron's style property shape3d.from.image to the current image; the from_uv array defines the texture-mapping positions, as shown below:
Here is the code that defines the texture-mapping positions, from_uv:
```javascript
from_uv: [0, 0, 1, 0, 1, 1, 0, 1]
```
from_uv is the array that defines where the texture maps onto the face; with it, as explained above, a 2D image can be pasted onto the from face of the 3D model.
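Concretely, the eight numbers pair up into one (u, v) texture coordinate per base vertex, with (0, 0) and (1, 1) being opposite corners of the image, as is conventional for texture coordinates:

```javascript
// The from_uv array assigns one (u, v) texture coordinate to each of the
// 4 base vertices; u and v run from 0 to 1 across the image.
var from_uv = [0, 0, 1, 0, 1, 1, 0, 1];
var pairs = [];
for (var i = 0; i < from_uv.length; i += 2) {
    pairs.push([from_uv[i], from_uv[i + 1]]);
}
console.log(pairs); // the 4 corners of the image: [0,0], [1,0], [1,1], [0,1]
```

Because the four pairs cover the full unit square, the camera's screenshot is stretched over the entire bottom face without cropping.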
The control panel
In HT, use new ht.widget.Panel() to generate the following panel:
Each camera in the panel has a module presenting its current monitoring image. This area is in fact also a canvas, the same canvas that shows the monitoring image in front of the cone in the scene. Each camera has its own canvas storing its real-time monitoring picture, which makes it possible to paste that canvas anywhere. The code for adding the canvas to the panel is as follows:
```javascript
formPane.addRow([{
    element: camera.a('canvas')
}], [240], 240);
```
The code stores the canvas node under the attr property of the camera primitive, then obtains the current camera's picture via camera.a('canvas').
Each control in the panel is added via formPane.addRow; see the form manual of HT for Web for details. The formPane is then added to the panel through ht.widget.Panel; for details, see the panel manual of HT for Web.
Part of the control code is as follows:
```javascript
formPane.addRow(['rotateY', {
    slider: {
        min: -Math.PI,
        max: Math.PI,
        value: r3[1],
        onValueChanged: function() {
            var cameraR3 = camera.r3();
            camera.r3([cameraR3[0], this.getValue(), cameraR3[2]]);
            rangeNode.r3([cameraR3[0], this.getValue(), cameraR3[2]]);
            getFrontImg(camera, rangeNode);
        }
    }
}], [0.1, 0.15]);
```
The control panel adds its control elements through addRow; the code above adds the control for rotating the camera around the y-axis. onValueChanged is called whenever the slider's value changes, and camera.r3() returns the camera's current rotation. Since we rotate around the y-axis, the x and z angles stay the same and only the y angle changes: camera.r3([cameraR3[0], this.getValue(), cameraR3[2]]) and rangeNode.r3([cameraR3[0], this.getValue(), cameraR3[2]]) set the rotation of the camera and of the cone in front of it, and the previously encapsulated getFrontImg function is then called to fetch the real-time image at the new angle.
In the panel, titleBackground: 'rgba(230, 230, 230, 0.4)' gives the title a translucent background, and titleColor, titleHeight, and the other title parameters can be set as well. separatorColor, separatorWidth, and the other separator parameters control the color and width of the divider lines between the inner panels. Finally, panel.setPositionRelativeTo('rightTop') places the panel in the top-right corner, and document.body.appendChild(panel.getView()) adds its outermost div to the page; panel.getView() retrieves the panel's outermost DOM node.
The specific initialization panel code is as follows:
```javascript
function initPanel() {
    var panel = new ht.widget.Panel();
    var config = {
        titleBackground: 'rgba(230, 230, 230, 0.4)',
        titleColor: 'rgb(0, 0, 0)',
        titleHeight: 30,
        separatorColor: 'rgb(67, 175, 241)',
        separatorWidth: 1,
        exclusive: true,
        items: []
    };
    cameraArr.forEach(function(data, num) {
        var camera = data['camera'];
        var rangeNode = data['rangeNode'];
        var formPane = new ht.widget.FormPane();
        initFormPane(formPane, camera, rangeNode);
        config.items.push({
            title: "camera" + (num + 1),
            titleBackground: 'rgba(230, 230, 230, 0.4)',
            titleColor: 'rgb(0, 0, 0)',
            titleHeight: 30,
            separatorColor: 'rgb(67, 175, 241)',
            separatorWidth: 1,
            content: formPane,
            flowLayout: true,
            contentHeight: 400,
            width: 250,
            expanded: num === 0
        });
    });
    panel.setConfig(config);
    panel.setPositionRelativeTo('rightTop');
    document.body.appendChild(panel.getView());
    window.addEventListener("resize", function() {
        panel.invalidate();
    });
}
```
In the control panel you can adjust the camera's orientation, the coverage range it monitors, the length of the cone in front of it, and so on, with the camera's image regenerated in real time, as the following screenshots of the running system show:
Finally, here is the project's 3D scene running in combination with the VR technology of HT for Web: