I am participating in the Juejin Community Game Creativity Submission Contest. For details, see: Game Creativity Submission Contest.
With the popularity of Genshin Impact, anime-style games have become more and more important in the domestic market. An anime-style project is ultimately rooted in Japanese animation, so the end goal is to reproduce the 2D character art (the standing illustration) as faithfully as possible, unlike PBR, which pursues physical correctness. As long as it looks good and matches the character art, it counts as a success. So our goal here is to reproduce the standing illustration.
Toon rendering has a number of effects unique to the style. Based on my own understanding of the anime aesthetic and Japanese animation, I have collected some of these signature effects, along with notes from the pitfalls I climbed out of, and I share my own implementation approach for each. I hope it helps. (Note: all code ideas in this article are based on the Built-in pipeline.)
1. Eyebrows and eyes visible through the hair
This is a standard effect that anime-style art almost always requires, and it is also an important point in animation compositing. There are currently two approaches: depth-based and stencil-based.
Depth-based approach (disadvantage: one extra pass, one extra draw call)
1) Draw the face and hair first, leaving the default opaque queue: "Queue" = "Geometry".
2) Draw the eyebrows after the face and hair (two passes are needed) with "Queue" = "Geometry+10".
3) First eyebrow pass: { ZTest LEqual ... } (draws the part not covered by the hair).
4) Second eyebrow pass: { ZTest GEqual ... } (draws the part occluded by the hair).
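As a reference, here is a minimal ShaderLab sketch of the two eyebrow passes described above. The shader name, the single color property, and the faded second pass are illustrative assumptions; the essential parts are the queue tag and the two ZTest states:

```
// A minimal sketch, not a production shader
Shader "Toon/EyeBrowOverHair"
{
    Properties { _Color ("Brow Color", Color) = (0.2, 0.1, 0.1, 1) }
    SubShader
    {
        // Drawn after the face and hair (which stay at the default "Geometry" queue)
        Tags { "Queue" = "Geometry+10" "RenderType" = "Opaque" }

        CGINCLUDE
        #include "UnityCG.cginc"
        fixed4 _Color;
        float4 vert (float4 vertex : POSITION) : SV_POSITION
        {
            return UnityObjectToClipPos(vertex);
        }
        ENDCG

        // Pass 1: the part of the brow NOT occluded by the hair
        Pass
        {
            ZTest LEqual
            ZWrite On
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            fixed4 frag () : SV_Target { return _Color; }
            ENDCG
        }

        // Pass 2: the part of the brow that IS behind the hair, drawn faded so it shows through
        Pass
        {
            ZTest GEqual
            ZWrite Off
            Blend SrcAlpha OneMinusSrcAlpha
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            fixed4 frag () : SV_Target { return fixed4(_Color.rgb, 0.5); }
            ENDCG
        }
    }
}
```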
Stencil-based approach
1) Draw the eyebrows first with "Queue" = "Geometry-10", and set Stencil { Ref 2 Comp GEqual Pass Replace Fail Keep } on them.
2) Then draw the face and hair; on the hair set Stencil { Ref 1 Comp Greater Pass Keep Fail Keep }.
Note: the stencil buffer defaults to 0 when nothing is set.
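A minimal sketch of the two stencil blocks (the rest of each shader is omitted; the queue tags come from the steps above). The eyebrows write 2 into the stencil buffer; the hair pass only survives where the stencil is still 0, so hair fragments covering the brows are discarded and the brows stay visible:

```
// Eyebrow shader (drawn first, "Queue" = "Geometry-10"); only the stencil state is shown
SubShader
{
    Tags { "Queue" = "Geometry-10" "RenderType" = "Opaque" }
    Pass
    {
        Stencil
        {
            Ref 2
            Comp GEqual    // 2 >= 0 (the cleared stencil value), so the test always passes
            Pass Replace   // write 2 wherever a brow pixel lands
            Fail Keep
        }
        // ... normal eyebrow shading ...
    }
}

// Hair shader (drawn afterwards, default "Queue" = "Geometry"); only the stencil state is shown
SubShader
{
    Tags { "Queue" = "Geometry" "RenderType" = "Opaque" }
    Pass
    {
        Stencil
        {
            Ref 1
            Comp Greater   // 1 > 0 passes on untouched pixels; 1 > 2 fails on brow pixels
            Pass Keep
            Fail Keep
        }
        // ... normal hair shading ...
    }
}
```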
2. Custom control of the Bloom area
This is a fairly common requirement. In many anime-style games only the face (or other chosen areas) should bloom, while brighter areas on the body must not. Writing your own Bloom effect for the post stage makes this much easier; here the effect is integrated by modifying Unity's default Post Processing Stack v2 (PPSv2), since many studios use Unity's built-in post-processing. It is convenient and the effects are easy to tune.
Implementation idea:
1. Render the character normally and store the bloom mask in the alpha channel by writing outColor.a = (black-and-white mask area), then let it flow down the pipeline into the color-buffer RT used by the post-processing stage.
2. Run the normal post-process Bloom, but modify the Bloom shader file so that it applies the alpha value of the current screen RT.
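A minimal sketch of both steps, assuming the camera target keeps an alpha channel and nothing overwrites it afterwards. ShadeCharacter and _BloomMaskTex are illustrative placeholders, and the second snippet only shows conceptually where a patched copy of the PPSv2 Bloom prefilter would apply the alpha; it is not the actual PPSv2 source:

```
// Step 1: character fragment shader, carry the bloom mask in the alpha channel
fixed4 fragCharacter (v2f i) : SV_Target
{
    fixed3 shadedColor = ShadeCharacter(i);            // your existing toon shading (placeholder)
    fixed  bloomMask   = tex2D(_BloomMaskTex, i.uv).r; // white = allowed to bloom (placeholder texture)
    return fixed4(shadedColor, bloomMask);             // alpha flows down to the post-processing RT
}

// Step 2: patched Bloom prefilter (schematic), weight the color by the mask stored in alpha
half4 fragPrefilterPatched (v2f i) : SV_Target
{
    half4 color = tex2D(_MainTex, i.uv);   // current screen RT, alpha = bloom mask
    color.rgb *= color.a;                  // only masked areas feed the bloom chain
    // ... the stack's existing threshold / downsample code continues here ...
    return color;
}
```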
3. Depth-based bangs shadow on the face
For this kind of anime-style project, most domestic games simply disable hair shadows to dodge the problem. That loses one layer of shadowing and a bit of depth, although players hardly notice. If you use Unity's built-in shadow map it is hard to control, because the shadow is computed from the light direction: a bangs shadow that looks right with the light coming from the left may fall in the wrong place on the face once the light comes from the right. Because the light direction is hard to control, you cannot reliably get the whole bangs casting onto the face, and the result looks thin and papery. So, following the article linked below, the shadow is done based on depth instead, which works nicely.
Liushuo: [Unity URP] Implementing bangs shadows in toon rendering with a Render Feature
In the asset I found, the hair and eyes are on the same mesh, so the eye region also gets drawn into the shadow mask; in an actual project they would of course be separated.
General idea:
1) In the first pass, draw the face and eyes (all head meshes except the hair), writing depth only.
2) In the second pass, draw the hair as a black-and-white mask (keeping only the clean hair region).
3) In the face shader, sample the mask from step 2), offsetting the sample position along the light direction in camera space to obtain the bangs shadow.
Code implementation (Built-in pipeline):
C# part that draws the hair mask:
```
using UnityEngine;
using UnityEngine.Rendering;

public class HairMaskGenerate : MonoBehaviour
{
    public Renderer faceRenderer1;
    public Renderer faceRenderer2;
    public Renderer eyeBrowRenderer;
    public Renderer eyeRenderer;
    public Renderer hairRenderer;
    public Material hairMaskMaterial;

    private CommandBuffer cmb = null;
    private RenderTexture hairMaskRT = null;
    private Camera mRTGenerateCamera;

    void Start()
    {
        mRTGenerateCamera = GetComponent<Camera>();
        cmb = new CommandBuffer();
        cmb.name = "Cmb_DrawHairMask";
        hairMaskRT = new RenderTexture(mRTGenerateCamera.pixelWidth, mRTGenerateCamera.pixelHeight, 24);

        cmb.SetRenderTarget(hairMaskRT);
        cmb.ClearRenderTarget(true, true, Color.black);
        // Pass 0: depth only, for everything that can occlude the hair
        cmb.DrawRenderer(faceRenderer1, hairMaskMaterial, 0, 0);
        cmb.DrawRenderer(faceRenderer2, hairMaskMaterial, 0, 0);
        cmb.DrawRenderer(eyeBrowRenderer, hairMaskMaterial, 0, 0);
        //cmb.DrawRenderer(eyeRenderer, hairMaskMaterial, 0, 0);
        // Pass 1: writes the white hair mask
        cmb.DrawRenderer(hairRenderer, hairMaskMaterial, 0, 1);
        mRTGenerateCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cmb);
    }

    void Update()
    {
        // Keep this camera aligned with the main camera (position, rotation, projection)
        mRTGenerateCamera.CopyFrom(Camera.main);
        mRTGenerateCamera.farClipPlane = Camera.main.farClipPlane;
        mRTGenerateCamera.nearClipPlane = Camera.main.nearClipPlane;
        mRTGenerateCamera.fieldOfView = Camera.main.fieldOfView;
        Shader.SetGlobalTexture("_FaceShadow", hairMaskRT);
    }
}
```
Shader that draws the hair mask:
Shader "Unlit/HairMask" { Properties { _MainTex ("Texture", 2D) = "white" {} } SubShader { Tags { "RenderType"="Opaque" } LOD 100 Pass { ColorMask 0 ZTest LEqual ZWrite On CGPROGRAM #pragma vertex vert #pragma fragment frag // make fog work #pragma multi_compile_fog #include "UnityCG.cginc" struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; }; struct v2f { float2 uv : TEXCOORD0; UNITY_FOG_COORDS(1) float4 vertex : SV_POSITION; }; sampler2D _MainTex; float4 _MainTex_ST; v2f vert (appdata v) { v2f o; o.vertex = UnityObjectToClipPos(v.vertex); o.uv = TRANSFORM_TEX(v.uv, _MainTex); UNITY_TRANSFER_FOG(o,o.vertex); return o; } fixed4 frag (v2f i) : SV_Target { // sample the texture fixed4 col = tex2D(_MainTex, i.uv); // apply fog UNITY_APPLY_FOG(i.fogCoord, col); Return fixed4,0,0,1 (0); } ENDCG } Pass { ZTest Less ZWrite Off CGPROGRAM #pragma vertex vert #pragma fragment frag // make fog work #pragma multi_compile_fog #include "UnityCG.cginc" struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; }; struct v2f { float2 uv : TEXCOORD0; UNITY_FOG_COORDS(1) float4 vertex : SV_POSITION; }; sampler2D _MainTex; float4 _MainTex_ST; v2f vert (appdata v) { v2f o; o.vertex = UnityObjectToClipPos(v.vertex); o.uv = TRANSFORM_TEX(v.uv, _MainTex); UNITY_TRANSFER_FOG(o,o.vertex); return o; } fixed4 frag (v2f i) : SV_Target { // sample the texture fixed4 col = tex2D(_MainTex, i.uv); // apply fog UNITY_APPLY_FOG(i.fogCoord, col); Return fixed4,1,1,1 (1); } ENDCG } } }Copy the code
Face shader (bangs shadow part):
```
half hairShadow = 1.0;
#if USE_SUPER_SHADOW
    // Screen-space UV of the current fragment
    float2 scrUV = input.scrPos.xy / input.scrPos.w;
    float4 scaledScreenParams = _ScreenParams;

    // Light direction in view space
    float3 viewLightDir = normalize(input.viewLightDir) * (1.0 / input.ndcW);

    // Offset the sample point along the light direction; _HairShadowDistace controls the distance
    float2 samplingPoint = scrUV + _HairShadowDistace * viewLightDir.xy
                         * float2(1.0 / scaledScreenParams.x, 1.0 / scaledScreenParams.y);

    // The mask is 1 where hair was drawn, so a value of 1 means this pixel is shadowed by the bangs
    hairShadow = tex2D(_FaceShadow, samplingPoint).r;
#endif
half4 color = lerp(diffuse, diffuse * _ShadowColor.xyz, hairShadow);
```
4. Depth-based screen-space equal-width rim light
With the traditional NdotV rim light, flat surfaces whose normals barely change can get flooded with large, unwanted bright areas. That makes sense, since the normals of the whole surface point in the same direction, and you also cannot get an equal-width rim that way. Hence the depth-based, screen-space equal-width rim light. It can also be blended (lerp) with the traditional rim for an even better result.
Basic idea: the camera renders a depth texture; in the object's shader, sample that depth at a screen position offset to one side and compare it with the fragment's own depth. A large difference marks the silhouette and produces the rim.
Enable the depth texture on the camera:
```
MainCam.depthTextureMode |= DepthTextureMode.Depth;
```
Object shader:

```
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f
{
float2 uv : TEXCOORD0;
float clipW :TEXCOORD1;
float4 vertex : SV_POSITION;
float signDir : TEXCOORD2;
};
sampler2D _CameraDepthTexture;
float4 _MainTex_ST;
float4 _Color;
float _RimOffect;
float _Threshold;
v2f vert (appdata_full v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
o.clipW = o.vertex.w ;
float3 viewNormal = mul((float3x3)UNITY_MATRIX_IT_MV, v.normal);
float3 clipNormal = mul(UNITY_MATRIX_P, float4(viewNormal, 0)).xyz; // not used below, kept for reference
o.signDir = sign(-v.normal.x); // which side to offset the depth sample toward
return o;
}
fixed4 frag (v2f i) : SV_Target
{
float2 screenParams01 = float2(i.vertex.x/_ScreenParams.x,i.vertex.y/_ScreenParams.y);
float2 offectSamplePos = screenParams01-float2(_RimOffect/i.clipW,0) * i.signDir;
float offcetDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, offectSamplePos);
float trueDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenParams01);
float linear01EyeOffectDepth = Linear01Depth(offcetDepth);
float linear01EyeTrueDepth = Linear01Depth(trueDepth);
float depthDiffer = linear01EyeOffectDepth-linear01EyeTrueDepth;
float rimIntensity = step(_Threshold,depthDiffer);
float4 col = float4(rimIntensity,rimIntensity,rimIntensity,1);
return col;
}
ENDCG
```
5. Facial Rembrandt lighting
In toon rendering this is used to quickly establish the structure of the face; it can also be called the imitation-miHoYo face shadow (as far as I know the technique comes from miHoYo, haha). It can of course also be achieved with normal correction, but correcting the normals is hard for artists and particularly tedious, so in the end the lightMap approach was chosen. The shader code is simple; the key is producing the lightMap.
(Video demo, 00:08)
Code implementation:
```
float3 _Up = float3(0, 1, 0);
float3 _Front = float3(0, 0, -1);
float3 Left = cross(_Up, _Front);
float3 Right = -Left;
// The directions can also be taken straight from the model's world matrix,
// which requires the model to be authored with the correct orientation:
// float4 Front = mul(unity_ObjectToWorld, float4(0, 0, 1, 0));
// float4 Right = mul(unity_ObjectToWorld, float4(1, 0, 0, 0));
// float4 Up    = mul(unity_ObjectToWorld, float4(0, 1, 0, 0));

// Compare the light direction L with the head axes on the horizontal (XZ) plane
float FL = dot(normalize(_Front.xz), normalize(L.xz));
float LL = dot(normalize(Left.xz),  normalize(L.xz));
float RL = dot(normalize(Right.xz), normalize(L.xz));

float faceLight = faceLightMap.r + _FaceLightmpOffset;
float faceLightRamp = (FL > 0) * min((faceLight > LL), (1 > faceLight + RL));
float3 Diffuse = lerp(_ShadowColor * BaseColor, BaseColor, faceLightRamp);
```
There are several general ideas about how to create a Lightmap:
1) Hand-draw the contour lines in a painting package: use Clip Studio Paint's (CSP) contour fill tool to create the face shadow map for the 3D-to-2D look.
2) Write your own generation tool inside a game engine such as Unity or UE4.
Xuetao: Cartoon face shadow map generation rendering principle
3) Use an external tool that generates the map automatically without going into the engine; very convenient (this method is recommended).
Orange Cat: How to quickly generate a hybrid cartoon lighting map
6. Flowing hair highlights
There are three common ways to do hair highlights in toon rendering.
1) Kajiya-Kay anisotropic highlights (disadvantage: the shape is hard to control; code in the link below)
ABigDeal: COS_NPR non-photorealistic rendering, hair (CSDN blog)
2) Matcap (view-based, independent of the light direction L; code in the link below).
Hugh86: Unity NPR’s Japanese cartoon renderings
3) The flowing highlight that miHoYo pioneered in Honkai Impact 3rd (the original technical posts seem to have been taken down and can no longer be found; they did exist). This is the approach I like. The principle is to project the normal and half vector onto a 2D plane, compute the highlight Blinn-Phong style there, and shape the angel ring with a highlight mask map; see the code below. The result looks more two-dimensional, is easier to control, and better matches what artists ask for.
(Video demo, 00:21)
```
float4 uv0 = i.uv0;
float3 L = UnityWorldSpaceLightDir(i.positionWS);
float3 V = UnityWorldSpaceViewDir(i.positionWS);
float3 H = normalize(L + V);
float3 N = normalize(i.normalWS);
float3 NV = mul((float3x3)UNITY_MATRIX_V, N);   // normal in view space
float3 HV = mul((float3x3)UNITY_MATRIX_V, H);   // half vector in view space
float NdotH = dot(normalize(NV.xz), normalize(HV.xz));

NdotH = pow(NdotH, 6) * _LightWidth;        // controls the width of the highlight
NdotH = pow(NdotH, 1 / _LightLength);       // controls the length of the highlight

float lightFeather = _LightFeather * NdotH;
float lightStepMax = saturate(1 - NdotH + lightFeather);
float lightStepMin = saturate(1 - NdotH - lightFeather);
float3 lightColor_H = smoothstep(lightStepMin, lightStepMax, clamp(lightMap.r, 0, 0.99)) * _LightColor_H.rgb;
float3 lightColor_L = smoothstep(_LightThreshold, 1, lightMap.r) * _LightColor_L.rgb;
float4 specularColor = float4((lightColor_H + lightColor_L) * (1 - lightMap.b) * lerp(1, _LightIntShadow, shadowStep), 1);
return specularColor;
```
(Figure: the lightMap .r and .b channel maps used in the code above)
7. Camera FOV outline correction
There are three basic outline (stroke) methods online: in object space, in camera/view space, and in NDC space (so the outline does not scale with distance on screen). But basically none of them account for FOV. In anime-style toon rendering there are many dramatic camera shots, and animators often do not change the camera distance when making an action; instead they change the camera FOV directly, perhaps from a huge wide angle of 60 degrees down to 18 degrees, and all three outline methods then go wrong because FOV is not taken into account.
First, here is the usual outline code in the different spaces, assuming the normal data has already been smoothed so there are no broken edges; if you don't know how to smooth it, see the link below.
Job / Toon Shading Workflow: automatically generate smoothed Outline Normals for hard-surface models
The code below does not include the effect of vertex color on the outline; you can use vertex color to control outline thickness and Z offset.
Object-space version (advantage: since the vertices are scaled in object space, post effects that need depth can stay correct, because depth is derived from the object's world-space position). A direction correction factor is added here:
```
v2f o;
float3 fixedVerterxNormal = v.tangent.xyz;   // smoothed normal stored in the tangent channel
float3 dir = normalize(v.vertex.xyz);        // direction from the object origin to the vertex
float3 dir2 = fixedVerterxNormal;
float D = dot(dir, dir2);
dir = dir * sign(D);                         // flip so the correction points outward
dir = dir * _Factor + dir2 * (1 - _Factor);  // blend between position direction and normal
v.vertex.xyz += dir * _Outline * 0.001;
o.pos = UnityObjectToClipPos(v.vertex);
```
View-space version (the common practice in toon rendering; it keeps the outline correct even with non-uniform scaling):
```
v2f o;
float3 fixedVerterxNormal = v.tangent.xyz;              // smoothed normal stored in the tangent channel
float4 pos = UnityObjectToClipPos(v.vertex);
float ScaleX = abs(_ScreenParams.x / _ScreenParams.y);  // aspect-ratio correction
float3 viewNormal = mul((float3x3)UNITY_MATRIX_IT_MV, fixedVerterxNormal);
float3 ndcNormal = normalize(TransformViewToProjection(viewNormal.xyz)) * clamp(pos.w, 0, 1);
// clamp(pos.w, 0, 1): keeps the outline from getting too wide on screen when the object is very close
float2 offset = 0.01 * _OutlineWidth * ndcNormal.xy;
offset.x /= ScaleX;
pos.xy += offset;
o.vertex = pos;
```
The NDC-space version is much the same, so it is not listed here.
Now let's make the outline aware of FOV. Two things are needed first:
Get the camera FOV: since our camera uses a symmetric view frustum, the projection matrix (OpenGL convention) has cot(FOV/2), i.e. 1/tan(FOV/2), in its second row, second column. So the FOV factor can be read back with: float fov = 1.0 / unity_CameraProjection[1].y; (strictly speaking this value is tan(FOV/2), which is exactly the scale factor we need).
Get the distance from the camera:
```
float3 positionVS = mul(UNITY_MATRIX_MV, input.positionOS).xyz;
float viewDepth = abs(positionVS.z);
```
With these two values, distance and FOV can be taken into account at the same time, and the outline is then handled in view space. Here is my improved code:
```
v2f o;
float3 fixedVerterxNormal = v.tangent.xyz;          // smoothed normal stored in the tangent channel
float4 viewPos = mul(UNITY_MATRIX_MV, v.vertex);    // view-space position
float4 vert = viewPos / viewPos.w;

// s = view depth * tan(FOV/2): combines camera distance and FOV into one factor
float s = -(viewPos.z / unity_CameraProjection[1].y);
float power = pow(s, 0.5);

float3 viewSpaceNormal = mul((float3x3)UNITY_MATRIX_IT_MV, fixedVerterxNormal);
viewSpaceNormal.z = 0.01;                           // flatten toward the view plane to avoid broken silhouettes
viewSpaceNormal = normalize(viewSpaceNormal);

float width = power * _OutlineWidth;
vert.xy += viewSpaceNormal.xy * width;
vert = mul(UNITY_MATRIX_P, vert);
o.vertex = vert;
```
8. Perspective distortion correction
Look at Genshin Impact's party setup screen: with several characters on screen, the characters near the edges of the frame show obvious distortion even at a camera FOV of 40 degrees. The reader has probably thought of orthographic projection, but with an orthographic camera the characters lose perspective entirely (you especially notice on the shoes that parts at the back seem to come forward), and that is not what the artists want; they still want perspective. Put bluntly, the artists want a character standing at the edge of the screen to look the same as one at the center of the screen, without the distortion from perspective. The following article was used as a reference.
Bluerose: Implementing perspective correction for anime-style models in UE4
(Figure: multiple Genshin Impact characters standing side by side; the outer characters show no distortion from perspective)
After some thought I came up with my own implementation: replace the X and Y offset entries in the first two rows of the perspective matrix (the [0][2] and [1][2] elements) with fixed values. These two entries are tied to the FOV perspective term that produces the near-large, far-small effect, and they seem to be why the characters near the edges show such obvious distortion.
Code implementation in Unity:
```
half _ShiftX;   // set from C#, X-direction offset tuned by the artist
half _ShiftY;   // set from C#, Y-direction offset tuned by the artist

v2f vert (appdata v)
{
    v2f o;
    float4 positionVS = mul(UNITY_MATRIX_MV, v.vertex);
    float4x4 PMatrix = UNITY_MATRIX_P;
    PMatrix[0][2] = _ShiftX;
    PMatrix[1][2] = _ShiftY;
    o.pos = mul(PMatrix, positionVS);
    return o;
}
```
9. Post-processing and animation compositing
This is a very broad concept covering many things, but domestically it does not seem to get much attention; in short, there are relatively few technical articles about it. I found two good references and link them below.
Liushuo: [Unity URP] an exploration of cartoon rendering imitation animation photography
[Project Analysis Notes] Cartoon-style rendering methods and reflections, SummerStarTREE's blog (CSDN)
The second article mentions two effects, flare and para; the local Bloom control described earlier in this post also belongs to this area. Beyond those two, the second article also points out that the ACES tone mapping in Unity's PPSv2 reduces saturation, especially on very bright objects. The fix is to tweak the ACES correction parameters inside the PPSv2 package, which effectively changes the slope of the curve, so that as much saturation as possible is preserved.
(Figure: Unity's default ACES transform in post-processing)
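To see why ACES flattens saturation, here is a minimal sketch of the widely used ACES filmic fit (the Narkowicz approximation), not Unity's exact PPSv2 implementation. Each channel is compressed toward 1.0 independently, so for a bright saturated color the dominant channel is squashed much harder than the others and the hue washes out:

```
// Narkowicz's fitted approximation of the ACES filmic curve (sketch, not the PPSv2 source)
float3 ACESFilmApprox(float3 x)
{
    const float a = 2.51;
    const float b = 0.03;
    const float c = 2.43;
    const float d = 0.59;
    const float e = 0.14;
    return saturate((x * (a * x + b)) / (x * (c * x + d) + e));
}
// e.g. ACESFilmApprox(float3(4.0, 0.5, 0.5)) pulls red up toward 1.0 while green/blue land around 0.6,
// so the ratio between channels (the saturation) shrinks a lot compared with the 8:1 input ratio.
```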
Then try flare and para (actually following the article linked above): use the Vignette effect to boost contrast, then overlay a custom black-and-white mask with a gradient from the top-left to the bottom-right to simulate a light wash. A sketch of the gradient overlay follows.
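A minimal sketch of that overlay as a post pass, under the assumption that the property names _LightTint and _ShadeTint exist as material parameters (they are illustrative). A diagonal gradient from top-left to bottom-right blends between a light tint and a shade tint across the screen to fake a directional light wash:

```
// Diagonal "para"-style gradient overlay (sketch); assumes uv (0,0) bottom-left, (1,1) top-right
fixed4 fragPara (v2f i) : SV_Target
{
    fixed4 scene = tex2D(_MainTex, i.uv);

    // t = 0 at the top-left corner, t = 1 at the bottom-right corner
    float t = saturate(dot(i.uv - float2(0.0, 1.0), normalize(float2(1.0, -1.0))) / 1.41421);

    // brighten / warm the lit corner, darken / cool the opposite one
    fixed3 tint = lerp(_LightTint.rgb, _ShadeTint.rgb, t);
    scene.rgb *= tint;
    return scene;
}
```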
10. To be continued (I will keep appending new toon-rendering-specific effects here as I run into them)...
A high-quality anime-style game would feel a little lacking without these signature effects. The implementations above are only my personal approach; if you have any questions, feel free to discuss them in the comments. The more you read and experiment, the better it will all make sense.