## Shader
- RWStructuredBuffer: a read/write structured buffer; usable element types include float1-4 and uint1-4.
- StructuredBuffer: the corresponding read-only buffer (see the compute-kernel sketch below).
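A minimal compute-kernel sketch showing how such a buffer is declared and written; the property names (`_Positions`, `_Resolution`, `_Step`, `_Time`) match the C# snippets below, but the wave function itself is only illustrative:
```hlsl
// Illustrative compute shader: writes one position per thread into a RWStructuredBuffer.
#pragma kernel FunctionKernel

RWStructuredBuffer<float3> _Positions;   // read/write from the kernel
uint _Resolution;
float _Step, _Time;

[numthreads(8, 8, 1)]
void FunctionKernel(uint3 id : SV_DispatchThreadID)
{
    if (id.x < _Resolution && id.y < _Resolution)
    {
        // Store one point per thread; the C# side binds and draws this buffer.
        float2 uv = (id.xy + 0.5) * _Step - 1.0;
        _Positions[id.x + id.y * _Resolution] = float3(uv.x, sin(_Time + uv.x), uv.y);
    }
}
```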
## C# script
### Declaring variables
References to the compute shader and the buffer:
```c#
[SerializeField]
ComputeShader computeShader = default;
ComputeBuffer positionBuffer;
```
### Binding compute shader variables and dispatching
Declare the shader property IDs used for binding:
```c#
static readonly int positionsId = Shader.PropertyToID("_Positions"),
    resolutionId = Shader.PropertyToID("_Resolution"),
    stepId = Shader.PropertyToID("_Step"),
    timeId = Shader.PropertyToID("_Time");
```
Assign values to the bound properties:
```c#
computeShader.SetInt(resolutionId, resolution);
computeShader.SetFloat(stepId, step);
computeShader.SetFloat(timeId, Time.time);
computeShader.SetBuffer(0, positionsId, positionBuffer);
```
Dispatch:
```c#
int groups = Mathf.CeilToInt(resolution / 8f);
computeShader.Dispatch(0, groups, groups, 1);
```
### Issuing the GPU-instanced draw call
```c#
// The bounding box and the instance count must be provided.
var bounds = new Bounds(Vector3.zero, Vector3.one * (2f + 2f / resolution));
Graphics.DrawMeshInstancedProcedural(mesh, 0, material, bounds, positionBuffer.count);
```
### Multiple kernels
A compute shader may contain several kernel functions; in that case the first parameter of `ComputeShader.SetBuffer()` and `Dispatch()` selects the kernel index, which can be retrieved on the C# side with `computeShader.FindKernel("KernelName")` instead of hard-coding 0.
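On the shader side, each kernel gets its own `#pragma kernel` declaration and the indices follow declaration order. A minimal sketch with hypothetical kernel names (bodies elided):
```hlsl
#pragma kernel WaveKernel     // kernel index 0
#pragma kernel RippleKernel   // kernel index 1

RWStructuredBuffer<float3> _Positions;
float _Time;

[numthreads(8, 8, 1)]
void WaveKernel(uint3 id : SV_DispatchThreadID)
{
    // ... write wave positions into _Positions ...
}

[numthreads(8, 8, 1)]
void RippleKernel(uint3 id : SV_DispatchThreadID)
{
    // ... write ripple positions into _Positions ...
}
```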

## Plain shaders
- uniform: the variable has the same value in both the vertex and fragment shaders.
- fixed: low-precision numbers that trade precision for speed on mobile devices; on desktop GPUs, fixed is just an alias for float.
## Built-in include files
- UnityShaderVariables.cginc: defines the shader variables needed for rendering, such as transform, camera, and lighting data. Unity sets these when required.
- HLSLSupport.cginc: sets things up so the same code can be written regardless of the target platform; there is no need to worry about platform-specific data types and the like.
- UnityInstancing.cginc: dedicated to instancing support, a rendering technique that reduces draw calls. Although it does not include the file directly, it depends on UnityShaderVariables.
## Keywords
### PropertyType
- Int
- Float
- Range
- Color
- Vector
- 2D texture
- Cube texture
- 3D texture
### SubShader
Provides different SubShaders adapted to graphics cards of different performance levels.
```
SubShader{
[Tags]
[RenderSetup]
Pass{
}
}
```
### The Fallback keyword
The Fallback keyword handles the case where no SubShader matches: it either names a shader whose Pass should be used instead, or skips the object entirely.
>FallBack Off
### CGPROGRAM and ENDCG
HLSL/CG code is written between these two keywords; the vertex and fragment shaders go inside the `Pass { }` blocks of a SubShader.
## Built-in variables
### Matrices
- UNITY_MATRIX_MVP
- UNITY_MATRIX_MV
- UNITY_MATRIX_P
- UNITY_MATRIX_VP
- UNITY_MATRIX_T_MV
- UNITY_MATRIX_IT_MV
- _Object2World
- _World2Object
- unity_WorldToObject
### Camera
- _WorldSpaceCameraPos
- _ProjectionParams
- _ScreenParams
- _ZBufferParams
- unity_OrthoParams
- unity_CameraProjection
- unity_CameraInvProjection
- unity_CameraWorldClipPlanes[6]
### Lighting
- UNITY_LIGHTMODEL_AMBIENT
- _WorldSpaceLightPos0
## Declaring the vertex and fragment entry points
```c++
#pragma vertex vert
#pragma fragment frag
```
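A minimal sketch of what the named entry points might look like, assuming UnityCG.cginc is included; the struct and property names here are illustrative, not part of the original notes:
```hlsl
// Minimal vertex/fragment pair matching the #pragma declarations above.
struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; };
struct v2f     { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

sampler2D _MainTex;
float4 _MainTex_ST;

v2f vert (appdata v) {
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);   // object space -> clip space
    o.uv  = TRANSFORM_TEX(v.uv, _MainTex);    // apply tiling/offset
    return o;
}

fixed4 frag (v2f i) : SV_Target {
    return tex2D(_MainTex, i.uv);             // sample the main texture
}
```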
## UnityCG.cginc
### Common structs
- `appdata_base`: float4 vertex : POSITION; float3 normal : NORMAL; float4 texcoord : TEXCOORD0;
- `appdata_tan`: float4 vertex : POSITION; float4 tangent : TANGENT; float3 normal : NORMAL; float4 texcoord : TEXCOORD0;
- `appdata_full`: float4 vertex : POSITION; float4 tangent : TANGENT; float3 normal : NORMAL; float4 texcoord : TEXCOORD0; float4 texcoord1 : TEXCOORD1; float4 texcoord2 : TEXCOORD2; float4 texcoord3 : TEXCOORD3; #if defined(SHADER_API_XBOX360) half4 texcoord4 : TEXCOORD4; half4 texcoord5 : TEXCOORD5; #endif fixed4 color : COLOR;
- `appdata_img`: float4 vertex : POSITION; half2 texcoord : TEXCOORD0;
- `v2f_img`: clip-space position and texture coordinates
### Common functions
- float3 WorldSpaceViewDir(float4 v): takes an object-space vertex position and returns the world-space view direction from that point toward the camera
- float3 UnityWorldSpaceViewDir(float3 v): takes a world-space position and returns the world-space view direction from that point toward the camera
- float3 ObjSpaceViewDir(float4 v): takes an object-space vertex position and returns the object-space view direction from that point toward the camera
- float3 WorldSpaceLightDir(float4 v): forward rendering only; takes an object-space vertex position and returns the world-space light direction from that point toward the light (not normalized)
- float3 ObjSpaceLightDir(float4 v): forward rendering only; takes an object-space vertex position and returns the object-space light direction from that point toward the light (not normalized)
- float3 UnityWorldSpaceLightDir(float3 v): forward rendering only; takes a world-space position and returns the world-space light direction from that point toward the light (not normalized)
- float3 UnityObjectToWorldNormal(float3 norm): transforms a normal from object space to world space
- float3 UnityObjectToWorldDir(float3 dir): transforms a direction vector from object space to world space
- float3 UnityWorldToObjectDir(float3 dir): transforms a direction vector from world space to object space (a usage sketch follows this list)
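A small usage sketch of a few of these helpers inside a vertex/fragment pair; the facing-ratio output is only illustrative, and UnityCG.cginc is assumed to be included:
```hlsl
#include "UnityCG.cginc"

struct v2f {
    float4 pos      : SV_POSITION;
    float3 normalWS : TEXCOORD0;
    float3 viewWS   : TEXCOORD1;
};

v2f vert (appdata_base v) {
    v2f o;
    o.pos      = UnityObjectToClipPos(v.vertex);
    o.normalWS = UnityObjectToWorldNormal(v.normal);   // object -> world normal
    o.viewWS   = WorldSpaceViewDir(v.vertex);          // toward the camera, not normalized
    return o;
}

fixed4 frag (v2f i) : SV_Target {
    float ndv = saturate(dot(normalize(i.normalWS), normalize(i.viewWS)));
    return fixed4(ndv.xxx, 1);                         // simple facing-ratio visualization
}
```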
## Floating-point formats
- float: 32 bits
- half: 16 bits, roughly -60000 to +60000
- fixed: 11 bits, roughly -2.0 to +2.0
## Textures
`sampler2D _MainTex` is paired with a `float4 _MainTex_ST` that holds its tiling (scale) and offset; a sketch of how it is applied follows.
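How the `_ST` vector is applied to UVs; the `TRANSFORM_TEX` macro in UnityCG.cginc expands to exactly this pattern (the helper function name is illustrative):
```hlsl
sampler2D _MainTex;
float4 _MainTex_ST;   // xy = tiling (scale), zw = offset

// Equivalent to TRANSFORM_TEX(v.texcoord, _MainTex):
float2 TransformMainTexUV(float2 uv)
{
    return uv * _MainTex_ST.xy + _MainTex_ST.zw;
}
```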
## Pipeline LightMode
Defined in the Tags{} block inside a Pass.
- Always: always rendered; no lighting is computed.
- ForwardBase: used in forward rendering; this Pass computes ambient light, the main directional light, per-vertex/SH lights, and lightmaps.
- ForwardAdd: used in forward rendering; this Pass computes additional per-pixel lights, one Pass per light.
- Deferred: used in deferred rendering; this Pass fills the G-buffer.
- ShadowCaster: renders the object's depth into a shadow map or a depth texture.
- PrepassBase: used in legacy deferred rendering; this Pass renders normals and the specular exponent.
- PrepassFinal: used in legacy deferred rendering; this Pass produces the final color by combining textures, lighting, and emission.
- Vertex, VertexLMRGBM, VertexLM: used in legacy vertex-lit rendering.
## Render order for opaque and semi-transparent objects
1. First render all opaque objects, with depth testing and depth writing enabled.
2. Sort the semi-transparent objects by their distance from the camera, then render them back to front with depth testing enabled but depth writing disabled.
### Unity's solution
Unity defines five render queues:
- Background
- Geometry
- AlphaTest
- Transparent
- Overlay
The queue is set in the Tags block:
```
SubShader{
Tags{"Queue"="AlphaTest"}
Pass{
ZWrite Off
}
}
```
### Blend commands
- Blend Off
- Blend SrcFactor DstFactor
- Blend SrcFactor DstFactor, SrcFactorA DstFactorA
- BlendOp BlendOperation
### Blend operations
1. Add
2. Sub
3. RevSub
4. Min
5. Max
### Common blend modes
1. Blend SrcAlpha OneMinusSrcAlpha: normal (alpha blending)
2. Blend OneMinusDstColor One: soft additive
3. Blend DstColor Zero: multiply
4. Blend DstColor SrcColor: 2x multiply
5. BlendOp Min, Blend One One: darken
6. BlendOp Max, Blend One One: lighten
7. Blend One One: linear dodge (additive)
### Fixing semi-transparent sorting issues
1. Render the model with two Passes: the first Pass writes depth but outputs no color; the second Pass performs the normal alpha blending.
2. Double-sided transparent rendering.
## Culling command
Cull Back | Front | Off
## Built-in time variables
- _Time: time since the scene was loaded; the four components are (t/20, t, 2t, 3t) (a scrolling-UV sketch follows this list)
- _SinTime: sine of the time; the four components are sin of (t/8, t/4, t/2, t)
- _CosTime: cosine of the time; the four components are cos of (t/8, t/4, t/2, t)
- unity_DeltaTime: dt is the frame delta; the four components are (dt, 1/dt, smoothDt, 1/smoothDt)
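A tiny sketch that uses `_Time` to scroll UVs in a fragment shader; `_ScrollSpeed` is a hypothetical property added for illustration, and `v2f_img` is the UnityCG struct listed above:
```hlsl
sampler2D _MainTex;
float4 _MainTex_ST;
float2 _ScrollSpeed;   // hypothetical property, e.g. (0.1, 0) for a slow horizontal scroll

fixed4 frag (v2f_img i) : SV_Target
{
    // _Time.y is the unscaled time in seconds.
    float2 uv = i.uv * _MainTex_ST.xy + _MainTex_ST.zw + _ScrollSpeed * _Time.y;
    return tex2D(_MainTex, uv);
}
```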
## Preprocessor directives
### multi_compile
Macros defined with `multi_compile`, such as `#pragma multi_compile_fog` and `#pragma multi_compile_fwdbase`, apply to most shaders and are independent of the shader's own properties.
### shader_feature
Macros defined with `shader_feature` are usually tied to the shader's own properties. For example, if the shader has a `_NormalMap` property, `#pragma shader_feature _NormalMap` defines a keyword so the shader can behave differently depending on whether the material has a `_NormalMap` assigned; a usage sketch follows.
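A minimal sketch of how such a keyword is typically consumed; the keyword name `_NORMALMAP_ON` is an assumption (the material toggles it, e.g. via a [Toggle] property drawer or `Material.EnableKeyword`), and UnityCG.cginc is assumed for `UnpackNormal`:
```hlsl
#pragma shader_feature _NORMALMAP_ON

sampler2D _NormalMap;

float3 GetTangentSpaceNormal(float2 uv)
{
#if defined(_NORMALMAP_ON)
    // This variant is only compiled for materials that enable the keyword.
    return UnpackNormal(tex2D(_NormalMap, uv));
#else
    // Materials without a normal map fall back to the flat tangent-space normal.
    return float3(0, 0, 1);
#endif
}
```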
## Optimization
Draw calls can be reduced with dynamic batching and static batching; Unity supports both. Dynamic batching has the advantage that everything is handled automatically by Unity, no extra work is needed, and the objects can still move; the downside is that it has many restrictions, and it is easy to accidentally break them so that Unity can no longer dynamically batch objects that share the same material.
Static batching has the advantage of much more freedom and far fewer restrictions; the downsides are that it may use more memory, and statically batched objects can no longer move.
Dynamic batching works by merging, every frame, the meshes that can be batched, uploading the merged mesh data to the GPU, and rendering it with a single material. Besides being convenient, another benefit is that batched objects can still move, because Unity re-merges the meshes every frame.
### Sharing materials
Combine multiple textures into one atlas and build a shared material from it.
### Dynamic batching
Objects that use the same material are batched automatically once they satisfy the conditions; the merge happens every frame. Conditions:
- The mesh's vertex attribute count must be below 900. For example, if the shader uses three vertex attributes (position, normal, and texture coordinates), the mesh may have at most 300 vertices to be dynamically batched. Note that this number may change in the future, so do not rely on it.
- In general, all objects must use the same scale. One exception: if every object uses a different non-uniform scale, they can still be dynamically batched. In Unity 5 this scaling restriction no longer exists.
- Objects that use lightmaps need extra care. They require additional rendering parameters such as the lightmap index, offset, and scale, so to be dynamically batched they must point to the same region of the lightmap.
- Multi-pass shaders break batching. In forward rendering we sometimes use extra Passes to add more lights, but then the model will no longer be dynamically batched.
### Static batching
At startup, the meshes to be statically batched are merged into a new mesh structure.
## GeometryShader
>Shader Model 4.0 or higher is required: `#pragma target 4.0`. If a lower target is specified, Unity automatically raises it to this level.
- maxvertexcount defines the number of output vertices; when only processing triangles, 3 is enough.
- triangle is the input primitive type keyword.
- TriangleStream is the output stream type.
```
[maxvertexcount(3)]
void MyGeometryProgram (
triangle InterpolatorsVertex i[3],
inout TriangleStream<InterpolatorsGeometry> stream
)
```
### Flat/wireframe effect (CatLikeCoding example)
One way to add barycentric coordinates to triangles is to store them in the mesh's vertex colors: the first vertex of each triangle becomes red, the second green, the third blue. However, that requires a mesh authored with those vertex colors and prevents vertex sharing. We want a solution that works for any mesh. Fortunately, we can add the required coordinates in the geometry program.
Because the mesh does not provide barycentric coordinates, the vertex program knows nothing about them, so they do not belong in the InterpolatorsVertex struct. To have the geometry program output them, we must define a new struct. Start by defining InterpolatorsGeometry above MyGeometryProgram; it should contain the same data as InterpolatorsVertex, so use that as its contents:
```hlsl
struct InterpolatorsGeometry {
InterpolatorsVertex data;
CUSTOM_GEOMETRY_INTERPOLATORS
};
```
- MyGeometryProgram replaces the vertex normals with the face normal, fills in the barycentricCoordinates values, and appends the vertices into the inout TriangleStream<InterpolatorsGeometry>.
- GetAlbedoWithWireframe controls the wireframe look; it is merged into the rendering flow in My Lighting.cginc via macro substitution.
```hlsl
float3 GetAlbedoWithWireframe (Interpolators i) {
float3 albedo = GetAlbedo(i);
float3 barys;
barys.xy = i.barycentricCoordinates;
barys.z = 1 - barys.x - barys.y;
float3 deltas = fwidth(barys);
float3 smoothing = deltas * _WireframeSmoothing;
float3 thickness = deltas * _WireframeThickness;
barys = smoothstep(thickness, thickness + smoothing, barys);
float minBary = min(barys.x, min(barys.y, barys.z));
return lerp(_WireframeColor, albedo, minBary);
}
```

## Formula
```c#
saturate((1.0 + ( (Set_ShadingGrade - (_1st_ShadeColor_Step-_1st_ShadeColor_Feather)) * (0.0 - 1.0) ) / (_1st_ShadeColor_Step - (_1st_ShadeColor_Step-_1st_ShadeColor_Feather)))); // Base and 1st Shade Mask
```
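The expression reduces to a saturated inverse lerp over a feathered threshold; a sketch of the same computation written as a helper (the function name is illustrative, and `feather` is assumed non-zero):
```hlsl
// saturate(1 + (x - (step - feather)) * (0 - 1) / (step - (step - feather)))
//   == saturate(1 - (x - (step - feather)) / feather)
// Result: 1 inside the shaded region, 0 outside, with a linear ramp of width `feather`.
float UtsFeatherMask(float x, float step, float feather)
{
    return saturate(1.0 - (x - (step - feather)) / feather);
}
```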
## ShadingGradeMap
```c#
float3 Set_BaseColor = lerp( (_MainTex_var.rgb*_BaseColor.rgb), ((_MainTex_var.rgb*_BaseColor.rgb)*Set_LightColor), _Is_LightColor_Base );
float3 _BaseColor_var = lerp(Set_BaseColor,_Is_LightColor_1st_Shade_var,Set_FinalShadowMask);
float Set_FinalShadowMask = saturate((1.0 + ( (Set_ShadingGrade - (_1st_ShadeColor_Step-_1st_ShadeColor_Feather)) * (0.0 - 1.0) ) / (_1st_ShadeColor_Step - (_1st_ShadeColor_Step-_1st_ShadeColor_Feather))));
float Set_ShadeShadowMask = saturate((1.0 + ( (Set_ShadingGrade - (_2nd_ShadeColor_Step-_2nd_ShadeColor_Feather)) * (0.0 - 1.0) ) / (_2nd_ShadeColor_Step - (_2nd_ShadeColor_Step-_2nd_ShadeColor_Feather))));
float3 Set_FinalBaseColor = lerp( _BaseColor_var,
lerp(_Is_LightColor_1st_Shade_var,
lerp( _2nd_ShadeMap_var.rgb*_2nd_ShadeColor.rgb,
((_2nd_ShadeMap_var.rgb*_2nd_ShadeColor.rgb)*Set_LightColor),
_Is_LightColor_2nd_Shade ),
Set_ShadeShadowMask),
Set_FinalShadowMask);
```
## Feather
```c#
// _Set_1st_ShadePosition and _Set_2nd_ShadePosition default to white.
float3 Set_BaseColor = lerp( (_BaseColor.rgb*_MainTex_var.rgb), ((_BaseColor.rgb*_MainTex_var.rgb)*Set_LightColor), _Is_LightColor_Base );
float Set_FinalShadowMask = saturate(1.0 + lerp( _HalfLambert_var,
_HalfLambert_var*saturate(_SystemShadowsLevel_var),
_Set_SystemShadowsToBase ) - (_BaseColor_Step-_BaseShade_Feather) *
((1.0 - _Set_1st_ShadePosition_var.rgb).r - 1.0) //corresponds to the (0.0 - 1.0) term above
/ (_BaseColor_Step - (_BaseColor_Step-_BaseShade_Feather)));
float3 Set_FinalBaseColor = lerp( Set_BaseColor,
lerp(Set_1st_ShadeColor,
Set_2nd_ShadeColor,
saturate((1.0 + ( (_HalfLambert_var - (_ShadeColor_Step-_1st2nd_Shades_Feather)) * ((1.0 - _Set_2nd_ShadePosition_var.rgb).r - 1.0) ) / (_ShadeColor_Step - (_ShadeColor_Step-_1st2nd_Shades_Feather))))
),
Set_FinalShadowMask); // Final Color
```
## Summary
Both modes interpolate with the same long-standing UTS formula, but they differ:
- In the ShadingGradeMap workflow, the UTS formula is applied to the ShadingGradeMap value; the Step and Feather of the 1st and 2nd shade colors are then used to compute two shade masks, and those two masks interpolate between BaseColor, 1stShadeColor, and 2ndShadeColor to get the final result.
- In the Feather (DoubleShadeWithFeather) workflow, the UTS formula is applied to the half-Lambert value; the two Step/Feather pairs again yield two shade masks, which interpolate between BaseColor, 1stShadeColor, and 2ndShadeColor. The other difference is that a ShadePositionMap controls the numerator term of the UTS formula, i.e. `((1.0 - _Set_2nd_ShadePosition_var.rgb).r - 1.0)` replaces `(0.0 - 1.0)`.

## Definitions
The vertex input and output layouts are defined according to whether angel-ring rendering is enabled and whether `_MAIN_LIGHT_SHADOWS` is defined.
```c#
struct VertexInput {
float4 vertex : POSITION;
float3 normal : NORMAL;
float4 tangent : TANGENT;
float2 texcoord0 : TEXCOORD0;
#ifdef _IS_ANGELRING_OFF
float2 lightmapUV : TEXCOORD1;
#elif _IS_ANGELRING_ON
float2 texcoord1 : TEXCOORD1;
float2 lightmapUV : TEXCOORD2;
#endif
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct VertexOutput {
float4 pos : SV_POSITION;
float2 uv0 : TEXCOORD0;
//v.2.0.4
#ifdef _IS_ANGELRING_OFF
float4 posWorld : TEXCOORD1;
float3 normalDir : TEXCOORD2;
float3 tangentDir : TEXCOORD3;
float3 bitangentDir : TEXCOORD4;
//v.2.0.7
float mirrorFlag : TEXCOORD5;
DECLARE_LIGHTMAP_OR_SH(lightmapUV, vertexSH, 6);
#if defined(_ADDITIONAL_LIGHTS_VERTEX) || (VERSION_LOWER(12, 0))
half4 fogFactorAndVertexLight : TEXCOORD7; // x: fogFactor, yzw: vertex light
#else
half fogFactor : TEXCOORD7;
#endif
# ifndef _MAIN_LIGHT_SHADOWS
float4 positionCS : TEXCOORD8;
int mainLightID : TEXCOORD9;
# else
float4 shadowCoord : TEXCOORD8;
float4 positionCS : TEXCOORD9;
int mainLightID : TEXCOORD10;
# endif
UNITY_VERTEX_INPUT_INSTANCE_ID
UNITY_VERTEX_OUTPUT_STEREO
//
#elif _IS_ANGELRING_ON
float2 uv1 : TEXCOORD1;
float4 posWorld : TEXCOORD2;
float3 normalDir : TEXCOORD3;
float3 tangentDir : TEXCOORD4;
float3 bitangentDir : TEXCOORD5;
//v.2.0.7
float mirrorFlag : TEXCOORD6;
DECLARE_LIGHTMAP_OR_SH(lightmapUV, vertexSH, 7);
#if defined(_ADDITIONAL_LIGHTS_VERTEX) || (VERSION_LOWER(12, 0))
half4 fogFactorAndVertexLight : TEXCOORD8; // x: fogFactor, yzw: vertex light
#else
half fogFactor : TEXCOORD8; // x: fogFactor, yzw: vertex light
#endif
# ifndef _MAIN_LIGHT_SHADOWS
float4 positionCS : TEXCOORD9;
int mainLightID : TEXCOORD10;
# else
float4 shadowCoord : TEXCOORD9;
float4 positionCS : TEXCOORD10;
int mainLightID : TEXCOORD11;
# endif
UNITY_VERTEX_INPUT_INSTANCE_ID
UNITY_VERTEX_OUTPUT_STEREO
#else
LIGHTING_COORDS(7,8)
UNITY_FOG_COORDS(9)
#endif
//
};
//light data
struct UtsLight
{
float3 direction;
float3 color;
float distanceAttenuation;
real shadowAttenuation;
int type;
};
```
Depending on keywords, further macros are defined: `_ADDITIONAL_LIGHTS` => `REQUIRES_WORLD_SPACE_POS_INTERPOLATOR`, `_MAIN_LIGHT_SHADOWS` => `REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR`, along with some helper functions:
```c#
// RaytracedHardShadow
// This is global texture. what to do with SRP Batcher.
#define UNITY_PROJ_COORD(a) a
#define UNITY_SAMPLE_SCREEN_SHADOW(tex, uv) tex2Dproj( tex, UNITY_PROJ_COORD(uv) ).r
#define TEXTURE2D_SAMPLER2D(textureName, samplerName) Texture2D textureName; SamplerState samplerName
TEXTURE2D_SAMPLER2D(_RaytracedHardShadow, sampler_RaytracedHardShadow);
float4 _RaytracedHardShadow_TexelSize;
//function to rotate the UV: RotateUV()
//float2 rotatedUV = RotateUV(i.uv0, (_angular_Verocity*3.141592654), float2(0.5, 0.5), _Time.g);
float2 RotateUV(float2 _uv, float _radian, float2 _piv, float _time)
{
float RotateUV_ang = _radian;
float RotateUV_cos = cos(_time*RotateUV_ang);
float RotateUV_sin = sin(_time*RotateUV_ang);
return (mul(_uv - _piv, float2x2( RotateUV_cos, -RotateUV_sin, RotateUV_sin, RotateUV_cos)) + _piv);
}
//
fixed3 DecodeLightProbe( fixed3 N ){
return ShadeSH9(float4(N,1));
}
inline void InitializeStandardLitSurfaceDataUTS(float2 uv, out SurfaceData outSurfaceData)
{
outSurfaceData = (SurfaceData)0;
// half4 albedoAlpha = SampleAlbedoAlpha(uv, TEXTURE2D_ARGS(_BaseMap, sampler_BaseMap));
half4 albedoAlpha = half4(1.0,1.0,1.0,1.0);
outSurfaceData.alpha = Alpha(albedoAlpha.a, _BaseColor, _Cutoff);
half4 specGloss = SampleMetallicSpecGloss(uv, albedoAlpha.a);
outSurfaceData.albedo = albedoAlpha.rgb * _BaseColor.rgb;
#if _SPECULAR_SETUP
outSurfaceData.metallic = 1.0h;
outSurfaceData.specular = specGloss.rgb;
#else
outSurfaceData.metallic = specGloss.r;
outSurfaceData.specular = half3(0.0h, 0.0h, 0.0h);
#endif
outSurfaceData.smoothness = specGloss.a;
outSurfaceData.normalTS = SampleNormal(uv, TEXTURE2D_ARGS(_BumpMap, sampler_BumpMap), _BumpScale);
outSurfaceData.occlusion = SampleOcclusion(uv);
outSurfaceData.emission = SampleEmission(uv, _EmissionColor.rgb, TEXTURE2D_ARGS(_EmissionMap, sampler_EmissionMap));
}
half3 GlobalIlluminationUTS(BRDFData brdfData, half3 bakedGI, half occlusion, half3 normalWS, half3 viewDirectionWS)
{
half3 reflectVector = reflect(-viewDirectionWS, normalWS);
half fresnelTerm = Pow4(1.0 - saturate(dot(normalWS, viewDirectionWS)));
half3 indirectDiffuse = bakedGI * occlusion;
half3 indirectSpecular = GlossyEnvironmentReflection(reflectVector, brdfData.perceptualRoughness, occlusion);
return EnvironmentBRDF(brdfData, indirectDiffuse, indirectSpecular, fresnelTerm);
}
```
## Vertex shader
It computes:
- the vertex normal, tangent, and bitangent
- the world-space and clip-space vertex positions
- `fogFactorAndVertexLight` or `fogFactor` via `ComputeFogFactor` (FOG_LINEAR, FOG_EXP, or FOG_EXP2)
- shadowCoord
- mainLightID
If angel-ring rendering is enabled, an extra interpolator (uv1 : TEXCOORD1) carries the angel-ring UVs.
```c#
VertexOutput vert (VertexInput v) {
VertexOutput o = (VertexOutput)0;
UNITY_SETUP_INSTANCE_ID(v);
UNITY_TRANSFER_INSTANCE_ID(v, o);
UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
o.uv0 = v.texcoord0;
//v.2.0.4
#ifdef _IS_ANGELRING_OFF
//
#elif _IS_ANGELRING_ON
o.uv1 = v.texcoord1;
#endif
o.normalDir = UnityObjectToWorldNormal(v.normal);
o.tangentDir = normalize( mul( unity_ObjectToWorld, float4( v.tangent.xyz, 0.0 ) ).xyz );
o.bitangentDir = normalize(cross(o.normalDir, o.tangentDir) * v.tangent.w);
o.posWorld = mul(unity_ObjectToWorld, v.vertex);
o.pos = UnityObjectToClipPos( v.vertex );
//v.2.0.7 Detection of the inside the mirror (right or left-handed) o.mirrorFlag = -1 then "inside the mirror".
float3 crossFwd = cross(UNITY_MATRIX_V[0].xyz, UNITY_MATRIX_V[1].xyz);
o.mirrorFlag = dot(crossFwd, UNITY_MATRIX_V[2].xyz) < 0 ? 1 : -1;
//
float3 positionWS = TransformObjectToWorld(v.vertex.xyz);
float4 positionCS = TransformWorldToHClip(positionWS);
half3 vertexLight = VertexLighting(o.posWorld.xyz, o.normalDir);
half fogFactor = ComputeFogFactor(positionCS.z);
OUTPUT_LIGHTMAP_UV(v.lightmapUV, unity_LightmapST, o.lightmapUV);
OUTPUT_SH(o.normalDir.xyz, o.vertexSH);
# if defined(_ADDITIONAL_LIGHTS_VERTEX) || (VERSION_LOWER(12, 0))
o.fogFactorAndVertexLight = half4(fogFactor, vertexLight);
#else
o.fogFactor = fogFactor;
#endif
o.positionCS = positionCS;
#if defined(_MAIN_LIGHT_SHADOWS) && !defined(_RECEIVE_SHADOWS_OFF)
#if SHADOWS_SCREEN
o.shadowCoord = ComputeScreenPos(positionCS);
#else
o.shadowCoord = TransformWorldToShadowCoord(o.posWorld.xyz);
#endif
o.mainLightID = DetermineUTS_MainLightIndex(o.posWorld.xyz, o.shadowCoord, positionCS);
#else
o.mainLightID = DetermineUTS_MainLightIndex(o.posWorld.xyz, 0, positionCS);
#endif
return o;
}
```
## Fragment shader
UTS has two shading modes, implemented in `UniversalToonBodyDoubleShadeWithFeather.hlsl` and `UniversalToonBodyShadingGradeMap` respectively.
```c#
float4 frag(VertexOutput i, fixed facing : VFACE) : SV_TARGET
{
#if defined(_SHADINGGRADEMAP)
return fragShadingGradeMap(i, facing);
#else
return fragDoubleShadeFeather(i, facing);
#endif
}
```
## Transparency and clipping
Clipping is only enabled when `ClippingMode` is set to something other than Off. It provides:
- inverting the clipping mask
- using the BaseMap alpha channel as the mask
- clipping level and transparency level controls
This feature is typically used for transparent parts such as hair.
## Full code
```c#
#if (SHADER_LIBRARY_VERSION_MAJOR ==7 && SHADER_LIBRARY_VERSION_MINOR >= 3) || (SHADER_LIBRARY_VERSION_MAJOR >= 8)
# ifdef _ADDITIONAL_LIGHTS
# ifndef REQUIRES_WORLD_SPACE_POS_INTERPOLATOR
# define REQUIRES_WORLD_SPACE_POS_INTERPOLATOR
# endif
# endif
#else
# ifdef _MAIN_LIGHT_SHADOWS
//# if !defined(_MAIN_LIGHT_SHADOWS_CASCADE)
# ifndef REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR
# define REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR
# endif
//# endif
# endif
# ifdef _ADDITIONAL_LIGHTS
# ifndef REQUIRES_WORLD_SPACE_POS_INTERPOLATOR
# define REQUIRES_WORLD_SPACE_POS_INTERPOLATOR
# endif
# endif
#endif
// RaytracedHardShadow
// This is global texture. what to do with SRP Batcher.
#define UNITY_PROJ_COORD(a) a
#define UNITY_SAMPLE_SCREEN_SHADOW(tex, uv) tex2Dproj( tex, UNITY_PROJ_COORD(uv) ).r
#define TEXTURE2D_SAMPLER2D(textureName, samplerName) Texture2D textureName; SamplerState samplerName
TEXTURE2D_SAMPLER2D(_RaytracedHardShadow, sampler_RaytracedHardShadow);
float4 _RaytracedHardShadow_TexelSize;
//function to rotate the UV: RotateUV()
//float2 rotatedUV = RotateUV(i.uv0, (_angular_Verocity*3.141592654), float2(0.5, 0.5), _Time.g);
float2 RotateUV(float2 _uv, float _radian, float2 _piv, float _time)
{
float RotateUV_ang = _radian;
float RotateUV_cos = cos(_time*RotateUV_ang);
float RotateUV_sin = sin(_time*RotateUV_ang);
return (mul(_uv - _piv, float2x2( RotateUV_cos, -RotateUV_sin, RotateUV_sin, RotateUV_cos)) + _piv);
}
//
fixed3 DecodeLightProbe( fixed3 N ){
return ShadeSH9(float4(N,1));
}
inline void InitializeStandardLitSurfaceDataUTS(float2 uv, out SurfaceData outSurfaceData)
{
outSurfaceData = (SurfaceData)0;
// half4 albedoAlpha = SampleAlbedoAlpha(uv, TEXTURE2D_ARGS(_BaseMap, sampler_BaseMap));
half4 albedoAlpha = half4(1.0,1.0,1.0,1.0);
outSurfaceData.alpha = Alpha(albedoAlpha.a, _BaseColor, _Cutoff);
half4 specGloss = SampleMetallicSpecGloss(uv, albedoAlpha.a);
outSurfaceData.albedo = albedoAlpha.rgb * _BaseColor.rgb;
#if _SPECULAR_SETUP
outSurfaceData.metallic = 1.0h;
outSurfaceData.specular = specGloss.rgb;
#else
outSurfaceData.metallic = specGloss.r;
outSurfaceData.specular = half3(0.0h, 0.0h, 0.0h);
#endif
outSurfaceData.smoothness = specGloss.a;
outSurfaceData.normalTS = SampleNormal(uv, TEXTURE2D_ARGS(_BumpMap, sampler_BumpMap), _BumpScale);
outSurfaceData.occlusion = SampleOcclusion(uv);
outSurfaceData.emission = SampleEmission(uv, _EmissionColor.rgb, TEXTURE2D_ARGS(_EmissionMap, sampler_EmissionMap));
}
half3 GlobalIlluminationUTS(BRDFData brdfData, half3 bakedGI, half occlusion, half3 normalWS, half3 viewDirectionWS)
{
half3 reflectVector = reflect(-viewDirectionWS, normalWS);
half fresnelTerm = Pow4(1.0 - saturate(dot(normalWS, viewDirectionWS)));
half3 indirectDiffuse = bakedGI * occlusion;
half3 indirectSpecular = GlossyEnvironmentReflection(reflectVector, brdfData.perceptualRoughness, occlusion);
return EnvironmentBRDF(brdfData, indirectDiffuse, indirectSpecular, fresnelTerm);
}
struct VertexInput {
float4 vertex : POSITION;
float3 normal : NORMAL;
float4 tangent : TANGENT;
float2 texcoord0 : TEXCOORD0;
#ifdef _IS_ANGELRING_OFF
float2 lightmapUV : TEXCOORD1;
#elif _IS_ANGELRING_ON
float2 texcoord1 : TEXCOORD1;
float2 lightmapUV : TEXCOORD2;
#endif
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct VertexOutput {
float4 pos : SV_POSITION;
float2 uv0 : TEXCOORD0;
//v.2.0.4
#ifdef _IS_ANGELRING_OFF
float4 posWorld : TEXCOORD1;
float3 normalDir : TEXCOORD2;
float3 tangentDir : TEXCOORD3;
float3 bitangentDir : TEXCOORD4;
//v.2.0.7
float mirrorFlag : TEXCOORD5;
DECLARE_LIGHTMAP_OR_SH(lightmapUV, vertexSH, 6);
#if defined(_ADDITIONAL_LIGHTS_VERTEX) || (VERSION_LOWER(12, 0))
half4 fogFactorAndVertexLight : TEXCOORD7; // x: fogFactor, yzw: vertex light
#else
half fogFactor : TEXCOORD7;
#endif
# ifndef _MAIN_LIGHT_SHADOWS
float4 positionCS : TEXCOORD8;
int mainLightID : TEXCOORD9;
# else
float4 shadowCoord : TEXCOORD8;
float4 positionCS : TEXCOORD9;
int mainLightID : TEXCOORD10;
# endif
UNITY_VERTEX_INPUT_INSTANCE_ID
UNITY_VERTEX_OUTPUT_STEREO
//
#elif _IS_ANGELRING_ON
float2 uv1 : TEXCOORD1;
float4 posWorld : TEXCOORD2;
float3 normalDir : TEXCOORD3;
float3 tangentDir : TEXCOORD4;
float3 bitangentDir : TEXCOORD5;
//v.2.0.7
float mirrorFlag : TEXCOORD6;
DECLARE_LIGHTMAP_OR_SH(lightmapUV, vertexSH, 7);
#if defined(_ADDITIONAL_LIGHTS_VERTEX) || (VERSION_LOWER(12, 0))
half4 fogFactorAndVertexLight : TEXCOORD8; // x: fogFactor, yzw: vertex light
#else
half fogFactor : TEXCOORD8; // x: fogFactor, yzw: vertex light
#endif
# ifndef _MAIN_LIGHT_SHADOWS
float4 positionCS : TEXCOORD9;
int mainLightID : TEXCOORD10;
# else
float4 shadowCoord : TEXCOORD9;
float4 positionCS : TEXCOORD10;
int mainLightID : TEXCOORD11;
# endif
UNITY_VERTEX_INPUT_INSTANCE_ID
UNITY_VERTEX_OUTPUT_STEREO
#else
LIGHTING_COORDS(7,8)
UNITY_FOG_COORDS(9)
#endif
//
};
// Abstraction over Light shading data.
struct UtsLight
{
float3 direction;
float3 color;
float distanceAttenuation;
real shadowAttenuation;
int type;
};
///////////////////////////////////////////////////////////////////////////////
// Light Abstraction //
/////////////////////////////////////////////////////////////////////////////
real MainLightRealtimeShadowUTS(float4 shadowCoord, float4 positionCS)
{
#if !defined(MAIN_LIGHT_CALCULATE_SHADOWS)
return 1.0h;
#endif
ShadowSamplingData shadowSamplingData = GetMainLightShadowSamplingData();
half4 shadowParams = GetMainLightShadowParams();
#if defined(UTS_USE_RAYTRACING_SHADOW)
float w = (positionCS.w == 0) ? 0.00001 : positionCS.w;
float4 screenPos = ComputeScreenPos(positionCS/ w);
return SAMPLE_TEXTURE2D(_RaytracedHardShadow, sampler_RaytracedHardShadow, screenPos);
#endif
return SampleShadowmap(TEXTURE2D_ARGS(_MainLightShadowmapTexture, sampler_MainLightShadowmapTexture), shadowCoord, shadowSamplingData, shadowParams, false);
}
real AdditionalLightRealtimeShadowUTS(int lightIndex, float3 positionWS, float4 positionCS)
{
#if defined(UTS_USE_RAYTRACING_SHADOW)
float w = (positionCS.w == 0) ? 0.00001 : positionCS.w;
float4 screenPos = ComputeScreenPos(positionCS / w);
return SAMPLE_TEXTURE2D(_RaytracedHardShadow, sampler_RaytracedHardShadow, screenPos);
#endif // UTS_USE_RAYTRACING_SHADOW
#if !defined(ADDITIONAL_LIGHT_CALCULATE_SHADOWS)
return 1.0h;
#endif
ShadowSamplingData shadowSamplingData = GetAdditionalLightShadowSamplingData();
#if USE_STRUCTURED_BUFFER_FOR_LIGHT_DATA
lightIndex = _AdditionalShadowsIndices[lightIndex];
// We have to branch here as otherwise we would sample buffer with lightIndex == -1.
// However this should be ok for platforms that store light in SSBO.
UNITY_BRANCH
if (lightIndex < 0)
return 1.0;
float4 shadowCoord = mul(_AdditionalShadowsBuffer[lightIndex].worldToShadowMatrix, float4(positionWS, 1.0));
#else
float4 shadowCoord = mul(_AdditionalLightsWorldToShadow[lightIndex], float4(positionWS, 1.0));
#endif
half4 shadowParams = GetAdditionalLightShadowParams(lightIndex);
return SampleShadowmap(TEXTURE2D_ARGS(_AdditionalLightsShadowmapTexture, sampler_AdditionalLightsShadowmapTexture), shadowCoord, shadowSamplingData, shadowParams, true);
}
UtsLight GetUrpMainUtsLight()
{
UtsLight light;
light.direction = _MainLightPosition.xyz;
// unity_LightData.z is 1 when not culled by the culling mask, otherwise 0.
light.distanceAttenuation = unity_LightData.z;
#if defined(LIGHTMAP_ON) || defined(_MIXED_LIGHTING_SUBTRACTIVE)
// unity_ProbesOcclusion.x is the mixed light probe occlusion data
light.distanceAttenuation *= unity_ProbesOcclusion.x;
#endif
light.shadowAttenuation = 1.0;
light.color = _MainLightColor.rgb;
light.type = _MainLightPosition.w;
return light;
}
UtsLight GetUrpMainUtsLight(float4 shadowCoord, float4 positionCS)
{
UtsLight light = GetUrpMainUtsLight();
light.shadowAttenuation = MainLightRealtimeShadowUTS(shadowCoord, positionCS);
return light;
}
// Fills a light struct given a perObjectLightIndex
UtsLight GetAdditionalPerObjectUtsLight(int perObjectLightIndex, float3 positionWS,float4 positionCS)
{
// Abstraction over Light input constants
#if USE_STRUCTURED_BUFFER_FOR_LIGHT_DATA
float4 lightPositionWS = _AdditionalLightsBuffer[perObjectLightIndex].position;
half3 color = _AdditionalLightsBuffer[perObjectLightIndex].color.rgb;
half4 distanceAndSpotAttenuation = _AdditionalLightsBuffer[perObjectLightIndex].attenuation;
half4 spotDirection = _AdditionalLightsBuffer[perObjectLightIndex].spotDirection;
half4 lightOcclusionProbeInfo = _AdditionalLightsBuffer[perObjectLightIndex].occlusionProbeChannels;
#else
float4 lightPositionWS = _AdditionalLightsPosition[perObjectLightIndex];
half3 color = _AdditionalLightsColor[perObjectLightIndex].rgb;
half4 distanceAndSpotAttenuation = _AdditionalLightsAttenuation[perObjectLightIndex];
half4 spotDirection = _AdditionalLightsSpotDir[perObjectLightIndex];
half4 lightOcclusionProbeInfo = _AdditionalLightsOcclusionProbes[perObjectLightIndex];
#endif
// Directional lights store direction in lightPosition.xyz and have .w set to 0.0.
// This way the following code will work for both directional and punctual lights.
float3 lightVector = lightPositionWS.xyz - positionWS * lightPositionWS.w;
float distanceSqr = max(dot(lightVector, lightVector), HALF_MIN);
half3 lightDirection = half3(lightVector * rsqrt(distanceSqr));
half attenuation = DistanceAttenuation(distanceSqr, distanceAndSpotAttenuation.xy) * AngleAttenuation(spotDirection.xyz, lightDirection, distanceAndSpotAttenuation.zw);
UtsLight light;
light.direction = lightDirection;
light.distanceAttenuation = attenuation;
light.shadowAttenuation = AdditionalLightRealtimeShadowUTS(perObjectLightIndex, positionWS, positionCS);
light.color = color;
light.type = lightPositionWS.w;
// In case we're using light probes, we can sample the attenuation from the `unity_ProbesOcclusion`
#if defined(LIGHTMAP_ON) || defined(_MIXED_LIGHTING_SUBTRACTIVE)
// First find the probe channel from the light.
// Then sample `unity_ProbesOcclusion` for the baked occlusion.
// If the light is not baked, the channel is -1, and we need to apply no occlusion.
// probeChannel is the index in 'unity_ProbesOcclusion' that holds the proper occlusion value.
int probeChannel = lightOcclusionProbeInfo.x;
// lightProbeContribution is set to 0 if we are indeed using a probe, otherwise set to 1.
half lightProbeContribution = lightOcclusionProbeInfo.y;
half probeOcclusionValue = unity_ProbesOcclusion[probeChannel];
light.distanceAttenuation *= max(probeOcclusionValue, lightProbeContribution);
#endif
return light;
}
// Fills a light struct given a loop i index. This will convert the i
// index to a perObjectLightIndex
UtsLight GetAdditionalUtsLight(uint i, float3 positionWS,float4 positionCS)
{
int perObjectLightIndex = GetPerObjectLightIndex(i);
return GetAdditionalPerObjectUtsLight(perObjectLightIndex, positionWS, positionCS);
}
half3 GetLightColor(UtsLight light)
{
return light.color * light.distanceAttenuation;
}
#define INIT_UTSLIGHT(utslight) \
utslight.direction = 0; \
utslight.color = 0; \
utslight.distanceAttenuation = 0; \
utslight.shadowAttenuation = 0; \
utslight.type = 0
int DetermineUTS_MainLightIndex(float3 posW, float4 shadowCoord, float4 positionCS)
{
UtsLight mainLight;
INIT_UTSLIGHT(mainLight);
int mainLightIndex = MAINLIGHT_NOT_FOUND;
UtsLight nextLight = GetUrpMainUtsLight(shadowCoord, positionCS);
if (nextLight.distanceAttenuation > mainLight.distanceAttenuation && nextLight.type == 0)
{
mainLight = nextLight;
mainLightIndex = MAINLIGHT_IS_MAINLIGHT;
}
int lightCount = GetAdditionalLightsCount();
for (int ii = 0; ii < lightCount; ++ii)
{
nextLight = GetAdditionalUtsLight(ii, posW, positionCS);
if (nextLight.distanceAttenuation > mainLight.distanceAttenuation && nextLight.type == 0)
{
mainLight = nextLight;
mainLightIndex = ii;
}
}
return mainLightIndex;
}
UtsLight GetMainUtsLightByID(int index,float3 posW, float4 shadowCoord, float4 positionCS)
{
UtsLight mainLight;
INIT_UTSLIGHT(mainLight);
if (index == MAINLIGHT_NOT_FOUND)
{
return mainLight;
}
if (index == MAINLIGHT_IS_MAINLIGHT)
{
return GetUrpMainUtsLight(shadowCoord, positionCS);
}
return GetAdditionalUtsLight(index, posW, positionCS);
}
VertexOutput vert (VertexInput v) {
VertexOutput o = (VertexOutput)0;
UNITY_SETUP_INSTANCE_ID(v);
UNITY_TRANSFER_INSTANCE_ID(v, o);
UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
o.uv0 = v.texcoord0;
//v.2.0.4
#ifdef _IS_ANGELRING_OFF
//
#elif _IS_ANGELRING_ON
o.uv1 = v.texcoord1;
#endif
o.normalDir = UnityObjectToWorldNormal(v.normal);
o.tangentDir = normalize( mul( unity_ObjectToWorld, float4( v.tangent.xyz, 0.0 ) ).xyz );
o.bitangentDir = normalize(cross(o.normalDir, o.tangentDir) * v.tangent.w);
o.posWorld = mul(unity_ObjectToWorld, v.vertex);
o.pos = UnityObjectToClipPos( v.vertex );
//v.2.0.7 Detection of the inside the mirror (right or left-handed) o.mirrorFlag = -1 then "inside the mirror". Used to tell whether we are rendering a mirror reflection.
//[0]Right unit vector [1] Up unit vector [2] -1 * world space camera Forward unit vector
float3 crossFwd = cross(UNITY_MATRIX_V[0].xyz, UNITY_MATRIX_V[1].xyz);
o.mirrorFlag = dot(crossFwd, UNITY_MATRIX_V[2].xyz) < 0 ? 1 : -1;
//
float3 positionWS = TransformObjectToWorld(v.vertex.xyz);
float4 positionCS = TransformWorldToHClip(positionWS);
half3 vertexLight = VertexLighting(o.posWorld.xyz, o.normalDir);
half fogFactor = ComputeFogFactor(positionCS.z);
OUTPUT_LIGHTMAP_UV(v.lightmapUV, unity_LightmapST, o.lightmapUV);
OUTPUT_SH(o.normalDir.xyz, o.vertexSH);
# if defined(_ADDITIONAL_LIGHTS_VERTEX) || (VERSION_LOWER(12, 0))
o.fogFactorAndVertexLight = half4(fogFactor, vertexLight);
#else
o.fogFactor = fogFactor;
#endif
o.positionCS = positionCS;
#if defined(_MAIN_LIGHT_SHADOWS) && !defined(_RECEIVE_SHADOWS_OFF)
#if SHADOWS_SCREEN
o.shadowCoord = ComputeScreenPos(positionCS);
#else
o.shadowCoord = TransformWorldToShadowCoord(o.posWorld.xyz);
#endif
o.mainLightID = DetermineUTS_MainLightIndex(o.posWorld.xyz, o.shadowCoord, positionCS);
#else
o.mainLightID = DetermineUTS_MainLightIndex(o.posWorld.xyz, 0, positionCS);
#endif
return o;
}
#if defined(_SHADINGGRADEMAP)
#include "UniversalToonBodyShadingGradeMap.hlsl"
#else //#if defined(_SHADINGGRADEMAP)
#include "UniversalToonBodyDoubleShadeWithFeather.hlsl"
#endif //#if defined(_SHADINGGRADEMAP)
float4 frag(VertexOutput i, fixed facing : VFACE) : SV_TARGET
{
#if defined(_SHADINGGRADEMAP)
return fragShadingGradeMap(i, facing);
#else
return fragDoubleShadeFeather(i, facing);
#endif
}
```

## Outline properties
| Property | Function |
| ------------------------- | ------------------------------------------------------------ |
| `OUTLINE MODE` | Selects how the back-face extrusion outline is generated: `NML` (inverted normals) or `POS` (position scaling). NML is used in most cases, but for meshes made only of hard edges (such as a cube), POS prevents the outline from breaking apart. POS works better for simple shapes; NML works better for characters and anything with a complex silhouette. |
| `Outline_Width` | Outline width. **Note: this value depends on the scale at which the model was imported into Unity**, which means you have to be careful if the scale is not 1. |
| `Farthest_Distance` | The outline width changes with the distance between the camera and the object; beyond this distance the width becomes 0. |
| `Nearest_Distance` | The outline width changes with the distance between the camera and the object; closer than this distance the width equals `Outline_Width`. |
| `Outline_Sampler` | Lets artists manually hide the outline in specified areas. |
| `Outline_Color` | Outline color. |
| `Is_BlendBaseColor` | Whether to blend with the BaseColor. |
| `Is_LightColor_Outline` | Whether the outline is affected by the light color. |
| `Is_OutlineTex` | Whether an outline texture is used. |
| `OutlineTex` | Outline texture, used to control the outline color. |
| `Offset_Camera_Z` | Offsets the outline along the camera-space Z axis; useful for tweaking where the outline meets spikes and intersections on the model. Usually left at 0. |
| `Is_BakedNormal` | Whether to use normals from a baked normal map. |
| `BakedNormal for Outline` | The baked normal map to use. |
## Adjusting outline strength
>`_OutlineMode` has two modes, NormalDirection and PositionScaling; apart from simple primitive meshes, normal-direction extrusion is generally used.
### Outline Sampler
![](https://github.com/unity3d-jp/UnityChanToonShaderVer2_Project/raw/release/urp/2.3.0/Documentation~/Images_jpg/0906-18_01.jpg)
Black means "no line"; white means 100% width.
### BakedNormal
Samples the baked normal map and assigns its values to the vertex normals.
### Offset_Camera_Z
>`Offset_Camera_Z` is simply a depth offset, useful for tweaking where the outline meets spikes and intersections on the model; personally I do not think it is necessary.
The effect looks like this:
![](https://github.com/unity3d-jp/UnityChanToonShaderVer2_Project/raw/release/urp/2.3.0/Documentation~/Images_jpg/0205-11_01.jpg)
## Vertex shader
Mainly offsets the vertex positions, samples the baked normal map and feeds it into the vertex normals, and passes the vertex data along. It computes the distance between the object and the camera, smoothsteps it between the configured farthest and nearest distances to get a distance factor, and multiplies that by `_Outline_Width*0.001`.
>Personally I think this ratio is questionable; using the camera matrix parameters as the scale factor would be better.
```c#
float Set_Outline_Width = (_Outline_Width*0.001*smoothstep( _Farthest_Distance, _Nearest_Distance, distance(objPos.rgb,_WorldSpaceCameraPos) )*_Outline_Sampler_var.rgb).r;
//_ZOverDrawMode is 1 when Transparent is enabled, 0 otherwise.
Set_Outline_Width *= (1.0f - _ZOverDrawMode);
```
```c#
#ifdef _OUTLINE_NML
//v.2.0.4.3 baked Normal Texture for Outline
o.pos = UnityObjectToClipPos(lerp(float4(v.vertex.xyz + v.normal*Set_Outline_Width,1), float4(v.vertex.xyz + _BakedNormalDir*Set_Outline_Width,1),_Is_BakedNormal));
#elif _OUTLINE_POS
Set_Outline_Width = Set_Outline_Width*2;
float signVar = dot(normalize(v.vertex),normalize(v.normal))<0 ? -1 : 1;
o.pos = UnityObjectToClipPos(float4(v.vertex.xyz + signVar*normalize(v.vertex)*Set_Outline_Width, 1));
#endif
//v.2.0.7.5
o.pos.z = o.pos.z + _Offset_Z * _ClipCameraPos.z;
return o;
```
## Fragment shader
Mainly determines the outline color: `outline color = light color * BaseMap * BaseColor * (OutlineTex)`. It also provides clipping based on the ClippingMask or on the BaseMap alpha channel.
>`_Is_LightColor_Outline` is set from the `ShaderGUI` options, but it feels like a MatCap, a world-mapped ramp, or environment probes could be added here for richer results.
```c#
//compute the light color (fall back to the ambient color below 0.05) and its luminance
half3 ambientSkyColor = unity_AmbientSky.rgb>0.05 ? unity_AmbientSky.rgb*_Unlit_Intensity : half3(0.05,0.05,0.05)*_Unlit_Intensity;
float3 lightColor = _LightColor0.rgb >0.05 ? _LightColor0.rgb : ambientSkyColor.rgb;
float lightColorIntensity = (0.299*lightColor.r + 0.587*lightColor.g + 0.114*lightColor.b);
lightColor = lightColorIntensity<1 ? lightColor : lightColor/lightColorIntensity;//normalize the color by its luminance when it exceeds 1.
lightColor = lerp(half3(1.0,1.0,1.0), lightColor, _Is_LightColor_Outline);
//compute BaseMap*BaseColor and sample the OutlineTex.
float2 Set_UV0 = i.uv0;
float4 _MainTex_var = SAMPLE_TEXTURE2D(_MainTex,sampler_MainTex, TRANSFORM_TEX(Set_UV0, _MainTex));
float3 Set_BaseColor = _BaseColor.rgb*_MainTex_var.rgb;
float3 _Is_BlendBaseColor_var = lerp( _Outline_Color.rgb*lightColor, (_Outline_Color.rgb*Set_BaseColor*Set_BaseColor*lightColor), _Is_BlendBaseColor );
float3 _OutlineTex_var = tex2D(_OutlineTex,TRANSFORM_TEX(Set_UV0, _OutlineTex)).rgb;
#ifdef _IS_OUTLINE_CLIPPING_NO
float3 Set_Outline_Color = lerp(_Is_BlendBaseColor_var, _OutlineTex_var.rgb*_Outline_Color.rgb*lightColor, _Is_OutlineTex );
return float4(Set_Outline_Color,1.0);
#elif _IS_OUTLINE_CLIPPING_YES
//with clipping enabled, clip using the ClippingMask or the BaseMap alpha channel.
float4 _ClippingMask_var = SAMPLE_TEXTURE2D(_ClippingMask, sampler_MainTex, TRANSFORM_TEX(Set_UV0, _ClippingMask));
float Set_MainTexAlpha = _MainTex_var.a;
float _IsBaseMapAlphaAsClippingMask_var = lerp( _ClippingMask_var.r, Set_MainTexAlpha, _IsBaseMapAlphaAsClippingMask );
float _Inverse_Clipping_var = lerp( _IsBaseMapAlphaAsClippingMask_var, (1.0 - _IsBaseMapAlphaAsClippingMask_var), _Inverse_Clipping );
float Set_Clipping = saturate((_Inverse_Clipping_var+_Clipping_Level));
clip(Set_Clipping - 0.5);
float4 Set_Outline_Color = lerp( float4(_Is_BlendBaseColor_var,Set_Clipping), float4((_OutlineTex_var.rgb*_Outline_Color.rgb*lightColor),Set_Clipping), _Is_OutlineTex );
return Set_Outline_Color;
#endif
}
```
## Full code
```c#
uniform float4 _LightColor0; // this is not set in c# code ?
struct VertexInput {
float4 vertex : POSITION;
float3 normal : NORMAL;
float4 tangent : TANGENT;
float2 texcoord0 : TEXCOORD0;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct VertexOutput {
float4 pos : SV_POSITION;
float2 uv0 : TEXCOORD0;
float3 normalDir : TEXCOORD1;
float3 tangentDir : TEXCOORD2;
float3 bitangentDir : TEXCOORD3;
UNITY_VERTEX_OUTPUT_STEREO
};
VertexOutput vert (VertexInput v) {
VertexOutput o = (VertexOutput)0;
UNITY_SETUP_INSTANCE_ID(v);
UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
o.uv0 = v.texcoord0;
float4 objPos = mul ( unity_ObjectToWorld, float4(0,0,0,1) ); //object pivot position in world space
float2 Set_UV0 = o.uv0;
//TRANSFORM_TEX(v.texcoord,_MainTex) is equivalent to o.uv = v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw;
float4 _Outline_Sampler_var = tex2Dlod(_Outline_Sampler,float4(TRANSFORM_TEX(Set_UV0, _Outline_Sampler),0.0,0));//scale and offset the UVs with _Outline_Sampler's ST, then sample _Outline_Sampler in the vertex shader.
//v.2.0.4.3 baked Normal Texture for Outline
//compute the normal and the tangent-space matrix
o.normalDir = UnityObjectToWorldNormal(v.normal);
o.tangentDir = normalize( mul( unity_ObjectToWorld, float4( v.tangent.xyz, 0.0 ) ).xyz );
o.bitangentDir = normalize(cross(o.normalDir, o.tangentDir) * v.tangent.w);
float3x3 tangentTransform = float3x3( o.tangentDir, o.bitangentDir, o.normalDir);
//UnpackNormal() can't be used, and so as follows. Do not specify a bump for the texture to be used.
//sample the _BakedNormal texture and reconstruct the baked normal direction
float4 _BakedNormal_var = (tex2Dlod(_BakedNormal,float4(TRANSFORM_TEX(Set_UV0, _BakedNormal),0.0,0)) * 2 - 1);
float3 _BakedNormalDir = normalize(mul(_BakedNormal_var.rgb, tangentTransform));
//end
float Set_Outline_Width = (_Outline_Width*0.001*smoothstep( _Farthest_Distance, _Nearest_Distance, distance(objPos.rgb,_WorldSpaceCameraPos) )*_Outline_Sampler_var.rgb).r;
Set_Outline_Width *= (1.0f - _ZOverDrawMode);
//v.2.0.7.5
//compute the camera clip position and set _Offset_Z per platform
float4 _ClipCameraPos = mul(UNITY_MATRIX_VP, float4(_WorldSpaceCameraPos.xyz, 1));
//v.2.0.7
#if defined(UNITY_REVERSED_Z)
//v.2.0.4.2 (DX)
_Offset_Z = _Offset_Z * -0.01;
#else
//OpenGL
_Offset_Z = _Offset_Z * 0.01;
#endif
//v2.0.4
//offset the vertex position according to the OutlineMode.
#ifdef _OUTLINE_NML
//v.2.0.4.3 baked Normal Texture for Outline
o.pos = UnityObjectToClipPos(lerp(float4(v.vertex.xyz + v.normal*Set_Outline_Width,1), float4(v.vertex.xyz + _BakedNormalDir*Set_Outline_Width,1),_Is_BakedNormal));
#elif _OUTLINE_POS
Set_Outline_Width = Set_Outline_Width*2;
float signVar = dot(normalize(v.vertex),normalize(v.normal))<0 ? -1 : 1;
o.pos = UnityObjectToClipPos(float4(v.vertex.xyz + signVar*normalize(v.vertex)*Set_Outline_Width, 1));
#endif
//v.2.0.7.5
o.pos.z = o.pos.z + _Offset_Z * _ClipCameraPos.z;
return o;
}
float4 frag(VertexOutput i) : SV_Target{
//v.2.0.5
if (_ZOverDrawMode > 0.99f)
{
return float4(1.0f, 1.0f, 1.0f, 1.0f); // but nothing should be drawn except Z value as colormask is set to 0
}
_Color = _BaseColor;
float4 objPos = mul ( unity_ObjectToWorld, float4(0,0,0,1) );
//v.2.0.7.5
half3 ambientSkyColor = unity_AmbientSky.rgb>0.05 ? unity_AmbientSky.rgb*_Unlit_Intensity : half3(0.05,0.05,0.05)*_Unlit_Intensity;
float3 lightColor = _LightColor0.rgb >0.05 ? _LightColor0.rgb : ambientSkyColor.rgb;
float lightColorIntensity = (0.299*lightColor.r + 0.587*lightColor.g + 0.114*lightColor.b);
lightColor = lightColorIntensity<1 ? lightColor : lightColor/lightColorIntensity;
lightColor = lerp(half3(1.0,1.0,1.0), lightColor, _Is_LightColor_Outline);
float2 Set_UV0 = i.uv0;
float4 _MainTex_var = SAMPLE_TEXTURE2D(_MainTex,sampler_MainTex, TRANSFORM_TEX(Set_UV0, _MainTex));
float3 Set_BaseColor = _BaseColor.rgb*_MainTex_var.rgb;
float3 _Is_BlendBaseColor_var = lerp( _Outline_Color.rgb*lightColor, (_Outline_Color.rgb*Set_BaseColor*Set_BaseColor*lightColor), _Is_BlendBaseColor );
//
float3 _OutlineTex_var = tex2D(_OutlineTex,TRANSFORM_TEX(Set_UV0, _OutlineTex)).rgb;
//v.2.0.7.5
#ifdef _IS_OUTLINE_CLIPPING_NO
float3 Set_Outline_Color = lerp(_Is_BlendBaseColor_var, _OutlineTex_var.rgb*_Outline_Color.rgb*lightColor, _Is_OutlineTex );
return float4(Set_Outline_Color,1.0);
#elif _IS_OUTLINE_CLIPPING_YES
float4 _ClippingMask_var = SAMPLE_TEXTURE2D(_ClippingMask, sampler_MainTex, TRANSFORM_TEX(Set_UV0, _ClippingMask));
float Set_MainTexAlpha = _MainTex_var.a;
float _IsBaseMapAlphaAsClippingMask_var = lerp( _ClippingMask_var.r, Set_MainTexAlpha, _IsBaseMapAlphaAsClippingMask );
float _Inverse_Clipping_var = lerp( _IsBaseMapAlphaAsClippingMask_var, (1.0 - _IsBaseMapAlphaAsClippingMask_var), _Inverse_Clipping );
float Set_Clipping = saturate((_Inverse_Clipping_var+_Clipping_Level));
clip(Set_Clipping - 0.5);
float4 Set_Outline_Color = lerp( float4(_Is_BlendBaseColor_var,Set_Clipping), float4((_OutlineTex_var.rgb*_Outline_Color.rgb*lightColor),Set_Clipping), _Is_OutlineTex );
return Set_Outline_Color;
#endif
}
```

## Overview
- `DoubleShadeWithFeather`: the standard workflow mode of UTS/UniversalToon. Allows two shade colors (double shade colors) and a gradation (feathering) between the colors.
- `ShadingGradeMap`: a more advanced workflow mode. On top of the DoubleShadeWithFeather features, this shader can also use a special map called the ShadingGradeMap.
![](https://github.com/unity3d-jp/UnityChanToonShaderVer2_Project/raw/release/urp/2.3.0/Documentation~/Images_jpg/URP_image035.jpg)
### Sample files
The sample files were removed after the `urp-2.2.0-notpackage` tag, so check out that commit to copy them out.
- ToonShader.unity: Settings for an illustration-style shader.
- ToonShader_CelLook.unity: Settings for a cel-style shader.
- ToonShader_Emissive.unity: Settings for a shader with an emissive.
- ToonShader_Firefly.unity: Multiple real-time point lights.
- AngelRing\AngelRing.unity: Angel ring and ShadingGradeMap sample.
- Baked Normal\Cube_HardEdge.unity: Baked Normal reference.
- BoxProjection\BoxProjection.unity: Lighting a dark room using Box Projection.
- EmissiveAnimation\EmisssiveAnimation.unity: EmissiveAnimation sample.
- LightAndShadows\LightAndShadows.unity: Comparison between the PBR shader and UTS2.
- MatCapMask\MatCapMask.unity: MatcapMask sample.
- Mirror\MirrorTest.unity: Sample scene checking for a mirror object.
- NormalMap\NormalMap.unity: Tricks for using the normal map with UTS2.
- PointLightTest\PointLightTest.unity: Sample of cel-style content with point lights.
- Sample\Sample.unity: Introduction to the basic UTS2 shaders.
- ShaderBall\ShaderBall.unity: UTS2 settings on an example shader ball.
## Textures
### Basic textures
- Base Color
- 1st Shade Color
- 2nd Shade Color
Besides the basic color textures, several additional customization options are accepted, for example:
- High Color
- Rim Light
- MatCap
- Emissive
### Normal map
In `UTS/UniversalToon`, the normal map is usually used to smooth the shading gradation.
It is also used together with the scale setting to tweak skin texture, and together with `MatCap` to express hair highlights.
| Option | Function | Property |
|:-------------------|:-------------------|:-------------------|
| NormalMap Effectiveness | Selects whether the normal map affects each color. When a button is Off, that color ignores the normal map and is evaluated from the object's own geometry. | |
| `3 Basic Colors` | Enable when you want the normal map reflected in the basic colors. | _Is_NormalMapToBase |
| `HighColor` | Enable when you want the normal map to affect the high color. | _Is_NormalMapToHighColor |
| `RimLight` | Enable when you want the normal map to affect the rim light. | _Is_NormalMapToRimLight |
### PositionMap for adding shaded areas
Used to force specified areas into shadow regardless of the lighting. Only usable in the `UniversalToonBodyShadingGradeMap` mode.
| Option | Function | Property |
|:-------------------|:-------------------|:-------------------|
| `1st Shade Position Map` | Uses a position map to force the `1st Shade Color` position, independent of the scene lighting; specifies the areas that should always be shaded. | _Set_1st_ShadePosition |
| `2nd Shade Position Map` | Uses a position map to force the `2nd Shade Color` position, independent of the scene lighting; specifies the areas that should always be shaded (it also affects the `1st Shade Color` position map). | _Set_2nd_ShadePosition |
To display the second shade color independently of the lighting, make sure to fill the area where the first and second shade position maps overlap. That way, even when shadows from other lights fall on the second shade color area, it keeps being displayed.
### Shading Grade Map
Adjusts the shade intensity based on the lighting, allowing the first and/or second shade colors to be controlled at the UV-point level.
The fine control this map offers makes effects like "hide the wrinkles on clothes when light hits them" possible. Applying something like an ambient-occlusion map as the shading grade map makes shadows fall more readily according to the lighting, which is useful for shading that follows hair bangs or concave parts of clothing.
### Rim light (RimLight)
The `RimLight`, `LightDirection_MaskOn`, and `Add_Antipodean_RimLight` effects:
![](https://github.com/unity3d-jp/UnityChanToonShaderVer2_Project/raw/release/urp/2.3.0/Documentation~/Images_jpg/UT2018_UTS2_SuperTips_14.jpg)
A rim light normally appears around the object's edges as seen from the camera. In UTS2, you can adjust where the rim light appears relative to the position of the main light ("LightDirection mask").
You can also place a rim light on the side opposite the light source, rendering a kind of "bounced light", with "Add Antipodean_RimLight".
If you only want the rim light on the side opposite the light source and want to cut the rim light on the lit side, set the light-direction rim color to black (0,0,0).
![](https://github.com/unity3d-jp/UnityChanToonShaderVer2_Project/raw/release/urp/2.3.0/Documentation~/Images_jpg/UT2018_UTS2_SuperTips_15.jpg)
### Angel ring
https://github.com/unity3d-jp/UnityChanToonShaderVer2_Project/blob/release/urp/2.3.0/Documentation~/index.md#-making-materials-for-angel-ring
To make an angel-ring material, first project the hair mesh onto the UVs from the front and author the texture from that projection.
## Colors
- High color (highlight)
- Base color (the lit part)
- 1st shade color
- 2nd shade color
## Stencil feature
![](https://github.com/unity3d-jp/UnityChanToonShaderVer2_Project/raw/release/urp/2.3.0/Documentation~/Images_jpg/URP_image036.jpg)
Can be used to create the effect of eyelashes showing over the hair.
## Half-Lambert model
>One problem remains: in areas the light cannot reach, the model usually appears completely black with no shading variation, which makes the unlit side look flat and lose its detail.
To address this, Valve proposed a technique while developing Half-Life; since it is a simple modification of the Lambert lighting model, it is called the half-Lambert lighting model.
$$C_{diffuse}=(C_{light} \cdot M_{diffuse})\,\bigl(0.5\,(\mathbf{n}\cdot \mathbf{I}) + 0.5\bigr)$$
This remaps the result of n·I from [-1, 1] into [0, 1]. In the original Lambert model, every point on the unlit side maps to the same value, 0; in the half-Lambert model the unlit side still shows shading variation, because different dot products map to different values.
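A minimal sketch of half-Lambert diffuse in HLSL; the function and variable names are illustrative, and this mirrors the `_HalfLambert_var` computation that appears in the UTS code below:
```hlsl
// Standard Lambert clamps n.l to [0,1]; half-Lambert remaps it instead.
float3 HalfLambertDiffuse(float3 lightColor, float3 albedo, float3 normalWS, float3 lightDirWS)
{
    float ndl = dot(normalize(normalWS), normalize(lightDirWS));   // [-1, 1]
    float halfLambert = ndl * 0.5 + 0.5;                           // [0, 1], never fully black
    return lightColor * albedo * halfLambert;
}
```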
## High color (specular) calculation
The high-color mask is computed with the half-Lambert form:
```c#
//compute the high-color mask from the half vector (half-Lambert style)
float _Specular_var = 0.5*dot(halfDirection,lerp( i.normalDir, normalDirection, _Is_NormalMapToHighColor ))+0.5; // Specular
//Step(a,x) => x>=a ? 1 : 0; compute the high-color mask
float _TweakHighColorMask_var = (saturate((_Set_HighColorMask_var.g+_Tweak_HighColorMaskLevel))*lerp( (1.0 - step(_Specular_var,(1.0 - pow(abs(_HighColor_Power),5)))), pow(abs(_Specular_var),exp2(lerp(11,1,_HighColor_Power))), _Is_SpecularToHighColor ));
```
High color = high-color texture * configured high color * light color (optional):
```c#
float4 _HighColor_Tex_var = tex2D(_HighColor_Tex, TRANSFORM_TEX(Set_UV0, _HighColor_Tex));
float3 _HighColor_var = (lerp( (_HighColor_Tex_var.rgb*_HighColor.rgb), ((_HighColor_Tex_var.rgb*_HighColor.rgb)*Set_LightColor), _Is_LightColor_HighColor )*_TweakHighColorMask_var);
//Composition: 3 Basic Colors and HighColor as Set_HighColor
float3 Set_HighColor = (lerp(SATURATE_IF_SDR((Set_FinalBaseColor-_TweakHighColorMask_var)), Set_FinalBaseColor, lerp(_Is_BlendAddToHiColor,1.0,_Is_SpecularToHighColor) )+lerp( _HighColor_var, (_HighColor_var*((1.0 - Set_FinalShadowMask)+(Set_FinalShadowMask*_TweakHighColorOnShadow))), _Is_UseTweakHighColorOnShadow ));
```
## Rim light
Rim light color = rim-light mask texture * configured rim-light color * light color (optional); other tuning parameters include `_Ap_RimLight_Power`, among others.
```c#
float4 _Set_RimLightMask_var = tex2D(_Set_RimLightMask, TRANSFORM_TEX(Set_UV0, _Set_RimLightMask));
float3 _Is_LightColor_RimLight_var = lerp( _RimLightColor.rgb, (_RimLightColor.rgb*Set_LightColor), _Is_LightColor_RimLight );
//rim area = 1 - dot(ViewDir, Normal), then pow(_RimArea_var, exp2(lerp(3,0,_RimLight_Power))), finally run through the usual UTS feather formula
float _RimArea_var = abs(1.0 - dot(lerp( i.normalDir, normalDirection, _Is_NormalMapToRimLight ),viewDirection));
float _RimLightPower_var = pow(_RimArea_var,exp2(lerp(3,0,_RimLight_Power)));
float _Rimlight_InsideMask_var = saturate(lerp( (0.0 + ( (_RimLightPower_var - _RimLight_InsideMask) * (1.0 - 0.0) ) / (1.0 - _RimLight_InsideMask)), step(_RimLight_InsideMask,_RimLightPower_var), _RimLight_FeatherOff ));
float _VertHalfLambert_var = 0.5*dot(i.normalDir,lightDirection)+0.5;
float3 _LightDirection_MaskOn_var = lerp( (_Is_LightColor_RimLight_var*_Rimlight_InsideMask_var), (_Is_LightColor_RimLight_var*saturate((_Rimlight_InsideMask_var-((1.0 - _VertHalfLambert_var)+_Tweak_LightDirection_MaskLevel)))), _LightDirection_MaskOn );
float _ApRimLightPower_var = pow(_RimArea_var,exp2(lerp(3,0,_Ap_RimLight_Power)));
float3 Set_RimLight = (SATURATE_IF_SDR((_Set_RimLightMask_var.g+_Tweak_RimLightMaskLevel))*lerp( _LightDirection_MaskOn_var, (_LightDirection_MaskOn_var+(lerp( _Ap_RimLightColor.rgb, (_Ap_RimLightColor.rgb*Set_LightColor), _Is_LightColor_Ap_RimLight )*saturate((lerp( (0.0 + ( (_ApRimLightPower_var - _RimLight_InsideMask) * (1.0 - 0.0) ) / (1.0 - _RimLight_InsideMask)), step(_RimLight_InsideMask,_ApRimLightPower_var), _Ap_RimLight_FeatherOff )-(saturate(_VertHalfLambert_var)+_Tweak_LightDirection_MaskLevel))))), _Add_Antipodean_RimLight ));
//Composition: HighColor and RimLight as _RimLight_var
float3 _RimLight_var = lerp( Set_HighColor, (Set_HighColor+Set_RimLight), _RimLight );
```
## MatCap and anisotropic-style hair highlights
Older UTS setups used a MatCap to define the hair highlight color and region, then used a normal map to approximate an anisotropic look:
```c#
//Matcap
//v.2.0.6 : CameraRolling Stabilizer
//Mirror Script Determination: if sign_Mirror = -1, determine "Inside the mirror".
//v.2.0.7
fixed _sign_Mirror = i.mirrorFlag;
//
float3 _Camera_Right = UNITY_MATRIX_V[0].xyz;
float3 _Camera_Front = UNITY_MATRIX_V[2].xyz;
float3 _Up_Unit = float3(0, 1, 0);
float3 _Right_Axis = cross(_Camera_Front, _Up_Unit);
//Invert if it's "inside the mirror".
if(_sign_Mirror < 0){
_Right_Axis = -1 * _Right_Axis;
_Rotate_MatCapUV = -1 * _Rotate_MatCapUV;
}else{
_Right_Axis = _Right_Axis;
}
float _Camera_Right_Magnitude = sqrt(_Camera_Right.x*_Camera_Right.x + _Camera_Right.y*_Camera_Right.y + _Camera_Right.z*_Camera_Right.z);
float _Right_Axis_Magnitude = sqrt(_Right_Axis.x*_Right_Axis.x + _Right_Axis.y*_Right_Axis.y + _Right_Axis.z*_Right_Axis.z);
float _Camera_Roll_Cos = dot(_Right_Axis, _Camera_Right) / (_Right_Axis_Magnitude * _Camera_Right_Magnitude);
float _Camera_Roll = acos(clamp(_Camera_Roll_Cos, -1, 1));
fixed _Camera_Dir = _Camera_Right.y < 0 ? -1 : 1;
float _Rot_MatCapUV_var_ang = (_Rotate_MatCapUV*3.141592654) - _Camera_Dir*_Camera_Roll*_CameraRolling_Stabilizer;
//v.2.0.7
float2 _Rot_MatCapNmUV_var = RotateUV(Set_UV0, (_Rotate_NormalMapForMatCapUV*3.141592654), float2(0.5, 0.5), 1.0);
//V.2.0.6
float3 _NormalMapForMatCap_var = UnpackNormalScale(tex2D(_NormalMapForMatCap, TRANSFORM_TEX(_Rot_MatCapNmUV_var, _NormalMapForMatCap)), _BumpScaleMatcap);
//v.2.0.5: MatCap with camera skew correction
float3 viewNormal = (mul(UNITY_MATRIX_V, float4(lerp( i.normalDir, mul( _NormalMapForMatCap_var.rgb, tangentTransform ).rgb, _Is_NormalMapForMatCap ),0))).rgb;
float3 NormalBlend_MatcapUV_Detail = viewNormal.rgb * float3(-1,-1,1);
float3 NormalBlend_MatcapUV_Base = (mul( UNITY_MATRIX_V, float4(viewDirection,0) ).rgb*float3(-1,-1,1)) + float3(0,0,1);
float3 noSknewViewNormal = NormalBlend_MatcapUV_Base*dot(NormalBlend_MatcapUV_Base, NormalBlend_MatcapUV_Detail)/NormalBlend_MatcapUV_Base.b - NormalBlend_MatcapUV_Detail;
float2 _ViewNormalAsMatCapUV = (lerp(noSknewViewNormal,viewNormal,_Is_Ortho).rg*0.5)+0.5;
//
//v.2.0.7
float2 _Rot_MatCapUV_var = RotateUV((0.0 + ((_ViewNormalAsMatCapUV - (0.0+_Tweak_MatCapUV)) * (1.0 - 0.0) ) / ((1.0-_Tweak_MatCapUV) - (0.0+_Tweak_MatCapUV))), _Rot_MatCapUV_var_ang, float2(0.5, 0.5), 1.0);
//If it is "inside the mirror", flip the UV left and right.
if(_sign_Mirror < 0){
_Rot_MatCapUV_var.x = 1-_Rot_MatCapUV_var.x;
}else{
_Rot_MatCapUV_var = _Rot_MatCapUV_var;
}
float4 _MatCap_Sampler_var = tex2Dlod(_MatCap_Sampler, float4(TRANSFORM_TEX(_Rot_MatCapUV_var, _MatCap_Sampler), 0.0, _BlurLevelMatcap));
float4 _Set_MatcapMask_var = tex2D(_Set_MatcapMask, TRANSFORM_TEX(Set_UV0, _Set_MatcapMask));
```
## UniversalToonBodyShadingGradeMap
Composite formula:
$$1-\frac{\mathrm{HalfLambert} - (\mathrm{Step}-\mathrm{Feather})}{\mathrm{Feather}}$$
### Initializing data
Initialize the surface data, InputData, and Varyings, and fill them in.
### Light and ambient data
```c#
//compute indirect lighting: indirect diffuse * surface diffuse plus indirect specular * surface specular
half3 envColor = GlobalIlluminationUTS(brdfData, inputData.bakedGI, surfaceData.occlusion, inputData.normalWS, inputData.viewDirectionWS);
envColor *= 1.8f;
//fetch the main light parameters: direction, color, distanceAttenuation, shadowAttenuation, type
UtsLight mainLight = GetMainUtsLightByID(i.mainLightID, i.posWorld.xyz, inputData.shadowCoord, i.positionCS);
half3 mainLightColor = GetLightColor(mainLight);
real shadowAttenuation = 1.0;
# ifdef _MAIN_LIGHT_SHADOWS
shadowAttenuation = mainLight.shadowAttenuation;
# endif
```
### Enabling clipping
```c#
float4 _ClippingMask_var = SAMPLE_TEXTURE2D(_ClippingMask, sampler_MainTex, TRANSFORM_TEX(Set_UV0, _ClippingMask));
float Set_MainTexAlpha = _MainTex_var.a;
float _IsBaseMapAlphaAsClippingMask_var = lerp( _ClippingMask_var.r, Set_MainTexAlpha, _IsBaseMapAlphaAsClippingMask );
float _Inverse_Clipping_var = lerp( _IsBaseMapAlphaAsClippingMask_var, (1.0 - _IsBaseMapAlphaAsClippingMask_var), _Inverse_Clipping );
float Set_Clipping = saturate((_Inverse_Clipping_var+_Clipping_Level));
clip(Set_Clipping - 0.5);
```
### Computing the light direction and color
```c#
float3 defaultLightDirection = normalize(UNITY_MATRIX_V[2].xyz + UNITY_MATRIX_V[1].xyz);
//v.2.0.5
float3 defaultLightColor = saturate(max(half3(0.05,0.05,0.05)*_Unlit_Intensity,max(ShadeSH9(half4(0.0, 0.0, 0.0, 1.0)),ShadeSH9(half4(0.0, -1.0, 0.0, 1.0)).rgb)*_Unlit_Intensity));
//build a custom light direction from the externally supplied _Offset_X_Axis_BLD, _Offset_Y_Axis_BLD, _Offset_Z_Axis_BLD
float3 customLightDirection = normalize(mul( unity_ObjectToWorld, float4(((float3(1.0,0.0,0.0)*_Offset_X_Axis_BLD*10)+(float3(0.0,1.0,0.0)*_Offset_Y_Axis_BLD*10)+(float3(0.0,0.0,-1.0)*lerp(-1.0,1.0,_Inverse_Z_Axis_BLD))),0)).xyz);
//the HLSL intrinsic any() tests whether a value is non-zero
float3 lightDirection = normalize(lerp(defaultLightDirection, mainLight.direction.xyz,any(mainLight.direction.xyz)));
lightDirection = lerp(lightDirection, customLightDirection, _Is_BLD);
//v.2.0.5:
half3 originalLightColor = mainLightColor.rgb;
float3 lightColor = lerp(max(defaultLightColor, originalLightColor), max(defaultLightColor, saturate(originalLightColor)), _Is_Filter_LightColor);
```
### Compute the 1st, 2nd, and 3rd shadow masks and the interpolated base colors, then combine them
### Compute the high color and merge it in
### Compute the angel ring and merge it in
### Compute the MatCap and composite it in
### Add the emissive map result
### Light loop
### Final composition
```c#
finalColor = SATURATE_IF_SDR(finalColor) + (envLightColor*envLightIntensity*_GI_Intensity*smoothstep(1,0,envLightIntensity/2)) + emissive;
finalColor += pointLightColor;
```
## Full code
```c#
float4 fragShadingGradeMap(VertexOutput i, fixed facing : VFACE) : SV_TARGET
{
//compute the world-space normal and the view direction
i.normalDir = normalize(i.normalDir);
float3x3 tangentTransform = float3x3( i.tangentDir, i.bitangentDir, i.normalDir);
float3 viewDirection = normalize(_WorldSpaceCameraPos.xyz - i.posWorld.xyz);
float2 Set_UV0 = i.uv0;
//v.2.0.6
float3 _NormalMap_var = UnpackNormalScale(SAMPLE_TEXTURE2D(_NormalMap, sampler_MainTex, TRANSFORM_TEX(Set_UV0, _NormalMap)), _BumpScale);
float3 normalLocal = _NormalMap_var.rgb;
float3 normalDirection = normalize(mul( normalLocal, tangentTransform )); // Perturbed normals
// todo. not necessary to calc gi factor in shadowcaster pass.
//initialize the surface data, InputData, and Varyings, then fill them in.
//InputData lives in render-pipelines.universal and contains positionWS, normalWS, viewDirectionWS, shadowCoord, fogCoord, vertexLighting, bakedGI; the Varyings input is used to fill InputData. Varyings is defined in LitForwardPass.
//inputData is mainly used to compute envColor (GI), the main light data, and the AdditionalUtsLight data
SurfaceData surfaceData;
InitializeStandardLitSurfaceDataUTS(i.uv0, surfaceData);
InputData inputData;
Varyings input = (Varyings)0;
// todo. it has to be cared more.
UNITY_SETUP_INSTANCE_ID(input);
UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
# ifdef LIGHTMAP_ON
# else
input.vertexSH = i.vertexSH;
# endif
input.uv = i.uv0;
# if defined(_ADDITIONAL_LIGHTS_VERTEX) || (VERSION_LOWER(12, 0))
input.fogFactorAndVertexLight = i.fogFactorAndVertexLight;
# else
input.fogFactor = i.fogFactor;
# endif
# ifdef REQUIRES_VERTEX_SHADOW_COORD_INTERPOLATOR
input.shadowCoord = i.shadowCoord;
# endif
# ifdef REQUIRES_WORLD_SPACE_POS_INTERPOLATOR
input.positionWS = i.posWorld.xyz;
# endif
# ifdef _NORMALMAP
input.normalWS = half4(i.normalDir, viewDirection.x); // xyz: normal, w: viewDir.x
input.tangentWS = half4(i.tangentDir, viewDirection.y); // xyz: tangent, w: viewDir.y
# if (VERSION_LOWER(7, 5))
input.bitangentWS = half4(i.bitangentDir, viewDirection.z); // xyz: bitangent, w: viewDir.z
#endif //
# else
input.normalWS = half3(i.normalDir);
# if (VERSION_LOWER(12, 0))
input.viewDirWS = half3(viewDirection);
# endif //(VERSION_LOWER(12, 0))
# endif
//defined in LitForwardPass; SAMPLE_GI() samples the lightmap or the SH probes.
InitializeInputData(input, surfaceData.normalTS, inputData);
//defined in Lighting; the preceding parameters initialize the final brdfData out-parameter
BRDFData brdfData;
InitializeBRDFData(surfaceData.albedo,
surfaceData.metallic,
surfaceData.specular,
surfaceData.smoothness,
surfaceData.alpha, brdfData);
//compute indirect lighting: indirect diffuse * surface diffuse plus indirect specular * surface specular
half3 envColor = GlobalIlluminationUTS(brdfData, inputData.bakedGI, surfaceData.occlusion, inputData.normalWS, inputData.viewDirectionWS);
envColor *= 1.8f;
//fetch the main light parameters: direction, color, distanceAttenuation, shadowAttenuation, type
UtsLight mainLight = GetMainUtsLightByID(i.mainLightID, i.posWorld.xyz, inputData.shadowCoord, i.positionCS);
half3 mainLightColor = GetLightColor(mainLight);
float4 _MainTex_var = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, TRANSFORM_TEX(Set_UV0, _MainTex));
//v.2.0.4
#ifdef _IS_TRANSCLIPPING_OFF
//
#elif _IS_TRANSCLIPPING_ON
//clipping enabled
float4 _ClippingMask_var = SAMPLE_TEXTURE2D(_ClippingMask, sampler_MainTex, TRANSFORM_TEX(Set_UV0, _ClippingMask));
float Set_MainTexAlpha = _MainTex_var.a;
float _IsBaseMapAlphaAsClippingMask_var = lerp( _ClippingMask_var.r, Set_MainTexAlpha, _IsBaseMapAlphaAsClippingMask );
float _Inverse_Clipping_var = lerp( _IsBaseMapAlphaAsClippingMask_var, (1.0 - _IsBaseMapAlphaAsClippingMask_var), _Inverse_Clipping );
float Set_Clipping = saturate((_Inverse_Clipping_var+_Clipping_Level));
clip(Set_Clipping - 0.5);
#endif
real shadowAttenuation = 1.0;
# ifdef _MAIN_LIGHT_SHADOWS
shadowAttenuation = mainLight.shadowAttenuation;
# endif
//v.2.0.4
float3 defaultLightDirection = normalize(UNITY_MATRIX_V[2].xyz + UNITY_MATRIX_V[1].xyz);
//v.2.0.5
float3 defaultLightColor = saturate(max(half3(0.05,0.05,0.05)*_Unlit_Intensity,max(ShadeSH9(half4(0.0, 0.0, 0.0, 1.0)),ShadeSH9(half4(0.0, -1.0, 0.0, 1.0)).rgb)*_Unlit_Intensity));
//Build a custom light direction from the externally supplied _Offset_X_Axis_BLD / _Offset_Y_Axis_BLD offsets and the Z-axis flip _Inverse_Z_Axis_BLD
float3 customLightDirection = normalize(mul( unity_ObjectToWorld, float4(((float3(1.0,0.0,0.0)*_Offset_X_Axis_BLD*10)+(float3(0.0,1.0,0.0)*_Offset_Y_Axis_BLD*10)+(float3(0.0,0.0,-1.0)*lerp(-1.0,1.0,_Inverse_Z_Axis_BLD))),0)).xyz);
//The HLSL intrinsic any() tests whether a value is non-zero
float3 lightDirection = normalize(lerp(defaultLightDirection, mainLight.direction.xyz,any(mainLight.direction.xyz)));
lightDirection = lerp(lightDirection, customLightDirection, _Is_BLD);
//v.2.0.5:
half3 originalLightColor = mainLightColor.rgb;
float3 lightColor = lerp(max(defaultLightColor, originalLightColor), max(defaultLightColor, saturate(originalLightColor)), _Is_Filter_LightColor);
////// Lighting:
//Half vector between the view direction and the main light
float3 halfDirection = normalize(viewDirection+lightDirection);
//v.2.0.5
_Color = _BaseColor;
#ifdef _IS_PASS_FWDBASE
//Get the configured light color, the base color (light-tinted or plain main-tex result), the 1st shade map sample, then compute _Is_LightColor_1st_Shade_var and the half-Lambert term.
float3 Set_LightColor = lightColor.rgb;
float3 Set_BaseColor = lerp( (_MainTex_var.rgb*_BaseColor.rgb), ((_MainTex_var.rgb*_BaseColor.rgb)*Set_LightColor), _Is_LightColor_Base );
//v.2.0.5
float4 _1st_ShadeMap_var = lerp(SAMPLE_TEXTURE2D(_1st_ShadeMap,sampler_MainTex, TRANSFORM_TEX(Set_UV0, _1st_ShadeMap)),_MainTex_var,_Use_BaseAs1st);
//_1st_ShadeMap_var defines where the 1st shade color applies; _1st_ShadeMap_var.rgb*_1st_ShadeColor.rgb*Set_LightColor can be switched so it ignores the light color.
float3 _Is_LightColor_1st_Shade_var = lerp( (_1st_ShadeMap_var.rgb*_1st_ShadeColor.rgb), ((_1st_ShadeMap_var.rgb*_1st_ShadeColor.rgb)*Set_LightColor), _Is_LightColor_1st_Shade );
float _HalfLambert_var = 0.5*dot(lerp( i.normalDir, normalDirection, _Is_NormalMapToBase ),lightDirection)+0.5; // Half Lambert
//v.2.0.6
float4 _ShadingGradeMap_var = tex2Dlod(_ShadingGradeMap, float4(TRANSFORM_TEX(Set_UV0, _ShadingGradeMap), 0.0, _BlurLevelSGM));
//the value of shadowAttenuation is darker than legacy and it causes noise in terminators.
#if !defined (UTS_USE_RAYTRACING_SHADOW)
shadowAttenuation *= 2.0f;
shadowAttenuation = saturate(shadowAttenuation);
#endif
//v.2.0.6
//Minimum value is same as the Minimum Feather's value with the Minimum Step's value as threshold.
//_Tweak_SystemShadowsLevel: adjusts Unity's system shadow level. Default 0, adjustable by ±0.5.
//_Tweak_SystemShadowsLevel offsets the shadow range defined by the ShadingGradeMap.
//Finally compute the shadow mask: Set_ShadingGrade = _HalfLambert_var * saturate(_SystemShadowsLevel_var).
float _SystemShadowsLevel_var = (shadowAttenuation *0.5)+0.5+_Tweak_SystemShadowsLevel > 0.001 ? (shadowAttenuation *0.5)+0.5+_Tweak_SystemShadowsLevel : 0.0001;
float _ShadingGradeMapLevel_var = _ShadingGradeMap_var.r < 0.95 ? _ShadingGradeMap_var.r+_Tweak_ShadingGradeMapLevel : 1;
float Set_ShadingGrade = saturate(_ShadingGradeMapLevel_var)*lerp( _HalfLambert_var, (_HalfLambert_var*saturate(_SystemShadowsLevel_var)), _Set_SystemShadowsToBase );
//float Set_ShadingGrade = saturate(_ShadingGradeMapLevel_var)*lerp( _HalfLambert_var, (_HalfLambert_var*saturate(1.0+_Tweak_SystemShadowsLevel)), _Set_SystemShadowsToBase );
//Compute the 1st/2nd/3rd shadow masks and the interpolated base color, then combine them.
//lerp( (_2nd_ShadeMap_var.rgb*_2nd_ShadeColor.rgb), ((_2nd_ShadeMap_var.rgb*_2nd_ShadeColor.rgb)*Set_LightColor), _Is_LightColor_2nd_Shade )
//lerp(_Is_LightColor_1st_Shade_var, previous lerp result, Set_ShadeShadowMask)
//lerp(_BaseColor_var, previous lerp result, Set_FinalShadowMask)
//Composed as follows:
float Set_FinalShadowMask = saturate((1.0 + ( (Set_ShadingGrade - (_1st_ShadeColor_Step-_1st_ShadeColor_Feather)) * (0.0 - 1.0) ) / (_1st_ShadeColor_Step - (_1st_ShadeColor_Step-_1st_ShadeColor_Feather)))); // Base and 1st Shade Mask
float3 _BaseColor_var = lerp(Set_BaseColor,_Is_LightColor_1st_Shade_var,Set_FinalShadowMask);
//v.2.0.5
float4 _2nd_ShadeMap_var = lerp(SAMPLE_TEXTURE2D(_2nd_ShadeMap,sampler_MainTex, TRANSFORM_TEX(Set_UV0, _2nd_ShadeMap)),_1st_ShadeMap_var,_Use_1stAs2nd);
float Set_ShadeShadowMask = saturate((1.0 + ( (Set_ShadingGrade - (_2nd_ShadeColor_Step-_2nd_ShadeColor_Feather)) * (0.0 - 1.0) ) / (_2nd_ShadeColor_Step - (_2nd_ShadeColor_Step-_2nd_ShadeColor_Feather)))); // 1st and 2nd Shades Mask
//Composition: 3 Basic Colors as Set_FinalBaseColor
float3 Set_FinalBaseColor = lerp(_BaseColor_var,lerp(_Is_LightColor_1st_Shade_var,lerp( (_2nd_ShadeMap_var.rgb*_2nd_ShadeColor.rgb), ((_2nd_ShadeMap_var.rgb*_2nd_ShadeColor.rgb)*Set_LightColor), _Is_LightColor_2nd_Shade ),Set_ShadeShadowMask),Set_FinalShadowMask);
//Sample the high-color (specular) mask
float4 _Set_HighColorMask_var = tex2D(_Set_HighColorMask, TRANSFORM_TEX(Set_UV0, _Set_HighColorMask));
//Specular term from the half vector (half-Lambert style), used for the high-color mask
float _Specular_var = 0.5*dot(halfDirection,lerp( i.normalDir, normalDirection, _Is_NormalMapToHighColor ))+0.5; // Specular
//step(a,x) => x>=a ? 1 : 0; compute the high-color mask
float _TweakHighColorMask_var = (saturate((_Set_HighColorMask_var.g+_Tweak_HighColorMaskLevel))*lerp( (1.0 - step(_Specular_var,(1.0 - pow(abs(_HighColor_Power),5)))), pow(abs(_Specular_var),exp2(lerp(11,1,_HighColor_Power))), _Is_SpecularToHighColor ));
float4 _HighColor_Tex_var = tex2D(_HighColor_Tex, TRANSFORM_TEX(Set_UV0, _HighColor_Tex));
float3 _HighColor_var = (lerp( (_HighColor_Tex_var.rgb*_HighColor.rgb), ((_HighColor_Tex_var.rgb*_HighColor.rgb)*Set_LightColor), _Is_LightColor_HighColor )*_TweakHighColorMask_var);
//Composition: 3 Basic Colors and HighColor as Set_HighColor
float3 Set_HighColor = (lerp(SATURATE_IF_SDR((Set_FinalBaseColor-_TweakHighColorMask_var)), Set_FinalBaseColor, lerp(_Is_BlendAddToHiColor,1.0,_Is_SpecularToHighColor) )+lerp( _HighColor_var, (_HighColor_var*((1.0 - Set_FinalShadowMask)+(Set_FinalShadowMask*_TweakHighColorOnShadow))), _Is_UseTweakHighColorOnShadow ));
//Rim light calculation
float4 _Set_RimLightMask_var = tex2D(_Set_RimLightMask, TRANSFORM_TEX(Set_UV0, _Set_RimLightMask));
float3 _Is_LightColor_RimLight_var = lerp( _RimLightColor.rgb, (_RimLightColor.rgb*Set_LightColor), _Is_LightColor_RimLight );
float _RimArea_var = abs(1.0 - dot(lerp( i.normalDir, normalDirection, _Is_NormalMapToRimLight ),viewDirection));
float _RimLightPower_var = pow(_RimArea_var,exp2(lerp(3,0,_RimLight_Power)));
float _Rimlight_InsideMask_var = saturate(lerp( (0.0 + ( (_RimLightPower_var - _RimLight_InsideMask) * (1.0 - 0.0) ) / (1.0 - _RimLight_InsideMask)), step(_RimLight_InsideMask,_RimLightPower_var), _RimLight_FeatherOff ));
float _VertHalfLambert_var = 0.5*dot(i.normalDir,lightDirection)+0.5;
float3 _LightDirection_MaskOn_var = lerp( (_Is_LightColor_RimLight_var*_Rimlight_InsideMask_var), (_Is_LightColor_RimLight_var*saturate((_Rimlight_InsideMask_var-((1.0 - _VertHalfLambert_var)+_Tweak_LightDirection_MaskLevel)))), _LightDirection_MaskOn );
float _ApRimLightPower_var = pow(_RimArea_var,exp2(lerp(3,0,_Ap_RimLight_Power)));
float3 Set_RimLight = (SATURATE_IF_SDR((_Set_RimLightMask_var.g+_Tweak_RimLightMaskLevel))*lerp( _LightDirection_MaskOn_var, (_LightDirection_MaskOn_var+(lerp( _Ap_RimLightColor.rgb, (_Ap_RimLightColor.rgb*Set_LightColor), _Is_LightColor_Ap_RimLight )*saturate((lerp( (0.0 + ( (_ApRimLightPower_var - _RimLight_InsideMask) * (1.0 - 0.0) ) / (1.0 - _RimLight_InsideMask)), step(_RimLight_InsideMask,_ApRimLightPower_var), _Ap_RimLight_FeatherOff )-(saturate(_VertHalfLambert_var)+_Tweak_LightDirection_MaskLevel))))), _Add_Antipodean_RimLight ));
//Composition: HighColor and RimLight as _RimLight_var
float3 _RimLight_var = lerp( Set_HighColor, (Set_HighColor+Set_RimLight), _RimLight );
//Matcap
//v.2.0.6 : CameraRolling Stabilizer
//Mirror Script Determination: if sign_Mirror = -1, determine "Inside the mirror".
//v.2.0.7
fixed _sign_Mirror = i.mirrorFlag;
//
float3 _Camera_Right = UNITY_MATRIX_V[0].xyz;
float3 _Camera_Front = UNITY_MATRIX_V[2].xyz;
float3 _Up_Unit = float3(0, 1, 0);
float3 _Right_Axis = cross(_Camera_Front, _Up_Unit);
//Invert if it's "inside the mirror".
if(_sign_Mirror < 0){
_Right_Axis = -1 * _Right_Axis;
_Rotate_MatCapUV = -1 * _Rotate_MatCapUV;
}else{
_Right_Axis = _Right_Axis;
}
float _Camera_Right_Magnitude = sqrt(_Camera_Right.x*_Camera_Right.x + _Camera_Right.y*_Camera_Right.y + _Camera_Right.z*_Camera_Right.z);
float _Right_Axis_Magnitude = sqrt(_Right_Axis.x*_Right_Axis.x + _Right_Axis.y*_Right_Axis.y + _Right_Axis.z*_Right_Axis.z);
float _Camera_Roll_Cos = dot(_Right_Axis, _Camera_Right) / (_Right_Axis_Magnitude * _Camera_Right_Magnitude);
float _Camera_Roll = acos(clamp(_Camera_Roll_Cos, -1, 1));
fixed _Camera_Dir = _Camera_Right.y < 0 ? -1 : 1;
float _Rot_MatCapUV_var_ang = (_Rotate_MatCapUV*3.141592654) - _Camera_Dir*_Camera_Roll*_CameraRolling_Stabilizer;
//v.2.0.7
float2 _Rot_MatCapNmUV_var = RotateUV(Set_UV0, (_Rotate_NormalMapForMatCapUV*3.141592654), float2(0.5, 0.5), 1.0);
//V.2.0.6
float3 _NormalMapForMatCap_var = UnpackNormalScale(tex2D(_NormalMapForMatCap, TRANSFORM_TEX(_Rot_MatCapNmUV_var, _NormalMapForMatCap)), _BumpScaleMatcap);
//v.2.0.5: MatCap with camera skew correction
float3 viewNormal = (mul(UNITY_MATRIX_V, float4(lerp( i.normalDir, mul( _NormalMapForMatCap_var.rgb, tangentTransform ).rgb, _Is_NormalMapForMatCap ),0))).rgb;
float3 NormalBlend_MatcapUV_Detail = viewNormal.rgb * float3(-1,-1,1);
float3 NormalBlend_MatcapUV_Base = (mul( UNITY_MATRIX_V, float4(viewDirection,0) ).rgb*float3(-1,-1,1)) + float3(0,0,1);
float3 noSknewViewNormal = NormalBlend_MatcapUV_Base*dot(NormalBlend_MatcapUV_Base, NormalBlend_MatcapUV_Detail)/NormalBlend_MatcapUV_Base.b - NormalBlend_MatcapUV_Detail;
float2 _ViewNormalAsMatCapUV = (lerp(noSknewViewNormal,viewNormal,_Is_Ortho).rg*0.5)+0.5;
//
//v.2.0.7
float2 _Rot_MatCapUV_var = RotateUV((0.0 + ((_ViewNormalAsMatCapUV - (0.0+_Tweak_MatCapUV)) * (1.0 - 0.0) ) / ((1.0-_Tweak_MatCapUV) - (0.0+_Tweak_MatCapUV))), _Rot_MatCapUV_var_ang, float2(0.5, 0.5), 1.0);
//If it is "inside the mirror", flip the UV left and right.
if(_sign_Mirror < 0){
_Rot_MatCapUV_var.x = 1-_Rot_MatCapUV_var.x;
}else{
_Rot_MatCapUV_var = _Rot_MatCapUV_var;
}
float4 _MatCap_Sampler_var = tex2Dlod(_MatCap_Sampler, float4(TRANSFORM_TEX(_Rot_MatCapUV_var, _MatCap_Sampler), 0.0, _BlurLevelMatcap));
float4 _Set_MatcapMask_var = tex2D(_Set_MatcapMask, TRANSFORM_TEX(Set_UV0, _Set_MatcapMask));
//
//MatcapMask
float _Tweak_MatcapMaskLevel_var = saturate(lerp(_Set_MatcapMask_var.g, (1.0 - _Set_MatcapMask_var.g), _Inverse_MatcapMask) + _Tweak_MatcapMaskLevel);
float3 _Is_LightColor_MatCap_var = lerp( (_MatCap_Sampler_var.rgb*_MatCapColor.rgb), ((_MatCap_Sampler_var.rgb*_MatCapColor.rgb)*Set_LightColor), _Is_LightColor_MatCap );
//v.2.0.6 : ShadowMask on Matcap in Blend mode : multiply
float3 Set_MatCap = lerp( _Is_LightColor_MatCap_var, (_Is_LightColor_MatCap_var*((1.0 - Set_FinalShadowMask)+(Set_FinalShadowMask*_TweakMatCapOnShadow)) + lerp(Set_HighColor*Set_FinalShadowMask*(1.0-_TweakMatCapOnShadow), float3(0.0, 0.0, 0.0), _Is_BlendAddToMatCap)), _Is_UseTweakMatCapOnShadow );
//
//v.2.0.6
//Composition: RimLight and MatCap as finalColor
//Broke down finalColor composition
float3 matCapColorOnAddMode = _RimLight_var+Set_MatCap*_Tweak_MatcapMaskLevel_var;
float _Tweak_MatcapMaskLevel_var_MultiplyMode = _Tweak_MatcapMaskLevel_var * lerp (1, (1 - (Set_FinalShadowMask)*(1 - _TweakMatCapOnShadow)), _Is_UseTweakMatCapOnShadow);
float3 matCapColorOnMultiplyMode = Set_HighColor*(1-_Tweak_MatcapMaskLevel_var_MultiplyMode) + Set_HighColor*Set_MatCap*_Tweak_MatcapMaskLevel_var_MultiplyMode + lerp(float3(0,0,0),Set_RimLight,_RimLight);
float3 matCapColorFinal = lerp(matCapColorOnMultiplyMode, matCapColorOnAddMode, _Is_BlendAddToMatCap);
//v.2.0.4
#ifdef _IS_ANGELRING_OFF
float3 finalColor = lerp(_RimLight_var, matCapColorFinal, _MatCap);// Final Composition before Emissive
//
#elif _IS_ANGELRING_ON
//Angel-ring calculation
float3 finalColor = lerp(_RimLight_var, matCapColorFinal, _MatCap);// Final Composition before AR
//v.2.0.7 AR Camera Rolling Stabilizer
//计算UV将表面法线转换到视角坐标将(-1,1)=》(0,1)之后,按照摄像机的 -(_Camera_Dir*_Camera_Roll)旋转坐标后,采样贴图
float3 _AR_OffsetU_var = lerp(mul(UNITY_MATRIX_V, float4(i.normalDir,0)).xyz,float3(0,0,1),_AR_OffsetU);
float2 AR_VN = _AR_OffsetU_var.xy*0.5 + float2(0.5,0.5);
float2 AR_VN_Rotate = RotateUV(AR_VN, -(_Camera_Dir*_Camera_Roll), float2(0.5,0.5), 1.0);
float2 _AR_OffsetV_var = float2(AR_VN_Rotate.x, lerp(i.uv1.y, AR_VN_Rotate.y, _AR_OffsetV));
float4 _AngelRing_Sampler_var = tex2D(_AngelRing_Sampler,TRANSFORM_TEX(_AR_OffsetV_var, _AngelRing_Sampler));
float3 _Is_LightColor_AR_var = lerp( (_AngelRing_Sampler_var.rgb*_AngelRing_Color.rgb), ((_AngelRing_Sampler_var.rgb*_AngelRing_Color.rgb)*Set_LightColor), _Is_LightColor_AR );
float3 Set_AngelRing = _Is_LightColor_AR_var;
float Set_ARtexAlpha = _AngelRing_Sampler_var.a;
float3 Set_AngelRingWithAlpha = (_Is_LightColor_AR_var*_AngelRing_Sampler_var.a);
//Composition: MatCap and AngelRing as finalColor
finalColor = lerp(finalColor, lerp((finalColor + Set_AngelRing), ((finalColor*(1.0 - Set_ARtexAlpha))+Set_AngelRingWithAlpha), _ARSampler_AlphaOn ), _AngelRing );// Final Composition before Emissive
#endif
//v.2.0.7
#ifdef _EMISSIVE_SIMPLE
float4 _Emissive_Tex_var = tex2D(_Emissive_Tex,TRANSFORM_TEX(Set_UV0, _Emissive_Tex));
float emissiveMask = _Emissive_Tex_var.a;
emissive = _Emissive_Tex_var.rgb * _Emissive_Color.rgb * emissiveMask;
#elif _EMISSIVE_ANIMATION
//v.2.0.7 Calculation View Coord UV for Scroll
float3 viewNormal_Emissive = (mul(UNITY_MATRIX_V, float4(i.normalDir,0))).xyz;
float3 NormalBlend_Emissive_Detail = viewNormal_Emissive * float3(-1,-1,1);
float3 NormalBlend_Emissive_Base = (mul( UNITY_MATRIX_V, float4(viewDirection,0)).xyz*float3(-1,-1,1)) + float3(0,0,1);
float3 noSknewViewNormal_Emissive = NormalBlend_Emissive_Base*dot(NormalBlend_Emissive_Base, NormalBlend_Emissive_Detail)/NormalBlend_Emissive_Base.z - NormalBlend_Emissive_Detail;
float2 _ViewNormalAsEmissiveUV = noSknewViewNormal_Emissive.xy*0.5+0.5;
float2 _ViewCoord_UV = RotateUV(_ViewNormalAsEmissiveUV, -(_Camera_Dir*_Camera_Roll), float2(0.5,0.5), 1.0);
//If inside the mirror, flip the UV horizontally.
if(_sign_Mirror < 0){
_ViewCoord_UV.x = 1-_ViewCoord_UV.x;
}else{
_ViewCoord_UV = _ViewCoord_UV;
}
float2 emissive_uv = lerp(i.uv0, _ViewCoord_UV, _Is_ViewCoord_Scroll);
//
float4 _time_var = _Time;
float _base_Speed_var = (_time_var.g*_Base_Speed);
float _Is_PingPong_Base_var = lerp(_base_Speed_var, sin(_base_Speed_var), _Is_PingPong_Base );
float2 scrolledUV = emissive_uv + float2(_Scroll_EmissiveU, _Scroll_EmissiveV)*_Is_PingPong_Base_var;
float rotateVelocity = _Rotate_EmissiveUV*3.141592654;
float2 _rotate_EmissiveUV_var = RotateUV(scrolledUV, rotateVelocity, float2(0.5, 0.5), _Is_PingPong_Base_var);
float4 _Emissive_Tex_var = tex2D(_Emissive_Tex,TRANSFORM_TEX(Set_UV0, _Emissive_Tex));
float emissiveMask = _Emissive_Tex_var.a;
_Emissive_Tex_var = tex2D(_Emissive_Tex,TRANSFORM_TEX(_rotate_EmissiveUV_var, _Emissive_Tex));
float _colorShift_Speed_var = 1.0 - cos(_time_var.g*_ColorShift_Speed);
float viewShift_var = smoothstep( 0.0, 1.0, max(0,dot(normalDirection,viewDirection)));
float4 colorShift_Color = lerp(_Emissive_Color, lerp(_Emissive_Color, _ColorShift, _colorShift_Speed_var), _Is_ColorShift);
float4 viewShift_Color = lerp(_ViewShift, colorShift_Color, viewShift_var);
float4 emissive_Color = lerp(colorShift_Color, viewShift_Color, _Is_ViewShift);
emissive = emissive_Color.rgb * _Emissive_Tex_var.rgb * emissiveMask;
#endif
//
//v.2.0.6: GI_Intensity with Intensity Multiplier Filter
float3 envLightColor = envColor.rgb;
float envLightIntensity = 0.299*envLightColor.r + 0.587*envLightColor.g + 0.114*envLightColor.b <1 ? (0.299*envLightColor.r + 0.587*envLightColor.g + 0.114*envLightColor.b) : 1;
float3 pointLightColor = 0;
#ifdef _ADDITIONAL_LIGHTS
int pixelLightCount = GetAdditionalLightsCount();
// determine the main light in order to apply light culling properly
// when the loop counter start from negative value, MAINLIGHT_IS_MAINLIGHT = -1, some compiler doesn't work well.
// for (int iLight = MAINLIGHT_IS_MAINLIGHT; iLight < pixelLightCount ; ++iLight)
for (int loopCounter = 0; loopCounter < pixelLightCount - MAINLIGHT_IS_MAINLIGHT; ++loopCounter)
{
int iLight = loopCounter + MAINLIGHT_IS_MAINLIGHT;
if (iLight != i.mainLightID)
{
float notDirectional = 1.0f; //_WorldSpaceLightPos0.w of the legacy code.
UtsLight additionalLight = GetUrpMainUtsLight(0,0);
if (iLight != MAINLIGHT_IS_MAINLIGHT)
{
additionalLight = GetAdditionalUtsLight(iLight, inputData.positionWS, i.positionCS);
}
half3 additionalLightColor = GetLightColor(additionalLight);
float3 lightDirection = additionalLight.direction;
//v.2.0.5:
float3 addPassLightColor = (0.5*dot(lerp(i.normalDir, normalDirection, _Is_NormalMapToBase), lightDirection) + 0.5) * additionalLightColor.rgb;
float pureIntencity = max(0.001, (0.299*additionalLightColor.r + 0.587*additionalLightColor.g + 0.114*additionalLightColor.b));
float3 lightColor = max(0, lerp(addPassLightColor, lerp(0, min(addPassLightColor, addPassLightColor / pureIntencity), notDirectional), _Is_Filter_LightColor));
float3 halfDirection = normalize(viewDirection + lightDirection); // has to be recalced here.
//v.2.0.5:
_1st_ShadeColor_Step = saturate(_1st_ShadeColor_Step + _StepOffset);
_2nd_ShadeColor_Step = saturate(_2nd_ShadeColor_Step + _StepOffset);
//
//v.2.0.5: If Added lights is directional, set 0 as _LightIntensity
float _LightIntensity = lerp(0, (0.299*additionalLightColor.r + 0.587*additionalLightColor.g + 0.114*additionalLightColor.b), notDirectional);
//v.2.0.5: Filtering the high intensity zone of PointLights
//Compute the light color
float3 Set_LightColor = lerp(lightColor, lerp(lightColor, min(lightColor, additionalLightColor.rgb*_1st_ShadeColor_Step), notDirectional), _Is_Filter_HiCutPointLightColor);
//Compute the base color and sample the _1st_ShadeMap_var and _2nd_ShadeMap_var masks, then compute Set_FinalShadowMask and Set_ShadeShadowMask used for this light's blend.
float3 Set_BaseColor = lerp((_BaseColor.rgb*_MainTex_var.rgb*_LightIntensity), ((_BaseColor.rgb*_MainTex_var.rgb)*Set_LightColor), _Is_LightColor_Base);
//v.2.0.5
float4 _1st_ShadeMap_var = lerp(SAMPLE_TEXTURE2D(_1st_ShadeMap, sampler_MainTex,TRANSFORM_TEX(Set_UV0, _1st_ShadeMap)), _MainTex_var, _Use_BaseAs1st);
float3 Set_1st_ShadeColor = lerp((_1st_ShadeColor.rgb*_1st_ShadeMap_var.rgb*_LightIntensity), ((_1st_ShadeColor.rgb*_1st_ShadeMap_var.rgb)*Set_LightColor), _Is_LightColor_1st_Shade);
//v.2.0.5
float4 _2nd_ShadeMap_var = lerp(SAMPLE_TEXTURE2D(_2nd_ShadeMap, sampler_MainTex,TRANSFORM_TEX(Set_UV0, _2nd_ShadeMap)), _1st_ShadeMap_var, _Use_1stAs2nd);
float3 Set_2nd_ShadeColor = lerp((_2nd_ShadeColor.rgb*_2nd_ShadeMap_var.rgb*_LightIntensity), ((_2nd_ShadeColor.rgb*_2nd_ShadeMap_var.rgb)*Set_LightColor), _Is_LightColor_2nd_Shade);
float _HalfLambert_var = 0.5*dot(lerp(i.normalDir, normalDirection, _Is_NormalMapToBase), lightDirection) + 0.5;
// float4 _Set_2nd_ShadePosition_var = tex2D(_Set_2nd_ShadePosition, TRANSFORM_TEX(Set_UV0, _Set_2nd_ShadePosition));
// float4 _Set_1st_ShadePosition_var = tex2D(_Set_1st_ShadePosition, TRANSFORM_TEX(Set_UV0, _Set_1st_ShadePosition));
// //v.2.0.5:
// float Set_FinalShadowMask = saturate((1.0 + ((lerp(_HalfLambert_var, (_HalfLambert_var*saturate(1.0 + _Tweak_SystemShadowsLevel)), _Set_SystemShadowsToBase) - (_1st_ShadeColor_Step - _1st_ShadeColor_Feather)) * ((1.0 - _Set_1st_ShadePosition_var.rgb).r - 1.0)) / (_1st_ShadeColor_Step - (_1st_ShadeColor_Step - _1st_ShadeColor_Feather))));
//SGM
//v.2.0.6
float4 _ShadingGradeMap_var = tex2Dlod(_ShadingGradeMap, float4(TRANSFORM_TEX(Set_UV0, _ShadingGradeMap), 0.0, _BlurLevelSGM));
//v.2.0.6
//Minimum value is same as the Minimum Feather's value with the Minimum Step's value as threshold.
//float _SystemShadowsLevel_var = (attenuation*0.5)+0.5+_Tweak_SystemShadowsLevel > 0.001 ? (attenuation*0.5)+0.5+_Tweak_SystemShadowsLevel : 0.0001;
float _ShadingGradeMapLevel_var = _ShadingGradeMap_var.r < 0.95 ? _ShadingGradeMap_var.r + _Tweak_ShadingGradeMapLevel : 1;
//float Set_ShadingGrade = saturate(_ShadingGradeMapLevel_var)*lerp( _HalfLambert_var, (_HalfLambert_var*saturate(_SystemShadowsLevel_var)), _Set_SystemShadowsToBase );
float Set_ShadingGrade = saturate(_ShadingGradeMapLevel_var)*lerp(_HalfLambert_var, (_HalfLambert_var*saturate(1.0 + _Tweak_SystemShadowsLevel)), _Set_SystemShadowsToBase);
//
float Set_FinalShadowMask = saturate((1.0 + ((Set_ShadingGrade - (_1st_ShadeColor_Step - _1st_ShadeColor_Feather)) * (0.0 - 1.0)) / (_1st_ShadeColor_Step - (_1st_ShadeColor_Step - _1st_ShadeColor_Feather))));
float Set_ShadeShadowMask = saturate((1.0 + ((Set_ShadingGrade - (_2nd_ShadeColor_Step - _2nd_ShadeColor_Feather)) * (0.0 - 1.0)) / (_2nd_ShadeColor_Step - (_2nd_ShadeColor_Step - _2nd_ShadeColor_Feather)))); // 1st and 2nd Shades Mask
//SGM
// //Composition: 3 Basic Colors as finalColor
// float3 finalColor =
// lerp(
// Set_BaseColor,
// lerp(
// Set_1st_ShadeColor,
// Set_2nd_ShadeColor,
// saturate(
// (1.0 + ((_HalfLambert_var - (_2nd_ShadeColor_Step - _2nd_Shades_Feather)) * ((1.0 - _Set_2nd_ShadePosition_var.rgb).r - 1.0)) / (_2nd_ShadeColor_Step - (_2nd_ShadeColor_Step - _2nd_Shades_Feather))))
// ),
// Set_FinalShadowMask); // Final Color
//Composition: 3 Basic Colors as finalColor
float3 finalColor =
lerp(
Set_BaseColor,
//_BaseColor_var*(Set_LightColor*1.5),
lerp(
Set_1st_ShadeColor,
Set_2nd_ShadeColor,
Set_ShadeShadowMask
),
Set_FinalShadowMask);
//v.2.0.6: Add HighColor if _Is_Filter_HiCutPointLightColor is False
float4 _Set_HighColorMask_var = tex2D(_Set_HighColorMask, TRANSFORM_TEX(Set_UV0, _Set_HighColorMask));
float _Specular_var = 0.5*dot(halfDirection, lerp(i.normalDir, normalDirection, _Is_NormalMapToHighColor)) + 0.5; // Specular
float _TweakHighColorMask_var = (saturate((_Set_HighColorMask_var.g + _Tweak_HighColorMaskLevel))*lerp((1.0 - step(_Specular_var, (1.0 - pow(abs(_HighColor_Power), 5)))), pow(abs(_Specular_var), exp2(lerp(11, 1, _HighColor_Power))), _Is_SpecularToHighColor));
float4 _HighColor_Tex_var = tex2D(_HighColor_Tex, TRANSFORM_TEX(Set_UV0, _HighColor_Tex));
float3 _HighColor_var = (lerp((_HighColor_Tex_var.rgb*_HighColor.rgb), ((_HighColor_Tex_var.rgb*_HighColor.rgb)*Set_LightColor), _Is_LightColor_HighColor)*_TweakHighColorMask_var);
finalColor = finalColor + lerp(lerp(_HighColor_var, (_HighColor_var*((1.0 - Set_FinalShadowMask) + (Set_FinalShadowMask*_TweakHighColorOnShadow))), _Is_UseTweakHighColorOnShadow), float3(0, 0, 0), _Is_Filter_HiCutPointLightColor);
//
finalColor = SATURATE_IF_SDR(finalColor);
pointLightColor += finalColor;
// pointLightColor += lightColor;
}
}
#endif // _ADDITIONAL_LIGHTS
//
//Final Composition
finalColor = SATURATE_IF_SDR(finalColor) + (envLightColor*envLightIntensity*_GI_Intensity*smoothstep(1,0,envLightIntensity/2)) + emissive;
finalColor += pointLightColor;
#endif
//v.2.0.4
#ifdef _IS_TRANSCLIPPING_OFF
fixed4 finalRGBA = fixed4(finalColor,1);
#elif _IS_TRANSCLIPPING_ON
float Set_Opacity = SATURATE_IF_SDR((_Inverse_Clipping_var+_Tweak_transparency));
fixed4 finalRGBA = fixed4(finalColor,Set_Opacity);
#endif
return finalRGBA;
}
```

View File

@@ -0,0 +1,105 @@
## References
- [ ] 游戏诞生之日09 - 美术篇 卡通渲染着色器 UTS2: https://zhuanlan.zhihu.com/p/137288013
- [ ] MMD联动Unity学习笔记 Vol.42 UTS2进阶: https://www.bilibili.com/read/cv3347514
- [ ] Official documentation: https://github.com/unity3d-jp/UnityChanToonShaderVer2_Project/blob/release/urp/2.3.0/Documentation~/index.md
- [ ] Official documentation (property reference): https://github.com/unity3d-jp/UnityChanToonShaderVer2_Project/blob/release/urp/2.3.0/Documentation~/Props_en.md
## Overview
There are no post-processing effects.
Two shading workflows:
DoubleShadeWithFeather: the standard workflow mode of UTS/UniversalToon. Allows two shade colors (double shade colors) and a gradient (feathering) between them.
ShadingGradeMap: the more advanced workflow mode. On top of the DoubleShadeWithFeather features, this shader can also carry a special map called the ShadingGradeMap.
- UniversalToonInput.hlsl: defines the various variables and resources, and implements AO-map sampling `SampleOcclusion()` and Lit surface-data initialization `InitializeStandardLitSurfaceData()`.
- UniversalToonHead.hlsl: defines some macros and functions related to fog, coordinate transforms, and linear/gamma-space conversion.
## Lighting
## Pass
The pass order is:
1. Outline
2. ForwardLit
3. ShadowCaster
4. DepthOnly
Stencil test syntax:
```c#
Stencil
{
Ref[_StencilNo] //Stencil reference value written for rendered pixels (0~255)
Comp[_StencilComp] //Stencil test comparison; besides Equal there are Greater, Less, Always, Never, etc. (similar to ZTest).
Pass[_StencilOpPass] //What to do with the stencil value of a pixel that passes both the stencil test and the Z test.
Fail[_StencilOpFail] //What to do with the stencil value of a pixel that passes the stencil test but fails the Z test.
}
```
### Outline
```c#
Tags {"LightMode" = "SRPDefaultUnlit"}:使用这个LightMode标签值在渲染物体时绘制一个额外的Pass。也是URP光照模式的默认值。
Cull [_SRPDefaultUnlitColMode]
ColorMask [_SPRDefaultUnlitColorMask]
Blend SrcAlpha OneMinusSrcAlpha
Stencil
{
Ref[_StencilNo]
Comp[_StencilComp]
Pass[_StencilOpPass]
Fail[_StencilOpFail]
}
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "UniversalToonHead.hlsl"
#include "UniversalToonOutline.hlsl"
```
### ForwardLit
```c#
Tags{"LightMode" = "UniversalForward"}:URP前向渲染
ZWrite[_ZWriteMode]
Cull[_CullMode]
Blend SrcAlpha OneMinusSrcAlpha
Stencil {
Ref[_StencilNo]
Comp[_StencilComp]
Pass[_StencilOpPass]
Fail[_StencilOpFail]
}
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"
#include "Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl"
#include "Packages/com.unity.render-pipelines.universal/Shaders/LitForwardPass.hlsl"
#include "UniversalToonHead.hlsl"
#include "UniversalToonBody.hlsl"
```
### ShadowCaster
Renders the shadow map.
```c#
Name "ShadowCaster"
Tags{"LightMode" = "ShadowCaster"}
ZWrite On
ZTest LEqual
Cull[_CullMode]
#include "Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl"
#include "Packages/com.unity.render-pipelines.universal/Shaders/ShadowCasterPass.hlsl"
```
### DepthOnly
Renders the depth buffer.
```c#
Tags{"LightMode" = "DepthOnly"}
ZWrite On
ColorMask 0
Cull[_CullMode]
#include "Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl"
#include "Packages/com.unity.render-pipelines.universal/Shaders/DepthOnlyPass.hlsl"
```

View File

@@ -0,0 +1,71 @@
# Unity Universal Render Pipeline (URP) series: custom render pipeline
## Code layout
- CustomRenderPipelineAsset.cs: defines the render pipeline asset and creates and returns the pipeline instance.
- CustomRenderPipeline.cs: the overall render pipeline logic. Because multiple views may need rendering, the per-camera logic is delegated to a camera class.
- CameraRenderer.cs: the rendering logic for a single camera.
## CustomRenderPipeline
Override `void Render(ScriptableRenderContext context, Camera[] cameras)`: using a custom `CameraRenderer` field `renderer`, call `renderer.Render(context, camera)` for every camera (view).
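A minimal sketch of that override (class and field names follow these notes; error handling omitted):
```c#
using UnityEngine;
using UnityEngine.Rendering;

public class CustomRenderPipeline : RenderPipeline
{
    CameraRenderer renderer = new CameraRenderer();

    protected override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
        // Render every camera (view) with the shared camera renderer.
        foreach (Camera camera in cameras)
        {
            renderer.Render(context, camera);
        }
    }
}
```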
## CameraRenderer
Most of the logic lives in this class.
Implement `void Render(ScriptableRenderContext context, Camera camera)`.
Before culling it runs:
- `PrepareBuffer()`: names the command buffer after the camera
- `PrepareForSceneWindow()`: prepares UI drawing for the Scene window
1. [Culling](#Cull)
2. [Setup](#Setup)
3. [Draw visible geometry](#DrawVisibleGeometry)
4. [Draw gizmos]()
5. [Submit](#Submit)
Full code:
```
```
#### Command Buffers
Custom rendering work needs a command buffer to hold all of its rendering commands; a new CommandBuffer is given a name, "Render Camera" in the example.
`buffer.BeginSample(bufferName)` and `buffer.EndSample(bufferName)` let the Profiler and the Frame Debugger identify this chunk of rendering work.
Executing a command buffer means calling ExecuteCommandBuffer on the `context`. That copies the commands out of the buffer but does not clear it, so the buffer must be cleared explicitly if it is to be reused. Since executing and clearing always happen together, it is convenient to add a helper that does both.
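A small sketch of that pattern (assuming the field layout described in these notes):
```c#
using UnityEngine;
using UnityEngine.Rendering;

partial class CameraRenderer
{
    const string bufferName = "Render Camera";
    CommandBuffer buffer = new CommandBuffer { name = bufferName };
    ScriptableRenderContext context;

    void ExecuteBuffer()
    {
        // Executing copies the commands into the context but does not clear the buffer,
        // so clear it here to make the buffer reusable.
        context.ExecuteCommandBuffer(buffer);
        buffer.Clear();
    }
}
```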
#### Editor-only functionality
Use the `partial` keyword to split the class and move the editor-related code into a separate file.
### Cull()
Try to get the culling parameters from the camera, then perform the cull and store the culling results.
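A minimal sketch (the `camera`, `context` and `cullingResults` fields are assumed to exist on the renderer):
```c#
bool Cull()
{
    // Ask the camera for culling parameters; this can fail, e.g. for an invalid camera.
    if (camera.TryGetCullingParameters(out ScriptableCullingParameters p))
    {
        cullingResults = context.Cull(ref p);
        return true;
    }
    return false;
}
```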
### lighting.Setup(context, cullingResults)
Uses `CullingResults` to pass the `DirectionalLight` data to shaders; multiple directional lights are supported.
### Setup()
Sets the initial state and does some initialization:
1. `SetupCameraProperties` applies the camera-related properties (such as the projection matrix).
2. `buffer.ClearRenderTarget` clears the screen according to the camera's clear flags: Depth clears only the depth buffer, Color clears both depth and color, and when the project uses linear color space the clear color is converted to linear.
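A sketch of the setup step (clear-flag handling simplified; `buffer`, `camera` and `context` as above):
```c#
void Setup()
{
    context.SetupCameraProperties(camera);
    CameraClearFlags flags = camera.clearFlags;
    buffer.ClearRenderTarget(
        flags <= CameraClearFlags.Depth,    // clear depth for Skybox/Color/Depth
        flags == CameraClearFlags.Color,    // clear color only for the Color flag
        flags == CameraClearFlags.Color ? camera.backgroundColor.linear : Color.clear);
    buffer.BeginSample(bufferName);
    ExecuteBuffer();
}
```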
### DrawVisibleGeometry()
Build `SortingSettings`, `DrawingSettings` and `FilteringSettings`, then call
`context.DrawRenderers(cullingResults, ref drawingSettings, ref filteringSettings)` to draw the models.
In the `DrawingSettings` constructor, `SortingSettings` decides whether sorting is orthographic- or perspective-based and also controls the draw order; `criteria = SortingCriteria.CommonOpaque` orders drawing front to back.
Drawing also needs the shader pass tag id; the example obtains it with `static ShaderTagId unlitShaderTagId = new ShaderTagId("SRPDefaultUnlit")`.
`FilteringSettings` determines which render queues are allowed to render.
Finally, `context.DrawSkybox(camera)` draws the skybox.
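A condensed sketch of the method (opaque geometry only; fields as above):
```c#
static ShaderTagId unlitShaderTagId = new ShaderTagId("SRPDefaultUnlit");

void DrawVisibleGeometry()
{
    var sortingSettings = new SortingSettings(camera)
    {
        criteria = SortingCriteria.CommonOpaque   // front-to-back for opaques
    };
    var drawingSettings = new DrawingSettings(unlitShaderTagId, sortingSettings);
    var filteringSettings = new FilteringSettings(RenderQueueRange.opaque);

    context.DrawRenderers(cullingResults, ref drawingSettings, ref filteringSettings);
    context.DrawSkybox(camera);
}
```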
#### Draw order
### lighting.Cleanup()
After the shadows have been rendered, release the shadow atlas.
### Submit()
Submit the recorded work to be executed.

View File

@@ -0,0 +1,71 @@
## BRDF
The example uses the Disney BRDF model.
## Blend mode
`_SrcBlend` is `One` and `_DstBlend` is `OneMinusSrcAlpha`.
## Shader GUI
Add `CustomEditor "CustomShaderGUI"` inside the `Shader` {} block, then add the script:
```c#
using UnityEditor;
using UnityEngine;
using UnityEngine.Rendering;
public class CustomShaderGUI : ShaderGUI {
MaterialEditor editor;
Object[] materials;
MaterialProperty[] properties;
public override void OnGUI (
MaterialEditor materialEditor, MaterialProperty[] properties
) {
base.OnGUI(materialEditor, properties);
editor = materialEditor;
materials = materialEditor.targets;
this.properties = properties;
}
}
```
Then implement:
```c#
bool HasProperty (string name) =>
FindProperty(name, properties, false) != null;
void SetProperty (string name, string keyword, bool value) {
if (SetProperty(name, value ? 1f : 0f)) {
SetKeyword(keyword, value);
}
}
bool SetProperty (string name, float value) {
MaterialProperty property = FindProperty(name, properties, false);
if (property != null) {
property.floatValue = value;
return true;
}
return false;
}
void SetKeyword (string keyword, bool enabled) {
if (enabled) {
foreach (Material m in materials) {
m.EnableKeyword(keyword);
}
}
else {
foreach (Material m in materials) {
m.DisableKeyword(keyword);
}
}
}
```
With these helpers you can add `Set` functions for any custom shader property. The preset-button logic looks like this:
```c#
bool PresetButton (string name) {
if (GUILayout.Button(name)) {
editor.RegisterPropertyChangeUndo(name);
return true;
}
return false;
}
```

View File

@@ -0,0 +1,185 @@
## Building your own pipeline shaders and ShaderLibrary
## Controlling shader branches with keywords
### multi_compile
Two common approaches:
1. Use multi_compile or shader_feature to define keywords and compile a separate set of shader variants per keyword; Unity's built-in shaders largely work this way.
2. Pass a parameter in from outside and branch with `if` inside the shader to choose which computation runs.
Because `if` and `for` hurt shader performance, the second approach is used less often, only when there are few cases.
These directives are normally used together with `Shader.EnableKeyword("KEYWORD");` and `Shader.DisableKeyword("KEYWORD");` (see the sketch at the end of this section).
multi_compile blindly compiles every combination, so too many keyword directives produce a very large number of variants.
`#pragma multi_compile Red Green Blue`
produces three variants, because three keywords are defined.
```c#
#pragma multi_compile Red Green Blue
#pragma multi_compile Pink Yellow
```
produces six variants (RedPink, RedYellow, GreenPink, GreenYellow, BluePink, BlueYellow), because the keywords from the two lines are combined pairwise.
### shader_feature
This directive behaves and is used essentially like `multi_compile`; both add keywords. It exists to avoid multi_compile's variant explosion at build time, since variants whose keywords no material uses are stripped.
However, if both branches of a keyword need to be available at runtime, `shader_feature` is not suitable.
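A minimal C# sketch of toggling a keyword from script (the keyword name `_RED_TINT_ON` is hypothetical):
```c#
using UnityEngine;

public class KeywordToggle : MonoBehaviour
{
    public Material material;
    public bool redTint;

    void Update()
    {
        // Per-material keyword; pairs with #pragma shader_feature / multi_compile in the shader.
        if (redTint) material.EnableKeyword("_RED_TINT_ON");
        else material.DisableKeyword("_RED_TINT_ON");
        // Shader.EnableKeyword("_RED_TINT_ON") would switch the variant globally instead.
    }
}
```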
## URP ShaderLibrary
### render-pipelines.core
`Packages/com.unity.render-pipelines.core/ShaderLibrary/SpaceTransforms.hlsl` defines a number of space-transform functions, but it does not define the matrix macros itself, so the missing macros must be defined before including it; the matrix variables come from `UnityInput.hlsl`. The macro declarations below may contain mistyped identifiers; if compilation fails, copy the variable names straight from the error message.
`Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl` contains a number of basic type definitions; for now it is only needed for the real type.
`Packages/com.unity.render-pipelines.core/ShaderLibrary/UnityInstancing.hlsl` is the library required for GPU instancing.
`Packages/com.unity.render-pipelines.core/ShaderLibrary/CommonMaterial.hlsl` implements helpers used by the Disney BRDF model, e.g. `PerceptualSmoothnessToPerceptualRoughness`, `PerceptualRoughnessToRoughness`.
```c#
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
#include "UnityInput.hlsl"
#define UNITY_MATRIX_M unity_ObjectToWorld
#define UNITY_MATRIX_I_M unity_WorldToObject
#define UNITY_MATRIX_V unity_MatrixV
#define UNITY_MATRIX_VP unity_MatrixVP
#define UNITY_MATRIX_P glstate_matrix_projection
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/SpaceTransforms.hlsl"
```
### render-pipelines.universal 7.3.1
Unity-Chan uses URP 7.3.1.
#### Core.hlsl
`Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl`
- Defines the vertex position and vertex normal input structs and their initialization functions.
- Defines the `UNITY_Z_0_FAR_FROM_CLIPSPACE` macro.
- Exposes `_WorldSpaceCameraPos` and `_ScaledScreenParams`, which are defined in `UnityInput.hlsl`.
- Some commonly used helper functions.
#### Lighting
`Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl`
Implements the lighting logic (including shadows):
- Light attenuation functions
- The light data struct plus the functions that fill and evaluate it
- BRDF functions
- Global illumination, mainly the spherical-harmonics result
- Lighting models: LightingLambert, LightingSpecular, LightingPhysicallyBased, VertexLighting
- Fragment functions: UniversalFragmentPBR, UniversalFragmentBlinnPhong, LightweightFragmentPBR
#### LitInput
`Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl`
- Defines the `CBUFFER` needed for PBR
- `SampleMetallicSpecGloss()`, `SampleOcclusion()`, `InitializeStandardLitSurfaceData()`
#### LitForwardPass
`Packages/com.unity.render-pipelines.universal/Shaders/LitForwardPass.hlsl`
The concrete implementation of URP's forward pass. The fragment shader ultimately calls `UniversalFragmentPBR()` to compute the final color.
## Ways to reduce draw calls
- SRP Batcher: batching combines draw calls to reduce the communication time between CPU and GPU. The simplest option is to enable the SRP Batcher. It does not reduce the number of draw calls but makes them leaner: material properties are cached on the GPU so they do not have to be sent with every draw call.
- GPU instancing: draws or renders multiple copies of the same mesh at once with a small number of draw calls.
### SRP Batcher
The error `SRP Batcher Material property is found in another cbuffer` is caused by an incorrectly named constant buffer.
The official documentation says:
>For a Shader to be compatible with SRP:
All built-in engine properties must be declared in a single CBUFFER named "UnityPerDraw". For example, unity_ObjectToWorld, or unity_SHAr.
All Material properties must be declared in a single CBUFFER named "UnityPerMaterial".
In plain terms: all built-in properties used by the shader (e.g. unity_ObjectToWorld, unity_SHAr) must be declared in a CBUFFER named UnityPerDraw, and all material properties must be declared in a CBUFFER named UnityPerMaterial.
```c#
cbuffer UnityPerMaterial
{
	float4 _BaseColor;
}
CBUFFER_START(UnityPerMaterial)
	float4 _BaseColor;
CBUFFER_END
```
Then enable it in the pipeline constructor:
```c#
public CustomRenderPipeline()
{
GraphicsSettings.useScriptableRenderPipelineBatching = true;
}
```
### GPU Instancing
Rough steps:
1. Add `#pragma multi_compile_instancing` to the shader file. Unity then generates two variants of the shader, one with and one without GPU-instancing support, and the material inspector gains a toggle that selects which version each material uses.
2. Supporting instancing requires changing how properties are accessed and including `UnityInstancing.hlsl`, which redefines the property macros to index into instance-data arrays. For that to work, the index of the object currently being rendered must be known; it is provided via the vertex data, so it has to be made available. UnityInstancing.hlsl defines macros that simplify this, but it assumes the vertex function takes a struct parameter.
3. Declare a struct for the vertex input data. With GPU instancing the object index is also available as a vertex attribute; add it by putting `UNITY_VERTEX_INPUT_INSTANCE_ID` among the attributes.
4. Add `UNITY_SETUP_INSTANCE_ID(input)` to the vertex shader to extract the instance index; this is enough to make GPU instancing work.
5. Because the SRP Batcher takes precedence, also replace `CBUFFER_START` with `UNITY_INSTANCING_BUFFER_START`, `CBUFFER_END` with `UNITY_INSTANCING_BUFFER_END`, and wrap the properties inside with `UNITY_DEFINE_INSTANCED_PROP`.
6. To pass the instance index on to the pixel shader, create another struct and add UNITY_VERTEX_INPUT_INSTANCE_ID to it as well.
7. Finally, add `UNITY_SETUP_INSTANCE_ID(input)` in the pixel shader and use `UNITY_ACCESS_INSTANCED_PROP` to read the material property data.
```cg
UNITY_INSTANCING_BUFFER_START(UnityPerMaterial)
UNITY_DEFINE_INSTANCED_PROP(float4,_BaseColor)
UNITY_INSTANCING_BUFFER_END(UnityPerMaterial)
struct Attributes
{
float3 positionOS : POSITION;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct Varyings
{
float4 positionCS: SV_POSITION;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
Varyings UnlitPassVertex(Attributes input)
{
Varyings output;
UNITY_SETUP_INSTANCE_ID(input);
UNITY_TRANSFER_INSTANCE_ID(input,output);
float3 positionWS = TransformObjectToWorld(input.positionOS);
output.positionCS=TransformWorldToHClip(positionWS);
return output;
}
float4 UnlitPassFragment(Varyings input) : SV_TARGET
{
UNITY_SETUP_INSTANCE_ID(input);
return UNITY_ACCESS_INSTANCED_PROP(UnityPerMaterial,_BaseColor);
}
```
### Dynamic batching
The third way to reduce draw calls is dynamic batching, an old technique that merges several small meshes sharing the same material into one larger mesh, which is then drawn. It stops working when per-object material properties are used. The larger mesh is generated on demand, so dynamic batching only pays off for small meshes; a sphere is already too big, but cubes work.
In general, GPU instancing beats dynamic batching. Dynamic batching also has caveats: with non-uniform scales the normals of the merged mesh are not guaranteed to be unit length, and the draw order changes because it is now a single mesh instead of many. There is also static batching, which works similarly but marks objects as static ahead of time; apart from needing more memory and storage it has no caveats. The RP does not need to do anything special for either, so there is little to worry about.
Rough steps (see the sketch after this list):
1. In `CameraRenderer.DrawVisibleGeometry`, set `enableDynamicBatching` and `enableInstancing` to true.
2. In `CustomRenderPipeline`, set `GraphicsSettings.useScriptableRenderPipelineBatching = useSRPBatcher`.
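A small sketch of those two settings, written as fragments (field and variable names follow the steps above):
```c#
// In CameraRenderer.DrawVisibleGeometry, forward the two flags to DrawingSettings:
var drawingSettings = new DrawingSettings(unlitShaderTagId, sortingSettings)
{
    enableDynamicBatching = useDynamicBatching,
    enableInstancing = useGPUInstancing
};

// In the CustomRenderPipeline constructor:
public CustomRenderPipeline(bool useDynamicBatching, bool useGPUInstancing, bool useSRPBatcher)
{
    this.useDynamicBatching = useDynamicBatching;
    this.useGPUInstancing = useGPUInstancing;
    GraphicsSettings.useScriptableRenderPipelineBatching = useSRPBatcher;
}
```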
### Adding toggles to the RP
Rough steps:
- In `CameraRenderer`, add `useDynamicBatching` and `useGPUInstancing` parameters to `DrawVisibleGeometry` and `Render`; they are used to set `enableDynamicBatching` and `enableInstancing` in `DrawVisibleGeometry`.
- In `CustomRenderPipeline`, add `useDynamicBatching` and `useGPUInstancing` fields, add `useSRPBatcher`, `useDynamicBatching` and `useGPUInstancing` parameters to the constructor to set those fields and toggle the `SRPBatcher`, and pass them on to the render call in `Render`.
- In `CustomRenderPipelineAsset`, add `useSRPBatcher`, `useDynamicBatching` and `useGPUInstancing` fields and update the corresponding method call.
## Adding render-state settings to the shader
In `Properties`:
```cg
[Enum(UnityEngine.Rendering.BlendMode)]
_SrcBlend("Src Blend", Float) = 1
[Enum(UnityEngine.Rendering.BlendMode)]
_DstBlend("Dst Blend", Float) = 0
[Enum(Off, 0, On, 1)] _ZWrite("Z Write", Float) = 1
```
In the `Pass`, add:
```cg
Blend [_SrcBlend] [_DstBlend]
ZWrite [_ZWrite]
```
## Adding a texture
1. Add `_BaseMap("Texture", 2D) = "white" {}` to `Properties`.
2. Because GPU instancing has to be supported the code gets fairly involved; see the git history for the remaining steps.

View File

@@ -0,0 +1,103 @@
## Post-FX Stack
Only skimmed, for time.
`void Draw(RenderTargetIdentifier from, RenderTargetIdentifier to, Pass pass)`
![](https://pic2.zhimg.com/80/v2-9e0c020362ba48d9652de9507acd876d_720w.jpg)
`to` is the id of the render target being drawn into, set via `buffer.SetRenderTarget()`; `from` is the original render result, passed to the shader as a texture via `buffer.SetGlobalTexture()`.
## Bloom
Bilinearly downsample the image to build a bloom pyramid whose resolution halves at each level.
![](https://pic1.zhimg.com/80/v2-263834811ad5f64ae6674ebaae360bc8_720w.jpg)
`_BloomPyramid1`~`_BloomPyramid16` provide the texture ids.
![](https://pic1.zhimg.com/80/v2-a2dfea38523980ad31c5e656a181735c_720w.jpg)
Create a DoBloom method. First halve the camera's pixel width and height and pick the default render texture format. Initially, copy from the source into the first texture of the pyramid, keeping track of those identifiers.
![](https://pic4.zhimg.com/80/v2-8946f3551e373b2a5cc399a360c1e34b_720w.jpg)
## The DoBloom loop
Loop over the pyramid levels, calling `Draw()` for each one and then computing the next level's texture data for the following iteration, until the loop finishes or the texture size reaches 1x1.
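A rough sketch of that loop, simplified to one copy per level (the actual loop performs a horizontal plus a vertical blur at each level, as listed further below); `buffer`, `camera`, `sourceId`, `bloomPyramidId` and `maxBloomPyramidLevels` are assumed fields:
```c#
int width = camera.pixelWidth / 2, height = camera.pixelHeight / 2;
RenderTextureFormat format = RenderTextureFormat.Default;
RenderTargetIdentifier fromId = sourceId;
for (int i = 0; i < maxBloomPyramidLevels; i++)
{
    if (width < 1 || height < 1) break;                    // stop once the texture hits 1x1
    int toId = bloomPyramidId + i;
    buffer.GetTemporaryRT(toId, width, height, 0, FilterMode.Bilinear, format);
    Draw(fromId, toId, Pass.Copy);                         // downsample into this level
    fromId = toId;
    width /= 2; height /= 2;                               // next level is half the size
}
```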
### Draw
```c#
void Draw (
RenderTargetIdentifier from, RenderTargetIdentifier to, Pass pass
) {
buffer.SetGlobalTexture(fxSourceId, from);
buffer.SetRenderTarget(
to, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store
);
buffer.DrawProcedural(
Matrix4x4.identity, settings.Material, (int)pass,
MeshTopology.Triangles, 3
);
}
```
## Pass enum
```c#
enum Pass {
BloomCombine,
BloomHorizontal,
BloomPrefilter,
BloomVertical,
Copy
}
```
The main logic of the bloom loop:
1. Copy the previous render result.
2. Run the prefilter pass once.
3. For each level take two RTs: do the horizontal Gaussian blur first, then the vertical one. Because the horizontal pass has already sampled once, the vertical pass can halve its sample count.
4. Release the intermediate horizontal-blur RTs.
5. Combine back up the chain in reverse to produce the result (from the vertically blurred textures).
6. Release the vertical-blur RTs.
## Prefilter
Artistically, bloom is usually meant to make only certain things glow, but right now the effect applies to everything no matter how bright it is. Although not physically meaningful, we can limit what contributes by introducing a brightness threshold, which in practice extracts the bright regions.
C#, in DoBloom():
```c#
Vector4 threshold;
threshold.x = Mathf.GammaToLinearSpace(bloom.threshold);
threshold.y = threshold.x * bloom.thresholdKnee;
threshold.z = 2f * threshold.y;
threshold.w = 0.25f / (threshold.y + 0.00001f);
threshold.y -= threshold.x;
buffer.SetGlobalVector(bloomThresholdId, threshold);
```
Shader:
```c#
float3 ApplyBloomThreshold (float3 color) {
float brightness = Max3(color.r, color.g, color.b);
float soft = brightness + _BloomThreshold.y;
soft = clamp(soft, 0.0, _BloomThreshold.z);
soft = soft * soft * _BloomThreshold.w;
float contribution = max(soft, brightness - _BloomThreshold.x);
contribution /= max(brightness, 0.00001);
return color * contribution;
}
float4 BloomPrefilterPassFragment (Varyings input) : SV_TARGET {
float3 color = ApplyBloomThreshold(GetSource(input.fxUV).rgb);
return float4(color, 1.0);
}
```
## Fixing the blocky look of the white glow
Use the SampleTexture2DBicubic function defined in the Filtering include file of the Core RP Library.
```c#
#include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Filtering.hlsl"
float4 GetSourceBicubic (float2 fxUV) {
return SampleTexture2DBicubic(
TEXTURE2D_ARGS(_PostFXSource, sampler_linear_clamp), fxUV,
_PostFXSource_TexelSize.zwxy, 1.0, 0.0
);
}
```
Then, when combining the results, sample the lower-resolution texture with this function. In the example this is exposed as a toggle.
## Intensity control
In `BloomCombinePassFragment`, multiply the low-resolution result by the intensity before adding it to the high-resolution result.

View File

@@ -0,0 +1,14 @@
## HDR render textures
HDR rendering only makes sense combined with post-processing, because we cannot change the format of the final frame buffer. So when we create our own intermediate frame buffer in CameraRenderer.Setup we use the default HDR format where appropriate: pass `RenderTextureFormat.DefaultHDR` (the RT format for post-processing effects) to `buffer.GetTemporaryRT()`, i.e. R16G16B16A16_SFloat, instead of the regular LDR default.
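A sketch of that call (the `frameBufferId` identifier and the `useHDR` flag are assumptions):
```c#
buffer.GetTemporaryRT(
    frameBufferId, camera.pixelWidth, camera.pixelHeight, 32, FilterMode.Bilinear,
    useHDR ? RenderTextureFormat.DefaultHDR : RenderTextureFormat.Default);
buffer.SetRenderTarget(
    frameBufferId, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store);
```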
When stepping through the draw calls you will notice that the scene looks darker than the final result. That happens because these steps are stored in an HDR texture: the linear color data is displayed as-is, so it is wrongly interpreted as sRGB and looks dark.
![](https://pic1.zhimg.com/80/v2-a5c936841f88bc9a44cd4af65cb940c0_720w.jpg)
![](https://pic2.zhimg.com/80/v2-d528a8301bc04fa50aa97d7692b4fc59_720w.jpg)
Why does the brightness change?
The sRGB format uses a non-linear transfer function. Displays compensate for this by performing so-called gamma correction. The gamma curve is usually approximated by raising the color value c to the power 2.2 (and 1/2.2 for the inverse), but the actual transfer function differs slightly.
## Fixing flicker caused by overly small bright areas
<video src="https://vdn1.vzuu.com/SD/d8b32ae4-48e8-11eb-9c34-7640c864ad74.mp4?disable_local_cache=1&auth_key=1634485320-0-0-203192af877eba6f1b1ae62e0e6d165b&f=mp4&bu=pico&expiration=1634485320&v=hw"></video>

View File

@@ -0,0 +1,323 @@
## Cascaded shadows
The weakness of shadow maps is heavy aliasing along shadow edges. The cause is the low shadow-map resolution: many fragments sample the same shadow texel, which produces jagged edges. To address this, several shadow maps are used, detailed ones near the camera and coarse ones far away, which improves shadow quality while keeping rendering efficient. The key to cascaded shadows is therefore generating and using shadow maps of different levels of detail.
How shadow mapping works: place a camera along the light direction and capture a depth map; then, while rendering the scene normally, transform each fragment into light space and compare it against the depth stored in the shadow map to see whether it lies in shadow. If it does, return 0; otherwise return 1.
The main steps are to collect the scene's light settings and pass them to the shaders, then in `RenderDirectionalShadows` obtain a render texture via `GetTemporaryRT`, set the draw state so rendering goes into that RT (rather than the camera target), and draw each directional light's shadows through `RenderDirectionalShadows`.
`RenderDirectionalShadows` mainly relies on `cullingResults.ComputeDirectionalShadowMatricesAndCullingPrimitives` to compute the view matrix, the projection matrix and the ShadowSplitData struct. The function's first parameter is the visible-light index. The next three parameters, two integers and a Vector3, control the shadow cascades; cascades are handled later, so for now pass zero, one and the zero vector. Then comes the texture size, which should be the tile size. The sixth parameter is the shadow near plane, ignored for now and set to zero. After that:
```c#
shadowSettings.splitData = splitData;
//Set the view and projection matrices
buffer.SetViewProjectionMatrices(viewMatrix,projectionMatrix);
ExecuteBuffer();
//Draw the shadows according to ShadowDrawingSettings
context.DrawShadows(ref shadowSettings);
```
Then add a shadow-casting pass to `Lit.shader`:
```c#
Pass{
Tags {
"LightMode" = "ShadowCaster"
}
ColorMask 0
HLSLPROGRAM
#pragma target 3.5
#pragma shader_feature _CLIPPING
#pragma multi_compile_instancing
#pragma vertex ShadowCasterPassVertex
#pragma fragment ShadowCasterPassFragment
#include "ShadowCasterPass.hlsl"
ENDHLSL
}
```
The fragment shader only performs clipping:
```c#
#ifndef CUSTOM_SHADOW_CASTER_PASS_INCLUDED
#define CUSTOM_SHADOW_CASTER_PASS_INCLUDED
#include "../ShaderLibrary/Common.hlsl"
TEXTURE2D(_BaseMap);
SAMPLER(sampler_BaseMap);
UNITY_INSTANCING_BUFFER_START(UnityPerMaterial)
UNITY_DEFINE_INSTANCED_PROP(float4, _BaseMap_ST)
UNITY_DEFINE_INSTANCED_PROP(float4, _BaseColor)
UNITY_DEFINE_INSTANCED_PROP(float, _Cutoff)
UNITY_INSTANCING_BUFFER_END(UnityPerMaterial)
struct Attributes {
float3 positionOS : POSITION;
float2 baseUV : TEXCOORD0;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
struct Varyings {
float4 positionCS : SV_POSITION;
float2 baseUV : VAR_BASE_UV;
UNITY_VERTEX_INPUT_INSTANCE_ID
};
Varyings ShadowCasterPassVertex (Attributes input) {
Varyings output;
UNITY_SETUP_INSTANCE_ID(input);
UNITY_TRANSFER_INSTANCE_ID(input, output);
float3 positionWS = TransformObjectToWorld(input.positionOS);
output.positionCS = TransformWorldToHClip(positionWS);
float4 baseST = UNITY_ACCESS_INSTANCED_PROP(UnityPerMaterial, _BaseMap_ST);
output.baseUV = input.baseUV * baseST.xy + baseST.zw;
return output;
}
void ShadowCasterPassFragment (Varyings input) {
UNITY_SETUP_INSTANCE_ID(input);
float4 baseMap = SAMPLE_TEXTURE2D(_BaseMap, sampler_BaseMap, input.baseUV);
float4 baseColor = UNITY_ACCESS_INSTANCED_PROP(UnityPerMaterial, _BaseColor);
float4 base = baseMap * baseColor;
#if defined(_CLIPPING)
clip(base.a - UNITY_ACCESS_INSTANCED_PROP(UnityPerMaterial, _Cutoff));
#endif
}
#endif
```
`clip()` is an HLSL intrinsic that discards the current pixel when the passed value is less than 0. `_Cutoff` is a configurable float, 0 by default, and the result is effectively binary. In the `ShadowCaster` pass it exists so that alpha-clipped objects cast correct shadows.
The `ShadowCaster` pass renders the shadow map (a depth map in light space); if a surface's depth in light space is greater than the depth stored in the shadow map, the surface is in shadow. The shadow value is fetched in `GetDirectionalShadowAttenuation() => FilterDirectionalShadow()`; after sampling, `GetDirectionalShadowAttenuation()` blends it once with `return lerp(1.0, shadow, directional.strength);`, and `GetLighting()` finally uses the computed shadow value.
`FilterDirectionalShadow()` calls the `SAMPLE_TEXTURE2D_SHADOW` macro, which boils down to the `SampleCmpLevelZero()` function: it samples the given texture coordinate and compares the result against the passed z value (the depth of the current texel in light space). A value less than or equal to z counts as passing, meaning the texel is not occluded; otherwise it fails, meaning the texel lies in shadow.
Note also that `SamplerComparisonState` is the sampler used for depth-comparison sampling and must be used together with `SampleCmpLevelZero`.
### Adding cascades
Because a directional light affects everything within the maximum shadow distance, its shadow map ends up covering a large area. The shadow map uses an orthographic projection, so every texel has a fixed world-space size; if that size is too large, individual texels become clearly visible, producing jagged shadow edges, and small shadows can disappear. Increasing the atlas size mitigates this, but only up to a point.
### Adding settings
#### The ShadowSettings class
Add a `ShadowSettings` class:
```c#
using UnityEngine;
[System.Serializable]
public class ShadowSettings
{
[Min(0f)]
public float maxDistance = 100f;
public enum TextureSize
{
_256=256,_512=512,_1024=1024,_2048=2048,_4096=4096,_8192=8192
}
[System.Serializable]
public struct Directional
{
public TextureSize atlasSize;
}
public Directional directional = new Directional { atlasSize = TextureSize._1024};
}
```
Add a `ShadowSettings` field, and the corresponding method parameters, to `CustomRenderPipelineAsset`, `CustomRenderPipeline`, `CameraRenderer` and `Lighting` in turn.
#### The Shadows class
```c#
public class Shadows
{
const int maxShadowedDirectionalLightCount = 1;
int ShadowedDirectionalLightCount;
const string bufferName = "shadows";
CommandBuffer buffer = new CommandBuffer
{
name = bufferName
};
ScriptableRenderContext context;
CullingResults cullingResults;
ShadowSettings settings;
struct ShadowedDirectionalLight
{
public int visibleLightIndex;
}
private ShadowedDirectionalLight[] ShadowedDirectionalLights =
new ShadowedDirectionalLight[maxShadowedDirectionalLightCount];
public void Setup(
ScriptableRenderContext context,CullingResults cullingResults,
ShadowSettings settings
)
{
ShadowedDirectionalLightCount = 0;
this.context = context;
this.cullingResults = cullingResults;
this.settings = settings;
}
void ExecuteBuffer()
{
context.ExecuteCommandBuffer(buffer);
buffer.Clear();
}
    /*
     * Store a directional light's shadow information.
     * When the light's shadows are enabled and it is visible, record it in ShadowedDirectionalLights[].
     */
public void ReserveDirectionalShadows(Light light, int visibleLightIndex)
{
if (ShadowedDirectionalLightCount < maxShadowedDirectionalLightCount &&
light.shadows!=LightShadows.None && light.shadowStrength>0f &&
cullingResults.GetShadowCasterBounds(visibleLightIndex,out Bounds b))
{
ShadowedDirectionalLights[ShadowedDirectionalLightCount++] = new ShadowedDirectionalLight() { visibleLightIndex = visibleLightIndex};
}
}
}
```
- Then add a `Shadows` field `shadows` to the `Lighting` class, call `shadows.Setup(context, cullingResults, shadowSettings);` in its `Setup`, and call `shadows.ReserveDirectionalShadows(visibleLight.light, index);` in `SetupDirectionalLight`.
### Rendering
#### Shadow atlas
```c#
TEXTURE2D_SHADOW(_DirectionalShadowAtlas);
#define SHADOW_SAMPLER sampler_linear_clamp_compare
SAMPLER_CMP(SHADOW_SAMPLER);
```
#### 3 Cascaded shadow maps
We finally get shadows, but they look terrible: surfaces that should not be shadowed are covered with pixelated bands of shadow artifacts. This is self-shadowing caused by the limited shadow-map resolution; a different resolution changes the artifact pattern but does not remove it.
![](https://pic1.zhimg.com/80/v2-7b1bc2d23fcaaf10112496200536f770_720w.jpg)
#### Culling spheres
Unity determines the region covered by each cascade by creating a culling sphere for it. Because the shadow projections are orthographic and square, they fit their culling sphere tightly but also cover some space around it, which is why some shadows can be visible outside the culled region. The light direction does not matter to the sphere, so all directional lights end up using the same culling spheres.
The culling spheres are part of the `ShadowSplitData` computed by ComputeDirectionalShadowMatricesAndCullingPrimitives. Pass the cascade count as `_CascadeCount` and the sphere position data as `_CascadeCullingSpheres`.
Create a `ShadowData` struct to carry the cascade index and the shadow strength. In `GetLighting`, the cascade index in `ShadowData` is determined by testing whether the fragment's world position lies inside each sphere.
#### Maximum distance
At this point shadows vanish as soon as they pass the last culling sphere. To fix this, a maximum shadow distance is used so shadows only disappear beyond it. Concretely, `GetShadowData` compares the fragment depth with the maximum shadow distance and sets `strength` to 0 when it is exceeded.
#### Shadow fade and cascade fade
For the shadow distance fade see the git history.
![](https://pic3.zhimg.com/80/v2-3d3ad5abdfc204e2b640ffab7329554a_720w.jpg)
Cascade fade formula:
![](https://pic2.zhimg.com/80/v2-01588a58eec8923422b5ea114f2000e5_720w.jpg)
where f is the reciprocal of the expression below:
![](https://pic2.zhimg.com/80/v2-5739eecbfd934de957734ad3d9913ad9_720w.jpg)
Compute the fade factors in `RenderDirectionalShadows` and pass them to the shader as `_ShadowDistanceFade`.
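A sketch of that hand-off, assuming the (1 - d/m)/f style fade shown above and assuming the `settings`, `buffer` fields and the `distanceFade`/`cascadeFade` setting names:
```c#
static int shadowDistanceFadeId = Shader.PropertyToID("_ShadowDistanceFade");

void SetShadowDistanceFade()
{
    float f = 1f - settings.directional.cascadeFade;          // assumed setting
    buffer.SetGlobalVector(shadowDistanceFadeId, new Vector4(
        1f / settings.maxDistance,      // 1/m for the distance fade
        1f / settings.distanceFade,     // 1/f for the distance fade (assumed setting)
        1f / (1f - f * f)));            // fade factor for the last culling sphere
}
```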
#### Removing shadow acne
##### Simple fixes
1. The simplest fix is to add a constant bias to the shadow caster's depth; this removes the acne but makes the shadows less accurate.
2. Another option is a slope-scale bias, by passing a non-zero value as the second argument of SetGlobalDepthBias.
##### Normal bias
Add a new cascade-data variable to pass per-cascade data: x is the reciprocal of the culling-sphere radius, y is the cascade texel size computed as `√2 * 2f * cullingSphere.w / tileSize`; the √2 covers the worst case where the offset runs along a texel diagonal. In the shader this value is multiplied by the normal to obtain the offset, which is then applied to `surfaceWS`.
##### Configurable bias
Get the Light and its shadowBias from the culling results and store it in `ShadowedDirectionalLight`. Before drawing the shadows, apply the slope-scale bias with `buffer.SetGlobalDepthBias(0, light.slopeScaleBias);`. Also read the light's normalBias from the culling results and pass it to the shader, where it multiplies the `normalBias` from the previous step.
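A sketch of where the bias is applied around the shadow draw (the `light.slopeScaleBias` field name follows the note above):
```c#
buffer.SetGlobalDepthBias(0f, light.slopeScaleBias);   // slope-scale bias for this light
ExecuteBuffer();
context.DrawShadows(ref shadowSettings);
buffer.SetGlobalDepthBias(0f, 0f);                     // reset so later draws are unaffected
```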
##### Fixing clipped shadow casters
When the camera sits inside an object, its shadow can get clipped. The fix is to add the following to the vertex shader:
```c#
#if UNITY_REVERSED_Z
output.positionCS.z=min(output.positionCS.z,output.positionCS.w*UNITY_NEAR_CLIP_VALUE);
#else
output.positionCS.z=max(output.positionCS.z,output.positionCS.w*UNITY_NEAR_CLIP_VALUE);
#endif
```
The idea is that when a vertex's z lies in front of the near clip plane, it is clamped onto the near plane.
Why large triangles still cause problems is not clear to me; the workaround is to read the light's shadowNearPlane and pass it as the near-plane argument when computing the culling spheres.
##### PCF filtering
So far the shadow map is sampled once per fragment, giving hard shadows. The shadow comparison sampler uses a special form of bilinear interpolation that performs the depth comparison before interpolating. This is called percentage closer filtering, PCF for short; since four texels are involved it is usually described as a 2x2 PCF filter.
- Add a filter enum with 2x2, 3x3 and 5x5 modes, pass shadowAtlasSize to the shader together with the matching shader keywords, and update `SetKeywords()`.
- Give each filter its own sample count and keyword setup.
`DIRECTIONAL_FILTER_SETUP` maps to SampleShadow_ComputeSamples_Tent_xxx, which computes the weights and UVs from the size and positionSTS.xy; the shadow is then sampled the corresponding number of times.
```c#
float FilterDirectionalShadow (float3 positionSTS)
{
#if defined(DIRECTIONAL_FILTER_SETUP)
float weights[DIRECTIONAL_FILTER_SAMPLES];
float2 positions[DIRECTIONAL_FILTER_SAMPLES];
float4 size = _ShadowAtlasSize.yyxx;
DIRECTIONAL_FILTER_SETUP(size, positionSTS.xy, weights, positions);
float shadow = 0;
for (int i = 0; i < DIRECTIONAL_FILTER_SAMPLES; i++) {
shadow += weights[i] * SampleDirectionalShadowAtlas(
float3(positions[i].xy, positionSTS.z)
);
}
return shadow;
#else
return SampleDirectionalShadowAtlas(positionSTS);
#endif
}
```
Modify `SetCascadeData()` in `Shadows.cs`. Enlarging the filter makes the shadows smoother but also brings the acne back, so the normal bias has to grow to match the filter size. This can be done automatically by multiplying the texel size by one plus the filter mode in SetCascadeData.
```c#
void SetCascadeData(int index, Vector4 cullingSphere, float tileSize)
{
float texelSize = 2f * cullingSphere.w / tileSize;
float filterSize = texelSize * ((float)settings.directional.filter + 1f);
cullingSphere.w -= filterSize;
cullingSphere.w *= cullingSphere.w;
cascadeCullingSpheres[index] = cullingSphere;
cascadeData[index] = new Vector4(
1f / cullingSphere.w,
filterSize*1.4142136f);
}
```
##### Cascade transitions
In `GetShadowData()`, compute the fade from the distance. If this is not the last sphere, assign the fade to cascadeBlend; for the last sphere, set `strength = strength * fade`.
```c#
for (i=0;i<_CascadeCount;i++)
{
float4 sphere=_CascadeCullingSpheres[i];
float distanceSqr=DistanceSquared(surfaceWS.position,sphere.xyz);
if(distanceSqr<sphere.w)
{
if(i== _CascadeCount-1)
{
data.strength*=FadedShadowStrength(distanceSqr,_CascadeData[i].x,_ShadowDistanceFade.z);
}
break;
}
}
```
##### Dithered transitions
Improves the cascade transition. Compute a dither value per fragment in `LitPassFragment`:
```c#
surface.dither=InterleavedGradientNoise(input.positionCS.xy,0);
```
When dithered blending is used and we are not in the last cascade, jump to the next cascade when the blend value is less than the dither value:
```c#
#if defined(_CASCADE_BLEND_DITHER)
else if(data.cascadeBlend < surfaceWS.dither)
{
i+=1;
}
#endif
```
##### Other features
- Transparency
- Shadow modes
- Clipped shadows
- Dithered shadows
- No shadows
- Unlit shadow casters
- Receiving shadows

View File

@@ -0,0 +1,204 @@
## ToonShader
The book's example uses two passes to achieve the effect.
### OutlinePass
Draws the outline around the model:
```
Pass {
NAME "OUTLINE"
Cull Front
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
float _Outline;
fixed4 _OutlineColor;
struct a2v {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f {
float4 pos : SV_POSITION;
};
v2f vert (a2v v) {
v2f o;
float4 pos = mul(UNITY_MATRIX_MV, v.vertex);
//IT_MV rotates normals from object to eye space
float3 normal = mul((float3x3)UNITY_MATRIX_IT_MV, v.normal);
normal.z = -0.5;
pos = pos + float4(normalize(normal), 0) * _Outline;
o.pos = mul(UNITY_MATRIX_P, pos);
return o;
}
float4 frag(v2f i) : SV_Target {
return float4(_OutlineColor.rgb, 1);
}
ENDCG
}
```
### ToonShader
Color calculation:
`ramp color = tex2D(_Ramp, float2(diff, diff)).rgb`, where diff is derived from dot(worldNormal, worldLightDir)
`diffuse = main-tex color * tint color * light color * ramp sample`
```
fixed4 c = tex2D (_MainTex, i.uv);
fixed3 albedo = c.rgb * _Color.rgb;
fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz * albedo;
UNITY_LIGHT_ATTENUATION(atten, i, i.worldPos);
fixed diff = dot(worldNormal, worldLightDir);
diff = (diff * 0.5 + 0.5) * atten;
fixed3 diffuse = _LightColor0.rgb * albedo * tex2D(_Ramp, float2(diff, diff)).rgb;
```
Specular calculation:
fwidth and smoothstep are used here mainly for anti-aliasing: https://blog.csdn.net/candycat1992/article/details/44673819
```
fixed spec = dot(worldNormal, worldHalfDir);
fixed w = fwidth(spec) * 2.0;
fixed3 specular = _Specular.rgb * lerp(0, 1, smoothstep(-w, w, spec + _SpecularScale - 1)) * step(0.0001, _SpecularScale);
```
Pass code:
```
Pass {
Tags { "LightMode"="ForwardBase" }
Cull Back
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma multi_compile_fwdbase
#include "UnityCG.cginc"
#include "Lighting.cginc"
#include "AutoLight.cginc"
#include "UnityShaderVariables.cginc"
fixed4 _Color;
sampler2D _MainTex;
float4 _MainTex_ST;
sampler2D _Ramp;
fixed4 _Specular;
fixed _SpecularScale;
struct a2v {
float4 vertex : POSITION;
float3 normal : NORMAL;
float4 texcoord : TEXCOORD0;
float4 tangent : TANGENT;
};
struct v2f {
float4 pos : POSITION;
float2 uv : TEXCOORD0;
float3 worldNormal : TEXCOORD1;
float3 worldPos : TEXCOORD2;
SHADOW_COORDS(3)
};
v2f vert (a2v v) {
v2f o;
o.pos = mul( UNITY_MATRIX_MVP, v.vertex);
o.uv = TRANSFORM_TEX (v.texcoord, _MainTex);
o.worldNormal = UnityObjectToWorldNormal(v.normal);
o.worldPos = mul(_Object2World, v.vertex).xyz;
TRANSFER_SHADOW(o);
return o;
}
float4 frag(v2f i) : SV_Target {
fixed3 worldNormal = normalize(i.worldNormal);
fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
fixed3 worldViewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
fixed3 worldHalfDir = normalize(worldLightDir + worldViewDir);
fixed4 c = tex2D (_MainTex, i.uv);
fixed3 albedo = c.rgb * _Color.rgb;
fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz * albedo;
UNITY_LIGHT_ATTENUATION(atten, i, i.worldPos);
fixed diff = dot(worldNormal, worldLightDir);
diff = (diff * 0.5 + 0.5) * atten;
fixed3 diffuse = _LightColor0.rgb * albedo * tex2D(_Ramp, float2(diff, diff)).rgb;
fixed spec = dot(worldNormal, worldHalfDir);
fixed w = fwidth(spec) * 2.0;
fixed3 specular = _Specular.rgb * lerp(0, 1, smoothstep(-w, w, spec + _SpecularScale - 1)) * step(0.0001, _SpecularScale);
return fixed4(ambient + diffuse + specular, 1.0);
}
ENDCG
}
```
### Hatching style
Compute the factor with `hatchFactor = max(0, dot(worldLightDir, worldNormal)) * 7`, then work out each layer's blend weight:
```
if (hatchFactor > 6.0) {
// Pure white, do nothing
} else if (hatchFactor > 5.0) {
o.hatchWeights0.x = hatchFactor - 5.0;
} else if (hatchFactor > 4.0) {
o.hatchWeights0.x = hatchFactor - 4.0;
o.hatchWeights0.y = 1.0 - o.hatchWeights0.x;
} else if (hatchFactor > 3.0) {
o.hatchWeights0.y = hatchFactor - 3.0;
o.hatchWeights0.z = 1.0 - o.hatchWeights0.y;
} else if (hatchFactor > 2.0) {
o.hatchWeights0.z = hatchFactor - 2.0;
o.hatchWeights1.x = 1.0 - o.hatchWeights0.z;
} else if (hatchFactor > 1.0) {
o.hatchWeights1.x = hatchFactor - 1.0;
o.hatchWeights1.y = 1.0 - o.hatchWeights1.x;
} else {
o.hatchWeights1.y = hatchFactor;
o.hatchWeights1.z = 1.0 - o.hatchWeights1.y;
}
```
Then, in the fragment shader, multiply the weights with the samples of the six hatch textures, compute the white (fully lit) contribution, and add everything together:
```
fixed4 hatchTex0 = tex2D(_Hatch0, i.uv) * i.hatchWeights0.x;
fixed4 hatchTex1 = tex2D(_Hatch1, i.uv) * i.hatchWeights0.y;
fixed4 hatchTex2 = tex2D(_Hatch2, i.uv) * i.hatchWeights0.z;
fixed4 hatchTex3 = tex2D(_Hatch3, i.uv) * i.hatchWeights1.x;
fixed4 hatchTex4 = tex2D(_Hatch4, i.uv) * i.hatchWeights1.y;
fixed4 hatchTex5 = tex2D(_Hatch5, i.uv) * i.hatchWeights1.z;
fixed4 whiteColor = fixed4(1, 1, 1, 1) * (1 - i.hatchWeights0.x - i.hatchWeights0.y - i.hatchWeights0.z -
i.hatchWeights1.x - i.hatchWeights1.y - i.hatchWeights1.z);
fixed4 hatchColor = hatchTex0 + hatchTex1 + hatchTex2 + hatchTex3 + hatchTex4 + hatchTex5 + whiteColor;
UNITY_LIGHT_ATTENUATION(atten, i, i.worldPos);
return fixed4(hatchColor.rgb * _Color.rgb * atten, 1.0);
```

View File

@@ -0,0 +1,313 @@
## BrightnessSaturationAndContrast
### Adding a script to the camera
Add two attributes so the script runs in edit mode and requires a Camera component:
```
[ExecuteInEditMode]
[RequireComponent(typeof(Camera))]
```
Implement the base class:
```
using UnityEngine;
[ExecuteInEditMode]
[RequireComponent(typeof(Camera))]
public class PostEffectsBase : MonoBehaviour
{
protected void CheckResources()
{
bool isSupported = CheckSupport();
if (isSupported == false)
{
NotSupported();
}
}
protected bool CheckSupport()
{
if (SystemInfo.supportsImageEffects == false || SystemInfo.supportsRenderTextures == false)
{
Debug.LogWarning("Not Supported!");
return false;
}
return true;
}
protected void NotSupported()
{
enabled = false;
}
protected Material CheckShaderAndCreateMaterial(Shader shader, Material material)
{
if (shader == null)
return null;
if (shader.isSupported && material && material.shader == shader)
return material;
if (!shader.isSupported)
return null;
else
{
material = new Material(shader);
material.hideFlags = HideFlags.DontSave;
if (material)
return material;
else
return null;
}
}
protected void Start()
{
CheckResources();
}
}
```
Then add the needed fields in a subclass:
```
using UnityEngine;
public class BrightnessSaturationAndContrast : PostEffectsBase
{
[Range(0.0f, 3.0f)]
public float brightness = 1.0f;
[Range(0.0f, 3.0f)]
public float saturation = 1.0f;
[Range(0.0f, 3.0f)]
public float contrast = 1.0f;
public Shader briSatConShader;
private Material briSatConMaterial;
public Material material
{
get
{
briSatConMaterial = CheckShaderAndCreateMaterial(briSatConShader, briSatConMaterial);
return briSatConMaterial;
}
}
private void OnRenderImage(RenderTexture src, RenderTexture dest)
{
if (material != null)
{
material.SetFloat("_Brightness", brightness);
material.SetFloat("_Saturation", saturation);
material.SetFloat("_Contrast", contrast);
Graphics.Blit(src, dest, material);
}
else
{
Graphics.Blit(src, dest);
}
}
}
```
Finally, OnRenderImage calls Graphics.Blit() to perform the rendering.
### The post-processing shader
```
Shader "PostProcess/BrightnessSaturationAndContrast" {
Properties {
_MainTex ("Base", 2D) = "white" {}
_Brightness("Brightness",Float)=1
_Saturation("Saturation",Float)=1
_Contrast("Constrast",Float)=1
}
SubShader {
Pass{
ZTest Always Cull Off ZWrite Off
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
sampler2D _MainTex;
half _Brightness;
half _Saturation;
half _Contrast;
struct v2f{
float4 pos : SV_POSITION;
half2 uv : TEXCOORD0;
};
v2f vert(appdata_img v)
{
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);
o.uv=v.texcoord;
return o;
}
fixed4 frag(v2f i) : SV_Target{
fixed4 renderTex=tex2D(_MainTex,i.uv);
fixed3 finalColor=renderTex.rgb * _Brightness;
fixed luminance=0.2125*renderTex.r+0.7154*renderTex.g+0.0721*renderTex.b;
fixed3 luminanceColor=fixed3(luminance,luminance,luminance);
finalColor =lerp(luminanceColor,finalColor,_Saturation);
fixed3 avgColor=fixed3(0.5,0.5,0.5);
finalColor=lerp(avgColor,finalColor,_Contrast);
return fixed4(finalColor,renderTex.a);
}
ENDCG
}
}
Fallback Off
}
```
## Gaussian blur
Unlike before, this effect uses RenderTexture.GetTemporary to allocate a buffer the same size as the screen image, because Gaussian blur needs two passes and an intermediate buffer to hold the blurred result of the first pass. `RenderTexture buffer = RenderTexture.GetTemporary(rtW, rtH, 0);` We first call `Graphics.Blit(src, buffer, material, 0)` to process src with the shader's first pass and store the result in buffer. Then we call `Graphics.Blit(buffer, dest, material, 1)` to process buffer with the second pass and produce the final screen image. Finally, `RenderTexture.ReleaseTemporary` must be called to release the allocated buffer.
```
private void OnRenderImage(RenderTexture src, RenderTexture dest)
{
if (material != null)
{
int rtW = src.width;
int rtH = src.height;
RenderTexture buffer = RenderTexture.GetTemporary(rtW, rtH, 0);
Graphics.Blit(src, buffer, material, 0);
Graphics.Blit(buffer, dest, material, 1);
RenderTexture.ReleaseTemporary(buffer);
}
else
{
Graphics.Blit(src, dest);
}
}
```
Adding downsampling:
```
private void OnRenderImage(RenderTexture src, RenderTexture dest)
{
if (material != null)
{
int rtW = src.width / downSample;
int rtH = src.height / downSample;
RenderTexture buffer = RenderTexture.GetTemporary(rtW, rtH, 0);
buffer.filterMode=FilterMode.Bilinear;
Graphics.Blit(src, buffer, material, 0);
Graphics.Blit(buffer, dest, material, 1);
RenderTexture.ReleaseTemporary(buffer);
}
else
{
Graphics.Blit(src, dest);
}
}
```
### CGINCLUDE
Use the CGINCLUDE and ENDCG keywords to share the common code with the other passes:
```
Shader "Unity Shaders Book/Chapter 12/Gaussian Blur" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" {}
_BlurSize ("Blur Size", Float) = 1.0
}
SubShader {
CGINCLUDE
#include "UnityCG.cginc"
sampler2D _MainTex;
half4 _MainTex_TexelSize;
float _BlurSize;
struct v2f {
float4 pos : SV_POSITION;
half2 uv[5]: TEXCOORD0;
};
v2f vertBlurVertical(appdata_img v) {
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
half2 uv = v.texcoord;
o.uv[0] = uv;
o.uv[1] = uv + float2(0.0, _MainTex_TexelSize.y * 1.0) * _BlurSize;
o.uv[2] = uv - float2(0.0, _MainTex_TexelSize.y * 1.0) * _BlurSize;
o.uv[3] = uv + float2(0.0, _MainTex_TexelSize.y * 2.0) * _BlurSize;
o.uv[4] = uv - float2(0.0, _MainTex_TexelSize.y * 2.0) * _BlurSize;
return o;
}
v2f vertBlurHorizontal(appdata_img v) {
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
half2 uv = v.texcoord;
o.uv[0] = uv;
o.uv[1] = uv + float2(_MainTex_TexelSize.x * 1.0, 0.0) * _BlurSize;
o.uv[2] = uv - float2(_MainTex_TexelSize.x * 1.0, 0.0) * _BlurSize;
o.uv[3] = uv + float2(_MainTex_TexelSize.x * 2.0, 0.0) * _BlurSize;
o.uv[4] = uv - float2(_MainTex_TexelSize.x * 2.0, 0.0) * _BlurSize;
return o;
}
fixed4 fragBlur(v2f i) : SV_Target {
float weight[3] = {0.4026, 0.2442, 0.0545};
fixed3 sum = tex2D(_MainTex, i.uv[0]).rgb * weight[0];
for (int it = 1; it < 3; it++) {
sum += tex2D(_MainTex, i.uv[it*2-1]).rgb * weight[it];
sum += tex2D(_MainTex, i.uv[it*2]).rgb * weight[it];
}
return fixed4(sum, 1.0);
}
ENDCG
ZTest Always Cull Off ZWrite Off
Pass {
NAME "GAUSSIAN_BLUR_VERTICAL"
CGPROGRAM
#pragma vertex vertBlurVertical
#pragma fragment fragBlur
ENDCG
}
Pass {
NAME "GAUSSIAN_BLUR_HORIZONTAL"
CGPROGRAM
#pragma vertex vertBlurHorizontal
#pragma fragment fragBlur
ENDCG
}
}
FallBack "Diffuse"
}
```