

@@ -0,0 +1,8 @@
---
title: Untitled
date: 2022-09-04 19:50:57
excerpt:
tags:
rating: ⭐
---
DrawDynamicMeshPass() can be used to implement a MeshDraw-based draw pass from inside a plugin; a sketch of the usual call shape follows.
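A minimal sketch of the usual DrawDynamicMeshPass() call shape, assuming `View` is an FViewInfo and a depth-style pass is being drawn; the FDepthPassMeshProcessor constructor arguments are quoted from memory of UE 4.27 and may need adjusting for your own pass processor:
```c++
DrawDynamicMeshPass(View, RHICmdList,
	[&View, Scene](FDynamicPassMeshDrawListContext* DynamicMeshPassContext)
	{
		// Render state and processor are built inside the callback, on the render thread.
		FMeshPassProcessorRenderState DrawRenderState(View);
		FDepthPassMeshProcessor PassMeshProcessor(
			Scene, &View, DrawRenderState,
			true  /*bRespectUseAsOccluderFlag*/,
			DDM_AllOpaque,
			false /*bEarlyZPassMovable*/,
			false /*bDitheredLODFadingOutMaskPass*/,
			DynamicMeshPassContext);

		// Feed every gathered dynamic mesh batch to the processor.
		const uint64 DefaultBatchElementMask = ~0ull;
		for (const FMeshBatchAndRelevance& MeshAndRelevance : View.DynamicMeshElements)
		{
			PassMeshProcessor.AddMeshBatch(*MeshAndRelevance.Mesh, DefaultBatchElementMask, MeshAndRelevance.PrimitiveSceneProxy);
		}
	});
```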


@@ -0,0 +1,65 @@
---
title: ShadowMap logic in the Lighting stage
date: 2023-04-10 14:45:45
excerpt:
tags:
rating: ⭐
---
## PS Parameters
`FDeferredLightPS::FParameters GetDeferredLightPSParameters()`
- LightAttenuationTexture = ShadowMaskTexture ? ShadowMaskTexture : WhiteDummy;
- One-pass projection:
	- VirtualShadowMap = VirtualShadowMapUniformBuffer;
	- VirtualShadowMapId = VirtualShadowMapId;
	- ShadowMaskBits = ShadowMaskBits ? ShadowMaskBits : GSystemTextures.GetZeroUIntDummy(GraphBuilder);
Final outputs:
- SceneColorTexture
- SceneDepthTexture
## VSM rendering function in the Lighting stage
RenderDeferredShadowProjections() outputs `ScreenShadowMaskTexture`.
## Where the problem lies
LightAttenuation channels:
- x: whole-scene directional-light shadowing.
- y: whole-scene directional-light SSS shadowing.
- z: light function + per-object shadows.
- w: per-object SSS shadowing.
The z and w channels are used for non-directional lights or for the mobile rendering path.
```c++
if (LightData.bRadialLight || SHADING_PATH_MOBILE)
{
// Remapping the light attenuation buffer (see ShadowRendering.cpp)
Shadow.SurfaceShadow = LightAttenuation.z * StaticShadowing;
// SSS uses a separate shadowing term that allows light to penetrate the surface
//@todo - how to do static shadowing of SSS correctly?
Shadow.TransmissionShadow = LightAttenuation.w * StaticShadowing;
Shadow.TransmissionThickness = LightAttenuation.w;
}
else
{
// Remapping the light attenuation buffer (see ShadowRendering.cpp)
// Also fix up the fade between dynamic and static shadows
// to work with plane splits rather than spheres.
float DynamicShadowFraction = DistanceFromCameraFade(SceneDepth, LightData);
// For a directional light, fade between static shadowing and the whole scene dynamic shadowing based on distance + per object shadows
Shadow.SurfaceShadow = lerp(LightAttenuation.x, StaticShadowing, DynamicShadowFraction);
// Fade between SSS dynamic shadowing and static shadowing based on distance
Shadow.TransmissionShadow = min(lerp(LightAttenuation.y, StaticShadowing, DynamicShadowFraction), LightAttenuation.w);
Shadow.SurfaceShadow *= LightAttenuation.z;
Shadow.TransmissionShadow *= LightAttenuation.z;
// Need this min or backscattering will leak when in shadow which cast by non perobject shadow(Only for directional light)
Shadow.TransmissionThickness = min(LightAttenuation.y, LightAttenuation.w);
}
```
**LightAttenuation** in **DeferredLightingCommon.ush** and **DeferredLightPixelShaders.usf** is the shadow-map data; see **GetLightAttenuationFromShadow()** for details. From that code it looks like Epic plans to use dithering later to remove VSM shadow aliasing.
The main shadow contribution comes from **LightAttenuation** together with the NoL term in ShadingModels.ush. We only need to adjust the self-shadowing, i.e. the z channel; the other channels are whole-scene shadows.


@@ -0,0 +1,44 @@
---
title: RenderLights
date: 2023-04-09 10:23:21
excerpt:
tags:
rating: ⭐
---
# Key functions
Retrieve the projection info of all shadow maps:
```c++
const FVisibleLightInfo& VisibleLightInfo = VisibleLightInfos[LightSceneInfo->Id];
const TArray<FProjectedShadowInfo*, SceneRenderingAllocator>& ShadowMaps = VisibleLightInfo.ShadowsToProject;
for (int32 ShadowIndex = 0; ShadowIndex < ShadowMaps.Num(); ShadowIndex++)
{
const FProjectedShadowInfo* ProjectedShadowInfo = ShadowMaps[ShadowIndex];
}
```
# Translucency-volume primitive rendering
## InjectSimpleTranslucencyLightingVolumeArray
Injects rendering for simple translucent-volume lights. It appears to render volumetric effects from a 3D texture and does not run by default.
- InjectSimpleLightsTranslucentLighting
- InjectSimpleTranslucentLightArray
## InjectTranslucencyLightingVolume
Renders after gathering the light-proxy information used for the translucency lighting volumes; mainly used for cloud rendering.
- InjectTranslucencyLightingVolume
# Direct lighting
## RenderVirtualShadowMapProjectionMaskBits
- VirtualShadowMapProjectionMaskBits
- VirtualShadowMapProjection(RayCount:%u(%s),SamplesPerRay:%u,Input:%s%s)
Outputs to UAVs named `Shadow.Virtual.MaskBits` and `Shadow.Virtual.MaskBits(HairStrands)`.
## AddClusteredDeferredShadingPass
## RenderSimpleLightsStandardDeferred
## RenderLight
For each light, renders the ShadowMask in ShadowProjectionOnOpaque:
- VirtualShadowMapProjection
- CompositeVirtualShadowMapMask


@@ -0,0 +1,155 @@
# MeshDraw study notes
## Preface
Source version: 4.27.0
Reference: Yivanlee's MeshDraw article series.
## Gathering primitive render data
- FDeferredShadingSceneRenderer::Render()
	- InitViews()
		- ComputeViewVisibility()
			- GatherDynamicMeshElements()
				- GetDynamicMeshElements()
InitViews(): computes visibility and initializes capsule shadows, the sky environment map, atmospheric fog and volumetric fog.
GatherDynamicMeshElements(): iterates over all primitives in the scene and calls their `GetDynamicMeshElements()` interface to fetch render data, then calls `FMeshElementCollector::AllocateMesh()` to allocate an FMeshBatch and fills it with that data.
`FMeshBatch` carries the `MaterialRenderProxy` and other render data, for example:
- FVertexFactory
- FMaterialRenderProxy
- FLightCacheInterface
- uint32 CastShadow : 1; // Whether it can be used in shadow renderpasses.
- uint32 bUseForMaterial : 1; // Whether it can be used in renderpasses requiring material outputs.
- uint32 bUseForDepthPass : 1; // Whether it can be used in depth pass.
- uint32 bUseAsOccluder : 1; // Hint whether this mesh is a good occluder.
- uint32 bWireframe
`FPrimitiveSceneProxy::GetDynamicMeshElements()`
FPrimitiveSceneProxy exists to avoid the locking problems caused by passing data between the game thread and the render thread; it can be thought of as the render-thread mirror of a scene primitive.
Both StaticMesh and SkeletalMesh override UPrimitiveComponent::CreateSceneProxy() to create their SceneProxy class, which is then used to submit render data; the logic that submits render info and requests lives in GetDynamicMeshElements().
This function runs on the render thread and, depending on the situation, passes different FMaterialRenderProxy subclasses to FMeshBatch and FMeshElementCollector (FMaterialRenderProxy and its subclasses can be seen as mirrors of the Material data). These situations roughly include:
- debug views
- wireframe mode
- the Material of the current LOD
- vertex-color visualization
This step can be understood as handing the Material over to the MeshDraw framework, roughly as sketched below.
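A minimal sketch of a GetDynamicMeshElements() override following the usual FPrimitiveSceneProxy pattern; `FMyCustomSceneProxy` and members such as `VertexFactory`, `MaterialProxy`, `IndexBuffer`, `NumTriangles` and `NumVertices` are placeholders for whatever the proxy actually stores:
```c++
void FMyCustomSceneProxy::GetDynamicMeshElements(
	const TArray<const FSceneView*>& Views,
	const FSceneViewFamily& ViewFamily,
	uint32 VisibilityMap,
	FMeshElementCollector& Collector) const
{
	for (int32 ViewIndex = 0; ViewIndex < Views.Num(); ViewIndex++)
	{
		if (!(VisibilityMap & (1 << ViewIndex)))
		{
			continue;
		}

		// Allocate an FMeshBatch from the collector and fill it with this proxy's data.
		FMeshBatch& Mesh = Collector.AllocateMesh();
		Mesh.VertexFactory = &VertexFactory;
		Mesh.MaterialRenderProxy = MaterialProxy;   // e.g. GetMaterial(0)->GetRenderProxy()
		Mesh.Type = PT_TriangleList;
		Mesh.bUseForMaterial = true;
		Mesh.CastShadow = true;

		FMeshBatchElement& BatchElement = Mesh.Elements[0];
		BatchElement.IndexBuffer = &IndexBuffer;
		BatchElement.PrimitiveUniformBuffer = GetUniformBuffer();
		BatchElement.FirstIndex = 0;
		BatchElement.NumPrimitives = NumTriangles;
		BatchElement.MinVertexIndex = 0;
		BatchElement.MaxVertexIndex = NumVertices - 1;

		// Hand the batch back to the collector; the MeshDraw framework takes it from here.
		Collector.AddMesh(ViewIndex, Mesh);
	}
}
```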
## MeshDraw rendering
Once `ComputeViewVisibility()` has run `GatherDynamicMeshElements()` and collected the primitive render data, it calls `SetupMeshPass()` for every view that needs rendering. `SetupMeshPass()` iterates over the passes defined in the `EMeshPass` enum, builds the matching FMeshPassProcessor with its registered create function, and finally runs `DispatchPassSetup()`, which fills in the required render information and creates the draw task for that pass on the render thread.
EMeshPass defines the passes:
```c++
DepthPass,
BasePass,
AnisotropyPass,
SkyPass,
SingleLayerWaterPass,
CSMShadowDepth,
Distortion,
Velocity,
TranslucentVelocity,
TranslucencyStandard,
TranslucencyAfterDOF,
TranslucencyAfterDOFModulate,
TranslucencyAll, /** Drawing all translucency, regardless of separate or standard. Used when drawing translucency outside of the main renderer, eg FRendererModule::DrawTile. */
LightmapDensity,
DebugViewMode, /** Any of EDebugViewShaderMode */
CustomDepth,
MobileBasePassCSM, /** Mobile base pass with CSM shading enabled */
MobileInverseOpacity, /** Mobile specific scene capture, Non-cached */
VirtualTexture,
DitheredLODFadingOutMaskPass
```
### FMeshPassProcessor
`FMeshPassProcessor` is the base class of mesh processors. Its main job is to set render state, bind shaders and uniform/struct buffers, and finally generate MeshDrawCommands and add them to the draw queue. Every mesh-related pass derives from this class; each derived class overrides `AddMeshBatch()`, which is usually called from the pass's MeshDrawCommand-generation function or from the callback passed to `DrawDynamicMeshPass()`, and implements the concrete processing function `Process()`.
#### DrawDynamicMeshPass
This function takes a callback whose logic is:
1. Create an FMeshPassProcessor from FScene, FSceneView, FMeshPassProcessorRenderState, EDepthDrawingMode, FMeshPassDrawListContext and similar variables.
2. For each valid view, call AddMeshBatch() to add MeshBatches to the FMeshPassProcessor.
#### AddMeshBatch
Its job is to add an FMeshBatch to a pass.
Main logic:
1. Check whether the batch should be drawn, then search for a valid FMaterial.
2. Look for the FMaterial in the FMaterialRenderProxy; if it is invalid, keep falling back to the parent until a valid one is found (the bottom of the chain is each shading model's default material).
3. Once a valid FMaterial is found, call TryAddMeshBatch().
#### TryAddMeshBatch
Collects BlendMode, MeshDrawingPolicy, RasterizerFillMode, RasterizerCullMode and the other required variables, then passes them to the processing function `Process()`. Inside `Process()`, the required shaders and render data (FMeshPassProcessorRenderState, FMeshDrawCommandSortKey, FMeshMaterialShaderElementData, ...) are obtained and `BuildMeshDrawCommands()` is called to create the MeshDrawCommands.
Taking FDepthPassMeshProcessor as an example, the main steps of `Process()` are: fetch the required shaders (Vertex, Hull, Domain, Pixel), initialize the MeshMaterial data, and call BuildMeshDrawCommands to build the draw commands. `Process()` is also a template function (it builds a MeshDrawCommand matching the rendering requirement); its EMeshPassFeatures parameter selects the vertex input stream type (Default, PositionOnly, PositionAndNormalOnly).
`FMeshPassProcessorRenderState` is the render-state set of a MeshPassProcessor. It stores the following (see the sketch after this list):
- FRHIBlendState* BlendState;
- FRHIDepthStencilState* DepthStencilState;
- FExclusiveDepthStencil::Type DepthStencilAccess;
- FRHIUniformBuffer* ViewUniformBuffer;
- FRHIUniformBuffer* InstancedViewUniformBuffer;
- FRHIUniformBuffer* ReflectionCaptureUniformBuffer;
- FRHIUniformBuffer* PassUniformBuffer;
- uint32 StencilRef;
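A minimal sketch of filling an FMeshPassProcessorRenderState before constructing a mesh pass processor; the API calls are quoted from memory of UE 4.27 and the chosen states are illustrative only:
```c++
FMeshPassProcessorRenderState DrawRenderState(View);
DrawRenderState.SetBlendState(TStaticBlendState<>::GetRHI());
DrawRenderState.SetDepthStencilState(TStaticDepthStencilState<true, CF_DepthNearOrEqual>::GetRHI());
DrawRenderState.SetDepthStencilAccess(FExclusiveDepthStencil::DepthWrite_StencilWrite);
DrawRenderState.SetPassUniformBuffer(PassUniformBuffer); // e.g. an opaque base pass uniform buffer created elsewhere
```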
### BuildMeshDrawCommands
The rough logic of BuildMeshDrawCommands() is:
1. Create an `FMeshDrawCommand` object (MDC below).
2. Set the MDC's stencil reference from the `StencilRef` stored in `FMeshPassProcessorRenderState`.
3. Create an `FGraphicsMinimalPipelineStateInitializer`, set PrimitiveType and ImmutableSamplerState, set the ShaderResource and ShaderIndex of each shader according to the PassShadersType template parameter (through the `FGraphicsMinimalPipelineStateInitializer` reference), set the MDC's RasterizerState, BlendState, DepthStencilState and DrawShadingRate, and set the MDC's PrimitiveIdStreamIndex through the VertexFactory.
4. Determine which shader types the PassShadersType template parameter contains, fetch the corresponding `FMeshDrawSingleShaderBindings`, and add `FShaderUniformBufferParameter`, `FViewUniformShaderParameters`, `FDistanceCullFadeUniformShaderParameters`, `FDitherUniformShaderParameters` and `FInstancedViewUniformShaderParameters` to it (the `FShaderUniformBufferParameter` binds the actual UniformBuffer inside the corresponding shader class).
5. Iterate over all FMeshBatchElements stored in the `FMeshBatch`: add the MDC built so far to `DrawListStorage` and take a reference to it, fetch its `FMeshDrawSingleShaderBindings` according to the PassShadersType template parameter, and add `FPrimitiveUniformShaderParameters` to it.
6. Finish building the current MDC and add it to the draw list of `DrawListContext`.
## How the scene relates to FMeshPassProcessor
FScene::AddPrimitive(UPrimitiveComponent* Primitive) creates the scene proxy for the primitive, computes the transform and bounds to build an `FCreateRenderThreadParameters` object, and finally enqueues the primitive scene info to the render thread.
UpdateAllPrimitiveSceneInfosForScenes() (ActorComponents) runs UpdateAllPrimitiveSceneInfos() on the render thread. The chain UpdateAllPrimitiveSceneInfos() => AddToScene() => AddStaticMeshes() => CacheMeshDrawCommands() iterates over every pass type, creates the corresponding `FMeshPassProcessor`, and then calls AddMeshBatch().
## MeshDraw and RDG
For the MeshDraw part (ignoring the shaders), take SingleLayerWater as an example:
- Build the FSingleLayerWaterPassMeshProcessor class.
- Set PassDrawRenderState in the constructor (CW_RGBA, BO_Add, BF_One, BF_InverseSourceAlpha).
- Override AddMeshBatch(): collect OverrideSettings, MeshFillMode, MeshCullMode and MaterialRenderProxy, then pass them into Process().
- Implement Process(): fetch the shaders, initialize ShaderElementData, compute the SortKey, then call BuildMeshDrawCommands() to build the MeshDrawCommands.
- Implement CreateSingleLayerWaterPassProcessor() and the matching FRegisterPassProcessorCreateFunction so the FSingleLayerWaterPassMeshProcessor can be created (see the sketch below).
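A hedged sketch of the registration pattern: `CreateMyPassProcessor` and `FMyPassMeshProcessor` are hypothetical stand-ins, while the engine's CreateSingleLayerWaterPassProcessor follows the same shape and registers itself against EMeshPass::SingleLayerWaterPass.
```c++
FMeshPassProcessor* CreateMyPassProcessor(
	const FScene* Scene,
	const FSceneView* InViewIfDynamicMeshCommand,
	FMeshPassDrawListContext* InDrawListContext)
{
	// Configure the render state once; the processor reuses it for every MeshBatch.
	FMeshPassProcessorRenderState PassDrawRenderState;
	PassDrawRenderState.SetBlendState(
		TStaticBlendState<CW_RGBA, BO_Add, BF_One, BF_InverseSourceAlpha>::GetRHI());

	return new(FMemStack::Get()) FMyPassMeshProcessor(
		Scene, InViewIfDynamicMeshCommand, PassDrawRenderState, InDrawListContext);
}

// A global FRegisterPassProcessorCreateFunction ties the create function to a shading path
// and an EMeshPass slot so SetupMeshPass() can find it.
FRegisterPassProcessorCreateFunction RegisterMyPass(
	&CreateMyPassProcessor,
	EShadingPath::Deferred,
	EMeshPass::SingleLayerWaterPass,
	EMeshPassFlags::MainView);
```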
The calling logic lives in FDeferredShadingSceneRenderer::RenderSingleLayerWaterInner:
- Get the GBuffer, bind the depth/stencil, and take a white dummy texture (WhiteDummy) from the RT pool.
- Iterate over the valid views and render:
	- Fill FSingleLayerWaterPassParameters with the two RTs above and FOpaqueBasePassUniformParameters.
	- Create an RDG pass; inside it, update the ViewUniformBuffer, build an FRDGParallelCommandListSet from the two uniform buffers above, and finally draw with View.ParallelMeshDrawCommandPasses[EMeshPass::SingleLayerWaterPass].DispatchDraw().
## Adjusting the draw to get backface culling
The goal is to render as non-double-sided and in ReverseCullMode, and to feed some extra data to the vertex factory:
- Mesh.bDisableBackfaceCulling
- Mesh.ReverseCulling
Override FSkeletalMeshSceneProxy::GetDynamicElementsSection():
```c++
ERasterizerCullMode FMeshPassProcessor::ComputeMeshCullMode(const FMeshBatch& Mesh, const FMaterial& InMaterialResource, const FMeshDrawingPolicyOverrideSettings& InOverrideSettings)
{
const bool bMaterialResourceIsTwoSided = InMaterialResource.IsTwoSided();
const bool bInTwoSidedOverride = !!(InOverrideSettings.MeshOverrideFlags & EDrawingPolicyOverrideFlags::TwoSided);
const bool bInReverseCullModeOverride = !!(InOverrideSettings.MeshOverrideFlags & EDrawingPolicyOverrideFlags::ReverseCullMode);
const bool bIsTwoSided = (bMaterialResourceIsTwoSided || bInTwoSidedOverride);
const bool bMeshRenderTwoSided = Mesh.bDisableBackfaceCulling || bIsTwoSided;
return bMeshRenderTwoSided ? CM_None : (bInReverseCullModeOverride ? CM_CCW : CM_CW);
}
```
## Modifying the ShaderModel
- Modify the material pins.
- Add the ShaderModel enum entry.
- Modify FPixelShaderInOut_MainPS() in BasePassPixelShader.usf.


@@ -0,0 +1,324 @@
## Preface
>RDG = Rendering Dependency Graph
RDG has two key components. One is FRDGBuilder, which creates the render graph's resources, adds passes and so on — in short, it builds the RenderGraph. The other is FRDGResource, the RenderGraph resource class from which all RDG resources derive.
Official introductory PPT: https://epicgames.ent.box.com/s/ul1h44ozs0t2850ug0hrohlzm53kxwrz
Because it loads slowly, I mirrored it to Youdao Note: http://note.youdao.com/noteshare?id=a7e2856ad141f44f6b48db6e95419920&sub=E5276AAD6DAA40409586C0552B8E163A
I also recommend reading:
https://papalqi.cn/index.php/2020/01/21/rendering-dependency-graph/
https://zhuanlan.zhihu.com/p/101149903
**This article also quotes several passages from the links above.**
**Notes**
1. For readability, type aliases such as `typedef FShaderDrawSymbols SHADER;` have been expanded back to the original type names.
2. These are notes written while learning, so mistakes are unavoidable; if you spot any, please point them out.
## Recommended code to study
- PostProcessTestImage.cpp
- GpuDebugRendering.cpp
- ShaderPrint.cpp — the implementation
- ShaderPrint.h — draw-function declarations
- ShaderPrintParameters.h — declarations of shader variables and resources
I recommend ShaderPrint: it is simple yet easy to extend into more debugging tools. Code in this article without an explicit source comes from ShaderPrint.
## Relevant headers
See RenderGraph.h for the official introduction to the RDG system.
- #include "RenderGraphDefinitions.h"
- #include "RenderGraphResources.h"
- #include "RenderGraphPass.h"
- #include "RenderGraphBuilder.h"
- #include "RenderGraphUtils.h"
- #include "ShaderParameterStruct.h"
- #include "ShaderParameterMacros.h"
## Declaring and binding resources
### Struct byte alignment
When declaring a parameter struct, byte alignment needs some attention. Quoting a passage from papalqi's blog:
>Of course there are a few simple things to watch for when setting this up. Because Unreal uses automatic 16-byte alignment, the order of the members actually matters, as in the reordering shown below. Another feature of the macro system is automatic alignment of shader data: Unreal uses platform-independent alignment rules so shader data stays portable.
>The main rule is that each member is aligned to the next power of two of its size, but only when it is larger than 4 bytes. For example:
>- pointers are 8-byte aligned (even on 32-bit platforms)
>- float, uint32 and int32 are 4-byte aligned
>- FVector2D and FIntPoint are 8-byte aligned
>- FVector and FVector4 are 16-byte aligned
>Author: papalqi; URL: https://papalqi.cn/index.php/2020/01/21/rendering-dependency-graph/
**So variables and resources should be ordered by their size:**
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesRDG-%E5%8F%98%E9%87%8F%E4%BD%8D%E5%AE%BD%E8%87%AA%E5%8A%A8%E5%AF%B9%E9%BD%90.png)
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesRDG-%E5%8F%98%E9%87%8F%E4%BD%8D%E5%AE%BD%E6%89%8B%E5%8A%A8%E5%AF%B9%E9%BD%90.png)
Aligning members manually by width avoids extra memory and bandwidth use; a quick sketch:
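A minimal sketch of a parameter struct ordered from largest to smallest member so the 16-byte alignment rules introduce no padding; the struct and member names are illustrative only:
```c++
BEGIN_SHADER_PARAMETER_STRUCT(FMyPassParameters, )
	SHADER_PARAMETER(FVector4, TintAndRadius)   // 16-byte aligned
	SHADER_PARAMETER(FVector2D, ViewportSize)   // 8-byte aligned
	SHADER_PARAMETER(float, Intensity)          // 4-byte aligned
	SHADER_PARAMETER(uint32, SampleCount)       // 4-byte aligned
END_SHADER_PARAMETER_STRUCT()
```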
### Resource initialization and binding
Resources are either RDG-managed or not. Below is part of the ShaderPrint code:
```
// Initialize graph managed resources
// Symbols buffer contains Count + 1 elements. The first element is only used as a counter.
FRDGBufferRef SymbolBuffer = GraphBuilder.CreateBuffer(FRDGBufferDesc::CreateStructuredDesc(sizeof(ShaderPrintItem), GetMaxSymbolCount() + 1), TEXT("ShaderPrintSymbolBuffer"));
FRDGBufferRef IndirectDispatchArgsBuffer = GraphBuilder.CreateBuffer(FRDGBufferDesc::CreateIndirectDesc(4), TEXT("ShaderPrintIndirectDispatchArgs"));
FRDGBufferRef IndirectDrawArgsBuffer = GraphBuilder.CreateBuffer(FRDGBufferDesc::CreateIndirectDesc(5), TEXT("ShaderPrintIndirectDrawArgs"));
// Non graph managed resources
FUniformBufferRef UniformBuffer = CreateUniformBuffer(View);
FShaderResourceViewRHIRef ValuesBuffer = View.ShaderPrintValueBuffer.SRV;
FTextureRHIRef FontTexture = GEngine->MiniFontTexture != nullptr ? GEngine->MiniFontTexture->Resource->TextureRHI : GSystemTextures.BlackDummy->GetRenderTargetItem().ShaderResourceTexture;
```
**RDG-managed resources** are created with GraphBuilder.CreateBuffer() from an FRDGBufferDesc; the returned ref is then passed as the first argument when the concrete view is created with GraphBuilder.CreateUAV(), CreateSRV() or CreateTexture().
**Resources not managed by RDG** only need to be defined and computed, then bound directly (creating the uniform buffer is covered below):
```
FShaderBuildIndirectDispatchArgsCS::FParameters* PassParameters = GraphBuilder.AllocParameters<FShaderBuildIndirectDispatchArgsCS::FParameters>();
PassParameters->UniformBufferParameters = UniformBuffer;
PassParameters->ValuesBuffer = ValuesBuffer;
PassParameters->RWSymbolsBuffer = GraphBuilder.CreateUAV(SymbolBuffer, EPixelFormat::PF_R32_UINT);
PassParameters->RWIndirectDispatchArgsBuffer = GraphBuilder.CreateUAV(IndirectDispatchArgsBuffer, EPixelFormat::PF_R32_UINT);
```
The code in GpuDebugRendering.cpp shows such values being bound directly:
```
//bIsBehindDepth is a bool variable set earlier
ShaderDrawVSPSParameters* PassParameters = GraphBuilder.AllocParameters<ShaderDrawVSPSParameters>();
PassParameters->ShaderDrawPSParameters.ColorScale = bIsBehindDepth ? 0.4f : 1.0f;
```
### Resource declaration macros
Here are the commonly used ones; they live in Runtime\RenderCore\Public\ShaderParameterMacros.h. There is also an RDG variant of each macro — resources declared with those must first be initialized with GraphBuilder.CreateBuffer() and then created with the matching GraphBuilder.CreateXXXX(). The header also contains several example snippets, omitted here for brevity.
#### Plain variables
```c++
SHADER_PARAMETER(float, MyScalar)
SHADER_PARAMETER(FMatrix, MyMatrix)
SHADER_PARAMETER_RDG_BUFFER(Buffer<float4>, MyBuffer)
```
#### Structs
In ShaderPrint, global struct declarations live in the header file. A custom GlobalShader references the global struct through `SHADER_PARAMETER_STRUCT_REF(FMyNestedStruct, MyStruct)`. A shader that uses a parameter struct must be tagged with `SHADER_USE_PARAMETER_STRUCT(FMyShaderClassCS, FGlobalShader);`.
```c++
//Declare a global parameter struct
BEGIN_GLOBAL_SHADER_PARAMETER_STRUCT(FMyParameterStruct, RENDERER_API)
END_GLOBAL_SHADER_PARAMETER_STRUCT()
IMPLEMENT_GLOBAL_SHADER_PARAMETER_STRUCT(FMyParameterStruct, "MyShaderBindingName");
```
```c++
//Reference a global parameter struct from another struct
BEGIN_GLOBAL_SHADER_PARAMETER_STRUCT(FGlobalViewParameters,)
SHADER_PARAMETER(FVector4, ViewSizeAndInvSize)
// ...
END_GLOBAL_SHADER_PARAMETER_STRUCT()
BEGIN_SHADER_PARAMETER_STRUCT(FOtherStruct)
SHADER_PARAMETER_STRUCT_REF(FMyNestedStruct, MyStruct)
END_SHADER_PARAMETER_STRUCT()
```
```c++
//Tag a shader class that uses the structured shader parameter API.
class FMyShaderClassCS : public FGlobalShader
{
DECLARE_GLOBAL_SHADER(FMyShaderClassCS);
SHADER_USE_PARAMETER_STRUCT(FMyShaderClassCS, FGlobalShader);
BEGIN_SHADER_PARAMETER_STRUCT(FParameters)
SHADER_PARAMETER(FMatrix, ViewToClip)
//...
END_SHADER_PARAMETER_STRUCT()
};
```
#### Arrays
```c++
SHADER_PARAMETER_ARRAY(float, MyScalarArray, [8])
SHADER_PARAMETER_ARRAY(FMatrix, MyMatrixArray, [2])
SHADER_PARAMETER_RDG_BUFFER_ARRAY(Buffer<float4>, MyArrayOfBuffers, [4])
```
#### Texture
```c++
SHADER_PARAMETER_TEXTURE(Texture2D, MyTexture)
SHADER_PARAMETER_TEXTURE_ARRAY(Texture2D, MyArrayOfTextures, [8])
```
#### SRV
```c++
SHADER_PARAMETER_SRV(Texture2D, MySRV)
SHADER_PARAMETER_SRV_ARRAY(Texture2D, MyArrayOfSRVs, [8])
SHADER_PARAMETER_RDG_BUFFER_SRV(Buffer<float4>, MySRV)
SHADER_PARAMETER_RDG_BUFFER_SRV_ARRAY(Buffer<float4>, MyArrayOfSRVs, [4])
```
#### UAV
```c++
SHADER_PARAMETER_UAV(Texture2D, MyUAV)
SHADER_PARAMETER_RDG_BUFFER_UAV(RWBuffer<float4>, MyUAV)
SHADER_PARAMETER_RDG_BUFFER_UAV_ARRAY(RWBuffer<float4>, MyArrayOfUAVs, [4])
```
#### Sampler
```c++
SHADER_PARAMETER_SAMPLER(SamplerState, MySampler)
SHADER_PARAMETER_SAMPLER_ARRAY(SamplerState, MyArrayOfSamplers, [8])
```
#### Not sure what this one is for
```c++
//Adds a render graph tracked buffer upload.
//Example:
SHADER_PARAMETER_RDG_BUFFER_UPLOAD(Buffer<float4>, MyBuffer)
```
```c++
BEGIN_SHADER_PARAMETER_STRUCT(ShaderDrawVSPSParameters, )
SHADER_PARAMETER_STRUCT_INCLUDE(FShaderDrawDebugVS::FParameters, ShaderDrawVSParameters)
SHADER_PARAMETER_STRUCT_INCLUDE(FShaderDrawDebugPS::FParameters, ShaderDrawPSParameters)
END_SHADER_PARAMETER_STRUCT()
```
## Setting parameters
First, the steps for setting a pass's parameters:
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesPassParameterSetup.png)
>When adding an RDG pass it must carry shader parameters — any shader parameters, such as uniform buffers, textures and so on. The struct holding all parameters must be allocated with GraphBuilder.AllocParameters(), because lambda execution is deferred and this guarantees the correct lifetime. Parameters are declared with macros, and the cleanest place to declare the struct is inline inside each pass's shader class.
>First tag the shader with SHADER_USE_PARAMETER_STRUCT(FYourShader, ShaderType) so it uses a parameter struct, then implement a struct wrapped in the FParameters macros that declares everything the pass needs — almost all of it via the new RDG macro family. Note that a uniform buffer needs one more level of indirection via STRUCT_REF; think of it as a struct nested inside the parameter struct.
```c++
FShaderDrawSymbols::FParameters* PassParameters = GraphBuilder.AllocParameters<FShaderDrawSymbols::FParameters>();
PassParameters->RenderTargets[0] = FRenderTargetBinding(OutputTexture, ERenderTargetLoadAction::ENoAction);
PassParameters->UniformBufferParameters = UniformBuffer;
PassParameters->MiniFontTexture = FontTexture;
PassParameters->SymbolsBuffer = GraphBuilder.CreateSRV(SymbolBuffer);
PassParameters->IndirectDrawArgsBuffer = IndirectDrawArgsBuffer;
```
For the UniformBuffer variable in the code above, ShaderPrint sets it up like this:
```
FUniformBufferRef UniformBuffer = CreateUniformBuffer(View);
```
```
typedef TUniformBufferRef<FUniformBufferParameters> FUniformBufferRef;
// Fill the uniform buffer parameters
void SetUniformBufferParameters(FViewInfo const& View, FUniformBufferParameters& OutParameters)
{
const float FontWidth = (float)FMath::Max(CVarFontSize.GetValueOnRenderThread(), 1) / (float)FMath::Max(View.UnconstrainedViewRect.Size().X, 1);
const float FontHeight = (float)FMath::Max(CVarFontSize.GetValueOnRenderThread(), 1) / (float)FMath::Max(View.UnconstrainedViewRect.Size().Y, 1);
const float SpaceWidth = (float)FMath::Max(CVarFontSpacingX.GetValueOnRenderThread(), 1) / (float)FMath::Max(View.UnconstrainedViewRect.Size().X, 1);
const float SpaceHeight = (float)FMath::Max(CVarFontSpacingY.GetValueOnRenderThread(), 1) / (float)FMath::Max(View.UnconstrainedViewRect.Size().Y, 1);
OutParameters.FontSize = FVector4(FontWidth, FontHeight, SpaceWidth + FontWidth, SpaceHeight + FontHeight);
OutParameters.MaxValueCount = GetMaxValueCount();
OutParameters.MaxSymbolCount = GetMaxSymbolCount();
}
// Return a uniform buffer with values filled and with single frame lifetime
FUniformBufferRef CreateUniformBuffer(FViewInfo const& View)
{
FUniformBufferParameters Parameters;
SetUniformBufferParameters(View, Parameters);
return FUniformBufferRef::CreateUniformBufferImmediate(Parameters, UniformBuffer_SingleFrame);
}
```
The creation steps are:
1. Create an instance of the struct declared earlier with the macros.
2. Assign the struct's members.
3. Wrap it in TUniformBufferRef and call CreateUniformBufferImmediate to obtain an FUniformBufferRef.
4. In the draw function, bind it to the corresponding FParameters member.
Plain values inside the uniform buffer are simply assigned directly.
### Creating buffers
Call GraphBuilder.CreateBuffer() to create a buffer; it returns an FRDGBufferRef. Its first argument, an FRDGBufferDesc, can be built in several ways: CreateIndirectDesc, CreateStructuredDesc and CreateBufferDesc. CreateIndirectDesc relates to RDG's IndirectDraw/Dispatch mechanism. For structs, use CreateStructuredDesc and declare the resource with the StructuredBuffer / RWStructuredBuffer macros. CreateBufferDesc requires computing the element size and count by hand. Roughly:
```
GraphBuilder.CreateBuffer(FRDGBufferDesc::CreateIndirectDesc(4), TEXT("BurleyIndirectDispatchArgs"));
GraphBuilder.CreateBuffer(FRDGBufferDesc::CreateStructuredDesc(sizeof(ShaderDrawDebugElement), GetMaxShaderDrawElementCount()), TEXT("ShaderDrawDataBuffer"));
GraphBuilder.CreateBuffer(FRDGBufferDesc::CreateBufferDesc(sizeof(uint32), 1), TEXT("HairDebugSampleCounter"));
```
### Binding SRVs and UAVs
After using an SRV/UAV macro, the corresponding buffer must be bound via GraphBuilder.CreateSRV() / GraphBuilder.CreateUAV(). Note that buffer UAVs and texture UAVs are handled separately.
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesReadingFromABufferUsingAnSRV.png)
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesBindingUAVsForPixelShaders.png)
**Buffer UAV**: create the buffer first, then call CreateUAV:
```
//Code from HairStrandsClusters.cpp
FRDGBufferRef GlobalRadiusScaleBuffer = GraphBuilder.CreateBuffer(FRDGBufferDesc::CreateBufferDesc(sizeof(float), ClusterData.ClusterCount), TEXT("HairGlobalRadiusScaleBuffer"));
Parameters->GlobalRadiusScaleBuffer = GraphBuilder.CreateUAV(GlobalRadiusScaleBuffer, PF_R32_FLOAT);
```
**Binding SRVs and UAVs from a Texture2D:**
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesCreateAUACForTexture.png)
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesCreateASRVForTexture.png)
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesPassParameter.png)
### Binding render targets
To use render targets, simply add a RENDER_TARGET_BINDING_SLOTS() line to the FParameters struct declaration.
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/Images/RenderTargetBindingsSlots.png)
After that the render targets can be bound.
**Binding a color render target**
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesBindingAColorRenderTarget.png)
When binding color render targets, pay attention to the array index.
**Binding the depth/stencil target**
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/Images/BindingDepthStencilTarget.png)
In that snippet the RenderTargets object inside FParameters is an FShadowMapRenderTargets-style class; its address is the same as the ColorTargets container inside the class, which is why that notation works.
```
class FShadowMapRenderTargets
{
public:
TArray<IPooledRenderTarget*, SceneRenderingAllocator> ColorTargets;
IPooledRenderTarget* DepthTarget;
}
```
### Binding resources not owned by the current graph pass
>Note that a resource used inside a graph pass may not have been created by the graph; in that case use GraphBuilder.RegisterExternalBuffer/Texture to turn a pooled RT or RHI buffer into an RDG resource. Conversely, to turn an RDG resource back into a pooled RT or RHI buffer, use GraphBuilder.QueueTextureExtraction / QueueBufferExtraction. These two pairs could well have been called ImportResource and ExportResource. See the figures below.
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesRegistration.png)
>Check out GRenderTargetPool.CreateUntrackedElement() to get a TRefCountPtr<IPooledRenderTarget> if you need to register something other than an RHI resource (for instance the very old FRenderTarget).
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/imagesExtractionQueries.png)
## AddPass
>GraphBuilder.AddPass() mainly configures pipeline state for deferred execution — for example, setting up a PSO with an FGraphicsPipelineStateInitializer and calling the RHI draw APIs, or calling SetComputeShader() and dispatching compute. Nothing is actually executed at this point; the real execution happens when GraphBuilder.Execute() is called after all AddPass calls. More importantly, SetShaderParameters() is also called here. That function is a UE wrapper: since one pass has only one AllocParameters, the struct holds uniform buffers, SRVs, UAVs and everything else. In the old flow you wrote your own Shader.SetSRV/SetUAV/SetUniform helpers inside the shader class; now you put all parameters in one struct and set them inside the graph pass — RDG detects each member and its type and sets the UAV/SRV/uniform automatically.
```
GraphBuilder.AddPass(
RDG_EVENT_NAME("DrawSymbols"),
PassParameters,
ERDGPassFlags::Raster,
[VertexShader, PixelShader, PassParameters](FRHICommandListImmediate& RHICmdListImmediate)
{
FGraphicsPipelineStateInitializer GraphicsPSOInit;
RHICmdListImmediate.ApplyCachedRenderTargets(GraphicsPSOInit);
GraphicsPSOInit.DepthStencilState = TStaticDepthStencilState<false, CF_Always>::GetRHI();
GraphicsPSOInit.BlendState = TStaticBlendState<CW_RGBA, BO_Add, BF_One, BF_InverseSourceAlpha, BO_Add, BF_Zero, BF_One>::GetRHI();
GraphicsPSOInit.RasterizerState = TStaticRasterizerState<>::GetRHI();
GraphicsPSOInit.PrimitiveType = PT_TriangleList;
GraphicsPSOInit.BoundShaderState.VertexDeclarationRHI = GetVertexDeclarationFVector4();
GraphicsPSOInit.BoundShaderState.VertexShaderRHI = VertexShader.GetVertexShader();
GraphicsPSOInit.BoundShaderState.PixelShaderRHI = PixelShader.GetPixelShader();
SetGraphicsPipelineState(RHICmdListImmediate, GraphicsPSOInit);
SetShaderParameters(RHICmdListImmediate, VertexShader, VertexShader.GetVertexShader(), *PassParameters);
SetShaderParameters(RHICmdListImmediate, PixelShader, PixelShader.GetPixelShader(), *PassParameters);
RHICmdListImmediate.DrawIndexedPrimitiveIndirect(GTwoTrianglesIndexBuffer.IndexBufferRHI, PassParameters->IndirectDrawArgsBuffer->GetIndirectRHICallBuffer(), 0);
});
```


@@ -0,0 +1,295 @@
## Preface
Getting a ComputeShader running through RDG from a plugin only took me a few days, but the PixelShader was much more troublesome — nothing I tried would draw to the RT. In the end I got it working by adapting the DrawFullscreenPixelShader code.
Also note that invoking a PixelShader is very similar to invoking a traditional GlobalShader.
## Setting up the shader virtual directory
I forgot to mention this earlier, so here it is: add the following code to the plugin module's startup function:
```
void FBRPluginsModule::StartupModule()
{
FString PluginShaderDir = FPaths::Combine(IPluginManager::Get().FindPlugin(TEXT("BRPlugins"))->GetBaseDir(), TEXT("Shaders"));
AddShaderSourceDirectoryMapping(TEXT("/BRPlugins"), PluginShaderDir);
}
```
After that, the virtual directory can be used when defining shaders:
```
IMPLEMENT_GLOBAL_SHADER(FSimpleRDGComputeShader, "/BRPlugins/Private/SimpleComputeShader.usf", "MainCS", SF_Compute);
```
## Reference examples
I don't have anything great to recommend here, but searching for ERDGPassFlags::Raster will turn up code where RDG invokes a PixelShader.
## Drawing with DrawFullscreenPixelShader
RDG already wraps a draw helper, DrawFullscreenPixelShader(), which is handy for testing: just call it inside the lambda of GraphBuilder.AddPass(). However,
the vertex format it uses comes from the common resources (CommonRenderResources.h): GFilterVertexDeclaration.VertexDeclarationRHI, GScreenRectangleVertexBuffer.VertexBufferRHI and GScreenRectangleIndexBuffer.IndexBufferRHI.
```
void FScreenRectangleVertexBuffer::InitRHI()
{
TResourceArray<FFilterVertex, VERTEXBUFFER_ALIGNMENT> Vertices;
Vertices.SetNumUninitialized(6);
Vertices[0].Position = FVector4(1, 1, 0, 1);
Vertices[0].UV = FVector2D(1, 1);
Vertices[1].Position = FVector4(0, 1, 0, 1);
Vertices[1].UV = FVector2D(0, 1);
Vertices[2].Position = FVector4(1, 0, 0, 1);
Vertices[2].UV = FVector2D(1, 0);
Vertices[3].Position = FVector4(0, 0, 0, 1);
Vertices[3].UV = FVector2D(0, 0);
//The final two vertices are used for the triangle optimization (a single triangle spans the entire viewport )
Vertices[4].Position = FVector4(-1, 1, 0, 1);
Vertices[4].UV = FVector2D(-1, 1);
Vertices[5].Position = FVector4(1, -1, 0, 1);
Vertices[5].UV = FVector2D(1, -1);
// Create vertex buffer. Fill buffer with initial data upon creation
FRHIResourceCreateInfo CreateInfo(&Vertices);
VertexBufferRHI = RHICreateVertexBuffer(Vertices.GetResourceDataSize(), BUF_Static, CreateInfo);
}
```
The vertex shader used by DrawFullscreenPixelShader() is FScreenVertexShaderVS, whose usf is FullscreenVertexShader.usf. The code:
```
#include "../Common.ush"
void MainVS(
float2 InPosition : ATTRIBUTE0,
float2 InUV : ATTRIBUTE1, // TODO: kill
out float4 Position : SV_POSITION)
{
Position = float4(InPosition.x * 2.0 - 1.0, 1.0 - 2.0 * InPosition.y, 0, 1);
}
```
A problem is visible here: the pixel shader cannot obtain UV coordinates, so what DrawFullscreenPixelShader() can do is quite limited. That is why my example defines a custom vertex format instead.
## CommonRenderResources
UE4 already provides several basic vertex declarations in RenderCore\Public\CommonRenderResources.h.
The shader inputs for GEmptyVertexDeclaration.VertexDeclarationRHI are:
```
in uint InstanceId : SV_InstanceID,
in uint VertexId : SV_VertexID,
```
The shader inputs for GFilterVertexDeclaration.VertexDeclarationRHI are:
```
in float4 InPosition : ATTRIBUTE0,
in float2 InUV : ATTRIBUTE1,
```
The matching vertex and index buffers are TGlobalResource<FScreenRectangleVertexBuffer> GScreenRectangleVertexBuffer and TGlobalResource<FScreenRectangleIndexBuffer> GScreenRectangleIndexBuffer.
If you want to use FScreenRectangleVertexBuffer when invoking a PixelShader, you need to remap the UVs from (-1,1) to (0,1), because the UVs defined in FScreenRectangleVertexBuffer span (-1,1).
## Passing in and binding render targets
Render targets that are passed in can all be declared as Texture2D, for example:
```
BEGIN_SHADER_PARAMETER_STRUCT(FParameters, )
SHADER_PARAMETER_STRUCT_REF(FSimpleUniformStructParameters, SimpleUniformStruct)
SHADER_PARAMETER_TEXTURE(Texture2D, TextureVal)
SHADER_PARAMETER_SAMPLER(SamplerState, TextureSampler)
SHADER_PARAMETER(FVector4, SimpleColor)
RENDER_TARGET_BINDING_SLOTS()
END_SHADER_PARAMETER_STRUCT()
```
To bind them, add RENDER_TARGET_BINDING_SLOTS() to the declaration macro above, then bind when setting the parameters:
```
FSimpleRDGPixelShader::FParameters *Parameters = GraphBuilder.AllocParameters<FSimpleRDGPixelShader::FParameters>();
Parameters->RenderTargets[0] = FRenderTargetBinding(RDGRenderTarget, ERenderTargetLoadAction::ENoAction);
```
The depth/stencil target can also be bound:
```
Parameters->RenderTargets.DepthStencil = FDepthStencilBinding(OutDepthTexture,ERenderTargetLoadAction::ELoad,ERenderTargetLoadAction::ELoad,FExclusiveDepthStencil::DepthNop_StencilWrite);
```
The matching outputs in the USF are:
```
out float4 OutColor : SV_Target0,
out float OutDepth : SV_Depth
```
If multiple render targets are bound, the outputs increment as SV_Target0, SV_Target1, SV_Target2 and so on, as sketched below.
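A minimal sketch of binding several color targets; `ColorTextureA` and `ColorTextureB` are placeholder FRDGTextureRef variables created elsewhere with GraphBuilder.CreateTexture():
```c++
Parameters->RenderTargets[0] = FRenderTargetBinding(ColorTextureA, ERenderTargetLoadAction::ENoAction); // SV_Target0
Parameters->RenderTargets[1] = FRenderTargetBinding(ColorTextureB, ERenderTargetLoadAction::ENoAction); // SV_Target1
```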
## Resource cleanup
My example has only a single pass, so I did not use these two functions:
```
ValidateShaderParameters(PixelShader, Parameters);
ClearUnusedGraphResources(PixelShader, Parameters);
```
## RDGPixelDraw
Straight to the code:
```
void RDGDraw(FRHICommandListImmediate &RHIImmCmdList, FTexture2DRHIRef RenderTargetRHI, FSimpleShaderParameter InParameter, const FLinearColor InColor, FTexture2DRHIRef InTexture)
{
check(IsInRenderingThread());
//Create PooledRenderTarget
FPooledRenderTargetDesc RenderTargetDesc = FPooledRenderTargetDesc::Create2DDesc(RenderTargetRHI->GetSizeXY(),RenderTargetRHI->GetFormat(), FClearValueBinding::Black, TexCreate_None, TexCreate_RenderTargetable | TexCreate_ShaderResource | TexCreate_UAV, false);
TRefCountPtr<IPooledRenderTarget> PooledRenderTarget;
//RDG Begin
FRDGBuilder GraphBuilder(RHIImmCmdList);
FRDGTextureRef RDGRenderTarget = GraphBuilder.CreateTexture(RenderTargetDesc, TEXT("RDGRenderTarget"));
//Setup Parameters
FSimpleUniformStructParameters StructParameters;
StructParameters.Color1 = InParameter.Color1;
StructParameters.Color2 = InParameter.Color2;
StructParameters.Color3 = InParameter.Color3;
StructParameters.Color4 = InParameter.Color4;
StructParameters.ColorIndex = InParameter.ColorIndex;
FSimpleRDGPixelShader::FParameters *Parameters = GraphBuilder.AllocParameters<FSimpleRDGPixelShader::FParameters>();
Parameters->TextureVal = InTexture;
Parameters->TextureSampler = TStaticSamplerState<SF_Trilinear, AM_Clamp, AM_Clamp, AM_Clamp>::GetRHI();
Parameters->SimpleColor = InColor;
Parameters->SimpleUniformStruct = TUniformBufferRef<FSimpleUniformStructParameters>::CreateUniformBufferImmediate(StructParameters, UniformBuffer_SingleFrame);
Parameters->RenderTargets[0] = FRenderTargetBinding(RDGRenderTarget, ERenderTargetLoadAction::ENoAction);
const ERHIFeatureLevel::Type FeatureLevel = GMaxRHIFeatureLevel; //ERHIFeatureLevel::SM5
FGlobalShaderMap *GlobalShaderMap = GetGlobalShaderMap(FeatureLevel);
TShaderMapRef<FSimpleRDGVertexShader> VertexShader(GlobalShaderMap);
TShaderMapRef<FSimpleRDGPixelShader> PixelShader(GlobalShaderMap);
//ValidateShaderParameters(PixelShader, Parameters);
//ClearUnusedGraphResources(PixelShader, Parameters);
GraphBuilder.AddPass(
RDG_EVENT_NAME("RDGDraw"),
Parameters,
ERDGPassFlags::Raster,
[Parameters, VertexShader, PixelShader, GlobalShaderMap](FRHICommandList &RHICmdList) {
FRHITexture2D *RT = Parameters->RenderTargets[0].GetTexture()->GetRHI()->GetTexture2D();
RHICmdList.SetViewport(0, 0, 0.0f, RT->GetSizeX(), RT->GetSizeY(), 1.0f);
FGraphicsPipelineStateInitializer GraphicsPSOInit;
RHICmdList.ApplyCachedRenderTargets(GraphicsPSOInit);
GraphicsPSOInit.DepthStencilState = TStaticDepthStencilState<false, CF_Always>::GetRHI();
GraphicsPSOInit.BlendState = TStaticBlendState<>::GetRHI();
GraphicsPSOInit.RasterizerState = TStaticRasterizerState<>::GetRHI();
GraphicsPSOInit.PrimitiveType = PT_TriangleList;
GraphicsPSOInit.BoundShaderState.VertexDeclarationRHI = GTextureVertexDeclaration.VertexDeclarationRHI;
GraphicsPSOInit.BoundShaderState.VertexShaderRHI = VertexShader.GetVertexShader();
GraphicsPSOInit.BoundShaderState.PixelShaderRHI = PixelShader.GetPixelShader();
SetGraphicsPipelineState(RHICmdList, GraphicsPSOInit);
RHICmdList.SetStencilRef(0);
SetShaderParameters(RHICmdList, PixelShader, PixelShader.GetPixelShader(), *Parameters);
RHICmdList.SetStreamSource(0, GRectangleVertexBuffer.VertexBufferRHI, 0);
RHICmdList.DrawIndexedPrimitive(
GRectangleIndexBuffer.IndexBufferRHI,
/*BaseVertexIndex=*/0,
/*MinIndex=*/0,
/*NumVertices=*/4,
/*StartIndex=*/0,
/*NumPrimitives=*/2,
/*NumInstances=*/1);
});
GraphBuilder.QueueTextureExtraction(RDGRenderTarget, &PooledRenderTarget);
GraphBuilder.Execute();
//Copy Result To RenderTarget Asset
RHIImmCmdList.CopyTexture(PooledRenderTarget->GetRenderTargetItem().ShaderResourceTexture, RenderTargetRHI->GetTexture2D(), FRHICopyTextureInfo());
}
```
Invoking it works the same way as the ComputeShader part earlier, so I will not repeat it here; see my plugin for details. A sketch of what such an entry point might look like follows.
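A hypothetical sketch mirroring the compute entry point shown in the next article; the function name `UseRDGDraw` and the namespace/argument layout of `RDGDraw` are assumptions, not necessarily the plugin's actual API:
```c++
void USimpleRenderingExampleBlueprintLibrary::UseRDGDraw(const UObject* WorldContextObject,
	UTextureRenderTarget2D* OutputRenderTarget, FSimpleShaderParameter Parameter,
	FLinearColor Color, UTexture2D* Texture)
{
	check(IsInGameThread());
	// Resolve the RHI resources on the game thread, then enqueue the render-thread work.
	FTexture2DRHIRef RenderTargetRHI = OutputRenderTarget->GameThread_GetRenderTargetResource()->GetRenderTargetTexture();
	FTexture2DRHIRef TextureRHI = Texture->Resource->TextureRHI->GetTexture2D();
	ENQUEUE_RENDER_COMMAND(CaptureCommand)
	(
		[RenderTargetRHI, Parameter, Color, TextureRHI](FRHICommandListImmediate& RHICmdList)
		{
			SimpleRenderingExample::RDGDraw(RHICmdList, RenderTargetRHI, Parameter, Color, TextureRHI);
		});
}
```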
### Custom vertex format
```
struct FTextureVertex
{
FVector4 Position;
FVector2D UV;
};
class FRectangleVertexBuffer : public FVertexBuffer
{
public:
/** Initialize the RHI for this rendering resource */
void InitRHI() override
{
TResourceArray<FTextureVertex, VERTEXBUFFER_ALIGNMENT> Vertices;
Vertices.SetNumUninitialized(6);
Vertices[0].Position = FVector4(1, 1, 0, 1);
Vertices[0].UV = FVector2D(1, 1);
Vertices[1].Position = FVector4(-1, 1, 0, 1);
Vertices[1].UV = FVector2D(0, 1);
Vertices[2].Position = FVector4(1, -1, 0, 1);
Vertices[2].UV = FVector2D(1, 0);
Vertices[3].Position = FVector4(-1, -1, 0, 1);
Vertices[3].UV = FVector2D(0, 0);
//The final two vertices are used for the triangle optimization (a single triangle spans the entire viewport )
Vertices[4].Position = FVector4(-1, 1, 0, 1);
Vertices[4].UV = FVector2D(-1, 1);
Vertices[5].Position = FVector4(1, -1, 0, 1);
Vertices[5].UV = FVector2D(1, -1);
// Create vertex buffer. Fill buffer with initial data upon creation
FRHIResourceCreateInfo CreateInfo(&Vertices);
VertexBufferRHI = RHICreateVertexBuffer(Vertices.GetResourceDataSize(), BUF_Static, CreateInfo);
}
};
class FRectangleIndexBuffer : public FIndexBuffer
{
public:
/** Initialize the RHI for this rendering resource */
void InitRHI() override
{
// Indices 0 - 5 are used for rendering a quad. Indices 6 - 8 are used for triangle optimization.
const uint16 Indices[] = {0, 1, 2, 2, 1, 3, 0, 4, 5};
TResourceArray<uint16, INDEXBUFFER_ALIGNMENT> IndexBuffer;
uint32 NumIndices = UE_ARRAY_COUNT(Indices);
IndexBuffer.AddUninitialized(NumIndices);
FMemory::Memcpy(IndexBuffer.GetData(), Indices, NumIndices * sizeof(uint16));
// Create index buffer. Fill buffer with initial data upon creation
FRHIResourceCreateInfo CreateInfo(&IndexBuffer);
IndexBufferRHI = RHICreateIndexBuffer(sizeof(uint16), IndexBuffer.GetResourceDataSize(), BUF_Static, CreateInfo);
}
};
class FTextureVertexDeclaration : public FRenderResource
{
public:
FVertexDeclarationRHIRef VertexDeclarationRHI;
virtual void InitRHI() override
{
FVertexDeclarationElementList Elements;
uint32 Stride = sizeof(FTextureVertex);
Elements.Add(FVertexElement(0, STRUCT_OFFSET(FTextureVertex, Position), VET_Float4, 0, Stride));
Elements.Add(FVertexElement(0, STRUCT_OFFSET(FTextureVertex, UV), VET_Float2, 1, Stride));
VertexDeclarationRHI = RHICreateVertexDeclaration(Elements);
}
virtual void ReleaseRHI() override
{
VertexDeclarationRHI.SafeRelease();
}
};
/*
* Vertex Resource Declaration
*/
extern TGlobalResource<FTextureVertexDeclaration> GTextureVertexDeclaration;
extern TGlobalResource<FRectangleVertexBuffer> GRectangleVertexBuffer;
extern TGlobalResource<FRectangleIndexBuffer> GRectangleIndexBuffer;
```
```
TGlobalResource<FTextureVertexDeclaration> GTextureVertexDeclaration;
TGlobalResource<FRectangleVertexBuffer> GRectangleVertexBuffer;
TGlobalResource<FRectangleIndexBuffer> GRectangleIndexBuffer;
```


@@ -0,0 +1,249 @@
## Preface
UE4's RDG (Rendering Dependency Graph) framework is essentially another layer wrapped around the existing rendering framework; its main design goal is better management of each resource's lifetime. UE4's own render pipeline has already been ported to RDG, but many less central modules and third-party plugins have not, so the framework is still well worth learning.
The previous article gave an overview of how to use RDG. After reading that material and the official PPT, plus some of the render-pipeline code, you can basically start writing shaders. But as a hobbyist whose day job has nothing to do with UE4, working on a 2014-era machine, writing shaders by modifying the render pipeline is not realistic — a three-hour compile is too painful. Google and the Epic forums also had nothing about using RDG from a plugin, so I spent some time exploring it and wrote this article. **Since I do not develop UE4 full time, my time and energy are limited and mistakes are unavoidable — please bear with me.** The code lives in my plugin; if you find it useful, please give it a star. It is in SimpleRDG.cpp and SimpleRenderingExample.h under Rendering.
[https://github.com/blueroseslol/BRPlugins](https://github.com/blueroseslol/BRPlugins)
Let's start with the ComputeShader, since it is the simpler case.
## Reference files
A few recommended reference files — GenerateMips is strongly recommended because it contains both an RDG compute example and a GlobalShader example:
- GenerateMips.cpp
- ShaderPrint.cpp
- PostProcessCombineLUTs.cpp
Searching for ERDGPassFlags::Compute will turn up the code where RDG dispatches a ComputeShader.
## Importing and exporting data with RDG
RDG imports data with RegisterExternalBuffer/Texture and extracts render results with GraphBuilder.QueueTextureExtraction / QueueBufferExtraction, which requires a TRefCountPtr<IPooledRenderTarget>. Trying to copy the contents of an FRDGTextureRef out directly with RHIImmCmdList.CopyTexture trips an access-forbidden assertion. The rough import/extract pattern is sketched below.
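A minimal sketch of the import/extract pattern, assuming `PooledInput` is an existing TRefCountPtr<IPooledRenderTarget> and `OutputDesc` was created elsewhere; the extracted target becomes valid only after GraphBuilder.Execute():
```c++
TRefCountPtr<IPooledRenderTarget> PooledOutput;

FRDGBuilder GraphBuilder(RHIImmCmdList);
// Import an externally owned pooled target into the graph.
FRDGTextureRef InputTexture  = GraphBuilder.RegisterExternalTexture(PooledInput, TEXT("ImportedInput"));
FRDGTextureRef OutputTexture = GraphBuilder.CreateTexture(OutputDesc, TEXT("RDGOutput"));

// ... AddPass() calls that read InputTexture and write OutputTexture ...

// Queue the result for extraction, then execute the graph.
GraphBuilder.QueueTextureExtraction(OutputTexture, &PooledOutput);
GraphBuilder.Execute();
// PooledOutput->GetRenderTargetItem().ShaderResourceTexture is now usable on the RHI side.
```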
## UAV
A UAV stores the ComputeShader's output. Creating one takes the following steps.
### Declare the shader parameter with a macro
```
SHADER_PARAMETER_UAV(Texture2D, MyUAV)
SHADER_PARAMETER_RDG_BUFFER_UAV(RWBuffer<float4>, MyUAV)
```
### Create the resource with the matching function and bind it to the parameter
SHADER_PARAMETER_UAV corresponds to CreateTexture(); SHADER_PARAMETER_RDG_BUFFER_UAV corresponds to CreateUAV(). (I have not tried the former.)
The UAV can be created either from an FRDGTextureUAVDesc or from a buffer; a minimal sketch:
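A minimal sketch of creating an RDG texture UAV and binding it to a pass parameter, matching the pattern used in the full compute example later in this article:
```c++
FRDGTextureRef RDGRenderTarget = GraphBuilder.CreateTexture(RenderTargetDesc, TEXT("RDGRenderTarget"));
PassParameters->OutTexture = GraphBuilder.CreateUAV(FRDGTextureUAVDesc(RDGRenderTarget));
```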
## Creating and using SRVs
A UAV cannot be read directly in a shader, so an SRV must be created for reading. I have not tested SRVs myself, so here is part of the FGenerateMips code:
```c++
TSharedPtr<FGenerateMipsStruct> FGenerateMips::SetupTexture(FRHITexture* InTexture, const FGenerateMipsParams& InParams)
{
check(InTexture->GetTexture2D());
TSharedPtr<FGenerateMipsStruct> GenMipsStruct = MakeShareable(new FGenerateMipsStruct());
FPooledRenderTargetDesc Desc;
Desc.Extent.X = InTexture->GetSizeXYZ().X;
Desc.Extent.Y = InTexture->GetSizeXYZ().Y;
Desc.TargetableFlags = TexCreate_ShaderResource | TexCreate_RenderTargetable | TexCreate_UAV;
Desc.Format = InTexture->GetFormat();
Desc.NumMips = InTexture->GetNumMips();;
Desc.DebugName = TEXT("GenerateMipPooledRTTexture");
//Create the Pooled Render Target Resource from the input texture
FRHIResourceCreateInfo CreateInfo(Desc.DebugName);
//Initialise a new render target texture for creating an RDG Texture
FSceneRenderTargetItem RenderTexture;
//Update all the RenderTexture info
RenderTexture.TargetableTexture = InTexture;
RenderTexture.ShaderResourceTexture = InTexture;
RenderTexture.SRVs.Empty(Desc.NumMips);
RenderTexture.MipUAVs.Empty(Desc.NumMips);
for (uint8 MipLevel = 0; MipLevel < Desc.NumMips; MipLevel++)
{
FRHITextureSRVCreateInfo SRVDesc;
SRVDesc.MipLevel = MipLevel;
RenderTexture.SRVs.Emplace(SRVDesc, RHICreateShaderResourceView((FTexture2DRHIRef&)InTexture, SRVDesc));
RenderTexture.MipUAVs.Add(RHICreateUnorderedAccessView(InTexture, MipLevel));
}
RHIBindDebugLabelName(RenderTexture.TargetableTexture, Desc.DebugName);
RenderTexture.UAV = RenderTexture.MipUAVs[0];
//Create the RenderTarget from the PooledRenderTarget Desc and the new RenderTexture object.
GRenderTargetPool.CreateUntrackedElement(Desc, GenMipsStruct->RenderTarget, RenderTexture);
//Specify the Sampler details based on the input.
GenMipsStruct->Sampler.Filter = InParams.Filter;
GenMipsStruct->Sampler.AddressU = InParams.AddressU;
GenMipsStruct->Sampler.AddressV = InParams.AddressV;
GenMipsStruct->Sampler.AddressW = InParams.AddressW;
return GenMipsStruct;
}
```
```c++
void FGenerateMips::Compute(FRHICommandListImmediate& RHIImmCmdList, FRHITexture* InTexture, TSharedPtr<FGenerateMipsStruct> GenMipsStruct)
{
check(IsInRenderingThread());
//Currently only 2D textures supported
check(InTexture->GetTexture2D());
//Ensure the generate mips structure has been initialised correctly.
check(GenMipsStruct);
//Begin rendergraph for executing the compute shader
FRDGBuilder GraphBuilder(RHIImmCmdList);
FRDGTextureRef GraphTexture = GraphBuilder.RegisterExternalTexture(GenMipsStruct->RenderTarget, TEXT("GenerateMipsGraphTexture"));
···
FRDGTextureSRVDesc SRVDesc = FRDGTextureSRVDesc::CreateForMipLevel(GraphTexture, MipLevel - 1);
FGenerateMipsCS::FParameters* PassParameters = GraphBuilder.AllocParameters<FGenerateMipsCS::FParameters>();
PassParameters->MipInSRV = GraphBuilder.CreateSRV(SRVDesc);
}
```
You can see that CreateUntrackedElement() first creates the IPooledRenderTarget, RegisterExternalTexture then registers it with the graph, and finally CreateSRV creates the SRV.
Besides CreateUntrackedElement(), GRenderTargetPool also has FindFreeElement(), which is better suited when the RDG setup spans multiple passes.
```
FRDGTextureRef GraphTexture = GraphBuilder.RegisterExternalTexture(GenMipsStruct->RenderTarget, TEXT("GenerateMipsGraphTexture"));
FRDGTextureSRVDesc SRVDesc = FRDGTextureSRVDesc::CreateForMipLevel(GraphTexture, MipLevel - 1);
FGenerateMipsCS::FParameters* PassParameters = GraphBuilder.AllocParameters<FGenerateMipsCS::FParameters>();
PassParameters->MipInSRV = GraphBuilder.CreateSRV(SRVDesc);
```
### Reading the SRV in the shader
Reading an SRV works like reading a texture: first create a sampler,
```
SHADER_PARAMETER_SAMPLER(SamplerState, MipSampler)
PassParameters->MipSampler = RHIImmCmdList.CreateSamplerState(GenMipsStruct->Sampler);
```
after which it can be sampled just like a Texture2D:
```
#pragma once
#include "Common.ush"
#include "GammaCorrectionCommon.ush"
float2 TexelSize;
Texture2D MipInSRV;
#if GENMIPS_SRGB
RWTexture2D<half4> MipOutUAV;
#else
RWTexture2D<float4> MipOutUAV;
#endif
SamplerState MipSampler;
[numthreads(8, 8, 1)]
void MainCS(uint3 DT_ID : SV_DispatchThreadID)
{
float2 UV = TexelSize * (DT_ID.xy + 0.5f);
#if GENMIPS_SRGB
half4 outColor = MipInSRV.SampleLevel(MipSampler, UV, 0);
outColor = half4(LinearToSrgb(outColor.xyz), outColor.w);
#else
float4 outColor = MipInSRV.SampleLevel(MipSampler, UV, 0);
#endif
#if GENMIPS_SWIZZLE
MipOutUAV[DT_ID.xy] = outColor.zyxw;
#else
MipOutUAV[DT_ID.xy] = outColor;
#endif
}
```
## Full RDGCompute code
Here is my example; it is simple and commented, so I won't explain it in detail.
```c++
void RDGCompute(FRHICommandListImmediate &RHIImmCmdList, FTexture2DRHIRef RenderTargetRHI, FSimpleShaderParameter InParameter)
{
check(IsInRenderingThread());
//Create PooledRenderTarget
FPooledRenderTargetDesc RenderTargetDesc = FPooledRenderTargetDesc::Create2DDesc(RenderTargetRHI->GetSizeXY(),RenderTargetRHI->GetFormat(), FClearValueBinding::Black, TexCreate_None, TexCreate_RenderTargetable | TexCreate_ShaderResource | TexCreate_UAV, false);
TRefCountPtr<IPooledRenderTarget> PooledRenderTarget;
//RDG Begin
FRDGBuilder GraphBuilder(RHIImmCmdList);
FRDGTextureRef RDGRenderTarget = GraphBuilder.CreateTexture(RenderTargetDesc, TEXT("RDGRenderTarget"));
//Setup Parameters
FSimpleUniformStructParameters StructParameters;
StructParameters.Color1 = InParameter.Color1;
StructParameters.Color2 = InParameter.Color2;
StructParameters.Color3 = InParameter.Color3;
StructParameters.Color4 = InParameter.Color4;
StructParameters.ColorIndex = InParameter.ColorIndex;
FSimpleRDGComputeShader::FParameters *Parameters = GraphBuilder.AllocParameters<FSimpleRDGComputeShader::FParameters>();
FRDGTextureUAVDesc UAVDesc(RDGRenderTarget);
Parameters->SimpleUniformStruct = TUniformBufferRef<FSimpleUniformStructParameters>::CreateUniformBufferImmediate(StructParameters, UniformBuffer_SingleFrame);
Parameters->OutTexture = GraphBuilder.CreateUAV(UAVDesc);
//Get ComputeShader From GlobalShaderMap
const ERHIFeatureLevel::Type FeatureLevel = GMaxRHIFeatureLevel; //ERHIFeatureLevel::SM5
FGlobalShaderMap *GlobalShaderMap = GetGlobalShaderMap(FeatureLevel);
TShaderMapRef<FSimpleRDGComputeShader> ComputeShader(GlobalShaderMap);
//Compute Thread Group Count
FIntVector ThreadGroupCount(
RenderTargetRHI->GetSizeX() / 32,
RenderTargetRHI->GetSizeY() / 32,
1);
//ValidateShaderParameters(PixelShader, Parameters);
//ClearUnusedGraphResources(PixelShader, Parameters);
GraphBuilder.AddPass(
RDG_EVENT_NAME("RDGCompute"),
Parameters,
ERDGPassFlags::Compute,
[Parameters, ComputeShader, ThreadGroupCount](FRHICommandList &RHICmdList) {
FComputeShaderUtils::Dispatch(RHICmdList, ComputeShader, *Parameters, ThreadGroupCount);
});
GraphBuilder.QueueTextureExtraction(RDGRenderTarget, &PooledRenderTarget);
GraphBuilder.Execute();
//Copy Result To RenderTarget Asset
RHIImmCmdList.CopyTexture(PooledRenderTarget->GetRenderTargetItem().ShaderResourceTexture, RenderTargetRHI->GetTexture2D(), FRHICopyTextureInfo());
//RHIImmCmdList.CopyToResolveTarget(PooledRenderTarget->GetRenderTargetItem().ShaderResourceTexture, RenderTargetRHI->GetTexture2D(), FResolveParams());
}
```
## Invoking the draw function
As with the traditional approach, the render function above must be called through ENQUEUE_RENDER_COMMAND(CaptureCommand). Below is the code from my Blueprint function library.
```
void USimpleRenderingExampleBlueprintLibrary::UseRDGComput(const UObject *WorldContextObject, UTextureRenderTarget2D *OutputRenderTarget, FSimpleShaderParameter Parameter)
{
check(IsInGameThread());
FTexture2DRHIRef RenderTargetRHI = OutputRenderTarget->GameThread_GetRenderTargetResource()->GetRenderTargetTexture();
ENQUEUE_RENDER_COMMAND(CaptureCommand)
(
[RenderTargetRHI, Parameter](FRHICommandListImmediate &RHICmdList) {
SimpleRenderingExample::RDGCompute(RHICmdList, RenderTargetRHI, Parameter);
});
}
```
## How to use it
Just call it from a Blueprint:
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/Images/UseRDGCompute.png)
Note that the RenderTarget format must match the UAV format exactly.
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/Images/RTFormat.png)
Result:
![](https://cdn.jsdelivr.net/gh/blueroseslol/ImageBag@latest/ImageBag/Images/RDGComputeShaderResult.png)


@@ -0,0 +1,116 @@
# References
UE4 version: https://zhuanlan.zhihu.com/p/446587397
# C++ part
## Adding the ShaderModel
- EngineTypes.h: add the new ShaderModel entry to the EMaterialShadingModel enum.
## Adding material editor pins
- SceneTypes.h: add two pin-property name entries to the EMaterialProperty enum.
- Material.h: add two FScalarMaterialInput members, CustomData2 and CustomData3, to the UMaterial class.
### MaterialExpressions.cpp
- Add the matching pins to the MakeMaterialAttributes node:
	- Add the CustomData2 and CustomData3 declarations (in MaterialExpressionMakeMaterialAttributes.h).
	- Modify UMaterialExpressionMakeMaterialAttributes::Compile() and add the CustomData2/CustomData3 cases.
- Add the matching pins to the BreakMaterialAttributes node:
	- Modify UMaterialExpressionBreakMaterialAttributes::UMaterialExpressionBreakMaterialAttributes() and add the CustomData2/CustomData3 entries.
	- Modify UMaterialExpressionBreakMaterialAttributes::Serialize() and add two more `Outputs[OutputIndex].SetMask(1, 1, 0, 0, 0); ++OutputIndex;` lines.
	- Modify BuildPropertyToIOIndexMap(): add the CustomData2/CustomData3 entries and update the index of the last row to the correct value.
	- Change the static_assert condition from `static_assert(MP_MAX == 32` to `static_assert(MP_MAX == 34`.
### MaterialShared.cpp
- Modify FMaterialAttributeDefinitionMap::InitializeAttributeMap(): add GUIDs for CustomData2 and CustomData3 (any values that do not collide with existing ones).
- Modify FMaterialAttributeDefinitionMap::GetAttributeOverrideForMaterial(): set the display names of the new ShaderModel's pins in the material editor.
### MaterialShader.cpp
- Modify GetShadingModelString(): return a string for the new ShaderModel.
- Modify UpdateMaterialShaderCompilingStats(): add the new ShaderModel to the stats condition, e.g. `else if (ShadingModels.HasAnyShadingModel({ MSM_DefaultLit, MSM_Subsurface, MSM_PreintegratedSkin, MSM_ClearCoat, MSM_Cloth, MSM_SubsurfaceProfile, MSM_TwoSidedFoliage, MSM_SingleLayerWater, MSM_ThinTranslucent, MSM_NPRShading }))`
### Material.cpp
- Modify UMaterial::PostLoad(): add the two lines that reorder the new pins' material attributes, e.g. `DoMaterialAttributeReorder(&CustomData2, UE4Ver, RenderObjVer);`
- Modify UMaterial::GetExpressionInputForProperty(): add the two corresponding lines for the new pins.
- Modify UMaterial::CompilePropertyEx(): add the two corresponding lines that compile the new material properties.
- Modify the static IsPropertyActive_Internal(): this controls whether a pin is enabled in the material editor; add cases for the new CustomData pins (see the sketch below).
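A hedged sketch of the switch cases that might be added inside IsPropertyActive_Internal() in Material.cpp; the MSM_NPRShading name and the surrounding switch layout are assumptions based on the existing MP_CustomData0/1 handling:
```c++
// Inside the switch (InProperty) of IsPropertyActive_Internal():
case MP_CustomData2:
	Active = ShadingModels.HasShadingModel(MSM_NPRShading);
	break;
case MP_CustomData3:
	Active = ShadingModels.HasShadingModel(MSM_NPRShading);
	break;
```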
### HLSLMaterialTranslator.cpp
This is where the compiled material nodes are stitched into the full shader code.
- Modify FHLSLMaterialTranslator::FHLSLMaterialTranslator(): set the entries of the SharedPixelProperties array that correspond to the new pins to true.
- Modify FHLSLMaterialTranslator::GetMaterialEnvironment(): add a define for the new ShaderModel (see the sketch after the code block below).
- Modify FHLSLMaterialTranslator::GetMaterialShaderCode(): add the two corresponding lines, `LazyPrintf.PushParam(*GenerateFunctionCode(MP_CustomData2));` and the same for MP_CustomData3. This function reads `/Engine/Private/MaterialTemplate.ush` and formats the replacement strings.
- Modify FHLSLMaterialTranslator::Translate(): add two lines for the new pins:
```c#
Chunk[MP_CustomData2] = Material->CompilePropertyAndSetMaterialProperty(MP_CustomData2 ,this);//NPR Shading
Chunk[MP_CustomData3] = Material->CompilePropertyAndSetMaterialProperty(MP_CustomData3 ,this);
```
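A hedged sketch of the define that might be added in GetMaterialEnvironment(); the macro name mirrors MATERIAL_SHADINGMODEL_NPRSHADING used in the shader section below, and MSM_NPRShading is the enum entry assumed throughout this note:
```c++
// Inside FHLSLMaterialTranslator::GetMaterialEnvironment():
if (ShadingModels.HasShadingModel(MSM_NPRShading))
{
	OutEnvironment.SetDefine(TEXT("MATERIAL_SHADINGMODEL_NPRSHADING"), TEXT("1"));
}
```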
### MaterialGraph.cpp
Material editor UI code. Modify UMaterialGraph::RebuildGraph() so the two new pins are displayed:
```c#
MaterialInputs.Add( FMaterialInputInfo(FMaterialAttributeDefinitionMap::GetDisplayNameForMaterial(MP_CustomData2, Material), MP_CustomData2, FMaterialAttributeDefinitionMap::GetDisplayNameForMaterial(MP_CustomData2, Material)));
MaterialInputs.Add( FMaterialInputInfo(FMaterialAttributeDefinitionMap::GetDisplayNameForMaterial(MP_CustomData3, Material), MP_CustomData3, FMaterialAttributeDefinitionMap::GetDisplayNameForMaterial(MP_CustomData3, Material)));
```
### Fixing the compile error after adding the pins
Adding the pins produces the compile error `PropertyConnectedBitmask cannot contain entire EMaterialProperty enumeration.` Change
```c#
uint32 PropertyConnectedBitmask;
```
to
```c#
uint64 PropertyConnectedBitmask;
```
and change the casts in the related functions to uint64:
```c#
ENGINE_API bool IsConnected(EMaterialProperty Property) { return ((PropertyConnectedBitmask >> (uint64)Property) & 0x1) != 0; }
ENGINE_API void SetConnectedProperty(EMaterialProperty Property, bool bIsConnected)
{
PropertyConnectedBitmask = bIsConnected ? PropertyConnectedBitmask | (1i64 << (uint64)Property) : PropertyConnectedBitmask & ~(1i64 << (uint64)Property);
}
```
# Shader part
## MaterialTemplate.ush
Add the corresponding formatting placeholders for the new pins:
```c#
half GetMaterialCustomData2(FMaterialPixelParameters Parameters)
{
%s;
}
half GetMaterialCustomData3(FMaterialPixelParameters Parameters)
{
%s;
}
```
## ShadingCommon.ush
- Add an ID macro for the new ShaderModel: `#define SHADINGMODELID_NPRSHADING 12`
- Modify float3 GetShadingModelColor(uint ShadingModelID): give the new ShaderModel a display color.
## BasePassCommon.ush
- Modify the #define WRITES_CUSTOMDATA_TO_GBUFFER macro: add the new ShaderModel to the final condition.
## DeferredShadingCommon.ush
- Modify bool HasCustomGBufferData(int ShadingModelID): add the new ShaderModel to the condition.
## ShadingModelsMaterial.ush
- Modify void SetGBufferForShadingModel(): this function fills the output GBuffer; add a code block for the new ShaderModel:
```c#
#ifdef MATERIAL_SHADINGMODEL_NPRSHADING
else if(ShadingModel == SHADINGMODELID_NPRSHADING)
{
GBuffer.CustomData.x=saturate(GetMaterialCustomData0(MaterialParameters));
GBuffer.CustomData.y=saturate(GetMaterialCustomData1(MaterialParameters));
GBuffer.CustomData.z=saturate(GetMaterialCustomData2(MaterialParameters));
GBuffer.CustomData.w=saturate(GetMaterialCustomData3(MaterialParameters));
}
#endif
```
## Lighting implementation
- ShadingModel.ush: the BxDF implementation.
- DeferredLightingCommon.ush: the deferred lighting implementation.


@@ -0,0 +1,113 @@
FPixelShaderInOut_MainPS() in BasePassPixelShader.usf contains the main shader logic of the BasePass stage. It is called from MainPS() in PixelShaderOutputCommon.usf, the pixel shader entry point.
This stage takes the results of the material editor pins and, after some computation, finally outputs the GBuffer for later lighting. You can think of it as the step that comes "right after" the material editor. The related C++ logic lives in RenderBasePass() inside FDeferredShadingSceneRenderer::Render().
Because of limited time and ability, this is only a rough set of notes for later reference.
<!--more-->
### 780~889: compute variables and fill FMaterialPixelParameters MaterialParameters. BaseColor, Metallic and Specular are at 867~877.
### 915~1072: compute the GBuffer (or DBuffer)
- 915~942: decal-related DBuffer computation.
- 954~1028: fill the GBuffer per shading model. 983~1008: Velocity; 1013~1022: adjust roughness with the normal (used by the skin and car-paint shading models).
- 1041: GBuffer.DiffuseColor = BaseColor - BaseColor * Metallic;
- 1059~1072: using the normal (and, for clear coat, also the bottom-layer normal), compute BentNormal and GBufferAO (using spherical Gaussians).
### 1081~1146
#### 1086~1116: compute DiffuseColorForIndirect
DiffuseColorForIndirectDiffuseDir is only computed for Hair.
- Subsurface and preintegrated skin: DiffuseColorForIndirect += SubsurfaceColor;
- Cloth: DiffuseColorForIndirect += SubsurfaceColor * saturate(GetMaterialCustomData0(MaterialParameters));
- Hair: DiffuseColorForIndirect = 2*PI * HairShading( GBuffer, L, V, N, 1, TransmittanceData, 0, 0.2, uint2(0,0));
#### 1118~1120: compute the precomputed indirect lighting
GetPrecomputedIndirectLightingAndSkyLight
Samples the corresponding precomputed caches:
1. PRECOMPUTED_IRRADIANCE_VOLUME_LIGHTING: depending on TRANSLUCENCY_LIGHTING_VOLUMETRIC_PERVERTEX_NONDIRECTIONAL, either read the per-vertex AO value or sample the volumetric lightmap as the IrradianceSH value, then accumulate into OutDiffuseLighting.
2. CACHED_VOLUME_INDIRECT_LIGHTING: sample the IndirectLightingCache and accumulate into OutDiffuseLighting.
3. Sample HQ_TEXTURE_LIGHTMAP or LQ_TEXTURE_LIGHTMAP and accumulate into OutDiffuseLighting.
Then call GetSkyLighting() and accumulate the sky light into OutDiffuseLighting; finally compute the luminance of OutDiffuseLighting and output it as OutIndirectIrradiance.
#### 1138: compute DiffuseColor
DiffuseColor = indirect diffuse lighting * diffuse color + subsurface indirect lighting * subsurface color, multiplied by AO:
```c#
DiffuseColor += (DiffuseIndirectLighting * DiffuseColorForIndirect + SubsurfaceIndirectLighting * SubsurfaceColor) * AOMultiBounce( GBuffer.BaseColor, DiffOcclusion );
```
#### 1140~1146: SingleLayerWater coverage color adjustment
```c#
GBuffer.DiffuseColor *= BaseMaterialCoverageOverWater;
DiffuseColor *= BaseMaterialCoverageOverWater;
```
### 1148~1211
1. Accumulate the DiffuseLighting and SpecularLighting from ForwardDirectLighting into Color (for the THIN_TRANSLUCENT model, into DiffuseColor and ColorSeparateSpecular).
2. SIMPLE_FORWARD_DIRECTIONAL_LIGHT: call GetSimpleForwardLightingDirectionalLight() to compute the directional-light contribution.
Depending on the lighting mode, the result is scaled and finally accumulated into Color:
```c#
#if STATICLIGHTING_SIGNEDDISTANCEFIELD
DirectionalLighting *= GBuffer.PrecomputedShadowFactors.x;
#elif PRECOMPUTED_IRRADIANCE_VOLUME_LIGHTING
DirectionalLighting *= GetVolumetricLightmapDirectionalLightShadowing(VolumetricLightmapBrickTextureUVs);
#elif CACHED_POINT_INDIRECT_LIGHTING
DirectionalLighting *= IndirectLightingCache.DirectionalLightShadowing;
#endif
Color += DirectionalLighting;
```
```c#
float3 GetSimpleForwardLightingDirectionalLight(FGBufferData GBuffer, float3 DiffuseColor, float3 SpecularColor, float Roughness, float3 WorldNormal, float3 CameraVector)
{
float3 V = -CameraVector;
float3 N = WorldNormal;
float3 L = ResolvedView.DirectionalLightDirection;
float NoL = saturate( dot( N, L ) );
float3 LightColor = ResolvedView.DirectionalLightColor.rgb * PI;
FShadowTerms Shadow = { 1, 1, 1, InitHairTransmittanceData() };
FDirectLighting Lighting = EvaluateBxDF( GBuffer, N, V, L, NoL, Shadow );
// Not computing specular, material was forced fully rough
return LightColor * (Lighting.Diffuse + Lighting.Transmission);
}
```
### 1213~1273: render fog
Includes vertex fog, pixel fog, volumetric fog and volumetric lighting (lit translucency).
Volumetric fog computes a UV from the values in View.VolumetricFogGridZParams, samples FogStruct.IntegratedLightScattering with Texture3DSampleLevel, and the final value is float4(VolumetricFogLookup.rgb + GlobalFog.rgb * VolumetricFogLookup.a, VolumetricFogLookup.a * GlobalFog.a).
### 1283~1310: take the material's emissive result and accumulate it into Color
```c#
half3 Emissive = GetMaterialEmissive(PixelMaterialInputs);
#if !POST_PROCESS_SUBSURFACE && !MATERIAL_SHADINGMODEL_THIN_TRANSLUCENT
// For skin we need to keep them separate. We also keep them separate for thin translucent.
// Otherwise just add them together.
Color += DiffuseColor;
#endif
#if !MATERIAL_SHADINGMODEL_THIN_TRANSLUCENT
Color += Emissive;
```
### 1312~1349: SingleLayerWater lighting
Compute SunIlluminance, WaterDiffuseIndirectIlluminance, Normal, ViewVector and EnvBrdf (preintegrated G*F * specular color, in BRDF.ush), then take the forward or deferred path depending on the settings:
```c#
const float3 SunIlluminance = ResolvedView.DirectionalLightColor.rgb * PI; // times PI because it is divided by PI on CPU (=luminance) and we want illuminance here.
const float3 WaterDiffuseIndirectIlluminance = DiffuseIndirectLighting * PI;// DiffuseIndirectLighting is luminance. So we need to multiply by PI to get illuminance.
```
### 1352~1372: thin translucent lighting
### 1375~1529: GBuffer-related work
1. Handle the BlendMode.
2. GBuffer.IndirectIrradiance = IndirectIrradiance;
3. Call LightAccumulator_Add() to accumulate the lighting contribution on BaseColor; Out.MRT[0] = FLightAccumulator.TotalLight.
4. Call EncodeGBuffer() to fill GBuffers 1-5.
5. Out.MRT[4] = OutVelocity;
6. Out.MRT[GBUFFER_HAS_VELOCITY ? 5 : 4] = OutGBufferD;
7. Out.MRT[GBUFFER_HAS_VELOCITY ? 6 : 5] = OutGBufferE;
8. Out.MRT[0].rgb *= ViewPreExposure;
### 1553: FinalizeVirtualTextureFeedback


@@ -0,0 +1,304 @@
# References
The evolution of tone mapping (Tone mapping进化论): https://zhuanlan.zhihu.com/p/21983679
## Filmic tone mapping
The tone-mapping method from Uncharted 2 (2010). Its essence is curve fitting: take the original image and the result an artist produced by hand ("human tone mapping") in professional photo software to imitate film, and fit a higher-order curve to that mapping. Applying the resulting expression to the rendered image automatically gets close to the hand-tuned result.
```c#
float3 F(float3 x)
{
const float A = 0.22f;
const float B = 0.30f;
const float C = 0.10f;
const float D = 0.20f;
const float E = 0.01f;
const float F = 0.30f;
return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}
float3 Uncharted2ToneMapping(float3 color, float adapted_lum)
{
const float WHITE = 11.2f;
return F(1.6f * adapted_lum * color) / F(WHITE);
}
```
## Academy Color Encoding System (ACES)
>ACES is a color encoding system — effectively a new color space. It is a universal interchange format: different input devices can be converted into ACES, and ACES can be displayed correctly on different output devices. Both LDR and HDR can be expressed in ACES, which directly solves the VDR problem: all devices can exchange data.
>Even better, as noted above, ACES is meant to solve color-space conversion between all devices, so this tone mapper can be used not only for HDR-to-LDR but also from one HDR space to another, solving the VDR problem at the root. The function outputs linear values, so an LDR device only needs one sRGB correction and an HDR10 device only needs one Rec 2020 color-matrix multiply. The tone mapping part is shared, which is why it is better than the earlier algorithms.
```c#
float3 ACESToneMapping(float3 color, float adapted_lum)
{
const float A = 2.51f;
const float B = 0.03f;
const float C = 2.43f;
const float D = 0.59f;
const float E = 0.14f;
color *= adapted_lum;
return (color * (A * color + B)) / (color * (C * color + D) + E);
}
```
## ToneMapPass
Located in PostProcessing.cpp:
```c#
FTonemapInputs PassInputs;
PassSequence.AcceptOverrideIfLastPass(EPass::Tonemap, PassInputs.OverrideOutput);
PassInputs.SceneColor = SceneColor;
PassInputs.Bloom = Bloom;
PassInputs.EyeAdaptationTexture = EyeAdaptationTexture;
PassInputs.ColorGradingTexture = ColorGradingTexture;
PassInputs.bWriteAlphaChannel = AntiAliasingMethod == AAM_FXAA || IsPostProcessingWithAlphaChannelSupported();
PassInputs.bOutputInHDR = bTonemapOutputInHDR;
SceneColor = AddTonemapPass(GraphBuilder, View, PassInputs);
```
如代码所示需要给Shader提供渲染结果、Bloom结果、曝光结果、合并的LUT。
1. 获取输出RT对象如果输出RT无效则根据当前设备来设置RT格式默认为PF_B8G8R8A8。(LinearEXR=>PF_A32B32G32R32FLinearNoToneCurve、LinearWithToneCurve=>PF_FloatRGBA)
2. 从后处理设置中获取BloomDirtMaskTexture。
3. 从控制台变量获取SharpenDiv6。
4. 计算色差参数ChromaticAberrationParams
5. 创建共有的Shader变量 FTonemapParameters并将所有参数都进行赋值。
6. 为桌面端的ToneMapping生成排列向量。
7. 根据RT类型使用PixelShader或者ComputeShader进行渲染。
8. 返回输出结果(FScreenPassTexture,赋值给SceneColor)。
BuildCommonPermutationDomain()构建的FCommonDomain应该是为了给引擎传递宏。其中Settings为FPostProcessSettings。
using FCommonDomain = TShaderPermutationDomain<
- FTonemapperBloomDim(USE_BLOOM):Settings.BloomIntensity > 0.0
- FTonemapperGammaOnlyDim(USE_GAMMA_ONLY):true
- FTonemapperGrainIntensityDim(USE_GRAIN_INTENSITY):Settings.GrainIntensity > 0.0f
- FTonemapperVignetteDim(USE_VIGNETTE):Settings.VignetteIntensity > 0.0f
- FTonemapperSharpenDim(USE_SHARPEN):CVarTonemapperSharpen.GetValueOnRenderThread() > 0.0f
- FTonemapperGrainJitterDim(USE_GRAIN_JITTER):Settings.GrainJitter > 0.0f
- FTonemapperSwitchAxis(NEEDTOSWITCHVERTICLEAXIS):函数形参bSwitchVerticalAxis
- FTonemapperMsaaDim(METAL_MSAA_HDR_DECODE):函数形参bMetalMSAAHDRDecode
- FTonemapperUseFXAA(USE_FXAA):View.AntiAliasingMethod == AAM_FXAA
>;
using FDesktopDomain = TShaderPermutationDomain<
- FCommonDomain,
- FTonemapperColorFringeDim(USE_COLOR_FRINGE):
- FTonemapperGrainQuantizationDim(USE_GRAIN_QUANTIZATION):当FTonemapperOutputDeviceDim为LinearNoToneCurve或LinearWithToneCurve时为false,否则为true。
- FTonemapperOutputDeviceDim(DIM_OUTPUT_DEVICE):ETonemapperOutputDevice(CommonParameters.OutputDevice.OutputDevice)
>;
```
enum class ETonemapperOutputDevice
{
sRGB,
Rec709,
ExplicitGammaMapping,
ACES1000nitST2084,
ACES2000nitST2084,
ACES1000nitScRGB,
ACES2000nitScRGB,
LinearEXR,
LinearNoToneCurve,
LinearWithToneCurve,
MAX
};
```
### Shader
>在当前实现下,渲染场景的完整处理通过 ACES Viewing Transform 进行处理。此流程的工作原理是使用"参考场景的"和"参考显示的"图像。
- 参考场景的 图像保有源材质的原始 线性光照 数值,不限制曝光范围。
- 参考显示的 图像是最终的图像,将变为所用显示的色彩空间。
使用此流程后,初始源文件用于不同显示时便无需每次进行较色编辑。相反,输出的显示将映射到 正确的色彩空间。
>ACES Viewing Transform 在查看流程中将按以下顺序进行
- Look Modification Transform (LMT) - 这部分抓取应用了创意"外观"(颜色分级和矫正)的 ACES 颜色编码图像, 输出由 ACES 和 Reference Rendering TransformRRT及 Output Device TransformODT渲染的图像。
- Reference Rendering Transform (RRT) - 之后,这部分抓取参考场景的颜色值,将它们转换为参考显示。 在此流程中,它使渲染图像不再依赖于特定显示器,反而能保证它输出到特定显示器时拥有正确而宽泛的色域和动态范围(尚未创建的图像同样如此)。
- Output Device Transform (ODT) - 最后,这部分抓取 RRT 的 HDR 数据输出,将其与它们能够显示的不同设备和色彩空间进行比对。 因此,每个目标需要将其自身的 ODT 与 Rec709、Rec2020、DCI-P3 等进行比对。
默认参数:
r.HDR.EnableHDROutput:设为 1 时,它将重建交换链并启用 HDR 输出。
r.HDR.Display.OutputDevice
- 0:sRGB (LDR) (默认)
- 1:Rec709 (LDR)
- 2:显式伽马映射 (LDR)
- 3:ACES 1000-nit ST-2084 (Dolby PQ) (HDR)
- 4:ACES 2000-nit ST-2084 (Dolby PQ) (HDR)
- 5:ACES 1000-nit ScRGB (HDR)
- 6:ACES 2000-nit ScRGB (HDR)
r.HDR.Display.ColorGamut
- 0:Rec709 / sRGB, D65 (默认)
- 1:DCI-P3, D65
- 2:Rec2020 / BT2020, D65
- 3:ACES, D60
- 4:ACEScg, D60
我的测试设备是:
- 宏碁(Acer) 暗影骑士24.5英寸FastIPS 280Hz小金刚HDR400
- ROG 枪神5 笔记本 HDMI连接
- UE4.27.2 源码版
经过实际测试还是无法打开HDR输出,着实有些可惜。所以下面给出一般(LDR)显示器情况下的Shader代码(使用RenderDoc抓帧):
```c#
float4 TonemapCommonPS(
float2 UV,
float3 ExposureScaleVignette,
float4 GrainUV,
float2 ScreenPos,
float2 FullViewUV,
float4 SvPosition
)
{
float4 OutColor = 0;
const float OneOverPreExposure = View_OneOverPreExposure;
float Grain = GrainFromUV(GrainUV.zw);
float2 SceneUV = UV.xy;
float4 SceneColor = SampleSceneColor(SceneUV);
SceneColor.rgb *= OneOverPreExposure;
float ExposureScale = ExposureScaleVignette.x;
float SharpenMultiplierDiv6 = TonemapperParams.y;
float3 LinearColor = SceneColor.rgb * ColorScale0.rgb;
float2 BloomUV = ColorToBloom_Scale * UV + ColorToBloom_Bias;
BloomUV = clamp(BloomUV, Bloom_UVViewportBilinearMin, Bloom_UVViewportBilinearMax);
float4 CombinedBloom = Texture2DSample(BloomTexture, BloomSampler, BloomUV);
CombinedBloom.rgb *= OneOverPreExposure;
float2 DirtLensUV = ConvertScreenViewportSpaceToLensViewportSpace(ScreenPos) * float2(1.0f, -1.0f);
float3 BloomDirtMaskColor = Texture2DSample(BloomDirtMaskTexture, BloomDirtMaskSampler, DirtLensUV * .5f + .5f).rgb * BloomDirtMaskTint.rgb;
LinearColor += CombinedBloom.rgb * (ColorScale1.rgb + BloomDirtMaskColor);
LinearColor *= ExposureScale;
LinearColor.rgb *= ComputeVignetteMask( ExposureScaleVignette.yz, TonemapperParams.x );
float3 OutDeviceColor = ColorLookupTable(LinearColor);
float LuminanceForPostProcessAA = dot(OutDeviceColor, float3 (0.299f, 0.587f, 0.114f));
float GrainQuantization = 1.0/256.0;
float GrainAdd = (Grain * GrainQuantization) + (-0.5 * GrainQuantization);
OutDeviceColor.rgb += GrainAdd;
OutColor = float4(OutDeviceColor, saturate(LuminanceForPostProcessAA));
[branch]
if(bOutputInHDR)
{
OutColor.rgb = ST2084ToLinear(OutColor.rgb);
OutColor.rgb = OutColor.rgb / EditorNITLevel;
OutColor.rgb = LinearToPostTonemapSpace(OutColor.rgb);
}
return OutColor;
}
```
关键函数是ColorLookupTable():对HDR设备使用ST2084编码、对非HDR设备使用Log编码后,再去采样合并好的LUT。
```c#
half3 ColorLookupTable( half3 LinearColor )
{
float3 LUTEncodedColor;
// Encode as ST-2084 (Dolby PQ) values
#if (DIM_OUTPUT_DEVICE == TONEMAPPER_OUTPUT_ACES1000nitST2084 || DIM_OUTPUT_DEVICE == TONEMAPPER_OUTPUT_ACES2000nitST2084 || DIM_OUTPUT_DEVICE == TONEMAPPER_OUTPUT_ACES1000nitScRGB || DIM_OUTPUT_DEVICE == TONEMAPPER_OUTPUT_ACES2000nitScRGB || DIM_OUTPUT_DEVICE == TONEMAPPER_OUTPUT_LinearEXR || DIM_OUTPUT_DEVICE == TONEMAPPER_OUTPUT_NoToneCurve || DIM_OUTPUT_DEVICE == TONEMAPPER_OUTPUT_WithToneCurve)
// ST2084 expects to receive linear values 0-10000 in nits.
// So the linear value must be multiplied by a scale factor to convert to nits.
LUTEncodedColor = LinearToST2084(LinearColor * LinearToNitsScale);
#else
LUTEncodedColor = LinToLog( LinearColor + LogToLin( 0 ) );
#endif
float3 UVW = LUTEncodedColor * ((LUTSize - 1) / LUTSize) + (0.5f / LUTSize);
#if USE_VOLUME_LUT == 1
half3 OutDeviceColor = Texture3DSample( ColorGradingLUT, ColorGradingLUTSampler, UVW ).rgb;
#else
half3 OutDeviceColor = UnwrappedTexture3DSample( ColorGradingLUT, ColorGradingLUTSampler, UVW, LUTSize ).rgb;
#endif
return OutDeviceColor * 1.05;
}
float3 LogToLin( float3 LogColor )
{
const float LinearRange = 14;
const float LinearGrey = 0.18;
const float ExposureGrey = 444;
// Using stripped down, 'pure log', formula. Parameterized by grey points and dynamic range covered.
float3 LinearColor = exp2( ( LogColor - ExposureGrey / 1023.0 ) * LinearRange ) * LinearGrey;
//float3 LinearColor = 2 * ( pow(10.0, ((LogColor - 0.616596 - 0.03) / 0.432699)) - 0.037584 ); // SLog
//float3 LinearColor = ( pow( 10, ( 1023 * LogColor - 685 ) / 300) - .0108 ) / (1 - .0108); // Cineon
//LinearColor = max( 0, LinearColor );
return LinearColor;
}
float3 LinToLog( float3 LinearColor )
{
const float LinearRange = 14;
const float LinearGrey = 0.18;
const float ExposureGrey = 444;
// Using stripped down, 'pure log', formula. Parameterized by grey points and dynamic range covered.
float3 LogColor = log2(LinearColor) / LinearRange - log2(LinearGrey) / LinearRange + ExposureGrey / 1023.0; // scalar: 3log2 3mad
//float3 LogColor = (log2(LinearColor) - log2(LinearGrey)) / LinearRange + ExposureGrey / 1023.0;
//float3 LogColor = log2( LinearColor / LinearGrey ) / LinearRange + ExposureGrey / 1023.0;
//float3 LogColor = (0.432699 * log10(0.5 * LinearColor + 0.037584) + 0.616596) + 0.03; // SLog
//float3 LogColor = ( 300 * log10( LinearColor * (1 - .0108) + .0108 ) + 685 ) / 1023; // Cineon
LogColor = saturate( LogColor );
return LogColor;
}
```
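上面的LinToLog()/LogToLin()互为反函数:以0.18为灰点、覆盖14档动态范围、灰点编码到444/1023,写成公式即:

$$
\mathrm{Log} = \frac{\log_2(\mathrm{Lin}/0.18)}{14} + \frac{444}{1023},\qquad
\mathrm{Lin} = 0.18\cdot 2^{\,14\,(\mathrm{Log}-444/1023)}
$$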
## CombineLUTS Pass
参数在GetCombineLUTParameters()中填充,对应的就是CombineLUTS (PS) Pass。它的作用是绘制一张3D LUT:ToneMapping本质上也只是一条曲线,因此可以与调色一起烘焙进同一张LUT。核心函数位于PostProcessCombineLUTs.usf的**float4 CombineLUTsCommon(float2 InUV, uint InLayerIndex)**:
- 计算原始LUT Neutral 。
- 对HDR设备使用ST2084解码对LDR设备使用Log解码。**LinearColor = LogToLin(LUTEncodedColor) - LogToLin(0);**
- 白平衡。
- 在sRGB色域之外扩展明亮的饱和色彩以伪造广色域渲染(Expand bright saturated colors outside the sRGB gamut to fake wide gamut rendering.)
- 颜色矫正:对颜色ColorSaturation、ColorContrast、ColorGamma、ColorGain、ColorOffset矫正操作。
- 蓝色矫正。
- ToneMapping与之前计算结果插值。
- 反蓝色矫正。
- 从AP1到sRGB的转换并Clip掉gamut的值。
- 颜色矫正。
- Gamma矫正。
- 线性颜色=》设备颜色OutDeviceColor = LinearToSrgb( OutputGamutColor );部分HDR设备则会调用对应的矩阵调整并用LinearToST2084().
- 简单处理OutColor.rgb = OutDeviceColor / 1.05;
所以核心的Tonemapping函数是位于TonemapperCommon.ush的**half3 FilmToneMap( half3 LinearColor)**与**half3 FilmToneMapInverse( half3 ToneColor)**:
```
// Blue correction
ColorAP1 = lerp( ColorAP1, mul( BlueCorrectAP1, ColorAP1 ), BlueCorrection );
// Tonemapped color in the AP1 gamut
float3 ToneMappedColorAP1 = FilmToneMap( ColorAP1 );
ColorAP1 = lerp(ColorAP1, ToneMappedColorAP1, ToneCurveAmount);
// Uncorrect blue to maintain white point
ColorAP1 = lerp( ColorAP1, mul( BlueCorrectInvAP1, ColorAP1 ), BlueCorrection );
```
## UE4默认ToneMapping曲线参数
后处理(Filmic Tonemapper)中的ToneMapping曲线默认参数为:
Slope: 0.98
Toe: 0.3
Shoulder: 0.22
Black Clip: 0
White Clip: 0.025
![](https://docs.unrealengine.com/5.0/Images/designing-visuals-rendering-and-graphics/post-process-effects/color-grading/DefaultSettings_FilmicToneMapper.webp)
View File
@@ -0,0 +1,86 @@
之前看过MeshDraw的流程,发现MeshDraw部分还是和材质编辑器绑得死死的。不过应该可以通过自定义MeshDrawPass、图元类、顶点工厂来直接使用VS与PS Shader。但这样就无法使用材质编辑器进行简单的Shader编写、也无法用Material Instance来调节参数,只能使用c++进行参数传递,非常不方便(而在使用材质编辑器的情况下,使用CustomNode可以获得更多的自由度,比如使用一些UE在ush中定义的函数以及循环等,同时有更好的可读性,也方便项目升级与后续项目使用)。
这里我总结了一些CustomNode使用方法。使用CustomNode时最好将ConsoleVariables.ini中的 r.ShaderDevelopmentMode设置为1。这样可以看到更多的Shader错误信息但老实说UE4的Shader错误提示真心不能与U3D比……
>采用CustomNode+Include usf文件的方式使用Ctrl+Shift+.不会真正的更新代码,必须手动断开节点连接再连接上,才会触发重新编译。
<!--more-->
### IncludeFilePaths
给Material添加ush与usf文件包含只支持以上2个后缀名。CustomNode会在生成的CustomExpressionX()之前加上
```c#
#include "你填入的文件路径"
```
这样你就可以在插件中的模块c++的StartupModule()中定义Shader映射目录之后将Shader函数代码写在插件里。映射操作大致如下
```c#
void FXXXModule::StartupModule()
{
// This code will execute after your module is loaded into memory; the exact timing is specified in the .uplugin file per-module
FString PluginShaderDir=FPaths::Combine(IPluginManager::Get().FindPlugin(TEXT("XXX"))->GetBaseDir(),TEXT("Shaders"));
AddShaderSourceDirectoryMapping(TEXT("/Plugin/YYY"),PluginShaderDir);
}
```
### Additional Defines
用来定义宏假设DefineName为ValueDefineValue为1。那么Custom将会在生成的CustomExpressionX()之前加上:
```c#
#ifndef Value
#define Value 1
#endif//Value
```
### Additional Outputs
设置完OutputName与OutputType后,就会在生成的函数的形参里添加对应类型的引用:
```c#
CustomExpression0(FMaterialPixelParameters Parameters, inout MaterialFloat Ret1)
```
之后就可以在CustomNode节点的代码中给这个形参赋值,最后从CustomNode节点生成的输出引脚中取到数值。
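一个最小示意(假设附加输出名为Ret1,代码直接写在CustomNode的Code栏中,`Parameters`形参的用法见下一节):
```c#
// CustomNode 的 Code 栏内容示意:主输出返回法线可视化,附加输出 Ret1 写入一个标量
float3 N = Parameters.WorldNormal;                                // 像素法线
Ret1 = saturate(dot(N, ResolvedView.DirectionalLightDirection));  // 方向约定视引擎版本可能需要取反
return N * 0.5 + 0.5;
```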
### 在CustomNode中使用节点代码
前几年刚接触CustomNode的时候,一直都在思考如何使用一些带有`Parameters`参数的函数,比如`AbsoluteWorldPosition:GetWorldPosition(Parameters)`。这几天回过头看了一下,只需要在自定义函数中添加一个`FMaterialPixelParameters Parameters`或者`FMaterialVertexParameters Parameters`形参,之后就可以在函数里使用这些函数了。
#### 常用节点HlSL代码
- AbsoluteWorldPosition:GetWorldPosition(Parameters)
- AbsoluteWorldPosition(ExcludingMaterialOffsets):GetPrevWorldPosition(Parameters)
- VertexNormalPosition:Parameters.TangentToWorld[2];
- PixelNormalWS:Parameters.WorldNormal
- ObjectPosition:GetObjectWorldPosition(Parameters)
- CameraPosition:ResolvedView.WorldCameraOrigin
- LightVector:Parameters.LightVector
- ResolvedView.DirectionalLightDirection
- ResolvedView.DirectionalLightColor.rgb
- ResolvedView.SkyLightColor.rgb;
- ResolvedView.PreExposure
- EyeAdaptation:EyeAdaptationLookup() 位于EyeAdaptationCommon.ush
需要开启大气雾:
- SkyAtmosphereLightColor:ResolvedView.AtmosphereLightColor[LightIndex].rgb
- SkyAtmosphereLightDirection:ResolvedView.AtmosphereLightDirection[LightIndex].xyz
节点代码中的`Parameters`为`FMaterialPixelParameters Parameters`或者`FMaterialVertexParameters Parameters`结构体两者都可以在MaterialTemplate.ush中找到成员定义。
其他一些节点中会使用`View.XXX`来获取变量,这里的`View`等于`ResolvedView`。具体的变量可以通过查看`FViewUniformShaderParameters`c++)。
剩下的一些节点代码可以通过材质编辑器的Window-ShaderCode-HLSL来找到。具体的方式是将所需节点连到对应引脚上,之后将生成的代码复制出来再进行寻找。当然也可以直接在FHLSLMaterialTranslator中来寻找对应节点的代码。
一些常用shader函数都可以在Common.ush找到。
## Texture
在CustomNode里使用Texture只需要给CustomNode连上TextureObject节点之后材质编辑器会自动生成对应的Sampler。例如Pin名称为XXX那就会生成XXXSampler之后向函数传递这2个参数即可。
函数中的形参类型为Texture2D与SamplerState。
TextureCoord虽然可以通过代码调用但会出现错误可能是因为材质编辑器没有检测到TextureCoord节点以至于没有添加对应代码所致。所以TextureCoord节点还是需要连接到CustomNode的pin上无法通过代码省略。
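假设TextureObject引脚名为Tex、另有一个UV输入引脚,Code栏中的采样写法大致如下(Texture2DSample定义于Common.ush):
```c#
// CustomNode 的 Code 栏内容示意:Tex/TexSampler 由引脚 Tex 自动生成,UV 来自连入的 TextureCoord 节点
float4 Color = Texture2DSample(Tex, TexSampler, UV);
return Color.rgb;
```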
#### 默认贴图
所以一些控制类的贴图可以使用引擎里的资源这些资源在Engine Content中需要勾选显示Engine Content选项后才会显示
Texture2D'/Engine/EngineResources/WhiteSquareTexture.WhiteSquareTexture'
Texture2D'/Engine/EngineResources/Black.Black'
## HLSL分支控制关键字
一些if与for语句默认的编译展开方式(摊平/展开)不一定符合预期,在CustomNode里可以通过添加以下属性关键字来控制分支与循环生成的指令。
### if语句
- branch添加了branch标签的if语句shader会根据判断语句只执行当前情况的代码这样会产生跳转指令。
- flatten添加了flatten标签的if语句shader会执行全部情况的分支代码然后根据判断语句来决定使用哪个结果。
### for语句
- unroll添加了unroll标签的for循环是可以展开的直到循环条件终止代价是产生更多机器码
- loop添加了loop标签的for循环不能展开流式控制每次的循环迭代for默认是loop
View File
@@ -0,0 +1,122 @@
# UE4渲染用贴图资源实时更换
最近做卡通渲染,美术同学反映每次更换渲染贴图都需要手动替换资源并重启引擎,相当麻烦。所以花时间了解了一下UE4的渲染资源逻辑,并且找到了解决方法。
## 可参考的资源
在FViewUniformShaderParameter中有
SHADER_PARAMETER_TEXTURE(Texture2D, PreIntegratedBRDF)
SHADER_PARAMETER_TEXTURE(Texture2D, PerlinNoiseGradientTexture) 在渲染管线中调用FSystemTextures::InitializeCommonTextures生成之后通过GSystemTextures加载
SHADER_PARAMETER_TEXTURE(Texture3D, PerlinNoise3DTexture) 在渲染管线中调用FSystemTextures::InitializeCommonTextures生成之后通过GSystemTextures加载
SHADER_PARAMETER_TEXTURE(Texture2D, AtmosphereTransmittanceTexture)
在Material界面中有SubsurfaceProfile(USubsurfaceProfile类,定义于SubsurfaceProfile.h)。
### USubsurfaceProfile
资源的类型为USubsurfaceProfile实现了2个接口函数
```c++
void USubsurfaceProfile::PostEditChangeProperty(struct FPropertyChangedEvent& PropertyChangedEvent)
{
const FSubsurfaceProfileStruct SettingsLocal = this->Settings;
USubsurfaceProfile* Profile = this;
ENQUEUE_RENDER_COMMAND(UpdateSubsurfaceProfile)(
[SettingsLocal, Profile](FRHICommandListImmediate& RHICmdList)
{
// any changes to the setting require an update of the texture
GSubsurfaceProfileTextureObject.UpdateProfile(SettingsLocal, Profile);
});
}
```
```c++
void USubsurfaceProfile::BeginDestroy()
{
USubsurfaceProfile* Ref = this;
ENQUEUE_RENDER_COMMAND(RemoveSubsurfaceProfile)(
[Ref](FRHICommandList& RHICmdList)
{
GSubsurfaceProfileTextureObject.RemoveProfile(Ref);
});
Super::BeginDestroy();
}
```
在UMaterialInterface::UpdateMaterialRenderProxy()中开始触发下列渲染资源操作:
- 升级渲染资源**GSubsurfaceProfileTextureObject.UpdateProfile(SettingsLocal, Profile);**
- 移除渲染资源**GSubsurfaceProfileTextureObject.RemoveProfile(Ref);**
本质上是在更新SubsurfaceProfileEntries数组后,将渲染线程里的GSSProfiles释放掉。在渲染管线中主要通过**GetSubsufaceProfileTexture_RT(RHICmdList);**来获取这个资源,其调用顺序为
>GetSubsufaceProfileTexture_RT()=>GSubsurfaceProfileTextureObject.GetTexture(RHICmdList);=>return GSSProfiles;
如果GSSProfiles无效则调用CreateTexture()对SubsurfaceProfile进行编码并生成新的GSSProfiles。
### PreIntegratedBRDF
1. 在FViewUniformShaderParameters中添加**PreIntegratedBRDF**贴图变量
2. 在FViewUniformShaderParameters::FViewUniformShaderParameters(),进行初始化**PreIntegratedBRDF = GWhiteTexture->TextureRHI;**
3. 在FViewInfo::SetupUniformBufferParameters()中进行资源指定:**ViewUniformShaderParameters.PreIntegratedBRDF = GEngine->PreIntegratedSkinBRDFTexture->Resource->TextureRHI;**
1. 在UEngine中定义资源指针**class UTexture2D* PreIntegratedSkinBRDFTexture;**与**FSoftObjectPath PreIntegratedSkinBRDFTextureName;**
2. 在UEngine::InitializeObjectReferences()中调用**LoadEngineTexture(PreIntegratedSkinBRDFTexture, *PreIntegratedSkinBRDFTextureName.ToString());**载入贴图。
```c++
template <typename TextureType>
static void LoadEngineTexture(TextureType*& InOutTexture, const TCHAR* InName)
{
if (!InOutTexture)
{
InOutTexture = LoadObject<TextureType>(nullptr, InName, nullptr, LOAD_None, nullptr);
}
if (FPlatformProperties::RequiresCookedData() && InOutTexture)
{
InOutTexture->AddToRoot();
}
}
```
### AtmosphereTransmittanceTexture
1. 在FViewUniformShaderParameters中添加**AtmosphereTransmittanceTexture**贴图变量
2. 在FViewUniformShaderParameters::FViewUniformShaderParameters(),进行初始化**AtmosphereTransmittanceTexture = GWhiteTexture->TextureRHI;**
3. 在FViewUniformShaderParameters::SetupUniformBufferParameters()中:**ViewUniformShaderParameters.AtmosphereTransmittanceTexture = OrBlack2DIfNull(AtmosphereTransmittanceTexture);**
4. 在FogRendering阶段InitAtmosphereConstantsInView()中进行资源指定:**View.AtmosphereTransmittanceTexture = (FogInfo.TransmittanceResource && FogInfo.TransmittanceResource->TextureRHI.GetReference()) ? (FTextureRHIRef)FogInfo.TransmittanceResource->TextureRHI : GBlackTexture->TextureRHI;**
5. 在UAtmosphericFogComponent::UpdatePrecomputedData()中对FogInfo进行预计算在Tick事件中调用每帧调用
在UpdatePrecomputedData()中的最后还会调用以下函数来对资源进行更新:
```c++
PrecomputeCounter = EValid;
FPlatformMisc::MemoryBarrier();
Scene->AtmosphericFog->bPrecomputationAcceptedByGameThread = true;
// Resolve to data...
ReleaseResource();
// Wait for release...
FlushRenderingCommands();
InitResource();
FComponentReregisterContext ReregisterContext(this);
```
生成贴图逻辑(部分)如下:
```c++
FScene* Scene = GetScene() ? GetScene()->GetRenderScene() : NULL;
{
int32 SizeX = PrecomputeParams.TransmittanceTexWidth;
int32 SizeY = PrecomputeParams.TransmittanceTexHeight;
int32 TotalByte = sizeof(FColor) * SizeX * SizeY;
check(TotalByte == Scene->AtmosphericFog->PrecomputeTransmittance.GetBulkDataSize());
const FColor* PrecomputeData = (const FColor*)Scene->AtmosphericFog->PrecomputeTransmittance.Lock(LOCK_READ_ONLY);
TransmittanceData.Lock(LOCK_READ_WRITE);
FColor* TextureData = (FColor*)TransmittanceData.Realloc(TotalByte);
FMemory::Memcpy(TextureData, PrecomputeData, TotalByte);
TransmittanceData.Unlock();
Scene->AtmosphericFog->PrecomputeTransmittance.Unlock();
}
```
### 本人尝试方法
个人觉得SubsurfaceProfile的方法是最好的,但也因为相对麻烦所以放弃。我最后选择了修改UEngine(GEngine)中对应的UTexture指针来实现渲染资源替换,因为每帧都还会调用SetupUniformBufferParameters()来指定渲染用的资源。
为了保证载入的资源的生命周期我实现了一个UEngineSubSystem子类声明了若干UTexture指针并且移植了LoadEngineTexture()。这样就可以实现Runtime更换渲染资源了。大致代码如下
```c++
UToonEngineSubsystem* EngineSubsystem = GEngine->GetEngineSubsystem<UToonEngineSubsystem>();
EngineSubsystem->LoadEngineTexture(EngineSubsystem->ToonRampTexture, *ToonRampTexture->GetPathName());
GEngine->ToonRampTexture=EngineSubsystem->ToonRampTexture;
```
View File
@@ -0,0 +1,141 @@
## Ue4后处理逻辑简析
### APostProcessVolume
通常我们在后处理体积也就是APostProcessVolume设置后处理效果。它存储了struct FPostProcessSettings Settings;
加入关卡后会存储在UWorld的PostProcessVolumes中之后依次调用DoPostProcessVolume=》OverridePostProcessSettings之后修改FSceneView中的FFinalPostProcessSettings FinalPostProcessSettings。对所有属性进行插值计算
最后就可以通过View.FinalPostProcessSettings来读取后处理参数了。
### AddPostProcessingPasses
控制渲染的变量主要通过以下方式获取:
- 从FViewInfo里直接获取
- 从FFinalPostProcessSettings获取View.FinalPostProcessSettings
- 从FEngineShowFlags获取View.Family->EngineShowFlags
- 从ConsoleVariable中获取
先获取各种Buffer与变量:
```c#
const FIntRect PrimaryViewRect = View.ViewRect;
const FSceneTextureParameters SceneTextureParameters = GetSceneTextureParameters(GraphBuilder, Inputs.SceneTextures);
const FScreenPassRenderTarget ViewFamilyOutput = FScreenPassRenderTarget::CreateViewFamilyOutput(Inputs.ViewFamilyTexture, View);
const FScreenPassTexture SceneDepth(SceneTextureParameters.SceneDepthTexture, PrimaryViewRect);
const FScreenPassTexture SeparateTranslucency(Inputs.SeparateTranslucencyTextures->GetColorForRead(GraphBuilder), PrimaryViewRect);
const FScreenPassTexture CustomDepth((*Inputs.SceneTextures)->CustomDepthTexture, PrimaryViewRect);
const FScreenPassTexture Velocity(SceneTextureParameters.GBufferVelocityTexture, PrimaryViewRect);
const FScreenPassTexture BlackDummy(GSystemTextures.GetBlackDummy(GraphBuilder));
// Scene color is updated incrementally through the post process pipeline.
FScreenPassTexture SceneColor((*Inputs.SceneTextures)->SceneColorTexture, PrimaryViewRect);
// Assigned before and after the tonemapper.
FScreenPassTexture SceneColorBeforeTonemap;
FScreenPassTexture SceneColorAfterTonemap;
// Unprocessed scene color stores the original input.
const FScreenPassTexture OriginalSceneColor = SceneColor;
// Default the new eye adaptation to the last one in case it's not generated this frame.
const FEyeAdaptationParameters EyeAdaptationParameters = GetEyeAdaptationParameters(View, ERHIFeatureLevel::SM5);
FRDGTextureRef LastEyeAdaptationTexture = GetEyeAdaptationTexture(GraphBuilder, View);
FRDGTextureRef EyeAdaptationTexture = LastEyeAdaptationTexture;
// Histogram defaults to black because the histogram eye adaptation pass is used for the manual metering mode.
FRDGTextureRef HistogramTexture = BlackDummy.Texture;
const FEngineShowFlags& EngineShowFlags = View.Family->EngineShowFlags;
const bool bVisualizeHDR = EngineShowFlags.VisualizeHDR;
const bool bViewFamilyOutputInHDR = GRHISupportsHDROutput && IsHDREnabled();
const bool bVisualizeGBufferOverview = IsVisualizeGBufferOverviewEnabled(View);
const bool bVisualizeGBufferDumpToFile = IsVisualizeGBufferDumpToFileEnabled(View);
const bool bVisualizeGBufferDumpToPIpe = IsVisualizeGBufferDumpToPipeEnabled(View);
const bool bOutputInHDR = IsPostProcessingOutputInHDR();
```
读取参数并设置
```c#
TOverridePassSequence<EPass> PassSequence(ViewFamilyOutput);
PassSequence.SetNames(PassNames, UE_ARRAY_COUNT(PassNames));
PassSequence.SetEnabled(EPass::VisualizeStationaryLightOverlap, EngineShowFlags.StationaryLightOverlap);
PassSequence.SetEnabled(EPass::VisualizeLightCulling, EngineShowFlags.VisualizeLightCulling);
#if WITH_EDITOR
PassSequence.SetEnabled(EPass::SelectionOutline, GIsEditor && EngineShowFlags.Selection && EngineShowFlags.SelectionOutline && !EngineShowFlags.Wireframe && !bVisualizeHDR && !IStereoRendering::IsStereoEyeView(View));
PassSequence.SetEnabled(EPass::EditorPrimitive, FSceneRenderer::ShouldCompositeEditorPrimitives(View));
#else
PassSequence.SetEnabled(EPass::SelectionOutline, false);
PassSequence.SetEnabled(EPass::EditorPrimitive, false);
#endif
PassSequence.SetEnabled(EPass::VisualizeShadingModels, EngineShowFlags.VisualizeShadingModels);
PassSequence.SetEnabled(EPass::VisualizeGBufferHints, EngineShowFlags.GBufferHints);
PassSequence.SetEnabled(EPass::VisualizeSubsurface, EngineShowFlags.VisualizeSSS);
PassSequence.SetEnabled(EPass::VisualizeGBufferOverview, bVisualizeGBufferOverview || bVisualizeGBufferDumpToFile || bVisualizeGBufferDumpToPIpe);
PassSequence.SetEnabled(EPass::VisualizeHDR, EngineShowFlags.VisualizeHDR);
#if WITH_EDITOR
PassSequence.SetEnabled(EPass::PixelInspector, View.bUsePixelInspector);
#else
PassSequence.SetEnabled(EPass::PixelInspector, false);
#endif
PassSequence.SetEnabled(EPass::HMDDistortion, EngineShowFlags.StereoRendering && EngineShowFlags.HMDDistortion);
PassSequence.SetEnabled(EPass::HighResolutionScreenshotMask, IsHighResolutionScreenshotMaskEnabled(View));
PassSequence.SetEnabled(EPass::PrimaryUpscale, PaniniConfig.IsEnabled() || (View.PrimaryScreenPercentageMethod == EPrimaryScreenPercentageMethod::SpatialUpscale && PrimaryViewRect.Size() != View.GetSecondaryViewRectSize()));
PassSequence.SetEnabled(EPass::SecondaryUpscale, View.RequiresSecondaryUpscale() || View.Family->GetSecondarySpatialUpscalerInterface() != nullptr);
```
这些操作一直到`PassSequence.Finalize();`。
### 后处理Pass处理
主要的Pass有这么一些
```c#
TEXT("MotionBlur"),
TEXT("Tonemap"),
TEXT("FXAA"),
TEXT("PostProcessMaterial (AfterTonemapping)"),
TEXT("VisualizeDepthOfField"),
TEXT("VisualizeStationaryLightOverlap"),
TEXT("VisualizeLightCulling"),
TEXT("SelectionOutline"),
TEXT("EditorPrimitive"),
TEXT("VisualizeShadingModels"),
TEXT("VisualizeGBufferHints"),
TEXT("VisualizeSubsurface"),
TEXT("VisualizeGBufferOverview"),
TEXT("VisualizeHDR"),
TEXT("PixelInspector"),
TEXT("HMDDistortion"),
TEXT("HighResolutionScreenshotMask"),
TEXT("PrimaryUpscale"),
TEXT("SecondaryUpscale")
```
之前读取了参数对这些Pass是否开启进行了设置。之后以这种格式使用Shader对传入的图形进行后处理。
```c#
if (PassSequence.IsEnabled(EPass::MotionBlur))
{
FMotionBlurInputs PassInputs;
PassSequence.AcceptOverrideIfLastPass(EPass::MotionBlur, PassInputs.OverrideOutput);
PassInputs.SceneColor = SceneColor;
PassInputs.SceneDepth = SceneDepth;
PassInputs.SceneVelocity = Velocity;
PassInputs.Quality = GetMotionBlurQuality();
PassInputs.Filter = GetMotionBlurFilter();
// Motion blur visualization replaces motion blur when enabled.
if (bVisualizeMotionBlur)
{
SceneColor = AddVisualizeMotionBlurPass(GraphBuilder, View, PassInputs);
}
else
{
SceneColor = AddMotionBlurPass(GraphBuilder, View, PassInputs);
}
}
SceneColor = AddAfterPass(EPass::MotionBlur, SceneColor);
```
这些效果的代码都在UnrealEngine\Engine\Source\Runtime\Renderer\Private\PostProcess中。
### 后处理材质调用
AddPostProcessMaterialChain()
=》
AddPostProcessMaterialPass(),即实际的绘制函数。最后在AddDrawScreenPass()中进行绘制,调用链为AddDrawScreenPass() => DrawScreenPass() => DrawPostProcessPass()。
### 推荐参考的后处理代码
PostProcessBloomSetup.h
VisualizeShadingModels.cpp
View File
@@ -0,0 +1,16 @@
#### 渲染循环发起以及渲染函数
渲染更新由UGameEngine::Tick()发起。
```
UGameEngine::Tick
|
-RedrawViewports()
|
-GameViewport->Viewport->Draw
|
-EnqueueBeginRenderFrame()
SetRequiresVsync()
EnqueueEndRenderFrame()
```
#### FDeferredShadingSceneRenderer
FDeferredShadingSceneRenderer继承自FSceneRenderer,从Render()函数中可以了解到延迟渲染的整个过程以及每个Pass的渲染流程。
View File
@@ -0,0 +1,314 @@
# Yivanlee 添加Pass与GBuffer笔记
### 给BaseScalability.ini 添加渲染质量命令行
```ini
[EffectsQuality@0]
[EffectsQuality@1]
r.ToonDataMaterials=0
[EffectsQuality@2]
[EffectsQuality@3]
[EffectsQuality@Cine]
r.ToonDataMaterials=1
```
### 增加bUsesToonData选项
1. MaterialRelevance.h的FMaterialRelevance
2. HLSLMaterialTranslator.h与HLSLMaterialTranslator.cpp的FHLSLMaterialTranslator类
3. MaterialInterface.cpp的UMaterialInterface::GetRelevance_Internal
4. PrimitiveSceneInfo.cpp的FBatchingSPDI.DrawMesh()
5. SceneCore.h的FStaticMeshBatchRelevance类
### 定义Stat宏
RenderCore.cpp与RenderCore.h里定义ToonDataPass渲染Stat。
```c#
//h
DECLARE_CYCLE_STAT_EXTERN(TEXT("ToonData pass drawing"), STAT_ToonDataPassDrawTime, STATGROUP_SceneRendering, RENDERCORE_API);
//cpp
DEFINE_STAT(STAT_ToonDataPassDrawTime);
```
BasePassRendering.cpp里定义渲染状态宏。
```c#
DECLARE_CYCLE_STAT(TEXT("ToonDataPass"), STAT_CLM_ToonDataPass, STATGROUP_CommandListMarkers);
DECLARE_CYCLE_STAT(TEXT("AfterToonDataPass"), STAT_CLM_AfterToonDataPass, STATGROUP_CommandListMarkers);
```
### 添加渲染用的RT
SceneRenderTargets.h与SceneRenderTargets.cpp
```c++
//h
TRefCountPtr<IPooledRenderTarget> ToonBufferA;
//cpp
FSceneRenderTargets::FSceneRenderTargets(const FViewInfo& View, const FSceneRenderTargets& SnapshotSource)
: LightAccumulation(GRenderTargetPool.MakeSnapshot(SnapshotSource.LightAccumulation))
···
, ToonBufferA(GRenderTargetPool.MakeSnapshot(SnapshotSource.ToonBufferA))
```
修改SetupSceneTextureUniformParameters(),在GBuffer代码段中增加`SceneTextureParameters.ToonBufferATexture = bCanReadGBufferUniforms && EnumHasAnyFlags(SetupMode, ESceneTextureSetupMode::GBufferF) && SceneContext.ToonBufferA ? GetRDG(SceneContext.ToonBufferA) : BlackDefault2D;`
在SceneTextureParameters.h与SceneTextureParameters.cpp中将新增加的RT添加到FSceneTextureParameters中并且在GetSceneTextureParameters中注册RT,并在另一个同名函数中添加`Parameters.ToonBufferATexture = (*SceneTextureUniformBuffer)->ToonBufferATexture;`。
在FSceneTextureUniformParameters中添加`SHADER_PARAMETER_RDG_TEXTURE(Texture2D, ToonBufferATexture)`
### 添加SceneVisibility中的ToonDataPass定义
在SceneVisibility.h中的MarkRelevant()添加
```c#
if (StaticMeshRelevance.bUseToonData)
{
DrawCommandPacket.AddCommandsForMesh(PrimitiveIndex, PrimitiveSceneInfo, StaticMeshRelevance, StaticMesh, Scene, bCanCache, EMeshPass::ToonDataPass);
}
```
在ComputeDynamicMeshRelevance()中添加
```c#
if (ViewRelevance.bUsesToonData)
{
PassMask.Set(EMeshPass::ToonDataPass);
View.NumVisibleDynamicMeshElements[EMeshPass::ToonDataPass] += NumElements;
}
```
#### 修改DecodeGBufferData()以及相关函数
- 修改RayTracingDeferredShadingCommon.ush的DecodeGBufferData()
- 修改DeferredShadingCommon.ush中的FGBufferData,添加ToonDataA变量,并修改DecodeGBufferData()、GetGBufferDataUint()、GetGBufferData()等函数。
- 修改SceneTextureParameters.ush中的ToonData变量声明`Texture2D ToonBufferATexture;`、`#define ToonBufferATextureSampler GlobalPointClampedSampler`以及`GetGBufferDataFromSceneTextures()`;SceneTexturesCommon.ush中的`#define SceneTexturesStruct_ToonBufferATextureSampler SceneTexturesStruct.PointClampSampler`(修改示意见下方代码)。
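按上面列表的约定,解码部分的改动大致如下(仅为示意;BufferUV与GBuffer为所在函数中已有的局部变量):
```c#
// 示意:在 GetGBufferDataFromSceneTextures() 中采样 ToonBufferATexture 并填入 FGBufferData.ToonDataA
float4 ToonBufferA = Texture2DSampleLevel(ToonBufferATexture, ToonBufferATextureSampler, BufferUV, 0);
GBuffer.ToonDataA = ToonBufferA;
```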
### 增加ToonDataPass MeshDrawPass已实现增加GBuffer
- 在MeshPassProcessor.h增加ToonDataPass MeshDrawPass定义。
- 在DeferredShadingRenderer.h添加渲染函数声明。
- 在新添加的ToonDataRendering.h与ToonDataRendering.cpp中添加MeshDrawPass声明与定义。
- 在ToonDataPassShader.usf中实现
```
// Copyright Epic Games, Inc. All Rights Reserved.
/*=============================================================================
ToonDataPassShader.usf: Outputs toon data to ToonBuffer(由 AnisotropyPassShader.usf 改写而来)
=============================================================================*/
#include "Common.ush"
#include "/Engine/Generated/Material.ush"
#include "/Engine/Generated/VertexFactory.ush"
#include "DeferredShadingCommon.ush"
struct FToonDataPassVSToPS
{
float4 Position : SV_POSITION;
FVertexFactoryInterpolantsVSToPS Interps;
#if USE_WORLD_POSITION_EXCLUDING_SHADER_OFFSETS
float3 PixelPositionExcludingWPO : TEXCOORD7;
#endif
};
#if USING_TESSELLATION
struct FAnisotropyPassVSToDS
{
FVertexFactoryInterpolantsVSToDS FactoryInterpolants;
float4 Position : VS_To_DS_Position;
#if USE_WORLD_POSITION_EXCLUDING_SHADER_OFFSETS
float3 PixelPositionExcludingWPO : TEXCOORD7;
#endif
OPTIONAL_VertexID_VS_To_DS
};
#define FVertexOutput FAnisotropyPassVSToDS
#define VertexFactoryGetInterpolants VertexFactoryGetInterpolantsVSToDS
#else
#define FVertexOutput FToonDataPassVSToPS
#define VertexFactoryGetInterpolants VertexFactoryGetInterpolantsVSToPS
#endif
#if USING_TESSELLATION
#define FPassSpecificVSToDS FAnisotropyPassVSToDS
#define FPassSpecificVSToPS FToonDataPassVSToPS
FAnisotropyPassVSToDS PassInterpolate(FAnisotropyPassVSToDS a, float aInterp, FAnisotropyPassVSToDS b, float bInterp)
{
FAnisotropyPassVSToDS O;
O.FactoryInterpolants = VertexFactoryInterpolate(a.FactoryInterpolants, aInterp, b.FactoryInterpolants, bInterp);
#if USE_WORLD_POSITION_EXCLUDING_SHADER_OFFSETS
TESSELLATION_INTERPOLATE_MEMBER(PixelPositionExcludingWPO);
#endif
return O;
}
FToonDataPassVSToPS PassFinalizeTessellationOutput(FAnisotropyPassVSToDS Interpolants, float4 WorldPosition, FMaterialTessellationParameters MaterialParameters)
{
FToonDataPassVSToPS O;
O.Interps = VertexFactoryAssignInterpolants(Interpolants.FactoryInterpolants);
O.Position = mul(WorldPosition, ResolvedView.TranslatedWorldToClip);
#if USE_WORLD_POSITION_EXCLUDING_SHADER_OFFSETS
O.PixelPositionExcludingWPO = Interpolants.PixelPositionExcludingWPO;
#endif
return O;
}
#include "Tessellation.ush"
#endif
/*=============================================================================
* Vertex Shader
*============================================================================*/
void MainVertexShader(
FVertexFactoryInput Input,
OPTIONAL_VertexID
out FVertexOutput Output
#if USE_GLOBAL_CLIP_PLANE && !USING_TESSELLATION
, out float OutGlobalClipPlaneDistance : SV_ClipDistance
#endif
#if INSTANCED_STEREO
, uint InstanceId : SV_InstanceID
#if !MULTI_VIEW
, out float OutClipDistance : SV_ClipDistance1
#else
, out uint ViewportIndex : SV_ViewPortArrayIndex
#endif
#endif
)
{
#if INSTANCED_STEREO
const uint EyeIndex = GetEyeIndex(InstanceId);
ResolvedView = ResolveView(EyeIndex);
#if !MULTI_VIEW
OutClipDistance = 0.0;
#else
ViewportIndex = EyeIndex;
#endif
#else
uint EyeIndex = 0;
ResolvedView = ResolveView();
#endif
FVertexFactoryIntermediates VFIntermediates = GetVertexFactoryIntermediates(Input);
float4 WorldPos = VertexFactoryGetWorldPosition(Input, VFIntermediates);
float4 WorldPositionExcludingWPO = WorldPos;
float3x3 TangentToLocal = VertexFactoryGetTangentToLocal(Input, VFIntermediates);
FMaterialVertexParameters VertexParameters = GetMaterialVertexParameters(Input, VFIntermediates, WorldPos.xyz, TangentToLocal);
// Isolate instructions used for world position offset
// As these cause the optimizer to generate different position calculating instructions in each pass, resulting in self-z-fighting.
// This is only necessary for shaders used in passes that have depth testing enabled.
{
WorldPos.xyz += GetMaterialWorldPositionOffset(VertexParameters);
}
#if USING_TESSELLATION
// Transformation is done in Domain shader when tessellating
Output.Position = WorldPos;
#else
{
float4 RasterizedWorldPosition = VertexFactoryGetRasterizedWorldPosition(Input, VFIntermediates, WorldPos);
#if ODS_CAPTURE
float3 ODS = OffsetODS(RasterizedWorldPosition.xyz, ResolvedView.TranslatedWorldCameraOrigin.xyz, ResolvedView.StereoIPD);
Output.Position = INVARIANT(mul(float4(RasterizedWorldPosition.xyz + ODS, 1.0), ResolvedView.TranslatedWorldToClip));
#else
Output.Position = INVARIANT(mul(RasterizedWorldPosition, ResolvedView.TranslatedWorldToClip));
#endif
}
#if INSTANCED_STEREO && !MULTI_VIEW
BRANCH
if (IsInstancedStereo())
{
// Clip at the center of the screen
OutClipDistance = dot(Output.Position, EyeClipEdge[EyeIndex]);
// Scale to the width of a single eye viewport
Output.Position.x *= 0.5 * ResolvedView.HMDEyePaddingOffset;
// Shift to the eye viewport
Output.Position.x += (EyeOffsetScale[EyeIndex] * Output.Position.w) * (1.0f - 0.5 * ResolvedView.HMDEyePaddingOffset);
}
#elif XBOXONE_BIAS_HACK
// XB1 needs a bias in the opposite direction to fix FORT-40853
// XBOXONE_BIAS_HACK is defined only in a custom node in a particular material
// This should be removed with a future shader compiler update
Output.Position.z -= 0.0001 * Output.Position.w;
#endif
#if USE_GLOBAL_CLIP_PLANE
OutGlobalClipPlaneDistance = dot(ResolvedView.GlobalClippingPlane, float4(WorldPos.xyz - ResolvedView.PreViewTranslation.xyz, 1));
#endif
#endif
#if USING_TESSELLATION
Output.FactoryInterpolants = VertexFactoryGetInterpolants( Input, VFIntermediates, VertexParameters );
#else
Output.Interps = VertexFactoryGetInterpolants(Input, VFIntermediates, VertexParameters);
#endif // #if USING_TESSELLATION
#if INSTANCED_STEREO
#if USING_TESSELLATION
Output.Interps.InterpolantsVSToPS.EyeIndex = EyeIndex;
#else
Output.Interps.EyeIndex = EyeIndex;
#endif
#endif
#if USE_WORLD_POSITION_EXCLUDING_SHADER_OFFSETS
Output.PixelPositionExcludingWPO = WorldPositionExcludingWPO.xyz;
#endif
OutputVertexID( Output );
}
/*=============================================================================
* Pixel Shader
*============================================================================*/
void MainPixelShader(
in INPUT_POSITION_QUALIFIERS float4 SvPosition : SV_Position,
FVertexFactoryInterpolantsVSToPS Input
#if USE_WORLD_POSITION_EXCLUDING_SHADER_OFFSETS
, float3 PixelPositionExcludingWPO : TEXCOORD7
#endif
OPTIONAL_IsFrontFace
OPTIONAL_OutDepthConservative
, out float4 ToonBufferA : SV_Target0
#if MATERIALBLENDING_MASKED_USING_COVERAGE
, out uint OutCoverage : SV_Coverage
#endif
)
{
#if INSTANCED_STEREO
ResolvedView = ResolveView(Input.EyeIndex);
#else
ResolvedView = ResolveView();
#endif
// Manual clipping here (alpha-test, etc)
FMaterialPixelParameters MaterialParameters = GetMaterialPixelParameters(Input, SvPosition);
FPixelMaterialInputs PixelMaterialInputs;
#if USE_WORLD_POSITION_EXCLUDING_SHADER_OFFSETS
float4 ScreenPosition = SvPositionToResolvedScreenPosition(SvPosition);
float3 TranslatedWorldPosition = SvPositionToResolvedTranslatedWorld(SvPosition);
CalcMaterialParametersEx(MaterialParameters, PixelMaterialInputs, SvPosition, ScreenPosition, bIsFrontFace, TranslatedWorldPosition, PixelPositionExcludingWPO);
#else
CalcMaterialParameters(MaterialParameters, PixelMaterialInputs, SvPosition, bIsFrontFace);
#endif
#if OUTPUT_PIXEL_DEPTH_OFFSET
ApplyPixelDepthOffsetToMaterialParameters(MaterialParameters, PixelMaterialInputs, OutDepth);
#endif
#if MATERIALBLENDING_MASKED_USING_COVERAGE
OutCoverage = DiscardMaterialWithPixelCoverage(MaterialParameters, PixelMaterialInputs);
#endif
//float Anisotropy = GetMaterialAnisotropy(PixelMaterialInputs);
//float3 WorldTangent = MaterialParameters.WorldTangent;
ToonBufferA = float4(0.2, 0.1, 0.8, 1.0);
}
```
View File
@@ -0,0 +1,170 @@
## 描边
- RenderToonOutlineToBaseColor
- RenderToonOutlineToSceneColor
相关Shader
- ToonDataPassShader.usf
- ToonOutline.usf
- ToonShadingModel.ush
相关RT
```
//Begin YivanLee's Modify
TRefCountPtr<IPooledRenderTarget> SceneColorCopy;
TRefCountPtr<IPooledRenderTarget> BaseColorCopy;
TRefCountPtr<IPooledRenderTarget> ToonBufferDepth;
TRefCountPtr<IPooledRenderTarget> ToonOutlineTexture;
TRefCountPtr<IPooledRenderTarget> ToonOutlineMaskBlurTexture;
TRefCountPtr<IPooledRenderTarget> ToonIDOutlineTexture;
//ToonDataTexture01 is ToonNormal
TRefCountPtr<IPooledRenderTarget> ToonDataTexture01;
//ToonDataTexture02 is R: ShadowController G: B: A:
TRefCountPtr<IPooledRenderTarget> ToonDataTexture02;
//ToonDataTexture03 is OutlineColorMask and OutlineMask
TRefCountPtr<IPooledRenderTarget> ToonDataTexture03;
//ToonDataTexture04 is IDTexture
TRefCountPtr<IPooledRenderTarget> ToonDataTexture04;
//End YivanLee's Modify
```
### GBuffer
ToonData0 = float4(N * 0.5 + 0.5, 1.0f);//WorldNormal
ToonData1 = GetMaterialToonDataA(MaterialParameters);//Shadow controller
ToonData2 = GetMaterialToonDataB(MaterialParameters);//OutlinleColor,OutlineMask
ToonData3 = GetMaterialToonDataC(MaterialParameters);//IDTexture,OutlineWidth
### BasePass部分
位于FDeferredShadingSceneRenderer::RenderBasePass()最后,
```
if (ShouldRenderToonDataPass())
{
//Begin Recreate ToonData Render targets
SceneContext.ReleaseToonDataTarget();
SceneContext.AllocateToonDataTarget(GraphBuilder.RHICmdList);
SceneContext.ReleaseToonDataGBuffer();
SceneContext.AllocateToonDataGBuffer(GraphBuilder.RHICmdList);
//End Recreate ToonData Render targets
TStaticArray<FRDGTextureRef, MaxSimultaneousRenderTargets> ToonDataPassTextures;
uint32 ToonDataTextureCount = SceneContext.GetToonDataGBufferRenderTargets(GraphBuilder, ToonDataPassTextures);
TArrayView<FRDGTextureRef> ToonDataPassTexturesView = MakeArrayView(ToonDataPassTextures.GetData(), ToonDataTextureCount);
ERenderTargetLoadAction ToonTargetsAction;
if (bEnableParallelBasePasses)//Windows DirectX12
{
ToonTargetsAction = ERenderTargetLoadAction::ELoad;
}
else//Windows DirectX11
{
ToonTargetsAction = ERenderTargetLoadAction::EClear;
}
FRenderTargetBindingSlots ToonDataPassRenderTargets = GetRenderTargetBindings(ToonTargetsAction, ToonDataPassTexturesView);
ToonDataPassRenderTargets.DepthStencil = FDepthStencilBinding(SceneDepthTexture, ERenderTargetLoadAction::ELoad, ERenderTargetLoadAction::ELoad, ExclusiveDepthStencil);
ToonDataPassRenderTargets.ShadingRateTexture = GVRSImageManager.GetVariableRateShadingImage(GraphBuilder, ViewFamily, nullptr, EVRSType::None);
AddSetCurrentStatPass(GraphBuilder, GET_STATID(STAT_CLM_ToonDataPass));
RenderToonDataPass(GraphBuilder, ToonDataPassTextures, ToonDataTextureCount, ToonDataPassRenderTargets, bEnableParallelBasePasses);
AddSetCurrentStatPass(GraphBuilder, GET_STATID(STAT_CLM_AfterToonDataPass));
RenderToonOutlineToBaseColor(GraphBuilder, SceneDepthTexture, bEnableParallelBasePasses);
}
```
#### RenderNormalDepthOutline
- ToonOutlineMain:使用拉普拉斯算子与Sobel算子计算边缘并混合结果。分别对Depth与Normal求梯度,最后以length(float4(Normal, Depth))作为边缘强度(示意见下方代码)。
- ToonIDOutlinePSMain:使用Sobel算子计算ID描边。
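一个基于法线+深度的Sobel边缘检测最小示意(非引擎源码;SampleNormalDepth为假设的采样函数,返回float4(Normal.xyz, Depth)):
```c#
// 示意:对 Normal + Depth 做 Sobel 边缘检测
float DetectEdge(float2 UV, float2 TexelSize)
{
    // 3x3 Sobel 卷积核(x / y 两个方向)
    const float Kx[9] = { -1, 0, 1, -2, 0, 2, -1, 0, 1 };
    const float Ky[9] = { -1, -2, -1, 0, 0, 0, 1, 2, 1 };

    float4 Gx = 0;
    float4 Gy = 0;
    int Index = 0;

    [unroll]
    for (int y = -1; y <= 1; ++y)
    {
        [unroll]
        for (int x = -1; x <= 1; ++x)
        {
            float4 NormalDepth = SampleNormalDepth(UV + float2(x, y) * TexelSize); // 假设的采样函数
            Gx += NormalDepth * Kx[Index];
            Gy += NormalDepth * Ky[Index];
            ++Index;
        }
    }
    // 对应文中 length(float4(Normal, Depth)) 的合成方式
    return length(Gx) + length(Gy);
}
```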
#### RenderToonIDOutline
#### CombineOutlineToBaseColor
### 渲染管线Render()
RenderToonOutlineToSceneColor()位于RenderLights()之后与RenderDeferredReflectionsAndSkyLighting()之前的位置。
## ShaderModel
### DefaultLitBxDF
```c#
Lighting.Diffuse = AreaLight.FalloffColor * (Falloff * NoL) * Diffuse_Lambert( GBuffer.DiffuseColor );
Lighting.Specular = AreaLight.FalloffColor * (Falloff * NoL) * SpecularGGX(GBuffer.Roughness, GBuffer.SpecularColor, Context, NoL, AreaLight);
```
### Toon
- ToonDataAR ShadowOffset GBA 未使用
- ToonDataBRGB OutlineColor A OutlineMask
- ToonDataCRGB IDMap A OutlineWidth
- PreIntegratedToonBRDF:R 为NoL预积分Ramp,G 为GGX高光预积分值(查表方式见列表后的示意代码)。
- PreIntegratedToonSkinBRDF:RGB为皮肤预积分颜色。
- SubsurfaceColor:该数据存放在CustomData.rgb位置,在天光计算中起作用。
PS.ToonShadingStandard没有使用SubsurfaceColor。
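预积分贴图的查表方式大致如下(示意;贴图与采样器名称为假设,实际以项目中添加进ViewUniformBuffer的名称为准):
```c#
// 示意:用 NoL/Metallic 查 PreIntegratedToonBRDF,用 ShadowMask/Opacity 查 PreIntegratedToonSkinBRDF
float GetPreintegratedToonBRDF(float NoL, float Metallic)
{
    return Texture2DSampleLevel(View.PreIntegratedToonBRDF, View.PreIntegratedToonBRDFSampler, float2(NoL, Metallic), 0).r;
}

float3 GetPreintegratedToonSkinBRDF(float ShadowMask, float Opacity)
{
    return Texture2DSampleLevel(View.PreIntegratedToonSkinBRDF, View.PreIntegratedToonSkinBRDFSampler, float2(ShadowMask, Opacity), 0).rgb;
}
```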
#### ToonShadingStandard
在原始的PBR公式做了以下修改
固有色部分
1. 使用ShadowOffset(ToonDataA.r)来控制阴影区域的偏移也就是类似UTS的Step。但使用lerp(Context.NoL, 1.0, ShadowOffset),这导致偏移并不易于控制。
2. 计算FallOffMask预积分衰减调整系数。使用ShadowOffset过的NoL与Metalic作为UV对PreIntegratedToonBRDF图进行查表返回r值。
3. Lighting.Diffuse = AreaLight.FalloffColor * FallOffMask * GBuffer.BaseColor / 3.1415927f;
高光部分
1. D预积分GGX使用NoH与Roughness作为UV对PreIntegratedToonBRDF进行查表返回g值。
2. F边缘光效果系数return smoothstep(0.67, 1.0, 1 - NoV);
3. Lighting.Specular = (F + D) * (AreaLight.FalloffColor * GBuffer.SpecularColor * FallOffMask * 8);
```c++
float ShadowOffset = GBuffer.ToonDataA.r;
float FallOffMask = Falloff * GetPreintegratedToonBRDF(lerp(Context.NoL, 1.0, ShadowOffset), GBuffer.Metallic);
Lighting.Diffuse = AreaLight.FalloffColor * FallOffMask * GBuffer.BaseColor / 3.1415927f;
float R2 = GBuffer.Roughness * GBuffer.Roughness;
float ToonGGX = GetPreintegratedToonSpecBRDF(Context.NoH, GBuffer.Roughness);
float D = lerp(ToonGGX, 0.0, R2);
float3 F = GetToonF(Context.NoV);
Lighting.Specular = (F + D) * (AreaLight.FalloffColor * GBuffer.SpecularColor * FallOffMask * 8);
```
#### ToonShadingSkin
在ToonShadingStandard的基础上做了以下修改
固有色部分:
1. 使用ShadowOffset来偏移Context.NoL * Shadow.SurfaceShadow来获得ShadowMask。
2. 使用ShadowMask与Opacity作为UV来查询PreIntegratedToonSkinBRDF返回rgb值。
3. Lighting.Diffuse = AreaLight.FalloffColor * FallOffMask * GBuffer.BaseColor / 3.1415927f * PreintegratedBRDF;
高光部分相同。
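按上述三步还原的固有色部分示意如下(FallOffMask沿用Standard版本的计算,此处为推测,非引擎/项目源码):
```c#
// 示意:ToonShadingSkin 固有色部分
float  ShadowOffset      = GBuffer.ToonDataA.r;
float  ShadowMask        = lerp(Context.NoL * Shadow.SurfaceShadow, 1.0, ShadowOffset);
float  FallOffMask       = Falloff * GetPreintegratedToonBRDF(ShadowMask, GBuffer.Metallic); // 推测:沿用Standard的做法
float3 PreintegratedBRDF = GetPreintegratedToonSkinBRDF(ShadowMask, Opacity);
Lighting.Diffuse = AreaLight.FalloffColor * FallOffMask * GBuffer.BaseColor / 3.1415927f * PreintegratedBRDF;
```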
#### ToonShadingHair
在ToonShadingStandard的基础上做了以下修改
固有色部分相同。
高光部分增加各向异性计算:
```c++
float3 H = normalize(L + V);
float HoL = dot(H, geotangent);
float sinTH = saturate(sqrt(1 - HoL * HoL));
float spec = pow(sinTH, lerp(256, 4, GBuffer.Roughness));
float R2 = GBuffer.Roughness * GBuffer.Roughness;
float3 F = GetToonF(Context.NoV);
spec += F;
Lighting.Specular = AreaLight.FalloffColor * FallOffMask * spec * GBuffer.BaseColor;
```
#### 天光(环境光)
阴影部分的光照主要来自环境光,逻辑位于ReflectionEnvironmentSkyLighting():
```
/*BeginYivanLee's Modify*/
float3 SkyLighting = float3(0.0, 0.0, 0.0);
BRANCH
if(ShadingModelID == SHADINGMODELID_TOONSTANDARD || ShadingModelID == SHADINGMODELID_TOONHAIR || ShadingModelID == SHADINGMODELID_TOONSKIN)
{
float3 SubsurfaceColor = ExtractSubsurfaceColor(GBuffer);
float3 SkyToonLighting = GBuffer.BaseColor * SubsurfaceColor.rgb;
float3 SkyDiffuseLighting = SkyLightDiffuse(GBuffer, AmbientOcclusion, BufferUV, ScreenPosition, BentNormal, DiffuseColor) * CloudAmbientOcclusion;
SkyLighting = lerp(SkyDiffuseLighting, SkyToonLighting, 0.8f);
}
else
{
SkyLighting = SkyLightDiffuse(GBuffer, AmbientOcclusion, BufferUV, ScreenPosition, BentNormal, DiffuseColor) * CloudAmbientOcclusion;
}
/*EndYivanLee's Modify*/
```
View File
@@ -0,0 +1,210 @@
---
title: 参考SubsurfaceProfile 模改ToonDataAsset
date: 2023-02-04 20:38:39
excerpt:
tags:
rating: ⭐⭐
---
# 原理
## PreShader
在材质编译期,将名为 **__SubsurfaceProfile** 的Uniform表达式塞入材质中。
```c++
int32 FHLSLMaterialTranslator::NumericParameter(EMaterialParameterType ParameterType, FName ParameterName, const UE::Shader::FValue& InDefaultValue)
{
const UE::Shader::EValueType ValueType = GetShaderValueType(ParameterType);
check(InDefaultValue.GetType() == ValueType);
UE::Shader::FValue DefaultValue(InDefaultValue);
// If we're compiling a function, give the function a chance to override the default parameter value
FMaterialParameterMetadata Meta;
if (GetParameterOverrideValueForCurrentFunction(ParameterType, ParameterName, Meta))
{
DefaultValue = Meta.Value.AsShaderValue();
check(DefaultValue.GetType() == ValueType);
}
const uint32* PrevDefaultOffset = DefaultUniformValues.Find(DefaultValue);
uint32 DefaultOffset;
if (PrevDefaultOffset)
{
DefaultOffset = *PrevDefaultOffset;
}
else
{
DefaultOffset = MaterialCompilationOutput.UniformExpressionSet.AddDefaultParameterValue(DefaultValue);
DefaultUniformValues.Add(DefaultValue, DefaultOffset);
}
FMaterialParameterInfo ParameterInfo = GetParameterAssociationInfo();
ParameterInfo.Name = ParameterName;
const int32 ParameterIndex = MaterialCompilationOutput.UniformExpressionSet.FindOrAddNumericParameter(ParameterType, ParameterInfo, DefaultOffset);
return AddUniformExpression(new FMaterialUniformExpressionNumericParameter(ParameterInfo, ParameterIndex), GetMaterialValueType(ParameterType), TEXT(""));
}
```
`const int32 ParameterIndex = MaterialCompilationOutput.UniformExpressionSet.FindOrAddNumericParameter(ParameterType, ParameterInfo, DefaultOffset);`
`return AddUniformExpression(new FMaterialUniformExpressionNumericParameter(ParameterInfo, ParameterIndex), GetMaterialValueType(ParameterType), TEXT(""));`
之后在`Chunk[MP_SubsurfaceColor] = AppendVector(SubsurfaceColor, CodeSubsurfaceProfile);`将结果编译成`MaterialFloat4(MaterialFloat3(1.00000000,1.00000000,1.00000000),Material.PreshaderBuffer[2].x)`
## 填充PreShader结构体
1. 从MeshDraw框架的FMeshElementCollector::AddMesh()开始,执行`MeshBatch.MaterialRenderProxy->UpdateUniformExpressionCacheIfNeeded(Views[ViewIndex]->GetFeatureLevel());`开始更新材质的UniformExpression。
2. `FMaterialRenderProxy::UpdateUniformExpressionCacheIfNeeded()`:取得材质指针之后评估材质表达式。
3. `FMaterialRenderProxy::EvaluateUniformExpressions()`从渲染线程取得材质的ShaderMap再从ShaderMap取得UniformExpressionSet。
4. `FUniformExpressionSet::FillUniformBuffer`Dump preshader results into buffer.
1. FEmitContext::EmitPreshaderOrConstantPreshaderHeader = &UniformExpressionSet.UniformPreshaders.AddDefaulted_GetRef();
# 将ToonData的ID塞入材质
存在问题如何将SubsurfaceProfile Asset的ID塞入材质中
```c++
int32 FMaterialCompiler::ScalarParameter(FName ParameterName, float DefaultValue)
{
return NumericParameter(EMaterialParameterType::Scalar, ParameterName, DefaultValue);
}
int32 FHLSLMaterialTranslator::NumericParameter(EMaterialParameterType ParameterType, FName ParameterName, const UE::Shader::FValue& InDefaultValue)
{
const UE::Shader::EValueType ValueType = GetShaderValueType(ParameterType);
check(InDefaultValue.GetType() == ValueType);
UE::Shader::FValue DefaultValue(InDefaultValue);
// If we're compiling a function, give the function a chance to override the default parameter value
FMaterialParameterMetadata Meta;
if (GetParameterOverrideValueForCurrentFunction(ParameterType, ParameterName, Meta))
	{
		DefaultValue = Meta.Value.AsShaderValue();
check(DefaultValue.GetType() == ValueType);
}
const uint32* PrevDefaultOffset = DefaultUniformValues.Find(DefaultValue);
uint32 DefaultOffset;
if (PrevDefaultOffset)
{
DefaultOffset = *PrevDefaultOffset;
	}
	else
{
DefaultOffset = MaterialCompilationOutput.UniformExpressionSet.AddDefaultParameterValue(DefaultValue);
DefaultUniformValues.Add(DefaultValue, DefaultOffset);
}
FMaterialParameterInfo ParameterInfo = GetParameterAssociationInfo();
ParameterInfo.Name = ParameterName;
const int32 ParameterIndex = MaterialCompilationOutput.UniformExpressionSet.FindOrAddNumericParameter(ParameterType, ParameterInfo, DefaultOffset);
return AddUniformExpression(new FMaterialUniformExpressionNumericParameter(ParameterInfo, ParameterIndex), GetMaterialValueType(ParameterType), TEXT(""));
}
bool FMaterialHLSLGenerator::GetParameterOverrideValueForCurrentFunction(EMaterialParameterType ParameterType, FName ParameterName, FMaterialParameterMetadata& OutResult) const
{
	bool bResult = false;
	if (!ParameterName.IsNone())
	{
		// Give every function in the callstack on opportunity to override the parameter value
		// Parameters in outer functions take priority
		// For example, if a layer instance calls a function instance that includes an overriden parameter,
		// we want to use the value from the layer instance rather than the function instance
		for (const FFunctionCallEntry* FunctionEntry : FunctionCallStack)
		{
			const UMaterialFunctionInterface* CurrentFunction = FunctionEntry->MaterialFunction;
			if (CurrentFunction)
			{
				if (CurrentFunction->GetParameterOverrideValue(ParameterType, ParameterName, OutResult))
				{
					bResult = true;
					break;
				}
			}
		}
	}
	return bResult;
}
// Finds a parameter by name from the game thread, traversing the chain up to the BaseMaterial.
FScalarParameterValue* GameThread_GetScalarParameterValue(UMaterialInstance* MaterialInstance, FName Name)
{
UMaterialInterface* It = 0;
FMaterialParameterInfo ParameterInfo(Name); // @TODO: This will only work for non-layered parameters
while(MaterialInstance)
{
if(FScalarParameterValue* Ret = GameThread_FindParameterByName(MaterialInstance->ScalarParameterValues, ParameterInfo))
{
return Ret;
}
It = MaterialInstance->Parent;
MaterialInstance = Cast<UMaterialInstance>(It);
}
return 0;
}
template <typename ParameterType>
ParameterType* GameThread_FindParameterByName(TArray<ParameterType>& Parameters, const FHashedMaterialParameterInfo& ParameterInfo)
{
for (int32 ParameterIndex = 0; ParameterIndex < Parameters.Num(); ParameterIndex++)
{
ParameterType* Parameter = &Parameters[ParameterIndex];
if (Parameter->ParameterInfo == ParameterInfo)
{
return Parameter;
}
}
return NULL;
}
void UMaterialFunctionInstance::OverrideMaterialInstanceParameterValues(UMaterialInstance* Instance)
{
// Dynamic parameters
Instance->ScalarParameterValues = ScalarParameterValues;
Instance->VectorParameterValues = VectorParameterValues;
Instance->DoubleVectorParameterValues = DoubleVectorParameterValues;
Instance->TextureParameterValues = TextureParameterValues;
Instance->RuntimeVirtualTextureParameterValues = RuntimeVirtualTextureParameterValues;
Instance->FontParameterValues = FontParameterValues;
// Static parameters
FStaticParameterSet StaticParametersOverride = Instance->GetStaticParameters();
StaticParametersOverride.EditorOnly.StaticSwitchParameters = StaticSwitchParameterValues;
StaticParametersOverride.EditorOnly.StaticComponentMaskParameters = StaticComponentMaskParameterValues;
Instance->UpdateStaticPermutation(StaticParametersOverride);
}
```
将SubsurfaceProfile 塞入 Material
```c++
int32 UMaterialExpressionStrataLegacyConversion::Compile(class FMaterialCompiler* Compiler, int32 OutputIndex)
{
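	// 原文此处留空。按上文 PreShader 一节的思路,这里应当在编译时通过 Compiler 的
	// 标量参数接口(即 ScalarParameter/NumericParameter 一类)把名为 __SubsurfaceProfile
	// 的 Uniform 表达式写入材质;具体实现以对应引擎版本的源码为准,此处仅为注释说明。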
}
```
## MaterialRenderProxy
```c++
void SetSubsurfaceProfileRT(const USubsurfaceProfile* Ptr) { SubsurfaceProfileRT = Ptr; }
const USubsurfaceProfile* GetSubsurfaceProfileRT() const { return SubsurfaceProfileRT; }
/** 0 if not set, game thread pointer, do not dereference, only for comparison */
const USubsurfaceProfile* SubsurfaceProfileRT;
```
## UMaterialInterface
```c++
uint8 bOverrideSubsurfaceProfile:1;
TObjectPtr<class USubsurfaceProfile> SubsurfaceProfile;
void UMaterialInterface::UpdateMaterialRenderProxy(FMaterialRenderProxy& Proxy)
//还有所有子类实现 UMaterialInstance、UMaterial
USubsurfaceProfile* UMaterialInterface::GetSubsurfaceProfile_Internal() const
```
## MaterialShared.h
```c++
inline bool UseSubsurfaceProfile(FMaterialShadingModelField ShadingModel)
{
return ShadingModel.HasShadingModel(MSM_SubsurfaceProfile) || ShadingModel.HasShadingModel(MSM_Eye);
}
```
## UMaterial
```c++
USubsurfaceProfile* UMaterial::GetSubsurfaceProfile_Internal() const
{
checkSlow(IsInGameThread());
return SubsurfaceProfile;
}
```