vault backup: 2024-11-26 18:16:19

This commit is contained in:
2024-11-26 18:16:19 +08:00
parent b0061ba795
commit c7e0a80a36
36 changed files with 149 additions and 39 deletions


@@ -0,0 +1,75 @@
# WiFi password
Nswl67730588
# Printer
172.168.6.3
admin
98614258
# NAS
FS6712x
172.168.5.17
admin / root
NiceFuture0521
## SVN
- svn://172.168.5.17/ASoul_UE5/ASoul_UE5
	- admin admin 123456
- svn://172.168.5.17/EOE_UE4
	- dazhi 123456
	- yuege 123456
	- jiajie 123456
## Shared drive credentials
`//172.168.5.17/NSWL_TECH`
Map NSWL_TECH as drive Z.
Login user: dev
Password: NiceFuture0521
## Perforce
```bash
docker run -d --restart unless-stopped \
-v /volume1/Docker/perforce/p4:/p4 \
-p 1666:1666 \
blueroses/perforce-helix-p4d:2024.5
```
The large and small mocap stages use P4:
```bash
docker run -d --restart unless-stopped \
-v /volume1/Docker/perforce/p4_studio:/p4 \
-p 1660:1666 \
blueroses/perforce-helix-p4d:2024.5
```
```bash
docker run -d --restart unless-stopped \
-v /volume1/Docker/perforce/p4_studio3:/p4 \
-p 1661:1666 \
blueroses/perforce-helix-p4d:2024.5
```
```bash
docker run -d --restart unless-stopped \
-v /volume1/Docker/perforce/p4_test:/p4 \
-p 1662:1666 \
blueroses/perforce-helix-p4d:2024.5
```
## chishin/wol-go-web
```bash
docker run --name=wol -d \
-e "WOLHTTPPORT=7000" \
-p 7000:7000 \
-v /volume1/Docker/wol/externall-file-on-host.csv:/app/computer.csv \
--restart unless-stopped \
dabondi/go-rest-wol
```
```bash
docker run -d --net=host \
--env PORT=7000 \
chishin/wol-go-web
```
## VPN
- https://115.236.67.234:8443
- loujiajie
- iOS payment account
- https://secloud1.ruijie.com.cn/SSLVPNClient

02-Note/ASoul/ASoul.md

@@ -0,0 +1,92 @@
---
title: EOE-related
date: 2024-04-08 14:49:40
excerpt:
tags:
rating: ⭐
status: inprogress
destination:
share: false
obsidianUIMode: source
---
# ASoul
In the earlier phases, the software was a packaged UE client with the corresponding features built in.
## Pipeline architecture
![[导播台架构图.canvas|Untitled]]
## External data inputs
- ChingMU optical mocap (network)
- FaceMask facial-capture data (network)
- Mocap gloves (Bluetooth)
### FaceMask
FaceMask is an internally built facial-capture app; for cost reasons it still runs on an iPhone 11.
**Custom face helmets**
Each helmet is fitted to the individual performer's head (size, capture marker placement).
The helmets are also custom-lightened to reduce the load on the performer's head and extend the maximum performance time. **The main approach is a third-party extension cable for the camera**, mounting the camera on a bracket in front of the helmet.
## Stage-control software
The software uses a C/S architecture: the server receives data and synchronizes it across clients, while clients send the operators' control commands. Each role is operated on its own machine (in theory everything could run on one machine, but the UI would not fit on one screen).
Pros:
- Fault tolerance.
- Performance scaling: just add render machines and a machine to act as the server. Key hardware: GPU (render machines), high CPU clocks, fast RAM, fast NVMe storage.
- Relatively easy to learn; no UE expertise required.
- Asset management: handled by the stage-control software.
- Convenient for producing derived content.
Cons:
- The pipeline is rigid and must be followed strictly.
- Feature work requires programmers: extending or improving functionality needs experienced engineers writing code; it most likely cannot be done in Blueprints.
Services running on the server:
- Stage-control server: synchronizes data across all operator clients.
- Perforce Helix Core: version control for the project, engine, and external assets.
I would not recommend SVN; at this stage use the free tier of Perforce Helix Core (5 users and 20 workspaces).
***How some features are implemented is educated guesswork, since I have not seen the code.***
### Data receiving & motion retargeting
Receives the data described above and performs retargeting and IK according to presets. Synchronizing skeletal data uses a noticeable amount of LAN bandwidth.
### Stage character control
1. Add, remove, and show characters on stage (with transition effects).
2. Change character equipment, e.g. a glow stick or hammer in hand, wings on the back, a hat on the head.
3. Character VFX.
Hair and cloth physics simulation is also visible here.
### Per-camera preview
Roughly 20+ shots and camera positions are pre-built, switchable via **StreamDock**; a camera operator does the switching. Virtual cameras (tablet, phone) are supported.
Shots can also be pre-built as Sequence assets and imported.
### Render machines
Render the picture and feed the signal into the OBS streaming machine.
Hardware:
- GPU: Nvidia 4090
### Monitor
Shows the final streamed output. If something goes wrong (NG), the picture can be cut instantly (black screen, or a vMix Pro "please wait" card).
## OBS streaming machine & vMix Pro
The streaming machine runs OBS, receiving and mixing all signals (a mixing console is attached).
### vMix Pro
Mainly used to push slides, images, and video into the OBS streaming machine. The one-click picture cut is also implemented through it.
## Cloud services
RTC services are generally used for offline live shows to reduce latency; any provider (Alibaba Cloud, Tencent Cloud, etc.) will do.
# Outcomes
1. P4V project delivery: a separately downloadable version, with version-control conventions plus branch design and usage. ASoul's programmers and artists use only the P4V GUI.
2. Huashu says they will provide the modified engine, project, and plugins.
3. We can get the source project of the in-house facial-capture app FaceMask.
4. Code complexity: 6 programmers with 5+ years of experience took 4 months (8 months counting the rehearsal period).
5. Offline live: latency-reducing RTC service.
6. Server handover and bandwidth must be arranged with ByteDance IT.
7. Services running on the ASoul server: the stage-control UE server and P4V.
# Handover
## Handover expectations
1. How data is handed over for all non-deliverable software; P4V licenses and data.
2. Determine whether the engine code was modified: right-click the ASoul UE project, Switch Unreal Engine Version, and check whether it is a Binary Build or a Source Build. Confirm whether the stock engine directory layout can be used.
3. Ask whether VS-generated class diagrams can be provided, to judge code size and technical difficulty.
4. Review the asset-convention documentation.
5. Team structure: which skill sets are needed, for later reproduction.


@@ -0,0 +1,6 @@
- [ ] Config
-
- [ ] Content
- [ ] Source
- [ ] Plugins
- [ ] EditorTool


@@ -0,0 +1,16 @@
# Common issues
- [ ] Ghosting on the 巨蛋 VJ material.
# BP04
- Ray-marched clouds.
# BP16
- Tick
- 乃琳
	- BP Nai Lin Birthday 2023_上海 29~30 => 34~35
	- Stop Outline rendering for both the dancing and non-dancing 乃琳.
	- Stop terrain grass:
		- grass.Enable 0
- DMXComponent
	- Key out the DMX data layer.
- Scene-transition hitches.


@@ -0,0 +1,90 @@
# Retargeting act positions
- Override Instance Data
	- Transform Origin Actor: just tick the LiveArea Actor.
#
1. **BP_1_kaichang**: has significant problems.
2. BP_2_beginner: mostly fine; frame drops in 2 places.
3. BP_3_OnlyMyRailgun: fine.
# TODO
- PVW/PGM: some of the big screens stop working.
- Planar reflections.
- DMX optimization.
- CharacterMovementComponent optimization.
- Other.
## Done
1. [x] MediaPlayer hardware acceleration and TSR.
2. [x] BP_1 opening planar-reflection optimization; keyed out the scene's BP_PannelReflction.
3. [x] CPU particle optimization.
4. [x] Backup dancers' MovementComponent and Tick optimization.
5. [x] DMX Beam optimization.
6. [ ] Ultra_Dynamic_Sky directional-light cascaded shadow map settings.
#
- [ ] Sequences hang after a number finishes playing.
	- [ ] Looks like a DMX problem.
	- [ ] M_Beam_Master2
1. BP_1_kaichang
	1. [ ] Opening VFX light particles drop frames.
	2. [ ] Opening reflections are too sharp.
2. [ ] BP_2_beginner: hitch with a yellow flash when the lights come on.
3. [ ] BP_3_OnlyMyRailgun: frame drops at the end.
4. [ ] BP_4_28Reasons: hangs at the end.
5. [ ] BP_6_Mago: hangs after the right-side light flashes.
6. [ ] Number 8 hangs at the end.
7. [ ] Number 9 (QQ炫舞) hitches.
	1. [ ] The Actors related to BP_PlaneFaceCam_Wgt are set up incorrectly.
8. [ ] Number 11 drops frames in its middle and later parts.
# BP_1_kaichang
## Performance issues
1. Particle performance problems:
	1. NS_StarFlow (/Cinematics/FX/star/NS_StarFlow) is a CPU particle system.
	2. NS_StarCenter (/Cinematics/FX/star/NS_StarCenter) is a CPU particle system.
2. The scene's Sequence also has a Spawnable Planar Reflection that needs removing.
3. BP_PlanarReflection should be disabled on shots that do not need planar reflection.
4. Sharp performance drop starting at 2905077:
	1. DMX beam cost:
		1. M_Beam_Master1 SM_Beam_RM (the 6 side lights on each side):
			1. DMX Beam Visibility Ratio 1 => 0.12
		2. The other Beam materials in the scene should get the same change.
	2. BP_PannelReflction renders volumetric clouds and shadows.
	3. Ultra_Dynamic_Sky directional-light cascaded shadow map settings.
5. 2906750:
	1. M_Beam_Master1 SM_Beam_RM
6. **2913380**:
	1. Crowd-of-贝拉 scene:
		1. Strip the CharacterMovement tick and Update-related work.
7. 2916200:
	1. Translucency cost. For the few red spotlights afterwards, set DMX Beam Visibility Ratio 1 => 0.06 and adjust the beam aperture in the material parameter collection.
		1. M_Beam_Master15 SM_Beam_RM
		2. M_Beam_Master16 SM_Beam_RM
		3. M_Beam_Master17 SM_Beam_RM
		4. M_Beam_Master18 SM_Beam_RM
		5. M_Beam_Master19 SM_Beam_RM
8. 2916349:
	1. M_Beam_Master21
	2. M_Beam_Master14
9. 2921820:
	1. NS_Pyrotechnics_01 (/Cinematics/FX/FestivalFX/FX/NS_Pyrotechnics_01)
10. **2926862**:
	1. Multiple 贝拉 on stage cause performance problems.
## Non-performance suggestions
1. Cam_yaobi_02 2892625: temporarily disable screen-space reflections here and constrain the planar-reflection extent.
	1. Set screen-space reflection intensity to 0.
2. Turn off shadow casting on the glowing 贝拉.
3. The starlight reflections disappear during 2899860~2900140.
# BP_16_YYDYG
- Consider keying out BP_PannelReflction, since the platform under the characters has no reflection anyway.
- Remove the volumetric clouds from the Sequencer.
- DMX:
	- M_Beam_Master5
	- M_Beam_Master14


@@ -0,0 +1,16 @@
# Motion write logic
Log for a correctly imported Motion:
```text
[2024.10.23-05.22.35:360][637]LogSequoia: SequoiaFileRefPool:: Load FileRef start ++++++:C:/LiveDirectorSaved/Sequoia/心宜思诺一周年/LIKE THAT.Sequoia/1D20B3CA4D4B5CED1D7312AE0D9EBF9F.motion
```
Error:
```text
LogSequoia: SequoiaData file ref load complete, sequoiaPath = :/Sequoia/心宜思诺一周年/初智齿.Sequoia/初智齿.json
```
## Recording logic
```text
LogSequoia: UMotionCaptureRecorder::StartRecord start record motion frames from avatar:Idol.F07
LogSequoia: UMotionCaptureRecorder::StopRecordstop record motion frames from avatar:Idol.F07, frames:0
```


@@ -0,0 +1,45 @@
# Test procedure
1. The phone and PC must be on the same subnet.
2. Configure the MotionServer IP: ARKitNetConfig.ini, MotionNetConfig.ini, MotionNetConfig2.ini.
3. Open FaceMask, set the character name and the PC's IP, and press Connect.
4. Open MotionProcess, set the character name, and press Connect.
**You can open Map_MotionProcess directly for development and testing.**
## Testing in the Editor
GM_TsLiveDirectorGameMode => PC_TsDirectorController => BP_MotionSender0, BP_MotionReceiver0
# IdolAnimInstance
UpdateAnimation runs PrepareMocapParameters() every frame, which fetches a reference to TsMotionRetargetComponent; normally it comes from the TsMotionRetargetComponent on the IdolActor's Controller.
TsMotionRetargetComponent contains TsChingmuMocapReceiverActor => ChingmuMocapReceiverActor.
# Related animation nodes
## AnimNode_FullBody
ChingMU mocap data is received through the **AnimNode_FullBody** node; the actual receive logic lives in AMotionReceiverActor.
## AnimNode_FacialExpression
The FaceMask facial-capture node.
The actual data reception happens in TsMediaPipeMocapReceiverActor and TsMotionRetargetComponent.
### FacialExpressionConfigAsset
Holds the expression settings. All characters' expression assets live in `Content/LiveDirector/FaceExpressionConfig`.
The key part is the curve mapping: one ARKit blendshape value (0~1) is remapped onto 5 corresponding blendshapes for a more nuanced expression. For example, tongueOut =>
tongueOut_1
tongueOut_2
tongueOut_3
tongueOut_4
tongueOut_5
BlendShape Maya源文件位于
## HandPoseAnimNode: adjusting hand poses
FName HandPoseDataTablePath = TEXT("DataTable'/Game/ResArt/HandPose/DT_HandPoseConfig.DT_HandPoseConfig'");
# Related Actors
- AMotionReceiverActor: receives body mocap data.
- AMediaPipeMocapReceiverActor: receives facial-capture data.
## AMediaPipeMocapReceiverActor
1. AMediaPipeMocapReceiverActor Tick => OnGetMediaPipeData() => **(TsMediaPipeSkeleton)Skeleton.OnGetMediaPipeData(Data)**; this function's logic lives in TsMediaPipeMocapReceiverActor.
2. TsMediaPipeMocapReceiverActor ReceiveTick() => UpdateAnimation(): filters and adjusts the data, then **feeds the facial data into AnimNode_FacialExpression**.


@@ -0,0 +1,462 @@
# Related classes
- TsArkitDataReceiver (ArkitDataReceiver)
- TsChingmuMocapReceiverActor (ChingmuMocapReceiverActor)
- TsMotionReceiverActor (MotionReceiverActor) => BP_MotionReceiver defines MotionNetConfig.ini.
- TsMotionSenderActor (MotionSenderActor)
# TsChingmuMocapReceiverActor
***The map only ever spawns one TsChingmuMocapReceiverActor to manage mocap data reception.***
1. Init(): TsChingmuMocapReceiverActor is spawned only on the Server.
2. ConnectChingMu(): **ChingmuComp.StartConnectServer()**
3. Multicast_AligmMotionTime(): finds the BP_MotionReceiver in the scene and calls Receiver.AlignTimeStamp().
## ChingmuMocapReceiverActor
Core logic:
- ***FChingmuThread::Run()***
- ***AChingmuMocapReceiverActor::Tick()***
	- AChingmuMocapReceiverActor::DoSample()
```c++
void AChingmuMocapReceiverActor::BeginPlay()
{
	Super::BeginPlay();
	MaxHumanCount = 10;
	MaxRigidBodyCount = 10;
	CacheLimit = 240;
	SampledHumanData = NewObject<UMocapFrameData>();
	ThreadInterval = 0.002;
	// BackSampleTime = 100 ms, CHINGMU_SERVER_FPS = 120
	BackIndexCount = int64(UMotionUtils::BackSampleTime / (1000.0 / CHINGMU_SERVER_FPS));
	ChingmuComp = Cast<UChingMUComponent>(GetComponentByClass(UChingMUComponent::StaticClass()));
	if (ChingmuComp == nullptr)
	{
		UE_LOG(LogTemp, Error, TEXT("Chingmu Component is missing!!"));
	}
	Thread = new FChingmuThread("Chingmu Data Thread", this);
	Sender = GetMotionSender();
}
```
In FChingmuThread::Run(), after assembling [[#ST_MocapFrameData]], each performer's mocap frame is enqueued into FrameQueue. Tick() then dequeues them and stores the data into AllHumanFrames/AllRigidBodyFrames.
- AllHumanFrames
	- ID
	- std::vector<ST_MocapFrameData*> Frames
		- ID
		- TimeStamp
		- FrameIndex
		- BonesWorldPos
		- BonesLocalRot
```c++
void AChingmuMocapReceiverActor::Tick(float DeltaTime)
{
	Super::Tick(DeltaTime);
	if (!Sender)
	{
		Sender = GetMotionSender();
	}
	const auto CurTime = ULiveDirectorStatics::GetUnixTime(); // current system time
	if (UseThread)
	{
		// Threaded path: drain the ChingMU frame queue
		while (!FrameQueue.IsEmpty()) // process everything queued
		{
			ST_MocapFrameData* Frame;
			if (FrameQueue.Dequeue(Frame))
			{
				// Append the frame to AllHumanFrames/AllRigidBodyFrames, keyed by Human/RigidBody ID
				PutMocapDataIntoFrameList(Frame);
			}
		}
	}
	DoSample(AllHumanFrames);
	DoSample(AllRigidBodyFrames);
	// Recompute the average packet interval once per second
	if (CurTime - LastCheckIntervalTime > 1000)
	{
		if (AllHumanFrames.Num() > 0)
		{
			AllHumanFrames[0]->CalculatePackageAverageInterval(this->PackageAverageInterval);
			LastCheckIntervalTime = CurTime;
		}
	}
}
```
### Sampling logic
- ***SampleByTimeStamp()***
```c++
void AChingmuMocapReceiverActor::DoSample(TArray<MocapFrames*>& Frames)
{
	for (auto i = 0; i < Frames.Num(); i++)
	{
		// Cap the cache at 240 frames (2~4 s of data); drop the older half when exceeded
		Frames[i]->CheckSize(CacheLimit);
		if (SampleByTimeStamp(Frames[i]->Frames)) // interpolate; the result lands in SampledHumanData
		{
			// Runs the TsChingmuMocapReceiverActor.ts logic: fires an event handing the data
			// to TsMotionRetargetComponent.ts, or TsSceneLiveLinkPropActor.ts for mocap props
			SendFrameToCharacter();
		}
	}
}

class MocapFrames
{
public:
	int ID;
	std::vector<ST_MocapFrameData*> Frames = {};
public:
	MocapFrames(): ID(0)
	{
	}
	bool CheckSize(const int Limit)
	{
		if (Frames.size() > Limit)
		{
			const int DeletedCount = Frames.size() / 2;
			for (auto i = 0; i < DeletedCount; i++)
			{
				auto Data = Frames[i];
				if (Data)
				{
					delete Data;
				}
				Data = nullptr;
			}
			Frames.erase(Frames.cbegin(), Frames.cbegin() + DeletedCount);
			return true;
		}
		return false;
	}
};
```
The data is interpolated; the interpolated result is stored in **SampledHumanData**.
```c++
bool AChingmuMocapReceiverActor::SampleByTimeStamp(std::vector<ST_MocapFrameData*>& DataList)
{
	// UMotionUtils::BackSampleTime = 100 ms: sample at a point 100 ms in the past
	const int64 SampleTime = ULiveDirectorStatics::GetUnixTime() - UMotionUtils::BackSampleTime;
	int Previous = -1;
	int Next = -1;
	// Walk last => first to find the two frames bracketing SampleTime
	for (int Index = DataList.size() - 1; Index > 0; Index--)
	{
		const ST_MocapFrameData* Data = DataList[Index];
		if (Data == nullptr)
		{
			continue;
		}
		if (Data->TimeStamp - SampleTime > 0)
		{
			Next = Index;
		}
		else
		{
			Previous = Index;
			break;
		}
	}
	if (bShowSampleLog)
	{
		UE_LOG(LogTemp, Warning, TEXT("prev: %d, next: %d, total: %llu"), Previous, Next, DataList.size());
	}
	if (Previous != -1 && Next != -1)
	{
		const auto p = DataList[Previous];
		const auto n = DataList[Next];
		const float Factor = (n->TimeStamp - p->TimeStamp) > 0
			? (1.0 * (SampleTime - p->TimeStamp) / (n->TimeStamp - p->TimeStamp))
			: 1.0;
		// Bone world pos cannot lerp like this
		// It will cause bone length changes all the time
		SampledHumanData->ID = p->ID;
		SampledHumanData->TimeStamp = SampleTime;
		SampledHumanData->FrameIndex = p->FrameIndex;
		for (auto Index = 0; Index < 23; Index++) // interpolate all 23 bones
		{
			SampledHumanData->BonesWorldPos[Index] = UKismetMathLibrary::VLerp(
				p->BonesWorldPos[Index], n->BonesWorldPos[Index], Factor);
			SampledHumanData->BonesLocalRot[Index] = UKismetMathLibrary::RLerp(p->BonesLocalRot[Index].Rotator(),
			                                                                   n->BonesLocalRot[Index].Rotator(),
			                                                                   Factor, true).Quaternion();
		}
		return true;
	}
	if (Previous != -1) // fallback: only stale data; if it is too old, clear the list
	{
		SampledHumanData->CopyFrom(DataList[Previous]);
		if (SampleTime - DataList[Previous]->TimeStamp > UMotionUtils::MotionTimeout)
		{
			// data is too old, clear the data list.
			DataList.clear();
		}
		return true;
	}
	if (Next != -1) // no Previous: just copy the Next frame
	{
		SampledHumanData->CopyFrom(DataList[Next]);
		return true;
	}
	return false;
}
```
### FChingmuThread
Responsibilities:
- Get the current system time.
- Update each performer's mocap data via an async task calling **UChingMUComponent::FullBodyMotionCapBaseBonesLocalSpaceRotation()**. The mocap data is stored in the **ChingMUComponent**'s ***LocalRotationList*** and ***GlobalLocationList***.
- Maintain HumanToLastReceiveTime, i.e. how fresh each performer's animation data is.
- OwnerActor->OnGetHumanData_NotInGameThread():
	- Copies the data for the current time and frame from UChingMUComponent into [[#ST_MocapFrameData]].
	- Converts [[#ST_MocapFrameData]] to JSON and sends it via AMotionSenderActor::OnGetRawMocapData_NotInGameThread().
	- Enqueues the current frame into FrameQueue.
- Sleep briefly each iteration (ThreadInterval), so AChingmuMocapReceiverActor::Tick() can keep up with the queue.
```c++
uint32 FChingmuThread::Run()
{
	FTransform Tmp;
	while (bRun)
	{
		if (OwnerActor && OwnerActor->UseThread && OwnerActor->ChingmuComp && OwnerActor->ChingmuComp->IsConnected())
		{
			CurTime = ULiveDirectorStatics::GetUnixTime();
			// Human
			for (auto HumanIndex = 0; HumanIndex < OwnerActor->MaxHumanCount; HumanIndex++)
			{
				const auto bRes = OwnerActor->ChingmuComp->FullBodyMotionCapBaseBonesLocalSpaceRotation(
					OwnerActor->ChingmuFullAddress, HumanIndex, TmpTimeCode);
				if (bRes)
				{
					if (!HumanToLastReceiveTime.Contains(HumanIndex)) // first data for this human
					{
						HumanToLastReceiveTime.Add(HumanIndex, 0);
					}
					if (HumanToLastReceiveTime[HumanIndex] != TmpTimeCode.Frames) // only forward new frames
					{
						HumanToLastReceiveTime[HumanIndex] = TmpTimeCode.Frames;
						OwnerActor->OnGetHumanData_NotInGameThread(HumanIndex, CurTime, TmpTimeCode.Frames);
					}
					else
					{
						// get same frame, skip
						break;
					}
				}
			}
			// Rigidbody
			for (auto RigidBodyIndex = OwnerActor->RigidBodyStartIndex; RigidBodyIndex < OwnerActor->RigidBodyStartIndex
				+ OwnerActor->MaxRigidBodyCount; RigidBodyIndex++)
			{
				OwnerActor->ChingmuComp->GetTrackerPoseTC(OwnerActor->ChingmuFullAddress, RigidBodyIndex, Tmp,
					TmpTimeCode);
				if (!RigidBodyToLastReceiveTransform.Contains(RigidBodyIndex))
				{
					RigidBodyToLastReceiveTransform.Add(RigidBodyIndex, FTransform::Identity);
				}
				// Props' TmpTimeCode.Frames is always 0, so frame numbers cannot be used here;
				// compare transforms instead
				if (!RigidBodyToLastReceiveTransform[RigidBodyIndex].Equals(Tmp))
				{
					RigidBodyToLastReceiveTransform[RigidBodyIndex] = Tmp;
					OwnerActor->OnGetRigidBodyData_NotInGameThread(RigidBodyIndex, Tmp, CurTime, TmpTimeCode.Frames);
				}
			}
		}
		if (bRun)
		{
			FPlatformProcess::Sleep(OwnerActor ? OwnerActor->ThreadInterval : 0.004);
		}
		else
		{
			break;
		}
	}
	UE_LOG(LogTemp, Warning, TEXT("%s finish work."), *ThreadName)
	return 0;
}
```
## ST_MocapFrameData
- ST_MocapFrameData is the raw per-frame mocap data.
```c++
#define MOCAP_BONE_COUNT 23

enum E_MotionType
{
	Human,
	RigidBody
};

enum E_SourceType
{
	Mocap,
	CMR,
	Replay
};

struct ST_MocapFrameData
{
	int ID;
	int64 TimeStamp;
	int FrameIndex;
	E_MotionType MotionType;
	E_SourceType SourceType;
	FVector BonesWorldPos[MOCAP_BONE_COUNT];
	FQuat BonesLocalRot[MOCAP_BONE_COUNT];
};

class LIVEDIRECTOR_API UMocapFrameData : public UObject
{
	GENERATED_BODY()
public:
	UPROPERTY(BlueprintReadWrite, EditAnywhere)
	int ID;
	UPROPERTY(BlueprintReadWrite, EditAnywhere)
	TArray<FVector> BonesWorldPos;
	UPROPERTY(BlueprintReadWrite, EditAnywhere)
	TArray<FQuat> BonesLocalRot;
	UPROPERTY(BlueprintReadWrite, EditAnywhere)
	int64 TimeStamp;
	UPROPERTY(BlueprintReadWrite, EditAnywhere)
	int FrameIndex;
	UPROPERTY(BlueprintReadWrite, EditAnywhere)
	int MotionType; // 0 human; 1 rigidbody
	UPROPERTY(BlueprintReadWrite, EditAnywhere)
	int SourceType; // 0 mocap, 1 cmr
public:
	void CopyFrom(const ST_MocapFrameData* Other)
	{
		ID = Other->ID;
		TimeStamp = Other->TimeStamp;
		FrameIndex = Other->FrameIndex;
		MotionType = Other->MotionType;
		SourceType = Other->SourceType;
		for (auto Index = 0; Index < MOCAP_BONE_COUNT; Index++)
		{
			BonesWorldPos[Index] = Other->BonesWorldPos[Index];
			BonesLocalRot[Index] = Other->BonesLocalRot[Index];
		}
	}
};

class MocapFrames
{
public:
	int ID;
	std::vector<ST_MocapFrameData*> Frames = {};

	void CalculatePackageAverageInterval(float& Res)
	{
		if (Frames.size() > 0)
		{
			auto First = Frames[0];
			auto Last = Frames[Frames.size() - 1];
			if (Last->FrameIndex > First->FrameIndex)
			{
				Res = 1.0 * (Last->TimeStamp - First->TimeStamp) / (Last->FrameIndex - First->FrameIndex);
			}
		}
	}
};
```
# MotionCapture (ChingMU plugin)
Implements one component and three animation nodes:
- [[#ChingMUComponent]]
- [[#AnimNode_ChingMUPose]]: receives skeletal mocap data.
- [[#AnimNode_ChingMURetargetPose]]: receives retargeted skeletal mocap data.
- AnimNode_ChingMURetargetPoseForBuild
## ***ChingMUComponent***
1. Init
	1. BeginPlay(): reads the ini config; gets the character's SkeletonMesh => CharacterSkinMesh; builds the BoneName => BoneIndex map, the T-pose bone rotations, and TposeParentBonesRotation.
2. Connect
	1. StartConnectServer(): motionCapturePlugin->ConnectCommand = "ConnectServer"; the actual work happens in FMotionCapture::Tick().
	2. DisConnectServer(): motionCapturePlugin->ConnectCommand = "DisConnect".
3. [[#CalculateBoneCSRotation()]]
4. [[#FullBodyMotionCapBaseBonesLocalSpaceRotation]]
### CalculateBoneCSRotation
> Get Human Fullbody Tracker data, including 23 joints' localRotation and the root joint's world Position
1. m_motioncap->CMHuman() calls the DLL's CMHumanExtern(), which returns a double array: the first 3 values are the RootLocation, the rest are rotations.
2. Computes the final quaternion rotations.
3. Results come back through the out parameter FQuat* BonesComponentSpaceRotation (array pointer).
### FullBodyMotionCapBaseBonesLocalSpaceRotation
Compared with CalculateBoneCSRotation, this additionally fetches the timecode and GlobalLocation mocap data.
1. m_motioncap->CMHuman() calls the DLL's CMHumanExtern(), returning a double array: the first 3 values are the RootLocation, the rest rotations.
2. motionCapturePlugin->CMHumanGlobalRTTC() calls the DLL's CMHumanGlobalRTTC() (1-24 new features), computing **VrpnTimeCode** and **GlobalLocationList**.
The data is stored in the **ChingMUComponent**'s ***LocalRotationList*** and ***GlobalLocationList***.
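The flat double-array layout described above (3 root-location values, then per-bone rotation components) can be illustrated with a small standalone sketch. Plain C++, no UE or ChingMU types; `UnpackMocapBuffer` and the 3-components-per-bone Euler layout are assumptions for illustration, not the plugin's actual API:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { double X, Y, Z; };

// Hypothetical unpacking of a CMHumanExtern-style buffer:
// [RootX, RootY, RootZ, Bone0RotA, Bone0RotB, Bone0RotC, Bone1RotA, ...]
void UnpackMocapBuffer(const double* Buffer, std::size_t BoneCount,
                       Vec3& OutRootLocation, std::vector<Vec3>& OutBoneRotations)
{
	OutRootLocation = { Buffer[0], Buffer[1], Buffer[2] };
	OutBoneRotations.resize(BoneCount);
	for (std::size_t i = 0; i < BoneCount; ++i)
	{
		const double* R = Buffer + 3 + i * 3; // 3 rotation components per bone
		OutBoneRotations[i] = { R[0], R[1], R[2] };
	}
}
```

The real component would then convert each rotation triple into an FQuat and write into LocalRotationList.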
## FAnimNode_ChingMUPose
1. Initialize_AnyThread(): gets the **ChingMUComponent**.
2. Update_AnyThread(): calls **ChingMUComponent->CalculateBoneCSRotation()**.
3. Evaluate_AnyThread(): iterates the 23 bones; takes the RefPose and overwrites it in component space with the mocap **Rotation** obtained in Update_AnyThread() (**the root bone additionally gets Location data**); finally converts from ComponentSpace => LocalSpace.
## AnimNode_ChingMURetargetPose
1. Initialize_AnyThread(): creates the curve logic (TCHour, TCMinute, TCSecond, TCFrame).
2. Update_AnyThread()
3. Evaluate_AnyThread(): all the relevant logic lives here.
### AnimNode_ChingMURetargetPose::Evaluate_AnyThread()
# TsMotionReceiverActor
Only calls this.MarkAsClientSeamlessTravel() in BeginPlay(); the real logic is in `AMotionReceiverActor`.
## MotionReceiverActor
![[动捕逻辑思维导图.canvas]]
# Config and bone-name logic
1. Config/FullBodyConfig.json stores the bone names, morph names, and the RootMotion bone name.
	1. Name arrays are fetched via UMotionUtils::GetModelBones(), UMotionUtils::GetMoveableBones(), UMotionUtils::GetMorphTargets().
2. GetModelBones()
	1. Mainly called in FAnimNode_FullBody::Initialize_AnyThread().
	2. Fills `TArray<FBoneReference> BoneRefList;` and initializes SampledFullBodyData along the way.
	3. InitBoneRefIndex(): initializes the BoneIndex of every FBoneReference in BoneRefList by bone-name lookup; logs a message for any name not found.
	4. FAnimNode_FullBody::Evaluate_AnyThread(): used in [[#ApplyDataToPose()]].
3. GetMorphTargets()
	1. Mainly called in FAnimNode_FullBody::Initialize_AnyThread().
## ApplyDataToPose()
### BoneTransform
1. Iterate BoneRefList (from UMotionUtils::GetModelBones()).
2. For each bone with a valid BoneIndex:
	1. Get the **bone index** in the animation blueprint's output pose and the **sampled mocap rotation**.
	2. If the bone name is Hips, store its index into HipsIndex.
	3. Apply the rotation to the OutputPose.
	4. If the bone name is in MoveableBones, also set its Location on the OutputPose.
### MorphValues
Applies the MorphTarget data to the corresponding CurveChannels.
### RootMotion
Runs different logic depending on bUseHipsTranslation:
#### MapTranslationToHips
Called with these arguments:
```c++
MapTranslationToHips(Output, EvaluatedFullBodyData, 0, HipsIndex);
```
1. Take the Joints bone's Location as the RootMotion data.
2. If the Joints bone **is** the root bone, adjust the RootMotion axes.
3. Zero the Joints bone's Location.
4. If the Hips bone is valid, add the RootMotion data to its Location.
#### ExtractRootMotionInfo
1. Take the Joints bone's Location as the RootMotion data.
2. If the Joints bone is **not** the root bone, adjust the RootMotion axes. (**The axis mapping differs from MapTranslationToHips().**)
3. Zero the Joints bone's Location.
4. Set the RootMotion as the AnimInstance's RootMotionLocation.
5. If the Hips bone is valid, a series of computations finally sets the AnimInstance's RootMotionRotation.
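The MapTranslationToHips steps amount to moving the Joints bone's translation onto the Hips and zeroing it at the source. A toy sketch of that transfer (plain C++; `Pose` is an illustrative stand-in, and the axis adjustment is omitted since it depends on whether Joints is the skeleton root):

```cpp
struct Vec3 { float X, Y, Z; };

struct Pose
{
	Vec3 JointsLocation; // root-motion translation arrives on this bone
	Vec3 HipsLocation;   // character hips in component space
};

// Transfer the Joints translation onto the Hips and zero the source bone,
// mirroring steps 1, 3, and 4 of the MapTranslationToHips list above.
void MapTranslationToHips(Pose& P)
{
	const Vec3 Root = P.JointsLocation;                 // step 1: take as root motion
	P.JointsLocation = { 0.f, 0.f, 0.f };               // step 3: zero the source
	P.HipsLocation = { P.HipsLocation.X + Root.X,       // step 4: add onto the hips
	                   P.HipsLocation.Y + Root.Y,
	                   P.HipsLocation.Z + Root.Z };
}
```

ExtractRootMotionInfo differs in that the translation goes to the AnimInstance's RootMotionLocation instead of being baked into the hips.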


@@ -0,0 +1,17 @@
{
"nodes":[
{"id":"2666bc7c541cb485","type":"text","text":"FChingmuThread::Run()\n\n发送数据\nOnGetHumanData_NotInGameThread() => PutMocapDataIntoQueue => Sender->OnGetRawMocapData_NotInGameThread(jsonStr);\n\n```c++\nwhile (bRun)\n{\n\tif (OwnerActor && OwnerActor->UseThread && OwnerActor->ChingmuComp && OwnerActor->ChingmuComp->IsConnected())\n\t{\n\t\tCurTime = ULiveDirectorStatics::GetUnixTime();\n\t\t// Human\n\t\tfor (auto HumanIndex = 0; HumanIndex < OwnerActor->MaxHumanCount; HumanIndex++)\n\t\t{\n\t\t\tconst auto bRes = OwnerActor->ChingmuComp->FullBodyMotionCapBaseBonesLocalSpaceRotation(\n\t\t\t\tOwnerActor->ChingmuFullAddress, HumanIndex, TmpTimeCode);\n\t\t\tif (bRes)\n\t\t\t{\n\t\t\t\tif (!HumanToLastReceiveTime.Contains(HumanIndex))\n\t\t\t\t{\n\t\t\t\t\tHumanToLastReceiveTime.Add(HumanIndex, 0);\n\t\t\t\t}\n\t\t\t\tif (HumanToLastReceiveTime[HumanIndex] != TmpTimeCode.Frames)\n\t\t\t\t{\n\t\t\t\t\tHumanToLastReceiveTime[HumanIndex] = TmpTimeCode.Frames;\n\t\t\t\t\tOwnerActor->OnGetHumanData_NotInGameThread(HumanIndex, CurTime, TmpTimeCode.Frames);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\t// get same frame, skip\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t}\n\tif (bRun)\n\t{\n\t\tFPlatformProcess::Sleep(OwnerActor ? OwnerActor->ThreadInterval : 0.004);\n\t}\n\telse\n\t{\n\t\tbreak;\n\t}\n}\n\n```","x":-600,"y":-420,"width":980,"height":1180},
{"id":"c5705d4ff792be0b","type":"text","text":"**ChingmuComp.StartConnectServer()** 在UI界面控制链接服务器。\nAChingmuMocapReceiverActor::BeginPlay()创建FChingmuThread。","x":-360,"y":-640,"width":500,"height":140},
{"id":"668c865498842d96","type":"text","text":"AChingmuMocapReceiverActor::Tick()\n\n```c++\nconst auto CurTime = ULiveDirectorStatics::GetUnixTime();\nif(UseThread)\n{\n\t// 线程方式\n\t// 在数据队列中获取青瞳数据\n\twhile (!FrameQueue.IsEmpty())\n\t{\n\t\tST_MocapFrameData* Frame;\n\t\tif (FrameQueue.Dequeue(Frame))\n\t\t{\n\t\t\tPutMocapDataIntoFrameList(Frame);\n\t\t}\n\t}\n}\n\nDoSample(AllHumanFrames);\nDoSample(AllRigidBodyFrames);\n\n// 每隔1s计算一次平均包间隔\nif (CurTime - LastCheckIntervalTime > 1000)\n{\n\tif (AllHumanFrames.Num() > 0)\n\t{\n\t\tAllHumanFrames[0]->CalculatePackageAverageInterval(this->PackageAverageInterval);\n\t\tLastCheckIntervalTime = CurTime;\n\t}\n}\n```","x":-600,"y":820,"width":980,"height":800},
{"id":"04df15f334d740f3","type":"text","text":"IdolAnimInstance & Anim_FullBody\n\nIdolAnimInstance主要是取得场景中的**AMotionReceiverActor**以及设置身份。\nAnim_FullBody\n\n```c++\nvoid FAnimNode_FullBody::Update_AnyThread(const FAnimationUpdateContext& Context)\n{\n\tSourcePose.Update(Context);\n\tEMotionSourceType MotionSourceType = EMotionSourceType::MST_MotionServer;\n\tconst UIdolAnimInstance* IdolAnimInstance = Cast<UIdolAnimInstance>(\n\t\tContext.AnimInstanceProxy->GetAnimInstanceObject());\n\tif (IdolAnimInstance)\n\t{\n\t\tMotionSourceType = IdolAnimInstance->GetMotionSourceType();\n\t}\n\tif (MotionSourceType == EMotionSourceType::MST_MotionServer)\n\t{\n\t\tconst FString ValidIdentity = GetFullBodyIdentity(Context);\n\t\tconst auto Recv = GetMotionReceiver(Context);\n\t\tif (!ValidIdentity.IsEmpty() && Recv.IsValid())\n\t\t{\n\t\t\tbGetMotionData = Recv->SampleFullBodyData_AnimationThread(ValidIdentity,\n\t\t\t ULiveDirectorStatics::GetUnixTime() -\n\t\t\t UMotionUtils::BackSampleTime * 2,\n\t\t\t SampledFullBodyData);\n\t\t}\n\t}\n}\n\nvoid FAnimNode_FullBody::Evaluate_AnyThread(FPoseContext& Output)\n{\n\tSourcePose.Evaluate(Output);\n\tif (!InitializedBoneRefIndex)\n\t{\n\t\tInitBoneRefIndex(Output);\n\t\tInitializedBoneRefIndex = true;\n\t}\n\tEMotionSourceType MotionSourceType = EMotionSourceType::MST_MotionServer;\n\tconst UIdolAnimInstance* IdolAnimInstance = Cast<UIdolAnimInstance>(\n\t\tOutput.AnimInstanceProxy->GetAnimInstanceObject());\n\tif (IdolAnimInstance)\n\t{\n\t\tMotionSourceType = IdolAnimInstance->GetMotionSourceType();\n\t}\n\n\tFMotionFrameFullBodyData& EvaluatedFullBodyData = SampledFullBodyData;\n\n\tswitch (MotionSourceType)\n\t{\n\tcase EMotionSourceType::MST_MotionServer:\n\t\tif (!bGetMotionData)\n\t\t{\n\t\t\treturn;\n\t\t}\n\t\tEvaluatedFullBodyData = SampledFullBodyData;\n\t\tbreak;\n\tcase EMotionSourceType::MST_SequoiaReplay:\n\t\t{\n\t\t\t// Evaluate from sequoia source.\n\t\t\tconst FSequoiaMotionSource& MotionSource = 
FSequoiaMotionSource::Get();\n\t\t\tconst FString ValidIdentity = GetFullBodyIdentity(Output);\n\t\t\tif (const FMotionFrameFullBodyData* FrameSnapshot = MotionSource.EvaluateFrame_AnyThread(ValidIdentity))\n\t\t\t{\n\t\t\t\tEvaluatedFullBodyData = *FrameSnapshot;\n\t\t\t\tbGetMotionData = true;\n\t\t\t}\n\t\t\telse\n\t\t\t{\n\t\t\t\tUE_LOG(LogTemp, Warning, TEXT(\"%s No Sequoia Frame Data found.AvatarName=%s\"),\n\t\t\t\t ANSI_TO_TCHAR(__FUNCTION__), *ValidIdentity)\n\t\t\t\tbGetMotionData = false;\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\n\t\tbreak;\n\tdefault:\n\t\tbreak;\n\t}\n\n\tApplyDataToPose(Output, EvaluatedFullBodyData);\n}\n```","x":-960,"y":1720,"width":1700,"height":2080},
{"id":"778e83e66edd5118","x":-903,"y":3980,"width":1586,"height":197,"type":"text","text":"bool AMotionReceiverActor::SampleFullBodyData_AnimationThread()\n1. 对CharacterToFrameList里的角色数据进行采样并将采样数据存储到SampledFullBodyData中。\n2. CharacterToFrameList的数据会在接收到网络传递的逻辑后填充ASimpleUDPReceiverActor::OnReceiveData_NetworkThread() => ProcessReceivedData_NetworkThread => PutFrameIntoQueue_NetworkThread() "},
{"id":"521dba38cdd6c593","x":-460,"y":4300,"width":700,"height":120,"type":"text","text":"FMotionFrameFullBodyData& EvaluatedFullBodyData = SampledFullBodyData;\nApplyDataToPose(Output, EvaluatedFullBodyData);"}
],
"edges":[
{"id":"b6e4d43c4c38cf16","fromNode":"2666bc7c541cb485","fromSide":"bottom","toNode":"668c865498842d96","toSide":"top"},
{"id":"34998812ac1bd8a8","fromNode":"c5705d4ff792be0b","fromSide":"bottom","toNode":"2666bc7c541cb485","toSide":"top"},
{"id":"2e063b7710fd9a81","fromNode":"668c865498842d96","fromSide":"bottom","toNode":"04df15f334d740f3","toSide":"top"},
{"id":"ddef3dd868ca08bf","fromNode":"04df15f334d740f3","fromSide":"bottom","toNode":"778e83e66edd5118","toSide":"top","label":"Update_AnyThread"},
{"id":"037baa41a3eb9866","fromNode":"778e83e66edd5118","fromSide":"bottom","toNode":"521dba38cdd6c593","toSide":"top","label":"Evaluate_AnyThread"}
]
}


@@ -0,0 +1,29 @@
# Hand IK logic
Mainly used to set **hand poses matched to props, and to keep performers from making NG gestures**. The logic lives in a ControlRig (XXX), which takes a set of HandIKTarget transforms. Taking the guitar as an example, the computation runs from prop loading through RefreshInstrumentIK:
- LoadPropByConfig =>
- CheckPropPose =>
- TriggerInstrumentPose =>
	- TriggerInstrumentIK
	- RefreshInstrumentIK
# Retargeting
The logic is split between TsRetargetManagerComponent and the ControlRig in the animation blueprint.
- The MotionProcess client runs the retargeting logic.
- Other clients receive the Motion data broadcast via MotionProcess => MotionServer.
## TsRetargetManagerComponent
The component computes the ratio between the current character's skeleton and the standard Human skeleton, derives retargeting data from it, and enables the PostProcess stage of retargeting:
- ModelScale
- LegScale
- HipDiff
## ControlRig
The ControlRig contains a Mocap skeleton alongside the character skeleton; all controllers live on the Mocap skeleton.
1. Receive the mocap data and set it onto the Mocap skeleton.
2. PostProcess.
3. Apply Rotation to the character bones for everything except Hips; Hips gets the full Transform.
4. Post-processing.
5. Transfer the Hips bone data onto Joints.
# IK issue notes
Handling flat vs. high heels: NaiLin_ControlRig_Heel (/Game/ResArt/CharacterArt/NaiLin/ControlRig/NaiLin_ControlRig_Heel)


@@ -0,0 +1,163 @@
"PropName": "梦境楼梯",
"AssetPath": "/Game/Props/SceneProps/Items/BP_Steps.BP_Steps",
Define scene A and scene B, transitioning A => B:
1. Create a scene prop identical to A or B, using a LevelInstance.
2. Build a dissolve effect; after A or B finishes dissolving, perform the area switch.
	1. Alternatively, use another mesh to cover the transition.
3. Physics issues can be solved with Ignore Component Transform.
manager.LayerManager.EnterLevelArea(this.preset.LevelAreaPreset.UUID, manager.LevelSwitchType);
# Fixing characters becoming hidden
1. TsIdolControllerActor.ts registers several event listeners:
```typescript
RegisterEventListener(): void
{
this.ListenerWrapper_SwitchLiveArea = (liveAreaUUID: UE.Guid) => { this.SwitchToLiveArea(liveAreaUUID) }
this.ListenerWrapper_OnFinishCreateTmpArea = (liveAreaUUID: UE.Guid) => { this.RequireSwitchToLiveArea(liveAreaUUID) }
this.ListenerWrapper_SceneChanged = (levelName:string)=>{this.OnSceneChanged() };
this.ListenerWrapper_BeforeSceneChanged = (levelName:string)=>{this.BeforeSceneChanged() };
DirectorEventSystem.RegisterEventListener(this, DirectorEvent.OnFinishSwitchLiveAreaLocal, this.ListenerWrapper_SwitchLiveArea)
DirectorEventSystem.RegisterEventListener(this, DirectorEvent.OnFinishSwitchSubLevelLocal,this.ListenerWrapper_SceneChanged)
DirectorEventSystem.RegisterEventListener(this, DirectorEvent.BeforeSwitchLevel,this.ListenerWrapper_BeforeSceneChanged)
DirectorEventSystem.RegisterEventListener(this, DirectorEvent.OnFinishCreateTmpLiveAreaLocal,this.ListenerWrapper_OnFinishCreateTmpArea)
}
```
- ListenerWrapper_SwitchLiveArea: the core logic that moves a character to another LiveArea.
- ListenerWrapper_OnFinishCreateTmpArea: no logic.
- ListenerWrapper_SceneChanged: unloads all props, this.PropComp.OnSceneChanged().
- ListenerWrapper_BeforeSceneChanged: detaches the character and outfit from the LiveArea? this.DressModel.K2_DetachFromActor()
## ListenerWrapper_SwitchLiveArea
```ts
SwitchToLiveArea(TargetLiveAreaGUID: UE.Guid): void {
console.warn(this.Identity.RootTag.TagName.toString() + ' switch to live area ' + TargetLiveAreaGUID.ToString())
this.LiveAreaGIUD = TargetLiveAreaGUID
this.SetTransformToLiveArea()
if (this.PropComp.DressModel && this.PropComp.DressModel.MovementComp&&this.PropComp.DressModel.MovementComp.ManulMovement) {
console.warn(this.PropComp.DressModel.GetName() + ' is in free move mode, will not teleport to new area!')
return
}
var liveAreaMgr = LiveAreaUtils.GetLievAreaManagerInstance(this)
if (liveAreaMgr && liveAreaMgr.IsTmpLiveArea(TargetLiveAreaGUID)) {
// teleport to the target live area without fx
this.PropComp.Teleport(TargetLiveAreaGUID)
} else {
this.PropComp.DressModelTeleport(TargetLiveAreaGUID)
}
}
```
## Related events
- BeforeSwitchArea_Multicast
## Notes
- SwitchToLiveArea()
	- Sets the Idol's position.
	- this.OnSwitchLiveArea.Broadcast(oriUUID, uuid);
		- DirectorEventSystem.Emit(this, DirectorEvent.OnFinishSwitchLiveAreaLocal, this.CurrentLiveAreaUUID) TsDirectorCamManagerActor.ts
		- console.log('切换直播区域,area=[' + (liveArea as UE.LiveAreaActor).Title + ']')
		- DirectorCamUtil.EnsureWorkShopReady(this, placement.UUID, () => { this.SwitchWorkShopInAreaServer(0) }) ***TsDirectorCamManagerActor.ts***
			- SwitchWorkShopInAreaServer
				- this.HandlePreviewTaskDataMulticast(newPreviewTaskData);
				- this.RequestPVWTaskServer(newPVWTaskData);
```ts
// Push stream
E_StartCut(progress: number): void {
	this.camManagerCache = DirectorCamUtil.PushStreamOnServerAsync(this, UE.EPushStreamMethod.Cut, false, this.camManagerCache)
}

function PushStreamOnServerAsync(context: UE.Object, method: UE.EPushStreamMethod, bForce: boolean, camManagerCache?: TsDirectorCamManagerActor): TsDirectorCamManagerActor {
	let camManager = GetCamManager(context, camManagerCache)
	if (camManager) {
		camManager.PushStreamOnServerAsync(method, bForce)
	}
	return camManager
}

// The server decides whether streaming may start. Possible network latency?
@ufunction.ufunction(ufunction.Reliable, ufunction.ServerAPI)
PushStreamOnServerAsync(method: UE.EPushStreamMethod, bForce: boolean): void {
	let newPGMTaskData = DirectorCamUtil.CopyTaskData(this.prestreamTaskData)
	this.RequestPGMTaskServer(newPGMTaskData, method, bForce)
}
```
## TsMovableLiveAreaComponent
- TsMovableLiveAreaComponent
##
```ts
/** Enter an area via cmd, skipping the loading wait. Must not be called from anywhere else. */
@ufunction.ufunction(ufunction.ServerAPI, ufunction.Reliable)
EnterAreaByCMD(areaUUIDStr: string): void {
let areaManager = this.GetLevelAreaManager();
if (!areaManager) {
return;
}
let area: UE.LiveAreaActor = null;
if (areaUUIDStr != null || areaUUIDStr != "") {
let uuid = new UE.Guid();
if (UE.Guid.Parse(areaUUIDStr, $ref(uuid))) {
area = areaManager.GetLiveArea(uuid);
}
}
if (area == null) {
area = areaManager.GetAllLiveAreas().GetRef(0)
}
if (area == null) {
console.error("no area")
return
}
if (area.UUID.op_Equality(areaManager.CurrentLiveAreaUUID)) {
return
}
let bHasData = false;
let presetId:UE.Guid;
let manager = this.GetManager()
for (let index = 0; index < manager.Config.AreaPresets.Num(); index++) {
const element = manager.Config.AreaPresets.GetRef(index);
if (element.AreaGuid.op_Equality(area.UUID)) {
presetId = element.UUID;
bHasData = true;
break;
}
}
let levelName = UE.GameplayStatics.GetCurrentLevelName(this, true);
if (!bHasData) {
manager.AddAreaPreset( levelName, area.UUID, area.Title)
for (let index = 0; index < manager.Config.AreaPresets.Num(); index++) {
const element = manager.Config.AreaPresets.GetRef(index);
if (element.AreaGuid.op_Equality(area.UUID)) {
presetId = element.UUID;
break;
}
}
}
let viewTarget = areaManager.GetViewTarget(area.UUID)
if (!viewTarget) {
this.AddViewTarget(area.UUID, area.Title);
}
manager.AddConfigLevelSetting(levelName);
this.BeforeSwitchArea_Multicast(manager.CurPresetId);
manager.CurPresetId = presetId;
areaManager.SwitchToLiveArea(area.UUID);
}
}
```

@@ -0,0 +1,2 @@
# Optical-capture camera types
BP_HandHeldCam => HandHeldCamera, which sends LiveLink data in real time via the BP_HandHeldCamera_Proxy Actor.

@@ -0,0 +1,35 @@
{
"nodes":[
{"id":"300a2e3e614685a2","type":"group","x":-500,"y":-20,"width":660,"height":500,"label":"导播台程序"},
{"id":"035350cfe6c5a215","type":"group","x":-500,"y":-400,"width":660,"height":275,"label":"外部数据输入"},
{"id":"2eec2fb1d3a37d06","type":"group","x":200,"y":-20,"width":360,"height":133,"label":"云服务"},
{"id":"63e99817023a9452","x":200,"y":360,"width":360,"height":120,"type":"group","label":"其他工具"},
{"id":"9c4c9310461193d8","type":"text","text":"舞台角色控制","x":-125,"y":113,"width":250,"height":60},
{"id":"6aa20a6c6e56213d","type":"text","text":"[[ASoul#各机位画面预览|各机位画面预览]]","x":-125,"y":206,"width":250,"height":60},
{"id":"39bafcd9161d7e0a","type":"text","text":"导播台程序","x":-480,"y":20,"width":250,"height":60},
{"id":"5b68848d0ae9aef3","type":"text","text":"数据接收&动作数据重定向","x":-125,"y":20,"width":250,"height":60},
{"id":"ddccb7a9337eac2c","type":"text","text":"RTC服务","x":220,"y":0,"width":250,"height":60},
{"id":"64c78f2c7f900857","type":"text","text":"青瞳动捕输入","x":-460,"y":-360,"width":250,"height":60},
{"id":"3100f1c53b772812","type":"text","text":"[[ASoul#FaceMask|FaceMask]]","x":-460,"y":-240,"width":250,"height":60},
{"id":"6024a903f9025bbf","type":"text","text":"动捕手套","x":-140,"y":-360,"width":250,"height":60},
{"id":"2573e7521a0b567d","x":-140,"y":-240,"width":250,"height":60,"type":"text","text":"虚拟摄像头"},
{"id":"d4895a6dd8e8f492","type":"text","text":"OBS 推流机器","x":-30,"y":600,"width":250,"height":60},
{"id":"b6635d1e5df0f9c5","type":"text","text":"[[ASoul#vMix Pro|vMix Pro]]","x":220,"y":400,"width":250,"height":60},
{"id":"2f70c72b00fdbb6f","x":-125,"y":400,"width":250,"height":60,"type":"text","text":"监视器"},
{"id":"fd18d36587eee2af","type":"text","text":"渲染机","x":-125,"y":300,"width":250,"height":60}
],
"edges":[
{"id":"50384075226d46f4","fromNode":"39bafcd9161d7e0a","fromSide":"right","toNode":"5b68848d0ae9aef3","toSide":"left"},
{"id":"7330916171b51d83","fromNode":"39bafcd9161d7e0a","fromSide":"right","toNode":"9c4c9310461193d8","toSide":"left"},
{"id":"f1bcfe8881d914d1","fromNode":"39bafcd9161d7e0a","fromSide":"right","toNode":"6aa20a6c6e56213d","toSide":"left"},
{"id":"e15305c8aa918dac","fromNode":"39bafcd9161d7e0a","fromSide":"right","toNode":"fd18d36587eee2af","toSide":"left"},
{"id":"8cac5632ec79bc5f","fromNode":"035350cfe6c5a215","fromSide":"bottom","toNode":"5b68848d0ae9aef3","toSide":"top"},
{"id":"5868048a50b5a58f","fromNode":"5b68848d0ae9aef3","fromSide":"bottom","toNode":"9c4c9310461193d8","toSide":"top"},
{"id":"e29f2faceec273b4","fromNode":"9c4c9310461193d8","fromSide":"bottom","toNode":"6aa20a6c6e56213d","toSide":"top"},
{"id":"4efa1ed45a9bf2e6","fromNode":"300a2e3e614685a2","fromSide":"bottom","toNode":"d4895a6dd8e8f492","toSide":"top"},
{"id":"7803dd9c4b820e03","fromNode":"b6635d1e5df0f9c5","fromSide":"bottom","toNode":"d4895a6dd8e8f492","toSide":"top"},
{"id":"16ebaec8f13d643b","fromNode":"39bafcd9161d7e0a","fromSide":"right","toNode":"2f70c72b00fdbb6f","toSide":"left"},
{"id":"7deafcdada47affc","fromNode":"6aa20a6c6e56213d","fromSide":"bottom","toNode":"fd18d36587eee2af","toSide":"top"},
{"id":"f2b08ba7d08b3340","fromNode":"fd18d36587eee2af","fromSide":"bottom","toNode":"2f70c72b00fdbb6f","toSide":"top"}
]
}

@@ -0,0 +1,114 @@
# Preface
Enter `run` to print the help for all commands.
- PVW: switch to PVW
- PGM: switch to PGM
- 0: switch to Operator
- 3: switch to the level-3 console
- HandCam: switch to the hand-held camera
- 11: switch to FreeMove
- ReMidi: refresh the MIDI board
- DebugFrame: debug frame
- EnterArea: enter an Area (a uuid may be given)
- xs: enter the 异世界雪山1 Area (for testing)
- DMXAlign: force-align DMX to the Area
- ReDeck: force-load the current area's camera data and refresh the StreamDeck
- IdolStatus: character motion status
- AllIdolStatus: motion status of every character
- IdolRelativeTr: position info of every character
- MotionLog: character motion log
- IdolCache: character cache state (ServerOnly)
- GetMotionOffset: get the motion time offset
- MotionReceiveStatus: character motion data reception state
- ResetOffsetTime: reset the motion-packet time offset of every character
- SetRes: set the target resolution, e.g. `run SetRes 1920 1080`
- HipsTranslation: use hips translation
- IdolCostume: add the four characters in team outfits
- ShowUI: port of the UE4 ShowUI command
- BindPGM2: rebind PGM2's fixed camera
- LipSync: set the LipSync audio mute threshold
- UdexGlove: use the new Udex glove (works for some characters)
- GenerateMeshConfig: generate the mesh config (for tech staff)
Files involved:
- TsLiveDirectorGameInstance.ts
- TsDirectorConsoleCommandHandler.ts
# Command execution logic
## Run
TsDirectorConsoleCommandHandler.ts
```ts
static HandleConsoleCommand(gameInstance: TsLiveDirectorGameInstance, consoleCommand: string): void {
if(consoleCommand == '' || consoleCommand.toLocaleLowerCase() == 'help'){
TsDirectorConsoleCommandHandler.Help()
return
}
var parts = consoleCommand.split(' ')
var funcName = parts[0]
var func = TsDirectorConsoleCommandHandler.GetFunctionByName(funcName)
if (func == null) {
console.error('Not exist cmd ' + consoleCommand)
return
}
switch (parts.length) {
case 1: func(gameInstance); break;
case 2: func(gameInstance, parts[1]); break;
case 3: func(gameInstance, parts[1], parts[2]); break;
case 4: func(gameInstance, parts[1], parts[2], parts[3]); break;
case 5: func(gameInstance, parts[1], parts[2], parts[3], parts[4]); break;
default: console.error('Cmd parameter is wrong!')
}
}
```
The main console-switching commands basically call `gameInstance.SetDirectorModeStr("XXX")`.
Entering an Area calls `TsDirectorConsoleCommandHandler._EnterArea(gameInstance, "0C1E0DD349EDD9860ED8BDBB55A736F3")`; the code of `_EnterArea` is:
```ts
static _EnterArea(gameInstance: TsLiveDirectorGameInstance, areaUUID: string): void {
let mapEnvironment = Utils.GetMapEnvironmentManager(gameInstance.GetWorld());
if (mapEnvironment && mapEnvironment.LayerManager) {
mapEnvironment.LayerManager.EnterAreaByCMD(areaUUID);
}
}
```
TsLiveDirectorGameInstance.ts
```typescript
Run(CMDStr: $Ref<string>) : void{
let consoleCommand = $unref(CMDStr)
TsDirectorConsoleCommandHandler.HandleConsoleCommand(this, consoleCommand)
}
```
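The per-arity `switch` in `HandleConsoleCommand` is easy to get wrong (the 4- and 5-argument cases were off by one). A minimal dispatch sketch, assuming nothing about UE; the `SetRes`/`EnterArea` handlers here are illustrative stubs, not the real command implementations:

```typescript
// Commands are looked up by name and the remaining tokens are spread as
// arguments, which removes the need for a fragile per-arity switch.
type CmdFunc = (...args: string[]) => string;

const commands: Map<string, CmdFunc> = new Map([
  ["SetRes", (w: string, h: string) => `resolution ${w}x${h}`],
  ["EnterArea", (uuid: string = "default") => `enter ${uuid}`],
]);

function handleConsoleCommand(line: string): string {
  const parts = line.trim().split(/\s+/);
  const func = commands.get(parts[0]);
  if (!func) {
    return `Not exist cmd ${line}`;
  }
  // Spread every token after the command name.
  return func(...parts.slice(1));
}
```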
## SetDirectorModeStr
Defined in `ULiveDirectorGameInstance`.
TsLiveDirectorGameInstance.ts
## Other useful functions
```ts
static _HipsTranslation(gameInstance: TsLiveDirectorGameInstance, value:number): void {
var actors = UE.NewArray(UE.Actor)
UE.GameplayStatics.GetAllActorsOfClass(gameInstance, TsIdolActor.StaticClass(), $ref(actors))
for (var i = 0; i < actors.Num(); i++) {
var model = actors.GetRef(i) as TsIdolActor
if (model) {
var anim = model.Mesh.GetAnimInstance() as UE.IdolAnimInstance
let fullbodyNode = Reflect.get(anim, 'AnimGraphNode_Fullbody') as UE.AnimNode_FullBody
if (fullbodyNode) {
//fullbodyNode.bUseHipsTranslation = value > 0
}
anim.SetRootMotionMode(value > 0 ? UE.ERootMotionMode.NoRootMotionExtraction : UE.ERootMotionMode.RootMotionFromEverything)
model.RootComponent.K2_SetRelativeLocationAndRotation(new UE.Vector(0, 0, model.CapsuleComponent.CapsuleHalfHeight), new UE.Rotator(0, 0, 0), false, null, false)
console.warn("use hips translation " + (value > 0))
}
}
}
```
# RuntimeEditor plugin
# Level-3 director console
run 3
# MotionProcess
The asset lives at UIAssets/Character/WBP_CharacterItem.
The UI logic is TsCharacterItem in TsCharacterItem.ts.

@@ -0,0 +1,148 @@
# Preface
Default data path: C:\LiveDirectorSaved\Sequoia
Usage:
1. In the level-4 console, press `Ctrl + Shift + D` and tick the Sequoia editor to show it.
2. Ctrl + …: cut a track.
# 相关类
- TS: `LiveDirector\Script\Sequoia`
- TsSequoiaManagerActor
- OnPlayButtonClicked(): the Sequoia play entry point. It opens the Sequoia serialized data, then creates or fetches a player and plays/stops it.
- TsSequoiaData => USequoiaData => USequoiaObject
- TsSequoiaBinding => USequoiaBinding => UNameableSequoiaObject => USequoiaObject
- TsSequoiaTake => USequoiaTake => UNameableSequoiaObject => USequoiaObject
- *SequoiaDirectorCamTake*
- TsSequoiaTrack => USequoiaTrack => UNameableSequoiaObject => USequoiaObject
- CharacterLiveLinkAnimTrack
- SequoiaMotionTrack
- *SequoiaCamShotTrack*(SequoiaCamShotEvalTemplate )
- *SequoiaCamTargetTrack*(SequoiaCamTargetEvalTemplate)
- SequoiaAudioTrack
- TsSequoiaSection => USequoiaSection
- *SequoiaCamSection*(TS)
- *SequoiaCamTargetSection*(TS)
- TsSequoiaSectionWithFileRef
- CharacterLiveLinkAnimSection
- SequoiaMotionSection
- SequoiaAudioSection
- ISequoiaEvalTemplate
- *SequoiaCamShotEvalTemplate*
- *SequoiaCamTargetEvalTemplate*
- CharacterLiveLinkAnimEvalTemplate
- SequoiaMotionEvalTemplate
- SequoiaAudioEvalTemplate
- ICamShotEvalHandle
- *SingleCamShotEvalHandle*
- *DoubleCamShotEvalhandle*
- c++: `LiveDirector\Source\Modules\Sequoia`
- SequoiaPlayer
- PlayInternal(): playback logic; mainly calls `SequoiaData->Evaluate();`
- USequoiaObject => UObject
# Playback logic
```
TsSequoiaManagerActor@OnPlayButtonClicked: start play : 大聲鑽石
[2024.11.26-04.21.03:648][613]Puerts: (0x00000BD7686682F0) SequoiaManager@ Composer: On start playing...
[2024.11.26-04.21.03:649][613]Puerts: (0x00000BD7686682F0) DirectorCamSequoiaHandle : Enter CamTarget Section: Idol.JiaRan
[2024.11.26-04.21.03:649][613]Puerts: (0x00000BD7686682F0) DirectorCamSequoiaHandle : play Cam Section: ZhuJiwei_Zheng16-24mm group:CC8F4D734664869EC8FE788E7550AC31 index:0 scrub:false
[2024.11.26-04.21.03:665][614]Puerts: (0x00000BD7686682F0) request PGM: WorkShop
```
1. Clicking Play in the Sequoia UI calls TsSequoiaManagerActor::OnPlayButtonClicked().
2. SequoiaPlayer::PlayInternal() sets the time range.
3. USequoiaData::Evaluate():
	1. calls every USequoiaBinding::Evaluate(),
		1. which calls every USequoiaTrack::Evaluate();
	2. calls every USequoiaTake::Evaluate(),
		1. which calls every USequoiaTrack::Evaluate().
PS. Recording camera data in Sequoia actually creates SequoiaCamShotTrack and SequoiaCamTargetTrack tracks.
## USequoiaTrack::Evaluate()
```c++
void USequoiaTrack::Evaluate(TRange<FFrameTime> EvaluationRange, ESequoiaEvaluateType EvalType)
{
Super::Evaluate(EvaluationRange, EvalType);
TArray<FSequoiaEvalSection> EvalSections;
	USequoiaUtil::GetEvalSections(Sections, EvaluationRange, EvalSections); // get the sections within the playback range
	OnEvaluate(EvalSections, EvaluationRange.GetLowerBoundValue(), EvaluationRange.GetUpperBoundValue(), EvalType); // fire the BlueprintImplementableEvent on the blueprint class
}
```
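A sketch of what the range classification behind `GetEvalSections` might look like; the engine types are not available here, so `Section` and the mode names are simplified stand-ins for `FSequoiaEvalSection` and `ESequoiaEvaluateMode`, and the exact rules are an assumption:

```typescript
// Classify a section against the evaluation range: a section covering the
// whole range is "Inside"; one whose start falls inside the range was just
// entered, i.e. "JumpIn"; everything else is outside.
interface Section { start: number; end: number; }
type EvalMode = "Inside" | "JumpIn" | "Outside";

function classify(section: Section, evalStart: number, evalEnd: number): EvalMode {
  if (section.start <= evalStart && section.end >= evalEnd) return "Inside";
  if (section.start > evalStart && section.start <= evalEnd) return "JumpIn";
  return "Outside";
}
```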
TsSequoiaTrack overrides OnEvaluate():
```ts
OnEvaluate(EvalSections: $Ref<UE.TArray<UE.SequoiaEvalSection>>, EvalStartTime: UE.FrameTime, EvalEndTime: UE.FrameTime, EvalType: UE.ESequoiaEvaluateType) : void{
if(!this.CanEvaluate() || !EvalSections){
return
}
if(!this.evalTemplate){
this.evalTemplate = this.CreateTemplate()
if(!this.evalTemplate){
return
}
this.evalTemplate.InitTemplate(this)
}
let newEvalSections = new Array<TsSequoiaSection>()
let evalSectionsRef = $unref(EvalSections)
for(let index = 0; index < evalSectionsRef.Num(); index ++){
let sectionRef = evalSectionsRef.GetRef(index)
let tsSection = sectionRef.Section as TsSequoiaSection
if(!sectionRef || !tsSection){
continue
}
if(sectionRef.EvalMode == UE.ESequoiaEvaluateMode.EEM_Inside || sectionRef.EvalMode == UE.ESequoiaEvaluateMode.EEM_JumpIn){
newEvalSections.push(tsSection)
if(newEvalSections.length >= MAX_EVAL_COUNT){
break
}
}
}
let bTemplateSourceChanged = this.IsTemplateSourceChanged(newEvalSections)
if(bTemplateSourceChanged){
this.evalTemplate.SetTemplateSource(newEvalSections, EvalType)
}
this.evalTemplate.Evaluate(EvalStartTime, EvalEndTime, EvalType)
this.lastEvalType = EvalType
}
```
The main logic is:
1. Create the typed evalTemplate, then call `evalTemplate.InitTemplate()`.
2. Collect all EvalSections whose ESequoiaEvaluateMode is EEM_Inside or EEM_JumpIn.
3. If the template source changed, call `evalTemplate.SetTemplateSource()`.
4. Call `evalTemplate.Evaluate()`.
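The caching behavior in OnEvaluate can be sketched outside UE as follows; `Template`, `sourceSets` and `evals` are illustrative counters, not the real `ISequoiaEvalTemplate` API:

```typescript
// The template is created once, its source is reset only when the active
// section set changes, and Evaluate runs on every call.
interface Template {
  source: string[];
  sourceSets: number; // how many times SetTemplateSource ran
  evals: number;      // how many times Evaluate ran
}

function onEvaluate(track: { template: Template | null }, active: string[]): Template {
  if (!track.template) {
    track.template = { source: [], sourceSets: 0, evals: 0 }; // created once
  }
  const t = track.template;
  const changed =
    t.source.length !== active.length ||
    t.source.some((s, i) => s !== active[i]);
  if (changed) {
    t.source = [...active]; // SetTemplateSource only on change
    t.sourceSets++;
  }
  t.evals++; // Evaluate every tick
  return t;
}
```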
## ISequoiaEvalTemplate (e.g. SequoiaCamShotEvalTemplate)
- InitTemplate
- SetTemplateSource
- Evaluate
## SequoiaCamSection
SequoiaCamSection => TsSequoiaSection.
- Its data model class is SequoiaCamSectionModel.
- SequoiaCamShotEvalTemplate
# Misc
## Adding a custom track
To add a custom track to Sequoia, roughly:
1. Most of the extension logic lives in SequoiaCustomBuilderTools.ts.
2. Add the custom type to BindingType, TrackType and SectionType in SequoiaCustomBuilderTools.ts, and add the parent-child relation to the relation map (BindingToTrackMap).
3. Create an extension folder under the Sequoia code folder and create the matching TsBinding, TsTrack, TsSection UObject and Model classes; DirectorCam is a good reference.
4. Model files are for data serialization and storage and normally must not use UE types; the UObject files hold the real logic.
5. Create the Binding and BindingModel classes, defining AssignModel and the constructor to carry the data:
	1. Add creation of the new Model type in SequoiaCustomBuildertool.CreateBindingModel and CreateEmptyBindingModelByBindingType.
	2. Add creation of the new Binding type in SequoiaCustomBuildertool.CreateBinding.
	3. Track, Take and Section are added in CustomBuilderTool the same way as Binding.
	4. That completes the data definitions and code.
6. For recording, first create the recording logic as a subclass of ISequoiaTakeRecorder.
7. In SequoiaHelper.BuildTakeRecorders, create the matching recorder from the parameters.

@@ -0,0 +1,9 @@
# Startup logic
1. ULiveDirectorGameInstance::ParseCommandLine() parses the DirectorMode and PGMMode strings and applies them.
2. SetDirectorModeStr()
3. SetDirectorMode()
4. Fires the blueprint event OnDirectorModeChanged().
TsLiveDirectorGameInstance extends UE.LiveDirectorGameInstance
TsDirectorCamManagerActor.ts
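A sketch of what ParseCommandLine-style switch parsing might look like; the `-DirectorMode=`/`-PGMMode=` flag names follow the note above, while the parsing code itself is an assumption, not the engine implementation:

```typescript
// Pull a `-Name=Value` token out of a UE-style command line; returns null
// when the switch is absent.
function parseSwitch(cmdLine: string, name: string): string | null {
  const match = cmdLine.match(new RegExp(`-${name}=([^\\s]+)`, "i"));
  return match ? match[1] : null;
}
```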

@@ -0,0 +1,20 @@
# Related classes
- WBP_LevelFunctionView
- TsMapEnvironmentFunctionManager
- TsCommonVisibleLevelFunction
- TsCommonGravityLevelFunction
- TsCommonTimeLevelFunction
- TsCommonVariantComponent
- TsCommonWindLevelFunction
- TsBaseLevelFunctionActor
# The moving LiveArea in the candy factory
- UI: TsCandyFactoryDetails.tsx
- Logic: /ResArt/Scene/Map_Stylized_Vilage/BP/BP_CandyFactoryLift
- Base class: TsBaseLevelFunctionActor
The two rings rotate to a position relative to the origin, then stop.
# Area animations
/Props/AreaSequence/天空之城_降落

@@ -0,0 +1,4 @@
Identity check logic:
```ts
  if (Utils.IsCurDirectorModeEqual(this, DirectorMode.IdolControllerMaster))
```

@@ -0,0 +1,532 @@
# TODO
1. [x] Is EditorTools\\Python\\MaterialSetCharacter\\SetCharacter.py in the project directory unused?
	1. A tool script used in the past; no longer needed.
2. [ ] 思诺 birthday show
	1. [ ] Prop and outfit import workflow.
	2. [ ] Run through the live workflow.
3. [ ] Run through the Sequence playback logic.
	1. [ ] Implement a new track in C++ to capture the reference to the matching character Actor.
	2. [ ] Put the Cut/Take logic at the end of playback.
	3. [ ] Move this playback feature into the level-4 console.
4. [ ] DCC plugin
	- [ ] Re-rig characters and outfits, plus retargeting.
		- Transfer skin weights from the bare character mesh to the outfit, then run the axis-orientation fix script.
	- [ ] Match the in-house ASoul corrective plugin in the anim blueprint, i.e. export a skeleton and BlendShape JSON from Maya, then import it into UE.
5. [ ] ChaosBone
	1. Reference asset mixing KawaiiPhysics with ChaosBone: `Content\ResArt\CharacterArt\BeiLa\BeiLa_SwimSuit\Animations\ABP_SK_BeiLa_Swimsuit_PostProcess.uasset`
6. [ ] Consider porting the Sequence TakeRecord feature into the director project; confirm feasibility first.
# Console
1. run 0
2. run 3: run the level-3 console.
# Live workflow
PS.
1. Bitrate 12000, 1080p: one feed takes ~35 Mbps up/down. External streaming needs primary and backup lines; ASoul's primary line is symmetric 200M.
## Director-console Server/Client startup
1. Render machines (machine room):
	1. Start the Server first: on the desktop find the `ue5 - 直播Server` (or `ue4`) startup bat.
	2. Then start the other render machines: `ue5直播用`, the StartClient_win startup bat.
2. Director-console clients (any order):
	1. Find the `ue5直播用` folder on the desktop and run the bats.
	2. StartClient_MotionProcessor.bat and MotionServer.bat (top-right machine #1).
	3. StartClient_MapEnvironment opens the level-4 console (top-right #2).
	4. StartClient_win opens the level-3 console; enter the command Run 3 (bottom-right #2).
3. Start the ChingMU (青瞳) client (bottom-right #1):
	1. Start the mocap data test tool Qingtong_Record and pick a recording to test connectivity; confirm mocap data arrives at the top right.
	2. Close Qingtong_Record and start CMTracker (desktop shortcut).
	3. In MotionProcessor, confirm the ChingMU machine's IP is correct.
	4. Confirm the mocap-stage devices.
4. Mocap device check: confirm data flows into MotionProcessor normally.
5. Start the streaming machine: open OBS, fill in the push URL, ping the intranet and run a web speed test to confirm the network is fine (top-left #2, bottom-left #1; either of the two lower-left machines works).
6. Start vMix and connect it to OBS (either of the two lower-left machines).
7. Push a private test stream with the private Bilibili/Douyin test accounts to confirm streaming and audio output work.
8. Confirm devices:
	1. Confirm the cameras of the tablets, iPhones, etc. in use are taped over.
9. Once the mocap actors and gear are in place, restart MotionProcessor.
10. Checks:
	1. Check the virtual idol models sit on the ground (not sunken or floating).
	2. Switch an outfit and check the shoulders; if they look wrong, correct them with MotionProcessor.
	3. PVW/PGM time sync: click the time-sync button in StartClient_win.
## Using the director console
1. Playing VJ videos:
	1. Place a remote VJ-playback Actor remotely.
2. Remote voice calls:
	1. Connect via phone.
	2. Route through the sound card into the mixer.
## StreamDock
- Press the first button of the first row three times to switch to the next page.
- Cameras split into dedicated cameras (defined in the LiveArea) and common cameras (usable in every scene).
- The method for adding cameras also works for playing VFX and anything else a Sequence can do (月哥 has notes):
	1. Create a Sequence and key the camera.
	2. Create a CameraGroup DataAsset.
	3. Add that asset to the XX array under the LevelArea in the scene.
	4. Click the director tool button.
## Adding props, items, scenes, VFX
1. Adding items/props:
	1. Under Content/Props/CharacterProps/Items, find a prop of the matching type, duplicate it, and swap the mesh asset.
	2. Prop class: TsIdolPropActor.
	3. Has outline, skeletal mesh, static mesh and Niagara components.
	4. Properties:
		1. Skeletal mesh Socket: set via the class property MountPointToTransform.
		2. Prop:
			1. DisplayName: the name shown in the director UI.
2. Adding scenes
3. Adding characters
4. Adding outfits
## What the bats do
1. StartListenServer: the server.
2. Director console:
	1. StartClient_Win: PVW preview screen, PGM streaming machine, small Preview window ×2.
	2. StartClient_Win_VideoProcess: video processing; pushes video to the other two Views.
	3. StartClient_MapEnvironment: map/scene control for the console (level 4; switch at the level-1 View-Layout).
	4. StartClient_IdolController_Master: character control (level 3).
	5. StartClient_HandHeldCam: hand-held camera.
	6. StartClient_MotionProcessor: mocap.
	7. Offline:
		1. PGM2
		2. PGMCameraRenderer
	8. Pico:
		1. StartClient_PicoClient_0
		2. StartClient_PicoClient_1
		3. StartClient_PicoClient_2
		4. StartClient_PicoClient_3
3. Start the editor: StartEditor_Win.bat in the project directory.
4. MotionServer: mocap related.
## Changing configuration
1. MotionServer: the IP is specified by editing the source.
2. The StartClient_MapEnvironment.bat above: edit the Server IP.
3. Switching identity / debugging (StartClient_IdolController_Master, PVW preview screen, PGM streaming machine, small Preview windows): ChangeNetTag Operator.IdolController.Master
# Project
1. Designer: StreamDock plugin related, including the StreamDock plugin project that connects directly to the packaged UE build, reads the config, then auto-generates and loads icon presets, plus several preset files.
2. **Engine**: compiled engine. ShaderModels added, shaders modified, some editor code added.
3. **LiveDirector**: the director console project.
4. StartLiveDirector: startup bat files.
5. StartLiveDirectorCluster: distributed startup solution, with a library.
6. Tools: some third-party pieces:
	1. JS encryption: encrypts code for handing to third parties.
	2. MotionReplayer: mocap data replay tool.
	3. **MotionServer**: mocap data processing server; mainly forwards ChingMU mocap data to the other clients (MotionServer - Server - DirectorLive Client; the anim blueprint's IsMotionProcess variable distinguishes the role).
		1. Also forwards data from the in-house facial-capture app FaceMask.
	4. obs-studio: OBS source with several plugins added, but unusable because the tech service belongs to ByteDance.
	5. PixelStream: ported from official UE (UE4 version) with small changes; the UE5 version is unused.
	6. Protobuf: mocap data transport; Google's protocol.
	7. VCluster: unfinished; meant to bring up the director programs on all machines.
## Retargeting logic
1. In the MotionProcessor level, each character's anim blueprint does the retargeting.
	1. A Control Rig node performs the retarget. It receives MotionScale, SpineScale, ArmScale, HipsDiff plus MocapData, ShoulderDiff and ZOffset, then runs custom math. The advantage is flexibility.
2. TsMotionRetargetComponent.ts
3. Mounted on BP_IdolController.
The slider to the left of shoulder calibration adjusts the ratio between the mocap performer and the in-game model:
- Default: a computed ratio that gives the best walking result for the character's motion.
- Align: for group dances, all connected characters share one ratio (matched to each other, tuned for the legs).
- Align (prop mode, not recommended, half deprecated): for group dances, all connected characters share one ratio (matched to each other, tuned for the hands).
## Render pipeline
1. Added ShaderModels:
	1. ToonLit: passes shadow into Diffuse and Specular via CustomData.
	2. ToonCustomBxDF: only the Diffuse shadow falloff; Specular comes from Matcap.
2. Modified ShadingModels.ush.
3. Encode/Decode GBuffer.
4. UE5 Encode/Decode GBuffer.
# Plugins
1. AssetProcess: in-house asset compliance and safety checks.
2. AVAudioUE4: marketplace audio playback library.
3. ChaosBone: in-house bone-physics simulation plugin.
4. ChingReciver: ChingMU's plugin.
5. DataTableEditorUntilit: data table plugin; marketplace.
6. DirectAssistanter: helper tools.
7. DTWebBrower: ported from UE; embedded web browser.
8. ControlRig: ported from UE.
9. FacialExpression: in-house facial-capture driving plugin.
10. FFMepg: FFmpeg port.
11. GFurPro: fur plugin.
12. GloveProcess: in-house glove data post-processing plugin; a machine-learning algorithm matches poses and outputs the best one.
13. JNAAniamtion: in-house animation editing.
14. KantanChert: marketplace chart plugin.
15. KawaiiPhysics
16. LDAssist: marketplace art editing tools.
17. MotionCapture: ChingMU's plugin.
18. NDIO: NDI IO.
19. PixelCapture: official.
20. PixelStream
21. Protobuf
22. puerts
23. ReactUMG
24. RuntimeImportAudio: marketplace.
25. RuntimeEditor: the ASoul director console's runtime editor module.
26. SerialComPLugin: serial-port plugin for some lights; unused now.
27. SimpleTCPServer: official port with changes; in use.
28. SimpleUDP: official port with changes; in use.
29. SPCR: cloth plugin.
30. StreamDockLink
31. TextureShare: in-house Pico-related plugin; unused now.
32. VRCapture: in-house, Pico related.
33. VRPlaybackUE: in-house, Pico related.
34. VRTrack: in-house, VR glove related.
## JNAAniamtion
ByteDance's in-house skinned-character corrective plugin, implementing LinearPsdSlover, PsdSlover, TwistSlover, ComposeDriver and SkinWeightDriver.
Export the character's base-structure JSON with the mid-platform Maya plugin, rename the extension to **UJSON**, then import it.
1. [x] JNAnimation: empty.
2. [x] JNAnimationEd: defines the anim node **AnimGraphNode_JNPoseDriver**.
3. [x] JNCustomAssetEd: defines the skinned-corrective asset; the data format lives in **UJNPoseDriverAsset**.
4. [x] JNCustomAsset: defines the JSON data carrier UJNPoseDriverAsset (UBaseJsonAsset -> UObject).
5. [ ] [[#JNAnimationTools]]
	1. FAnimNode_JNPoseDriver
	2. FComposeDriver
	3. FLinearSolver
	4. FPoseDriverUtils
	5. FPSDSlover
	6. FSkinWeightDriver
	7. FSolverDriver
	8. FTwistSolver
### JNCustomAsset
```c++
USTRUCT(BlueprintType)
struct FDrivenInfos
{
GENERATED_BODY()
public:
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FString> BlendShape;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FString> Joint;
};
USTRUCT(BlueprintType)
struct FAniCruveInfo
{
GENERATED_BODY()
public:
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<float> Input;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
FDrivenInfos DrivenInfos;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<float> Tangent;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<float> OutTangent;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<float> InTangent;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<float> Value;
};
USTRUCT(BlueprintType)
struct FPSDAniCurveInfo
{
GENERATED_BODY()
public:
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FAniCruveInfo> B;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FAniCruveInfo> U;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FAniCruveInfo> D;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FAniCruveInfo> F;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FAniCruveInfo> DF;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FAniCruveInfo> UF;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FAniCruveInfo> DB;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FAniCruveInfo> UB;
};
USTRUCT(BlueprintType)
struct FPSDSloverInfo
{
GENERATED_BODY()
public:
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
FPSDAniCurveInfo aniCurveInfos;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
FString driver;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<float> matrix;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
FString parent;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<float> aimAxis;
};
USTRUCT(BlueprintType)
struct FLinearSolverInfo
{
GENERATED_BODY()
public:
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
float coefficient;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
FString attribute;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FAniCruveInfo> aniCurveInfos;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
FString driver;
};
USTRUCT(BlueprintType)
struct FComposeDriverInfo
{
GENERATED_BODY()
public:
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FString> curveName;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FString> blendshape;
};
USTRUCT(BlueprintType)
struct FSkinWeightDriverInfo
{
GENERATED_BODY()
public:
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
int index;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
FString joint;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FString> influenceObjects;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<float> weights;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<float> initPoint;
};
USTRUCT(BlueprintType)
struct FTwistSloverInfo
{
GENERATED_BODY()
public:
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
FString inputJoint;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
int twistAxis;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FString> twistJoints;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
bool isReverse;
};
USTRUCT(BlueprintType)
struct FPoseDriverSolversInfo
{
GENERATED_BODY()
public:
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FPSDSloverInfo> psdSolvers;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FLinearSolverInfo> linearSolvers;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FComposeDriverInfo> composeDrivers;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FSkinWeightDriverInfo> skinWeightDrivers;
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
TArray<FTwistSloverInfo> twistSolvers;
};
UCLASS(BlueprintType)
class JNCUSTOMASSET_API UJNPoseDriverAsset : public UBaseJsonAsset
{
GENERATED_BODY()
public:
virtual bool ParseFromJsonObject(const TSharedRef<FJsonObject>& JsonObjectRef) override;
public:
UPROPERTY(BlueprintReadWrite, EditDefaultsOnly)
FPoseDriverSolversInfo SloversInfo;
};
```
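A sketch of loading such a UJSON file outside UE; only `linearSolvers` is modeled here, mirroring `FLinearSolverInfo` above, and the loader itself is an assumption (the real plugin parses via `ParseFromJsonObject`):

```typescript
// Minimal shape mirroring FLinearSolverInfo / FPoseDriverSolversInfo.
interface LinearSolverInfo {
  coefficient: number;
  attribute: string;
  driver: string;
}

interface PoseDriverSolversInfo {
  linearSolvers: LinearSolverInfo[];
}

function parsePoseDriverJson(text: string): PoseDriverSolversInfo {
  const raw = JSON.parse(text);
  // Missing arrays default to empty, as UPROPERTY containers do.
  return { linearSolvers: raw.linearSolvers ?? [] };
}
```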
### JNAnimationTools
# Script
1. DirectorCam: everything around the 24 cameras.
2. Editor: RuntimeEditor related.
3. LiveDirector
	1. [ ] Character/TsIdolActor.ts: [[角色流程#TsIdolActor.ts]]
4. Camera
5. Characrer:
6. Danma: danmaku views and control.
7. DeckLinkViewProcess: video processing, e.g. UI overlays.
8. DecorationUI: UMG class definitions.
9. DeviceINput: Media Actors and the serial-port controller Actor.
10. DirectorFrameWork: GameMode, Control, UIManager and similar framework code.
11. DirectorToolMenu: editor UI.
12. Level: scene-switch controller.
13. LiveArea: live areas.
14. MapEnvironment: in-level effects and related logic; weather control.
15. Pico related.
16. Prop: props.
17. QuickControl: simple UI controllers.
18. ScreenPlayerTextureRenderer: renders video to a texture, then renders it in the scene.
19. SeiSender: OBS SEI info.
20. VideoStreamTransition: transitions (flash to white, cut to video, etc.).
21. Python: small tools.
22. Sequoia: camera-move recording/editing tool; an in-house Sequence-like runtime editor; controls cameras and lights.
23. SimpleLiveDirector: the simplified build for external vendors.
## Source
1. AppShells: half finished, not used yet.
2. Editor/HotKeyManager: hotkeys, configurable.
3. LiveDirector
4. LiveDirectorEditor
5. Module:
	1. BlackMagicInput: ported from official UE; capture card.
	2. BlackMagicOutput: ported from official UE; capture card.
	3. DeckLinkOuput: ported from official UE; capture card.
	4. GameCluster: unfinished.
6. MultiViewRenderer: the 20-view UI.
7. UIModule: UI style definitions.
### HotKeyManager
Implements the UI widgets SHotKeyEditor and SHotkeyInput. Its main job is registering a tab named HotKeyManager that hosts those two widgets.
TODO: find where it is used.
### LiveDirector
#### AIACao (most likely never used again)
AI/NLP related; you must hook up your own service.
#### AIgo: one-euro filter
TODO: find where it is used.
#### Animation
- FAnimNode_FullBody: LiveDirector clients that are **not the MotionProcessor** use this node in their anim blueprints.
- FSequenceMotionSource: a singleton class with no base class; presumably related to data recorded by the Sequoia system.
#### Camera
- DirectorCamera
- CameraCopySettings
- CamTarget
- CamTargetManagerActor
- DroneCamera
## Material
ResArt/CommonMaterial/Material: M_ToonHair_v01; v02 is the new version.
- CommonMaterial/Functions: the ShadingModel tricks.
## Character
- BP_Idol_Base
	- inherited by =>
## LightChannel
1. Characters use LightChannel 2.
## Assets
- Characters: character blueprint classes.
	- Naming rule: `BP_<CharacterName>_XXX`
	- Configured by the tech team,
	- mainly the Outline material.
	- DressName: the director software will …
	- Anim blueprint: outfit cloth effects.
- ResArt
	- CharacterArt
		- Material
		- Mesh
		- Animations
	- Dissolve: the dissolve materials' count and names must match the character materials one to one.
		- MakeDissolveMaterial.py generates the matching dissolve materials.
Scenes:
- BP_ASoulSky: skybox control.
- Scene variants use UE's official Variant Manager.
	- LiveDirector - Editor - SceneVariant: SceneVariantTool
	- A utility blueprint creates scene variants, which implement the dissolve-style scene switch.
### Asset workflows
#### Characters
If you only changed a few asset values, click full regeneration, or directly edit the JSON file IdolPropAesstConfig.
#### Props
Tags control:
1. who can hold it,
2. the holding gesture,
3. the prop type,
4. the left/right-hand MountPoint data.
#### Scenes
1. Characters need a `LiveArea` Actor.
2. Sequences and cameras need a `CameraRoot` Actor that fully overlaps the LiveArea.
	1. ~~FollowMovementComponent: various camera-follow targets (IdolName, Socket, Bone)~~
	2. Cameras mount a FollowingComponment.
# Driving characters and cameras with Sequences
1. Sequence camera control must mask ASoul's own camera control, i.e. the DirectorCam features.
	1. In the DirectorCam C++ directory, look mainly at the Subsystem and the CameraManager.
	2. The main coupling lives in the Puerts scripts: TsDirectorCamManagerActor in the DirectorCam directory, plus the StreamDock folder under it.
	3. 1 LiveArea - 1 WorkShop - several CameraGroups - 24 cameras.
2. Switching between recorded and mocap animation: modify the anim blueprint and feed the recorded animation in.
3. New feature: switching between the 24 cameras and Sequence cameras, implemented in DirectorCam - StreamDock.
# Correctives
The anim blueprint node drives both BlendShapes and bones.
# StreamDock
1. Refresh StreamDock after the relevant operations.
2. Config blueprints: EUWBP_
	1. Tied to the LiveArea.
		1. The Dock reads BP_DefaultCamPlacem's target Setup in real time.
	2. Generating a LiveArea template requires the tool: DirectorTool - Generate LiveArea.
# FaceMask location
P4V - Trunk - tools - ARFaceCap
- Unity version 2020.1.16
# MotionServer
ChingMU (青瞳) -> UE MotionServer (retargeting, filtering) -> .NET MotionServer forwarding.
# UE5 changes
1. Code: adds the large-world logic, world partition (WorldComposition). The rest of the logic is the same.
2. Materials/rendering: a trick force-enables a macro in the material so CustomData can carry ShadowColor.
## Version upgrades
1. Programmers upgrade the code; QA tests it.
2. TA checks the visuals; QA observes.
# Content
## Maps
### Development
- Maps
	- Scenes
		- Map_LookDev
		- Map_Hide
		- Map_Lightt
- Map_LookDev: character LookDev
- Map_LookDev_WZY
- Map_Props
### Large world
- Maps
	- Map_SeasideCity
		- Map_SeasideCity
	- Map_CHNature
		- Map_CHNature
	- Map_Stylized_Vilage
		- Map_Stylized_Vilage
	- Map_WorlddEnd
		- Map_WorldEnd
Other small maps built on the large-world tech:
- Maps
	- Map_GreenScreen: green-screen effects.
	- Map_Live: live scene.

@@ -0,0 +1,63 @@
# Building
1. Install CUDA Toolkit 11.8.
2. Add the system variable CUDA_PATH with the value "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8".
3. After generating the solution, …
# Porting to the stock engine
1. Replace the modified engine usf files.
2. Remove the modified part of PostProcessTonemap.usf.
	1. `tonemapping gained one extra texture, added from C++`
```hlsl
void MainPS(
in noperspective float2 UV : TEXCOORD0,
in noperspective float2 InVignette : TEXCOORD1,
in noperspective float4 GrainUV : TEXCOORD2,
in noperspective float2 ScreenPos : TEXCOORD3,
in noperspective float2 FullViewUV : TEXCOORD4,
float4 SvPosition : SV_POSITION, // after all interpolators
out float4 OutColor : SV_Target0
#if OUTPUT_LUMINANCE
, out float OutLuminance: SV_Target1
#endif
)
{
float Luminance;
FGBufferData SamplerBuffer = GetGBufferData(UV * View.ResolutionFractionAndInv.x, false);
if (SamplerBuffer.CustomStencil > 1.0f && abs(SamplerBuffer.CustomDepth - SamplerBuffer.Depth) < 1)
{
OutColor = SampleSceneColor(UV);
}
else
{
OutColor = TonemapCommonPS(UV, InVignette, GrainUV, ScreenPos, FullViewUV, SvPosition, Luminance);
}
#if OUTPUT_LUMINANCE
OutLuminance = Luminance;
#endif
}
```
Restored to the original form:
```hlsl
void MainPS(
in noperspective float2 UV : TEXCOORD0,
in noperspective float2 InVignette : TEXCOORD1,
in noperspective float4 GrainUV : TEXCOORD2,
in noperspective float2 ScreenPos : TEXCOORD3,
in noperspective float2 FullViewUV : TEXCOORD4,
float4 SvPosition : SV_POSITION, // after all interpolators
out float4 OutColor : SV_Target0
#if OUTPUT_LUMINANCE
, out float OutLuminance: SV_Target1
#endif
)
{
float Luminance;
FGBufferData SamplerBuffer = GetGBufferData(UV * View.ResolutionFractionAndInv.x, false);
OutColor = TonemapCommonPS(UV, InVignette, GrainUV, ScreenPos, FullViewUV, SvPosition, Luminance);
#if OUTPUT_LUMINANCE
OutLuminance = Luminance;
#endif
}
```

@@ -0,0 +1,158 @@
TsScreenPlayerTextureRenderer => AMultiViewActor
# 渲染逻辑
- UMultiViewRendererComponent::DrawMultiViewCameras()
渲染函数:GetRendererModule().BeginRenderingViewFamily(&SceneCanvas, &ViewFamily);
摄像机相关函数:FSceneView* UMultiViewRendererComponent::CalcSceneView(FSceneViewFamily* InViewFamily, UCineCameraComponent* InCamera,
const uint32 InViewIndex)
# 多屏与采集卡
以Preview为例
TsDirectorCamManagerActor.ts
```ts
this.PreviewWindow = UE.MultiViewActor.Open(this.GetWorld(), UE.EMultiViewCameraLayout.Display_1920x1080_Layout_4x4, UE.EMultiViewMultiGPUMode.HalfSplit)
this.PreviewWindow.SetRenderFeaturePlanarReflection(false);
this.PreviewWindow.SetRenderFeatureNiagara(false);
// video output
let videoOutputParam = new UE.VideOutputParam()
videoOutputParam.bBlackMagicCard = false
videoOutputParam.bLazyStart = false
this.PreviewWindow.StartVideoOutput(videoOutputParam)
```
PVW & PGM use **BLACKMAGIC_OUTPUT_CONFIG_HORIZONTAL**, i.e. /Game/ResArt/BlackmagicMedia/MO_BlackmagicVideoOutput.
```ts
export function StartVideoOutput(camManager : TsDirectorCamManagerActor, targetWindow : UE.MultiViewActor):void{
let videoOutpuParam = new UE.VideOutputParam()
videoOutpuParam.FilmbackMode = camManager.FilmbackMode
videoOutpuParam.OutputConfigPath = BLACKMAGIC_OUTPUT_CONFIG_HORIZONTAL
videoOutpuParam.bBlackMagicCard = true
videoOutpuParam.bLazyStart = false
if(camManager.FilmbackMode == UE.EFilmbackMode.EFM_1080x1920){
videoOutpuParam.MatV2H = GetV2HMaterialInstace(camManager)
}
targetWindow.StartVideoOutput(videoOutpuParam)
}
```
- DirectorMode.Preview: bBlackMagicCard = false
- PVW & PGM: bBlackMagicCard = true
## C++
The core function is **AMultiViewActor::StartVideoOutput**.
# Where TS passes in the settings
- TsDirectorCamManagerActor.ts: SwitchToDirectMode(newTag: UE.GameplayTag)
- FilmbackHelper.ts: StartVideoOutput()
SwitchToDirectMode():
```ts
case DirectorMode.Preview:
console.log('启动Splite4x4预览窗口')
if (shouldCreateWindow) {
this.PreviewWindow = UE.MultiViewActor.Open(this.GetWorld(), UE.EMultiViewCameraLayout.Display_1920x1080_Layout_4x4, UE.EMultiViewMultiGPUMode.HalfSplit)
this.PreviewWindow.SetRenderFeaturePlanarReflection(false);
this.PreviewWindow.SetRenderFeatureNiagara(false);
// video output
let videoOutputParam = new UE.VideOutputParam()
videoOutputParam.bBlackMagicCard = false
videoOutputParam.bLazyStart = false
this.PreviewWindow.StartVideoOutput(videoOutputParam)
}
```
```ts
function StartVideoOutput(camManager : TsDirectorCamManagerActor, targetWindow : UE.MultiViewActor):void{
if(!BE_USE_DECKLINK){
let videoOutpuParam = new UE.VideOutputParam()
videoOutpuParam.FilmbackMode = camManager.FilmbackMode
videoOutpuParam.OutputConfigPath = BLACKMAGIC_OUTPUT_CONFIG_HORIZONTAL
videoOutpuParam.bBlackMagicCard = true
videoOutpuParam.bLazyStart = false
if(camManager.FilmbackMode == UE.EFilmbackMode.EFM_1080x1920){
videoOutpuParam.MatV2H = GetV2HMaterialInstace(camManager)
}
targetWindow.StartVideoOutput(videoOutpuParam)
}
}
```
# UDeckLinkMediaCapture
m_DeckLinkOutputDevice = DeckLinkDiscovery->GetDeviceByName(m_DeviceName);
```c++
FDeckLinkDeviceDiscovery::DeckLinkDeviceArrived(IDeckLink *device)
{
/*TComPtr<IDeckLink> device_;
device_ = device;*/
TComPtr<FDeckLinkOutputDevice> newDeviceComPtr = new FDeckLinkOutputDevice(device);
if (!newDeviceComPtr->Init())
return S_OK;
std::lock_guard<std::recursive_mutex> lock(m_DeviceMutex);
	FString deviceName = newDeviceComPtr->GetDeviceName(); // check whether this COM object's device name matches
if (!m_Devices.Contains(deviceName) )
{
m_Devices.Add(deviceName,newDeviceComPtr);
}
return S_OK;
}
```
The call site of DeckLinkDeviceArrived is in **DeckLinkAPI_h.h**.
## ADeckLinkOutputActor
ADeckLinkOutputActor's DeviceName defaults to "DeckLink Mini Monitor 4K".
The validation path that reports the error:
UMediaCapture::CaptureTextureRenderTarget2D() => UMediaCapture::StartSourceCapture() => ValidateMediaOutput()
```c++
bool UMediaCapture::ValidateMediaOutput() const
{
	if (MediaOutput == nullptr)
	{
		UE_LOG(LogMediaIOCore, Error, TEXT("Can not start the capture. The Media Output is invalid."));
		return false;
	}
	FString FailureReason;
	if (!MediaOutput->Validate(FailureReason))
	{
		UE_LOG(LogMediaIOCore, Error, TEXT("Can not start the capture. %s."), *FailureReason);
		return false;
	}
	if (DesiredCaptureOptions.bAutostopOnCapture && DesiredCaptureOptions.NumberOfFramesToCapture < 1)
	{
		UE_LOG(LogMediaIOCore, Error, TEXT("Can not start the capture. Please set the Number Of Frames To Capture when using Autostop On Capture in the Media Capture Options"));
		return false;
	}
return true;
}
```
```c++
bool UDeckLinkMediaCapture::InitBlackmagic(int _Width, int _Height)
{
if (DeckLinkDiscovery == nullptr)
{
return false;
}
Width = _Width;
Height = _Height;
check(Height > 0 && Width > 0)
BMDDisplayMode displayMode = GetDisplayMode(Width, Height);
m_DeckLinkOutputDevice = DeckLinkDiscovery->GetDeviceByName(m_DeviceName);
if (m_DeckLinkOutputDevice.Get() == nullptr)
{
return false;
}
if (!m_DeckLinkOutputDevice->EnableOutput(displayMode, bmdFormat8BitYUV))
{
m_DeckLinkOutputDevice.Reset();
return false;
}
return true;
}
```
***DeckLinkDiscovery->GetDeviceByName(m_DeviceName);***

# Related blueprint classes
BP_Live lets you assign a MediaPlayer and a MediaTexture, and replaces the EmissiveMap in the material of the blueprint's child StaticMesh with the MediaTexture.
# Director console
After that, put videos into the designated Saved folder and they can be played from the director console.
# NDI playback logic
NDI settings are added through props.
## Props
- BP_ProjectorD0
- BP_Screen011
## Related commented-out code
- TsMapEnvironmentAssets.ts
- TsMapEnvironmentSingleSelectItemView.ts
	- SetMediaData()
- TsScreenPlayerItemView.ts
	- SetData()
- TsScreenPlayerSelectItemPopupView.ts
	- ChangeMediaType()
# Fixing blurry NDI playback
- bool UNDIMediaReceiver::CaptureConnectedVideo()
```c++
bool UNDIMediaReceiver::Initialize(const FNDIConnectionInformation& InConnectionInformation, UNDIMediaReceiver::EUsage InUsage)
{
if (this->p_receive_instance == nullptr)
{
if (IsValid(this->InternalVideoTexture))
this->InternalVideoTexture->UpdateResource();
// create a non-connected receiver instance
NDIlib_recv_create_v3_t settings;
settings.allow_video_fields = false;
settings.bandwidth = NDIlib_recv_bandwidth_highest;
settings.color_format = NDIlib_recv_color_format_fastest;
p_receive_instance = NDIlib_recv_create_v3(&settings);
// check if it was successful
if (p_receive_instance != nullptr)
{
// If the incoming connection information is valid
if (InConnectionInformation.IsValid())
{
//// Alright we created a non-connected receiver. Lets actually connect
ChangeConnection(InConnectionInformation);
}
if (InUsage == UNDIMediaReceiver::EUsage::Standalone)
{
this->OnNDIReceiverVideoCaptureEvent.Remove(VideoCaptureEventHandle);
VideoCaptureEventHandle = this->OnNDIReceiverVideoCaptureEvent.AddLambda([this](UNDIMediaReceiver* receiver, const NDIlib_video_frame_v2_t& video_frame)
{
FTextureRHIRef ConversionTexture = this->DisplayFrame(video_frame);
if (ConversionTexture != nullptr)
{
if ((GetVideoTextureResource() != nullptr) && (GetVideoTextureResource()->TextureRHI != ConversionTexture))
{
GetVideoTextureResource()->TextureRHI = ConversionTexture;
RHIUpdateTextureReference(this->VideoTexture->TextureReference.TextureReferenceRHI, ConversionTexture);
}
if ((GetInternalVideoTextureResource() != nullptr) && (GetInternalVideoTextureResource()->TextureRHI != ConversionTexture))
{
GetInternalVideoTextureResource()->TextureRHI = ConversionTexture;
RHIUpdateTextureReference(this->InternalVideoTexture->TextureReference.TextureReferenceRHI, ConversionTexture);
}
}
});
// We don't want to limit the engine rendering speed to the sync rate of the connection hook
// into the core delegates render thread 'EndFrame'
FCoreDelegates::OnEndFrameRT.Remove(FrameEndRTHandle);
FrameEndRTHandle.Reset();
FrameEndRTHandle = FCoreDelegates::OnEndFrameRT.AddLambda([this]()
{
while(this->CaptureConnectedMetadata())
; // Potential improvement: limit how much metadata is processed, to avoid appearing to lock up due to a metadata flood
this->CaptureConnectedVideo();
});
#if UE_EDITOR
// We don't want to provide perceived issues with the plugin not working so
// when we get a Pre-exit message, forcefully shutdown the receiver
FCoreDelegates::OnPreExit.AddWeakLambda(this, [&]() {
this->Shutdown();
FCoreDelegates::OnPreExit.RemoveAll(this);
});
// We handle this in the 'Play In Editor' versions as well.
FEditorDelegates::PrePIEEnded.AddWeakLambda(this, [&](const bool) {
this->Shutdown();
FEditorDelegates::PrePIEEnded.RemoveAll(this);
});
#endif
}
return true;
}
}
return false;
}
```
The draw function:
```c++
/**
Attempts to immediately update the 'VideoTexture' object with the last capture video frame
from the connected source
*/
FTextureRHIRef UNDIMediaReceiver::DisplayFrame(const NDIlib_video_frame_v2_t& video_frame)
{
// we need a command list to work with
FRHICommandListImmediate& RHICmdList = FRHICommandListExecutor::GetImmediateCommandList();
// Actually draw the video frame from cpu to gpu
switch(video_frame.frame_format_type)
{
case NDIlib_frame_format_type_progressive:
if(video_frame.FourCC == NDIlib_FourCC_video_type_UYVY)
return DrawProgressiveVideoFrame(RHICmdList, video_frame);
else if(video_frame.FourCC == NDIlib_FourCC_video_type_UYVA)
return DrawProgressiveVideoFrameAlpha(RHICmdList, video_frame);
break;
case NDIlib_frame_format_type_field_0:
case NDIlib_frame_format_type_field_1:
if(video_frame.FourCC == NDIlib_FourCC_video_type_UYVY)
return DrawInterlacedVideoFrame(RHICmdList, video_frame);
else if(video_frame.FourCC == NDIlib_FourCC_video_type_UYVA)
return DrawInterlacedVideoFrameAlpha(RHICmdList, video_frame);
break;
}
return nullptr;
}
```
Call chain for DrawProgressiveVideoFrame:
UNDIMediaReceiver::CaptureConnectedVideo
=>
DisplayFrame (NDIlib_frame_format_type_progressive, NDIlib_FourCC_video_type_UYVY)
=>
DrawProgressiveVideoFrame
## Shader Binding RT
Setting up the render target:
```c++
FTextureRHIRef TargetableTexture;
// check for our frame sync object and that we are actually connected to the end point
if (p_framesync_instance != nullptr)
{
// Initialize the frame size parameter
FIntPoint FrameSize = FIntPoint(Result.xres, Result.yres);
if (!RenderTarget.IsValid() || !RenderTargetDescriptor.IsValid() ||
RenderTargetDescriptor.GetSize() != FIntVector(FrameSize.X, FrameSize.Y, 0) ||
DrawMode != EDrawMode::Progressive)
{
// Create the RenderTarget descriptor
RenderTargetDescriptor = FPooledRenderTargetDesc::Create2DDesc(
FrameSize, PF_B8G8R8A8, FClearValueBinding::None, TexCreate_None, TexCreate_RenderTargetable | TexCreate_SRGB, false);
// Update the shader resource for the 'SourceTexture'
// The source texture will be given UYVY data, so make it half-width
#if (ENGINE_MAJOR_VERSION > 5) || ((ENGINE_MAJOR_VERSION == 5) && (ENGINE_MINOR_VERSION >= 1))
const FRHITextureCreateDesc CreateDesc = FRHITextureCreateDesc::Create2D(TEXT("NDIMediaReceiverProgressiveSourceTexture"))
.SetExtent(FrameSize.X / 2, FrameSize.Y)
.SetFormat(PF_B8G8R8A8)
.SetNumMips(1)
.SetFlags(ETextureCreateFlags::RenderTargetable | ETextureCreateFlags::Dynamic);
SourceTexture = RHICreateTexture(CreateDesc);
#elif (ENGINE_MAJOR_VERSION == 4) || (ENGINE_MAJOR_VERSION == 5)
FRHIResourceCreateInfo CreateInfo(TEXT("NDIMediaReceiverProgressiveSourceTexture"));
TRefCountPtr<FRHITexture2D> DummyTexture2DRHI;
RHICreateTargetableShaderResource2D(FrameSize.X / 2, FrameSize.Y, PF_B8G8R8A8, 1, TexCreate_Dynamic,
TexCreate_RenderTargetable, false, CreateInfo, SourceTexture,
DummyTexture2DRHI);
#else
#error "Unsupported engine major version"
#endif
// Find a free target-able texture from the render pool
GRenderTargetPool.FindFreeElement(RHICmdList, RenderTargetDescriptor, RenderTarget, TEXT("NDIIO"));
DrawMode = EDrawMode::Progressive;
}
#if ENGINE_MAJOR_VERSION >= 5
TargetableTexture = RenderTarget->GetRHI();
#elif ENGINE_MAJOR_VERSION == 4
TargetableTexture = RenderTarget->GetRenderTargetItem().TargetableTexture;
...
...
// Initialize the Render pass with the conversion texture
FRHITexture* ConversionTexture = TargetableTexture.GetReference();
FRHIRenderPassInfo RPInfo(ConversionTexture, ERenderTargetActions::DontLoad_Store);
// Needs to be called *before* ApplyCachedRenderTargets, since BeginRenderPass is caching the render targets.
RHICmdList.BeginRenderPass(RPInfo, TEXT("NDI Recv Color Conversion"));
```
Uploading the UYVY data received from NDI:
```c++
// set the texture parameter of the conversion shader
FNDIIOShaderUYVYtoBGRAPS::Params Params(SourceTexture, SourceTexture, FrameSize,
FVector2D(0, 0), FVector2D(1, 1),
bPerformsRGBtoLinear ? FNDIIOShaderPS::EColorCorrection::sRGBToLinear : FNDIIOShaderPS::EColorCorrection::None,
FVector2D(0.f, 1.f));
ConvertShader->SetParameters(RHICmdList, Params);
// Create the update region structure
FUpdateTextureRegion2D Region(0, 0, 0, 0, FrameSize.X/2, FrameSize.Y);
// Set the Pixel data of the NDI Frame to the SourceTexture
RHIUpdateTexture2D(SourceTexture, 0, Region, Result.line_stride_in_bytes, (uint8*&)Result.p_data);
```
## Solution
[NDI plugin quality trouble](https://forums.unrealengine.com/t/ndi-plugin-quality-trouble/1970097)
I changed only the shader "NDIIO/Shaders/Private/NDIIOShaders.usf".
For example, the function **void NDIIOUYVYtoBGRAPS (// Shader from 8 bits UYVY to 8 bits RGBA (alpha set to 1)):**
_WAS:_
```c++
float4 UYVYB = NDIIOShaderUB.InputTarget.Sample(NDIIOShaderUB.SamplerB, InUV);
float4 UYVYT = NDIIOShaderUB.InputTarget.Sample(NDIIOShaderUB.SamplerT, InUV);
float PosX = 2.0f * InUV.x * NDIIOShaderUB.InputWidth;
float4 YUVA;
float FracX = PosX % 2.0f;
YUVA.x = (1 - FracX) * UYVYT.y + FracX * UYVYT.w;
YUVA.yz = UYVYB.zx;
YUVA.w = 1;
```
_I DID:_
```c++
float4 UYVYB = NDIIOShaderUB.InputTarget.Sample(NDIIOShaderUB.SamplerB, InUV);
float4 UYVYT0 = NDIIOShaderUB.InputTarget.Sample(NDIIOShaderUB.SamplerT, InUV + float2(-0.25f / NDIIOShaderUB.InputWidth, 0));
float4 UYVYT1 = NDIIOShaderUB.InputTarget.Sample(NDIIOShaderUB.SamplerT, InUV + float2(0.25f / NDIIOShaderUB.InputWidth, 0));
float PosX = 2.0f * InUV.x * NDIIOShaderUB.InputWidth;
float4 YUVA;
float FracX = (PosX % 2.0f) * 0.5f;
YUVA.x = (1 - FracX) * UYVYT1.y + FracX * UYVYT0.w;
YUVA.yz = UYVYB.zx;
YUVA.w = 1;
```
Small changes, but the result seems much better.
Of course, I added a bit of sharpness to the material after changing the shader, but even without that the result looks better than the original version.
Filtering reference: https://zhuanlan.zhihu.com/p/633122224
## UYVY (YUV 4:2:2)
- https://zhuanlan.zhihu.com/p/695302926
- https://blog.csdn.net/gsp1004/article/details/103037312
![](https://i-blog.csdnimg.cn/blog_migrate/24b41fd36ff7902670e11a8005afb370.jpeg)
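The shader fix above is easier to follow with the memory layout in mind. A minimal sketch (TypeScript, not the plugin's code) of how one UYVY macropixel encodes two pixels — which is also why the plugin creates its source texture at half width:

```typescript
// UYVY (YUV 4:2:2): each 4-byte macropixel [U0, Y0, V0, Y1] carries TWO pixels
// that share a single U/V pair -- hence the half-width BGRA source texture.
function unpackUYVY(buf: Uint8Array, x: number): { y: number; u: number; v: number } {
    const m = (x >> 1) * 4;                          // byte offset of the macropixel
    const u = buf[m];
    const v = buf[m + 2];
    const y = x % 2 === 0 ? buf[m + 1] : buf[m + 3]; // Y0 for even x, Y1 for odd x
    return { y, u, v };
}
```

The original shader reconstructs `FracX` from the interpolated UV to pick Y0 or Y1 from a single bilinear sample; the forum fix instead takes two samples offset by a quarter texel and blends them, which reduces the luma misalignment that showed up as blur.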

# Preface
1. Characters need a `BP_LiveArea` (LiveAreaActor).
2. Sequences and cameras need a `CameraRoot` actor, which must overlap the LiveArea exactly.
	1. ~~FollowMovementComponent: implements camera tracking of targets (IdolName, Socket, Bone)~~
	2. Cameras mount a FollowingComponment.
# LiveDirector
## BP_LiveArea
Base class **ALiveAreaActor**, located at Source/LiveDirector/DirectorFramework/LiveAreaActor.h. A placeholder actor marking a live area; it can define:
- WeatherOverride: weather-system override applied when entering the area.
- WeatherBlendDuration: blend duration for the weather override.
- CareLayers: DataLayers loaded automatically together with this area.
# DirectorCam
## Core
### BP_CamPlacement_LiveArea
Usually attached under **BP_LiveArea**. Generated from the templates under LiveDirector - DirectorCam - Template. Inheritance: **BP_CamWorkShopPlacement -> ACamWorkShopPlacementActor**, located at Modules/DirectorCam/Core/CamWorkShopPlacementActor.h.
## Data
### UDirectorCamGroupData
Camera-group data asset storing the ***LevelSequence of each shot***.
### UDirectorCamSetupData
Stores a set of ***UDirectorCamGroupData***.
# LevelSequences
LevelSequences run at 30 fps.
- DirectorCamSetupData
	- Example5400x4800 (large mocap room): /Game/LevelSequences/Example5400x4800/CamSetup_5400x4800
- DirectorCamGroupData: /Game/LevelSequences/Example5400x4800/General
	- Dance
## Steps to create a new shot
***Recommended: duplicate a previously authored shot and edit from there.***
1. Create a new LevelSequence asset under the LevelSequences directory.
2. Drag the character blueprint asset into the LevelSequence (Spawnable) and attach it under the target **LiveArea** actor, aligned with the live scene's origin. Optionally drag a SoundBase asset into the LevelSequence.
3. Add a CharacterMesh track to the character blueprint and assign the corresponding AnimationSequence asset to it.
4. Add cameras:
	1. CinemaCamera
		1. For tracking, add a FollowingMovement component and track to the CinemaCamera, set the component's FollowingActor to the target actor, ***fill in FollowingIdolName*** (e.g. Idol.BeiLa) and FollowingSocketName.
	2. DirectorCamera: scene-specific static shots.
		1. Add the DirectorCamera in the Sequence, then attach it under CameraRoot.
		2. For tracking, add a FollowingMovement component and track to the DirectorCamera, set the component's FollowingActor to the target actor, ***fill in FollowingIdolName*** (e.g. Idol.BeiLa) and FollowingSocketName.
5. Add the LevelSequence to the corresponding `UDirectorCamGroupData` asset.
6. Put the `UDirectorCamGroupData` into the corresponding `UDirectorCamSetupData` asset.
7. Click ASoul Tools -> the Configure ShotGroupBroad common button. Configure the StreamDock panel buttons and icons.

# Steps to add a map
1. Create a **Map_xxx** folder under Maps, plus an open-world map of the same name.
2. Add a LiveArea at the center of the map and set its title.
3. Add an entry to the Config/LiveDirectorAsset/LevelAreaConfig.json config.
## Related asset locations
- Maps/Map_xxx: the folder and the same-named open-world map.
- **Maps/Scenes/Map_xxx**: assets belonging to that world.
- Maps/Epic: assets downloaded from the official marketplace.
- UIAssets/MapEnvironments/ICON_/Area: preview icons for LiveArea locations.
- UIAssets/MapEnvironments/ICON_/Level: preview icons for the corresponding map or world.

BP_ProjectV_EnvironmentBlendable

# Preface
Inheritance chain: BP_XXX_Base -> BP_Idol_Base -> TsIdolActor -> AVCharacter -> ACharacter.
The main logic lives in TsIdolActor; the file path is `Script/LiveDirector/Character/TsIdolActor.ts`.
# Adding a new outfit
1. Create a blueprint inheriting BP_XXX_Base under Content/Character/XXX.
2. Set the SkeletalMesh and make sure **Dynamic InsetShadow** is checked.
3. Add a PhysicalAsset whose capsules roughly enclose the mesh; otherwise the shadow gets clipped.
4. Assign the corresponding OutlineMaterial.
5. Enter the outfit's display name in DressName.
6. Add Prop tags:
	1. Idol.XXX
	2. Prop.Dress (the first outfit must be Prop.Dress.Default)
	3. Prop.MountPoint.Body
7. ***Script scan***: click the 嘉然 avatar in the editor -> full or incremental character/prop config generation. This writes the outfit or prop config data into IdolPropAssetConfig.json.
# Adding a new character
1. Add an Idol.xxx tag.
2. Update the related files below: [[#含有角色标签的文件]].
3. Add the corresponding blueprints and folder structure:
	1. Content/Character
		1. Add a [[#BP_XXX_Base]] under Idol_Base.
		2. Set the SkeletalMesh of the clothes and hair, and make sure **Dynamic InsetShadow** is checked.
		3. Add PhysicalAssets for the character, clothes and hair whose capsules roughly enclose the mesh; otherwise the shadow gets clipped.
	2. Content/ResArt/CharacterArt: place the character and outfit assets.
		1. The ***FullBody node in the anim blueprint needs the character tag set***.
			1. Assign the PostProcess anim blueprint used for corrective deformation.
		2. Add Prop tags:
			1. Idol.XXX
			2. Prop.Dress
			3. Prop.MountPoint.Body
		3. ***Script scan***: click the 嘉然 avatar in the editor -> full or incremental character/prop config generation. This writes the outfit or prop config data into IdolPropAssetConfig.json.
4. Set the prop mount data table ResArt/MountPointConfig/DT_MountPointConfig, which defines the relative offsets used when mounting props.
5. ***Material work***:
	1. Add the character's properties in ResArt/CommonMaterial/Functions/CameraLightCollection.
	2. Add the corresponding RoleID in ResArt/CommonMaterial/Functions/MF_CharacterMainLightIntensity.
	3. Add the corresponding RoleID in ResArt/CommonMaterial/Functions/MF_CharacterRimLightIntensity.
	4. Set the RoleID value in the character's base material.
	5. Run the Python script to create the Dissolve materials: LiveDirector/Editor/MaterialMigration/MakeDissolveMaterials.py.
## Files containing character tags
1. [x] TsCharacterItem.ts `Script/LiveDirector/Character/View/TsCharacterItem.ts`
	1. UI logic for the level-3 character control panel.
2. [x] TsCharacterMocapViewTmp.ts: the MotionProcess UI; widget parent `/Content/UIAssets/Character/Mocap/WBP_CharacterMocapViewTmp`
	1. Creates the corresponding Idol in the MotionProcess-only map.
3. [x] TsPropMocapItemTmp.ts
	1. UI logic that attaches props to the corresponding Idol in the MotionProcess-only map.
4. [x] TsDirectorConsoleCommandHandler.ts
	1. Console command: motion sync (GetMotionOffset).
	2. Console command: quickly spawn 4 characters (IdolCostume).
5. [x] TsSpawnPointSettingItem.ts
	1. Idol item UI; widget parent `/Content/UIAssets/Character/WBP_SpawnPointSettingItem`
6. [x] TsIdolPropManagerComponent.ts
	1. 思诺 and 心怡 are missing here.
	2. Needs further investigation.
7. [x] ~~TsSimpleLevelManager.ts~~
	1. SwitchLiveArea() only references Idol.BeiLa in a fallback branch.
8. ~~CameraDebug.cpp~~ (not needed)
## BP_XXX_Base
1. Assign the anim blueprint.
2. Assign the LiveLinkName.
3. Assign the OutlineMaterial.
# AVCharacter
Mainly implements `virtual void OnRep_AttachmentReplication() override;` and declares several BlueprintNativeEvents:
- bool CanSyncRelativeTransform();
- void BeforeAttachToNewParent();
- void AfterAttachToNewParent();
## OnRep_AttachmentReplication()
Comment:
> In mocap mode CanSync = false: each side computes the actor location itself, and clients do not use the server's result.
> In free-walk mode CanSync = true: clients sync the server's transform.
Syncs the attachment. On top of AActor::OnRep_AttachmentReplication() it adds:
- a CanSync flag check that decides whether to sync the transform;
- BeforeAttachToNewParent()/AfterAttachToNewParent() around the attach when the component is not yet attached.
```c++
auto CanSync = CanSyncRelativeTransform(); // Get the Sync flag; the concrete logic lives in TsIdolActor.ts
if (attachmentReplication.AttachParent)
{
if (RootComponent)
{
USceneComponent* AttachParentComponent = (attachmentReplication.AttachComponent ? attachmentReplication.AttachComponent : attachmentReplication.AttachParent->GetRootComponent());
if (AttachParentComponent)
{
if(CanSync)// Added check: only sync the transform in free-walk mode
{
RootComponent->SetRelativeLocation_Direct(attachmentReplication.LocationOffset);
RootComponent->SetRelativeRotation_Direct(attachmentReplication.RotationOffset);
RootComponent->SetRelativeScale3D_Direct(attachmentReplication.RelativeScale3D);
}
// If we're already attached to the correct Parent and Socket, then the update must be position only.
// AttachToComponent would early out in this case.
// Note, we ignore the special case for simulated bodies in AttachToComponent as AttachmentReplication shouldn't get updated
// if the body is simulated (see AActor::GatherMovement).
const bool bAlreadyAttached = (AttachParentComponent == RootComponent->GetAttachParent() && attachmentReplication.AttachSocket == RootComponent->GetAttachSocketName() && AttachParentComponent->GetAttachChildren().Contains(RootComponent));
if (bAlreadyAttached)
{
// Note, this doesn't match AttachToComponent, but we're assuming it's safe to skip physics (see comment above).
if(CanSync)
{
RootComponent->UpdateComponentToWorld(EUpdateTransformFlags::SkipPhysicsUpdate, ETeleportType::None);
}
}
else
{
BeforeAttachToNewParent();// Added BlueprintNativeEvent
RootComponent->AttachToComponent(AttachParentComponent, FAttachmentTransformRules::KeepRelativeTransform, attachmentReplication.AttachSocket);
AfterAttachToNewParent();// Added BlueprintNativeEvent
}
}
}
}
```
# TsIdolActor.ts
## VirtualOverrider
CanSyncRelativeTransform()
- Cases that do NOT sync the transform:
	- AI-controlled ACao characters.
	- A TsIdolMovementComponent with ManulMovement checked.
	- Anim blueprints using the **AnimGraphNode_Fullbody** node with bGetMotionData set to true.
The code:
```typescript
CanSyncRelativeTransform(): boolean {
if (Utils.HasTag(this.PropTags, new UE.GameplayTag("Idol.AIACao"))) {
return false;
}
if(this.MovementComp && this.MovementComp.ManulMovement){
return false
}
var animInstance = this.Mesh.GetAnimInstance() as UE.IdolAnimInstance
let fullbodyNode = Reflect.get(animInstance, 'AnimGraphNode_Fullbody') as UE.AnimNode_FullBody
return !(fullbodyNode && fullbodyNode.bGetMotionData)
}
```
# Prop.Dress.Default
1. TsIdolPropManagerComponent.ts ServerLoadProp()
2. DoLoadProp()
3. ServerDoLoadPropPreset()
4. GetPropPreset()
5. GetDefaultDress(): gets the DefaultDress tag string.
6. GetPropAssetConfigsByTags(tags): gets the matching asset configs (UPropAssetConfig).
Scanning all assets: TsPropAssetManager.ts CollectAllAssets().
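The GetPropAssetConfigsByTags() step boils down to an all-tags match over the scanned configs. A hypothetical sketch (the interface and field names are assumptions, not the project's real UPropAssetConfig; real GameplayTag matching is hierarchical, exact string comparison is a simplification):

```typescript
// Hypothetical shape: a config matches when it carries every queried tag.
interface PropAssetConfig {
    name: string;
    tags: string[];
}

function getPropAssetConfigsByTags(configs: PropAssetConfig[], tags: string[]): PropAssetConfig[] {
    return configs.filter(c => tags.every(t => c.tags.includes(t)));
}
```

So querying ["Idol.XXX", "Prop.Dress.Default"] returns exactly the default outfit registered for that character.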

# Original workflow
1. StartListenServer_(异世界).bat
2. MotionServer.exe
3. StartClient_MotionProcessor.bat
4. Editor
5. MotionProcessor: set the IP, start motion streaming.
6. Editor: open the IP.
7. Add the map on the level-4 panel.
8. Add the characters on the level-3 panel.
9. Run `run pvw` in the StartListenServer console.
# Improved workflow
1. Launch the Editor in listen-server mode (Play As Listen Server).
2. MotionServer.exe
3. StartClient_MotionProcessor.bat
4. MotionProcessor: set the IP, start motion streaming.
5. Add the map on the level-4 panel.
6. Add the characters on the level-3 panel.

# Related classes
- TsPropActor
- TsIdolPropActor
- TsScenePropActor
- TsPropEffectActor
# Related assets
ResArt/MountPointConfig/DT_MountPointConfig: defines the relative offsets used when mounting props.
# MountPoint
GameplayTags define the prop mount-point tags:
- Prop
- MountPoint
- Back
- Body
- Feet
- Head
- HeadBottom
- HeadUp
- Hips
- LeftFoot
- LeftHand
- RightFoot
- RightHand
- Root
The matching logic lives in TsPropAssetManager.ts; the enum lookup function is GetMountPointIndexByTagName():
```ts
export const enum MountPointEnum {
    HeadUp,
    Head,
    HeadDown,
    LeftHand,
    RightHand,
    Back,
    Feet,
    Hips
}
```
TsIdolPropManagerComponent.ts:
AttachAllProp() => AttachProp()
TsPropAssetManager.ts:
```ts
static GetMountPointName(Tag: UE.GameplayTag): string {
if (Tag.TagName.startsWith('Prop.MountPoint')) {
let res = Tag.TagName.split('.')
return res[res.length - 1]
}
return ''
}
```
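The split logic above can be exercised standalone; a direct TypeScript transcription of GetMountPointName() (plain string instead of UE.GameplayTag):

```typescript
// Standalone transcription: takes a tag's full name and returns the last
// segment for Prop.MountPoint.* tags, or '' for anything else.
function getMountPointName(tagName: string): string {
    if (!tagName.startsWith("Prop.MountPoint")) return "";
    const parts = tagName.split(".");
    return parts[parts.length - 1];
}
```

E.g. "Prop.MountPoint.RightHand" -> "RightHand". Note that the bare tag "Prop.MountPoint" itself yields "MountPoint", since the function only checks the prefix.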
# Dress-change call chain
PrepareLoadNewModel_Multicast
ServerDoLoadPropPreset() => ServerStartSwitchPreset() => ServerLoadProp() => DoLoadProp() => LoadDressByConfig()
- TsDirectorController.ts CreateIdol()
	- TsIdolControllerActor.ServerCreateIdolControllerAtLiveArea
	- controller.PropComp.ServerLoadPropPreset(0)
- TsIdolPropManagerComponent
	- ServerLoadPropPreset()
	- ServerLoadProp()
	- DoLoadProp()
	- LoadDressByConfig
LocalLoadDressByConfig(): local load.
LoadDressByConfig(): server load.
# Outfit preset switching
In WBP_CharacterItem, the five buttons (BtnSuit_1 through BtnSuit_5) call EventOnPresetClicked.
```ts
EventOnPresetClicked(PresetIndex: number, IsDoubleClick: boolean):void {
let curTime = UE.KismetSystemLibrary.GetGameTimeInSeconds(this)
if (curTime - this.LastLoadPresetTime < 0.5) {
console.warn('Click too fast , please try again later')
return
}
this.LastLoadPresetTime = curTime;
if (IsDoubleClick) {
this.LoadPreset(PresetIndex)
} else {
this.PreviewPreset(PresetIndex)
}
}
public LoadPreset(PresetIndex: number): void {
if (this.Idol == null) {
console.error(`TsCharacterItem@LoadPreset error: idol is null`);
return;
}
this.Idol.PropComp.ServerLoadPropPreset(PresetIndex);
}
public PreviewPreset(PresetIndex: number): void {
if (this.Idol == null) {
console.error(`TsCharacterItem@PreviewPreset error: idol is null`);
return;
}
this.Idol.PropComp.ClientLoadPropPreset(PresetIndex);
this.RefreshPresetUIStatus()
}
```

# TsScreenPlayerTextureRenderer
- MultiViewActor: UE.MultiViewActor; // renders the pvw/pgm picture
- ReceiveBeginPlay() =>
	- this.RegisterLocalEventListener(); =>
	- this.ChangeCameraTask(TaskData);
## RegisterLocalEventListener
```ts
private RegisterLocalEventListener(): void {
this.PVWCameraChangeFunc = (TaskData: UE.CamTaskDataRPC) => {
if (this.CurTag == DirectorMode.PVWCameraRenderer) {
this.ChangeCameraTask(TaskData);
}
}
this.PGMCameraChangeFunc = (TaskData: UE.CamTaskDataRPC) => {
if (this.CurTag == DirectorMode.PGMCameraRenderer) {
this.ChangeCameraTask(TaskData);
}
}
this.SwitchDirectorNetTagFunc = (oldTag: UE.GameplayTag, newTag: UE.GameplayTag) => {
this.SwitchDirectorNetTagCallback(oldTag, newTag);
}
DirectorEventSystem.RegisterEventListener(this, DirectorEvent.OnPVWTaskRequested, this.PVWCameraChangeFunc);
DirectorEventSystem.RegisterEventListener(this, DirectorEvent.OnPGMTaskRequested, this.PGMCameraChangeFunc);
DirectorEventSystem.RegisterEventListener(this, DirectorEvent.SwitchDirectorMode, this.SwitchDirectorNetTagFunc)
}
```
## ChangeCameraTask
```ts
ChangeCameraTask(TaskData: UE.CamTaskDataRPC) {
if(!this.bStarted){
return;
}
if(this.DirectorCamManagerActor == null){
this.DirectorCamManagerActor = this.GetCameraManagerActor();
}
// double check
if(this.DirectorCamManagerActor == null){
return;
}
if (this.Task) {
this.Task.Stop();
}
this.Task = DirectorCamUtil.CreateCamTask(this, TaskData, CamTaskType.FullStream, this.DirectorCamManagerActor.droneCamera, null, this.DirectorCamManagerActor.handHeldCamera)
if (this.Task) {
this.Task.Start()
this.BindCamera(this.Task.TryGetBindedMainCamera());
}
}
```
# DirectorCamGroup.cpp
```c++
UDirectorSequencePlayer* UDirectorCamGroup::GetDirectorSequencePlayerFromPool(int CamIndex, bool IsPreview)
{
	UDirectorSequencePlayer* TargetPlayer = NewObject<UDirectorSequencePlayer>(this);
TargetPlayer->SetCamSequence(CamIndex, SequenceActor);
if(IsPreview)
{
CachedPreviewingPlayer.AddUnique(TargetPlayer);
}
else
{
CachedStreamingPlayer.AddUnique(TargetPlayer);
}
	return TargetPlayer;
}
```
# PVW & PGM red-frame code: TsDirectorCamManagerActor.ts
```
//Client join flow:
//connect to the server -> sync the current scene -> receive the local event after the scene loads -> init the workShop
//-> confirm this machine's role and run its tasks (async task flow)
//Async flow: task handling adapts to differences in machine performance, join time and state
// The server receives a task request -> ensures the server-side workShop is initialized -> processes and syncs the data
// -> a client receives the synced data -> waits for its own workShop to initialize -> runs its own logic
// Only the most recent task of each kind is handled
```
## HandlePreStreamTaskByNetTag()
The area-switch and shot-cut code ends up calling RequestPVWTaskServer():
RequestPVWTaskServer() =>
RequestPVWTask() => HandlePreStreamTaskDataMulticast() => HandlePreStreamTaskByNetTag()
```ts
HandlePreStreamTaskByNetTag(): void {
if (this.prestreamTaskData) {
switch (Utils.GetDirectorMode(this).TagName) {
case DirectorMode.PVWAndPGM:
if(!DirectorCamUtil.SubmitNewCommandIfDataNotChanged(this.preStreamTask, this.prestreamTaskData)){
if (this.preStreamTask) {
this.preStreamTask.Stop()
}
this.preStreamTask = DirectorCamUtil.CreateCamTask(this, this.prestreamTaskData, CamTaskType.FullStream, this.droneCamera,
this.PVWWindow, this.handHeldCamera)
if (this.preStreamTask) {
this.preStreamTask.Start()
if (this.PVWWindow) {
this.PVWWindow.SetViewBorderColor(0, new UE.LinearColor(0, 1, 0, 1))
}
console.log('PVW Task:' + this.preStreamTask.workShop.BindPlacement.Title + " " + this.preStreamTask.groupName + " " +
this.preStreamTask.camName)
}
}
break
case DirectorMode.Preview:
this.RefreshWindowViewBorderColor()
break
case DirectorMode.PVW:
this.HandlePVWTask()
break;
default:
break
}
}
}
```
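SubmitNewCommandIfDataNotChanged() above gates the task restart on whether the requested data actually differs from what is playing. A hedged sketch of that guard (the interface and field names are assumptions inferred from the log lines, not the real UE.CamTaskDataRPC):

```typescript
// Restart-guard sketch: only rebuild the PVW task when the requested
// data differs from the task that is already running.
interface CamTaskData {
    workShopId: string;
    camGroupId: string;
    camIndex: number;
    startFrame: number;
}

function taskDataChanged(current: CamTaskData | null, next: CamTaskData): boolean {
    return current === null
        || current.workShopId !== next.workShopId
        || current.camGroupId !== next.camGroupId
        || current.camIndex !== next.camIndex
        || current.startFrame !== next.startFrame;
}
```

Skipping the Stop()/CreateCamTask()/Start() cycle when nothing changed avoids restarting the sequence (and resetting its playhead) on duplicate requests.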
# 130mm bug log
```
[2024.08.06-05.08.09:625][373]Puerts: (0x00000703D1DF42D0) request PVW: WorkShop:3F855CF84C91BBD9207C0FAF5273586C, CamGroup:9223AA8A478BA88F1217CD86911EDAE1, Index=0, StartFrame:0
[2024.08.06-05.08.09:626][373]LogLevelSequence: Starting new camera cut: 'Cam_TalkShowZuo_ZhuJiWei24mm_1'
[2024.08.06-05.08.09:626][373]Puerts: (0x00000703D1DF42D0) 设置相机目标:
[2024.08.06-05.08.09:626][373]Puerts: (0x00000703D1DF42D0) PVW Task: 线下 站\n4 ZhuJiWei24mm
[2024.08.06-05.08.11:550][487]Puerts: Warning: (0x00000703D1DF42D0) PauseOrResumeAllFxProp ,true
[2024.08.06-05.08.11:551][487]Puerts: Warning: (0x00000703D1DF42D0) PauseOrResumeAllFxProp ,false
[2024.08.06-05.08.11:551][487]LogChaosBone: Display: ResetDynamics: 1, ChaosBoneAssets Name: CBA_SK_JiaRan_Costume_BHair
[2024.08.06-05.08.11:551][487]LogChaosBone: Display: ResetDynamics: 1, ChaosBoneAssets Name: CBA_SK_JiaRan_Costume_dress
[2024.08.06-05.08.11:602][490]Puerts: Warning: (0x00000703D1DF42D0) BP_JiaRan_Costume_C_0 set visible false
[2024.08.06-05.08.11:602][490]Puerts: Warning: (0x00000703D1DF42D0) teleport to livearea 031D1C1742D7EB4C76F6B397FF404FD8
[2024.08.06-05.08.11:602][490]Puerts: Warning: (0x00000703D1DF42D0) PauseOrResumeAllFxProp ,true
[2024.08.06-05.08.11:602][490]Puerts: (0x00000703D1DF42D0) X=0.000 Y=0.000 Z=0.000
[2024.08.06-05.08.11:602][490]Puerts: Warning: (0x00000703D1DF42D0) PauseOrResumeAllFxProp ,false
[2024.08.06-05.08.11:602][490]Puerts: (0x00000703D1DF42D0) BP_JiaRan_Costume_C_0,attach prop ,玩具水枪A
[2024.08.06-05.08.11:603][490]Puerts: Warning: (0x00000703D1DF42D0) get socket of JiaRan: RightHandSocket
[2024.08.06-05.08.11:604][490]Puerts: (0x00000703D1DF42D0) Attach prop 玩具水枪A
[2024.08.06-05.08.11:604][490]Puerts: (0x00000703D1DF42D0) Idol.JiaRan successfully switch to 031D1C1742D7EB4C76F6B397FF404FD8
[2024.08.06-05.08.12:085][519]Puerts: Error: (0x00000703D1DF42D0) OnFinishDisappearFx(), but the idol visibility is true!!BP_JiaRan_Costume_C_0
[2024.08.06-05.08.12:107][520]Puerts: Warning: (0x00000703D1DF42D0) BP_JiaRan_Costume_C_0 set visible true
[2024.08.06-05.08.12:108][520]LogBlueprintUserMessages: [BP_JiaRan_Costume_C_0] Apply Material. Oringinal Material: false
[2024.08.06-05.08.12:108][520]Puerts: (0x00000703D1DF42D0) RefreshSceneCaptureShowList
[2024.08.06-05.08.12:109][520]Puerts: (0x00000703D1DF42D0) CheckItemFxPool, size 0
[2024.08.06-05.08.12:109][520]Puerts: (0x00000703D1DF42D0) TriggerHandPose
[2024.08.06-05.08.12:109][520]Puerts: (0x00000703D1DF42D0) TriggerHandPose
[2024.08.06-05.08.12:109][520]Puerts: Warning: (0x00000703D1DF42D0) Disable instrument pose
[2024.08.06-05.08.12:109][520]Puerts: (0x00000703D1DF42D0) Detect hand pose ToyGun
[2024.08.06-05.08.12:109][520]Puerts: (0x00000703D1DF42D0) TriggerHandPose ToyGun
[2024.08.06-05.08.12:455][541]Puerts: (0x00000703D1DF42D0) request PGM: WorkShop:3F855CF84C91BBD9207C0FAF5273586C, CamGroup:9223AA8A478BA88F1217CD86911EDAE1, Index=0, StartFrame:0 , PushMethod =0
[2024.08.06-05.08.12:805][562]Puerts: (0x00000703D1DF42D0) request PGM: WorkShop:3F855CF84C91BBD9207C0FAF5273586C, CamGroup:9223AA8A478BA88F1217CD86911EDAE1, Index=0, StartFrame:0 , PushMethod =0
[2024.08.06-05.08.14:606][670]LogBlueprintUserMessages: [BP_JiaRan_Costume_C_0] Apply Material. Oringinal Material: true
[2024.08.06-05.08.15:822][743]Puerts: (0x00000703D1DF42D0) request PVW: WorkShop:3F855CF84C91BBD9207C0FAF5273586C, CamGroup:DDB1F27A44E7E5B312830A828EB55464, Index=0, StartFrame:0
[2024.08.06-05.08.15:823][743]LogLevelSequence: Starting new camera cut: 'Cine_Camera_Actor_1'
[2024.08.06-05.08.15:823][743]Puerts: (0x00000703D1DF42D0) 设置相机目标:
[2024.08.06-05.08.15:824][743]Puerts: (0x00000703D1DF42D0) PVW Task: 线下 贝拉2024生日 lengthtest
[2024.08.06-05.08.15:824][743]LogBlueprintUserMessages: [BP_Cine_Cam_FOV_1] 35.0
[2024.08.06-05.08.16:256][769]LogBlueprintUserMessages: [BP_Cine_Cam_FOV_1] 35.002922
```
Relevant log order:
- request PVW: WorkShop:3F855CF84C91BBD9207C0FAF5273586C, CamGroup:DDB1F27A44E7E5B312830A828EB55464, Index=0, StartFrame:0
- LogLevelSequence: Starting new camera cut: 'Cine_Camera_Actor_1'
- Puerts: 设置相机目标:
- Puerts: PVW Task: 线下 贝拉2024生日 lengthtest
- PrintString BP_Cine_Cam_FOV_1 35.0
***The prestreamTaskData inside HandlePVWTask is wrong.***
## Log
```
Puerts: (0x000009E309FB5E30) request PVW: WorkShop:3F855CF84C91BBD9207C0FAF5273586C, CamGroup:B3441B7649E66555CCB558B2C0FD2872, Index=0, StartFrame:0
LogLevelSequence: Starting new camera cut: 'ZhuJiwei_Zheng16-24mm_5'
Puerts: (0x000009E309FB5E30) 130mm bug用log,其他相机1 bindingCam逻辑,ZhuJiwei_Zheng16-24mm_5
Puerts: (0x000009E309FB5E30) 130mm bug用log,其他相机2 bindingCam逻辑,ZhuJiwei_Zheng16-24mm_5
Puerts: (0x000009E309FB5E30) 130mm bug用log,ChangeSequenceCamTarget FullStream Other
Puerts: (0x000009E309FB5E30) 设置相机目标: Idol.BeiLa 0
Puerts: (0x000009E309FB5E30) PVW Task: 线下 单唱站\nFastRhythm ZhuJiwei_Zheng16-24mm
Puerts: (0x000009E309FB5E30) request PVW: WorkShop:3F855CF84C91BBD9207C0FAF5273586C, CamGroup:DDB1F27A44E7E5B312830A828EB55464, Index=0, StartFrame:0
Puerts: (0x000009E309FB5E30) Nice Playing Sequence:LevelSequenceActor_2:BP_lengthtest_1
Puerts: (0x000009E309FB5E30) RebindTarget: BP_BeiLa_Costume_C_0
PIE: Warning: Sequence did not contain any bindings with the tag 'Beila' LevelSequenceActor_2
LogLevelSequence: Starting new camera cut: 'CineCameraActor_Focus130MMTest_1'
Puerts: (0x000009E309FB5E30) 130mm bug用log,其他相机1 bindingCam逻辑,CineCameraActor_Focus130MMTest_1
Puerts: (0x000009E309FB5E30) 130mm bug用log,其他相机2 bindingCam逻辑,CineCameraActor_Focus130MMTest_1
Puerts: (0x000009E309FB5E30) 130mm bug用log,ChangeSequenceCamTarget FullStream Other
Puerts: (0x000009E309FB5E30) 设置相机目标: Idol.BeiLa 0
Puerts: (0x000009E309FB5E30) PVW Task: 线下 贝拉2024生日 BP_lengthtest_1
LogBlueprintUserMessages: [BP_Cine_Cam_FOV_1] 35.0
```
## Possibly related code
```ts
// Play the specified camera on PVW.
@ufunction.ufunction(ufunction.ServerAPI, ufunction.Reliable)
PlayCamSequenceOnPVWServer(camGroupId:UE.Guid, camIndex:number):void{
let shotExists = this.directorCamSubSystem.ShotExistInGroup(this.prestreamTaskData.WorkShopId, camGroupId, camIndex)
if (shotExists || camIndex == DirectorCamUtil.DRONE_CAM_INDEX || camIndex == DirectorCamUtil.HANDHELD_CAM_INDEX) {
let newPVWTaskData = DirectorCamUtil.CopyTaskData(this.prestreamTaskData)
newPVWTaskData.CamGroupId = camGroupId
newPVWTaskData.CamIndex = camIndex
newPVWTaskData.StartFrame = 0
newPVWTaskData.bPreviewOneFrame = false
this.RequestPVWTaskServer(newPVWTaskData)
}
else {
console.warn('当前镜头不存在,切换无效!index=' + camIndex)
}
}
class TsDirectorCamSequencePlayBridge extends UE.Actor {
    PlayCamSequence(camGroupId: UE.Guid, camIndex: number): void {
        let camManager = TsDirectorCamManagerActor.Get(this);
        if (camManager) {
            camManager.PlayCamSequenceOnPVWServer(camGroupId, camIndex);
        }
    }
}
```
```ts
// Switch among the multiple workShops in the current live area
SwitchWorkShopInAreaServer(index: number): void {
	// [Server] After the scene module switches the live area, automatically switch to workshop 0; this switch event also selects among the multiple workshops in the current live area
let levelControl = this.GetLevelAreaManager()
let liveArea = levelControl.GetCurrentLiveArea()
if (!liveArea) {
console.error('DirectorCamManager@ cannot find LiveAreaBy id')
return
}
let placement = DirectorCamUtil.GetPlacementInCurLiveArea(index, liveArea)
if (!placement) {
console.error('DirectorCamManager@SwitchWorkShopEvent:GetPlacementInCurLiveArea failed')
return
}
let GroupData = DirectorCamUtil.GetDefaultGroupData(this, placement.UUID)
if (GroupData) {
// 预览切换workShop后播放第一个分<E4B8AA>?
let newPreviewTaskData = new UE.CamTaskDataRPC()
newPreviewTaskData.WorkShopId = DirectorCamUtil.CopyGuid(placement.UUID)
newPreviewTaskData.CamGroupId = DirectorCamUtil.CopyGuid(GroupData.UUID)
newPreviewTaskData.CamIndex = 0
newPreviewTaskData.OperationId = this.previewTaskData.OperationId + 1
newPreviewTaskData.StartFrame = 0
newPreviewTaskData.bPreviewOneFrame = false
this.HandlePreviewTaskDataMulticast(newPreviewTaskData);
// 预推流切换workShop后播放第一个分组的第一个镜头
let newPVWTaskData = new UE.CamTaskDataRPC()
newPVWTaskData.WorkShopId = DirectorCamUtil.CopyGuid(placement.UUID)
newPVWTaskData.CamGroupId = DirectorCamUtil.CopyGuid(GroupData.UUID)
newPVWTaskData.CamIndex = 0
newPVWTaskData.StartFrame = 0
newPVWTaskData.bPreviewOneFrame = false
this.RequestPVWTaskServer(newPVWTaskData);
}
}
```
# Common
## Common.ush
Adds a struct, mainly used from Custom nodes in material blueprints.
```c++
// Used by toon shading.
// Define a global custom data structure which can be filled by Custom node in material BP.
struct FToonShadingPerMaterialCustomData
{
// Toon specular
float3 ToonSpecularColor;
float ToonSpecularLocation;
float ToonSpecularSmoothness;
// Toon shadow
float3 ToonShadowColor;
float ToonShadowLocation;
float ToonShadowSmoothness;
float ToonForceShadow;
// Toon secondary shadow
float3 ToonSecondaryShadowColor;
float ToonSecondaryShadowLocation;
float ToonSecondaryShadowSmoothness;
// custom data, usually not used
float4 CustomData0;
float4 CustomData1;
float4 CustomData2;
float4 CustomData3;
};
static FToonShadingPerMaterialCustomData ToonShadingPerMaterialCustomData;
```
## DeferredShadingCommon.ush
1. Implement the [[#Encode/Decode函数]]
2. Add the matching ToonShadingModel macro checks to HasCustomGBufferData()
3. [[#FGBufferData新增变量]]
4. [[#Encode/Decode GBufferData新增逻辑]]
	1. Metallic/Specular/Roughness => ToonShadowLocation/ToonForceShadow/ToonShadowSmoothness
	2. AO => ToonSecondaryShadowLocation
	3. CustomData => ToonShadowColor/ToonSecondaryShadowSmoothness
	4. PrecomputedShadowFactors => ToonSecondaryShadowColor
5. `#define GBUFFER_REFACTOR 0` to disable the auto-generated Encode/Decode GBufferData code and fall back to the hand-written Encode/Decode calls.
6. `#if WRITES_VELOCITY_TO_GBUFFER` => `#if GBUFFER_HAS_VELOCITY`, which **stops writing velocity into the GBuffer**.
### Encode/Decode函数
Packs an RGB color into two 8-bit channels as RGB655: R is quantized from 256 to 64 levels, G and B from 256 to 32 levels. The result is stored in two 8-bit floats: channel 1 holds R plus the top 2 bits of G, channel 2 holds the remaining 3 bits of G plus B.
```c++
float2 EncodeColorToRGB655(float3 Color)
{
const uint ChannelR = (1 << 6) - 1;
const uint ChannelG = (1 << 5) - 1;
const uint ChannelB = (1 << 5) - 1;
uint3 RoundedColor = uint3(float3(
round(Color.r * ChannelR),
round(Color.g * ChannelG),
round(Color.b * ChannelB)
));
return float2(
(RoundedColor.r << 2 | RoundedColor.g >> 3) / 255.0,
((RoundedColor.g << 5 | RoundedColor.b) & 0xFF) / 255.0 // mask to 8 bits so only the low 3 bits of G reach this channel
);
}
float3 DecodeRGB655ToColor(float2 RGB655)
{
const uint ChannelR = (1 << 6) - 1;
const uint ChannelG = (1 << 5) - 1;
const uint ChannelB = (1 << 5) - 1;
uint2 Inputs = uint2(round(RGB655 * 255.0));
uint BitBuffer = (Inputs.x << 8) | Inputs.y;
uint R = (BitBuffer & 0xFC00) >> 10;
uint G = (BitBuffer & 0x03E0) >> 5;
uint B = (BitBuffer & 0x001F);
return float3(R, G, B) * float3(1.0 / ChannelR, 1.0 / ChannelG, 1.0 / ChannelB);
}
```
### FGBufferData新增变量
```c++
struct FGBufferData
{
...
// Toon specular
// 0..1, specular color
half3 ToonSpecularColor;
// 0..1, specular edge position
half ToonSpecularLocation;
// 0..1, specular edge smoothness
half ToonSpecularSmoothness;
// Toon shadow
// 0..1, shadow color
half3 ToonShadowColor;
// 0..1, shadow egde location
half ToonShadowLocation;
// 0..1, shadow edge smoothness
half ToonShadowSmoothness;
// 0..1, force shadow
half ToonForceShadow;
// Toon secondary shadow
// 0..1, secondary shadow color
float3 ToonSecondaryShadowColor;
// 0..1, secondary shadow edge location
float ToonSecondaryShadowLocation;
// 0..1, secondary shadow edge smoothness
float ToonSecondaryShadowSmoothness;
// Toon render
half3 ToonCalcShadowColor;
};
```
### Encode/Decode GBufferData新增逻辑
```c++
void EncodeGBuffer(
FGBufferData GBuffer,
out float4 OutGBufferA,
out float4 OutGBufferB,
out float4 OutGBufferC,
out float4 OutGBufferD,
out float4 OutGBufferE,
out float4 OutGBufferVelocity,
float QuantizationBias = 0 // -0.5 to 0.5 random float. Used to bias quantization.
)
{
...
switch(GBuffer.ShadingModelID)
{
case SHADINGMODELID_TOON_BASE:
OutGBufferB.r = ToonShadingPerMaterialCustomData.ToonShadowLocation;
OutGBufferB.g = ToonShadingPerMaterialCustomData.ToonForceShadow;
OutGBufferB.b = ToonShadingPerMaterialCustomData.ToonShadowSmoothness;
OutGBufferC.a = ToonShadingPerMaterialCustomData.ToonSecondaryShadowLocation;
OutGBufferD.a = ToonShadingPerMaterialCustomData.ToonSecondaryShadowSmoothness;
OutGBufferD.rgb = ToonShadingPerMaterialCustomData.ToonShadowColor.rgb;
OutGBufferE.gba = ToonShadingPerMaterialCustomData.ToonSecondaryShadowColor.rgb;
break;
case SHADINGMODELID_TOON_PBR:
OutGBufferB.g = ToonShadingPerMaterialCustomData.ToonShadowLocation;
OutGBufferD.a = ToonShadingPerMaterialCustomData.ToonShadowSmoothness;
OutGBufferD.rgb = ToonShadingPerMaterialCustomData.ToonShadowColor.rgb;
OutGBufferE.gba = ToonShadingPerMaterialCustomData.ToonSpecularColor.rgb;
break;
case SHADINGMODELID_TOON_SKIN:
OutGBufferB.r = ToonShadingPerMaterialCustomData.ToonShadowLocation;
OutGBufferD.a = ToonShadingPerMaterialCustomData.ToonShadowSmoothness;
OutGBufferD.rgb = ToonShadingPerMaterialCustomData.ToonShadowColor.rgb;
break;
default:
break;
}
...
}
FGBufferData DecodeGBufferData(
float4 InGBufferA,
float4 InGBufferB,
float4 InGBufferC,
float4 InGBufferD,
float4 InGBufferE,
float4 InGBufferF,
float4 InGBufferVelocity,
float CustomNativeDepth,
uint CustomStencil,
float SceneDepth,
bool bGetNormalizedNormal,
bool bChecker)
{
FGBufferData GBuffer = (FGBufferData)0;
...
switch(GBuffer.ShadingModelID)
{
case SHADINGMODELID_TOON_BASE:
GBuffer.ToonShadowColor = InGBufferD.rgb;
GBuffer.ToonShadowLocation = InGBufferB.r;
GBuffer.ToonShadowSmoothness = InGBufferB.b;
GBuffer.ToonForceShadow = InGBufferB.g;
GBuffer.ToonSecondaryShadowColor = InGBufferE.gba;
GBuffer.ToonSecondaryShadowLocation = InGBufferC.a;
GBuffer.ToonSecondaryShadowSmoothness = InGBufferD.a;
GBuffer.Metallic = 0.0;
GBuffer.Specular = 1.0;
GBuffer.Roughness = 1.0;
GBuffer.GBufferAO = 0.0;
GBuffer.IndirectIrradiance = 1.0;
GBuffer.PrecomputedShadowFactors = !(GBuffer.SelectiveOutputMask & SKIP_PRECSHADOW_MASK) ? float4(InGBufferE.r, 1.0, 1.0, 1.0) : ((GBuffer.SelectiveOutputMask & ZERO_PRECSHADOW_MASK) ? 0 : 1);
GBuffer.StoredMetallic = 0.0;
GBuffer.StoredSpecular = 1.0;
break;
case SHADINGMODELID_TOON_PBR:
GBuffer.ToonSpecularColor = InGBufferE.gba;
GBuffer.ToonShadowColor = InGBufferD.rgb;
GBuffer.ToonShadowLocation = InGBufferB.g;
GBuffer.ToonShadowSmoothness = InGBufferD.a;
GBuffer.ToonSecondaryShadowColor = GBuffer.ToonShadowColor;
GBuffer.ToonForceShadow = 1.0;
GBuffer.ToonSpecularLocation = 1.0;
GBuffer.Specular = 1.0;
GBuffer.PrecomputedShadowFactors = !(GBuffer.SelectiveOutputMask & SKIP_PRECSHADOW_MASK) ? float4(InGBufferE.r, 1.0, 1.0, 1.0) : ((GBuffer.SelectiveOutputMask & ZERO_PRECSHADOW_MASK) ? 0 : 1);
break;
case SHADINGMODELID_TOON_SKIN:
GBuffer.ToonShadowColor = InGBufferD.rgb;
GBuffer.ToonShadowLocation = InGBufferB.r;
GBuffer.ToonShadowSmoothness = InGBufferD.a;
GBuffer.ToonSecondaryShadowColor = GBuffer.ToonShadowColor;
GBuffer.ToonForceShadow = 1.0;
GBuffer.Metallic = 0.0;
GBuffer.StoredMetallic = 0.0;
GBuffer.PrecomputedShadowFactors = !(GBuffer.SelectiveOutputMask & SKIP_PRECSHADOW_MASK) ? float4(InGBufferE.r, 1.0, 1.0, 1.0) : ((GBuffer.SelectiveOutputMask & ZERO_PRECSHADOW_MASK) ? 0 : 1);
break;
default:
break;
}
...
};
```
# BasePass
BasePassPixelShader.usf
1. `#if 1` => `#if GBUFFER_REFACTOR && 0`, to disable the auto-generated Encode/Decode GBufferData code and use the hand-written Encode/Decode calls instead.
2. Add the FGBufferData write logic to FPixelShaderInOut_MainPS(), as follows:
```c++
...
switch(GBuffer.ShadingModelID)
{
case SHADINGMODELID_TOON_BASE:
GBuffer.ToonShadowColor = ToonShadingPerMaterialCustomData.ToonShadowColor.rgb;
GBuffer.ToonShadowLocation = ToonShadingPerMaterialCustomData.ToonShadowLocation;
GBuffer.ToonShadowSmoothness = ToonShadingPerMaterialCustomData.ToonShadowSmoothness;
GBuffer.ToonForceShadow = ToonShadingPerMaterialCustomData.ToonForceShadow;
GBuffer.ToonSecondaryShadowColor = ToonShadingPerMaterialCustomData.ToonSecondaryShadowColor.rgb;
GBuffer.ToonSecondaryShadowLocation = ToonShadingPerMaterialCustomData.ToonSecondaryShadowLocation;
GBuffer.ToonSecondaryShadowSmoothness = ToonShadingPerMaterialCustomData.ToonSecondaryShadowSmoothness;
GBuffer.Specular = 1.0;
GBuffer.GBufferAO = 0.0;
GBuffer.PrecomputedShadowFactors.gba = 1;
break;
case SHADINGMODELID_TOON_PBR:
GBuffer.ToonSpecularColor = ToonShadingPerMaterialCustomData.ToonSpecularColor.rgb;
GBuffer.ToonShadowColor = ToonShadingPerMaterialCustomData.ToonShadowColor.rgb;
GBuffer.ToonShadowLocation = ToonShadingPerMaterialCustomData.ToonShadowLocation;
GBuffer.ToonShadowSmoothness = ToonShadingPerMaterialCustomData.ToonShadowSmoothness;
GBuffer.ToonSecondaryShadowColor = ToonShadingPerMaterialCustomData.ToonShadowColor.rgb;
GBuffer.ToonForceShadow = 1.0;
GBuffer.Specular = 1.0;
GBuffer.PrecomputedShadowFactors.gba = 1;
break;
case SHADINGMODELID_TOON_SKIN:
GBuffer.ToonShadowColor = ToonShadingPerMaterialCustomData.ToonShadowColor.rgb;
GBuffer.ToonShadowLocation = ToonShadingPerMaterialCustomData.ToonShadowLocation;
GBuffer.ToonShadowSmoothness = ToonShadingPerMaterialCustomData.ToonShadowSmoothness;
GBuffer.ToonSecondaryShadowColor = ToonShadingPerMaterialCustomData.ToonShadowColor.rgb;
GBuffer.ToonForceShadow = 1.0;
GBuffer.PrecomputedShadowFactors.g = 1;
break;
default:
break;
}
...
```
# Lighting
## ShadingModels
### ShadingCommon.ush
**Added ShadingModelID macros:**
- SHADINGMODELID_TOON_BASE 13
- SHADINGMODELID_TOON_PBR 14
- SHADINGMODELID_TOON_SKIN 15
- SHADINGMODELID_NUM 16
A helper to check whether a shading model is one of the toon models:
```c++
bool IsToonShadingModel(uint ShadingModel)
{
uint4 ToonShadingModels = uint4(SHADINGMODELID_TOON_BASE, SHADINGMODELID_TOON_PBR, SHADINGMODELID_TOON_SKIN, 0xFF);
return any(ShadingModel.xxxx == ToonShadingModels);
}
```
## DeferredLightingCommon.ush
The logic of AccumulateDynamicLighting() was modified as follows.
```c++
FLightAccumulator AccumulateDynamicLighting(
float3 TranslatedWorldPosition, half3 CameraVector, FGBufferData GBuffer, half AmbientOcclusion, uint ShadingModelID,
FDeferredLightData LightData, half4 LightAttenuation, float Dither, uint2 SVPos,
inout float SurfaceShadow)
{
FLightAccumulator LightAccumulator = (FLightAccumulator)0;
half3 V = -CameraVector;
half3 N = GBuffer.WorldNormal;
BRANCH if( GBuffer.ShadingModelID == SHADINGMODELID_CLEAR_COAT && CLEAR_COAT_BOTTOM_NORMAL)
{
const float2 oct1 = ((float2(GBuffer.CustomData.a, GBuffer.CustomData.z) * 4) - (512.0/255.0)) + UnitVectorToOctahedron(GBuffer.WorldNormal);
N = OctahedronToUnitVector(oct1);
}
float3 L = LightData.Direction; // Already normalized
float3 ToLight = L;
float3 MaskedLightColor = LightData.Color;
float LightMask = 1;
if (LightData.bRadialLight)
{
LightMask = GetLocalLightAttenuation( TranslatedWorldPosition, LightData, ToLight, L );
MaskedLightColor *= LightMask;
}
LightAccumulator.EstimatedCost += 0.3f; // running the PixelShader at all has a cost
BRANCH
if( LightMask > 0 )
{
FShadowTerms Shadow;
Shadow.SurfaceShadow = AmbientOcclusion;
Shadow.TransmissionShadow = 1;
Shadow.TransmissionThickness = 1;
Shadow.HairTransmittance.OpaqueVisibility = 1;
const float ContactShadowOpacity = GBuffer.CustomData.a;
GetShadowTerms(GBuffer.Depth, GBuffer.PrecomputedShadowFactors, GBuffer.ShadingModelID, ContactShadowOpacity,
LightData, TranslatedWorldPosition, L, LightAttenuation, Dither, Shadow);
SurfaceShadow = Shadow.SurfaceShadow;
LightAccumulator.EstimatedCost += 0.3f; // add the cost of getting the shadow terms
#if SHADING_PATH_MOBILE
const bool bNeedsSeparateSubsurfaceLightAccumulation = UseSubsurfaceProfile(GBuffer.ShadingModelID);
FDirectLighting Lighting = (FDirectLighting)0;
half NoL = max(0, dot(GBuffer.WorldNormal, L));
#if TRANSLUCENCY_NON_DIRECTIONAL
NoL = 1.0f;
#endif
Lighting = EvaluateBxDF(GBuffer, N, V, L, NoL, Shadow);
Lighting.Specular *= LightData.SpecularScale;
LightAccumulator_AddSplit( LightAccumulator, Lighting.Diffuse, Lighting.Specular, Lighting.Diffuse, MaskedLightColor * Shadow.SurfaceShadow, bNeedsSeparateSubsurfaceLightAccumulation );
LightAccumulator_AddSplit( LightAccumulator, Lighting.Transmission, 0.0f, Lighting.Transmission, MaskedLightColor * Shadow.TransmissionShadow, bNeedsSeparateSubsurfaceLightAccumulation );
#else // SHADING_PATH_MOBILE
// Begin modification
bool UseToonShadow = IsToonShadingModel(GBuffer.ShadingModelID);
BRANCH
if( Shadow.SurfaceShadow + Shadow.TransmissionShadow > 0 || UseToonShadow)// End modification
{
const bool bNeedsSeparateSubsurfaceLightAccumulation = UseSubsurfaceProfile(GBuffer.ShadingModelID);
// Begin modification
BRANCH
if(UseToonShadow)
{
float NoL = dot(N, L);
float ToonNoL = min(NoL, GBuffer.ToonForceShadow);
//Merge SurfaceShadow and TransmissionShadow
Shadow.SurfaceShadow = min(Shadow.SurfaceShadow, Shadow.TransmissionShadow);
//Compute the shadow intensity from ToonShadowSmoothness, ToonShadowLocation and NoL, then derive the primary shadow color.
float RangeHalf = GBuffer.ToonShadowSmoothness * 0.5;
float RangeMin = max(0.0, GBuffer.ToonShadowLocation - RangeHalf);
float RangeMax = min(1.0, GBuffer.ToonShadowLocation + RangeHalf);
float ShadowIntensity = Shadow.SurfaceShadow * smoothstep(RangeMin, RangeMax, ToonNoL);
GBuffer.ToonCalcShadowColor = lerp(GBuffer.ToonShadowColor * LightData.SpecularScale, (1.0).xxx, ShadowIntensity);
//Compute the secondary shadow color and composite the final result.
RangeHalf = GBuffer.ToonSecondaryShadowSmoothness * 0.5;
RangeMin = max(0.0, GBuffer.ToonSecondaryShadowLocation - RangeHalf);
RangeMax = min(1.0, GBuffer.ToonSecondaryShadowLocation + RangeHalf);
ShadowIntensity = Shadow.SurfaceShadow * smoothstep(RangeMin, RangeMax, ToonNoL);
GBuffer.ToonCalcShadowColor = lerp(GBuffer.ToonSecondaryShadowColor * LightData.SpecularScale, GBuffer.ToonCalcShadowColor, ShadowIntensity);
}
// End modification
#if NON_DIRECTIONAL_DIRECT_LIGHTING
float Lighting;
if( LightData.bRectLight )
{
FRect Rect = GetRect( ToLight, LightData );
Lighting = IntegrateLight( Rect );
}
else
{
FCapsuleLight Capsule = GetCapsule( ToLight, LightData );
Lighting = IntegrateLight( Capsule, LightData.bInverseSquared );
}
float3 LightingDiffuse = Diffuse_Lambert( GBuffer.DiffuseColor ) * Lighting;
LightAccumulator_AddSplit(LightAccumulator, LightingDiffuse, 0.0f, 0, MaskedLightColor * Shadow.SurfaceShadow, bNeedsSeparateSubsurfaceLightAccumulation);
#else
FDirectLighting Lighting;
if (LightData.bRectLight)
{
FRect Rect = GetRect( ToLight, LightData );
const FRectTexture SourceTexture = ConvertToRectTexture(LightData);
#if REFERENCE_QUALITY
Lighting = IntegrateBxDF( GBuffer, N, V, Rect, Shadow, SourceTexture, SVPos );
#else
Lighting = IntegrateBxDF( GBuffer, N, V, Rect, Shadow, SourceTexture);
#endif
}
else
{
FCapsuleLight Capsule = GetCapsule( ToLight, LightData );
#if REFERENCE_QUALITY
Lighting = IntegrateBxDF( GBuffer, N, V, Capsule, Shadow, SVPos );
#else
Lighting = IntegrateBxDF( GBuffer, N, V, Capsule, Shadow, LightData.bInverseSquared );
#endif
}
// Begin modification
float SurfaceShadow = UseToonShadow ? 1.0 : Shadow.SurfaceShadow;
float TransmissionShadow = UseToonShadow ? 1.0 : Shadow.TransmissionShadow;
Lighting.Specular *= UseToonShadow ? GBuffer.ToonSpecularColor : LightData.SpecularScale;
LightAccumulator_AddSplit( LightAccumulator, Lighting.Diffuse, Lighting.Specular, Lighting.Diffuse, MaskedLightColor * SurfaceShadow, bNeedsSeparateSubsurfaceLightAccumulation );
LightAccumulator_AddSplit( LightAccumulator, Lighting.Transmission, 0.0f, Lighting.Transmission, MaskedLightColor * TransmissionShadow, bNeedsSeparateSubsurfaceLightAccumulation );
// End modification
LightAccumulator.EstimatedCost += 0.4f; // add the cost of the lighting computations (should sum up to 1 form one light)
#endif
}
#endif // SHADING_PATH_MOBILE
}
return LightAccumulator;
}
```
## ShadingModels.ush
```c++
float3 ToonSpecular(float ToonSpecularLocation, float ToonSpecularSmoothness, float3 ToonSpecularColor, float NoL)
{
float ToonSpecularRangeHalf = ToonSpecularSmoothness * 0.5;
float ToonSpecularRangeMin = ToonSpecularLocation - ToonSpecularRangeHalf;
float ToonSpecularRangeMax = ToonSpecularLocation + ToonSpecularRangeHalf;
return smoothstep(ToonSpecularRangeMin, ToonSpecularRangeMax, NoL) * ToonSpecularColor;
}
```
Two shading-model functions were added: ToonCustomBxDF (**SHADINGMODELID_TOON_BASE**) and ToonLitBxDF (**SHADINGMODELID_TOON_PBR**, **SHADINGMODELID_TOON_SKIN**).
### ToonCustomBxDF changes
The diffuse term is multiplied by the shadow color precomputed in AccumulateDynamicLighting (which already accounts for NoL):
`Lighting.Diffuse *= AreaLight.FalloffColor * (Falloff * NoL);`
=>
`Lighting.Diffuse *= AreaLight.FalloffColor * Falloff * GBuffer.ToonCalcShadowColor;`
Specular is zeroed out; it is computed in the BasePass instead:
`Lighting.Specular = 0`
### ToonLitBxDF changes
The diffuse term is multiplied by the shadow color precomputed in AccumulateDynamicLighting (which already accounts for NoL):
`Lighting.Diffuse *= AreaLight.FalloffColor * (Falloff * NoL);`
=>
`Lighting.Diffuse *= AreaLight.FalloffColor * Falloff * GBuffer.ToonCalcShadowColor;`
Specular is finally multiplied by **Shadow.SurfaceShadow**:
`Lighting.Specular *= Shadow.SurfaceShadow;`
```c++
FDirectLighting ToonLitBxDF( FGBufferData GBuffer, half3 N, half3 V, half3 L, float Falloff, half NoL, FAreaLight AreaLight, FShadowTerms Shadow )
{
BxDFContext Context;
FDirectLighting Lighting;
#if SUPPORTS_ANISOTROPIC_MATERIALS
bool bHasAnisotropy = HasAnisotropy(GBuffer.SelectiveOutputMask);
#else
bool bHasAnisotropy = false;
#endif
float NoV, VoH, NoH;
BRANCH
if (bHasAnisotropy)
{
half3 X = GBuffer.WorldTangent;
half3 Y = normalize(cross(N, X));
Init(Context, N, X, Y, V, L);
NoV = Context.NoV;
VoH = Context.VoH;
NoH = Context.NoH;
}
else
{
#if SHADING_PATH_MOBILE
InitMobile(Context, N, V, L, NoL);
#else
Init(Context, N, V, L);
#endif
NoV = Context.NoV;
VoH = Context.VoH;
NoH = Context.NoH;
SphereMaxNoH(Context, AreaLight.SphereSinAlpha, true);
}
Context.NoV = saturate(abs( Context.NoV ) + 1e-5);
#if MATERIAL_ROUGHDIFFUSE
// Chan diffuse model with roughness == specular roughness. This is not necessarily a good modelisation of reality because when the mean free path is super small, the diffuse can in fact looks rougher. But this is a start.
// Also we cannot use the morphed context maximising NoH as this is causing visual artefact when interpolating rough/smooth diffuse response.
Lighting.Diffuse = Diffuse_Chan(GBuffer.DiffuseColor, Pow4(GBuffer.Roughness), NoV, NoL, VoH, NoH, GetAreaLightDiffuseMicroReflWeight(AreaLight));
#else
Lighting.Diffuse = Diffuse_Lambert(GBuffer.DiffuseColor);
#endif
// Toon Diffuse
Lighting.Diffuse *= AreaLight.FalloffColor * Falloff * GBuffer.ToonCalcShadowColor;
BRANCH
if (bHasAnisotropy)
{
//Lighting.Specular = GBuffer.WorldTangent * .5f + .5f;
Lighting.Specular = AreaLight.FalloffColor * (Falloff * NoL) * SpecularGGX(GBuffer.Roughness, GBuffer.Anisotropy, GBuffer.SpecularColor, Context, NoL, AreaLight);
}
else
{
if( IsRectLight(AreaLight) )
{
Lighting.Specular = RectGGXApproxLTC(GBuffer.Roughness, GBuffer.SpecularColor, N, V, AreaLight.Rect, AreaLight.Texture);
}
else
{
// Toon specular
Lighting.Specular = AreaLight.FalloffColor * (Falloff * NoL) * SpecularGGX(GBuffer.Roughness, GBuffer.SpecularColor, Context, NoL, AreaLight);
}
}
Lighting.Specular *= Shadow.SurfaceShadow;
FBxDFEnergyTerms EnergyTerms = ComputeGGXSpecEnergyTerms(GBuffer.Roughness, Context.NoV, GBuffer.SpecularColor);
// Add energy presevation (i.e. attenuation of the specular layer onto the diffuse component
Lighting.Diffuse *= ComputeEnergyPreservation(EnergyTerms);
// Add specular microfacet multiple scattering term (energy-conservation)
Lighting.Specular *= ComputeEnergyConservation(EnergyTerms);
Lighting.Transmission = 0;
return Lighting;
}
FDirectLighting ToonCustomBxDF( FGBufferData GBuffer, half3 N, half3 V, half3 L, float Falloff, half NoL, FAreaLight AreaLight, FShadowTerms Shadow )
{
BxDFContext Context;
FDirectLighting Lighting;
float NoV, VoH, NoH;
#if SHADING_PATH_MOBILE
InitMobile(Context, N, V, L, NoL);
#else
Init(Context, N, V, L);
#endif
NoV = Context.NoV;
VoH = Context.VoH;
NoH = Context.NoH;
SphereMaxNoH(Context, AreaLight.SphereSinAlpha, true);
Context.NoV = saturate(abs( Context.NoV ) + 1e-5);
#if MATERIAL_ROUGHDIFFUSE
// Chan diffuse model with roughness == specular roughness. This is not necessarily a good modelisation of reality because when the mean free path is super small, the diffuse can in fact looks rougher. But this is a start.
// Also we cannot use the morphed context maximising NoH as this is causing visual artefact when interpolating rough/smooth diffuse response.
Lighting.Diffuse = Diffuse_Chan(GBuffer.DiffuseColor, Pow4(GBuffer.Roughness), NoV, NoL, VoH, NoH, GetAreaLightDiffuseMicroReflWeight(AreaLight));
#else
Lighting.Diffuse = Diffuse_Lambert(GBuffer.DiffuseColor);
#endif
// Toon Diffuse
Lighting.Diffuse *= AreaLight.FalloffColor * Falloff * GBuffer.ToonCalcShadowColor;
// Toon specular
// Lighting.Specular = AreaLight.FalloffColor * (Falloff * NoL) * ToonSpecular(GBuffer.ToonSpecularLocation, GBuffer.ToonSpecularSmoothness, GBuffer.ToonSpecularColor, NoL);
// Lighting.Specular *= Shadow.SurfaceShadow;
// FBxDFEnergyTerms EnergyTerms = ComputeGGXSpecEnergyTerms(GBuffer.Roughness, Context.NoV, GBuffer.SpecularColor);
// Add energy presevation (i.e. attenuation of the specular layer onto the diffuse component
// Lighting.Diffuse *= ComputeEnergyPreservation(EnergyTerms);
Lighting.Specular = 0;
Lighting.Transmission = 0;
return Lighting;
}
FDirectLighting IntegrateBxDF( FGBufferData GBuffer, half3 N, half3 V, half3 L, float Falloff, half NoL, FAreaLight AreaLight, FShadowTerms Shadow )
{
switch( GBuffer.ShadingModelID )
{
case SHADINGMODELID_DEFAULT_LIT:
case SHADINGMODELID_SINGLELAYERWATER:
case SHADINGMODELID_THIN_TRANSLUCENT:
return DefaultLitBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
case SHADINGMODELID_SUBSURFACE:
return SubsurfaceBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
case SHADINGMODELID_PREINTEGRATED_SKIN:
return PreintegratedSkinBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
case SHADINGMODELID_CLEAR_COAT:
return ClearCoatBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
case SHADINGMODELID_SUBSURFACE_PROFILE:
return SubsurfaceProfileBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
case SHADINGMODELID_TWOSIDED_FOLIAGE:
return TwoSidedBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
case SHADINGMODELID_HAIR:
return HairBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
case SHADINGMODELID_CLOTH:
return ClothBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
case SHADINGMODELID_EYE:
return EyeBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
case SHADINGMODELID_TOON_BASE:
return ToonCustomBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
case SHADINGMODELID_TOON_PBR:
case SHADINGMODELID_TOON_SKIN:
return ToonLitBxDF( GBuffer, N, V, L, Falloff, NoL, AreaLight, Shadow );
default:
return (FDirectLighting)0;
}
}
```
## DeferredLightPixelShaders.usf
Logic added in DeferredLightPixelMain():
1. Non-toon materials render as usual.
2. Toon materials only compute the toon light/shadow when LightingChannel = 2.
```c++
bool UseToonShadow = IsToonShadingModel(ScreenSpaceData.GBuffer.ShadingModelID);
// LightingChannel Toon Shading only calculate light of LightingChannel = 2
BRANCH if (!UseToonShadow || (UseToonShadow && DeferredLightUniforms.LightingChannelMask & 0x4))
{
const float SceneDepth = CalcSceneDepth(InputParams.ScreenUV);
const FDerivedParams DerivedParams = GetDerivedParams(InputParams, SceneDepth);
FDeferredLightData LightData = InitDeferredLightFromUniforms(CURRENT_LIGHT_TYPE);
UpdateLightDataColor(LightData, InputParams, DerivedParams);
#if USE_HAIR_COMPLEX_TRANSMITTANCE
if (ScreenSpaceData.GBuffer.ShadingModelID == SHADINGMODELID_HAIR && ShouldUseHairComplexTransmittance(ScreenSpaceData.GBuffer))
{
LightData.HairTransmittance = EvaluateDualScattering(ScreenSpaceData.GBuffer, DerivedParams.CameraVector, -DeferredLightUniforms.Direction);
}
#endif
float Dither = InterleavedGradientNoise(InputParams.PixelPos, View.StateFrameIndexMod8);
float SurfaceShadow = 1.0f;
float4 LightAttenuation = GetLightAttenuationFromShadow(InputParams, SceneDepth);
float4 Radiance = GetDynamicLighting(DerivedParams.TranslatedWorldPosition, DerivedParams.CameraVector, ScreenSpaceData.GBuffer, ScreenSpaceData.AmbientOcclusion, ScreenSpaceData.GBuffer.ShadingModelID, LightData, LightAttenuation, Dither, uint2(InputParams.PixelPos), SurfaceShadow);
OutColor += Radiance;
}
```
# PostProcess
## ToneMapping
The C++ side mainly modifies:
1. PostProcessing.cpp
2. PostProcessTonemap.cpp
3. PostProcessTonemap.h
***Implements passing a `TRDGUniformBufferRef<FSceneTextureUniformParameters>` into the tonemapper shader.***
Then, in PostProcessTonemap.usf, **CustomStencil** is tested; if the test passes, the previously rendered scene color is returned directly. In practice this is not visible in BufferVisualization.
```c++
#include "DeferredShadingCommon.ush"
// pixel shader entry point
void MainPS(
in noperspective float2 UV : TEXCOORD0,
in noperspective float2 InVignette : TEXCOORD1,
in noperspective float4 GrainUV : TEXCOORD2,
in noperspective float2 ScreenPos : TEXCOORD3,
in noperspective float2 FullViewUV : TEXCOORD4,
float4 SvPosition : SV_POSITION, // after all interpolators
out float4 OutColor : SV_Target0
#if OUTPUT_LUMINANCE
, out float OutLuminance: SV_Target1
#endif
)
{
float Luminance;
FGBufferData SamplerBuffer = GetGBufferData(UV * View.ResolutionFractionAndInv.x, false);
if (SamplerBuffer.CustomStencil > 1.0f && abs(SamplerBuffer.CustomDepth - SamplerBuffer.Depth) < 1)
{
OutColor = SampleSceneColor(UV);
}
else
{
OutColor = TonemapCommonPS(UV, InVignette, GrainUV, ScreenPos, FullViewUV, SvPosition, Luminance);
}
#if OUTPUT_LUMINANCE
OutLuminance = Luminance;
#endif
}
```
## PostProcessCombineLUT.usf
Mainly ports the UE4 version of the LUT to keep the look consistent.
# Miscellaneous
## GpuSkinCacheComputeShader.usf
Two lines were commented out; their purpose is unclear.
```c++
#if GPUSKIN_MORPH_BLEND
{
Intermediates.UnpackedPosition += Unpacked.DeltaPosition;
// calc new normal by offseting it with the delta
LocalTangentZ = normalize( LocalTangentZ + Unpacked.DeltaTangentZ);
// derive the new tangent by orthonormalizing the new normal against
// the base tangent vector (assuming these are normalized)
LocalTangentX = normalize( LocalTangentX - (dot(LocalTangentX, LocalTangentZ) * LocalTangentZ) );
}
#else
#if GPUSKIN_APEX_CLOTH
```
=>
```c++
#if GPUSKIN_MORPH_BLEND
{
Intermediates.UnpackedPosition += Unpacked.DeltaPosition;
// calc new normal by offseting it with the delta
//LocalTangentZ = normalize( LocalTangentZ + Unpacked.DeltaTangentZ);
// derive the new tangent by orthonormalizing the new normal against
// the base tangent vector (assuming these are normalized)
//LocalTangentX = normalize( LocalTangentX - (dot(LocalTangentX, LocalTangentZ) * LocalTangentZ) );
}
#else
#if GPUSKIN_APEX_CLOTH
```
# Related asset paths
Content/ResArt/CommandMaterial
- [x] [[#Functions]]
- [x] [[#MatCap]]
- [x] [[#Materials]]
- [ ] [[#MaterialInstance]]
- [x] [[#Outline]]
- [x] [[#Textures]]
# Functions
- [x] [[#ShadingModels]]
- MF_ToonPBRShadingModel
- MF_ToonBaseShadingModel
- MF_ToonSkinShadingModel
- MF_ToonHairShadingModel
- [x] [[#Effects]]
- MF_Dissolve
- MF_EdgeLight
- MF_Fur
- [x] Tools
- MF_DecodeArrayIDAndAlpha: splits the input float into its integer and fractional parts; the integer part is the TextureArrayID, the fractional part the Alpha parameter.
	- Mainly used by the **Face Overlay Color** effect of MF_FaceOverlay and the **Eye Overlay Color** effect of M_Penetrate.
- MF_Hash11
- MF_Hash12
- MF_Hash13
- MF_Hash22
- MF_Hash23
- [x] ***CameraLightCollection***: each character's main light color, rim light color, main light intensity, and LightDir.
- MF_CharacterMainLightIntensity: a CustomNode function that switch-cases on RoleID to compute each character's MainLightColor * MainLightIntensity; these values are usually driven from a Sequence.
- MF_ApplyToonHairSpecular: hair specular computation, called by M_ToonBase_V02.
- ***MF_CharacterEffects***: used by almost every character material. Mainly calls MF_EdgeLight and MF_Dissolve to implement the **edge-light and dissolve effects**.
- MF_CharacterRimLightIntensity: a CustomNode function that switch-cases on RoleID to compute each character's RimLightColor * RimLightIntensity; these values are usually driven from a Sequence.
- MF_FaceHighlightAndShadow: renders facial shadows from a shadow texture (left/right chosen via Dot(LightVector, FaceRight)) and an edge highlight (masked via Dot(LightVector, FaceFront)). Called by M_ToonFace, but only the highlight is used, not the facial shadow (and in practice it is barely visible).
	- Its FaceRight, FaceFront and FaceLightDir **use CustomPrimitiveData**.
- MF_FaceOverlay: an extra BaseColorTexture overlay for the face material, presumably for blush on special expressions; called by M_ToonFace.
- MF_Inputs: laser material effect, only called by M_ToonLaserPBR and MI_Leishezhi.
- MF_Matcap: matcap effect; outputs two texture contributions, Multiply and Add; called by MF_ToonPBRInput.
- **MF_Matcap_Add**: upgraded version of MF_Matcap.
	- OutputAdd = LightMap * LightMatcap
	- OutputEmissive = Matcap Texture 01 + Matcap Texture 02 + Matcap Texture 03 + Emissive Matcap * Emissive Texture
	- OutputNeckShadow = lerp( lerp(1.0, Matcap Color 04, Matcap Texture 04), 1, NeckShadow)
	- OutputInnerline = Innerline Matcap
- MF_NormalMapIntensity: normal-map intensity adjustment, referenced by many materials.
- ***MF_SceneEffects***: calls MF_Dissolve to implement the dissolve effect, though few materials use it.
- MF_ShiftTangent: the ShiftTangent from Kajiya-Kay; called by M_ToonHair_V01.
- MF_StrandSpec: the Kajiya-Kay specular computation; called by M_ToonHair_V01.
- MF_Surface: surface-attribute logic, called by MF_ToonPBRInput and MF_ToonBaseInput.
- **MF_Surface_V02**: surface-attribute logic, called by MF_ToonBaseInput_V02. Compared with MF_Surface it drops the Specular output.
- MF_TextureBooming: material never shipped.
- **MF_ToonBaseInput**: the common ToonBase material-input function. Combines the MF_CharacterMainLightIntensity, MF_Matcap_Add, MF_Surface and MF_ToonBaseShadingModel material functions plus assorted parameters. Called by **M_ToonBase_V02_Penetrate** and **M_ToonBase_V02_Test**.
- ***MF_ToonBaseInput_V02***: the common ToonBase material-input function, V02. Combines the MF_CharacterMainLightIntensity, MF_Matcap_Add, **MF_Surface_V02** and MF_ToonBaseShadingModel material functions plus assorted parameters. Called by **M_ToonBase_V02**, **M_NaiLin_AnotherWorld02** and **M_EggGym_Flower**.
- **MF_ToonHairSpecularMaskUV**: computes the hair specular mask UV; called by MF_ApplyToonHairSpecular (**M_ToonBase_V02**).
	- Uses dot( float3(0,0,1.0f), CameraVector) to offset the **V axis of the HairMask sample UV**, producing the specular shift effect.
- **MF_ToonPBRInput**: the common ToonPBR material-input function. Combines the MF_CharacterMainLightIntensity, MF_Matcap, MF_Surface and **MF_ToonPBRShadingModel** material functions plus assorted parameters. Called by **M_Penetrate**, **M_ToonBase_V01**, **M_ToonFace**, **M_ToonHair_V01**, **M_ToonSkin**, **M_BeiLa_Skin_AnotherWorld** and **M_Wave**.
- ***MF_TranslucentDOF***: depth-of-field handling for translucent materials (***not fully understood***). Called by MF_Input, **MF_Surface**, **MF_Surface_V02**, M_ToonFacee_old and M_ToonLaserPBR.
- MF_VectorRotateAboutAxis: rotates a vector about an axis. Called by MF_WorldSpaceStarring.
- MF_WorldSpaceStarring: called by M_NaiLin_AnotherWorld02.
- SceneEffectsCollection: scene-effect material parameter collection (**possibly obsolete, since the UE5 open world does not support the old level streaming**). Referenced by MF_SceneEffects, BP_EmptyToStageA and other materials.
## ShadingModels
A CustomNode builds an FMaterialAttributes struct which is then fed into a material running in Material Attributes mode. Other tricks used here:
1. Enable MRT5 via a macro: `#define PIXELSHADEROUTPUT_MRT5 1`
2. Set the ShadingModelID: `Result.ShadingModel = 14;`
3. Use FToonShadingPerMaterialCustomData ToonShadingPerMaterialCustomData (declared in Common.ush) to pass the toon-shading data, which BasePassPixelShader.usf later packs into the GBuffer.
### MF_ToonPBRShadingModel
```c++
FMaterialAttributes Result;
Result.BaseColor = float3(1.0, 1.0, 1.0);
Result.Metallic = 0.0;
Result.Specular = 0.0;
Result.Roughness = 0.0;
Result.Anisotropy = 0.0;
Result.EmissiveColor = float3(0.0, 0.0, 0.0);
Result.Opacity = 1.0;
Result.OpacityMask = 1.0;
Result.Normal = float3(0.0, 0.0, 1.0);
Result.Tangent = float3(1.0, 0.0, 0.0);
Result.WorldPositionOffset = float3(0.0, 0.0, 0.0);
Result.SubsurfaceColor = float3(1.0, 1.0, 1.0);
Result.ClearCoat = 1.0;
Result.ClearCoatRoughness = 0.1;
Result.AmbientOcclusion = 1.0;
Result.Refraction = float3(0.0, 0.0, 0.0);
Result.PixelDepthOffset = 0.0;
Result.ShadingModel = 1;
Result.CustomizedUV0 = float2(0.0, 0.0);
Result.CustomizedUV1 = float2(0.0, 0.0);
Result.CustomizedUV2 = float2(0.0, 0.0);
Result.CustomizedUV3 = float2(0.0, 0.0);
Result.CustomizedUV4 = float2(0.0, 0.0);
Result.CustomizedUV5 = float2(0.0, 0.0);
Result.CustomizedUV6 = float2(0.0, 0.0);
Result.CustomizedUV7 = float2(0.0, 0.0);
Result.BentNormal = float3(0.0, 0.0, 1.0);
Result.ClearCoatBottomNormal = float3(0.0, 0.0, 1.0);
Result.CustomEyeTangent = float3(0.0, 0.0, 0.0);
#define PIXELSHADEROUTPUT_MRT5 1
Result.ShadingModel = 14;
ToonShadingPerMaterialCustomData.ToonSpecularColor = saturate(SpecularColor.rgb);
ToonShadingPerMaterialCustomData.ToonShadowColor = saturate(ShadowColor.rgb);
ToonShadingPerMaterialCustomData.ToonShadowLocation = saturate(CutPosition);
ToonShadingPerMaterialCustomData.ToonShadowSmoothness = saturate(CutSmoothness);
return Result;
```
### MF_ToonBaseShadingModel
```c++
FMaterialAttributes Result;
Result.BaseColor = float3(1.0, 1.0, 1.0);
Result.Metallic = 0.0;
Result.Specular = 0.0;
Result.Roughness = 0.0;
Result.Anisotropy = 0.0;
Result.EmissiveColor = float3(0.0, 0.0, 0.0);
Result.Opacity = 1.0;
Result.OpacityMask = 1.0;
Result.Normal = float3(0.0, 0.0, 1.0);
Result.Tangent = float3(1.0, 0.0, 0.0);
Result.WorldPositionOffset = float3(0.0, 0.0, 0.0);
Result.SubsurfaceColor = float3(1.0, 1.0, 1.0);
Result.ClearCoat = 1.0;
Result.ClearCoatRoughness = 0.1;
Result.AmbientOcclusion = 1.0;
Result.Refraction = float3(0.0, 0.0, 0.0);
Result.PixelDepthOffset = 0.0;
Result.ShadingModel = 1;
Result.CustomizedUV0 = float2(0.0, 0.0);
Result.CustomizedUV1 = float2(0.0, 0.0);
Result.CustomizedUV2 = float2(0.0, 0.0);
Result.CustomizedUV3 = float2(0.0, 0.0);
Result.CustomizedUV4 = float2(0.0, 0.0);
Result.CustomizedUV5 = float2(0.0, 0.0);
Result.CustomizedUV6 = float2(0.0, 0.0);
Result.CustomizedUV7 = float2(0.0, 0.0);
Result.BentNormal = float3(0.0, 0.0, 1.0);
Result.ClearCoatBottomNormal = float3(0.0, 0.0, 1.0);
Result.CustomEyeTangent = float3(0.0, 0.0, 0.0);
// Enable the 5th render target so the extra toon data reaches the GBuffer
#define PIXELSHADEROUTPUT_MRT5 1
// Override the default Lit ID with SHADINGMODELID_TOON_BASE (13)
Result.ShadingModel = 13;
// Primary and secondary shadow bands; locations are clamped below the specular cut
ToonShadingPerMaterialCustomData.ToonShadowColor = saturate(ShadowColor.rgb);
ToonShadingPerMaterialCustomData.ToonShadowLocation = clamp(ShadowLocation, 0, SpecularLocation);
ToonShadingPerMaterialCustomData.ToonShadowSmoothness = saturate(ShadowSmoothness);
ToonShadingPerMaterialCustomData.ToonForceShadow = saturate(ForceShadow);
ToonShadingPerMaterialCustomData.ToonSecondaryShadowColor = saturate(SecondaryShadowColor.rgb);
ToonShadingPerMaterialCustomData.ToonSecondaryShadowLocation = clamp(SecondaryShadowLocation, 0, SpecularLocation);
ToonShadingPerMaterialCustomData.ToonSecondaryShadowSmoothness = saturate(SecondaryShadowSmoothness);
return Result;
```
### MF_ToonSkinShadingModel
```c++
FMaterialAttributes Result;
Result.BaseColor = float3(1.0, 1.0, 1.0);
Result.Metallic = 0.0;
Result.Specular = 0.0;
Result.Roughness = 0.0;
Result.Anisotropy = 0.0;
Result.EmissiveColor = float3(0.0, 0.0, 0.0);
Result.Opacity = 1.0;
Result.OpacityMask = 1.0;
Result.Normal = float3(0.0, 0.0, 1.0);
Result.Tangent = float3(1.0, 0.0, 0.0);
Result.WorldPositionOffset = float3(0.0, 0.0, 0.0);
Result.SubsurfaceColor = float3(1.0, 1.0, 1.0);
Result.ClearCoat = 1.0;
Result.ClearCoatRoughness = 0.1;
Result.AmbientOcclusion = 1.0;
Result.Refraction = float3(0.0, 0.0, 0.0);
Result.PixelDepthOffset = 0.0;
Result.ShadingModel = 1;
Result.CustomizedUV0 = float2(0.0, 0.0);
Result.CustomizedUV1 = float2(0.0, 0.0);
Result.CustomizedUV2 = float2(0.0, 0.0);
Result.CustomizedUV3 = float2(0.0, 0.0);
Result.CustomizedUV4 = float2(0.0, 0.0);
Result.CustomizedUV5 = float2(0.0, 0.0);
Result.CustomizedUV6 = float2(0.0, 0.0);
Result.CustomizedUV7 = float2(0.0, 0.0);
Result.BentNormal = float3(0.0, 0.0, 1.0);
Result.ClearCoatBottomNormal = float3(0.0, 0.0, 1.0);
Result.CustomEyeTangent = float3(0.0, 0.0, 0.0);
// Enable the 5th render target so the extra toon data reaches the GBuffer
#define PIXELSHADEROUTPUT_MRT5 1
// Override the default Lit ID with the toon skin shading model (15)
Result.ShadingModel = 15;
ToonShadingPerMaterialCustomData.ToonShadowColor = saturate(ShadowColor.rgb);
ToonShadingPerMaterialCustomData.ToonShadowLocation = saturate(CutPosition);
ToonShadingPerMaterialCustomData.ToonShadowSmoothness = saturate(CutSmoothness);
return Result;
```
### MF_ToonHairShadingModel
```c++
FMaterialAttributes Result;
Result.BaseColor = float3(1.0, 1.0, 1.0);
Result.Metallic = 0.0;
Result.Specular = 0.0;
Result.Roughness = 0.0;
Result.Anisotropy = 0.0;
Result.EmissiveColor = float3(0.0, 0.0, 0.0);
Result.Opacity = 1.0;
Result.OpacityMask = 1.0;
Result.Normal = float3(0.0, 0.0, 1.0);
Result.Tangent = float3(1.0, 0.0, 0.0);
Result.WorldPositionOffset = float3(0.0, 0.0, 0.0);
Result.SubsurfaceColor = float3(1.0, 1.0, 1.0);
Result.ClearCoat = 1.0;
Result.ClearCoatRoughness = 0.1;
Result.AmbientOcclusion = 1.0;
Result.Refraction = float3(0.0, 0.0, 0.0);
Result.PixelDepthOffset = 0.0;
Result.ShadingModel = 1;
Result.CustomizedUV0 = float2(0.0, 0.0);
Result.CustomizedUV1 = float2(0.0, 0.0);
Result.CustomizedUV2 = float2(0.0, 0.0);
Result.CustomizedUV3 = float2(0.0, 0.0);
Result.CustomizedUV4 = float2(0.0, 0.0);
Result.CustomizedUV5 = float2(0.0, 0.0);
Result.CustomizedUV6 = float2(0.0, 0.0);
Result.CustomizedUV7 = float2(0.0, 0.0);
Result.BentNormal = float3(0.0, 0.0, 1.0);
Result.ClearCoatBottomNormal = float3(0.0, 0.0, 1.0);
Result.CustomEyeTangent = float3(0.0, 0.0, 0.0);
// Hair reuses SHADINGMODELID_TOON_PBR (14) with anisotropy forced on
Result.ShadingModel = 14;
Result.Anisotropy = 1.0;
// Shadow and specular parameters are packed into the generic custom-data slots
ToonShadingPerMaterialCustomData.CustomData0 = saturate(float4(ShadowColor.rgb, ShadowSmoothness));
ToonShadingPerMaterialCustomData.CustomData1 = saturate(float4(SpecularAbsorbance, SpecularRangeParam.xyz));
return Result;
```
## Effects
- **MF_EdgeLight**: rim-light effect, built from UE's Fresnel function plus a few textures and parameters. Mainly referenced by ***MF_CharacterEffects***.
- **MF_Dissolve**: dissolve transition effect. Mainly referenced by ***MF_CharacterEffects*** and ***MF_SceneEffects***.
- MF_Fur: material function for the fur effect, referenced by **ToonShading_Standard_nooffset_fur**. It relies mostly on the ***GFur plugin***, so it has little reference value and adds one more obstacle to engine upgrades.
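The rim-light term in MF_EdgeLight is the standard Fresnel falloff. A minimal sketch of that math in plain C++, matching UE's Fresnel material node with a zero base reflect fraction (the `Exponent` and `Intensity` parameter names are illustrative assumptions, not the actual material parameters):

```cpp
#include <algorithm>
#include <cmath>

// Schlick-style rim term: brightest where the surface normal turns away
// from the view direction (NdotV -> 0), zero when facing the camera.
float RimLight(float NdotV, float Exponent, float Intensity)
{
    float Fresnel = std::pow(1.0f - std::clamp(NdotV, 0.0f, 1.0f), Exponent);
    return Fresnel * Intensity;
}
```

In the material this result is multiplied by a rim color and mask textures before being added to Emissive.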
# Matcap
Stores a large number of **spherical environment maps** used for Matcap. In addition, `/ResArt/CommonMaterial/Materials/V02/MatCap/` also stores Matcap textures.
## How to make spherical panoramas
[How to turn a drone spherical panorama back into a spherical view] https://www.bilibili.com/video/BV1yz411q7Eg/?share_source=copy_web&vd_source=fe8142e8e12816535feaeabd6f6cdc8e
# Materials
>All characters and new outfits have been migrated to V02; V01 is deprecated.
- Special
    - M_BeiLa_Skin_AnotherWorld: specially customized material.
- [ ] V02
    - Special
        - M_NaiLin_AnotherWorld02: specially customized material.
    - ***[[#M_ToonBase_V02]]***: **default ShadingModel is 13**, i.e. SHADINGMODELID_TOON_BASE.
    - ~~M_ToonBase_V02_Test~~: for testing; the main difference is that it uses MF_ToonBaseInput, which internally uses the old MF_Surface.
    - MI_ToonBase_V02
    - MI_ToonSkin_V02
    - MI_ToonFace_V02
    - MI_ToonHair_V02
    - MI_Brow_V02
    - MI_Eye_V02
    - MI_EyeGlass_V02
    - MI_EyeHighlight_V02
    - MI_EyeShadow_V02
    - MI_MakeUp_V02
    - MI_TeethTongue_V02
- [x] **M_Eye_Highlight**
- M_Hide: material used to hide meshes.
- [x] M_Penetrate
- [x] **M_ToonBase_V01**: main logic is MF_ToonPBRInput => MF_CharacterEffects. **Default ShadingModel is 14**, i.e. **SHADINGMODELID_TOON_PBR**.
- [x] M_ToonBase_V02_Penetrate: M_ToonBase_V01 with the Penetrate feature added.
- [x] **M_ToonFace**
- [x] M_ToonFace_old
- [x] **M_ToonHair_V01**
- [x] **M_ToonSkin**
## M_ToonBase_V02
Compared with M_ToonBase_V01, the main differences in logic are:
1. MF_ToonPBRInput => MF_ToonBaseInput_V02
    1. MF_Matcap_Add => MF_Matcap: no longer outputs Specular; the highlight result is folded into BaseColor and Emissive instead.
2. MF_ToonPBRShadingModel => MF_ToonBaseShadingModel:
    1. Specular removed.
    2. ToonShadowSmoothness added.
    3. ToonSecondaryShadow added.
    4. ShadingModel is 13, i.e. **SHADINGMODELID_TOON_BASE** (14 => 13).
3. MF_ApplyToonHairSpecular() added: computes the hair specular and adds the result onto Emissive.
4. Penetrate logic added; its result is added onto WPO.
5. Refraction logic added: Refraction is set by lerping between the Normal and a Fresnel node.
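The ShadowLocation/ShadowSmoothness pairs written into ToonShadingPerMaterialCustomData drive a typical two-tone toon ramp: a smoothstep window centered on the shadow cut. A minimal C++ sketch of that ramp, assuming the parameters behave like the usual location/smoothness pair (an illustration, not the actual MF_ToonBaseShadingModel code):

```cpp
#include <algorithm>
#include <cmath>

// Two-tone toon ramp: returns 0 (shadow) -> 1 (lit) as NdotL crosses
// ShadowLocation, with ShadowSmoothness widening the transition band.
float ToonShadowRamp(float NdotL, float ShadowLocation, float ShadowSmoothness)
{
    float Half  = ShadowSmoothness * 0.5f;
    float Edge0 = ShadowLocation - Half;
    float Edge1 = ShadowLocation + Half;
    float T = std::clamp((NdotL - Edge0) / std::max(Edge1 - Edge0, 1e-5f), 0.0f, 1.0f);
    return T * T * (3.0f - 2.0f * T);  // smoothstep
}
```

The secondary shadow repeats the same ramp at a second location to produce the darker inner band.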
# MaterialInstance
# Outline
Stores the main Outline material; the Outline materials used by each character live in that character's folder. For outlines, use ***M_Outline_V03*** or ***MI_Outline_V03***.
1. The material's ShadingModel is Unlit and its BlendMode is Masked.
2. Every outline material includes **MF_CharacterEffects** and MF_CharacterMainLightIntensity, so they all support the dissolve effect and per-character adjustment of the outline lighting (wired into the material's Emissive pin).
- M_Outline_V01 & M_Outline_V02
    - **WPO** pin logic: the **outline-thickness control** is basically the same as V01, except V01 has a Minimum Line Thickness floor. Everything else is identical.
- ***M_Outline_V03***
    - **OpacityMask** pin logic: slightly different from V01/V02 — it multiplies the a channel of the BaseTexture by HideMask, although no material actually uses this.
    - **WPO** pin logic: V03 reworks the **outline-thickness control** more sensibly. The RGB channels of VertexColor replace VertexNormalWS as the backface-extrusion direction, so the extrusion no longer affects lighting.
- MaterialInstance
    - MI_Outline_V03
    - MI_Outline_Face_V03
    - MI_Outline_Hair_V03
    - MI_Outline_Skin_V03
- Dissolve: material instances inheriting from the corresponding instances in the MaterialInstance folder, with **EnableDissolve checked**.
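The V03 WPO trick is the classic inverted-hull outline: push each backface vertex outward along a smoothed normal baked into vertex color. A minimal C++ sketch of the offset math (the [0,1]→[-1,1] remap and the `Thickness` name are assumptions about how the vertex-color direction is stored, not the exact material graph):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Inverted-hull outline offset: move the vertex along the smoothed normal
// (stored in vertex color, remapped from [0,1] to [-1,1]) by Thickness.
Vec3 OutlineWPO(Vec3 VertexColor, float Thickness)
{
    Vec3 Dir = { VertexColor.x * 2.0f - 1.0f,
                 VertexColor.y * 2.0f - 1.0f,
                 VertexColor.z * 2.0f - 1.0f };
    float Len = std::sqrt(Dir.x * Dir.x + Dir.y * Dir.y + Dir.z * Dir.z);
    if (Len > 1e-5f) { Dir.x /= Len; Dir.y /= Len; Dir.z /= Len; }
    return { Dir.x * Thickness, Dir.y * Thickness, Dir.z * Thickness };
}
```

Because the real surface normal is left untouched, the extruded hull shades consistently with the base mesh — the point of the V03 change.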
# Textures
Stores some default textures plus some test (unused) textures.
# Facial capture troubleshooting
## c++
~~IdolAnimInstance~~: no related code.
~~MotionSyncComponent~~
~~UOVRLipSyncActorComponentBase~~
~~MotionReceiverActor.cpp~~
## ts
TsArkitDataReceiver
TsMotionRetargetComponent
TsMotionSyncComponent
# Environment
- Unity 2020.1.16
- Matching Xcode version requirements: https://developer.apple.com/cn/support/xcode/
- According to Cao (曹老师), no certificate is needed to package to a phone.
# Certificates
Certificate password: 123456
## Steps
https://blog.csdn.net/ListJson/article/details/128044539
1. Generate a CSR file for requesting certificates: Keychain Access - Certificate Assistant - Request a Certificate from a Certificate Authority.
2. Import the certificates into the Mac: request the Development and Distribution certificates from the Apple Developer site and import them into macOS (drag them into the "login" keychain).
3. Trust the certificates: double-click each certificate in Keychain Access, open the Trust section, and set it to Always Trust.
4. Export the P12 certificate.
# Packaging