Init

03-UnrealEngine/Rendering/AIGC/4DGaussians.md (new file, +37 lines)

---
title: Untitled
date: 2024-03-11 13:32:22
excerpt:
tags:
rating: ⭐
---

# Introduction

- https://github.com/hustvl/4DGaussians
    - Uses **diff_gaussian_rasterization** as its renderer.
- https://github.com/yzslab/gaussian-splatting-lightning
    - Uses its built-in renderers under **internal/renderers** in the project directory, which are still based on **diff-gaussian-rasterization**.

**diff_gaussian_rasterization**: https://github.com/graphdeco-inria/diff-gaussian-rasterization . See [[GaussianViewer]] for details; it is built on the same rasterizer.

Questions:

1. 4DGaussians
    1. How does the data differ from 3DGaussians? Mainly in load_ply().

# hustvl/4DGaussians

- scene/gaussian_model.py: scene management
    - load_ply(): reads the point-cloud file (a sketch of typical 3DGS PLY reading follows this list).
    - load_model(): loads the AI model?
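
For reference when comparing load_ply() implementations, the sketch below shows how a 3DGS-style point cloud is typically read. It is a minimal sketch only, assuming the plyfile package and the property names of the original 3DGS export format (not code from either repository):

```python
import numpy as np
from plyfile import PlyData

def load_gaussian_ply(path: str):
    """Minimal 3DGS-style PLY reader; property names assumed from the original 3DGS format."""
    ply = PlyData.read(path)
    v = ply["vertex"]
    xyz = np.stack([v["x"], v["y"], v["z"]], axis=-1)
    opacity = np.asarray(v["opacity"])[..., None]
    scale = np.stack([v[f"scale_{i}"] for i in range(3)], axis=-1)
    rot = np.stack([v[f"rot_{i}"] for i in range(4)], axis=-1)
    # DC term of the spherical harmonics (3 channels); the higher-order
    # coefficients live in f_rest_* and their count depends on the SH degree.
    sh_dc = np.stack([v[f"f_dc_{i}"] for i in range(3)], axis=-1)
    return xyz, opacity, scale, rot, sh_dc
```

Comparing which properties each repository's load_ply() reads, and what it does with them afterwards, is presumably where the 3D/4D difference shows up.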

# Notes from the discussion with 毛同学

Compare two repositories:

- https://github.com/hustvl/4DGaussians
- and 3DGaussian

Recording: 毛钟楷's personal meeting room
Recording file: https://meeting.tencent.com/v2/cloud-record/share?id=917807b7-2772-4891-b33c-3e61a71904a9&from=3

# Running neural-network models in UE5

- https://zhuanlan.zhihu.com/p/665593759
- https://github.com/microsoft/OnnxRuntime-UnrealEngine
- https://www.youtube.com/watch?v=oWYphpV6A40
- https://youtu.be/LX1w_etaftY?si=iF1f8-7TtqI_q4VI
- How to use LibTorch and Tokenizers in Unreal Engine 5: https://www.youtube.com/watch?v=dvGWUh4SPBY

03-UnrealEngine/Rendering/AIGC/GaussianSplattingViewer.md (new file, +180 lines)

---
title: GaussianSplattingViewer
date: 2023-12-29 19:35:16
excerpt:
tags:
rating: ⭐
---

# Introduction

- https://github.com/limacv/GaussianSplattingViewer

A viewer application built with GLFW.

# main.py

The main logic lives in main(); roughly (a skeleton of the loop is sketched after this list):

1. Read all of the globals defined earlier in the file.
2. Create the imgui context used to control those variables.
3. Create the GLFW render window **windows**.
4. Call **GlfwRenderer** from **imgui.integrations.glfw** and render into this **windows**.
5. Get tk (tkinter), assign it to root, then call withdraw(); presumably used for the file-picker dialog.
6. Bind the glfw callbacks: set_cursor_pos_callback, set_mouse_button_callback, set_scroll_callback, set_key_callback, set_window_size_callback.
7. Create the **OpenGLRenderer** object from **renderer_ogl** and append it to the global renderer list g_renderer_list.
8. Create the **CUDARenderer** object from **renderer_cuda**; if that succeeds, append it to g_renderer_list as well.
9. Pick the renderer by the previously configured index and assign it to **g_renderer**.
10. Gaussian data setup:
    1. gaussians = util_gau.naive_gaussian(), which creates hard-coded gaussian data.
    2. update_activated_renderer_state(gaussians)
11. Enter the render loop:
    1. Call the per-frame glfw, GlfwRenderer and imgui functions.
    2. Clear the screen.
    3. Update the camera location & intrinsics.
    4. imgui menu logic: tweak parameters, open a PLY point-cloud file. **The loading logic lives in load_ply() in util_gau.py.**
        1. After a file is loaded, update_gaussian_data() and sort_and_update() are each called once.
    5. Camera update.
    6. Zoom update.
    7. If the shading mode changed, update the render mode via set_render_mod().
    8. If the "sort Gaussians" button was clicked, run sort_and_update() once.
    9. If g_auto_sort is checked, run sort_and_update() once.
    10. Save-image button logic.
    11. Call the imgui and GlfwRenderer render functions; glfw swaps the front/back buffers.
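
As a quick orientation for steps 2 to 11, here is a minimal sketch of such a GLFW + pyimgui loop. Window size, title and the renderer variable are placeholders rather than the project's actual names:

```python
import glfw
import imgui
import OpenGL.GL as gl
from imgui.integrations.glfw import GlfwRenderer

def main():
    glfw.init()
    window = glfw.create_window(1280, 720, "viewer", None, None)  # placeholder size/title
    glfw.make_context_current(window)
    imgui.create_context()
    impl = GlfwRenderer(window)        # imgui backend bound to the GLFW window

    # renderer = ...                   # stand-in for g_renderer (OpenGLRenderer / CUDARenderer)

    while not glfw.window_should_close(window):
        glfw.poll_events()
        impl.process_inputs()
        imgui.new_frame()
        gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)

        # renderer.draw()              # draw the gaussians with the active renderer

        imgui.begin("Control")         # menu: parameters, open file, sort buttons, ...
        imgui.end()

        imgui.render()
        impl.render(imgui.get_draw_data())
        glfw.swap_buffers(window)      # swap front/back buffers

    impl.shutdown()
    glfw.terminate()

if __name__ == "__main__":
    main()
```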

## Renderer functions

### renderer_ogl.py

> Render modes: "Gaussian Ball", "Billboard", "Depth", "SH:0", "SH:0~1", "SH:0~2", "SH:0~3 (default)".

`_sort_gaussian`

```python
def _sort_gaussian(gaus: util_gau.GaussianData, view_mat):
    # Transform the gaussian centers into view space
    xyz = gaus.xyz
    xyz_view = view_mat[None, :3, :3] @ xyz[..., None] + view_mat[None, :3, 3, None]
    # Sort by view-space depth so the later blending is applied in depth order
    depth = xyz_view[:, 2, 0]
    index = np.argsort(depth)
    index = index.astype(np.int32).reshape(-1, 1)
    return index
```

`__init__`

1. Load the shaders.
2. Define the quad (billboard) vertex data.
3. Set the Position attribute channel and upload the vertex data into the VAO.
4. Set the render state:
    1. Disable face culling.
    2. Enable blending with gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA, i.e. standard linear alpha interpolation.

`update_gaussian_data`

1. Receives the current gaussian data.
2. Calls the flat() function below and assigns the result to gaussian_data.
3. Uploads vao, buffer_id, gaussian_data and bind_idx to the shader.
4. Calls `util.set_uniform_1int(self.program, gaus.sh_dim, "sh_dim")`.

`sort_and_update`: sorts the gaussians and updates the data in the shader.

`draw`: the draw function.

1. Passes the VAO point-cloud data array to the vertex shader.
2. Gets the number of points.
3. Draws as many quad instances as there are points.

```python
def flat(self) -> np.ndarray:
    # Concatenate the per-gaussian attributes into one flat record:
    # position (3), rotation quaternion (4), scale (3), opacity (1), SH coefficients (sh_dim)
    ret = np.concatenate([self.xyz, self.rot, self.scale, self.opacity, self.sh], axis=-1)
    return np.ascontiguousarray(ret)
```

#### VertexShader

1. From gl_InstanceID and total_dim, compute the start index of this instance's point data.
2. From the start index, fetch g_pos from g_data[] and convert it into screen-space coordinates.
3. Perform early culling: points that are not on screen are pushed to vec4(-100, -100, -100, 1).
4. From the start index, fetch g_rot from g_data[].
5. From the start index, fetch g_scale from g_data[].
6. From the start index, fetch g_opacity from g_data[].
7. Call computeCov3D() => computeCov2D() to compute the covariance matrices (a numpy sketch of the covariance construction follows the SH code block below).

```c++
mat3 cov3d = computeCov3D(g_scale * scale_modifier, g_rot);
vec2 wh = 2 * hfovxy_focal.xy * hfovxy_focal.z;
vec3 cov2d = computeCov2D(g_pos_view,
                          hfovxy_focal.z,
                          hfovxy_focal.z,
                          hfovxy_focal.x,
                          hfovxy_focal.y,
                          cov3d,
                          view_matrix);

// Invert covariance (EWA algorithm)
float det = (cov2d.x * cov2d.z - cov2d.y * cov2d.y);
if (det == 0.0f)
    gl_Position = vec4(0.f, 0.f, 0.f, 0.f);

float det_inv = 1.f / det;
conic = vec3(cov2d.z * det_inv, -cov2d.y * det_inv, cov2d.x * det_inv);

vec2 quadwh_scr = vec2(3.f * sqrt(cov2d.x), 3.f * sqrt(cov2d.z));  // screen space half quad height and width
vec2 quadwh_ndc = quadwh_scr / wh * 2;  // in ndc space
g_pos_screen.xy = g_pos_screen.xy + position * quadwh_ndc;
coordxy = position * quadwh_scr;
gl_Position = g_pos_screen;
```

8. alpha = g_opacity;
9. If render_mod == -1, compute the depth and output 1/Depth as a grayscale color: the "Depth" render mode.

```c++
// Convert SH to color
int sh_start = start + SH_IDX;
vec3 dir = g_pos.xyz - cam_pos;
dir = normalize(dir);
color = SH_C0 * get_vec3(sh_start);

if (sh_dim > 3 && render_mod >= 1)  // 1 * 3
{
    float x = dir.x;
    float y = dir.y;
    float z = dir.z;
    color = color - SH_C1 * y * get_vec3(sh_start + 1 * 3) + SH_C1 * z * get_vec3(sh_start + 2 * 3) - SH_C1 * x * get_vec3(sh_start + 3 * 3);

    if (sh_dim > 12 && render_mod >= 2)  // (1 + 3) * 3
    {
        float xx = x * x, yy = y * y, zz = z * z;
        float xy = x * y, yz = y * z, xz = x * z;
        color = color +
            SH_C2_0 * xy * get_vec3(sh_start + 4 * 3) +
            SH_C2_1 * yz * get_vec3(sh_start + 5 * 3) +
            SH_C2_2 * (2.0f * zz - xx - yy) * get_vec3(sh_start + 6 * 3) +
            SH_C2_3 * xz * get_vec3(sh_start + 7 * 3) +
            SH_C2_4 * (xx - yy) * get_vec3(sh_start + 8 * 3);

        if (sh_dim > 27 && render_mod >= 3)  // (1 + 3 + 5) * 3
        {
            color = color +
                SH_C3_0 * y * (3.0f * xx - yy) * get_vec3(sh_start + 9 * 3) +
                SH_C3_1 * xy * z * get_vec3(sh_start + 10 * 3) +
                SH_C3_2 * y * (4.0f * zz - xx - yy) * get_vec3(sh_start + 11 * 3) +
                SH_C3_3 * z * (2.0f * zz - 3.0f * xx - 3.0f * yy) * get_vec3(sh_start + 12 * 3) +
                SH_C3_4 * x * (4.0f * zz - xx - yy) * get_vec3(sh_start + 13 * 3) +
                SH_C3_5 * z * (xx - yy) * get_vec3(sh_start + 14 * 3) +
                SH_C3_6 * x * (xx - 3.0f * yy) * get_vec3(sh_start + 15 * 3);
        }
    }
}
color += 0.5f;
```
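
Step 7 of the walkthrough relies on computeCov3D()/computeCov2D(). As a math reference only (not the project's code), this is a minimal numpy sketch of the standard 3DGS construction of the 3D covariance, Sigma = R S S^T R^T, from a per-axis scale and a rotation quaternion:

```python
import numpy as np

def quat_to_rotmat(q: np.ndarray) -> np.ndarray:
    """Convert a (w, x, y, z) quaternion into a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def compute_cov3d(scale: np.ndarray, q: np.ndarray, scale_modifier: float = 1.0) -> np.ndarray:
    """Sigma = R S S^T R^T with S = diag(scale), as in the 3DGS paper."""
    S = np.diag(scale * scale_modifier)
    R = quat_to_rotmat(q)
    M = R @ S
    return M @ M.T
```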

#### PixelShader

1. If render_mod == -2, output the current color directly and stop: the "Billboard" render mode.
2. Compute `power = -0.5f * (conic.x * coordxy.x * coordxy.x + conic.z * coordxy.y * coordxy.y) - conic.y * coordxy.x * coordxy.y;` and discard pixels with power > 0.
3. Compute `float opacity = min(0.99f, alpha * exp(power));` and discard pixels with opacity below 1/255 (a small numpy version of this falloff follows below).
4. FragColor = vec4(color, opacity);
5. If render_mod == -3, hide pixels whose opacity is below 0.22: the "Gaussian Ball" render mode.

(The remaining modes "Depth", "SH:0", "SH:0~1", "SH:0~2", "SH:0~3 (default)" are selected in the vertex shader as described above.)
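
To make the role of the conic (the inverted 2D covariance computed in the vertex shader) explicit, here is a small numpy version of the falloff evaluated in steps 2 and 3. Function and variable names are mine, not the project's:

```python
import numpy as np

def gaussian_alpha(conic, d, base_alpha):
    """conic = (a, b, c) packs the inverse 2D covariance [[a, b], [b, c]];
    d is the pixel offset from the splat center (same units as coordxy)."""
    a, b, c = conic
    power = -0.5 * (a * d[0] * d[0] + c * d[1] * d[1]) - b * d[0] * d[1]
    if power > 0.0:                    # outside the valid quadratic form -> discard
        return 0.0
    return min(0.99, base_alpha * np.exp(power))
```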

### renderer_cuda.py

Omitted.

## Related functions

**update_activated_renderer_state**: updates the active renderer's state, including the **gaussian data**, **camera scale & gaussian point-cloud sorting**, camera pose, render resolution, and render mode.

```python
def update_activated_renderer_state(gaus: util_gau.GaussianData):
    g_renderer.update_gaussian_data(gaus)
    g_renderer.sort_and_update(g_camera)
    g_renderer.set_scale_modifier(g_scale_modifier)
    g_renderer.set_render_mod(g_render_mode - 3)
    g_renderer.update_camera_pose(g_camera)
    g_renderer.update_camera_intrin(g_camera)
    g_renderer.set_render_reso(g_camera.w, g_camera.h)
```

03-UnrealEngine/Rendering/AIGC/GaussianViewer.md (new file, +310 lines)

---
title: Untitled
date: 2024-01-01 18:57:57
excerpt:
tags:
rating: ⭐
---

# Introduction

- https://github.com/graphdeco-inria/gaussian-splatting/tree/main/gaussian_renderer

A 3D gaussian viewer built on the SIBR renderer.

# Project structure

- [x] gaussian
    - render - sibr_gaussian
    - apps - SIBR_gaussianViewer_app
- [x] diff-gaussian-rasterization (CUDA)

# render - sibr_gaussian

- picojson: JSON library
- rapidxml: XML library
- **nanoflann**: a C++11 library for building KD-trees over datasets with different topologies: R2, R3 (point clouds), SO(2) and SO(3) (the 2D and 3D rotation groups).

## GaussianSurfaceRenderer

> Mainly used to render ellipsoids; presumably intended for debugging.

### GaussianData

- GaussianData(): receives the gaussian data read on the CPU side through the constructor parameters, then creates and initializes GL buffer objects via glCreateBuffers() / glNamedBufferStorage(), recording them with GLuint indices.
- render: binds the GL buffers to the shader and draws the instanced arrays.

### GaussianSurfaceRenderer

- GaussianSurfaceRenderer(): initializes the related state.
    - Initializes the vertex/fragment shaders.
    - rayOrigin, MVP, alpha_limit and stage variables.
    - Creates the idTexture and colorTexture textures and their filters.
    - Creates the fbo object and depthBuffer, then calls makeFBO() to actually build the FBO.
    - Creates the clear shader.
- makeFBO(): creates the FBO with the idTexture, colorTexture and depthBuffer attachments, used to pass the vertex data through to the fragment shader.
- process(): drives the whole render pass (the blend-state changes of steps 4 to 9 are sketched after this list).
    1. Clear.
    2. If the resolution differs from the FBO size, recreate the FBO.
    3. Get the draw-buffer indices and call glDrawBuffers() to draw into colorTexture and idTexture.
    4. Enable depth testing, disable blending.
    5. Bind the `_paramMVP`, `_paramCamPos`, `_paramLimit`, `_paramStage` shader variables and call GaussianData.render() for an **opaque** pass; the point cloud is drawn as small boxes.
    6. Call glDrawBuffers() to draw into colorTexture.
    7. Disable depth testing, enable transparent blending.
    8. GaussianData.render() again for a **transparent** pass with **additive blending**; the point cloud is drawn as small boxes.
    9. Enable depth testing, disable blending.
    10. Display the result on screen?
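
The opaque-then-additive sequence in steps 4 to 9 boils down to the following GL state changes. This is only an illustrative sketch of that state flow in PyOpenGL syntax, not the viewer's C++ code:

```python
import OpenGL.GL as gl

# Pass 1: opaque boxes into colorTexture + idTexture (depth test on, blending off)
gl.glEnable(gl.GL_DEPTH_TEST)
gl.glDisable(gl.GL_BLEND)
# gaussian_data.render(...)           # opaque pass

# Pass 2: transparent boxes into colorTexture only (depth test off, additive blending)
gl.glDisable(gl.GL_DEPTH_TEST)
gl.glEnable(gl.GL_BLEND)
gl.glBlendFunc(gl.GL_ONE, gl.GL_ONE)  # additive blending
# gaussian_data.render(...)           # transparent pass

# Restore: depth test on, blending off
gl.glEnable(gl.GL_DEPTH_TEST)
gl.glDisable(gl.GL_BLEND)
```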

## GaussianView

Inherits from sibr::ViewBase; used to invoke the renderers and display the result.

### GaussianView

- GaussianView():
    - Initializes the _pointbasedrenderer renderer.
    - Initializes the _copyRenderer renderer.
    - Loads the images and adds them to debug mode (presumably SIBR's built-in multi-view image debug mode).
    - Loads the *.ply point-cloud file via loadPly().
    - CUDA-related setup, presumably the data needed to evaluate the 3D gaussians.
    - Creates the GaussianData pointer gData.
    - Initializes the 3D gaussian renderer object _gaussianRenderer.
    - Creates the GL buffer object imageBuffer.
    - CUDA interpolation/interop operations.
    - Binds the three functors geomBufferFunc, binningBufferFunc and imgBufferFunc, used to adjust the CUDA buffer sizes during rendering (allocating or reclaiming memory).
- onRenderIBR(): the view's render function.
    - Ellipsoids: rendered with _gaussianRenderer->process() (OpenGL).
    - Initial Points: rendered with `_pointbasedrenderer->process()`.
    - Splats: rendered with CudaRasterizer::Rasterizer::forward(), then copied back into the imageBuffer via _copyRenderer->process().
- onGUI(): GUI logic.

The CUDA files are located at `SIBR_viewers\extlibs\CudaRasterizer\CudaRasterizer\cuda_rasterizer\rasterizer_impl.cu` and `forward.cu`; these contain the core logic.

## Shader

This can be understood as rendering each point as an ellipsoid whose color comes from the color in the point-cloud data.

### VertexShader

1. Get the instance index (IndexID).
2. Using the IndexID, fetch the ellipsoid center, alpha, ellipsoidScale and q (rotation quaternion) from the buffer passed into the shader, then convert the rotation into a 3x3 matrix ellipsoidRotation.
3. Get the current vertex index and its position, multiply it by the ellipsoid rotation and add the ellipsoid center to obtain the final WorldPos (the vertex's world-space position).
4. Using the IndexID, fetch the **radiance? radiant intensity?** data from the buffer passed into the shader.
5. Push vertices that do not meet the requirements to vec4(0, 0, 0, 0).
6. Output the vertex data to the fragment shader.

### FragShader

1. Compute the direction vector dir from the camera to the current vertex's world position.
2. Call closestEllipsoidIntersection() to compute the intersection point with the ellipsoid and the normal at that point (a small numpy sketch of this test follows this list).
    1. Compute localRayOrigin and localRayDirection in ellipsoid space.
    2. Solve the ellipsoid-ray intersection equation.
3. Compute the ellipsoid surface facing the camera; if it is the inner surface, multiply the final color by 0.4.
4. Multiply the intersection's world position by the MVP matrix to get its position in camera (view) space.
5. Compute the depth-buffer value.
6. Compute the alpha.
7. Output `out_color = vec4(align * colorVert, a);`, i.e. the colorTexture.
8. Output `out_id = boxID;`, i.e. the idTexture.
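
Step 2 of the fragment shader is a ray-ellipsoid intersection. As a math reference only (names and conventions assumed, not taken from the SIBR shader), the usual approach is to rescale the local ray by the ellipsoid's semi-axes so the problem reduces to a ray-unit-sphere quadratic:

```python
import numpy as np

def closest_ellipsoid_intersection(ray_origin, ray_dir, center, rotation, scale):
    """Nearest intersection with an ellipsoid given by center, 3x3 rotation and per-axis scale."""
    # Move the ray into the ellipsoid's local frame, then into unit-sphere space
    local_o = rotation.T @ (ray_origin - center) / scale
    local_d = rotation.T @ ray_dir / scale
    # Solve |o + t d|^2 = 1  ->  a t^2 + 2 b t + c = 0
    a = np.dot(local_d, local_d)
    b = np.dot(local_o, local_d)
    c = np.dot(local_o, local_o) - 1.0
    disc = b * b - a * c
    if disc < 0.0:
        return False, None, None
    t = (-b - np.sqrt(disc)) / a           # nearest root (front surface)
    point_local = local_o + t * local_d    # point on the unit sphere
    # Back to world space; the ellipsoid normal is the sphere normal divided by the scale
    point = rotation @ (point_local * scale) + center
    normal = rotation @ (point_local / scale)
    return True, point, normal / np.linalg.norm(normal)
```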

# CudaRasterizer

**I have not learned CUDA; what follows is only my guess at what the code does.**

It also helps to understand tile-based rendering (see **Tiled-Based Deferred Rendering (TBDR)**): https://zhuanlan.zhihu.com/p/547943994

- The screen is split into `16 * 16` tiles; each tile is processed independently, and then each pixel within it is processed (a per-pixel compositing sketch follows this list).
- Get the tile's Start and End positions and walk the already-sorted gaussians, evaluating each one.
- Compute the current pixel's transmittance T:
    - 2D covariance => power => alpha.
    - Each iteration computes `float test_T = T * (1 - alpha)`; when test_T becomes tiny (the pixel is opaque) the loop stops.
    - T = test_T.
- Compute the current pixel's color, i.e. the radiance gathered from each direction:
    - `for (int ch = 0; ch < CHANNELS; ch++)`
      `C[ch] += features[collected_id[j] * CHANNELS + ch] * alpha * T;`
- Compute the final contribution:
    - If the current pixel is inside the valid range, write out:
        - `final_T[pix_id]`: the final transmittance.
        - `n_contrib[pix_id]`: the final contributor count.
        - `out_color[ch * H * W + pix_id]`: the final color, `C[ch] + T * bg_color[ch]`.
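
A minimal numpy sketch of this per-pixel front-to-back accumulation (variable names follow the walkthrough above; the real kernel does this per 16x16 tile using shared memory):

```python
import numpy as np

def composite_pixel(alphas, colors, bg_color, t_min=1e-4):
    """Front-to-back compositing for one pixel.
    alphas: (N,) per-gaussian alpha at this pixel, already in depth order.
    colors: (N, 3) per-gaussian color (features)."""
    T = 1.0                          # remaining transmittance
    C = np.zeros(3)
    contributors = 0
    for alpha, color in zip(alphas, colors):
        if alpha < 1.0 / 255.0:      # negligible contribution, skip
            continue
        test_T = T * (1.0 - alpha)
        if test_T < t_min:           # pixel is effectively opaque, stop early
            break
        C += np.asarray(color) * alpha * T
        T = test_T
        contributors += 1
    out_color = C + T * np.asarray(bg_color)
    return out_color, T, contributors
```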

Splitting the screen into tiles:

![[ScreenSpaceTile.jpg]]

This reduces the number of points that have to be traversed:

![[TileRange.jpg|500]]

Each gaussian point acts as a radiant-intensity distribution around its position in space:

![[GS_radiation.jpg]]

Rendering one pixel evaluates the radiant intensity and opacity of every gaussian covering that pixel and composites them. The region between the two horizontal lines in the figure below corresponds to one pixel:

![[一个像素需要计算范围内所有电源的辐射强度.png|500]]

## rasterizer_impl.cu

- getHigherMsb()
- checkFrustum(): tests whether each point is inside the view frustum and returns a bool array.
- duplicateWithKeys()
- identifyTileRanges(): determines the work start and end for every tile.
- markVisible(): marks whether each gaussian point is visible.
- GeometryState::fromChunk(): computes the pointer offsets within a memory chunk and returns the constructed GeometryState struct.
- ImageState::fromChunk(): computes the pointer offsets within a memory chunk and returns the constructed ImageState struct.
- BinningState::fromChunk(): computes the pointer offsets within a memory chunk and returns the constructed BinningState struct.
- forward(): forward rendering of the differentiable gaussian rasterization; see below.
- backward(): produces the gradients needed for optimization and feeds them back to forward(). **Not called in this project.**

The related data structures are defined in rasterizer_impl.h:

```c++
struct GeometryState
{
    size_t scan_size;
    float* depths;
    char* scanning_space;
    bool* clamped;
    int* internal_radii;
    float2* means2D;
    float* cov3D;
    float4* conic_opacity;
    float* rgb;
    uint32_t* point_offsets;
    uint32_t* tiles_touched;

    static GeometryState fromChunk(char*& chunk, size_t P);
};

struct ImageState
{
    uint2* ranges;
    uint32_t* n_contrib;
    float* accum_alpha;

    static ImageState fromChunk(char*& chunk, size_t N);
};

struct BinningState
{
    size_t sorting_size;
    uint64_t* point_list_keys_unsorted;
    uint64_t* point_list_keys;
    uint32_t* point_list_unsorted;
    uint32_t* point_list;
    char* list_sorting_space;

    static BinningState fromChunk(char*& chunk, size_t P);
};
```

### forward()

1. Create the working state: GeometryState, ImageState, minn, maxx.
2. FORWARD::preprocess()
3. Compute the total number of gaussian instances across all tiles.
4. Resize the CUDA buffers according to the number of gaussian instances that need to be rendered.
5. Create the BinningState.
6. duplicateWithKeys()
7. getHigherMsb()
8. Sort the gaussian instances (a sketch of the tile/depth key idea follows this list).
9. cudaMemset(imgState.ranges, 0, tile_grid.x * tile_grid.y * sizeof(uint2));
10. Call identifyTileRanges() to determine the work start and end for every tile.
11. Get the point-cloud color array.
12. FORWARD::render()
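
Steps 6 to 10 implement key-based binning. My understanding is that duplicateWithKeys() emits one 64-bit key per (gaussian, overlapped tile) pair, with the tile id in the high bits and the depth bits in the low bits, so that one global sort groups the gaussians per tile in depth order, and identifyTileRanges() then only has to scan for key boundaries. A hedged numpy sketch of that idea (not the CUDA code itself):

```python
import numpy as np

def build_keys(tile_ids, depths):
    """Tile id in the high 32 bits, the depth's float bit pattern in the low 32 bits
    (monotone only for non-negative depths, which is assumed here)."""
    depth_bits = np.asarray(depths, dtype=np.float32).view(np.uint32).astype(np.uint64)
    return (np.asarray(tile_ids, dtype=np.uint64) << np.uint64(32)) | depth_bits

def sort_and_identify_ranges(tile_ids, depths, gaussian_ids, num_tiles):
    keys = build_keys(tile_ids, depths)
    order = np.argsort(keys)                      # stands in for the GPU radix sort
    point_list = np.asarray(gaussian_ids)[order]  # gaussians grouped per tile, near to far
    sorted_tiles = (keys[order] >> np.uint64(32)).astype(np.int64)
    ranges = np.zeros((num_tiles, 2), dtype=np.int64)
    for i, t in enumerate(sorted_tiles):          # identifyTileRanges: per-tile [start, end)
        if i == 0 or t != sorted_tiles[i - 1]:
            ranges[t, 0] = i
        if i == len(sorted_tiles) - 1 or t != sorted_tiles[i + 1]:
            ranges[t, 1] = i + 1
    return point_list, ranges
```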

## forward.cu

### preprocess()

Initializes each gaussian before rasterization.

- Only gaussians inside the frustum (and inside the box) are processed.
- Transform the point with the projection matrix, normalize it, and store it in a new variable p_proj.
- Compute the 3D covariance matrix cov3D.
- Compute the 2D screen-space covariance matrix cov.
- Invert the covariance.
- Compute the extent in screen space (by finding the eigenvalues of the 2D covariance matrix). Use the extent to compute a bounding rectangle of screen-space tiles that this gaussian overlaps. Quit if the rectangle covers 0 tiles.
- If there is no precomputed color, evaluate the radiance from the spherical harmonics.
- Store the per-gaussian data:
    - `depths[idx]`
    - `radii[idx]`
    - `points_xy_image[idx]`
    - `conic_opacity[idx]`
    - `tiles_touched[idx]`

```c++
// Invert covariance (EWA algorithm)
float det = (cov.x * cov.z - cov.y * cov.y);
if (det == 0.0f)
    return;
float det_inv = 1.f / det;
float3 conic = { cov.z * det_inv, -cov.y * det_inv, cov.x * det_inv };

// Compute extent in screen space (by finding eigenvalues of
// 2D covariance matrix). Use extent to compute a bounding rectangle
// of screen-space tiles that this Gaussian overlaps with. Quit if
// rectangle covers 0 tiles.
float mid = 0.5f * (cov.x + cov.z);
float lambda1 = mid + sqrt(max(0.1f, mid * mid - det));
float lambda2 = mid - sqrt(max(0.1f, mid * mid - det));
float my_radius = ceil(3.f * sqrt(max(lambda1, lambda2)));
float2 point_image = { ndc2Pix(p_proj.x, W), ndc2Pix(p_proj.y, H) };
uint2 rect_min, rect_max;

if (rects == nullptr) // More conservative
{
    getRect(point_image, my_radius, rect_min, rect_max, grid);
}
else // Slightly more aggressive, might need a math cleanup
{
    const int2 my_rect = { (int)ceil(3.f * sqrt(cov.x)), (int)ceil(3.f * sqrt(cov.z)) };
    rects[idx] = my_rect;
    getRect(point_image, my_rect, rect_min, rect_max, grid);
}

if ((rect_max.x - rect_min.x) * (rect_max.y - rect_min.y) == 0)
    return;
```

### render()

All tiles are processed in parallel. Per-block staging data is created for the work: `int collected_id[BLOCK_SIZE]`, `float2 collected_xy[BLOCK_SIZE]`, `float4 collected_conic_opacity[BLOCK_SIZE]`.

It iterates over all batches, accumulating opacity, color, and the contributor count (used to compute averages).

```c++
// Iterate over batches until all done or range is complete
for (int i = 0; i < rounds; i++, toDo -= BLOCK_SIZE)
{
    // End if entire block votes that it is done rasterizing
    int num_done = __syncthreads_count(done);
    if (num_done == BLOCK_SIZE)
        break;

    // Collectively fetch per-Gaussian data from global to shared
    int progress = i * BLOCK_SIZE + block.thread_rank();
    if (range.x + progress < range.y)
    {
        int coll_id = point_list[range.x + progress];
        collected_id[block.thread_rank()] = coll_id;
        collected_xy[block.thread_rank()] = points_xy_image[coll_id];
        collected_conic_opacity[block.thread_rank()] = conic_opacity[coll_id];
    }
    block.sync();

    // Iterate over current batch
    for (int j = 0; !done && j < min(BLOCK_SIZE, toDo); j++)
    {
        // Keep track of current position in range
        contributor++;

        // Resample using conic matrix (cf. "Surface
        // Splatting" by Zwicker et al., 2001)
        float2 xy = collected_xy[j];
        float2 d = { xy.x - pixf.x, xy.y - pixf.y };
        float4 con_o = collected_conic_opacity[j];
        float power = -0.5f * (con_o.x * d.x * d.x + con_o.z * d.y * d.y) - con_o.y * d.x * d.y;
        if (power > 0.0f)
            continue;

        // Eq. (2) from 3D Gaussian splatting paper.
        // Obtain alpha by multiplying with Gaussian opacity
        // and its exponential falloff from mean.
        // Avoid numerical instabilities (see paper appendix).
        float alpha = min(0.99f, con_o.w * exp(power));
        if (alpha < 1.0f / 255.0f)
            continue;
        float test_T = T * (1 - alpha);
        if (test_T < 0.0001f)
        {
            done = true;
            continue;
        }

        // Eq. (3) from 3D Gaussian splatting paper.
        for (int ch = 0; ch < CHANNELS; ch++)
            C[ch] += features[collected_id[j] * CHANNELS + ch] * alpha * T;

        T = test_T;

        // Keep track of last range entry to update this
        // pixel.
        last_contributor = contributor;
    }
}
```

```c++
// All threads that treat valid pixel write out their final
// rendering data to the frame and auxiliary buffers.
if (inside)
{
    final_T[pix_id] = T;
    n_contrib[pix_id] = last_contributor;
    for (int ch = 0; ch < CHANNELS; ch++)
        out_color[ch * H * W + pix_id] = C[ch] + T * bg_color[ch];
}
```

# apps - SIBR_gaussianViewer_app

The app wrapping `gaussianviewer/renderer/GaussianView.hpp`.

03-UnrealEngine/Rendering/AIGC/Sibr相关笔记.md (new file, +79 lines)

---
title: Untitled
date: 2023-12-29 16:20:43
excerpt:
tags:
rating: ⭐
---

# Introduction

- Documentation: https://sibr.gitlabpages.inria.fr
- Code: https://gitlab.inria.fr/sibr/sibr_core
- Sample code
    - [renderer/SimpleView.hpp](https://gitlab.inria.fr/mbenadel/sibr_simple/-/blob/master/renderer/SimpleView.hpp) & [renderer/SimpleView.cpp](https://gitlab.inria.fr/mbenadel/sibr_simple/-/blob/master/renderer/SimpleView.cpp)
    - [renderer/SimpleRenderer.hpp](https://gitlab.inria.fr/mbenadel/sibr_simple/-/blob/master/renderer/SimpleRenderer.hpp) & [renderer/SimpleRenderer.cpp](https://gitlab.inria.fr/mbenadel/sibr_simple/-/blob/master/renderer/SimpleRenderer.cpp)
    - [Simple SIBR Project](https://gitlab.inria.fr/sibr/projects/simple)
    - [SIBR/OptiX integration example](https://sibr.gitlabpages.inria.fr/docs/0.9.6/optixPage.html)
    - [Tensorflow/OpenGL Interop for SIBR](https://sibr.gitlabpages.inria.fr/docs/0.9.6/tfgl_interopPage.html)
- Shaders: your own shaders must be placed in the **renderer/shaders** folder.
- Keywords:
    - Structure-from-Motion (SfM)
    - Multi-View Stereo (MVS)

## Features

https://sibr.gitlabpages.inria.fr/docs/0.9.6/projects.html

- [Sample algorithms & toolboxes](https://sibr.gitlabpages.inria.fr/docs/0.9.6/sibr_projects_samples.html)
    - [Dataset Preprocessing Tools](https://sibr.gitlabpages.inria.fr/docs/0.9.6/sibr_projects_dataset_tools.html) ([https://gitlab.inria.fr/sibr/sibr_core](https://gitlab.inria.fr/sibr/sibr_core))
    - [Unstructured Lumigraph Rendering (ULR)](https://sibr.gitlabpages.inria.fr/docs/0.9.6/ulrPage.html) ([https://gitlab.inria.fr/sibr/sibr_core](https://gitlab.inria.fr/sibr/sibr_core))
- [Our algorithms](https://sibr.gitlabpages.inria.fr/docs/0.9.6/sibr_projects_ours.html)
    - [Exploiting Repetitions for IBR of Facades](https://sibr.gitlabpages.inria.fr/docs/0.9.6/facade_repetitionsPage.html) ([https://gitlab.inria.fr/sibr/projects/facades-repetitions/facade_repetitions](https://gitlab.inria.fr/sibr/projects/facades-repetitions/facade_repetitions)) (paper reference: [http://www-sop.inria.fr/reves/Basilic/2018/RBDD18/](http://www-sop.inria.fr/reves/Basilic/2018/RBDD18/))
    - [Deep Blending for Free-Viewpoint Image-Based Rendering – Scalable Inside-Out Image-Based Rendering](https://sibr.gitlabpages.inria.fr/docs/0.9.6/inside_out_deep_blendingPage.html) ([https://gitlab.inria.fr/sibr/projects/inside_out_deep_blending](https://gitlab.inria.fr/sibr/projects/inside_out_deep_blending)) (Deep Blending for Free-Viewpoint Image-Based Rendering, paper references: [http://www-sop.inria.fr/reves/Basilic/2018/HPPFDB18/](http://www-sop.inria.fr/reves/Basilic/2018/HPPFDB18/), [http://visual.cs.ucl.ac.uk/pubs/deepblending/](http://visual.cs.ucl.ac.uk/pubs/deepblending/); Scalable Inside-Out Image-Based Rendering, paper references: [http://www-sop.inria.fr/reves/Basilic/2016/HRDB16](http://www-sop.inria.fr/reves/Basilic/2016/HRDB16), [http://visual.cs.ucl.ac.uk/pubs/insideout/](http://visual.cs.ucl.ac.uk/pubs/insideout/))
    - [Multi-view relighting using a geometry-aware network](https://sibr.gitlabpages.inria.fr/docs/0.9.6/outdoorRelightingPage.html) ([https://gitlab.inria.fr/sibr/projects/outdoor_relighting](https://gitlab.inria.fr/sibr/projects/outdoor_relighting)) (paper reference: [https://www-sop.inria.fr/reves/Basilic/2019/PGZED19/](https://www-sop.inria.fr/reves/Basilic/2019/PGZED19/))
    - [Image-Based Rendering of Cars using Semantic Labels and Approximate Reflection Flow](https://sibr.gitlabpages.inria.fr/docs/0.9.6/semantic_reflectionsPage.html) ([https://gitlab.inria.fr/sibr/projects/semantic-reflections/semantic_reflections](https://gitlab.inria.fr/sibr/projects/semantic-reflections/semantic_reflections)) (paper reference: [http://www-sop.inria.fr/reves/Basilic/2020/RPHD20/](http://www-sop.inria.fr/reves/Basilic/2020/RPHD20/))
    - [Depth Synthesis and Local Warps for plausible image-based navigation - Bayesian approach for selective image-based rendering using superpixels](https://sibr.gitlabpages.inria.fr/docs/0.9.6/spixelwarpPage.html) ([https://gitlab.inria.fr/sprakash/spixelwarp](https://gitlab.inria.fr/sprakash/spixelwarp)) (Depth Synthesis and Local Warps for plausible image-based navigation, paper reference: [http://www-sop.inria.fr/reves/Basilic/2013/CDSD13/](http://www-sop.inria.fr/reves/Basilic/2013/CDSD13/); Bayesian approach for selective image-based rendering using superpixels, paper reference: [http://www-sop.inria.fr/reves/Basilic/2015/ODD15/](http://www-sop.inria.fr/reves/Basilic/2015/ODD15/))
    - [Glossy Probe Reprojection for Interactive Global Illumination](https://sibr.gitlabpages.inria.fr/docs/0.9.6/synthetic_ibrPage.html) ([https://gitlab.inria.fr/sibr/projects/glossy-probes/synthetic_ibr](https://gitlab.inria.fr/sibr/projects/glossy-probes/synthetic_ibr)) (paper reference: [http://www-sop.inria.fr/reves/Basilic/2020/RLPWSD20/](http://www-sop.inria.fr/reves/Basilic/2020/RLPWSD20/))
- [Other algorithms](https://sibr.gitlabpages.inria.fr/docs/0.9.6/sibr_projects_others.html)
    - [Soft3D](https://sibr.gitlabpages.inria.fr/docs/0.9.6/soft3dPage.html) ([https://gitlab.inria.fr/sibr/projects/soft3d](https://gitlab.inria.fr/sibr/projects/soft3d)) (Soft 3D Reconstruction for View Synthesis, paper reference: [https://ericpenner.github.io/soft3d/](https://ericpenner.github.io/soft3d/))
- [Integrated toolboxes](https://sibr.gitlabpages.inria.fr/docs/0.9.6/sibr_projects_toolbox.html)
    - [Core framework of FRIBR](https://sibr.gitlabpages.inria.fr/docs/0.9.6/fribrFrameworkPage.html) ([https://gitlab.inria.fr/sibr/fribr_framework](https://gitlab.inria.fr/sibr/fribr_framework))
    - [SIBR/OptiX integration example](https://sibr.gitlabpages.inria.fr/docs/0.9.6/optixPage.html) ([https://gitlab.inria.fr/sibr/projects/optix](https://gitlab.inria.fr/sibr/projects/optix))
    - [Simple SIBR Project](https://sibr.gitlabpages.inria.fr/docs/0.9.6/simplePage.html) ([https://gitlab.inria.fr/sibr/projects/simple](https://gitlab.inria.fr/sibr/projects/simple)) (A simple sample SIBR project for you to base your projects on)
    - [Tensorflow/OpenGL Interop for SIBR](https://sibr.gitlabpages.inria.fr/docs/0.9.6/tfgl_interopPage.html) ([https://gitlab.inria.fr/sibr/tfgl_interop](https://gitlab.inria.fr/sibr/tfgl_interop)) (Tensorflow GL interoperability dependencies and cuda code)

Project layout:

- `renderer/`: contains your library code and configuration
- `preprocess/`: contains your preprocesses listed by directory, and the configuration CMake file to list them
- `apps/`: contains your apps listed by directory, and the configuration CMake file to list them
- `documentation/`: contains additional doxygen documentation

# Creating SIBR datasets

**SIBR** defines its own dataset format.

Native SIBR datasets can be created with **RealityCapture** or **Colmap**; compatible datasets can also be built with an SfM or MVS pipeline by following the documentation.

- [How to create a dataset from Reality Capture](https://sibr.gitlabpages.inria.fr/docs/0.9.6/HowToCapreal.html)
- [How to create a dataset from Colmap](https://sibr.gitlabpages.inria.fr/docs/0.9.6/HowToColmap.html)

Official sample dataset: https://repo-sam.inria.fr/fungraph/sibr-datasets/museum_front27_ulr.zip

# Running the samples

Download the prebuilt binaries.

>SIBR_ulrv2_app_rwdi.exe --path C:/Downloads/museum_front27_ulr/museum_front27/sibr_cm sibr -museum-front