Commit 324cd7c

feat: pipeline the computing tasks (#119)

* feat: pipeline the computing tasks

fix #110

timerring authored Dec 10, 2024
1 parent 718f377 commit 324cd7c

Showing 5 changed files with 51 additions and 38 deletions.
56 changes: 28 additions & 28 deletions README.md
@@ -22,6 +22,7 @@
## 2. Major features

<!-- - **Fast**: ~~While recording, you can optionally start an upload process for the danmaku-free version, so it goes live on the platform as soon as the stream ends~~. (The danmaku-free version is on hold and will ship in the next release after maintenance.) -->
- **Fast**: videos are processed in a `pipeline`; ideally the recording lags the live stream by less than half an hour, so the recording can go online before the stream even ends!
- **Multi-room**: record video and danmaku files from multiple live rooms at the same time (including regular danmaku, paid danmaku, and gift/guard messages).
- **Small footprint**: locally uploaded videos are deleted automatically, saving as much space as possible.
- **Template-based**: no complex configuration, works out of the box; ( :tada: NEW) popular related tags are fetched automatically via the Bilibili search-suggestion API.
@@ -34,24 +35,23 @@

```mermaid
graph TD
User((User))---->startRecord(Start recording)
User((User))--record-->startRecord(Start recording)
startRecord(Start recording)--save video and subtitle files-->videoFolder[(Video folder)]
User((User))---->startUploadNoDanmaku(Start danmaku-free video upload)
videoFolder[(Video folder)]<--real-time upload-->startUploadNoDanmaku(Start danmaku-free video upload)
User((User))---->startScan(Start scanning the Video folder)
User((User))--scan-->startScan(Start scanning the Video folder)
videoFolder[(Video folder)]<--scan every two minutes-->startScan(Start scanning the Video folder)
startScan --check for danmaku-->ifDanmaku{Check}
startScan <--video file--> whisper[whisper ASR model]
whisper[whisper ASR model] --generate subtitles-->parameter[Query video resolution]
subgraph Start a new process
parameter[Query resolution] -->ifDanmaku{Check}
ifDanmaku -->|has danmaku| DanmakuFactory[DanmakuFactory]
ifDanmaku -->|no danmaku| whisper[whisper ASR model]
DanmakuFactory[DanmakuFactory] --convert danmaku automatically--> whisper[whisper ASR model]
whisper[whisper ASR model] --generate subtitles--> ffmpeg1[ffmpeg]
ffmpeg1[ffmpeg] --render subtitles--> uploadQueue[(Upload queue)]
ifDanmaku -->|no danmaku| ffmpeg1[ffmpeg]
DanmakuFactory[DanmakuFactory] --convert danmaku to match resolution--> ffmpeg1[ffmpeg]
end
ffmpeg1[ffmpeg] --render danmaku and subtitles--> uploadQueue[(Upload queue)]
User((User))---->startUpload(Start danmaku-version video upload process)
startUpload(Start danmaku-version video upload process) <--scan queue and upload videos--> uploadQueue[(Upload queue)]
User((User))--upload-->startUpload(Start video upload process)
startUpload(Start video upload process) <--scan queue and upload videos--> uploadQueue[(Upload queue)]
```


@@ -92,7 +92,15 @@ pip install -r requirements.txt
# Record the project root directory
./setPath.sh && source ~/.bashrc
```
The following features are enabled by default. If you have no GPU, skip straight to section 4.2 and set the `GPU_EXIST` parameter in `src/allconfig.py` to `False`

Most of the project's parameters live in `src/allconfig.py`; the relevant ones are:
+ GPU_EXIST — whether a GPU is present (judged mainly by `nvidia-smi` showing the driver and the `CUDA` check passing)
+ MODEL_TYPE — rendering mode:
  + `pipeline` mode (default): currently the fastest mode; requires GPU support. Ideally set the segment length in `blrec` to half an hour or less. ASR recognition and rendering run in parallel, and video segments are uploaded as multi-part episodes.
  + `append` mode: basically the same as above, but ASR recognition and rendering run serially; expect it to be roughly 25% slower than pipeline.
  + `merge` mode: wait until all recording is finished, then merge, recognize, and render; uploads are always the complete recording.
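The three modes differ only in how the ASR and rendering steps are scheduled per segment. A minimal symbolic sketch of that scheduling (illustrative only, not the repo's actual code; `asr`/`render` are placeholder step names, and steps grouped in one inner list may run in parallel):

```python
def schedule(segments, model_type):
    """Return processing steps; steps sharing an inner list may overlap."""
    if model_type == "merge":
        # Wait for everything, then treat the recording as one unit.
        whole = "+".join(segments)
        return [[f"asr({whole})"], [f"render({whole})"]]
    if model_type == "append":
        # Serial: each segment is fully processed before the next starts.
        steps = []
        for seg in segments:
            steps.append([f"asr({seg})"])
            steps.append([f"render({seg})"])
        return steps
    # "pipeline": rendering of segment N overlaps ASR of segment N+1.
    steps = [[f"asr({segments[0]})"]]
    for prev, nxt in zip(segments, segments[1:]):
        steps.append([f"render({prev})", f"asr({nxt})"])
    steps.append([f"render({segments[-1]})"])
    return steps
```

This also shows why short `blrec` segments help the pipeline mode: the shorter each segment, the sooner rendering of one part can overlap ASR of the next.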

The following features are enabled by default. If you have no GPU, skip straight to section 4.2, set the `GPU_EXIST` parameter in `src/allconfig.py` to `False`, and change `MODEL_TYPE` to `merge` or `append`
To use automatic subtitle recognition and rendering, the model parameters and download links are as follows; note that your GPU memory must exceed the required VRAM:

| Size | Parameters | Multilingual model | Required VRAM |
@@ -110,40 +118,32 @@ pip install -r requirements.txt
### 4.2 biliup-rs 登录

First log in to Bilibili following [biliup-rs](https://github.com/biliup/biliup-rs), then copy the generated `cookies.json` file to the project root directory
First log in to Bilibili following [biliup-rs](https://github.com/biliup/biliup-rs); the login script is in `src/upload/biliup`, and the generated `cookies.json` can simply stay in that folder

### 4.3 Start automatic recording

-`startRecord.sh`启动脚本中设置端口 `port`
-`settings.toml` 中设置视频存放目录、日志目录,也可在 blrec 前端界面即`http://localhost:port` 中进行设置。详见 [blrec](https://github.com/acgnhiki/blrec)
-`record.sh`启动脚本中设置端口 `port`
-`settings.toml` 中设置视频存放目录、日志目录,也可启动后在 blrec 前端界面即`http://localhost:port` 中进行设置。详见 [blrec](https://github.com/acgnhiki/blrec)

Then run
Start blrec

```bash
./record.sh
```
### 4.4 Start automatic upload
Uploads of the danmaku version and the danmaku-free version are independent; they can run at the same time or be enabled separately.

#### 4.4.1 Automatic upload of the danmaku-free version (WIP, coming in the next release; skip this step for now)

- The upload configuration file is `upload_config.json`; add entries following the provided example.
- Use the room number (4+ digits) of the corresponding live room, in **string format**, as the top-level key name.
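As a hypothetical illustration of that key format (the room number `21452505` and the inner fields are made up for the example; only the string-keyed room number rule comes from the text above, not the real schema of `upload_config.json`):

```python
import json

# Hypothetical config snippet: the top-level key is the live room's
# number (4+ digits) written as a JSON string; inner fields are placeholders.
example = '{"21452505": {"note": "placeholder fields, not the actual schema"}}'

config = json.loads(example)
for room_id in config:
    # JSON object keys are always strings, so a numeric-looking key
    # like "21452505" satisfies the string-format requirement.
    assert isinstance(room_id, str) and room_id.isdigit() and len(room_id) >= 4
```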

#### 4.4.2 Rendering and automatic upload of the danmaku version

> Please make sure you have completed step 4.1 and downloaded and placed the model files.
> Otherwise, set the `GPU_EXIST` parameter in `src/allconfig.py` to `False`
##### Start the danmaku rendering process
#### Start the scan-and-render process

Run the following command to detect recorded videos, automatically merge segments, and carry out danmaku conversion, subtitle recognition, and rendering:

```bash
./scan.sh
```

##### Start the automatic upload process
#### Start the automatic upload process

```bash
./upload.sh
2 changes: 1 addition & 1 deletion settings.toml
@@ -9,7 +9,7 @@ enable_recorder = true
[output]
path_template = "{roomid}/{roomid}_{year}{month}{day}-{hour}-{minute}-{second}"
filesize_limit = 0
duration_limit = 0
duration_limit = 1200
out_dir = "./Videos"

[logging]
3 changes: 3 additions & 0 deletions src/allconfig.py
@@ -14,3 +14,6 @@
BURN_LOG_PATH = os.path.join(BILIVE_DIR, 'logs', 'burningLog', f'burn-{datetime.now().strftime("%Y%m-%d-%H%M%S")}.log')
MERGE_LOG_PATH = os.path.join(BILIVE_DIR, 'logs', 'mergeLog', f'merge-{datetime.now().strftime("%Y%m-%d-%H%M%S")}.log')
GPU_EXIST=True
MODEL_TYPE = "pipeline"
# MODEL_TYPE = "append"
# MODEL_TYPE = "merge"
11 changes: 9 additions & 2 deletions src/burn/only_render.py
@@ -3,10 +3,11 @@
import argparse
import os
import subprocess
from src.allconfig import GPU_EXIST, SRC_DIR
from src.allconfig import GPU_EXIST, SRC_DIR, MODEL_TYPE
from src.burn.generate_danmakus import get_resolution, process_danmakus
from src.burn.generate_subtitles import generate_subtitles
from src.burn.render_video import render_video
import multiprocessing

def normalize_video_path(filepath):
"""Normalize the video path to upload
@@ -34,7 +35,8 @@ def render_video_only(video_path):

# Generate the srt file via whisper model
if GPU_EXIST:
generate_subtitles(original_video_path)
if MODEL_TYPE != "pipeline":
generate_subtitles(original_video_path)

# Burn danmaku or subtitles into the videos
render_video(original_video_path, format_video_path, subtitle_font_size, subtitle_margin_v)
@@ -52,6 +54,11 @@ def render_video_only(video_path):
with open(f"{SRC_DIR}/upload/uploadVideoQueue.txt", "a") as file:
file.write(f"{format_video_path}\n")

def pipeline_render(video_path):
generate_subtitles(video_path)
burn_process = multiprocessing.Process(target=render_video_only, args=(video_path,))
burn_process.start()

if __name__ == '__main__':
# Read and define variables
parser = argparse.ArgumentParser(description='Danmaku burns')
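The `pipeline_render` added in this diff gains its speed by letting the calling process move on to the next segment's ASR while a child process burns the current one. A standalone sketch of that pattern, assuming fork-style `multiprocessing` (the work functions are stubs, not the repo's real whisper/ffmpeg code; the `log` queue is added here only to make the behavior observable):

```python
import multiprocessing

def generate_subtitles(path, log):
    # Stub for the whisper ASR step; in the real code this blocks the parent.
    log.put(("asr", path))

def render_video_only(path, log):
    # Stub for danmaku conversion + ffmpeg burning; runs in a child process.
    log.put(("render", path))

def pipeline_render(path, log):
    generate_subtitles(path, log)  # parent waits only for ASR
    proc = multiprocessing.Process(target=render_video_only, args=(path, log))
    proc.start()                   # rendering overlaps the next segment's ASR
    return proc

if __name__ == "__main__":
    log = multiprocessing.Queue()
    children = [pipeline_render(f"seg{i}.mp4", log) for i in range(2)]
    for proc in children:
        proc.join()
    events = [log.get() for _ in range(4)]
    assert events.count(("asr", "seg0.mp4")) == 1
    assert ("render", "seg1.mp4") in events
```

Note that the child process is started but never joined inside `pipeline_render` itself, which is exactly what lets the scan loop proceed immediately; the demo joins them only to collect the events.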
17 changes: 10 additions & 7 deletions src/burn/scan.py
@@ -2,10 +2,10 @@

import os
from pathlib import Path
from src.burn.only_render import render_video_only
from src.burn.only_render import render_video_only, pipeline_render
from src.burn.render_and_merge import render_and_merge
import time
from src.allconfig import VIDEOS_DIR
from src.allconfig import VIDEOS_DIR, MODEL_TYPE

def process_folder_merge(folder_path):
# Don't process the recording folder
@@ -42,16 +42,19 @@ def process_folder_append(folder_path):
mp4_files.sort()
for file in mp4_files:
print(f"Processing {file}...", flush=True)
render_video_only(file)
if MODEL_TYPE == "pipeline":
pipeline_render(file)
else:
render_video_only(file)

if __name__ == "__main__":
room_folder_path = VIDEOS_DIR
while True:
for room_folder in Path(room_folder_path).iterdir():
if room_folder.is_dir():
# This function use the merge mode to upload videos
# process_folder_merge(room_folder)
# This function use the append mode to upload videos
process_folder_append(room_folder)
if MODEL_TYPE == "merge":
process_folder_merge(room_folder)
else:
process_folder_append(room_folder)
print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} There is no file recorded. Check again in 120 seconds.", flush=True)
time.sleep(120)
