Video Stream Synthesis for Internet Censorship Circumvention
The machine used for development is a 32-bit Ubuntu 14.04 LTS VM.
Start by reading how we set up a virtual camera device with v4l2loopback and a video pipeline (using ffmpeg) to feed it.
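As a concrete sketch of that setup (the device number, resolution, and test source are assumptions for illustration, not taken from the original configuration), loading v4l2loopback and feeding the resulting virtual camera with ffmpeg might look like:

```shell
# Load the v4l2loopback kernel module, creating /dev/video1 as a virtual
# camera (video_nr=1 is an assumption; pick a free device number).
sudo modprobe v4l2loopback video_nr=1 card_label="VirtualCam"

# Feed the virtual device. A lavfi test source is used here as a stand-in;
# a real pipeline would replace it with actual video content.
ffmpeg -f lavfi -i testsrc=size=640x480:rate=30 \
       -pix_fmt yuv420p -f v4l2 /dev/video1
```

Any V4L2-capable consumer (a browser, another ffmpeg instance) can then open /dev/video1 as if it were a physical webcam.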
Evaluation
Implementation
For calibration purposes, the evaluation should produce consistent results between plain ffmpeg baseline runs and DS-generated baselines. Matching the frame rate on both pipelines is essential for the results to be comparable.
The issue with this setup is twofold:
Solution: run the Snowmix pipeline at 30 fps, matching the benchmark's frame rate.
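A minimal sketch of pinning the baseline side to 30 fps (the input source, codec, and file names are placeholders, not the original benchmark setup):

```shell
# Baseline ffmpeg run forced to 30 fps so it matches the Snowmix pipeline.
# The lavfi test input and output name are stand-ins for the real benchmark.
ffmpeg -f lavfi -i testsrc=size=640x480:rate=30 -r 30 \
       -c:v libx264 -t 60 baseline.mp4

# On the Snowmix side, the frame rate is configured in the ini file; the
# exact directive below is an assumption based on typical Snowmix configs:
#   system frame rate 30
```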
Problem: "CVideoOutput out of shm buffers. Setting m_got_no_free_buffers flag." This happens when the output script cannot keep up with flushing Snowmix's output. Although the output script pipeline was emitting at the increased 30 fps rate, it was somehow unable to deliver frames to the consumer on time. The output script itself was simple and fast, writing straight to the pipe connecting it to ffmpeg, so the problem had to reside in the ultimate ffmpeg consumer. By reading the input pipe with the "-re" option, ffmpeg was supposed to emulate a live video source, but the FFmpeg documentation seems contradictory on this point: "-re" works as intended when the input is an already recorded file whose encoded frame rate ffmpeg can respect. When reading from a pipe, as we were doing, ffmpeg should instead be allowed to read the input as fast as possible. Dropping "-re" indeed sped up the consuming process (which remains bounded by Snowmix's output speed) and allowed Snowmix to work as intended at 30 fps.
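The difference can be summarized with the two invocations below (the pipe path, frame geometry, and output name are illustrative assumptions, not the project's actual values):

```shell
# Problematic: "-re" throttles reading to the native frame rate, which is
# intended for pre-recorded files. On a live pipe it let frames back up,
# exhausting Snowmix's shm output buffers.
ffmpeg -re -f rawvideo -pix_fmt yuv420p -s 640x480 -r 30 \
       -i /tmp/feed.fifo -c:v libx264 -f mpegts output.ts

# Fix: drop "-re" so ffmpeg consumes the pipe as fast as it can; the pace
# is then naturally bounded by Snowmix emitting at 30 fps.
ffmpeg -f rawvideo -pix_fmt yuv420p -s 640x480 -r 30 \
       -i /tmp/feed.fifo -c:v libx264 -f mpegts output.ts
```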