Pixelated macro blocking issues

When I play the HLS output on a VideoJS player, I used to observe a pixelated view (a view where a few pixels / macroblocks are yet to be replaced by new frames, and the picture is distorted). I used a transport stream carrying H.264-encoded streams. This artifact is also called macroblocking. The setup is the same as what I have been using: an adaptive transport stream source over the internet, a converter to RTSP, and an HLS encoder using ffmpeg. The possible causes are:
a. The client is unable to process the frames
b. The HLS encoder is not able to process the frames
c. The RTSP streamer is not able to process.
Client-side processing can be the reason if the HLS fragments produced on the server are of proper quality.
It is a little tricky to isolate the problem between the HLS encoder and the RTSP streamer. A good first check is to play the adaptive transport stream source directly; this is most likely to work, since the players have the capability to buffer the content.
It is also possible to use an RTSP client to play the RTSP stream. The player should be able to play it clearly for the most part; the RTSP delivery system is more robust than the HLS stream and its buffering ensures better quality.
I checked the ffmpeg logs and found the following thrown out constantly, at regular intervals.

{ message: 'ffmpeg_xxx:Err[rtsp @ 000001b459c6e640] max delay reached. need to consume packet',
{ message: 'ffmpeg_xxx:Err[rtsp @ 000001b459c6e640] RTP: missed 7 packets',
{ message: 'ffmpeg_xxx:Err[h264 @ 000001b45b81fac0] left block unavailable for requested intra mode',
{ message: 'ffmpeg_xxx:Err[h264 @ 000001b45b81fac0] error while decoding MB 0 29, bytestream 13200',
{ message: 'ffmpeg_xxx:Err[h264 @ 000001b45b81fac0] concealing 184 DC, 184 AC, 184 MV errors in I frame',
{ message: 'ffmpeg_xxx:Err[h264 @ 000001b459c6ab80] co located POCs unavailable',

All these indicate that there is an issue in transcoding the RTSP stream due to packet loss, or in decoding the macroblocks. DC and AC are coefficients of the Discrete Cosine Transform used in the underlying video compression, and MV refers to motion vectors. The issues can be listed as:
a. Issues with I-frames
b. Issues with P-frames
c. Issues with B-frames
Of these, B-frames will not create much of an issue. If I-frames are missing for prolonged periods, one can get buffering indications on the client, or the video can get stuck. Predicted (P) frames work on existing I-frames, and if we lose data we are likely to see macroblocks which are never updated.

There are a few more items we can look into:
a. Change the bitrate with respect to quality
b. Change the buffer used while processing; increasing it helps
c. Split the transcoding into multiple stages instead of performing it all at once.
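As a sketch of (b), the ffmpeg RTSP demuxer exposes options for transport and packet reordering that can be raised to tolerate a lossy input. The helper below just builds an option array in the fluent-ffmpeg style used later in these posts; the values are starting points to tune, not recommendations:

```javascript
// Sketch: input options that make ffmpeg's RTSP ingest more tolerant of
// packet reordering and loss. The option names are real ffmpeg options;
// the values here are illustrative starting points.
function rtspInputOptions() {
  return [
    '-rtsp_transport tcp',      // use TCP to avoid UDP packet loss entirely
    '-max_delay 5000000',       // demuxer reorder delay, in microseconds
    '-reorder_queue_size 1024'  // RTP reorder queue (applies to UDP input)
  ];
}
```

With fluent-ffmpeg these would be passed to the command as `.inputOptions(rtspInputOptions())` before the output options are added.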

In my case I still had a pixelated view, so I decided to make sure the RTSP streamer does not read from the internet but from a local file instead. This definitely improved the quality. In effect, the in-memory buffer alone is not sufficient; parsing content from the local file system helps.

AWS CloudFront using NodeJS server as an origin for transcoded HLS content

I had earlier mentioned how an IIS server was set up as a distribution point for my transcoding. There was a standalone application, loaded by Electron, that loads an HTML UI; the embedded JavaScript is used to control and monitor the transcoding service. My design requirements have changed since then: the next step is to expose this as a service which can be integrated by other applications. I therefore chose NodeJS as the server which will expose these services. The component functions are listed below:
a. NodeJS server exposes the control commands API.
b. The server also serves static HTML pages along with embedded JavaScript that make up the client media player. If you recall, this is a videojs application; this helps us with client-side advertisement insertion as well.
Our NodeJS transcoding application uses fluent-ffmpeg, which in turn uses ffmpeg, to perform the transcoding service. In the earlier version, where the Electron application was used, I had used a JavaScript Process to invoke the NodeJS application, and this had issues: sometimes it failed to stop the ffmpeg process. This time the issue was solved by storing the command handle as an ordinary application variable in the NodeJS server script. Here is a way to get the handle to the FfmpegCommand. A point to note is that if you append other event handlers to the FfmpegCommand, you can lose the handle.

var interval = setInterval(function () {
  ff_Xcode_cmd = new ffmpeg(strMediaURL, {}).addOptions([
    // ... fill in your values
  ]);

  ff_Xcode_cmd.output(outputFileName);
  ff_Xcode_cmd
    .on('start', function (command) {
      logger.info('Print the command used to start: ' + command);
      // 'command' is not a handle, but the string used to start ffmpeg.
    })
    .on('stderr', function (stderr) {
      logger.error('Ffmpeg processing errors: ' + stderr);
    })
    .on('progress', function () {
      logger.info('FfmpegCommand connected');
      clearInterval(interval); // stop retrying once frames are flowing
    })
    .on('end', hlsStreamingFinished)
    .run();
}, 5000);

Here is the function to stop it.

var stop = () => {
  logger.info('ffmpeg stop');
  setTimeout(function () {
    // Attach an error handler first: kill() makes ffmpeg exit with a signal,
    // which fluent-ffmpeg reports through the 'error' event.
    ff_Xcode_cmd.on('error', function () {
      console.log('Ffmpeg has been killed');
    });

    ff_Xcode_cmd.kill();
  }, 10000);
}

Another important point to note is that stopping the process is as simple as calling kill() on the command handle. When I first tried this, it was killing the server process as well; hence I had to move the kill out of the interval loop, after which it worked as expected.
If you are setting up a NodeJS server and would like to ramp up quickly on how to do it, please go through the tutorial by Andrew Mead. I found it very useful.

The next step is to create an AWS distribution. AWS CloudFront is used here, and it requires an origin. Normally one can use AWS's own media packaging components to perform the transcoding in the cloud and provide the output to CloudFront. Here, instead, we would like CloudFront to cache the video content served by our NodeJS transcoding server, so I have set up an HTTP origin that points to the endpoints of our transcoding server.

There are a few more points to take care of: to ensure that cross-origin requests are allowed, the following lines need to be inserted in the server scripts.
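The original CORS lines are not reproduced above. As a sketch, assuming a plain Node.js http server in front of the HLS content, the usual approach is to set the CORS response headers on every reply; the header values here are illustrative and should be tightened for production:

```javascript
// Sketch: add permissive CORS headers to a response (values are illustrative).
function addCorsHeaders(res) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
  return res;
}

// Usage with a plain http server:
// var http = require('http');
// var server = http.createServer(function (req, res) {
//   addCorsHeaders(res);
//   // ... serve the playlist / segments ...
// });
```

CloudFront also needs to be told to forward the Origin header and cache the CORS headers, otherwise the headers set at the origin never reach the player.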

Analyze Video Strobing, Flickering, Glitches and Frame Drops with WPR and GPUView

When debugging video rendering issues, improving performance, or understanding the capacity the hardware can support, it is good to know some of these video rendering terms so you can search for the appropriate issue. On my system, the video was blacking out for a moment at regular and very frequent intervals. This is called video strobing; I have found some videos referring to it as video flickering. Here is a list of terms one will commonly encounter:
a. Video Strobing
b. Video Flickering
c. Video glitching.

Frame drops can be a reason for video strobing. Here is a YouTube video that shows what strobing looks like: https://www.youtube.com/watch?v=abqfVYiNbJg. Look at the effect between 15 and 30 seconds. In this section we will try to relate our reasoning to some observations with tools on Windows.

I used 2 tools here,
a. Windows Performance Recorder – The tool to record the performance data
b. GPUView – the tool to view the performance data

Both of these are available if you download the Windows ADK (Assessment and Deployment Kit). In this case I only installed the performance monitoring module.

There are a few tips that worked for me when I tried to analyze:
a. Let the recording run for a short period of time, say 5 seconds; otherwise we get lost trying to relate the data to the view.
b. When bringing the trace into the view, expand it until you are able to see the VSync lines; you can toggle these using F8.
c. Start from the load on the GPU and look at the loading and scheduling.
d. The time scale at the top allows us to understand the CPU queue time and how long the packets take to get scheduled; VSync indicates the need for a frame in that interval.
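Tip (a) can be scripted. As a sketch, assuming the WPR command line with its built-in GPU profile, run from an elevated prompt (the trace file name is arbitrary):

```bat
:: Record roughly 5 seconds of GPU activity with Windows Performance Recorder
wpr -start GPU
timeout /t 5
wpr -stop gpu_trace.etl
:: Then open gpu_trace.etl in GPUView to inspect queues against the VSync lines
```

Keeping the capture window this short makes it much easier to line up the CPU queue packets with individual VSync intervals in GPUView.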

“The time for which CPU actively processes the app workload for one frame. If the app is running smoothly, all the processing required for one frame happens within one v-sync interval. With the monitor refresh rate of 60Hz, this comes to 16ms per frame. If CPU time/frame is greater than 16ms, CPU optimizations might be needed to produce a glitch free app experience.”

Live Streaming and Client Side Ad Insertions with Fluent Ffmpeg, Videojs

Live video players tuning in to advertisements by switching URLs on the client side have been around for long, and the mechanism is stable on today's online video platforms. A newer mechanism, server-side advertisement insertion, is rated better than client-side insertion when it comes to the certainty of advertisement delivery; for more on that topic please refer to my first two blogs. In this section, I discuss a setup for live video delivery with client-side advertisement insertion that can be used for small-scale live video broadcasting.

The cloud providers today (AWS, Anvato, Azure) have ready-made APIs for this, and a streaming service can be set up within hours once the environment is available, but this article focuses on broadcasters who would like to maintain on-premise streaming services. This helps small broadcasters get their complete requirements verified before they deploy to the cloud and get billed.

The components that make up such a system are: a live streaming source; a packaging tool that can convert these streams into adaptive transport streams; advertisement servers that have information on the advertisements on the current media timeline; and a client to render them. We also need a DRM component to ensure that only subscribers can view the content; I have not integrated with any DRM yet.

A live streaming source can be a camera, or any RTSP streaming agent. The intent of choosing an RTSP source is to guarantee low latency and better quality, since we will have a controlled network.

Here is a simple flow of live video established using an application with an RTSP source, the fluent-ffmpeg module of Node.js, and a Videojs client (with HLS and ad plugins). This has to be integrated with the advertisement servers, or a database, to fetch the cue points. We then insert CUE-OUT and CUE-IN tags in our HLS playlist file based on the media timeline of the segments, and we have a live playlist that the Videojs client can use to play live video with a client-side advertisement switch.

From my experience, here are a few points to note:
a. fluent-ffmpeg can be installed using npm; ensure the ffmpeg tool is in the system path. We can set it in our node code, or in the system environment.
b. We need to choose the right ffmpeg build to ensure we have the plugins for our transcoding requirements; we might have to make a custom build if required.
c. Though the input can be an HLS or RTSP stream, it is advisable to feed the RTSP stream over a high-bandwidth network, preferably an intranet, so that we do not have any streaming issues caused by the input.
d. Currently this tool writes the playlist into a file; it would be nice to have the option to write into a database, or an in-memory DB or in-memory stream.
e. We need to modify this file in real time to make the CUE insertions. The logic of CUE start and end can be seen in the videojs-contrib-hls README document. This is not really a standard, but many users have adopted the format.
f. The videojs component has samples which can be modified to point at our streaming source URLs.
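As a sketch of (e), assuming the CUE-OUT/CUE-IN tag convention from the videojs-contrib-hls README, a function that splices a cue pair around a range of segments in a playlist could look like this (the function name and segment indexing are illustrative, not from the original post):

```javascript
// Sketch: insert #EXT-X-CUE-OUT / #EXT-X-CUE-IN around an ad break.
// adStart/adEnd are zero-based indices of the first and last #EXTINF
// entries of the break; durationSecs is the ad break length.
function insertCues(playlist, adStart, adEnd, durationSecs) {
  var out = [];
  var segIndex = -1;
  var cueInPending = false;
  playlist.split('\n').forEach(function (line) {
    if (line.indexOf('#EXTINF') === 0) {
      segIndex++;
      if (segIndex === adStart) out.push('#EXT-X-CUE-OUT:' + durationSecs);
      if (segIndex === adEnd) cueInPending = true;
    }
    out.push(line);
    // the CUE-IN tag follows the segment URI that closes the break
    if (cueInPending && line.indexOf('#') !== 0 && line.trim() !== '') {
      out.push('#EXT-X-CUE-IN');
      cueInPending = false;
    }
  });
  return out.join('\n');
}
```

In a real setup this would run each time the transcoder rewrites the live playlist, with adStart/adEnd derived from the cue points fetched from the ad server or database.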

A few FAQs I found good to know from my learning:
a. There might be cases where our live stream does not play on the client and seems to be stuck. It is good to check the playlist and see if the media sequence tag points to the correct segment in the file.
b. Live streaming folks choose different playlist compositions. Some maintain just one or two segments in the playlist, which allows early adaptive bitrate switches. In my case, I still need to experiment whether the live stream can play CUEs which spread across two playlists; probably it can. As of now I keep enough chunks that the entire media time for the client-side advertisements is accommodated in a single playlist. In this playlist we need to ensure the sequence number points to the first entry; otherwise, by default it will point to the last few entries and we will find content being skipped.
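To illustrate (a) and the sequence-number point in (b): in a live playlist the #EXT-X-MEDIA-SEQUENCE value must be the sequence number of the first segment listed. A minimal sketch (segment names and numbers are illustrative):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:10.0,
seg120.ts
#EXTINF:10.0,
seg121.ts
```

Here 120 declares that the first entry below it is segment number 120; if the tag pointed past the listed segments, the client would skip or stall on the content.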

Custom x64 builds using the Windows build suite for ffmpeg

In my previous article I mentioned how easy it was to build ffmpeg on 64-bit Windows using a sandboxed MSYS terminal. I am using ffmpeg for my transcoding needs and require support for all the codecs available with this tool. I am listing down all I did to get ffmpeg to recompile my locally modified sources.
The suite works by preparing all the options taken from input on a DOS terminal and exporting them to a Mintty window. So, in case we need to run the build for a specific module, it is better to declare all these variables separately; the values each of them can take can be found in the media-autobuild_suite.bat file. This is how Mintty is loaded.


start /I %instdir%\%msys2%\usr\bin\mintty.exe -i /msys2.ico -t "media-autobuild_suite" ^
%instdir%\%msys2%\usr\bin\script.exe -a -q -f %build%\compile.log -c '^
MSYS2_PATH_TYPE=inherit MSYSTEM=%MSYSTEM% /usr/bin/bash --login ^
/build/media-suite_compile.sh --cpuCount=%cpuCount% --build32=%build32% --build64=%build64% --deleteSource=%deleteSource% ^
--mp4box=%mp4box% --vpx=%vpx2% --x264=%x2643% --x265=%x2652% --other265=%other265% --flac=%flac% --fdkaac=%fdkaac% ^
--mediainfo=%mediainfo% --sox=%sox% --ffmpeg=%ffmpeg% --ffmpegUpdate=%ffmpegUpdate% --ffmpegChoice=%ffmpegChoice% ^
--mplayer=%mplayer% --mpv=%mpv% --license=%license2% --stripping=%stripFile% --packing=%packFile% ^
--rtmpdump=%rtmpdump% --logging=%logging% --bmx=%bmx% --standalone=%standalone% --aom=%aom% ^
--faac=%faac% --ffmbc=%ffmbc% --curl=%curl% --cyanrip=%cyanrip% --redshift=%redshift%'

For example, one could make the following declarations to build a standalone ffmpeg.

export cpuCount=4
export build64=yes
export x264=full
export ffmpeg=static
export standalone=y

Now run each line of media-suite_compile.sh in the Mintty console; the configure, build and install steps will be run and the binary will be ready.

When a webserver is needed in Windows during development to stream HLS fragments – use IIS Express

Here I will try to explain how simple my process was when I needed to stream HLS from a local machine. I am using the video.js player along with the videojs-contrib-hls module to render the HLS on the client. The precondition is that I have a folder where my HLS fragments are created; the ffmpeg tool can be used to create these files from a live or VOD RTSP stream. In an earlier blog I mentioned the usage of the hls-server module in node.js; it cannot be used to serve JavaScript and HTML (at least out of the box). I was looking for an IIS server for a developer Windows setup, and IIS Express was available. Install it from the Microsoft site.
a. Make a backup of the applicationhost.config in the IIS Express host config/templates folder.
b. Add the MIME-type filter for .m3u8 files.

.
.
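The filter entries themselves were elided above. As a sketch, the usual applicationhost.config addition registers the HLS MIME types under the staticContent section (the exact placement depends on your config file):

```xml
<staticContent>
  <!-- Sketch: MIME types commonly registered for serving HLS -->
  <mimeMap fileExtension=".m3u8" mimeType="application/x-mpegURL" />
  <mimeMap fileExtension=".ts" mimeType="video/MP2T" />
</staticContent>
```

Without these, IIS Express returns 404 for the playlist and segment files even though they exist on disk.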

c. Update the site.

d. Navigate to the IIS Express location inside Program Files and type:
iisexpress /config:c:\mypathto\applicationhost.config

We are good to go.

An Easy FFMPEG build with all Codec support for 64 bit Windows

Here I am logging the exercise of compiling FFMPEG on Windows using the MSVC compiler. While I was logging my way of doing it step by step, I found it tedious and requiring multiple updates. These builds are smooth on Linux and macOS, but on Windows the build process is a hybrid of MSVC and a MinGW/MSYS environment. A typical process consists of getting the x64 or x86 Visual Studio command prompt (you can do this by searching in the Windows menu bar for x64), then invoking the MSYS window from within this console. Lots of notes are available on the net to compile a particular version of ffmpeg, but with MinGW updates one tends to run into issues.

My requirement was to get an ffmpeg build system locally so that I can play around with the source files to understand a few video delivery issues. With the above process I was running into a number of dependencies: H.264 support in an ffmpeg build requires the x264 library, and x264's MP4 support requires the GPAC libraries. Finally I found this site: an easy self-running build with a UI that accepts inputs and creates an MSYS shell. Here is the link; the build process took approximately 4 hours for me.

https://gitlab.com/RiCON/media-autobuild_suite/tree/master

This suite comes up with a configured MSYS window, downloads sources from various sites, and configures and builds all dependencies. It starts with media-autobuild_suite.bat and then has shell scripts containing the build for each module. In case we need to rebuild, all we need to do is copy and paste some of the commands from media-suite_compile.sh.

The link below has details for individual builds.
https://github.com/jb-alvarado/media-autobuild_suite/wiki

Building RTP Streamer Live555 for x64 and using Live555Proxy

In my earlier post I mentioned how to compile the live555 source using the Visual Studio IDE, which is more of a manual compilation. Here I am mentioning the steps required to compile it using nmake. The live555 build documentation is self-explanatory: http://www.live555.com/liveMedia/#config-unix. But for a 64-bit Windows machine, a little change to the win32config file in the live folder helps; check the documentation in https://stackoverflow.com/questions/29041258/building-64bit-live555-with-visual-studio-2013, which was useful to me.

Sometimes I have struggled to find the Visual Studio command prompts which provide the VS compile-time environment ready-made. After searching through the folders I found it in Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\Tools\VsDevCmd.bat. This starts up the command prompt.

Once I had an issue with a qos.h dependency, with winsock2.h referring to it. I found it to be an SDK issue; with an updated SDK 10.x, I was able to build without any problem.

I was also able to access the live555 proxy server, which is used to present one common front end for many live streaming URLs, from VLC as well as ffplay. In some situations VLC was unable to play from the proxy server while ffplay was; VLC was closing the connections for some reason.

Sometimes there may be issues with the proxy server connecting to a media server running on ports like 8554; this shows up as a bind error. In such cases, start the proxy server with the -p option and a different port number.
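As a sketch, assuming the stock live555ProxyServer binary (the back-end host and stream name are illustrative):

```shell
# The default RTSP port may already be taken by the media server on the
# same machine; -p moves the proxy's own RTSP port so the bind succeeds.
live555ProxyServer -p 8555 rtsp://192.168.1.20:8554/SampleVideo_big.mkv
```

Clients then connect to rtsp://proxy-host:8555/proxyStream instead of the back-end URL.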

At the moment I am able to use only MKV containers, but that's more about the build of the live555 RTP streamer I have, so I am happy with it.

Simple HLS Streaming from an RTSP source

Here is a very simple mechanism of streaming HLS content to your CDN. A typical live streaming scenario will be as follows:
a. Production house streams the highest quality source
b. We ingest this source and pass it on to our transcoders
c. We also create the segment chunks and the m3u8 playlist
d. Send this content to a HLS server.

There are options like using an Nginx server, live555, or node's hls-server, but most of them require an RTSP server to serve the first data to ingest. For me, using hls-server looked easiest. It requires node to be installed; I am using Windows now, where node is bundled with the node package manager. Along with node you will require the hls-server package to run on a port. Map the folder where you have your video files to the path and dir variables.
Again, as I noted in my earlier Linux post, I was able to get the node components loaded only when I set up NODE_PATH. On Windows the modules are stored in %appdata%\npm\node_modules.
Next we need our HLS streamer to ingest live streams; we will use the RTSP source and the npm library fluent-ffmpeg to ingest it. fluent-ffmpeg also requires ffmpeg to be installed and the ffmpeg and ffprobe paths to be set. The best part of fluent-ffmpeg is that it exposes all ffmpeg functionality to our node code.

Install ffmpeg-binaries too.

Run this in one node console:

var ffmpeg = require('fluent-ffmpeg')
var ffmpegBinaries = require('ffmpeg-binaries')
var ffmpegPath = ffmpegBinaries.ffmpegPath() // Path to ffmpeg binary
var ffprobePath = ffmpegBinaries.ffprobePath() // Path to ffprobe binary

ffmpeg.setFfmpegPath(ffmpegPath)

// host, port and path to the RTSP stream
var host = ''
var port = ''
var path = '/SampleVideo_big.mkv'

function callback() {
  // do something when the stream ends and encoding finishes
}

ffmpeg('rtsp://' + host + ':' + port + path, { timeout: 432000 }).addOptions([
  '-c:v libx264',
  '-c:a aac',
  '-ac 1',
  '-strict -2',
  '-crf 18',
  '-profile:v baseline',
  '-maxrate 400k',
  '-bufsize 1835k',
  '-pix_fmt yuv420p',
  '-hls_time 10',
  '-hls_list_size 6',
  '-hls_wrap 10',
  '-start_number 1'
]).output('output.m3u8').on('end', callback).run()

Run this in another:

var HLSServer = require('hls-server')
var http = require('http')

var server = http.createServer()
var hls = new HLSServer(server, {
  path: '/streams', // Base URI to output HLS streams
  dir: 'path/to/media/files' // Directory that input files are stored in
})
server.listen(8000)

Now use the VLC player to load the HTTP HLS playlist (e.g. http://localhost:8000/streams/output.m3u8).

How to set up a VS2017 solution file for live555

Deviating a little from the blockchain work, I stepped into the video tools I need for the advertisement platform. Live555 is an open-source library for RTSP streaming, and it does HLS as a special case.

The intent is to use the MediaServer from live555, grab the frames from the sink, transcode them into smaller chunks and send them out using live555. I am planning to use DirectShow for this, although Microsoft currently advocates the use of Media Foundation as opposed to DirectShow.

Download the latest tar files from http://www.live555.com/liveMedia/public/ and create the projects in sequence in a new solution. Let us create two solutions, one for the server and the other for the client. I think we can manage with one solution and two executables, but I am not sure at the moment.
a. UsageEnvironment
b. BasicUsageEnvironment
c. groupsock
d. liveMedia
e. mediaServer
f. testProgs

All projects need to be created as a static library, except mediaServer and testProgs. Add a folder called include in the solution work area. In each of these projects create a filter called include and add all the header files there. Add all the source files to the existing project; they will find their way to the configured source folders.

For each project ensure the following:
a. Except for mediaServer and testProgs, all other projects are set up as a static library.
b. Add the preprocessor define _CRT_SECURE_NO_WARNINGS; include _WINSOCK_DEPRECATED_NO_WARNINGS for groupsock.
c. Make sure that Precompiled Headers usage is turned off.
d. The additional include directories contain .\include plus the following:
I. UsageEnvironment: .\include;..\groupsock\include
II. BasicUsageEnvironment: .\include;..\groupsock\include;..\UsageEnvironment\include
III. groupsock: .\include;..\UsageEnvironment\include
IV. liveMedia: .\include;..\UsageEnvironment\include
V. mediaServer: link against ws2_32.lib (a linker input, not an include path)
In the testProgs folder there are multiple programs, each of which can act as a standalone executable, so we can add and remove programs as required to test.
Add the static library for each module to the mediaServer or testProgs linker configuration.
Once all these are done, you are good to go.
MediaServer is a separate console application.
We can use the test programs to send messages to the MediaServer.