December 28, 2016

December news

The year 2016 is almost over, so before posting a year summary we'd like to highlight some significant updates from December.

Live Transcoder

Live Transcoder decoding and encoding capabilities have been improved.

We've also added an article about setting constant bitrate for transcoding with x264.

All this improves the efficiency and overall user experience of our Live Transcoder.

Nimble Streamer

Nimble has an interesting update for ABR: you can now add multiple language streams into an ABR stream, combining N video and M audio streams for further usage in any player.


Icecast

Icecast live metadata can now be added to any outgoing Icecast stream using the WMSPanel UI. Read this article for details. This is a good addition to the existing Icecast metadata pass-through.

Read about more audio streaming scenarios supported by Nimble Streamer.


Larix mobile SDK

Larix mobile SDK has been updated.

Android SDK has several new features:

  • Set custom white balance, exposure value and anti-flicker;
  • Long press the preview to focus the lens at infinity, double-tap for continuous auto focus;
  • Use volume keys to start broadcasting;
  • Use a selfie stick to start broadcasting.

iOS SDK has minor fixes and improvements for streaming.

Use this page to proceed with SDK license subscription.



In a few days we'll also release our company's yearly summary.


Follow us on Facebook, Twitter or Google+ to get the latest news and updates on our products and services.

The State of Streaming Protocols - 2016 summary

The Softvelum team, which operates the WMSPanel reporting service, continues analyzing the state of streaming protocols.

With 2016 over, it's time to sum up what we saw looking back at this past year. More than 3200 media servers connected to WMSPanel (running Nimble Streamer and Wowza) processed over 34 billion connections. As you can tell, we have some decent data to analyse.

First, let's take a look at the chart and numbers:
The State of Streaming Protocols - 2016
You can compare that to the picture of 2015 protocols landscape:

December 27, 2016

Adding multiple audio tracks for ABR HLS live streams

Live streaming scenarios of Nimble Streamer include ABR (adaptive bitrate), which can be accomplished via HLS and MPEG-DASH.

Previously we introduced the ability to use multiple separate video and audio tracks in the same VOD stream.

Now we've added multiple audio track support for live ABR streams. This allows assigning audio streams from any incoming source to ABR streams and defining the corresponding properties for each of them. Once such a stream is defined, a player may offer audio track selection so the viewer gets the proper audio stream.
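In HLS terms, multiple audio tracks appear as alternative renditions in the master playlist. A sketch of what such a playlist might look like (stream names, URIs and bitrates here are hypothetical, not output copied from Nimble Streamer):

```
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",NAME="English",LANGUAGE="en",DEFAULT=YES,URI="audio_en/playlist.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",NAME="Deutsch",LANGUAGE="de",DEFAULT=NO,URI="audio_de/playlist.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=1280x720,AUDIO="audio"
video_720p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360,AUDIO="audio"
video_360p/playlist.m3u8
```

A player that supports EXT-X-MEDIA shows the NAME values in its audio track selector and switches audio independently of the video rendition.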

Let's see how it's set up.

December 26, 2016

Setting constant bitrate for x264 encoder

Live Transcoder for Nimble Streamer has a wide range of transcoding capabilities, including H.264 encoding with the x264 library, which is licensed for commercial usage by our company. This means any customer with our Transcoder may use x264 parameters to set up an outgoing stream.

This article answers a popular question of our customers - "How can I set up constant bitrate for my streams?" - using x264 encoder settings. This encoder is also known as libx264.

Let us give a couple of short answers and then a full description.

How to set up CRF (Constant Rate Factor) with maximum bitrate


As you may have seen from our screencasts - such as the UI sneak preview for ABR scenario setup - you can use the web UI to set up a transcoding scenario with source streams, transformation blocks and encoders. Blue blocks are stream sources, green blocks are filters that transform the content, and orange blocks are outgoing stream encoders. If you point your mouse at any block, you'll see a setup icon - click it to open the details dialog.

Click on the orange block (that is the encoder settings box) and set the following custom fields:
  • crf to 20
  • vbv-maxrate to 400
  • vbv-bufsize to 1835
This will set the maximum bitrate to 400Kbps with a CRF of 20.



This is an equivalent of the following FFmpeg parameters:
-crf 20 -maxrate 400k -bufsize 1835k

How to set up CBR (Constant Bit Rate)


The constant bitrate can be set up almost the same way, in the orange encoder block. If you need a bitrate of 4Mbps, set the values as follows:

  • bitrate to 4000
  • vbv-maxrate to 4000
  • vbv-bufsize to 4000


This is an equivalent of the following FFmpeg parameters:
-b:v 4000k -minrate 4000k -maxrate 4000k -bufsize 4000k

See the following sections for explanations.

Some internals


Nimble Streamer Live Transcoder uses libx264, which offers some flexibility in controlling the bitrate. As described in the FFmpeg docs (FFmpeg uses libx264 as well), CBR is not supported directly due to the complex codec logic, but it can be emulated by capping the maximum bitrate.

If you set vbv-maxrate and vbv-bufsize to something basic like the H.264 High Profile @ Level 4.1 limitations, the encoder will still operate in ABR mode, but will constrain itself to not go outside these specifications.

If you set vbv-maxrate to the same value as bitrate, then the encoder will operate in CBR mode. Notice that it's not a strict CBR where every picture has the same size. vbv-bufsize controls the size of the buffer which allows for bitrate variance while still staying inside the CBR limitations.
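As a rule of thumb, vbv-bufsize divided by vbv-maxrate gives the amount of video, in seconds, that the VBV buffer can hold: the larger the buffer relative to the rate cap, the more the momentary bitrate may swing. A small arithmetic sketch (not part of Nimble, just an illustration of the two examples above):

```python
def vbv_buffer_seconds(bufsize_kbit: float, maxrate_kbit: float) -> float:
    """How many seconds of video the VBV buffer holds at the capped bitrate."""
    return bufsize_kbit / maxrate_kbit

# CRF example above: maxrate 400, bufsize 1835 -> roughly 4.6 s of slack
print(round(vbv_buffer_seconds(1835, 400), 2))  # 4.59
# CBR example: maxrate 4000, bufsize 4000 -> 1 s, i.e. tight bitrate control
print(vbv_buffer_seconds(4000, 4000))           # 1.0
```

A smaller buffer keeps the delivered bitrate closer to constant at the cost of quality on complex frames; a larger buffer does the opposite.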

If you set only the "bitrate" parameter, the encoder will work in unconstrained VBR mode, treating the parameter value as a target rather than a fixed value.

Minimum bitrate (minrate)

As for the lower bitrate threshold, the library would need to increase quality whenever the average bitrate falls below min-rate, but if even the maximum possible quality cannot fill the bandwidth gap, the resulting bitrate will still be lower than what you set. Hence the "-minrate 4000k" parameter in the example above simply does not work - it's not used by libx264 and was added to the FFmpeg guide accidentally. We've checked the ffmpeg code - it's just a bug in the docs.

Some cases of FPS affecting bitrate


Please also check the "Handling fuzzy FPS to get proper bitrate output" article, which covers cases where frame rate may affect output bitrate.


To get some more details please read this FFmpeg docs page and also this forum thread.

Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features and contact us if you have any questions.

Related documentation


December 19, 2016

Append metadata to Icecast streams

Nimble Streamer has an extended audio streaming feature set for both live and VOD. Live audio streaming covers both transmuxing and transcoding of Icecast pulled and published streams.

In addition to just transmuxing and transcoding audio, Nimble now allows adding any metadata to any outgoing Icecast stream. So any player capable of Icecast playback and metadata processing will show the respective info during playback.

If you just need to pass the metadata through, use this instruction for Icecast re-streaming and this instruction for passing metadata through Live Transcoder.

Let's see how you can do that.


Click the Nimble Streamer -> Live streams settings top menu, then click the Icecast metadata tab.


Now click the Add Icecast metadata button to see the metadata setup dialog.



Application name and Stream name must be the same as those of the outgoing stream you'd like to add metadata to.

The next set of fields is for the metadata items; each field corresponds to a respective metadata item:

  • Channel name
  • Description for the channel
  • Genre
  • Publicity
  • Bitrate in Kbps
  • Audio info, which is automatically filled with the bitrate value, though you can add any additional info there
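For reference, fields like these are typically delivered as the standard Icecast response headers (icy-name, icy-description, icy-genre, icy-pub, icy-br, ice-audio-info); the exact set Nimble emits is described in the linked article. A minimal sketch of how a client might collect them from a raw header block - the sample values below are made up:

```python
def parse_icy_headers(raw: str) -> dict:
    """Collect icy-* / ice-* metadata headers from an Icecast response."""
    meta = {}
    for line in raw.splitlines():
        if ":" not in line:
            continue  # skip the status line and blank lines
        name, _, value = line.partition(":")
        name = name.strip().lower()
        if name.startswith(("icy-", "ice-")):
            meta[name] = value.strip()
    return meta

sample = (
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: audio/mpeg\r\n"
    "icy-name: My Radio\r\n"
    "icy-genre: Jazz\r\n"
    "icy-pub: 1\r\n"
    "icy-br: 128\r\n"
)
print(parse_icy_headers(sample)["icy-name"])  # My Radio
```

A player uses these headers to show the channel name and genre during playback.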

You can also specify which servers to apply this setting to - just use the checkboxes for the respective servers. Click Save to apply your settings and check the list to see the update progress.




That's it. If you want to change anything - just click on the tool icon.

Further audio streaming options


You can read about all metadata-related features on our audio streaming page under Icecast metadata section.

With Icecast/SHOUTcast streams being processed via a single transmuxing engine, you may also use them in various scenarios like those mentioned below.


Let us know if you have any suggestions or questions regarding audio streaming - we're open for discussion.

Related documentation


December 13, 2016

NVENC context cache for Live Transcoder

Nimble Streamer Live Transcoder has full support for NVidia video transcoding hardware acceleration.

Some complex transcoding scenarios may put excessive load on the hardware, which may affect performance and cause errors. You may then find the following lines in the Nimble Streamer logs: "Failed to encode on appropriate rate" or "Failed to decode on appropriate rate". This is a known issue for NVENC.

There are two approaches to solving this problem.


First approach is to use shared contexts.

Please read the "NVENC shared context usage" article to learn more about it and try it in action.

Another approach is to use the NVENC context caching mechanism in Nimble Streamer Transcoder. It creates encoding and decoding contexts when the transcoder starts, so when an encoding or decoding session starts, it picks up a context from the cache. It also allows reusing contexts.

The following nimble.conf parameters control the NVENC context cache. See the configuration parameters reference for more information about Nimble config.

Use this parameter to enable the context cache feature:
nvenc_context_cache_enable = true

To handle the context cache efficiently, the transcoder needs to lock NVidia driver API calls to itself. This lets it queue context creation and control it exclusively, which improves performance significantly. This is what the following parameter is for.
nvenc_context_create_lock = true

You can set the transcoder to create contexts at startup, specifying how many contexts will be created for each graphics unit for encoding and decoding sessions.
The common format is:
nvenc_context_cache_init = 0:<EC>:<DC>,1:<EC>:<DC>,...,N:<EC>:<DC>
As you see, it's a set of triples where the first number defines the GPU number, EC is the encoder contexts number and DC is the decoder contexts number.
Check this example:
nvenc_context_cache_init = 0:32:32,1:32:32,2:16:16,3:16:16
This means you have 4 cards: the first two cards have 32 contexts each for encoding and 32 for decoding, while the other two cards have 16 each.
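The triples can be read mechanically. Here is a small sketch (a hypothetical helper for illustration, not part of Nimble) that parses such a value into per-GPU encoder/decoder context counts:

```python
def parse_cache_init(value: str) -> dict:
    """Parse 'gpu:enc:dec,...' into {gpu: (encoder_contexts, decoder_contexts)}."""
    result = {}
    for triple in value.split(","):
        gpu, enc, dec = (int(x) for x in triple.split(":"))
        result[gpu] = (enc, dec)
    return result

print(parse_cache_init("0:32:32,1:32:32,2:16:16,3:16:16"))
# {0: (32, 32), 1: (32, 32), 2: (16, 16), 3: (16, 16)}
```

This mirrors how the example above maps onto the four cards.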

When a new context is created on top of those created at transcoder start, it will be released once the encoder or decoder session is over (e.g. the publishing was stopped). To make those contexts available for further re-use, you need to specify this parameter:
nvenc_context_reuse_enable = true
We recommend using this option by default.

For example, with 2 GPUs your config may look like this:
nvenc_context_cache_enable = true
nvenc_context_create_lock = true
nvenc_context_cache_init = 0:32:32,1:32:32
nvenc_context_reuse_enable = true

That's it - use these parameters when you experience issues with NVENC.

Getting optimal parameters


Here are the steps you may follow in order to get the optimal number of decoding and encoding contexts.

1. Create transcoding scenarios for the number of streams which you plan to transcode, set decoding and encoding via GPU.

2. Enable the new scenarios one by one with a 1-minute interval and check the Nimble logs after each scenario is enabled. If you don't see any errors, keep adding streams for transcoding.

3. You will load your GPU more and more until you get errors. It's important not to enable them all at once because each new context takes extra time to create, as stated earlier in this article.

4. Once you start getting errors in the log, stop Nimble Streamer, then set nvenc_context_cache_init to the value matching the number of streams which failed to transcode. Also set the other parameters (nvenc_context_cache_enable, nvenc_context_create_lock, nvenc_context_reuse_enable) to the corresponding values.

5. Start Nimble Streamer and check /etc/logs/nimble/nimble.log for any errors. If you find transcoding-related errors, decrease the context cache value until you see no errors on Nimble Streamer start.

6. Once you see no issues on start, create a scenario which contains decoding for the same number of streams as you have decoding contexts. E.g. if the cache is 0:0:20, make a scenario with 20 streams for decoding and encoding into any rendition.

7. Start the scenario and check the log. If you see decoding or encoding errors, remove streams from your scenario until you have no errors. Check the stream after that to make sure you have no artifacts.

8. The number of processed streams will be your maximum number of decoding contexts.

The idea is to load the GPU without caching until errors appear in the log, then set context caching to proper values until no errors are seen either.

That's it. Now check the examples of context caching for various NVidia cards.

Examples


Here is a short extract of some configurations for known cards - they were tested in real scenarios. To use your GPU optimally, you need to perform some tests with the GPU without context caching and then set nvenc_context_cache_init to the values you find optimal during your tests.

Quadro P5000 (16GB)

FullHD (1080p), high profile.

The maximum number of decoding contexts is 13. If you want to use Quadro only for decoding you can use
nvenc_context_cache_init = 0:0:13

If you want to use NVENC for both decoding and encoding we recommend using
nvenc_context_cache_init = 0:10:5
Contexts are used as:
  • decoding: 5 for FullHD
  • encoding: 5 for FullHD + 5 HD (720p)

Tesla M60 (16 GB, 2 GPUs)

Read Stress-testing NVidia GPU for live transcoding article to see full testing procedure. The final numbers are as follows.


FullHD, high profile

The maximum number of decoding contexts is 20 for each GPU. So if you want to use the GPUs only for decoding you can use
nvenc_context_cache_init=0:0:20,1:0:20

To use NVENC for both encoding and decoding we recommend this value:
nvenc_context_cache_init=0:30:15,1:30:15

This means each GPU has 30 encoding and 15 decoding contexts.
Each GPU is used as:
  • decoding: 15 for FullHD
  • encoding: 15 for HD (720p), 15 for 480p

HD, high profile
For HD we recommend:
nvenc_context_cache_init=0:23:23,1:23:23

Each GPU is used as:
  • decoding: 23 for HD
  • encoding: 23 for 480p

We'll add more cases as soon as we perform proper testing. If you have any results which you'd like to share, drop a note to us.

Troubleshooting


If you face any issues when using Live Transcoder, follow Troubleshooting Live Transcoder article to see what can be checked and fixed.
If you have any issues which are not covered there, contact us for any questions.

Related documentation


NVIDIA, the NVIDIA logo and CUDA are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. 

December 12, 2016

Specifying decoder threads in Live Transcoder

Nimble Streamer Live Transcoder allows performing various transformations over incoming video and audio streams.

Any incoming stream needs to be decoded before any further transformation unless you use a pass-through mode. The decoding may be either hardware-based (such as NVidia GPU decoding supported by Live Transcoder or Intel QuickSync) or software-based.

Software decoding may be tuned to use processor resources optimally. This is why the Transcoder allows using multiple threads for video decoding.

By default the decoding is performed using one thread per stream on one CPU core. If the decoding doesn't consume the entire core, more threads are added onto that core.

However, in some cases one core is not enough to decode one stream. If you look at the Nimble logs you will see messages like
Failed to decode video in appropriate rate
This means you need to spread the decoding of one stream across several cores - this is where multiple threads are used.

Before adding new threads, make sure your CPU is not already 100% loaded.


Let's see how you can set it up. In the transcoding scenario, point to the video decoder's blue rectangle (with a film icon on it) and then click the gear button that appears.


You'll see the decoder settings dialog. The Decoder drop-down list shows the "Default" option - this is the software decoder used by Nimble Transcoder by default.




Use the Threads edit box to specify the number of threads used for decoding this particular incoming stream. Now click OK and then save the transcoding scenario to apply it to the server.

You can specify this separately for any decoder in any scenario.

Visit the Live Transcoder webpage for more details and contact us if you have any questions.

Related documentation


December 8, 2016

NVENC decoder in Nimble Live Transcoder

Nimble Streamer Live Transcoder has full support for NVidia video transcoding hardware acceleration. With capable hardware and properly installed drivers, our customers can choose NVENC to handle processing.


NVidia® Products contain a dedicated accelerator for video decoding and encoding, called NVENC, on the GPU die.

We've previously described the NVidia encoding setup. Now let's see how hardware-based decoding can be used.

The following codecs are supported for decoding:
  • H.264/AVC
  • H.265/HEVC
  • VP8 and VP9
  • AV1

Check the list of compatible hardware to see where each codec is supported.

In the transcoding scenario, point to the video decoder's blue rectangle (with a film icon on it) and then click the gear button that appears.


You'll see the decoder settings dialog. The Decoder drop-down list shows the "Default" option - this is the software decoder used by Nimble Transcoder by default.



To use the GPU decoder, choose NVENC from the list. This will make the NVidia GPU take over the decoding.


The GPU field allows specifying the sequential number of the physical GPU that will process the decoding. So if you want a specific GPU to decode a specific stream, type its number, e.g. 0, 1 etc., for as many GPUs as you have. If you set it to "auto", Nimble Transcoder will choose the least busy GPU.
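The "auto" behaviour can be pictured as a least-loaded pick. An illustrative sketch (the load metric and numbers are made up; this is not Nimble's actual selection code):

```python
def pick_least_busy_gpu(loads: dict) -> int:
    """Return the GPU index with the lowest current load."""
    return min(loads, key=loads.get)

# e.g. measured utilization per GPU index
print(pick_least_busy_gpu({0: 0.82, 1: 0.35, 2: 0.60}))  # 1
```

With explicit numbers you pin streams to GPUs yourself; with "auto" the transcoder balances the load for you.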

Many consumer NVidia GPU cards are restricted to a limited number of active encoding sessions (up to 8, depending on the GPU driver version), while decoding sessions are not limited. So you can use even a GTX card to help the transcoder decode without limitation. Check the NVENC support matrix for more.

Deinterlacing mode has the following values:

  • weave will weave both fields (no deinterlacing)
  • bob will drop one field
  • adaptive will enable adaptive deinterlacing

Please refer to NVidia documentation for more details on each mode.

If you'd like to use software decoder though - please check this article.

To improve your NVENC transcoding experience, please also take a look at Transcoder troubleshooting covering most frequent questions.

Zabbix monitoring of Nimble Streamer allows tracking server status, SRT streams and NVidia GPU status.

We keep improving our transcoder feature set, contact us for any questions.

Related documentation



NVIDIA, the NVIDIA logo and CUDA are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries.