December 28, 2016

December news

The year 2016 is almost over, so before posting a yearly summary we'd like to highlight some significant updates from December.

Live Transcoder

Live Transcoder decoding and encoding capabilities were improved.

We've also added an article about setting a constant bitrate for transcoding with x264.

All this improves the efficiency and overall user experience of our Live Transcoder.

Nimble Streamer

Nimble has an interesting update for ABR: you can now add multiple language streams into an ABR stream, combining N video and M audio streams for further usage in any player.


Icecast

Icecast live metadata can now be added to any outgoing Icecast stream using the WMSPanel UI. Read this article for details. This is a good addition to the existing Icecast metadata pass-through.

Read about more audio streaming scenarios supported by Nimble Streamer.


Larix mobile SDK

Larix mobile SDK has been updated.

Android SDK has several new features:

  • Set custom white balance, exposure value and anti-flicker;
  • Long press the preview to focus the lens at infinity, double tap for continuous auto focus;
  • Use volume keys to start broadcasting;
  • Use a selfie stick to start broadcasting.

iOS SDK has minor fixes and improvements for streaming.

Use this page to proceed with SDK license subscription.



In a few days we'll also release a yearly summary of our company. 


Follow us on Facebook, Twitter or Google+ to get the latest news and updates on our products and services.

The State of Streaming Protocols - 2016 summary

The Softvelum team, which operates the WMSPanel reporting service, continues analyzing the state of streaming protocols.

As the year 2016 is over, it's time to summarize what we've seen looking back through this past period. The media servers connected to WMSPanel processed more than 34 billion connections from 3200+ media servers (operated by Nimble Streamer and Wowza). As you can tell, we have some decent data to analyse.

First, let's take a look at the chart and numbers:
The State of Streaming Protocols - 2016
You can compare that to the picture of 2015 protocols landscape:

December 27, 2016

Adding multiple audio tracks for ABR HLS live streams

Live streaming scenarios of Nimble Streamer include ABR (adaptive bitrate), which can be delivered via HLS and MPEG-DASH.

Previously we introduced the ability to use multiple separate video and audio tracks in the same VOD stream.

Now we've added multiple audio track support for live ABR streams. This allows assigning audio streams from any incoming source to ABR streams and defining corresponding properties for each of them. Once such a stream is defined, a player may offer audio track selection so viewers get the proper audio stream.

Let's see how it's set up.

December 26, 2016

Setting constant bitrate for x264 encoder

Live Transcoder for Nimble Streamer has a wide range of transcoding capabilities, including H.264 encoding with the x264 library, which is licensed for commercial usage by our company, so any customer with our Transcoder may use x264 parameters to set up outgoing streams.

This article answers a popular question from our customers - "How can I set up constant bitrate for my streams?" - using x264 encoder settings. This encoder is also known as libx264.

Let us give a couple of short answers and then a full description.

How to set up CRF (Constant Rate Factor) with maximum bitrate


As you may have seen in our screencasts - such as the UI sneak preview for ABR scenario setup - you can use the web UI to set up a transcoding scenario with source streams, transformation blocks and encoders. Blue blocks are stream sources, green blocks are filters that transform the content, and orange blocks are outgoing stream encoders. If you point your mouse at any block, you'll see a setup icon - click it to open the details dialog.

Click on the orange block (the encoder settings box) and set the following custom fields:
  • crf to 20
  • vbv-maxrate to 400
  • vbv-bufsize to 1835
This will set the maximum bitrate to 400Kbps with a CRF of 20.



This is an equivalent of the following FFmpeg parameters:
-crf 20 -maxrate 400k -bufsize 1835k

How to set up CBR (Constant Bit Rate)


The constant bitrate can be set up almost the same way, in the orange encoder block. If you need bitrate 4Mbps, set the values as follows:

  • bitrate to 4000
  • vbv-maxrate to 4000
  • vbv-bufsize to 4000


This is an equivalent of the following FFmpeg parameters:
-b:v 4000k -minrate 4000k -maxrate 4000k -bufsize 4000k

See the following sections for explanations.

Some internals


Nimble Streamer Live Transcoder uses libx264, which has some flexibility in controlling the bitrate. As described in the FFmpeg docs (FFmpeg uses libx264 as well), true CBR is not supported directly due to very complex codec logic, but we can emulate it by setting a maximum bitrate.

If you set vbv-maxrate and vbv-bufsize to something standard like the H.264 High Profile @ Level 4.1 limits, the encoder will still operate in ABR mode but will constrain itself to stay within those specifications.

If you set vbv-maxrate to the same value as bitrate, the encoder will operate in CBR mode. Note that this is not strict CBR where every picture has the same size: vbv-bufsize controls the size of the buffer, which allows for bitrate variance while still staying inside the CBR limitations.
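As a rule of thumb, the VBV buffer window in seconds is vbv-bufsize divided by vbv-maxrate, so you can estimate how much bitrate variance the encoder is allowed. A quick check with the values used in this article:

```shell
# buffer window (seconds) = vbv-bufsize / vbv-maxrate
awk 'BEGIN { printf "%.2f\n", 1835/400 }'    # CRF example: roughly 4.6 s of variance
awk 'BEGIN { printf "%.2f\n", 4000/4000 }'   # CBR example: a 1 s window
```

A smaller bufsize relative to maxrate keeps the output closer to truly constant.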

If you set only the "bitrate" parameter, the encoder will work in unconstrained VBR mode, treating the value as a target rather than a fixed rate.
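To recap the three rate-control modes in terms of the encoder's custom fields, using the illustrative values from this article:

```
Constrained quality : crf=20  vbv-maxrate=400   vbv-bufsize=1835
Emulated CBR        : bitrate=4000  vbv-maxrate=4000  vbv-bufsize=4000
Unconstrained VBR   : bitrate=4000  (no vbv-* constraints)
```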

Minimum bitrate (minrate)

As for the lower bitrate threshold: the library will increase quality if the average quality cannot reach the minimum rate, but if even the maximum possible quality cannot fill the bandwidth gap, the resulting bitrate will be lower than what you set. Hence the "-minrate 4000k" parameter in the example above simply does not work - it's not used by libx264 and was added to the FFmpeg guide accidentally. We've checked the ffmpeg code - it's just a bug in the docs.

Some cases of FPS affecting bitrate


Please also check the Handling fuzzy FPS to get proper bitrate output article which covers cases where the frame rate may affect the output bitrate.


To get some more details please read this FFmpeg docs page and also this forum thread.

Feel free to visit the Live Transcoder webpage for a description of other transcoding features, and contact us if you have any questions.

Related documentation


December 19, 2016

Append metadata to Icecast streams

Nimble Streamer has an extended audio streaming feature set for both live and VOD. Live audio streaming covers both transmuxing and transcoding of pulled and published Icecast streams.

In addition to transmuxing and transcoding audio, Nimble now allows adding arbitrary metadata to any outgoing Icecast stream, so any player capable of Icecast playback and metadata processing will show the respective info during playback.

If you only need to pass the metadata through, use this instruction for Icecast re-streaming and this instruction for passing metadata through Live Transcoder.

Let's see how you can do that.


Click the Nimble Streamer -> Live streams settings top menu, then click the Icecast metadata tab.


Now click on Add Icecast metadata button to see the metadata setup dialog.



Application name and Stream name must be the same as the outgoing stream name you'd like to add metadata to.

The next set of fields is for the metadata items; each field corresponds to a metadata item name:

  • Channel name
  • Description for the channel
  • Genre
  • Publicity
  • Bitrate in Kbps
  • Audio info, which is automatically filled with the bitrate value, though you can add any additional info there

You can also specify which servers to apply this setting to - just use the checkboxes for the respective servers. Click Save to apply your settings and check the list to see the update progress.




That's it. If you want to change anything - just click on the tool icon.

Further audio streaming options


You can read about all metadata-related features on our audio streaming page under Icecast metadata section.

With Icecast/SHOUTcast streams processed via a single transmuxing engine, you may also use them in various scenarios like those mentioned below.


Let us know if you have any suggestions or questions regarding audio streaming - we're open for discussion.

Related documentation


December 13, 2016

NVENC context cache for Live Transcoder

Nimble Streamer Live Transcoder has full support for NVidia video transcoding hardware acceleration.

Some complex transcoding scenarios may put excessive load on the hardware, which can affect performance and cause errors. In that case you may find the following lines in Nimble Streamer logs: "Failed to encode on appropriate rate" or "Failed to decode on appropriate rate". This is a known issue for NVENC.

There are two approaches to solving this problem.


The first approach is to use shared contexts.

Please read the "NVENC shared context usage" article to learn more about it and try it in action.

Another approach is to use the NVENC context caching mechanism in Nimble Streamer Transcoder. It creates encoding and decoding contexts when the transcoder starts, so when an encoding or decoding session begins, it picks up a context from the cache. It also allows reusing contexts.

The following nimble.conf parameters control NVENC contexts; see the configuration parameters reference for more information about Nimble config.

Use this parameter to enable the context cache feature:
nvenc_context_cache_enable = true

In order to handle the context cache efficiently, the transcoder needs to lock the calls to the NVidia driver APIs to itself. This allows queueing context creation and controlling it exclusively, which improves performance significantly. This is what the following parameter is for.
nvenc_context_create_lock = true

You can set the transcoder to create contexts at start, specifying how many contexts will be created on each graphics unit for encoding and for decoding sessions.
The common format is:
nvenc_context_cache_init = 0:<EC>:<DC>,1:<EC>:<DC>,...,N:<EC>:<DC>
As you can see, it's a set of triples where the first number defines the GPU number, EC is the encoder context count and DC is the decoder context count.
Check this example:
nvenc_context_cache_init = 0:32:32,1:32:32,2:16:16,3:16:16
This means you have 4 cards: the first two cards have 32 contexts for encoding and 32 for decoding, and the other two cards have 16 and 16 respectively.

When a new context is created on top of those created at transcoder start, it will be released once the encoder or decoder session is over (e.g. publishing stopped). To make such contexts available for further re-use, specify this parameter:
nvenc_context_reuse_enable = true
We recommend using this option by default.

So, as an example, with 2 GPUs your config may look like this:
nvenc_context_cache_enable = true
nvenc_context_create_lock = true
nvenc_context_cache_init = 0:32:32,1:32:32
nvenc_context_reuse_enable = true

That's it - use these parameters when you experience issues with NVENC.

Getting optimal parameters


Here are the steps you may follow in order to get the optimal number of decoding and encoding contexts.

1. Create transcoding scenarios for the number of streams which you plan to transcode, with decoding and encoding set to GPU.

2. Enable the new scenarios one by one with a 1-minute interval and check Nimble logs after each scenario is enabled. If you don't see any errors, keep adding streams for transcoding.

3. This will load your GPU more and more until you get errors. It's important not to enable all scenarios at once because each new context takes additional time to create, as stated earlier in this article.

4. Once you start getting errors in the log, stop Nimble Streamer, then set nvenc_context_cache_init to a value equal to the number of streams which failed to transcode. Also set the other parameters (nvenc_context_cache_enable, nvenc_context_create_lock, nvenc_context_reuse_enable) to the corresponding values.

5. Start Nimble Streamer and check /var/log/nimble/nimble.log for any errors. If you find transcoding-related errors, decrease the context cache value until you see no errors on Nimble Streamer start.

6. Once you see no issues on start, create a scenario which decodes the same number of streams as you have decoding contexts. E.g. for a 0:0:20 cache, make a scenario with 20 streams decoded and encoded into any rendition.

7. Start the scenario and check the log. If you see decoding or encoding errors, remove streams from your scenario until you have no errors. Check the stream after that to make sure you have no artifacts.

8. The number of processed streams will be your maximum number of decoding contexts.

The idea is to load the GPU without caching until you get errors in the log. Once the errors appear, set context caching to proper values until no errors are seen either.
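While following the steps above, it helps to count the NVENC rate errors in the log automatically. A minimal sketch, assuming the error wording from this article and the default Linux log location (adjust the path to your installation):

```shell
#!/bin/sh
# Count NVENC "rate" errors in the Nimble log.
# The log path is an assumption - pass your own as the first argument.
LOG="${1:-/var/log/nimble/nimble.log}"
grep -cE 'Failed to (encode|decode) on appropriate rate' "$LOG"
```

Run it after enabling each scenario; a non-zero count means you've hit the GPU's context limit.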

That's it. Now check the examples of context caching for various NVidia cards.

Examples


Here is a short extract of configurations for known cards - they were tested in real scenarios. To use your GPU optimally you need to perform some tests without context caching and then set nvenc_context_cache_init to the values you find optimal during your tests.

Quadro P5000 (16GB)

FullHD (1080p), high profile.

The maximum number of decoding contexts is 13. If you want to use Quadro only for decoding you can use
nvenc_context_cache_init = 0:0:13

If you want to use NVENC for both decoding and encoding we recommend using
nvenc_context_cache_init = 0:10:5
Contexts are used as:
  • decoding: 5 for FullHD
  • encoding: 5 for FullHD + 5 HD (720p)

Tesla M60 (16 GB, 2 GPUs)

Read Stress-testing NVidia GPU for live transcoding article to see full testing procedure. The final numbers are as follows.


FullHD, high profile

The maximum number of decoding contexts is 20 for each GPU. So if you want to use the GPUs only for decoding you can use
nvenc_context_cache_init=0:0:20,1:0:20

To use NVENC for both encoding and decoding we recommend this value:
nvenc_context_cache_init=0:30:15,1:30:15

This means each GPU has 30 encoding and 15 decoding contexts.
Each GPU is used as:
  • decoding: 15 for FullHD
  • encoding: 15 for HD (720p), 15 for 480p

HD, high profile
For HD we recommend:
nvenc_context_cache_init=0:23:23,1:23:23

Each GPU is used as:
  • decoding: 23 for HD
  • encoding: 23 for 480p

We'll add more cases as soon as we perform proper testing. If you have any results which you'd like to share, drop a note to us.

Troubleshooting


If you face any issues when using Live Transcoder, follow Troubleshooting Live Transcoder article to see what can be checked and fixed.
If you have any issues which are not covered there, contact us for any questions.

Related documentation


NVIDIA, the NVIDIA logo and CUDA are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. 

December 12, 2016

Specifying decoder threads in Live Transcoder

Nimble Streamer Live Transcoder allows performing various transformations over incoming video and audio streams.

Any incoming stream needs to be decoded before any further transformation unless you use a pass-through mode. The decoding may be either hardware-based (such as NVidia GPU decoding supported by Live Transcoder or Intel QuickSync) or software-based.

Software decoding may be optimized to use processor resources efficiently. This is why the Transcoder allows using multiple threads for video decoding.

By default, decoding is performed using one thread per stream on one CPU core. If decoding doesn't consume an entire core, more threads are added to that core.

However, in some cases one core is not enough to decode one stream. If you look at Nimble logs you will see messages like
Failed to decode video in appropriate rate
This means one stream's decoding needs to run across several cores - this is where multiple threads are used.

Before adding new threads, make sure your CPU is not already 100% loaded.


Let's see how you can set it up. In the transcoding scenario, point at the blue video decoder rectangle (the one with a film icon) and click the gear button that appears.


You'll see the decoder settings dialog. The Decoder drop-down list shows the "Default" option - the software decoder used by Nimble Transcoder by default.




Use the Threads edit box to specify the number of threads used for decoding this particular incoming stream. Now click OK and then save the transcoding scenario to apply it to the server.

You can specify this separately for any decoder in any scenario.

Visit Live Transcoder webpage for more details about Live Transcoder and contact us if you have any question.

Related documentation


December 8, 2016

NVENC decoder in Nimble Live Transcoder

Nimble Streamer Live Transcoder has full support for NVidia video transcoding hardware acceleration. With capable hardware and properly installed drivers, our customers can choose NVENC to handle the processing.


NVidia® Products contain a dedicated accelerator for video decoding and encoding, called NVENC, on the GPU die.

We've previously described the NVidia encoding setup. Now let's see how hardware-based decoding can be used.

The following codecs are supported for decoding:
  • H.264/AVC
  • H.265/HEVC
  • VP8 and VP9
  • AV1

Check the list of compatible hardware to see where each codec is supported.

In the transcoding scenario, point at the blue video decoder rectangle (the one with a film icon) and click the gear button that appears.


You'll see the decoder settings dialog. The Decoder drop-down list shows the "Default" option - the software decoder used by Nimble Transcoder by default.



To use the GPU decoder, choose NVENC from the list. This will make an NVidia GPU take over the decoding.


The GPU field allows specifying the sequential number of the physical GPU to use for decoding. So if you want a specific GPU to decode a specific stream, type its number, e.g. 0, 1, etc., for as many GPUs as you have. If you set it to "auto", Nimble Transcoder will choose the least busy GPU.

Many consumer NVidia GPU cards are restricted to up to 8 active encoding sessions (depending on the GPU driver version), while decoding sessions are not limited. So you can use even a GTX card to help the transcoder decode without limitation. Check the NVENC support matrix for more.

Deinterlacing mode has the following values:

  • weave will weave both fields (no deinterlacing)
  • bob will drop one field
  • adaptive will enable adaptive deinterlacing

Please refer to NVidia documentation for more details on each mode.

If you'd like to use software decoder though - please check this article.

To improve your NVENC transcoding experience, please also take a look at Transcoder troubleshooting covering most frequent questions.

Zabbix monitoring of Nimble Streamer allows tracking server status, SRT streams and NVidia GPU status.

We keep improving our transcoder feature set, contact us for any questions.

Related documentation



NVIDIA, the NVIDIA logo and CUDA are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. 

November 30, 2016

November news

We've made a few significant updates in November.

First, Larix mobile SDK for iOS has been updated with a new feature - Auto Focus Lock (AF-L). Just long press anywhere in the preview, and AF is locked until you tap to re-focus.
Use this page to proceed with SDK license subscription.

Nimble Streamer has a few updates as well.


Published Icecast streams can now be processed in Nimble Streamer. Read this article to see how it's set up and find out more about audio streaming scenarios supported by Nimble Streamer.

If you face any artifacts when publishing streams via UDP to Nimble Streamer, read this post describing the steps to avoid them.


WMSPanel non-admin users can now be granted permission by account admins to control Nimble Streamer instances. Please read this article for more details.

Last but not least, if you use WMSPanel API to control Nimble instances, you may set up threshold notifications to be alerted when you make too many API requests and are close to reaching the call limit. Visit this page for details.



Also check the State of Streaming Protocols for November 2016.


Follow us on Facebook, Twitter, Google+ or LinkedIn to get the latest news and updates on our products and services.

The State of Streaming Protocols - November 2016

WMSPanel team continues analyzing the state of streaming protocols.

The metrics calculations are based on ~3.7 billion views. The stats are collected from 3100+ media servers (Nimble Streamer and Wowza).

Protocols share remains stable: HLS is about 77%, RTMP around 12% and progressive download near 5%.

The State of Streaming Protocols, November 2016

You can compare that to October stats below.

November 16, 2016

Processing published Icecast in Nimble Streamer

Nimble Streamer has a wide audio streaming feature set which includes both live and VOD. Live audio streaming covers both transmuxing and transcoding of pulled Icecast streams.

Now we're expanding this feature set by supporting published Icecast sources with MP3 or AAC codecs.

The current solution was tested with a number of existing Icecast publishing tools, including the following:


Those tools' configuration files and sample playlists can be found in a separate github repo in our account.

In our case the standard "source" login is used and the password is "secret" - just for demo purposes.

Let's see how it's set up for Nimble Streamer using WMSPanel.


First we need to define the interfaces used for accepting published Icecast streams. Go to Nimble Streamer -> Live streams settings, choose your server from the drop-down list and click the Interfaces tab.

List of interfaces
Now click on Add Icecast interface to see dialog as shown below.

Define Icecast interface

Add the IP address and port here. Usually it's port 8000; you may leave the IP Address field blank to listen on all IP addresses.

Now click on Applications tab to see the list of existing apps. Then click on Add application settings to define new app.

Adding new application for Icecast published stream

Here the application name will be "icecast", as defined in the configs. The login is "source" and the password is "secret" - the same as in the configs.

Save settings to apply them to your servers.

The published Icecast stream will be available for playback or other actions within a few seconds after the settings are applied to the server. Go to the Nimble Streamer -> Live Streams menu to see the new incoming stream, then switch to Outgoing streams to see the resulting output. There you can try playback and get the output stream URL for further usage, as described in this article.

Further audio streaming options


Metadata - you can work with it using Nimble settings:



With Icecast/SHOUTcast streams processed via a single transmuxing engine, you may also use them in various scenarios like those mentioned below.

Quality


The Qosifire quality monitoring service allows tracking the availability of Icecast streams and performing silence detection.

Example

Online radio snapshot page shows how to use Nimble Streamer and other Softvelum products for building audio delivery.



Let us know if you have any suggestions or questions regarding audio streaming - we're open for discussion.

Related documentation



November 15, 2016

Nimble Streamer control for non-admin WMSPanel users

Nimble Streamer can be controlled in two ways: the first is to change config files, the second is to use WMSPanel as a web UI.

WMSPanel is the easiest way to manage your streaming infrastructure based on Nimble Streamer. You can access it via any browser and apply settings to multiple servers.

Secure your account in 3 easy steps article gives more ideas about working securely in WMSPanel.

Usually only account admins could control Nimble Streamer behavior. Now WMSPanel allows admins to grant non-admin users permission to control Nimble Streamer instances. You can specify which servers they may control and then set up white label access to WMSPanel to change the panel's look-and-feel to whatever you need.

Let's see how you can give that access.

Go to Control -> Users management menu to see the list of current users.

Users list

Now click the Abilities link on the designated user's line to see the following dialog.

Servers selection dialog

Here you can see the list of servers currently in your account. Select those you need to give access to and click Save.

After that, the selected user will be able to do all the setup of his/her Nimble Streamer instance, just like the account administrator.

Slice-wide permissions

You can also define permissions for all users in a particular slice. Read this article for more details.

Two-factor authentication

Any WMSPanel user may enable two-factor authentication for his user account. Please read this article for more details.



If you have any questions about our feature set, please contact our helpdesk.

Related documentation



October 31, 2016

October news


October brought some good news for our customers.

First of all, we're honored to see Nimble Streamer as a finalist of the Streaming Media Europe Readers' Choice Awards in the "Best Streaming Innovation" nomination. We thank everyone who voted for us and hope to get more highlights from the industry press in the future.

Speaking of industry highlights, we are glad to see that the Radiant Media Player team now considers Nimble Streamer a fully supported media server. You can see us among the partners of this excellent solution.

Live Transcoder


Nimble Streamer Live Transcoder now has full support for NVidia hardware encoding acceleration via NVENC for H.264. It's available for both Windows and Linux platforms.
You can read this article for more details about the setup and capabilities.

Our customers report huge CPU off-loading when using GPUs via our Transcoder.

Nimble Streamer


Nimble has improvements for both VOD and live scenarios.

When setting up an origin-edge delivery configuration with the RTMP delayed pull option, viewers may experience some delay when starting playback from the edge. We've added a new origin-side option which you can use to decrease that startup buffering. Read this article for more details.

Audio-only and video-only transmuxing for ABR VOD HLS is now supported. You will use SMIL files for that purpose; read this article for more details.

Mobile SDK


Larix mobile SDK was improved for both iOS and Android.


  • iOS SDK now has Larix sample application with Swift 3 support along with improved audio quality.
  • Android has streaming enhancements as well as sound Mute support.


The latest versions of the SDK packages will be sent to our customers this week. Subscribe here to get them in case you haven't done so yet.


Dispersa

Our media streams monitoring service has a few updates as well.

When you set up stream checks, you can now choose to be alerted via email and push API either when at least one checkpoint reports the stream offline or when all checkpoints report it offline.
For your convenience, the subject line now also contains the stream name and checkpoint count, like "Stream offline alert (1/6) - Nimble Promo Video AMS1".



Last but not least: check the State of Streaming Protocols for October 2016.

Follow us on Facebook, Twitter, Google+ or LinkedIn to get the latest news and updates on our products and services.

The State of Streaming Protocols - October 2016

WMSPanel team continues analyzing the state of streaming protocols.

The metrics calculations are based on 3.7+ billion views. The stats are collected from 3000+ media servers (Nimble Streamer and Wowza).

Protocols share remains stable: HLS is about 75%, RTMP around 12% and progressive download near 6%. MPEG-DASH is ahead of SmoothStreaming, both having less than 1%.

The State of Streaming Protocols, October 2016

You can compare that to September stats below.

October 25, 2016

RTMP delayed pull buffer improvements

RTMP-related feature set of Nimble Streamer allows creating various streaming scenarios with your infrastructure. One of them is delayed pull, or pull by request. It allows saving bandwidth when some stream is not consumed by viewers.

When a viewer requests some stream from an edge server - e.g. via HLS - the edge starts pulling the RTMP stream right at that moment. So the origin serves the stream only when it's needed. You can read more in this article.


Usually it takes some time to start playback on the viewer side, so we've added an enhancement for this scenario. You can now specify the RTMP buffer item count on the origin so the edge gets a bigger buffer to provide to the player.

This way the player gets several chunks almost immediately and starts playback within a second, without buffering.

To make this work, follow these steps.

1. Upgrade your edge and origin Nimble instances.

2. On your origin, open nimble.conf and add
rtmp_buffer_items = 4096

3. Restart origin server.

Now, once the edge requests the origin, it will immediately get a 30-second buffer and start the playback.

You may find other fine tuning techniques for live streaming in Performance tuning guide.
Also, take a look at RTMP setup in Nimble Streamer video tutorial to learn more about basic RTMP scenarios.

Streaming Media Europe Readers' Choice Award 2016

As you know, Streaming Media is the leading magazine in the online streaming industry. We've been visiting their conferences - Streaming Media East and Streaming Media West - for several years now and we're excited to participate in their activities.

Now we're honored to have Nimble Streamer as a finalist of Streaming Media Europe Readers Choice Awards in the "Best Streaming Innovation" nomination!



Thanks to everyone who voted for us!

October 20, 2016

NVidia NVENC settings in Nimble Streamer Live Transcoder

NVidia® Products with the Kepler, Maxwell and Pascal generation GPUs contain a dedicated accelerator for video encoding, called NVENC, on the GPU die.

NVENCODE API enables software developers to configure this dedicated hardware video encoder. This dedicated accelerator encodes video at higher speeds and power efficiency than CUDA-based or CPU-based encoders at equivalent quality. NVENCODE API allows the programmer to control various settings of the encoder to set the desired tradeoff between quality and performance.

Nimble Streamer Live Transcoder has full support for NVidia video encoding and decoding hardware acceleration. With capable hardware and properly installed drivers, our customers can choose NVENC to handle stream encoding.

You can take a look at the list of NVidia GPUs capable of hardware encoding acceleration. To make HW acceleration work, you need to install the graphic card drivers into the system. Use this link to download and install them.

If you haven't yet installed Nimble Streamer transcoder, use this page to find proper setup instruction.

The transcoding scenarios are created using our excellent web UI. You can check this YouTube playlist to see how various use cases are defined - each takes just a couple of minutes to complete.


Scenarios setup page
Part of ABR scenario setup example

To set up NVENC settings you need to open encoder settings dialog and choose "nvenc" as the Encoder.


After that you can add various parameters and set specific values to tune your encoding process. The full list of available encoding parameters is below.
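For instance, a typical parameter set for a 720p live stream might look like the following. The values here are illustrative assumptions only, not recommendations; in Nimble Streamer these are entered as name/value pairs in the encoder settings dialog, shown below as a Python dict purely for convenience:

```python
# Hypothetical NVENC parameter set for a 720p live stream; the values are
# illustrative assumptions only. In Nimble Streamer these are entered as
# name/value pairs in the encoder settings dialog.
nvenc_params = {
    "preset": "hq",         # high quality preset
    "profile": "main",      # H.264 Main profile
    "rate_control": "cbr",  # constant bitrate for predictable bandwidth
    "bitrate": 2500,        # Kbps
    "keyint": 60,           # a keyframe every 2 seconds at 30 fps
    "bframes": 2,           # IBBP pattern
}

for name, value in nvenc_params.items():
    print(f"{name} = {value}")
```

Each of the parameters used above is described in the reference that follows.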


preset

Specifies H.264 preset.

  • hp - high performance
  • default - tradeoff between performance and quality
  • hq - high quality
  • llhp - low latency high performance
  • ll - default low latency preset; quality and speed are midway between the other two low latency presets
  • llhq - low latency high quality
  • lossless - default lossless preset
  • losslesshp - lossless high performance
  • bd - Blu-ray disk preset (NV_ENC_PRESET_BD_GUID)


profile

Specifies H.264 profile.

  • baseline
  • main
  • high
  • high444


level

Specifies H.264 profile level.

  • 1
  • 1.0
  • 1b
  • 1.1
  • 1.2
  • 1.3
  • 2
  • 2.1
  • 2.2
  • 3
  • 3.1
  • 3.2
  • 4
  • 4.0
  • 4.1
  • 4.2
  • 5
  • 5.1

gpu

Selects which NVENC capable GPU to use. First GPU is 0, second is 1, and so on.

If you set it to "auto" then transcoder will choose the least busy GPU.

keyint

Number of pictures within the current GOP (Group of Pictures).

  • 0 - NVENC_INFINITE_GOPLENGTH
  • 1 - only I-frames are used
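
Since keyint counts pictures rather than seconds, a fixed keyframe interval in seconds has to be converted using the stream's frame rate. A quick sketch (the 30 fps and two-second values are example assumptions):

```python
fps = 30                   # frame rate of the encoded stream (example)
keyframe_interval_sec = 2  # desired distance between keyframes (example)

# keyint is expressed in pictures, so multiply the interval by the frame rate
keyint = fps * keyframe_interval_sec
print(keyint)  # 60
```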

bframes

Specifies the maximum number of B-frames between non-B-frames.

  • 0 - no B-frames
  • 1 - IBP
  • 2 - IBBP

refs

Specifies the DPB size used for encoding.
Setting it to 0 lets the driver use the default DPB size. Low latency applications that invalidate reference frames as an error resilience tool should use a large DPB size so that the encoder can keep old reference frames, which can be used if recent frames are invalidated.

fps_n, fps_d

Set output FPS numerator and denominator. It only affects num_units_in_tick and time_scale fields in SPS.
If fps_n=30 and fps_d=1 then it's 30 FPS
If fps_n=60000 and fps_d=2002 then it's 29.97 FPS
Source stream FPS or filter FPS is used if fps_n and fps_d are not set.
Please also check Handling fuzzy FPS to get proper bitrate output article.
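The resulting frame rate is simply the numerator divided by the denominator; a tiny sketch reproducing the two examples above:

```python
def fps(fps_n, fps_d):
    # output FPS is the numerator/denominator ratio
    return fps_n / fps_d

print(round(fps(30, 1), 2))        # 30.0
print(round(fps(60000, 2002), 2))  # 29.97
```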

rate_control

Sets the rate control mode.

  • cqp - Constant QP mode
  • vbr - Variable bitrate mode
  • cbr - Constant bitrate mode
  • vbr_minqp - Variable bitrate mode with MinQP
  • ll_2pass_quality - Multi pass encoding optimized for image quality; works only in low latency mode
  • ll_2pass_size - Multi pass encoding optimized for maintaining frame size; works only in low latency mode
  • vbr_2pass - Multi pass VBR


bitrate

Sets bitrate in Kbps.

max_bitrate

Sets max bitrate in Kbps.

init_bufsize

Specifies the VBV(HRD) initial delay in Kbits.

  • 0 - use the default VBV initial delay

bufsize

Specifies the VBV(HRD) buffer size in Kbits.

  • 0 - use the default VBV buffer size
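
There is no single correct VBV size; a common rule of thumb (an assumption of this sketch, not a Nimble default) is to give the buffer one to two seconds' worth of the target bitrate. Since bitrate is in Kbps and bufsize in Kbits, the arithmetic is straightforward:

```python
bitrate = 2500      # target bitrate in Kbps (example value)
buffer_seconds = 2  # rule-of-thumb multiplier, tune per use case

# Kbps * seconds = Kbits, which is exactly the unit bufsize expects
bufsize = bitrate * buffer_seconds
print(bufsize)  # 5000
```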

qpi, qpp, qpb

Specifies the initial QP values to be used for encoding; in CQP mode these values are used for all frames.

qmin

Specifies the minimum QP used for rate control.

qmax

Specifies the maximum QP used for rate control.

initialRCQP

Specifies the initial QP used for rate control.

quality

Target Constant Quality level for VBR mode (range 0-51, where 0 means automatic).

lossless

Enables lossless encoding: sets QP to 0, rate control mode to NV_ENC_PARAMS_RC_CONSTQP and profile to HIGH_444_PREDICTIVE_PROFILE.

  • 0 - disable
  • 1 - enable

keep_sar

If your input stream is anamorphic you might need to preserve its SAR parameter in the output as well, especially if you're using a 'scale' filter in your Transcoder pipeline, since DAR = SAR x Width / Height. Nimble supports keeping the input SAR via the keep_sar parameter set to true for the encoder in its 'Video output' section. SAR/DAR/PAR correlation is described in this article.

monoChromeEncoding

Enables monochrome encoding.

  • 0 - disable
  • 1 - enable


frameFieldMode

Specifies the frame/field mode.

  • frame - NV_ENC_PARAMS_FRAME_FIELD_MODE_FRAME
  • field - NV_ENC_PARAMS_FRAME_FIELD_MODE_FIELD
  • mbaff - NV_ENC_PARAMS_FRAME_FIELD_MODE_MBAFF


mvPrecision

Specifies the desired motion vector prediction precision.

  • default - NV_ENC_MV_PRECISION_DEFAULT
  • full_pell - NV_ENC_MV_PRECISION_FULL_PEL
  • half_pell - NV_ENC_MV_PRECISION_HALF_PEL
  • quarter_pel - NV_ENC_MV_PRECISION_QUARTER_PEL


enableAQ

Enables spatial adaptive quantization.

  • 0 - disable
  • 1 - enable


aqStrength

Specifies AQ strength on a scale from 1 (low) to 15 (aggressive).


enableTemporalAQ

Enables temporal adaptive quantization.

  • 0 - disable
  • 1 - enable


strictGOPTarget

Enable to minimize GOP-to-GOP rate fluctuations.

  • 0 - disable
  • 1 - enable


enableLookahead

Enables lookahead with the depth set by lookaheadDepth.

lookaheadDepth

Maximum depth of lookahead with range 0-32 (only used if enableLookahead=1)

disableIadapt

Disable adaptive I-frame insertion at scene cuts (only has an effect when lookahead is enabled).

  • 0 - none
  • 1 - disable adaptive I-frame insertion


disableBadapt

Disable adaptive B-frame decision (only has an effect when lookahead is enabled)

  • 0 - none
  • 1 - Disable adaptive B-frame decision


enableIntraRefresh

Enable intra refresh. If the GOP structure uses B frames this will be ignored

  • 0 - disable
  • 1 - enable


intraRefreshPeriod

Interval between successive intra refreshes.

intraRefreshCnt

Length of intra refresh in number of frames for periodic intra refresh. This value should be smaller than intraRefreshPeriod.
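
The constraint above can be expressed as a quick sanity check (an illustration only, not Nimble's own validation):

```python
def intra_refresh_valid(period, cnt):
    # the refresh length must fit inside the refresh period
    return cnt < period

print(intra_refresh_valid(300, 60))  # True
print(intra_refresh_valid(60, 300))  # False
```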

enableConstrainedEncoding

Set this to 1 to enable constrainedFrame encoding, where each slice in the constrained picture is independent of other slices.

useConstrainedIntraPred

Set to 1 to enable constrained intra prediction.

separateColourPlaneFlag

Set to 1 to enable 4:4:4 separate colour planes.

deblockingFilterMode

Specifies the deblocking filter mode. Permissible value range: [0, 2].

adaptiveTransform

Specifies the Adaptive Transform mode.

  • auto - Adaptive Transform 8x8 mode is auto selected by the encoder driver
  • disable - Adaptive Transform 8x8 mode disabled
  • enable - Adaptive Transform 8x8 mode should be used


fmo

Specifies the FMO mode.

  • auto - FMO usage is auto selected by the encoder driver
  • enable - Enable FMO
  • disable - Disable FMO


bdirect

Specifies the BDirect mode

  • auto - BDirect mode is auto selected by the encoder driver
  • disable - Disable BDirect mode
  • temporal - Temporal BDirect mode
  • spatial - Spatial BDirect mode


entropyCoding

Specifies the entropy coding mode

  • auto - Entropy coding mode is auto selected by the encoder driver
  • cabac - Entropy coding mode is CABAC
  • cavlc - Entropy coding mode is CAVLC


sliceMode

Specifies the way in which the picture is divided into slices.

  • 0 - MB based slices
  • 1 - Byte based slices
  • 2 - MB row based slices
  • 3 - numSlices in Picture

When sliceMode=0 and sliceModeData=0, the whole picture will be coded with one slice.

sliceModeData

Specifies the data for the sliceMode parameter.

  • sliceMode=0, sliceModeData specifies the number of MBs in each slice (except the last slice)
  • sliceMode=1, sliceModeData specifies the maximum number of bytes in each slice (except the last slice)
  • sliceMode=2, sliceModeData specifies the number of MB rows in each slice (except the last slice)
  • sliceMode=3, sliceModeData specifies the number of slices in the picture. The driver will divide the picture into slices optimally.
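
For sliceMode=0 and sliceMode=2 it helps to know how many macroblocks a frame contains. In H.264 a macroblock covers 16x16 pixels, so the count follows from the resolution; a small sketch (the 1280x720 resolution and slice size are just examples):

```python
import math

def macroblock_count(width, height):
    # H.264 macroblocks are 16x16 pixels; partially covered blocks still count
    return math.ceil(width / 16) * math.ceil(height / 16)

mbs = macroblock_count(1280, 720)
print(mbs)  # 3600

# e.g. sliceMode=0 with sliceModeData=900 would split such a frame
# into 3600 / 900 = 4 slices
```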

These are the parameters you can already use to control NVidia video encoding hardware acceleration. Live Transcoder also supports NVidia hardware decoding.

To improve your NVENC transcoding experience, please also take a look at Transcoder troubleshooting covering most frequent questions.

Zabbix monitoring of Nimble Streamer allows tracking server status, SRT streams and NVidia GPU status.

We keep improving our transcoder feature set, contact us for any questions.

Related documentation



NVIDIA, the NVIDIA logo and CUDA are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. 

October 17, 2016

Re-sampling audio with Nimble Streamer Transcoder

Nimble Streamer Live Transcoder gives a wide range of capabilities to transform both video and audio. Audio features allow performing various complex actions on sound as well as simple one-step actions like audio re-sampling.

Let's see a sample scenario shown below.

First, you see a video passthrough, since this demo is focused on audio.
Then you see an audio stream input and audio output with the AAC encoder.


In between, a custom filter is added. Here are the details:



Set the filter name to aformat and filter params to sample_rates=32000. This will re-sample audio to 32 kHz.

That's it. Saving settings will apply this to designated transcoder instance.


Feel free to visit the Live Transcoder webpage for more details and contact us if you have any questions.

Related documentation


October 13, 2016

Audio-only and video-only transmuxing for HLS and MPEG-DASH via SMIL files

Nimble Streamer handles VOD streaming in various ways; one of them is ABR VOD via HLS and MPEG-DASH streaming protocols using SMIL files.

You can use SMIL to specify separate tracks in MP4 files to be transmuxed as audio-only or video-only streams for both HLS and DASH. This allows lowering the bandwidth usage.

To illustrate this approach, let's take a look at an audio-only use case. We have the "bigbuckbunny_450.mp4" file containing both the audio track we want to use and a video track. There are a bunch of other files with different video renditions, without audio.

Check the sample SMIL file below.


<?xml version="1.0" encoding="UTF-8"?>
<smil title="">
 <body>
  <switch>
   <audioOnly src="bigbuckbunny_450.mp4" systemLanguage="eng" groupId="aac" default="true" autoSelect="true"/>
   <video src="bigbuckbunny_450.mp4" systemLanguage="eng" system-bitrate="450000" hlsAudioGroupId="aac"/>
   <video src="bigbuckbunny_750.mp4" systemLanguage="eng" system-bitrate="750000" hlsAudioGroupId="aac"/>
   <video src="bigbuckbunny_1100.mp4" systemLanguage="eng" system-bitrate="1100000" hlsAudioGroupId="aac"/>
   <video src="bigbuckbunny_1500.mp4" systemLanguage="eng" system-bitrate="1500000" hlsAudioGroupId="aac"/>
  </switch>
 </body>
</smil>

Get it on github. You can find other SMIL examples in this github repo.

Notice the groupId="aac" parameter in the audioOnly tag, along with the hlsAudioGroupId="aac" parameter in each video tag. This virtual group combines them so they use the same audio track.
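If the group IDs don't match, the audio group won't be picked up, so a small sanity check can save some debugging time. The sketch below (an illustration only, not part of Nimble Streamer) uses Python's xml.etree to verify that every video tag's hlsAudioGroupId refers to an existing audioOnly groupId:

```python
import xml.etree.ElementTree as ET

# A trimmed-down version of the SMIL example above (XML prolog omitted,
# since ET.fromstring rejects str input with an encoding declaration)
SMIL = """<smil title="">
 <body>
  <switch>
   <audioOnly src="bigbuckbunny_450.mp4" groupId="aac" default="true"/>
   <video src="bigbuckbunny_450.mp4" system-bitrate="450000" hlsAudioGroupId="aac"/>
   <video src="bigbuckbunny_750.mp4" system-bitrate="750000" hlsAudioGroupId="aac"/>
  </switch>
 </body>
</smil>"""

root = ET.fromstring(SMIL)
audio_groups = {a.get("groupId") for a in root.iter("audioOnly")}
missing = [v.get("src") for v in root.iter("video")
           if v.get("hlsAudioGroupId") not in audio_groups]

print("OK" if not missing else f"No matching audio group for: {missing}")
```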

The result playlist would be as follows:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="eng",DEFAULT="YES",AUTOSELECT="YES",URI="bigbuckbunny__450.mp4_audiochunk.m3u8?nimblesessionid=96"
#EXT-X-STREAM-INF:BANDWIDTH=450000,AUDIO="aac",LANGUAGE="eng"
bigbuckbunny__450.mp4_videochunk.m3u8?nimblesessionid=96
#EXT-X-STREAM-INF:BANDWIDTH=750000,AUDIO="aac",LANGUAGE="eng"
bigbuckbunny__750.mp4_videochunk.m3u8?nimblesessionid=96
#EXT-X-STREAM-INF:BANDWIDTH=1100000,AUDIO="aac",LANGUAGE="eng"
bigbuckbunny__1100.mp4_videochunk.m3u8?nimblesessionid=96
#EXT-X-STREAM-INF:BANDWIDTH=1500000,AUDIO="aac",LANGUAGE="eng"
bigbuckbunny__1500.mp4_videochunk.m3u8?nimblesessionid=96

When the player gets the playlist it still shows multiple renditions; during playback the audio track is transmuxed from the bigbuckbunny_450.mp4 file, while the video is taken from the selected video file.

Multiple audio tracks from same file


You can add multiple audio tracks if needed; this is especially useful for multi-language videos. If you have a single MP4 file with multiple audio tracks, you may use those tracks in your SMIL and the resulting playlist via the audioIndex parameter. You can use the title parameter to set the name for the track - this is what will be shown in the player selection.
Here's a sample SMIL.
<?xml version="1.0" encoding="UTF-8"?>
<smil title="">
 <body>
  <switch>
   <audioOnly src="video.mp4" systemLanguage="eng" title="English" groupId="aac" audioIndex="0" default="true" autoSelect="true"/>
   <audioOnly src="video.mp4" systemLanguage="fra" title="French"  groupId="aac" audioIndex="1"/>
   <audioOnly src="video.mp4" systemLanguage="spa" title="Spanish" groupId="aac" audioIndex="2"/>
   <video src="video.mp4" systemLanguage="eng" system-bitrate="450000" hlsAudioGroupId="aac"/>
  </switch>
 </body>
</smil>
Notice the systemLanguage parameter - it's different for each language.

The result playlist will be as follows:
#EXTM3U 
#EXT-X-VERSION:3 
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="eng",NAME="English",DEFAULT="YES",AUTOSELECT="YES",URI="video.mp4_audiochunk.m3u8?nimblesessionid=74" 
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="fra",NAME="French",URI="video.mp4_audiochunk.m3u8?nimblesessionid=74&nimble_audio_index=1" 
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="spa",NAME="Spanish",URI="video.mp4_audiochunk.m3u8?nimblesessionid=74&nimble_audio_index=2" 
#EXT-X-STREAM-INF:BANDWIDTH=450000,AUDIO="aac",LANGUAGE="eng" 
video.mp4_videochunk.m3u8?nimblesessionid=74


If you have any questions regarding this or related feature sets, please contact us.

Related documentation


Nimble Streamer, VOD Streaming in Nimble, SMIL support for MPEG-DASH, MP4 transmuxing to HLS VOD streaming, Subtitles support in Nimble Streamer, Using SMIL in Nimble Streamer

September 30, 2016

September news

This month we concentrated on two major directions: improvements and bug fixing in Nimble Streamer, and mobile SDK development.


Nimble Streamer

The DVR feature set of Nimble Streamer was improved with two changes.


  • Archive read-only mode, which allows playing the stream while the recording is stopped. As our API also supports this feature, scheduled recording can be performed easily.
  • Maximum archive size parameter to avoid disk overflow. You can check original DVR setup article to see how you can set it.



Windows Phone live streaming

Mobile broadcasting SDK was improved with Windows Phone support. Your Windows device can now broadcast a live stream via RTMP to any media server or service which supports this protocol.

Larix Broadcaster is also available in Windows Store to demonstrate current SDK capabilities.

Get it from Microsoft

You can install it for free and use it in any live streaming use case.


iOS SDK

Our mobile SDK for iOS was improved with new graphics and a "Mute" button.

White label streaming application

If you'd like to customize Larix Broadcaster for any platform by adding a custom app name, logo, connection stream etc., you can request it as a white label app. You won't need to hire a developer to customize basic things, so this will save you some effort and time.
Contact us in case you are interested.



Last but not least: check the State of Streaming Protocols for September 2016.


Follow us at Facebook, Twitter, Google+ or LinkedIn to get the latest news and updates on our products and services.