March 31, 2017

2017Q1 news

The first quarter of 2017 was full of updates for all major products of our company.

Before getting to the news, we'd like to mention that our company's CEO and CTO will be visiting NAB Show 2017 this April.
If you'd like to meet us and talk about our products, plans or anything else - just drop us a note so we can schedule a proper time slot.


Also, the State of Streaming Protocols for Q1 2017 is available below, with MPEG-DASH on the rise - check it out.

Nimble Streamer


Our software media server and its transcoder got a number of important updates.

We've ported Nimble to the IBM POWER8 architecture. It's a good addition to the traditional x64 and ARM architectures which were supported before.

Speaking of hardware, we ran extensive testing of the latest NVidia Tesla M60 graphics card in IBM Bluemix Cloud Platform to see how much it increases the performance of Live Transcoder for Nimble Streamer. We got excellent results - read this article for full details.

Live Transcoder now uses two more coding libraries in addition to the already supported ones:

  • VA API (libVA) for hardware-accelerated H.264 and VP8 encoding.
  • FDK AAC for AAC decoding and encoding, including HE-AAC and HE-AACv2 profiles.

Video and audio can also be bound together in case they come from un-synced sources. Read this article for details. Those un-synced sources may be video and audio files - our transcoder is now capable of producing live streams from them. The same article describes how this can be done. You can also check our videos which illustrate this process.

CEA-708 subtitles forwarding is now available in Nimble Streamer for both transmuxing and transcoding.

Compensation of live stream timing errors for DVR recording was added as well.

A couple of updates for our protocols processing engine:

  • RTSP streams can now be taken over HTTP using VAPIX.
  • MPEG-TS processing was enhanced by adding the mux rate setting. We've also added a brief troubleshooting section to the corresponding article to make sure our customers can overcome typical issues related to UDP delivery. Read this article for more details; a brief sketch of the mux rate concept follows below.
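For reference, here's the arithmetic behind a fixed mux rate: a constant-rate transport stream is padded with null packets (PID 0x1FFF) so that the total packet rate stays constant. The sketch below is purely illustrative and shows the concept rather than Nimble's implementation:

    # Illustrative only: null-packet padding behind a constant mux rate.
    # An MPEG-TS packet is 188 bytes; a CBR mux inserts null packets
    # (PID 0x1FFF) so the output rate matches the configured mux rate.
    TS_PACKET_BITS = 188 * 8

    def null_packet_share(payload_bitrate: float, mux_rate: float) -> float:
        """Fraction of output packets that are padding; payload_bitrate <= mux_rate."""
        total_pps = mux_rate / TS_PACKET_BITS          # total packets per second
        payload_pps = payload_bitrate / TS_PACKET_BITS
        return (total_pps - payload_pps) / total_pps

    # e.g. a 4 Mbps stream muxed at a 6 Mbps mux rate is ~1/3 null packets
    print(null_packet_share(4_000_000, 6_000_000))     # 0.333...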

New WMSPanel statistics


The ASN viewers count metric is now available in WMSPanel. It will be useful for companies that want to build delivery networks with better latency and user experience.


Larix Mobile Broadcasting SDK


The mobile SDK has been continuously improved. You can find descriptions of all the latest versions in the SDK release notes.


Android and iOS

Both platforms had the following updates:

  • Multiple connections streaming. You can add several connection profiles and choose up to 3 connections for simultaneous streaming. You can stream to several destinations, like your primary and secondary origin servers, and also target third-party services like YouTube Live or Twitch.
  • Limelight authentication is available. You can publish your streams directly into Limelight CDN for further delivery.
  • Streaming and user experience improvements.


Windows Phone

We've added various updates to the Windows Phone application to keep it up to date with the fixes on other platforms.

As always, you can find the latest releases of the Larix Broadcaster streaming app in the AppStore, Google Play and Windows Store.




The next quarter will bring more features, so stay tuned. Follow us on Facebook, Twitter or Google+ to get the latest news and updates on our products and services.



The State of Streaming Protocols - 2017 Q1

The Softvelum team, which operates the WMSPanel reporting service, continues analyzing the state of streaming protocols.

The first quarter of 2017 has passed, so let's take a look at the stats. The media servers connected to WMSPanel - 3300+ instances running Nimble Streamer and Wowza - processed more than 10 billion connections during the past 3 months.

First, let's take a look at the chart and numbers:

The State of Streaming Protocols - 2017 Q1



You can compare that to the picture of 2016 Q4 protocols landscape:

The State of Streaming Protocols - 2016 Q4

In the 4th quarter of 2016, the stats were collected from 3200+ servers.

What can we see?

  • HLS is still stable as the dominant protocol, with its share at about 70% of all connections.
  • RTMP keeps its position and even increased its share to 12%. Low latency streaming use cases still need this protocol.
  • Progressive download is the 3rd most popular at 6%.
  • MPEG-DASH overtook HDS, Icecast and MPEG-TS, nearly tripling its views count - it's now the 5th most popular protocol.
  • RTSP and Icecast kept their shares.

So MPEG-DASH is the only protocol which visibly grew. You can also check the December summary of streaming protocols.

We'll keep analyzing protocols to track the dynamics. Check our updates on Facebook, Twitter or Google+.

March 15, 2017

VA API (libVA) support in Nimble Streamer

Video Acceleration API (VA API) is a royalty-free API, along with its implementation as a free and open-source library (libVA). This API provides access to hardware-accelerated video processing, using hardware such as graphics processing units (GPUs) to accelerate video encoding and decoding by offloading processing from the CPU.

Supported codecs are H.264 and VP8.

Nimble Streamer supports VA API, allowing the use of libVA in Live Transcoder as one of the encoding options among other libraries and SDKs.

Let's see how you can start using libVA in Nimble Streamer Live Transcoder.

Open your transcoding scenario or create a new one.

Sample scenario
Click the encoding block's "gear" icon to open the details dialog.

Encoder settings dialog with vaapi as Encoder
Here you need to choose the "vaapi" option from the "Encoder" drop-down and use the "Codec" drop-down list to select either h264 or vp8.
Now you can fill in library-specific parameters like profile etc. Once you save the encoder settings and save the scenario, libVA will start working.

Check the description of all supported parameters below.

H.264 encoding parameters


profile

Specifies the codec profile. The values are:

  • high (default)
  • main
  • constrained baseline

level

Specifies the codec level (level_idc value * 10).
Default: 51 (Level 5.1, up to 4K30)

g, keyint

Number of pictures within the current GOP (Group of Pictures).
1 - only I-frames are used.
Default: 120

bf

Maximum number of B frames between non-B-frames.

  • 0 - no B frames (default)
  • 1 - IBPBP...
  • 2 - IBBPBBP... etc.

rate_control

Sets the bitrate control method.

  • cbr - use the constant bitrate control algorithm; "bitrate", "init_bufsize", "bufsize" and "max_bitrate" might be specified.
  • cqp - use the constant quantization parameter algorithm; "qpi", "qpp" and "qpb" might be specified.

Default: cbr if bitrate is set, cqp otherwise.

b, bitrate

Sets the bitrate in Kbps; this is the maximum bit-rate to be constrained by the rate control implementation.
Must be specified for cbr.

target_percentage

The bit-rate the rate control is targeting, as a percentage of the maximum bit-rate. For example, if target_percentage is 95, the rate control will target a bit-rate that is 95% of the maximum bit-rate.
Default: 66%

windows_size_ms

Window size in milliseconds. For example, if this is set to 500, the rate control will guarantee the target bit-rate over a 500 ms window.
Default: 1000

initial_qp

Initial QP for the first I-frames; 0 - the encoder chooses the best QP according to rate control.
Default: 0

min_qp

Minimal QP for frames; 0 - the encoder chooses the best QP according to rate control.
Default: 0

bufsize

Sets the size of the rate buffer in bytes. If it is equal to zero, the value is calculated using bitrate, frame rate, profile, level, and so on.

init_bufsize

Sets how full the rate buffer must be (in bytes) before playback starts. If it is equal to zero, the value is calculated using bitrate, frame rate, profile, level, and so on.

qpi, qpp, qpb

Quantization parameters for I, P and B frames; must be specified for CQP mode.
The value is in the 1…51 range, where 1 corresponds to the best quality.
Default: 0

quality

Encoding quality - higher is worse but faster; 0 - use the driver default.
Default: 0

fps_n, fps_d

Sets the output FPS numerator and denominator. This only affects the num_units_in_tick and time_scale fields in the SPS.

  • If fps_n=30 and fps_d=1 then it's 30 FPS
  • If fps_n=60000 and fps_d=2002 then it's 29.97 FPS

The source stream FPS or filter FPS is used if fps_n and fps_d are not set.
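To put the parameters together, here's a hypothetical property set for a 2 Mbps CBR stream at 29.97 FPS - the values are examples only, so adjust them to your own content:

    profile=high
    level=41
    g=60
    bf=2
    rate_control=cbr
    b=2000
    target_percentage=95
    fps_n=30000
    fps_d=1001

With fps_n=30000 and fps_d=1001 the signaled frame rate is 30000/1001 ≈ 29.97 FPS, so g=60 produces a keyframe roughly every 2 seconds.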

VP8 encoding parameters


The following parameters can be used if you select VP8 as your target codec.

g, keyint

Number of pictures within the current GOP (Group of Pictures).
1 - only I-frames are used.
Default: 120

rate_control

Sets the bitrate control method.


  • cbr - use the constant bitrate control algorithm; "bitrate", "init_bufsize", "bufsize" and "max_bitrate" might be specified.
  • cqp - use the constant quantization parameter algorithm; "qpi" and "qpp" might be specified.

Default: cbr if bitrate is set, cqp otherwise.

b, bitrate

Sets the bitrate in Kbps; this is the maximum bit-rate to be constrained by the rate control implementation.
Must be specified for cbr.

target_percentage

The bit-rate the rate control is targeting, as a percentage of the maximum bit-rate. For example, if target_percentage is 95, the rate control will target a bit-rate that is 95% of the maximum bit-rate.
Default: 66%

windows_size_ms

Window size in milliseconds. For example, if this is set to 500, the rate control will guarantee the target bit-rate over a 500 ms window.
Default: 1000

initial_qp

Initial QP for the first I-frames; 0 - the encoder chooses the best QP according to rate control.
Default: 0

min_qp

Minimal QP for frames; 0 - the encoder chooses the best QP according to rate control.
Default: 0

bufsize

Sets the size of the rate buffer in bytes. If it is equal to zero, the value is calculated using bitrate, frame rate, profile, level, and so on.

init_bufsize

Sets how full the rate buffer must be (in bytes) before playback starts. If it is equal to zero, the value is calculated using bitrate, frame rate, profile, level, and so on.

qpi, qpp

Quantization parameters for I and P frames; must be specified for CQP mode.
The value is in the 1…51 range, where 1 corresponds to the best quality.
Default: 0

quality

Encoding quality - higher is worse but faster; 0 - use the driver default.
Default: 0

error_resilient

Enables error resilience features.

  • 0 - disable (default)
  • 1 - enable


kf_auto

Auto keyframe placement; a non-zero value enables automatic keyframe placement.

  • 0 - disable
  • 1 - enable (default)


kf_min_dist

Keyframe minimum interval.

kf_max_dist

Keyframe maximum interval.

recon_filter

Reconstruction filter type:

  • 0 - bicubic
  • 1 - bilinear
  • other - none


loop_filter_type

Loop filter type:

  • 0 - no loop filter
  • 1 - simple loop filter


loop_filter_level

Loop filter level value. When loop_filter_level is 0, the loop filter is disabled.

sharpness

Controls the deblocking filter sensitivity.

iqi

I-frame quantization index.
Range: 0..127

pqi

P-frame quantization index.
Range: 0..127
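As an illustration, a hypothetical VP8 property set for constant-quality encoding could look like this (example values only):

    rate_control=cqp
    qpi=10
    qpp=14
    g=120
    kf_auto=1
    loop_filter_type=1
    loop_filter_level=16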






Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features, and contact us if you have any questions.



March 7, 2017

Nimble Streamer on IBM Power8 platform

Nimble Streamer media server is developed as a native application for all popular platforms - you can see this in the full list of supported OSes. It also supports the basic architectures available at the majority of hosting providers.

Today we add support for a new platform: POWER8 by IBM, a family of symmetric multiprocessors. Both Nimble Streamer and Live Transcoder were ported, so you can use the full capabilities of our products on this platform, including live streaming, VOD, DVR and building delivery networks of any kind.

Check the installation instructions for Ubuntu to proceed with deployment. Only Ubuntu 14.04 is supported at the moment.

Nimble Streamer can potentially be ported and embedded into any platform or OS, so feel free to contact us in case you have special requirements.




February 27, 2017

Stress-testing NVidia GPU with IBM

Recently we finished extensive testing of the latest NVidia Tesla M60 graphics card in IBM Bluemix Cloud Platform to see how much it increases the performance of Live Transcoder for Nimble Streamer.

We got excellent results - please read this article for more details:

Stress-testing NVidia GPU for live transcoding

February 16, 2017

FDK AAC encoder and decoder in Nimble Transcoder

Live Transcoder for Nimble Streamer has full support for AAC decoding and encoding, along with various audio filters like re-sampling, transrating or audio channel manipulations.

Now we add FDK AAC support for both decoding and encoding. It allows adding HE-AAC and HE-AACv2 to your transcoding scenarios. It's also another alternative to the ffmpeg decoder for audio streams, providing decent quality.

Let's see how you can set up FDK usage in your scenarios.

First, create a new scenario or modify an existing one. If you only need to perform an audio transformation, you can add a passthrough for the video stream.
Minimum scenario for audio transformation.
As mentioned, you can use FDK for both decoding and encoding. Here is how the decoder looks in this case:

Using FDK as decoder.
So you just select libfdk_aac in the Decoder drop-down list instead of Default.

If you'd like to encode using libfdk, open the encoder dialog and choose libfdk_aac from the Encoder drop-down list.

Using FDK as encoder

This also allows you to select HE-AAC and HE-AACv2 profiles. Type "profile" in the property edit box to get a drop-down list of profiles:
  • aac_low
  • aac_he
  • aac_he_v2
  • aac_ld
  • aac_eld

Choose aac_he or aac_he_v2 for the respective options.
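For instance, to get HE-AACv2 output you'd end up with a single property in the list:

    profile=aac_he_v2

HE-AACv2 adds parametric stereo on top of HE-AAC's spectral band replication, so it's typically used for low-bitrate stereo streams.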


Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features, and contact us if you have any questions.




February 14, 2017

Forward CEA-708 subtitles with Nimble Streamer

Providing subtitles as part of live streaming is important and is required by law in some countries. So people have asked us to add that capability to Nimble Streamer in addition to the existing VOD subtitles support.

There are cases when the source stream coming into Nimble Streamer already contains subtitles meta-information. Nimble now allows forwarding CEA-708 subtitles, which means that all outgoing streams for all supported protocols will include subtitles.

This works for both transmuxing and transcoding of H.264 (AVC) content.

Transmuxing supports this forwarding by default: whatever meta-information is inserted into the original stream is passed through to all other protocols.

To make this work in Live Transcoder scenarios, you need to enable this feature for outgoing streams. Live Transcoder is a premium add-on for our media server with an easy-to-use web UI to control transcoding behavior; to install it and get a license, visit this page.
To enable this feature for a particular encoded stream, edit the encoder block of the stream which you want subtitles to be forwarded for.

Transcoder scenario

Click the encoder details icon to open the encoder details dialog.



Check the Forward CEA-708 subtitles box and save the settings to close the dialog. Then click Save on the scenario page to apply it on the server.

That's it - the forwarding will start working right after the scenario is saved on the server.


Also take a look at DVB subtitles processing and SCTE-35 processing, which can also be passed through Live Transcoder.

Please also check the Subtitles digest page to see what else Nimble can do for you.


Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features, and contact us if you have any questions.



Handling live streams timing errors in Nimble Streamer DVR

Sometimes when an MPEG-TS stream is received from a media source, it may have glitches in either video or audio. This is caused by third-party encoders which assign incorrect time stamps to media fragments - they may go back and forth within some unpredictable range. This happens even when the source stream is transmuxed into other protocols, e.g. RTMP.

This may bother viewers and also cause media servers to malfunction while recording the stream. Nimble Streamer allows compensating for those timing issues and performing correct recording of video and audio in DVR. If the compensation can't help, Nimble just removes the chunk and resets the recording period.

Go to the Nimble Streamer top menu, select Live Streams Settings and open the DVR tab to see its settings.



Open the designated stream's properties, find the Error correction section and check the Drop invalid segments checkbox. This will perform the required correction of the recorded media, and playback will be smooth from the player's point of view.

Keep protocol timestamps. If the original stream has issues with timestamps, Nimble Streamer tries to compensate by re-calculating correct values for DVR. This option disables that compensation: the original timestamps are saved into the database and the recording period is reset.
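To illustrate the idea of such compensation (a simplified sketch, not Nimble's actual code): when an incoming timestamp jumps outside an expected range, the recorder can re-base subsequent timestamps so the DVR timeline stays monotonic. The tolerance and frame duration values below are hypothetical:

    # Simplified sketch of DVR timestamp compensation (illustrative only).
    TOLERANCE_MS = 1000   # hypothetical allowed jitter between chunks
    FRAME_MS = 40         # ~one frame at 25 FPS, used to keep continuity

    def compensate(timestamps_ms):
        fixed, offset, prev = [], 0, None
        for ts in timestamps_ms:
            adjusted = ts + offset
            if prev is not None and abs(adjusted - prev) > TOLERANCE_MS:
                # discontinuity: re-base so this chunk follows the previous one
                offset += prev + FRAME_MS - adjusted
                adjusted = ts + offset
            fixed.append(adjusted)
            prev = adjusted
        return fixed

    print(compensate([0, 40, 80, 100040, 100080]))  # [0, 40, 80, 120, 160]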

Check segment sizes on load. This is a debugging option which enables validation of segment sizes in addition to getting the sizes from the database. It's added for debugging purposes only and increases load time for the DVR archive, so you should not use it by default.

Align segment time (PROGRAM-DATE-TIME) enables PROGRAM-DATE-TIME alignment for HLS segments based on stream timestamps, to avoid drift between PDT and segment durations.

Troubleshooting other issues


Please read the Troubleshooting section in the DVR setup article to see what else you can do to fix DVR-related issues.

Watch our DVR video tutorial: DVR recording and playback in Nimble Streamer

Also notice that HLS DVR streams can be added to SLDP HTML5 Player for rewinding low-latency streams. Read this article for details.

If you have any further questions, contact our team.



February 9, 2017

Viewing ASN statistics for streaming connections

A number of our large customers build and maintain their own media content delivery networks. A common layout includes origin servers which process the content from its sources and edge servers which handle connections from end-users who watch and listen to the media.

It's important to locate edges as close to potential viewers as possible to reduce latency and improve overall user experience, so you need a way to determine the optimal physical location for each edge. This is why it's important to know which ASNs your viewers come from: it allows placing your edges at hosting locations with the proper network peers.

WMSPanel now allows showing ASN statistics for your viewers, i.e. how many connections were made from the most active ASNs. It's part of our media servers reporting framework.
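As a rough illustration of what stands behind such a metric (the data here is made up, and this is not WMSPanel's internal code): once each viewer connection is resolved to an ASN, e.g. via an IP-to-ASN database, the report is a simple aggregation:

    # Illustrative only: counting viewer connections per ASN.
    from collections import Counter

    # (asn, as_name) pairs, assumed to be resolved from viewers' IP addresses
    connections = [
        (15169, "GOOGLE"), (3320, "DTAG"), (3320, "DTAG"), (7922, "COMCAST"),
        (3320, "DTAG"), (15169, "GOOGLE"), (7922, "COMCAST"), (7922, "COMCAST"),
    ]

    per_asn = Counter(connections)
    for (asn, name), count in per_asn.most_common(3):
        print(f"AS{asn} {name}: {count} connections")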

January 25, 2017

Binding un-synced video and audio sources in Live Transcoder

Live Transcoder for Nimble Streamer has a wide range of content transformation features which can be used in transcoding scenarios.

Some scenarios may require taking unrelated and un-synchronized sources of content and binding them together into a synchronized stream. This capability covers some major use cases, like the following, to name just a few:

  • YouTube doesn't accept audio-only or video-only content, so you need to add the missing video/audio in order to comply.
  • Take a video stream (e.g. game footage) and put a commentator's voice on top.
  • Online radio graphics overlay: take an Icecast online radio stream, put a still picture on it as video and publish it on a website or an external CDN, so that a common video player can play the sound and have a visual representation.
  • Take a surveillance camera video stream, insert some silence from an MP3 or MP4 file and publish it to an external destination - like the aforementioned YouTube or a CDN.

Nimble Streamer Live Transcoder is now capable of both transforming file content for further live streaming usage and synchronizing it. Let's see how you can do this.

Installing Nimble Streamer Live Transcoder


First of all, you need to have Nimble Streamer installed, as well as Live Transcoder.


You can check the basic principles of Transcoder setup in our video tutorials in this playlist.

Engaging graphics and on-demand content into live transcoding


Nimble Streamer can transform the following file types into a live stream:

  • GIF
  • PNG
  • JPEG
  • BMP
  • TIFF
  • MP4 container with H.264 video and AAC or MP3 audio
  • MP3 audio files for audio streams

Video decoder


To use files in a transcoding scenario, add a video decoder element to your scenario and choose File as the decoding source, as shown below.

Adding picture as a source for video decoder
You will see the File path input field for entering the full local path to a source file.

Notice that only local file paths are supported; HTTP sources are not supported yet.

For still images (GIF, PNG, TIFF, JPEG, BMP) the FPS field specifies the frame rate which will be used for the video stream. If you have a GIF animation and its frame rate differs from the one you specify, its playback speed will be higher or lower: e.g. if your GIF has 15 images per second and you set the FPS field to 30, your GIF will be animated twice as fast, as the calculation below shows. The Stream ID parameter is ignored for still images.
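The arithmetic is straightforward (illustrative numbers):

    # How the FPS setting changes an animated GIF's playback speed.
    gif_native_fps = 15  # frame rate the GIF was authored at
    fps_setting = 30     # FPS field value in the decoder settings

    speed_factor = fps_setting / gif_native_fps
    print(speed_factor)  # 2.0 - the animation plays twice as fast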

For MP4, the FPS parameter is ignored, while Stream ID can be used in case you have multiple video tracks. In that case, just set the value to the track number, starting from 0.

The Decoder field defines the engine used for decoding the input - currently it's either the Default software decoder or NVidia. Read more about software decoder threads and NVidia decoder settings. The Threads field is used for the default software decoder.

Audio decoder


The audio source is set in a similar way: drop an Audio decoder element onto the scenario workspace and choose File.

Adding audio file as a source for audio decoder

You can specify the File path to the audio content. If you use an MP4 with multiple audio tracks as the audio source, you can specify the Stream ID to select a track.

As mentioned before, only local file paths are supported.

Synchronizing sources to audio


If you have any source of video which is not in sync with audio, you can bind them together. These might be streams produced from file content as described earlier, or original live streams from different sources which need to be in sync in order to be played by all major players.

There are 2 major approaches to syncing streams: you can either sync audio to video or sync video to audio. The difference is which stream is used as the primary source for timing synchronization.

Video to audio



This is the preferred option when audio is your main source of content and you need some auxiliary video to be shown. A good example is an online radio station streamed via Icecast which you want to publish to a delivery service that requires both audio and video in its streams, e.g. a CDN or YouTube.

You need to create a scenario as shown below.

Binding picture to audio stream

As you can see, there is a passthrough for audio, and a decoder input created as per the "Engaging graphics..." section above. The video pipeline also has a scaling filter to make the picture match the designated size.
The encoder setup is shown below - check out the Sync related streams field. It appears when you click the Expert setup link.

Encoder settings for video made from a picture

The Sync related streams field needs to be set to the Video to audio value. If you save this encoder and then open the audio encoder settings, you will see that the field there has been set to the same value as well, to avoid sync-up collisions.

Here the audio is simply passed through; however, you can create any other audio pipeline, including filtering etc.

Watch this video to see how this scenario is set up step-by-step:


You can find it on our YouTube channel.

Audio to video


This sync-up can be used when you have a source of soundless video which needs to be accompanied by an audio track. A surveillance camera stream is a good example.

The setup is similar - check the example scenario below, which builds on the Audio decoder section above.

Binding audio file to video stream

Here you see a video passthrough, as the video content is not touched, while the audio is created from a file and encoded separately. The encoder setup is shown below.

Encoder settings for audio made from a file

Here you see the Sync related streams field set to Audio to video.

Like in the previous scenario, the video is simply passed through, while you can create any other audio pipeline, including filtering etc.


Watch this video to see how this scenario is set up step-by-step:




You can find it on our YouTube channel.



"Equalize-only" scenario


Of course, you can also synchronize video and audio from original live streams. This can be used when you do a voice-over with comments etc. The setup looks like this - just make sure both output stream names are the same.

Passthrough-only scenario to bind audio and video

Either the audio encoder or the video encoder needs to have Sync related streams set to one of the two mentioned values - Video to audio or vice versa.

Encoder settings to bind audio to video stream.

This is basically how you can easily set up streaming from file sources and synchronize streams.

Check this video for step-by-step instructions:





You can find this and other tutorials on our YouTube channel.



Please also read how to use live video and file overlays to create a video wall with Live Transcoder.



Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features, and contact us if you have any questions.
