Hi, I'd like to ask a follow-up question to my earlier question: "Best approach to use the Media Server Studio SDK in my application?" This new question came up after I experimented with the Media Server Studio SDK a little, and after I heard about VA-API.
Recently my wife purchased a thick and expensive book. As an ultrasonic diagnostician for children, she purchases many books, but this one had me puzzled. The book was titled Ultrasound Anatomy of the Healthy Child. Why would she need a book that showed only healthy children? I asked her and her answer was simple: to diagnose any disease, even one not yet discovered, you need to know what a healthy child looks like.
In this article we will act like doctors, analyzing and comparing a healthy and a not-so-healthy application.
Knock – knock – knock.
I am looking for a VA-API based decoder example that takes a video or an image frame and decodes it into a .yuv file.
There are some existing examples, but there is no clear explanation of how to proceed.
Please share an example, a user guide, or source code.
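Until a full VA-API sample turns up, one quick way to get a .yuv dump is to let ffmpeg drive the VA-API decoder for you. This is only a sketch, assuming an ffmpeg build with VA-API support and a render node at /dev/dri/renderD128 (the device path varies per system); input.h264 and output.yuv are placeholder names:

```shell
# Decode an H.264 elementary stream to raw YUV via the VA-API hwaccel.
# Decoded surfaces are downloaded to system memory and written as rawvideo.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -i input.h264 -pix_fmt yuv420p -f rawvideo output.yuv
```

If ffmpeg falls back to software decoding, check `vainfo` to confirm the driver exposes a decode entrypoint for your codec.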
I am trying to run the decoder on live streams. When I demux a stream and isolate the video stream, I push it straight to the Intel decoder. Under normal conditions it works well. However, the decoder enters an unwanted state when it receives a noisy or skipped frame from the demuxer.
I tried to track down the problem. In my case, when an abnormal packed frame arrives from the demuxer, the decoder generates a noisy image. That is normal. But I expect the decoder to recover from this situation after the next IDR frame or keyframe. In my trials, the Intel decoder cannot handle this abnormality.
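One common defensive pattern on the application side is to stop trusting the decoder after an error and discard input until the next IDR/keyframe before resuming (with Media SDK you would typically also call MFXVideoDECODE_Reset at that point). The sketch below only models that state machine; the decoder itself is stubbed and all names are illustrative, not Media SDK API:

```c
#include <stdbool.h>

typedef enum { FRAME_OK, FRAME_CORRUPT, FRAME_IDR } FrameKind;

typedef struct {
    bool need_keyframe;   /* true while recovering from a decode error */
    int  decoded;
    int  dropped;
} DecoderState;

/* Feed one demuxed frame to the (stubbed) decoder with recovery logic:
   after a corrupt frame, drop everything until the next IDR arrives. */
void feed_frame(DecoderState *s, FrameKind f) {
    if (s->need_keyframe && f != FRAME_IDR) {
        s->dropped++;                 /* still recovering: discard */
        return;
    }
    if (f == FRAME_CORRUPT) {
        s->need_keyframe = true;      /* decode failed: enter recovery */
        s->dropped++;
        return;
    }
    s->need_keyframe = false;         /* IDR (or clean frame): resume */
    s->decoded++;
}
```

With this in front of the decoder, the garbage frames between the error and the next keyframe never reach the hardware, so it cannot get stuck referencing corrupted pictures.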
We are evaluating the INDE Media for Mobile encoder for a new Android app we are developing. The app streams live video from the phone camera to a server. I would like to know whether there is a way to use authentication between the camera streaming sample app, which I believe uses RTMP streaming, and a server (Wowza in particular).
Hi, I have some application software whose performance I am trying to optimize by utilizing the Media Server Studio SDK. I have some big-picture questions about how to approach this project.
First, a short explanation of my software. It's a video surveillance program. It monitors multiple channels. Each channel runs in its own thread, and takes an H.264 bitstream as input, so an H.264 decoder is needed for each channel.
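The per-channel threading described above can be sketched as follows. The decoder is a stub (with Media SDK, each thread would typically own its own session and decoder instance); channel count, frame count, and all names are illustrative:

```c
#include <pthread.h>

#define NUM_CHANNELS       4
#define FRAMES_PER_CHANNEL 10

typedef struct {
    int id;
    int frames_decoded;
} Channel;

/* Per-channel worker: in the real app this loop would pull H.264
   bitstream data for its channel and run it through its own decoder. */
static void *channel_thread(void *arg) {
    Channel *ch = arg;
    for (int i = 0; i < FRAMES_PER_CHANNEL; i++)
        ch->frames_decoded++;          /* stand-in for decoding one frame */
    return NULL;
}

/* Spawns one thread per channel, joins them all, returns total frames. */
int run_channels(void) {
    pthread_t tids[NUM_CHANNELS];
    Channel   ch[NUM_CHANNELS];
    int total = 0;
    for (int i = 0; i < NUM_CHANNELS; i++) {
        ch[i].id = i;
        ch[i].frames_decoded = 0;
        pthread_create(&tids[i], NULL, channel_thread, &ch[i]);
    }
    for (int i = 0; i < NUM_CHANNELS; i++) {
        pthread_join(tids[i], NULL);
        total += ch[i].frames_decoded;
    }
    return total;
}
```

Keeping each decoder private to its thread avoids any locking around decode calls; only shared output (e.g. a display queue) would need synchronization.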
I was wondering: what do hardware decoders in general expect to receive when decoding a picture for which the encoder generated multiple NALs?
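In general, frame-level decoder APIs work best when all slice NALs belonging to one picture (a complete access unit) are submitted together rather than slice by slice, while slice-level interfaces such as VA-API's take per-slice buffers. Either way, the feeding code first has to locate NAL boundaries in the Annex-B byte stream. A minimal start-code scanner, as a sketch (this is generic code, not any SDK's API):

```c
#include <stddef.h>

/* Returns the offset of the next Annex-B start code (00 00 01 or
   00 00 00 01) at or after `pos`, or `len` if none remains. A feeder
   can use consecutive results to slice the buffer into NAL units and
   group all slices of one picture before submitting them. */
size_t next_start_code(const unsigned char *buf, size_t len, size_t pos) {
    for (size_t i = pos; i + 3 <= len; i++) {
        if (buf[i] == 0 && buf[i + 1] == 0 &&
            (buf[i + 2] == 1 ||
             (i + 4 <= len && buf[i + 2] == 0 && buf[i + 3] == 1)))
            return i;
    }
    return len;
}
```

Deciding where one picture ends and the next begins then follows the H.264 access-unit boundary rules (for slice NALs, a new picture starts when first_mb_in_slice is 0, among other conditions).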