Video Aware Wireless Networks

 

The Intel Labs Wireless Multimedia Solutions group is part of the Wireless Communications Lab and the Integrated Platform Research Lab within Intel Labs. We also collaborate with the Middle East Mobile Innovation Center in Cairo, Egypt, which is likewise part of the Wireless Communications Lab. Our current research topics include:

  • cross-layer optimization of video-based applications over wireless networks
  • performance analysis for WiFi and cellular networks with a focus on capacity and quality of experience
  • demonstration and real-time emulation of video over wireless systems
  • video analytics to generate side-information useful for network resource allocation
  • quality of experience estimation and prediction using objective metrics for various platforms
  • industry standards

Some results are presented in the following videos.

R@I Day Video: Enhanced Video Streaming

Content-aware Adaptation Video

The above demos were built using the Intel® Media SDK:

Intel® Media SDK 2012: A multi-platform API for media application development

Intel® Media SDK 2012 is a software development library that exposes the media acceleration capabilities of Intel platforms (encoding, decoding, and transcoding). The API is cross-platform and cross-operating-system, with software fallback, so developers can code once for both current and future chipsets. With a forward-scalable interface, plus easy-to-use code samples and documentation, developers can gain a time-to-market advantage while achieving strong power and performance characteristics. Applications built with the Intel Media SDK 2012 deliver a consistent, rich media experience across platforms and devices. It's free - start unleashing the power of your software by downloading the Intel Media SDK 2012.

Video content delivery over wireless networks is expected to grow exponentially in the coming years, driven by applications including streaming TV to mobile devices, internet video, video on demand, personal video streaming, mobile-to-mobile video sharing, video conferencing, live video broadcasting (cloud to mobile as well as mobile to cloud), video Twitter, and video blogging. In fact, mobile video traffic already exceeded 50% of total mobile data traffic in 2011 and is projected to reach approximately two-thirds of global mobile data traffic by 2016 (Cisco Visual Networking Index). Improvements in video compression and wireless spectral efficiency alone will not be sufficient to accommodate this demand. Although there has been a wealth of research on joint source-channel coding and wireless video optimization over the past few decades, few of these techniques have been realized in practical networks. Our research therefore brings together academic and industrial researchers to work at the intersection of theory and practice, bringing advanced video optimization techniques to wireless networks in practical ways. Specific technologies include:

Wireless Video Optimizations and Error Resiliency

  • Multi-user distortion-aware resource allocation and modulation and coding scheme (MCS) selection.
  • Joint source-channel coding using perceptual video quality distortion metrics.
  • Practical ways to realize wireless video optimizations in a layer-aware fashion, including information exchange methods between layers.
  • Machine learning techniques for dynamically optimizing wireless video.
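To make the distortion-aware MCS selection idea above concrete, here is a minimal sketch. The MCS table, rate-distortion model, and all parameter values are invented for illustration; this is not Intel's actual algorithm. The point is that the scheduler minimizes expected video distortion rather than maximizing raw throughput:

```python
# Hypothetical MCS table: name -> (PHY rate in Mbps, packet error rate at current SNR).
MCS_TABLE = {
    "QPSK-1/2":  (6.0,  0.001),
    "16QAM-1/2": (12.0, 0.01),
    "16QAM-3/4": (18.0, 0.05),
    "64QAM-2/3": (24.0, 0.20),
}

def expected_distortion(rate_mbps, per, d0=1.0, theta=8.0, loss_penalty=50.0):
    """Toy parametric model: source distortion falls as D(R) = d0 + theta/R,
    while each lost packet adds a channel-distortion penalty."""
    goodput = rate_mbps * (1.0 - per)      # effective video bit rate
    source_d = d0 + theta / goodput        # encoding distortion at that rate
    channel_d = loss_penalty * per         # distortion caused by packet loss
    return source_d + channel_d

def select_mcs(table):
    """Pick the MCS minimizing expected end-to-end distortion."""
    return min(table, key=lambda m: expected_distortion(*table[m]))

print(select_mcs(MCS_TABLE))
```

Note that a throughput-maximizing scheduler would pick the fastest MCS here, while the distortion-aware choice backs off to a more robust scheme once the loss penalty outweighs the rate gain.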

Adaptive video streaming and transport for wireless networks

  • Dynamic Adaptive Streaming over HTTP (DASH) and other HTTP-based adaptive streaming optimizations for wireless networks.
  • Scalable compression techniques for practical traffic shaping in the network.
  • Energy efficient wireless media transmission.
  • Analog and digital network coding for wireless media.
  • Content-aware adaptive algorithms and application of video analytics to estimate rate-distortion characteristics.
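The adaptive streaming techniques above revolve around a client-side bitrate selector. The following sketch shows the general shape of such a selector (the bitrate ladder, safety margin, and buffer threshold are assumed values, not a standardized DASH algorithm):

```python
# Hypothetical DASH representations, in kbps.
BITRATES_KBPS = [250, 500, 1000, 2000, 4000]

def choose_bitrate(throughput_kbps, buffer_s, low_buffer_s=5.0, margin=0.8):
    """Pick a representation from estimated throughput and playout buffer level."""
    if buffer_s < low_buffer_s:                # about to rebuffer: be conservative
        return BITRATES_KBPS[0]
    budget = throughput_kbps * margin          # leave headroom for rate variability
    feasible = [b for b in BITRATES_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

print(choose_bitrate(3000, 20.0))  # ample buffer: pick a high rung
print(choose_bitrate(3000, 2.0))   # draining buffer: drop to the lowest rung
```

Real clients blend throughput- and buffer-based signals and add smoothing to avoid oscillation, but the feasibility-plus-fallback structure is the same.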

Novel wireless network architectures optimized for video distribution

  • Practical cooperative network architectures (hierarchical, heterogeneous, peer-to-peer, hybrid broadband/broadcast) specifically for optimized video delivery.
  • Relaying for wireless media transmission.
  • Distributed caching techniques for popular video content.
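The value of caching popular content, as in the last bullet, follows directly from the skew of video popularity. A minimal sketch (assuming a Zipf popularity model with an invented exponent; not a specific deployed system) estimates the fraction of requests an edge cache holding the most popular titles can serve locally:

```python
def zipf_popularity(n_files, alpha=0.8):
    """Zipf-distributed request probabilities over a catalog of n_files."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def cache_hit_rate(n_files, cache_size, alpha=0.8):
    """Hit rate when the cache holds the cache_size most popular files."""
    p = zipf_popularity(n_files, alpha)
    return sum(p[:cache_size])

# A cache holding only the most popular 1% of titles already captures a
# sizeable share of requests under a skewed popularity distribution.
print(cache_hit_rate(10_000, 100))
```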

Intel is working with leading industrial partners, including Cisco and Verizon, as well as leading academic partners, including UT Austin, UC San Diego, Cornell, USC, and Moscow State University, to identify novel technologies for managing and optimizing the delivery of video-based applications over wireless networks and develop practical techniques to realize these novel technologies in the industry. Below is a summary of the research directions and vision being pursued by our academic partners.

UT Austin

The next generation of wireless networks will become the dominant means for video content delivery, leveraging rapidly expanding cellular and local area network infrastructure. Unfortunately, the application-agnostic paradigm of current data networks is not well-suited to meet projected growth in video traffic volume, nor is it capable of leveraging unique characteristics of real-time and stored video to make more efficient use of existing capacity. As a consequence, wireless networks are not able to leverage the spatio-temporal bursty nature of video. Further, they are not able to make rate-reliability-delay tradeoffs nor are they able to adapt based on the ultimate receiver of video information: the human eye-brain system. We believe that video networks at every time-scale and layer should operate and adapt with respect to perceptual distortions in the video stream.

In our opinion the key research questions are:

  • What are the right models for perceptual visual quality for 2-D and 3-D video?
  • How can video quality awareness across a network be used to improve network efficiency over short, medium, and long time-scales?
  • How can networks adaptively deliver high perceptual visual quality?

Our proposed research vectors are summarized as follows. Our first vector, Video Quality Metrics for Adaptation (VQA), focuses on advances in video quality assessment and corresponding rate-distortion models that can be used for adaptation. Video quality assessment for 2-D video is only now becoming well understood, while 3-D assessment remains largely an open research topic. Video quality metrics are needed to define operational rate-distortion curves that can be used to make scheduling and adaptation decisions in the network.

Our second vector, Spatio-Temporal Interference Management (STIM) in Heterogeneous Networks, exploits the characteristics of both real-time and streamed video to achieve efficient delivery in cellular systems. In one case, we will use the persistence of video streams to devise low-overhead interference management for heterogeneous networks. In another, we will exploit the buffering of streamed video at the receiver to opportunistically schedule transmissions when conditions are favorable, e.g., when mobile users are close to the access infrastructure.

Our third vector, Learning for Video Network Adaptation (LV), proposes a data-driven learning framework to adapt wireless network operation. We will use learning to develop link adaptation algorithms for perceptually efficient video transmission: learning agents collect perceptual video quality distortion statistics and use this information to configure cross-layer transport and interference management algorithms. At the application layer, we propose using learning to predict user demand, which can be exploited to predictively cache video and smooth traffic over the wireless link.

 

UC San Diego

The theme of this research project is to design wireless networks capable of increasing video transmission capacity by roughly two orders of magnitude. We use combinations of techniques drawn from compression, signal processing, communications theory, and network theory. Much of our work applies cross-layer techniques at the physical, MAC, network, and application layers. A second major approach is to determine appropriate perceptually based metrics of video quality to enable unequal error protection, and to conceal lost information at the destination by perceptually based interpolation methods. Applications include both 2D and 3D stereo video. We intend for this research to encompass wireless systems at all levels of mobility: fixed wireless, pedestrian speeds, and vehicular speeds.

Among our current research directions are the following:

  • Develop efficient, robust, and optimal compression algorithms based on a video-plus-depth format for 3D video, taking into account the spatiotemporal organization of depth processing by the human visual system.
  • Derive cross-layer algorithms that are robust for arbitrary mobility and a diverse set of scenarios such as two-way interactive video, archival video streaming, and buffer assisted multiple-user resource allocation.
  • Design a novel wireless video cloud architecture, and associated algorithms, to enable delivery of online videos to mobile users with reduced video latency and increased network capacity. In particular, we intend to (a) include distributed processing together with caching, (b) utilize adaptive bit rate streaming techniques, and (c) extend caching to mobile devices.

Cornell

The Cornell team focuses on the following four directions for improving video transport over wireless systems:

1- Coding for Wireless Video-on-Demand. One of the greatest inefficiencies of existing commercial wireless networks is that they treat all video multicasts as separate unicasts, mainly because the content is consumed on demand, i.e., the receivers are asynchronous. We use coding and stochastic control techniques to enable base stations to broadcast information that is simultaneously useful to all receivers in the multicast, regardless of differences in viewing start time, channel quality, distortion tolerance, and existing knowledge of future frames.
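The core coding idea can be seen in a toy index-coding example (this two-receiver XOR case is a textbook illustration, not Cornell's scheme): two receivers watching the same on-demand video from different start points each already hold the packet the other still needs, so one coded broadcast replaces two unicasts.

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"frame-0001"   # receiver B has this cached; receiver A wants it
p2 = b"frame-0900"   # receiver A has this cached; receiver B wants it

coded = xor_bytes(p1, p2)               # a single broadcast instead of two unicasts

recovered_by_a = xor_bytes(coded, p2)   # A cancels the packet it already holds
recovered_by_b = xor_bytes(coded, p1)   # B does the same

assert recovered_by_a == p1 and recovered_by_b == p2
```

Generalizing this to many asynchronous receivers with differing channel qualities and side information is exactly where the coding and stochastic control machinery comes in.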

2- Protocols for Exploiting Network Heterogeneity. The proliferation of different wireless access technologies, together with the growing number of multi-radio wireless devices, suggests that opportunistic utilization of multiple connections at the user (i.e., network heterogeneity) can be an effective answer to the phenomenal growth of traffic demand in wireless networks. We study the fundamental limits on how much network heterogeneity can improve video transport over wireless networks and develop optimal protocols to exploit it.

3- Speculative Video Streaming. The video content requested by users can often be predicted based on their location and past viewing habits. We study ways of predicting the location and timing of future requests and speculatively moving the content within the network to be near to the point of consumption.

4- Quality-Aware Routing using Dynamic Side Information. Existing video streaming protocols are limited to making scheduling decisions based upon either packet priorities or upon limited and static side information in the video packets. We develop dynamic side information that is stored as state information in network nodes and that is modified appropriately as the data travels through the network, as well as intelligent routing and scheduling algorithms to exploit this side information.

USC

We investigate fundamentally new cellular architectures to handle the ongoing explosive increase in the demand for video content in wireless mobile environments. We show that distributed caching and collaboration between users and femtocell-like base stations without high-speed backhaul, which we call helpers, can greatly improve throughput without suffering from the backhaul bottleneck problem common to femtocells. We also investigate the role of collaboration among users - a process that can be interpreted as the mobile devices also playing the role of helpers. This approach improves video throughput without deploying any additional infrastructure. The efficiency of the caching approach depends on two key system properties: (i) the configuration of the "effective" distributed cache, i.e., which user can connect to which helpers, and (ii) the popularity distribution of the video files. As a function of these properties, we consider the wireless distributed caching problem, i.e., which files should be cached by which helpers. For the mobiles-as-helpers problem, a key question is the choice of the cluster dimension (collaboration distance), trading off spectral reuse against the probability of finding the desired video within the collaboration distance. Simulations show that our approach can improve data throughput by as much as 400-500% through the addition of helpers, and by more than an order of magnitude through device-to-device communications.
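The helper-placement question above (which files should be cached by which helpers) can be illustrated with a toy greedy heuristic. The topology, popularity values, and capacity below are invented, and greedy placement is only one natural baseline, not USC's algorithm:

```python
# Which helpers each user can reach, and request probabilities per file.
REACH = {"u1": ["h1"], "u2": ["h1", "h2"], "u3": ["h2"]}
POPULARITY = {"f1": 0.5, "f2": 0.3, "f3": 0.2}
CAPACITY = 1  # files each helper can store

def expected_hits(placement):
    """Expected locally served requests: a user's request for a file is a hit
    if any helper it can reach holds that file."""
    return sum(
        POPULARITY[f]
        for u, helpers in REACH.items()
        for f in POPULARITY
        if any(f in placement[h] for h in helpers)
    )

def greedy_place(helpers=("h1", "h2")):
    """Fill helper caches one file at a time, always adding the file that
    most increases the expected number of locally served requests."""
    placement = {h: set() for h in helpers}
    for _ in range(CAPACITY):
        for h in helpers:
            best = max(
                (f for f in POPULARITY if f not in placement[h]),
                key=lambda f: expected_hits({**placement, h: placement[h] | {f}}),
            )
            placement[h].add(best)
    return placement

print(greedy_place())
```

Note the coordination effect: once h1 caches the most popular file, the overlapping user u2 is already covered, so greedy gives h2 the second most popular file instead of duplicating the first.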

Moscow State University

Moscow State University research is primarily focused on signal processing of 2D and 3D video content, including the following:

  • Real-time Bidirectional Optical Flow for multiview conversion:
    • real-time stereo-to-multiview conversion
    • conversion of parallax of 3D video on-the-fly
  • 3D video quality estimation, including measurement of
    • subjective perception of depth
    • 3D devices characteristics
  • Multiview video compression
    • Dozens of high-definition views yield a huge amount of data to store and transfer

More information can be found at the CS MSU Graphics & Media Lab (Video Group) web-site: www.compression.ru/video/.

Intel Labs

O. Oyman and S. Singh, “Quality of Experience for HTTP Adaptive Streaming Services”, IEEE Communications Magazine - Special Issue on QoE Management in Emerging Multimedia Services, April 2012. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6178830

S. Singh, O. Oyman, A. Papathanassiou, D. Chatterjee and J. Andrews, “Video Capacity and QoE Enhancements over LTE”, IEEE ICC ViOpt Workshop, Ottawa, Canada, June 2012.

O. Oyman and S. Singh, “On Capacity-Quality Tradeoffs in HTTP Adaptive Streaming Services over LTE Networks”, UCSD Information Theory and Applications Workshop, San Diego, CA, Feb. 2012. (invited paper) http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6181820

O. Oyman, J. Foerster, Y. Tcha and S. C. Lee, “Toward Enhanced Mobile Video Services over WiMAX and LTE”, IEEE Communications Magazine, August 2010. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5534589

O. Oyman and J. Foerster, “Distortion-Aware MIMO Link Adaptation for Enhanced Multimedia Communications”, IEEE Personal, Indoor and Mobile Radio Communications (PIMRC) Workshop - Toward IMT-Advanced and Beyond, pp. 387-392, Istanbul, Turkey, Sep. 2010. (invited paper) http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5670400

S. Sridharan and O. Oyman, “On the Distortion Exponent of Partially Closed-Loop Wireless Systems”, Proc. IEEE Global Telecommunications Conference (GLOBECOM), Miami, FL, Dec. 2010. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5683739

D. Chatterjee, O. Oyman and J. Foerster, “Distortion-Aware Transmission in Cognitive Radio Networks”, Proc. 44th IEEE Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, Nov. 2010. (invited paper) http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5757719

Wafaa Taie, Mohamed Badawi, Mohamed Rehan, Mohamed Yousef, and Esraa Makled, "Analysis of Adaptive Video Streaming Performance in Wireless Network Using OPNET Modeler and OPNET System-in-the-Loop (SITL) Module", Proceedings of OPNETWORK 2012, August 2012.

UT Austin

http://msw3.stanford.edu/~zhuxq/vawn/utexas.html

UC San Diego

http://msw3.stanford.edu/~zhuxq/vawn/ucsd.html

Cornell

http://msw3.stanford.edu/~zhuxq/vawn/cornell.html

USC

http://msw3.stanford.edu/~zhuxq/vawn/usc.html

Moscow State University

http://msw3.stanford.edu/~zhuxq/vawn/msu.html
