Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness

M. A. Pimentel-Niño, Paresh Saxena, M. A. Vazquez-Castro

Research output: Contribution to journal › Article › peer-review



© 2015 M. A. Pimentel-Niño et al. A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is live streamed video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, so a novel quality of experience (QoE) framework is proposed which allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute the available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show homogeneous high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, complying with the information-centric networking philosophy and architecture.
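The systematic flavor of random linear network coding mentioned above sends the original packets unmodified, followed by randomly coded repair packets, so loss-free receivers decode at zero cost while lossy receivers can recover erasures from the repair combinations. A minimal illustrative sketch of that idea, assuming coding over GF(2) (XOR) for simplicity rather than the larger field a real SRNC deployment would use, and with hypothetical names (`srnc_encode`):

```python
import random

def srnc_encode(source_packets, n_repair, seed=0):
    """Sketch of systematic random linear network coding over GF(2).

    The first k outputs are the source packets themselves (the
    'systematic' part); each repair packet is a random XOR combination
    of the source packets, tagged with its coding vector.
    """
    rng = random.Random(seed)
    k = len(source_packets)
    size = len(source_packets[0])  # assume equal-length packets
    # Systematic part: identity coding vectors, original payloads.
    out = [(tuple(int(i == j) for j in range(k)), pkt)
           for i, pkt in enumerate(source_packets)]
    # Repair part: random GF(2) combinations of the sources.
    for _ in range(n_repair):
        coeffs = tuple(rng.randint(0, 1) for _ in range(k))
        repair = bytes(size)
        for c, pkt in zip(coeffs, source_packets):
            if c:
                repair = bytes(a ^ b for a, b in zip(repair, pkt))
        out.append((coeffs, repair))
    return out  # list of (coding vector, payload) pairs

pkts = [b"frameA", b"frameB", b"frameC"]
coded = srnc_encode(pkts, n_repair=2)
```

A receiver that collects any k linearly independent coding vectors can recover the sources by Gaussian elimination; the systematic prefix makes the common no-loss case free of decoding work.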
Original language: English
Article number: 394956
Journal: Scientific World Journal
Publication status: Published - 1 Jan 2015


