
No-Reference and Reduced Reference Video Quality Metrics
New Contributions
Digital video communication has evolved tremendously
in the past few years, experiencing significant
advances in compression and transmission techniques.
To quantify the performance of a video system, it is
important to measure the quality of the video. Since
humans are the ultimate receivers of a video signal,
quality metrics must take into account the properties
of the human visual system. So far, most of the
metrics that have been proposed require access to the
original video, which makes them unsuitable for
real-time applications. We investigate how to
estimate video quality in real-time applications
using no-reference and reduced reference metrics. For
this, we study the visibility, annoyance, and
relative importance of different types of artifacts
and how they combine to produce annoyance. The work
uses synthetic artifacts that are simpler, purer, and
easier to describe, allowing a high degree of control
with respect to the amplitude, distribution, and
mixture of different types of artifacts. We present
metrics for estimating the strength of four types of
artifacts. The outputs of the best artifact metrics
are used to build a combination model for overall
annoyance.
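Combination models of this kind are often realized as a weighted Minkowski (L-p) summation of the individual artifact strengths. The sketch below illustrates that general form only; the artifact names, weights, and exponent are illustrative assumptions, not the fitted values from this work:

```python
def overall_annoyance(artifact_strengths, weights, p=2.0):
    """Combine per-artifact strength estimates into a single annoyance
    score via a weighted Minkowski summation, a common functional form
    for such combination models."""
    s = sum(w * (a ** p) for a, w in zip(artifact_strengths, weights))
    return s ** (1.0 / p)

# Illustrative strengths for four artifact types
# (e.g. blockiness, blurriness, ringing, noisiness):
strengths = [0.8, 0.5, 0.2, 0.4]
weights = [1.0, 0.7, 0.5, 0.6]  # hypothetical weights
score = overall_annoyance(strengths, weights)
```

In practice the weights and the exponent `p` would be fitted to subjective annoyance ratings collected in psychophysical experiments.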