
Configurable Media Codec Framework: A Stepping Stone for Fast and Stable Codec Development

2012-05-22 03:36:20 Euee Jang
ZTE Communications, 2012, Issue 2

Euee S. Jang

(Division of Computer Science and Engineering, College of Engineering, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, Republic of Korea)

Abstract Recent advances in reconfigurable computing have led to new ways of implementing complex algorithms while maintaining reasonable throughput. Video codecs are becoming more complex in order to provide efficient compression for video with ever-increasing resolution. This problem is compounded by the fact that the spectrum of video decoding devices has become wider in the move from traditional TV to cable and satellite TV, IPTV, mobile TV, and Internet media. MPEG is tackling this problem with a reconfigurable video coding (RVC) framework and is standardizing a modular definition of tools and connections. MPEG's work started with video coding and has recently been extended to graphics data coding. RVC will also be supported by non-MPEG standards such as the Chinese audio-video standard (AVS). This article gives a brief background to the reconfigurable codec framework. The key to this framework is reconfigurability and reducing granularity to find commonality between different standards.

Keywords MPEG; reconfigurable coding; RVC; RMC

1 Introduction

The Moving Picture Experts Group (MPEG) has created many audio-visual coding standards, including MP3, MPEG-2, and MPEG-4 AVC/H.264 [1]. MPEG's multimedia coding standards have been central in the shift from an analog to a digital paradigm. Video coding standards have been developed for specific applications: MPEG-1 for video CD, MPEG-2 for digital TV and DVD, MPEG-4 Part 2 Visual for mobile video, and MPEG-4 AVC/H.264 for DMB and Internet. It has been 20 years since MPEG-1 was standardized. There are many video coding standards, some from MPEG and others from non-MPEG organizations. However, competition between standards makes it difficult to develop video devices because such devices must support an ever-increasing number of codecs.

It could be argued that there should only be one or two generic video codecs and that standards should be unified. This may seem idealistic considering the huge amount of content that has already been created using various standards. However, if we consider how media coding is done, a generic video coding standard is not impossible. Most media coding standards share the same basic processes: prediction, quantization, transform, and entropy coding. If a decoder is componentized into modules, it may be possible for one video coding standard to reuse modules from another. The size of a module determines its granularity, and the hardware and software realizations of a module may vary in size, performance, and cost. The reusability of a module greatly increases when the right granularity is found for a given architecture.

The granular design of a codec can be used to describe a media coding standard, from bitstream syntax parsing to reconstruction of pixels or audio samples. Not all media coding standards can be merged into one, but they can be described in a generic coding framework. In 2003, MPEG began standardizing a reconfigurable video coding (RVC) framework. The RVC framework can be considered a configurable media codec (CMC) framework that encompasses not only video coding but also audio and graphics coding.

MPEG first took the CMC framework approach in the area of video coding. This is a fast-evolving area because more demanding video services require more efficient coding standards. MPEG and ITU-T's Video Coding Experts Group (VCEG) have joined together to standardize high-efficiency video coding (HEVC), which aims to provide the most efficient compression. HEVC is expected to be more complex than MPEG-4 AVC/H.264. Very recently, Internet video coding (IVC) and web video coding (WVC) have also been proposed as royalty-free coding standards for Internet applications. Such diversity in video coding standards calls for CMC to be considered.

One of the main objectives of CMC is to narrow the gap between the design and implementation of algorithms. Generally speaking, designers of video coding algorithms do not take implementation into consideration when weighing the merits of one algorithm over another. They instead design algorithms according to compression efficiency, the first requirement of video coding. Preferred algorithm designs are often complex and difficult to implement. Designing algorithms according to implementation has also been tried, but it is difficult because architectures vary widely: hardware versus software, single core versus multicore, and floating-point versus fixed-point arithmetic. Algorithm-architecture co-design has only recently been acknowledged as an important next step in research [2].

The idea of modularizing the codec with common tools came about by first considering how a module is constituted. A module is a functional unit (FU) comprising input, output, and internal processing. An FU can be described as a function call in a program, a logic unit in a chip, or a thread running in a parallel computing environment. An FU is designed to provide an abstract form of a function that can be implemented in different environments. MPEG's FU design is similar to the black-box approach, although this was not clearly stated in the RVC standard. As long as the input and output behaviors of an FU implementation conform to the standard, the internal implementation of the FU is left open.
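As a rough sketch of this black-box idea, the following Python fragment models an FU as token queues on named input and output ports, with the internal processing deferred to a subclass. The class and port names are illustrative only and are not taken from the standard.

```python
from collections import deque

class FunctionalUnit:
    """Abstract FU: normatively defined I/O ports carrying tokens;
    internal processing is implementation-specific (black box)."""

    def __init__(self, in_ports, out_ports):
        self.inputs = {p: deque() for p in in_ports}
        self.outputs = {p: deque() for p in out_ports}

    def can_fire(self):
        # An FU may execute only when every input port holds a token.
        return all(len(q) > 0 for q in self.inputs.values())

    def fire(self):
        raise NotImplementedError("internal processing is left open")

class Scale(FunctionalUnit):
    """Toy conforming implementation: multiplies each token by a constant."""
    def __init__(self, factor):
        super().__init__(in_ports=["IN"], out_ports=["OUT"])
        self.factor = factor

    def fire(self):
        self.outputs["OUT"].append(self.factor * self.inputs["IN"].popleft())

fu = Scale(3)
fu.inputs["IN"].append(7)
if fu.can_fire():
    fu.fire()
result = fu.outputs["OUT"].popleft()
print(result)  # → 21
```

Any other implementation of `Scale` (hardware, multithreaded, table-driven) would conform as long as the same input tokens yield the same output tokens.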

A decoder can be viewed as an FU with one input (for example, a bitstream) and three outputs (for example, YUV). However, the granularity of such a large FU does not conform to the goal of the RVC framework, that is, to define a toolbox containing FUs that can be reused in many coding standards. FU granularity is key in determining how efficient the RVC framework is. The FUs standardized in the video tool library (ISO/IEC 23002-4) have not been thoroughly verified in terms of whether they are segmented or divided with optimal granularity. The initial goal of RVC standardization was to design a proper framework for configuring FUs to form a decoder network.

FUs must be configured and connected in such a way as to form a decoder network that is interoperable across different implementations. The model of computation (MoC) of the formed decoder network is a dataflow model in which the inputs and outputs of the FUs are called tokens. The availability of input tokens determines when an FU executes to consume input tokens and produce output tokens. Therefore, connections between FUs are data-driven. The dataflow model is a significant departure from the traditional model of computation based on signal flow. Most signal-processing algorithms can be modeled as a signal-flow graph, in which there is no room for functions or computations at an individual node. In a dataflow graph, additions, multiplications, and cosine functions can be hidden in a node. Input and output are described as input and output edges (or tokens).
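The data-driven firing rule described above can be sketched in a few lines of Python. This is an illustrative toy (two invented FUs, a shared FIFO as the connection, and naive round-robin scheduling), not MPEG's normative MoC:

```python
from collections import deque

def run_network(fus):
    """Round-robin data-driven scheduling: keep firing any FU whose
    input tokens are available, until no FU can fire."""
    fired = True
    while fired:
        fired = False
        for fu in fus:
            if fu["ready"]():
                fu["fire"]()
                fired = True

# Connection (edge) between the two FUs is just a token FIFO.
a_to_b = deque()
source_tokens = deque([1, 2, 3])   # tokens arriving at the network input
results = []                       # tokens leaving the network output

doubler = {                        # fires when a source token is available
    "ready": lambda: bool(source_tokens),
    "fire":  lambda: a_to_b.append(2 * source_tokens.popleft()),
}
collector = {                      # fires when the internal edge has a token
    "ready": lambda: bool(a_to_b),
    "fire":  lambda: results.append(a_to_b.popleft()),
}

run_network([doubler, collector])
print(results)  # → [2, 4, 6]
```

Note that nothing in the network is clocked or sequenced externally; execution order is driven purely by token availability, which is what makes the description independent of any particular implementation schedule.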

A dataflow-based description of the MoC is a simplified description of a decoder network (FU network). The remaining implementation details, such as buffer management, timing, and data precision, are unspecified so that implementation can be flexible. This is why there are two standard specifications, one for the framework (ISO/IEC 23001-4) and one for the toolbox (ISO/IEC 23002-4). The framework standard contains the decoder description language, used to describe the FU network, and bitstream syntax parsing. The toolbox standard contains video coding FUs and a simulation model with several decoder configurations for existing video coding standards.

The RVC framework is intended to cover not only video coding but also audio and graphics. The MPEG graphics community recently started work on a reconfigurable graphics coding (RGC) framework that is similar in principle to the RVC framework. The main goal of the RGC framework is to construct a toolbox for MPEG graphics coding tools [3]. Activities relating to the RGC framework include confirming the RVC approach for any-media coding. GPUs are heavily used in graphics applications, and the modular design of the RGC framework helps in the implementation of FUs, which are well suited for such graphics applications.

The CMC approach looks promising, and modular design in parallel computing is attracting interest in areas where multicores and GPUs can accelerate computing. MPEG is not the only group interested in CMC. The Audio Video Standard (AVS) Group in China shares MPEG's vision of CMC and has been developing its own FUs to support AVS codecs.

This paper takes the MPEG RVC framework as a good example of the CMC approach. A few years have passed since CMC was first hatched by MPEG, but there is still much room for improvement in the technology and standards.

2 The CMC Framework

There are two main issues in modular design for CMC: how to define a module and how to connect modules. In MPEG RVC, a module is called a functional unit (FU). Input and output behavior is normatively defined, and internal processing is left open and implementation-specific.

2.1 Module Design Philosophy

When designing a module in CMC, implementation and granularity, testability, and interoperability should be taken into account.

2.1.1 Implementation and Granularity

A module should be implementable on platforms that are hardware or software based, single core or multicore. Abstract modeling is often preferred to physical implementation because it increases flexibility when implementing modules on various platforms. A module in MPEG RVC is designed using an abstract definition of the FU in the text specification and an exemplar implementation of the FU in the RVC-CAL language. In the text specification, each FU is viewed as a black box, and in the RVC-CAL implementation, each FU is viewed as a white box. Module granularity is included in the FU definition and directly affects the reusability and reconfigurability of modules. For this reason, both implementation and granularity should be clearly defined.

▲Figure 1. An abstract FU definition from the MPEG VTL.

2.1.2 Testability

Efficient testing and debugging of an implemented module is one of the goals of CMC. Adequate module granularity helps reduce testing and debugging work. In MPEG RVC black-box testing, golden responses are generated by analyzing the corner cases of a given FU. The black-box approach is taken to ensure that different module implementations can be tested in a standard way.

2.1.3 Interoperability

The standard definition of a media codec has, so far, been confined to the bitstream syntax and the parsing and decoding algorithms. The implementation of algorithms in a codec is unspecified. Industry fills this gap by enhancing the compression efficiency of encoding algorithms (through, for example, efficient mode decision) and by giving the encoding and decoding algorithms cost-effective implementations. An implementation designer can create customized algorithms, for example, a combined implementation of quantization and transform. Using CMC, interoperability between modules of different implementations may be possible if every implemented module conforms to the input and output behavior of the abstract module definition. Therefore, it is possible to produce a decoder comprising a combination of modules from different implementations. This has not been possible with conventional decoder implementation. In a multimedia framework such as DirectShow, the only visible component has been the decoder, not the modules used to generate a decoder. In MPEG RVC, module-level interoperability is not yet supported because the first goal of MPEG RVC is to provide a framework, not the modules.

2.2 Case Study:MPEG Video Tool Library

The MPEG video tool library (VTL) (ISO/IEC 23002-4) is a collection of FUs and part of the MPEG RVC standard. The tools (or FUs) available in the MPEG VTL are supported by the MPEG codec configuration representation (CCR) standard. Fig. 1 shows an abstract definition of an inverse-scan FU used in MPEG-4 AVC/H.264. Two important fields in the abstract definition are input and output. The input is a 4×4 BLOCK token, and the output is also a BLOCK token. The description field contains a brief description of what the FU does internally with input and output tokens. The exact behavior is not explicitly described, so there can be various implementations of the FU. Fig. 2 shows a reference description and implementation in RVC-CAL.
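To make the inverse-scan example concrete, one possible internal behavior of such an FU, written here in Python purely for illustration, is to place a zig-zag-ordered stream of coefficients back into raster order. The 4×4 zig-zag order below is the common one used for 4×4 transform blocks; the function name is invented:

```python
# ZIGZAG_4x4[i] is the raster-order index of the i-th coefficient
# in the zig-zag-scanned stream (common 4x4 zig-zag order).
ZIGZAG_4x4 = [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]

def inverse_scan_4x4(scanned):
    """Rebuild a 4x4 block (flat list, raster order) from
    zig-zag-ordered coefficients: one BLOCK token in, one out."""
    assert len(scanned) == 16
    block = [0] * 16
    for i, coeff in enumerate(scanned):
        block[ZIGZAG_4x4[i]] = coeff
    return block

# Typical post-quantization pattern: energy concentrated at the start.
scanned = [9, 5, 3, 1] + [0] * 12
block = inverse_scan_4x4(scanned)
for row in range(4):
    print(block[4 * row: 4 * row + 4])
```

Any implementation with the same token-in/token-out behavior (a lookup table in hardware, a vectorized permutation, etc.) would be an equally conforming realization of the abstract FU.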

With the abstract definition in Fig. 1, FU testing is very much like black-box testing; with the reference implementation in Fig. 2, the testing is white-box testing. MPEG RVC is not yet clear on this issue, but the most important thing is that the input and output behavior is consistent across implementations.

2.3 Module Connections

Once modules are defined, connections between modules have to be made in order to form a module network. When connecting the modules, any data transaction between modules should be defined clearly enough that any implementation can follow the specification. In MPEG RVC, the input and output data of FUs are called tokens. These are the basic elements for connecting FUs into an FU network. In a packet-switched network such as the Internet, a datagram is similar to a token. However, a token differs from a datagram in that the size and format of tokens may differ from one another. The variety of token types influences the design of interconnections between modules. If too much information is carried in a token type, modularization may not be done with optimal granularity. Connections and traffic between modules should therefore be minimized when defining modules.

▲Figure 2. A reference implementation in RVC-CAL of the abstract FU definition from Fig. 1.

▲Figure 3. A simple FU network with two FUs, two internal connections, and two external connections.

▲Figure 4. An FND of the FU network in Fig. 3.

In CMC, connections are described in a readable format because they can be essential information for implementations. The following information should be described: the connections between module input and output ports, the token type of each connection (e.g., 8×8 block, pixel, MB, 1-bit flag), the token sequence or order, and parameters for specific implementations.

In MPEG RVC, the module network is described with an XML-like description called the FU network description (FND). The rules for describing connections are defined in the FU network language (FNL) in the MPEG RVC standard. A diagram is commonly used to describe the module connections. Fig. 3 shows an FU network in MPEG RVC. There can be up to four different token types, that is, two external and two internal. Input and output ports that share the same connection should support the same token type.

The diagram helps the implementation designer understand the modular network, but it is also desirable to describe the network in a language format. Fig. 4 shows an FND written in FNL.
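Fig. 4 itself is not reproduced here, but an FND for a two-FU network along the lines of Fig. 3 might look roughly like the following. The element names follow the XDF dialect often used with RVC tooling and are indicative only, not copied from the standard; the FU and port names are invented.

```xml
<XDF name="SimpleNetwork">
  <!-- External ports of the network -->
  <Port kind="Input" name="IN_A"/>
  <Port kind="Output" name="OUT_B"/>

  <!-- FU instances drawn from a toolbox -->
  <Instance id="FU_A">
    <Class name="toolbox.FU_A"/>
  </Instance>
  <Instance id="FU_B">
    <Class name="toolbox.FU_B"/>
  </Instance>

  <!-- External connections (to the network ports)
       and an internal connection (FU_A to FU_B) -->
  <Connection src="" src-port="IN_A" dst="FU_A" dst-port="IN"/>
  <Connection src="FU_A" src-port="OUT" dst="FU_B" dst-port="IN"/>
  <Connection src="FU_B" src-port="OUT" dst="" dst-port="OUT_B"/>
</XDF>
```

The point of such a description is that it carries exactly the connection information listed in Section 2.3 (ports, connections, and instances) in a machine-readable form from which a decoder can be configured.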

2.4 Syntax Parser

The bitstream syntax and parsing process is unique to each codec and usually includes entropy coding (variable-length decoding, arithmetic decoding). Unlike the other modules in the CMC approach, the syntax parser module is less likely to be reused by other codecs and is highly codec-dependent. The parser is usually the first module to process the bitstream. In MPEG RVC, the bitstream parser description (BSD) is part of the decoder description. Each decoder description contains an FND and a BSD, and a parser module can be generated from the BSD. The BSD format is the RVC bitstream syntax description language (RVC-BSDL), a variant of XML.

▲Figure 5. An FND example of the MPEG-4 simple profile.

A syntax parser can run without necessarily engaging the decoding process. This means that the syntax parsing and entropy decoding process can be detached from the decoding process, and the conformance of a bitstream's syntax to the bitstream semantics can be checked. In MPEG RVC, automatic generation of the bitstream parser from the BSD is still an unresolved issue because the parser, including the entropy decoder, is difficult to describe in XML.

Fig. 5 shows an FND that includes all FUs needed to form an MPEG-4 simple-profile decoder. Each box is an FU. The FU on the far left is the syntax parser, which receives a bitstream and produces output tokens (e.g., entropy-decoded semantic data) for the other FUs.

CMC has two parts: the framework and the toolbox. Fig. 6 shows how different toolboxes can be used to generate a decoder based on the MPEG RVC framework. Other than toolbox 1, the toolboxes may be proprietary or non-MPEG standards. This opens the way for non-MPEG organizations to use the RVC framework for their own codec implementations as well as for MPEG codec implementations such as decoder 1 and decoder 2. The AVS group in China supports RVC and multiple toolboxes.

The toolbox approach also extends to other types of media coding, such as graphics coding. In reconfigurable graphics coding (RGC), the RVC framework supports graphics coding tools. Graphics coding is an area that can benefit from RVC. Many graphics applications are multimedia applications that encompass not only geometry data processing but also audio, image, and video data processing. To view a movie, two bit streams are needed, one for video and another for audio. For graphics applications such as games, many data sets need to be processed as components that include encoded graphics content. Many graphical object types share common coordinates, colors, and normals. As with many object-type compression methods, graphics data compression involves compressing primitives. For this reason, the division of codecs into modules is easy in graphics coding.

Fig. 7 shows an FND of MPEG scalable complexity 3D mesh coding (SC-3DMC). Many FUs are reused in order to decode attributes such as coordinates, colors, normals, and texture.

▲Figure 6. Toolbox concept in MPEG RVC.

3 Future Research Directions

Although many years have been spent researching and standardizing CMC, this field is relatively young, and there is much room for improvement. This is one reason why MPEG RVC work continues. This section describes issues that are open for future research.

3.1 Model of Computation

▲Figure 7. An FND of an MPEG SC-3DMC decoder.

Coding tools are usually represented by algorithms, reference implementations, and textual specifications. In any representation format, the MoC is implicitly defined; otherwise, it would be hard to understand how a coding tool operates for a given functionality. The MoC may differ from implementation to implementation. If there are three consecutive statements, that is, no branch or loop, in C code, the three statements are executed in sequence. Sequential execution may not be guaranteed if the implementation is done in hardware; parallel execution of the three statements may be possible if the statements are independent of each other. The choice of MoC directly affects implementation complexity, and the MoC must be chosen carefully.

During the development of MPEG RVC, there have been many discussions about how to define the MoC. The consensus is that the reference implementation language, RVC-CAL, should be used as a model for understanding the MoC in MPEG RVC. To confirm this recommendation, more experiments should be conducted on how to describe a network of modules, how input and output tokens behave in the network, and how a generic description across different implementations can be guaranteed.

3.2 Parser Generation

Bitstream syntax parsing, including entropy decoding, usually consumes 20 to 40 percent of the decoding time, which makes the parser one of the most time-consuming modules in the decoder. It is difficult to design a parallel algorithm to speed up the bitstream syntax parser because the parsing process is sequential; this is not the case with other modules. It is also difficult to subdivide the bitstream syntax parser, which is likely to be the largest module in CMC and outputs the largest number of tokens.

Despite the importance of parser generation, it is not yet an automatic process. In MPEG RVC, there is a BSD in the decoder description. However, the BSD is not directly used to generate the parser module; this is called a built-in approach. While it can support existing codecs, such a parser is less flexible in generating new codecs as needed. One reason parser generation is not automatic is that the parser includes entropy decoding algorithms, such as variable-length decoding and arithmetic decoding. Entropy decoding requires a complex procedural description, and it may be difficult to define a generic description for any implementation. Future research on automatic parser generation is necessary.
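To see why entropy decoding resists a declarative XML description, consider even the simplest table-driven variable-length decoding, sketched below in Python with an invented code table. The decode loop is inherently procedural and bit-serial: each symbol boundary depends on the bits decoded so far.

```python
# Toy prefix-code (VLC) table, invented for illustration: every valid
# bit string splits unambiguously into these codewords.
VLC_TABLE = {"1": "A", "01": "B", "001": "C", "000": "D"}

def vlc_decode(bits):
    """Decode a bit string by accumulating bits until they match a
    codeword; raises on a truncated (incomplete) final codeword."""
    symbols, code = [], ""
    for b in bits:
        code += b
        if code in VLC_TABLE:
            symbols.append(VLC_TABLE[code])
            code = ""
    if code:
        raise ValueError("truncated codeword: " + code)
    return symbols

print(vlc_decode("1010011"))  # → ['A', 'B', 'C', 'A']
```

The same sequential dependence holds, more severely, for arithmetic decoding, which is why describing such parsers generically across implementations is the hard part of automatic parser generation.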

3.3 Design-Time versus Run-Time Generation of the Decoder

There are two distinct approaches in CMC: design-time codec configuration and run-time codec configuration. Most efforts have been focused on design-time configuration. Run-time codec configuration is a challenging issue because of its run-time performance requirements. In design-time configuration, the defined modules may be complex in terms of implementation and computation. This is not the case for run-time configuration, where reasonable real-time performance is expected.

3.4 Granularity of Modules

One of the frontier research areas in CMC is defining proper granularity when designing a module. A decoder can be regarded as a module, and dividing a decoder into modules is only beneficial if there is a gain from the divide-and-conquer strategy. This means the total cost of all the modules in a decoder should be less than or equal to that of the undivided decoder. This problem is challenging because the cost may differ from implementation to implementation, from one division into modules to another, and from one platform to another. To date, more research has been focused on the framework than on the modules.

3.5 Evolution of the Media Codec

One unrealized but very interesting objective of CMC is the evolution of media codecs through module upgrades. There has been recent discussion within MPEG of a royalty-free video coding standard for Internet applications. To date, it has been very difficult to create a royalty-free standard because standards depend on patent holders. Even if only one algorithm is not royalty-free, there are only a few limited ways to make the entire codec free: wait for up to 20 years until the patent has expired, or design a codec standard that circumvents the patented algorithm. Both scenarios are very costly and seldom chosen. With a CMC framework, it is possible to pinpoint the patented algorithm in a module or set of modules in a decoder. A bypass standard then only has to include a new set of modules that avoid the patented algorithm. If this approach becomes common in standardization, the number of codecs will grow quickly, and it will be necessary to keep track of tools and their configurations. Although interesting, this idea is yet to be tested and implemented.

4 Conclusion

Standardization of the CMC framework has mostly been the work of MPEG. There are still many issues to be resolved before a dependable framework can be created and modules can be properly defined. MPEG's research is important for the fast and stable codecs of the future.
