COMPUTER ANIMATION AND VIRTUAL WORLDS
Comp. Anim. Virtual Worlds 2005; 16: 451–461
Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/cav.82

Image, Colour and Illumination in Animation

Image and video retexturing

By Yanwen Guo*, Jin Wang, Xiang Zeng, Zhongyi Xie, Hanqiu Sun and Qunsheng Peng

We propose a novel image/video retexturing approach that preserves the original shading effects without knowing the underlying surface and lighting conditions. For static images, we introduce a Poisson equation-based algorithm that simulates the texture distortion over the projected region of interest of the underlying surface while preserving the shading effect of the original image. We further handle videos by retexturing a key frame as a static image and then propagating the results onto the other frames. For video retexturing, we introduce a mesh-based optimization for object tracking to avoid texture drifting, and the graph cut algorithm to deal effectively with visibility shifts between frames. The graph cut algorithm is applied to a trimap along the boundary of the object to extract the textured part inside the trimap. The proposed approach performs image/video retexturing at a nearly interactive rate, and our experimental results show its satisfactory performance. Copyright © 2005 John Wiley & Sons, Ltd.

KEY WORDS: image/video retexturing; Poisson equation; graph cut

*Correspondence to: Yanwen Guo, State Key Lab of CAD&CG, Department of Mathematics, Zhejiang University, Hangzhou, China. E-mail: ywguo@cad.zju.edu.cn
Contract/grant sponsor: 973 Program of China; contract/grant number: 2002CB312101.
Contract/grant sponsor: NSFC; contract/grant numbers: 60033010; 60403038.

Introduction

Retexturing is the process of replacing existing textures in the concerned region of images/videos with new ones while preserving the original shading effects. It has wide applications in special effects for TV and film production, art and industrial design, distance learning, digital entertainment, and E-commerce. To achieve realistic retexturing effects, two basic problems must be solved. One is to make the new texture adequately wrapped and shaded so that it is consistent with the unknown shape of the underlying surface as well as the unknown lighting condition encoded in the original image. The other is how to prevent the new texture from drifting on the region of interest between adjacent frames of a video. Previous research has mainly addressed these problems individually, rather than both simultaneously for image/video retexturing. In this paper, we propose a novel image/video retexturing approach that preserves the original shading effects.

Manipulating textures in real images has fascinated people for a long time, and the understanding of texture has evolved meanwhile. Early works model texture as a statistical attribute1 of a surface and decompose real-world texture into a texture part and a lighting part.2 Recent studies categorize real-world textures into regular and irregular types3,4 and decompose the texture into geometry, lighting, and color components. Nevertheless, recovering the geometry and lighting components of a texture from a single image is very difficult. Because real-world images are usually taken in complex environments, physically based techniques like shape-from-shading (SFS) are sometimes complex, unstable, and inaccurate.
Rather than recovering the geometry and lighting information, our image retexturing approach aims at producing an illusion such that the replaced texture inherits the geometry and lighting information implied in the input image. By solving Poisson equations, a non-linear mapping between the new texture and the concerned region of the image is derived, reflecting the original shape of the underlying surface. The lighting effect of the original image is retained by adopting the YCbCr color space to represent the previous texture value at
each concerned pixel and making use of its Y component, which encodes the brightness of each pixel in the region of interest.

As for video retexturing, the main idea is to retexture a user-specified key frame using the image retexturing algorithm, and then iteratively propagate the replaced texture from the key frame onto the other frames. Two key issues for video retexturing need to be addressed carefully: one is texture drifting across the frame sequence, and the other is the visibility shift between adjacent frames. We introduce a feature-point tracking algorithm coupled with a mesh-based optimization scheme to resolve the texture drifting problem efficiently. Meanwhile, the graph cut algorithm is applied to a trimap along the boundary of the region of interest to handle visibility shifts. Figure 1 demonstrates an example of applying our video retexturing algorithm to a video sequence with 175 frames.

Figure 1. An example of our video retexturing. The top row lists 6 frames selected from a video clip with 175 frames, and the bottom row shows their corresponding retexturing results.

The remainder of this paper is organized as follows: Section 'Related Work' presents a brief overview of related previous work. Section 'Image Retexturing' describes the image retexturing part of our approach, including mesh generation, texture-coordinate calculation, lighting effects, and experimental results. Section 'Video Retexturing' further addresses video retexturing using motion tracking and the graph cut algorithms, as well as the video retexturing results. Finally, a summary and future research directions are given in Section 'Conclusions and Future Work'.

Related Work

Texture mapping needs to set a correspondence between each point on the 2D texture image and a point on the specified 3D surface. When a surface is displayed on the screen, it undergoes a projective transformation. The resultant image is therefore a non-trivial mapping of the 2D manifold of the original surface, depending on the shape of the surface. The problem becomes harder for retexturing because the 3D shape of the underlying surface is unknown. To simulate the non-linear mapping, Liu et al.3 introduced a user-assisted adjustment on the regular grid of the real texture, and obtained a bijective mapping between the regular grid of the texture and the deformed grid of the surface image. Obviously, this method requires elaborate user interaction and is only suitable for regular textures. Assuming that the lighting satisfies the Lambertian reflectance model, Fang et al.5 recovered the geometry of the specified area using an SFS approximation and derived a propagation rule to recalculate the mapping between the surface image and the new texture.

Extracting lighting information from real images is another challenge for retexturing. Tsin et al.2 suggested a Bayesian framework based on a certain lighting distribution model, which relies on the color observation at each pixel. Oh et al.6 presented an algorithm for decoupling texture illuminance from the image by applying an image processing filter. They assumed that large-scale luminance variations are due to the lighting, while small-scale details are due to the texture. Welsh et al.7 proposed a texture-synthesis-like algorithm for transferring color into a grayscale image; the algorithm works in the lαβ space and transfers the α, β components from the sample color image to the grayscale image. Its results automatically preserve the lighting effect of the original grayscale image.

Keeping good track of moving objects in video is a common goal in the vision and video editing field.
Those pixel-wise, non-parametric algorithms (e.g., optical flow8) are robust to small-scale motion only. For
large-scale motions, tracking methods based on feature points and parametric models are preferable. Feature-based tracking can capture motions such as rotation and scaling.9 Tracking features with an underlying model can further reduce the risk of error; for example, Jin et al.10 used a combined model of geometry and photometry to track features and detect outliers in video. Visibility changes may cause problems in tracking, as new parts may appear and old parts disappear in a video sequence. Both Agarwala et al.11 and Wang et al.12 introduced interpolation-based, user-assisted contour tracking frameworks for tracking parts of interest in video sequences. Chuang et al.13 described a video-matting algorithm based on accurate tracking of the specified trimap. A trimap is a labeling image in which 0 stands for background, 1 stands for foreground, and the rest is the unknown region to be labeled.

Image Retexturing

In this section, we present a novel approach for image retexturing. Assuming that a new texture of adequate size is given, as discussed above, the key issue lies in how to construct a mapping from the new texture domain to the concerned region of the original image. To achieve this, we first generate an initial 2D mesh on the concerned region and let its shape conform with the underlying geometry of this region in the original image.

Mesh Generation

Generating a proper initial mesh for video tracking has been addressed in the field of video compensation for compression. In Reference [14], nodes of the mesh are first extracted based on image features such as the spatial gradient and the displaced frame difference (DFD). A mesh is then built with these nodes using constrained Delaunay triangulation.

Here we propose a semi-automatic algorithm that accounts for both edge and gradient features. It performs three steps. The user first interactively outlines a boundary along the region of interest using snakes. Then a standard edge detection operator, for example the Canny operator, is applied inside the confined region, and initial nodes are automatically generated on the detected edges. Other points can be introduced by the user when necessary. The Delaunay triangulation algorithm is finally applied to yield an initial mesh M over the region of interest.
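To make this step concrete, the following is a minimal sketch (our own illustration, not the authors' code) using OpenCV and SciPy as stand-ins for the components named above. The snake-based boundary step is assumed to have already produced a region mask; the Canny thresholds and the node-subsampling stride are ad hoc choices, and SciPy's unconstrained Delaunay triangulation only approximates the constrained variant of Reference [14].

```python
import cv2                      # OpenCV, for edge detection and contours
import numpy as np
from scipy.spatial import Delaunay

def build_initial_mesh(gray, region_mask, step=10):
    """Build an initial triangle mesh over the region of interest.

    gray:        8-bit grayscale image (H x W)
    region_mask: boolean mask of the user-outlined region (assumed given)
    step:        subsampling stride for edge nodes (a free parameter)
    """
    # Detect edges inside the confined region (thresholds are ad hoc).
    edges = cv2.Canny(gray, 50, 150)
    edges[~region_mask] = 0

    # Subsample edge pixels as mesh nodes.
    ys, xs = np.nonzero(edges)
    nodes = np.stack([xs, ys], axis=1)[::step]

    # Add the region boundary so the mesh covers the whole region
    # (assumes a single connected region).
    contours, _ = cv2.findContours(region_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boundary = contours[0].reshape(-1, 2)[::step]
    nodes = np.unique(np.vstack([nodes, boundary]), axis=0)

    # Plain Delaunay triangulation of the node set.
    tri = Delaunay(nodes)
    return nodes, tri.simplices
```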
Texture Coordinates Calculation

The mapping from the new texture to the concerned region should be non-linear, to account for the distortion of the replaced texture induced by the underlying geometry. As reconstructing the geometry of the underlying surface with SFS and then performing texture mapping or synthesis can be unstable and costly, we instead calculate the texture coordinates for each pixel within the region of interest directly, by solving an energy minimization problem.

In the following description we use these notations: $I(x,y)$ denotes the color intensity of a pixel $(x,y)$ in the image; $\nabla I(x,y) = (I_x, I_y)$ is the color gradient at $(x,y)$, with $I_x = I(x,y) - I(x-1,y)$ the horizontal component of the gradient and $I_y = I(x,y) - I(x,y-1)$ the vertical component.

Suppose that a new texture of adequate size is first laid on the concerned region without distortion. In this case, the initial texture coordinate of the pixel $(x,y)$ within the concerned region is $(u_0(x,y), v_0(x,y))$. As in conventional methods, we use $(u(x,y), v(x,y))$ to denote the final texture coordinates incurred by the texture distortion.
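As a small sketch of this notation (again our own illustration; the function and variable names are assumptions, not the paper's), the gradients and the undistorted initial coordinates can be set up as follows:

```python
import numpy as np

def gradients_and_initial_coords(I):
    """Forward-difference gradients and undistorted texture coordinates.

    I: 2D intensity image (H x W), float.
    Returns Ix, Iy and the initial coordinates (u0, v0), which simply
    replicate the pixel lattice (the new texture laid on without distortion).
    """
    Ix = np.zeros_like(I)
    Iy = np.zeros_like(I)
    Ix[:, 1:] = I[:, 1:] - I[:, :-1]   # Ix = I(x,y) - I(x-1,y)
    Iy[1:, :] = I[1:, :] - I[:-1, :]   # Iy = I(x,y) - I(x,y-1)

    H, W = I.shape
    v0, u0 = np.mgrid[0:H, 0:W].astype(float)  # u0(x,y)=x, v0(x,y)=y
    return Ix, Iy, u0, v0
```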
Our algorithm for computing the final texture coordinates is based on the assumption that, in the intensity field of the image, the depth variation of the local surface is proportional to the local gradient transition. In fact, the mapping between a point in the new texture domain and a pixel within the concerned region is determined by concatenated transforms. That is, the texture coordinates of adjacent pixels are interrelated, and there exists an offset between them. This offset can be deduced from the underlying local geometry.

Figure 2 illustrates the calculation of the offset in the 1D case.

Figure 2. Texture coordinate offset for adjacent pixels in the 1D case.

Let $x$ and $x-1$ be two adjacent points. According to the above assumption, the variation of their underlying depths can be written as $h(x) - h(x-1) = k \cdot \nabla I(x)$, where $\nabla I(x)$ is the image gradient at position $x$ in the 1D case, and $k$ is the proportionality factor derived from the assumption. It follows that the offset between the texture coordinates of $x$ and $x-1$ should be the length of the green line in Figure 2:

$$\sqrt{1 + (k \cdot \nabla I(x))^2} \qquad (1)$$
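As an illustration of this 1D construction (a sketch of ours, not the authors' code): accumulating the per-step offsets of Equation (1) yields the 1D texture coordinates, so steep (high-gradient) regions consume more of the texture, which compresses it in screen space.

```python
import numpy as np

def texture_coords_1d(I, k=0.6):
    """Accumulate 1D texture coordinates from Equation (1).

    Each step from x-1 to x advances the texture coordinate by the
    arc length sqrt(1 + (k * grad I(x))^2).
    """
    grad = np.zeros_like(I)
    grad[1:] = I[1:] - I[:-1]                 # 1D image gradient
    offsets = np.sqrt(1.0 + (k * grad) ** 2)  # per-step offsets, Eq. (1)
    offsets[0] = 0.0                          # anchor u(0) at 0
    return np.cumsum(offsets)                 # u(x) = u(x-1) + offset(x)
```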
Similarly, in the 2D case there exist offsets between the texture coordinates of the pixel $(x,y)$ and those of its neighbors $(x-1,y)$ and $(x,y-1)$, which can be expressed as follows:

$$u(x,y) - u(x-1,y) = \sqrt{1 + (k_1 \cdot I_x)^2} \qquad (2)$$

$$u(x,y) - u(x,y-1) = \sqrt{1 + (k_2 \cdot I_y)^2} \qquad (3)$$

$$v(x,y) - v(x,y-1) = \sqrt{1 + (k_1 \cdot I_y)^2} \qquad (4)$$

$$v(x,y) - v(x-1,y) = \sqrt{1 + (k_2 \cdot I_x)^2} \qquad (5)$$

where $k_1$ and $k_2$ are both proportionality factors derived from the assumption. The horizontal component $I_x$ of the image gradient impacts the $u$ offset more strongly than the $v$ offset, whereas the vertical component $I_y$ impacts the $v$ offset more strongly than the $u$ offset, so we set $k_1$ and $k_2$ to the different values 0.6 and 0.3, respectively, in our experiments.

For a pixel lying on the edges of the generated mesh $M$, the $u$ component of its texture coordinate satisfies:

$$u(x,y)\big|_{(x,y)\in\partial M} = u(x-1,y) + \sqrt{1 + (k_1 \cdot I_x)^2} \qquad (6)$$

Making the approximation $u(x-1,y) = u_0(x-1,y)$, and considering that $u_0(x-1,y) = u_0(x,y) - 1$, the above equation is transformed into:

$$u(x,y)\big|_{(x,y)\in\partial M} = u_0(x,y) + \sqrt{1 + (k_1 \cdot I_x)^2} - 1 \qquad (7)$$

However, for the pixels lying inside the triangles of $M$, direct application of the offset equations (2)-(5) may result in an inconsistent mapping. To reduce the error, we obtain their texture coordinates by solving the following energy minimization problem with respect to the $u$ components (the $v$ components are computed similarly):

$$\min_{u(x,y)} \int_M |\nabla u(x,y) - D_u(x,y)|^2 \qquad (8)$$

where $\nabla u(x,y) = (u(x,y) - u(x-1,y),\; u(x,y) - u(x,y-1))$ and $D_u(x,y) = \left(\sqrt{1 + (k_1 \cdot I_x)^2},\; \sqrt{1 + (k_2 \cdot I_y)^2}\right)$.

Minimizing Equation (8), it can easily be converted into a set of Poisson equations of the form:

$$\Delta u(x,y) = \operatorname{div} D_u(x,y) \qquad (9)$$

in which $\Delta$ and $\operatorname{div}$ represent the Laplacian and divergence operators, respectively. The boundary conditions for the above Poisson equations are determined by the $u$ components of the texture coordinates of the pixels lying on the edges of $M$, which are calculated using Equation (7). We adopt the conjugate gradient algorithm to solve them, and it runs very fast.

As discussed above, the non-linear mapping between points in the new texture domain and pixels within the concerned region has thus been converted into solving a set of linear equations. Although some approximation is involved, the presented algorithm is trivial to implement and produces satisfactory effects in most of our experiments.
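The sketch below illustrates this solve under simplifying assumptions of ours: a rectangular region whose border pixels carry the Dirichlet values of Equation (7), whereas the paper solves over the mesh interior with boundary values on the mesh edges; SciPy's sparse conjugate-gradient solver stands in for the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def solve_u(Ix, Iy, u_boundary, k1=0.6, k2=0.3):
    """Solve the Poisson equation (9) for the u texture coordinates.

    Ix, Iy:      image gradients over the region (H x W)
    u_boundary:  array holding Equation (7) values on the region border
                 (only its border entries are used)
    """
    H, W = Ix.shape
    Dx = np.sqrt(1.0 + (k1 * Ix) ** 2)   # first component of D_u
    Dy = np.sqrt(1.0 + (k2 * Iy) ** 2)   # second component of D_u

    # Forward-difference divergence of D_u (adjoint of the backward
    # differences used for the gradient of u).
    div = np.zeros((H, W))
    div[:, :-1] += Dx[:, 1:] - Dx[:, :-1]
    div[:-1, :] += Dy[1:, :] - Dy[:-1, :]

    # Index interior pixels; border pixels are Dirichlet boundary values.
    idx = -np.ones((H, W), dtype=int)
    interior = np.zeros((H, W), dtype=bool)
    interior[1:-1, 1:-1] = True
    idx[interior] = np.arange(interior.sum())

    rows, cols, vals = [], [], []
    b = np.zeros(interior.sum())
    for y, x in zip(*np.nonzero(interior)):
        p = idx[y, x]
        rows.append(p); cols.append(p); vals.append(4.0)
        b[p] = -div[y, x]                      # -Laplacian(u) = -div D_u
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if interior[ny, nx]:
                rows.append(p); cols.append(idx[ny, nx]); vals.append(-1.0)
            else:
                b[p] += u_boundary[ny, nx]     # move known value to RHS
    A = sp.csr_matrix((vals, (rows, cols)), shape=(b.size, b.size))

    u = u_boundary.copy()
    u[interior], _ = cg(A, b)                  # conjugate gradients, as in the paper
    return u
```

Since $D_v(x,y) = (\sqrt{1 + (k_2 \cdot I_x)^2},\; \sqrt{1 + (k_1 \cdot I_y)^2})$, the $v$ components follow from the same routine with $k_1$ and $k_2$ exchanged, keeping the two solves independent and linear.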
Lighting Effects

Having obtained the non-linear correspondence between each pixel within the concerned region and a point on the new texture, the next step is to map the new texture while preserving the lighting information encoded in the original image. Normally, the intensity of a texture can be regarded as the accumulated effect of color and brightness. If the brightness information of the original texture can be extracted independently, fusing it with the new texture will resolve the problem of preserving the lighting. Fortunately, the YCbCr color space illuminates us.

YCbCr is a well-known color space compliant with the digital video standard, in which Cb and Cr mainly represent the hue of each textured pixel and the Y component encodes its brightness. We simply copy the CbCr components of the new texture to the target image at each concerned pixel during texture mapping, and use a weighted blending of the Y component of both the displayed
intensity of the concerned pixel on the original image and that of the corresponding sample point on the new texture plane. Define $Y_t, Cb_t, Cr_t$; $Y_i, Cb_i, Cr_i$; and $Y_r, Cb_r, Cr_r$ as the corresponding components of the new texture, of the concerned pixel on the original image, and of the final result, respectively. The new intensity of the concerned pixel can then be expressed as:

$$Cb_r = Cb_t \qquad (10)$$

$$Cr_r = Cr_t \qquad (11)$$

$$Y_r = m_t \times Y_t + (1 - m_t) \times Y_i \qquad (12)$$

Here $m_t$ is the weight balancing the new texture against the brightness of the concerned pixel on the image: the larger $m_t$ is, the less the lighting of the retextured image resembles that of the original image. We empirically set it to 0.6 in our experiments.
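Equations (10)-(12) amount to a per-pixel channel swap plus a luma blend. A minimal sketch of ours using OpenCV (note that OpenCV orders the channels as Y, Cr, Cb; the region mask and the pre-warped texture image are assumed inputs):

```python
import cv2
import numpy as np

def blend_ycbcr(original_bgr, warped_texture_bgr, mask, m_t=0.6):
    """Recolor the region per Equations (10)-(12).

    original_bgr:       source image (H x W x 3, uint8)
    warped_texture_bgr: new texture already resampled by the computed
                        (u, v) coordinates into image space (H x W x 3)
    mask:               boolean mask of the concerned region
    """
    img = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2YCrCb).astype(float)
    tex = cv2.cvtColor(warped_texture_bgr, cv2.COLOR_BGR2YCrCb).astype(float)

    out = img.copy()
    out[mask, 1:] = tex[mask, 1:]   # copy Cr and Cb, Equations (11) and (10)
    out[mask, 0] = m_t * tex[mask, 0] + (1 - m_t) * img[mask, 0]  # Eq. (12)

    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_YCrCb2BGR)
```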
Results of Image Retexturing

Figure 3 demonstrates our experimental results of image retexturing. We can see that the results preserve the shading information and meanwhile yield the illusion that the new textures adhere to the underlying surfaces.

Figure 3. In (a) and (b), left are the original images and right the retexturing results.