Latest Computer Vision Research From Cornell and Adobe Proposes An Artificial Intelligence (AI) Method To Transfer The Artistic Features Of An Arbitrary Style Image To A 3D Scene

Art is an interesting but very complex discipline. Creating artistic images is not only time-consuming but also demands considerable skill. If this holds for 2D artworks, imagine scaling beyond the picture plane to time (in the case of animation) or 3D space (as in sculpture or virtual environments). This extension introduces new challenges and limitations, which the paper discussed here addresses.

Previous work based on 2D modeling stylizes video content frame by frame. Each generated frame is of high quality, but the resulting video often contains jittery elements, caused by temporal discontinuity between the generated frames. Moreover, such methods do not model the 3D environment, which adds complexity to the task. Other works that do focus on 3D modeling suffer from geometrically inaccurate reconstructions of point clouds or triangle meshes and from a lack of stylistic detail. The reason lies in the differing geometric properties of the source mesh and the generated mesh, as the style is applied after the linear transformation.


The proposed method, called Artistic Radiance Fields (ARF), can transfer the artistic features of a single 2D image to a real-world 3D scene, producing novel views that are faithful to the input style image (Fig. 1).


For this purpose, the researchers transform a photo-realistic radiance field, reconstructed from multiple photos of a real-world scene, into a stylized radiance field, yielding high-quality stylized renderings from novel viewpoints. The results are shown in Figure 1.

One example takes real-world pictures of an excavator and uses Van Gogh's famous "The Starry Night" painting as the style image; the result is an excavator rendered in colors reminiscent of the painting.

The ARF pipeline is depicted in the figure below (Figure 2).


The core of this architecture is the proposed combination of a nearest neighbor feature matching (NNFM) loss and a color transfer step.

NNFM compares feature maps, extracted with the well-known VGG-16 convolutional neural network (CNN), between the style image and the rendered views. In this way, complex high-frequency visual details of the style can be transferred consistently across viewing angles.
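The idea behind the NNFM loss can be sketched on toy feature maps. This is a minimal illustration, not the authors' implementation: it assumes per-location feature vectors have already been extracted (in ARF they come from a VGG-16 layer; here random arrays stand in), and it scores each rendered feature by its cosine distance to the nearest style feature.

```python
import numpy as np

def nnfm_loss(render_feats, style_feats):
    """Nearest neighbor feature matching (NNFM) loss on toy feature maps.

    render_feats: (N, C) feature vectors from the rendered view.
    style_feats:  (M, C) feature vectors from the style image.
    For every rendered feature, find the style feature with the smallest
    cosine distance and average those minimum distances."""
    def normalize(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-10)
    r = normalize(render_feats)
    s = normalize(style_feats)
    cos_dist = 1.0 - r @ s.T            # (N, M) pairwise cosine distances
    return cos_dist.min(axis=1).mean()  # nearest-neighbor match per feature

# Identical feature sets give (near-)zero loss; unrelated ones give a larger loss.
rng = np.random.default_rng(1)
feats = rng.normal(size=(16, 8))
loss_same = nnfm_loss(feats, feats)
loss_diff = nnfm_loss(feats, rng.normal(size=(16, 8)))
```

Because each rendered feature is free to pick its own nearest style feature, the loss rewards reproducing local style patterns without forcing a global spatial alignment between the two images.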


Color transfer is a technique used to avoid significant color mismatches between the synthesized views and the style image. It applies a linear transformation to the pixels of the input image so that their mean and variance match those of the style image's pixels.

In addition, the architecture uses a deferred back-propagation method to compute losses on full-resolution images while reducing the load on the GPU. The first step renders the image at full resolution and computes the loss and its gradient with respect to the pixel colors, producing a cached gradient image. These cached gradients are then back-propagated patch-wise, accumulating the parameter gradients.
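The two-step trick can be illustrated with a toy differentiable "renderer". This sketch replaces the radiance field with a simple linear map `pixels = A @ theta` (in ARF the render is a full volume rendering), but the structure is the same: cache the cheap pixel-space gradient first, then re-process in patches so only a small portion is in flight at once.

```python
import numpy as np

# Toy stand-in for the renderer: pixels = A @ theta.
rng = np.random.default_rng(2)
n_pixels, n_params = 64, 5
A = rng.normal(size=(n_pixels, n_params))
theta = rng.normal(size=n_params)
target = rng.normal(size=n_pixels)

# Step 1: render the full-resolution image and compute the loss gradient
# with respect to the *pixel colors* only (cheap to store, no param graph).
image = A @ theta
pixel_grad = 2.0 * (image - target) / n_pixels   # d(MSE)/d(pixel), cached

# Step 2: revisit the image patch by patch, back-propagating the cached
# pixel gradients and accumulating parameter gradients (low peak memory).
patch_size = 16
theta_grad = np.zeros(n_params)
for start in range(0, n_pixels, patch_size):
    sl = slice(start, start + patch_size)
    theta_grad += A[sl].T @ pixel_grad[sl]   # chain rule on this patch only

# For reference: one monolithic back-propagation over the whole image.
full_grad = A.T @ pixel_grad
```

The patch-wise accumulation yields the same parameter gradient as a single full-image backward pass, so memory savings come at no cost in gradient accuracy.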

The ARF approach presented in this paper brings several advantages. First, it produces striking stylized renderings with almost no artifacts. Second, stylized images can be rendered from novel viewpoints, building on state-of-the-art 3D reconstructions from a modest set of input images. Finally, thanks to the deferred back-propagation method, the architecture significantly reduces its GPU memory footprint.

This article is written as a research summary by Marktechpost staff based on the research paper 'ARF: Artistic Radiance Fields'. All credit for this research goes to the researchers on this project. Check out the paper, GitHub link, and project page.


Daniele Lorenzi received his M.Sc. in ICT for Internet and Multimedia Engineering in 2021 from the University of Padua, Italy. He is a Ph.D. candidate at the Institute of Information Technology (ITEC) at the Alpen-Adria-Universität (AAU) Klagenfurt. He currently works in the Christian Doppler Laboratory ATHENA, and his research interests include adaptive video streaming, immersive media, machine learning, and QoS/QoE evaluation.

