Artistic Style Transfer, Now in 3D!


Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Style transfer is an interesting problem in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined in this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting, which leads to super fun and really good-looking results.
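
(A quick technical aside: in its classic 2D form, introduced by Gatys and colleagues, style transfer optimizes the pixels of an output image so that its deep-network features match the content photo while its feature statistics match the style painting. The sketch below only illustrates that classic 2D recipe, not the 3D technique in this video; it assumes PyTorch with a recent torchvision, and the file names content.jpg and style.jpg are placeholders.)

    # Minimal sketch of classic 2D neural style transfer (Gatys et al.).
    # Illustrative only; assumes PyTorch + a recent torchvision.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"

    def load_image(path, size=256):
        tf = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])
        return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

    content = load_image("content.jpg")  # the photo (placeholder file name)
    style = load_image("style.jpg")      # the painting (placeholder file name)

    # A pretrained VGG-19 serves as a fixed feature extractor.
    vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    style_layers = [0, 5, 10, 19, 28]  # shallow-to-deep conv layers capture "style"
    content_layers = [21]              # a deeper conv layer captures "content"

    def features(x, layers):
        feats, h = {}, x
        for i, layer in enumerate(vgg):
            h = layer(h)
            if i in layers:
                feats[i] = h
        return feats

    def gram(f):
        _, c, hgt, wdt = f.shape
        f = f.view(c, hgt * wdt)
        return f @ f.t() / (c * hgt * wdt)

    content_feats = features(content, content_layers)
    style_grams = {i: gram(f) for i, f in features(style, style_layers).items()}

    # Start from the content photo and optimize its pixels directly.
    output = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([output], lr=0.02)

    for step in range(300):
        opt.zero_grad()
        out_c = features(output, content_layers)
        out_s = features(output, style_layers)
        content_loss = sum(F.mse_loss(out_c[i], content_feats[i]) for i in content_layers)
        style_loss = sum(F.mse_loss(gram(out_s[i]), style_grams[i]) for i in style_layers)
        (content_loss + 1e4 * style_loss).backward()
        opt.step()

Modern variants replace this slow per-image optimization with feed-forward networks, which is part of how real-time results become possible.
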
We have seen plenty of papers doing variations of style transfer, but can we push this concept further? And the answer is: yes! For instance, few people know that style transfer can also be done in 3D. If you look here, you see an artist performing this style transfer by drawing on a simple sphere and getting their artistic style to carry over to a complicated piece of 3D geometry. We talked about this technique in Two Minute Papers episode 94, and for your reference, we are currently at over episode 340. Leave a comment if you’ve been around back then!
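
(Another aside for the technically curious: the idea of painting a sphere and carrying the look onto other geometry is reminiscent of the old “matcap” or LitSphere trick, where the painted sphere is simply indexed by each pixel’s camera-space surface normal. The paper uses far more sophisticated guidance than this, but a minimal sketch of that naive baseline, assuming NumPy and made-up input arrays, looks like the following.)

    # Naive "matcap"-style baseline, NOT the paper's algorithm: index the
    # painted sphere exemplar by the camera-space normal of each pixel.
    import numpy as np

    def matcap_lookup(sphere_img, normals):
        # sphere_img: H x W x 3 painted sphere; normals: H' x W' x 3 unit normals.
        h, w, _ = sphere_img.shape
        u = ((normals[..., 0] * 0.5 + 0.5) * (w - 1)).astype(int)          # x -> column
        v = ((1.0 - (normals[..., 1] * 0.5 + 0.5)) * (h - 1)).astype(int)  # y -> row (flipped)
        return sphere_img[v, u]

    # Toy example with random data standing in for a real render:
    sphere_img = np.random.rand(256, 256, 3)                    # artist's painted sphere
    normals = np.random.randn(480, 640, 3)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)  # make them unit length
    stylized = matcap_lookup(sphere_img, normals)               # 480 x 640 x 3 frame

A plain lookup like this transfers shading and color but loses brush-stroke structure and temporal behavior, which is roughly the gap that example-based 3D stylization research aims to close.
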
And this previous technique led to truly amazing results, but it still had two weak points. One, it took too long. As you see here, this method took around a minute or more to produce these results. And hold on to your papers, because this new paper is approximately 1000 times faster than that, which means that it can produce 100 frames per second at a whopping 4K resolution. But of course, none of this matters… if the visual quality is not similar. And, if you look closely, you see that the new results are indeed really close to the reference results of the older method.

So, what was the other problem? The other problem was the lack of temporal coherence. This means that when creating an animation, it seems as if each individual frame of the animation had been drawn separately by an artist. In this new work, this is not only eliminated, as you see here, but the new technique even gives us the opportunity to control the amount of flickering.
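
(One more aside: the paper may control flicker differently, but a common way to quantify temporal coherence in video stylization is to warp the previous stylized frame onto the current one with optical flow and measure how much they disagree. A minimal sketch, assuming NumPy and a precomputed flow field:)

    # Rough sketch of a temporal-coherence measure; a training pipeline can
    # penalize this quantity to reduce flicker. Not necessarily what this
    # exact paper does. `flow` gives, per pixel, the displacement back to
    # the previous frame.
    import numpy as np

    def warp_previous(prev_frame, flow):
        # Backward-warp prev_frame to the current frame (nearest-neighbor sampling).
        h, w, _ = prev_frame.shape
        ys, xs = np.mgrid[0:h, 0:w]
        src_x = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
        src_y = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
        return prev_frame[src_y, src_x]

    def temporal_error(curr_stylized, prev_stylized, flow):
        # Mean squared difference between a frame and its flow-warped predecessor.
        warped = warp_previous(prev_stylized, flow)
        return float(np.mean((curr_stylized - warped) ** 2))

    # Toy check: identical frames with zero flow produce zero flicker.
    frame = np.random.rand(480, 640, 3)
    print(temporal_error(frame, frame, np.zeros((480, 640, 2))))  # -> 0.0
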
With these improvements, this is now a proper tool to help artists perform this 3D style transfer and create these rich virtual worlds much quicker and easier in the future. It also opens up the possibility for novices to do the same, which is an amazing value proposition. Limitations still apply; for instance, if we have a texture with some regularity, such as this brick wall pattern here, the alignment and continuity of the bricks on the 3D model may suffer. This can be fixed, but it is a little labor-intensive. However, you know our saying: two more papers down the line, and this will likely cease to be an issue. And what you’ve seen here today is just one paper down the line from the original work, and we can already do 4K resolution at 100 frames per second. Unreal.

Thanks for watching and for your generous support, and I’ll see you next time!

86 thoughts on “Artistic Style Transfer, Now in 3D!”

  1. 0:12 Hey hey, that's a picture of Tübingen Karlsbridge, I'm literally watching this from the river island on the left.

  2. Very cool indeed! This could open up new stylistic choices for games and help concept artists to play with color and light very fast.

  3. hi Two Minute Papers! it's really unfair that an AI experiment is not covered on this channel: the one where AI, with the help of lasers, made a Bose-Einstein condensate

  4. Damn you Károly! Every time I finish watching one of your videos, there are papers EVERYWHERE on my floor! I try to hold on to them, but I can't… stop making me drop my papers >_<

  5. Awesome!!! How hard would it be to implement this as a Blender add-on? And is it even open source?
    Awesome research.

  6. Always interested in techniques to aid in game development. I hope that at some point there will be a workflow that allows for a fully AI assisted workflow. I am talking about the complete process of modeling, texturing, animating and then placing. Imagine how cool it would be if anyone could make a game and tell story on their own, without the huge budget (in both time and/or money) required now. Amazing!

  7. Give it 15 years and we won't need to manually create the CGI and VFX for movies; heck, we won't need to manually write the script either, or manually film it, or manually create the soundtrack, or manually add the sound effects to all the little parts that nobody notices… AI is going to do everything. People will pay to have their own custom Marvel movies generated just from a simple plot description or movie title such as "Batman vs Spiderman"… AI will automate the entire process and generate an original movie. Of course Marvel will still own and profit from it and own all the rights, no doubt… until software leaks onto the internet that allows people to generate the movies themselves without the movie studios' knowledge… that will be the new pirating.

  8. I wish the videos were slightly longer.
    I watch this as a layman interested in the future and I'd love to know slightly more.

  9. I watch these videos just to hear you say "And I see you… NEXT TIIIMEEE!!" 😀
    Love it, thanks for the awesome work you do, dude! 🙂

  10. As an artist and a programmer, this is amazing, and it's clear what the future of creative media will be: we will be directing software to generate art as we envision it, rather than spending hours and hours doing manual work. Like it or not, this is where things are going. Perhaps then there will be a renaissance of sorts where we focus on the messages our art conveys, because we no longer need to spend so much time on the visuals.

  11. Yeah, I still talk about your episode 94 to my students: that first technique was already impressive. And I can't wait to see a similar tool for amateurs.

    Keep doing such amazing work!

  12. This is absolutely incredible. I'm guessing (and hoping) this will drastically improve the performance and artistic license in 3D and 2D games. What a time to be alive.

  13. 100 Videos later

    " Dear fellow scholars, this is Two Minute Papers and today we take a look at the Skynet-AI paper, also known as the Judgement Day Neural Network " 💀

    😂😂😂😂

  14. What if you combine this with the work shown in your "Do Neural Networks Need To Think Like Humans?" video? Could robots (with cameras) be given a more efficient path toward understanding the shapes of the 3D world by using this style transfer on the camera feed & analyzing all the variations?

  15. I can only think of an anime game that has perfect anime graphics, looking exactly like an actual episode

  16. At the time the artistic style transfer technique was published, just the fact that it could turn photos into strikingly similar Van Gogh-style paintings was already mind-boggling.

    And then came video artistic style transfer. Especially the Ice Age one, where no more flickering appeared in the video.
    And then, this…?
    Solving all of the above?
    At a 1000x speed improvement?

    You can even try the demo from the link in the description, giving results in *realtime*. It's just crazy considering there are many services that can take up to 10 minutes just to turn my friend's 640px face photo into the smeary world of Van Gogh.

    It's really an amazing time to be alive!

  17. Károly, can you talk about the approach Google Stadia does? It does style transfer in real time to a world map, if you want. Thanks, love all your videos.

  18. Awesome! I can only imagine how many possibilities there will be for the entertainment industry when this gets even better in the future!

  19. Synthwave, vice-city, macintosh style generator plus AI-music generator we'll have another 80-90s year revival, yey. Are you feeling it now mr krabs? the AI-overlord is doing his job well.

  20. which episode are you referring to as 94? youtube results show that it was an episode titled "Estimating Matrix Rank With Neural Networks | Two Minute Papers #94"

  21. But… doesn't all of this look like some elaborate matcap-like system? If the purpose would've been realtime 3D rendering, it seems to me that some little time spent on a shader would be able to lay some textures, with or without animation, with different mappings, to achieve a similar result, and the textures would be quick to be made by an artist, and there would be no UV issues. I think I'm missing something…

  22. Removing the flicker is revolutionary on a single cpu. Why is it free to copy the code? That could be sold to any animation or game studio.

  23. I have been around since almost the beginning as you know 😀
    Also, this is amazing! Can't wait to see people use this in the wild!

  24. Hi, you present most papers as if these tools are available to the public. I'd like you to elaborate on that, please, maybe with a video on how to use these inventions: what should interested parties do? For people who don't have a clue, what kind of skills are needed? That would be very useful.

  25. No matter how much the still image looks hand drawn, after you enable the animation you still feel that this is a 3D object. I think what we need most now is a style transfer for animations.

  26. automatic texture channels generation from single 2d image –> https://www.youtube.com/watch?v=bfabw9RDHDY

  27. This is awesome. I've always wanted a way for everyone with an imagination, but maybe not honed skill, to be able to make really stellar work. Creativity shouldn't be barred by one's opportunity to spend years and years honing a craft, unless the person wants to spend the time doing so.

  28. I'm a software engineer, specialising in AI, data science and computer graphics, but personally I dislike these "Artistic style transfer" types of thing. All this new technology, and people just want to recreate what humans have already done, reducing it to a photoshop plugin in the process. This is not creative, it is not art. Van Gogh's paintings looked the way they did because of the materials he used, and the way he used them. Work such as this is an attempt to deny such creativity – to say "look, Van Gogh just used this formula, that's all there was to it". It reveals a contempt for art, in my view.

  29. So the people of the future will be able to draw a line and have it magically transformed into a theory of everything, cool

  30. This is fucking insanity!!! They say AI can't be our god, but it literally can bring inanimate objects to motion

  31. the flickering animation seems a bit more natural, it gives back that hand-drawn feel better. I wouldn't consider it a problem, but it's cool that they could actually smooth it out

  32. Cool, but the light/shading is locked to the camera. If there's a highlight drawn on the right side of the style source image the 3d model will always be drawn with light coming from the right. I did something like this but with the style source image mapped to a 3d sphere, so the light can be in 'world space' not stuck to the camera: https://www.youtube.com/watch?v=IGAcTPvyCHo
