Siggraph 2015 Rewind - Brandon Parvini: Simulation and Render Workflows in Cinema 4D


Instructor: Cineversity

  • Duration: 50:13
  • Views: 6573
  • Made with Release: 16
  • Works with Release: 16 and greater

X-Particles, Octane and Photogrammetry in C4D.

Brandon Parvini demonstrates work done by Ghost Town Media for The Lazarus Effect, as well as his own personal project Haiku0715.

05:06 The Lazarus Effect
07:32 Haiku0715
14:07 Photogrammetry
17:04 X-Particles Tentacles
20:47 X-Particles Liquid
33:56 Octane
37:32 Octane Node Graph

The Haiku0715 project was intended primarily as a test bed for learning both X-Particles and Octane for Cinema 4D. You'll see how Brandon used X-Particles to create both tentacle and fluid effects, baking each component to Alembic when he was satisfied with the simulation. You'll also see how Octane's node editor was used to develop shaders for the project. Brandon also explains the power of photogrammetry to transform multiple photographs into complete models, using tools like Photoscan and acquiring photoscanned models from Ten24.


Transcript

- Hi everyone. I'm Brandon Parvini. I'm the Creative Director and Lead 3D over at Ghost Town Media. I want to take you guys through my recent workflows and wrinkles I've been running into for some of the recent projects. Why don't I start us off by-- I'll show you guys some of the recent work we've been doing with our most recent reel. - Thanks. That's what we've been up to, recently. Bye. For us, we're Cinema 4D, full stop. That is really our engine of choice. Up until we found Cinema we were an After Effects house, bumbling our way along and trying to figure out how to get everything done in 2.5D. I jumped in and out of platforms, and I'd never really felt at home inside of there. I felt like I was fighting the software. It didn't really communicate to me. Since I was in 5th grade I was using Photoshop. It had an Adobe-ish feel to me, and it just made sense. The biggest thing for us that we were able to realize is it allowed us a certain creativity in the process, and that creativity in the process ends up resulting in the creativity in the final result. We don't have large teams. We're big on generalists. You own your shot. You come in and you're going to do 13 shots by yourself. You're going to do the modeling, you're going to do the compositing, you're going to do the animation, and if I ask you to, you might have to go pick up coffee too. Although it's usually my job. That idea, that holistic approach to the craft and to experimentation inside of the projects is really one of the bedrocks for me that Cinema brought to the table for us. There's a certain aspect of play that we're able to have when we're developing and building rigs and things like that. While Cinema classically has been understood as a motion graphics tool, and obviously is wildly successful at that, for us it's our foundation. If it's for visual effects, if it's for print, whatever it is, this is our home base.
We generate all of our assets out of here, and this is what gives us good content to be able to work with. It was brought to a specific point when we had the opportunity to work on this project called The Lazarus Effect, where we were having to do a lot more film-style visual effects pieces, building everything that you'd classically see in other engines, but we had to figure out how to do it our way in Cinema and have it be notable and have it be modular and easy to deal with. This is a rig that I built in three days when we were trying to get the R&D going for how we wanted to do some of the effects in the film itself. We'd be combining photogrammetry, 3D scans, all kinds of stuff. Another major aspect that showed up when I was doing that project is the need for being able to conduct particles in an interesting way. We had to create all of the UI for the film before the film was even shot, and they ended up being storytelling pieces to explain what was happening with these medical serums that were working like this and doing that. I had to be able to conduct the particles in a way that I could say, "okay, in this instance of it the particles can live for a certain amount of time, then die, then turn into this, then do this, then do that." I'm not a TD. I really don't know what I'm doing with this stuff. I'm not a Python guy, I don't know how to code. But X-Particles came along, and I was like, "well this may or may not make sense, actually." What I was able to do is play around with the build style. It has a really intuitive approach, which is wildly complementary with the way that Cinema works in general. We were able to generate all of our UI and everything just by being able to conduct the particles in a fairly flexible and adept way. Everything speaks to each other, I can use the MoGraph Effectors on things. Everything was linking up in this really, really nice way.
I felt like, even at the end of it-- I was happy with it, and it was pretty, and I enjoyed doing it, and it was a lot of fun. I didn't realize how much fun I had working with particles, which makes me feel really nerdy. It was fun. I probably could have kept doing it for a couple more weeks before I started getting itchy. And I was like, "okay, this is great. I wonder what else it could do?" I found myself playing around here and there, trying to get different setups going. Okay, can we do fluid sims? Or can we do this, do that? I started playing around with it a little bit, and the next thing I know, X-Particles 3 comes out. They release a whole slew of new functions and features, and I felt like I hadn't even tapped into the initial 40% of what I could have done with it as an engine inside of Cinema. I was like, "I really have to get into this." I found myself staring from the outside in. At the same time, around the time X-Particles 3 comes out, there's this really new interesting render engine called Octane that shows up, which is a GPU-based render engine. I found it wildly fascinating. A totally different workflow style, where I'm able to build my own little render farm by walking around the office, stealing GPUs and cramming them inside of a system, and saying, "okay, go." We don't have heavy metal at the office. We don't have racks of CPUs and all kinds of infrastructure that would be really conducive to swinging towards Arnold, which I'm sure a lot of you are hearing a lot about right now. It's a brilliant, brilliant render engine, but you have to have the infrastructure and you have to have the coin to make it work. A lot of the projects that we do over at Ghost Town, I might be working on it by myself, I might have a team of three guys. Maybe I have time to send it off to a render farm like Rebus, but maybe I don't. This idea of, "maybe I can make my own little render farms, and it's $1,500 to buy some game cards, and I stick it in a system...
I really should learn what I'm doing with this." I couldn't find a good project, things were going through, and I wasn't finding a good opportunity to test things out. I frankly didn't feel battle-tested in the software well enough to know that I'd feel confident to do it for a paying client. You know, I'm just going to make my own project. I'm going to do a little short, here, going to figure out how some of this stuff works. Do an expanded version of a daily. If you guys follow guys like Raw and Rendered, Beeple, they have these daily projects that they do that get them into trouble and get them comfortable with a lot of this. For me, I needed to make sure that I could actually figure out a problem. Not just, "oh, I can sketch around and make something that looks kind of pretty, but I haven't really done any rendered animations or this or that or the other." I wanted to spend enough time to back myself into a corner that I'd have to really understand how to get myself out of a problem if one came up. That's how I trust myself inside of software. It's not when things are going well, it's when things are going poorly. System just went down. You just lost a GPU. I don't know, that license isn't working anymore. Those are the moments you lose all of the creative juice for the project, because you're worrying about all these other things. I want to stop myself down, and I'll show you what I managed to kick out. Spent about four weeks. Most of it was just waiting for the renders to kick through, because I had a couple of old Quadros and one Titan. If I had a couple of 980s I would have been done in two or three weeks. This is Haiku 0715. It's a really creative title. I didn't know what to call it. I don't find myself that fancy. - Thanks, guys.
One of the other things, and this is a bit of a shout out to the Cinema 4D community, when I'd seen what Raoul had done for the Semi-Permanent titles-- and if you're not familiar, it's awesome, and you're losing out if you haven't seen it. He made this incredible set of opening titles by himself, in Cinema, using Octane, and just killed it. Did the audio, did everything. When I was getting ready to do this project, I was like, "I have no excuse. There's no excuse not to do it. I just need to get in there and make some trouble for myself." That's one of the things that I really do desperately love about the way that this platform activates people to just go make something. It's not about waiting for someone to come in line and help you get this done, or do that. You get in there and you do it yourself. Start making something, for god's sakes. I really, really, really enjoy that process, that I feel like I've been activated by this platform. One thing that this project taught me is a better way to be patient, to a certain degree. I didn't know what I was doing. I hadn't really done fluid sim. I'm not a Python guy. I don't really know RealFlow. I kind of understood X-Particles, but I was like, "I really need to get a better sense of this." What I had to do in order to have this all make sense is to, like a good chef, prepare each section of the meal. All the ingredients that I was going to end up adding up and putting together, I had to know they worked individually. I relied heavily on Alembic, and caching, and I built every single little piece for this project by itself. Tested it, made sure it worked, and then baked it. If you don't use Alembic for Cinema 4D, try it. The great thing about it is you're almost turning your work into this brilliant, moving piece of clip art. Like, "okay, great. I have it. It's locked. It's done, and it's all encapsulated.
I'm not going to lose a BIN file, I'm not going to lose my cache over here or have a plugin that doesn't work over there. I take it, I bake it. It's 10 gigs and I can toss it around between all my systems. I don't have to worry about what it took to make it. Now I have a committed piece, and I can move forward in the production pipeline. I relied on that so heavily inside of this. It was really my saving grace. Let me actually take you guys through some of the different aspects of building this all up. The face that you see there at the end was actually pulled from a website called Ten24. They're a photogrammetry house out of the UK, I believe. My big thing is I'm not a modeler, I don't have the patience for it. I'll hop into ZBrush and things like that. I like getting assets that look good quickly. I'm not a patient man. I just am not. I don't want to sit here and model out a head, and worry about this, and dive into that. I just want it now so I can start doing the fun stuff afterwards. Photogrammetry and 3D scanning are both areas that I'm wildly infatuated with. The idea of physical acquisition, of just grabbing my information from the real world, bringing it in, tossing it into the system. I have a good looking asset. I would never have been able to sit here and get the pores to look right, get the little scar that you see on the side of the head. All these things are all free when you're capturing from an actual, real, authentic, analog 3D source. Generally speaking, I always try to go from this-- you can see the point cloud setup. Let me stop really quickly. Photogrammetry is where I can take an array of photographs, and there's a piece of software called PhotoScan that will turn it into a point cloud, stitch it all together as a single mesh, and then re-project the texture from all of the photographs as a diffuse texture.
And you can also output a normal map, both of which are brilliant for GPU-based renderers, because you're doing most of your work with your baseline diffuse and your normals to create your bump and texture. It makes a lot of sense, it manages to translate through. You can see this is actually a scan of me that we were doing at the office. One of my coworkers, David Tourneau, was just walking around with the camera snapping things off and saying "stop moving." It's a really fun experimental process. The reason I'm bringing this up is, think about your core asset. You may find yourself mucking about quite a bit trying to make everything look interesting, whereas if you can try to find a good core source of interesting asset to pull from, everything else ends up looking a little bit better. Everything's just a little more interesting. You're not having to reinvent the wheel so much. You're able to stand on the shoulder of a giant, like, "okay, this already looks pretty cool in here. What happens if I really start tearing this thing apart and thrashing it? What happens from there?" I want to show you guys the Ten24 website. This is the face that I used for the end shot. I brought it in, all the 16K textures, the subsurface scattering is all provided. It's 40 Euros. Frankly, for a day of work I'll take that. That's your core asset that you're going to use. It's 100% rig ready. You can take it and get it all up and running pretty quickly. They have an enormous array of assets that you can use as well. Let me take you into the build itself. I feel like I'm delaying the inevitable. What we see here is my live build for the tentacles. I already have it cached. Otherwise it'd be really cool if it'd sim that fast. Still, it's pretty good. All we're seeing here, when you're deconstructing everything down, it's about these really simple, DNA-esque building blocks.
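That diffuse-plus-normal output is why photoscanned assets drop so cleanly into a GPU renderer. As a purely illustrative sketch (not PhotoScan's actual code; the function name and encoding convention are my own assumptions), here is the standard way a height map gets baked into a tangent-space normal map:

```python
import math

def height_to_normal(height, strength=1.0):
    """Convert a 2D height map (lists of floats 0..1) into tangent-space
    normals encoded 0..255, using central differences. A hypothetical
    stand-in for the bake a photogrammetry tool does for you."""
    h, w = len(height), len(height[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences with edge clamping.
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            nx, ny, nz = -dx * strength, -dy * strength, 1.0
            inv = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
            nx, ny, nz = nx * inv, ny * inv, nz * inv
            # Remap -1..1 to 0..255 (the usual pale-blue normal map encoding).
            row.append(tuple(int(round((c * 0.5 + 0.5) * 255))
                             for c in (nx, ny, nz)))
        out.append(row)
    return out

# A perfectly flat patch encodes as the flat-normal color (128, 128, 255).
flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normal(flat)[0][0])
```

The pores and scars he mentions are exactly the kind of fine height variation this bake turns into per-pixel shading detail, which is why they come "for free" from the scan.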
I give the particles a command to be emitted from a section of the skull from a little emitter underneath the roof of the mouth and a little spot over next to the base of the skull. Using X-Particles, I'm able to say-- let's stop this down and I can take you through some of this. I have a bunch of emitters that I've set up here, and all I've done is I've strapped trails to them. That's giving us these nice sweeps. We had a pretty cool asset to begin with, and now it looks even gnarlier with all this extra detail on top of it. There's a little patina. Even if the eye can't see all of the details there, it knows it's there and is happy that it is. We go here to the modifiers. You can see that I set up-- I'll make this visible. I have a falloff attractor that grabs all the particles, pulls them towards the front. I've set up a couple steps of turbulence to make it a little chaotic. If you guys have ever done any particle simming, turbulence is one of the first things usually tossed in to make it a little interesting. There's this really cool modifier called Follow Surface. Follow Surface will say anything that comes within X or Y range of said object, and for this one I was pulling from my skull itself, saying anything that gets within X number of inches of the skull, grip it, grab it, and make them walk all over it. All of a sudden I'm being able to course them all over its topology, and I'm not having to conduct any crazy voodoo to do there. You can see that I'm using Cover Surface here as well. Little hints of things, subtle little kicks to push the particles around. Once I had something that was looking pretty good, I was like, "great, awesome." I took it, and I kicked it out as an Alembic. You can see I have my tendrils Alembic file. 2.6 gigs, not very heavy. Everything's encapsulated in one little thing. Always check and make sure that all your proper check boxes are there. We're going to bring all this in as splines.
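The Follow Surface behavior he describes (grab anything within range and make it crawl over the mesh) can be sketched in a few lines. This is only an illustration of the idea, using a sphere as the surface; the function and its parameters are hypothetical, not the X-Particles internals:

```python
import math

def follow_surface_step(p, center, radius, grab_range, strength=0.5):
    """One update of a Follow Surface style modifier, sketched for a sphere:
    any particle within grab_range of the surface gets pulled toward its
    closest point on that surface."""
    # Vector from the sphere's center to the particle.
    d = [p[i] - center[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in d))
    if dist == 0.0 or abs(dist - radius) > grab_range:
        return p  # Out of range: the particle keeps doing its own thing.
    # Closest point on the sphere, then blend the particle toward it.
    closest = [center[i] + d[i] / dist * radius for i in range(3)]
    return [p[i] + (closest[i] - p[i]) * strength for i in range(3)]

# A particle hovering just above a unit sphere gets gripped and pulled in.
p = follow_surface_step([0.0, 0.0, 1.2], [0.0, 0.0, 0.0], 1.0, grab_range=0.5)
print(p)
```

Run every frame, the pull combined with turbulence is what makes the particles appear to "walk" across the topology instead of flying through it.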
I could also bring it in as hair if I wanted to, but for right now let's just do splines. Those are nice and sturdy. As I play through here, you can see everything came through just fine. I can go through here now... I have all my generators, I can rip those guys out. I can say I don't care about any of you in my life. Now I just have that. Now I have a nice, clean, simple item that I can take, drag in. I know if I show it at any machine, it's just going to work. Simplifying the things that can be simple is a huge, huge part of being able to loosen up your feet as you're trying to jump around, be creative. You're not having to worry about too much wrangling in the process, because when we're trying to make a lot of this stuff you don't entirely know what you're doing, you're kind of feeling your way through it. If you can simplify your conversations as you go through it, do it. Get rid of the simple stuff so you can focus on the fun stuff. The other side of it that we have is the liquid simulation. I'll play through that real quickly. Again, 100% through X-Particles. Everything's happening inside of Cinema 4D. I haven't had to leave the platform once yet. You can see we get this nice little pour, it's nice and goopy. I was inspired by the opening titles for Daredevil, if you guys have seen that. That beautiful viscous look. Of course, that was RealFlow, and a lot of heavy metal and lifting went into there. I was like, "that's great, I just don't know how to do any of that, so I'm going to make my own." I tried diving into RealFlow, I was watching tutorials, and I was trying to get my head wrapped around it, and I just wasn't getting results. I can use all of my same art direction techniques that I have in X-Particles with my fluid simulation. I have this huge set of assets that I'm able to play around with that I already know and feel comfortable with to help art direct my simulations.
As it turned out, it only took me about two days to get that done, which isn't really that bad. I was doing this between other projects, so as soon as I had a couple cycles down I could swing over, pluck around, pluck around, hit cache, walk away to another system, keep doing another project, come back, "oh, that looks okay." What I want to show you guys is how straightforward it is to get something working. Let me get rid of this, and let's start from a default state. We have our happy little skull here. The skull is also provided by Ten24. Again, cool assets make cool projects. If you've ever done any simming before then you probably already know this, but if you haven't: low polygon counts. You do not need to have extra weight in there; as long as it's about the same shape, creating proxies and simplifying your process will drastically improve all of your sim times. That way you aren't sitting there scratching your head, saying, "why isn't this working as fast as I thought it should?" It's pretty straightforward to do. I'm going to duplicate my skull, we'll go over here to our happy little polygon reduction tool, bump it up to 70%. I want this thing tossed way the heck down. I'm going to bake my item as a current state to object. You can see we have our little timer bar, that's making magic happen for you. All right. Great. Now we have it. We can get rid of this other guy, because we don't need to have him around anymore. Take that guy, rip him out. Get rid of that texture. We're going to turn off the viewport visibility and the render visibility on this guy. We'll call you our sim skull. Okay? Now we have a much better mesh. You can see how much lighter it is by comparison to the original. This guy would have been a nightmare to sim with on anything but a gnarly box. Now we have our skull. Let's start messing around a little bit. We'll go here to X-Particles. Let's bring in a system so it keeps it nice and organized for us. Here is going to be our emitter.
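The value of that 70% polygon reduction can be put in rough numbers. This is a simplification I'm adding for illustration (real colliders use acceleration structures, so the speedup is not exactly linear):

```python
def proxy_stats(tri_count, reduction=0.70):
    """Back-of-envelope for a polygon-reduction sim proxy: a 70% reduction
    leaves 30% of the triangles, and if per-particle collision queries
    scale with triangle count (a simplifying assumption), the sim gets
    proportionally cheaper."""
    remaining = int(tri_count * (1.0 - reduction))
    speedup = tri_count / max(remaining, 1)
    return remaining, speedup

# A hypothetical 300k-triangle scan reduced at 70%.
remaining, speedup = proxy_stats(300_000, 0.70)
print(remaining, round(speedup, 2))
```

Since the proxy only needs to match the silhouette the particles collide with, none of that lost detail shows up in the render, where the full-resolution skull is still the visible mesh.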
Let's also not forget, let's turn our sim skull into a collision object. Collider. Liquid doesn't bounce that much, so I'm going to drop that out for right now. I'm sure someone will argue with me that yes, that does actually bounce, but... You know what I mean. We have our emitter right here, and what I'm going to do is I'm going to toss in one of my dynamics objects. To keep things simple I'm going to use their baseline SPH Fluids, which is a really good brute force tool for most of the sims you're doing. If they're not wildly complex, this guy will do you just fine. Great. Now that guy's set up. Let's bring this guy up here, let's position him. Rotate him around. All right. Now he's going to be facing downwards. Shift that over. Seems like it's about lined up. I'm going to go through, and these are some numbers I was playing around with, so I know they kind of work. I'm going to bump my viscosity to 60%, and I'm going to set my smoothing radius to 40 to help smooth out any of the extra, nasty bits that might blob out and whatnot. All right. Let's take a quick look at how this is coming across. Oh, one other thing. Something that took me a few times to realize how valuable this is, let's make our cache object, and this guy is going to be our Grand Central Station for this, where we'll be building our cache each time. That way...well, look what he did. He made floating liquid. Let's flip that around. Now we're going the right way. You know what I'll even do, just to make sure that everything's right in the world? I'm going to go to my object, emission, I'm gonna kill out any velocity that's coming from it whatsoever. Then we'll go back over here to my modifiers. I want to affect how the motion of it is working. Let's toss in some gravity. Their default works pretty well. It's pretty close to real world matching. Let's go back to our cache, and let's rebuild our cache. We're going to overwrite it, because we didn't care about what we did last time.
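The viscosity and smoothing radius settings feed an SPH solver, and the smoothing radius in particular controls how far apart two particles can be and still influence each other. The textbook poly6 smoothing kernel (a standard SPH kernel, not necessarily X-Particles' exact formulation) shows the idea:

```python
import math

def poly6(r, h):
    """Standard SPH poly6 smoothing kernel: a neighbour at distance r
    contributes nothing once r reaches the smoothing radius h, and more
    the closer it is."""
    if r >= h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h ** 9) * (h * h - r * r) ** 3

# A bigger smoothing radius lets more distant neighbours contribute,
# which averages out the "nasty bits" at the cost of fine detail.
small, big = 0.2, 0.4
print(poly6(0.3, small), poly6(0.3, big) > 0.0)
```

That's why bumping the radius smooths out blobby artifacts: each particle's density and forces get averaged over a larger neighbourhood.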
Now we're getting some nice, gloopy drops. Blop, blop bop. All right, that's not bad. Took me about 45 seconds to get to this point. That's not too bad. There's not enough particles there to get you that nice, thick, viscous drip on everything, so let's crank up our numbers so there's more going on. I'm going to get super greedy, and let's go with 6,000. I'm going to take down my overall timeline. Go back here. We'll overwrite our cache again. The reason why I'm doing this is because a lot of the time when you're looking at sims you don't want to just look at one angle of it. You want to see 13 different angles of it and watch through. If you're not caching this as you're going along, you'll have to wait for it procedurally to click through and try to figure its way through, versus just being able to say, "okay, great. I'm watching it play a couple of times, I saw it from a couple of angles, it's doing what I want it to do. Awesome." It's more of iterations and turnover. As you guys saw over my shoulder, all of a sudden I created a shower head. The size of my emitter didn't change any. Nothing has changed about what I'm doing, but now all of a sudden it's blowing out, and it's missing the skull altogether. This is a trick that I learned watching the RealFlow tutorial. The reason I bring this up is because you can look to other simulation engines and learn tricks that they're using, and it all will convert back over to our world, because a lot of it's built on the same math. What's happening here is I've bumped up all these extra particles that are coming through, but I still have this little keyhole that everything's powering out of. Essentially what I've done is I've created a fire hose where the pressure of all the particles that are pushing through is coming through at such a high velocity that it's shooting everywhere and it's becoming explosive. You can cheat that by playing with the dampening inside of the fluid software, but you're cheating a little bit.
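The fire-hose effect is just the continuity equation at work: the same volume of liquid pushed through a smaller opening has to exit faster. A quick sketch with made-up numbers:

```python
import math

def exit_velocity(particles_per_sec, particle_volume, aperture_radius):
    """Continuity-equation sketch of the 'fire hose' problem: the same
    volume flow forced through a smaller opening must exit faster
    (v = Q / A). All numbers are invented for illustration."""
    q = particles_per_sec * particle_volume    # volume flow rate Q
    area = math.pi * aperture_radius ** 2      # aperture cross-section A
    return q / area

slow = exit_velocity(6000, 0.001, aperture_radius=0.5)  # funnelled, wide exit
fast = exit_velocity(6000, 0.001, aperture_radius=0.1)  # keyhole emitter
print(round(fast / slow))
```

Fivefold smaller radius means twenty-five times the exit velocity for the same flow, which is exactly why cranking the birthrate through the same small emitter turned the pour into a shower head, and why the funnel fix works: it widens the effective aperture instead of cheating with dampening.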
I found this other setup, and it was kind of a duh moment once I realized what I was doing. I'm gonna take a tube, bring that up here. Going to open that up a little bit. Now we've got this nice little ring. I'm going to step through, and I want to give us a couple height segments, maybe 10. You don't want to add any extra polys, you want to keep this as light as possible. Then I'm going to take the taper tool, swing it around because it always wants to be facing upwards. Hop back over to it, and I'm going to take my strength to 95. Maybe that's too much. Let's go with 90, see how that's looking. That's looking pretty good. Take them down to like 200. Maybe that's a little aggressive. That's not bad. Bring them down, and as I usually do, take it, current state to object, take you, get rid of you. Now I've created a funnel. Now, whatever happens with my particle emission, it doesn't matter because I've got this controlling system that's going to force everything down through a spigot so I can target what I'm doing. Set it up to a collider, no bounce, because you don't want things splashing. I'm going to grab my emitter and my tube. I'm going to bring them both up a little bit, make sure that I'm aligned. A really good way to also do this is, especially with sims-- I don't care what the geometry of that looks like, I know what it is. I'm going to turn it to x-ray so I can see what's going on inside of there. Now I can get even more bold with my birthrate, because I'm not concerned with it being even more explosive. Go back halfway here to my XP cache, and let's build a fresh cache. Hopefully my alignment's good, I didn't think about checking that. Now as everything begins to collect and pool down, now I'm being able to get much more of that sinewy drop that we're trying to get in the first place. This is the fun part, with X-Particles I can do this 13 different ways.
If I didn't want to make so many particles I could have attached the trail sweep to all of this, and all of that could have been part of the final calculation for the skinning to create the mesh at the end. But now I can really art direct where things are being pointed at, and how I want things to look. Now I'm dialing my settings and really diving into a lot of that. We spent about five minutes inside of here, and we're already getting a fluid sim. We didn't have to leave the engine. We're not trying to worry about all kinds of crazy stuff. You're still here in all the tools that you're used to using. For me that's the biggest part. Just being able to sketch and play in places where you already feel safe. Now, because I'm impatient, I'm going to go through and see how that looks now. That's not bad. Another cool thing that's happening with the cache is I'm caching to the X-Particles caching format, but you can see that we have BIN, Houdini, Maya, RenderMan, Krakatoa, My End Cache, which means that this cache I can take-- if I send over to the Houdini BIN, I can grab that folder, hop into-- sorry. If I do the RealFlow BIN, I can hop over to RealFlow, load that BIN in as a cache, and it'll start playing through as if I'd simmed the whole thing in RealFlow. Same thing with Houdini, all these other items. You're being able to work from where you're comfortable, but if there's a very specific tool set over here, or you have someone who's awesome at this one thing in this one piece of software, say, "okay. Great. Have it, and do your thing. Then give it back to me and make it look even more awesome." It's this nice multi-faceted Grand Central Station where you're being able to go in and out of a lot of different platforms, but you don't have to leave if you don't want to. If I want to go through and do the skinning here, I can go to the generator, bring in the skinner, and I'm going to tell him to look to our particle mesh. Let's turn that off. All right.
Now I've got a skin. It's a little blobby right now, but thankfully they give us some pretty nice settings that'll help us start cleaning a lot of this stuff up. Also I can customize the volume or the size of the particles. If I drop that down to, say, four... I may have made it angry by doing that. I can go through here and I can customize the size of it. I can go through and play with my iterations. Continue to fine tune and art direct the look itself. I can go through here, I can say, "I'm going to take it to 16." I can cull how the particles are being influenced. What's more is I can do different surface algorithms for how I want it all to mesh together. Really nice, really multifaceted. You can play around and get a lot of stuff pretty quickly. With the time that I have left I want to make a bit of a segue away from X-Particles and see if I can dive in for just a little hint of what's going on with Octane. I really, really enjoyed that process of diving in there. I've dove in and out of a lot of different render engines, and I never found myself that at home. I just felt like, "I'd prefer to just stick with the Cinema 4D renderer. I already have all my textures, I have all my materials. I don't really want to re-learn this thing." As part of getting used to it, I've been trying to do a lot of these sketches at night. I have an iMac at home that has an Nvidia card, so I've been trying to load things in there, play around with shaders, make little things here and there. Then I was like, "oh, maybe I should learn how to do car stuff in case I do a car project." So I loaded some models in here. Built all the paint shaders. Got everything working. Again, this is all 100% Octane. It's been a lot of sketching and trying to get comfortable with things, which was leading me towards this project where I was like, "okay. Let's really try to dive in." So we go over to the face build... This is our model that we were talking about previously. Give her a second.
These are some big projects, so... We're going to open up our live viewer window. Again, this is the face from Ten24. The hair was generated from maps that they supply that tell you where to generate the hair and all that kind of stuff, texture, subsurface. These are all 16K textures, and they're all loading in without much of an issue. Let's toss this in the viewport. Right now what it's doing is it's bundling everything, packaging it up, and deploying it to the GPU itself. While it's loading everything over there, I'll take this as an opportunity to talk about the cost-effectiveness of it. You don't really need to be getting a Quadro or anything like that to do this kind of stuff. Frankly, the best cards for this are gamer cards. If you can get your hands on one of the 980s, if you're cool enough that you can get one of the Titans or Titan Zs, these guys crush through Octane. If you have a gamer rig at home and you're not playing as much as you used to, rip that card out and stick it over here. You'll get really, really fast results. It's a linear stacking process, so the more CUDA cores you have in your system, the faster you render. That's all the math it takes. Buy a card based upon how many cores it actually has. Now they're doing texture overflow and things like that, so if the textures are really heavy it can spill over to your system. You don't even need to have some crazy eight gig card to hold everything. It'll slow you down a little bit, but it's really about the cores. You can see here, now we're over here in Octane. As I orbit around... Let's see here. I bet they probably have a CUDA card in here. There we go. I can navigate around in this environment. Some of the lagginess here, remember, we're dealing right now with 16K, 8K, 10K textures that are generating everything that you're seeing here.
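That linear stacking claim is easy to reason about: total CUDA cores add up across cards, and frame time drops roughly in proportion. A toy estimate (the baseline seconds-per-frame figure is invented; the core counts are the published specs for those cards):

```python
def render_estimate(frame_seconds_per_1000_cores, cards):
    """Octane's near-linear GPU scaling as described in the talk: cores
    from every card add up, and time per frame drops proportionally.
    The per-1000-core baseline is a made-up calibration number."""
    total_cores = sum(cards.values())
    return frame_seconds_per_1000_cores * 1000.0 / total_cores

# Published CUDA core counts: GTX 980 has 2048, the original GTX Titan 2688.
rig = {"GTX 980": 2048, "GTX Titan": 2688}
one_card = render_estimate(600.0, {"GTX 980": 2048})
both = render_estimate(600.0, rig)
print(one_card > both)  # stacking a second card cuts frame time
```

This is the logic behind "buy a card based upon how many cores it actually has": under linear scaling, cores per dollar is the only figure of merit, which is why gamer cards beat workstation cards here.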
One of the biggest things that I really enjoyed once I got in there, and I've never been much of a node graph fan in the past, but their execution of their node editor is absolutely brilliant. I don't know if you guys work the same way. I haven't memorized every single fractal noise out there. I don't know, magically, what my gradient setting is going to be. I need to see what I'm doing, at some level. The biggest thing they did, and no one else has this in a lot of their node graphs, is if I want to see this, I can actually see what I'm doing. If I'm combining my textures together, I'm getting a preview of everything that's happening. I can really quickly and easily go through and get a sense of what everything is doing to create the final product. If I want to go into here... Let's give the system a second. If I want to go into here and play around with my gamma, let's take it to 1.5. Immediately I'm seeing my results. It's all instantly updating, and I'm being able to get a really good, nice, fast feedback for what it is I'm trying to execute. Okay, great. Now I've made him a little more pale. He looks more like a post guy. I can go into here. I'm getting my previews and seeing what everything looks like. I can play around with exposures. You can see all of these options that we have. If I want to toss in turbulence, if I want to generate a lot of items I can have that all here. They have a lot of the same mapping setups. You can even take a lot of your Cinema 4D procedurals and bring them in here. You can tell Octane on its side to bake it out at 512 by 512, 2K by 2K, whatever it is it'll go through and keep that-- it'll worry about taking care of everything else under the hood for you so that way you can just get through your iterations. That was the biggest thing for me. I find it takes me a little bit to go through a couple iterations and really get a good sense of "okay, this is getting where I want it to be."
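The gamma tweak in the demo is a one-line operation per channel. Assuming the common out = in ** (1/gamma) convention (whether Octane uses exactly this convention is an assumption on my part), gamma above 1.0 lifts the midtones, which matches the "more pale" result:

```python
def adjust_gamma(value, gamma):
    """Per-channel gamma tweak on a 0..1 texture value, using the common
    out = in ** (1/gamma) convention: gamma above 1.0 lifts the midtones
    while leaving pure black and pure white untouched."""
    return value ** (1.0 / gamma)

mid = 0.5
print(adjust_gamma(mid, 1.0), adjust_gamma(mid, 1.5) > mid)
```

Because the curve pins 0.0 and 1.0 in place, only the midtones move, which is why the skin reads paler rather than simply brighter.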
Versus hoping and praying on buckets that show up in the right way. You want to get a nice GI render, or you're trying to get the subsurface scattering going, but you want to see what the whole model looks like, so you end up doing these little render regions just to see if it works for you. This is a much more artist-friendly setup, where I'm able to play and see all of my results as I go about it. I'm not hoping that I come back in 45 minutes and the GI render has kicked out and it looks like something I actually wanted it to do in the first place. I want to show you around this a little bit. My node graph is a little messy. I could have done a little bit of cleanup before we got in here. You can see what I'm able to do here-- I'm actually working with two different materials at the same time through a mix shader. I have my skin gloss, which is my overall glossiness, and my skin diffuse, which is really how you get a lot of your subsurface scattering kicking through the build. I could use five more monitors. That'd be awesome. Let's see here. If I go to my mix material... Come on, little buddy. I have this guy split right down the middle, 50/50. Kind of glossy, kind of diffusey. I can swing him over in one direction or the other, and he'll start getting a little more oily, or a little more matte, more powdered up. Or if I want to dive in more directly and really start digging into the build that goes into the mix material... This machine might be ready for a restart soon. I can now hop in here and play around with my specific roughness. You can see right now I'm mixing in this roughness shader. So if I drop this guy down pretty significantly, he's going to look a lot more glossy. There's a nice intuitiveness to being able to play around, and once you get used to a lot of the settings, it's a much quicker process. I ended up learning Octane over the course of the better part of three solid weeks where I dove in.
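Conceptually, the 50/50 mix material described above is a linear blend between the two sub-materials. Octane's actual mix material blends full BSDFs; this sketch only illustrates the weighting, with made-up scalar contributions standing in for the glossy and diffuse shading results:

```python
# Hedged sketch of a mix material: a linear blend of two shader
# contributions. The 0.8 / 0.2 values are made-up stand-ins for the
# glossy and diffuse results; Octane blends full BSDFs, not scalars.
def mix_material(glossy: float, diffuse: float, amount: float = 0.5) -> float:
    """Blend two shader contributions; amount=1.0 is fully glossy."""
    return amount * glossy + (1.0 - amount) * diffuse

# Split right down the middle, then swung toward glossy, as in the demo:
print(mix_material(0.8, 0.2, 0.5))  # even split
print(mix_material(0.8, 0.2, 0.9))  # more oily/glossy
```

Dragging the mix slider one way or the other is just moving `amount` between 0 and 1, which is why the head shifts smoothly between oily and powdered.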
There's a website called InlifeThrill. The gentleman there put up this brilliant series of tutorials. Each one's about 10 to 15 minutes, and you learn exactly what you need. You get in, you get out, and you feel like you have a good grasp on that topic. Lighting, multipass, the difference between path tracing versus direct lighting. Each one of these items has a different wrinkle to it, but it's a really good base setup that gives you a really good understanding. The best part: they're free. Free stuff's always cool. But if I toss in an area light and go back over here to my view... I think people have been abusing this computer pretty hard today. Bring him forward. I'm not sure why he's not wanting to be so cooperative with me right now, but... I'll summarize it this much. I have an iMac at home. It's an i7, four-core, and I have a 780M graphics card in there with four gigs on it. It screams. I do all of my major builds on that, and then when it's time to render I'll save the project to my Dropbox, open it up at the office, and get it running. It's not such a prohibitive curve to get-- there we go. Now he comes back to me, for a second, until I look at him. It's not a cost-prohibitive way to get into a really straightforward render engine where you can really quickly get good results. I'm not a character animator, I'm not a really deep texture artist, but you come in and it starts making sense pretty quickly. The ability to get instantaneous feedback means that if you're not wildly experienced, you can get through more iterations until you figure out what it's supposed to look like. That's the great part when you get fast feedback.
If you don't know what you're doing, you can spend a little more time and tune in those settings and go through all those learning curves, because you're getting that instantaneous feedback without feeling like, "okay, I just wasted two evenings in a row waiting for GI renders that didn't work out for me in the long run." Oh, there we go. Look who decided to come back and play. I'm going to get rid of my Octane sky, and let's see if we can get that nice guy to react. There he is. For me, it's been a lot of fun to play around inside of this. I would definitely, definitely recommend you guys take a look into it. OTOY has a virtual setup where you can go to their site and play around with their engine online. You can see how quickly it all works. For me, it's been awesome. I've had a lot of people asking, Arnold or Octane, Arnold or Octane? For me, Octane just made sense. It was a smaller investment to get into it, and I could really quickly see those results. Versus, if I'd had a big standing rack of CPUs and the IT and everything on that side, then maybe Arnold would have made more sense. Really, it depends on what you're trying to do. Because we're really more of a small-team setup, my preference went toward Octane, but each person is going to be different. Some of you guys like V-Ray, some of you guys want to stay in the standard renderer. For me this was a novel approach. I really appreciate anything that throws a wrinkle into my old-school pipelines, if it possibly can. That brings me to a close. I'll be happy to talk about any of this stuff afterwards if you guys have any more specific questions. I wish I could've dove even deeper-- I probably could have spent the whole hour just doing subsurface scattering and showing you that kind of stuff. It's really easy to get lost, especially-- this is one thing I'll tell you. It's really easy to get lost and lose track of time when you're getting instantaneous feedback.
You keep thinking that you're done, and you keep thinking, "okay, that looks pretty good. Actually, what if I tweak that setting? Because it's just another three seconds. It's just another five seconds. It's just another five minutes." And the next thing I know it's 3:30 AM, and I have a meeting at 6:00 AM, and I really didn't need to keep mucking around with how glossy the skin was on his forehead or the roughness shader on his ear, but I really enjoyed doing it. When you make work that's fun to work on, you see it in the final result. Finding tools that make you happy is the biggest thing. Find these elements, these ingredients that you enjoy working with. You can experiment, you can sketch, and you can feel like the process isn't so much of a burden. That's the biggest thing. Try out all the plugins. Try out everything you can. Keep mucking about until you find that right cocktail that speaks best to you. There are so many brilliant developers right now making Cinema such a powerful platform. When I'm talking to guys who may have been part of the Autodesk family or these different groups, they still want to see this as a MoGraph tool. I'm like, "it's not. It's just a matter of which widgets I want to plug into it." It's very much like After Effects in that sense: those of you who are After Effects fans, you use it, you're comfortable with it, it may be a love-hate relationship at times, but it's what you know, and it's really brought to life by all the plugins that come into it. Cinema has a very similar aspect, where it plays very nicely with all its friends. It's as powerful as the group of people who've been coding around it, and we're seeing this really nice tipping point right now where we're getting access to everything. Take the Houdini Engine, which is going to absolutely melt your face once you guys start playing around with it. There's not much that I'm not going to be able to do inside of here in the next couple of months. So just play.
Play, sketch, and keep making cool stuff. Thank you guys so much, I really appreciate the time. By the way, if you want to find out anything about this project or you want to get in touch with me or Ghost Town, Ghost Town's website is at gtmvfx.com. They have a Facebook page and a Twitter account, both under GTMVFX. Let's see if I can say it one more time: G-T-M-V-F-X. If you want to find me specifically, my personal site is at alchem.tv. You can find me on Instagram, where a bunch of my Octane sketches and things like that go up, at brandon_parvini, and on Twitter. You can always tweet me Octane or X-Particles questions. I'm always happy to help people out. I just like seeing people make cool stuff. It makes me a little bit crazy and makes me have to go do something else, then. Thank you guys so much. I really appreciate your time.