Week6 Render Layer

The render layer in Maya serves two general purposes. One is to split the scene into layers: foreground, background, character, shadow, mask and so on. The other can be understood as sub-channels. If you have done film and TV post-production with Nuke or Fusion, you will know how important channels are for compositing. A normal render only contains the four RGBA channels, and other channels need to be rendered separately: the Z channel can be used to adjust depth of field, the OCC (occlusion) channel to adjust the occlusion effect, the normal channel to adjust lighting, the position channel for volumetric effects, the vector channel for motion blur, and even specular reflection and refraction can be treated as channels. The information in these channels is rendered out as RGB image files for the compositing software to sample.

In fact, I had noticed this problem in my previous 3D scenes and models, but I always used a very clumsy method: rendering separate copies of the scene with different materials from the same angle. That was very troublesome, because two versions of the scene take a lot of memory, and I could never fine-tune a single object properly.

Then I learned about this feature from KK's course this week. He explained the application of light and layer separation in Maya in great detail, so this time I tried to reproduce some of the effects.

Process

Windows > Rendering Editors > Render Setup

Create a new layer and click the eye icon to make it the visible layer.

Right-click the layer to create a collection, and add the objects to the collection.

Then give the collection a material override.

With regard to the lights:

First create a collection and add the light to it.

Make it invisible.

A light that is added to the layer and made invisible will not appear in that layer's render.

Sometimes the render view does not update automatically, so use Update Full Scene.

In the Render Settings, the render layers then appear as options to choose which to render.
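For reference, here is a minimal Python sketch of the same steps using Maya's Render Setup API. The layer and collection names and the "char_*" pattern are only placeholders for this example.

```python
# Minimal sketch of the render layer / collection steps above.
# Layer name, collection name and the "char_*" pattern are just examples.
import maya.app.renderSetup.model.renderSetup as renderSetup

rs = renderSetup.instance()
layer = rs.createRenderLayer("charLayer")            # new render layer
collection = layer.createCollection("charObjects")   # collection on that layer
collection.getSelector().setPattern("char_*")        # add objects by name pattern

# Make this the visible (active) layer, like clicking the eye icon.
rs.switchToLayer(layer)

# A material override can then be added to the collection in the Property
# Editor (or with collection.createOverride(...) using typeIDs.materialOverride).
```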

Posted in Advanced & Experimental 3D Computer Animation Techniques

Lighting Tutorial Week3 : Image-Based Lighting (IBL) Setup (2)

The full name of IBL is "image-based lighting", a method of approximating global illumination. It gives a better visual result while remaining cheap enough for real-time rendering.

One of the ways to achieve this is to capture an environment map first. This image can be taken by a camera in the real world (HDR is recommended for better effect). It can also be rendered in real time by a camera in the game.

  • Revising the Light-rig
  • setting up render layers and AOVs
  • Relighting in Nuke
  • Summary

Update the roughened-sphere shader

Set up a standard surface for the grey sphere and the chrome sphere

Render Layer Greysphere

Render Layer Chromesphere

change the transform of the light to change the shadow position

Render layers: the way we split the different elements into their own layers so we can adjust them separately.

AOVs

AOVs give us a way to break the beauty down into multiple lighting renders, and beyond that, each lighting render can be broken down further by shading component (diffuse, specular and subsurface scattering components).

With the help of an ID pass, we can easily separate the characters in comp and apply different settings to them. There is also the position pass: we can use it to draw a mask based on where the ground is, so that we can discard the other passes in the other areas.

Breaking it down into renders of the different light sources.

Render separately.

We can then add them back together to reconstruct the image, identical to rendering all the lights together.

AOVs

Z is generally used for atmospheric fog and depth of field in the scene.

Albedo. A pure colour layer without any lighting, generally used together with AO.

AO (ambient occlusion) is used to superimpose the shadows generated by contact between objects, to increase the sense of volume.

Coat, the clear-coat layer, is a second specular layer on materials; typical examples are glazes such as blue-and-white porcelain, and car paint.

Diffuse. A colour layer with lighting information; it contains nothing but colour.

Direct. The direct illumination layer. It calculates the effect of direct illumination from the lights in the scene and does not include any bounced light.

Emission. Self-illumination, as the name suggests. This channel extracts the materials with emission in the scene, which makes it convenient to adjust anything related to self-illumination.

Indirect. The counterpart of Direct: the bounced-light contribution Arnold calculates from the lights, i.e. indirect lighting. Like sunlight shining into a room, the areas that are not directly lit still become bright through bounced light.

Motion vector, used for motion blur in comp. (Because my scene is not animated, I used an animated ball as the example.)

Opacity. The transparency channel. Anything with a transparency property is extracted here, which makes transparent materials easy to control.

Specular. The highlight layer, used to adjust the intensity of the specular highlights.

SSS. When an object has a subsurface or skin material, this channel makes it easy to adjust the amount of subsurface scattering later.

Transmission. The refraction layer, for materials with refractive properties such as water and glass; this channel makes it convenient to adjust the amount of refraction and related properties.

resource: https://www.bilibili.com/read/cv6974938/
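As a quick reference, AOVs like the ones above can also be added with MtoA's Python interface. This is only a hedged sketch, assuming the mtoa plug-in is loaded; the AOV names follow Arnold's built-in ones.

```python
# Hedged sketch: adding a few of the AOVs described above through MtoA's
# Python API (assumes the mtoa plug-in is loaded in Maya).
import mtoa.aovs as aovs

interface = aovs.AOVInterface()
for name in ["Z", "diffuse", "specular", "sss", "transmission", "emission"]:
    interface.addAOV(name)  # each AOV then shows up in Render Settings > AOVs
```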

Maya Process

create a new layer

create collection

Create > Sets > Set

Collection filters: Sets

Middle-mouse drag the set into the collection box

Add an override for the Primary Visibility attribute

environment

For the light, add an override on the Camera attribute (0/1 hides or shows the environment background to the camera)

Rebuilding the beauty layer from AOVs
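Below is a hedged Nuke Python sketch of rebuilding the beauty by shuffling each AOV layer out of the multichannel EXR and adding the results back together with plus merges. The file path and the layer names are placeholders; use whatever AOVs your render actually wrote.

```python
# Hedged sketch: rebuild the beauty from AOV layers with Shuffle + Merge(plus).
import nuke

read = nuke.nodes.Read(file="/path/to/render.####.exr")  # placeholder path
layers = ["diffuse", "specular", "sss", "transmission", "emission"]

shuffles = []
for layer in layers:
    # 'in' is the Shuffle knob that picks the source layer (it is a Python
    # keyword, so it is passed via dict unpacking).
    sh = nuke.nodes.Shuffle(**{"in": layer})
    sh.setInput(0, read)
    shuffles.append(sh)

# Sum the shuffled layers back together.
result = shuffles[0]
for sh in shuffles[1:]:
    merge = nuke.nodes.Merge2(operation="plus")
    merge.setInput(0, result)  # B input
    merge.setInput(1, sh)      # A input
    result = merge
```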

To Add an Environment Light

Read an HDR image of the environment into your script.

Select Transform > SphericalTransform to insert a SphericalTransform node after the HDR image. You use this node to convert the HDR image into a spherical mapped image. In the node’s controls, select the Input Type and the Output Type (in this case, Sphere).

Select 3D > Lights > Environment to insert an Environment node in your script. Connect the SphericalTransform node to the Environment node’s map input, and the Environment node to the Scene node.
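A hedged Nuke Python version of these three steps might look like this; the HDR path is a placeholder, and the SphericalTransform's Input/Output Type are still set in its controls as described above.

```python
# Hedged sketch of the environment-light setup described above.
import nuke

hdr = nuke.nodes.Read(file="/path/to/environment.hdr")  # placeholder path

# Convert the HDR into a spherically mapped image
# (Input/Output Type are set in the node's controls).
spherical = nuke.nodes.SphericalTransform()
spherical.setInput(0, hdr)

# Environment light fed by the spherical map, wired into the Scene.
env_light = nuke.nodes.Environment()
env_light.setInput(0, spherical)  # map input

scene = nuke.nodes.Scene()
scene.setInput(0, env_light)
```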

Adding contact shadow and reflection

Feeding back Shading/ Lighting adjustment into Arnold

K — copy node

expression
Posted in Houdini & Lighting

Houdini Tutorial Week 5

This week is going to be all about volumes and smoke, fire or explosion.

In Volumes we’ll learn to create and control combustion style simulations in the sparse pyro solver, and then take it further by using PDG to run a bunch of wedged sims so that we can work more efficiently. Custom simulation and post-simulation techniques, together with custom shader and lighting strategies, will allow us to create production quality work in ways that out of the box tools just can’t. After some compositing, we’ll end up with a great explosion – but more importantly, the knowledge and confidence to hack solvers and shaders to create all kinds of effects!

Atmosphere Volume

This shader simulates light scattered by a thin, uniform atmosphere. It produces shafts of light and volumetric shadows cast from geometric objects. It works with point, spot, and area lights, but not with distant or skylights. This is a scene-wide volume shader (or an atmosphere shader in Arnold’s terms).  

  • atmosphere_volume used to be called volumetric_scattering and should not be confused with volume rendering of fluid type objects.
  • atmosphere_volume only works with ‘local’ lights that have a precise location and size and inverse-square decay. It does not support lights at an infinite distance, such as the Skydome light or directional light.
  • Currently, atmosphere_volume does not compose well against volumes. This is because atmospheres return a single flat result that is opacity mapped on top of whatever is in the background of the pixel.

atmosphere_volume should be composited using an ‘additive’ mode such as ‘screen’ because volumetric scattering is the light that cannot be represented in the alpha channel.

The example below demonstrates the effect of atmosphere_volume through a medium. It consists of a polygon plane with a circular ramp texture connected to the opacity of a standard_surface shader. The spotlight is pointing at the plane and atmosphere_volume is enabled.  

https://docs.arnoldrenderer.com/display/A5AFMUG/Atmosphere+Volume

How to drop an ‘Atmosphere’ node in scene and connect it to volumetric material

  • Add an atmosphere.
  • Set its material to a fog light (v_foglight)
  • Add a light to the scene and point it at some geometry.
  • On the Atmosphere node under the Render tab set the light mask to the light you added.
  • Tweak the volume fog material to match your scene, as the help states, this is dependent upon your scene scale.
  • Sometimes people will put another piece of geometry between the light and the target geometry to help break up the rays, often this is just a plane with a noise based alpha material such as a gobo.

To use Atmosphere, select it under Scene Elements, in the 3Delight ROP.

The 3Delight Atmosphere shader allows rendering of atmospheric effects such as fog and smoke. The shader interacts with all lighting elements of the scene (environment, area lights, directional lights, mesh lights) and will be part of any Deep EXR file produced by the render.

Color — This controls the general colour of the atmosphere.

Density — Specifies the density of the atmosphere. Density is related to the amount of particles/molecules that block light. Increasing density makes objects disappear with distance. Density's effect is also related to the scale of the scene: scenes built at a small scale need a higher Density to render the same effect as a scene built at a larger scale.

Super Reflective — This parameter enables rendering of volumes that reflect much more light than they absorb. This non-physical behaviour allows for more artistic freedom. For example, it is possible to obtain a distinctive glow around lights while avoiding the steep absorption of a high Density that makes objects invisible in the atmosphere. The default value of 0 ensures a physically plausible render.

The parameters of this shader have been designed to allow an artist-friendly specification of the atmosphere's look. The colour dialled in the UI will be the colour of the atmosphere. The often-used absorption and scattering parameters give non-intuitive results, since they require visually unrelated colours in the UI.

This shader simulates fog using single scattering only. For multiple-scattering, it is recommended to create a VDB volume with constant density and use VDB Volume shader.

https://www.sidefx.com/docs/houdini/render/volumes.html

Concept

  1. Smoke is the display mode of a fog volume; for example, the Fog Volume output of IsoOffset is shown directly as a smoke-style field.
  2. ISO is the display of an SDF as many small squares that always face the camera, e.g. the SDF Volume output of IsoOffset.
  3. Poly is the normal model display, such as the Iso Surface and Tetra Mesh outputs of IsoOffset.
  4. A Volume is a voxel field. By default it is displayed in smoke mode, but the node has no values by default so nothing is visible; give it an initial value to see it.
  5. SDF is a signed distance field, a common way of representing 3D space in computer graphics. Similar applications include ray-marched materials, DFAO in UE4 and so on.
  6. In the ISO display it is also stored in voxel form, from which distance, direction and other data can be obtained.
  7. VDB is OpenVDB, a newer general-purpose volume data type. VDB files can be exported as a general interchange format containing density and a variety of other volume data.
  • Convert VDB converts between VDBs and the other volume/geometry types.
  • Convert Volume converts to polygons or to the fog display mode.
  • Volume Visualization can be used to visualise the display colour of the fog.

Because there are several display states and different nodes generate different voxel data, there are different conversions between them.

Examples

Create the Arnold material

Node — Atmosphere_volume > shader to out_environment > atmosphere

Out>environment>select the material of the volume

Change the density

Add a noise node and connect its RGB to the atmosphere_volume RGB density

Process1 — Smoke

Create the node — Circle > scatter > attribcreate (density) > value 1

Add colour to see the detail; the attribute is density

Add an Attribnoise node: Attribute Names > float, named density

Change the remap ramp and element size

So the result is that the density is no longer uniform everywhere

Add the Volume Rasterize Attributes and choose the attribute of density

The Volume Rasterize Attributes SOP takes a cloud of points as input and creates VDBs for its float or vector attributes. Internally, this node utilizes the Volume Rasterize Particles SOP and is thus subject to its nuances and limitations.

  • Group — A group of points in the input to rasterize.
  • Attributes — Pattern specifying which attributes to create corresponding VDBs for.
  • Note — Only float and vector attributes can be rasterized.

change the particle scale

Drag the voxel size onto the particle scale and choose Relative Channel Reference to make the expression (sketched below)
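A hedged hou Python version of that channel reference is below; the node path and the "particlescale"/"voxelsize" parameter names are assumptions, so check the actual names on your Volume Rasterize Attributes node.

```python
# Hedged sketch: linking particle scale to voxel size with a relative channel
# reference. Node path and parameter names are assumptions for this example.
import hou

rasterize = hou.node("/obj/smoke_source/volumerasterizeattributes1")

# Equivalent of dragging one parameter onto the other and choosing
# "Relative Channel Reference".
rasterize.parm("particlescale").setExpression('ch("voxelsize")')
```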

Attribnoise > Animation > Animation Noise

Add the dopnet to do the simulation

Way1: Smokesolver +volumesource+gasturbulence+ smokesolver_sparse

The Smoke Solver is able to perform the basic steps required for a smoke simulation. Pyro Solver (Sparse) extends the functionality of this solver by adding flame simulation along with extra shaping controls.

This solver makes use of various field subdata on the object.

  • The object should have a scalar field density for the density of the smoke.
  • The object should have a vector field vel for the velocity at each voxel.
  • Optionally, the object can have a scalar field temperature for internal buoyancy calculations.

The essential building blocks of a smoke simulation are the object, solver, and sourcing. The Smoke Object (Sparse) node creates a dynamic object containing the required fields, which are then evolved by the solver as the simulation proceeds. The simplest smoke simulation needs the following data:

  • density scalar field that contains where and how much smoke is present;
  • temperature scalar field that’s used for buoyancy calculations;
  • vel vector field that captures the instantaneous motion of the smoke.

This solver takes care of ensuring these fields change in a manner consistent with smoke, but sourcing is responsible for injecting these quantities through the course of the simulation. For example, you may want to continuously add to density at the soot source or temperature to cause hot regions to rise.

Smokeobject_sparse

This DOP creates a smoke object with properly configured fields that can be evolved by a Smoke Solver (Sparse) or Pyro Solver (Sparse). The object will start out empty, and can be populated with smoke or heat using the Volume Source DOP.

Settings

Movement of the volume — temperature

The DOP Import Fields SOP is designed to streamline the common operation of importing many fields from fluid simulations into SOPs.

Presets: Smoke

Field — The scalar or vector field to extract from the object. It will be properly named, i.e. the vel field will create volumes named vel.x, vel.y, and vel.z. It will also be in a group named after the DOP object. While this is designed around importing fields, any geometry can actually be imported here.

Create the lights

The Volume Visualization operator adds detail attributes to the volume to allow visualizations that require multiple volumes to be joined together. For example, one may want to take a density volume and colour it according to three separate Cd.x, Cd.y, and Cd.z volumes.

First there is the opaque smoke. This smoke occludes geometry behind it. It also casts shadows from light sources. Finally, a diffuse color can be specified for what light colors it reflects rather than absorbs.

Second there is an emissive, glowing, component. This field is added directly to the final image, washing out but not occluding geometry behind it. This is useful for fire-style effects. It can also be useful for visualizing data because it allows interior detail to shine through the outer layers.

Fire colour — temperature and choose physical blackbody

Gasturbulence — Creates and applies a global turbulence field. This turbulent velocity field is modulated by the Control Field and lookup ramps provided. This controls where and with what magnitude the turbulence shows up, so you can ensure it occurs only in the regions of the sim you want.

  • Time Scale — Specifies a scale factor that relates DOP time to the simulation time for this microsolver. A value greater than one means the simulation time advances faster than the DOP time. A value less than one causes the simulation to appear to run in slow motion relative to the DOP time. Several expression functions such as doptime exist for converting from global times to simulation times and vice versa.
  • Scale — Magnitude of turbulence applied to specified velocity field.
  • Swirl Size — Initial (base) swirl size value, measured in world units. This value is derived from frequency.
  • Grain — The amount of influence added bands of turbulence have, relative to the initial Swirl Size.
  • Pulse Length — How fast the noise moves. Higher values will result in slower movement.
  • Seed — Defines the initial noise offset.
  • Attenuation — Defines the gradual loss of intensity.
  • Influence Threshold — When to apply turbulence, based on the specified Density Field.
  • Turbulence — Levels of turbulence to apply relative to the initial Swirl Size. For smoother transitions, use lower values.

Way2: smokeobject + volumesource + smokesolver + smokesolver

change the size

Use the bounding box with the Lattice node to create a deformation cage for the input geometry.

Expression — size

Alt + E opens the expression editor to give a bigger editing window

Expression — centre (for transform)

Add the expression to the smokeobject

Gasresizefluiddynamic — The Gas Resize Fluid Dynamic DOP is a microsolver used in building larger fluid simulations. The Fluid Solver and Smoke Solver DOPs allow microsolvers to be added before or after the main solver step to extend or tweak the simulation. Alternatively, advanced users may attempt to build an entire new solver out of microsolvers.

The Gas Resize Fluid Dynamic DOP will resize the fields required for different types of fluid simulations according to a reference field. A SOP Solver is used to recalculate the new bounds every timestep. This resizing is done with the Gas Resize Field DOP so it does not affect the actual voxel sampling, just the total number of voxels.

Unselect Max Bounds > Clamp to Maximum so the bound size will change according to the volume

Process2 — Fire

Pyrosolver_sparse — This node is an extension of the Smoke Solver (Sparse). It considers an extra simulation field (flame, which captures the presence of flames) and adds some extra shaping parameters to allow for more control over the emergent look.

remove the temperature attribute of attribnoise and volumerasterizeattributes

In sourcing, map the source volume density to the target field flame

Set the flame lifespan and change the scale of the temperature weight

Make the pyrosolver_sparse > Shredding larger to add more detail

Add the disturbance

Add gasvortexconfinement and confinement scale 0.5

Gasvortexconfinement — The Gas Vortex Confinement DOP applies vortex confinement to a velocity field. This is a force which amplifies existing vortices with the intent of undoing the diffusion that occurs during the diffusion stages of the fluid solver.

Confinement Scale — The strength of the vortex confinement.

Add gaswind and merge (direction)

The Gas Wind DOP is a microsolver used in building larger fluid simulations. The Fluid Solver and Smoke Solver DOPs allow microsolvers to be added before or after the main solver step to extend or tweak the simulation. Alternatively, advanced users could try to build an entire new solver out of microsolvers.

The Gas Wind DOP applies a wind force, adjusting the velocity field in the direction of the ambient wind.

filecache $HIP/sim/fire/$F4.vdb
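A hedged hou Python equivalent of pointing the file cache at that path is below; the node path is just an example, and the output-file parameter name can differ between File Cache SOP versions.

```python
# Hedged sketch: set the File Cache output to $HIP/sim/fire/$F4.vdb.
# "file" is the output parameter on older File Cache SOPs; newer versions
# build the path from several parameters, so adjust the parm name as needed.
import hou

filecache = hou.node("/obj/fire_sim/filecache1")  # example node path
filecache.parm("file").set("$HIP/sim/fire/$F4.vdb")
```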

Convertvdb — This node converts sparse volumes, or VDBs, into other types. It provides some extra options not available through the Convert SOP. This also allows the conversion of volumes into VDBs.

For converting to polygons, the second and third inputs can be optionally supplied. The second input provides a reference polygon surface for converting the volume, which is useful for converting fractured VDBs back to polygons. The third provides additional VDB fields which can be used for masking (which voxels to convert to polygons), and/or for specifying an adaptivity multiplier.

Add convertvdb + vdbvectormerge + primitive

primitive

Reduce by half the amount of data that we save to disk — write 16-bit floats

and set the render settings to save to disk

Render

  • Arnold render
  • Arnold material
  • Arnold light

Hide the particles so that only the arnold_volume is visible, and hide the flame field, so the render will look correct

Material ( Volume)

add the volume_sample_float + ramp_rgb1 + standard_volume

Standard_volume

The standard_volume is a physically-based volume shader. It provides independent control over volume density, scatter color and transparent color. Blackbody emission is used to render fire and explosions directly from physics simulations.

Each component can be controlled by a volume channel coming from the volume object, with other parameters acting as multipliers on the channel. Optionally the channel can be left empty, and a custom shader like Volume Sample or a procedural texture may be connected instead, to manipulate each component with more control.

However, be warned that the evaluation of a shader network for volume rendering is much more expensive than for surface shading because the shader network is called many times per ray, once per ray march sample. So, in a production environment, it’s best to use as few shaders as possible, ideally having just the standard_volume shader doing all the work.

https://docs.arnoldrenderer.com/display/A5AFHUG/Standard+Volume

with ramp_Rgb > black body

Ramp color preference

Viewport

Render

Process3: Explosion

Add scatter and vdbfrompolygons

Pyrosource

PyroSource — The Pyro Source SOP converts its input geometry into points suitable for sourcing pyro and smoke simulations. This SOP adds specified attributes to the generated points, which can be rasterized and imported into desired DOP fields by the Volume Source node. Pyro Source also contains a handful of initialization presets for driving common simulation scenarios.

Input geometry. When Mode is set to Surface Scatter or Volume Scatter, the geometry must correspond to a surface; in the latter case, the surface must also have a resolvable interior.

VolumeRasterizeAttribute — The Volume Rasterize Attributes SOP takes a cloud of points as input and creates VDBs for its float or vector attributes. Internally, this node utilizes the Volume Rasterize Particles SOP and is thus subject to its nuances and limitations.

Attribute — density

Add the pyrosolver, go to Sourcing, and keep density and temperature

PyroSolver — The Pyro Solver is a wrapper around a DOP network to simplify the running of Pyro solves.

The first input provides the sources for the Pyro simulation. This should be a set of named volumes. The exact names required are determined by the Sourcing tab. The Pyro Source SOP and Volume Rasterize Attributes SOP are useful tools for creating source volumes.

The second input provides the collisions for the Pyro simulation. It should be a SDF VDB, such as the second output of the Collision Source SOP or the main output of the VDB From Polygons SOP. If the collision is animating, points with a v attribute can be used to describe the motion. The two outputs of the Collision Source SOP can be merged and used as the second input to provide this.

Cooling rate controls how fast the temperature dissipates

Make the density scale smaller to make the fire effect clearer

Add density and divergence to make it expand quickly

make the visualization mode > smoke and change the density scale and shadow scale

outcome

Add the attribcreate

In the pyro solver, add the density and flame fields

Making it super easy to tweak the values — Minimal OpenCL Solve

The solver has the ability to perform a Minimal OpenCL Solve, which is useful for very rapid prototyping. This checkbox is located on the Advanced tab, and allows for interactive manipulation of parameters during a running simulation, which can give you quick feedback of their effects on the simulation.

When this checkbox is turned on, some features of the solver are turned off to ensure that all simulation data can stay in video memory, avoiding costly copies that are necessary when only Use OpenCL is turned on.

Open the Shape > dissipation and disturbance and shredding and turbulence

Shape

The shape of the resultant smoke can be greatly changed by tweaking the settings that are located in this section. Depending on values of these parameters, simulation results may fall anywhere between simple laminar smoke flow to small fires to huge explosions.

Dissipation reduces the density of smoke over time, so that it fades and eventually disappears. It is important to set an appropriate value for the Clamp Below parameter when performing a sparse simulation. Otherwise, tiny density values will linger and unnecessarily inflate the active simulation region.

Disturbance and shredding apply random forces to break up the simulation. The former exerts linear accelerations and is useful for breaking up smooth smoke caps. The latter rotates velocities to redirect the flow. Shredding is effective at adding chaotic motion without speeding up or slowing down the flow; it is especially useful for fire simulations, which are dominated by vertical licks when no shredding is used.

Turbulence can be used to add powerful large-scale noise to the simulation velocities.

Each shaping operation has a checkbox to turn it on and a scaling factor to specify how strongly to apply it. There is also a tab containing further parameters for each built-in operator. A common theme here is the control field, which can be used to spatially attenuate the strength of the shaping operator. When enabled, the value of the Control Field is fitted from the Control Range to 0-1; this is further passed through the Control Ramp if Remap Control Field is enabled. The remapped control value is then applied as a scaling factor on top of the global strength.

Then give it the same Arnold volume and material

Explosion material

Render only the standard volume

the emission > blackbody

Temperature changes the brightness of the fire centre

Blackbody intensity

Collision

VDB from Polygons > Distance VDB > Collision

Make the collision field with a VDB and set Initialize > Collision

Key the sphere transform frames

Point v (velocity) attribute and Distance VDB > collision / surface

Add a volumesource and merge to source the collision

The Volume Source node imports SOP data into DOP fields and geometry. This node is capable of merging an arbitrary number of SOP volumes and VDBs with fields, as well as importing and destroying simulation particles.

Posted in Houdini & Lighting

Week5 Performance Animation: Recording and Feedback

Part1 Audio Options

Acting! It's one of the most important components of being a successful animator. This week we were given the task of selecting three conversations and editing the audio for future animation. We had to choose audio clips that contain an interaction between two characters, in which at some point one person speaks for 10-20 seconds.

I chose three pieces of audio. They have one thing in common: they all have clear emotional expression, and they leave room to create freely with appropriate body language.

  1. The first is a dialogue between a man and a woman. The man excitedly insists that he is suited to rock music, and his rising tone shows his determination.
  2. The second is a dispute between the two protagonists. They both speak fast to assert themselves, until at last there is a physical conflict between them.
  3. The third is a couple quarrelling, with emotional expression and pauses.

Part2 Action Recording

Process Steps

1. Role building

Before shooting a video, we need to shape the character well, either based on the sound of the audio, or shaping the character from nothing. Either way, the role should include the character's personality, age, gender, hobbies, strengths, weaknesses and even life experience. In a word, the richer the better. The purpose of this is to give the character a soul and thoughts as well as rich emotion. When we step into the character, we can imagine what kind of posture and expression he will take on. We have a starting point and we know where to begin.

2. Conceive a story

Every animation demo we do exists inside a complete story. Even if it's a very short piece of audio, we can still conceive a small story for it. The reason for conceiving a story is that it tells us what kind of person the character is, what happened where, and what he is thinking at this moment. When we understand these, we have a clear idea, and we avoid being confused or having too many ideas while animating. In addition, we can note down the characters' emotions and actions while writing the story.

3. Take a reference video

After designing the story, we can shoot our video reference according to the plot. This is a very important step. The better the performance, the clearer our feel for the animation, which can save us a lot of unnecessary trouble.

4. Know your audio

This seems obvious, but it’s easy to settle for knowing your audio “pretty well.” Don’t settle! Know our audio like the back of our hand and internalize it. Listen to it on a loop, until the cadence is familiar to you. We should be able to recite it accurately, with every pause and emphasis in the right place. We don’t want our video reference to show us trying to remember the words or throwing in gestures too early or too late. While shooting reference, our focus should be on what our character is feeling and doing, not what word comes next and when.

5. Speak, don't mouth

We want our reference performance to be as genuine as possible. Performing the dialogue audibly can help us better connect to the words. Speaking the words will also help us to see where our character should take breaths, even if we can’t hear their breathing in the audio track.

Bonus: Since we're speaking along with our audio track, it can let us know when our timing is off and whether we need to study the track more.

6. Consider the energy level

Different voice performances call for different energy levels. We probably don’t want to be gesturing wildly during a subdued line read, and we don’t want to underact when the character’s voice is full of energy.

If our character is speaking calmly, speak calmly. If they’re yelling, yell! We want the effort of a high-energy line read to show up in our video reference.

https://www.animationmentor.com/blog/how-to-make-an-awesome-video-reference

Feedback of the audio

‘Audio clip 2 I think is the best one of these Crystal. Lots of different ways you can approach the story with this clip and it also opens it up to turning this in to a comedy if you want.’

I got Luke's feedback, so I chose Audio 2. In the original movie, the two characters don't have much body movement, but their facial expressions carry the scene.

Reference Recording

My idea is to add more body movements to enrich the dialogue. My thoughts include the following:

  1. Both characters are standing. Character A starts to talk with his arms crossed, and then wants to leave. Character B grabs him and is shaken off by Character A.
  2. Character B strides up to Character A, puts a finger against his chest and starts questioning him, pushing him all the way to the wall.
  3. Character A begins to ask Character B to step back.
  4. Character B's mood begins to collapse; he retreats and insults Character A, saying 'you do nothing' with accompanying actions, then points a finger at Character A and says 'you never loved her'. Throughout this, Character B has large movements, including accusing with his hands on his hips.

In the end, I chose to play the audio while shooting, and then combine the video and audio in post. I know the lines well, which helps me perform better, and also lets me track the audio more closely and understand the characters' emotions.

The action reference:

Part3 Feedback

Ftrack feedback 1:

Ftrack feedback 2 ( performance) :

Notes on Luke's advice

01s: Remove the arms-crossed action since it is always seen as a bit of a cheat. Put his arms behind his back, or on his hips, or be a bit more exaggerated with it and try to make it a little light-hearted.

05s: If you look at the reference, they're very close together, which makes it much easier to keep the heat in the argument, rather than them being that far away. Just have a go.

07s: Get the pose into the back — really push the back out there, like he's puffing out his chest.

13s: Then slowly, slowly start to pull back. He doesn't even have to walk away; he just has to slowly start to retreat into himself.

15s: Then when he goes to step away, grab him like that, and then talk to him like this, since it's a side-on shot.

Then I found some film performances of quarrels for reference.

Conclusion

I should always pay attention to the bending of the spine, which can read as grievance or pride. When a character is being forceful, he leans in towards the other person.

Consider the position of the hands and the placement of the arms for emotional expression: crossed arms read as closed-off and a bit of a cheat, while hands placed on the hips or at the waist read more like asserting dominance.

The distance between two people also carries emotion. If two people are far apart, it's hard to feel that they are about to have a fierce quarrel; the closer they are, the more oppressive it feels.

Point at the other character with a finger more often, and be firm about it.

I feel that I have learned some new body-language skills, including gesture, body shape, eye expression and the spatial relationship between characters. This week I'll summarise Luke's comments and some references I've found, and re-record.

Posted in Advanced & Experimental 3D Computer Animation Techniques

Week4 Rigs and Animation reference

Part1 Animation Reference

This week, Kay and I came up with 15 animations that can be triggered interactively, based on the story of Sweeney Todd. The character runs around the scene, and props such as the razor or the stove trigger the animations. The content of the animations is close to the character's daily actions and life.

Character personality

Todd, the barber, was a kind-hearted man who only wanted to enjoy the happiness of his family. Due to another man's conspiracy he was exiled far away, and the hardships along the way hardened his heart until it turned cold. He saw the darkness and stench of London, shining only with the light of death. After returning to London he still cherished the hope of finding his wife and daughter; when that hope was broken, only hatred remained, and revenge became the reason for the rest of his life. The life and death of others means nothing to him; only the people important to him matter.

Mrs. Lovett is a lonely one. She has an admiration for Todd, and this admiration gives her the idea of Todd killing people and using the corpses as pie filling. She seems to care for the barber's apprentice, but generally speaking she is a person who only cares about what she likes and ignores everything else. She is smart and calm, and greedy for small gains. When Todd killed for the first time, she handled it calmly and came up with the plan of making pies from the corpse, yet Todd ignored her and even despised her. Compared with Todd's original wife, Mrs. Lovett is a survivor.

According to their characteristics and representative actions, we finally decided on nine animations for Todd, five for Mrs. Lovett, and a final animation in which the two characters interact.

Since the rigging was still in progress, we chose to shoot the reference first.

  1. Pick up the razor (left to right)
  2. Place scissors
  3. Step on the chair
  4. Wipe the chair
  5. Turn on the stove (flame in pupil)
  6. Peeking at the guests
  7. Spray perfume
  8. Exercise muscles and bones
  9. Drinking (looking at the bottle)
  10. Chop meat
  11. Carry a tray
  12. Look at your wallet
  13. Roll dough
  14. Flower arrangement
  15. Pushing to the stove

https://www.bilibili.com/video/BV1Vs411X7iu?p=2&spm_id_from=333.788.b_6d756c74695f70616765.2

Part2 Rigs, Skin and Weights

In fact, Kay knows more of the software involved than I do, but because I still like the whole modelling process, I finally decided to be responsible for the sculpting, materials and rigging of the characters. She told me that Advanced Skeleton is very easy to use, so I spent a lot of time this week learning the plug-in. She showed me the rig she had done before; although it didn't succeed in the end, she gave me some advice and points to pay attention to so that I wouldn't repeat her mistakes. She also gave me some tutorials that she thought were very good, which were relatively short and easy to understand.

This was the most time-consuming part: the rigging. I learned how to use HumanIK to apply motion-capture actions onto the model's rig, which is shown in my previous blogs.

At first, I created bones in the most basic way, but I soon found two problems with this method. One is that I didn't know the name of each joint. The other is that I couldn't build controllers automatically this way. In other words, I had to use Advanced Skeleton to build the basic skeleton.


Then I used VMware to use advancedskeleton.

  • Rigs — BUILD and FIT and EDIT
  • Skin — DEFORM ( option1)

First, create a skeleton. I used the biped.ma skeleton file and imported it.

Half of the skeleton appears at this point, because it will be mirrored automatically. Then I matched the skeleton to the positions I needed, that is, to my model. Click Select DeformJoints and the joints are selected; click the Smooth Bind options and the controllers appear.

One of the advantages of this plug-in is that you can turn on the joint names to see the name of each body part, and it automatically calibrates the orientation of the bones.

I made mistakes in this step several times, because I hid the controller, moved only the root skeleton, and didn't notice that the controller was still at the origin, so the positions never matched. Later, I moved my character to the origin and finally succeeded.



Then I tried to rig my first model with Advanced Skeleton. At the beginning I followed a Chinese tutorial, and because of some translation problems I kept getting it wrong. Later I finally learned where to place the bone points, mirrored them successfully, and also did the skinning.


In fact, I'm not so satisfied with the weights and bones, because the deformation is still a little stiff, but this is just a first attempt. I also want to ask Luke how to solve this before we study further how to modify the weights.

Rigging is really a very complicated process, and it needs many attempts because there are always problems. But I don't want to avoid it, because once I get it working I have a sense of achievement, and I believe the process is only slow at the beginning; I will get more efficient once I'm familiar with the plug-in.

Part3 Updating

In fact, our progress this week was a little slow, because after discussing with Kamil, he told me that our model needs to run in both Maya and Unity, so the rigging requirements are a little high. So we used this gap to shoot animation reference. What we hope for is that triggering a prop in Unity starts playing an animation; the animation doesn't need to be very long, just a small action.

Kamil was not very familiar with Advanced Skeleton, so he spent a lot of time on it, and he also tried to use it in Unity. But he finished it in the end.

Although Kamil rigged the models for us, they didn't display correctly when imported into Unity: the character's legs disappeared after the import. None of us, including the games students, knew how to solve this problem, and in order not to affect the progress we didn't spend too much time on it. By the time he handed it over the schedule was already very tight, so to catch up we asked a friend to help modify the previous rig and add facial rigging, and the resulting rig is quite good. Since we then had a newly rigged model, we didn't ask Kamil to change his, and we thank him for promising to rig the character for us whenever we need help.

I think his rig is very good, but we were very short of time, and we couldn't start the animation until the rigging was finished. If we had waited for Kamil we would not have had time to finish all the animation, so we asked our friend for help. Even though we didn't use Kamil's rig in the end, we still thank him for helping us test these bones, and he said he learned a lot from this project.

After we got the rig modified by our friend, I made some skinning adjustments. On the whole we are quite satisfied and can start animating.

Part4 Rig testing

character1

character2

In fact, this character's facial expressions aren't done very well, because she doesn't have many movements and because of the time. Another reason is that we want to spend more time on Todd.

Part5 Conclusion

It's hard to keep trying and challenging ourselves, but the goal of our project is to understand and get in touch with areas we haven't worked in before. It's not about being perfect; I want to enjoy every step of meeting and solving problems. This process may take a lot of time, but what I learn is not only the skills, but also how to do my own research. These experiences are very valuable, and learning to solve problems and find material is also a way of training myself.

Actually, after trying rigging for the first time, I think it is really difficult. I still have a lot to learn and a long way to go.

Posted in Collaboration Unit

Week4 Zoetropes

A zoetrope is one of several pre-film animation devices that produce the illusion of motion by displaying a sequence of drawings or photographs showing progressive phases of that motion. It is a drum with sequential animation stills facing inward around the circumference. The viewer peers through equally spaced viewing slots toward the images on the opposite wall. An open top allows light to enter and illuminate the images. As the drum spins, the slits provide broken views of the drawings or photographs, creating a strobe effect and the illusion of a moving image.

In Maya, it is convenient and fast to attach an image-sequence material to an object to create effects such as explosions or lightning. Rather than simulating a real explosion, which needs a lot of computation, it uses flat planes to fake the three-dimensional effect. Because real particle simulation is very time-consuming, this approach works well and can be viewed in real time. Importantly, there are many resource packs of such sequences online, at least enough for previs.

When using an image sequence, you can keyframe the Image Number value (it’s automatically keyed to 1 by default). In addition, you can offset the Image Number keyframe by entering a frame number in Frame Offset.

In order to render this image sequence as a movie and as a texture we need to do the following:

  1. Assign a file node to a shader
  2. Assign the first frame of the image sequence to the file node
  3. Turn on Use Image Sequence attribute in the file node
  4. Notice that this automatically creates a linked connection to the Image Number attribute
  5. Changing the Frame Offset changes when the first frame of the image sequence will exist

How to make Maya texture with image sequence (details)

When you give the material to an object, you can select any image in the sequence, and then under the file node check Use Image Sequence; it will automatically read the related sequence. With this method, frame 1 reads image 1 of the sequence, frame 2 reads image 2, frame 3 reads image 3, and so on. In other words, the frame being played maps directly to the image sequence number, via the Image Number setting.

Therefore, there are two things to pay attention to when importing sequence frames. One is that the numbering of the images is strict: it is a numbered sequence, but the numbers cannot be padded like 001, 002, 003. The first image must be 1 (not 01 or 001), the second must be 2, the tenth is 10, and so on; the naming must follow Maya's rules for sequence images. Also, if you only have 100 images, there is nothing left to show after the animation plays past frame 100.

The next parameter is Frame Offset. The default value is 0. When it is set to 1, the played sequence is shifted: frame 1 of the timeline reads image 2 of the sequence, and so on.
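The same setup can be scripted. This is only a hedged sketch with placeholder file paths and node names; useFrameExtension is the "Use Image Sequence" attribute and frameOffset is the Frame Offset described above.

```python
# Hedged sketch of the image-sequence texture setup above (Maya Python).
import maya.cmds as cmds

# Create a file texture node and point it at the first frame of the sequence.
file_node = cmds.shadingNode("file", asTexture=True)
cmds.setAttr(file_node + ".fileTextureName",
             "/textures/explosion/explosion.1.png", type="string")  # placeholder

# "Use Image Sequence": this also creates the expression driving Image Number.
cmds.setAttr(file_node + ".useFrameExtension", 1)

# Frame Offset: with 1, timeline frame 1 reads image 2, and so on.
cmds.setAttr(file_node + ".frameOffset", 1)
```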

https://www.toolfarm.com/news/freebie_free_vfx_image_sequences_flipbooks_from_unity_labs/

How to make alpha image to use it in Maya

Import the colour sequence frames or picture into Premiere or Photoshop, make it purely black and white, and use invert to choose which parts to keep and which to discard. Black is the transparent part and white is the opaque part.

Save the map with alpha channel in TGA or TIFF format, and then paste the map on the model.

How to use VFX image sequence as texture in maya

Images with alpha files (standardsurface)

  • Create the plane or other object
  • Give the plane a new material
  • Change base> color to file and give the sequence image and click the use image sequence
  • Geometry>Opacity>give the alpha images
  • Invert if necessary
  • Go to the planeshape>arnold>Opaque>unselect
  • With the arnold render
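A hedged Maya Python sketch of the standardSurface recipe above, using Arnold's aiStandardSurface; node names, file paths and the plane's shape name are placeholders.

```python
# Hedged sketch: colour sequence to baseColor, alpha sequence to opacity,
# and the shape's Arnold Opaque flag turned off.
import maya.cmds as cmds

shader = cmds.shadingNode("aiStandardSurface", asShader=True, name="fxShader")

color_file = cmds.shadingNode("file", asTexture=True, name="fxColorSeq")
cmds.setAttr(color_file + ".fileTextureName",
             "/fx/explosion_color.1.exr", type="string")  # placeholder path
cmds.setAttr(color_file + ".useFrameExtension", 1)
cmds.connectAttr(color_file + ".outColor", shader + ".baseColor")

alpha_file = cmds.shadingNode("file", asTexture=True, name="fxAlphaSeq")
cmds.setAttr(alpha_file + ".fileTextureName",
             "/fx/explosion_alpha.1.exr", type="string")  # placeholder path
cmds.setAttr(alpha_file + ".useFrameExtension", 1)
cmds.connectAttr(alpha_file + ".outColor", shader + ".opacity")  # black = transparent
# cmds.setAttr(alpha_file + ".invert", 1)  # invert if necessary

# Arnold needs the shape's Opaque flag off for opacity to take effect.
cmds.setAttr("pPlaneShape1.aiOpaque", 0)  # placeholder shape name
```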

Images with alpha files (lambert)

  • Create the plane or other object
  • Give the plane a new material
  • Change color to file and give the sequence image (exr/png) with colour and click the use image sequence
  • Right click the transparency and break the connection
  • Give the transparency the sequence image of the alpha ( black and white only ) and click the use image sequence
  • Invert if necessary
  • With the hardware render

Effect with time offset

Way 1 — use Frame Offset (on both color and alpha)

  • -20 means the sequence starts 20 frames later than frame 0
  • 20 means the sequence starts 20 frames earlier than frame 0

Way 2 — Adjust keyframes

  • Right-click the Image Number > Delete Expression
  • Go to frame 1 and input 1 > right-click Set Key
  • Go to frame 100 and input 100 (effect duration) > right-click Set Key
  • Do the same for both the color and the transparency file nodes
  • Go to the Graph Editor and select the relevant material; there will be a curve
  • Change the curve to a straight line (optional)
  • Drag the curve to adjust the start frame and even the speed (see the sketch below)
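A hedged Python sketch of Way 2; the file node name is a placeholder.

```python
# Hedged sketch: remove the automatic expression and key Image Number
# (the frameExtension attribute) by hand.
import maya.cmds as cmds

file_node = "fxColorSeq"  # placeholder node name

# Delete the expression that "Use Image Sequence" created on Image Number.
expr = cmds.listConnections(file_node + ".frameExtension",
                            source=True, destination=False, type="expression")
if expr:
    cmds.delete(expr)

# Key frame 1 -> image 1 and frame 100 -> image 100.
cmds.setKeyframe(file_node, attribute="frameExtension", time=1, value=1)
cmds.setKeyframe(file_node, attribute="frameExtension", time=100, value=100)

# Optional: make the curve linear so the playback speed is constant.
cmds.keyTangent(file_node, attribute="frameExtension",
                inTangentType="linear", outTangentType="linear")
```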

Final effect

Posted in Advanced & Experimental 3D Computer Animation Techniques

Lighting Tutorial Week2 : Image-Based Lighting (IBL) Setup

Part1 Nuke

Inspecting the source image files

Matching the overall pixel value of the targeted area

Separating the HDRI into Hi / Low pass , North / South Dome

The benefit of this map is that we can draw a straight line much more easily in this kind of projection.

I followed the steps to open Nuke. Because I had not used this software before, I found that even after I replaced the path, the correct image was not displayed. Emma and I both had this problem. Later I found that the viewer window was simply not updating: if I select the correct node and press 1, the corresponding image appears.

Press 1 to refresh the picture view of the node

Ctrl+shift+leftmouse button+drag — draw the selection on the image

move the mouse to the line and press ctrl it will show the yellow dot to adjust the line

Tab — add the node

Exposure node — to match the brightness, and change the colorspace to stops

press up/down arrow to increase or decrease the value by 1

press right / left arrow to increase or decrease the value by 0.1/0.01/0.01

Multiply node

Press 1 and 2 — swap the picture

press this button to make it darker or brighter

Copy the path and paste it into the node to see the final effect

Render (select all the nodes)

Part2 Maya

raw format is correct

Frequent Editors

show the hidden object
rotate y
rotate x
reference
character
hide the reference

Shader set up

chrome
delete the history
remove the background plane
script editor

Posted in Houdini & Lighting

Week3 Advanced Skeleton and Motion Capture to Rigs

This week's class opened my eyes and taught me more about motion-capture technology. Its main content is how to use the AdvancedSkeleton plug-in to build a rig and retarget captured motion onto it to form an animation. I also learned about the website Mixamo, which has a lot of free models and animations. I downloaded some actions to test this model, and I hope to use this model and rig to do motion-capture animation.

Two test Mixamo animations

At the beginning I followed Luke's tutorial, and the rigging went very smoothly until I imported a new rig with animation. I missed some steps and some path names, which led to mistakes, so after several attempts I still failed to copy the motion capture onto the rig. At this point I wondered if it was because the model had no controllers, so I asked Luke for help. His answer was: this will make it more difficult to control the mocap after it has been transferred, as there will be no controllers for the rig; for the mocap to work you don't have to have a control rig, however it is best to do this.

So the answer is that it's better to have a controller, but it doesn't stop me from doing this step without one, so I tried again. I remembered that I had done several steps in the wrong order, which is why it didn't show the correct result. Then I did it again quickly, and this time it was right.

I recorded my binding process.

I summed up a few important points

  1. Import a model that already has bones and check the bones for errors.
  2. Then select the root joint and click Create Custom Rig Mapping.
  3. Go to the Definition tab, double-click the corresponding joint in the structure image, and then select the matching joint on the character model.
  4. When binding hands and feet, if the bones are mirrored correctly, binding one side will automatically align the other side as well.
  5. Save the skeleton definition.
  6. Import the mocap rig.
  7. Right-click the rig and select Assume Preferred Angle to return the model to the T pose.
  8. Click the Add button to add the character rig as Character2.
  9. Select the Character2 rig and load the skeleton definition.
  10. Choose Character > Character1 and Source > Character2.
  11. Then play the animation timeline.
mapping
mirror
all done successfully
save
mocap
T pose
add
load (do not change the path) Template: HIK
match

After the test, I decided to make a controller for it to make adjusting the animation easier, so I learned the Advanced Skeleton plug-in.

I have to say that this plug-in is very convenient and easy to use. As long as you click Advanced skeleton, many options will appear.

Process

  1. Select Tools > name matcher
  2. Check whether the joint names in the Outliner are consistent with Advanced Skeleton's.
  3. Select files of mixamo or others
  4. If so, click Create + place fit skeleton, then click build advanced skeleton, and finally click constraint to joints.
click the three steps

If the names are inconsistent, delete the redundant namespace prefix before completing the steps above. To delete it: Windows > General Editors > Namespace Editor > select the namespace > Delete.

When I watched the tutorial, I found that I didn't have an option that Luke has in his video. I asked him why, and he replied that the Create Blendshapes button only appears if the rig you are using comes with blendshape data, and a blendshape will appear as another mesh in the Outliner. So the model I used does not have blendshape data.

I recorded my operation process.

I didn't finish it because it was too late yesterday, and when I got up today I found that yesterday's model had lost its textures after saving. After trying, I still couldn't get the textures back to normal, so I did the whole thing again from the beginning. I repeated this process four times, because I often missed some steps; so I read several tutorials and notes, finally became familiar with the process, and successfully transferred a motion-capture clip onto the skeleton. This time I recorded the whole process. The combination of these two technologies is actually very convenient for animation.

With the controller, I really do find it more convenient to adjust the action. Most of the motion transferred successfully onto my model, and the quality of the transferred motion is acceptable. The clip is a Brazilian war dance with a large range of motion, and I found some places where the motion was not accurate, so I made some adjustments.

Several steps to adjust the action.

  1. Bake the action: Bake > Bake to Custom Rig.
  2. Select all the controllers and add them to an animation layer.
  3. Select a frame to adjust, press S to set a key, and then modify the pose.

For example, in the knee area I thought some movements were too large, which made the knee orientation inaccurate, and I also adjusted the hand positions. Some of these adjustments were recorded.

In fact, there are still many actions that need adjusting, but I am very satisfied with having learned how to transfer motion capture onto a rig, and I now have some confidence in the rigging and motion adjustment for our future collaboration project.

Posted in Advanced & Experimental 3D Computer Animation Techniques

Houdini Tutorial Week 4

This week's course covered materials, lighting and rendering, which are similar to the equivalent features in other 3D software, so it's not difficult to understand. Then I learned about some of Houdini's own shaders, its rendering engine (Mantra), and how to render particles.

  1. RGB is the acronym of red, green and blue.
  2. An “RGB color space” defines the range of colors (the gamut) that a piece of hardware or software can represent with red, green and blue primaries.
  3. sRGB is one specific RGB color space.
  4. sRGB is very popular, but its color gamut is quite limited.
  5. Adobe RGB is a wider color space that covers sRGB and most common CMYK print gamuts.
  6. ProPhoto RGB is an even wider color space, generally used in color-managed workflows.

ACES

ACES is a color system that’s meant to standardize how color is managed from all kinds of input sources (film, CG, etc), and provide a future-proof working space for artists to work in at every stage of the production pipeline. Whatever your images are coming from, you smoosh them into the ACES color standard, and now your whole team is on the same page.

For CG artists, a big benefit is the ACEScg color gamut, which is a nice big gamut that allows for a lot more colors than ye olde sRGB. Even if you’re working in a linear colorspace with floating-point renders, the so-called “linear workflow”, your color primaries (what defines “red”, “green” and “blue”) are likely still sRGB, and that limits the number of colors you can accurately represent.
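As a concrete illustration of the “linear workflow” part (this is about the sRGB gamma curve, not the ACEScg gamut, which is about wider primaries): the standard sRGB decoding curve that has to be removed before doing lighting maths on colours looks like this.

```python
def srgb_to_linear(c):
    """Decode a single sRGB channel value (0-1) to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.5))   # sRGB mid-grey is only ~0.21 in linear light
```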

Displacement Rendering

  • Displacement mapping is usually used to represent height variation (relief) on an object’s surface at render time.
  • The effect is to move each point along the normal of the surface by a distance defined in the map (sketched below).
  • It gives the texture the ability to express real detail and depth.
  • It also allows self-occlusion, self-shadowing and changes to the silhouette.
  • On the other hand, compared with similar techniques it is the most expensive, since it generates a lot of additional geometry.
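A tiny sketch of the second bullet: each point is pushed along its normal by the map value times a user scale (plain Python, just to show the arithmetic).

```python
def displace(P, N, height, scale=1.0):
    """Move point P along normal N by the sampled map value times a scale."""
    return tuple(p + n * height * scale for p, n in zip(P, N))

# A point on a flat, upward-facing surface moves straight up:
print(displace((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), height=0.2, scale=0.5))
# -> (0.0, 1.1, 0.0)
```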

Rendering Engine

RenderMan has a powerful shader compiler and high-quality motion blur, which lets designers create extremely complex film shots. It also has a feature that cannot be ignored: its realism. RenderMan can render photorealistic images, so it is very widely used in industry. RenderMan-compatible renderers are used throughout the production of high-end moving images thanks to their excellent render quality and speed. In high-end fields such as animated features and visual-effects blockbusters they are an indispensable rendering solution, and world-famous production companies such as ILM and Sony use them as one of their final rendering solutions.

Redshift supports object exclusion and adjustable render sampling. It is fast and fully featured, with support for fog and volumetric lighting, proxies and instancing. Redshift’s efficient memory management allows rendering scenes with hundreds of millions of polygons and terabytes of texture data. Using point-based GI as well as brute-force GI, it can compute indirect lighting extremely quickly. By taking advantage of the GPU and intelligent sampling techniques, Redshift has become known as one of the fastest renderers available. Users can export objects and light groups to Redshift proxy files, which can easily be referenced by other scenes; proxies allow powerful shader, matte and visibility-flag overrides, which are often required in production.

Arnold is currently a CPU renderer (a GPU version is under development), a film-quality rendering engine based on physically based algorithms, and it is very artist-friendly. It appears everywhere in animated films and effects-heavy blockbusters, and it excels on complex projects of all kinds. Arnold’s design framework can easily be integrated into an existing production pipeline. It is built on a pluggable node system, so users can extend and customise the system by writing new shaders, cameras, filters, output nodes, procedural models, light types and user-defined geometric data. The goal of the Arnold framework is to provide a complete solution for animation and VFX rendering.

Mantra is the highly advanced renderer included with Houdini. It is a multi-paradigm renderer, implementing scanline, raytracing, and physically-based rendering. You should use the physically based rendering engine unless you have a good reason to use another engine. Mantra has deep integration with Houdini, such as highly efficient rendering of packed primitives and volumes.

Mantra and Rendering

  • Nodes corresponding to renderers and scene description formats (such as the Mantra node and RenderMan node). These nodes output scene description files and call the appropriate renderer to render the file.
  • Nodes for generating other outputs, such as the Geometry node which “renders” the scene geometry to a geometry format such as .bgeo or .obj.
  • Utility nodes to control renders and dependencies. For example, you can use the Merge node to sequence renders. You can use the Switch node to switch between different render nodes based on an expression.

The mantra output driver node uses mantra (Houdini’s built-in renderer) to render our scene. We can create a new mantra node by choosing Render ▸ Create render node ▸ Mantra from the main menus. In general, rendering in Houdini uses a camera defining the viewpoint to render from, lights to illuminate the scene, and a render node representing the renderer and render settings to use. However, we can still make preview renders using the current view, a headlight, and default render settings.
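The same setup can also be scripted with Houdini’s Python module; a minimal sketch (node names here are hypothetical):

```python
import hou

cam = hou.node('/obj').createNode('cam', 'render_cam')    # viewpoint to render from
rop = hou.node('/out').createNode('ifd', 'mantra_main')   # 'ifd' is the mantra ROP type

rop.parm('camera').set(cam.path())   # point mantra at the camera
rop.render()                         # render with the current settings
```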

We can do most of our work in the Render view, which gives an interactively updating render. This lets us assign materials, change render node and shader node parameters, and see the results as we work. To write out an image, click Render to Disk or Render to MPlay in the render node’s parameter editor.

The Valid Frame Range menu controls whether this render node renders single frames or sequences (animations). Choose Any frame to render single frames. Choose Frame range to render a sequence. Houdini uses a mathematical pinhole camera to simulate a camera. Because a pinhole camera does not have in-camera effects such as depth of field and bokeh, we must explicitly tell mantra to simulate them. The size of the rendered image is controlled by a parameter on the camera.
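Continuing the scripted sketch above: the image size and the depth-of-field inputs live on the camera node (parameter names are the usual ones; depth of field still has to be enabled on the mantra node itself).

```python
import hou

cam = hou.node('/obj/render_cam')        # the camera created in the earlier sketch
cam.parmTuple('res').set((1920, 1080))   # size of the rendered image
cam.parm('focal').set(50)                # focal length
cam.parm('fstop').set(2.8)               # aperture, used once DOF is enabled on mantra
cam.parm('focus').set(5.0)               # focus distance in scene units
```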

Mantra attributes

  • Render to Disk — Renders with the last render control settings, using the path specified in Output Picture.
  • Render to MPlay — Renders with the last render control settings, redirecting rendered frames to MPlay, instead of the specified path. If enabled, deep images and cryptomatte images will still be written out to their specified output path.

Controls whether this render node outputs the current frame (Render any frame) or the image sequence specified in the Start/End/Inc parameters (Render Frame Range). Render Frame Range (strict) will render frames START to END when it is rendered, but will not allow frames outside this range to be rendered at all. Render Frame Range will allow outside frames to be rendered. This is used in conjunction with render dependencies. It also affects the behaviour of the ‘Override Frame Range’ in the Render Control dialog.

  • Render Current Frame — Renders a single frame, based on the value in the playbar or the frame that is requested by a connected output render node.
  • Render Frame Range — Renders a sequence of frames. If an output render node is connected, this range is generally ignored in favor of frames requested by the output render node.
  • Render Frame Range (Strict) — Renders a sequence of frames. If an output render node is connected, this range restricts its requested frames to this frame range. (A short scripted version of these settings follows below.)
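In script form, the same menu and range on the mantra node (continuing the node from the earlier sketch; parameter names are the standard ROP ones):

```python
import hou

rop = hou.node('/out/mantra_main')     # the mantra node from the earlier sketch
rop.parm('trange').set('normal')       # Valid Frame Range: Render Frame Range
rop.parmTuple('f').set((1, 240, 1))    # start / end / increment
```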

Noise Level

  • Represents a threshold in the amount of variance allowed before mantra will send more secondary rays. Variance essentially represents how “spread out” the values in a set of samples are. For instance, a set of samples that were all the same would have a variance of 0. It is generally a good idea to keep this value as high as possible so that rays are sent only into those areas where an unacceptable amount of noise is present.
  • Adding “direct samples” and “indirect samples” image planes can help us track how many samples are being sent and to which parts of the image. For more information about sampling, see the “Sampling and Noise” section.
  • If we find that certain objects in our scene require substantially more samples than other parts of our image and we are unable to “target” those objects using the Noise Level parameter, it may be a better idea to add per-object sampling parameters to the problem areas.

Diffuse Quality

Controls the quality of indirect diffuse sampling (for information on the difference between direct and indirect rays, see sampling and noise). Often, indirect sources of light (such as the surfaces of other objects, and light scattered inside of a volume) will be a significant cause of noise in your renders. Turning this up should decrease this type of noise, at the cost of slowing down rendering.
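As a small sketch, assuming the usual internal names of these two parameters on the mantra node (Noise Level and Diffuse Quality):

```python
import hou

rop = hou.node('/out/mantra_main')        # mantra node from the earlier sketch
rop.parm('vm_variance').set(0.01)         # Noise Level: lower = more secondary rays
rop.parm('vm_diffusequality').set(2.0)    # Diffuse Quality: more indirect diffuse samples
```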

Lighting

Environment lights illuminate the scene from a virtual hemisphere (or sphere) beyond the farthest geometry in the scene. Environment lights can be rotated to orient their directional illumination, but they cannot be translated. An environment light may use a texture map to provide HDRI illumination from an environment map. With no rotation, the environment map is oriented so that the top face aligns with the positive Y axis. The environment map controls the colour and intensity of light arriving from different directions.
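A minimal scripted version of an environment light driven by an HDR map (the file path is hypothetical):

```python
import hou

env = hou.node('/obj').createNode('envlight', 'env_hdr')
env.parm('env_map').set('$HIP/tex/studio_4k.hdr')   # hypothetical HDR file
env.parm('light_intensity').set(1.0)
```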

HDR (Resource https://hdrihaven.com/hdris/?c=all)

HDR mapping refers to environment maps with a high dynamic range used in 3D and other imaging software. Generally an HDR map is a “seamless map” made from HDR photos (a seamless map is an image whose edges join up top-to-bottom and left-to-right with no visible seams). HDR maps are usually of natural scenery or indoor environments.

An HDR map is an image holding high-dynamic-range lighting information: an ordinary image is 8-bit, while an HDR map is 32-bit, so it holds far more tonal detail. Its dynamic range is closer to that of the human eye, and can even exceed it; in short, it is a photo with rich detail in both the bright and dark areas. To make up for the limited dynamic range of a camera, it is usually built in image-processing software from several photos taken from the same position at different exposures.

HDR maps are needed when rendering architecture, interiors, still life, machinery, film and TV, and other post-production work. They play two main roles: as an environment background (for example the sky, clouds and trees behind an architectural render), and as the illumination and reflection source for the rendered model. When rendering highly reflective models such as cars or stainless steel, an HDR map used as the environment light provides both the illumination of a reflector and rich, realistic reflections on the surface of the rendered object.

Light objects are the objects that cast light onto other objects in a scene. With the light parameters you can control the colour, shadows, atmosphere and render quality of objects lit by the light. Lights can also be viewed through and used as cameras. The main light types (sketched in script form after the list) are:

  • Point — A light that emits light from a specific point in space defined by the transform for the light.
  • Line — A line light which is from (-0.5, 0, 0) to (0.5, 0, 0) in the space of the light.
  • Grid — A rectangular grid from (-0.5, -0.5, 0) to (0.5, 0.5, 0) in the space of the light.
  • Disk — A disk shaped light. The disk is a unit circle in the XY plane in the space of the light.
  • Sphere — A sphere shaped light. The sphere is a unit sphere in the space of the light.
  • Tube — A tube shaped light. The first parameter of Area Size controls the height of the tube and the second controls the radius.
  • Geometry — Use the object specified by the Geometry Object parameter to define the shape of the area light.
  • Distant — A directional light source infinitely far from the scene. Distant light sources cast sharp shadows, and so are candidates for the use of depth map shadows.
  • Sun — A finite sized (non-point) directional light source infinitely far from the scene. Sun lights are similar to distant lights with the exception that they produce a penumbra – similar to the actual sun with Soft shadows.
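These shapes correspond to the Light Type menu on a standard Houdini light; a small sketch, assuming the usual hlight parameter names and menu tokens:

```python
import hou

key = hou.node('/obj').createNode('hlight', 'key_light')
key.parm('light_type').set('grid')           # rectangular area light from the list above
key.parmTuple('areasize').set((2.0, 1.0))    # Area Size: width and height
```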

Colour — The colour of the light source.

Intensity — The linear intensity of the light source. If the intensity is 0, the light is disabled. In this case, the light will only be sent to the renderer if the object is included in the Force Lights parameter of the output driver.

Exposure — Light intensity as a power of 2. Increasing the value by 1 will double the energy emitted by the light source. A value of 0 produces an intensity of 1 at the source, -1 produces 0.5. The result of this is multiplied with the Intensity parameter.
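The intensity/exposure relationship written out as a quick check:

```python
def effective_intensity(intensity, exposure):
    """Exposure is a power-of-two multiplier on top of the linear intensity."""
    return intensity * (2.0 ** exposure)

print(effective_intensity(1.0, 0))    # 1.0
print(effective_intensity(1.0, -1))   # 0.5
print(effective_intensity(2.0, 1))    # 4.0
```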

the combination of the two lights — the environment light presents the detail of the HDR map, while a traditional light adds controllable colour and shaping on top

https://www.sidefx.com/docs/houdini/render/sampling.html

Material and Arnold

(resource: https://docs.arnoldrenderer.com/display/A5AFHUG/Sampling)

Principled Shader — The goal of this shader is to produce physically plausible results while using intuitive rather than physical parameters. A large number of materials can be created with relatively few parameters. All parameters are in the zero to one range and represent plausible real-world values within that range. Textures can be applied to all relevant parameters. Note that the texture value is always multiplied with the value of the parameter.

Two ways to assign a material:

set the Material parameter on the object node, or dive inside the object and add and connect a Material node

Priority

Arnold

arnold
with arnold light

Add the UVproject node

UVProject creates the UV texture attribute if it does not already exist. The attribute class (Vertices or Points) is determined by the Group Type. It is recommended that UVs be applied to vertices, since this allows fine control on polygonal geometry and the ability to fix seams at the boundary of a texture.

The best way to visualize the effects on UVs is in the UV view. To change a viewport to show UVs, click the View menu in the top right corner of a viewport and choose Set View ▸ UV viewport.
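A small scripted equivalent of adding the node (the container and stand-in geometry here are hypothetical):

```python
import hou

geo = hou.node('/obj').createNode('geo', 'textured_geo')
box = geo.createNode('box')           # stand-in geometry
uvp = geo.createNode('uvproject')     # creates the uv attribute if it doesn't exist
uvp.setFirstInput(box)
uvp.setDisplayFlag(True)
```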

modify the UV to show the texture
glass material

Render Particles

Create the Arnold render node in /out, an Arnold skydome light (with an HDR image as its texture) in /obj, and an Arnold material builder (containing a standard surface) in /mat.

Since the crag model comes with its own material, add an Attribute Delete node to remove the mantra material assignment before giving it the Arnold material.
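A sketch of that fix in script form, assuming the built-in assignment lives in the standard shop_materialpath primitive attribute (the geometry container name is hypothetical):

```python
import hou

geo = hou.node('/obj/crag_geo')                           # hypothetical geometry container
clean = geo.createNode('attribdelete', 'remove_old_mat')
clean.parm('primdel').set('shop_materialpath')            # strip the built-in material assignment
```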

To give different shaders to different parts of the same piece of geometry:

Add a Material node inside the geometry, select the group and assign the material (to find the group name, check the names the object was separated into).

Group — A list of primitives (or points, if Attributes is set to Point attributes), or the name of a group, to assign the material to.

Number of Materials — increase it to two

Number of materials — The number of materials to assign. This is useful for assigning materials to various groups of primitives. You cannot layer materials – if you assign multiple materials to the same primitive, the last material will override the previous ones.
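A hedged sketch of the same assignment, assuming the usual Material SOP parameter names; the group names, material paths and geometry container are all hypothetical:

```python
import hou

geo = hou.node('/obj/building_geo')               # hypothetical geometry container
mat = geo.createNode('material')
mat.parm('num_materials').set(2)                  # Number of Materials
mat.parm('group1').set('glass_group')
mat.parm('shop_materialpath1').set('/mat/glass')
mat.parm('group2').set('wall_group')
mat.parm('shop_materialpath2').set('/mat/wood')
```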

Roughness — changes the sharpness of the highlights and reflections

Curvature — This VOP computes the amount of surface curvature, with white in convex areas, black in concave areas and 50% gray in flat areas. This is useful for masking wear like scratches and dents, which often happen on raised edges.

add a ramp and combine the curvature output with the standard surface

copy the ramp and connect it to specular roughness to give the specular more detail

Add the arnold material to the particle

At first I found that the Arnold section wasn’t there; after I added the attribute to the parameters, it appeared.

Change Arnold > Points > Mode (it can be set to sphere / disk / square) and the point scale.

  • user_data_rgb > Cd, connected to the base colour
  • user_data_float > life
  • user_data_float > age
  • Add user_data_float > life and user_data_float > age, and connect them to the input and input max
  • Add the ramp and ramp_rgb

Motion blur

render sequence

I found that Arnold’s render time was about 2 minutes per frame, which I thought was too long, because rendering 400 frames would take more than 8 hours. I asked Mehdi how to speed it up, and he told me I could reduce the light sampling value. I also felt the particles were so big that I could see individual round particles, and Mehdi suggested changing the mode to disk. I rendered it again and the effect was slightly different.

Render Destruction

Last week I didn’t finish the destruction with different materials because I couldn’t find the right group. I asked Mehdi and found the solution, which turned out to be a single option to check.

Blast — set the group to the separated object and enable Delete Non Selected
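In script form (the group name and geometry container are hypothetical):

```python
import hou

geo = hou.node('/obj/building_geo')               # hypothetical geometry container
blast = geo.createNode('blast', 'isolate_glass')
blast.parm('group').set('glass_group')            # the separated piece to keep
blast.parm('negate').set(True)                    # Delete Non Selected
```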

Then I gave them different colors and tried to render with mantra.

  • roof & ground – concrete
  • glass – glass
  • wall – wood
wood material

Assemble — This operator is used to finish the process of breaking a piece of geometry. It uses the groups and connected pieces created by the Break operator to output a set of disconnected pieces.

Block start & Block end

ball transform
node
Posted in Houdini & Lighting | Leave a comment

Week3 Character UV and Texture

I’m really struggling this week, because I’m pushing myself into new areas and learning a lot of new techniques, such as unwrapping UVs and painting textures in Photoshop.

Part 1 UV and Material

Character1

The first thing I did was unwrap the UVs of my first model and adjust some of the geometry. I recorded a short clip of the unwrapping process. I haven’t found ideal screen-recording software yet: the QuickTime I use now records the screen without stuttering, but the exported video is really large, so I have to process it in Premiere before it can be watched.

Continuing from last week’s work, this week I re-unwrapped the UVs of the shirt, waistcoat, shoes and hair.

And I also searched for some information in this process, because I had forgotten the correct UV shape of clothes and pants.

My steps are: apply a planar projection first, select a loop of edges and cut it open, select the whole shell, and then use the Unfold function in the UV Editor. Sometimes the edges don’t line up after cutting, so I use Sew to join the edge lines back together.
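The same steps can be reproduced with Maya commands; a rough sketch, with a hypothetical object name and edge ranges:

```python
import maya.cmds as cmds

cmds.polyProjection('shirt_geo.f[*]', type='Planar', mapDirection='z')  # planar map
cmds.polyMapCut('shirt_geo.e[120:140]')                                 # cut the seam open
cmds.unfold('shirt_geo.map[*]')                                         # unfold the shell's UVs
cmds.polyMapSew('shirt_geo.e[200:210]')                                 # sew an edge back together
```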

After the last one-on-one, Luke said he wanted me to spend more time on materials.
So I added a lattice texture to the sleeves.

I readjusted the UVs and bump of the vest, because if the bump value is too high and the material’s texture is too dense, the clothes look cluttered on the character. It’s better to show the texture clearly; it may not match real-world scale, but on a cartoon character the effect reads better.

I also made two changes to the face.
One, in line with his personality, was to give him a broken eyebrow (a scar through the brow). This was Kay’s suggestion; she thinks it looks cooler, and I think it works very well.

I also changed the material of the eyes to make them look brighter and more alive; the plain black looked a little stiff.

I also made some adjustments to the face map to bring out more skin texture.

Then I adjusted the shoes; here is the screen recording of the process.

Then came the shoes and the hair, because I thought I might texture the hair and shoes later.

I also added a high gloss to the leather shoes and a leather texture to the gloves.

I painted some texture and colour gradation into the black hair, along with white highlights.

After the first model, I feel that unwrapping UVs is not particularly difficult; the main work is painting the maps. Since I had never painted maps before, it feels like I’ve picked up a new skill. I used to rely entirely on materials found online, but now I add some hand-painted parts.

Character2

Then I started to unwrap the UVs and paint the maps: mainly the textures of the hair, gloves, face and skirt. I painted some textures myself and also found some fabric and bump (relief) materials online.

In addition, I modified Todd’s hair and added some lines.

The skirt is made of three pieces: one is a nylon material with a little sparkle applied, plus a ring of lace.

The maps were then assembled according to the UV shapes.

I also painted some smoky make-up on her face, hoping to create an evil feeling.

I gave the glove a half-rubber texture, with both a colour texture and a bump map.

For the hair, I painted some hair texture. Because the layering of the hair is weak (it’s only made of patches), I used several shades of colour to bring out the feeling of individual strands.

After the first character, I was already familiar with mapping the second one, and I’ve got used to painting materials in Photoshop. Substance Painter is also good software; although I didn’t use it this time, I will study it when I have time.

Final rendering

Posted in Collaboration Unit | Leave a comment