Week 2 Previs / Postvis / Techvis | Film / VFX / Game Animation | Matchmove / Rotomation / Motion Capture

Part 1 Previs / Postvis / Techvis

Previs, postvis and techvis let directors visually test ideas for their film, and help producers work out the most cost-effective way to shoot it.

Previs is essentially a simplified production of the content to be shot, made before principal photography. It uses low-poly models and simple animation to show the actors’ positions, framing, camera angles, camera moves and other key decisions, and serves as a reference for the director, director of photography and others during the shoot. As we’ve touched on before, previs involves previsualizing part or all of a film. Often used for particularly complex scenes, it uses storyboarding, animatics and asset building to create a three-dimensional visualization of the story’s world, enabling the director to explore different shots and angles. In the past, filmmakers planned their visual presentation with storyboards, concept art, physical models and props; now a previs team can complete and accelerate this process with computer animation tools.

Previs enables a director to visually see the story before going to camera, which may sound like a novelty to some, but when we are planning something either technically or practically challenging, it helps immensely to see how things play out beforehand. Anything dangerous or expensive that we may only get one chance at shooting, for instance, might be something you want to invest some previs time into. Previs enables us to encounter some of these problems and plan for their resolution. Beyond that, previs plays a major role in storytelling since directors can play with lighting, cameras, and sets before production to help them discover how to tell the story the way they want to.

Postvis integrates the previs assets into the live-action footage so the visual effect can be previewed in advance. Postvis has to work out where the live camera was: postvis artists need to understand the dimensions of the set and match the footage to the 3D virtual scene, usually using 3D tracking software to reconstruct the motion of the real camera. Postvis greatly helps the editorial process by letting directors and editors cut with the missing parts of the shots incorporated. A postvis team can quickly take elements of shots and insert missing backgrounds, creatures, and effects – any number of details the director needs. Turning the material over quickly gives the production time to refine the edit and make changes that get the best out of the material.

Previs and postvis make it possible to preview shots and judge whether certain actions are feasible without completing the final visual effects; in fact, many elements of a sequence get revised at this stage. Beyond the creative benefits, previs is cost-effective because it reduces guesswork on shoot days, and it really helps the actors understand what is happening scene by scene when shooting against bluescreen.

Techvis is, in effect, the technical side of previs. At this stage we work closely with all of the crew’s creative leads, from the production designer to the director, to help them plan their shots; building the scene this way, visual effects may also be involved in producing the visualization. Techvis isn’t some separate thing that we do – it’s at the core of our production-centric approach to visualization. Because of that approach, we end up spending a good deal of time on set, working closely with the camera and visual effects departments. It’s common to reverse-engineer shots for acquisition on a greenscreen stage, for example when an actor is suspended by cables or by various stunt rigs. Some shots need to be laid out with a clear demarcation between digital and practical elements or extensions. Motion-based shots generally require techvis to verify that the move is physically achievable well ahead of the shoot day.

Part 2 Film Animation / VFX Animation / Game Animation

Film animation stays closer to reality and everyday life, so lifelike movement is a particular demand on animators. One of the most important qualities an animator must possess is the ability to make a character’s actions natural and fluent, consistent with the logic of real life. In film animation we not only design a large amount of dialogue and expressive action for the characters, but also use those actions, combined with sound and shot design, to drive the story forward. A wealth of small, detailed actions should also be designed to make the character more believable and to portray personality. The purpose of film animation, then, is mainly narrative performance and character building, and the production team can use a variety of methods and angles to express each plot point as needed.

The controllable actions in game animation are all cycles: a walk cycle, a run cycle, and then other cycle actions connected after them, with corresponding transition animations between each pair of cycles. A character’s punch attack, for example, goes through idle, wind-up, punch, and recovery back to idle – one complete cycle. In a game, a character is limited to a fixed set of actions that repeat constantly. For the sake of the game experience, a game character’s movement is built to the designer’s specification, and much of it does not follow the normal laws of motion. The purpose of game character animation, clearly, is to deliver a specific play experience.
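The cycle structure described above can be sketched in a few lines of Python (all state names and clip lengths here are illustrative, not taken from any real engine):

```python
# Hypothetical clip table: each game-controlled action is a short looping
# clip; the engine maps an ever-increasing game frame into the clip.
CYCLES = {
    "idle": 24,    # clip lengths in frames (illustrative values)
    "walk": 32,
    "punch": 18,
}

def clip_frame(state: str, global_frame: int) -> int:
    """Return the frame inside the looping clip for the current state."""
    length = CYCLES[state]
    return global_frame % length
```

A transition animation would simply be another entry in the table, played once between two cycles instead of looping.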

VFX animation refers to the special effects we see in films and other videos. The effects here cover two areas: effects modelling and effects shooting – that is, greenscreen photography, colour grading in post, effects scenes and so on. VFX animation mainly supplements, processes and perfects the image by combining the plates shot against greenscreen with computer-generated effects added in post. When information needs to be conveyed more precisely, when the image needs to be more refined, or when an object that does not exist in nature must drive the plot, we need to create highly realistic or visually striking elements, and in the creation of such elements VFX animation plays an irreplaceable role.

Film and television animation is broader in its effects and animation performance: it covers a wider range and has more diversity. Because of its particular constraints, game animation must find a balance between expressiveness and the size of the game files, while film animation is completely free of these restrictions. In this respect there are differences between 3D games and 3D animation, and between 2D games and 2D animation. In 2D film animation, although camera-move animation is often used, what plays the decisive role and leaves a deep impression on the audience are usually the shots with beautiful composition, creative perspective and strong continuity. Creating animation for film, games and VFX are three different processes: while film and VFX animation are meant to be watched, game animation is all about user interaction.

Part 3 Motion Capture with Matchmove and Rotomation

In visual effects, matchmove is a technique that allows the insertion of computer graphics into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot. The term is used loosely to describe several different methods of extracting camera motion information from a motion picture. Sometimes referred to as motion tracking or camera solving, matchmove is related to rotoscoping and photogrammetry. Match moving is typically a software-based technology, applied after the fact to normal footage recorded in uncontrolled environments with an ordinary camera.
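At the heart of every camera solve sits the pinhole projection model: a 3D point maps onto the image plane by dividing by its depth, and matchmove software inverts that relationship across many frames and many tracked features. A minimal sketch of the forward projection (simplified: no lens distortion, principal point at the image centre):

```python
def project(point, focal):
    """Project a 3D point (in camera space) onto the image plane
    of an ideal pinhole camera with the given focal length."""
    x, y, z = point
    return (focal * x / z, focal * y / z)
```

The camera solver’s job is the inverse problem: given the 2D tracks, recover the focal length, the camera pose per frame, and the 3D positions of the tracked features.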

Rotomation is the technique of animating a 3D element on top of tracked motion-picture footage, frame by frame, to match an actor or an object in a live-action plate. It is needed when replacing specific, complex deforming objects such as characters. Rotomation is mainly used for digital replacement of characters – including clean-up such as wire removal and erasing original characters and other elements – and for tracking an actor’s performance. Besides replacing actors, rotomation is also often used for digital makeup.

Motion capture is the ability to track in 3D the motion of a non-rigid object, such as a human body, a face or a piece of cloth. This is a special case compared to rigid moving objects or standard matchmoving because, for each frame of the footage, the position of a non-rigid track is totally independent of any previous position and of every other track. Its 3D position therefore cannot be computed from a single view: to compute the depth of such a track, you must see it from at least two different viewpoints.
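The two-viewpoint requirement can be made concrete with the simplest stereo setup: two pinhole cameras side by side, separated by a baseline. The depth of a tracked feature then follows from the disparity between its two image positions. A minimal sketch (idealized rectified cameras, illustrative numbers):

```python
def depth_from_disparity(x_left, x_right, focal, baseline):
    """Depth of one feature seen by two side-by-side pinhole cameras.

    x_left / x_right: horizontal image coordinates of the same feature,
    focal: focal length, baseline: distance between the camera centres.
    """
    disparity = x_left - x_right
    return focal * baseline / disparity
```

With a single viewpoint the disparity is unknown, which is exactly why a non-rigid track’s depth cannot be computed from one view.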

Matchmove artists match CG scenes with shots from live-action footage so the two can be convincingly combined. They recreate live-action backgrounds on a computer in a way that mirrors the camera on the set in every way, including lens distortion. They do this by tracking the camera movements to make sure the real and virtual scenes appear from the same perspective. Sometimes matchmove artists go to the film set to take measurements and put up tracking markers. Then they use these markers to track the camera movement and work out the relevant coordinates in the 3D scene. They do this using 3D tracking programs like Maya or 3DEqualizer. Matchmove artists also do body and object tracking, using markers to recreate the movements of people, vehicles or other objects in CG. The motion files created with motion capture are then passed on to other departments via the VFX pipeline, so that, eventually, they can be seamlessly combined by the compositor.

Rotomation can be a complicated procedure, and like other VFX techniques its successful completion can take a significant amount of time and human involvement, which can be combined with motion capture. The procedure varies greatly depending on the needs of the scene, but generally a well-executed rotomation rests on the assumption that the camera has already been correctly matchmoved and fitted into the set. One of the first tasks is to establish the distance between the element being rotomated and the camera. The tasks that follow include, among others, setting the initial pose, using non-linear animation techniques, analyzing the movement, and editing the model.

Posted in Advanced & Experimental 3D Computer Animation Techniques

Week 2 Character Design and Modeling

Part 1 2D Character Design

Reference

The reference pictures are mainly for the overall feeling – I didn’t follow the film exactly. I wanted him to be a very thin figure with a fierce air at the same time, and in these pictures I mainly referenced European clothing.

Draft

In fact, the style I’m good at is cartoon rather than realism, so when drawing characters I still chose the four-heads-tall proportion I’m comfortable with.

I want to highlight a few points:

  • Hair: slicked back, bold color, highlights
  • Barber’s clothes: suit and vest, retro style
  • Face: the shadow around the eyes
  • Body shape: slender, with longer hands

Based on these drafts, I redesigned his proportions, enlarged his head, lengthened his hands, and settled on his back view.

Character design

The remaining part was designing our second character.

I would like to highlight several features of this character:

  • Curly red hair
  • Eyebrows on the top
  • Dress and gloves

Because I’m good at designing female characters, I didn’t draw many versions of the draft – I got what I wanted in one pass.

Character design

Part 2 Modeling

Tutorial

Probably because I hadn’t done any modeling for a long time, I was very slow at first. I started with the head. My modeling method starts from the base models in the Maya library – almost every model I have comes from them. It actually saves time, and you can still refer to real human anatomy: although these characters are quite cartoony, there are structures that can’t be ignored.

While building the models in Maya I mainly used the sculpting tools. I often delete half the mesh and mirror-copy it to keep both sides symmetrical. At the beginning I forgot this step, which left me a lot of repetitive work to restore the symmetry. I also spent a lot of time repairing broken faces, because once faces are broken I can’t sculpt them. My model still has a lot of edge loops, so faces often end up stacked on top of each other, and I have to spend time deleting and refilling them.

Throughout the modeling process I was never satisfied that the model truly matched my design – they didn’t look like the same person, even though I had brought the reference image into Maya, so I began to suspect it was a texturing problem. I don’t want to unwrap UVs and paint maps too early because the model isn’t final yet; right now it’s just a rough draft, not the finished result.

The texturing part is not finished yet – I’ve only applied a little colour and simple maps to render the overall effect, and the UVs have not been unwrapped.

Model front and right perspective

Rendering

As for the girl, this is all I’ve done for now. Some parts of the model still need adjusting; I want to show them to my teammates and collect their opinions before revising again, so I haven’t unwrapped her UVs yet.

Unlike the boy, the girl needs particular attention paid to her eyes and eyelashes.

This time I chose not to sculpt the hair as a single mass but to use hair cards. At first I wanted the feeling of individual wisps of hair, but that turned out very messy, so I made one solid piece for the top of the head; the cards make it easy to build the curls.

I mainly used these sculpting tools. I had forgotten some of them, so I looked up some documentation, and I wrote down my notes and learning steps below.

Because I will revise this character next week, there isn’t much detail yet. For now I’ve only sculpted her face, and her body is just roughly blocked out.

You can sculpt a polygonal model with tools from the Sculpting shelf. You can also access these tools from the Mesh Tools > Sculpting Tools menu.

Sculpt — Builds up initial forms and moves vertices in a direction determined by the average of all normals within the boundary of the tool cursor. Use the Direction setting to modify the default behaviour. Press Ctrl + 1 to activate the Sculpt tool when another sculpting tool is already active.

Smooth — Levels vertex positions in relation to each other by averaging the positions of vertices. Press Ctrl + 2 to activate the Smooth tool when another sculpting tool is already active.

Relax — Averages vertices on the surface without affecting its original shape. Press Ctrl + Shift to temporarily activate the Relax tool while using another sculpting tool. Press Ctrl + 3 to activate the Relax tool when another sculpting tool is already active.

Grab — Selects and moves vertices based on the distance and the direction you drag. Useful for making subtle adjustments to the form of the model. Modify the Direction setting to constrain the movement of the tool; for example, XY constrains vertex movement to the XY plane. Ctrl-drag to temporarily move vertices along their averaged normal. Press Ctrl + 4 to activate the Grab tool when another sculpting tool is already active.

Pinch — Pulls vertices in towards the center of the tool cursor. Useful for more sharply defining an existing crease. Press Ctrl + 5 to activate the Pinch tool when another sculpting tool is already active.
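The Smooth tool’s “averaging the positions of vertices” can be illustrated with a tiny Laplacian-smoothing sketch (plain Python, not Maya’s actual implementation):

```python
def smooth(points, neighbors, strength=0.5):
    """Move each vertex toward the average of its neighbours' positions.

    points: list of coordinate tuples; neighbors: dict mapping a vertex
    index to the indices of its connected vertices; strength in 0..1.
    """
    result = []
    for i, p in enumerate(points):
        nbrs = neighbors[i]
        avg = [sum(points[j][k] for j in nbrs) / len(nbrs) for k in range(len(p))]
        result.append(tuple(p[k] + strength * (avg[k] - p[k]) for k in range(len(p))))
    return result
```

With strength 1 a spike vertex snaps to its neighbours’ average; repeated passes at lower strength give the gradual levelling you see when painting with the tool.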

Rendering

Part 3 Feedback & Updates

Character1:

Luke gave me some advice and guidance:

  • Some software that can help me – for example Substance Painter, because the sleeves are a little short on detail right now.
  • If I’m not familiar with rigging, I can try tools such as Advanced Skeleton.
  • I can communicate with my team members, who can give me plenty of opinions.

Character2:

Luke told me that if the character’s arm has no elbow joint modelled, it will cause a lot of trouble when rigging. The elbow joints need to be built into the model to make rigging easier later.


Based on Luke’s advice last time, I remodelled the character’s arm and added the elbow joint.

I also slightly adjusted the thickness of the skirt to make it look fuller.

While sculpting, the two sides of the skirt had become different, so this time I again kept the object symmetrical to make the later rigging easier: delete half of the object, snap the middle row of points onto the same line (select the points and scale them flat on the X axis), duplicate the remaining half, set the copy’s scaleX to -1, combine the two halves, and finally merge the middle row of points.
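The delete-half / mirror / merge workflow amounts to the following vertex bookkeeping (a plain-Python sketch, not Maya code):

```python
def mirror_x(verts, tol=1e-6):
    """Keep the +X half, snap seam vertices onto the axis, then mirror.

    Equivalent to: delete half, scale the middle points flat on X,
    duplicate with scaleX = -1, combine, and merge the seam points.
    """
    half = [(0.0 if abs(x) < tol else x, y, z) for x, y, z in verts if x > -tol]
    # mirror everything except the seam vertices, so they are not duplicated
    mirrored = [(-x, y, z) for x, y, z in half if x > tol]
    return half + mirrored
```

Skipping the seam vertices in the mirrored copy is the same as merging the middle row of points at the end.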

Part 4 Conclusion

This is the result after I finished the design. After a year away, I started making models again; I hadn’t used some Maya commands for a long time, so the first week was mostly about refamiliarizing myself with the commands and the sculpting workflow.

As for the character design, I actually spent a lot of time on it because I’m not very good at drawing boys. Since I had done very little of this before and had always painted fairly realistic characters, I kept deleting details that might read as realistic, emphasising the big shapes and the outline instead. I’m very grateful to Kay for appreciating the character I drew – I didn’t know human anatomy very well, and she pointed out my problems accurately.

Posted in Collaboration Unit

Houdini Tutorial Week 3

The third week’s task is a destruction effect. I tried importing the model I made in Maya into Houdini for effects work. Every session brings new content, which is both challenging and gives me a sense of accomplishment. Rigid-body fracturing is a very powerful feature of Houdini; in fact most of the core work is done in SOPs – fracture shapes for different materials, constraint generation and control, attribute control, conversion to packed objects, and later adding detail and shading for rendering.

Voronoi

A Voronoi diagram is a subdivision of the plane in which any position inside a cell is closer to that cell’s sample point (a residential block’s centre, say) than to the sample points of the neighbouring cells, and each cell contains exactly one sample point. Because of this equal-distance partition, Voronoi diagrams can be used to solve nearest-point, smallest-enclosing-circle and many other spatial-analysis problems, such as adjacency, proximity and accessibility.
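The defining rule – every position belongs to its nearest sample point – is simple to state in code. A minimal 2D sketch:

```python
def voronoi_cell(sites, p):
    """Index of the Voronoi cell (nearest site) that contains point p."""
    return min(range(len(sites)),
               key=lambda i: (sites[i][0] - p[0]) ** 2 + (sites[i][1] - p[1]) ** 2)
```

Houdini’s Voronoi fracture does essentially this in 3D: every part of the mesh is assigned to its nearest scattered point, and the cell boundaries become the fracture surfaces.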

The fracture node with three material presets

This node can generate three types of broken material: concrete, glass and wood.

It has four inputs: geometry, constraint geometry, proxy geometry, and an optional input that can be fed extra points to control the fragmentation shape. You can directly choose one of the three fracture presets in Material Type. If a constraint geometry is connected, the node automatically generates constraints between the fragments for you.

Use different display modes

The Guide Geometry option controls the different display modes. Fractured Geometry is a mode available in all three presets; as the name suggests, it shows the broken shapes.

Constraint Network is also available in all presets. It displays the constraint network between the fragments (press W to switch to wireframe display).

Primary Volume and Edge Detail are specific to concrete; they show the distribution of each primary fracture level and the cutting surfaces that the Boolean operation uses to cut the object.

Concentric Noise and Edge Detail are the unique modes of glass, which display the cutting lines used to generate the broken glass.

As for the three display modes unique to wood, they represent three forms of wood fragmentation: grains, cuts and splinters.

Generate concrete form

The Fracture Level parameter controls the cutting levels – a bit like layers in Photoshop. Increasing this value increases the number of levels and fragments, generating smaller fragments on top of the existing ones.

Enabling Edge Detail adds noise to the fracture surfaces.

With the RBD Paint node you can first paint the fragile areas onto an object and then connect it to the RBD Material Fracture setup; it uses a density attribute to make the painted areas generate more fragments.

We need to select Attribute in the Scatter From tab and enter density as the attribute name.

Glass formation

When several panes are connected to the input, Fracture Per Piece is a very useful parameter: it fractures each pane separately. It distinguishes objects by the piece attribute, or by the connectivity between objects if no piece attribute exists.

The parameters under the cracks tab control the amount of broken glass.

Enabling Chipping under the Chipping tab adds more detail to the fragments by creating further cracks between the pieces.

Generate wood form

By default, the splintering runs along the object’s longest axis.

Cut spacing is used to adjust the number of cuts to control the number of pieces.

The Cluster tab is used to “glue” small pieces of wood together into larger chunks, giving the fracture more variety.

Boolean

Handling interpenetrating geometry: connect the overlapping meshes to a node such as a Boolean set to Union, which resolves the interpenetration (re-topologizing the surfaces where they intersect).

The Shatter operation uses the geometry in the right input to fracture the left input, and it is very fast.

Note that the Boolean’s Operation is set to Shatter, and the B input’s type is set to Surface, because the cutting geometry is a sheet of faces rather than a solid.

First way

Voronoifracture + explodedview nodes

To avoid the scatter all pointing at the centre, add a pointsfromvolume node to make it look natural

vdbfrompolygon

Add remesh node

test

Then add an attribnoise node to add noise to P

Rest node — creates a rest attribute, a frozen snapshot of the P attribute taken before deformation

An attribwrangle with a short VEX expression does the same job as the Rest node:
one wrangle stores P (like taking the rest snapshot) and a second restores it at the end

Second way

add scatter / grid / copytopoints and attribrandomize nodes

attribrandomize node

Attribute Name: N (normal); Distribution: Direction or Orientation

Add attribnoise and a booleanfracture divide so the fracture produces pieces of different shapes

wooden shape

Right click – Create Reference Copy makes a copy whose values all reference the original node

modify the scale of transform and uniform scale of explodedview

Third way

Add rbdmaterialfracture

enable Edge Detail
the node recognises the different pieces and separates them, naming each one piece + number

Conclusion

Crushing method: generate small pieces between the big pieces to increase the detail and authenticity of the destruction

Based on the basic setup, copy the points three times so that the copied meshes overlap. Then add a connectivity node to give each piece a class attribute, and use this attribute to drive the offset value of the turbulence set up in a pointvop. In this way the overlapped fragments deform chaotically and flow out of the gaps – these are the small fragments between the large fragments that we need.
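The key trick is that each connectivity class gets its own noise offset, so the overlapped copies sample the turbulence field in different places and drift apart instead of moving in lockstep. A sketch of that per-class offset idea in Python (names illustrative; in the scene this lives inside the pointvop):

```python
import random

def class_offsets(num_classes, seed=0):
    """One random turbulence-offset vector per connectivity class,
    so identical overlapped fragments deform differently."""
    rng = random.Random(seed)
    return {c: (rng.uniform(-10.0, 10.0),
                rng.uniform(-10.0, 10.0),
                rng.uniform(-10.0, 10.0))
            for c in range(num_classes)}
```

A fixed seed keeps the simulation repeatable between cooks while still giving every class a distinct offset.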

Node Summary

Geometry Nodes

Normal — This node computes point, vertex, primitive, or detail normals using a more accurate approach than the Facet node or the Vertex node.

Scatter — This node distributes new points across the surface in a roughly uniform pattern and optionally attempts to limit clumping and holes. For volume primitives, this node scatters points through the volume with a density proportional to the field value (with negative values giving zero probability). We can use the generated points for a variety of purposes. They may be used to specify birthing locations for particles, template points for copying, cell points for fracturing geometry, or as queues for irradiance computations. We can specify the density, the number of points per unit of area, (length for curves, volume for volumes and tetrahedra), optionally weighted by an attribute to control the distribution over the surface of the geometry. We can also scatter points with the density based on a texture map by scattering in texture space.
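The attribute-weighted distribution can be sketched as weighted random sampling over faces (plain Python, not the actual Scatter implementation):

```python
import random

def weighted_scatter(face_weights, n, seed=0):
    """Pick n face indices with probability proportional to each face's weight."""
    rng = random.Random(seed)
    faces = range(len(face_weights))
    return [rng.choices(faces, weights=face_weights)[0] for _ in range(n)]
```

A face with weight 0 (for example, zero painted density) receives no points, while heavier faces collect proportionally more.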

Remesh — This node tries to maximize the smallest angle in each triangle. (A “high quality” triangle mesh is one where all angles are as close as possible to 60 degrees.) Two types of remeshing: Uniform — the node tries to equalize all edge lengths, giving triangles of equal size. Adaptive — the node uses bigger triangles in broad areas and smaller triangles in detailed areas. This allows you to represent the original surface with fewer triangles. However, since edge lengths vary, this mode will have fewer equilateral triangles than Uniform.

Divide — Smooths by subdividing. Cleans up polygons: fixes concave polygons, divides N-sided polygons into triangles or quads with an optional brickered layout, and triangulates non-planar polygons.

Copytopoints — This is very useful for populating scenes with repeated elements such as trees, buildings, or snowflakes with full control over the placement of the copies. To simply create multiple copies of geometry without needing target points, use Duplicate. For example, we can arrange copies in a spherical shape by copying them onto the points of a polygonal sphere, or scatter points across terrain geometry and copy trees onto the points. Note that this node creates additional geometry in the scene for each copy.

Booleanfracture — This SOP fractures the input mesh using one or more cutting surfaces. Similar to Voronoi fracture, this is a higher-level node that handles common fracturing-related tasks such as naming pieces, recomputing normals, and building constraints between adjacent pieces.

Vdbfrompolygons — Converts polygonal surfaces, or surface attributes, to VDB volume elements. The input geometry must consist of quadrilateral or triangular faces. This node creates a signed distance field (SDF), or a density field, from the polygons.

Voronoi fracture — Fractures the input geometry by performing a Voronoi shatter around the input cell points. The SOP takes two inputs: the mesh to be broken, and the points around which the broken pieces are built. In general these points are generated by a scatter or pointsfromvolume SOP. For solid fracturing (interior faces are built for each fragment), the points should roughly fill the volume of the mesh; a fragment is then generated for each cell point. The pieces the SOP cuts can be further clustered together based on attribute values on the input points.

Exploded view — Pushes the geometry outward from the centre to create an exploded effect and visualize how the geometry has fragmented. Uniform Scale controls how far the fragments are pushed: each fragment moves a distance proportional to its offset from the centre, so a value of 1 doubles the overall size of the object. To cancel a scale-out of 1, scale back in by a factor of 0.5.
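The push-from-centre behaviour can be sketched numerically (a simplified stand-in for the node, operating on piece centroids rather than full geometry):

```python
def exploded_view(piece_centroids, center, uniform_scale=1.0):
    """Push each piece away from the centre; uniform_scale 1 doubles the spread,
    and a negative value pulls the pieces back in."""
    return [tuple(center[k] + (c[k] - center[k]) * (1.0 + uniform_scale)
                  for k in range(3))
            for c in piece_centroids]
```

Scaling out by 1 and then applying -0.5 (a factor of 0.5) returns each piece to where it started.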

RBDMaterialFracture — This node allows you to accurately fracture geometry based on a specific type of material. Currently concrete, glass panels, and wood are supported. It accepts four inputs: geometry, constraint geometry, proxy geometry, and an optional input for extra points to control the fracturing process. It will fracture the incoming geometry using different fracturing method depending on the material specified in the Material Type parameter. If an input constraint geometry is specified, it will update the constraints for the fractured pieces.

Transformpieces — The Transform Pieces SOP can be used to transform input geometry according to transformation attributes on the template geometry, according to the rules and precedences described in Copy and Instancing operations. The template geometry to use for each piece of input geometry is determined by attributes on the geometries and the Attribute Mode parameter. This node can be used in combination with a DOP Import node in Create Points to Represent Objects mode to transform the results of a multi-piece RBD simulation.

Dopimport — The DOP Import SOP imports geometry from a DOP network, and can also transform the input geometry based on the transforms of the DOP objects. The Import Style parameter can be used to select between several modes of operation. DOP objects have two distinct transforms associated with them. One comes from the Position data attached to the object. The other comes from the Geometry data on the object, which has an inherent transform associated with it. This SOP can apply either, both, or neither of these two transforms. It can also apply the inverse transform to effectively undo the transform operation of another Dop Import SOP. The Dop Import SOP also allows the transformation of selected vector attributes for points and primitives.

Rbdinteriordetail — This SOP creates additional detail on the interior surfaces of fractured geometry, which can be used to produce more interesting high resolution geometry. The amount of noise added to the points can be scaled based on their distance from the original surface.

Dynamic Nodes

Rigidbodysolver — The RBD Solver DOP sets objects to use the Rigid Body Dynamics solver. If an object has this DOP as its Solver subdata, it will evolve itself as an RBD Object. This solver is a union of two different rigid body engines, the RBD engine and the Bullet engine. The RBD engine uses volumes and is useful for complicated, deforming, stacked, geometry. The Bullet engine offers simpler collision shapes and is suitable for fast, large-scale simulations. The RBD and Bullet engines also have support for voronoi fracturing. 

Bulletrbdsolver — The Bullet Solver DOP sets objects to use the Bullet Dynamics solver. This solver can use simplified representation of the objects, such as boxes or spheres, or a composite of these simple shapes to make-up a more complex shape. This solver can use arbitrary convex shapes based on the geometry points of the object, and can also collide objects against affectors that are cloth, solid, or wire objects.

Groundplane — The Ground Plane DOP creates a ground plane inside the DOP simulation. It creates a new object that has a simple grid geometry attached to it. The grid has a Volumetric Representation attached which simulates an infinitely large plane. This can be used as a collision surface for RBD or Cloth simulations. Because the ground plane can be moved and reoriented, several ground planes can be used to box in an object.

Rbdpackedobject — The RBD Packed Object DOP creates a single DOP Object inside the DOP simulation. It takes the geometry from the given SOP Path and uses each primitive that has a transform and a single point to represent an RBD object. This includes primitives such as packed primitives, spheres, and tubes. Each primitive provides the collision geometry for a rigid body, and attributes on the primitive’s point are used to store information such as orientation, mass, and velocity.

Constraintnetwork — The constraint network defines pairs of RBD objects that should be constrained together. With the constraint network, SOP Geometry is specified which defines what objects should be constrained. This makes it easy to procedurally generate a set of constraint relationships, including constraints of different types.

Glueconrel — The Glue Constraint Relationship DOP is one of several constraint relationship data types. These constraint relationships can be attached as subdata to a Constraint Network DOP node to control the relationships defined by the constraint network. In a Glue Constraint Relationship, the objects move as one whole object until enough force is applied to break the glue bond. Glue constraints can only be broken by things colliding with them, generating an impact. This constraint type is currently only supported by the Bullet Solver.
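
The glue behaviour described above can be sketched in a few lines of Python. This is a toy illustration, not Houdini's actual solver: each bond between a pair of pieces has a glue strength, and a bond breaks only when the impact impulse on it reaches that strength. The piece names and dictionaries are made up for the example.

```python
# Toy illustration of glue-constraint breaking (not Houdini's internals):
# bonded pieces act as one object until an impact impulse on a bond
# reaches the bond's glue strength.

def break_glue_bonds(bonds, impacts):
    """bonds: {(piece_a, piece_b): strength}
    impacts: {(piece_a, piece_b): impulse from collisions this frame}
    Returns only the bonds that survive the frame."""
    return {pair: strength
            for pair, strength in bonds.items()
            if impacts.get(pair, 0.0) < strength}

bonds = {("wall", "roof"): 100.0, ("wall", "floor"): 100.0}
impacts = {("wall", "roof"): 150.0}      # a collision hits one bond hard
print(break_glue_bonds(bonds, impacts))  # only the ("wall", "floor") bond survives
```

In the real solver the impacts come from the simulation itself and accumulate on the constraint geometry, but the break rule is conceptually this simple threshold test.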

Simulation 1

Gravity and Groundplane

Change Bounce to control how much the pieces rebound, or Friction to control how quickly they slow to a stop.
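
As a rough mental model of what those two parameters do (a simplified sketch, not the Bullet solver's actual math): Bounce scales the rebound component of the velocity at impact, and Friction bleeds off the sliding component. The function name is my own.

```python
# Simplified sketch of Bounce and Friction at an impact with the ground:
# Bounce scales the rebound (normal) speed, Friction reduces the
# sliding (tangential) speed. Not the actual Bullet solver math.

def after_impact(v_normal, v_tangent, bounce, friction):
    """Return (rebound speed, sliding speed) after hitting the ground."""
    return (-bounce * v_normal, (1.0 - friction) * v_tangent)

print(after_impact(-4.0, 2.0, bounce=0.5, friction=0.25))  # (2.0, 1.5)
```

So Bounce = 0 kills the rebound entirely, and Friction = 1 stops the sliding immediately.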

Add transformpieces + dopimport so the simulated transforms drive the final geometry, which makes the simulation faster.

Constraints

Then I replaced my model and tried adjusting three different constraint values, hoping that each fragmentation would behave differently.

House Destruction

At first I didn’t want to use the wooden house I had made in Houdini, so I looked for models online, but every time I imported one, some of its faces broke. Perhaps the models had overlapping faces from when they were built in Maya, so almost none of them were usable. I then decided to simplify things and rebuild with the simplest possible model. Once all the required models were complete and correct, the fracture finally worked as designed.

So I added some detail by adjusting the Scatter Points and the Volume Resolution.

Add Rbdbulletsolver and add Groundplane

Rbdbulletsolver

The RBD Bullet Solver is a wrapper around a DOP network to simplify the running of Bullet simulations.

  • The first input is the render geometry. It will be used as simulation geometry if none is provided in the third input. 
  • The second input is the constraint geometry, used to instantiate dynamic constraint relationships between simulated pieces.
  • The third input is the proxy geometry, a simplified representation of the render geometry, better suited for fast simulations. It will be used as the simulation geometry when provided.
  • The fourth input provides the collision geometry. Packed geometry is recognized and can be configured with the RBD Configure SOP to drive their behavior individually. Some pieces could be animated, some pieces set to deform, while others may be set to use spheres as geometry representation.

Make the rest of the house stay still (constraints)

Weaken the primary strength to make the destruction smoother

Group — select the faces to keep unaffected (click the arrow, select the faces, then press Enter)

Attribcreate — The attribute can be a float, integer, vector, or string type. If the local variable name is not specified, the attribute name (all in upper case) will be used. After adding a user attribute, the local variable can be used anywhere in operations where local variables are allowed.

Wooden House Destruction

Make the wood break

Q&A

  1. I want the destruction to vary by material. I have divided the geometry into groups, but in the Blast node the group I selected is not displayed correctly.
  2. With some models found online, there is no problem when they are displayed, but as soon as the rbdmaterialfracture node runs there are strange artifacts and broken surfaces. How can this be corrected?
Posted in Houdini & Lighting

Week1 Group building and theme setting

At the end of last semester, Kay and I approached two students we know well and asked if they wanted to work with us. At that time we hadn’t decided on a topic and had no specific plan, but we decided to settle the team members first, build the group, and make the plan later.

We had two conversations during this week.

One was to get familiar with each other and decide on a theme.
The other was to ask and answer some technical questions.

Part1 Conversation 1

  1. Project: We first asked each other about the courses involved and introduced our professional strengths, and finally settled on the content: a 3D game with animation and interaction.
  2. Information exchange: Because Kay and I know little about games, we showed them some of our favourite game styles and genres. They also provided some beautiful game artwork, CG scenes and so on.
  3. Personal ability display: We then showed our undergraduate showreels to the students majoring in games, so they could understand the style we are good at and the skills we have mastered, see what our game might realistically achieve, and hopefully find some inspiration. The game students said they prefer our style, a dark cartoon style.
  4. Reference and theme: We gathered references and paintings, hoping to get inspiration. I thought of my favourite director, Tim Burton, and remembered that Kay wanted to make an animation about Sweeney Todd, so I proposed that we continue her project. At present we have a set story background and general scenes.

Their previous projects:

Link:https://connect.unity.com/p/box-shooter-ver-1-1
Link:https://connect.unity.com/p/fetch-it

Todd Reference

Tim Burton

Part2 Conversation 2

It was more like a Q&A.

  • 3D animation: First-person or third-person perspective; what are the advantages and disadvantages of each?
  • Game: There is no real difference between the two modes; one shows the protagonist and one doesn’t, one has a wider field of view and one a narrower one. For our game we can combine the two and choose according to the specific animation.
  • Game: Is our game interface three-dimensional or two-dimensional?
  • 3D animation: Three-dimensional; the image will look much better.
  • 3D animation: What software do we use, Unreal or Unity?
  • Game: Both are OK, but we are more familiar with Unity. We could take this opportunity to learn Unreal, depending on the time available. When you start 3D modelling we will run a series of tests, and the results will be available then.
  • 3D animation: What are the requirements for the models: low polygon, or a face-count limit?
  • Game: There shouldn’t be too many restrictions. We will test it this week and give you the details; if you are afraid it won’t work, a high-performance computer should solve the problem.
  • 3D animation: A model’s look in Maya can’t be reproduced the same way in Unity, because the lighting on the materials doesn’t seem to transfer.
  • Game: We can ask the tutors, because we are not sure either, but we can also teach ourselves to match the effect and discuss it later. At minimum the interactive animations are no problem, because they are pre-rendered.
  • 3D animation: Should the running animation move forward, or loop in place?
  • Game: It should be a run-in-place animation; we will then control the character’s movement through the program.

Atmosphere

Part3 Game and Animation style reference

Because what I am good at and enjoy is Tim Burton’s style, I found some pictures. The game students also shared some of the stylization they hope to achieve.

In fact, we want to focus on the atmosphere of the image. Although it is an RPG, we don’t aim to make the final game fully playable. We hope the interface and the visual level of the game’s animation have a dark style. We don’t want to overcomplicate it; we just want to reference these dark games and show Tim Burton’s style as much as possible, because that is what we are good at and enjoy.

Part 4 Sketches

Part5 Task overview

Division table

Name    | Task                                                                       | Major
Kay     | Scene Design/ Modeling/ Texture/ Animation                                 | 3D Animation
Crystal | Character Design/ Modeling/ Texture/ Animation/ Mocap/ Lighting/ Rendering | 3D Animation
Yanis   | Animation Storyboard/ UI Design/ Game Level Design                         | Game Design
Samuel  | Test Unity with Animation/ Game Level Design                               | Game Design
Kamil   | Body and Facial Rigs/ Skin/ Weight                                         | 3D Animation
Yaqi    | Sound Design both in Animation and Game                                    | Film

Although the purpose of the collaboration project is to learn to cooperate and to improve along the way, our goal is still to finish this project. We know it will be difficult to make a fully playable game, but we hope to complete at least the 3D animation part: all the animation, character action and visual effect rendering.

Posted in Collaboration Unit

Week1 Review and Reflection

The road so far through 3D animation fundamentals, and my expectations for the advanced animation unit.

  • What have I learnt
  • What did I get
  • What I want to improve
  • My shortcomings
  • My challenge
  • Areas I want to enter

Looking back on last semester’s animation fundamentals course, I feel I was reviewing my previous skills while gaining new ones. Before this course I had never studied animation systematically; I knew of the twelve principles of animation but couldn’t use them. After updating my blog every week I have become very familiar with these principles, and I now pay attention to them consciously when I animate.

I think the best thing about this course is that it forms a complete system. Reviewing the eight assignments: bouncing ball, bouncing ball maze, tailed ball, walk cycle, stylized walk, body mechanics, phonemes and performance animation, it is a step-by-step progression. Besides, being free to choose the model, rather than having a fixed form dictated, gave me more room to play, such as adding scenes, textures and lights within the scope of my ability.

My harvest is that I have formed a good habit of finding and shooting reference. Every time I animate now, I spend time on preparation: first thinking about the animation I want to make, including the general actions, then finding reference or filming my own. This process is genuinely helpful, because it gets me imagining interesting, exaggerated actions in advance, so I add more detail once I actually start animating. Another important step is getting familiar with the model. Because I skipped this a few times before, I did a lot of repetitive, useless work; now I always check the rig and materials first. I am also slowly learning to revise my own animation. I used to resist this, because I didn’t want to change work I had spent a long time on, and every revision cost a lot of time. But after my tutor’s comments I could see where the problems were, and I corrected them. It is a change of mentality and attitude: accepting inadequacies and learning to fix them.

This course has also given me some understanding of many other 3D areas, such as using a render farm, which I had never really used before. Through remote software I can not only use some software freely, but also try fast rendering on the render farm. I have also developed the habit of building a showreel: taking every exercise seriously, making every piece as complete as possible, and then adding it to my reel.

Throughout the whole creation process, what I have been pursuing is making the animation more vivid, which is also my current weakness. Sometimes I pay too much attention to matching the reference and ignore some of what makes the animation interesting. But compared with my undergraduate works I have made progress, and I have begun to form a complete concept of a performance, from reference through blocking to final polish.

This is what I have experienced and gained so far. For the later courses, I hope I can model and texture alongside animating, because that is what I like and am good at. I had already touched many of the skills covered so far while making my own short films, but I am still looking forward to creature animation and inanimate-object animation, fields I have never touched. Additionally, I think my future focus should be on being an animator, but I want to make my own story animation, from the initial script to the final rendering and compositing. Many of my weaknesses will be exposed in the process, but making my own animation from beginning to end will itself be a success.

Posted in Advanced & Experimental 3D Computer Animation Techniques

Expectations of professionalism

We will have collaborative projects this semester, which is like laying the foundation for our future work and learning how to cooperate with our peers. The following points are the characteristics and rules I think we should pay attention to in collaboration.

I don’t have much previous collaboration experience, so I combined my own experience with information found online to put together the following.

Take notes

It’s important to be familiar with our working environment, including our partners. We can’t remember everything just by listening, so take notes at every meeting or discussion; this makes it easier to review later and keeps our thinking clear. We can write down other people’s strongest points to fill our own gaps, and notes also help broaden our thinking and promote cooperation. Ideas always need stimulation.

Communication

However, communication also carries risks. It’s human nature to assume the people in front of us already understand what we mean, but that may not be true, so it’s important to listen to feedback. It helps you see whether your message was understood as intended, and corrective action can be taken if not. The cost of poor communication is increased pressure and distrust between collaborators. Sometimes key information fails to get through, causing confusion, and it becomes hard to end the blame game. To avoid confusion and misunderstanding, we should communicate through the formal channels and follow the formal communication mode. Communicating by email or another formal channel keeps a record of everything, which will be helpful for reference if any dispute arises.

Be Friendly

We should not abuse anyone. Even when there is friction at work, we should keep a positive, objective attitude towards our partners and the work they have done. Respect each other and yourself: people in this industry have their own ideas and goals, and working together as partners is for better work and bigger goals, so mutual respect is the basis of cooperation. Don’t interrupt a partner; raise questions promptly when they finish speaking. If we are unclear about our tasks, we can turn to anyone for help, accept colleagues’ help with enthusiasm and gratitude, and lend a hand in time when they need help. In this way we can at least grow closer to our partners.

Timing

Communicate with partners frequently by message and email, and never hide problems; hiding them solves nothing. Once a problem appears, deal with it immediately, and don’t avoid it by not replying. Manage time well, don’t drag down the whole team’s progress or get used to being chased by teammates, and treat the collaboration as our own business. Sometimes actions and processes must happen in a sequence, so the next step can only be executed effectively once the previous one is complete; some steps depend on one member’s work before the others can proceed. The ultimate goal is that the work must be completed on time, not put off until the deadline.

Self-reflection

Give ourselves a summary: see how well we understand our roles, whether we have finished the part we are responsible for, and what needs improving or was done well. Mistakes happen from time to time, and anyone can make them; the wisest thing we can do is admit our own. Shirking responsibility damages not only our companions’ reputation but also our own. It’s always better to avoid arguing; admit our mistakes and try to remedy them.

Collaboration and teamwork

Teamwork comes first, individuals second. In a team, the team’s work always outweighs the individual’s, and conflicts between the two are inevitable. We can’t delay the whole team’s progress for personal reasons, because the team’s achievements are the result of every member’s efforts: the collective first, then the individual.

Trust and feedback

Since we can work together with these partners, we must trust each other’s ability and conduct. This will increase mutual understanding and improve the quality of cooperation. It’s important to receive feedback from others, but it’s also important to give feedback. Not only do we need feedback, but our peers also need it to understand their performance. Our feedback will help others make progress in their work, and it will also help us make progress in our own work. Providing constructive feedback to our peers at the right time and in the right state will help them make progress and ultimately make the cooperation successful.

Be positive

When people hear opinions different from their own, the instinctive reaction is resistance. Driven by that emotion, it is hard to analyse the other person’s point calmly or to really listen, and this often shows in seminars or when hearing criticism. Keep peace of mind; don’t be jealous, don’t complain. To build rapport with our partners we need to control our emotions, manage our mentality, stay calm, not envy other people’s talents, and not complain about other people’s shortcomings.

Take responsibility

When we are entrusted with a job, we should accept it as our own responsibility instead of passing it to others. Procrastination only creates barriers between us and our peers, and may even make them avoid communicating with us. When we work in a team, our responsibility is to work with our teammates and build positive working relationships; as members we need to show concern for the others and be ready to achieve the team’s goals. Whenever we have the chance, we shouldn’t hesitate to ask colleagues about our performance and whether they think we need to improve.

Posted in Collaboration Unit

Houdini Tutorial Week 2

The task of the second week is to start getting acquainted with the particle system and produce some preview videos. I had already touched particles in Maya, so I knew about wind fields, gravity and particle attributes, and the content of this lesson was not very difficult to understand. The difficulty in this part is understanding every node, including adding points to a surface and substituting spheres for particles, which I have learned and will keep improving.

I have to say this part is really cool, but it also involves a lot of simulation and detail. It requires me to understand the function and usage of each POP node. I then made an additional video based on what I learned. Of course there were many mistakes along the way; I think learning software is about facing mistakes and exercising the ability to solve problems.

1 . Manipulate attributes on points, primitives, vertices, packed geo

(1) Update view mode

  • Auto Update — updates immediately when a value changes
  • On Mouse Up — updates when the mouse is released
  • Manual — the most efficient mode (perform several steps, then refresh manually)

(2) Shortcut

  • Null — a node that does nothing, like an empty placeholder
  • Press A and drag in the centre to rearrange nodes / press L to lay them out neatly
  • Press J and left-click across nodes to connect them all in a chain, and press Y to cut connections
  • Shift — add to the selection

(3) Animation panel

  • Play — up arrow; play backward — down arrow
  • Previous frame — left arrow; next frame — right arrow
  • First frame — Cmd+left arrow; last frame — Cmd+right arrow (these also jump between key points)

(4) attributes on points, primitives, vertices, packed geo

(5) UV

2 . Vop and Vex

  1. First, create a box, select the node and press I to enter
  2. Create an attribute wrangle node and connect it to the box node
  3. Select the attribute wrangle node, press P to open the parameter panel, and enter the vex code in the red area
  4. Click the red area and press Alt + E to open the vex code editor

Attribvop (no code)

Vex

  • Int : integer — mostly used to represent the sequence number of points, lines and surfaces, as well as transformation
  • Float : floating-point number (can be understood as a decimal) — used to represent floating-point scalar values
  • Vector2 : two dimensional vector — mostly used to represent texture coordinates
  • Vector : three dimensional vector — mostly used to represent position, direction, normal, colour RGB
  • Vector4 : four dimensional vector — mostly used to represent homogeneous coordinate position and colour RGBA with transparent channel
  • Matrix2 : two dimensional matrix — mostly used to represent 2D rotation matrix
  • Matrix3 : three-dimensional matrix — mostly used to represent 3D rotation matrix, 2D transformation (displacement, rotation, scaling) matrix
  • Matrix : four-dimensional matrix — mostly used to represent 3D transformation matrix
  • String : a character string, e.g. “this is hello world”
  • Array : ordered data combination
  • Struct : structure
  • BSDF : bidirectional scattering distribution function

Noise

Bind also serves the function of input and output

  1. Bind can both read and write attributes; Import Attribute can only read them.
  2. Bind operates on the object bound to the current context, which is the first input of the VOP node; Import Attribute can read attributes from any input.

3 . Dop with particle simulations

N — change the mode

Timeshift — for freeze frame

popnet

A polygon object cannot be moved directly by pressing Enter; that switches to a mode for selecting points or edges and adjusting them by pulling.

So the polygon needs a Pack node added so that it becomes a single entity.

To convert the geometry back to polygons — Unpack.

popnet

  • popobject — the container full of particles
  • popsolver — like a toolbox holding the physics logic
  • popsource — emits the particles that get solved

4 . Adding force to the simulation

Gravity (down)

Pop wind / Pop force (add force)

Scatter — add point on the surface (change the force count)

render flipbook

change the life expectancy

5 . Simple magical effect / disappearing object

Pop force — change the amplitude (particles flow away)

Pop drag — change the air resistance

Point velocity — increase the velocity

Delete — keep only part of the particle effect

Pop source — increase the birth rate / change the life expectancy / change the life variance / change the interpolate source (back & forth) / change the interpolation method
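
The POP Source parameters above can be sketched as a toy model (illustrative only, with made-up numbers, not Houdini's internals): Birth Rate particles are born each frame, and each one lives for Life Expectancy plus or minus a random amount up to Life Variance.

```python
import random

# Toy model of POP Source emission: birth_rate particles per frame,
# each living life_expectancy +/- a random offset up to life_variance.

def emit(frames, birth_rate, life_expectancy, life_variance, seed=0):
    rng = random.Random(seed)
    particles = []  # list of (birth_frame, lifespan)
    for frame in range(frames):
        for _ in range(birth_rate):
            life = life_expectancy + rng.uniform(-life_variance, life_variance)
            particles.append((frame, life))
    return particles

def alive_at(particles, frame):
    """Count particles born on or before `frame` that have not yet died."""
    return sum(1 for birth, life in particles if birth <= frame < birth + life)

ps = emit(frames=10, birth_rate=5, life_expectancy=3.0, life_variance=1.0)
print(alive_at(ps, 9))  # settles around birth_rate * life_expectancy
```

With variance at zero the count is exact; with variance the population fluctuates around the same average, which is why raising either the birth rate or the life expectancy makes the effect denser.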

Test 01 & 02

Add a sphere to replace each particle (in the display the particles look like balls that are not small enough), and make the sphere small enough to read like a particle.

6 . Preview simulations

Render Flipbook — Particle

Q&A

  1. Why do the particles in my scene look larger and blurrier than in the tutorial?
  2. What is the difference between the Output node and Bind Export?
  3. I can keep up with the tutorial now, but on my own I may still lack this way of thinking. Does training it depend on finding different cases to practise?
  4. Do some nodes need to be specially memorized, or do they just become familiar with a lot of practice?
  5. How can I improve the render flipbook quality?

Extra work (0201)

Posted in Houdini & Lighting

Lighting Tutorial Week1 : Introduction to Lighting in Visual Effect

Part1 Notes

The role of lighting TD in VFX

Responsibilities

  • Assemble all the CG Assets in the shot from the upstream departments
  • Design and implement lighting setup in shot to meet supervisor’s art direction
  • Create production quality CGI with optimal render settings in respect to resources available
  • Provide compositing team with the CG render elements required to deliver the shot

1 . Skill requirements

  • Understanding real-world/ studio lighting and photography
  • Knowledge in Physically-Based Rendering
  • Problem-solving ( from the Pipeline point of view )
  • Scripting ability ( For example: Python )

Understanding real-world lighting and photography

Help the artist to understand the motivation of the lighting design on-set and analyze the lighting methodically in order to replicate the look in CG.

  1. Three-Point Lighting
  2. Lighting Ratio
  3. Quality of Light Source
  4. Color Temperature
  5. Creative Setup

1.1 Three-Point Lighting

  • Key light: the main source of light in the shot.
  • Fill light: The light fills into the key light’s shadow area
  • Rim light: For separating the subject from the background

1.2 Lighting Ratios

  • The stop value is the measurement unit of exposure.
    +1 stop : doubling the amount of light
    -1 stop : halving the amount of light
  • The ratio of the stop values of the subject’s Key side to its Fill side.
  • Used for describing the lighting contrast.
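
The arithmetic behind stops is simple enough to show directly: each stop doubles or halves the light, so a difference of n stops is a factor of 2**n. The helper name below is my own.

```python
# Each stop doubles or halves the amount of light, so a difference of
# n stops corresponds to a 2**n ratio of light.

def stops_to_ratio(stop_difference):
    return 2.0 ** stop_difference

print(stops_to_ratio(1))   # +1 stop: 2.0 (twice the light)
print(stops_to_ratio(-1))  # -1 stop: 0.5 (half the light)
print(stops_to_ratio(3))   # key 3 stops over fill: an 8:1 ratio, a low-key look
```

So a subtle key-to-fill difference of one stop gives a 2:1 ratio (high-key), while three stops gives 8:1 (low-key).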

High-Key lighting

  • The stop difference between the subject’s Key side and Fill side is subtle.
  • Low contrast ratio.

Low-Key lighting

  • The stop difference between the subject’s Key side and Fill side is distinctive
  • High contrast ratio

Lighting Modifiers

  • Reflector is used for bouncing light into the shadow side to lower the contrast
  • Flag is used to block the unwanted light on the subject to increase the contrast

1.3 Quality of Light Source

Hard light:

  • Sharp transition between the Light and Shadow on the subject.
  • Created by a single point of light which is focused on the subject. For example: Spot light.

Soft light:

  • Smooth transition between the Light and Shadow on the subject
  • Created by a large light source, a light diffuser, or bounce light

1.4 Color Temperature

  • Color contrast between different light sources helps to shape the subject.
  • Interior vs Exterior
  • Time in the day

Artificial lights vs Natural lights: Indicate whether it is an interior light source or from outdoors.

Natural light variation throughout the day : To suggest the time of day

1.5 Creative Lighting

Gobo lighting: placing an object between the light source and the subject in order to project its pattern / colour onto the subject.

Identifying techniques from the plate / reference and re-creating the lighting in CG

Three-Point Lighting ——> Position and direction of the different light sources
High-key and Low-key Lighting ——> Contrast ratio of the Key and Fill on the subject
Hard and Soft Lighting ——> Transition from the Light area to the Shadow area
Colour Temperature ——> Defines the time and place in which the shot takes place
Creative Lighting ——> Adds complexity to the lighting of the image

2 . Knowledge in Physically-Based Rendering

  1. Simulating light
  2. Behaviors of light on different material
  3. Method to Generate image
  4. Light Sources

Part2 Extension

Basic three point lighting technique

(1) Key light

Light source: it is usually the main light source in the scene, and we use it as the reference for colour temperature and intensity when setting the other lights. If the key light’s colour temperature is 5000K, the other lights are chosen with roughly the same colour temperature and a lower intensity.

Location: the larger the angle between the key light and the camera, the stronger the subject’s sense of volume and the more dramatic the image, so we have to consider the feeling we want.

Height: lighting from below creates an unnatural, frightening effect. Raising the light 45 degrees above the subject gives a good transition on the face and a good sense of form.

(2) Fill light

Function: the fill light balances the key light and illuminates the shadow it leaves.

Location: usually we place it in a position complementary to the key light, which balances the shadow the key leaves. Moving it elsewhere makes shadows appear in other areas.

Intensity: when we discuss the relationship between the fill light and the key light, we are really discussing the lighting ratio. When adjusting the fill’s intensity, consider what you want to express and how you want it to feel, and keep balancing key against fill until the effect looks right.

(3) Rim light

Function: to separate the subject from the background, especially when the subject has dark hair, skin or clothes and the background is also dark, so they easily blend together.

Location: place it behind the subject, well away from it, and be careful not to let the light or its stand appear in the frame. When moving it to the side, be careful not to light the subject’s face.

Intensity: the stronger the rim light, the stronger the sense of separation, but it looks unnatural; a low-intensity rim light feels more natural.

(4) Background light

Function: the background light illuminates the background; we use it to control the balance between the background and the subject.

Intensity: if the background light is too strong, the background becomes very bright and pulls attention away from the subject; with no background light, the background lacks vitality.

How does light play the role of “Narration” in film visual effect

Light type

1 . Direct light (hard light)

When striking an object it produces clear shadows and obvious tonal transitions, e.g. sunlight, moonlight, lightning. It is often used as the key light, rim light or a local modifying light (eye light), and is good for expressing the subject’s three-dimensionality, contour and surface texture.

2 . Scattered light (soft light)

It only raises the overall brightness of the lit object; the light is received evenly, shadows are not obvious, and there is no sharp light-to-shadow transition. Examples: skylight, ambient scattered light, diffuse reflectors.

Direction of light

1 . Front light

When the light source is near camera height, on the same horizontal plane, and casts in the same direction as the camera, the light is frontal, also known as front light. Front lighting lights the object evenly and eliminates unwanted shadows; because the subject is lit uniformly, front lighting with scattered light gives the subject a soft, warm look. But front light is usually not good for conveying the subject’s three-dimensionality, texture or sense of space, and it makes the picture look flat, without ups and downs.

2. Side frontlight

Light whose projection direction forms a 45-degree angle with the camera’s shooting direction is side front light. Under it, the subject shows a light-to-dark shadow gradation that enhances the sense of volume. It also casts shadows that, handled well, can enrich the composition. It is the standard lighting setup for portraits, using direct light from the side as the subject’s main light.

3 . Side light

When the light source is at 90 degrees to the camera direction, the light is side light. With full side lighting, the contrast between light and shade is strong, but delicate tonal transitions are lacking.

4 . Side backlight

The light source sits behind and to one side of the subject, about 135 degrees from the camera axis. Side backlight outlines the contour well; the larger dark area facing the camera creates a distinctive atmosphere, and it also helps convey layered scenery and atmospheric perspective.

5 . Backlight

When the projection direction of the light source is 180 degrees from the camera direction, the light is backlight. It delineates the outline of the scene and separates the subject from the background. It suits multi-layered scenes, atmosphere and depth of space, and can also produce silhouette and half-silhouette effects on scenery or people.

6 . Top light

Light projected from above the subject is top light. It creates strong contrast between light and dark; in a close-up of a person, the eye sockets sink and the cheekbones protrude. It is often used to vilify a character or evoke an unsettling feeling.

7 . Foot light

Light projected from below the subject is foot light, as from an oil lamp, fire or candle. It is used to render a special atmosphere, such as terror or menace, or to uglify a character. Foot light can also serve as modifying light for the eyes, clothes or hair.

8 . Modified light

Light that touches up the subject is modified light; it can locally modify the eyes, hair, face or clothing. It improves the brightness contrast of the picture, enriches the tonal levels, and strengthens the artistic expression of the modelling. Small lamps are generally used for this; in 3D, spotlights or area lights are suggested. Light spots, light bars and similar effects can help create a certain atmosphere and complete the artistic conception.

9 . Eye light

Light that produces a reflection in the eyeball is eye light. Usually, when the angle between the main and auxiliary lights and the camera is right, an eye-light effect appears on its own. If the main and auxiliary lights cannot produce it, a small lamp near the camera can supplement the eye light. There should normally be only one highlight in each eye; two highlights look distracting.
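The first five directions above (front light through backlight) differ only in the angle between the light’s projection direction and the camera axis. As a rough summary, here is a plain-Python sketch; the function name and the boundary angles between categories are my own choices for illustration, not any film-industry or Houdini convention:

```python
def light_direction(angle_deg):
    """Classify horizontal lighting by the angle (degrees) between the
    light's projection direction and the camera's shooting direction.
    Boundary values are illustrative midpoints between the textbook
    angles 0, 45, 90, 135 and 180 degrees."""
    angle = abs(angle_deg) % 360
    if angle > 180:
        # directions are symmetric about the camera axis
        angle = 360 - angle
    if angle < 22.5:
        return "front light"
    elif angle < 67.5:
        return "side front light"
    elif angle < 112.5:
        return "side light"
    elif angle < 157.5:
        return "side backlight"
    else:
        return "backlight"
```

For example, `light_direction(45)` returns `"side front light"`, and a light 135 degrees to either side of the camera axis classifies as side backlight.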

Part3 Exercise

Posted in Houdini & Lighting

Houdini Tutorial Week 1

This is my first time working with Houdini; in fact, I have never used any software other than Maya. Learning software is not the ultimate goal, though: the goal is to use the software and its techniques to create artwork. Houdini will be difficult to learn, even to get familiar with, but the process of learning software is really an exercise in patience and perseverance. Since learning Maya was also a self-taught process, I am used to treating this blog as a kind of note-taking, so that I can read and review it later.

In the first week we mainly got familiar with the software and its interface and commands, including the node model. I built a wooden house and some rocks. Thanks to my Maya foundation, the approach to building a house was roughly what I expected, but this way of creating a house with nodes is very interesting and logical.

1 . Quick Introduction

Product Version Compare

https://www.sidefx.com/products/compare/

Renderers: Mantra / RenderMan / Arnold / Mental Ray / V-Ray

Tip: get in the habit of holding Space when navigating the viewport

2. Interface and Preference

(1) Viewport+Network

Ctrl W — show only the network view

  • press Esc, then drag with the middle / left / right mouse button — translate / rotate / scale the viewport
  • press S to enter selection mode and pick the subject
  • Enter — transform handle (translate+rotate+scale; separately T / R / S)

Render View — Render

Viewport

(2) Create Objects

Tab/Right click and find the object name

or Ctrl + Left click

(3) Hide and Display

(4) Parameters

(5) Preference

show or hide — P

Build

Save the new preference

(6) Set Up New Project

(7) Save files

3. Context

  • image — compositing
  • channel — animation
  • materials — shaders
  • out — rendering
  • stage — USD
  • tasks — PDG pipeline

We mostly work in the object (obj) context.

(2) a new context — Geometry

double click the object

(3) Animation

key frame — Alt+left click

after translating, click the key icon to apply the key animation

graph editor / Shift + left click the translate parameter

Noise

left click the translate — motion fx

(4) Render

shift + left click — Region rendering

select object render

Occlusion relation

4. Name

  • SOP : Geometry : Surface operators
  • Obj : Object
  • DOP : Dynamics operators
  • ROP : Render operators
  • VOP : Vex operators (similar to MEL for Maya)

5. Transform

(1) press Enter, then W / E / R — translate / rotate / scale (Esc — exit the handle)

(2) reset a value to its default after changing it:

right click the translate/rotate/scale parameter and choose Revert to Defaults, or Ctrl + middle click

6. Import

(1) F / Space+G — frame the selected model; H — home (reorient) the camera

(2) Alembic format (.abc) — CG interchange format, e.g. for rendering with Arnold

Bgeo format (.bgeo.sc) — the best format for saving Houdini geometry

(3) clicking an object shows only the selected one and hides the rest

to show two at once, add a Merge node

(4) adjust Uniform Scale to bring the object to a 1:1 scale

(5) Test Geometry — sample models and animations provided in Houdini

(6) middle-click the object’s node, or click the “i” icon

this shows information about the object, such as its height

(7) change the end frame & the real-time playback toggle

(8) Autoupdate and Manual

When working on effects, every data change can trigger a long recook. To avoid wasting time, we can choose whether the viewport updates automatically or manually after each change.

(9) Auto arrange nodes — L

(10) * = All Files (file-browser filter)

(11) Convert node — converts packed objects into polygons

7. Export

(1) export nodes — ROP Geometry / File Cache

(2) path

clicking the Output File field expands it into the full, detailed path

  • $HIP = the saved file’s directory
  • geo = folder name
  • $OS = name of the exporting node
  • $F4 = frame number padded to 4 digits, e.g. 0001
  • .bgeo.sc = Houdini output format (e.g. .bgeo.sc can be changed to .obj)
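To see how these tokens combine into one path, here is a plain-Python sketch that mimics the expansion of `$HIP/geo/$OS.$F4.bgeo.sc`. Houdini expands these variables itself; the function name and defaults below are my own, for illustration only:

```python
import posixpath

def expand_output_path(hip_dir, node_name, frame, ext=".bgeo.sc"):
    """Mimic Houdini's expansion of $HIP/geo/$OS.$F4<ext>.
    hip_dir stands in for $HIP, node_name for $OS."""
    # $F4 pads the frame number to four digits: 1 -> "0001"
    padded = f"{frame:04d}"
    return posixpath.join(hip_dir, "geo", f"{node_name}.{padded}{ext}")
```

For example, `expand_output_path("/job/shot01", "rop_geometry1", 1)` gives `/job/shot01/geo/rop_geometry1.0001.bgeo.sc`.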

8. Scene

(1) D — display options (background, lighting and some other settings)

(2) U and I — jump up and down

(3) Attribnoise

(4) Save the file

9. Exercise

(1) Rocks

change it from primitive to polygon

Display form

change the position attribute of the noise

add the null

and change the range values

outcome
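The rock setup above works by displacing the point position attribute with noise. The idea can be illustrated in plain Python; this is a sine-based stand-in for real noise, and the function and parameter names are mine, not the Attribnoise node’s:

```python
import math

def displace_points(points, amplitude=0.2, frequency=3.0):
    """Displace each (x, y, z) point vertically by a simple
    sine-based pseudo-noise, analogous to noising the P attribute.
    amplitude controls displacement size, frequency its detail."""
    out = []
    for x, y, z in points:
        n = math.sin(frequency * x) * math.cos(frequency * z)
        out.append((x, y + amplitude * n, z))
    return out
```

Feeding in the points of a flat plane returns a bumpy surface while leaving the x and z coordinates untouched, which is roughly what randomizing the position attribute on the rock mesh does.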

(2) Wooden house

(1) put the box on the floor

copy the Size parameter and paste it into Center (Paste Relative References)

select the points and create transform node

Alt+ Drag = Duplicate

create boolean

PolyExtrude — Thickness (remember to select Output Back)

Reverse — flips the normals

the roof — extrude twice

Delete and non-delete

Copy and Transform

Outcome

Q&A

  1. What is the difference between Output Front and Output Back?
  2. What is the principle of the Delete node? Why can it only remove the roof, walls and foundation as a whole? Why will it not delete half of the house, and what should I do if I want to delete half?
  3. There are two PolyExtrude steps when building the roof, and the second one adds the option of selecting the extrudeSide group; what is the difference between the two?
  4. If we want to make special effects, is it necessary to model in Houdini, or is this just a way to become familiar with SOPs? I find modeling in Houdini time-consuming.

Posted in Houdini & Lighting

3D Animation Fundamentals Term1 Showreel

Posted in 3D Animation Fundamentals