Week10 Composition and Conclusion

Part1 Composition with Game and Music

This week we started the final polishing work.

Game students designed the UI elements and interface for our game.

UI elements

Start menu design

Esc menu design

Main interface of the game

In addition, they also tried to tweak the lights in Unity to get an effect close to the Maya render.

I rendered a vertical poster, and Yannis added some text to it for me.

My roommate, an illustration major, was very interested in our project and drew some illustrations for us.

Illustration Poster from Queeni ( Ziqing )

The sound-effects student has collected all the music and sound effects she needs, so she has started the final edit.

Part2 Conclusion

I summed up the collaboration project and all of my personal work:

  • Character 2D design (2 characters)
  • Character 3D models (2 characters)
  • UV unwrapping and texture painting (2 characters)
  • Adjusting some materials in the 3 scenes (e.g. the glass and chrome materials; I changed Phong to Lambert, since it was too heavy for Kay to adjust)
  • Adjusting skin weights for the mocap rig (1 character)
  • Multiple mocap retargets (1 character)
  • Animation (12 Maya animations + 5 Unity animations)
  • All lighting in 3 scenes and 20 animations
  • All rendering (20 animations)

My personal showreel of the collaboration project

Project Animation Presentation (cut by Kay)

Project Unity Presentation (cut by Crystal)

Collaboration Project Full preview (cut by Yannis)

Game Unity link: https://yhzhuan1.itch.io/sweeney-todd-animated

Interactive demonstration from the third-person perspective (cut by Samuel)

Part3 Reflection

1 . About my team and teammates

This collaboration is the first time I have worked on a 3D role-playing game. First of all, I would like to thank all my teammates, Kay, Yannis, Samuel and Aggi, for their trust in me and for their encouragement and help over the past three months. What moved me is that even though we communicated entirely online, through WeChat, our cooperation was never hindered. We never once met in person, yet the project kept moving forward the whole time. The team members respect and understand each other very much; whenever there was a problem we contacted and replied to each other promptly, and we never let the time difference slow the project down. Whenever someone made progress, they shared it in the group right away. In fact, I learned a lot about games and sound design from my teammates, such as Unity and recording equipment. They are partners who can also pass on knowledge, because each of them has their own area of expertise.

The overall process of this collaboration was relatively smooth and satisfying, although the project stalled for a period in the middle because everyone had their own coursework to complete. I was very anxious at the time, but after a good conversation we decided to help each other catch up. For example, Kay was busy with other courses, so I completed two animations for her, and I did all the lighting and rendering myself, which both kept the look consistent and saved her time. I used to prefer doing everything on my own, but on a collaborative project we need to help each other a great deal: none of us is omnipotent, and a skill one of us lacks a teammate may have, so helping each other saves a lot of time.

The game students were also very busy, but they kept trying a lot of things. For a while I was constantly pushing them, and then I reflected that I couldn't be too strict with them because they were doing their best. A team needs mutual encouragement and understanding. Fortunately, they also finished part of the game content. Thanks again to all my teammates; the past three months have been full, and I'm happy and satisfied with our collaboration project.

2 . About myself

Overall, I wrote a small summary every week (in my weekly blog), but generally speaking the biggest gain is that I learned a lot of skills and became much more familiar with the entire 3D animation pipeline. The process included both things I am good at and things I am not; either way, I spent the energy and time to get hands-on with them and did my best to complete them, which was a real challenge for me. The main things I learned were rigging techniques and methods for painting textures, plus lighting and using the render farm. I also got a lot of practice with things I had rarely touched before; after all, I hadn't made such a complete piece since my graduation project. Now I no longer shy away from keyframe animation or rigging, which also gives me more confidence for my own animation projects.

The above is about animation, but I also want to mention a change in my mindset. In fact, every time our group's progress slowed a little, I felt anxious. However, my teammates would encourage me or help me think of other approaches. With their help I am no longer so restless when I run into problems; I try to find a way through the difficulty and just need a little time to accept the situation. I also became much calmer when hitting technical problems, including Maya crashing or other sudden failures.

3 . The way to the future

As for my 3D work, if I have the chance to keep pushing this forward in the future, I hope I can improve my animations. To be honest, I rushed to make these animations in two weeks; I did my best while guaranteeing the quantity, but they are not my best work, and I hope to refine some details. I also think I could spend more time learning about materials and then improve the fidelity of the image, because the overall atmosphere of the current renders is good, but there is not much detail.

Although the project was barely completed before the deadline, it still didn't reach the visual quality I expected. In fact, our group feels that if we were given more time we would want to keep pushing this project, because we all like it very much. One scene has now been implemented in Unity, and there are two more we hope to finish in our spare time. On the sound side, the sound-effects student said she hopes to add more footsteps for a more game-like feel. What matters to me is that everyone is working hard to make this project better; everyone has given a lot of enthusiasm, time and energy.

Posted in Collaboration Unit

Week9 Rendering and Toning

Part1 Rendering

The most important step is setting the project; that way, all of the scene's texture maps are recognized. Every time I animate, to save time and keep the computer from getting stuck, I work without textures, and at the end I merge everything into one project folder so the maps resolve.
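The project-relative lookup that makes this work can be imitated with a small script. This is only a sketch of the idea, assuming a Maya-style project layout with a `sourceimages` folder; the fallback order is my simplification, not Maya's actual resolution algorithm:

```python
from pathlib import Path

def resolve_texture(project_dir, tex_path):
    """Simplified project-relative texture lookup.

    Try the path as given, then relative to the project root,
    then the bare filename under the project's sourceimages folder.
    Returns the first existing candidate, or None.
    """
    p = Path(tex_path)
    candidates = [
        p,                                            # absolute / as-authored
        Path(project_dir) / p,                        # project-relative
        Path(project_dir) / "sourceimages" / p.name,  # merged folder fallback
    ]
    for c in candidates:
        if c.exists():
            return c
    return None
```

This illustrates why merging everything into one project folder at the end makes stale absolute paths (e.g. from another machine) resolve again by filename.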

This is the animation I rendered about 10 days ago. I ran into some problems using the school render farm. One is that it sometimes skips a frame without rendering it: for example, when I rendered animation 12, frame 34 was missing, which meant I couldn't import a complete image sequence into the Premiere edit. At first I didn't notice the missing frame; the import kept failing, and I even thought the naming of the exported sequence was wrong, until I finally found the single missing frame.

Sometimes a few frames also render with incomplete textures, which again causes frame skips. I generally choose to re-render that frame or duplicate the frame before the lost one, because at playback speed the substitution isn't visible. This method isn't ideal, but it is very effective for saving time.
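To catch skipped frames before editing, a short script can scan the render output folder for gaps in the frame numbering. This is a hypothetical helper, assuming Maya-style names like `shot.0034.exr`; the filename pattern is my assumption:

```python
import re
from pathlib import Path

def missing_frames(folder, pattern=r".*\.(\d+)\.(?:exr|png|jpg)$"):
    """Return frame numbers absent from a rendered image sequence.

    The regex captures the frame number before the extension.
    """
    frames = sorted(
        int(m.group(1))
        for p in Path(folder).iterdir()
        if (m := re.match(pattern, p.name))
    )
    if not frames:
        return []
    present = set(frames)
    # Any number between the first and last frame that has no file is a gap.
    return [f for f in range(frames[0], frames[-1] + 1) if f not in present]
```

Running this on each shot folder right after the farm finishes would have flagged the missing frame 34 immediately, before the Premiere import failed.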

Part2 Toning

Toning the atmosphere image

In fact, the rendering wasn't all done this week; every time we finished an animation, we rendered it, and there are rendered animations in earlier blog posts too. I usually pick a favourite shot to grade first: it lays the foundation for the colour grading of the later animation clips, and also provides more poster material.

Colour grading is actually my favourite part, because it's the last step, and it's very satisfying to watch the images gain atmosphere. The process is really a test of my overall feel for the image; I love shots with a strong sense of atmosphere.

Part3 Unity

Although I had already made walking and running animations with different postures and speeds, the game students' feedback was that combining our animations would require them to write more code; it would be more convenient for them to use plug-ins directly, so they sent a test video.

At the beginning I gave them a rigged model, and they said it would be more convenient without the rig. I then went through all the materials to make sure there were only Lamberts and no Arnold materials, because Unity doesn't recognize Arnold shaders.
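That clean-up is essentially an audit pass over the scene's shaders. The sketch below is a plain-Python illustration of the bookkeeping, not Maya code; the list of Arnold node types is illustrative, and the function name is my own:

```python
# Renderer-specific shader types that Unity's FBX import cannot convert
# (illustrative subset of Arnold node types).
ARNOLD_TYPES = {"aiStandardSurface", "aiToon", "aiMixShader", "aiStandardHair"}

def shaders_to_replace(assignments):
    """Given {shader_name: shader_type}, list the shaders that should be
    rebuilt as plain Lamberts before exporting the model to Unity."""
    return sorted(name for name, typ in assignments.items()
                  if typ in ARNOLD_TYPES)
```

In Maya itself the same audit would be done over the scene's shading nodes; the point is simply to enumerate offenders before export rather than discover grey materials inside Unity.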

What they have to do next is the collisions between the character and the tables and chairs, as well as playing animations when props are touched. They asked me for a razor model, so I built one.

Part4 Conclusion

problems

In fact, I spent 4-5 hours every night sorting out the render files: confirming the various settings, including image quality, the sampling values, the noise-reduction settings and the camera angle. But what bothered me most is that the Maya icon kept disappearing from the render-farm machines' desktops. I asked Luke how to fix it at the time, but it kept failing, and to avoid the hassle I didn't ask again and just kept switching machines instead. I had to log out and back in many times to find a machine that would render. It was really troublesome and time-consuming!

But then I found that Emma had hit the same problem; she asked Luke and solved it, and she also showed me the fix.

So I learned to keep asking until a problem is solved, and this really saved a lot of time. Avoiding a problem only works for a while, not in the long run.

Posted in Collaboration Unit

Week8 Mocap Animation2 and Keying Animation2

Part1 Mocap Animation

Animation 6 — Standing Up

Mocap

From last time's experience, whenever the character is sitting there is a lot of interpenetration to clean up. First of all, I adjusted the arm positions so they didn't collide with each other.

Second is the position of the feet and arms relative to the floor. After placing the character in the scene, they clearly intersected the floor, so I deleted some keyframes and adjusted the positions. There were still some issues, though: the animation is a little long, and the adjustment process had its own problems.

Playblast

Render

Animation 7 — Shoved Reaction With Spin

Mocap

I didn't make many adjustments to this animation because I'm quite satisfied with it; I just added a blink. Mostly I adjusted the lighting: the scene is lit from above, and the character's hair is white and lacks detail, so to keep the top of the head from blowing out I added some area lights to illuminate the whole character.

Playblast

Render

Animation 8 — Threatening

Mocap

In this animation, I adjusted the position of the hips so he feels like he is leaning on something, with his weight on one leg.

Playblast

Render

Animation 9 — Piano Playing

Mocap

In the mocap animation the two hands crossed, so I adjusted the distance between the arms. Then I made his hands touch the piano, matching them to the piano and stool props. I also warmed up the colour of the piano slightly and added a little yellow to the character.

Playblast

Render

Animation 10 — Finding

Mocap

In this action, I added a surprised expression at the moment he finds the object, to make him look more astonished. Then he slowly picks up the bottle and examines it; the bottle is attached with a parent constraint.

Playblast

Render

Part2 Keying Animation

Animation — Spray Perfume

This is the last animation I did, and it isn't as good as the previous ones because I was rushed and tired. I mainly adjusted the finger movements. In my opinion, the timing is the hardest thing to control in this animation: the character's head has to follow the action of the hand, while the hand animation has pauses, and I need to nail every pause to keep the shot from looking chaotic.

I made him close his eyes and lift his head while smelling, to show that he is intoxicated and enjoying it.

Playblast

Render

Part3 Conclusion

I think that after these 16 animations, I know much more about placing actions in a scene and about interactions with props. I also got familiar with keyframe animation again, since I hadn't animated for a long time after finishing last semester's work. After these hours of practice, I can clearly feel that my efficiency and speed have improved.

Now when I animate, I also pay attention to some of the twelve animation principles I rarely touched before, such as timing and staging, because these are fully rendered animations with lights and backgrounds. Unlike my earlier work, I need the characters' actions to look comfortable and natural within the scene. I don't think these animations, done in a short time, are very polished, but the process did train my ability to animate quickly. I believe practice makes perfect: with continued practice on short animations like these, my feel for animation will become more and more accurate.

Posted in Collaboration Unit

Week7 Mocap Animation1 and Unity Animation

Part1 Mocap Animation

I spent most of my time adjusting the two arms, because the mocap model is closer to real human proportions, while my model has long arms and wide clothes. So when the motion is copied over, the arms interpenetrate. After baking the motion onto my model, I delete some useless keyframes and keep the rest, then mainly adjust the three controllers of the arm.

I don't know the exactly correct workflow for copying mocap onto another model and then cleaning up the motion. What I basically do is delete inappropriate keyframes: after baking, every frame has a key, which makes the animation very hard to adjust, so I delete the in-between frames, keep some extreme poses, and then modify and add keys. To be honest, I don't think this method is very good; it's troublesome to adjust, and there should be a better way, but I didn't have much time then, because I needed to finish 12 animations in two weeks.

Animation 1 — Arm Stretching

Mocap

I had already made some changes to this animation during testing, so I didn't need to change many actions. I deleted the keyframes where the arms overlapped, then re-keyed and adjusted the curves.

In this scene, I only added an area light as fill light for the character.

Playblast

Render

Animation 2 — Drunk Walking Turn

Mocap

The highlight of this animation is the slow blink I added, which fits the feeling of being drunk. I doubled the number of frames in the eye-closing, which better matches the delirium of drunkenness.

I also parent-constrained the wine bottle to the hand controller.

Playblast

Render

Animation 3 — Shoulder Rubbing

Mocap

In this animation, I mainly added some bend to the fingers; completely straight fingers would look a little stiff. And since this movement is about stretching muscles and bones, the body parts should feel relatively relaxed.

I gave the character an area light, and added three red point lights to the oven on the left to suggest a fire. The overall atmosphere is still warm yellow, with the stove's red light relatively intense, illuminating half of the character's face and creating a sense of terror.

Playblast

Render

Animation 4 — Standing Cover To Cover

Mocap

This animation is of the character standing at the window, peeping. To be honest, I only adjusted where he stands and the direction of his eyes; I didn't change the motion much, because the action is very short. I also adjusted his expression, furrowing the brow tightly, with the focus of the eyes showing his nervous state.

Playblast

Render

Animation 5 — Sitting Crying

Mocap

I adjusted a lot of the arm movement in this animation. You can see from the mocap that there is a lot of interpenetration, especially in the sitting posture: this character's hands are very big and his arms and legs very long, so when he sits down the arms are naturally quite constrained.

What's more, it was originally a sitting-and-laughing animation; I changed it into crying and animated his mouth. To make him look sadder, I spread the facial features apart a little, and his eyes always look down, giving a sense of dejection.

Playblast

Render

Part2 Unity Animation

Different styles of walking and running, used for the player character's movement in Unity.

Standard walk

Trust walk

Orc Walk

Swagger walk

Running

These were originally meant to go into Unity's animation loop, but the game students thought it faster to use plug-ins they were already familiar with: relatively speaking, it was more efficient for them to spend their time debugging in Unity, which they are good at, than studying Maya animation. So these animations went unused, but I still think making them was good practice, whether or not they ended up useful.

Part3 Conclusion

This collaboration is my first formal use of motion capture in an animation since I first encountered mocap. I started learning the technology this semester and found it very convenient. The whole process of matching the skeleton to the controllers was difficult, mainly because I didn't understand character rigging well enough, but after more than 10 attempts I finally grasped some principles of the skeleton stage through my own effort. For example, when creating bones you need left-right symmetry, and the naming of bones matters; most importantly, you should use the Outliner to find the right bone and match it. I also gained experience in organizing and naming the contents of the Outliner. Before, I only knew the surface meaning of rigging; I did it without deep understanding, more like following a procedure or a habit. Now when rigging, I think about the structure of the human body, the parent-child relationships and the IK/FK relationships.

In addition, I am more proficient at adjusting mocap animation now. Although my method is not ideal, I at least know how to simplify the motion; it's like doing blocking exercises, knowing how to give up unnecessary small movements and extract the important movements and poses.

Posted in Collaboration Unit

Week9 Performance Animation: Spline Pass

This week, I mainly made changes and improvements based on Luke's suggestions from last week. There are two main parts: the first is the amplitude of the follow-through motion; the second, in the third segment, is the character's spine movement and the adjustment of the feet.

To make this character's action more cartoony, I followed Luke's advice and lengthened the timing of the arm's follow-through, with a slightly larger range than before, so the character looks more flexible and comfortable. My follow-through used to cover only the hips and head; now I have added the chest and neck, and increased the frame offset from one to two to make his body look softer.

spline

However, one problem is that the hand on the waist seems to drift constantly, because I used FK here rather than IK, so I spent a little time correcting the hand so that it stays on the waist.

Comparison before and after modification

Then there's the hand flick. Although I was satisfied with it before, I still adjusted the left character's hand: I want his flick slower, because if it's too fast you can't read the arc of the hand. Then I focused on the tilt of the right character's spine, which was a little stiff before; I wanted more of a curve, so I deleted keys and adjusted the spacing between them so that even a fast action doesn't feel stuck.

Comparison before and after modification. I think it’s a little smoother.

Then there is the part I spent the most time on, which I found a bit difficult. First, I adjusted his footwork. This time I studied the reference carefully, and while animating I kept getting up from my chair to act out the motions myself, because I think reference can sometimes be misleading: that kind of performance is deliberate. So I repeated the action naturally in an empty space, and I found that when people walk slowly they don't lift their feet very high; the toe and heel don't land or lift at steep angles, and for a relatively long time the foot barely leaves the ground, more like walking close to the floor.

Controller curve of the right foot
Controller curve of the left foot

I keep the roll value of the heel lift and heel-down within 5. I also shortened the time the feet spend in the air, so there is no floating feeling.
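Clamping a roll curve like this is simple to express. A minimal sketch, assuming the curve is available as (frame, value) pairs rather than as a live Maya animation curve:

```python
def clamp_roll(keys, limit=5.0):
    """Clamp heel-roll keyframe values to +/- limit degrees.

    keys: list of (frame, value) pairs from the foot controller's
    roll attribute; frames are preserved, values are clamped.
    """
    return [(frame, max(-limit, min(limit, value))) for frame, value in keys]
```

The same pass over the real curve keeps every landing and lift shallow without re-timing any keys.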

What I couldn't control well here is that I want his walking displacement to be straight ahead, that is, the forward translation curve of the hips should be a straight line. But sometimes a knee suddenly pops, just because the motion hits the extreme of a vector. So I constantly have to check for this and then adjust the foot position; not only the lift angle of the foot but also the stride length affects the result. It's like a puppet: everything is interrelated, and changing one thing means changing many controllers.

Then there is the spine. The left character's spine tends to tilt back, head raised; the right character's back is hunched, head down.

The hardest part is the finger-to-chest movement. I keep adjusting the controllers of his feet, hips, spine, chest and waist; these controllers interact, so once one is adjusted, the others' values are affected. It's only a few seconds, but I spend a lot of time fine-tuning. Now the finger pointing at the chest has some force, although it still doesn't feel very strong.

I relied too much on the reference here and ignored some of the body's rise and fall while walking, so the body still looks a little stiff. But I have done my best to give the spine some up-and-down and front-back, left-right variation while keeping the curves reasonable.

The two characters' arms look a little stiff, but I don't want their motion so large that it draws attention. At this moment, the focus should be on the left character's fingers, because he is pushing the right character, which should be the highlight of this animation.

I think the dynamics of the spine and the characters' body curves are much better than before, and the distance between the two characters is no longer so narrow, so the finger-to-chest movement is no longer cramped and has more room to play.

Here I tried to add follow-through on the arm, though it's not obvious. I also made the right character's head dip each time he is poked, to show the impact.

In the end, I'm still not good at the collar grab; I don't know how to make the clothes look as if they are being pulled up, so for now I have just modified the spine. It no longer looks awkward; it reads like a dispute.

The problem I still can't solve is how to make the right character more flexible; he looks a little stiff now. There is also how to make the collar grab more natural, and the finger-to-chest more dynamic and real. I think the essence of my animation should be the distance, dynamics and force between the two characters setting off the tense atmosphere, and that depends on the animation itself. I still need to spend a lot of time on the details of the action.

Comparison before and after modification.

Changes to this version

After Luke’s comments, he gave me some suggestions.

  • The heel-down is too slow; a normal person's heel can't take 10 frames from lift to landing. Try landing on the ball of the foot and lowering the toes gradually, rather than planting the whole sole at once.
  • In a dispute, if one character grabs another's collar, the action should be fast, not slow; it should be a single moment that shows power and emotion.
  • The collar grab should also carry follow-through in the body and spine.

I have a lot of projects at the moment, so I didn't put much energy and time into this revision. The general framework of the action is there now, but it isn't detailed enough; I will continue to revise it later. I hope it becomes a complete, finished performance animation. There is still a long way to go.

Posted in Advanced & Experimental 3D Computer Animation Techniques

Week6 Keying Animation and Multiple Mocap Retarget

Part1 Key Animation

This week I did one animation: a flower-arranging action. I found flower models online; they were all merged together, so I separated them one by one to make animating easier.

For this scene, I also added two area lights to make the character more prominent.

I think the difficulty of this animation is the parenting of the object. The character picks up the flower and then puts it in the vase; from the moment it is held to the moment it is put down, the flower stays parent-constrained to the hand controller.

My approach was to finalize the animation first, duplicate the flower model at the frame where it is picked up, and then key the visibility attributes: the copy without the parent constraint becomes invisible at that frame, and the copy with the parent constraint becomes visible. In the same way, at the frame where the flower is about to leave the hand, a static copy is duplicated and made visible at that frame.

I admit this is a clumsy method, but I didn't think much about it at the time, because it genuinely works. The only drawback is that once I want to adjust the arm movement, I have to redo the flower positions and keyframes. Later, in class, my classmates raised this question, and I found their method better and more convenient; I will probably use it from now on, because it is more systematic.
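The visibility-swap trick above can be sketched as a small planning function that lists the stepped visibility keys each copy needs. This is only an illustration of the keying logic (the object names are hypothetical), not actual Maya code:

```python
def handoff_visibility_keys(pick_frame, drop_frame):
    """Visibility keys for a prop handoff done with three copies:
    a static copy visible until pickup, a hand-parented copy visible
    while held, and a second static copy appearing at release.

    Returns {object_name: [(frame, visible)]}, intended as stepped keys.
    """
    return {
        "flower_static_start": [(pick_frame - 1, True), (pick_frame, False)],
        "flower_in_hand": [(pick_frame - 1, False), (pick_frame, True),
                           (drop_frame, True), (drop_frame + 1, False)],
        "flower_static_end": [(drop_frame, False), (drop_frame + 1, True)],
    }
```

Laying the keys out like this also shows the drawback mentioned above: if the pickup or release frame moves, every copy's keys must be redone.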

By the way, at the beginning the action was about 19 seconds, but for an animation played inside a game I think it should be short and general, so I sped it up a little.

Playblast — rose

Render –rose

Part2 Multiple Mocap Retarget

Based on our decision to trigger animations on touch in the game, we discussed the plan with the team again. We felt we needed to adjust the reference setup from the earlier shoot: the previous reference was too long and the actions relatively simple, so it got a little boring. We therefore shortened the animations, and I proposed using some motion-capture animation, because I had already spent a little time learning it, and this seemed like the time to use that knowledge.

Together we screened some actions from Mixamo (https://www.mixamo.com/#/?limit=96&page=1&type=Character), and my task this week was to match the bones of our rigged character to those of the captured character.

First of all, I put the character that was already rigged and debugged into a T-pose. Then I started matching the skeleton to the control panel on the right.

In this process I hit a problem: every model I had worked with before could be mirrored automatically, but this one couldn't, and I didn't know why, even after confirming the naming in the Outliner.

I asked Luke, and he told me it was because the bones were not bound symmetrically; but the binding was already finished by then, so I had no way to modify it.

So I debugged almost 10 times and still failed. But I didn't want to give up, because once this arm problem was solved, the remaining issues were basically minor.

So I tried several ways later.

  1. First I matched the bones of the right arm, then the left arm, then cleared the left arm and matched it separately.
  2. Then I removed the shoulder bone.
  3. Then I removed the upper-arm bone.
  4. I even tried re-matching starting from the chest.

In short, I tried for 3 nights, about 10 hours, and still had no idea.

Later, after those nights of trying, it suddenly occurred to me that the problem might be that I was selecting the bones in the viewport. The standard practice is to select the corresponding name in the Outliner, because clicking in the viewport can select other joints without my noticing. Strangely, this bone appears twice in the Outliner, and the correct approach is to drill down layer by layer to the right one.

wrong
right

On the other hand, I spent a lot of time matching the fingers, because I wasn't sure how to match them. So I dragged the downloaded model into Maya to check: since I want to match the two characters, their skeletons should correspond. After checking one by one, I confirmed that I still had to choose the standard bone name in the Outliner.
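A quick name-symmetry check could have caught the asymmetric binding earlier, before hours of re-matching. A minimal sketch, assuming the joints use Left/Right markers in their names (the function name and naming convention are my assumptions):

```python
def asymmetric_joints(joint_names, left="Left", right="Right"):
    """Report joints whose Left/Right counterpart is missing.

    A mismatched pair is a common reason automatic mirroring of a
    skeleton characterization fails.
    """
    names = set(joint_names)
    bad = []
    for n in names:
        if left in n and n.replace(left, right) not in names:
            bad.append(n)
        elif right in n and n.replace(right, left) not in names:
            bad.append(n)
    return sorted(bad)
```

Running this over the Outliner's joint list would immediately point at the side whose bones were named or created inconsistently.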

reference–character2

character1 (custom rig)

You can see from this that I really did try many times. At first I saved each attempt; later, after repeated failures, I just kept re-matching without success.

You can also see that there is still a problem with the binding of the arm. Even after repainting the skin weights, it wasn't solved. Later I thought maybe I hadn't chosen the right controller, so I compared the rigs of other characters with our own, and it seemed to be a shoulder problem. So I cleared the previous match and redid it several times.

Then I matched the controllers again. At first the motion looked very strange, and I thought my controller assignment was wrong; I rewatched the video, but it didn't seem to help, so I abandoned the tutorial's method, cancelled the knee match, and checked my shoulder binding more carefully. The revised version looks much more comfortable.

The final matching effect

After changing the controller assignments, there is an obvious difference between the two versions: the arm before the change looks out of control, and the one after is much better.

Comparison of the two versions

before
after

Mocap animation

As the animation above shows, most of the animations have been successfully retargeted to our model; we only need to fine-tune some keyframes to make the actions look more comfortable.

Part3 Conclusion

Another difficulty in this animation is that my rigging is not very good, so the arm pose is always uncomfortable when it lifts up. To solve this, I spent a lot of time adjusting the skinning and weights, although the improvement is still not obvious. In addition, the character doesn't have detailed facial rigging, so I cannot animate her eyes closing or mouth moving.

I hope that if I have time later, I can take the time to modify the character, make the facial expression better and make the skin more accurate.

Finally, I was very happy to complete this mocap successfully. I felt very anxious and irritable during those three nights when I couldn't solve the problem, but fortunately I didn't give up. Next, I will retarget some of the motion-capture animations we found onto our models. This step saves a lot of time and energy for our later work, and makes me more confident about completing our project.

Posted in Collaboration Unit

Week8 Performance Animation: Blocking

This week, we’re finally going to start to do the blocking process of performance animation.

First, I started choosing two characters for my animation. Because it's a quarrelling scene, I wanted one relatively strong character and one who looks gentler; one of them also needs a collar, because of the action of being grabbed and lifted by the other character.

Models

Since I hadn't used male characters in previous animations, I spent a little time getting to know these two models. Fortunately, their skeletons are very similar, so learning one made me familiar with the other. In general, I think the rigging of these two characters is very good; the facial and mouth rigging is also very detailed. After some testing, I finally settled on the two of them.

Character 1

Character 2

If I finish the animation ahead of schedule in the next few weeks, I will adjust their textures and set up the lighting for rendering.

Audio & Reference

I first processed the videos in PR (Premiere), turned them into frame sequences, exported the audio to WAV format, and imported both into Maya, which completed my preparation. The original movie reference let me learn the content of the lines. To be honest, the lines are spoken so fast that I couldn't make out the English. This reference video has subtitles, so I could learn the lines and use them for the mouth animation.

I have to say that I realized that I had not done animation for several months, so it took me a little time to think about how to do these steps.

Although I know that the main part of my performance is body language and expression, I still chose to do the mouth animation first, because I think it looks strange when a smooth body action comes with a mouth that doesn't open and close. Doing the mouth first also made me listen to the audio repeatedly, which familiarized me with the mood and situation of the two characters.

Animation Principles

  • Timing
  • Staging
  • Arcs
  • Slow in & slow out
  • Pose-to-pose
  • Anticipation
  • Follow-through and overlapping action
  • Exaggeration

Phonemes

I divided the phonemes into several clips and animated them separately, so that I could better adjust every frame on the timeline, and it is easier to watch and modify. I found there was no way to play the mouth animation back in real time, so I had to playblast to see problems and fix them.

At the beginning, my mouth shape range was a little small. After I finished my second role, I came back and began to readjust the mouth shape of this role.

In this part, I made a general mouth shape first, and then I may adjust it carefully, such as the part where the tongue touches the teeth and the part where the teeth bite the tongue. These details are not obvious now. I want to wait until the camera is set up and see the distance between the two characters before deciding whether it is necessary to do this step.

Maybe it's a problem with the model and its rigging: the mouth shapes of the character on the right don't look very obvious or exaggerated, especially from the side. But I put this problem aside and began to work on the body animation.

Blocking

Then comes the initial pose. I really like Luke's advice to exaggerate both characters' actions, which gives both of them personality and attitude. The character on the left is lazy at the beginning: I didn't plant his right foot on the floor but left it hanging, showing his disdain. The character on the right is more arrogant, standing akimbo with his weight on his right foot and his whole body leaning forward.

Different standing posture and waist dynamic line.

Details of the shape and curvature of the fingers

Actually, I have a problem here that I plan to ask Luke about when the animation is nearly finished. There seems to be something wrong with this model's mapping: his face and body are one object, but they have two texture maps. I don't know why the texture is like this; it looks as if the UVs are wrong, but unfortunately this character is already bound, so I can't adjust his UVs, which causes some strange shadows here. When I have time, I will find a way to solve this obstacle.

Then I made the character on the right first. In the reference, he didn’t have any body movements, but I added some myself, hoping that the character would be more vivid.

The difference between the two versions is that I added a foot movement, and the following movement of the head and chest. In addition, I tried to follow the eyes, I let his eyes always fixed on the opposite character’s face.

The second is the character on the left. I want the audience's attention on the right, so I didn't give him many movements: just a little blinking and small hand and foot motions.

The next part I think is the climax of my performance, and there are more movements, the range is relatively large, I spent more time to adjust.

The rhythm of this part is very fast, so my workflow was to keep adding keyframes, play back roughly every 30 frames, and look for problems. Generally speaking, when the rhythm is fast, too many keyframes make the action look chaotic. To solve this, the keyframes need to be cleaned up properly, using the curve editor to adjust particularly abrupt curves or points. I deleted some sudden extreme points and unsmooth curves, and I also turned some curves into straight lines.
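The cleanup step, removing keys that sit almost on the straight line between their neighbours, can be sketched in plain Python. This is only a conceptual illustration of the idea; `simplify_keys` and its tolerance are my own invention, not a Maya command.

```python
def simplify_keys(keys, tolerance=0.05):
    """Drop interior (frame, value) keyframes whose value is within
    `tolerance` of the straight line between the last kept key and the
    next key -- i.e. keys that clutter the curve without changing it."""
    if len(keys) <= 2:
        return keys[:]
    kept = [keys[0]]
    for i in range(1, len(keys) - 1):
        (t0, v0), (t1, v1), (t2, v2) = kept[-1], keys[i], keys[i + 1]
        # Linear interpolation between the last kept key and the next key.
        f = (t1 - t0) / (t2 - t0)
        linear = v0 + f * (v2 - v0)
        if abs(v1 - linear) > tolerance:
            kept.append(keys[i])
    kept.append(keys[-1])
    return kept

keys = [(0, 0.0), (10, 5.0), (20, 10.0), (30, 30.0)]
# The key at frame 10 lies on a straight line between its neighbours
# and is dropped; the key at frame 20 marks a real change and is kept.
print(simplify_keys(keys))
```

The same reasoning applies in the curve editor: keys that a straight or smoothly interpolated segment already implies are the ones worth deleting.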

Here, the detail of the held hands suggests something like a father-son relationship, and it matters as much as the movement itself, so I keyed these details frame by frame.

I think the key to this part is giving the two characters a larger range of motion in the hips, to make them look more exaggerated.

Another point is that the arms swing a lot in this part even though the two characters are very close, which increases the keyframe work on the arms, because they can easily pass through the other model. But a quick arm swing creates a tense atmosphere.

The opening and gripping of the fingers became my main concern this time, because they determine the strength of an action and the character's mood.

The spine of the character on the right side is tilted forward to make it stronger

Actually, I rushed part 3 and ran into some problems. The first is that when the characters step back a little, the distance between them is very difficult to control, so I first made sure that when the character on the left points at the other's chest, the gap is not particularly short. That way, when they step back, I can keep their paces roughly even.

In this part, I could never find the right feeling for the finger-pointing at the beginning; the pointing finger just doesn't look forceful. I tried adding follow-through movement, and also tried changing the swing amplitude and frequency of the upper and lower arm, but the effect was not obvious.

Then I wanted the character on the right to feel wobbly, but the leg movements were very difficult: first, the feet must not slide, and second, the character can't lean too far forward or backward.

Another issue is the left character grabbing the right character's collar. I don't know why, but I always feel that both of them look about to fall over. I think my reference is not good enough. I'm not sure if it's the lack of facial expression, but even though the two characters are very close, there is still no sense of tension. Overall, I failed to achieve the desired effect, and I will revise this passage later.

But I still feel I haven't done this part well. Maybe it's the coordination with the hips: I always feel there is some foot sliding, or the centre of gravity is unstable, or the feet don't seem planted on the ground.

After completing the three segments, I spent some time adjusting the transitions, such as at frames 60 and 120. Sometimes there was a hitch because a segment hadn't reached its last frame, so I spent time connecting the actions. This is the first version of the blocking. I think there are still many things to adjust, but I have to say that coming back to animation after a month, my speed and my hands are not as practiced as before. So I will leave it here for now; I hope to get back into form and then refine the small details.

Summary

I summed up some of the feelings of doing this.

The first is that good animation should consider timing and staging. When I animate, I often feel that it looks strange if one character moves and the other doesn't; but when both move at once, I don't know where to put my eyes. So one character should carry the main animation, and the other can have some subtle movement without stealing focus. The main character's animation range should be large and exaggerated, so it attracts attention at once.

The second is that when a character walks, every part of the body should have subtle movement; even a little makes the character look less rigid. So when starting to key the animation, the feet, hips, and centre of gravity should be handled together so they don't have to be adjusted repeatedly, because the displacement of the feet and hips determines how far the character moves forward. If one part goes wrong, many parts need to be modified.

Feedback

After Luke’s suggestion, I have the following parts to modify.

  • At the beginning of the clip, the following movement of the character’s head and arms can be more obvious, creating a cartoon feeling.
  • As the two characters slowly back away, pay attention to the direction of the spine, which is very stiff now. Note that the spine is a curve, and every joint of the body bends, not just the hips.
  • Pay attention to the posture of raising your feet. If you raise your heels too high, you will feel like the character is about to fall.

Posted in Advanced & Experimental 3D Computer Animation Techniques

Houdini Tutorial Week 6

The topic of this week is how to use two different solvers together. We reviewed the previous content, looked at some beautiful particle work, and then learned some new points on top of what we had studied, including using rigid bodies and particles at the same time to create an effect.

The Display Options

Change the point size to make the particle point bigger

Geometry

Level of Detail — Increases or decreases the display resolution of metaballs, NURBS, and Bézier surfaces.

Volume Quality — Controls the display quality of volumes in the viewer.

  • Very Low — Draw volumes as parallel slices along one axis. This is the fastest option but produces a visual pop as the volume rotates in the view. Overlapping volumes will produce visual artifacts.
  • Low — Draw volumes as slices parallel to the viewport. This is the fastest of the view-aligned options, useful for working interactively with dozens of volumes. Overlapping volumes will render correctly.
  • Normal — Draw volumes as slices parallel to the viewport, with more tightly-spaced slices than the “Low” option. Balances quality and performance.
  • High — Draw volumes as slices parallel to the viewport, with even more tightly-spaced slices than the “Normal” option. Slowest but best quality option. Adds some random variation to the volume sampling to break up the slices and 3D texture sampling.

Enabling HDR Rendering will remove any banding artifacts from volumes.

Polygon Convexing — Fixes concave polygons by tessellating them to triangles so they appear the correct shape. There are two options for determining when to redo the convexing:

  • Fast — Only redo convexing on full topology changes. Ignore changes to point position (P).
  • Accurate — Redo convexing on full topology and point attribute changes.

Particles

  • Display particles — How Houdini draws particles and disconnected points.
  • Point — Draw points as uniform dots, controlled by Point Size (in pixels). In this mode, close and far points are drawn at the same size.
  • Pixel — Draw points as single pixels. This may be useful for very dense particle simulations.
  • Line — Draw particles as streaks. This only affects particles (disconnected points are drawn as dots).
  • Disc — Draw particles as filled circles, with the radius controlled by Disc size (in world space units). In this mode, particles are drawn as actual geometry, so closer particles appear bigger than far particles. This only affects particles (disconnected points are drawn as dots).
  • Display Sprites — If particles have sprite attributes (see the Sprite node), draw the sprite image at the particle location.
  • Point Size — The size (in pixels) of particle and unconnected points, when Display particles is “Point”. 
  • Disc Size — The size (in Houdini units), of the filled circles, when Display particles is “Disc”.

Scene

  • Antialiasing Samples — Smooths edges of lines and polygons in the viewport. Increasing this increases the amount of framebuffer memory Houdini uses. Only use modes higher than 4× if your graphics card has 2GB of VRAM or more. Modes above 16× are significantly slower and give diminishing returns on quality, so it is best to find a “good enough” setting rather than maxing this value.
  • HDR Rendering — Produces higher quality render of volumes and transparency. This doubles the amount of framebuffer memory Houdini uses. When this is on, flipbooks will contain HDR images. Can use this with a LUT (in the Color Correction section) to view super-white values.
  • Enable X-Ray Drawing — Draw objects with the X-ray flag (bones and nulls) as wireframes when they’re behind solid surfaces.
  • X-Ray Strength — Controls the strength of the X-Ray wireframe. Values less than 1 will dim the lines, and values greater than 1 will widen the lines.
  • Enable Object Origins — Draw axis and pivot points at the object origin of objects with the Display Origin flag.

Depth of Field

Camera Depth of Field

Turn this on and set the view to look through a camera to simulate depth of field in the OpenGL scene view. You can control the range that is in-focus using the camera’s F-stop parameter (the F-stop must be non-zero to get depth of field).

This effect works by blurring the OpenGL rendered pixels, so it is quite fast but can give strange results in areas with lots of blurring, since it is a post-process that can only work with the available pixels. So for example it can’t simulate light blurring out from behind objects.

Example 1

Create the node

  • Torus
  • Vdbfrompolygons
  • Scatter
  • Voronoifracture
  • Explodedview (for visualization)
  • Transform
  • Null ( rbd_packed_source)

VDBfrompolygon — fog VDB density

Dopnet

  • Rbdpackedobject ( Sop path — the null )
  • Bulletrbdsolver
  • Groundplane
  • Gravity
  • Merge

Add the assemble to make the gravity work

Unselect create name attribute and select create packed geometry

Add the Popforce and Multisolver so that the piece will fly up

In the popforce node, increase the Amplitude to make the effect visible

Make it progressive — to create the group and group type points

And enable Keep in Bounding Regions with bounding type: bounding box

Add key frames on the bounding box and change the group name (static)

Add the attribcreate and name (active) / value (1) / Group (!static)

Change Class to point; the type will change to float

! + NAME means everything except that

Select Overwrite Attribute from SOPs and there will be three attributes

Change the type in group into integer

In order to add more detail, add a scatter and merge them

Add the popdrag and increase the air resistance

Color ramp

Add — If an input is specified, this OP adds points and polygons to it as specified below. If no input is specified, it generates the points and polygons below as a new entity. Extract points — used in conjunction with a point expression, the Add OP can be useful for extracting a specific point from another OP, for example to extract the X, Y and Z values of the fifth point from a Grid SOP in geo1. Points added in this way are appended to the end of the point list if a Source is specified. Click the information pop-up on the OP tile to find out how many points there are. For example, if you have added two points and there are 347 points (numbered 0 to 346), the two you added are the last two point numbers: 345 and 346.

Create an Add SOP and set it to create a single point, then append a Copy SOP and set its number of copies to the (possibly animated) number of points you want. This works correctly even when number of points is 0, unlike some other approaches.

Select the delete Geometry but keep the points — This will destroy all the polygons, NURBs, and other primitives, leaving only the points intact.

Create the attribvop to do some visualization

and the aanoise to increase the contrast

This operator generates anti-aliased (fractional Brownian motion) noise by using the derivative information of the incoming position to compute band-limited noise. This type of noise is ideal for shading.

  • The roughness parameter determines the coarseness of the noise. The maxoctaves parameter limits the noise to a fixed number of iterations.
  • The amplitude parameter is a scale factor on the resulting noise. The default output range of noise is -0.5 to 0.5. 
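The roughness/octaves/amplitude relationship above can be illustrated with a toy fractional-Brownian-motion function in plain Python. This is a simple value-noise stand-in, not Houdini's anti-aliased, band-limited noise; the `value_noise` hash and the doubling frequency step are my own assumptions for illustration.

```python
import math

def value_noise(x):
    """Cheap hash-based noise in [-0.5, 0.5) -- a stand-in base signal."""
    s = math.sin(x * 12.9898) * 43758.5453
    return (s - math.floor(s)) - 0.5

def fbm(x, octaves=4, roughness=0.5):
    """Sum octaves of noise; each octave doubles the frequency while
    `roughness` shrinks its amplitude. Dividing by the total weight
    keeps the output in the base noise's -0.5..0.5 range."""
    total = norm = 0.0
    amp = 1.0
    for i in range(octaves):
        total += amp * value_noise(x * (2 ** i))
        norm += amp
        amp *= roughness
    return total / norm

samples = [fbm(x * 0.1) for x in range(100)]
print(all(-0.5 <= s <= 0.5 for s in samples))  # True
```

Raising `roughness` toward 1 gives the high-frequency octaves more weight, which is why it reads as "coarseness" in the node's parameters.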

Clamp — This operator clamps the input data between the minimum and maximum values.

Bind export — name active

The noise works, so only some of the pieces move

Next step — timing and dependencies

Float to Integer — This operator converts a float value to an integer value.

Fit Range — This operator takes the value in the source range (srcmin, srcmax) and shifts it to the corresponding value in the destination range (destmin, destmax). For example, fit(.3, 0, 1, 10, 20) would return the value 13. Values outside the input range will be clamped to the input range.
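The Float to Integer, Clamp, and Fit Range operators described above are easy to sketch in plain Python. This is a stand-in illustration of their behaviour, not Houdini's implementation; note that Python's `int()` truncates toward zero, which matches the spirit of a float-to-integer conversion.

```python
def clamp(x, lo, hi):
    """Clamp x into the range [lo, hi]."""
    return max(lo, min(hi, x))

def fit(x, srcmin, srcmax, destmin, destmax):
    """Shift x from the source range to the destination range,
    clamping values outside the input range, like the Fit Range VOP."""
    t = clamp((x - srcmin) / (srcmax - srcmin), 0.0, 1.0)
    return destmin + t * (destmax - destmin)

print(fit(0.3, 0, 1, 10, 20))   # 13.0, the example from the docs
print(fit(1.5, 0, 1, 10, 20))   # input clamped, so 20.0
print(int(0.97))                # float-to-integer truncation -> 0
```

Chained together like the VOP network in this lesson, these let a float attribute drive an integer group or activation flag per piece.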

Add the addconstant and clamp so can make animation

Addconstant — This operator adds the specified constant value to the incoming integer, float, vector or vector4 value. It is a simpler version of the Add operator because it does not require a second input. It is ideal as an increment (i++) operation in a While loop.

node

Promote the Add node's second input to a parameter; back on the Attribute VOP there will then be an input number parameter to adjust

key frame

Outcome

Next, make the pieces smaller and smaller as they move

create the primitive and change the scale

connect the primitive to the dopnet and object-merge everything except the RBD

dopimport

  • dop network — dopnet
  • object mask rbd*
  • select the import style — fetch geometry from dop network

Add the attribpromote and change the new class into Primitive

Add the sopsolver and paste the primitive and attribpromote to the dopgeometry

Outcome — pieces become smaller

Measure — This node can be used to measure curvature, which is useful in game development workflows for determining sharp cliff-like areas in terrain, so that rocks or other items can be added to those areas of high curvature, giving the landscape a more natural look.

Measuring area is useful in character workflows, as artists can measure the polygons of characters before and after animation to determine where the polygonal mesh is highly deformed. This information can then be used to blend texture maps.

Measuring volume is often used in destruction workflows, where objects are shattered. Often very small pieces can be deleted (volume under a certain amount), as they don’t affect the overall look and can speed up simulation times. Similarly, measuring perimeter is useful for determining how large things are in 2 dimensional space. For example, this can be used for measuring the 2 dimensional footprint of a city.
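The "delete very small pieces" idea from the destruction workflow above can be sketched as a simple filter. The piece dictionaries and the `cull_small_pieces` name are hypothetical illustration data, not the Measure SOP itself, which would supply the measured volumes.

```python
def cull_small_pieces(pieces, min_volume):
    """Keep only fragments whose measured volume meets the threshold,
    the way tiny shards are discarded to speed up a destruction sim."""
    return [p for p in pieces if p["volume"] >= min_volume]

pieces = [
    {"name": "chunk_a", "volume": 4.2},
    {"name": "shard_b", "volume": 0.003},   # too small to matter visually
    {"name": "chunk_c", "volume": 1.1},
]
print([p["name"] for p in cull_small_pieces(pieces, 0.01)])
# ['chunk_a', 'chunk_c']
```

In Houdini the same filtering is typically done with a Measure node writing a volume attribute, followed by a Blast or Delete node with a threshold expression.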

add the attribwrangle

  • f@tokeep = f@tokeep * 0.97 ;
  • group : @active =1

So far I found that my pieces do not fly up, so I think something important is missing. I remembered that the software had crashed on me earlier and some of the work wasn't saved, so I went back to the popforce and gave it an upward Y force

Make a Blast and paste it between the attribwrangle and the output

  • Blast — Group : Subset of the input geometry to delete.
  • group @tokeep<0.1

So now the effect is right: the object breaks gradually, starting from a local area until the whole object is broken. The broken fragments move upward, shrink as they move, and finally disappear.
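The decay in the wrangle above (`f@tokeep = f@tokeep * 0.97` each frame, then blast pieces with `@tokeep < 0.1`) can be sketched in Python to see how long a piece survives before disappearing. This is a toy model of the per-frame attribute update, not VEX.

```python
def frames_until_blast(decay=0.97, threshold=0.1):
    """Multiply tokeep by `decay` once per frame, as in the attribwrangle,
    and count the frames until it falls below the blast threshold."""
    tokeep, frame = 1.0, 0
    while tokeep >= threshold:
        tokeep *= decay
        frame += 1
    return frame

print(frames_until_blast())  # 76
```

So with these settings a fragment shrinks for about 76 frames before the Blast deletes it; raising the decay factor toward 1 makes the pieces linger longer.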

I also tried a sculpture model that I had fractured before, to try different crushing effects.

Example 2

  • grid
  • scatter
  • pyrosource
  • volumerasterizeattribute
  • null

Popsource — mode: source smoke

change the attribute names float and density

change the amplitude so the smoke will change the volume

Convert to circle

with arnold light

Make the ground

Make convertvdb and vdbvectormerge and filecache

dopnet

POP Advect by Volumes — The Advect by Volumes POP is designed to make it easy to advect a particle system by a fluid simulation. Often the fluid simulation will be simulated as a separate pass and the velocity fields read off disk. However, the particles can be live-linked to an existing simulation.

the air resistance of the popdrag and the POP Advect by Volumes can change the speed and how far the particles fly

merge the smoke and particle

sphere and copy to point

Example 3

Blast and select delete non selected

Keep the path

It converts to polygon soups rather than polygons. Converting to a polygon soup makes the geometry lighter than plain polygons, but the problem with polygon soups is that they have limits, so here we just convert to polygons.

Convert — When converting from a set of polygons to a mesh, a single mesh will result only if:

  • more than one polygon is in the input
  • each polygon has exactly four points
  • the polygons are arranged as n rows by n columns
  • the polygons share coincident points (see Facet OP)

Otherwise, each polygon is converted individually into a mesh. In fact, any individual face can be converted to any surface. This is accomplished by cutting the face into three or four adjacent sections, and then creating a patch from them.

Fix the problems

Floor

The Fuse SOP is used to snap points together or snap points to a 3D grid, and optionally fusing points after the snap.

To restrict the points that can fuse and be fused to, points can be query points, or target points, or both. With only one input to the node, both query and target points are from the single input. However, with a second input to the node, only points in the first input can be query points and only points in the second input can be target points.

Polyfill

  1. Select at least one edge in each hole you want to fill.
  2. Click the PolyFill tool on the Polygon tab.
  • The Quadrilaterals fill mode is most useful for filling in small groups of polygons, and can handle a wider variety of shapes, such as an L-shaped hole or a spiral.
  • The Perfect Grid Quadrilaterals fill mode can handle fewer shapes, but often generates a better patch with better interpolated attributes.
  • If you are filling in a round hole, the Perfect Grid Quadrilaterals fill mode will likely give you the best results.
  • Some holes may require additional smoothing to fix UV values. You can use the Smooth parameter to position the geometry where you like, then append a UV Smooth to fix the UV values separately.
  • If a projection plane normal cannot be found for your geometry, you may want to use the Clean SOP to clean up your geometry first, especially consolidating points.

Stairs

wall

Whole building clean

Merge the two and finish the preparation

3 ways to reduce the number of polygons and geometry

1 . polyreduce

This version of PolyReduce gives very fast, highly accurate reduction while preserving the shape, textures, attributes, and quad topology of the input as much as possible.

This node has multiple features to let you guide where the node reduces and reshapes:

  • prevent the node from moving unshared edges in 3D and UV space.
  • specify points and/or edges to preserve.
  • paint an attribute in areas where you want to retain more density.
  • retain polygons based on visibility from certain view points.

2 . Create the vdbfrompolygon + convertvdb

change the voxel size to increase / decrease the details

3 . Create the sphere and ray

The idea is to create a sphere on top of that meteor, a sort of bounding sphere, and then add the Ray node after the sphere.

The Ray operator projects rays from each point of the first input geometry in the direction of its normal, and then moves the point to any geometry the ray hits from the second input. You can use this node to drape clothes over surfaces, shrink-wrap one object with another, and other similar effects.

So change the frequency of the sphere to increase/decrease the details

Extracttransform

This SOP computes the transform (translation, rotation, and optionally scale) that best aligns the reference geometry’s points with the target geometry. If Use Piece Attribute is enabled, a transform will be computed for each piece in the geometry, instead of for the geometry as a whole. Additionally, if a piece contains a single packed primitive, the SOP will compare the primitive transforms between the inputs to allow transforms to be extracted from animated packed primitives without unpacking.

The output geometry of this SOP contains one point for each piece in the reference geometry, with point attributes describing the transform. These points can be used with the Transform Pieces SOP to apply the transform to geometry.

This extraction can be useful for setting up rigid body colliders (for which an animated rigid transform is ideal) from baked geometry files that represent rigid motion.

Point Deform

This node computes how a point cloud (the deformation lattice) deforms (compared to its original “rest” point positions), and applies those deformations to the input geometry. The node works by having each point on the lattice “capture” and influence nearby points on the model. The closer the points, the more influence (computed using the Elendt metaball formula).

This allows you to animate proxy geometry and transfer that to a high resolution mesh. In that case, the points of the low res proxy would act as the lattice, capturing and deforming the high resolution geometry.

The deformation lattice points can be connected by edges. The node uses connected points to find local transforms, allowing accurate transformation of rotating models. This avoids the “collapsed” look you might get with the Lattice node’s point mode when the mesh rotates.
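The capture-and-deform idea can be sketched minimally in plain Python. I use a simple linear falloff instead of the actual Elendt metaball kernel, and the point tuples, `radius`, and function name are my own illustration, not Houdini's API.

```python
def point_deform(points, rest_lattice, deformed_lattice, radius=2.0):
    """Move each point by a weighted average of nearby lattice points'
    displacements; closer lattice points get more influence."""
    out = []
    for px, py, pz in points:
        wsum = 0.0
        dx = dy = dz = 0.0
        for (rx, ry, rz), (mx, my, mz) in zip(rest_lattice, deformed_lattice):
            d = ((px - rx) ** 2 + (py - ry) ** 2 + (pz - rz) ** 2) ** 0.5
            if d < radius:
                w = 1.0 - d / radius  # simple falloff, not the Elendt kernel
                wsum += w
                dx += w * (mx - rx)
                dy += w * (my - ry)
                dz += w * (mz - rz)
        if wsum > 0:
            out.append((px + dx / wsum, py + dy / wsum, pz + dz / wsum))
        else:
            out.append((px, py, pz))  # uncaptured points stay put
    return out

# One lattice point moved up by 1 drags a nearby high-res point with it.
rest = [(0.0, 0.0, 0.0)]
moved = [(0.0, 1.0, 0.0)]
print(point_deform([(0.5, 0.0, 0.0)], rest, moved))  # [(0.5, 1.0, 0.0)]
```

This is exactly the proxy workflow described above: animate the sparse lattice (the low-res proxy's points), then transfer the displacements to the dense mesh.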

Separate multiple layers

Because that building has multiple layers on the walls and inside the walls, if you use the same scatter you will end up with the same fracture shapes repeating. The multiple layers inside the walls then become useless and redundant, and the rigid body simulation won't be as interesting as it could be.

We want to have different seeds for every one of these layers.

Connectivity

The default name for the attribute is Class. Each primitive or point is assigned a number from 0 to the number of connected sets minus 1. Two primitives or points that share the same number will be connected.
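The Class numbering described above can be sketched with a small union-find over shared edges. This is a toy version of what the Connectivity SOP computes, with my own function name and edge-list input format.

```python
def connectivity_class(num_points, edges):
    """Assign each point a 'class' number from 0 to (number of connected
    sets - 1); points joined directly or indirectly share a number."""
    parent = list(range(num_points))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b in edges:
        parent[find(a)] = find(b)  # merge the two sets

    # Renumber the roots to consecutive class ids starting at 0.
    classes, ids = {}, []
    for i in range(num_points):
        r = find(i)
        if r not in classes:
            classes[r] = len(classes)
        ids.append(classes[r])
    return ids

# Points 0-1-2 form one piece, points 3-4 another.
print(connectivity_class(5, [(0, 1), (1, 2), (3, 4)]))  # [0, 0, 0, 1, 1]
```

In Houdini the resulting attribute is what the for-each loop below iterates over, one connected piece at a time.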

For each connected pieces

Blast — select Alembic path

Create for each named primitive

Change the piece attribute to path

Single Pass

Runs a single iteration at the given offset. This is useful for debugging piecewise loops, showing the output of an individual piece/iteration.

Split

Split is designed to divide your geometry into two separate streams, the portion that matches the group and the portion that doesn’t.

Group different parts

vdbfrompolygon and scatter and voronoifracture

Add details to do the voronoifracture

remesh

scatter

normal

This time I encountered a problem: my destruction effect was not displayed. After a one-on-one with Mehdi, he told me that because my version was relatively old, the attribute type under Connectivity needed to be changed to string instead of integer, but I didn't have that option. So he replaced the Connectivity with an Assemble.

more details

Block end

Connectivity

Rbd Material Fracture

Most of the later part of this lesson is about repairing some large buildings to prevent strange faces when they break. Dealing with the model to be broken is very important in Houdini, especially for complex models. (obj_merge > blast > unpack > fuse > polyfill > connectivity > foreach_begin > vdbfrompolygon > scatter > foreach_end)

Often in rigid body simulations, you want a solid object to break into pieces because of some impact or force. For example, you might want an earthquake to destroy a house, with the concrete walls fracturing, the wood door splintering, and glass windows shattering. Or you might want a swinging demolition crane ball to cave in a wall.

Most fracturing tools in Houdini support a pre-fracturing workflow, where you break the geometry into pieces in SOPs, with the pieces held together by glue constraints. Pre-fracturing gives you full artistic control over the look of the destruction (for example, do you want big blocky pieces or small jagged pieces). The object will crumble when a force overcomes the glue strength, or you can manually animate the glue off when you want the object to break down. The high-level tool for pre-fracturing geometry is the RBD Material Fracture SOP, with plenty of controls over different types of fracturing. There are many lower-level SOPs if you need even more control over fracturing.

Simulates breaking patterns associated with different materials: concrete, wood, and glass.

  • Can iterate multiple levels fracturing.
  • Can simulate low-res proxy geometry and copy piece transforms onto high-res geometry.
  • Automatically sets up glue constraints between the pieces.
  • Updates existing constraint geometry as it fractures.
  • Outputs groups and attributes with information about the fractures if you want to do more complex post-processing.
  • Use the Group node to name groups of primitives. For example, the door, individual windows, and walls. This will allow you to fracture them individually.
  • If you see pieces spinning/wobbling in the simulation, you can use particle drag to freeze them.
  • The RBD Material Fracture node can work on fast low-res proxy geometry. You need to set up high-res and low-res geometry with the same named pieces (for example, by breaking up the high-res geometry into named pieces and then copying and reducing the number of polygons to create the proxy).

Clustering refers to grouping fractured pieces into bigger clumps. There are two main clustering workflows:

  • If you just want a bunch of pieces to stick together permanently, give them all the same name attribute. Nodes that work on pieces will treat them as one piece. This can be useful, for example, with wood splintering, where you often want to group small splinters into bigger jagged chunks.
  • For certain directable crumbling effects, you will often want to work with bigger pieces early in the shot and have them break down into smaller pieces later in the shot. You can do this with a hierarchy of glue constraints. You can animate higher-level constraints off to break up bigger pieces into smaller pieces.

The RBD Material Fracture node provides clustering controls when the Material type is set to “Wood”. You can also do manual clustering with the RBD Cluster node.
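Name-based clustering can be sketched like this (plain Python, with a hypothetical mapping from piece names to cluster names): every piece that ends up with the same name is treated downstream as a single rigid chunk.

```python
from collections import defaultdict

def cluster_pieces(piece_names, cluster_of):
    """piece_names: per-piece name attribute values.
    cluster_of: {piece name: cluster name}; unmapped pieces keep their own name.
    Returns {final name: [original pieces]} -- each group moves as one body."""
    clusters = defaultdict(list)
    for name in piece_names:
        clusters[cluster_of.get(name, name)].append(name)
    return dict(clusters)

splinters = ["splinter_0", "splinter_1", "splinter_2", "plank_0"]
mapping = {"splinter_0": "chunk_A", "splinter_1": "chunk_A",
           "splinter_2": "chunk_B"}
print(cluster_pieces(splinters, mapping))
```

The hierarchical break-down workflow is the same idea layered: higher-level constraints hold clusters together early in the shot, and animating them off lets the smaller per-piece constraints take over.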

Effect so far

Posted in Houdini & Lighting

Week5 Lighting

Part1 Scene1 — First floor lobby

Kay’s scene is almost finished. To save time, I asked her to start on the next scene while I helped by modifying some of this scene’s materials and beginning the lighting.

I first modified the glass material of all the wine bottles, and then realized that the scene didn’t match my character model very well. At first I thought the problem was the textures, but my character’s textures are fairly realistic and didn’t affect the final render. I concluded that the scene was modeled close to real-world scale in a realistic style, while my character is still cartoon-styled, so I enlarged some of the props slightly to make them look more charming.

Before lighting, I looked for some references to help me understand and construct the lighting of this scene.

I wanted a strong top-light effect in an overall dark environment, so I placed the light sources inside the few chandeliers at the top. I did not add a separate global light, because that would flatten the overall brightness of the image.

I actually prefer this scene, because its lighting creates a strong sense of atmosphere. I set up a point light to match the model of this lamp.

We have three indoor scenes and one outdoor scene. This is the first indoor scene, and I kept it relatively bright because, by comparison, I want the basement to be darker.

To highlight the character, I gave him two additional lights.

The final rendering.

Part2 Scene2 — Attic

Before doing this scene, I found some movie shot references to help me understand the layout and lighting of the room.

I really like the atmosphere and lighting in the two references below, specifically the distinct shafts of window light and the fog effect.

What I like most about this scene is the two windows: they let me add some detail treatments, such as a light fog effect and the halo of the candles.

In this scene, the candle light is a warm yellow point light with a fog effect. Each of the two windows has a purple spotlight, also with fog. In addition, I added a supplementary area light for the character.

By the way, to ensure that the characters are the same size in every scene, we agreed on a unified scale.

I’d like to mention the fog-effect light in particular. I had used it before when I was doing the bisect exercise, but so much time had passed that I had forgotten how it worked, so I learned it again. The first step is to create a spotlight with a suitable exposure value.

In the light’s attributes, scroll down to the Arnold section and find the Volume attribute. This is similar to the volume light in Mehdi’s Houdini class. That is, combined with a fog-effect (atmosphere) setup, raising or lowering this value generates volumetric light and controls how visible that volumetric contribution is.

Open the Render Settings panel, go to the Arnold tab, find the Environment section, and under Atmosphere choose aiAtmosphereVolume to complete the light fog setup. For lights that should not contribute fog, find Volume under the light’s Arnold properties and set it to 0.
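The role of the per-light Volume attribute can be illustrated with a toy calculation (plain Python, not Arnold’s actual shading code — the scaling is an assumed simplification): once aiAtmosphereVolume is enabled, each light’s fog contribution is scaled by its Volume multiplier, so a value of 0 removes that light from the atmosphere without affecting surfaces.

```python
def fog_contribution(light_intensity, atmosphere_density, volume):
    """Toy model: a light's visible fog is its intensity scaled by the
    atmosphere density and the per-light Arnold Volume multiplier."""
    return light_intensity * atmosphere_density * volume

candle = fog_contribution(5.0, 0.2, 1.0)   # visible beam of fog
fill   = fog_contribution(5.0, 0.2, 0.0)   # fill light: no fog contribution
print(candle, fill)
```

This matches the workflow described above: leave Volume at its default on the candle and window lights, and zero it out on the character fill lights.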

resource: https://jingyan.baidu.com/article/8275fc86c4d7b106a13cf665.html

The final rendering.

Color grading

Part3 Scene3 — Basement

This scene is characterized by many props and rich colors, so I wanted the overall lighting tone to be uniform. I placed a point light in the top chandelier, plus fog-effect lights for the three candles. Overall it is not very bright, but rather hazy and a bit gloomy.

So I found some game screenshots and atmosphere charts as a reference.

The lighting didn’t take much time this time, because I had become proficient with the techniques from the first two scenes. I decided on the overall lighting first, aiming for a complete hazy fog effect, and then added fill lights shot by shot, otherwise the character would be too dark.

The final rendering.

Part4 Updating ( Modify attic lighting)

Because the game students were already designing the game interface, they asked me for a rendering. After discussion, we chose this one, because the fog effect of the light is very atmospheric. We decided to create the atmosphere through lighting and color rather than through the characters. But because the scene previously had few props, some items were added and their positions adjusted.

A barber chair and some shredded paper were added. The items are spaced closer together, which makes the scene feel more compact.

I roughly set the positions of the lights, and also adjusted the exposure values of some texture maps. In this very dark scene I still want the objects to read clearly, so I brightened the maps of the main objects and kept some color differences between the various types of furniture.

The final rendering.

At the beginning, I adjusted the light exposure and color temperature, trying two different moods: a romantic warm purple, and a cold blue morning light.

Part5 Conclusion

Lighting is a part I enjoy very much, and through this project and the KK course I learned a lot of new techniques and concepts: the roles of different types of light sources, how to create volume light, and how to improve the rendered quality of lighting. I am also now more proficient at setting up lights, including their color and intensity, and can basically achieve the look I want.

Progress: The game students have received our files and started testing the effect. Although they cannot guarantee the renders will look exactly as they do in Maya, they will do their best. We also met with the sound-effects student once; she has begun looking for suitable sound effects and music, mainly to match the atmosphere, based on our references for hand and object interaction sounds. We’ll move on to animation next, as the lighting is close to finished.

Posted in Collaboration Unit

Week7 Performance Animation: Re-recording

The changes that I have made

  1. The character on the left leans against the wall with his arms crossed, remaining disdainful.
  2. The character on the right stands with his hands on his hips, weight shifted into them, which shows that he is very angry.
  3. The distance between the two characters is short, which makes the quarrel more intense.
  4. The right character is grabbed by the left character just as he is about to turn around and walk away. Neither of them has much space to move.
  5. When the character on the left is accusing, his spine is straight. He puffs out his chest and jabs his hand straight into the chest of the character on the right; he is the dominant one.
  6. The character on the right is less powerful and slowly curls up.
  7. The distance between the two people is always very close, in the process of accusation, both people are slowly retreating.
  8. The rest of the content is cut, and an action of the left character lifting the right character’s collar is added as the ending pose.

Feedback: Express the anger more exaggeratedly — not just through body language; the facial expression can be enriched too.

Furious

Face squash and stretch with angry expression

Another issue is that I didn’t actually speak my lines while performing, which means I need to go back to the original video clips for inspiration.

Several points to pay attention to about expression

  • The eyebrows move closer together and squeeze downward
  • The facial features press toward the middle
  • The cheek (“apple”) muscles push upward
  • The pupils dilate and the eye shape changes
  • The shapes of the teeth and mouth are exaggerated

The other change I want to make is that when the left character criticizes, I will try to make his body lean forward while keeping his spine straight. I think some of the references are a little too upright. Although my own performance wasn’t great, I will pay attention to this.

The next step is to choose an appropriate character rig and start the blocking process, combining the references and suggestions above.

Posted in Advanced & Experimental 3D Computer Animation Techniques