STORY REEL
ANIMATION
Animating a human is easier than animating a quadruped. For a human like Arya, we can act out the motion and record ourselves if we need reference. However, we cannot act out a quadruped. Animation research was therefore focused on finding live references of how canines move. The videos below show dogs and foxes walking, trotting, running, and hopping.
This video shows a dog walking, pacing, and trotting, in both real time and slow motion, to break down each paw movement. While it does not directly name transverse or rotary gaits, the video does list the paw-to-ground contact order.
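As a cross-check while studying those references, the paw-to-ground contact order per gait can be written down as phase offsets within one stride cycle. A minimal Python sketch, with idealized offsets (not measured from the videos):

```python
# Footfall timing per gait, as phase offsets within one stride cycle (0-1).
# Walk is a 4-beat lateral-sequence gait; pace pairs same-side legs;
# trot pairs diagonal legs. LH = left hind, LF = left fore, etc.
GAITS = {
    "walk": {"LH": 0.00, "LF": 0.25, "RH": 0.50, "RF": 0.75},
    "pace": {"LH": 0.00, "LF": 0.00, "RH": 0.50, "RF": 0.50},
    "trot": {"LH": 0.00, "RF": 0.00, "RH": 0.50, "LF": 0.50},
}

def footfall_order(gait: str) -> list:
    """Return the paw-to-ground contact order for a gait,
    grouping paws that land at the same phase."""
    phases = {}
    for paw, t in GAITS[gait].items():
        phases.setdefault(t, []).append(paw)
    return [sorted(group) for t, group in sorted(phases.items())]
```

For example, the walk produces four separate beats (LH, LF, RH, RF), while the trot collapses into two diagonal pairs, which matches the footfall breakdowns in the slow-motion footage.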
This video shows the slow motion movement of an arctic fox
through its exhibit.
This last one shows a fox running, playing, and biting. In contrast to the previous videos, which exist purely for motion reference and demonstrate a simple back-and-forth motion, this one shows a fox acting without instruction and with no real constraints beyond being in a large room. It is more useful for understanding how the creature behaves naturally.
And finally, this video is a clip from The Fox and the Hound showing the foxes running. It uses cartoon squash and stretch to emphasize their movements and make them aesthetically pleasing. Bringing that technique into our own animations would give them more impact for the audience.
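Squash and stretch is usually implemented as volume-preserving scaling: stretch along the motion axis and compensate the perpendicular axes so the character never gains or loses volume. A minimal Python sketch, not tied to any particular rig:

```python
import math

def squash_stretch(stretch: float) -> tuple:
    """Volume-preserving squash and stretch.

    Scale along the motion axis by `stretch` (>1 stretches, <1 squashes)
    and compensate the two perpendicular axes so total volume stays
    constant: sx * sy * sz == 1.
    """
    side = 1.0 / math.sqrt(stretch)
    return (side, stretch, side)  # (x, y, z), with motion along y
```

Stretching to 1.5x along the motion axis, for instance, thins the perpendicular axes to about 0.816x, so the product of the three scales stays 1.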
ENVIRONMENTS
This is just some interior reference to get an idea of what the room may look like: there are dark spots on the walls, some debris on the floor, peeling wallpaper, dusty and broken furniture, old broken frames, and a toy.
CHARACTERS
Character Hair:
https://forums.unrealengine.com/showthread.php?105929-4-11-Released!&highlight=hair+spline
We are looking at Unreal Engine 4.11's new shading models: hair, eyes, and cloth. The focus is on hair for now.
The new MOBA game Paragon uses UE 4.11's new shading models to create its character materials.
Because the scenes of our cinematic call for dramatic lighting, we want the hair to respond to that lighting and look good in it as well. If we want to achieve the effect that the Paragon character models have in our cinematic, we would follow what the Paragon artists did: a mixture of hair planes, actual geometry (larger chunks for the braid), splines, and forward/camera-facing quad strips.
LAYOUT
This week we made a couple of changes: we changed the storyboard background and relocated some of the room items.
As the planned layout artist for our cinematic, I did some research on Matinee in UE4. With the storyboard we made, I have a basic plan for how to translate our storyboards into the Matinee camera sequence in the engine, with the right timing for each clip. The following is my plan for how to do this. Since we will have our story reel soon, for now I will describe it using the draft storyboards we have.
Draft scene one: we start with Arya in the destroyed room (with her animal companion; her mother has just died). She looks lost in the middle of a war. Our camera will be in front of her, starting on her eyes, then her face, gradually pulling back until the entire interior of the room is visible. Then her animal companion guides her out of the room and the building; they eventually emerge onto the street, with the entire city as a backdrop behind them, and begin their journey.
Up to this point the camera zooms out the whole time, until Arya and the animal leave the room (the camera will then start to pan left or right to show them leaving).
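Before blocking this in Matinee, the pull-back can be roughed out as a single eased distance curve. A sketch in Python, where the distances and shot length are placeholder values, not final numbers from our storyboards:

```python
def ease_in_out(t: float) -> float:
    """Smoothstep easing: slow start and slow end, similar to
    curve-eased keyframes in a Matinee movement track."""
    return t * t * (3.0 - 2.0 * t)

def camera_distance(t: float, duration: float,
                    start: float = 0.3, end: float = 12.0) -> float:
    """Camera-to-subject distance for the pull-back shot.

    t        -- seconds into the shot
    duration -- shot length in seconds
    start    -- close-up distance on the eyes (placeholder, metres)
    end      -- distance showing the whole room (placeholder, metres)
    """
    u = max(0.0, min(1.0, t / duration))
    return start + (end - start) * ease_in_out(u)
```

Plotting this curve against the story reel timing would tell us where to drop the camera keyframes so the zoom-out reads evenly instead of lurching.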
So in UE4, I will work closely with Larry to create a Matinee sequence: set the clip timing based on the story reel, adjust the sequence length, place camera actors, adjust the camera composition, and position the individual elements of the scene as blocking.
LIGHTING
New Unreal Engine Features for Version 4.11
There are a myriad of new features in version 4.11 that can make our cinematic that much better. Collected here are the new lighting features.
An overview from
the developers can be seen here:
The following text and pictures can be found here:
https://www.unrealengine.com/blog/unreal-engine-4-11-released
NEW: CAPSULE SHADOWS
Unreal Engine now supports very soft indirect shadows cast by a capsule representation of the character. There are no formal tutorials yet, but the process is outlined here:
https://answers.unrealengine.com/questions/359384/anyone-know-how-to-get-capsule-shadows-to-work.html
NEW: LIGHTMASS PORTALS
Skylight quality indoors can now be massively improved by setting up Portal actors over openings. Portals tell Lightmass where to look for incoming lighting; they don't actually emit light themselves. Portals are best used covering small openings that are important to the final lighting. This yields higher quality light and shadows, as Lightmass can focus rays toward the incoming light. (Below left: without portals. Below right: with portals.)
A forum post full of tests and settings can be found here:
NEW: LIGHTING CHANNELS
Lighting channels allow dynamic lights to affect only objects when their lighting channels overlap. We now have support for up to 3 lighting channels.
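Conceptually, the overlap test is just a bitmask intersection: a light and an object each carry a channel mask, and the light only affects the object if the masks share a bit. A small Python sketch of that rule (the engine's own data structures differ; this is only the logic):

```python
# Each channel is one bit; UE 4.11 exposes three (channels 0-2).
CHANNEL_0, CHANNEL_1, CHANNEL_2 = 0b001, 0b010, 0b100

def light_affects(light_channels: int, object_channels: int) -> bool:
    """A dynamic light lights an object only if at least one channel
    is enabled on both, i.e. their channel masks overlap."""
    return (light_channels & object_channels) != 0
```

So a dramatic key light restricted to channel 1 would hit a hero character on channels 0 and 1, but skip set dressing left on channel 0 only.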
And some other features (among numerous others) that can help other disciplines:
NEW: REALISTIC HAIR SHADING
We've added a physically based shading model for realistic hair based on the latest research from film.
NEW: REALISTIC EYE SHADING
You can now give your characters highly realistic eyes using Unreal Engine's new physically-based shading model for eyes.
NEW: IMPROVED SKIN SHADING
We've improved the quality and performance of the Subsurface Scattering Profile shading model for realistic skin.
NEW: REALISTIC CLOTH SHADING
We've added a physically based shading model for cloth. This simulates a fuzz layer and will produce more realistic results for cloth than were achievable before. To use it, choose the Cloth shading model in the material editor.
NEW: PARTICLE DEPTH OF FIELD
New material functions allow small, out-of-focus particles to be expanded for depth of field the same way that opaque particles would be rendered.
NEW: DITHERED OPACITY MASK
You can now use Dithered Opacity Mask to emulate a translucent surface using an opaque material.
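Under the hood, dithered opacity is screen-door transparency: each pixel is kept or discarded by comparing the surface opacity against a repeating ordered-dither threshold pattern. A Python sketch of the idea using a standard 4x4 Bayer matrix (the engine's exact pattern may differ):

```python
# Standard 4x4 Bayer matrix; per-pixel thresholds are (value + 0.5) / 16.
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dithered_opaque(x: int, y: int, opacity: float) -> bool:
    """Screen-door transparency: keep the pixel iff the surface opacity
    beats the per-pixel Bayer threshold. An opaque material masked this
    way reads as `opacity`-translucent from a distance."""
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return opacity > threshold
```

At 50% opacity, exactly half the pixels in any 4x4 tile survive, which is why the surface averages out to translucent while still rendering down the cheap opaque path.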
NEW: DITHERED LOD CROSSFADES
Static meshes can now smoothly crossfade between levels-of-detail using an animated dither pattern!
NEW: PARTICLE CUTOUTS (FAST FLIPBOOK PARTICLE RENDERING)
Particle cutouts allow for flipbook particles to render as much as three times faster!
Particles using flipbook animations (Sub-UV Animation module) tend to have quite a bit of wasted overdraw - areas where the pixel shader had to be executed, but the final opacity was zero. As an example, the texture below is mostly comprised of transparent pixels.
NEW: PER-VERTEX TRANSLUCENT LIGHTING
Lit translucency can now be rendered much faster using new per-vertex translucent lighting settings!
And last but not least:
NEW: MAJOR PROGRESS ON SEQUENCER (EXPERIMENTAL)
Sequencer is our new non-linear cinematic animation system. Sequencer is still under heavy development and we don't recommend using it for production work quite yet, but you're welcome to try it out and send us feedback! Expect to hear a lot more about Sequencer in an upcoming UE4 release.
Notable new features in Sequencer for 4.11:
New tracks: Shot/director, play rate, slomo, fade, material, particle parameter tracks.
Improved movie rendering; .EXR rendering support.
Improved keyframing behaviors, copy/paste keyframes, copy keys from matinee, 3d keyframe path display.
Master sequence workflow, so you can have sub-scenes within a larger sequence
Support for "Spawnables" (actors that exist within your cinematic asset)
UI improvements: track coloring, keyframe shapes/coloring, track filtering.
You can turn on Sequencer by opening the Plugins panel and enabling the "Level Sequence Editor" plugin, then restarting the editor.
A Sequencer livestream is scheduled for Thursday, April 7th @ 2:00PM:
https://forums.unrealengine.com/showthread.php?106521?utm_source=launcher&utm_medium=chromium&utm_term=blog&utm_content=sequencerstream&utm_campaign=communitytab
PRODUCTION DESIGN
I looked into finding a style for the cinematic that would really bring the visuals to life and have a bigger impact on the audience.
The look I found that really hit home with me is what Blur Studio did for League of Legends (https://www.youtube.com/watch?v=vzHrjOMfHPY). I know Tamara and a few others really wanted to do some realistic-looking hair, but as the production designer I really want to keep the characters stylized, and I feel this one best represents that.
That being said, there were a few others that I really liked for how stylized they are.
For this one, I like how well he fits the style we are already going for while still having a normal map.
The simplicity yet elegance of this one really strikes home with me. It is one of the easiest ones to do, yet still at a quality that would be great for a cinematic.
The part I liked about this one is the beard. I feel they could have done a lot more with the hair, but the way each strand is simply sculpted in is what caught my eye.
This one keeps the stylized look and has more realistic-looking hair, but I think the LoL one is better.
TECHNICAL
FACIAL ANIMATION USING FACEWARE
Because the cinematic will focus closely on Arya's face, we will need sophisticated and expressive animations. For this reason, we've spoken with Jon Albertson about using Faceware technology, which motion-captures a live actor's facial movements.
Faceware offers three major pieces of software: “Live 2.0”, “Analyzer 3.0”, and “Retargeter
5.0”.
Live 2.0 and Analyzer 3.0
Live 2.0 and Analyzer 3.0 both offer markerless facial motion capture. However, Live 2.0 uses a live webcam feed, whereas Analyzer 3.0 uses video files, which can be anything from recorded webcam footage to scenes from movies and TV shows. Both are markerless, working purely off facial recognition technology.
Retargeter 5.0
Retargeter takes facial motion capture from Analyzer and lets you apply it to any facial rig via a Maya plug-in. It works with a wide variety of facial rigs and offers region-based controls. This means you can take only certain regions of a facial mocap shoot (the eyebrow and mouth movements, for instance), allowing more control and finesse in how you break down facial mocap data, pick out the parts you want, and apply them to a rig. It also offers pose-based workflows, such as expression sets for visemes, letting one efficiently organize mocap data into divisible parts.
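Setting Faceware's actual file formats and Maya API aside, region-based retargeting boils down to filtering mocap channels by facial region before applying them to the rig. A hypothetical sketch (the channel and region names here are made up for illustration; real rigs use their own naming):

```python
# Hypothetical mocap frame: channel name -> normalized activation.
MOCAP_FRAME = {
    "brow_L_raise": 0.7, "brow_R_raise": 0.6,
    "mouth_open": 0.4, "mouth_smile_L": 0.2,
    "eye_blink_L": 1.0, "eye_blink_R": 1.0,
}

# Hypothetical region -> channel-name-prefix mapping.
REGIONS = {
    "brows": ("brow_",),
    "mouth": ("mouth_",),
    "eyes": ("eye_",),
}

def select_regions(frame: dict, regions: list) -> dict:
    """Keep only the channels belonging to the chosen facial regions,
    mirroring Retargeter's region-based controls."""
    prefixes = tuple(p for r in regions for p in REGIONS[r])
    return {k: v for k, v in frame.items() if k.startswith(prefixes)}
```

For example, selecting only "brows" and "mouth" would let us keep a good brow and lip performance from one take while hand-animating the eyes.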
Faceware offers a free Personal Learning Edition as of March 21st, but it can only be obtained after first completing the free trial. We have signed up for the free trial; they emailed that they will send the next steps soon, but they have not yet sent the download information, so we will be following up with them over the next few days.