Enter Yes Placement Work

First Day 17/11/17

The studio is conveniently near my university so I was able to locate it quickly. The hours are 9-5, Mon-Fri, until 22nd Dec. My job involved almost exclusively animating various objects in Maya, with occasional modeling and some animation in After Effects. The current project was for BBC Bitesize, a free online study support resource for students in the United Kingdom. Enter Yes is working on the biology part of the GCSE section, creating animated videos with a voice-over to help students prepare for their exams. The videos feature allusive scenarios where common household items interact with one another, helping students memorise important biological concepts without the technicalities. The idea is to create a memorable scene/interaction that helps your brain picture the scenario when presented with the question in an exam.

Upon arriving I met a small team of people, along with two other students from my course, so it was nice already knowing someone. Ross Morrison wasn’t there yet, so I modeled a kids’ swimming armband to help the team before I was officially given a task. He is a producer within the company and the person I had kept in contact with via email.

 

Most models have to be kept at a lower polygon count where possible, as some machines don’t have powerful graphics cards or enough RAM.

After meeting Ross, I was given a brief explanation of what the company does and what my role would be. I almost exclusively helped out with animation, with some modeling work where needed. My first official task was to animate two candles melting with the words ‘complex’ and ‘simple’ engraved on them: simple carbohydrates burn quickly while complex ones take much longer. This scene was part of the nutrition and food section. It was the first time I learned how to use blend shapes. Hannah Turkington, my unofficial mentor and a former Ulster University student, gave me tasks and helped me throughout the whole placement period.

In all of these finished animations there’s a corresponding voice-over which explains what is happening. In the end we decided to remove the engraved ‘complex’ and ‘simple’ words that were part of the candle mesh, as they looked a little odd with blend shapes; they will be added in After Effects during post-production. I learned that for blend shapes to work, the meshes must always have the same number of vertices. To keep track of them, select Display > Heads Up Display > Poly Count.
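The vertex-count rule follows from how blend shapes are computed: the target stores a per-vertex offset from the base, so the two meshes must pair up vertex for vertex. A minimal plain-Python sketch (not the Maya API; the meshes here are made-up lists of points):

```python
# Illustrative sketch: a blend shape interpolates each vertex between a
# base mesh and a target mesh, which is why both meshes need identical
# vertex counts.

def blend(base, target, weight):
    """Per-vertex linear interpolation: base + weight * (target - base)."""
    if len(base) != len(target):
        raise ValueError("blend shape targets need identical vertex counts")
    return [
        tuple(b + weight * (t - b) for b, t in zip(bv, tv))
        for bv, tv in zip(base, target)
    ]

# A 'candle' edge melting toward its target shape.
base   = [(0.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
target = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # tip sinks as the candle melts
half_melted = blend(base, target, 0.5)         # top vertex now at y = 1.5
```

With a weight of 0 you get the base candle back, with 1 the fully melted target, and keyframing the weight between them produces the melt.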

20/11/17

Today I worked on the same script in the nutrition and food section, but on a different scene. It involved three industrial containers labeled ‘fat’, ‘protein’ and ‘carbs’, which open in time with the voice-over. The protein container has an engine inside it which sputters and stops until an oil can fixes it and it’s working again.

I messed up the oil can size and a lot of things had to be tweaked. This shows how constant communication is vital.

21/11/17

Still on the same script, I had to do a fusilli pasta scene with a milk carton: the pasta had to link up to form a long chain of sugars, and the milk carton had to pop open and pour out sugar cubes to follow the same theme. The pasta had to be remodeled by Hannah, as the previous version didn’t link up properly, making the ‘link’ in the chain choppy. In this studio scenes are constantly passed around, so a person who, for example, specialises in dynamics will take over the more complicated parts of a scene when needed.

I couldn’t use blend shapes for the milk carton, as its target had a different polygon count, so I attempted to make the motion snappier instead, as if it popped open, and dynamic sugar cubes could be added later.

‘Exercise and the circulatory system’ was a full script I had to complete by myself, containing five different scenes. A bellows representing heart rate had to increase in speed. Hannah showed me how to create an Alembic cache, which lets you control the speed of an animation without having to manually move keyframes around.

It is also a great way to treat some heavier scenes if you no longer wish to add any changes to your final animation; kind of like baking a scene.
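To illustrate the idea (a hypothetical plain-Python sketch, not the Alembic API): once motion is baked to one sample per frame, retiming becomes a matter of resampling those values at a different rate rather than editing keyframes.

```python
# Sketch of retiming a baked cache: a "cache" is just a list of
# per-frame values, and playback speed is a remapping of time.

def sample(cache, t):
    """Linearly interpolate a list of per-frame values at fractional time t."""
    t = max(0.0, min(t, len(cache) - 1))
    i = int(t)
    if i >= len(cache) - 1:
        return cache[-1]
    frac = t - i
    return cache[i] * (1 - frac) + cache[i + 1] * frac

def retime(cache, speed, frames):
    """Play the cache back at a constant speed multiplier."""
    return [sample(cache, f * speed) for f in range(frames)]

bellows = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # baked positions, one per frame
fast = retime(bellows, 2.0, 6)  # double speed: reaches the end in half the frames
```

The original keyframes never move; only the read-out rate changes, which is why this is such a convenient way to speed up or slow down a finished animation.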

The second scene involved a Newton’s cradle, where I attempted both hand animation and dynamics. The hand-animated version honestly looked too fake and there wasn’t enough flow to it.

Ross agreed and asked if I was comfortable trying dynamics. I had attempted them when sending my test video, and with a decent tutorial by Edge CGI on YouTube it wasn’t too difficult to recreate. I pinned the balls with nail constraints mimicking the actual threads that hold them, and later parented the strings so they matched the constraint curves (so they would render out as normal). However, as stated in the script, having the far-right ball stop for a moment before a dramatic swing meant that I couldn’t simply let the animation play through, as different speeds were needed to represent how the heart beats when exercising. The Alembic cache was no luck, but I found that keyframing gravity itself achieved the effect I was looking for.

Dynamics are unpredictable at times too so it’s best to play out the animation several times to get the desired effect as sometimes the balls didn’t react as expected.
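The gravity-keyframing trick makes sense when you consider that gravity sets the tempo of a simulation: a pendulum under stronger gravity completes its swing proportionally faster (period scales with 1/√g). A rough sketch, assuming a simple pendulum stepped with explicit Euler rather than Maya’s solver:

```python
import math

# Sketch of the "keyframe gravity" idea: you can't easily retime live
# dynamics, but scaling gravity changes how fast the motion plays out.

def swing_time(gravity, length=1.0, theta0=0.3, dt=0.001):
    """Step a pendulum until it first crosses the bottom (theta = 0)."""
    theta, omega, t = theta0, 0.0, 0.0
    while theta > 0.0:
        omega -= (gravity / length) * math.sin(theta) * dt  # angular acceleration
        theta += omega * dt
        t += dt
    return t

slow = swing_time(9.81)
fast = swing_time(4 * 9.81)  # 4x gravity -> roughly half the swing time
```

Quadrupling gravity roughly halves the swing time, which is exactly the lever needed to match the cradle’s motion to the voice-over’s pacing.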

22/11/17

The third scene was one of the most challenging and frustrating yet. I also modeled a Scalextric car controller for it.

The script stated that the cars had to race together, abruptly stop with their tires melting, then be ‘pumped’ back by the controllers to drive even faster with embers gushing from their tires. I remembered animating along a curve with motion paths in first year and decided to try that approach, since the cars are racing around a track. First I added some grooves in the path and learned how to generate a curve from polygon edges, rather than drawing it manually, for the cars to follow. Animating this way worked well for the first lap, but it was extremely fiddly when keyframed at various speeds, with no looping available. I couldn’t make the cars come to a complete stop when they were supposed to die down. The Alembic cache couldn’t save me either, so I kept the cars going almost a full lap until they stopped as the voice-over indicates. I then started a new curve with copied car geometry (hiding the first round of cars and lining the new ones up) and let them go again. With both animations cached, I was able to control and exaggerate the speed for the final race.
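In effect, the motion-path approach reduces to driving a single parameter along the curve, so all the speed control lives in how that parameter changes over time. A hypothetical sketch with a plain circle standing in for the track and an ease-out curve bringing the car to its stop:

```python
import math

# Sketch of motion-path speed control: the car's position is a function
# of a parameter u along a closed curve, and varying u's rate over time
# (not the car's transform) gives variable speed, including a full stop.

def circle_point(u, radius=5.0):
    """Point on a circular track for parameter u in [0, 1]."""
    angle = 2 * math.pi * u
    return (radius * math.cos(angle), radius * math.sin(angle))

def ease_out(t):
    """Parameter ramps quickly, then decelerates to a stop at t = 1."""
    return 1 - (1 - t) ** 2

# 31 frames of a car easing to a halt after one lap.
frames = [circle_point(ease_out(f / 30)) for f in range(31)]
```

The distance covered per frame shrinks toward zero at the end, which is the “complete stop” that straight keyframing of the path made so fiddly.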

The scene turned out really well but it did involve a lot of problem solving on my part which took a couple of days to fully complete.

27/11/17

After receiving feedback on the script above, I just had to increase the speed of the bellows and it was good to go. I then started on a new script explaining how DNA works. The client wanted something inspired by Mr. Robot or The Matrix: binary numbers acting as a DNA sequence.

This fan-made video shows a possible approach to introducing the DNA section, acting as a title card with rain code. I approached the first scene in After Effects, which I only know the basics of, so I needed to look up some tutorials. I combined information from two tutorials, by CardusBox and Flat Pack FX, to create my own version of the binary code.

It turned out nicely for a first draft though the client will decide if that is what they are looking for.

28/11/17

The following scene involved a bowl of fusilli pasta and a fork picking a piece up to examine it up close, representing the coding strand and amino acids. It was a simple animation: I parented a piece of pasta to the fork and animated a couple of faces to make it look a little more dynamic.

The next scene involves a row of slot machines that get the code ‘right’ and flash while the camera pans out.

Basic and simple animation!

29/11/17

Another scene, from the sex, hormones and characteristics script, had to be completed in After Effects. I had to create a sonar displaying male and female icons on screen getting closer to each other in time with the VO. The scene represents the changing hormones of the sexes causing attraction to one another. I used three tutorials to help me create a sonar radar from scratch: one each for the base, the HUD elements and the numbers. Clients prefer simple, straight-to-the-point visuals, so I decided to keep it simple and recognisable.

Using some of these tips it was easy to come up with a realistic sonar base to start off.

I decided to use a simple 360 constant gradient for radar sweeps as the main focus should be male/female icons as targets.

This tutorial helped me to create a number range around the sonar which represents degrees.

After tweaking the sweep, the colours of the targets and the timing of the fade, I came up with this final result.

My next task was to take on the genetic conditions script, which involves six scenes and had to be completed by Monday evening.

  • A pair of ripped jeans representing faulty genes, which fly off the screen into a rack full of mother’s and father’s jeans.
  • Two piles of paired and unpaired socks representing Down’s Syndrome, with the leftover sock as the extra chromosome that prevents an even 23 pairs.
  • A pie with constantly flowing filling that doesn’t clot/stop, representing Haemophilia, and a bruising pie which represents human skin.
  • A party whistle representing the faulty lungs caused by Cystic Fibrosis.
  • An hourglass with chunky food that can’t pass through the hole after flipping, representing impaired digestion.
  • A brain and spine still, representing Huntington’s disease, which affects nerve cells in the brain, eventually causing it to shut down.

1/12/17

I wasn’t sure how to approach the ripping in the jeans scene, so I began looking into tutorials on tearable nCloth. The idea was to tear the cloth with a ball that would be hidden for the final render, so the rip would appear in time with the voice-over.

The ball was set as a rigid body and I triangulated the nCloth beforehand. I still couldn’t get the desired look, even after increasing the polygon count and playing with various settings. We decided to change the script and have detailed rips already in the scene, with panning camera movement to make it more interesting. This saved a lot of time and the visuals fit just fine without on-screen ripping. Still having trouble animating the jeans with nCloth, I decided to rig them with a simple parent/constraint setup and animate them manually.

4/12/17

By Monday morning I had finished animating the rest of the rack’s movement. The sign on top will be textured to say ‘Mom’s and Dad’s’, with the mother’s side taking up 70% of the sign space, representing how much each side influences the offspring’s genes.

For my second scene, I duplicated socks until I had exactly 47 (23 pairs + 1 leftover) in a pile, to show the number of chromosomes a person with Down’s Syndrome has. I used nCloth to drop the socks into a realistic pile, then duplicated them and froze their positions. Each time 2 socks disappear, 1 pair appears on the right-hand side, leaving a single sock behind at the end of the VO.

In the beginning I attempted to use Bifröst in Maya to create a realistic fluid simulation. Using a SnapLaunch tutorial I learned about its sensitivity to scale and how the numbers influence its appearance. Playing around with it was fun, but I soon found that the machines in the studio couldn’t handle the simulation and it was difficult to even playblast the results.

The whole process took a really long time so I had to come up with an easier solution for the following day.

5/12/17

There was no point in playing around with it further as time was getting tight for me to finish the whole script for Thursday evening so I decided to use the good ol’ Blend Shapes instead.

It was quick, effective and got the point across so there was no need to overcomplicate the process.

For the 4th scene, the party horn, I had to remodel the paper tube extension, as the original model was only rolled-up geometry. Using a non-linear bend deformer, I was able to animate it weakly filling with air, then manually animated some faces for the fully inflated whistle to make it more believable.
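Conceptually, a bend deformer wraps geometry onto a circular arc whose radius is the inverse of the curvature, which is why keyframing a single curvature value can unroll the whole tube. A toy sketch (an assumed simplification: only a 2D centerline, not the Maya deformer itself):

```python
import math

# Toy bend deformer: map points on a straight vertical line onto a
# circular arc of radius 1/curvature. Keyframing 'curvature' from high
# to zero would animate the party-horn tube unrolling.

def bend(points, curvature):
    """Map points (x=0, y=height) on a straight centerline onto an arc."""
    if curvature == 0.0:
        return list(points)  # zero curvature: the line stays straight
    radius = 1.0 / curvature
    return [
        (radius * (1 - math.cos(y / radius)), radius * math.sin(y / radius))
        for _, y in points
    ]

tube = [(0.0, y / 10) for y in range(11)]  # straight paper-tube centerline
rolled = bend(tube, 3.0)                   # high curvature: curled up
straight = bend(tube, 0.0)                 # zero curvature: fully extended
```

The base of the tube stays pinned at the origin while the tip curls around, mirroring how the deformer bends the mesh from its anchor point.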

The Cystic Fibrosis scene is linked with the 5th, hourglass, scene. I attempted to convert the pieces of food in the hourglass to active rigid bodies and the glass itself to a passive one, in the hope that the food would move about without passing through the glass when flipped. The machines were once again overloaded, as there were too many pieces of high-polycount food, so I decided to leave this scene and complete it at home after having some time to think of an approach. Instead I tackled the final scene, which had to be completed in After Effects, so I would at least have one more scene done by the end of the day.

The idea was to create a motorway with a city in the distance that looks a lot like a brain and spine. In post-production, appropriate sound effects of a car driving and then crashing would be added, shutting down the entire city to match the VO.

At home I decided to group some bits of food and manually animate their movement; as it is a short scene, it was easy to get away with less detail. The hourglass then floats off the screen as stated in the script.

6/12/17

After receiving feedback this morning for the above script I had to make the following changes.

  • Scene 1: Change the camera angle so you’re looking from the top down, remove the complicated rigged scene and pull the trousers off of the screen fast instead of seeing them float away. Mom’s side of the rack is the only one with movement.
  • Scene 2: Change the camera movement and timing. Add a visible ‘bounce’ when the piles of socks appear to give it more life. Have the leftover sock fall onto the new pile rather than lie beside it.
  • Scene 3: Have the blend shape constantly increase/flow on top of the pie and only stop at the very end of the voice-over.
  • Scene 4: Add another party whistle that works correctly so there’s a comparison between the two.
  • Scene 5: The scene does not look good compared to the previous ones. Hand it over to Danilo or Hannah to have a dynamic glass smash instead of it floating off.
  • Scene 6: The After Effects version doesn’t match the previous 3D work, so the scene will have to be completed in Maya to add more dimension. Hand over to Rebecca.

Now the camera angle is correct, the rigged scene is removed and mom’s side of the rack is the only one with movement.

There is more dynamic camera movement, making the scene more interesting. The piles of socks now have a slight bounce when they appear, and the timing issues are fixed. I used nCloth to get a nice landing on the new pile.

The timing of the blend shape was adjusted; it now flows smoothly throughout the whole animation, only stopping at the end.

A second whistle was added for comparison. Blend shapes distorted the paper tube, so I had to keyframe the non-linear bend deformer as well as keyframe the faces manually for the best look.

7/12/17

I got another round of feedback after completing yesterday’s critique. Scene 2 and scene 3 are good to go leaving 1 and 4 to be tweaked a little more.

Trousers just needed to disappear a bit quicker than before so there’s barely any time to react.

The party whistles took a while to get right, but we eventually got there after a total of four more changes. The first attempt was to have the defective whistle fully extend like the working one but look much less inflated and weaker. The working one had to look ‘powerful’.

Then we decided it was best not to fill the defective one with air fully, as it was difficult to see the main differences, and to remove the extra twitching from the working one.

After more feedback, Ross wanted the whistles to extend/contract at the same rate to show off the difference.

For the final attempt we decided to keep the correctly working one extended while the defective whistle tries to extend 3 different times but recoils after each attempt. This final version got approved.

8/12/17

I had to complete a title card for non-communicable diseases, introducing the script. For this scene I had the freedom to use whatever software I wanted, as long as the letters were jumbled up in the beginning and eventually spelled out the words ‘Non-communicable diseases’. I decided to use Maya, as a previous After Effects attempt appeared too flat, so I took no risks with this tight deadline.

It was a fairly straightforward piece to complete: I simply animated the letters in various ways to make it more interesting and fun. The idea is to show how they’re out of place and jumbled, hence non-communicable.

11/12/17

Today we were visited by a placement student, Claire, who is interested in 3D animation. I was in charge of showing her some of the work we do in the studio and explaining basic concepts of 3D animation. She has some previous experience in 2D animation and showed me some of her work, which was really impressive. After explaining some Maya basics, my next task was to make some fixes after receiving further feedback from the BBC while Claire shadowed me. I picked up the osmosis script this time, which involved two simple scenes.

Previously in this scene the camera did not follow the rising motion; it was a static shot, and the client wanted more dynamic movement. I also had to cut the scene shorter, as previously it ran to 25 seconds rather than 15 with zero action. A new scene had to be introduced at 15 seconds to match the VO. It was a simple fix of moving keyframes around, which allowed me to explain to Claire what the graph editor is and how the axes work within the software.

In the former version the small toys travelled through the holes only once and remained on the side opposite the bigger blocks, though the VO indicates that they can travel back and forth, so the client wanted this to be clearer. The larger toys’ movement had to appear after 10 more seconds had passed in the second scene, with the small toys moving continually until the very end, where the voice-over indicates that both sides are equal. It was a finicky and time-consuming task, but easy in terms of the actual animation. I showed Claire how keyframes can be copied to cut corners and save time if she ever decides to pursue a 3D career.

Another piece of feedback was to remodel the key from a vintage style to a Yale style to make it more modern.


This was a quick opportunity for me to show Claire how to model in Maya. Overall it was a nice experience to showcase my work and talk about the course and studio work with someone who is just dabbling in it. She seemed to enjoy her time and learned a lot of basics.

12/12/17

At this stage there was a lot of waiting around for feedback and constant changing and fixing of existing scenes; any changes that came through had to be applied by whoever was available. I modeled a fancy cake server for one of the scenes that needed changes. Feedback for the enzymes section also came through.

The new key was swapped in, the words on the locks were rotated from sideways so the reader could make them out more easily, and more dynamic camera movement, following the key in action, was added.

For this scene I tried to add more movement in the camera since it was previously completely static. I also fixed up some minor animation issues where some lego blocks didn’t properly connect to each other.

13/12/17

After receiving more feedback, the client wanted the DNA title card approached differently. The ‘How does DNA work?’ title had to be omitted, and the constantly falling text is to be seen at all times without forming any words. Ross suggested adding key words within the binary code just as the VO mentions them.

Using the same tutorials as before, I managed to create a cool effect with jumbled words mimicking the binary falling alongside them. I changed the colour of the key words to white so they would stand out immediately as the voice-over mentions them. I also added a glow to the whole scene to make it seem more futuristic.

14/12/17

The BBC provided us with further feedback on the nutrition script. There were some changes in timing, and the client wanted protein to be represented by an additional food rather than pointing at both the steak and the potato. No arrows in the final product; the food should react to being mentioned instead.

There were also talks of replacing steak with something else to represent fat so Hannah showed me how to inflate mesh objects using nCloth and I also referenced back to a tutorial by Stuart Christensen which helped me with gravity and general inflation settings.

From the ‘How DNA works’ script we also received some feedback on the slot machine scene I worked on earlier. To make the scene easier to understand, it was decided that all of the slot machines will get the code wrong until the camera pans down to the final machine in the sequence, showing the correct code followed by ‘winning’ sounds.

Sound-work and lights would have to be done in either After Effects or Nuke during post-production so it would be clear for the viewer which is the correct chain link.

15/12/17

Today we had to work quickly, as after 2 o’clock five pizzas and a copious amount of alcoholic beverages would be present in the studio for the staff party. I was given two tasks to complete before that deadline. The first was a simple fix: making the mechanical arm from the genetic engineering scene (the one I completed as a test in the beginning) come from the top of the screen rather than the bottom, which only took a couple of minutes. The other task was a little more complicated, as various people had different opinions on what the end result should look like. Ross suggested having the crowbars rise one after another like a domino effect, instead of lying on the floor completely static, and then doing one wiggle at the end rather than two as done previously.

These crowbars were rigged with a lot of joints so the wiggle would appear smoother. I decided not to mess with the rig, as it had a tendency to pop off the geometry. Instead, I unlocked ‘rotate X’ in the attributes of the geometry and animated it, keeping equal spacing of keyframes between each crowbar in case an adjustment in speed was needed. When the crowbars stood up, the joints were in their initial position, which did not disturb the previously animated wave. Two different angles were done to see which would flow better with the previous scenes when editing them together for the client.

18/12/17

For the first portion of the day I had to completely redo the crowbar scene, as the other computers could not open the file due to the high polygon count of the crowbars. It took only about an hour, as I knew what to do without yesterday’s trial and error.

The nutrition scene had to be tackled once more with the changes requested the previous day. Ross decided to replace the steak with a slice of cake to represent fat, and to add an egg for protein. I found that using nCloth on a mesh made of more than one piece of geometry (even after combining it) does not work: pieces of cake started to float off the screen when our goal was simply to inflate it. To work around this, I quickly modeled a similar slice as a single piece of geometry, with distinct regions to highlight the differences between jam and sponge so texturing will be easier in the future. The cake still had to be high-poly so the nCloth inflation would look believable.


Hannah also showed me how to apply a surface shatter effect to the inflated cake as it was previously done with the steak once hydrogen and oxygen is mentioned in the VO. It has to pop!

All the changes were applied, with correct timing, the food reacting to being described, the addition of an egg, and the inflated cake.

19/12/17

Almost a full day was spent texturing various scenes to be sent off to clients to appear more pleasing. Global warming script also needed a quick fix as an old broccoli model was replaced. To highlight deforestation, I animated broccoli sprouts falling down one by one and quickly regenerating once reforestation comes up in the voice-over as a solution. Regrowth was a simple job of scaling sprouts in the scene and having them pop back in their full glory at the end of it.

20/12/17

Further revision on the crowbar scene was needed. The domino-like effect had to be removed as it takes too long to fully happen and the fluttering effect had to be softened as it appeared too aggressive in the previous version.

In order to get a softer wave we had to remove a good chunk of the crowbars, as there was an obvious, unrealistic pattern. Ross also wanted a dramatic ‘whoosh’ and for the crowbars to return to their initial position by the end of the VO. I played with the graph editor, and the camera angle had to be changed for the desired effect.

Non-communicable diseases script needed changes in two more scenes.

A simple turn on of the tap had to be animated with camera slowly panning out of the shot.

Previously this final scene had no camera movement, and the animations were mixed up and did not correspond to the voice-over. I had to frame the shot on the pot being described. Further spreading of the vines, representing cancerous tumors, will be completed in Nuke or After Effects in post-production.

The final fix of the day was to remove the dynamics from the Jenga scene, as the BBC did not think they worked well with the overall look. They preferred the two towers to remain intact, with both of them wobbling instead. I put a simple bend deformer through the geometry so it would be easy to keyframe the curvature value.

After further feedback I had to remove the momentum and simply have the donor tower slowly hit the recipient.

21/12/17

In the defense mechanism script it was a simple camera angle change. I also had to remove the ground where the spoon lands in a believable way, as it was a cached animation. Previously we had a jelly that was changed to a pie after feedback, so the new food had to align with the spoon’s motions.

DNA introduction video also had to be tweaked where all words apart from ‘enzymes’ and ‘proteins’ had to be removed.

Further texturing and camera work was done in various scenes to quicken up the pace for Hannah so she could edit all playblasts for the client.

22/12/17

This was my final day at the studio and in all honesty it was an amazing experience. I learned many tips and tricks in these 6 weeks from each and every person in the studio, got to work with paying clients and experienced a fixed schedule with tight deadlines. I would love to work with this company in the future and hope to stay in contact throughout final year.

Enter Yes Placement 17/11/17-22/12/17

Enter Yes™

I did around six weeks of placement in a Belfast company called Enter Yes. This year I am trying to stay in the same area and get to know some local companies that work alongside our university. It’s important for me to have connections in the area where I’m studying, as my goal is to work in a smaller studio with a tight-knit group of people and build my way up. Established in 2016, they specialise in a wide variety of work: “motion capture, concept design, pre-visualisation, shoot supervision, compositing, 3D effects, CG effects, 3D and 2D animation, motion graphics, software development, AR/VR and experimental production, live action production and grading”, as stated on their page. I am excited to try to be part of the team and gain some experience working there.

 

Jenga Test 13/11/17

My lecturer Alec offered to pass my email to an employer looking for short-term animators to meet a tight deadline. Although animation is not what I specialise in, and it’s an unpaid position, it ticks all of the boxes for where I’d like to be in the future. Experience working in an actual studio is valuable to me, and working from home is extremely difficult when there’s zero discipline. I received a script, a storyboard and a voice-over, and had to complete an animation test within a 7.5-hour limit and report back with my results in two days. It consisted of two Jenga towers and a rigged mechanical arm. The goal was to take a piece from the ‘donor’ tower and place it onto the ‘recipient’ tower, causing the former to collapse. The recipient would have to bounce back into its original position without falling. Below are the script and the storyboard I received to help me visualise and approach this animation.

genetic_engeneering_script
gENETIC_ENGENEERING_storyboard

Admittedly, I had to redo it twice, as I had never used dynamics nor did I know how the graph editor worked, so the first result wasn’t satisfactory. I used a simple rigid body tutorial to get some insight into how I wanted the Jenga tower to behave, as I believed rigid bodies would produce the most realistic collapse.

It covers the most basic functions, but it was all I needed to start. After animating the hand, I turned the donor tower’s Jenga blocks into rigid bodies.

The base effect was there, but there was no resistance or follow-through sliding. I played with static friction and dynamic friction in the attribute editor on various blocks until I got the desired effect.

Results were much better in terms of surface resistance but I still had to adjust gravity by forcing it to fall and hit the neighbouring tower. I turned the recipient tower into a passive rigid body so the Jenga pieces would react to hitting it. My final task was to have the recipient tower react to being hit without falling so I used the first 4 minutes of this cube rigging tutorial to achieve the effect I was imagining.

It uses a simple parent/constraint method to rig the cube, along with a lattice for a more flexible effect. It was enough for me to complete the test and achieve the following result within the time frame I was given.

I received the great news that this was exactly what the company was looking for and was asked to start working the following day. It was a little difficult to travel and find a place at such short notice, as I had relocated back to Donegal, but I managed to commute for the first couple of days before settling in closer.

Change of Plans, Baking and xGen

Change of Direction

With our company project taking a bit longer than expected, I ran out of time to sculpt a realistic rat. I have decided to focus on xGen fur instead and combine the knowledge gained from it with sculpting for my summer project. Learning from this project will help me get even better results when revisiting the rat model in the future. I will be working on the same model I sculpted in zBrush last semester.

My main goals are to look into two different xGen approaches to fur and to find ways to work with a higher-resolution model rather than my old retopology from the previous semester, which had lost its details.

Baking

Maya has a function where you can bake normal maps. I followed this tutorial which explains the basics of this function and how it actually works.

It’s a technique where you project the details from your high-poly model onto your low-poly model, widely used in games where the poly count must stay low for smooth performance. You must overlay your high-poly model with your retopologised piece for it to work; it should be an almost perfect match. The path to begin the baking process is Rendering > Lighting/Shading > Transfer Maps. While attempting it, I noticed that my retopologised model did not align properly, which caused major issues.
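A hypothetical 1D sketch of what a bake like this is doing under the hood: every sample on the low-poly surface casts a ray toward the high-poly mesh and records the offset it finds, and samples whose ray misses within the search envelope come back empty (here the ‘meshes’ are just a flat line of samples and a height function, not real geometry):

```python
# Toy 1D bake: low-poly samples look straight up/down (their 'normal')
# for the high-poly surface and store the signed offset they find.

def bake_offsets(low, high, envelope=1.0):
    """Record, per low-poly sample, the signed distance along the vertical
    normal to the high-poly surface; None when it lies outside the envelope."""
    offsets = []
    for x, y in low:
        offset = high(x) - y
        offsets.append(offset if abs(offset) <= envelope else None)
    return offsets

# Low-poly stand-in: a flat row of samples. High-poly detail: a raised bump.
low_samples = [(x / 4, 0.0) for x in range(5)]
bump = lambda x: 0.25 if 0.25 <= x <= 0.75 else 0.0
captured = bake_offsets(low_samples, bump)              # the bump is captured
missed = bake_offsets(low_samples, bump, envelope=0.1)  # envelope too small
```

This also shows why a misaligned retopology breaks the bake: when the two surfaces don’t overlap closely, the offsets fall outside the envelope or record garbage instead of fine detail.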


Setting your display to Envelope helps you see where Maya is going to shoot rays and where they will hit your model; ideally your retopology must match the original model in order for it to fit into the mold. I attempted to manually increase its size (to a ridiculous amount), though at this stage I knew I would give up and look for other ways to preserve detail in zBrush.

Obviously it didn’t work, as my source mesh did not match the low-poly model. I was also told that I should smooth out the edge flow, though at this stage new ways of working around the issue were on my mind. I had a quick look into zBrush projection and found a much simpler approach.

Projecting in zBrush was fairly simple. I duplicated the high-poly model, hid the original and used zRemesher to knock down the poly count. Then I simply turned the visibility of the original sculpt back on (with the low-poly model selected) and projected the detail. It worked well: the poly count was still high, but there were no lagging issues when importing the new model into Maya.
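The core idea behind projecting detail is to pull each vertex of the remeshed model toward the closest point on the original sculpt. A minimal pure-Python sketch of that nearest-neighbour idea (zBrush's projection is far smarter, with normals, distance limits and smoothing, but the principle is the same):

```python
# Toy "project detail": snap each low-res vertex to the nearest
# vertex of the high-res sculpt. Illustration only, not zBrush's code.
import math

def project(low_verts, high_verts):
    """Return each low-res vertex replaced by its nearest high-res vertex."""
    return [min(high_verts, key=lambda h: math.dist(v, h))
            for v in low_verts]

high = [(0, 0, 0), (1, 0.1, 0), (2, -0.2, 0)]  # detailed sculpt
low  = [(0.1, 0, 0), (1.9, 0, 0)]              # zRemeshed duplicate
print(project(low, high))
# each low vertex snaps to the closest point of the sculpt
```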

xGen

One of my classmates introduced me to an amazing tutorial by Tom Newbury which covers the basics of hair, brows, lashes, beard and peach fuzz, most of which can be adapted for my rat! His technique involves masking a region in zBrush rather than working with UVs and maps in Maya, then simply importing it into your preferred software. Masking manually saves time, especially when you're working with complicated characters. His video involves separating sides, scalp, brows and peach fuzz, so to test out the technique I tried selecting only the head of my rat.

head before decimation master

The basic technique is to mask the area where you want hair growth, then press Ctrl+W and hit Shift+F to see your new polygroup. Then you duplicate the mesh to keep the original untouched, use Delete Lower and Delete Higher in the Geometry palette of your masked model, and Ctrl+Shift-click on your model to reveal the newly separated piece of geometry. Once this is all done you can press Delete Hidden et voilà! A fresh scalp is ready for hair. You can also use Decimation Master and zRemesher to reduce the polygon count and project the detail back afterwards.
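Stripped of the zBrush UI, the mask-and-extract workflow amounts to splitting the face list into a masked piece and a remainder. A toy sketch of that bookkeeping (names are purely illustrative):

```python
# Toy "mask -> polygroup -> delete hidden": split a mesh's faces into
# the masked piece and the rest, mirroring the zBrush scalp workflow.

def extract_masked(faces, mask):
    """faces: list of face ids; mask: set of masked face ids.
    Returns (extracted piece, remaining geometry)."""
    piece = [f for f in faces if f in mask]
    rest  = [f for f in faces if f not in mask]
    return piece, rest

faces = ["head", "body", "tail", "paws"]
piece, rest = extract_masked(faces, mask={"head"})
print(piece)  # the fresh "scalp", ready for hair
print(rest)   # the untouched remainder
```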

I imported this piece of geometry into Maya and began the process of generating hair. Using zRemesher isn't a necessary step, as the hair will work fine without it (unless you're animating), but the result is simply more pleasing to the eye as a still.

Getting Started

With xGen it's important to keep your project set correctly and your files named properly, as creating a new description generates folder paths with various collections that can pile up if you are working with several scalp pieces. I imported my geometry, named it nicely, deleted history, froze transformations etc. so everything was clean. I began by creating a new description, naming it along with the collection, and selecting splines, randomly across the surface, with placing and shaping guides.


In the tutorial, it was advised not to place your guides too close to the edges, as the hair gets too dense there, and to keep the shaping guides well spaced out and at a minimum. After selecting the guides in the Outliner, rebuild the guides and set your desired length. In Preview/Output, set spline segments to 1 to prevent lagging, but increase the modifier CV count under Primitives to around 20-25 to get detail back later on.
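Rebuilding a guide is essentially resampling the same curve with a different number of CVs, which is why a low segment count previews quickly while a higher modifier CV count recovers detail later. A toy pure-Python sketch of resampling a 1-D guide by linear interpolation (not xGen's actual spline maths):

```python
# Toy guide rebuild: resample a guide curve to a fixed CV count by
# linear interpolation, like raising xGen's modifier CV count.

def rebuild_guide(points, cv_count):
    """Resample a polyline (1-D positions here) to cv_count points."""
    if cv_count < 2:
        raise ValueError("need at least 2 CVs")
    out = []
    for i in range(cv_count):
        t = i / (cv_count - 1) * (len(points) - 1)
        lo = int(t)
        hi = min(lo + 1, len(points) - 1)
        frac = t - lo
        out.append(points[lo] + (points[hi] - points[lo]) * frac)
    return out

guide = [0.0, 1.0, 4.0]         # sparse guide CVs along a strand
print(rebuild_guide(guide, 5))  # five evenly spaced CVs
```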


This is the result! Afterwards I added a Taper modifier to thin out the ends of the hair the way real fur does. At this stage you can start shaping your guides manually to create some flow.
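A taper modifier just scales the strand's width down toward the tip. A minimal sketch of one plausible formula, width(t) = base x (1 - taper x t) for t from root to tip (an assumption for illustration, not xGen's exact equation):

```python
# Toy taper modifier: shrink strand width linearly toward the tip.
# width(t) = base_width * (1 - taper * t), t in [0, 1] along the strand.

def taper_widths(base_width, taper, segments):
    """Width at each of `segments` evenly spaced points along a strand."""
    return [base_width * (1 - taper * i / (segments - 1))
            for i in range(segments)]

print(taper_widths(base_width=1.0, taper=0.8, segments=5))
# widths shrink linearly from the root toward the tip
```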

Here I used Guides to Curves in Utilities and adjusted them to what I felt a natural rat fur flow should be. These can be readjusted at any time. Selecting guides individually and resizing them affects the length of the generated hair, so you have a lot of control over exactly the clusters that need it.


I played around with the modifiers, adjusting Clump, Noise and even Cut, to get a somewhat frizzy flow of hair on my rat.

Playing around with these settings was fun and fiddly at the same time. With so much control, the downside of this type of description is definitely time: it takes a long while to test all of the options when you're after a specific look, especially when you're covering a whole animal rather than a single section. I decided to look into groomable splines as a different method of creating fur, in case it worked more efficiently for a rodent.

Groomable Splines

For this method I followed a two-part tutorial which covers groomable splines in great detail, with the reasoning behind certain choices and how the tools perform.

My general conclusion from using this method is that you must always have a reference beside you while working on splines; it's easy to get carried away with an incorrect flow since the splines are so easy to manipulate. The method consists of selecting the faces which will have fur on them; in my case that is everything except the bottoms of the paws, the nails and the eyes. There is peach fuzz around the ears and paws, so those all have to be selected. These are the reference images I referred back to the most, from two Instagram accounts, lilysrats and its_a_rats_world_mag.

They have amazing close-up shots of these rodents, which helped me decide on the length and direction of hair growth.

It was most efficient to select all faces first and then deselect the ones that will have no fur generated on them. In the tutorial above I discovered that the Paint Selection tool can be used for this. I wish I had discovered it earlier, as it saved so much time compared with clicking individual faces with the mouse to deselect them. I also merged the paws and the main body together (as they were modeled as separate subtools in zBrush) in order for this face-selection method to work. Going to Create > Sets > Quick Select Set… lets you save your selection of faces, so you can always adjust it and save the new set as you fix things up. After generating fur, I kept going back and forth deselecting unwanted faces from the paws that I had forgotten about when merging them into the body, until all of the fur behaved correctly with no overlapping.
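A quick select set is really just a named, editable collection of face ids that you can re-save as you refine it. A toy sketch of that select-all-then-deselect bookkeeping (names are illustrative, not Maya's API):

```python
# Toy quick-select sets: save a named face selection, then trim it
# down, like Create > Sets > Quick Select Set in Maya.

sets = {}

def save_set(name, faces):
    """Store (or overwrite) a named selection of face ids."""
    sets[name] = set(faces)

def deselect(name, faces):
    """Remove faces from a saved set, e.g. eyes, nails, paw bottoms."""
    sets[name] -= set(faces)

all_faces = range(10)
save_set("fur_faces", all_faces)   # select everything first...
deselect("fur_faces", [3, 4])      # ...then knock out the bald areas
print(sorted(sets["fur_faces"]))
```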

I changed the colour of the fur, as suggested in the tutorial, so I could see more easily when fur became too short or disappeared (shown in red). The Pose grooming tool was used at the start to roughly dictate the direction of the rat's hair flow, along with Elevation to lift the fur back up. I played around with the length of the fur in certain places, such as the nose and paws, to get the correct look. The increment and goal length settings were tricky to figure out, but once I got them, they were fun to play with.
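My reading of the increment and goal length settings is that each brush stroke nudges the fur length toward the goal by a fixed increment, clamping at the goal. A toy sketch of that behaviour, outside Maya (an assumption for illustration, not xGen's documented internals):

```python
# Toy groom length brush: each stroke moves the fur length toward the
# goal length by a fixed increment, never overshooting the goal.

def stroke(length, goal, increment):
    """One brush stroke: step toward `goal` by at most `increment`."""
    if length < goal:
        return min(length + increment, goal)
    return max(length - increment, goal)

length = 0.2
for _ in range(4):                 # four strokes over the same spot
    length = stroke(length, goal=1.0, increment=0.3)
print(round(length, 2))            # converges on the goal length
```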

This tool is so much easier to use when it comes to animals that have fur all over. It saves a lot of time and the results are great. I was able to capture a decent flow in the face, around the eyes and neck.

I changed the fur to a silver coat to match my rat reference and kept testing the generated fur. While testing I noticed a couple of bald patches in several places, as well as faulty flow in the paws and tail. It took several attempts of touching up and fixing to get the fur to sit correctly. I composed a quick turntable video examining the final result before rendering.

I added a quick Lambert shader to the base mesh to add more colour. It's not necessarily an accurate representation of a realistic colour scheme, but my only focus in this project was to test and learn how xGen performs when it comes to fur.

Final Renders

I love the final results, and the fur rendered beautifully. Of course, areas such as the paws and tail should differ in colour, but that is a new area to explore during my summer project. I'm happy with the flow and appearance of the fur. It would be nice to see how it performs in an animated setting with a rigged model. I will definitely explore xGen further with human models, as it is such a fun tool. Credit to my lovely rat Sushi for being a great model in this project.