2019-05-13

Final Practice Solutions.

1. Suppose we have left and right eye panorama source videos left.mp4 and right.mp4 (already lens corrected) and we would like to produce a 360 3D video. Give the FFMPEG command to do this.
2. Briefly define the following kinds of eye movements: Saccades, smooth pursuit, vergence.
3. Explain the following kinds of depth cues: Motion parallax, accommodation, height in the visual field.
4. Go over the process needed to get the humanoid model Ethan to follow one's gaze in a VR scene using Unity.
5. What is a prefab in Unity? What are the classes Input and OVRInput? Give example methods of each.
6. What is a Reichardt detector? What is stroboscopic apparent motion? What is a low persistence display mode and what is it used for?
7. Give pseudo-code for object-order and image-order rendering.
8. Explain the Blinn Phong shading model where an ambient lighting term has been added. Give an example of calculating something with it.
9. Express the point (0,0.5,0) in barycentric coordinates with respect to the triangle with vertices (−1,0,0), (0,1,0), (0,0,1)
10. Explain how software correction of optical distortion is done.
(Edited: 2019-05-13)

-- Final Practice Solutions
5. Prefabs are stored GameObject templates that remember all property values, components, and child GameObjects, so they can be re-used and instantiated multiple times later. In Unity, Input is a class that reads any general input (keyboard, mouse, VR controllers), whereas OVRInput is designed specifically to read VR controls from the Gear VR and Oculus Go controllers. OVRInput is better if you want to check whether VR-controller-specific controls (touchpad, primary index trigger, back button) are pressed. OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger) and Input.GetButtonDown("Fire1") will both return true when the index trigger of the Oculus Go controller is pressed, but on a mouse click only Input.GetButtonDown("Fire1") will return true.

-- Final Practice Solutions
2. Saccades are quick eye movements, traveling at up to 900 degrees per second, that rapidly move the eye through important details.
Smooth pursuit movements are slower eye movements that smoothly track a moving object at up to about 30 degrees per second.
Vergence is when the two eyes rotate toward or away from each other so that both fixate on the same object.

-- Final Practice Solutions
8. Blinn-Phong shading can be used to handle reflections off a shiny surface as well (specular lighting). Its equation is:
L = (R,G,B) = dI max(0, n·ℓ) + sI max(0, n·b)^x
Here d and s are the material's diffuse and specular color coefficients, ℓ is the direction to the light, b is the angle bisector between ℓ and the view direction v, and x is a material property controlling the shininess of the material; its value usually ranges from 100 (mildly shiny) to 10000 (acts like a mirror). When an ambient lighting term is added, a constant term L_ambient is simply summed into L.
An example similar to our in-class exercise (just take out L_ambient):
A pixel's color value comes from a triangle which acts red diffusely and green specularly. Suppose we have a light source with intensities (.7,.5,.3), and that n·ℓ = .6 and n·b = .9. Let x = 5000. What would be the final color of the pixel?
d = (1,0,0)
s = (0,1,0)
I = (.7,.5,.3)
n.l = .6
n.b = .9
x = 5000
L = (R,G,B) = (1,0,0)(.7,.5,.3)max(0,.6) + (0,1,0)(.7,.5,.3)max(0,.9)^5000 = (.42,0,0) + (0,.5,0)(.9)^5000 ≈ (.42,0,0), since (.9)^5000 is vanishingly small (products of color triples are taken componentwise).
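The arithmetic above can be checked with a quick Python sketch (products over the color triples taken componentwise, as in the worked example):

```python
# Blinn-Phong example from above, computed componentwise over (R, G, B).
d = (1, 0, 0)        # diffuse color: red
s = (0, 1, 0)        # specular color: green
I = (.7, .5, .3)     # light source intensities
n_l, n_b, x = .6, .9, 5000

L = tuple(di * Ii * max(0, n_l) + si * Ii * max(0, n_b) ** x
          for di, si, Ii in zip(d, s, I))

# The specular term .5 * .9**5000 is on the order of 1e-229,
# so the final color is effectively (.42, 0, 0).
print(L)
```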
(Edited: 2019-05-19)

-- Final Practice Solutions
6.
A Reichardt detector is a brain circuit which responds to directional motion. Feature-detecting neurons feed higher-level motion-detection neurons that respond when a feature moves from one spot to another. Speed can be detected by neurons at this level through variations in the lengths of the input paths from the feature-detection neurons. Oddly spaced features, such as spokes on a wheel, together with motion can confuse this circuitry, for example making the spokes look like they are going backwards even though they are going forwards.
Stroboscopic apparent motion is caused when a sequence of still images is shown quickly, yielding the appearance of motion. The effect can still be perceived at frame rates as low as 2 frames per second, even though images persist in the visual cortex for about 100 ms.
Low persistence display mode has the image quickly turn off after being displayed so you can't see the judder motion as much. The short amount of time the display is on is still enough for the eye to collect light from the scene. Low persistence is used to reduce judder: when pixels change instantly at each new frame time, we perceive a lagging, wobbling drift in motion between frames.
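The delay-and-correlate idea behind a Reichardt detector can be sketched in a few lines of Python (an illustrative toy, not a biological model; the receptor signals and delay below are made up):

```python
# Minimal delay-and-correlate Reichardt detector sketch.
# Two "photoreceptors" A and B sit a short distance apart; a feature moving
# rightward stimulates A first and then B, and leftward motion the reverse.

def reichardt(a, b, delay=1):
    """Correlate each receptor's delayed signal with the other receptor.
    Positive output suggests A-to-B (rightward) motion, negative the reverse."""
    out = 0.0
    for t in range(delay, len(a)):
        out += a[t - delay] * b[t] - b[t - delay] * a[t]
    return out

# An edge passing receptor A at t=1 and receptor B at t=2 (rightward motion):
a = [0, 1, 0, 0]
b = [0, 0, 1, 0]
print(reichardt(a, b))   # positive: rightward
print(reichardt(b, a))   # negative: leftward
```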
(Edited: 2019-05-13)
2019-05-16

-- Final Practice Solutions
3. Height in the visual field: Due to perspective projection, the horizon is a line that divides the view in half, between sky and ground. Objects closer to the horizon are often perceived as farther away.
Accommodation: How much our eyes have had to accommodate (adjust the focus of the lens) to see an object clearly tells us something about its distance.
Motion parallax: Seeing one object with the same eye from two different viewpoints can be a depth cue. Nearby objects change visual position in the eye more than far-away objects moving the same amount in a given time. This relative difference in motion is called parallax.
(Edited: 2019-05-16)
2019-05-19

-- Final Practice Solutions
4. Ethan must be an AIThirdPersonController, and we set up his AI Character Control script by dragging WalkTarget onto its Target value. After that, we must create a NavMesh, which specifies where Ethan can go; any object he cannot walk through must be marked as an obstacle so that paths are routed around it. The NavMesh transforms the walkable region into a graph, on which the shortest-path algorithm A* can be used to get Ethan from one place to another. Finally, we create a script that moves WalkTarget to wherever the user is gazing, so that Ethan follows one's gaze.
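The graph-search step can be illustrated with a small grid-based A* in Python (a toy sketch: Unity's NavMesh agent performs this internally, and the hand-made grid with one blocked cell stands in for a baked NavMesh):

```python
# Toy A* over a 4-connected grid, standing in for a NavMesh graph.
import heapq

def astar(start, goal, walkable):
    """Return a shortest path of cells from start to goal, or None."""
    def h(c):                        # Manhattan-distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for n in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if n in walkable and n not in seen:
                heapq.heappush(frontier, (g + 1 + h(n), g + 1, n, path + [n]))
    return None

# A 3x3 region with an obstacle in the middle: the path must route around it.
cells = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}
path = astar((0, 1), (2, 1), cells)
print(path)
```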

-- Final Practice Solutions
7.
Object-Order Rendering (cycles through triangles then pixels):
  initialize z-buffer
    for each triangle
      for each pixel covered by the triangle
        compute color and z
        if z is closer than the current z-buffer value
          update the color and z of the pixel
Image-Order Rendering (cycles through pixels then triangles):
  for each pixel
    for each triangle
      if this is the closest triangle at this pixel
        compute color and z
        set the color of the pixel
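The object-order loop above can be made concrete with a small Python sketch (a toy: real rasterizers compute pixel coverage from the triangle vertices, but here each triangle is pre-reduced to a list of (x, y, z, color) fragments so the loop structure stays visible):

```python
# Toy object-order (z-buffer) renderer over a tiny framebuffer.
WIDTH, HEIGHT = 4, 4
INF = float("inf")

# Each "triangle" is pre-reduced to its covered pixels: (x, y, z, color).
triangles = [
    [(1, 1, 0.8, "red"), (2, 1, 0.8, "red")],   # farther triangle
    [(1, 1, 0.3, "blue")],                       # nearer triangle, overlaps (1,1)
]

# initialize z-buffer and color buffer
zbuf  = [[INF]  * WIDTH for _ in range(HEIGHT)]
color = [[None] * WIDTH for _ in range(HEIGHT)]

for tri in triangles:                # for each triangle
    for (x, y, z, c) in tri:         # for each pixel covered by the triangle
        if z < zbuf[y][x]:           # z closer than the current z-buffer value?
            zbuf[y][x] = z           # update the depth
            color[y][x] = c          # and the color of the pixel

print(color[1][1], color[1][2])      # the nearer triangle wins at (1,1)
```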
(Edited: 2019-05-19)

-- Final Practice Solutions
10.
Software correction of optical distortion is done by computing the inverse of the optical distortion using polar coordinates (r, theta) instead of (x, y). We can do this because barrel/pincushion distortions are radially symmetric, which makes the inverse easier to approximate: the distortion depends only on r, not theta. The inverse is calculated using an approximation:
  f^-1(rd) ≈ (c1*rd^2 + c2*rd^4 + c1^2*rd^4 + c2^2*rd^8 + 2*c1*c2*rd^6)/(1 + c1*rd^2 + c2*rd^4)
These values can be stored in an array (a lookup table indexed by rd) for fast access. Sometimes this calculation is worked directly into the perspective transformation as a post-processing step on the GPU, known as distortion shading.
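The lookup-table idea can be sketched in Python using the approximation above (the coefficients c1 and c2 below are made-up values for illustration; real values come from calibrating the lens):

```python
# Precompute an inverse-distortion lookup table from the approximation above.
c1, c2 = 0.22, 0.24   # assumed lens distortion coefficients (illustrative only)

def f_inv(rd):
    # the radially symmetric approximation: depends only on the radius rd
    num = c1*rd**2 + c2*rd**4 + c1**2*rd**4 + c2**2*rd**8 + 2*c1*c2*rd**6
    den = 1 + c1*rd**2 + c2*rd**4
    return num / den

# Tabulate once over the radii that occur on screen, then index at render time.
N = 256
lut = [f_inv(i / (N - 1)) for i in range(N)]
```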
(Edited: 2019-05-19)

-- Final Practice Solutions
9. Express the point (0,0.5,0) in barycentric coordinates with respect to the triangle with vertices (−1,0,0), (0,1,0), (0,0,1)
 
if given point p = (0, .5, 0) and triangle vertices p1 = (-1, 0, 0), p2 = (0, 1, 0), p3 = (0, 0, 1), we can calculate
  a1 = s(d22*d31 - d12*d32) = 1/3(2*1.5 - 1*1) = 1/3(2) = 2/3
  a2 = s(d11*d32 - d12*d31) = 1/3(2*1 - 1*1.5) = 1/3(.5) = 1/6
  a3 = 1 - a1 - a2 = 1 - 2/3 - 1/6 = 1/6
where
  s = 1/(d11*d22 - d12*d12) = 1/(2*2 - 1*1) = 1/3
where
  dij = ei · ej
where
  e1 = p2 - p1 = (1, 1, 0)
  e2 = p3 - p1 = (1, 0, 1)
  e3 = p - p1 = (1, .5, 0)
Since a1 multiplies e1 = p2 - p1 and a2 multiplies e2 = p3 - p1, a1 is the weight of p2, a2 is the weight of p3, and a3 is the weight of p1. The barycentric coordinates of p with respect to (p1, p2, p3) are therefore
  (a3, a1, a2) = (1/6, 2/3, 1/6)
As a check, a3*p1 + a1*p2 + a2*p3 = 1/6(-1, 0, 0) + 2/3(0, 1, 0) + 1/6(0, 0, 1) = (-1/6, 2/3, 1/6). This is not exactly p because p lies slightly off the triangle's plane; the combination gives the projection of p onto that plane.
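The dot products and weights above can be verified with a short Python script (plain stdlib):

```python
# Recompute the barycentric weights from the dot products above.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

p = (0, 0.5, 0)
p1, p2, p3 = (-1, 0, 0), (0, 1, 0), (0, 0, 1)

e1, e2, e3 = sub(p2, p1), sub(p3, p1), sub(p, p1)
d11, d22, d12 = dot(e1, e1), dot(e2, e2), dot(e1, e2)
d31, d32 = dot(e3, e1), dot(e3, e2)

s  = 1 / (d11 * d22 - d12 * d12)
a1 = s * (d22 * d31 - d12 * d32)
a2 = s * (d11 * d32 - d12 * d31)
a3 = 1 - a1 - a2

print(a1, a2, a3)   # ≈ 2/3, 1/6, 1/6
```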
(Edited: 2019-05-19)