2019-03-25

Mock Midterm Questions.

https://www.cs.sjsu.edu/faculty/pollett/185c.1.19s/?PracMid1.php
You can give the answers here in this thread.

-- Mock Midterm Questions
Question 2:
<!DOCTYPE html>
<html>
<head>
  <title>Simple A-Frame page with Cube</title>
  <script src="https://aframe.io/releases/0.8.2/aframe.min.js"></script>
</head>
<body>
  <a-scene background="color: #ECECEC">
    <a-box position="-1 0.5 -3" rotation="0 45 0"></a-box>
  </a-scene>
</body>
</html>
(Edited: 2019-03-25)

-- Mock Midterm Questions
1.

Telepresence: a system that allows the user to feel as if they are somewhere else in the world.

closed-loop: changes in the simulation depend on the actions of the user/organism.

synthetic: the world is completely invented / created from geometric primitives plus simulated physics.

captured: the world is based on video/images captured from the real world.
(Edited: 2019-03-25)

-- Mock Midterm Questions
3.

- The camera component/entity specifies the position and look direction of the user in the scene.

- To create a camera with look controls, one could use the following code:
 <a-entity camera="active: true" look-controls position="0 0 0"></a-entity>

- look-controls tracks the user's head motion and updates the entity's rotation accordingly (in this case, the rotation of our camera).
(Edited: 2019-03-25)

-- Mock Midterm Questions
10.

- Access the list of connected gamepads with navigator.getGamepads().

- Assume that one of our gamepads is stored in the variable g.

- Check whether the gamepad's pose has orientation information with g.pose.hasOrientation (a boolean property).

- Retrieve the orientation with g.pose.orientation (this is a quaternion).
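Putting those steps together, a minimal sketch for a VR controller that exposes the WebVR GamepadPose extension (the event handler and the choice of pads[0] are illustrative assumptions, not part of the question):

window.addEventListener("gamepadconnected", function () {
    var pads = navigator.getGamepads();           // list of connected gamepads (entries may be null)
    var g = pads[0];                              // assume our gamepad is the first entry
    if (g && g.pose && g.pose.hasOrientation) {   // hasOrientation is a boolean property of GamepadPose
        var orientation = g.pose.orientation;     // Float32Array quaternion (x, y, z, w)
        console.log("orientation quaternion:", orientation);
    }
});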
(Edited: 2019-03-25)

-- Mock Midterm Questions
5. Vection is the illusion of self-motion. It occurs when the user visually perceives that they are moving or accelerating while the body feels that it is motionless. Sometimes, when developing for VR, developers undergo perceptual training by continually using and testing their applications and VR devices. This form of adaptation may give an experienced VR user a different experience than a new user.
Weber's law states that the ratio of the just-noticeable difference in a stimulus such as light to its magnitude is constant: Δm/m = c, where Δm is the barely noticeable difference, m is the magnitude of the light, and c is a constant.
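As a quick numerical illustration (the Weber fraction here is made up for the example): if c = 0.02, then against a light of magnitude m = 100 the just-noticeable difference is Δm = c·m = 2, while against m = 1000 it grows to Δm = 20.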

-- Mock Midterm Questions
4. The vergence-accommodation mismatch problem is when a person's eyes need to change both focus (accommodation) and convergence for near objects, but a VR headset only drives convergence and not focus, causing the eyes to get tired. Ways to mitigate this are light-field displays and multi-focal-plane displays.
6. T = `[[cos(pi/3), -sin(pi/3), 0, 1], [sin(pi/3), cos(pi/3), 0, 2], [0, 0, 1, 3], [0, 0, 0, 1]]` = `[[1/2, -sqrt(3)/2, 0, 1], [sqrt(3)/2, 1/2, 0, 2], [0, 0, 1, 3], [0, 0, 0, 1]]`
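As a sanity check (an illustrative sketch, not part of the question), the same T can be built with the glMatrix library that the mat4 calls elsewhere in this thread come from:

var q = quat.create();
quat.setAxisAngle(q, [0, 0, 1], Math.PI / 3);    // rotation of pi/3 about the z-axis
var T = mat4.create();
mat4.fromRotationTranslation(T, q, [1, 2, 3]);   // rotation followed by a translation of (1, 2, 3)
// glMatrix stores matrices column-major, so T[12], T[13], T[14] hold the translation 1, 2, 3
// and the upper-left 3x3 block holds the cos(pi/3) / sin(pi/3) entries shown above.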
(Edited: 2019-03-25)

-- Mock Midterm Questions
7. `[[cos(pi/8)], [sin(pi/8) cdot [[0],[1],[0]]]]` = `[[0.924], [0], [0.383], [0]]`
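As a quick check (an assumed sketch, not part of the question), the same quaternion can be produced with glMatrix; note that glMatrix orders the components (x, y, z, w) while the answer above lists the scalar part first:

var q = quat.create();
quat.setAxisAngle(q, [0, 1, 0], Math.PI / 4);   // rotation of pi/4 about the y-axis (half-angle pi/8)
// q is now approximately [0, 0.383, 0, 0.924], i.e. x = z = 0, y = sin(pi/8), w = cos(pi/8)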
(Edited: 2019-03-25)

-- Mock Midterm Questions
9. A typical VR headset has a screen placed less than 10 cm from the eye, which is a shorter distance than even a young adult can accommodate. To fix this problem, we place a convex lens between the eye and the screen: the lens forms a magnified virtual image of the screen at a distance the eye can focus on, and the magnified image fills the field of view.
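As an illustrative thin-lens calculation (the numbers are assumed, not from the question): if the screen sits `d_o = 5` cm from a convex lens of focal length `f = 6` cm, then `1/d_i = 1/f - 1/d_o = 1/6 - 1/5 = -1/30`, so the lens forms a virtual image about 30 cm away, a distance the eye can comfortably accommodate.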

-- Mock Midterm Questions
8. getPoseMatrix(out, pose) is used to build the view matrix from the orientation of the head pose.
Looking into getPoseMatrix(poseMatrix, frame_data.pose): frame_data is filled in by vr_display.getFrameData(frame_data), the orientation is then read from frame_data.pose.orientation, and a 4x4 matrix is made from that quaternion using mat4.fromQuat(poseMatrix, frame_data.pose.orientation).
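A minimal sketch of how such a helper could be written (the actual class demo code may differ; glMatrix's mat4 is assumed):

function getPoseMatrix(out, pose) {
    // build a 4x4 rotation matrix from the head-orientation quaternion,
    // falling back to the identity quaternion if the display reports no orientation
    mat4.fromQuat(out, pose.orientation || [0, 0, 0, 1]);
}

// per-frame usage:
// vr_display.getFrameData(frame_data);
// getPoseMatrix(poseMatrix, frame_data.pose);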