This choppiness is not a perceived flicker but a perceived gap between the object in motion and the afterimage it leaves in the eye from the last frame. A computer samples a single instant in time, and nothing is sampled until the next frame is rendered, so a visible gap appears between the moving object and its afterimage. The reason computer-rendered video has a noticeable afterimage-separation problem while camera-captured video does not is that a camera shutter either interrupts the light two or three times per film frame, exposing the film to two or three samples at different instants, or admits light for the entire time the shutter is open, exposing the film to a continuous sample over that interval. These multiple samples are naturally blended together on the same frame. The result is a small amount of motion blur between one frame and the next, which lets them transition smoothly.
To better illustrate how motion blurring improves the fluidity of motion displayed on screen, consider the following example: a car travels at 30 m/s in the positive x direction, starting at x = 0. At a frame rate of 30 fps, Frame 1 covers the motion of the car from t = 0 s to t = 0.0333 s (1/30 s) and Frame 2 covers t = 0.0333 s to t = 0.0666 s. If the motion of the car is modelled with computer animation, the car in Frame 1 is drawn at x = 0.5 m (its average position during Frame 1) and at x = 1.5 m in Frame 2, leaving a 1 m gap between the car's positions in Frames 1 and 2.
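The arithmetic of this example can be sketched as follows. This is a minimal illustration, not renderer code; the helper names (`sampled_position`, `blur_interval`) are made up for this sketch.

```python
# Compare the single sampled position a simple renderer would draw with the
# range of positions a film camera's open shutter would smear across the frame.
V = 30.0        # car velocity, m/s
FPS = 30        # frame rate
DT = 1.0 / FPS  # duration of one frame, s

def sampled_position(frame):
    """Average position of the car during the given frame (as in the text)."""
    t_mid = (frame - 0.5) * DT   # midpoint of the frame's time interval
    return V * t_mid

def blur_interval(frame):
    """Range of positions the car sweeps while the frame is being exposed."""
    t0 = (frame - 1) * DT
    t1 = frame * DT
    return (V * t0, V * t1)

print(sampled_position(1))  # midpoint of frame 1: 0.5 m
print(sampled_position(2))  # midpoint of frame 2: 1.5 m, a 1 m jump
print(blur_interval(1))     # frame 1 streak spans 0 m to 1 m
print(blur_interval(2))     # frame 2 streak spans 1 m to 2 m: the streaks touch
```

The two point samples sit 1 m apart, while the two blur intervals share the boundary at x = 1 m, which is exactly why the filmed version shows no gap.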
If the motion of the car is instead captured on film at the same frame rate, its position on each frame can no longer be precisely pinpointed. The image of the car in Frame 1 is a blurred streak extending from x = 0 m to x = 1 m, and in Frame 2 a streak from x = 1 m to x = 2 m. The car is no longer sharply defined on either frame, yet when the two frames are played in sequence no gap appears between the two blurred images. Although the image on each frame is blurred, the motion appears smoother to the human eye.
An example of afterimage separation can be seen when making a quick 180-degree turn in a game in just one second. At a 60 Hz frame rate, a stationary object in the game is rendered 60 times, evenly spaced along that 180-degree arc, so the object and its afterimage are separated by 3 degrees. A small object and its afterimage 3 degrees apart are quite noticeably separated on screen.
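The same per-frame separation can be computed for other refresh rates. A small sketch, with the 180-degree/1-second turn from the example above as the assumed input:

```python
# Angular separation between an object and its previous-frame afterimage
# during a fast turn: total turn angle divided by the number of frames shown.
TURN_DEGREES = 180.0
TURN_SECONDS = 1.0

def separation_deg(refresh_hz):
    frames_shown = refresh_hz * TURN_SECONDS
    return TURN_DEGREES / frames_shown

print(separation_deg(60))   # → 3.0 degrees, matching the example
print(separation_deg(120))  # → 1.5 degrees: doubling the rate halves the gap
```

This also shows why higher refresh rates reduce (but never eliminate) the effect: the gap shrinks in proportion to the frame rate.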
One solution is to blend the extra frames together in the back-buffer (field multisampling); another is to simulate the motion blur seen by the human eye in the rendering engine. When vertical sync is enabled, the video card outputs at most one frame per monitor refresh, and all extra frames are dropped. When vertical sync is disabled, the video card is free to render frames as fast as it can, but the display of those frames is still limited by the monitor's refresh rate. For example, a card may render a game at 100 fps on a monitor refreshing at 75 Hz, but no more than 75 fps can actually be displayed on screen. Because of the extra rendered frames, a single displayed frame can then show parts of more than one rendered frame, approximating a motion blur effect.
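The back-buffer blending idea can be sketched as follows. This is a simplified model, not real driver or GPU code: each "frame" is a single pixel brightness, `blend_frames` is a hypothetical helper, and an integer ratio between render rate and refresh rate is assumed.

```python
# When the card renders faster than the display refreshes, average each group
# of extra rendered frames into one displayed frame instead of dropping them.
def blend_frames(rendered, render_fps, refresh_hz):
    """Average consecutive rendered frames down to the display rate."""
    per_display = render_fps // refresh_hz   # rendered frames per displayed frame
    displayed = []
    for i in range(0, len(rendered), per_display):
        group = rendered[i:i + per_display]
        displayed.append(sum(group) / len(group))
    return displayed

# A white dot flashing through a pixel, sampled at 150 fps, shown at 75 Hz:
frames = [0.0, 1.0, 0.0, 1.0]            # hypothetical per-frame pixel values
print(blend_frames(frames, 150, 75))     # → [0.5, 0.5]
```

Each displayed frame now carries information from two rendered instants blended together, which is the software analogue of the multiple shutter exposures on film described earlier.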