Friday, 10 May 2013

2. Displaying 3D Polygon Animations


API
API stands for 'application programming interface'. It's a specification of the routines, protocols, and tools used for building software applications. It is essentially used by software components to communicate: it defines the routines, data structures, object classes, and variables they share with each other. Because of its structure, an API makes it easier to develop a program by condensing and organising all the pieces needed, which then just need to be put in the right places by the programmer. An API can take the form of POSIX, the Microsoft Windows API, or even the libraries of a programming language itself, such as the Standard Template Library in C++ and the Java API. APIs are important in 3D CGI because they provide the link between a program and the graphics hardware it draws with.

Direct3D
One of the best-known 3D graphics APIs is Direct3D, which is actually a subset of Microsoft's DirectX graphics API. Interestingly, it is the graphics API for the Xbox and Xbox 360, and of course also for the Microsoft Windows operating system. Direct3D's job is to handle situations where detailed 3D graphics are being rendered and performance is very demanding: it provides programming commands the system can use to keep performance higher and more consistent. This is a common need in video game software, which is why it was implemented in the Xbox.
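
As a rough illustration, here is a minimal sketch of how a Windows program might ask Direct3D 11 for a device, the object through which rendering commands are issued. It assumes the Windows SDK is installed and d3d11.lib is linked; any real program would go on to create swap chains, buffers, and shaders.

```cpp
// Minimal sketch: obtaining a Direct3D 11 device and context.
// Windows-only; assumes the Windows SDK headers and d3d11.lib are available.
#include <d3d11.h>

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL level;

    // Ask the default hardware adapter for a device. The feature level
    // returned tells us which Direct3D capabilities this GPU supports.
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        nullptr, 0, D3D11_SDK_VERSION,
        &device, &level, &context);

    if (SUCCEEDED(hr)) {        // rendering commands would be issued here
        context->Release();
        device->Release();
    }
    return 0;
}
```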

OpenGL
OpenGL (Open Graphics Library) is another API specification for 3D graphics rendering. OpenGL is a specification rather than a single library: it defines the functions and their behaviour, and each vendor supplies its own implementation. Because OpenGL is so widely used, graphics cards usually ship with an OpenGL implementation. Unlike Direct3D, OpenGL is not tied to particular platforms, meaning it is much more flexible: applications can be written that work with many types of graphics card, and they remain compatible when moved to another device or to updated hardware.
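
To give a flavour of the API, here is a minimal sketch that draws a single coloured triangle with legacy (fixed-function) OpenGL, using GLUT to create the window and context. The window title and coordinates are illustrative only.

```cpp
// Minimal sketch: one coloured triangle drawn with legacy OpenGL via GLUT.
// Assumes a GLUT implementation (e.g. freeglut) is installed and linked.
#include <GL/glut.h>

void display() {
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_TRIANGLES);                     // start a triangle primitive
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();                                   // colours blend across the face

    glFlush();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutCreateWindow("OpenGL triangle");
    glutDisplayFunc(display);
    glutMainLoop();
}
```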

Graphics Pipeline
Graphics pipelines (or rendering pipelines) can be used for different things, but when referring to 3D computer graphics we mean the sequence of algorithms that converts objects and scenes into flat images and video. Graphics pipelines are part of APIs such as Direct3D and OpenGL. They are the part which takes the information of a three-dimensional primitive as input and outputs a 2D bitmap image.
 
(A diagram showing how a graphics pipeline operates; the order of the stages is essential to it working as intended.)
(http://goanna.cs.rmit.edu.au/~gl/teaching/Interactive3D/2013/images/pipeline.png)
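
To make the data flow concrete, here is a toy sketch of the whole pipeline boiled down to a few lines. The types, window size, and per-stage logic are heavily simplified stand-ins for what real pipelines do in dedicated hardware; the following sections look at each stage properly.

```cpp
// Toy sketch of the pipeline's data flow: 3D vertices in, 2D pixels out.
// Everything here is a simplified stand-in for what GPUs do in hardware.
#include <vector>

struct Vertex   { float x, y, z; };           // corner of a 3D primitive
struct Fragment { int px, py; float depth; }; // candidate pixel on screen

std::vector<Fragment> runPipeline(const std::vector<Vertex>& verts) {
    std::vector<Fragment> out;
    for (const Vertex& v : verts) {
        if (v.z <= 0.0f) continue;    // clipping: drop geometry behind the camera
        float sx = v.x / v.z;         // projection: divide by distance
        float sy = v.y / v.z;
        // viewport transform: map roughly [-1,1] onto a 640x480 window
        out.push_back({ int((sx + 1.0f) * 0.5f * 640),
                        int((sy + 1.0f) * 0.5f * 480), v.z });
    }
    return out;                       // rasterisation then fills between these
}

int main() { runPipeline({ {0,0,5}, {1,0,5}, {0,1,5} }); }
```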


Per-vertex lighting and shading
Any 3D scene or 3D object needs lighting and shadows to make it appear 3D. The way light is rendered onto objects involves a complex algorithm which defines where the light will hit, at what intensity, and where shadows will be cast. The positioning of the light sources, the reflectance, and other surface properties all contribute to the final render. As 3D assets are usually made up of polygons and vertices, the graphics pipeline normally computes lighting only at the vertices. The lighting might be dramatically different from one vertex to the next; while this would look okay on a cube or other solid shape, on a more natural form the values between vertices need to be interpolated during rasterisation so they blend together for a more natural and realistic lighting effect. Modern graphics hardware also supports per-fragment (per-pixel) lighting, which is applied after rasterisation, as well as per-vertex shading using vertex shaders. These effects are done via a shading program (which may already be incorporated into the 3D modelling program).
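
As an illustration, here is a minimal sketch of the classic diffuse (Lambertian) calculation that per-vertex lighting is usually built on: the brightness at a vertex depends on the angle between the surface normal and the direction to the light. The function names and values here are illustrative only.

```cpp
// Minimal sketch: diffuse (Lambertian) lighting computed at one vertex.
// brightness = lightIntensity * max(0, N . L), where N is the surface
// normal and L is the direction to the light, both normalised.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalise(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

float diffuseAtVertex(Vec3 normal, Vec3 toLight, float lightIntensity) {
    float nDotL = dot(normalise(normal), normalise(toLight));
    return lightIntensity * std::max(0.0f, nDotL);  // facing away => no light
}

int main() {
    // A vertex facing straight up, lit from directly above: full brightness.
    std::printf("%.2f\n", diffuseAtVertex({0,1,0}, {0,1,0}, 1.0f));
}
```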

Clipping
Clipping is an essential process; it ensures that geometric shapes or polygons that fall outside the field of view are not rendered, and so are discarded. Not having to render all these extra shapes that we can't see anyway means the processor has more time and memory to put into the tasks that are needed, so the game or program will ultimately run faster.

(http://wiki.blender.org/uploads/1/10/Manual-Part-II-EdgeFaceTools-RegionClipping.png)
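
Here is a minimal sketch of one common clipping step, trivial rejection: if all three corners of a triangle lie outside the same boundary of the view volume (simplified here to a cube with x, y, and z each in [-1, 1]), the whole triangle can never be visible and is safely discarded. Real clippers also split triangles that straddle a boundary, which this sketch leaves out.

```cpp
// Minimal sketch: trivially rejecting a triangle entirely outside the view
// volume (simplified to x, y, z each in [-1, 1]). If all three vertices
// fall outside the same plane, the triangle can never be visible.
#include <cstdio>

struct Vec3 { float x, y, z; };

bool triviallyRejected(Vec3 a, Vec3 b, Vec3 c) {
    if (a.x < -1 && b.x < -1 && c.x < -1) return true;  // all left
    if (a.x >  1 && b.x >  1 && c.x >  1) return true;  // all right
    if (a.y < -1 && b.y < -1 && c.y < -1) return true;  // all below
    if (a.y >  1 && b.y >  1 && c.y >  1) return true;  // all above
    if (a.z < -1 && b.z < -1 && c.z < -1) return true;  // all in front
    if (a.z >  1 && b.z >  1 && c.z >  1) return true;  // all behind
    return false;  // may be visible; keep it (or clip it properly)
}

int main() {
    // A triangle entirely to the right of the view volume: safe to discard.
    bool gone = triviallyRejected({2, 0, 0}, {3, 1, 0}, {2, 1, 0});
    std::printf("culled: %s\n", gone ? "yes" : "no");
}
```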


Projection Transformation
Projection transformation is about creating a believable perspective: objects more distant from the camera are made smaller, and closer objects appear larger. This is not to be confused with orthographic projection, in which objects remain the same size no matter how close they are. Projection transformation is achieved through a simple formula: the X and Y coordinates of each vertex of each primitive are divided by its Z coordinate (its distance from the camera). Projection transformation means the view the player has is not a simple parallel, rectangular box, but rather a volume that starts small at the camera and widens towards the horizon.

(http://www.glprogramming.com/red/images/Image62.gif)
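
The divide-by-Z rule is small enough to show directly. In this minimal sketch, two points share the same X and Y offset but sit at different distances, and the farther one lands nearer the centre of the image:

```cpp
// Minimal sketch: perspective projection as a divide by Z.
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

Vec2 project(Vec3 v) {
    // Assumes v.z > 0 (in front of the camera); clipping removes the rest.
    return { v.x / v.z, v.y / v.z };
}

int main() {
    Vec2 nearP = project({ 1.0f, 1.0f, 2.0f });   // close to the camera
    Vec2 farP  = project({ 1.0f, 1.0f, 10.0f });  // five times further away
    // Same world offset, but the far point projects closer to the centre.
    std::printf("near (%.2f, %.2f)  far (%.2f, %.2f)\n",
                nearP.x, nearP.y, farP.x, farP.y);
}
```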


Viewport transformation
This involves the process of determining the 3D scene to be made into a raster image, the port of which is a specific size. To do this, the vertices have a new scale applied to them which is found by multiplaying the width of the window. A bias is then added, which determines the offset from the screen origin. Only the items visable in this frame are rendered into pixelied, flat images- rasterisation/scan conversion.
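
As a minimal sketch, the scale-and-bias step that maps normalised coordinates (x and y in [-1, 1]) onto a window of a given size might look like this:

```cpp
// Minimal sketch: viewport transform from normalised device coordinates
// (x, y in [-1, 1]) to pixel coordinates in a width x height window.
#include <cstdio>

struct Vec2 { float x, y; };

Vec2 toViewport(Vec2 ndc, int width, int height) {
    // Scale by half the window size, then bias by the same amount,
    // so -1 lands on 0 and +1 lands on the far edge of the window.
    return { (ndc.x + 1.0f) * 0.5f * width,
             (ndc.y + 1.0f) * 0.5f * height };
}

int main() {
    Vec2 centre = toViewport({ 0.0f, 0.0f }, 640, 480);
    std::printf("(0,0) maps to (%.0f, %.0f)\n", centre.x, centre.y);  // (320, 240)
}
```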

Scan conversion or Rasterisation
Rasterisation is how 3D objects become 2D images made out of pixels, rather than their current scalable form. The resultant 2D image is a direct representation of the scene, but with corresponding individual pixel values. Rendering out a scene can take a long time because of the complexity involved in calculating the value of each pixel, and, subsequently, the higher the resolution of the image you want, the longer it is going to take. The steps involved in this process are sometimes grouped together under the name of the pixel pipeline.

(http://www.ntu.edu.sg/home/ehchua/programming/opengl/images/Graphics3D_Rasterization.png)
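
Here is a minimal sketch of the core test a software rasteriser performs: walk the pixels in a triangle's bounding box and use edge functions to decide which pixel centres fall inside. It assumes the triangle's vertices arrive in counter-clockwise order in screen space; real rasterisers do this incrementally and in hardware.

```cpp
// Minimal sketch: rasterising one triangle with edge-function tests.
// Assumes vertices a, b, c are in counter-clockwise order in screen space.
#include <algorithm>
#include <initializer_list>

struct Vec2 { float x, y; };

// Positive when p is to the left of the edge running from a to b.
float edge(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

void rasterise(Vec2 a, Vec2 b, Vec2 c, int width, int height) {
    int minX = std::max(0, (int)std::min({a.x, b.x, c.x}));
    int maxX = std::min(width - 1, (int)std::max({a.x, b.x, c.x}));
    int minY = std::max(0, (int)std::min({a.y, b.y, c.y}));
    int maxY = std::min(height - 1, (int)std::max({a.y, b.y, c.y}));

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p = { x + 0.5f, y + 0.5f };  // sample at the pixel centre
            if (edge(a, b, p) >= 0 && edge(b, c, p) >= 0 && edge(c, a, p) >= 0) {
                // p is inside the triangle: this pixel belongs to it
            }
        }
}

int main() { rasterise({10, 10}, {100, 20}, {50, 80}, 640, 480); }
```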


Texturing, fragment shading
With rasterisation and viewport transformation having dealt with the placement and basic values of each pixel corresponding to its original 3D counterpart, the next stage is all about the individual fragments being given their colour, based upon values interpolated from the vertices during the rasterisation process. The colour of each fragment is determined by its texture as well as its shading.
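
As a minimal sketch, a fragment's final colour is often a texture sample modulated by the lighting value interpolated from the vertices. The checkerboard 'texture' below is a hypothetical stand-in for a real texture lookup:

```cpp
// Minimal sketch: shading one fragment by combining a texture sample with
// an interpolated lighting value. The checkerboard stands in for a texture.
#include <cstdio>

struct Colour { float r, g, b; };

// Hypothetical sampler: u, v in [0, 1] address an 8x8 checkerboard.
Colour sampleTexture(float u, float v) {
    bool dark = (int(u * 8) + int(v * 8)) % 2 == 0;
    float c = dark ? 0.2f : 1.0f;
    return { c, c, c };
}

Colour shadeFragment(float u, float v, float interpolatedLight) {
    Colour tex = sampleTexture(u, v);
    // Modulate the texture colour by the lighting carried from the vertices.
    return { tex.r * interpolatedLight,
             tex.g * interpolatedLight,
             tex.b * interpolatedLight };
}

int main() {
    Colour c = shadeFragment(0.1f, 0.1f, 0.8f);
    std::printf("%.2f %.2f %.2f\n", c.r, c.g, c.b);
}
```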

Display
You wouldn't think there would be so much involved in rendering out a CGI image in 2D. But with all of the above components of the graphics pipeline, the scene is finally produced, and the final raster image can be displayed on the monitor.

Information
http://www.opengl.org/
http://en.wikipedia.org/wiki/Per-pixel_lighting
http://en.wikipedia.org/wiki/Clipping_(computer_graphics)
http://groups.csail.mit.edu/graphics/classes/6.837/F98/Lecture12/projection.html
http://www.songho.ca/opengl/gl_transform.html
http://www.clockworkcoders.com/oglsl/tutorial8.htm
http://www.lighthouse3d.com/tutorials/glsl-tutorial/combine-texture-fragment/
http://mrl.nyu.edu/~dzorin/cg05/lecture07.pdf
