Friday, 10 May 2013

6. Constraints

Polygon count and vertex count make up the bulk of an object's file size, and are often referred to as its 'cost'. Typically, a character in a 3D game might be anywhere from a few hundred polygons to a staggering 40,000+ polygons. More polygons mean more detail, but more detail means longer rendering times - this is why developers have to meet somewhere in the middle and find a compromise. Different machines and engines can cope with different amounts of polygons or vertices. For example, a high-end console or PC game might use a lot of vertices/polygons for its characters and environments, whereas a smart device like a phone can handle much, much less.

You'll often hear 3D artists referring to the poly count of a model; what they mean by this is the triangle count. Triangles are usually used over other polygons because the bulk of modern graphics hardware is designed to accelerate the rendering of triangles, and there is no particular advantage to using the others. A polygon count can be misleading - it's different to a triangle count; triangle counts are always higher, and therefore it's wise to view your triangle count instead of your polygon count, as this is the measure currently most widely used.
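To see how the two counts diverge, here's a minimal sketch in Python (the face counts are made up for illustration): a quad becomes 2 triangles, and an n-sided polygon becomes n - 2.

```python
# Minimal sketch: why a "poly count" understates the triangle count.
# An n-sided polygon triangulates into (n - 2) triangles.

def triangle_count(face_sides):
    """face_sides: one entry per face, e.g. 4 for a quad, 3 for a triangle."""
    return sum(n - 2 for n in face_sides)

faces = [4] * 450 + [3] * 50         # a "500 polygon" model, mostly quads
print(triangle_count(faces))         # 950 triangles - nearly double the figure
```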

Polygons do, however, have a useful purpose in game development. A model made mostly of four-sided quads will work well with edge-loop selection & transform methods that speed up modelling, make it easier to judge the "flow" of a model, and make it easier to weight a skinned model to its bones. Artists usually preserve these polygons in their models for as long as possible. When a model is exported to a game engine, the polygons are all converted into triangles automatically. However, different tools will create different triangle layouts within those polygons. A quad can end up either as a "ridge" or as a "valley" depending on how it's triangulated. Artists need to carefully examine a new model in the game engine to see if the triangle edges are turned the way they wish; if not, specific polygons can then be triangulated manually.

(How a polygon is turned into triangles. http://wiki.polycount.com/PolygonCount?action=AttachFile&do=get&target=ridge_valley.gif)
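As a small illustration of the ridge/valley choice (the vertex labels here are hypothetical, not from any particular tool), the two triangulations of one quad differ only in which diagonal is used:

```python
# One quad, two possible triangle layouts, depending on the diagonal chosen.
quad = ("v0", "v1", "v2", "v3")      # corners in winding order

diagonal_02 = [("v0", "v1", "v2"), ("v0", "v2", "v3")]   # split along v0-v2
diagonal_13 = [("v0", "v1", "v3"), ("v1", "v2", "v3")]   # split along v1-v3

# If the four corners are not coplanar, one layout bends up (a "ridge")
# and the other bends down (a "valley"), even though the outline is identical.
print(diagonal_02, diagonal_13, sep="\n")
```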



Vertex count is ultimately more important for performance and memory than the triangle count, but for historical reasons artists more commonly use triangle count as a performance measurement. On the most basic level, the triangle count and the vertex count can be similar if all the triangles are connected to one another: 1 triangle uses 3 vertices, 2 triangles use 4 vertices, 3 triangles use 5 vertices, 4 triangles use 6 vertices, and so on. However, seams in UVs, changes to shading/smoothing groups, and material changes from triangle to triangle are all treated as a physical break in the model's surface when the model is rendered by the game. The vertices must be duplicated at these breaks so the model can be sent in renderable chunks to the graphics card. Overuse of smoothing groups, over-splitting of UVs, too many material assignments (and too much misalignment of these three properties) all lead to a much larger vertex count. This can stress the transform stages for the model, slowing performance. It can also increase the memory cost for the mesh, because there are more vertices to send and store.
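A rough sketch of that arithmetic, assuming the ideal fully-connected case on one hand and a fully-split worst case on the other:

```python
# Connected triangles share vertices: 1 -> 3, 2 -> 4, 3 -> 5, 4 -> 6, ...
def connected_vertices(triangles):
    return triangles + 2

# Worst case: every triangle sits on its own seam/smoothing/material break,
# so no vertex can be shared and each must be duplicated.
def fully_split_vertices(triangles):
    return triangles * 3

print(connected_vertices(4))     # 6, as in the example above
print(fully_split_vertices(4))   # 12 - double the vertices to transform and store
```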


Rendering
Rendering is the process of generating an image from a model (or models, in what collectively could be called a scene file) by means of computer programs; the result of such a process can also be called a rendering. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. Rendering is the final process of creating the actual 2D image or animation, and can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialised, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering, through polygon-based rendering, to more advanced techniques such as scanline rendering, ray tracing or radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited to either photo-realistic rendering or real-time rendering.


Real-time rendering is one of the interactive areas of computer graphics: it means creating synthetic images fast enough on the computer that the viewer can interact with a virtual environment. The most common place to find real-time rendering is in video games. The rate at which images are displayed is measured in frames per second (frame/s) or hertz (Hz). The frame rate is the measurement of how quickly an imaging device produces unique consecutive images. If an application is displaying 15 frame/s it is considered real-time. Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a fraction of a second, i.e. one frame. The primary goal is to achieve as high a degree of photorealism as possible at an acceptable minimum rendering speed (usually 24 frames per second, as that is the minimum the human eye needs to see to successfully create the illusion of movement). In fact, the way the eye 'perceives' the world can be exploited, and as a result the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate. Rendering software may simulate such visual effects as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artefact of a camera. This is the basic method employed in games, interactive worlds and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.
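Those frame rates translate directly into a time budget per frame, which is a quick piece of arithmetic worth seeing:

```python
# Each frame must be fully rendered within 1000 / fps milliseconds.
for fps in (15, 24, 30, 60, 120):
    print(f"{fps:3d} fps -> {1000 / fps:6.2f} ms per frame")
# At 60 fps the whole pipeline - transforms, lighting, rasterisation -
# has under 16.67 ms, which is why polygon and vertex budgets matter.
```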


(An example of super-realistic real-time rendering. The engine involved would have to be very powerful in order to handle this level of detail.)



Animations for non-interactive media, such as feature films and video, are rendered much more slowly. Non-real-time rendering enables the leveraging of limited processing power to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.

When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).

The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.

The methods of reflection can involve either normal reflection or scattering for a much more random effect. Shading can be randomised too, but this is often reserved for more organic materials.



5. 3D Development Software

There are many software tools used in the production of 3D models, including 3D Studio Max, Maya, LightWave, Cinema 4D, Blender, SketchUp, ZBrush etc. While these programs do similar things, they do not necessarily specialise in the same areas, and are sometimes used in conjunction.

Quotes from the websites give some insight into the function of each tool:


'Autodesk® 3ds Max® software provides a comprehensive 3D modelling, animation, rendering and compositing solution for games, film and motion graphics artists. 3ds Max 2014 has new tools for crowd generation, particle animation and perspective matching, as well as support for Microsoft® DirectX 11® shaders.'

'Autodesk® Maya® 3D animation software offers a comprehensive creative feature set for 3D computer animation, modelling, simulation, rendering and compositing on a highly extensible production platform. Maya now has next-generation display technology, accelerated modelling workflows and new tools for handling complex data.'

'NewTek LightWave 11 made its debut in early 2012, bringing new functionality to an already robust 3D modeling, animation and rendering program. Many new features in LightWave 11 that are widely used by artists include built-in instancing, Bullet hard body dynamics, and Flocking along with new workflow options such as GoZ support for Pixologic’s ZBrush software and the Unity game engine development platform.'


http://www.photoshopcreative.co.uk/users/4039/thm1024/livingroom_design_var1_pt_net.jpg


'Renowned for its ease of use, speed and professional results, CINEMA 4D Prime is an ideal choice for all graphic designers looking to add 3D to their toolset. CINEMA 4D Prime's intuitive interface is designed to ease you in to the powerful and exciting world of 3D graphics. With CINEMA 4D Prime you can start creating great-looking 3D images in no time. And the tutorials provided help you learn CINEMA 4D quickly and easily. Adding CINEMA 4D Prime to your existing pipeline of 2D applications opens up a wealth of new design possibilities. For example: turn 2D artwork into 3D elements using import options such as Illustrator or EPS, or simply combine images or movies with 3D objects. CINEMA 4D Prime's powerful tools let you create images and animations for any industry. 3D logo, illustrations, buildings, space ships - whatever you want to create - you can rely on CINEMA 4D Prime. CINEMA 4D Prime lets you cost-effectively enter the world of 3D. And if you want more advanced tools, it’s great to know that you can upgrade to any of the more powerful versions of CINEMA 4D.'
http://www.maxon.net/uploads/pics/screen_prime_22.jpg
'The Interface allows you to change, adapt and re distribute the layout of all UI components and tools to suit the task at hand. With a great variety of tools available, Sculpting can be used to create very detailed organic looking characters. Coupled with modifiers like multi-res, the models can be very complex while the interface remains responsive. Transforming any model into a posable character has never been easier, with highly sophisticated methods of deformation calculation that allow realistic mesh displacement. Hard surfaces and Subdivision surface modeling benefit from tools that range from community provided complex primitives to stackable modifiers. Thanks to flawless integration, tasks as simple as walk cycles or as complex as lip syncing can be undertaken with more emphasis on the results and the fine tunning. True and tested, the robust default render engine is an industrial strength image generator. Using any of the multiple available tools to project meshes, it is straightforward to manage texture space for a given geometry. Create stunning visuals using a render engine that treats light in a more natural way, with the Cycles Render Engine. Unleash the power of your graphic card with Blender providing full support to GPU rendering.'
http://zibergela.bitarlan.net/wp-content/uploads/2012/03/blender2.jpg

'Hundreds of thousands of professionals in (take a deep breath) architecture, construction, engineering, commercial interiors, light construction, landscape architecture, kitchen & bath design, urban planning, game design, film & stage, woodworking, and plenty of other fields use SketchUp Pro all the time, every day. It’s the all-purpose antidote to complicated, expensive CAD software.

SketchUp Pro is like a pencil with superpowers. Start by drawing lines and shapes. Push and pull surfaces to turn them into 3D forms. Stretch, copy, rotate and paint to make anything you like. More advanced? Start modeling from CAD and terrain data, photographs or even hand sketches. Build models with custom behaviors and attributes. SketchUp Pro is as simple and as powerful as you want it to be.'

http://www.lunarstudio.com/rendering-gallery/image-illustrations/sketchup-modeling-services/house-proposed-sketchup.jpg


'ZBrush is a digital sculpting and painting program that has revolutionized the 3D industry with its powerful features and intuitive workflows. Built within an elegant interface, ZBrush offers the world’s most advanced tools for today’s digital artists. With an arsenal of features that have been developed with usability in mind, ZBrush creates a user experience that feels incredibly natural while simultaneously inspiring the artist within. With the ability to sculpt up to a billion polygons, ZBrush allows you to create limited only by your imagination. Designed around a principle of circularity, the menus in ZBrush work together in a non-linear and mode-free method. This facilitates the interaction of 3D models, 2D images and 2.5D Pixols in new and unique ways.'


http://www.zbrushkorea.com/zbrush/features/overview/img/main.jpg

There are both proprietary formats, such as AutoCAD - .dxf, 3D Studio Max - .3ds, Maya - .mb, LightWave - .lwo, and open formats such as .obj and .dae.

4. Mesh Construction

Although it is possible to construct a mesh by manually specifying vertices and faces, it is much more common to build meshes using a variety of tools. A wide variety of 3D graphics software packages are available for use in constructing polygon meshes.

Box modelling
Box modelling is a very popular method which involves starting with a primitive box and manipulating it through various methods. It's essentially the process of turning a very simple shape into a very complex one, allowing you to construct meshes with two simple tools. Firstly there is the subdivide tool, which splits faces and edges into smaller pieces by adding new vertices and connecting them. If you wanted to subdivide a square, for example, it would add one vertex in the centre and one on each edge, resulting in four smaller squares. Another method is the extrude tool; this tool allows you to effectively drag out/elongate or invert a form from a face or group of faces. It creates a new face of the same size and shape which is connected to each of the existing edges by a face. It's very useful for quickly extending objects. An example of the stages of box modelling is below.
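Here is a hedged sketch of that subdivide step in Python (the coordinates are invented; real packages work on whole meshes, not single faces):

```python
def midpoint(a, b):
    return tuple((p + q) / 2 for p, q in zip(a, b))

def subdivide_quad(v0, v1, v2, v3):
    """One square face -> edge midpoints, a centre vertex, four smaller quads."""
    m01, m12 = midpoint(v0, v1), midpoint(v1, v2)
    m23, m30 = midpoint(v2, v3), midpoint(v3, v0)
    centre = midpoint(m01, m23)
    return [(v0, m01, centre, m30), (m01, v1, m12, centre),
            (centre, m12, v2, m23), (m30, centre, m23, v3)]

for face in subdivide_quad((0, 0), (2, 0), (2, 2), (0, 2)):
    print(face)   # four smaller squares, exactly as described above
```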


Extrusion modelling
There's another method of 3D modelling usually referred to as extrusion modelling or inflation modelling. Exactly as the name suggests, it involves extending or shortening a polygon from its origin: a 2D shape is created (often by placing points and connecting them to form a polygon), often traced from a photo or drawing. The object is traced from two different angles, in different viewports, before the user extrudes the shape into 3D, making sure it matches up in both viewports. It's a very popular method of making faces and heads due to the complexity usually involved; it can take a long time, but often produces a much more organic effect. Normally, the artist only models half the object (if it is symmetrical) and then duplicates and flips the other half to save time. This is not exclusive to extrusion modelling, but is a method often used in this scenario.
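A simplified sketch of the extrusion idea: a flat outline is pushed along the Z axis, and each outline edge gains a connecting side face (the square outline below is just an example):

```python
def extrude(outline, depth):
    """outline: list of (x, y) points traced in 2D; returns 3D vertices + side faces."""
    front = [(x, y, 0.0) for x, y in outline]
    back = [(x, y, depth) for x, y in outline]
    n = len(outline)
    # each outline edge i -> i+1 becomes a quad joining the front and back rings
    sides = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return front + back, sides

verts, faces = extrude([(0, 0), (1, 0), (1, 1), (0, 1)], depth=2.0)
print(len(verts), "vertices,", len(faces), "new side faces")   # 8 vertices, 4 faces
```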

Primitive modelling
A more reserved and less common approach to 3D modelling is the primitive modelling method. This involves creating larger objects by simply combining primitives to create new shapes. The method normally isn't a very effective way to produce complex shapes, and thus is usually only applied to very rigid or simple objects. The primitives could be any of the ones included in the package, such as cubes, pyramids, cylinders, spheres, and 2D primitives like squares, triangles, and disks.

Specialised modelling
Modelling techniques such as box modelling and extrusion modelling are fine for general modelling of objects, but if you need something a bit more organic or detailed, you need more specialised modelling. For this kind of modelling, specialised methods of constructing high- or low-detail meshes exist. For example, there is sketch-based modelling, which allows construction of low-detail models very quickly in a user-friendly interface, and 3D scanners can be used to make very high-detail meshes based on real-world objects in a very automatic way. However, these kinds of devices are very expensive and thus are usually reserved for researchers and industry professionals that really need the high level of accuracy of sub-millimetric digital representations of an object.


3. Geometric Theory

Geometry
In a 3D workshop, assets are calculated using the same kind of formula as is found in 2D vector art. It's essentially the 3D equivalent of vector graphics, the only difference in algorithm being that there is an extra dimension to be included in the calculations - but the main principles remain the same: vertices can be scaled, rotated and 'skewed' without any loss of quality like you would find in a bitmap image. This vector-style approach is achieved by plotting mathematical points along the axes. This, in turn, creates a series of coordinates which are connected to create paths or lines. Because the values of these points are constantly monitored and altered as needed, it is a very precise and linear art. The shapes created can then easily have polygons created and filled with colour. Because it's all mathematical, there is no room for errors, so even a slight error in the geometry, such as a break in the paths or a double set of points too close to one another, can make things a lot more difficult. 1+1 doesn't equal 3; however close it is, it doesn't matter. There's only one right answer, and everything needs to be correct to continue running smoothly.
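A minimal sketch of that lossless behaviour: a vertex is just numbers, so transforming it recomputes exact positions instead of resampling pixels (the point and angle below are arbitrary):

```python
import math

def rotate_z(v, degrees):
    """Rotate a vertex around the Z axis: pure arithmetic, no loss of quality."""
    x, y, z = v
    a = math.radians(degrees)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a), z)

def scale(v, s):
    return tuple(c * s for c in v)

v = (1.0, 0.0, 0.0)
print(scale(rotate_z(v, 90), 2.0))   # (~0.0, 2.0, 0.0): exact, unlike a bitmap
```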

Cartesian Coordinate System
The Cartesian coordinate system is a type of coordinate system in which each axis on a plane is defined by x, y, or z. The system can involve just two of these dimensions (x and y); however, in 3-dimensional work there are always 3 coordinates. The axes extend across the plane seemingly into infinity. The only distinguishable point is the origin, which is the point at which all 3 axes cross. This is the centre of the workspace, and usually what a 3D modeller will work around. The coordinates do have measurements, but due to the vector nature of 3D modelling these are often irrelevant unless you're scaling objects to the correct proportions. As 3D workshops rely heavily on mathematical solutions, the grid and coordinate system is very important to the computer in making sure everything is in the right place and at the right scale.


Geometric theory and polygons
Mesh modelling effectively involves a 3D space filled with joined-up coordinates on a grid to make polygons and edges. These coordinates are basic objects, points in the 3D space known as vertices. Two of these vertices can be connected to form an edge, and another vertex makes another edge, forming the most basic polygon you can make: a triangle. This is the very basic principle of how points in space can be connected mathematically to form the simplest of shapes; joined together, things can get much more complex, resulting in vast and intricate 3D forms often made from thousands of vertices, edges and polygons. As well as triangular polygons, if 4 vertices are connected rather than three, this makes a quad. A face refers to the polygons making up an element (a group of polygons with common vertices).
According to Euclidean geometry, any group of three non-collinear points (points not lined up in a straight line) defines a plane. It's obvious, then, that a triangle must always inhabit a single plane, as its three points can never be lined up. The same is not guaranteed of more complex polygons: in a 2D shape the vertices can only be out of line in one dimension, whereas in a 3D object points could be offset in any direction, so a quad need not be flat. A vector that is perpendicular to a surface is known as the normal. If the geometry is disrupted, so too will the normal be, and this can have a visible effect, as surface normals are often used for determining light transport in ray tracing.
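The normal of a triangle can be computed directly from its vertices, as this short sketch shows (the triangle is a made-up example lying in the XY plane):

```python
import math

def face_normal(p0, p1, p2):
    """Cross product of two edge vectors, scaled to unit length."""
    u = tuple(p1[i] - p0[i] for i in range(3))      # edge p0 -> p1
    v = tuple(p2[i] - p0[i] for i in range(3))      # edge p0 -> p2
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # (0.0, 0.0, 1.0): points up Z
```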

The techniques explained here form the basic principles of 3D modelling. We know that vertices make edges, and edges form polygons. But many polygons joined together make what is called a mesh, which, as a whole, is often referred to as a wireframe model, such as the wireframe model of the dog below.




It's easy to make mistakes in 3D modelling, whether these be mathematical errors on the computer's part or just a mistake gone unnoticed. These mistakes can lead to intersecting polygons, which are often difficult to detect in wireframe view and are only noticed once the surfaces have been applied and the model rendered. This can waste a lot of time, as it means going back and correcting things before having to re-render all over again. Modellers need to be careful to ensure that the mesh does not pierce itself, or contain errors such as double vertices, edges or faces, or be non-manifold (a mesh containing holes, missing polygons or singularities: a single vertex connecting two distinct sections of a mesh). We can use a merge tool to make sure there are no extra vertices, but it's very difficult to fix mistakes once they have been made, so it's important to be thoughtful about this when creating models.
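A rough sketch of what a "merge by distance" pass might do about double vertices (the tolerance value and the grid-snap shortcut are assumptions for illustration):

```python
def weld(vertices, tolerance=1e-4):
    """Collapse vertices closer than tolerance; returns new list + index remap."""
    merged, remap = [], []
    for v in vertices:
        key = tuple(round(c / tolerance) for c in v)   # snap position to a grid
        for i, (k, _) in enumerate(merged):
            if k == key:                               # a double: reuse vertex i
                remap.append(i)
                break
        else:
            remap.append(len(merged))
            merged.append((key, v))
    return [v for _, v in merged], remap

verts = [(0, 0, 0), (1, 0, 0), (0.00001, 0, 0)]   # first and last are doubles
welded, remap = weld(verts)
print(len(welded), remap)                         # 2 [0, 1, 0]
```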


Primitives
Primitives are pre-made objects integrated into 3D modelling software which allow the user to create that shape in a click. They're not particularly complex shapes, but having them on hand saves a lot of time and is a much more efficient way of working. Usually, primitives come in simple shapes such as spheres or cubes, sometimes cylinders, pyramids and even cones, but more often than not the modeller simply takes elements of these shapes (such as the rounded top of a sphere) and manipulates them to their needs. They are a shortcut, but not by any means cheating or doing the job for you; it's common practice in polygon modelling and saves a lot of hassle.


Surfaces
Surfaces are texture effects and colour variants that can be applied to a specific polygon or a set of polygons. Surfaces can either be chosen from predefined presets of colours and textures, or even have photographic maps added in order to achieve an even more realistic appearance. Sometimes a texture is commissioned and created for a particular object, in which case the texture needs to wrap around the mesh and fit very precisely in all the right places. The example below shows the stages.

(http://blog.gamerdna.com/wp-content/uploads/2008/08/image-thumb13.png)




2. Displaying 3D Polygon Animations


API
API stands for 'application programming interface'. It's a protocol containing the routines and sub-protocols, as well as tools, for building software applications. It is essentially used by software components to communicate these routines, data structures, object classes and variables with each other. Because of its structure, an API is designed to make it easier to develop a program, by condensing and organising all the pieces needed, which then just need to be put in the right places by the programmer. An API can take the form of POSIX, the Microsoft Windows API, or even the libraries of a programming language itself, such as the Standard Template Library in C++ and the Java API. It's important in 3D CGI because it links directly to the interface of the program.

Direct3D
One of the best-known APIs is Direct3D, which is actually a subset of Microsoft's DirectX graphics API. Interestingly, it is the graphics API for the Xbox and Xbox 360, and of course also for the Microsoft Windows operating system. Direct3D's job is to provide programming commands the system can use when detailed 3D graphics are being rendered and performance is very demanding, helping to keep performance higher and more regular. This is a common need in video game software, which is why it was implemented in the Xbox.

OpenGL
OpenGL (Open Graphics Library) is another specification of an API for 3D graphics rendering. The specification defines what the libraries must do; vendors then provide implementations as needed. Because OpenGL is so widely used, graphics cards usually ship with an OpenGL implementation. Unlike Direct3D, OpenGL is not specific to certain platforms, meaning it is much more flexible: applications can be written which will work with many types of graphics cards, and this also increases the compatibility of the API on other devices or updated hardware.

Graphics Pipeline
Graphics pipelines (or rendering pipelines) can be used for different things, but when referring to 3D computer graphics we mean the sequence of algorithms applied to objects and scenes in order to convert them into flat images and video. Graphics pipelines are part of APIs such as Direct3D and OpenGL. They are the part which takes in the information of the three-dimensional primitives as input, and then outputs a 2D bitmap image.
 
(A map showing how a graphics pipeline operates. The algorithm is essential to it working as intended.)
(http://goanna.cs.rmit.edu.au/~gl/teaching/Interactive3D/2013/images/pipeline.png)


Per-vertex lighting and shading
Any 3D scene or 3D object needs lighting and shadows to make it appear 3D. The way that light is rendered onto objects incorporates a complex algorithm which defines where the light will hit, at what intensity, and where shadows will be cast. The positioning of the light sources, reflectance, and other surface properties all contribute to the final render. As 3D assets are usually made up of polygons and vertices, normally the graphics pipeline only computes lighting at these vertices and faces. The lighting might be dramatically different from one face to another; while this would look okay on a cube or other solid shape, on a more natural form the values between vertices need to be interpolated during rasterization so they blend together for a more natural and realistic lighting effect. There are many effects that can be applied on most modern graphics hardware, such as per-fragment or per-pixel lighting, and, on more modern graphics hardware, per-vertex shading using vertex shaders. All of these effects are applied via a shading program (which may already be incorporated into the 3D modelling program).
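As a minimal sketch of per-vertex diffuse lighting (classic Lambert shading; the normals and light direction below are invented):

```python
import math

def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(vertex_normal, light_dir):
    """Brightness = clamped dot product of the normal and the light direction."""
    n, l = normalise(vertex_normal), normalise(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

light = (0.0, 1.0, 1.0)             # direction from the surface toward the light
print(lambert((0, 0, 1), light))    # ~0.71: angled toward the light
print(lambert((0, 0, -1), light))   # 0.0: facing away, so fully dark
```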

Clipping
Clipping is an essential process; it ensures that geometric shapes or polygons that fall outside the field of view are not rendered and are discarded. Not having to render all these extra shapes that we can't see anyway means the processor has more resources to put into the tasks that are needed, meaning that the game, or program, will ultimately run faster.
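A simplified sketch of the test involved (real clippers also split triangles that straddle the boundary, which is skipped here):

```python
def inside_view(p):
    """p: (x, y, z) in normalised device coordinates; visible range is [-1, 1]."""
    return all(-1.0 <= c <= 1.0 for c in p)

triangles = {
    "on screen":  [(0.0, 0.0, 0.5), (0.5, 0.5, 0.5), (-0.5, 0.2, 0.5)],
    "off screen": [(3.0, 2.0, 0.5), (4.0, 2.5, 0.5), (3.5, 3.0, 0.5)],
}
for name, tri in triangles.items():
    # a triangle with no vertex in view is discarded before rasterisation
    print(name, "->", "kept" if any(inside_view(v) for v in tri) else "clipped")
```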

(http://wiki.blender.org/uploads/1/10/Manual-Part-II-EdgeFaceTools-RegionClipping.png)


Projection Transformation
Projection transformation is about creating a believable perspective, i.e. objects more distant from the camera are made smaller, and closer objects appear larger. This is not to be confused with orthographic projection, in which objects remain the same size no matter how close they are. Projection transformation is achieved through a simple formula: dividing the X and Y coordinates of each vertex of each primitive by its Z coordinate (its distance from the camera). Projection transformation means that the view the player has is not a simple parallel, rectangular view, but rather a view that starts small and widens towards the horizon.
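That divide-by-Z formula is short enough to show directly (two made-up points of the same size, one near and one far):

```python
def project(vertex):
    """Perspective projection: divide X and Y by Z (distance from the camera)."""
    x, y, z = vertex
    return (x / z, y / z)

print(project((1.0, 2.0, 2.0)))    # (0.5, 1.0)  - near the camera: appears large
print(project((1.0, 2.0, 10.0)))   # (0.1, 0.2)  - far away: appears small
```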

(http://www.glprogramming.com/red/images/Image62.gif)


Viewport transformation
This involves determining how the 3D scene is mapped into a raster image of a specific size (the viewport). To do this, the vertices have a new scale applied to them, found by multiplying by the width of the window. A bias is then added, which determines the offset from the screen origin. Only the items visible in this frame are rendered into pixelated, flat images: rasterisation/scan conversion.
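A sketch of one standard form of that scale-and-bias (the window size is arbitrary, and the Y flip is an assumption, since many screens put the origin at the top left):

```python
def viewport(x_ndc, y_ndc, width, height):
    """Map (x, y) in [-1, 1] to pixel coordinates in a width x height window."""
    px = (x_ndc + 1.0) * 0.5 * width     # scale by the window width, then bias
    py = (1.0 - y_ndc) * 0.5 * height    # flip Y: screen Y grows downwards
    return (px, py)

print(viewport(0.0, 0.0, 800, 600))     # (400.0, 300.0): the window centre
print(viewport(-1.0, 1.0, 800, 600))    # (0.0, 0.0): the top-left corner
```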

Scan conversion or Rasterisation
Rasterisation is how the 3D objects become 2D images made of pixels, rather than their current scalable form. The resultant 2D image is a direct representation of the scene, but with corresponding individual pixel values. Rendering out a scene can take a long time because of the complexity involved in calculating the value of each pixel; subsequently, the higher the resolution you want, the longer it is going to take. The steps involved in this process are sometimes referred to as a group under the name of the pixel pipeline.
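One common way to scan-convert a triangle is with edge functions, sketched here on a tiny text "screen" (the triangle's corners are invented):

```python
def edge(a, b, p):
    """Positive if point p lies on the inner side of the edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterise(v0, v1, v2, width, height):
    rows = []
    for y in range(height):
        row = ""
        for x in range(width):
            p = (x + 0.5, y + 0.5)                   # sample each pixel centre
            inside = (edge(v0, v1, p) >= 0 and edge(v1, v2, p) >= 0
                      and edge(v2, v0, p) >= 0)
            row += "#" if inside else "."
        rows.append(row)
    return "\n".join(rows)

print(rasterise((1, 1), (18, 3), (8, 9), width=20, height=10))
```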

(http://www.ntu.edu.sg/home/ehchua/programming/opengl/images/Graphics3D_Rasterization.png)


Texturing, fragment shading
With rasterisation and viewport transformation having dealt with the placement and basic values of each pixel corresponding to its original 3D counterpart, the next stage is all about the individual fragments being given their colour, based upon values interpolated from the vertices during the rasterization process. The colour of each per-pixel fragment is determined by texture as well as shade.
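A hedged sketch of that interpolation using barycentric weights (the corner colours and sample point are arbitrary):

```python
def barycentric(p, a, b, c):
    """Weights of p relative to triangle corners a, b, c (they sum to 1)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return w0, w1, 1.0 - w0 - w1

def fragment_colour(p, corners, colours):
    w = barycentric(p, *corners)
    return tuple(round(sum(wi * col[i] for wi, col in zip(w, colours)))
                 for i in range(3))

tri = [(0, 0), (10, 0), (0, 10)]
rgb = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]     # red, green and blue corners
print(fragment_colour((3, 3), tri, rgb))          # a smooth blend of all three
```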

Display

You wouldn't think there would be so much involved in rendering out a CGI image as 2D. But with all of the above components of the graphics pipeline, the scene is finally able to be produced, and the final raster image can be displayed on the monitor.

Information
http://www.opengl.org/
http://en.wikipedia.org/wiki/Per-pixel_lighting
http://en.wikipedia.org/wiki/Clipping_(computer_graphics)
http://groups.csail.mit.edu/graphics/classes/6.837/F98/Lecture12/projection.html
http://www.songho.ca/opengl/gl_transform.html
http://www.clockworkcoders.com/oglsl/tutorial8.htm
http://www.lighthouse3d.com/tutorials/glsl-tutorial/combine-texture-fragment/
http://mrl.nyu.edu/~dzorin/cg05/lecture07.pdf

1. Applications of 3D

T h e  i n t r o d u c t i o n  o f  3 D
The very first 3D creation was in fact not a film or a game. Actually, the idea of creating 3D games or films was unheard of: video games themselves were non-existent, and animation in film was no more advanced than a flip book! David Brewster was the first to introduce the illusion of 3D with his stereoscope device in 1844. The images he could produce with this device are similar to the 3D images we sometimes see printed on bookmarks and posters. Though this is not exactly the same as the types of 3D we see and use today, it still bears the link, and is an important milestone as the first instance of displaying an image in 3 dimensions.


(A stereoscope would take 3 pictures from different angles and layer them so that the view you could see changed depending on your angle, giving an impression of 3D.)

The word 'stereo', meaning coming from two or more, here refers to the 3 dimensions. The word is also evident in 'stereogram', whereby you can see a 3D image jump out of the screen by focusing your eyes correctly.


(A stereogram: Go cross-eyed and wait for the picture to leap out)

It's interesting stuff, but not quite what we're looking for. The fact that we could produce 3D images at all made the idea of making more realistic and interactive virtual imagery even more appealing. And this is where CGI (computer-generated imagery) comes into play.

The first use of CGI came in 1972, from Edwin Catmull and Fred Parke. Nothing else like it existed at the time; Catmull had to create his own program to render out the sequence, meaning he needed both the technical and artistic skills to make it work. The hand model consisted of 350 polygons and was laboriously rendered out as the world's first ever computerised digital animation.

Unsurprisingly, Edwin Catmull went on to co-found Pixar some years later.

(The world's first 3D CGI)

G a m e s
3D gaming had always faced many obstacles: fast memory is required for a computer to be able to render out a sequence in real time, and this alone created a big problem, even for less detailed models with fewer polygons. To combat the issue of having to render in real time, 3D Monster Maze pre-rendered each turn of its 16 x 16 cell randomly generated maze. All the computer had to do was play a pre-recorded 3D animation sequence when the button was pressed, giving the illusion of real-time 3D. The game was released for the Sinclair ZX81 platform in 1981 by Malcolm Evans.

(3D Monster Maze, 1981)

By the fifth generation of the video game timeline, more and more games were either being solely produced in 3D computerised graphics, or old games were being transferred or remade using the new technology now available: games such as 'Super Mario 64' and 'The Legend of Zelda'. Games like this were revolutionary.

(Super Mario 64)

(The Legend of Zelda)

As well as this, CDs were becoming increasingly popular over cartridges; they usually allowed for more space and, hence, a more complex rendering system which could be fully utilised to hold 3D CGI.

3D games were widely marketed, and people enjoyed the ability to wander freely around a world in any direction, making the player feel more immersed in the virtual environment. It was clear that people wanted more of this, as it was selling big and turning profits. With this, the focus on the more dated side-scrolling, top-down and rail-style games started to turn, giving way to more advanced games and even new genres which couldn't really be brought to their full potential before.

(GoldenEye 007 - 1997)

Of course, even games like GoldenEye, which were considered amazing when they were first released, appear old and lacking in detail today in terms of graphics. The demand for more realistic games has never been higher, but the graphics can only evolve as fast as the technology.

Just a few years ago we might've considered games such as Zoo Tycoon and Lara Croft to be realistic, where surfaces are painted as flat textures onto large polygons. Of course, modern computers can now handle much more, and so graphics have been able to become more detailed; so detailed, in fact, that they can almost be considered as convincing as photographs. We have also uncovered new ways to create light and shadow, now using a 3-point bounce system where we previously used only a 2-point bounce system. Because of this we now have games such as Heavy Rain, FIFA, PGR4 etc.



The use of hyper-realism now involved in most modern games has given way to a new type of interactive video game. Games such as Batman Begins and Uncharted are half film, half game! With cutscenes in between gameplay, it's a cross between the two, and with ever-advancing technology we're seeing this kind of game more and more.
As for what the future holds, developers are actually considering the possibility of being able to realistically map and texture the player as a character in-game. Currently, while this is possible, it involves the precise image capture of the person from 360 degrees of angles, which of course would not be practical. Were this to come to our home consoles, it would need to incorporate a complex algorithm, able to first identify and then calculate the values of a face from different angles with a limited amount of reference.




F i l m & T V
Animation is widely used in today's films. It is sometimes used for a completely 3D movie with no real people involved; sometimes it is combined and overlaid with real video footage to make it more realistic. Often, though, an animated character will be made to appear real, and will act alongside the real characters. This means that the actors must imagine the character which is apparently stood in front of them, as this will be added in the final cut.

(An example of this kind of thing is in 'Rise of the Planet of the Apes')

Some animations in film these days are very, very realistic. In games, we are limited in how detailed we can make a character due to the demand for real-time rendering. As a film is all pre-rendered, however, we are much more capable of adding massive amounts of detail. So much, in fact, that some have gone so far as to have each hair individually placed onto the model, such as in 'Life of Pi'.

('Life of Pi' uses a mixture of real footage of tigers and models. It's difficult to tell them apart!)

As well as character animation, there is also environment animation to consider. This is often utilised in tricky situations where the landscape would be too dangerous or difficult to film in. 'Life of Pi' is another good example in this case; filming in the middle of the real sea would've been far too difficult and risky, so instead they did it in a studio with wind simulators and levers to make the boat rock. They added the water later on.

('Life of Pi': the ripples and lighting effects are all important factors in making it realistic.)



Animation techniques are also often applied to television media. They are sometimes used for special effects in programmes such as 'Doctor Who' and 'Being Human', but animation in series like this is limited due to the high costs involved.
In adverts, animation is very popular. It was used in 'Compare the Market's' advert featuring a meerkat, and the concept has proven itself by becoming one of the nation's most well-known and loved advert characters. Animated characters like this are good for adverts, as a certain charm can be captured that couldn't be produced elsewhere; this captures the audience's imagination and keeps it memorable.

(www.comparethemeerkat.com - Simples! ;D)

P r o d u c t  D e s i g n
3D product design is sometimes used as an alternative to a physical prototype, or is created in the stages before one. Though not all products need to go through the process of digital realisation, it is often very beneficial to do so. Making the product in a virtual environment rather than physically means that only one needs to be made, which can then be altered and changed as needed without having to remake the whole thing.

Because it's a file, it can be sent to other people, who can then view it from all different angles, rather than having to make individual prototypes for each person. It can be retextured so that you can see what it will look like in different colours and skins with just a few clicks of a button.

Not only is this a very efficient process, but it also reduces costs, as only one of them ever has to be made.

Sometimes this is referred to as industrial design combined with CAD (computer-aided design). It can help to test the aesthetics, ergonomics, functionality and general usability of a product in design. A lot of products on the market will have gone through this process; an example is the iPhone. The process of making the virtual iPhone prototype would have included a design pack like this:


Followed by the actual execution and production of the 3D model:


E d u c a t i o n
Though relatively new, there are now businesses setting up virtual environments in which you can roam around and see things from a first-person perspective, as opposed to just having to learn from flat images and diagrams. This seems like a nice idea, but is it really as good as it sounds, or just a money-making scheme?

www.gaia3d.com says 'Visual learning plays a key role in the teaching and learning process. Gaia 3D is the new generation 3D solution that allows you to explore new frontiers. A tool to enhance the work you do in the classroom. Gaia 3D helps teachers to teach those ‘hard to reach’ concepts whilst engaging students and bringing lesson content to life.
The 3D learning experience places students in virtual environments, allowing them to walk down a street in ancient Rome, visit the outer reaches of the universe or move through the chambers of a beating heart.'
Using this as an aid in the classroom is one option; however, the potential of using this as medical equipment is clear. By making a heart simulation on screen, students would be able to test on the heart, seeing how it would react, as well as being able to explore at a micro level right inside and around the heart, something which normally would be impossible. Something like this could save lives.
http://www.3dmedicaleducation.co.uk/ and http://www.3d4medical.com/ are both businesses which specialise in exactly that. As well as building the 3D models, they must also build the engine inside them controlling the way they behave. The person making these models must have the technical skill, artistic ability and anatomical knowledge to bring it all together and make it work.
As well as a first-person viewpoint where you can wander around, there are also fly-throughs of the organs, giving a quick and clear overview.
You would think something like this would be expensive, but it's not! Simple versions are often available for free, in fact, available on the internet through the Unity web player, or as an app on any smart device.
(The muscular arrangement in a human, available to view like never before, from any angle or perspective.)

A r c h i t e c t u r e

3D modelling is often used in architecture to produce models of a building or particular environment. These are usually produced within the planning stage, when the layout and design are still being decided. As with product design, this is very useful for testing out different variations without having to build a house for real and knock it down again until it's right! Although the design could be done in 2D, only 3D gives the impression of the environment from different perspectives, as it would actually be seen.



Things like these can be built using programs such as LightWave or Maya.

Fly-throughs or first-person walk-throughs are sometimes rendered out in order to give the user a better idea of how it will feel in real life, when the real thing actually gets built.

W e b

Some great examples of how animation is used on websites include:

http://www.clicktorelease.com/code/blocky_earth/
http://www.artfolio.de/galerie3d.php
http://www.biliouris.gr/files/avles/

But it's not only full websites themselves: 3D technology is often used in adverts on webpages. The 3D element often makes them interactive and engaging, and a bit more eye-catching than the typical 2D banner.

Images:
http://www.stereo.canonia.pl/gr/holmes350.jpg
http://www.eyecanlearn.com/random7.jpg
http://www.caiman.us/freepix/1053-2.jpg
http://www.thesixthaxis.com/wp-content/uploads/2010/09/mario1.jpg
http://www.3dsbuzz.com/wp-content/uploads/2011/06/Linkawakensin3d.jpg
http://deltagamer.com/wp-content/uploads/2011/07/GoldenEye-007-Reloaded-Airfield-FPS-AR-with-scope.jpg
https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEtA64Ut9hKRAe0wG47ZGmmROklVTQX7M_0sQNmksAGqKbzrNHNa86zYnr8kgRMP0AvYNEORNOfHTaXN4T8ffOGzLo_QgAz5Io3g44TMLNFqEp3Y03xLxxjGynaA9bkofqVulbipUmgEs/s400/Rise+of+Planet+of+Apes1.jpg
http://thelexicinema.co.uk/wp-content/uploads/2012/11/life_of_pi_8.jpg
http://thefilmexperience.net/storage/2012/lifeofpi-water.jpg?__SQUARESPACE_CACHEVERSION=1357494496918
http://www.fastcocreate.com/multisite_files/cocreate/imagecache/slideshow-large/slideshow/2012/11/1682006-slide-slide-9-life-of-pi-making-of.jpg
http://theinspirationroom.com/daily/interactive/2010/3/compare_the_meerkat_site.jpg
http://obamapacman.com/wp-content/uploads/2011/10/iPhone-5-vs-iPhone-4s-CAD-mockup.jpg
http://preview.turbosquid.com/Preview/2012/01/25__08_05_00/iphonecad7.jpgbb9b1e69-15a6-4094-ac33-2579bb23f191Large.jpg
http://applications.3d4medical.com/images/muscle/ipad-video.jpg
http://www.panebianco3d.com/images/architecture-pescara.gif
http://www.icreate3dmodelling.co.uk/3D-images/3D-architecture-renderings2.jpg

Information: