
3D projection

3D projection is any method of mapping three-dimensional points to a two-dimensional plane. As most current methods for displaying graphical data are based on planar two-dimensional media (pixel information from several bitplanes), the use of this type of projection is widespread, especially in computer graphics, engineering and drafting.


Contents

  • 1 Orthographic projection
  • 2 Weak perspective projection
  • 3 Perspective projection
  • 4 Diagram
  • 5 See also
  • 6 References
  • 7 External links
  • 8 Further reading

Orthographic projection

When the human eye looks at a scene, objects in the distance appear smaller than objects close by. Orthographic projection ignores this effect to allow the creation of to-scale drawings for construction and engineering.

Orthographic projections are a small set of transforms often used to show profile, detail or precise measurements of a three dimensional object. Common names for orthographic projections include plane, cross-section, bird's-eye, and elevation.

If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z axis), the mathematical transformation is as follows. To project the 3D point (a_x, a_y, a_z) onto the 2D point (b_x, b_y) using an orthographic projection parallel to the y axis (profile view), the following equations can be used:

b_x = s_x a_x + c_x
b_y = s_z a_z + c_z

where the vector s holds arbitrary scale factors and c is an arbitrary offset. These constants are optional and can be used to align the viewport properly. Using matrix multiplication, the equations become:

\begin{bmatrix} {b_x } \\ {b_y } \\ \end{bmatrix} = \begin{bmatrix} {s_x } & 0 & 0 \\ 0 & 0 & {s_z } \\ \end{bmatrix}\begin{bmatrix} {a_x } \\ {a_y } \\ {a_z } \\ \end{bmatrix} + \begin{bmatrix} {c_x } \\ {c_z } \\ \end{bmatrix} .
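As a minimal sketch of this mapping (the function name, defaults, and pure-Python form are illustrative, not from the source), the orthographic projection parallel to the y axis can be written as:

```python
def orthographic_project(a, s=(1.0, 1.0), c=(0.0, 0.0)):
    """Project the 3D point a = (a_x, a_y, a_z) onto the 2D point
    (b_x, b_y) by an orthographic projection parallel to the y axis."""
    a_x, a_y, a_z = a
    s_x, s_z = s           # arbitrary scale factors
    c_x, c_z = c           # arbitrary offsets (viewport alignment)
    b_x = s_x * a_x + c_x
    b_y = s_z * a_z + c_z  # the 2D "y" coordinate comes from the 3D z axis
    return (b_x, b_y)
```

The y component of the input is discarded entirely, which is what makes the projection orthographic.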

While orthographically projected images represent the three dimensional nature of the object projected, they do not represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In particular, parallel lengths at all points in an orthographically projected image are of the same scale regardless of whether they are far away or near to the virtual viewer. As a result, lengths near to the viewer are not foreshortened as they would be in a perspective projection.

Weak perspective projection

A "weak" perspective projection uses the same principles as an orthographic projection, but requires a scaling factor to be specified, ensuring that closer objects appear larger in the projection, and vice versa. It can be seen as a hybrid between an orthographic and a perspective projection, and described either as a perspective projection with the individual point depths Z_{i} replaced by an average constant depth Z_{ave},[1] or simply as an orthographic projection plus a scaling.[2]

The weak-perspective model thus approximates perspective projection while using a simpler model, similar to the pure (unscaled) orthographic perspective. It is a reasonable approximation when the depth of the object along the line of sight is small compared to the distance from the camera, and the field of view is small. With these conditions, it can be assumed that all points on a 3D object are at the same distance Z_{ave} from the camera without significant errors in the projection (compared to the full perspective model).
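A sketch under the assumptions above (the function name and signature are hypothetical): every point is divided by the shared average depth Z_ave rather than by its own depth.

```python
def weak_perspective_project(points, focal=1.0):
    """Weak-perspective projection: scale every point by focal / Z_ave,
    where Z_ave is the average depth of the whole point set."""
    z_ave = sum(p[2] for p in points) / len(points)  # shared constant depth
    scale = focal / z_ave
    return [(scale * x, scale * y) for (x, y, _z) in points]
```

Replacing z_ave with each point's own depth would turn this into a full perspective projection, which shows why the approximation only holds when the depth spread of the object is small relative to z_ave.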

Perspective projection

When the human eye views a scene, objects in the distance appear smaller than objects close by; this is known as perspective. While orthographic projection ignores this effect to allow accurate measurements, perspective projection shows distant objects as smaller, providing additional realism.

Perspective projection requires a more involved definition than orthographic projection. A conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the object(s) were being viewed through a camera viewfinder. The camera's position, orientation, and field of view control the behavior of the projection transformation. The following variables are defined to describe this transformation:

  • \mathbf{a}_{x,y,z} - the 3D position of a point A that is to be projected.
  • \mathbf{c}_{x,y,z} - the 3D position of a point C representing the camera.
  • \mathbf{\theta}_{x,y,z} - The orientation of the camera (represented by Tait–Bryan angles).
  • \mathbf{e}_{x,y,z} - the viewer's position relative to the display surface [3] which goes through point C representing the camera.

This results in:

  • \mathbf{b}_{x,y} - the 2D projection of \mathbf{a}.

When \mathbf{c}_{x,y,z}=\langle 0,0,0\rangle, and \mathbf{\theta}_{x,y,z} = \langle 0,0,0\rangle, the 3D vector \langle 1,2,0 \rangle is projected to the 2D vector \langle 1,2 \rangle.

Otherwise, to compute \mathbf{b}_{x,y} we first define a vector \mathbf{d}_{x,y,z} as the position of point A with respect to a coordinate system defined by the camera, with origin at C and rotated by \mathbf{\theta} with respect to the initial coordinate system. This is achieved by subtracting \mathbf{c} from \mathbf{a} and then applying a rotation by -\mathbf{\theta} to the result. This transformation is often called a camera transform, and can be expressed as follows, writing the rotation in terms of rotations about the x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):[4][5]

\begin{bmatrix} \mathbf{d}_x \\ \mathbf{d}_y \\ \mathbf{d}_z \\ \end{bmatrix}=\begin{bmatrix} 1 & 0 & 0 \\ 0 & {\cos ( \mathbf{- \theta}_x ) } & { - \sin ( \mathbf{- \theta}_x ) } \\ 0 & { \sin ( \mathbf{- \theta}_x ) } & { \cos ( \mathbf{- \theta}_x ) } \\ \end{bmatrix}\begin{bmatrix} { \cos ( \mathbf{- \theta}_y ) } & 0 & { \sin ( \mathbf{- \theta}_y ) } \\ 0 & 1 & 0 \\ { - \sin ( \mathbf{- \theta}_y ) } & 0 & { \cos ( \mathbf{- \theta}_y ) } \\ \end{bmatrix}\begin{bmatrix} { \cos ( \mathbf{- \theta}_z ) } & { - \sin ( \mathbf{- \theta}_z ) } & 0 \\ { \sin ( \mathbf{- \theta}_z ) } & { \cos ( \mathbf{- \theta}_z ) } & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}\left( {\begin{bmatrix} \mathbf{a}_x \\ \mathbf{a}_y \\ \mathbf{a}_z \\ \end{bmatrix} - \begin{bmatrix} \mathbf{c}_x \\ \mathbf{c}_y \\ \mathbf{c}_z \\ \end{bmatrix}} \right)

This representation corresponds to rotating by three Euler angles (more properly, Tait–Bryan angles), using the xyz convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x (reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading left-to-right)". Note that if the camera is not rotated (\mathbf{\theta}_{x,y,z} = \langle 0,0,0\rangle), then the matrices drop out (as identities), and this reduces to simply a shift: \mathbf{d} = \mathbf{a} - \mathbf{c}.

Alternatively, without using matrices (replacing a_x - c_x with x and so on, and abbreviating \cos\theta to c and \sin\theta to s):

\begin{array}{lcl} \mathbf{d}_x = c_y (s_z \mathbf{y}+c_z \mathbf{x})-s_y \mathbf{z} \\ \mathbf{d}_y = s_x (c_y \mathbf{z}+s_y (s_z \mathbf{y}+c_z \mathbf{x}))+c_x (c_z \mathbf{y}-s_z \mathbf{x}) \\ \mathbf{d}_z = c_x (c_y \mathbf{z}+s_y (s_z \mathbf{y}+c_z \mathbf{x}))-s_x (c_z \mathbf{y}-s_z \mathbf{x}) \\ \end{array}
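The expanded camera transform above translates directly into code. A sketch (naming is illustrative, not from the source; angles are in radians):

```python
import math

def camera_transform(a, c, theta):
    """Compute d = R_x(-th_x) R_y(-th_y) R_z(-th_z) (a - c): the position
    of point a in the camera's frame (camera at c, orientation theta)."""
    x, y, z = (a[0] - c[0], a[1] - c[1], a[2] - c[2])
    cx, cy, cz = (math.cos(t) for t in theta)
    sx, sy, sz = (math.sin(t) for t in theta)
    # Expanded form of the matrix product, term for term:
    d_x = cy * (sz * y + cz * x) - sy * z
    d_y = sx * (cy * z + sy * (sz * y + cz * x)) + cx * (cz * y - sz * x)
    d_z = cx * (cy * z + sy * (sz * y + cz * x)) - sx * (cz * y - sz * x)
    return (d_x, d_y, d_z)
```

With theta = (0, 0, 0) this reduces to the plain shift d = a - c noted above.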

This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; literature also may use x/z):[6]

\begin{array}{lcl} \mathbf{b}_x &= & \frac{\mathbf{e}_z}{\mathbf{d}_z} \mathbf{d}_x - \mathbf{e}_x \\ \mathbf{b}_y &= & \frac{\mathbf{e}_z}{\mathbf{d}_z} \mathbf{d}_y - \mathbf{e}_y\\ \end{array}.

Or, in matrix form using homogeneous coordinates, the system

\begin{bmatrix} \mathbf{f}_x \\ \mathbf{f}_y \\ \mathbf{f}_z \\ \mathbf{f}_w \\ \end{bmatrix}=\begin{bmatrix} 1 & 0 & -\frac{\mathbf{e}_x}{\mathbf{e}_z} & 0 \\ 0 & 1 & -\frac{\mathbf{e}_y}{\mathbf{e}_z} & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1/\mathbf{e}_z & 0 \\ \end{bmatrix}\begin{bmatrix} \mathbf{d}_x \\ \mathbf{d}_y \\ \mathbf{d}_z \\ 1 \\ \end{bmatrix}

in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving

\begin{array}{lcl} \mathbf{b}_x &= &\mathbf{f}_x / \mathbf{f}_w \\ \mathbf{b}_y &= &\mathbf{f}_y / \mathbf{f}_w \\ \end{array}.
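Both routes, the direct formula and the homogeneous-coordinate form followed by division by \mathbf{f}_w, give the same result. A sketch of the direct version (names are illustrative):

```python
def perspective_project(d, e):
    """Project camera-space point d onto the image plane, for a viewer at
    position e relative to the display surface:
        b_x = (e_z / d_z) * d_x - e_x
        b_y = (e_z / d_z) * d_y - e_y
    """
    d_x, d_y, d_z = d
    e_x, e_y, e_z = e
    return ((e_z / d_z) * d_x - e_x,
            (e_z / d_z) * d_y - e_y)
```

Doubling d_z halves both image coordinates, which is exactly the size-with-distance effect that orthographic projection lacks.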

The distance of the viewer from the display surface, \mathbf{e}_z, directly relates to the field of view, where \alpha=2 \cdot \tan^{-1}(1/\mathbf{e}_z) is the viewed angle. (Note: This assumes that you map the points (-1,-1) and (1,1) to the corners of your viewing surface)
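Inverting that relation gives the viewer distance for a desired field of view, assuming (as the note says) that (-1,-1) and (1,1) map to the corners of the viewing surface:

```python
import math

def e_z_from_fov(alpha):
    """Viewer distance e_z for a viewed angle alpha (radians),
    from alpha = 2 * atan(1 / e_z)  =>  e_z = 1 / tan(alpha / 2)."""
    return 1.0 / math.tan(alpha / 2.0)
```

A 90-degree field of view therefore corresponds to e_z = 1.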

The above equations can also be rewritten as:

\begin{array}{lcl} \mathbf{b}_x= (\mathbf{d}_x \mathbf{s}_x ) / (\mathbf{d}_z \mathbf{r}_x) \mathbf{r}_z\\ \mathbf{b}_y= (\mathbf{d}_y \mathbf{s}_y ) / (\mathbf{d}_z \mathbf{r}_y) \mathbf{r}_z\\ \end{array}.

In which \mathbf{s}_{x,y} is the display size, \mathbf{r}_{x,y} is the recording-surface size (CCD or film), \mathbf{r}_z is the distance from the recording surface to the entrance pupil (camera center), and \mathbf{d}_z is the distance from the 3D point being projected to the entrance pupil.

Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media.


Diagram

To determine which screen x-coordinate corresponds to a point at A_x, A_z, multiply the point coordinate by:

B_x = A_x \frac{B_z}{A_z}


where
  • B_x is the screen x-coordinate,
  • A_x is the model x-coordinate,
  • B_z is the focal length (the axial distance from the camera center to the image plane), and
  • A_z is the subject distance.

Because the camera is in 3D, the same formula works for the screen y-coordinate, substituting y for x in the equation above.
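The similar-triangles formula of this section reduces to a one-liner (hypothetical names):

```python
def screen_coord(a, a_z, b_z):
    """Screen coordinate B for a model coordinate A at subject distance A_z,
    with focal length B_z: B = A * B_z / A_z. Works for x and y alike."""
    return a * b_z / a_z
```
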

References

  1. ^ Subhashis Banerjee (2002-02-18). "The Weak-Perspective Camera".
  2. ^ Alter, T. D. (July 1992). 3D Pose from 3 Corresponding Points under Weak-Perspective Projection (PDF) (Technical report). MIT.
  3. ^ Ingrid Carlbom, Joseph Paciorek (1978). "Planar Geometric Projections and Viewing Transformations" (PDF).
  4. ^ Riley, K. F. (2006). Mathematical Methods for Physics and Engineering.
  5. ^ Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Reading, Mass.: Addison-Wesley. pp. 146–148.
  6. ^ Sonka, M.; Hlavac, V.; Boyle, R. (1995). Image Processing, Analysis & Machine Vision (2nd ed.). Chapman and Hall. p. 14.

External links

  • A case study in camera projection
  • Creating 3D Environments from Digital Photographs

Further reading

  • Kenneth C. Finney (2004). 3D Game Programming All in One. Thomson Course. p. 93.
  • Ralph Koehler. 2D/3D Graphics and Splines with Source Code.