

Important: This document is part of the Legacy section of the ADC Reference Library. This information should not be used for new development.


NOTE: This Technical Q&A has been retired. Please see the Technical Q&As page for current documentation.

Textures & BitMaps Explained

Q: What is the difference between Textures and Bitmaps?

A: In general conversation we may use the two terms interchangeably, but actual differences do exist:

BitMap

In graphics terms, a BitMap is essentially an array of one-bit pixels: each pixel is either on or off.

Texture

A Texture is a data structure that contains information for mapping a predefined image onto the surface of a model. A Texture usually uses a PixMap as the source for the image.

PixMap

A PixMap is essentially an array of pixels that may be of any depth (for example 8, 16, or 32 bits per pixel).

Both the BitMap and PixMap structures contain more than just pixel data: each carries a rowBytes value and a bounds rectangle, and a PixMap additionally records the pixel depth and color information. See the Quickdraw.h header (or its equivalent on other platforms), or any good graphics programming book, for the full definitions.
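
For reference, here is a simplified sketch of the two records. The field names follow the QuickDraw declarations, but several PixMap fields are omitted, and the Ptr, Rect, and CTabHandle types are replaced with simple stand-ins so the sketch is self-contained:

    /* Simplified sketch of the QuickDraw BitMap and PixMap records.
       These are not the full declarations from Quickdraw.h; several
       PixMap fields are omitted, and the Mac OS types are replaced
       with simple stand-ins so the sketch compiles on its own. */

    typedef unsigned char *Ptr;                               /* stand-in for the Mac OS Ptr type  */
    typedef struct { short top, left, bottom, right; } Rect;  /* stand-in for the QuickDraw Rect   */
    typedef void *CTabHandle;                                 /* stand-in for a color table handle */

    struct BitMap {
        Ptr    baseAddr;    /* pointer to the 1-bit-deep image data */
        short  rowBytes;    /* bytes per row of the image           */
        Rect   bounds;      /* boundary rectangle of the image      */
    };

    struct PixMap {
        Ptr        baseAddr;   /* pointer to the pixel data               */
        short      rowBytes;   /* bytes per row (high bits used as flags) */
        Rect       bounds;     /* boundary rectangle                      */
        short      pixelSize;  /* bits per pixel: 1, 2, 4, 8, 16, or 32   */
        short      cmpCount;   /* color components per pixel              */
        short      cmpSize;    /* bits per component                      */
        CTabHandle pmTable;    /* color table for indexed pixel formats   */
        /* ... resolution, packing, and other fields omitted ... */
    };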

A Texture is not necessarily a mapping of a PixMap; a Texture can also map dynamic data onto an object in a model (e.g., the TextureEyes sample uses a QuickTime movie as its source). The Texture is just where this data is stored. The information in the Texture is used by a shader, which combines it with other material properties, lights, position, and orientation. The shader is called as part of the rendering process.
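
As a rough illustration of the idea (not the actual QuickDraw 3D shader API), the following hypothetical sketch shows how a shader might look up a texel from a texture's pixel source for a given (u, v) coordinate on a surface and combine it with a diffuse lighting term:

    /* Hypothetical sketch of texture sampling during shading; the types
       and function names are illustrative, not from any real library.
       Assumes u and v are already clamped to the range 0..1. */

    #include <stdint.h>

    typedef struct {
        const uint32_t *pixels;   /* 32-bit ARGB texels, row by row */
        int             width;    /* texels per row                 */
        int             height;   /* number of rows                 */
    } Texture;

    /* Nearest-neighbor lookup: map (u, v) in [0, 1] to a texel. */
    static uint32_t SampleTexture(const Texture *tex, float u, float v)
    {
        int x = (int)(u * (tex->width  - 1) + 0.5f);
        int y = (int)(v * (tex->height - 1) + 0.5f);
        return tex->pixels[y * tex->width + x];
    }

    /* Scale the sampled texel by a diffuse light intensity (0..1). */
    static uint32_t ShadePixel(const Texture *tex, float u, float v, float intensity)
    {
        uint32_t texel = SampleTexture(tex, u, v);
        uint32_t r = (uint32_t)(((texel >> 16) & 0xFF) * intensity);
        uint32_t g = (uint32_t)(((texel >>  8) & 0xFF) * intensity);
        uint32_t b = (uint32_t)(( texel        & 0xFF) * intensity);
        return (texel & 0xFF000000u) | (r << 16) | (g << 8) | b;
    }

A renderer would call something like ShadePixel for each covered pixel during the shading step; when the texture's source is dynamic (as with a QuickTime movie), the pixels array simply points at the current frame.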

In the quote below, note that shading is the last of seven steps involved in rendering (a code sketch of these steps as a pipeline follows the quote):

"Rendering is a general term that describes the overall process of going from a database representation of a three-dimensional object to a shaded two-dimensional project on a view surface. It involves a number of separate processes:
  1. setting up a polygon model that will contain all the information which is subsequently required in the shading process;
  2. applying linear transformations to the polygon mesh model ...;
  3. culling back-facing polygons;
  4. clipping polygons against a view volume;
  5. scan converting or rasterizing polygons ... ;
  6. applying a hidden surface removal algorithm;
  7. shading the individual pixels using an interpolative or incremental shading scheme."
3D Computer Graphics, by Alan Watt (Addison-Wesley), page 127.
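
To make the flow concrete, here is a hypothetical outline of those seven steps as a software rendering pipeline. The type and function names are placeholders, not an actual API; only the order of operations is the point:

    /* Illustrative skeleton only: the types and helper functions are
       hypothetical placeholders for the seven steps quoted above. */

    typedef struct Model       Model;
    typedef struct Camera      Camera;
    typedef struct PolygonMesh PolygonMesh;
    typedef struct Framebuffer Framebuffer;

    PolygonMesh *BuildPolygonMesh(const Model *model);                     /* 1 */
    void TransformMesh(PolygonMesh *mesh, const Camera *cam);              /* 2 */
    void CullBackFaces(PolygonMesh *mesh, const Camera *cam);              /* 3 */
    void ClipToViewVolume(PolygonMesh *mesh, const Camera *cam);           /* 4 */
    void RasterizePolygons(const PolygonMesh *mesh, Framebuffer *fb);      /* 5 */
    void RemoveHiddenSurfaces(Framebuffer *fb);                            /* 6 */
    void ShadePixels(Framebuffer *fb, const Model *model);                 /* 7 */

    void RenderModel(const Model *model, const Camera *camera, Framebuffer *fb)
    {
        PolygonMesh *mesh = BuildPolygonMesh(model);   /* 1. set up the polygon model     */
        TransformMesh(mesh, camera);                   /* 2. apply linear transformations */
        CullBackFaces(mesh, camera);                   /* 3. cull back-facing polygons    */
        ClipToViewVolume(mesh, camera);                /* 4. clip against the view volume */
        RasterizePolygons(mesh, fb);                   /* 5. scan convert / rasterize     */
        RemoveHiddenSurfaces(fb);                      /* 6. hidden surface removal       */
        ShadePixels(fb, model);                        /* 7. shade the individual pixels  */
    }

Note that it is only in step 7 that the Texture is consulted: shading is where the texel data, material properties, and lighting come together.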

[Aug 21 1996]

