RAID Levels, Optical Drives, and Graphics Technology
RAID Levels: Performance and Reliability
Parity updates impose a penalty on writes.
• RAID 4: Levels 4 and 5 use a technique called independent access: each disk in the array operates independently, so separate I/O requests can be serviced in parallel. In both cases the data strips are relatively large, and a parity strip is calculated bit by bit from the corresponding strips of each data disk. In RAID 4, the parity strips are stored on a dedicated parity disk, which becomes a bottleneck.
• RAID 5: Similar to RAID 4, but distributes the parity strips across all disks, typically in a cyclic pattern, removing the bottleneck of RAID 4.
• RAID 6: Two different parity calculations are carried out and stored in separate blocks on different disks, so an array of N data disks requires N+2 disks. The data can be recovered even if two disks fail at once.
• RAID 0+1 and 1+0: More reliable; they combine striping and mirroring, so one striped set is a mirror of the other.
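A minimal sketch (in Python, with made-up strip contents) of the bit-by-bit parity calculation used by RAID 4/5, and of how the strip of a failed disk is rebuilt from the surviving strips plus the parity:

```python
from functools import reduce

def parity_strip(strips):
    """Bit-by-bit XOR of the corresponding strips gives the parity strip."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

def rebuild_strip(surviving_strips, parity):
    """A lost strip is recovered by XOR-ing the parity with the surviving strips."""
    return parity_strip(list(surviving_strips) + [parity])

# Hypothetical 4-byte strips on three data disks. RAID 4 would keep the parity
# strip on a dedicated parity disk; RAID 5 rotates it across all disks.
d0, d1, d2 = b"\x0f\x10\xaa\x01", b"\xf0\x02\x55\x02", b"\x33\x04\xff\x03"
p = parity_strip([d0, d1, d2])

# Simulate the failure of the disk holding d1 and recover its strip.
assert rebuild_strip([d0, d2], p) == d1
```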
Summary:
a) With respect to reliability: RAID 0 has no redundancy, so strictly speaking it is not RAID. RAID 1 duplicates all the information (mirroring). The other RAID levels provide reliability through error-detection and correction schemes without duplicating the information: they carry less redundant data than RAID 1 and are therefore less reliable, but they offer the best value for money in terms of data safety. RAID 2, 3, and 4 store the redundant information on a single dedicated disk; RAID 5 and 6 distribute it across all disks.
b) With respect to performance: RAID 2 and 3 perform a single I/O operation in parallel across all disks, using very small strips, which increases the transfer rate (and lowers the response time) of each operation. RAID 4, 5, and 6 use larger strips and allow several I/O operations to proceed concurrently, resulting in a higher I/O request rate.
Optical Drives: Evolution and Technology
Optical Drive – Emerged in 1983 with the appearance of the compact disc (CD) for digital audio (originally up to 74 minutes). Its success facilitated the development of low-cost CD optical memory technology.
CD-ROM: Compact Disc Read-Only Memory, based on the technology of the audio CD. The digital information (data or audio, originally up to about 640 MB) is recorded as a series of microscopic pits in a reflective surface, which is protected by a thin film of clear lacquer. From a master disc, recorded with a high-intensity laser, copies are made by a stamping process. The information on the disc is read back by a low-power laser in the drive (150 KB/s at 1X). To maximize capacity, the disc was initially rotated at constant linear velocity (CLV). However, the continuing need to increase rotation speed to achieve higher transfer rates led, from around 12X, to the use of constant angular velocity (CAV), as in magnetic devices. There are also hybrid solutions.
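As a rough illustration of those figures (assuming a simple linear scaling of the 150 KB/s base rate and the 640 MB capacity mentioned above; the drive speeds chosen are just examples):

```python
# Illustrative arithmetic only: a 1X CD-ROM drive delivers 150 KB/s and an NX
# drive scales that rate roughly linearly; 640 MB is the capacity quoted above.
BASE_RATE_KB_S = 150
DISC_CAPACITY_MB = 640

for speed in (1, 4, 12, 52):
    rate_kb_s = BASE_RATE_KB_S * speed
    minutes = DISC_CAPACITY_MB * 1024 / rate_kb_s / 60
    print(f"{speed:>2}X -> {rate_kb_s:>5} KB/s, full disc in about {minutes:.1f} min")
```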
Advantage over magnetic media: cost. Disadvantages: read-only access and longer access times.
Subsequent revisions of the format: CD-R (WORM: write once, read many) and CD-RW (rewritable). Evolution of optical media: the DVD, initially read-only. Same size as the CD but with much greater capacity (4.7 GB per layer), thanks to a shorter laser wavelength that can focus the beam on smaller spots, allowing smaller pits and more tightly packed tracks, and up to two recording layers per side. Further evolution: DVD-R, DVD-RAM, DVD-RW, DVD+R, DVD+RW.
Graphics Technology: Pixels and Color Models
(3.3) Graphics: Today, all systems are based on pixel graphics. An image is generated as an array (matrix) of pixels. The sequence of pixels is stored in a memory region called the frame buffer. The depth of the frame buffer is the number of bits used to store the information of each pixel; it determines the maximum number of different colors that can be represented. The resolution of the frame buffer, the total number of pixels, determines the final image detail.
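A small illustrative calculation (the resolution and depths are example values, not taken from the text) of how depth and resolution determine the number of representable colors and the raw frame-buffer size:

```python
def framebuffer_stats(width, height, depth_bits):
    """Number of representable colors and raw frame-buffer size in bytes for a
    given resolution (total pixels) and depth (bits per pixel)."""
    colors = 2 ** depth_bits
    size_bytes = width * height * depth_bits // 8
    return colors, size_bytes

# Example: 1024x768 at 8 and 24 bits per pixel.
for depth in (8, 24):
    colors, size = framebuffer_stats(1024, 768, depth)
    print(f"{depth:>2} bpp -> {colors:>10,} colors, {size / 2**20:.2f} MiB")
```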
Color Representation
The color we perceive in objects is the result of the interaction of light with the medium. Light is a form of electromagnetic radiation, so a color can be described by a function C(λ) of the wavelength λ. A monitor capable of reproducing all possible colors is beyond current capabilities, but since the human eye cannot distinguish more than a limited number of colors (on the order of 10,000,000), in practice three primary colors are used.
Additive Color Model: A color is considered to be formed by the combination or sum of 3 primary colors, adjusting the intensity of each one individually. Using red (R), green (G), and blue (B) as primary colors, a number of colors close to those distinguished by the human eye can be represented: the RGB model, used in monitors -> C = T1·R + T2·G + T3·B, where T1, T2, T3 are the intensities of the three primaries.
Subtractive Color Model: Used in printing. It begins with a white surface, from which the 3 primary colors are subtracted. Normally, cyan (C), magenta (M), and yellow (Y) are used: CMY model.
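A minimal sketch of the two models, assuming the common idealized conversion in which each subtractive primary is simply the complement of an additive one (intensities normalized to [0, 1]):

```python
def rgb_to_cmy(rgb):
    """Idealized conversion from the additive (RGB) to the subtractive (CMY)
    model: each subtractive primary is the complement of an additive one."""
    r, g, b = rgb
    return (1.0 - r, 1.0 - g, 1.0 - b)

def cmy_to_rgb(cmy):
    """Inverse conversion, starting again from white."""
    c, m, y = cmy
    return (1.0 - c, 1.0 - m, 1.0 - y)

# A color in the additive model is the triple of intensities (T1, T2, T3)
# applied to the R, G and B primaries.
orange = (1.0, 0.5, 0.0)
print(rgb_to_cmy(orange))                      # -> (0.0, 0.5, 1.0)
assert cmy_to_rgb(rgb_to_cmy(orange)) == orange
```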
Color Handling in Graphics
Two approaches:
1) RGB Model: Classical, more difficult to implement than the indexed option because of the greater memory demand. In current systems, memory is no longer a problem. For each pixel, all three RGB components are saved. Conceptually, it is like having a separate frame buffer for each of the components.
2) Indexed Color: Useful when there is limited space for the framebuffer. It involves choosing a color palette or table, i.e., a subset of the total available colors. The bits per pixel do not specify the RGB color of the pixel, but the color index in the palette of colors being used.
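A minimal sketch of indexed color (the palette and frame-buffer contents are invented for the example): the frame buffer stores small palette indices instead of full RGB triples, and each index is looked up in the palette when the pixel is displayed.

```python
# Hypothetical 4-entry palette: with only 4 colors, 2 bits per pixel suffice.
palette = [
    (0, 0, 0),        # index 0: black
    (255, 255, 255),  # index 1: white
    (255, 0, 0),      # index 2: red
    (0, 0, 255),      # index 3: blue
]

# A tiny 2x4 "frame buffer" holding palette indices, not RGB values.
framebuffer = [
    [0, 1, 2, 3],
    [3, 2, 1, 0],
]

# On display, each index is resolved through the palette to the actual RGB color.
image = [[palette[i] for i in row] for row in framebuffer]
print(image[0][2])   # -> (255, 0, 0)
```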
Output Devices: Monitors and Display Technology
Output Devices – The most common graphic output system is the monitor or screen (display). A monitor is a peripheral device that displays the contents of the frame buffer. Its most common parameters are: resolution (number of pixels represented on the screen, horizontally and vertically), refresh rate (frequency at which the value of each pixel on the screen is updated), and dot size (dot pitch) – a smaller dot size produces sharper images.
Until recently, the most common type of screen was based on cathode ray tubes (CRTs).
Initially, they were vector monitors: the electron beam is directed only to the areas of the screen where something is drawn, one element (point, line, or character) at a time. The image is stored in memory as a list of drawing commands, and successive redraws keep the screen at a minimum refresh rate of 50 Hz. They offered great clarity in lines, but a very limited representation capacity.
Since the 1980s, the dominant technology has been pixel-based: the frame-buffer pixels are represented as points on the surface of the screen at a refresh rate high enough to prevent flicker (between 50 and 85 Hz), with or without interlacing. Three different luminous materials emit red, green, and blue (RGB) light, respectively, and a separate electron beam is launched for each of the three colors. Two technologies: shadow mask and aperture grille.
In recent years, displays based on liquid crystal have become popular. They exploit properties of certain liquid crystals: they change the polarization of the light that passes through them, and their orientation can be controlled by applying an electric field. Their main advantages are that they are thinner, lighter, and consume much less power.
Key technologies:
a) Passive Matrix: Integrated circuits connected in a grid select, one by one, the pixels to activate. Now in disuse because of its problems: slow response time (causing trails), poor contrast caused by imprecise voltage control, and a rather limited viewing angle.
b) Active Matrix: Each pixel is given its own memory by adding a thin-film transistor (TFT), basically a small transistor plus a capacitor, which keeps the pixel's charge between refreshes. Compared with a passive matrix it offers increased brightness, smoother images, a wider viewing angle, and a shorter response time, but also a higher cost.
Graphics Cards: Processing and Rendering
Graphics Card – The device that provides the graphics capabilities of the system. It provides the interface between the system and the screen:
a) System – Graphics Card: ISA, VESA, PCI, AGP, PCI-Express…
b) Graphics Card – Display: VGA analog output, DVI digital output.
Modern cards are taking on more and more of the rendering work, incorporating large amounts of memory and increasingly powerful processors of their own (GPUs).
The graphics card can work in different video modes, which can be divided mainly into graphics modes and text modes. In text mode, the output consists only of ASCII characters, while in a graphics mode any image can be represented as a bitmap.
Different video modes differ mainly in the resolution, refresh rate, and color depth offered. Normally, higher resolutions and greater color depth are preferred.
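A rough, illustrative comparison of the frame-buffer memory some typical video modes would need (the mode list is hypothetical; real cards also reserve memory for other purposes):

```python
# Raw frame-buffer size per mode: pixels (or character cells) times bytes per element.
modes = [
    ("text 80x25",        80 * 25 * 2,     "2 bytes per cell (character + attribute)"),
    ("640x480 @ 8 bpp",   640 * 480 * 1,   "256 indexed colors"),
    ("1024x768 @ 24 bpp", 1024 * 768 * 3,  "true color"),
]

for name, size_bytes, note in modes:
    print(f"{name:<18} {size_bytes / 2**10:>8.1f} KiB  ({note})")
```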
Graphics information has gained importance over the years: increased demand for detail and realism in applications in fields such as simulation, gaming, animation, and rendering.
Graphics Pipeline
Graphics Pipeline: The process used to generate a 2D image from: a virtual camera, a set of light sources, a set of objects in 3D geometry, textures, etc.
The final result of the rendering process is the image generated by the objects in the scene. The outcome depends on: position and shape (geometry of objects, the camera used, and the characteristics of the environment) and appearance (material properties, light sources, textures, and lighting models used).
Pipeline: a chain of several independent stages. The idea is to speed up production (up to n times with n stages); the slowest stage determines the maximum rate of the pipeline.
Three different conceptual steps or phases in the graphics pipeline:
- Application (purely software – geometry is generated for the following stages, collision detection, etc.)
- Geometry (most operations on vertices and polygons; normally implemented as a pipeline of its own. More and more of the tasks that make up this phase are implemented directly in graphics hardware)
- Raster rendering
Speed (fps) is determined by the slowest task/stage in the pipeline.
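A toy calculation (stage times invented for the example) showing why, in a pipelined design, the slowest stage rather than the sum of all stages sets the frame rate:

```python
# Three conceptual stages with hypothetical per-frame times in milliseconds.
stage_ms = {"application": 4.0, "geometry": 6.0, "rasterization": 10.0}

sequential_fps = 1000.0 / sum(stage_ms.values())   # no overlap between stages
pipelined_fps = 1000.0 / max(stage_ms.values())    # stages overlap; the bottleneck dominates

print(f"sequential: {sequential_fps:.1f} fps, pipelined: {pipelined_fps:.1f} fps")
```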