Fundamentals of CMOS Image Sensors and Understanding Key Parameters
Source: Shenzhen Kai Mo Rui Electronic Technology Co., Ltd. | 2026-04-17
Working Principle of CMOS Image Sensors & Key Concepts

Working Principle of CMOS Image Sensors

Each CMOS pixel consists of a photodiode, a floating diffusion node, a transfer gate, a MOSFET for amplification, and a MOSFET acting as a pixel selection switch.
During the exposure phase of the CMOS, the photodiode performs photoelectric conversion and generates signal charge. After exposure, the transfer gate opens, and the signal charge is transferred to the floating diffusion layer.
The amplifying MOSFET's gate picks up this charge, converting the charge signal into a voltage signal. In this way, a CMOS sensor accomplishes three core functions: photoelectric conversion, charge-to-voltage conversion, and analog-to-digital conversion. It converts light into electrical signals, and finally into digital signals readable by computers, giving us the ability to record light intensity.

However, this alone yields only grayscale imaging. To capture color, modern color CMOS sensors add a Color Filter Array (CFA) on top of the monochrome sensor. The best-known design is the **Bayer CFA (Bayer Color Filter Array)**.

An interesting fact: the CMOS sensor, which records light and shadow, works in the opposite direction to a display, which outputs light and shadow. The sensor converts light into electrical signals and records them digitally, while a display converts decoded digital signals back into light. This photoelectric conversion forms the foundation of digital imaging.

Mainstream CMOS Manufacturers

Current mainstream CMOS manufacturers include Sony, Samsung, OmniVision (豪威), GalaxyCore (格科微), SmartSens (思特威), onsemi (安森美), and others.

Common Color Filter Arrays & ISP Processing

A typical color filter array is RGGB: one red filter, one blue filter, and two green filters per 2×2 tile. Each pixel detects only one color channel, so to output a full RGB value for every pixel, the missing channels must be interpolated from neighboring pixels. This interpolation (demosaicing) is performed by the ISP (Image Signal Processor), which computes a complete RGB value for each pixel. Although each pixel detects only one color channel, after ISP processing every output pixel contains full RGB information. The unprocessed light-sensing data is called RAW data.
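To make the demosaicing step concrete, here is a minimal bilinear-interpolation sketch for an RGGB Bayer mosaic. This is a simplified stand-in for a real ISP pipeline (which uses far more sophisticated edge-aware algorithms); the function name and the 2×2 tile layout are assumptions for illustration:

```python
import numpy as np

def demosaic_rggb(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (height and width even).
    Assumed layout per 2x2 tile:  R G
                                  G B
    Returns an (H, W, 3) float RGB image."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    mask = np.zeros((h, w, 3))
    # Each pixel measures exactly one channel; scatter it into its plane.
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]; mask[0::2, 0::2, 0] = 1  # R
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]; mask[0::2, 1::2, 1] = 1  # G
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]; mask[1::2, 0::2, 1] = 1  # G
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]; mask[1::2, 1::2, 2] = 1  # B
    # Fill the missing samples of each plane by averaging measured neighbors.
    for c in range(3):
        p = np.pad(rgb[:, :, c], 1, mode="edge")
        m = np.pad(mask[:, :, c], 1, mode="edge")
        num = sum(p[i:i+h, j:j+w] for i in range(3) for j in range(3))
        den = sum(m[i:i+h, j:j+w] for i in range(3) for j in range(3))
        est = num / np.maximum(den, 1)
        rgb[:, :, c] = np.where(mask[:, :, c] == 1, rgb[:, :, c], est)
    return rgb
```

On a uniform gray scene every interpolated channel simply reproduces the measured value, which is a quick sanity check for the pattern indexing.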

RCCC: 75% of the area is transparent for light transmission, and the remaining 25% consists of red-sensitive filters.
The advantage of RCCC is its high light sensitivity, making it suitable for low-light environments.
Since RCCC only has red light filters, it is mainly used in scenarios sensitive to red indicators, such as traffic light detection.

RCCB: 50% of the area is clear for light transmission, with the remaining 50% split between red filters (25%) and blue filters (25%). RCCB has slightly lower low-light sensitivity than RCCC (fewer clear pixels) but offers superior color discrimination. The captured images can be used both for machine analysis and for direct human viewing.

Mono: 100% transparent transmission. It cannot distinguish colors. The Mono configuration offers the highest low-light sensitivity and is only used in scenarios with no color recognition requirements, such as driver monitoring systems.
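As a rough way to compare the four configurations above, one can assume sensitivity scales with the fraction of clear (unfiltered) pixel area, and treat a color filter as passing roughly one third of the light a clear pixel receives. Both assumptions are illustrative simplifications, not datasheet values:

```python
# Clear vs. filtered area fractions for the CFA variants discussed above.
CFA_LAYOUTS = {
    "Mono": {"clear": 1.00, "filtered": 0.00},
    "RCCC": {"clear": 0.75, "filtered": 0.25},
    "RCCB": {"clear": 0.50, "filtered": 0.50},
    "RGGB": {"clear": 0.00, "filtered": 1.00},
}

def relative_sensitivity(layout, filter_factor=1/3):
    """Crude relative low-light sensitivity: clear area counts fully,
    filtered area is discounted by `filter_factor` (an assumed value)."""
    return layout["clear"] + filter_factor * layout["filtered"]

for name, layout in CFA_LAYOUTS.items():
    print(f"{name}: {relative_sensitivity(layout):.2f} (vs Mono = 1.00)")
```

The resulting ordering (Mono > RCCC > RCCB > RGGB) matches the qualitative ranking given in the text.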

Understanding Key Parameters of CMOS Image Sensors

1. Sensor Size
A larger sensor makes the imaging system bigger, but it captures more photons, delivers better photosensitivity, and achieves a higher signal-to-noise ratio. Common CMOS sensor sizes include 1-inch, 2/3-inch, 1/2-inch, 1/3-inch, 1/4-inch, etc.

2. Total Pixels and Effective Pixels
Total pixels are the sum of all pixels on the chip, one of the key specifications of a CMOS image sensor. Effective pixels are the subset actually used for photoelectric conversion and image signal output; they directly determine the sensor's imaging performance.

3. Dynamic Range
Dynamic range is determined by the sensor's signal handling capability and its noise, reflecting its operating range. It is defined as the ratio of the peak signal voltage to the RMS noise voltage at the output, usually expressed in **dB**.

4. Resolution
The ability to distinguish fine bright and dark details in a scene.

5. Pixel Size (Pixel Pitch)
The physical size of each pixel in the chip's pixel array. Common sizes: 14 μm, 10 μm, 9 μm, 7 μm, 6.45 μm, 3.75 μm, 3.0 μm, 2.0 μm, 1.75 μm, 1.4 μm, 1.2 μm, 1.0 μm, etc. Pixel size reflects the chip's light response capability: larger pixels collect more photons and generate more charge under the same illumination and exposure time. For low-light imaging, pixel size is a direct indicator of chip sensitivity.

6. Sensitivity
A critical parameter with two physical meanings:
1. Photoelectric conversion efficiency (responsivity): output signal voltage or current per unit exposure within a specific spectral range. Units: nA/lux, V/W, V/lux, V/lm.
2. **Minimum detectable radiant power or illuminance** (detectivity). Units: W (watts) or lux.

7. Number of Defective Pixels
Due to manufacturing limitations, it is nearly impossible for a multi-megapixel sensor to have zero defective pixels.
A defective pixel is one that cannot image properly or whose response deviates beyond the allowed tolerance. The defective pixel count is a key quality metric for the sensor chip.

8. Spectral Response
The chip's responsivity to light of different wavelengths.

Technology Trends
Miniaturization and high pixel counts remain major industry goals. Smaller pixels enable higher resolution, better sharpness, smaller module size, and wider applications.

9. CRA (Chief Ray Angle)
The maximum angle at which light from the lens can focus onto a pixel. CRA is near 0° at the optical axis and increases with distance from it, so it depends on the pixel's position on the sensor. If the lens CRA does not match the sensor CRA, shading and color shift will occur.

10. Dynamic Range (restated)
Measures the sensor's ability to capture bright and dark objects simultaneously in one frame, typically defined as the logarithm of the ratio between the maximum bright signal and the minimum distinguishable dark signal.

11. IR Cut (Infrared Cut Filter)
Without an IR cut filter, images show a severe red cast that cannot be corrected in software.

12. Shutter Types: Global Shutter vs. Rolling Shutter
These correspond to global and rolling exposure modes.
- Rolling Shutter: exposes line by line.
- Global Shutter: exposes all pixels simultaneously. Each pixel includes an integrated storage cell, so moving objects image without distortion.

13. Pixel Technologies: FSI, BSI, Stacked, and Brand Technologies
FSI (Front-Side Illumination)
Light passes through the metal wiring layers before reaching the photodetector, causing light loss. The traditional (FSI) CMOS stack, top to bottom, is: microlens → color filter → wiring layer → photodetector; light must pass through openings in the wiring, leading to loss.

BSI (Back-Side Illumination)
Light does not pass through the metal interconnect layers, greatly improving sensitivity: BSI places the photodetector layer **above** the wiring layer, allowing direct light incidence and reducing reflection and loss. BSI outperforms FSI in brightness and clarity under low light, but production is more difficult, yield lower, and cost higher.

Stacked Structure
An advanced evolution of BSI: all wiring and logic circuits are moved beneath the photodetector layer, maximizing the light-receiving area and shrinking the chip. Relocating the logic circuits to the bottom also minimizes interference and improves noise suppression. Stacked chips are physically smaller than BSI chips at the same resolution, at the price of higher process difficulty and cost and lower yield. Examples: Sony IMX214 (stacked), IMX135 (BSI).

Brand Pixel Technologies
- Sony STARVIS: BSI-based technology for surveillance cameras, with high imaging performance in visible and near-infrared light.
- Sony Pregius: global shutter pixel technology for industrial and machine vision sensors.
- Samsung Tetracell: 4-in-1 pixel binning technology.
- Samsung ISOCELL: BSI-based, with physical isolation between pixels to reduce crosstalk.
- OmniVision PureCel: BSI with on-chip 4-cell binning.
- OmniVision OmniBSI: BSI with compact pixels and reduced crosstalk.
- SmartSens SmartGS: BSI-based global shutter technology.
- SmartSens SmartPixel: BSI-based rolling shutter technology for security applications.
- SmartSens SmartClarity: BSI-based, with excellent night vision performance.

14. Transmission Interfaces
- MIPI: Mobile Industry Processor Interface, an open standard from the MIPI Alliance. Serial, high speed, interference-resistant, and the current mainstream.
- LVDS: Low-Voltage Differential Signaling.
- DVP: a parallel interface with low speed and limited bandwidth.
- Parallel: parallel data, typically 12-bit data plus horizontal/vertical sync and clock.
- HiSPi: High-Speed Pixel Interface, serial.
- SLVS-EC: defined by Sony for high-frame-rate, high-resolution capture. The receiver converts the high-speed serial data into DC (Digital Camera) timing for VICAP (Video Capture). Offers higher bandwidth, lower power, lower data redundancy, and more stable transmission.

15. Package Types
- BGA: Ball Grid Array, a surface-mount package.
- LGA: Land Grid Array.
- PGA: Pin Grid Array.
- CSP: Chip Scale Package.
- COB: Chip On Board; the die is attached to the substrate with conductive or non-conductive adhesive and interconnected by wire bonding.
- Fan-out: Fan-out Wafer-Level Package.
- PLCC: Plastic Leaded Chip Carrier, surface-mount.
- TSV: Through-Silicon Via. Not a complete packaging scheme but a key enabling technology for high-density die-to-die and die-to-wafer interconnection.
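Two of the quantitative definitions above lend themselves to a short worked example: the dB definition of dynamic range (parameters 3 and 10) and the data rate a transmission interface must carry (parameter 14). The full-well, read-noise, per-lane-rate, and overhead figures below are illustrative assumptions, not values from any specific sensor or from the MIPI specification:

```python
import math

def dynamic_range_db(peak_signal, rms_noise):
    """Dynamic range as defined above: 20 * log10(peak / RMS noise), in dB."""
    return 20 * math.log10(peak_signal / rms_noise)

def mipi_lanes_needed(width, height, bits_per_pixel, fps,
                      lane_gbps=1.5, overhead=1.1):
    """Estimate how many serial interface lanes a video stream needs.
    `lane_gbps` (per-lane rate) and `overhead` (protocol framing margin)
    are assumed illustration values, not figures from the MIPI spec."""
    payload_gbps = width * height * bits_per_pixel * fps * overhead / 1e9
    return math.ceil(payload_gbps / lane_gbps), payload_gbps

# Hypothetical sensor: 10,000 e- full-well capacity, 3 e- read noise.
print(f"{dynamic_range_db(10_000, 3):.1f} dB")    # ~70.5 dB

# Hypothetical stream: 1920x1080, RAW10, 60 fps.
lanes, gbps = mipi_lanes_needed(1920, 1080, 10, 60)
print(f"{gbps:.2f} Gbit/s -> {lanes} lane(s)")
```

The same dynamic-range formula explains why larger pixels help in low light: a bigger full well raises the numerator while read noise stays roughly constant.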