Abstract - This paper theoretically investigates the fundamental resources of HDR image sensors that allow their performance to be customized and accelerated. We focus on CMOS sensors and discuss multiple-exposure HDR, which merges several low dynamic range frames captured with increasing exposures. We propose a fuzzy logic version of the multiple-capture weighted averaging HDR that can be linguistically customized by the user. We also reconsider the Burst Readout Multiple Exposure in order to obtain a Cumulative BRME, increasing the sensor's speed and enabling self-adaptive exposure. The increasing-exposure low dynamic range frames are generated by successive non-destructive readouts of the CMOS sensor during its continuous electrical loading, with no resets, along a single shutter cycle.

Index Terms - CMOS image sensors, fuzzy logic, image fusion, multi-frame high dynamic range, cumulative burst readout.

I. INTRODUCTION

The image sensors' field has been particularly dynamic in recent years. Scientific, documentary and artistic imagery - videos or still photos - all need high-resolution, high-speed, 3D and/or multi-spectral image sensors with ever-increasing performance. A great deal of IT technology, as well as conventional automatic control and artificial intelligence applications, relies on image processing techniques.

As a rule of thumb, we can consider that image sensor technology is now able to match, individually, most of the parameters of human sight: resolution, color rendition, speed, and dynamic range [1]. The foveated feature of human sight can be matched, if needed, by foveated sensors [2], which enhance the quality of the image capture around a fixation point corresponding to the center of the eye's retina, the fovea. Stereoscopy does not impose special constraints either. On the other hand, reaching all the above quality levels in one single sensor is not easy.
Speed, as well as high resolution and color rendition, may be achieved by generic image sensors, CCD or CMOS, but when high dynamic range pictures (HDR) or wide dynamic range videos (WDR) are needed, special techniques must be applied. The dynamic range (DR) of an image is the ratio between the largest and the smallest possible values of the lighting, measured in exposure values (EV) or decibels (dB). Human sight's DR is close to 90 dB, meaning that we can see objects in starlight (with reduced color differentiation) or in bright sunlight, although we need a few seconds to adjust our eyes to different light levels. We expect our visual display systems (image acquisition, processing and rendition) to be able to capture both shadow details in dark scenes and bright areas of sunny scenes, with an even distribution of the resolution and of the DR all over the sensor's surface. Catching in the same frame the textures of the bride's shining white dress and of the groom's dark suit is already a good reward, but we must consider that serious scientific work or essential businesses can rely on a photo camera's DR. HDR's goal is to capture as much information as possible from the photographic subject, from dark areas and bright areas at the same time.

Our purpose is to figure out the development mainstream of HDR image sensors for the coming years and to identify fundamental solutions able to push up their performance.

II. HIGH DYNAMIC RANGE SENSORS AND CAMERAS

A. Desktop HDR

The first commercial desktop HDR function was introduced by Adobe Photoshop CS2 [1]. Conventional desktop HDR needs multiple standard exposures of the same scene (low dynamic range, LDR). The frames representing the same subject with different exposures are obtained by exposure bracketing, which is a common function on most photo cameras. The camera must be mounted on a tripod, the subject must remain still, and each exposure needs its own shutter cycle.
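The dB and EV figures quoted above are two conventions for the same ratio and are straightforward to convert between; a minimal sketch (in Python, not part of the original paper) of both definitions:

```python
import math

def dynamic_range_db(v_max, v_min):
    # DR in decibels: 20 * log10 of the ratio of largest to smallest signal
    return 20.0 * math.log10(v_max / v_min)

def dynamic_range_ev(v_max, v_min):
    # DR in exposure values (photographic stops): log2 of the same ratio
    return math.log2(v_max / v_min)

# The ~90 dB figure quoted for human sight corresponds to a signal ratio of
# 10**(90/20), i.e. roughly 31623:1, or about 15 stops:
ratio = 10 ** (90 / 20)
print(round(dynamic_range_db(ratio, 1.0)))      # 90
print(round(dynamic_range_ev(ratio, 1.0), 1))   # 14.9
```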
Multi-frame imaging algorithms were initiated by Paul Debevec in 1997 [3]. The exposure X (J/m^2) is defined as the product of the irradiance E and the exposure time Δt, at a certain wavelength of the light. After capturing and digitizing the image signal, the information of each pixel is codified by a number Z_i, with i a spatial index over pixels. The nonlinear link between the exposure and the pixel value is the characteristic curve of the sensor, Z_i = f(X_i). If we know f, we can compute the exposure of each pixel, X_i = f^{-1}(Z_i), and the irradiance of each point of the subject's surface, E_i = X_i / Δt. The input information of Debevec's algorithm is embedded into the LDR frames issued by exposure bracketing with exposure times Δt_j, with still illumination and position of the subject. The sensor reciprocity equation is:

Z_ij = f(E_i · Δt_j)

Adaptive and Fast HDR Image Sensors – Resources of the CMOS Multiple Exposure
M.M. Balas
Aurel Vlaicu University of Arad, Romania
e-mail: marius.balas@ieee.org
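The per-pixel recovery described above (X_i = f^{-1}(Z_i), then E_i = X_i / Δt, averaged over the bracketed frames with weights favouring mid-range pixel values) can be sketched as follows. Note the response curve f is assumed here to be a simple gamma law purely for illustration; Debevec's algorithm actually recovers f from the frames themselves:

```python
# Hedged sketch of multi-frame irradiance recovery. The gamma response,
# the hat weighting and all numeric values are illustrative assumptions,
# not taken from the paper.
GAMMA = 2.2

def f(x):
    # Assumed sensor response: normalized exposure X -> pixel value Z
    return x ** (1.0 / GAMMA)

def f_inv(z):
    # Inverse response: pixel value Z -> exposure X = f^{-1}(Z)
    return z ** GAMMA

def weight(z):
    # Hat weight: trust mid-range pixel values, distrust near 0 and 1
    return 1.0 - abs(2.0 * z - 1.0)

def merge_irradiance(pixels, dts):
    """Weighted average of E = f_inv(Z_j) / dt_j over the bracketed frames."""
    num = sum(weight(z) * f_inv(z) / dt for z, dt in zip(pixels, dts))
    den = sum(weight(z) for z in pixels)
    return num / den

# One scene point of true irradiance E, captured at three exposure times:
E_true = 0.25
dts = [0.5, 1.0, 2.0]
pixels = [f(min(E_true * dt, 1.0)) for dt in dts]  # simulated LDR pixel values
E_est = merge_irradiance(pixels, dts)
print(abs(E_est - E_true) < 1e-9)  # True: the merge recovers the irradiance
```

With no clipping and a perfectly known f, each frame yields the same E and the weighted average recovers it exactly; in practice, the weighting matters because under- and over-exposed frames contribute noisy or saturated values.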