
INSTITUTO TECNOLÓGICO Y DE ESTUDIOS SUPERIORES DE MONTERREY

CAMPUS MONTERREY SCHOOL OF ENGINEERING

DIVISION OF MECHATRONICS AND INFORMATION TECHNOLOGIES

GRADUATE PROGRAMS

MODELING OF A CMOS ACTIVE PIXEL IMAGE SENSOR

TOWARDS SENSOR INTEGRATION WITH MICROFLUIDIC DEVICES

THESIS

MASTER OF SCIENCE WITH MAJOR IN ELECTRONICS ENGINEERING (ELECTRONICS SYSTEMS)

BY

MATIAS VÁZQUEZ PIÑÓN


Modeling of a CMOS Active Pixel Image Sensor

Towards Sensor Integration with Microfluidic Devices

by

Matias Vázquez Piñón

Thesis

School of Engineering

Division of Mechatronics and Information Technologies Graduate Programs

Master of Science with Major in Electronics Engineering (Electronics Systems)

Instituto Tecnológico y de Estudios Superiores de Monterrey Campus Monterrey

Monterrey, N.L. May, 2011


INSTITUTO TECNOLÓGICO Y DE ESTUDIOS SUPERIORES DE MONTERREY

CAMPUS MONTERREY SCHOOL OF ENGINEERING

DIVISION OF MECHATRONICS AND INFORMATION TECHNOLOGIES

GRADUATE PROGRAMS

MODELING OF A CMOS ACTIVE PIXEL IMAGE SENSOR

TOWARDS SENSOR INTEGRATION WITH MICROFLUIDIC DEVICES

THESIS

SUBMITTED AS PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF SCIENCE WITH MAJOR IN ELECTRONICS ENGINEERING (ELECTRONICS SYSTEMS)

BY

MATIAS VÁZQUEZ PIÑÓN

MONTERREY, N.L. MAY, 2011


To my family...

For supporting me when I decide, congratulating me when I succeed and advising me when I err.


Modeling of a CMOS Active Pixel Image Sensor

Towards Sensor Integration with Microfluidic Devices

Matias Vázquez Piñón, B.Sc.

Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011.

Advisor: Sergio Omar Martínez Chapa, Ph.D.

Instituto Tecnológico y de Estudios Superiores de Monterrey.

ABSTRACT

Recently, microfluidic devices have received considerable attention because of their many potential applications in medicine and environmental monitoring. In such systems, cells and particles suspended in fluids can be manipulated for analysis. On the other hand, solid-state imagers have been very successful in consumer electronic devices like digital still cameras and handy camcorders. Microfluidic systems are projected to develop more complex functions as they integrate electronic/optoelectronic sensors that can monitor the activity within microchannels.

This thesis presents research work on the modeling and simulation of CMOS active pixel sensors, providing some basis for their future integration with microfluidic devices. An overview of image sensors and a literature review of microfluidic systems integrating image sensors are presented. Different stages of a CMOS active pixel sensor are modeled, including the readout, buffer and selection circuits. Computer simulations are carried out demonstrating the functionality of every stage. Additionally, a 4 × 5 pixel array incorporating the addressing and reset signals was modeled and simulated.

Simulation results illustrate how the performance of the CMOS active pixel sensor can be adjusted to meet the specifications for scientific applications. A wide dynamic range is obtained by achieving a large full-well capacity for the photodiode and maximizing the gain of the source follower amplifier. Also, the fill factor is increased by reducing the size of the on-pixel transistors.


Contents

1 Introduction

2 Theoretical Background on Solid-State Image Sensors
  2.1 Introduction
  2.2 Human light perception
  2.3 The solid-state imaging process
    2.3.1 Absorption of light in semiconductors
    2.3.2 Charge collection
  2.4 Photodiodes
    2.4.1 Operation principle of photodiodes
    2.4.2 The photodiode full-well capacity model
  2.5 CMOS Image Sensors
    2.5.1 CMOS pixel structures

3 State-of-the-Art on Microfluidic Systems and Image Sensors Integration

4 Active Pixel Sensor Modeling and Simulation
  4.1 Introduction
  4.2 Pixel read-out circuit
    4.2.1 Reset transistor MRST
    4.2.2 Source Follower Amplifier
  4.3 Photodiode design
    4.3.1 Pn-junction capacitance
    4.3.2 Physical characteristics
  4.4 Pixel array simulation
    4.4.1 Simulation setup
    4.4.2 Results

5 Conclusions and future work

6 Appendix A

References

Vita


List of Figures

2.1 Solid-state imaging process [1]
2.2 Transmission and reflection of light in dielectric layers
2.3 Silicon absorption length of light [2]
2.4 The photodiode structure
2.5 Evolution of pixel size, CMOS technology node used to fabricate the devices and the minimum feature size of the most advanced CMOS logic process [3]
2.6 Passive CMOS pixel with a single in-pixel transistor [3]
2.7 Active CMOS pixel based on in-pixel amplifier [3]
2.8 Digital pixel sensor architecture

3.1 Integrated digital cytometer system components and architecture [4]
3.2 Photograph of the linear active pixel CMOS sensor [4]
3.3 Flip-chip on glass illustration of a hybrid microfluidic digital cytometer [4]
3.4 PDMS cast chamber, shown in a cross-sectional view, realizing a microfluidic channel passing through the structure and over the active area of the sensor chip, which is wire-bonded to a PCB [4]
3.5 The photodiode pixel linear arrays [5]
3.6 Post-bond image of the CMOS sensor on the microfluidic chip [5]
3.7 Plot of the CMOS sensor output upon detection of a 6 µm polystyrene polysphere [5]
3.8 Schematic of a photodiode-type CMOS active pixel sensor [6]
3.9 Comparison of images of microbeads on the chip surface taken by (a) a camera and (b) the contact imager; an overlapped view is also shown in (c) [6]
3.10 Schematic diagram of the modified active pixel circuit [7]

4.1 3T pixel configuration
4.2 Linear approximation of the ΔV_TH term as a function of the body-factor term
4.3 Boosted V_RST and resulting V_PD_RST
4.4 Source follower drain current and output voltage for different feature sizes of MCOL
4.5 Photodiode response in a dark environment at 33 ms integration time
4.6 Charge distribution effect on the photodiode's capacitance
4.7 Photodiode response in a dark environment at 3.3 ms integration time
4.8 Full discharge of the 50 fF ideal capacitance at (a) 30, (b) 300, (c) 3,000 and (d) 30,000 frames per second
4.9 Layout example of a square-shaped photodiode including on-pixel transistors
4.10 Pn-junction photodiode response in a dark environment at 33 ms integration time
4.11 Charge distribution effect on the pn-junction photodiode
4.12 Pn-junction photodiode response in a dark environment at 300 frames per second
4.13 Full discharge of the square-shaped NDIFF–PSUB photodiode with a C_PD = 58.44 fF pn-junction capacitance at (a) 30, (b) 300, (c) 3,000 and (d) 30,000 frames per second
4.14 Full discharge of the ideal photodiode with a C_PD = 58.44 fF capacitance at (a) 30, (b) 300, (c) 3,000 and (d) 30,000 frames per second
4.15 Timing signals for accessing and reading out pixels
4.16 Active pixel sensor array including horizontal, vertical and read-out circuitry
4.17 Simulation results for a single pixel of the 4 × 5 array; from top to bottom: reset pulse, select pulse, photodiode voltage, source follower output, column output
4.18 Vertical control signals: reset pulses and row selection
4.19 Output signals of the pixel array


List of Tables

3.1 Output characteristics and specifications of the linear active pixel sensor
3.2 Summary of sensor performance
4.1 Physical parameters of MRST for simulation of a photodiode with capacitance C_PD = 58.44 fF
4.2 Simulation results and percentage error of the full-discharge current of the ideal and pn-junction capacitances
4.3 Simulation results and percentage error of the full-discharge current of the ideal and pn-junction capacitances for the same capacitance values


Chapter 1

Introduction

In recent years, biomedical research has opened the possibility of developing ever smaller and more complex devices designed for a wide range of medical applications, from micro-machined accelerometers used to measure a patient's movement during rehabilitation therapy, to complete laboratories implemented on a single small piece of silicon that diagnose illnesses quickly and effectively. Today, micro-fabrication processes allow the implementation of not only electronic circuits but also mechanical and optoelectronic systems on the same silicon die. This capability widely extends the design possibilities for new devices that help diagnose illness, develop new drugs and build new kinds of sensors.

Of particular interest is the development of new micro-devices capable of performing clinical analysis using very small sample volumes of corporal fluids, tissues or even individual cells [7]. These devices, so-called Laboratories-on-a-Chip (LoCs), are promising instruments because of their high throughput and the possibility of performing such analyses without relying on significant laboratory infrastructure.

A lab-on-a-chip device integrates all the instruments required to perform a clinical analysis on a single die of very small size (a few cm²). This brings important advantages such as mass fabrication through a standard semiconductor process and, with this, a reduced fabrication cost; ease of transportation due to low volume and weight; low sample volume consumption; and shorter analysis times.

On the other hand, LoC technologies are new research areas with many possibilities for innovation. Laboratory miniaturization enables the development of sensors, instruments and microelectronic devices on the same die. LoC technology can be used in a wide variety of biological studies, for example the extraction of DNA from a blood sample, cell separation and characterization, detection of viruses, bacteria and cancer cells, and testing of new drugs, among others.

LoCs are possible thanks to research advances in Biological Microelectromechanical Systems (BioMEMS) where, as the name suggests, mechanical and electrical elements can be integrated in a single microsystem in order to analyze biological matter. Mechanical devices are used to manipulate the sample, and electrical devices are used to stimulate reactions and monitor results.

Such new micro-opto-electronic devices suggest the possibility of implementing a full image-sensing system where not only image capture but also the chip timing, control and image-processing circuitry is integrated onto the same silicon die. This allows a micro-camera to be customized for a particular application [8], such as LoC monitoring. These micro-devices are called Camera-on-a-Chip and are possible thanks to a relatively new image sensor technology called the Active Pixel Sensor (APS), which takes advantage of existing Complementary Metal-Oxide-Semiconductor (CMOS) manufacturing facilities. This fabrication option brings several advantages over the other main imaging technology, the Charge-Coupled Device (CCD), such as lower foundry cost, lower power consumption, lower power supply voltages, higher speed and smartness through on-chip signal processing [1]. These advantages make the camera-on-a-chip the perfect complement for LoCs to accomplish the objective of fast and low-cost clinical analysis.

Camera-on-a-chip technology is implemented using CMOS active pixel sensors. These image sensors have been the subject of extensive development and now share the market with CCD image sensors, which dominated the field of imaging for a long time [9]. CMOS image sensor technology finds many areas of application, including robotics and machine vision, guidance and navigation, and automotive systems [10]. Consumer electronic devices such as digital still cameras (DSC), mobile phone cameras, handy camcorders and digital single-lens reflex (DSLR) cameras are other applications of this technology. Moreover, scientific applications sometimes require additional functions like real-time target tracking or three-dimensional range finding; the devices designed for such applications are called smart CMOS image sensors [9].

In solid-state imaging, four important functions have to be realized: light detection, accumulation of photo-generated signals, switching from accumulation to readout, and scanning. The scanning function was proposed in the early 1960s by S. R. Morrison at Honeywell (the photoscanner) and by J. W. Horton et al. at IBM (the scanistor) [9]. After that, a solid-state image sensor with a scanning circuit using thin-film transistors (TFTs) as the photodetector was proposed by P. K. Weimer et al., and M. A. Schuster and G. Strull at NASA proposed the phototransistor (PTr) as well as switching devices to realize X–Y pixel addressing; they successfully obtained images with a fabricated 50 × 50-pixel array sensor [9].

The details of solid-state image sensors were published in the IEEE Transactions on Electron Devices in 1968, and almost at the same time the CCD was invented, in 1969, by W. Boyle and G. E. Smith at AT&T Bell Laboratories. The first commercial MOS imagers were produced in the 1980s. CCD image sensors were preferred over MOS image sensors because they offered superior image quality. Subsequently, efforts were made to improve the signal quality of MOS imagers by incorporating an in-pixel amplification mechanism, resulting in several amplified-type imagers proposed in the late 1980s, including the charge modulated device (CMD), floating gate array (FGA), base-stored image sensor (BASIS), static induction transistor (SIT) type, amplified MOS imager (AMI), and others.

Except for the AMI, these architectures required some modification of standard MOS fabrication technology; ultimately they were not commercialized and their development was terminated. The AMI, on the other hand, can be fabricated in standard CMOS technology without any modification, and its structure is that of the active pixel sensor (APS). The AMI uses an I–V converter as a readout circuit while the APS uses a source follower, though this difference is not critical [9].

CMOS APS technology has several advantages over CCD technology. Besides the already mentioned possibility of fabrication in a standard CMOS process, which results in a lower foundry cost, it also offers performance improvements such as low power consumption (100 to 1000 times lower), high dynamic range, a higher blooming threshold, individual pixel readout, low supply-voltage operation, high speed, large array sizes, radiation hardness and smartness [1].

High-resolution imaging applications such as professional photography, astronomical imaging, x-ray, TV broadcasting and machine vision require very large format image sensors. CCD image sensors have been fabricated in very large formats to support these applications (from 66 megapixels by Philips in 1997 to 111 megapixels by STA Inc. in 2007); however, large-format CCDs are very expensive and difficult to produce with the low defect densities needed for high-quality imaging. When the full-well capacity is increased and higher spectral response requirements are incorporated, the necessary pixel size (i.e., sensor size) makes the production of such CCDs extremely expensive. Furthermore, power consumption and the need for external support electronics make CCDs less attractive for those applications. On the other hand, CMOS APS technology has recently gained popularity in these image sensor segments with the recent advances in frame rates, noise levels and array formats. This was achieved by utilizing better image sensor architectures and design techniques, and by improvements in CMOS fabrication processes and pixel technologies [1].

Furthermore, the integration of micro-opto-electronic devices with LoCs has been demonstrated. Opto-electronic components placed directly over the LoC replace the task performed by a microscope in conventional laboratory analysis [7], providing a low-cost, portable microsystem for clinical analysis.

The work presented in this thesis concerns the realization of a CMOS pixel that can be used in the design of a complete CMOS APS camera-on-a-chip useful for contact imager applications such as micro-channel dielectrophoretic (DEP) analysis, cell-based characterization and others. The proposed pixel was designed using a standard 0.35 µm CMOS process that provides four metal layers, two poly layers, and a high-resistance poly layer. Pixel schematic circuits were implemented for parameter extraction and behavioral simulations. The design of the pixel is a modification of the methodology presented by S. U. Ay in [1].


Thesis outline

This document is organized as follows. Chapter 1 presents a general overview of the recent needs driving the development of biomedical micro-devices. It also explains how advances in MEMS and microelectronics have brought about the concepts of Lab-on-a-Chip and Camera-on-a-Chip, and the integration of both technologies to produce a complete clinical analysis system. This explanation includes the main applications, advantages and disadvantages.

Chapter 2 presents a theoretical framework for the design of CMOS image sensors. It begins with solid-state imaging concepts in order to describe the process, from photons impinging on the active area of the pixel, through the absorption of light, to charge collection. The main pixel architectures used for CMOS image sensors are described and one of them is selected for this application. Then the design of this architecture is presented, including the equations that will be used in later chapters.

Chapter 3 reviews the state-of-the-art literature on CMOS image and optical sensors with lab-on-a-chip applications. Some of the active pixel architectures using three and four transistors are described. The chapter also describes techniques used for sensor-microchannel coupling and packages used to avoid microelectronics breakdown due to microfluid manipulation. Advantages and disadvantages are discussed in detail.

In Chapter 4, a CMOS active pixel sensor is designed using the equations presented in Chapter 2. The design process begins in-pixel with the read-out circuit, including the reset transistor and the source follower amplifier. Then the photodiode characteristics, both electrical and physical, are determined and simulated. Simulation results for an ideal capacitance and for the designed pn-junction photodiode are presented and analyzed, and the optimum parameters are determined. Also, the simulation analysis for a complete active pixel is performed and its results are discussed.

Finally, Chapter 5 presents conclusions and future work.


Chapter 2

Theoretical Background on Solid-State Image Sensors

2.1 Introduction

In this chapter, the human perception of light and its similarity to solid-state image sensors are described, and a theoretical framework for solid-state imaging is given. The chapter describes the imaging process using a semiconductor material as the photo-sensing element, and includes the transmission and reflection phenomena due to the opaque materials used in today's fabrication processes. The chapter also illustrates the collection of photo-generated carriers produced by the incidence of photons on the sensitive region.

The main pixel structures described include the photo-sensitive elements available in a standard CMOS technology. The photodiode structure is shown in detail because this element is the most commonly used in the design of CMOS image sensors, and it was selected as the sensing element in this work.

2.2 Human light perception

The human eye is capable of detecting light within a wavelength range of 370 nm to 730 nm. This is due to two types of photo-detection cells: the rods and the cones. Both cell types are located in the retina; the rods at the periphery and the cones concentrated at its center. The rods are highly photosensitive but have poor color sensitivity, while the cones are highly color sensitive but poorly photosensitive. This means that rods are mainly used under low-light conditions (scotopic vision) at the expense of poor color perception, while cones are used under well-lit conditions, where color perception is better (photopic vision). In dark environments, the human eye can detect between 126 and 214 photons per second at 650 nm and 450 nm wavelengths, respectively, and at 555 nm it can detect 10 photons per second [1]. For color sensitivity, the cones are classified into L, M and S types, which have characteristics similar to those of the RGB color filters in image sensors, with center wavelengths of red at 565 nm, green at 545 nm and blue at 440 nm, respectively. Solid-state image sensors developed using silicon as the photo-sensitive element are suitable for detecting light with characteristics similar to those of the human eye.

2.3 The solid-state imaging process

A solid-state image sensor is a semiconductor device capable of converting an optical image formed by an imaging lens into electric signals (current or voltage). An image sensor can detect light within a wide spectral range, from x-ray to infrared wavelength regions. This is possible by tuning the detector structure and/or by employing a material that is sensitive to the wavelength region of interest [11]. The process of converting light into an electrical signal is depicted in Figure 2.1.


Figure 2.1: Solid-state imaging process [1].

The imaging process starts at the pixel. Impinging photons pass through dielectric layers, are absorbed in pixel structures, and are converted into charge by means of the photon energy. The photo-generated charges are collected in a three-dimensionally confined region, then buffered and read sequentially to an upper level of processing circuits.

The image sensor converts pixel signals into a more meaningful signal type and processes them in such a way that today's signal processors can use and transport them. Processing circuits convert and process pixel readings to form images.

Finally, at the system level these images are further processed or interpreted for human or machine use. All these processes occur at different levels, as shown in Figure 2.1 [1].

2.3.1 Absorption of light in semiconductors

When light strikes a semiconductor, the photons pass through multiple layers of dielectrics before reaching the photo-conversion sites. These dielectric layers are placed on top of the solid-state material to isolate different functional layers, such as multi-layer routing metals. Some of the layers are opaque and some are transparent. Because each layer has different optical properties, some portion of the impinging photons is reflected and some is absorbed, leading to quantum loss (Figure 2.2).

Figure 2.2: Transmission and reflection of light in dielectric layers

Nowadays, silicon is the most widely used material in very-large-scale integrated (VLSI) circuits and is also suitable for visible-range image sensors, because the band gap energy of silicon (≈ 1.12 eV) matches the energy of visible-wavelength photons [11]. This means that photons with an energy higher than 1.12 eV can produce electron-hole pairs in the silicon substrate; those pairs are called photo-generated carriers [9, 1].

The amount of photo-generated carriers in a material is described by means of its absorption coefficient α, which is defined as the fractional change of light power as the light travels through the material:

α(λ) = −(1/P) (ΔP/Δz)    (2.1)

where λ is the wavelength of the light, ΔP/P is the fractional reduction of light power inside the material and Δz is the distance traveled by the light. The absorption length L_abs in a semiconductor is defined as

L_abs = α⁻¹    (2.2)

Figure 2.3 shows the absorption length of light in silicon at 300 K as a function of the wavelength of the incident light. Photons with wavelengths shorter than 1100 nm are eligible for silicon-based imaging. Since the human visible range is about 380-750 nm, the corresponding absorption length lies within ≈ 0.038 to 8 µm.
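As a rough numerical illustration of Equation 2.2, the sketch below estimates the fraction of incident power absorbed within a given silicon depth for a few visible wavelengths. The absorption lengths are approximate values read off curves such as Figure 2.3, not measured process data.

```python
import math

# Approximate absorption lengths in silicon at 300 K, in micrometers.
# Illustrative values only, read off curves like Figure 2.3.
L_ABS_UM = {450: 0.4, 555: 1.5, 650: 3.0}  # wavelength (nm) -> L_abs (um)

def absorbed_fraction(depth_um, l_abs_um):
    """Fraction of optical power absorbed within depth_um of silicon,
    from P(z) = P0 * exp(-alpha * z) with alpha = 1 / L_abs (Eq. 2.2)."""
    alpha = 1.0 / l_abs_um  # absorption coefficient, 1/um
    return 1.0 - math.exp(-alpha * depth_um)

for wavelength_nm, l_abs in L_ABS_UM.items():
    frac = absorbed_fraction(2.0, l_abs)
    print(f"{wavelength_nm} nm: {frac:.1%} absorbed in the first 2 um")
```

The exponential form makes the blue-sensitivity problem discussed later in this chapter visible directly: short wavelengths are almost fully absorbed within the first fraction of a micrometer.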

Figure 2.3: Silicon absorption length of light [2].

2.3.2 Charge collection

After photo-generated carriers are released, negatively charged electrons are separated from positively charged holes by an electric field, by which electrons are collected and holes are drained. In a photodiode-based image sensor, the electric field is produced at the depletion region of the pn-junction, as shown in Figure 2.4.


Figure 2.4: The photodiode structure

The number of collected electrons is a measure of the amount of light incident on the photosensitive region of the pixel; the way to measure it is to integrate the charge in a charge pocket and read the integrated charge at predetermined time intervals [1].
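This charge-integration principle can be captured in a first-order model in which the photocurrent discharges the photodiode capacitance linearly from its reset value over the integration time. A minimal sketch, with all numeric values chosen purely for illustration:

```python
def photodiode_voltage(v_reset, i_ph, c_pd, t_int):
    """First-order integration model: the photocurrent discharges the
    photodiode capacitance linearly, V(t) = V_reset - I_ph * t / C_PD,
    clamped at 0 V once the capacitance is fully discharged."""
    return max(v_reset - i_ph * t_int / c_pd, 0.0)

# Illustrative numbers only: 2.4 V reset level, 50 fF photodiode
# capacitance, 33 ms integration time (roughly a 30 fps frame).
for i_ph in (0.0, 1e-12, 5e-12):  # dark, moderate and strong light, in A
    v = photodiode_voltage(2.4, i_ph, 50e-15, 33e-3)
    print(f"I_ph = {i_ph:.0e} A -> V_PD = {v:.2f} V after integration")
```

More detailed versions of this model, including the non-linear junction capacitance, are developed in Chapter 4.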

2.4 Photodiodes

There is a variety of photo-sensing elements that can be built on a silicon substrate, and the most commonly used is the pn-junction photodiode [1, 11, 9]. In this work, the photodiode was studied as the sensing element for the design of the CMOS image sensor.

2.4.1 Operation principle of photodiodes

The photodiode is a reverse-biased pn-junction diode with the p-type substrate grounded and a shallow n+ doped region. A bias voltage is applied to the n+ region to form a depletion region around the metallurgical pn-junction. This depletion region is free of mobile charge because of the electric field formed there. Any electron generated in it drifts against the direction of the electric field towards the n+ region, while the holes drift towards the p-type region. Electrons are collected in the charge pocket in the n+ region, and the holes are driven to ground or recombine.

The main problem of photodiodes for CMOS image sensors is their low sensitivity in the blue spectrum. This is because short-wavelength (blue) photons are absorbed near the surface of the silicon, before they can reach the depletion region.

There is another type of photodiode with an improved short-wavelength response: the pinned photodiode. This type of photodiode has been used in CCDs and CMOS image sensors, but its main disadvantage is that it is not available in a standard CMOS process, due to an extra p+ mask that has to be used.

In a pn-junction diode, the forward current I_F is expressed as

I_F = I_diff [ exp( qV / (n k_B T) ) − 1 ]    (2.3)

where q is the electron charge, k_B is the Boltzmann constant, n is an ideality factor and I_diff is the saturation or diffusion current, which is given by

I_diff = q A ( D_n n_p0 / L_n + D_p p_n0 / L_p )    (2.4)

where D_n and D_p are the diffusion coefficients, L_n and L_p are the diffusion lengths, n_p0 is the minority carrier concentration in the p-type region, p_n0 is the minority carrier concentration in the n-type region and A is the cross-section area of the pn-junction photodiode. The output current of the pn-junction photodiode is then expressed as

I_L = I_diff [ exp( qV / (n k_B T) ) − 1 ] − I_ph    (2.5)

where I_ph is the photo-generated current.

There are three modes for biasing a photodiode: solar cell mode, PD mode and avalanche mode [9] (a numerical sketch follows the list):

• Solar cell mode. In the solar cell mode, no bias is applied to the PD. Under light illumination, the PD acts as a battery that produces a voltage across the pn-junction. In the open-circuit condition, the voltage V_OC can be obtained by setting I_L = 0 A in Equation 2.5, giving

V_OC = (n k_B T / q) ln( I_ph / I_diff + 1 )    (2.6)

This shows that the open-circuit voltage does not increase linearly with the input light intensity.

• PD mode. When a PD is reverse biased, that is V < 0, the exponential term in Equation 2.5 can be neglected, and I_L becomes

I_L = −( I_diff + I_ph )    (2.7)

Equation 2.7 shows that in the absence of light (I_ph = 0 A) only the diffusion current flows through the photodiode, and that as the light intensity increases, the photo-generated current increases linearly due to the electron-hole pairs generated by the impinging photons.


• Avalanche mode. When a PD is strongly reverse biased, the photocurrent suddenly increases. This phenomenon, called avalanche, occurs when impact ionization of electrons and holes multiplies the carriers. The voltage at which an avalanche occurs is called the avalanche breakdown voltage V_BD. The avalanche mode is used in the avalanche photodiode (APD).
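To make the bias modes concrete, the sketch below evaluates Equations 2.5-2.7 for an illustrative diode; the saturation and photo-generated currents are placeholder values, not extracted device parameters.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
Q = 1.602176634e-19  # electron charge, C
T = 300.0            # temperature, K
N = 1.0              # ideality factor

I_DIFF = 1e-15  # saturation (diffusion) current, A -- placeholder value
I_PH = 1e-12    # photo-generated current, A -- placeholder value

def diode_current(v):
    """Equation 2.5: I_L = I_diff * (exp(qV / nkT) - 1) - I_ph."""
    return I_DIFF * (math.exp(Q * v / (N * K_B * T)) - 1.0) - I_PH

# Solar-cell mode: open-circuit voltage from I_L = 0 (Equation 2.6).
v_oc = (N * K_B * T / Q) * math.log(I_PH / I_DIFF + 1.0)
print(f"V_OC = {v_oc:.3f} V")  # grows only with the log of intensity

# PD mode: reverse bias, exponential term negligible (Equation 2.7).
print(f"I_L(-1 V) = {diode_current(-1.0):.3e} A")  # ~ -(I_diff + I_ph)
```

The two printed results show the contrast the text describes: the open-circuit voltage is logarithmic in the light level, while the reverse-bias current is linear in it, which is why the PD mode is used for imaging.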

2.4.2 The photodiode full-well capacity model

The collected charges are stored in the depletion region of the photodiode. The photodiode's capacitance is related to the area and perimeter of the diffusion layer forming the pn-junction. The junction capacitance of the reverse-biased photodiode is voltage dependent, so the depletion capacitance is non-linear. Pn-junction capacitances are a function of the voltage applied across the terminals and of process parameters. This capacitance consists of two components: the bottom-plate capacitance and the sidewall capacitance.

The zero-bias junction capacitance per unit area associated with the bottom-plate depletion region of the photodiode is given by

C*_J0 = sqrt( (ε_Si q / 2) · ( N_A N_D / (N_A + N_D) ) · (1 / Φ_0) )    (2.8)

where ε_Si is the permittivity of silicon, q is the charge of the electron, N_A and N_D are the doping concentrations of the p-type and n-type materials, respectively, and Φ_0 is the junction built-in potential, which is given by

Φ_0 = Φ_T ln( N_A N_D / n_i² )    (2.9)

where Φ_T is the thermal voltage (26 mV at 300 K) and n_i is the intrinsic carrier concentration of the material, n_i = 1.432 × 10^10 cm⁻³ for silicon.

With this, the junction capacitance of the bottom-plate region of the photodiode is given by

C_J = C*_J0 A / ( 1 + V_PD / Φ_0 )^m_j    (2.10)

where A is the area of the bottom-plate pn-junction, m_j is a grading factor specific to each technology and V_PD is the photodiode's reverse bias voltage. Similarly to Equation 2.8, the zero-bias sidewall junction capacitance per unit area, C*_J0SW, is evaluated with the sidewall doping profile and the sidewall built-in potential Φ_0SW (Equation 2.11). Considering the depth of the pn-junction, x_j, the sidewall junction capacitance per unit length is defined as

C_J0SW = C*_J0SW × x_j    (2.12)

With this, the total sidewall junction capacitance at zero bias can be calculated by multiplying C_J0SW by the perimeter of the junction, P, and the total sidewall capacitance for any reverse bias voltage on the photodiode is given by

C_JSW = C_J0SW P / ( 1 + V_PD / Φ_0SW )^m_jsw    (2.13)

where m_jsw is the sidewall grading factor. Finally, the total photodiode junction capacitance is calculated as

C_PD = C_J + C_JSW    (2.14)
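The capacitance model of Equations 2.8-2.14 lends itself to a short hand calculation. The sketch below follows those equations step by step; the doping levels, grading factors, junction depth and photodiode geometry are assumed example values, not parameters of the 0.35 µm process used later in this work, and the sidewall built-in potential is taken equal to Φ_0 for simplicity.

```python
import math

EPS_SI = 1.04e-12   # permittivity of silicon, F/cm
Q = 1.602e-19       # electron charge, C
PHI_T = 0.026       # thermal voltage at 300 K, V
NI = 1.432e10       # intrinsic carrier concentration of silicon, cm^-3

def built_in_potential(na, nd):
    """Equation 2.9: Phi_0 = Phi_T * ln(N_A * N_D / n_i^2)."""
    return PHI_T * math.log(na * nd / NI**2)

def cj0_per_area(na, nd, phi0):
    """Equation 2.8: zero-bias junction capacitance per unit area, F/cm^2."""
    return math.sqrt((EPS_SI * Q / 2.0) * (na * nd / (na + nd)) / phi0)

# Assumed example parameters, NOT extracted from the real process:
NA, ND = 1e17, 1e19   # p-substrate and n+ doping, cm^-3
MJ, MJSW = 0.5, 0.33  # bottom and sidewall grading factors
XJ = 0.3e-4           # junction depth, cm (0.3 um)
A = (10e-4) ** 2      # 10 um x 10 um bottom plate, cm^2
P = 4 * 10e-4         # junction perimeter, cm
V_PD = 1.0            # reverse bias, V

phi0 = built_in_potential(NA, ND)  # sidewall Phi_0SW assumed equal to Phi_0
cj = cj0_per_area(NA, ND, phi0) * A / (1 + V_PD / phi0) ** MJ      # Eq. 2.10
cj0sw = cj0_per_area(NA, ND, phi0) * XJ                            # Eq. 2.12
cjsw = cj0sw * P / (1 + V_PD / phi0) ** MJSW                       # Eq. 2.13
print(f"Phi_0 = {phi0:.2f} V, C_PD = {(cj + cjsw) * 1e15:.1f} fF") # Eq. 2.14
```

Note how the bias dependence enters through the (1 + V_PD/Φ_0)^m terms: the photodiode capacitance shrinks as the reverse bias grows, which is the non-linearity examined in Chapter 4.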

2.5 CMOS Image Sensors

An image sensor consists of an imaging area, vertical/horizontal access circuitry and readout circuitry. The imaging area is formed by an array of pixels, where each pixel contains a photo-sensitive element and some transistors for accessing the pixel and buffering the generated signals out of the array using the access and readout circuitry. The transistors included in the pixel structure define the type of image sensor.

A pixel structure that includes, besides the photo-sensing element, only one access transistor is called a passive pixel sensor (PPS), because there is no in-pixel amplification of the photo-generated signal. This was the first structure used in CMOS image sensors.

The second generation of CMOS image sensors, called active pixel sensors (APS), improved image quality thanks to a buffer (source follower) included in the pixel circuit to prevent destructive readout [12]. In its most basic structure, this type of pixel includes three transistors: one used to set the photodiode's voltage to a known value, one for accessing the pixel through the external circuitry, and one used as an in-pixel amplifier. This pixel structure is the most widely used in image sensors today because of its superior image quality compared to passive pixel sensors. A detailed description of its characteristics is given in this work.

There is a newer type of pixel structure called the digital pixel sensor (DPS), which includes an in-pixel analog-to-digital converter. Besides the tasks performed by the APS, this structure also converts the photodiode's voltage into a digital signal, which is read by an external circuit.

2.5.1 CMOS pixel structures

Over time, a wide variety of photo-sensing elements has been studied and tested in image sensor designs. The first APS was fabricated using a photogate (PG) as the photodetector element; after that, the photodiode (PD) was used. The PG was first implemented for its ease of signal-charge handling, but its main problem was low sensitivity, because the polysilicon of the transistor's gate is opaque to the visible spectrum. Today, the most used architecture in CMOS image sensors is the APS using three transistors and a photodiode in a pixel (3T-APS). In the first stage of 3T-APS development, the image quality could not compete with that of CCDs with respect to both fixed pattern noise (FPN) and random noise [9] (noise types are described in Appendix A).

By incorporating the pinned PD structure used in CCDs, which has a low dark current and a complete depletion structure, the four-transistor APS (4T-APS) was successfully developed. This architecture has four transistors plus a PD and a floating diffusion in the pixel. Implementing the 4T-APS with correlated double-sampling (CDS) circuitry reduces the random noise. The main issue for 4T-APSs is the large pixel size compared to the CCD [9].

Figure 2.5 gives an overview of CMOS imager data published at IEDM and ISSCC over the last 15 years. The bottom curve illustrates CMOS scaling over the years, as described by the International Technology Roadmap for Semiconductors (ITRS). The second curve shows the technology node used to fabricate the reported CMOS image sensors, and the third curve illustrates the pixel size of the same devices [3].

Figure 2.5: Evolution of pixel size, CMOS technology node used to fabricate the devices and the minimum feature size of the most advanced CMOS logic process [3].

As seen in Figure 2.5, CMOS image sensors are fabricated using processes that lag behind the ITRS processes. The reason is that very advanced CMOS processes are not imaging friendly, due to issues like large leakage currents, low light sensitivity and noise. The difference between the CMOS technology used for image sensors and the ITRS technology is about three technology generations. However, CMOS image sensor technology scales at almost the same pace as standard digital CMOS processes, and the pixel dimension scales down with the technology node used, with a ratio of about a factor of 20 [3].


With CMOS processes scaling down over the years, the design and fabrication of smaller pixels results in weaker performance, and improving the pixel design is a real challenge. Nevertheless, there are new innovations and techniques that improve the light sensitivity of imagers, such as:

• Dedicated processes with a limited number of metal layers

• Thin interconnect layers and thin dielectrics

• Micro-lenses

• Light waveguides on top of pixels

• Back-side illumination

Recently, CMOS fabrication technology advances have successfully reduced the pixel size of CMOS image sensors [13], although it is still difficult to realize a pixel size smaller than that of CCDs. Moreover, a pixel-sharing technique has been widely used in 4T-APSs because it has been effective in reducing the pixel size to be comparable with that of CCDs [9]

and even with that of the conventional 35 mm film cameras [11]. The main pixel structures are described next.

Passive Pixel Sensor

The first generation of CMOS image sensors was based on passive pixel sensors (PPS) with analog readout. These sensors had poor signal quality due to the direct transmission of the pixel voltage on capacitive column busses [12].

A passive pixel is formed by a combination of a photodiode and an addressing transistor that acts as a switch (Figure 2.6).

Figure 2.6: Passive CMOS pixel with a single in-pixel transistor [3].

In this pixel architecture, imaging starts with the light exposure of the photodiode, which is reverse biased to a high voltage. During the exposure time, impinging photons decrease the reverse voltage across the photodiode and, at the end of the exposure, the remaining voltage is transmitted to the column bus. This remaining voltage is a measure of the amount of photons that fell on the photodiode during the exposure time [3].

The main advantage of this architecture is the large fill factor; unfortunately, the pixels suffer from a large noise level as well. This was improved with the next pixel architecture, the active pixel.

Active Pixel Sensor

In this architecture, each pixel has an amplifier, namely a source follower (Figure 2.7). Each pixel contains a photodiode, a reset transistor, the driver transistor of the source follower and the addressing transistor. The current source of the source follower is placed at the end of the column bus.

Figure 2.7: Active CMOS pixel based on in-pixel amplifier [3].

In APS-based image sensors, after the exposure time each pixel is addressed and the remaining voltage across the photodiode is buffered outside the pixel array by means of the source follower; then the photodiode is reset.

This architecture solves many noise issues, but not the kTC noise component, which is introduced by resetting the photodiode.

Digital Pixel Sensor

With reduced feature sizes, more transistors per pixel can be added, to the point where a significant part of the pixel circuit is entirely digital. Today, the trend in image sensors is moving towards digital pixel sensors (DPS) [12], a newer pixel architecture in the design of CMOS image sensors. In these devices, the conversion of the analog photo-generated voltage to digital data is implemented in-pixel, because each pixel, besides a photodiode, also contains a single-slope ADC [14].


Figure 2.8: Digital Pixel Sensor Architecture

In a CMOS image sensor based on this pixel architecture, the analog-to-digital conversion is performed in parallel in every pixel; therefore, the readout time is significantly shorter than in single or per-column analog readout architectures, which permits very high frame rates (up to 10,000 frames per second) [14].

This architecture makes many applications feasible, especially dynamic range enhancement, due to the possibility of combining two or more pictures taken at a very high rate. It has been demonstrated that the more samples used for the composition, the better the dynamic range achieved, so DPS-based image sensors, with their very high readout speed, are the perfect solution for this type of application [14].

The main constraint of this architecture is that the use of multiple sampling to obtain a picture with high dynamic range consumes significant power. Furthermore, extra hardware is required to implement the multi-sampling algorithm, so the processing time is extended [14].

The performance of an image sensor is constrained by factors like pixel full-well capacity, sensor resolution, wafer/die size, quantum efficiency, sensitivity and dark current. Pixel size is limited by the reticle (die or wafer size) and the quality of the supporting optics [1]. For scientific image sensors, the two most important requirements are a large full-well capacity and low-noise readout; the combination of these two requirements leads to a higher dynamic range. A large pixel full-well capacity is only achieved through the use of novel fabrication processes and circuit design techniques. In photodiode-type CMOS APS pixels, especially in the near-UV spectrum (200-400 nm), quantum efficiency (QE) is improved using novel pixel design techniques, since it depends on the fabrication process technology and the pixel design technique [1].

The die size of a CMOS integrated circuit is limited by the exposure field size of the photolithographic stepper used during manufacturing, which is typically 20 mm by 20 mm [1]. However, with a new technology in CMOS image sensor (CIS) manufacturing, so-called stitching technology, it is now possible to fabricate die sizes up to a single die per 200 mm wafer using a 0.18 µm CMOS process. The photolithographic stepper in the stitching process exposes the entire image sensor structure one piece at a time, by precisely aligning each reticle step. Stitching technology allows 5.5 µm pixel sections to be seamed into a large pixel array, resulting in ultra-high resolution, high-quality color image sensors [15].


Chapter 3

State-of-the-Art on Microfluidic Systems and Image Sensors Integration

This chapter presents a review of the state of the art on image sensors designed to monitor microfluidic channels. Because this work presents a design based on a standard CMOS process, the articles reviewed in this chapter include only image sensors designed using this type of process. As will be seen throughout this review, there is a variety of designs, each specific to the characterization type and the particular task of the sensor.

Reference [4] reports a digital 16-element mixed-signal near-field CMOS active pixel optical sensor using 0.18 µm CMOS technology. This optical sensor is coupled directly to a microfluidic channel employing either flip-chip or molded polymer packaging technologies. Such a system is used to identify and quantify the biophysical or biochemical properties of the cell population transported in the microchannel. The schematic diagram of the flip-chip system is illustrated in Figure 3.1.

Figure 3.1: Integrated digital cytometer system components and architecture [4].

As seen in Figure 3.1, the microchannel was mounted over the optical sensor, and the generated signal is processed by the on-chip digital interface. Output signals are sent to a microcontroller for interpretation and finally displayed on a pocket PC. The Texas Instruments MSP430F449 mixed-signal microcontroller was used to control and monitor the output of the sensor and to interface with the Viewsonic VC37 pocket PC that acts as the host controller. The microcontroller is an ultra-low-power, battery-operated, 16-bit RISC device, which allows portability of the entire system. The CMOS optical sensor designed for near-field microfluidic integration is shown in Figure 3.2.


Figure 3.2: Photograph of the linear active pixel CMOS sensor [4].

The optical sensor was designed to be directly coupled, as a modular add-on, to a microfluidic channel fabricated in glass or polymer, in order to enable the collection of particle and fluid-flow information. The device has seven electrical pads (left-hand side of the figure) and seven mechanical pads (right-hand side) for flip-chip bonding stability. The output electrical properties, physical dimensions and technology specifications of the sensor chip are provided in Table 3.1 [4].

Table 3.1: Output characteristics and specifications of the linear active pixel sensor

Technology          0.18 µm
Dimensions          1.0 × 2.4 mm
Supply voltage      1.8 V
Power consumption   15 mW
Pads                5 digital / 2 power / 7 mechanical
Number of pixels    16
Pixel size          7 µm × 7 µm
Fill factor         75%
Dynamic range       ≈ 30 dB

Figure 3.3 shows a block diagram of the mixed-signal CMOS sensor architecture, which comprises the linear active pixel sensor (APS) array, correlated double-sampling (CDS), and an adaptive spatial filter (SF) with a digital control block for monitoring and configuration.


Figure 3.3: Flip-chip on glass illustration of a hybrid microfluidic digital cytometer. [4].

The optical sensor chip was wire-bonded to a PCB and subsequently encapsulated in polydimethylsiloxane (PDMS) beneath a ∼ 120 µm diameter cylindrical microchannel passing over the sensor's active area. The cross-section diagram of this structure is depicted in Figure 3.4 [4].

Figure 3.4: PDMS cast chamber, shown in a cross-sectional view, realized a microfluidic channel passing through the structure and over the active area of the sensor chip which is wire- bonded to a PCB. [4].

While the assembly process of the system proved to be mechanically simple and cheap to produce, the reliability of the device presents some issues, specifically in the region where the microchannel passes over the chip surface. In this region, the technique used for forming the microchannel resulted in some tearing of the PDMS, which causes mechanical instability. Furthermore, after extensive handling, prototypes eventually succumbed to wire-bond separation, rendering the devices electrically non-functional.

In contrast to reference [4], where a single-row APS array was used, in [5] a double linear array was used, which makes possible not only detection but also the determination of particle velocity and size. This means the sensor can be used to characterize cells as well as count them. The active area of the new optical sensor consists of two linear arrays of 16 elements, with each pixel measuring 7 µm × 7 µm (see Figure 3.5).


Figure 3.5: The photodiode pixel linear arrays. [5].

As with the die fabricated in [4], the pads located on the left-hand side of the chip are the electrical interface pads, while the pads on the right-hand side are electrically inactive and provide flip-chip bonding stability only. The chip bonding on the glass substrate is designed to ensure that the active area of the sensor properly aligns with the microchannel of the microfluidic substrate after bonding. Figure 3.6 shows the CMOS sensor coupled to the microfluidic chip [5].

Figure 3.6: Post bond image of the CMOS sensor to the microfluidic chip. [5].

As an individual particle is transported over the active area of a pixel, the light intensity received by the photodiode changes. This change manifests as an input current change in the pixel, which gives rise to a rapid change in the output voltage of the sensor. In the dual photodiode-photodiode pixel array configuration, such voltage perturbations as a particle passes over the active area of the sensors give rise to a characteristic double-pulse signature. Thus, by monitoring the output of the sensor and suitably tracking the presence of double pulses, the detection of particles is enabled. Figure 3.7 shows the output of the sensor during the transit of a 6 µm polystyrene polysphere: as the particle passes over the first APS array, the first negative-going pulse is generated, and as it passes over the second APS array, the second negative-going pulse is generated.


Figure 3.7: Plot of the CMOS sensor output upon detection of a 6 µm polystyrene polysphere. [5].

The negative-pulse width and the time interval between pulses are the features of the detected signal, and they are sensitive to the particle size and fluid flow rate. The average negative-pulse width increases as the particle size increases, assuming the fluid flow rate is invariant, and the time period between two consecutive pulses is inversely proportional to the particle velocity and fluid flow rate, independent of particle size.
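Since the pulse separation is inversely proportional to the particle velocity, a velocity estimate reduces to dividing the known spacing between the two linear arrays by the measured pulse interval. A minimal sketch, where the array spacing and pulse interval are assumed figures rather than the actual layout dimensions or measurements of [5]:

```python
def particle_velocity(array_spacing_um, pulse_interval_s):
    """Estimate particle velocity from the double-pulse signature:
    v = spacing between the two pixel arrays / time between pulses."""
    return array_spacing_um * 1e-6 / pulse_interval_s  # m/s

# Assumed 50 um spacing between the two linear arrays and a measured
# 2 ms interval between the two negative-going pulses.
v = particle_velocity(50.0, 2e-3)
print(f"Estimated particle velocity: {v * 1e3:.1f} mm/s")  # 25.0 mm/s
```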

Ji et al. developed an optical image sensor called a contact imager in order to manipulate individual cells using on-chip micro-actuators [6]. This sensor was designed in a 0.5 µm CMOS technology with a pixel pitch of 8.4 µm and is capable of providing a 2D image of the monitored cells. The pixel was designed to be of minimum size, so in order to avoid N-well spacing requirements, an N+/Psub photodiode was used. Also, to reduce the number of contacts, there is only one Vdd contact per pixel, shared by the source follower input transistor of one pixel and the reset transistor of another; with this, the fill factor is 17%. The contact imager consists of a 96 × 96 APS array, row and column scanners, column-wise readout circuits, and buffers and switches for input control and clock signals. Figure 3.8 shows the schematic diagram of the CMOS APS.

Figure 3.8: A schematic of photodiode type CMOS active pixel sensor. [6].


The major characteristics of the chip are summarized in Table 3.2.

Table 3.2: Summary of sensor performance

Process AMI05 (SCMOS design rule, λ = 0.35 µm)

Power supply 5 V

Maximum signal 1.2 V

Conversion gain 22 µV/e⁻

Pixel noise σ =2.5 mV over 2 ms

Dynamic range 53.6 dB

Dark signal 0.46 V/sec

The chip was tested as a contact imager using microbeads placed directly on the chip surface (Figure 3.9); then, after packaging with bio-compatible material to protect the bonds and wires, the chip was tested with cells. Figure 3.9 shows the image acquired by the contact imager using dry polymer microspheres of 16 µm diameter placed directly on the chip surface.

Figure 3.9: Comparison of images of microbeads on chip surface taken by (a) a camera and (b) the contact imager. An overlapped view is also shown in (c). [6].

A more recent version of the contact imager was published in [7]. This work shows a 256 × 256 four-transistor pixel array. The new active pixel has a 5 µm × 5 µm area with a fill factor of 31%. One of the improvements is that the pixel can operate in different modes, selected by control signals, for either reset noise suppression or dark current reduction. The pixel's electronic circuit is shown in Figure 3.10.


Figure 3.10: Schematic diagram of the modified active pixel circuit. [7].

Recent efforts have resulted in the development of lab-on-a-chip systems in which it is possible to perform a wide variety of tests using microfluidics. It has also been demonstrated that it is possible to design and fabricate a CMOS optical sensor that can be coupled to either a microfluidic channel or a plane surface where particles are suspended, so that they can be detected, identified and monitored, thereby avoiding the need for expensive and bulky microscopes. Furthermore, with the integration of smart on-chip functions, it is now possible not only to detect but also to identify, monitor and even characterize the suspended particles. These goals are pursued because of the need for highly automated testing platforms that enable robust, low-cost analysis, eliminating conventional laboratory equipment, which is only roughly automated.


Chapter 4

Active Pixel Sensor Modeling and Simulation

In this chapter, the design methodology for an active pixel sensor is presented. It begins with the read-out circuit, which consists of the reset transistor, the buffer transistor (part of the source follower amplifier) and the selection transistor. The characteristics of the active load located at the bottom of the pixel array are also determined; together with the in-pixel buffer transistor, it forms the source follower amplifier that brings the output signal out of the array.

The electrical and physical characteristics of the pn-junction photodiode are determined and compared with an ideal photodiode simulated using a capacitor and a current source. The source terminal of the reset transistor is designed as the pn-junction photodiode by sizing its area and perimeter to obtain the required capacitance.

4.1 Introduction

In microfluidics research, a 640 × 480 image sensor resolution is well suited for particle tracking and parameter determination. On the other hand, at a fixed resolution, the random noise introduced in the image is inversely proportional to the image sensor size. That is, less random noise will be introduced in a 2/3-inch sensor (actual size 8.8 mm × 6.6 mm) than in a 1/8-inch sensor (actual size 1.6 mm × 1.2 mm) if both have the same number of pixels. This is because the photo-sensitive area of each pixel is larger on a larger image sensor, so more electron-hole pairs can be generated, since more photons can impinge on such an area. For a larger count of photo-generated carriers, a greater voltage is generated, so the introduced noise is less significant relative to that signal. If the generated voltage is comparable to the random noise signal, the amplified signal will contain a significant amount of noise.

Another important parameter in the noise immunity of an image sensor is the full-well capacity, which is the amount of charge that an imaging pixel can collect and transfer. This parameter is limited by the size of the photoconversion region and by the read-out circuit's ability to buffer pixel signals [1].
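As a back-of-the-envelope illustration of why full-well capacity matters, the sketch below converts a photodiode capacitance and usable voltage swing into a full-well electron count and a noise-floor-limited dynamic range. The capacitance matches the 58.44 fF value used later in this chapter, but the voltage swing and read-noise figures are assumptions chosen only for illustration.

```python
import math

Q = 1.602e-19  # electron charge, C

def full_well_electrons(c_pd_f, v_swing):
    """Full-well capacity in electrons: Q_max = C_PD * V_swing / q."""
    return c_pd_f * v_swing / Q

def dynamic_range_db(full_well, noise_floor_e):
    """Dynamic range = 20*log10(full-well / noise floor), in dB."""
    return 20.0 * math.log10(full_well / noise_floor_e)

# 58.44 fF photodiode; 2.4 V swing and 30 e- noise floor are assumed.
fw = full_well_electrons(58.44e-15, 2.4)
print(f"Full well: {fw:.0f} e-")
print(f"DR at 30 e- noise floor: {dynamic_range_db(fw, 30.0):.1f} dB")
```

The calculation makes the design levers explicit: dynamic range grows with the photodiode capacitance and the usable voltage swing, and shrinks with the readout noise floor.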

4.2 Pixel read-out circuit

4.2.1 Reset transistor MRST

The read-out circuit of a three-transistor (3T) active pixel is shown in Figure 4.1. There are three in-pixel NMOS transistors and a load transistor at the bottom, including the load capacitor.

Figure 4.1: 3T pixel configuration

The transistor MRST is used to set the photodiode to a known voltage through the signal V_RST. This device is usually sized to the minimum allowable feature size in order to maximize the pixel's fill factor and to reduce the charge injection into the photosensitive area after reset [1]. In the CMOS process used in this work, the minimum feature size is W/L = 0.4 µm / 0.35 µm.

The threshold voltage of an NMOS transistor is given by

V_TH = V_TH0 + K1 ( sqrt(Φ_s + V_SB) − sqrt(Φ_s) ) − K2 V_SB + ΔV_TH    (4.1)

where V_TH0 is the zero back-gate bias threshold voltage, K1 and K2 are the body-effect coefficients, ΔV_TH is the term that contains short-channel effects, and V_SB is the voltage applied between the source and bulk terminals. Φ_s is the surface potential, which for short-channel devices is given by

Φ_s = 2 (kT/q) ln( N_CH / n_i(T) )    (4.2)

where k is the Boltzmann constant, T is room temperature (300 K), q is the charge of the electron, N_CH is the channel doping concentration and n_i(T) is the temperature-dependent intrinsic carrier concentration of silicon.

Although Equation 4.1 is used by the simulation tools to determine the threshold voltage, reference [1] gives an equation more suitable for hand calculations. It uses two fitting coefficients (n1 and n2) to approximate the ΔV_TH term of Equation 4.1 as a linear function of the body-factor term:

ΔV_TH ≈ n1 ( sqrt(Φ_s + V_SB) − sqrt(Φ_s) ) + n2    (4.3)

From the BSIM3v3 model card for the n-channel device, the zero-bias threshold voltage was found to be V_TH0 = 0.4979 V, the first-order body-effect coefficient K1 = 0.50296 and the second-order body-effect coefficient K2 = 0.033985.

Using the configuration depicted in Figure 4.1, the ΔV_TH term was obtained through simulation. A minimum-size reset transistor and a test current source were used to charge and discharge the junction capacitance of the photodiode, and the body-factor term was calculated as sqrt(Φ_s + V_SB) − sqrt(Φ_s). Results are shown in Figure 4.2, where the ΔV_TH term is plotted as a function of the body-factor term.

Figure 4.2: Linear approximation of the ΔV_TH term as a function of the body-factor term

From the linear regression of ΔV_TH, the two fitting coefficients (n1 and n2) were found: n1 corresponds to the slope of the linear approximation and n2 to the offset. Their values are

n1 = −0.111665
n2 = 0.123019


Using Equation 4.3, it is possible to calculate the threshold voltage for any V_SB value. The photodiode reset voltage can then be found using Equation 4.4:

V_PD_RST = V_SB_MRST = V_RST − V_TH_RST    (4.4)

With these equations and using a reset pulse of V_RST = 3.3 V on the gate of the MRST device, the photodiode reset voltage was found to be V_PD_RST = 2.43 V, which means that more than 26% of the signal range is lost because of the increased threshold voltage of MRST.

From the previous results and using Equation 4.4, the threshold voltage of MRST is easily calculated:

V_TH_RST = V_RST − V_PD_RST = 3.3 − 2.43 = 0.87 V

Through simulation, the threshold voltage of the reset transistor was found to be V_TH_RST = 0.931 V for the same conditions; this gives a 6.5% calculation error for the threshold voltage and a 3.7% calculation error for the reset voltage of the photodiode, which has a value of V_PD_RST = 2.34 V in simulation.

One way to recover the voltage loss is to boost the reset pulse to a value above the power supply voltage [1]. The boosting factor B is a fraction of the zero-bias threshold voltage of MRST and is a function of the power supply voltage V_DD and the minimum channel length available in the technology, L_min (Equation 4.5). For a 3.3 V supply voltage and a 0.35 µm minimum channel length, this gives a boosting factor B ≈ 1.79. With this, the required reset pulse that allows a 3.3 V photodiode reset voltage is

V_RST = B × V_TH0 + 3.3 = 4.2 V

From simulation, using a reset pulse of V_RST = 4.2 V, the reset voltage of the photodiode is V_PD_RST = 3.12 V. The optimum value was determined to be V_RST ≈ 5 V for V_PD_RST = 3.3 V. Figure 4.3 shows the obtained results.
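Because V_TH_RST itself depends on the reset level through V_SB = V_PD_RST, the photodiode reset voltage of Equation 4.4 can be found by simple fixed-point iteration. A sketch reusing the vth() helper and constants from the previous listing (so the same assumed surface potential applies):

```python
def photodiode_reset_voltage(v_rst, v_limit=3.3, iterations=50):
    """Solve V_PD_RST = min(V_RST - V_TH(V_PD_RST), V_DD) by fixed-point
    iteration, since the threshold voltage depends on V_SB = V_PD_RST."""
    v_pd = 0.0
    for _ in range(iterations):
        v_pd = min(v_rst - vth(v_pd), v_limit)
    return v_pd

print(f"V_RST = 3.3 V -> V_PD_RST = {photodiode_reset_voltage(3.3):.2f} V")
print(f"V_RST = 4.2 V -> V_PD_RST = {photodiode_reset_voltage(4.2):.2f} V")

# Boosted reset pulse, V_RST = B * V_TH0 + V_DD with B ~ 1.79 (Eq. 4.5):
print(f"Boosted reset pulse: {1.79 * VTH0 + 3.3:.2f} V")
```

Under the assumed Φ_s, the iteration lands near 2.4 V for a 3.3 V pulse and above 3.2 V for a boosted 4.2 V pulse, consistent with the trend of the hand calculations and simulations reported above.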


Figure 4.3: Boosted V_RST and resulting V_PD_RST (photodiode reset voltage for different reset pulse amplitudes; traces: reset pulse and photodiode reset voltage)

4.2.2 Source Follower Amplifier

After the integration time, the remaining photodiode voltage is buffered by the source follower (common-drain) amplifier. In Figure 4.1, device MSF acts as a buffer and MCOL as a current sink; these two devices form the source follower when both are operating in saturation. This amplifier has a low voltage gain (slightly less than 1) and a high current gain, and is used to drive the capacitive load encountered at the end of each column of pixels.

The minimum output voltage of the source follower amplifier is determined by the threshold voltage of the buffer device, V_TH_SF, since for voltages lower than that the buffer device is turned off. On the other hand, the maximum output voltage is considerably lower than V_DD because of the body effect, which causes V_TH_SF to increase as the output voltage increases.

The output voltage range of the source follower amplifier is determined by the threshold voltage of the buffer transistor, V_TH_SF, and by the column current set by the load transistor MCOL. As mentioned before, the body effect in the buffer device causes the output voltage to increase at a slower rate than the input signal, which means that V_TH_SF increases with the output, so the gain is lower than 1. This behavior is described by Equation 4.6:

V_OUT = V_PD − V_TH_SF    (4.6)

where V_PD is the voltage of the photodiode and V_TH_SF is the modulated threshold voltage of the buffer device MSF.
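A numerical sketch of Equation 4.6 shows the compressive effect of the body-modulated threshold on the source-follower transfer curve. It again reuses the vth() helper defined earlier, so it remains an approximation under the same assumed surface potential:

```python
# Source-follower transfer curve: V_OUT = V_PD - V_TH_SF(V_SB = V_OUT).
# Reuses vth() from the threshold-voltage listing above (same assumptions).
def source_follower_out(v_pd, iterations=50):
    v_out = 0.0
    for _ in range(iterations):
        v_out = max(v_pd - vth(v_out), 0.0)
    return v_out

for v_pd in (1.0, 2.0, 3.0):
    print(f"V_PD = {v_pd:.1f} V -> V_OUT = {source_follower_out(v_pd):.2f} V")
# The incremental gain dV_OUT/dV_PD stays slightly below 1 because
# V_TH_SF rises with the output voltage (body effect).
```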

