Controlling and Optimizing Wafer Yields - Lab Report Example

Summary
"Controlling and Optimizing Wafer Yields" argues that the number of successful chips is quantified as the yield, which can be calculated by statistical probability methods. The lab is a model of silicon wafer fabrication in which the data obtained were used to calculate the yields of the chips fabricated…


Abstract
Integrated circuits are fabricated mainly on silicon chips, which are cut from silicon wafers. The fabrication of wafers goes through a number of steps that determine which chips will be successful. Through the Czochralski process, in which crystals are grown, silicon wafers are produced, and embedded circuits are transferred to the wafers by a lithographic process. The success of the transfer of the circuits depends on the control parameters put in place at the fabrication plant. The number of successful chips is quantified as the yield and can be calculated by applying statistical probability methods. The lab is a model of silicon wafer fabrication in which the data obtained were used to calculate the yields of the chips fabricated.

Introduction
The probability of obtaining the best-yield chips from the wafers is a function of the control procedures and measures put in place. The production process is therefore carefully controlled to obtain even wafers of crystalline silicon. Crystalline silicon is preferred for two reasons: first, it is abundant in the earth's crust, being the second most plentiful element after oxygen; second, it is environmentally friendly. A silicon wafer contains many chips that can be used for the fabrication of integrated circuits, and the chips found to be useful for the fabrication of the ICs constitute the acceptable yield. The number of successful yields depends on three parameters, namely speed, pressure and distance. These three parameters determine the thickness of the wafers and hence the success of the yields. Controlling the thickness in the plant requires that it be uniform all across the wafer, which is a difficult task for control engineers to achieve. Thus the control system has to be optimized over the listed variables.
Taking the system as Y, the state variables that determine the state of the system are: speed (X1), pressure (X2) and distance (X3). By optimizing these control variables a more uniform thickness can be achieved and hence maximum yields produced. In the experiment below, statistical methods are employed to find the best and optimum conditions for the production of maximum yields, based on the experimental data obtained [Sie88].

Experimentation and data collection
The data was collected and tabulated in sheet 1, as shown in the appendix, with the thickness measured for each of the samples investigated. For the first experiment, both pressure and distance were held constant and readings were taken at high speed for the wafer testing. A total of forty samples were obtained and recorded in the sheet, and the standard deviation of the thickness was computed for the forty wafers (Apgar).

Objectives
1. To write a concise, statistically detailed report on the thickness of the wafer under the influence of high speed.
2. To determine the best condition for producing a uniform wafer.

Results and Discussion
Dataset: pressure and distance were held constant at a low value in these 40 tests.

Sample   Low speed     Sample   High speed
1        74.0          21       151.1
2        78.9          22       136.8
3        90.3          23       161.8
4        82.1          24       147.0
5        89.4          25       154.7
6        90.9          26       143.1
7        90.2          27       150.2
8        69.2          28       150.3
9        76.2          29       135.9
10       82.2          30       141.1
11       81.9          31       145.5
12       87.3          32       149.2
13       104.9         33       152.3
14       74.4          34       168.7
15       87.7          35       152.9
16       87.8          36       146.1
17       88.9          37       161.4
18       102.7         38       178.9
19       95.9          39       160.5
20       95.5          40       142.2

Q1. Sources of variability in the data
In the above data set we have the independent variable (speed) and the dependent variable (thickness); the variation is therefore attributed to the change in speed. For the forty samples taken in the laboratory, the following chart can be used to compare the trends, where samples 21 to 40 represent high-speed values and samples 1 to 20 represent low-speed values.
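The summary statistics described above (mean and standard deviation of each speed group) can be reproduced with a short script. The following is a minimal sketch in Python rather than the report's own MATLAB, using the forty tabulated thickness values:

```python
import statistics

# Thickness readings from the forty-sample dataset above
low_speed = [74, 78.9, 90.3, 82.1, 89.4, 90.9, 90.2, 69.2, 76.2, 82.2,
             81.9, 87.3, 104.9, 74.4, 87.7, 87.8, 88.9, 102.7, 95.9, 95.5]
high_speed = [151.1, 136.8, 161.8, 147, 154.7, 143.1, 150.2, 150.3, 135.9, 141.1,
              145.5, 149.2, 152.3, 168.7, 152.9, 146.1, 161.4, 178.9, 160.5, 142.2]

for name, data in (("low speed", low_speed), ("high speed", high_speed)):
    mean = statistics.mean(data)    # arithmetic mean of the 20 readings
    stdev = statistics.stdev(data)  # sample standard deviation (n - 1 denominator)
    print(f"{name}: mean = {mean:.2f}, std dev = {stdev:.2f}")
```

The printed means make the gap between the two groups explicit, which is the same contrast the chart in the discussion is drawing.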
From the graph above it is evident that the thickness values are consistently higher for readings taken at high speed than for those taken at low speed. A pattern can be seen in the sequencing of the data over samples 8 to 13 for low speed and samples 29 to 34 for high speed: in both, a steady rise with almost the same gradient is followed by a dip after the peak. The peak values were obtained at sample 13 for low speed and sample 34 for high speed, at 104.9 and 168.7 respectively. The above variation could be attributed to a number of sources, which include the following: errors due to instruments; calibration errors; reading errors; and the mode of measurement recording.

i. Errors due to instruments
In the Czochralski wafer fabrication process, the wafers are made from crystalline silicon in a controlled environment. In most cases this is done in a vacuum to prevent impurities from doping the silicon seed being grown. However, uncleaned instruments can introduce foreign material that becomes incorporated into the silicon crystal, forming a non-uniform wafer surface and thus reducing the yields.

ii. Calibration errors
Control equipment is prone to calibration drift with use or under excessive strain. This introduces offset errors that must be corrected so that the equipment records accurate and precise values. The calibration error also depends on the sensitivity of the equipment and control machines in use.

iii. Reading errors
These arise mainly from parallax, which generates most transposition and transcription errors. Values read incorrectly are inaccurate and, even though they may be consistent, they introduce variation and deviation of the data from the norm; the plotted values therefore differ from the expected ones. This type of variation is not easily corrected except by keenness and recommended reading practices such as viewing the scale perpendicularly.
For digital equipment, however, this type of variation is absent, especially if the data is read electronically and directly from the capture device; with manual data entry, nonetheless, this error must still be considered (Siegel).

iv. Mode of measurement recording
a. Experience. This depends on the person in charge of the measurement. Carelessness could result in misread figures and values, and hence unsteady and inconsistent data could be recorded. From the second high-speed sample, as shown in the graph above, a sudden surge can be noted in the recording. Such fluctuations could arise from instability of the equipment, or from careless recording and instrument set-up.
b. Depth of understanding of the subject. This greatly affects the approximation of initial values used to determine whether the conditions are properly set. Previous experimental data show that this error, though present, is minimal. Consistency is a measure of quality for this variation.

Q2. Parametric and non-parametric test
The null hypothesis is the claim that the uniformity of the coating is the same for each speed; the alternative is that it differs. Assuming that the recorded observations are independent and raw, meaning they have not been manipulated, the analysis can proceed to test the distribution. The distribution of the values for the low-speed and high-speed runs can be seen from the radar chart plotted from the values, which shows the spread of the data (Siegel). From the chart it can be noted that for low speed a significant area spans samples 1 to 16, which supports the view that the sample indeed follows a normal distribution curve. For the high-speed measurement the peak value stands at sample 18, which is close to the average mark, and hence that sample likewise follows suit.
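The parametric comparison of the two speed groups can be sketched as a two-sample test statistic. The following Python sketch (not the report's original analysis, which used charts and spreadsheet functions) computes a Welch two-sample t statistic by hand; the helper name welch_t is illustrative:

```python
import statistics

# Thickness readings from the forty-sample dataset in the results section
low_speed = [74, 78.9, 90.3, 82.1, 89.4, 90.9, 90.2, 69.2, 76.2, 82.2,
             81.9, 87.3, 104.9, 74.4, 87.7, 87.8, 88.9, 102.7, 95.9, 95.5]
high_speed = [151.1, 136.8, 161.8, 147, 154.7, 143.1, 150.2, 150.3, 135.9, 141.1,
              145.5, 149.2, 152.3, 168.7, 152.9, 146.1, 161.4, 178.9, 160.5, 142.2]

def welch_t(a, b):
    """Welch's t: difference in means over the combined standard error."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (mb - ma) / se

t = welch_t(low_speed, high_speed)
print(f"t statistic = {t:.2f}")
```

The statistic comes out far beyond any usual critical value, consistent with the graphical conclusion that mean thickness differs markedly between the two speeds.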
Q3. Significance, β
Equations with multiple independent variables are represented in the least squares method as

Y = β0 + β1·X1 + β2·X2 + β3·X3 + ε

Thus, in matrix form, the coefficients are

β = (XᵀX)⁻¹ XᵀY

From the experiment 2 data, the covariance matrix must first be calculated and then used to solve for the values of β, where X1 = speed, X2 = pressure and X3 = distance. Solving the resulting simultaneous (normal) equations, the coefficients become β = 0.

Q4. Simpler derivation of Q3
This is done using the covariance matrix. The covariance matrix is used to draw the regression line for a dataset, which determines the confidence band. In a spreadsheet it is derived by solving a vector matrix; the formulas used range from =COVARIANCE.S($A$2:$A$28,$A$2:$A$28) to =COVARIANCE.S($C$2:$C$28,$C$2:$C$28) for the matrix. The other values obtained are tabulated in the appendix sheet.

Works Cited
Apgar, Virginia. "A Proposal for a New Method of Evaluation of the Newborn Infant." Current Researches in Anesthesia and Analgesia, 1953.
Siegel, Sidney, and N. John Castellan. Nonparametric Statistics for the Behavioral Sciences. New York: McGraw-Hill, 1988.

Appendix
Matlab code

% Coded design matrix: each row is a factor (speed, pressure, distance)
A = [-1 0 1 -1 0 1 -1 0 1 -1 0 1 -1 0 1 -1 0 1 -1 0 1 -1 0 1 -1 0 1;
     -1 -1 -1 0 0 0 1 1 1 -1 -1 -1 0 0 0 1 1 1 -1 -1 -1 0 0 0 1 1 1;
     -1 -1 -1 -1 -1 -1 -1 -1 -1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1];
C = cov(A', 1);   % this calculates the covariance matrix of the three factors

% Plotting the distribution curve
alpha = 0.05;                            % significance level
mu = 10;                                 % standard mean
sigma = 2;                               % standard deviation
cutoff1 = norminv(alpha, mu, sigma);
cutoff2 = norminv(1 - alpha, mu, sigma);
x = [linspace(mu - 4*sigma, cutoff1), ...
     linspace(cutoff1, cutoff2), ...
     linspace(cutoff2, mu + 4*sigma)];
y = normpdf(x, mu, sigma);
plot(x, y)
xlo = [x(x <= cutoff1) cutoff1];         % shade the lower tail
ylo = [y(x <= cutoff1) 0];
patch(xlo, ylo, 'b')
xhi = [cutoff2 x(x >= cutoff2)];         % shade the upper tail
yhi = [0 y(x >= cutoff2)];
patch(xhi, yhi, 'b')
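As a supplementary cross-check of the least-squares idea in Q3 (beta equals a covariance divided by a variance in the single-predictor case), the following Python sketch, not part of the original report, fits thickness against a coded speed level (-1 for low, +1 for high) using the forty readings from the results section; the helper names svar and scov are illustrative:

```python
# Coded speed levels paired with the forty thickness readings
low_speed = [74, 78.9, 90.3, 82.1, 89.4, 90.9, 90.2, 69.2, 76.2, 82.2,
             81.9, 87.3, 104.9, 74.4, 87.7, 87.8, 88.9, 102.7, 95.9, 95.5]
high_speed = [151.1, 136.8, 161.8, 147, 154.7, 143.1, 150.2, 150.3, 135.9, 141.1,
              145.5, 149.2, 152.3, 168.7, 152.9, 146.1, 161.4, 178.9, 160.5, 142.2]
x = [-1.0] * 20 + [1.0] * 20
y = low_speed + high_speed

def svar(a):
    """Sample variance (n - 1 denominator)."""
    m = sum(a) / len(a)
    return sum((v - m) ** 2 for v in a) / (len(a) - 1)

def scov(a, b):
    """Sample covariance (n - 1 denominator)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (len(a) - 1)

# Single-predictor least squares: beta1 = cov(x, y) / var(x),
# beta0 = mean(y) - beta1 * mean(x)
beta1 = scov(x, y) / svar(x)
beta0 = sum(y) / len(y) - beta1 * (sum(x) / len(x))
print(f"thickness ≈ {beta0:.2f} + {beta1:.2f} * speed_code")
```

The slope is simply half the gap between the two group means, which is what the coded design guarantees; the same covariance-over-variance construction is what the spreadsheet derivation in Q4 generalises to three factors.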