
INTERNATIONAL ISLAMIC UNIVERSITY MALAYSIA Kulliyyah of Engineering, IIUM Department of Electrical and Electronics Engineering

ECE 3111 SEMINAR REPORT ON

THE APPLICATION OF STATISTICAL PROCESS CONTROL (SPC) IN COMMUNICATION ENGINEERING

NAME (MATRIC)
MUHAMMAD NAFIS BIN ZULKIFLIE (1012847)
AMIRUL HAKIM BIN MAZLAN (1023461)
MUHAMMAD HUSAINI BIN BAHARIN (1021513)

Section 2

SUPERVISOR : DR. AHMAD ZAMANI BIN JUSOH DATE OF SUBMISSION : 25/11/2013

Table of Contents
Abstract
Introduction
1. Statistical Process Control (SPC)
2. Application of SPC
3. Potential Use of SPC in Communication Engineering
Conclusion
References

The Application of Statistical Process Control (SPC) in Communication Engineering


Muhammad Nafis Bin Zulkiflie (1012847), Amirul Hakim Bin Mazlan (1023461), Muhammad Husaini Bin Baharin (1021513) Department of Electrical and Computer Engineering, International Islamic University Malaysia

Abstract
Statistical Process Control (SPC) is a method used for controlling quality. This technique is usually applied in the manufacturing industry. In this paper we examine the potential of this technique for use in communication engineering.

I. INTRODUCTION

For the majority of companies, adopting SPC requires substantial changes to the existing quality program. Traditional QC programs emphasize product quality control, whereas SPC is process oriented. Traditional product-oriented QC systems emphasize defect detection: the company depends on inspection, scrap, and rework to prevent substandard product from being shipped to the customer. Such QC is an ineffective and inefficient system. As Deming puts it, under a product QC system, a company pays workers to make defects and then to correct them.

Process QC systems, by using SPC, emphasize defect prevention through improving the production system. When a company first uses SPC, the objective often is to ensure that the production process is stable and capable of producing product to specification. Production personnel monitor variation at critical stages in the process and, when necessary, act to prevent defects before more value is added. Scrap, rework, and therefore work-in-process inventory are reduced considerably. As these initial objectives are met, the objective of an SPC program should shift to improving the system by continuously striving to reduce variation. The philosophy of SPC as a whole can be summarized as:
1) Defects are prevented by monitoring and controlling variation.
2) Real quality improvement comes from improving the system and reducing variation.

An understanding of variation is crucial to understanding SPC. Shewhart, whose work led to the invention of SPC, recognized that variation is unavoidable in manufactured products. Further, he recognized that every system has inherent variation due to common (also called random or chance) causes. Shewhart also recognized another type of variation: variation due to special (also called assignable) causes. Common-cause variation is evidenced by a stable, repeating pattern of variation. Special-cause variation is evidenced by a break in the stable, repeating pattern of variation and is an indication that something has changed in the process. Product consistency is ensured by detecting and eliminating special-cause variation. Long-term quality improvement results from reducing common-cause variation.

II. STATISTICAL PROCESS CONTROL (SPC)

A. Tools

CONTROL CHART

Whether a process is as simple as making mom's recipe for spaghetti sauce or as complicated as admitting patients to an emergency room, its outcome is never exactly the same every time. Variability arises naturally from the combined effects of chance events. Statistical process control charts graphically represent the variability in a process over time [1]. When used to monitor a process, control charts can reveal inconsistencies and abnormal instabilities. Control charts normally display the limits within which statistical variability can be explained as normal. If the process is performing within these limits, it is said to be in control; if not, it is out of control. In control does not necessarily mean that a product or service is meeting your needs; it only means that the process is behaving consistently.

Variable control charts are commonly used in pairs. One chart studies the variation in a process, and the other studies the process average. The chart that studies variability must be examined before the chart that studies the process average, because the latter assumes that the process variability is stable over time.
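As a sketch of how such a pair of variable charts is set up, the following computes R chart and X-bar chart limits from subgrouped measurements. The data are hypothetical; the constants A2, D3, and D4 are the published control-chart constants for subgroups of size 5.

```python
from statistics import mean

# Hypothetical subgroups of 5 measurements each (illustrative data).
subgroups = [
    [10.2, 9.9, 10.1, 10.0, 9.8],
    [10.1, 10.3, 9.7, 10.0, 10.2],
    [9.9, 10.0, 10.1, 9.8, 10.2],
]

# Published control-chart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [mean(g) for g in subgroups]           # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]  # subgroup ranges

xbarbar = mean(xbars)  # grand average (X-bar chart centerline)
rbar = mean(ranges)    # average range (R chart centerline)

# The R chart must be checked first: the X-bar chart assumes the
# process variability (measured by R) is itself in control.
r_ucl, r_lcl = D4 * rbar, D3 * rbar
# X-bar chart limits are derived from the average range.
x_ucl, x_lcl = xbarbar + A2 * rbar, xbarbar - A2 * rbar

print(f"R chart:     LCL={r_lcl:.3f}  CL={rbar:.3f}  UCL={r_ucl:.3f}")
print(f"X-bar chart: LCL={x_lcl:.3f}  CL={xbarbar:.3f}  UCL={x_ucl:.3f}")
```

A point plotted outside either pair of limits would be flagged as a possible special cause.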

RUN CHART

Two of the most popular SPC tools in use today are the run chart and the control chart [2]. They are easy to build, as no specialist software is required, and easy to understand, as there are only a few basic rules to apply in order to identify the type of variation without needing to worry too much about the underlying statistical theory.

A run chart is a time-ordered sequence of data with a centerline drawn horizontally through the chart. It enables the monitoring of the process level and identification of the type of variation in the process over time. The centerline of a run chart is either the mean or the median; the mean is used in most cases unless the data is discrete. Discrete data is where the observations can only take certain numerical values, such as almost all counts of events (e.g., number of patients, number of operations). Continuous data are usually obtained by some form of measurement, where the observations are not restricted to certain values (e.g., height, age, weight, blood pressure).

Run chart rules for recognizing special-cause variation:
1) Number of runs: there are too few or too many runs in the process. A table based on the number of useful observations in the sample gives the expected range.
2) Shift: the number of successive useful observations falling on the same side of the centerline is greater than 7.
3) Trend: the number of successive useful observations, either increasing or decreasing, is greater than 7.
4) Zigzag: the number of useful observations decreasing and increasing alternately (creating a zigzag pattern) is greater than 14.
5) Wildly different: a useful observation is deemed wildly different from the other observations. This rule is subjective and is easier to apply when interpreting control charts.
6) Cyclic pattern: a regular pattern occurs over time, for example a seasonality effect.
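The shift and trend rules are mechanical enough to check in a few lines. The sketch below uses made-up observations and a median centerline; the run-counting logic is one straightforward reading of the rules.

```python
from statistics import median

# Illustrative time-ordered observations (hypothetical data).
data = [12, 14, 15, 16, 18, 19, 21, 22, 9, 11, 10, 13, 12, 11, 10, 9]
center = median(data)  # the median is robust for discrete/count data

def longest_shift(points, centerline):
    """Longest run of successive points on one side of the centerline."""
    best = run = 0
    prev_side = 0
    for p in points:
        side = (p > centerline) - (p < centerline)
        if side == 0:              # points exactly on the line break the run
            run, prev_side = 0, 0
            continue
        run = run + 1 if side == prev_side else 1
        prev_side = side
        best = max(best, run)
    return best

def longest_trend(points):
    """Longest run of successively increasing (or decreasing) points."""
    best = run = 1
    prev_dir = 0
    for a, b in zip(points, points[1:]):
        d = (b > a) - (b < a)
        if d == 0:                 # equal neighbours break the trend
            run, prev_dir = 1, 0
            continue
        run = run + 1 if d == prev_dir else 2
        prev_dir = d
        best = max(best, run)
    return best

print("longest shift:", longest_shift(data, center))  # 7: does not exceed 7
print("longest trend:", longest_trend(data))          # 8: trend rule signals
```

Here the first eight observations rise steadily, so the trend rule (more than 7 successive observations moving in one direction) signals special-cause variation.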

PARETO CHART

The Pareto (pah-ray-toe) chart is a very advantageous tool whenever one needs to separate the important from the trivial (Goetsch). A Pareto chart is simply a frequency distribution (or histogram) of attribute data arranged by category (Montgomery). It is a special type of bar chart in which the categories of responses are listed on the X-axis, the frequencies of responses (listed from largest to smallest) are shown on the left-side Y-axis, and the cumulative percentages of responses are shown on the right-side Y-axis. The chart is named after the 19th-century Italian economist Vilfredo Pareto[3], who postulated that a small minority (20%) of the people owned a great proportion (80%) of the wealth in the land. Dr. Joseph Juran recognized this concept as a universal that could be applied to many fields, and coined the phrases "vital few" and "useful many" for quality problems (Besterfield).

The Pareto chart can be used to display categories of problems graphically so they can be properly prioritized. There are often many different aspects of a process or system that can be improved, such as the number of malfunctioning products, time apportionment, or cost savings, and each aspect usually contains many smaller problems, making it difficult to decide how to approach the issue. A Pareto chart indicates which problem to tackle first by showing the proportion of the total problem that each of the smaller problems comprises. This is based on the Pareto principle: 20% of the sources cause 80% of the problem.
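The left-axis frequencies and right-axis cumulative percentages of a Pareto chart can be computed directly. This sketch uses hypothetical defect counts for illustration.

```python
from collections import Counter

# Hypothetical defect counts by category (illustrative data only).
defects = Counter({
    "solder bridge": 120, "misalignment": 45, "cracked board": 20,
    "missing part": 10, "scratch": 5,
})

total = sum(defects.values())
cumulative = 0
rows = []
# Sort categories from largest to smallest frequency, as on the X-axis
# of a Pareto chart, accumulating the right-hand-axis percentages.
for category, count in defects.most_common():
    cumulative += count
    rows.append((category, count, 100 * cumulative / total))

for category, count, cum_pct in rows:
    print(f"{category:15s} {count:4d} {cum_pct:6.1f}%")
```

In this made-up data the top two categories already account for 82.5% of all defects, the "vital few" that should be tackled first.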

FLOW CHART

After a process has been identified for improvement and given high priority, it should be broken down into specific steps and put on paper as a flowchart. This exercise alone can uncover some of the reasons a process is not working correctly; other problems and hidden traps are often uncovered while working through it. Flowcharting also breaks the process down into its many sub-processes, and analyzing each of these separately minimizes the number of factors that contribute to the variation in the process. After creating the flowchart, you may want to take another look at the fishbone diagram and see if any other factors have been uncovered; if so, you may need to do another Pareto diagram as well. Quality control is a continual process, in which factors and causes are constantly reviewed and changes made as required.

Flowcharts use a set of standard symbols to represent different actions:
- Circle / oval: beginning or end
- Square: a process, something being done
- Diamond: a yes/no decision

CAUSE AND EFFECT DIAGRAM

A cause-and-effect (CE) diagram is a graphical tool for organizing and displaying the interrelationships of various theories about the root cause of a problem[9]. CE diagrams are also commonly referred to as fishbone diagrams (due to their resemblance to a fish skeleton) or as Ishikawa diagrams, in honour of their inventor, Kaoru Ishikawa, a Japanese quality expert. Like flowcharts, CE diagrams are typically constructed as a team effort, and as with many team efforts, the process is often more important than the end product. When a team is brought together to study the potential causes of a problem, each member is able to share their expertise and experience with the problem. The team approach enables clarification of potential causes and can assist with building consensus on the most likely ones. By empowering the team to identify the root cause and its solution, the team gains ownership of the process and is far more motivated to implement and maintain the solution over the long term.

Perhaps most importantly, using a team to develop a CE diagram can help avoid the all-too-common challenge of pet theories. Pet theories arise when someone asserts that he or she already knows the cause of a problem. The person presenting this theory may well be right, and if they are in a position of authority, chances are their theory will be the one that gets tested. There are risks, however, in simply tackling the pet theory. If the theory is in fact wrong, time and resources may be wasted; and even if the theory is correct, future team efforts will be stifled, since team members may feel their input is neither needed nor valued. Further, the theory may be only partially correct: it might address a symptom or secondary cause rather than the actual root cause. CE diagrams, instead, bring the team together to identify and solve core problems.

Brassard and Ritter (1994) list two common formats for CE diagrams:
- Dispersion analysis: the diagram is structured according to major cause categories such as machines, methods, materials, operators, and environments.
- Process classification: the diagram is structured according to the steps involved in the production process, such as incoming inspection, ripping, sanding, moulding, etc.

CE diagrams are often developed via a brainstorming exercise. Brainstorming can be either a structured or an unstructured process. In a structured process, each member of the team takes a turn presenting an idea; in unstructured brainstorming, people simply present ideas as they come. Either approach may be used, but the advantage of the structured approach is that it elicits ideas from everyone, including the more shy members of the team.

The following steps are taken to develop a CE diagram:
1. Clearly define the problem (effect): ensure the problem is clearly stated and understood by everyone. The bottom line for CE diagrams is that there is only one clearly defined effect being examined; the process focuses primarily on the causes, of which there will likely be far more than one.
2. Decide on format: the team should determine whether dispersion analysis or process classification (described above) is most appropriate for the situation. Either approach is acceptable; the primary concern is which format works best for the group and the problem being explored.
3. Draw a blank CE diagram: the diagram should look like Figure 1. The effect or problem being studied is entered in the box on the right-hand side. The main backbone is then drawn, followed by angled lines for the various cause categories.
4. Brainstorm causes: the team can now begin brainstorming potential causes of the problem. It is typical for causes to come in rapid-fire fashion, unrelated to the categories on the diagram, and the meeting facilitator will have to enter them in the appropriate place. If ideas are slow in coming, the facilitator might address each of the categories one at a time.
5. Go for the root (cause): as the team discusses some of the causes, it will become apparent that there are underlying causes for some items.

Figure 1. Example blank cause-and-effect diagram

HISTOGRAM

A histogram is a special bar chart for measurement data, used to graphically summarize and display the distribution of a process dataset[10]. A histogram is constructed by segmenting the range of the data into equal-sized bins (segments, groups, or classes). The vertical axis of the histogram is the frequency (the number of counts for each bin), and the horizontal axis is labeled with the range of the response variable. The number of data points in each bin is determined and the histogram constructed; the user defines the bin size. The difference between a bar chart and a histogram is that the X-axis on a bar chart is a listing of categories, whereas the X-axis on a histogram is a measurement scale. In addition, there are no gaps between adjacent bars.

Figure 2: Example of Histogram

SCATTER DIAGRAM

Scatter diagrams are used to study possible relationships between two variables[11]. Although these diagrams cannot prove that one variable causes the other, they do indicate the existence of a relationship. More than one measured object can be used for the y-axis, as long as the objects are of the same type and scale. The purpose of a scatter diagram is to display what happens to one variable when the other variable is changed; the diagram is used to test the theory that the two variables are related. The slope of the diagram indicates the type of relationship that exists.

Figure 3: Example of Scatter Diagram

A scatter diagram is quite similar to a line graph, except that the data points are plotted without a connecting line drawn between them. Scatter charts are suitable for showing how data points compare to each other. In order to generate a scatter diagram, at least two measured objects are needed for the query (one for the x-axis and one for the y-axis).
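The strength and direction of the relationship a scatter diagram suggests can be quantified with the Pearson correlation coefficient. The paired measurements below are made up for illustration.

```python
from math import sqrt
from statistics import mean

# Hypothetical paired measurements for a scatter diagram (illustrative).
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

def pearson_r(xs, ys):
    """Pearson correlation: the sign gives the slope direction of the
    scatter, the magnitude gives the strength of the linear relationship."""
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sqrt(sum((a - mx) ** 2 for a in xs))
    sy = sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

r = pearson_r(x, y)
print(f"r = {r:.3f}")  # close to +1: strong positive relationship
```

A value near +1 or -1 indicates a strong linear relationship; a value near 0 indicates none, although, as noted above, correlation alone does not establish causation.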

B. Application of SPC

The most common and popular use of SPC is in the manufacturing field, as its function is to monitor the progress of any process. The most basic tool used in this field is the control chart, which is very important as it is used for process monitoring. As an example of the usage of control charts, consider their implementation in semiconductor manufacturing. In semiconductor manufacturing, one or more critical parameters are often measured and recorded[4]. For example, the lithographic sequence can be monitored by recording the thickness of the photoresist layer before exposure. The target thickness of the photoresist is typically about 1 μm. Because of the mechanical nature of this operation, the thickness of the layer on each wafer will vary, even if the equipment is functioning properly. In addition, the measurement of the deposited layer, usually done with a photospectrometer, will be subject to calibration errors. The combination of these sources of variability gives a photoresist thickness that, as recorded, appears to be statistically distributed with a routine run-to-run variation. The goal of SPC is to implement a simple procedure that will reliably flag any significant deviation beyond the routine run-to-run variation.

First, we describe the simple x-bar chart as a vehicle for illustrating some fundamental SPC concepts, such as the type I and type II errors, the average run length, and the operating characteristic function. The x-bar chart is based on the assumption that, when the process is in control, the monitored variable is distributed according to a normal distribution with a known mean μ and a known standard deviation σ; symbolically:

x ~ N(μ, σ^2)

Using the x-bar chart consists of grouping and averaging n readings of x, defined as

x-bar = (1/n) Σ x_i, i = 1, ..., n

Under the assumption that each reading is independently and identically normally distributed [8] (hereafter the IIND assumption), the arithmetic average can be shown to be distributed according to another known distribution:

x-bar ~ N(μ, σ^2/n)

Pictorially, this control scheme is implemented by plotting the value of the group average versus time. If this value falls within a zone of high likelihood (determined by the known distribution of the plotted statistic), then we conclude that the process is under control. If the value falls outside this high-likelihood region, the process is considered to be out of control. Traditionally, the high-likelihood region is chosen to be within +/- 3σ_xbar, where σ_xbar = σ/√n is the standard deviation of the arithmetic average. It can be shown that the three-sigma control limits yield a probability of type I error equal to 0.0027. These limits are defined as follows:

UCL = μ + 3σ/√n
LCL = μ - 3σ/√n
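A minimal sketch of these limits and the resulting type I error follows. The in-control mean and standard deviation are hypothetical values for a photoresist-thickness process (in micrometers), not figures from the study.

```python
from math import erf, sqrt
from statistics import mean

# Hypothetical in-control parameters, assumed known (illustrative).
mu, sigma, n = 1.000, 0.010, 5

# Three-sigma control limits for the x-bar chart: mu +/- 3*sigma/sqrt(n).
sigma_xbar = sigma / sqrt(n)
ucl = mu + 3 * sigma_xbar
lcl = mu - 3 * sigma_xbar

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Type I error: probability that an in-control average falls outside
# the +/- 3 sigma region; its reciprocal is the in-control ARL.
alpha = 2 * (1 - norm_cdf(3.0))
arl_in_control = 1 / alpha  # average samples between false alarms

print(f"limits: [{lcl:.4f}, {ucl:.4f}]")
print(f"alpha = {alpha:.4f}, in-control ARL = {arl_in_control:.0f}")

# Monitoring step: average each group of n readings, compare to limits.
group = [1.002, 0.998, 1.001, 0.999, 1.003]
in_control = lcl <= mean(group) <= ucl
```

The computed alpha is about 0.0027, so an in-control process produces a false alarm roughly once every 370 plotted points.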

If a point falls outside this zone, then we can conclude, at a 0.0027 level of significance, that this point was generated by a different distribution, one whose mean, variance, or both have shifted. As an example, consider the x-bar chart that was implemented to control the thickness of photoresist in the Berkeley microfabrication laboratory. Assuming that we know the mean and variance of this chart, we plot the average thickness versus time in Fig. 4.

Figure 4

In addition to the type I and type II risks, the average run length (ARL) is an important characteristic of the control chart. The ARL is defined as the average number of points plotted between alarms, false or otherwise. The ARL is a function of the process status and the type I and type II risks. In Fig. 4, the process appears to be under control in the region designated as A. Here, we want the ARL to be as long as possible (because any generated alarms will be false), and in fact ARL = 1/α. If α = 0.0027 (for three-sigma control limits), then the average number of samples between alarms is about 370. Again in Fig. 4, the process appears to be out of control in the region designated as B. Here, we want the ARL to be as short as possible, and in fact the out-of-control ARL equals 1/(1 - β). In this region β, the type II risk, is related to the size and the type of the process shift, as illustrated in Fig. 5.

Figure 5

Assuming that the process goes out of control because its mean has shifted while its variance stayed the same, it is possible to plot β versus k, the amount of the shift (expressed in units of standard deviation), and the sample size n, which we employ in the calculation of the average. In Fig. 6 we show such a plot, known as the operating characteristic function of the chart. Next we discuss the application of some of the more sophisticated traditional SPC tools.

Figure 6

Other than its application in manufacturing, SPC can also be used in other fields. For example, in 1988, the Software Engineering Institute suggested that SPC could be applied to non-manufacturing processes, such as software engineering processes, in the Capability Maturity Model (CMM). The Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI) use this concept.

Level 4 - Managed: It is characteristic of processes at this level that, using process metrics, management can effectively control the as-is process (e.g., for software development). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process capability is established at this level.

Level 5 - Optimizing: It is characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements.

At maturity level 5, processes are concerned with addressing statistical common causes of process variation and changing the process (for example, to shift the mean of the process performance) to improve process performance, while at the same time maintaining the likelihood of achieving the established quantitative process-improvement objectives.

C. Potential Use of SPC in Communication Engineering

The SPC technique is well known and widely used in manufacturing engineering, but it is not very popular in communication engineering. In this section, we examine the potential of SPC tools in the communication engineering field.

The complexity of computer and communication networks has increased on a daily basis. Not only the variety, but also the volume of data to process during network planning, management, and operation activities is very large. This challenging scenario has forced engineers [6] to search for new methods to analyze network data in an online manner. The field of network traffic analysis has made significant progress, particularly in network traffic forecasting. Forecasting involves making projections about the future on the basis of historical and current data. Decision making is an integral part of network management and operation processes, and forecasting may reduce decision risk by supplying consistent information about possible future events[5].

The most common approach is for the traffic engineer to use the collected traffic data in conjunction with general-purpose statistical software, such as the R project or SPSS, to perform data analysis and forecasting. This approach is very useful mainly for planning purposes; however, for network operation activities, online data analysis and prediction are essential. Online forecasting is a real-time continuous process, requiring not only robust traffic measurement and storage instrumentation but also a flexible statistical engine able to perform real-time numerical analysis based on different data sources.

An important aspect of an online real-time network traffic forecasting solution is the quality control of its predictions, which must also be a continuous process. The forecasting results will serve as input for decision making in several network operation and management tasks, so their quality is a major requirement. Despite its importance, quality control of network traffic forecasting has not been discussed in previous studies. Hence, in order to contribute to the body of knowledge in this area, a quantitative method based on statistical process control techniques is proposed to control the quality of network traffic predictions.

The network traffic predictions are estimated as an online continuous process; thus, their quality control is primarily an automatic process executed in real time without online supervision. The traffic samples used in this study come from real LAN, MAN, and WAN networks, and are used to compose the time series evaluated by the selected forecasting models. We assume that forecasts and their quality control are performed continuously following the online approach. Several statistical techniques commonly used to analyze prediction data are not adequate under this assumption, being more appropriate for offline analyses. Based on this constraint, we choose techniques that are able to generate an acceptable degree of accuracy and are feasible to implement in an online processing scenario. Most of them are less sophisticated than those applied in offline analysis; however, the focus of this work is to demonstrate the practical applicability of the proposed forecast quality-control approach rather than to investigate an optimal solution. Hence, the aim is to show how forecasting and its quality control can be implemented in an online manner. Improvements on this proposal are in progress and will be communicated in future works.
For the first part, we have to deal with the forecasting. We evaluated the following models:

naive, fitted naive, linear regression, autoregressive (AR, several orders), moving average (MA, several orders), exponentially weighted moving average (EWMA), Holt-Winters (HW), and autoregressive moving average (ARMA). The criterion used to select the best-fit model is the accuracy of its predictions against the observed data. The models' accuracy is calculated using the MAPE (mean absolute percentage error) measure, and based on the calculated MAPE for each model, a classification rank is built with the best-fit models on top.

For the four traffic samples, namely S1.1 (DHCP DISCOVER), S1.2 (DHCP OFFER), S2 (hostile traffic, MAN), and S3 (enterprise application traffic, WAN), the accuracy obtained with the transformed series showed the best results. Although the predictions obtained with the original data from S1.1, S1.2, and S3 were considered acceptable (more than 70% accuracy), we decided to work with transformed series, given that more than 90% accuracy was found for all tested models; also, the forecasting results for the original (non-transformed) S2 data showed poor accuracy (less than 60%). Several transformations were considered (e.g., ln(y), 1/y, e^y), and the best fit for all models and samples was obtained with the log-transformed (log10) data sets. Hence, the log10 transformation is adopted as the standard transformation in the proposed approach. Table 1 below shows the rankings with the five best-fit models for S1 to S3, using the transformed series.
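The MAPE-based ranking can be sketched as follows. The model names and numbers below are made up for illustration; they are not the study's actual series or results.

```python
# Hypothetical observed traffic values and per-model predictions.
observed = [100.0, 110.0, 105.0, 120.0, 115.0]
predictions = {
    "AR(2)": [ 98.0, 112.0, 104.0, 118.0, 116.0],
    "EWMA":  [ 95.0, 108.0, 110.0, 114.0, 120.0],
    "naive": [105.0, 100.0, 110.0, 105.0, 120.0],
}

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)

# Rank the models best-fit first (lowest MAPE on top).
ranking = sorted(predictions, key=lambda m: mape(observed, predictions[m]))
for model in ranking:
    print(f"{model:6s} MAPE = {mape(observed, predictions[model]):.2f}%")
```

A MAPE of 10% corresponds to the paper's notion of "90% accuracy", so the log-transformed series, with MAPE below 10% for all models, would all rank as acceptable.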

Table 1

As can be observed in all ranking tables, the best-fit models were predominantly AR, MA, ARMA, HW, and EWMA. They kept consistent accuracy for all the different types of traffic under study. ARMA and AR showed the best fit for S1.1, S1.2, and S2. Due to the nonstationarity of S3's traffic pattern, which presents trend and seasonal structures, the HW model performed better.

Using this data, we can move to forecasting quality control. To achieve this goal, a combination of CUSUM and Shewhart charts is used[7]. The CUSUM control chart has the advantage of taking into account the history of the forecast series, being able to detect model failure rapidly when forecast errors are relatively small. On the other hand, the Shewhart chart is better than CUSUM at detecting large changes in forecast errors. Therefore, by using both techniques, either large model errors or repeated small ones should be quickly detected. These two monitoring abilities are very important in forecasting for network flows, given that both WAN/MAN and LAN traffic may present both types of pattern variation. Equations 1 and 2 present the two CUSUM statistics used in this study:

C+_i = max[0, y_i - (μ0 + K) + C+_(i-1)]   (1)
C-_i = max[0, (μ0 - K) - y_i + C-_(i-1)]   (2)
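A sketch of the tabular CUSUM of equations (1) and (2) follows, in the standard form (e.g., Montgomery). The parameter values and error series are illustrative; in practice μ0, K, and H would be estimated as described in the text.

```python
def cusum(errors, mu0=0.0, k=0.5, h=4.0):
    """Two-sided tabular CUSUM over a series of forecast errors.

    mu0: estimated in-control mean; k: reference value K;
    h: decision interval H. Returns the indices where an alarm fires.
    """
    c_plus = c_minus = 0.0
    alarms = []
    for i, y in enumerate(errors):
        # Accumulate deviations beyond the reference value on each side.
        c_plus = max(0.0, y - (mu0 + k) + c_plus)
        c_minus = max(0.0, (mu0 - k) - y + c_minus)
        if c_plus > h or c_minus > h:
            alarms.append(i)        # forecast errors beyond acceptable limits
            c_plus = c_minus = 0.0  # reset after the intervention
    return alarms

# Small but persistent positive errors: individually they would stay
# inside Shewhart limits, but CUSUM accumulates them into an alarm.
errors = [0.8] * 20
print(cusum(errors))
```

This is exactly the behavior motivating the combined scheme: repeated small errors trip the CUSUM, while isolated large errors are left to the Shewhart chart.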

where y_i is the prediction error for the i-th observation, μ0 is the estimated in-control mean, the initial values of C+ and C- are zero, and K is a reference value calculated from δ, the magnitude of the process mean shift to be detected (usually given in numbers of standard deviations, σ), and n, the size of the observation samples. The sensitivity factor K is directly related to the magnitude of process change we want to detect with the CUSUM algorithm. The CUSUM statistics are monitored using a two-sided CUSUM chart with decision intervals H+/-, where the rationale for choosing H is to minimize the risk of false alarms while keeping the ability to promptly detect shifts of interest. Standard tabular values for H are used, given the calculated value of K. As a result, if the CUSUM statistics fall outside the ranges (μ0 + H+) and (μ0 - H-), respectively, an alarm is issued indicating that the forecasting errors are beyond the acceptable limits and an intervention is necessary. For the Shewhart charts, the upper (UCL) and lower (LCL) control limits are calculated as

UCL = μ0 + Lσ   (4)
LCL = μ0 - Lσ   (5)

where L is the shift detection factor, given in numbers of standard deviations (σ). The results of applying the proposed method to the four selected traffic samples are presented in the figures below, which show the statistical control charts used to monitor the quality of forecasting for each traffic sample investigated. Using four different traffic patterns, we evaluate this approach in terms of its performance under different network environments.

Figure 7

Figure 8

Figure 9

Figure 10

The outer and inner limits are related to the CUSUM and Shewhart control charts, respectively. Along with the CUSUM statistics (C+ and C-), we plot the observed forecasting errors.
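The Shewhart (inner, warning) limits can be sketched as below. The reference window of forecast errors and the factor L = 3 are illustrative assumptions, not values from the study.

```python
from statistics import mean, stdev

def shewhart_limits(errors, L=3.0):
    """Shewhart limits mu0 +/- L*sigma estimated from reference errors.

    L is the shift-detection factor in numbers of standard deviations.
    """
    mu0 = mean(errors)
    s = stdev(errors)
    return mu0 - L * s, mu0 + L * s

# Phase I: estimate the limits from a hypothetical in-control window.
reference = [0.1, -0.2, 0.05, 0.15, -0.1, 0.0, 0.2, -0.15]
lcl, ucl = shewhart_limits(reference)

# Phase II: a single large forecast error is caught immediately,
# which is where Shewhart outperforms CUSUM.
new_errors = [0.1, -0.05, 1.5]
out = [e for e in new_errors if not (lcl <= e <= ucl)]
print(out)
```

Here the isolated error of 1.5 falls outside the warning limits at once, whereas the CUSUM statistics would still be accumulating; running both charts covers both failure modes.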

In Figure 7, the forecasting process remains in control until the eighteenth prediction, where the C- statistic falls outside the CUSUM bounds. At this point, a new rank must be calculated in order to identify possibly better models. It is important to highlight that the underlying sources of network traffic flows change, which very often impacts the current model's predictions and requires the selection of a new forecasting model. Figure 8 presents behavior similar to Figure 7, where the C- statistic triggers a new rank calculation. On the other hand, the forecasting process illustrated in Figure 9 has its rank reevaluated earlier; this hostile traffic data set showed the largest variability among all evaluated traffic samples. In the case of Figure 10, two data points (10 and 15), both in C+, are outside the warning limits (the Shewhart bounds). This chart shows that, although the forecast errors are larger at the beginning of the process, they remain within the control limits, meaning that the forecasting process is considered adequate for the first twenty-five predictions.

III. CONCLUSION

Statistical process control is an analytical decision-making tool which allows one to see when a process is working correctly and when it is not. Variation is present in any process, and deciding when the variation is natural and when it needs correction is the key to quality control. The excellent performance of simpler forecast models applied to network traffic data has motivated their adoption in several applications for traffic engineering. However, to the best of our knowledge, quality control of network traffic forecasting processes has not been discussed previously. In conclusion, we have achieved our main objective, which was to see the potential of the SPC technique in communication engineering. For this paper, our aim was to show how forecasting and its quality control can be implemented in an online manner. Improvements on this proposal are in progress and will be communicated in our future works. In the future, more tools may be used to improve the quality of communication systems.

IV. REFERENCES

[1] Nelson, Lloyd S. (1985), "Interpreting Shewhart X-bar Control Charts", Journal of Quality Technology, 17:114-116.
[2] Steel, R. G. D. and J. H. Torrie (1980), Principles and Procedures of Statistics. New York: McGraw-Hill.
[3] Western Electric Company (1956), Statistical Quality Control Handbook, AT&T Technologies, Indianapolis, IN.
[4] Costas J. Spanos, "Statistical Process Control in Semiconductor Manufacturing", Proceedings of the IEEE, Vol. 80, No. 6, June 1992.
[5] Rivalino Matias Jr., "Quality Monitoring of Network Traffic Forecasts using Statistical Process Control", IEEE, 2010.
[6] D. Awduche, A. Chiu, A. Elwalid, I. Widjaja, and X. Xiao, RFC 3272: Overview and Principles of Internet Traffic Engineering, IETF, 2002.
[7] D. C. Montgomery, Introduction to Statistical Quality Control, John Wiley & Sons, 1996.
[8] Marilyn K. Hart and Robert F. Hart, Introduction to Statistical Process Control Techniques, Statit Software, Inc., 2007.
[9] Leavengood, Scott A. and Reeb, J. E., Statistical Process Control, Part 5: Cause-and-Effect Diagrams, Oregon State University Extension Service.
[10] http://material.eng.usm.my/stafhome/sabar/EBB341/Chapter%201%20-%20SPC.ppt
[11] http://sintak.unika.ac.id/staff/blog/uploaded/5812002253/files/pengawasan_mutu/statistical_process_control.ppt
