Statistical Quality Control for Computer Lab Availability

1. Frequency Graph

Note that for the development of this report, the 7 days tabulated in the given table were used, as they constitute a representative sample of the state of the computer lab.

With the tabulated data, frequency graphs were constructed to illustrate the problem affecting the laboratory: the non-availability of computers. A frequency graph was built for the 210 observations obtained over the 7 days across the three shifts (morning, afternoon, and evening), indicating the number of unavailable computers in each sample of 10 computers. A relative frequency graph was also constructed, showing the average percentage of unavailable computers per shift; it indicates that the number of unavailable computers increases with each successive shift during the day.
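As a minimal sketch of this tabulation, assuming the observations are stored as per-shift counts of unavailable machines in each daily sample of 10 computers (the numbers below are illustrative, not the report's data), the frequencies could be computed as follows:

```python
from collections import Counter

# Hypothetical data: for each of the 7 days and 3 shifts, the number of
# unavailable computers in that shift's sample of 10 machines
# (21 samples, 210 individual observations). Values are illustrative only.
unavailable = {
    "morning":   [6, 7, 6, 7, 7, 7, 7],
    "afternoon": [7, 7, 8, 7, 8, 7, 8],
    "evening":   [8, 9, 8, 9, 9, 8, 8],
}

# Absolute frequency of each count of unavailable computers across all samples.
freq = Counter(x for counts in unavailable.values() for x in counts)
print("count -> frequency:", dict(sorted(freq.items())))

# Relative frequency per shift: average fraction of the 10 machines unavailable.
for shift, counts in unavailable.items():
    rel = sum(counts) / (10 * len(counts))
    print(f"{shift}: {rel:.0%} unavailable on average")
```

With such data, the per-shift averages rise from the morning to the evening shift, which is the pattern the relative frequency graph displays.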

2. Cause-and-Effect Diagram

The aim of the cause-and-effect diagram is to identify the major and minor causes of a quality problem, which in this case corresponds to the non-availability of computers in the computer lab. The causes were formulated based on experience and by consulting the experts involved in laboratory management.

The method used to construct the cause-effect diagram was the enumeration of causes, which is mainly based on brainstorming.

Then, with the causes listed, they were classified in the diagram under the 5 M's (Materials, Methods, Manpower, Media, and Environment) and were also assigned as primary or secondary. The following diagram shows the cause-and-effect relationships.
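As a sketch of this classification, the causes detailed in the Pareto analysis below could be organized under the 5 M's in a simple mapping (the primary/secondary split shown here is illustrative, not the report's exact assignment):

```python
# Hypothetical 5 M's classification; the primary/secondary assignment is
# illustrative. Methods and Environment carry weight in the annexes but
# have no individual causes detailed in the report.
causes_5m = {
    "Materials":   {"primary": ["deficient maintenance"],
                    "secondary": ["fixtures in disrepair"]},
    "Media":       {"primary": ["system crashes", "Internet failures"],
                    "secondary": ["bad data"]},
    "Manpower":    {"primary": ["staff absenteeism"],
                    "secondary": ["poor training"]},
    "Methods":     {"primary": [], "secondary": []},
    "Environment": {"primary": [], "secondary": []},
}

for category, levels in causes_5m.items():
    for level, causes in levels.items():
        for cause in causes:
            print(f"{category} ({level}): {cause}")
```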

3. Pareto Chart

The aim of the Pareto chart is to determine which of the previously identified causes of the quality problem have the highest incidence, and on which of them corrective actions must be focused.

With these causes identified, we sought statistics on which of them had the highest incidence in the unavailability problem; based on assumptions and expert consultation, the weights of these causes were estimated, expressed as percentages of occurrence.

Based on the Pareto chart shown in the figure, it is clear that the main causes of the non-availability of computers are those assigned to Materials, Media, and Manpower. The main factors behind each of these components are detailed below.

Materials: deficient maintenance, fixtures in disrepair

Media: system crashes, Internet failures, bad data

Manpower: staff absenteeism, poor training.

The sum of these three components accounts for 80% of the causes of unavailability, as the sketch below illustrates.
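A minimal sketch of the Pareto computation, using the cause weights estimated in the annexes (the weights are the report's; the code itself is illustrative):

```python
# Cause weights (percentage of occurrence) as estimated in the annexes.
weights = {
    "Materials": 35, "Media": 25, "Manpower": 20,
    "Methods": 10, "Environment": 5, "Miscellaneous": 5,
}

# Sort causes by weight and accumulate until the 80% threshold is reached:
# these are the "vital few" of the Pareto analysis.
cumulative = 0
vital_few = []
for cause, weight in sorted(weights.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += weight
    vital_few.append(cause)
    if cumulative >= 80:
        break

print(vital_few, f"-> {cumulative}% of occurrences")
# ['Materials', 'Media', 'Manpower'] -> 80% of occurrences
```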

4. Attributes Control Chart

For the construction of the attributes control chart, the p chart was used, with the sample number on the abscissa and the fraction defective on the ordinate.

The p chart limits are obtained from:

$$UCL = \bar{p} + Z\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}, \qquad CL = \bar{p}, \qquad LCL = \bar{p} - Z\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}$$

where $\bar{p}$ is the average fraction defective and $n$ is the sample size.

The Z value is usually set to 3, so this chart uses that value.

As can be seen, the chart shows points that lie outside the control limits, so it is concluded that the process is not under statistical control. The causes of the out-of-control points are not normal or common causes but assignable causes: deviations of the process that can be clearly identified and eliminated, and that account for the inefficiency of the process.
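As a minimal sketch of the p chart computation, assuming n = 10 and illustrative per-sample defective fractions (the report's actual measurements are in the annexed table):

```python
import math

# Hypothetical fractions of unavailable computers per sample of n = 10 machines.
fractions = [0.3, 0.6, 0.7, 0.8, 0.7, 0.9, 0.8,
             0.7, 0.8, 0.9, 0.7, 0.8, 0.2, 0.8,
             0.9, 0.8, 0.9, 1.0, 0.8, 0.9, 0.8]
n, z = 10, 3  # sample size and the usual Z = 3 sigma limits

p_bar = sum(fractions) / len(fractions)     # average fraction defective
sigma = math.sqrt(p_bar * (1 - p_bar) / n)  # standard error of the fraction
ucl = min(1.0, p_bar + z * sigma)           # upper control limit (capped at 1)
lcl = max(0.0, p_bar - z * sigma)           # lower control limit (floored at 0)

print(f"p-bar = {p_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")

# Any sample outside the limits signals an assignable cause; with these
# illustrative data, two samples fall below the LCL.
out_of_control = [(i, p) for i, p in enumerate(fractions, start=1)
                  if not lcl <= p <= ucl]
print("out-of-control samples:", out_of_control)
```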

5. Process Capability Index

The objective of the index is to measure the capability of a given process, to see whether the process is stable or unstable, and whether the range of variation of the controlled characteristic in the chart complies satisfactorily with the required standard. For this, there must be a standard specification against which the process can be compared, better known as the specification limits.

Applied to this case, it was assumed that the computers available in the computer lab should not fall below 80% of the total. Translated into specification limits, there is only an upper specification limit on the fraction of unavailable computers, corresponding to USL = 0.2.

Unilateral specification:

$$C_{pu} = \frac{USL - \bar{p}}{3\sigma}$$

As can be seen, given the assumed specification, the process capability index is less than unity; therefore, the capability of the process is unsatisfactory.
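A minimal sketch of this computation, using USL = 0.2 from the report and an illustrative process mean and standard deviation chosen to be near the figures the conclusions cite (CP = -6.276):

```python
# Upper specification limit: at most 20% of the computers unavailable.
usl = 0.2

# Hypothetical process statistics for the fraction of unavailable computers;
# the mean matches the ~75% defective rate the report describes, and the
# standard deviation is assumed.
p_bar = 0.75
sigma = 0.0292

# One-sided (unilateral) capability index against the upper limit only.
cpu = (usl - p_bar) / (3 * sigma)
print(f"Cpu = {cpu:.3f}")  # ~ -6.28, far below 1 -> unsatisfactory capability
```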

6. Acceptance Sampling

To determine the required acceptance sampling, and since the exact number of computers to inspect is not known, it is assumed that there are between 51 and 90 computers in the lab. Under this assumption, the annexed tables from the book Statistical Quality Control are used.

  1. First, with the lot size (51-90), go to Table I: sample size code letters. Based on the lot size, and considering a General Inspection Level II under normal inspection, look up the corresponding code letter; letter E is obtained.
  2. Second, with the chosen letter, go to the MIL-STD-105D sampling tables, Table II, Single Sampling Plans, Normal Inspection. There the corresponding sample size of 13 is obtained.
  3. Third, in the same table, according to the acceptable quality level, find the acceptance number and the rejection number. For our case, a criterion of an Acceptable Quality Level of 3 was used. With this, our acceptance number is 1 and our rejection number is 2.

Therefore, the acceptance sampling plan found under the stated assumptions provides:

N: Lot size = 51-90 computers

n: Sample size = 13 computers

Ac: Acceptance number = 1 computer

Re: Rejection number = 2 computers

It should be noted that this sampling plan was devised assuming an improvement over the current conditions of the laboratory, since at present the laboratory has a percentage of defective (unavailable) computers of approximately 75%; clearly, the assignable causes must be investigated and eliminated from the process. Only then is this acceptance sampling plan valid to apply to the operation of the computer lab.
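As a minimal sketch of how this plan behaves, assuming a binomial model for the number of defectives found in the sample (the ~75% figure is the report's; the code is illustrative):

```python
from math import comb

def prob_accept(p: float, n: int = 13, ac: int = 1) -> float:
    """Probability of accepting a lot with true fraction defective p under a
    single sampling plan: accept when the sample has at most ac defectives."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(ac + 1))

# At the lab's current ~75% defective rate, acceptance is essentially impossible.
print(f"P(accept | p = 0.75) = {prob_accept(0.75):.2e}")

# Near the plan's AQL of about 3% defective, lots are almost always accepted.
print(f"P(accept | p = 0.03) = {prob_accept(0.03):.3f}")
```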

7. Operating Characteristic Curve

For a given sampling plan (n, Ac, Re) applied to a lot of size N, there is a single operating characteristic curve for the plan, which relates the probability of acceptance to the percent defective of the inspected lot.

The representative elements of the characteristic curve are the AQL (Acceptable Quality Level), alpha (producer's risk), CL (limiting quality), and beta (consumer's risk). These are marked on the curve.

The AQL is obtained from the binomial distribution or from the Poisson distribution, which simplifies the problem. The Poisson distribution table was used: entering it with Ac = 1 and an acceptance probability of 95%, the value of n·p is obtained; since our sample size is n = 13, the defective percentage AQL = 3% was obtained.

In the same way, the limiting quality was obtained, entering the table with an acceptance probability of 10%, giving CL = 27%. Thus the consumer's (customer's) risk is 10%, and the producer's risk is 5%.
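A minimal sketch of this derivation under the Poisson approximation the report uses (the bisection solver is illustrative; the report reads the values from a table):

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for a Poisson random variable with mean lam."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def solve_np(target_pa: float, ac: int = 1) -> float:
    """Find n*p such that the acceptance probability P(X <= ac) = target_pa."""
    lo, hi = 0.0, 20.0
    for _ in range(60):  # bisection: the CDF decreases as n*p grows
        mid = (lo + hi) / 2
        if poisson_cdf(ac, mid) > target_pa:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n = 13
aql = solve_np(0.95) / n  # ~2.7%, matching the report's AQL of about 3%
cl = solve_np(0.10) / n   # ~30%; the report reads roughly 27% from the table
print(f"AQL ~= {aql:.1%}, CL ~= {cl:.1%}")
```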

8. Conclusions

A first approach to the problem existing in the laboratory is given by the frequency graph, which shows that the afternoon and evening shifts present a greater number of unavailable or faulty computers, so we can assume that maintenance is a major factor among the assignable causes.

Through the cause-and-effect diagram, we were able to establish primary and secondary causes, later weigh them, and thus build the Pareto chart, which gives us valuable information about what needs to be improved within the laboratory: improving the maintenance of the computers, improving the network and Internet systems, and strengthening staff training. These three causes comprise about 80% of the causes of failure.

The attributes control chart (p chart) showed that the process is not under statistical control, due to causes that are not normal but assignable. These causes are measurable and removable, so good laboratory management suggests eliminating them so that the process can become efficient. The process capability index, CP = -6.276, indicates that the process capability is unsatisfactory, given the assumption of USL = 0.2.

The acceptance sampling determination was carried out assuming that the size of the laboratory is between 51 and 90 computers; with this, the ideal parameters of a sampling plan were obtained: a sample size of 13, an acceptance number of 1, and a rejection number of 2. This is clearly for an ideal process; if applied to the current process, which has very bad indicators, all samples would be rejected.

The operating characteristic curve, being unique for the given sample size and acceptance number, gives us the probability of acceptance as a function of the lot's fraction defective. Key indicators are obtained here, such as the estimated AQL = 3% and CL = 27%.

We conclude that statistical quality control has enormous potential as an instrument and tool to better control the quality of a company. It is a most effective tool for making decisions about process adjustments. It is an excellent method for analyzing the systems to be corrected, so that the risks of both the consumer and the supplier are reduced and processes are not left in uncertainty but become more deterministic.

Many executives, out of ignorance of statistical quality control, spend huge resources searching for the reasons why their processes are failing, or simply incur very high and unnecessary inspection costs. It also happens that, by not carrying out quality control on processes, product rejections and warranty claims destroy all the value that the company generated in the process, producing losses due to poor quality management.

REFERENCES

  • Statistical Quality Control, Rodrigo Mendiburu Sanabria. Distance Education Center, Universidad Católica del Norte, 2008. http://web.frm.utn.edu.ar/estadistica/TablasEstadisticas/TD4_PoissonAcumulada.pdf
  • Notes on Total Quality Management, Edward Johns, Doctor (c) in Business Studies, Autonomous University of Madrid, Spain; Master of Science in Integrated Management Systems, University of Birmingham, England; Industrial Civil Engineer, Universidad Técnica Federico Santa María; Diploma in Advanced Total Quality Management, Stockholm, Sweden.

ANNEXES

Established Assumptions:

  • To develop the report, we worked with the 7 days tabulated in the given table, since they are considered a representative sample of the case study.
  • The causes of the non-availability of computers were formulated based on experience and by consulting the experts involved in laboratory management.
  • With these statistics, we sought to identify the causes with the highest incidence in the non-availability problem; based on assumptions and expert consultation, the weights of these causes were estimated, expressed as the following percentages of occurrence:

Materials      35%
Media          25%
Manpower       20%
Methods        10%
Environment     5%
Miscellaneous   5%
Total         100%
  • Applied to the case, it was assumed that the computers available in the computer lab should not fall below 80% of the total. Translated into specification limits, there is only an upper specification limit on the fraction of unavailable computers, corresponding to USL = 0.2.
  • It was assumed that the lot size, i.e., the number of computers in the lab, is between 51 and 90.
  • For our case, a criterion of an Acceptable Quality Level of 3 was used.

Tabular data

Poisson Distribution

MIL-STD-105D