Fundamentals of Quality Control and Improvement: Solutions Manual


Library of Congress Cataloging-in-Publication Data: Mitra, Amitava. Solutions Manual to Accompany Fundamentals of Quality Control and Improvement, Third Edition.





Complete with discussion questions and a summary of key terms in each chapter, Fundamentals of Quality Control and Improvement , Third Edition is an ideal book for courses in management, technology, and engineering at the undergraduate and graduate levels. It also serves as a valuable reference for practitioners and professionals who would like to extend their knowledge of the subject.

A statistical approach to the principles of quality control and management: incorporating modern ideas, methods, and philosophies of quality management, Fundamentals of Quality Control and Improvement, Third Edition presents a quantitative approach to management-oriented techniques and enforces the integration of statistical concepts into quality assurance methods.

The Third Edition also features: presentation of acceptance sampling and reliability principles; coverage of ISO standards; profiles of past Malcolm Baldrige National Quality Award winners, which illustrate examples of best business practices; strong emphasis on process control and identification of remedial actions; integration of service sector examples; implementation of MINITAB software in applications found throughout the book, as well as in the additional data sets that are available via the related Web site; and new and revised exercises at the end of most chapters.

Thus, the nearest specification limit is 4.


The proportion nonconforming outside the farthest specification limit is negligible, yielding a total nonconformance rate of 3. As a philosophy, the six sigma concept is embraced by senior management as an ideology to promote the concept of continuous quality improvement.
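The tail computation can be sketched numerically. The spec-limit distances below are an assumption: the classic six sigma model places the process mean 1.5σ off target, so the nearest specification limit is 4.5σ from the shifted mean and the farthest 7.5σ away.

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Assumption: classic six sigma model with a 1.5-sigma mean shift,
# so the nearest spec limit sits 4.5 sigma from the shifted mean and
# the farthest 7.5 sigma away (its tail is negligible)
p_near = 1.0 - normal_cdf(4.5)
p_far = normal_cdf(-7.5)
ppm = (p_near + p_far) * 1e6
print(round(ppm, 1))  # about 3.4 ppm
```

Under these assumptions the nonconformance rate comes out to roughly 3.4 parts per million, essentially all of it from the nearest limit.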

It is a strategic business initiative, in this context. When six sigma is considered as a methodology, it comprises the phases of define, measure, analyze, improve, and control, with various tools that could be utilized in each phase.

In the define phase, attributes critical to quality, delivery, or cost are identified. Metrics that capture process performance are of interest in the measure phase. In the analyze phase, the impact of the selected factors on the output variable is investigated through data analytic procedures.

The improve phase consists of determining the levels of the input factors needed to achieve a desired level of the output variable. Finally, methods to sustain the gains identified in the improve phase are used in the control phase.

Primarily, statistical process control methods are utilized. From several alternative proposals, a weighted index is developed based on the priority ranking of needs and the relative impact of each alternative on meeting each customer need. In a resource-scarce environment, proposed alternatives are selected based on the computed weighted index.

QFD reduces product development cycle time through consideration of design aspects along with manufacturing feasibility. It also cuts down on product development costs through consideration, in the design phase, of the myriad issues that deal with the technical ability of the company relative to competitors.

There are some key ingredients necessary for the success of QFD. First, a significant commitment of time has to be devoted to completing the QFD process. Second, the use of cross-functional teams is a necessary mode for the information gathering required by the QFD process.

Learning and growth perspective: Some diagnostic measures are exposure to recent developments in software and technical knowledge in the field.

Some strategic outcome measures are retention or attraction of skilled technical people to the company and employee satisfaction. Some strategic performance measures could be policies to empower employees that support innovation and prevalent reward structure.

Internal processes perspective: Some diagnostic measures are errors per thousand lines of code, type of coding errors and their severity, and delay in responding to customer requests. Some strategic outcome measures are level of service to the client (who could be internal or external), and time to develop software and implement it.

Customer perspective: Some diagnostic measures are time to solve and respond to customer problems and number of monthly customer complaints. Some strategic outcome measures are cost of providing service, reliability of operations, and degree of customer satisfaction.

Some strategic performance measures could be cost of subcontracting certain services, and degree of trust in relationship with vendor. Financial perspective: Some strategic outcome measures are return on investment, and market share of company.

Some strategic performance measures could be degree of investment in equipment and infrastructure, and operating expenses. Some diagnostic measures could be lack of timely feedback by nurses, and delays in admitting patients due to incorrect information provided by admitting staff. Some strategic outcome measures are degree of physician and nurse satisfaction, and number of suggestions for improvement by laboratory personnel and admissions staff.

Some strategic performance measures could be the type of reward structure for physicians or nurses, and incentive schemes for hospital staff. Some diagnostic measures are average time to process in-patients for admission, delay in processing an X-ray, and medication errors per patient-day.

Some strategic outcome measures are readmission rate, and length of stay for a certain diagnosis-related group. Some strategic performance measures are infection rate and blood culture contamination rate. Some diagnostic measures are time to discharge patients, and time to deliver a patient from check-in at the emergency department to a hospital bed. Some strategic outcome measures are degree of satisfaction of in-patients, and proportion of patients that would recommend the hospital to others.

Some strategic performance measures are cost of a certain surgical procedure, and treatment of patient by nursing staff. Some diagnostic measures are proportion of reimbursement requests denied by Medicare, and physician time in surgery lost due to inappropriate scheduling. Some strategic outcome measures are cost per case of in-patients with a certain diagnosis, and market share captured. Some strategic performance measures could be percentage of asset utilization of operating room capacity, and reduction in unit costs of laboratory work.

Some diagnostic measures could be long inspection time of sampled product, and long set-up time of equipment due to lack of skill. Some strategic outcome measures could be the number of process improvement suggestions received from employees, and the reward structure to promote an environment of continuous improvement. Some strategic performance measures could be spending on employee professional development, and the type of recognition system, beyond pay, available to staff. Internal processes: some strategic outcome measures are time to develop a new process based on a new product innovation, and total cost per batch (1 million) of microchips.

Some strategic performance measures could be expenditures in research and development of processes, and unit procurement costs from vendor. Some diagnostic measures are response time to meet changes in customer orders, and number of shipments rejected by the customer.

Some strategic outcome measures are proportion of customers complimentary of the company, and increase in annual referrals by customers. Some strategic performance measures could be time to serve customers with a certain minimum volume of orders, and degree of discount offered to customers with high volume of orders.

Some diagnostic measures are overhead costs per batch and cost of machine downtime per month.

Some strategic outcome measures are return on investment and growth in market share. Some strategic performance measures could be percentage of equipment utilization and amount of investment to upgrade equipment.

Some diagnostic measures could be the proportion of proposals rejected due to lack of technical competency of staff, and time to develop a proposal for consideration by senior management. Some strategic outcome measures are degree of satisfaction of technical staff, and revenue per employee-hour. Some strategic performance measures are incentive plans for scientists, and number of successful proposals annually. Some diagnostic measures are time to develop a batch of prototypes, and throughput rate of a prototype.

Some strategic outcome measures are cost per batch of tablets, and proportion nonconforming (ppm) of product. Some strategic performance measures are proportion nonconforming (ppm) of shipments from the vendor, and unit overhead costs per batch. Some diagnostic measures are time to conduct a survey of a proposed drug, and lead time to meet customer order changes. Some strategic outcome measures are percentage of satisfied scientists and engineers, and proportion of senior personnel retained.

A strategic performance measure could be time to meet a competitor's deadline for a new product development. Some diagnostic measures could be cost of product that is scrapped due to not meeting desired specifications, and overhead costs per batch. Some strategic outcome measures could be profit margin, and sales growth. Some strategic performance measures are cost savings due to equipment changes, and investment in equipment.

For the airline industry, customer requirements could be as follows: price of ticket, convenience of schedule, delay in arrival, lost baggage, and in-flight service, among others. Based on a customer survey, priority ratings may be applied to the above requirements. Some technical descriptors could be as follows: select cities to serve based on competition and demand, type and size of fleet, baggage identification (barcoding) and handling procedures, training of in-flight attendants, and provision of desirable meals.

A possible vision statement could be, "Become the leader in the logistics industry, internationally." In a balanced scorecard analysis, under the learning and growth perspective, possible diagnostic measures could be the proportion of time problems or delays occur due to failure of information technology (IT) systems, and the proportion of failures due to lack of core competencies among staff.

Some strategic outcome measures are degree of employee satisfaction, and retention of personnel with core skills. Some strategic performance measures could be the degree of access to strategic information, and amount invested in professional development of technical staff.

Under the internal-process perspective, possible diagnostic measures are the proportion of deliveries delayed due to lack of facilities (truck, ship, or rail) and the proportion of shipments damaged due to mishandling between one form of transport (ship) and another (rail).

Some possible strategic outcome measures are operating efficiency of the available modes of transportation (ship, rail, truck), and cost per unit volume of shipment in each mode of transportation.

Some strategic performance measures could be the absenteeism rate of employees directly associated with the handling of goods, and the degree of investment in new technology. Under the customer perspective, some diagnostic measures are response time to meet a customer request, and time to process a purchase order and payment for a customer.

Some strategic outcome measures are cost of providing a follow-up service, and proportion of satisfied customers. Some strategic performance measures could be the unit cost of subcontracting a segment of the total transportation requirement, and the degree of dependability of the subcontractor with respect to contractual obligations. In the financial perspective, some diagnostic measures could be costs incurred for idle storage of goods due to lack of available transport media, and the proportion of costs due to breakdown of equipment.

Some strategic outcome measures are return on investment, net revenue per unit volume shipped, and total market share. Some possible strategic performance measures are degree of expenses due to rental equipment, and revenue by customer categories based on volume of shipment. Some possible customer requirements and their importance ratings (1-5) are shown below:

1. No delay in shipments (5)
2. Shipment not delivered prior to due date (2)
3. Ease of placing order (3)
4. On-line tracking capability (4)
5. Ease of using credit to place order (3)

Apart from the most important requirement (importance rating of 5), the customer also prefers not to receive the shipment ahead of the promised delivery date, which could be based on following just-in-time (JIT) criteria. Otherwise, there will be a holding or carrying cost for this inventory if it is delivered prior to the chosen date.

The importance assigned to this requirement (rating of 2) is not as high as that for late shipments. Further, ease of placing an order is considered moderately important (rating of 3) to the customer. The preference for an on-line tracking capability, so that the customer may determine the exact location of the shipment in real time, is quite important, with a rating of 4.

Additionally, the ease of using customer-available credit to place the order is also of moderate importance. Some possible means (technical descriptors) to achieve the customer requirements are shown in the accompanying table. Six technical descriptors are listed; for each, the degree of its impact on meeting the five customer requirements is also listed.

The notation used is as follows: a pair such as (1, 5) next to a certain technical descriptor indicates that the descriptor has a strong relationship (strength 5) in impacting customer requirement 1, which is no delay in shipments.

The remainder of the QFD analysis may be completed through assignment of the appropriate numbers.
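That assignment can be sketched as a weighted-score computation: each technical descriptor's importance is the sum, over the requirements it affects, of the customer importance rating times the relationship strength. The importance ratings below are the ones listed earlier; the relationship strengths are illustrative assumptions.

```python
# Customer importance ratings from the list above (requirement -> rating)
ratings = {1: 5, 2: 2, 3: 3, 4: 4, 5: 3}

# Technical descriptor -> {requirement: relationship strength}; the
# strengths (1 weak, 3 moderate, 5 strong) are illustrative assumptions
relationships = {
    "backup fleet via subcontractor": {1: 5, 2: 3},
    "recruit part-time personnel": {1: 5, 2: 1},
    "qualified order-processing staff": {1: 3, 2: 1, 3: 3},
}

scores = {d: sum(ratings[r] * s for r, s in rel.items())
          for d, rel in relationships.items()}
for d in sorted(scores, key=scores.get, reverse=True):
    print(d, scores[d])
```

Descriptors with the highest weighted scores are the strongest candidates for design attention.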

Some technical descriptors, with their (requirement, strength) impacts, could be as follows:

1. Back-up fleet of carriers through a subcontractor: (1, 5); (2, 3)
2. Recruit part-time personnel: (1, 5); (2, 1)
3. Qualified staff to process orders promptly: (1, 3); (2, 1); (3, 3)

For a company that develops microchips, technological development is one of the major factors that impacts the process of benchmarking. Innovations in chip design are taking place at a rapid pace, and the steps of benchmarking should account for this.


Top management has the responsibility of ensuring that the design of microchips remains current and competitive. Quality audits are of three types: System audit - this is the most extensive and inclusive type.

Here, policies, procedures, and operating instructions are evaluated with respect to a reference standard. Further, activities and operations undertaken to accomplish the desired quality objectives are also evaluated.

Hence, conformance of quality management standards and their implementation to specified norms are the objectives of such an audit. Such an audit may be used to evaluate a potential vendor. Process audit - this involves an evaluation of selected processes in the organization. These processes are examined and compared to specified standards. While not as extensive as the system audit, such an audit is used to improve processes that have been identified (perhaps through Pareto analysis) as problem areas.

Product audit - this involves an assessment of the final product or service to meet or exceed customer requirements. A product audit could determine the effectiveness of a management control system.

It is not part of the inspection process. For a company producing multiple products, those that perform poorly could be candidates for such an audit. For a financial institution that is considering outsourcing its information-technology-related services, some criteria to consider are as follows: error-free performance (reliability) in recording transactions; ease of access to updated information by identified personnel; backup procedures to store and retrieve information so that, potentially, there is no loss of information; and ease of obtaining summary information as desired by the financial institution (say, by type of transaction, account number, etc.).

The financial institution should next assign weights to the selected criteria, based on its preference. The weights could be assigned on a point scale. Following this, each vendor could be rated on a relative scale (1 to 5), with 1 representing least desirable performance and 5 representing most desirable performance. Finally, a weighted score could be obtained for each vendor, where the weight is multiplied by the rating for each performance measure and the products are summed.
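A minimal sketch of this weighted scoring; the criterion names, weights, and vendor ratings below are all hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical weights (summing to 100) over the selection criteria, and
# illustrative vendor ratings on a 1-5 scale
weights = {"reliability": 30, "access": 20, "backup": 25, "reporting": 25}
vendors = {
    "Vendor A": {"reliability": 5, "access": 3, "backup": 4, "reporting": 4},
    "Vendor B": {"reliability": 4, "access": 5, "backup": 3, "reporting": 3},
}

def weighted_score(ratings):
    # multiply each weight by the vendor's rating and sum the products
    return sum(weights[c] * ratings[c] for c in weights)

best = max(vendors, key=lambda v: weighted_score(vendors[v]))
print(best, weighted_score(vendors[best]))
```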

The vendor with the highest weighted score is a candidate for selection. Development of drugs for the treatment of Alzheimer's disease using nanotechnology is an area of ongoing research. Nanotechnology may assist in delivering an appropriate drug to the appropriate cells within the human body without causing major side effects. Innovation and time-based competition also play an important role, since identification of cause-and-effect relationships and development of appropriate drugs are not necessarily known with certainty.

Such processes usually require a good deal of experimentation and testing. The mass-transit system in a large city is expected to encounter a projected increase in demand and is considering possible outsourcing. A possible mission: provide adequate capacity to satisfy the projected increase in demand while ensuring customer satisfaction for those who use the mass-transit system. Some objectives could be: increase capacity at an annual rate that exceeds the rate of growth in demand; ensure travel times meet customer expectations; reduce delays through efficient scheduling; and provide accident-free service.

Some operational measures could be: reliability of operations (as measured by the percentage of trips that are on time); frequency or size of operations (the number of customers transported between two locations per unit time); average time to transport customers between two locations, by hour of the day (peak hours, for example, being in the morning); percentage of trips that are late; total customers transported daily on a weekday; and cost of the system.

Some strategic measures are: percentage of customers satisfied (outcome measure); return on equity (outcome measure); and percentage utilization of capacity (performance measure).

There are several benefits of vendor certification. When the vendor is certified such that it consistently meets or exceeds the buyer's criteria, the need for routine incoming inspection is eliminated. Additionally, a strategic partnership is created between the vendor and the buyer. Changes in customer needs that require changes in the product will necessitate changes in the raw material or components provided by the vendor.

Joint effort between the buyer and the vendor helps to reduce lead time and cost, and to improve quality. Typical phases of vendor certification are approved vendor, preferred vendor, and certified vendor.

Initially, the process is documented and performance measures are defined and selected. Roles and responsibilities of the involved personnel are clearly delineated. A quality system survey of the vendor is performed. Vendors that meet acceptable performance standards on defined criteria set by the buyer are identified as approved vendors. A vendor with preferred status may be required to have a process control mechanism in place that shows a focus on problem prevention as opposed to problem detection.

A certified vendor, the next level, is a vendor that not only meets or exceeds the performance measures set by the buyer but also has an organizational quality culture in consonance with that of the buyer. In this phase, the vendor and buyer are partners with a common goal.

Both act harmoniously to meet or exceed quality, cost, and delivery goals. These companies may procure raw material or components from vendors that may be located in several countries.

This can be accomplished through international certification standards, such as those developed by the International Organization for Standardization (ISO). In the U.S., for example, the major automakers jointly adopted the QS 9000 standard; suppliers who are QS 9000 certified may supply to all three automakers without having to face the burden of demonstrating their quality separately to each buyer. A cause-and-effect diagram for automobile accidents is shown in the accompanying figure. From the calculated risk priority number (RPN) values, the highest value is associated with transmission failure due to broken belts.

Some action plans need to be designed to detect such broken (or about-to-break) belts during routine or preventive maintenance. Rating scores on severity, occurrence, and detection are assigned, and the risk priority number (RPN) is calculated. From the calculated RPN values, the highest value is associated with being emotionally unfit due to personal issues, followed by being emotionally unfit due to a disturbed work environment.
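A quick sketch of the RPN computation, with each factor rated on a 1-10 scale; the failure modes and scores below are illustrative assumptions, not the manual's values.

```python
# FMEA sketch: RPN = severity x occurrence x detection, each rated 1-10
# (all scores below are illustrative assumptions)
failure_modes = [
    ("transmission failure - broken belt", 8, 4, 7),
    ("brake pad wear", 7, 5, 3),
    ("headlight burnout", 3, 6, 2),
]

rpns = {name: s * o * d for name, s, o, d in failure_modes}
highest = max(rpns, key=rpns.get)
print(highest, rpns[highest])
```

The failure mode with the highest RPN is the first priority for remedial action.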

Detection of such causes is difficult, especially for personal issues. This leads to high RPN values, which draw attention to the need for action items to address these issues. For example, if there is only one version of the product (with no customer options), we could collect historical data on demand by geographical region (or state), as appropriate.

A Pareto chart could depict such demand by region to demonstrate areas of high demand in decreasing order. If we are able to identify factors or causes that influence product demand, a cause-and-effect diagram could perhaps be used. Possible causes might be: population of the region, average disposable income, unemployment rate, competitors in the region, number of stores, ease of ordering the product, and price.

For each cause, certain subcauses could be listed.

For example, under competitors in the region, subcauses could be warranty offered, unit price, and lead time to obtain the product. A possible flow chart for visiting the physician's office for a routine procedure is shown in the accompanying figure. There are several reasons for the failure of total quality management (TQM) in organizations.

First is the lack of management commitment. While initial enthusiasm may be displayed by management for the adoption of TQM, allocation of resources is vital; without it, TQM adoption fails. Second, lack of a common and consistent company mission that is embraced by all parts of the organization could be a reason. Often, the goals of units within the organization are not coherent with the overall company goals.

Third, information may not be shared across units, even though sharing of information may lead to better decisions. Fourth, lack of cross-functional teams to address issues that impact the company could be a reason.

Managers may recommend action plans that do not incorporate suggestions from the variety of sub-units that are affected by the decisions. Vendors certified through such established standards, using third-party auditors, do not need to go through further audits by their customers. Scatterplot: life insurance coverage vs. disposable income. It seems that, with an increase in disposable income, life insurance coverage increases non-linearly.

A flow chart is shown in the accompanying figure. Accomplishing registration to ISO standards is significantly different from an audit process. Such registration ensures that an acceptable quality management system is in place. The system includes processes, products, information systems, documentation, management team, quality culture, and traceability, among other items.

Audits, on the other hand, may involve a system audit, process audit, or product audit. They may be internal or external. Audits usually identify deficient areas - they do not necessarily rectify the problem. Development of remedial actions, based on the audit outcomes, is a task for management. Only on implementation of such remedial actions will the benefits be derived. In a global economy, many companies are multinational with branches in several countries.

With ISO standards having a universal impact, registration to such standards creates a seal of acceptance in all of these locations. The customer can trust in the quality management system that exists within the organization, if it is ISO certified. Hence, the organization does not have to demonstrate its competence by going through other audits. Incoming inspection can be significantly reduced.

Three business categories exist: manufacturing, service, and small business. Nonprofit (public, private, and government) organizations are eligible in a separate category. Two other award categories also exist: education and health care. It is not a certification process to standards, as with ISO standards. The general idea behind the award is to motivate U.S. organizations, and the objectives are to foster competitiveness.

The award winners are expected to share information on their best practices so that other organizations may adopt or benefit from such knowledge. Preparation for filing for the award stimulates a companywide quality effort. Based on the matrix plot, to achieve low levels of proportion nonconforming, high levels of temperature, low levels of pressure, a high proportion of catalyst, and low levels of acidity (pH value) are desirable.

Similarly, a contour plot of proportion nonconforming for various combinations of levels of acidity and proportion of catalyst is shown in the accompanying figure.

Thus, an existing drug that is at least as good as the new drug is recommended by the agency to be replaced with the new drug. A type II error occurs when a null hypothesis that is not true is not rejected. In this situation, the federal agency would not recommend the new drug even though it increases average life. In the type I error situation, a proven drug would be replaced.

It seems that the agency should minimize this risk. In the type II error situation, it would lead to a lost opportunity. Hence, for drug companies, it might add to their research and development expenditures, which may lead to an increased unit price of the drug to consumers. Also, samples are chosen randomly and independently. The assumption made about the variability of patient life for each drug will influence the precise approach to be used.

For example, 11 am to 1 pm may have a higher arrival rate than 8 am to 10 am. In this case, one could select only a specified 2-hour period to model. Also, people may flock to the store based on crowds, thereby being influenced by others. Observe the number of customers who enter the store during that period, and repeat this for several days. Based on the data collected, estimate the mean number of arrivals. Alternatively, based on locations with similar characteristics where there are existing stores, estimate the mean number of arrivals in a given time period.

This is the difference between the average of the measured values and the true value. Precision refers to the variation in the measured values.

Accuracy is controlled through calibration. Precision is a function of the measuring instrument; purchasing an instrument with higher precision is an alternative. A trimmed mean is preferred when there are outliers in the data that are believed to occur due to special causes. Assumptions are normality of the distribution of delivery times, and samples chosen randomly and independently.
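The effect of trimming can be sketched as follows; the delivery-time data, including the single outlier, are hypothetical.

```python
from statistics import mean

def trimmed_mean(data, frac=0.1):
    """Drop frac of the observations from each tail, then average."""
    data = sorted(data)
    k = int(len(data) * frac)
    return mean(data[k:len(data) - k]) if k else mean(data)

# Hypothetical delivery times; 95 is an outlier from a special cause
times = [12, 13, 13, 14, 14, 15, 15, 16, 16, 95]
print(mean(times), trimmed_mean(times))
```

The ordinary mean is pulled far above the bulk of the data by the outlier, while the trimmed mean stays representative.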

Assumptions are normality of distribution of loan processing times, and samples are chosen randomly and independently. Assumptions are normality of distribution of contract amounts, and samples are chosen randomly and independently. Response times are normally distributed, with random and independent samples chosen.

Depending on the assumptions on the variance of response times, the appropriate formula will have to be used. The sample size is large and random samples are chosen. The probability of success on any trial remains constant in a binomial distribution, but not so in a hypergeometric one. A type II error implies concluding that the mean delivery time is 5 or more days when in fact it is less.
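The binomial/hypergeometric distinction noted above can be illustrated numerically; the lot size, number of defectives, and sample size below are illustrative assumptions.

```python
from math import comb

# Lot of N = 20 with D = 5 defectives; sample of n = 5; want P(x = 2)
# (all numbers are illustrative assumptions)
N, D, n, x = 20, 5, 5, 2
p = D / N  # fraction defective in the lot

binom = comb(n, x) * p**x * (1 - p)**(n - x)           # sampling with replacement
hyper = comb(D, x) * comb(N - D, n - x) / comb(N, n)   # sampling without replacement
print(binom, hyper)
```

For a finite lot sampled without replacement, the success probability changes trial to trial, so the two answers differ; they converge as the lot size grows relative to the sample.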

In the first situation, the postal service would be advertising something that it cannot deliver. It may lead to dissatisfied customers. In the second situation, the postal service may miss an opportunity to promote its services. The type I error could be more serious as regards the customer. In the first situation, the institution would be raising its customers' expectations when it may not be able to meet them.

It may result in dissatisfied customers. In the second situation, the institution may miss an opportunity to promote itself. In the first situation, the firm falsely over-projects its customer contracts.

If contracts are subject to federal or state restrictions, it could impact them. In the second situation, the firm is under-selling itself.

A type I error could be serious under the guidelines of truth-in-advertising. A type II error, in this case, could hurt the firm's chances of obtaining new contracts.

A type II error implies concluding that the company has not improved its efficiency, when it has. A type I error here could be serious under the guidelines of truth-in-advertising.

A type II error here could lead to missed opportunities by failing to publicize its efficient operations. A type I error could be serious in the context of guidelines in truth-in-advertising. A type II error here could lead to missed opportunities. The distribution of the price of homes is usually skewed to the right.

This is because there are some homes that are very high-priced compared to the majority. For such distributions, the median price is a better representative since it is not affected as much as the mean by outliers. In this case, significant costs will be incurred in developing a new program that is not necessarily better than the current one.
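The mean-versus-median point can be illustrated with hypothetical home prices (in $1000s), one of which is a very high-priced outlier.

```python
from statistics import mean, median

# Hypothetical home prices in $1000s; 1500 is a luxury-home outlier
prices = [180, 195, 210, 220, 240, 260, 285, 1500]
print(mean(prices), median(prices))  # mean is pulled up by the outlier
```

The mean is dragged well above every typical home's price, while the median remains near the center of the majority of the data.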

A type II error would occur when we do not infer that the new program is more effective, when it really is.

In this case, we may lose the opportunity of adopting the new program that is more effective. Data on the above variables could be collected and a regression or analysis of variance procedure could be used to determine significant factors.
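A least-squares fit for one such candidate factor can be sketched as follows; the income and demand figures are purely illustrative assumptions.

```python
# Hypothetical data relating one candidate factor (regional disposable
# income, in $1000s) to product demand (units)
income = [30, 40, 50, 60, 70]
demand = [110, 150, 190, 230, 270]

n = len(income)
mx, my = sum(income) / n, sum(demand) / n
# Ordinary least-squares slope and intercept
b1 = (sum((x - mx) * (y - my) for x, y in zip(income, demand))
      / sum((x - mx) ** 2 for x in income))
b0 = my - b1 * mx
print(b0, b1)  # intercept and slope of the fitted line
```

A significant slope (judged via its t-statistic in a full regression analysis) would identify income as a significant factor influencing demand.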

Hypotheses are: For employees, a similar set of hypotheses could be tested. For shareholders, a measure of effectiveness could be the unit share price or the rate of return on investment. The hypotheses are: Let X be the annual premium per computer for the university to be indifferent to purchasing the service contract. While the binomial distribution is appropriate, the value obtained using the Poisson distribution is somewhat less, possibly because n is not sufficiently large.
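The binomial-versus-Poisson comparison above can be sketched numerically; n and p here are assumed values for illustration, not the problem's figures:

```python
import math

# Assumed illustrative values, not the problem's figures
n, p = 20, 0.05
lam = n * p  # Poisson parameter lambda = n*p

def binom_pmf(k):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Compare P(X = 0) under both models: close, but not identical,
# because n is only moderately large
print(binom_pmf(0), poisson_pmf(0))
```

As n grows with np held fixed, the two values converge.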

We want to find: If the time to failure for each switch is exponentially distributed with failure rate λ, the number of failures within a time t follows a Poisson distribution with parameter λt. We may use the Poisson distribution, which models the number of failures, or the gamma distribution, which models the time to failure.

So, the minimum number of additional standby switches necessary is 2. So, the smallest value of a, the number of standby units, that satisfies the criterion is a = 2. Ninety percent of intervals so constructed will enclose the true average assembly time. Ninety-nine percent of intervals so constructed will enclose the true average assembly time. The sample size is large enough that the distribution of the sample mean is normal, by the Central Limit Theorem.
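The standby-unit calculation follows this general pattern: find the smallest a such that the Poisson probability of at most a failures meets a reliability target. The rate and target below are assumed for illustration and do not reproduce the problem's numbers:

```python
import math

# Assumed values for illustration: failures in time t ~ Poisson(lam_t)
lam_t = 0.5    # assumed lambda * t
target = 0.98  # assumed required probability that the standbys suffice

def poisson_cdf(a, mu):
    # P(N <= a) for N ~ Poisson(mu)
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(a + 1))

a = 0
while poisson_cdf(a, lam_t) < target:
    a += 1
print(a)  # smallest number of standby units meeting the target
```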

However, if the population distribution is normal, the distribution of the sample mean is also normal for any sample size. Ninety-five percent of confidence intervals so constructed will enclose the true mean dissolved oxygen level. The population distribution is assumed to be normal. The population standard deviations σ1 and σ2 are unknown but assumed to be equal. The pooled estimate of the common variance is then computed. The critical values are F.
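A confidence interval of the kind interpreted above can be sketched as follows; the sample data are invented, and a z-value is used for simplicity (a t-value would be more appropriate for a small sample with unknown standard deviation):

```python
import math
import statistics

# Invented sample of assembly times (minutes) -- not data from the text
times = [4.2, 4.8, 5.1, 4.5, 4.9, 5.3, 4.7, 5.0, 4.6, 4.4]
n = len(times)
xbar = statistics.mean(times)
s = statistics.stdev(times)

z = 1.96  # approximate z-value for a 95% confidence level
half_width = z * s / math.sqrt(n)
ci = (xbar - half_width, xbar + half_width)
print(ci)
```

Ninety-five percent of intervals constructed this way would enclose the true mean.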

Hence we do not reject the hypothesis of equal variances of the two populations. From the F-tables, F. The assumptions necessary for this test are that the samples are random and independent and that each population has a normal distribution. The critical value of t is −t. Samples are random and the population distribution is normal. The test statistic is 9 1.

So, we cannot conclude that the process variance exceeds 0. The test statistic is: The pooled estimate is: Since the test statistic does not fall in the rejection region, we conclude that there is no difference in the output of the machines as regards the proportion of nonconforming parts. Similarly, we conclude that the variance of the diameters does not exceed 0.

Since the test statistic of 4. Random samples are chosen from each population. The critical value is F. Since the test statistic of 7. If we had chosen to write the hypothesis as: Since the test statistic of 0. However, because the sample average delay time of the first 4. The hypotheses to be tested are: For a chosen level of significance α of. Thus, we cannot conclude that the mean delay of the first vendor exceeds that of the second.

So, on the basis of the test comparing the variances of delay time, one would select the first vendor. Systolic blood pressure before administration of drug: So the distribution is nearly symmetric about the mean and less peaked than the normal. The mean and standard deviation are both lower than their corresponding values before administration of the drug.

We will test, later, whether there has been a significant decrease in the mean value. The p-value states that, if the null hypothesis is true, the chances of obtaining such a sample result are small. Hence, we reject H0. So, we reject H0 and conclude that the drug was effective in reducing the average cholesterol level. The distribution is skewed to the right and somewhat more peaked than the normal.

Testing H0: Processing time prior to changes: The distribution is fairly symmetric about the mean but flatter than the normal distribution. The distribution is skewed to the left and flatter than the normal distribution. Hence, the decision depends on the chosen level of significance α.

Since the test statistic of - 2. This means, if the null hypothesis is true, the chances of getting a sample average of 8.

Test statistic is: The assumptions made are that the distributions of processing times, both before and after the process changes, are normal. Also, random and independent samples are chosen from each process. Let the premium to be charged be denoted by p. The probability distribution of X, the net amount retained by the company, is given as follows: Assuming a Poisson distribution for the occurrence of errors. Confidence interval or hypothesis testing on the population mean when the population standard deviation is not known.

Other parametric tests may include hypothesis testing on the difference in the means of two populations when the population variances are not known, hypothesis testing on a population variance, or tests comparing two population variances.

All of these tests require the assumption of normality of the distribution of the characteristic. If the assumption is not satisfied, transformations may be considered. These include power transformations or Johnson's transformation. Stratified random sampling with a proportional allocation scheme. Chi-squared test for independence of classifications in contingency tables. Chi-squared test on cell probabilities. The various parameters and sample size are related. For example, for a given type I error and power, as the degree of difference that one wishes to detect in a parameter decreases, the sample size increases and vice versa.
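The chi-squared test for independence of classifications mentioned above can be sketched with an assumed 2×2 contingency table; the counts are invented for illustration:

```python
# Chi-squared test of independence for an assumed 2x2 contingency table
table = [[30, 20],
         [10, 40]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

# Sum of (observed - expected)^2 / expected over all cells
chi_sq = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi_sq += (observed - expected) ** 2 / expected

df = (len(table) - 1) * (len(table[0]) - 1)
print(chi_sq, df)  # compare chi_sq with the critical value at df degrees of freedom
```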

For a given difference in the parameter, the type I error can be reduced and the power increased by increasing the sample size. H0: No billing errors; Ha: Billing errors.

A type I error implies concluding that there are billing errors in a customer account when there are none. This could result in a wasted effort and cost on the part of auditors to detect billing errors. A type II error implies concluding that there are no billing errors when, in fact, they exist. Here, customers who find errors in their bills could be very dissatisfied, leading to loss of future market share. A stratified random sampling procedure using a proportional allocation scheme could be chosen.

The strata would be defined by the different time periods in which the traffic rate varies. Distribution is skewed to the right. Assumptions necessary are that the distribution of waiting time is normal, and random, independent samples are selected.

A normality test using the Anderson-Darling test in Minitab is conducted on the natural logarithm of waiting time. The p-value is 0. The mean and standard deviation of the transformed variable are 3. The distribution is skewed to the right. A box plot is shown in Figure. Note the long top whisker and the relatively short bottom whisker, indicating a right-tailed distribution.

Run chart of pH values 1. The following p-values are indicated: Figure shows the run chart. For example, for batch 3 observations, the pH values are clustered around a common value. Since mixing of the ingredients takes place by batches, this procedure is appropriate.

So the clustering effect is significant. So we do not reject the null hypothesis of normality. The test statistic is obtained as: Using last year's data, we calculate the proportion of employees that preferred each plan. At least one pi differs from the hypothesized value. Sales is independent of advertising technique. Sales is not independent of advertising technique. Overall satisfaction rating is independent of response speed rating.

Overall satisfaction rating is not independent of response speed rating. Cramer's index of association: The conservative estimate of 0. This yields the required sample size. Also, the standard error of the mean for the cluster sample exceeds that of the stratified sample mean.
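Cramer's index of association mentioned above is derived from the chi-squared statistic; the values below are assumed for illustration:

```python
import math

# Cramer's V from a chi-squared statistic; values assumed for illustration
chi_sq = 16.667  # assumed chi-squared statistic
n = 100          # total number of observations
k = 2            # smaller of (number of rows, number of columns)

cramers_v = math.sqrt(chi_sq / (n * (k - 1)))
print(cramers_v)  # 0 = no association, 1 = perfect association
```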

Benefits include knowing when to take corrective action, the type of remedial action necessary, when to leave a process alone, information on process capability, and a benchmark for quality improvement. Special causes are not inherent in the process. Examples are an inexperienced operator, or poor-quality raw material and components. Common causes are part of the system.

They cannot be totally eliminated. Examples are variations in processing time between qualified operators, or variation in quality within a batch received from a qualified supplier. A normal distribution of the quality characteristic being monitored (for example, the average strength of a cord) is assumed. For a normal distribution, control limits placed at 3 standard deviations from the mean ensure that about 99.73% of the plotted values will fall within the limits when the process is in control. This implies that very few false alarms will occur.

A type I error occurs when we infer that a process is out of control when it is really in control. A type II error occurs when we infer that a process is in control when it is really out of control. The placement of the control limits influences these two errors. As the control limits are placed further out from the center line, the probability of a type I error decreases, but the probability of a type II error increases, when all other conditions remain the same, and vice versa.
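The effect of control-limit placement on the type I error can be checked directly; this sketch computes the false-alarm probability for two-sigma and three-sigma limits on a normally distributed characteristic:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# False-alarm (type I error) probability for k-sigma control limits
# when the plotted statistic is normal and the process is in control
alpha = {k: 2 * (1 - normal_cdf(k)) for k in (2, 3)}
print(alpha)
```

Widening the limits from two to three sigma cuts the false-alarm rate from about 4.6% to about 0.27%, at the cost of slower detection of real shifts.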

An increase in the sample size may lead to reducing both errors. Warning limits are those placed at 2 standard deviations from the centerline. These limits serve as an alert to the user that the process may be going out of control.

The operating characteristic (OC) curve associated with a control chart indicates the chart's ability to detect changes in process parameters. It is a measure of the goodness of the chart: its ability to detect changes in the process parameters when such changes occur.

A typical OC curve for a control chart for the mean will be a graph of the probability of non-detection on the vertical axis versus the process mean on the horizontal axis. As the process mean deviates more from the hypothesized or current value, the probability of non-detection should decrease.

The discriminatory power of the OC curve may be improved by increasing the sample size. The average run length ARL is a measure of goodness of the control chart and represents the number of samples, on average, required to detect an out-of-control signal.

For a process in control, the ARL should be high, thus minimizing the number of false alarms. For a process out-of-control, the ARL should be small indicating the sensitivity of the chart. As the degree of shift from the in-control process parameter value increases, the ARL should decrease. Alternatively, from predetermined ARL graphs, the sample size necessary to achieve a desired ARL, for a certain degree of shift in the process parameter, may be determined.
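The link between the ARL and the per-subgroup detection probability can be sketched for an X-bar chart with three-sigma limits; the shift is expressed in multiples of the standard error of the subgroup mean:

```python
import math

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def arl(shift):
    # Probability a subgroup mean plots outside the 3-sigma limits after
    # the mean shifts by `shift` standard errors; the ARL is its reciprocal.
    p_detect = (1 - normal_cdf(3 - shift)) + normal_cdf(-3 - shift)
    return 1 / p_detect

print(round(arl(0.0)))  # in-control ARL (high: few false alarms)
print(round(arl(1.0)))  # ARL after a one-standard-error shift (much lower)
```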

The ARL is linked to the probability of detection of an out-of-control signal. The selection of rational samples or subgroups hinges on the concept that samples should be so selected such that the variation within a sample is due to only common causes, representing the inherent variation in the process.

Further, samples should be selected such that the variation between samples is able to capture any special causes that prevail. Use of this concept of rational samples is important in the total quality systems approach, since the basic premise of setting up the control limits depends on it.

Hence, the variability within samples is used to estimate the inherent variation that subsequently impacts the control limits. Rule 1 - A single point plots outside the control limits.


Rule 2 - Two out of 3 consecutive points plot outside the two-sigma limits on the same side of the centerline. Rule 3 - Four out of 5 consecutive points fall beyond the one-sigma limit on the same side of the centerline. Rule 4 - Nine or more consecutive points fall on one side of the centerline. Rule 5 - A run of 6 or more consecutive points steadily increasing or decreasing. All of the rules are formulated on the concept that, if a process is in control, the chances of the particular event happening are quite small.

This is to provide protection against false alarms. Some reasons could be adding a new machine, a new operator, or a different vendor supplying raw material. Typical causes could be tool wear in a machining operation, or learning associated with increased time spent on the job.

Assume that three-sigma control limits are used. For Rule 2, the probability of 2 out of 3 consecutive points falling outside the two-sigma limits, on a given side of the centerline, is: The two-sigma control limits are: Using three-sigma limits, the probability of a type I error is 0.
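The Rule 2 probability described above can be computed directly for an in-control normal process: exactly 2 of 3 consecutive points beyond the two-sigma limit, or all 3, on one given side of the centerline:

```python
import math

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Probability a point falls beyond the +2-sigma limit for an in-control process
p = 1 - normal_cdf(2)

# Rule 2 on one side: C(3,2) * p^2 * (1-p) for exactly two of three points,
# plus p^3 for all three points beyond the limit
rule2 = math.comb(3, 2) * p**2 * (1 - p) + p**3
print(rule2)
```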

The standardized normal values at the control limits are: The area between the control limits is 1- 0. Hence the probability of not detecting the shift on the first subgroup plotted after the shift is 0. Next, the probability of not detecting the shift on first subgroup and detecting shift on second subgroup is 1 - 0.

So, the probability of detecting the shift by the second subgroup is 0. Hence, the probability of failing to detect the shift by the second subgroup is 1 - 0. Calculations for shifts in the process mean on one side are shown. Similar calculations will hold when the process mean decreases. The warning limits are: The area between the control limits is 0. The probability of not detecting the shift on the first subgroup and detecting shift on the second subgroup, assuming independence, is 0. Similarly, the probability of not detecting the shift on the first and second subgroup and detecting on the third subgroup is 0.
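The pattern in these calculations is geometric: if each subgroup after the shift independently signals with probability p, the probability of detecting the shift by the k-th subgroup is 1 − (1 − p)^k. A sketch with an assumed p:

```python
# Assumed per-subgroup detection probability, for illustration only
p = 0.5

# Detection by the k-th subgroup = 1 minus the chance of k straight misses
for k in (1, 2, 3):
    print(k, 1 - (1 - p) ** k)
```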

Hence, the probability of detecting the shift by the third sample drawn after the shift is 0. When the process mean changes to mm, the standardized normal values at the control limits are: So, the probability of detecting the shift on the first sample is 0. The one-sigma control limits are: Two sigma control limits are: For one-sigma limits, we need to find the probability of a type I error. The probability of an observation plotting outside these limits, if the process is in control, is 2 0.

Assuming independence of the rules, the probability of an overall type I error is: Three-sigma control limits are: Also, for Rule 3, the probability of a type I error was found to be 0. Now, the probability of detecting shift on the first subgroup is 0. Next, the probability of not detecting the shift on the first subgroup and detecting on the second subgroup is 0.

Similarly, the probability of not detecting the shift on the first two subgroups and detecting on the third subgroup is 0. Hence, the probability of detecting the change by the third subgroup is 0. Thus, the probability of failing to detect the change by the third subgroup point drawn after the change is 1 - 0.

For Rule 4, the probability of 8 consecutive points falling on one side of the center line is 0. Therefore, the probability of an overall type I error is: With the process mean at mm, using Rule 1, let us calculate the probability of a subgroup mean plotting outside the control limits.

Thus the probability of an observation plotting outside the control limits is 0. With the process mean at mm, the probability of a subgroup mean plotting above the center line is calculated as follows. The standardized normal value at the center line is: Now, using Rule 4, the probability of 8 consecutive observations plotting above the centerline, assuming independence, is 0.

The probability of an observation plotting below the centerline is 1 - 0. As before, the probability of 8 consecutive observations plotting below the centerline is 0. Hence the probability of 8 consecutive observations falling on one side of the centerline is. Assuming independence of the two rules, the probability of an out-of-control condition is: But since Rule 4 can indicate an out-of-control condition with a minimum of 8 observations, the average number of subgroups needed would be 8.

The two-sigma control limits are: The three-sigma limits are: A type II error implies concluding that the average delivery time is hours when, in fact, it differs from hours. Using only Rule 1, we demonstrate calculation of the probability of detection: The probability of not detecting the shift on the first subgroup and detecting on the second subgroup is 0.

Hence, the probability of detecting the shift by the second sample is 0. Thus, the probability of not detecting the shift by the second sample is 1 - 0. On average, if using only Rule 1, if the process mean shifts to , it will take about The three-sigma control limits are: Variables provide more information than attributes, since attributes do not show the degree of conformance.

Variables charts are usually applied at the lowest level (for example, the operator or machine level). Sample sizes are typically smaller for variables charts. The pattern of the plot may suggest the type of remedial action to take, if necessary. The cost of obtaining variables data is usually higher than that for attributes. The Pareto concept is used to select the "vital few" from the "trivial many" characteristics that may be candidates for monitoring through control charts.

The Pareto analysis could be done based on the impact to company revenue. Those characteristics that have a high impact on revenue could be selected. A variety of preliminary decisions are necessary. These involve selection of rational samples, sample size, frequency of sampling, choice of measuring instruments, and design of data recording forms as well as the type of computer software to use.

In selecting rational samples, effort must be made to minimize variation within samples such that it represents the inherent variation due to common causes that exists in the system.

Conversely, samples must be so chosen to maximize the chances of detecting differences between samples, which are likely due to special causes. Depending on the sample size, an appropriate control chart will be constructed as follows: Data on emission levels, say in ppm, could be collected. It could be that the process is in control but not capable of meeting the imposed standards. In this situation, management will need to address the common causes and identify means of process improvement.

Alternatively, a standardized control chart (Z and MR chart) could be used. For an R chart, an observation plotting below the LCL implies that the spread in the response time is small.

For the X chart, the point plotting below the LCL is rather desirable. Hence, if we can identify the special conditions that facilitated its occurrence, we should attempt to adopt them. If such is feasible, we may not delete that observation during the revision process. For the observation below the LCL on the R-chart, it implies the process variability to be small for that situation. Reducing variation is a goal for everyone. Thus, it may be worthwhile looking into conditions that led to its occurrence and emulating them in the future.

If this is feasible, we may not delete the observation during the revision process. On the R-chart, we would expect a downward trend showing a gradual decrease in the range values as learning on the job takes place.

Personnel could then be assigned to each component and trained accordingly to complete that segment in an efficient manner. Factors that impede flow from one unit to the next, as the proposal is completed, could be investigated and actions taken to minimize bottlenecks.

To reduce the variability of preparation times, tasks could be standardized to the extent possible.
