ISACA COBIT 5 – Measure (BOK V) Part 14

- January 27, 2023

**46. Process performance indices (BOK V.F.2)**

So earlier we talked about process capability indices, Cp and Cpk. Now let’s talk about process performance indices, which are Pp and Ppk. We did touch on these earlier as well, so let’s go into a little more detail here. What are the conditions? The four conditions listed on this slide were the conditions for finding Cp and Cpk. Everything remains the same here, except that for Pp and Ppk, the process performance indices, you don’t need to have the process under statistical control, because here we are looking at the long term.

Once you look at the process from the long-term perspective, there will be some shift as time passes. So this does allow for some statistical out-of-control condition, or shift or drift, during the process. The rest remains the same: the sample needs to be representative of the population, the distribution has to be normal, and the sample size must be sufficient. These conditions are required for Pp and Ppk. Looking at the formulas, these are exactly similar to what you saw for Cp and Cpk. So let’s look at one example here. When we talked about Cp, everything was the same except the sigma.

The only difference between Cp and Pp is the sigma. In Pp we are looking at the overall sigma, whereas for Cp we were looking at sigma within. Sigma within is the short-term sigma, and sigma overall is the long-term sigma. As we discussed earlier, the within sigma is estimated by a formula such as R-bar divided by d2. We will be doing this in control charts as well; there too we will estimate the sigma based on the range, and d2 is a constant that depends on the sample size, i.e. how many samples we are drawing. Sigma overall, on the other hand, is calculated the usual way: each value of x is subtracted from the mean, those differences are squared and summed, divided by n minus one, and we take the square root. That is, sigma overall = sqrt( sum(x − x̄)² / (n − 1) ).

So we will use this formula, or a calculator, to find the overall sigma, the long-term sigma. That’s the only difference between Cp and Pp. The same thing happens with Cpk and Ppk: they are similar except for the sigma. Sigma is the only difference here. So this is what we said earlier, that Cpk is calculated using the within standard deviation while Ppk uses the overall standard deviation.
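To make the sigma distinction concrete, here is a minimal Python sketch that computes Pp and Ppk using the overall (long-term) standard deviation, sqrt( sum(x − x̄)² / (n − 1) ), exactly as described above. The data values and specification limits are invented for illustration only.

```python
import math

def pp_ppk(data, usl, lsl):
    """Pp and Ppk using the overall (long-term) standard deviation."""
    n = len(data)
    mu = sum(data) / n
    # sigma overall: square root of the sum of squared deviations over (n - 1)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    pp = (usl - lsl) / (6 * sigma)
    ppk = min(usl - mu, mu - lsl) / (3 * sigma)
    return pp, ppk

# Hypothetical measurements with USL = 13, LSL = 7
pp, ppk = pp_ppk([9, 10, 11, 10, 10], usl=13, lsl=7)
```

Because this sample happens to be centered between the limits, Pp and Ppk come out equal; an off-center process would pull Ppk below Pp.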

We talked about that, and we talked about Cpk being for the short term and Ppk for the long term. Earlier, when we talked about Cp and Cpk, in the case of Cp it didn’t matter whether the process was centered or not. In Cpk it did matter, because if the process drifted towards the right in our example, your Cpk became low. Now, the only thing left is: what about the target value? This concept was given by Taguchi. Taguchi’s point was that it is not good enough to merely be within specification. Aiming only at the specification limits is what he called the goal-post mentality.

Goal-post mentality means you have an upper specification limit and a lower specification limit, and wherever you fall inside them doesn’t matter, because you would still think you are within tolerance. You are within tolerance, but you are not exactly at the target point. What Taguchi said was that there is a difference in quality between being at the target and merely being within the specification limits. Anything at the target will definitely be of better quality than something just at the border of the specification limit. So if this is the target, our attempt should be to have more and more items near the target, rather than merely within the specification limits. Let’s understand that with a simple diagram. If this is your lower specification limit, this is your upper specification limit, and this is your target, and your process is behaving like this, that is great, because the output is within the specification limits, your specification limits are wide enough, and your process is very narrow.

Most of the output is near the target. This is the great thing, so this is good. Let’s take another, similar example where we have an upper specification limit, a lower specification limit, and the target value which you want to hit. But suppose your production looks like this instead. Now if you look at Cpk, there will not be any problem, because everything is within the specification limits; your Cpk will still be okay, good. The only issue is that in case number two, none of the products will be at the target value. Most of the products are within tolerance, but none is at the target. Anything at the target gives better customer satisfaction and is a better product. That’s what Taguchi said. To account for that, another process capability index was developed, which is Cpm. And Cpm is given by this formula here, starting with USL minus LSL.

And you see that here, this term mu minus T has been added, and mu minus T is this gap, the gap between the center of your distribution and the target. The full formula is Cpm = (USL − LSL) / (6 · sqrt(sigma² + (mu − T)²)). In case number one, mu minus T was zero because mu and T are at the same point. If mu and T are at the same point, then your Cpm is the same as Cp, because the second term in the square root becomes zero, and the formula reduces to USL minus LSL divided by six sigma, which is exactly equal to Cp. But in the second case there is a gap, mu minus T. If mu minus T has some value, then the larger denominator will reduce Cpm. So here Cpm will be less than Cp, because it considers the centering at the target as well. That is Cpm.
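The two cases just described can be shown in a short Python sketch of the Cpm formula. The data, limits, and targets are made-up illustration values: in the first call the mean coincides with the target (so Cpm equals Cp), and in the second the target is shifted, enlarging the denominator and lowering Cpm.

```python
import math

def cpm(data, usl, lsl, target):
    """Taguchi's Cpm: penalises the gap between the process mean and the target."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    # The (mu - target)^2 term enlarges the denominator when off-target
    return (usl - lsl) / (6 * math.sqrt(sigma ** 2 + (mu - target) ** 2))

centered = cpm([9, 10, 11, 10, 10], usl=13, lsl=7, target=10)    # mu == T
off_target = cpm([9, 10, 11, 10, 10], usl=13, lsl=7, target=11)  # mu != T
```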

**47. General process capability studies (BOK V.F.3)**

So when it comes to process capability studies, what do you do? There are a number of steps. Different authors have given different steps; what I have tried to do here is put everything in very general terms so that I can explain the concept to you. If you have to do a process capability study, the first problem is where to start. That’s important: selecting the right process, the process which really makes the difference, which really affects customer satisfaction. You need to select those processes only; you cannot randomly select a process and start doing a process capability study. So depending on what sort of complaints you have, and on what’s important to the customer, you select a specific process which you want to control, for which you want to find the process capability indices.

Once you have done that, you need a data collection plan: what data you need to collect. In the example we discussed, you want to collect the shaft diameter: using what equipment, at what frequency, all those things need to be in your data collection plan. Then you need to do a measurement system analysis, because here you are collecting data and you want to make sure you collect the right data. Your data should not be influenced by operator-to-operator variation, and it should not be influenced by equipment-to-equipment variation. So you need a measurement system analysis to make sure that whatever data you are collecting is reliable. Then you gather data, say the 30-40 pieces which you want to take, and you need to confirm the normality of the data first, because as we said earlier, you need normal data.

For process capability indices, if the data is not normal, then there are other approaches, which we will talk about later. But for Cp/Cpk or Pp/Ppk you need data which is normally distributed, and you need to confirm that the process is in control, which you can do using control charts. We will talk about control charts later in this course. If your process is not in control, if there are special causes, then the first thing you need to do is remove those special causes before you go for process capability studies. It is important to make sure that your process is in control. Then you estimate the process capability based on the data you have collected, and if your process capability is well within the acceptable range, that’s good. If not, you need to think of improving the process. You might look at reducing the standard deviation so that the products you are producing are consistent.

Or you might even look at the tolerances: whether your tolerances are realistic or not. You might have put unreasonably tight tolerances there; see what the client actually wants. Does the client really need such tight tolerances, or did someone just put those tolerances in arbitrarily? So you look at that. If your process capability indices are not acceptable, or are at the lower end of the range, then you have two options, as we said: either reduce the variation or look at the tolerances. This completes our discussion on the process of doing process capability studies.
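The final steps of the study above (gather data, estimate capability, decide whether to improve) can be sketched in Python as follows. Note that the 1.33 acceptance threshold is only a commonly quoted rule of thumb, not something fixed by the BOK, and the data are invented; the sketch also assumes normality and statistical control were already verified in the earlier steps.

```python
import math

def capability_verdict(data, usl, lsl):
    """Estimate Cpk from gathered data and suggest the next step.

    Assumes normality and statistical control have already been
    confirmed (e.g. via a normality test and a control chart).
    """
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    if cpk >= 1.33:  # common rule-of-thumb acceptance level
        return cpk, "acceptable"
    return cpk, "improve: reduce variation or revisit the tolerances"

cpk, verdict = capability_verdict([9, 10, 11, 10, 10], usl=13, lsl=7)
```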

**48. Process capability for attributes data (BOK V.F.4)**

So earlier, when we talked about Cp/Cpk and Pp/Ppk, we were talking about continuous data, such as a measurement in millimeters. What happens if you have attribute data? Attribute data is data like pass/fail, accepted/rejected, good/bad. If you have attribute data, then really speaking there is no good process capability method. The best thing you can do is use a control chart, such as a C chart, U chart, P chart or NP chart. We will talk about these control charts later. You can use one of those charts to find out the percentage level of defectives, and probably that’s what you want to know. When we were doing Cp/Cpk, that’s what we were doing too: finding out what sort of rejection we were expecting. When Cpk was less than one, we knew we were expecting some level of rejection. The same thing can be found from attribute data by looking at the average rejection rate; p-bar is the average rejection rate. We will look at those things once we get to control charts, but for now, what you need to understand is that the common way to assess process capability for attribute data is to use a P, NP, C or U chart.
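As a small preview of that control-chart approach, here is a Python sketch that computes p-bar, the average fraction defective, together with the usual 3-sigma P-chart limits. It assumes equal subgroup sizes for simplicity, and the defect counts are invented for illustration.

```python
import math

def p_chart(defectives, subgroup_size):
    """p-bar and 3-sigma P-chart limits, assuming equal subgroup sizes."""
    p_bar = sum(defectives) / (len(defectives) * subgroup_size)
    se = math.sqrt(p_bar * (1 - p_bar) / subgroup_size)
    ucl = p_bar + 3 * se
    lcl = max(0.0, p_bar - 3 * se)  # a fraction defective cannot go below zero
    return p_bar, ucl, lcl

# 8 subgroups of 100 pieces each, with these defective counts per subgroup
p_bar, ucl, lcl = p_chart([2, 3, 1, 4, 2, 3, 2, 3], subgroup_size=100)
```

Here p-bar works out to 0.025, i.e. an average rejection rate of 2.5%, which is the kind of figure a Cpk below one would also have warned you about.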

**49. Process capability for non-normal data (BOK V.F.5)**

So in process capability, one major assumption was that the data you are using is normal. If you remember all these curves, when we said plus/minus three sigma, that was based on the normality assumption. This is what we did earlier when we were talking about process capability: when we were looking at the upper and lower limits, we assumed our data to be normal. There was a center point, and then there was minus three sigma and plus three sigma, and this was called the process range. But what if your data is not normal? How do you find the process capability for non-normal data? Let’s understand that through these three things. First, how do we find out whether our data is normal or not? For that we will use a histogram and a commonly used normality test, the Anderson-Darling test. Then, if your data is not normal, you need to transform it to normal. So there is a process of transformation. Let’s understand the basics of transformation first, how transformation works, and then we will go to the commonly used method for transformation, which is the Box-Cox power transformation.

And once we have done that, we will try to find the process capability of the non-normal data. In this whole process we will use an example from SigmaXL. Let’s start with that on the next slide. Here is the summary of what we are going to do: when we want to check normality, we will open SigmaXL. In SigmaXL there are sample files, and from those we will pick this particular sample file, which is the non-normal cycle time file. We will open that file and check the histogram, and this is what you will get as a histogram. As you can clearly see here, this in no way looks like a normal distribution, because all your data is bunched on the left.

By visually looking at the histogram itself you can tell that this data is not normally distributed, but still we will run another test, the Anderson-Darling test. The Anderson-Darling test will show something like this. What we expect in the Anderson-Darling test is that all these points which you see here should fall on this line, or at least between these yellow lines.

So if all the points lie between these yellow lines, then we can say the data is normally distributed. But as we already know from the histogram, this data is not normally distributed, and the Anderson-Darling test shows the same result. You see that a lot of the data is outside the limits, and it in no way follows this line. So this data is not normal. Another thing to note in the Anderson-Darling test is the p-value. The p-value here is zero. For data to be normally distributed, this p-value has to be greater than 0.05.

Once we get to hypothesis testing, we will understand that in hypothesis testing we make a null hypothesis, and in this case the null hypothesis is that the data is normally distributed. If your p-value is less than 0.05, your null hypothesis gets rejected. Your null hypothesis is that the data is normal, but since you get a low value, the null hypothesis gets rejected, which means your data is not normal. And this is what we are seeing here: a p-value of 0.000 shows that there is no way this data is normally distributed.
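Outside SigmaXL, you can run the same kind of check in Python with SciPy. One caveat: SciPy’s `anderson` returns the A² statistic and critical values rather than a p-value, so the sketch below compares the statistic against the 5% critical value, and adds a Shapiro-Wilk test to get an explicit p-value to compare against 0.05. The skewed sample data are generated here for illustration, not taken from the SigmaXL file.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
cycle_times = rng.exponential(scale=10.0, size=100)  # stand-in for skewed cycle times

# Anderson-Darling: reject normality if A^2 exceeds the 5% critical value
ad = stats.anderson(cycle_times, dist='norm')
crit_5pct = ad.critical_values[list(ad.significance_level).index(5.0)]
looks_normal = ad.statistic < crit_5pct

# Shapiro-Wilk gives a p-value directly; p < 0.05 means reject normality
_, p_value = stats.shapiro(cycle_times)
```

For strongly skewed data like this, both tests agree: the A² statistic blows past the critical value and the p-value is far below 0.05.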

So let’s look at the demonstration of these two things, the histogram and the Anderson-Darling test, and then we will go further to the next step, which is understanding transformation and doing the Box-Cox transformation. Let’s open SigmaXL and open the non-normal cycle time file. So here we have SigmaXL, and we are on the SigmaXL tab as you can see here. Let’s open that non-normal data file. For that I press Help and look for the sample data. Let’s press here, accept that yes, I want to open that file, and look for the non-normal data file, which is here: Nonnormal Cycle Time 2. This is the file we want to open. Let’s double-click and open it. So here is the non-normal cycle time in column A.

The first thing we said was that we want to draw a histogram. So let’s go back to the SigmaXL tab, Graphical Tools, and we go to Basic Histogram and click on the entire data table, which selects A1 to A31, the whole table, and then we go Next. And here cycle time is the only variable; let’s select that and finish. This gives the histogram which you have already seen on the slide. This data in no way looks normally distributed, but let’s reconfirm that with the Anderson-Darling test as well. So let’s go back to SigmaXL, recall the SigmaXL dialog, and cancel this. Now I want to do the Anderson-Darling test. For that I go to Graphical Tools, and there I am looking for the normal probability plot. Let’s do that: again use the entire data table, Next, select cycle time again, and press OK. And this is the probability plot which you saw earlier on the slide.

What we were expecting in the case of normal data was that all these points which are lying out here should either be in the range between these two orange or yellow lines, or, most preferably, on this center line. If all the points are on or near the center line, we can say the data is normal. But here, just by looking at it, I can say that the data is not normally distributed. So this is what you see as part of the probability plot. Now, if you want to look at the Anderson-Darling p-value, we can go to Descriptive Statistics and find that as well. For that, again, I go back to the SigmaXL dialog box and cancel this. What I want to do is look at descriptive statistics to find the p-value.

So for that, I go to Statistical Tools, Descriptive Statistics, select the entire data table, OK, select this Y variable, and OK. So this is what you are looking for here: the Anderson-Darling normality test. In this also, what you want to focus on is the p-value, and the p-value, as you can see here, is zero. It would have to be greater than 0.05 for this data to be normally distributed. Since it is low, we can say, with 95% confidence, that this data is not normally distributed. And that’s what we wanted here as part one: to check whether our data is normal or not. We concluded, based on the histogram and the Anderson-Darling test with its low p-value, that this data is not normally distributed. So what do we do now? Before we do anything else, let’s understand transformation. How can data be transformed into normal data? Let’s understand the basics of that first.
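As a preview of where the transformation step is heading, here is a hedged SciPy sketch of the Box-Cox transformation: `boxcox` picks the power lambda that makes the transformed data as close to normal as it can, and the Shapiro-Wilk p-value rises accordingly. The skewed cycle-time data are generated here for illustration, not taken from the SigmaXL sample file.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
cycle_times = rng.exponential(scale=30.0, size=100)  # skewed, strictly positive

# Box-Cox requires strictly positive data; SciPy chooses lambda by
# maximising the log-likelihood of normality for the transformed values
transformed, lam = stats.boxcox(cycle_times)

_, p_before = stats.shapiro(cycle_times)  # very small: clearly non-normal
_, p_after = stats.shapiro(transformed)   # much larger after transforming
```

The key design point is that lambda is not guessed by hand: the algorithm searches for the power (log when lambda is near zero) that best normalises the data, which is exactly what the Box-Cox option in SigmaXL does behind the scenes.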
