ISACA COBIT 5 – Measure (BOK V) Part 5


11. Measurement System Analysis – Introduction (BOK V.C.1)

We use the DMAIC approach: D for Define, M for Measure, A for Analyze, I for Improve and C for Control. Once we are in the Measure phase, which is the second phase, we want to evaluate how things are moving and how our processes are performing, and for that we need to take measurements. Measurements tell us how the process is performing so that we can improve it. We also know that processes have variation. So if you are producing a bearing with a nominal outer diameter of 52.00 mm, not every bearing will come out at exactly 52.00. Some will be a little on the plus side, some a little on the minus side. The process has variation. We understand that.

But we also need to understand that measurements themselves have variation. Whatever measurement we take will have some variation. Take this bearing, for example. If I measure it the first time, I might get a reading of 52.00. If I measure it again, I might get 52.01. The next time I measure, I might get 51.98. Now I have a number of readings. Which of these is the correct value of the outer diameter? You cannot say, because every time you measure you will get a different result. Measurement systems also have variation. And yet there are people who believe that whatever measurement you take the first time is the right measurement.

So, for example, if in this case I take a measurement of 52.00, someone will say that is the correct value, because the gauge we are using is a calibrated gauge. Calibration ensures one thing: that your average value is right. But it does not ensure that there is no variation, because variation will always be there in a measurement system. And that is what we are going to learn in this session: measurement system analysis. When we talk of measurement system analysis, let us first understand what a measurement system is. A measurement system consists of everything related to that particular measurement. It includes the operator, also known as the appraiser, who is taking the measurement. It includes the measuring instrument.

The instrument we are using for the measurement, the procedures we follow, the environment, the temperature: everything related to the measurement is part of the measurement system. But when we look at measurement variation, we focus on three main things. One is the operator, who contributes to the variation. We consider the instrument, because the instrument also contributes to the variation. And we consider the part, because the part itself has variation. So when we talk of variation in measurement, we will be focusing on these three things: the operator, the instrument and the part. As we go further, we will understand these three things and how they connect with each other. Before we start, let us understand two basic definitions. One is the true value and the second is the reference value.

When I take the measurement of this bearing, which is required to be 52.00, and the measurement comes out to be 52.00, is this the true value? No. The true value is the actual value of that dimension, and most of the time it is unknown. It might look a little surprising that the true value is unknown. You cannot say that this particular value is the true value, because the next time you measure the same thing it might come out as 52.01, and the time after that as 51.98. So which one is the true value? You cannot say. So let us be clear that the true value will always be unknown. What we use instead of the true value is the reference value. The reference value is the accepted value, or the substitute for the true value. To find the reference value, we take the best measuring equipment, do the measurement under the best conditions, using the best technician, the best appraiser we have, so that we can be more or less sure it is close to the true value. But we still do not know the true value.

So whatever measurement you take under those ideal conditions is representative of the true value, and we call that value the reference value. Another important aspect in measurement system analysis is the resolution of the gauge. What is the resolution of the gauge? Resolution is the smallest readable unit of the measuring instrument. If you remember, previously we looked at the digital Vernier caliper, which was showing a reading of 52.00 mm. So 0.01 mm is the smallest value this instrument can measure, and that is the resolution of the instrument.

Resolution is also called discrimination. In resolution, there is a rule called the ten-to-one rule, or the rule of ten. What it says is that, depending on the tolerance you have on the measurement, your resolution should be one-tenth of that. Your instrument should be able to read one-tenth of the tolerance of the measurement. Let us look at an example to make this clearer. Take the same bearing outside diameter of 52.00 mm. This bearing has a tolerance of plus or minus 0.05 mm, so it can be anything between 51.95 mm and 52.05 mm.

That is the range; the bearing has to be produced within that range only. If that is the case, which of these two instruments would you use to take the measurement? On the left I have the digital Vernier caliper, which has a resolution of 0.01 mm. On the right I have a measuring tape, which has a resolution of 1 mm. Which of these should be used? If I apply the ten-to-one rule of thumb: the tolerance here is plus or minus 0.05, so anything between 51.95 and 52.05 is acceptable, and the total tolerance range is 0.10 mm. The smallest value my digital Vernier can read, its resolution, is 0.01 mm, which is one-tenth of the tolerance range.

So that means my digital Vernier is good for this measurement, because it follows the ten-to-one rule, and not the tape, because the tape can only measure down to 1 mm. I could have used the tape here if the bearing tolerance had been plus or minus 5 mm, that is, if it was acceptable to make the bearing outer diameter anything from 47 to 57 mm. In that case I could have used the tape, but not when my tolerance is plus or minus 0.05. So depending on the tolerance, you need to select the resolution of the gauge, and that should follow the ten-to-one rule of thumb. In measurement system analysis we will be looking at three broad concepts. One is resolution, which we have already covered, including the ten-to-one rule of thumb.
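As a minimal sketch of the ten-to-one check described above (the nominal size, tolerance and resolutions come from the bearing example; the function name is my own illustration):

```python
# Ten-to-one rule check: the gauge resolution should be at most
# one-tenth of the total tolerance range of the characteristic.

def resolution_ok(tolerance_range_mm: float, resolution_mm: float) -> bool:
    """Return True if the resolution is no more than one-tenth of the tolerance range."""
    return resolution_mm <= tolerance_range_mm / 10

tolerance_range = 52.05 - 51.95              # total tolerance range = 0.10 mm
print(resolution_ok(tolerance_range, 0.01))  # digital Vernier (0.01 mm) -> True
print(resolution_ok(tolerance_range, 1.0))   # measuring tape (1 mm)     -> False
```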

So that part is already done. The other two important aspects we will learn in MSA are accuracy and precision. Under accuracy we will talk about bias, linearity and stability. Under precision we will talk about repeatability and reproducibility. Before we go any further, let us understand what accuracy and precision are. Accuracy is related to closeness to the true value, or to the accepted reference value. We have talked about this earlier: it is not possible to find the true value, so instead we use the reference value, which is a measurement taken under the best conditions with the most acceptable instrument. So accuracy is closeness to the true value or to the reference value. Precision is the closeness of repeated readings to each other. When we talk of precision and accuracy, many times you will have seen a slide with a target board, using darts or bullets shot at the board to show what accuracy is and what precision is.

This is your bullseye. If your shots are such that the average of all of them is near the true value, and the true value here is, let us say, the bullseye, then this is accurate. But it is not precise, because the shots are not near each other. On the other hand, when I talk of precision, all my shots are near each other. That is precision. And the best scenario is when we are both accurate and precise: everything is near the bullseye and near each other. In terms of measurement, suppose my measurement was supposed to be 100 mm. If I take repeated measurements of one particular item a number of times, I might not get the value 100 every time. Sometimes I might get something more.

Sometimes the value might be something less. But it will be accurate if the average of all those readings is near 100. So accuracy is when the average is near 100. If my readings cluster around 100, the average of those readings is near 100 and my measurement is accurate. The same applies if I take another set of measurements where 100 mm is required and the individual readings are far apart, but their average is still near 100: that is also accurate. Both of these are accurate, but the second one is less precise because the values are not close to each other, while the first one is more precise because the values are near each other.

And if you take a very large number of these readings, the picture will look like this: if the measurement was supposed to be 100 mm and everything is fine, most of the readings will be 100, some will be on the lower side and some on the higher side. So the readings will form something like a normal distribution. That is how your distribution of measurements would look. And if you remember, earlier we talked about variation in the part. This is not the variation in the part; this is the variation in the measurement, because the part is the same. This variation is coming from the measurement system. Every time we take the measurement, it might not read exactly 100. Sometimes it will be less, sometimes more, sometimes exactly 100. But in general the readings will follow a normal distribution curve.
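As a rough illustration of the distinction drawn above, accuracy looks at how close the average is to the nominal 100 mm, while precision looks at how close the readings are to each other. The reading values below are invented for the sake of the example:

```python
import statistics

nominal = 100.0
readings_a = [99.8, 100.1, 100.0, 99.9, 100.2]   # close together and centered on 100
readings_b = [97.0, 103.5, 99.0, 101.5, 99.0]    # centered on 100 but spread out

for name, readings in [("A", readings_a), ("B", readings_b)]:
    mean = statistics.mean(readings)      # nearness of the mean to 100 reflects accuracy
    spread = statistics.stdev(readings)   # spread of the readings reflects precision
    print(f"Set {name}: mean = {mean:.2f} (accuracy), std dev = {spread:.2f} (precision)")
```

Both sets are accurate (mean near 100), but set B is much less precise than set A.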

12. Measurement System Analysis – Accuracy (BOK V.C.1)

Now we understand the difference between accuracy and precision. Accuracy is nearness to the true value or the reference value. Precision is nearness of the values to each other. Now we will look at each of the sub-items. Under accuracy we have bias, linearity and stability. Let us see these three items one by one, starting with bias. What is bias? Bias is the difference between the observed average of the measurements and the reference value. You take the average of all the measurements you have taken, and the difference between that average and the reference value is your bias. I am saying reference value here, not true value, because as we discussed earlier it is not possible to find the true value of something. What we find is the reference value, which is the best estimate obtained under the best measurement conditions. In this case I have taken an example where the reference value is 100 PSI, PSI being a unit of pressure, pounds per square inch. So my reference value is 100, and if I use my measuring device, my pressure gauge, and measure the pressure six different times, I get six different readings, which I have recorded here.

The first time I measure, the gauge shows me 100 PSI. The next time 101, then 102, then 102, 101 and 100. These are the six readings I obtained using my pressure gauge. If I take the average of these six readings, it comes out to be 101 PSI, and our reference value is 100. So the bias in this case is the difference between the average of the measurements, 101, and the reference value, 100, which is 1 PSI, or one pound per square inch. That is the bias. Shown graphically: this is my 100 PSI, the reference value, and the average in the previous example was 101. We took a number of readings; some came out as 100, some as 101, some as 102, and their average is 101. The difference between this average and the reference value of 100 is the bias. Bias is a systematic error, because it will be reproduced systematically; it is an inherent difference between the average measurement and the reference value. Bias is addressed by calibration. When your equipment goes for calibration, it will be readjusted so that the average reads 100, not 101 as in our example. So calibration addresses bias.
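As a minimal sketch of the bias calculation from the pressure-gauge example above (bias is simply the observed average minus the reference value):

```python
import statistics

reference_psi = 100.0
readings_psi = [100, 101, 102, 102, 101, 100]   # the six readings quoted above

bias = statistics.mean(readings_psi) - reference_psi
print(f"Observed average = {statistics.mean(readings_psi):.1f} PSI, bias = {bias:.1f} PSI")
# -> Observed average = 101.0 PSI, bias = 1.0 PSI
```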

Coming to the next item under accuracy: linearity. We talked about bias earlier; now we are talking about linearity. What is linearity? Linearity is the measurement of bias across the operating range of the tool or instrument. Any equipment will have bias, but this bias can change from one point in the operating range to another. For example, look at the bathroom scale you use for weighing yourself. It will work fine around 50 kg or so, which is where most people's weight falls. But if you try to measure 1 kg on it, or 200 kg, you will see there is too much bias when you use a measuring instrument at the extreme ends of its range.

So what does linearity show? Linearity shows the difference in bias at different points of measurement. To understand this clearly, let us go back to the same example, where we had a pressure gauge which at 100 PSI was showing an average pressure of 101, leading to a bias of 1 PSI. But now, if we find this bias at different points, at 0 PSI, 50 PSI, 150 PSI and 200 PSI, we might see that the bias is different at different points. At 0 PSI there is no bias. At 200 PSI there is a bias of 2 PSI instead of the 1 PSI we had at 100 PSI. So the bias can change from one point to another.
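A hedged sketch of a linearity check along the lines of the example above: compute the bias at several points across the operating range and see how it changes. The observed averages below are invented for illustration:

```python
reference_points_psi  = [0, 50, 100, 150, 200]
observed_averages_psi = [0.0, 50.5, 101.0, 151.5, 202.0]   # hypothetical gauge readings

# Bias at each reference point = observed average - reference value
for ref, obs in zip(reference_points_psi, observed_averages_psi):
    print(f"Reference {ref:>3} PSI -> bias = {obs - ref:+.1f} PSI")
# The bias grows from 0 at 0 PSI to about +2 PSI at 200 PSI,
# i.e. the gauge is not linear across its range.
```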

That is linearity. We have talked about bias and linearity; let us now talk about stability. What is stability? Stability is the measure of bias over time, and it is also known as drift. Bias can change over time. Say this pressure gauge, which had a bias of 1 PSI when measuring 100 PSI: that bias might change over time because of wear and tear, or because the spring has changed. All those things could cause the bias to change over time, and that is what stability captures. Stability is the change in bias over a period of time. In the picture, at time one, which is today, the bias is bias one. At time two, which might be a month or a year from now, the bias is different, which is bias two. The difference between bias one and bias two is the stability of the instrument. With this, we complete the accuracy part of the measuring equipment.

13. Measurement System Analysis – Precision (BOK V.C.1)

So far we have talked about accuracy, and within that we covered bias, linearity and stability. Coming to the next item, the precision of the measuring instrument: precision is the closeness of repeated readings to each other. There are two measures in this. One is repeatability and the second is reproducibility, and together these two are known as gauge R&R, for repeatability and reproducibility. We will be talking about these three terms on the next three slides, starting with repeatability. Earlier, when we were talking about accuracy, we were looking at how far, on average, the measurement is from the reference value. When we talk about precision, we are looking at how close the readings are to each other. In that, there are two things to consider. One is the gauge: the gauge can produce variation. The second is the operator, or appraiser, who can also introduce variation. These two sources, variation because of the gauge and variation because of the operator, are represented by the two terms repeatability and reproducibility.

When we talk of repeatability, this is the variation because of the gauge, the variation because of the equipment: one operator taking a number of measurements using one piece of equipment. Say we start measuring a ball bearing with the digital Vernier gauge. The first time I measure it, it comes out to be 52.00. The next time I measure it, it again comes out to be 52.00. It is the same measuring instrument, the same piece we are measuring and the same operator doing the measurement. The third time it might come out as 52.02, and the next time as 51.98. If I take a number of those measurements, one piece, the same piece, the same operator, the same gauge, and plot them, they might look something like this: 52.00 is the reference value, and sometimes the reading is less, sometimes more.

And this might end up looking like a normal distribution curve. This variation, the variation because of the gauge, is the repeatability. Repeatability is the capability of the gauge to produce consistent results. If that is repeatability, what about reproducibility? Repeatability was the variation because of the gauge. Reproducibility is the variation because of the appraiser, the operator, the person taking the measurement. What we are doing here is changing the operator, changing the appraiser. One appraiser takes some measurements; let us call this appraiser one, then appraiser two and appraiser three. Appraiser one takes the measurements, and his or her readings fall in the first distribution. If I use the second appraiser, that appraiser's measurements are shown by the second curve. If I use the third appraiser, his or her measurements are shown by the third curve.

Now I can see that there is a difference in the measurements when I change the appraisers, and that difference is called reproducibility. Reproducibility is the variation in the average measurements made by different appraisers using the same gauge. All three operators are using the same gauge, because the variation due to the gauge is taken care of separately under repeatability. In reproducibility we are looking at the variation because of the appraisers, which is the difference between the averages of the three appraisers. That is reproducibility.
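A rough sketch of how this is often looked at: each appraiser measures the same part with the same gauge, and we compare the appraiser averages. The readings below are invented for illustration:

```python
import statistics

appraiser_readings = {
    "Appraiser 1": [52.00, 52.01, 51.99, 52.00],
    "Appraiser 2": [52.02, 52.03, 52.01, 52.02],
    "Appraiser 3": [51.98, 51.97, 51.99, 51.98],
}

# Average per appraiser: differences between these averages reflect reproducibility;
# the spread within each appraiser's readings reflects repeatability.
averages = {name: statistics.mean(r) for name, r in appraiser_readings.items()}
for name, avg in averages.items():
    print(f"{name}: average = {avg:.3f} mm")

print(f"Range of appraiser averages = {max(averages.values()) - min(averages.values()):.3f} mm")
```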

So we have talked about repeatability and reproducibility; now let us talk about gauge R&R, which is gauge repeatability and reproducibility, and how we would show that on a similar graph. Here is gauge R&R, or GRR. This is operator two, this is operator one, this is operator three, and the spread within each curve is the repeatability: the repeatability for operator two, and the same for operators one and three. And we said that the difference in the averages is the reproducibility, the reproducibility that comes from using different operators. Together, from one end to the other, these make up gauge R&R, gauge repeatability and reproducibility. The graph here is somewhat exaggerated. What happens in reality is this: take a number of operators and look at the repeatability and reproducibility.

If I take operator number one, their readings will show some variation, something like this, with this as the average. If I take operator two, operator two might be shifted slightly to the left or right, so their pattern will look something like this. That will be the pattern of operator two doing the same measurement.

Operator three might be, let us say, shifted slightly more towards the right. So these are the three variations because of the operators, and each curve itself is the variation because of the equipment. Now, if I look at the total measurement variation, which is the combination of all these variations, it would be something like this, shown here in red. That is my overall variation because of measurement; a small sketch of how these components combine numerically follows below. Now the question is, how much overall measurement variation is acceptable? That will depend on your tolerance. To understand that, let us look at the precision-to-tolerance ratio on the next slide, which will show how much overall measurement variation is acceptable.
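As a minimal sketch of how the overall measurement variation combines repeatability (equipment) and reproducibility (appraiser) variation: variances add, so the combined standard deviation is the square root of the sum of the two variance components. The component values below are hypothetical:

```python
import math

sigma_repeatability = 0.008    # within-appraiser (equipment) std dev, mm (assumed)
sigma_reproducibility = 0.015  # between-appraiser std dev, mm (assumed)

# Variances add; the combined gauge R&R std dev is the root of their sum.
sigma_grr = math.sqrt(sigma_repeatability**2 + sigma_reproducibility**2)
print(f"Combined measurement system std dev (GRR) = {sigma_grr:.4f} mm")
```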

14. Measurement System Analysis – Precision to Tolerance Ratio (BOK V.C.1)

The precision-to-tolerance ratio tells you how capable your measurement system is of taking the measurement. It is the ratio between the estimated measurement error, the precision we showed on the previous slide with the red bell curve, which was the overall estimated measurement variation consisting of both repeatability and reproducibility, and the tolerance of the characteristic being measured. That ratio is known as the precision-to-tolerance ratio. The precision consists of repeatability and reproducibility; the tolerance is the tolerance of the item being measured. Say we are measuring this bearing, which has a dimension of 52.00: what is the tolerance on that?

What is the acceptable range? Say I can accept anything between 51.98 and 52.02. If that is the tolerance, then we need to see whether our measurement system is capable for that tolerance or not. The next slide shows this in graphical form, which makes it much clearer what the precision-to-tolerance ratio is. P to T, precision to tolerance, is the most common estimate of measurement system precision: how precise your measurement system is. So here is my tolerance: in the case of the bearing measurement, anything between 51.98 and 52.02 is acceptable, the average being 52.00.

If the variation because of my measurement system is only about one-tenth of the tolerance, then my measurement system is good. In the second example shown, the variation of the measurement system itself is more or less equal to the tolerance, and then my measurement system is not good. My tolerance is 51.98 to 52.02, but even if my bearing is perfectly 52.00, the measurement system is so bad that it can read it as 51.98, or as 52.02, or anything in between, even though the reference value is 52.00. The measurement system itself consumes the whole tolerance, so how can I control my item within that tolerance?

So the right, or good, value for the P to T ratio is 10%. Up to 30% can be accepted in certain areas, but beyond 30% there is generally no way you can accept it. The most commonly used target is 10%: your P to T ratio needs to be around 10%. In the third example, where my P to T ratio is greater than 100%, the measurement system variation is even wider than my tolerance, so there is no way I can make sure my products are within tolerance. That is the precision-to-tolerance ratio.

Here is the formula for calculating the precision-to-tolerance ratio (PTR): PTR = 5.15 × sigma of the measurement system / (USL − LSL), where sigma is the measurement system standard deviation and the difference between the upper and lower specification limits is the tolerance. So 5.15 multiplied by the sigma of the measurement system, divided by the tolerance, gives you the precision-to-tolerance ratio. If you are wondering why this is 5.15 and not 6 (at a number of places you might see 6 as well), the logic of using 5.15 is that 5.15 sigma covers 99% of the area under the normal curve. If 99% of the area is inside, 1% lies outside the limits, 0.5% on each side, and the z value that leaves 0.5% in each tail is 2.575. Adding 2.575 on the plus side and 2.575 on the minus side gives you 5.15, just in case you are curious why it is 5.15.
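As a minimal sketch of the formula just given (the measurement system sigma below is an assumed value; the specification limits are from the bearing example):

```python
sigma_measurement = 0.0007    # measurement system std dev, mm (assumed)
usl, lsl = 52.02, 51.98       # specification limits from the bearing example

# P/T = 5.15 * sigma_measurement / (USL - LSL)
p_to_t = 5.15 * sigma_measurement / (usl - lsl)
print(f"P/T ratio = {p_to_t:.1%}")   # about 9%, under the 10% guideline
```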

15. Measurement System Analysis – Three Methods of GRR (BOK V.C.1)

In gauge R&R, what we intend to do is study the variation in measurement and see whether that much variation is acceptable for our measurement or not. Broadly, we touched on this in the precision-to-tolerance ratio earlier, but now we are looking into more detail: we want to find out how much of this variation is because of the operator or appraiser, and how much is because of the gauge itself. That is what we do in gauge R&R studies. Gauge R&R studies look at repeatability and reproducibility and find out how much variation is due to each of these two things. Before I go any further: many of you might have trouble remembering which is repeatability and which is reproducibility. One is variation because of the person and one is variation because of the gauge. How do we remember that? I came across a video on YouTube that had a nice way of remembering it. In "repeatability" you have "pet": Pete, as in Peter. That is how I remember it too.

So in repeatability, Peter is the one taking all the measurements. It is one operator, one gauge, taking a number of measurements, so it is the variation associated with that one particular operator, that one particular appraiser: repeatability relates to Pete, or Peter. The second part is reproducibility. The way to remember that is that Peter has now reproduced, and because of that reproduction there are a number of little Peters, so a number of people are taking measurements. What we are studying in reproducibility is the variation because of those different people, because there are more people doing the measurement. That is how I remember repeatability and reproducibility.

Now, coming back to our main topic of gauge R&R studies: we want to find out what part of the variation is due to repeatability, what part is due to reproducibility, and whether the measurement system variation is acceptable or not. There are three methods of gauge R&R studies: the range method, the average and range method, and the ANOVA method. We have not talked about ANOVA yet; as you go further into the course, we do cover ANOVA later. For that reason I will not go into the details of manually calculating these numbers using ANOVA. What we will do is look at the range method and the average and range method in more detail. For ANOVA, we will use the SigmaXL software to do the calculation and we will interpret the results it generates. That is the game plan for understanding the three methods of conducting gauge R&R. Here I have summarized the three methods: the range method quantifies both repeatability and reproducibility together.

So the range method is a rough way of doing gauge R&R, because it does not separate repeatability and reproducibility as two separate items; it just gives you a single value of gauge variation. The second method is the average and range method, which provides separate estimates of repeatability and reproducibility: how much of the variation is because of repeatability and how much because of reproducibility. With ANOVA, we not only estimate repeatability and reproducibility, but we also look at the interaction between these elements; we will talk about interaction in design of experiments. So once you have completed the full Black Belt course, you might want to come back to this lecture, because a number of the concepts we touch on here are explained in more detail later. Let us start with the range method on the next slide.
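A hedged sketch of the range method as it is commonly described: two appraisers each measure the same parts once, and the average range of their paired readings gives a single, combined estimate of gauge R&R. The readings and the process standard deviation below are invented, and the d2* constant of 1.19 is the value commonly tabulated for a setup with two appraisers and five parts:

```python
import statistics

appraiser_a = [52.00, 52.01, 51.99, 52.02, 51.98]   # one reading per part
appraiser_b = [52.01, 51.99, 52.00, 52.03, 51.97]

# Range per part between the two appraisers, then the average range
ranges = [abs(a - b) for a, b in zip(appraiser_a, appraiser_b)]
avg_range = statistics.mean(ranges)

d2_star = 1.19                 # commonly tabulated constant (2 appraisers, 5 parts)
grr = avg_range / d2_star      # combined repeatability and reproducibility estimate

process_sigma = 0.05           # assumed process standard deviation, mm
print(f"GRR estimate = {grr:.4f} mm, %GRR = {100 * grr / process_sigma:.1f}% of process variation")
```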
