my8data

Frequently Asked Questions

Find quick answers to your questions about my8data

Detailed explanations of the methods can be found at www.sixsigmablackbelt.de

General

my8data is a web-based software solution for statistical quality management. The application supports you in performing measurement system analyses (MSA), machine capability studies (MFU, from the German Maschinenfähigkeitsuntersuchung), and statistical process control (SPC) according to common industry standards such as VDA and AIAG.
my8data runs optimally in current versions of Chrome, Firefox, Edge, and Safari. For the best experience, we recommend Google Chrome or Mozilla Firefox.
You can change the language via the globe icon in the top navigation bar. Currently German and English are supported. Your setting is saved automatically.
All your analyses are automatically saved in the cloud. You can also export each analysis as an Excel file. To do this, go to the respective analysis and click on "Export".

Measurement System Analysis (MSA)

MSA1 (Cg/Cgk) tests the capability of a measuring instrument using a reference standard. MSA2 (Gage R&R) examines repeatability and reproducibility with multiple operators. MSA3 is a simplified Gage R&R with only one operator.
According to VDA Volume 5: Cg ≥ 1.33 and Cgk ≥ 1.33 are the minimum requirements. For special processes, Cg ≥ 1.67 and Cgk ≥ 1.67 are often required.
%GRR shows the proportion of measurement system variation in the total variation. Rating: ≤10% = capable (green), 10-30% = conditionally capable (yellow), >30% = not capable (red). Additionally, ndc should be ≥ 5.
VDA recommends: at least 10 parts, 3 operators, and 2-3 repetitions per part and operator. The parts should be distributed across the entire tolerance range.
MSA7 evaluates attributive (pass/fail) inspection processes with multiple operators using Kappa statistics and agreement rates. MSA7A is for automated inspection systems with only one "operator" (automation).

Machine Capability (MFU)

Cm/Cmk (machine capability) is determined from short-term measurements under stable conditions. Cp/Cpk (process capability) is based on long-term data and considers all process influences. Cm/Cmk should be higher than Cp/Cpk.
VDA recommends at least 50 consecutive measurements. The measurement should be performed under stable conditions (same batch, same operator, same machine).
For new machines: Cmk ≥ 1.67. For existing machines: Cmk ≥ 1.33. For special processes, higher values may be required.
Yes, the assumption of normal distribution should be verified with a test (e.g., Anderson-Darling). For non-normally distributed data, alternative distributions or transformations can be used.

Statistical Process Control (SPC)

X̄-R chart: for sample sizes 2-10. X̄-S chart: for sample sizes >10. I-MR chart: for individual values. The choice depends on your sampling concept.
Control limits are calculated from the process data (not from specification limits!). UCL = Mean + 3σ, LCL = Mean - 3σ. The exact calculation depends on the chart type.
A process is considered out of control when a point falls outside the control limits, when 7 consecutive points lie on one side of the center line, or when trends, cycles, or other non-random patterns appear.
Cp/Cpk is based on within-variation (short-term), Pp/Ppk on overall variation (long-term). For a stable process, both are similar. Large differences indicate instability.
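The distinction can be sketched numerically. This is an illustrative Python example, not my8data's internal code; the subgroup data, specification limits, and the d2 constant for subgroups of 5 are assumptions for the sketch:

```python
import statistics

# Illustrative only: Cpk from within-subgroup (short-term) variation
# vs. Ppk from overall (long-term) variation, subgroup size 5.
subgroups = [
    [10.02, 9.98, 10.01, 9.99, 10.00],
    [10.03, 10.01, 9.97, 10.00, 10.02],
    [9.99, 10.00, 10.02, 9.98, 10.01],
]
USL, LSL = 10.10, 9.90

all_values = [x for sg in subgroups for x in sg]
mean = statistics.mean(all_values)

# Within variation: average subgroup range divided by d2 (d2 = 2.326 for n = 5)
r_bar = statistics.mean(max(sg) - min(sg) for sg in subgroups)
sigma_within = r_bar / 2.326

# Overall variation: plain standard deviation of all values
sigma_overall = statistics.stdev(all_values)

cpk = min(USL - mean, mean - LSL) / (3 * sigma_within)
ppk = min(USL - mean, mean - LSL) / (3 * sigma_overall)
print(f"Cpk = {cpk:.2f}, Ppk = {ppk:.2f}")
```

For a stable process the two values come out close together; a Ppk that is clearly lower than Cpk points to instability between subgroups.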

Data & Import

You can enter data directly into the spreadsheet fields, paste from Excel via Copy&Paste, or import DFQ files (Q-DAS format).
Each analysis can be exported as an Excel file (.xlsx). The file contains all input data, calculations, and results. To do this, go to the analysis and click on "Export".
Yes, via the Excel template management you can create and reuse templates with predefined header data and settings.
The data generator creates synthetic measurement data for training purposes. You can choose different distributions (Normal, Weibull, etc.) and parameters.

Account & License

Go to your user settings (click on your name → Settings) and select "Change password".
Yes, all data is stored encrypted on German servers. We fully comply with GDPR. Your analysis data belongs exclusively to you.

MSA: Fundamentals

The primary goal is to provide statistical evidence that a measurement process (consisting of equipment, operator, method, and environment) is suitable for the intended measurement task at the specific point of use. It must be ensured that the measuring equipment can capture a quality characteristic with a sufficiently small systematic measurement deviation (trueness) and a sufficiently small measurement variation (precision) relative to the tolerance.
The verification always refers to the entire measurement process at the point of use, not just the isolated measuring device. A measuring device may be excellent under laboratory conditions, but can become unsuitable in the real process due to influencing factors such as vibrations in production, temperature fluctuations, inadequate fixtures, or different operator handling. Therefore, the study must be conducted under series production conditions.
Yes, in principle the capability verification applies exclusively to the investigated characteristic. If different characteristics (e.g., diameter and runout) or characteristics with significantly different tolerances are measured with the same measuring equipment, a separate capability verification is required for each of these characteristics, as the resolution, handling, and physical conditions may differ.
The resolution of the measuring equipment must be small enough to sufficiently subdivide the tolerance of the characteristic. As a rule of thumb ("Golden Rule of Metrology"), the resolution should be at most 5% of the tolerance T (≤ 5% · T). In justified exceptional cases, a resolution of up to 10% of the tolerance is permissible (≤ 10% · T) when technically no better equipment is available. Insufficient resolution leads to information loss and distorts the statistical evaluation of the variation.
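The rule of thumb reduces to a one-line check. A minimal sketch (the function name and example values are invented for illustration):

```python
def resolution_ok(resolution, tolerance, limit=0.05):
    """Golden Rule of Metrology: the resolution should be at most 5%
    of the tolerance (10% only in justified exceptional cases)."""
    return resolution <= limit * tolerance

# Example: digital caliper with 0.01 mm resolution on a characteristic with T = 0.1 mm
print(resolution_ok(0.01, 0.1))   # 0.01 > 0.005 -> not sufficient
print(resolution_ok(0.001, 0.1))  # 0.001 <= 0.005 -> sufficient
```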
The statistical calculation models of Methods 1 through 5 (such as mean, standard deviation, Cgk, ANOVA) are mathematically based on the assumption of a normal distribution of measured values. If no normal distribution is present (e.g., with one-sided physical limits, form errors, or mixed distributions), these standard methods produce incorrect or misleading results. In such cases, special methods must be applied or data transformations must be examined.
Thorough and documented inspection planning is a mandatory prerequisite for every capability verification. It defines what (characteristic), with what (measuring equipment), how (method), and how often measurements are taken. Without this definition, a capability verification is not reproducible and has no normative value. The limit values for the capability indices (e.g., 1.33 or 1.67) depend on customer-specific or normative requirements.
Measurement process capability evaluates the suitability of a system relative to the tolerance of a characteristic for series monitoring (focus: "Is the device good enough for this part?"). Measurement uncertainty (according to VDA 5 / GUM) is a parameter assigned to every measurement result that indicates the range in which the true value lies. While capability is often used for release decisions, measurement uncertainty is critical for conformity decisions according to ISO 14253.
The documentation must be complete enough that the study can be traced at any time. This includes: unique identification of the inspection plan, date/time, environmental conditions (temperature), identification of the reference standards used (calibration certificate no.), operator names, all raw data (individual measured values), the limit values used, the evaluation method, and the result including a clear assessment ("capable" / "not capable").
When the standard methods or AIAG MSA are technically not applicable (e.g., for destructive tests, extremely complex geometries, or dynamic measurements), custom special methods must be developed. This approach must be documented and must be coordinated with the responsible quality management department and, in the case of supplier relationships, with the customer before it is applied.

Method 1: Cg/Cgk

Method 1 is a "machine capability study" for the measuring device. It serves to evaluate the measurement system in isolation – without the influence of different operators or the variation of production parts. The systematic measurement deviation (bias) against a known reference value and the repeatability (variation) of the device are tested.
The reference standard must be long-term stable and match the production characteristic to be measured as closely as possible in geometry and composition. It is crucial that it is calibrated and the "true value" (reference value xm) is known. The calibration uncertainty Ukal should be significantly smaller than the tolerance of the characteristic (ideal case: Ukal < 1% · T; minimum requirement: Ukal < 10% · T).
To obtain a reliable statistical statement about the variation and the mean, a sufficient sample size is required. At least 25 measurements are required, but 50 measurements are recommended. Too few measurements lead to large confidence intervals, which increases the risk of incorrect decisions (false positive or false negative).
Simply repeating the measurement without moving the part would only capture the internal electronic or mechanical variation of the device. To replicate the real process, all handling steps must be simulated (clamping, positioning, probing). Only in this way is the variation captured that also occurs during later series production from inserting the parts.
Cg (Potential Capability Index): Relates a fraction of the tolerance (typically 0.2·T) to the variation of the measuring device (typically 6·s). It indicates how precise the device is but ignores trueness. Cgk (Critical Capability Index): In addition to variation, it also considers the systematic deviation (bias) from the true value. It checks whether the measured values are not only close together but also centered relative to the reference.
According to common standards, the minimum requirements are Cg ≥ 1.33 and Cgk ≥ 1.33. Values below 1.33 indicate that the variation is too large (device imprecise) or the systematic deviation is too high (device incorrectly adjusted), whereby the tolerance range is too heavily restricted by the measurement uncertainty.
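The two indices can be computed as follows. This sketch uses the common 20%-of-tolerance convention (Cg = 0.2·T / 6s, Cgk = (0.1·T − |bias|) / 3s); conventions differ between standards, and the measurement data are invented:

```python
import statistics

def cg_cgk(measurements, reference, tolerance):
    """Cg/Cgk per the common 20%-of-tolerance convention:
    Cg  = 0.2*T / (6*s)            -- precision only
    Cgk = (0.1*T - |bias|) / (3*s) -- precision and trueness"""
    x_bar = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    bias = abs(x_bar - reference)
    cg = 0.2 * tolerance / (6 * s)
    cgk = (0.1 * tolerance - bias) / (3 * s)
    return cg, cgk

# 25 repeat measurements on a calibrated standard (reference 5.000, T = 0.2)
data = [5.001, 4.999, 5.002, 5.000, 4.998] * 5  # illustrative data only
cg, cgk = cg_cgk(data, reference=5.000, tolerance=0.2)
print(f"Cg = {cg:.2f}, Cgk = {cgk:.2f}")  # both should be >= 1.33
```

With zero bias, Cg and Cgk coincide; any systematic deviation pulls Cgk below Cg.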
For characteristics without a natural boundary (e.g., distance > 10mm), there is no fixed tolerance range T, which is why Cg and Cgk cannot be calculated. In this case, Method 1 is used to determine bias and variation. From this, a reduced acceptance range for production is calculated (e.g., z ≤ USL - 4s - |Bias|) to ensure that no defective parts are incorrectly accepted as good parts.
These are characteristics that physically cannot become less than zero, such as runout, roughness, or magnitude of unbalance. Here, zero (LSL* = 0) represents a natural boundary. In these cases, a substitute tolerance T* (difference between the limit value and zero) can often be formed to calculate Cg values analogous to two-sided characteristics.
Within Method 1, a t-test is used to check whether the measured deviation (x̄ - xm) is statistically significantly different from zero or whether it could have arisen solely from random variation. If the deviation is not statistically significant, the bias may be neglected in certain calculations (e.g., when establishing acceptance criteria).
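The significance test can be sketched as a one-sample t-test on the bias. The example data and the quoted critical value (t ≈ 2.06 for n = 25 at the 95% level) are illustrative assumptions:

```python
import math
import statistics

def bias_t_statistic(measurements, reference):
    """One-sample t statistic for the bias (x_bar - x_m).
    Compare |t| against the critical value t_{0.975, n-1}
    (about 2.06 for n = 25) to judge significance."""
    n = len(measurements)
    x_bar = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    return (x_bar - reference) / (s / math.sqrt(n))

data = [5.001, 4.999, 5.002, 5.000, 4.998] * 5  # illustrative data only
t = bias_t_statistic(data, reference=5.000)
print(f"t = {t:.3f}")  # |t| < 2.06 -> bias not statistically significant
```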
Method 1 tests the "basic hygiene" of the measuring device. If the measuring device itself (without operator influence and part variation) is already unable to deliver reproducible and accurate values on the reference standard, there is no point in conducting more complex studies (Methods 2/3). Failure in Method 2 would then be predetermined and troubleshooting unnecessarily complicated.

Method 2: Gage R&R

Method 2 (Gage R&R) must always be performed when the operator can have an influence on the measurement result. This is the case with hand-held measuring instruments (calipers, micrometers), with manual loading of fixtures, or when the operator makes subjective decisions during evaluation (e.g., manually setting measurement lines).
The standard requires: at least 10 representative production parts (n ≥ 10), at least 3 different operators (k ≥ 3), at least 2 repeat measurements per part and operator (r ≥ 2). This results in a total of at least 10 × 3 × 2 = 60 measured values.
The parts should not be ideal but should represent the actual variation of the manufacturing process. Ideally, they cover the entire tolerance range. If the parts are too similar (too little part variation), this can mathematically cause the measurement process to be rated worse than it actually is (see ndc issue).
To exclude psychological effects ("memory effect"), the inspector must not know which part is currently being measured or what was measured last time. A random order (randomization) also prevents trends (e.g., warming up of the device) from being incorrectly interpreted as part variation or operator differences.
The decisive metric is %GRR (Gage Repeatability and Reproducibility). It relates the combined variation from equipment (EV) and operators (AV) to the reference value. Typically, the reference value is the tolerance T. The simplified formula is: %GRR = (6 · GRR / T) · 100%.
%GRR ≤ 10%: The measurement process is considered capable. 10% < %GRR ≤ 30%: The measurement process is conditionally capable. Use is often possible, but improvement measures should be examined. %GRR > 30%: The measurement process is not capable and must not be used.
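The formula and rating thresholds above can be sketched directly. EV and AV are taken here as standard deviations, and the example values are invented:

```python
import math

def percent_grr(ev, av, tolerance):
    """%GRR with the tolerance as reference value:
    GRR = sqrt(EV^2 + AV^2) (standard deviations),
    %GRR = (6 * GRR / T) * 100."""
    grr = math.sqrt(ev**2 + av**2)
    return 6 * grr / tolerance * 100

def rate(pct):
    """Rating per the thresholds above."""
    if pct <= 10:
        return "capable"
    if pct <= 30:
        return "conditionally capable"
    return "not capable"

pct = percent_grr(ev=0.002, av=0.001, tolerance=0.2)  # illustrative values
print(f"%GRR = {pct:.1f}% -> {rate(pct)}")
```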
The AIAG MSA often recommends total variation (TV) as the reference value. However, this is problematic: if a manufacturing process is very precise (very small part variation), the measurement error appears huge in relation, even though the measuring device is perfectly adequate relative to the tolerance (function of the part). The tolerance reference is therefore usually more technically relevant.
The ndc value indicates how many distinguishable categories (classes) the measurement system can detect within the process variation. It is a measure of the resolution capability of the measurement system relative to the part variation. A value of 1 would mean the device cannot distinguish between different parts.
According to AIAG MSA, the ndc value should be ≥ 5. This ensures that the measurement system resolves finely enough to meaningfully perform process control and analysis. A value below 5 indicates that the measurement variation is too large compared to the part variation.
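The ndc value follows from the ratio of part variation (PV) to measurement system variation (GRR). A minimal sketch using the common AIAG formula ndc = 1.41 · PV/GRR, truncated to an integer; the example values are invented:

```python
def ndc(pv, grr):
    """Number of distinct categories: ndc = 1.41 * PV / GRR
    (PV = part variation, GRR = measurement system variation,
    both as standard deviations). Should be >= 5."""
    return int(1.41 * pv / grr)

print(ndc(pv=0.010, grr=0.002))  # 1.41 * 5 -> 7 distinguishable categories
```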
EV (Equipment Variation): Describes the repeatability. This is the variation that occurs when the same operator measures the same part multiple times (equipment variation). AV (Appraiser Variation): Describes the reproducibility. This is the systematic deviation of the means between the different operators (operator variation).
A high AV value means that the operators measure differently on average. Causes can include: unclear work instructions, different force when clamping parts, parallax errors when reading scales, insufficient training, or different interpretation of the measurement location.
A high EV value indicates problems with the measuring instrument itself. Causes can include: wear or play in the mechanics, poor clamping of the part (wobbling), contamination, or unstable environmental conditions (vibrations).
ANOVA (Analysis of Variance) is mathematically more accurate. Unlike the outdated Average and Range Method (ARM), ANOVA can detect interactions between operator and part (e.g., when operator A measures differently only for small parts compared to operator B). It also uses all data more efficiently for estimating variances.
In such exceptional cases, Method 2 can also be performed with fewer parts. However, to maintain statistical confidence (degrees of freedom), the number of repeat measurements or inspectors must be increased. This must be explicitly documented and justified as a deviation in the report.

Method 3: Automatic

Method 3 (often called "Type 3 Study") is used for fully automatic measurement systems where the operator has no influence on the measurement result. Examples are automated inspection machines in production lines where feeding, positioning, and measurement are performed mechanically.
Since the influencing factor "operator" is eliminated, the parameter k (number of inspectors) is set to 1 (or is omitted). The AV (Appraiser Variation) is therefore zero. In this case, the total GRR variation is determined exclusively by repeatability (EV) and the part/machine interaction.
Since operator-to-operator variation no longer contributes information, more parts must be measured to obtain a statistically reliable statement about the interplay between the measuring device and different part geometries. 25 parts provide a broader basis for this than the 10 parts from Method 2.
The standard requires: at least 25 production parts (n ≥ 25) and at least 2 measurement cycles per part (r ≥ 2). The parts must be newly fed/inserted in each cycle to replicate the real automatic process.
No, the limit values for capability are identical. Here too: %GRR ≤ 10% is capable, up to 30% is conditionally capable, and above that is not capable. The reference value preferably remains the tolerance.

Method 4: Linearity

Normally, linearity is checked during calibration. However, Method 4 is additionally required when the measuring device is used over a very large working range and there is a suspicion that accuracy is not constant across the entire range (e.g., with non-linear sensor characteristics).
At least 5 reference parts (g ≥ 5) are selected that cover the entire working range of the measuring equipment. Each of these parts is measured at least 12 times (m ≥ 12). A regression line is calculated from the mean values of the deviations.
The AIAG MSA test checks purely statistically whether the slope and intercept of the regression line deviate significantly from zero. The relationship to technical tolerance is often neglected. A very precise device can fail statistically (because the smallest deviations are significant) even though they are technically irrelevant. Conversely, a device with high variation can pass because the confidence intervals are enormous.
A recommended alternative is, instead of the complex regression according to MSA, to perform Method 1 multiple times at various support points of the measurement range (e.g., at the lower, middle, and upper range of the tolerance field). This ensures that the requirements for Cg and Cgk are met at each point, which is more technically relevant than the pure linearity test.
According to AIAG MSA, the system is linear when the "zero line" (no deviation) lies entirely within the 95% confidence bands of the regression line. This means that neither the constant deviation (intercept) nor the variable deviation (slope) are statistically significant.
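The core of the linearity study is a least-squares line fitted to the mean bias at each reference part. This sketch shows only the slope and intercept of that line (the 95% confidence bands required for the formal AIAG decision are omitted); the reference values and biases are invented:

```python
import statistics

def linearity_fit(references, mean_biases):
    """Least-squares line bias = a + b * reference over the working range.
    AIAG linearity holds when the zero line stays inside the 95%
    confidence bands of this line; only a and b are computed here."""
    x_bar = statistics.mean(references)
    y_bar = statistics.mean(mean_biases)
    sxx = sum((x - x_bar) ** 2 for x in references)
    sxy = sum((x - x_bar) * (y - y_bar)
              for x, y in zip(references, mean_biases))
    b = sxy / sxx
    a = y_bar - b * x_bar
    return a, b

# 5 reference parts across the working range; mean bias from >= 12 repeats each
refs = [2.0, 4.0, 6.0, 8.0, 10.0]
biases = [0.003, 0.002, 0.000, -0.002, -0.004]  # illustrative data only
a, b = linearity_fit(refs, biases)
print(f"intercept = {a:.4f}, slope = {b:.5f}")
```

A slope clearly different from zero means the bias changes across the working range, i.e., a linearity error.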

Method 5: Stability

Method 5 monitors the long-term stability of the measurement process. While Methods 1-3 are snapshots, Method 5 ensures that the measurement system continues to deliver correct values over weeks and months (despite temperature changes, wear, contamination). It works similarly to an SPC control chart (Statistical Process Control).
Especially for electronic measurement systems that can drift, for mechanical wear, or when environmental conditions (temperature, humidity) have a critical influence. Continuous monitoring is also advisable for measurement processes that only marginally achieved capability in Methods 1-3.
A so-called stability part (reference part or standard) is measured at regular intervals. It is important that this part itself is stable and is not altered by the measurement (no destruction).
Typically, a mean chart (x̄ chart) is used to monitor trueness (bias drift) and a standard deviation chart (s chart) is used to monitor precision (wear/looseness). Alternatively, for small samples, an individual values chart can be used.
There are two approaches: 1. Based on a preliminary study (historical process data). 2. Based on the tolerance (e.g., it is often specified that the standard deviation s may be at most 2.5% of the tolerance). The limits define the range in which the measurement process is considered "in control".
"Out of limits": a point lies outside the red control limits (UCL/LCL). "Trend": 7 consecutive points are increasing or decreasing. "Run": 7 consecutive points lie on the same side of the center line. Such patterns indicate systematic changes (e.g., drift, contamination).
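These alarm rules are straightforward to check programmatically. A minimal sketch (the function and the drifting example series are invented for illustration):

```python
def out_of_control(values, center, ucl, lcl):
    """Flags the stability-chart alarms: a point outside UCL/LCL,
    a run of 7 on one side of the center line, or a trend of
    7 strictly rising or falling points."""
    alarms = []
    if any(v > ucl or v < lcl for v in values):
        alarms.append("point outside control limits")
    for i in range(len(values) - 6):
        w = values[i:i + 7]
        if all(v > center for v in w) or all(v < center for v in w):
            alarms.append("run of 7 on one side")
            break
    for i in range(len(values) - 6):
        w = values[i:i + 7]
        if (all(w[j] < w[j + 1] for j in range(6))
                or all(w[j] > w[j + 1] for j in range(6))):
            alarms.append("trend of 7")
            break
    return alarms

# Drifting stability part: the mean creeps upward -> trend alarm
drift = [10.0, 10.01, 10.02, 10.04, 10.05, 10.07, 10.08]
alarms = out_of_control(drift, center=10.0, ucl=10.15, lcl=9.85)
print(alarms)
```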
The interval depends on stability. During the ramp-up phase, measurements are often taken more frequently (e.g., several times per shift). If the chart shows stable values over a long period, the interval can be extended (e.g., once per day or week). The interval must be chosen so that faulty measurements are detected promptly to minimize recall actions.

Method 6: Gauges

These are characteristics that could physically be measured continuously (e.g., a diameter) but are only evaluated attributively (discretely) in the process. A classic example is inspection with a limit gauge plug: the result is only "Pass" or "Fail" (OK/NOK), even though the diameter actually has a measurable value.
Attributive inspections have very low information density. You only know "good" or "bad" but not "how good" or "how close." To statistically verify low ppm defect rates, extremely large sample sizes would be needed, which are hardly economically justifiable. Measuring methods are therefore always preferable.
A reference batch of 50 parts is used. These parts are first precisely measured to determine their true continuous value (reference value). They are then inspected multiple times with the attributive gauge (e.g., limit gauge).
When inspecting the 50 parts, you will find that parts far from the boundary are always correctly identified. However, parts very close to the tolerance limit are sometimes evaluated as "Pass" and sometimes as "Fail." The range of reference values in which these inconsistent evaluations occur is the uncertainty range (intermediate zone).
The width of this uncertainty range (d) is interpreted as a measure of the inspection process variation. Analogous to Method 2, this width is related to the tolerance T. The formula for the capability index corresponds to the %GRR approach.
The criteria are analogous to the measuring methods: calculated %GRR ≤ 10%: capable. 10% < %GRR ≤ 30%: conditionally capable. Above that: not capable.
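The evaluation can be sketched as follows: the width d of the intermediate zone (the span of reference values with inconsistent Pass/Fail verdicts) is related to the tolerance. This is a simplified illustration of the idea, not the exact normative formula; the example values are invented:

```python
def method6_grr(inconsistent_refs, tolerance):
    """Simplified Method 6 sketch: parts judged inconsistently
    (sometimes Pass, sometimes Fail) span the uncertainty range d;
    analogous to %GRR, d is related to the tolerance T."""
    d = max(inconsistent_refs) - min(inconsistent_refs)
    return d / tolerance * 100

# Reference values of parts near the limit that got mixed Pass/Fail verdicts
mixed = [10.096, 10.098, 10.101, 10.104]  # illustrative data only
pct = method6_grr(mixed, tolerance=0.2)
print(f"%GRR = {pct:.1f}%")  # <= 10% would be capable
```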

Method 7: Attributive

Method 7 is used for purely qualitative characteristics where no measured values exist. Examples are visual inspections for scratches, color deviations, voids, completeness checks, or sound inspections ("Sounds good/bad").
A clearly defined standard, usually in the form of a boundary sample catalog. This must contain physical parts or photographs that clearly define what is still "Pass" and what is already "Fail" to minimize the subjectivity of the inspectors.
At least 50 parts are needed (preferably 100+). The batch should contain a mix of clearly good parts, clearly bad parts, and especially borderline cases. The composition should ideally reflect the real defect distribution in production. The "true" values (reference) must be established by experts in advance.
At least 2 (preferably 3) inspectors evaluate all parts of the reference batch independently. Each inspector must inspect the batch at least 2 times (preferably 3 times). It is important that the parts are reshuffled for each round (randomization) so that the inspectors cannot remember the results.
Kappa is a statistical measure of agreement between evaluations. The simple percentage of agreement is misleading, as a certain level of agreement would be achieved by pure guessing. Kappa corrects this chance factor and indicates how much the agreement exceeds pure chance.
Fleiss' Kappa is used. Unlike Cohen's Kappa (which only compares 2 inspectors), Fleiss' Kappa is suitable for any number of inspectors and categories and is therefore more universally applicable.
1. Repeatability (Within Inspector): Does the inspector agree with themselves on repeated evaluations? 2. Reproducibility (Between Inspectors): Do the inspectors agree with each other? 3. Accuracy (Against Reference): Do the inspectors agree with the expert standard (reference value)?
κ ≥ 0.9: Inspection process is capable (very good agreement). 0.7 ≤ κ < 0.9: Inspection process is conditionally capable (improvement needed). κ < 0.7: Inspection process is not capable (agreement too low).
A value of 0 means that the agreement is purely random. The inspectors essentially guessed. A value of 1 would be perfect agreement. Negative values indicate systematic disagreement (worse than chance).
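Fleiss' Kappa itself can be computed from per-part category counts. A minimal sketch (not my8data's implementation; the six-part example batch is invented):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for N parts, each rated by n raters into k
    categories. `ratings` holds per-part category counts, e.g. [OK, NOK]."""
    N = len(ratings)
    n = sum(ratings[0])  # ratings per part (must be constant)
    # Observed agreement per part, averaged over all parts
    p_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N
    # Chance agreement from the overall category proportions
    k = len(ratings[0])
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# 6 parts, 3 raters each, categories [OK, NOK] -- illustrative data only
counts = [[3, 0], [3, 0], [0, 3], [0, 3], [2, 1], [1, 2]]
kappa = fleiss_kappa(counts)
print(f"kappa = {kappa:.2f}")
```

In this example the four clear parts are rated unanimously, but the two borderline parts split the raters, pulling kappa well below the 0.7 threshold.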
Instead of having inspectors decide between "Good," "Rework," and "Scrap" (3 categories in one step), it is often better to first check "Pass / Fail" and in a second step subdivide "Fail" into "Rework / Scrap." This reduces the complexity of the decision and usually increases consistency (Kappa).

MSA: Practical Tips

Do not replace the measuring device immediately! First, a root cause analysis must be performed (e.g., Ishikawa, 5-Why). Is it due to the equipment (EV too high)? Is it due to the operators (AV too high, training needed)? Or is it due to the fixture/environment? Often it is simple handling errors or contaminated parts.
The process may be used provisionally, but only under conditions. This usually means: increased inspection frequency, multiple measurements with averaging to reduce variation, or intensive training of operators. The goal must be to bring the process into the capable range through corrective actions.
Yes, mathematically a larger tolerance immediately improves the capability indices (Cgk, %GRR), since the tolerance is in the denominator. However, this is only permissible if design/engineering confirms that the function of the component is also guaranteed with the widened tolerance.
If a measurement process is not (yet) capable but is urgently needed (e.g., production start), a temporary special release can be granted. This must include risk mitigation measures (e.g., 100% inspection, counter-measurement in the laboratory) and a timeline for resolving the deficiency.
Since the distance to the tolerance limit is critical, a safety margin must be maintained. The manufacturing tolerance is narrowed by the measurement uncertainty (e.g., U, 4·s, or Ukal): USLnew = USL − uncertainty. If production stays within this narrowed range, you are on the safe side despite measurement uncertainty.
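This guard-banding reduces to pulling both specification limits inward by the uncertainty. A minimal sketch (function name and example values are invented):

```python
def narrowed_limits(lsl, usl, uncertainty):
    """Guard-banded acceptance range: specification limits are pulled
    in by the measurement uncertainty (or 4*s, or Ukal)."""
    return lsl + uncertainty, usl - uncertainty

lsl_new, usl_new = narrowed_limits(lsl=9.90, usl=10.10, uncertainty=0.01)
print(lsl_new, usl_new)
```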
In Method 1, an ideal reference standard is measured under ideal conditions (often by experts). In Method 2, real production parts (form errors, roughness) and regular operators (handling variations) are added. Therefore, the variation in Method 2 is almost always larger. Method 1 is the "technical limit," Method 2 is the "reality."
This is a common scenario. It means: the measuring device itself is precise (Method 1 okay), but the application process is inadequate (Method 2 poor). Causes are often: fixture does not fit well for production parts, operators insert parts differently, or the part geometry is problematic. Here, work must be done on the process (fixture, instructions), not on the sensor.
T is the tolerance from the drawing (USL - LSL). T* is an auxiliary value for one-sided characteristics with a natural boundary (e.g., roughness). Here, T* = USL - 0 (natural boundary) is set to be able to apply formulas that require a tolerance range.
Tmin is the theoretically smallest tolerance for which this measurement process would still be considered capable (given Cgk=1.33). If the actual tolerance is smaller than Tmin, the process is not capable. The value helps estimate "how far" you are from the target.
Outliers must not simply be deleted to improve the result. Each outlier must be investigated for its physical cause (e.g., typo, contaminated part, measurement error). Only when a clear cause is found may the value be corrected or re-measured. Otherwise, it belongs to the process variation.
Temperature is often the largest disturbance factor (thermal expansion). Measurements should ideally take place at the reference temperature (20 °C). If this is not possible in production, the measuring device and part must at least have the same temperature (acclimatize) or temperature compensation must be applied.
Measuring: Provides a specific numerical value (quantitative). You know how good or bad the part is. Enables process control. Gauging: Provides only a status message (qualitative, Pass/Fail). You do not know whether the part is borderline. Process control is hardly possible. Therefore, measuring is preferred in quality management.
Methodologically, Method 3 is actually a special case of Method 2 (with number of inspectors k=1). In software, they are often treated similarly. If the operator influence (AV) in Method 2 is negligibly small (≈ 0), the result approaches that of Method 3.
The measurement system is still not suitable. Sufficient resolution (≤ 5% T) is a knockout criterion. If the resolution is too coarse (e.g., only 3-4 steps within the tolerance), the calculated standard deviations are mathematically unreliable and process control does not work ("staircase effect").

Didn't find an answer?

Our support team is happy to help you.