Reliability & Validity


This software module consists of the following templatized tools and techniques:
  • Internal Consistency: This applet allows you to create a set of questionnaire items using short and long descriptions. It also allows you to add respondents and their responses. The data is then used to calculate the inter-item correlations, from which the overall Internal Consistency Reliability is calculated.
  • Inter-Item: This applet allows you to create a set of questionnaire items using short and long descriptions. It also allows you to add respondents and their responses. The data is then used to calculate the correlation among the items, from which the overall Inter-Item Reliability is calculated.
  • Item-Total: This applet allows you to create a set of questionnaire items using short and long descriptions. It also allows you to add respondents and their responses. The data is then used to calculate the correlation among the items and item totals, from which the overall Item-Total Reliability is calculated.
  • Cronbach: This applet allows you to create a set of questionnaire items using short and long descriptions. It also allows you to add respondents and their responses. The data is then used to calculate the Cronbach Alpha Reliability.
  • Equivalence: The Equivalence Reliability applet tests whether two questionnaires are equivalent. The applet allows you to create a set of questionnaire items using short and long descriptions. It also allows you to add respondents and their responses for two item sets (Data A and Data B). The data is then used to calculate the Equivalence Reliability.
  • Stability: The Stability Reliability applet tests whether questionnaire responses are stable over a period of time. The applet allows you to create a set of questionnaire items using short and long descriptions. It also allows you to add respondents and their responses for two item sets (Data A and Data B). The data is then used to calculate the Stability Reliability.
  • Convergent: The Convergent Validity applet tests whether two measures (e.g. a respondent's answer and an observer's answer) both converge on the measured phenomenon (e.g. sleep). This applet allows you to create a set of questionnaire items using short and long descriptions. It also allows you to add respondents and their responses for two item sets (Data A and Data B). The data is then used to calculate the Convergent Validity.
  • Concurrent: The Concurrent Validity applet tests whether two measurement methods concur. This applet allows you to create a set of questionnaire items using short and long descriptions. It also allows you to add respondents and their responses for two item sets (Data A and Data B). The data is then used to calculate the Concurrent Validity.
  • Predictive: The Predictive Validity applet tests whether a measurement method can adequately predict performance at a later time. This applet allows you to create a set of questionnaire items using short and long descriptions. It also allows you to add respondents and their responses for two item sets (Data A and Data B). The data is then used to calculate the Predictive Validity.
  • Divergent: The Divergent Validity applet tests whether two phenomena (e.g. anger and depression) can be distinguished by two measures (e.g. a questionnaire for anger and a questionnaire for depression). This applet allows you to create a set of questionnaire items using short and long descriptions. It also allows you to add respondents and their responses for two item sets (Data A and Data B). The data is then used to calculate the Divergent Validity.
  • Cohen Kappa: Cohen's kappa coefficient is a statistic that measures inter-rater agreement for qualitative (categorical) items. It is a more robust measure than a simple percent-agreement calculation.
  • Fleiss Kappa: Fleiss' kappa is a statistical measure of the reliability of agreement between a fixed number of raters when assigning categorical ratings to a number of items or classifying items. The measure calculates the degree of agreement in classification over that which would be expected by chance.
  • Content Validity Ratio: The Content Validity Ratio refers to the extent to which a measure represents all facets of a given construct. It is essentially a method for gauging agreement among raters or judges regarding how essential a particular item is.
  • Content Validity Index: Content validity refers to how accurately an assessment or measurement tool taps into the various aspects of the specific construct in question. The index quantifies the degree of agreement among experts on the relevance of each item.
  • Rasch Analysis: Rasch Analysis is a method of evaluating a questionnaire so that a person responding to a set of items is assessed independently of the assessor, i.e. invariance of comparisons.
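The Inter-Item applet above rests on pairwise correlations between item columns. As a minimal sketch of that standard calculation (not the module's own code, and using made-up response data), the average inter-item correlation can be computed in plain Python:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def inter_item_reliability(scores):
    """Average pairwise correlation across the item columns of a
    respondents x items matrix."""
    items = list(zip(*scores))
    k = len(items)
    return mean(pearson(items[i], items[j])
                for i in range(k) for j in range(i + 1, k))

# Hypothetical responses: 5 respondents x 4 Likert items (1-5).
data = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
    [1, 2, 1, 2],
]
avg_r = inter_item_reliability(data)  # ≈ 0.87
```

A high average correlation (closer to 1) indicates that the items tend to measure the same underlying construct.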
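The Cronbach applet's statistic has a well-known closed form: alpha = k/(k-1) · (1 − Σ item variances / variance of the respondent totals). A self-contained sketch, again with hypothetical data rather than this module's implementation:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(scores[0])                      # number of items
    item_vars = [variance(col) for col in zip(*scores)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: 5 respondents x 4 Likert items (1-5).
data = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
    [1, 2, 1, 2],
]
alpha = cronbach_alpha(data)  # ≈ 0.96
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency.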
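Cohen's kappa corrects raw percent agreement for the agreement expected by chance: kappa = (p_o − p_e) / (1 − p_e). A sketch of the two-rater case with invented ratings:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no judgments from two raters on 10 items.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
kappa = cohen_kappa(rater_a, rater_b)  # ≈ 0.58
```

Here the raters agree on 8 of 10 items (80%), but because chance alone would produce 52% agreement with these category frequencies, kappa is a more modest 0.58.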
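Fleiss' kappa generalizes the chance-corrected idea to any fixed number of raters, working from a table of category counts per item. A sketch with small invented count tables:

```python
def fleiss_kappa(table):
    """Fleiss' kappa. table[i][j] = number of raters who put item i in
    category j; every row must sum to the same rater count n."""
    n_items = len(table)
    n = sum(table[0])                       # raters per item
    n_cats = len(table[0])
    # Proportion of all assignments falling in each category.
    p_j = [sum(row[j] for row in table) / (n_items * n) for j in range(n_cats)]
    # Per-item agreement, averaged over items.
    p_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in table) / n_items
    p_e = sum(p * p for p in p_j)           # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical counts: 4 items, 3 raters, 2 categories.
perfect = [[3, 0], [0, 3], [3, 0], [0, 3]]   # unanimous on every item
mixed = [[2, 1], [1, 2], [3, 0], [0, 3]]
k_perfect = fleiss_kappa(perfect)   # 1.0
k_mixed = fleiss_kappa(mixed)       # ≈ 0.33
```

Unanimous classification yields kappa = 1, while partial agreement is discounted by what category frequencies alone would predict.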
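The Content Validity Ratio is usually computed with Lawshe's formula, CVR = (n_e − N/2) / (N/2), where n_e panelists out of N rate an item "essential". A sketch with a hypothetical panel:

```python
def content_validity_ratio(ratings):
    """Lawshe's CVR = (n_e - N/2) / (N/2), where n_e is the number of
    panelists rating the item 'essential' out of N panelists."""
    n = len(ratings)
    n_e = sum(r == "essential" for r in ratings)
    return (n_e - n / 2) / (n / 2)

# Hypothetical panel of 10 judges rating one item.
panel = ["essential"] * 7 + ["useful"] * 2 + ["not necessary"]
cvr = content_validity_ratio(panel)  # (7 - 5) / 5 = 0.4
```

CVR ranges from −1 to +1; it is positive when more than half the panel deems the item essential, and items with low CVR are candidates for removal.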
 