-
Question 1
Correct
-
What is another name for the incidence rate?
Your Answer: Incidence density
Explanation:Measures of Disease Frequency: Incidence and Prevalence
Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.
Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.
It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
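The relationship prevalence ≈ incidence × duration can be checked with simple arithmetic. The short Python sketch below is purely illustrative; the incidence rate and disease duration are hypothetical values, not figures from the question bank.

```python
# Steady-state approximation: prevalence ≈ incidence rate × mean disease duration
incidence_rate = 0.002        # hypothetical: 2 new cases per 1,000 person-years
mean_duration_years = 10.0    # hypothetical chronic condition lasting ~10 years on average
prevalence = incidence_rate * mean_duration_years
print(prevalence)             # 0.02, i.e. roughly 2% of the population affected at any one time
```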
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 2
Correct
-
What is a common tool used to help determine the appropriate sample size for qualitative research?
Your Answer: Saturation
Explanation: Qualitative research is a method of inquiry that seeks to understand the meaning and experience dimensions of human lives and social worlds. There are different approaches to qualitative research, such as ethnography, phenomenology, and grounded theory, each with its own purpose, role of the researcher, stages of research, and method of data analysis. The most common methods used in healthcare research are interviews and focus groups. Sampling techniques include convenience sampling, purposive sampling, quota sampling, snowball sampling, and case study sampling. Sample size can be determined by data saturation, which occurs when new categories, themes, or explanations stop emerging from the data. Validity can be assessed through triangulation, respondent validation, bracketing, and reflexivity. Analytical approaches include content analysis and constant comparison.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 3
Incorrect
-
What is the ratio of the risk of stroke within a 3-year period for high-risk psychiatric patients taking the new oral antithrombotic drug compared with those taking warfarin, based on the data below?
Number who had a stroke within a 3-year period vs number without a stroke:
New drug: 10 vs 190
Warfarin: 10 vs 490
Your Answer: 1.2
Correct Answer: 2.5
Explanation: The relative risk (RR) of the event of interest in the exposed group compared to the unexposed group is 2.5.
RR = EER / CER
EER = 10 / 200 = 0.05
CER = 10 / 500 = 0.02
RR = 0.05 / 0.02 = 2.5
This means that the exposed group has a 2.5 times higher risk of experiencing the event compared to the unexposed group.
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
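The same arithmetic used in the worked answer generalises to any 2x2 table of outcome counts. Below is a minimal Python sketch (the `effect_measures` helper is illustrative, not from the source) that reproduces the new-drug-versus-warfarin example and also returns the odds ratio, risk difference, and NNT.

```python
def effect_measures(events_exposed, total_exposed, events_control, total_control):
    """Common measures of effect from 2x2 outcome counts."""
    eer = events_exposed / total_exposed            # experimental (exposed) event rate
    cer = events_control / total_control            # control event rate
    rr = eer / cer                                  # risk ratio (relative risk)
    rd = eer - cer                                  # risk difference
    odds_exposed = events_exposed / (total_exposed - events_exposed)
    odds_control = events_control / (total_control - events_control)
    odds_ratio = odds_exposed / odds_control
    nnt = float("inf") if rd == 0 else 1 / abs(rd)  # number needed to treat (or harm)
    return {"EER": eer, "CER": cer, "RR": rr, "RD": rd, "OR": odds_ratio, "NNT": nnt}

# Worked example from the question: new drug 10 strokes out of 200, warfarin 10 out of 500
print(effect_measures(10, 200, 10, 500))   # RR = 0.05 / 0.02 = 2.5
```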
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 4
Incorrect
-
Which statement about confounding is incorrect?
Your Answer: A confounding factor obscures the relationship between an exposure and an outcome
Correct Answer: In the analytic stage of a study confounding can be controlled for by randomisation
Explanation: Randomisation controls for confounding at the design stage of a study, not the analytic stage. In the analysis stage, confounding is controlled for by techniques such as stratification and multivariate modelling, so the statement that confounding can be controlled for by randomisation at the analytic stage is incorrect.
Stats Confounding
A confounding factor is a factor that can obscure the relationship between an exposure and an outcome in a study. This factor is associated with both the exposure and the disease. For example, in a study that finds a link between coffee consumption and heart disease, smoking could be a confounding factor because it is associated with both drinking coffee and heart disease. Confounding occurs when there is a non-random distribution of risk factors in the population, such as age, sex, and social class.
To control for confounding in the design stage of an experiment, researchers can use randomization, restriction, or matching. Randomization aims to produce an even distribution of potential risk factors in two populations. Restriction involves limiting the study population to a specific group to ensure similar age distributions. Matching involves finding and enrolling participants who are similar in terms of potential confounding factors.
In the analysis stage of an experiment, researchers can control for confounding by using stratification or multivariate models such as logistic regression, linear regression, or analysis of covariance (ANCOVA). Stratification involves creating categories or strata within which the confounding variable does not vary or varies minimally.
Overall, controlling for confounding is important in ensuring that the relationship between an exposure and an outcome is accurately assessed in a study.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 5
Incorrect
-
How do you calculate the positive predictive value accurately?
Your Answer: TP / (TP + FN )
Correct Answer: TP / (TP + FP)
Explanation: Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy reflects the overall proportion of results, positive and negative, that the test gets right, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
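As a concrete illustration of these definitions, the sketch below fills in a 2x2 table with hypothetical counts (not data from the question) and computes the test statistics described above.

```python
def test_statistics(tp, fp, fn, tn):
    """Diagnostic test statistics from a 2x2 table of true/false positives and negatives."""
    sensitivity = tp / (tp + fn)      # proportion of people with the condition who test positive
    specificity = tn / (tn + fp)      # proportion of people without the condition who test negative
    ppv = tp / (tp + fp)              # positive predictive value
    npv = tn / (tn + fn)              # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    lr_positive = sensitivity / (1 - specificity)
    lr_negative = (1 - sensitivity) / specificity
    return {"sensitivity": sensitivity, "specificity": specificity, "PPV": ppv,
            "NPV": npv, "accuracy": accuracy, "LR+": lr_positive, "LR-": lr_negative}

# Hypothetical counts: 40 true positives, 10 false positives, 5 false negatives, 945 true negatives
print(test_statistics(tp=40, fp=10, fn=5, tn=945))
```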
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 6
Incorrect
-
The data collected represents the ratings given by students to the quality of teaching sessions provided by a consultant psychiatrist. The ratings are on a scale of 1-5, with 1 indicating extremely unsatisfactory and 5 indicating extremely satisfactory. The ratings are used to evaluate the effectiveness of the teaching sessions. How is this data best described?
Your Answer: Nominal
Correct Answer: Ordinal
Explanation: The data gathered will be measured on an ordinal scale, where each answer option is ranked. For instance, 2 is considered lower than 4, and 4 is lower than 5. In an ordinal scale, it is not necessary for the difference between 4 (satisfactory) and 2 (unsatisfactory) to be the same as the difference between 5 (extremely satisfactory) and 3 (neutral). This is because the numbers convey rank order rather than quantitative magnitude.
Scales of Measurement in Statistics
In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.
Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude of the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.
Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order of hierarchy. Examples of nominal scales include genotype, blood type, and political party.
Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 7
Incorrect
-
A psychologist aims to conduct a qualitative study to explore the experiences of elderly patients referred to the outpatient clinic. To obtain a sample, the psychologist asks the receptionist to hand an invitation to participate in the study to all follow-up patients who attend for an appointment. The recruitment phase continues until a total of 30 elderly individuals agree to be in the study.
How is this sampling method best described?
Your Answer: Chain referral sampling
Correct Answer: Opportunistic sampling
Explanation: Qualitative research is a method of inquiry that seeks to understand the meaning and experience dimensions of human lives and social worlds. There are different approaches to qualitative research, such as ethnography, phenomenology, and grounded theory, each with its own purpose, role of the researcher, stages of research, and method of data analysis. The most common methods used in healthcare research are interviews and focus groups. Sampling techniques include convenience sampling, purposive sampling, quota sampling, snowball sampling, and case study sampling. Sample size can be determined by data saturation, which occurs when new categories, themes, or explanations stop emerging from the data. Validity can be assessed through triangulation, respondent validation, bracketing, and reflexivity. Analytical approaches include content analysis and constant comparison.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 8
Incorrect
-
Which of the following is the correct description of construct validity?
Your Answer: Construct validity is the degree to which the conclusions in a study would hold for other persons in other places and at other times
Correct Answer: A test has good construct validity if it has a high correlation with another test that measures the same construct
Explanation:Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 9
Incorrect
-
What factors affect the statistical power of a study?
Your Answer: Observation bias
Correct Answer: Sample size
Explanation: A study with a greater sample size has higher power, meaning it is more capable of detecting a clinically relevant difference or effect.
The Importance of Power in Statistical Analysis
Power is a crucial concept in statistical analysis as it helps researchers determine the number of participants needed in a study to detect a clinically significant difference or effect. It represents the probability of correctly rejecting the null hypothesis when it is false, which means avoiding a Type II error. Power values range from 0 to 1, with 0 indicating 0% and 1 indicating 100%. A power of 0.80 is generally considered the minimum acceptable level.
Several factors influence the power of a study, including sample size, effect size, and significance level. Larger sample sizes lead to more precise parameter estimations and increase the study’s ability to detect a significant effect. Effect size, which is determined at the beginning of a study, refers to the size of the difference between two means that leads to rejecting the null hypothesis. Finally, the significance level, also known as the alpha level, represents the probability of a Type I error. By considering these factors, researchers can optimize the power of their studies and increase the likelihood of detecting meaningful effects.
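One way to see how power grows with sample size is to compute it directly for a two-proportion comparison using the usual normal approximation. The event rates below are hypothetical, and the formula is a standard textbook approximation rather than anything taken from the question bank; it assumes SciPy is available.

```python
from scipy.stats import norm

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test (normal approximation)."""
    p_bar = (p1 + p2) / 2
    se_null = (2 * p_bar * (1 - p_bar) / n_per_group) ** 0.5                         # SE under the null
    se_alt = (p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group) ** 0.5      # SE under the alternative
    z_crit = norm.ppf(1 - alpha / 2)                                                 # critical value for alpha
    effect = abs(p1 - p2)
    return norm.cdf((effect - z_crit * se_null) / se_alt)

# Power rises with sample size for the same underlying difference (20% vs 10% event rates)
for n in (50, 100, 200, 400):
    print(n, round(power_two_proportions(0.20, 0.10, n), 2))
```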
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 10
Incorrect
-
On which type of validity does the phenomenon of regression towards the mean have the greatest influence?
Your Answer: Criterion validity
Correct Answer: Internal validity
Explanation:Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 11
Incorrect
-
Which of the following refers to the proportion of people scoring positive on a test who actually have the condition?
Your Answer: Accuracy
Correct Answer: Positive predictive value
Explanation: Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy reflects the overall proportion of results, positive and negative, that the test gets right, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
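The pre-test to post-test step mentioned above (the calculation a Fagan nomogram performs graphically) is simple to do numerically. The sketch below uses a hypothetical pre-test probability and likelihood ratio chosen purely for illustration.

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert a pre-test probability into a post-test probability via odds and a likelihood ratio."""
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)   # probability -> odds
    post_test_odds = pre_test_odds * likelihood_ratio     # apply the likelihood ratio
    return post_test_odds / (1 + post_test_odds)          # odds -> probability

# Hypothetical example: 20% pre-test probability and a positive likelihood ratio of 8
print(round(post_test_probability(0.20, 8), 2))           # pre-test odds 0.25 -> post-test odds 2.0 -> ~0.67
```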
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 12
Correct
-
A study reports a p-value of 0.07. How often would a difference of this size be expected to occur by chance alone?
Your Answer: 1 in 14 times
Explanation: The conventional cutoff is a p-value of 0.05, which corresponds to a probability of 1 in 20 that the observed difference arose by chance. The answer here, 1 in 14 times, is equivalent to a p-value of approximately 0.07.
Understanding Hypothesis Testing in Statistics
In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.
The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference reflects a real, non-random effect. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish any difference or a change in one particular direction.
Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.
P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large as or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, statistical significance does not imply clinical significance, as the difference may be too small to be meaningful.
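To make the mechanics concrete, the sketch below simulates two groups, runs an independent-samples t-test with SciPy, and compares the resulting p-value with a 0.05 significance level. The data are randomly generated for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)   # simulated scores, true mean 10
group_b = rng.normal(loc=11.0, scale=2.0, size=30)   # simulated scores, true mean 11

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # two-sided independent-samples t-test
alpha = 0.05                                         # conventional significance level
print(f"p = {p_value:.3f}; reject the null hypothesis: {p_value < alpha}")
```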
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 13
Incorrect
-
What is the most accurate definition of 'opportunity cost'?
Your Answer: The cost incurred by failing to take advantage of good opportunities
Correct Answer: The forgone benefit that would have been derived by an option not chosen
Explanation:Opportunity Cost in Economics: Understanding the Value of Choices
Opportunity cost is a crucial concept in economics that helps us make informed decisions. It refers to the value of the next-best alternative that we give up when we choose one option over another. This concept is particularly relevant when we have limited resources, such as a fixed budget, and need to make choices about how to allocate them.
For instance, if we decide to spend our money on antidepressants, we cannot use that same money to pay for cognitive-behavioral therapy (CBT). Both options have a value, but we have to choose one over the other. The opportunity cost of choosing antidepressants over CBT is the value of the benefits we would have received from CBT but did not because we chose antidepressants instead.
To compare the opportunity cost of different choices, economists often use quality-adjusted life years (QALYs). QALYs measure the value of health outcomes in terms of both quantity (life years gained) and quality (health-related quality of life). By using QALYs, we can compare the opportunity cost of different healthcare interventions and choose the one that provides the best value for our resources.
In summary, understanding opportunity cost is essential for making informed decisions in economics and healthcare. By recognizing the value of the alternatives we give up, we can make better choices and maximize the benefits we receive from our limited resources.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 14
Correct
-
The research team is studying the effectiveness of a new treatment for a certain medical condition. They have found that the brand name medication Y and its generic version Y1 have similar efficacy. They approach you for guidance on what type of analysis to conduct next. What would you suggest?
Your Answer: Cost minimisation analysis
Explanation: Cost minimisation analysis is employed to compare net costs when the observed effects of health care interventions are similar. To conduct this analysis, it is necessary to have clinical evidence demonstrating that the differences in health effects between alternatives are negligible or insignificant. This approach is commonly used by institutions like the National Institute for Health and Care Excellence (NICE).
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 15
Incorrect
-
What study design would be most suitable for investigating the potential association between childhood obesity in girls and the risk of polycystic ovarian syndrome, while also providing the strongest evidence for this link?
Your Answer: Cross-over trial
Correct Answer: Cohort study
Explanation:An RCT is not feasible in this situation, but a cohort study would be more reliable than a case-control study in generating evidence.
Types of Primary Research Studies and Their Advantages and Disadvantages
Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.
Type of question and best type of study:
- Therapy: randomized controlled trial (RCT), cohort, case-control, case series
- Diagnosis: cohort studies with comparison to a gold standard test
- Prognosis: cohort studies, case-control, case series
- Etiology/harm: RCT, cohort studies, case-control, case series
- Prevention: RCT, cohort studies, case-control, case series
- Cost: economic analysis
Study type, advantages, and disadvantages:
Randomized Controlled Trial – Unbiased distribution of confounders – Blinding more likely – Randomization facilitates statistical analysis – Expensive – Time-consuming – Volunteer bias – Ethically problematic at times
Cohort Study – Ethically safe – Subjects can be matched – Can establish timing and directionality of events – Eligibility criteria and outcome assessments can be standardized – Administratively easier and cheaper than RCT – Controls may be difficult to identify – Exposure may be linked to a hidden confounder – Blinding is difficult – Randomization not present – For rare disease, large sample sizes or long follow-up necessary
Case-Control Study – Quick and cheap – Only feasible method for very rare disorders or those with a long lag between exposure and outcome – Fewer subjects needed than cross-sectional studies – Reliance on recall or records to determine exposure status – Confounders – Selection of control groups is difficult – Potential bias: recall, selection
Cross-Sectional Survey – Cheap and simple – Ethically safe – Establishes association at most, not causality – Recall bias susceptibility – Confounders may be unequally distributed – Neyman bias – Group sizes may be unequal
Ecological Study – Cheap and simple – Ethically safe – Ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals)
In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 16
Correct
-
If the new antihypertensive therapy is implemented for the secondary prevention of stroke, it would result in an absolute annual risk reduction of 0.5% compared to conventional therapy. However, the cost of the new treatment is £100 more per patient per year. Therefore, what would the cost of implementing the new therapy be for each stroke prevented?
Your Answer: £20,000
Explanation:The new drug reduces the annual incidence of stroke by 0.5% and costs £100 more than conventional therapy. This means that for every 200 patients treated, one stroke would be prevented with the new drug compared to conventional therapy. The Number Needed to Treat (NNT) is 200 per year to prevent one stroke. Therefore, the annual cost of this treatment to prevent one stroke would be £20,000, which is the cost of treating 200 patients with the new drug (£100 x 200).
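The working above amounts to NNT = 1/ARR followed by multiplying by the extra cost per patient. A minimal sketch of that calculation, using the figures given in the question, might look like this (the helper function is illustrative, not from the source):

```python
def cost_per_event_prevented(arr_per_year, extra_cost_per_patient_per_year):
    """Cost of preventing one event, given an annual absolute risk reduction (ARR)."""
    nnt = 1 / arr_per_year                              # patients treated for one year per event prevented
    return nnt, nnt * extra_cost_per_patient_per_year

nnt, cost = cost_per_event_prevented(arr_per_year=0.005, extra_cost_per_patient_per_year=100)
print(nnt, cost)   # 200 patients at £100 extra each = £20,000 per stroke prevented per year
```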
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 17
Incorrect
-
What is the probability that a person who tests negative on the new Mephedrone screening test does not actually use Mephedrone?
Your Answer: 172/175
Correct Answer: 172/177
Explanation:Negative predictive value = 172 / 177
Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy reflects the overall proportion of results, positive and negative, that the test gets right, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 18
Incorrect
-
What is the term used to describe a graph that can be utilized to identify publication bias?
Your Answer:
Correct Answer: Funnel plot
Explanation:Stats Publication Bias
Publication bias refers to the tendency for studies with positive findings to be published more often than studies with negative findings, leading to incomplete data sets in meta-analyses and erroneous conclusions. Graphical methods such as funnel plots, Galbraith plots, ordered forest plots, and normal quantile plots can be used to detect publication bias. Funnel plots are the most commonly used and offer an easy visual way to ensure that the published literature is evenly weighted. The x-axis represents the effect size, and the y-axis represents the study size. A symmetrical, inverted funnel shape indicates that publication bias is unlikely, while an asymmetrical funnel indicates a relationship between treatment effect and study size, suggesting either publication bias or small-study effects.
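A funnel plot is straightforward to draw once each study's effect size and size (or precision) are known. The sketch below simulates a set of unbiased studies with matplotlib, purely to show the expected symmetrical funnel; the effect sizes, sample sizes, and standard-error formula are all invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_studies = 40
sample_sizes = rng.integers(20, 500, size=n_studies)   # hypothetical study sizes
true_effect = 0.4                                       # assumed common underlying effect size
standard_errors = 2 / np.sqrt(sample_sizes)             # rough SE that shrinks as studies get larger
observed_effects = rng.normal(true_effect, standard_errors)

plt.scatter(observed_effects, sample_sizes)
plt.axvline(true_effect, linestyle="--")                # reference line at the assumed true effect
plt.xlabel("Observed effect size")
plt.ylabel("Study size (n)")
plt.title("Funnel plot: a symmetrical funnel suggests little publication bias")
plt.show()
```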
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 19
Incorrect
-
What statement accurately describes percentiles?
Your Answer:
Correct Answer: Q1 is the 25th percentile
Explanation: Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
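The quantities listed above are all one-liners with NumPy. The scores below are a hypothetical data set used only to show the calculations; note that Q1 is simply the 25th percentile.

```python
import numpy as np

scores = np.array([12, 15, 15, 17, 18, 20, 21, 21, 22, 25], dtype=float)  # hypothetical data

q1, median, q3 = np.percentile(scores, [25, 50, 75])   # Q1 is the 25th percentile, Q3 the 75th
iqr = q3 - q1                                          # interquartile range
sd = scores.std(ddof=1)                                # sample standard deviation
sem = sd / np.sqrt(len(scores))                        # standard error of the mean
ci_95 = (scores.mean() - 1.96 * sem, scores.mean() + 1.96 * sem)  # approximate 95% confidence interval

print(q1, median, q3, iqr, round(sd, 2), round(sem, 2), ci_95)
```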
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 20
Incorrect
-
What is the term used to describe the likelihood of correctly rejecting the null hypothesis when it is actually false?
Your Answer:
Correct Answer: Power of the test
Explanation:Understanding Hypothesis Testing in Statistics
In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.
The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference reflects a real, non-random effect. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish any difference or a change in one particular direction.
Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.
P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large as or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, statistical significance does not imply clinical significance, as the difference may be too small to be meaningful.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 21
Incorrect
-
A new medication aimed at preventing age-related macular degeneration (AMD) is being tested in clinical trials. One hundred patients over the age of 60 with early signs of AMD are given the new medication. Over a three month period, 10 of these patients experience progression of their AMD. In the control group, there are 300 patients over the age of 60 with early signs of AMD who are given a placebo. During the same time period, 50 of these patients experience progression of their AMD. What is the relative risk of AMD progression while taking the new medication?
Your Answer:
Correct Answer: 0.6
Explanation:The relative risk (RR) is calculated by dividing the exposure event rate (EER) by the control event rate (CER). In this case, the EER is 10 out of 100 (0.10) and the CER is 50 out of 300 (0.166). Therefore, the RR is calculated as 0.10 divided by 0.166, which equals 0.6.
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 22
Incorrect
-
How can confounding be controlled during the analysis stage of a study?
Your Answer:
Correct Answer: Stratification
Explanation: Stratification is a method of managing confounding by dividing the data into two or more groups (strata) within which the confounding variable remains constant or varies minimally.
Types of Bias in Statistics
Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.
There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is incorrect because there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, the Hawthorne effect, and the ecological fallacy are all subtypes of information bias.
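Stratification can be illustrated with the coffee-and-smoking example used elsewhere in this section. The counts below are invented so that smoking fully explains the crude association; the point is only to show how stratum-specific estimates differ from the crude one.

```python
def risk_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Risk ratio comparing exposed with unexposed groups."""
    return (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

# Hypothetical coffee / heart disease counts, confounded by smoking (exposed = coffee drinkers)
strata = {
    "smokers":     dict(events_exposed=38, n_exposed=190, events_unexposed=2,  n_unexposed=10),
    "non-smokers": dict(events_exposed=1,  n_exposed=10,  events_unexposed=19, n_unexposed=190),
}

crude_rr = risk_ratio(38 + 1, 190 + 10, 2 + 19, 10 + 190)
print("crude RR:", round(crude_rr, 2))                     # ~1.86: coffee appears to raise risk
for name, counts in strata.items():
    print(name, "RR:", round(risk_ratio(**counts), 2))      # 1.0 in each stratum once smoking is held constant
```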
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 23
Incorrect
-
What methods are most effective in determining interobserver agreement?
Your Answer:
Correct Answer: Kappa
Explanation: Kappa is used to assess the level of agreement (inter-rater reliability) between different raters.
Understanding the Kappa Statistic for Measuring Interobserver Variation
The kappa statistic, also known as Cohen’s kappa coefficient, is a useful tool for quantifying the level of agreement between independent observers. This measure can be applied in any situation where multiple observers are evaluating the same thing, such as in medical diagnoses or research studies. A kappa of 1 indicates perfect agreement, while a kappa of 0 indicates agreement no better than chance (negative values indicate less agreement than expected by chance). By using the kappa statistic, researchers and practitioners can gain insight into the level of interobserver variation present in their data, which can help to improve the accuracy and reliability of their findings. Overall, the kappa statistic is a valuable tool for understanding and measuring interobserver variation in a variety of contexts.
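Cohen's kappa can be computed directly from two raters' labels by comparing observed agreement with the agreement expected by chance. The ratings below are hypothetical; the helper function is a plain-Python sketch rather than a library call.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels to the same items."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n          # observed agreement
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)        # chance-expected agreement
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses from two raters assessing the same 10 patients
rater_1 = ["dep", "dep", "anx", "dep", "anx", "dep", "anx", "anx", "dep", "dep"]
rater_2 = ["dep", "dep", "anx", "anx", "anx", "dep", "anx", "dep", "dep", "dep"]
print(round(cohens_kappa(rater_1, rater_2), 2))
```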
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 24
Incorrect
-
What type of bias is commonly associated with case-control studies?
Your Answer:
Correct Answer: Recall bias
Explanation:Types of Bias in Statistics
Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.
There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is incorrect because there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, the Hawthorne effect, and the ecological fallacy are all subtypes of information bias.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 25
Incorrect
-
A study examines the benefits of adding an intensive package of dialectic behavioural therapy (DBT) to standard care following an episode of serious self-harm in adolescents. The following results are obtained:
Percentage of adolescents having a further episode of serious self-harm within 3 months:
Standard care: 4%
Standard care and intensive DBT: 3%
What is the number needed to treat to prevent one adolescent having a further episode of serious self-harm within 3 months?
Your Answer:
Correct Answer: 100
Explanation: The absolute risk reduction (ARR) is the difference between the control event rate (CER) of 0.04 and the experimental event rate (EER) of 0.03, i.e. 0.01. The number needed to treat (NNT) is 1/ARR = 1/0.01 = 100, meaning that for every 100 patients treated, one patient will benefit from the treatment.
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 26
Incorrect
-
What is a true statement about statistical power?
Your Answer:
Correct Answer: The larger the sample size of a study the greater the power
Explanation:The Importance of Power in Statistical Analysis
Power is a crucial concept in statistical analysis as it helps researchers determine the number of participants needed in a study to detect a clinically significant difference or effect. It represents the probability of correctly rejecting the null hypothesis when it is false, which means avoiding a Type II error. Power values range from 0 to 1, with 0 indicating 0% and 1 indicating 100%. A power of 0.80 is generally considered the minimum acceptable level.
Several factors influence the power of a study, including sample size, effect size, and significance level. Larger sample sizes lead to more precise parameter estimations and increase the study’s ability to detect a significant effect. Effect size, which is determined at the beginning of a study, refers to the size of the difference between two means that leads to rejecting the null hypothesis. Finally, the significance level, also known as the alpha level, represents the probability of a Type I error. By considering these factors, researchers can optimize the power of their studies and increase the likelihood of detecting meaningful effects.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 27
Incorrect
-
What level of kappa score indicates complete agreement between two observers?
Your Answer:
Correct Answer: 1
Explanation:Understanding the Kappa Statistic for Measuring Interobserver Variation
The kappa statistic, also known as Cohen’s kappa coefficient, is a useful tool for quantifying the level of agreement between independent observers. This measure can be applied in any situation where multiple observers are evaluating the same thing, such as in medical diagnoses or research studies. A kappa of 1 indicates perfect agreement, while a kappa of 0 indicates agreement no better than chance (negative values indicate less agreement than expected by chance). By using the kappa statistic, researchers and practitioners can gain insight into the level of interobserver variation present in their data, which can help to improve the accuracy and reliability of their findings. Overall, the kappa statistic is a valuable tool for understanding and measuring interobserver variation in a variety of contexts.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 28
Incorrect
-
What is the significance of the cut off of 5 on the MDQ in diagnosing depression?
Your Answer:
Correct Answer: The optimal threshold
Explanation:The threshold score that results in the lowest misclassification rate, achieved by minimizing both false positive and false negative rates, is known as the optimal threshold. Based on the findings of the previous study, the ideal cut off for identifying caseness on the MDQ is five, making it the optimal threshold.
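Finding the optimal threshold amounts to trying each candidate cut-off and counting the misclassifications it produces. The questionnaire scores below are hypothetical (they are not MDQ data); they are chosen only so the sketch has something to minimise over.

```python
def optimal_threshold(scores_cases, scores_noncases, thresholds):
    """Return the cut-off minimising total misclassification (false negatives + false positives)."""
    best_cutoff, fewest_errors = None, float("inf")
    for t in thresholds:
        false_negatives = sum(s < t for s in scores_cases)       # cases scoring below the cut-off
        false_positives = sum(s >= t for s in scores_noncases)   # non-cases scoring at or above it
        errors = false_negatives + false_positives
        if errors < fewest_errors:
            best_cutoff, fewest_errors = t, errors
    return best_cutoff, fewest_errors

# Hypothetical questionnaire scores for cases and non-cases
cases = [4, 5, 6, 7, 7, 8, 9, 5, 6, 10]
non_cases = [1, 2, 3, 3, 4, 2, 5, 1, 0, 4]
print(optimal_threshold(cases, non_cases, thresholds=range(0, 11)))
```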
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 29
Incorrect
-
Which of the following is not considered a crucial factor according to Wilson and Jungner when implementing a screening program?
Your Answer:
Correct Answer: The condition should be potentially curable
Explanation: Wilson and Jungner Criteria for Screening
1. The condition should be an important public health problem.
2. There should be an acceptable treatment for patients with recognised disease.
3. Facilities for diagnosis and treatment should be available.
4. There should be a recognised latent or early symptomatic stage.
5. The natural history of the condition, including its development from latent to declared disease should be adequately understood.
6. There should be a suitable test or examination.
7. The test or examination should be acceptable to the population.
8. There should be agreed policy on whom to treat.
9. The cost of case-finding (including diagnosis and subsequent treatment of patients) should be economically balanced in relation to the possible expenditure as a whole.
10. Case-finding should be a continuous process and not a ‘once and for all’ project.
The Wilson and Jungner criteria provide a framework for evaluating the suitability of a screening program for a particular condition. The criteria emphasize the importance of the condition as a public health problem, the availability of effective treatment, and the feasibility of diagnosis and treatment. Additionally, the criteria highlight the importance of understanding the natural history of the condition and the need for a suitable test or examination that is acceptable to the population. The criteria also stress the importance of having agreed policies on whom to treat and ensuring that the cost of case-finding is economically balanced. Finally, the criteria emphasize that case-finding should be a continuous process rather than a one-time project. By considering these criteria, public health officials can determine whether a screening program is appropriate for a particular condition and ensure that resources are used effectively.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 30
Incorrect
-
A study examines the effectiveness of adding a new antiplatelet drug to aspirin for patients over the age of 60 who have had a stroke. A total of 170 patients are enrolled, with 120 receiving the new drug in addition to aspirin and the remaining 50 receiving only aspirin. After 5 years, it is found that 18 patients who received the new drug experienced a subsequent stroke, while only 10 patients who received aspirin alone had a further stroke. What is the number needed to treat?
Your Answer:
Correct Answer: 20
Explanation: The experimental event rate (EER) is 18/120 = 0.15 and the control event rate (CER) is 10/50 = 0.20. The absolute risk reduction is 0.20 − 0.15 = 0.05, giving an NNT of 1/0.05 = 20.
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-