-
Question 1
Correct
-
A new clinical trial has found a correlation between alcohol consumption and lung cancer. Considering the well-known link between alcohol consumption and smoking, what is the most probable explanation for this new association?
Your Answer: Confounding
Explanation:The observed link between alcohol consumption and lung cancer is likely due to confounding by cigarette smoking. A confounding variable is one that is associated with both the exposure (here, alcohol consumption) and the outcome (here, lung cancer) without lying on the causal pathway between them.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 2
Incorrect
-
A team of scientists conduct a case control study to investigate the association between birth complications and attempted suicide in individuals aged 18-35 years. They enroll 296 cases of attempted suicide and recruit an equal number of controls who are matched for age, gender, and geographical location. Upon analyzing the birth history, they discover that 67 cases of attempted suicide and 61 controls had experienced birth difficulties. What is the unadjusted odds ratio for attempted suicide in individuals with a history of birth complications?
Your Answer: 2.13
Correct Answer: 1.13
Explanation:Odds Ratio Calculation for Birth Difficulties in Case and Control Groups
The odds ratio is a statistical measure that compares the likelihood of an event occurring in one group to that of another group. In this case, we are interested in the odds of birth difficulties in a case group compared to a control group.
To calculate the odds ratio, we need to determine the number of individuals in each group who had birth difficulties and those who did not. In the case group, 67 individuals had birth difficulties, while 229 did not. In the control group, 61 individuals had birth difficulties, while 235 did not.
Using these numbers, we can calculate the odds ratio as follows:
Odds ratio = (67/229) / (61/235) = 1.13
This means that the odds of a history of birth difficulties are 1.13 times as high in the case group as in the control group.
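The arithmetic above can be reproduced in a few lines (a minimal Python sketch; the figures are taken from the question and the variable names are illustrative):

```python
# 2x2 table from the study: 296 cases, 296 controls
cases_exposed = 67             # cases with birth difficulties
cases_unexposed = 296 - 67     # 229 cases without
controls_exposed = 61          # controls with birth difficulties
controls_unexposed = 296 - 61  # 235 controls without

odds_cases = cases_exposed / cases_unexposed           # 67/229
odds_controls = controls_exposed / controls_unexposed  # 61/235
odds_ratio = odds_cases / odds_controls

print(round(odds_ratio, 2))  # 1.13
```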
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 3
Correct
-
The ICER is utilized in the following methods of economic evaluation:
Your Answer: Cost-effectiveness analysis
Explanation:The acronym ICER stands for incremental cost-effectiveness ratio.
Methods of Economic Evaluation
There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.
Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.
Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.
Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the quality-adjusted life year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.
Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.
Costs in Economic Evaluation Studies
There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 4
Incorrect
-
A research project has a significance level of 0.05, and the obtained p-value is 0.0125. What is the probability of committing a Type I error?
Your Answer: 1/12
Correct Answer: 1/80
Explanation:An observed p-value of 0.0125 means that there is a 1.25% chance of obtaining the observed result by chance, assuming the null hypothesis is true. This also means that the Type I error rate (the probability of falsely rejecting the null hypothesis) is 1.25%, or 1/80. In comparison, a p-value of 0.05 indicates a 5% chance of obtaining the observed result by chance, or a Type I error rate of 1/20.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 5
Incorrect
-
What study method would be most suitable for a researcher tasked with comparing the cost-effectiveness of olanzapine and haloperidol in reducing symptom severity of schizophrenia, as measured by the Positive and Negative Syndrome Scale?
Your Answer: Cost-minimisation analysis
Correct Answer: Cost-effectiveness analysis
Explanation:The task assigned to the researcher is to conduct a cost-effectiveness analysis, which involves comparing two interventions based on their costs and their impact on a single clinical measure of effectiveness, specifically the reduction in symptom severity as measured by the PANSS.
Methods of Economic Evaluation
There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.
Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.
Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.
Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the quality-adjusted life year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.
Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.
Costs in Economic Evaluation Studies
There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 6
Correct
-
A case-control study was conducted to determine if exposure to passive smoking during childhood increases the risk of nicotine dependence. Two groups were recruited: 200 patients with nicotine dependence and 200 controls without nicotine dependence. Among the patients, 40 reported exposure to parental smoking during childhood, while among the controls, 20 reported such exposure. The odds ratio of developing nicotine dependence after being exposed to passive smoking is:
Your Answer: 2.25
Explanation:Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
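Applying these definitions to the passive-smoking data gives the stated answer of 2.25 (a minimal Python sketch; the cross-product form `ad/bc` is algebraically equivalent to the ratio of odds):

```python
# Case-control 2x2 table from the question
a, b = 40, 20    # exposed to parental smoking: cases, controls
c, d = 160, 180  # unexposed: cases, controls (200 minus exposed in each group)

odds_ratio = (a / c) / (b / d)     # odds of exposure in cases vs controls
cross_product = (a * d) / (b * c)  # equivalent "ad/bc" form

print(round(odds_ratio, 2), cross_product)  # 2.25 2.25
```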
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 7
Correct
-
What is a true statement about measures of effect?
Your Answer: Relative risk can be used to measure effect in randomised control trials
Explanation:The use of relative risk is applicable in cohort, cross-sectional, and randomized control trials, but not in case-control studies. In situations where there are no events in the control group, neither the risk ratio nor the odds ratio can be computed. It is important to note that the odds ratio tends to overestimate effects and is always more extreme than the relative risk, moving away from the null value of 1.
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 8
Incorrect
-
What is necessary to compute the standard deviation?
Your Answer: Confidence interval
Correct Answer: Mean
Explanation:The standard deviation represents the typical amount that the data points deviate from the mean.
Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
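These measures can be illustrated with Python's standard `statistics` module (the data set below is made up purely for illustration):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative data set

data_range = max(data) - min(data)     # simplest measure of dispersion: 7
mean = statistics.mean(data)           # 5
variance = statistics.pvariance(data)  # mean squared deviation from the mean: 4
sd = statistics.pstdev(data)           # square root of the variance: 2.0

print(data_range, mean, variance, sd)
```

Note that `pvariance`/`pstdev` use the population formulas (denominator n); `variance`/`stdev` give the sample versions (denominator n − 1).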
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 9
Incorrect
-
What is the NNT for the following study data in a population of patients over the age of 65?
Medication Group vs Control Group
Events: 30 vs 80
Non-events: 120 vs 120
Total subjects: 150 vs 200
Your Answer: 2
Correct Answer: 5
Explanation:To calculate the event rates for the medication and control groups, we divide the number of events by the total number of subjects in each group. For the medication group, the event rate is 0.2 (30/150), and for the control group, it is 0.4 (80/200).
We can also calculate the absolute risk reduction (ARR) by subtracting the event rate in the medication group from the event rate in the control group: ARR = CER – EER = 0.4 – 0.2 = 0.2.
Finally, we can use the ARR to calculate the number needed to treat (NNT), which represents the number of patients who need to be treated with the medication to prevent one additional event compared to the control group. NNT = 1/ARR = 1/0.2 = 5.
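The same steps in code (a minimal Python sketch using the figures from the question):

```python
cer = 80 / 200        # control event rate: 0.4
eer = 30 / 150        # experimental (medication) event rate: 0.2
arr = cer - eer       # absolute risk reduction: 0.2
nnt = round(1 / arr)  # number needed to treat, as a whole number of patients

print(nnt)  # 5
```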
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 10
Incorrect
-
The prevalence of depressive disease in a village with an adult population of 1000 was assessed using a new diagnostic score. The results showed that out of 1000 adults, 200 tested positive for the disease and 800 tested negative. What is the prevalence of depressive disease in this population?
Your Answer: 2%
Correct Answer: 20%
Explanation:The prevalence of the disease is 20% as there are currently 200 cases out of a total population of 1000.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 11
Incorrect
-
A team of scientists aims to perform a systematic review and meta-analysis of the effects of caffeine on sleep quality. They want to determine if there is any variation in the results across the studies they have gathered.
Which of the following is not a technique that can be employed to evaluate heterogeneity?
Your Answer: Q test
Correct Answer: Receiver operating characteristic curve
Explanation:The receiver operating characteristic (ROC) curve is a useful tool for evaluating the diagnostic accuracy of a test in distinguishing between healthy and diseased individuals. It helps to identify the optimal cut-off point between sensitivity and specificity.
Other methods, such as visual inspection of forest plots and Cochran’s Q test, can be used to assess heterogeneity in meta-analysis. Visual inspection of forest plots is a quick and easy method, while Cochran’s Q test is a more formal and widely accepted approach.
For more information on heterogeneity in meta-analysis, further reading is recommended.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 12
Correct
-
How can the prevalence of schizophrenia in the UK population be characterized by the consistent finding of approximately 1%?
Your Answer: Endemic
Explanation:Epidemiology Key Terms
– Epidemic (Outbreak): A rise in disease cases above the anticipated level in a specific population during a particular time frame.
– Endemic: The regular or anticipated level of disease in a particular population.
– Pandemic: An epidemic that affects a significant number of individuals across multiple countries, regions, or continents.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 13
Incorrect
-
What is a characteristic of data that is positively skewed?
Your Answer: Mode < median < mean
Correct Answer:
Explanation:Skewed Data: Understanding the Relationship between Mean, Median, and Mode
When analyzing a data set, it is important to consider the shape of the distribution. In a normally distributed data set, the curve is symmetrical and bell-shaped, with the median, mode, and mean all equal. However, in skewed data sets, the distribution is asymmetrical, with the bulk of the data concentrated on one side of the figure.
In a negatively skewed distribution, the left tail is longer, and the bulk of the data is concentrated to the right of the figure. In contrast, a positively skewed distribution has a longer right tail, with the bulk of the data concentrated to the left of the figure. In both cases, the median is positioned between the mode and the mean, as it represents the halfway point of the distribution.
However, the mean is affected by extreme values or outliers, causing it to move away from the median in the direction of the tail. In positively skewed data, the mean is greater than the median, which is greater than the mode. In negatively skewed data, the mode is greater than the median, which is greater than the mean.
Understanding the relationship between mean, median, and mode in skewed data sets is crucial for accurate data analysis and interpretation. By recognizing the shape of the distribution, researchers can make informed decisions about which measures of central tendency to use and how to interpret their results.
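The ordering described above can be checked on a small made-up sample (Python; the data are illustrative, not from the text):

```python
import statistics

# A positively skewed sample: most values are small, with a long right tail
data = [1, 1, 1, 2, 2, 3, 10]

mode = statistics.mode(data)      # 1
median = statistics.median(data)  # 2
mean = statistics.mean(data)      # 20/7 ≈ 2.86, pulled right by the outlier

assert mode < median < mean       # mode < median < mean in positive skew
```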
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 14
Incorrect
-
Which option is not a type of descriptive statistic?
Your Answer: Variance
Correct Answer: Student's t-test
Explanation:A t-test is a statistical method used to determine if there is a significant difference between the means of two groups. It is a type of statistical inference.
Types of Statistics: Descriptive and Inferential
Statistics can be divided into two categories: descriptive and inferential. Descriptive statistics are used to describe and summarize data without making any generalizations beyond the data at hand. On the other hand, inferential statistics are used to make inferences about a population based on sample data.
Descriptive statistics are useful for identifying patterns and trends in data. Common measures used to describe a data set include measures of central tendency (such as the mean, median, and mode) and measures of variability or dispersion (such as the standard deviation or variance).
Inferential statistics, on the other hand, are used to make predictions or draw conclusions about a population based on sample data. These statistics are also used to determine the probability that observed differences between groups are reliable and not due to chance.
Overall, both descriptive and inferential statistics play important roles in analyzing and interpreting data. Descriptive statistics help us understand the characteristics of a data set, while inferential statistics allow us to make predictions and draw conclusions about larger populations.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 15
Incorrect
-
What is the term used to describe the rate at which new cases of a disease are appearing, calculated by dividing the number of new cases by the total time that disease-free individuals are observed during a study period?
Your Answer: Relative risk
Correct Answer: Incidence rate
Explanation:Measures of Disease Frequency: Incidence and Prevalence
Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.
Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.
It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
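The relationship prevalence = incidence × duration can be sketched with assumed figures (not from the text):

```python
# Assumed example: a chronic disease with 10 new cases per 1000 person-years
# and an average disease duration of 8 years
incidence_rate = 10 / 1000  # new cases per person-year
mean_duration = 8           # years

prevalence = incidence_rate * mean_duration
print(prevalence)  # 0.08, i.e. about 8% of the population affected at any time
```

This illustrates why chronic (long-duration) diseases accumulate a prevalence much greater than their incidence.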
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 16
Incorrect
-
What is the proportion of values that fall within a range of 3 standard deviations from the mean in a normal distribution?
Your Answer: 95.40%
Correct Answer: 99.70%
Explanation:Standard Deviation and Standard Error of the Mean
Standard deviation (SD) and standard error of the mean (SEM) are two important statistical measures used to describe data. SD is a measure of how much the data varies, while SEM is a measure of how precisely we know the true mean of the population. The normal distribution, also known as the Gaussian distribution, is a symmetrical bell-shaped curve that describes the spread of many biological and clinical measurements.
68.3% of the data lies within 1 SD of the mean, 95.4% of the data lies within 2 SD of the mean, and 99.7% of the data lies within 3 SD of the mean. The SD is calculated by taking the square root of the variance and is expressed in the same units as the data set. A low SD indicates that data points tend to be very close to the mean.
On the other hand, SEM is an inferential statistic that quantifies the precision of the mean. It is expressed in the same units as the data and is calculated by dividing the SD of the sample mean by the square root of the sample size. The SEM gets smaller as the sample size increases, and it takes into account both the value of the SD and the sample size.
Both SD and SEM are important measures in statistical analysis, and they are used to calculate confidence intervals and test hypotheses. While SD quantifies scatter, SEM quantifies precision, and both are essential in understanding and interpreting data.
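The relationship between SD and SEM can be shown with the standard library (Python; the sample values are assumed for illustration):

```python
import math
import statistics

sample = [4, 8, 6, 5, 3, 7, 9, 6]  # illustrative sample, n = 8

n = len(sample)
sd = statistics.stdev(sample)  # sample SD (n - 1 denominator): 2.0
sem = sd / math.sqrt(n)        # SEM shrinks as the sample size grows

print(sd, round(sem, 2))  # 2.0 0.71
```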
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 17
Incorrect
-
Which of the following is not a valid type of validity?
Your Answer: Predictive
Correct Answer: Inter-rater
Explanation:Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 18
Incorrect
-
What percentage of the data falls within the range of the lower and upper quartiles, as represented by the interquartile range?
Your Answer: 100%
Correct Answer: 50%
Explanation:Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 19
Incorrect
-
Which of the following is an example of selection bias?
Your Answer: Observer bias
Correct Answer: Berkson's bias
Explanation:Types of Bias in Statistics
Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.
There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is incorrect because there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, Hawthorne effect, and ecological fallacy are all subtypes of information bias.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 20
Incorrect
-
What hierarchical language does NLM utilize to enhance search strategies and index articles?
Your Answer: Automatic term mapping
Correct Answer: MeSH
Explanation:NLM’s hierarchical vocabulary, known as MeSH (Medical Subject Heading), is utilized for the purpose of indexing articles in PubMed.
Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.
When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.
There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 21
Incorrect
-
What type of data is required to compute the relative risk or odds ratio?
Your Answer: Continuous
Correct Answer: Dichotomous
Explanation:When outcomes are binary (such as dead or alive), there are various ways to report them, including proportions, percentages, risk, odds, risk ratios, odds ratios, number needed to treat, likelihood ratios, sensitivity, specificity, and pre-test and post-test probability. However, non-binary data types require different methods of reporting.
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between the OR and the RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
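As a rough sketch, the measures above can be computed from a two-by-two trial result. The counts below are hypothetical, chosen only to make the arithmetic easy to follow:

```python
# Hypothetical trial data (illustrative only): events / totals in each arm.
events_treated, n_treated = 15, 100
events_control, n_control = 25, 100

# Risk is a proportion: events divided by the number at risk.
risk_treated = events_treated / n_treated      # 0.15
risk_control = events_control / n_control      # 0.25

# Relative risk: ratio of the risk in the intervention group to the control group.
rr = risk_treated / risk_control               # 0.6

# Risk difference and number needed to treat.
rd = risk_treated - risk_control               # -0.10
nnt = 1 / abs(rd)                              # 10 patients treated for one to benefit

# Odds: events divided by non-events; odds ratio via the cross-product.
or_ = (events_treated * (n_control - events_control)) / (
    (n_treated - events_treated) * events_control)

print(f"RR={rr:.2f}, RD={rd:.2f}, NNT={nnt:.0f}, OR={or_:.2f}")
```

Note that the OR (about 0.53) and the RR (0.6) differ even on the same data; they only approximate each other when the outcome is rare.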
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 22
Incorrect
-
What is the standardized score (z-score) for a woman whose haemoglobin concentration is 150 g/L, given that the mean haemoglobin concentration for healthy women is 135 g/L and the standard deviation is 15 g/L?
Your Answer: 15
Correct Answer: 1
Explanation:Z Scores: A Special Application of Transformation Rules
Z scores are a unique way of measuring how much and in which direction an item deviates from the mean of its distribution, expressed in units of its standard deviation. To calculate the z score for an observation x from a population with mean μ and standard deviation σ, we use the formula z = (x − μ) / σ. For example, if our observation is 150 and the mean and standard deviation are 135 and 15, respectively, then the z score would be 1.0. Z scores are a useful tool for comparing observations from different distributions and for identifying outliers.
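The formula can be sketched directly, using the figures from the explanation above:

```python
def z_score(x, mean, sd):
    """Standardised score: how many standard deviations x lies from the mean."""
    return (x - mean) / sd

# Worked example from the question: haemoglobin of 150 g/L,
# population mean 135 g/L, standard deviation 15 g/L.
print(z_score(150, 135, 15))  # → 1.0
```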
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 23
Correct
-
Which of the following is the correct description of construct validity?
Your Answer: A test has good construct validity if it has a high correlation with another test that measures the same construct
Explanation:Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 24
Incorrect
-
What is the calculation that the nurse performed to determine the patient's average daily calorie intake over a seven day period?
Your Answer: Generalised mean
Correct Answer: Arithmetic mean
Explanation:You don’t need to concern yourself with the specifics of the various means. Simply keep in mind that the arithmetic mean is the one utilized in fundamental biostatistics.
Measures of Central Tendency
Measures of central tendency are used in descriptive statistics to summarize the middle or typical value of a data set. There are three common measures of central tendency: the mean, median, and mode.
The median is the middle value in a data set that has been arranged in numerical order. It is not affected by outliers and is used for ordinal data. The mode is the most frequent value in a data set and is used for categorical data. The mean is calculated by adding all the values in a data set and dividing by the number of values. It is sensitive to outliers and is used for interval and ratio data.
The appropriate measure of central tendency depends on the measurement scale of the data. For nominal and categorical data, the mode is used. For ordinal data, the median or mode is used. For interval data with a normal distribution, the mean is preferable, but the median or mode can also be used. For interval data with a skewed distribution, the median is used. For ratio data, the mean is preferable, but the median or mode can also be used for skewed data.
In addition to measures of central tendency, the range is also used to describe the spread of a data set. It is calculated by subtracting the smallest value from the largest value.
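A minimal sketch of the three measures and the range, using Python's standard library. The seven intake values are invented for illustration:

```python
from statistics import mean, median, mode

# Hypothetical daily calorie intakes over seven days (illustrative values).
intakes = [1800, 2000, 2000, 2000, 2200, 2300, 2400]

print(mean(intakes))                 # arithmetic mean: sum divided by count
print(median(intakes))               # middle value of the ordered data
print(mode(intakes))                 # most frequent value
print(max(intakes) - min(intakes))   # range: largest minus smallest
```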
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 25
Incorrect
-
Which study design is susceptible to making the erroneous assumption that relationships observed among groups also hold true for individuals?
Your Answer: Cross-sectional study
Correct Answer: Ecological study
Explanation:An ecological fallacy is a potential error that can occur when generalizing relationships observed among groups to individuals. This is a concern when conducting analyses of ecological studies.
Types of Primary Research Studies and Their Advantages and Disadvantages
Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.
Type of Question Best Type of Study
Therapy Randomized controlled trial (RCT), cohort, case control, case series
Diagnosis Cohort studies with comparison to gold standard test
Prognosis Cohort studies, case control, case series
Etiology/Harm RCT, cohort studies, case control, case series
Prevention RCT, cohort studies, case control, case series
Cost Economic analysis
Study Type Advantages Disadvantages
Randomized Controlled Trial – Unbiased distribution of confounders – Blinding more likely – Randomization facilitates statistical analysis – Expensive – Time-consuming – Volunteer bias – Ethically problematic at times
Cohort Study – Ethically safe – Subjects can be matched – Can establish timing and directionality of events – Eligibility criteria and outcome assessments can be standardized – Administratively easier and cheaper than RCT – Controls may be difficult to identify – Exposure may be linked to a hidden confounder – Blinding is difficult – Randomization not present – For rare disease, large sample sizes or long follow-up necessary
Case-Control Study – Quick and cheap – Only feasible method for very rare disorders or those with a long lag between exposure and outcome – Fewer subjects needed than cross-sectional studies – Reliance on recall or records to determine exposure status – Confounders – Selection of control groups is difficult – Potential bias: recall, selection
Cross-Sectional Survey – Cheap and simple – Ethically safe – Establishes association at most, not causality – Recall bias susceptibility – Confounders may be unequally distributed – Neyman bias – Group sizes may be unequal
Ecological Study – Cheap and simple – Ethically safe – Ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals)
In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 26
Incorrect
-
A team of scientists aims to conduct a systematic review on the effectiveness of a new medication for elderly patients with dementia. They decide to search for studies published in languages other than English, as they know that positive results are more likely to be published in English-language journals, while negative results are more likely to be published in non-English language journals. What type of bias are they trying to prevent?
Your Answer: Berksonian bias
Correct Answer: Tower of Babel bias
Explanation:When conducting a systematic review, restricting the selection of studies to those published only in English may introduce a bias known as the Tower of Babel effect. This occurs because studies conducted in non-English speaking countries that report positive results are more likely to be published in English language journals, while those with negative results are more likely to be published in non-English language journals.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 27
Incorrect
-
What is the purpose of descriptive statistics?
Your Answer: To test hypotheses based on sample data
Correct Answer: To present characteristic features of a data set
Explanation:Types of Statistics: Descriptive and Inferential
Statistics can be divided into two categories: descriptive and inferential. Descriptive statistics are used to describe and summarize data without making any generalizations beyond the data at hand. On the other hand, inferential statistics are used to make inferences about a population based on sample data.
Descriptive statistics are useful for identifying patterns and trends in data. Common measures used to describe a data set include measures of central tendency (such as the mean, median, and mode) and measures of variability or dispersion (such as the standard deviation or variance).
Inferential statistics, on the other hand, are used to make predictions or draw conclusions about a population based on sample data. These statistics are also used to determine the probability that observed differences between groups are reliable and not due to chance.
Overall, both descriptive and inferential statistics play important roles in analyzing and interpreting data. Descriptive statistics help us understand the characteristics of a data set, while inferential statistics allow us to make predictions and draw conclusions about larger populations.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 28
Correct
-
What is the accurate formula for determining the likelihood ratio of a positive test outcome?
Your Answer: Sensitivity / (1 - specificity)
Explanation:Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two-by-two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
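A short sketch of these statistics from a hypothetical two-by-two table. All counts are invented; note that the positive likelihood ratio uses the formula from the answer above, sensitivity / (1 − specificity):

```python
# Hypothetical two-by-two table (illustrative counts):
tp, fn = 90, 10   # diseased: test positive / test negative
fp, tn = 20, 80   # healthy:  test positive / test negative

sensitivity = tp / (tp + fn)   # 0.9: proportion of diseased correctly identified
specificity = tn / (tn + fp)   # 0.8: proportion of healthy correctly identified

# Likelihood ratios combine sensitivity and specificity into single figures.
lr_positive = sensitivity / (1 - specificity)   # ≈ 4.5
lr_negative = (1 - sensitivity) / specificity   # ≈ 0.125

# Predictive values depend on the mix of diseased and healthy people tested.
ppv = tp / (tp + fp)   # probability of disease given a positive test
npv = tn / (tn + fn)   # probability of no disease given a negative test
```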
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 29
Correct
-
Arrange the following research studies in the correct order based on their level of evidence.
Your Answer: Systematic review of RCTs, RCTs, cohort, case-control, cross-sectional, case-series
Explanation:While many individuals can readily remember that the systematic review is at the highest level and case-series at the lowest, it can be difficult to correctly sequence the intermediate levels.
Levels and Grades of Evidence in Evidence-Based Medicine
To evaluate the quality of evidence on a subject or question, levels or grades are used. The traditional hierarchy approach places systematic reviews of randomized control trials at the top and case-series/reports at the bottom. However, this approach is overly simplistic as certain research questions cannot be answered using RCTs. To address this, the Oxford Centre for Evidence-Based Medicine introduced their 2011 Levels of Evidence system, which separates the types of study questions and gives a hierarchy for each.
The grading approach to be aware of is the GRADE system, which classifies the quality of evidence as high, moderate, low, or very low. The process begins by formulating a study question and identifying specific outcomes. Outcomes are then graded as critical or important. The evidence is then gathered and criteria are used to grade the evidence, with the type of evidence being a significant factor. Evidence can be promoted or downgraded based on certain criteria, such as limitations to study quality, inconsistency, uncertainty about directness, imprecise or sparse data, and reporting bias. The GRADE system allows for the promotion of observational studies to high-quality evidence under the right circumstances.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 30
Incorrect
-
Which category does convenience sampling fall under?
Your Answer: Cluster sampling
Correct Answer: Non-probabilistic sampling
Explanation:Sampling Methods in Statistics
When collecting data from a population, it is often impractical and unnecessary to gather information from every single member. Instead, taking a sample is preferred. However, it is crucial that the sample accurately represents the population from which it is drawn. There are two main types of sampling methods: probability (random) sampling and non-probability (non-random) sampling.
Non-probability sampling methods, also known as judgement samples, are based on human choice rather than random selection. These samples are convenient and cheaper than probability sampling methods. Examples of non-probability sampling methods include voluntary sampling, convenience sampling, snowball sampling, and quota sampling.
Probability sampling methods give a more representative sample of the population than non-probability sampling. In each probability sampling technique, each population element has a known (non-zero) chance of being selected for the sample. Examples of probability sampling methods include simple random sampling, systematic sampling, cluster sampling, stratified sampling, and multistage sampling.
Simple random sampling is a sample in which every member of the population has an equal chance of being chosen. Systematic sampling involves selecting every kth member of the population. Cluster sampling involves dividing a population into separate groups (called clusters) and selecting a random sample of clusters. Stratified sampling involves dividing a population into groups (strata) and taking a random sample from each stratum. Multistage sampling is a more complex method that involves several stages and combines two or more sampling methods.
Overall, probability sampling methods give a more representative sample of the population, but non-probability sampling methods are often more convenient and cheaper. It is important to choose the appropriate sampling method based on the research question and available resources.
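As a sketch of the two simplest probability methods described above (the population of 100 and the sample size of 10 are arbitrary):

```python
import random

# A hypothetical sampling frame of 100 population members (illustrative).
population = list(range(1, 101))

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, 10)

# Systematic sampling: every kth member after a random starting point.
k = 10
start = random.randrange(k)          # random start within the first interval
systematic = population[start::k]    # then every 10th member thereafter
```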
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 31
Incorrect
-
What category does country of origin fall under in terms of data classification?
Your Answer: Interval
Correct Answer: Nominal
Explanation:Scales of Measurement in Statistics
In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.
Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude on the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.
Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order of hierarchy. Examples of nominal scales include genotype, blood type, and political party.
Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 32
Incorrect
-
What is the approach that targets confounding variables during the study's design phase?
Your Answer: Bias sampling
Correct Answer: Randomisation
Explanation:Stats Confounding
A confounding factor is a factor that can obscure the relationship between an exposure and an outcome in a study. This factor is associated with both the exposure and the disease. For example, in a study that finds a link between coffee consumption and heart disease, smoking could be a confounding factor because it is associated with both drinking coffee and heart disease. Confounding occurs when there is a non-random distribution of risk factors in the population, such as age, sex, and social class.
To control for confounding in the design stage of an experiment, researchers can use randomization, restriction, or matching. Randomization aims to produce an even distribution of potential risk factors in two populations. Restriction involves limiting the study population to a specific group to ensure similar age distributions. Matching involves finding and enrolling participants who are similar in terms of potential confounding factors.
In the analysis stage of an experiment, researchers can control for confounding by using stratification or multivariate models such as logistic regression, linear regression, or analysis of covariance (ANCOVA). Stratification involves creating categories or strata within which the confounding variable does not vary or varies minimally.
Overall, controlling for confounding is important in ensuring that the relationship between an exposure and an outcome is accurately assessed in a study.
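A sketch of stratification using the coffee/smoking example above. All counts are invented and deliberately constructed so that the apparent association disappears within each smoking stratum:

```python
# Illustrative (made-up) counts for a coffee / heart-disease example in which
# smoking confounds the association. Each stratum is a 2x2 table of
# (exposed cases, exposed non-cases, unexposed cases, unexposed non-cases).
strata = {
    "smokers":     (40, 60, 4, 6),
    "non-smokers": (2, 18, 10, 90),
}

def odds_ratio(a, b, c, d):
    """Cross-product odds ratio for a 2x2 table."""
    return (a * d) / (b * c)

# Crude OR: collapse the strata, i.e. ignore smoking entirely.
a, b, c, d = (sum(t[i] for t in strata.values()) for i in range(4))
print(round(odds_ratio(a, b, c, d), 2))   # 3.69 - apparent association

# Stratum-specific ORs: within each smoking stratum the association vanishes.
for name, cells in strata.items():
    print(name, odds_ratio(*cells))       # 1.0 in both strata
```

The crude estimate suggests coffee triples the odds of disease, yet within each smoking stratum the odds ratio is exactly 1: the whole effect was carried by smoking.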
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 33
Incorrect
-
Which of the following can be used to represent the overall number of individuals affected by a disease during a specific period?
Your Answer: Point prevalence
Correct Answer: Period prevalence
Explanation:Measures of Disease Frequency: Incidence and Prevalence
Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.
Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.
It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
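The relationship prevalence = incidence × duration can be checked with a quick sketch; the incidence rate and mean duration below are made-up figures:

```python
# Hypothetical figures for a chronic condition (illustrative only):
incidence_rate = 5 / 1000   # 5 new cases per 1,000 person-years
mean_duration = 10          # average duration of the condition, in years

# In a steady-state population, prevalence ≈ incidence rate × mean duration.
prevalence = incidence_rate * mean_duration
print(prevalence)   # ≈ 0.05, i.e. about 50 cases per 1,000 people
```

Because the duration is long, the prevalence (5%) is ten times the annual incidence, which is the pattern the explanation describes for chronic disease.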
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 34
Incorrect
-
What is a true statement about standardised mortality ratios?
Your Answer: An SMR of 1 indicates that there is an increased mortality in the study group
Correct Answer: Direct standardisation requires that we know the age-specific rates of mortality in all the populations under study
Explanation:Calculation of Standardised Mortality Ratio (SMR)
To calculate the SMR, age and sex-specific death rates in the standard population are obtained. An estimate for the number of people in each category for both the standard and study populations is needed. The number of expected deaths in each age-sex group of the study population is calculated by multiplying the age-sex-specific rates in the standard population by the number of people in each category of the study population. The sum of all age- and sex-specific expected deaths gives the expected number of deaths for the whole study population. The observed number of deaths is then divided by the expected number of deaths to obtain the SMR.
The SMR can be standardised using the direct or indirect method. The direct method is used when the age-sex-specific rates for the study population and the age-sex-structure of the standard population are known. The indirect method is used when the age-specific rates for the study population are unknown or not available. This method uses the observed number of deaths in the study population and compares it to the number of deaths that would be expected if the age distribution was the same as that of the standard population.
The SMR can be interpreted as follows: an SMR less than 1.0 indicates fewer than expected deaths in the study population, an SMR of 1.0 indicates the number of observed deaths equals the number of expected deaths in the study population, and an SMR greater than 1.0 indicates more than expected deaths in the study population (excess deaths). It is sometimes expressed after multiplying by 100.
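The indirect-method calculation can be sketched as follows; the age bands, rates, and counts are all hypothetical:

```python
# Hypothetical figures (illustrative only).
# Standard-population death rates per 1,000 person-years, by age band:
standard_rates_per_1000 = {"40-59": 2, "60-79": 10}
# Size of each age band in the study population:
study_population = {"40-59": 5000, "60-79": 2000}

# Expected deaths: apply the standard rates to the study population's structure.
expected = sum(standard_rates_per_1000[band] * study_population[band] / 1000
               for band in study_population)   # 10 + 20 = 30 expected deaths

observed = 45   # hypothetical observed deaths in the study population

smr = observed / expected
print(smr)   # 1.5 -> more deaths than expected (excess mortality)
```

Multiplying by 100, this would be reported as an SMR of 150.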
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 35
Correct
-
A team of scientists aims to perform a systematic review and meta-analysis of the environmental impacts and benefits of using solar energy in residential homes. They want to investigate how their findings would be affected by potential future changes, such as an increase in the cost of solar panels or a shift in government policies promoting renewable energy. What type of analysis should they undertake to address this inquiry?
Your Answer: Sensitivity analysis
Explanation:A sensitivity analysis is a tool utilized to evaluate the degree to which the outcomes of a study or systematic review are influenced by modifications in the methodology employed. It is employed to determine the resilience of the findings to uncertain judgments or assumptions regarding the data and techniques employed.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 36
Incorrect
-
What is another name for the incidence rate?
Your Answer: Cumulative incidence
Correct Answer: Incidence density
Explanation:Measures of Disease Frequency: Incidence and Prevalence
Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.
Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.
It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 37
Incorrect
-
What is another term used to refer to Neyman bias?
Your Answer: Non-response bias
Correct Answer: Prevalence/incidence bias
Explanation:Neyman bias arises when a research study examines a condition marked by undetected cases or by cases that result in early death, leading to the exclusion of such cases from the analysis.
Types of Bias in Statistics
Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.
There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when the information gathered about exposure, outcome, or both is incorrect because of an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, the Hawthorne effect, and the ecological fallacy are all subtypes of information bias.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 38
Correct
-
The clinical director of a pediatric unit conducts an economic evaluation study to determine which type of treatment results in the greatest improvement in asthma symptoms (as measured by the Asthma Control Test). She compares the costs of three different treatment options against the average improvement in asthma symptoms achieved by each. What type of economic evaluation method did she employ?
Your Answer: Cost-effectiveness analysis
Explanation:Methods of Economic Evaluation
There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.
Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.
Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.
Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the Quality-Adjusted Life Year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.
Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.
Costs in Economic Evaluation Studies
There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.
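To make the cost-effectiveness ratio concrete, the comparison described in the question can be sketched in Python. All treatment names, costs, and effect sizes below are invented for the example:

```python
# Cost-effectiveness ratio = total cost / units of effectiveness.
# Hypothetical costs (in pounds) and mean improvements in Asthma
# Control Test (ACT) score for three treatment options.
treatments = {
    "Option A": {"cost": 1200.0, "act_gain": 4.0},
    "Option B": {"cost": 900.0,  "act_gain": 2.5},
    "Option C": {"cost": 1500.0, "act_gain": 6.0},
}

def cost_effectiveness_ratio(cost, effect):
    """Cost per unit of clinical effectiveness (here, per ACT point gained)."""
    return cost / effect

ratios = {name: cost_effectiveness_ratio(t["cost"], t["act_gain"])
          for name, t in treatments.items()}

# The preferred option is the one with the lowest cost per unit of effect.
best_option = min(ratios, key=ratios.get)
```

Note that Option C has the highest total cost but the lowest cost per ACT point gained, which illustrates why CEA compares ratios rather than raw costs.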
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 39
Correct
-
What benefit does conducting a cost-effectiveness analysis offer?
Your Answer: Outcomes are expressed in natural units that are clinically meaningful
Explanation:A major benefit of using cost-effectiveness analysis is that the results are immediately understandable, such as the cost per year of remission from depression. When conducting economic evaluations, costs are typically estimated in a standardized manner across different types of studies, taking into account direct costs (e.g. physician time), indirect costs (e.g. lost productivity from being absent from work), and future costs (e.g. developing diabetes as a result of treatment with clozapine). The primary variation between economic evaluations lies in how outcomes are evaluated.
Methods of Economic Evaluation
There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.
Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.
Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.
Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the Quality-Adjusted Life Year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.
Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.
Costs in Economic Evaluation Studies
There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 40
Incorrect
-
Which statement accurately reflects the standard mortality ratio of a disease in a sampled population that is determined to be 1.4?
Your Answer: There were 140% more fatalities from the disease in this population compared to the reference population
Correct Answer: There were 40% more fatalities from the disease in this population compared to the reference population
Explanation:Calculation of Standardised Mortality Ratio (SMR)
To calculate the SMR, age and sex-specific death rates in the standard population are obtained. An estimate for the number of people in each category for both the standard and study populations is needed. The number of expected deaths in each age-sex group of the study population is calculated by multiplying the age-sex-specific rates in the standard population by the number of people in each category of the study population. The sum of all age- and sex-specific expected deaths gives the expected number of deaths for the whole study population. The observed number of deaths is then divided by the expected number of deaths to obtain the SMR.
The SMR can be standardised using the direct or indirect method. The direct method is used when the age-sex-specific rates for the study population and the age-sex structure of the standard population are known. The indirect method is used when the age-specific rates for the study population are unknown or not available. This method uses the observed number of deaths in the study population and compares it to the number of deaths that would be expected if the age distribution were the same as that of the standard population.
The SMR can be interpreted as follows: an SMR less than 1.0 indicates fewer than expected deaths in the study population, an SMR of 1.0 indicates the number of observed deaths equals the number of expected deaths in the study population, and an SMR greater than 1.0 indicates more than expected deaths in the study population (excess deaths). It is sometimes expressed after multiplying by 100.
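The indirect method described above can be sketched in a few lines of Python. The age bands, rates, and counts are invented purely for illustration:

```python
# Indirect standardisation: apply the standard population's age-specific
# death rates to the study population's age structure to get expected deaths,
# then divide observed deaths by expected deaths to obtain the SMR.
standard_rates = {"0-39": 0.001, "40-64": 0.005, "65+": 0.030}  # deaths/person/year
study_population = {"0-39": 10_000, "40-64": 5_000, "65+": 2_000}  # person counts
observed_deaths = 133

expected_deaths = sum(standard_rates[band] * n
                      for band, n in study_population.items())  # ~95 expected

smr = observed_deaths / expected_deaths  # ~1.4, i.e. 40% more deaths than expected
```

An SMR of 1.4 here matches the interpretation in the question: 40% more deaths in the study population than the standard rates would predict.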
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 41
Incorrect
-
What percentage of values fall within one standard deviation above and below the mean?
Your Answer: 95.40%
Correct Answer: 68.20%
Explanation:Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
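The measures described above are available in Python's standard library; a small sketch with made-up data:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative sample

data_range = max(data) - min(data)     # simplest measure of dispersion
mean = statistics.mean(data)           # measure of central tendency
variance = statistics.pvariance(data)  # mean squared deviation from the mean
sd = statistics.pstdev(data)           # square root of the variance,
                                       # in the same units as the data
```

For this sample the mean is 5, the variance 4, and the standard deviation 2. Note that `pvariance`/`pstdev` treat the data as a whole population, while `statistics.variance`/`statistics.stdev` apply the sample (n − 1) correction.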
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 42
Incorrect
-
One possible method for determining the number needed to treat is:
Your Answer: 1 / (Hazard ratio)
Correct Answer: 1 / (Absolute risk reduction)
Explanation:Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between the OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
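These measures can all be computed from a trial's 2x2 result. The event counts below are hypothetical:

```python
# Hypothetical RCT: events (bad outcomes) out of totals in each arm.
events_treated, n_treated = 10, 100
events_control, n_control = 20, 100

risk_treated = events_treated / n_treated  # absolute risk, treated arm (0.10)
risk_control = events_control / n_control  # absolute risk, control arm (0.20)

relative_risk = risk_treated / risk_control    # RR = 0.5
risk_difference = risk_control - risk_treated  # absolute risk reduction = 0.10
nnt = 1 / risk_difference                      # NNT = 10 patients per event prevented

# Odds = events / non-events; OR = odds(treated) / odds(control).
odds_treated = events_treated / (n_treated - events_treated)  # 10/90
odds_control = events_control / (n_control - events_control)  # 20/80
odds_ratio = odds_treated / odds_control                      # ~0.44
```

The same odds-ratio arithmetic applies to case-control data: divide the odds of exposure among cases by the odds of exposure among controls.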
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 43
Incorrect
-
What is the statistical test that is represented by the F statistic?
Your Answer: Mann Whitney U
Correct Answer: ANOVA
Explanation:Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and the Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
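The F statistic behind ANOVA is the ratio of between-group to within-group variance. A minimal pure-Python sketch with invented data for three groups:

```python
# One-way ANOVA: F = (between-group mean square) / (within-group mean square).
groups = [
    [6.0, 8.0, 4.0, 5.0, 3.0, 4.0],
    [8.0, 12.0, 9.0, 11.0, 6.0, 8.0],
    [13.0, 9.0, 11.0, 8.0, 7.0, 12.0],
]

k = len(groups)                             # number of groups
n_total = sum(len(g) for g in groups)       # total observations
grand_mean = sum(sum(g) for g in groups) / n_total

# Between-group sum of squares (degrees of freedom = k - 1).
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares (degrees of freedom = n_total - k).
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n_total - k))
```

A large F means the group means differ by more than the within-group scatter would explain; the F value is then compared against the F distribution with (k − 1, n − k) degrees of freedom.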
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 44
Incorrect
-
What level of kappa score indicates complete agreement between two observers?
Your Answer: 2
Correct Answer: 1
Explanation:Understanding the Kappa Statistic for Measuring Interobserver Variation
The kappa statistic, also known as Cohen’s kappa coefficient, is a useful tool for quantifying the level of agreement between independent observers. This measure can be applied in any situation where multiple observers are evaluating the same thing, such as in medical diagnoses or research studies. The kappa coefficient ranges from -1 to 1, with 1 indicating perfect agreement, 0 indicating agreement no better than chance, and negative values indicating agreement worse than chance. By using the kappa statistic, researchers and practitioners can gain insight into the level of interobserver variation present in their data, which can help to improve the accuracy and reliability of their findings.
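Cohen's kappa can be computed by hand from an agreement table using kappa = (observed agreement − chance agreement) / (1 − chance agreement). The counts below are invented for illustration:

```python
# Hypothetical 2x2 agreement table for two raters.
# Rows: rater A's verdict; columns: rater B's verdict (yes/no).
table = [[20, 5],
         [10, 15]]

n = sum(sum(row) for row in table)

# Observed agreement: proportion of cases on the diagonal.
p_observed = (table[0][0] + table[1][1]) / n

# Chance agreement: expected diagonal proportion from the marginal totals.
row_totals = [sum(row) for row in table]
col_totals = [table[0][j] + table[1][j] for j in range(2)]
p_expected = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2

kappa = (p_observed - p_expected) / (1 - p_expected)
```

With complete agreement (all counts on the diagonal) the observed agreement is 1, so kappa is 1, which is the situation the question describes.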
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 45
Correct
-
Which study design is always considered observational?
Your Answer: Cohort study
Explanation:Case-studies and case-series can have an experimental nature due to the potential involvement of interventions or treatments.
Types of Primary Research Studies and Their Advantages and Disadvantages
Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.
Type of Question Best Type of Study
Therapy Randomized controlled trial (RCT), cohort, case control, case series
Diagnosis Cohort studies with comparison to gold standard test
Prognosis Cohort studies, case control, case series
Etiology/Harm RCT, cohort studies, case control, case series
Prevention RCT, cohort studies, case control, case series
Cost Economic analysis
Study Type Advantages Disadvantages
Randomized Controlled Trial – Unbiased distribution of confounders – Blinding more likely – Randomization facilitates statistical analysis – Expensive – Time-consuming – Volunteer bias – Ethically problematic at times
Cohort Study – Ethically safe – Subjects can be matched – Can establish timing and directionality of events – Eligibility criteria and outcome assessments can be standardized – Administratively easier and cheaper than RCT – Controls may be difficult to identify – Exposure may be linked to a hidden confounder – Blinding is difficult – Randomization not present – For rare diseases, large sample sizes or long follow-up necessary
Case-Control Study – Quick and cheap – Only feasible method for very rare disorders or those with long lag between exposure and outcome – Fewer subjects needed than cross-sectional studies – Reliance on recall or records to determine exposure status – Confounders – Selection of control groups is difficult – Potential bias: recall, selection
Cross-Sectional Survey – Cheap and simple – Ethically safe – Establishes association at most, not causality – Recall bias susceptibility – Confounders may be unequally distributed – Neyman bias – Group sizes may be unequal
Ecological Study – Cheap and simple – Ethically safe – Ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals)
In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 46
Incorrect
-
If a patient follows a new healthy eating campaign for 2 years, with an average weight loss of 18 kg and a standard deviation of 3 kg, what is the probability that their weight loss will fall between 9 and 27 kg?
Your Answer: 68.30%
Correct Answer: 99.70%
Explanation:The mean weight is 18kg with a standard deviation of 3kg. Three standard deviations below the mean is 9kg and three standard deviations above the mean is 27kg.
Standard Deviation and Standard Error of the Mean
Standard deviation (SD) and standard error of the mean (SEM) are two important statistical measures used to describe data. SD is a measure of how much the data varies, while SEM is a measure of how precisely we know the true mean of the population. The normal distribution, also known as the Gaussian distribution, is a symmetrical bell-shaped curve that describes the spread of many biological and clinical measurements.
68.3% of the data lies within 1 SD of the mean, 95.4% of the data lies within 2 SD of the mean, and 99.7% of the data lies within 3 SD of the mean. The SD is calculated by taking the square root of the variance and is expressed in the same units as the data set. A low SD indicates that data points tend to be very close to the mean.
On the other hand, SEM is an inferential statistic that quantifies the precision of the mean. It is expressed in the same units as the data and is calculated by dividing the SD of the sample mean by the square root of the sample size. The SEM gets smaller as the sample size increases, and it takes into account both the value of the SD and the sample size.
Both SD and SEM are important measures in statistical analysis, and they are used to calculate confidence intervals and test hypotheses. While SD quantifies scatter, SEM quantifies precision, and both are essential in understanding and interpreting data.
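The calculation in the question follows directly from the 68-95-99.7 rule; a small sketch:

```python
# For a normal distribution, about 68.3% of values lie within 1 SD of the
# mean, 95.4% within 2 SDs, and 99.7% within 3 SDs.
mean_loss = 18.0  # kg, from the question
sd_loss = 3.0     # kg

def sd_interval(mean, sd, n_sd):
    """Return the range covered by mean +/- n_sd standard deviations."""
    return (mean - n_sd * sd, mean + n_sd * sd)

one_sd = sd_interval(mean_loss, sd_loss, 1)    # (15.0, 21.0) -> ~68.3% of values
three_sd = sd_interval(mean_loss, sd_loss, 3)  # (9.0, 27.0)  -> ~99.7% of values
```

Since 9-27 kg spans three standard deviations either side of the mean, the probability asked for is approximately 99.7%.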
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 47
Incorrect
-
Which of the following statements accurately describes the features of a distribution that is negatively skewed?
Your Answer: Mean < mode < median
Correct Answer: Mean < median < mode
Explanation:Skewed Data: Understanding the Relationship between Mean, Median, and Mode
When analyzing a data set, it is important to consider the shape of the distribution. In a normally distributed data set, the curve is symmetrical and bell-shaped, with the median, mode, and mean all equal. However, in skewed data sets, the distribution is asymmetrical, with the bulk of the data concentrated on one side of the figure.
In a negatively skewed distribution, the left tail is longer, and the bulk of the data is concentrated to the right of the figure. In contrast, a positively skewed distribution has a longer right tail, with the bulk of the data concentrated to the left of the figure. In both cases, the median is positioned between the mode and the mean, as it represents the halfway point of the distribution.
However, the mean is affected by extreme values or outliers, causing it to move away from the median in the direction of the tail. In positively skewed data, the mean is greater than the median, which is greater than the mode. In negatively skewed data, the mode is greater than the median, which is greater than the mean.
Understanding the relationship between mean, median, and mode in skewed data sets is crucial for accurate data analysis and interpretation. By recognizing the shape of the distribution, researchers can make informed decisions about which measures of central tendency to use and how to interpret their results.
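The ordering of mean, median, and mode under skew can be checked numerically. The sample below is invented to be negatively skewed (long left tail):

```python
import statistics

# Negatively skewed sample: most values cluster high, with a long left tail.
data = [1, 5, 7, 8, 8, 9, 9, 9]

mean = statistics.mean(data)      # 7.0 -- dragged left toward the tail
median = statistics.median(data)  # 8.0
mode = statistics.mode(data)      # 9

# Negative skew gives mean < median < mode, as stated above.
```

Reversing the tail (e.g. replacing the low outlier with a high one) flips the ordering to mode < median < mean, the positively skewed case.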
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 48
Incorrect
-
What term is used to describe an association between two variables that is influenced by a confounding factor?
Your Answer: Spurious
Correct Answer: Indirect
Explanation:Stats Association and Causation
When two variables are found to be more commonly present together, they are said to be associated. However, this association can be of three types: spurious, indirect, or direct. Spurious association is one that has arisen by chance and is not real, while indirect association is due to the presence of another factor, known as a confounding variable. Direct association, on the other hand, is a true association not linked by a third variable.
Once an association has been established, the next question is whether it is causal. To determine causation, the Bradford Hill Causal Criteria are used. These criteria include strength, temporality, specificity, coherence, and consistency. The stronger the association, the more likely it is to be truly causal. Temporality refers to whether the exposure precedes the outcome. Specificity asks whether the suspected cause is associated with a specific outcome or disease. Coherence refers to whether the association fits with other biological knowledge. Finally, consistency asks whether the same association is found in many studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 49
Correct
-
As the occurrence of a condition decreases, what increases?
Your Answer: Negative predictive value
Explanation:The prevalence of a condition has an impact on both the PPV and NPV. When the prevalence decreases, the PPV also decreases while the NPV increases.
Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
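The effect of prevalence on the predictive values can be demonstrated with a short function. The sensitivity and specificity figures are assumed for illustration:

```python
# PPV and NPV depend on prevalence as well as on sensitivity and specificity.
def predictive_values(sens, spec, prevalence):
    """Return (PPV, NPV) for a test applied at a given disease prevalence."""
    tp = sens * prevalence               # true positives
    fn = (1 - sens) * prevalence         # false negatives
    tn = spec * (1 - prevalence)         # true negatives
    fp = (1 - spec) * (1 - prevalence)   # false positives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Same test (sensitivity 0.90, specificity 0.95) at two prevalences.
high_prev = predictive_values(0.90, 0.95, 0.20)  # common condition
low_prev = predictive_values(0.90, 0.95, 0.02)   # rarer condition

# As prevalence falls, PPV falls (~0.82 -> ~0.27) and NPV rises.
```

This is the relationship the question tests: a rarer condition means fewer true positives relative to false positives, so the PPV drops while the NPV climbs.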
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 50
Incorrect
-
What is the optimal number needed to treat (NNT)?
Your Answer:
Correct Answer: 1
Explanation:The effectiveness of a healthcare intervention, usually a medication, is measured by the number needed to treat (NNT). This represents the average number of patients who must receive treatment to prevent one additional negative outcome. An NNT of 1 would indicate that all treated patients improved while none of the control patients did, which is the ideal scenario. The NNT can be calculated by taking the inverse of the absolute risk reduction. A higher NNT indicates a less effective treatment, with the range of NNT being from 1 to infinity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-