-
Question 1
Correct
-
Regarding evidence based medicine, which of the following is an example of a foreground question?
Your Answer: What is the effectiveness of restraints in reducing the occurrence of falls in patients 65 and over?
Explanation:Foreground questions are specific and focused, and can lead to a clinical decision. In contrast, background questions are more general and broad in scope.
Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.
When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.
There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 2
Incorrect
-
Which of the following is calculated by dividing the standard deviation by the square root of the sample size?
Your Answer: Variance
Correct Answer: Standard error
Explanation:The formula for the standard error of the mean is equal to the standard deviation divided by the square root of the number of patients.
Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
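As a rough illustration of these measures, the following Python sketch (using made-up data) computes the range, variance, standard deviation, and standard error of the mean with the standard library:

```python
import statistics

# Hypothetical sample of 16 measurements (illustrative data only)
data = [4, 7, 7, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 14, 15, 16]

n = len(data)
mean = statistics.mean(data)
sd = statistics.stdev(data)           # sample standard deviation
variance = statistics.variance(data)  # sample variance = sd ** 2
sem = sd / n ** 0.5                   # standard error of the mean

data_range = max(data) - min(data)    # simplest measure of dispersion
```

Note that `statistics.stdev` uses the sample (n − 1) denominator, which is usually what is wanted when estimating a population from a sample.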
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 3
Correct
-
What is a characteristic of a type II error?
Your Answer: Occurs when the null hypothesis is incorrectly accepted
Explanation:Hypothesis testing involves the possibility of two types of errors, namely type I and type II errors. A type I error occurs when the null hypothesis is wrongly rejected or the alternative hypothesis is incorrectly accepted. This error is also referred to as an alpha error, error of the first kind, or a false positive. On the other hand, a type II error occurs when the null hypothesis is wrongly accepted. This error is also known as the beta error, error of the second kind, or a false negative.
Understanding Hypothesis Testing in Statistics
In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.
The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference is not due to chance alone. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference or a change in one direction.
Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.
P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large as or larger than the one observed when in reality there is no difference between two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected, and if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, the p-value does not indicate clinical significance: a statistically significant difference may be too small to be clinically meaningful.
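The decision rule described above can be sketched in Python. The z statistic below is a made-up illustrative value; the two-sided p-value is computed from the standard normal cumulative distribution function using only the standard library:

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value for a z statistic under the standard normal distribution."""
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))) is the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05               # significance level (cutoff)
z = 2.1                    # hypothetical test statistic, for illustration only
p = two_sided_p_from_z(z)  # roughly 0.036

reject_null = p < alpha    # reject H0 only when p falls below alpha
```

A smaller z statistic, e.g. z = 1.0, gives p ≈ 0.32, so the null hypothesis would not be rejected at the 0.05 level.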
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 4
Correct
-
Which variable classification is not included in Stevens' typology?
Your Answer: Ranked
Explanation:Stevens suggested that scales can be categorized into one of four types based on measurements.
Scales of Measurement in Statistics
In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.
Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude on the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.
Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order of hierarchy. Examples of nominal scales include genotype, blood type, and political party.
Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 5
Incorrect
-
How can authors ensure they cover all necessary aspects when writing articles that describe formal studies of quality improvement?
Your Answer: CONSORT
Correct Answer: SQUIRE
Explanation:The SQUIRE (Standards for Quality Improvement Reporting Excellence) guidelines help authors of articles describing formal studies of quality improvement ensure that all necessary aspects are covered. Other reporting standards apply to other study types: CONSORT covers randomized controlled trials, PRISMA covers systematic reviews and meta-analyses, STROBE covers observational studies, and STARD covers diagnostic accuracy studies. Following the appropriate standard ensures that studies are reported accurately and transparently, which allows the scientific community to evaluate and replicate the findings.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 6
Incorrect
-
What is the nature of the hypothesis that a researcher wants to test regarding the effect of a drug on a person's heart rate?
Your Answer: Null hypothesis
Correct Answer: One-tailed alternative hypothesis
Explanation:A one-tailed hypothesis indicates a specific direction of association between groups. The researcher not only declares that there will be a distinction between the groups but also defines the direction in which the difference will occur.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 7
Correct
-
What is the middle value in the set of numbers 2, 9, 4, 1, 23?
Your Answer: 4
Explanation:Measures of Central Tendency
Measures of central tendency are used in descriptive statistics to summarize the middle or typical value of a data set. There are three common measures of central tendency: the mean, median, and mode.
The median is the middle value in a data set that has been arranged in numerical order. It is not affected by outliers and is used for ordinal data. The mode is the most frequent value in a data set and is used for categorical data. The mean is calculated by adding all the values in a data set and dividing by the number of values. It is sensitive to outliers and is used for interval and ratio data.
The appropriate measure of central tendency depends on the measurement scale of the data. For nominal and categorical data, the mode is used. For ordinal data, the median or mode is used. For interval data with a normal distribution, the mean is preferable, but the median or mode can also be used. For interval data with a skewed distribution, the median is used. For ratio data, the mean is preferable, but the median or mode can also be used for skewed data.
In addition to measures of central tendency, the range is also used to describe the spread of a data set. It is calculated by subtracting the smallest value from the largest value.
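The median from the question can be checked directly with Python's standard library:

```python
import statistics

values = [2, 9, 4, 1, 23]            # data from the question above

median = statistics.median(values)   # middle value after sorting: 1, 2, 4, 9, 23
mean = statistics.mean(values)       # pulled upward by the outlier 23
data_range = max(values) - min(values)

print(median)  # -> 4
```

The mean here is 7.8, well above the median of 4, illustrating how sensitive the mean is to a single outlier.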
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 8
Incorrect
-
You design an experiment investigating whether three different exercise routines, each with a different intensity level, affect a person's heart rate to a different degree. Which of the following tests would you use to demonstrate a statistically significant difference between the exercise routines?
Your Answer: Chi squared test
Correct Answer: ANOVA
Explanation:Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and the Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
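For illustration, the one-way ANOVA F statistic can be computed from first principles: the ratio of between-group variance to within-group variance. The heart-rate figures below are invented purely for the example:

```python
import statistics

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA, computed from first principles."""
    all_values = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_values)
    k = len(groups)            # number of groups
    n = len(all_values)        # total number of observations
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their own group mean
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical heart rates (bpm) after three routines of different intensity
low = [72, 75, 71, 74]
moderate = [80, 83, 79, 82]
high = [90, 94, 91, 93]

f_stat = one_way_anova_f([low, moderate, high])
```

A large F statistic (here 109.2) indicates that the variation between routines dwarfs the variation within each routine, which is the evidence ANOVA uses to reject the null hypothesis of equal means.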
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 9
Incorrect
-
Which of the following is an example of primary evidence?
Your Answer: A systematic review of patient outcomes following discharge from secure psychiatric hospitals
Correct Answer: A case-series of chronic leukocytosis associated with clozapine
Explanation:Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.
When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.
There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 10
Correct
-
Which of the following is not a valid type of validity?
Your Answer: Inter-rater
Explanation:Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 11
Incorrect
-
The data collected represents the ratings given by students to the quality of teaching sessions provided by a consultant psychiatrist. The ratings are on a scale of 1-5, with 1 indicating extremely unsatisfactory and 5 indicating extremely satisfactory. The ratings are used to evaluate the effectiveness of the teaching sessions. How is this data best described?
Your Answer:
Correct Answer: Ordinal
Explanation:The data gathered will be measured on an ordinal scale, where each answer option is ranked. For instance, 2 is considered lower than 4, and 4 is lower than 5. In an ordinal scale, it is not necessary for the difference between 4 (satisfactory) and 2 (unsatisfactory) to be the same as the difference between 5 (extremely satisfactory) and 3 (neutral). This is because the numbers are not assigned for quantitative measurement but are used for labeling purposes only.
Scales of Measurement in Statistics
In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.
Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude on the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.
Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order of hierarchy. Examples of nominal scales include genotype, blood type, and political party.
Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 12
Incorrect
-
Which of the following statements accurately describes the normal distribution?
Your Answer:
Correct Answer: Mean = mode = median
Explanation:The Normal distribution is a probability distribution that is continuous in nature.
Standard Deviation and Standard Error of the Mean
Standard deviation (SD) and standard error of the mean (SEM) are two important statistical measures used to describe data. SD is a measure of how much the data varies, while SEM is a measure of how precisely we know the true mean of the population. The normal distribution, also known as the Gaussian distribution, is a symmetrical bell-shaped curve that describes the spread of many biological and clinical measurements.
68.3% of the data lies within 1 SD of the mean, 95.4% of the data lies within 2 SD of the mean, and 99.7% of the data lies within 3 SD of the mean. The SD is calculated by taking the square root of the variance and is expressed in the same units as the data set. A low SD indicates that data points tend to be very close to the mean.
On the other hand, SEM is an inferential statistic that quantifies the precision of the mean. It is expressed in the same units as the data and is calculated by dividing the SD of the sample by the square root of the sample size. The SEM gets smaller as the sample size increases, and it takes into account both the value of the SD and the sample size.
Both SD and SEM are important measures in statistical analysis, and they are used to calculate confidence intervals and test hypotheses. While SD quantifies scatter, SEM quantifies precision, and both are essential in understanding and interpreting data.
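The 68.3/95.4/99.7 rule can be checked empirically with a quick simulation (a sketch, not a proof; the seed and sample size are arbitrary choices):

```python
import random

random.seed(0)
n = 100_000
samples = [random.gauss(0, 1) for _ in range(n)]   # standard normal draws

# Fraction of draws falling within 1, 2, and 3 standard deviations of the mean
within_1sd = sum(abs(x) < 1 for x in samples) / n
within_2sd = sum(abs(x) < 2 for x in samples) / n
within_3sd = sum(abs(x) < 3 for x in samples) / n
```

With this many draws, the three fractions land close to 0.683, 0.954, and 0.997 respectively.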
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 13
Incorrect
-
How can the pre-test probability be expressed in another way?
Your Answer:
Correct Answer: The prevalence of a condition
Explanation:The prevalence refers to the percentage of individuals in a population who currently have a particular condition, while the incidence is the frequency at which new cases of the condition arise within a specific timeframe.
Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
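A two-by-two table example in Python (hypothetical counts) shows how these statistics fit together. When the pre-test probability is taken to be the prevalence in the same sample, the post-test probability after a positive result works out equal to the positive predictive value:

```python
# Hypothetical 2x2 table for a diagnostic test (illustrative numbers only)
tp, fp = 90, 30     # disease present / absent among positive test results
fn, tn = 10, 170    # disease present / absent among negative test results

sensitivity = tp / (tp + fn)    # 90 / 100 = 0.90
specificity = tn / (tn + fp)    # 170 / 200 = 0.85
ppv = tp / (tp + fp)            # positive predictive value
npv = tn / (tn + fn)            # negative predictive value

# Likelihood ratio for a positive test combines sensitivity and specificity
lr_positive = sensitivity / (1 - specificity)    # = 6.0

# Pre-test probability (prevalence in this sample) -> post-test probability
pre_test_prob = (tp + fn) / (tp + fp + fn + tn)  # 100 / 300
pre_test_odds = pre_test_prob / (1 - pre_test_prob)
post_test_odds = pre_test_odds * lr_positive
post_test_prob = post_test_odds / (1 + post_test_odds)
```

This odds-based calculation is exactly what Fagan’s nomogram performs graphically.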
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 14
Incorrect
-
A case-control study was conducted to determine if exposure to passive smoking during childhood increases the risk of nicotine dependence. Two groups were recruited: 200 patients with nicotine dependence and 200 controls without nicotine dependence. Among the patients, 40 reported exposure to parental smoking during childhood, while among the controls, 20 reported such exposure. The odds ratio of developing nicotine dependence after being exposed to passive smoking is:
Your Answer:
Correct Answer: 2.25
Explanation:Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between the OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of the risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 15
Incorrect
-
What is a criterion used to evaluate the quality of meta-analysis reporting?
Your Answer:
Correct Answer: QUOROM
Explanation:The QUOROM (Quality of Reporting of Meta-analyses) statement sets out criteria for evaluating the quality of meta-analysis reporting; it has since been superseded by PRISMA. Other reporting standards apply to other study types: CONSORT covers randomized controlled trials, STROBE covers observational studies, STARD covers diagnostic accuracy studies, and SQUIRE covers quality improvement studies. Following the appropriate standard ensures that studies are reported accurately and transparently, which allows the scientific community to evaluate and replicate the findings.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 16
Incorrect
-
What statement accurately describes the process of searching a database?
Your Answer:
Correct Answer: New references are added to PubMed more quickly than they are to MEDLINE
Explanation:PubMed receives new references faster than MEDLINE because new records do not need to undergo indexing, such as adding MeSH headings and checking tags. While an increasing number of MEDLINE citations have a link to the complete article, not all of them do. Since 2010, Embase has included all MEDLINE citations in its database, but it does not have all citations from before that year.
Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.
When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.
There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 17
Incorrect
-
The ICER is utilized in the following methods of economic evaluation:
Your Answer:
Correct Answer: Cost-effectiveness analysis
Explanation:The acronym ICER stands for incremental cost-effectiveness ratio.
Methods of Economic Evaluation
There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.
Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.
Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.
Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the Quality-Adjusted Life Year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.
Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.
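The incremental cost-effectiveness ratio (ICER) itself is simply the extra cost divided by the extra effect; the figures below are hypothetical:

```python
# Hypothetical costs and effects for two competing interventions
cost_new, cost_standard = 12_000.0, 8_000.0    # total cost per patient
effect_new, effect_standard = 4.0, 3.0         # e.g. QALYs gained per patient

# ICER: extra cost incurred per extra unit of effect gained
icer = (cost_new - cost_standard) / (effect_new - effect_standard)
print(icer)  # -> 4000.0
```

Here the new intervention costs an additional 4,000 per additional unit of effect, a figure that decision-makers can compare against a willingness-to-pay threshold.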
Costs in Economic Evaluation Studies
There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 18
Incorrect
-
How do the incidence rate and cumulative incidence differ from each other?
Your Answer:
Correct Answer: The incidence rate is a more accurate estimate of the rate at which the outcome develops
Explanation:Measures of Disease Frequency: Incidence and Prevalence
Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.
Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.
It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
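The relationship noted above (prevalence = incidence rate x average duration, for a condition in steady state) can be sketched in a few lines of Python; all figures below are hypothetical, purely for illustration:

```python
# Illustrative sketch of prevalence = incidence rate x mean duration
# (hypothetical figures, not taken from the question).

def prevalence(incidence_rate_per_person_year, mean_duration_years):
    """Approximate point prevalence for a steady-state population."""
    return incidence_rate_per_person_year * mean_duration_years

# Chronic disease: low incidence but long duration -> high prevalence.
chronic = prevalence(0.002, 20)   # 2 new cases per 1,000 person-years, lasting 20 years
# Acute disease: same incidence but short duration -> low prevalence.
acute = prevalence(0.002, 0.1)    # same incidence, lasting ~5 weeks

print(chronic)  # ~0.04, i.e. about 4% of the population affected at any one time
print(acute)    # ~0.0002
```

This makes concrete why chronic diseases have a prevalence much greater than their incidence, while short-lived conditions do not.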
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 19
Incorrect
-
The national health organization has a team of analysts to compare the effectiveness of two different cancer treatments in terms of cost and patient outcomes. They have gathered data on the number of years of life gained by each treatment and are seeking your recommendation on what type of analysis to conduct next. What analysis would you suggest they undertake?
Your Answer:
Correct Answer: Cost utility analysis
Explanation:Cost utility analysis is a method used in health economics to determine the cost-effectiveness of a health intervention by comparing the cost of the intervention to the benefit it provides in terms of the number of years lived in full health. The cost is measured in monetary units, while the benefit is quantified using a measure that assigns values to different health states, including those that are less desirable than full health. In health technology assessments, this measure is typically expressed as quality-adjusted life years (QALYs).
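In practice a cost-utility comparison reduces to an incremental cost per QALY gained. A minimal sketch, using entirely hypothetical costs and QALY figures:

```python
# Hedged sketch of a cost-utility comparison: incremental cost per QALY gained.
# All figures are hypothetical.

def cost_per_qaly_gained(cost_new, cost_old, qalys_new, qalys_old):
    """Incremental cost-effectiveness ratio (ICER): extra cost per extra QALY."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

icer = cost_per_qaly_gained(cost_new=30_000, cost_old=10_000,
                            qalys_new=6.0, qalys_old=5.0)
print(icer)  # 20000.0 -> 20,000 per QALY gained
```

A decision-maker would then compare this ratio against a willingness-to-pay threshold.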
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 20
Incorrect
-
What value of NNT indicates the most positive result for an intervention?
Your Answer:
Correct Answer: NNT = 1
Explanation:An NNT of 1 indicates that every patient who receives the treatment experiences a positive outcome, while no patient in the control group experiences the same outcome. This represents an ideal outcome.
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
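The measures described above can be computed directly from a 2x2 trial table. A short sketch, with hypothetical counts chosen purely for illustration:

```python
# Minimal sketch: measures of effect from a 2x2 trial table.
# The counts below are hypothetical.

def measures(events_t, n_t, events_c, n_c):
    risk_t = events_t / n_t                  # absolute risk, treatment arm
    risk_c = events_c / n_c                  # absolute risk, control arm
    odds_t = events_t / (n_t - events_t)     # odds in treatment arm
    odds_c = events_c / (n_c - events_c)     # odds in control arm
    rr = risk_t / risk_c                     # risk ratio (relative risk)
    rd = risk_t - risk_c                     # risk difference
    or_ = odds_t / odds_c                    # odds ratio
    nnt = 1 / abs(rd)                        # number needed to treat
    return rr, rd, or_, nnt

rr, rd, or_, nnt = measures(events_t=10, n_t=100, events_c=20, n_c=100)
print(rr)   # 0.5
print(rd)   # -0.1
print(or_)  # ~0.44, i.e. (10/90) / (20/80)
print(nnt)  # 10.0 -> treat 10 patients for one to benefit
```

Note how the OR (~0.44) overstates the RR (0.5) here: odds and risks diverge as events become more common.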
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 21
Incorrect
-
The regional Health Authority has requested your expertise in determining whether to establish a new 12-bed paediatric ward or a six-bed adolescent psychiatric unit. Your task is to conduct an economic analysis that evaluates the financial advantages and disadvantages of both proposals.
Your Answer:
Correct Answer: Cost benefit analysis
Explanation:A cost benefit analysis is a method of evaluating whether the benefits of an intervention outweigh its costs, using monetary units as the common measurement. Typically, this type of analysis is employed by funding bodies to make decisions about financing, such as whether to allocate resources for a new delivery suite or an electroconvulsive therapy suite.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 22
Incorrect
-
For which of the following research areas are qualitative methods least effective?
Your Answer:
Correct Answer: Treatment evaluation
Explanation:While quantitative methods are typically used for treatment evaluation, qualitative studies can also provide valuable insights by interpreting, qualifying, or illuminating findings. This is especially beneficial when examining unexpected results, as they can help to test the primary hypothesis.
Qualitative research is a method of inquiry that seeks to understand the meaning and experience dimensions of human lives and social worlds. There are different approaches to qualitative research, such as ethnography, phenomenology, and grounded theory, each with its own purpose, role of the researcher, stages of research, and method of data analysis. The most common methods used in healthcare research are interviews and focus groups. Sampling techniques include convenience sampling, purposive sampling, quota sampling, snowball sampling, and case study sampling. Sample size can be determined by data saturation, which occurs when new categories, themes, or explanations stop emerging from the data. Validity can be assessed through triangulation, respondent validation, bracketing, and reflexivity. Analytical approaches include content analysis and constant comparison.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 23
Incorrect
-
What proportion of adults are expected to have IgE levels exceeding 2 standard deviations from the mean in a study aimed at establishing the normal reference range for IgE levels in adults, assuming a normal distribution of IgE levels?
Your Answer:
Correct Answer: 2.30%
Explanation:Standard Deviation and Standard Error of the Mean
Standard deviation (SD) and standard error of the mean (SEM) are two important statistical measures used to describe data. SD is a measure of how much the data varies, while SEM is a measure of how precisely we know the true mean of the population. The normal distribution, also known as the Gaussian distribution, is a symmetrical bell-shaped curve that describes the spread of many biological and clinical measurements.
68.3% of the data lies within 1 SD of the mean, 95.4% of the data lies within 2 SD of the mean, and 99.7% of the data lies within 3 SD of the mean. The SD is calculated by taking the square root of the variance and is expressed in the same units as the data set. A low SD indicates that data points tend to be very close to the mean. Since 95.4% of values lie within 2 SD of the mean, about 4.6% lie outside this range, split evenly between the two tails, so roughly 2.3% of adults would have IgE levels more than 2 SD above the mean, which gives the answer to this question.
On the other hand, SEM is an inferential statistic that quantifies the precision of the mean. It is expressed in the same units as the data and is calculated by dividing the SD of the sample mean by the square root of the sample size. The SEM gets smaller as the sample size increases, and it takes into account both the value of the SD and the sample size.
Both SD and SEM are important measures in statistical analysis, and they are used to calculate confidence intervals and test hypotheses. While SD quantifies scatter, SEM quantifies precision, and both are essential in understanding and interpreting data.
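Both quantities, and the ~2.3% upper tail relevant to this question, can be checked with the Python standard library:

```python
# Sketch: SEM shrinks with sample size, and the upper tail beyond +2 SD
# of a normal distribution is about 2.3%. Standard library only.
import math
from statistics import NormalDist

def sem(sd, n):
    """Standard error of the mean: SD divided by the square root of n."""
    return sd / math.sqrt(n)

print(sem(10, 25))    # 2.0
print(sem(10, 100))   # 1.0 -> quadrupling n halves the SEM

tail = 1 - NormalDist().cdf(2)   # proportion of values above +2 SD
print(round(tail * 100, 1))      # 2.3 (%)
```

The `NormalDist().cdf(2)` call gives the cumulative probability up to 2 standard deviations above the mean for a standard normal distribution.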
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 24
Incorrect
-
A team of scientists plans to carry out a placebo-controlled randomized trial to assess the effectiveness of a new medication for treating hypertension in elderly patients. They aim to prevent patients from knowing whether they are receiving the medication or the placebo.
What type of bias are they trying to eliminate?
Your Answer:
Correct Answer: Performance bias
Explanation:To prevent bias in the study, the researchers are implementing patient blinding to prevent performance bias, as knowledge of whether they are taking the medication or a placebo, or which arm of the study they are in, could impact the patient’s behavior. Additionally, investigators must also be blinded to avoid measurement bias.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 25
Incorrect
-
A new antihypertensive medication is trialled for adults with high blood pressure. There are 500 adults in the control group and 300 adults assigned to take the new medication. After 6 months, 200 adults in the control group had high blood pressure compared to 30 adults in the group taking the new medication. What is the relative risk reduction?
Your Answer:
Correct Answer: 75%
Explanation:The RRR (Relative Risk Reduction) is calculated by dividing the ARR (Absolute Risk Reduction) by the CER (Control Event Rate). The CER is determined by dividing the number of control events by the total number of participants in the control group, which in this case is 200/500 = 0.4. The EER (Experimental Event Rate) is determined by dividing the number of events in the experimental group by the total number of participants in that group, which in this case is 30/300 = 0.1. The ARR is calculated by subtracting the EER from the CER, which is 0.4 - 0.1 = 0.3. Finally, the RRR is calculated by dividing the ARR by the CER, which is 0.3/0.4 = 0.75 (or 75%).
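The arithmetic above can be verified in a few lines (figures taken from the question):

```python
# Worked RRR calculation using the figures from the question.
cer = 200 / 500          # control event rate = 0.4
eer = 30 / 300           # experimental event rate = 0.1
arr = cer - eer          # absolute risk reduction (~0.3)
rrr = arr / cer          # relative risk reduction

print(round(arr, 2))     # 0.3
print(round(rrr * 100))  # 75 (%)
```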
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 26
Incorrect
-
Which study design involves conducting an experiment?
Your Answer:
Correct Answer: A randomised controlled study
Explanation:Types of Primary Research Studies and Their Advantages and Disadvantages
Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.
Type of Question: Best Type of Study
- Therapy: randomised controlled trial (RCT), cohort, case control, case series
- Diagnosis: cohort studies with comparison to gold standard test
- Prognosis: cohort studies, case control, case series
- Etiology/Harm: RCT, cohort studies, case control, case series
- Prevention: RCT, cohort studies, case control, case series
- Cost: economic analysis
Study Type: Advantages and Disadvantages
- Randomised Controlled Trial. Advantages: unbiased distribution of confounders; blinding more likely; randomisation facilitates statistical analysis. Disadvantages: expensive; time-consuming; volunteer bias; ethically problematic at times.
- Cohort Study. Advantages: ethically safe; subjects can be matched; can establish timing and directionality of events; eligibility criteria and outcome assessments can be standardised; administratively easier and cheaper than an RCT. Disadvantages: controls may be difficult to identify; exposure may be linked to a hidden confounder; blinding is difficult; randomisation not present; for rare diseases, large sample sizes or long follow-up are necessary.
- Case-Control Study. Advantages: quick and cheap; only feasible method for very rare disorders or those with a long lag between exposure and outcome; fewer subjects needed than cross-sectional studies. Disadvantages: reliance on recall or records to determine exposure status; confounders; selection of control groups is difficult; potential recall and selection bias.
- Cross-Sectional Survey. Advantages: cheap and simple; ethically safe. Disadvantages: establishes association at most, not causality; recall bias susceptibility; confounders may be unequally distributed; Neyman bias; group sizes may be unequal.
- Ecological Study. Advantages: cheap and simple; ethically safe. Disadvantages: ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals).
In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 27
Incorrect
-
How does the prevalence of a condition impact a particular aspect?
Your Answer:
Correct Answer: Positive predictive value
Explanation:The characteristics of precision, sensitivity, accuracy, and specificity are not influenced by the prevalence of the condition and remain stable. However, the positive predictive value is affected by the prevalence of the condition, particularly in cases where the prevalence is low. This is because a decrease in the prevalence of the condition leads to a decrease in the number of true positives, which in turn reduces the numerator of the PPV equation, resulting in a lower PPV. The formula for PPV is TP/(TP+FP).
Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
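The dependence of PPV on prevalence can be demonstrated with a short sketch; the test characteristics below (90% sensitivity, 90% specificity) are hypothetical:

```python
# Sketch: PPV falls as prevalence falls, while sensitivity and specificity
# stay fixed. Hypothetical test: 90% sensitive, 90% specific.

def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value, PPV = TP / (TP + FP), per unit population."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    return tp / (tp + fp)

print(round(ppv(0.9, 0.9, 0.50), 2))   # 0.9  at 50% prevalence
print(round(ppv(0.9, 0.9, 0.01), 2))   # 0.08 at 1% prevalence
```

With the same test, dropping prevalence from 50% to 1% collapses the PPV from 90% to about 8%, because false positives come to dominate the positive results.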
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 28
Incorrect
-
What qualitative research approach aims to understand individuals' inner experiences and perspectives?
Your Answer:
Correct Answer: Phenomenology
Explanation:Qualitative research is a method of inquiry that seeks to understand the meaning and experience dimensions of human lives and social worlds. There are different approaches to qualitative research, such as ethnography, phenomenology, and grounded theory, each with its own purpose, role of the researcher, stages of research, and method of data analysis. The most common methods used in healthcare research are interviews and focus groups. Sampling techniques include convenience sampling, purposive sampling, quota sampling, snowball sampling, and case study sampling. Sample size can be determined by data saturation, which occurs when new categories, themes, or explanations stop emerging from the data. Validity can be assessed through triangulation, respondent validation, bracketing, and reflexivity. Analytical approaches include content analysis and constant comparison.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 29
Incorrect
-
What statement accurately describes percentiles?
Your Answer:
Correct Answer: Q1 is the 25th percentile
Explanation:Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
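Quartiles and the interquartile range can be computed with the standard library; the data set below is hypothetical:

```python
# Sketch: quartiles and interquartile range via the standard library.
# statistics.quantiles with n=4 splits the data into quarters; the
# "inclusive" method interpolates between observed data points.
from statistics import quantiles

data = [2, 4, 4, 5, 7, 9, 11, 12, 15]
q1, q2, q3 = quantiles(data, n=4, method="inclusive")

print(q1)       # 4.0  -> Q1, the 25th percentile
print(q2)       # 7.0  -> Q2, the median (50th percentile)
print(q3)       # 11.0 -> Q3, the 75th percentile
print(q3 - q1)  # 7.0  -> interquartile range
```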
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 30
Incorrect
-
A new clinical trial has found a correlation between alcohol consumption and lung cancer. Considering the well-known link between alcohol consumption and smoking, what is the most probable explanation for this new association?
Your Answer:
Correct Answer: Confounding
Explanation:The observed link between alcohol consumption and lung cancer is likely due to confounding factors, such as cigarette smoking. Confounding variables are those that are associated with both the independent and dependent variables, in this case, alcohol consumption and lung cancer.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-