-
Question 1
Incorrect
-
What is a characteristic of data that is positively skewed?
Your Answer: Mode < median < mean
Correct Answer:
Explanation:Skewed Data: Understanding the Relationship between Mean, Median, and Mode
When analyzing a data set, it is important to consider the shape of the distribution. In a normally distributed data set, the curve is symmetrical and bell-shaped, with the median, mode, and mean all equal. However, in skewed data sets, the distribution is asymmetrical, with the bulk of the data concentrated on one side of the figure.
In a negatively skewed distribution, the left tail is longer, and the bulk of the data is concentrated to the right of the figure. In contrast, a positively skewed distribution has a longer right tail, with the bulk of the data concentrated to the left of the figure. In both cases, the median is positioned between the mode and the mean, as it represents the halfway point of the distribution.
However, the mean is affected by extreme values or outliers, causing it to move away from the median in the direction of the tail. In positively skewed data, the mean is greater than the median, which is greater than the mode. In negatively skewed data, the mode is greater than the median, which is greater than the mean.
Understanding the relationship between mean, median, and mode in skewed data sets is crucial for accurate data analysis and interpretation. By recognizing the shape of the distribution, researchers can make informed decisions about which measures of central tendency to use and how to interpret their results.
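The mode < median < mean ordering for positive skew can be checked numerically. This is a minimal stdlib sketch with an invented sample whose long right tail drags the mean upward:

```python
import statistics

# Hypothetical positively skewed sample: most values are small, with a
# long right tail of a few large values (counts are invented).
data = [2, 3, 3, 3, 4, 4, 5, 6, 8, 12, 20]

data_mode = statistics.mode(data)      # most frequent value
data_median = statistics.median(data)  # middle value of the ordered data
data_mean = statistics.mean(data)      # pulled toward the right tail

# In positively skewed data: mode < median < mean.
assert data_mode < data_median < data_mean
```

The mode (3) sits at the peak, the median (4) at the halfway point, and the mean (about 6.4) is pulled furthest toward the tail, matching the pattern described above.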
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 2
Correct
-
What is the most suitable measure to describe the most common test grades collected by a college professor?
Your Answer: Mode
Explanation:The median represents the middle value in a set of data. For example, if there were 7 results (A, B, C, D, E, F, F), the median would be D. However, if the question asks for the most common result, the mode would be used. In this example, the mode would be F. The mean would not be appropriate in this case because adding all the values and dividing by the number of values would not provide a meaningful result.
Measures of Central Tendency
Measures of central tendency are used in descriptive statistics to summarize the middle or typical value of a data set. There are three common measures of central tendency: the mean, median, and mode.
The median is the middle value in a data set that has been arranged in numerical order. It is not affected by outliers and is used for ordinal data. The mode is the most frequent value in a data set and is used for categorical data. The mean is calculated by adding all the values in a data set and dividing by the number of values. It is sensitive to outliers and is used for interval and ratio data.
The appropriate measure of central tendency depends on the measurement scale of the data. For nominal and categorical data, the mode is used. For ordinal data, the median or mode is used. For interval data with a normal distribution, the mean is preferable, but the median or mode can also be used. For interval data with a skewed distribution, the median is used. For ratio data, the mean is preferable, but the median or mode can also be used for skewed data.
In addition to measures of central tendency, the range is also used to describe the spread of a data set. It is calculated by subtracting the smallest value from the largest value.
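The worked example in the explanation can be reproduced directly. A short sketch using the same seven hypothetical grades:

```python
from statistics import mode

# Hypothetical grades matching the worked example in the explanation.
grades = ["A", "B", "C", "D", "E", "F", "F"]

most_common = mode(grades)                 # the mode: the most frequent grade
middle = sorted(grades)[len(grades) // 2]  # the median: middle of the ordered list
```

The mode ("F") answers "most common grade"; the median ("D") would answer "middle grade". The mean is undefined here because letter grades cannot meaningfully be summed.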
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 3
Incorrect
-
What percentage of the data falls within the range of the lower and upper quartiles, as represented by the interquartile range?
Your Answer: 100%
Correct Answer: 50%
Explanation:Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 4
Incorrect
-
Which of the following is not considered a crucial factor according to Wilson and Jungner when implementing a screening program?
Your Answer: The test or examination should be acceptable to the population
Correct Answer: The condition should be potentially curable
Explanation:Wilson and Jungner Criteria for Screening
1. The condition should be an important public health problem.
2. There should be an acceptable treatment for patients with recognised disease.
3. Facilities for diagnosis and treatment should be available.
4. There should be a recognised latent or early symptomatic stage.
5. The natural history of the condition, including its development from latent to declared disease should be adequately understood.
6. There should be a suitable test or examination.
7. The test or examination should be acceptable to the population.
8. There should be agreed policy on whom to treat.
9. The cost of case-finding (including diagnosis and subsequent treatment of patients) should be economically balanced in relation to the possible expenditure as a whole.
10. Case-finding should be a continuous process and not a ‘once and for all’ project.
The Wilson and Jungner criteria provide a framework for evaluating the suitability of a screening program for a particular condition. The criteria emphasize the importance of the condition as a public health problem, the availability of effective treatment, and the feasibility of diagnosis and treatment. Additionally, the criteria highlight the importance of understanding the natural history of the condition and the need for a suitable test or examination that is acceptable to the population. The criteria also stress the importance of having agreed policies on whom to treat and ensuring that the cost of case-finding is economically balanced. Finally, the criteria emphasize that case-finding should be a continuous process rather than a one-time project. By considering these criteria, public health officials can determine whether a screening program is appropriate for a particular condition and ensure that resources are used effectively.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 5
Incorrect
-
What is another term used to refer to a type II error in hypothesis testing?
Your Answer: True negative
Correct Answer: False negative
Explanation:Hypothesis testing involves the possibility of two types of errors: type I and type II errors. A type I error occurs when the null hypothesis is wrongly rejected or the alternative hypothesis is wrongly accepted. This error is also referred to as an alpha error, error of the first kind, or a false positive. On the other hand, a type II error occurs when the null hypothesis is wrongly accepted. This error is also known as the beta error, error of the second kind, or a false negative.
Understanding Hypothesis Testing in Statistics
In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.
The null hypothesis (Ho) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any difference is due to some non-random cause. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference or a change in one direction.
Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.
P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large or larger when in reality there is no difference between two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected, and if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, the p-value does not indicate clinical significance; a statistically significant difference may be too small to be clinically meaningful.
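The meaning of the alpha level can be illustrated by simulation. In this stdlib-only sketch, both groups are drawn from the same distribution, so the null hypothesis is true and every rejection is a type I error; with alpha = 0.05 we expect roughly 5% of trials to reject (the sample sizes and critical value are illustrative choices, not from the text):

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

def t_statistic(a, b):
    # Pooled two-sample t statistic.
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

trials, rejections = 1000, 0
for _ in range(trials):
    # Both samples come from the SAME normal distribution: H0 is true.
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if abs(t_statistic(a, b)) > 2.0:  # ~critical t for alpha = 0.05, df = 58
        rejections += 1  # a false positive: a type I error

type_i_rate = rejections / trials  # should be close to 0.05
```

Every rejection here wrongly discards a true null hypothesis, which is exactly the false-positive (type I) error described above.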
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 6
Incorrect
-
A study examining potential cases of neuroleptic malignant syndrome reports on several physical parameters, including patient temperature in Celsius.
This is an example of which of the following variables?
Your Answer: Ordinal
Correct Answer: Interval
Explanation:Types of Variables
There are different types of variables in statistics. Binary or dichotomous variables have only two values, such as gender. Categorical variables can be grouped into two or more categories, such as eye color or ethnicity. Continuous variables can be further classified into interval and ratio variables. They can be placed anywhere on a scale and have arithmetic properties. Ratio variables have a value of 0 that indicates the absence of the variable, such as temperature in Kelvin. On the other hand, interval variables, like temperature in Celsius or Fahrenheit, do not have a true zero point. Lastly, ordinal variables allow for ranking but do not allow for arithmetic comparisons between values. Examples of ordinal variables include education level and income bracket.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 7
Incorrect
-
What is the purpose of the PICO model in evidence based medicine?
Your Answer: Establishing the presence of publication bias
Correct Answer: Formulating answerable questions
Explanation:Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.
When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.
There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 8
Correct
-
What method did the researchers use to ensure the accuracy and credibility of their findings in the qualitative study on antidepressants?
Your Answer: Member checking
Explanation:To ensure validity in qualitative studies, a technique called member checking, or respondent validation, is used. This involves interviewing a subset of the participants (typically around 11) to confirm that their perspectives align with the study’s findings.
Qualitative research is a method of inquiry that seeks to understand the meaning and experience dimensions of human lives and social worlds. There are different approaches to qualitative research, such as ethnography, phenomenology, and grounded theory, each with its own purpose, role of the researcher, stages of research, and method of data analysis. The most common methods used in healthcare research are interviews and focus groups. Sampling techniques include convenience sampling, purposive sampling, quota sampling, snowball sampling, and case study sampling. Sample size can be determined by data saturation, which occurs when new categories, themes, or explanations stop emerging from the data. Validity can be assessed through triangulation, respondent validation, bracketing, and reflexivity. Analytical approaches include content analysis and constant comparison.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 9
Incorrect
-
Which type of validity is most affected by the phenomenon of regression towards the mean?
Your Answer: Construct validity
Correct Answer: Internal validity
Explanation:Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 10
Incorrect
-
Which data type does age in years belong to?
Your Answer: Interval
Correct Answer: Ratio
Explanation:Age is a type of measurement that follows a ratio scale, which means that the values can be compared as multiples of each other. For instance, if someone is 20 years old, they are twice as old as someone who is 10 years old.
Scales of Measurement in Statistics
In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.
Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude on the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.
Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order of hierarchy. Examples of nominal scales include genotype, blood type, and political party.
Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 11
Correct
-
What is another name for admission rate bias?
Your Answer: Berkson's bias
Explanation:Types of Bias in Statistics
Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.
There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is not correct and there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, Hawthorne effect, and ecological fallacy are all subtypes of information bias.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 12
Incorrect
-
Which type of evidence is typically regarded as the most reliable according to traditional methods?
Your Answer: Case-control studies
Correct Answer: RCTs with non-definitive results
Explanation:Levels and Grades of Evidence in Evidence-Based Medicine
To evaluate the quality of evidence on a subject or question, levels or grades are used. The traditional hierarchy approach places systematic reviews of randomized control trials at the top and case-series/reports at the bottom. However, this approach is overly simplistic, as certain research questions cannot be answered using RCTs. To address this, the Oxford Centre for Evidence-Based Medicine introduced their 2011 Levels of Evidence system, which separates the types of study questions and gives a hierarchy for each.
The grading approach to be aware of is the GRADE system, which classifies the quality of evidence as high, moderate, low, or very low. The process begins by formulating a study question and identifying specific outcomes. Outcomes are then graded as critical or important. The evidence is then gathered and criteria are used to grade the evidence, with the type of evidence being a significant factor. Evidence can be promoted or downgraded based on certain criteria, such as limitations to study quality, inconsistency, uncertainty about directness, imprecise or sparse data, and reporting bias. The GRADE system allows for the promotion of observational studies to high-quality evidence under the right circumstances.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 13
Incorrect
-
What is the term used to describe a test that initially appears to measure what it is intended to measure?
Your Answer: Good external validity
Correct Answer: Good face validity
Explanation:A test that seems to measure what it is intended to measure has strong face validity.
Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 14
Correct
-
The regional Health Authority has requested your expertise in determining whether to establish a new 12-bed pediatric ward or a six-bed adolescent psychiatric unit. Your task is to conduct an economic analysis that evaluates the financial advantages and disadvantages of both proposals.
Your Answer: Cost benefit analysis
Explanation:A cost benefit analysis is a method of evaluating whether the benefits of an intervention outweigh its costs, using monetary units as the common measurement. Typically, this type of analysis is employed by funding bodies to make decisions about financing, such as whether to allocate resources for a new delivery suite or an electroconvulsive therapy suite.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 15
Incorrect
-
What is the typical measure of outcome in a case-control study investigating the potential association between autism and a recently developed varicella vaccine?
Your Answer: Numbers needed to harm
Correct Answer: Odds ratio
Explanation:The odds ratio is used in case-control studies to measure the association between exposure and outcome, while the relative risk is used in cohort studies to measure the risk of developing an outcome in the exposed group compared to the unexposed group. To convert the odds ratio to a relative risk, one can use the formula: relative risk = odds ratio / (1 – incidence in the unexposed group + incidence in the unexposed group × odds ratio).
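A quick numeric sketch of the odds ratio and its conversion to a relative risk, using invented 2x2 counts; the baseline incidence in the unexposed group (p0) is an assumed value, since a case-control study cannot itself supply it:

```python
# Invented case-control counts; a and c are cases, b and d are controls.
a, b = 30, 70    # exposed group:   cases, controls
c, d = 10, 90    # unexposed group: cases, controls

# Odds ratio: (a/b) / (c/d) = (a*d) / (b*c)
odds_ratio = (a * d) / (b * c)

# Converting the odds ratio to a relative risk requires the incidence
# (risk) of the outcome in the unexposed group, assumed here to be 10%.
p0 = 0.10
relative_risk = odds_ratio / (1 - p0 + p0 * odds_ratio)
```

With these counts the odds ratio (~3.86) overstates the relative risk (3.0), illustrating why the two measures should not be used interchangeably when the outcome is common.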
Types of Primary Research Studies and Their Advantages and Disadvantages
Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.
Type of Question – Best Type of Study
- Therapy – Randomized controlled trial (RCT), cohort, case control, case series
- Diagnosis – Cohort studies with comparison to gold standard test
- Prognosis – Cohort studies, case control, case series
- Etiology/Harm – RCT, cohort studies, case control, case series
- Prevention – RCT, cohort studies, case control, case series
- Cost – Economic analysis
Study Type – Advantages – Disadvantages
- Randomized Controlled Trial – Unbiased distribution of confounders; blinding more likely; randomization facilitates statistical analysis – Expensive; time-consuming; volunteer bias; ethically problematic at times
- Cohort Study – Ethically safe; subjects can be matched; can establish timing and directionality of events; eligibility criteria and outcome assessments can be standardized; administratively easier and cheaper than RCT – Controls may be difficult to identify; exposure may be linked to a hidden confounder; blinding is difficult; randomization not present; for rare diseases, large sample sizes or long follow-up necessary
- Case-Control Study – Quick and cheap; only feasible method for very rare disorders or those with a long lag between exposure and outcome; fewer subjects needed than cross-sectional studies – Reliance on recall or records to determine exposure status; confounders; selection of control groups is difficult; potential bias: recall, selection
- Cross-Sectional Survey – Cheap and simple; ethically safe – Establishes association at most, not causality; recall bias susceptibility; confounders may be unequally distributed; Neyman bias; group sizes may be unequal
- Ecological Study – Cheap and simple; ethically safe – Ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals)
In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 16
Incorrect
-
In a randomised controlled trial investigating the initial management of sexual dysfunction with two drugs, some patients withdraw from the study due to medication-related adverse effects. What is the appropriate method for analysing the resulting data?
Your Answer: Remove patients who drop out from final data set
Correct Answer: Include the patients who drop out in the final data set
Explanation:Intention to Treat Analysis in Randomized Controlled Trials
Intention to treat analysis is a statistical method used in randomized controlled trials to analyze all patients who were randomly assigned to a treatment group, regardless of whether they completed or received the treatment. This approach is used to avoid the potential biases that may arise from patients dropping out or switching between treatment groups. By analyzing all patients according to their original treatment assignment, intention to treat analysis provides a more accurate representation of the true treatment effects. This method is widely used in clinical trials to ensure that the results are reliable and unbiased.
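The contrast between intention-to-treat and per-protocol handling of dropouts can be sketched on invented data. The fixed-value imputation below is one simple placeholder strategy for illustration; real trials use more careful approaches to missing outcomes:

```python
from statistics import mean

# Invented outcome scores for one arm; None marks a patient who withdrew.
drug_a = [7, 6, None, 5, 8, None, 6]

def per_protocol_mean(scores):
    # Per-protocol analysis quietly drops the withdrawals, which risks
    # bias if dropout is related to the treatment (e.g. adverse effects).
    return mean(s for s in scores if s is not None)

def itt_mean(scores, imputed=0):
    # Intention-to-treat sketch: every randomised patient stays in the
    # analysis in the group they were assigned to; missing outcomes are
    # imputed here with a fixed value purely for illustration.
    return mean(imputed if s is None else s for s in scores)
```

Here the per-protocol mean (6.4 over five completers) differs from the intention-to-treat mean (about 4.6 over all seven randomised patients), showing how excluding dropouts can flatter a treatment.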
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 17
Incorrect
-
Which of the following methods is most effective in eliminating or managing confounding factors?
Your Answer: Matching
Correct Answer: Randomisation
Explanation:The most effective way to eliminate or manage potential confounding factors is to randomize a sufficiently large sample. This approach addresses all potential confounders, regardless of whether they were measured in the study design. Matching involves pairing individuals who received a treatment or intervention with non-treated individuals who have similar observable characteristics. Post-hoc methods, such as stratification, regression analysis, and analysis of variance, can be used to evaluate the impact of known or suspected confounders.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 18
Incorrect
-
Which of the following is the correct description of construct validity?
Your Answer: A test has good construct validity if it is useful for predicting something
Correct Answer: A test has good construct validity if it has a high correlation with another test that measures the same construct
Explanation:Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 19
Incorrect
-
What is the term used to describe a scenario where a study participant alters their behavior due to the awareness of being observed?
Your Answer: Smiths paradox
Correct Answer: Hawthorne effect
Explanation:Simpson’s Paradox is a real phenomenon in which the direction of an association between variables can reverse when data from multiple groups are merged into one. The other three options are not valid terms.
Types of Bias in Statistics
Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.
There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is not correct and there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, Hawthorne effect, and ecological fallacy are all subtypes of information bias.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 20
Incorrect
-
What test would be appropriate for comparing the proportion of individuals who experience agranulocytosis while taking clozapine versus those who experience it while taking olanzapine?
Your Answer: ANOVA
Correct Answer: Chi-squared test
Explanation:The dependent variable in this scenario is categorical, as individuals either experience agranulocytosis or do not. The independent variable is also categorical, with two options: olanzapine or clozapine. While there are various types of chi-squared tests, it is not necessary to focus on the distinctions between them.
Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and Cochran’s Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
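To make the choice concrete, the Pearson chi-squared statistic for a 2x2 table can be computed by hand. This is a minimal sketch; the counts below are hypothetical, not real safety data:

```python
# Hypothetical 2x2 counts: rows = drug (clozapine, olanzapine),
# columns = agranulocytosis (yes, no)
table = [[10, 90],
         [2, 98]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand_total = sum(row_totals)

# Pearson chi-squared: sum over cells of (observed - expected)^2 / expected
chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (observed - expected) ** 2 / expected

print(round(chi2, 2))  # 5.67 for these made-up counts
```

In practice a library routine (with a continuity correction where appropriate) would be used, but the hand calculation shows what the test actually measures: the discrepancy between observed and expected cell counts.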
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 21
Incorrect
-
What is the accurate formula for determining the likelihood ratio of a positive test outcome?
Your Answer: (Sensitivity -1) / specificity
Correct Answer: Sensitivity / (1 - specificity)
Explanation:Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
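Both likelihood ratio formulas can be sketched directly. The sensitivity and specificity values below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Likelihood ratio of a positive test result
def lr_positive(sensitivity, specificity):
    # LR+ = sensitivity / (1 - specificity)
    return sensitivity / (1 - specificity)

# Likelihood ratio of a negative test result
def lr_negative(sensitivity, specificity):
    # LR- = (1 - sensitivity) / specificity
    return (1 - sensitivity) / specificity

# Hypothetical test with 90% sensitivity and 80% specificity
print(round(lr_positive(0.90, 0.80), 2))  # 4.5
print(round(lr_negative(0.90, 0.80), 3))  # 0.125
```

An LR+ well above 1 makes disease more likely after a positive result; an LR- well below 1 makes it less likely after a negative result.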
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 22
Correct
-
What is the accurate formula for determining the likelihood ratio of a negative test result?
Your Answer: (1 - sensitivity) / specificity
Explanation:Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 23
Incorrect
-
Which statement accurately describes bar charts?
Your Answer:
Correct Answer: The height of the bar indicates the frequency
Explanation:The frequency of each category of characteristic is displayed through the height of the bars in a bar chart. When dealing with discrete data, it is typically organized into distinct categories and presented in a bar chart. On the other hand, continuous data covers a range and the categories are not separate but rather blend into one another. This type of data is best represented through a histogram, which is similar to a bar chart but with bars that are connected.
Differences between Bar Charts and Histograms
Bar charts and histograms are both used to represent data, but they differ in their application and design. Bar charts are used to summarize nominal or ordinal data, while histograms are used for quantitative data. In a bar chart, the x-axis represents categories without a scale, and the y-axis represents frequencies. The columns are of equal width, and the height of the bar indicates the frequency. On the other hand, histograms have a scale on both axes, with the y-axis representing the relative frequency or frequency density. The width of the columns in a histogram can vary, and the area of the column indicates the true frequency. Overall, bar charts and histograms are useful tools for visualizing data, but their differences in design and application make them better suited for different types of data.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 24
Incorrect
-
What is the estimated range for the 95% confidence interval for the mean glucose levels in a population of people taking antipsychotics, given a sample mean of 7 mmol/L, a sample standard deviation of 6 mmol/L, and a sample size of 9 with a standard error of the mean of 2 mmol/L?
Your Answer:
Correct Answer: 3-11 mmol/L
Explanation:It is important to note that confidence intervals are derived from standard errors, not standard deviation, despite the common misconception. It is crucial to avoid mixing up these two terms.
Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
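The question's figures (mean 7 mmol/L, SEM 2 mmol/L) can be checked with the normal approximation. Strictly, with n = 9 a t-multiplier of about 2.31 would apply, but the stated answer uses the familiar 1.96 ≈ 2 rule:

```python
# 95% CI = sample mean ± 1.96 × standard error of the mean
mean, sem = 7, 2

lower = mean - 1.96 * sem
upper = mean + 1.96 * sem

print(round(lower, 1), round(upper, 1))  # roughly 3-11 mmol/L
```

Note that the sample standard deviation (6 mmol/L) is a distractor here; only the standard error enters the interval.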
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 25
Incorrect
-
It has been proposed that individuals who develop schizophrenia may have subtle brain abnormalities present in utero, which predispose them to experiencing obstetric complications during birth. What term best describes this proposed explanation for the association between schizophrenia and birth complications?
Your Answer:
Correct Answer: Reverse causality
Explanation:Common Biases and Errors in Research
Reverse causality occurs when a risk factor appears to cause an illness, but in reality, it is a consequence of the illness. Information bias is a type of error that can occur in research. Two examples of information bias are observer bias and recall bias. Observer bias happens when the experimenter’s biases affect the study’s findings. Recall bias occurs when participants in the case and control groups have different levels of accuracy in their recollections.
There are two types of errors in research: Type I and Type II. A Type I error is when a true null hypothesis is incorrectly rejected, resulting in a false positive. A Type II error is when a false null hypothesis is not rejected, resulting in a false negative. It is essential to be aware of these biases and errors to ensure accurate and reliable research findings.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 26
Incorrect
-
A nationwide study on mental health found that the incidence of depression is significantly higher among elderly individuals living in suburban areas compared to those residing in urban environments. What factors could explain this disparity?
Your Answer:
Correct Answer: Reduced incidence in urban areas
Explanation:The prevalence of schizophrenia may be higher in urban areas due to the social drift phenomenon, where individuals with severe and enduring mental illnesses tend to move towards urban areas. However, a reduced incidence of schizophrenia in urban areas could explain why there is an increased prevalence of the condition in rural settings. It is important to note that prevalence is influenced by both incidence and duration of illness, and can be reduced by increased recovery rates or by death from any cause.
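The incidence-duration relationship mentioned at the end follows the usual rule of thumb for a stable, relatively rare condition: prevalence ≈ incidence × average duration. A quick sketch with hypothetical figures:

```python
# Rule of thumb: prevalence ≈ incidence × average duration of illness
incidence_per_1000_per_year = 2   # hypothetical: new cases per 1,000 people per year
average_duration_years = 10       # hypothetical: average duration of illness

prevalence_per_1000 = incidence_per_1000_per_year * average_duration_years
print(prevalence_per_1000)  # 20 cases per 1,000 people
```

This is why two populations with the same incidence can show very different prevalences if recovery or mortality rates differ.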
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 27
Incorrect
-
You record the age of all of your students in your class. You notice that your data set is skewed. What method would you use to describe the typical age of your students?
Your Answer:
Correct Answer: Median
Explanation:When dealing with a data set that is quantitative and measured on a ratio scale, the mean is typically the preferred measure of central tendency. However, if the data is skewed, the median may be a better choice as it is less affected by the skewness of the data.
Measures of Central Tendency
Measures of central tendency are used in descriptive statistics to summarize the middle or typical value of a data set. There are three common measures of central tendency: the mean, median, and mode.
The median is the middle value in a data set that has been arranged in numerical order. It is not affected by outliers and is used for ordinal data. The mode is the most frequent value in a data set and is used for categorical data. The mean is calculated by adding all the values in a data set and dividing by the number of values. It is sensitive to outliers and is used for interval and ratio data.
The appropriate measure of central tendency depends on the measurement scale of the data. For nominal and categorical data, the mode is used. For ordinal data, the median or mode is used. For interval data with a normal distribution, the mean is preferable, but the median or mode can also be used. For interval data with a skewed distribution, the median is used. For ratio data, the mean is preferable, but the median or mode can also be used for skewed data.
In addition to measures of central tendency, the range is also used to describe the spread of a data set. It is calculated by subtracting the smallest value from the largest value.
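Python's standard library can illustrate why the median is preferred for skewed data. The ages below are a made-up, positively skewed class list in which one outlier drags the mean upward:

```python
import statistics

# Hypothetical positively skewed ages: one mature student (45) is an outlier
ages = [16, 16, 17, 17, 17, 18, 18, 19, 45]

print(statistics.mode(ages))            # 17 (most frequent value)
print(statistics.median(ages))          # 17 (middle value, robust to the outlier)
print(round(statistics.mean(ages), 1))  # 20.3 (pulled up by the outlier)
```

The mean exceeds the median, which matches the mean > median > mode pattern of positive skew, and the median better describes the "typical" student.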
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 28
Incorrect
-
What is the negative predictive value of the blood test for bowel cancer, given a sensitivity of 60%, a specificity of 80%, and a negative test result for the patient?
Your Answer:
Correct Answer: 0.5
Explanation:Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
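Note that the negative predictive value depends on prevalence, which the question as reproduced here omits. A minimal sketch of the calculation; the prevalence of 2/3 is an assumption chosen because it reproduces the stated answer of 0.5:

```python
# NPV = TN / (TN + FN), expressed via sensitivity, specificity and prevalence
def npv(sensitivity, specificity, prevalence):
    true_negatives = specificity * (1 - prevalence)    # per unit of population
    false_negatives = (1 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

# Assumed prevalence of 2/3 reproduces the answer key's 0.5
print(round(npv(0.60, 0.80, 2 / 3), 2))  # 0.5
```

With a lower prevalence the same test would have a higher NPV, which is why predictive values, unlike sensitivity and specificity, cannot be quoted for a test in isolation.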
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 29
Incorrect
-
A study aims to see whether women over 40 years old have a different length of pregnancy by comparing the mean in a group of women of this age against the population mean. Which of the following tests would you use to compare the means?
Your Answer:
Correct Answer: One sample t-test
Explanation:The appropriate statistical test for the study is a one-sample t-test as it involves the calculation of a single mean.
Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and Cochran’s Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
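A one-sample t statistic is simple to compute by hand. The gestation lengths below are hypothetical, and the population mean of 280 days is assumed purely for illustration:

```python
import math
import statistics

# t = (sample mean - population mean) / (sample SD / sqrt(n))
sample_days = [282, 279, 284, 286, 281, 283, 285, 280]  # hypothetical sample
population_mean = 280                                    # assumed population value

n = len(sample_days)
standard_error = statistics.stdev(sample_days) / math.sqrt(n)
t_statistic = (statistics.mean(sample_days) - population_mean) / standard_error

print(round(t_statistic, 2))  # 2.89 for these made-up data
```

The resulting t value would then be compared against the t-distribution with n - 1 degrees of freedom to obtain a p-value.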
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 30
Incorrect
-
What level of kappa score indicates complete agreement between two observers?
Your Answer:
Correct Answer: 1
Explanation:Understanding the Kappa Statistic for Measuring Interobserver Variation
The kappa statistic, also known as Cohen’s kappa coefficient, is a useful tool for quantifying the level of agreement between independent observers. This measure can be applied in any situation where multiple observers are evaluating the same thing, such as in medical diagnoses or research studies. The kappa coefficient can range from -1 to 1: a value of 0 indicates agreement no better than chance, negative values indicate worse-than-chance agreement, and 1 indicates perfect agreement. By using the kappa statistic, researchers and practitioners can gain insight into the level of interobserver variation present in their data, which can help to improve the accuracy and reliability of their findings. Overall, the kappa statistic is a valuable tool for understanding and measuring interobserver variation in a variety of contexts.
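A minimal sketch of Cohen's kappa for two raters classifying the same cases, using made-up counts:

```python
# Agreement table: rows = rater A (yes/no), columns = rater B (yes/no)
table = [[40, 10],
         [5, 45]]

n = sum(sum(row) for row in table)

# Observed agreement: proportion of cases where both raters agree
p_observed = (table[0][0] + table[1][1]) / n

# Chance agreement, from each rater's marginal "yes" proportions
a_yes = (table[0][0] + table[0][1]) / n
b_yes = (table[0][0] + table[1][0]) / n
p_chance = a_yes * b_yes + (1 - a_yes) * (1 - b_yes)

# kappa = (observed - chance) / (1 - chance)
kappa = (p_observed - p_chance) / (1 - p_chance)
print(round(kappa, 2))  # 0.7 for these made-up counts
```

If the raters agreed on every case, p_observed would be 1 and kappa would equal 1, matching the answer to this question.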
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 31
Incorrect
-
What is the name of the database that focuses on literature produced outside of traditional commercial or academic publishing and distribution channels?
Your Answer:
Correct Answer: OpenGrey
Explanation:OpenGrey (which grew out of SIGLE, the System for Information on Grey Literature in Europe) is a database that specializes in collecting and indexing grey literature.
Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.
When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.
There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 32
Incorrect
-
Which variable has a zero value that is not arbitrary?
Your Answer:
Correct Answer: Ratio
Explanation:The key characteristic that sets ratio variables apart from interval variables is the presence of a meaningful zero point. On a ratio scale, this zero point signifies the absence of the measured attribute, while on an interval scale, the zero point is simply a point on the scale with no inherent significance.
Scales of Measurement in Statistics
In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.
Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude of the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.
Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order or hierarchy. Examples of nominal scales include genotype, blood type, and political party.
Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 33
Incorrect
-
A study of 30 patients with hypertension compares the effectiveness of a new blood pressure medication with standard treatment. 80% of the new treatment group achieved target blood pressure levels at 6 weeks, compared with only 40% of the standard treatment group. What is the number needed to treat for the new treatment?
Your Answer:
Correct Answer: 3
Explanation:To calculate the Number Needed to Treat (NNT), we first need to find the Absolute Risk Reduction (ARR), which is calculated by subtracting the Control Event Rate (CER) from the Experimental Event Rate (EER).
Given that CER is 0.4 and EER is 0.8, we can calculate ARR as follows:
ARR = CER – EER
= 0.4 – 0.8
= -0.4Since the ARR is negative, this means that the treatment actually increases the risk of the event occurring. Therefore, we cannot calculate the NNT in this case.
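The stated answer of 3 can be checked directly; here the "event" is the beneficial outcome of achieving target blood pressure:

```python
import math

eer = 0.8  # experimental (new treatment) event rate
cer = 0.4  # control (standard treatment) event rate

arr = eer - cer           # absolute risk difference in favour of the new treatment
nnt = math.ceil(1 / arr)  # 1 / 0.4 = 2.5, conventionally rounded up

print(nnt)  # 3
```

Rounding up is the convention because NNT counts whole patients and rounding down would overstate the benefit.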
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 34
Incorrect
-
What resource is committed to offering complete articles of systematic reviews on the impacts of healthcare interventions?
Your Answer:
Correct Answer: CDSR
Explanation:When faced with a question, it’s helpful to consider what the letters in the question might represent, even if you don’t know the answer right away. Don’t become overwhelmed and keep this strategy in mind.
Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.
When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.
There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 35
Incorrect
-
Which of the following statements accurately describes the concept of study power?
Your Answer:
Correct Answer: Is the probability of rejecting the null hypothesis when it is false
Explanation:The Importance of Power in Statistical Analysis
Power is a crucial concept in statistical analysis as it helps researchers determine the number of participants needed in a study to detect a clinically significant difference or effect. It represents the probability of correctly rejecting the null hypothesis when it is false, which means avoiding a Type II error. Power values range from 0 to 1, with 0 indicating 0% and 1 indicating 100%. A power of 0.80 is generally considered the minimum acceptable level.
Several factors influence the power of a study, including sample size, effect size, and significance level. Larger sample sizes lead to more precise parameter estimations and increase the study’s ability to detect a significant effect. Effect size, which is determined at the beginning of a study, refers to the size of the difference between two means that leads to rejecting the null hypothesis. Finally, the significance level, also known as the alpha level, represents the probability of a Type I error. By considering these factors, researchers can optimize the power of their studies and increase the likelihood of detecting meaningful effects.
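As a rough sketch of how these factors interact, the power of a two-sided one-sample z-test (a simplification that ignores the t-distribution) can be computed with the standard library; the effect size and sample size below are hypothetical:

```python
from statistics import NormalDist

def power(effect_size, n, alpha=0.05):
    """Approximate power of a two-sided z-test for a standardized effect size."""
    z_critical = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    shift = effect_size * n ** 0.5                    # standard errors from zero
    # Probability the test statistic exceeds the critical value under the alternative
    return 1 - NormalDist().cdf(z_critical - shift)

# Hypothetical: a medium effect (d = 0.5) with n = 32 gives roughly 80% power
print(round(power(0.5, 32), 2))
```

Increasing n or the effect size increases the shift term and hence the power; lowering alpha raises the critical value and reduces it, exactly as described above.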
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 36
Incorrect
-
A study is designed to assess a new proton pump inhibitor (PPI) in middle-aged patients who are taking aspirin. The new PPI is given to 120 patients whilst a control group of 240 is given the standard PPI. Over a five year period 24 of the group receiving the new PPI had an upper GI bleed compared to 60 who received the standard PPI. What is the absolute risk reduction?
Your Answer:
Correct Answer: 5%
Explanation:Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
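The figures from this question, checked directly (the event being an upper GI bleed over five years):

```python
cer = 60 / 240  # control (standard PPI) event rate = 0.25
eer = 24 / 120  # experimental (new PPI) event rate = 0.20

arr = cer - eer  # absolute risk reduction

print(f"ARR = {arr:.0%}")      # ARR = 5%
print(f"NNT = {1 / arr:.0f}")  # NNT = 20
```

So 20 patients would need to receive the new PPI instead of the standard one to prevent one upper GI bleed over five years.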
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 37
Incorrect
-
Which method is most effective in determining interobserver agreement?
Your Answer:
Correct Answer: Kappa
Explanation:Kappa is used to assess the consistency of reliability between different raters.
Understanding the Kappa Statistic for Measuring Interobserver Variation
The kappa statistic, also known as Cohen’s kappa coefficient, is a useful tool for quantifying the level of agreement between independent observers. This measure can be applied in any situation where multiple observers are evaluating the same thing, such as in medical diagnoses or research studies. The kappa coefficient can range from -1 to 1: a value of 0 indicates agreement no better than chance, negative values indicate worse-than-chance agreement, and 1 indicates perfect agreement. By using the kappa statistic, researchers and practitioners can gain insight into the level of interobserver variation present in their data, which can help to improve the accuracy and reliability of their findings. Overall, the kappa statistic is a valuable tool for understanding and measuring interobserver variation in a variety of contexts.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 38
Incorrect
-
What is the approach that targets confounding variables during the study's design phase?
Your Answer:
Correct Answer: Randomisation
Explanation:Stats Confounding
A confounding factor is a factor that can obscure the relationship between an exposure and an outcome in a study. This factor is associated with both the exposure and the disease. For example, in a study that finds a link between coffee consumption and heart disease, smoking could be a confounding factor because it is associated with both drinking coffee and heart disease. Confounding occurs when there is a non-random distribution of risk factors in the population, such as age, sex, and social class.
To control for confounding in the design stage of an experiment, researchers can use randomization, restriction, or matching. Randomization aims to produce an even distribution of potential risk factors in the two populations. Restriction involves limiting the study population to a specific group to ensure similar distributions of a confounder such as age. Matching involves finding and enrolling participants who are similar in terms of potential confounding factors.
In the analysis stage of an experiment, researchers can control for confounding by using stratification or multivariate models such as logistic regression, linear regression, or analysis of covariance (ANCOVA). Stratification involves creating categories or strata within which the confounding variable does not vary or varies minimally.
Overall, controlling for confounding is important in ensuring that the relationship between an exposure and an outcome is accurately assessed in a study.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 39
Incorrect
-
The Diagnostic Project between the UK and US revealed that the increased prevalence of schizophrenia in New York, as opposed to London, was due to what factor?
Your Answer:
Correct Answer: Bias
Explanation:The US-UK Diagnostic Project found that the higher rates of schizophrenia in New York were due to diagnostic bias, as US psychiatrists used broader diagnostic criteria. However, the use of standardised clinical interviews and operationalised diagnostic criteria greatly reduced the variability of both incidence and prevalence rates of schizophrenia. This was demonstrated in a study by Sartorius et al. (1986) which examined early manifestations and first-contact incidence of schizophrenia in different cultures.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 40
Incorrect
-
Which of the following outcomes is most susceptible to the Hawthorne effect?
Your Answer:
Correct Answer: Compliance with antipsychotic medication
Explanation:The Hawthorne effect is a phenomenon where individuals may alter their actions or responses when they are aware that they are being monitored or studied. Of the given choices, the only one that pertains to a change in behavior is adherence to medication. The remaining options relate to outcomes that are not under conscious control.
Types of Bias in Statistics
Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed at the design and analysis stages of a study. The main method of controlling confounding in the analysis phase is stratified analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.
There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is incorrect owing to an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, the Hawthorne effect, and the ecological fallacy are all subtypes of information bias.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 41
Incorrect
-
Which statement accurately describes box and whisker plots?
Your Answer:
Correct Answer: Each whisker represents approximately 25% of the data
Explanation:Box and whisker plots are a useful tool for displaying information about the range, median, and quartiles of a data set. The whiskers only contain values within 1.5 times the interquartile range (IQR), and any values outside of this range are considered outliers and displayed as dots. The IQR is the difference between the 3rd and 1st quartiles, which divide the data set into quarters. Quartiles can also be used to determine the percentage of observations that fall below a certain value. However, quartiles and ranges have limitations because they do not take into account every score in a data set. To get a more representative idea of spread, measures such as variance and standard deviation are needed. Box plots can also provide information about the shape of a data set, such as whether it is skewed or symmetric. Notched boxes on the plot represent the confidence intervals of the median values.
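The quantities a box and whisker plot displays can be computed directly; the sketch below (invented data) uses the standard library, whose default "exclusive" quartile method happens to match the simple hand calculation here, though quartile conventions vary between packages:

```python
import statistics

data = [2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 30]

# The three quartile cut points divide the sorted data into quarters
q1, median, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1

# Whiskers extend only to points within 1.5 * IQR of the box;
# anything beyond the fences is plotted separately as an outlier
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in data if x < lower_fence or x > upper_fence]
```

With this sample the quartiles are 5, 8, and 11, the fences sit at -4 and 20, and the value 30 is flagged as an outlier rather than being included in the upper whisker.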
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 42
Incorrect
-
In scientific research, what variable type has traditionally been used to record the gender of study participants?
Your Answer:
Correct Answer: Binary
Explanation:Gender has traditionally been recorded as either male or female, creating a binary or dichotomous variable. Other categorical variables, such as eye color and ethnicity, can be grouped into two or more categories. Continuous variables, such as temperature, height, weight, and age, can be placed anywhere on a scale and have mathematical properties. Ordinal variables allow for ranking, but do not allow for direct mathematical comparisons between values.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 43
Incorrect
-
Which study design is susceptible to making the erroneous assumption that relationships observed among groups also hold true for individuals?
Your Answer:
Correct Answer: Ecological study
Explanation:An ecological fallacy is a potential error that can occur when generalizing relationships observed among groups to individuals. This is a concern when conducting analyses of ecological studies.
Types of Primary Research Studies and Their Advantages and Disadvantages
Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.
Type of Question Best Type of Study
Therapy Randomized controlled trial (RCT), cohort, case control, case series
Diagnosis Cohort studies with comparison to gold standard test
Prognosis Cohort studies, case control, case series
Etiology/Harm RCT, cohort studies, case control, case series
Prevention RCT, cohort studies, case control, case series
Cost Economic analysis
Study Type Advantages Disadvantages
Randomized Controlled Trial – Unbiased distribution of confounders – Blinding more likely – Randomization facilitates statistical analysis – Expensive – Time-consuming – Volunteer bias – Ethically problematic at times
Cohort Study – Ethically safe – Subjects can be matched – Can establish timing and directionality of events – Eligibility criteria and outcome assessments can be standardized – Administratively easier and cheaper than RCT – Controls may be difficult to identify – Exposure may be linked to a hidden confounder – Blinding is difficult – Randomization not present – For rare diseases, large sample sizes or long follow-up necessary
Case-Control Study – Quick and cheap – Only feasible method for very rare disorders or those with a long lag between exposure and outcome – Fewer subjects needed than cross-sectional studies – Reliance on recall or records to determine exposure status – Confounders – Selection of control groups is difficult – Potential bias: recall, selection
Cross-Sectional Survey – Cheap and simple – Ethically safe – Establishes association at most, not causality – Recall bias susceptibility – Confounders may be unequally distributed – Neyman bias – Group sizes may be unequal
Ecological Study – Cheap and simple – Ethically safe – Ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals)
In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 44
Incorrect
-
Which statement about disease rates is incorrect?
Your Answer:
Correct Answer: The odds ratio is synonymous with the risk ratio
Explanation:Disease Rates and Their Interpretation
Disease rates are a measure of the occurrence of a disease in a population. They are used to establish causation, monitor interventions, and measure the impact of exposure on disease rates. The attributable risk is the difference in the rate of disease between the exposed and unexposed groups. It tells us what proportion of deaths in the exposed group were due to the exposure. The relative risk is the risk of an event relative to exposure. It is calculated by dividing the rate of disease in the exposed group by the rate of disease in the unexposed group. A relative risk of 1 means there is no difference between the two groups. A relative risk of <1 means that the event is less likely to occur in the exposed group, while a relative risk of >1 means that the event is more likely to occur in the exposed group. The population attributable risk is the reduction in incidence that would be observed if the population were entirely unexposed. It can be calculated by multiplying the attributable risk by the prevalence of exposure in the population. The attributable proportion is the proportion of the disease that would be eliminated in a population if its disease rate were reduced to that of the unexposed group.
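The measures above, and the reason the odds ratio is not synonymous with the risk ratio, can be shown with a small invented cohort:

```python
# Hypothetical cohort: cases of disease out of group totals
exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 10, 1000

rate_exposed = exposed_cases / exposed_total        # 0.03
rate_unexposed = unexposed_cases / unexposed_total  # 0.01

# Relative risk: rate in exposed divided by rate in unexposed
relative_risk = rate_exposed / rate_unexposed

# Attributable risk: rate difference between exposed and unexposed
attributable_risk = rate_exposed - rate_unexposed

# Attributable proportion: share of disease in the exposed due to exposure
attributable_proportion = attributable_risk / rate_exposed

# The odds ratio uses odds (cases / non-cases), not risks, so it only
# approximates the relative risk when the outcome is rare
odds_exposed = exposed_cases / (exposed_total - exposed_cases)
odds_unexposed = unexposed_cases / (unexposed_total - unexposed_cases)
odds_ratio = odds_exposed / odds_unexposed
```

Here the relative risk is exactly 3.0 while the odds ratio is about 3.06; the two diverge further as the outcome becomes more common, which is why the answer statement equating them is incorrect.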
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 45
Incorrect
-
What is the standard deviation of the sample mean height of 100 adults who were administered steroids during childhood, given that the average height of the adults is 169cm and the standard deviation is 16cm?
Your Answer:
Correct Answer: 1.6
Explanation:The standard error of the mean is 1.6, calculated by dividing the standard deviation of 16 by the square root of the number of patients, which is 100.
Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
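The arithmetic for this question is a one-liner:

```python
import math

sd = 16   # standard deviation of adult heights (cm)
n = 100   # number of adults in the sample

# Standard error of the mean = SD / sqrt(n) = 16 / 10 = 1.6
sem = sd / math.sqrt(n)
```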
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 46
Incorrect
-
Which of the following would make the use of the unpaired t-test inappropriate for comparing the mean ages of two groups of participants?
Your Answer:
Correct Answer: Non-normal distribution of data
Explanation:The t test is limited to parametric data that follows a normal distribution. However, inadequate statistical power due to a small sample size does not necessarily invalidate the t test results. While it is likely that a small sample size may not reveal any significant differences, it is still possible that large differences may be observed regardless of prior power calculations.
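For illustration, the unpaired (Student's) t statistic is straightforward to compute by hand; the pooled-variance formula below assumes both groups are drawn from normal distributions with equal variance, and the data are invented. In practice a library routine such as scipy.stats.ttest_ind would normally be used instead:

```python
import math
import statistics

def unpaired_t(sample1, sample2):
    """Student's two-sample t statistic using a pooled variance estimate."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = statistics.mean(sample1), statistics.mean(sample2)
    v1, v2 = statistics.variance(sample1), statistics.variance(sample2)
    # Pool the two sample variances, weighted by degrees of freedom
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = math.sqrt(pooled * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, n1 + n2 - 2  # t statistic, degrees of freedom

group_a = [34, 36, 41, 43, 44, 46]
group_b = [28, 30, 31, 33, 35, 39]
t, df = unpaired_t(group_a, group_b)
```

The resulting t is referred to the t distribution with n1 + n2 - 2 degrees of freedom; the p-value obtained this way is only valid if the normality assumption holds, which is the point of the question.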
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 47
Incorrect
-
Which of the following statements accurately describes the standard error of the mean?
Your Answer:
Correct Answer: Gets smaller as the sample size increases
Explanation:As the sample size (n) increases, the standard error of the mean (SEM) decreases. This is because the SEM is inversely proportional to the square root of the sample size (n). As n gets larger, the denominator of the SEM equation gets larger, causing the overall value of the SEM to decrease. This means that larger sample sizes provide more accurate estimates of the population mean, as the calculated sample mean is expected to be closer to the true population mean.
Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
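The inverse-square-root relationship means that quadrupling the sample size only halves the standard error, as a quick sketch shows:

```python
import math

sd = 16  # sample standard deviation (same units as the data)

# SEM = sd / sqrt(n): each fourfold increase in n halves the SEM
sems = {n: sd / math.sqrt(n) for n in (25, 100, 400)}
```

With these numbers the SEM falls from 3.2 (n=25) to 1.6 (n=100) to 0.8 (n=400), illustrating the diminishing returns of ever-larger samples.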
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 48
Incorrect
-
Which statement accurately reflects the standard mortality ratio of a disease in a sampled population that is determined to be 1.4?
Your Answer:
Correct Answer: There were 40% more fatalities from the disease in this population compared to the reference population
Explanation:Calculation of Standardised Mortality Ratio (SMR)
To calculate the SMR, age and sex-specific death rates in the standard population are obtained. An estimate for the number of people in each category for both the standard and study populations is needed. The number of expected deaths in each age-sex group of the study population is calculated by multiplying the age-sex-specific rates in the standard population by the number of people in each category of the study population. The sum of all age- and sex-specific expected deaths gives the expected number of deaths for the whole study population. The observed number of deaths is then divided by the expected number of deaths to obtain the SMR.
The SMR can be standardised using the direct or indirect method. The direct method is used when the age-sex-specific rates for the study population and the age-sex structure of the standard population are known. The indirect method is used when the age-specific rates for the study population are unknown or not available. This method uses the observed number of deaths in the study population and compares it to the number of deaths that would be expected if the age distribution was the same as that of the standard population.
The SMR can be interpreted as follows: an SMR less than 1.0 indicates fewer than expected deaths in the study population, an SMR of 1.0 indicates the number of observed deaths equals the number of expected deaths in the study population, and an SMR greater than 1.0 indicates more than expected deaths in the study population (excess deaths). It is sometimes expressed after multiplying by 100.
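The indirect calculation described above can be sketched with invented figures chosen to reproduce the SMR of 1.4 in the question:

```python
# Hypothetical age bands: death rates in the standard population and
# the number of people in each band of the study population
standard_rates = {"40-59": 0.002, "60-79": 0.010, "80+": 0.050}
study_population = {"40-59": 5000, "60-79": 3000, "80+": 1000}
observed_deaths = 126

# Expected deaths: apply the standard population's rates to the
# study population's age structure, then sum across bands
expected_deaths = sum(standard_rates[band] * study_population[band]
                      for band in standard_rates)  # 10 + 30 + 50 = 90

smr = observed_deaths / expected_deaths  # 126 / 90 = 1.4
```

An SMR of 1.4 means 40% more deaths were observed than expected; multiplied by 100, it would be reported as 140.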
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 49
Incorrect
-
A study was conducted to investigate the correlation between body mass index (BMI) and mortality in patients with schizophrenia. The study involved a cohort of 1000 patients with schizophrenia who were evaluated by measuring their weight and height, and calculating their BMI. The participants were then monitored for up to 15 years after the study commenced. The BMI levels were classified into three categories (high, average, low). The findings revealed that, after adjusting for age, gender, treatment method, and comorbidities, a high BMI at the beginning of the study was linked to a twofold increase in mortality.
How is this study best described?
Your Answer:
Correct Answer:
Explanation:The study is a prospective cohort study: it observes the effect of BMI as an exposure on the group over time, without manipulating any risk factors or applying interventions.
Types of Primary Research Studies and Their Advantages and Disadvantages
Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.
Type of Question Best Type of Study
Therapy Randomized controlled trial (RCT), cohort, case control, case series
Diagnosis Cohort studies with comparison to gold standard test
Prognosis Cohort studies, case control, case series
Etiology/Harm RCT, cohort studies, case control, case series
Prevention RCT, cohort studies, case control, case series
Cost Economic analysis
Study Type Advantages Disadvantages
Randomized Controlled Trial – Unbiased distribution of confounders – Blinding more likely – Randomization facilitates statistical analysis – Expensive – Time-consuming – Volunteer bias – Ethically problematic at times
Cohort Study – Ethically safe – Subjects can be matched – Can establish timing and directionality of events – Eligibility criteria and outcome assessments can be standardized – Administratively easier and cheaper than RCT – Controls may be difficult to identify – Exposure may be linked to a hidden confounder – Blinding is difficult – Randomization not present – For rare diseases, large sample sizes or long follow-up necessary
Case-Control Study – Quick and cheap – Only feasible method for very rare disorders or those with a long lag between exposure and outcome – Fewer subjects needed than cross-sectional studies – Reliance on recall or records to determine exposure status – Confounders – Selection of control groups is difficult – Potential bias: recall, selection
Cross-Sectional Survey – Cheap and simple – Ethically safe – Establishes association at most, not causality – Recall bias susceptibility – Confounders may be unequally distributed – Neyman bias – Group sizes may be unequal
Ecological Study – Cheap and simple – Ethically safe – Ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals)
In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 50
Incorrect
-
A new medication is being developed to treat hypertension in elderly patients. Several different drugs are being considered for their efficacy in reducing blood pressure. Which study design would require the largest number of participants to achieve a significant outcome?
Your Answer:
Correct Answer: Superiority trial
Explanation:Since a superiority trial involves comparing a new drug with an already existing treatment that can also reduce blood pressure, a substantial sample size is necessary to establish a significant difference.
Study Designs for New Drugs: Options and Considerations
When launching a new drug, there are various study design options available. One common approach is a placebo-controlled trial, which can provide strong evidence but may be deemed unethical if established treatments are available. Additionally, it does not allow for a comparison with standard treatments. Therefore, statisticians must decide whether the trial aims to demonstrate superiority, equivalence, or non-inferiority to an existing treatment.
Superiority trials may seem like the obvious choice, but they require a large sample size to show a significant benefit over an existing treatment. Equivalence trials define an equivalence margin on a specified outcome, and if the confidence interval of the difference between the two drugs falls within this margin, the drugs are assumed to have a similar effect. Non-inferiority trials are similar to equivalence trials, but only the lower confidence interval needs to fall within the equivalence margin. These trials require smaller sample sizes, and once a drug has been shown to be non-inferior, larger studies may be conducted to demonstrate superiority.
It is important to note that drug companies may not necessarily aim to show superiority over an existing product. If they can demonstrate that their product is equivalent or even non-inferior, they may compete on price or convenience. Overall, the choice of study design depends on various factors, including ethical considerations, sample size, and the desired outcome.
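The decision rules above can be sketched as simple comparisons of a confidence interval against an equivalence margin. This is a simplified illustration only (a real analysis fixes the hypothesis and margin in advance): the interval is for the difference (new drug minus standard) on a scale where higher is better, and the margin and CI values below are invented:

```python
def classify_trial_result(ci_low, ci_high, margin):
    """Interpret a CI for (new - standard) against a symmetric margin."""
    if ci_low > 0:
        # Whole interval above zero: new drug beats the standard
        return "superior"
    if -margin < ci_low and ci_high < margin:
        # Whole interval inside (-margin, +margin): similar effect
        return "equivalent"
    if ci_low > -margin:
        # Only the lower limit must clear -margin: not meaningfully worse
        return "non-inferior"
    return "inconclusive"

print(classify_trial_result(0.5, 3.0, margin=2.0))   # superior
print(classify_trial_result(-1.0, 1.5, margin=2.0))  # equivalent
print(classify_trial_result(-1.5, 4.0, margin=2.0))  # non-inferior
print(classify_trial_result(-3.0, 1.0, margin=2.0))  # inconclusive
```

Note that non-inferiority only constrains the lower limit, which is why it can be demonstrated with a smaller sample than equivalence or superiority.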
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-