-
Question 1
Correct
-
When conducting a literature review, it is advisable to do the following:
Your Answer: Include grey literature
Explanation: When conducting a literature review, it is important to broaden your search beyond traditional academic sources. This means including grey literature, such as reports, conference proceedings, and government documents. Additionally, it is crucial to consider both primary and secondary sources of evidence, as they can provide different perspectives and insights on your research topic. To ensure a comprehensive review, it is recommended to use multiple databases and search engines, rather than relying on a single source.
Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.
When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.
There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.
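For example, a PubMed search combining these techniques might look like this (a hypothetical query): ("depressive disorder"[MeSH Terms] OR depress*) AND ("aged"[MeSH Terms] OR elderly). The Boolean OR broadens each concept, AND combines the two concepts, and the asterisk truncates depress* so that it matches depression, depressive, and so on.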
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 2
Incorrect
-
A team of scientists plans to carry out a randomized controlled study to assess the effectiveness of a new medication for treating anxiety in elderly patients. To prevent any potential biases, they intend to enroll participants through online portals, ensuring that neither the patients nor the researchers are aware of the group assignment. What type of bias are they seeking to eliminate?
Your Answer: Performance bias
Correct Answer: Selection bias
Explanation: The researchers are using allocation concealment to prevent investigators or patients from interfering with the randomisation process. This is important because knowledge of group allocation can lead to patients refusing to participate or to researchers manipulating the allocation process. By using remote allocation, such as the online portals described here, the risk of selection bias, which refers to systematic differences between comparison groups, is reduced.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 3
Incorrect
-
What statement accurately describes measures of dispersion?
Your Answer: The interquartile range is the difference between the smallest and largest value
Correct Answer: The standard error indicates how close the statistical mean is to the population mean
Explanation: Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 4
Correct
-
A team of researchers aims to explore the opinions of pediatricians who specialize in treating children with asthma. They begin by visiting a local pediatric clinic and speaking with a doctor who has expertise in this area. They then ask this doctor to suggest another pediatrician who specializes in treating children with asthma whom they could interview. They continue this process until they have spoken with all the recommended pediatricians.
Which sampling technique are they employing?
Your Answer: Snowball
Explanation: Snowball sampling is a technique used in qualitative research when the desired sample trait is uncommon, making it challenging or expensive to locate suitable respondents. Snowball sampling involves existing subjects recruiting future subjects, which can help overcome these difficulties.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 5
Incorrect
-
Which of the following options is not a possible value for Pearson's correlation coefficient?
Your Answer: -0.9
Correct Answer: 1.5
Explanation: Correlation and Regression
Correlation and regression are related but not interchangeable terms. Correlation is used to test for association between variables, while regression is used to predict values of dependent variables from independent variables. Correlation can be linear, non-linear, or non-existent, and can be strong, moderate, or weak. The strength of a linear relationship is measured by the correlation coefficient, which can be positive or negative, always lies between -1 and +1, and ranges in strength from very weak to very strong. However, the interpretation of a correlation coefficient depends on the context and purpose. Correlation can suggest association but cannot prove or disprove causation. Linear regression, on the other hand, can be used to predict how much one variable changes when a second variable is changed. Scatter graphs are used in correlation and regression analyses to visually determine if variables are associated and to detect outliers. When constructing a scatter graph, the dependent variable is typically placed on the vertical axis and the independent variable on the horizontal axis.
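As a quick illustration of these bounds, the following Python sketch (hypothetical data; SciPy assumed available) computes Pearson's r, which always lies between -1 and +1, and fits a simple linear regression:

```python
from scipy import stats

# Hypothetical paired observations with a roughly linear relationship
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3]

# Pearson's r is always between -1 and +1, so a value of 1.5 is impossible
r, p_value = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p_value:.4f}")

# Simple linear regression predicts y from x
fit = stats.linregress(x, y)
print(f"predicted y = {fit.slope:.2f} * x + {fit.intercept:.2f}")
```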
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 6
Incorrect
-
A team of scientists aims to perform a systematic review and meta-analysis of the effects of caffeine on sleep quality. They want to determine if there is any variation in the results across the studies they have gathered.
Which of the following is not a technique that can be employed to evaluate heterogeneity?
Your Answer: I² statistic
Correct Answer: Receiver operating characteristic curve
Explanation: The receiver operating characteristic (ROC) curve is a useful tool for evaluating the diagnostic accuracy of a test in distinguishing between healthy and diseased individuals. It helps to identify the optimal cut-off point between sensitivity and specificity.
Other methods, such as visual inspection of forest plots, the I² statistic, and Cochran’s Q test, can be used to assess heterogeneity in meta-analysis. Visual inspection of forest plots is a quick and easy method, while Cochran’s Q test is a more formal and widely accepted approach.
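For illustration, here is a minimal Python sketch of the two heterogeneity statistics named above, Cochran's Q and the related I² statistic, using hypothetical per-study effect estimates and inverse-variance weights:

```python
# Hypothetical per-study effect estimates and variances
effects = [0.30, 0.45, 0.10, 0.60]
variances = [0.02, 0.03, 0.01, 0.05]

# Inverse-variance weights and the pooled (fixed-effect) estimate
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted sum of squared deviations from the pooled effect
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: proportion of total variation attributable to heterogeneity
i_squared = max(0.0, (q - df) / q)
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0%}")
```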
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 7
Incorrect
-
What does a relative risk of 10 indicate?
Your Answer: The risk of the event in the unexposed group is 10 times that of the exposed group
Correct Answer: The risk of the event in the exposed group is higher than in the unexposed group
Explanation: Disease Rates and Their Interpretation
Disease rates are a measure of the occurrence of a disease in a population. They are used to establish causation, monitor interventions, and measure the impact of exposure on disease rates. The attributable risk is the difference in the rate of disease between the exposed and unexposed groups. It tells us what proportion of deaths in the exposed group were due to the exposure. The relative risk is the risk of an event relative to exposure. It is calculated by dividing the rate of disease in the exposed group by the rate of disease in the unexposed group. A relative risk of 1 means there is no difference between the two groups. A relative risk of <1 means that the event is less likely to occur in the exposed group, while a relative risk of >1 means that the event is more likely to occur in the exposed group. The population attributable risk is the reduction in incidence that would be observed if the population were entirely unexposed. It can be calculated by multiplying the attributable risk by the prevalence of exposure in the population. The attributable proportion is the proportion of the disease that would be eliminated in a population if its disease rate were reduced to that of the unexposed group.
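A minimal Python sketch of these measures, using hypothetical rates (the figures are illustrative only):

```python
# Hypothetical disease rates per person
rate_exposed = 40 / 1000
rate_unexposed = 10 / 1000
prevalence_of_exposure = 0.25  # hypothetical proportion of the population exposed

relative_risk = rate_exposed / rate_unexposed      # 4.0
attributable_risk = rate_exposed - rate_unexposed  # 0.03, i.e. 30 per 1000

# Reduction in incidence expected if the population were entirely unexposed
population_attributable_risk = attributable_risk * prevalence_of_exposure

# Proportion of disease in the exposed group that would be eliminated if their
# rate fell to that of the unexposed group
attributable_proportion = attributable_risk / rate_exposed

print(relative_risk, attributable_risk, population_attributable_risk, attributable_proportion)
```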
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 8
Correct
-
What is the NNT for the following study data in a population of patients over the age of 65?
Medication Group vs Control Group
Events: 30 vs 80
Non-events: 120 vs 120
Total subjects: 150 vs 200.
Your Answer: 5
Explanation: To calculate the event rates for the medication and control groups, we divide the number of events by the total number of subjects in each group. For the medication group, the event rate is 0.2 (30/150), and for the control group, it is 0.4 (80/200).
We can also calculate the absolute risk reduction (ARR) by subtracting the event rate in the medication group from the event rate in the control group: ARR = CER – EER = 0.4 – 0.2 = 0.2.
Finally, we can use the ARR to calculate the number needed to treat (NNT), which represents the number of patients who need to be treated with the medication to prevent one additional event compared to the control group. NNT = 1/ARR = 1/0.2 = 5.
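The same arithmetic, expressed as a short Python sketch using the figures from the question:

```python
# Event counts from the question
medication_events, medication_total = 30, 150
control_events, control_total = 80, 200

eer = medication_events / medication_total  # experimental event rate = 0.2
cer = control_events / control_total        # control event rate = 0.4

arr = cer - eer  # absolute risk reduction = 0.2
nnt = 1 / arr    # number needed to treat = 5
print(f"EER = {eer}, CER = {cer}, ARR = {arr}, NNT = {nnt:.0f}")
```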
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 9
Correct
-
It has been proposed that individuals who develop schizophrenia may have subtle brain abnormalities present in utero, which predispose them to experiencing obstetric complications during birth. What term best describes this proposed explanation for the association between schizophrenia and birth complications?
Your Answer: Reverse causality
Explanation: Common Biases and Errors in Research
Reverse causality occurs when a risk factor appears to cause an illness, but in reality, it is a consequence of the illness. Information bias is a type of error that can occur in research. Two examples of information bias are observer bias and recall bias. Observer bias happens when the experimenter’s biases affect the study’s findings. Recall bias occurs when participants in the case and control groups have different levels of accuracy in their recollections.
There are two types of errors in research: Type I and Type II. A Type I error is when a true null hypothesis is incorrectly rejected, resulting in a false positive. A Type II error is when a false null hypothesis is not rejected, resulting in a false negative. It is essential to be aware of these biases and errors to ensure accurate and reliable research findings.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 10
Correct
-
Which of the following statements accurately describes the features of a distribution that is negatively skewed?
Your Answer: Mean < median < mode
Explanation: Skewed Data: Understanding the Relationship between Mean, Median, and Mode
When analyzing a data set, it is important to consider the shape of the distribution. In a normally distributed data set, the curve is symmetrical and bell-shaped, with the median, mode, and mean all equal. However, in skewed data sets, the distribution is asymmetrical, with the bulk of the data concentrated on one side of the figure.
In a negatively skewed distribution, the left tail is longer, and the bulk of the data is concentrated to the right of the figure. In contrast, a positively skewed distribution has a longer right tail, with the bulk of the data concentrated to the left of the figure. In both cases, the median is positioned between the mode and the mean, as it represents the halfway point of the distribution.
However, the mean is affected by extreme values or outliers, causing it to move away from the median in the direction of the tail. In positively skewed data, the mean is greater than the median, which is greater than the mode. In negatively skewed data, the mode is greater than the median, which is greater than the mean.
Understanding the relationship between mean, median, and mode in skewed data sets is crucial for accurate data analysis and interpretation. By recognizing the shape of the distribution, researchers can make informed decisions about which measures of central tendency to use and how to interpret their results.
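A minimal Python sketch of this ordering, using a small hypothetical data set with a long right tail (positive skew):

```python
from statistics import mean, median, mode

# Hypothetical positively skewed data: most values are small, a few are large
data = [1, 1, 1, 2, 2, 3, 4, 5, 10]

print(f"mode = {mode(data)}, median = {median(data)}, mean = {mean(data):.2f}")
# Output: mode = 1, median = 2, mean = 3.22, i.e. mode < median < mean.
# Mirroring the data (negative skew) reverses the ordering: mean < median < mode.
```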
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 11
Correct
-
What test would be appropriate for comparing the proportion of individuals who experience agranulocytosis while taking clozapine versus those who experience it while taking olanzapine?
Your Answer: Chi-squared test
Explanation: The dependent variable in this scenario is categorical, as individuals either experience agranulocytosis or do not. The independent variable is also categorical, with two options: olanzapine or clozapine. While there are various types of chi-squared tests, it is not necessary to focus on the distinctions between them.
Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and the Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
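A minimal Python sketch of the chi-squared test described above, applied to a 2x2 table of hypothetical counts (SciPy assumed available):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows are drugs, columns are agranulocytosis yes/no
observed = [[12, 488],  # clozapine: events, non-events (hypothetical)
            [3, 497]]   # olanzapine: events, non-events (hypothetical)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# A small p-value would suggest the proportions differ between the two drugs.
```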
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 12
Correct
-
What is the mathematical operation used to determine the value of the square root of the variance?
Your Answer: Standard deviation
Explanation: Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 13
Correct
-
What is a criterion used to evaluate the quality of reporting in randomized controlled trials?
Your Answer: CONSORT
Explanation: CONSORT (Consolidated Standards of Reporting Trials) is a checklist and flow diagram used to evaluate the quality of reporting in randomized controlled trials. Other guidelines serve other study designs: QUOROM and its successor PRISMA for systematic reviews and meta-analyses, STARD for diagnostic accuracy studies, and COREQ for qualitative research. Following these standards ensures that studies are reported accurately and transparently, which is crucial for the scientific community to evaluate and replicate the findings.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 14
Correct
-
What is the term used to describe the rate at which new cases of a disease are appearing, calculated by dividing the number of new cases by the total time that disease-free individuals are observed during a study period?
Your Answer: Incidence rate
Explanation: Measures of Disease Frequency: Incidence and Prevalence
Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.
Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.
It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
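A one-line worked example of the relation prevalence = incidence x duration, with hypothetical figures:

```python
# Hypothetical: 10 new cases per 1,000 person-years, average duration 5 years
incidence_rate = 10 / 1000
average_duration_years = 5

prevalence = incidence_rate * average_duration_years
print(f"expected prevalence = {prevalence:.0%}")  # 5% of the population
```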
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 15
Correct
-
What is the term used to describe the proposed idea that a researcher is attempting to validate?
Your Answer: Alternative hypothesis
Explanation: Understanding Hypothesis Testing in Statistics
In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.
The null hypothesis (Ho) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any difference observed is real rather than due to chance alone. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference or a change in one direction.
Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.
P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large or larger than the one observed when in reality there is no difference between two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected, and if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, statistical significance does not guarantee clinical significance: an effect may be too small to be meaningful in practice.
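A minimal Python sketch (hypothetical samples; SciPy assumed available) of the decision rule described above, comparing a p-value against a significance level of 0.05:

```python
from scipy.stats import ttest_ind

# Hypothetical scores for two independent groups
group_a = [5.1, 4.9, 6.0, 5.5, 5.8, 5.2]
group_b = [6.2, 6.8, 6.1, 7.0, 6.5, 6.9]

t_stat, p_value = ttest_ind(group_a, group_b)
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```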
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 16
Incorrect
-
You have been tasked with examining the potential advantage of establishing a program to assist elderly patients with panic disorder in the nearby region. What is the primary consideration in determining the amount of resources needed?
Your Answer: Bayesian factor
Correct Answer: Prevalence
Explanation: Measures of Disease Frequency: Incidence and Prevalence
Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.
Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.
It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 17
Correct
-
What is the appropriate significance test to use when analyzing the data of patients' serum cholesterol levels before and after receiving a new lipid-lowering therapy?
Your Answer: Paired t-test
Explanation: Since the serum cholesterol level is continuous data and assumed to be normally distributed, and the data is paired from the same individuals, the most suitable statistical test is the paired t-test.
Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and the Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
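A minimal Python sketch of the paired t-test (hypothetical before/after cholesterol values in mmol/L; SciPy assumed available):

```python
from scipy.stats import ttest_rel

# Hypothetical cholesterol levels for the same six patients, before and after
before = [6.2, 7.1, 5.9, 6.8, 7.4, 6.5]
after = [5.4, 6.3, 5.7, 6.0, 6.6, 5.9]

t_stat, p_value = ttest_rel(before, after)  # pairing: each patient is their own control
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```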
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 18
Incorrect
-
What is the relative risk reduction (RRR) for non-prescription opioid use at six months with the intervention (buprenorphine), comparing patients with opioid dependence who received the treatment with those who did not?
Your Answer: 3
Correct Answer: 0.45
Explanation: Relative risk reduction (RRR) is the proportional decrease in the event rate in the experimental group (the experimental event rate, EER) compared with the control group (the control event rate, CER). It can be expressed as:
RRR = 1 – (EER / CER)
For example, if the EER is 18 and the CER is 33, then the RRR can be calculated as:
RRR = 1 – (18 / 33) = 0.45, or 45%
Alternatively, the RRR can be calculated as the difference between the CER and EER divided by the CER:
RRR = (CER – EER) / CER
Using the same example, the RRR can be calculated as:
RRR = (33 – 18) / 33 = 0.45, or 45%
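The worked example above in Python, showing that both formulas give the same answer:

```python
cer = 0.33  # control event rate (33%)
eer = 0.18  # experimental event rate (18%)

rrr_1 = 1 - (eer / cer)    # first formula
rrr_2 = (cer - eer) / cer  # second formula
print(f"{rrr_1:.2f} {rrr_2:.2f}")  # both print 0.45, i.e. a 45% reduction
```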
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 19
Correct
-
What type of scale does the Beck Depression Inventory belong to?
Your Answer: Ordinal
Explanation: The Beck Depression Inventory cannot be classified as a ratio or interval scale, as the scores do not have a consistent and meaningful numerical value. Instead, it is considered an ordinal scale where scores can be ranked in order of severity, but the difference between scores may not be equal in terms of the level of depression experienced. For example, a change from 8 to 13 may be more significant than a change from 35 to 40.
Scales of Measurement in Statistics
In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.
Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude of the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.
Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order of hierarchy. Examples of nominal scales include genotype, blood type, and political party.
Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 20
Correct
-
If the weight of patients enrolled for a trial follows a normal distribution with a mean of 90kg and a standard deviation of 5kg, what is the probability that a randomly selected patient weighs between 85 and 95 kg?
Your Answer: 68.20%
Explanation: Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
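The answer can be checked directly with the normal distribution from the question (mean 90 kg, SD 5 kg); this Python sketch assumes SciPy is available:

```python
from scipy.stats import norm

# Probability of a weight between 85 and 95 kg, i.e. within 1 SD of the mean
p = norm.cdf(95, loc=90, scale=5) - norm.cdf(85, loc=90, scale=5)
print(f"{p:.2%}")  # 68.27%, commonly rounded to 68.2%
```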
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 21
Correct
-
What is the accurate formula for determining the pre-test odds?
Your Answer: Pre-test probability/ (1 - pre-test probability)
Explanation: Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
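A minimal Python sketch (hypothetical numbers) of the pre-test odds formula and of updating odds with a likelihood ratio, as on Fagan's nomogram:

```python
pre_test_probability = 0.20  # hypothetical
pre_test_odds = pre_test_probability / (1 - pre_test_probability)  # 0.25

positive_likelihood_ratio = 6.0  # hypothetical LR+
post_test_odds = pre_test_odds * positive_likelihood_ratio     # 1.5
post_test_probability = post_test_odds / (1 + post_test_odds)  # 0.60

print(pre_test_odds, post_test_odds, post_test_probability)
```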
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 22
Correct
-
What is the name of the test that compares the variance within a group to the variance between groups?
Your Answer: ANOVA
Explanation: Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and the Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
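A minimal Python sketch of a one-way ANOVA on three hypothetical groups (SciPy assumed available); the F statistic is the ratio of between-group to within-group variance:

```python
from scipy.stats import f_oneway

# Hypothetical scores for three independent groups
group_1 = [23, 25, 21, 27, 24]
group_2 = [30, 28, 33, 29, 31]
group_3 = [22, 26, 24, 23, 25]

f_stat, p_value = f_oneway(group_1, group_2, group_3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```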
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 23
Correct
-
What is a characteristic of skewed data?
Your Answer: For positively skewed data the mean is greater than the mode
Explanation: Skewed Data: Understanding the Relationship between Mean, Median, and Mode
When analyzing a data set, it is important to consider the shape of the distribution. In a normally distributed data set, the curve is symmetrical and bell-shaped, with the median, mode, and mean all equal. However, in skewed data sets, the distribution is asymmetrical, with the bulk of the data concentrated on one side of the figure.
In a negatively skewed distribution, the left tail is longer, and the bulk of the data is concentrated to the right of the figure. In contrast, a positively skewed distribution has a longer right tail, with the bulk of the data concentrated to the left of the figure. In both cases, the median is positioned between the mode and the mean, as it represents the halfway point of the distribution.
However, the mean is affected by extreme values or outliers, causing it to move away from the median in the direction of the tail. In positively skewed data, the mean is greater than the median, which is greater than the mode. In negatively skewed data, the mode is greater than the median, which is greater than the mean.
Understanding the relationship between mean, median, and mode in skewed data sets is crucial for accurate data analysis and interpretation. By recognizing the shape of the distribution, researchers can make informed decisions about which measures of central tendency to use and how to interpret their results.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 24
Correct
-
What are the odds of weight gain when a patient is prescribed risperidone, given that 6 out of 10 patients experience weight gain as a side effect?
Your Answer: 1.5
Explanation:
1. The odds of an event happening are calculated by dividing the number of times it occurs by the number of times it does not occur.
2. The odds of an event happening in a given situation are 6 to 4.
3. This translates to a ratio of 1.5, meaning the event is more likely to happen than not.
4. The risk of the event happening is calculated by dividing the number of times it occurs by the total number of possible outcomes.
5. In this case, the risk of the event happening is 6 out of 10.
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
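The risperidone example above as a short Python sketch, contrasting odds with risk:

```python
# From the question: 6 of 10 patients experience weight gain
events, total = 6, 10
non_events = total - events

odds = events / non_events  # 6/4 = 1.5
risk = events / total       # 6/10 = 0.6
print(f"odds = {odds}, risk = {risk}")
```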
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 25
Correct
-
Which of the following is not a valid type of validity?
Your Answer: Internal consistency
Explanation: Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 26
Incorrect
-
What is the term used to describe the percentage of a population's disease that would be eradicated if their disease rate was lowered to that of the unexposed group?
Your Answer: Population attributable risk
Correct Answer: Attributable proportion
Explanation: Disease Rates and Their Interpretation
Disease rates are a measure of the occurrence of a disease in a population. They are used to establish causation, monitor interventions, and measure the impact of exposure on disease rates. The attributable risk is the difference in the rate of disease between the exposed and unexposed groups. It tells us what proportion of deaths in the exposed group were due to the exposure. The relative risk is the risk of an event relative to exposure. It is calculated by dividing the rate of disease in the exposed group by the rate of disease in the unexposed group. A relative risk of 1 means there is no difference between the two groups. A relative risk of <1 means that the event is less likely to occur in the exposed group, while a relative risk of >1 means that the event is more likely to occur in the exposed group. The population attributable risk is the reduction in incidence that would be observed if the population were entirely unexposed. It can be calculated by multiplying the attributable risk by the prevalence of exposure in the population. The attributable proportion is the proportion of the disease that would be eliminated in a population if its disease rate were reduced to that of the unexposed group.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 27
Incorrect
-
A study was conducted to investigate the correlation between body mass index (BMI) and mortality in patients with schizophrenia. The study involved a cohort of 1000 patients with schizophrenia who were evaluated by measuring their weight and height, and calculating their BMI. The participants were then monitored for up to 15 years after the study commenced. The BMI levels were classified into three categories (high, average, low). The findings revealed that, after adjusting for age, gender, treatment method, and comorbidities, a high BMI at the beginning of the study was linked to a twofold increase in mortality.
How is this study best described?
Your Answer:
Correct Answer:
Explanation: The study is a prospective cohort study that observes the effect of BMI as an exposure on the group over time, without manipulating any risk factors or interventions.
Types of Primary Research Studies and Their Advantages and Disadvantages
Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are also summarized below.

Type of question and best type of study:
- Therapy: randomized controlled trial (RCT), cohort, case control, case series
- Diagnosis: cohort studies with comparison to gold standard test
- Prognosis: cohort studies, case control, case series
- Etiology/Harm: RCT, cohort studies, case control, case series
- Prevention: RCT, cohort studies, case control, case series
- Cost: economic analysis

Study types, advantages, and disadvantages:
- Randomized controlled trial. Advantages: unbiased distribution of confounders; blinding more likely; randomization facilitates statistical analysis. Disadvantages: expensive; time-consuming; volunteer bias; ethically problematic at times.
- Cohort study. Advantages: ethically safe; subjects can be matched; can establish timing and directionality of events; eligibility criteria and outcome assessments can be standardized; administratively easier and cheaper than an RCT. Disadvantages: controls may be difficult to identify; exposure may be linked to a hidden confounder; blinding is difficult; randomization not present; for rare diseases, large sample sizes or long follow-up are necessary.
- Case-control study. Advantages: quick and cheap; the only feasible method for very rare disorders or those with a long lag between exposure and outcome; fewer subjects needed than cross-sectional studies. Disadvantages: reliance on recall or records to determine exposure status; confounders; selection of control groups is difficult; potential recall and selection bias.
- Cross-sectional survey. Advantages: cheap and simple; ethically safe. Disadvantages: establishes association at most, not causality; susceptible to recall bias; confounders may be unequally distributed; Neyman bias; group sizes may be unequal.
- Ecological study. Advantages: cheap and simple; ethically safe. Disadvantages: ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals).

In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 28
Incorrect
-
What is a true statement about cost-benefit analysis?
Your Answer: Relates costs to a single clinical measure of effectiveness
Correct Answer: Benefits are valued in monetary terms
Explanation: The net benefit of a proposed scheme is calculated by subtracting the costs from the benefits in a CBA. For instance, if the benefits of the scheme are valued at £140k and the costs are £10k, then the net benefit would be £130k.
Methods of Economic Evaluation
There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.
Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.
Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.
Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the Quality-Adjusted Life Year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.
Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.
Costs in Economic Evaluation Studies
There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 29
Incorrect
-
Can you calculate the specificity of a general practitioner's diagnosis of depression based on the given data from the study assessing their ability to identify cases using GHQ scores?
Your Answer: 10%
Correct Answer: 91%
Explanation: The specificity of the GHQ test is 91%, meaning that 91% of individuals who do not have depression are correctly identified as such by the general practitioner using the test.
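A minimal Python sketch of the specificity calculation; the 2x2 counts are hypothetical and chosen so the result matches the 91% above:

```python
# Hypothetical counts among people who do not have depression
true_negatives = 91   # GP correctly says no depression
false_positives = 9   # GP incorrectly says depression

specificity = true_negatives / (true_negatives + false_positives)
print(f"specificity = {specificity:.0%}")  # 91%
```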
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 30
Incorrect
-
What is a characteristic of data that is positively skewed?
Your Answer: Mode < median < mean
Correct Answer:
Explanation: Skewed Data: Understanding the Relationship between Mean, Median, and Mode
When analyzing a data set, it is important to consider the shape of the distribution. In a normally distributed data set, the curve is symmetrical and bell-shaped, with the median, mode, and mean all equal. However, in skewed data sets, the distribution is asymmetrical, with the bulk of the data concentrated on one side of the figure.
In a negatively skewed distribution, the left tail is longer, and the bulk of the data is concentrated to the right of the figure. In contrast, a positively skewed distribution has a longer right tail, with the bulk of the data concentrated to the left of the figure. In both cases, the median is positioned between the mode and the mean, as it represents the halfway point of the distribution.
However, the mean is affected by extreme values or outliers, causing it to move away from the median in the direction of the tail. In positively skewed data, the mean is greater than the median, which is greater than the mode. In negatively skewed data, the mode is greater than the median, which is greater than the mean.
Understanding the relationship between mean, median, and mode in skewed data sets is crucial for accurate data analysis and interpretation. By recognizing the shape of the distribution, researchers can make informed decisions about which measures of central tendency to use and how to interpret their results.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 31
Incorrect
-
Which of the options below does not demonstrate selection bias?
Your Answer: Neyman bias
Correct Answer: Recall bias
Explanation: Types of Bias in Statistics
Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.
There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is incorrect because there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, the Hawthorne effect, and ecological fallacy are all subtypes of information bias.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 32
Correct
-
Which of the following is calculated by dividing the standard deviation by the square root of the sample size?
Your Answer: Standard error
Explanation: The standard error of the mean equals the standard deviation divided by the square root of the number of patients.
Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
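A minimal Python sketch of this formula on a hypothetical sample:

```python
from math import sqrt
from statistics import stdev

sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9]  # hypothetical values

# Standard error of the mean = standard deviation / sqrt(n)
sem = stdev(sample) / sqrt(len(sample))
print(f"SEM = {sem:.3f}")
```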
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 33
Incorrect
-
Which of the following checklists would be most helpful in preparing the manuscript of a survey analyzing the opinions of college students on mental health, as evaluated through a set of questionnaires?
Your Answer: PRISMA
Correct Answer: COREQ
Explanation: There are several reporting guidelines available for different types of research studies. The COREQ checklist, consisting of 32 items, is designed for reporting qualitative research that involves interviews and focus groups. The CONSORT Statement provides a 25-item checklist to aid in reporting randomized controlled trials (RCTs). For reporting the pooled findings of multiple studies, the QUOROM and PRISMA guidelines are useful. The STARD statement includes a checklist of 30 items and is designed for reporting diagnostic accuracy studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 34
Correct
-
Which type of validity is most affected by the phenomenon of regression towards the mean?
Your Answer: Internal validity
Explanation: Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 35
Correct
-
What is a criterion used to evaluate the quality of meta-analysis reporting?
Your Answer: QUOROM
Explanation: QUOROM (Quality of Reporting of Meta-analyses) is a checklist used to evaluate the quality of reporting in meta-analyses of randomized trials; it has since been superseded by PRISMA. Analogous guidelines exist for other designs: CONSORT for randomized controlled trials, STARD for diagnostic accuracy studies, and COREQ for qualitative research. Following these standards ensures that pooled findings are reported accurately and transparently so that they can be evaluated and replicated.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 36
Correct
-
What is the term used to describe the likelihood of correctly rejecting the null hypothesis when it is actually false?
Your Answer: Power of the test
Explanation:Understanding Hypothesis Testing in Statistics
In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.
The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference is real rather than due to chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference or a change in one direction.
Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. A Type I error occurs when the null hypothesis is rejected when it is true, while a Type II error occurs when the null hypothesis is not rejected when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.
P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, the p-value does not indicate clinical significance: a statistically significant difference may be too small to be clinically meaningful.
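As a sketch of this procedure (simulated data, scipy's standard two-sample t-test), the following shows how a p-value is obtained and compared with the significance level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)  # simulated scores, group A
group_b = rng.normal(loc=11.5, scale=2.0, size=30)  # simulated scores, group B

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # two-tailed by default
alpha = 0.05  # conventional significance level
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: do not reject the null hypothesis")
```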
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 37
Incorrect
-
In a data set divided into quartiles, what percentage of the values falls below the second quartile?
Your Answer: 25%
Correct Answer: 50%
Explanation:The median value is equivalent to Q2 (the second quartile).
Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
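A short Python check (made-up values) confirms that the second quartile is simply the median, which is why 50% of the data lies below it:

```python
import numpy as np

data = np.array([3, 5, 7, 8, 12, 13, 14, 18, 21])  # made-up, already sorted

q1, q2, q3 = np.percentile(data, [25, 50, 75])
print(q2 == np.median(data))  # True: Q2 is the median
print(q3 - q1)                # interquartile range
```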
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 38
Incorrect
-
In a study, the null hypothesis posits that there is no disparity between the mean values of group A and group B. Upon analysis, the study discovers a difference and presents a p-value of 0.04. Which statement below accurately reflects this scenario?
Your Answer: There is a 4% chance that there is a true difference between the mean values
Correct Answer: Assuming the null hypothesis is correct, there is a 4% chance that the difference detected between A and B has arisen by chance
Explanation:Understanding Hypothesis Testing in Statistics
In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.
The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference is real rather than due to chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference or a change in one direction.
Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. A Type I error occurs when the null hypothesis is rejected when it is true, while a Type II error occurs when the null hypothesis is not rejected when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.
P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, the p-value does not indicate clinical significance: a statistically significant difference may be too small to be clinically meaningful.
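One way to see that the alpha level is the Type I error rate is a small simulation in which the null hypothesis is true by construction; roughly 5% of tests still come out significant at alpha = 0.05 (a sketch using scipy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims = 10_000
false_positives = 0
for _ in range(n_sims):
    a = rng.normal(0, 1, 30)  # both groups drawn from the same population,
    b = rng.normal(0, 1, 30)  # so any detected difference is due to chance
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(false_positives / n_sims)  # close to 0.05, the Type I error rate
```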
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 39
Correct
-
Which category does convenience sampling fall under?
Your Answer: Non-probabilistic sampling
Explanation:Sampling Methods in Statistics
When collecting data from a population, it is often impractical and unnecessary to gather information from every single member. Instead, taking a sample is preferred. However, it is crucial that the sample accurately represents the population from which it is drawn. There are two main types of sampling methods: probability (random) sampling and non-probability (non-random) sampling.
Non-probability sampling methods, also known as judgement samples, are based on human choice rather than random selection. These samples are convenient and cheaper than probability sampling methods. Examples of non-probability sampling methods include voluntary sampling, convenience sampling, snowball sampling, and quota sampling.
Probability sampling methods give a more representative sample of the population than non-probability sampling. In each probability sampling technique, each population element has a known (non-zero) chance of being selected for the sample. Examples of probability sampling methods include simple random sampling, systematic sampling, cluster sampling, stratified sampling, and multistage sampling.
Simple random sampling is a sample in which every member of the population has an equal chance of being chosen. Systematic sampling involves selecting every kth member of the population. Cluster sampling involves dividing a population into separate groups (called clusters) and selecting a random sample of clusters. Stratified sampling involves dividing a population into groups (strata) and taking a random sample from each stratum. Multistage sampling is a more complex method that involves several stages and combines two or more sampling methods.
Overall, probability sampling methods give a more representative sample of the population, but non-probability sampling methods are often more convenient and cheaper. It is important to choose the appropriate sampling method based on the research question and available resources.
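A minimal Python sketch (hypothetical population of 100 members) contrasts three of the probability sampling methods described above:

```python
import random

population = list(range(1, 101))  # hypothetical population of 100 members

# Simple random sampling: every member has an equal chance of being chosen
simple = random.sample(population, 10)

# Systematic sampling: every kth member after a random start
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: a random sample taken from each stratum (here, two halves)
strata = [population[:50], population[50:]]
stratified = [m for stratum in strata for m in random.sample(stratum, 5)]

print(simple, systematic, stratified, sep="\n")
```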
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 40
Incorrect
-
What is the most suitable significance test to examine the potential association between serum level and degree of sedation in patients who are prescribed clozapine, where sedation is measured on a scale of 1-10?
Your Answer: Chi-squared test
Correct Answer: Logistic regression
Explanation:This scenario involves examining the association between two variables: the sedation scale (which is ordinal) and the serum clozapine level (which is on a ratio scale). The serum clozapine level can be treated as a parametric variable, but the sedation scale cannot, because an ordinal scale does not support arithmetic operations; the analysis must therefore use a method appropriate to ordinal, non-parametric data.
Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and the Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
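As one illustration of a rank-based, non-parametric approach to ordinal data (this is not the keyed answer above, just a related technique), Spearman's correlation can be computed with scipy on hypothetical values:

```python
from scipy import stats

serum_level = [350, 420, 510, 600, 640, 720, 800]  # hypothetical serum levels
sedation = [2, 3, 3, 5, 6, 8, 9]                   # hypothetical 1-10 ordinal ratings

rho, p_value = stats.spearmanr(serum_level, sedation)  # uses ranks, not raw values
print(rho, p_value)
```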
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 41
Correct
-
Which of the following is the correct description of construct validity?
Your Answer: A test has good construct validity if it has a high correlation with another test that measures the same construct
Explanation:Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 42
Correct
-
A worldwide epidemic of influenza is known as a:
Your Answer: Pandemic
Explanation:Epidemiology Key Terms
– Epidemic (Outbreak): A rise in disease cases above the anticipated level in a specific population during a particular time frame.
– Endemic: The regular or anticipated level of disease in a particular population.
– Pandemic: An epidemic that affects a significant number of individuals across multiple countries, regions, or continents.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 43
Incorrect
-
What is the modal age of the 7 women who participated in the qualitative study on self-harm among females, with ages of 18, 22, 40, 17, 23, 18, and 44?
Your Answer: 26
Correct Answer: 18
Explanation:Measures of Central Tendency
In this data set the mode is 18, the only age occurring more than once; for comparison, the mean is (18 + 22 + 40 + 17 + 23 + 18 + 44) / 7 = 26 and the median is 22. Measures of central tendency are used in descriptive statistics to summarize the middle or typical value of a data set. There are three common measures of central tendency: the mean, median, and mode.
The median is the middle value in a data set that has been arranged in numerical order. It is not affected by outliers and is used for ordinal data. The mode is the most frequent value in a data set and is used for categorical data. The mean is calculated by adding all the values in a data set and dividing by the number of values. It is sensitive to outliers and is used for interval and ratio data.
The appropriate measure of central tendency depends on the measurement scale of the data. For nominal and categorical data, the mode is used. For ordinal data, the median or mode is used. For interval data with a normal distribution, the mean is preferable, but the median or mode can also be used. For interval data with a skewed distribution, the median is used. For ratio data, the mean is preferable, but the median or mode can also be used for skewed data.
In addition to measures of central tendency, the range is also used to describe the spread of a data set. It is calculated by subtracting the smallest value from the largest value.
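Using the ages from this question, the three measures (and the range) can be checked directly with Python's statistics module:

```python
import statistics

ages = [18, 22, 40, 17, 23, 18, 44]

print(statistics.mean(ages))    # 26.0
print(statistics.median(ages))  # 22
print(statistics.mode(ages))    # 18, the only age occurring twice
print(max(ages) - min(ages))    # range: 27
```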
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 44
Correct
-
What is the term used to describe a scenario where a study participant alters their behavior due to the awareness of being observed?
Your Answer: Hawthorne effect
Explanation:Simpson’s Paradox is a real phenomenon in which an association between variables can change direction when data from multiple groups are merged into one. The other three options are not valid terms.
Types of Bias in Statistics
Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.
There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is incorrect because of an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, the Hawthorne effect, and the ecological fallacy are all subtypes of information bias.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 45
Incorrect
-
What resource is committed to offering complete articles of systematic reviews on the impacts of healthcare interventions?
Your Answer: SIGLE
Correct Answer: CDSR
Explanation:CDSR stands for the Cochrane Database of Systematic Reviews, which publishes full-text systematic reviews of the effects of healthcare interventions. When faced with a question like this, it is helpful to consider what the letters in each abbreviation might represent, even if you don’t know the answer right away.
Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.
When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.
There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 46
Correct
-
In a cohort study investigating the association between smoking and Alzheimer's dementia, what is the typical variable used to measure the outcome?
Your Answer: Relative risk
Explanation:The odds ratio is used in case-control studies to measure the association between exposure and outcome, while the relative risk is used in cohort studies to measure the risk of developing an outcome in the exposed group compared to the unexposed group. To convert an odds ratio to a relative risk, one can use the formula: relative risk = odds ratio / [(1 − incidence in the unexposed group) + (incidence in the unexposed group × odds ratio)].
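A small helper function makes the conversion concrete (a sketch of the formula above, with illustrative numbers):

```python
def or_to_rr(odds_ratio: float, p0: float) -> float:
    """Approximate the relative risk from an odds ratio.

    p0 is the incidence of the outcome in the unexposed group.
    """
    return odds_ratio / ((1 - p0) + p0 * odds_ratio)

print(or_to_rr(2.0, 0.01))  # rare outcome: RR (about 1.98) is close to the OR
print(or_to_rr(2.0, 0.30))  # common outcome: RR (about 1.54) diverges from the OR
```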
Types of Primary Research Studies and Their Advantages and Disadvantages
Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.
Type of Question Best Type of Study
Therapy Randomized controlled trial (RCT), cohort, case control, case series
Diagnosis Cohort studies with comparison to gold standard test
Prognosis Cohort studies, case control, case series
Etiology/Harm RCT, cohort studies, case control, case series
Prevention RCT, cohort studies, case control, case series
Cost Economic analysis

Study Type Advantages Disadvantages
Randomized Controlled Trial – Unbiased distribution of confounders – Blinding more likely – Randomization facilitates statistical analysis – Expensive – Time-consuming – Volunteer bias – Ethically problematic at times
Cohort Study – Ethically safe – Subjects can be matched – Can establish timing and directionality of events – Eligibility criteria and outcome assessments can be standardized – Administratively easier and cheaper than RCT – Controls may be difficult to identify – Exposure may be linked to a hidden confounder – Blinding is difficult – Randomization not present – For rare diseases, large sample sizes or long follow-up necessary
Case-Control Study – Quick and cheap – Only feasible method for very rare disorders or those with a long lag between exposure and outcome – Fewer subjects needed than cross-sectional studies – Reliance on recall or records to determine exposure status – Confounders – Selection of control groups is difficult – Potential bias: recall, selection
Cross-Sectional Survey – Cheap and simple – Ethically safe – Establishes association at most, not causality – Recall bias susceptibility – Confounders may be unequally distributed – Neyman bias – Group sizes may be unequal
Ecological Study – Cheap and simple – Ethically safe – Ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals)

In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 47
Correct
-
Which odds ratio, along with its confidence interval, indicates a statistically significant reduction in the odds?
Your Answer: 0.7 (0.1 - 0.8)
Explanation:Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between the OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while the risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of the risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of the outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
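To connect this with the question, an odds ratio and its 95% confidence interval can be computed from a hypothetical 2x2 table; a confidence interval that excludes 1 indicates a statistically significant effect:

```python
import math

# Hypothetical 2x2 table (not from the question): rows exposed/unexposed
a, b = 20, 80  # exposed: events, non-events
c, d = 40, 60  # unexposed: events, non-events

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(odds_ratio, (ci_low, ci_high))  # OR 0.375, CI about (0.20, 0.71): excludes 1
```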
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 48
Correct
-
What is the appropriate denominator for calculating cumulative incidence?
Your Answer: The number of disease free people at the beginning of a specified time period
Explanation:Measures of Disease Frequency: Incidence and Prevalence
Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.
Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.
It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
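The arithmetic behind these measures is simple; a sketch with invented cohort numbers:

```python
# Hypothetical cohort: 1,000 disease-free people followed for 2 years
population_at_risk = 1000
new_cases = 50
person_years = 1950  # reduced because cases stop contributing time at onset

cumulative_incidence = new_cases / population_at_risk  # a proportion (risk over 2 years)
incidence_rate = new_cases / person_years              # stated per person-year

avg_duration_years = 4                                 # assumed duration of the condition
prevalence = incidence_rate * avg_duration_years       # prevalence = incidence x duration

print(cumulative_incidence, incidence_rate, prevalence)
```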
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 49
Correct
-
Which of the following statements accurately describes the relationship between odds and odds ratio?
Your Answer: The odds ratio approximates to relative risk if the outcome of interest is rare
Explanation:Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between the OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while the risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of the risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of the outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
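The statement that the odds ratio approximates the relative risk only when the outcome is rare can be verified with two invented 2x2 tables:

```python
def odds_ratio(a, b, c, d):
    # a, b = exposed events/non-events; c, d = unexposed events/non-events
    return (a * d) / (b * c)

def relative_risk(a, b, c, d):
    return (a / (a + b)) / (c / (c + d))

# Rare outcome: OR (about 2.01) and RR (2.0) nearly agree
print(odds_ratio(10, 990, 5, 995), relative_risk(10, 990, 5, 995))

# Common outcome: OR (3.5) clearly overstates the RR (2.0)
print(odds_ratio(60, 40, 30, 70), relative_risk(60, 40, 30, 70))
```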
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 50
Incorrect
-
What is the purpose of descriptive statistics?
Your Answer: To make an estimation of a population mean from a sample
Correct Answer: To present the characteristic features of a data set
Explanation:Types of Statistics: Descriptive and Inferential
Statistics can be divided into two categories: descriptive and inferential. Descriptive statistics are used to describe and summarize data without making any generalizations beyond the data at hand. On the other hand, inferential statistics are used to make inferences about a population based on sample data.
Descriptive statistics are useful for identifying patterns and trends in data. Common measures used to describe a data set include measures of central tendency (such as the mean, median, and mode) and measures of variability or dispersion (such as the standard deviation or variance).
Inferential statistics, on the other hand, are used to make predictions or draw conclusions about a population based on sample data. These statistics are also used to determine the probability that observed differences between groups are reliable and not due to chance.
Overall, both descriptive and inferential statistics play important roles in analyzing and interpreting data. Descriptive statistics help us understand the characteristics of a data set, while inferential statistics allow us to make predictions and draw conclusions about larger populations.
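A short sketch (made-up sample) shows the two roles side by side: summarizing the sample itself, then inferring a population mean from it:

```python
import numpy as np
from scipy import stats

sample = np.array([5.1, 6.3, 4.8, 5.9, 6.1, 5.5, 5.0, 6.4])  # made-up measurements

# Descriptive: characterize the data at hand
print(sample.mean(), np.median(sample), sample.std(ddof=1))

# Inferential: estimate the population mean with a 95% confidence interval
sem = stats.sem(sample)
ci = stats.t.interval(0.95, df=len(sample) - 1, loc=sample.mean(), scale=sem)
print(ci)
```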
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-