-
Question 1
Incorrect
-
What is a true statement about standardised mortality ratios?
Your Answer: An SMR is not a useful measure when we are comparing two groups which differ significantly in age
Correct Answer: Direct standardisation requires that we know the age-specific rates of mortality in all the populations under study
Explanation:Calculation of Standardised Mortality Ratio (SMR)
To calculate the SMR, age and sex-specific death rates in the standard population are obtained. An estimate for the number of people in each category for both the standard and study populations is needed. The number of expected deaths in each age-sex group of the study population is calculated by multiplying the age-sex-specific rates in the standard population by the number of people in each category of the study population. The sum of all age- and sex-specific expected deaths gives the expected number of deaths for the whole study population. The observed number of deaths is then divided by the expected number of deaths to obtain the SMR.
Standardisation can be carried out using the direct or indirect method. The direct method is used when the age-sex-specific rates for the study population and the age-sex structure of the standard population are known. The indirect method is used when the age-specific rates for the study population are unknown or not available. This method uses the observed number of deaths in the study population and compares it to the number of deaths that would be expected if the age distribution were the same as that of the standard population.
The SMR can be interpreted as follows: an SMR less than 1.0 indicates fewer than expected deaths in the study population, an SMR of 1.0 indicates the number of observed deaths equals the number of expected deaths in the study population, and an SMR greater than 1.0 indicates more than expected deaths in the study population (excess deaths). It is sometimes expressed after multiplying by 100.
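The calculation described above can be sketched in a few lines of Python. All the rates and population counts below are invented purely for illustration, not drawn from any real study:

```python
# Hypothetical worked example of an indirectly standardised SMR.
# The age-sex-specific rates and population sizes are invented.

def expected_deaths(standard_rates, study_population):
    """Sum of (standard-population rate x study-population size) over each stratum."""
    return sum(standard_rates[stratum] * study_population[stratum]
               for stratum in standard_rates)

def smr(observed, expected):
    """Standardised mortality ratio: observed deaths / expected deaths."""
    return observed / expected

# Age-sex-specific death rates per person-year in the standard population
standard_rates = {"M 40-59": 0.005, "M 60-79": 0.02,
                  "F 40-59": 0.003, "F 60-79": 0.015}
# Number of people in each stratum of the study population
study_population = {"M 40-59": 2000, "M 60-79": 1000,
                    "F 40-59": 2500, "F 60-79": 1500}

expected = expected_deaths(standard_rates, study_population)  # 10 + 20 + 7.5 + 22.5 = 60
print(smr(observed=90, expected=expected))  # 90 / 60 = 1.5, i.e. excess deaths
```

An SMR of 1.5 here would indicate 50% more deaths in the study population than expected from the standard population's rates.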
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 2
Incorrect
-
The national health organization has a team of analysts to compare the effectiveness of two different cancer treatments in terms of cost and patient outcomes. They have gathered data on the number of years of life gained by each treatment and are seeking your recommendation on what type of analysis to conduct next. What analysis would you suggest they undertake?
Your Answer: Cost effectiveness analysis
Correct Answer: Cost utility analysis
Explanation:Cost utility analysis is a method used in health economics to determine the cost-effectiveness of a health intervention by comparing the cost of the intervention to the benefit it provides in terms of the number of years lived in full health. The cost is measured in monetary units, while the benefit is quantified using a measure that assigns values to different health states, including those that are less desirable than full health. In health technology assessments, this measure is typically expressed as quality-adjusted life years (QALYs).
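The weighting at the heart of cost-utility analysis can be sketched as follows. All figures are invented for illustration only:

```python
# A QALY weights each year of life gained by a utility value between
# 0 (death) and 1 (full health). Figures below are illustrative only.

def qalys(years_gained, utility):
    """Quality-adjusted life years = life-years gained x utility weight."""
    return years_gained * utility

# Treatment A: 10 extra years at utility 0.6; Treatment B: 8 extra years at 0.9
print(qalys(10, 0.6))  # 6.0 QALYs
print(qalys(8, 0.9))   # 7.2 QALYs: B yields more benefit despite fewer life-years
```

This is why a cost-utility analysis, rather than a plain comparison of life-years, is recommended when treatments differ in the quality of the years they add.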
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 3
Incorrect
-
What is the accurate formula for determining the pre-test odds?
Your Answer: (pre-test probability - 1)/ pre-test probability
Correct Answer: Pre-test probability/ (1 - pre-test probability)
Explanation:Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test's measurements match the true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
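The odds-probability conversions described above can be sketched directly. The pre-test probability and likelihood ratio below are arbitrary values chosen only to illustrate:

```python
# Sketch of the pre-test odds formula and its use with a likelihood ratio,
# the calculation that Fagan's nomogram performs graphically.

def probability_to_odds(p):
    """Pre-test odds = pre-test probability / (1 - pre-test probability)."""
    return p / (1 - p)

def odds_to_probability(odds):
    """Convert odds back to a probability: odds / (1 + odds)."""
    return odds / (1 + odds)

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Post-test odds = pre-test odds x likelihood ratio."""
    post_odds = probability_to_odds(pre_test_prob) * likelihood_ratio
    return odds_to_probability(post_odds)

print(probability_to_odds(0.2))       # 0.2 / 0.8 = 0.25
print(post_test_probability(0.2, 8))  # post-test odds 0.25 * 8 = 2 -> probability 2/3
```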
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 4
Incorrect
-
A study looks into the effects of alcohol consumption on female psychiatrists. A group of participants is selected and divided into four groups by the amount they drink. The first group drinks no alcohol, the second occasionally, the third often, and the fourth large and regular amounts. The group is followed up over the next ten years and the rates of cirrhosis are recorded.
What is the dependent variable in the study?
Your Answer: The amount of alcohol consumption
Correct Answer: Rates of liver cirrhosis
Explanation:Understanding Stats Variables
Variables are characteristics, numbers, or quantities that can be measured or counted. They are also known as data items. Examples of variables include age, sex, business income and expenses, country of birth, capital expenditure, class grades, eye colour, and vehicle type. The value of a variable may vary between data units in a population. In a typical study, there are three main variables: independent, dependent, and controlled variables.
The independent variable is something that the researcher purposely changes during the investigation. The dependent variable is the one that is observed and changes in response to the independent variable. Controlled variables are those that are not changed during the experiment. Dependent variables are affected by independent variables but not by controlled variables, as these do not vary throughout the study.
For instance, a researcher wants to test the effectiveness of a new weight loss medication. Participants are divided into three groups, with the first group receiving a placebo (0mg dosage), the second group a 10 mg dose, and the third group a 40 mg dose. After six months, the participants’ weights are measured. In this case, the independent variable is the dosage of the medication, as that is what is being manipulated. The dependent variable is the weight, as that is what is being measured.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 5
Incorrect
-
Which value of r indicates the highest degree of correlation?
Your Answer: -0.56
Correct Answer: -0.8
Explanation:It is important to distinguish between the direction of the correlation (the slope of the line) and its strength (the spread of the data). To emphasize this difference, the correct answer to this question is a negative value.
Stats: Correlation and Regression
Correlation and regression are related but not interchangeable terms. Correlation is used to test for association between variables, while regression is used to predict values of dependent variables from independent variables. Correlation can be linear, non-linear, or non-existent, and can be strong, moderate, or weak. The strength of a linear relationship is measured by the correlation coefficient, which can be positive or negative and ranges from very weak to very strong. However, the interpretation of a correlation coefficient depends on the context and purpose. Correlation can suggest association but cannot prove or disprove causation. Linear regression, on the other hand, can be used to predict how much one variable changes when a second variable is changed. Scatter graphs are used in correlation and regression analyses to visually determine if variables are associated and to detect outliers. When constructing a scatter graph, the dependent variable is typically placed on the vertical axis and the independent variable on the horizontal axis.
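The two related ideas can be sketched in pure Python: Pearson's r measures the strength of a linear association, while the least-squares line predicts one variable from the other. The data points are invented for illustration:

```python
# Minimal sketch of Pearson's correlation coefficient and simple linear
# regression, on made-up data.

from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def regression_line(xs, ys):
    """Return (slope, intercept) of the least-squares line y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]          # perfectly linear: y = 2x
print(pearson_r(xs, ys))       # ≈ 1.0, a perfect positive correlation
slope, intercept = regression_line(xs, ys)
print(slope * 6 + intercept)   # regression predicts y = 12.0 for x = 6
```

Note the division of labour: `pearson_r` only says how tightly the points hug a line, while `regression_line` is what lets us predict a value, matching the distinction drawn above.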
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 6
Incorrect
-
How are correlation and regression related?
Your Answer: A t-test is a common test of correlation
Correct Answer: Regression allows one variable to be predicted from another variable
Explanation:Stats: Correlation and Regression
Correlation and regression are related but not interchangeable terms. Correlation is used to test for association between variables, while regression is used to predict values of dependent variables from independent variables. Correlation can be linear, non-linear, or non-existent, and can be strong, moderate, or weak. The strength of a linear relationship is measured by the correlation coefficient, which can be positive or negative and ranges from very weak to very strong. However, the interpretation of a correlation coefficient depends on the context and purpose. Correlation can suggest association but cannot prove or disprove causation. Linear regression, on the other hand, can be used to predict how much one variable changes when a second variable is changed. Scatter graphs are used in correlation and regression analyses to visually determine if variables are associated and to detect outliers. When constructing a scatter graph, the dependent variable is typically placed on the vertical axis and the independent variable on the horizontal axis.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 7
Incorrect
-
A team of scientists aims to conduct a systematic review on the effectiveness of a new medication for elderly patients with dementia. They decide to search for studies published in languages other than English, as they know that positive results are more likely to be published in English-language journals, while negative results are more likely to be published in non-English language journals. What type of bias are they trying to prevent?
Your Answer: Analytic bias
Correct Answer: Tower of Babel bias
Explanation:When conducting a systematic review, restricting the selection of studies to those published only in English may introduce a bias known as the Tower of Babel effect. This occurs because studies conducted in non-English speaking countries that report positive results are more likely to be published in English language journals, while those with negative results are more likely to be published in non-English language journals.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 8
Incorrect
-
A study of 30 patients with hypertension compares the effectiveness of a new blood pressure medication with standard treatment. 80% of the new treatment group achieved target blood pressure levels at 6 weeks, compared with only 40% of the standard treatment group. What is the number needed to treat for the new treatment?
Your Answer: 4.5
Correct Answer: 3
Explanation:To calculate the Number Needed to Treat (NNT), we first need to find the Absolute Risk Reduction (ARR). Because the outcome here (achieving target blood pressure) is a benefit, the ARR is the Experimental Event Rate (EER) minus the Control Event Rate (CER).
Given that EER is 0.8 and CER is 0.4, we can calculate ARR as follows:
ARR = EER – CER
= 0.8 – 0.4
= 0.4
The NNT is the reciprocal of the ARR: NNT = 1 / 0.4 = 2.5. By convention this is rounded up to the next whole number, giving an NNT of 3.
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between the OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
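These measures can be sketched from the figures in this question: 80% of the new-treatment group and 40% of the standard-treatment group reached target blood pressure. The even 15/15 split of the 30 patients is an assumption made for illustration, as the question does not state the group sizes:

```python
# Worked sketch of the measures of effect described above.
# The 15/15 group split is assumed; only the 80% and 40% rates come
# from the question.

def risk_measures(events_exp, n_exp, events_ctrl, n_ctrl):
    """Compute common measures of effect from a 2x2 outcome table."""
    eer = events_exp / n_exp        # experimental event rate
    cer = events_ctrl / n_ctrl      # control event rate
    arr = eer - cer                 # absolute risk reduction (benefit here)
    odds_exp = events_exp / (n_exp - events_exp)
    odds_ctrl = events_ctrl / (n_ctrl - events_ctrl)
    return {"EER": eer, "CER": cer, "ARR": arr,
            "RR": eer / cer, "OR": odds_exp / odds_ctrl, "NNT": 1 / arr}

m = risk_measures(events_exp=12, n_exp=15, events_ctrl=6, n_ctrl=15)
print(m["ARR"])   # 0.8 - 0.4 = 0.4
print(m["NNT"])   # 1 / 0.4 = 2.5, conventionally rounded up to 3
```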
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 9
Incorrect
-
Which statement accurately describes bar charts?
Your Answer: The horizontal axis must have a scale
Correct Answer: The height of the bar indicates the frequency
Explanation:The frequency of each category of characteristic is displayed through the height of the bars in a bar chart. When dealing with discrete data, it is typically organized into distinct categories and presented in a bar chart. On the other hand, continuous data covers a range and the categories are not separate but rather blend into one another. This type of data is best represented through a histogram, which is similar to a bar chart but with bars that are connected.
Differences between Bar Charts and Histograms
Bar charts and histograms are both used to represent data, but they differ in their application and design. Bar charts are used to summarize nominal or ordinal data, while histograms are used for quantitative data. In a bar chart, the x-axis represents categories without a scale, and the y-axis represents frequencies. The columns are of equal width, and the height of the bar indicates the frequency. On the other hand, histograms have a scale on both axes, with the y-axis representing the relative frequency or frequency density. The width of the columns in a histogram can vary, and the area of the column indicates the true frequency. Overall, bar charts and histograms are useful tools for visualizing data, but their differences in design and application make them better suited for different types of data.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 10
Incorrect
-
How can it be determined if the study on the effectiveness of a new oral treatment for schizophrenia patients in preventing hospital admissions has yielded statistically significant results?
Your Answer: p-value < 0.5
Correct Answer:
Explanation:Understanding Hypothesis Testing in Statistics
In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.
The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference reflects a real effect rather than random chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish any difference or a change in one particular direction.
Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.
P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large as or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected, and if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, statistical significance does not imply clinical significance; a statistically significant difference may be too small to be clinically meaningful.
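The decision rule described above amounts to a single comparison. The p-values below are arbitrary illustrations, not results from any real study:

```python
# Minimal sketch of the significance decision rule: reject the null
# hypothesis only when the p-value falls below the pre-specified alpha.

ALPHA = 0.05  # conventional two-sided significance level

def decide(p_value, alpha=ALPHA):
    """Return the hypothesis-test decision for a given p-value."""
    return "reject H0" if p_value < alpha else "do not reject H0"

print(decide(0.03))  # reject H0 (statistically significant at the 5% level)
print(decide(0.30))  # do not reject H0
print(decide(0.05))  # do not reject H0 (the cutoff itself is not below alpha)
```

Note that the alpha level must be fixed before the data are analysed; choosing it after seeing the p-value inflates the Type I error rate.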
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 11
Incorrect
-
Based on the AUCs shown below, which screening test had the highest overall performance in differentiating between the presence or absence of bulimia?
Test - AUC
Test 1 - 0.42
Test 2 - 0.95
Test 3 - 0.82
Test 4 - 0.11
Test 5 - 0.67
Your Answer: Test 3
Correct Answer: Test 2
Explanation:Understanding ROC Curves and AUC Values
ROC (receiver operating characteristic) curves are graphs used to evaluate the effectiveness of a test in distinguishing between two groups, such as those with and without a disease. The curve plots the true positive rate against the false positive rate at different threshold settings. The goal is to find the best trade-off between sensitivity and specificity, which can be adjusted by changing the threshold. AUC (area under the curve) is a measure of the overall performance of the test, with higher values indicating better accuracy. The conventional grading of AUC values ranges from excellent to fail. ROC curves and AUC values are useful in evaluating diagnostic and screening tools, comparing different tests, and studying inter-observer variability.
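The AUC has a useful probabilistic reading: it equals the probability that a randomly chosen affected case scores higher on the test than a randomly chosen unaffected one (ties counting half). That lets it be computed directly from scores, sketched below with invented values:

```python
# Sketch of AUC computed pairwise from test scores (the Mann-Whitney
# formulation). All scores are invented for illustration.

def auc(scores_pos, scores_neg):
    """Probability a positive case outscores a negative case (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

diseased = [0.9, 0.8, 0.7]   # test scores of affected individuals
healthy = [0.2, 0.4, 0.8]    # test scores of unaffected individuals
print(auc(diseased, healthy))  # 7.5 / 9 ≈ 0.83, a reasonably good test
```

An AUC of 0.5 corresponds to a test no better than chance, which is why Test 4's 0.11 above is worse than useless: it ranks cases backwards.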
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 12
Incorrect
-
What is the most appropriate indicator of internal consistency?
Your Answer: Cohen's kappa
Correct Answer: Split half correlation
Explanation:Cronbach’s alpha is a statistical measure used to assess the internal consistency of a test or questionnaire. It is a widely used method to determine the reliability of a test by measuring the extent to which the items on the test are measuring the same construct. Cronbach’s alpha ranges from 0 to 1, with higher values indicating greater internal consistency. A value of 0.7 or higher is generally considered acceptable for research purposes. The calculation of Cronbach’s alpha involves comparing the variance of the total score with the variance of the individual items. It is important to note that Cronbach’s alpha assumes that all items are measuring the same construct, and therefore, it may not be appropriate for tests that measure multiple constructs.
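The comparison of item variances with the total-score variance can be sketched using the usual formula, alpha = k/(k−1) × (1 − Σ item variances / variance of totals). The questionnaire responses below are invented:

```python
# Sketch of Cronbach's alpha on made-up questionnaire data:
# three items, each answered by four respondents.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, respondents aligned."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

items = [[3, 4, 3, 3],   # item 1 scores across the four respondents
         [3, 4, 5, 4],   # item 2
         [3, 5, 4, 4]]   # item 3
print(round(cronbach_alpha(items), 2))  # ≈ 0.69, just below the usual 0.7 threshold
```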
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 13
Incorrect
-
What type of bias could arise from using only one psychiatrist to diagnose all participants in a study?
Your Answer: Confounding bias
Correct Answer: Information bias
Explanation:The scenario described above highlights the issue of information bias, which can arise due to errors in measuring, collecting, or interpreting data related to the exposure or disease. Specifically, interviewer/observer bias is a type of information bias that can occur when a single psychiatrist has a tendency to either over- or under-diagnose a condition, potentially skewing the study results.
Types of Bias in Statistics
Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.
There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is incorrect because there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, Hawthorne effect, and ecological fallacy are all subtypes of information bias.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 14
Correct
-
What is a characteristic of skewed data?
Your Answer: For positively skewed data the mean is greater than the mode
Explanation:Skewed Data: Understanding the Relationship between Mean, Median, and Mode
When analyzing a data set, it is important to consider the shape of the distribution. In a normally distributed data set, the curve is symmetrical and bell-shaped, with the median, mode, and mean all equal. However, in skewed data sets, the distribution is asymmetrical, with the bulk of the data concentrated on one side of the figure.
In a negatively skewed distribution, the left tail is longer, and the bulk of the data is concentrated to the right of the figure. In contrast, a positively skewed distribution has a longer right tail, with the bulk of the data concentrated to the left of the figure. In both cases, the median is positioned between the mode and the mean, as it represents the halfway point of the distribution.
However, the mean is affected by extreme values or outliers, causing it to move away from the median in the direction of the tail. In positively skewed data, the mean is greater than the median, which is greater than the mode. In negatively skewed data, the mode is greater than the median, which is greater than the mean.
Understanding the relationship between mean, median, and mode in skewed data sets is crucial for accurate data analysis and interpretation. By recognizing the shape of the distribution, researchers can make informed decisions about which measures of central tendency to use and how to interpret their results.
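The ordering of mode, median, and mean in a positively skewed sample can be checked directly with the standard library. The data set below is invented, with its bulk on the left and a long right tail:

```python
# Quick illustration of mode < median < mean in positively skewed data.
# The sample is invented for illustration.

from statistics import mean, median, mode

data = [2, 2, 2, 3, 3, 4, 5, 7, 10, 15]      # long right tail pulls the mean up
print(mode(data), median(data), mean(data))  # 2, 3.5, 5.3 -> mode < median < mean
```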
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 15
Incorrect
-
Which of the following is an inferential statistic?
Your Answer: Mode
Correct Answer: Standard error
Explanation:Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
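The step from description to inference can be sketched on an invented sample: the standard deviation describes the sample itself, while the standard error of the mean (s / √n) and its approximate 95% confidence interval make a claim about the population mean:

```python
# Sketch of SEM and an approximate 95% confidence interval on made-up data.
# Uses the large-sample z value 1.96; for a sample this small a t value
# would strictly be more appropriate.

from math import sqrt
from statistics import mean, stdev

sample = [12, 15, 11, 14, 13, 16, 12, 15]
n = len(sample)
m = mean(sample)                 # 13.5
sem = stdev(sample) / sqrt(n)    # standard error of the mean, ≈ 0.63
ci = (m - 1.96 * sem, m + 1.96 * sem)  # roughly (12.27, 14.73)
print(round(sem, 2), tuple(round(x, 2) for x in ci))
```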
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 16
Incorrect
-
What is a true statement about statistical power?
Your Answer: The power of a study is equivalent to the type I error
Correct Answer: The larger the sample size of a study the greater the power
Explanation:The Importance of Power in Statistical Analysis
Power is a crucial concept in statistical analysis as it helps researchers determine the number of participants needed in a study to detect a clinically significant difference or effect. It represents the probability of correctly rejecting the null hypothesis when it is false, which means avoiding a Type II error. Power values range from 0 to 1, with 0 indicating 0% and 1 indicating 100%. A power of 0.80 is generally considered the minimum acceptable level.
Several factors influence the power of a study, including sample size, effect size, and significance level. Larger sample sizes lead to more precise parameter estimations and increase the study’s ability to detect a significant effect. Effect size, which is determined at the beginning of a study, refers to the size of the difference between two means that leads to rejecting the null hypothesis. Finally, the significance level, also known as the alpha level, represents the probability of a Type I error. By considering these factors, researchers can optimize the power of their studies and increase the likelihood of detecting meaningful effects.
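The interplay of these factors can be sketched with the standard sample-size approximation for comparing two means, n per group = 2(z_alpha + z_beta)² σ² / δ². The z values used are the familiar 1.96 (5% two-sided alpha) and 0.84 (80% power); sigma and delta below are invented:

```python
# Sketch of a sample-size calculation for comparing two means.
# z values are hard-coded for alpha = 0.05 (two-sided) and power = 0.80;
# sigma and delta are illustrative only.

from math import ceil

def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate participants per arm to detect a mean difference delta."""
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

print(n_per_group(sigma=10, delta=5))    # 63 per group
print(n_per_group(sigma=10, delta=2.5))  # 251: halving the effect size quadruples n
```

The second line makes the trade-off concrete: smaller effects demand much larger samples for the same power.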
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 17
Incorrect
-
What methods are most effective in determining interobserver agreement?
Your Answer: T-test
Correct Answer: Kappa
Explanation:Kappa is used to assess the reliability of agreement between different raters.
Understanding the Kappa Statistic for Measuring Interobserver Variation
The kappa statistic, also known as Cohen’s kappa coefficient, is a useful tool for quantifying the level of agreement between independent observers. This measure can be applied in any situation where multiple observers are evaluating the same thing, such as in medical diagnoses or research studies. The kappa coefficient ranges from −1 to 1, with values at or below 0 indicating agreement no better than chance and 1 indicating perfect agreement. By using the kappa statistic, researchers and practitioners can gain insight into the level of interobserver variation present in their data, which can help to improve the accuracy and reliability of their findings. Overall, the kappa statistic is a valuable tool for understanding and measuring interobserver variation in a variety of contexts.
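Cohen's kappa corrects raw agreement for the agreement expected by chance: kappa = (p_observed − p_chance) / (1 − p_chance). A small sketch, with the two raters' categorical judgements invented for illustration:

```python
# Sketch of Cohen's kappa for two raters classifying the same eight cases.
# The ratings are invented.

from collections import Counter

def cohens_kappa(rater1, rater2):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    po = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in categories)  # chance agreement
    return (po - pe) / (1 - pe)

r1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
r2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(r1, r2))  # observed 0.75, chance 0.5 -> kappa 0.5
```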
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 18
Incorrect
-
What tool or method would be most effective in examining the relationship between a potential risk factor and a particular condition?
Your Answer: Period prevalence
Correct Answer: Incidence rate
Explanation:Measures of Disease Frequency: Incidence and Prevalence
Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.
Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.
It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
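The relationship stated above, prevalence ≈ incidence × average duration (for a stable, relatively rare condition), can be sketched with invented figures:

```python
# Sketch of prevalence ≈ incidence x mean duration, on illustrative figures.

def prevalence(incidence_rate, mean_duration_years):
    """Approximate point prevalence from incidence (per person-year) and duration."""
    return incidence_rate * mean_duration_years

# Chronic disease: 5 new cases per 1,000 person-years, lasting 10 years on average
print(prevalence(0.005, 10))      # 0.05 -> about 5% of the population affected
# Acute illness: the same incidence, but lasting about 2 weeks
print(prevalence(0.005, 2 / 52))  # a far lower prevalence despite equal incidence
```

This makes the point in the text concrete: for chronic diseases the long duration inflates prevalence well above incidence, while for short-lived illnesses the two can diverge the other way.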
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 19
Incorrect
-
A study comparing the benefit of two surgical procedures for patients over 65 concludes that the two procedures are equally effective. A researcher is then asked to conduct a cost analysis of the two procedures, considering only the financial expenses.
What is the best way to describe this approach?Your Answer: Cost-effectiveness analysis
Correct Answer: Cost-minimisation analysis
Explanation:Methods of Economic Evaluation
There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.
Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.
Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.
Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the quality-adjusted life year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.
Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.
Costs in Economic Evaluation Studies
There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 20
Incorrect
-
A university lecturer is interested in determining whether psychology students would like more training on working with children. They know that there are 5000 psychology students, of whom 60% are under the age of 25 and 40% are 25 or older. To avoid any potential age bias, they create two separate lists of students: one for those under 25 and one for those 25 or older. From these lists, they take a random sample from each to ensure an equal number of students from each age group. They then ask each selected student whether they would like more training on working with children.
How would you describe the sampling strategy of this study?
Your Answer: Cluster sampling
Correct Answer: Stratified sampling
Explanation:Sampling Methods in Statistics
When collecting data from a population, it is often impractical and unnecessary to gather information from every single member. Instead, taking a sample is preferred. However, it is crucial that the sample accurately represents the population from which it is drawn. There are two main types of sampling methods: probability (random) sampling and non-probability (non-random) sampling.
Non-probability sampling methods, also known as judgement samples, are based on human choice rather than random selection. These samples are convenient and cheaper than probability sampling methods. Examples of non-probability sampling methods include voluntary sampling, convenience sampling, snowball sampling, and quota sampling.
Probability sampling methods give a more representative sample of the population than non-probability sampling. In each probability sampling technique, each population element has a known (non-zero) chance of being selected for the sample. Examples of probability sampling methods include simple random sampling, systematic sampling, cluster sampling, stratified sampling, and multistage sampling.
Simple random sampling is a sample in which every member of the population has an equal chance of being chosen. Systematic sampling involves selecting every kth member of the population. Cluster sampling involves dividing a population into separate groups (called clusters) and selecting a random sample of clusters. Stratified sampling involves dividing a population into groups (strata) and taking a random sample from each stratum. Multistage sampling is a more complex method that involves several stages and combines two or more sampling methods.
Overall, probability sampling methods give a more representative sample of the population, but non-probability sampling methods are often more convenient and cheaper. It is important to choose the appropriate sampling method based on the research question and available resources.
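The lecturer's two-list design above can be sketched in code. A minimal illustration (Python; the population lists, names, and sample size are invented for the example):

```python
import random

def stratified_sample(strata, n_per_stratum, seed=0):
    """Take a simple random sample of fixed size from each stratum.

    `strata` maps stratum name -> list of population members.
    Mirrors the lecturer's design: equal numbers drawn from the
    under-25 and 25-or-older lists.
    """
    rng = random.Random(seed)
    return {name: rng.sample(members, n_per_stratum)
            for name, members in strata.items()}

# Toy population mirroring the question: 60% under 25, 40% aged 25+.
population = {
    "under_25": [f"u{i}" for i in range(3000)],
    "25_plus": [f"o{i}" for i in range(2000)],
}
sample = stratified_sample(population, n_per_stratum=50)
print(len(sample["under_25"]), len(sample["25_plus"]))  # 50 50
```

Sampling within each stratum separately is what distinguishes this from cluster sampling, where whole groups would be selected at random instead.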
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 21
Incorrect
-
If a new antihypertensive therapy is implemented for the secondary prevention of stroke, it would result in an absolute annual risk reduction of 0.5% compared to conventional therapy. However, the cost of the new treatment is £100 more per patient per year. What would be the cost of implementing the new therapy for each stroke prevented?
Your Answer: £50,000
Correct Answer: £20,000
Explanation:The new drug reduces the annual incidence of stroke by 0.5% and costs £100 more than conventional therapy. This means that for every 200 patients treated, one stroke would be prevented with the new drug compared to conventional therapy. The Number Needed to Treat (NNT) is 200 per year to prevent one stroke. Therefore, the annual cost of this treatment to prevent one stroke would be £20,000, which is the cost of treating 200 patients with the new drug (£100 x 200).
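The arithmetic above can be checked with a short sketch (Python; the function name is our own):

```python
def cost_per_event_prevented(arr, extra_cost_per_patient):
    """NNT = 1 / ARR; cost per event prevented = NNT x incremental cost."""
    nnt = 1 / arr
    return nnt, nnt * extra_cost_per_patient

# ARR of 0.5% per year, incremental cost of 100 pounds per patient per year.
nnt, cost = cost_per_event_prevented(arr=0.005, extra_cost_per_patient=100)
print(nnt, cost)  # 200.0 20000.0
```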
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 22
Incorrect
-
What type of data representation is used in a box and whisker plot?
Your Answer:
Correct Answer: Median
Explanation:Box and whisker plots are a useful tool for displaying information about the range, median, and quartiles of a data set. The whiskers only contain values within 1.5 times the interquartile range (IQR), and any values outside of this range are considered outliers and displayed as dots. The IQR is the difference between the 3rd and 1st quartiles, which divide the data set into quarters. Quartiles can also be used to determine the percentage of observations that fall below a certain value. However, quartiles and ranges have limitations because they do not take into account every score in a data set. To get a more representative idea of spread, measures such as variance and standard deviation are needed. Box plots can also provide information about the shape of a data set, such as whether it is skewed or symmetric. Notched boxes on the plot represent the confidence intervals of the median values.
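The quartile, IQR, and outlier-fence logic described above can be reproduced with the standard library (Python; the data set is invented for illustration, and `statistics.quantiles` with the inclusive method is one of several quartile conventions):

```python
from statistics import quantiles

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 100]
q1, median, q3 = quantiles(data, n=4, method="inclusive")
iqr = q3 - q1
# Whiskers extend to the most extreme values within 1.5 * IQR of the box;
# anything beyond these fences is plotted as an outlier dot.
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in data if x < lower_fence or x > upper_fence]
print(median, iqr, outliers)  # 6.5 5.5 [100]
```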
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 23
Incorrect
-
Which of the following is another term for the average of squared deviations from the mean?
Your Answer:
Correct Answer: Variance
Explanation:The variance can be expressed as the mean of the squared differences between each value and the mean.
Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
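A quick sketch showing that the variance is the mean of the squared deviations (Python; the scores are invented, and the result is cross-checked against the stdlib's population variance):

```python
from statistics import mean, pvariance

scores = [4, 8, 6, 2, 10]
mu = mean(scores)
# Variance = average of squared deviations from the mean.
variance = sum((x - mu) ** 2 for x in scores) / len(scores)
print(variance)  # 8.0
assert variance == pvariance(scores)  # stdlib agrees
```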
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 24
Incorrect
-
A smaller p-value indicates stronger evidence in favour of which hypothesis?
Your Answer:
Correct Answer: The alternative hypothesis
Explanation:The p-value is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true. A smaller p-value means the observed data would be unlikely if the null hypothesis were true, providing stronger evidence in favour of the alternative hypothesis.
Understanding Hypothesis Testing in Statistics
In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.
The null hypothesis (H0) is the claim that there is no real difference between two groups, with any observed difference being due to chance, while the alternative hypothesis (H1 or Ha) suggests that a real difference exists. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference or a change in one direction.
Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.
P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large or larger than that observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, the p-value does not indicate clinical significance, as a statistically significant effect may be too small to be meaningful.
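The decision rule in the last paragraph reduces to a one-line comparison. A minimal sketch (Python; the helper function is hypothetical):

```python
def decide(p_value, alpha=0.05):
    """Reject H0 only when the p-value falls below the significance level."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.03))  # reject H0
print(decide(0.05))  # fail to reject H0 (p must be strictly below the cutoff)
```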
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 25
Incorrect
-
What is a criterion used to evaluate the quality of meta-analysis reporting?
Your Answer:
Correct Answer: QUORUM
Explanation:QUOROM (Quality of Reporting of Meta-analyses, often spelled QUORUM) is a checklist used to evaluate the quality of reporting of meta-analyses; it has since been superseded by the PRISMA statement. Related reporting guidelines include CONSORT for randomised controlled trials, STROBE for observational studies, and STARD for diagnostic accuracy studies. These standards help ensure that research studies are reported accurately and transparently, which is crucial for the scientific community to evaluate and replicate the findings.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 26
Incorrect
-
A new antihypertensive medication is trialled for adults with high blood pressure. There are 500 adults in the control group and 300 adults assigned to take the new medication. After 6 months, 200 adults in the control group had high blood pressure compared to 30 adults in the group taking the new medication. What is the relative risk reduction?
Your Answer:
Correct Answer: 75%
Explanation:The RRR (Relative Risk Reduction) is calculated by dividing the ARR (Absolute Risk Reduction) by the CER (Control Event Rate). The CER is determined by dividing the number of control events by the total number of participants in the control group, which in this case is 200/500 or 0.4. The EER (Experimental Event Rate) is determined by dividing the number of events in the experimental group by the total number of participants in that group, which in this case is 30/300 or 0.1. The ARR is calculated by subtracting the EER from the CER, which is 0.4 – 0.1 = 0.3. Finally, the RRR is calculated by dividing the ARR by the CER, which is 0.3/0.4 or 0.75 (i.e. 75%).
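The same calculation as a short sketch (Python; the function name is our own):

```python
def risk_reductions(control_events, control_n, exp_events, exp_n):
    cer = control_events / control_n  # control event rate
    eer = exp_events / exp_n          # experimental event rate
    arr = cer - eer                   # absolute risk reduction
    rrr = arr / cer                   # relative risk reduction
    return cer, eer, arr, rrr

# 200/500 events in the control group, 30/300 in the treatment group.
cer, eer, arr, rrr = risk_reductions(200, 500, 30, 300)
print(cer, eer, round(arr, 3), round(rrr, 3))  # 0.4 0.1 0.3 0.75
```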
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
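The risk and odds definitions above can be illustrated with a generic 2×2 table (Python; the cell counts are invented for the example):

```python
def two_by_two(a, b, c, d):
    """a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed          # relative risk
    odds_ratio = (a / b) / (c / d)              # equivalently (a*d)/(b*c)
    return rr, odds_ratio

# Hypothetical trial: 30/100 exposed and 60/100 unexposed had the outcome.
rr, or_ = two_by_two(a=30, b=70, c=60, d=40)
print(round(rr, 2), round(or_, 2))  # 0.5 0.29
```

Note that the OR (0.29) is further from 1 than the RR (0.5) — the two only approximate each other when the outcome is rare.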
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 27
Incorrect
-
Which of the following statements accurately describes the standard error of the mean?
Your Answer:
Correct Answer: Gets smaller as the sample size increases
Explanation:As the sample size (n) increases, the standard error of the mean (SEM) decreases, because SEM = SD / √n, i.e. it is inversely proportional to the square root of the sample size. As n gets larger, the denominator grows, causing the overall value of the SEM to decrease. This means that larger sample sizes provide more accurate estimates of the population mean, as the calculated sample mean is expected to lie closer to the true population mean.
Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
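A minimal sketch of the SD/√n relationship (Python; sd = 10 is an arbitrary example value):

```python
import math

def sem(sd, n):
    """Standard error of the mean: SEM = SD / sqrt(n)."""
    return sd / math.sqrt(n)

# Quadrupling the sample size halves the SEM.
for n in (25, 100, 400):
    print(n, sem(sd=10, n=n))  # 2.0, 1.0, 0.5
```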
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 28
Incorrect
-
A master's student had noticed that nearly all of her patients with arthritis were over the age of 50. She was keen to investigate this further to see if there was an association.
She selected 100 patients with arthritis and 100 controls. Of the 100 patients with arthritis, 90 were over the age of 50. Of the 100 controls, only 40 were over the age of 50.
What is the odds ratio?
Your Answer:
Correct Answer: 13.5
Explanation:The odds of being over 50 in the arthritis group are 90/10 = 9, while the odds in the control group are 40/60 = 0.67. The odds ratio is therefore 9 / 0.67 = 13.5, i.e. the odds of being over 50 are 13.5 times higher in patients with arthritis than in controls.
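Applying the question's figures with the cross-product form of the odds ratio (Python sketch; the function name is our own):

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: (a/b) / (c/d), computed as (a*d) / (b*c).

    a = cases exposed, b = cases unexposed,
    c = controls exposed, d = controls unexposed.
    """
    return (a * d) / (b * c)

# Arthritis cases: 90 of 100 over 50; controls: 40 of 100 over 50.
print(odds_ratio(90, 10, 40, 60))  # 13.5
```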
Measures of Effect in Clinical Studies
When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.
To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.
The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 29
Incorrect
-
What is the primary benefit of conducting non-inferiority trials in the evaluation of a new medication?
Your Answer:
Correct Answer: Small sample size is required
Explanation:Study Designs for New Drugs: Options and Considerations
When launching a new drug, there are various study design options available. One common approach is a placebo-controlled trial, which can provide strong evidence but may be deemed unethical if established treatments are available. Additionally, it does not allow for a comparison with standard treatments. Therefore, statisticians must decide whether the trial aims to demonstrate superiority, equivalence, or non-inferiority to an existing treatment.
Superiority trials may seem like the obvious choice, but they require a large sample size to show a significant benefit over an existing treatment. Equivalence trials define an equivalence margin on a specified outcome, and if the confidence interval of the difference between the two drugs falls within this margin, the drugs are assumed to have a similar effect. Non-inferiority trials are similar to equivalence trials, but only the lower confidence interval needs to fall within the equivalence margin. These trials require smaller sample sizes, and once a drug has been shown to be non-inferior, larger studies may be conducted to demonstrate superiority.
It is important to note that drug companies may not necessarily aim to show superiority over an existing product. If they can demonstrate that their product is equivalent or even non-inferior, they may compete on price or convenience. Overall, the choice of study design depends on various factors, including ethical considerations, sample size, and the desired outcome.
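The non-inferiority decision described above — only the lower confidence bound needs to clear the margin — can be sketched as follows (Python; the helper is hypothetical and assumes larger treatment effects are better):

```python
def non_inferior(ci_lower, margin):
    """New drug is declared non-inferior when the lower bound of the CI
    for the effect difference (new - standard) lies above -margin."""
    return ci_lower > -margin

# CI lower bound of -0.01 clears a 0.05 margin; -0.08 does not.
print(non_inferior(ci_lower=-0.01, margin=0.05))  # True
print(non_inferior(ci_lower=-0.08, margin=0.05))  # False
```

An equivalence trial would additionally require the upper bound to fall below +margin, which is why non-inferiority trials can get away with smaller samples.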
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-
-
Question 30
Incorrect
-
In a cohort study investigating the association between smoking and Alzheimer's dementia, what is the typical variable used to measure the outcome?
Your Answer:
Correct Answer: Relative risk
Explanation:The odds ratio is used in case-control studies to measure the association between exposure and outcome, while the relative risk is used in cohort studies to measure the risk of developing an outcome in the exposed group compared to the unexposed group. To convert an odds ratio to a relative risk, one can use the formula: relative risk = odds ratio / (1 – incidence in the unexposed group + (incidence in the unexposed group × odds ratio)).
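The conversion formula (the Zhang–Yu approximation) as a short sketch (Python; `p0` denotes the outcome incidence in the unexposed group, and the example values are invented):

```python
def or_to_rr(odds_ratio, p0):
    """Zhang-Yu approximation: RR = OR / (1 - p0 + p0 * OR),
    where p0 is the outcome incidence in the unexposed group."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# With a rare outcome (p0 = 0.01) the OR approximates the RR;
# with a common outcome (p0 = 0.1) the RR is noticeably smaller.
print(round(or_to_rr(2.0, 0.01), 2))  # 1.98
print(round(or_to_rr(2.0, 0.1), 2))   # 1.82
```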
Types of Primary Research Studies and Their Advantages and Disadvantages
Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.
Type of Question Best Type of Study
Therapy Randomized controlled trial (RCT), cohort, case control, case series
Diagnosis Cohort studies with comparison to gold standard test
Prognosis Cohort studies, case control, case series
Etiology/Harm RCT, cohort studies, case control, case series
Prevention RCT, cohort studies, case control, case series
Cost Economic analysis
Study Type Advantages Disadvantages
Randomized Controlled Trial – Unbiased distribution of confounders – Blinding more likely – Randomization facilitates statistical analysis – Expensive – Time-consuming – Volunteer bias – Ethically problematic at times
Cohort Study – Ethically safe – Subjects can be matched – Can establish timing and directionality of events – Eligibility criteria and outcome assessments can be standardized – Administratively easier and cheaper than RCT – Controls may be difficult to identify – Exposure may be linked to a hidden confounder – Blinding is difficult – Randomization not present – For rare disease, large sample sizes or long follow-up necessary
Case-Control Study – Quick and cheap – Only feasible method for very rare disorders or those with long lag between exposure and outcome – Fewer subjects needed than cross-sectional studies – Reliance on recall or records to determine exposure status – Confounders – Selection of control groups is difficult – Potential bias: recall, selection
Cross-Sectional Survey – Cheap and simple – Ethically safe – Establishes association at most, not causality – Recall bias susceptibility – Confounders may be unequally distributed – Neyman bias – Group sizes may be unequal
Ecological Study – Cheap and simple – Ethically safe – Ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals)
In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.
-
This question is part of the following fields:
- Research Methods, Statistics, Critical Review And Evidence-Based Practice
-