  • Question 1 - A study which aims to see if women over 40 years old have...

    Incorrect

    • A study aims to see whether women over 40 years old have a different length of pregnancy by comparing the mean in a group of women of this age against the population mean. Which of the following tests would you use to compare the means?

      Your Answer: Independent samples t-test

      Correct Answer: One sample t-test

      Explanation:

      The appropriate statistical test for this study is a one-sample t-test, as it compares a single sample mean against a known population mean.

      Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and Cochran’s Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
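
      As a rough sketch of the one-sample logic (the gestation values and the critical value below are illustrative assumptions, not data from the question):

```python
import math
import statistics

def one_sample_t(sample, pop_mean):
    """t statistic comparing a single sample mean against a known population mean."""
    n = len(sample)
    s = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    return (statistics.mean(sample) - pop_mean) / (s / math.sqrt(n))

# Hypothetical gestation lengths (weeks) for ten women over 40,
# tested against an assumed population mean of 40 weeks.
sample = [38.5, 39.0, 39.2, 39.5, 39.8, 40.0, 40.1, 40.3, 38.9, 39.4]
t = one_sample_t(sample, 40.0)

# Compare |t| with the two-tailed 5% critical value for n - 1 = 9 df (about 2.262).
print(round(t, 2), abs(t) > 2.262)
```

      The same statistic (and an exact p-value) is available as scipy.stats.ttest_1samp where SciPy is installed.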

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      57.8 seconds
  • Question 2 - Which of the following statements accurately describes the features of a distribution that...

    Correct

    • Which of the following statements accurately describes the features of a distribution that is negatively skewed?

      Your Answer: Mean < median < mode

      Explanation:

      Skewed Data: Understanding the Relationship between Mean, Median, and Mode

      When analyzing a data set, it is important to consider the shape of the distribution. In a normally distributed data set, the curve is symmetrical and bell-shaped, with the median, mode, and mean all equal. However, in skewed data sets, the distribution is asymmetrical, with the bulk of the data concentrated on one side of the figure.

      In a negatively skewed distribution, the left tail is longer, and the bulk of the data is concentrated to the right of the figure. In contrast, a positively skewed distribution has a longer right tail, with the bulk of the data concentrated to the left of the figure. In both cases, the median is positioned between the mode and the mean, as it represents the halfway point of the distribution.

      However, the mean is affected by extreme values or outliers, causing it to move away from the median in the direction of the tail. In positively skewed data, the mean is greater than the median, which is greater than the mode. In negatively skewed data, the mode is greater than the median, which is greater than the mean.

      Understanding the relationship between mean, median, and mode in skewed data sets is crucial for accurate data analysis and interpretation. By recognizing the shape of the distribution, researchers can make informed decisions about which measures of central tendency to use and how to interpret their results.
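
      A small made-up data set shows this ordering directly:

```python
import statistics

# Toy left-skewed sample: one low outlier drags the mean below the median.
data = [1, 6, 8, 9, 9, 10]

mean = statistics.mean(data)      # 43 / 6, about 7.17
median = statistics.median(data)  # (8 + 9) / 2 = 8.5
mode = statistics.mode(data)      # 9

print(mean < median < mode)  # True: mean < median < mode under negative skew
```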

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      4.4 seconds
  • Question 3 - What is a true statement about statistical power? ...

    Correct

    • What is a true statement about statistical power?

      Your Answer: The larger the sample size of a study the greater the power

      Explanation:

      The Importance of Power in Statistical Analysis

      Power is a crucial concept in statistical analysis as it helps researchers determine the number of participants needed in a study to detect a clinically significant difference or effect. It represents the probability of correctly rejecting the null hypothesis when it is false, which means avoiding a Type II error. Power values range from 0 to 1, with 0 indicating 0% and 1 indicating 100%. A power of 0.80 is generally considered the minimum acceptable level.

      Several factors influence the power of a study, including sample size, effect size, and significance level. Larger sample sizes lead to more precise parameter estimations and increase the study’s ability to detect a significant effect. Effect size, which is determined at the beginning of a study, refers to the size of the difference between two means that leads to rejecting the null hypothesis. Finally, the significance level, also known as the alpha level, represents the probability of a Type I error. By considering these factors, researchers can optimize the power of their studies and increase the likelihood of detecting meaningful effects.
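
      The effect of sample size on power can be sketched with the normal-approximation formula for a two-sided, one-sample z-test (the effect size and alpha here are illustrative choices):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(n, d=0.5, z_crit=1.96):
    """Approximate power of a two-sided z-test at alpha = 0.05 for
    standardized effect size d and sample size n
    (the negligible opposite-tail term is ignored)."""
    return normal_cdf(d * math.sqrt(n) - z_crit)

small, large = power(20), power(80)
print(round(small, 2), round(large, 2))  # power rises with sample size
```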

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      13.6 seconds
  • Question 4 - What is the most accurate definition of 'opportunity cost'? ...

    Correct

    • What is the most accurate definition of 'opportunity cost'?

      Your Answer: The forgone benefit that would have been derived by an option not chosen

      Explanation:

      Opportunity Cost in Economics: Understanding the Value of Choices

      Opportunity cost is a crucial concept in economics that helps us make informed decisions. It refers to the value of the next-best alternative that we give up when we choose one option over another. This concept is particularly relevant when we have limited resources, such as a fixed budget, and need to make choices about how to allocate them.

      For instance, if we decide to spend our money on antidepressants, we cannot use that same money to pay for cognitive-behavioral therapy (CBT). Both options have a value, but we have to choose one over the other. The opportunity cost of choosing antidepressants over CBT is the value of the benefits we would have received from CBT but did not because we chose antidepressants instead.

      To compare the opportunity cost of different choices, economists often use quality-adjusted life years (QALYs). QALYs measure the value of health outcomes in terms of both quantity (life years gained) and quality (health-related quality of life). By using QALYs, we can compare the opportunity cost of different healthcare interventions and choose the one that provides the best value for our resources.

      In summary, understanding opportunity cost is essential for making informed decisions in economics and healthcare. By recognizing the value of the alternatives we give up, we can make better choices and maximize the benefits we receive from our limited resources.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      15.3 seconds
  • Question 5 - In an economic evaluation study, which of the options below would be considered...

    Incorrect

    • In an economic evaluation study, which of the options below would be considered a direct cost?

      Your Answer: Costs due to impaired productivity at work

      Correct Answer: Costs of training staff to provide an intervention

      Explanation:

      Methods of Economic Evaluation

      There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.

      Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.

      Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.

      Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the quality-adjusted life year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.

      Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.

      Costs in Economic Evaluation Studies

      There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      27.7 seconds
  • Question 6 - What percentage of values fall within one standard deviation above and below the...

    Correct

    • What percentage of values fall within one standard deviation above and below the mean?

      Your Answer: 68.20%

      Explanation:

      Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
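
      The 68% figure can be checked against the standard normal distribution with Python's statistics.NormalDist:

```python
from statistics import NormalDist

# Proportion of a normal distribution lying within one SD of the mean.
z = NormalDist()  # standard normal: mean 0, sd 1
within_one_sd = z.cdf(1) - z.cdf(-1)

print(round(within_one_sd * 100, 1))  # 68.3 (commonly quoted as 68.2%)
```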

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      4.8 seconds
  • Question 7 - Six men in a study on the sleep inducing effects of melatonin are...

    Correct

    • Six men in a study on the sleep inducing effects of melatonin are aged 52, 55, 56, 58, 59, and 92. What is the median age of the men included in the study?

      Your Answer: 57

      Explanation:

      – The median is the point with half the values above and half below.
      – In the given data set, there are an even number of values.
      – The median value is halfway between the two middle values.
      – The middle values are 56 and 58.
      – Therefore, the median is (56 + 58) / 2 = 57.

      Measures of Central Tendency

      Measures of central tendency are used in descriptive statistics to summarize the middle or typical value of a data set. There are three common measures of central tendency: the mean, median, and mode.

      The median is the middle value in a data set that has been arranged in numerical order. It is not affected by outliers and is used for ordinal data. The mode is the most frequent value in a data set and is used for categorical data. The mean is calculated by adding all the values in a data set and dividing by the number of values. It is sensitive to outliers and is used for interval and ratio data.

      The appropriate measure of central tendency depends on the measurement scale of the data. For nominal and categorical data, the mode is used. For ordinal data, the median or mode is used. For interval data with a normal distribution, the mean is preferable, but the median or mode can also be used. For interval data with skewed distribution, the median is used. For ratio data, the mean is preferable, but the median or mode can also be used for skewed data.

      In addition to measures of central tendency, the range is also used to describe the spread of a data set. It is calculated by subtracting the smallest value from the largest value.
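
      The calculation above can be reproduced with Python's statistics module:

```python
import statistics

ages = [52, 55, 56, 58, 59, 92]

# With an even number of values, statistics.median averages the two middle values.
median_age = statistics.median(ages)  # (56 + 58) / 2 = 57
print(median_age)

# The outlier (92) does not move the median, but it does pull the mean upward.
print(statistics.mean(ages))
```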

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      36.3 seconds
  • Question 8 - Which of the following statements accurately describes significance tests? ...

    Incorrect

    • Which of the following statements accurately describes significance tests?

      Your Answer: Type I errors are false negatives

      Correct Answer: The type I error level is not affected by sample size

      Explanation:

      The α value, also known as the type I error rate, is the predetermined probability that is considered acceptable for making an error. If the P value is lower than the predetermined α value, then the null hypothesis (Ho) is rejected, and it is concluded that the observed difference, association, or correlation is statistically significant.

      Understanding Hypothesis Testing in Statistics

      In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.

      The null hypothesis (Ho) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that the observed difference reflects a real effect rather than chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish any difference or a change in one particular direction.

      Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.

      P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large or larger when in reality there is no difference between two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected, and if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, the p-value does not indicate clinical significance; a statistically significant effect may be too small to be clinically meaningful.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      21 seconds
  • Question 9 - Which of the following statistical measures does not indicate the spread of variability...

    Correct

    • Which of the following statistical measures does not indicate the spread of variability of data?

      Your Answer: Mean

      Explanation:

      The mean, mode, and median are all measures of central tendency.

      Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      7.8 seconds
  • Question 10 - A team of scientists aims to prevent bias in their study on the...

    Correct

    • A team of scientists aims to prevent bias in their study on the effectiveness of a new medication for elderly patients with hypertension. They randomly assign 80 patients to the treatment group, of which 60 complete the 12-week trial. Another 80 patients are assigned to the placebo group, with 75 completing the trial. The researchers agree to conduct an intention-to-treat (ITT) analysis using the LOCF method. What type of bias are they attempting to eliminate?

      Your Answer: Attrition bias

      Explanation:

      To address the issue of drop-outs in a study, an intention to treat (ITT) analysis can be employed. Drop-outs can lead to attrition bias, which creates systematic differences in attrition across treatment groups. In an ITT analysis, all patients are included in the groups they were initially assigned to through random allocation. To handle missing data, two common methods are last observation carried forward and worst case scenario analysis.
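
      A minimal sketch of the LOCF idea (the weekly readings below are hypothetical; None marks visits missed after drop-out):

```python
def locf(scores):
    """Fill missing follow-up values with the last observed value
    (last observation carried forward)."""
    filled, last = [], None
    for value in scores:
        if value is not None:
            last = value
        filled.append(last)
    return filled

# A participant who dropped out after week 3: weeks 4 and 5 are missing.
weekly_bp = [160, 152, 148, None, None]
print(locf(weekly_bp))  # [160, 152, 148, 148, 148]
```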

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      21.4 seconds
  • Question 11 - What percentage of the data falls within the range of the lower and...

    Correct

    • What percentage of the data falls within the range of the lower and upper quartiles, as represented by the interquartile range?

      Your Answer: 50%

      Explanation:

      Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      32.4 seconds
  • Question 12 - You record the age of all of your students in your class. You...

    Incorrect

    • You record the age of all of your students in your class. You notice that your data set is skewed. What method would you use to describe the typical age of your students?

      Your Answer: Mean

      Correct Answer: Median

      Explanation:

      When dealing with a data set that is quantitative and measured on a ratio scale, the mean is typically the preferred measure of central tendency. However, if the data is skewed, the median may be a better choice as it is less affected by the skewness of the data.

      Measures of Central Tendency

      Measures of central tendency are used in descriptive statistics to summarize the middle or typical value of a data set. There are three common measures of central tendency: the mean, median, and mode.

      The median is the middle value in a data set that has been arranged in numerical order. It is not affected by outliers and is used for ordinal data. The mode is the most frequent value in a data set and is used for categorical data. The mean is calculated by adding all the values in a data set and dividing by the number of values. It is sensitive to outliers and is used for interval and ratio data.

      The appropriate measure of central tendency depends on the measurement scale of the data. For nominal and categorical data, the mode is used. For ordinal data, the median or mode is used. For interval data with a normal distribution, the mean is preferable, but the median or mode can also be used. For interval data with skewed distribution, the median is used. For ratio data, the mean is preferable, but the median or mode can also be used for skewed data.

      In addition to measures of central tendency, the range is also used to describe the spread of a data set. It is calculated by subtracting the smallest value from the largest value.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      12.3 seconds
  • Question 13 - Which option below represents a variable that belongs to an interval scale? ...

    Correct

    • Which option below represents a variable that belongs to an interval scale?

      Your Answer: The acidity of a group of patient's urine measured with a urine pH test

      Explanation:

      The categorization of patients on a hospital ward based on their diagnosis is a nominal variable.

      Scales of Measurement in Statistics

      In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.

      Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude on the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.

      Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order of hierarchy. Examples of nominal scales include genotype, blood type, and political party.

      Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      339.4 seconds
  • Question 14 - What value of NNT indicates the most positive result for an intervention? ...

    Correct

    • What value of NNT indicates the most positive result for an intervention?

      Your Answer: NNT = 1

      Explanation:

      An NNT of 1 indicates that every patient who receives the treatment experiences a positive outcome, while no patient in the control group experiences the same outcome. This represents an ideal outcome.

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
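
      Using assumed trial counts (hypothetical numbers, not from the question), the measures above can be computed directly:

```python
def measures_of_effect(events_tx, n_tx, events_ctrl, n_ctrl):
    """Effect measures from dichotomous (binary) trial outcomes."""
    risk_tx = events_tx / n_tx        # absolute risk, intervention group (EER)
    risk_ctrl = events_ctrl / n_ctrl  # absolute risk, control group (CER)
    rr = risk_tx / risk_ctrl          # relative risk
    rd = risk_ctrl - risk_tx          # risk difference (absolute risk reduction)
    nnt = 1 / rd                      # number needed to treat
    odds_tx = events_tx / (n_tx - events_tx)
    odds_ctrl = events_ctrl / (n_ctrl - events_ctrl)
    odds_ratio = odds_tx / odds_ctrl
    return rr, rd, nnt, odds_ratio

# Hypothetical trial: 10/100 bad outcomes on treatment vs 20/100 on control.
rr, rd, nnt, odds_ratio = measures_of_effect(10, 100, 20, 100)
print(rr, rd, nnt, round(odds_ratio, 2))  # 0.5 0.1 10.0 0.44
```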

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      4.4 seconds
  • Question 15 - What is the intervention (buprenorphine) relative risk reduction for non-prescription opioid use at...

    Incorrect

    • What is the intervention (buprenorphine) relative risk reduction for non-prescription opioid use at six months in the group of patients with opioid dependence who received the treatment compared to those who did not receive it?

      Your Answer: 7

      Correct Answer: 0.45

      Explanation:

      Relative risk reduction (RRR) is the proportional decrease in the event rate in the experimental group (EER) relative to the event rate in the control group (CER). It can be expressed as:

      RRR = 1 – (EER / CER)

      For example, if the EER is 18 and the CER is 33, then the RRR can be calculated as:

      RRR = 1 – (18 / 33) = 0.45, or 45%

      Alternatively, the RRR can be calculated as the difference between the CER and EER divided by the CER:

      RRR = (CER – EER) / CER

      Using the same example, the RRR can be calculated as:

      RRR = (33 – 18) / 33 = 0.45, or 45%
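
      The worked example translates directly into code:

```python
def relative_risk_reduction(eer, cer):
    """RRR = 1 - (EER / CER): the proportional drop in event rate
    in the experimental group relative to the control group."""
    return 1 - (eer / cer)

# Event rates from the worked example: 18% with the intervention vs 33% without.
rrr = relative_risk_reduction(18, 33)
print(round(rrr, 2))  # 0.45
```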

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      24 seconds
  • Question 16 - A new clinical trial has found a correlation between alcohol consumption and lung...

    Correct

    • A new clinical trial has found a correlation between alcohol consumption and lung cancer. Considering the well-known link between alcohol consumption and smoking, what is the most probable explanation for this new association?

      Your Answer: Confounding

      Explanation:

      The observed link between alcohol consumption and lung cancer is likely due to confounding factors, such as cigarette smoking. Confounding variables are those that are associated with both the independent and dependent variables, in this case, alcohol consumption and lung cancer.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      6.4 seconds
  • Question 17 - Which term is used to refer to the alternative hypothesis in hypothesis testing?...

    Correct

    • Which term is used to refer to the alternative hypothesis in hypothesis testing?

      a) Research hypothesis
      b) Statistical hypothesis
      c) Simple hypothesis
      d) Null hypothesis
      e) Composite hypothesis

      Your Answer: Research hypothesis

      Explanation:

      Understanding Hypothesis Testing in Statistics

      In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.

      The null hypothesis (Ho) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that the observed difference reflects a real effect rather than chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish any difference or a change in one particular direction.

      Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.

      P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, the p-value says nothing about clinical significance: a statistically significant difference may be too small to be clinically meaningful.
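      The p-value decision rule can be illustrated with a permutation (resampling) test, which estimates a p-value directly by reshuffling group labels. This is an illustrative sketch only; the technique and the data below are not taken from the text:

```python
import random

def permutation_p_value(group_a, group_b, n_resamples=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    The p-value is the proportion of random label reshuffles that produce
    an absolute mean difference at least as large as the observed one.
    """
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            hits += 1
    return hits / n_resamples

# Clearly separated groups: p falls below the usual alpha of 0.05,
# so the null hypothesis of "no real difference" would be rejected.
p = permutation_p_value([5.1, 4.9, 5.3, 5.2, 5.0], [4.1, 4.0, 4.3, 3.9, 4.2])
print(p < 0.05)  # → True
```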

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      16.4
      Seconds
  • Question 18 - A pediatrician becomes interested in a newly identified and rare pediatric syndrome. They...

    Correct

    • A pediatrician becomes interested in a newly identified and rare pediatric syndrome. They are interested to investigate if previous exposure to herpes viruses may put children at increased risk. Which of the following study designs would be most appropriate?

      Your Answer: Case-control study

      Explanation:

      Case-control studies are useful in studying rare diseases, as it would be impractical to follow a large group of people for a long period of time to accrue enough incident cases. For instance, if a disease occurs at a rate of 1 in 1,000,000 per year, accruing just ten cases would require roughly 10,000,000 person-years of follow-up, for example 1,000,000 people followed for ten years. This is clearly not feasible, so a case-control study provides a more practical approach to studying rare diseases.

      Types of Primary Research Studies and Their Advantages and Disadvantages

      Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.

      Type of Question Best Type of Study

      Therapy Randomized controlled trial (RCT), cohort, case control, case series
      Diagnosis Cohort studies with comparison to gold standard test
      Prognosis Cohort studies, case control, case series
      Etiology/Harm RCT, cohort studies, case control, case series
      Prevention RCT, cohort studies, case control, case series
      Cost Economic analysis

      Study Type Advantages Disadvantages

      Randomized Controlled Trial – Unbiased distribution of confounders – Blinding more likely – Randomization facilitates statistical analysis – Expensive – Time-consuming – Volunteer bias – Ethically problematic at times
      Cohort Study – Ethically safe – Subjects can be matched – Can establish timing and directionality of events – Eligibility criteria and outcome assessments can be standardized – Administratively easier and cheaper than RCT – Controls may be difficult to identify – Exposure may be linked to a hidden confounder – Blinding is difficult – Randomization not present – For rare disease, large sample sizes or long follow-up necessary
      Case-Control Study – Quick and cheap – Only feasible method for very rare disorders or those with long lag between exposure and outcome – Fewer subjects needed than cross-sectional studies – Reliance on recall or records to determine exposure status – Confounders – Selection of control groups is difficult – Potential bias: recall, selection
      Cross-Sectional Survey – Cheap and simple – Ethically safe – Establishes association at most, not causality – Recall bias susceptibility – Confounders may be unequally distributed – Neyman bias – Group sizes may be unequal
      Ecological Study – Cheap and simple – Ethically safe – Ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals)

      In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      22.4
      Seconds
  • Question 19 - What test would be the most effective in verifying the suitability of using...

    Incorrect

    • What test would be the most effective in verifying the suitability of using a parametric test on a given dataset?

      Your Answer: Log Rank test

      Correct Answer: Lilliefors test

      Explanation:

      Normality Testing in Statistics

      In statistics, parametric tests are based on the assumption that the data set follows a normal distribution, whereas non-parametric tests make no such assumption but are less powerful. Several tests are available to check whether a distribution is normally distributed, including the Kolmogorov-Smirnov (goodness-of-fit) test, the Lilliefors test (a modification of the Kolmogorov-Smirnov test for when the distribution's mean and variance are estimated from the data), the Jarque-Bera test, the Shapiro-Wilk test, and P-P and Q-Q plots. If a data set is not normally distributed, it may be possible to transform it so that it follows a normal distribution, for example by taking the logarithm of the values.
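      Of the tests listed, the Jarque-Bera test is simple enough to sketch directly: it combines sample skewness and kurtosis into one statistic that is approximately chi-squared with 2 degrees of freedom under normality, so values above about 5.99 suggest non-normality at the 5% level. A minimal stdlib sketch with made-up data:

```python
def jarque_bera(xs):
    """Jarque-Bera normality statistic from sample skewness and kurtosis.

    JB = n/6 * (skew**2 + (kurtosis - 3)**2 / 4); under normality JB is
    approximately chi-squared with 2 degrees of freedom.
    """
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    m4 = sum((x - mean) ** 4 for x in xs) / n  # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

symmetric = list(range(1, 21))  # symmetric data: zero skewness
skewed = [1] * 19 + [100]       # one extreme value: heavy skew
print(jarque_bera(skewed) > jarque_bera(symmetric))  # → True
```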

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      503.7
      Seconds
  • Question 20 - For what purpose is the GRADE approach used in the field of evidence...

    Incorrect

    • For what purpose is the GRADE approach used in the field of evidence based medicine?

      Your Answer: Suggesting suitable randomisation techniques

      Correct Answer: Assessing the quality of evidence

      Explanation:

      Levels and Grades of Evidence in Evidence-Based Medicine

      To evaluate the quality of evidence on a given question, levels and grades of evidence are used. The traditional hierarchy approach places systematic reviews of randomized controlled trials at the top and case series/reports at the bottom. However, this approach is overly simplistic, as certain research questions cannot be answered using RCTs. To address this, the Oxford Centre for Evidence-Based Medicine introduced their 2011 Levels of Evidence system, which separates the types of study questions and gives a hierarchy for each.

      The grading approach to be aware of is the GRADE system, which classifies the quality of evidence as high, moderate, low, or very low. The process begins by formulating a study question and identifying specific outcomes. Outcomes are then graded as critical or important. The evidence is then gathered and criteria are used to grade it, with the type of evidence being a significant factor. Evidence can be promoted or downgraded based on certain criteria, such as limitations to study quality, inconsistency, uncertainty about directness, imprecise or sparse data, and reporting bias. The GRADE system allows observational studies to be promoted to high-quality evidence under the right circumstances.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      96.5
      Seconds
  • Question 21 - What is the GRADE approach used in evidence based medicine and what are...

    Incorrect

    • What is the GRADE approach used in evidence based medicine and what are its characteristics?

      Your Answer: It offers five levels of evidence quality

      Correct Answer: The system can be applied to observational studies

      Explanation:

      Levels and Grades of Evidence in Evidence-Based Medicine

      To evaluate the quality of evidence on a given question, levels and grades of evidence are used. The traditional hierarchy approach places systematic reviews of randomized controlled trials at the top and case series/reports at the bottom. However, this approach is overly simplistic, as certain research questions cannot be answered using RCTs. To address this, the Oxford Centre for Evidence-Based Medicine introduced their 2011 Levels of Evidence system, which separates the types of study questions and gives a hierarchy for each.

      The grading approach to be aware of is the GRADE system, which classifies the quality of evidence as high, moderate, low, or very low. The process begins by formulating a study question and identifying specific outcomes. Outcomes are then graded as critical or important. The evidence is then gathered and criteria are used to grade it, with the type of evidence being a significant factor. Evidence can be promoted or downgraded based on certain criteria, such as limitations to study quality, inconsistency, uncertainty about directness, imprecise or sparse data, and reporting bias. The GRADE system allows observational studies to be promoted to high-quality evidence under the right circumstances.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      5.5
      Seconds
  • Question 22 - Which study design involves conducting an experiment? ...

    Incorrect

    • Which study design involves conducting an experiment?

      Your Answer: A case-control study

      Correct Answer: A randomised control study

      Explanation:

      Types of Primary Research Studies and Their Advantages and Disadvantages

      Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.

      Type of Question Best Type of Study

      Therapy Randomized controlled trial (RCT), cohort, case control, case series
      Diagnosis Cohort studies with comparison to gold standard test
      Prognosis Cohort studies, case control, case series
      Etiology/Harm RCT, cohort studies, case control, case series
      Prevention RCT, cohort studies, case control, case series
      Cost Economic analysis

      Study Type Advantages Disadvantages

      Randomized Controlled Trial – Unbiased distribution of confounders – Blinding more likely – Randomization facilitates statistical analysis – Expensive – Time-consuming – Volunteer bias – Ethically problematic at times
      Cohort Study – Ethically safe – Subjects can be matched – Can establish timing and directionality of events – Eligibility criteria and outcome assessments can be standardized – Administratively easier and cheaper than RCT – Controls may be difficult to identify – Exposure may be linked to a hidden confounder – Blinding is difficult – Randomization not present – For rare disease, large sample sizes or long follow-up necessary
      Case-Control Study – Quick and cheap – Only feasible method for very rare disorders or those with long lag between exposure and outcome – Fewer subjects needed than cross-sectional studies – Reliance on recall or records to determine exposure status – Confounders – Selection of control groups is difficult – Potential bias: recall, selection
      Cross-Sectional Survey – Cheap and simple – Ethically safe – Establishes association at most, not causality – Recall bias susceptibility – Confounders may be unequally distributed – Neyman bias – Group sizes may be unequal
      Ecological Study – Cheap and simple – Ethically safe – Ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals)

      In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      7.4
      Seconds
  • Question 23 - A worldwide epidemic of influenza is known as a: ...

    Incorrect

    • A worldwide epidemic of influenza is known as a:

      Your Answer: Endemic

      Correct Answer: Pandemic

      Explanation:

      Epidemiology Key Terms

      – Epidemic (Outbreak): A rise in disease cases above the anticipated level in a specific population during a particular time frame.
      – Endemic: The regular or anticipated level of disease in a particular population.
      – Pandemic: Epidemics that affect a significant number of individuals across multiple countries, regions, or continents.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      25.6
      Seconds
  • Question 24 - A new drug is trialled for the treatment of heart disease. Drug A...

    Incorrect

    • A new drug is trialled for the treatment of heart disease. Drug A is given to 500 people with early stage heart disease and a placebo is given to 450 people with the same condition. After 5 years, 300 people who received drug A had survived compared to 225 who received the placebo. What is the number needed to treat to save one life?

      Your Answer: 2

      Correct Answer: 10

      Explanation:

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement versus no improvement.

      To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of the risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
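      Applying these definitions to the trial above (300/500 survived on drug A versus 225/450 on placebo) reproduces the NNT of 10. A minimal sketch:

```python
def effect_measures(events_tx, n_tx, events_ctl, n_ctl):
    """Effect measures for a two-arm trial with a binary outcome.

    'events' is the count of the outcome of interest in each arm
    (here, survival at 5 years).
    """
    risk_tx = events_tx / n_tx
    risk_ctl = events_ctl / n_ctl
    rd = risk_tx - risk_ctl                       # risk difference
    rr = risk_tx / risk_ctl                       # relative risk
    odds_ratio = (events_tx / (n_tx - events_tx)) / (
        events_ctl / (n_ctl - events_ctl))        # odds ratio
    nnt = 1 / abs(rd)                             # number needed to treat
    return rd, rr, odds_ratio, nnt

# Figures from the trial above: 300/500 survived on drug A, 225/450 on placebo
rd, rr, odds_ratio, nnt = effect_measures(300, 500, 225, 450)
print(round(rd, 2), round(rr, 2), round(odds_ratio, 2), round(nnt))
# → 0.1 1.2 1.5 10
```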

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      8
      Seconds
  • Question 25 - What is the purpose of descriptive statistics? ...

    Correct

    • What is the purpose of descriptive statistics?

      Your Answer: To present characteristic features of a data set

      Explanation:

      Types of Statistics: Descriptive and Inferential

      Statistics can be divided into two categories: descriptive and inferential. Descriptive statistics are used to describe and summarize data without making any generalizations beyond the data at hand. On the other hand, inferential statistics are used to make inferences about a population based on sample data.

      Descriptive statistics are useful for identifying patterns and trends in data. Common measures used to describe a data set include measures of central tendency (such as the mean, median, and mode) and measures of variability or dispersion (such as the standard deviation or variance).

      Inferential statistics, on the other hand, are used to make predictions or draw conclusions about a population based on sample data. These statistics are also used to determine the probability that observed differences between groups are reliable and not due to chance.

      Overall, both descriptive and inferential statistics play important roles in analyzing and interpreting data. Descriptive statistics help us understand the characteristics of a data set, while inferential statistics allow us to make predictions and draw conclusions about larger populations.
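      Python's standard statistics module provides the common descriptive measures directly (the data below are illustrative, not from the text):

```python
import statistics

data = [12, 15, 15, 18, 20, 22, 22, 22, 25, 30]

# Measures of central tendency
print(statistics.mean(data))    # → 20.1
print(statistics.median(data))  # → 21.0
print(statistics.mode(data))    # → 22

# Measures of variability (dispersion); stdev is the square root of variance
print(round(statistics.stdev(data), 2))
print(round(statistics.variance(data), 2))
```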

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      29.2
      Seconds
  • Question 26 - How can it be determined if the study on the effectiveness of a...

    Incorrect

    • How can it be determined if the study on the effectiveness of a new oral treatment for schizophrenia patients in preventing hospital admissions has yielded statistically significant results?

      Your Answer: p-value < 0.5

      Correct Answer:

      Explanation:

      Understanding Hypothesis Testing in Statistics

      In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.

      The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference reflects a real effect rather than random chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish any difference or a change in one particular direction.

      Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.

      P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, the p-value says nothing about clinical significance: a statistically significant difference may be too small to be clinically meaningful.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      38
      Seconds
  • Question 27 - A team of scientists plans to carry out a placebo-controlled randomized trial to...

    Correct

    • A team of scientists plans to carry out a placebo-controlled randomized trial to assess the effectiveness of a new medication for treating hypertension in elderly patients. They aim to prevent patients from knowing whether they are receiving the medication or the placebo.
      What type of bias are they trying to eliminate?

      Your Answer: Performance bias

      Explanation:

      To prevent performance bias, the researchers implement patient blinding, since knowledge of whether they are taking the active medication or a placebo, i.e. which arm of the study they are in, could influence a patient’s behavior. Additionally, investigators must also be blinded to avoid measurement bias.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      91.1
      Seconds
  • Question 28 - Which term refers to a test's capacity to correctly identify a person with...

    Incorrect

    • Which term refers to a test's capacity to correctly identify a person with a disease as positive?

      Your Answer: Positive predictive value

      Correct Answer: Sensitivity

      Explanation:

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two-by-two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      4.3
      Seconds
  • Question 29 - What is a criterion used to evaluate the quality of meta-analysis reporting? ...

    Correct

    • What is a criterion used to evaluate the quality of meta-analysis reporting?

      Your Answer: QUOROM

      Explanation:

      The QUOROM (Quality of Reporting of Meta-analyses) statement provides a checklist and flow diagram for reporting meta-analyses of randomised controlled trials; it has since been superseded by PRISMA. Other reporting standards include CONSORT for randomised controlled trials, STROBE for observational studies, and STARD for diagnostic accuracy studies. Following these standards ensures that studies are reported accurately and transparently, which is crucial for the scientific community to evaluate and replicate the findings.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      259.6
      Seconds
  • Question 30 - What is another term for case-mix bias? ...

    Incorrect

    • What is another term for case-mix bias?

      Your Answer:

      Correct Answer: Disease spectrum bias

      Explanation:

      Types of Bias in Statistics

      Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.

      There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not representative of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when the information gathered about exposure, outcome, or both is incorrect because of an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, the Hawthorne effect, and the ecological fallacy are all subtypes of information bias.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds

SESSION STATS - PERFORMANCE PER SPECIALTY

Research Methods, Statistics, Critical Review And Evidence-Based Practice (15/29) 52%
Passmed