  • Question 1 - What is another name for the incidence rate? ...

    Incorrect

    • What is another name for the incidence rate?

      Your Answer: Incidence proportion

      Correct Answer: Incidence density

      Explanation:

      Measures of Disease Frequency: Incidence and Prevalence

      Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.

      Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.

      It is important to note that prevalence is approximately equal to incidence multiplied by the average duration of the condition (the approximation holds for stable conditions with relatively low prevalence). In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.

      Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
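      The prevalence–incidence relationship can be made concrete with a quick numerical sketch (Python; all figures below are invented for illustration, not taken from the question):

      # Hedged sketch of prevalence ~ incidence rate x average duration.
      incidence_rate = 0.002      # 2 new cases per 1,000 person-years (invented)
      duration_years = 10.0       # chronic condition lasting ~10 years (invented)
      print(incidence_rate * duration_years)   # 0.02: prevalence >> incidence

      cold_incidence = 2.0        # 2 episodes per person-year (invented)
      cold_duration = 7 / 365     # about one week
      print(cold_incidence * cold_duration)    # ~0.04: incidence > prevalence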

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      7.4
      Seconds
  • Question 2 - Which value of r indicates the highest degree of correlation? ...

    Correct

    • Which value of r indicates the highest degree of correlation?

      Your Answer: -0.8

      Explanation:

      It is important to distinguish between the direction of the correlation (the slope of the line) and its strength (the spread of the data). To emphasize this difference, the correct answer to this question is a negative value.

      Stats: Correlation and Regression

      Correlation and regression are related but not interchangeable terms. Correlation is used to test for association between variables, while regression is used to predict values of dependent variables from independent variables. Correlation can be linear, non-linear, or non-existent, and can be strong, moderate, or weak. The strength of a linear relationship is measured by the correlation coefficient, which can be positive or negative and ranges from very weak to very strong. However, the interpretation of a correlation coefficient depends on the context and purpose. Correlation can suggest association but cannot prove or disprove causation. Linear regression, on the other hand, can be used to predict how much one variable changes when a second variable is changed. Scatter graphs are used in correlation and regression analyses to visually determine if variables are associated and to detect outliers. When constructing a scatter graph, the dependent variable is typically placed on the vertical axis and the independent variable on the horizontal axis.
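      A minimal sketch of the direction-versus-strength point (invented data; numpy and scipy assumed): the sign of r gives the direction of the slope, while |r| gives the strength, so r = -0.8 indicates a stronger correlation than, say, r = +0.3.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      x = rng.normal(size=100)
      y_strong_neg = -1.0 * x + rng.normal(scale=0.5, size=100)  # strong negative trend
      y_weak_pos = 0.3 * x + rng.normal(scale=1.0, size=100)     # weak positive trend

      r_neg, _ = stats.pearsonr(x, y_strong_neg)
      r_pos, _ = stats.pearsonr(x, y_weak_pos)
      # Direction is the sign of r; strength is its absolute value.
      print(round(r_neg, 2), round(r_pos, 2), abs(r_neg) > abs(r_pos))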

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      6.6
      Seconds
  • Question 3 - After creating a scatter plot of the data, what would be the next...

    Correct

    • After creating a scatter plot of the data, what would be the next step for the researcher to determine if there is a linear relationship between a person's age and blood pressure?

      Your Answer: Pearson's coefficient

      Explanation:

      Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
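      For the age and blood pressure scenario in this question, a hedged sketch (invented data; scipy assumed) of the parametric test alongside its non-parametric alternative:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      age = rng.integers(20, 80, size=50)                   # invented ages
      bp = 90 + 0.7 * age + rng.normal(scale=8, size=50)    # invented systolic BP

      r, p = stats.pearsonr(age, bp)          # parametric: assumes a linear relationship
      rho, p_rank = stats.spearmanr(age, bp)  # non-parametric alternative (rank-based)
      print(f"Pearson r={r:.2f} (p={p:.3g}); Spearman rho={rho:.2f}")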

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      13.1
      Seconds
  • Question 4 - The data collected represents the ratings given by students to the quality of...

    Correct

    • The data collected represents the ratings given by students to the quality of teaching sessions provided by a consultant psychiatrist. The ratings are on a scale of 1-5, with 1 indicating extremely unsatisfactory and 5 indicating extremely satisfactory. The ratings are used to evaluate the effectiveness of the teaching sessions. How is this data best described?

      Your Answer: Ordinal

      Explanation:

      The data gathered will be measured on an ordinal scale, where each answer option is ranked. For instance, 2 is considered lower than 4, and 4 is lower than 5. In an ordinal scale, it is not necessary for the difference between 4 (satisfactory) and 2 (unsatisfactory) to be the same as the difference between 5 (extremely satisfactory) and 3 (neutral). This is because the numbers are not assigned for quantitative measurement but are used for labeling purposes only.

      Scales of Measurement in Statistics

      In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.

      Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude on the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.

      Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order of hierarchy. Examples of nominal scales include genotype, blood type, and political party.

      Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.
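      As a small illustration of how ordinal ratings like these might be represented so that order is preserved without implying equal intervals (pandas assumed; the data are invented, not from the question):

      import pandas as pd

      # 1 = extremely unsatisfactory ... 5 = extremely satisfactory
      ratings = pd.Categorical([4, 2, 5, 3, 4],
                               categories=[1, 2, 3, 4, 5],
                               ordered=True)   # order matters; interval sizes do not
      print(ratings.min(), ratings.max())      # order-based summaries are valid
      # The median is meaningful for ordinal data; a mean would assume equal intervals.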

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      24.1
      Seconds
  • Question 5 - What study design would be most suitable for investigating the potential association between...

    Incorrect

    • What study design would be most suitable for investigating the potential association between childhood obesity in girls and the risk of polycystic ovarian syndrome, while also providing the strongest evidence for this link?

      Your Answer: Case-control study

      Correct Answer: Cohort study

      Explanation:

      An RCT is not feasible in this situation, but a cohort study would be more reliable than a case-control study in generating evidence.

      Types of Primary Research Studies and Their Advantages and Disadvantages

      Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.

      Type of Question – Best Type of Study

      Therapy – Randomized controlled trial (RCT), cohort, case control, case series
      Diagnosis – Cohort studies with comparison to gold standard test
      Prognosis – Cohort studies, case control, case series
      Etiology/Harm – RCT, cohort studies, case control, case series
      Prevention – RCT, cohort studies, case control, case series
      Cost – Economic analysis

      Study Type – Advantages – Disadvantages

      Randomized Controlled Trial – Advantages: unbiased distribution of confounders; blinding more likely; randomization facilitates statistical analysis. Disadvantages: expensive; time-consuming; volunteer bias; at times ethically problematic.
      Cohort Study – Advantages: ethically safe; subjects can be matched; can establish timing and directionality of events; eligibility criteria and outcome assessments can be standardized; administratively easier and cheaper than an RCT. Disadvantages: controls may be difficult to identify; exposure may be linked to a hidden confounder; blinding is difficult; no randomization; for rare diseases, large sample sizes or long follow-up are necessary.
      Case-Control Study – Advantages: quick and cheap; the only feasible method for very rare disorders or those with a long lag between exposure and outcome; fewer subjects needed than for cross-sectional studies. Disadvantages: reliance on recall or records to determine exposure status; confounders; selection of control groups is difficult; potential bias (recall, selection).
      Cross-Sectional Survey – Advantages: cheap and simple; ethically safe. Disadvantages: establishes association at most, not causality; susceptible to recall bias; confounders may be unequally distributed; Neyman bias; group sizes may be unequal.
      Ecological Study – Advantages: cheap and simple; ethically safe. Disadvantages: ecological fallacy (relationships which exist for groups are assumed to also hold for individuals).

      In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      21.3
      Seconds
  • Question 6 - Based on the AUCs shown below, which screening test had the highest overall...

    Correct

    • Based on the AUCs shown below, which screening test had the highest overall performance in differentiating between the presence or absence of bulimia?

      Test - AUC
      Test 1 - 0.42
      Test 2 - 0.95
      Test 3 - 0.82
      Test 4 - 0.11
      Test 5 - 0.67

      Your Answer: Test 2

      Explanation:

      Understanding ROC Curves and AUC Values

      ROC (receiver operating characteristic) curves are graphs used to evaluate the effectiveness of a test in distinguishing between two groups, such as those with and without a disease. The curve plots the true positive rate against the false positive rate at different threshold settings. The goal is to find the best trade-off between sensitivity and specificity, which can be adjusted by changing the threshold. AUC (area under the curve) is a measure of the overall performance of the test, with higher values indicating better accuracy. The conventional grading of AUC values ranges from excellent to fail. ROC curves and AUC values are useful in evaluating diagnostic and screening tools, comparing different tests, and studying inter-observer variability.
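      A minimal sketch of how an AUC like those in the table is computed (invented labels and scores; scikit-learn assumed):

      from sklearn.metrics import roc_auc_score

      y_true = [0, 0, 0, 1, 1, 1, 1]                        # 1 = condition present (invented)
      y_score = [0.10, 0.40, 0.35, 0.80, 0.70, 0.90, 0.30]  # invented test scores

      print(roc_auc_score(y_true, y_score))  # 1.0 = perfect; 0.5 = no better than chance
      # An AUC well below 0.5 (like Test 4's 0.11) discriminates worse than chance.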

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      15.7
      Seconds
  • Question 7 - What is the best way to describe the sampling strategy used in the...

    Incorrect

    • What is the best way to describe the sampling strategy used in the medical student's study to estimate the average height of patients with schizophrenia in a psychiatric hospital?

      Your Answer: Cluster sampling

      Correct Answer: Simple random sampling

      Explanation:

      Sampling Methods in Statistics

      When collecting data from a population, it is often impractical and unnecessary to gather information from every single member. Instead, taking a sample is preferred. However, it is crucial that the sample accurately represents the population from which it is drawn. There are two main types of sampling methods: probability (random) sampling and non-probability (non-random) sampling.

      Non-probability sampling methods, also known as judgement samples, are based on human choice rather than random selection. These samples are convenient and cheaper than probability sampling methods. Examples of non-probability sampling methods include voluntary sampling, convenience sampling, snowball sampling, and quota sampling.

      Probability sampling methods give a more representative sample of the population than non-probability sampling. In each probability sampling technique, each population element has a known (non-zero) chance of being selected for the sample. Examples of probability sampling methods include simple random sampling, systematic sampling, cluster sampling, stratified sampling, and multistage sampling.

      Simple random sampling is a sample in which every member of the population has an equal chance of being chosen. Systematic sampling involves selecting every kth member of the population. Cluster sampling involves dividing a population into separate groups (called clusters) and selecting a random sample of clusters. Stratified sampling involves dividing a population into groups (strata) and taking a random sample from each stratum. Multistage sampling is a more complex method that involves several stages and combines two or more sampling methods.

      Overall, probability sampling methods give a more representative sample of the population, but non-probability sampling methods are often more convenient and cheaper. It is important to choose the appropriate sampling method based on the research question and available resources.
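      A short sketch contrasting two of the probability methods on an invented patient list (Python standard library only):

      import random

      population = list(range(1, 201))   # invented: 200 patients, numbered 1-200
      random.seed(0)

      # Simple random sampling: every member has an equal chance of selection.
      srs = random.sample(population, 20)

      # Systematic sampling: every kth member after a random start.
      k = len(population) // 20          # k = 10
      start = random.randrange(k)
      systematic = population[start::k]
      print(sorted(srs))
      print(systematic)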

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      36.8
      Seconds
  • Question 8 - In what way can the study on depression be deemed as having limited...

    Correct

    • In what way can the study on depression be deemed as having limited applicability to the average patient population?

      Your Answer: External validity

      Explanation:

      When a study has good external validity, its findings can be applied to other populations with confidence.

      Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      53.9
      Seconds
  • Question 9 - How do you calculate the positive predictive value accurately? ...

    Correct

    • How do you calculate the positive predictive value accurately?

      Your Answer: TP / (TP + FP)

      Explanation:

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
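      A hedged worked example of these statistics on an invented two-by-two table, where TP, FP, FN, and TN are the usual cell counts:

      TP, FP, FN, TN = 80, 20, 10, 90    # invented counts from a two-by-two table

      sensitivity = TP / (TP + FN)       # 0.889: proportion of cases detected
      specificity = TN / (TN + FP)       # 0.818: proportion of non-cases cleared
      ppv = TP / (TP + FP)               # 0.800: P(disease | positive test)
      npv = TN / (TN + FN)               # 0.900: P(no disease | negative test)
      accuracy = (TP + TN) / (TP + FP + FN + TN)
      print(sensitivity, specificity, ppv, npv, accuracy)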

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      10.7
      Seconds
  • Question 10 - A research project has a significance level of 0.05, and the obtained p-value...

    Incorrect

    • A research project has a significance level of 0.05, and the obtained p-value is 0.0125. What is the probability of committing a Type I error?

      Your Answer: 1/13

      Correct Answer: 1/80

      Explanation:

      An observed p-value of 0.0125 means that there is a 1.25% chance of obtaining the observed result by chance, assuming the null hypothesis is true. This also means that the Type I error rate (the probability of falsely rejecting the null hypothesis) is 1/80, i.e. 1.25%. In comparison, a p-value of 0.05 indicates a 5% chance of obtaining the observed result by chance, or a Type I error rate of 1/20.
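      The arithmetic behind the two fractions, as a quick check (Python standard library):

      from fractions import Fraction

      print(Fraction(125, 10000))   # p = 0.0125 -> 1/80
      print(Fraction(5, 100))       # alpha = 0.05 -> 1/20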

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      57.8
      Seconds
  • Question 11 - What type of regression is appropriate for analyzing data with dichotomous variables? ...

    Incorrect

    • What type of regression is appropriate for analyzing data with dichotomous variables?

      Your Answer: Log

      Correct Answer: Logistic

      Explanation:

      Logistic regression is employed when dealing with dichotomous variables, which are variables that have only two possible values, such as live/dead or heads/tails.

      Stats: Correlation and Regression

      Correlation and regression are related but not interchangeable terms. Correlation is used to test for association between variables, while regression is used to predict values of dependent variables from independent variables. Correlation can be linear, non-linear, or non-existent, and can be strong, moderate, or weak. The strength of a linear relationship is measured by the correlation coefficient, which can be positive or negative and ranges from very weak to very strong. However, the interpretation of a correlation coefficient depends on the context and purpose. Correlation can suggest association but cannot prove or disprove causation. Linear regression, on the other hand, can be used to predict how much one variable changes when a second variable is changed. Scatter graphs are used in correlation and regression analyses to visually determine if variables are associated and to detect outliers. When constructing a scatter graph, the dependent variable is typically placed on the vertical axis and the independent variable on the horizontal axis.
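      A minimal logistic regression sketch on an invented dichotomous outcome (scikit-learn assumed):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 1))                          # invented predictor
      y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # dichotomous outcome: 0/1

      model = LogisticRegression().fit(X, y)
      print(model.predict_proba([[1.0]]))   # probability of each outcome at X = 1.0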

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      43.3
      Seconds
  • Question 12 - A psychologist aims to conduct a qualitative study to explore the experiences of...

    Incorrect

    • A psychologist aims to conduct a qualitative study to explore the experiences of elderly patients referred to the outpatient clinic. To obtain a sample, the psychologist asks the receptionist to hand an invitation to participate in the study to all follow-up patients who attend for an appointment. The recruitment phase continues until a total of 30 elderly individuals agree to be in the study.

      How is this sampling method best described?

      Your Answer: Purposive sampling

      Correct Answer: Opportunistic sampling

      Explanation:

      Qualitative research is a method of inquiry that seeks to understand the meaning and experience dimensions of human lives and social worlds. There are different approaches to qualitative research, such as ethnography, phenomenology, and grounded theory, each with its own purpose, role of the researcher, stages of research, and method of data analysis. The most common methods used in healthcare research are interviews and focus groups. Sampling techniques include convenience sampling, purposive sampling, quota sampling, snowball sampling, and case study sampling. Sample size can be determined by data saturation, which occurs when new categories, themes, or explanations stop emerging from the data. Validity can be assessed through triangulation, respondent validation, bracketing, and reflexivity. Analytical approaches include content analysis and constant comparison.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      72.5
      Seconds
  • Question 13 - What is the purpose of the PICO model in evidence based medicine? ...

    Correct

    • What is the purpose of the PICO model in evidence based medicine?

      Your Answer: Formulating answerable questions

      Explanation:

      Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.

      When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.

      There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      17.1
      Seconds
  • Question 14 - A new drug is trialled for the treatment of heart disease. Drug A...

    Incorrect

    • A new drug is trialled for the treatment of heart disease. Drug A is given to 500 people with early stage heart disease and a placebo is given to 450 people with the same condition. After 5 years, 300 people who received drug A had survived compared to 225 who received the placebo. What is the number needed to treat to save one life?

      Your Answer: 75

      Correct Answer: 10

      Explanation:

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
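      Using the figures from this question's stem, the NNT works out as follows:

      # Figures from the question stem above.
      risk_drug_a = 300 / 500       # 0.60 survival with drug A
      risk_placebo = 225 / 450      # 0.50 survival with placebo

      arr = risk_drug_a - risk_placebo   # absolute risk reduction = 0.10
      nnt = 1 / arr                      # number needed to treat
      print(nnt)                         # ~10: treat ten patients to save one life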

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      42.5
      Seconds
  • Question 15 - The ICER is utilized in the following methods of economic evaluation: ...

    Incorrect

    • The ICER is utilized in the following methods of economic evaluation:

      Your Answer: Cost-utility analysis

      Correct Answer: Cost-effectiveness analysis

      Explanation:

      The acronym ICER stands for incremental cost-effectiveness ratio.

      Methods of Economic Evaluation

      There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.

      Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.

      Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.

      Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the Quality-Adjusted Life Year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.

      Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.

      Costs in Economic Evaluation Studies

      There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.
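      A small sketch of the incremental cost-effectiveness ratio (ICER) named in the question, with invented numbers:

      # ICER = (cost_new - cost_comparator) / (effect_new - effect_comparator)
      cost_new, cost_old = 12_000, 8_000    # invented costs per patient
      effect_new, effect_old = 3.0, 2.5     # invented effects (e.g. QALYs gained)

      icer = (cost_new - cost_old) / (effect_new - effect_old)
      print(icer)   # 8000.0 -> extra cost per additional unit of effect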

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      5.9
      Seconds
  • Question 16 - Which odds ratio suggests that there is no significant variation in the odds...

    Correct

    • Which odds ratio suggests that there is no significant variation in the odds between two groups?

      Your Answer: 1

      Explanation:

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
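      A quick sketch (invented counts) showing why an odds ratio of 1 means the two groups' odds are identical:

      # Invented counts: exposed group (a events, b non-events); unexposed (c, d).
      a, b, c, d = 30, 70, 30, 70

      odds_exposed = a / b
      odds_unexposed = c / d
      print(odds_exposed / odds_unexposed)   # 1.0 -> no difference between groups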

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      22.6
      Seconds
  • Question 17 - How can grounded theory be applied as an analytic technique? ...

    Incorrect

    • How can grounded theory be applied as an analytic technique?

      Your Answer: Content analysis

      Correct Answer: Constant comparison

      Explanation:

      Qualitative research is a method of inquiry that seeks to understand the meaning and experience dimensions of human lives and social worlds. There are different approaches to qualitative research, such as ethnography, phenomenology, and grounded theory, each with its own purpose, role of the researcher, stages of research, and method of data analysis. The most common methods used in healthcare research are interviews and focus groups. Sampling techniques include convenience sampling, purposive sampling, quota sampling, snowball sampling, and case study sampling. Sample size can be determined by data saturation, which occurs when new categories, themes, or explanations stop emerging from the data. Validity can be assessed through triangulation, respondent validation, bracketing, and reflexivity. Analytical approaches include content analysis and constant comparison.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      9.4
      Seconds
  • Question 18 - What hierarchical language does NLM utilize to enhance search strategies and index articles?...

    Correct

    • What hierarchical language does NLM utilize to enhance search strategies and index articles?

      Your Answer: MeSH

      Explanation:

      NLM’s hierarchical vocabulary, known as MeSH (Medical Subject Heading), is utilized for the purpose of indexing articles in PubMed.

      Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.

      When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.

      There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      35.6
      Seconds
  • Question 19 - How can the negative predictive value of a screening test be calculated accurately?...

    Correct

    • How can the negative predictive value of a screening test be calculated accurately?

      Your Answer: TN / (TN + FN)

      Explanation:

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      21.3
      Seconds
  • Question 20 - Which of the following is not a method used in qualitative research to...

    Correct

    • Which of the following is not a method used in qualitative research to evaluate validity?

      Your Answer: Content analysis

      Explanation:

      Qualitative research is a method of inquiry that seeks to understand the meaning and experience dimensions of human lives and social worlds. There are different approaches to qualitative research, such as ethnography, phenomenology, and grounded theory, each with its own purpose, role of the researcher, stages of research, and method of data analysis. The most common methods used in healthcare research are interviews and focus groups. Sampling techniques include convenience sampling, purposive sampling, quota sampling, snowball sampling, and case study sampling. Sample size can be determined by data saturation, which occurs when new categories, themes, or explanations stop emerging from the data. Validity can be assessed through triangulation, respondent validation, bracketing, and reflexivity. Analytical approaches include content analysis and constant comparison.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      18
      Seconds
  • Question 21 - What is the meaning of a 95% confidence interval? ...

    Correct

    • What is the meaning of a 95% confidence interval?

      Your Answer: If the study was repeated then the mean value would be within this interval 95% of the time

      Explanation:

      Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
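      A numerical sketch tying these quantities together, ending with a 95% confidence interval for the mean (invented data; scipy assumed):

      import numpy as np
      from scipy import stats

      data = np.array([4.1, 5.3, 4.8, 6.0, 5.5, 4.9, 5.2, 5.8])  # invented sample

      mean = data.mean()
      sd = data.std(ddof=1)               # sample standard deviation
      sem = sd / np.sqrt(len(data))       # standard error of the mean
      # 95% CI for the mean, using the t distribution for a small sample.
      lo, hi = stats.t.interval(0.95, df=len(data) - 1, loc=mean, scale=sem)
      print(f"mean={mean:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")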

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      21.8
      Seconds
  • Question 22 - Regarding evidence based medicine, which of the following is an example of a...

    Correct

    • Regarding evidence based medicine, which of the following is an example of a foreground question?

      Your Answer: What is the effectiveness of restraints in reducing the occurrence of falls in patients 65 and over?

      Explanation:

      Foreground questions are specific and focused, and can lead to a clinical decision. In contrast, background questions are more general and broad in scope.

      Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.

      When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.

      There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      59.1
      Seconds
  • Question 23 - What is the most appropriate indicator of internal consistency? ...

    Incorrect

    • What is the most appropriate indicator of internal consistency?

      Your Answer: Test-retest reliability

      Correct Answer: Split half correlation

      Explanation:

      Cronbach’s Alpha is a statistical measure used to assess the internal consistency of a test or questionnaire. It is a widely used method to determine the reliability of a test by measuring the extent to which the items on the test are measuring the same construct. Cronbach’s Alpha ranges from 0 to 1, with higher values indicating greater internal consistency. A value of 0.7 or higher is generally considered acceptable for research purposes. The calculation of Cronbach’s Alpha involves comparing the variance of the total score with the variance of the individual items. It is important to note that Cronbach’s Alpha assumes that all items are measuring the same construct, and therefore, it may not be appropriate for tests that measure multiple constructs.
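      A small sketch computing Cronbach's alpha from its standard formula (invented item scores; numpy assumed):

      import numpy as np

      # Invented scores: 5 respondents x 4 items, each scored 1-5.
      items = np.array([[4, 5, 4, 4],
                        [2, 3, 2, 3],
                        [5, 5, 4, 5],
                        [3, 3, 3, 2],
                        [4, 4, 5, 4]])

      k = items.shape[1]
      sum_item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
      total_var = items.sum(axis=1).var(ddof=1)         # variance of total scores
      alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)
      print(round(alpha, 2))   # values of 0.7 or higher are conventionally acceptable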

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      12.2
      Seconds
  • Question 24 - A pilot program is implemented in a children's hospital that offers HIV testing...

    Correct

    • A pilot program is implemented in a children's hospital that offers HIV testing for all new patients upon admission. As part of an economic analysis of the program, a researcher evaluates the expenses linked with providing the testing service. How should the potential stress encountered by children waiting for the test results be categorized?

      Your Answer: Intangible cost

      Explanation:

      Methods of Economic Evaluation

      There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.

      Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.

      Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.

      Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the Quality-Adjusted Life Year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.

      Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.

      Costs in Economic Evaluation Studies

      There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      28
      Seconds
  • Question 25 - Which statistical test is best suited for analyzing the difference in blood pressure...

    Incorrect

    • Which statistical test is best suited for analyzing the difference in blood pressure between the two groups of patients who were given either the established or the new anti-hypertensive medication in a randomized controlled trial with a crossover design?

      Your Answer: Unpaired t-test

      Correct Answer: Paired t-test

      Explanation:

      To analyze the difference between two related groups, with change in BP as the dependent variable (ratio-level, parametric data), the appropriate statistical test is a paired t-test.

      Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
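      A minimal contrast of the paired and unpaired tests on invented crossover-style data (scipy assumed):

      import numpy as np
      from scipy import stats

      # Invented: systolic BP for the same 8 patients on each drug (crossover design).
      bp_established = np.array([150, 142, 138, 160, 155, 149, 144, 151])
      bp_new = np.array([144, 139, 135, 152, 150, 146, 140, 147])

      t_p, p_paired = stats.ttest_rel(bp_established, bp_new)    # paired t-test
      t_u, p_unpaired = stats.ttest_ind(bp_established, bp_new)  # ignores the pairing
      # Pairing removes between-patient variability, so it is usually more powerful.
      print(f"paired p={p_paired:.4f}; unpaired p={p_unpaired:.4f}")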

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      26.2
      Seconds
  • Question 26 - Which statement accurately describes box and whisker plots? ...

    Incorrect

    • Which statement accurately describes box and whisker plots?

      Your Answer: The IQR is represented by Q2-Q3

      Correct Answer: Each whisker represents approximately 25% of the data

      Explanation:

      Box and whisker plots are a useful tool for displaying information about the range, median, and quartiles of a data set. The whiskers only contain values within 1.5 times the interquartile range (IQR), and any values outside of this range are considered outliers and displayed as dots. The IQR is the difference between the 3rd and 1st quartiles, which divide the data set into quarters. Quartiles can also be used to determine the percentage of observations that fall below a certain value. However, quartiles and ranges have limitations because they do not take into account every score in a data set. To get a more representative idea of spread, measures such as variance and standard deviation are needed. Box plots can also provide information about the shape of a data set, such as whether it is skewed or symmetric. Notched boxes on the plot represent the confidence intervals of the median values.
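      A sketch of the quantities a box plot is built from (invented data; numpy assumed):

      import numpy as np

      data = np.array([2, 4, 5, 5, 6, 7, 8, 9, 10, 11, 12, 30])  # 30 is an outlier

      q1, q2, q3 = np.percentile(data, [25, 50, 75])
      iqr = q3 - q1                    # the box spans Q1-Q3 with a line at the median
      lower_fence = q1 - 1.5 * iqr     # whiskers reach the most extreme points
      upper_fence = q3 + 1.5 * iqr     # lying within 1.5 x IQR of the box
      outliers = data[(data < lower_fence) | (data > upper_fence)]
      print(q1, q2, q3, iqr, outliers)   # 30 plots as a dot beyond the whisker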

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      5867.9
      Seconds
  • Question 27 - Which data type does age in years belong to? ...

    Correct

    • Which data type does age in years belong to?

      Your Answer: Ratio

      Explanation:

      Age is a type of measurement that follows a ratio scale, which means that the values can be compared as multiples of each other. For instance, if someone is 20 years old, they are twice as old as someone who is 10 years old.

      Scales of Measurement in Statistics

      In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.

      Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude on the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.

      Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order of hierarchy. Examples of nominal scales include genotype, blood type, and political party.

      Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      18.3
      Seconds
  • Question 28 - What is the term used to describe the study design where a margin...

    Incorrect

    • What is the term used to describe the study design where a margin is set for the mean reduction of PANSS score, and if the confidence interval of the difference between the new drug and olanzapine falls within this margin, the trial is considered successful?

      Your Answer: Non-inferiority trial

      Correct Answer: Equivalence trial

      Explanation:

      Study Designs for New Drugs: Options and Considerations

      When launching a new drug, there are various study design options available. One common approach is a placebo-controlled trial, which can provide strong evidence but may be deemed unethical if established treatments are available. Additionally, it does not allow for a comparison with standard treatments. Therefore, statisticians must decide whether the trial aims to demonstrate superiority, equivalence, or non-inferiority to an existing treatment.

      Superiority trials may seem like the obvious choice, but they require a large sample size to show a significant benefit over an existing treatment. Equivalence trials define an equivalence margin on a specified outcome, and if the confidence interval of the difference between the two drugs falls within this margin, the drugs are assumed to have a similar effect. Non-inferiority trials are similar to equivalence trials, but only the lower confidence interval needs to fall within the equivalence margin. These trials require smaller sample sizes, and once a drug has been shown to be non-inferior, larger studies may be conducted to demonstrate superiority.

      It is important to note that drug companies may not necessarily aim to show superiority over an existing product. If they can demonstrate that their product is equivalent or even non-inferior, they may compete on price or convenience. Overall, the choice of study design depends on various factors, including ethical considerations, sample size, and the desired outcome.
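      A hedged sketch of the decision rule the explanation describes: check where the confidence interval of the between-drug difference sits relative to the equivalence margin (all numbers invented):

      def classify_trial(ci_low, ci_high, margin):
          """Classify the CI of (new drug - comparator) against an equivalence margin."""
          if -margin <= ci_low and ci_high <= margin:
              return "equivalent"       # whole CI lies inside +/- margin
          if ci_low >= -margin:
              return "non-inferior"     # only the lower bound must clear -margin
          return "inconclusive"

      # Invented: CI of the mean PANSS-reduction difference, margin of 2 points.
      print(classify_trial(-1.8, 1.2, margin=2))   # -> equivalent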

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      27.9
      Seconds
  • Question 29 - A team of investigators aimed to explore the perspectives of experienced psychologists on...

    Correct

    • A team of investigators aimed to explore the perspectives of experienced psychologists on the use of cognitive-behavioral therapy in treating anxiety disorders. They randomly selected a group of psychologists to participate in the study.
      To enhance the credibility of their results, they opted to employ two researchers with different expertise (a clinical psychologist and a social worker) to conduct interviews with the selected psychologists. Furthermore, they collected data from the psychologists not only through interviews but also by organizing focus groups.
      What is the approach used in this qualitative study to improve the credibility of the findings?

      Your Answer: Triangulation

      Explanation:

      Triangulation is a technique commonly employed in research to ensure the accuracy and reliability of results. It involves using multiple methods to verify findings, also known as ‘cross examination’. This approach increases confidence in the results by demonstrating consistency across different methods. Investigator triangulation involves using researchers with diverse backgrounds, while method triangulation involves using different techniques such as interviews and focus groups. The goal of triangulation in qualitative research is to enhance the credibility and validity of the findings by addressing potential biases and limitations associated with single-method, single-observer studies.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      60.4
      Seconds
  • Question 30 - The Diagnostic Project between the UK and US revealed that the increased prevalence...

    Incorrect

    • The Diagnostic Project between the UK and US revealed that the increased prevalence of schizophrenia in New York, as opposed to London, was due to what factor?

      Your Answer: Chance

      Correct Answer: Bias

      Explanation:

      The US-UK Diagnostic Project found that the higher rates of schizophrenia in New York were due to diagnostic bias, as US psychiatrists used broader diagnostic criteria. However, the use of standardised clinical interviews and operationalised diagnostic criteria greatly reduced the variability of both incidence and prevalence rates of schizophrenia. This was demonstrated in a study by Sartorius et al. (1986) which examined early manifestations and first-contact incidence of schizophrenia in different cultures.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      32.5
      Seconds
  • Question 31 - What is the negative predictive value of the blood test for bowel cancer,...

    Incorrect

    • What is the negative predictive value of the blood test for bowel cancer, given a sensitivity of 60% and a specificity of 80% and a negative test result for a patient?

      Your Answer: -0.2

      Correct Answer: 0.5

      Explanation:

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two-by-two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test result agrees with the true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
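
      Note that a predictive value cannot be computed from sensitivity and specificity alone; it also depends on the prevalence of the condition. A minimal Python sketch of the arithmetic, in which the prevalence is an explicit assumption (a value of 2/3 happens to reproduce the quoted answer of 0.5):

      # Sketch: negative predictive value (NPV) from sensitivity, specificity
      # and an ASSUMED prevalence (not stated in the question as reproduced).
      def npv(sens, spec, prev):
          true_neg = spec * (1 - prev)      # well people correctly test-negative
          false_neg = (1 - sens) * prev     # diseased people who test negative
          return true_neg / (true_neg + false_neg)

      print(round(npv(0.6, 0.8, 2 / 3), 2))   # 0.5 with the assumed prevalence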

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      117.2
      Seconds
  • Question 32 - What is the accurate formula for determining the likelihood ratio of a negative...

    Correct

    • What is the accurate formula for determining the likelihood ratio of a negative test result?

      Your Answer: (1 - sensitivity) / specificity

      Explanation:

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two-by-two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test result agrees with the true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
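
      A short Python sketch of the likelihood-ratio formulas and the pre-/post-test odds arithmetic that Fagan’s nomogram performs graphically (the pre-test probability is an assumed illustrative value):

      # Sketch: likelihood ratios and post-test probability.
      def lr_positive(sens, spec):
          return sens / (1 - spec)      # LR+ = sensitivity / (1 - specificity)

      def lr_negative(sens, spec):
          return (1 - sens) / spec      # LR- = (1 - sensitivity) / specificity

      sens, spec, pretest_prob = 0.6, 0.8, 0.3    # assumed values
      pretest_odds = pretest_prob / (1 - pretest_prob)
      posttest_odds = pretest_odds * lr_negative(sens, spec)
      posttest_prob = posttest_odds / (1 + posttest_odds)
      print(round(lr_negative(sens, spec), 2))    # 0.5
      print(round(posttest_prob, 2))              # 0.18 after a negative result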

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      13.7
      Seconds
  • Question 33 - What is the approach that targets confounding variables during the study's design phase?...

    Correct

    • What is the approach that targets confounding variables during the study's design phase?

      Your Answer: Randomisation

      Explanation:

      Stats: Confounding

      A confounding factor is a factor that can obscure the relationship between an exposure and an outcome in a study. This factor is associated with both the exposure and the disease. For example, in a study that finds a link between coffee consumption and heart disease, smoking could be a confounding factor because it is associated with both drinking coffee and heart disease. Confounding occurs when there is a non-random distribution of risk factors in the population, such as age, sex, and social class.

      To control for confounding in the design stage of an experiment, researchers can use randomization, restriction, or matching. Randomization aims to produce an even distribution of potential risk factors in the two populations. Restriction involves limiting the study population to a specific group to ensure similar age distributions. Matching involves finding and enrolling participants who are similar in terms of potential confounding factors.

      In the analysis stage of an experiment, researchers can control for confounding by using stratification or multivariate models such as logistic regression, linear regression, or analysis of covariance (ANCOVA). Stratification involves creating categories, or strata, within which the confounding variable does not vary or varies only minimally.

      Overall, controlling for confounding is important in ensuring that the relationship between an exposure and an outcome is accurately assessed in a study.
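
      A minimal Python sketch of stratification (all counts are invented for illustration): within each stratum of the confounder the exposure-outcome association disappears, even though the crude, unstratified comparison suggests one.

      # Sketch: stratified vs crude analysis of a confounded association.
      # Each entry is (events, total) for the exposed and unexposed groups.
      strata = {
          "smokers":     {"exposed": (40, 100), "unexposed": (20, 50)},
          "non_smokers": {"exposed": (5, 100),  "unexposed": (10, 200)},
      }
      for name, groups in strata.items():
          risk = {g: events / n for g, (events, n) in groups.items()}
          print(name, "stratum risk ratio:",
                round(risk["exposed"] / risk["unexposed"], 2))   # 1.0 in both

      crude_rr = ((40 + 5) / 200) / ((20 + 10) / 250)
      print("crude risk ratio:", round(crude_rr, 2))   # 1.88, an artefact of confounding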

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      12.6
      Seconds
  • Question 34 - Which of the following statements accurately describes the normal distribution? ...

    Correct

    • Which of the following statements accurately describes the normal distribution?

      Your Answer: Mean = mode = median

      Explanation:

      The Normal distribution is a probability distribution that is continuous in nature.

      Standard Deviation and Standard Error of the Mean

      Standard deviation (SD) and standard error of the mean (SEM) are two important statistical measures used to describe data. SD is a measure of how much the data varies, while SEM is a measure of how precisely we know the true mean of the population. The normal distribution, also known as the Gaussian distribution, is a symmetrical bell-shaped curve that describes the spread of many biological and clinical measurements.

      68.3% of the data lies within 1 SD of the mean, 95.4% of the data lies within 2 SD of the mean, and 99.7% of the data lies within 3 SD of the mean. The SD is calculated by taking the square root of the variance and is expressed in the same units as the data set. A low SD indicates that data points tend to be very close to the mean.

      On the other hand, SEM is an inferential statistic that quantifies the precision of the mean. It is expressed in the same units as the data and is calculated by dividing the SD of the sample mean by the square root of the sample size. The SEM gets smaller as the sample size increases, and it takes into account both the value of the SD and the sample size.

      Both SD and SEM are important measures in statistical analysis, and they are used to calculate confidence intervals and test hypotheses. While SD quantifies scatter, SEM quantifies precision, and both are essential in understanding and interpreting data.
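
      A brief Python sketch of the two calculations (the sample values are invented):

      # Sketch: SD quantifies scatter; SEM = SD / sqrt(n) quantifies precision.
      import statistics

      data = [4.1, 5.0, 5.2, 6.3, 4.8, 5.6]    # assumed sample measurements
      sd = statistics.stdev(data)              # sample SD (n - 1 denominator)
      sem = sd / len(data) ** 0.5              # shrinks as the sample size grows
      print(round(sd, 2), round(sem, 2))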

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      14.4
      Seconds
  • Question 35 - If a study has a Type I error rate of <0.05 and a...

    Correct

    • If a study has a Type I error rate of <0.05 and a Type II error rate of 0.2, what is the power of the study?

      Your Answer: 0.8

      Explanation:

      A study’s ability to correctly detect a true effect or difference may be calculated as Power = 1 – Type II error rate. In the given scenario, the power can be calculated as Power = 1 – 0.2 = 0.8. A Type I error refers to a false positive, while a Type II error refers to a false negative.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      14.2
      Seconds
  • Question 36 - How many people need to be treated with the new drug to prevent...

    Incorrect

    • How many people need to be treated with the new drug to prevent one case of Alzheimer's disease in individuals with a positive family history, based on the results of a randomised controlled trial with 1,000 people in group A taking the drug and 1,400 people in group B taking a placebo, where the Alzheimer's rate was 2% in group A and 4% in group B?

      Your Answer: 2

      Correct Answer: 50

      Explanation:

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between the OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while the risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of the risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
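
      Using the figures given in the question, a short Python sketch of the calculation:

      # Sketch: NNT from the trial's event rates.
      risk_drug = 0.02                 # Alzheimer's rate in group A (new drug)
      risk_placebo = 0.04              # Alzheimer's rate in group B (placebo)
      arr = risk_placebo - risk_drug   # absolute risk reduction = 0.02
      nnt = 1 / arr                    # number needed to treat
      rr = risk_drug / risk_placebo    # relative risk
      print(round(nnt), round(rr, 2))  # 50 0.5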

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      153.2
      Seconds
  • Question 37 - If a case-control study investigates 60 potential risk factors for bipolar affective disorder...

    Incorrect

    • If a case-control study investigates 60 potential risk factors for bipolar affective disorder with a significance level of 0.05, how many risk factors would be expected to show a significant association with the disorder due to random chance?

      Your Answer: 1

      Correct Answer: 3

      Explanation:

      If we consider the above example as 60 separate experiments, we would anticipate that 3 variables would show a connection purely by chance. This is because a p-value of 0.05 indicates that there is a 5% chance of obtaining the observed result by chance, or 1 in every 20 times. Therefore, if we multiply 1 in 20 by 60, we get 3, which is the expected number of variables that would show an association by chance alone.
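
      The arithmetic, plus the Bonferroni correction sometimes applied to guard against this problem (the correction is additional context, not part of the question), as a Python sketch:

      # Sketch: expected chance findings across multiple significance tests.
      n_tests, alpha = 60, 0.05
      print(round(n_tests * alpha, 1))   # 3.0 false positives expected by chance
      print(alpha / n_tests)             # Bonferroni-corrected per-test threshold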

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      13.3
      Seconds
  • Question 38 - A team of scientists plans to carry out a placebo-controlled randomized trial to...

    Incorrect

    • A team of scientists plans to carry out a placebo-controlled randomized trial to assess the effectiveness of a new medication for treating hypertension in elderly patients. They aim to prevent patients from knowing whether they are receiving the medication or the placebo.
      What type of bias are they trying to eliminate?

      Your Answer: Attrition bias

      Correct Answer: Performance bias

      Explanation:

      To prevent bias in the study, the researchers are implementing patient blinding to prevent performance bias, as knowledge of whether they are taking the active medication or a placebo, and hence which arm of the study they are in, could influence the patients’ behavior. Additionally, investigators must also be blinded to avoid measurement bias.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      341.1
      Seconds
  • Question 39 - What is the estimated range for the 95% confidence interval for the mean...

    Correct

    • What is the estimated range for the 95% confidence interval for the mean glucose levels in a population of people taking antipsychotics, given a sample mean of 7 mmol/L, a sample standard deviation of 6 mmol/L, and a sample size of 9 with a standard error of the mean of 2 mmol/L?

      Your Answer: 3-11 mmol/L

      Explanation:

      It is important to note that confidence intervals are derived from standard errors, not standard deviations, despite the common misconception. It is crucial to avoid mixing up these two terms.

      Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
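
      The question’s arithmetic as a Python sketch (using the conventional 1.96 multiplier, which the quoted answer rounds to 2):

      # Sketch: 95% CI for the mean = sample mean +/- 1.96 x SEM.
      mean, sem = 7, 2                       # mmol/L, from the question stem
      low, high = mean - 1.96 * sem, mean + 1.96 * sem
      print(round(low, 1), round(high, 1))   # 3.1 10.9, i.e. roughly 3-11 mmol/L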

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      31.4
      Seconds
  • Question 40 - What is the percentage of the study's findings that support the internal validity...

    Correct

    • What type of validity is demonstrated when the results of the two-question depression screening test agree with those of the Beck Depression Inventory?

      Your Answer: Convergent validity

      Explanation:

      Convergent validity, a facet of construct validity, is demonstrated when a new measure correlates well with an established measure of the same construct, as when a brief depression screen agrees with the Beck Depression Inventory. Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      52.1
      Seconds
  • Question 41 - What percentage of the data falls within the range of the lower and...

    Correct

    • What percentage of the data falls within the range of the lower and upper quartiles, as represented by the interquartile range?

      Your Answer: 50%

      Explanation:

      Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
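
      A small Python sketch of quartiles and the interquartile range (the data values are invented):

      # Sketch: by construction, half of the observations lie between Q1 and Q3.
      import statistics

      data = [2, 4, 4, 5, 6, 7, 8, 9, 10, 12]
      q1, q2, q3 = statistics.quantiles(data, n=4)   # quartile cut points
      print(q1, q2, q3, "IQR:", q3 - q1)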

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      44.8
      Seconds
  • Question 42 - A new antihypertensive medication is trialled for adults with high blood pressure. There...

    Incorrect

    • A new antihypertensive medication is trialled for adults with high blood pressure. There are 500 adults in the control group and 300 adults assigned to take the new medication. After 6 months, 200 adults in the control group had high blood pressure compared to 30 adults in the group taking the new medication. What is the relative risk reduction?

      Your Answer: 3.33

      Correct Answer: 75%

      Explanation:

      The RRR (Relative Risk Reduction) is calculated by dividing the ARR (Absolute Risk Reduction) by the CER (Control Event Rate). The CER is determined by dividing the number of control events by the number of participants in the control group, which in this case is 200/500, or 0.4. The EER (Experimental Event Rate) is determined by dividing the number of events in the experimental group by the number of participants in that group, which in this case is 30/300, or 0.1. The ARR is calculated by subtracting the EER from the CER, which is 0.4 – 0.1 = 0.3. Finally, the RRR is calculated by dividing the ARR by the CER, which is 0.3/0.4, or 0.75 (75%).

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between the OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while the risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of the risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
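
      The explanation’s calculation, step by step, as a Python sketch:

      # Sketch: relative risk reduction from the trial's counts.
      cer = 200 / 500       # control event rate = 0.4
      eer = 30 / 300        # experimental event rate = 0.1
      arr = cer - eer       # absolute risk reduction = 0.3
      rrr = arr / cer       # relative risk reduction
      print(f"{rrr:.0%}")   # 75%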

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      51.5
      Seconds
  • Question 43 - What type of data representation is used in a box and whisker plot?...

    Correct

    • What type of data representation is used in a box and whisker plot?

      Your Answer: Median

      Explanation:

      Box and whisker plots are a useful tool for displaying information about the range, median, and quartiles of a data set. The whiskers extend only to values within 1.5 times the interquartile range (IQR) of the quartiles; any values outside this range are considered outliers and displayed as dots. The IQR is the difference between the 3rd and 1st quartiles, which divide the data set into quarters. Quartiles can also be used to determine the percentage of observations that fall below a certain value. However, quartiles and ranges have limitations because they do not take into account every score in a data set. To get a more representative idea of spread, measures such as variance and standard deviation are needed. Box plots can also provide information about the shape of a data set, such as whether it is skewed or symmetric. Notched boxes on the plot represent the confidence intervals of the median values.
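
      A minimal Python sketch of such a plot (the values are invented; matplotlib’s default whisker length is already 1.5 x IQR):

      # Sketch: a box and whisker plot; the extreme value appears as an outlier dot.
      import matplotlib.pyplot as plt

      data = [2, 3, 4, 4, 5, 5, 6, 7, 8, 9, 25]
      plt.boxplot(data, whis=1.5)   # whiskers capped at 1.5 x IQR of the quartiles
      plt.ylabel("value")
      plt.show()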

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      69.9
      Seconds
  • Question 44 - The national health organization has a team of analysts to compare the effectiveness...

    Incorrect

    • The national health organization has asked a team of analysts to compare the effectiveness of two different cancer treatments in terms of cost and patient outcomes. The analysts have gathered data on the number of years of life gained by each treatment and are seeking your recommendation on what type of analysis to conduct next. What analysis would you suggest they undertake?

      Your Answer: Cost minimisation analysis

      Correct Answer: Cost utility analysis

      Explanation:

      Cost utility analysis is a method used in health economics to determine the cost-effectiveness of a health intervention by comparing the cost of the intervention to the benefit it provides in terms of the number of years lived in full health. The cost is measured in monetary units, while the benefit is quantified using a measure that assigns values to different health states, including those that are less desirable than full health. In health technology assessments, this measure is typically expressed as quality-adjusted life years (QALYs).
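
      A minimal Python sketch of the underlying arithmetic, with all costs, life-years, and utility weights invented for illustration:

      # Sketch: QALYs = life-years gained x utility weight; treatments are then
      # compared on incremental cost per QALY (the ICER).
      cost_a, years_a, utility_a = 30_000, 4.0, 0.7   # hypothetical treatment A
      cost_b, years_b, utility_b = 18_000, 3.0, 0.6   # hypothetical treatment B
      qaly_a, qaly_b = years_a * utility_a, years_b * utility_b
      icer = (cost_a - cost_b) / (qaly_a - qaly_b)
      print(round(icer))   # extra cost per additional QALY of choosing A over B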

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      635.4
      Seconds
  • Question 45 - What type of bias is commonly associated with case-control studies? ...

    Incorrect

    • What type of bias is commonly associated with case-control studies?

      Your Answer: Work-up bias

      Correct Answer: Recall bias

      Explanation:

      Types of Bias in Statistics

      Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.

      There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when the gathered information about exposure, outcome, or both is incorrect because there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, the Hawthorne effect, and the ecological fallacy are all subtypes of information bias.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      8.3
      Seconds
  • Question 46 - What is the nature of the hypothesis that a researcher wants to test...

    Incorrect

    • A researcher wants to test the hypothesis that a drug increases a person's heart rate. What is the nature of this hypothesis?

      Your Answer: Two-tailed alternative hypothesis

      Correct Answer: One-tailed alternative hypothesis

      Explanation:

      A one-tailed hypothesis indicates a specific direction of association between groups. The researcher not only declares that there will be a distinction between the groups but also defines the direction in which the difference will occur.
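
      A short Python sketch of the practical consequence (the heart-rate changes are invented; the 'alternative' keyword requires scipy 1.6 or later):

      # Sketch: the same data tested under one- and two-tailed alternatives.
      from scipy import stats

      change = [3.1, 5.4, 2.2, 4.8, 1.9, 6.0, 3.5]   # bpm change on the drug
      one_tailed = stats.ttest_1samp(change, 0, alternative="greater")
      two_tailed = stats.ttest_1samp(change, 0, alternative="two-sided")
      print(one_tailed.pvalue, two_tailed.pvalue)    # one-tailed p is half the two-tailed p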

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      18.2
      Seconds
  • Question 47 - What is the calculation that the nurse performed to determine the patient's average...

    Correct

    • What calculation did the nurse perform to determine the patient's average daily calorie intake over a seven-day period?

      Your Answer: Arithmetic mean

      Explanation:

      You don’t need to concern yourself with the specifics of the various means. Simply keep in mind that the arithmetic mean is the one utilized in fundamental biostatistics.

      Measures of Central Tendency

      Measures of central tendency are used in descriptive statistics to summarize the middle or typical value of a data set. There are three common measures of central tendency: the mean, median, and mode.

      The median is the middle value in a data set that has been arranged in numerical order. It is not affected by outliers and is used for ordinal data. The mode is the most frequent value in a data set and is used for categorical data. The mean is calculated by adding all the values in a data set and dividing by the number of values. It is sensitive to outliers and is used for interval and ratio data.

      The appropriate measure of central tendency depends on the measurement scale of the data. For nominal and categorical data, the mode is used. For ordinal data, the median or mode is used. For interval data with a normal distribution, the mean is preferable, but the median or mode can also be used. For interval data with a skewed distribution, the median is used. For ratio data, the mean is preferable, but the median or mode can also be used for skewed data.

      In addition to measures of central tendency, the range is also used to describe the spread of a data set. It is calculated by subtracting the smallest value from the largest value.
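
      A brief Python sketch of the three measures and the range (the daily intakes are invented):

      # Sketch: measures of central tendency for a week of calorie counts.
      import statistics

      kcal = [1800, 2000, 2000, 2200, 2400, 2500, 3100]
      print(round(statistics.mean(kcal)))   # arithmetic mean, pulled up by 3100
      print(statistics.median(kcal))        # middle value, robust to the outlier
      print(statistics.mode(kcal))          # most frequent value
      print(max(kcal) - min(kcal))          # range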

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      2783.6
      Seconds
  • Question 48 - What proportion of adults are expected to have IgE levels exceeding 2 standard...

    Incorrect

    • What proportion of adults would be expected to have IgE levels more than 2 standard deviations above the mean in a study aimed at establishing the normal reference range for IgE levels in adults, assuming a normal distribution of IgE levels?

      Your Answer: 1.96%

      Correct Answer: 2.30%

      Explanation:

      Standard Deviation and Standard Error of the Mean

      Standard deviation (SD) and standard error of the mean (SEM) are two important statistical measures used to describe data. SD is a measure of how much the data varies, while SEM is a measure of how precisely we know the true mean of the population. The normal distribution, also known as the Gaussian distribution, is a symmetrical bell-shaped curve that describes the spread of many biological and clinical measurements.

      68.3% of the data lies within 1 SD of the mean, 95.4% of the data lies within 2 SD of the mean, and 99.7% of the data lies within 3 SD of the mean. The SD is calculated by taking the square root of the variance and is expressed in the same units as the data set. A low SD indicates that data points tend to be very close to the mean.

      On the other hand, SEM is an inferential statistic that quantifies the precision of the mean. It is expressed in the same units as the data and is calculated by dividing the SD of the sample mean by the square root of the sample size. The SEM gets smaller as the sample size increases, and it takes into account both the value of the SD and the sample size.

      Both SD and SEM are important measures in statistical analysis, and they are used to calculate confidence intervals and test hypotheses. While SD quantifies scatter, SEM quantifies precision, and both are essential in understanding and interpreting data.
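
      The single-tail figure behind the answer can be checked with a short Python sketch:

      # Sketch: area of the normal distribution beyond 2 SD above the mean.
      from scipy import stats

      print(stats.norm.sf(2))       # ~0.0228, i.e. about 2.3% in one tail
      print(2 * stats.norm.sf(2))   # ~0.0455, i.e. about 4.6% in both tails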

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      17.9
      Seconds
  • Question 49 - One accurate statement about epidemiological measures is: ...

    Correct

    • One accurate statement about epidemiological measures is:

      Your Answer: Cross-sectional surveys can be used to estimate the prevalence of a condition in the population

      Explanation:

      Measures of Disease Frequency: Incidence and Prevalence

      Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.

      Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.

      It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.

      Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
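
      A short Python sketch of the prevalence = incidence x duration relationship (a steady-state approximation; the figures are invented):

      # Sketch: prevalence from incidence and average disease duration.
      incidence = 0.002   # 2 new cases per 1,000 person-years
      duration = 10       # average duration of the condition, in years
      print(incidence * duration)   # 0.02, i.e. about 2% of the population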

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      21.3
      Seconds
  • Question 50 - If you anticipate that a drug will result in more side-effects than a...

    Correct

    • If you anticipate that a drug will result in more side-effects than a placebo, what would be your estimated relative risk of side-effects occurring in the group receiving the drug?

      Your Answer: >1

      Explanation:

      Disease Rates and Their Interpretation

      Disease rates are a measure of the occurrence of a disease in a population. They are used to establish causation, monitor interventions, and measure the impact of exposure on disease rates. The attributable risk is the difference in the rate of disease between the exposed and unexposed groups; it represents the excess rate of disease in the exposed group that can be put down to the exposure.

      The relative risk is the risk of an event relative to exposure. It is calculated by dividing the rate of disease in the exposed group by the rate of disease in the unexposed group. A relative risk of 1 means there is no difference between the two groups. A relative risk of <1 means that the event is less likely to occur in the exposed group, while a relative risk of >1 means that the event is more likely to occur in the exposed group.

      The population attributable risk is the reduction in incidence that would be observed if the population were entirely unexposed. It can be calculated by multiplying the attributable risk by the prevalence of exposure in the population. The attributable proportion is the proportion of the disease that would be eliminated in a population if its disease rate were reduced to that of the unexposed group.
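
      A short Python sketch of these measures with invented cohort figures:

      # Sketch: relative and attributable risk from exposed/unexposed rates.
      rate_exposed = 30 / 1000     # disease rate in the exposed group
      rate_unexposed = 10 / 1000   # disease rate in the unexposed group
      relative_risk = rate_exposed / rate_unexposed      # >1: more likely if exposed
      attributable_risk = rate_exposed - rate_unexposed  # excess rate from exposure
      print(round(relative_risk, 1), round(attributable_risk, 3))   # 3.0 0.02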

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      21.6
      Seconds

SESSION STATS - PERFORMANCE PER SPECIALTY

Research Methods, Statistics, Critical Review And Evidence-Based Practice (27/50) 54%
Passmed