  • Question 1 - Which data type does age in years belong to? ...

    Incorrect

    • Which data type does age in years belong to?

      Your Answer: Nominal

      Correct Answer: Ratio

      Explanation:

      Age is a type of measurement that follows a ratio scale, which means that the values can be compared as multiples of each other. For instance, if someone is 20 years old, they are twice as old as someone who is 10 years old.

      Scales of Measurement in Statistics

      In the 1940s, Stanley Smith Stevens introduced four scales of measurement to categorize data variables. Knowing the scale of measurement for a variable is crucial in selecting the appropriate statistical analysis. The four scales of measurement are ratio, interval, ordinal, and nominal.

      Ratio scales are similar to interval scales, but they have true zero points. Examples of ratio scales include weight, time, and length. Interval scales measure the difference between two values, and one unit on the scale represents the same magnitude of the trait or characteristic being measured across the whole range of the scale. The Fahrenheit scale for temperature is an example of an interval scale.

      Ordinal scales categorize observed values into set categories that can be ordered, but the intervals between each value are uncertain. Examples of ordinal scales include social class, education level, and income level. Nominal scales categorize observed values into set categories that have no particular order of hierarchy. Examples of nominal scales include genotype, blood type, and political party.

      Data can also be categorized as quantitative or qualitative. Quantitative variables take on numeric values and can be further classified into discrete and continuous types. Qualitative variables do not take on numerical values and are usually names. Some qualitative variables have an inherent order in their categories and are described as ordinal. Qualitative variables are also called categorical or nominal variables. When a qualitative variable has only two categories, it is called a binary variable.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      5.3
      Seconds
  • Question 2 - What method did the researchers use to ensure the accuracy and credibility of...

    Incorrect

    • What method did the researchers use to ensure the accuracy and credibility of their findings in the qualitative study on antidepressants?

      Your Answer: Triangulation

      Correct Answer: Member checking

      Explanation:

      To ensure validity in qualitative studies, a technique called member checking, or respondent validation, is used. This involves interviewing a subset of the participants (typically around 11) to confirm that their perspectives align with the study’s findings.

      Qualitative research is a method of inquiry that seeks to understand the meaning and experience dimensions of human lives and social worlds. There are different approaches to qualitative research, such as ethnography, phenomenology, and grounded theory, each with its own purpose, role of the researcher, stages of research, and method of data analysis. The most common methods used in healthcare research are interviews and focus groups. Sampling techniques include convenience sampling, purposive sampling, quota sampling, snowball sampling, and case study sampling. Sample size can be determined by data saturation, which occurs when new categories, themes, or explanations stop emerging from the data. Validity can be assessed through triangulation, respondent validation, bracketing, and reflexivity. Analytical approaches include content analysis and constant comparison.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      14.9
      Seconds
  • Question 3 - Which p-value would provide the strongest evidence in favor of the alternative hypothesis?...

    Incorrect

    • Which p-value would provide the strongest evidence in favor of the alternative hypothesis?

      Your Answer: p > 0.07

      Correct Answer:

      Explanation:

      Understanding Hypothesis Testing in Statistics

      In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.

      The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference reflects a real effect rather than chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference or a change in one direction.

      Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.

      P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large as or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, the p-value does not indicate clinical significance: a statistically significant difference may be too small to be clinically meaningful.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      17.4
      Seconds
  • Question 4 - Which statement accurately describes the measurement of serum potassium in 1,000 patients with...

    Correct

    • Which statement accurately describes the measurement of serum potassium in 1,000 patients with anorexia nervosa, where the mean potassium is 4.6 mmol/l and the standard deviation is 0.3 mmol/l?

      Your Answer: 68.3% of values lie between 4.3 and 4.9 mmol/l

      Explanation:

      Standard Deviation and Standard Error of the Mean

      Standard deviation (SD) and standard error of the mean (SEM) are two important statistical measures used to describe data. SD is a measure of how much the data varies, while SEM is a measure of how precisely we know the true mean of the population. The normal distribution, also known as the Gaussian distribution, is a symmetrical bell-shaped curve that describes the spread of many biological and clinical measurements.

      68.3% of the data lies within 1 SD of the mean, 95.4% of the data lies within 2 SD of the mean, and 99.7% of the data lies within 3 SD of the mean. The SD is calculated by taking the square root of the variance and is expressed in the same units as the data set. A low SD indicates that data points tend to be very close to the mean.

      On the other hand, SEM is an inferential statistic that quantifies the precision of the mean. It is expressed in the same units as the data and is calculated by dividing the sample SD by the square root of the sample size. The SEM gets smaller as the sample size increases, and it takes into account both the value of the SD and the sample size.

      Both SD and SEM are important measures in statistical analysis, and they are used to calculate confidence intervals and test hypotheses. While SD quantifies scatter, SEM quantifies precision, and both are essential in understanding and interpreting data.
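      As a sketch of these definitions, the snippet below simulates a sample resembling the potassium example (mean 4.6 mmol/l, SD 0.3 mmol/l; the data are simulated, not real measurements) and checks the 68.3% rule.

```python
import math
import random
import statistics

# Simulated sample ~ N(4.6, 0.3), loosely modelled on the potassium example.
random.seed(0)
sample = [random.gauss(4.6, 0.3) for _ in range(100_000)]

mean = statistics.mean(sample)
sd = statistics.stdev(sample)          # SD quantifies scatter
sem = sd / math.sqrt(len(sample))      # SEM = SD / sqrt(n) quantifies precision of the mean

# Empirical rule: about 68.3% of values lie within 1 SD of the mean
within_1_sd = sum(mean - sd <= x <= mean + sd for x in sample) / len(sample)
print(round(within_1_sd, 2))  # close to 0.68
```

      Note how the SEM shrinks with sample size while the SD does not: with 100,000 observations the SEM here is tiny even though the SD stays near 0.3.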

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      48.8
      Seconds
  • Question 5 - What statement accurately describes population parameters? ...

    Correct

    • What statement accurately describes population parameters?

      Your Answer: Parameters tend to have normal distributions

      Explanation:

      Parametric vs Non-Parametric Statistics

      Statistics are used to draw conclusions about a population based on a sample. A parameter is a numerical value that describes a population characteristic, but it is often impossible to know the true value of a parameter without collecting data from every individual in the population. Instead, we take a sample and use statistics to estimate the parameters.

      Parametric statistical procedures assume that the population distribution is normal and that the parameters (such as means and standard deviations) are known. Examples of parametric tests include the t-test, ANOVA, and Pearson coefficient of correlation.

      Non-parametric statistical procedures make few or no assumptions about the population distribution or parameters. Examples of non-parametric tests include the Mann-Whitney Test, Wilcoxon Signed-Rank Test, Kruskal-Wallis Test, and Fisher Exact Probability test.

      Overall, the choice between parametric and non-parametric tests depends on the nature of the data and the research question being asked.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      105.3
      Seconds
  • Question 6 - What is the most appropriate indicator of internal consistency? ...

    Correct

    • What is the most appropriate indicator of internal consistency?

      Your Answer: Split half correlation

      Explanation:

      Cronbach’s Alpha is a statistical measure used to assess the internal consistency of a test or questionnaire. It is a widely used method to determine the reliability of a test by measuring the extent to which the items on the test are measuring the same construct. Cronbach’s Alpha ranges from 0 to 1, with higher values indicating greater internal consistency. A value of 0.7 or higher is generally considered acceptable for research purposes. The calculation of Cronbach’s Alpha involves comparing the variance of the total score with the variance of the individual items. It is important to note that Cronbach’s Alpha assumes that all items are measuring the same construct, and therefore, it may not be appropriate for tests that measure multiple constructs.
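      The comparison of total-score variance with item variances can be sketched directly from the standard formula, alpha = k/(k-1) × (1 − Σ item variances / variance of totals); the item scores below are invented for illustration.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, aligned by respondent."""
    k = len(items)
    sum_item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum_item_vars / pvariance(totals))

# Two perfectly consistent items give the maximum alpha of 1
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

      When items agree closely but not perfectly, alpha falls just below 1; as agreement weakens, it drops toward (and can go below) 0.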

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      12.8
      Seconds
  • Question 7 - A study which aims to see if women over 40 years old have...

    Incorrect

    • A study that aims to see whether women over 40 years old have a different length of pregnancy compares the mean in a group of women of this age against the population mean. Which of the following tests would you use to compare the means?

      Your Answer: Chi squared test

      Correct Answer: One sample t-test

      Explanation:

      The appropriate statistical test for the study is a one-sample t-test as it involves the calculation of a single mean.

      Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
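      As a minimal sketch, the one-sample t statistic is t = (sample mean − population mean) / (s / √n). The function name, the gestation lengths, and the population mean of 280 days below are all invented for illustration.

```python
import math
from statistics import mean, stdev

def one_sample_t(sample, population_mean):
    """t = (sample mean - population mean) / standard error of the mean."""
    sem = stdev(sample) / math.sqrt(len(sample))
    return (mean(sample) - population_mean) / sem

lengths = [278, 275, 281, 270, 272, 279, 276, 274]  # hypothetical gestation lengths, days
t = one_sample_t(lengths, 280)
print(round(t, 2))  # a large negative t suggests shorter pregnancies than the population mean
```

      The t statistic would then be compared against the t distribution with n − 1 degrees of freedom to obtain a p-value.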

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      28.4
      Seconds
  • Question 8 - What is the appropriate denominator for calculating the incidence rate? ...

    Incorrect

    • What is the appropriate denominator for calculating the incidence rate?

      Your Answer: The number of disease free people at the beginning of a specified time period

      Correct Answer: The total person time at risk during a specified time period

      Explanation:

      Measures of Disease Frequency: Incidence and Prevalence

      Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.

      Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.

      It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.

      Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.
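      The relationships above amount to simple arithmetic; here is a sketch with invented numbers (the case counts, person-time, and disease duration are illustrative only).

```python
# Incidence rate uses total person-time at risk as its denominator,
# and prevalence = incidence x mean duration of the condition.

new_cases = 30
person_years_at_risk = 10_000                       # the denominator asked about in the question
incidence_rate = new_cases / person_years_at_risk   # 0.003 per person-year

mean_duration_years = 5                             # a chronic condition lasting ~5 years
prevalence = incidence_rate * mean_duration_years   # ~0.015, i.e. 1.5%

print(incidence_rate, prevalence)
```

      This also shows why prevalence exceeds incidence for chronic disease: a long duration multiplies the same incidence into a larger standing pool of cases.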

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      49.4
      Seconds
  • Question 9 - What is another term used to refer to a type II error in...

    Correct

    • What is another term used to refer to a type II error in hypothesis testing?

      Your Answer: False negative

      Explanation:

      Hypothesis testing involves the possibility of two types of errors: type I and type II errors. A type I error occurs when the null hypothesis is wrongly rejected, or equivalently the alternative hypothesis is wrongly accepted. This error is also referred to as an alpha error, an error of the first kind, or a false positive. On the other hand, a type II error occurs when the null hypothesis is wrongly accepted. This error is also known as a beta error, an error of the second kind, or a false negative.

      Understanding Hypothesis Testing in Statistics

      In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.

      The null hypothesis (H0) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference reflects a real effect rather than chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference or a change in one direction.

      Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.

      P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large as or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, the p-value does not indicate clinical significance: a statistically significant difference may be too small to be clinically meaningful.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      7.6
      Seconds
  • Question 10 - A team of investigators aimed to explore the perspectives of experienced psychologists on...

    Incorrect

    • A team of investigators aimed to explore the perspectives of experienced psychologists on the use of cognitive-behavioral therapy in treating anxiety disorders. They randomly selected a group of psychologists to participate in the study.
      To enhance the credibility of their results, they opted to employ two researchers with different expertise (a clinical psychologist and a social worker) to conduct interviews with the selected psychologists. Furthermore, they collected data from the psychologists not only through interviews but also by organizing focus groups.
      What is the approach used in this qualitative study to improve the credibility of the findings?

      Your Answer: Data saturation

      Correct Answer: Triangulation

      Explanation:

      Triangulation is a technique commonly employed in research to ensure the accuracy and reliability of results. It involves using multiple methods to verify findings, also known as ‘cross examination’. This approach increases confidence in the results by demonstrating consistency across different methods. Investigator triangulation involves using researchers with diverse backgrounds, while method triangulation involves using different techniques such as interviews and focus groups. The goal of triangulation in qualitative research is to enhance the credibility and validity of the findings by addressing potential biases and limitations associated with single-method, single-observer studies.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      49
      Seconds
  • Question 11 - After creating a scatter plot of the data, what would be the next...

    Correct

    • After creating a scatter plot of the data, what would be the next step for the researcher to determine if there is a linear relationship between a person's age and blood pressure?

      Your Answer: Pearson's coefficient

      Explanation:

      Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
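      Pearson’s coefficient itself is straightforward to compute from its definition (covariance divided by the product of the standard deviations); the age and blood pressure values below are fabricated to lie on a straight line.

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ages = [30, 40, 50, 60, 70]
systolic_bp = [120, 125, 130, 135, 140]  # exactly linear in age
print(round(pearson_r(ages, systolic_bp), 6))  # 1.0 (perfect positive correlation)
```

      Real data would give a value between −1 and 1, with values near 0 indicating little linear relationship.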

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      9.3
      Seconds
  • Question 12 - What statement accurately describes the process of searching a database? ...

    Correct

    • What statement accurately describes the process of searching a database?

      Your Answer: New references are added to PubMed more quickly than they are to MEDLINE

      Explanation:

      PubMed receives new references faster than MEDLINE because they do not need to undergo indexing, such as adding MeSH headings and checking tags. While an increasing number of MEDLINE citations have a link to the complete article, not all of them do. Since 2010, Embase has included all MEDLINE citations in its database, but it does not have all citations from before that year.

      Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.

      When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.

      There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      12.3
      Seconds
  • Question 13 - A pediatrician becomes interested in a newly identified and rare pediatric syndrome. They...

    Incorrect

    • A pediatrician becomes interested in a newly identified and rare pediatric syndrome. They are interested to investigate if previous exposure to herpes viruses may put children at increased risk. Which of the following study designs would be most appropriate?

      Your Answer: Retrospective cohort study

      Correct Answer: Case-control study

      Explanation:

      Case-control studies are useful in studying rare diseases as it would be impractical to follow a large group of people for a long period of time to accrue enough incident cases. For instance, if a disease occurs very infrequently, say 1 in 1,000,000 per year, accruing ten cases would require following 1,000,000 people for ten years, or 10,000 people for 1,000 years. This is not feasible, so a case-control study provides a more practical approach to studying rare diseases.

      Types of Primary Research Studies and Their Advantages and Disadvantages

      Primary research studies can be categorized into six types based on the research question they aim to address. The best type of study for each question type is listed in the table below. There are two main types of study design: experimental and observational. Experimental studies involve an intervention, while observational studies do not. The advantages and disadvantages of each study type are summarized in the table below.

      Type of Question – Best Type of Study

      Therapy: randomized controlled trial (RCT), cohort, case control, case series
      Diagnosis: cohort studies with comparison to gold standard test
      Prognosis: cohort studies, case control, case series
      Etiology/Harm: RCT, cohort studies, case control, case series
      Prevention: RCT, cohort studies, case control, case series
      Cost: economic analysis

      Study Type – Advantages – Disadvantages

      Randomized Controlled Trial – Advantages: unbiased distribution of confounders; blinding more likely; randomization facilitates statistical analysis. Disadvantages: expensive; time-consuming; volunteer bias; ethically problematic at times.
      Cohort Study – Advantages: ethically safe; subjects can be matched; can establish timing and directionality of events; eligibility criteria and outcome assessments can be standardized; administratively easier and cheaper than RCT. Disadvantages: controls may be difficult to identify; exposure may be linked to a hidden confounder; blinding is difficult; randomization not present; for rare disease, large sample sizes or long follow-up necessary.
      Case-Control Study – Advantages: quick and cheap; only feasible method for very rare disorders or those with long lag between exposure and outcome; fewer subjects needed than cross-sectional studies. Disadvantages: reliance on recall or records to determine exposure status; confounders; selection of control groups is difficult; potential bias: recall, selection.
      Cross-Sectional Survey – Advantages: cheap and simple; ethically safe. Disadvantages: establishes association at most, not causality; recall bias susceptibility; confounders may be unequally distributed; Neyman bias; group sizes may be unequal.
      Ecological Study – Advantages: cheap and simple; ethically safe. Disadvantages: ecological fallacy (when relationships which exist for groups are assumed to also be true for individuals).

      In conclusion, the choice of study type depends on the research question being addressed. Each study type has its own advantages and disadvantages, and researchers should carefully consider these when designing their studies.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      38.3
      Seconds
  • Question 14 - A psychologist aims to conduct a qualitative study to explore the experiences of...

    Incorrect

    • A psychologist aims to conduct a qualitative study to explore the experiences of elderly patients referred to the outpatient clinic. To obtain a sample, the psychologist asks the receptionist to hand an invitation to participate in the study to all follow-up patients who attend for an appointment. The recruitment phase continues until a total of 30 elderly individuals agree to be in the study.

      How is this sampling method best described?

      Your Answer: Purposive sampling

      Correct Answer: Opportunistic sampling

      Explanation:

      Qualitative research is a method of inquiry that seeks to understand the meaning and experience dimensions of human lives and social worlds. There are different approaches to qualitative research, such as ethnography, phenomenology, and grounded theory, each with its own purpose, role of the researcher, stages of research, and method of data analysis. The most common methods used in healthcare research are interviews and focus groups. Sampling techniques include convenience sampling, purposive sampling, quota sampling, snowball sampling, and case study sampling. Sample size can be determined by data saturation, which occurs when new categories, themes, or explanations stop emerging from the data. Validity can be assessed through triangulation, respondent validation, bracketing, and reflexivity. Analytical approaches include content analysis and constant comparison.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      49.5
      Seconds
  • Question 15 - What is the most accurate definition of 'opportunity cost'? ...

    Correct

    • What is the most accurate definition of 'opportunity cost'?

      Your Answer: The forgone benefit that would have been derived by an option not chosen

      Explanation:

      Opportunity Cost in Economics: Understanding the Value of Choices

      Opportunity cost is a crucial concept in economics that helps us make informed decisions. It refers to the value of the next-best alternative that we give up when we choose one option over another. This concept is particularly relevant when we have limited resources, such as a fixed budget, and need to make choices about how to allocate them.

      For instance, if we decide to spend our money on antidepressants, we cannot use that same money to pay for cognitive-behavioral therapy (CBT). Both options have a value, but we have to choose one over the other. The opportunity cost of choosing antidepressants over CBT is the value of the benefits we would have received from CBT but did not because we chose antidepressants instead.

      To compare the opportunity cost of different choices, economists often use quality-adjusted life years (QALYs). QALYs measure the value of health outcomes in terms of both quantity (life years gained) and quality (health-related quality of life). By using QALYs, we can compare the opportunity cost of different healthcare interventions and choose the one that provides the best value for our resources.

      In summary, understanding opportunity cost is essential for making informed decisions in economics and healthcare. By recognizing the value of the alternatives we give up, we can make better choices and maximize the benefits we receive from our limited resources.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      52.4
      Seconds
  • Question 16 - How many people need to be treated with the new drug to prevent...

    Correct

    • How many people need to be treated with the new drug to prevent one case of Alzheimer's disease in individuals with a positive family history, based on the results of a randomised controlled trial with 1,000 people in group A taking the drug and 1,400 people in group B taking a placebo, where the Alzheimer's rate was 2% in group A and 4% in group B?

      Your Answer: 50

      Explanation:

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.
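
      The stated answer can be checked directly: the NNT is the reciprocal of the absolute risk reduction. A minimal Python sketch using the event rates given in the question (2% on the drug, 4% on placebo):

```python
# Event rates from the question: Alzheimer's developed in 2% of the
# drug group (group A) and 4% of the placebo group (group B).
cer = 0.04            # control event rate (placebo)
eer = 0.02            # experimental event rate (drug)

arr = cer - eer       # absolute risk reduction
nnt = round(1 / arr)  # number needed to treat

print(nnt)  # 50
```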

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      108.4
      Seconds
  • Question 17 - If you anticipate that a drug will result in more side-effects than a...

    Correct

    • If you anticipate that a drug will result in more side-effects than a placebo, what would be your estimated relative risk of side-effects occurring in the group receiving the drug?

      Your Answer: >1

      Explanation:

      Disease Rates and Their Interpretation

      Disease rates are a measure of the occurrence of a disease in a population. They are used to establish causation, monitor interventions, and measure the impact of exposure on disease rates.

      The attributable risk is the difference in the rate of disease between the exposed and unexposed groups. It tells us what proportion of deaths in the exposed group were due to the exposure. The relative risk is the risk of an event relative to exposure. It is calculated by dividing the rate of disease in the exposed group by the rate of disease in the unexposed group. A relative risk of 1 means there is no difference between the two groups. A relative risk of <1 means that the event is less likely to occur in the exposed group, while a relative risk of >1 means that the event is more likely to occur in the exposed group.

      The population attributable risk is the reduction in incidence that would be observed if the population were entirely unexposed. It can be calculated by multiplying the attributable risk by the prevalence of exposure in the population. The attributable proportion is the proportion of the disease that would be eliminated in a population if its disease rate were reduced to that of the unexposed group.
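
      These definitions translate directly into arithmetic. A minimal Python sketch with hypothetical rates (none of these numbers come from the question):

```python
# Hypothetical figures, for illustration only: 30 vs 10 cases per 1,000
# person-years in the exposed and unexposed groups, 20% of the population exposed.
rate_exposed = 30 / 1000
rate_unexposed = 10 / 1000
exposure_prevalence = 0.20

relative_risk = rate_exposed / rate_unexposed      # rate ratio, ~3.0
attributable_risk = rate_exposed - rate_unexposed  # 20 extra cases per 1,000
population_attributable_risk = attributable_risk * exposure_prevalence
attributable_proportion = attributable_risk / rate_exposed

print(round(relative_risk, 1), round(attributable_proportion, 2))  # 3.0 0.67
```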

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      18.5
      Seconds
  • Question 18 - Which of the following statements about calculating the correlation coefficient (r) for the...

    Incorrect

    • Which of the following statements about calculating the correlation coefficient (r) for the relationship between age and systolic blood pressure is not accurate?

      Your Answer: A value of r greater than 0 implies a positive correlation between age and systolic blood pressure

      Correct Answer: May be used to predict systolic blood pressure for a given age

      Explanation:

      To make predictions about systolic blood pressure, linear regression is necessary in this situation.

      Stats: Correlation and Regression

      Correlation and regression are related but not interchangeable terms. Correlation is used to test for association between variables, while regression is used to predict values of dependent variables from independent variables. Correlation can be linear, non-linear, or non-existent, and can be strong, moderate, or weak. The strength of a linear relationship is measured by the correlation coefficient, which can be positive or negative and ranges from very weak to very strong. However, the interpretation of a correlation coefficient depends on the context and purpose. Correlation can suggest association but cannot prove or disprove causation. Linear regression, on the other hand, can be used to predict how much one variable changes when a second variable is changed. Scatter graphs are used in correlation and regression analyses to visually determine if variables are associated and to detect outliers. When constructing a scatter graph, the dependent variable is typically placed on the vertical axis and the independent variable on the horizontal axis.
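
      To make the distinction concrete, the sketch below (using made-up age and systolic blood pressure values, purely for illustration) computes both the correlation coefficient r, which only measures the strength of association, and the least-squares regression line, which is what would actually be used to predict blood pressure from age:

```python
# Hypothetical data: ages (years) and systolic BP (mmHg), illustrative only.
ages = [30, 40, 50, 60, 70]
sbps = [120, 126, 135, 140, 150]

n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(sbps) / n

# Sums of squares and cross-products about the means.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, sbps))
sxx = sum((x - mean_x) ** 2 for x in ages)
syy = sum((y - mean_y) ** 2 for y in sbps)

r = sxy / (sxx * syy) ** 0.5   # Pearson correlation: association only
slope = sxy / sxx              # regression coefficients: allow prediction
intercept = mean_y - slope * mean_x

predicted_sbp_at_45 = intercept + slope * 45
print(round(r, 3), round(slope, 2))  # 0.995 0.74
```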

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      30
      Seconds
  • Question 19 - What is the meaning of the C in the PICO model utilized in...

    Correct

    • What is the meaning of the C in the PICO model utilized in evidence-based medicine?

      Your Answer: Comparison

      Explanation:

      Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.

      When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.

      There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      6.4
      Seconds
  • Question 20 - Which of the following is another term for the average of squared deviations...

    Correct

    • Which of the following is another term for the average of squared deviations from the mean?

      Your Answer: Variance

      Explanation:

      The variance can be expressed as the mean of the squared differences between each value and the mean.

      Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data.

      However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range.

      The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
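
      As a quick illustration, the variance can be computed exactly as described, as the mean of the squared deviations, and checked against Python's statistics module (the data values are made up):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical values

mean = sum(data) / len(data)                               # 5.0
variance = sum((x - mean) ** 2 for x in data) / len(data)  # mean squared deviation
sd = variance ** 0.5                                       # standard deviation

print(variance, sd)  # 4.0 2.0
```

      Note that statistics.pvariance(data) returns the same value; statistics.variance would give the sample variance, which divides by n - 1 instead of n.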

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      7.7
      Seconds
  • Question 21 - How can the pre-test probability be expressed in another way? ...

    Incorrect

    • How can the pre-test probability be expressed in another way?

      Your Answer: Pre-test odds x likelihood ratio

      Correct Answer: The prevalence of a condition

      Explanation:

      The prevalence refers to the percentage of individuals in a population who currently have a particular condition, while the incidence is the frequency at which new cases of the condition arise within a specific timeframe.

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.
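
      A small worked example (the 2x2 counts below are hypothetical) shows how these statistics fall out of the table, including the pre-test to post-test probability update via the likelihood ratio:

```python
# Hypothetical 2x2 table for a diagnostic test (illustrative numbers only):
#                 Disease present   Disease absent
# Test positive         90                30
# Test negative         10               170
tp, fp, fn, tn = 90, 30, 10, 170

sensitivity = tp / (tp + fn)                   # 0.9
specificity = tn / (tn + fp)                   # 0.85
lr_positive = sensitivity / (1 - specificity)  # positive likelihood ratio, ~6

prevalence = (tp + fn) / (tp + fp + fn + tn)   # pre-test probability
pretest_odds = prevalence / (1 - prevalence)
posttest_odds = pretest_odds * lr_positive     # post-test odds = pre-test odds x LR
posttest_prob = posttest_odds / (1 + posttest_odds)

print(round(posttest_prob, 2))  # 0.75
```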

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      18.5
      Seconds
  • Question 22 - What is the ratio of the risk of stroke within a 3 year...

    Correct

    • What is the ratio of the risk of stroke within a 3 year period for high-risk psychiatric patients taking the new oral antithrombotic drug compared to those taking warfarin, based on the given data below? Number who had a stroke within a 3 year period vs Number without stroke New drug: 10 vs 190 Warfarin: 10 vs 490

      Your Answer: 2.5

      Explanation:

      The relative risk (RR) of the event of interest in the exposed group compared to the unexposed group is 2.5.

      EER (experimental event rate) = 10 / 200 = 0.05
      CER (control event rate) = 10 / 500 = 0.02
      RR = EER / CER = 0.05 / 0.02 = 2.5

      This means that the exposed group has a 2.5 times higher risk of experiencing the event compared to the unexposed group.
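
      The same arithmetic in Python (figures taken from the question):

```python
# Figures from the question: new drug 10 strokes / 200 patients,
# warfarin 10 strokes / 500 patients.
eer = 10 / (10 + 190)   # event rate on the new drug
cer = 10 / (10 + 490)   # event rate on warfarin

rr = eer / cer          # relative risk
print(round(rr, 2))  # 2.5
```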

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      170
      Seconds
  • Question 23 - For a study comparing two chemotherapy regimens for small cell lung cancer patients...

    Incorrect

    • For a study comparing two chemotherapy regimens for small cell lung cancer patients based on survival time, which statistical measure is most suitable for comparison?

      Your Answer: Pearson's product-moment coefficient

      Correct Answer: Hazard ratio

      Explanation:

      Understanding Hazard Ratio in Survival Analysis

      Survival analysis is a statistical method used to analyze the time it takes for an event of interest to occur, such as death or disease progression. In this type of analysis, the hazard ratio (HR) is a commonly used measure that is similar to the relative risk but takes into account the fact that the risk of an event may change over time.

      The hazard ratio is particularly useful in situations where the risk of an event is not constant over time, such as in medical research where patients may have different survival times or disease progression rates. It is a measure of the relative risk of an event occurring in one group compared to another, taking into account the time it takes for the event to occur.

      For example, in a study comparing the survival rates of two groups of cancer patients, the hazard ratio would be used to compare the risk of death in one group compared to the other, taking into account the time it takes for the patients to die. A hazard ratio of 1 indicates that there is no difference in the risk of death between the two groups, while a hazard ratio greater than 1 indicates that one group has a higher risk of death than the other.

      Overall, the hazard ratio is a useful tool in survival analysis that allows researchers to compare the risk of an event occurring between different groups, taking into account the time it takes for the event to occur.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      20.2
      Seconds
  • Question 24 - What does a smaller p-value indicate in terms of the strength of evidence?...

    Incorrect

    • What does a smaller p-value indicate in terms of the strength of evidence?

      Your Answer: The null hypothesis

      Correct Answer: The alternative hypothesis

      Explanation:

      A p-value is the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true. A smaller p-value therefore means the observed data would be unlikely if the null hypothesis were true, providing stronger evidence in favour of the alternative hypothesis.

      Understanding Hypothesis Testing in Statistics

      In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.

      The null hypothesis (Ho) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference reflects a real effect rather than random chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference or a change in one direction.

      Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.

      P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large as or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, statistical significance does not imply clinical significance: a statistically significant effect may be too small to be meaningful in practice.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      11.7
      Seconds
  • Question 25 - What type of bias could arise from using only one psychiatrist to diagnose...

    Correct

    • What type of bias could arise from using only one psychiatrist to diagnose all participants in a study?

      Your Answer: Information bias

      Explanation:

      The scenario described above highlights the issue of information bias, which can arise due to errors in measuring, collecting, or interpreting data related to the exposure or disease. Specifically, interviewer/observer bias is a type of information bias that can occur when a single psychiatrist has a tendency to either over- or under-diagnose a condition, potentially skewing the study results.

      Types of Bias in Statistics

      Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.

      There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is incorrect owing to an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, the Hawthorne effect, and ecological fallacy are all subtypes of information bias.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      8.5
      Seconds
  • Question 26 - What is a characteristic of a type II error? ...

    Incorrect

    • What is a characteristic of a type II error?

      Your Answer: Occurs when the alternative hypothesis is incorrectly accepted

      Correct Answer: Occurs when the null hypothesis is incorrectly accepted

      Explanation:

      Hypothesis testing involves the possibility of two types of errors, namely type I and type II errors. A type I error occurs when the null hypothesis is wrongly rejected and the alternative hypothesis incorrectly accepted. This error is also referred to as an alpha error, an error of the first kind, or a false positive. On the other hand, a type II error occurs when the null hypothesis is wrongly accepted. This error is also known as a beta error, an error of the second kind, or a false negative.

      Understanding Hypothesis Testing in Statistics

      In statistics, it is not feasible to investigate hypotheses on entire populations. Therefore, researchers take samples and use them to make estimates about the population they are drawn from. However, this leads to uncertainty as there is no guarantee that the sample taken will be truly representative of the population, resulting in potential errors. Statistical hypothesis testing is the process used to determine if claims from samples to populations can be made and with what certainty.

      The null hypothesis (Ho) is the claim that there is no real difference between two groups, while the alternative hypothesis (H1 or Ha) suggests that any observed difference reflects a real effect rather than random chance. The alternative hypothesis can be one-tailed or two-tailed, depending on whether it seeks to establish a difference or a change in one direction.

      Two types of errors may occur when testing the null hypothesis: Type I and Type II errors. Type I error occurs when the null hypothesis is rejected when it is true, while Type II error occurs when the null hypothesis is accepted when it is false. The power of a study is the probability of correctly rejecting the null hypothesis when it is false, and it can be increased by increasing the sample size.

      P-values provide information on statistical significance and help researchers decide if study results have occurred due to chance. The p-value is the probability of obtaining a result as large as or larger than the one observed when in reality there is no difference between the two groups. The cutoff for the p-value is called the significance level (alpha level), typically set at 0.05. If the p-value is less than the cutoff, the null hypothesis is rejected; if it is greater than or equal to the cutoff, the null hypothesis is not rejected. However, statistical significance does not imply clinical significance: a statistically significant effect may be too small to be meaningful in practice.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      15.9
      Seconds
  • Question 27 - In a randomised controlled trial investigating the initial management of sexual dysfunction with...

    Correct

    • In a randomised controlled trial investigating the initial management of sexual dysfunction with two drugs, some patients withdraw from the study due to medication-related adverse effects. What is the appropriate method for analysing the resulting data?

      Your Answer: Include the patients who drop out in the final data set

      Explanation:

      Intention to Treat Analysis in Randomized Controlled Trials

      Intention to treat analysis is a statistical method used in randomized controlled trials to analyze all patients who were randomly assigned to a treatment group, regardless of whether they completed or received the treatment. This approach is used to avoid the potential biases that may arise from patients dropping out or switching between treatment groups. By analyzing all patients according to their original treatment assignment, intention to treat analysis provides a more accurate representation of the true treatment effects. This method is widely used in clinical trials to ensure that the results are reliable and unbiased.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      38.2
      Seconds
  • Question 28 - A new antihypertensive medication is trialled for adults with high blood pressure. There...

    Incorrect

    • A new antihypertensive medication is trialled for adults with high blood pressure. There are 500 adults in the control group and 300 adults assigned to take the new medication. After 6 months, 200 adults in the control group had high blood pressure compared to 30 adults in the group taking the new medication. What is the relative risk reduction?

      Your Answer: 40%

      Correct Answer: 75%

      Explanation:

      The RRR (Relative Risk Reduction) is calculated by dividing the ARR (Absolute Risk Reduction) by the CER (Control Event Rate). The CER is determined by dividing the number of control events by the total number of participants in the control group, which in this case is 200/500 = 0.4. The EER (Experimental Event Rate) is determined by dividing the number of events in the experimental group by the total number of participants in that group, which in this case is 30/300 = 0.1. The ARR is calculated by subtracting the EER from the CER: 0.4 – 0.1 = 0.3. Finally, the RRR is calculated by dividing the ARR by the CER: 0.3/0.4 = 0.75 (i.e. 75%).
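
      The same steps in Python (figures from the question):

```python
cer = 200 / 500   # control event rate = 0.4
eer = 30 / 300    # experimental event rate = 0.1

arr = cer - eer   # absolute risk reduction
rrr = arr / cer   # relative risk reduction

print(round(rrr, 2))  # 0.75, i.e. 75%
```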

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is the basic risk, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one to benefit. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of an outcome in the absence of the exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, while an odds ratio >1 indicates an increased risk and an odds ratio <1 indicates a reduced risk.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      97
      Seconds
  • Question 29 - What is a criterion used to evaluate the quality of meta-analysis reporting? ...

    Correct

    • What is a criterion used to evaluate the quality of meta-analysis reporting?

      Your Answer: QUORUM

      Explanation:

      QUOROM (Quality of Reporting of Meta-analyses, often written QUORUM) is a checklist and flow diagram developed to improve the reporting of meta-analyses of randomised controlled trials; it has since been superseded by the PRISMA statement. Reporting guidelines such as these ensure that research studies are reported accurately and transparently, which is crucial for the scientific community to evaluate and replicate the findings. Researchers should be familiar with the relevant standards and follow them when reporting their studies to ensure the quality and integrity of their research.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      15.8
      Seconds
  • Question 30 - Which statement accurately describes bar charts? ...

    Correct

    • Which statement accurately describes bar charts?

      Your Answer: The height of the bar indicates the frequency

      Explanation:

      The frequency of each category of a characteristic is displayed through the height of the bars in a bar chart. Discrete data are typically organized into distinct categories and presented in a bar chart. Continuous data, on the other hand, cover a range in which the categories are not separate but blend into one another; this type of data is best represented by a histogram, which is similar to a bar chart but with bars that are connected.

      Differences between Bar Charts and Histograms

      Bar charts and histograms are both used to represent data, but they differ in their application and design. Bar charts are used to summarize nominal or ordinal data, while histograms are used for quantitative data. In a bar chart, the x-axis represents categories without a scale, and the y-axis represents frequencies. The columns are of equal width, and the height of the bar indicates the frequency. On the other hand, histograms have a scale on both axes, with the y-axis representing the relative frequency or frequency density. The width of the columns in a histogram can vary, and the area of the column indicates the true frequency. Overall, bar charts and histograms are useful tools for visualizing data, but their differences in design and application make them better suited for different types of data.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      833.1
      Seconds
  • Question 31 - What is the standard deviation of the sample mean weight of 64 patients...

    Incorrect

    • What is the standard deviation of the sample mean weight of 64 patients diagnosed with paranoid schizophrenia, given that the average weight is 81 kg and the standard deviation is 12 kg?

      Your Answer: Square root (81 / 12)

      Correct Answer: 1.5

      Explanation:

      – The standard error of the mean is calculated using the formula: standard deviation / square root (number of patients).
      – In this case, the standard error of the mean is 12 / square root (64).
      – Simplifying this equation gives a standard error of the mean of 12 / 8 = 1.5 kg.
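
      In Python, using the numbers from the question:

```python
sd = 12              # sample standard deviation (kg)
n = 64               # sample size

sem = sd / n ** 0.5  # standard error of the mean
print(sem)  # 1.5
```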

      Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data.

      However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range.

      The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      43.8
      Seconds
  • Question 32 - The average survival time for people diagnosed with Alzheimer's at age 65 is...

    Correct

    • The average survival time for people diagnosed with Alzheimer's at age 65 is reported to be 8 years. A new pilot scheme consisting of early screening and the provision of high dose fish oils is offered to a designated subgroup of the population. The screening test enables the early detection of Alzheimer's before symptoms arise. A study is conducted on the scheme and reports an increase in survival time and attributes this to the use of fish oils.

      What type of bias could be responsible for the observed increase in survival time?

      Your Answer: Lead Time bias

      Explanation:

      It is possible that the longer survival time is a result of detecting the condition earlier rather than an actual extension of life.

      Types of Bias in Statistics

      Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.

      There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is not correct and there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, Hawthorne effect, and ecological fallacy are all subtypes of information bias.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      27.7
      Seconds
  • Question 33 - What is the term used to describe a test that initially appears to...

    Incorrect

    • What is the term used to describe a test that initially appears to measure what it is intended to measure?

      Your Answer: Good construct validity

      Correct Answer: Good face validity

      Explanation:

      A test that seems to measure what it is intended to measure has strong face validity.

      Validity in statistics refers to how accurately something measures what it claims to measure. There are two main types of validity: internal and external. Internal validity refers to the confidence we have in the cause and effect relationship in a study, while external validity refers to the degree to which the conclusions of a study can be applied to other people, places, and times. There are various threats to both internal and external validity, such as sampling, measurement instrument obtrusiveness, and reactive effects of setting. Additionally, there are several subtypes of validity, including face validity, content validity, criterion validity, and construct validity. Each subtype has its own specific focus and methods for testing validity.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      15
      Seconds
  • Question 34 - If a case-control study investigates 60 potential risk factors for bipolar affective disorder...

    Incorrect

    • If a case-control study investigates 60 potential risk factors for bipolar affective disorder with a significance level of 0.05, how many risk factors would be expected to show a significant association with the disorder due to random chance?

      Your Answer: 2

      Correct Answer: 3

      Explanation:

      If we consider the above example as 60 separate experiments, we would anticipate that 3 variables would show a connection purely by chance. This is because a p-value of 0.05 indicates that there is a 5% chance of obtaining the observed result by chance, or 1 in every 20 times. Therefore, if we multiply 1 in 20 by 60, we get 3, which is the expected number of variables that would show an association by chance alone.
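
      The expected-by-chance count described above can be checked numerically (a minimal illustration of the multiple-comparisons arithmetic):

```python
alpha = 0.05    # significance level per test
n_tests = 60    # number of risk factors examined

# Expected number of spurious "significant" associations: about 3
expected_false_positives = alpha * n_tests

# Probability of at least one spurious result across all 60 tests: about 0.95
prob_at_least_one = 1 - (1 - alpha) ** n_tests
```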

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      22.6
      Seconds
  • Question 35 - You design an experiment investigating whether 3 different exercise routines each with a...

    Incorrect

    • You design an experiment investigating whether 3 different exercise routines each with a different intensity level affect a person's heart rate to a different degree. Which of the following tests would you use to demonstrate a statistically significant difference between the exercise routines?:

      Your Answer: Chi squared test

      Correct Answer: ANOVA

      Explanation:

      Choosing the right statistical test can be challenging, but understanding the basic principles can help. Different tests have different assumptions, and using the wrong one can lead to inaccurate results. To identify the appropriate test, a flow chart can be used based on three main factors: the type of dependent variable, the type of data, and whether the groups/samples are independent or dependent. It is important to know which tests are parametric and non-parametric, as well as their alternatives. For example, the chi-squared test is used to assess differences in categorical variables and is non-parametric, while Pearson’s correlation coefficient measures linear correlation between two variables and is parametric. T-tests are used to compare means between two groups, and ANOVA is used to compare means between more than two groups. Non-parametric equivalents to ANOVA include the Kruskal-Wallis analysis of ranks, the Median test, Friedman’s two-way analysis of variance, and the Cochran Q test. Understanding these tests and their assumptions can help researchers choose the appropriate statistical test for their data.
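
      As an illustrative sketch (hypothetical heart-rate data, pure Python rather than a statistics package), a one-way ANOVA F statistic compares the variance between group means with the variance within groups:

```python
def one_way_anova_f(groups):
    """F statistic for one-way ANOVA: between-group / within-group mean square."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    k, n = len(groups), len(all_values)

    group_means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)

    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical heart-rate changes (bpm) under three exercise intensities
routines = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
f_stat = one_way_anova_f(routines)
```

      A large F relative to the F distribution's critical value (for k − 1 and n − k degrees of freedom) indicates a statistically significant difference between the routines.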

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      5.8
      Seconds
  • Question 36 - Which of the following is an example of a non-random sampling method? ...

    Incorrect

    • Which of the following is an example of a non-random sampling method?

      Your Answer: Cluster sampling

      Correct Answer: Quota sampling

      Explanation:

      Sampling Methods in Statistics

      When collecting data from a population, it is often impractical and unnecessary to gather information from every single member. Instead, taking a sample is preferred. However, it is crucial that the sample accurately represents the population from which it is drawn. There are two main types of sampling methods: probability (random) sampling and non-probability (non-random) sampling.

      Non-probability sampling methods, also known as judgement samples, are based on human choice rather than random selection. These samples are convenient and cheaper than probability sampling methods. Examples of non-probability sampling methods include voluntary sampling, convenience sampling, snowball sampling, and quota sampling.

      Probability sampling methods give a more representative sample of the population than non-probability sampling. In each probability sampling technique, each population element has a known (non-zero) chance of being selected for the sample. Examples of probability sampling methods include simple random sampling, systematic sampling, cluster sampling, stratified sampling, and multistage sampling.

      Simple random sampling is a sample in which every member of the population has an equal chance of being chosen. Systematic sampling involves selecting every kth member of the population. Cluster sampling involves dividing a population into separate groups (called clusters) and selecting a random sample of clusters. Stratified sampling involves dividing a population into groups (strata) and taking a random sample from each stratum. Multistage sampling is a more complex method that involves several stages and combines two or more sampling methods.
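
      A minimal sketch of two of the probability methods described above, using a hypothetical list of patient IDs:

```python
import random

def systematic_sample(population, k, start=0):
    """Systematic sampling: select every kth member, beginning at `start`."""
    return population[start::k]

def simple_random_sample(population, size, seed=None):
    """Simple random sampling: equal chance for every member, without replacement."""
    rng = random.Random(seed)
    return rng.sample(population, size)

patients = list(range(100))                      # hypothetical patient IDs
every_tenth = systematic_sample(patients, 10)    # IDs 0, 10, 20, ..., 90
random_ten = simple_random_sample(patients, 10, seed=1)
```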

      Overall, probability sampling methods give a more representative sample of the population, but non-probability sampling methods are often more convenient and cheaper. It is important to choose the appropriate sampling method based on the research question and available resources.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      15.2
      Seconds
  • Question 37 - What is a correct statement about funnel plots? ...

    Incorrect

    • What is a correct statement about funnel plots?

      Your Answer: Study results with smaller sample sizes are located at the top of the funnel

      Correct Answer: Each dot represents a separate study result

      Explanation:

      An asymmetric funnel plot may indicate the presence of publication bias, although this is not a definitive confirmation. The x-axis typically represents a measure of effect, such as the risk ratio or odds ratio, although other measures may also be used.

      Stats Publication Bias

      Publication bias refers to the tendency for studies with positive findings to be published more than studies with negative findings, leading to incomplete data sets in meta-analyses and erroneous conclusions. Graphical methods such as funnel plots, Galbraith plots, ordered forest plots, and normal quantile plots can be used to detect publication bias. Funnel plots are the most commonly used and offer an easy visual way to ensure that published literature is evenly weighted. The x-axis represents the effect size, and the y-axis represents the study size. A symmetrical, inverted funnel shape indicates that publication bias is unlikely, while an asymmetrical funnel indicates a relationship between treatment effect and study size, indicating either publication bias or small study effects.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      18.8
      Seconds
  • Question 38 - A team of scientists plans to carry out a randomized controlled study to...

    Incorrect

    • A team of scientists plans to carry out a randomized controlled study to assess the effectiveness of a new medication for treating anxiety in elderly patients. To prevent any potential biases, they intend to enroll participants through online portals, ensuring that neither the patients nor the researchers are aware of the group assignment. What type of bias are they seeking to eliminate?

      Your Answer: Performance bias

      Correct Answer: Selection bias

      Explanation:

      The use of allocation concealment is being implemented by the researchers to prevent interference from investigators or patients in the randomisation process. This is important as knowledge of group allocation can lead to patient refusal to participate or researchers manipulating the allocation process. By using remote systems such as online portals for allocation concealment, the risk of selection bias, which refers to systematic differences between comparison groups, is reduced.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      3.5
      Seconds
  • Question 39 - How does the prevalence of a condition impact a particular aspect? ...

    Incorrect

    • How does the prevalence of a condition impact a particular aspect?

      Your Answer:

      Correct Answer: Positive predictive value

      Explanation:

      The characteristics of precision, sensitivity, accuracy, and specificity are not influenced by the prevalence of the condition and remain stable. However, the positive predictive value is affected by the prevalence of the condition, particularly in cases where the prevalence is low. As prevalence falls, true positives become scarcer while false positives do not: the numerator of the PPV equation shrinks relative to the false-positive term in the denominator, resulting in a lower PPV. The formula for PPV is TP/(TP+FP).
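
      The dependence of PPV on prevalence can be sketched directly from the 2x2 proportions (the sensitivity and specificity figures here are hypothetical):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: TP / (TP + FP), working in proportions."""
    tp = sensitivity * prevalence               # true positives
    fp = (1 - specificity) * (1 - prevalence)   # false positives
    return tp / (tp + fp)

# Same test (90% sensitive, 90% specific) applied at two prevalences
high_prev = ppv(0.9, 0.9, 0.50)   # about 0.90
low_prev = ppv(0.9, 0.9, 0.10)    # about 0.50 -- PPV falls with prevalence
```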

      Clinical tests are used to determine the presence or absence of a disease or condition. To interpret test results, it is important to have a working knowledge of the statistics used to describe them. Two by two tables are commonly used to calculate test statistics such as sensitivity and specificity. Sensitivity refers to the proportion of people with a condition that the test correctly identifies, while specificity refers to the proportion of people without a condition that the test correctly identifies. Accuracy tells us how closely a test measures to its true value, while predictive values help us understand the likelihood of having a disease based on a positive or negative test result. Likelihood ratios combine sensitivity and specificity into a single figure that can refine our estimation of the probability of a disease being present. Pre- and post-test odds and probabilities can also be calculated to better understand the likelihood of having a disease before and after a test is carried out. Fagan’s nomogram is a useful tool for calculating post-test probabilities.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds
  • Question 40 - What type of regression is appropriate for analyzing data with dichotomous variables? ...

    Incorrect

    • What type of regression is appropriate for analyzing data with dichotomous variables?

      Your Answer:

      Correct Answer: Logistic

      Explanation:

      Logistic regression is employed when dealing with dichotomous variables, which are variables that have only two possible values, such as live/dead or heads/tails.

      Stats: Correlation and Regression

      Correlation and regression are related but not interchangeable terms. Correlation is used to test for association between variables, while regression is used to predict values of dependent variables from independent variables. Correlation can be linear, non-linear, or non-existent, and can be strong, moderate, or weak. The strength of a linear relationship is measured by the correlation coefficient, which can be positive or negative and ranges from very weak to very strong. However, the interpretation of a correlation coefficient depends on the context and purpose. Correlation can suggest association but cannot prove or disprove causation. Linear regression, on the other hand, can be used to predict how much one variable changes when a second variable is changed. Scatter graphs are used in correlation and regression analyses to visually determine if variables are associated and to detect outliers. When constructing a scatter graph, the dependent variable is typically placed on the vertical axis and the independent variable on the horizontal axis.
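
      The correlation coefficient described above can be sketched in pure Python; Pearson's r is the covariance of the two variables scaled by both of their spreads (the data here are hypothetical):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sy = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A perfect positive linear relationship gives r close to +1
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```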

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds
  • Question 41 - Which of the options below does not demonstrate selection bias? ...

    Incorrect

    • Which of the options below does not demonstrate selection bias?

      Your Answer:

      Correct Answer: Recall bias

      Explanation:

      Types of Bias in Statistics

      Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.

      There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is not correct and there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, Hawthorne effect, and ecological fallacy are all subtypes of information bias.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds
  • Question 42 - Which of the following scenarios demonstrates information bias? ...

    Incorrect

    • Which of the following scenarios demonstrates information bias?

      Your Answer:

      Correct Answer: Lead Time bias

      Explanation:

      Types of Bias in Statistics

      Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.

      There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is not correct and there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, Hawthorne effect, and ecological fallacy are all subtypes of information bias.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds
  • Question 43 - What percentage of the data set falls below the upper quartile when considering...

    Incorrect

    • What percentage of the data set falls below the upper quartile when considering the interquartile range?

      Your Answer:

      Correct Answer: 75%

      Explanation:

      Measures of dispersion are used to indicate the variation or spread of a data set, often in conjunction with a measure of central tendency such as the mean or median. The range, which is the difference between the largest and smallest value, is the simplest measure of dispersion. The interquartile range, which is the difference between the 3rd and 1st quartiles, is another useful measure. Quartiles divide a data set into quarters, and the interquartile range can provide additional information about the spread of the data. However, to get a more representative idea of spread, measures such as the variance and standard deviation are needed. The variance gives an indication of how much the items in the data set vary from the mean, while the standard deviation reflects the distribution of individual scores around their mean. The standard deviation is expressed in the same units as the data set and can be used to indicate how confident we are that data points lie within a particular range. The standard error of the mean is an inferential statistic used to estimate the population mean and is a measure of the spread expected for the mean of the observations. Confidence intervals are often presented alongside sample results such as the mean value, indicating a range that is likely to contain the true value.
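
      As a sketch using the standard library (hypothetical data), the quartiles and interquartile range can be obtained from `statistics.quantiles`:

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7, 8]  # hypothetical observations, already sorted

# n=4 splits the data at the three quartiles: Q1, Q2 (median), Q3
q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1

# 75% of the data set lies below the upper quartile (Q3)
```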

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds
  • Question 44 - Which of the following is an example of secondary evidence? ...

    Incorrect

    • Which of the following is an example of secondary evidence?

      Your Answer:

      Correct Answer: A Cochrane review on the evidence of exercise for reducing the duration of depression relapses

      Explanation:

      Scientific literature can be classified into two main types: primary and secondary sources. Primary sources are original research studies that present data and analysis without any external evaluation or interpretation. Examples of primary sources include randomized controlled trials, cohort studies, case-control studies, case series, and conference papers. Secondary sources, on the other hand, provide an interpretation and analysis of primary sources. These sources are typically removed by one or more steps from the original event. Examples of secondary sources include evidence-based guidelines and textbooks, meta-analyses, and systematic reviews.

      Evidence-based medicine involves four basic steps: developing a focused clinical question, searching for the best evidence, critically appraising the evidence, and applying the evidence and evaluating the outcome. When developing a question, it is important to understand the difference between background and foreground questions. Background questions are general questions about conditions, illnesses, syndromes, and pathophysiology, while foreground questions are more often about issues of care. The PICO system is often used to define the components of a foreground question: patient group of interest, intervention of interest, comparison, and primary outcome.

      When searching for evidence, it is important to have a basic understanding of the types of evidence and sources of information. Scientific literature is divided into two basic categories: primary (empirical research) and secondary (interpretation and analysis of primary sources). Unfiltered sources are large databases of articles that have not been pre-screened for quality, while filtered resources summarize and appraise evidence from several studies.

      There are several databases and search engines that can be used to search for evidence, including Medline and PubMed, Embase, the Cochrane Library, PsycINFO, CINAHL, and OpenGrey. Boolean logic can be used to combine search terms in PubMed, and phrase searching and truncation can also be used. Medical Subject Headings (MeSH) are used by indexers to describe articles for MEDLINE records, and the MeSH Database is like a thesaurus that enables exploration of this vocabulary.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds
  • Question 45 - How do the incidence rate and cumulative incidence differ from each other? ...

    Incorrect

    • How do the incidence rate and cumulative incidence differ from each other?

      Your Answer:

      Correct Answer: The incidence rate is a more accurate estimate of the rate at which the outcome develops

      Explanation:

      Measures of Disease Frequency: Incidence and Prevalence

      Incidence and prevalence are two important measures of disease frequency. Incidence measures the speed at which new cases of a disease are emerging, while prevalence measures the burden of disease within a population. Cumulative incidence and incidence rate are two types of incidence measures, while point prevalence and period prevalence are two types of prevalence measures.

      Cumulative incidence is the average risk of getting a disease over a certain period of time, while incidence rate is a measure of the speed at which new cases are emerging. Prevalence is a proportion and is a measure of the burden of disease within a population. Point prevalence measures the number of cases in a defined population at a specific point in time, while period prevalence measures the number of identified cases during a specified period of time.

      It is important to note that prevalence is equal to incidence multiplied by the duration of the condition. In chronic diseases, the prevalence is much greater than the incidence. The incidence rate is stated in units of person-time, while cumulative incidence is always a proportion. When describing cumulative incidence, it is necessary to give the follow-up period over which the risk is estimated. In acute diseases, the prevalence and incidence may be similar, while for conditions such as the common cold, the incidence may be greater than the prevalence.
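
      The relation quoted above (prevalence = incidence × duration, an approximation that holds when prevalence is low) can be sketched with hypothetical figures:

```python
def approx_prevalence(incidence_rate: float, mean_duration_years: float) -> float:
    """Prevalence ~= incidence rate x average disease duration (low-prevalence approximation)."""
    return incidence_rate * mean_duration_years

# Chronic disease: low incidence but long duration -> prevalence >> incidence
chronic = approx_prevalence(0.001, 20)     # 0.1% per year for ~20 years

# Common cold: high incidence but days-long duration -> prevalence < incidence
cold = approx_prevalence(2.0, 3 / 365)     # ~2 episodes/year lasting ~3 days
```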

      Incidence is a useful measure to study disease etiology and risk factors, while prevalence is useful for health resource planning. Understanding these measures of disease frequency is important for public health professionals and researchers in order to effectively monitor and address the burden of disease within populations.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds
  • Question 46 - What is another term for case-mix bias? ...

    Incorrect

    • What is another term for case-mix bias?

      Your Answer:

      Correct Answer: Disease spectrum bias

      Explanation:

      Types of Bias in Statistics

      Bias is a systematic error that can lead to incorrect conclusions. Confounding factors are variables that are associated with both the outcome and the exposure but have no causative role. Confounding can be addressed in the design and analysis stage of a study. The main method of controlling confounding in the analysis phase is stratification analysis. The main methods used in the design stage are matching, randomization, and restriction of participants.

      There are two main types of bias: selection bias and information bias. Selection bias occurs when the selected sample is not a representative sample of the reference population. Disease spectrum bias, self-selection bias, participation bias, incidence-prevalence bias, exclusion bias, publication or dissemination bias, citation bias, and Berkson’s bias are all subtypes of selection bias. Information bias occurs when gathered information about exposure, outcome, or both is not correct and there was an error in measurement. Detection bias, recall bias, lead time bias, interviewer/observer bias, verification and work-up bias, Hawthorne effect, and ecological fallacy are all subtypes of information bias.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds
  • Question 47 - The research team is studying the effectiveness of a new treatment for a...

    Incorrect

    • The research team is studying the effectiveness of a new treatment for a certain medical condition. They have found that the brand name medication Y and its generic version Y1 have similar efficacy. They approach you for guidance on what type of analysis to conduct next. What would you suggest?

      Your Answer:

      Correct Answer: Cost minimisation analysis

      Explanation:

      Cost minimisation analysis is employed to compare net costs when the observed effects of health care interventions are similar. To conduct this analysis, it is necessary to have clinical evidence that demonstrates the differences in health effects between alternatives are negligible or insignificant. This approach is commonly used by institutions like the National Institute for Health and Care Excellence (NICE).

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds
  • Question 48 - A study is conducted to investigate whether a new exercise program has any...

    Incorrect

    • A study is conducted to investigate whether a new exercise program has any impact on weight loss. A total of 300 participants are enrolled from various locations and are randomly assigned to either the exercise group or the control group. Weight measurements are taken at the beginning of the study and at the end of a six-month period.

      What is the most effective method of visually presenting the data?

      Your Answer:

      Correct Answer: Kaplan-Meier plot

      Explanation:

      The Kaplan-Meier plot is the most effective graphical representation of survival probability. It presents the overall likelihood of an individual’s survival over time from a baseline, and the comparison of two lines on the plot can indicate whether there is a survival advantage. To determine if the distinction between the two groups is significant, a log rank test can be employed.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
      0
      Seconds
  • Question 49 - The QALY is utilized in which of the following approaches for economic assessment?...

    Incorrect

    • The QALY is utilized in which of the following approaches for economic assessment?

      Your Answer:

      Correct Answer: Cost-utility analysis

      Explanation:

      Methods of Economic Evaluation

      There are four main methods of economic evaluation: cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-minimisation analysis (CMA). While all four methods capture costs, they differ in how they assess health effects.

      Cost-effectiveness analysis (CEA) compares interventions by relating costs to a single clinical measure of effectiveness, such as symptom reduction or improvement in activities of daily living. The cost-effectiveness ratio is calculated as total cost divided by units of effectiveness. CEA is typically used when CBA cannot be performed due to the inability to monetise benefits.

      Cost-benefit analysis (CBA) measures all costs and benefits of an intervention in monetary terms to establish which alternative has the greatest net benefit. CBA requires that all consequences of an intervention, such as life-years saved, treatment side-effects, symptom relief, disability, pain, and discomfort, are allocated a monetary value. CBA is rarely used in mental health service evaluation due to the difficulty in converting benefits from mental health programmes into monetary values.

      Cost-utility analysis (CUA) is a special form of CEA in which health benefits/outcomes are measured in broader, more generic ways, enabling comparisons between treatments for different diseases and conditions. Multidimensional health outcomes are measured by a single preference- or utility-based index such as the Quality-Adjusted Life Year (QALY). QALYs are a composite measure of gains in life expectancy and health-related quality of life. CUA allows for comparisons across treatments for different conditions.
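
      As a minimal sketch of the QALY arithmetic (hypothetical utility weights; a real CUA would also discount future years, which is omitted here):

```python
def qalys(years: float, utility: float) -> float:
    """QALYs = life-years x health-state utility (0 = dead, 1 = full health)."""
    return years * utility

# Hypothetical comparison: more years at lower quality vs fewer at higher quality
treatment_a = qalys(10, 0.8)   # 8.0 QALYs
treatment_b = qalys(8, 0.9)    # 7.2 QALYs
```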

      Cost-minimisation analysis (CMA) is an economic evaluation in which the consequences of competing interventions are the same, and only inputs, i.e. costs, are taken into consideration. The aim is to decide the least costly way of achieving the same outcome.

      Costs in Economic Evaluation Studies

      There are three main types of costs in economic evaluation studies: direct, indirect, and intangible. Direct costs are associated directly with the healthcare intervention, such as staff time, medical supplies, cost of travel for the patient, childcare costs for the patient, and costs falling on other social sectors such as domestic help from social services. Indirect costs are incurred by the reduced productivity of the patient, such as time off work, reduced work productivity, and time spent caring for the patient by relatives. Intangible costs are difficult to measure, such as pain or suffering on the part of the patient.

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice
  • Question 50 - A study of 30 patients with hypertension compares the effectiveness of a new...

    Incorrect

    • A study of 30 patients with hypertension compares the effectiveness of a new blood pressure medication with standard treatment. 80% of the new treatment group achieved target blood pressure levels at 6 weeks, compared with only 40% of the standard treatment group. What is the number needed to treat for the new treatment?

      Your Answer:

      Correct Answer: 3

      Explanation:

      To calculate the number needed to treat (NNT), we first need the absolute risk reduction (ARR): the absolute difference between the experimental event rate (EER) and the control event rate (CER). Here the 'event' is the desirable outcome of achieving target blood pressure, so EER = 0.8 and CER = 0.4.

      ARR = EER – CER
      = 0.8 – 0.4
      = 0.4

      NNT = 1 / ARR
      = 1 / 0.4
      = 2.5

      By convention the NNT is rounded up to the next whole number, giving an NNT of 3. In other words, three patients need to receive the new treatment for one additional patient to reach target blood pressure compared with standard treatment.
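      Treating 'achieved target blood pressure' as the event of interest, the stated answer of 3 can be reproduced with a short sketch:

```python
import math

# Event = achieving target blood pressure (a desirable outcome).
eer = 0.8   # experimental event rate (new treatment)
cer = 0.4   # control event rate (standard treatment)

arr = abs(eer - cer)       # absolute difference in event rates = 0.4
nnt = math.ceil(1 / arr)   # 1 / 0.4 = 2.5, rounded up to the next whole patient
print(nnt)  # 3
```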

      Measures of Effect in Clinical Studies

      When conducting clinical studies, we often want to know the effect of treatments or exposures on health outcomes. Measures of effect are used in randomized controlled trials (RCTs) and include the odds ratio (OR), risk ratio (RR), risk difference (RD), and number needed to treat (NNT). Dichotomous (binary) outcome data are common in clinical trials, where the outcome for each participant is one of two possibilities, such as dead or alive, or clinical improvement or no improvement.

      To understand the difference between the OR and RR, it’s important to know the difference between risks and odds. Risk is a proportion that describes the probability of a health outcome occurring, while odds is a ratio that compares the probability of an event occurring to the probability of it not occurring. Absolute risk is simply the probability of the event in a given group, while risk difference is the difference between the absolute risk of an event in the intervention group and the absolute risk in the control group. Relative risk is the ratio of the risk in the intervention group to the risk in the control group.

      The number needed to treat (NNT) is the number of patients who need to be treated for one additional patient to benefit, and is calculated as 1/ARR. Odds are calculated by dividing the number of times an event happens by the number of times it does not happen. The odds ratio is the odds of an outcome given a particular exposure versus the odds of the outcome in the absence of that exposure. It is commonly used in case-control studies and can also be used in cross-sectional and cohort study designs. An odds ratio of 1 indicates no difference in risk between the two groups, an odds ratio >1 indicates an increased risk, and an odds ratio <1 indicates a reduced risk.
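      These measures are all derived from the standard 2x2 table of outcomes. A minimal sketch with hypothetical counts (invented for illustration, not from any trial):

```python
# Effect measures from a 2x2 table:
#                 event   no event
#   treatment      a=20     b=80
#   control        c=40     d=60
a, b = 20, 80
c, d = 40, 60

risk_treat = a / (a + b)              # absolute risk, treatment group = 0.2
risk_ctrl = c / (c + d)               # absolute risk, control group   = 0.4
risk_ratio = risk_treat / risk_ctrl   # RR = 0.5 (treatment halves the risk)
risk_diff = risk_ctrl - risk_treat    # ARR = 0.2
nnt = 1 / risk_diff                   # NNT = 5 patients per event prevented
odds_ratio = (a / b) / (c / d)        # OR = 0.25 / 0.667 ≈ 0.375

print(risk_ratio, risk_diff, nnt, odds_ratio)
```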

    • This question is part of the following fields:

      • Research Methods, Statistics, Critical Review And Evidence-Based Practice

Passmed