Psychology: Evidence-Based Approach

In English
Created by: Veuve

Psychology: Evidence-Based Approach - Details

Questions: 231
Why do we engage in research in Psychology?
Psychologists focus their attention on understanding behavior, as well as the cognitive (mental) and physiological (body) processes that underlie behavior
What is the scientist-practitioner model?
The scientist–practitioner model, also called the Boulder Model, is a training model for graduate programs that provide applied psychologists with a foundation in research and scientific practice
What is evidence-based practice?
An evidence-based practice is any practice that relies on scientific and mathematical evidence to form strong inductive or deductive arguments for guidance and decision-making. Practices that are not evidence-based may rely on tradition, intuition, or other unproven methods
Is Psychology a science?
Yes. Psychology is a science because it builds and organises knowledge about behaviour and mental processes through testable explanations and predictions, using systematic empirical methods rather than intuition, authority, or casual observation
What is science?
Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe
Name 5 non-scientific methods
Intuition or Belief, Consensus, Authority, Casual Observation, Informal Logic
What is quantification and why is it valued so highly?
The translation of a subjective aspect (attribute, characteristic, property) of a thing or phenomenon into numbers, usually through an arbitrary scale (such as a Likert scale). It is valued so highly because numbers allow precise description, comparison, and statistical analysis. Every aspect of nature can be quantified, although it may not be directly measurable
What are the defining characteristics of a true experimental design
Includes a control group, randomly selects participants from the population, randomly assigns participants to groups, randomly assigns treatments to groups, and has a high degree of control over extraneous variables
Why are experimental designs held in such high esteem?
They have strong internal validity because of the way they are conducted, providing strong support for causal conclusions
Why is a control group so important?
Because it is practically impossible to completely eliminate all of the bias and outside influence that could alter the results of the experiment; the control group provides a baseline so you can isolate the effect of the variable you're trying to test
Why is randomised allocation so important?
To prevent selection bias by distributing the characteristics of patients that may influence the outcome randomly between the groups, so that any difference in outcome can be explained only by the treatment
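
A minimal Python sketch of randomised allocation (assumes NumPy is available; the participant IDs are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(seed=1)                # fixed seed so the example is reproducible
    participants = np.arange(1, 21)                    # 20 hypothetical participant IDs
    shuffled = rng.permutation(participants)           # shuffle so allocation ignores participant characteristics
    treatment, control = shuffled[:10], shuffled[10:]  # first half to treatment, second half to control
    print("Treatment:", sorted(treatment))
    print("Control:", sorted(control))
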
What is an independent variable?
A variable that is manipulated to examine its impact on a dependent variable
What is a dependent variable?
A variable that is measured to see whether the treatment or manipulation of the independent variable had an effect
What is a confounding variable?
A variable that influences both the dependent variable and independent variable, causing a spurious association. Confounding is a causal concept, and as such, cannot be described in terms of correlations or associations
What is an intervention study?
An interventional study is one in which the participants receive some kind of intervention, such as a new medicine, in order to evaluate it
What is a randomised controlled trial?
A study in which people are allocated at random to receive one of several clinical interventions. This is one of the simplest and most powerful tools in clinical research
What is a double blinded placebo control?
A medical study involving human participants in which neither the participants nor the researchers know who is receiving the active treatment and who is receiving the placebo given to the control group
What is internal validity?
Refers to how well an experiment is done, especially whether it avoids confounding (more than one possible independent variable acting at the same time)
Name some threats to internal validity
History, Maturation, Testing, Instrumentation, Regression, Selection, Experimental Mortality and an interaction of threats
What is a quasi-experimental design?
An empirical interventional study used to estimate the causal impact of an intervention on a target population without random assignment.
Why can't you make confident cause and effect statements about the results from a quasi-experimental design?
Subject to concerns regarding internal validity because the treatment and control groups may not be comparable at baseline
What is a non-experimental design?
A label given to a study when a researcher cannot control, manipulate or alter the predictor variable or subjects, but instead relies on interpretation, observation or interactions to come to a conclusion
Does correlation = causation?
No. There is an inability to legitimately deduce cause-and-effect relationship between two variables solely on the basis of an observed association or correlation between them
What is the normal distribution?
A probability function that describes how the values of a variable are distributed. It is a symmetric distribution where most of the observations cluster around the central peak and the probabilities for values further away from the mean taper off equally in both directions
Why is the normal distribution so important in quantitative psychological research?
Because it fits many natural phenomena, and many parametric statistical tests assume normally distributed data. Also known as the bell curve
What is a Z Score
A numerical measurement of a value's relationship to the mean (average) of a group of values, measured in terms of Standard Deviations from the mean. If a score is 0, it indicates that the data point's score is identical to the mean score
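
A minimal Python sketch of the z-score calculation, z = (x - mean) / SD (assumes NumPy and SciPy are available; the scores are made up for illustration):

    import numpy as np
    from scipy import stats

    scores = np.array([12, 15, 9, 18, 14, 11, 16, 13])        # hypothetical test scores
    z_manual = (scores - scores.mean()) / scores.std(ddof=0)  # (x - mean) / SD
    z_scipy = stats.zscore(scores)                            # same calculation via SciPy
    print(np.allclose(z_manual, z_scipy))                     # True; a z of 0 sits exactly at the mean
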
What is sampling error?
The error that arises when a sample does not perfectly represent the entire population, so that the results found in the sample differ from the results that would be obtained from the entire population
What is the difference between a population and a sample
A population includes all of the elements from a set of data. A sample consists of one or more observations drawn from the population
What are some of the major sampling strategies?
Simple random, stratified random, cluster and systematic
What is test reliability
The degree to which a measure is consistent for a particular element over a period of time and between different participants.
Describe four types of reliability
Inter-rater (different people, same test), Test-retest (Same people, different times), Parallel forms (Different people, same time, different test), Internal Consistency (Different questions, same construct)
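
A minimal Python sketch of test-retest reliability as the correlation between the same people's scores at two time points (assumes NumPy is available; the scores are made up for illustration):

    import numpy as np

    time1 = np.array([10, 14, 9, 16, 12, 11, 15, 13])   # hypothetical scores at time 1
    time2 = np.array([11, 13, 10, 15, 12, 12, 16, 12])  # the same people retested later
    r = np.corrcoef(time1, time2)[0, 1]                  # Pearson correlation as the reliability estimate
    print(round(r, 2))                                   # values near 1 indicate consistent measurement
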
What is test validity
The extent to which a test accurately measures what it is supposed to measure
Describe four types of test validity
Content (measures how well the items represent the entire universe of items), Criterion - Concurrent (measures how well a test estimates the criterion), Criterion - Predictive (measures how well a test predicts a criterion), Construct (measures how well a test assesses some underlying construct)
What is the relationship between reliability and validity?
Both describe the quality of measurement: reliability is how consistently a method, technique or test measures something, while validity is how accurately it measures what it is supposed to measure. A test can be reliable without being valid, but it cannot be valid unless it is also reliable
What is a sampling distribution?
A probability distribution of a statistic obtained through a large number of samples drawn from a specific population. The distribution of frequencies of a range of different outcomes that could possibly occur for a statistic of a population
Why are sampling distributions important?
They provide a major simplification on the route to statistical inference
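
A minimal Python sketch of building a sampling distribution of the mean by repeated sampling (assumes NumPy is available; the skewed "population" is simulated for illustration):

    import numpy as np

    rng = np.random.default_rng(seed=2)
    population = rng.exponential(scale=2.0, size=100_000)    # a clearly non-normal population
    sample_means = [rng.choice(population, size=50).mean()   # mean of each random sample of n = 50
                    for _ in range(5_000)]
    print(round(np.mean(sample_means), 2), round(np.std(sample_means), 2))
    # the sample means cluster around the population mean (about 2) and are approximately normally distributed
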
What is standard error?
A measure of the statistical accuracy of an estimate, equal to the standard deviation of the theoretical distribution of a large population of such estimates
What is the relationship between sample size and standard error?
As you increase sample size, the standard error of the mean will become smaller. With bigger sample sizes, the sample mean becomes a more accurate estimate of the parametric mean, so the standard error of the mean becomes smaller
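
A minimal Python sketch showing that the standard error of the mean, SD / sqrt(n), shrinks as the sample size grows (assumes NumPy is available; the population SD of 15 is a made-up example value):

    import numpy as np

    sd = 15                                   # hypothetical population standard deviation
    for n in [10, 40, 160, 640]:
        se = sd / np.sqrt(n)                  # standard error of the mean
        print(f"n = {n:4d}  SE = {se:.2f}")   # quadrupling n halves the standard error
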
What is the null hypothesis?
Proposes that no statistical significance exists in a set of given observations. It attempts to show that no variation exists between variables or that a single variable is no different than its mean
What is the definition of a significance level (p)?
It is the level of marginal significance within a statistical hypothesis test representing the probability of the occurrence of a given event. The value is used as an alternative to rejection points to provide the smallest level of significance at which the null hypothesis would be rejected
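
A minimal Python sketch of comparing a p value against a significance level using an independent-samples t-test (assumes NumPy and SciPy are available; the two groups are simulated for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=3)
    group_a = rng.normal(loc=100, scale=15, size=30)   # hypothetical control scores
    group_b = rng.normal(loc=108, scale=15, size=30)   # hypothetical treatment scores
    t, p = stats.ttest_ind(group_a, group_b)           # tests the null hypothesis of equal means
    alpha = 0.05                                       # conventional significance level
    print(f"t = {t:.2f}, p = {p:.3f}, reject H0: {p < alpha}")
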
What is the goal of descriptive statistics?
To provide basic information about variables in a dataset and to highlight potential relationships between variables
What is the key information that we derive from numeric descriptive statistics?
To describe the main features of numerical and categorical information with simple summaries
What is the goal of inferential statistics?
To draw conclusions from a sample and generalise them to the population. It determines the probability of the characteristics of the sample using probability theory.
What is alpha?
The significance level: the probability of rejecting the null hypothesis when it is actually true (a Type I error), conventionally set at .05
What are qualities of Ordinal measurements?
Assignment of values along some underlying dimension
What are qualities of Interval measurements?
Equal distances between points
What are qualities of Ratio measurements?
Meaningful and nonarbitrary zero
Examples of Nominal measurements
Gender, Preference (like or dislike), Voting record (for or against)
Examples of Ordinal measurements
Rank in College, order of finishing in a race
Examples of Interval measurements
Number of words spelled correctly, Intelligence test scores, Temperature
Why are Z scores important?
Allows for the calculation of the probability of a score occurring within the normal distribution and enables comparison of two scores that are from different normal distributions
What is the goal of the research process?
To provide reasonable answers to interesting questions
What are the steps of the research process?
Asking the question, Identifying the important factors, Formulating a hypothesis, collecting relevant information, Testing the hypothesis, Working with the hypothesis, Reconsidering the Theory, Asking new questions
Are variables and attributes the same thing?
No, variables HAVE attributes.
Null Hypothesis
Always states "no" (no effect, no difference), e.g., smoking e-cigarettes does not cause lung cancer
What is Operationalisation?
Defining a concept in terms of concrete, observable operations so that there is a meaningful way of measuring what it is that you intend to measure
What is a continuous variable?
Involving Interval / Ratio variables
What is a discrete variable?
Involving Nominal / Ordinal variables
3 ways to measure data
Central Tendency, Dispersion (Variability), Measures of Association
What does a small Standard Deviation indicate?
The scores are relatively close together around the mean
What does a large Standard Deviation indicate?
The scores are spread widely from the mean
Explain the difference between “exclude cases listwise” and “exclude cases pairwise” in dealing with missing data points.
The Exclude cases listwise option will include a case in the analysis only if it has full data on all of the variables listed in your variables box. The Exclude cases pairwise option, however, excludes the case (person) only if they are missing the data required for the specific analysis
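
A minimal pandas sketch of the difference (assumes pandas and NumPy are available; the small data frame with missing values is made up for illustration):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"anxiety":   [3, 5, np.nan, 7, 4],
                       "sleep":     [8, 6, 7, np.nan, 9],
                       "wellbeing": [6, 4, 5, 3, np.nan]})
    print(len(df.dropna()))        # listwise: only the 2 complete cases would enter every analysis
    print(df.corr())               # pairwise: each correlation uses whichever cases have both variables
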
Define the mean, the median, and the mode.
Mean: the average of the data set. Median: the middle value of the ordered data set. Mode: the most common value in the data set
Why is the 5% trimmed mean a useful statistic?
If you compare the original mean and the new trimmed mean, you can see whether extreme scores are having a strong influence on the mean
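
A minimal Python sketch of the 5% trimmed mean (assumes NumPy and SciPy are available; the data set with one extreme score is made up for illustration):

    import numpy as np
    from scipy import stats

    scores = np.concatenate([np.arange(20, 39), [150]])   # 19 ordinary scores plus one extreme score
    print(round(scores.mean(), 2))                        # ordinary mean is pulled upwards by the extreme score
    print(round(stats.trim_mean(scores, 0.05), 2))        # trimming 5% from each tail removes its influence
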
If you have 2 data sets containing IQ measures, and the second data set has a larger standard deviation, then what does this suggest?
A larger standard deviation means that the values in the data set are further away from the mean, on average
Define a 95% confidence interval
An interval constructed such that the true population mean will fall within this interval in 95% of samples
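
A minimal Python sketch of a 95% confidence interval for a mean using the t distribution (assumes NumPy and SciPy are available; the sample is simulated for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=4)
    sample = rng.normal(loc=100, scale=15, size=40)   # hypothetical IQ-like sample
    mean = sample.mean()
    se = stats.sem(sample)                            # standard error of the mean
    lower, upper = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=se)
    print(f"95% CI: [{lower:.1f}, {upper:.1f}]")      # in 95% of samples an interval built this way covers the true mean
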
What is the interquartile range?
A measure of statistical dispersion, being equal to the difference between the 75th and 25th percentiles, or upper and lower quartiles
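
A minimal Python sketch of the interquartile range (assumes NumPy is available; the scores are made up for illustration):

    import numpy as np

    scores = np.array([4, 7, 8, 10, 12, 13, 15, 18, 21, 40])   # hypothetical scores, one of them extreme
    q1, q3 = np.percentile(scores, [25, 75])                    # 25th and 75th percentiles
    print(q3 - q1)                                              # the IQR ignores the extreme tails
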
If a score is at the 90th percentile, what does that mean?
If you know that a score is in the 90th percentile, that means you scored better than 90% of the people who took the test
Why do we do exploratory data analysis
Need to check for data entry errors. Gather information on descriptive statistics. Identify patterns. Identify any missing data points and devise a strategy for dealing with them. Identify sources of bias.
What are the upper and lower bound for a confidence interval
Refers to the upper and lower limits within which the mean should fall if the study were replicated and generalised to the population.
What is the purpose of a Histogram?
Gives information about normality to tell us whether we should use parametric or nonparametric tests for continuous data
What is the purpose of bar graphs?
To visually represent differences in continuous data between groups
What is the purpose of box plots?
Gives information about the measures of central tendency and variability - what your data 'look' like
What does a positive skew look like?
Scores bunched at low values with the tail pointing to the high values
What does a negative skew look like?
Scores bunched at high values with the tail pointing to the low values
What does positive kurtosis look like?
The distribution has heavier tails than the normal distribution. Usually looks more peaked
What does negative kurtosis look like?
The distribution has lighter tails than the normal distribution. Usually looks flatter
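
A minimal Python sketch of checking skewness and kurtosis (assumes NumPy and SciPy are available; the skewed data are simulated for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=5)
    reaction_times = rng.exponential(scale=0.5, size=500)   # positively skewed scores, like many reaction-time data
    print(round(stats.skew(reaction_times), 2))             # > 0 indicates positive skew (tail towards high values)
    print(round(stats.kurtosis(reaction_times), 2))         # > 0 indicates heavier tails than the normal (Fisher definition)
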
When should you delete (or deal with) outliers?
If it is obvious that the outlier is due to incorrectly entered or measured data, you should drop the outlier. If the outlier does not change the results but does affect assumptions, you may drop the outlier. More commonly, the outlier affects both results and assumptions. If the outlier creates a significant association, you should drop the outlier and should not report any significance from your analysis
What is the assumption of normality and why is it important?
Means that you should make sure your data roughly fit a bell-curve shape before running certain statistical tests or regression. Deviations from normality render statistical tests inaccurate, so it is important to know if your data are normal. Tests that rely upon the assumption of normality are called parametric tests. Deviations from normality cause the most problems when sample sizes are small (< 20)
What are different methods for testing normality?
Kolmogorov-Smirnov (tests whether a dataset differs significantly from a reference distribution such as the normal; best for larger samples) and Shapiro-Wilk (very sensitive; best for small samples of around 25 and under)
What are some common transformations used to deal with non-normal data?
Log10: Reduces positive skew. Square root: Reduces positive skew. Inverse: Reduces positive skew. Reverse scoring: Helps deal with negative skew
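
A minimal Python sketch of these transformations (assumes NumPy and SciPy are available; the positively skewed scores are simulated for illustration, and the maximum used for reverse scoring is a made-up value):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=6)
    x = rng.chisquare(df=3, size=300) + 1          # positively skewed scores, kept above zero

    transforms = {"raw": x,
                  "log10": np.log10(x),            # log transform
                  "sqrt": np.sqrt(x),              # square-root transform
                  "inverse": 1 / x}                # inverse transform (note that it reverses the order of scores)
    for name, t in transforms.items():
        print(f"{name:8s} skew = {stats.skew(t):.2f}")   # compare each skew with the raw skew

    max_score = np.ceil(x.max()) + 1
    reversed_x = max_score - x                     # reverse scoring flips the direction of skew, so negatively
                                                   # skewed data can be reversed and treated as positively skewed
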
Why is normality the least important assumption?
Non-normality does not contribute to bias or inefficiency in regression models. Normality is only important for the calculation of p values for significance testing, and even this is only a consideration when the sample size is very small
What is homogeneity of variance and how is it tested?
The variability of scores for each of the groups needs to be similar. Levene's test uses an F-test to test the null hypothesis that the variance is equal across groups. A p value less than .05 indicates a violation of the assumption
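
A minimal Python sketch of Levene's test (assumes NumPy and SciPy are available; the two groups are simulated for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=8)
    group_a = rng.normal(loc=50, scale=10, size=40)   # hypothetical group with SD 10
    group_b = rng.normal(loc=50, scale=18, size=40)   # hypothetical group with a noticeably larger SD
    stat, p = stats.levene(group_a, group_b)          # tests the null hypothesis that the group variances are equal
    print(f"Levene p = {p:.3f}")                      # p < .05 suggests the homogeneity assumption is violated
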
What is the relationship between sample size and the objective measures that are used to test normality?
Kolmogorov-Smirnov is best used for larger sample sizes whereas Shapiro-Wilk is best used for small sample sizes of 25 and under as it is incredibly sensitive.
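
A minimal Python sketch of both normality tests (assumes NumPy and SciPy are available; the small sample is simulated for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=7)
    sample = rng.normal(loc=50, scale=10, size=20)                    # small sample, so Shapiro-Wilk is preferred

    w, p_sw = stats.shapiro(sample)                                   # Shapiro-Wilk, sensitive with small n
    d, p_ks = stats.kstest(sample, "norm",
                           args=(sample.mean(), sample.std(ddof=1)))  # Kolmogorov-Smirnov against a fitted normal
    print(f"Shapiro-Wilk p = {p_sw:.3f}, K-S p = {p_ks:.3f}")         # p > .05: no significant departure from normality
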
What should a histogram and Q-Q plot look like when data are normal?
A histogram should look like a bell curve and the dots on a Q-Q plot should fall closely along the line
What are some of the problems with transformations?
They are not a magic bullet - they will not fix everything. They make interpreting the results difficult. Some statistical tests are robust against small violations of assumptions, e.g., ANOVA and normality. So you need to decide whether a transformation is necessary
What is a common transformation used to deal with violation of the homogeneity of variance assumption?
Power Transformation: you raise the data to some power (e.g., squared) and it will shrink the data. Once you transform the data, you would re-run Levene's test for equality on the transformed data and check that it is no longer significant
Should you state what you do with Outliers in a Study?
Ethically, you should state how you handle outlier data in any notes and in the write-up of the analysed work
What is an Outlier?
A score very different from the rest of the data. Sometimes the score is an error, and sometimes it is legitimate, but either way it can bias our data
What are the 3 different ways we can bias our analysis?
Parameter Estimates (mean being compromised), Spread (confidence intervals and spread of the numbers), Test statistics and P-values
What is the little circle and number that's plotted in a box-plot?
Outlier - the number tells you the row where the outlier is causing the problem
What is the Central Limit Theorem?
If you take repeated samples from a population, then as the sample size increases, the distribution of the sample parameter estimates (such as the mean) becomes increasingly normal, regardless of the shape of the population
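
A minimal Python sketch of the Central Limit Theorem in action (assumes NumPy and SciPy are available; the skewed population is simulated for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=9)
    population = rng.exponential(scale=1.0, size=200_000)              # a strongly skewed population
    for n in [2, 10, 50]:
        means = rng.choice(population, size=(3_000, n)).mean(axis=1)   # 3,000 sample means at each sample size
        print(f"n = {n:2d}  skew of sample means = {stats.skew(means):.2f}")
    # the skew of the sampling distribution shrinks towards 0 (normal) as n increases
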