b. Validity
ii. Theory Driven
2) Popper's Falsification Approach
3) Combining the Two: Competitive Hypothesis Testing
b. Status of Results
ii. Assessing Relative Importance/Contributions
ii. Dependent Variable
iii. Control Condition
d. Simple Designs
ii. Within-S
iii. Some Advantages/Disadvantages of Each
e. Speaking of Disadvantages & Validity...
ii. Confounding: Alternative Explanations
iii. Inappropriate Auxiliary Assumptions
iv. Experimenter Bias (Double-Blind Procedure)
v. Inappropriate Assumptions for Statistical Analysis
ii. Inferential
b. Inferential Statistics & Reliability
ii. Between Groups (e.g., Experimental & Control)
iii. Issue of Variability
iv. Solution: Given Certain Assumptions, Try to Establish Chance Probability of Result
1) Is Result So Rare as to Be Unlikely?
2) If So, Accept; If Not, Reject Manipulation Being Responsible
3) So, Find a Suitable Definition of Rare: alpha-level
4) Determine Actual Observed Probability: p-level
5) What Are Conventional alpha-levels?
6) Note Implications for a Trade-Off: Type I vs. Type II Error
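The logic of steps 1)–6) can be sketched with a simple permutation test: shuffle the group labels many times to establish the chance probability of the observed result, then compare that p-level to the alpha-level. This is a minimal illustration only; the scores and alpha = .05 are made-up assumptions, not data from the notes.

```python
import random

# Hypothetical scores for an experimental and a control group (assumed data).
experimental = [12, 15, 14, 16, 13, 17]
control = [10, 11, 12, 9, 13, 10]

observed_diff = sum(experimental) / len(experimental) - sum(control) / len(control)

# Establish the chance probability of the result: shuffle group labels
# repeatedly and count how often chance alone yields a difference at
# least as large as the observed one. That proportion is the p-level.
random.seed(1)
pooled = experimental + control
n_exp = len(experimental)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n_exp]) / n_exp - sum(pooled[n_exp:]) / (len(pooled) - n_exp)
    if diff >= observed_diff:
        count += 1

p_level = count / trials
alpha = 0.05  # a conventional alpha-level (definition of "rare")

# If the result is rarer than alpha, accept the manipulation as responsible.
if p_level < alpha:
    print(f"p = {p_level:.4f} < {alpha}: result is rare; accept manipulation as responsible")
else:
    print(f"p = {p_level:.4f} >= {alpha}: result not rare enough; reject that conclusion")
```

Lowering alpha makes Type I errors (false positives) rarer but Type II errors (misses) more likely, which is the trade-off item 6) points at.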
c. More on Statistical Testing
ii. Parametric vs. Non-Parametric Tests
iii. Common Statistics/Tests You'll Come Across
iv. Conservative Testing Bias
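The parametric/non-parametric contrast above can be illustrated by computing two common statistics on the same data: a parametric t statistic, which uses the raw scores and assumes roughly normal, equal-variance populations, and the non-parametric Mann-Whitney U, which uses only the ranks. The reaction-time numbers are invented for the sketch.

```python
from statistics import mean, stdev

# Hypothetical reaction times (ms) for two independent groups (assumed data).
group_a = [310, 295, 330, 305, 320]
group_b = [340, 360, 335, 355, 350]
na, nb = len(group_a), len(group_b)

# Parametric: independent-samples t statistic with a pooled variance
# (stronger assumptions: normality, equal variances).
pooled_var = ((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
t = (mean(group_a) - mean(group_b)) / (pooled_var * (1 / na + 1 / nb)) ** 0.5

# Non-parametric: Mann-Whitney U, computed from ranks alone
# (weaker assumptions about the population distributions; no ties here).
ranks = {v: r for r, v in enumerate(sorted(group_a + group_b), start=1)}
rank_sum_a = sum(ranks[v] for v in group_a)
U_a = rank_sum_a - na * (na + 1) / 2

print(f"t = {t:.2f}, U = {U_a}")
```

Here every score in group_a falls below every score in group_b, so U hits its minimum of 0: the rank-based test detects the separation without assuming anything about the shape of the distributions.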
b. Interactions vs. Additive Effects
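The interaction/additivity distinction can be shown with cell means from a 2x2 design: if effects are purely additive, the effect of one factor is the same at every level of the other, and the interaction contrast is zero. The factors and cell means below are invented for illustration.

```python
# Hypothetical cell means for a 2x2 design: Factor A (drug vs. placebo)
# crossed with Factor B (therapy vs. none). All values are assumed.
means = {
    ("placebo", "none"): 10,
    ("placebo", "therapy"): 14,
    ("drug", "none"): 13,
    ("drug", "therapy"): 25,
}

# Simple effect of the drug at each level of therapy.
drug_effect_no_therapy = means[("drug", "none")] - means[("placebo", "none")]
drug_effect_therapy = means[("drug", "therapy")] - means[("placebo", "therapy")]

# Interaction contrast: how much the drug effect changes across therapy
# conditions. Zero would mean purely additive effects.
interaction = drug_effect_therapy - drug_effect_no_therapy

print(f"drug effect without therapy: {drug_effect_no_therapy}")  # 3
print(f"drug effect with therapy:    {drug_effect_therapy}")     # 11
print(f"interaction contrast:        {interaction}")             # 8 (non-additive)
```

A nonzero contrast (here 8) means the two factors do not simply add: the drug helps far more when combined with therapy.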