Statistics explained : an introductory guide for life scientists / Steve McKillup.

By: McKillup, Steve
Material type: Text
Language: English
Publication details: Cambridge : Cambridge University Press, 2012.
Edition: 2nd ed.
Description: xiv, 403 p. : ill. ; 23 cm
ISBN:
  • 9781107005518 (hbk.)
  • 1107005515 (hbk.)
  • 9780521183284 (pbk.)
  • 0521183286 (pbk.)
Subject(s):
DDC classification:
  • 519.5 23 MCK
Holdings
Item type: Project book
Current library: CUTN Central Library
Collection: Non-fiction
Call number: 519.5 MCK
Status: Available
Barcode: 55508

Previous ed. published as: Statistics explained : an introductory guide for life scientists, 2005.

Includes bibliography (p. 394-395) and index.

Contents:
Coverpage
Halftitle page
Title page
Copyright page
Contents
Preface
1 Introduction
1.1 Why do life scientists need to know about experimental design and statistics?
1.2 What is this book designed to do?
2 Doing science: hypotheses, experiments and disproof
2.1 Introduction
2.2 Basic scientific method
2.3 Making a decision about an hypothesis
2.4 Why can’t an hypothesis or theory ever be proven?
2.5 ‘Negative’ outcomes
2.6 Null and alternate hypotheses
2.7 Conclusion
2.8 Questions
3 Collecting and displaying data
3.1 Introduction
3.2 Variables, experimental units and types of data
3.3 Displaying data
3.4 Displaying ordinal or nominal scale data
3.5 Bivariate data
3.6 Multivariate data
3.7 Summary and conclusion
4 Introductory concepts of experimental design
4.1 Introduction
4.2 Sampling – mensurative experiments
4.3 Manipulative experiments
4.4 Sometimes you can only do an unreplicated experiment
4.5 Realism
4.6 A bit of common sense
4.7 Designing a ‘good’ experiment
4.8 Reporting your results
4.9 Summary and conclusion
4.10 Questions
5 Doing science responsibly and ethically
5.1 Introduction
5.2 Dealing fairly with other people’s work
5.3 Doing the experiment
5.4 Evaluating and reporting results
5.5 Quality control in science
5.6 Questions
6 Probability helps you make a decision about your results
6.1 Introduction
6.2 Statistical tests and significance levels
6.3 What has this got to do with making a decision about your results?
6.4 Making the wrong decision
6.5 Other probability levels
6.6 How are probability values reported?
6.7 All statistical tests do the same basic thing
6.8 A very simple example – the chi-square test for goodness of fit
6.9 What if you get a statistic with a probability of exactly 0.05?
6.10 Statistical significance and biological significance
6.11 Summary and conclusion
6.12 Questions
7 Probability explained
7.1 Introduction
7.2 Probability
7.3 The addition rule
7.4 The multiplication rule for independent events
7.5 Conditional probability
7.6 Applications of conditional probability
8 Using the normal distribution to make statistical decisions
8.1 Introduction
8.2 The normal curve
8.3 Two statistics describe a normal distribution
8.4 Samples and populations
8.5 The distribution of sample means is also normal
8.6 What do you do when you only have data from one sample?
8.7 Use of the 95% confidence interval in significance testing
8.8 Distributions that are not normal
8.9 Other distributions
8.10 Other statistics that describe a distribution
8.11 Summary and conclusion
8.12 Questions
9 Comparing the means of one and two samples of normally distributed data
9.1 Introduction
9.2 The 95% confidence interval and 95% confidence limits
9.3 Using the Z statistic to compare a sample mean and population mean when population statistics are known
9.4 Comparing a sample mean to an expected value when population statistics are not known
9.5 Comparing the means of two related samples
9.6 Comparing the means of two independent samples
9.7 One-tailed and two-tailed tests
9.8 Are your data appropriate for a t test?
9.9 Distinguishing between data that should be analysed by a paired sample test and a test for two independent samples
9.10 Reporting the results of t tests
9.11 Conclusion
9.12 Questions
10 Type 1 error and Type 2 error, power and sample size
10.1 Introduction
10.2 Type 1 error
10.3 Type 2 error
10.4 The power of a test
10.5 What sample size do you need to ensure the risk of Type 2 error is not too high?
10.6 Type 1 error, Type 2 error and the concept of biological risk
10.7 Conclusion
10.8 Questions
11 Single-factor analysis of variance
11.1 Introduction
11.2 The concept behind analysis of variance
11.3 More detail and an arithmetic example
11.4 Unequal sample sizes (unbalanced designs)
11.5 An ANOVA does not tell you which particular treatments appear to be from different populations
11.6 Fixed or random effects
11.7 Reporting the results of a single-factor ANOVA
11.8 Summary
11.9 Questions
12 Multiple comparisons after ANOVA
12.1 Introduction
12.2 Multiple comparison tests after a Model I ANOVA
12.3 An a posteriori Tukey comparison following a significant result for a single-factor Model I ANOVA
12.4 Other a posteriori multiple comparison tests
12.5 Planned comparisons
12.6 Reporting the results of a posteriori comparisons
12.7 Questions
13 Two-factor analysis of variance
13.1 Introduction
13.2 What does a two-factor ANOVA do?
13.3 A pictorial example
13.4 How does a two-factor ANOVA separate out the effects of each factor and interaction?
13.5 An example of a two-factor analysis of variance
13.6 Some essential cautions and important complications
13.7 Unbalanced designs
13.8 More complex designs
13.9 Reporting the results of a two-factor ANOVA
13.10 Questions
14 Important assumptions of analysis of variance, transformations, and a test for equality of variances
14.1 Introduction
14.2 Homogeneity of variances
14.3 Normally distributed data
14.4 Independence
14.5 Transformations
14.6 Are transformations legitimate?
14.7 Tests for heteroscedasticity
14.8 Reporting the results of transformations and the Levene test
14.9 Questions
15 More complex ANOVA
15.1 Introduction
15.2 Two-factor ANOVA without replication
15.3 A posteriori comparison of means after a two-factor ANOVA without replication
15.4 Randomised blocks
15.5 Repeated-measures ANOVA
15.6 Nested ANOVA as a special case of a single-factor ANOVA
15.7 A final comment on ANOVA – this book is only an introduction
15.8 Reporting the results of two-factor ANOVA without replication, randomised blocks design, repeated-measures ANOVA and nested ANOVA
15.9 Questions
16 Relationships between variables: correlation and regression
16.1 Introduction
16.2 Correlation contrasted with regression
16.3 Linear correlation
16.4 Calculation of the Pearson r statistic
16.5 Is the value of r statistically significant?
16.6 Assumptions of linear correlation
16.7 Summary and conclusion
16.8 Questions
17 Regression
17.1 Introduction
17.2 Simple linear regression
17.3 Calculation of the slope of the regression line
17.4 Calculation of the intercept with the Y axis
17.5 Testing the significance of the slope and the intercept
17.6 An example – mites that live in the hair follicles
17.7 Predicting a value of Y from a value of X
17.8 Predicting a value of X from a value of Y
17.9 The danger of extrapolation
17.10 Assumptions of linear regression analysis
17.11 Curvilinear regression
17.12 Multiple linear regression
17.13 Questions
18 Analysis of covariance
18.1 Introduction
18.2 Adjusting data to remove the effect of a confounding factor
18.3 An arithmetic example
18.4 Assumptions of ANCOVA and an extremely important caution about parallelism
18.5 Reporting the results of ANCOVA
18.6 More complex models
18.7 Questions
19 Non-parametric statistics
19.1 Introduction
19.2 The danger of assuming normality when a population is grossly non-normal
19.3 The advantage of making a preliminary inspection of the data
20 Non-parametric tests for nominal scale data
20.1 Introduction
20.2 Comparing observed and expected frequencies: the chi-square test for goodness of fit
20.3 Comparing proportions among two or more independent samples
20.4 Bias when there is one degree of freedom
20.5 Three-dimensional contingency tables
20.6 Inappropriate use of tests for goodness of fit and heterogeneity
20.7 Comparing proportions among two or more related samples of nominal scale data
20.8 Recommended tests for categorical data
20.9 Reporting the results of tests for categorical data
20.10 Questions
21 Non-parametric tests for ratio, interval or ordinal scale data
21.1 Introduction
21.2 A non-parametric comparison between one sample and an expected distribution
21.3 Non-parametric comparisons between two independent samples
21.4 Non-parametric comparisons among three or more independent samples
21.5 Non-parametric comparisons of two related samples
21.6 Non-parametric comparisons among three or more related samples
21.7 Analysing ratio, interval or ordinal data that show gross differences in variance among treatments and cannot be satisfactorily transformed
21.8 Non-parametric correlation analysis
21.9 Other non-parametric tests
21.10 Questions
22 Introductory concepts of multivariate analysis
22.1 Introduction
22.2 Simplifying and summarising multivariate data
22.3 An R-mode analysis: principal components analysis
22.4 Q-mode analyses: multidimensional scaling
22.5 Q-mode analyses: cluster analysis
22.6 Which multivariate analysis should you use?
22.7 Questions
23 Choosing a test
23.1 Introduction
Appendix: Critical values of chi-square, t and F
References
Index

Statistics Explained
An Introductory Guide for Life Scientists
An understanding of statistics and experimental design is essential for life science studies, but many students lack a mathematical background and some even dread taking an introductory statistics course. Using a refreshingly clear and encouraging reader-friendly approach, this book helps students understand how to choose, carry out, interpret and report the results of complex statistical analyses, critically evaluate the design of experiments and proceed to more advanced material. Taking a straightforward conceptual approach, it is specifically designed to foster understanding, demystify difficult concepts and encourage the unsure. Even complex topics are explained clearly, using a pictorial approach with a minimum of formulae and terminology. Examples of tests included throughout are kept simple by using small data sets. In addition, end-of-chapter exercises, new to this edition, allow self-testing. Handy diagnostic tables help students choose the right test for their work and remain a useful refresher tool for postgraduates.
