Statistics explained : (Record no. 45192)

MARC details
000 -LEADER
fixed length control field 11009nam a2200337 a 4500
003 - CONTROL NUMBER IDENTIFIER
control field CUTN
005 - DATE AND TIME OF LATEST TRANSACTION
control field 20250813163944.0
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 110802s2012 enka b 001 0 eng d
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
International Standard Book Number 9781107005518 (hbk.)
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
International Standard Book Number 1107005515 (hbk.)
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
International Standard Book Number 9780521183284 (pbk.)
020 ## - INTERNATIONAL STANDARD BOOK NUMBER
International Standard Book Number 0521183286 (pbk.)
038 ## - RECORD CONTENT LICENSOR
Record content licensor Uk
041 ## - LANGUAGE CODE
Language English
049 ## - LOCAL HOLDINGS (OCLC)
Local processing data b
Missing elements 2
082 04 - DEWEY DECIMAL CLASSIFICATION NUMBER
Classification number 519.5
Edition number 23
Item number MCK
100 1# - MAIN ENTRY--PERSONAL NAME
Personal name McKillup, Steve.
245 10 - TITLE STATEMENT
Title Statistics explained :
Remainder of title an introductory guide for life scientists /
Statement of responsibility, etc Steve McKillup.
250 ## - EDITION STATEMENT
Edition statement 2nd ed.
260 ## - PUBLICATION, DISTRIBUTION, ETC. (IMPRINT)
Place of publication, distribution, etc Cambridge :
Name of publisher, distributor, etc Cambridge University Press,
Date of publication, distribution, etc 2012.
300 ## - PHYSICAL DESCRIPTION
Extent xiv, 403 p. :
Other physical details ill. ;
Dimensions 23 cm.
500 ## - GENERAL NOTE
General note Previous ed. published as: Statistics explained : an introductory guide for life sciences, 2005.
505 ## - FORMATTED CONTENTS NOTE
Title Coverpage -- Halftitle page -- Title page -- Copyright page -- Contents -- Preface
1 Introduction -- 1.1 Why do life scientists need to know about experimental design and statistics? -- 1.2 What is this book designed to do?
2 Doing science: hypotheses, experiments and disproof -- 2.1 Introduction -- 2.2 Basic scientific method -- 2.3 Making a decision about an hypothesis -- 2.4 Why can’t an hypothesis or theory ever be proven? -- 2.5 ‘Negative’ outcomes -- 2.6 Null and alternate hypotheses -- 2.7 Conclusion -- 2.8 Questions
3 Collecting and displaying data -- 3.1 Introduction -- 3.2 Variables, experimental units and types of data -- 3.3 Displaying data -- 3.4 Displaying ordinal or nominal scale data -- 3.5 Bivariate data -- 3.6 Multivariate data -- 3.7 Summary and conclusion
4 Introductory concepts of experimental design -- 4.1 Introduction -- 4.2 Sampling – mensurative experiments -- 4.3 Manipulative experiments -- 4.4 Sometimes you can only do an unreplicated experiment -- 4.5 Realism -- 4.6 A bit of common sense -- 4.7 Designing a ‘good’ experiment -- 4.8 Reporting your results -- 4.9 Summary and conclusion -- 4.10 Questions
5 Doing science responsibly and ethically -- 5.1 Introduction -- 5.2 Dealing fairly with other people’s work -- 5.3 Doing the experiment -- 5.4 Evaluating and reporting results -- 5.5 Quality control in science -- 5.6 Questions
6 Probability helps you make a decision about your results -- 6.1 Introduction -- 6.2 Statistical tests and significance levels -- 6.3 What has this got to do with making a decision about your results? -- 6.4 Making the wrong decision -- 6.5 Other probability levels -- 6.6 How are probability values reported? -- 6.7 All statistical tests do the same basic thing -- 6.8 A very simple example – the chi-square test for goodness of fit -- 6.9 What if you get a statistic with a probability of exactly 0.05? -- 6.10 Statistical significance and biological significance -- 6.11 Summary and conclusion -- 6.12 Questions
7 Probability explained -- 7.1 Introduction -- 7.2 Probability -- 7.3 The addition rule -- 7.4 The multiplication rule for independent events -- 7.5 Conditional probability -- 7.6 Applications of conditional probability
8 Using the normal distribution to make statistical decisions -- 8.1 Introduction -- 8.2 The normal curve -- 8.3 Two statistics describe a normal distribution -- 8.4 Samples and populations -- 8.5 The distribution of sample means is also normal -- 8.6 What do you do when you only have data from one sample? -- 8.7 Use of the 95% confidence interval in significance testing -- 8.8 Distributions that are not normal -- 8.9 Other distributions -- 8.10 Other statistics that describe a distribution -- 8.11 Summary and conclusion -- 8.12 Questions
9 Comparing the means of one and two samples of normally distributed data -- 9.1 Introduction -- 9.2 The 95% confidence interval and 95% confidence limits -- 9.3 Using the Z statistic to compare a sample mean and population mean when population statistics are known -- 9.4 Comparing a sample mean to an expected value when population statistics are not known -- 9.5 Comparing the means of two related samples -- 9.6 Comparing the means of two independent samples -- 9.7 One-tailed and two-tailed tests -- 9.8 Are your data appropriate for a t test? -- 9.9 Distinguishing between data that should be analysed by a paired sample test and a test for two independent samples -- 9.10 Reporting the results of t tests -- 9.11 Conclusion -- 9.12 Questions
10 Type 1 error and Type 2 error, power and sample size -- 10.1 Introduction -- 10.2 Type 1 error -- 10.3 Type 2 error -- 10.4 The power of a test -- 10.5 What sample size do you need to ensure the risk of Type 2 error is not too high? -- 10.6 Type 1 error, Type 2 error and the concept of biological risk -- 10.7 Conclusion -- 10.8 Questions
11 Single-factor analysis of variance -- 11.1 Introduction -- 11.2 The concept behind analysis of variance -- 11.3 More detail and an arithmetic example -- 11.4 Unequal sample sizes (unbalanced designs) -- 11.5 An ANOVA does not tell you which particular treatments appear to be from different populations -- 11.6 Fixed or random effects -- 11.7 Reporting the results of a single-factor ANOVA -- 11.8 Summary -- 11.9 Questions
12 Multiple comparisons after ANOVA -- 12.1 Introduction -- 12.2 Multiple comparison tests after a Model I ANOVA -- 12.3 An a posteriori Tukey comparison following a significant result for a single-factor Model I ANOVA -- 12.4 Other a posteriori multiple comparison tests -- 12.5 Planned comparisons -- 12.6 Reporting the results of a posteriori comparisons -- 12.7 Questions
13 Two-factor analysis of variance -- 13.1 Introduction -- 13.2 What does a two-factor ANOVA do? -- 13.3 A pictorial example -- 13.4 How does a two-factor ANOVA separate out the effects of each factor and interaction? -- 13.5 An example of a two-factor analysis of variance -- 13.6 Some essential cautions and important complications -- 13.7 Unbalanced designs -- 13.8 More complex designs -- 13.9 Reporting the results of a two-factor ANOVA -- 13.10 Questions
14 Important assumptions of analysis of variance, transformations, and a test for equality of variances -- 14.1 Introduction -- 14.2 Homogeneity of variances -- 14.3 Normally distributed data -- 14.4 Independence -- 14.5 Transformations -- 14.6 Are transformations legitimate? -- 14.7 Tests for heteroscedasticity -- 14.8 Reporting the results of transformations and the Levene test -- 14.9 Questions
15 More complex ANOVA -- 15.1 Introduction -- 15.2 Two-factor ANOVA without replication -- 15.3 A posteriori comparison of means after a two-factor ANOVA without replication -- 15.4 Randomised blocks -- 15.5 Repeated-measures ANOVA -- 15.6 Nested ANOVA as a special case of a single-factor ANOVA -- 15.7 A final comment on ANOVA – this book is only an introduction -- 15.8 Reporting the results of two-factor ANOVA without replication, randomised blocks design, repeated-measures ANOVA and nested ANOVA -- 15.9 Questions
16 Relationships between variables: correlation and regression -- 16.1 Introduction -- 16.2 Correlation contrasted with regression -- 16.3 Linear correlation -- 16.4 Calculation of the Pearson r statistic -- 16.5 Is the value of r statistically significant? -- 16.6 Assumptions of linear correlation -- 16.7 Summary and conclusion -- 16.8 Questions
17 Regression -- 17.1 Introduction -- 17.2 Simple linear regression -- 17.3 Calculation of the slope of the regression line -- 17.4 Calculation of the intercept with the Y axis -- 17.5 Testing the significance of the slope and the intercept -- 17.6 An example – mites that live in the hair follicles -- 17.7 Predicting a value of Y from a value of X -- 17.8 Predicting a value of X from a value of Y -- 17.9 The danger of extrapolation -- 17.10 Assumptions of linear regression analysis -- 17.11 Curvilinear regression -- 17.12 Multiple linear regression -- 17.13 Questions
18 Analysis of covariance -- 18.1 Introduction -- 18.2 Adjusting data to remove the effect of a confounding factor -- 18.3 An arithmetic example -- 18.4 Assumptions of ANCOVA and an extremely important caution about parallelism -- 18.5 Reporting the results of ANCOVA -- 18.6 More complex models -- 18.7 Questions
19 Non-parametric statistics -- 19.1 Introduction -- 19.2 The danger of assuming normality when a population is grossly non-normal -- 19.3 The advantage of making a preliminary inspection of the data
20 Non-parametric tests for nominal scale data -- 20.1 Introduction -- 20.2 Comparing observed and expected frequencies: the chi-square test for goodness of fit -- 20.3 Comparing proportions among two or more independent samples -- 20.4 Bias when there is one degree of freedom -- 20.5 Three-dimensional contingency tables -- 20.6 Inappropriate use of tests for goodness of fit and heterogeneity -- 20.7 Comparing proportions among two or more related samples of nominal scale data -- 20.8 Recommended tests for categorical data -- 20.9 Reporting the results of tests for categorical data -- 20.10 Questions
21 Non-parametric tests for ratio, interval or ordinal scale data -- 21.1 Introduction -- 21.2 A non-parametric comparison between one sample and an expected distribution -- 21.3 Non-parametric comparisons between two independent samples -- 21.4 Non-parametric comparisons among three or more independent samples -- 21.5 Non-parametric comparisons of two related samples -- 21.6 Non-parametric comparisons among three or more related samples -- 21.7 Analysing ratio, interval or ordinal data that show gross differences in variance among treatments and cannot be satisfactorily transformed -- 21.8 Non-parametric correlation analysis -- 21.9 Other non-parametric tests -- 21.10 Questions
22 Introductory concepts of multivariate analysis -- 22.1 Introduction -- 22.2 Simplifying and summarising multivariate data -- 22.3 An R-mode analysis: principal components analysis -- 22.4 Q-mode analyses: multidimensional scaling -- 22.5 Q-mode analyses: cluster analysis -- 22.6 Which multivariate analysis should you use? -- 22.7 Questions
23 Choosing a test -- 23.1 Introduction
Appendix: Critical values of chi-square, t and F -- References -- Index
520 ## - SUMMARY, ETC.
Summary, etc Statistics Explained: An Introductory Guide for Life Scientists. An understanding of statistics and experimental design is essential for life science studies, but many students lack a mathematical background and some even dread taking an introductory statistics course. Using a refreshingly clear and encouraging reader-friendly approach, this book helps students understand how to choose, carry out, interpret and report the results of complex statistical analyses, critically evaluate the design of experiments and proceed to more advanced material. Taking a straightforward conceptual approach, it is specifically designed to foster understanding, demystify difficult concepts and encourage the unsure. Even complex topics are explained clearly, using a pictorial approach with a minimum of formulae and terminology. Examples of tests included throughout are kept simple by using small data sets. In addition, end-of-chapter exercises, new to this edition, allow self-testing. Handy diagnostic tables help students choose the right test for their work and remain a useful refresher tool for postgraduates.
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name as entry element Statistics.
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name as entry element Life sciences
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Source of classification or shelving scheme Dewey Decimal Classification
Koha item type Project book
504 ## - BIBLIOGRAPHY, ETC. NOTE
Bibliography, etc Includes bibliographical references (p. 394-395) and index.
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM
General subdivision Statistical methods.
907 ## - LOCAL DATA ELEMENT G, LDG (RLIN)
a .b29294332
Holdings
Source of classification or shelving scheme: Dewey Decimal Classification
Collection code: Non-fiction
Home library: CUTN Central Library
Location: CUTN Central Library
Date of Cataloging: 13/08/2025
Source of acquisition: DBT PG Project dt: 17.06.2025- Biotechnology. TBH SINV00106 dt:25.07.2025
Full call number: 519.5 MCK
Barcode: 55508
Date last seen: 13/08/2025
Price effective from: 13/08/2025
Koha item type: Project book