
Computational modeling of cognition and behavior / Simon Farrell and Stephan Lewandowsky

By: Farrell, Simon
Contributor(s): Lewandowsky, Stephan
Material type: Text
Language: English
Publication details: New York : Cambridge University Press, ©2018.
Edition: 1st
Description: xxii, 461 p. : illustrations ; 26 cm
ISBN:
  • 9781107525610
Subject(s):
DDC classification:
  • 153.0151 FAR (DDC 23)
Contents:
1. Introduction --
1.1. Models and Theories in Science --
1.2. Quantitative Modeling in Cognition --
1.2.1. Models and Data --
1.2.2. Data Description --
1.2.3. Cognitive Process Models --
1.3. Potential Problems: Scope and Falsifiability --
1.4. Modeling as a "Cognitive Aid" for the Scientist --
1.5. In Vivo --
2. From Words to Models --
2.1. Response Times in Speeded-Choice Tasks --
2.2. Building a Simulation --
2.2.1. Getting Started: R and RStudio --
2.2.2. The Random-Walk Model --
2.2.3. Intuition vs. Computation: Exploring the Predictions of a Random Walk --
2.2.4. Trial-to-Trial Variability in the Random-Walk Model --
2.2.5. A Family of Possible Sequential-Sampling Models --
2.3. The Basic Toolkit --
2.3.1. Parameters --
2.3.2. Connecting Model and Data --
2.4. In Vivo --
3. Basic Parameter Estimation Techniques --
3.1. Discrepancy Function --
3.1.1. Root Mean Squared Deviation (RMSD) --
3.1.2. Chi-Squared (χ²) --
3.2. Fitting Models to Data: Parameter Estimation Techniques --
3.3. Least-Squares Estimation in a Familiar Context --
3.3.1. Visualizing Modeling --
3.3.2. Estimating Regression Parameters --
3.4. Inside the Box: Parameter Estimation Techniques --
3.4.1. Simplex --
3.4.2. Simulated Annealing --
3.4.3. Relative Merits of Parameter Estimation Techniques --
3.5. Variability in Parameter Estimates --
3.5.1. Bootstrapping --
3.6. In Vivo --
4. Maximum Likelihood Parameter Estimation --
4.1. Basics of Probabilities --
4.1.1. Defining Probability --
4.1.2. Properties of Probabilities --
4.1.3. Probability Functions --
4.2. What Is a Likelihood? --
4.3. Defining a Probability Distribution --
4.3.1. Probability Functions Specified by the Psychological Model --
4.3.2. Probability Functions via Data Models --
4.3.3. Two Types of Probability Functions --
4.3.4. Extending the Data Model --
4.3.5. Extension to Multiple Data Points and Multiple Parameters --
4.4. Finding the Maximum Likelihood --
4.5. Properties of Maximum Likelihood Estimators --
4.6. In Vivo --
5. Combining Information from Multiple Participants --
5.1. It Matters How You Combine Data from Multiple Units --
5.2. Implications of Averaging --
5.3. Fitting Aggregate Data --
5.4. Fitting Individual Participants --
5.5. Fitting Subgroups of Data and Individual Differences --
5.5.1. Mixture Modeling --
5.5.2. K-Means Clustering --
5.5.3. Modeling Individual Differences --
5.6. In Vivo --
6. Bayesian Parameter Estimation --
6.1. What Is Bayesian Inference? --
6.1.1. From Conditional Probabilities to Bayes Theorem --
6.1.2. Marginalizing Probabilities --
6.2. Analytic Methods for Obtaining Posteriors --
6.2.1. The Likelihood Function --
6.2.2. The Prior Distribution --
6.2.3. The Evidence or Marginal Likelihood --
6.2.4. The Posterior Distribution --
6.2.5. Estimating the Bias of a Coin --
6.2.6. Summary --
6.3. Determining the Prior Distributions of Parameters --
6.3.1. Non-Informative Priors --
6.3.2. Reference Priors --
6.4. In Vivo --
7. Bayesian Parameter Estimation --
7.1. Markov Chain Monte Carlo Methods --
7.1.1. The Metropolis-Hastings Algorithm for MCMC --
7.1.2. Estimating Multiple Parameters --
7.2. Problems Associated with MCMC Sampling --
7.2.1. Convergence of MCMC Chains --
7.2.2. Autocorrelation in MCMC Chains --
7.2.3. Outlook --
7.3. Approximate Bayesian Computation: A Likelihood-Free Method --
7.3.1. Likelihoods That Cannot Be Computed --
7.3.2. From Simulations to Estimates of the Posterior --
7.3.3. An Example: ABC in Action --
7.4. In Vivo --
8. Bayesian Parameter Estimation --
8.1. Gibbs Sampling --
8.1.1. A Bivariate Example of Gibbs Sampling --
8.1.2. Gibbs vs. Metropolis-Hastings Sampling --
8.1.3. Gibbs Sampling of Multivariate Spaces --
8.2. JAGS: An Introduction --
8.2.1. Installing JAGS --
8.2.2. Scripting for JAGS --
8.3. JAGS: Revisiting Some Known Models and Pushing Their Boundaries --
8.3.1. Bayesian Modeling of Signal-Detection Theory --
8.3.2. A Bayesian Approach to Multinomial Tree Models: The High-Threshold Model --
8.3.3. A Bayesian Approach to Multinomial Tree Models --
8.3.4. Summary --
8.4. In Vivo --
9. Multilevel or Hierarchical Modeling --
9.1. Conceptualizing Hierarchical Modeling --
9.2. Bayesian Hierarchical Modeling --
9.2.1. Graphical Models --
9.2.2. Hierarchical Modeling of Signal-Detection Performance --
9.2.3. Hierarchical Modeling of Forgetting --
9.2.4. Hierarchical Modeling of Inter-Temporal Preferences --
9.2.5. Summary --
9.3. Hierarchical Maximum Likelihood Modeling --
9.3.1. Hierarchical Maximum Likelihood Model of Signal Detection --
9.4. Recommendations --
9.5. In Vivo --
10. Model Comparison --
10.1. Psychological Data and the Very Bad Good Fit --
10.1.1. Model Complexity and Over-Fitting --
10.2. Model Comparison --
10.3. The Likelihood Ratio Test --
10.4. Akaike's Information Criterion --
10.5. Other Methods for Calculating Complexity and Comparing Models --
10.5.1. Cross-Validation --
10.5.2. Minimum Description Length --
10.5.3. Normalized Maximum Likelihood --
10.6. Parameter Identifiability and Model Testability --
10.6.1. Identifiability --
10.6.2. Testability --
10.7. Conclusions --
10.8. In Vivo --
11. Bayesian Model Comparison Using Bayes Factors --
11.1. Marginal Likelihoods and Bayes Factors --
11.2. Methods for Obtaining the Marginal Likelihood --
11.2.1. Numerical Integration --
11.2.2. Simple Monte Carlo Integration and Importance Sampling --
11.2.3. The Savage-Dickey Ratio --
11.2.4. Transdimensional Markov Chain Monte Carlo --
11.2.5. Laplace Approximation --
11.2.6. Bayesian Information Criterion --
11.3. Bayes Factors for Hierarchical Models --
11.4. The Importance of Priors --
11.5. Conclusions --
11.6. In Vivo --
12. Using Models in Psychology --
12.1. Broad Overview of the Steps in Modeling --
12.2. Drawing Conclusions from Models --
12.2.1. Model Exploration --
12.2.2. Analyzing the Model --
12.2.3. Learning from Parameter Estimates --
12.2.4. Sufficiency of a Model --
12.2.5. Model Necessity --
12.2.6. Verisimilitude vs. Truth --
12.3. Models as Tools for Communication and Shared Understanding --
12.4. Good Practices to Enhance Understanding and Reproducibility --
12.4.1. Use Plain Text Wherever Possible --
12.4.2. Use Sensible Variable and Function Names --
12.4.3. Use the Debugger --
12.4.4. Commenting --
12.4.5. Version Control --
12.4.6. Sharing Code and Reproducibility --
12.4.7. Notebooks and Other Tools --
12.4.8. Enhancing Reproducibility and Runnability --
12.5. Summary --
12.6. In Vivo --
13. Neural Network Models --
13.1. Hebbian Models --
13.1.1. The Hebbian Associator --
13.1.2. Hebbian Models as Matrix Algebra --
13.1.3. Describing Networks Using Matrix Algebra --
13.1.4. The Auto-Associator --
13.1.5. Limitations of Hebbian Models --
13.2. Backpropagation --
13.2.1. Learning and the Backpropagation of Error --
13.2.2. Applications and Criticisms of Backpropagation in Psychology --
13.3. Final Comments on Neural Networks --
13.4. In Vivo --
14. Models of Choice Response Time --
14.1. Ratcliff's Diffusion Model --
14.1.1. Fitting the Diffusion Model --
14.1.2. Interpreting the Diffusion Model --
14.1.3. Falsifiability of the Diffusion Model --
14.2. Ballistic Accumulator Models --
14.2.1. Linear Ballistic Accumulator --
14.2.2. Fitting the LBA --
14.3. Summary --
14.4. Current Issues and Outlook --
14.5. In Vivo --
15. Models in Neuroscience --
15.1. Methods for Relating Neural and Behavioral Data --
15.2. Reinforcement Learning Models --
15.2.1. Theories of Reinforcement Learning --
15.2.2. Neuroscience of Reinforcement Learning --
15.3. Neural Correlates of Decision-Making --
15.3.1. Rise-to-Threshold Models of Saccadic Decision-Making --
15.3.2. Relating Model Parameters to the BOLD Response --
15.3.3. Accounting for Response Time Variability --
15.3.4. Using Spike Trains as Model Input --
15.3.5. Jointly Fitting Behavioral and Neural Data --
15.4. Conclusions --
15.5. In Vivo.
Summary: Computational modeling is now ubiquitous in psychology, and researchers who are not modelers may find it increasingly difficult to follow the theoretical developments in their field. This book presents an integrated framework for the development and application of models in psychology and related disciplines. Researchers and students are given the knowledge and tools to interpret models published in their area, as well as to develop, fit, and test their own models. Both the development of models and key features of any model are covered, as are the applications of models in a variety of domains across the behavioural sciences. A number of chapters are devoted to fitting models using maximum likelihood and Bayesian estimation, including fitting hierarchical and mixture models. Model comparison is described as a core philosophy of scientific inference, and the use of models to understand theories and advance scientific discourse is explained.
Holdings
Item type: General Books
Current library: CUTN Central Library
Collection: Philosophy & psychology Non-fiction
Call number: 153.0151 FAR
Status: Available
Barcode: 44071

