Mathematical Statistics with Applications, International Edition
By Dennis D. Wackerly, William Mendenhall, and Richard L. Scheaffer


Table of Contents

1. What Is Statistics?
Introduction. Characterizing a Set of Measurements: Graphical Methods. Characterizing a Set of Measurements: Numerical Methods. How Inferences Are Made. Theory and Reality. Summary.
2. Probability.
Introduction. Probability and Inference. A Review of Set Notation. A Probabilistic Model for an Experiment: The Discrete Case. Calculating the Probability of an Event: The Sample-Point Method. Tools for Counting Sample Points. Conditional Probability and the Independence of Events. Two Laws of Probability. Calculating the Probability of an Event: The Event-Composition Method. The Law of Total Probability and Bayes's Rule. Numerical Events and Random Variables. Random Sampling. Summary.
3. Discrete Random Variables and Their Probability Distributions.
Basic Definitions. The Probability Distribution for a Discrete Random Variable. The Expected Value of a Random Variable or a Function of a Random Variable. The Binomial Probability Distribution. The Geometric Probability Distribution. The Negative Binomial Probability Distribution (Optional). The Hypergeometric Probability Distribution. Moments and Moment-Generating Functions. Probability-Generating Functions (Optional). Tchebysheff's Theorem. Summary.
4. Continuous Random Variables and Their Probability Distributions.
Introduction. The Probability Distribution for a Continuous Random Variable. The Expected Value for a Continuous Random Variable. The Uniform Probability Distribution. The Normal Probability Distribution. The Gamma Probability Distribution. The Beta Probability Distribution. Some General Comments. Other Expected Values. Tchebysheff's Theorem. Expectations of Discontinuous Functions and Mixed Probability Distributions (Optional). Summary.
5. Multivariate Probability Distributions.
Introduction. Bivariate and Multivariate Probability Distributions. Independent Random Variables. The Expected Value of a Function of Random Variables. Special Theorems. The Covariance of Two Random Variables. The Expected Value and Variance of Linear Functions of Random Variables. The Multinomial Probability Distribution. The Bivariate Normal Distribution (Optional). Conditional Expectations. Summary.
6. Functions of Random Variables.
Introduction. Finding the Probability Distribution of a Function of Random Variables. The Method of Distribution Functions. The Method of Transformations. Multivariable Transformations Using Jacobians. Order Statistics. Summary.
7. Sampling Distributions and the Central Limit Theorem.
Introduction. Sampling Distributions Related to the Normal Distribution. The Central Limit Theorem. A Proof of the Central Limit Theorem (Optional). The Normal Approximation to the Binomial Distribution. Summary.
8. Estimation.
Introduction. The Bias and Mean Square Error of Point Estimators. Some Common Unbiased Point Estimators. Evaluating the Goodness of a Point Estimator. Confidence Intervals. Large-Sample Confidence Intervals. Selecting the Sample Size. Small-Sample Confidence Intervals for μ and μ1 − μ2. Confidence Intervals for σ². Summary.
9. Properties of Point Estimators and Methods of Estimation.
Introduction. Relative Efficiency. Consistency. Sufficiency. The Rao-Blackwell Theorem and Minimum-Variance Unbiased Estimation. The Method of Moments. The Method of Maximum Likelihood. Some Large-Sample Properties of MLEs (Optional). Summary.
10. Hypothesis Testing.
Introduction. Elements of a Statistical Test. Common Large-Sample Tests. Calculating Type II Error Probabilities and Finding the Sample Size for the Z Test. Relationships Between Hypothesis Testing Procedures and Confidence Intervals. Another Way to Report the Results of a Statistical Test: Attained Significance Levels or p-Values. Some Comments on the Theory of Hypothesis Testing. Small-Sample Hypothesis Testing for μ and μ1 − μ2. Testing Hypotheses Concerning Variances. Power of Tests and the Neyman-Pearson Lemma. Likelihood Ratio Tests. Summary.
11. Linear Models and Estimation by Least Squares.
Introduction. Linear Statistical Models. The Method of Least Squares. Properties of the Least Squares Estimators for the Simple Linear Regression Model. Inferences Concerning the Parameters βi. Inferences Concerning Linear Functions of the Model Parameters: Simple Linear Regression. Predicting a Particular Value of Y Using Simple Linear Regression. Correlation. Some Practical Examples. Fitting the Linear Model by Using Matrices. Properties of the Least Squares Estimators for the Multiple Linear Regression Model. Inferences Concerning Linear Functions of the Model Parameters: Multiple Linear Regression. Predicting a Particular Value of Y Using Multiple Regression. A Test for H0: βg+1 = βg+2 = ··· = βk = 0. Summary and Concluding Remarks.
12. Considerations in Designing Experiments.
The Elements Affecting the Information in a Sample. Designing Experiments to Increase Accuracy. The Matched Pairs Experiment. Some Elementary Experimental Designs. Summary.
13. The Analysis of Variance.
Introduction. The Analysis of Variance Procedure. Comparison of More than Two Means: Analysis of Variance for a One-Way Layout. An Analysis of Variance Table for a One-Way Layout. A Statistical Model for the One-Way Layout. Proof of Additivity of the Sums of Squares and E(MST) for a One-Way Layout (Optional). Estimation in the One-Way Layout. A Statistical Model for the Randomized Block Design. The Analysis of Variance for a Randomized Block Design. Estimation in the Randomized Block Design. Selecting the Sample Size. Simultaneous Confidence Intervals for More than One Parameter. Analysis of Variance Using Linear Models. Summary.
14. Analysis of Categorical Data.
A Description of the Experiment. The Chi-Square Test. A Test of Hypothesis Concerning Specified Cell Probabilities: A Goodness-of-Fit Test. Contingency Tables. r x c Tables with Fixed Row or Column Totals. Other Applications. Summary and Concluding Remarks.
15. Nonparametric Statistics.
Introduction. A General Two-Sample Shift Model. A Sign Test for a Matched Pairs Experiment. The Wilcoxon Signed-Rank Test for a Matched Pairs Experiment. The Use of Ranks for Comparing Two Population Distributions: Independent Random Samples. The Mann-Whitney U Test: Independent Random Samples. The Kruskal-Wallis Test for the One-Way Layout. The Friedman Test for Randomized Block Designs. The Runs Test: A Test for Randomness. Rank Correlation Coefficient. Some General Comments on Nonparametric Statistical Tests.
16. Introduction to Bayesian Methods for Inference.
Introduction. Bayesian Priors, Posteriors and Estimators. Bayesian Credible Intervals. Bayesian Tests of Hypotheses. Summary and Additional Comments.
Appendix 1. Matrices and Other Useful Mathematical Results. Matrices and Matrix Algebra. Addition of Matrices. Multiplication of a Matrix by a Real Number. Matrix Multiplication. Identity Elements. The Inverse of a Matrix. The Transpose of a Matrix. A Matrix Expression for a System of Simultaneous Linear Equations. Inverting a Matrix. Solving a System of Simultaneous Linear Equations. Other Useful Mathematical Results.
Appendix 2. Common Probability Distributions, Means, Variances, and Moment-Generating Functions. Discrete Distributions. Continuous Distributions.
Appendix 3. Tables. Binomial Probabilities. Table of e^-x. Poisson Probabilities. Normal Curve Areas. Percentage Points of the t Distributions. Percentage Points of the F Distributions. Distribution Function of U. Critical Values of T in the Wilcoxon Matched-Pairs, Signed-Ranks Test. Distribution of the Total Number of Runs R in Samples of Size (n1, n2); P(R ≤ a). Critical Values of Spearman's Rank Correlation Coefficient. Random Numbers. Answers to Exercises. Index.

About the Author

The late Dr. Mendenhall served in the Navy in the Korean War and obtained a Ph.D. in Statistics at North Carolina State University. After receiving his Ph.D., he was a professor in the Mathematics Department at Bucknell University in Pennsylvania before moving to Gainesville in 1963, where he became the first chairman of the Department of Statistics at the University of Florida. Dr. Mendenhall published articles in some of the top statistics journals, such as Biometrika and Technometrics; however, he is more widely known for his prolific textbook career. He authored or co-authored approximately 13 statistics textbooks and several books about his childhood.

Richard L. Scheaffer, Professor Emeritus of Statistics at the University of Florida, received his Ph.D. in statistics from Florida State University. Alongside a career of teaching, research, and administration, Dr. Scheaffer has led efforts to improve statistics education throughout the school and college curriculum. Co-author of five textbooks, he was one of the developers of the Quantitative Literacy Project that formed the basis of the data analysis strand in the curriculum standards of the National Council of Teachers of Mathematics. He also led the task force that developed the AP Statistics Program, for which he served as Chief Faculty Consultant. Dr. Scheaffer is a Fellow and past president of the American Statistical Association, a past chair of the Conference Board of the Mathematical Sciences, and an advisor on numerous statistics education projects.


Look for similar items by category
Home » Books » Science » Mathematics » Statistics » General
Home » Books » Science » Mathematics » Applied