This eBook includes the following formats, accessible from your Account page after purchase:
EPUB: The open industry format known for its reflowable content and usability on supported mobile devices.
PDF: The popular standard, used most often with the free Acrobat® Reader® software.
This eBook requires no passwords or activation to read. We customize your eBook by discreetly watermarking it with your name, making it uniquely yours.
The Complete Beginner’s Guide to Understanding and Building Machine Learning Systems with Python
Machine Learning with Python for Everyone will help you master the processes, patterns, and strategies you need to build effective learning systems, even if you’re an absolute beginner. If you can write some Python code, this book is for you, no matter how little college-level math you know. Principal instructor Mark E. Fenner relies on plain-English stories, pictures, and Python examples to communicate the ideas of machine learning.
Mark begins by discussing machine learning and what it can do; introducing key mathematical and computational topics in an approachable manner; and walking you through the first steps in building, training, and evaluating learning systems. Step by step, you’ll fill out the components of a practical learning system, broaden your toolbox, and explore some of the field’s most sophisticated and exciting techniques. Whether you’re a student, analyst, scientist, or hobbyist, this guide’s insights will be applicable to every learning system you ever build or use.
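To give a concrete sense of what "building, training, and evaluating" a learning system looks like in Python, here is a minimal sketch using scikit-learn; the iris dataset and the k-nearest-neighbors classifier are illustrative choices for this page, not a transcript of the book's own code.

# Build, train, and evaluate a simple classifier with scikit-learn.
# The dataset (iris) and model (k-NN) are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

iris = load_iris()

# Hold out part of the data for testing: don't teach to the test.
train_ftrs, test_ftrs, train_tgt, test_tgt = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=42)

# Build and train a nearest-neighbors classifier on the training split.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(train_ftrs, train_tgt)

# Evaluate: grade the predictions on the held-out test data.
preds = knn.predict(test_ftrs)
print("Test accuracy:", accuracy_score(test_tgt, preds))

This train-then-test pattern is the workflow the book starts from and then refines with richer evaluation tools in Parts II and III.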
Table of Contents

Foreword xxi
Preface xxiii
About the Author xxvii
Part I: First Steps 1
Chapter 1: Let’s Discuss Learning 3
1.1 Welcome 3
1.2 Scope, Terminology, Prediction, and Data 4
1.3 Putting the Machine in Machine Learning 7
1.4 Examples of Learning Systems 9
1.5 Evaluating Learning Systems 11
1.6 A Process for Building Learning Systems 13
1.7 Assumptions and Reality of Learning 15
1.8 End-of-Chapter Material 17
Chapter 2: Some Technical Background 19
2.1 About Our Setup 19
2.2 The Need for Mathematical Language 19
2.3 Our Software for Tackling Machine Learning 20
2.4 Probability 21
2.5 Linear Combinations, Weighted Sums, and Dot Products 28
2.6 A Geometric View: Points in Space 34
2.7 Notation and the Plus-One Trick 43
2.8 Getting Groovy, Breaking the Straight-Jacket, and Nonlinearity 45
2.9 NumPy versus “All the Maths” 47
2.10 Floating-Point Issues 52
2.11 EOC 53
Chapter 3: Predicting Categories: Getting Started with Classification 55
3.1 Classification Tasks 55
3.2 A Simple Classification Dataset 56
3.3 Training and Testing: Don’t Teach to the Test 59
3.4 Evaluation: Grading the Exam 62
3.5 Simple Classifier #1: Nearest Neighbors, Long Distance Relationships, and Assumptions 63
3.6 Simple Classifier #2: Naive Bayes, Probability, and Broken Promises 68
3.7 Simplistic Evaluation of Classifiers 70
3.8 EOC 81
Chapter 4: Predicting Numerical Values: Getting Started with Regression 85
4.1 A Simple Regression Dataset 85
4.2 Nearest-Neighbors Regression and Summary Statistics 87
4.3 Linear Regression and Errors 91
4.4 Optimization: Picking the Best Answer 98
4.5 Simple Evaluation and Comparison of Regressors 101
4.6 EOC 104
Part II: Evaluation 107
Chapter 5: Evaluating and Comparing Learners 109
5.1 Evaluation and Why Less Is More 109
5.2 Terminology for Learning Phases 110
5.3 Major Tom, There’s Something Wrong: Overfitting and Underfitting 116
5.4 From Errors to Costs 125
5.5 (Re)Sampling: Making More from Less 128
5.6 Break-It-Down: Deconstructing Error into Bias and Variance 142
5.7 Graphical Evaluation and Comparison 149
5.8 Comparing Learners with Cross-Validation 154
5.9 EOC 155
Chapter 6: Evaluating Classifiers 159
6.1 Baseline Classifiers 159
6.2 Beyond Accuracy: Metrics for Classification 161
6.3 ROC Curves 170
6.4 Another Take on Multiclass: One-versus-One 181
6.5 Precision-Recall Curves 185
6.6 Cumulative Response and Lift Curves 187
6.7 More Sophisticated Evaluation of Classifiers: Take Two 190
6.8 EOC 201
Chapter 7: Evaluating Regressors 205
7.1 Baseline Regressors 205
7.2 Additional Measures for Regression 207
7.3 Residual Plots 214
7.4 A First Look at Standardization 221
7.5 Evaluating Regressors in a More Sophisticated Way: Take Two 225
7.6 EOC 232
Part III: More Methods and Fundamentals 235
Chapter 8: More Classification Methods 237
8.1 Revisiting Classification 237
8.2 Decision Trees 239
8.3 Support Vector Classifiers 249
8.4 Logistic Regression 259
8.5 Discriminant Analysis 269
8.6 Assumptions, Biases, and Classifiers 285
8.7 Comparison of Classifiers: Take Three 287
8.8 EOC 290
Chapter 9: More Regression Methods 295
9.1 Linear Regression in the Penalty Box: Regularization 295
9.2 Support Vector Regression 301
9.3 Piecewise Constant Regression 308
9.4 Regression Trees 313
9.5 Comparison of Regressors: Take Three 314
9.6 EOC 318
Chapter 10: Manual Feature Engineering: Manipulating Data for Fun and Profit 321
10.1 Feature Engineering Terminology and Motivation 321
10.2 Feature Selection and Data Reduction: Taking out the Trash 324
10.3 Feature Scaling 325
10.4 Discretization 329
10.5 Categorical Coding 332
10.6 Relationships and Interactions 341
10.7 Target Manipulations 350
10.8 EOC 356
Chapter 11: Tuning Hyperparameters and Pipelines 359
11.1 Models, Parameters, Hyperparameters 360
11.2 Tuning Hyperparameters 362
11.3 Down the Recursive Rabbit Hole: Nested Cross-Validation 370
11.4 Pipelines 377
11.5 Pipelines and Tuning Together 380
11.6 EOC 382
Part IV: Adding Complexity 385
Chapter 12: Combining Learners 387
12.1 Ensembles 387
12.2 Voting Ensembles 389
12.3 Bagging and Random Forests 390
12.4 Boosting 398
12.5 Comparing the Tree-Ensemble Methods 401
12.6 EOC 405
Chapter 13: Models That Engineer Features for Us 409
13.1 Feature Selection 411
13.2 Feature Construction with Kernels 428
13.3 Principal Components Analysis: An Unsupervised Technique 445
13.4 EOC 462
Chapter 14: Feature Engineering for Domains: Domain-Specific Learning 469
14.1 Working with Text 470
14.2 Clustering 479
14.3 Working with Images 481
14.4 EOC 493
Chapter 15: Connections, Extensions, and Further Directions 497
15.1 Optimization 497
15.2 Linear Regression from Raw Materials 500
15.3 Building Logistic Regression from Raw Materials 504
15.4 SVM from Raw Materials 510
15.5 Neural Networks 512
15.6 Probabilistic Graphical Models 516
15.7 EOC 525
Appendix A: mlwpy.py Listing 529
Index 537