- 3.1 Classification Tasks
- 3.2 A Simple Classification Dataset
- 3.3 Training and Testing: Don't Teach to the Test
- 3.4 Evaluation: Grading the Exam
- 3.5 Simple Classifier #1: Nearest Neighbors, Long Distance Relationships, and Assumptions
- 3.6 Simple Classifier #2: Naive Bayes, Probability, and Broken Promises
- 3.7 Simplistic Evaluation of Classifiers
- 3.8 EOC
3.8 EOC
3.8.1 Sophomore Warning: Limitations and Open Issues
There are several caveats to what we’ve done in this chapter:
- We compared these learners on a single dataset.
- We used a very simple dataset.
- We did no preprocessing on the dataset.
- We used a single train-test split.
- We used accuracy to evaluate the performance.
- We didn’t try different numbers of neighbors.
- We only compared two simple models.
Each one of these caveats is great! It means we have more to talk about in the forthcoming chapters. In fact, discussing why these are concerns and figuring out how to address them is the point of this book. Some of these issues have no fixed answer. For example, no one learner is best on all datasets. So, to find a good learner for a particular problem, we often try several different learners and pick the one that does the best on that particular problem. If that sounds like teaching-to-the-test, you’re right! We have to be very careful in how we select the model we use from many potential models. Some of these issues, like our use of accuracy, will spawn a long discussion of how we quantify and visualize the performance of classifiers.
3.8.2 Summary
Wrapping up our discussion, we’ve seen several things in this chapter:
- iris, a simple real-world dataset
- Nearest-neighbors and Naive Bayes classifiers
- The concept of training and testing data
- Measuring learning performance with accuracy
- Measuring time and space usage within a Jupyter notebook and via stand-alone scripts
3.8.3 Notes
If you happen to be a botanist or are otherwise curious, you can read Anderson’s original paper on irises: www.jstor.org/stable/2394164. The version of the iris data that ships with sklearn comes from the UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/iris.
The Minkowski distance isn’t really as scary as it seems. There’s another distance called the Manhattan distance. It is the distance it would take to walk as directly as possible from one point to the other, if we were on a fixed grid of streets like in Manhattan. It simply adds up the absolute values of the feature differences without squares or square roots. All Minkowski does is extend the formulas so we can pick Manhattan, Euclidean, or other distances by varying a value p. The weirdness comes in when we make p very, very big: p → ∞. Of course, that has its own name: the Chebyshev distance.
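To make that concrete, here is a minimal sketch in plain NumPy, using two made-up points, of how varying p slides us between Manhattan, Euclidean, and, in the limit, Chebyshev distances:

```python
import numpy as np

def minkowski(a, b, p):
    # sum the absolute feature differences raised to p, then take the p-th root
    return np.sum(np.abs(a - b) ** p) ** (1 / p)

x = np.array([0.0, 0.0])
y = np.array([3.0, 4.0])

print(minkowski(x, y, p=1))    # Manhattan: 3 + 4 = 7
print(minkowski(x, y, p=2))    # Euclidean: sqrt(9 + 16) = 5
print(minkowski(x, y, p=10))   # ~4.02, creeping toward the Chebyshev value
print(np.max(np.abs(x - y)))   # Chebyshev (p -> infinity): max(3, 4) = 4
```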
If you’ve seen theoretical resource analysis of algorithms before, you might remember the terms complexity analysis or Big-O notation. Big-O analysis gives simplified statements about the upper bounds of resource use, as input size grows, using mathematical expressions like O(n²)—hence the name Big-O.
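As a toy illustration, the made-up function below does a doubly nested loop over n items, so its work grows like n times n, which is exactly what O(n²) summarizes:

```python
def count_pairs(items):
    # the outer loop runs n times; the inner loop runs n times for each of
    # those, so the total work grows like n * n
    count = 0
    for _ in items:
        for _ in items:
            count += 1
    return count

for n in [10, 100, 1000]:
    print(n, count_pairs(range(n)))   # 100, 10000, 1000000
```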
I briefly mentioned graphics processing units (GPUs). When you look at the mathematics of computer graphics, like the visuals in modern video games, it is all about describing points in space. And when we play with data, we often talk about examples as points in space. The “natural” mathematical language to describe this is matrix algebra. GPUs are designed to perform matrix algebra at warp speed. So, it turns out that machine learning algorithms can be run very, very efficiently on GPUs. Modern projects like Theano, TensorFlow, and Keras are designed to take advantage of GPUs for learning tasks, often using a type of learning model called a neural network. We’ll briefly introduce these in Chapter 15.
In this chapter, we used Naive Bayes on discrete data. Therefore, learning involved making a table of how often values occurred for the different target classes. When we have continuous numerical values, the game is a bit different. In that case, learning means figuring out the center and spread of a distribution of values. Often, we assume that a normal distribution works well with the data; the process is then called Gaussian Naive Bayes—Gaussian and normal are essentially synonyms. Note that we are making an assumption—it might work well but we might also be wrong. We’ll talk more about GNB in Section 8.5.
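As a preview, here is a minimal sketch of Gaussian Naive Bayes on the iris data, following the same train-test-evaluate pattern we used in this chapter; the 25% test split is just an illustrative choice:

```python
from sklearn import datasets, metrics, model_selection, naive_bayes

iris = datasets.load_iris()
(train_ftrs, test_ftrs,
 train_tgt,  test_tgt) = model_selection.train_test_split(iris.data,
                                                          iris.target,
                                                          test_size=.25)

# GaussianNB estimates a center and spread per feature, per class,
# under the (possibly wrong!) assumption of normally distributed features
gnb = naive_bayes.GaussianNB()
preds = gnb.fit(train_ftrs, train_tgt).predict(test_ftrs)
print(metrics.accuracy_score(test_tgt, preds))
```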
In any chapter that discusses performance, I would be remiss if I didn’t tell you that “premature optimization is the root of all evil . . . in programming.” This quote is from an essay form of Donald Knuth’s 1974 Turing Award—the Nobel Prize of Computer Science—acceptance speech. Knuth is, needless to say, a giant in the discipline. There are two points that underlie his quote. Point one: in a computer system, the majority of the execution time is usually tied up in a small part of the code. This observation is a form of the Pareto principle or the 80–20 rule. Point two: optimizing code is hard, error-prone, and makes the code more difficult to understand, maintain, and adapt. Putting these two points together tells us that we can waste an awful lot of programmer time optimizing code that isn’t contributing to the overall performance of our system. So, what’s the better way? (1) Write a good, solid, working system and then measure its performance. (2) Find the bottlenecks—the slow and/or calculation-intensive portions of the program. (3) Optimize those bottlenecks. We only do the work that we know needs to be done and has a chance at meeting our goals. We also do as little of this intense work as possible. One note: inner loops—the innermost nestings of repetition—are often the most fruitful targets for optimization because they are, by definition, code that is repeated the most times.
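In Python terms, “measure first” might look like the following sketch using the standard library’s cProfile; slow_part and fast_part are hypothetical stand-ins for pieces of a real program:

```python
import cProfile
import pstats

def slow_part():
    # a deliberately calculation-heavy placeholder
    return sum(i * i for i in range(100_000))

def fast_part():
    return 42

def main():
    for _ in range(50):
        slow_part()
    fast_part()

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# sort by cumulative time to see where the program actually spends its time;
# only then is it worth optimizing anything
pstats.Stats(profiler).sort_stats('cumulative').print_stats(5)
```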
Recent versions of Jupyter report a mean and standard deviation for %timeit results. However, the Python core developers and documenters prefer a different strategy for analyzing timeit results: they prefer either (1) taking the minimum of several repeated runs to give an idea of best-case performance, which will be more consistent for comparison’s sake, or (2) looking at all of the results as a whole, without summary. I think that (2) is always a good idea in data analysis. The mean and standard deviation are not robust; they respond poorly to outliers. Also, while the mean and standard deviation completely characterize normally distributed data, other distributions will be characterized in very different ways; see Chebyshev’s inequality for details. I would be far happier if Jupyter reported medians and interquartile ranges (the 50th percentile and the difference between the 75th and 25th percentiles). These are robust to outliers and are not based on distributional assumptions about the data.
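Here is a small sketch of why I prefer the robust summaries; the timing values are made up, with one deliberate outlier:

```python
import numpy as np

# hypothetical per-run timings in seconds; the fifth value is an outlier
times = np.array([0.0102, 0.0099, 0.0101, 0.0098, 0.0350, 0.0100, 0.0103])

print("mean  :", times.mean(), "   std:", times.std())
print("median:", np.percentile(times, 50),
      "   IQR:", np.percentile(times, 75) - np.percentile(times, 25))
# the outlier drags the mean up and inflates the standard deviation,
# but the median and IQR barely move
```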
What was up with the 1000 loops in the timeit results? Essentially, we are stacking multiple runs of the same, potentially short-lived, task one after the other so we get a longer-running pseudo-task. This longer-running task plays more nicely with the level of detail that the timing functions of the operating system support. Imagine measuring a 100-yard dash using a sundial. It’s going to be very hard because there’s a mismatch between the time scales. As we repeat the task multiple times—our poor sprinters might get worn out but, fortunately, Python keeps chugging along—we may get more meaningful measurements. Without specifying a number, timeit will attempt to find a good number for you. In turn, this may take a while because it will try increasing values for number. There’s also a repeat value you can use with timeit; repeat is an outer loop around the whole process. That’s what we discussed computing statistics on in the prior paragraph.
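From a stand-alone script, those same knobs look roughly like this; the statement being timed is just a toy:

```python
import timeit

# number stacks 1000 back-to-back runs into one longer timed task;
# repeat wraps that whole process 7 times, giving 7 raw measurements
raw = timeit.repeat("sum(range(1000))", number=1000, repeat=7)

print(raw)              # one total time, in seconds, per repeat
print(min(raw) / 1000)  # best-case per-run time, the docs' preferred summary
```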
3.8.4 Exercises
You might be interested in trying some classification problems on your own. You can follow the model of the sample code in this chapter with some other classification datasets from sklearn: datasets.load_wine and datasets.load_breast_cancer will get you started (a short sketch using load_wine follows the list below). You can also download numerous datasets from online resources like:
- The UCI Machine Learning Repository, https://archive.ics.uci.edu/ml/datasets.html
- Kaggle, www.kaggle.com/datasets
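As a starting point, here is one way to apply the chapter’s train-test-evaluate pattern to the wine data; the particular models and split size are just one reasonable choice:

```python
from sklearn import datasets, metrics, model_selection, naive_bayes, neighbors

wine = datasets.load_wine()
(train_ftrs, test_ftrs,
 train_tgt,  test_tgt) = model_selection.train_test_split(wine.data,
                                                          wine.target,
                                                          test_size=.25)

models = {'3-NN': neighbors.KNeighborsClassifier(n_neighbors=3),
          'GNB' : naive_bayes.GaussianNB()}

# fit each model on the training data and score it on the held-out test data
for name, model in models.items():
    preds = model.fit(train_ftrs, train_tgt).predict(test_ftrs)
    print(name, metrics.accuracy_score(test_tgt, preds))
```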