Speaker 1: Jeremy Howard
Speaker 2: John

0:00 | Lesson 6: Practical Deep Learning for Coders 2022
0:54 | Tabular Data: Titanic Dataset
1:42 | Creating a Machine Learning Algorithm from Scratch
2:11 | 1R Rule
2:23 | 2R Rule
3:55 | Decision Tree
4:44 | Decision Tree Classifier
7:09 | Gini
8:35 | Accuracy Score
9:33 | Minimum Samples per Leaf Node
10:55 | Kaggle Competition
11:43 | CSV File
12:22 | Preprocessing
13:15 | Dummy Variables
15:56 | Growing the Tree Further
16:34 | Limitations of Decision Trees
16:43 | Bagging
19:26 | Random Forest
21:25 | Random Forest Classifier
22:44 | Random Forest vs. Decision Tree
23:03 | Feature Importance Plot
25:21 | Feature Importance Plots for Tabular Data
26:45 | Chapter 9: Auction Prices of Heavy Industrial Equipment
27:30 | Number of Estimators
28:11 | Increasing the Number of Trees
29:50 | Out-of-Bag Error (OOB Error)
30:45 | Bagging vs. Deep Learning
31:51 | Bagging Other Models
32:38 | Random Forest Insights
34:29 | Prediction Confidence
35:21 | Feature Importance
35:56 | Redundant Features
36:02 | Partial Dependence Plot
39:30 | Describing Why a Prediction Was Made
41:50 | Excluding a Tree from a Forest
42:56 | Ensembles of Bagged Models
43:55 | Explainability Techniques
46:09 | Chapter 9: Auction Prices of Heavy Industrial Equipment
46:30 | Overfitting a Random Forest
47:27 | Adding Randomly Generated Columns
48:26 | Interactions
48:54 | Gradient Boosting
50:37 | Boosting vs. Bagging
51:09 | Gradient Boosting Machine (GBM)
51:57 | Kaggle Notebook on Random Forests
52:45 | Kaggle Competition: Paddy Disease Classification
54:46 | fastkaggle Module
56:21 | Kaggle Competitions: Testing Models
57:51 | Structuring Code and Analysis
58:45 | Validation Set
59:13 | Iterating Quickly
1:00:13 | Doing Everything Reasonably Well
1:00:54 | Setting a Random Seed
1:01:47 | Data Exploration
1:02:25 | Pillow Image
1:03:05 | Decoding a JPEG
1:03:26 | Parallel Submodule
1:04:02 | Preprocessing Images
1:04:18 | Resizing Images
1:05:18 | Data Augmentation
1:05:52 | Model Building
1:06:09 | Iterating Quickly
1:06:19 | Best Vision Models for Fine-Tuning
1:07:43 | Picking a Model
1:08:10 | ResNet26d
1:08:53 | lr_find
1:09:31 | Learning Rate
1:10:06 | Fine-Tuning
1:10:26 | Submitting to Kaggle
1:10:30 | Creating a Submission
1:11:10 | Test Data Loader
1:11:48 | Getting Predictions
1:12:07 | Decoded Predictions
1:12:31 | Mapping Numbers to Strings
1:13:25 | Pandas map
1:13:49 | Creating a CSV
1:14:03 | Iterating Rapidly
1:14:27 | Submitting to Kaggle
1:15:16 | Sharing Notebooks
1:15:25 | push_notebook
1:15:57 | Public Notebooks on Kaggle
1:17:05 | Iterative Approach
1:17:28 | Notebook Strategy
1:20:04 | AutoML Frameworks
1:20:40 | AutoML and Hyperparameter Optimization
1:21:40 | Intentional Approach
1:22:15 | Learning Rate Finder
1:22:48 | Choosing Models
1:23:18 | Best Vision Models for Fine-Tuning
1:23:51 | Tabular Data: Random Forest, GBM, Neural Networks
1:24:28 | Kaggle Iteration Speed
1:24:52 | Virtual CPUs
1:25:31 | Resizing Images
1:26:46 | ConvNeXt Tiny Model
1:26:55 | Speed vs. Error Rate
1:27:50 | Training the ConvNeXt Tiny Model
1:28:22 | ConvNeXt Model
1:29:11 | ConvNeXt: Rules of Thumb
1:29:53 | Speed vs. Accuracy Tradeoff
1:30:04 | Iterating Further
1:30:23 | Cropping Images
1:31:13 | Padding Images
1:32:03 | Test Time Augmentation (TTA)
1:34:14 | TTA: Improving Results
1:34:32 | Rectangular Images
1:35:02 | Resizing to Rectangular Images
1:36:00 | Standardized Approach
1:36:11 | Submitting to Kaggle
1:36:23 | Mapping Numbers to Strings
1:37:23 | Submitting to Kaggle
1:38:09 | Wrapping Up
1:38:15 | TTA During Training
1:39:22 | Data Augmentation
1:40:02 | Rectangular Inputs
1:41:07 | Padding with Black Pixels
1:41:40 | Reflection Padding
1:42:05 | Padding: Impact on Results