Tutorial

From https://www.kaggle.com/c/datasciencebowl/details/tutorial

The original tutorial was written by Aaron Sander, Data Scientist at Booz Allen Hamilton, but I cannot find an online version of the notebook.

Differences in the directory layout required some of the changes made below.

In this tutorial, we will go step by step through a simple model to distinguish different types of plankton and demonstrate some tools for exploring the image dataset. We will start by working through one example image to show how you could develop a metric based on the shape of the object within the image. First, we import the necessary modules from scikit-image, matplotlib, scikit-learn, and numpy. If you don't currently have Python installed, you can get the Anaconda distribution, which includes all of the packages referenced below.

In [1]:
#Import libraries for doing image analysis
from skimage.io import imread
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier as RF
import glob
import os
from sklearn import cross_validation
from sklearn.cross_validation import StratifiedKFold as KFold
from sklearn.metrics import classification_report
from matplotlib import pyplot as plt
from matplotlib import colors
from pylab import cm
from skimage import segmentation
from skimage.morphology import watershed
from skimage import measure
from skimage import morphology
import numpy as np
import pandas as pd
from scipy import ndimage
from skimage.feature import peak_local_max
# make graphics inline
%matplotlib inline
In [2]:
import datetime
start_time = datetime.datetime.now()
print start_time
2015-02-01 13:48:51.434000

In [3]:
import warnings
warnings.filterwarnings("ignore")

Set the random number seed so results are reproducible.

In [4]:
np.random.seed(19937)

Importing the Data

The training data is organized in a series of subdirectories that contain examples for each class of interest. We will store the list of directory names to aid in labelling the data classes for training and testing purposes.

In [5]:
cd C:\Kaggle\2015\Plankton
C:\Kaggle\2015\Plankton

In [6]:
os.getcwd()
Out[6]:
'C:\\Kaggle\\2015\\Plankton'
In [7]:
directory_names = os.listdir("train")
In [8]:
directory_names[0]
Out[8]:
'acantharia_protist'

Example Image

We will develop our feature on one image example and examine each step before calculating the feature across the distribution of classes.

In [9]:
# Example image
# This example was chosen because it has two noncontiguous pieces
# that will make the segmentation example more illustrative

example_file = glob.glob(os.path.join("train", directory_names[0],'*.jpg'))[9]
example_file
Out[9]:
'train\\acantharia_protist\\101574.jpg'
In [10]:
im = imread(example_file, as_grey=True)
plt.imshow(im, cmap=cm.gray)
plt.show()

Preparing the Images

To create the features of interest, we will need to prepare the images with a few preprocessing procedures. We will step through some common image preprocessing actions: thresholding the images, segmenting the images, and extracting region properties. Using the region properties, we will create features based on the intrinsic properties of the classes, which we expect will allow us to discriminate between them. Let's walk through the process of adding one such feature, the ratio of the object's width to its length. First, we threshold the image on the mean value, which reduces some of the noise in the image. Then, we apply a three-step segmentation process: first we dilate the image to connect neighboring pixels, then we compute labels for the connected regions, and finally we apply the original threshold to the labels so that only the original, undilated regions remain labeled.
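As a toy illustration of the dilate, label, and mask sequence described above (using scipy.ndimage, which the notebook already imports, on a made-up binary mask rather than a real plankton image):

```python
import numpy as np
from scipy import ndimage

# Toy binary mask with two nearby but disconnected pieces
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]])

# Step 1: dilate so neighbouring pieces merge into one connected region
dilated = ndimage.binary_dilation(mask, structure=np.ones((3, 3)))

# Step 2: label connected regions of the dilated mask
labels, n_dilated = ndimage.label(dilated)

# Step 3: apply the original mask so only the undilated pixels keep labels
labels = (labels * mask).astype(int)

n_original = ndimage.label(mask)[1]
print(n_original, n_dilated)  # two pieces before dilation, one after
```

Dilation merges the two nearby pieces into one connected region, so the labeling step assigns them a single label; multiplying by the original mask then restores the undilated shapes while keeping that shared label.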

In [11]:
# First we threshold the image by only taking values greater than the mean to reduce noise 
# in the image to use later as a mask
f = plt.figure(figsize=(12,3))
imthr = im.copy()
imthr = np.where(im > np.mean(im),0.,1.0)
sub1 = plt.subplot(1,4,1)
plt.imshow(im, cmap=cm.gray)
sub1.set_title("Original Image")

sub2 = plt.subplot(1,4,2)
plt.imshow(imthr, cmap=cm.gray_r)  # reversed map
sub2.set_title("Thresholded Image")

imdilated = morphology.dilation(imthr, np.ones((4,4)))
sub3 = plt.subplot(1,4,3)
plt.imshow(imdilated, cmap=cm.gray_r)  # reversed map
sub3.set_title("Dilated Image")

labels = measure.label(imdilated)
labels = imthr*labels
labels = labels.astype(int)
sub4 = plt.subplot(1,4,4)
sub4.set_title("Labeled Image")
plt.imshow(labels)   # default color map
Out[11]:
<matplotlib.image.AxesImage at 0x1771e080>

With the image segmented into different parts, we choose the largest non-background segment as the likely object of interest and compute our metric on it. We loop through the available regions and select the one with the largest area. There are many other properties available for each region that you can explore when creating new features; look at the documentation for regionprops for inspiration.

See the scikit-image documentation for regionprops.

In [12]:
# calculate common region properties for each region within the segmentation
regions = measure.regionprops(labels)
In [13]:
for props in regions:
    ratio = props.minor_axis_length/props.major_axis_length
    print props.label, props.centroid, props.orientation, props.area, props.filled_area, \
    props.major_axis_length, props.minor_axis_length, ratio
1 (7.5, 101.375) 1.35819381731 16.0 16 12.0514717159 8.51539954908 0.706585863521
2 (45.365239294710328, 51.130982367758186) 0.260006867166 397.0 397 64.8799060565 44.6033453179 0.687475491704
3 (32.75, 82.0) -0.0 4.0 4 2.82842712475 1.73205080757 0.612372435696
5 (74.5, 19.214285714285715) 1.33093032045 14.0 14 7.47148184127 5.34650562152 0.715588384621

In [14]:
# find the largest nonzero region
def getLargestRegion(props=regions, labelmap=labels, imagethres=imthr):
    regionmaxprop = None
    for regionprop in props:
        # check to see if the region is at least 50% nonzero
        if sum(imagethres[labelmap == regionprop.label])*1.0/regionprop.area < 0.50:
            continue
        if regionmaxprop is None:
            regionmaxprop = regionprop
        if regionmaxprop.filled_area < regionprop.filled_area:
            regionmaxprop = regionprop
    return regionmaxprop

The result for our test image is shown below. The segmentation produced several regions; we keep the largest one and use it to calculate our ratio metric.

In [15]:
regionmax = getLargestRegion()
plt.imshow(np.where(labels == regionmax.label,1.0,0.0))
plt.show()
In [16]:
print regionmax.minor_axis_length/regionmax.major_axis_length
0.687475491704

Why does this value not match the original article by Sander? Should the value be 0.144141, like in that article?

In [17]:
regionmax.minor_axis_length
Out[17]:
44.6033453178995
In [18]:
regionmax.major_axis_length
Out[18]:
64.87990605645001

Now, we collect the previous steps together in a function to make it easily repeatable.

In [19]:
def getMinorMajorRatio(image):
    image = image.copy()
    # Create the thresholded image to eliminate some of the background
    imagethr = np.where(image > np.mean(image),0.,1.0)

    #Dilate the image
    imdilated = morphology.dilation(imagethr, np.ones((4,4)))

    # Create the label list
    label_list = measure.label(imdilated)
    label_list = imagethr*label_list
    label_list = label_list.astype(int)
    
    region_list = measure.regionprops(label_list)
    maxregion = getLargestRegion(region_list, label_list, imagethr)
    
    # guard against cases where the segmentation fails by returning zero
    ratio = 0.0
    if maxregion is not None and maxregion.major_axis_length != 0.0:
        ratio = maxregion.minor_axis_length * 1.0 / maxregion.major_axis_length
    return ratio
In [20]:
getMinorMajorRatio(im)
Out[20]:
0.6874754917044964

Preparing Training Data

With our code for the ratio of minor to major axis in place, let's add the raw pixel values to the list of features for our dataset. In order to use the pixel values in a model for our classifier, we need a fixed-length feature vector, so we will rescale the images to a constant size and add that fixed number of pixels to the feature vector.

To create the feature vectors, we will loop through each of the directories in our training data set and then loop over each image within that class. For each image, we will rescale it to 25 x 25 pixels and then add the rescaled pixel values to a feature vector, X. The last feature we include will be our width-to-length ratio. We will also create the class label in the vector y, which will have the true class label for each row of the feature vector, X.
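A minimal sketch of the feature-vector layout described above, with a made-up 3x3 "rescaled" image and a placeholder ratio instead of the real 25x25 images:

```python
import numpy as np

# Hypothetical miniature setup: 3x3 "rescaled" images plus one extra feature
maxPixel = 3
imageSize = maxPixel * maxPixel          # 9 pixel features
num_features = imageSize + 1             # +1 for the width/length ratio

X = np.zeros((2, num_features))
image = np.linspace(0.0, 1.0, imageSize).reshape(maxPixel, maxPixel)

# Flatten the 2D pixel grid into one row, then append the ratio at the end
X[0, 0:imageSize] = np.reshape(image, (1, imageSize))
X[0, imageSize] = 0.69                   # placeholder axis ratio
print(X[0])
```

Each row of X is one image: the first imageSize columns hold the flattened pixels and the final column holds the shape metric.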

In [21]:
# Rescale the images and create the combined metrics and training labels

#get the total training images
numberofImages = 0
for folder in directory_names:
    for fileNameDir in os.walk(os.path.join("train", folder)):
        for fileName in fileNameDir[2]:
             # Only read in the images
            if fileName[-4:] != ".jpg":
              continue
            numberofImages += 1

print numberofImages

# We'll rescale the images to be 25x25
maxPixel = 25
imageSize = maxPixel * maxPixel
num_rows = numberofImages # one row for each image in the training dataset
num_features = imageSize + 1 # for our ratio

# X is the feature vector with one row of features per image
# consisting of the pixel values and our metric
X = np.zeros((num_rows, num_features), dtype=float)
# y is the numeric class label 
y = np.zeros((num_rows))

files = []
# Generate training data
i = 0    
label = 0
# List of string of class names
namesClasses = list()

print "Reading images"
# Navigate through the list of directories
for folder in directory_names:
    # Append the string class name for each class
    currentClass = folder.split(os.sep)[-1]
    namesClasses.append(currentClass)
    for fileNameDir in os.walk(os.path.join("train",folder)):   
        for fileName in fileNameDir[2]:
            # Only read in the images
            if fileName[-4:] != ".jpg":
              continue
            
            # Read in the images and create the features
            nameFileImage = "{0}{1}{2}".format(fileNameDir[0], os.sep, fileName)            
            image = imread(nameFileImage, as_grey=True)
            files.append(nameFileImage)
            axisratio = getMinorMajorRatio(image)
            image = resize(image, (maxPixel, maxPixel))
            
            # Store the rescaled image pixels and the axis ratio
            X[i, 0:imageSize] = np.reshape(image, (1, imageSize))
            X[i, imageSize] = axisratio
            
            # Store the classlabel
            y[i] = label
            i += 1
            # report progress for each 5% done  
            report = [int((j+1)*num_rows/20.) for j in range(20)]
            if i in report: print np.ceil(i *100.0 / num_rows), "% done"
    label += 1
30336
Reading images
5.0 % done
10.0 % done
15.0 % done
20.0 % done
25.0 % done
30.0 % done
35.0 % done
40.0 % done
45.0 % done
50.0 % done
55.0 % done
60.0 % done
65.0 % done
70.0 % done
75.0 % done
80.0 % done
85.0 % done
90.0 % done
95.0 % done
100.0 % done

Width-to-Length Ratio Class Separation

Now that we have calculated the width-to-length ratio metric for all the images, we can look at the class separation to see how well our feature performs. We'll compare pairs of the classes' distributions by plotting each pair of classes. While this will not cover the full space of possible pairings, it will give us a feel for how similar or dissimilar different classes are in this feature, and the class distributions are comparable across subplots.

In [22]:
# Loop through the classes two at a time and compare their distributions of the Width/Length Ratio

#Create a DataFrame object to make subsetting the data on the class easier
df = pd.DataFrame({"class": y[:], "ratio": X[:, num_features-1]})

f = plt.figure(figsize=(30, 20))
#we suppress zeros and choose a few large classes to better highlight the distributions.
df = df.loc[df["ratio"] > 0]
minimumSize = 20 
counts = df["class"].value_counts()
largeclasses = [int(x) for x in list(counts.loc[counts > minimumSize].index)]
# Loop through 40 of the classes 
for j in range(0,40,2):
    subfig = plt.subplot(4, 5, j/2 +1)
    # Plot the normalized histograms for two classes
    classind1 = largeclasses[j]
    classind2 = largeclasses[j+1]
    n, bins,p = plt.hist(df.loc[df["class"] == classind1]["ratio"].values,\
                         alpha=0.5, bins=[x*0.01 for x in range(100)], \
                         label=namesClasses[classind1].split(os.sep)[-1], normed=1)

    n2, bins,p = plt.hist(df.loc[df["class"] == (classind2)]["ratio"].values,\
                          alpha=0.5, bins=bins, label=namesClasses[classind2].split(os.sep)[-1],normed=1)
    subfig.set_ylim([0.,10.])
    plt.legend(loc='upper right')
    plt.xlabel("Width/Length Ratio")

From the figure above, you will see some cases where the classes are well separated and others where they are not. It is typical that no single feature will completely separate this many distinct classes. You will need to be creative in coming up with additional metrics that discriminate between all the classes.

Random Forest Classification

We choose a random forest model to classify the images. Random forests perform well in many classification tasks and have robust default settings. We will give a brief description of a random forest model so that you can understand its two main free parameters: n_estimators and max_features.

A random forest is an ensemble of n_estimators decision trees. During training, each decision tree is grown by making a series of conditional splits on the data. At each split, a random sample of max_features features is chosen and used to decide which of the two child nodes each data point will be assigned to. The best split is the one that maximizes the class purity of the nodes directly below. The tree continues to grow by making additional splits until the leaves are pure or the leaves have fewer than the minimum number of samples required for a split (in sklearn, the default for min_samples_split is two data points). The majority class in each terminal node of a decision tree is used for making predictions on new data points, and the aggregate vote across the forest determines the class prediction for new samples.
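As a toy sketch of the final voting step, with hypothetical per-tree predictions (note that scikit-learn's random forest actually averages the trees' predicted class probabilities rather than taking hard votes):

```python
import numpy as np

# Hypothetical predictions from 5 trees for 4 samples (class ids 0-2)
tree_preds = np.array([[0, 1, 2, 1],
                       [0, 1, 2, 2],
                       [0, 2, 2, 1],
                       [1, 1, 2, 1],
                       [0, 1, 0, 1]])

def majority_vote(preds, n_classes=3):
    # Count the votes for each class in every column (one column per sample),
    # then pick the class with the most votes
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)

print(majority_vote(tree_preds))  # -> [0 1 2 1]
```

Each column of tree_preds is one sample; the forest's prediction is simply the class that most trees agreed on.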

With our training data consisting of the feature vector X and the class label vector y, we will now calculate some performance metrics for our model, by class and overall. First, we train the random forest on all the available data using 5-fold cross validation. Then we perform the cross validation manually using the KFold method, which splits the data into train and test sets, and produce a classification report. The classification report provides a useful list of per-class performance metrics for your classifier beyond the single accuracy score computed above.

The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.

The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.

The F-beta score can be interpreted as a weighted harmonic mean of the precision and recall, where an F-beta score reaches its best value at 1 and worst score at 0.

The F-beta score weights recall more than precision by a factor of beta. beta == 1.0 means recall and precision are equally important.

The support is the number of occurrences of each class in y_true.
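These definitions can be checked with a quick hand computation on hypothetical confusion counts:

```python
# Hypothetical confusion counts for one class: 8 true positives,
# 2 false positives, 4 false negatives
tp, fp, fn = 8, 2, 4

precision = tp / float(tp + fp)   # ability not to label negatives as positive
recall = tp / float(tp + fn)      # ability to find all the positive samples

def f_beta(p, r, beta=1.0):
    # Weighted harmonic mean; beta == 1.0 weighs precision and recall equally
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(precision, recall, f_beta(precision, recall))
```

With beta > 1 the same function weights recall more heavily, which is sometimes preferable when missing a class is costlier than a false alarm.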

In [23]:
print "Training"
# n_estimators is the number of decision trees
# max_features (also known as mtry) is left at its default value,
# the square root of the number of features
clf = RF(n_estimators=100, n_jobs=3)
scores = cross_validation.cross_val_score(clf, X, y, cv=5, n_jobs=1)
print "Accuracy of all classes"
print np.mean(scores)
Training
Accuracy of all classes
0.466051367628

In [24]:
kf = KFold(y, n_folds=5)
y_pred = y * 0
for train, test in kf:
    X_train, X_test, y_train, y_test = X[train,:], X[test,:], y[train], y[test]
    clf = RF(n_estimators=100, n_jobs=3)
    clf.fit(X_train, y_train)
    y_pred[test] = clf.predict(X_test)
print classification_report(y, y_pred, target_names=namesClasses)
                                               precision    recall  f1-score   support

                           acantharia_protist       0.41      0.86      0.55       889
                acantharia_protist_big_center       0.00      0.00      0.00        13
                      acantharia_protist_halo       0.75      0.04      0.08        71
                                    amphipods       0.00      0.00      0.00        49
                appendicularian_fritillaridae       0.00      0.00      0.00        16
                 appendicularian_slight_curve       0.33      0.41      0.37       532
                     appendicularian_straight       0.31      0.02      0.04       242
                      appendicularian_s_shape       0.37      0.53      0.43       696
                                    artifacts       0.52      0.82      0.64       393
                               artifacts_edge       0.89      0.77      0.82       170
                      chaetognath_non_sagitta       0.56      0.74      0.64       815
                            chaetognath_other       0.40      0.75      0.52      1934
                          chaetognath_sagitta       0.45      0.20      0.27       694
                               chordate_type1       0.46      0.56      0.51        77
                             copepod_calanoid       0.38      0.58      0.46       681
                        copepod_calanoid_eggs       0.73      0.17      0.28       173
                   copepod_calanoid_eucalanus       0.88      0.07      0.13        96
                   copepod_calanoid_flatheads       0.20      0.01      0.01       178
              copepod_calanoid_frillyAntennae       0.00      0.00      0.00        63
                       copepod_calanoid_large       0.50      0.40      0.44       286
    copepod_calanoid_large_side_antennatucked       0.54      0.24      0.33       106
                    copepod_calanoid_octomoms       0.00      0.00      0.00        49
          copepod_calanoid_small_longantennae       0.86      0.14      0.24        87
                    copepod_cyclopoid_copilia       1.00      0.03      0.06        30
                    copepod_cyclopoid_oithona       0.48      0.63      0.54       899
               copepod_cyclopoid_oithona_eggs       0.57      0.79      0.66      1189
                                copepod_other       0.00      0.00      0.00        24
                             crustacean_other       0.26      0.10      0.14       201
                            ctenophore_cestid       0.50      0.01      0.02       113
             ctenophore_cydippid_no_tentacles       0.00      0.00      0.00        42
                ctenophore_cydippid_tentacles       0.00      0.00      0.00        53
                            ctenophore_lobate       0.76      0.50      0.60        38
                                     decapods       0.00      0.00      0.00        55
                                detritus_blob       0.16      0.06      0.08       363
                         detritus_filamentous       0.17      0.02      0.04       394
                               detritus_other       0.23      0.35      0.28       914
                          diatom_chain_string       0.55      0.93      0.69       519
                            diatom_chain_tube       0.38      0.41      0.39       500
         echinoderm_larva_pluteus_brittlestar       0.25      0.03      0.05        36
               echinoderm_larva_pluteus_early       0.53      0.28      0.37        92
               echinoderm_larva_pluteus_typeC       0.83      0.19      0.31        80
              echinoderm_larva_pluteus_urchin       0.57      0.28      0.38        88
          echinoderm_larva_seastar_bipinnaria       0.54      0.60      0.57       385
        echinoderm_larva_seastar_brachiolaria       0.61      0.78      0.69       536
     echinoderm_seacucumber_auricularia_larva       0.00      0.00      0.00        96
                                echinopluteus       0.00      0.00      0.00        27
                                       ephyra       0.00      0.00      0.00        14
                                  euphausiids       0.76      0.10      0.17       136
                            euphausiids_young       0.00      0.00      0.00        38
                                 fecal_pellet       0.33      0.28      0.30       511
                        fish_larvae_deep_body       0.00      0.00      0.00        10
                     fish_larvae_leptocephali       0.00      0.00      0.00        31
                      fish_larvae_medium_body       0.52      0.34      0.41        85
                       fish_larvae_myctophids       0.57      0.53      0.55       114
                        fish_larvae_thin_body       0.20      0.03      0.05        64
                   fish_larvae_very_thin_body       0.00      0.00      0.00        16
                                    heteropod       0.00      0.00      0.00        10
                         hydromedusae_aglaura       0.00      0.00      0.00       127
              hydromedusae_bell_and_tentacles       0.00      0.00      0.00        75
                             hydromedusae_h15       0.77      0.29      0.42        35
                       hydromedusae_haliscera       0.48      0.42      0.45       229
        hydromedusae_haliscera_small_sideview       0.00      0.00      0.00         9
                         hydromedusae_liriope       0.00      0.00      0.00        19
                    hydromedusae_narcomedusae       0.00      0.00      0.00       132
                      hydromedusae_narco_dark       0.00      0.00      0.00        23
                     hydromedusae_narco_young       0.21      0.05      0.08       336
                           hydromedusae_other       0.00      0.00      0.00        12
                    hydromedusae_partial_dark       0.65      0.27      0.38       190
                          hydromedusae_shapeA       0.43      0.74      0.55       412
           hydromedusae_shapeA_sideview_small       0.38      0.11      0.17       274
                          hydromedusae_shapeB       0.40      0.07      0.11       150
                    hydromedusae_sideview_big       0.00      0.00      0.00        76
                        hydromedusae_solmaris       0.39      0.49      0.43       703
                     hydromedusae_solmundella       0.87      0.11      0.19       123
                           hydromedusae_typeD       0.00      0.00      0.00        43
        hydromedusae_typeD_bell_and_tentacles       0.62      0.09      0.16        56
                           hydromedusae_typeE       0.00      0.00      0.00        14
                           hydromedusae_typeF       0.42      0.18      0.25        61
                  invertebrate_larvae_other_A       0.00      0.00      0.00        14
                  invertebrate_larvae_other_B       0.00      0.00      0.00        24
                            jellies_tentacles       0.50      0.04      0.07       141
                                   polychaete       1.00      0.02      0.04       131
                          protist_dark_center       0.00      0.00      0.00       108
                          protist_fuzzy_olive       0.71      0.77      0.74       372
                            protist_noctiluca       0.54      0.53      0.54       625
                                protist_other       0.37      0.66      0.47      1172
                                 protist_star       0.85      0.55      0.67       113
                           pteropod_butterfly       0.50      0.05      0.08       108
                       pteropod_theco_dev_seq       0.00      0.00      0.00        13
                            pteropod_triangle       1.00      0.03      0.06        65
                            radiolarian_chain       0.39      0.04      0.08       287
                           radiolarian_colony       0.49      0.24      0.32       158
                            shrimp-like_other       0.00      0.00      0.00        52
                              shrimp_caridean       0.62      0.20      0.31        49
                           shrimp_sergestidae       0.64      0.06      0.11       153
                                  shrimp_zoea       0.66      0.24      0.35       174
           siphonophore_calycophoran_abylidae       0.19      0.03      0.05       212
   siphonophore_calycophoran_rocketship_adult       0.41      0.07      0.11       135
   siphonophore_calycophoran_rocketship_young       0.38      0.22      0.28       483
      siphonophore_calycophoran_sphaeronectes       0.55      0.13      0.22       179
 siphonophore_calycophoran_sphaeronectes_stem       0.00      0.00      0.00        57
siphonophore_calycophoran_sphaeronectes_young       0.44      0.04      0.08       247
                     siphonophore_other_parts       0.00      0.00      0.00        29
                         siphonophore_partial       0.00      0.00      0.00        30
                       siphonophore_physonect       0.00      0.00      0.00       128
                 siphonophore_physonect_young       0.00      0.00      0.00        21
                                   stomatopod       0.00      0.00      0.00        24
                   tornaria_acorn_worm_larvae       0.73      0.42      0.53        38
                         trichodesmium_bowtie       0.46      0.66      0.54       708
                       trichodesmium_multiple       0.67      0.04      0.07        54
                           trichodesmium_puff       0.72      0.92      0.81      1979
                           trichodesmium_tuft       0.37      0.45      0.41       678
                           trochophore_larvae       0.00      0.00      0.00        29
                            tunicate_doliolid       0.25      0.16      0.20       439
                      tunicate_doliolid_nurse       0.31      0.07      0.11       417
                             tunicate_partial       0.64      0.96      0.76       352
                                tunicate_salp       0.60      0.76      0.67       236
                         tunicate_salp_chains       0.50      0.03      0.05        73
                    unknown_blobs_and_smudges       0.28      0.14      0.18       317
                               unknown_sticks       0.43      0.05      0.09       175
                         unknown_unclassified       0.17      0.01      0.02       425

                                  avg / total       0.44      0.46      0.41     30336


The current model, while somewhat accurate overall, doesn't do well for all classes, including the shrimp_caridean, stomatopod, and hydromedusae_bell_and_tentacles classes. For others it does quite well, getting many correct classifications for the trichodesmium_puff and copepod_cyclopoid_oithona_eggs classes. The metrics shown above for measuring model performance include precision, recall, and f1-score. The precision metric gives the probability that a predicted class label is correct, true positives / (true positives + false positives), while recall measures the ability of the model to correctly classify examples of a given class, true positives / (true positives + false negatives). The F1 score is the harmonic mean of precision and recall.

The competition scoring uses a multiclass log-loss metric to compute your overall score. In the next steps, we define the multiclass log-loss function and compute your estimated score on the training dataset.

In [25]:
def multiclass_log_loss(y_true, y_pred, eps=1e-15):
    """Multi class version of Logarithmic Loss metric.
    https://www.kaggle.com/wiki/MultiClassLogLoss

    Parameters
    ----------
    y_true : array, shape = [n_samples]
            true class labels, integers in [0, n_classes)
    y_pred : array, shape = [n_samples, n_classes]
            predicted class probabilities

    Returns
    -------
    loss : float
    """
    predictions = np.clip(y_pred, eps, 1 - eps)

    # normalize row sums to 1
    predictions /= predictions.sum(axis=1)[:, np.newaxis]

    actual = np.zeros(y_pred.shape)
    n_samples = actual.shape[0]
    actual[np.arange(n_samples), y_true.astype(int)] = 1
    vectsum = np.sum(actual * np.log(predictions))
    loss = -1.0 / n_samples * vectsum
    return loss
In [26]:
# Get the probability predictions for computing the log-loss function
kf = KFold(y, n_folds=5)
# prediction probabilities number of samples, by number of classes
y_pred = np.zeros((len(y),len(set(y))))
for train, test in kf:
    X_train, X_test, y_train, y_test = X[train,:], X[test,:], y[train], y[test]
    clf = RF(n_estimators=100, n_jobs=3)
    clf.fit(X_train, y_train)
    y_pred[test] = clf.predict_proba(X_test)
In [27]:
multiclass_log_loss(y, y_pred)
Out[27]:
3.7112611343684963

The multiclass log loss function is a classification error metric that heavily penalizes you for being both confident (predicting a very high or very low class probability) and wrong. Throughout the competition you will want to check that your model improvements are driving this loss metric lower.
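A quick numeric check of that behaviour, using a simplified single-sample version of the loss function above (the probability vectors are made up):

```python
import numpy as np

# Log loss for a single sample whose true class is 0, under three
# hypothetical predicted distributions over 3 classes
def log_loss_one(probs, true_class=0, eps=1e-15):
    p = np.clip(np.asarray(probs, dtype=float), eps, 1 - eps)
    p /= p.sum()  # normalize so the probabilities sum to 1
    return -np.log(p[true_class])

confident_right = log_loss_one([0.90, 0.05, 0.05])   # ~0.105
uncertain       = log_loss_one([1/3., 1/3., 1/3.])   # ~1.099
confident_wrong = log_loss_one([0.01, 0.98, 0.01])   # ~4.605
print(confident_right, uncertain, confident_wrong)
```

A confidently wrong prediction costs far more than an honestly uncertain one, which is why smoothing over-confident probabilities often improves a log-loss score.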

Where to Go From Here

Now that you've made a simple metric, created a model, and examined the model's performance on the training data, the next step is to make improvements to your model to make it more competitive. The random forest model we created does not perform evenly across all classes and in some cases fails completely. By creating new features and looking directly at the distributions for the problem classes, you can identify features that specifically help separate those classes from the others. You can add new metrics based on other image properties, try stratified sampling or transformations, or try other models for the classification.

In [28]:
stop_time = datetime.datetime.now()
print stop_time
print (stop_time - start_time), "elapsed time"
2015-02-01 14:01:15.658000
0:12:24.224000 elapsed time