Hello All,
In today’s tutorial we will apply 5 different machine learning algorithms to predict house sale prices using the Ames Housing Data.
This dataset is also available as an active Kaggle competition for the next month, so you can use this as a Kaggle starter script (in R). Use the output from the models to generate submission files for the Kaggle platform and see how well you fare on the public leaderboard. This is also a perfect simulation of a real-world analytics problem, where the final results are validated by a customer, client or third party.
This tutorial is divided into 4 parts:
 Data Load & Cleanup
 Feature Selection
 Apply algorithms
 Using arithmetic / geometric means from multiple algorithms to increase accuracy
Problem Statement:
Before we begin, let us understand the problem statement:
Predict Home SalePrice for the Test Data Set with the lowest possible error.
The Kaggle competition evaluates the leaderboard score as the root mean squared error (RMSE) between the logarithm of the predicted value and the logarithm of the observed sale price. So the best possible model would have a score of 0.
Because the evaluation works on log values, the model is penalized for incorrectly predicting both expensive houses AND cheap houses. What matters is the % deviation from the real value, not just the $ size of the error.
E.g.: if the predicted home price was $42,000 but the actual value was $37,000, the $ error is only $5,000, which doesn't seem like a lot. However, the error is ~13.51%.
On the contrary, imagine a home with a real sale price of $389,411, which we predicted to be $410,000. The $ difference is $20,589, yet the error is only ~5.29%, making it the better prediction.
In real life too, you will frequently face situations where the sensitivity of predictions is as important as value accuracy.
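To make this concrete, here is a quick base-R sketch of the log-based metric applied to the two example homes above (the rmsle helper is ours for illustration, not part of the competition code):

```r
# RMSE between log(predicted) and log(actual): the leaderboard metric
rmsle <- function(predicted, actual) {
  sqrt(mean((log(predicted) - log(actual))^2))
}

rmsle(42000, 37000)    # small $ error, but a comparatively large score (~0.127)
rmsle(410000, 389411)  # bigger $ error, yet a smaller (better) score (~0.052)
```

Note how the second prediction, despite being off by four times as many dollars, scores much better under this metric.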
As always, you can download the code and data files from the Projects Page here, under Feb 2017.
Data Load & Cleanup:
In this step we perform the following tasks:
 Load the test and training set.

library(data.table)  # for fread()

hp = data.frame(fread("train.csv"), stringsAsFactors = FALSE)
ht = data.frame(fread("test.csv"), stringsAsFactors = FALSE)
 Check training set for missing values.

sapply(hp, function(x) sum(is.na(x))) 
 Delete columns with more than 40% missing records. These include the variables Alley (1369 empty), FireplaceQu (690), Fence (1179), PoolQC (1453) and MiscFeature (1406).
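This deletion step can be scripted; a minimal sketch (the helper name and the 40% threshold as a parameter are our choices):

```r
# Drop every column whose fraction of missing values exceeds the threshold
drop_sparse_cols <- function(df, threshold = 0.4) {
  na_frac <- sapply(df, function(x) mean(is.na(x)))
  df[, na_frac <= threshold, drop = FALSE]
}

# hp <- drop_sparse_cols(hp)  # should remove Alley, FireplaceQu, Fence, PoolQC, MiscFeature
```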
 The other variables identified in step 1 are:
 LotFrontage , Alley , MasVnrType, BsmtQual , BsmtCond , BsmtExposure , BsmtFinType1, BsmtFinType2, GarageType , GarageCond.
 For columns with very few missing values, we choose one of 3 options:
 For categorical values, create a new level “unk” to indicate missing data.
 For columns (numeric or categorical) where the data falls overwhelmingly into one category, fill missing values with that dominant value.
 For numeric data, mark missing values with -1.
 NOTE: we apply these rules to the test set as well, for consistency, irrespective of whether it has missing values or not.
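The three rules can be sketched as below. The column names here (MasVnrType, Electrical, LotFrontage) are illustrative examples, not the full cleanup from the script, and we assume character columns (as loaded with stringsAsFactors = FALSE above):

```r
# Illustration of the three missing-value rules on example columns
fill_missing <- function(df) {
  # Rule 1 -- categorical: introduce a new level "unk"
  df$MasVnrType[is.na(df$MasVnrType)] <- "unk"
  # Rule 2 -- dominant category: fill with the most frequent value
  mode_val <- names(which.max(table(df$Electrical)))
  df$Electrical[is.na(df$Electrical)] <- mode_val
  # Rule 3 -- numeric: sentinel value -1
  df$LotFrontage[is.na(df$LotFrontage)] <- -1
  df
}
```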
 Repeat steps 2-5 for the test set columns, since columns that were complete in the training set may have empty cells in the test set. The variables we now identify include:
 MSZoning, Utilities, Exterior1st, Exterior2nd, MasVnrArea,
 BsmtFinSF1, BsmtFinSF2, BsmtUnfSF, TotalBsmtSF.
 BsmtFullBath, BsmtHalfBath, KitchenQual,
 GarageYrBlt, GarageQual, GarageFinish, GarageCars, GarageArea,
 SaleType.
 We write the modified training and test sets to Excel so we can apply our models on the corrected data. (This is especially helpful in real life if you need to fine-tune your model over a few days' time.)
Feature Selection:
The easiest way to check whether a relationship exists is to use statistical tests: chi-square, ANOVA or correlation.
Our target variable is "SalePrice" (a numeric value). So we will test all the other predictor variables against it to see whether a relationship exists and how much it affects SalePrice.
For categorical predictors we will use the chi-square test, whereas for numeric predictors we will use correlation.
Our dataset has 74 predictive factors (excluding SalePrice and Id), so we run a for loop to do a rough check. For the chi-square tests, a relationship exists only if the p-value is < 0.05.
If the predictor column is of type integer/numeric, we apply correlation; if the column is of type "character", we apply the chi-square test.

relationdf$pval = 0
for(j in 2:75){
  if(relationdf[(j-1), "vartype"] == "integer"){
    # note: for numeric columns this stores the correlation coefficient, not a p-value
    relationdf[(j-1), "pval"] = cor(hp[,j], hp$SalePrice)
    #print("hello correlation")
  }else{
    y = chisq.test(hp[,"SalePrice"], hp[,j])
    relationdf[(j-1), "pval"] = y$p.value
    #print("hello chisquare")
  }
}
We also add a column to identify variables of interest using the code below:
 If the correlation value falls below -0.75 (high negative correlation) or above 0.75 (high positive correlation), we mark the variable as "match".
 If the p-value from the chi-square test falls below 0.05, we mark it as "match".

relationdf$matchexpr = "match"
relationdf$matchexpr[relationdf$testname == "chisquare" & relationdf$pval >= 0.05] = "mismatch"
relationdf$matchexpr[relationdf$testname == "correlation" & (relationdf$pval >= -0.75 & relationdf$pval <= 0.75)] = "mismatch"
Using this quick-and-dirty approach, we quickly identify 19 variables of interest. These include Neighborhood, Building Type (single family, townhome, etc.), YearBuilt, Year Remodeled, Basement type (finished/unfinished), house area for first floor / second floor / garage, number of bathrooms (both full and half), number of cars the garage can accommodate, sale type and sale condition.
NOTE: it is always a good idea to visualize the relationships graphically, as the approach above may miss predictors with a non-linear relationship.
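One cheap numeric companion to the graphical check: compare the Pearson correlation with its rank-based Spearman variant. A synthetic example (not from the housing data) of how a non-linear but monotone relationship slips past Pearson:

```r
# Pearson only measures linear association; Spearman (rank-based)
# also catches monotone but non-linear relationships
x <- 1:100
y <- exp(x / 20)  # strongly non-linear, but strictly increasing

cor(x, y)                       # noticeably below 1
cor(x, y, method = "spearman")  # exactly 1
```

A large gap between the two values is a hint that a scatterplot of that predictor is worth a look.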
[Figure: Median House SalePrice by Neighborhood]
Apply Algorithms:
We apply the following machine learning algorithms:
 Linear Regression Model.
 Classification Tree Model.
 Neural Network Model.
 Random Forest Model.
 Generalized Linear Model (GLM).
We follow the same steps for all 5 models:
(Note: code and functions are shown only for the Linear Regression Model. Detailed functions and variables used for the other models are available in the R program files.)
 Use the training set to create a formula.

lmodel <- lm(SalePrice ~ MSSubClass + LotArea + LandContour + Utilities + LotConfig +
             Neighborhood + BldgType + HouseStyle + OverallQual + OverallCond +
             YearBuilt + MasVnrType + Foundation + BsmtCond + BsmtFinType1 +
             BsmtFinSF1 + TotalBsmtSF + Heating + X1stFlrSF + X2ndFlrSF +
             FullBath + HalfBath + KitchenQual + GarageCars + GarageArea +
             GarageCond + SaleType + SaleCondition, data = hp)
 Apply formula to predict values for validation set.

fitmodel = data.frame(predict(lmodel, validf, interval = "prediction"))
finalvalid = data.frame(Id = validf$Id, SalePrice = fitmodel$fit, oldprice = validf$SalePrice)
finalvalid$err_rate = sqrt(abs((finalvalid$SalePrice^2) - (finalvalid$oldprice^2)))
finalvalid$pcterr = abs((finalvalid$oldprice - finalvalid$SalePrice)/finalvalid$oldprice)*100
 Check the deviation from true homeprices in terms of both median $value and % error.

median(finalvalid$err_rate)  # median prediction error in $ = 62085
median(finalvalid$pcterr)    # median prediction error as % difference from correct value = 7.861%
 Apply formula on test set and check rank/score on leaderboard.

fitmodel = data.frame(predict(lmodel, ht, interval = "prediction"))
final = data.frame(Id = ht$Id, SalePrice = fitmodel$fit)
write.csv(final, "model1_linreg.csv", row.names = FALSE)
The error rates for all 5 models are given in the table below:
[Table: error % for each of the 5 models]
Combining Algorithms to Improve Accuracy:
There are many scientific papers showing that combining answers from multiple models greatly improves accuracy. A simple but excellent explanation (with respect to Kaggle), by the MLwave.com founder and past Kaggle winner, is provided here. Basically, combining results from unrelated models can improve accuracy even when the individual models are far from 100% accurate.
Similar to the explanation provided in the link, we calculate the arithmetic mean to average the results from all 5 models.

validtestdf = data.frame(Id = finalvalid$Id, trueval = finalvalid$oldprice, linval = finalvalid$SalePrice)
# xdf is re-populated with each model's predictions in turn (see the full script)
validtestdf$treeval = xdf$SalePrice
validtestdf$nnetval = xdf$SalePrice
validtestdf$rfval = xdf$SalePrice
validtestdf$glmval = xdf$SalePrice
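With the per-model predictions collected, the averaging itself is one line per scheme. A small helper sketch (our function, not from the script; the geometric mean is the variant hinted at in the section title):

```r
# Combine per-model predictions (one column per model) into a single value
ensemble_means <- function(preds) {
  list(
    arithmetic = rowMeans(preds),
    # the geometric mean pairs naturally with the log-based RMSE metric
    geometric  = exp(rowMeans(log(preds)))
  )
}

# e.g. ensemble_means(validtestdf[, c("linval", "treeval", "nnetval", "rfval", "glmval")])
```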
Summary:
We learnt how to clean and process a real-life dataset, select features of interest (and gauge their impact on the target variable), and apply 5 different machine learning algorithms.
The code for this tutorial is available here on the Projects page, under month of Feb.
Please take a look and feel free to comment with your own feedback or models.