Journey of Analytics

Deep dive into data analysis tools, theory and projects


Tutorials – Dashboards with R programming

In this blog post we are going to implement dashboards in R, using the relatively new R package "flexdashboard".

R already offers some good options for graphs and charts (packages like ggplot2, leaflet, etc.). Plus there is always the option to create web applications using the Shiny library and presentations with RMarkdown documents.

However, this new package builds on those libraries and allows us to create some stunning dashboards with interactive graphs and text. What I loved the most was the "storyboard" feature, which lets me present content in Tableau-style frames. Please note that for this you need to create RMarkdown (.Rmd) files and insert the code in R chunks as needed.

Do I think it will replace Tableau or any other enterprise BI dashboard tool? Not really (at least in the near future). But I do think it offers the following advantages:

  1. Great alternative to static presentations, since your audience can interact with the data.
  2. RMarkdown allows both programming and regular text content to be pooled together in a single document.
  3. Open source (a very big plus, in my opinion!)
  4. Storyboard format allows you to logically move the audience through the analysis: problem statement, raw data and exploration, different parts of the models/simulations/number crunching, patterns in data, final summary and recommendations. Presenting the patterns that allow you to accept or reject a hypothesis has never been easier.

So, without further ado, let us look at the dashboard implementation with two examples:

  1. Storyboard dashboard.
  2. Simple dashboard with Shiny elements.

 

Library Installation instructions:

To start off, please install the "flexdashboard" package in your RStudio IDE. If the installation completes correctly, you will see the Flex Dashboard template option when you create a new RMarkdown document, as shown in the images below:

Step 1 image:

Step 2 image:
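
For reference, the installation itself is a one-liner; a minimal sketch (the template path in the comment assumes the standard RStudio menu):

    install.packages("flexdashboard")
    library(flexdashboard)

    # After installation, File > New File > R Markdown... > From Template
    # should offer a "Flex Dashboard" template to start from.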

Storyboard Dashboard:

Instead of analyzing a single dataset, I have chosen to present different interactive graph types using the storyboard feature. This will allow you to experience the range of options possible with this package.

An image of the storyboard is shown below, but you can also view the live document here (without source code or data files). The complete data and source code files are available for download here, under May 2017 on the Projects page.

The storyboard elements are described below:

  • Element 1 – Click on each frame to see the graph and explanation associated with that story point (click element 5 to see Facebook stock trends).
  • Element 2 – This is the location for your graphs, tables, etc., one below each story point.
  • Element 3 – This explanation column on the right can be omitted if required. However, my personal opinion is that it is a good way to highlight certain facts about the graph or to place instructions, hyperlinks, etc. – like a webpage sidebar.
  • Element 4 – My tutorial only has 4 story elements, but if you have more, flexdashboard automatically provides left/right arrows for navigation (just like Tableau).
  • Element 6 – Title bar for the project. Notice the social sharing button on the far right.
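
To tie the elements above back to code, here is a minimal sketch of a storyboard .Rmd layout. The title, frame names and chunk contents are placeholders, not the actual tutorial code:

    ---
    title: "Storyboard demo"
    output:
      flexdashboard::flex_dashboard:
        storyboard: true
        social: menu
    ---

    ### Frame 1 - problem statement and first chart

    ```{r}
    # plotting code for this story point (element 2) goes here
    ```

    ***

    Commentary for frame 1 - the optional sidebar column on the right (element 3).

    ### Frame 2 - Facebook stock trends (element 5)

    ```{r}
    # e.g. an interactive dygraphs or plotly chart
    ```

Each "###" heading becomes a clickable frame (element 1), and the "***" separator pushes the text after it into the right-hand commentary column.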

 

Shiny Style Simple Dashboard:

Here I have used Shiny elements to allow the user to select the variables to plot on a graph. This also shows how you can divide your page into columns and rows, to show different content in different panes.

See image below for some of the features incorporated in this dashboard:

Feature explanation:

  • Element 1 – Title pane. The red color comes from the "cerulean" theme I used. You can change colors to match your business logo or personal preferences.
  • Element 2 – Shiny inputs. The dropdown is populated automatically from the dataset, so you don't have to specify values separately.
  • Element 3 – Output graph, based on the choices from element 2.
  • Element 4 – Notice how this is separated vertically from element 3, and horizontally from element 5.
  • Element 5 – Another graph. You can render text, images or pull in web content using the appropriate Shiny commands.
  • Element 6 – You can embed a social-share button, and also the source code. Click the code icon "</>" to view the code. You can also download the data and program files from my 2017 Projects page.
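
A minimal sketch of this kind of Shiny-enabled dashboard is shown below. It uses the built-in mtcars dataset as a stand-in, and the input names are mine; the real tutorial uses its own data and code:

    ---
    title: "Simple dashboard"
    output:
      flexdashboard::flex_dashboard:
        theme: cerulean
        social: menu
        source_code: embed
    runtime: shiny
    ---

    Column {.sidebar}
    -----------------------------------------------------------------------

    ```{r}
    # Element 2: Shiny inputs, populated automatically from the dataset
    selectInput("xvar", "X-axis variable:", choices = names(mtcars))
    selectInput("yvar", "Y-axis variable:", choices = names(mtcars))
    ```

    Column
    -----------------------------------------------------------------------

    ### Output graph (element 3)

    ```{r}
    renderPlot({
      plot(mtcars[[input$xvar]], mtcars[[input$yvar]],
           xlab = input$xvar, ylab = input$yvar)
    })
    ```

The "source_code: embed" option is what produces the "</>" code icon mentioned in element 6.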

 

Hope you found these dashboard implementations useful. Please add your valuable comments and feedback in the comments section.

Until next time, happy coding! 😊

Parallel Programming with R

In this month's project we will implement parallel programming in R.

We achieve this using the following packages, so please install them in your RStudio IDE:

  • foreach
  • doParallel
  • parallel
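
Installation and setup amount to a few lines; a quick sketch (the worker count of 4 is just an example):

    install.packages(c("foreach", "doParallel"))   # "parallel" ships with base R

    library(foreach)
    library(doParallel)

    cl <- makeCluster(4)        # create 4 worker connections
    registerDoParallel(cl)      # register them as the %dopar% backend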

 

Concept of parallel programming:

For beginners, the concept of parallel programming is simple: instead of calculating outputs (from inputs) one at a time, we divide the computation and allocate it to multiple worker connections that work concurrently (to retrieve data, process it, calculate outputs, etc.). At the end we collate the results into a single dataset (list or dataframe).
This concept assumes that the computations and inputs are independent of each other.

Example:
As a simple example, let us assume you are calculating prime factors for an input dataset of a million random, unsorted numbers, with a function that takes n seconds to process each number.
In a sequential (non-parallel) scenario, it would take 1,000,000 * n seconds to process the entire dataset.
However, if we had 7 parallel connections, we could divide the input set into 7 chunks and process 7 numbers (or chunks) at a time, reducing the total time roughly by a factor of 7.
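
As a toy illustration of that idea, here is a minimal sketch that factorises a batch of numbers across 7 workers. The trial-division helper and the batch size are made up for the example:

    library(foreach)
    library(doParallel)

    # Hypothetical helper: trial-division prime factorisation of one number
    prime_factors <- function(n) {
      f <- integer(0); d <- 2L
      while (n > 1L) {
        if (n %% d == 0L) { f <- c(f, d); n <- n %/% d } else d <- d + 1L
      }
      f
    }

    inputs <- sample(2:1000000, 100)   # small demo batch instead of a million numbers

    cl <- makeCluster(7)               # 7 worker connections, as in the example above
    registerDoParallel(cl)

    # Each worker factorises its share of the inputs concurrently;
    # the results come back as a single list
    results <- foreach(x = inputs) %dopar% prime_factors(x)

    stopCluster(cl)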

 

Benefits:

Parallel programming is not really useful if your computation is quick or if you are just exploring a dataset. But it is a very powerful tool to speed things up when it comes to simulations, machine learning and other time-consuming calculations or data retrievals.

For example, I was recently working on a project where I was looking at billions of transaction records for sub-optimal price charges (where the actual transaction price was x% or more above the first bid). The logic itself is simple:

 

However, the sheer amount of data was taking the program hours (yes, hours) to complete for a single month, even after optimizing the SQL queries. And I had to look at 6 months' worth of data! Like looking for a needle in a haystack. So I implemented parallel worker connections and bingo! The run time dropped by 80%.

Similarly, I have used parallelization to simulate multiple modeling scenarios (what happens when the input price changes to m and competitor prices change to y) in parallel and reduce computation time.

 

Code Structure:

The skeleton code for implementation is as follows:
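
Since the original code was shown as an image, here is a rough sketch of the structure being described, using RODBC-style calls. The DSN, credentials, table name and chunking are placeholders, not the actual project code:

    library(foreach)
    library(doParallel)

    cl <- makeCluster(4)
    registerDoParallel(cl)

    id_chunks <- split(1:100, rep(1:4, length.out = 100))   # hypothetical work units

    results <- foreach(ids = id_chunks, .combine = rbind,
                       .packages = "RODBC") %dopar% {
      # each worker opens its OWN connection (a single global connection will not work)
      ch  <- odbcConnect("my_dsn", uid = "fake_user", pwd = "fake_pwd")
      out <- sqlQuery(ch, paste0("select * from transactions where id in (",
                                 paste(ids, collapse = ","), ")"))
      odbcClose(ch)
      out           # returned to the master process and combined with rbind
    }

    stopCluster(cl)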

In the above code, you can also embed the sqlQuery() call in a function, along with other calculations, and call that function inside the %dopar% block.

Note, the database connection call uses a fake username-password combination, just to show the format. We need to place it inside the foreach() loop so that each worker gets its own database connection to retrieve data in parallel; assigning a single connection outside, as a global variable, will NOT work.

We use a custom function to collate the results as follows:
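
The collate function was also shown as an image in the original post; a minimal stand-in that row-binds the worker outputs (skipping empty ones) might look like this:

    library(foreach)
    library(doParallel)
    registerDoParallel(cores = 2)   # or reuse the cluster registered in the skeleton above

    # Hypothetical collate function: row-bind worker outputs, ignoring NULL results
    collate_results <- function(a, b) {
      if (is.null(a)) return(b)
      if (is.null(b)) return(a)
      rbind(a, b)
    }

    # Passed to foreach() via the .combine argument
    combined <- foreach(i = 1:4, .combine = collate_results) %dopar% {
      data.frame(chunk = i, value = i^2)
    }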

 

Implementation:

The attached code file shows 3 cases for implementation, using the above code structure:

  1. calling a simple SQL query inside the foreach() loop.
  2. calling a simple math-calculation function, without SQL queries.
  3. a more complex function that performs both SQL data retrieval and math computations.

You can download the code files from the Projects Page link here, under Apr 2017.

Points To Note:

  • Make sure you run the query on a single value first, to ensure that the SQL query itself is correct. Unfortunately, R gives a very generic "failed to execute" type of error irrespective of whether you missed a comma in the select statement or a variable name is incorrect, which makes debugging very annoying.
  • If you have SQL queries, please optimize them so they pick up only the minimum number of records/columns needed.
  • Parallel programming does NOT mean instant! Processing a million rows may still take an hour or more if you have loads of calculations, though you will still be far faster than sequential processing.
  • A cluster with 1000 or more parallel worker connections is NOT effective, unless you have a supercomputer of course. If using a laptop, don't try more than 10.
  • Don't put a query that writes to a table inside the loop; you will lock the table and the process will NEVER complete.
  • For a sanity check, insert a print statement to confirm that the function is actually working.

 

Until next time, happy coding.

Predictive analytics using Ames Housing Data (Kaggle Starter Script)

Hello All,

In today’s tutorial we will apply 5 different machine learning algorithms to predict house sale prices using the Ames Housing Data.

This dataset is also available as an active Kaggle competition for the next month, so you can use this as a Kaggle starter script (in R). Use the output from the models to generate submission files for the Kaggle platform and see how well you fare on the public leaderboard. This is also a perfect simulation of a real-world analytics problem where the final results are validated by a customer/client/third party.

This tutorial is divided into 4 parts:

  • Data Load & Cleanup
  • Feature Selection
  • Apply algorithms
  • Using arithmetic / geometric means from multiple algorithms to increase accuracy

 

Problem Statement:

Before we begin, let us understand the problem statement :

Predict Home SalePrice for the Test Data Set with the lowest possible error.

The Kaggle competition evaluates the leaderboard score based on the  Root-Mean-Squared-Error (RMSE)  between the logarithm of the predicted value and the logarithm of the observed sales price. So the best model will be one with a score of 0.

The evaluation considers log values, so the model is penalized for incorrectly predicting both expensive houses AND cheap houses; the % deviation from the real value matters, not just the $ value of the homes.
E.g.: if the predicted home price is $42,000 but the actual value is $37,000, the $-value error is only $5,000, which doesn't seem like a lot. However, the error is ~13.5% of the real price.
On the contrary, imagine a home with a real sale price of $389,411 which we predicted to be $410,000. The $ difference is $20,589, yet the % error is only ~5.3%, which is a better prediction.
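
A quick sketch of this metric in R makes the point concrete (the function name is mine, not Kaggle's):

    # RMSE on the log of predictions vs the log of actuals (the Kaggle metric)
    rmsle <- function(predicted, actual) {
      sqrt(mean((log(predicted) - log(actual))^2))
    }

    rmsle(42000, 37000)     # ~0.127 : the cheap house is penalised heavily
    rmsle(410000, 389411)   # ~0.052 : bigger $ error, but a better score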

In real life, too, you will frequently face situations where the sensitivity of predictions is as important as dollar-value accuracy.

As always, you can download the code and data files from the Projects Page here, under Feb 2017.

 

Data Load & Cleanup:

In this step we perform the following tasks (a rough code sketch of the missing-value handling follows the list):

  1. Load the test and training set.
  2. Check training set for missing values.
  3. Delete columns with more than 40% missing records. These include the variables Alley (1369 empty), FireplaceQu (690), Fence (1179), PoolQC (1453), MiscFeature (1406).
  4. The other variables identified in step 2 are:
    • LotFrontage, Alley, MasVnrType, BsmtQual, BsmtCond, BsmtExposure, BsmtFinType1, BsmtFinType2, GarageType, GarageCond.
  5. For columns with very few missing values, we choose one of 3 options:
    • For categorical values, create a new level "unk" to indicate missing data.
    • For columns (numeric or categorical) where the data falls overwhelmingly into one category, replace missing values with that dominant value.
    • For numeric data, mark missing values with -1.
    • NOTE: we apply these rules to the test set as well, for consistency, irrespective of whether it has missing values or not.
  6. Repeat steps 2-5 for the test set columns, since there may be empty cells for columns in the test set which did not show up in the training set. The variables we now identify include:
    • MSZoning, Utilities, Exterior1st, Exterior2nd, MasVnrArea,
    • BsmtFinSF1, BsmtFinSF2, BsmtUnfSF, TotalBsmtSF,
    • BsmtFullBath, BsmtHalfBath, KitchenQual,
    • GarageYrBlt, GarageQual, GarageFinish, GarageCars, GarageArea,
    • SaleType.
  7. We write the modified training and test sets to Excel so we can apply our models on the corrected data. (This is especially helpful in real life if you need to fine-tune your model over a few days' time.)
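
A rough sketch of the cleanup steps above, assuming the standard Kaggle file names; the exact column handling in the downloadable code may differ:

    library(data.table)

    train <- fread("train.csv")
    test  <- fread("test.csv")

    # Step 2: count missing / empty values per column
    na_count  <- sapply(train, function(x) sum(is.na(x) | x == ""))

    # Step 3: drop columns that are more than 40% empty
    drop_cols <- names(na_count)[na_count > 0.4 * nrow(train)]
    train[, (drop_cols) := NULL]
    test[,  (drop_cols) := NULL]

    # Step 5 (simplified): fill the remaining gaps - "unk" for categoricals, -1 for numerics
    for (col in names(train)) {
      if (is.character(train[[col]])) {
        set(train, which(is.na(train[[col]]) | train[[col]] == ""), col, "unk")
      } else {
        set(train, which(is.na(train[[col]])), col, -1L)
      }
    }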

 

Feature Selection:

The easiest way to check if a relationship exists is to use statistical functions: chisquare, anova or correlation.

Our target variable is "SalePrice" (a numeric value), so we will test all the other predictor variables against it to see if any relation exists and how much it affects the SalePrice.
For categorical predictors we will use the chisquare test, whereas for numeric predictors we will use correlation.

Our dataset has 74 predictive factors (excluding SalePrice and Id), so we run a for loop to do a rough check. A relation exists only if the p-value < 0.05.
If the predictor column is of type integer/numeric, we apply correlation; if the column is of type "character", we apply chisquare.

We also add a column to identify variables of interest using the code below:

  • If the correlation value falls below -0.75 (high negative correlation) or above 0.75 (high positive correlation), we mark the variable as a "match".
  • If the p-value from the chisquare test falls below 0.05, we mark it as a "match".
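
Since the original screening code was an image, here is a minimal stand-in for that loop, assuming the cleaned `train` table from the previous section (deliberately the same quick-and-dirty level of rigour the post describes):

    # Rough screening: correlation for numeric predictors, chisquare for character ones
    predictors <- setdiff(names(train), c("Id", "SalePrice"))
    flags <- data.frame(variable = predictors, value = NA_real_,
                        flag = "", stringsAsFactors = FALSE)

    for (i in seq_along(predictors)) {
      x <- train[[predictors[i]]]
      if (is.numeric(x)) {
        r <- cor(x, train$SalePrice)
        flags$value[i] <- r
        if (!is.na(r) && abs(r) > 0.75) flags$flag[i] <- "match"
      } else {
        p <- suppressWarnings(chisq.test(table(x, train$SalePrice))$p.value)
        flags$value[i] <- p
        if (!is.na(p) && p < 0.05) flags$flag[i] <- "match"
      }
    }

    flags[flags$flag == "match", ]   # the shortlisted variables of interest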

Using this "quack" approach, we quickly identify 19 variables of interest. These include Neighborhood, Building Type (single family, townhome, etc.), YearBuilt, Year Remodeled, Basement type (finished/unfinished), house area for the first floor/second floor/garage, number of bathrooms (both full and half), number of cars the garage can accommodate, sale type and sale condition.

NOTE, it is always a good idea to visualize the relationships graphically, as the above approach may miss predictors with a non-linear relationship.

Median House SalePrice by Neighborhood

 

Apply Algorithms:

We apply the following machine learning algorithms:

  1. Linear Regression Model.
  2. Classification Tree Model.
  3. Neural Network Model.
  4. Random Forest algorithm model.
  5. Generalized Linear Model (GLM)

 

We follow the same steps for all 5 models:
(Note: code and functions are shown only for the Linear Regression Model – a sketch follows the list below. Detailed functions and variables used for the other models are available in the R program files.)

  1. Use the training set to create a formula.
  2. Apply formula to predict values for validation set.
  3. Check the deviation from true homeprices in terms of both median $-value and % error.
  4. Apply formula on test set and check rank/score on leaderboard.
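
Since the linear-regression code itself appeared as an image, here is a minimal sketch of those four steps. The hold-out split and the predictor list (drawn from the shortlisted variables) are illustrative only; the author's files may differ:

    set.seed(1)
    idx        <- sample(nrow(train), floor(0.8 * nrow(train)))
    train_part <- train[idx, ]
    validation <- train[-idx, ]

    # Step 1: fit on the training portion
    lm_fit <- lm(SalePrice ~ Neighborhood + BldgType + YearBuilt + YearRemodAdd +
                   GrLivArea + GarageCars + FullBath, data = train_part)

    # Steps 2-3: predict on the validation portion and check $ and % deviation
    val_pred <- predict(lm_fit, newdata = validation)
    median(abs(val_pred - validation$SalePrice))                          # median $ error
    median(abs(val_pred - validation$SalePrice) / validation$SalePrice)   # median % error

    # Step 4: predict on the test set and write a Kaggle submission file
    submission <- data.frame(Id = test$Id, SalePrice = predict(lm_fit, newdata = test))
    write.csv(submission, "lm_submission.csv", row.names = FALSE)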

 

The error rates for all 5 models are given in the table below:

home price table showing model error %

Combining Algorithms to Improve Accuracy:

There are many scientific papers which show that combining answers from multiple models greatly improves accuracy. A simple but excellent explanation (with respect to Kaggle) by the MLwave.com founder and past Kaggle winner is provided here. Basically, combining results from unrelated models can improve accuracy, even if the individual models are not highly accurate on their own.

Similar to the explanation provided in the link, we calculate the arithmetic mean to average the results from all 5 models.
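
The averaging step itself is a one-liner; a sketch, where the five prediction vectors (pred_lm, pred_tree, pred_nnet, pred_rf, pred_glm) are placeholder names for the test-set outputs of the models above:

    # Arithmetic mean of the five models' test-set predictions
    ensemble_pred <- (pred_lm + pred_tree + pred_nnet + pred_rf + pred_glm) / 5

    submission <- data.frame(Id = test$Id, SalePrice = ensemble_pred)
    write.csv(submission, "ensemble_submission.csv", row.names = FALSE)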

 

Summary:

We learnt how to clean and process a real-life dataset, select features of interest (and gauge their impact on the target variable), and apply 5 different machine learning algorithms.

The code for this tutorial is available here on the Projects page, under month of Feb.

Please take a look and feel free to comment with your own feedback or models.

 

Machine Learning Model for Predictive Analytics in 6 easy steps

In this post, we are going to learn how to apply a machine learning model for predictive analytics. We will create 5 models using different algorithms and test the results to compare which model gives the most accurate results. You can use this approach to compete on Kaggle or make predictions using your own datasets.

Dataset – For this experiment, we will use the birth_weight dataset from the Delaware State Open Data Portal, which covers infants born in the period 2009-2016 and includes place of delivery (hospital/birthing center/home), gestation period (premature/normal) and details about the mother's health conditions. You can download the data directly from the Open Data Link (https://data.delaware.gov/browse) or use the file provided with the code.

Step 1 – Prepare the Workspace.

  1. We clean up the memory of the current R session and load some standard library packages (data.table, ggplot2, sqldf, etc.).
  2. We load the dataset "Births.csv", as sketched below.
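
A minimal sketch of this setup step (the exact package list in the downloadable code may differ):

    rm(list = ls())        # clear the current R session

    library(data.table)    # fast file reading and manipulation
    library(ggplot2)
    library(sqldf)

    births <- fread("Births.csv")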

 

 

Step 2 – Data Exploration.

  1. This step helps us understand the dataset – the range of values for variables, the most common occurrences, etc. For our dataset, we look at a summary of birth years, birth weight and the number of unique values.
  2. This is also the point where we handle missing values and decide whether to ignore them (an entire column with a large amount of missing data), delete them (very few records) or replace them with median values. In this set, however, there are no missing values that need to be processed.
  3. Check how many unique values exist for each column (see the sketch below).
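
In code, this exploration step boils down to a few base-R calls; a sketch against the loaded births table:

    summary(births)                                  # ranges and most common values

    colSums(is.na(births))                           # missing values per column (none here)

    sapply(births, function(x) length(unique(x)))    # number of unique values per column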

 

 

Step 3 – Test and Training Set

If you've ever competed on Kaggle, you will know that the "training" set is the data file used to create the machine learning model and the "test" set is the one where we use our model to predict the target variable.

In our case we only have one file, so we manually divide it into 3 sets – one training set and two test sets (a 70%/15%/15% split). Why 2 test sets? Because it helps us better understand how the model reacts to new data. You can work with just one if you like – just use one sequence command and stop with the testdf command.
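
A sketch of such a 70/15/15 split; the object names here are mine (the downloadable code uses its own, e.g. testdf):

    set.seed(270)                              # any fixed seed keeps the split reproducible

    n   <- nrow(births)
    idx <- sample(n)                           # shuffle the row indices

    traindf <- births[idx[1:floor(0.70 * n)], ]
    test1df <- births[idx[(floor(0.70 * n) + 1):floor(0.85 * n)], ]
    test2df <- births[idx[(floor(0.85 * n) + 1):n], ]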

 

Step 4 – Hypothesis Testing

statistical functions

In this step, we try to understand which predictors most affect our target variable, using statistical functions such as ANOVA, chisquare and correlation. The exact function to use can be determined from the table alongside.

Irrespective of which function we use, we assume the following hypotheses:
a) Ho (null hypothesis) – no relation exists. Ho is accepted if the p-value is >= 0.05.
b) Ha (alternate hypothesis) – a relation exists. Ha is accepted if the p-value is < 0.05. If Ha is found true, then we conduct posthoc tests (for ANOVA and chisquare tests ONLY) to understand which sub-categories show significant differences in the relationship. A sketch of such a test is shown below.
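
As an illustration, here is a sketch of the chisquare check plus a Bonferroni-adjusted posthoc comparison. The column names (MOM_RACE, BIRTH_WEIGHT) are assumptions about the Delaware file, not verified names:

    # Chi-square test: birth-weight category vs mother's ethnicity
    tbl <- table(births$MOM_RACE, births$BIRTH_WEIGHT)
    chisq.test(tbl)                      # p < 0.05 => a relation exists (accept Ha)

    # Posthoc: pairwise comparison of the proportion in one weight category,
    # with a Bonferroni adjustment
    pairwise.prop.test(tbl[, 1], rowSums(tbl), p.adjust.method = "bonferroni")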

 

(1) A relation between birth_weight and mom's_ethnicity exists, since the p-value < 0.05.

Using the BONFERRONI adjustment and posthoc tests, we find that mothers with "unknown" race are more likely to have babies with low birth weight, compared to women of other races.

We also see this from the frequency table (below). Only 70% of babies born to mothers of "unknown" race are of normal weight (2500 gms or above), compared to 92% of babies from "other"-race moms and 93% of babies of White-race origin.

mom ethnicity

(2) Relation between birth_weight and when prenatal_care started (first trimester, second, third or none): although we see a p-value < 0.05, Ha cannot be accepted because the posthoc tests do NOT show significant differences among the prenatal-care subsets.

 

(3) Relation between birth_weight and gestation period:

Posthoc tests show that babies in the POSTTERM 42+ WKS and TERM 37-41 WKS groups are similar, and both have higher birth weights than premature babies.
(4) We perform similar tests between birth_weight and multiple births (single, twins or triplets), and gender.

 

 

Step 5 – Model Creation

We create 5 models:

  • LDA (linear discriminant analysis) model with just 3 variables:
  • LDA model with just 7 variables:
  • Decision tree model:
  • Model using Naïve Bayes theorem.
  • Model using Neural Network theorem.

 

(1) Simple LDA model:

Model formula:

Make predictions with test1 file.

Examine how well the model performed.
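
A sketch of those three steps with MASS::lda(); the three predictor names are placeholders for whichever variables the hypothesis tests flagged, not the verified column names:

    library(MASS)

    traindf$BIRTH_WEIGHT <- as.factor(traindf$BIRTH_WEIGHT)   # grouping variable as a factor

    # Fit the 3-variable LDA model on the training set
    lda_model <- lda(BIRTH_WEIGHT ~ MOM_RACE + GESTATION + GENDER, data = traindf)

    # Predict on the first test file
    lda_pred <- predict(lda_model, newdata = test1df)$class

    # Confusion table and overall accuracy
    conf <- table(predicted = lda_pred, actual = test1df$BIRTH_WEIGHT)
    conf
    sum(diag(conf)) / nrow(test1df)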

lda_model prediction-accuracy

From the table alongside, we see that the proportion of correct predictions (highlighted in green)

= (32 + 166 + 4150) / 5000

= 4348 / 5000

= 0.8696

Thus, 86.96% of predictions were correctly identified for test1! (Note, we will use the same process for checking all 5 models.)

Using a similar process, we get 88.4% correct predictions for test2.

 

(2) LDA model with just 7 variables:

Formula:

Make predictions for test1 and test2 files:

We get 87.6% correct predictions for test1 file and 88.57% correct for test2.

 

(3) Decision Tree Model

For the tree model, we first modify the birth weight variable to be treated as a “factor” rather than a string variable.

Model Formula:

Make predictions for test1 and test2 files:
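
A minimal rpart sketch of this model (predictor names again assumed, not the tutorial's exact formula):

    library(rpart)

    traindf$BIRTH_WEIGHT <- as.factor(traindf$BIRTH_WEIGHT)   # target treated as a factor

    tree_model <- rpart(BIRTH_WEIGHT ~ MOM_RACE + GESTATION + GENDER + PLURALITY,
                        data = traindf, method = "class")

    tree_pred1 <- predict(tree_model, newdata = test1df, type = "class")
    tree_pred2 <- predict(tree_model, newdata = test2df, type = "class")

    mean(tree_pred1 == test1df$BIRTH_WEIGHT)   # accuracy on test1
    mean(tree_pred2 == test2df$BIRTH_WEIGHT)   # accuracy on test2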

We get 91.16% correct predictions for the test1 file and 91.6% for test2. However, the sensitivity of this model is a little low, since it has predicted that all babies will be in the normal weight ("2500+") category. This is one of the disadvantages of tree models: if the target variable has a dominant category that accounts for 80% or more of the records, the model tends to assign everyone to it (a sort of brute-force behaviour).

 

(4) Naive Bayes Theorem :

Model Formula:

Make predictions for test1 and test2 files:
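
A sketch with e1071's naiveBayes(), using the same assumed predictors:

    library(e1071)

    nb_model <- naiveBayes(BIRTH_WEIGHT ~ MOM_RACE + GESTATION + GENDER + PLURALITY,
                           data = traindf)

    nb_pred1 <- predict(nb_model, newdata = test1df)
    nb_pred2 <- predict(nb_model, newdata = test2df)

    mean(nb_pred1 == test1df$BIRTH_WEIGHT)
    mean(nb_pred2 == test2df$BIRTH_WEIGHT)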

Again we get model accuracies of 91.16% and 91.6% respectively for the test1 and test2 files. However, this model also suffers from the same "brute-force" behaviour and has marked all babies as normal weight ("2500+"). This reminds us that we must watch both the accuracy and the sensitivity of a model when applying an algorithm for forecasting purposes.

 

(5) Neural Net Algorithm Model :

Model Formula:
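
A sketch with the nnet package; the hidden-layer size, the maxit value and the predictor names are assumptions, not the tutorial's exact settings:

    library(nnet)

    set.seed(270)   # the seed referred to below; a different seed changes the iteration count

    # BIRTH_WEIGHT is already a factor (converted for the tree model above)
    nn_model <- nnet(BIRTH_WEIGHT ~ MOM_RACE + GESTATION + GENDER + PLURALITY,
                     data = traindf, size = 5, maxit = 500)

    nn_pred1 <- predict(nn_model, newdata = test1df, type = "class")
    mean(nn_pred1 == test1df$BIRTH_WEIGHT)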

In the above formula, the "maxit" argument specifies a stop after a maximum number of iterations, so that the program doesn't go into an infinite loop trying to converge. Since we have set the seed to 270, our model converges after 330 iterations; with another seed value this number may be higher or lower.

Make predictions for test1 file:

The validation table (below) shows that the total number of correct observations = 4592, hence the model's forecast accuracy = 4592 / 5000 = 91.84%.

nb_model_accuracy

Test with second file:

Thus, the Neural Net model is accurate at 91.84% and 92.57% for test1 and test2 respectively.

 

 

Step 6 – Comparison of models

We take a quick look at how our models fared using a tabular comparison, and conclude that the neural network algorithm gives us the best accuracy and sensitivity.

compare data models

 

The code and datafiles for this tutorial are added to the New Projects page under “Jan” section. If you found this useful, please do share with your friends and colleagues. Feel free to share your thoughts and feedback in the comments section.

 

Analyzing Fitbit Data with R

Hello All,

First of all, Happy New Year! Wishing you all a fantastic year in 2017 and hope you achieve all your goals for this year, and much more! 🙂

Most people's New Year resolutions are related to health, whether it's going to the gym, eating healthy, walking more, reducing that stubborn belly fat or something similar. Since I bought a Fitbit Charge 2 fitness tracker late last year, I thought it would be an interesting idea to base this month's project on its data.
The entire codebase, images and datafiles are available at this link on a new Projects Page.

 

Project Overview:

The project consists of 3 parts:

  1. Scraping the Fitbit site: for "sleep quality" data. If you log in to the Fitbit site, they do allow export of exercise, sleep duration and some other data. However, crucial data like heartrate during activities, number of movements during the night, duration of restless sleep, etc. are completely missing! I realize not everyone has a Fitbit, so I've added some datafiles for you to experiment with. However, you can use the same logic to scrape other sites, since I am simply logging in with my own credentials (similar to the API programming explained in these posts on the Twitter and Yelp APIs).
  2. Aggregating downloaded data: We also download data freely available on the website itself and then aggregate everything together, selecting only the data we want. This step is important because in the real world data is rarely found in a single repository. Data cleansing, derived variables and other processing steps happen in this section.
  3. Hypothesis testing: In this part, we will try to understand what factors affect sleep quality. Does it depend on movements during the night? Is there better sleep on weekend nights? Does exercising more improve sleep quality?

 

Section 1:

Scraping the Fitbit site was made extremely easy thanks to the "fitbitScraper" package. In our program file "fitbit_scraper.R", we extract sleep-related data for the months of Nov and Dec 2016.
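
A sketch of the scraping call, assuming the fitbitScraper interface of the time (a login() call that returns a cookie, and get_sleep_data()); the credentials shown are obviously fake:

    library(fitbitScraper)

    # Log in with your own Fitbit credentials (fake values shown here)
    cookie <- login(email = "me@example.com", password = "my_password")

    # Pull sleep data for November and December 2016
    sleep_df <- get_sleep_data(cookie, start_date = "2016-11-01", end_date = "2016-12-31")

    str(sleep_df)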

sleep_datafile

Section 2:

We combine the data from the web scraper with the heartrate and exercise datafiles. We now have two months of data for the following variables:

  • sleep duration, start/end time, sleep quality
  • number of movements during the night, number of times awake, and the duration of both
  • calories burnt per day, number of minutes of light/moderate/heavy exercise
  • weekday, date, month

Fitbit dataset

final datafile Fitbit tracker

 

Section 3:

Using the above data, we apply hypothesis-testing methods (ANOVA, correlation and chi-square tests) to understand patterns in our data.
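
For instance, the weekday and movement checks look roughly like this; the data-frame and column names are placeholders for the aggregated file built in section 2:

    # Does sleep quality differ by weekday?  (ANOVA: numeric outcome vs categorical predictor)
    summary(aov(sleep_quality ~ weekday, data = fitbit_df))

    # Is sleep quality related to the number of movements during the night?  (correlation)
    cor.test(fitbit_df$sleep_quality, fitbit_df$num_movements)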

Once you run the code, you will observe the following results:

  1. The number of times awake increases when daily steps are between 4000 and 7000.
  2. Weekends do NOT equate to better sleep, even though the duration of sleep is higher.
  3. Sleep quality is WORST when the number of movements during the night is <10. This may seem counter-intuitive, but I know from personal experience that on days when I am stressed out, I sleep like a robot in one position throughout the night. The data seems to support this theory as well. 🙂
  4. Number of calories burnt is highest during weekends (unsurprising), followed by Tuesday.

Apart from the statistical tests, we also use data visualizations to double-check our analysis. Some plots are given below:

correlation diagram

 

steps versus sleep_quality

 

anova

 

diagram to view relationships between variables

 

Once again, feel free to download the code and play with the data. Share your thoughts and experiences in the comments section.

Until next time, adieu!
