Journey of Analytics

Deep dive into data analysis tools, theory and projects

Category: Monthly projects

Mapping Anthony Bourdain’s Travels

Travel maps tutorial

Anthony Bourdain was an amazing personality – chef, author, world traveler, TV show host. I loved his shows as much for the exotic locations as for the yummilicious local cuisine. So I was delighted to find a dataset that includes the travel locations from all episodes of his 3 hit TV shows. Credit to Christine Zhang for compiling and publishing the dataset on Github.

In today’s tutorial, we are going to plot this extraordinary person’s world travels in R. Our code will cover geospatial mapping using 2 methods:

  • Leaflet package to create zoomable maps with markers
  • Airplane route style maps to see the paths traveled.

Step 1 – Prepare the Workspace

Here we will load all the required library packages, and import the dataset.
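The full list of packages is included in the code files on the Projects page; as a minimal sketch, the snippets in this post rely on at least the following:

library(data.table)  # fread() for fast csv import
library(sqldf)       # SQL-style summaries
library(leaflet)     # interactive, zoomable maps
library(geosphere)   # gcIntermediate() for the curved flight paths
library(maps)        # map('world') background for the route view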

places <- data.frame(fread('bourdain_travel_places.csv'), stringsAsFactors = F)

Step 2 – Basic Exploration

Our dataset has data for 3 of Bourdain’s shows:

  • No Reservations
  • Parts Unknown – which I personally loved.
  • The Layover

Let us take a sneak peek into the data:

dataset preview

How many countries did Bourdain visit? We can calculate this for the whole dataset or by show:

numshow <- sqldf("select show, count(distinct(country)) as num_ctry from places group by show") # Num countries by show.

numctry <- nrow(table(places$country)) # Total countries visited
numstates <- nrow(table(places$state[places$country == 'United States'])) ## Total states visited in the US.

Wow! Bourdain visited 93 countries overall, and 68 countries for his show “No Reservations”. Talk about world travel.

I did notice some records have state names as countries, for example California, Washington and Massachusetts. But these are exceptions, and overall the dataset is extremely clean. Even disregarding those records, 80+ countries is nothing to scoff at, and I had never even heard of some of these exotic locations.

P.S.: You know who else gets to travel a lot? Data scientists earning $100k+ per year. Here’s my new book, which will teach you how to land such a dream job.

Step 3 – Create a Leaflet to View Sites on World Map

Thankfully, the data already has geographical coordinates, so we don’t need to add any processing steps. However, if you have cities which are missing coordinates then use the “worldcities” file from the Projects page under “Rent Analysis”.

We do have some duplicates, where Bourdain visited the same location in 2 or more shows. So we will de-duplicate before plotting.
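A minimal way to do this (assuming the de-duplicated frame is named places4, which is the name used in the next snippet) is to keep one row per city/country pair:

places4 <- places[!duplicated(places[, c("city_or_area", "country")]), ]  # keep the first visit to each location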

Next we will add an info column with the city and country names, which we can use for the marker popups.

places4$info <- paste0(places4$city_or_area, ", ", places4$country) # marker popup text

mapcity <- leaflet(places4) %>%
setView(2.35, 48.85, zoom = 3) %>%
addTiles() %>%
addMarkers(~long, ~lat, popup = ~info,
popupOptions = popupOptions(closeButton = TRUE),
clusterOptions = markerClusterOptions())
mapcity # Show the leaflet

leaflet view – the markers are interactive in R

Step 4 – Flight Route View

Can we plot the cities in flight view style? Yes, we can as long as we transform the dataframe where each record has a departure and arrival city. We do have the show and episode number so this is quite easy.
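Here is one hedged way to build that frame. The Deplong/Deplat/Arrlong/Arrlat column names match the plotting loop further down, but the exact names of the show/episode ordering columns are assumptions about the raw file:

library(dplyr)
citydf3 <- places %>%
  arrange(show, episode) %>%                      # order the stops within each show
  group_by(show) %>%
  mutate(Deplong = lag(long), Deplat = lag(lat),  # previous stop becomes the departure city
         Arrlong = long, Arrlat = lat) %>%
  filter(!is.na(Deplong)) %>%                     # drop the first stop of each show
  ungroup()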

Once we do that we will use a custom function which basically plots a circle marker at the two cities and a curved line between the two.

plot_my_connection = function(dep_lon, dep_lat, arr_lon, arr_lat, ...){
inter <- gcIntermediate(c(dep_lon, dep_lat), c(arr_lon, arr_lat), n=50, addStartEnd=TRUE, breakAtDateLine=F)
inter = data.frame(inter)
diff_of_lon = abs(dep_lon) + abs(arr_lon)
if(diff_of_lon > 180){
lines(subset(inter, lon >= 0), ...)
lines(subset(inter, lon < 0), ...)
}else{
lines(inter, ...)
}
} # custom function

For the actual map view, we will create a background world map image, then use the custom function in a loop to plot each step of Bourdain’s travels. Depending on how we create the transformed dataframe, we can plot Bourdain’s travels for a single show, single season or all travels.

Here are two maps, one for the show “Parts Unknown” and one for “The Layover”. Since the former had more seasons, its map is a lot more congested.

Parts Unknown seasons – travel maps

par(mar=c(0,0,0,0)) # background map
map('world', col="#262626", fill=TRUE, bg="white", lwd=0.05, mar=rep(0,4), border=0, ylim=c(-80,80)) # other cols = #262626; #f2f2f2; #727272
for(i in 1:nrow(citydf3)){
plot_my_connection(citydf3$Deplong[i], citydf3$Deplat[i], citydf3$Arrlong[i], citydf3$Arrlat[i],
col="gold", lwd=1)
} # add every connection
points(x=citydf$long, y=citydf$lat, col="blue", cex=1, pch=20) # add points for the cities
text(citydf$city_or_area, x=citydf$long, y=citydf$lat, col="blue", cex=0.7, pos=2) # plot city names

As always, the code files are available on the Projects Page. Happy Coding!

Call to Action:

If you read this far and also want a job or promotion in the data science field, then please do take a look at my new book “Data Science Jobs”. It will teach you how to optimize your profile to land great jobs with high salaries, and includes 100+ interview questions and a list of niche job sites which everybody else overlooks.

Email Automation for Google Trends

This blogpost will teach you how to set up automated email reports to track how search volumes, i.e. Google Trends, vary over time.

Email automation for Google Trends over time

The email report will also include important search terms that are “rising” or near a “breakout point”. This can be really useful as the breakout keywords indicate users across the globe have recently started paying great attention to these search terms. So you could create original content to attract people using these phrases in their Google searches. This is an amazing way to boost traffic/ sales leads, etc. by taking advantage of new trends.

Note that the results are relative, not absolute, search volumes. But it is still quite useful, as you could narrow down your keywords list, and log in to Adsense/keyword planner to get the exact search volumes. Export the list from this script to a .csv and simply upload it in Adsense. It also gives you a clear indication of how search volumes compare across similar terms – classic SEO!

We will perform all our data pull requests and manipulations in R. The best part is that you do not need any API keys or logins for this tutorial!

[ Do you know what else is trending? My book on how to get hired as a data scientist. Check it out on Amazon, where it is the #1 NEW RELEASE and top 10 in its categories. Whether you are currently enrolled in a data science course or actively job searching, this book is sure to help you attract multiple job offers in this lucrative niche. ]

Without further interruptions – let’s dive right into the trend analysis.

Step 1 – Load Libraries

Start by loading the libraries – “gtrendsR” is the main library package, which will help us pull the search trends data.

library(gtrendsR)
library(ggplot2)
library(plotly)
library(dplyr)
library(sqldf)
library(reshape2)

Step 2 – Search terms and timeframe

We need to specify the variables we need to pass to the search query:

  • search terms of interest. Unfortunately, you can only pass 5 terms at one time. Theoretically you could store the value and repeat the searches, but since numbers are relative, make sure that you keep at least one term in common and then normalize.
  • Remember, we get relative search volumes (max = 100); NOT absolute volumes!
  • timeframe under consideration – has to be in the format “YYYY-mm-dd” ONLY. No exceptions. End date = today, whereas start date will be the first of the month, six months ago.
  • country codes – I’ve only used “US”, but you can search for multiple countries using a string array like c(“US”, “CA”, “DE”) which stands for US, Canada and Germany.
  • Channels – default is “web”, but you can also search for “news” or “images”. Images would be good for those promoting content on Pinterest or Instagram.

date6mthpast <- Sys.Date() - 180
startdate <- paste0(substr(as.character(date6mthpast), 1, 7), "-01")
currdate = as.character(Sys.Date())
timeframe = paste(startdate, currdate)
keywords <- c("Data Science jobs", "MS Analytics Jobs", "Analytics jobs",
"Data Scientist course", "job search")
country = c('US')
channel = 'web'

We will use the gtrends() function to pull the data. This function returns a list of dataframes. Of main interest to us are “interest_over_time” and “interest_by_region” (mostly state names); there is also a dataframe named “interest_by_city”. Only the first one holds data by date, while the others are aggregated over the entire timeframe. I worked around this by pulling once for 6 months, and again for the last 45 days.

trends = gtrends(keywords, gprop = channel, geo = country, time = timeframe)
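The plot in the next step references a frame called time_trend with a numeric hitval column; a minimal sketch of how it could be derived from the gtrends() output is below (the hits column is occasionally reported as "<1", hence the substitution):

time_trend <- trends$interest_over_time                            # relative search volume by date and keyword
time_trend$hitval <- as.numeric(gsub("<1", "0", time_trend$hits))  # numeric column used for plotting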

Step 3 – Search Volume over time

We will use the ggplot function to get a graphical representation of the search volume.

plot1 <- ggplot(data=time_trend, aes(x=date, y=hitval, group=keyword, col=keyword)) +
geom_line() + xlab('Time') + ylab('Relative Interest') + theme_bw() +
theme(legend.title = element_blank(), legend.position="bottom", legend.text=element_text(size=12)) +
ggtitle("Google Search Volume over time")
plot1 # stored as plot1 so it can be re-used in the pdf report later

Google search volume over time

Notice that interest in the term “job search” is significantly higher than any other keyword, and spikes in April and May (people graduating, maybe?). This makes sense as it is a more generic term and applies to a larger number of users. In a similar vein, look at the volumes for “MS analytics jobs”: they are almost nil, so it is clearly a non-starter in terms of targeting.

Step 4 – Interest by Regions

Let us look at interest by region (state names in the US). This might be useful for targeting folks by region, or even for location-based ads.

locsearch = trends$interest_by_region
plot2 <- ggplot(subset(locsearch, locsearch$keyword != "job search"),
aes(x=as.factor(location), y=hits, fill=as.factor(keyword))) +
geom_bar(position="dodge", stat="identity") +
coord_flip() +
theme(legend.title = element_blank(), legend.position="bottom", legend.text=element_text(size=9)) +
xlab('Region') + ylab('Relative Interest') +
ggtitle("Google Search Volumes by Region/State")

Note that “data scientist course” has almost zero interest compared to broader terms like “analytics jobs” or “data science jobs”. Interestingly, “analytics jobs” seems to be the preferred term over “data science” only in NY, MA and Washington DC.

Step 5 – Trending Terms

The gtrends() output also includes a dataframe titled “related_queries”. Some of these queries are tagged as “rising” or “breakout”, and these are the terms we would like to know about as they occur.

reltopicdf <- trends$related_queries # related queries dataframe returned by gtrends()
breakoutdf = reltopicdf[reltopicdf$related_queries == 'rising',]
volbrkdf = breakoutdf[1:10,]
row.names(volbrkdf) = NULL
volbrkdf

trending search terms related to keywords of interest

Step 6 – Create pdf for email attachments

We will use the pdf() command to create an attachment with all the graphs and add it to the email we want to send/receive on a daily basis.

filename2 = paste0("Google Search Trends - ", Sys.Date(), ".pdf")
pdf(filename2) # re-use filename2 so the attachment step below points to the same file
print(plot1)   # explicit print() so the ggplot objects render when run as a script
print(plot2)
print(volbrkdf)
dev.off()

If you want to embed one of these figures in the email itself, instead of attachment then please see my post on automated reports.

Step 7 – Send the email

We use the RDCOMClient library to send out emails. You do need to have the Outlook mail client installed on your PC.

library(RDCOMClient)
OutApp <- COMCreate("Outlook.Application")
outMail = OutApp$CreateItem(0)
outMail[["To"]] = "an**@gmail.com"
outMail[["subject"]] = paste("Google Search Trend Report -", Sys.Date())
email_body1 <- "Write email body content with correct html tags"
outMail[["HTMLBody"]] = email_body1
file_location = paste0(abs_path, "/", filename2)
outMail[["Attachments"]]$Add(file_location)
outMail
outMail$Send()

The first time you run this, you might need to “allow” RStudio to access your email account. Just add this script to a task scheduler, choose the frequency of delivery, and you are all set! Note that a dummy email address is used above, so remember to change the recipient address.

Try it out and let me know if you have any questions, or run into errors. The script is available on the Projects page – link here

Last but not the least, if you are interested in getting hired in the data science field, then please do take a look at my job-search book. Here is the Amazon link.

Happy Coding!

Automated Email Reports with R

R is an amazing tool to perform advanced statistical analysis and create stunning visualizations. However, data scientists and analytics practitioners do not work in silos, so these analyses have to be copied and emailed to senior managers and partner teams. Cut-copy-paste sounds great, but if it is a daily or periodic task, it is more useful to automate the reports. So in this blogpost, we are going to learn how to do exactly that.

The R-code uses specific library packages to do this:

  • RDCOMClient – to connect to Outlook and send emails. In most offices, Outlook is still the de facto email client, so this is fine. However, if you are using Slack or something different, it may not work.
  • r2excel – to create an Excel output file.

The screenshot below shows the final email view:

email screenshot

As seen in the screenshot, the email contains the following:

  • Custom subject with current date
  • Embedded image
  • Attachments – 1 Excel and 1 pdf report

Code Explanation:

The code and supporting input files are available here, on the Projects page under “Report automation”. The code has 6 parts:

  • Prepare the work space.
  • Pull the data from source.
  • Cleaning and calculations
  • Create pdf.
  • Create Excel file.
  • Send email.

The workflow should look something like this. For really simple reports, where you do not need much formatting, you can do everything within the first script itself.

Prepare the work space

I always set the relative paths and working directories at the very beginning, so it is easier to change paths later. You can replace the link with a shared network drive path as well.

Load library packages and custom functions. My code uses the r2excel package, which is not directly available as an R-cran package, so you need to install it using devtools with the code below.
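A minimal sketch of that first-time setup (the GitHub location of r2excel is an assumption; adjust it to wherever your copy lives):

install.packages("devtools")                    # first-time only
devtools::install_github("kassambara/r2excel")  # assumed GitHub location of r2excel
library(r2excel)                                # from the second run onwards, this line is all you need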

It is possible to do something similar using the “xlsx” package, but r2excel is easier.

Some other notes:

  • you need the first 2 lines of code only the first time you install the package. From the second time onwards, you only need to load the library.
  • r2excel seems to work only with 64-bit installations of R and Rstudio.
  • you do need Java installed on your computer. If you see an error about java namespace, then check the path variables. There is a very useful thread on Stackoverflow, so take a look.
  • As always, if you see errors Google it and use the Stack Overflow conversations. In 99% of cases, you will find an answer.

Pull the data from source

This is where we connect to a csv (or text) file. In practice, most people connect to a database of some kind. The R-script I am using connects to a .csv file, but I have also added the code to connect to a SQL database.

That code snippet is commented out, so feel free to substitute your own SQL database links. The code will also work for an Amazon EC2 cluster.
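As a hedged illustration (the file name, DSN and query below are placeholders, not the actual report source), the pull section looks something like this:

library(data.table)
appdf <- data.frame(fread("googleplaystore.csv"), stringsAsFactors = FALSE)  # placeholder csv pull

# Alternative: pull directly from a SQL database (commented out, as in the script)
# library(RODBC)
# conn  <- odbcConnect("my_dsn", uid = "user", pwd = "password")  # placeholder DSN and credentials
# appdf <- sqlQuery(conn, "select * from app_table")              # placeholder query
# odbcClose(conn)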

Some points to keep in mind:

  • If you are using sqlQuery(), please note that if your query has an error, R sadly shows only a generic error message. So test your query on the SQL server first to ensure that you are not missing anything.
  • Some queries do take a long time if you are pulling from a huge dataset. Also, the time taken will be longer in R compared to a direct SQL server connection. Using the Sys.time() command before and after the query is helpful to know how long the query took to complete.
  • If you only plan to pull the data occasionally, it may make sense to pull from the SQL server once and store the results locally. Use the fread() function to read those files.
  • If you are using R desktop instead of R-server, the amount of data you can pull may be limited by your system configuration.
  • ALWAYS optimize your query. Even if you have unlimited memory and computation power, only pull the data you absolutely need. Otherwise you end up unnecessarily sorting through irrelevant data.

Cleaning and calculations

For the current data, there are no NAs, so we don’t need to account for those. However, the read.csv() command creates factors, which I personally do not like, as they sometimes cause issues while merging.

Some of the column names have “.” where R converted the spaces in the names. So we will manually replace those with an underscore using the gsub() function.
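For example (assuming the frame is called appdf, as in the earlier sketch):

names(appdf) <- gsub(".", "_", names(appdf), fixed = TRUE)  # e.g. a column read in as "Content.Rating" becomes "Content_Rating"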

We will also rank the apps based on categories of interest, namely:

  • Most Popular Apps – by number of Reviews
  • Most Popular Apps – by number Downloads and Reviews
  • Most Popular Categories – Paid Apps only
  • Most popular apps with 1 billion installations.

Create pdf

We are going to use the pdf() function to paste all the graphs into a pdf document. Basically, what this function does is write the graphs to a file rather than show them on the console. So the only thing to remember is that if you are testing graphs or make an incorrect graph, everything will get posted to the pdf until you call dev.off(). Sometimes if a graph throws an error you may end up with a blank page, or worse, a corrupt file that cannot be opened.

Currently, the code is only printing 2 simple graphs using the ggplot() and barplot() functions, but you can include many other plots as well.
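A minimal sketch of that pattern (the plot object, column and file names are illustrative, not the actual report contents):

pdf(paste0("app_report_", Sys.Date(), ".pdf"))  # everything plotted from here on is written to the file
print(plot1)                                    # ggplot objects need an explicit print() inside a script
barplot(table(appdf$Category), las = 2, cex.names = 0.6)  # a simple base-R bar chart
dev.off()                                       # close the device; skipping this can leave a corrupt pdf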

Create Excel file.

The Excel is created in the sequence below (a short code sketch follows the list):

  • Specify the filename and create a workbook object of type .xlsx. This creates an empty Excel placeholder; it is only complete when you save the workbook using saveWorkbook() at the end of the section.
  • Use createSheet() to create the different worksheets within the Excel file.
  • xlsx.addHeader() adds a bold header to each sheet, which helps readers understand the content on the page. The r2excel package has other functions to add more informative text in a smaller (non-header) font as well, if you need to give some context to readers. Obviously, this is optional.
  • xlsx.addTable() – this is the crucial function that adds the content to the Excel file, the main “meat” of what you need to show.
  • saveWorkbook() – this function saves the Excel file to the folder.
  • xlsx.openFile() – this function opens the file so you can view its contents. I typically have the script running in automated mode, so when the Excel file opens, I am notified that the script has completed.
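Putting the sequence together, a minimal sketch might look like the following (sheet names, file name and data are illustrative, and the workbook helpers are assumed to follow the xlsx/r2excel workflow described above):

library(r2excel)
wb <- createWorkbook(type = "xlsx")                # empty Excel placeholder
sheet1 <- createSheet(wb, sheetName = "Top Apps")  # one worksheet per view
xlsx.addHeader(wb, sheet1, value = "Most Popular Apps - by number of Reviews", level = 2)
xlsx.addTable(wb, sheet1, data = head(appdf, 20))  # the main content
saveWorkbook(wb, "app_report.xlsx")                # nothing is written to disk until this call
xlsx.openFile("app_report.xlsx")                   # open the finished file for a quick check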

Send email

The email is sent using the following functions:

  • OutApp – the Outlook application object, created via COMCreate(). As I mentioned earlier, you do need Outlook installed and need to be signed in for this to work. I use Outlook for work and at home, so I have not explored options for Slack or other email clients.
  • outMail[["To"]] – specify the people in the “To” field. You could also read email addresses from a file and pass the values here.
  • outMail[["Cc"]] – similar concept, for the cc field.
  • outMail[["Subject"]] – I have used the paste0() function to add the current date to the subject, so recipients know it is the latest report.
  • outMail[["HTMLBody"]] – I used the HTML body so that I can embed the image. If you don’t know HTML, no worries: the code is pretty intuitive, and you should be able to follow what I’ve done. The image is basically an attachment which the HTML code forces to be displayed within the body of the email. If you are sending the email to people outside the organization, they may see a small box with a cross on the top left (or right) instead of the image. Usually, when they hover the mouse near the box and right-click, it will ask them to download the images. You may have seen similar messages in Gmail, along with a link to “show images” or “always show images from this sender”. You obviously cannot control what the recipient selects, but testing by sending to yourself first helps smooth out potential aesthetic issues.
  • outMail[[“Attachments”]] – function to add attachments.
  • outMail$Send() – until you run this command, the mail will not be sent. If you are using this at the office, you may get a popup asking you to do one of the following. Most of these will generally go away after the first use, but if they don’t, please look up the issue on StackOverflow or contact your IT support for firewall and other security settings.
    • popup to hit “send”
    • popup asking you to “classify” the attachments (internal/public/confidential). Select as appropriate; for me, this selection is usually “internal”.
    • popup asking you to accept “trust” settings
    • popup blocker notifying you to allow backend app to access Outlook.

That is it – and you are done! You have successfully learned how to send an automated email via R.

How to raise money on Kickstarter – extensive EDA and prediction tutorial

In this tutorial, we will explore the characteristics of projects on Kickstarter and try to understand what separates the winners from the projects that failed to reach their funding goals.

Qs for Exploratory Analysis:

We will start our analysis with the aim of answering the following questions:

    1. How many projects were successful on Kickstarter, by year and category?
    2. Which sub-categories raised the most amount of money?
    3. Which countries do the projects originate from?
    4. How many projects exceeded their funding goal by 50% or more?
    5. Did any projects reach $100,000 or more? $1,000,000 or higher?
    6. What was the average amount contributed by each backer, and how does this change over time? Does this amount differ with categories?
    7. What is the average funding period?

 

Predicting success rates:
Using the answers from the above questions, we will try to create a model that can predict which projects are most likely to be successful.

The dataset is available on Kaggle, and you can run this script LIVE using this kernel link. If you find this tutorial useful or interesting, then please do upvote the kernel ! 🙂

Step 1 – Data Pre-processing

a) Let us take a look at the input dataset:

The projects are divided into main and sub-categories. The pledged amount “usd_pledged” has an equivalent value converted to USD, called “usd_pledged_real”. However, the goal amount does not have this conversion. So for now, we will use the amounts as is.

We can see how many people are backing each individual project using the column, “backers”.

b) Now let us look at the first 5 records:

The name doesn’t really indicate any specific pattern although it might be interesting to see if longer names have better success rates. Not pursuing that angle at this time, though.

c) Looking for missing values:

Hurrah, a really clean dataset, even after searching for “empty” strings. 🙂

 d) Date Formatting and splitting:

We have two dates in our dataset – “launch date” and “deadline date”. We convert them from strings to date format.
We also split these dates into the respective year and month columns, so that we can plot variations over time.
So we will now have 4 new columns: launch_year, launch_month, deadline_year and deadline_month.
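A hedged sketch of this step (assuming the Kaggle frame is loaded as ks, with the raw columns named launched and deadline as in the Kaggle file):

ks$launched <- as.Date(ks$launched)   # launch date, string to Date
ks$deadline <- as.Date(ks$deadline)   # deadline date
ks$launch_year    <- as.numeric(format(ks$launched, "%Y"))
ks$launch_month   <- as.numeric(format(ks$launched, "%m"))
ks$deadline_year  <- as.numeric(format(ks$deadline, "%Y"))
ks$deadline_month <- as.numeric(format(ks$deadline, "%m"))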

Exploratory analysis:

a) How many projects are successful?

We see that “failed” and “successful” are the two main categories, comprising ~88% of our dataset.
Sadly we do not know why some projects are marked “undefined” or “canceled”.
“live” projects are those where the deadlines have not yet passed, although a few among them have already achieved their goals.
Surprisingly, some ‘canceled’ projects had also met their goals (pledged_amount >= goal).
Since these other categories are a very small portion of the dataset, we will subset and only consider records with status “failed” or “successful” for the rest of the analysis.

b) How many countries have projects on kickstarter?

We see that projects are overwhelmingly US-based. Some country names carry the garbled tag N,0", so we mark those as unknown.

c) Number of projects launched per year:

Looks like some records show dates like 1970, which does not look right. So we discard any records with a launch/deadline year before 2009.
Plotting the counts per year on a graph: from the graph below, it looks like the count of projects peaked in 2015, then went down. However, this should NOT be taken as an indicator of success rates.

Drilling down a bit more to see count of projects by main_category.

Over the years, maximum number of projects have been in the categories:

    1. Film & Video
    2. Music
    3. Publishing

 d) Number of projects by sub-category: (Top 20 only)


The Top 5 sub-categories are:

    1. Product Design
    2. Documentary
    3. Music
    4. Tabletop Games (interesting!!!)
    5. Shorts (really?! )

Let us now see the “Status” of projects for these Top 5 sub_categories:
From the graph below, we see that for the categories “shorts” and “tabletop games” there are more successful projects than failed ones.

 e) Backers by category and sub-category:

Since there are a lot of sub-categories, let us explore the sub-categories under the main theme “Design” 

Product design is not just the sub-category with the highest count of projects, but also the category with the highest success ratio.

 f) Add a flag to see how many projects got funded more than their goal.

So ~40% of projects reached or surpassed their goal, which matches the number of successful projects.

 g) Calculate average contribution per backer:

From the mean, median and max values we quickly see that the median amount contributed by each backer is only ~$40 whereas the mean is higher due to the extreme positive values. The max amount by a single backer is ~$5000.

h) Calculate reach_ratio

The amount per backer is a good start, but what if the goal amount itself is only $1000? Then an average contribution per backer of $50 implies we only need 20 backers.
So to better understand the probability of a project’s success, we create a derived metric called “reach_ratio”.
This takes the average user contribution and compares it against the goal fund amount.
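A hedged sketch of the calculation, expressed as a percentage (column names follow the Kaggle file; pmax() guards against projects with zero backers):

ks$avg_backer_amt <- ks$usd_pledged_real / pmax(ks$backers, 1)  # average contribution per backer
ks$reach_ratio    <- 100 * ks$avg_backer_amt / ks$goal          # compare it against the goal amount
summary(ks$reach_ratio)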

We see the median reach_ratio is <1%. Only in the third quartile do we even touch 2%!
Clearly most projects have a very low reach ratio. We could subset for “successful” projects only and check if the reach_ratio is higher.

 i) Number of days to achieve goal:

 Predictive Analytics:

We will apply a very simple decision tree algorithm to our dataset.
Since we do not have a separate “test” set, we will split the input dataframe into 2 parts (70/30 split).
We will use the smaller set to test the accuracy of our algorithm.
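A minimal sketch of that setup with rpart (the predictor list and seed are illustrative; the full version is in the Kaggle kernel linked above):

library(rpart)
set.seed(42)                                  # illustrative seed for a reproducible split
trainidx <- sample(nrow(ks), 0.7 * nrow(ks))  # 70/30 split
trainset <- ks[trainidx, ]
testset  <- ks[-trainidx, ]
treemod  <- rpart(factor(state) ~ backers + reach_ratio + main_category + goal,
                  data = trainset, method = "class")
print(treemod)                                # peek at the tree rules
pred <- predict(treemod, testset, type = "class")
table(pred, testset$state)                    # confusion matrix on the 30% holdout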

Taking a peek at the decision tree rules:

kickstarter success decision tree

Thus we see that “backers” and “reach-ratio” are the main significant variables.

Re-applying the tree rules to the training set itself, we can validate our model:

From the above tables, we see that the error rate = ~3% and area under curve >= 97%

Finally applying the tree rules to the test set, we get the following stats:

From the above tables, we see that the error rate is still ~3% and the area under the curve >= 97%.

 

Conclusion:

Thus in this tutorial, we explored the factors that contribute to a project’s success. Main theme and sub-category were important, but the number of backers and the “reach_ratio” were found to be most critical.
If a founder wanted to gauge their probability of success, they could measure their reach_ratio halfway to the deadline, or perhaps when 25% of the timeline is complete. If the numbers are low, it means they need to double down and use promotions/social media marketing to get more backers and funding.

If you liked this tutorial, feel free to fork the script. And don’t forget to upvote the kernel! 🙂

Who wants to work at Google?

In this tutorial, we will explore the open roles at Google, and try to see what common attributes Google is looking for in future employees.

 

This dataset comes from the Kaggle site, and contains text information about job location, title, department, minimum and preferred qualifications, and the responsibilities of each position. You can download the dataset here, and run the code on the Kaggle site itself here. Using this dataset, we will try to answer the following questions:

  1. Where are the open roles?
  2. Which departments have the most openings?
  3. What are the minimum and preferred educational qualifications needed to get hired at Google?
  4. How much experience is needed?
  5. What categories of roles are the most in demand?

Data Preparation and Cleaning:

The data is all in free-form text, so we do need to do a fair amount of cleanup to remove non-alphanumeric characters. Some of the job locations have special characters too, so we remove those using basic string manipulation functions. Once we read in the file, this is the snapshot of the resulting dataframe:

Job Categories:

First we look at which departments have the most open roles. Surprisingly, there are more roles open in the “Marketing and Communications” and “Sales & Account Management” categories than in the traditional technical business units (like Software Engineering or Networking).

Full-time versus internships:

Let us see how many roles are full-time and how many are for students. As expected, only ~13% of roles are for students, i.e. internships. The majority are full-time positions.

Technical Roles:

Since Google is a predominantly technical company, let us see how many positions need technical skills, irrespective of the business unit (job category).

a) Roles related to “Google Cloud”:

To check this, we investigate how many roles have the phrase either in the job title or the responsibilities. As shown in the graph below, ~20% of the roles are related to Cloud infrastructure, clearly showing that Google is making Cloud services a high priority.
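A hedged one-liner for that check (the frame and column names are assumptions based on the dataset description):

cloudflag <- grepl("Google Cloud", jobsdf$Title, ignore.case = TRUE) |
             grepl("Google Cloud", jobsdf$Responsibilities, ignore.case = TRUE)
round(100 * mean(cloudflag), 1)  # percentage of roles tied to Google Cloud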

Educational Qualifications:

Here we are basically parsing the “min_qual” and “pref_qual” columns to see the minimum qualifications needed for the role. If we only take the minimum qualifications into consideration, we see that 80% of the roles explicitly ask for a bachelor’s degree. Less than 5% of roles ask for a master’s or PhD.

min_qualifications for Google jobs

However, when we consider the “preferred” qualifications, the ratio increases to a whopping ~25%. Thus, a fourth of all roles would be better suited to candidates with master’s degrees and above.

Google Engineers:

Google is famous for hiring engineers for all types of roles. So we will read the job qualification requirements to identify what percentage of roles require a technical degree or a degree in Engineering.
As seen from the data, 35% specifically ask for an Engineering or computer science degree, including roles in marketing and other non-engineering departments.

Years of Experience:

We see that 30% of the roles require at least 5 years of experience, while 35% of roles need even more.
So if you did not get hired at Google right after graduation, no worries. You have a better chance after gaining strong experience at other companies.

Role Locations:

The dataset does not have the geographical coordinates for mapping. However, this is easily overcome by using the geocode() function and the amazing rworldmap package. We are only plotting the locations, so some places will have more roles than others. Overall, we see open roles in all parts of the world; however, the maximum number of positions are in the US, followed by the UK, and then Europe as a whole.

Responsibilities – Word Cloud:

Let us create a word cloud to see what skills are most needed for the Cloud engineering roles: we see that words like “partner”, “custom solutions”, “cloud”, “strategy” and “experience” are more frequent than any specific technical skills. This shows that the Google Cloud roles are best filled by senior resources, for whom leadership and business skills become more significant than expertise in a specific technology.
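For reference, here is a minimal sketch of how such a word cloud could be built with the tm and wordcloud packages (re-using the cloudflag indicator from the earlier snippet, which is itself an assumption):

library(tm)
library(wordcloud)
library(RColorBrewer)
cloudtext <- tolower(jobsdf$Responsibilities[cloudflag])  # responsibilities of the cloud-related roles
corpus <- Corpus(VectorSource(cloudtext))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
tdm <- TermDocumentMatrix(corpus)
freqs <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)  # word frequencies
wordcloud(names(freqs), freqs, max.words = 100, random.order = FALSE, colors = brewer.pal(8, "Dark2"))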

 

Conclusion:

So who has the best chance of getting hired at Google?

For most of the roles (from this dataset), a candidate with the following traits has the best chance of getting hired:

  1. 5+ years of experience.
  2. Engineering or Computer Science bachelor’s degree.
  3. Masters degree or higher.
  4. Working in the US.

The code for this script and graphs are available here on the Kaggle website. If you liked it, don’t forget to upvote the script. 🙂

Thanks and happy coding!
