Journey of Analytics

Deep dive into data analysis tools, theory and projects


KubeCon – Preparation Checklist for Attendees

Just 5 days left until KubeCon + CloudNativeCon North America! 🙂 I am quite excited to finally attend this awesome conference and get the chance to visit sunny San Diego! 🙂 Whether you are a first-time attendee as well, or just looking to get your money’s worth from the conference, here is a list of to-dos to make the most of this experience.

If you have not heard about KubeCon, it is a conference focused on Kubernetes and related container technologies, which help get software applications running reliably on cloud services. This is an entire ecosystem, and in the next few years it will change software infrastructure concepts for all companies. Myriad companies, including Uber, Google, Shopify and JPMorgan, are already on board and deploying using these new methods.

These technologies are also a huge part of how machine learning models and AI applications are implemented successfully and at scale, which is why I (as owner of this datascience blog) got interested in Kubernetes. If you’ve run machine learning models using cloud services, you might have also used some of these tools without ever being aware of it.

This is (obviously) my first time attending this conference and visiting the city, so I had tons of Qs and thoughts. The amazing list of speakers and conference tracks also make it hard to choose which sessions to attend. Thankfully, I was able to get some excellent advice from the dedicated Slack channels for the conference and past attendees.

Since the countdown clock has started, I’ve summarized the tips for others, so you can make the most of this experience.

1. Get on Slack

  • I am so thankful to Wendi West, Paris Pittman and the other moderators in the Slack channels for patiently answering questions, sharing hotel recommendations, sending event reminders and building some great vibes for the conference!
  • I found a lot of useful information on the channel specific to the Diversity scholarship recipients, followed by the events channel. If you still have last minute Qs, then post on this channel or DM the organizers.
  • The Slack channels are great to connect with folks before the event, so you have some familiar faces to meet at the conference.
  • If you have not checked the Slack channel – look it up via cloud-native.slack.com

2. Which Sessions to Attend

  • By now everyone should have created an itinerary for themselves. If not, please use the “sched” app with the following workspace url https://kccncna19.sched.com/
  • Note that you should have one broad agenda for the conference – either gathering info for attempting a certification, networking for a job, discussing case studies so you can apply concepts at your job, or something else. This will allow you to make better selections without feeling overwhelmed by the sheer number of (fantastic) choices!
  • For me, the main goal is to understand how Kubernetes is deployed for machine learning & AI projects. Since I work for a bank, security issues and migration of legacy/mainframe software into cloud services are also relevant topics, as are case studies. Having this theme allowed me to quickly decide and create a meaningful list of sessions to attend. I also hope to network with 25+ new people, a goal that should be quite easy at a conference with 8,000+ attendees spanning 5 days.
  • PRO TIP: A couple of past attendees advised me not to over-schedule and to check room locations. Although the conference is mainly happening at the San Diego Convention Center, some sessions are being held at the Marriott and other hotels. So look them up and make sure you have enough time to walk between venues.
    • Plus, if a session is quite interesting, you might want to hang back and chat with the speaker or ask additional clarifications. This might cause you to miss the next session, so design your schedule carefully.

3. Networking

  • Being part of a distinct Diversity scholarship Slack channel means that I’ve already connected with 5-10 other recipients. After all these online discussions, it will be great to meet these talented and ambitious folks in person!
  • Most past attendees have emphatically stated that folks attending this conference are very generous with their time, so please make the most of their expertise and knowledge.
  • Don’t be shy! Speak up.
  • Speakers are amazing, and human too! So feel free to say hello after the session, and ask follow up Qs or just thank them for an interesting discussion.
  • For those who are extremely nervous about networking, here is a unique tip that someone told me years ago. Pick a color and talk to at least 5 people wearing clothes in that color. This might seem crazy, but it is a very practical way of overcoming self-bias and prejudice, and talking to people we would not normally approach (out of shyness, feeling out of place or other reasons). I’ve used it at other events and conferences and made some fabulous connections!
  • Use the LinkedIn app and connect immediately. If you met someone interesting, send them an invite during the conversation itself. No one ever says no, and if you wait you will forget to follow up, either because you forgot their name, used the wrong spelling or misplaced their business card. Plus, at such large conferences it is terribly hard to keep track of all the people you meet. I used this at the Philly AI conference very successfully, and can’t wait to connect with folks at KubeCon too!

During the conference:

  • Keep a notepad handy for broad keywords and ideas that are directly applicable to your role (and conference goal).
  • List the session date, speaker name and time. This will help you look it up later, especially since many speakers post their slides and videos after the conference.
  • Tweet! Use the tags #KubeCon #CloudNativeCon and #DiversityScholarship.
  • Connect with people on LinkedIn (reiterating from above)
  • Check out the sponsored coffee/breakfast sessions and after hour meetups; I’ve heard they are amazing, as are the “lightning talks” post 5 pm.
  • Attend the sponsor expo and booths. Apart from the cool swag (tees, pens, stickers, etc.), you will get to see some interesting demos and hobnob with folks from companies of every size, from startups to large enterprises like Microsoft and Palo Alto Networks. It’s a great way to learn what’s happening in this space – you might even get your own unicorn startup idea! 🙂

Post Conference

  • Reconnect with the folks you’ve met on LinkedIn.
  • Add a blog post summarizing info you’ve learnt and takeaways from the conference. Everyone has a unique perspective, so don’t feel as if everything has already been said! Ideally do this within a week, when you are still fresh with your ideas. Remember to use the hashtags.
  • Add pictures from the event on LinkedIn. Make sure to tag your new friends too!
  • If possible, present a brown bag or session to your team (or group) at the office. This is a great way of disseminating information to others who could not attend, improving your public speaking skills and scoring some brownie points for your next employee appraisal! Win-win all around.
  • Use what you’ve learnt. Even if it is just a little portion!
  • Plan ahead to attend next year’s conference!

That’s it from me, see you all at the conference!

November Thanksgiving – Data Science Style!

Hello All,

November is the month of Thanksgiving, vacations and of course deals galore! As part of saying thanks to my loyal readers, here are some deals specific to data science professionals and students that you should definitely not miss out on.

Book deals:

  1. If you are exploring Data Science careers or preparing for interviews ahead of a winter graduation, then take a look at my ebook “Data Science Jobs“. It is currently part of a Kindle countdown deal, priced 50% off at only $2.99; the price will keep increasing until Friday morning, when it goes back to full price.
  2. Want a FREE book on Statistics, as related to R programming and machine learning algorithms? I am currently looking to give away FREE advance reviewer copies (ARCs). You can look at the book contents here, and if it seems interesting then please sign up here to be a reviewer.
  3. If you are deploying machine learning models on the cloud, then chances are you work with Kubernetes or have at least heard of it. If you haven’t and you are an aspiring data scientist/engineer, then you should definitely learn about it.

Nov projects:

  1. The R-programming project for November is a sentiment analysis of song lyrics by different artists. There is lots of data wrangling involved to aggregate the lyrics and compare the words favored by 2 different artists. The code repository is added to the Projects page here. I’ve written the main code in R and used Tableau to generate some of the visuals, but this can easily be tweaked into an awesome Shiny dashboard to add to a data science portfolio.
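To give a flavor of the sentiment-scoring step, here is a minimal base-R sketch. The mini lexicon and lyric snippets below are invented for illustration; the real project uses a full lexicon and the actual lyrics dataset.

```r
# Toy sentiment scorer: score = (# positive words) - (# negative words).
# This tiny lexicon is a made-up stand-in for a real one like Bing or AFINN.
pos_words <- c("love", "happy", "sweet", "shine")
neg_words <- c("cry", "lonely", "pain", "goodbye")

score_lyrics <- function(lyrics){
  # Split on anything that is not a letter or apostrophe, then lowercase
  words <- tolower(unlist(strsplit(lyrics, "[^a-zA-Z']+")))
  sum(words %in% pos_words) - sum(words %in% neg_words)
}

# Compare two invented snippets, one per "artist"
artist_a <- "I love the way you shine, so happy and sweet"
artist_b <- "Goodbye my lonely heart, I cry through the pain"
score_a <- score_lyrics(artist_a)  # positive overall
score_b <- score_lyrics(artist_b)  # negative overall
```

Aggregating these scores per artist (or per album) is what makes the comparison between two artists possible.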

Until next time, Adieu for now!

Social Network Visualization with R

In this month’s post, we are going to look at data analysis and visualization of social networks using R programming.

Social Networks – Data Visualization

Friendster Networks Mapping

Friendster was a yesteryear social media network, something akin to Facebook. I’ve never used it, but it is one of those easily available datasets where you have a list of users and all their connections. So it is easy to create a viz and see whose networks are strong and whose are weak, or even find the bridges between multiple networks.

The dataset and code files are added on the Projects Page here, under “social network viz”.

For this analysis, we will be using the following library packages:

  • visNetwork
  • geomnet
  • igraph

Steps:

  1. Load the datafiles. The list of users is given in the file named “nodes” as each user is a node in the graph. The connection list is given in the file named “edges” as a 1-to-1 mapping. So if user Miranda has 10 friends, there would be 10 records for Miranda in the “edges” file, one for each friend. The friendster datafile has been anonymized, so there are numbers (id) rather than names.
  2. Convert the dataframes into a very specific format. We do some prepwork so that we can directly use the graph visualization functions.
  3. Create a graph object. This will also help to create clusters. Since the dataset is anonymized it might seem irrelevant, but imagine this in your own social network. You might have one cluster of friends who are from your school, another bunch from your office, one set who are cousins and family members and some random folks. Creating a graph object allows us to look at where those clusters lie automatically.
  4. Visualize using functions specific to graph objects. The first function is visNetwork(), which generates an interactive color-coded cluster graph. When you click on any of the nodes (colored circles), it will highlight all the connections radiating from that node. (In the image below, I have highlighted the node for user 17.)
  5. You can also use the same function with a bunch of different parameters, as shown below:
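The steps above can be sketched on a tiny hand-made network. The id/from/to column names mirror the format the nodes and edges files use; the package calls are wrapped in guards since this is only an illustrative sketch (the full code is on the Projects Page).

```r
# Step 1-2: nodes and edges in the format the graph functions expect.
# Two small friend groups {1,2,3} and {4,5,6} with node 3-4 as the bridge.
nodes <- data.frame(id = 1:6)
edges <- data.frame(from = c(1, 1, 2, 3, 4, 4, 5),
                    to   = c(2, 3, 3, 4, 5, 6, 6))

# Step 3: build a graph object and detect clusters automatically
if (requireNamespace("igraph", quietly = TRUE)) {
  g <- igraph::graph_from_data_frame(edges, vertices = nodes, directed = FALSE)
  comm <- igraph::cluster_louvain(g)          # find the friend clusters
  nodes$group <- igraph::membership(comm)     # color-code nodes by cluster
}

# Step 4: interactive visualization; clicking a node highlights its connections
if (requireNamespace("visNetwork", quietly = TRUE)) {
  net <- visNetwork::visNetwork(nodes, edges)
  visNetwork::visOptions(net, highlightNearest = TRUE)
}
```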

In the image below you can see the 3 colored clusters and the central (light blue) node. The connections in blue are the ones that do not have a lot of direct connections. The yellow and red clusters are tighter, indicating they have internal connections with each other (similar to a bunch of classmates who all know each other).

network clusters

That’s it. Again the code is available on the Projects Page.

Code Extensions

Feel free to play around with the code. One extension of this idea would be to download Facebook or LinkedIn data (premium account needed) and create similar visualizations.

Or if you have a list of airports and routes, you could create something like this as a flight network map, to know the minimum number of hops between 2 destinations and alternative routes.

You could also add a counter to see which nodes have the highest number of connections and increase the size of those circles. This would make it easier to spot the most well-connected nodes.
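That degree counter needs nothing more than base R’s table(); the edge list below is invented for illustration:

```r
# Count how many connections each node has (its degree), then
# turn the count into a circle size for plotting. Toy edge list:
edges <- data.frame(from = c(1, 1, 1, 1, 2, 3),
                    to   = c(2, 3, 4, 5, 3, 4))

# Each undirected edge counts toward both of its endpoints
deg <- table(c(edges$from, edges$to))

# Base size of 5, plus a bump for every connection
node_size <- 5 + 3 * as.numeric(deg)

most_connected <- names(deg)[which.max(deg)]  # the best-connected node id
```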

Of course, do not be over-mesmerized by the data. In real life, the strength of the relationship also matters. This is hard to quantify or collect, even though it’s easy to depict once you have the data in hand. For example, I have 1,000 connections whom I’ve met at conferences or random events. If I needed a job, most may not really be useful. But my friend Sarah has only 300 super-loyal friends who literally found her a job in 2 days when she had to move back to her hometown to take care of a sick parent.

With that thought, do take a look at the code and have fun coding! 🙂

DataScience Portfolio Ideas for Students & Beginners

A lot has been written on the importance of a portfolio if you are looking for a DataScience role. Ideally, you should document your learning journey so that you can reuse code, write well-documented code and also improve your data storytelling skills.

DataScience Portfolio Ideas

However, most students and beginners get stumped on what to include in their portfolio, as their projects are all the same ones that their classmates, bootcamp associates and seniors have created. So, in this post I am going to tell you what projects you should have in your portfolio kitty, along with a list of ideas you can use to build a collection of projects that will help you stand out on LinkedIn, Github and in the eyes of prospective hiring managers.

Job Search Guide

You can find many interesting projects on the “Projects” page of my website JourneyofAnalytics. I’ve also listed 50+ sources for free datasets in this blogpost.

In this post though, I am classifying projects based on skill level along with sample ideas for DIY projects that you can attempt on your own.

On that note, if you are already looking for a job, or about to do so, do take a look at my book “DataScience Jobs“, available on Amazon. This book will help you reduce your job search time and quickly start a career in analytics.

Since I prefer R over Python, all the project lists in this post will be coded in R. However, feel free to implement these ideas in Python, too!

a. Entry-level / Rookie Stage

  1. If you are just starting out, and are not yet comfortable even with syntax, your main aim is to learn how to code along with DataScience concepts. At this stage, just try to write simple scripts in R that can pull data, clean it up, calculate mean/median and create basic exploratory graphs. Pick any competition dataset on Kaggle.com and look at the highest-voted EDA script. Try to recreate it on your own, then read through and understand the hows and whys of the code. One excellent example is the Zillow EDA by Philipp Spachtholz.
  2. This will not only teach you the code syntax, but also how to approach a new dataset and slice/dice it to identify meaningful patterns before any analysis can begin.
  3. Once you are comfortable, you can move on to machine learning algorithms. Rather than Titanic, I actually prefer the Housing Prices Dataset. Initially, run the sample submission to establish a baseline score on the leaderboard. Then apply every algorithm you can look up and see how it works on the dataset. This is the fastest way to understand why some algorithms work on numerical target variables versus categorical versus time series.
  4. Next, look at the kernels with decent leaderboard score and replicate them. If you applied those algorithms but did not get the same result, check why there was a mismatch.
  5. Now pick a new dataset and repeat. I prefer competition datasets since you can easily see how your score moves up or down. Sometimes simple decision trees work better than complex Bayesian logic or Xgboost. Experimenting will help you figure out why.
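The “baseline first, then improve” loop from the steps above can be illustrated with base R alone. The housing-style dataset below is simulated purely for the sketch:

```r
# Simulate a toy housing dataset: price driven by size plus noise
set.seed(42)
sqft  <- runif(100, 500, 3000)
price <- 50000 + 120 * sqft + rnorm(100, sd = 20000)
homes <- data.frame(sqft, price)

# Baseline: always predict the mean price (like a naive sample submission)
baseline_rmse <- sqrt(mean((homes$price - mean(homes$price))^2))

# First real model: simple linear regression on size
fit <- lm(price ~ sqft, data = homes)
model_rmse <- sqrt(mean(residuals(fit)^2))

# The regression should comfortably beat the mean-only baseline
improvement <- baseline_rmse - model_rmse
```

On Kaggle, the leaderboard plays the role of `baseline_rmse` and `model_rmse` here: every new algorithm you try should be compared against the score you already have.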

Sample ideas –

  • Survey analysis: Pick up a survey dataset like the Stack overflow developer survey and complete a thorough EDA – men vs women, age and salary correlation, cities with highest salary after factoring in currency differences and cost of living. Can your insights also be converted into an eye-catching Infographic? Can you recreate this?
  • Simple predictions: Apply any algorithms you know on the Google analytics revenue predictor dataset. How do you compare against the baseline sample submission? Against the leaderboard?
  • Automated reporting: Go for end-to-end reporting. Can you automate a simple report, or create a formatted Excel or pdf chart using only R programming? Sample code here.

b. Senior Analyst/Coder

  1. At this stage simple competitions should be easy for you. You don’t need to be in the top 1%; even being in the top 30-40% is good enough. Although if you can win a competition, even better!
  2. Now you can start looking at non-tabular data like NLP sentiment analysis, image classification, API data pulls and even dataset mashups. This is also the stage when you probably feel comfortable enough to start applying for roles, so building unique projects is key.
  3. For sentiment analysis, nothing beats Twitter data, so get the API keys and start pulling data on a topic of interest. You might be limited by the daily pull limits on the free tier, so check if you need 2 accounts and aggregate data over a couple days or even a week. A starter example is the sentiment analysis I did during the Rio Olympics supporting Team USA.
  4. You should also start dabbling in RShiny and automated reports, as these will help you in actual jobs where you need to present idea mockups and standardize weekly/daily reports.

Sample ideas –

  • Twitter Sentiment Analysis: Look at the Twitter sentiments expressed before big IPO launches and see whether the positive or negative feelings correlated with a jump in prices. There are dozens of apps that look at the relation between stock prices and Twitter sentiments, but for this you’d need to be a little more creative since the IPO will not have any historical data to predict the first day dips and peaks.
  • API/RShiny Project: Develop a RShiny dashboard using Yelp API, showing the most popular restaurants around airports. You can combine a public airport dataset and merge it with filtered data from the Yelp API. A similar example (with code) is included in this Yelp College App dashboard.
  • Lyrics Clustering: Try doing some text analytics using song lyrics from this dataset with 50,000+ songs. Do artists repeat their lyrics? Are there common themes across all artists? Do male singers use different words versus female solo tracks? Do bands focus on a totally different theme? If you see your favorite band or lead singer, check how their work has evolved over the years.
  • Image classification starter tutorial is here. Can you customize the code and apply to a different image database?

c. Expert Data Scientist

  1. By now, you should be fairly comfortable with analyzing data from different datasource types (image, text, unstructured), building advanced recommender systems and implementing unsupervised machine learning algorithms. You are now moving from analyze stage to build stage.
  2. You may or may not already have a job by now. If you do, congratulations! Remember to keep learning and coding so you can accelerate your career further.
  3. If you have not, check out my book on how to land a high-paying ($$$) Data Science job within 90 days.
  4. Look at building deep learning models with Keras and apps using artificial intelligence. Even better, can you fully automate your job? No, you won’t “downsize” yourself. Instead your employer will happily promote you, since you’ve shown them a superb way to improve efficiency and cut costs, and they will love to have you look at other parts of the business where you can repeat the process.

Sample project ideas –

  • Build an App: College recommender system using public datasets and web scraping in R. (Remember to check terms of service as you do not want to violate any laws!) Goal is to recreate a report like the Top 10 cities to live in, but from a college perspective.
  • Start thinking about what data you need – college details (names, locations, majors, size, demographics, cost), outlook (Christian/HBCU/minority), student prospects (salary after graduation, time to graduate, diversity, scholarship, student debt), admission process (deadlines, average scores, heavy sports leaning) and so on. How will you aggregate this data? Where will you store it? How can you make it interactive and create an app that people might pay for?
  • Upwork Gigs: Look at Upwork contracts tagged as intermediate or expert, esp. the ones with $500+ budgets. Even if you don’t want to bid, just attempt the project on your own. If you fail, you will know you still need to master some more concepts; if you succeed, it will be a superb confidence booster and learning opportunity.
  • Audio Processing: Use the VOX celebrity dataset to identify the speaker based on audio/speech dataset. Audio files are an interesting datasource with applications in customer recognition (think bank call centers to prevent fraud), parsing for customer complaints, etc.
  • Build your own package: Think about the functions and code you use most often. Can you build a package around it? The most trending R-packages are listed here. Can you build something better?

Do you have any other interesting ideas? If so, feel free to contact me with your ideas or send me a link with the Github repo.

Mapping Anthony Bourdain’s Travels

Travel maps tutorial

Anthony Bourdain was an amazing personality – chef, author, world traveler, TV show host. I loved his shows as much for the exotic locations as for the yummilicious local cuisine. So I was delighted to find a dataset that included the travel location data from all episodes of his 3 hit TV shows. Thanks to Christine Zhang for publishing the dataset on Github.

In today’s tutorial, we are going to plot this extraordinary person’s world travels in R. So our code will cover geospatial data mapping using 2 methods:

  • The leaflet package to create zoomable maps with markers
  • Airplane-route-style maps to see the paths traveled.

Step 1 – Prepare the Workspace

Here we will load all the required library packages, and import the dataset.

library(data.table) # for fread()
places <- data.frame(fread('bourdain_travel_places.csv'), stringsAsFactors = F)

Step 2 – Basic Exploration

Our dataset has data for 3 of Bourdain’s shows:

  • No Reservations
  • Parts Unknown – which I personally loved.
  • The Layover

Let us take a sneak peek into the data:

dataset preview

How many countries did Bourdain visit? We can calculate this for the whole dataset or by show:

library(sqldf)
numshow <- sqldf("select show, count(distinct(country)) as num_ctry from places group by show") # Num countries by show

numctry <- nrow(table(places$country)) # Total countries visited
numstates <- nrow(table(places$state[places$country == 'United States'])) # Total states visited in the US

Wow! Bourdain visited 93 countries overall, and 68 countries for his show “No Reservations”. Talk about world travel.

I did notice some records have state names as countries, for example California, Washington and Massachusetts. But these are exceptions, and overall the dataset is extremely clean. Even disregarding those records, 80+ countries is nothing to be scoffed at, and I had never even heard of some of these exotic locations.

P.S.: You know who else gets to travel a lot? Data scientists earning $100k+ per year. Here’s my new book, which will teach you how to land such a dream job.

Step 3 – Create a Leaflet to View Sites on World Map

Thankfully, the data already has geographical coordinates, so we don’t need to add any processing steps. However, if you have cities which are missing coordinates then use the “worldcities” file from the Projects page under “Rent Analysis”.

We do have some duplicates, where Bourdain visited the same location in 2 or more shows. So we will de-duplicate before plotting.
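A minimal sketch of that de-duplication step, using an invented stand-in for the places dataframe (the column names follow the snippets in this post; the real code is on the Projects Page):

```r
# Keep one row per (city, country) pair, dropping repeat visits
# from different shows. Toy data: Paris appears in two shows.
places <- data.frame(
  city_or_area = c("Paris", "Paris", "Hanoi"),
  country      = c("France", "France", "Vietnam"),
  show         = c("The Layover", "Parts Unknown", "Parts Unknown"),
  stringsAsFactors = FALSE)

places4 <- places[!duplicated(places[, c("city_or_area", "country")]), ]
```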

Next we will add an info column to list the city and state name that we can use on the marker icons.

library(leaflet)
places4$info <- paste0(places4$city_or_area, ", ", places4$country) # marker labels

mapcity <- leaflet(places4) %>%
  setView(2.35, 48.85, zoom = 3) %>%
  addTiles() %>%
  addMarkers(~long, ~lat, popup = ~info,
             options = popupOptions(closeButton = T),
             clusterOptions = markerClusterOptions())
mapcity # Show the leaflet

leaflet view – the markers are interactive in R

Step 4 – Flight Route View

Can we plot the cities in flight view style? Yes, we can as long as we transform the dataframe where each record has a departure and arrival city. We do have the show and episode number so this is quite easy.
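One way to build those departure/arrival pairs, sketched on a few invented stops (the Dep/Arr column names match the plotting loop later in this post):

```r
# Turn an ordered list of stops into departure/arrival pairs:
# row i departs from stop i and arrives at stop i+1.
stops <- data.frame(
  city = c("New York", "Lyon", "Marseille", "Tokyo"),
  long = c(-74.0, 4.84, 5.37, 139.7),
  lat  = c(40.7, 45.76, 43.30, 35.7))

n <- nrow(stops)
citydf3 <- data.frame(
  DepCity = stops$city[-n], Deplong = stops$long[-n], Deplat = stops$lat[-n],
  ArrCity = stops$city[-1], Arrlong = stops$long[-1], Arrlat = stops$lat[-1])
```

In the real data, this transformation is done per show (and per season), ordered by episode number, so the lines trace the actual sequence of travels.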

Once we do that we will use a custom function which basically plots a circle marker at the two cities and a curved line between the two.

library(geosphere) # for gcIntermediate()

# Custom function: draws a great-circle curve between two cities,
# splitting the line when the path crosses the date line
plot_my_connection <- function(dep_lon, dep_lat, arr_lon, arr_lat, ...){
  inter <- gcIntermediate(c(dep_lon, dep_lat), c(arr_lon, arr_lat),
                          n = 50, addStartEnd = TRUE, breakAtDateLine = F)
  inter <- data.frame(inter)
  diff_of_lon <- abs(dep_lon) + abs(arr_lon)
  if(diff_of_lon > 180){
    lines(subset(inter, lon >= 0), ...)
    lines(subset(inter, lon < 0), ...)
  } else {
    lines(inter, ...)
  }
}

For the actual map view, we will create a background world map image, then use the custom function in a loop to plot each step of Bourdain’s travels. Depending on how we create the transformed dataframe, we can plot Bourdain’s travels for a single show, single season or all travels.

Here are two maps separately for the show “Parts Unknown” and “The Layover” respectively. Since the former had more seasons, the map is a lot more congested.

Parts Unknown seasons – travel maps

library(maps)
par(mar = c(0,0,0,0)) # no margins for the background map
map('world', col = "#262626", fill = TRUE, bg = "white", lwd = 0.05,
    mar = rep(0,4), border = 0, ylim = c(-80,80)) # other cols: #262626; #f2f2f2; #727272
for(i in 1:nrow(citydf3)){
  plot_my_connection(citydf3$Deplong[i], citydf3$Deplat[i],
                     citydf3$Arrlong[i], citydf3$Arrlat[i],
                     col = "gold", lwd = 1)
} # add every connection
points(x = citydf$long, y = citydf$lat, col = "blue", cex = 1, pch = 20) # city points
text(citydf$city_or_area, x = citydf$long, y = citydf$lat, col = "blue", cex = 0.7, pos = 2) # city names

As always, the code files are available on the Projects Page. Happy Coding!

Call to Action:

If you read this far and also want a job or promotion in the DataScience field, then please do take a look at my new book “Data Science Jobs“. It will teach you how to optimize your profile to land great jobs with high salaries, and includes 100+ interview Qs plus niche job sites that everybody else overlooks.
