Despite the spike in interest in Data Science and Machine Learning roles and courses, it is still possible to become a fully functional data scientist with minimal resources.
Some caveats: (1) be committed to investing hours of effort building your expertise; (2) the job market has gotten quite competitive, so be mentally prepared to work strategically and accept that finding a job will require sweat equity.
Note, the title of this post is “Data Scientist” but the steps below apply even if your aim is to become a data analyst, data engineer, analytics consultant or machine learning engineer.
Steps to Data Science Expertise
At its core, becoming a data scientist will require three steps (in sequence):
Learn the skills
Build your portfolio
Apply to jobs strategically
Step 1 – Learn the skills.
The skills below are mandatory.
Programming in R or Python.
Programming in SQL. Most courses never talk about SQL, but it is critical.
Machine learning algorithms. Know how to code them, and also which algorithm fits which use case.
If you search Google, you will find free courses and books on all the above topics. Or go for a low-cost option from Udemy. Essentially you can learn the skills for <$100, even now in 2020.
Step 2 – Build your portfolio.
You can add 100 certifications, but you also need to showcase the learning by way of projects. Use GitHub to host your projects or create a free WordPress website. If you have the capacity, explore low-cost website hosting from Wix or Squarespace.
The project should be unique to you. Pick any free public dataset, apply your perspective to slice and dice the data, and extract insights. This is what will set you apart from the 10,000 other candidates who completed the same free bootcamp or Coursera class. A sample project idea list is here, organized by beginner and advanced skill levels.
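Slicing and dicing can be as simple as aggregating a column and ranking the results. Here is a minimal Python sketch; the dataset and column names are invented purely for illustration:

```python
import csv
import io
from collections import Counter

# A made-up CSV snippet standing in for a public dataset.
raw = """city,rides
Seattle,120
Chicago,95
Seattle,80
Toronto,60
Chicago,40
"""

# "Slice and dice": aggregate rides per city and rank them.
totals = Counter()
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["city"]] += int(row["rides"])

for city, rides in totals.most_common():
    print(city, rides)
```

Trivial, yes, but swap in a real dataset and add your own angle (seasonality, outliers, a surprising segment) and you have the seed of a portfolio project.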
The job market is heating up as people enter this field by the thousands. Getting job leads is hard; getting to the interview stage is even harder.
Make sure your profile on LinkedIn is “all-star”, with at least 500 connections.
You can significantly improve your odds by leveraging niche job sites, and hunting on LinkedIn content tabs and Twitter. Both are highly manual, which is why they work! No one else wants to pursue those methods! 🙂 A detailed how-to guide, a full list of niche job boards and interview question sets are all available in my job search book, which I keep updating every quarter. These strategies work, hence the blatant plug!
Be prepared to face a lot of rejections, especially for landing the first job. In the beginning, don’t be afraid to accept a low-paying job or work internships. It is easier to get a job when you already have one!
Initially you may be hired as a “data analyst” – accept! A lot of companies use the terms analyst and scientist interchangeably, or use the “data scientist” title to designate more experienced hires.
Note, there are other job types in the data science domain apart from “data scientist” so check if you can leverage your previous experiences for other role types.
Note, I realize a lot of students are graduating soon and the global pandemic is making it hard to find jobs. Some employers are already reneging on confirmed offers, which increases pressure on students. Hence I’ve reduced my ebook price to $0.99 for the month of May 2020.
Note, the book will NOT be marked free to deter folks who just download books and guides but do not intend to put in any effort!
This is just a short note to specify that the list of FREE datasets is updated for 2020. There are 50+ sites and links to the newly released Google Dataset search engine. So, have fun exploring these data repositories to master programming, create stunning visualizations and build your own unique project portfolios.
Some starter projects with these datafiles are available on the Projects page, using R-programming.
The first month of the new decade is almost at an end. It’s also “job-hunting” time when students start looking for internships and employees think about switching roles and companies, in search of better salaries and opportunities. If you fall into one of these categories, then here are the Top 10 skills your resume absolutely needs to include, to get noticed by employers and land your dream job.
I looked at 200 job descriptions for jobs posted on LinkedIn in 7 major US/Canada cities – San Francisco, Seattle, Chicago, New York, Philadelphia, Atlanta, Toronto. Let’s face it – LinkedIn is the go-to platform for job seekers and recruiters, so looking at any other site seemed a waste of time.
The job listings included many of the top global brands in tech (Microsoft, Amazon, etc.), product (AirBnb, Uber, Visa), consulting (Deloitte, Accenture), banks (JP Morgan, Capital One) and so on. I only considered jobs with the title “Data Scientist” or “Data Analyst”, with 150+ in the former. It took a while, but doing this manually also allowed me to exclude repetitive postings, since some companies post the same role for multiple locations.
Ultimately, this allowed me to quickly identify patterns and repeated skills, which I am presenting in this blogpost.
I’ve categorized the skills into 2 parts: Core and Advanced. Core skills are the absolute minimum you should have; recruiters and automated job application systems will simply disqualify you without them. Advanced skills are those “preferred” competencies that make you look more valuable as a candidate, so make sure to highlight them with examples on your resume. If you are trying to transition to a career in Data Science, I would highly recommend learning the core skills first, and then jumping into the others. Needless to say, everyone working in (or entering) this field needs a portfolio of projects.
Disclaimer – having all the 10 skills does NOT guarantee a job but vastly improves your chances. You’ll still need to do some legwork, to get considered and my book “Data Science Jobs” can help you shorten this process. The book is also on SALE for $0.99 this weekend, Jan 25th to Jan 28th, at a 92% discount.
Programming (R/Python): This is a no-brainer – you need to be an expert in either R or Python. Some jobs will list SAS or other obscure languages, but R or Python was a constant and mandatory requirement in 100% of the jobs I parsed.
I am not going to argue the merits of one over the other in this post, but I will emphasize that R is still very much an in-demand skill. Plus, for most entry-level roles, a candidate who knows only Python is not going to be considered more favorably (or declined!) than someone who knows only R. In fact, at my current and previous 2 roles, R was the preferred language of choice. If you’d like to know my true views on the R vs Python debate, read this post.
SQL: Most colleges and bootcamps do not teach this, but it is inordinately valuable. You cannot find insights without data, and 99% of companies predominantly use SQL databases of some kind. Fancy stuff like MongoDB, NoSQL or Hadoop are excellent keywords to add to your bio, but SQL is the baseline. You don’t need stored procedures or admin-level expertise, but please learn the basics of SQL for pulling in data with filters and optimizing table joins. SQL is mandatory to thrive as a data scientist.
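You don’t even need a database server to practice. Here is a minimal sketch using Python’s built-in sqlite3 module, showing the bread-and-butter pattern of filtering, joining and aggregating; the tables and data are made up for illustration:

```python
import sqlite3

# A throwaway in-memory database with hypothetical tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Ana', 'Seattle'), (2, 'Ben', 'Chicago');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 15.0);
""")

# Join two tables, filter on a condition, aggregate per customer.
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    WHERE c.city = 'Seattle'
    GROUP BY c.name
""").fetchall()

print(rows)
```

If you can write queries like this comfortably, you have the SQL baseline most job descriptions are asking for.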
Basic math & stats: By this I mean basic high-school stuff, like calculating confidence intervals and profit-loss figures. If you cannot distinguish between mean and median, then no self-respecting manager will trust your numbers, or believe your insights have excluded those pesky outliers. Profit and incremental benefit in dollars are other useful formulae to know, so brush up on your business math.
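To see why the mean/median distinction matters, here is a quick sketch with made-up sales numbers, showing how a single outlier drags the mean while the median stays put, plus a rough normal-approximation confidence interval:

```python
import statistics

# Daily sales figures with one pesky outlier (numbers are invented).
sales = [100, 105, 98, 102, 101, 950]

mean = statistics.mean(sales)      # dragged up by the outlier
median = statistics.median(sales)  # robust to it

# A rough 95% confidence interval for the mean (normal approximation).
se = statistics.stdev(sales) / len(sales) ** 0.5
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(round(mean, 1), median)
```

Here the mean lands above 240 while the median sits at 101.5; report the wrong one and your “typical day” insight is off by more than double.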
Machine Learning Algorithms: Knowing how to code the algorithms is expected, but so is knowing the logic behind them. If you cannot explain an algorithm in plain English, you really don’t know what you are talking about!
Data Visualization: Tableau is the preferred technology, although I’ve seen people find success with Excel charts (Excel will never die!) and R libraries, too. However, I definitely see Tableau dominating everything else in the coming years.
 Communication skills: A picture is worth 1000 words; and being able to present data in meaningful, concise ways is crucial. Too many newbies get lost in the analysis itself, or hyper-focused on their beautiful code. Most managers want to see recommendations and insights that they can apply in practice! So being able to think like a “consultant” is crucial whether you are entry-level or the lead data scientist.
Good presentation skills (written and verbal) are important, more so for any dashboards or visualization reports, and I don’t mean color palettes or chart-types. Instead, make sure your dashboards are not “data-vomit”, a very practical (and apt!) term coined by Avinash Kaushik. If users cannot make head or tail of the dashboard without your handholding, or if the most important take-away is not obvious within 5 seconds, then you’ve done a poor job.
Cloud services: Most companies have moved databases to AWS/Azure, and many are implementing production models in the cloud. So, learn the basics of Docker, containers, and deploying your models and code to the cloud. This is still a niche skill, so having it will definitely help you stand apart as most companies make the move towards automation.
Software engineering: You don’t need to become a software engineer, but knowing basic architecture and data-flow Qs will help you troubleshoot better and write code that is more easily moved to production. Some Qs to start – what is the data about, and where (all) is it coming from? Learn about scheduler jobs and report automation; these have helped me automate the most boring repetitive tasks and look like a superstar to my managers! The infrastructure teams do extremely valuable work (keeping things running smoothly), so learn about their “rules” and expectations, and make sure your code conforms to them. I always do, and my requests are treated much better! 😉
Automated ML: This is slowly getting popular, as companies try to cut costs and improve efficiency with automation. H2O.ai and DataRobot are just 2 names off the top of my head, but there are many more vendors in the market. If possible, learn how to work with these tools, as they can reduce your time for analysis and speed up production deployment. They won’t replace good data scientists, but they do magnify the disparity between someone mindlessly copy/pasting code and a truly efficient data scientist. So make sure your “core” skills are impeccable.
Domain expertise: Nothing beats experience, but even if you are new to the company (or field), learn as much as you can from senior colleagues and partner teams. Find out the “why/how/what” Qs – who is using the analysis results, and why do they truly want it? How will it be applied? How does it save the company money or increase profits? How can I do it faster while maintaining accuracy, and also adding to the bottom line? What metric does the end user (or my manager) really care about?
As Machine learning software add more automation and features, this blend of technology and domain expertise will ensure you are never a casualty of layoffs or cost-cutting! I’ve put this at the end, but really you should be thinking about this from DAY ONE!
For example, my current role involves models for credit card fraud prediction. However, once I learned the end-to-end card customer lifecycle (incoming application, review, collections, payments, etc.), my models became much better. Plus, I have a deeper understanding of fair banking and privacy laws, which can prevent many demographic variables from being used in models. Similarly, a friend working in the petrochemical industry realized that his boss cared more about preventing false negatives (overlooking, i.e. NOT maintaining, end-of-life or faulty sensors that can potentially cause leaks or explosions) than false positives (unnecessary maintenance for good sensors), even though both models can give you similar accuracy.
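The sensor example can be sketched with numbers. Below are two hypothetical confusion matrices (all counts invented for illustration): both models score similar overall accuracy, yet one misses five times as many faulty sensors:

```python
# 1,000 sensors, 50 of which are genuinely faulty.
# tp = faulty sensor flagged, fn = faulty sensor missed,
# fp = good sensor flagged,  tn = good sensor left alone.
model_a = {"tp": 45, "fn": 5, "fp": 45, "tn": 905}
model_b = {"tp": 25, "fn": 25, "fp": 5, "tn": 945}

def metrics(m):
    """Overall accuracy, and the share of faulty sensors that get missed."""
    total = sum(m.values())
    accuracy = (m["tp"] + m["tn"]) / total
    miss_rate = m["fn"] / (m["tp"] + m["fn"])
    return accuracy, miss_rate

for name, m in [("A", model_a), ("B", model_b)]:
    acc, miss = metrics(m)
    print(f"Model {name}: accuracy={acc:.2f}, faulty sensors missed={miss:.0%}")
```

Model B actually edges out Model A on accuracy (0.97 vs 0.95) while overlooking half the faulty sensors, which is exactly why knowing which metric the business cares about beats chasing a single headline number.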
So build these skills, and see your career and salary potential sky-rocket in 2020!
Just 5 days left to KubeCon + CloudNativeCon North America! 🙂 I am quite excited to finally attend this awesome conference and to visit sunny San Diego! 🙂 Whether you are a first-time attendee or just looking to get your money’s worth from the conference, here is a list of to-dos to make the most of the experience.
If you have not heard of KubeCon, it is a conference focused on Kubernetes and related container technologies – a way to get software applications running with cloud services. This is an entire ecosystem, and in the next few years it will change software infrastructure concepts for all companies. Myriad companies including Uber, Google, Shopify and JPMorgan are already on board and deploying using these new methods.
These technologies are also a huge part of how machine learning models and AI applications are implemented successfully and at scale, which is why I (as owner of this data science blog) got interested in Kubernetes. If you’ve run machine learning models using cloud services, you might have used some of these tools without ever being aware of it.
This is (obviously) my first time attending this conference and visiting the city, so I had tons of Qs and thoughts. The amazing list of speakers and conference tracks also make it hard to choose which sessions to attend. Thankfully, I was able to get some excellent advice from the dedicated Slack channels for the conference and past attendees.
Since the countdown clock has started, I’ve summarized the tips for others, so you can make the most of this experience.
1. Get on Slack
I am so thankful to Wendi West, Paris Pittman and the other moderators in the Slack channels for patiently answering questions, sending hotel recommendations and event reminders, and building some great vibes for the conference!
I found a lot of useful information on the channel specific to the Diversity scholarship recipients, followed by the events channel. If you still have last minute Qs, then post on this channel or DM the organizers.
The Slack channels are great to connect with folks before the event, so you have some familiar faces to meet at the conference.
By now everyone should have created an itinerary for themselves. If not, please use the “sched” app with the following workspace url https://kccncna19.sched.com/
Note that you should have one broad agenda for the conference – either info for attempting a certification, networking for a job, discuss case studies so you can apply concepts at your job, or something else. This will allow you to make better selections without feeling overwhelmed by the sheer number of (fantastic) choices!
For me, the main goal is to understand how Kubernetes is deployed for machine learning & AI projects. Since I work for a bank, security issues and migration of legacy/mainframe software into cloud services are also relevant topics, as are case studies. Having this theme allowed me to quickly decide and create a meaningful list of sessions to attend. I also hope to network with 25+ new people, a goal that should be quite easy at a conference with 8000+ attendees spanning 5 days.
PRO TIP: A couple of past attendees advised not to over-schedule and to look at room locations. Although the conference is mainly happening at the San Diego convention center, some sessions are being held at the Marriott and other hotels. So look them up and make sure you have enough time to walk between venues.
Plus, if a session is quite interesting, you might want to hang back and chat with the speaker or ask additional clarifications. This might cause you to miss the next session, so design your schedule carefully.
Being part of a distinct Diversity scholarship Slack channel means that I’ve already connected with 5-10 other recipients. After all these online discussions, it will be great to meet these talented and ambitious folks in person!
Most past attendees have emphatically stated that folks attending this conference are very generous with their time, so please make the most of their expertise and knowledge.
Don’t be shy! Speak up.
Speakers are amazing, and human too! So feel free to say hello after the session, ask follow-up Qs or just thank them for an interesting discussion.
For those who are extremely nervous about networking, here is a unique tip that someone told me years ago. Pick a color and talk to at least 5 people wearing clothes in that color. This might seem crazy, but it is a very practical way of overcoming self-bias and prejudices, and of talking to people we would not normally approach (out of shyness, feeling out of place, or other reasons). I’ve used it at other events and conferences and made some fabulous connections!
Use the LinkedIn app and connect immediately. If you met someone interesting, send them an invite during the conversation itself. No one ever says no, and if you wait you will forget to follow up, either because you forgot their name, used the wrong spelling or misplaced their business card. Plus, at such large conferences it is terribly hard to keep track of all the people you meet. I used this at the Philly AI conference very successfully, and can’t wait to connect with folks at KubeCon too!
During the conference:
Keep a notepad handy for broad keywords and ideas that are directly applicable to your role (and conference goal) .
List the session date, speaker name and time. This will help you look it up later, especially as I’ve heard many speakers post their slides and videos after the conference.
Tweet! Use the tags #KubeCon #CloudNativeCon and #DiversityScholarship.
Connect with people on LinkedIn (reiterating from above)
Check out the sponsored coffee/breakfast sessions and after hour meetups; I’ve heard they are amazing, as are the “lightning talks” post 5 pm.
Attend the sponsor expo and booths. Apart from the cool swag (tees, pens, stickers, etc.), you will get to see some interesting demos and hobnob with folks from companies both large and small – everyone from startups to large enterprises like Microsoft and Palo Alto, and everything in between. A great way to learn what’s happening in this space – you might even get your own unicorn startup idea! 🙂
Reconnect with the folks you’ve met on LinkedIn.
Add a blog post summarizing what you’ve learnt and your takeaways from the conference. Everyone has a unique perspective, so don’t feel as if everything has already been said! Ideally do this within a week, while your ideas are still fresh. Remember to use the hashtags.
Add pictures from the event on LinkedIn. Make sure to tag your new friends too!
If possible, present a brown bag or session to your team (or group) at office. This is a great way of disseminating information to others who could not attend, improve your public speaking skills and also score some brownie points for your next employee appraisal! Win-win all around.
Use what you’ve learnt. Even if it is just a little portion!
November is the month of Thanksgiving, and vacations, and of course deals galore! As part of saying thanks to my loyal readers, here are some deals specific to data science professionals and students that you should definitely not miss out on.
If you are exploring Data Science careers or preparing for interviews for a winter graduation, then take a look at my ebook “Data Science Jobs“. It is currently part of a Kindle countdown deal and priced 50% off from its normal price. Currently only $2.99 and prices will keep increasing until Friday morning when it goes back to full price.
Want a FREE book on statistics, as related to R-programming and machine learning algorithms? I am currently looking to give away FREE advance reviewer copies (ARCs). You can look at the book contents here, and if it seems interesting then please sign up here to be a reviewer.
If you are deploying machine learning models on the cloud, then chances are you work with Kubernetes or have at least heard of it. If you haven’t, and you are an aspiring data scientist/engineer, then you should compulsorily learn about this technology.
The R-programming project for November is a sentiment analysis on song lyrics by different artists. There is lots of data wrangling involved in aggregating the lyrics and comparing the lyrics favored by 2 different artists. The code repository is added to the Projects page here. I’ve written the main code in R, and used Tableau to generate some of the visuals, but this can easily be tweaked into an awesome Shiny dashboard to add to a data science portfolio.
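For readers who want the gist of the approach without opening the repository, here is a minimal lexicon-based sketch (shown in Python for brevity, though the project code is in R; the word lists below are tiny and made up, whereas real projects use lexicons like AFINN or Bing):

```python
import re
from collections import Counter

# Tiny made-up sentiment lexicon, purely for illustration.
POSITIVE = {"love", "happy", "shine", "sweet"}
NEGATIVE = {"cry", "lonely", "pain", "goodbye"}

def sentiment_score(lyrics: str) -> int:
    """Positive minus negative word counts, the simplest lexicon approach."""
    words = Counter(re.findall(r"[a-z']+", lyrics.lower()))
    pos = sum(words[w] for w in POSITIVE)
    neg = sum(words[w] for w in NEGATIVE)
    return pos - neg

print(sentiment_score("I love the way you shine, so happy tonight"))  # positive
print(sentiment_score("Goodbye my lonely heart, I cry in pain"))      # negative
```

Aggregate these scores per artist and the comparison chart (or Shiny dashboard) falls out naturally.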