Lately I’ve been exploring deep learning algorithms and automating systems with artificial intelligence. I’ve also received a couple of emails asking me about programming skills for AI. So, with those questions in mind, here is a simple introduction to artificial intelligence. The agenda for this post is to cover the following topics:

  1. What is AI?
  2. Types of AI – practical implementation in Fortune 500 companies
  3. Applications of AI
  4. Drawbacks of AI
  5. Programming skills for AI

What is AI?

AI, or artificial intelligence, is the use of software to perform tasks that would otherwise require human intelligence. Machine learning is generally considered a branch of AI, and sophisticated algorithms are used to do everything from automating repetitive tasks to creating self-learning systems. Sadly, nowadays "AI" is used almost interchangeably with "data science" and "machine learning," so it is very difficult to draw a clear line between them all.

Types of AI

In terms of implementation and usage in Fortune 500 companies, AI can address three important business needs (Davenport & Ronanki, 2018):

  1. automating business processes (process automation),
  2. gaining insight through data analysis (cognitive insight),
  3. engaging with customers and employees (cognitive engagement).

Process automation:

This is perhaps the most prevalent form of AI, where code on a remote server or a sensor sorts information and makes decisions that would otherwise be made by a human. Examples include updating multiple databases with customer address changes or service additions, replacing lost credit or ATM cards, and parsing humongous amounts of legal and contractual documents to extract provisions using natural language processing. These robotic process automation (RPA) technologies are the most popular because they are easy to implement and offer great returns on investment. They are also controversial, because they can sometimes replace low-skilled manual jobs, even though those jobs were always in danger of being outsourced or given to minimum-wage workers.
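The document-parsing task above can be sketched in a few lines. This is a toy illustration of the idea, not how a production NLP pipeline works; the contract snippets and the keyword-matching approach are invented for the example:

```python
import re

# Hypothetical contract snippets; a real RPA pipeline would pull these
# from a document store rather than hard-coded strings.
documents = [
    "The Supplier shall indemnify the Buyer against all claims. "
    "Termination: either party may terminate with 30 days notice.",
    "Governing law: this agreement is governed by the laws of Delaware. "
    "Termination: immediate upon material breach.",
]

def extract_provisions(text, keyword):
    """Return the clauses of a document that mention a provision keyword."""
    clauses = re.split(r"(?<=[.!?])\s+", text)  # naive sentence split
    return [c for c in clauses if keyword.lower() in c.lower()]

for doc in documents:
    print(extract_provisions(doc, "termination"))
```

A real system would replace the keyword match with a trained NLP model, but the shape of the task is the same: split, score, and surface the relevant clauses so a human reviews far less text.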

Cognitive insight:

These processes use algorithms to detect patterns in vast volumes of data and interpret their meaning. Unsupervised machine learning algorithms help these processes become more efficient over time. Examples include predicting what a customer is likely to buy, identifying credit fraud in real time, mass-personalization of digital ads, and so on. Given the amounts of data involved, cognitive insight applications are typically used for tasks that would be impossible for people, or to augment human decision-making processes.
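The real-time fraud-flagging example boils down to spotting outliers against learned statistics. Here is a deliberately minimal sketch using a z-score on transaction amounts; real systems use far richer features and learned models, and the history, amounts, and threshold are all made up:

```python
from statistics import mean, stdev

# Invented purchase history for one customer.
past_amounts = [23.50, 41.00, 18.75, 36.20, 29.99, 44.10, 21.30, 38.45]

def is_suspicious(amount, history, z_threshold=3.0):
    """Flag a transaction whose amount is far outside the historical pattern."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > z_threshold

print(is_suspicious(32.00, past_amounts))   # a typical purchase -> not flagged
print(is_suspicious(950.00, past_amounts))  # far outside the usual range -> flagged
```

The point is the pattern, not the math: the "model" is whatever the algorithm learned from past data, and each new observation is scored against it in real time.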

Cognitive engagement:

These projects are the most complicated and take the most time, and therefore generate the most buzz while being the most prone to mismanagement and failure. Examples include intelligent chatbots that can process complex questions and improve with every interaction with live customers; voice-to-text reporting solutions; recommendation systems that help providers create customized care plans taking into account individual patients’ health status and previous treatments; and digital concierges that recreate customer intimacy with digital customers. However, such AI projects are still not completely mainstream, and companies tend to take a conservative approach to using them in customer-facing systems.

Applications of AI:

There are many applications of AI, and currently startups are racing to build AI chips for data centers, robotics, smartphones, drones, and other devices. Tech giants like Apple, Google, Facebook, and Microsoft have already created interesting products by applying AI software to speech recognition, internet search, and image classification. Amazon.com’s AI prowess spans cloud-computing services and voice-activated home digital assistants (Alexa, Amazon Echo). Here are some other interesting applications of AI:

  1. Driverless vehicles
  2. Robo-advisors that can recommend investments, rebalance stock/bond ratios, and make personalized recommendations for an individual’s portfolio. An interesting extension of this technique is the list of 50 startups with the most potential to grow, identified in 2009 by Quid’s AI. Today 10 of those companies have reached billion-dollar valuations (Reddy, 2017), including famous names like Evernote, Spotify, Etsy, Zynga, Palantir, Cloudera, and OPOWER. [Personal note – if you have not yet heard of Quid, follow them on Twitter @Quid. They publish some amazing business intelligence reports!]
  3. Image recognition that can aid law enforcement personnel in identifying criminals.
  4. LexisNexis has a product called PatentAdvisor (lexisnexisip.com/products/patent-advisor) which uses data on the history of individual patent examiners and how they’ve handled similar patent applications to predict the likelihood of an application being approved. Similarly, there are software applications that use artificial intelligence to help lawyers gather research material for cases, by identifying precedents that will maximize the chances of a successful ruling. (Keiser, 2018)
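The rebalancing idea behind robo-advisors (item 2 above) is simple enough to sketch: compute each asset’s drift from its target weight and the trade needed to restore the target. The portfolio values and target weights below are invented for the example:

```python
# Hypothetical portfolio that has drifted from a 60/40 stock/bond target.
portfolio = {"stocks": 70_000.0, "bonds": 30_000.0}
targets = {"stocks": 0.60, "bonds": 0.40}

def rebalance_trades(portfolio, targets):
    """Return the dollar amount to buy (+) or sell (-) for each asset."""
    total = sum(portfolio.values())
    return {asset: targets[asset] * total - value
            for asset, value in portfolio.items()}

print(rebalance_trades(portfolio, targets))
# Sell $10,000 of stocks, buy $10,000 of bonds to restore 60/40.
```

A real robo-advisor layers tax-loss harvesting, trading costs, and risk profiling on top, but periodic drift-correction like this is the core loop.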

Drawbacks of AI:

There is no doubt that AI has created some amazing opportunities (e.g., image recognition to classify malignant tumors) and allowed companies to hand off boring admin tasks to machines. However, since AI systems are created by humans, they do have the following risks and limits:

  1. AI bias: The machine learning algorithms underlying AI systems have biases of their own. All algorithms use an initial training set to learn how to identify and predict values, so if the underlying training set is biased, the predictions will also be biased. Garbage in, garbage out. Moreover, these biases may not appear as an explicit rule but, rather, be embedded in subtle interactions among the thousands of factors considered. Hence heavily regulated industries like banking bar the use of AI in loan approvals, as it may conflict with fair-lending laws.
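"Garbage in, garbage out" can be made concrete with a toy example: a model trained on a skewed approval history simply reproduces the skew. The "model" here is a trivial per-group approval rate, and the data is synthetic and exaggerated to make the bias obvious:

```python
def train(history):
    """Learn an approval rate per group from (group, approved) records."""
    totals, approvals = {}, {}
    for group, approved in history:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

# Biased history: group B was rarely approved by past (human) decisions.
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 2 + [("B", False)] * 8)

print(train(history))  # the learned rates mirror the historical skew
```

In a real system the bias is far harder to see: it hides inside interactions among thousands of features rather than in a single per-group number, which is exactly why regulators are cautious.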
  2. Lack of verification: Unlike regular rule-based systems, the neural networks typically used in AI systems deal with statistical truths rather than literal truths. So, such systems may fail in extremely rare cases, as the algorithm will overlook events with a very low probability of occurrence, for example a Wall Street crash or a sudden natural calamity like a volcanic eruption (think Hawaii). Lack of verification is a major concern in mission-critical applications, such as controlling a nuclear power plant, or when life-or-death decisions are involved.
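Why "statistical truth" fails on rare events can be shown in two lines: a predictor that estimates tomorrow from historical frequency assigns near-zero weight to anything it has almost never seen. The history and numbers below are invented for the sketch:

```python
from collections import Counter

# 9,999 calm days and a single eruption in the historical record.
history = ["calm"] * 9999 + ["eruption"]

def estimated_probability(event, history):
    """Frequency-based estimate: how often has this event occurred?"""
    return Counter(history)[event] / len(history)

print(estimated_probability("eruption", history))  # 0.0001 -> effectively ignored
```

A system thresholding on such estimates will confidently predict "calm" every day, and be catastrophically wrong exactly once, which is the failure mode that matters in mission-critical settings.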
  3. Hard-to-correct errors: If an AI system makes an error (and all systems eventually fail), diagnosing and correcting it becomes unimaginably complex, as the underlying mathematics is very complicated.
  4. Human creativity and emotions cannot be automated: AI is excellent at mundane tasks, but not so good at things that are intuitive. As the authors state in their book (Davenport & Kirby, 2016), if the logic can be articulated, a rule-based system can be written and the process can be automated. However, tasks that involve emotions, creative problem-solving, and social interactions cannot be automated. Examples include FBI negotiators, soldiers on flood-rescue missions, and the inventor who knew the iPod would change the music industry and become a sensation long before anyone expressed a need for it.

Programming Skills for AI:

The skills used to build AI applications are the same as those needed for data science and software engineering roles. The most in-demand programming languages are Python, R, Java, and C++. If you are looking to get started, three excellent resources are listed below:

  1. Professional Program from Microsoft. The courses are completely free (gasp!), although they do charge $99 per course for verified certificates. I took the free versions, and the courses offer a good mix of practical labs and theory. https://academy.microsoft.com/en-us/professional-program/tracks/artificial-intelligence/
  2. Introduction to AI course from Udacity. https://www.udacity.com/course/intro-to-artificial-intelligence–cs271
  3. AI and Deep Learning courses by Kirill Eremenko, on Udemy. I’ve taken 4 courses from him; they were all great value for money and give very real-world, hands-on coding experience. https://www.udemy.com/artificial-intelligence-az/

Please note that all three are honest recommendations; I am not being paid or compensated in any shape or form for adding these links.


REFERENCES

Brynjolfsson, E., McAfee, A. (2017) The business of artificial intelligence: What it can and cannot do for your organization. Harvard Business Review website. Retrieved from https://hbr.org/cover-story/2017/07/the-business-of-artificial-intelligence

Davenport, T., Kirby, J. (2016) Only Humans Need Apply: winners and losers in the age of smart machines. Harper Business.

Davenport, T., Ronanki, R. (2018) Artificial Intelligence for the Real World. Harvard Business Review.

Keiser, B. (2018) Law library Management and Legal Research meet Artificial Intelligence. onlineresearcher.net

Reddy, S. (2017) A computer was asked to predict which start-ups would be successful. The results were astonishing. World Economic Forum. Retrieved from https://www.weforum.org/agenda/2017/07/computer-ai-machine-learning-predict-the-success-of-startups/