What is AI, really? And what does it mean to my business?

The hype surrounding artificial intelligence is reaching a fever pitch. But it’s not about futuristic robots—AI applications are playing out today in less sexy ways

By Steve Holder, National Executive Analytic Strategy at SAS


It seems like only yesterday that Big Data was the Big Deal, the new frontier driving business optimization by taking advantage of every transaction, every piece of unstructured information, even weather trends, to forecast outcomes and divine insights for a better bottom line. Today, Big Data has become just plain old data. Its use to drive business outcomes is commonplace; access to troves of data has become an expectation, even a right, in many organizations. The value of collecting, storing, and analyzing all data is no longer debatable. In part, we have the hype Big Data created to thank for this widespread adoption and implementation.

If Big Data is yesterday, what’s today’s business technology poster child? What’s the latest trend capturing the hearts and minds of business leaders?

Without a doubt it has to be artificial intelligence (AI). A day can’t go by without a headline about AI in the business press or even the mainstream media. Most recently, Tesla CEO Elon Musk—a fervent cyberlibertarian and vocal opponent of regulation of any kind—told a conference of U.S. governors that AI is “a fundamental risk to the existence of human civilization,” that it can be manipulated through misinformation, and that it requires regulatory intervention now.

Before we take a look at the threats and undeniable promise of AI applications, let’s take a step back, look beyond the hype and ask: What is artificial intelligence? What applications can it have for any enterprise? And—forgive my bias as an employee of an analytics company—what is its impact on an organization’s analytics strategy? Let’s see if we can’t get less fiction and more science.

What is AI?

It may be news of the now, but AI’s history runs deeper than you think. In 1950, computing pioneer Alan Turing proposed what he called the imitation game: if a machine could generate conversational responses that couldn’t reliably be distinguished from those of an actual human, it could be said to have intelligence. The term “artificial intelligence” itself was coined a few years later, in 1955, by computer scientist John McCarthy in his proposal for the 1956 Dartmouth workshop that launched AI as a research field.

In the late 1960s, researchers began exploring computers’ ability to mimic human actions and reasoning. It started with computers designed to play simple games like checkers, a line of research that continues to this day with more complex games like chess, and with seemingly simple games, like Go, that turn out to be even more complex because of the sheer number of possible moves and the strategy needed to win. This research laid the foundation for some of our current AI apps, like chatbots, home assistants and self-driving cars.

Take special note of the words “imitation” and “mimic.” Artificial intelligence is the ability to make decisions like those a human would make given the same information. Even if the process of gathering information is autonomous, AI is a simulation of intelligence, distinct from sentience or self-awareness.  The industry has landed on the term weak or narrow AI to separate this simulated intelligence from technology taking over the world.

But enough of philosophy class. What is AI in a contemporary context, given technological advances, cultural and economic imperatives, and usable applications?

AI today

One way to think about AI is as a superset of capabilities that encompasses several sub-disciplines or technologies. Chief among these sub-disciplines:

  • Rules-based systems are a foundational discipline. The use of IF-THEN-ELSE and TRUE-FALSE decisions establishes that we’re building a system based on rules.
  • Machine learning applies complex algorithms to allow a computer to learn from data and build its own analytical models. It draws on many other disciplines—neural networks (more below), statistical analysis, operations research and physics, to name a few—to create insight from data. Importantly, the machine is not explicitly programmed to know what it’s looking for; it teaches itself patterns and connections through an iterative process. (A minimal sketch contrasting the rules-based and machine-learning approaches follows this list.)
  • Neural networks mimic (there’s that word again) the workings of the human brain. The computing system uses interconnected nodes or units the way our brain uses neurons—each unit applies a relatively simple process or calculation to incoming data, then passes its output on to another layer or “neuron,” which applies another simple process or calculation, and so on. Each layer of the network learns from the previous one, and multiple passes of the data can find connections and derive meaning from it that a single pass can’t.
  • Deep learning leverages huge neural networks with many times more layers, or depth. Deeper networks and advanced training allow the network to learn more complex patterns from enormous pools of data.
  • Predictive analytics applies statistical techniques to historical data to forecast future outcomes and their likelihood. As a discipline, predictive analytics is decades old.
  • Pattern recognition allows an AI system to classify and interpret input from its surroundings. This includes such disciplines as natural language processing—think Apple’s Siri or Amazon’s Alexa home assistant—and computer vision, where a machine analyzes visual input from its environment and draws insight from it.
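
To make the first two bullets concrete, here is a minimal sketch in Python. It uses scikit-learn purely as an illustrative toolkit (neither Python nor scikit-learn is discussed in this article), and the loan-approval rule and training data are entirely hypothetical.

    from sklearn.tree import DecisionTreeClassifier

    # Rules-based: a person writes the IF-THEN-ELSE logic explicitly.
    def approve_loan(income, debt_ratio):
        if income > 50_000 and debt_ratio < 0.4:
            return "approve"
        return "decline"

    # Machine learning: the system derives its own rules from labelled examples.
    # Hypothetical training data: [income, debt_ratio] -> past decision.
    X = [[60_000, 0.30], [45_000, 0.50], [80_000, 0.20], [30_000, 0.60]]
    y = ["approve", "decline", "approve", "decline"]
    model = DecisionTreeClassifier().fit(X, y)

    print(approve_loan(55_000, 0.35))          # decision from a hand-written rule
    print(model.predict([[55_000, 0.35]])[0])  # decision from a rule learned from data

The point of the contrast: in the first case a human encodes the decision logic; in the second, the machine infers it from examples.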

These overlapping disciplines create an ecosystem of capabilities that allows computers to become cognitive, to respond in a human-like fashion to their environment autonomously—without human intervention.

Why now?

The fundamental philosophy behind AI has been around for decades, research into its applications began years ago, and even many of the component technologies are well along the road to maturity, if they’re not already commonplace and reliable. So why is 2017 shaping up as the Year of Artificial Intelligence? Why are AI-oriented startups all the rage? Why are business leaders being bombarded with AI messaging?

It’s largely for the same reason every “it” technology manages to clear the hump between hype and maturity: more powerful technology, building on now-established applications—last year’s “it” technology—is making AI a possibility instead of a pipe dream. Three core factors are driving AI’s maturity:

  • Low-cost computing power. Moore’s Law continues unabated, with exponentially more computing power available for a fraction of the cost for the foreseeable future. It’s not simply the MIPS (millions of instructions per second) that matter; smaller and smaller packaging with ever-lower current draw is making it possible to provide denser computing infrastructure, even in a mobile device footprint.
  • Data. Thanks to the Big Data revolution, we’ve got more and more data to work with, in structured and unstructured formats, and the means to store it for immediate retrieval. With the storage problem solved, organizations are moving data from disk into memory, which dramatically increases processing speed.
  • Deeper understanding of algorithms and AI. The skill sets that support AI development are maturing, with a better theoretical understanding of how AI works—and doesn’t work—in practice.

All about automation

A recently published “hype cycle” analysis from research firm Gartner Inc. placed machine learning (a key piece of the AI puzzle) at the Peak of Inflated Expectations. It’s taken some time to climb that curve; the first notions of machine learning date back to 1959 and computing pioneer Arthur Samuel’s definition: “A field of study that gives computers the ability to learn without being explicitly programmed.” (The history of Samuel’s checkers-playing program makes for a great primer on machine learning and its foundational concepts.)

The premise is simple: Without being taught by a programmer, machines use their own experience to solve a problem. Computers comb the near-limitless supply of available data to determine not only what is relevant, but what is significant and what is noise. They select data and apply the most suitable algorithms to build models that are constantly improved and refined. I see this automation as a key part of the equation.
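
As a rough sketch of that automation (again Python with scikit-learn, used purely for illustration, on synthetic data), the snippet below tries several candidate algorithms, scores each by cross-validation and keeps the best, with no human deciding in advance which algorithm is “most suitable.”

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in for an organization's data.
    X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

    candidates = {
        "logistic regression": LogisticRegression(max_iter=1_000),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "random forest": RandomForestClassifier(random_state=0),
    }

    # Score every candidate by cross-validation and keep the winner.
    scores = {name: cross_val_score(est, X, y, cv=5).mean() for name, est in candidates.items()}
    best = max(scores, key=scores.get)
    print(best, round(scores[best], 3))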

We’re still far (though perhaps not too far) from the all-knowing, self-aware robots and machine-dominated worlds of science fiction and futurist journals. But today’s “weak AI” holds promise. It can automate known tasks and make humans more efficient, but the technology is not magical. To apply AI in a practical and meaningful way, organizations need to look past the hype and weigh some important considerations:

  • Training is paramount – Like any learner, an AI system can’t provide insight without training. The system will learn and adapt, but ultimately it needs to be trained with examples of what success and failure look like.
  • Knowledge of the problem – Expecting a machine to magically solve a problem with no rules and no problem definition is optimistic. This is not to say we need to spoon-feed the AI system, but we do need to provide some form of rules or boundaries to ensure it’s automating the right insight.
  • Skills – The AI system will not build itself, at least not today. International Data Corporation (IDC) predicts a need by 2018 for 181,000 people with deep analytical skills. These deep skills are the basis of AI systems. Don’t underestimate the skills needed to create AI systems, or the scarcity of skilled practitioners.
  • Interpretability – A fully automated, machine-driven decision engine is certainly a great goal. However, organizations, especially in regulated industries, need to be able to understand and explain the output of these AI models (a brief sketch after this list shows one simple way to do that).
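
On interpretability, one common and simple approach is to favour models whose decisions can be traced back to their inputs. The sketch below (Python and scikit-learn again, with hypothetical feature names and synthetic data) fits a logistic regression and prints each feature’s coefficient; the sign and size of each coefficient indicate how that input pushes the decision.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic data standing in for a real credit-risk dataset.
    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    feature_names = ["income", "debt_ratio", "tenure", "late_payments"]  # hypothetical

    model = LogisticRegression(max_iter=1_000).fit(X, y)
    for name, coef in zip(feature_names, model.coef_[0]):
        print(f"{name:>13}: {coef:+.2f}")  # sign and magnitude explain each feature's pull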

These considerations aside, AI, machine learning and automation can bolster our current decision systems. For example, using a machine to crawl huge data volumes, run thousands of analytic models, and auto-tune those models across hundreds of parameter combinations takes the current process and significantly enhances it. These weak AI applications add incremental value over the analytics of the past for fraud detection, risk analysis, churn analysis, and customer acquisition.
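
As a rough sketch of that auto-tuning (Python with scikit-learn and synthetic data, purely for illustration), a grid search can evaluate a single model across well over a hundred parameter combinations, each cross-validated, and return the best one:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

    param_grid = {
        "n_estimators": [50, 100, 200, 400],
        "max_depth": [3, 5, 10, None],
        "min_samples_leaf": [1, 5, 10],
        "max_features": ["sqrt", "log2", None],
    }  # 4 x 4 x 3 x 3 = 144 combinations, each fitted and scored 5 times

    search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))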

So, for me, artificial intelligence isn’t necessarily about doing the same job humans can do. It’s about doing the jobs humans can’t realistically do within time, labour and budget constraints. Self-driving cars may be a sexy application, but the meatier ones will be taking place behind the scenes at banks, government institutions, and enterprises of every stripe.

As the Strategy lead at SAS Canada, Steve Holder is responsible for creating and driving the SAS solution strategy with our Canadian customers. A key part of this is providing thought leadership for the SAS Analytics, Big Data and Cloud portfolios, including Open Source integration. A Canadian analytics evangelist, Steve has seen first-hand how the use of analytics and data can help customers solve business problems, make the best decisions possible and unearth new opportunities. Steve’s passion is making technology make sense for everyone, regardless of their technical skill set. Steve tweets at @holdersmTO and can be emailed at steve.holder@sas.com.
