Artificial Intelligence

What it is and why it matters

Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Most AI examples that you hear about today – from chess-playing computers to self-driving cars – rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data.

Artificial Intelligence History

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

Today's generative AI technologies have made the benefits of AI clear to a growing number of professionals. LLM-powered AI assistants are showing up inside many existing software products, from forecasting tools to marketing stacks.

The fast adoption of GenAI has also raised questions and concerns about AI anxiety, AI hallucinations, AI governance and AI ethics. As a result, trustworthy AI and responsible AI discussions are becoming crucial in every industry.

1950s–1970s

Neural Networks

Early work with neural networks stirs excitement for “thinking machines.”

1980s–2010s

Machine Learning

Machine learning becomes popular.

2011–2020s

Deep Learning

Deep learning breakthroughs drive AI boom.

Present Day

Generative AI

Generative AI, a disruptive tech, soars in popularity.

What is generative AI?

"With generative AI, we're entering a new era of human and machine interaction," says Marinela Profi, an AI marketing manager at SAS.

Generative AI learns from billions of data points and generates new content based on human prompts. Hear Profi discuss real-world examples of generative AI across industries, including use cases involving large language models (LLMs), synthetic data generation and digital twins.

Learn about the risks and benefits of this new frontier in AI.

Why is artificial intelligence important?


AI automates repetitive learning and discovery through data. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks. And it does so reliably and without fatigue. Of course, humans are still essential to set up the system and ask the right questions.

AI adds intelligence to existing products. Many products you already use will be improved with AI capabilities, much like Siri was added as a feature to a new generation of Apple products. Automation, conversational platforms, bots and smart machines can be combined with large amounts of data to improve many technologies. Upgrades at home and in the workplace range from security intelligence and smart cams to investment analysis.

AI adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that algorithms can acquire skills. Just as an algorithm can teach itself to play chess, it can teach itself what product to recommend next online. And the models adapt when given new data. 

AI analyzes more and deeper data using neural networks that have many hidden layers. Building a fraud detection system with five hidden layers used to be impossible. All that has changed with incredible computer power and big data. You need lots of data to train deep learning models because they learn directly from the data. 

AI achieves incredible accuracy through deep neural networks. For example, your interactions with Alexa and Google are all based on deep learning. And these products keep getting more accurate the more you use them. In the medical field, AI techniques from deep learning and object recognition can now be used to pinpoint cancer on medical images with improved accuracy.

AI gets the most out of data. When algorithms are self-learning, the data itself is an asset. The answers are in the data – you just have to apply AI to find them. With this tight relationship between data and AI, your data becomes more important than ever. If you have the best data in a competitive industry, even if everyone is applying similar techniques, the best data will win. But using that data to innovate responsibly requires trustworthy AI. And that means your AI systems should be ethical, equitable and sustainable.

Artificial Intelligence in Today's World

Pondering AI podcast

Is artificial intelligence always biased? Does AI need humans? What will AI do next? Join Kimberly Nevala to ponder AI’s progress with a diverse group of guests, including innovators, activists and data experts.

Your journey to AI success

Determine if you really need artificial intelligence. And learn to evaluate if your organization is prepared for AI. This series of strategy guides and accompanying webinars, produced by SAS and MIT SMR Connections, offers guidance from industry pros.

Five AI technologies that you need to know

Read our quick overview of the key technologies fueling the AI craze. This useful introduction offers short descriptions and examples for machine learning, natural language processing and more.

How Artificial Intelligence Is Being Used

Every industry has a high demand for AI capabilities – including systems that can be used for automation, learning, legal assistance, risk notification and research. Specific uses of AI in industry include:

Health Care

AI applications can provide personalized medicine and X-ray readings. Personal health care assistants can act as life coaches, reminding you to take your pills, exercise or eat healthier.

Retail

AI provides virtual shopping capabilities that offer personalized recommendations and discuss purchase options with the consumer. Stock management and site layout technologies will also be improved with AI.

Manufacturing

AI can analyze factory IoT data as it streams from connected equipment to forecast expected load and demand using recurrent networks, a specific type of deep learning network used with sequence data.
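For a rough sense of what that looks like in code, here is a minimal sketch (not a SAS implementation) that trains a small recurrent (LSTM) network with Keras to forecast the next reading in a sensor sequence; the synthetic data, window length and layer sizes are illustrative assumptions.

```python
# Minimal sketch: forecasting the next reading in an equipment-sensor sequence
# with a small recurrent (LSTM) network. The synthetic series, window length
# and layer sizes are illustrative assumptions, not a production setup.
import numpy as np
from tensorflow import keras

WINDOW = 24  # assume hourly readings; predict the next hour from the last 24

# Synthetic stand-in for streaming IoT load data: a noisy daily cycle.
t = np.arange(2000)
series = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(len(t))

# Slice the series into (input window, next value) training pairs.
X = np.array([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]
X = X[..., np.newaxis]  # shape: (samples, timesteps, features)

model = keras.Sequential([
    keras.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(32),   # recurrent layer reads the sequence step by step
    keras.layers.Dense(1),   # forecast of the next reading
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Forecast the next value from the most recent window of readings.
next_value = model.predict(series[-WINDOW:].reshape(1, WINDOW, 1), verbose=0)
print(float(next_value[0, 0]))
```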

Banking

Artificial Intelligence enhances the speed, precision and effectiveness of human efforts. In financial institutions, AI techniques can be used to identify which transactions are likely to be fraudulent, adopt fast and accurate credit scoring, as well as automate manually intense data management tasks.


"AI has been an integral part of SAS software for years. Today we help customers in every industry capitalize on advancements in AI, and we’ll continue embedding AI technologies like machine learning and deep learning in solutions across the SAS portfolio," says Jim Goodnight, CEO of SAS.

WildTrack and SAS: Saving endangered species one footprint at a time.

Flagship species like the cheetah are disappearing. And with them, the biodiversity that supports us all. WildTrack is exploring the value of artificial intelligence in conservation – to analyze footprints the way indigenous trackers do and protect these endangered animals from extinction.

How Artificial Intelligence Works

AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data. AI is a broad field of study that includes many theories, methods and technologies, as well as the following major subfields:

Machine Learning

Machine learning automates analytical model building. It uses methods from neural networks, statistics, operations research and physics to find hidden insights in data without explicitly being programmed for where to look or what to conclude.
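As a minimal sketch of that idea, the example below fits a classifier from labeled examples rather than hand-written rules; the scikit-learn library, its bundled iris dataset and the random forest model are assumptions chosen purely for illustration.

```python
# Minimal sketch: a model learns a decision rule from labeled examples instead
# of being explicitly programmed. The dataset and model choice are illustrative
# assumptions (scikit-learn's bundled iris data, a random forest classifier).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # the patterns are learned from the data itself

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```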

Neural Networks

A neural network is a type of machine learning made up of interconnected units (like neurons) that process information by responding to external inputs, relaying information between each unit. The process requires multiple passes at the data to find connections and derive meaning from undefined data.
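To make "interconnected units relaying information" concrete, here is a minimal sketch of a single forward pass through a tiny two-layer network in plain NumPy; the layer sizes and random weights are illustrative assumptions, and in practice the weights would be adjusted over many passes through the data.

```python
# Minimal sketch of a forward pass: each layer of units weights the outputs of
# the previous layer, applies an activation, and relays the result onward.
# Layer sizes and random weights are illustrative assumptions (untrained).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

x = rng.normal(size=4)                            # external inputs
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)     # input -> hidden connections
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)     # hidden -> output connections

hidden = relu(x @ W1 + b1)   # hidden units respond to the inputs
output = hidden @ W2 + b2    # output units respond to the hidden units
print(output)
```

Training would repeat this pass over the data many times, nudging the weights so the outputs better match known answers.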

Deep Learning

Deep learning uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.
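For a sense of what "many layers" looks like in code, the hedged sketch below stacks convolutional and dense layers into a small image classifier with Keras; the input size and layer choices are assumptions for illustration, not a recommended architecture.

```python
# Minimal sketch: a deep network for image recognition stacks many layers of
# processing units. Input size, layer widths and depth are illustrative
# assumptions, not a tuned architecture.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),            # e.g. small grayscale images
    layers.Conv2D(32, 3, activation="relu"),   # early layers learn simple features
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # deeper layers learn complex patterns
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),    # scores for 10 possible classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # prints the layer stack and parameter counts
```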

Additionally, several technologies enable and support AI:

Computer vision relies on pattern recognition and deep learning to recognize what’s in a picture or video. When machines can process, analyze and understand images, they can capture images or videos in real time and interpret their surroundings.

Natural language processing (NLP) is the ability of computers to analyze, understand and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.
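As a minimal sketch of analyzing and generating human language in code, the example below uses the Hugging Face transformers library and its default models; that library and the prompts are assumptions chosen for illustration, not something the article prescribes.

```python
# Minimal sketch: understanding and generating language with off-the-shelf
# models. The transformers library and its default models are assumptions.
from transformers import pipeline

# Analyze: classify the sentiment of a sentence.
classify = pipeline("sentiment-analysis")
print(classify("The delivery was late, but the support team fixed it quickly."))

# Generate: continue a prompt with a small language model.
generate = pipeline("text-generation", model="gpt2")
print(generate("Natural language processing lets computers", max_new_tokens=20))
```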

Graphics processing units (GPUs) are key to AI because they provide the heavy compute power that’s required for iterative processing. Training neural networks requires big data plus compute power.

The Internet of Things generates massive amounts of data from connected devices, most of it unanalyzed. Automating models with AI will allow us to use more of it.

Advanced algorithms are being developed and combined in new ways to analyze more data faster and at multiple levels. This intelligent processing is key to identifying and predicting rare events, understanding complex systems and optimizing unique scenarios.

APIs, or application programming interfaces, are portable packages of code that make it possible to add AI functionality to existing products and software packages. They can add image recognition capabilities to home security systems and Q&A capabilities that describe data, create captions and headlines, or call out interesting patterns and insights in data.
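As an illustration of that pattern, the sketch below posts an image to an AI service over HTTP from an existing application; the endpoint, credential and response fields are hypothetical placeholders rather than a real API.

```python
# Minimal sketch: adding image recognition to an existing product by calling an
# AI service through an API. The endpoint, key and response fields below are
# hypothetical placeholders, not a real service.
import requests

API_URL = "https://api.example.com/v1/vision/labels"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                              # placeholder credential

with open("front_door.jpg", "rb") as image_file:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": image_file},
        timeout=10,
    )

response.raise_for_status()
for label in response.json().get("labels", []):       # hypothetical response shape
    print(label["name"], label["confidence"])
```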

In summary, the goal of AI is to provide software that can reason on input and explain on output. AI will provide human-like interactions with software and offer decision support for specific tasks, but it’s not a replacement for humans – and won’t be anytime soon. 

Next Steps

See how Artificial Intelligence Solutions can augment human creativity and endeavors.

Featured capability for Artificial Intelligence

SAS® for Machine Learning and Deep Learning

AI is simplified when you can prepare data for analysis, develop models with modern machine-learning algorithms and integrate text analytics all in one product. Plus, you can code projects that combine SAS with other languages, including Python, R, Java or Lua.


Contact SAS to find out how we can help you.

What are the challenges of using artificial intelligence?

Artificial intelligence is going to change every industry, but we have to understand its limits.

The principal limitation of AI is that it learns from data. There is no other way in which knowledge can be incorporated. That means any inaccuracies in the data will be reflected in the results. And any additional layers of prediction or analysis have to be added separately.

Today's AI systems are trained to do a clearly defined task. The system that plays poker cannot play solitaire or chess. The system that detects fraud cannot drive a car or give you legal advice. In fact, an AI system that detects health care fraud cannot accurately detect tax fraud or warranty claims fraud.

In other words, these systems are very specialized. They are focused on a single task and are far from behaving like humans.

Likewise, self-learning systems are not autonomous systems. The imagined AI technologies that you see in movies and TV are still science fiction. But computers that can probe complex data to learn and perfect specific tasks are becoming quite common.