
What Is Artificial Intelligence (AI)?

The concept of “a machine that thinks” dates back to ancient Greece. But since the introduction of electronic computing (and relative to some of the topics discussed in this article), key events and milestones in the evolution of AI include the following:

1950.
Alan Turing publishes Computing Machinery and Intelligence. In this paper, Turing, famous for breaking the German ENIGMA code during WWII and often referred to as the “father of computer science,” asks the following question: “Can machines think?”

From there, he offers a test, now famously known as the “Turing Test,” in which a human interrogator tries to distinguish between a computer’s and a human’s text responses. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI, and an ongoing concept within philosophy as it draws on ideas around linguistics.
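
To make the setup concrete, here is a minimal Python sketch of the imitation game (an illustration added here, not part of Turing’s paper): an interrogator receives answers from two hidden respondents and must guess which one is the machine. The canned responses and the crude guessing heuristic are assumptions made for the sketch.

```python
import random

# Canned answers stand in for a human and a machine respondent (illustrative only).
def human_respondent(question: str) -> str:
    return "I think so, though it depends on what you mean by 'think'."

def machine_respondent(question: str) -> str:
    return "Yes. I process symbols according to rules, which some would call thinking."

def imitation_game_round(question: str) -> bool:
    """Run one round: shuffle the two respondents, let the interrogator
    guess which answer is the machine's, and report whether the guess
    was correct."""
    respondents = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(respondents)
    answers = [(label, fn(question)) for label, fn in respondents]

    # Crude interrogator heuristic (an assumption for the sketch):
    # guess that the more mechanical-sounding answer is the machine's.
    guess = max(answers, key=lambda a: a[1].count("rules"))[0]
    return guess == "machine"

if __name__ == "__main__":
    trials = 1000
    correct = sum(imitation_game_round("Can machines think?") for _ in range(trials))
    # This crude heuristic flags the machine every time; a convincing machine
    # would push the interrogator back toward chance (about 50%).
    print(f"Interrogator identified the machine in {correct}/{trials} rounds")
```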

1956.
John McCarthy coins the term “artificial intelligence” at the first-ever AI conference at Dartmouth College. (McCarthy went on to create the Lisp language.) Later that year, Allen Newell, J.C. Shaw and Herbert Simon create the Logic Theorist, the first-ever running AI computer program.

1967.
Frank Rosenblatt develops the Mark 1 Perceptron, the first computer based on a neural network that “learned” through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against further neural network research efforts.

1980.
Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.
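
To illustrate what “training with backpropagation” means in practice, the following minimal NumPy sketch (an illustration, not drawn from the article) trains a tiny two-layer network on the XOR problem: a forward pass, an error, and gradients propagated backward through the layers to update the weights. The layer sizes, learning rate, and task are arbitrary choices for the example.

```python
import numpy as np

# Toy dataset: XOR, a classic non-linearly-separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=1.0, size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predicted outputs

    # Mean squared error loss (kept simple for the sketch)
    loss = np.mean((p - y) ** 2)

    # Backward pass: the chain rule applied layer by layer
    dp = 2 * (p - y) / len(X)         # dLoss/dp
    dz2 = dp * p * (1 - p)            # through the output sigmoid
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0, keepdims=True)
    dh = dz2 @ W2.T
    dz1 = dh * h * (1 - h)            # through the hidden sigmoid
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0, keepdims=True)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", float(loss))
print("predictions:", p.round(2).ravel())
```

Modern frameworks compute these gradients automatically, but the chain-rule bookkeeping they perform is the same.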

1995.
Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern Approach, which becomes one of the leading textbooks in the study of AI. In it, they explore four potential goals or definitions of AI, which differentiate computer systems based on rationality and on thinking versus acting.

1997.
IBM’s Deep Blue beats then-world chess champion Garry Kasparov in a chess match (and rematch).

2004.
John McCarthy writes a paper, What Is Artificial Intelligence?, and proposes an often-cited definition of AI. By this time, the era of big data and cloud computing is underway, enabling organizations to manage ever-larger data estates, which will one day be used to train AI models.

2011.
IBM Watson® beats champions Ken Jennings and Brad Rutter at Jeopardy! Around this time, data science begins to emerge as a popular discipline.

2015.
Baidu’s Minwa supercomputer uses a special deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.
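
For intuition about the operation at the heart of a convolutional neural network, the sketch below (an illustration, not Minwa’s actual architecture) slides a small hand-picked edge filter across a synthetic grayscale image to produce a feature map; a real CNN stacks many such filters and learns their weights from data.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D convolution (really cross-correlation, as in most
    deep-learning libraries): slide the kernel over the image and
    take a dot product at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny synthetic "image": a bright vertical bar on a dark background.
image = np.zeros((8, 8))
image[:, 3:5] = 1.0

# Hand-picked vertical-edge filter; in a real CNN these weights are learned.
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

feature_map = conv2d(image, edge_filter)
print(feature_map)  # strong responses where the bar's edges are
```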

2016.
DeepMind’s AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves). Later, Google purchased DeepMind for a reported USD 400 million.

2022.
A rise in large language models (LLMs), such as OpenAI’s ChatGPT, creates an enormous change in the capabilities of AI and its potential to drive business value. With these new generative AI practices, deep-learning models can be pretrained on large amounts of data.
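
The pretraining behind these generative models comes down to learning to predict the next token from large amounts of text. The toy sketch below (an illustration using a simple bigram count model, not a transformer or any OpenAI API) shows that objective on a tiny corpus: estimate next-token probabilities from data, then sample from them to generate text.

```python
import random
from collections import Counter, defaultdict

# A tiny stand-in corpus; real LLM pretraining uses terabytes of text
# and a transformer, not bigram counts.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Pretraining": count which token follows which (a bigram language model).
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token in proportion to how often it followed
    `prev` in the training data."""
    counts = transitions[prev]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# "Generation": start from a prompt token and keep predicting the next token.
random.seed(0)
token = "the"
output = [token]
for _ in range(10):
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```

A transformer-based LLM replaces the count table with a learned neural network and a context far longer than one token, but the next-token objective is the same.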

2024.
The latest AI trends point to a continuing AI renaissance. Multimodal models that can take multiple types of data as input are providing richer, more robust experiences. These models bring together computer vision image recognition and NLP speech recognition capabilities. Smaller models are also making strides in an era of diminishing returns for massive models with large parameter counts.
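
As a rough sketch of how a multimodal model can combine inputs, the snippet below (an illustration with made-up stand-in encoders, not any production system) maps an image and a piece of text to embeddings and fuses them into a single joint vector that a downstream classifier head could be trained on; real multimodal models learn the encoders and the fusion jointly.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_image(image: np.ndarray) -> np.ndarray:
    """Stand-in image encoder: flatten and randomly project to a 16-d
    embedding. A real system would use a learned vision model here."""
    proj = rng.normal(size=(image.size, 16))
    return image.ravel() @ proj

def encode_text(tokens: list[str]) -> np.ndarray:
    """Stand-in text encoder: hash tokens into a 16-d bag-of-words vector.
    A real system would use a learned language model here."""
    vec = np.zeros(16)
    for t in tokens:
        vec[hash(t) % 16] += 1.0
    return vec

# Late fusion: concatenate the two modality embeddings into one vector.
image = rng.random((8, 8))
text = "a photo of a cat".split()
joint = np.concatenate([encode_image(image), encode_text(text)])
print(joint.shape)  # (32,)
```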