+30 2221311007

9am - 10pm

ORIONS & IONON 13

Overview

  • Founded Date: December 11, 2008
  • Sectors: Education
  • Posted Jobs: 0

Company Description

What Is Artificial Intelligence (AI)?

The concept of “a machine that thinks” dates back to ancient Greece. But since the introduction of electronic computing (and relative to some of the topics discussed in this article), key events and milestones in the evolution of AI include the following:

1950.
Alan Turing publishes Computing Machinery and Intelligence. In this paper, Turing, famous for breaking the German Enigma code during WWII and often referred to as the “father of computer science,” asks the following question: “Can machines think?”

From there, he offers a test, now famously known as the “Turing Test,” in which a human interrogator tries to distinguish between a computer’s and a human’s text responses. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI, and an ongoing concept within philosophy, as it draws on ideas about linguistics.

1956.
John McCarthy coins the term “artificial intelligence” at the first-ever AI conference, held at Dartmouth College. (McCarthy went on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw and Herbert Simon create the Logic Theorist, the first-ever running AI computer program.

1967.
Frank Rosenblatt develops the Mark 1 Perceptron, the first computer based on a neural network that “learned” through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both a landmark work on neural networks and, at least for a while, an argument against future neural network research initiatives.
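The perceptron’s “trial and error” learning can be sketched in a few lines: predict, compare against the label, and nudge the weights whenever the prediction is wrong. The AND-gate data, learning rate and epoch count below are illustrative assumptions, not details from the timeline.

```python
# Minimal sketch of the perceptron learning rule: on each mistake,
# move the weights toward the misclassified example.

def train_perceptron(samples, epochs=10, lr=1.0):
    """samples: list of (inputs, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n          # weights, starting at zero
    b = 0.0                # bias
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation > 0 else 0
            error = y - pred               # 0 if correct, +/-1 if wrong
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the logical AND function (linearly separable, so the rule converges).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Because AND is linearly separable, the updates settle on a separating line after a few epochs; Minsky and Papert’s critique centered on functions (such as XOR) that no single perceptron can separate.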

1980s.
Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.
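Backpropagation itself fits in a short sketch: run a forward pass, then use the chain rule to carry the output error backwards through each layer, giving every weight a gradient to descend. The tiny one-hidden-layer network, OR-gate data and hyperparameters below are illustrative assumptions, written in plain Python.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)  # deterministic toy run
# A 2-input, 2-hidden-unit, 1-output network with sigmoid activations.
w1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1), random.uniform(-1, 1)]
b2 = 0.0

# Illustrative training data: the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, o

def loss():
    return sum(0.5 * (forward(x)[1] - y) ** 2 for x, y in data)

lr = 0.5
before = loss()
for _ in range(5000):
    for x, y in data:
        h, o = forward(x)
        # Backward pass: the chain rule propagates the output error back
        # through each layer to give every weight its gradient.
        d_o = (o - y) * o * (1 - o)                    # error at the output
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):                             # output-layer update
            w2[j] -= lr * d_o * h[j]
        b2 -= lr * d_o
        for j in range(2):                             # hidden-layer update
            w1[j][0] -= lr * d_h[j] * x[0]
            w1[j][1] -= lr * d_h[j] * x[1]
            b1[j] -= lr * d_h[j]
after = loss()
```

After training, the squared-error loss has dropped and the network’s rounded outputs match the OR targets; the same backward-pass pattern scales to the deep networks discussed later in the timeline.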

1995.
Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern Approach, which becomes one of the leading textbooks in the study of AI. In it, they explore four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and thinking versus acting.

1997.
IBM’s Deep Blue beats then-world chess champion Garry Kasparov in a chess match (and rematch).

2004.
John McCarthy writes a paper, What Is Artificial Intelligence?, and proposes an often-cited definition of AI. By this time, the era of big data and cloud computing is underway, enabling organizations to manage ever-larger data estates, which will one day be used to train AI models.

2011.
IBM Watson® beats champions Ken Jennings and Brad Rutter at Jeopardy! Also, around this time, data science begins to emerge as a popular discipline.

2015.
Baidu’s Minwa supercomputer uses a special deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.

2016.
DeepMind’s AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves). Google had earlier purchased DeepMind for a reported USD 400 million.

2022.
A rise in large language models (LLMs), such as OpenAI’s ChatGPT, creates an enormous change in the performance of AI and its potential to drive enterprise value. With these new generative AI practices, deep-learning models can be pretrained on large amounts of data.

2024.
The current AI patterns indicate a continuing AI renaissance. Multimodal designs that can take multiple types of information as input are providing richer, more robust experiences. These designs unite computer vision image acknowledgment and NLP speech acknowledgment capabilities. Smaller models are also making strides in an age of lessening returns with massive models with big specification counts.