<h1>AI’s Origin Traced to Ancient Greece</h1>
<p><em><small>Bench Talk for Design Engineers | The Official Blog of Mouser Electronics (https://th.mouser.com/blog) | EIT 2020: The Intelligent Revolution, General, Industrial, IoT | Mon, 21 Sep 2020 19:03:52 GMT</small></em></p>
<p><img alt="Theme Image" src="/blog/Portals/11/Schmidhuber_AI%20Traced%20to%20Ancient%20Greece_Theme%20Image-min.jpg" style="width: 600px; height: 424px;" title="Theme Image" /></p>
<p style="font-size:10px;"><em><small>(Source: imagIN.gr photography/Shutterstock.com)</small></em></p>
<p>After more than a century of research on Artificial Intelligence (AI), the field has recently become both popular and enormously important. In particular, Pattern Recognition and Machine Learning have been revolutionized through Deep Learning (DL), a relatively new moniker for Artificial Neural Networks (NNs) that learn from experience. DL is now heavily used in industry and daily life. Image and speech recognition on your smartphone, and automatic translation from one language to another, are just two examples of DL in action.</p>
<p>Many people in the Anglosphere assume that DL is a creation of the Anglosphere nations. However, DL was, in fact, invented where English is not an official language. Let us first zoom back and have a look at AI history in the broader context of computing history.</p>
<div>
<h2>Early Computing Pioneers</h2>
</div>
<p>One of the earliest mechanical computing machines was the <a href="https://www.antikythera-mechanism.gr/" target="_blank">Antikythera Mechanism</a>, built in Greece in the first century BC. Running with 37 gears of various sizes, it was used to predict astronomical events (<strong>Figure 1</strong>).</p>
<p><img alt="" src="/blog/Portals/11/Schmidhuber-AI-Traced-Antikythera-Mechanism-Figure1-Adjusted-min_1.jpg" style="width: 600px; height: 400px;" title="" /></p>
<p><em><small><strong>Figure 1</strong>: The Antikythera Mechanism was built in Greece in the first century BC. The device consisted of 37 gears of various sizes. It was used to predict astronomical events. (Source: DU ZHI XING/Shutterstock.com)</small></em></p>
<p>The sophistication of the Antikythera mechanism was not surpassed until some 1,600 years later, when Peter Henlein of Nürnberg began building miniaturized pocket watches in 1505. Like the Antikythera mechanism, however, Henlein’s machines were not general machines calculating results from user-given inputs. They simply used gear ratios to divide time: Watches divide the number of seconds by 60 to get minutes, and the number of minutes by 60 to get hours.</p>
<p>In 1623, however, Wilhelm Schickard in Tübingen constructed the first automatic calculator for basic arithmetic. This was soon followed by Blaise Pascal's Pascaline in 1640, and Gottfried Wilhelm Leibniz' step reckoner in 1670, the first machine to perform all four fundamental arithmetic operations of addition, subtraction, multiplication, and division. In 1703, Leibniz published his <em>Explanation of Binary Arithmetic</em>, the approach to binary computing that is now used by virtually all modern computers.</p>
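<p>Leibniz's idea can be illustrated in a few lines of Python (a sketch of my own; the helper name and example value are not from any historical source). Any non-negative integer can be written with only the symbols 0 and 1 by repeatedly dividing by two and keeping the remainders:</p>

```python
# Leibniz's binary representation: divide by 2 repeatedly and
# collect the remainders, least significant bit first.

def to_binary(n):
    """Return the binary digits of n (n >= 0) as a string."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder is the next bit
        n //= 2
    return "".join(reversed(digits))

print(to_binary(1703))  # the year Leibniz published his paper
```

<p>Because every remainder is 0 or 1, the whole number system collapses to two symbols, which is why binary maps so naturally onto two-state electronic switches.</p>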
<p>Mathematical analysis and data science also continued to develop. Around 1800, Carl Friedrich Gauss and Adrien-Marie Legendre developed the least squares method of pattern recognition through linear regression (now sometimes called "shallow learning"). Gauss famously used such techniques to rediscover the asteroid Ceres by analyzing data points of previous observations, then using various tricks to adjust the parameters of a predictor to correctly predict the new location of Ceres.</p>
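<p>This "shallow learning" of Gauss and Legendre can be sketched in a few lines of Python. The observation points below are invented for illustration (they are not Gauss's Ceres data); the sketch fits a line y = m·x + b by minimizing the sum of squared errors:</p>

```python
# Closed-form least squares fit of a line y = m*x + b,
# the method of Gauss and Legendre (circa 1800).

def fit_line(xs, ys):
    """Return slope m and intercept b minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x)
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

# Noisy observations of a roughly linear trend (made-up data)
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.1, 8.9]
m, b = fit_line(xs, ys)
print(round(m, 2), round(b, 2))
```

<p>With one input and one output, the optimal parameters have a closed form; deep learning generalizes the same idea of error minimization to millions of parameters adjusted iteratively.</p>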
<p>The first practical program-controlled machines appeared at about this time in France: automated looms programmed by punch cards. Around 1800, Joseph Marie Jacquard and colleagues thus became the first practical programmers.</p>
<p>In 1837, Charles Babbage of England designed a more general program-controlled machine called the Analytical Engine. Nobody was able to build it, perhaps because it was still based on the cumbersome decimal system instead of Leibniz’ binary arithmetic. However, in 1991, a specimen of his less general Difference Engine No. 2 was finally built and shown to work.</p>
<p>At the beginning of the 20<sup>th</sup> century, progress toward intelligent machines accelerated dramatically. Here are major milestones related to the development of AI since 1900:</p>
<ul>
<li>In 1914, Spaniard Leonardo Torres y Quevedo built the first chess-playing machine, using electro-magnetic components. It could play out king-rook endgames from any position without human intervention. Back then, chess was considered an intelligent activity.</li>
<li>In 1931, Austrian Kurt Gödel became the founder of AI theory, and of theoretical computer science in general, when he introduced the first universal coding language that was based on integers. He used it to describe general computational theorem provers and to identify the fundamental limitations of mathematics, computation, and AI. Much of the later work in AI and expert systems during the 1960s and ‘70s applied Gödel’s approach to theorem proving and deduction.</li>
<li>In 1935, American mathematician Alonzo Church published an extension of Gödel's 1931 results, showing that the <a href="https://www.sciencedirect.com/topics/mathematics/entscheidungsproblem">Entscheidungsproblem</a>, or decision problem, has no general algorithmic solution. To do so, he introduced an alternative universal language called lambda calculus, which is the basis of the popular programming language LISP. Alan Turing in the U.K. reformulated that result in 1936, using yet another equally powerful theoretical construct, now called the Turing machine (<strong>Figure</strong> <strong>2</strong>). He also suggested a subjective AI test.</li>
</ul>
<p style="margin-left:.5in;"><img alt="Turing machine" src="/blog/Portals/11/Schmidhuber_Turing%20machine%20image-min.jpg" style="width: 600px; height: 400px;" title="Turing machine" /></p>
<p><em><small><strong>Figure 2</strong>: Alan Turing in the U.K. reformulated Church's result in 1936, using a theoretical construct now called the Turing machine. (Source: EQRoy/Shutterstock.com)</small></em></p>
<ul>
<li>Between 1935 and 1941, Konrad Zuse built the first practical, working program-controlled computer, the Z3. In the 1940s, he also devised the first high-level programming language, and used it to write the first general chess program. In 1950, Zuse delivered the world’s first commercial computer, the Z4, several months before the first UNIVAC.</li>
<li>Although the name "AI" was coined by John McCarthy at the Dartmouth Conference of 1956, the topic was addressed five years earlier at the famous conference on computers and human thought in Paris ("<a href="https://www.rutherfordjournal.org/article050103.html" target="_blank">Les Machines à Calculer et la Pensée Humaine</a>”). Herbert Bruderer rightly calls it the first conference on AI. During that conference, in which hundreds of world experts participated, Norbert Wiener played a game of chess against Torres y Quevedo’s famous chess machine mentioned earlier.</li>
<li>In the late 1950s, Frank Rosenblatt developed perceptrons and simple learning algorithms for "shallow neural nets." These were actually variants of old linear regressors introduced by Gauss and Legendre around 1800. Rosenblatt later also thought about deeper nets but did not get very far.</li>
<li>In 1965, Alexey Ivakhnenko and Valentin Lapa, two Ukrainians, published the first work on a learning algorithm for deep multilayer perceptrons with an arbitrary number of layers. If there is a "father of deep learning" in feedforward networks, it is Ivakhnenko. His nets were deep even by post-2000 standards (up to eight layers), and like today's deep NNs, they learned to create hierarchical, distributed internal representations of incoming data. Deep learning, which has become enormously important in recent decades, is loosely inspired by the human brain, which contains about 100 billion neurons, each connected to roughly 10,000 others. Some are input neurons that feed the other neurons with data (sound, vision, tactile, pain, hunger). Others are output neurons that control muscles. Most neurons are hidden in between, where thinking takes place. Your brain learns by changing the strengths, or weights, of the connections, which determine how strongly neurons influence each other and which encode all your lifelong experiences. Today’s DL artificial neural networks (NNs) are inspired by this architecture and learn better than previous methods.</li>
<li>In 1969, Marvin Minsky and Seymour Papert published their famous book “<em>Perceptrons: An Introduction to Computational Geometry</em>” about the limitations of shallow learning, discussing a problem that had in fact been solved four years earlier by Alexey Ivakhnenko and Valentin Lapa. It has been said that Minsky's book slowed NN-related research, but that is not the case, certainly not for research happening outside the US. In subsequent decades, many researchers, especially in Eastern Europe, built on the work of Ivakhnenko and others. Even in the 2000s, people were still using his highly cited method for training deep nets.</li>
</ul>
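<p>The idea of learning by adjusting connection weights, from Rosenblatt's shallow perceptrons to today's deep nets, can be made concrete with a small Python sketch. It trains a single Rosenblatt-style "neuron" to compute the logical AND function; the training data, function names, and the choice of AND as the task are illustrative assumptions of this sketch, not details from the historical papers:</p>

```python
# A Rosenblatt-style perceptron: one "neuron" whose connection
# weights are adjusted from experience until its outputs are correct.

def train_perceptron(samples, epochs=10):
    """samples: list of ((x1, x2), target) pairs with 0/1 targets."""
    w = [0, 0]  # connection weights ("strengths")
    b = 0       # bias, acting as a movable firing threshold
    for _ in range(epochs):
        for (x1, x2), target in samples:
            fired = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - fired
            # Perceptron rule: strengthen or weaken each connection
            # in proportion to the error and its input.
            w[0] += err * x1
            w[1] += err * x2
            b += err
    return w, b

# Truth table of logical AND as (inputs, target) pairs
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in and_data])  # matches AND targets
```

<p>A single such unit can only separate data with a straight line, which is exactly the limitation Minsky and Papert emphasized; the deep multilayer nets of Ivakhnenko and Lapa stack many such units to overcome it.</p>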
<p>So much for the history up to 1970. AI History Part II will take a closer look at what has happened since then.</p>