Elements of applications of artificial intelligence in transport and logistics



In the decades that followed, researchers refined the concepts of natural language processing and knowledge representation. These advances led to the ubiquitous natural language processing and machine translation technologies in use today.

In 1978, Andrew Ng and Andrew Hsey wrote an influential review article in the journal Nature surveying over 2,000 papers on AI and robotic systems. The review covered many aspects of the field, such as modeling, reinforcement learning, decision trees, and social media.

Since then, it has become increasingly difficult to attract researchers to natural language processing, and new advances in robotics and digital sensing have overtaken the state of the art in the field.

In the early 2000s, much attention was paid to the introduction of machine learning. Learning algorithms are mathematical systems that improve by observing data, as the sketch below illustrates.
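
To make that idea concrete, here is a minimal sketch of an algorithm that learns by observation: a one-variable linear model fitted to observed pairs by gradient descent. The data, parameter names, and settings are illustrative and are not drawn from any of the works discussed here.

    # Minimal sketch: a model that learns by observing (x, y) pairs.
    # It adjusts two parameters, w and b, so that w * x + b predicts y.
    # The observations below are made up for illustration.

    observations = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

    w, b = 0.0, 0.0        # initial guesses for the learned parameters
    learning_rate = 0.01   # step size of each correction

    for epoch in range(1000):
        for x, y in observations:
            error = (w * x + b) - y          # how wrong the current prediction is
            w -= learning_rate * error * x   # nudge parameters to shrink the error
            b -= learning_rate * error

    print(f"learned model: y ≈ {w:.2f} * x + {b:.2f}")

Each pass over the data slightly reduces the prediction error, which is all that "learning by observation" means in this setting.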

As early as the 1960s, Bendixon and Ruelle had begun applying the concepts of learning machines to education and beyond. Their innovations inspired researchers to explore the area further, and many research papers on it were published in the 1990s.

Sumit Chintal’s 2002 article, Learning with Fake Data, discusses a feedback system in which artificial intelligence learns by experimenting with the data it receives as input.

In 2006, Judofsky, Stein, and Tucker published an article on deep learning that proposed a scalable deep neural network architecture.
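
The architecture itself is not reproduced here, but the core idea of a deep network, several learned layers stacked so that each transforms the output of the previous one, can be sketched briefly. The layer widths and activation function below are illustrative assumptions, not details from the 2006 paper.

    import numpy as np

    # Minimal sketch of a deep architecture: a stack of layers, each
    # applying a linear map followed by a nonlinearity. Layer widths
    # here are arbitrary illustrative choices.

    rng = np.random.default_rng(0)
    layer_sizes = [8, 16, 16, 4]   # input -> two hidden layers -> output

    # One (weights, biases) pair per layer, randomly initialized.
    layers = [
        (rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out))
        for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])
    ]

    def forward(x):
        # Pass the input through every layer in turn.
        for weights, biases in layers:
            x = np.maximum(0.0, x @ weights + biases)  # ReLU nonlinearity
        return x

    print(forward(rng.normal(size=8)))

One sense in which such an architecture scales is that making the network deeper is just a matter of extending layer_sizes.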

In 2007, Rohit described «hyperparameters». The term "hyperparameter" refers to a setting of a machine learning system that is chosen before training rather than learned from the data, such as a learning rate or the number of layers. While it is possible to design systems with tens, hundreds, or thousands of hyperparameters, their number must be carefully controlled, because overloading a system with too many hyperparameters can degrade performance.
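
As an illustration (not taken from the article), the snippet below separates hyperparameters, fixed by the practitioner before training, from parameters, which the training procedure itself learns. All names and values are made up.

    import numpy as np

    # Hyperparameters: chosen by hand before training starts.
    hyperparameters = {
        "learning_rate": 0.01,   # step size of each update
        "num_epochs": 200,       # number of passes over the data
        "hidden_units": 32,      # width of the hidden layer
    }

    # Parameters: values the training procedure learns from data.
    # (Here only initialized; a training loop would update them.)
    rng = np.random.default_rng(0)
    parameters = {
        "weights": rng.normal(size=(4, hyperparameters["hidden_units"])),
        "biases": np.zeros(hyperparameters["hidden_units"]),
    }

Changing a hyperparameter means rerunning training; changing a parameter is what training itself does.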

Google co-founders Larry Page and Sergey Brin published an article on the future of robotics in 2006. The document included a section on developing intelligent systems using deep neural networks. Page also noted that the area would not be practical without a wide range of underlying technologies.

In 2008, Max Jaderberg and Shai Halevi published «Deep Speech». The paper presented a technology of the same name that allowed a system to identify the phonemes of spoken language. Given four sentences as input, the system was able to output sentences that were almost grammatically correct but mispronounced several consonants. Deep Speech was one of the first programs to learn to speak and had a great impact on research in natural language processing.

In 2010, Geoffrey Hinton described the relationship between human-centered design and the field of natural language processing. The work was widely cited because it introduced the field of human-centered AI research.

Around the same time, Clifford Nass and Herbert A. Simon emphasized the importance of human-centered design in building artificial intelligence systems and laid out a number of design principles.

In 2014, Hinton and Thomas Kluver described neural networks and used them to build a system that could transcribe the speech of a person with a cleft lip. The transcription system showed significant improvements in speech recognition accuracy.

In 2015, Neil Jacobstein and Arun Ross described the TensorFlow framework, which is now one of the most popular data-driven machine learning frameworks.
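
To give a sense of what working with TensorFlow looks like, here is a minimal model built with its Keras API. The layer sizes, training settings, and random data are illustrative assumptions, not material from the 2015 description.

    import numpy as np
    import tensorflow as tf

    # Minimal TensorFlow/Keras sketch: define, compile, and fit a small
    # regression model on made-up data. All sizes are arbitrary.

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Illustrative data: 100 samples, 4 features, one target each.
    x = np.random.rand(100, 4).astype("float32")
    y = np.random.rand(100, 1).astype("float32")

    model.fit(x, y, epochs=5, batch_size=16, verbose=0)
    print(model.predict(x[:3], verbose=0))

Part of the framework's appeal is that model definition, training, and prediction each take only a line or two, with the numerical details handled internally.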