Artificial intelligence (AI) is intelligence exhibited by machines, as opposed to the natural intelligence of humans. AI research is generally considered to have begun in 1956, at a summer workshop held at Dartmouth College.
Artificial intelligence research has experienced many ups and downs since its inception. Between 1956 and 1974 it enjoyed a golden age: scientists predicted that within a few years they would build a computer with the same cognitive capacity as a human being, and they attracted investments worth millions for their research. However, their estimates were wrong, the very high expectations could not be met, and the funding dried up. The period between 1974 and 1980 is known as the AI winter. Beyond the financial problems, projects were hampered by very limited computational power and data storage capacity, which prevented the necessary processing and experiments from being carried out.
Between 1980 and the end of the 1990s, the artificial intelligence industry went through several more cycles of popularity, with investment appearing and disappearing accordingly. By the end of the 1990s, computers finally had enough capacity to make real progress in the field. In fact, the computer used to play chess in 1997 was roughly ten million times more powerful than the one used for the same purpose in 1951.
Since then, the perspective on AI has changed. Big Data and the growing power of computers have enabled great advances, although in a different direction from the earlier research. Progress is now being made in deep learning, neural networks and machine learning, all of them branches of artificial intelligence. There are also other branches or sub-branches, such as predictive analytics, natural language understanding and facial recognition.
Bismart offers predictive analytics, natural language understanding, and facial and object recognition services:
Predictive analytics is a branch of machine learning. It relies on predictive models that exploit patterns found in historical and transactional data to identify risks and opportunities. With predictive analytics, future events can be anticipated early enough for organizations to prepare in advance. For example, a predictive model could flag an impending failure of a machine on a production line, so the problem can be fixed before the machine breaks down, avoiding or reducing downtime in the production chain.
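As a rough illustration of this kind of predictive maintenance, the Python sketch below trains a classifier on synthetic sensor readings; the feature names (temperature, vibration, running hours) and the failure rule are invented purely for the example.

```python
# Minimal predictive-maintenance sketch (synthetic data, hypothetical features).
# A classifier is trained on historical sensor readings to estimate the
# probability that a machine is about to fail.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for historical data: temperature, vibration, running hours.
X = rng.normal(size=(1000, 3))
# Toy rule: high temperature combined with high vibration precedes failure.
y = ((X[:, 0] + X[:, 1]) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Failure probability for a new sensor reading; a high value could trigger
# preventive maintenance before the production line goes down.
print(model.predict_proba(X_test[:1])[0, 1])
```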
Netflix is one of the many companies that use predictive analytics to improve their services, in this case the recommendation engine. Around 80% of the content watched on Netflix comes from the platform's recommendations, which has helped reduce its cancellation rate. Netflix also feeds consumption data, such as the time of day and the amount of content watched, back into its recommendations to improve them.
Natural language understanding (NLU) is a subfield of natural language processing. NLU is one of the most complex problems in AI, often described as an AI-hard problem. The field is gaining popularity thanks to its use in large-scale content analysis: through natural language understanding it is possible to extract meaning from text and audiovisual content, whether it comes from structured or unstructured data, and at large volumes.
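As a minimal illustration of what such content analysis looks like in practice, the sketch below classifies a handful of short customer comments with a standard text-classification pipeline; the example sentences and labels are invented for the illustration.

```python
# Minimal large-scale text analysis sketch (invented example sentences).
# Free-form comments are turned into numeric features and classified, the
# kind of pipeline used to analyze unstructured content in bulk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the delivery arrived late and damaged",
    "great service, very happy with the product",
    "the package never arrived",
    "excellent quality, will buy again",
]
labels = ["complaint", "praise", "complaint", "praise"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Expected to lean towards 'complaint' given the overlapping vocabulary.
print(classifier.predict(["the package arrived damaged"]))
```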
Virtual assistants such as Alexa, Siri or Google Assistant are an example of this technology. Siri, Apple's assistant, recognizes commands thanks to the training of its neural network: the system computes the probability that the sound it has registered is in fact "Hey, Siri" by comparing it with the reference model, and if that score exceeds a certain threshold, the assistant activates.
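The activation logic can be illustrated with a small sketch; note that the scoring function here is a stand-in rather than Apple's actual detector, and the threshold value is hypothetical.

```python
# Minimal wake-word gating sketch (the scoring model is a stand-in, not
# Apple's detector). A detector returns a probability that the latest audio
# frame contains the trigger phrase; above a threshold, the assistant wakes up.
ACTIVATION_THRESHOLD = 0.85  # hypothetical cut-off value


def wake_word_probability(audio_frame: bytes) -> float:
    """Stand-in for a trained acoustic model that scores an audio frame."""
    # A real system would run a small neural network here; we return a fixed
    # value purely to illustrate the thresholding logic.
    return 0.91


def should_activate(audio_frame: bytes) -> bool:
    return wake_word_probability(audio_frame) >= ACTIVATION_THRESHOLD


print(should_activate(b"\x00\x01"))  # True with the stand-in score above
```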
Facial recognition is a biometric application of artificial intelligence capable of identifying a person from the shapes and textures of their face. Its uses are very diverse, ranging from security, such as identifying people wanted by the police, to more commercial uses such as retargeting. It can also be used to measure how much time people spend in a particular place, in order to manage waiting times and seating.
Facebook is a good example of facial recognition technology. In its image libraries, it is used to identify which images a person appears in based on a single picture of them. Facebook builds a sort of template of a face by analyzing every pixel of that face in an image; each template is unique, almost like a fingerprint. Every time an image is uploaded, the system compares the faces it contains with the stored templates and, when it finds a match, suggests a tag.
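A simplified sketch of this kind of template matching is shown below; the template vectors, the similarity threshold and the names are invented for illustration, not taken from Facebook's system.

```python
# Minimal face template-matching sketch (embeddings are made up).
# Each known person has a stored "template" vector; a new face is compared
# against all templates and a tag is suggested when similarity is high enough.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


templates = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob": np.array([0.2, 0.8, 0.5]),
}


def suggest_tag(new_face: np.ndarray, threshold: float = 0.9) -> str | None:
    best_name, best_score = None, 0.0
    for name, template in templates.items():
        score = cosine_similarity(new_face, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None


print(suggest_tag(np.array([0.85, 0.15, 0.35])))  # -> 'alice'
```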
Object recognition works similarly to facial recognition. It can be used, for example, on production lines to detect defective parts, to count the number of parts manufactured, or in the distribution of packages and other objects.
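As a minimal sketch of how a system might act on an object detector's output in such a production-line scenario, the example below counts defective parts from a list of detections; the labels and confidence values are hypothetical.

```python
# Minimal sketch of acting on object-detection output (detections invented).
# A detector on a production line returns labels with confidence scores;
# counting the "defective" ones can feed an alert or a running tally.
detections = [
    {"label": "part_ok", "confidence": 0.97},
    {"label": "part_defective", "confidence": 0.88},
    {"label": "part_ok", "confidence": 0.92},
]

defective = [
    d for d in detections
    if d["label"] == "part_defective" and d["confidence"] > 0.8
]
print(f"Parts inspected: {len(detections)}, defective: {len(defective)}")
```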