Artificial intelligence (AI), a branch of computer science, is a constellation of technologies generally grouped into two (2) main technical fields, namely, symbolic learning (SL) and machine learning (ML). Advances in hardware enable AI implementation, and such advances will likely facilitate innovation and thereby expand the field of AI.
AI attempts to mimic human intelligence. For example, a human navigates the outside world, moving from place to place and all the while making decisions by using sensory data such as sounds recorded by the ears, images and symbols captured by the eyes, temperature sensed by the skin, and smells perceived by the nose. Using these data, the brain computes a reality within the temporal and spatial dimensions.
Conventionally, symbolic learning, which falls under image processing, comprises two (2) main branches, namely, robotics and computer vision.
Machine learning (ML) is likewise divided into two (2) main branches, to wit, statistical learning and deep learning. Generally, machine learning is used for classification (e.g., identifying patterns in large volumes of business data) or prediction (e.g., forecasting outcomes from past data). In pattern identification, if the machine is left to figure out the patterns on its own, some in the industry refer to it as unsupervised learning. On the other hand, if the algorithm is trained on examples paired with known answers, it is referred to as supervised learning.
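The supervised/unsupervised distinction can be illustrated with a minimal pure-Python sketch (not from any particular library; the toy data, function names, and two-cluster setup are invented for illustration): a nearest-centroid classifier that uses the provided answers, versus a simple two-means clustering that must group the same data without them.

```python
# Hypothetical toy data: one feature value per record, with an answer (label).
labeled = [(1.0, "low"), (1.2, "low"), (0.8, "low"),
           (4.0, "high"), (4.2, "high"), (3.9, "high")]
unlabeled = [x for x, _ in labeled]  # the same data with the answers removed

# Supervised learning: the answers (labels) are used to build the rule.
def nearest_centroid_fit(data):
    groups = {}
    for x, label in data:
        groups.setdefault(label, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in groups.items()}

def nearest_centroid_predict(centroids, x):
    return min(centroids, key=lambda label: abs(centroids[label] - x))

model = nearest_centroid_fit(labeled)
print(nearest_centroid_predict(model, 1.1))  # -> low

# Unsupervised learning: no answers given; the machine groups the data itself.
def two_means(points, iters=10):
    a, b = min(points), max(points)  # initial guesses for the two centers
    for _ in range(iters):
        ga = [p for p in points if abs(p - a) <= abs(p - b)]
        gb = [p for p in points if abs(p - a) > abs(p - b)]
        a, b = sum(ga) / len(ga), sum(gb) / len(gb)
    return a, b

print(two_means(unlabeled))  # two cluster centers, near 1.0 and 4.0
```

Note that the unsupervised routine recovers the same two groups, but it cannot name them "low" and "high" — naming requires the answers that only supervised learning sees.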
Statistical learning is further divided into speech recognition and natural language processing (NLP). Deep learning involves neural networks. As a field of AI, neural networks have evolved into convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
By way of background, the earliest neural net, the perceptron, was developed in the 1950s. It was considered the first step toward human-level machine intelligence. However, a 1969 book by MIT’s Marvin Minsky and Seymour Papert, entitled “Perceptrons,” proved mathematically that such networks could only perform the most basic functions.
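A minimal pure-Python sketch of the perceptron learning rule makes the limitation concrete (the learning rate and epoch count are arbitrary choices for illustration): the single unit learns the linearly separable AND function but, as Minsky and Papert proved, no setting of its weights can reproduce XOR.

```python
# A minimal perceptron (1950s-style): one unit with two weights, a bias,
# and a step activation, trained by the perceptron learning rule.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# AND is linearly separable: the perceptron learns it exactly.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([predict(w, b, x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]

# XOR is not linearly separable: a single perceptron always misclassifies
# at least one input, no matter how long it trains.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train_perceptron(XOR)
print([predict(w, b, x1, x2) for (x1, x2), _ in XOR])
```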
It was not until 1986 that Hinton showed that backpropagation could train a deep neural network; this result, together with the advancement of computer technology, renewed interest in neural networks. Even though AI has made great strides, much work remains to be done. After all, today’s neural networks are based on decades-old architecture and a fairly simplistic notion of how the human brain works, says Jacob Vogelstein, a neural network scientist. Efforts are under way at different universities and research centers to map the brain of the rat in order to obtain some answers to the many questions regarding how the human brain works. For example, the real brain is full of feedback: for every bundle of nerve fibers conveying signals from one region to the next, there is an equal or greater number of fibers coming back the other way, says M. Mitchell Waldrop, a neurologist working on the mapping project.
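Backpropagation’s significance can be sketched in a few lines of pure Python (the layer sizes, learning rate, epoch count, and random seed below are arbitrary choices, not from any cited work): by pushing the output error backward through a hidden layer, a two-layer network can learn XOR, the very function a single perceptron cannot.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: the function a single perceptron provably cannot learn.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# A small network: one hidden layer of 4 sigmoid units, one sigmoid output.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_loss = loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: push the output error back through each layer.
        dy = (y - t) * y * (1 - y)               # error signal at the output
        for j in range(4):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # error signal at hidden unit j
            W2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_loss = loss()
print(initial_loss, "->", final_loss)  # the squared error falls as training proceeds
```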
The increased interest in AI has resulted in tremendous growth of patent protection for AI in the U.S. However, as may be expected, in the U.S. the law on AI is unsettled. Two recent Court of Appeals for the Federal Circuit (CAFC) decisions, i.e., Berkheimer v. HP and Aatrix Software v. Green Shades Software, have not provided any bright-line rule concerning adjudication of AI. In fact, the indication is that there is an increased likelihood that decisions by a finder of fact will be required in issues involving AI. In Europe, AI and machine learning are largely unpatentable, being per se “of an abstract mathematical nature.” The European Patent Office will look closely at whether the claimed subject matter has a technical character as a whole. As a result, expressions such as “neural network” and “reasoning machine” usually refer to abstract models and are therefore unpatentable.