AI struggled to gain traction throughout the 20th century and endured two significant “AI winters.” Today, however, analysts expect the global AI market to reach $267 billion by 2027 and AI to contribute an estimated $15.7 trillion to the global economy by 2030.
The technology keeps growing in importance, leading researchers to speculate that AI might eventually possess the same qualities as the human brain. However, scientists still struggle to explain what makes the human brain and human thought processes so special, so machine sentience should not be expected any time soon. To learn more about the prospects of AI becoming sentient, read this AITJ article: Will AI Become Sentient?
In this article, I discuss the concepts of CNN, GNN, and GAN. However, to dig deeper into how they function, it is important to explain the primary aspects of how the human brain works. So, let’s take a look at the concept of neural networks first.
It is a well-known fact that the human brain consists of billions of neurons. A neuron collects signals from other neurons through branched structures called dendrites and sends electrical activity along a conducting fiber (the axon), which can divide into numerous branches. At the end of each branch, a synapse converts the electrical activity into chemical signals that excite or inhibit activity in the target neuron.
Learning occurs when the influence of one neuron on another changes, typically through a change in the strength of the synaptic connection between them. Researchers have made numerous attempts to mimic this biological wiring by creating artificial neural networks.
In the language of a computer, a neuron can be described as a function that takes the outputs of other “neurons” as inputs and produces a number between 0 and 1. Connected to each other, such artificial neurons form an artificial neural network.
Every neural network has an input layer and an output layer, each composed of numerous nodes. The artificial neural network receives information through the input layer, while the output layer represents the final results.
Between the input and the output, there are several intermediate layers of neurons. They are called hidden layers, and their main function is to transform the raw input into progressively more abstract features that are easier to process. And, as there can be many hidden layers, such algorithms are referred to as deep learning.
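The input/hidden/output structure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a trainable model; the layer sizes and random weights below are made up:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into (0, 1), matching the
    # "number between 0 and 1" each artificial neuron emits.
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# A tiny network: 4 input nodes -> two hidden layers of 8 -> 2 output nodes.
# Each layer is a weight matrix plus a bias vector (arbitrary values here).
layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Pass the input through every layer in turn; the intermediate
    # activations are the "hidden layers" described above.
    for w, b in zip(weights, biases):
        x = sigmoid(x @ w + b)
    return x

output = forward(rng.normal(size=4))
print(output.shape)  # (2,) -- one value per output node, each in (0, 1)
```

A real network would additionally be trained, i.e., the weights would be adjusted to reduce the error between `output` and the desired result.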
To learn more about deep learning, read this AITJ article: How To Leverage Deep Learning Cognitive Computing For Cybersecurity?
A Convolutional Neural Network (CNN) represents a type of artificial neural network; it is also one of the most known deep neural networks.
This class of deep neural networks is generally used for image analysis, recognition, and processing of pixel data. CNNs use deep learning algorithms to complete descriptive and generative tasks, and they also power image and video recognition, recommender systems, and natural language processing (NLP).
The history of CNNs started back in the 1980s. Their first function was recognizing handwritten digits, so their first application was in the postal sector, mainly to identify ZIP codes on mail. As CNNs require a large amount of data to train, their use was strongly limited during that time. However, in 2012, Alex Krizhevsky and his colleagues won the ImageNet competition with a deep, multi-layered CNN (AlexNet), reviving interest in this branch of deep learning, and since then, the technology has been continuously researched and upgraded.
The principle of how CNNs work is generally the same as the artificial neural network method described above. However, I’ll explain some features specific to CNNs.
It is well known that an RGB image, for example, is a large matrix of pixel values with three planes, one per color channel (red, green, and blue), each with the same width and height. Grayscale images are the same except that they have only one plane. So, for the sake of simplicity, let’s focus on how CNNs interact with grayscale images.
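In NumPy terms, the difference is just the array shape (the 32×32 size below is an arbitrary example):

```python
import numpy as np

# An RGB image: height x width x 3 color channels (three planes).
rgb = np.zeros((32, 32, 3), dtype=np.uint8)

# A grayscale image: a single plane of pixel values.
gray = np.zeros((32, 32), dtype=np.uint8)

print(rgb.shape, gray.shape)  # (32, 32, 3) (32, 32)
```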
As mentioned above, an input image is a matrix of pixel values. We take a kernel, or filter (here a 3×3 matrix), and apply it to an area of the analyzed picture to get the convolved feature, which then goes to the next layer. Concretely:
- A specific 3×3 patch of the image is selected
- Each value in this patch is multiplied by the value in the same position in the kernel
- The nine products are summed up to get one number
This final number is one element of the convolved feature map, and the process is repeated, sliding the kernel across the image, until the whole image has been analyzed by the CNN.
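The three steps above can be sketched directly in NumPy. The 5×5 toy “image” and the identity kernel below are made-up values chosen so the result is easy to check:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid (no-padding) convolution of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]   # the 3x3 patch
            out[i, j] = np.sum(patch * kernel)  # multiply and sum -> one number
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "grayscale image"
kernel = np.array([[0, 0, 0],
                   [0, 1, 0],
                   [0, 0, 0]], dtype=float)       # identity kernel

feature_map = convolve2d(image, kernel)
print(feature_map.shape)  # (3, 3): each entry is one convolved feature
```

With this identity kernel, the feature map simply reproduces the central 3×3 region of the image; a real CNN learns kernel values that detect edges, textures, and other features.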
GNNs, or Graph Neural Networks, are a class of deep learning algorithms. As their name suggests, they are generally applied to graph analysis. It is often claimed that GNNs can perform tasks that CNNs cannot, such as processing data described by graphs.
Before I describe how GNNs are applied, I want to focus on the definition of graphs. In Computer Science, the term “graph” refers to a data structure consisting of a set of vertices (often called nodes) and a set of edges connecting them.
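This definition maps directly onto a small data structure. The vertices and edges below are arbitrary; the adjacency list just makes neighbor lookups easy:

```python
# A graph stored as a set of vertices and a set of edges,
# converted to an adjacency list for easy neighbor lookups.
vertices = {0, 1, 2, 3}
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}

adjacency = {v: set() for v in vertices}
for u, v in edges:
    adjacency[u].add(v)
    adjacency[v].add(u)

print(adjacency[2])  # {0, 1, 3}: node 2 is connected to nodes 0, 1, and 3
```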
GNNs’ typical task is node classification. A typical node has two characteristics: its feature vector and a ground-truth label (which lets us check whether the machine learning results match the real world). In a partially labeled graph, GNNs learn from the labeled nodes and then predict labels for the unlabeled ones. Generally, Graph Neural Networks are used for:
- Node Classification
- Graph Classification
- Graph Visualization
- Link Prediction
- Graph Clustering
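Full GNN message passing is beyond a short snippet, but the flavor of node classification on a partially labeled graph can be sketched with simple label propagation. The graph, labels, and iteration count below are made up for illustration; a real GNN learns weighted aggregation functions rather than plain neighbor averaging:

```python
import numpy as np

# A partially labeled toy graph: a chain 0-1-2-3-4-5.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
n_nodes = 6
labels = {0: 0, 5: 1}  # ground-truth labels for two nodes; the rest are unknown

neighbors = {v: [] for v in range(n_nodes)}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

# Soft label scores: known nodes start at their label, unknown nodes at 0.5.
scores = np.full(n_nodes, 0.5)
for v, y in labels.items():
    scores[v] = y

# Repeatedly replace each unlabeled node's score with the mean of its
# neighbors' scores, keeping labeled nodes fixed -- a crude stand-in
# for the neighborhood aggregation a GNN performs.
for _ in range(50):
    new = scores.copy()
    for v in range(n_nodes):
        if v not in labels:
            new[v] = np.mean([scores[u] for u in neighbors[v]])
    scores = new

predicted = (scores > 0.5).astype(int)
print(predicted)  # [0 0 0 1 1 1]
```

Nodes close to the labeled node 0 inherit label 0, and nodes close to node 5 inherit label 1, which is exactly the “predict labels for the unlabeled nodes” behavior described above.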
Generative Adversarial Networks (GANs) are a class of machine learning methods that utilize two neural networks and pit one against the other to improve the accuracy of their predictions. This AI technique introduced a distinctly human quality, creativity, to computers. If you want to learn more about how GANs are utilized for performing creative tasks, read this AITJ article: How AI Benefits Fashion Retail.
The idea of programs that improve by competing can be traced back to Arthur Samuel, a well-known computer science researcher whose checkers program learned by playing against itself. In 2014, in their research paper, Ian Goodfellow and his colleagues described the first working generative model based on adversarial networks.
GANs are applied in:
- Creating and modifying images
- Creating interior designs to provide designers with more ideas
- Creating musical compositions
- Creating video footage from several video frames
I have mentioned only some of the most significant applications of Generative Adversarial Networks; their use, however, is much broader. For instance, GANs are involved in video prediction, 3D object generation, face aging, and clothing translation. Generally, the technology is utilized across video and photo processing, as well as other analysis of visual data.
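The adversarial idea itself fits in a small sketch. Everything below is a drastic simplification under assumed values: the “networks” are single affine/logistic units on 1-D data, the hyperparameters are made up, and real GANs use deep networks trained with an autodiff framework. The generator maps noise to samples; the discriminator tries to tell real samples from generated ones; each update pushes against the other:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian centered at 4 (arbitrary target).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

wg, bg = 1.0, 0.0   # generator: fake = wg * z + bg
wd, bd = 0.1, 0.0   # discriminator: D(x) = sigmoid(wd * x + bd)

lr, n = 0.01, 64
for _ in range(2000):
    z = rng.normal(size=n)
    fake = wg * z + bg
    real = real_batch(n)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (hand-derived cross-entropy gradients for the logistic unit).
    dr, df = sigmoid(wd * real + bd), sigmoid(wd * fake + bd)
    wd -= lr * np.mean(-(1 - dr) * real + df * fake)
    bd -= lr * np.mean(-(1 - dr) + df)

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    df = sigmoid(wd * fake + bd)
    grad_fake = -(1 - df) * wd          # dLoss/dfake for each sample
    wg -= lr * np.mean(grad_fake * z)
    bg -= lr * np.mean(grad_fake)

# The generator's offset bg should have drifted toward the real mean (4),
# i.e., the generator learned to imitate the data it was pitted against.
print(bg)
```

The same two-player loop, scaled up to convolutional networks over images, is what produces the photo, video, and design applications listed above.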