Convolutional neural network

A convolutional neural network (CNN), also known as a ConvNet, is a class of deep neural networks most commonly applied to analyzing visual imagery. CNNs have applications in image and video recognition, recommender systems, image classification, medical image analysis, natural language processing, and financial time series analysis.

CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing. They are also known as shift-invariant or space-invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation-invariance characteristics.

Architecture

The architecture of a CNN is designed to take advantage of the 2D structure of an input image. This is achieved through the use of multiple building blocks, such as convolutional layers, pooling layers, and fully connected layers.
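
A minimal sketch of how these building blocks are typically stacked, assuming PyTorch is available (the layer sizes and channel counts below are illustrative choices, not taken from this article):

```python
import torch
import torch.nn as nn

# Convolution -> pooling -> fully connected, in the order described above.
model = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),        # pooling layer halves height and width
    nn.Flatten(),                       # 8 feature maps of 14x14 -> 1568-dimensional vector
    nn.Linear(8 * 14 * 14, 10),         # fully connected layer producing 10 class scores
)

x = torch.randn(4, 1, 28, 28)           # a batch of four single-channel 28x28 images
print(model(x).shape)                   # torch.Size([4, 10])
```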

Convolutional Layers

The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnable filters (or kernels), which have a small receptive field, but extend through the full depth of the input volume. As the filter slides over the input image (a process known as convolution), a 2D activation map is produced that gives the responses of that filter at every spatial position. Intuitively, the network learns filters that activate when they see some specific type of feature at some spatial position in the input.
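
As an illustrative sketch (assuming PyTorch; the vertical-edge filter below is an arbitrary example, not a learned one), a single 3x3 filter sliding over a 5x5 input produces a 3x3 activation map:

```python
import torch
import torch.nn.functional as F

image = torch.arange(25, dtype=torch.float32).reshape(1, 1, 5, 5)  # (batch, depth, height, width)
kernel = torch.tensor([[[[1., 0., -1.],
                         [1., 0., -1.],
                         [1., 0., -1.]]]])   # one 3x3 filter spanning the input depth of 1

activation_map = F.conv2d(image, kernel)     # the filter is applied at every spatial position
print(activation_map.shape)                  # torch.Size([1, 1, 3, 3])
```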

Pooling Layers

Pooling layers are used to reduce the dimensions of the data by combining the outputs of neuron clusters at one layer into a single neuron in the next layer. Max pooling, for example, partitions the input image into a set of non-overlapping rectangles and, for each such sub-region, outputs the maximum.
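
For example (a small sketch assuming PyTorch), 2x2 max pooling reduces each non-overlapping 2x2 block of a 4x4 input to its maximum value:

```python
import torch
import torch.nn as nn

x = torch.tensor([[[[1., 2., 5., 6.],
                    [3., 4., 7., 8.],
                    [9., 1., 2., 3.],
                    [4., 5., 6., 7.]]]])

pooled = nn.MaxPool2d(kernel_size=2)(x)  # each 2x2 sub-region contributes one output value
print(pooled)
# tensor([[[[4., 8.],
#           [9., 7.]]]])
```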

Fully Connected Layers

Fully connected layers connect every neuron in one layer to every neuron in the next layer. In the context of CNNs, fully connected layers are often placed after several convolutional and pooling layers to perform high-level reasoning such as classifying the features found by the CNN.
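
A small sketch of such a classification head, assuming PyTorch (the feature-map sizes and class count are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn

features = torch.randn(4, 8, 7, 7)     # 4 samples, 8 pooled feature maps of size 7x7
head = nn.Sequential(
    nn.Flatten(),                      # 8 x 7 x 7 -> 392-dimensional vector
    nn.Linear(8 * 7 * 7, 64),          # every input unit connects to every hidden unit
    nn.ReLU(),
    nn.Linear(64, 10),                 # scores for 10 classes
)
print(head(features).shape)            # torch.Size([4, 10])
```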

Training

CNNs are trained using backpropagation, like other neural networks. Unlike in traditional fully connected networks, however, the neurons in a CNN layer are arranged in three dimensions (width, height, and depth): each learnable filter extends through the full depth of the input volume, and its weights are shared across all spatial positions.
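
A hedged sketch of one training step with backpropagation, assuming PyTorch and reusing the small model pattern above (the loss function, optimizer, and batch of random data are illustrative choices):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2), nn.Flatten(), nn.Linear(8 * 14 * 14, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

images = torch.randn(16, 1, 28, 28)      # dummy batch of images
labels = torch.randint(0, 10, (16,))     # dummy class labels

optimizer.zero_grad()
loss = criterion(model(images), labels)  # forward pass
loss.backward()                          # backpropagation through conv, pool, and linear layers
optimizer.step()                         # gradient step updates filters and weights
```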

Applications

CNNs have been successfully applied to various fields, including:

  • Image and video recognition
  • Image classification
  • Medical image analysis
  • Recommender systems
  • Natural language processing
  • Financial time series analysis

Advantages

  • CNNs can automatically and adaptively learn spatial hierarchies of features from input images.
  • CNNs reduce the number of parameters to learn through weight sharing, making the network less prone to overfitting (a rough parameter-count sketch follows this list).
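
As a rough back-of-the-envelope illustration of the parameter savings from weight sharing (the layer sizes are assumptions chosen only for this example):

```python
# Eight 3x3 filters over a 1-channel 28x28 image (plus biases)...
conv_params = 8 * (3 * 3 * 1) + 8                         # = 80 parameters
# ...versus a fully connected layer mapping the same 28x28 input
# to eight 28x28 output maps.
dense_params = (28 * 28) * (8 * 28 * 28) + 8 * 28 * 28    # = 4,923,520 parameters
print(conv_params, dense_params)
```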

Challenges

  • CNNs require a significant amount of training data to avoid overfitting.
  • They are computationally intensive to train and require substantial computing resources, particularly for large datasets.

Future Directions

The future of CNNs includes improvements in network architecture, training methods, and applications. Researchers are exploring ways to make CNNs more efficient, capable of handling more complex types of data, and easier to train.

