Neural networks: the machine learning algorithms based on the human brain

In the brain, a neural network is a circuit of neurons linked through chemical and/or electrical impulses. Neurons use these signals to communicate with each other in order to perform a certain function or action, for example, carrying out a cognitive task such as thinking, remembering, and learning.

The neuron sends an electrical signal down its axon, or nerve fiber. The end of the axon branches into many terminals. When the signal reaches these axon terminals, chemicals called neurotransmitters are released into the gap (the synapse) between cells. The receiving cell on the other side of the gap, often at one of its dendrites, contains receptors that the neurotransmitters bind to, triggering changes in that cell.

Some neurotransmitters prompt an electrical signal to travel down the receiving cell. Others block the signal, preventing the message from being carried on to other nerve cells.

In this way, large numbers of neurons can communicate with each other, forming large-scale brain networks.

Now, this is how biological neural networks work. Understanding their basic functioning matters here because it explains the origin and workings of artificial neural networks: node-based computing systems that loosely imitate the neurons of the human brain to help machines learn.

If you programmed a computer to do something, the computer would always do the same thing. It would react to certain situations the way you “told” it to. This is what an algorithm is: a set of instructions to solve a certain kind of problem. 

But there are limits to the instructions humans can write down in code. We can't use a simple program to teach a computer how to interpret natural language or how to make predictions, in effect, how to "think" for itself. That's because no program can be large enough to cover every possible situation, such as all of the decisions we make when we drive, like predicting what other drivers will do and deciding what to do based on that.

A conventional computer cannot react correctly to these unforeseen conditions because it simply does not have pre-programmed responses to them. But what if it could figure them out by itself? This is what machine learning is for: to “train” computers to learn from data and develop predictive and decision-making abilities.

Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a type of machine learning. Their design is inspired by the way that biological neurons signal to one another.

In artificial neural networks, the counterparts of biological neurons are layers of interconnected nodes that transmit signals to other nodes, using information from the analysis of data to give an output.

Artificial neural networks have three types of layers: 

  • Input layers, where the input data is placed;
  • Hidden layers, where processing occurs through weighted connections;
  • Output layers, where the response to the “stimuli” is delivered.

Each individual node takes in data and assigns a weight to it, giving it more or less importance; heavily weighted data contributes more to the output than other data. If the weighted sum exceeds a given threshold, the node “fires,” passing its data on to the next layer of the network.
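To make the weighting and threshold idea concrete, here is a minimal sketch of a single artificial node in Python with NumPy; the input values, weights, and bias are invented purely for illustration.

```python
import numpy as np

def node_output(inputs, weights, bias, threshold=0.0):
    # Weighted sum of the inputs; the node "fires" (outputs 1) only above the threshold.
    weighted_sum = np.dot(inputs, weights) + bias
    return 1.0 if weighted_sum > threshold else 0.0

# Hypothetical example: three input values with hand-picked weights.
inputs = np.array([0.5, 0.9, 0.1])
weights = np.array([0.8, 0.2, -0.5])  # more important inputs get larger weights
bias = -0.3

print(node_output(inputs, weights, bias))  # prints 1.0, since the weighted sum is 0.23
```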

When neural networks have more than one hidden layer to process the input data, they can learn more complex tasks because they have more “neurons” to process that data through all the hidden layers combined. These multi-layered neural networks are called deep neural networks and what they do is called deep learning. 
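As a rough sketch of what “multiple hidden layers” means in code, the snippet below chains several weighted layers together; the layer sizes and random weights are arbitrary, chosen only for illustration.

```python
import numpy as np

def relu(x):
    # Common activation: negative values become zero, positive values pass through.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Pass the input through each (weights, bias) pair in turn, applying the activation.
    for weights, bias in layers:
        x = relu(weights @ x + bias)
    return x

# Hypothetical deep network: 4 inputs -> hidden layers of 5 and 3 nodes -> 1 output.
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(5, 4)), np.zeros(5)),   # hidden layer 1
    (rng.normal(size=(3, 5)), np.zeros(3)),   # hidden layer 2
    (rng.normal(size=(1, 3)), np.zeros(1)),   # output layer
]

print(forward(rng.normal(size=4), layers))
```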

We can compare it to what the brain of a 3-year-old kid knows versus what the brain of a 30-year-old adult knows. The toddler may be just as smart as the adult but is not as experienced as the adult (doesn't have as much data), therefore, she doesn’t have as much information or information processing ability as the adult when trying to solve problems.

This is precisely why neural networks need to be trained. They must be fed large data sets so that the network can find the weights that best map inputs to outputs. Neural networks do this by applying optimization algorithms such as gradient descent, together with backpropagation, which works out how much each weight contributed to the error.
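A minimal, hand-rolled sketch of that training idea is shown below: a single linear node whose weights are adjusted by gradient descent on a made-up data set (real networks use backpropagation to compute these gradients through every layer).

```python
import numpy as np

# Toy data: the target output is 2*x1 - x2, so the "right" weights are [2, -1].
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1]

weights = np.zeros(2)
learning_rate = 0.1

for step in range(200):
    predictions = X @ weights
    error = predictions - y
    gradient = X.T @ error / len(X)      # gradient of the mean squared error w.r.t. the weights
    weights -= learning_rate * gradient  # nudge the weights against the gradient

print(weights)  # ends up close to [2.0, -1.0]
```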

In this way, deep learning can even surpass human-level accuracy on some tasks, because it can sift through and sort huge amounts of data.

Just like we’ve organized our schools in different grades according to the student's level of knowledge at each stage, deep neural networks build different levels of hierarchical knowledge in their layers. For example, they can store information about basic shapes in their initial layer and end up completely recognizing an object and its characteristics in the output layer. 

Deep neural networks and deep learning are both subsets of machine learning. 

What are Convolutional Neural Networks?

Convolutional neural networks (CNNs) are a class of artificial neural networks that use a special connectivity pattern to process grid-like data such as the pixels of an image. They are mainly used for image recognition and classification tasks because their architecture is especially well suited to them.

They normally recognize patterns within images through convolution operations (in practice implemented as matrix multiplications), which is why they require a lot of computing power and training. They have three kinds of layers (a bare-bones sketch of the first two follows the list):

  • Convolutional layer, which performs the convolution: a search for specific features in the input image using feature detectors called filters.
  • Pooling layer, where the feature maps are reduced in size while their important characteristics are preserved. This reduces the number of parameters and calculations needed, improving efficiency.
  • Fully-connected layer, where all the nodes and inputs from the previous layers are connected, weighted, and activated, and the classification occurs. This layer may be preceded by (or include) a rectified linear unit (ReLU) layer, which replaces all negative input values with zeros and acts as an activation function.
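The following is a bare-bones NumPy sketch of the convolution and pooling steps described above; the image values, filter, and sizes are invented for illustration, and real CNNs learn their filter values during training rather than having them hand-written.

```python
import numpy as np

def convolve2d(image, kernel):
    # Slide the filter over the image and take a weighted sum at each position.
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    # Keep only the strongest response in each non-overlapping size x size patch.
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Hypothetical 6x6 grayscale "image" and a hand-written vertical-edge filter.
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)

features = np.maximum(0.0, convolve2d(image, kernel))  # convolution followed by ReLU
print(max_pool(features))                              # pooled (downsized) feature map
```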

What are Recurrent Neural Networks? 

Recurrent neural networks (RNNs) are a kind of artificial neural network specialized in processing sequential or time-series data. They are designed to solve temporal problems like those found in speech recognition, sales forecasting, and automatic image captioning.

Recurrent neural networks take information from prior inputs and apply it to the current input and output. While the inputs and outputs of traditional neural networks are independent of each other, the output of a recurrent neural network depends on the earlier elements in the sequence.
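In code, that memory of prior inputs usually takes the form of a hidden state that is updated at every step of the sequence. Below is a minimal sketch of one such recurrent step, with made-up sizes and random weights standing in for values a real network would learn.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # The new hidden state mixes the current input with the previous hidden state.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Hypothetical sizes: 4-dimensional inputs, 3-dimensional hidden state.
rng = np.random.default_rng(1)
W_xh = rng.normal(size=(3, 4))
W_hh = rng.normal(size=(3, 3))
b_h = np.zeros(3)

h = np.zeros(3)                     # the hidden state starts empty
sequence = rng.normal(size=(5, 4))  # a made-up sequence of 5 time steps
for x_t in sequence:
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # h carries information from earlier steps

print(h)  # the final state summarizes the whole sequence
```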

This approach is used in speech recognition because human languages work with sequences of words, not individual words. So, in order to interpret speech, recurrent neural networks need to “understand” whole sentences and not only individual words. 

For example, in order for the idiomatic expression, "give someone the cold shoulder," to make sense, each word needs to be expressed in a specific order. For a recurrent network to accurately interpret this idiom, it needs to account for the position of each word and then use that information to predict the next word in the sequence. 

Why are neural networks important?

All types of neural networks can boost artificial intelligence’s performance to the next level in their own way. In general, they are important because they have applications in many areas. For example, in the aerospace industry, they are used to improve fault diagnosis and autopilot in aircraft and spacecraft. In medicine, convolutional neural networks can help with medical diagnosis through the processing and comparison of medical imaging data (such as X-ray, CT scan, or ultrasound). 

Neural networks also have applications in security systems, for example, face recognition (which compares a detected face with those in a database to identify an individual) and signature verification (mainly used to prevent forgery at banks and other financial institutions). They also allow self-driving cars to navigate roads, detect pedestrians and other vehicles, and make decisions.

Because of their predictive abilities, neural networks are also used in weather forecasting and stock market predictions. 

Neural networks can also be found in the most basic, everyday technology we use today. For example, Google Translate uses a neural machine translation system to process and translate whole sentences with increasing accuracy. Apple’s Siri uses a deep neural network to recognize the voice command that activates it (“Hey Siri”), as well as the speech that follows.

Neural networks are useful because of their efficiency. Plus, they bear great technological potential as they grow in size and in their problem-solving capabilities.
