How To Build Your Own Neural Network

2 minute read

Neural Networks

A neural network is a mathematical function that maps inputs to a desired output.

Basic Components of a Neural Network

  • an input layer, x
  • an arbitrary number of hidden layers
  • an output layer, ŷ
  • a set of weights and biases between each layer, W and b
  • an activation function for each hidden layer, σ

2-layer Neural Network

(Figure: architecture of a 2-layer neural network)

Let's create a Python class for our neural network:

```python
import numpy as np

class NeuralNetwork:
    def __init__(self, x, y):
        self.input = x
        # weights between the input layer and a 4-neuron hidden layer
        self.weights1 = np.random.rand(self.input.shape[1], 4)
        # weights between the hidden layer and the output layer
        self.weights2 = np.random.rand(4, 1)
        self.y = y
        self.output = np.zeros(y.shape)
```

Training of the Neural Network

The output of a 2-layer neural network is **ŷ** = σ(W2 · σ(W1x + b1) + b2), where σ is the **sigmoid function**. (For simplicity, the code assumes the biases are 0.) Training the network involves two processes:

  • Feedforward
  • Backpropagation

Calculating the predicted output ŷ is known as feedforward, and updating the weights and biases is known as backpropagation.

You might notice that in the output equation above, the weights W and the biases b are the only variables that affect the output ŷ.

Feedforward

Feedforward is just a straightforward calculation: the input is multiplied by the weights and passed through the activation function at each layer. For a basic 2-layer neural network, the output is:

                    ŷ = σ(W2 · σ(W1x + b1) + b2)

Feedforward Python Code Implementation

```python
def feedforward(self):
    # hidden layer activations: σ(W1 · x), with biases assumed to be 0
    self.layer1 = sigmoid(np.dot(self.input, self.weights1))
    # network output: σ(W2 · layer1)
    self.output = sigmoid(np.dot(self.layer1, self.weights2))
```
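
The code above calls sigmoid, and the backpropagation code below calls sigmoid_derivative, but the post never defines either. A minimal sketch, assuming sigmoid_derivative takes the already-activated value (which is what the calls below imply):

```python
import numpy as np

def sigmoid(z):
    # squash any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(a):
    # derivative of the sigmoid, written in terms of its output a = sigmoid(z)
    return a * (1.0 - a)
```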

To measure how good our model's predictions are, we need to calculate a loss function for our network. We will use the sum-of-squares error as our loss function.

                Sum-of-Squares Error = ∑ (y − ŷ)²
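
As a quick illustration (a sketch, not part of the original class), this loss is a one-liner in NumPy:

```python
import numpy as np

def sum_squares_error(y, y_hat):
    # sum of the squared differences between targets and predictions
    return np.sum((y - y_hat) ** 2)
```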

Our goal in training is to find the set of weights and biases that minimizes this loss function.

Backpropagation

Applying the chain rule (worked out below), we can compute the derivative of the loss function with respect to weights2 and weights1:

```python
def backprop(self):
    # apply the chain rule to find the derivative of the loss function
    # with respect to weights2 and weights1 (these are negative gradients)
    b_weights2 = np.dot(self.layer1.T, (2 * (self.y - self.output) * sigmoid_derivative(self.output)))
    b_weights1 = np.dot(self.input.T, (np.dot(2 * (self.y - self.output) * sigmoid_derivative(self.output), self.weights2.T) * sigmoid_derivative(self.layer1)))
```

Having computed the loss, we need a way to propagate the error back through the network and adjust the weights and biases accordingly. Updating the weights and biases:

```python
    # update the weights with the computed (negative) gradients;
    # these two lines belong at the end of the backprop method above
    self.weights1 += b_weights1
    self.weights2 += b_weights2
```

In order to know the appropriate amount to adjust the weights and biases by, we need to know the derivative of the loss function with respect to the weights and biases.

If we have the derivative, we can update the weights and biases by moving them against it by some step size. This is known as gradient descent.
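
As a generic sketch of one such update (the function name and learning_rate parameter are illustrative, not from the post; the class above effectively uses a learning rate of 1 by adding the negative gradient directly):

```python
def gradient_descent_step(weights, grad, learning_rate=0.1):
    # move the weights a small step against the gradient to reduce the loss
    return weights - learning_rate * grad
```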

We will use the chain rule to calculate the derivative of the loss function with respect to the weights.
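
Written out for a single layer (a sketch, using z = Wx + b for the pre-activation, so that ŷ = σ(z)):

                ∂Loss/∂W = ∂Loss/∂ŷ · ∂ŷ/∂z · ∂z/∂W
                         = −2(y − ŷ) · σ′(z) · x

Note the sign: the backprop code computes 2(y − ŷ) · σ′ · x, the negative of this gradient, and adds it to the weights, which is exactly a descent step on the loss.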

With these derivatives in hand, the backprop function shown above is complete.

Train the Network

Let’s train the Neural Network for 2000 iterations and see what happens.
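
The post doesn't show the training data or how nn is constructed; a minimal setup consistent with the four outputs printed below (the classic XOR-style toy example often used with this network) might be:

```python
# hypothetical training data (an assumption, not shown in the post):
# four 3-bit inputs and their XOR-style targets
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

nn = NeuralNetwork(X, y)
```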

```python
for i in range(2000):
    nn.feedforward()
    nn.backprop()

print(nn.output)
```

Output:

[[0.0068838 ]
 [0.97302756]
 [0.97322523]
 [0.03232224]]

The predicted outputs are very close to 0 and 1, and in this way we have successfully created a 2-layer neural network from scratch.

Thanks for reading :heart:
