10.12: Neural Networks: Feedforward Algorithm Part 1 – The Nature of Code

In this video, I tackle a fundamental algorithm for neural networks: feedforward. I discuss how the algorithm works in a multilayer perceptron and connect it to the matrix math from previous videos.
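As a rough sketch of the idea covered in the video: each layer multiplies its weight matrix by the incoming vector, adds a bias, and passes the result through the sigmoid function. The code below is a minimal illustration, not the video's actual source; the network shape (2 inputs, 3 hidden, 1 output) and all weights are made up for the example.

```javascript
// Sigmoid activation: squashes any real number into (0, 1)
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// One layer: weight matrix (array of rows) times input vector,
// plus a bias vector, then sigmoid applied elementwise.
function layer(weights, biases, inputs) {
  return weights.map((row, i) => {
    let sum = biases[i];
    for (let j = 0; j < inputs.length; j++) {
      sum += row[j] * inputs[j];
    }
    return sigmoid(sum);
  });
}

// Feedforward for a 2-3-1 multilayer perceptron:
// input -> hidden layer -> output layer
function feedforward(input, weightsIH, biasH, weightsHO, biasO) {
  const hidden = layer(weightsIH, biasH, input);
  return layer(weightsHO, biasO, hidden);
}

// Example run with arbitrary weights (hypothetical values)
const weightsIH = [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]]; // 3x2
const biasH = [0.1, -0.1, 0.05];
const weightsHO = [[0.7, -0.5, 0.2]]; // 1x3
const biasO = [0.0];
console.log(feedforward([1, 0], weightsIH, biasH, weightsHO, biasO));
```

Because sigmoid maps every sum into (0, 1), the output is always a vector of values strictly between 0 and 1, regardless of the weights.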

Next Part:

This video is part of Chapter 10 of The Nature of Code.

This video is also part of session 4 of my Spring 2017 ITP “Intelligence and Learning” course.

Support this channel on Patreon:
To buy Coding Train merchandise:
To donate to the Processing Foundation:

Send me your questions and coding challenges!:

Contact:
Twitter:
The Coding Train website:

Links discussed in this video:
The Coding Train Amazon Shop:
Sigmoid Function on Wikipedia:

Videos mentioned in this video:
3Blue1Brown Neural Networks playlist:

Source Code for all the Video Lessons:

p5.js:
Processing:

The Nature of Code playlist:
For More Coding Challenges:
For More Intelligence and Learning:


Comments

6 responses to “10.12: Neural Networks: Feedforward Algorithm Part 1 – The Nature of Code”

  1. Ibakon Ferba

    The link for the next part leads to the previous video ^^'

  2. Khusna Aullia

    Can you do simple exponential smoothing?

  3. ROSHAN PAWARA

    https://livebook.manning.com/#!/book/grokking-algorithms/chapter-1/1

    Sir could you please go to this link and tell me whether this is the actual "Grokking Algorithms" book you tweeted about?

  4. Youssef Hesham

    Waiting for this video
