DMT-Nexus
TensorFlow Neural Network Playground Options
 
#1 Posted : 10/17/2019 4:56:21 PM
DMT-Nexus member

Moderator / Senior Member

Posts: 4612
Joined: 17-Jan-2009
Last visit: 07-Mar-2024


https://playground.tensorflow.org

TensorFlow has had this page up for a while. It's a nice & sort of fun way to get a high-level view of what's going on in a neural network/deep learning setup without having to write the code for it.

I'll try to give a little explanation:

Essentially you're trying to classify all the given points within the 2D image. This 2D image is one form of what's called, in deep/machine learning, the 'training data' or 'training set'. In this instance we know the classification of each point in the data set ahead of time - this is a form of supervised learning.

The whole objective is to create a neural network with no prior knowledge, based on just the given training data. The network will end up performing a binary classification - which points are orange vs. which points are blue. It will continually loop/iterate over the training set, learning from the data being fed into the initial input layer of neurons. This input layer is fed the given x/y coordinates [horizontal/vertical position] of each colored data point.
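
To make the 'training data' idea concrete, here's a rough sketch in Python/NumPy of what one of those 2D datasets looks like as arrays - nothing pulled from the playground itself, the shapes and numbers are made up. Every point comes with a known label, which is what makes it supervised:

# Hypothetical stand-in for one of the playground's datasets: 2D points
# with known labels, i.e. supervised training data.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Blue points: a cluster around the origin.
blue = rng.normal(0.0, 1.0, size=(n, 2))

# Orange points: a ring further out (like the 'circle' dataset).
angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
radii = rng.uniform(4.0, 5.0, size=n)
orange = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

X = np.concatenate([blue, orange])             # x/y coordinates fed to the input layer
y = np.concatenate([np.zeros(n), np.ones(n)])  # known labels: 0 = blue, 1 = orange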

As it iterates, you'll notice that it continually reinforces specific connections via their respective 'weights' [i.e. numerical values], which eventually leads to the correct classification. You can also go into each given connection and adjust the weight value, which can be fun to mess around with.
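
For the curious, here's roughly what peeking at and hand-tweaking a weight looks like in Keras - a throwaway one-layer model just for illustration, not the playground's actual network:

# Minimal sketch of reading and manually tweaking a connection weight in Keras.
from tensorflow import keras

layer = keras.layers.Dense(4, activation="tanh")
model = keras.Sequential([keras.Input(shape=(2,)), layer])

w, b = layer.get_weights()   # w has shape (2, 4): one value per connection
w[0, 0] = 5.0                # crank a single weight by hand, like in the playground UI
layer.set_weights([w, b])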

There are some other aspects of the learning/weighting machinery that aren't readily shown here, things like gradient descent/backpropagation, but that's a whole other discussion.

Following the input layer you have the 'hidden layer(s)' of neurons. You can add/remove hidden layers, and they perform similarly to the input layer in terms of their function.

The output layer converges on a final decision based on everything previous - the number of layers, neurons, data points, inputs, etc. TensorFlow shows 2 output neurons, though since this is a simple binary classification problem you could cut down to just 1 output neuron and still get the same result.
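
In Keras terms the whole stack might look something like this - layer sizes are arbitrary, and I'm going with the single-sigmoid-output version:

# Rough Keras equivalent of the playground's layout (sizes made up).
# One sigmoid output neuron handles the binary orange-vs-blue decision.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(2,)),                      # input layer: the x and y position
    keras.layers.Dense(4, activation="tanh"),     # hidden layer 1
    keras.layers.Dense(2, activation="tanh"),     # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),  # output: probability of 'orange'
])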

On the left side you can choose which dataset you'd like to use. Some are a little more difficult, though they're all doing supervised learning & binary classification.

If you want to get into more depth you can also adjust the learning rate, the activation functions within the neurons, etc.
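
Carrying on from the sketches above, those knobs map to code roughly like so - the learning rate lives on the optimizer and the activations live on the layers:

# Learning rate goes on the optimizer; activations were set on the layers above.
# 0.03 is the playground's default LR, if I remember right.
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=0.03),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(X, y, epochs=50, batch_size=10)  # X, y from the dataset sketch earlier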

Anyway, fun to tool around with & see how things look from a high level, at least in this type of instance.





 

Jees
#2 Posted : 10/17/2019 6:34:39 PM

DMT-Nexus member


Posts: 4031
Joined: 28-Jun-2012
Last visit: 05-Mar-2024
How cool to see these things at work!
Where is the drug-input? Big grin

If one bumps the learning rate up too high, it doesn't perform very well; the lines get too thick too fast and that works counterproductively. Like too much feedback or something.
 
#3 Posted : 10/17/2019 10:09:54 PM
DMT-Nexus member

Moderator / Senior Member

Posts: 4612
Joined: 17-Jan-2009
Last visit: 07-Mar-2024
LR controls how much the individual weights on each connection are adjusted with respect to the slope of the loss - i.e. how much to alter the model in response to the calculated error each time the weights are refreshed/updated via backpropagation. Backpropagation takes the error gap between the neural net's output layer and the 'known expected output', feeds it back through the network, and uses it to update the state of the weights.

Having a high LR essentially causes the weights to change too much with each iteration of the model. That can cause massive overcorrection, at which point things typically diverge [instead of converge] and the loss typically increases with each iteration.

So yeah it freaks out. Razz

Though having too low an LR would cause it to take a million years to converge on the classification.

So LR is pretty much how it sounds in relation to overshooting/undershooting in terms of reaching its conclusion. I'm grossly oversimplifying this though.
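
A little toy demo of the overshoot/undershoot thing - plain Python, nothing to do with TensorFlow, just minimizing f(w) = w^2 with the usual update rule:

# Toy gradient descent on f(w) = w^2 (minimum at w = 0, gradient 2w).
# Same update rule backprop applies to each weight: w -= lr * gradient.
def descend(lr, steps=10, w=1.0):
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(descend(lr=0.1))    # converges toward 0
print(descend(lr=1.1))    # overshoots every step and diverges
print(descend(lr=0.001))  # converges, but barely moves in 10 steps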

 
Jees
#4 Posted : 2/9/2020 7:40:35 PM

DMT-Nexus member


Posts: 4031
Joined: 28-Jun-2012
Last visit: 05-Mar-2024
Adapting structures
 
 