Deep neural networks as fuzzy logic

You can take the perspective that (deep) neural networks are pattern-based fuzzy logic.

Basically, each layer is then just a list of fuzzy-logic if-then statements.

https://randomprojectionai.blogspot.com/2017/10/deep-neural-networks-as-fuzzy-logic.html

That way you can gain some additional insight into the effects of different activation functions, etc.
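As a rough sketch of that reading (illustrative code only, with made-up names and data rather than anything taken from the blog post), a single ReLU neuron can be treated as one fuzzy if-then rule and a layer as a list of them:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_rule(x, w, b):
    # IF the input x matches the pattern w (to degree w.x + b)
    # THEN fire with that degree, ELSE stay silent (ReLU).
    degree = w @ x + b
    return max(degree, 0.0)

def layer(x, W, b):
    # A layer is then just a list of such fuzzy if-then statements,
    # one per row of the weight matrix.
    return np.array([fuzzy_rule(x, w_i, b_i) for w_i, b_i in zip(W, b)])

x = rng.standard_normal(8)
W = rng.standard_normal((4, 8))
b = rng.standard_normal(4)
print(layer(x, W, b))
```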

Thank you for sharing the information. Are you looking for any help?

Only having weak hardware to hand (a laptop) is both an advantage and a disadvantage. It certainly results in a focus on efficient algorithms. I guess with one of the top Intel chips you would get at least a 100-times speedup in the neural network code, which would be impressive to see, and it would be easier to code for than a GPU.

I think there are many use cases for fast random projection (RP) in artificial intelligence. As just one simple example, the final readout layer in many neural networks is linear; a slight improvement could be to use an RP instead.
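One possible reading of that suggestion, sketched here with a plain ±1 sign matrix (a sign flip followed by a fast Walsh-Hadamard transform would give the same kind of projection much faster; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_projection(dim_in, dim_out, rng):
    # Fixed, untrained random matrix with +-1/sqrt(dim_in) entries.
    return rng.choice([-1.0, 1.0], size=(dim_out, dim_in)) / np.sqrt(dim_in)

hidden = rng.standard_normal(64)    # stand-in for the final hidden activations
R = random_projection(64, 10, rng)  # fixed RP readout in place of a trained dense linear layer
logits = R @ hidden
print(logits)
```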

I have used quite a strong evolutionary algorithm in the code. However, if training a deep neural network really just amounts to aligning fuzzy-logic if-then statements, then such an algorithm is under-utilized. It should be possible to do more with evolution given a somewhat richer system than current neural networks seem to be.

https://forum.processing.org/two/discussion/23422/a-simple-way-to-optimize

Another application of RP is multiplexing and demultiplexing higher-dimensional vectors. Basically, you could split the output of a neural network into a number of almost orthogonal (vector) signals to feed to subsystems like long-term memory, and then multiplex various (vector) signals back into one vector for further neural network processing.
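A rough sketch of the idea (the matrices, sizes, and seed below are purely illustrative): two sub-signals are pushed through independent random projections into one carrier vector, and because random directions are nearly orthogonal in high dimensions each subsystem can project its own signal back out with only modest crosstalk:

```python
import numpy as np

rng = np.random.default_rng(2)
d, D = 16, 256   # each sub-signal has dimension d; the shared carrier has dimension D

# Independent random projections acting as two "channels".
A = rng.standard_normal((D, d)) / np.sqrt(D)
B = rng.standard_normal((D, d)) / np.sqrt(D)

a = rng.standard_normal(d)   # signal destined for subsystem A (e.g. long term memory)
b = rng.standard_normal(d)   # signal destined for subsystem B

carrier = A @ a + B @ b      # multiplex: one vector carries both signals

# Demultiplex approximately: projecting back recovers each signal
# plus a small amount of crosstalk noise.
a_hat = A.T @ carrier
b_hat = B.T @ carrier

print(np.corrcoef(a, a_hat)[0, 1])  # high: a is mostly recovered, the residual is crosstalk
print(np.corrcoef(b, b_hat)[0, 1])  # likewise for b
```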

Actually, that reminds me there is a cleaner, fully orthogonal way to split the information in a vector using the Walsh-Hadamard transform (WHT): y = WHT(x) + x, z = WHT(x) - x. You can also use partial calculations of the WHT to do similar things.
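A small sketch of that identity (the fast WHT code below is just one way to compute it; note that the orthogonality of y and z relies on using the orthonormal 1/sqrt(n) scaling):

```python
import numpy as np

def wht(x):
    # Fast Walsh-Hadamard transform, scaled by 1/sqrt(n) so it is orthonormal
    # (and therefore its own inverse). Length must be a power of two.
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)

rng = np.random.default_rng(3)
x = rng.standard_normal(16)

y = wht(x) + x
z = wht(x) - x

print(np.dot(y, z))                      # ~0: the two output vectors are orthogonal
print(np.allclose((y - z) / 2, x))       # True: x is exactly recoverable
print(np.allclose((y + z) / 2, wht(x)))  # True: so is WHT(x)
```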

And just randomly: one reason you don't see sparsity-inducing activation functions, like taking the square of the weighted dot product, is that they greatly reduce the number of weights in the next layer that have any effect. Adding a random projection between layers will fix that. It turns the point-wise sparsity into a higher-entropy, pattern-wise sparsity, just as the FFT turns a single sparse spike in the input into a sine or cosine wave spread across the whole output.
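A quick sketch of the spreading effect (sizes and the dense sign matrix are arbitrary choices; a sign flip plus a fast WHT would do the same in n log n time): a vector with only a few active units goes in, and every output unit comes out carrying a share of the information while the total energy is roughly preserved:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64

# Point-wise sparse activations: after a squaring-style nonlinearity
# only a handful of units are non-zero.
sparse = np.zeros(n)
sparse[rng.choice(n, size=4, replace=False)] = rng.standard_normal(4) ** 2

# A fixed random projection with +-1/sqrt(n) entries.
R = rng.choice([-1.0, 1.0], size=(n, n)) / np.sqrt(n)
spread = R @ sparse

print(np.count_nonzero(sparse))                        # 4 active units going in
print(np.count_nonzero(np.abs(spread) > 1e-12))        # essentially all 64 units active coming out
print(np.linalg.norm(sparse), np.linalg.norm(spread))  # energy roughly preserved
```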

Since there has been no update for more than 30 days, this thread is being closed. It can still be referred to if required.
