CANN

What is this?

Artificial neural networks are a major component of current AI systems. While reading about them on my own, I came up with the idea of cyclic neural networks, which at first seemed just plain stupid. After some thought, I decided to try them out and only then draw any conclusions. So this project aims to do exactly that: prove (or disprove) that Cyclic Artificial Neural Networks are useless.

What's so cool about loops?

Firstly, our brain (any brain, as a matter of fact) contains looped neurons. Those loops may be the perfect place for short-term memory (not the kind that keeps track of events, but the kind used while thinking, a cache of sorts). So if such loops could be reproduced, an AI would have instant memory of its own actions.
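To make the idea concrete, here is a minimal Python sketch (hypothetical, not the CANN implementation): a three-neuron ring with threshold activations holds a single transient pulse by passing it around the loop, long after the external input is gone.

```python
# Hypothetical sketch: a 3-neuron ring acting as short-term memory.
# A single pulse injected into neuron 0 keeps circulating around the loop.

def step(x, threshold=0.5):
    """Simple threshold activation: the neuron fires if input exceeds threshold."""
    return 1.0 if x > threshold else 0.0

def tick(state, weights):
    """One synchronous update of the ring: each neuron reads its predecessor."""
    n = len(state)
    return [step(weights[i] * state[(i - 1) % n]) for i in range(n)]

weights = [1.0, 1.0, 1.0]
state = [1.0, 0.0, 0.0]  # inject one pulse, then give no further input

for t in range(6):
    state = tick(state, weights)
    print(t, state)
# The pulse travels 0 -> 1 -> 2 -> 0 -> ... : the loop "remembers" it
# without any external input keeping it alive.
```

Nothing here trains or learns; the point is only that a cycle can retain a signal that a feed-forward network would immediately forget.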

Secondly, such a system can become saturated. By saturation I mean it can fill up with junk signals, propagating over and over again, until every neuron in the system is active and external input no longer influences the outcome of the network.
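The saturation failure mode can be shown with another small hypothetical sketch: in a fully connected net with all-positive recurrent weights and no decay, one injected junk signal spreads until every neuron fires, after which external input cannot change anything.

```python
# Hypothetical sketch: saturation in a small fully connected cyclic net.
# With positive weights and no decay, one junk signal floods the network.

def step(x, threshold=0.5):
    return 1.0 if x > threshold else 0.0

def tick(state, weight=1.0):
    """Each neuron sums the activity of every other neuron (positive weights)."""
    total = sum(state)
    return [step(weight * (total - s)) for s in state]

state = [1.0, 0.0, 0.0, 0.0]  # a single junk signal injected
for t in range(4):
    state = tick(state)
    print(t, state)
# After a couple of ticks all neurons are active and stay active:
# [1, 1, 1, 1] is a fixed point, so external input no longer matters.
```

This suggests that a practical cyclic network needs some damping mechanism (sub-unity weights, decay, or inhibitory neurons) to stay below saturation.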

Finally, a balance condition exists. It is unique for every system, and I am not sure it can be expressed in any way other than numerically. If the system is put into stable balance, external input may be 'considered' using the available memory and (perhaps) intelligent output can be produced.
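One way to read "expressible only numerically" is as a fixed-point condition. As a hedged illustration (again not the CANN code): for a single self-connected neuron with a sigmoid activation, the balance point satisfies x = sigmoid(w*x + b), which generally has no closed form but is easy to find by iteration.

```python
# Hypothetical sketch: finding a loop's balance point numerically.
# Balance condition for one self-looping sigmoid neuron: x = sigmoid(w*x + b).

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def balance_point(w, b, x=0.5, iters=100):
    """Iterate the loop's own update rule until the activation settles."""
    for _ in range(iters):
        x = sigmoid(w * x + b)
    return x

x = balance_point(w=2.0, b=-0.5)
# x now satisfies x ~= sigmoid(2*x - 0.5): the loop's stable balance.
print(x)
```

The iteration converges here because the update is a contraction near the fixed point; for larger weights the same loop can instead oscillate or saturate, which is exactly the behavior this project sets out to probe.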