As for Minsky and Papert, I’ll defer to your knowledge, as I’m shaky on the concept of linearity in neural networks.
Brian – I saw Randy’s Science paper and (probably like yourself) was very surprised
I understand that they made the mistake of only considering single-layer networks when pouncing on perceptrons; if they had considered multi-layered networks they would have seen that functions like XOR are computable. Linearity and analog systems notwithstanding, I can say with the hindsight of a huge generational gap that it just seems silly to me that they didn’t consider multi-layered networks.
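(A minimal sketch of the XOR point above, not from the discussion itself: a single-layer perceptron can only carve the input space with one line, and XOR’s outputs are not linearly separable; add one hidden layer with hand-chosen weights and XOR falls out. The weights and threshold values here are just one illustrative choice.)

```python
def step(x):
    """Hard-threshold activation, as in a classic perceptron unit."""
    return 1 if x >= 0 else 0

def xor_two_layer(a, b):
    """XOR via a two-layer network with hand-picked weights.

    No single linear threshold over (a, b) can separate
    {(0,1), (1,0)} from {(0,0), (1,1)}, which is why a
    single-layer perceptron fails here.
    """
    h1 = step(a + b - 0.5)   # hidden unit 1: fires if a OR b
    h2 = step(a + b - 1.5)   # hidden unit 2: fires only if a AND b
    return step(h1 - h2 - 0.5)  # output: OR but not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor_two_layer(a, b)}")
```

The hidden layer computes OR and AND; the output unit then fires when OR is on but AND is off, which is exactly XOR, something no single thresholded weighted sum of the raw inputs can do.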
A consistent thread in your comment is that some differences are merely “implementational” or “architectural” details, and thus are actually unimportant or otherwise superficial. IMO, that attitude is scientifically dangerous (how can you know for sure?) and *very* premature (when we have an artificially intelligent digital computer, I’ll be convinced).
Just as the brain has only hardware (as you said, there is no software that is the mind running on top of it), the only thing that counts when programming a mind is the software. […]