Here, common-sense rules inferred from the everyday world are hard-coded into the system, with the aim that it can handle any type of situation. And it is in this "extremely symbolic" approach that its worst failures will probably be seen: forget one fact, and the system crashes, with nothing to fall back on.
On the other hand, the connectionist approach is best suited to models of the brain at the micro-level. The brain is, after all, quite literally a neural network. The problem here is that we get a working model but very little description of what is actually going on inside, and the question naturally arises (among connectionists, of course): why model it if it cannot be explained?
The natural thought is that there must be some way the two systems can "co-operate." Consider an interesting problem, one that may seem far-fetched but which serves well as an example: that of nonsense translation, as in "English French German Suite," quoted in Gödel, Escher, Bach: An Eternal Golden Braid (Douglas Hofstadter, 1979, page 366). Here, Robert Scott's German translation of Lewis Carroll's "Jabberwocky" is presented. The English stanza