Abstract: | Clark & Thornton take issue with my claim that parity is not a generalisation problem, and that nothing can be inferred about back-propagation in particular, or about learning in general, from failures of parity generalisation. They advance arguments to support their contention that generalisation is a relevant issue. In this continuing commentary, I examine generalisation more closely in order to refute these arguments. Different learning algorithms will have different patterns of failure: back-propagation has no special status in this respect. This is not to deny that a particular algorithm might fortuitously produce the "intended" function in an (oxymoronic) parity-generalisation task.