Natural Deduction in Connectionist Systems

William Bechtel
Synthese
Vol. 101, No. 3, Connectionism and the Frontiers of Artificial Intelligence (Dec., 1994), pp. 433-463
Published by: Springer
Stable URL: http://www.jstor.org/stable/20117969
Page Count: 31

Abstract

The relation between logic and thought has long been controversial, but has recently influenced theorizing about the nature of mental processes in cognitive science. One prominent tradition argues that to explain the systematicity of thought we must posit syntactically structured representations inside the cognitive system which can be operated upon by structure-sensitive rules similar to those employed in systems of natural deduction. I have argued elsewhere that the systematicity of human thought might better be explained as resulting from the fact that we have learned natural languages which are themselves syntactically structured. According to this view, symbols of natural language are external to the cognitive processing system, and what the cognitive system must learn to do is produce and comprehend such symbols. In this paper I pursue that idea by arguing that ability in natural deduction itself may rely on pattern-recognition abilities that enable us to operate on external symbols, rather than on encodings of rules that might be applied to internal representations. To support this suggestion, I present a series of experiments with connectionist networks that have been trained to construct simple natural deductions in sentential logic. These networks not only succeed in reconstructing the derivations on which they have been trained, but also in constructing new derivations that are merely similar to the ones on which they have been trained.
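The paper does not specify its network architectures or encodings here, but the core idea can be illustrated with a minimal sketch: a small feedforward network trained to apply an inference rule (modus ponens) to vector encodings of sentential formulas, then tested on held-out premise pairs to probe the kind of generalization the abstract describes. All names, the one-hot encoding, and the architecture below are invented for illustration and are not Bechtel's actual setup.

```python
# Minimal sketch (NOT the paper's architecture): train a tiny network to
# apply modus ponens to encoded sentential formulas, then test on pairs
# it has never seen. Encoding and hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

ATOMS = ["p", "q", "r", "s"]          # atomic sentences
N = len(ATOMS)

def one_hot(atom):
    v = np.zeros(N)
    v[ATOMS.index(atom)] = 1.0
    return v

def make_example(a, b):
    # Input: a conditional "a -> b" (encoded as the pair [a, b]) plus the
    # premise "a"; target: the conclusion "b" licensed by modus ponens.
    x = np.concatenate([one_hot(a), one_hot(b), one_hot(a)])
    y = one_hot(b)
    return x, y

pairs = [(a, b) for a in ATOMS for b in ATOMS if a != b]
train, test = pairs[:-3], pairs[-3:]   # hold out a few derivations

X = np.array([make_example(a, b)[0] for a, b in train])
Y = np.array([make_example(a, b)[1] for a, b in train])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained by batch gradient descent on squared error.
H = 8
W1 = rng.normal(0, 0.5, (3 * N, H))
W2 = rng.normal(0, 0.5, (H, N))

for step in range(5000):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    delta_out = (out - Y) * out * (1 - out)
    dW2 = h.T @ delta_out
    delta_h = (delta_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ delta_h
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1

# The held-out pairs probe generalization to unseen derivations.
for a, b in test:
    x, _ = make_example(a, b)
    pred = sigmoid(sigmoid(x @ W1) @ W2)
    print(f"{a} -> {b}, {a}  |-  predicted {ATOMS[int(pred.argmax())]}")
```

If training succeeds, the network answers correctly on the held-out pairs even though it was never given an explicit modus ponens rule, which mirrors, in toy form, the abstract's claim that rule-like deductive behavior can emerge from pattern recognition over external symbols.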
