To build conscious machines, focus on general intelligence: A framework for the assessment of consciousness in biological and artificial systems
Accepted version
Peer-reviewed
Abstract
Consciousness presents us with a number of distinct explanatory challenges. The most fundamental, sometimes called the ‘Hard Problem’, concerns how subjective states can arise from objective physical systems. A second important question concerns the cognitive mechanisms that distinguish conscious from unconscious states. A third debate, and the focus of the present enquiry, concerns how we can determine whether a given biological or artificial agent is conscious at all. Of the three questions, the third has particular practical and ethical significance: our treatment of animals depends in part on whether we regard them as having a capacity for conscious experience. Likewise, while few would endorse the idea that current artificial systems are conscious, as their capacities improve and come to more closely resemble those of animals and humans, ethical and legal questions concerning machine consciousness will likely loom large.