considerations about teleofunction or John Searle’s alleged derived/non-derived intentionality distinction) (Searle 1983).

Let’s turn now to a second suggestion for making progress on the Hard Problem of AI Consciousness. Let us return to the case of one’s migration to the cloud. In the process of migrating, the neurons that form the neural basis of one’s consciousness are gradually replaced by silicon chips. If, during this process, a prosthetic part of the brain ceases to function normally – specifically, if it ceases to give rise to the aspect of consciousness that that brain area is responsible for – then there should be behavioral indications, including verbal reports. An otherwise normal person should be able to detect, or at least indicate to others through odd behaviors, that something is amiss, as with traumatic brain injuries involving the loss of consciousness in some domain, such as blindsight or blindness denial. This would indicate a “substitution failure” of the artificial part for the original component.

But should we really conclude, from a substitution failure, that the underlying cause is that silicon cannot be a neural correlate of conscious experience? Why not instead conclude that scientists failed to program in a key feature of the original component – a problem which science can eventually solve? After years and years of trying, however, we might reasonably question whether silicon is a suitable substitute for carbon when it comes to consciousness. This would be a sign that the answer to the Hard Problem of AI Consciousness is negative: AI cannot be conscious. Even a longstanding substitution failure would not be definitive, for there is always the chance that our science has fallen short. But the scenario would provide some evidence for a negative answer.

Readers familiar with Chalmers’s “absent qualia, dancing qualia” thought experiment may object that we have missed something, for that thought experiment supports the view that consciousness supervenes on functional configuration: if you fix the psychofunctional facts, you fix the qualia. But we are disputing that functional isomorphism occurs in the first place; we consider it an open question. If silicon systems cannot be conscious, then the functional facts cannot be fixed: when it comes to consciousness, carbon and silicon are not functionally interchangeable. For why would a silicon system, S2, be a psychofunctional isomorph of the original system, S1, after the transfer? S2’s new brain region, or minicolumn, being made of silicon, will always differ causally from the component it replaces. For wouldn’t the new silicon component somehow signal to other brain areas that there is a defect in consciousness, as with neurophysiological deficits?

Could the silicon chip be doctored, though, so as to signal consciousness when consciousness was absent? This is a tricky question. There may be some observational false positives, in which case we would fail to rule out certain non-conscious systems. But would such a doctored chip then be a genuine functional isomorph of a carbon system? It is not clear that it would be, for the brain chip would need to refrain from signaling to other brain areas that consciousness is lacking.