The Concept Of Mind-Brain Consciousness

The concept of consciousness has long been considered one of the most challenging in human thought. How consciousness manifests in living entities is, to many, an almost impossible question to answer. Moreover, whether non-human entities such as machines or robots can be conscious has recently become a topic of concern. This essay will discuss how consciousness, despite its enigmatic definition, can be identified in both humans and non-humans alike, and whether such identification can constitute definitive evidence of consciousness.

It has been argued that the criterion for knowing whether someone is conscious lies in his or her behavioural responses. Many aspects of consciousness are implied through behaviour; for example, one's ability to exercise conscious will (Roskies, 2011), or the ability to participate in interpersonal conversation (Gamez, 2007). Indeed, it may seem inconceivable that other healthy human beings do not share the same phenomenal sense of self that we ourselves experience, in part because of the similarities in our behaviours.

Nevertheless, there are situations in which behaviour is not possible, but does that mean we cannot tell whether someone is conscious? This is the case for patients in a 'vegetative' state, defined as having no awareness of the external world, who seemingly lack basic consciousness (Owen et al., 2016). Their level of consciousness is ambiguous, as they are unable to communicate with their surroundings. However, fMRI and EEG assessments designed to examine their consciousness through unambiguous brain activity have shown that certain stimuli elicit brain activity in a handful of vegetative patients (Fernandez-Espejo, 2013). Such studies have been able to infer consciousness directly from these deliberate brain responses.

More recent discussions of consciousness turn to its possibility in robots and machines (Gamez, 2007). This argument follows from a functionalist viewpoint. Functionalism posits that mental states are, by nature, the result of causal reactions to external (sensory) or internal (other mental) stimuli (Levin, 2004). Indeed, Rodney Brooks, a famous roboticist, argues that the universe is completely mechanistic, so that everything within it is merely the consequence of countless little rules. Thus, from a functionalist perspective, as long as robots have the same neural underpinnings as we do, and these underpinnings function in the same way, they will have the same level of consciousness.

The most famous test of machine consciousness is the 'Turing Test'. First proposed by Alan Turing in 1950, the Turing Test aimed to provide a means of determining whether a machine could exhibit behavioural intelligence and, in effect, consciousness (Oppy & Dowe, 2016). Turing equates the test to a game of deception (Copeland, 2000). Imagine a man and a woman in a room separate from an interrogator. The interrogator's aim is to ask relevant questions to determine which of the two is the woman, whilst the aim of both the man and the woman is to convince the interrogator that they are the woman. Turing argues that the same logic can be applied to a man and a machine, with the interrogator's ability to distinguish the two serving as the critical assessment of the machine's level of consciousness.
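To make the structure of the imitation game concrete, the following is a minimal sketch in Python. The players, questions, and interrogator heuristic are all illustrative assumptions; Turing's paper specifies only the protocol, not any implementation.

```python
import random

# A minimal, self-contained sketch of Turing's imitation game. The
# players, questions, and the interrogator's guessing heuristic are
# illustrative assumptions; the protocol is simply that an interrogator
# questions two hidden players and must decide which one is the machine.

def human_player(question: str) -> str:
    # Stands in for the human contestant: answers vary with the question.
    return f"Honestly, '{question}' is a hard one to answer."

def machine_player(question: str) -> str:
    # Stands in for the machine contestant: a canned, evasive reply.
    return "I would rather talk about something else."

def interrogate(players: dict, questions: list) -> str:
    """Put every question to both hidden players, then guess which
    label belongs to the machine (here: the more repetitive player)."""
    transcript = {label: [play(q) for q in questions]
                  for label, play in players.items()}
    return min(transcript, key=lambda label: len(set(transcript[label])))

labels = ["A", "B"]
random.shuffle(labels)  # hide who is behind each label
players = dict(zip(labels, (human_player, machine_player)))

questions = ["What were you afraid of as a child?",
             "Describe the smell of rain after a storm."]
guess = interrogate(players, questions)
truth = next(lbl for lbl, p in players.items() if p is machine_player)
print("machine identified" if guess == truth else "machine passed the test")
```

The point of the sketch is that the interrogator sees nothing but text; whatever judgement is reached rests entirely on behaviour.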

The Turing Test has in practice been able to distinguish robot intelligence from human intelligence. An example of this is the chatbot ELIZA. When ELIZA was subjected to the test, it became obvious that it did not have emotional cognition to the same extent a human does: its answers were characterised by a motivation to continue the conversation rather than by any intelligent acknowledgement of the content of the questions being asked.
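The behaviour described above, keeping a conversation going by pattern-matching rather than by understanding, can be illustrated with a short sketch. The rules below are a hypothetical fragment in the spirit of ELIZA's keyword scripts, not Weizenbaum's original DOCTOR program.

```python
import re

# A hypothetical fragment in the spirit of ELIZA's keyword scripts:
# each rule reflects the user's words back without any model of
# meaning. These rules are illustrative, not the original script.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)\?",      "What do you think?"),
]
DEFAULT = "Please go on."  # keeps the conversation moving regardless

def eliza_reply(utterance: str) -> str:
    text = utterance.lower().strip()
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(eliza_reply("I feel anxious about the test"))  # Why do you feel anxious about the test?
print(eliza_reply("Is a machine conscious?"))        # What do you think?
print(eliza_reply("The weather is fine"))            # Please go on.
```

Nothing in these rules represents the meaning of what was said, which is precisely why an interrogator attending to content could expose the program.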

However, the Turing Test has its faults. Indeed, it may be unfair to machines, as it emphasises a machine's ability to match behaviour that is considered human rather than a machine's possession of consciousness (Blackmore & Troscianko, 2018). Furthermore, it fails to distinguish between Strong AI and Weak AI, terms coined by Searle. On the Strong AI view, the machine is genuinely intelligent and has consciousness of a nature similar to that of humans (Blackmore & Troscianko, 2018). On the Weak AI view, by contrast, machine consciousness is a mere simulation of human consciousness (Blackmore & Troscianko, 2018). The Turing Test cannot distinguish between the two because its primary goal is to observe behaviour, and it is thus limited in its ability to detect true forms of consciousness.

This leads us to a bigger picture beyond machines. It is fundamentally impossible to determine without doubt whether something is conscious (Moor, 2008). We rely too often on behaviour, but inferring consciousness merely from behaviour may be a limited approach. Indeed, Rodney Brooks noted that external behaviour gives no indication of someone's internal feelings. This can be argued through Nagel's (1974) analogy of a bat. It is acknowledged that a bat has consciousness, and we, as humans, can imagine what it would be like for us to be a bat, in conjunction with the potential experiences a bat could have. However, we can never know what it is like for a bat to be a bat. This suggests that the subjective nature of conscious experience precludes our knowing whether anything other than ourselves is conscious, as one cannot introspect in a 'body' that one does not inhabit. Applied to consciousness in a robot, it follows that whilst it can seem logical to assume that the robot has consciousness, one can never conclude this with absolute certainty. The same holds for healthy humans: someone's behaviour may be typical of a conscious being, but how do we know they are not just a robot?

In sum, whilst there exist logical ways to test whether someone or something has consciousness, the considerations above limit what such tests can show, for the fundamental reason that we can never know beyond reasonable doubt. As humans, we may see a phone and regard it as 'thinking' if it takes too long to respond, just as a human would (Blackmore & Troscianko, 2018). However, other than in our own case, we may never know whether anything thinks at all.
