THERAPEUTIC NIHILISM IN DISORDERS OF CONSCIOUSNESS CARE AND THE RIGHT TO LIVE

The mystery of consciousness, namely how our subjective phenomenal experiences relate to an objective reality, has puzzled the human mind since antiquity. From the ancient Greeks to the 21st century, the concept of consciousness has generated numerous inquiries and theoretical propositions across the divergent fields of philosophy and neuroscience [1]. Alas, consciousness remains ill-defined across time and culture, given that the term straddles both the metaphysical and the scientific sense. In A Treatise of Human Nature, the Scottish philosopher David Hume described consciousness as “nothing but a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in perpetual flux and movement” [2]. In his psychoanalytic theory, the Swiss psychiatrist Carl Jung characterized consciousness as “the function or activity which maintains the relation of psychic contents to the ego” [3]. The French cognitive neuroscientist Stanislas Dehaene explained consciousness in his Global Neuronal Workspace Theory as a “global information broadcasting within the cortex [that] arises from a neuronal network whose raison d’être is the massive sharing of pertinent information throughout the brain” [1, 4]. These definitions, among many others, are nevertheless insufficient to explain consciousness in isolation; only when synthesized into a continuum of interwoven ideas can they account for the heterogeneous nature of consciousness [1].

Despite the dispute among historical and contemporary thinkers over consciousness and its neural correlates, every argument is fundamentally rooted in either the “easy” or the “hard” problem of consciousness, as formulated by the Australian philosopher David Chalmers [5]. The “easy” problems seek mechanistic explanations for the various cognitive phenomena (e.g., perception, learning, behavior) resulting from the underlying biophysical processes of the brain. The “hard” problem, on the other hand, asks why such phenomena are associated with conscious experience at all [6]. However, the objective of the discussion herein is not to individually distinguish and comprehensively assess the myriad theories of consciousness. Instead, the concept of consciousness and its distinguishing features are considered in the context of patients who suffer from disorders of consciousness (DOC), in which altered levels of consciousness present a neuroethical challenge in evaluating the life and death of DOC patients, particularly when their intrinsic values are pathologically compromised, rendering them in a state of extreme vulnerability [7].

In the clinical context, consciousness is conceptualized according to the Aristotelian formulation of wakefulness and awareness, wherein the state of consciousness is evaluated on the basis of arousal (e.g., eye-opening) and the ability to react to external stimuli (e.g., visual tracking), respectively [1]. Specifically, the elements of wakefulness and awareness encapsulate the following features as the core indicators of consciousness: distinct sensory modalities, spatiotemporal framing, intentionality, dynamicism, short-term stabilization, and an integration of all components of the conscious experience [8]. The extent of these two elements, however, varies according to the neurophysiological condition of the DOC patient, which factors in variables such as brainstem reflexes, functional communication, language, and fMRI/EEG evidence of association cortex responses. Depending on the results of these tests, which rely on binary communication paradigms (i.e., yes/no or on/off), the patient’s condition can be diagnosed as, inter alia, coma, persistent vegetative state (PVS), unresponsive wakefulness syndrome (UWS), minimally conscious state (MCS), post-traumatic confusional state (PTCS), covert cortical processing (CCP), or locked-in syndrome (LIS) [1].

To begin the discussion on the neuroethics of patients with DOC, the controversial case of Terri Schiavo and her legacy ought to be reflected upon [9]. In 1990, Schiavo was left in a vegetative state after suffering a cardiac arrest induced by hypokalemia, itself a consequence of her eating disorder. From that point onward, Schiavo was in an eye-open state but was unaware of herself and her environment (a dissociation of awareness from wakefulness) for the next 15 years of her life until her death in 2005. Schiavo’s neurologist initially concluded that her condition was irreversible and that she was no longer capable of having emotions, which led her spouse to request the removal of her feeding tube and any life-sustaining treatment. Schiavo’s parents were diametrically opposed to this decision because they held onto the belief that there was a possibility of neurological recovery and that Schiavo was still a sentient being. Both parties claimed to act in Schiavo’s best interest as the justified surrogate decision-maker, yet their familial conflict ultimately went to litigation, in which the verdict ruled in favor of Schiavo’s spouse. Upon close examination, the tragic outcome of Schiavo’s case underscores several ethical violations in the medical, legal, and social realms in relation to end-of-life decision-making and the exercise of autonomy.

Recent developments in functional neuroimaging and neuroelectrophysiological methods have changed the ways in which DOC patients are diagnosed, prognosed, and treated. The most prominent revelation from these advances is the high rate of misdiagnosis and inaccurate prognostication among DOC patients, wherein patients originally diagnosed as being in a PVS are reclassified as being in an MCS upon neuroimaging. In contrast to PVS patients, MCS patients display both wakefulness and behavioral signs of rudimentary awareness. Such misdiagnoses can be attributed to several factors, such as sensorimotor impairments in the patient or confirmation bias in the clinician. In turn, this creates epistemic risks because of limited certainty in DOC nosology [1]. Notwithstanding the promising potential of neuroimaging and similar neurotechnologies (e.g., brain-computer interfaces), the practice of neuroimaging itself poses an increasing degree of ethical tension because informed consent cannot be obtained from the DOC patients themselves prior to the procedure. This challenge is often highlighted in neuroimaging-driven DOC research, compounded by the question of how to disclose experimental results to the patients or their families, which is deemed imperative to mitigate miscommunication in patient-clinician relationships [10, 11].

Given that cognitive biases are a salient class of factors contributing to misdiagnoses in DOC, emphasis ought to be placed on the prevalence of ableist bias in producing erroneous, uniformly pessimistic judgments about the neurological outcomes of DOC patients. This particular issue is a byproduct of the disability paradox—a phenomenon in which individuals with disabilities report a good quality of life even though many external observers perceive them as being in a state of suffering [12]. As a result, prejudiced assumptions create a discrepancy between the actual wishes of the DOC patient and what others perceive as acceptable; oftentimes, the latter revolves around the belief that being in an unconscious state is worse than death [13]. If, however, the patient’s preferences regarding the best course of action once they become nonverbal or behaviorally disabled are well documented, the clinician’s judgment ceases to be decisive. A study by Patrick et al., however, discovered a psychological discordance in which patients who initially expressed the belief that life in such conditions is not worth living still wished to undergo life-sustaining interventions. Hence, the ableist notion that “it is better to be dead than disabled” is not to be taken literally [14]. Fortunately, functional neuroimaging is capable of unveiling residual consciousness and psychological continuity even when the apparent behavioral characteristics of the DOC patient suggest otherwise, as shown in a neuroimaging study by Owen et al. [15]. This scenario presents the common ethical problem of the premature or uncertain termination of life-sustaining care for DOC patients based on the decision of moral agents such as a surrogate (e.g., the patient’s kin) or a third party (e.g., the government), which may at times be ill-informed or morally absolutist [16].

Thus, upon careful consideration of such scenarios and the case of Schiavo, is it still morally permissible to withdraw artificial life support from DOC patients in the absence of informed consent or advance directives expressing the will of the patients themselves? While it is reasonable to argue that maintaining the life of DOC patients is futile given the lack of expected utility, this justification is rooted in a deep sense of therapeutic nihilism—a skeptical disbelief in the possibility of successfully curing a disease—which gives rise to a pessimistic outlook among clinicians when predicting the likelihood of meaningful recovery in DOC patients and assessing their right to life-sustaining treatment [13]. Evidently, with the patients’ best interest at stake, perceiving patient outcomes through a lens of undue pessimism, coupled with a saturated sense of self-fulfilling prophecy, neglects DOC patients’ potential to recover and regain functional independence in the long run. After all, behavioral recovery typically occurs beyond the standard 28-day window after brain injury, as suggested in an observational study by Giacino et al. on DOC patients in PVS/MCS [17]. Such empirical evidence therefore rejects the futility thesis in DOC care and underscores the risk of superimposing the beliefs and values of observers onto the patient. Integrating prudence and fiduciary responsibility into the ethos of palliative care for DOC patients is thus essential for the clear discernment of consciousness from the nonconscious state [1].

In recognizing the effects of pessimistic attitudes on clinical decision-making, the consequentialist argument from cost and health utilities in treating DOC patients is another argument against preserving their lives. Indeed, in some outlying cases, extensive care for patients with DOC may not result in favorable outcomes, irrespective of the duration and intensity of the treatments provided. In such circumstances, a harm-benefit analysis reveals significant emotional distress and financial losses for the families of DOC patients, as well as a failure to protect the dignity and comfort of patients undergoing continuous painful treatments with seemingly minimal benefit, compounded by the fact that the resources involved are scarce in most instances. Additional opportunity costs that the families of DOC patients may incur include renouncing education or employment opportunities to act as caregivers for their loved ones [1]. Hence, continuing to sustain the life of DOC patients under such criteria cannot maximize the cost and health utilities of the patient and their family.

To balance these potential harms, however, it ought to be noted that the proclivity for utility maximization conflicts with the clinician’s contractual obligation to improve the care of DOC patients without resorting to abandonment or risk aversion, and to guarantee them the value of human life. Dignity resides in human life and in the act of protecting its existential stature, not in death; under this interpretation, the phrase “dying with dignity” is diluted in meaning and self-contradictory [14, 15]. Furthermore, to uphold distributive justice in the clinical setting in alignment with democratic principles, resources ought to be allocated in a fair, unbiased manner that ensures equitable and affordable access to appropriate treatments for DOC patients, offsetting the inclination among clinicians to allocate rehabilitative resources to patients with a higher chance of recovery over those with a poor prognosis [13]. Therefore, providing life-sustaining treatment even to DOC patients with a low likelihood of recovery and survival is a necessary risk, for doing so will lead to further advancements in DOC care, help overcome technical limitations, and deepen the understanding of the complexity of DOC for posterity.

On the subject of personhood, however, many observers under the influence of therapeutic nihilism tend to perceive DOC patients as empty vessels devoid of personhood, given their loss of the cognitive capacities claimed to be indispensable for constituting consciousness. Such perceptions are natural considering the hypercognitive nature of postmodern Western societies and cultures, wherein the deprivation of capacities that the majority deems important separates those considered unworthy of care and attention from those considered worthy. Nevertheless, regardless of whether personhood is ascribed to patients with DOC, the worth of a human life should not be evaluated on the basis of how unlike other minds a patient’s mind has become, for all persons ought to be respected in the name of equality and solidarity, the fundamental moral sentiments needed to maintain a just legal and healthcare system, especially toward those in their most vulnerable state [14, 18].

A reinforcing assertion in defense of the continuity of personhood claims that consciousness is not an essential prerequisite for the acknowledgement of one’s identity; hence, the intrinsic values of DOC patients are retained and qualify them as individual persons despite their inability to communicate or act in accordance with their free will [19]. Moreover, the incorporation of disability rights perspectives into the social analysis and policymaking surrounding DOC patients proclaims that individual identities are not characterized solely by working cognitions and emotions. Rather than basing the identity of DOC patients on their medical conditions, their identity is constructed around the notion that the disabling attributes resulting from DOC are an inalienable component of their overall identity. Indeed, disability becomes salient only when life-sustaining treatments are denied to DOC patients, reflecting an institutional failure both to accommodate those in urgent need of care and to uphold constitutional and federal civil rights protections, particularly the Americans with Disabilities Act (ADA), which ought to be applied universally [20, 21, 22].

Even in the depths of inescapable nihilism in managing DOC care, ethics must prevail in remembrance of the principle of in dubio pro vita—"when in doubt, favor life” [23]. Considering the complex diagnostic and therapeutic difficulties in conjunction with uncertain prognostication for patients with DOC, the health prospects of the patient are at constant risk due to the epistemic gap between current understandings of consciousness and the behavioral conditions of DOC patients [10]. Therefore, overcoming the existing state of DOC care, characterized by issues of informed consent, cognitive biases, futility-inspired pessimism, unjust resource allocation, negligence of personhood, and various unknown risk factors, demands the deployment of effective medical and legal protocols and ethical guidelines for pragmatic clinical decision-making. As such, traditional beliefs in the practices of medicine and law must be challenged to preserve the life of DOC patients alongside their personal identity, dignity, freedom of will, and, most crucially, the right to live at the heart of humanity.

References 

1. Young, M. J., Bodien, Y. G., Giacino, J. T., Fins, J. J., Truog, R. D., Hochberg, L. R., & Edlow, B. L. (2021). The neuroethics of disorders of consciousness: a brief history of evolving ideas. Brain, 144(11), 3291-3310. 

2. Hume, D. (2009). A treatise of human nature. (P.H. Nidditch, Ed.). Clarendon Press. (Original work published 1739-40) 

3. Jung, C. G. (1921). Psychological Types. In Collected Works (Vol. 6). Princeton, NJ: Princeton University Press. 

4. Mashour, G. A., Roelfsema, P., Changeux, J. P., & Dehaene, S. (2020). Conscious processing and the global neuronal workspace hypothesis. Neuron, 105(5), 776-798. 

5. Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of consciousness studies, 2(3), 200-219. 

6. Mills, F. B. (1998). The easy and hard problems of consciousness: A Cartesian perspective. The Journal of mind and behavior, 119-140. 

7. Roskies, A. (2021, March 3). Neuroethics. Stanford Encyclopedia of Philosophy. Retrieved January 8, 2023, from https://plato.stanford.edu/entries/neuroethics/

8. Farisco, M., Pennartz, C., Annen, J., Cecconi, B., & Evers, K. (2022). Indicators and criteria of consciousness: ethical implications for the care of behaviourally unresponsive patients. BMC Medical Ethics, 23(1), 30.

9. Weijer, C. (2005). A death in the family: reflections on the Terri Schiavo case. CMAJ, 172(9), 1197-1198. 

10. Young, M. J., Bodien, Y. G., & Edlow, B. L. (2022). Ethical considerations in clinical trials for disorders of consciousness. Brain Sciences, 12(2), 211.

11. Istace, T. (2022). Empowering the voiceless: disorders of consciousness, neuroimaging and supported decision-making. Frontiers in Psychiatry, 13, 1-10.

12. Albrecht, G. L., & Devlieger, P. J. (1999). The disability paradox: high quality of life against all odds. Social Science & Medicine, 48(8), 977-988.

13. Choi, W. (2022). Against Futility Judgments for Patients with Prolonged Disorders of Consciousness. 

14. Golan, O. G., & Marcus, E. L. (2012). Should we provide life-sustaining treatments to patients with permanent loss of cognitive capacities?. Rambam Maimonides Medical Journal, 3(3). 

15. Owen, A. M., Coleman, M. R., Boly, M., Davis, M. H., Laureys, S., & Pickard, J. D. (2006). Detecting awareness in the vegetative state. Science, 313(5792), 1402.

16. Fins, J. J. (2005). Clinical pragmatism and the care of brain damaged patients: toward a palliative neuroethics for disorders of consciousness. Progress in brain research, 150, 565-582. 

17. Giacino, J. T., Sherer, M., Christoforou, A., Maurer-Karattup, P., Hammond, F. M., Long, D., & Bagiella, E. (2020). Behavioral recovery and early decision making in patients with prolonged disturbance in consciousness after traumatic brain injury. Journal of neurotrauma, 37(2), 357-365. 

18. Post, S. G. (2000). The moral challenge of Alzheimer disease: Ethical issues from diagnosis to dying. JHU Press. 

19. Foster, C. (2019). It is never lawful or ethical to withdraw life-sustaining treatment from patients with prolonged disorders of consciousness. Journal of Medical Ethics, 45(4), 265-270. 

20. Forber-Pratt, A. J., Lyew, D. A., Mueller, C., & Samples, L. B. (2017). Disability identity development: A systematic review of the literature. Rehabilitation psychology, 62(2), 198. 

21. Rissman, L., & Paquette, E. T. (2020). Ethical and legal considerations related to disorders of consciousness. Current Opinion in Pediatrics, 32(6), 765.

22. Chua, H. M. H. (2020). Revisiting the Vegetative State: A Disability Rights Law Analysis.

23. Pavlovic, D., Lehmann, C., & Wendt, M. (2009). For an indeterministic ethics. The emptiness of the rule in dubio pro vita and life cessation decisions. Philosophy, Ethics, and Humanities in Medicine, 4(1), 1-5.


A NEUROETHICAL DISCOURSE ON THE APPLICATION OF OPTOGENETICS FOR MEMORY MODIFICATION

As an emerging neuromodulation tool, optogenetics affords the capability of manipulating the activity of genetically defined neurons using light. In principle, optogenetics offers scientific insights into deciphering the complexity of various behavioral states and the neural pathways that underpin normal and abnormal brain functions with therapeutic applications [1]. In fact, human clinical trials have been initiated that utilize optogenetics to treat retinitis pigmentosa and restore vision, and animal models have been used extensively in optogenetics studies to develop therapies for a myriad of nervous system disorders as an alternative to deep brain stimulation (DBS) [2]. To understand the inner workings of this novel neurotechnology, the basic neurobiology of synaptic communication between neurons must be briefly elaborated. Fundamentally, Na+ ions flow into a neuron until the membrane depolarizes to the threshold potential, eliciting an action potential that propagates along the axon and transmits information to successive neurons by means of neurotransmitter release at the synapses. For ions to travel passively across the cell membrane into the neuron, the gated ion channels must be opened or closed (for activation or deactivation, respectively), which can be accomplished by applying an external stimulus such as temperature or ligand molecules. Alternatively, protein pumps can facilitate the flow of specific ions via active transport under similar stimuli. The neuron eventually becomes hyperpolarized as K+ ions flow outward to terminate the signal, returning the membrane to its resting potential until threshold is reached again, a cycle that repeats iteratively [3].
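To make this depolarize-fire-reset cycle concrete, the following minimal sketch simulates a leaky integrate-and-fire neuron, a standard textbook simplification of the dynamics described above; all parameter values and the injected-current protocol are illustrative assumptions, not figures taken from the cited sources:

```python
# Minimal leaky integrate-and-fire neuron: an illustrative simplification
# of the depolarize-to-threshold / spike / reset cycle described above.
dt = 0.1          # integration time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -55.0  # spike threshold (mV)
v_reset = -75.0   # hyperpolarized post-spike reset potential (mV)
r_m = 10.0        # membrane resistance (MOhm)

v = v_rest
spike_times = []
for step in range(int(500 / dt)):            # simulate 500 ms
    t = step * dt
    i_inj = 2.0 if 100 <= t < 400 else 0.0   # injected current (nA)
    # Membrane potential leaks toward rest and is driven up by input current
    v += dt / tau * (v_rest - v + r_m * i_inj)
    if v >= v_thresh:                        # threshold crossed: fire and reset
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms")
```

The sketch reproduces the qualitative behavior in the paragraph: sustained inward current depolarizes the cell to threshold, the cell fires, resets to a hyperpolarized value, and the cycle repeats for as long as the stimulus persists.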

The discovery of optogenetics has thus introduced light-gated channels and pumps as a new mechanism for controlling synaptic communication among neurons. These channels and pumps are encoded by genes for light-sensitive proteins called opsins, which are typically found in microbial species of archaea, bacteria, and fungi—the sources from which Type I opsin genes are derived [1]. Opsin genes are subsequently cloned and expressed in a target population of neurons that lack light-gated channels and pumps via viral vectors (i.e., adeno-associated viruses (AAV) or lentiviruses (LV)), thereby enabling the control and regulation of the activity of various neural circuits in real time with light through the insertion of an optical fiber. Upon illumination, neurons can be either excited or inhibited, depending on the nature of the optogenetic proteins that are expressed in accordance with the functions the researcher intends to mediate in individual neurons [2]. Channelrhodopsin-2 (ChR2) is a light-gated cation channel that excites neurons when illuminated with regular pulses of blue light, causing an inward flow of cations (e.g., Na+, Ca2+, H+) that increases the rate of action potentials [4]. To inhibit neuronal activity, the yellow-light-sensitive proton pump archaerhodopsin-3 (AR3) is used, wherein protons are transported out of neurons to decrease the rate of action potentials. Similarly, wild-type halorhodopsin (NpHR) is a Cl− pump that also produces neuronal inhibition in response to continuous illumination with yellow light, sustaining neurons in a hyperpolarized state [5].
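Continuing the toy neuron above, here is a hedged sketch of how ChR2-like excitation and NpHR-like inhibition could be layered onto the same model; the pulse schedule and photocurrent amplitudes are invented for illustration and do not reflect measured opsin kinetics:

```python
dt, tau, v_rest, v_thresh, v_reset, r_m = 0.1, 10.0, -70.0, -55.0, -75.0, 10.0

def photocurrent(t):
    """Toy photocurrents: a blue pulse opens a ChR2-like cation channel
    (depolarizing), while continuous yellow light drives an NpHR-like
    Cl- pump (hyperpolarizing). Amplitudes in nA are illustrative."""
    blue_on = (t % 50) < 5 and t < 250   # 5 ms blue pulses during first 250 ms
    yellow_on = t >= 250                 # continuous yellow for last 250 ms
    return (5.0 if blue_on else 0.0) + (-2.0 if yellow_on else 0.0)

v, spikes = v_rest, []
for step in range(int(500 / dt)):
    t = step * dt
    v += dt / tau * (v_rest - v + r_m * photocurrent(t))
    if v >= v_thresh:
        spikes.append(t)
        v = v_reset

# Expect spikes only during the blue-light phase, none under yellow light
print(f"{len(spikes)} spikes, all before 250 ms: {all(s < 250 for s in spikes)}")
```

Each blue pulse drives the membrane past threshold and evokes a spike, while the sustained yellow-light current holds the cell hyperpolarized, mirroring the bidirectional excitation/inhibition described in the paragraph.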

Given the bidirectional modality of optogenetics in controlling specific neuronal ensembles by regulating the movement of ions to facilitate or prevent synaptic communication, one application of optogenetics extends to its potential for modifying memories as an improved and versatile form of memory modification technology (MMT). Pre-existing MMTs include DBS along with pharmacological agents (e.g., propranolol or mifepristone), all of which can alter the brain via external means. However, whereas DBS suffers from the indiscriminate spread of electrical currents to nerve fibers neighboring the targeted cells, and pharmacological agents suffer from poor temporal precision of administration, optogenetics can compensate for these practical limitations [6]. The spatiotemporal selectivity and precision of optogenetics are best illustrated by the diversity of light-sensitive proteins that can be expressed in correspondence with the different cell types of the central nervous system (CNS), some of which are genetically defined such that they are restricted to particular optogenetic proteins based on the type of neurotransmitter they secrete or the direction of their axonal projections [2].

Therefore, optogenetics affords the ability to modify memories such that specific memories, whether newly formed or well-consolidated, can be activated or deactivated in their respective engrams by manipulating the activity of targeted neurons in the hippocampal region of the brain, primarily in the dentate gyrus (DG), where memories are initially formed from the merging of sensory modalities [7, 8]. For example, Liu et al. demonstrated the implantation of de novo false memories into mice through optogenetic manipulation using ChR2 aided by contextual fear conditioning [9]. Another optogenetic memory modification experiment, conducted by Guskjolen et al., surprisingly showed that memories lost to infantile amnesia in infant mice can be recovered by optogenetically targeting the hippocampal and cortical neurons responsible for encoding infant memories, followed by reactivating the ChR2-labeled neuronal ensembles once the mice reached adulthood after a period of three months [10].

Additional applications of optogenetics in the context of memory modification include enhancing the cognitive capacity for memory, changing the valence of a memory (from negative to positive, and vice versa) without distorting its content, and treating memory impairments characteristic of conditions such as Alzheimer’s disease (AD) and post-traumatic stress disorder (PTSD) in clinical patients [6]. Notwithstanding the futuristic promise of optogenetics, the potential harm of manipulating select human memories on demand and to various extents is of equal relevance when considering the collective ramifications of this novel yet ambivalent neurotechnology. The neuroethical flaws of using optogenetics for memory modification are thus worthy of detailed discussion.

As with many revolutionary technologies, such as the CRISPR/Cas9 system for genome editing, safety risks limit the use of optogenetics for memory modification given its invasive nature, requiring the injection of viral vectors into the brains of experimental subjects for the in vivo delivery of optogenetic proteins [11]. Furthermore, deep brain optogenetic photostimulation also requires tethered optical fibers or other implants to be surgically inserted into the brain to provide a light source, which may cause tissue damage, ischemia, and infection, as with many invasive neurosurgeries [12]. Interestingly, a non-invasive approach to optogenetics has been developed by Lin et al. using an engineered red-shifted variant of ChR, known as red-activatable ChR (ReaChR), expressed in the vibrissa motor cortex of mice. With red light penetrating through the external auditory canal, neurons can be optically activated to drive spiking and vibrissa motion in mice, enabling transcranial optogenetic excitation with an intact skull [13].

Nevertheless, safety risks cannot be prematurely dismissed in the event that optogenetic manipulations lead to off-target behavioral or emotional effects, wherein unpredictable network changes may occur in areas outside the targeted zone of optogenetic activation or deactivation [14]. For example, episodic memories are not distributed solely in the hippocampus; the system also comprises surrounding brain structures of the medial temporal lobe, including the perirhinal and entorhinal cortices, in conjunction with structurally connected sites (e.g., thalamic nuclei, mammillary bodies, retrosplenial cortex). Targeting a set of hippocampal neurons known to encode a specific memory may therefore induce unforeseeable changes, for it is presumed that their function is not exclusive to memory encoding. Moreover, since the off-target effects of optogenetic memory modification have not been heavily analyzed in the neuroethical literature, the uncertainty surrounding safety issues is compounded. This uncertainty is further elevated by the risk of long-term expression of optogenetic proteins in the mammalian brain, with unknown consequences [6].

Beyond the safety issues and technical limitations of optogenetics, attention now shifts toward issues unique to optogenetics that may or may not be shared with pre-existing MMTs, notably the erasure of unwanted memories, which are reasonable targets for optogenetic intervention. The first argument concerns the abandonment of one’s moral obligations in hypothetical scenarios where the witnesses of a crime wish to erase their memories of the event through optogenetics [15]. While doing so aligns with one’s right to personal choice if the witnesses find the crime far too upsetting to remember, it is not within the interests of society to erase such memories, which are useful as testimony during criminal prosecution even if memories prove unreliable. Therefore, witnesses have a moral obligation to retain their memories for consequentialist reasons (i.e., preventing future crimes and exploitation) and for upholding justice, and the same applies to the victims of a crime. From the perspective of criminal offenders, furthermore, it is also a moral obligation to retain the memories of their unlawful actions without optogenetic intervention, even if they develop a guilty conscience; erasing such memories would be an inappropriate moral reaction, and responsibility would still need to be borne by the offender even if the memories were erased [6, 7].

Retaining memories sustained from traumatic experiences such as discrimination or abuse is also justified in the sense that traumatic memories may subtly influence the cultivation of one’s personality and values [6]. Adults who experienced childhood trauma have been found to exhibit elevated levels of empathy relative to those who did not, as shown by Greenberg et al. [16]. In turn, having undergone traumatic experiences may ultimately motivate affected individuals to seek and initiate systemic changes in society, by means of activism, for instance, to mitigate the root cause of their trauma [6]. To further justify relying on the traumatic memories of individuals to achieve the ends of society’s welfare in the absence of optogenetic interventions, it ought to be reiterated that without such means, social relations among members of society would remain oppressive and unspontaneous: individuals would remain unmoved by the sufferings of others and live in a continuous state of mass oblivion [15]. Using optogenetics to erase traumatic memories would thus nullify the motivational impulses and humaneness shared among affected individuals and, most significantly, could distort the trajectory of one’s personality and values to a certain extent, especially when the valence of the memories is altered enough to affect one’s dispositions [6].

Traumatic experiences are also pivotal in partially forming self-defining memories, which are of equal importance, for they are the underlying constituents of a person’s fundamental character and sense of self. This is reinforced by John Locke’s ideas on memory, with support from contemporary empirical evidence, despite critical objections claiming no relationship between memory and personal identity [17]. While dissenting views ought to be acknowledged, the premise of Lockean ideas and the experimental support for them act as a vital presumption in the current argument, which asserts that erasing one’s self-defining memories may change an individual’s narrative identity—the integration of one’s internalized, evolving life stories that renders the person’s life with unity and meaning [18]. Because narrative identities are malleable and change in sync with individuals’ memories, the reactivation of previously erased self-defining memories, or the implantation of false ones, may fail to be reintegrated with the self [6]. In such circumstances, individuals become susceptible to betraying or deceiving their original self as their lives deviate from their truthful identity in the event of optogenetic memory manipulation [6].

Analyzing the effects of memory modification on personal identity further requires a discussion of the threat posed to individual authenticity. While authenticity has multiple conceptualizations, it is beneficial to consider it from a dual-basis framework that combines accounts from existentialism (self-creation) and essentialism (self-discovery) in prompting critical ethical inquiries regarding the use of optogenetics [6]. Existentialists define authenticity as the ability to act upon one’s honest choices and identity without the influence of external social pressures and norms, while essentialists add the concomitant requirement of being faithful to one’s true self—meaning that the individual holds a clear and accurate picture of their own life narrative, past and present, that culminates in who they are and drives their purpose in life. The interference of optogenetics in modifying individuals’ memories implies the alteration of one’s identity and certain affiliated values, beliefs, and other characteristics. Ultimately, one’s authenticity becomes prone to diminishment and one’s intrinsic character to misrepresentation, throwing the acts of self-creation and self-discovery into disarray [19, 20].

Note that becoming inauthentic is generally deemed morally permissible under the circumstance that the choice to undergo optogenetic memory modification is made without ambivalence and derives from one’s higher-order desires, where the benefits may outweigh the potential harms, as with PTSD patients whose severe symptoms are unresponsive to conventional treatment. Such cases are important to consider when formulating effective frameworks for regulating the use of optogenetics, yet questions such as to what extent one’s external freedom is compromised, or whether the person resulting from optogenetic memory modification differs in essence from their original self, are equally noteworthy for ethical examination. Absent such justification, the dynamic and relational narrative construction of individuals’ identities (i.e., discovering oneself and acknowledging one’s identity) becomes subjugated to conformity, in the sense that individual choices are no longer established on the basis of adherence to one’s true self; instead, they stem from the altering effects of optogenetic memory modification, which violates the pillars of authenticity by favoring one’s local autonomy over authenticity. A familiar everyday analogue is a naturally shy individual behaving in an outgoing manner when interviewing for jobs that prefer extroverted applicants [19, 20, 21].

The authenticity argument in relation to one’s identity nevertheless faces criticism regarding the practical utility of the dual-basis framework for assessing memory modification and its implications, given its individual-focused and idealistic framing. Despite this, the dual-basis framework offers a well-balanced account of the complexity of neuroscience and psychology by presenting both the possibilities and the constraints of creating one’s narrative identity [20]. Interestingly, though, Kostick and Lázaro-Muñoz have argued that the brain has neural safeguards against optogenetically induced inauthenticity that rely on neuroplasticity [22]. Note, however, that the discussion above entails only the worst-case scenarios of memory modification, intended to help guide future directions in the neuroethics of optogenetic applications, since the degree of optogenetic effects on memory has yet to be demarcated. It is therefore worthwhile to rely on the possible outcomes of hypotheticals to gauge reality, for it is currently difficult to translate optogenetic findings from animal models to humans, in conjunction with the lack of a comprehensive neurobiological understanding of memory’s unpredictable nature.

As with any novel neurotechnology of undefined impact, optogenetics poses its own risks and benefits for the purpose of memory modification, requiring a neuroethical evaluation of the ramifications of changing the properties and dimensions of memory. The arguments presented herein are reminiscent of the events of the 2004 romance and science fiction film Eternal Sunshine of the Spotless Mind, in which the two protagonists decide to undergo a procedure to have their memories of each other removed following a breakup, only to feel remorse in the aftermath as they try to reconcile their relationship despite the loss of their memories. Memory, then, is what keeps the stories of our lives in a continuous state of progression, as oxygen is to fire; it is the gate that reveals our identities, values, ambitions, struggles, and relations to one another, empowering us to live happily in a dreadful world in remembrance of who we are and those whom we cherish.

References 

1. Josselyn, S. A. (2018). The past, present and future of light-gated ion channels and optogenetics. Elife, 7, e42367. 

2. Felsen, G., & Blumenthal-Barby, J. (2022). Ethical issues raised by recent developments in neuroscience: The case of optogenetics. In Neuroscience and Philosophy.

3. Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K., & Walter, P. (2002). Ion channels and the electrical properties of membranes. In Molecular Biology of the Cell. 4th edition. Garland Science. 

4. Fenno, L., Yizhar, O., & Deisseroth, K. (2011). The development and application of optogenetics. Annual review of neuroscience, 34, 389-412. 

5. Carter, M., & Shieh, J. C. (2015). Guide to research techniques in neuroscience. Academic Press. 

6. Adamczyk, A. K., & Zawadzki, P. (2020). The memory-modifying potential of optogenetics and the need for neuroethics. NanoEthics, 14(3), 207-225.

7. Canli, T. (2015). Neurogenethics: An emerging discipline at the intersection of ethics, neuroscience, and genomics. Applied & Translational Genomics, 5, 18-22.

8. Hamilton, G. F., & Rhodes, J. S. (2015). Exercise regulation of cognitive function and neuroplasticity in the healthy and diseased brain. Progress in Molecular Biology and Translational Science, 135, 381-406.

9. Liu, X., Ramirez, S., & Tonegawa, S. (2014). Inception of a false memory by optogenetic manipulation of a hippocampal memory engram. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1633), 20130142. 

10. Guskjolen, A., Kenney, J. W., de la Parra, J., Yeung, B. R. A., Josselyn, S. A., & Frankland, P. W. (2018). Recovery of “lost” infant memories in mice. Current Biology, 28(14), 2283-2290. 

11. Rook, N., Tuff, J. M., Isparta, S., Masseck, O. A., Herlitze, S., Güntürkün, O., & Pusch, R. (2021). AAV1 is the optimal viral vector for optogenetic experiments in pigeons (Columba livia). Communications Biology, 4(1), 100. 

12. Chen, R., Gore, F., Nguyen, Q. A., Ramakrishnan, C., Patel, S., Kim, S. H., ... & Deisseroth, K. (2021). Deep brain optogenetics without intracranial surgery. Nature biotechnology, 39(2), 161-164. 

13. Lin, J. Y., Knutsen, P. M., Muller, A., Kleinfeld, D., & Tsien, R. Y. (2013). ReaChR: a red-shifted variant of channelrhodopsin enables deep transcranial optogenetic excitation. Nature neuroscience, 16(10), 1499-1508.

14. Andrei, A. R., Debes, S., Chelaru, M., Liu, X., Rodarte, E., Spudich, J. L., ... & Dragoi, V. (2021). Heterogeneous side effects of cortical inactivation in behaving animals. Elife, 10, e66400. 

15. Kolber, A. J. (2006). Therapeutic forgetting: The legal and ethical implications of memory dampening. Vand. L. Rev., 59, 1559. 

16. Greenberg, D. M., Baron-Cohen, S., Rosenberg, N., Fonagy, P., & Rentfrow, P. J. (2018). Elevated empathy in adults following childhood trauma. PLoS one, 13(10), e0203886. 

17. Robillard, J. M., & Illes, J. (2016). Manipulating memories: The ethics of yesterday’s science fiction and today’s reality. AMA Journal of Ethics, 18(12), 1225-1231. 

18. McAdams, D. P., & McLean, K. C. (2013). Narrative identity. Current directions in psychological science, 22(3), 233-238. 

19. Tan, S. Z. K., & Lim, L. W. (2020). A practical approach to the ethical use of memory modulating technologies. BMC Medical Ethics, 21(1), 1-14.

20. Leuenberger, M. (2022). Memory modification and authenticity: a narrative approach. Neuroethics, 15(1), 10.

21. Zawadzki, P. (2023). The Ethics of Memory Modification: Personal Narratives, Relational Selves and Autonomy. Neuroethics, 16(1), 6. 

22. Kostick, K. M., & Lázaro-Muñoz, G. (2021). Neural safeguards against global impacts of memory modification on identity: ethical and practical considerations. AJOB neuroscience, 12(1), 45-48.


On The Psychological Disembodiment Of Autonomy And Agency In Patients With Brain-Computer Interface Implants

The radical symbiosis of the human body or mind with machines via technological interventions is one area of cutting-edge research in neural engineering that is reminiscent of many speculative science fiction stories about robotics [1]. “You just can’t differentiate between a robot and the very best of humans,” as the writer and biochemist Isaac Asimov once warned. This convergence is another instance in which the introduction of artificial intelligence (AI) has the capability to revolutionize many facets of human life, notably our physical and mental health. For that reason, the often invisible failing of the human brain necessitates the restoration of welfare to those who suffer from severe neuropsychiatric or neuromuscular disorders that constrict human flourishing.

One prospective solution, representative of the work of for-profit neurotechnology companies such as Kernel, Neuralink, and Synchron, is the development of brain-computer interface (BCI) technology. BCIs are electronic feedback systems that record and analyze the brain activity of the user in real or near-real time using AI algorithms, thereby enabling the user to control an external device (computer cursors, prosthetic limbs, automated wheelchairs, etc.) with their mental faculties [2]. The purpose of a BCI is to afford neurologically compromised individuals (i.e., noncommunicative, paralyzed, etc.) some degree of control over their social environment by enabling them to operate a given external machine to compensate for the loss of certain cortical functions (e.g., speech, motor control). In application, BCIs can be utilized to treat conditions such as cerebral palsy, spinal cord injury, locked-in syndrome, and amyotrophic lateral sclerosis [3]. Through machine learning techniques, BCIs can also become automatized, gaining the capability to predict the likelihood of impending seizures in individuals with epilepsy, for example, by detecting precursory neuronal events and subsequently advising the individual to take cautionary measures through sensory cues [4].

BCIs fall into three distinct categories according to the amount of volitional control needed to generate signals: active, reactive, and passive [5]. Active BCIs require the user to strategically produce specific neuronal patterns, such as mentally picturing the movement of certain body parts. Reactive BCIs work by having the user voluntarily attend to one external stimulus among various stimuli to evoke changes in brain activity. Finally, passive BCIs rely on the involuntary brain activity of the individual, such as mental workload or affective states.

In practice, BCIs (i.e., microelectrode arrays) are implanted on the surface of or within the cerebral cortex by means of invasive surgery to establish the intimate connection between the brain and an external machine that translates a mental process into an executable output [2]. One example of such an invasive BCI is deep brain stimulation (DBS), which employs a bidirectional (closed-loop) system as opposed to the one-directional algorithms of other common BCIs [6]. DBS is effective in the treatment of conditions such as Parkinson’s disease, dystonia, and obsessive-compulsive disorder. It requires the implantation of electrodes into certain areas of the brain, which generate electrical impulses that subdue abnormal activity in targeted brain regions (i.e., the subthalamic nucleus or the globus pallidus internus) and regulate the chemical imbalances characteristic of specific circuitopathies [7]. The required electrical stimulation is delivered through an extension wire connected to an internal pulse generator implanted under the skin of the upper chest. Noninvasive BCI methods allow brain activity to be recorded from the scalp using neuroimaging instruments (e.g., EEG, fMRI, NIRS), thereby eliminating the need for functional neurosurgery [8]. Interestingly, the work of Rakhmatulin and Volkl has demonstrated that a simple, noninvasive BCI device can be made inexpensively using a Raspberry Pi to interpret EEG patterns and allow the user to control mechanical objects—unlike other forms of BCI, which pose challenges of affordability and equity due to their expense [9].
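As a rough illustration of the record-analyze-translate loop described above, here is a minimal, hypothetical sketch of a noninvasive EEG decoding step in Python. The mu-rhythm feature is a real phenomenon (motor imagery suppresses 8-12 Hz power over motor cortex), but the threshold, command names, and simulated signal are invented for illustration and do not correspond to any actual BCI product:

```python
import numpy as np

FS = 250  # EEG sampling rate in Hz, typical of low-cost acquisition boards

def band_power(signal, fs, lo, hi):
    """Mean spectral power of one EEG channel within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def decode_command(eeg_window):
    """Toy decoder: imagining movement suppresses the mu rhythm (8-12 Hz)
    over motor cortex, so low relative mu power is translated into a
    device command; otherwise the system stays idle."""
    mu = band_power(eeg_window, FS, 8, 12)
    broadband = band_power(eeg_window, FS, 1, 40)
    return "move_cursor" if mu / broadband < 0.5 else "idle"

# One second of synthetic "EEG": a dominant 10 Hz mu rhythm at rest,
# then the same rhythm attenuated during simulated motor imagery.
t = np.arange(FS) / FS
rest = np.sin(2 * np.pi * 10 * t)
imagery = 0.2 * rest + np.sin(2 * np.pi * 25 * t)  # mu suppressed, beta remains
print(decode_command(rest), decode_command(imagery))  # -> idle move_cursor
```

A real system would add spatial filtering, calibration per user, and a trained classifier, but the sketch captures the essential translation from measured brain activity to an executable device command.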

While many instances of BCI technology are still in their infancy and undergoing clinical trials, further applications can be extrapolated to commercial purposes such as entertainment, especially in the video game industry [2], and even as far as augmenting one’s natural intelligence or military combat abilities for cognitive or physical enhancement, respectively, and allowing brain-to-brain communication among users [10,11]. Only the clinical applications of invasive BCIs (all mentions of BCIs hereafter assume invasive ones, unless stated otherwise) will be subjected to discussion and scrutiny herein, however, for the development of BCI technology and its application to medicine and healthcare is an ethically questionable undertaking in spite of its novelty and benefits. One specific peculiarity in our discussion poses the following ethical challenge: in patients with BCI implants, the degree of autonomy and agency can potentially be altered [12,13]. Subsequently, this could impact the accountability, privacy, and identity of patients on a psychological, social, and legal level [10]—along with other iatrogenic complications (i.e., acute trauma, glial scarring) of BCI-induced effects that raise safety concerns and could ultimately transmute the hope of yesterday into the despair of tomorrow [2].

It is of foremost importance to discern autonomy from agency, as both terms assume that the individual in question is a free agent in a world where one has conscious control over one’s thoughts and actions. Whereas autonomy refers to the ability to independently choose a course of action using one’s reason and knowledge, in the absence of imposed interference or limitation [10], agency refers to the ability to influence a course of action as desired and to feel ownership of those actions [13]. These are merely general definitions, however, for philosophers of different schools of thought do not equally agree on the correctness of such ambiguous concepts.

Neurologically compromised patients with BCI implants may ultimately experience a diminished sense of autonomy and agency such that they are no longer free agents in the world. Rather, they would simply be subject to the laws of determinism in the same fashion as the biological brain and the physical universe, as evidenced through the first-person narratives of patients with BCI implants discussed later. As a result, this concern raises several moral and philosophical questions in relation to one’s humanity and personhood: To what extent does an individual with a BCI implant feel “artificial”? What fraction of the thoughts and decisions that the individual produces reflect their authentic self? How much of the individual’s sense of self and judgment is fused with the technology in any given context [1]? In the same vein, Li and Zhang have demonstrated the cyborgization of live Madagascar hissing cockroaches that can be remotely controlled by the human brain. Using a portable SSVEP (steady-state visual evoked potential)-based BCI paradigm that delivers electrical stimulation to the cyborg cockroaches (the receivers), the cockroaches can be directed to move in various directions in accordance with the intentions of a human subject (the controller) wearing an EEG headset [14]. While the use of cockroaches may be ethically justifiable to some extent, the story would be different if humans were placed on the receiving end. The line that separates the BCI as an assistive tool from the user’s body schema and self-understanding is thus blurred in the hybridization of mankind with AI in the forthcoming mechanistic evolution of Homo sapiens to Homo sapiens technologicus [2].

Perhaps the biggest antithesis to the argument regarding the preservation of one’s autonomy and agency is that the issues of humanity and personhood are irrelevant to the ethical debate on BCI. After all, the changes to identity experienced by patients with untreated neurological conditions are, arguably, far greater than the changes brought on by BCI. Therefore, the moral concern over identity change, as some BCI researchers have argued, should not hold much weight in ethical guidelines for BCI, as changes in self-perception among BCI users are only natural in the implementation of such neurotechnology [15]. As a matter of fact, some researchers have also asserted that the lack of independence to make decisions on the basis of one’s desires is not the fault of BCIs but should rather be ascribed to the pathological condition that hinders the individual’s will to express themselves or act [2]. If anything, BCIs are tools of empowerment, and studies of patients’ attitudes toward BCI have shown evidence of optimism [16].

Nevertheless, the sense of autonomy and agency experienced by some BCI users is possibly illusory and prone to attribution errors [2]. The interplay between human and machine decision-making is becoming increasingly complex and intimate, such that BCIs can initiate certain outputs without the user’s volitional input [17]. This is technologically feasible, given that BCI systems contain a “black box” of information outsourced from the patient to which the patient has no access [12]. Current BCI systems offer only a restricted level of guidance control over executed movements, and the ability to veto initiated commands through specific output channels is also very much lacking [17]. Theoretically, BCIs, especially passive ones, can algorithmically learn the neural activity patterns of the user in specific situations, then use the collected and stored information to render selectively fewer options for the user to act upon, even in cases where the user may not endorse these alterations. Such algorithm-derived options may therefore attenuate the user’s freedom to produce autonomous commands and their capacity to make choices [12]. This problem is notably pronounced in DBS, where the patient’s ability to think and make decisions is prompted only when electrical currents alter the patient’s brain activity, suggesting a direct means of manipulation [1]. The effects of BCI on the autonomy and agency of patients will now be examined from a first-person perspective.

Gilbert et al. conducted a series of individual interviews with six epileptic patients who volunteered to undergo BCI implantation [18]. The objective of the interviews was to capture the contrast between the pre- and post-operative experiences of these patients with respect to their self-image and self-change as a result of BCI-mediated events. The patients’ responses were mixed, however. One patient asserted that the incorporation of the BCI changed her life favorably, such that it “changed [her] confidence [and her] abilities,” further elaborating that “with this device [she] found [herself]” [18]. Clearly, this particular patient experienced a sense of control and empowerment over her life with the BCI implant, yet her experience stands in opposition to that of a different patient, who claimed that the BCI merely made her otherness more apparent in the eyes of society and “made [her] feel [she] had no control [… she] got really depressed” [18]. The inability to control BCI-driven events suggests an issue regarding accountability. Noisy signals resulting from subconscious neural activity are known to feed into the output system of BCIs, creating unintended movements that are not indicative of the user’s true desires [13]. Because unintended movements can lead to unpredictable and harmful consequences in specific contexts, accountability could be misattributed to the BCI user, and the feeling of having no control in those circumstances may elicit the same miserable sentiment that the latter patient expressed. The risks of device failure, hacker intrusion, and akrasia are also among the diverse factors that could cause a BCI to perform unintended acts, distorting the user’s sense of self and identity along with their moral and legal responsibilities [10].

It is worth noting that when the aforementioned patient whose life was favorably impacted by the BCI implant was ultimately subjected to having it removed, because the company that administered the implant declared bankruptcy, her world fell apart. In the mind of that patient, it was as if a piece of her own flesh were being torn from her body. Gilbert later remarked that “the company [now] owned the existence of this new person” [1]. Notice, however, that one key element contributing to the difference in the lived experiences of patients with BCI implants is whether they view their disability as part of who they are. Patients who accepted their epilepsy as part of themselves were more likely to regard the BCI implant in a positive or neutral manner than patients who did not view themselves as epileptic, who instead experienced more distress and estrangement [14]. Although this observation cannot be generalized to the entire population of disabled individuals, since the study contained a sample of only six individuals, it does raise the question of whether BCI is a form of treatment or enhancement for disabled individuals. Depending on how disabled individuals see themselves, using a BCI without identifying oneself as disabled may be interpreted as enhancement, for example [2]. Alternatively, individuals who do identify as disabled may perceive BCI as a treatment, but such individuals may fear being subjugated to normality and thus refuse BCI implantation out of attachment to their disability identity, which runs counter to the original purpose of BCI in restoring normal capabilities to the neurologically compromised.

The curious case of the patient who experienced a loss of control thus illustrates why BCI technology may not fully pass the ethical test. In particular, the psychological aftermath of BCI-induced effects, namely the disembodiment of one's sense of autonomy and agency, should be taken into account. Let us consider the following hypothetical case scenario [19] that highlights this ethical intrigue: Frank, a 35-year-old man, drinks excessively, putting him at risk of alcohol use disorder and other health complications such as cardiovascular disease and liver cancer. Due to his inability to fight off withdrawal symptoms, his family suggested he seek DBS treatment to alleviate his compulsive drinking behaviors. Days after his surgery, Frank became indifferent to alcohol and was able to control his intake. However, Frank's loss of interest in alcohol eventually caused him to feel remorse for undergoing the DBS treatment, and he could no longer feel a connection to his old self, as though the treatment had changed something about him beyond his alcoholism. Nevertheless, Frank is hesitant to express his inner feelings to his family, who would likely stigmatize him for his drinking behavior if he were to have the DBS device removed.

In the above scenario, the existential crisis that Frank experiences after his DBS treatment involves a paradox: the desire to be able to enjoy drinking alcohol and the desire to be rid of his alcohol addiction. Moreover, Frank feels a threat to his identity as a result of having the DBS device implanted in him, as though his decisions are now entirely attributable to the device, depriving him of his right to autonomy and agency, a testament to the fact that these risks are commonly overlooked in the process of giving meaningful consent and should be prioritized as much as protecting one's private information from being obtained through BCI systems [6]. One social factor, moreover, that compromises Frank's ability to decide and act in accordance with his own wishes is the societal stigma imposed upon him by his family, whose biased point of view aligns with the norm and runs contrary to Frank's [2]. The disembodiment of Frank's identity and the closely affiliated psychological aspects of autonomy and agency are thus the focal points of this ethical debate in relation to the unprecedented effects of BCI on an individual's humanity and personhood.

With near certainty, BCIs are among the transformative technologies that will eventually bring about profound changes to the way clinical patients live and prosper. Yet the imminent ethical challenges that BCIs impose on patients carry a myriad of uncertainties with respect to changes in their psychology, notably their sense of autonomy and agency, given the disembodied nature of BCI technology. Interacting with the world through neurotechnological means, it seems, is the epitomized reality of the 21st century. Be that as it may, it ought to be recognized that BCI is to neuroscience what human cloning is to genetics and what nuclear weapons are to nuclear physics: a perpetual cycle of progress and destruction in the sustainable development of futuristic societies.

References 

1. Drew, L. (2019). The ethics of brain-computer interfaces. Nature, 571(7766), S19-S19. 

2. Burwell, S., Sample, M., & Racine, E. (2017). Ethical aspects of brain computer interfaces: a scoping review. BMC medical ethics, 18(1), 1-11. 

3. Shih, J. J., Krusienski, D. J., & Wolpaw, J. R. (2012, March). Brain-computer interfaces in medicine. In Mayo clinic proceedings (Vol. 87, No. 3, pp. 268-279). Elsevier. 

4. Cook, M. J., O'Brien, T. J., Berkovic, S. F., Murphy, M., Morokoff, A., Fabinyi, G., ... & Himes, D. (2013). Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study. The Lancet Neurology, 12(6), 563-571. 

5. Kögel, J., Schmid, J. R., Jox, R. J., & Friedrich, O. (2019). Using brain-computer interfaces: a scoping review of studies employing social research methods. BMC medical ethics, 20, 1-17. 

6. Klein, E., Goering, S., Gagne, J., Shea, C. V., Franklin, R., Zorowitz, S., ... & Widge, A. S. (2016). Brain-computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain-Computer Interfaces, 3(3), 140-148. 

7. Lozano, A. M., Lipsman, N., Bergman, H., Brown, P., Chabardes, S., Chang, J. W., ... & Krauss, J. K. (2019). Deep brain stimulation: current challenges and future directions. Nature Reviews Neurology, 15(3), 148-160. 

8. Vlek, R. J., Steines, D., Szibbo, D., Kübler, A., Schneider, M. J., Haselager, P., & Nijboer, F. (2012). Ethical issues in brain–computer interface research, development, and dissemination. Journal of neurologic physical therapy, 36(2), 94-99. 

9. Rakhmatulin, I., & Volkl, S. (2022). PIEEG: Turn a Raspberry Pi into a Brain-Computer-Interface to measure biosignals. arXiv preprint arXiv:2201.02228.

10. Zeng, Y., Sun, K., & Lu, E. (2021). Declaration on the ethics of brain–computer interfaces and augment intelligence. AI and Ethics, 1(3), 209-211. 

11. Fuchs, T. (2006). Ethical issues in neuroscience. Current opinion in psychiatry, 19(6), 600-607. 

12. Friedrich, O., Racine, E., Steinert, S., Pömsl, J., & Jox, R. J. (2021). An analysis of the impact of brain-computer interfaces on autonomy. Neuroethics, 14, 17-29.

13. Davidoff, E. J. (2020). Agency and accountability: ethical considerations for brain-computer interfaces. The Rutgers journal of bioethics, 11, 9.

14. Li, G., & Zhang, D. (2017). Brain-computer interface controlling cyborg: A functional brain-to-brain interface between human and cockroach. Brain-Computer Interface Research: A State-of-the-Art Summary 5, 71-79.

15. Nijboer, F., Clausen, J., Allison, B. Z., & Haselager, P. (2013). The asilomar survey: Stakeholders’ opinions on ethical issues related to brain-computer interfacing. Neuroethics, 6, 541-578.

16. Schicktanz, S., Amelung, T., & Rieger, J. W. (2015). Qualitative assessment of patients’ attitudes and expectations toward BCIs and implications for future technology development. Frontiers in systems neuroscience, 9, 64. 

17. Steinert, S., Bublitz, C., Jox, R., & Friedrich, O. (2019). Doing things with thoughts: Brain-computer interfaces and disembodied agency. Philosophy & Technology, 32, 457-482. 

18. Gilbert, F., Cook, M., O’Brien, T., & Illes, J. (2019). Embodiment and estrangement: results from a first-in-human “intelligent BCI” trial. Science and engineering ethics, 25, 83-96. 

19. Brown, T., & CSNE Ethics Thrust (2014, October). Case studies in neuroscience. Center for Neurotechnology. https://centerforneurotech.uw.edu/sites/default/files/CSNE%20Neuroethics%20Cases_for%20distribution.pdf



MORAL STATUS IN CEREBRAL ORGANOIDS, GASTRULOIDS, AND CHIMERAS

The development of stem cell technology in recent years has transformed biomedical research and clinical care with the introduction of organoids, three-dimensional in vitro tissue culture models grown from pluripotent stem cells (PSCs), adult stem cells (ASCs), or embryonic stem cells (ESCs) [1]. Derived from human tissues, organoids have a self-organization mechanism that can spontaneously produce progenies which, via cell sorting, resemble the complex functionality and physiology of different organs in vivo [2]. Though miniature in size and possessing fewer cell types [3], organoids can be individualized to superficially mimic, at the cellular level, the architecture and properties of several specialized organs of the human body [2]. Past research studies have successfully created organoids that resemble the heart, kidney, lung, retina, and thyroid [4,5,6,7,8]; in essence, the possibilities are manifold. In theory, organoids can serve as substitutes for common animal models to mitigate the ethical concerns associated with the latter [2]; moreover, organoids are more accurate models to work with relative to, say, mouse models because they are derived from the tissues of human donors, which would produce more effective results in applications such as regenerative medicine, drug screening, and human development and disease modeling [1].

No organoid technology, however, is without ethical flaws of its own; after all, serious ethical issues are raised by the intimate connection between organoids and the tissue donors from whom the stem cells are derived. The present case is reminiscent of HeLa cells, cancerous cells that can reproduce indefinitely but were unethically obtained from their donor, an African American woman named Henrietta Lacks who was diagnosed with cervical cancer [9]. The story of Henrietta Lacks, as portrayed in the biography The Immortal Life of Henrietta Lacks by Rebecca Skloot, scrutinized the critical issue of informed consent (along with other relevant bioethical issues), for Henrietta Lacks passed away unaware that her cells had been taken for purposes she did not permit [10]. Likewise, similar novel ethical issues pertaining to organoid technology are called into question and open to debate. The ethical problems of three distinct subtypes of organoids will be discussed herein in relation to the possible attribution of moral status to these entities: cerebral organoids, gastruloids, and chimeras.

Of all forms of organoids, cerebral organoids are by far the most ethically problematic subtype, as they are derived from neural tissues of the human brain [2]. Yet cerebral organoid models are scientifically and clinically promising in enhancing our understanding of the brain's relationship with other organs of the body through the integration of various organoids to form a complex known as an 'assembloid' [11], as well as in understanding, by means of modeling, the mechanisms that underlie neurodevelopment, neurodegeneration, and neurologic diseases. As a matter of fact, relative to early models that relied on cellular techniques to elucidate the complexity of the brain, cerebral organoids have successfully modeled the pathological pathways that underlie idiopathic autism and virus-induced microcephaly [2], thus shedding light on potential therapeutic strategies and remedies for neurologic disease prevention.

Notwithstanding the auspicious prospects of cerebral organoids, the ability of these in vitro systems to conduct action potentials, form neural networks, and produce oscillatory electroencephalogram (EEG) wave patterns has given rise to one intriguing possibility: the emergence of consciousness in the neocortex of the organoid [3]. While the idea of consciousness is not universally conceptualized and is currently open to many physical and non-physical interpretations (e.g., multiple drafts model, integrated information theory, quantum mechanics) [12], it has been hypothesized that cerebral organoids have the capability to mature and thereby become sentient beings that can react to light or experience nociceptive pain; furthermore, higher-order cognitive functions such as working memory, self-awareness, and fine motor control are postulated to be attainable as well, though such claims are subject to intense skepticism [2]. However, under the premise that a cerebral organoid does in fact have the ability to spark consciousness within itself and even exhibit pleasure-seeking and pain-avoidant behaviors [13], it follows that moral status must be at least partially attributed to the organoid [3]. For a given being, to have moral status is to not be harmed but to be treated respectfully for the sake of the being and their best interests [14]. Hence, a cerebral organoid may possess as much moral status as a human (if one acknowledges it as a human clone in this sense) or, to a lesser extent, a laboratory mouse, but only if it is capable of developing cortical activity-induced consciousness and the associated cognitive traits and sensory inputs above a base threshold. Accordingly, there is growing interest in creating a test that can objectively assess consciousness in cerebral organoids. Currently, the perturbational complexity index (PCI) has been used as a measure of consciousness in unresponsive patients suffering from brain injuries [15]. Regardless of future research outcomes, the perception surrounding cerebral organoids is constantly changing; therefore, these models can no longer be treated as an exploitable biomaterial to be arbitrarily subjugated to the interests of whoever possesses them.
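To make the intuition behind such a complexity measure concrete, the toy sketch below (a rough illustration only, not the validated TMS-EEG pipeline behind the PCI in [15]) binarizes a simulated sources-by-time response matrix and scores it with a simple Lempel-Ziv parsing, normalized so that a maximally random sequence scores near 1. The function names, the binarization threshold, and the normalization are all illustrative assumptions.

```python
import numpy as np

def lz_complexity(bits: str) -> int:
    """Count phrases in a simple left-to-right Lempel-Ziv parsing:
    extend the current chunk until it is novel, then record it."""
    phrases, start, length = set(), 0, 1
    while start + length <= len(bits):
        chunk = bits[start:start + length]
        if chunk in phrases:
            length += 1            # seen before: extend the chunk
        else:
            phrases.add(chunk)     # novel phrase: record and move on
            start += length
            length = 1
    return len(phrases)

def toy_pci(activity: np.ndarray, threshold: float) -> float:
    """Binarize a (sources x time) response matrix and normalize its
    Lempel-Ziv phrase count so a maximally random sequence scores ~1."""
    binary = (np.abs(activity) > threshold).astype(int)
    bits = "".join(map(str, binary.ravel()))
    n = len(bits)
    return lz_complexity(bits) * np.log2(n) / n

rng = np.random.default_rng(1)
flat = np.zeros((8, 64))               # stereotyped, unresponsive trace
rich = rng.standard_normal((8, 64))    # differentiated response
print(toy_pci(flat, 1.0))              # lower score: compresses easily
print(toy_pci(rich, 1.0))              # higher score: resists compression
```

The guiding idea carries over to the clinical measure: a stereotyped, unresponsive reaction to perturbation compresses well and scores low, whereas a differentiated, integrated response resists compression and scores high.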

Academic critics such as scientists and ethicists, however, have underscored the notion that cerebral organoids do not (and never will) have the capacity for consciousness, owing to the lack of social interactions in their given environment [2] and the lack of a vascular system and other relevant cell types essential to forming brain connectivity, which limits the degree of neural computation, maturation, and functioning [16]. On the other hand, some members of the public think otherwise, arguing that cerebral organoids can have conscious experiences without being faithful to the real human brain [3]. It is also worth noting that misunderstandings of organoid technology are surfacing as a result of false media portrayals [2]. While there are no definitive answers to address the fears and worries surrounding organoids until further advances in stem cell biology are made, the problem of cerebral organoids centers on the ethics of this technology under the presumption that moral status is attributed. On the condition that consciousness does reside within an organoid even after it is separated from its human of origin, are tissue donors able to withdraw their consent [2]? Who has rightful ownership over the created organoids [2]? Such questions raise the need for practical models of informed consent, considering that in some cases donors may develop close ties with their respective cerebral organoids because of a genetically linked shared identity; this also implies a risk of privacy invasion [2]. Moreover, is it ethical to genetically manipulate cerebral organoids, whether by activating suicide genes that instigate programmed cell death or by expressing specific genes (e.g., MIXL1) that alter the fate of certain cells, in order to prevent the neural processes required for consciousness? Again, it is difficult to give definitive answers without methods that can reliably detect consciousness, or the lack thereof.

A conflict of interest also arises when the donor's intention is for the researcher to utilize the cerebral organoid for the welfare of others, but external third parties can purchase the organoid from companies that hold property rights over it (if it becomes patented) and thus receive profits [2], while the donor is most likely not compensated or reimbursed in any form. The commercialization of organoids, in general, becomes more ethically challenging when the party who wishes to buy the organoid is the donor himself or herself acting as a patient, because patient-derived, organ-specific organoids can be used as effective models in personalized medicine to help treat the patient and others with the same disease and genetic similarities [2]. This suggests that patients will become dependent on companies that can either grant or refuse them access to the organoid, though refusal may lead to public distrust in such companies [2]. On reflection, how much oversight and protection should be granted to cerebral organoids by research ethics review boards and legislation [2]? While the attribution of moral status to cerebral organoids is still under deliberation, the possibility of consciousness, even if far from fruition, should not be neglected when estimating the value of this technology and weighing its benefits against its risks and uncertainties.

In addition to cerebral organoids, another subtype of organoid is the gastruloid [2], which follows the same bioengineering protocols as cerebral organoids. What distinguishes the two entities is that, in lieu of recapitulating an entire human organ, gastruloids mimic the developmental process of human embryos in vitro [2]. Structurally, gastruloids are similar to human embryos in that the primitive streak and cells intrinsic to the three germ layers are indeed present in these bioartificial constructs [17]. Serving as an alternative to mammalian embryonic models [18], in vitro gastruloid models offer insights into the early evolutionary development of human embryos in contrast to other animal species, and they deepen our understanding of disorders linked to pregnancy loss during the first trimester [17].

The structural similarities of gastruloids with human embryos signify a paramount notion that was also applied to cerebral organoids: the potential to have moral status [2]. What constitutes an embryo is still a question without clear answers; however, it is well established that cultured human embryos may not be grown beyond 14 days [16], around the point at which gastrulation, the process by which a single-layered embryo reorganizes into a multilayered structure [19], begins on the path toward becoming a developing fetus. Gastruloids may take fewer than 14 days to reach an equivalent stage [17], so the attribution of moral status becomes controversial. Comparable to cerebral organoids, the ethical challenge that must be overcome is justifying the act of preventing gastruloids from becoming living beings, whether through gene knockout that halts the expression of sequences essential for complete embryonic development or through discarding the gastruloids outright [2]; either act eliminates the possibility for gastruloids to acquire or maintain life. In principle, the usage of human embryos in research is strongly discouraged or even prohibited in certain jurisdictions, but the moral concerns that PSC-, ASC-, or ESC-derived gastruloids share with cerebral organoids complicate their usage in the absence of clear ethical and legal frameworks regulating the extent of gastruloids' maturation [17].

One additional area of organoid research in need of ethical consideration is the farming of interspecies chimeras, living organisms created by combining cells from two or more species of different origins; for example, combining human cells with porcine cells to form a human-pig chimera [20]. Chimeras are useful in various areas of medicine, notably in surgical practice, as they offer a solution to the apparent undersupply of transplantable organs for surgeries. One example is the implantation of equine-derived heart valves into patients with dysfunctional cardiac valves [21]. Even though such transplants may stoke fears over the unpredictability of zoonotic diseases or xenogeneic organ rejection, gene editing technology (e.g., CRISPR/Cas9) can potentially silence the exact animal genes that cause such problems [20]. While ethical guidelines for chimerism vary from place to place [21], the injection of high concentrations of human neurons into a mouse brain to create neurological chimeras [22], for instance, has already prompted recommendations to limit the quantity of stem cells that can be derived from a human [2].

The farming of human-animal chimeras, in which an animal comes to possess human organs, moreover creates the issue of humanizing chimeric animals, as it involves transplanting human-derived organoids into the body of the recipient animal, potentially giving that animal human-like capacities such as consciousness. This possibility is demonstrated by implanting cerebral organoids into the central nervous system (CNS) of a host animal (e.g., a mouse), whereupon the CNS of the host animal has come to exhibit characteristics such as increased size, vascularization, and a vast differentiation of neuronal and non-neuronal cells, thereby offering a more representative model of the human brain [2]. Nevertheless, in certain cases, chimeric animals can become vulnerable to human viruses when human cells are introduced into their biological systems [21], which again raises the ambiguous question of whether the chimera bears some level of moral status [2], along with other nuanced ethical challenges that pertain to cerebral organoids and gastruloids, considering that consciousness is not uniquely a human characteristic [22]. Another way of humanizing chimeric animals is through implanting human gonadal organoids, which would result in the production of human gametes capable of reproduction, thereby creating human embryos [21]. The moral concern, in this case, lies in the potential for cross-species breeding and reproduction to form hybrid embryos [21]. And while it is not generally believed plausible for chimeras to acquire human-like characteristics such as consciousness [21], recent studies have shown that, in the microenvironment of a mouse, glial progenitor cells derived from a human were more competitive than the mouse's own glial progenitor cells in composing white matter [23]. Therefore, in terms of applications, the advantages and disadvantages of humanizing animals and animalizing humans should be closely monitored with respect to the moral permissibility of crossing the boundary between humans and animals.

Upon factoring in the ethics of organoid technology, the general consensus on the moral status of cerebral organoids, gastruloids, and chimeras in both research and clinical settings is still pending. It is a decision that cannot be made carelessly; the ethical challenges of such advanced stem cell technology, along with its potential development in the near future, should inform the construction of an appropriate framework for organoid usage. Ultimately, it is the moral responsibility of all parties involved to address the given challenges, for the solutions are deeply connected to the fundamental philosophical question of what it truly means to be human.

References

  1. Azar, J., Bahmad, H. F., Daher, D., Moubarak, M. M., Hadadeh, O., Monzer, A., ... & Abou-Kheir, W. (2021). The use of stem cell-derived organoids in disease modeling: an update. International Journal of Molecular Sciences, 22(14), 7667.

  2. de Jongh, D., Massey, E. K., & Bunnik, E. M. (2022). Organoids: a systematic review of ethical issues. Stem Cell Research & Therapy, 13(1), 1-21.

  3. Jeziorski, J., Brandt, R., Evans, J. H., Campana, W., Kalichman, M., Thompson, E., ... & Muotri, A. R. (2022, March). Brain organoids, consciousness, ethics and moral status. In Seminars in Cell & Developmental Biology. Academic Press.

  4. Voges, H. K., Mills, R. J., Elliott, D. A., Parton, R. G., Porrello, E. R., & Hudson, J. E. (2017). Development of a human cardiac organoid injury model reveals innate regenerative potential. Development, 144(6), 1118-1127.

  5. Xia, Y., Nivet, E., Sancho-Martinez, I., Gallegos, T., Suzuki, K., Okamura, D., ... & Belmonte, J. C. I. (2013). Directed differentiation of human pluripotent cells to ureteric bud kidney progenitor-like cells. Nature cell biology, 15(12), 1507-1515.

  6. Wilkinson, D. C., Alva-Ornelas, J. A., Sucre, J. M., Vijayaraj, P., Durra, A., Richardson, W., ... & Gomperts, B. N. (2017). Development of a three-dimensional bioengineering technology to generate lung tissue for personalized disease modeling. Stem cells translational medicine, 6(2), 622-633.

  7. Eiraku, M., Takata, N., Ishibashi, H., Kawada, M., Sakakura, E., Okuda, S., ... & Sasai, Y. (2011). Self-organizing optic-cup morphogenesis in three-dimensional culture. Nature, 472(7341), 51-56.

  8. Antonica, F., Kasprzyk, D. F., Opitz, R., Iacovino, M., Liao, X. H., Dumitrescu, A. M., ... & Costagliola, S. (2012). Generation of functional thyroid from embryonic stem cells. Nature, 491(7422), 66-71.

  9. Skloot, R. (2017). The immortal life of Henrietta Lacks. Broadway Paperbacks.

  10. Beskow, L. M. (2016). Lessons from HeLa cells: the ethics and policy of biospecimens. Annual review of genomics and human genetics, 17, 395.

  11. Kanton, S., & Paşca, S. P. (2022). Human assembloids. Development, 149(20), dev201120.

  12. Van Gulick, R. (2014, January 14). Consciousness. Stanford Encyclopedia of Philosophy. Retrieved January 15, 2023, from https://plato.stanford.edu/entries/consciousness/#RepThe

  13. Cohon, R. (2018, August 20). Hume's Moral Philosophy. Stanford Encyclopedia of Philosophy. Retrieved February 2, 2023, from https://plato.stanford.edu/archives/fall2018/entries/hume-moral/

  14. DeGrazia, D., & Millum, J. (2021). A Theory of Bioethics. Cambridge University Press.

  15. Sinitsyn, D. O., Poydasheva, A. G., Bakulin, I. S., Legostaeva, L. A., Iazeva, E. G., Sergeev, D. V., ... & Piradov, M. A. (2020). Detecting the potential for consciousness in unresponsive patients using the perturbational complexity index. Brain sciences, 10(12), 917.

  16. National Academies of Sciences, Engineering, and Medicine; Policy and Global Affairs; Committee on Science, Technology, and Law; Committee on Ethical, Legal, and Regulatory Issues Associated with Neural Chimeras and Organoids. (2021). The emerging field of human neural organoids, transplants, and chimeras: Science, ethics, and governance. National Academies Press.

  17. Munsie, M., Hyun, I., & Sugarman, J. (2017). Ethical issues in human organoid and gastruloid research. Development, 144(6), 942-945.

  18. Rossi, G., Giger, S., Hübscher, T., & Lutolf, M. P. (2022). Gastruloids as in vitro models of embryonic blood development with spatial and temporal resolution. Scientific Reports, 12(1), 1-12.

  19. Muhr, J., & Ackerman, K. M. (2020). Embryology, gastrulation. StatPearls Publishing.

  20. Koplin, J., & Wilkinson, D. (2019). Moral uncertainty and the farming of human-pig chimeras. Journal of Medical Ethics, 45(7), 440-446.

  21. Bourret, R., Martinez, E., Vialla, F., Giquel, C., Thonnat-Marin, A., & De Vos, J. (2016). Human–animal chimeras: ethical issues about farming chimeric animals bearing human organs. Stem cell research & therapy, 7(1), 1-7.

  22. Hyun, I. (2019). Ethical considerations for human–animal neurological chimera research: mouse models and beyond. The EMBO Journal, 38(21), e103331.

  23. Windrem, M. S., Schanz, S. J., Morrow, C., Munir, J., Chandler-Militello, D., Wang, S., & Goldman, S. A. (2014). A competitive advantage by neonatally engrafted human glial progenitors yields mice whose brains are chimeric for human glia. Journal of Neuroscience, 34(48), 16153-16161.



The Inadvertent Consequences Of Scanning The Human Brain

One inquiry of concern that has merited the attention of neuroscientists, clinicians, ethicists, policymakers, and the like over the past decades stems from the development and sophistication of advanced neuroimaging techniques. Neuroimaging has become a revolutionary tool at the frontier of neuroscience for both diagnostic and research purposes. Since the emergence of neuroimaging technology, physicians have employed structural imaging, such as magnetic resonance imaging (MRI) and computed axial tomography (CAT), to help visualize the anatomical structures of the brain, thereby facilitating diagnoses of neurologic or psychiatric conditions (e.g., Alzheimer's disease, brain cancer, schizophrenia) in patients as a supplement to genetic tests and other examinations [1]. These structural imaging modalities are often complemented by functional imaging modalities, which researchers frequently use to examine participants' brain activity in real time as they perform certain tasks or react to stimuli, in both healthy and diseased brains; examples of such modalities include, inter alia, functional magnetic resonance imaging (fMRI) for measuring blood flow, positron emission tomography (PET) for measuring metabolic activity [2], and magnetoencephalography (MEG) for measuring the magnetic fields produced by electrical currents in neurons [3].

Each neuroimaging technique has its own advantages and flaws; for example, the limited temporal resolution of fMRI is compensated for by the high temporal resolution of MEG in combined-modality studies, rendering more useful neuroimaging data [4]. In imaging studies that require a large pool of participants, fMRI offers a more favorable risk-benefit ratio, as it is less invasive than PET (which involves injecting radioactive contrast agents into participants) [2]. The capability of neuroimaging further extends to revealing one's unconscious thoughts [1] and evaluating one's susceptibility to a particular disease or behavior [2]. Yet for each capability rendered by neuroimaging, there are inadvertent consequences worthy of neuroethical analysis in relation to medical, social, and legal issues and challenges.

A prevalent occurrence in research studies (as well as in clinical practice) involving human subjects undergoing neuroimaging procedures is the unintended discovery of incidental findings: neurological anomalies that are unrelated to the objective of the brain scan [5] but may carry clinical significance (e.g., the detection of asymptomatic brain tumors or cysts). Depending on a multitude of factors, such as participants' age and the region of the body being scanned, the likelihood of incidental findings ranges from approximately 2% of imaging scans at the lower extreme to 47% at the higher extreme [6]. Under such circumstances, the appropriate management of incidental findings, especially in brain scans, poses ethical dilemmas during, say, the process of informed consent, or when deciding whether a detected incidental finding should be disclosed to the subject.

It can be agreed that it is ethically sensible to report incidental findings to research participants for their own health benefit; the story of Sarah Hilgenberg is one prime example. When Hilgenberg was a medical student at Stanford, she participated in an fMRI study on learning and memory. It was later revealed that Hilgenberg had malformed connections between the blood vessels in her brain, a condition known as arteriovenous malformation, which could lead to bleeding under high blood pressure and further complications if left untreated. Hilgenberg claimed that this incidental finding positively affected her life, and she felt grateful in retrospect [7]. While incidental findings are revealed in the interest of the participant, such disclosures may also act as a psychological burden on the person. In one case, an anonymous correspondent wrote to the journal Nature, stating how he was informed of a tumor in his brain upon volunteering to help test a new MRI machine. In the aftermath, the individual had to undergo neurosurgery to remove the tumor, but it cost him and his family everything financially, because his knowledge of the scan results adversely affected his medical insurability [8]. In other cases, incidental findings may also prompt unnecessary treatment when a clinically insignificant finding is perceived to be significant, or in the event of false positive results [2], any of which may cause severe emotional and financial distress for the participant.

Disclosing incidental findings, therefore, can have negative implications from the perspective of participants, even though one study's data suggested that over 90% of participants wished to be informed in the event of incidental findings [9]. Given the uncertainties associated with incidental findings over the course of neuroimaging studies, transparent protocols (e.g., facilitating direct communication with the participant, reporting incidental findings to a research ethics review board) should be implemented beginning with the process of informed consent to ease tensions among participants [10]. It is the researcher's moral and legal obligation to manage disclosures and non-disclosures while respecting participants' autonomy and interests and taking into account the clinical relevance of the incidental finding(s).

On top of incidental findings, the wealth of neuroimaging databases and discoveries in social neuroscience experiments has made thought privacy and confidentiality integral issues in the ethics of neuroimaging. Intimate and personal thought processes distinctive to an individual can be captured during neuroimaging to yield brain activation profiles. Individual thought privacy is thus at risk, as researchers in past studies were able to pinpoint the neural correlates of cooperation, rejection, moral judgment, and implicit racial biases [11,12,13,14], among a myriad of other human social behaviors grounded in personal thoughts and the subconscious mind. The Human Brain Project and the International Consortium for Brain Mapping (ICBM) are just a few of the large-scale research initiatives that possess archives of neuroimaging databases in which data can be easily accessed and shared [2]. Data confidentiality is thus an ethical concern, for researchers could identify individuals based on their corresponding neuroprofiles, mirroring the concerns over how genomic data are handled. Neuroimaging data could also be commercialized by for-profit sectors, since there are neural correlates of economic decision-making among consumers, which companies heavily exploit as a neuromarketing strategy that puts consumers' confidentiality at severe risk [4]. As a result, individuals have reason to prevent traceability by securing their cognitive liberty, the right to autonomy over one's own mental processes [15]. The idea of cognitive liberty is often raised against neuroimaging because of a prevalent Orwellian fear: those who can probe into our brains can act as a 'Big Brother' figure and seek contents that the subject is unaware of or does not wish to be known.

One prominent application that emphasizes the medical implications of neuroimaging is predicting the onset of neurologic or psychiatric disorders prior to the appearance of overt symptoms [4]. By identifying potential biomarkers or recognizing regional brain activation patterns or neural signatures characteristic of the pathological mechanisms of a disease, one's risk of developing that disease can be probabilistically assessed [2]. However, one problem with predictive neuroimaging is that probabilistic data are not strictly reliable, given the complex and plastic nature of the brain, and covariance between imaging data and human behavior does not establish causality when external factors like culture play a role [4]. In addition, many disease pathologies are not yet well understood, which could lead to the premature application of neuroimaging techniques in this regard [2]. Furthermore, while predictive neuroimaging may encourage early disease prevention through therapeutic interventions, individuals at risk of a prognosed disease may be subjected to societal stigma and discrimination [2], as is oftentimes seen with respect to vulnerable populations and minority identities.

The underlying cause of this social issue of stigma and discrimination can be attributed to the popular media's tendency to oversimplify and overinterpret neuroimaging data through reductionist and pessimistic stances in a manner that supports deterministic beliefs [2]. The media's portrayal of our essence as grounded in neuroimaging results, a view known as 'neuroessentialism', can effectively shape public perception toward stigmatizing and discriminatory tendencies and thereby ostracize those with neurologic or psychiatric conditions [2]. This further creates a divide between what may be perceived as "superior" and "inferior" brain characteristics and promotes a non-moralistic view of human nature such as fatalistic biological determinism [2], the notion that our physiology or genes are the roots of our behavior and personality [16]. Hence, a misunderstanding of neuroimaging data could be damaging for vulnerable populations, and the burden of having such knowledge from predictive neuroimaging should be carefully communicated to those who may be at risk. Otherwise, access to insurance, education, and employment may be limited or denied for individuals living with an invisible disease [17].

Neuroimaging data can also be used in the criminal justice system, where growing reliance on such data may result in legal complications. With the gradual improvement of neuroimaging techniques, brain scans have come to be used as admissible evidence in court cases related to murder convictions, for example, when questions regarding the sanity of the convicted individual are raised (which often necessitates psychiatric assistance) [17]. To illustrate, in the 1992 New York Supreme Court case People v. Weinstein, the defendant Herbert Weinstein was charged with second-degree murder for killing his wife. Yet PET scans revealed a psychiatric defect: an arachnoid cyst in Weinstein's brain that may have influenced him to act violently; ultimately, Weinstein's murder charge was resolved with a manslaughter plea [17]. Hence, whether a convicted individual should be exculpated becomes disputable when neuroimaging suggests that individuals can be predisposed to committing aggressive and recidivist acts such as homicide as a consequence of brain damage or even childhood abuse and neglect [4]. Indeed, neuroimaging studies have shown that individuals with impaired prefrontal cortical functioning demonstrate a diminished ability to control impulses, assess risks, and empathize, behavioral characteristics that are common among criminal offenders [4].

Another implementation of neuroimaging in forensic science is a practice known as brain fingerprinting, in which an individual's brain waves (the P300) are measured using electroencephalography (EEG) in response to information and facts relevant to the case in question [4]. The stimuli are presented at millisecond-scale intervals to test for P300 wave responses [18]. Arguably, neuroimaging may help discern "truths from lies or real memories from false ones," according to lawyer and law professor Henry Greely [17]. Such was the case for Terry Harrington, who was convicted of murdering a retired police officer in 1977 and thereafter sentenced to prison after the verdict in the Iowa Supreme Court case Iowa v. Harrington [18]. However, Harrington's murder conviction was later reversed after he underwent brain fingerprinting in 2000 with negative results, in which no P300 wave signals were evoked in response to details related to his case [18]. Of note, however, is that using neuroimaging for lie detection does not assure the accuracy promised, and rates of error are uncertain [19].
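The signal-processing logic behind such a test can be sketched in miniature: EEG epochs time-locked to each stimulus are averaged, and the mean amplitude in the window roughly 300-500 ms after stimulus onset (where a P300 is expected for recognized, case-relevant "probe" items) is compared between probe and irrelevant stimuli. The sketch below runs on synthetic data; the sampling rate, window bounds, and function names are illustrative assumptions, not the protocol used in the Harrington case.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz (illustrative)

def average_erp(eeg, onsets, fs=FS, tmax=0.8):
    """Average stimulus-locked epochs from a single-channel EEG trace."""
    n = int(tmax * fs)
    epochs = np.stack([eeg[o:o + n] for o in onsets if o + n <= len(eeg)])
    return epochs.mean(axis=0)

def p300_amplitude(erp, fs=FS, window=(0.3, 0.5)):
    """Mean amplitude in the 300-500 ms post-stimulus window, where a
    P300 deflection is expected for recognized (probe) items."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    return erp[lo:hi].mean()

# Synthetic demo: probe stimuli evoke a positive deflection near 400 ms;
# irrelevant stimuli ride on the same background noise without one.
rng = np.random.default_rng(2)
eeg = rng.standard_normal(60 * FS) * 2.0            # 60 s of noise (a.u.)
probe_onsets = np.arange(FS, 50 * FS, 5 * FS)       # probe stimulus times
irrel_onsets = probe_onsets + 2 * FS                # interleaved irrelevants
t = np.arange(int(0.8 * FS)) / FS
bump = 5.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))  # simulated P300
for o in probe_onsets:
    eeg[o:o + len(t)] += bump

probe_erp = average_erp(eeg, probe_onsets)
irrel_erp = average_erp(eeg, irrel_onsets)
print(p300_amplitude(probe_erp) > p300_amplitude(irrel_erp))  # True
```

Real concealed-information tests add statistical safeguards, such as bootstrapped comparisons across many trials, before drawing any conclusion; a single amplitude comparison of this kind would never suffice as evidence.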

While it is crucial to consider mental defects and diseases in convicted individuals, to what extent is the linkage between brain abnormality and criminal behavior sufficient to prove someone free of legal responsibility? How reliable and valid ought neuroimaging data to be before being used as admissible evidence in court? Does forcibly imaging the brain of an offender in search of incriminating memories or other information violate the 4th and 5th Amendments to the U.S. Constitution [19]? The fine line that distinguishes innocence from guilt in such cases is blurred by the foregoing concerns, and the ethics surrounding the usage of neuroimaging in the judiciary process remains to be scrutinized.

The key to an ethical framework for neuroimaging lies in the proper interpretation and application of neuroimaging data, with the technology's many limitations kept in view. Thus far, neuroimaging has yet to reach the level of direct mind-reading, but it offers many inferences into our innermost thoughts and psychological traits. Neuroimaging can be pervasive in many domains of our lives beyond medically diagnosing and treating patients, chiefly in influencing societal viewpoints and legal decisions that could have unforeseen ramifications, even to the point of putting individual lives and values in their most vulnerable state.

References

  1. Brašić, J. R., & Mohamed, M. (2014). Human brain imaging of autism spectrum disorders. In Imaging of the human brain in health and disease (pp. 373-406). Academic Press.

  2. Racine, E., & Illes, J. (2007). Emerging ethical challenges in advanced neuroimaging research: review, recommendations and research agenda. Journal of Empirical Research on Human Research Ethics, 2(2), 1-10.

  3. Singh, S. P. (2014). Magnetoencephalography: basic principles. Annals of Indian Academy of Neurology, 17(Suppl 1), S107.

  4. Fuchs, T. (2006). Ethical issues in neuroscience. Current opinion in psychiatry, 19(6), 600-607.

  5. Shaw, R. L., Senior, C., Peel, E., Cooke, R., & Donnelly, L. S. (2008). Ethical issues in neuroimaging health research: an IPA study with research participants. Journal of health psychology, 13(8), 1051-1059.

  6. Bomhof, C. H., Van Bodegom, L., Vernooij, M. W., Pinxten, W., De Beaufort, I. D., & Bunnik, E. M. (2020). The impact of incidental findings detected during brain imaging on research participants of the Rotterdam study: an interview study. Cambridge Quarterly of Healthcare Ethics, 29(4), 542-556.

  7. Hilgenberg, S. (2005). Formation, malformation, and transformation: my experience as medical student and patient. Stanford Med Student Clin J, 9, 22-5.

  8. Anonymous. (2005). How volunteering for an MRI scan changed my life. Nature, 434(7029), 17.

  9. Kirschen, M. P., Jaworska, A., & Illes, J. (2006). Subjects' expectations in neuroimaging research. Journal of Magnetic Resonance Imaging: An Official Journal of the International Society for Magnetic Resonance in Medicine, 23(2), 205-209.

  10. Leung, L. (2013). Incidental findings in neuroimaging: Ethical and medicolegal considerations. Neuroscience Journal, 2013.

  11. Decety, J., Jackson, P. L., Sommerville, J. A., Chaminade, T., & Meltzoff, A. N. (2004). The neural bases of cooperation and competition: an fMRI investigation. Neuroimage, 23(2), 744-751.

  12. Eisenberger, N. I., Lieberman, M. D., & Williams, K. D. (2003). Does rejection hurt? An fMRI study of social exclusion. Science, 302(5643), 290-292.

  13. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108.

  14. Lieberman, M. D., Hariri, A., Jarcho, J. M., Eisenberger, N. I., & Bookheimer, S. Y. (2005). An fMRI investigation of race-related amygdala activity in African-American and Caucasian-American individuals. Nature neuroscience, 8(6), 720-722.

  15. Sententia, W. (2004). Neuroethical considerations: cognitive liberty and converging technologies for improving human cognition. Annals of the New York Academy of Sciences, 1013(1), 221-228.

  16. Allen, G. E. (1984). The roots of biological determinism.

  17. Illes, J., & Racine, E. (2005). Imaging or imagining? A neuroethics challenge informed by genetics. The American Journal of Bioethics, 5(2), 5-18.

  18. Thompson, P., Cannon, T. D., Narr, K. L., Van Erp, T., Poutanen, V. P., Huttunen, M., & Toga, A. W. (2001). Forensic neuroscience on trial. Nature neuroscience, 4(1), 1-1.

  19. Roskies, A. (2021, March 3). Neuroethics. Stanford Encyclopedia of Philosophy. Retrieved January 8, 2023, from https://plato.stanford.edu/entries/neuroethics/


Towards a Brave New World: The Huxleyan Reality of Using Pharmacological Neuroenhancement

Among all possible contingencies, there has been an evident progression toward the dystopian future foretold by visionary writers, Aldous Huxley in particular. Huxley's 1932 novel Brave New World anticipated the present-day epidemic of pharmacological neuroenhancement: the use of legal or illicit drugs to improve cognitive and affective functioning (i.e., attention, memory, mood, etc.) [1]. But at what cost? As portrayed in the novel, the drug "soma" induces feelings of happiness while nullifying any kind of discomfort and pain [2]. Such quixotic concepts have been exploited as a form of enhancement to construct the best version of our brains, especially in the postmodern era [3]. The use of pharmacological neuroenhancement, synonymously known as brain doping, by healthy subjects for the non-medical purpose of enhancing performance has long been common practice, traceable to the ancient Greeks, who used such enhancement when competing in the Olympic Games [4]. Yet intense polarization has been ceaselessly ongoing between transhumanists and bioconservatives in the neuroenhancement debate.


While transhumanists argue in favor of utilizing pharmacological neuroenhancement, alongside genetic engineering, to increase human cognitive abilities and thereby radically change the way our species develops, bioconservatives are skeptical of such propositions and dismiss the notion that modifying our natural intelligence could be morally permissible. In their view, bioconservatives criticize the transhumanist ideal of perfection and the hubristic ambition to override the inherent principles of nature, echoing Huxley's warning that the perturbing influence of eugenics-inspired biotechnologies must be resisted. It is therefore imperative to critically examine the philosophical basis of both stances in relation to the ethics of using pharmacological neuroenhancement and to ultimately unveil the truth of the Huxleyan reality, the nightmarish, oppressive vision that Huxley once envisaged and feared.


In utilitarian terms, an alluring promise of the effective use of pharmacological neuroenhancement is increased cognitive functioning, which transiently improves performance in a given activity such as learning and memorization [7]. One familiar example is the use of psychostimulants like methylphenidate (brand name: Ritalin) among college and university students. Taking such neuroenhancement drugs leads to improved working memory and increased attention, which reportedly helps students study for longer hours and more productively, with reduced anxiety. Of note, however, is that the stimulant was originally meant to be prescribed to treat individuals with ADHD [7], not to augment one's cognition for the benefit of improving the human cognitive condition [9]. All progress throughout the course of human civilization is rooted in the fundamental belief in climbing to the next step in the evolutionary hierarchy, which has motivated the urge to intervene in and manipulate the systematic complexities of the human mind for the sake of continued progress. Were pharmacological neuroenhancement widely used to increase cognitive abilities, an intelligence-based hierarchy modeled after the social infrastructure of the Huxleyan reality would thus be born [2]. Living under this type of hierarchy implies that even an individual who would rather not take neuroenhancement drugs would nevertheless be pressured to take them in an increasingly competitive environment, to remain on par with someone already conditioned to take the drug on a regular basis [5]. The aftermath then poses the problem: does the individual still retain his or her autonomy?


Calling transhumanism "the world's most dangerous idea," the political scientist Francis Fukuyama suggests that in giving in to the temptations of transhumanist thought, such as the goal of enhanced cognition, the impulse to buy into such ideas often blinds the individual to the actual price [10]. Notwithstanding the utopian ideal, it is worth noting the health risks, along with the addictive effects, associated with taking neuroenhancement drugs, of which many users are oblivious [11]. By taking neuroenhancement drugs out of their medical context for unintended purposes, such as serving as a study aid, the thin line that distinguishes enhancement from treatment is blurred. In doing so, the individual becomes vulnerable to, among other things, increased blood pressure, nausea, and heart palpitations from the altered brain chemistry induced by psychostimulants [12]. Ironically, attempted memory enhancement may even impair memory retrieval by interfering with the balance between remembering and forgetting, owing to information overload in the higher-order capacities of the brain [3]. Hence, while it is reasonably acceptable for individuals with mental illnesses or cognitive deficiencies to take such medications to reinstate normal brain functioning, the inverse holds in the case of enhancement, where safety is a primary ethical concern.


A supporting argument used to oppose bioconservatism is that pharmacological neuroenhancement promotes fairness and equity, notably for individuals who may be disadvantaged in some way relative to others in contexts such as applying to schools or finding employment [13]. Intervention by unnatural, pharmacological means would thus allow underperforming individuals to improve their capabilities by elevating their lower cognitive functioning to the normal baseline [5]. However, this argument fails to take into account the socio-economic barriers that influence how justly neuroenhancement drugs would be distributed among a given population [13]. Problems would arise as disparities become apparent, considering that neuroenhancement drugs may not be affordable for everyone, especially the lower classes and those without health insurance coverage. Hypothetically, even if neuroenhancement drugs were made universally accessible through public policy solutions or government mandates, ethical concerns could be raised about losing one's liberty as a direct result of being coerced into taking them. The likely outcomes of this hypothetical scenario were already reflected in the vaccine mandates of the COVID-19 pandemic, when some individuals expressed antagonism toward such mandates to protect their liberty and personal choice [5]. In the Huxleyan reality, however, liberty simply does not prevail. While coercion may be justified in the context of vaccine mandates for disease prevention, this hypothetical scenario, as it applies to enhancement, supports the bioconservative argument by resembling the events detailed in Brave New World, in which the citizens of Huxley's dystopia all became mentally compromised by virtue of taking soma as administered by the World State [2]. Hence, when the cause for coercion is unjustifiable, it becomes worth pondering the question: is there any degree of freedom in our actions when under the influence of neuroenhancement drugs?


Bioconservatives, moreover, reinforce their objections to pharmacological neuroenhancement with rationales beyond the preceding arguments about safety and fairness, rationales that reveal the psychological cost of its use. Imagine a baseball player taking performance-enhancing drugs such as steroids to increase his competitiveness on the playing field, with the result that he hits ten home runs in a single game. Contrast this with a baseball player who accomplishes the same feat without drugs, through rigorous training and real effort [13]. The questions now present themselves: which of the two players is less deserving of his achievement and unworthy of praise? Does integrity not matter? The answers, of course, are obvious. Our intuition is capable of telling us what is morally responsible and what is not. In the case of the player who used performance-enhancing drugs, the erosion of human agency rendered his achievement "hollow" and his character morally "defective," supplemented by the loss of personal dignity and overall humanity according to the foundational ethical framework that governs our morals and values [14]. At the end of the day, after all, the credit goes not to the baseball player but to the pharmacist who prescribed the drug or the dealer who sold it.


The hubris objection, as illustrated by the foregoing example, denounces the transhumanist tendency to undermine intrinsic values such as grit and tenacity and to underappreciate the "gifted" character and powers we derive from nature, and it further rebukes the Promethean desire to dominate the natural order [15]. This objection was originally formulated by the political philosopher Michael Sandel and supported by the physician-scientist Leon Kass, who added to Sandel's argument of "giftedness" by emphasizing the dire consequences of substituting pharmacological effects for our moral virtues. Consider this thought experiment: if drugs originally meant to treat patients with PTSD became easily accessible to anyone, how many people without mental illnesses would be willing to take them to prevent bad memories from consolidating in their minds? While the feelings of trauma or sorrow from an unfortunate experience are undesirable, taking drugs to block out those feelings and the associated painful memories would hinder our ability to develop the mechanisms needed to cope with the myriad negative feelings that are essential to our psychology [16]. In consequence, we would lose our self-control, miss the opportunity for personal growth, and fail to integrate the adversities of our lives into our character development. We would no longer feel the sharpness of pain, but would we still retain the courage to stand up and walk forward? This question recurs in the Huxleyan reality, in which the character John "the Savage" must claim the right to be unhappy in a society where unhappiness is not a realized concept [2]. When an individual takes soma to revel in pleasure and escape disillusionment, but in exchange is forever trapped in a peaceful fantasy that masks the cruel reality, where does real happiness lie? To John, real happiness lies in enduring excruciating pain while knowing that it will all be worthwhile, like acing a test after hours spent studying or winning a game after all the effort spent in practice. Thus, it is within reason that taking shortcuts via artificial means offers no meaning in our pursuit of accomplishments, irrespective of their scale, and such conduct only pathologizes our imperfections.


Living with dissonance and the unanticipated is a precondition to appreciating what life has already given us and its beauty, thus enabling us to live a flourishing, fulfilling life without external interventions that could disfigure the relationship we have with ourselves and restrict our exercise of freedom. In reference to the words of John, who chose to be unhappy, we can cherish the notion that unhappiness is bliss, for it ensures that we still preserve our individual identities and the autonomy to taste the flavors of life: "I don't want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin" [2]. While the endeavor to advance the evolution of human cognition is appealing, one must be cautious of the dubious means represented by the unnaturalness of pharmacological neuroenhancement, for it is irresponsible to toy with our very own subjective experiences and personality traits in the midst of such ambiguities [3]. Let it be known that the aspiration to enhance our cognition may leave us unable to achieve a complete understanding of ourselves and may ultimately lead to the downfall of our own humanity; even more so, it is at odds with our inalienable human rights, and the happiness we would find through pharmacological neuroenhancement is only illusory under this fallacy. The Huxleyan reality of pharmacological neuroenhancement is not an inevitable future; it can be averted if we first consider the question: what am I willing to lose?


References


  1. Heller, S., Tibubos, A. N., Hoff, T. A., Werner, A. M., Reichel, J. L., Mülder, L. M., ... & Dietz, P. (2022). Potential risk groups and psychological, psychosocial, and health behavioral predictors of pharmacological neuroenhancement among university students in Germany. Scientific reports, 12(1), 1-10.

  2. Huxley, A. (1998). Brave new world. HarperPerennial. 

  3. Fuchs, T. (2006). Ethical issues in neuroscience. Current opinion in psychiatry, 19(6), 600-607.

  4. Bowers, L. D. (1998). Athletic drug testing. Clinics in sports medicine, 17(2), 299-318.

  5. Roskies, A. (2021, March 3). Neuroethics. Stanford Encyclopedia of Philosophy. Retrieved October 21, 2022, from https://plato.stanford.edu/entries/neuroethics/ 

  6. Sandel, M. J. (2009). The case against perfection: Ethics in the age of genetic engineering. The Belknap Press of Harvard University Press. 

  7. Schleim, S., & Quednow, B. B. (2018). How realistic are the scientific assumptions of the neuroenhancement debate? Assessing the pharmacological optimism and neuroenhancement prevalence hypotheses. Frontiers in Pharmacology, 9, 3.

  8. Marazziti, D., Avella, M. T., Ivaldi, T., Palermo, S., Massa, L., Della Vecchia, A., ... & Mucci, F. (2021). Neuroenhancement: state of the art and future perspectives. Clinical Neuropsychiatry, 18(3), 137.

  9. Liszka, J. (2021). Pragmatism and the Ethic of Meliorism. European Journal of Pragmatism and American Philosophy, 13(XIII-2).

  10. McNamee, M. J., & Edwards, S. D. (2006). Transhumanism, medical technology and slippery slopes. Journal of Medical Ethics, 32(9), 513-518.

  11. Shipman, M. (2019, May 8). The ethics and challenges surrounding neuroenhancement. NC State News. Retrieved October 22, 2022, from https://news.ncsu.edu/2019/05/neuroenhancement-ethics-challenges/ 

  12. Nootropics. Cognitive enhancers - Alcohol and Drug Foundation. (n.d.). Retrieved October 22, 2022, from https://adf.org.au/drug-facts/cognitive-enhancers/ 

  13. Forlini, C., & Hall, W. (2016). The is and ought of the ethics of neuroenhancement: mind the gap. Frontiers in Psychology, 6, 1998.

  14. Faber, N. S., Savulescu, J., & Douglas, T. (2016). Why is cognitive enhancement deemed unacceptable? The role of fairness, deservingness, and hollow achievements. Frontiers in Psychology, 7, 232.

  15. The President's Council on Bioethics: What's wrong with enhancement? (n.d.). Retrieved October 22, 2022, from https://bioethicsarchive.georgetown.edu/pcbe/background/sandelpaper.html

  16. Kass, L. R. (2003). Ageless bodies, happy souls: biotechnology and the pursuit of perfection. The New Atlantis, (1), 9-28.
