Embodied cognition: what it means to “Throw Like a Girl”

While I tell myself now that I’m just “not the athletic type,” the reality is that I used to be. Back in middle school, I recall actually really enjoying track and field, basketball, and soccer. But the age at which young girls enter peak physical shape also marks a time in which ill-rehearsed gender roles begin to cement themselves and become immutable features of our social repertoire.

The rehearsal of these gender performances runs deep enough to mold even our most basic bodily movements. In “Throwing Like a Girl,” Iris Young dissects the work of two of my favourite philosophers: the notorious SdB (Simone de Beauvoir) and Maurice Merleau-Ponty (who was, coincidentally, de Beauvoir’s first boyfriend).

You’re probably already familiar with de Beauvoir’s The Second Sex, in which she describes some of the structural biological differences between men and women that have perhaps contributed to female oppression.

Maurice Merleau-Ponty, also a preeminent phenomenologist at the time, argued mostly for the primacy of embodiment – meaning that any sweeping claims about the nature of the external universe must first take into account our physical bodies and how they move, perceive, sense, and interact with the outer world. He would argue that if you want to study consciousness, you can’t only study the brain – you must understand the basic sensorimotor phenomena which feed the brain everything it knows.

The concept of embodied cognition is taking off in cognitive neuroscience these days. The embodiment thesis suggests that “many features of cognition are embodied in that they are deeply dependent upon characteristics of the physical body of an agent, such that the agent’s beyond-the-brain body plays a significant causal role, or a physically constitutive role, in that agent’s cognitive processing.”

This is, of course, a hot topic in AI research; Alan Turing himself said that in order for AI to think and speak like a human, it would probably not only need heavy-duty cognitive processing power… it would require fully human-like peripheral sensorimotor capabilities, as well.

In any case, Young’s hypothesis suggests that over and above the purely physical, genetically encoded differences between men and women, one’s being a woman prevents her from achieving her full physical potential because, to some degree, she is constantly engaging in self-objectification.

When, for instance, a ball is thrown to me, I have a tendency to think that it is being thrown at me, and will then run, duck, and hide, instead of trying to catch it. (I was never really one for heading the high balls in soccer.) Or when girls learn to ride bikes or ski, they often see themselves as the object of a motion, rather than its originator.

I’ve often wondered why horseback riding is such a female-dominated sport, and Young’s essay shed light on this for me. The classic Freudian explanation is that young girls crave the ability to tame and control a wild, powerful animal (representative of the id); the horse becomes an extension of the rider’s willpower, such that the ego draws power from the id in the act of controlling it. Another classic Freudian analysis would be the innate sexuality of the physical vibrations for little girls. Yet another is the idea that young girls (lacking a phallus, and therefore hopelessly incomplete) are attracted to and wish to possess powerful, phallic creatures like horses.

Young’s theory might explain this innate draw in a much more consistent (and less misogynistic) way. If it’s true that women have a tendency to see themselves relationally (in relation to the features of their physical environment), then horseback riding is quite naturally appealing to young girls, being one of the only solo sports in which one must position oneself in relation to another agent (the horse) in order to succeed. Contrast this sport to something like dirt-bike riding or motocross: quite similar in concept (control a thing, make it go as fast as possible), and yet you’d be hard-pressed to find a single girl in the top 100 motocrossers in the world. This is because when girls learn to ride a dirt-bike, they see themselves as the object of the motion, rather than its originator; whereas horseback riding forces them to position themselves in relation to the horse. To be a good horseback rider, you have to understand the horse’s wants, coax it to change direction, and praise it constantly for achieving your ends. This is, of course, in addition to all the other good analyses about animals and women and our innate penchant for empathizing with creatures of nature.

Young’s paper also touches on the almighty male gaze and its effects on our bodily comportment. In my track and field days, I can definitely recall girls holding back in shot put simply because they were afraid of how it would look to other people (way too masculine a sport). Not to mention all the girls who run away from the frisbee instead of trying to catch it because they don’t want to get hit. Her hypothesis is elegant and complete: women are used to being looked at and acted upon, like objects rather than subjects. And this conditioning changes even our most fundamental cognitive processing at a basic, sensorimotor level. We see ourselves in relation to the objects of our environment, rather than as free and autonomous agents within it.

As a result, we are confined by our own bodies. Rather than seeing them as avatars through which we can achieve our ends, we view them as hindrances. We seldom throw a ball or lift a box with our full weight and corporeal potential because we are conditioned to believe we are delicate objects and to constantly second-guess our chances at success.

The cognitive component to this argument cannot be overstated. The way we understand and control our own bodies is crucial to the way that we think and perceive the world around us. Think of the recent neuroscience papers which demonstrate that visual processing is modulated by active flight in drosophila (fruit flies). This could essentially show how the very act of moving and dynamically controlling our bodies changes the way we visually process the world in real-time. Recent fMRI studies have also shown how the planning and perception of actions may be influenced by one’s body posture. Popular science outlets have moreover found clickbait fodder in studies showing that “power posing” (assuming a broad, upright posture of dominance) produced lower cortisol and higher testosterone levels in both men and women, leading to socially advantageous behavioral changes. The implications of this particular finding are disturbing when paired with the observation that women are rewarded for taking up as little physical space as possible in social situations, both literally and metaphorically. I shudder to think of the neurological effects that being taught to “sit like a lady,” “walk like a woman,” and “throw like a girl” has on the way a woman perceives her sense of self, her body, and the obstacles and challenges in her environment. It scares me to think that these effects might be so deep-rooted as to impact us at the level of primary sensory perception, sensorimotor modulation, and downstream cognition.

At a very basic cognitive level, Young suggests that young girls are typically conditioned to be “field-dependent” learners, as opposed to their field-independent male counterparts. The idea of field dependence as a cognitive style was one of the earliest of its kind, advanced by Herman Witkin in the 1960s, and the support for this gendered trend is quite robust. Women perceive themselves to be more or less continuous with their spatial environment, while men perceive themselves as free, central agents within it. Unsurprisingly, this cognitive style suggests women are more heavily context-driven; female performance in a clerical task seemed to be far more affected by whether their examiner seemed “approving” or “disapproving” throughout, reflecting a “greater attentiveness to the attitudes of those around them.”

Fascinating essays like Young’s make me certain that as the themes of neuroethics expand to encompass those of neuroexistentialism, feminist theory will play an increasingly prominent role.

Naming the devil: The mental health double bind

(Published on The Neuroethics Blog)

The “Bell Let’s Talk” initiative swept through Canada on January 27, hoping to end the stigma associated with mental illness, one text and one share at a time. Michael Landsberg shares his thoughts in a short video on the Facebook page. “The stigma exists because fundamentally there’s a feeling in this country still that depression is more of a weakness than a sickness,” he explains. “People use the word depression all the time to describe a bad time in their life, a down time. But that’s very different than the illness itself.” Perhaps such a bold statement merits closer examination.
Philosophers, psychologists, and neuroscientists find themselves rallying behind two starkly contrasting paradigms of mental health, lobbying for conflicting changes in policy and attitude. On one end of the spectrum lies the medical model of psychiatry – the notion that the classification of mental illness can and ought to be truly objective, scientific, and devoid of value judgements. At the other extreme, a Foucault-esque theory posits that most psychiatric classifications are nothing more than a reflection of the values of those who do the classifying; classification is inherently normative and necessarily serves the interests of those in power.

Most modern paradigms take a more moderate approach, arguing that classification is based on both objective facts about the body and elements of normativity, but that diagnoses are useful nonetheless and do ultimately describe “real” illnesses. Nevertheless, the push and pull of each extreme keeps our current societal approach to mental illness in an uncomfortable double bind. In an over-medicalized paradigm, where we prescribe anti-depressants for those going through financial or relationship crises, we risk prescribing inauthentic neurobiological fixes to the suffering caused by complex social problems. But in an under-medicalized paradigm, we risk inadequately addressing the suffering caused by treatable neurobiological anomalies, under the pretense of total social relativism (more on the issues surrounding naming mental illness here).

For instance, in favour of de-medicalization, the neurodiversity movement (see previous blog posts on the topic here, here, and here) quite reasonably suggests that conditions like autism ought not to be considered disorders, but rather alternative ways of thinking. Society can holistically benefit from including and adjusting to diverse modes of thought, rather than attempting to change autistic individuals to fit the mold (see also: philosopher Ian Hacking’s “looping effect,” which might describe the way in which the very act of being diagnosed with a Diagnostic and Statistical Manual (DSM)-classified mental disorder can alter one’s self- and public perception of the condition, creating an “otherness” where it ought not exist).


[Image: Interpreting physical illness vs. mental illness, courtesy of BuzzFeed]
Similarly, the categorization and naming of mental disorders can be damaging to ethnic minorities, women, and the socioeconomically oppressed. Naming compels individuals to misattribute the suffering caused by societal structures to problems intrinsic to their own bodies and brains, and prevents marginalized individuals from seeing the reality of their greater social context, thereby legitimizing and perpetuating harmful social structures. For instance, so-called “Self-Defeating Personality Disorder” (SDPD) was introduced in the DSM-III-R in 1987, describing criteria which closely mirrored traditional feminine submissiveness in the context of domestic abuse. An individual with SDPD “Chooses people and situations that lead to disappointment, failure, or mistreatment even when better options are clearly available … Engages in excessive self-sacrifice that is unsolicited by the intended recipients of the sacrifice.” It was subsequently excluded from the DSM-IV in recognition that symptoms of abuse are primarily caused by male abusers, and that misguided medical diagnoses can have profoundly damaging effects on the already socially marginalized.


The naming of mental disorders is much more socially relative than that of physical disorders. And yet in some cases, the comparison between mental and physical disorders can have incredibly beneficial impacts on mental health discourse. Consider the message of the simple yet effective #BellLetsTalk campaign, or BuzzFeed’s recent pieces on mental illness (exemplified by this video and this listicle). In promoting a liberal stance on mental health in popular discourse, popular media frequently draw on the comparison between mental and physical disorders to reveal contradictory attitudes and social policies. This comparison inherently medicalizes mental health, but to the effect of taking mental illness more seriously, with arguably positive outcomes for de-stigmatization and patient care. In positing that the brain, like the kidney or any other organ, can malfunction and “get sick” for periods during one’s life (as is said to occur during some episodes of depression or mania), we classify mental disorders into discrete categories, in the same, dispassionate way one might be diagnosed with a stomach ulcer.


In diminishing the stigma surrounding mental disorders to match that of mundane physical illnesses, the medical classification of mental illness might provide individuals with the emotional detachment needed to seek appropriate help, whether in the form of reaching out to friends and employers, or seeking therapy or medication.


Such dispassionate comparison to physical diagnoses may moreover be crucial in legitimizing policy discourse, providing us the linguistic tools to address inadequacies such as sick leave and insurance coverage. As economist Richard Layard and CBT specialist David M. Clark observe in “Thrive,” depression, when viewed as an illness like any other, is on average 50% more disabling than physical conditions like angina, asthma, arthritis, and diabetes, yet is much more likely to go untreated in Britain’s healthcare system. There may therefore be a lot of political progress to be made through the injection of objectivity into the public discourse on mental health.


Moreover, perhaps we overlook the psychological benefits of medical categorization in the phenomenology of mental illness itself. It may be empowering to be able to conceptualize depression or OCD or addiction as a foreign thing to be beat, rather than festering in the hopeless determinism of one’s (often unalterable) social conditions or previous life decisions. In naming an illness, an individual can recognize her current state as an aberration from her authentic self, positioning herself in opposition to her affliction during the healing process, battling against depression or addiction in much the same way that one might battle against cancer. On a social level, this paradigm might open the door to seeking support, in the knowledge that one’s condition is not one’s “fault,” and no more shameful or unusual than the common cold. Medicalization in social discourse can therefore serve a useful purpose and is not always necessarily a thing to be feared.
[Image: Biomarkers, including those found through blood tests, have been found to outperform traditional diagnoses of mental illness; courtesy of Wikipedia]
Nevertheless, as Sana Sheikh points out in a brilliant piece for Jacobin, we must recognize the disproportionate economic incentives which bias our healthcare system toward over-medicalization. Pharmaceutical innovation in the mental health domain has been stagnant, with very few new psychiatric drugs being developed over the last decade (predominantly because the neural mechanisms underlying most mental illnesses are still largely uncharted).


Hoping to bring objective neurological mechanisms to the forefront of mental health research, with possible pharmaceutical applications, the National Institute of Mental Health (NIMH)’s new Research Domain Criteria (RDoC) initiative seeks to redraft our framework for mental health research into its most systematized, objective formulation to date. And as the largest provider of funding for mental health research, the NIMH wields an influence over our prevailing views on mental illness that must not be underestimated.


Rejecting symptom-based DSM groupings as still too subjective, the new system relies nearly exclusively on measurable biomarkers for the categorization of mental illness. Blood tests or genetic screens for depression could soon eclipse subjective accounts. Proponents insist that biotypes (biomarker-based categories) outperform traditional diagnoses of illnesses like schizophrenia or bipolar disorder, in that there is significant biological overlap between traditional DSM groupings.


Of course, the system is already under fire for its seeming total lack of consideration for psychosocial or environmental factors in the pathology of mental disease. Moreover, as Sheikh reminds us, there must be an irreducibly subjective element to mental illness – if someone self-reports feeling depressed but the biomarkers in their blood suggest otherwise, it would be bizarre to conclude that they are wrong about their own mental state.


The medicalization of mental illness is thus not simplistically good or bad, and the degree to which medicalization is appropriate or beneficial will vary from case to case. Faced with this uncertainty, we must be wary of blanket policies that lean too far in either direction. One-dimensional policies like NIMH’s RDoC may well produce pharmaceutical innovation, but certainly have the potential to lead to harmful, reductionist accounts of mental illness. Conversely, it might be beneficial in policy discourse for conditions like depression to be treated as a veritable mental illness. In light of the rapidly changing policy and funding landscapes of neuroscience and psychology, we must insist on studying the pathology of mental disorders as a constellation of environmental, psychosocial and biological factors, and seek authentic, balanced, and multi-faceted solutions to the unique suffering presented by each.

The Suffering of Mice and Men: A Utilitarian Approach to Animal Experimentation

(Full text on International Neuroethics Society page)

Non-human animals feature prominently in all areas of neuroscience research, ranging from drosophila to higher-order primates. Many researchers hold the problematic belief that nonhuman animals lie outside the scope of moral consideration altogether, tempered only by the maxim that we ought to “reduce, refine, and replace” their use where possible.1, 2

Drawing on Peter Singer’s interpretation of utilitarianism,3 I first seek to demonstrate that it is capacity for suffering that qualifies beings for moral consideration in a utilitarian framework, as opposed to intelligence, language, or any other arbitrary trait. I will then argue that various species’ capacities to suffer can be reasonably estimated and expressed as a relative numeric “c-value.” Based on these values, I propose a utilitarian model for animal experimentation which incorporates seven key considerations: capacity for suffering (c), degree of suffering inflicted (s), number of animals used (n), probability of positive experimental results (Pe), probability that practical medical treatments will arise from positive findings (Pb), value of individual human benefit from treatment (B), and number of humans benefitted (N). If the product c·s·n exceeds Pe·Pb·B·N, I argue that the experiment in question is morally wrong, whereas the reverse inequality signifies an experiment which is morally right (and ought to be done).

Within classical utilitarianism, the action resulting in the highest net utility (one’s happiness, or the ability to further one’s needs or desires) is the morally right action to take, while choices resulting in comparatively less utility are morally wrong. 4 Due to imperfect knowledge, the utility of a given outcome is often multiplied by the probability of that outcome occurring (as evaluated to the best of one’s current knowledge).5 Utilitarianism is an appealing normative model for its simplicity and its fundamentally egalitarian premise. No one individual’s well-being is weighted more than another, as every individual in the moral equation counts as one.4, 6 But some would argue that the decision to only include human beings in moral considerations is an arbitrary one.3, 6 Peter Singer and Jeremy Bentham elegantly propose that one’s capacity to suffer is the essential quality that ought to qualify a being for utilitarian consideration, not simply one’s arbitrary status as a human being.3,4 If adult monkeys are undoubtedly smarter, more conscious and more expressive than day-old human babies, how could we possibly prioritize all humans over all animals on the arbitrary basis of intelligence or language?3

Nevertheless, non-human animals are incredibly varied in complexity – it seems intuitive that torturing a monkey and torturing a fruit fly should not have equal moral weighting. It seems highly plausible that capacity to appreciate suffering, like all cognitive processes, exists on an evolutionary spectrum, and ought to be weighted in a utilitarian equation accordingly. I put forward two simple candidate metrics that might reasonably estimate a species’ capacity to suffer (in order to derive a theoretical “c-value”): a) cortical thickness and b) complexity of social behaviour.

It has long been proposed that cortical thickness is a general indicator of intelligence: one’s ability to detect patterns and solve novel problems.11 While it is problematic to equate intelligence with capacity to suffer, it seems plausible that more intelligent animals have a better understanding of and memory for instances of pain, thus possibly adding dimensions to suffering that go beyond immediate stimulus-response pairings. For instance, complex pattern analysis might be a prerequisite for feeling anxious about future pain, and a sophisticated memory might be necessary to experience post-traumatic stress. This metric provides intuitively sound rankings, with C. elegans = Aplysia = Drosophila < mice < rats < squirrels < dogs < cats < rhesus monkeys < horses < gorillas < chimpanzees.10, 11

Secondly, degree of social complexity may be able to provide a similarly plausible set of rankings for a species’ capacity to suffer. It seems highly probable that empathy (including the ability to understand and abstract the suffering of other conspecifics) developed out of pure evolutionary necessity in social animals.21 Such adaptations increase the chances of survival of individuals via reciprocal altruism, but likely also increase the capacity for understanding and abstracting one’s own suffering, in addition to the suffering of others. Despite such considerations, most regulations protecting higher-order animals remain mere guidelines, open to a wide range of subjective interpretations by any given review board.8, 9

To illustrate how a more rigorous utilitarian approach can be applied on review boards based on theoretical c-values, let us examine the 2003 experiment by Carmena et al., in which two rhesus macaques were taught to control a closed-loop brain-machine interface (BMIc). Experimenters read electrical activity in the frontoparietal cortex via surgically implanted electrodes. Repeated training with the BMIc allowed the monkeys to use visual feedback to reach and grasp with the robotic arm, without moving their own arms. These findings may contribute to technologies that would allow paralyzed patients to bypass their spinal cord injury to elicit voluntary, machine-mediated movements directly from the brain.12

Capacity for suffering (c). Rhesus monkeys have 480 million cortical neurons, compared with 11,500 million in humans and 160 million in dogs.11 The social organization of this species is complex and well-documented. For instance, they are capable of producing at least five distinct types of scream vocalizations during agonistic encounters to elicit support from conspecifics, each denoting particular kinds of threats and levels of aggression.16 Most notably, rhesus monkeys which were excluded from their social group quickly died in the absence of support, protection and resources from their conspecifics.17, 18 Fear and suffering caused by social exclusion is therefore likely an important feature for survival in this species (as it has been in humans), and it seems plausible that they share with humans the deep pain associated with rejection and isolation. On this basis, let us assume a c-value of 0.75, which is to say the average rhesus macaque might have about 75% of the typical human’s capacity for suffering.

Degree of suffering inflicted (s). In this particular experiment, the painful surgical implantation of brain electrodes coupled with the stress of social isolation, captivity, and forced physical restraint12 might reasonably warrant an s-value of 3 out of a possible 4.1

Number of animals used (n). X utiles lost by 5 animals is a five-fold worse outcome than X utiles lost by only 1 animal. In this case, two adult female macaques were used.12 Thus the total score for animal outcomes might be c*s*n = (0.75)(3)(2) = 4.5.

Probability of positive experimental findings (Pe). Experiments which seek to fine-tune already well-researched treatments will be far more likely to succeed in their goal than those which seek to pioneer new treatments from scratch. The Pe term ought to reflect the probability that the experiment proposed is a reasonable one and will not simply result in negligibly beneficial negative findings, evaluated by the impact and number of related previous experiments in the field. The hypothesis that “drug X is not harmful for primate consumption” might have quite a high Pe if there is extensive pre-existing evidence that drug X is not harmful to mice or reptiles. In contrast, BMIcs were relatively uncharted territory at the time of this experiment (with only about three preceding studies of this kind having ever been done on primates),13-15 yielding a low Pe, reflecting a relatively high chance of failure. Let us retrospectively assume a Pe of 0.1, multiplied by 0.95 to adjust for type I statistical error. 2

Probability of benefit resulting from findings (Pb). It can be argued that there is intrinsic value in scientific knowledge itself (knowing for the sake of knowing), but utilitarianism would hold that this knowledge is only valuable insofar as it can be used to derive practical treatments or benefits for actors at some point down the road.3 For instance, experiments on animals for the purposes of educating undergraduates may have ramifications on student knowledge of biology, and some of these students may go on to benefit society as doctors or researchers. However, such links are tenuous and ought to reflect a relatively low Pb — there are a number of interfering factors between a given educational experiment and a student giving back to her community. Conversely, experiments testing the safety and efficacy of pre-clinical stage pharmaceutical drugs may have a high relative Pb, as testing for adverse effects in animals can very immediately and directly save human lives. Experiments which seek to contribute to basic understanding of fundamental biological structures and functions without a direct practical benefit in mind ought to have a Pb value somewhere in between.

1 McGill’s 2013 Animal Use Report (see Appendix A) categorizes animal experiments by degree of suffering, ranging from category B (“experiments causing little or no discomfort or stress”) to category E (“procedures involving inflicting severe pain, near, at, or above the pain threshold of unanaesthetized, conscious animals”), providing a useful standard for numerical categorization.

2 We typically allow for a 0.05 (5%) margin of type-I statistical error; thus, at best, even if the hypothesis was corroborated by experimental findings, there is typically only up to a 0.95 probability that the findings indeed reflect true differences in the population at large.
In the case of the macaque experiment, there remain a number of scientific and financial barriers impeding the development of mechanical limbs for day-to-day use among paraplegics, and far more research will undoubtedly need to be conducted before this benefit can ever be practically realized.12 Let us therefore assume that this particular study advanced the possibility of brain-machine interfaces for paraplegics by only 5%, or 0.05.

Treatment or benefit (B). To match the 4-point scale for animal suffering, one might propose an analogous 4-point scale for the value of the human benefit. In the field of public health, QALYs (quality-adjusted life years) attempt to assign numerical utility to particular ailments in a similar manner.19, 20 Let us assume the B-value of a mechanical limb for the average paraplegic to be 3 out of a possible 4.

Number of humans benefitted (N). Of the 85,000 patients with paraplegia in Canada,22 let us assume 1,000 will be eligible and able to afford BMIs due to the high cost of the treatment. The probability-adjusted human utility for performing this experiment might therefore amount to Pe*Pb*B*N = [(0.95)(0.1)](0.05)(3)(1000) = 14.25. According to these estimations, it was morally correct to approve this experiment. Nevertheless, upon a more careful analysis of factors, one can easily imagine an increase or decrease in any one of these values, which might tip the scale in one direction or another.

3 McGill’s annual Animal Use Report delineates 5 categories of purposes of animal use (PAUs) (see Appendix A) which may be useful in evaluating Pb.
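The worked numbers above reduce to a simple comparison, which can be sketched in a few lines of Python. The values plugged in are the essay’s own illustrative estimates for the macaque study, not empirical measurements:

```python
def experiment_is_justified(c, s, n, p_e, p_b, b, n_humans):
    """Apply the proposed rule: approve iff Pe*Pb*B*N > c*s*n."""
    animal_cost = c * s * n                    # negative utility to animals
    human_benefit = p_e * p_b * b * n_humans   # probability-adjusted human utility
    return human_benefit > animal_cost

# Macaque example: c=0.75, s=3, n=2; Pe=0.95*0.1, Pb=0.05, B=3, N=1000
animal_cost = 0.75 * 3 * 2                      # c*s*n
human_benefit = (0.95 * 0.1) * 0.05 * 3 * 1000  # Pe*Pb*B*N
print(round(animal_cost, 2), round(human_benefit, 2))  # 4.5 14.25
print(experiment_is_justified(0.75, 3, 2, 0.95 * 0.1, 0.05, 3, 1000))  # True
```

Note that the verdict is sensitive to every factor: doubling s to a maximal 4 and assuming only 100 beneficiaries, for instance, would flip the inequality and the decision.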

Overall, it is highly reasonable to assume that capacity to suffer qualifies beings for ethical consideration, and that animals are capable of suffering. It follows that inflicting pain on animals through experimentation comes at a moral cost. It seems intuitive that this moral cost ought to be weighted according to a species’ relative capacity to suffer, as fruit flies likely cannot experience the complex emotional suffering that monkeys can. “Capacity to suffer” times “suffering inflicted” times “number of animals used” is therefore a reasonable estimation for the negative utility produced in a given experiment. Moreover, ethics boards intuitively justify the suffering of animals by citing the possible societal benefits that might arise from scientific findings. “The probability of a given hypothesis being true” times “the probability of given findings resulting in a treatment” times “the utility of the treatment” times “the number of humans that will benefit from the treatment” is therefore a reasonable expression of net utility gained. While these values may never be objectively resolvable, such a model at the very least delineates the kinds of factors one ought to weigh when attempting to justify the moral status of an animal experiment. A standardized cost-benefit framework such as the present one ought to be employed as a decision aid to supplement intuitive decision-making on ethics review boards.

Positive impact of safe-injection sites is well-documented and immense

(Published in the Montreal Gazette)

Having visited a safe-injection site in Europe, I can say with great certainty that I would be grateful for, not fearful of, the establishment of such sites in Montreal.

As part of McGill’s Comparative Health Systems Program, I visited one of the longest standing and most effective safe-injection sites in the world, in Switzerland, the birthplace of the safe consumption room itself. “Quai 9” stands alone in a sea of luxury stores and four-star hotels, only a few blocks away from scenic Lake Geneva (a thriving tourist hot spot). Yes, a safe-injection site exists in the heart of Geneva’s ritzy commercial district, and the city has yet to implode. Surrounding streets are not rife with crime and violence. Dirty needles do not litter the sidewalks. Neighbouring stores, banks, and restaurants have carried on, business as usual, for years.

The building features couches and a small kitchen in the common area, along with a designated room with sterilized tables and chairs for safe consumption. In a world that too often leaves them without options, drug users are here awarded a few moments of autonomy, community and, above all, dignity. Visitors, many of whom are without homes and jobs, are able to take a breather from the streets, grab a drink of water, shower, nap or just socialize in the common area.

The site receives 130 visits each day. Many visitors express intentions of quitting, but some of them do not. All are welcomed by the site’s competent medical staff, no questions asked.

What many fail to grasp is that drug users who do not wish to quit still have an active interest in protecting themselves as much as possible. They seek refuge in safe-injection sites because they care and worry about their own health — a motive many find difficult to understand, given the mischaracterization of drug addiction as a conscious decision to continually self-harm.

Virtually all injection drug users, whether they intend to quit or not, do not want to contract infectious diseases and do not want to die by overdose. What communities must understand is that it is in both the broader community’s interest and that of drug users to help them protect themselves against these realities. The alternative is alleyway needle-sharing, the rampant spread of infectious diseases like HIV and Hepatitis C, and streets and sidewalks littered with dirty needles.

Montreal is pushing for three safe-injection sites by this fall, despite staunch resistance from Conservatives at the federal level. The call is no doubt warranted after irrefutable evidence from sites like Vancouver’s Insite showed plummeting rates of disease transmission through needle sharing and of death by overdose.

The fears that safe injection sites significantly alter their surrounding neighbourhoods or promote drug use among drug-naive individuals are both incredibly overblown and largely unsubstantiated. And yet, the positive impact such sites have on the health and basic human dignity of drug users, as well as the safety and sanitation of public spaces, is immediate, well-documented, and immense.

Like it or not, individuals with these illnesses exist, they do place value in their own health and safety, and like all other sick citizens, they are deserving of both medical attention and basic human dignity. Rona Ambrose believes Montreal citizens would veto the construction of these sites were they given the choice, but she seems to forget that reaching out to the most vulnerable among us, intelligently and with compassion, is what makes our community strong.

Too few women on Osheaga stages

(Published in the Montreal Gazette)

The summer festival is a hallmark of our generation, and the Osheaga lineup a litmus test for cultural relevancy among millennials. Thousands of women will undoubtedly turn up to party, but why are female acts still so few and far between?

Of particular note is the scarcity of female DJs. Not a single woman performed on the Piknic électronik stage all weekend last year (the main stage for solo DJs and EDM artists). This year, there is at least one (The Black Madonna, performing Sunday, Aug. 2 at 7 p.m.).

The female presence on the two main stages was abysmal as well — last year’s Friday lineup included only a single woman on either of the two main stages (as part of a larger band, The Royal Streets). The disparity is common to festivals across Canada — none of the 19 main acts on the main stage of Toronto’s Veld festival last year were women, either.

There are several main-stage acts this year that are female or have a prominent female presence, including Grace Potter, Angus and Julia Stone, Marina and the Diamonds, Of Monsters and Men, Florence and the Machine, St. Vincent and First Aid Kit.

But women remain significantly under-represented. I Googled every single band scheduled to play at Osheaga 2015 and tallied the number of female and male performers, using biographical information from the band’s Wikipedia or Facebook page (with the exception of one act for which I could find no information online). I counted any act composed of two or more central members as a band (like The Black Keys), and any act composed of only one central member or a single DJ as an individual artist (Kendrick Lamar, Chet Faker). The results were unsurprising, but disheartening nonetheless. In total, 252 male artists made up 86 per cent of all performers, while 41 female artists made up a meagre 14 per cent.

Friday, July 31: 40 total acts (23 bands composed of 80 band members, 17 individual acts). Of the 80 band members, 11 are women (14 per cent). Of the 17 individual acts, two are women (12 per cent).

Saturday, Aug. 1: 38 total acts (19 bands composed of 65 band members, 19 individual acts). Of the 65 band members, 5 are women (8 per cent). Of the 19 individual acts, four are women (21 per cent).

Sunday, Aug. 2: 39 total acts (22 bands composed of 95 band members, 17 individual acts). Of the 95 band members, 12 are women (13 per cent). Of the 17 individual acts, seven are women (41 per cent).

While there are 42 all-male bands, only two all-female bands are scheduled to play during the weekend (both of which happen to be duos — Milk & Bone and First Aid Kit). Possibly even more disheartening is the fact that the number of women playing Saturday does not even reach double digits (nine women, compared with 75 men).

It’s 2015, and every teenage boy with a MacBook thinks he’s a DJ. But women still appear to be relegated to the role of consumer rather than creator when it comes to festival culture — among DJ acts as well as other performers. Perhaps cultural expectations of femininity discourage the idea of a woman challenging boundaries or commanding a crowd. Amateur artists are told to “put themselves out there” to make it, and women are too often discouraged from self-advocating for fear of appearing immodest. Perhaps this is a holdover from generations past — with fewer female artists to aspire to, women can’t as easily picture themselves behind turntables. Or perhaps there is no shortage of aspiring female artists, but the upper sectors of the music industry are still a bit of an old boys’ club.

Whatever the reasons, it’s time for great female artists to get the main stage recognition they deserve at music festivals, and I hope the organizers of Osheaga 2016 can agree.

The science of chemical warfare

(Published in The McGill Tribune)

Recent attacks in Syria reopen the debate over what makes chemical weaponry so dangerous

As members of the international community condemn the horrific chemical attacks on the suburbs of Damascus, Syria that began Aug. 18, the past few days have cast a spotlight on the mechanisms behind chemical warfare. The recent series of events in Syria has reopened the debate over what exactly makes chemical weapons so much more immoral than conventional artillery.

Why the distinction between ‘chemical’ and ‘conventional’ arms? 

Chemical agents conjure a particular psychological terror among civilians, partly due to the entirely indiscriminate nature of gas attacks, and partly because often no smell, sight, or even sound precedes the victim’s imminent death. And when death is not swift, the sheer physical brutality of chemical maiming is cruel, often carrying long-term generational and environmental effects.

Chemical arms are often referred to as the ‘poor man’s weapon of mass destruction.’ Critics such as political scientist Dominic Tierney claim Western powers are quick to condemn the use of chemicals because of the vast array of powerful and expensive conventional arms these countries hold at their advantage.

“In fact, people likely die more quickly and in less pain from sarin poisoning than if they bled to death from a shrapnel wound,” said Stan Brown, a chemistry professor and chemical weapons expert at Queen’s University, in an interview with the National Post.

Still, the technological and monetary barriers preventing rogue actors from obtaining chemical weaponry in very large quantities are remarkably low. Many technologies, equipment, and materials used throughout the world for civilian purposes can easily be diverted to produce chemical weapons agents, and therein lies their greatest threat. An artillery shell the size of a suitcase, filled with sarin, is lethal enough to kill an entire football stadium of civilians — a far greater effect than explosives of equivalent size.

Understanding the biological mechanisms of these chemical agents quickly illuminates why and how chemical weaponry poses such a threat.


Sarin gas

Widely suspected as the chemical employed in Damascus last week in the killing of 1,500 civilians, sarin attacks the nervous system where nerve endings meet the victims’ muscles. Eyewitness accounts of the recent attacks relay harrowing images of children running from their houses, convulsing, and gasping for breath before collapsing to the floor. Typically, sufferers experience frightening symptoms, such as foaming at the mouth and violent full-body convulsions. At high enough doses, sarin ultimately results in asphyxiation.

Under normal conditions, nerve cells release the neurotransmitter acetylcholine, a molecule that transmits signals from neurons to cells, to stimulate the muscle. The neurotransmitter crosses a tiny gap, known as a synapse, binding to the surface of adjacent muscle cells in order to excite the tissue and facilitate muscular movement. Then the enzyme acetylcholinesterase quickly degrades the acetylcholine in the synapse to prevent overstimulation of the cell, and relax the muscles.

Sarin inhibits acetylcholinesterase. When the gas enters the nervous system, it therefore prevents the enzyme from degrading acetylcholine. A dangerous buildup of acetylcholine can occur within minutes, resulting in a continual excitatory response in the muscles. This overstimulation causes muscle seizures and impairs the respiratory system, ultimately resulting in respiratory arrest and the victim’s death.

In addition to its use in Damascus, sarin was employed by Iraqi military forces against the Kurds in the 1980s, and in a series of terrorist attacks carried out by a cult in Japan in the 1990s in an effort to bring down the government and install the group’s founder as the ‘emperor’ of Japan.


Mustard gas

Sulfur mustard carries an odor resembling that of mustard plants or horseradish; it is a potent vesicant—a chemical agent that produces blistering on exposed skin and mucosal membranes.

Mustard agents have even seen medicinal use, such as in wart removal. However, exposure to even a very small amount of the compound can be fatal, and survivors are often left with painful internal and external disfigurations.

Upon entering the body, the chemical reacts with the water surrounding the body’s cells and loses a chloride ion, leaving behind an ion intermediate that reacts quickly with a number of enzymes and proteins on cell surfaces. Since this chemical process occurs most quickly in warm, moist conditions, the mucous membranes, eyes and respiratory tract are the most affected areas of the body. However, much is still unknown about the exact mechanism of tissue injury. The chemical can also mutate nucleotides—organic molecules that form the basic building blocks of DNA; this explains the long-term carcinogenic properties of mustard gas.

Since its first use in World War I, documented mustard gas use includes the Iran-Iraq War in 1984. In recent weeks, French intelligence has accused the Syrian Assad regime of having stockpiled 1,000 tonnes of sarin and mustard gas combined, but this claim remains contested.

Fear of vaccination breathes new life into virus

(Published in the McGill Tribune)

Violence in Pakistan threatens eradication efforts in the fight against the poliovirus

Poliovirus has been eliminated in most of the developing world. Its eradication effort has been driven primarily by the Global Polio Eradication Initiative (GPEI), a multilateral proposal passed by the World Health Assembly in 1988. However, three countries — Afghanistan, Nigeria, and Pakistan — stand between the GPEI and its goal of making polio the second human disease ever eradicated.

The Global Polio Eradication Initiative is an international health initiative more than two decades in the making. The project involves tens of thousands of vaccinators going door to door in villages in developing countries, many of which are highly inaccessible and dangerous. Equipped with little more than GPS systems, vaccinators must navigate the shifting political landscape of the developing world in search of the disease.

The biggest challenge ahead for the GPEI lies in the recent rise in local resistance to vaccination efforts. The gunning down of nine vaccination workers in Pakistan’s largest city in December 2012 resulted in the suspension of the GPEI’s vaccination campaign and its 225,000 workers — a tragedy for which Taliban-linked militants are widely thought responsible. As political pressures mount, vaccinators are missing key opportunities to improve the situation. The success of the campaign relies on the crucial dry season — the next two months — during which the virus is weakest and spreads least effectively.

Poliovirus primarily affects children under five years of age. The virus enters the body through the mouth and multiplies in the intestinal tract. It is then shed into its surroundings through feces. Once in the environment, polio can spread rapidly through communities, hitting those with poor hygiene and shoddy sanitation infrastructure the hardest.

Most infected people show few or no symptoms, so cases often go unrecognized. However, in its most severe occurrences, poliovirus can lead to infantile paralysis and degenerative crippling through inflammation of the spinal cord’s grey matter and the death of motor neurons.

While there is no cure for polio, the vaccine is over 90 per cent effective. If vaccination efforts are halted, it will become increasingly difficult to contain the disease. The more time that is lost in the GPEI campaign, the more likely it is that polio will spread back out into other areas of the world — reversing any progress made by this worldwide program. Furthermore, steps must be taken on a global scale to prevent the re-emergence of mutated vaccine-derived polioviruses that may persist in small numbers.

Resistance to vaccination efforts stems from a variety of sources. Attitudes of distrust and skepticism towards Western immunization workers are prevalent among many Islamic militant groups and the general public following the CIA’s hepatitis-vaccination ruse. Last year, the CIA sponsored a widespread vaccination effort against hepatitis in a failed attempt to collect DNA from children living in Osama bin Laden’s compound in northern Pakistan to confirm his whereabouts. Not only did the CIA fail to obtain DNA samples, but it also fostered distrust of vaccination workers among Pakistanis.

Extremist groups also crudely associate polio workers with the devastating U.S. drone strikes responsible for killing civilians, giving rise to anti-West sentiments that may continue to lead to violent attacks, like last month’s shootings.

Efforts are further impeded by widespread rumors adopted by parts of the Muslim community, such as suspicions that the vaccine contains pork or is designed to sterilize Muslim girls. These rumors rest on inaccurate science. The sterilization myth, for example, stems from the vaccine containing trace amounts of estrogen, believed to cause negative health effects; in reality, the hormone’s concentration in the vaccine is far too low to cause medical problems.

Yet public skepticism towards these vaccination attempts is not historically unfounded. The pharmaceutical company Pfizer tested its meningitis antibiotic Trovan in remote communities in northern Nigeria in 1996, resulting in the deaths of 11 children. Partly as a result, Boko Haram, an Islamic militant group, has publicly opposed vaccinations in Nigeria.

Resistance among the public to vaccination has mounting consequences. Nigeria is presently the only country in the world where the year-to-year incidence of polio is rising, but Pakistan could soon face a similar fate if its vaccination program is not resumed.

Vaccination campaigns are trying to integrate the distribution of mosquito nets and vitamin supplements into their programs in an attempt to regain favour with the general public; but finally eradicating polio hot spots in some of the poorest and most remote pockets of the developing world will prove no easy task.