Tuesday 14 April 2015

Is Medical Research Broken? (And Can We Fix It?)

There is an awful lot of medical research. Publications increase each year, and at any one time members of several university departments are in the air travelling to medical conferences around the globe. But does this do any good? New drugs are increasingly rare, and in mental health there have been few real advances in the last twenty years. Medical research is increasingly seen as irrelevant to real-life problems, so much so that a new discipline of “knowledge translation” has been invented in order to explain just how research is relevant. Publication in highly profitable medical journals is problematic: Elsevier has profit margins of 39%, with most of that coming from public money via university libraries. Bias shapes what gets published, publicly funded research is hidden behind paywalls, and unpaid peer reviewers provide idiosyncratic opinions. Conflicts of interest arise when authors receive lucrative advisory or speaking contracts from pharmaceutical companies.

So what can we do about this? There have already been some responses, such as the standardized reporting of trials and the requirement to register trials before they start so that publication bias becomes more obvious. Another response, which has so far received less attention, at least in North America, is the explicit involvement of patients and caregivers in medical research as instigators of and collaborators in studies. The benefits are immediately obvious. Having service users and caregivers involved means that relevant questions get asked and that outcomes important to patients are included in trials. Also, people who experience ill health are often experts on their own condition and are certainly experts in navigating the complicated Canadian healthcare system.

The slogan adopted by the National Health Service in the UK proclaims “nothing about us without us.” This indicates the importance of involving service users and their caregivers in the planning and delivery of health services. The financial levers are now being put into place that will encourage researchers to include people with lived experience: the Canadian Institutes of Health Research is now promoting a strategy for patient-oriented research. However, service users and caregivers face significant barriers to getting involved. There are few places where people with lived experience encounter researchers on anything like an equal footing. The way in which research is funded is confusing to outsiders, especially when a plethora of hospitals, research institutes and universities are all competing for the same research dollar; and when service users and caregivers actually meet researchers it is sometimes difficult to prevent this from being just a token exercise.

Real change will mean sharing power. Researchers will have to get out and actively engage with patients, and institutional measures will need to be put in place to encourage this to happen. Such changes may mean ethics committees insisting that there is a patient and caregiver representative on all applications; interview committees for new researchers including service users; and research committees writing provisions for service users and caregivers into their terms of reference. The big change in early 21st-century health care is the sharing of knowledge that used to be exclusive to the professions, coupled with the design of health care systems with users at their core, ensuring the integration of research and excellent clinical care.

Relevant Websites:


And a relevant meeting in Ottawa:

Sign up today to attend the Service Users and Caregivers Research Interest Group event on Thursday, April 16th:


Monday 19 January 2015

Is Quetiapine the new Valium?

One of the most striking aspects of coming to North America and witnessing psychiatric practice is the degree of polypharmacy, which is much greater than I’m familiar with. This may or may not be a bad thing, but it is very different from any other setting I’ve worked in or examined. Nowhere is this more noticeable than in the prescription of quetiapine. It seems that you can’t get to see a psychiatrist unless you are already taking it. The reasons vary, but it is prescribed for sleep, as an augmentation for antidepressants, to control behavioral symptoms in dementia, to treat delirium and as a treatment for psychotic symptoms. There are very few psychiatric disorders it is not prescribed for. Is this just anecdote, or is there some evidence for it?

Figures from the Canadian CompuScript database show that over 2 million prescriptions for quetiapine were written in 2012, compared with 0.9 million in 2005. This increase has not been mirrored by other antipsychotics. Until 2012, Seroquel was the fifth-best-selling pharmaceutical of any kind, generating $6 billion in global sales for AstraZeneca.


Quetiapine has been approved for use in schizophrenia, bipolar disorder and major depressive disorder. It has not been approved for sleep, anxiety or managing agitation in people with dementia. (In 2010 the company paid $520 million to settle claims that it had marketed the drug off-label.)

How does this compare with Valium, which from the 1960s to the 1980s was also prescribed in large amounts, often for sleep, anxiety or lesser degrees of stress? The parallels are striking. First comes the initial optimism that at last there is a drug with few side effects for common, difficult-to-treat problems. Then, as problems arise, the extent of the adverse effects is only slowly realized. Then come official warnings about over-prescription, which are widely ignored, and then the lawsuits, followed by a decrease in prescribing.

We are probably at the stage where the extent of the adverse effects, which include diabetes and discontinuation syndromes, is just beginning to be realized. Soon will come the lawsuits, with some reports estimating 10,000 product liability suits already pending against AstraZeneca over these adverse effects. Next will be guidance about reducing the off-label prescription of quetiapine and other antipsychotics. Perhaps on this one we should be ahead of the curve?

Wednesday 22 October 2014

Ottawa Shootings and the Response of the Mental Health Community

As I write this, downtown Ottawa is still in lockdown following several shootings. Like many people in Ottawa I feel shocked that this is happening in my town and angry that someone is trying to kill my neighbours. While trying to work out what was happening and whether it was safe to travel across the city, I was also struck by the numerous tweets offering psychological support. This made me think about what, as a psychiatrist, the response of the mental health community should be to shocking disasters such as these. Having been in Sri Lanka just after the 2004 tsunami and in New Zealand during and after the Christchurch earthquake, I have had some experience of what makes an effective response.

The most important thing to say is that most people will find their own ways to cope with this, and most people, even those with direct exposure to the events of today, will not become mentally ill or develop post-traumatic stress disorder. Feeling anxious, sad and sleepless after a disaster is a normal response to an abnormal situation. What helps is attending to a hierarchy of basic needs: making people feel safe, providing food and drink, and giving adequate, timely information so that people can make meaning of what has happened.

By contrast, having teams of “counsellors” offering immediate debriefing is unlikely to help and may make things worse by getting people to relive the trauma.

What is helpful is using existing networks and services, both professional and personal. Reinventing the wheel after a disaster rarely works. In the first week after a disaster, those at greatest risk of more serious psychological distress include people with existing mental disorders, those intensively involved with the trauma and those whose symptoms of stress are prolonged. In the longer term, rituals such as memorial services or other spiritual activities can be extraordinarily helpful (as they were in Sri Lanka), along with a return to “normal” activities and timetables as soon as possible.


Other communities have experienced and survived disasters in the past – here are some links to resources which inform the psychological response:

Psychosocial Support in Disasters – excellent resource from Australia
Coping after a Traumatic Event – link to UK Royal College of Psychiatrists web page and information leaflet
Information sheet provided by the Royal Australian and New Zealand College of Psychiatrists for clinicians providing care to people after the Christchurch earthquake
World Psychiatric Association and World Health Organisation statement on the role of psychiatrists after disasters

Friday 16 August 2013

It’s Not the Combat – Maybe it’s the Drinking in Vulnerable Young Men

JAMA 2013;310(5):496-506

A cohort study published in JAMA tried to answer the question: what are the risk factors for suicide in the US military? This is a hot topic, as the suicide rate among US military personnel has increased from about 11 per 100,000 in 2005 to about 18 per 100,000, so that deaths from suicide now outnumber deaths from combat.

The authors gathered together three cohorts, in 2000, 2003 and 2006. Importantly, they included reservists as well as full-time members of the military (reservists make up about 30% of the US army, which is relatively high compared with other countries). Participants were asked to complete a baseline questionnaire and then a further questionnaire every three years. About a third of the people approached agreed to take part, and they were more likely to be female, younger and college-educated than the typical US military population.

Death, the main outcome, was ascertained from the National Death Index and the Department of Defense Medical Mortality Registry. The authors were particularly interested in whether deployment and combat experience were linked to suicide, having gathered information about these from records and surveys. The questionnaires completed by participants assessed various other factors, such as stressful life events, post-traumatic stress disorder, depression and drinking (although they didn’t ask about other drug use, which seems a significant omission).

What they found among the 151,000 participants is that 83 people died by suicide between 2001 and 2008, giving a crude rate of 11.73 (95% confidence interval 9.21–14.26) per 100,000. What matters is not the crude rate (the authors didn’t set out to establish the “true” rate in the US military) but the risk factors associated with suicide. They found that the risk of suicide was highest in those with bipolar disorder, depression and alcohol problems. There was no link with deployment or combat exposure – in fact, those who were not deployed had higher crude rates of suicide than those deployed once.
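A quick check of the arithmetic suggests the denominator here must be person-years of follow-up rather than simply participants (my assumption; the paper’s exact accounting may differ, but it is the only reading that makes the numbers consistent):

\[
\text{person-years} \approx \frac{83}{11.73} \times 100{,}000 \approx 707{,}600,
\]

which works out to about \(707{,}600 / 151{,}000 \approx 4.7\) years of follow-up per participant on average – plausible for cohorts enrolled in 2000, 2003 and 2006 and followed to 2008. Dividing 83 deaths by 151,000 people directly would instead give about 55 per 100,000 over the whole study period, not an annual rate.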

One way of expressing risk is to report the population attributable risk, which tells you how much the rate of a disorder would fall if you got rid of the risk factor. (This is useful because it takes into account both how common the risk factor is and how much it increases the risk – if a risk factor is rare, then from a population perspective how much it increases your risk is relatively unimportant.) In this study the population attributable risk was 18% for alcohol-related problems, 11% for depression and 5% for bipolar disorder.
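For readers who want the formula, the population attributable risk is commonly calculated from the prevalence of exposure \(p\) and the relative risk \(RR\) using Levin’s formula (the numbers below are hypothetical, for illustration only, not the study’s actual inputs):

\[
\text{PAR} = \frac{p\,(RR - 1)}{1 + p\,(RR - 1)}
\]

So a risk factor carried by 10% of the population that trebles the risk (\(RR = 3\)) gives \(\text{PAR} = (0.1 \times 2)/(1 + 0.1 \times 2) \approx 17\%\): eliminating it would cut the rate of the disorder by about a sixth.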

So in this population, as in most risk factor studies, it was the usual suspects of depression and substance use that were associated with suicide. There are some caveats, though. There were only 83 suicides, so the hazard ratios have wide confidence intervals. There is also the possibility of a healthy deployment effect – personnel who were unwell or had substance abuse problems may have been less likely to be deployed or to see combat. The study also covers only the period up to 2008, when the increase in suicides was just beginning. It will be interesting to see whether later data support these initial conclusions.
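The point about wide confidence intervals follows directly from the small number of events. As a standard approximation (not a figure from the paper), the standard error of a log hazard ratio is driven almost entirely by the event counts in the two groups being compared:

\[
\mathrm{SE}(\log HR) \approx \sqrt{\frac{1}{d_1} + \frac{1}{d_0}}
\]

With 83 suicides split, say, 20 versus 63 between exposed and unexposed, \(\mathrm{SE} \approx \sqrt{1/20 + 1/63} \approx 0.26\), so the 95% confidence interval for the hazard ratio spans a factor of about \(e^{1.96 \times 0.26} \approx 1.7\) in each direction – wide enough to blur all but the strongest associations.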

However, the study does raise the question of whether there is something “depressogenic” about being in the US military. The US military is generally younger than the armies of other high-income countries, and deployments, when they occur, are longer than in other countries, with shorter breaks “at home”. Personal testimony from serving and recently retired soldiers also contributes to a view that invading and occupying small defenceless countries far away from the US can affect morale. However, this argument also applies to armies from other countries, such as Canada and the UK, where there has not been such an increase in military suicides.

Another explanation of these findings is that the response to mental illness in the military is problematic. We know from the civilian population that identifying and responding to depression in primary care is one of the best ways of reducing suicides. As the authors note, mental health problems in the military increase the risk of suicide – just as in civilian life – but the response is often different, with a much greater reluctance to divulge mental health problems. This might explain why alcohol abuse – often used to self-medicate psychiatric disorders – was the most important risk factor in this study.

Comparative Efficacy and Tolerability of Antipsychotic Drugs in Schizophrenia

The Lancet, 27 June 2013 (In Press – DOI: 10.1016/S0140-6736(13)60733-3)

Recently available online in The Lancet is a paper describing the comparative efficacy and tolerability of 15 antipsychotics in people with schizophrenia, written by a group in Germany with a track record of producing such studies. The difficulty with the evidence when comparing drug treatments is that where you have lots of pharmaceutical options there are rarely any head-to-head comparisons of the different treatments. One way to get a sense of comparative effectiveness is to do a meta-analysis that combines the different studies.
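The trick that makes this possible when head-to-head trials are missing is the indirect comparison at the heart of a network (multiple-treatments) meta-analysis. If drugs A and C have each been compared with a common comparator B (often placebo), an indirect estimate of A versus C can be formed from the two direct effect estimates – a sketch of the standard Bucher logic, not the paper’s full model:

\[
d_{AC}^{\text{ind}} = d_{AB} - d_{CB}, \qquad
\mathrm{Var}\!\left(d_{AC}^{\text{ind}}\right) = \mathrm{Var}(d_{AB}) + \mathrm{Var}(d_{CB})
\]

Note that the variances add, so indirect evidence is weaker than a direct trial of the same size; the full network model then combines direct and indirect evidence wherever both exist.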

The authors combined the results of 212 blinded randomised controlled trials that included 43,049 patients with schizophrenia. They excluded studies of people with mainly negative symptoms, of those who were treatment-resistant and of stable patients (mainly relapse prevention studies). The people in the trials had a mean duration of illness of about 12 years and an average age of 38.

Using a sophisticated form of meta-analysis, they then produced hierarchies of effect sizes for overall efficacy, discontinuation, weight gain, extrapyramidal side effects, prolactin increase, QTc prolongation and sedation. What they found for overall efficacy is that clozapine was the most effective (by some margin), followed by amisulpride, olanzapine and risperidone. Haloperidol came in seventh on the list. For overall acceptability, the three drugs least likely to be discontinued compared with placebo were amisulpride, olanzapine and clozapine.

The true interest of this study is that it challenges two ideas. The first is that all antipsychotics have the same effectiveness. While the differences in effectiveness between the drugs were small (and smaller than the differences in side effects), the finding was robust and didn’t alter much in the sensitivity analyses. The second challenged idea is that thinking about antipsychotics as first-generation or second-generation is helpful; in fact it obscures differences in adverse effects across all antipsychotics. For example, aripiprazole is less sedating than quetiapine, with haloperidol somewhere in between; similarly, olanzapine causes more weight gain than risperidone, with chlorpromazine in between.

As the authors state, “Antipsychotic drugs differ in many properties and can therefore not be categorised in first generation and second generation groupings. The suggested hierarchies in seven major domains should help clinicians to adapt choice of antipsychotic drug to the needs of individual patients”.

Thursday 16 May 2013

The DSM-5 and the Complexities and Capitalizing of Classification


Well, it’s not actually a journal article, but as everyone and their dog has an opinion on the launch of DSM-5 next week, I thought I would pitch in as well.

First, why bother with classification at all? We have classification because it is useful for communication and ultimately inevitable. It is inevitable because once you recognize that some people share something in common that other people do not have – low mood or forgetfulness, for example – you are creating a classificatory system. The other choices are that everyone is unique (not that useful, because it means you can’t apply lessons from one person to another) or that everyone is the same (again, not that useful).

So given that we have to have a system of classification in medicine what should it look like? There are three choices – classification by symptoms, by course of the disorder or by aetiology.

Until the mid-19th century most disorders in medicine were classified by symptoms – a medical textbook from the 1700s would have several chapters on different types of fever. This changed as more was learned about how the body works and links were made between pathology and symptoms in life. For most medical disciplines the classification changed from symptoms to aetiology, so that physicians today don’t diagnose central crushing chest pain disorder (a symptom), they diagnose a myocardial infarction (an aetiology). Classification by the course of a disorder has been tried in medicine but has never been that successful, as it can only be done retrospectively.

The reason that classification never shifted from symptoms to aetiologies in psychiatry is that the brain is the most complex organ in the human body (whose workings we still don’t fully understand), and it is enclosed in a bony box (the skull), making it hard to study (unlike the heart or pancreas, for example). So in psychiatry, with a few exceptions, we are still stuck with a symptomatic classification of disorders. The trouble with symptomatic classifications is that they are not really that powerful in guiding decisions about treatment or prognosis – hence the focus in psychiatric training on formulations, which attempt to take a wider view of the aetiology and impact of disorders.

Which brings us to DSM-5. This is a classification by symptoms from a particularly U.S. point of view. The two big criticisms of DSM-5 are that it medicalizes what should be normal and that, while it pretends to be biomedical, the evidence for a biological basis for most disorders is lacking.

In my opinion there is considerable merit in the argument about DSM-5 being an attempt to medicalize the normal – a sort of “psychiatric mission creep”. The suspicion here is of the influence of pharmaceutical companies on the designers of DSM-5 to create new markets.

There are many examples of undeclared conflicts of interest, particularly in U.S. academic psychiatry, influencing the research agenda and the interpretation of research. The other financial conflict concerns the American Psychiatric Association, which publishes the DSM. The drafting of DSM-5 missed most deadlines except the final publication and launch date, leading to the suspicion that the APA is in poor financial straits and needs the DSM to come out now in order to collect the money it makes from sales.

The second argument about DSM-5 is that it does not reflect biological reality, and the comparison is often made with the rest of medicine. However, many disorders in medicine are equally subjective (pain, for example) or have an obscure cause (headaches or migraines, anyone?). Disorders in other areas of medicine also regularly undergo reclassification (epilepsy and acute coronary events come to mind).

Critics often condemn the “medical model” when what they really are referring to is a reductionist biological model that equates all disease with biological pathology. However, anyone who spends any time on a medical ward round or out-patient clinic will quickly discover that the “medical model” actually means integrating biology, psychology and sociology in a complete package.

So in psychiatry we are still left with a predominantly symptomatic classification to understand people who present with distress. Our focus should be on improving and defending services for such people and, as good clinicians, integrating psychology and biology to do something helpful.

The role of classificatory systems is often exaggerated – the best quote I have heard about them is that “they are like lines of longitude and latitude – nothing like them exists in the real world – but they are helpful in finding your way around”.

Monday 13 May 2013

Screening to Assess Who Is at Risk for Suicide Falls Short – Screening to Assess Depression Likely More Constructive in Managing the Risk of Suicide

Annals of Internal Medicine; Published first online April 23, 2013

Suicide and how to prevent it is a hot topic. From the evidence that we have, investing in primary care to improve the detection and treatment of depression would appear to be the place where you get the biggest bang for your buck. Depression is clearly related to suicide, so you would think that screening for suicide in primary care might be helpful in suicide prevention.

However, a recent systematic review of “Screening for and Treatment of Suicide Risk Relevant to Primary Care” in the Annals of Internal Medicine concluded that screening in primary care was of limited usefulness. The authors searched for English-language studies only (again!) up to December 2012 that addressed two questions:

“What are the benefits and accuracy of screening instruments in primary care?”

“What is the effectiveness of suicide prevention interventions in primary care or mental health settings?”

Addressing the first question, the authors found five studies, which showed no clear short-term benefit (within two weeks) of screening and that the accuracy of screening instruments was poor. It should be noted that while the authors talk about screening, this is not screening as it would be applied to other disorders in medicine. In other disorders, screening means detecting disease in the interval between biological onset (say, the presence of cancerous cells) and the development of symptoms (for example, a breast lump).

For screening to be effective, treatment given before symptoms develop needs to be more effective than treatment given after symptoms appear. Clearly, if people respond positively to questions about suicide then “symptoms” have already developed, so what the authors are really describing is case finding rather than screening. Perhaps more effort should be put into screening people for depression rather than for suicide specifically.
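Bayes’ theorem shows why screening for a rare outcome performs so poorly. With hypothetical but generous figures (mine, not the review’s), suppose a screening question has 80% sensitivity and 90% specificity, and that 1 in 1,000 primary care attenders will attempt suicide in the following year. The positive predictive value is then

\[
\mathrm{PPV} = \frac{0.8 \times 0.001}{0.8 \times 0.001 + 0.1 \times 0.999} \approx 0.8\%,
\]

so over 99% of positive screens would be false alarms. Run the same arithmetic for depression, with a prevalence nearer 10% in primary care, and the PPV rises to roughly 47% – one illustration of why screening for depression is the more workable strategy.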

The second part of the review looked at interventions in both primary care and mental health settings that might reduce suicide risk. As one of the authors of a study included in this review, I was interested to see their conclusions. The authors found 49 trials looking at reducing suicide risk.

Psychotherapies reduced suicide attempts in adults but had little effect on suicidal ideas, whereas interventions that tried to enhance usual care had little effect on suicide risk. Nearly all the studies were done outside primary care, recruiting patients at high risk of suicide from general hospitals (usually people who had presented to the emergency department with self-harm), so the results cannot really inform what to do with people identified as suicidal in primary care.

So what next? Risk assessment tools do not predict who will commit suicide or repeat self-harm. What are needed are better risk management systems rather than yet another risk assessment form. Screening for depression and offering brief effective treatments – possibly computerized therapies – that can be used in primary care may prove to be a more useful strategy than simply screening to find “suicidal” people.