Friday 24 December 2010

Light bikes are faster, right?

You would think so. Otherwise Alberto Contador would have tackled the mountains of Le Tour on an old school steel boneshaker rather than on a delicate wafer of carbon fibre so light it was at risk of floating away on the breeze. Otherwise, I wouldn’t have several featherweights of my own (each for a different occasion, you understand). So it was with some consternation that I read an article in the British Medical Journal in which a doctor reported that, on his daily commute of 27 miles, his brand new carbon steed was no quicker than an old, second hand steel bike weighing nearly 50% more.

It is a strange article, part of the BMJ's Christmas issue, in which somewhat light-hearted articles about a variety of medical issues are published. These light-hearted pieces are still dressed up in the language of science, so one is never quite sure whether they are to be taken seriously or not. In this particular piece, an English GP, Dr Groves, describes his daily cycle to work as a “single centre, randomised non-blinded trial; n=1”. What this means in plain English is that he tossed a coin every morning to see which of his two bikes he should ride, that he made no attempt to prevent himself from knowing which bike he was riding, and that he didn’t involve any other riders in the trial. Now there are all sorts of things wrong with this. Trials are blinded for a reason. For example, I know I ride faster on my carbon bike simply because I put more effort in, embarrassed to be seen pootling around on what is so obviously a high-end machine. There are also problems with a noisy dataset in this trial: if you remove the four most extreme times (the outliers), then the result is reversed – the carbon bike comes out marginally faster.
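The outlier point is worth seeing in miniature. Here is a minimal sketch of how a couple of extreme days can flip a small trial, using invented commute times – these are illustrative numbers, not Dr Groves’ actual data:

```python
from statistics import mean

# Invented commute times in minutes -- purely illustrative, not data from
# the BMJ article. One extreme day per bike is enough to swing the raw means.
steel  = [55, 56, 57, 58, 59, 85]    # one awful headwind day
carbon = [54, 55, 56, 57, 58, 96]    # one puncture

def trimmed_mean(times, k=1):
    """Mean after dropping the k most extreme values from each end."""
    return mean(sorted(times)[k:len(times) - k])

# Raw means: the heavy steel bike looks quicker.
print(mean(steel), mean(carbon))
# Trimmed means: the ranking reverses and carbon comes out ahead.
print(trimmed_mean(steel), trimmed_mean(carbon))
```

The same kind of reversal is what removing the four most extreme times does to the trial above.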

However, whatever the failings in the design of the trial, there is no doubting that in real world conditions it is difficult to demonstrate a substantial advantage in riding a light bike. This is because, even on a hilly route through the Derbyshire dales, bike weight is only one factor that determines the transformation of rider power into forward momentum. For a start, bike weight is usually a fraction of that of its rider, both of which have to be dragged up those hills. And while weight is important, overcoming drag is also a major sink of expended energy, a fact that any rider who has cycled into a headwind can attest to.
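That intuition can be made concrete with a standard cycling power model. The sketch below is a rough illustration only: the rider weight, speed, gradient, drag area and rolling resistance are all assumed values, not measurements from the trial:

```python
G = 9.81    # gravitational acceleration, m/s^2
RHO = 1.2   # air density, kg/m^3

def power_needed(speed, rider_kg, bike_kg, gradient, cda=0.4, crr=0.005):
    """Watts required to hold `speed` (m/s) on a slope of `gradient` (rise/run)."""
    mass = rider_kg + bike_kg
    climbing = mass * G * gradient * speed   # lifting rider plus bike
    rolling = mass * G * crr * speed         # rolling resistance
    drag = 0.5 * RHO * cda * speed ** 3      # aerodynamic drag
    return climbing + rolling + drag

# A 75 kg rider holding 18 km/h (5 m/s) on a 5% Derbyshire climb:
steel_watts = power_needed(5.0, rider_kg=75, bike_kg=13.5, gradient=0.05)
carbon_watts = power_needed(5.0, rider_kg=75, bike_kg=9.0, gradient=0.05)
print(f"steel: {steel_watts:.0f} W, carbon: {carbon_watts:.0f} W")
```

With these assumptions, shaving 4.5 kg off the bike saves under 5% of the required power, because the rider's own weight and the drag term dwarf the difference.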

Dr Groves concludes his article by suggesting that perhaps we should all save our money and stick to cheap steel bikes because they are just as fast. Does the man have no soul? Since when do issues of mere efficiency come into the choice of bike? Has he never felt the thrill of hefting a bike in one hand to find that it barely weighs anything at all? Has he never lusted after a bike just for its sheer aesthetic perfection? Has he never been tempted by the prospect of owning the same machine that was ridden to victory on one of the Grand Tours? It may well owe nothing to logic, it may well be a triumph of the seduction of marketing, but many bike purchases, especially those made by the infamous MAMILS, are driven by the desires of the heart and not the reasoning of the head.

In any case, if Dr Groves really is that worried about cycling efficiency, he should work on his flexibility. A helpful mathematician from Montreal, writing on the BMJ website in response to Dr Groves’ article, has calculated that a 20% decrease in ‘frontal area’ would save Dr Groves 6 minutes and 36 seconds on his daily commute. Time to stretch that lower back, old boy, and ride in those drops. ‘Tis what the pros do, after all.
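The mathematician's figure is easy to sanity-check. On the flat, at constant power, aerodynamic drag dominates and speed scales roughly as the inverse cube root of frontal area. A back-of-the-envelope sketch, where the 100-minute round trip is my assumption rather than a figure from the correspondence:

```python
# Drag-dominated riding: power ~ area * speed^3, so at constant power
# speed scales as (1 / area)^(1/3).
commute_minutes = 100.0     # assumed duration of the 27-mile round trip
area_reduction = 0.20       # the 20% smaller 'frontal area'

speed_ratio = (1.0 / (1.0 - area_reduction)) ** (1.0 / 3.0)
saved_minutes = commute_minutes * (1.0 - 1.0 / speed_ratio)
print(f"time saved: {saved_minutes:.1f} minutes")
```

That lands within a minute of the 6 minutes 36 seconds quoted – close enough for an argument about riding in the drops.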

Sunday 24 October 2010

Cancer: an old disease, a new disease or something in between?

This is the title of an article published in Nature Reviews Cancer. It is an “opinion piece” – a review that puts forward a particular viewpoint, backed up by evidence from the scientific literature. The aim is to be provocative, to stimulate discussion among the relevant community of experts. Sometimes an exchange of published letters to the journal might result. All very civilized, and the sort of exchange of views that keeps thinking sharp and prevents fields from stagnating. All very academic.

This particular article stimulated far more discussion than the authors, Rosalie David (Professor and Director of the KNH Centre for Biomedical Egyptology at the University of Manchester, UK) and Michael Zimmerman (an anthropologist at the University of Pennsylvania, USA), could ever have imagined. I wonder what these two senior and well-respected academics thought as their academic article turned into lurid headlines sent screaming across the world’s media. “Cancer is purely man-made, say scientists…”. And then the backlash: “Claims that cancer is only a ‘modern, man-made disease’ are false and misleading” huffed Cancer Research UK. “Cancer is not a disease of the modern world” puffed New Scientist.

One almost feels sorry for David and Zimmerman. What their article actually says is that there is a paucity of evidence for the widespread occurrence of cancer in antiquity. This is based on a variety of sources from a lack of descriptions of cancer in ancient art and literature to a lack of evident tumours in mummified remains. At the end of their article they make the following cautious conclusion: “Despite the fact that other explanations, such as inadequate techniques of disease diagnosis, cannot be ruled out, the rarity of malignancies in antiquity is strongly suggested by the available palaeopathological and literary evidence. This might be related to the prevalence of carcinogens in modern societies”. It is this last sentence that has been seized upon and inflated to mean that modern life has caused cancer.

But David and Zimmerman are not entirely innocent. For a start, they contributed to a highly inflammatory press release published on the University of Manchester's website. “Scientists suggest that cancer is man-made” was the slightly less cautious headline. One can almost hear news-hounds around the world sharpening their pencils. And then, perhaps carried away with their moment in the spotlight, they contributed statements to the press release that at best are misleading and at worst are downright wrong. “There is nothing in the natural environment that can cause cancer” trumpeted Professor David, “So it has to be a man-made disease, down to pollution and changes to our diet and lifestyle.” Er, UV radiation? Radon? Viruses and bacteria causing cervical cancer, stomach cancer etc.? Professor Zimmerman joined in the fun: “The virtual absence of malignancies in mummies must be interpreted as indicating their rarity in antiquity, indicating that cancer causing factors are limited to societies affected by modern industrialization”. This neatly overlooks the fact that human lifespan in antiquity was much shorter than today, and that age is a major factor in the prevalence of cancer. It is also at odds with the rather more balanced discussion of the age factor in their paper. Naturally, the critics jumped on these errors and used them to discredit the article.

It is a shame, because they may well be on to something. Unfortunately, they have picked the wrong modern risk factors. Environmental carcinogens caused by industrialization are not the problem here. As Cancer Research UK says: “The evidence that pollution and industrialization have a widespread role in UK cancer rates is weak”. On the other hand, there is a wealth of evidence suggesting that lifestyle factors – smoking, booze, poor diet, lack of exercise – are major risk factors in a large number of cancers. The ‘Western’ lifestyle is just plain unhealthy.

So while age is certainly a factor, could the rarity of cancer in ancient civilizations also be put down to an avoidance of the worst of modern Western excesses? And what if, in the absence of those lifestyle excesses, cancer is no longer an inevitable consequence of age? Surprising as this may seem, it appears that neither ageing nor cancer is an inevitable consequence of long life. Take plants, for example. Despite being exposed to the full range of industrial carcinogens and pollutants, plants neither suffer from cancer (apart from the specific case of crown gall, a disease caused by a bacterial pathogen) nor age. Amazingly, even the most venerable of long-lived trees are as hale and hearty as they were as mere saplings (Penuelas & Munne-Bosch, 2010), still capable of producing new cells at an undiminished rate and apparently resistant to the accumulation of mistakes and errors that bring about our slow decline. It seems that it is not ageing that kills perennial plants, but changes in environment or physical damage. So, if ageing is not inevitable in biological organisms, then might it be possible that we humans could learn the trick? Is there hope for those who yearn for immortality after all? Well, it would take more than a change of lifestyle, but even that would be a start.

Saturday 2 October 2010

It’s not ADHD Sir, it’s in my genes….

Another headline (Daily Telegraph, Friday 1st October 2010), another human genome versus disease study. And a very similar story to the genetics of myopia (see my previous post at ill-communications.blogspot.com). Some serious science (published in The Lancet) looking at DNA variations in groups of individuals with a disease, in this case the psychological syndrome attention deficit hyperactivity disorder. Some ill-advised press releases and comments: "Study is the first to find direct evidence that ADHD is a genetic disorder" (Lancet press release) and "Now we can say with confidence that ADHD is a genetic disease and that the brains of children with this condition develop differently to those of other children" (Prof Anita Thapar, the lead author of the Lancet paper). Lots of media hoo-ha. See the excellent blog by the BBC’s medical correspondent, Fergus Walsh, for a summary of the main issues that got discussed.

Just like myopia, ADHD is a ‘complex’ condition caused by a whole variety of factors. These may include genetic risk factors but they also include environmental risk factors: smoking during pregnancy, pre-natal stress, and the usual social problems linked to child behavioural problems such as abuse, marital breakdown and poverty. And just like myopia, it appears that the environmental factors dwarf the genetic. In the Lancet paper, it is reported that 14% of children with ADHD had large variations in their DNA that were present in only 7% of children without ADHD. Or to put it another way, only about 1 in every 7 children with ADHD had such a genetic variant. Moreover, the particular type of genetic variation present, known as ‘copy number variations’ – deletions or duplications of large chunks of DNA – do not resolve neatly down to this gene or that. In fact 57 different variations were found in the group of 366 children with ADHD. It is difficult to imagine, even in the science fiction world of routine genome tweaking, a treatment that will correct this.
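The arithmetic behind that ‘1 in 7’ is worth making explicit. The percentages and the cohort size of 366 are the figures quoted above from the Lancet paper:

```python
children_with_adhd = 366
rate_adhd = 0.14       # fraction carrying a large copy number variation
rate_control = 0.07    # same variations in children without ADHD

carriers = children_with_adhd * rate_adhd
print(f"about {carriers:.0f} of {children_with_adhd} children with ADHD "
      f"carry such a variant -- roughly 1 in {1 / rate_adhd:.0f}")
print(f"the other {children_with_adhd - carriers:.0f} have ADHD without it,")
print(f"while {rate_control:.0%} of children without ADHD carry one anyway")
```

Whatever these variants are doing, they are plainly neither necessary nor sufficient for the condition.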

So perhaps it is time the scientists got smart? As the debate about the amount of UK public money spent on scientific research reaches its zenith (see http://www.guardian.co.uk/science), is it really worth spending serious amounts of public money characterising the minute genetic risk factors of complex disorders like ADHD?

Friday 24 September 2010

Found: the gene that causes short sight: now experts say condition could be halted by eye drops

The above was a headline (13th September 2010) from the UK tabloid The Daily Mail, a well-known arbiter of rationalism and restraint. To be fair to the Mail, the story was picked up by most of the UK media and tagged with similarly lurid headlines. E.g. “Short-sightedness gene discovery could consign glasses to history” from the Telegraph. And it was not just the UK press. The story had gone global: “Rogue gene causes short-sightedness” (Times of India), “Gene for nearsightedness found; treatment could eliminate need for eyeglasses, contact lenses” (New York Daily News), “Australian discovery of myopia gene link” (Sydney Morning Herald).

The headlines, I am afraid, are as usual rather far from the truth. Indeed, if you read further into these articles you eventually get to the quotes from the poor scientists involved, caught between a desire to publicise their research and the need to issue a plaintive bleat for the facts.

The story originates with two papers published in Nature Genetics:

Hysi PG, Young TL, Mackey DA et al. A genome-wide association study for myopia and refractive error identifies a susceptibility locus at 15q25. Nature Genetics, 12 September 2010.

Solouki AM, Verhoeven VJM, van Duijn CM et al. A genome-wide association study identifies a susceptibility locus for refractive errors and myopia at 15q14. Nature Genetics, 12 September 2010.

Both studies used DNA microarray technology to identify variations in DNA sequences amongst thousands of individuals. The idea is simple. Each cohort consists of individuals who are short sighted, individuals who are long sighted and individuals who have no sight defects. Thousands of positions of known DNA sequence variability, known as single nucleotide polymorphisms – that is, a change at a single letter of the DNA sequence – across the whole genome of each individual were analysed. Statistical analysis was then used to identify DNA variants that are strongly associated with defective eyesight. The two studies identified different DNA variants, but in both cases the variations were close to genes that are known to be expressed strongly in eye tissues and, in one case, shown to be necessary for normal lens formation in mouse eyes. Hence the headlines compelling us to believe that ‘the gene’ for short-sightedness has been discovered.

Even the most cursory reading of the above paragraph should reveal the fallacy of these headlines. The two studies identified different variants. So already, we know that there is more than one DNA variant involved in eyesight defects. In fact, variations near three different genes were identified. The second, more fundamental issue is that variations in these genes do not cause sight defects in all individuals. In fact the effect is surprisingly small. For example, individuals with the variant rs8027411 were only 1.16 times more likely to be short sighted than individuals without it. So even if gene therapy were routine and it were possible to administer a magic eye drop to fix the ‘bad’ DNA variant (conservative estimates reckon it will be at least ten years before such a treatment is possible), the patients’ risk of developing eye problems would fall by only around 14% (from 1.16 times the baseline risk back to baseline).
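To see how little a risk ratio of 1.16 actually buys, here is a toy calculation; the 30% baseline risk is an assumed illustrative figure, not a number from either paper:

```python
baseline_risk = 0.30   # assumed risk of myopia without the variant
risk_ratio = 1.16      # reported ratio for carriers of rs8027411

carrier_risk = baseline_risk * risk_ratio
print(f"non-carrier: {baseline_risk:.1%}, carrier: {carrier_risk:.1%}, "
      f"an absolute difference of {carrier_risk - baseline_risk:.1%}")
```

A few percentage points of absolute risk – hardly the stuff of consigning glasses to history.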

The problem is that the genetic component of vision defects seems to be swamped by environmental effects. In modern society short-sightedness is on the rise. In some parts of Asia, the incidence of myopia has reached extraordinary levels. Nowhere is this more so than in Singapore, where 80% of 18-year-old army recruits are now short sighted (up from 25% just 30 years ago). And before you Westerners get complacent, this is not some sort of genetic predisposition in the Asian population. For example, Ian Morgan and Kathryn Rose of the Australian National University show in their paper “How genetic is school myopia?”, published in Progress in Retinal and Eye Research 24 (2005) 1–38, that 70% of men of Indian origin living in Singapore are short sighted, even though the incidence of short sightedness in India itself is only 10%.

There is little doubt as to the cause: too much time spent focused on close objects, with the computer screen probably the biggest evil. Light reaching the eye from a near source has to be bent more to bring it into focus on the retina. The eye compensates by growing longer, so that the muscles of the lens have to work less hard. The problem then comes when you look up across the room and try to bring something further away into focus: those more parallel waves of light fall into focus in front of the retina of the elongated eye. You are now short sighted. In countries like Singapore, a particularly reading-intensive school programme is thought to be behind the high incidence of short sightedness, the still-developing eyes of children being the most likely to grow longer.
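The optics can be sketched with the thin-lens equation 1/f = 1/d_o + 1/d_i. Treating the eye as a single thin lens with an assumed 17 mm focal length is a crude simplification of real eye anatomy, but it shows why near work pushes the focal plane backwards:

```python
FOCAL_MM = 17.0    # assumed focal length of the relaxed eye, in mm

def image_distance_mm(object_mm):
    """Where the image falls: 1/f = 1/d_o + 1/d_i  =>  d_i = 1/(1/f - 1/d_o)."""
    return 1.0 / (1.0 / FOCAL_MM - 1.0 / object_mm)

print(f"book at 40 cm  -> image at {image_distance_mm(400):.2f} mm")
print(f"distant object -> image at {image_distance_mm(1e9):.2f} mm")
```

A book at 40 cm focuses the best part of a millimetre behind where a distant object does; an eye that grows until near images land on the retina leaves distant ones focused in front of it.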

The solution appears to be simple. As Terri Young of Duke University Medical Center (a lead author of one of the Nature Genetics papers) said in a Duke University press release: “People need to go outside and look at the horizon”. Rather makes you wonder why they spent thousands of dollars doing all that genome analysis, doesn’t it?

Thursday 2 September 2010

Will the wheat genome really feed the world?

On the 27th of August, a team of British scientists led by Professor Neil Hall at the University of Liverpool announced that they had ‘decoded’ the wheat genome, and the story was picked up by the media accompanied by predictably rabid headlines: “Decoding of wheat genome will help address global food shortage”, “Wheat genome boost to food supply”, “Scientists crack wheat genome and offer yield potential”. You could be forgiven for thinking that the looming food crisis we keep hearing about is more or less solved. Now that the genome has been sequenced, it is only a matter of time before those clever plant scientists start churning out new varieties of wheat that are higher yielding, that can better tolerate drought and pests, that can grow on marginal soils. In short, all the challenges facing the agricultural industry as a result of the twin pressures of population growth and climate change will be successfully met thanks to modern plant genetics. The truth, alas, is rather different.

The first problem is that sequencing an organism’s genome does not automatically lead to an understanding of how that organism works, a fact of which pharmaceutical companies attempting to exploit the human genome have become only too aware. A genome is merely a set of instructions, a parts list. The challenge is to understand how that parts list fits together. Imagine a car engine disassembled, all the parts neatly laid out on the floor. Would knowledge of all those parts naturally lead one to understand how an engine works? Not at all. It is only by seeing how all of those parts fit together and work as a system that the magic of internal combustion is revealed. And the internal combustion engine is ridiculously simple compared to biological organisms, whose parts lists number in the tens of thousands.

Moreover, all this rather assumes that the function of each part is known, which remains far from the case. Take the model plant species Arabidopsis thaliana. This is the plant scientists’ equivalent of the laboratory mouse. Its genome was sequenced some ten years ago and this unassuming weed has been the subject of intense research ever since. And yet, despite a huge international research effort costing billions of pounds, the function of around 30% of the genes in the Arabidopsis genome remains unknown (incidentally, this is more or less the same figure as when the genome was initially sequenced, but that is another story). The clues to high yield, drought resistance, pest resistance and so on, could lie in those mysterious genes whose purpose remains completely obscure.

The wheat breeders are right to be excited about the prospect of a sequenced genome, because it will be a significant tool for breeding. The process of breeding is essentially unchanged since the dawn of agriculture: two related plants are cross-fertilised and, if the resultant hybrid shows an improvement in some desirable trait, it is selected for future use. The only difference now is that rather than selecting for visible traits such as grain size, modern breeders can select for the transfer of specific pieces of DNA. These pieces of DNA are identified by the presence of marker sequences – small differences in DNA sequence between the two parents that can be easily identified in the lab, telling the breeder from which of the two parents the DNA between the markers is derived. The breeders are excited because the complete genome sequence will allow them to identify many more of these markers, allowing them to breed across specific pieces of DNA with much greater precision.
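The marker idea is simple enough to show in a few lines. A toy sketch with invented sequences (real markers come from thousands of positions across whole chromosomes, not a 14-letter string):

```python
parent_a = "ATGCGTACGTTAGC"
parent_b = "ATGCGAACGTGAGC"

# Marker positions: single-letter differences between the two parents.
markers = [i for i in range(len(parent_a)) if parent_a[i] != parent_b[i]]

def origin(hybrid, pos):
    """Which parent contributed the hybrid's base at a marker position?"""
    if hybrid[pos] == parent_a[pos]:
        return "A"
    if hybrid[pos] == parent_b[pos]:
        return "B"
    return "?"

# A hybrid whose first half came from parent A and second half from parent B:
hybrid = "ATGCGTACGTGAGC"
print("markers at positions:", markers)
print("parental origin at each marker:", [origin(hybrid, p) for p in markers])
```

The more markers you can find, the more finely you can tell where one parent's DNA ends and the other's begins.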
But the headlines are regrettably premature. 

First of all, the wheat sequence released by Professor Hall and his colleagues is unordered. To sequence a genome, you chop it up into small bits and read each small fragment. Assembling all those fragments into the correct order is a mammoth task, without which the sequence data is essentially meaningless. It is a bit like taking the scissors to War and Peace and trying to work out the story from fragments of sentences and paragraphs. Already, there are moves afoot to calm the hyperbole, with the International Wheat Genome Sequencing Consortium (www.wheatgenome.org) issuing a press release disagreeing with the claims made in relation to the British scientists’ first draft sequence.
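The scissors-and-novel analogy maps directly onto a toy assembler. A minimal greedy sketch of overlap assembly (real assemblers face millions of reads, sequencing errors and a genome riddled with repeats, none of which this toy attempts):

```python
def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(fragments):
    """Greedily merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        n, a, b = max(((overlap(a, b), a, b)
                       for a in frags for b in frags if a is not b),
                      key=lambda t: t[0])
        frags.remove(a)
        frags.remove(b)
        frags.append(a + b[n:])    # stitch the two reads at their overlap
    return frags[0]

pieces = ["all work and no ", "nd no play makes ", "makes jack a dull boy"]
print(assemble(pieces))
```

With three clean, overlapping fragments the sentence falls straight out; wheat's enormous, highly repetitive genome is why the real job is so hard.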

In addition, those DNA markers that the breeders so desperately crave cannot be identified from a single genome sequence. The markers are differences in sequence, meaning that you would need the sequence of more than one species of wheat to identify them. This is perhaps a minor quibble. Sequencing DNA is now quick and relatively cheap (and becoming quicker and cheaper all the time), so sequences of other wheat species are surely round the corner. But the final and biggest problem is to know which bits of DNA to breed across. Which genes are responsible for controlling a complex trait such as drought tolerance? How does one track the many hundreds of genes that might collectively impart greater resistance to pathogens? Which genes are linked to increased yield? Geneticists have been hunting for the answers to these questions for decades. There is no doubt that genome sequence information will be an incredibly useful tool in their search for answers. But it could still be decades more before the wheat genome is truly ‘decoded’ and those answers are found.