How reliable is "science"

A-Team
Hall Monitor
Posts: 571
Joined: Sat Nov 26, 2011 9:51 pm
Location: Berkeley, CA

Re: How reliable is "science"

Post by A-Team » Sat Apr 26, 2014 11:53 am

Peer review: a flawed process at the heart of science and journals

What is Peer review?

My point is that peer review is impossible to define in operational terms (an operational definition is one whereby if 50 of us looked at the same process we could all agree most of the time whether or not it was peer review). Peer review is thus like poetry, love, or justice. But it is something to do with a grant application or a paper being scrutinized by a third party—who is neither the author nor the person making a judgement on whether a grant should be given or a paper published. But who is a peer? Somebody doing exactly the same kind of research (in which case he or she is probably a direct competitor)? Somebody in the same discipline? Somebody who is an expert on methodology? And what is review? Somebody saying 'The paper looks all right to me', which is sadly what peer review sometimes seems to be. Or somebody poring over the paper, asking for raw data, repeating analyses, checking all the references, and making detailed suggestions for improvement? Such a review is vanishingly rare.

What is clear is that the forms of peer review are protean. Probably the systems of every journal and every grant-giving body are different in at least some detail; and some systems are very different. There may even be some journals using the following classic system. The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little higher than you'd expect by chance.1

That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked 'publish' and 'reject'. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal composed only of papers that had failed peer review and see if anybody noticed. I wrote back 'How do you know I haven't already done it?'

Does Peer Review 'work' and what is it for?
But does peer review 'work' at all? A systematic review of all the available evidence on peer review concluded that 'the practice of peer review is based on faith in its effects, rather than on facts'.2 But the answer to the question of whether peer review works depends on another question: 'What is peer review for?'

One answer is that it is a method to select the best grant applications for funding and the best papers to publish in a journal. It is hard to test this aim because there is no agreed definition of what constitutes a good paper or a good research proposal. And what is peer review to be tested against? Chance? Or a much simpler process? Stephen Lock, when editor of the BMJ, conducted a study in which he alone decided which of a consecutive series of papers submitted to the journal he would publish. He then let the papers go through the usual process. There was little difference between the papers he chose and those selected after the full process of peer review.1 This small study suggests that perhaps you do not need an elaborate process. Maybe a lone editor, thoroughly familiar with what the journal wants and knowledgeable about research methods, would be enough. But it would be a bold journal that stepped aside from the sacred path of peer review.

Another answer to the question of what is peer review for is that it is to improve the quality of papers published or research proposals that are funded. The systematic review found little evidence to support this, but again such studies are hampered by the lack of an agreed definition of a good study or a good research proposal.

Peer review might also be useful for detecting errors or fraud. At the BMJ we did several studies where we inserted major errors into papers that we then sent to many reviewers.3,4 Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter. Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust. A major question, which I will return to, is whether peer review and journals should cease to work on trust.

The Defects of Peer Review

So we have little evidence on the effectiveness of peer review, but we have considerable evidence on its defects. In addition to being poor at detecting gross defects and almost useless for detecting fraud it is slow, expensive, profligate of academic time, highly subjective, something of a lottery, prone to bias, and easily abused.

Slow and expensive

Many journals, even in the age of the internet, take more than a year to review and publish a paper. It is hard to get good data on the cost of peer review, particularly because reviewers are often not paid (the same, come to that, is true of many editors). Yet there is a substantial 'opportunity cost', as economists call it, in that the time spent reviewing could be spent doing something more productive—like original research. I estimate that the average cost of peer review per paper for the BMJ (remembering that the journal rejected 60% without external review) was of the order of £100, whereas the cost of a paper that made it right through the system was closer to £1000.

The cost of peer review has become important because of the open access movement, which hopes to make research freely available to everybody. With the current publishing model peer review is usually 'free' to authors, and publishers make their money by charging institutions to access the material. One open access model is that authors will pay for peer review and the cost of posting their article on a website. So those offering or proposing this system have had to come up with a figure—which is currently between $500 and $2500 per article. Those promoting the open access system calculate that at the moment the academic community pays about $5000 for access to a peer reviewed paper. (The $5000 is obviously paying for much more than peer review: it includes other editorial costs, distribution costs—expensive with paper—and a big chunk of profit for the publisher.) So there may be substantial financial gains to be had by academics if the model for publishing science changes.

There is an obvious irony in people charging for a process that is not proved to be effective, but that is how much the scientific community values its faith in peer review.


Subjective and inconsistent

People have a great many fantasies about peer review, and one of the most powerful is that it is a highly objective, reliable, and consistent process. I regularly received letters from authors who were upset that the BMJ rejected their paper and then published what they thought to be a much inferior paper on the same subject. Always they saw something underhand. They found it hard to accept that peer review is a subjective and, therefore, inconsistent process. But it is probably unreasonable to expect it to be objective and consistent. If I asked people to rank painters like Titian, Tintoretto, Bellini, Carpaccio, and Veronese, I would never expect them to come up with the same order. A scientific study submitted to a medical journal may not be as complex a work as a Tintoretto altarpiece, but it is complex. Inevitably people will take different views on its strengths, weaknesses, and importance.

So, the evidence is that if reviewers are asked to give an opinion on whether or not a paper should be published, they agree only slightly more often than would be expected by chance. (I am conscious that this evidence conflicts with Stephen Lock's study showing that he alone and the whole BMJ peer review process tended to reach the same decisions on which papers should be published. The explanation may be that, as the editor who had designed the BMJ process and appointed its editors and reviewers, Lock had fashioned them in his image, so it is not surprising that they made similar decisions.)

Sometimes the inconsistency can be laughable. Here is an example of two reviewers commenting on the same paper.

Reviewer A: 'I found this paper an extremely muddled paper with a large number of deficits.'

Reviewer B: 'It is written in a clear style and would be understood by any reader.'

This—perhaps inevitable—inconsistency can make peer review something of a lottery. You submit a study to a journal. It enters a system that is effectively a black box, and then a more or less sensible answer comes out at the other end. The black box is like the roulette wheel, and the prizes and the losses can be big. For an academic, publication in a major journal like Nature or Cell is to win the jackpot.


Bias

The evidence on whether there is bias in peer review against certain sorts of authors is conflicting, but there is strong evidence of bias against women in the process of awarding grants.5 The most famous piece of evidence on bias against authors comes from a study by DP Peters and SJ Ceci.6 They took 12 studies from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions, and changed the authors' names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realize that they had already published the paper, and eight of the remaining nine were rejected—not because of lack of originality but because of poor quality. Peters and Ceci concluded that this was evidence of bias against authors from less prestigious institutions.

This is known as the Matthew effect: 'To those who have, shall be given; to those who have not shall be taken away even the little that they have'. I remember feeling the effect strongly when, as a young editor, I had to consider a paper submitted to the BMJ by Karl Popper.7 I was unimpressed and thought we should reject the paper. But we could not. The power of the name was too strong. So we published, and time has shown we were right to do so. The paper argued that we should pay much more attention to error in medicine, about 20 years before many papers appeared arguing the same.

The editorial peer review process has been strongly biased against 'negative studies', i.e. studies that find an intervention does not work. It is also clear that authors often do not even bother to write up such studies. This matters because it biases the information base of medicine. It is easy to see why journals would be biased against negative studies. Journalistic values come into play. Who wants to read that a new treatment does not work? That's boring.

We became very conscious of this bias at the BMJ; we always tried to concentrate not on the results of a study we were considering but on the question it was asking. If the question is important and the answer valid, then it should not matter whether the answer is positive or negative. I fear, however, that bias is not so easily abolished and persists.

The Lancet has tried to get round the problem by agreeing to consider the protocols (plans) for studies yet to be done.8 If it thinks the protocol sound and if the protocol is followed, the Lancet will publish the final results regardless of whether they are positive or negative. Such a system also has the advantage of stopping resources being spent on poor studies. The main disadvantage is that it increases the sum of peer reviewing—because most protocols will need to be reviewed in order to get funding to perform the study.

Abuse of peer review

There are several ways to abuse the process of peer review. You can steal ideas and present them as your own, or produce an unjustly harsh review to block or at least slow down the publication of a competitor's ideas. Both have happened. Drummond Rennie tells the story of a paper he sent, when deputy editor of the New England Journal of Medicine, for review to Vijay Soman.9 Having produced a critical review of the paper, Soman copied some of its paragraphs into a manuscript of his own and submitted it to another journal, the American Journal of Medicine. That journal, by coincidence, sent it for review to the boss of the author of the plagiarized paper. She realized that she had been plagiarized and objected strongly. She threatened to denounce Soman but was advised against it. Eventually, however, Soman was discovered to have invented data and patients, and he left the country. Rennie learnt a lesson that he never subsequently forgot but which medical authorities seem reluctant to accept: those who behave dishonestly in one way are likely to do so in other ways as well.

Trust in Science and Peer Review
One difficult question is whether peer review should continue to operate on trust. Some have made small steps beyond into the world of audit. The Food and Drug Administration in the USA reserves the right to go and look at the records and raw data of those who produce studies that are used in applications for new drugs to receive licences. Sometimes it does so. Some journals, including the BMJ, make it a condition of submission that the editors can ask for the raw data behind a study. We did so once or twice, only to discover that reviewing raw data is difficult, expensive, and time consuming. I cannot see journals moving beyond trust in any major way unless the whole scientific enterprise moves in that direction.

So peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review. How odd that science should be rooted in belief.
I'm sick of training with strangers

The only guys who are good at "self-improvement" are the ones who don't do it by themselves

Dick Van Dyke
Hall Monitor
Posts: 1614
Joined: Thu Feb 16, 2012 11:59 pm

Re: How reliable is "science"

Post by Dick Van Dyke » Wed Apr 30, 2014 11:03 pm

U of Wisconsin to offer a post-doc in ‘feminist biology’ because science is sexist

The University of Wisconsin–Madison (UW) has announced it will offer a post-doctoral fellowship in “feminist biology” because biological science is rife with sexism and must be changed to reflect feminist thinking.

The focus of the program will be to “uncover and reverse gender bias in biology” and to “develop new theory and methods in biology that reflect feminist approaches,” according to a news release posted by the college on April 17.

Hyde said such a program is necessary because sexism among male scientists sometimes makes them incapable of accurate research.

“All human beings have gender stereotypes in their brain,” she said. “Gender stereotypes are pervasive … people just don’t see things or don’t appreciate them or don’t process them when they don’t conform to stereotype notions,” Hyde told Campus Reform in an interview on Thursday.

The two-year program will focus on conducting scientific research from a feminist viewpoint, Janet Hyde, the director of the Campus Center for Research on Gender and Women, said in an interview with Campus Reform on Thursday.

“It means being able to detect gender bias in previous research and ... figuring out ways to move forward in research and theory that removes the gender bias,” she said.

For example, Hyde said, scientists had once had an inaccurate understanding of the sexual behaviors of primates — and that was clearly due to their sexism.

“Females of these species were portrayed as being passive or uninterested in sex, and that was reflective of gender stereotypes in humans,” she said. “The females solicited sex all the time.”

The first post-doctoral fellow, Caroline VanSickle, will begin the program in September.

“We aren’t doing science well if we ignore the ideas and research of people who aren’t male, white, straight, or rich,” VanSickle said in an email to Campus Reform. “Feminist science seeks to improve our understanding of the world by including people with different viewpoints. A more inclusive science means an opportunity to make new discoveries.”
coffee's for closers.

attention hole
Jedi Bonersaber
Posts: 192
Joined: Tue Aug 06, 2013 7:54 pm
Location: Pittsburgh, PA

Re: How reliable is "science"

Post by attention hole » Sun May 04, 2014 9:24 am


Ohh god. "feminist science" :nah:

Sounds like Lysenkoism, where communist Russia created a branch of science that went along with socialism. Any idea in science that went against those ideals was crushed and made to disappear.
well maybe you should stop doing it YOUR WAY and try doing it MY WAY for a change. -The Professor

Lazy Artist
Small boy from Nigeria
Posts: 5
Joined: Mon Mar 04, 2013 4:22 pm
Location: Northwest Florida

Re: How reliable is "science"

Post by Lazy Artist » Mon Jun 16, 2014 7:42 pm

"Whoever rebukes a man will afterward find more favor
than he who flatters with his tongue."
Proverbs 28:23

Info
Dean of Beatdowns
Posts: 10516
Joined: Sat May 15, 2010 10:34 am

Re: How reliable is "science"

Post by Info » Thu Jun 19, 2014 10:21 am

... boardroom/
social interaction is an interruption.

shape or be shaped.

Info
Dean of Beatdowns
Posts: 10516
Joined: Sat May 15, 2010 10:34 am

Re: How reliable is "science"

Post by Info » Thu Aug 07, 2014 8:34 am

Reddit's Favorite Scientist Just Got Banned for Cheating the Site
July 31, 2014

Turns out Reddit's favorite scientist and second most popular user on the site wasn't quite as popular as everyone thought he was. Ben Eisenkop, better known as Unidan, was banned by the site last night after the administrators there discovered he was faking votes to make sure his posts got more attention, using fake accounts to downvote his enemies, and upvoting his own rebuttals.

Unidan became known as the "excited biologist" for regularly explaining natural phenomena, for being hugely active in the /r/AskScience subreddit, and for generally being a knowledgeable guy. He was also five other guys, apparently: it turns out he was using at least five accounts to destroy his enemies and to artificially pad his own comments and posts.

Because of Reddit's childish democratic system, which relies on popularity to decide the truth, this creates a huge problem. Usually, you can tell within a couple minutes whether a post will get popular or will get buried: five early, quick upvotes is enough to push public opinion (and the post's visibility) in the right direction. Meanwhile, a couple downvotes soon after something is posted can essentially kill a post before it gets started at all.

"He was caught using a number of alternate accounts to downvote people he was arguing with, upvote his own submissions and comments, and downvote submissions made around the same time he posted his own so that he got even more of an artificial popularity boost," a Reddit administrator, Cupcake1713, wrote on the site. "It was some pretty blatant vote manipulation, which is against our site rules."

Unidan initially tried to bullshit everyone shortly after he was caught, downplaying his unethical behavior. But of course he was caught a second time and had little choice but to come clean (at least concerning the allegations that we know of).

"He plays it off as not that big of a deal, but, as I said, when something is new, those early votes are hugely important. In essence, Unidan was destroying posts that had a chance of being more popular than his, and was boosting his own—that's a 12 vote swing, which is absolutely integral to getting attention on posts."

"You would be surprised at how effective 5 upvotes of your own stuff and 5 downvotes of someone else's stuff is," Cupcake1713 wrote. "Early voting like that really does affect the perception of other people for better or for worse."

Reddit has long since passed the point where it's merely a social networking site—it quite literally has the ability to make or break the commercial viability of news sites. It drives many millions of page views every day, and the traffic bump a top Reddit post gives a site is insane. Gaming the system, like Unidan did, is tempting for a lot of people, and a lot more common than you'd think.
Moral of the story: 'science' is a tool. Tools aren't the problem. It's the people wielding them you have to watch out for. Be wary whenever some fucktard tells you: "SCIENCE says..." Science doesn't say dick. Those people are just trying to prop up their argument with an appeal to authority in lieu of taking accountability for their own work. :naughty:
social interaction is an interruption.

shape or be shaped.

Info
Dean of Beatdowns
Posts: 10516
Joined: Sat May 15, 2010 10:34 am

Re: How reliable is "science"

Post by Info » Sat Dec 06, 2014 1:01 pm

How Academia and Publishing are Destroying Scientific Innovation: A Conversation with Sydney Brenner
Elizabeth Dzeng, Feb 24th

I recently had the privilege of speaking with Professor Sydney Brenner, a professor of genetic medicine at the University of Cambridge and Nobel Laureate in Physiology or Medicine in 2002. My original intention was to ask him about Professor Frederick Sanger, the two-time Nobel Prize winner famous for his discovery of the structure of proteins and his development of DNA sequencing methods, who passed away in November. I wanted to do the classic tribute by exploring his scientific contributions and getting a first-hand account of what it was like to work with him at Cambridge's Medical Research Council (MRC) Laboratory of Molecular Biology (LMB) and at King's College, where they were both fellows. What transpired instead was a fascinating account of the LMB's quest to unlock the genetic code and a critical commentary on why our current scientific research environment makes this kind of breakthrough unlikely today.

It is difficult to exaggerate the significance of Professor Brenner and his colleagues' contributions to biology. Brenner won the Nobel Prize for establishing Caenorhabditis elegans, a type of roundworm, as the model organism for cellular and developmental biological research, which led to discoveries in organ development and programmed cell death. He made his breakthroughs at the LMB, where, beginning in the 1950s, an extraordinary number of successive innovations elucidated our understanding of the genetic code. This code is the set of rules by which cells in our body translate information stored in our DNA into proteins, vital molecules important to the structure and functioning of cells. It was here that James Watson and Francis Crick discovered the double-helical structure of DNA. Brenner was one of the first scientists to see this ground-breaking model, driving from Oxford, where he was working at the time in the Department of Chemistry, to Cambridge to witness this breakthrough. This young group of scientists, considered renegades at the time, made a series of successive revolutionary discoveries that ultimately led to the creation of a new field called molecular biology.

To begin our interview, I asked Professor Brenner to speak about Professor Sanger and what led him to his Nobel Prize winning discoveries.

Sydney Brenner: Fred realized very early on that if we could sequence DNA, we would have direct contact with the genes. The problem was that you couldn’t get hold of genes in any way. You couldn’t purify what was a gene. That is why right from the start in 1954, we decided we would do this by using Fred’s method of sequencing proteins, which he had achieved [proteins are derived from the information held in DNA]. You have to realise it was only on a small scale. I think there were only forty-five amino acids [the building blocks of proteins] that were in insulin. We thought even scaling that up for proteins would be difficult. But finally DNA sequencing was invented. Then it became clear that we could directly approach the gene, and it produced a completely new period in science.

He was interested in the method and interested in getting the methods to work. I was really clear in my own mind that what he did in DNA sequencing, even at the time, would cause a revolution in the subject, which it did. And of course we immediately, as fast as possible, began to use these methods in our own research.

ED: This foundational research ushered in a new era of biological science. It has formed the basis of nearly all subsequent discoveries in the field, from understanding the mechanisms of diseases, to the development of new drugs for diseases such as cancer. Imagining the creative energy that drove these discoveries was truly inspirational. I asked Professor Brenner what it felt like to be part of this scientific adventure.

SB: I think it’s really hard to communicate that because I lived through the entire period from its very beginning, and it took on different forms as matters progressed. So it was, of course, wonderful. That’s what I tell students. The way to succeed is to get born at the right time and in the right place. If you can do that then you are bound to succeed. You have to be receptive and have some talent as well.

ED: Today, the structure of DNA and how genetic information is translated into proteins are established scientific canon. Brenner joked that he “knew that molecular biology was doomed to success when [he] heard two students speaking in a bus once and asking whether they would get the genetic code in the examination. It had become an academic subject.” But in the 1950s, the hypotheses generated at the LMB were dismissed as inconceivable nonsense.

SB: To have seen the development of a subject, which was looked upon with disdain by the establishment from the very start, actually become the basis of our whole approach to biology today. That is something that was worth living for.

I remember Francis Crick gave a lecture in 1958, in which he discussed the adapter hypothesis at the time. He proposed that there were twenty enzymes, which linked amino acids to twenty different molecules of RNA, which we call adapters. It was these adapters that lined up the amino acids. The adapter hypothesis was conceived I think as early as 1954 and of course it was to explain these two languages: DNA, the language of information, and proteins, the language of work.

Of course that was a paradox, because how did you get one without the other? That was solved by discovering that a molecule from RNA could actually have function. So this information on RNA, which happened much later really, solved that problem as far as origins were concerned.

ED: (Professor Brenner was far too modest here, as it was he who discovered RNA’s critical role in this translation from gene to protein.)

SB: So he [Crick] gave the lecture and biochemists stood up in the audience and said this is completely ridiculous, because if there were twenty enzymes, we biochemists would have already discovered them. To them, the fact that they still hadn't went to show that this was nonsense. Little did the man know that at that very moment scientists were in the process of finding the very first of these enzymes, which today we know are the enzymes that combine amino acids with transfer RNA. And so you really had to say that the message kept its purity all the way through.

What people don’t realise is that at the beginning, it was just a handful of people who saw the light, if I can put it that way. So it was like belonging to an evangelical sect, because there were so few of us, and all the others sort of thought that there was something wrong with us.

They weren’t willing to believe. Of course they just said, well, what you’re trying to do is impossible. That’s what they said about crystallography of large molecules. They just said it’s hopeless. It’s a hopeless task. And so what we were trying to do with the chemistry of proteins and nucleic acids looked hopeless for a long time. Partly because they didn’t understand how they were built, which I think we molecular biologists had the first insight into, and partly because they just thought they were amorphous blobs and would never be able to be analysed.

I remember when going to London to talk at meetings, people used to ask me what am I going to do in London, and I used to tell them I’m going to preach to the heathens. We viewed most of everybody else as not doing the right science. Like one says, the young Turks will become old Greeks. That’s the trouble with life. I think molecular biology was marvellous because every time you thought it was over and it was just going to be boring, something new happened. It was happening every day.

So I don’t know if you can ride on the crest of a wave; you can ride on it, I believe, forever. I think that being in science is the most incredible experience to have, and I now spend quite a lot of my time trying to help the younger people in science to enjoy it and not to feel that they are part of some gigantic machine, which a lot of people feel today.

ED: I asked him what inspired them to maintain their faith and pursue these revolutionary ideas in the face of such doubt and opposition.

SB: Once you saw the light you were just certain that you had to be right, that it was the right way to do it and the right answer. And of course our faith, if you like, has been borne out.

I think it would have been difficult to keep going without the strong support we had from the Medical Research Council. I think they took a big gamble when they founded that little unit in the Cavendish. I think all the early people they had were amazing. There were amazing personalities amongst them.

This was not your usual university department, but a rather flamboyant and very exceptional group that was meant to get together. An important thing for us was that with the changes in America then, from the late fifties almost to the present day, there was an enormous stream of talent and American postdoctoral fellows that came to our lab to work with us. But the important thing was that they went back. Many of them are now leaders of American molecular biology, who are alumni of the old MRC.

ED: The 1950s to 1960s at the LMB was a renaissance of biological discovery, when a group of young, intrepid scientists made fundamental advances that overturned conventional thinking. The atmosphere and camaraderie reminded me of another esteemed group of friends at King's College – the Bloomsbury Group, whose members included Virginia Woolf, John Maynard Keynes, E.M. Forster, and many others. Coming from diverse intellectual backgrounds, these friends shared ideas and attitudes, which inspired their writing and research. Perhaps there was something about the nature of the Cambridge college system that allowed for such revolutionary creativity?

SB: In most places in the world, you live your social life and your ordinary life in the lab. You don’t know anybody else. Sometimes you don’t even know other people in the same building, these things become so large.

The wonderful thing about the college system is that it’s broken up again into a whole different unit. And in these, you can meet and talk to, and be influenced by and influence people, not only from other scientific disciplines, but from other disciplines. So for me, and I think for many others as well, that was a really important part of intellectual life. That’s why I think people in the college have to work to keep that going.

Cambridge is still unique in that you can get a PhD in a field in which you have no undergraduate training. So I think that structure in Cambridge really needs to be retained, although I see so often that rules are being invented all the time. In America you’ve got to have credits from a large number of courses before you can do a PhD. That’s very good for training a very good average scientific professional. But that training doesn’t allow people the kind of room to expand their own creativity. But expanding your own creativity doesn’t suit everybody. For the exceptional students, the ones who can and probably will make a mark, they will still need institutions free from regulation.

ED: I was excited to hear that we had a mutual appreciation of the college system, and its ability to inspire interdisciplinary work and research. Brenner himself was a biochemist also trained in medicine, and Sanger was a chemist who was more interested in chemistry than biology.

SB: I’m not sure whether Fred was really interested in the biological problems, but I think the methods he developed, he was interested in achieving the possibility of finding out the chemistry of all these important molecules from the very earliest.

ED: Professor Brenner noted that these scientific discoveries required a new way of approaching old problems, which resist traditional disciplinary thinking.

SB: The thing is to have no discipline at all. Biology got its main success by the importation of physicists that came into the field not knowing any biology and I think today that’s very important.

I strongly believe that the only way to encourage innovation is to give it to the young. The young have a great advantage in that they are ignorant. Because I think ignorance in science is very important. If you’re like me and you know too much you can’t try new things. I always work in fields of which I’m totally ignorant.

ED: But he felt that young people today face immense challenges as well, which hinder their ability to creatively innovate.

SB: Today the Americans have developed a new culture in science based on the slavery of graduate students. Now the graduate student at an American institution is afraid. He just performs. He’s got to perform. The post-doc is an indentured labourer. We now have labs that don’t work in the same way as the early labs, where people were independent, where they could have their own ideas and could pursue them.

The most important thing today is for young people to take responsibility, to actually know how to formulate an idea and how to work on it. Not to buy into the so-called apprenticeship. I think you can only foster that by having sort of deviant studies. That is, you go on and do something really different. Then I think you will be able to foster it.

But today there is no way to do this without money. That’s the difficulty. In order to do science you have to have it supported. The supporters now, the bureaucrats of science, do not wish to take any risks. So in order to get it supported, they want to know from the start that it will work. This means you have to have preliminary information, which means that you are bound to follow the straight and narrow.

There’s no exploration any more except in a very few places. You know like someone going off to study Neanderthal bones. Can you see this happening anywhere else? No, you see, because he would need to do something that’s important to advance the aims of the people who fund science.

I think I’ve often divided people into two classes: Catholics and Methodists. Catholics are people who sit on committees and devise huge schemes in order to try to change things, but nothing’s happened. Nothing happens because the committee is a regression to the mean, and the mean is mediocre. Now what you’ve got to do is good works in your own parish. That’s a Methodist.

ED: His faith in young, naïve (in the most positive sense) scientists is so strong that he has dedicated his later career to fostering their talent against these negative forces.

SB: I am fortunate enough to be able to do this because in Singapore I actually have started two labs and am about to start a third, which are only for young people. These are young Singaporeans who have all been sent abroad to get their PhDs at places like Cambridge, Stanford, and Berkeley. They return, and rather than work five years as a post-doc for some other person, I’ve got a lab where they can work for themselves. They’re not working for me and I’ve told them that.

But what is interesting is that very few accept that challenge, providing what I think is a good standard deviation from the mean. Exceptional people, the ones who have the initiative, have gone out and got their own funding. I think these are clearly going to be the winners. The eldest is thirty-two.

They can have some money, and of course they’ve got to accept the responsibility of execution. I help them in the sense that I oblige them and help them find things, and I can also guide them and so on. We discuss things a lot because I’ve never believed in these group meetings, which seems to be the bane of American life; the head of the lab trying to find out what’s going on in his lab. Instead, I work with people one on one, like the Cambridge tutorial. Now we just have seminars and group meetings and so on.

So I think you’ve got to try to do something like that for the young people and if you can then I think you will create. That’s the way to change the future. Because if these people are successful then they will be running science in twenty years’ time.

ED: Our discussion made me think about what we consider markers of success today. It reminded me of a paragraph in Professor Brenner’s tribute to Professor Sanger in Science:

“A Fred Sanger would not survive today’s world of science. With continuous reporting and appraisals, some committee would note that he published little of import between insulin in 1952 and his first paper on RNA sequencing in 1967 with another long gap until DNA sequencing in 1977. He would be labelled as unproductive, and his modest personal support would be denied. We no longer have a culture that allows individuals to embark on long-term—and what would be considered today extremely risky—projects.”

I found this particularly striking given that another recent Nobel prize winner, Peter Higgs, who identified the particle that bears his name, the Higgs boson, similarly remarked in an interview with the Guardian that he doubts a similar breakthrough could be achieved in today’s academic culture, because of the expectations on academics to collaborate and keep churning out papers. He said: “It’s difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964.”

It is alarming that so many Nobel Prize recipients have lamented that they would not have survived in the current academic environment. What are the implications of this for the discovery of future scientific paradigm shifts, and for scientific inquiry in general? I asked Professor Brenner to elaborate.

SB: He wouldn’t have survived. Even God wouldn’t get a grant today because somebody on the committee would say, oh those were very interesting experiments (creating the universe), but they’ve never been repeated. And then someone else would say, yes and he did it a long time ago, what’s he done recently? And a third would say, to top it all, he published it all in an un-refereed journal (The Bible).

So you know we now have these performance criteria, which I think are just ridiculous in many ways. But of course this money has to be apportioned, and our administrators love having numbers like impact factors or scores. Singapore is full of them too. Everybody has what are called key performance indicators. But everybody has them. You have to justify them.

I think one of the big things we had in the old LMB, which I don’t think is the case now, was that we never let the committee assess individuals. We never let them; the individuals were our responsibility. We asked them to review the work of the group as a whole. Because if they went down to individuals, they would say, this man is unproductive. He hasn’t published anything for the last five years. So you’ve got to have institutions that can not only allow this, but also protect the people who are engaged in very long-term and, to the funders, extremely risky work.

I have sometimes given a lecture in America called “The Casino Fund”. In the Casino Fund, every organisation that gives money to science gives 1% of that to the Casino Fund and writes it off. So now who runs the Casino Fund? You give it to me. You give it to people like me, to successful gamblers. People who have done all this who can have different ideas about projects and people, and you let us allocate it.

You should hear the uproar. No sooner did I sit down than all the business people stood up and said, how can we ensure payback on our investment? My answer was, okay, make it 0.1%. But nobody wants to accept the risk. Of course we would love it if we were to put it to work. We’d love it for nothing. They won’t even allow 1%. And of course all the academics say we’ve got to have peer review. But I don’t believe in peer review, because I think it’s very distorted and, as I’ve said, it’s simply a regression to the mean.

I think peer review is hindering science. In fact, I think it has become a completely corrupt system. It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists. There are universities in America, and I’ve heard this from many committees, that say we won’t consider people’s publications in low impact factor journals.

Now I mean, people are trying to do something, but I think it’s not publish or perish, it’s publish in the okay places [or perish]. And this has assembled a most ridiculous group of people. I wrote a column for many years in the nineties, in a journal called Current Biology. In one article, “Hard Cases”, I campaigned against this [culture] because I think it is not only bad, it’s corrupt. In other words it puts the judgment in the hands of people who really have no reason to exercise judgment at all. And that’s all been done in the aid of commerce, because they are now giant organisations making money out of it.

ED: Subscriptions to academic journals typically cost a British university between £4-6 million a year. In this time of austerity where university staff face deep salary cuts and redundancies, and adjunct faculty are forced to live on food stamps, do we have the resources to pour millions of dollars into the coffers of publishing giants? Shouldn’t these public monies be put to better use, funding important research and paying researchers liveable wages? To add insult to injury, many academics are forced to relinquish ownership of their work to publishers.

SB: I think there was a time, and I’m trying to trace the history, when the rights to publish, the copyright, were owned jointly by the authors and the journal. Somehow that changed, and that’s why the journals now insist they will not publish your paper unless you sign that copyright over. It is never stated in the invitation, but that’s what you sell in order to publish. And everybody works for these journals for nothing. There’s no compensation. There’s nothing. They get everything free. They just have to employ a lot of failed scientists, editors who are just like the people at Homeland Security, little power grabbers in their own sphere.

If you send a PDF of your own paper to a friend, then you are committing an infringement. Of course they can’t police it, and many of my colleagues just slap all their papers online. I think you’re only allowed to make a few copies for your own purposes. It seems to me to be absolutely criminal. When I write for these papers, I don’t give them the copyright. I keep it myself. That’s another point about publishing: don’t sign any copyright agreement. That’s my advice. It’s now become such a giant operation that I think it is impossible to try to get control over it back again.

ED: It does seem nearly impossible to institute change to such powerful institutions. But academics have enthusiastically coordinated to strike in support of decent wages. Why not capitalise on this collective action and target the publication industry, a root cause of these financial woes? One can draw inspiration from efforts such as that of the entire editorial board of the journal Topology, who resigned in 2006 due to pricing policies of their publisher, Elsevier.

Professor Tim Gowers, a Cambridge mathematician and recipient of the Fields medal, announced in 2012, that he would not be submitting publications to nor peer reviewing for Elsevier, which publishes some of the world’s top journals in an array of fields including Cell and The Lancet. Thousands of other researchers have followed suit, pledging that they would not support Elsevier via an online initiative, the Cost of Knowledge. This “Academic Spring”, is gathering force, with open access publishing as its flagship call.

SB: Recently there has been an open access movement and it’s beginning to change. I think that even Nature, Science and Cell are going to have to begin to bow. I mean in America we’ve got old George Bush who made an executive order that everybody in America is entitled to read anything printed with federal funds, taxpayers’ money, so they have to allow access to this. But they don’t allow you access to the published paper. They allow you, I think, what looks like a proof, which you can then display.

ED: On board is the Wellcome Trust, one of the world’s largest funders of science, who announced last year that they would soon require that researchers ensure that their publications are freely available to the public within six months of publication. There have also been proposals to make grant renewals contingent upon open access publishing, as well as penalties on future grant applications for researchers who do not comply.

It is admirable that the Wellcome Trust has taken this stance, but can these sanctions be enforced without harming their researchers’ academic careers? Currently, only 55% of Wellcome funded researchers comply with open access publishing, a testament to the fact that there are stronger motivators at play that trump this moral high ground. For this to be successful, funders and universities will have to demonstrate collective leadership and commitment by judging research quality not by publication counts, but on individual merit.

Promotion and grant committees would need to clearly commit both on paper and in practice to these new standards. This is of course not easy. I suspect the reason impact factors and publication counts are the currency of academic achievement is because they are a quick and easy metric. Reading through papers and judging research by its merit would be a much more time and energy intensive process, something I anticipate would be incompatible with a busy academic’s schedule. But a failure to change the system has its consequences. Professor Brenner reflected on the disillusioning impact this reality has on young scientists’ goals and dreams.

SB: I think that this has now just become ridiculous, and it’s one of the contaminating things that young people in particular now have to contend with. I know of many places in which they say, I need this paper in Nature, or I need my paper in Science, because I’ve got to get a post-doc. But there is no judgment of its contribution as it is.

ED: Professor Brenner hit upon several hot topics amongst academics in all disciplines. When Randy Schekman won his Nobel Prize this year in Physiology or Medicine, he announced his boycott of “luxury” journals such as Nature, Science, and Cell, declaring that their distorting incentives “encouraged researchers to cut corners and pursue trendy fields of science instead of doing more important work.”

Because publications have become a proxy for research quality, publications in high impact factor journals are the metric used by grant and promotion committees to assess individual researchers. The problem is that impact factor, which is based on the number of times papers are cited, does not necessarily correlate with good science. To maximize impact factor, journal editors seek out sensational papers, which boldly challenge norms or explore trendy topics, and ignore less spectacular, but equally important things like replication studies or negative results. As a consequence, academics are incentivised to produce research that caters to these demands.
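For reference, the impact factor mentioned here has a simple arithmetic definition: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items published in those two years. A toy calculation, with entirely hypothetical counts:

```python
# Toy two-year journal impact factor. All counts are hypothetical,
# chosen only to illustrate the arithmetic.
citations_2014_to_2012_2013 = 4200  # citations in 2014 to papers from 2012-13
citable_items_2012_2013 = 140       # papers the journal published in 2012-13

impact_factor = citations_2014_to_2012_2013 / citable_items_2012_2013
print(impact_factor)  # 30.0
```

Note that a handful of heavily cited papers can dominate this average, which is one reason a journal-level number says little about any individual paper in it.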

Academics are slowly awakening to the fact that this dogged drive to publish rubbish has serious consequences on the quality of the science that they produce, which have far reaching consequences for public policy, costs, and human lives. One study found that only six out of 53 landmark studies in cancer research were replicable. In another study, researchers were only able to repeat a quarter of 67 influential papers in their field.

Only the most successful academics can afford to challenge these norms by boycotting high impact journals. Until we win our Nobel Prizes, or grant and promotion structures change, we are shackled to this “publish or perish” culture. But together with leaders in science and academia such as Professor Brenner, we can start to change the structure of academic research and the language we use to judge quality. As Brenner emphasised, it was the culture of the LMB and the scientific environment at the time that permitted him and his colleagues to uncover the genetic basis of life. His warning that scientists like Professor Sanger would not have survived today is a cautionary one, providing new urgency to the grievances we have against the unintended consequences of the demands required to achieve academic success.
There's more to it than that. It shows a bias. These fields are notorious for fraud.
In a survey of more than 2,000 psychologists, Leslie John, a consumer psychologist from Harvard Business School in Boston, Massachusetts, showed that more than 50% had waited to decide whether to collect more data until they had checked the significance of their results, thereby allowing them to hold out until positive results materialize. More than 40% had selectively reported studies that “worked.” On average, most respondents felt that these practices were defensible. “Many people continue to use these approaches because that is how they were taught,” says Brent Roberts, a psychologist at the University of Illinois at Urbana-Champaign.
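The "wait and check significance" practice the survey describes (optional stopping) can be demonstrated with a small simulation. In the sketch below, two groups are drawn from the *same* distribution, so every "significant" result is a false positive; peeking at the test after every batch and stopping at the first p < 0.05 pushes the false-positive rate well above the nominal 5%. The batch sizes and trial counts are illustrative choices, not figures from the survey:

```python
# Simulate optional stopping: peek at a two-group z-test after every batch
# of observations and stop as soon as p < 0.05. Both groups come from
# N(0, 1), so any "discovery" is a false positive.
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a z statistic (normal data, known sigma = 1)."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def peeking_trial(max_n=100, peek_every=10, alpha=0.05):
    """Return True if any interim peek yields a 'significant' result."""
    a, b = [], []
    for i in range(max_n):
        a.append(random.gauss(0, 1))
        b.append(random.gauss(0, 1))
        if (i + 1) % peek_every == 0:
            n = len(a)
            diff = sum(a) / n - sum(b) / n
            se = math.sqrt(2.0 / n)  # std. error of a mean difference, sigma=1
            if two_sided_p(diff / se) < alpha:
                return True          # stop collecting and "report" the effect
    return False

random.seed(42)
trials = 2000
false_positive_rate = sum(peeking_trial() for _ in range(trials)) / trials
print(f"false-positive rate with peeking: {false_positive_rate:.1%}")
```

With ten peeks per experiment, the realized error rate lands several times above the advertised 5%, which is exactly why stopping rules are supposed to be fixed in advance.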
At the end of November [2011], the universities unveiled their final report at a joint news conference: Stapel had committed fraud in at least 55 of his papers, as well as in 10 Ph.D. dissertations written by his students. The students were not culpable, even though their work was now tarnished. The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.

The adjective “sloppy” seems charitable. Several psychologists I spoke to admitted that each of these more common practices was as deliberate as any of Stapel’s wholesale fabrications. Each was a choice made by the scientist every time he or she came to a fork in the road of experimental research — one way pointing to the truth, however dull and unsatisfying, and the other beckoning the researcher toward a rosier and more notable result that could be patently false or only partly true. What may be most troubling about the research culture the committees describe in their report are the plentiful opportunities and incentives for fraud. “The cookie jar was on the table without a lid” is how Stapel put it to me once. Those who suspect a colleague of fraud may be inclined to keep mum because of the potential costs of whistle-blowing.

The key to why Stapel got away with his fabrications for so long lies in his keen understanding of the sociology of his field. “I didn’t do strange stuff, I never said let’s do an experiment to show that the earth is flat,” he said. “I always checked — this may be by a cunning manipulative mind — that the experiment was reasonable, that it followed from the research that had come before, that it was just this extra step that everybody was waiting for.” He always read the research literature extensively to generate his hypotheses. “So that it was believable and could be argued that this was the only logical thing you would find,” he said. “Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”

Fraud like Stapel’s — brazen and careless in hindsight — might represent a lesser threat to the integrity of science than the massaging of data and selective reporting of experiments. The young professor who backed the two student whistle-blowers told me that tweaking results — like stopping data collection once the results confirm a hypothesis — is a common practice. “I could certainly see that if you do it in more subtle ways, it’s more difficult to detect,” Ap Dijksterhuis, one of the Netherlands’ best known psychologists, told me. He added that the field was making a sustained effort to remedy the problems that have been brought to light by Stapel’s fraud.
The peer review process itself is ridiculously corrupt.
social interaction is an interruption.

shape or be shaped.

Dean of Beatdowns
Posts: 10516
Joined: Sat May 15, 2010 10:34 am

Re: How reliable is "science"

Post by Info » Tue Jan 20, 2015 2:37 pm

Doctor behind vaccine-autism link loses license
May 24, 2010

It took nearly six months but the General Medical Council (GMC) in the U.K. has pulled Dr. Andrew Wakefield’s license to practice medicine in the United Kingdom.

Wakefield is the researcher who nearly single-handedly fueled parental concerns about the link between vaccines and autism. In 1998, he published a paper in the medical journal Lancet describing eight children who showed signs of autism within days of being inoculated for measles, mumps and rubella. A gastroenterologist by training, Wakefield went on in further studies to suggest that the virus from the vaccine was leading to inflammation in the youngsters’ guts that then impeded normal brain development.

Further investigations by other researchers in the decade since have failed to confirm his claims, and in January, the GMC ruled that Wakefield had acted “dishonestly and irresponsibly” in conducting the experiments that led to the publication of the paper. According to the BBC, his alleged acts of misconduct included conducting those studies without the ethical approval of the hospital at which he practiced, and paying children at his son’s birthday party for blood samples. He also served as a paid consultant to attorneys of parents who believed their children had been harmed by vaccines.

In February, editors of the Lancet retracted Wakefield’s controversial paper, telling the Guardian “It was utterly clear, without any ambiguity at all, that the statements in the paper were utterly false.”

Defending his career on the Today show on Monday, Wakefield, now in the U.S., vowed to appeal the decision and maintained that “there are millions of children out there suffering, and the fact [is] that the vaccines cause autism.” Without a license to practice medicine, and with the growing evidence to the contrary, it’s going to be harder for him to prove that claim.

Re: How reliable is "science"

Post by Info » Wed Feb 04, 2015 12:44 pm

The Pseudo-Scientific Polygraph Has Been Lying for 80 Years
Eighty years ago, Leonarde Keeler's lie detector made its debut in court. Decades later, we're still paying the price for his con job.

Eighty years ago this week, inventor Leonarde Keeler proudly proclaimed his expert testimony before a Wisconsin jury to be “a signal victory for those who believe in scientific crime detection.” One of the creators of the modern-day polygraph, the man named after Leonardo da Vinci by his father in the hopes that he would do similarly great things, had just presented his findings in the case of Cecil Loniello and Tony Grignano, two young men on trial for the attempted murder of a police officer as they fled the scene of a robbery. The judge in the case had sought out the polygraph because of the technology’s showing at the 1933 World’s Fair police exhibit. Both sides had agreed to allow the test—the defendants saw little to lose and the prosecution was armed with only circumstantial evidence and two untrustworthy witnesses.

Keeler strapped his lie detector to each man’s chest and arm. When Loniello was asked whether he shot the sheriff, the needle recorded a violent fluctuation indicating a change in breathing, a sudden increase in blood pressure, and a rise in pulse. When asked whether he was driving the car, all systems were normal. The results told Keeler and his colleague, Fred Inbau, not only that the two men were both guilty and lying, but revealed each man’s role in the crime. Science, it seemed, had triumphed. It was the first time that a jury had been permitted to hear polygraph results as evidence and on the stand Keeler was measured in his statements: “I wouldn’t want to convict a man on the grounds of the records alone,” he told the judge. But outside the courthouse, Keeler beamed when the jury returned with a guilty verdict. “It means that the findings of the lie detector are as acceptable in court as fingerprint testimony,” he told the press.

Except that it didn’t. A federal appeals court had decided a decade earlier, in Frye v. United States (1923), that the lie detector hadn’t gained approval from the scientific community and was thus inadmissible in court. Apart from a few rare cases, the polygraph has been barred from federal and most state courts ever since. “The supplanting of the jury and its judgment are something judges have been very wary of,” said Ken Alder, professor of history at Northwestern University and author of The Lie Detectors: The History of an American Obsession.

As early as 1911, an article in The New York Times imagined this kind of world where a truth box—then called a psychometer—would altogether do away with detectives, attorneys, and juries. “The state will merely submit all suspects in a case to the tests of scientific instruments and as these instruments cannot be made to make mistakes nor tell lies, their evidence will be conclusive of guilt or innocence, and the court will deliver sentence accordingly.” Despite the wide acceptance of the polygraph today—in police investigations, to monitor people on probation, and by the government to screen potential employees—it may not be used as evidence. But the lie detector has found a more circuitous way into our legal system.

“The results of a lie detector are not admissible in court, but if you confess during the course of interrogation, that’s admissible,” said Alder. “The lie detector is essentially used in practice as a way to get people to confess to crimes.” Keeler’s polygraph was not the first to be conceived or created. Others were patented by Keeler’s rivals at the time, the most threatening of whom was legendary eccentric William Marston, a Harvard-trained psychologist who would go on to create the Wonder Woman comic (a superhero whose weapon happens to be the Lasso of Truth).

The exact origins of the polygraph go back even further. “The lie detector pioneers were very keen to stress that it was an invention, like the light bulb,” said Geoff Bunn, a professor of psychology at Manchester Metropolitan University and the author of The Truth Machine: A Social History of the Lie Detector. “In fact, the lie detector that we understand, the polygraph machine, with the three different measurement devices—those were all already in use by 19th-century criminologists. All the lie detector guys did was put them in the same box.”

That box—with its essential tech of sensors that record changes in breathing, blood pressure, and sweat—hasn’t changed much since then. And the problem, Bunn said, remains the same as well: It doesn’t work. “The basic idea that these [measurements] add up to a lie hasn’t panned out,” Bunn said.

But Keeler’s patented box—among the first designed expressly for police interrogation—did get results. According to unpublished survey data kept in archives, up to 60 percent of the criminals labeled deceptive after an examination with Keeler’s polygraph would confess to their crimes.

Keeler’s intentions, while self-serving (he was a publicity hound who even appeared as himself in the docu-noir film Call Northside 777), weren’t ignoble. He had been good friends with August Vollmer, Berkeley, California’s first police chief and a reformer who looked to the lie detector as an alternative to the brutal interrogation techniques of the time. Soon Keeler moved his operation to Chicago, home to the country’s first forensic lab and a notoriously brutal police force. Keeler’s colleague Fred Inbau later described the Chicago Police Department’s reaction to the polygraph like this: “Why use a polygraph? Beat the hell out of him. If he tells the truth, he’s guilty; if he doesn’t, he’s innocent.”

To be sure, the lie detector was a preferable innovation to the blackjacks and rubber hoses many officers employed at the time. But make no mistake: The truth box just turned intimidation from physical to psychological. According to Bunn, Keeler would say, “Let them stew overnight in the cell… We’ll hook them up to the sweat box in the morning by which time they’ll be so fearful of it, they will simply confess.”

“It does tend to make people frightened, and it does make people confess, even though it cannot detect a lie,” Bunn said. Even the lie detector’s harshest critics concede it can be a useful interrogation tool.

“There’s a myth of the lie detector in American society where many people believe it works and that it has usefulness in some situations,” said George Maschke, co-founder of, a website dedicated to cheating and ultimately abolishing the lie detector. “For example if a criminal can be convinced by an examiner he’s been caught in a lie, he might give a confession that can be corroborated by other evidence like the location of a body or details about a murder weapon that only the murderer would know.”

But not all confessions, Maschke notes, are equal. Although the 2,500-member-strong American Polygraph Association claims professional examiners achieve over 90 percent accuracy, virtually every scientific association puts the rate much lower. In 2003, the National Academy of Sciences concluded that a century of research in psychology and physiology provides little support for the polygraph’s worth. The position of the American Psychological Association is that the lie detector is more the stuff of TV crime drama than scientific reality.

Statistics aren’t kept on the number of polygraphs administered in criminal investigations, and current data on the instrument’s reach is hard to come by. By the 1980s, an estimated one million polygraphs were given each year, over a third by private companies, most of which have since been barred from the practice by an act of Congress.

What’s clear is that misplaced trust in the lie detector has consequences. In some cases people have used widely known methods to cheat: countermeasures like taking drugs, or shocking their senses by biting their tongues or clenching their anuses. In 1987, Gary Ridgway passed a lie detector test by “just relaxing.” It wasn’t until 2001 that advances in DNA technology pointed to Ridgway as the serial murderer of 48 women.

Even more tragic are the innocent who have been wrongly convicted based on the tests. In October, Jeff Deskovic was awarded $40 million for his wrongful conviction in the rape and murder of a high school classmate. Deskovic was exonerated after serving 16 years, his guilt determined largely by the results of a five-hour polygraph exam in which, Deskovic says, the examiner called the then-16-year-old a murderer and convinced him that he had failed the polygraph, leaving him curled in the fetal position on the floor, where he gave a false confession.

Even the original polygraph’s most vocal advocates seemingly knew that the lie detector wasn’t strict science. In fact, Keeler would demonstrate the polygraph’s accuracy with a deceptive card trick. Keeler would instruct the subject to pick one of 10 playing cards and return it to the deck, which Keeler marked, just to be sure. The subject would be told to look at the cards individually and deny each was his. The lie would be detected and the correct card chosen, giving Keeler the subject’s physiological “lying response” but, more importantly, proving the polygraph’s magic.

“In order for it to work, you have to believe it’s going to work,” Northwestern’s Ken Alder told me. “It was a very clever way to trick people.”
social interaction is an interruption.

shape or be shaped.

User avatar
Dick Van Dyke
Hall Monitor
Posts: 1614
Joined: Thu Feb 16, 2012 11:59 pm

Re: How reliable is "science"

Post by Dick Van Dyke » Sun Feb 08, 2015 10:38 am

The fiddling with temperature data is the biggest science scandal ever
8 February 2015

When future generations look back on the global-warming scare of the past 30 years, nothing will shock them more than the extent to which the official temperature records – on which the entire panic ultimately rested – were systematically “adjusted” to show the Earth as having warmed much more than the actual data justified.

Two weeks ago, under the headline “How we are being tricked by flawed data on global warming”, I wrote about Paul Homewood, who, on his Notalotofpeopleknowthat blog, had checked the published temperature graphs for three weather stations in Paraguay against the temperatures that had originally been recorded. In each instance, the actual trend of 60 years of data had been dramatically reversed, so that a cooling trend was changed to one that showed a marked warming.

This was only the latest of many examples of a practice long recognised by expert observers around the world – one that raises an ever larger question mark over the entire official surface-temperature record.

Following my last article, Homewood checked a swathe of other South American weather stations around the original three. In each case he found the same suspicious one-way “adjustments”. First these were made by the US government’s Global Historical Climatology Network (GHCN). They were then amplified by two of the main official surface records, the Goddard Institute for Space Studies (Giss) and the National Climatic Data Center (NCDC), which use the warming trends to estimate temperatures across the vast regions of the Earth where no measurements are taken. Yet these are the very records on which scientists and politicians rely for their belief in “global warming”.

Homewood has now turned his attention to the weather stations across much of the Arctic, between Canada (51 degrees W) and the heart of Siberia (87 degrees E). Again, in nearly every case, the same one-way adjustments have been made, to show warming as much as 1 degree C higher than was indicated by the data actually recorded. This has surprised no one more than Traust Jonsson, who was long in charge of climate research for the Iceland met office (and with whom Homewood has been in touch). Jonsson was amazed to see how the new version completely “disappears” Iceland’s “sea ice years” around 1970, when a period of extreme cooling almost devastated his country’s economy.

One of the first examples of these “adjustments” was exposed in 2007 by the statistician Steve McIntyre, when he discovered a paper published in 1987 by James Hansen, the scientist (later turned fanatical climate activist) who for many years ran Giss. Hansen’s original graph showed temperatures in the Arctic as having been much higher around 1940 than at any time since. But as Homewood reveals in his blog post, “Temperature adjustments transform Arctic history”, Giss has turned this upside down. Arctic temperatures from that time have been lowered so much that they are now dwarfed by those of the past 20 years.

Homewood’s interest in the Arctic is partly because the “vanishing” of its polar ice (and the polar bears) has become such a poster-child for those trying to persuade us that we are threatened by runaway warming. But he chose that particular stretch of the Arctic because it is where ice is affected by warmer water brought in by cyclical shifts in a major Atlantic current; the current last peaked 75 years ago, just when Arctic ice retreated even further than it has done recently. The ice-melt is not caused by rising global temperatures at all.

Of much more serious significance, however, is the way this wholesale manipulation of the official temperature record – for reasons GHCN and Giss have never plausibly explained – has become the real elephant in the room of the greatest and most costly scare the world has known. This really does begin to look like one of the greatest scientific scandals of all time.
coffee's for closers.
